$1 \leftrightarrow 2$ Processes of a Sterile Neutrino Around Electroweak Scale in the Thermal Plasma
In this paper, we apply the Goldstone equivalence gauge to calculate the $1 \leftrightarrow 2$ processes of a sterile neutrino in the thermal plasma below the standard model (SM) critical temperature $T_c \approx 160 \text{ GeV}$. The sterile neutrino's mass is around the electroweak scale, $50 \text{ GeV} \leq m_N \leq 200 \text{ GeV}$, and the resulting thermally averaged effective width $\bar{\Gamma}_{\text{tot}}$ is continuous across the cross-over. We also apply our results to a preliminary calculation of leptogenesis.
I. INTRODUCTION
Sterile neutrinos interacting with the plasma background of the early universe offer a potential solution to several problems at the interface of cosmology and particle physics. A prominent example is leptogenesis [1]. The CP-violating effects in the sterile neutrino's interactions with the light leptons give rise to a lepton number asymmetry in the plasma, and a baryon number asymmetry accordingly appears through the sphaleron effects (for some early works, see [2][3][4][5][6], and see [7][8][9][10] for reviews). The sterile neutrino can also become a portal to the dark matter. As a variation of a secluded dark matter model, a "sterile-neutrino-philic dark matter" model [11][12][13][14][15][16][17][18] gives a different relic density result compared with the standard weakly interacting massive particle (WIMP) models [19]. In Ref. [20], we also studied a feebly interacting massive particle (FIMP) [21] version of this kind of model. Sometimes, sterile neutrinos themselves can become the dark matter candidate. In all these examples, a reliable calculation of the sterile neutrino's interaction with the thermal plasma is crucial for precise predictions of the related physical observables to be compared with the experimental data.
When $m_N \gg T_c \simeq 160$ GeV, where $m_N$ is the sterile neutrino mass and $T_c$ is the electroweak cross-over temperature [22], there are plenty of reliable calculations of the sterile neutrino's production in the literature [23][24][25][26][27][28][29][30][31]. Since the relevant temperature $T \sim m_N$ is well above the cross-over temperature, only the Higgs doublet and the active leptons participate in the $1 \leftrightarrow 2$ processes. The Higgs components receive a universal thermal mass correction, which is easy to calculate. For lighter sterile neutrinos, successful leptogenesis can also be achieved through resonant effects [32][33][34][35][36][37][38]. When $m_N \ll T_c$, at $T \sim m_N \ll T_c$ the thermal mass terms can be safely neglected, since the vacuum expectation value (vev) of the Higgs boson becomes fairly close to the zero-temperature value $\sim 246$ GeV, and the bosons' behaviours are similar to those at zero temperature [39].
In the literature, there seems to be a gap when $m_N \sim T_c$. In this range the calculation is plagued by the intricate thermal corrections to the gauge and Higgs sectors. In Ref. [40], the authors estimated the $U(1)_Y \times SU(2)_L$ gauge boson contributions by replacing them with Goldstone degrees of freedom artificially assigned a mass similar to that of the Higgs boson. We also applied this method in the corresponding calculations of our papers [12, 20].
Such an ansatz might be inspired by the famous "Goldstone equivalence theorem" at zero temperature, which requires more investigation in the thermal plasma case. A safe procedure is to return to the original form of the finite-temperature propagators and integrate over all the branch cuts and poles wherever they appear, as described in Refs. [41][42][43][44]. However, it is formidable to follow the procedures there, and the relationship between the Goldstone and gauge bosons becomes more obscure. Another fact is that the invariant squared mass of the sterile neutrino, denoted by $K^2$ in Refs. [41][42][43][44], was neglected around $T_c$ there, so their method is not suitable for our range of interest, $K^2 = m_N^2 \sim T_c^2$. In Ref. [45] we proposed a method to decompose the massive gauge boson propagators in the thermal plasma. Poles indicating the "transverse" and "longitudinal" degrees of freedom arise as usual, and a branch cut which closely resembles two massless poles was identified as the Goldstone boson's fragment. When $T > T_c$, this branch cut fragments into two actual poles corresponding to the Goldstone boson particles, and when $T = 0$, it completely disappears. At finite temperature, the longitudinal polarization is an intermediate state between the so-called "plasmon" and the Goldstone equivalent state. We made the analogy that the longitudinal polarization "spews out" a fraction of the Goldstone boson in the finite-temperature environment. This helps us include all the contributions from the transverse, longitudinal, Higgs and Goldstone degrees of freedom correctly, and helps us clarify the relationship between the Goldstone and the gauge bosons in the plasma.
In this paper, with the method we developed in Ref. [45], we calculate the sterile neutrino $1 \leftrightarrow 2$ processes near the electroweak cross-over temperature, $m_N \sim T \sim T_c$. We also roughly discuss the leptogenesis induced by these processes. A complete calculation of the sterile neutrino's interactions in the early universe should also include the more complicated $2 \leftrightarrow 2$ scattering processes. In many cases, when $T \gg m_N$ and the $l$-$H$-$N$ Yukawa coupling $y_N \gtrsim 10^{-8}$ is sufficiently large, thermal equilibrium of the sterile neutrino does not require a detailed calculation. When the temperature drops down to the $T \sim m_N$ scale, the out-of-equilibrium effects start to arise, and these $2 \leftrightarrow 2$ processes are usually suppressed rapidly due to an additional number density factor compared with the $1 \leftrightarrow 2$ processes. With these considerations, we leave the $2 \leftrightarrow 2$ processes to our future study and do not consider their contributions at this stage. For brevity and simplicity, we also do not consider the contributions resumming the interchange/emission of soft bosons [46][47][48] (sometimes called the LPM resummation) in this paper.
We enumerate the channels and list the basic formulas in Sec. II. Details on phase space and thermal integrals are presented in Sec. III. Numerical results and a preliminary calculation of leptogenesis are displayed in Sec. IV. We summarize this paper in Sec. V.
II. BASIC CONCEPTS AND CHANNEL ENUMERATION
The Lagrangian of the sterile neutrino is the standard one, where $H$ is the Higgs doublet, $L_i$, $i = 1, 2, 3$ are the lepton doublets of the three generations, and $N_j$ are the sterile neutrinos. $N_j$ can be either Majorana or (pseudo-)Dirac spinors, and the corresponding kinetic and mass terms $\mathcal{L}_{N\,\text{kin}} + \mathcal{L}_{N\,\text{mass}}$ differ by a factor of $\frac{1}{2}$. For simplicity, here we only study the case of one Dirac sterile neutrino whose interaction involves only one massless lepton; a general situation can be inferred from our results by simply multiplying some factors. Therefore, the Lagrangian we rely on is of the standard seesaw form $\mathcal{L} \supset \bar{N}(i \slashed{\partial} - m_N) N - (y_N \bar{L} \tilde{H} N + \text{h.c.})$, where $m_N$ is the mass of the sterile neutrino.
Above the standard model (SM) critical temperature of the cross-over, $T > T_c \approx 160$ GeV, the $1 \leftrightarrow 2$ processes of the sterile neutrino have nothing to do with the W/Z bosons; only the Higgs doublet, including the Goldstone components, participates in the couplings. The whole process is quite standard: the thermal effects correct the effective Higgs mass term, $\delta m^2_{H,\text{thermal}} = (g_1^2 + 3 g_2^2 + 4 y_t^2 + 8\lambda) \frac{T^2}{16}$, where $g_1$, $g_2$ are the electroweak gauge coupling constants, $y_t$ is the top Yukawa coupling constant, and $\lambda$ is the quartic Higgs coupling constant. Leptons also receive thermal mass corrections. In the thermal plasma, each pole in the leptonic propagators is split into two objects, a so-called "particle" and "hole". In Ref. [23], these two objects were combined into one single particle with a universal thermal mass correction to estimate the phase space. In this paper, we abandon this approximation and carefully sum over the contributions from each of these two degrees of freedom.
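For orientation, a minimal Python sketch of this thermal Higgs mass correction; the zero-temperature coupling values are illustrative inputs of ours, since the paper does not quote its renormalization scale:

```python
import math

# Representative SM couplings near the electroweak scale (assumed inputs;
# the paper does not specify its renormalization scheme or scale choice).
g1, g2 = 0.36, 0.65   # U(1)_Y and SU(2)_L gauge couplings
yt = 0.95             # top Yukawa coupling
lam = 0.13            # quartic Higgs coupling

def delta_mH2_thermal(T):
    """Thermal Higgs mass correction (g1^2 + 3 g2^2 + 4 yt^2 + 8 lam) T^2 / 16."""
    return (g1**2 + 3*g2**2 + 4*yt**2 + 8*lam) * T**2 / 16.0

T = 170.0  # GeV, above T_c ~ 160 GeV
print(f"delta m_H^2(T={T} GeV) = {delta_mH2_thermal(T):.1f} GeV^2")
```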
Below the critical temperature $T < T_c$, the vacuum expectation value (vev) $v(T)$ is estimated as a function of temperature, where $v_0 = 246$ GeV is its zero-temperature value. A nonzero vev opens the sterile neutrino's oscillation into a highly off-shell active neutrino, which then decays into a W/Z gauge boson plus a charged lepton/active neutrino. An on-shell W/Z boson can also decay into a pair of leptons, and the active neutrino product can oscillate into a sterile neutrino through the vev. In this paper, we rely on the Goldstone equivalence gauge [49] to calculate the sterile neutrino's production in the thermal plasma [45] below the critical temperature $T_c$. Within this framework, each Goldstone degree of freedom is split into two parts: one is hidden inside the extended polarization vector of a longitudinal vector boson, while the other behaves like a massless particle during the calculations and is treated independently as a Goldstone boson's fraction. We enumerate and include the contributions of all the gauge polarizations and the Goldstone boson fractions. In the appendix, we also show the equivalence between this gauge and the familiar $R_\xi$ gauge.
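To make the temperature dependence concrete, a small sketch of the vev-dependent boson masses below $T_c$. The mean-field interpolation $v(T) = v_0 \sqrt{1 - T^2/T_c^2}$ used here is our illustrative assumption, not the paper's estimate, and should be replaced by the latter where available:

```python
import math

v0, Tc = 246.0, 160.0    # GeV
g1, g2 = 0.36, 0.65      # assumed coupling values

def vev(T):
    """Illustrative mean-field interpolation of v(T) below T_c (an assumption)."""
    return v0 * math.sqrt(max(0.0, 1.0 - (T / Tc)**2))

def mW(T):
    """vev-dependent W mass, m_W(T) = g2 v(T) / 2."""
    return 0.5 * g2 * vev(T)

def mZ(T):
    """vev-dependent Z mass, m_Z(T) = sqrt(g1^2 + g2^2) v(T) / 2."""
    return 0.5 * math.sqrt(g1**2 + g2**2) * vev(T)

for T in (100.0, 140.0, 155.0):
    print(f"T={T:6.1f} GeV  v={vev(T):7.2f}  m_W={mW(T):6.2f}  m_Z={mZ(T):6.2f} GeV")
```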
In the following subsections we describe the details of each channel. Before doing so, we note that we ignore some of the sub-dominant tachyonic branch cuts in the bosonic propagators, as illustrated in our Ref. [45]; as in Ref. [23], the sub-dominant branch cuts in the leptonic propagators are also neglected.
A. W channels
The Feynman diagram of a sterile neutrino $N$ decaying into a $W^+$ boson and a charged lepton $l^-$ is illustrated in Fig. 1. Since we are discussing a Dirac $N$, one can reverse the arrows there to obtain the corresponding $\bar{N}$ decay diagram. We neglect the anti-sterile neutrino's decay in this paper since, once CP effects are neglected, the results are completely symmetric. The momentum flows are also defined in Fig. 1, relative to the plasma background reference, i.e., the plasma's four-vector velocity $u^\mu = (1, \vec{0})$. When, e.g., $p_1^0 < 0$, the same diagram can also be interpreted as a charged lepton's fusion with the sterile neutrino to generate a $W^+$ boson, which is the dual process of a $W^+$ decaying into an $N$, $l^+$ pair. This is the "inverse-decay" process of a $W^+$ boson, and we denote it by "ID" for abbreviation later. The thermal equilibrium condition guarantees the equality of the results from both the "decay" and "inverse-decay" aspects of a W boson. Therefore, Fig. 1 summarizes all the possible $1 \leftrightarrow 2$ processes of a (anti-)sterile neutrino.
The dispersion relation of a W boson is given by
$(p_2^0)^2 - |\vec{p}_2|^2 - m_W^2(T) - \Pi^W_{T,L}(p_2) = 0$ (5)
for the transverse and longitudinal polarizations respectively, where $\Pi^W_{T,L}$ are the standard hard-thermal-loop (HTL) gauge boson self-energies (6). The vev-dependent W boson mass is given by $m_W(T) = \frac{1}{2} g_2 v(T)$, where $g_2$ is the weak coupling constant, and the Debye thermal mass $m_{E2}$ entering (6) takes the standard SM form $m_{E2}^2 = \frac{11}{6} g_2^2 T^2$. Ignoring the lepton's vev-dependent mass, which is much smaller than the thermal mass term, the thermally corrected dispersion relation of the active lepton is given by (see page 140 in Ref. [50])
$p_1^0 \mp |\vec{p}_1| = \frac{m_f^2}{|\vec{p}_1|} \left[ \pm 1 + \frac{1}{2}\left(1 \mp \frac{p_1^0}{|\vec{p}_1|}\right) \ln\frac{p_1^0 + |\vec{p}_1|}{p_1^0 - |\vec{p}_1|} \right]$, (10)
where $m_f$ is the leptonic thermal mass parameter defined in (12). Generally there are four solutions to (10). When $p_1^2 > m_f^2$, the solution describes a "particle" for $p_1^0 > 0$ and an "anti-particle" for $p_1^0 < 0$; when $p_1^2 < m_f^2$, it indicates a "hole" for $p_1^0 > 0$ and an "anti-hole" for $p_1^0 < 0$. The energy and momentum conservation laws are given by
$p^0 = p_1^0 + p_2^0$, (13)
$|\vec{p}_2|^2 = |\vec{p}|^2 + |\vec{p}_1|^2 - 2 |\vec{p}| |\vec{p}_1| \cos\theta_p$, (14)
where $\theta_p$ is the angle between $\vec{p}$ and $\vec{p}_1$. The subscript "p" denotes the "plasma", meaning that this angle is measured in the plasma rest frame. Given the sterile neutrino's energy and momentum $p^0$, $\vec{p}$, and fixing $\theta_p$, there are four unknown parameters $p_1^0$, $p_2^0$, $|\vec{p}_1|$, $|\vec{p}_2|$ in just four equations (5), (10), (13), (14). Solving these equations may give several sets of solutions. If $p_1^0$ or $p_2^0$ is smaller than zero, the lepton or the W boson becomes an initial-state particle. We need to find all of the solutions and sum over all their contributions to the "interaction rate" $\gamma_N$.
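To make the "particle"/"hole" structure concrete, the following sketch numerically solves the textbook HTL lepton dispersion relation (10) on both branches; the bisection brackets and the value of $m_f$ are implementation assumptions:

```python
import math

def disp_residual(omega, k, mf2, branch):
    """Residual of the HTL fermion dispersion relation (textbook form):
    omega -/+ k = (mf2/k) * [ +/-1 + 0.5*(1 -/+ omega/k)*log((omega+k)/(omega-k)) ],
    branch=+1 for the 'particle', branch=-1 for the 'hole' (plasmino)."""
    L = 0.5 * math.log((omega + k) / (omega - k))
    return omega - branch * k - (mf2 / k) * (branch + (1 - branch * omega / k) * L)

def solve_branch(k, mf2, branch):
    """Bisection for omega > k on either branch (bracket choice is an assumption)."""
    lo = k * (1 + 1e-12)
    hi = k + 10.0 * math.sqrt(mf2) + 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if disp_residual(lo, k, mf2, branch) * disp_residual(mid, k, mf2, branch) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

mf2 = 10.0**2  # thermal lepton mass squared, GeV^2 (illustrative)
for k in (2.0, 5.0, 20.0):
    wp = solve_branch(k, mf2, +1)
    wh = solve_branch(k, mf2, -1)
    print(f"k={k:5.1f}  particle: w^2-k^2={wp*wp-k*k:7.2f}   hole: w^2-k^2={wh*wh-k*k:7.2f}")
```

The printed $\omega^2 - k^2$ values lie above $m_f^2$ on the particle branch and below it on the hole branch, matching the classification stated after (10).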
With the acquired $p_1$ and $p_2$, we can then calculate the amplitude. In the Goldstone equivalence gauge, the "polarization vector" of a gauge boson is extended to a five-component vector $\epsilon^{Wn}_{\pm,L\text{in}}(p_2) = \epsilon^{Wn*}_{\pm,L\text{out}}(p_2)$, $n = \mu, 4$, to include the Goldstone component ($n = 4$ denotes the Goldstone component); when contracting the indices, the metric tensor is extended accordingly. The transverse polarization is the same as in the $R_\xi$ gauge, with $\epsilon^{W4}_\pm(p_2) = \epsilon^{W0}_\pm(p_2) = 0$, and the longitudinal polarization vector (15) acquires a nonzero Goldstone component, where $n_2^\mu = (1, -\frac{\vec{p}_2}{|\vec{p}_2|})$, adopting the convention $(k^\mu) = (k^0, \vec{k})$ for any four-dimensional momentum $k$.
For the lepton spinors, we need to define the modified spinors $\tilde{u}_s(p_1)$, where for a "particle", i.e., $p_1^2 > m_f^2$, the "+" sign is adopted, and for a "hole", i.e., $p_1^2 < m_f^2$, the "$-$" sign is adopted. When $p_1^0 > 0$, a lepton (either a "particle" or a "hole") is created and a $\bar{u}_s(p_1)$ appears in the amplitude. When $p_1^0 < 0$, an anti-lepton (either an anti-"particle" or an anti-"hole") is destroyed and a $\bar{v}_s(-p_1)$ appears in the amplitude.
The amplitude of the gauge component, as depicted in the left panel of Fig. 1, is written down for the decay channel when $p_1^0 > 0$. Here $\Gamma^\mu(p, p_1)$ is the HTL correction to the gauge vertex, introduced to obtain a gauge invariant result; its definition is given in (A4), followed by its detailed evaluation in the appendix. If $p_1^0 < 0$, we only need to change $\bar{u}_s(p_1)$ into $\bar{v}_s(-p_1)$ for the W boson's inverse-decay channel. The Goldstone component of the amplitude, as depicted in the right panel of Fig. 1, is written down analogously; again, when $p_1^0 < 0$, $\bar{u}(p_1)$ needs to be replaced with $\bar{v}(-p_1)$. In the above equations, $P_{L,R} = \frac{1 \mp \gamma_5}{2}$, and $p_{lT}$ is defined accordingly. The complete amplitude takes the form of a sum over $n = 0, 1, 2, 3, 4$ and $t = \pm, L\text{out}$. The squared amplitude should also include the statistical factor and the "renormalization constant". The complete result is (22), where the $t = \pm, L$ indices are not summed by Einstein's convention, and the "renormalization factors" $Z_{W(t)}(p_2)$ and $Z_l(p_1)$ are defined subsequently.

B. Z/γ channels

Since the $W^3$ and $B$ bosons receive different thermal corrections, the mixing angle of the "on-shell" Z/γ bosons is disturbed. The mixing angles of the on-shell Z/γ bosons depend on their energy and momentum, so it is difficult to identify which is the Z or γ degree of freedom.
The vev-dependent mass matrix for the $B/W^3$ fields, i.e., the Z/γ particles, is as usual
$M_0^2 = \frac{v^2(T)}{4} \begin{pmatrix} g_1^2 & -g_1 g_2 \\ -g_1 g_2 & g_2^2 \end{pmatrix}$.
Thermal effects correct the $B$ and $W^3$ mass terms respectively, and therefore the thermal mass matrix is given by $M^2_{T,L}(p_2) = M_0^2 + \text{diag}[\Pi^B_{T,L}(p_2), \Pi^W_{T,L}(p_2)]$, where $\Pi^W_{T,L}(p_2)$ has already been given by (6), and $\Pi^B_{T,L}$ is obtained by replacing the $m_{E2}$ in (6) with $m_{E1}$. The dispersion relation of the mixed Z/γ is given by the "secular equation"
$\det\left[ \left( (p_2^0)^2 - |\vec{p}_2|^2 \right) I_{2\times 2} - M^2_{T,L}(p_2) \right] = 0$ (30)
for a transverse/longitudinal Z/γ vector boson, where $I_{2\times 2}$ is the $2 \times 2$ identity matrix. For a given $p_2$ that solves (30), the matrix $\left( (p_2^0)^2 - |\vec{p}_2|^2 \right) I_{2\times 2} - M^2_{T,L}(p_2)$ has a zero eigenvalue, and the corresponding eigenvector is denoted by $(x_1, x_2)^T$. In the zero-temperature case, $x^Z_1 = -\sin\theta_W$, $x^Z_2 = \cos\theta_W$ for the Z boson, and $x^\gamma_1 = \cos\theta_W$, $x^\gamma_2 = \sin\theta_W$ for the photon, where $\theta_W$ is the Weinberg angle. Since the neutrino does not interact with a pure photon, we can calculate the inner product $x \cdot x^Z = -x_1 \sin\theta_W + x_2 \cos\theta_W$ to extract the Z part of the "on-shell" mixed boson and calculate its interactions with the leptons. The dispersion relation of a lepton and the energy-momentum conservation laws are exactly the same as (10), (13), (14) in Sec. II A. Solving these equations together with (30), we acquire all the "on-shell" $p_1$ and $p_2$.
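The on-shell mixing extraction can be sketched as follows: build the 2×2 thermal mass matrix for a trial $p_2$, take the eigenvector whose eigenvalue matches $(p_2^0)^2 - |\vec{p}_2|^2$, and project it onto the Z direction. The self-energy inputs below are placeholders standing in for the full $\Pi^{B,W}_{T,L}$ of (6):

```python
import numpy as np

g1, g2 = 0.36, 0.65
thW = np.arctan(g1 / g2)   # Weinberg angle (assumed coupling values)

def mass_matrix(vT, PiB, PiW):
    """Thermal (B, W3) mass matrix: vev part plus diagonal self-energies.
    PiB, PiW are placeholders for the HTL self-energies Pi^B_{T,L}, Pi^W_{T,L}."""
    M0 = (vT**2 / 4.0) * np.array([[g1**2, -g1*g2],
                                   [-g1*g2, g2**2]])
    return M0 + np.diag([PiB, PiW])

def z_overlap(p0, pabs, vT, PiB, PiW):
    """Pick the eigenvector of M^2 whose eigenvalue is closest to p0^2 - |p|^2
    (i.e., the branch satisfying the secular equation (30) at this point),
    and return its overlap with the Z direction (-sin thW, cos thW)."""
    evals, evecs = np.linalg.eigh(mass_matrix(vT, PiB, PiW))
    i = np.argmin(np.abs(evals - (p0**2 - pabs**2)))
    x = evecs[:, i]
    return -x[0] * np.sin(thW) + x[1] * np.cos(thW)   # x . x_Z

# Illustrative numbers (GeV / GeV^2); not the paper's HTL values.
print(z_overlap(p0=95.0, pabs=30.0, vT=200.0, PiB=500.0, PiW=1500.0))
```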
The transverse polarization vectors $\epsilon^{Z/\gamma\, n}_\pm$ of a Z/γ boson are the same as the W boson's $\epsilon^{Wn}_\pm$, satisfying $p_{2\mu}\, \epsilon^{Z/\gamma\, \mu}_\pm = 0$, $\epsilon^{Z/\gamma\, 4}_\pm = 0$ and $p_{2i}\, \epsilon^{Z/\gamma\, i}_\pm = 0$. The longitudinal polarization vector is given by (31). Compared with (15), the extra $(-x_1 \sin\theta_W + x_2 \cos\theta_W)$ factor in the Goldstone component indicates that only the Z component of the vector boson has "eaten" some of the Goldstone boson; the photon part of this vector boson has not devoured any Goldstone fraction.
Then we are ready to write the amplitudes.
The total result for the squared amplitude takes a form analogous to (22), where the "renormalization constant" $Z_{Z/\gamma(T/L\text{out})}(p_2)$ is calculated accordingly.
C. Goldstone channels
Besides the Goldstone components in the $Z_L$ and $W_L$ polarization vectors, the Goldstone boson's fragments also contribute to the $1 \leftrightarrow 2$ rate. Strictly speaking, these remnants are no longer "particles", since they are branch cuts rather than poles. However, since their imaginary parts peak sharply at $p_2^0 = \pm |\vec{p}_2|$, we can approximate them as massless bosons. The corresponding Feynman diagrams are the same as the second panels of Figs. 1 and 2, with the only difference that the Goldstone components are no longer bound to the longitudinal polarizations of the W and Z bosons.
The dispersion relation of a "massless" Goldstone boson is simple, Other equations are the same as the previous subsections. After solving (10,13,14) with (37), we then write down the final result of the squared amplitude for the charged Goldstone channel, where Z G ± (p 2 ) is calculated and defined by and the final result for the neutral Goldstone channel, where Here m Z (T ) = √ g 2 1 +g 2 2 2 v(T ), and
D. Higgs channels
The Higgs channel is quite straightforward, since the Higgs boson only receives a simple mass correction from the thermal environment. Below $T_c$, $m_h(T) \propto v(T)$, so $m_h(T) = m_{h0}\, v(T)/v_0$, and above $T_c$, $m_h(T)$ is given by the thermal mass correction, where $m_{h0} = 125$ GeV. Therefore, the dispersion relation of a Higgs boson is simply
$(p_2^0)^2 = |\vec{p}_2|^2 + m_h^2(T)$. (45)
Again solving (10), (13), (14) together with (45) for the valid $p_1$ and $p_2$, we then write down the amplitude and the total result for the squared amplitude, in analogy with the previous channels.
III. PHASE SPACE AND THERMAL AVERAGE INTEGRATION
In the thermal background, Lorentz invariance is broken, so we cannot simply boost to the center-of-momentum reference frame to calculate the $1 \leftrightarrow 2$ processes of a sterile neutrino at rest. We can only rely on the definition of a width in an arbitrary reference frame, where $X = [W, (T, L\text{out})], [Z/\gamma, (T, L\text{out})], G^\pm, G^0, h$. Note that in the thermal plasma rest frame the system is still symmetric under rotations around the $\vec{p}$ axis, so the azimuthal angle $\varphi$ integration reduces to a $2\pi$ factor. To integrate out the $\delta$ function, we calculate the Jacobian factors: $\frac{\partial |\vec{p}_2|}{\partial |\vec{p}_1|}$ is extracted from the momentum conservation law (14), with the result $\frac{\partial |\vec{p}_2|}{\partial |\vec{p}_1|} = \frac{|\vec{p}_1| - |\vec{p}| \cos\theta_p}{|\vec{p}_2|}$, while $\frac{\partial p_1^0}{\partial |\vec{p}_1|}$ and $\frac{\partial p_2^0}{\partial |\vec{p}_2|}$ can be extracted from the corresponding dispersion relations (5), (30), (37), (45). Generally, if the dispersion relation of a momentum $p_Y$ is written as $F(p_Y^0, |\vec{p}_Y|) = 0$, then
$\frac{\partial p_Y^0}{\partial |\vec{p}_Y|} = -\frac{\partial F / \partial |\vec{p}_Y|}{\partial F / \partial p_Y^0}$. (51)
Therefore, (49) can be reduced to (52), and the thermal average integration (53) is then simple. This $\gamma_X$ will enter the Boltzmann equation.
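The implicit-function derivative (51) can be evaluated numerically for any dispersion relation; a minimal sketch (the finite-difference step size is an arbitrary choice):

```python
def dp0_dpabs(F, p0, pabs, h=1e-6):
    """dp0/d|p| = -(dF/d|p|) / (dF/dp0) along the dispersion curve F(p0, |p|) = 0,
    evaluated with central finite differences."""
    dF_dpabs = (F(p0, pabs + h) - F(p0, pabs - h)) / (2 * h)
    dF_dp0 = (F(p0 + h, pabs) - F(p0 - h, pabs)) / (2 * h)
    return -dF_dpabs / dF_dp0

# Example: a massive-boson dispersion F = p0^2 - |p|^2 - m^2, so dp0/d|p| = |p|/p0.
m = 80.0
F = lambda p0, pabs: p0**2 - pabs**2 - m**2
p_abs = 60.0
p0 = (p_abs**2 + m**2) ** 0.5
print(dp0_dpabs(F, p0, p_abs), p_abs / p0)  # the two numbers should agree
```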
Straightforwardly applying (52)-(53) poses a problem: for each $\theta_p$, there are sometimes multiple solutions for the $p_1^0$, $|\vec{p}_1|$, $p_2^0$, $|\vec{p}_2|$ values. One reason is that a particle decays in every direction in its center-of-momentum frame, and when boosted to the plasma reference frame, one angle can pick up multiple different momenta. To cure this problem, one can adjust the integration order and calculate in the (inverse-)decaying particle's rest frame.
For example, for the sterile neutrino's decay process, we rely on the N rest frame by boosting $p_1$, $p_2$ into $p_{1N}$, $p_{2N}$. We then use $p_{1,2N}$ as the input parameters to solve the various dispersion relations, and calculate the Jacobian and delta function factors in terms of the new $p_{1N}$, $p_{2N}$ parameters. Taking the x-axis along the $\vec{p}$ direction and, without loss of generality, letting $\vec{p}_1$ lie in the x-y plane, we have the boost relations (54)-(56), where $\beta = |\vec{p}|/p^0$, $\gamma = 1/\sqrt{1 - \beta^2}$. A tedious calculation finally gives the Jacobian (57)-(62), where $\frac{\partial |\vec{p}_i|}{\partial p_i^0}$ has already been calculated in (51). Then we can replace $d\theta_p$ with $d\theta_N \frac{d\theta_p}{d\theta_N}$ in (52) to calculate this integral.
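As an alternative to the closed-form Jacobian (57)-(62), the mapping $\theta_N \mapsto \theta_p$ and its derivative can also be obtained numerically from the boost itself; a sketch with illustrative kinematics:

```python
import math

def theta_p_of_theta_N(p0, pabs, E1N, p1N_abs, thetaN):
    """Boost (E1N, p1N) from the N rest frame back to the plasma frame
    (boost along x with velocity beta = |p|/p0) and return the plasma-frame
    angle theta_p between p1 and p."""
    beta = pabs / p0
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    p1x = gamma * (p1N_abs * math.cos(thetaN) + beta * E1N)  # boosted x component
    p1y = p1N_abs * math.sin(thetaN)                          # transverse, unchanged
    return math.atan2(p1y, p1x)

def jacobian_dthp_dthN(p0, pabs, E1N, p1N_abs, thetaN, h=1e-6):
    """d theta_p / d theta_N by a central finite difference."""
    f = lambda t: theta_p_of_theta_N(p0, pabs, E1N, p1N_abs, t)
    return (f(thetaN + h) - f(thetaN - h)) / (2 * h)

# Illustrative kinematics (GeV): a boosted N with a massless daughter of energy E1N.
print(jacobian_dthp_dthN(p0=250.0, pabs=150.0, E1N=40.0, p1N_abs=40.0, thetaN=1.0))
```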
Inverse-decay processes are similar. For example, to calculate the W boson's inverse-decay process $N l^+ \to W^+$, we adjust the integration order of (52)-(53) to integrate out the $d^3 p$ and $d^3 p_1$ phase space first and finally calculate the $d^3 p_2$ integration. We boost to the $W^+$'s rest frame to transfer to the $p_W$, $p_{1W}$ integration, replacing the corresponding indices in Eqs. (54)-(62) to calculate the analogous Jacobian and delta function factors.
With this method, all the 1 ↔ 2 channels can be computed.
Let us summarize the numerical algorithm. To calculate one channel, e.g., $N \leftrightarrow W l$, one needs to follow these steps:
• Fixing $p^0$, $\vec{p}$, and $\theta_N$, solve for $p^0_{1N}$, $p^0_{2N}$, $|\vec{p}_{1N}|$, $|\vec{p}_{2N}|$. The equations to be solved are (5), (10), (13), (14); they are defined in terms of the parameters $p_1^0$, $p_2^0$, $|\vec{p}_1|$, $|\vec{p}_2|$ and $\theta_p$, and the two sets of parameters are related by (54)-(56).
• With the acquired numerical solutions for $p_1$, $p_2$ and $\theta_p$, calculate the total squared amplitude through (22).
• Change $\vec{p}$ to calculate (53).
To calculate, e.g., the $N l \leftrightarrow W$ channel, we need to integrate out $\vec{p}$ and $\vec{p}_1$ first.
Thus, we exchange the roles of $\vec{p}$ and $\vec{p}_2$ in the above items, and also change the subscript $N$ into $W$. In this way, we are also able to calculate the inverse-decay rate of a W boson below its threshold.
IV. NUMERICAL RESULTS
We have scanned the $m_N \in [50, 200]$ GeV range in steps of 1 GeV. For the leptonic sector, both "particle" and "hole" channels have been included. For the bosonic sector, all the transverse and longitudinal vector boson channels, as well as the Goldstone and Higgs channels, have been considered. We have enumerated all the $1 \leftrightarrow 2$ possibilities; however, it is unnecessary to plot all of them. We sum the results into 14 channels, whose meanings are shown in Tab. I. Notice that the channels $N (W/Z) \leftrightarrow l^-/\nu$ are kinematically forbidden in our parameter space of interest, so they are neglected. Compared with the production rate $\gamma_X$, it is more convenient to use the averaged decay width $\bar{\Gamma}_X = \gamma_X / n_N^{\text{eq}}$, where the degree of freedom $g_N$ of the sterile neutrino is cancelled by the same factor in $n_N^{\text{eq}}$. The comparison of this parameter with the Hubble constant $H \simeq 1.66 \sqrt{g_*}\, T^2 / m_{\text{Pl}}$ indicates whether the corresponding process can keep the sterile neutrino in thermal equilibrium. In Fig. 4, we have selected $m_N = 50, 100, 150, 200$ GeV and plotted their thermal averaged widths normalized by $\frac{1}{|y_N|^2}$ as functions of the temperature $T$. Just below the critical temperature, $100 \text{ GeV} \lesssim T < T_c$, the longitudinal W/Z and the Goldstone channels play crucial roles.
These two kinds of channels are complementary, and can be compared with the corresponding part of Fig. 1 in Ref. [40], in which large areas had been kinematically forbidden within the $100 \text{ GeV} \lesssim T < T_c$, $50 \text{ GeV} \lesssim m_N \lesssim 100 \text{ GeV}$ ranges. Our calculations do not show such a remarkable suppression. To show this clearly, we also plot the total thermal averaged width $\bar{\Gamma}_{\text{tot}} = \sum_X \bar{\Gamma}_X$ in Fig. 5. There we can see a suppression of the total thermal averaged width for $T > T_c$ similar to Fig. 1 in Ref. [40], while for $T < T_c$, only a slight and obscure suppression appears in roughly the same area.
In the rest of this section we show a preliminary calculation of leptogenesis with the results above. (In Fig. 5, since all processes remain kinematically unsuppressed, we only display $\bar{\Gamma}_{\text{tot}}/|y_N|^2 \geq 1.0 \times 10^{-3}$ GeV in the image; therefore, most of the red parts in this image are actually much smaller than plotted.) Above the sphaleron decoupling temperature, i.e., when $T > T_{\text{sph}} = 131.7$ GeV [51], the $B + L$ number is not conserved, so the lepton number asymmetry generated from the sterile neutrino $1 \leftrightarrow 2$ processes is transported to the baryon number asymmetry through the sphaleron effects. To explain the observed baryon asymmetry normalized by the photon number density, $|\eta_{B0}| = \frac{|n_B - n_{\bar{B}}|}{n_\gamma} \approx 6 \times 10^{-10}$, in our current universe, $|\eta_L| = \frac{|n_L - n_{\bar{L}}|}{n_\gamma}$ is then calculated to be $2.47 \times 10^{-8}$ [37] at $T = T_{\text{sph}} = 131.7$ GeV. Including the $2 \leftrightarrow 2$ wash-out terms, the Boltzmann equations take the standard form, where $\eta_N = \frac{n_N}{n_\gamma}$, $z = \frac{m_N}{T}$, and $\gamma_D = \sum_X \gamma_X$ is the summation over all the $1 \leftrightarrow 2$ channels defined in (53). We neglect the $2 \leftrightarrow 2$ contributions $\gamma_{Hs,Ht,As,At}$ in this paper, since we only calculate the situation in which the sterile neutrino is initially in thermal equilibrium with the plasma when $T \gg m_N$. When $T \sim m_N$ or $T \lesssim m_N$, where the deviation from thermal equilibrium becomes significant, the $2 \leftrightarrow 2$ processes are usually suppressed by an additional $n^{\text{eq}}_{A,H,\dots}$ factor compared with $\gamma_D$. The CP-source parameter $\epsilon_{CP}(z)$ originates from the one-loop interference with the tree-level amplitudes [32, 36], and should depend on $z$. The identification of this parameter is beyond the scope of this paper; we only follow Section II of Ref. [40] and regard $\epsilon_{CP}$ as a constant parameter to present our results for successful leptogenesis in Fig. 6. At some proposed future leptonic colliders, with the aid of secondary vertex detection, the sensitivity to $y_N$ at the ILC [52][53][54][55][56], CEPC [57, 58] and FCC-ee [59] can be significantly improved. Refs. [60][61][62][63][64] have discussed the corresponding searches at these colliders, Refs. [65, 66] have discussed proposals at the LHeC [67, 68], and Ref. [69] has discussed the similar parameter space at the LHC and beyond. Their results can roughly probe the parameter space within $50 \text{ GeV} < m_N < 90 \text{ GeV}$ and $\tilde{m} \gtrsim 1$ eV.
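A minimal sketch of integrating this decay-only Boltzmann system, dropping the $2 \leftrightarrow 2$ wash-out terms as in the text. The rate $\gamma_D(z)$, the constant $\epsilon_{CP}$, and the wash-out coefficient below are placeholder inputs of ours, to be replaced by the tabulated results of this paper:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn  # modified Bessel function K_n

mN, Mpl, gstar = 100.0, 1.22e19, 106.75  # GeV; illustrative inputs

def eta_N_eq(z):
    """Equilibrium N abundance per photon (Maxwell-Boltzmann, g_N = 2)."""
    return z**2 * kn(2, z) / (2.0 * 1.20206)

def gamma_D(z):
    """Placeholder for the summed 1<->2 rate of (53); replace with the
    numerical result of this paper."""
    return 1e-7 * mN**4 * kn(1, z) / z

def rhs(z, y, epsCP=1e-5):
    etaN, etaL = y
    H = 1.66 * np.sqrt(gstar) * (mN / z)**2 / Mpl  # Hubble rate at T = mN/z
    n_gamma = 0.2436 * (mN / z)**3                 # photon density 2 zeta(3) T^3 / pi^2
    D = gamma_D(z) / (H * z * n_gamma)             # dt = dz/(H z); schematic conversion
    dev = etaN / eta_N_eq(z) - 1.0
    detaN = -D * dev
    detaL = epsCP * D * dev - 0.5 * D * etaL / eta_N_eq(z)  # source minus schematic wash-out
    return [detaN, detaL]

# Integrate from high temperature down to the sphaleron decoupling point.
sol = solve_ivp(rhs, [0.1, mN / 131.7], [eta_N_eq(0.1), 0.0], rtol=1e-8)
print("eta_L at T_sph:", sol.y[1, -1])
```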
Our contours are significantly different from those of Fig. 3 in Ref. [40], especially in the $1 \text{ eV} \lesssim \tilde{m} \lesssim 10^5 \text{ eV}$ and $40 \text{ GeV} \lesssim m_N \lesssim 110 \text{ GeV}$ area there, where quite a large void appeared due to the absence of $\gamma_D$, which was kinematically forbidden below $T_c$ in their Fig. 1. In our paper, such an area is filled up by the $N \leftrightarrow G^{+,0} (l^-/\nu)$, $N \leftrightarrow W^+_{T,L} l^-$ and $N \leftrightarrow Z_{T,L} \nu$ channels, so that no significant distortions of the contours appear.
V. SUMMARY
We have calculated the $1 \leftrightarrow 2$ processes of a sterile neutrino interacting with the gauge/Higgs bosons and leptons in the thermal plasma. We applied the Goldstone equivalence gauge to evaluate the processes below the critical temperature $T_c \approx 160$ GeV, and our method is suitable for sterile neutrino masses $m_N \sim T_c$. The results can be utilized in studies involving sterile neutrinos, and we have performed a preliminary calculation of leptogenesis as an example. Compared with Ref. [40], the results are significantly changed due to the different understanding of the kinematic thresholds in this paper. The $1 \leftrightarrow 2$ results are usually sufficient to study the processes at temperatures of roughly the same magnitude as the sterile neutrino's mass, if one assumes initial thermal equilibrium. Yet the non-perturbative corrections in which the leptons and bosons exchange soft particles with the plasma and with each other have not been included. To carry our research forward to a wider temperature range and a more precise calculation, we will include all these effects in our future studies.
Appendix A: Aspect from the $R_\xi$ gauge

The advantage of the Goldstone equivalence gauge is the clean separation between the longitudinal polarization and the remaining Goldstone degree of freedom's contributions, which is convenient for following a "tree-level" methodology. The result should be numerically equivalent to the traditional approach of calculating the imaginary part of the one-loop propagators. In fact, we showed in Ref. [45] that a similar "tree-level" logic can also be applied in the standard $R_\xi$ gauge if the remaining Goldstone degree of freedom is replaced by a "vector boson" with polarization vector $\propto p$, where $p$ is the "vector boson"'s momentum. The equivalence of the results in different gauges is guaranteed by the Ward-Takahashi identity in the broken phase [70],
$p_{2\mu} \mathcal{M}^\mu = m_V(T)\, \mathcal{M}_{GS}$, (A1)
where $V = Z/W$, $m_V(T)$ is the gauge boson mass originating from the vev, and $\mathcal{M}_{GS}$ is the amplitude with the corresponding gauge boson replaced by a Goldstone external leg. For the W boson, one just notices the relationship (A2) between the polarization vectors in the two gauges, where $\epsilon^W_{L, R_\xi}$ is the familiar polarization vector in the $R_\xi$ gauge. One immediately finds that the contribution from the difference between these two polarization vectors always vanishes according to the Ward-Takahashi identity in the broken phase.
For the mixing Z/γ case, things are a little more complicated. Notice that in (32) the mixing factor $-x_1 \sin\theta_W + x_2 \cos\theta_W$ is in the vertex term, while in (31) exactly the same factor is attributed to the polarization vector. Remember also that a pure γ does not receive any mass from the vev, so its amplitude completely disappears when dotted with $p_2^\mu$. Factoring out the common $-x_1 \sin\theta_W + x_2 \cos\theta_W$ term, one finds that the contribution from the difference between the two polarization vectors still vanishes in the amplitude, which is again guaranteed by (A1). The above discussion only involves the longitudinal polarizations of the vector bosons.
For the Goldstone channels, we have pointed out in Ref. [45] that these Goldstone external legs can be replaced by a "vector boson" with polarization vector $\frac{p_2^\mu}{i m_V}$, equivalent to picking up the "quasi-poles" corresponding to the $\propto p_2^\mu p_2^\nu$ terms in the $R_\xi$ propagator. One might notice that the Ward-Takahashi identity is not rigorously satisfied perturbatively if one only keeps the tree-level parts in (17), (18), (32), (33). This can be fixed by introducing the hard thermal one-loop corrections to the gauge vertices (A4) (page 161 in Ref. [50]), where $m_f$ is again given by (12), $\hat{k}^\mu = (1, \hat{\vec{k}})$, and $\hat{\vec{k}} \cdot \hat{\vec{k}} = 1$. The recovery of (A1) can be seen by dotting $p_2 = p - p_1$ into $\Gamma^\mu$, where $\Sigma(p)$ is the hard thermal one-loop correction to the fermionic propagator of the active neutrino or the charged lepton. These two $\Sigma$'s cancel the denominators of the $\frac{i}{\slashed{p}_{(1)} - \Sigma(p_{(1)})}$ propagators on both sides of the gauge vertex, thus restoring the Ward-Takahashi identity in the broken phase.
We then contract $K^{\mu\nu}$ with $t^\mu t^\nu$, $t^\mu a^\nu$, $t^\mu b^\nu$, $a^\mu a^\nu$, $b^\mu b^\nu$, $a^\mu b^\nu$, $l^\mu l^\nu$ to determine the A-G coefficients, together with the traceless condition $K^\mu_{\ \mu} = 0$; the expressions are given in (A10). It is convenient to calculate all the integrals in (A10) in the frame with $l = (0, 1, 0, 0)$, where $l_1$ and $l_2$ are two unit vectors perpendicular to $\vec{p}$ without time components, and $l_1 \perp l_2$. Here $K_\perp = 2 K_{ll} - K_{tt}$ due to the traceless condition. In the other case, without loss of generality when $\alpha > \beta$ and $\beta \ll 1$, we can estimate $K^{\mu\nu}$ by taking the $\beta \to 0$ limit to acquire $K_{tt, \beta \to 0} = \frac{\text{artanh}\,\alpha}{\alpha}$, and again $K_{\mu\nu, \beta \to 0}(\alpha, \beta, \theta_{ab}) = K_{tt}\, t_\mu t_\nu + K_{ll}\, l_{1\mu} l_{1\nu} + K_{ll}\, l_{2\mu} l_{2\nu} + K_\perp (a - t)_\mu (a - t)_\nu$. If one wants a gauge invariant result whenever the HTL corrected dispersion relations are considered, (A4) should be included. We can estimate its contribution through a power-counting consideration; neglecting (A4) will introduce a relative error whose size can be estimated this way. The above discussion depends on the assumption that $K^{\mu\nu} \sim 1$. However, the artanh functions in (A10) diverge when $\alpha, \beta \to 1$. This can be seen from the denominator of (A8), which can be close to zero when $\alpha$, $\beta$ approach 1. Fortunately, this usually happens when a largely boosted "hole" is created, and the divergence is significantly suppressed by the "renormalization factor" $Z_l(p_1)$ of (26). Therefore, the final integrated rate remains nearly intact, although in this paper we still take the (A4) terms into account.
In fact, our practical evaluation shows that the simpler tree-level vertex method makes little difference in the final result compared with the data shown in this paper. The Goldstone equivalence gauge also has another advantage in the tree-level vertex approximation.
If we stick to the $R_\xi$ gauge, a tree-level calculation might introduce a discontinuity in the total effective decay rate across the cross-over temperature $T_c$. Notice that below $T_c$, the Goldstone boson fraction's contributions are collected within the $p_2^\mu p_2^\nu$ terms of the gauge boson components, while for $T > T_c$, all the Goldstone contributions originate from the Yukawa couplings. A continuous transition between these two coupling formalisms requires (A4), and neglecting it will introduce a discontinuity. Therefore, attributing all the "Goldstone contributions" of a vector boson to the Goldstone Yukawa couplings, just as we did in the Goldstone equivalence gauge, automatically includes the key part of the (A4) corrections connecting the two sides. Hence, compared with the $R_\xi$ gauge, the Goldstone equivalence gauge includes more hard-thermal-loop vertex corrections already at a tree-level evaluation.
RAPID INTERNET OF THINGS (IOT) PROTOTYPE FOR ACCURATE PEOPLE COUNTING TOWARDS ENERGY EFFICIENT BUILDINGS
SUMMARY: According to the U.S. Department of Energy, a significant portion of the energy used in buildings is wasted. If the occupancy quantity in a pre-determined thermal zone is known, a building automation system (BAS) can intelligently adjust building operation to provide "just-enough" heating, cooling, and ventilation capacities to building users. Therefore, an occupancy counting device that can be widely deployed at low price with low failure rate, small form factor, good usability, and preserved user privacy is highly desirable. Existing occupancy detection or recognition sensors (e.g., passive infrared, camera, acoustic, RFID, CO2) cannot meet all of these system requirements. In this work, we present an IoT (Internet of Things) prototype that collects room occupancy information to assist in the operation of energy-efficient buildings. The proposed IoT prototype consists of Lattice iCE40-HX1K stick FPGA boards and Raspberry Pi modules. Two pairs of our prototypes are installed at a door frame. When a person walks through this door frame, the blocking of the active infrared streams between both pairs of IoT prototypes is detected. The direction of human movement is obtained by comparing the occurrence time instances of the two obstruction events. Thus, the change in occupancy quantity of a thermal zone is calculated and updated. In addition, an open-source application user interface is developed to allow anonymous users or building automation systems to easily acquire room occupancy information. We carried out a three-month random test of human entry and exit of a thermal zone, and found that the occupancy counting accuracy is 97%. The proposed design is completely made of off-the-shelf electronic components and the estimated cost is less than $160. To investigate the impact on building energy savings, we conducted a building energy simulation using EnergyPlus and found the payback period to be approximately 4 months. In summary, the proposed design is miniature, non-intrusive, easy to use, reliable, and cost-effective for smart buildings.
INTRODUCTION
In the United States, the annual amount of energy used to heat, cool, and ventilate buildings is huge, equivalent to 13 quadrillion British thermal units (BTUs). Most of this energy is wasted when buildings are completely unoccupied or run at default levels of heating, cooling, and ventilation. Normally, the default levels of heating, cooling, and ventilation are set to meet the needs of the maximum occupancy capacity. Occupancy number detection or recognition has great potential to drastically reduce the energy consumption and utility bills of buildings (Jain, 2016). Based on real-time occupancy information, it is estimated that adaptive heating, cooling, and ventilation reduce building energy consumption by up to 30%. Nowadays, miniature environmental sensors are widely deployed in buildings. Although environmental parameters such as carbon dioxide or moisture level provide an implicit indication of human presence or occupancy quantity in a building zone, it is very difficult to extract a highly accurate occupancy number with a low failure rate in real time. Image/video capture technology, radio frequency and radar systems suffer from high implementation cost and inaccuracy. Many studies have investigated whether CO2 sensors can accurately reflect the occupancy quantity; unfortunately, it is concluded that long diffusion times and transient airflow patterns within a building severely degrade the occupancy detection sensitivity of CO2 sensors (ARPA-E Program Report, 2018) (Maripuu, 2009) (Jin, 2015). In addition to remarkable energy savings, real-time occupancy monitoring can also enhance building safety (Cheung, 2018) (Ciftler, 2018). For example, safety-critical buildings (such as shopping malls, museums, or restaurants) often require maximum occupancy regulations to prevent over-occupancy. If a fire occurs when the actual number of occupants exceeds the maximum allowable number, people may not be able to evacuate in a safe time. In this scenario, if accurate live occupancy detection or recognition is realized, audio, visual, or text alerts can be triggered once the maximum occupancy level is reached.
According to the U.S. Department of Energy (DOE), the next generation of smart buildings requires significantly more intelligence, with the capability of counting in real time exactly how many people are inside a thermal zone. Real-time occupancy counting has a big impact on the dynamic adjustment of operating parameters and set points of heating, ventilation and air conditioning (HVAC) equipment. In particular, demand-driven HVAC control heavily depends on occupancy detection to provide "just-enough" heating, cooling and ventilation levels to users. For example, through proper control of a variable air volume (VAV) box, demand-driven ventilation regulates the right amount of fresh air needed by the occupants of a space. Based on the real-time number of occupants, good indoor air quality is achieved without excessive energy consumption. However, existing occupancy sensor systems are limited in their ability to meet such accurate detection or recognition needs. To support this capability, it is imperative to develop new sensor system architectures that improve counting accuracy, reliability, and usability under strict constraints on implementation cost and system size. With the evolution of Internet of Things (IoT) technology (Jin, 2014) (Zanella, 2014), it is possible to develop a standalone IoT platform in which a great deal of signal processing and data computation can be run locally without any assistance from a cloud or central server. Existing occupancy detection or recognition systems have many drawbacks, such as low accuracy, poor adaptivity, inadequate privacy protection, and expensive calibration and maintenance. They usually collect considerably rich and diverse data (such as CO2, humidity, temperature) that detail what is happening in a building zone; then, statistics-based features are identified and extracted on a cloud or central server. Despite promising results reported in a few case studies, a set of statistics-based features and parameter values appropriate for one building usually cannot be used for other buildings, due to feature changes in different and uncertain building environments. Machine learning has been explored for processing these raw sensor data, yet existing machine learning methods have limitations in non-stationary environments and suffer from large occupancy counting errors (Zheng, 2014). As will be reviewed in Section 2.2 and Table 3, modern machine learning algorithms result in occupancy counting errors exceeding 10%, which does not meet the accuracy requirement of the U.S. Department of Energy (DOE).
In this work, we address the design challenge of next-generation occupancy number detection or recognition systems. In the proposed IoT prototype, personnel entry into and exit from a building zone are detected and monitored by an innovative active infrared approach, whose output is considered a good estimate of the occupancy quantity. Raspberry Pi modules are connected to our custom active infrared FPGA boards to provide voltage supply and Wi-Fi access. A user interface is developed to allow building automation systems or anonymous customers to track the real-time room occupancy information online. Since all the processes of data acquisition, computation, and communication are performed within the proposed IoT prototype, our solution is self-contained. Because no high-computational-complexity machine learning algorithms are executed in our IoT prototype, no cloud or central server is required. The entire IoT prototype is easy to install and does not require extra costs for commissioning and maintenance. Thanks to the built-in Wi-Fi server, wireless data communication of our IoT prototype does not depend on the existing Wi-Fi infrastructure of the deployed buildings. This feature enables our system to be applicable to buildings of various structures and ages. Moreover, the built-in Wi-Fi enables remote system maintenance, commissioning, calibration, and upgrade. Since our IoT system does not track the identity or location of each building occupant, user privacy is well preserved. Our entire design has a total dimension of 7.4 inches × 4.6 inches and weighs less than 160 g; hence, it is miniature and lightweight. All software codes (i.e., the Linux operating system and signal processing algorithms) are stored in a micro-SD card of the Raspberry Pi 3 module. Therefore, the whole design is portable and easy to duplicate for mass production. To the best of our knowledge, this is the first intelligent IoT sensor platform dedicated to accurate occupancy counting towards energy-efficient buildings. The entire system is scalable, flexible, easy to use or upgrade, robust, accurate, non-intrusive, secure, low-power, and cost-effective.
To validate the energy-saving benefits of occupancy-based building operation, a simulation is carried out with the aid of EnergyPlus, an open-source building energy simulation engine released by the U.S. Department of Energy. Using a university auditorium as an example thermal zone over a one-year period, a 12% electricity saving is realized when deploying our proposed IoT design for occupancy awareness. The estimated payback period is approximately 4 months, which indicates a cost-effective investment that is affordable for building owners.
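As a quick arithmetic cross-check of the quoted payback period, one can back out the implied monthly saving from the hardware cost; this sketch only rearranges the two quoted numbers and does not re-run the EnergyPlus estimate:

```python
# Quoted numbers: hardware cost ~$158 (Table 2) and a ~4-month payback period.
hardware_cost = 158.0    # USD
payback_months = 4.0

implied_monthly_saving = hardware_cost / payback_months
print(f"implied energy-cost saving: ~${implied_monthly_saving:.0f}/month")  # ~$40/month
```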
This paper makes the following contributions: (1) we propose a new methodology for non-intrusive and high-accuracy room occupancy counting towards energy-efficient building applications. Compared with existing occupancy detection or recognition sensors (i.e., passive infrared, camera, acoustic, RFID, CO2) and complex machine learning algorithms, the proposed infrared-based detection scheme achieves a higher detection accuracy.
(2) In order to implement and test the proposed idea, we have developed an IoT prototype system using off-the-shelf electronic components and an open-source user interface. This IoT prototype is self-contained, easy to use, and cost-effective. As this research focuses on advances in real-time accurate building occupancy counting, recent developments in both industry and academia have been studied and reviewed. Many types of sensors have been used to detect room occupancy information (Labeodan, 2015) (Akkaya, 2015) (Huang, 2017). The advantages and disadvantages of these existing techniques are summarized in Table 1. The output of a passive infrared (PIR) sensor is binary, and it is therefore conventionally used to detect human presence rather than provide an accurate number of occupants (Lam, 2009) (Agarwal, 2010). Radio-frequency identification (RFID) or wearable technology is proven for coarse-grained occupancy monitoring; however, since each RFID tag is associated with a particular person, privacy and security are the primary concerns (Lee, 2008) (Li, 2012). Occupancy monitoring using an ultrasonic approach has been presented (Shih, 2015) (Shih, 2016), but it has several drawbacks, such as difficulty in calibration and coordination. Speech recognition and acoustics are potential techniques for predicting the occupancy information of buildings (Uziel, 2013) (Kelly, 2014) (Huang, 2016). Audio-based occupancy processing is not expensive, because the basic required hardware consists of only microphones and microcontrollers. Yet acoustic detection is rarely used for independent occupancy detection, since (a) non-human sound sources in a building can trigger false detections, and (b) detection fails when someone occupies an HVAC zone but makes no sound. Therefore, acoustic-based occupancy detection is more accurate in a quiet office than in a noisy supermarket or restaurant. Video or image cameras are also used to monitor building occupancy information (Erickson, 2009) (Benezeth, 2013) (Ahmed, 2013). However, due to line-of-sight limitations, cameras cannot be placed in arbitrary positions, and high hardware cost and user privacy issues have severely hampered their widespread deployment. The Wi-Fi probe request signal has been used to predict indoor occupancy information (Zou, 2017) (Ciftler, 2018). When using Wi-Fi probe requests to compute the occupancy number, a Wi-Fi network is required in the building, which is not guaranteed in practice. Besides, each occupant needs to always carry a Wi-Fi device: if an occupant leaves a Wi-Fi device (such as a mobile phone) on a table in an office while going elsewhere, the Wi-Fi probe request method will still count this occupant as present in the office, producing occupancy estimation errors. Carbon dioxide levels predict room occupancy based on a linear relationship between the level of carbon dioxide and the number of inhabitants in a space (Sun, 2011) (Nassif, 2012). Even though this method is low-cost and non-intrusive, the level of carbon dioxide fluctuates with HVAC operation, passive ventilation, unpredictable opening of doors and windows, and sensor placement, so the exact relationship between CO2 level and occupancy varies case by case. Moreover, when an occupant leaves a room, the CO2 concentration remains almost constant for a long time, which reduces the sensitivity of occupancy detection.
Existing building occupancy detection or recognition mechanisms
Researchers have also proposed using hybrid environmental sensors to improve detection accuracy. For example, one design utilizes CO2 and light sensors in a micro-scale wireless sensor module (Huang, 2017). In 2012, a combination of sensors including CO2, humidity, light, sound, and motion was implemented and tested for occupancy monitoring performance (Yang, 2012). In 2016, Intel implemented an IoT-enabled smart office building, where 9,000 sensors are used to track and optimize building information such as temperature, lighting, energy cost and room occupancy (Khandavilli, 2016). Despite the great potential to improve building occupancy counting by studying the cross-correlation of multi-sensor data (Kumar, 2016) (Das, 2017), it is a design challenge to extract useful occupancy-counting features from the rich raw sensor data. Much of the information useful for determining occupancy characteristics is hidden or not easily discovered. Most existing sensor systems use extracted statistical features to analyze raw sensor data. Although promising results have been reported in a few buildings, the same features cannot be used in other buildings because of the dramatic changes in human behavioral characteristics in diverse and uncertain building environments. The reported accuracies of these occupancy detection methods are summarized in Table 3, where the accuracy of the proposed design is compared with these existing works in Section 4.3. Therefore, it is necessary to develop a direct non-intrusive monitoring platform for accurate building occupancy counting, instead of relying on computationally intensive multimodal signal processing algorithms. Using an ultra-low-power device to design a proper IoT system that meets these stringent requirements is the focus of this project. Machine learning (or deep learning) algorithms have been presented to process sensor data for building occupancy monitoring. Figure 1 illustrates the processing flow of generic machine learning techniques for room occupancy detection. A set of features is manually extracted by designers in classical machine learning algorithms, while deep learning automatically grasps the relevant features required to solve a problem. In (Yang, 2012), using a radial basis function (RBF) neural network, the cross-estimation tests produce an accuracy of 66%. In
another study (Javed, 2017), a random neural network model was developed to understand the relationship between occupancy level and CO2 concentration, room temperature, and humidity, where the reported occupancy accuracy was 87.4%. In (Ekwevugbe, 2013), a low-cost and non-intrusive sensor network was deployed in an office; the selected multi-sensor features were fed to a neural network, achieving up to 84.6% estimation accuracy. In (Candanedo, 2016), the researchers studied three different statistical classification models for occupancy detection using light, CO2, temperature, and humidity data. The accuracy of occupancy detection was found to be between 95% and 99%; however, their study is limited to checking whether an office is occupied, rather than the occupancy quantity. In (Raykov, 2016), a PIR sensor was combined with machine learning algorithms to estimate occupancy. Based on a microprocessor and a PIR sensor, a machine learning model was presented to run occupancy estimation algorithms in (Leech, 2017), where a statistical regression model is used to fit the measured PIR sensor data; this work validates the feasibility of running machine learning algorithms on an IoT platform. A recent study (Ortega, 2015) has pointed out that neural networks tend to produce noisy and unstable results over time, because these machine learning models treat training data as independent, thus ignoring the cross-correlation of multi-sensor data.
From the above discussion, existing occupancy detection or recognition mechanisms and data processing algorithms cannot fully meet the rigid requirements of next-generation occupancy number detection or recognition systems for smart buildings. To address these challenges, we investigate and develop an active infrared based IoT prototype for real-time occupancy counting with high accuracy and low failure rate. The proposed IoT design does not employ any machine learning techniques, but it provides an effective hardware platform on which machine learning algorithms could run in the future; such algorithms can be executed on the FPGAs and the Raspberry Pi 3. In the next section, details of the design considerations and system implementation are elaborated.
SYSTEM DESIGN AND IMPLEMENTATION
Figure 2 depicts the overview of our system deployment and its interaction with a building automation system. In the buildings we studied, the HVAC equipment and user graphical interface are purchased from Johnson Controls, a leading manufacturer and supplier of HVAC equipment and BAS tools. The right side of Figure 2 shows a snapshot of the BAS user interface provided by Johnson Controls. In general, BASs from other companies (e.g., Honeywell, Siemens) are also applicable to our proposed people occupancy counting system, which is illustrated in the left side of Figure 2. A BAS tool connects, controls, and monitors the operation of various HVAC devices and sensors, enabling them to deliver and share information based on a dedicated communication protocol. So far, the communication protocol most used by BAS tools is BACnet (Building Automation and Control network). In addition, because a BAS tool typically supports data communication with the Internet, building operators can remotely access the BAS to control its connected HVAC devices and sensors from anywhere in the world. As shown in the right side of Figure 2, the user graphical interface usually provides real-time information on HVAC operation status, such as CO2 level setpoints, zone pressure, discharge air pressure and temperature, real-time temperature values, humidity, and air flow rate at some points of interest. This information allows building operators to quickly check the status of HVAC devices and respond immediately to alerts of system failures. Passive infrared occupancy sensors are often used in conjunction with a wireless adaptor, which enables wireless communication between these occupancy sensors and a BAC network. By sensing the triggering events that occur in the infrared streams, a BAS tool can obtain room occupancy count information, and then make decisions to optimize the control of heating and cooling systems within a building, providing optimized occupant comfort and energy efficiency. In this work, two kinds of modules (i.e., Raspberry Pi 3 and infrared FPGA boards) are installed at the entrance/exit door of an HVAC thermal zone. Two pairs of active infrared FPGA boards are set up near the door frame to monitor real-time doorway traffic, whose information is sent to the Raspberry Pi modules via cable connections. A built-in Wi-Fi server, embedded in the Raspberry Pi modules, enables wireless data communication with the related building automation system (BAS). Establishing a built-in Wi-Fi server in the proposed system is an attractive property: data communication does not depend on the existing Wi-Fi infrastructure in a building. With this Wi-Fi server, the building automation system (i.e., on the right side of Figure 2) or building users/owners can easily access the real-time occupancy information of this HVAC zone. Since all data acquisition and communication are performed locally, the proposed IoT prototype is self-contained, and hence no cloud or server computation is needed. The entire design is easy to install by users and does not require labor costs for commissioning and maintenance. In our proposed building occupancy counting system, we store the data in a database for on-demand delivery by a BAS tool. For example, we used a website database called ThingSpeak to record and visualize the raw sensor data for BAS tools. ThingSpeak provides a free account when the sensor data are set to update no more often than every two minutes. Since our infrared sensor output is updated only when someone walks through the door, it
usually meets the two-minute requirement.
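As an illustration of this database step, a minimal sketch that posts the current count to a ThingSpeak channel through its public REST update endpoint; the write API key and the use of field1 are hypothetical placeholders:

```python
import requests

THINGSPEAK_WRITE_KEY = "XXXXXXXXXXXXXXXX"  # hypothetical placeholder key

def publish_occupancy(count: int) -> bool:
    """Write the occupancy count to field1 of a ThingSpeak channel.
    ThingSpeak replies with the new entry id, or '0' if the update was rejected
    (e.g., posted faster than the account's minimum update interval)."""
    r = requests.get(
        "https://api.thingspeak.com/update",
        params={"api_key": THINGSPEAK_WRITE_KEY, "field1": count},
        timeout=5,
    )
    return r.ok and r.text.strip() != "0"

if __name__ == "__main__":
    publish_occupancy(12)
```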
Figure 3 illustrates how the proposed IoT system operates. An infrared receiver (RX) FPGA board and an infrared transmitter (TX) FPGA board are needed on each side of a door opening; therefore, two TX boards and two RX boards are attached to the left and right sides of a door frame, respectively. Powered by a long-life battery, each TX board transmits an infrared stream towards its aligned RX board. In this way, two independent infrared streams are established in parallel. Each RX board keeps a cable connection with a Raspberry Pi 3 module. When an object (e.g., a person) walks through the door opening, the infrared streams are blocked and interrupted by the human body. Once a signal blocking event is detected by an RX board, the event and its occurrence time are reported to a Raspberry Pi 3 module via the cable connection. The moving direction of an occupant is determined by comparing the occurrence times of the events reported by the two RX boards, as sketched in the code example after Figure 3. The Raspberry Pi 3 module then calculates and updates the real-time occupancy quantity, which can be instantaneously observed by building automation systems or building owners/users. When someone hovers in the doorway but does not enter the room, our system will not count this event, because the time difference between the two triggering events is not within a realistic range. We also configure an internal timer in the FPGA board to prevent detection errors from persisting over time. In fact, when a detection error persists for a certain period of time (e.g., 5 minutes), our proposed system is reset to eliminate the accumulation of detection errors.
Figure 3. The Concept of Proposed IoT Prototype with Highlighted Key Components
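The direction logic of Figure 3 can be sketched in a few lines: the sign of the time difference between the outer-beam and inner-beam blocking events gives the walking direction, and implausible time gaps are rejected. The threshold values are our assumptions, and the actual design implements this on the Raspberry Pi from FPGA-reported events:

```python
import time

MIN_GAP = 0.02   # seconds; shorter gaps are treated as noise (assumed threshold)
MAX_GAP = 2.0    # seconds; longer gaps mean unrelated events, e.g., hovering (assumed)

occupancy = 0

def handle_beam_pair(t_outer: float, t_inner: float) -> None:
    """Update the occupancy count from one pair of beam-break timestamps.
    Outer beam first -> entry; inner beam first -> exit."""
    global occupancy
    dt = t_inner - t_outer
    if not (MIN_GAP <= abs(dt) <= MAX_GAP):
        return                       # reject noise / hovering in the doorway
    occupancy += 1 if dt > 0 else -1
    occupancy = max(occupancy, 0)    # a room can never hold negative people

# Example: outer beam broken 0.3 s before the inner one -> one person entered.
now = time.time()
handle_beam_pair(t_outer=now, t_inner=now + 0.3)
print(occupancy)  # 1
```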
The required logic and control functions are implemented in a field-programmable gate array (FPGA) evaluation kit offered by Lattice Semiconductor. This evaluation kit mainly includes a Vishay TFDU4101 IrDA transceiver (which can be configured as a transmitter or receiver by the FPGA logic and control functions), a 12 MHz MEMS oscillator, an iCE40HX-1K FPGA chip, and an FTDI 2232H USB chip. Besides, there is an integrated development environment (IDE) and bit-stream generator offered by Lattice Semiconductor. As a result, this hardware evaluation kit is easy to use for rapid prototyping of our IoT design. Figure 4 shows the electrical diagrams of the proposed design, where an infrared transmitter and receiver are configured for the TX and RX boards, respectively. The Raspberry Pi module that is connected to a TX board only provides the voltage supply and does not establish a built-in Wi-Fi service. The hardware installation and setup procedures are briefly described as follows. First, the RX infrared FPGA boards are connected to the Raspberry Pi modules that are configured to execute RX-related functions. To do this, the FPGA board that is intended to be placed on the outside of the door opening (where people walk in) is attached, either directly or through an extension cord, to the bottom right port facing towards the Raspberry Pi module; the FPGA board that is intended to be placed on the inside of the door opening is connected to the upper right port facing towards the Raspberry Pi module. Second, the TX infrared FPGA boards are connected to the Raspberry Pi modules that are configured to execute TX-related functions. Third, the RX and TX FPGA boards are aligned as well as possible to maximize the stream detection sensitivity.
Figure 4. Electrical Diagrams of the Proposed TX and RX boards
In addition to low-cost and reliable hardware, software algorithms and user interface design are also indispensable, because they support data collection, signal processing, and data visualization. In our design, a user interface has been developed to allow anonymous users to track room occupancy information. The real-time building occupancy count is visualized in the user interface software. Our software and user interface are open-source and free to download from (https://github.com/nickwhetstone/DAC-IOT-SIUC). In our IoT prototype, all the code for system configuration is stored on a micro-SD card. Customers only need to insert this SD card into the SD slot of the Raspberry Pi, so no programming knowledge is required of users. A node.js server is implemented inside the Raspberry Pi, which offers the built-in Wi-Fi service.
In our design, users do not need to know anything about Wi-Fi server settings. They can access the Wi-Fi service by opening a web browser and visiting any ".com" URL. The operation details for checking the occupancy count on a mobile phone are demonstrated in the five-minute proof-of-concept video (https://sites.google.com/site/chaolushomesite) referenced in Section 4.1. Overall, this design is user-transparent, easy to use, low-cost, low-power, and highly accurate for smart buildings. User transparency is attributed to the small system dimensions and light weight, 7.4 inches × 4.6 inches and less than 160 grams, respectively. As shown in the demonstration video, the developed user application interface is easy to use without any programming or networking knowledge. As will be shown in Table 2, the total cost of an entire system is $158, which is lower than the target cost defined by the U.S. Department of Energy (ARPA-E Program Report, 2018). The entire system is implemented using low-power electronics, so the IoT system can be battery-powered and have a long lifetime.
As will be discussed in Section 4.1, the average occupancy detection accuracy is approximately 97%, which outperforms existing approaches in the literature.
EXPERIMENTAL RESULTS
In this section, we present the cost-savings estimates after the system implementation and experimental results, because the EnergyPlus-based cost-savings estimation requires us to first demonstrate that the proposed infrared occupancy counting system operates as expected. We then take the average occupancy detection accuracy from the experimental results and use it as an input parameter for the cost-savings estimation. If we ran the cost-savings estimation before the system implementation and experiments, the average accuracy would be unknown, and we could not set proper parameter values in the EnergyPlus simulator. Furthermore, if the proposed infrared occupancy counting system did not work at all, the design would have failed, and studying its cost-savings performance would be meaningless.
System implementation and experimental results
The proposed IoT prototype has been implemented and experimentally validated. Figure 5 shows the system view and the user interface on a mobile phone. The entire design has a total dimension of 7.4 inches × 4.6 inches and weighs less than 160 g. Figure 6 shows the experimental setup and testing implementation in an office building. When the proposed system detects a person entering or exiting the office, it transmits the triggering event to the Raspberry Pi 3. A five-minute proof-of-concept video demonstration was also recorded (https://sites.google.com/site/chaolushomesite). As demonstrated in this video, after a person walks through the door frame, the occupancy count shown in the user application interface on the mobile phone increases or decreases by one according to the person's direction of movement. The video shows that the user application interface is easy to use and immediately reflects changes in the room occupancy count. Similar to the on-site testing in the demonstration video, we conducted a four-week measurement to evaluate the occupancy counting performance and found a detection accuracy of approximately 97%. The 3% failure rate is caused by temporal noise or random interference from the ambient environment; this problem is expected to be mitigated by a robust custom circuit board that improves system reliability and noise immunity. The proposed IoT prototype is built entirely from off-the-shelf hardware components at an estimated cost of about $158, as shown in Table 2, making it affordable and cost-effective. The system also provides a proof of concept for compact FPGA implementations, using fewer than 1000 look-up tables, for Internet of Things applications. We found that 14.8% of the TX and 15.2% of the RX FPGA resources were utilized to realize the logic and control functions. This means that more than 80% of the FPGA logic resources remain available for extending other functionalities, such as advanced encryption standard (AES) encryption for enhanced communication security.
AES, the Advanced Encryption Standard for data communication security, has been adopted by the U.S. government and is now used worldwide. As shown in Figure 2, our proposed IoT system wirelessly communicates with a building automation system (BAS). Recent research has found that Internet-connected HVAC systems are vulnerable to hacking and attacks (Peacock, 2014) (Jones, 2017). While HVAC data is generally not considered highly sensitive, unsecured Internet connections make HVAC systems vulnerable. Without proper security protection, data communication from the Internet (i.e., the Wi-Fi wireless connection in Figure 2) to HVAC systems is prone to distributed denial-of-service (DDoS) attacks, in which a target is overwhelmed by traffic from many separate computers simultaneously. The inclusion of AES in our proposed IoT prototype helps ensure advanced data security against such attacks and protects data so that it is transferred reliably and securely over Internet networks to HVAC systems.
Energy saving estimation using the EnergyPlus simulator in a VAV HVAC building
To estimate the long-term energy savings resulting from occupancy-driven building operation, we chose the HVAC thermal zone of the university auditorium in the Student Services Building on the SIUC campus and used its energy consumption as a baseline. The prerequisite knowledge for the baseline calculation includes the blueprints of the original construction, historical energy bills, and current operating data in the building automation system. VAV HVAC equipment is installed in this building. All useful data provided by the Physical Plant engineers were imported into a design and analysis tool, EnergyPlus, which takes into account the building envelope, windows, lighting, HVAC equipment, and weather conditions. The weather data for Carbondale, Illinois were uploaded into EnergyPlus from a weather file. Since this building is a university auditorium, university staff know what activities are organized there, along with a roughly accurate number of occupants and the duration of each activity.
Therefore, we used this information in the EnergyPlus simulations. In this study, we also investigated the possible causes of the 3% detection error. After in-depth analysis and verification, we found that the 3% failure rate is caused by temporal noise or random interference/disturbance from the ambient environment. This problem can be solved with a robust custom circuit board that improves system reliability and noise immunity. Since this 3% error is spread uniformly over time, rather than following a certain pattern or occurring during a specific daily period, we believe that modeling it as randomly occurring is sufficient, so the detection accuracy is taken as a flat 97% throughout each day of the year. Therefore, there is no need to create a time-dependent occupancy schedule for the EnergyPlus simulations. Assuming an average occupancy detection accuracy of 97%, simulation results for a one-year study show that the average electricity reduction for this auditorium is 12%. Specifically, when opening the IDF editor in the EnergyPlus simulator, there are several objects to configure in the "People" class. By default, dynamic building occupancy information is not used for HVAC control and operation, so we used the default number of people in the "People" class to calculate the baseline energy consumption. In contrast, dynamic building occupancy information is used in occupancy-aware HVAC operation: based on the actual number of room occupants, and after accounting for the average 97% accuracy of the occupancy count, we set the dynamic occupancy quantity in the EnergyPlus tool for simulation. By comparing the two EnergyPlus simulation results, we found that using our proposed infrared occupancy counting system can reduce the average electricity consumption by 12% over one year. Given that the electricity consumption of this auditorium is 80,000 kWh per year, a 12% reduction equals 9,600 kWh. Considering that the electricity rate on the Carbondale campus is about 5 cents per kWh, the total electricity bill saving is $480 per year. As mentioned earlier, since the proposed IoT prototype costs $158, the payback period is approximately 4 months; the arithmetic is reproduced below. The dynamic occupancy profile has a more prominent impact on HVAC operation than on other energy loads (such as electricity for lifts and lighting). For example, lighting is ON when a room is occupied regardless of the number of occupants; occupant presence or absence is therefore the key control factor, and the lighting ON/OFF status does not change with the occupancy count in a room.
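For reference, the savings arithmetic quoted above can be reproduced in a few lines; all numbers are taken directly from the text.

```python
# Cost-savings arithmetic from the EnergyPlus study above.
annual_kwh = 80_000          # auditorium electricity use per year
reduction = 0.12             # simulated reduction with occupancy-aware control
rate_usd_per_kwh = 0.05      # Carbondale campus electricity rate
system_cost_usd = 158        # prototype cost from Table 2

saved_kwh = annual_kwh * reduction                 # 9,600 kWh
saved_usd = saved_kwh * rate_usd_per_kwh           # $480 per year
payback_months = system_cost_usd / saved_usd * 12  # about 3.95 months

print(f"{saved_kwh:.0f} kWh saved, ${saved_usd:.0f}/yr, payback {payback_months:.1f} months")
```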
Summary and comparison
Table 3 summarizes and compares the existing building occupancy detection methods in the literature. These prior approaches rely on single environmental sensors, such as passive infrared, RFID, ultrasonic, CO2, acoustic, image cameras and Wi-Fi probe requests, or on hybrid sensors. We compare them in terms of mechanism, cost, and occupancy detection accuracy. Passive infrared and ultrasonic sensors fail to count the occupancy quantity due to limitations of their detection mechanisms. The other existing mechanisms achieve a detection accuracy of no more than 90%, so they cannot meet the occupancy counting requirements of next-generation smart buildings. While the use of multiple sensors has great potential, multimodal signal processing is not an easy task, because the information useful for determining occupancy characteristics is hidden or not easily discovered. The proposed work achieves higher accuracy without complex signal processing; no statistics-based machine learning algorithms need to be executed in our active infrared-based IoT prototype.
CONCLUSION
Energy-efficient, demand-driven smart buildings have gained increasing attention for green and sustainable economics. As an emerging technique, the Internet of Things (IoT) has great potential to become widespread in smart buildings. This work presents the Lattice iCEstick evaluation kit as an open-source electronics platform for building an active infrared-based occupancy counting system. The proposed design accurately counts personnel entries into and exits from a building zone. The estimated building occupancy count is sent to a Raspberry Pi, which supports wireless data communication with building automation systems or users. An open-source user interface has also been developed to facilitate data processing and visualization. The entire hardware and software design has been implemented and tested in this study. Experimental results show that the proposed design achieves an average counting accuracy of 97%. Furthermore, building energy simulation shows a significant impact of adopting this IoT prototype in a VAV HVAC thermal zone: a 12% energy reduction is achieved, with a resulting payback period of 4 months. In summary, our proposed IoT prototype is flexible, portable, easy to use and upgrade, robust, accurate, low-complexity, non-intrusive, and cost-effective.
FUTURE WORK
One limitation is that the proposed system does not handle multiple people entering a space simultaneously well.
To address this limitation, designers can adjust the threshold time interval between two infrared trigger events; for example, if the detected time difference between two trigger events is longer than the defined threshold, the system can treat it as a multi-person entry activity. Another way to improve occupancy detection accuracy is to use hybrid sensors (such as CO2 and sound) and multimodal data fusion to refine the occupancy count detected by the proposed active infrared sensors. The effectiveness of this hybrid method has been proven in our previous research and in other works in the literature.
The biggest challenge in future work is to minimize the system size and cost. The current design consists of two Raspberry Pi 3 modules and four ICE40HX1K-STICK-EVNs. In the current design, the system component with the highest cost is the Raspberry Pi 3. The Raspberry Pi 3 does provide a way to connect the ICE40HX1K-STICK-EVNs to the world of IoT; however, only one Raspberry Pi 3 is necessary to implement the wireless network currently used in the design. To reduce the system cost, future work will entail developing code on the TX node of the ICE40HX1K-STICK-EVN that enables the device to continuously transmit infrared signals at successive time intervals, thus eliminating the redundant Raspberry Pi 3 that currently drives signal transmission for the people-counting aspect of the design. This revision alone will save $35 in the design cost. An alternative path, or one that may be taken after the first revision, is to remove the TX nodes altogether. The system would use two ICE40 sticks that take on the role of transmission as well as signal reception; each ICE40HX1K-STICK-EVN would alternate between receiving a signal and replying to the received signal. The first stick whose timeout period expires while waiting for an acknowledgment from the other ICE40HX1K-STICK-EVN is the device that detected an entrance or exit action. This revision would cut the design cost in half and is a viable direction for future work.
Figure 1. Processing Flow of Generic Machine Learning Techniques for Room Occupancy Detection
Figure 2. Overview of Proposed System Deployment and Interaction with a Building Automation System
Figure 5. System View of an Entire Design Including a Lattice Evaluation Kit and a Raspberry Pi 3 Module
Figure 6. Experimental Setups and Testing Implementation in an Office Building
Table 1: Comparison of Existing Building Occupancy Detection and Counting Methods
Table 2: The Proposed System Cost Breakdown
Table 3. Summary of Existing Building Occupancy Detection Methods
"Engineering",
"Computer Science"
] |
Numerical and Experimental Investigation of Longitudinal Oscillations in Hall Thrusters
One of the main oscillatory modes found ubiquitously in Hall thrusters is the so-called breathing mode. This is recognized as a relatively low-frequency (10–30 kHz), longitudinal oscillation of the discharge current and plasma parameters. In this paper, we present a synergic experimental and numerical investigation of the breathing mode in a 5 kW-class Hall thruster. To this aim, we propose the use of an informed 1D fully-fluid model to provide augmented data with respect to available experimental measurements. The experimental data consists of two datasets, i.e., the discharge current signal and the local near-plume plasma properties measured at high-frequency with a fast-diving triple Langmuir probe. The model is calibrated on the discharge current signal and its accuracy is assessed by comparing predictions against the available measurements of the near-plume plasma properties. It is shown that the model can be calibrated using the discharge current signal, which is easy to measure, and that, once calibrated, it can predict with reasonable accuracy the spatio-temporal distributions of the plasma properties, which would be difficult to measure or estimate otherwise. Finally, we describe how the augmented data obtained through the combination of experiments and calibrated model can provide insight into the breathing mode oscillations and the evolution of plasma properties.
Introduction
Hall thrusters are the most widely adopted electric propulsion technology for space applications. Their success is largely due to their high thrust efficiency and thrust-topower ratio, coupled with an overall robust design. During Hall thruster operation, a stream of electrons is generated by an external hollow cathode while neutral gas, typically xenon, is injected in the thruster annular ceramic channel through an anode. A magnetic circuit generates a predominantly radial magnetic field in proximity to the exit section of the discharge channel. The intensity of the magnetic field is tailored to ensure the magnetization of the electron population while leaving the ions unaffected. Part of the electrons generated at the cathode move towards the anode due to the potential difference imposed externally between the two. In the region where the magnetic field is high, the crossed electric and magnetic fields force the electrons to perform an E × B motion in the azimuthal direction. When the propellant neutral atoms reach the region of high azimuthal electron current, they are ionized by electron impact. The electrons generated by the ionization events join those coming from the cathode and drift towards the anode through successive collisions. The ions are instead accelerated by the applied electric field and exit the thruster at high speeds, generating thrust. Finally, a second population of electrons is emitted by the cathode to neutralize the outbound ion beam and preserve the charge balance of the system. More details on the physical processes involved in the plasma dynamics in Hall thrusters can be found, for instance, in [1,2].
Several oscillatory modes exist inside the plasma in the channel and in the near-plume of Hall thrusters. The characteristics and nature of these oscillations strongly depend on the thruster geometry, magnetic field topology and operating condition [3][4][5]. One of the most prominent unsteady modes of Hall thrusters is recognized as a low-frequency oscillation of the plasma properties in the longitudinal direction, which produces periodic rises in the thruster's discharge current. This so-called breathing mode was first described in the 1970s [6] and, since then, has been reported almost ubiquitously in Hall thrusters.
An intuitive understanding of the breathing mode can be traced back to the periodic replenishment of the channel by the injected propellant, followed by a rapid ionization and subsequent acceleration of the generated plasma. The ejected plasma generates a surge in the discharge current and simultaneously depletes the channel of the propellant particles before the cycle starts again with the injection of new neutral atoms through the anode. This intuitive description falls short of capturing the nature of breathing mode in its entirety, and the core physical origin of the instability is still a subject of debate and further investigations. This is partially due to the complex interaction of the longitudinal modes with other oscillations that generate anomalous diffusion of the electrons through the magnetic field [1] and partially because of the difficulties in gathering high-frequency experimental data on the plasma properties inside Hall thrusters.
From a numerical perspective, longitudinal oscillations in Hall thrusters have been investigated since the advent of plasma simulations in the field [7,8]. Numerical simulations are carried out mainly using two different modeling approaches, i.e., the fluid and kinetic descriptions. Moreover, the kinetic approach can be in a direct kinetic (DK) or particle-in-cell (PIC) formulation. Several advantages can be obtained with hybrid models that combine different approaches for different species within the same computational framework. Although complex methodologies are available and currently used to carry out 2D (see, for instance, [9][10][11][12][13][14][15][16][17][18]) or even 3D simulations (see, for instance, [19,20]) of Hall thruster discharges (see also [21,22] for recent reviews of the literature), 1D models are still very appealing for studying the fundamental mechanisms in the dynamics of the plasma discharge. Indeed, despite the unavoidable lower accuracy and prediction capabilities in comparison with more complex models, they are easier to handle and definitely cheaper from a computational viewpoint. Moreover, due to their simplicity, it is possible to apply ad-hoc tuning against reference data, for instance, when they are available from experiments or from more complex simulations, so as to reproduce at least a set of operating points in a fairly accurate way, thus partially compensating the inaccuracy related to the simplification of the physical model employed. In the literature, several 1D models, comprising both hybrid kinetic-fluid models [23][24][25] and fully-fluid models [26][27][28][29][30][31][32], have been successfully employed to simulate the global characteristics of plasma discharges in Hall thrusters. Among the two modeling approaches, fully-fluid models are easier to tune against reference data, as they are entirely formulated in terms of deterministic spatiotemporal plasma characteristics. In hybrid approaches, in any case, tuning can be carried out on the fluid part of the model, as proposed, for instance, in [33]. Moreover, fully-fluid models are particularly suitable to be used for stability analyses, as they can be linearized in a rather straightforward way. As a result, most of the stability analyses documented in the literature, performed to further investigate the nature of the breathing mode, are carried out starting from simplified dynamical systems, as in [34], or with fully-fluid models (see, for instance, [35,36]).
From the experimental perspective, gathering data on the plasma parameters in the channel and near-plume of Hall thrusters poses significant technological challenges. This is mainly due to the harsh environment that invasive probes need to withstand, limiting the maximum residence time in the high plasma density regions of the domain. This is especially true when high-frequency data need to be collected in order to reconstruct oscillatory modes, since long time series are necessary. Additionally, invasive probes can significantly disturb the plasma flow and invalidate the measurements. Therefore, significant care must be taken in the design of the probe and of the test setup. As concerns non-invasive probes, such as optical sensors, the annular geometry of Hall thrusters and the intrinsically complex test setup hinder their applicability. Sensors installed on rapidly moving arms to probe the plasma in Hall thrusters have been used since the late 1990s [37]. Since then, similar concepts have been proposed, including single and triple Langmuir probes, as well as emissive probes, in order to gather a time-averaged picture of the plasma properties in the thruster near-plume [38][39][40][41][42][43]. The plasma perturbations occurring upon probe insertion complicate the interpretation of experimental data, particularly within the channel and in the vicinity of the magnetic peak, where the electron azimuthal current is the highest. The factors influencing the onset and the impact of these perturbations were studied by several authors, seeking to improve the reliability of the measured data and the quality of the information they provide. Haas et al. [37] identified material ablation of the probe as an important source of perturbation and suggested minimizing the probe residence time in the hotter plasma regions to reduce probe perturbations. Nevertheless, perturbations remain up to a certain degree and, as discussed by Jorns et al. in Reference [42], a sharp transition of global plasma parameters and a downstream shift in plasma properties are observed upon probe insertion in the channel. Optical methods, such as Laser Induced Fluorescence (LIF) [44][45][46][47][48][49] and Thomson scattering [50], provide non-intrusive alternatives to fast-moving ionic probes for the exploration of the steady and unsteady behavior of the near-plume plasma of Hall thrusters. In spite of being capable of providing useful information, these diagnostic systems can present signal-to-noise interpretation issues and, in most cases, require complex and expensive setups, limiting their extensive use. Lobbia et al. made noteworthy contributions to the use of invasive diagnostics to study the evolution of low-frequency oscillations in Hall thrusters [51][52][53][54], positioning a single Langmuir probe (with a noise-compensating null probe) in multiple fixed locations of the thruster plume and rapidly biasing it to obtain distributions of plasma properties on a 2D grid. Since single probes require time to undergo several bias cycles, measurements were limited to the far plume, where the probe could withstand the plasma conditions indefinitely.
The present paper proposes an investigation of the breathing mode in a 5 kW-class Hall thruster, SITAEL's HT5k DM2, based on the synergic combination of a 1D fully-fluid model and experimental data. The latter comprises two different datasets: (a) the signal of the thruster discharge current acquired with an oscilloscope; (b) the measurement of the local plasma properties in the channel and near-plume carried out with a fast-moving triple Langmuir probe and processed with a dedicated diagnostic method [55]. The numerical model uses standard modeling methodologies and contains physically meaningful free parameters associated with those modeling aspects that are most affected by uncertainty. Within this context, the main contribution of the paper is to assess the potential of the considered 1D model when the free parameters are calibrated against easily measurable experimental quantities. To this purpose, the model has been calibrated using only the first of the two datasets, the current measurement, which does not require a complex experimental setup and is easy to acquire. Results show that calibration is indeed possible, leading to an accurate recovery of the time evolution of the discharge current. Successively, thanks to the second dataset, i.e., the measurement carried out with the triple Langmuir probe, it has been possible to assess the level of accuracy reached by the model in the prediction of the values and oscillations of the plasma properties in the thruster near-plume when calibration is carried out solely on the current signal. Results show that the model provides a good estimation of the plasma properties, which, conversely, are difficult to measure. As a result, the proposed calibrated model can also be interpreted as an extrapolator of partial experimental data in Hall thrusters. Thus, more than a predictive tool, it can be used as a complementary tool, together with experiments, to estimate quantities that cannot be measured and to orient more detailed measurements in the experiments. Finally, in the manuscript we provide an example of how the augmented data resulting from the joint use of experiments and calibrated model can be used to explore the variability of the plasma properties within typical cycles associated with the breathing mode.
Concerning the outline of the paper, Section 2 describes the numerical formulation adopted to model the plasma flow in Hall thrusters, discussing the main assumptions and presenting the core equations of the model. Section 3 details the characteristics of the test item and summarizes the experimental setup and data processing technique, described at length in [55]. The comparison between the results of the calibrated plasma simulations and the experimental results is then presented in Section 4. Finally, Section 5 summarizes the conclusions of the present research.
Numerical Description of the Plasma Dynamics
A fully-fluid unsteady 1D model is proposed here to describe the dynamics of the plasma discharge in the thruster. The plasma is assumed to be quasi-neutral and mainly composed of singly-charged ions. Three species are thus included in the model, namely electrons, singly-charged ions and neutrals. A drift-diffusion approximation is used to treat the electron flux. Ions are supposed isothermal and unmagnetized, and neutrals are assumed to have constant axial velocity ($u_n$) in the domain. Furthermore, the magnetic field is assumed to be purely radial. With reference to Figure 1, the model is aimed at providing time-varying, averaged plasma properties over $z$-sections inside the channel and in the near-plume, from the anode surface ($z = 0$) up to a virtual line ($z = z_f$) where cathode boundary conditions are imposed. Part of the plume is included in the spatial computational domain of the model in order to provide some information also outside of the channel. In the following, we concisely introduce the model equations, dividing the discussion by species. Since we are dealing with a 1D model, it is implicit that all the equations and quantities involved are averaged over $z$-sections.
Neutrals
Neutral atoms are assumed to have constant axial velocity $u_n = u_n \mathbf{e}_z$ and constant temperature. Thus, only the continuity equation is included in the model, as a classical linear transport/reaction equation with $u_n > 0$ constant and assigned among the problem input data:

$$\frac{\partial n_n}{\partial t} + \frac{\partial}{\partial z}(u_n n_n) = -n_e n_n k_I + \dot{n}_w, \qquad (1)$$

where $n_n$ and $n_e$ are the number densities of neutrals and electrons, respectively. As will be detailed later, the plasma is assumed to be quasi-neutral, so that $n_e = n_i$, where $n_i$ is the ion number density. The term $k_I$ in Equation (1) is the ionization-rate coefficient of the neutral propellant atoms due to electron impact, and the product $n_n k_I$ is the corresponding ionization frequency. The ionization-rate coefficient $k_I$ is a function of the electron internal energy $\varepsilon$. More specifically, we use a tabulated version of $k_I$, built using collision cross-section data from the LXCat database [56] and processed with the Bolsig+ solver [57], in order to obtain rates for a Maxwellian electron energy distribution function as a function of $\varepsilon$.
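The tabulated rate can be evaluated at run time by simple interpolation. The sketch below assumes NumPy and uses illustrative table values, not actual Bolsig+ output; log-linear interpolation is a common choice because the rate spans many orders of magnitude.

```python
import numpy as np

# Illustrative (not actual Bolsig+ output) table of the ionization-rate
# coefficient k_I [m^3/s] versus mean electron energy eps [eV] for xenon.
eps_table = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
kI_table = np.array([1e-20, 5e-16, 4e-14, 3e-13, 8e-13])

def k_ionization(eps):
    """Log-linear interpolation of the tabulated rate, clipped at the table ends."""
    eps = np.clip(eps, eps_table[0], eps_table[-1])
    return np.exp(np.interp(eps, eps_table, np.log(kI_table)))
```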
In Equation (1), the term $\dot{n}_w$ represents the neutral particles flowing into the domain from the lateral walls due to ion recombination. The model used to describe the plasma interaction with the channel walls and the expression for $\dot{n}_w$ are presented in detail in Appendix A. Note that this term, as well as the other wall sink terms introduced in the electron momentum and energy equations (see Section 2.3), is modulated through a wall interaction coefficient, $\alpha$, which accounts for the uncertainties in the description of the plasma-wall interaction and is one of the calibration parameters used to fit the experimental reference data, as detailed later in Section 2.5.
Ions
Ions are assumed to be isothermal with constant temperature $T_i$, which is assigned among the problem data. For this reason, only two scalar equations are included for the ion dynamics in the present 1D model, i.e., the continuity equation and the momentum balance in the $z$-direction. Indicating with $u_i$ the ion velocity in the axial direction, the continuity equation writes as:

$$\frac{\partial n_i}{\partial t} + \frac{\partial}{\partial z}(u_i n_i) = n_e n_n k_I - \dot{n}_w. \qquad (2)$$
Note that the decrease in ion density outside the thruster channel due to the plume expansion (see, for instance, [29]) is not taken into account here as it is not considered important in the dynamics of the low-frequency breathing mode, which is the main objective of the present analysis.
For the momentum balance in the $z$-direction, we assume unmagnetized, collisionless ions, and the resulting equation writes as follows:

$$m_i\left[\frac{\partial}{\partial t}(n_i u_i) + \frac{\partial}{\partial z}(n_i u_i^2)\right] = -\frac{\partial p_i}{\partial z} - e\, n_i \frac{\partial \Phi}{\partial z}, \qquad (3)$$

where $p_i$ is the ion pressure, $m_i$ is the ion mass and $\Phi$ is the electric potential (so that $E_z = -\partial \Phi / \partial z$ is the component of the electric field in the $z$-direction). We further specify that the ion pressure is related to the ion temperature assuming an isotropic Maxwellian distribution for the random thermal ion velocity, thus leading to the ideal gas law:

$$p_i = n_i k_B T_i, \qquad (4)$$

where $k_B$ is the Boltzmann constant. Note that in the momentum balance (3) we have neglected the momentum transfer due to charge-exchange collisions, which is sometimes included in similar models in the literature (see, for instance, [29]). While the formulation presented above allows for a finite, albeit constant, ion temperature, the energy of the ion fluid is largely dominated by the kinetic component, and thermal effects are not considered to have a significant impact on the dynamics of breathing mode oscillations. Therefore, in the simulation results presented in Section 4, the ions are assumed to be cold by setting $T_i = 0$.
Electrons
The continuity equation for the electrons is identical to that of the ions as a consequence of the quasi-neutral assumption, which implies the point-wise and instantaneous equality $n_i = n_e$:

$$\frac{\partial n_e}{\partial t} + \frac{\partial}{\partial z}(u_e n_e) = n_e n_n k_I - \dot{n}_w, \qquad (5)$$

where $u_e$ is the electron velocity component in the axial direction. Since the characteristic length and time scales of the phenomena under investigation are much larger than the Debye length and the inverse of the plasma frequency, and considering that the wall sheaths are excluded from the domain, the quasi-neutral assumption is well justified. By subtracting the two conservation equations for ions and electrons, namely Equations (2) and (5), and multiplying by the electron charge $e$, the current conservation equation is obtained:

$$\frac{\partial}{\partial z}(e\, n_i u_i - e\, n_e u_e) = 0. \qquad (6)$$
Equation (6) essentially states that the current density, defined as

$$J = e\, n_i u_i - e\, n_e u_e, \qquad (7)$$

is a function of time only, i.e., $J = J(t)$.
As concerns the momentum balance of the electrons, we use the drift-diffusion approximation, which implies that electrons are assumed to be at steady state within the characteristic evolution time of the ions, i.e., inertia terms are negligible in the electron momentum balance. This assumption follows from the small ratio between the electron and ion masses, $m_e/m_i \ll 1$, which implies a low-Mach-number approximation for the electrons, i.e., electrons move at a much lower velocity than their thermal speed, as shown in detail in the asymptotic analysis proposed in [58]. Finally, electrons are supposed to be magnetized, and the magnetic field is assumed to be purely radial, i.e., $\mathbf{B} = B_r \mathbf{e}_r$.
As a result, the momentum balance in the axial ($z$) and azimuthal ($\theta$) directions becomes:

$$m_e n_e \nu_e u_e = -\frac{\partial p_e}{\partial z} + e\, n_e \frac{\partial \Phi}{\partial z} + e\, n_e u_{e\theta} B_r, \qquad (8)$$

$$m_e n_e \nu_e u_{e\theta} = -e\, n_e u_e B_r, \qquad (9)$$

where $u_{e\theta}$ is the electron velocity component in the azimuthal direction and the coefficient $\nu_e$ is the momentum transfer collision frequency. In the proposed model, $\nu_e$ is given as the sum of several concomitant contributions. In particular:

$$\nu_e = \nu_c + \nu_{ew} + \nu_a, \qquad (10)$$

where $\nu_{ew}$ is the electron-wall collision frequency, and $\nu_c$ is a generic collision frequency taking into account all the collisions that electrons undergo with neutral atoms, including elastic, ionization and excitation collisions, whose values result from the product of the neutral density and the relevant reaction rate $k_x$, obtained using Bolsig+ [57] and cross-section data extracted from the LXCat database [56]. The wall collision frequency $\nu_{ew}$, which accounts for the electron momentum loss to the channel lateral walls, follows from the plasma-wall interaction model detailed in Appendix A, and it is modulated with the wall interaction coefficient $\alpha$, as previously discussed. The anomalous collision frequency $\nu_a$ is finally modeled as proportional to the local electron cyclotron frequency $\omega_e = (e B_r)/m_e$ through the classical expression:

$$\nu_a = \beta\, \omega_e, \qquad (11)$$

where $\beta$ is a free non-dimensional parameter among those included in the calibration (see Section 2.5).
The momentum balance in the azimuthal direction, Equation (9), can be solved for $u_{e\theta}$ and the result substituted into the axial balance (8) so as to obtain a single equation in $u_e$:

$$n_e u_e = -\mu\, n_e \left( \frac{1}{e\, n_e} \frac{\partial p_e}{\partial z} - \frac{\partial \Phi}{\partial z} \right). \qquad (12)$$

The cross-field mobility $\mu$ in the previous equation is defined as:

$$\mu = \frac{\mu_0}{1 + \Omega^2}, \qquad (13)$$

where $\mu_0 = e/(\nu_e m_e)$ is the unmagnetized mobility and $\Omega = \omega_e/\nu_e$ is the Hall parameter.
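Equations (10), (11) and (13) combine into a simple pointwise evaluation of the cross-field mobility. The sketch below is a direct transcription under the stated closures; the values of $\alpha$ and $\beta$ are the calibration parameters discussed in Section 2.5.

```python
import numpy as np

E_CHARGE = 1.602176634e-19   # elementary charge [C]
M_E = 9.1093837015e-31       # electron mass [kg]

def cross_field_mobility(B_r, nu_c, nu_ew, beta, alpha=1.0):
    """Electron cross-field mobility mu = mu0 / (1 + Omega^2), Eq. (13).

    nu_c  : electron-neutral collision frequency (elastic + ionization + excitation)
    nu_ew : electron-wall collision frequency, modulated by alpha
    beta  : anomalous coefficient, nu_a = beta * omega_e (Eq. 11)
    """
    omega_e = E_CHARGE * B_r / M_E                  # electron cyclotron frequency
    nu_e = nu_c + alpha * nu_ew + beta * omega_e    # Eq. (10)
    mu0 = E_CHARGE / (nu_e * M_E)                   # unmagnetized mobility
    Omega = omega_e / nu_e                          # Hall parameter
    return mu0 / (1.0 + Omega**2)
```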
If we further assume that the electron velocities follow an isotropic Maxwellian distribution of temperature $T_e$, the electron pressure $p_e$ can be related to $T_e$ as follows:

$$p_e = n_e k_B T_e. \qquad (14)$$

Thus, Equation (12) becomes:

$$n_e u_e = -\mu\, n_e \left( \frac{1}{e\, n_e} \frac{\partial (n_e k_B T_e)}{\partial z} - \frac{\partial \Phi}{\partial z} \right). \qquad (15)$$

Unlike for the ions, the electron temperature is not constant and is among the unknowns of the problem. Thus, it is necessary to include the electron energy equation in the model. In particular, following Reference [29], we consider the internal energy equation for the electrons, obtained by subtracting the kinetic energy equation from the equation for the total electron energy. As a result, the contribution of the electron potential $\Phi$ disappears from the internal energy balance. Using the relation between internal energy and electron temperature for a Maxwellian distribution ($\varepsilon = \frac{3}{2} k_B T_e$), the conservation of internal energy writes as follows:

$$\frac{\partial}{\partial t}(n_e \varepsilon) + \frac{\partial}{\partial z}(n_e u_e \varepsilon) = D + H + S_{coll} + S_w. \qquad (16)$$

In this equation, the term $D$ is a diffusion term, the term $H$ represents the conductive heat flow, $S_{coll}$ represents the ionization and excitation losses, and $S_w$ (which is also modulated with the wall interaction coefficient $\alpha$) models the energy losses at the walls and is discussed in detail in Appendix A. More specifically, the collisional energy loss $S_{coll} = -n_n n_e K$ is expressed in terms of the collisional energy loss coefficient $K$, which is a function of the electron energy. In the same way as the ionization rate, the values of $K(\varepsilon)$ were extracted from the Biagi database [56] and calculated as a function of the electron temperature for a Maxwellian energy distribution using Bolsig+ [57].
Model Formulation and Numerical Discretization
The 1D model is based on the equations introduced in the previous sections, which are further manipulated as detailed here. The first manipulation concerns the ion axial momentum Equation (3). In particular, following the literature [28], the electric potential in the equation is eliminated by means of Equation (15), which gives:

$$m_i\left[\frac{\partial}{\partial t}(n_i u_i) + \frac{\partial}{\partial z}(n_i u_i^2)\right] = -\frac{\partial (p_i + p_e)}{\partial z} - \frac{e\, n_e u_e}{\mu}. \qquad (17)$$

This simplification enhances the numerical stability in the solution of the hyperbolic (inviscid) part of the ion equations, as amply demonstrated in the literature. The main reason is that the correct acoustic speed to consider in the hyperbolic part of the ion equations is the one of Equation (17), i.e., the one related to the following equivalent sound speed $c_i$ for the ions:

$$c_i = \sqrt{\frac{k_B (T_i + T_e)}{m_i}}. \qquad (18)$$

A second manipulation of the equations consists in combining the expression for the current density, Equation (7), with the electron momentum Equation (12) and integrating over the entire domain to compute the discharge current density:

$$J = \frac{\Delta V + \displaystyle\int_0^{z_f} \left[ \frac{u_i}{\mu} + \frac{1}{e\, n} \frac{\partial p_e}{\partial z} \right] dz}{\displaystyle\int_0^{z_f} \frac{dz}{e\, n\, \mu}}, \qquad (19)$$

where we assumed $n = n_i = n_e$ and where $\Delta V$ is the potential difference between the first point of the investigated quasi-neutral domain ($z = 0$) and the cathode ($z = z_f$). Specifically, if $V_d$ is the total discharge voltage applied between the anode and the cathode, $\Delta V$ takes into account the potential drop ($\phi_a \geq 0$) that occurs in front of the anode sheath:

$$\Delta V = V_d - \phi_a, \qquad (20)$$

$$\phi_a = \max\left[ 0,\; \frac{k_B T_e}{e} \ln\left( \frac{j_{e,th}}{j_{e,a}} \right) \right], \qquad (21)$$

where $j_{e,a} = J - j_{i,a}$ is the electron current density at the anode, $j_{i,a}$ is the ion current density at the anode, and $j_{e,th} = e\, n \sqrt{k_B T_e/(2\pi m_e)}$ is the current density associated with the electron thermal flux, assuming a Maxwellian distribution function at the anode sheath edge. The expression for the sheath potential drop of Equation (21) follows a classical description of a charged electrode immersed in a plasma and is limited to zero from below, since the sheath found in front of the anodes of Hall thrusters is considered to be electron repelling [29].
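Numerically, Equation (19) reduces to two quadratures over the grid. The sketch below evaluates the reconstructed form with trapezoidal integration; it assumes NumPy arrays for the profiles and is meant to illustrate step 4 of the fractional-step scheme described below, not to reproduce the exact discretization of the code.

```python
import numpy as np

E = 1.602176634e-19   # elementary charge [C]
KB = 1.380649e-23     # Boltzmann constant [J/K]

def discharge_current_density(z, n, u_i, T_e, mu, delta_V):
    """Trapezoidal evaluation of Eq. (19) on the 1D grid z.

    n, u_i, T_e, mu : plasma-density, ion-velocity, electron-temperature [K]
                      and cross-field-mobility profiles on the grid
    delta_V         : V_d minus the anode sheath drop, Eqs. (20)-(21)
    """
    p_e = n * KB * T_e                       # electron pressure, Eq. (14)
    dpe_dz = np.gradient(p_e, z)
    num = delta_V + np.trapz(u_i / mu + dpe_dz / (E * n), z)
    den = np.trapz(1.0 / (E * n * mu), z)
    return num / den
```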
Once $J$ is known (and thus the total discharge current $I$, by multiplication with the channel cross-section area), the electron velocity can be deduced from charge conservation:

$$u_e = u_i - \frac{J}{e\, n_e}. \qquad (22)$$

Finally, the electric potential can be computed, if needed, directly from the electron momentum Equation (15), as usual when working with the quasi-neutral plasma approximation.
The final system of equations is strongly coupled and non-linear. However, when semi-discretized in time following a line-method approach, i.e., considering a discrete-in-time advancement of the model, the following fractional-step method is proposed to advance the equations from a generic time level $t_k$ to $t_{k+1} = t_k + \Delta t$. Provided that all plasma quantities are known at time $t_k$, the following sub-problems are identified and solved in consecutive order:

1. The neutral continuity equation is solved, providing $n_n^{k+1}$;
2. The ion continuity (2) and momentum (17) equations are solved, providing $n_i^{k+1}$ and $u_i^{k+1}$;
3. The electron temperature $T_e^{k+1}$ is computed by solving Equation (16), considering the electron velocity $u_e$ at time level $k$;
4. The current density $J^{k+1}$ is computed using Equation (19);
5. The electron velocity $u_e^{k+1}$ is computed by solving the charge conservation Equation (22).
The first three subproblems above are governed by PDEs, and appropriate boundary conditions must thus be supplied, while boundary conditions on the potential are already included in the integral equation at point 4, as discussed above.
Subproblem 1 is a scalar transport equation with constant convection velocity $u_n > 0$. Consequently, only one boundary condition on $n_n$ is required, which is the neutral density specified at the anode, i.e., at $z = 0$. This boundary condition is easily recovered by knowing the thruster channel cross-section area ($A$) and the total injected mass flow rate ($\dot{m}$) at the operating point under investigation; considering also the ion recombination at the anode, the condition reads as:

$$n_n(0, t) = \frac{1}{u_n}\left( \frac{\dot{m}}{m_i A} - n_i u_i \Big|_{z=0} \right). \qquad (23)$$

Subproblem 2 is a hyperbolic system of two equations, thus a variable number of boundary conditions is needed at each of the two boundaries of the domain, depending on the local characteristic velocities. In the simulations carried out here, the solution is always supersonic at the outlet boundary (i.e., $z = z_f$), so that no boundary conditions are required there. Conversely, at the anode, the local characteristic velocities are such that there can be characteristic lines entering the domain. In particular, in order for the electron-repelling sheath to form at the anode, the ions are required to enter the anode sheath edge with a velocity higher than or equal to the sound speed $c_i$. Subproblem 3 is an elliptic 1D problem, which thus requires two boundary conditions on $T_e$, one per domain boundary. At the anode, the electron energy flux is specified through a classical analysis of an electron-repelling sheath in front of a biased electrode, and is set equal to $\frac{j_{e,a}}{e}(2 k_B T_e + e \phi_a)$, where the values of the plasma parameters at the sheath edge are approximated with those at the center of the first cell adjacent to the anode. At the cathode boundary, instead, a constant temperature of 2 eV is imposed.
Concerning the numerical solution of the model's PDEs listed above, all of them are discretized in space using a finite-volume formulation. In particular, the neutral dynamics (point 1) are solved by a first-order upwind discretization. Ion dynamics (point 2), which is governed by a hyperbolic system of two equations/unknowns with a flux function satisfying the homogeneity property (see [59]), is solved by the Steger-Warming [60] flux vector splitting method. The same splitting method is used to impose characteristics-based boundary conditions on the system. The electron energy equation (point 3) is solved by a second-order finite-volume formulation with a central evaluation of gradients at the cell interfaces. Finally, the integral equation for the evaluation of current density (point 4) is carried out consistently with the finite-volume formulations adopted for the PDEs. Regarding time discretization, since the splitting of the whole coupled problem into the five consecutive subproblems described above is a first-order splitting in time, all subproblems are advanced in time by a first order Euler implicit/explicit scheme. The implicit scheme is used for the electron energy equation, while the remaining equations are advanced in time explicitly.
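As an illustration of the ion step, the Steger-Warming splitting for the isothermal ion system can be written compactly: the system has eigenvalues $u_i \pm c_i$, and the split fluxes follow from separating their positive and negative parts. The sketch below is a minimal NumPy transcription for the state $(n_i, n_i u_i)$, not the full solver; boundary treatment and source terms are omitted.

```python
import numpy as np

def steger_warming_flux(n, u, c):
    """Split fluxes for the 1D isothermal ion system with state (n, n*u).

    The eigenvalues are u - c and u + c, with c the equivalent sound
    speed of Eq. (18). Returns (F_plus, F_minus), each of shape (2, ncells).
    """
    lam1, lam2 = u - c, u + c
    lam1p, lam2p = np.maximum(lam1, 0.0), np.maximum(lam2, 0.0)
    lam1m, lam2m = np.minimum(lam1, 0.0), np.minimum(lam2, 0.0)
    half_n = 0.5 * n
    F_plus = np.array([half_n * (lam1p + lam2p),
                       half_n * (lam1p * lam1 + lam2p * lam2)])
    F_minus = np.array([half_n * (lam1m + lam2m),
                        half_n * (lam1m * lam1 + lam2m * lam2)])
    return F_plus, F_minus

# Interface flux between cells i and i+1: F_plus[:, i] + F_minus[:, i+1];
# the conservative update then differences these interface fluxes.
```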
A six-core/12-thread i7-8750H CPU PC with 16 GB of RAM and a solid-state drive was used for the computations. Adopting the numerical scheme described above, the simulation time needed to compute 1 ms of plasma dynamics was about 1310 s. Note that the code is not optimized, and there are ample margins of improvement in this respect.
Model Calibration
The model described in the previous sections includes a number of calibration coefficients. These account for the unknowns in the physical description and accommodate the simplifications introduced in the model. In detail, the three elements treated as calibration coefficients are: (i) the neutral velocity, $u_n$; (ii) the wall interaction coefficient, $\alpha$; and (iii) the anomalous diffusion coefficient, $\beta$, which can, in general, be a function of the axial location, i.e., $\beta = \beta(z)$.
The uncertainty on the value of the neutral velocity $u_n$ comes from the absence of a detailed simulation of the flow injection and of the anode thermal behavior, coupled with the assumption of a constant velocity across the domain. For the wall interaction coefficient $\alpha$, a physical closure is avoided and the coefficient is retained in the model to account for the unresolved radial gradients in the plasma profiles and to counteract the uncertainties in the semi-empirical plasma-wall descriptions (see Appendix A). Moreover, the coefficient allows decreasing the plasma-wall interaction, simulating partial shielding of the channel walls from the plasma due to the local inclination of the magnetic field, e.g., in eroded and end-of-life conditions, such as the case under investigation in the present study. Finally, $\beta$ represents the impact on the electron mobility of plasma turbulence and azimuthal oscillatory modes [1]. Following experimental and numerical investigations found in the literature [28,29,32,61], $\beta$ was assumed to take two different values, inside and outside of the channel, with the outside anomalous coefficient significantly higher (by a factor of 100) than the inside one, and with a smooth transition of variable steepness between the two.
The three parameters were progressively varied in order to make the simulation results match the experimental reference data as closely as possible. The overall discharge current signal was taken as the reference calibration quantity for the model. In more detail, while varying the calibration coefficients, whenever an oscillatory limit-cycle solution was found, the main parameters that were monitored and compared with the experiments were: (i) the average, minimum and maximum values of the discharge current during the breathing mode cycle; (ii) the root mean square value of the oscillatory component of the discharge current signal; and (iii) the dominant frequency component in the power spectral density of the discharge current signal. These quantities can be computed as in the sketch below.
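A minimal sketch, assuming NumPy and a uniformly sampled signal:

```python
import numpy as np

def discharge_current_metrics(I, fs):
    """Metrics used to compare simulated and measured discharge currents.

    I  : discharge current samples [A];  fs : sampling frequency [Hz]
    Returns the mean/min/max current, the RMS of the oscillating component,
    and the dominant frequency of the signal's power spectral density.
    """
    I = np.asarray(I, dtype=float)
    I_ac = I - I.mean()                     # oscillating component
    rms_ac = np.sqrt(np.mean(I_ac**2))
    psd = np.abs(np.fft.rfft(I_ac))**2      # periodogram estimate of the PSD
    freqs = np.fft.rfftfreq(I.size, d=1.0 / fs)
    f_dom = freqs[np.argmax(psd[1:]) + 1]   # skip the (zero) DC bin
    return I.mean(), I.min(), I.max(), rms_ac, f_dom
```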
Test Item, Facility and Global Diagnostics
The device under investigation is the second development model (DM2) of SITAEL's HT5k, a 5 kW-class Hall thruster. This thruster has been designed with a configurable soft alloy magnetic circuit, so as to generate fields with topologies spanning from conventional to magnetically shielded ones [38,62]. The present work deals only with a conventional magnetic topology, called M1 (see [38] for more details), with almost radial magnetic field lines at the channel exit. Nevertheless, it is worth noting that this configuration has a chamfered ceramic channel that reproduces an end-of-life condition. The thruster is coupled to an externally mounted HC20 high-current hollow cathode [63]. The thruster has a nominal discharge power of 4.5 kW. However, the current analysis focuses on a case in which the thruster operates at nearly 2.6 kW, with a discharge voltage of 300 V and a mass flow rate of 8.4 mg/s, as this condition is characterized by stronger breathing mode oscillations. Table 1 summarizes the operation parameters of the condition under study. The test campaign was carried out in SITAEL's IV4, a 2 m diameter and 4.2 m long vacuum facility [64]. This chamber is a nonmagnetic stainless-steel vessel equipped with an oil-free pumping system capable of a pumping speed of 70,000 L/s for xenon, thanks to a combination of cold heads, turbomolecular pumps, and cryogenic pumps. The system typically achieves a base pressure below 10 −7 mbar (<7.5 × 10 −8 Torr), and is capable of maintaining the pressure below 10 −4 mbar (<7.5 × 10 −5 Torr) during thruster operation. Additionally, the discharge current measurements were made using a LEM LA25-NP current probe and a Tektronix DPO 4104 digital oscilloscope with a sampling rate of 5 MHz.
Fast Diving Triple Langmuir Probe
To gain better insight into the behavior of the breathing mode and to assess its local effects on the plasma properties, the experimental setup included a fast-diving triple Langmuir probe. The use of a triple probe, despite reducing the spatial resolution in the perpendicular plane, allows achieving good temporal resolution without the need for rapid voltage sweeping, which introduces complications in the interpretation of the results due to important capacitive effects, as is the case with the single Langmuir probes used in previous studies (see, for instance, [51,54]). This diagnostic is described in detail in previous publications by the authors [39,40,55]; therefore, only a summarized discussion is included in the following.
Triple Langmuir probes are plasma diagnostics consisting of three conductive electrodes mounted on the tip of an insulating ceramic bar. Using a proper electric arrangement of the electrodes, this type of probe allows obtaining instantaneous measurements of the plasma density $n$, the plasma-to-ground potential $V_{gp}$ and the electron temperature $T_e$ [55,65]. The three electrodes are typically denominated Bias (B), Common (C) and Float (F). A potential difference is applied between C and B, while the F electrode is left floating. For the case under analysis, a bias of 30 V is applied between electrodes B and C. This value was selected to provide good accuracy of the measurements (in particular, for the electron temperature range of interest) and, at the same time, to comply with safety requirements and avoid complicating the test setup. Triple Langmuir probe measurements consist of the simultaneous acquisition of the current flowing between B and C ($I_{BC}$), the voltage between F and C ($V_{FC}$) and the voltage between ground and C ($V_{gC}$). The three acquired values, coupled with a model of the plasma interaction with the electrodes, are sufficient to reconstruct the local values of the plasma density, potential and electron temperature.
The three probe electrodes are made of a 75% tungsten-25% rhenium alloy, with a diameter of 0.178 mm and a length of 2.6 ± 0.1 mm. The electrodes are installed inside a 1/8-inch alumina insulating tube. Although the high secondary electron emission of alumina results in a higher perturbative effect in the high-temperature plasma regions, its high mechanical strength and availability in the required format led to the selection of this ceramic for the probe body. A minimum distance of 2 mm between the electrodes was established to avoid undesired interaction effects. The Debye length of the probed plasma is always smaller than 0.1 mm, so the 2 mm separation always provides at least 20 Debye lengths between the electrode tips, which can be considered the best trade-off between minimizing electrode interaction and maximizing the probe's spatial resolution. Additionally, the selected electrode length of 2.6 mm allows tip effects to be neglected.
The probe was mounted on a mechanical arm, shown in Figure 2, that allowed for its rapid insertion and extraction from the plasma region. During its motion, the probe performed a circular trajectory with a radius of 350 mm. Data acquisition was only carried out in the final 0.27 rad arc of the probe motion in the near-plume and channel region of the thruster. Taking into account that the arm is 350 mm long, this implies that the probe tip had a maximum deviation from the channel centerline of 0.2 mm inside the channel and 9.5 mm in the plume. In order to minimize the probe exposure to the harmful high plasma density and temperature environment, the arm was kept in a parking position when non-operational. To perform the measurement, the arm was moved in and out of the acquisition region in 200 ms by a high-speed magnetic actuator. While moving, an encoder recorded the position of the probe with a resolution of 0.3 mm.
The adoption of a triple probe allows gathering instantaneous measurements of the plasma properties without the need for voltage sweeps, at the cost of assuming that the electrons in the plasma have a non-drifting Maxwellian velocity distribution function (EVDF). As explained by the authors in References [39,55], the assumptions on the EVDF are acceptable to first order, particularly in the plume, where the electron drift velocity is negligible with respect to the thermal velocity. Moreover, a set of correction parameters, introduced by Saravia et al. in Reference [39], allows mitigating the impact of the electron drift on the measurements. It is important to note that the characteristic time of the probe motion ($O(1)$ Hz) is orders of magnitude longer than that of the oscillatory phenomena under study ($O(10^4)$ Hz), making it possible to assume that the probe is steady during several plasma oscillations and allowing considerations on the local unsteady behavior of the plasma even while the probe is moving. The probe signals were conditioned using an analog electronic box (see [55] for more details) and acquired with the same oscilloscope used for the discharge current signal, i.e., a Tektronix DPO 4104 digital oscilloscope, set at a sampling rate of 5 MHz. The data storage was triggered using a photocell activated by the passage of the arm. The signal conditioning box was based on AD215 isolation amplifiers (120 kHz bandwidth, low distortion), which limited the effective maximum frequency of the system. This bandwidth is much higher than the frequency expected for the breathing mode and its first harmonics, so the information present in the gathered data is sufficient to properly resolve the phenomena under study.
Time-Resolved Bayesian Reconstruction of Plasma Parameters
The reconstruction of the plasma properties is performed using a Bayesian inference methodology, described in detail in [55], which yields probability distributions for the plasma parameters and, as a consequence, permits calculating the most likely value and the corresponding uncertainty. Bayesian data analysis is a branch of Bayesian probability theory that permits a robust fusion of data coming from different sources while consistently keeping track of uncertainties. Within this framework, unknown parameters are treated as probability distributions and experimental data sets are combined with physical models and prior knowledge to infer the values of the investigated physical parameters. The method makes use of the Bayes theorem to update the probability distributions of the parameters by considering the likelihood of experimental measurements in the light of a physical model, and relies on the sum rule to marginalize the distributions and analyze each variable separately [66,67].
The process applied in the present investigation uses a parametric solution of the Laframboise model of particle collection by cylindrical probes [68], a Gaussian likelihood accounting for the uncertainties (model, measurements, etc.) and an implementation of the nested sampling algorithm to explore the parameter domain and calculate the parameter probability distributions [66,69].
Given that the electrons have a significant drift velocity in the regions close to the channel exit, the assumption of a non-drifting Maxwellian population is not completely fulfilled. Hence, a correction is introduced considering changes in the floating potential observed by the probe electrodes caused by the presence of the electron drift velocity. The value of the correction is calculated as a function of the position, using data obtained with a null bias applied between the B and C electrodes, and acquired with different probe geometric arrangements, so as to take into account the mutual screening effects between the different electrodes, as detailed in References [39,55]. The correction factors are then introduced as prior information in the analysis of unsteady data.
To perform the analysis, the raw data time series ($V_{gC}$, $V_{FC}$ and $I_{BC}$) sampled at 5 MHz are downsampled by calculating the average and the covariance matrix of blocks of 20 samples. This process reduces the rate of the new data set to 250 kHz, as shown in Figure 3. The resulting time series are then introduced into the Bayesian inference process together with the prior distributions of the sought parameters and the correction factors. This process leads to the estimation of the joint probability distributions, which are then marginalized for each parameter, yielding the time series of $n$, $T_e$ and $V_{gp}$, as illustrated in Figure 4.
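A minimal sketch of this block-averaging step (the array layout and names are our assumptions; the actual pipeline is described in [55]):

```python
import numpy as np

def downsample_blocks(raw, block=20):
    """Average 5 MHz probe series in blocks of `block` samples (-> 250 kHz)
    and return per-block means and covariance matrices.

    raw: array of shape (n_samples, 3) holding the [V_gC, V_FC, I_BC] series.
    """
    n_blocks = raw.shape[0] // block
    trimmed = raw[: n_blocks * block].reshape(n_blocks, block, 3)
    means = trimmed.mean(axis=1)                                  # (n_blocks, 3)
    covs = np.stack([np.cov(b, rowvar=False) for b in trimmed])   # (n_blocks, 3, 3)
    return means, covs
```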
Results and Discussion
The detailed analysis of the experimental measurements is beyond the scope of this discussion; the reader is invited to read the work of Giannetti et al. [55] for additional details on the probe data. In the following, we focus on the simulation results obtained after model calibration, and on the comparison with the reference experimental data, highlighting the main physical phenomena that the model is capable of reproducing, as well as the main shortcomings.
In order to simulate the HT5k DM2 (M1) operation, the thruster's salient features were imported into the model. These included the core geometrical properties of the channel, as well as the radial magnetic field profile along the channel centerline. Moreover, the xenon operating condition detailed in Table 1 was imported as input data for the injected mass flow rate and the applied discharge voltage. Finally, the simulation spatial domain was set to extend from the anode surface up to a full channel length downstream of the exit section, i.e., $z_f = 2L_{ch}$, where cathode boundary conditions were applied. With the domain and all required inputs fixed, the model calibration procedure detailed in Section 2.5 was followed to find the best combination of the three parameters α, β and $u_n$.
The best fit of the discharge current experimental data was found for u n = 395 m/s, α = 0.115, and β equal to 0.075 inside the channel and equal to 7.5 in the plume (keeping a ratio of 100 between the two). Figure 5 depicts a direct comparison between the discharge current measured by the oscilloscope during thruster testing and the discharge current predicted by the simulation of the operating point under investigation. Moreover, Table 2 reports the quantitative comparison of the main parameters of the current signal that were monitored during calibration in order to obtain the best match between the simulations and the experiment. The superposition of all main characteristics of the current signal is remarkable, with all control parameters exhibiting a relatively small difference between simulations and the reference experiment. This is strong evidence that the proposed numerical model incorporates all the necessary physics to correctly reproduce the core mechanism of breathing mode oscillations.
The neutral speed is typically linked to the anode temperature, since the flow is considered sonic at injection. Following the accommodation coefficients discussed in [43], typical anode temperatures found in Hall thrusters translate to neutral injection speeds in the 100 to 300 m/s range; the value of $u_n$ required to match the experimental results is therefore relatively high. During the calibration process, it was found that the neutral velocity has a direct impact on the dominant frequency of the discharge current oscillations when they are present. This is consistent with an intuitive understanding of the breathing mode as an ionization instability: if the neutral speed increases, the time needed by the neutral flow to reach the high-temperature region decreases and, thus, the plasma surges become more frequent. Since the experimental data presented a relatively high dominant frequency (see Table 2), a correspondingly high neutral speed was needed. Relaxing the assumption of a constant neutral velocity over the domain, and thus solving the neutral momentum equation, could yield calibrated injection speeds more in line with expectations.
Regarding the wall interaction correction factor α, a significant reduction of the particle flow to the walls with respect to that predicted by classical models was needed to correctly reproduce the experimental results, and especially to match the total oscillation amplitude. This trend was expected, in part, since the radial gradients of the plasma properties are not resolved in the current model, and the plasma density is bound to decrease towards the walls due to the ion acceleration in the pre-sheath region. Moreover, significant uncertainty exists in the representativeness of fluid sheath models, such as the one described in Appendix A, which, according to multiple studies, tend to overestimate the wall losses [70,71]. A further justification lies in the specific configuration of the thruster model under investigation. As described in Section 3.1, the M1 version of the HT5k DM2 thruster was designed to mimic end-of-life conditions of the Hall thruster, with a chamfer at the end of the ceramic channel that diminishes the local incidence angle of the magnetic field lines on the channel walls. As demonstrated in the literature [38,72,73], this can effectively shield the channel walls from the plasma. It is not possible to reproduce this feature in a 1D model in which magnetic field lines are assumed to be radial and orthogonal to the channel walls, but it is reflected in the reduction of the wall interaction coefficient α.
Finally, it was found that the anomalous collisionality parameter β has a strong impact on the dynamic behavior of the solution. In particular, slight variations of the anomalous diffusion parameter, especially inside the channel, implied significant differences in both the average current value and the amplitude of the oscillations. A similar trend was also observed by Hara et al. in [29]. Interestingly, the steepness of the transition of the anomalous diffusion coefficient profile from the inside value to the outside one was also found to have a significant influence on the oscillation dynamics. The resulting profile of the anomalous collision frequency is presented in Figure 6. It is worth noting that this profile recovers values of the anomalous collisionality close to those obtained by other authors and calibrated on experimental data [29,46,61].
With the model now calibrated on the experimental profile of the discharge current, it is possible to investigate the dynamics of all intensive plasma properties predicted by the simulations in the channel and near-plume. Figures 7 and 8 depict the evolution of all unknowns of the system over the entire z − t plane for a simulated time of 0.2 ms. Figure 7 details the dynamics of the heavy species (ions and neutrals), while Figure 8 reports the behavior of the properties related to the electron equations.
The results give a striking picture of the effects of breathing mode oscillations on the plasma. Neutral atoms are generated at the anode boundary through the mass flow injection of propellant. They move through the channel (see the diagonal lines in Figure 7 (middle)) until they reach a region with a sufficiently high electron temperature and density to achieve effective ionization. At this point, a surge of plasma density occurs in the channel, indicated by the periodic density peaks of Figure 7 (left). This dynamic exchange between neutrals and plasma is very clear in the near-anode region, where the plasma density and neutral density oscillations are almost in phase opposition. The newly generated high-density plasma is then moved by the electric field both downstream, exiting the thruster at high speed, and upstream towards the anode, where a weak reversed electric field is established to counteract the local electron pressure and ensure the anode sheath boundary conditions are respected. This ion backflow recombines at the anode, increasing the local neutral density and further fueling the breathing mode feedback mechanism, before the cycle starts again. The fact that the oscillations are found for all properties over the whole plasma domain with the same characteristic frequency validates the interpretation of the breathing mode as a global self-sustained instability. The same picture was deduced from the experimental results of [55], where a wavelet analysis of the measurements was performed over the probed region. The measurements and the data processing technique presented in [55] allowed the reconstruction of the local oscillations of plasma density, potential and electron temperature in a region spanning from 15 mm upstream of the thruster exit plane up to 70 mm downstream.

Although the domain investigated with the simulations was limited to just one channel length downstream of the channel exit section, it is interesting to compare the local measurements of the plasma properties with the results of the simulation in order to understand the degree of accuracy achieved by the adopted formulation in the reconstruction of the local plasma behavior after calibration on the sole discharge current signal. This comparison is presented in Figure 9, where the experimental oscillations are reported only for the region where the probe was not perturbing the plasma flow ($z > L_{ch} + 2$ mm). It should be noted that the plasma potential measurements refer to the voltage difference between plasma and ground, while the simulations are relative to the cathode plasma potential; therefore, the comparison is only quantitatively valid neglecting the cathode reference potential. The order of magnitude of all relevant properties and the general trend of the profiles and oscillations over the probed domain are successfully reconstructed by the simulations: the plasma density average value and oscillation amplitude are higher closer to the anode and decrease moving away from the thruster exit plane, and the potential drop is mostly concentrated in the high magnetic field region of the thruster, close to the channel exit. Consequently, the highest value of the electron temperature is found in the same region. The numerical model also manages to simulate the oscillation of the acceleration region during the breathing cycle, with the fraction of the potential drop occurring outside of the channel varying during the oscillation period.
Two major differences are found between the measurements and the simulation results. First, the plasma density average value and oscillation amplitude seem to be underestimated by the code in the near-plume region; the numerical results place the peak of plasma density inside the channel, further upstream with respect to the measurements. Second, the predicted temperature profile, and especially the peak value, is higher than the measurements, even though the peak is in a region where the measurements may be partly affected by the perturbations induced by the probe. The adopted formulation thus appears to overestimate the electron temperature and to move the acceleration region further inside the channel with respect to the empirical evidence. Concerning the density profile, the neutral back-ingestion from the chamber and from the cathode during testing could supply additional slow particles in the near-plume region. In general, the discrepancies observed between numerical results and experimental data are also tied to the core assumptions of the simplified model. First, the code provides values averaged over z-sections of the channel, while the measurements refer to the local values on the channel centerline. Moreover, the plume expansion is not modeled in detail in the simulations, leading to potential local inaccuracies in the representation of the plasma dynamics downstream of the channel exit. Considering the strong sensitivity of the local plasma parameters to the calibration coefficients, both discrepancies are also linked with the specific profile of the anomalous collisionality that we have selected and with the values of $u_n$ and α needed to match the current profile, but they are also a consequence (at least partially) of the application of a 1D axial formulation to an end-of-life configuration, as previously discussed.
It is worth mentioning that, in general, the plasma turbulence effect on electron transport is a function of the local plasma properties (see, for example, Reference [74]) that vary during the breathing mode cycle. Therefore, β could also be a function of time, i.e., β = β(z, t). While we have decided to fix the axial profile of the anomalous collisionality to limit the degrees of freedom in the model calibration, experimental investigations in the literature [46] have highlighted how β varies significantly during the breathing mode cycle.
The adoption of a time-varying profile for the anomalous diffusion coefficient could allow for a closer representation of the experimental results within the boundaries of the presented formulation and will be the subject of future investigations.
Conclusions
In this paper, we have presented a synergic experimental/numerical investigation of the breathing mode in a 5 kW-class Hall thruster. In particular, we have shown the potential of using an informed 1D fully-fluid model of the plasma flow as a complementary tool to the experiments. On the basis of available measurements, the current signal in particular, the model aims at estimating evolving plasma properties that are difficult to measure. Thanks to the availability of a detailed experimental database, collected using a diagnostic technique based on a fast-diving triple Langmuir probe, we have provided here an assessment of the prediction capabilities of the calibrated model.
More specifically, the model was set up to simulate one operating point of the HT5k and calibrated to reproduce the measured discharge current profile. Three calibration parameters implemented in the numerical description made it possible to properly tune the model to the experimental observations: (i) the neutral injection velocity, (ii) the wall interaction coefficient, and (iii) the anomalous diffusion profile. Following calibration, the discharge current profile predicted by the simulation closely matched the experimental results, exhibiting relatively small differences on all control metrics. This promising result confirms the capability of the adopted numerical description to reproduce the core mechanism of breathing mode oscillations.
The time evolution of the main plasma properties predicted by the calibrated simulations over the entire domain was presented and discussed, highlighting the dynamics of the longitudinal oscillation and its underlying global and self-sustained nature.
Finally, the measurements obtained for the plasma density, potential and electron temperature oscillations in the near-plume were compared to the numerical results. The model successfully managed to recreate the order of magnitude and overall trend of the properties' profiles. Nevertheless, the simulations underestimated the values of density, overestimated the electron temperature, and predicted the acceleration region to be further upstream with respect to the experimental results.
Overall, the adopted approach of informing a reduced-order numerical description of the plasma dynamics with high-frequency measurements of the discharge current has proven to be a promising path to gather additional insight into the dynamics of oscillatory modes in Hall thrusters. The possibility of comparing the simulation results with measurements of the local plasma properties represents a unique framework for the identification of the main physical processes that are needed to model the discharge of Hall thrusters. Future investigations will explore the application of the described approach to other thruster configurations and operating conditions, focusing on the adoption of a time-varying profile of the anomalous diffusion and on the inclusion of the effects of the magnetic field topology in the model. Since experimental measurements of the discharge current are often readily available from the functional characterization of a Hall thruster, the proposed combined numerical-experimental approach may represent an important step toward the understanding of the characteristic physical processes of Hall thruster discharges.

Funding:

The work described in this paper has been funded by the European Space Agency in the framework of the contract 4000113279/14/NL/KML "Low-Erosion Long-Life Hall-Effect Thruster" and by the European Union under the H2020 Programme ASPIRE-GA 101004366. The views expressed herein can in no way be taken to reflect the official opinion of the European Space Agency.
Data Availability Statement:
The data presented in this study are available in Reference [55].
Acknowledgments:
The authors wish to express their gratitude to Ugo Cesari, Nicola Giusti, Luca Pieri, Stefano Caneschi and Carlo Tellini for their valuable assistance in preparing and performing the experimental campaign. Fruitful discussion with Fabrizio Paganucci and Francesco Califano are gratefully acknowledged.
Conflicts of Interest:
The authors declare no conflict of interest.
Appendix A. Plasma-Wall Interactions
The expressions for the ion recombination at the walls, the electron-wall collision frequency, and the electron energy losses at the walls are determined based on the formulation of Hobbs and Wesson for a 1D sheath with cold ions and in the presence of secondary electron emission [75]. Those quantities are then modulated using the wall interaction coefficient α, which belongs to the set of parameters used for the calibration (see Section 2.5); these terms are included in the model for $z < L_{ch}$.
The particle balance at the dielectric walls of the discharge channel can be expressed as

$$\Gamma_{iw} = (1 - \sigma)\,\Gamma_{ew},$$

where $\Gamma_{iw}$ and $\Gamma_{ew}$ are, respectively, the ion and primary electron fluxes to the dielectric walls, and $\sigma$ is the effective secondary electron emission yield of the wall material considered (BN-SiO$_2$ in this case). Different empirical formulas for the secondary emission yield at low plasma temperatures (up to 50 eV) can be found in the literature [2,21,30,31]. In this work, we adopted the expression recommended in Reference [2]. Also accounting for the possibility of sheath space-charge saturation (which for xenon occurs when $\sigma = 0.983$), this expression writes as

$$\sigma = \min\!\left(1.36 \cdot 0.123\, T_{\mathrm{eV}}^{0.528},\; 0.983\right).$$
Neglecting the minor modification to the Bohm velocity in the presence of secondary electron emission, discussed in Reference [75], the particle flux to the walls in a control volume delimited by two channel sections at a distance $dz$ is written in terms of the inner and outer channel radii, $R_1$ and $R_2$, and of the wall interaction calibration coefficient α introduced above. The frequency of ions impacting the walls ($\nu_{iw}$) can then be defined as the flux of particles to the wall per unit density and per unit volume. Consequently, a source/sink term representing the particle-wall interaction enters the continuity equations; note that the same expression applies to the electron population, owing to the zero net current condition at the walls and the quasineutrality assumption. Using Equation (A1), an equivalent electron collision frequency accounting for the momentum losses of the electron fluid to the walls can be defined. Concerning the electron energy losses, assuming a Maxwellian electron velocity distribution function at the sheath edge, the mean energy of primary electrons lost at the walls is $2 k_B T_e + e\phi_w$, where $\phi_w$ is the sheath potential drop. Therefore, assuming that the secondary electrons thermalize with the main electron population, and neglecting the energy with which they are emitted from the wall, a corresponding electron power loss term enters the electron energy equation (see the reconstruction below). | 13,844.6 | 2021-05-26T00:00:00.000 | [
"Physics",
"Engineering"
] |
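The display equations of Appendix A were lost in extraction. For reference, the standard Hobbs-Wesson-based expressions matching the definitions above read as follows; this is a reconstruction under the stated assumptions, not necessarily the exact notation of the original appendix:

$$\Gamma_w = \alpha\, n\, u_B, \qquad \nu_{iw} = \frac{2\,\alpha\, u_B\,(R_1 + R_2)}{R_2^2 - R_1^2}, \qquad S_w = -\,\nu_{iw}\, n, \qquad \nu_{ew} = \frac{\nu_{iw}}{1 - \sigma},$$

$$\phi_w = \frac{k_B T_e}{e}\,\ln\!\left[(1 - \sigma)\sqrt{\frac{m_i}{2\pi m_e}}\right], \qquad P_w = \nu_{ew}\, n\,\left(2 k_B T_e + e\phi_w\right),$$

where $u_B$ is the Bohm velocity, $m_i$ and $m_e$ are the ion and electron masses, and $P_w$ is the electron power lost to the walls per unit volume.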
META-ANALYSIS OF THE EFFECT OF THE USE OF JIGSAW TYPE COOPERATIVE MODEL ON STUDENT LEARNING OUTCOMES IN LEARNING NATURAL SCIENCE IN JUNIOR HIGH SCHOOL AND PHYSICS IN SENIOR HIGH SCHOOL
This meta-analysis study aims to analyze the effect size of the use of the jigsaw cooperative model based on education level, material units, and aspects of learning outcomes, including knowledge, attitudes, and skills. The research method used was a meta-analysis of 25 articles published in various national and international journals and proceedings. The results of this study are as follows: (1) based on education level, the use of the jigsaw type cooperative model improves learning outcomes more at the senior high school level; (2) based on material units, the model is most effective for the optics and thermodynamics units at the senior high school level and for the thermodynamics and vibration & waves units at the junior high school level; (3) the use of the jigsaw type cooperative model improves learning outcomes in the attitude, knowledge, and skill aspects.
I. INTRODUCTION
Among the indicators of the quality of human resources is the level of education. Education is a necessary field of development in every country. Through education, students are given provisions that provide experience for improving their lives so that they can develop in step with the progress of the times. Education is an important aspect of life that can produce quality human resources as a provision for the future [1]. The quality of education reflects the quality of human resources. In line with the concept of educational modernization, the primary goal of higher schooling is the formation of a graduate's creative personality with a set of competencies manifested in the ability to solve problems and challenges in various spheres of human activity; a graduate who is capable of self-development, self-education, and professional development, who possesses social and professional mobility, and who is capable of innovative activities [2].
The implementation of education in Indonesia requires the participation of all parties, especially the government. The government implements various measures to advance the quality of education in Indonesia: it distributes scholarship assistance to teachers and students, improves the quality of teachers, builds educational facilities and infrastructure, and makes curriculum changes. The curriculum used now is a new curriculum resulting from improvements to the old curriculum.
Natural Sciences and Physics are subjects in the new curriculum. Science is a discipline that focuses on students' ability to be active in the learning process through exploration. In particular, current education seeks to help students learn to organize and construct opinions, formulate problems, develop hypotheses, and find evidence for themselves [3]. Physics is a field of science that produces scientific components in the form of patterns of thinking and behavior, developed through scientific steps such as observation, problem formulation, hypothesis formulation, hypothesis testing, experimentation, and drawing conclusions, up to the discovery of theories and concepts through learning [4].
Through a review of the 25 articles obtained, it appears that the actual conditions in the field are not in line with the expected ideal conditions. Physics learning in schools is still carried out in one direction, so learning that should be student-centered has not been implemented optimally, and students have not been actively involved in solving problems related to the physics material they study. As a result, students have not been able to connect the material being studied with phenomena that occur in nature [5]. The formation of discussion groups is often ineffective: students may think that seeking help can be interpreted as evidence of low ability [6], and there is a lack of positive collaboration between students in the learning process; students with high knowledge play too small a role in guiding peers with medium or low abilities, causing some students to appear more dominant than others and creating a gap between students [7]. Interaction between students is also rare, so cooperation between students is not well developed and the learning process becomes boring and less fun [8].
The second real condition is that learning physics is considered difficult by most students [9]. Most students learn physics by memorizing formulas without understanding their physical meaning, so physics is treated as a lesson consisting of a collection of formulas that must be memorized [10]. Students do not like studying physics or increasing the time spent on physics learning at school and at home; they feel dizzy if physics lessons at school are multiplied. Students never read physics books at home except when there is a test or a semester exam, and students who have no interest in spending time studying physics do not do homework at home but do it at school [11]. Students are still not accustomed to physics material that is abstract and unfamiliar, which makes physics learning less meaningful, so students become bored and their learning outcomes are low [12]. Physics learning is still textbook-driven, with a lot of memorization and a lot of studying formulas [13]. In the physics learning process, it was found that the knowledge dimension had not been implemented properly [14].
The third real condition is that teachers were less than optimal in applying the selected models and applied mostly direct learning activities in the classroom. Teachers have not maximally applied learning models; as a result, students are passive and do not develop. Another obstacle found in the learning process is teachers' lack of understanding of learning models.
From the problems that have been described, it is known that conditions in the field differ from the ideal conditions that should prevail. To solve these problems, efforts are needed to improve the physics learning process, namely the selection of appropriate learning models. The learning model must be appropriate to the basic competencies and material characteristics of the learning [15]. Among the various models is the cooperative model, which has several types, one of which is the jigsaw cooperative type.
II. METHOD
This research is a meta-analysis. A meta-analysis is defined as the study of several research results on similar problems [16]. Meta-analysis is quantitative research because it uses numbers and statistics to aggregate reports from many data sources. We conducted searches of electronic databases, manually searched important publications, and also searched relevant cited references of the identified research articles. The primary data sources included journal articles, conference papers, and doctoral dissertations indexed in the Web of Knowledge Social Sciences Citation Index or the ERIC database [17]. The procedure for this meta-analysis is as follows: (1) determine the selection criteria and understand the topics to be summarized; (2) set the strategy for browsing articles and decide who will browse the library; (3) define clear categories for assessing the quality of research articles, covering aspects of design, implementation, and analysis; (4) group and compile the research units; (5) plan the use of appropriate statistical models for combining the results; (6) calculate the effect sizes; (7) identify the heterogeneity of effect sizes; (8) summarize and interpret the research results [18].
The sampling technique was purposive sampling, and the sample consists of 25 articles selected according to the following criteria: (1) the articles were written by students or researchers; (2) the articles use an experimental research method; (3) the articles come from various journals and proceedings; (4) the articles contain the statistical data needed to compute effect sizes and report quantitative research; (5) the articles address the jigsaw cooperative model for junior high school science material or senior high school physics; (6) the education levels sampled in the articles are the junior high school and senior high school levels.
The analysis technique used a quantitative approach based on the data already reported in the articles. Effect sizes were computed using four equations: (1) the equation for a single sample group, when the pretest-posttest means and standard deviations are known; (2) the equation for the difference between two related sample groups, when only the posttest means and standard deviations of the two groups are known; (3) the equation for two related sample groups, when the pretest and posttest means and standard deviations of both groups are known; (4) the equation for the difference between two related sample groups, when the t-count value and the number of students in both the control and experimental groups are known [19]. The effect sizes were then classified into low, moderate, high, and very high criteria.
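The article does not reproduce the equations themselves; for orientation, the standard forms corresponding to cases (1), (2), and (4) are (a sketch of the usual Cohen's-d-style estimators, not necessarily the exact notation of [19]):

$$d_{(1)} = \frac{\bar{X}_{post} - \bar{X}_{pre}}{SD_{pre}}, \qquad d_{(2)} = \frac{\bar{X}_{E} - \bar{X}_{C}}{SD_{pooled}}, \qquad d_{(4)} = t\,\sqrt{\frac{1}{n_E} + \frac{1}{n_C}},$$

with $SD_{pooled} = \sqrt{\left(SD_E^2 + SD_C^2\right)/2}$, where the subscripts $E$ and $C$ denote the experimental and control groups.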
III. RESULTS AND DISCUSSION
The data analyzed in this study amounted to 25 articles, of which 24 were national articles and 1 was an international article. The articles were grouped by education level (senior and junior high school), material units, and the learning outcome aspects of knowledge, attitudes, and skills. The results of the research follow.
1. Based on education level: both the senior high school and junior high school levels show effect sizes for the jigsaw cooperative model in the high criteria, so the model has a significant effect on student learning outcomes at both levels; when compared, however, the effect is larger at the senior high school level. 2. Based on material units: at the senior high school level, the optics and thermodynamics units yield effect sizes in the very high criteria, while the mechanics and electricity & magnetism units yield effect sizes in the medium criteria. At the junior high school level, the thermodynamics and vibration & waves units yield effect sizes in the high criteria, the mechanics unit falls in the medium criteria, and the electricity & magnetism unit falls in the low criteria. Thus the model has a significant effect on learning outcomes for the thermodynamics unit at both levels, for the optics unit at the senior high school level, and for the vibration & waves unit at the junior high school level.
The first result concerns education level: the effect size calculated over 13 senior high school articles is 0.76 (high criteria) and over 12 junior high school articles is 0.75 (high criteria). This shows that the model can improve student learning outcomes at both levels, though slightly more at the senior high school level. A child's social thinking is influenced by cognitive maturity; on this basis, the higher the education level of a student, the more effective the learning, because cognitive maturity is formed by a learning process that is repeated and carried out continuously. In general, senior high school students have higher emotional maturity than junior high school students.
The second result concerns the material units. After analyzing the 5 material units in the articles, the highest effect size at the senior high school level was found for the optics unit, namely 1.38 (very high criteria), followed by the thermodynamics unit with an effect size of 1.23 (very high criteria); the mechanics and electricity & magnetism units fall in the medium criteria. At the junior high school level, the highest effect size was found for the vibration & waves unit, 0.86 (high criteria), followed by the thermodynamics unit, 0.79 (high criteria); then the mechanics unit with an effect size of 0.62 (moderate criteria) and electricity & magnetism with 0.22 (low criteria). This is in line with the observation in [20] that an important difficulty in physics is electricity and magnetism: research has shown that students have more difficulty with magnetism even though they encounter more examples of magnetism than of electricity in daily life, that teaching magnetism involves complex mathematical operations and abstract verbal symbols, and that students struggle with problems about changes in lamp brightness with changes in resistance, current, series and parallel combinations of resistances, Ohm's law, and circuit theory. It turns out that in the learning process the jigsaw cooperative model works better for the optics, vibration & waves, and thermodynamics units than for the mechanics and electricity & magnetism units.

The third result concerns the learning outcome aspects. At the senior high school level, the effect size for the attitude aspect averaged 0.96 (very high criteria), and the knowledge and skill aspects averaged 0.75 (high criteria). This shows that the jigsaw cooperative model affects the knowledge, attitude, and skill aspects, though most strongly the attitude aspect. According to the analyzed articles that report attitude outcomes, students taught with the jigsaw model learn actively; students with a high scientific attitude dare to express their opinions and are optimistic about every action taken. At the junior high school level, the effect size for the knowledge aspect averaged 0.73 (moderate criteria). The analyzed articles state that the jigsaw type cooperative model is well suited here: using this model, a student explores the material being studied and is also required to act as a tutor for the other members of the group. However, at the junior high school level there were no research articles on the attitude and skill aspects, which suggests that teachers focus only on the knowledge aspect and do not attend to learning outcomes for attitudes and skills. As Karacop [21] points out, the jigsaw method, for both the theoretical and practical aspects of science courses, benefits students in the realms of knowledge, skills, and effectiveness, as well as in the development of students' science process skills.
IV. CONCLUSION
According to the analysis and description of the research results, the following conclusions are obtained: (1) the effect size is 0.76 (high criteria) at the senior high school level and 0.75 (high criteria) at the junior high school level; the use of the jigsaw type cooperative model to improve learning outcomes is thus more influential at the senior high school level than at the junior high school level; (2) at the senior high school level, the effect sizes by material unit are 1.38 for optics (very high criteria), 1.23 for thermodynamics (very high criteria), 0.51 for mechanics (moderate criteria), and 0.57 for electricity & magnetism (moderate criteria); at the junior high school level, they are 0.9 for thermodynamics (high criteria), 0.86 for vibration & waves (high criteria), 0.62 for mechanics (moderate criteria), and 0.22 for electricity & magnetism (low criteria); the use of the jigsaw cooperative model therefore has the greatest effect on the optics and thermodynamics units at the senior high school level and on the thermodynamics and vibration & waves units at the junior high school level; (3) at the senior high school level, the effect size is 0.96 for the attitude aspect (very high criteria) and 0.75 for the knowledge and skill aspects (high criteria); at the junior high school level, it is 0.73 for the knowledge aspect (medium criteria); the use of the jigsaw cooperative model improves learning outcomes in the attitude, knowledge, and skill aspects. | 4,016.6 | 2021-11-29T00:00:00.000 | [
"Physics",
"Education"
] |
A Comprehensive Survey of Cognitive Graphs: Techniques, Applications, Challenges
The realization of the third-generation artificial intelligence (AI) requires the evolution from perceptual intelligence to cognitive intelligence, where knowledge graphs may no longer meet practical needs. Based on the dual process theory, cognitive graphs are established and developed by coordinating an implicit extraction module and an explicit reasoning module and by integrating knowledge graphs, cognitive reasoning and logical expressions; they have achieved successes in multi-hop question answering. Cognitive graphs are expected to be widely used in advanced AI applications such as large-scale knowledge representation and intelligent responses, dramatically promoting the development of AI. This review discusses cognitive graphs systematically and elaborately, including basic concepts, generations, theories and technologies. Moreover, we try to predict the development of cognitive intelligence in the short-term future and to inspire further research and studies.
The previous few decades have witnessed the dramatic development of artificial intelligence (AI). Broadly speaking, there have been three major stages during the evolution of AI [1], as can be seen in Figure 1. The first stage is computational intelligence, which owes to the fast computing and mass storage capacities of computers. With the maturity of technologies such as grid computing, distributed storage and quantum storage, the computing power of machines has far exceeded that of human beings and laid a solid foundation for the next stages. The second stage is perceptual intelligence, which is the current stage of AI. [...] of the deep learning system and knowledge graphs. [...] This retrieval-based method works for single-paragraph Q&A; however, for multi-hop questions it suffers from "short-sighted retrieval", meaning that the relevance between the text of the last few hops and the question is very low, so those paragraphs are difficult to retrieve directly, resulting in poor performance. Beyond retrieval, two further challenges lie ahead: explainability and scalability. Grounded on the dual process theory, an ideal cognitive graph can contribute to all three challenges significantly. It is an iterative framework that builds the cognitive graph step by step. For the example "Who is the director of the 2003 film which has scenes in it filmed at the Quality Cafe in Los Angeles?", the overall procedure of the cognitive graph is shown in Figure 5. Models based on System 1 extract question-related entities from paragraphs to build the cognitive graph and generate semantic vectors for each node. Then the relevant paragraphs about newly extracted entities are retrieved, or simply indexed, from Wikipedia. Meanwhile, models based on System 2 carry out reasoning over the semantic vectors and compute clues to guide the extraction of System 1. After several iterations, System 2 selects a node as the predicted answer based on the reasoning results. Figure 6 shows the detailed procedure of the cognitive graph. System 1 and System 2 can be instantiated by various types of models. Since the cognitive graph is initialized with entities extracted from the question, it is crucial to find a powerful module that extracts useful entities and generates semantic vectors for each node. Recently, BERT [31] has been proved to be a successful language representation model; therefore, BERT is chosen to serve as System 1. The input of System 1 consists of three parts: the question, the "clue" found in the previous paragraph, and the Wikipedia document about an entity x (for example, x is the movie "Old School"). The goal of System 1 is to extract the "next-hop entity names" and "answer candidates" in the document. For example, as shown in Figure 5, from the "Quality Cafe" paragraph, "Old School" and "Gone in 60 Seconds" are extracted as next-hop entity names. These extracted entities and answer candidates are added to the cognitive graph as nodes. In addition, System 1 calculates the semantic vector of the current entity x, which is used as the initial value for relational reasoning in System 2. Owing to the inductive bias of graph structure, GNNs have presented remarkable performances on [...]
As shown in Figure 8, for an intractable semantic retrieval question without any entity mentioned, the cognitive graph finally obtains the answer "Marijus Adomaitis". [...] It is well known that human cognition successfully integrates the connectionist (brain-inspired) and symbolic (mind-inspired) paradigms, where language is a compelling case in point. To build an intelligent cognitive graph, it is urgent and indispensable to develop a framework that can routinely acquire, represent, and manipulate knowledge, while simultaneously using that knowledge in the service of reasoning logically like humans.
Thus, as shown in Figure 9, three core technical supports are needed as prerequisites for building cognitive graphs: 1) large-scale knowledge graphs to support intuitive knowledge expansion; 2) reasoning mechanisms to conduct complex reasoning and make analytic decisions; 3) large-scale pre-trained natural language generation models to explain the inference process and express the reasoning results in a human-friendly way.
Figure 9. The Cornerstone of Cognitive Graph. On the way to building cognitive graphs, three core technical supports are needed as prerequisites: large-scale knowledge graphs to support intuitive knowledge expansion; reasoning mechanisms to conduct complex reasoning and make analytic decisions; large-scale pre-trained natural language generation models to explain the inference process and express the reasoning results in a human-friendly way.
The knowledge graph is regarded as an important cornerstone of the transformation from perceptual intelligence to cognitive intelligence. [...] The techniques applied in knowledge graph building mainly include knowledge graph construction and knowledge graph representation.
The overall framework of knowledge graph construction is shown in Figure 10. As can be seen, the framework mainly consists of four parts: data acquisition, information acquisition, knowledge fusion and knowledge processing. Figure 10. Overview of the knowledge graph construction framework. The whole framework mainly consists of four parts: data acquisition, information acquisition, knowledge fusion, and knowledge processing [36].
Data acquisition is the cornerstone of the knowledge graph; its goal is to extract structured data from unstructured or semi-structured data. [...] Knowledge processing ensures the quality of the knowledge base after quality assessment and mainly consists of ontology construction and quality evaluation. The task of quality assessment of the knowledge base is usually carried out together with the entity alignment task. Its significance is that the credibility of knowledge can be quantified, and the quality of knowledge can be effectively guaranteed by retaining knowledge with higher reliability and abandoning that with lower confidence.
The knowledge graph representation is also called knowledge graph embedding. The key idea is to embed entities and relationships into a low-dimensional continuous vector space. [...]
The method of reasoning based on distributed representations maps entities and relationships to low-dimensional space vectors and uses semantic expressions for reasoning. The advantage is that it fully utilizes the structural information in the knowledge graph, and the method is easy to extend to large-scale knowledge graphs.
The disadvantage is that this kind of approach does not consider prior knowledge. [...] How to fuse multi-source information and multiple methods to further improve reasoning performance will also become a major research direction in the future; among these questions, the fusion mode, that is, how to fuse, is a major difficulty. To give a machine cognitive intelligence, it is not only required that the machine understand data, process data, and make decisions through cognitive reasoning; more importantly, the machine is supposed to have the ability to express the reasoning results in a way that humans can understand. Therefore, how to make the machine generate natural language in line with human understanding is a crucial aspect of cognitive intelligence.
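As a concrete illustration of this kind of distributed-representation reasoning, the sketch below scores candidate triples with the classic TransE criterion ($h + r \approx t$); the toy entities, the dimension, and the random embeddings are illustrative stand-ins, not material from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy embedding tables; in practice these are learned from the knowledge graph.
entities = {name: rng.normal(size=dim) for name in ["Paris", "France", "Berlin"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(head, relation, tail):
    """Lower is better: TransE assumes head + relation is close to tail."""
    return np.linalg.norm(entities[head] + relations[relation] - entities[tail])

# Rank candidate tails for the query (Paris, capital_of, ?).
candidates = ["France", "Berlin"]
ranked = sorted(candidates, key=lambda t: transe_score("Paris", "capital_of", t))
print(ranked)  # with trained embeddings, "France" would rank first
```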
Along the way to cognitive intelligence, NLG plays a crucial role: it is responsible for converting a cognitive system's action into a human-understandable response. The response is therefore supposed to be fluent and adequate, and NLG has a significant influence on users' experience.
In this section, we comprehensively review the concepts, key technologies, problems and challenges, and future research directions of NLG. Markov chains are among the earliest algorithms for language generation: they predict the next word in a sentence from the current word. For example, suppose the model is trained on the following two sentences: "I drink coffee in the morning" and "I eat sandwiches with tea".
The probability of "coffee" after "drink" is 100%, and the probability of "eat" and "drink" In order to correctly predict the next word "Spanish", LSTM will pay more attention to 669 "span" in the previous sentence and use cell to memorize it. As the sequence is processed, 670 the cell stores the acquired information, which is used to predict the next word. When a 671 period is encountered, the forgetting gate will realize that the context of the sentence can be helpful in understanding forward words. Early language models could be trained 737 from left to right or right-to-left, but the two could not be conducted at the same time.
Masked language model: humans understand language with context in mind. BERT cleverly utilized the idea of filling in the blanks and put forward the masked language model to achieve a bidirectional transformer. [...] is insufficient; we need to extract confidence values to rank the extracted rules. In this way, [...]
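For a hands-on illustration of masked-language-model prediction, one can use the Hugging Face transformers pipeline (an external tool, not one discussed in the survey itself):

```python
from transformers import pipeline

# BERT predicts the masked token from both left and right context.
fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("I grew up in Spain, so I speak fluent [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
# The highest-scoring completions typically include "spanish".
```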
It has been argued eloquently that to build a semantic, explainable and ultimately trustworthy AI system, one needs to pay attention to many aspects, such as integrated [...]. In essence, the crucial innovation of cognitive graphs is to reduce the information loss during the construction of the graphs, to transfer the pressure of information processing to retrieval and natural language understanding algorithms, and to retain the graph structures for explainable relational reasoning.
In the future, it is necessary to focus on how to capture structural information and learn rule knowledge at the same time, so as to improve the performance of cognitive graph reasoning. In the big data era, large-scale, diverse-form, scattered, dynamically changing and low-quality data bring new challenges to AI technologies. It is necessary not only to learn distributed representations of data from the perspective of perception but also to interpret the semantics of data from the perspective of cognition.
The research and development of cognitive graphs that integrate core technologies such as common-sense knowledge graphs, cognitive reasoning and logical expression will become the key to the breakthrough of the next generation of AI technologies. Given the fast pace at which developments occur both in industry and academia, we feel it is helpful to point to potential future directions. [...] reasoning about what will happen around you. Obviously, you cannot reliably make plans.
As mentioned previously, deep learning is essentially based on a "big data for small tasks" paradigm, which demands massive amounts of data for a single narrow task. Yixin Zhu [140] proposed a "small data for big tasks" paradigm, which is capable of [...] | 2,675.4 | 2021-08-06T00:00:00.000 | [
"Computer Science"
] |
Analysis of an Impulsive One-Predator and Two-Prey System with Stage-Structure and Generalized Functional Response
An impulsive one-predator and two-prey system with stage structure and generalized functional response is proposed and analyzed. By reasonable assumptions and theoretical analysis, we obtain conditions for the existence and global attractivity of the predator-extinction periodic solution. Sufficient conditions for the permanence of this system are established via the impulsive differential comparison theorem. Furthermore, abundant numerical simulation results are given by choosing two different concrete functional responses, which indicate that impulsive effects, stage structure, and functional responses are vital to the dynamical properties of this system. Finally, the biological meanings of the main results and some control strategies are given.
Introduction and Model Formulation
In the real world, the properties of one-predator and one-prey systems have been studied widely and many valuable results have been obtained. If there are two prey species for a predator, such systems cannot accurately reflect the real behaviors of individuals, so scholars have proposed three-species predator-prey systems. The relationship between species in a three-species system may take many forms, such as one prey and two predators [1], a food chain [2,3], or two preys and one predator [4,5]. On the other hand, for predator-prey models, a crucial element in describing the relationship between predator and prey is the classic definition of a predator's functional response. Recently, the dynamics of predator-prey systems with different kinds of functional responses have been studied in the literature, such as Holling type [6], Crowley-Martin type [7][8][9], Beddington-DeAngelis type [10,11], Watt type [12,13], and Ivlev type [14]. For example, Gakkhar and Naji [15] investigated the dynamical behaviors of a three-species system with nonlinear functional response (system (1)), where $x_1(t)$ and $x_2(t)$ represent the two prey densities, respectively, and $y(t)$ represents the density of the predator depending on the two preys. However, as Pei et al. [16] pointed out, system (1) could not provide an effective approach because it contained no impulsive spraying of pesticides or harvesting of pests at fixed moments. We know that pests may bring disastrous effects to their ecosystem when their numbers reach a certain level. To prevent large economic losses, chemical pesticides are often used in the process of pest management. As a matter of fact, the control of pests often reduces their numbers almost instantaneously. In the modeling process, these perturbations are often assumed to take the form of impulses. Building on traditional models, impulsive differential equations have been proposed and extensively used in applied fields, especially in population dynamics; see [17][18][19]. The theory of impulsive differential equations is now recognized as richer than that of the corresponding differential equations without impulses, and it plays a key role in the development of biomathematics; see the monographs [20,21] and the references cited therein.
On the other hand, stage structure for the predator was also not considered in system (1). In the real world, many species go through two or more life stages as they proceed from birth to death. For many animals, their young are raised by their parents or depend on the nutrition from the eggs they stay in. The young are too weak to reproduce or capture prey; hence their competition with other individuals of the community can be ignored. Therefore, it is reasonable to introduce stage structure into competitive or predator-prey models. Many researchers have incorporated it into biological models, where stage structure is modeled by using a time delay [22][23][24]. The authors of [5] pointed out that when the system contains a time delay, it has more interesting behaviors. Their results showed that a time delay could cause a stable equilibrium to become unstable and that Hopf bifurcation could occur as the time delay crossed certain critical values. These results show that stage structure plays a vital role in predator-prey models and that stage-structured systems exhibit complicated properties. Moreover, Xu [25] showed that an important factor in predator-prey modeling is the choice of functional response. Models with generalized functional response exhibit many universal properties and can be applied to many fields because of their flexibility. Shao and Li [26] considered a predator-prey system with generalized functional response; their results indicated that a generalized functional response makes the dynamical behaviors of the system very complex.
Based on these backgrounds, in this paper, extending system (1) with stage structure, generalized functional response, and impulsive spraying of pesticides, we consider a one-predator and two-prey system (system (2)), where $x_1(t)$ and $x_2(t)$ represent the densities of the two different preys, respectively (we assume there is no competition between the two preys), and $y_1(t)$ and $y_2(t)$ denote the densities of the immature and mature predator, respectively; $r_i$ is the natural growth rate of $x_i(t)$ ($i = 1, 2$). [...] By use of impulsive differential equation theory and some analysis techniques, we aim to investigate the existence and global attractivity of the predator-extinction periodic solution and the permanence of system (2). Further, by numerical analysis, we try to find out the effects of impulses and stage structure on this system.
Since $y_1(t)$ does not appear in the first, second, and fourth equations of system (2), we can simplify (2) and restrict our attention to a reduced system (3) with the corresponding initial conditions. From a biological point of view, without loss of generality, in this paper we assume that $\phi_i(u)$ ($i = 1, 2$) is strictly increasing and differentiable with $\phi_i(0) = 0$, satisfying $0 < \phi_i(u)/u < L_i$ (a constant) for all $u > 0$. Further, we only consider (3) in the biologically meaningful region. The rest of this paper is organized as follows. In Section 2, we give some notations, definitions, and lemmas. By using these lemmas and the impulsive comparison theorem, we discuss the existence of the predator-extinction solution and the permanence of system (3) in Sections 3 and 4, respectively. In Section 5, numerical simulations are given to show the complicated dynamical behaviors of (3). Finally, we end this paper with a brief discussion in Section 6.
Proof. Since (H3) holds and $\varphi_1(x)$ and $\varphi_2(x)$ are differentiable for all $x > 0$, we can choose two sufficiently small positive constants $\varepsilon_1$ and $\varepsilon_2$. From the first equation of system (3), we obtain a differential inequality for $x_1(t)$ and consider the corresponding impulsive comparison system (12). In view of Lemma 2, system (12) has a unique, globally asymptotically stable positive periodic solution. By the comparison theorem for impulsive differential equations, there exists $n_1 \in \mathbb{N}$ such that, for the sufficiently small constant $\varepsilon_1$ and all $t \in (nT, (n+1)T]$ with $n > n_1$, the stated upper bound on $x_1(t)$ holds. Similarly, there exists $n_2 \in \mathbb{N}$ such that, for the sufficiently small constant $\varepsilon_2$ and all $t \in (nT, (n+1)T]$ with $n > n_2$, the corresponding bound on $x_2(t)$ holds. From the third equation of (3), we obtain a differential inequality for $y_2(t)$ and consider the associated differential comparison system. According to (11) and Lemma 1, and in view of the positivity of $y_2(t)$, we have $\lim_{t \to \infty} y_2(t) = 0$. This implies that, for an arbitrarily small positive constant $\varepsilon_3$ and $t$ large enough, $y_2(t) < \varepsilon_3$. Further, from the first and fourth equations of (3), we obtain inequality (20); considering its comparison system (21), by Lemma 2 we get the positive periodic solution $x_3^*(t)$ of system (21). By the comparison theorem, for a given constant $\eta_1 > 0$ and $t$ large enough, we have $x_3^*(t) - \eta_1 < x_1(t)$. Letting $\varepsilon_3 \to 0$, we have $x_3^*(t) \to x_1^*(t)$, so $x_1^*(t) - \eta_1 < x_1(t)$. It follows from (15) that $x_1(t) < x_1^*(t) + \eta_1$ for $t$ sufficiently large, which implies that $x_1(t) \to x_1^*(t)$ as $t \to \infty$. Similarly, we can obtain $x_2(t) \to x_2^*(t)$ as $t \to \infty$. This completes the proof.
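The statement of Lemma 2 is not reproduced in this copy; in papers of this type it is usually the following standard result, given here as a sketch under that assumption. For a linear impulsive equation with pulsed removal,

```latex
% Assumed standard form of the comparison lemma (Lemma 2):
\begin{aligned}
&\dot{u}(t) = a - b\,u(t), \quad t \neq nT; \qquad
 u(t^{+}) = (1-\theta)\,u(t), \quad t = nT \qquad (a, b > 0,\ 0 \le \theta < 1),\\
&u^{*}(t) = \frac{a}{b} + \Bigl(u^{*} - \frac{a}{b}\Bigr)e^{-b(t-nT)},
 \quad t \in \bigl(nT, (n+1)T\bigr], \qquad
 u^{*} = \frac{(1-\theta)\,\dfrac{a}{b}\,\bigl(1 - e^{-bT}\bigr)}
              {1 - (1-\theta)\,e^{-bT}} .
\end{aligned}
```

Because $|(1-\theta)e^{-bT}| < 1$, the stroboscopic map $u \mapsto (1-\theta)\bigl(a/b + (u - a/b)e^{-bT}\bigr)$ is a contraction, so $u^{*}(t)$ is the unique positive periodic solution and is globally asymptotically stable, which is precisely the property invoked in the proof above.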
Permanence of System (3)
Now we investigate the permanence of system (3). Before stating the theorem, we give the definition of permanence for system (3).
On the one hand, from the first and fourth equations of (3), combined with inequality (27), we obtain a differential inequality and consider the comparison system (29). According to Lemma 2 and (H5), and using the comparison theorem, there exists an arbitrarily small constant $\varepsilon_4 > 0$ such that $x_1(t) \geq x_5^*(t) - \varepsilon_4$ for $t$ large enough, where $x_5^*(t)$, defined for $t \in (nT, (n+1)T]$, is the unique, globally stable positive periodic solution of (29). Using the comparison theorem for impulsive differential equations, we can derive the corresponding lower bound from (30) for $t \in (nT, (n+1)T]$; similarly, we obtain the analogous bound for $x_2(t)$. On the other hand, in order to prove the stability of $y_2(t)$, we define a Lyapunov function $V(t)$ and calculate its derivative along the solution $y_2(t)$ of system (3). According to (H4), we can choose a positive constant $\varepsilon_5$ small enough that the required inequality holds. For some constant $y_2^*$ with $0 < y_2^* < \varepsilon_3$, we claim that $y_2(t) < y_2^*$ cannot hold for all $t > t_0$. Suppose the claim is invalid; then there exists a positive constant $t_0$ such that $y_2(t) < y_2^*$ for all $t > t_0$. From system (3), we obtain inequality (37); from the unique solution $x_6^*(t)$ of the comparison system of (37), defined for $t \in (nT, (n+1)T]$, we have $x_1(t) \geq x_6^*(t) - \varepsilon_5$ for all sufficiently large $t$, and similarly for $x_2(t)$. In view of (35), combining (41) and (42), we get $y_2(t_1 + \tau + t_2) \leq 0$; however, from (43), we obtain the opposite inequality. This is a contradiction. Hence, for all $t > t_1$, we have $y_2(t) \geq \underline{y}_2 > 0$.
Numerical Simulation
For the generalized functional response in (3), many functional responses meet the conditions, such as Holling types I, II, and III, the Crowley-Martin type, the Beddington-DeAngelis type, the Watt type, and the Ivlev type. In this section, we choose two concrete functional responses, the Holling type II and the Beddington-DeAngelis type, to illustrate our results and to explore further dynamical behavior of system (3). Firstly, we fix a set of parameter values with $T = 1$. By calculation, all the parameters satisfy the conditions of Theorem 3; hence, by Theorem 3, a predator-extinction solution of system (3) exists and is globally attractive. By numerical analysis with MATLAB, we obtain the corresponding simulation figures. Figure 1 shows the existence of a predator-extinction solution for a single initial value, and Figure 2 shows the attractivity of this solution; that is, regardless of the initial values, the species $x_1$, $x_2$, and $y_2$ converge to the predator-extinction solution. Secondly, we choose another set of parameters (with $\tau = 0.25$ and $T = 1$) to illustrate the permanence of system (3). One can verify that the conditions of Theorem 5 are satisfied; hence, by Theorem 5, system (3) is permanent. The simulation results are shown in Figure 3: Figure 3(a) shows the permanence of (3), and Figure 3(b) gives a positive periodic solution of the system.
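Reproducing such simulations requires only piecewise integration between pulses. The following is a minimal Python sketch of how an impulsive two-prey, one-predator model of this kind can be integrated; the right-hand side, all parameter values, and the omission of the maturation delay are illustrative assumptions, not the exact setup behind Figures 1-3 (which were produced in MATLAB).

```python
# Minimal sketch: impulsive two-prey, one-predator model with Holling II
# responses. Between pulses the ODE is integrated; at t = nT fixed fractions
# p1, p2 of the preys are removed (spraying). All values are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

r1, r2 = 0.65, 1.0      # prey intrinsic growth rates (assumed)
a1, a2 = 1.0, 1.0       # prey self-competition coefficients (assumed)
b1, b2 = 1.0, 0.8       # predation rates (assumed)
h1, h2 = 0.5, 0.5       # Holling II saturation constants (assumed)
kc, d2 = 0.6, 0.25      # conversion efficiency, predator death rate (assumed)
p1, p2 = 0.25, 0.25     # impulsive spraying fractions
T, n_pulses = 1.0, 200  # pulse period and number of pulses

def rhs(t, z):
    x1, x2, y = z
    f1 = b1 * x1 / (1.0 + h1 * x1)   # Holling type II functional responses
    f2 = b2 * x2 / (1.0 + h2 * x2)
    return [x1 * (r1 - a1 * x1) - f1 * y,
            x2 * (r2 - a2 * x2) - f2 * y,
            kc * (f1 + f2) * y - d2 * y]

z = np.array([0.1, 0.8, 0.5])        # initial densities x1, x2, y2
ts, zs = [], []
for n in range(n_pulses):
    sol = solve_ivp(rhs, (n * T, (n + 1) * T), z, max_step=0.01)
    ts.append(sol.t)
    zs.append(sol.y)
    z = sol.y[:, -1].copy()
    z[0] *= 1.0 - p1                 # impulsive spraying of prey 1
    z[1] *= 1.0 - p2                 # impulsive spraying of prey 2

t = np.concatenate(ts)
x1, x2, y = np.hstack(zs)
print(f"t = {t[-1]:.0f}: x1 = {x1[-1]:.3f}, x2 = {x2[-1]:.3f}, y2 = {y[-1]:.3f}")
```

Sweeping $T$ (or $p_1$, $p_2$) in an outer loop and recording the stroboscopic state after each pulse is also the natural way to produce bifurcation diagrams of the kind shown in Figure 5.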
Thirdly, in view of (H4), we know that the pest populations will die out if $p_1$ and $p_2$ are larger than the corresponding thresholds. In order to investigate the influence of $p_1$, $p_2$, and the time delay $\tau$, we fix the same parameters as in Figure 3, with $\tau = 0.25$ and $T = 1$. If $p_1 = 0.5$, the simulation shows that pest $x_1$ is driven to extinction (see Figure 4(a)), and if $p_2 = 0.65$, then, similarly, pest $x_2$ becomes extinct (see Figure 4(b)). If $p_1 = 0.5$ and $p_2 = 0.65$ at the same time, then not only do both pests go extinct but their predator also dies out for lack of food (see Figure 4(c)), which runs contrary to the conservation of biological diversity. From the biological point of view, we only need to control the two pests at a reasonable level by adjusting the values of $p_1$ and $p_2$, respectively. Furthermore, the simulations show that if the time delay between the immature and mature predator stages rises to a threshold ($\tau = 4$), the predator dies out (see Figure 4(d)); we therefore conclude that the stage structure also plays an important role in the permanence of system (3).
Discussion
In this paper, considering the complicated effects present in the real world, we introduce impulsive spraying of pesticides, stage structure for the predator, and a generalized functional response into a one-predator, two-prey system. Firstly, we investigate the existence and global attractivity of the predator-extinction periodic solution under the condition $e^{-d_1\tau}\bigl(k_1\beta_1\varphi_1(x_1^*) + k_2\beta_2\varphi_2(x_2^*)\bigr) < d_2$. Secondly, we obtain sufficient conditions for permanence. Finally, by numerical simulation with MATLAB, we further discuss some complicated dynamical behaviors of the system. Our results imply that if $p_1$ or $p_2$ is larger than a threshold, the predator will go extinct (because of lack of food, or from catching pests that died from the insecticide) (see Figure 1), and that if pesticides are used too heavily or harvesting of the two pests is excessive, all three species will die out (see Figure 4(c)). In order to maintain biological balance and biological diversity, protective measures can be taken to keep $p_2$ below the threshold (such as disease prevention and releasing immature or mature predators); the system is then permanent (see Figures 1-3). Comparing Figure 3 with Figures 4(a) and 4(b), by changing the parameters $p_1$ and $p_2$, respectively, $x_1$ and $x_2$ can be driven out effectively while the remaining populations survive, which provides a reliable control strategy: if the impulsive period is given, we can adjust $p_1$ and $p_2$ to protect the predator. This not only reduces economic losses but also protects the environment from damage. Finally, the impulsive period heavily affects the dynamical behavior of the system and may bring chaotic phenomena: when $T \in [118.1, 147.4]$, more than one periodic solution appears, and if a larger pulse period is taken ($T > 147$), the system shows chaos. The bifurcation diagrams include stable solutions, cycles, cascades, and chaos (see Figure 5).
In a word, our results show that the parameters $p_1$, $p_2$, $\tau$, and $T$ all have a great effect on the properties of system (3), which can be applied to ecological resource management. The complicated dynamical behavior implies that the influence of the parameters $p_1$, $p_2$, $\tau$, and $T$ is worth studying, and we will continue to investigate the potential dynamical properties in the near future.
Figure 2: Dynamical behavior of system (3) with different initial values. These initial values are chosen randomly, and the other parameters are the same as those in Figure 1. One can see that the solutions are globally attractive. The difference between Figures 1 and 2 is that more initial values are chosen in Figure 2 to show the global attractivity of the solutions.
Figure 3: The permanence of system (3) with initial values $x_1(0) = 0.1$, $x_2(0) = 0.8$, $y_2(0) = 0.5$, and $p_2 = 0.2$; the other parameters are the same as those in Figure 1. Clearly, all the species can coexist, and their densities enter a bounded region. (a) Time series of $x_1$, $x_2$, and $y_2$, indicating that the solution of (3) enters a bounded region, i.e., the system is permanent. (b) Phase portrait of system (3), which indicates a positive periodic solution.
Changes in Anti-Thyroglobulin IgG Glycosylation Patterns in Hashimoto’s Thyroiditis Patients
Hashimoto's thyroiditis (HT), an autoimmune thyroid disease, is one of the most widespread thyroid disorders. It is characterized by a diffuse goiter, lymphocytic infiltration of the thyroid tissue, and the presence of thyroid autoantibodies in the sera of HT patients. The incidence rate of HT has recently increased, for as yet unknown reasons, and has reached 0.3-1.5 cases per 1000 population per year (1). HT is the most common cause of hypothyroidism, a condition that seriously affects the growth and development of children, in addition to lowering the quality of life (QOL) of adults. HT has a complex etiology, which is currently incompletely understood. Thus, investigating the etiology of HT is paramount for the prevention and treatment of hypothyroidism.
Serum antithyroglobulin antibody (TgAb) is one of the hallmarks of HT, reaching elevated levels in 80-90% of all HT patients (2). In healthy individuals, TgAbs are present in serum only at low levels (3,4). In vitro experiments have shown that TgAb has an effect on antibody-dependent cellular cytotoxicity (ADCC), indicating that it might be involved in thyrocyte destruction (5). TgAb predominantly consists of antibodies of the IgG class (6). IgG antibodies are glycoproteins which on average contain 2.8 N-linked glycans per protein molecule. Two N-linked glycans are invariably located at asparagine 297 of the Fc region of the two heavy chains, and additional N-linked glycans are found within the Fab region (7). The two N-linked glycans within the Fc region have been shown to play an important role not only in the structure but also in the Fc-mediated biological function of IgGs (8). Therefore, investigating the glycosylation patterns and levels of TgAb IgG in the sera of HT patients may help to better understand the biological role of TgAb in the pathogenesis of HT.
Glycosylation is one of the most widespread protein modifications and is considered to greatly affect a number of different protein functions, such as protein-protein interactions, cell-cell recognition, adhesion, and motility (9-12). Alterations of the glycosylation patterns of IgG have been found in many kinds of autoimmune diseases (13)(14)(15). It has been found that the level of IgG galactosylation is decreased in rheumatoid arthritis and that the decrease is related to the severity of the disease (16-18). In addition, our previous study showed that the glycosylation patterns of serum TgAb IgG varied among different thyroid diseases, and that the sialic acid content of TgAb IgG was negatively correlated with the serum TgAb IgG levels in patients (19). Together, these results indicate that changes in the glycosylation pattern of TgAb might be involved in the pathogenesis of thyroid diseases. Therefore, in order to expand our current understanding of the pathogenesis of HT, we focused on investigating the alterations of the TgAb glycosylation patterns in HT patients.
Among the recently developed technologies for glycomic analysis, two methods have been established to analyze protein glycosylation with high sensitivity and throughput (20). Tandem mass spectrometry (MS/MS) allows powerful sequence analysis of the N-linked carbohydrate chains (< 40 monosaccharides) of glycoproteins, requiring only minute amounts of glycan sample, approximately 50-100 ng. In this method, the N-linked carbohydrate chains are released from the glycoproteins by endoglycosidases. The method has been successfully applied to the analysis of the glycosylation patterns of proteins from serum and tissue samples of patients. The lectin microarray, in contrast, is based on the interaction of glycans with different glycan-binding proteins, such as lectins and antibodies, and has been developed to analyze glycosylation patterns in a quantitative manner. Even though it can only detect accessible carbohydrate motifs, and not the entire repertoire of glycoforms (21), the lectin microarray is a sensitive and high-throughput platform that can detect and verify glycosylation changes in biosamples without sample pretreatment, usually requiring only 1-5 μL of serum for glycosylation detection and identification. Both methods are thus suitable for glycosylation analysis, in terms of both sensitivity and capacity for high sample throughput.
Thus, the aim of our study was to identify the glycosylation patterns of TgAb IgG in the sera of HT patients and their changes relative to those found in the sera of healthy blood donors. To this end, we used matrix-assisted laser desorption/ionization quadrupole ion trap time-of-flight mass spectrometry (MALDI-QIT-TOF-MS/MS) and a high-density lectin microarray to detect the TgAb IgG glycosylation patterns.
Subjects
This study was approved by the Ethics Committee of Peking University First Hospital, and all participants gave informed written consent. A total of 32 HT patients diagnosed prior to this study with elevated TgAb levels, and 15 healthy blood donors as controls, were enrolled from January 2011 to December 2012. At least 5 mL of serum was obtained from each participant. Serum TgAb and thyroid peroxidase antibodies (TPOAb) were analyzed using an electrochemiluminescence immunoassay with a Cobas e601 analyzer for signal detection (Roche Diagnostics; reference ranges 0-115 IU/mL for TgAb, 0-34 IU/mL for TPOAb). In the HT group, the TgAb levels ranged from 200 to 4000 IU/mL, with a number of samples exceeding 4000 IU/mL. As found in our previous study, the glycosylation of TgAb IgG might be correlated with the TgAb IgG levels in the serum of HT patients (19). Therefore, the HT patients in the present study were divided into two subgroups according to their serum TgAb levels. The 17 HT patients with TgAb levels exceeding 4000 IU/mL were classified as the high TgAb level group (hHT), while the remaining patients, exhibiting moderate TgAb levels (200 to 1500 IU/mL), were categorized as the medium level group (mHT). All HT patients previously diagnosed with hypothyroidism received levothyroxine treatment. Prior to this study, all these patients were verified to be euthyroid, i.e., all showed normal thyroid function. Detailed information is provided in Table 1. In the control group, all individuals were euthyroid and thyroid autoantibody-negative, and were confirmed to have no past or family history of thyroid disease. All serum samples were kept at −80 °C until further use.
Affinity purification of total serum IgG
Serum samples were initially filtered three times using 0.20 μm Minisart filters (Sartorius Stedim Biotech). IgG was purified using a HiTrap Protein G HP column (5 mL) and an AKTA purifier (both GE Healthcare) according to the manufacturer's instructions. Briefly, the serum sample was pumped onto the column after it had been equilibrated with five column volumes of binding buffer (0.02 M Tris, pH 7.2). Unbound protein was removed by washing with five column volumes of binding buffer. Bound IgG was eluted with five column volumes of elution buffer (0.1 M glycine, pH 2.7) and immediately neutralized with 0.2 M Tris (pH 9.0). The IgG solution was dialyzed against 0.01 M phosphate-buffered saline (PBS) at 4 °C for at least 16 h and then ultrafiltered using Amicon Ultra centrifugal filters (Merck KGaA). Concentrated IgG samples were stored at −80 °C until further use.
Affinity purification of TgAb IgG
To purify TgAb IgG from the total IgG samples, cyanogen bromide-activated Sepharose 4B (Sigma-Aldrich) was conjugated to human thyroglobulin (hTg) (Calbiochem). In this experiment, 2 g of cyanogen bromide-activated resin was washed and left to swell in aliquots of 400 mL of cold hydrochloric acid (1 mM). 5 mL of hTg (1 mg/mL) dissolved in coupling buffer (0.1 M sodium bicarbonate buffer containing 0.5 M sodium chloride, pH 8.3) was incubated with approximately 5 mL of Sepharose 4B gel for 2 h at room temperature. More than 95% of the hTg was found to have bound to the Sepharose gel. Unbound hTg was removed using the coupling buffer, and the remaining active groups on the Sepharose 4B were blocked with 0.2 M glycine (pH 8.0) for 2 h at room temperature. The Sepharose gel-bound hTg was then washed three times with alternating high- and low-pH buffer solutions, consisting of either coupling buffer (pH 8.3) or 0.1 M acetate buffer containing 0.5 M sodium chloride (pH 4.0). The Sepharose gel-bound hTg was then transferred to an XK16/20 column with two adapters (GE Healthcare) and allowed to settle by gravity to the bottom of the column for about 20 min. The column was packed according to the manufacturer's instructions.
TgAb IgG was further purified using a process similar to that used for IgG purification. The binding buffer was 0.01 M PBS (pH 7.4), and the elution buffer was 0.1 M glycine containing 0.5 M sodium chloride (pH 2.7). Protein concentrations were determined using a BCA protein assay kit (Kangweishiji), and all TgAb IgG samples were stored at −80 °C until further use.
Glycosylation profile analysis of purified TgAb IgG by MALDI-QIT-TOF-MS/MS
500 μg of mixed TgAb IgG from each group, dissolved in 0.05 M ammonium bicarbonate buffer, was treated successively with 10 g/L dithiothreitol solution for 60 min at 37 °C, 12 g/L iodoacetamide in the dark at room temperature for 1 h, and trypsin at 37 °C overnight. Three drops of 5% acetic acid were added to the sample after boiling for 2 min, and it was then desalted on a C18 Sep-Pak column (Waters Associates). After the sample was redissolved in ammonium bicarbonate buffer, it was incubated with N-glycosidase F (New England Biolabs Ltd.) at 37 °C for 20 h. The sample was then lyophilized, dissolved in 5% acetic acid, and desalted again. Sodium hydroxide slurry in dimethyl sulfoxide and methyl iodide were added to the lyophilized sample, which was then vortexed at room temperature for 20 min. Afterwards, chloroform and deionized water were added to the sample, mixed thoroughly, and centrifuged, and the upper aqueous layer was removed; this procedure was repeated three times. The chloroform layer was dried off, and the sample was dissolved in 1:1 methanol:deionized water, then desalted again and lyophilized. The derivatized glycans were dissolved in 10 μL of methanol, and 1 μL of glycans was mixed with 1 μL of 2,5-dihydroxybenzoic acid solution (5 mg/mL in 50% acetonitrile containing 0.1 M sodium chloride) for mass spectrometric analysis. The sample-matrix mixture was spotted onto a MALDI-TOF plate and allowed to dry at room temperature. The MALDI data were acquired in positive mode with a power setting of 70, a mass range from 500 to 5000 u, and 100 shots per sample, using a MALDI-QIT-TOF mass spectrometer (Shimadzu Axima Resonance).
Detection of glycosylation of purified TgAb IgG using high-density lectin microarray
The lectin microarray used in our study was kindly provided by Professor Tao (Shanghai Jiao Tong University, China) and was designed according to a previously published procedure (20,22). The lectin microarray system consists of a panel of 94 lectins; the lectin specificities and sources have been reported in those studies (20,22). Prior to use, the surface of the microarray was blocked for 1 h at room temperature by immersion in 0.05 M ethanolamine in borate buffer (pH 8.0). The slide was then washed and dried by spinning at 500 × g for 5 min. 10 μg of purified TgAb IgG from each participant was resuspended in 200 μL of PBST buffer (PBS buffer with 0.05% Tween-20). The samples were applied to the microarray and incubated at room temperature for 2 h. In order to oxidize the sugar groups, 2 μg/mL hTg conjugated with Lightning-Link Rapid Cy5 (Innova Biosciences) was mixed with 0.02 M sodium periodate at 4 °C for 1 h in the dark. Then, 200 μL of 2 μg/mL oxidized hTg-Cy5 conjugate was hybridized with the microarray for 1 h. After three washes with PBST buffer and two washes with deionized water, the array was dried by spinning at 500 × g for 5 min and scanned using a LuxScan 10K-A scanner (CapitalBio Corporation) at a wavelength of 647 nm and a photomultiplier tube setting of 800. The slide images were converted to numerical format for analysis, and the signal-to-noise ratio (S/N) (the median intensity of the spot foreground relative to the background) of each lectin spot was calculated.
Statistical analysis
Statistical analysis was performed using the SPSS statistics package (version 17.0). Quantitative data were presented as
Demographic data of participants
As shown in Table 1, there were no significant differences in age or gender distribution between the HT and control groups, or between the mHT and hHT groups. The average serum TgAb levels were significantly higher in the HT group than in the control group (P < .01), and they were also significantly higher in the hHT group than in the mHT group (P < .01). Eleven patients in the mHT group and 15 patients in the hHT group tested positive for TPOAb. The average serum TPOAb levels were significantly higher in the HT group than in the control group (P < .01). However, no significant difference was measured between the mHT group [200.7 (120.4-600) IU/mL] and the hHT group [508.7 (156.7-600) IU/mL].
Glycosylation profile analysis of purified TgAb IgG by MALDI-QIT-TOF-MS/MS
IgG-associated N-glycans have a conserved heptasaccharide core that consists of N-acetylglucosamine (GlcNAc) and mannose (Man), while they show great heterogeneity due to variations in terminal galactose (Gal), sialic acid (N-acetylneuraminic acid, abbreviated Neu5Ac), core fucose (Fuc), and bisecting GlcNAc (7) (Figure 1A). The glycoforms of each mixed TgAb IgG sample from the mHT, hHT, and control groups were measured by MALDI-QIT-TOF-MS/MS, and the analysis revealed that the glycosylation profiles of TgAb IgG from the mHT, hHT, and control groups were extremely similar (Figure 1, B and C). The four most intense peaks were at the mass-to-charge (m/z) positions 1835.9, 2040.0, 2605.3, and 2850.4. The first three of these represent three biantennary glycoforms, namely G0F (core fucosylated, no terminal galactose), G1F (core fucosylated, one terminal galactose, no sialic acid), and G2SF (core fucosylated, two galactoses with one terminal sialic acid). The last peak represents a complex triantennary glycoform (Figure 1, B and C). A total of 34 N-linked glycoforms were identified on the TgAb IgG, as shown in Supplemental Table 1.
Detectable lectins and glycans of purified TgAb IgG in lectin microarray
In order to analyze the glycans present on purified TgAb IgG from the mHT, hHT, and control groups, we utilized the lectin microarray. To prevent nonspecific binding of hTg to the lectins, the hTg-Cy5 conjugates were treated with sodium periodate, which oxidized the associated glycans. Detectable lectin signals were defined as (1) (S/N of lectin spots with sample) − (S/N of lectin spots with PBS) > 0.5, or (2) (S/N of lectin spots with sample)/(S/N of lectin spots with PBS) > 1.5, for over 50% of the samples from at least one group. The lectins conforming to the first condition and those conforming to the second condition overlapped substantially, giving a total of 8 detectable lectins (Table 2).
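A minimal sketch of how this detectability filter can be applied to the scanned S/N values is given below; the data layout, variable names, and the use of the union of the two conditions are our assumptions for illustration.

```python
# Sketch of the lectin detectability filter; array layout is assumed.
import numpy as np

def detectable_lectins(snr_samples, snr_pbs, diff_thr=0.5, ratio_thr=1.5,
                       frac=0.5):
    """Return indices of detectable lectins for one group.

    snr_samples : (n_samples, n_lectins) S/N with purified TgAb IgG applied
    snr_pbs     : (n_lectins,) S/N of the same spots probed with PBS only
    A lectin is kept if, for over `frac` of the group's samples,
    (S/N sample - S/N PBS) > diff_thr or (S/N sample / S/N PBS) > ratio_thr.
    """
    diff_ok = (snr_samples - snr_pbs) > diff_thr
    ratio_ok = (snr_samples / snr_pbs) > ratio_thr
    keep = (diff_ok | ratio_ok).mean(axis=0) > frac
    return np.flatnonzero(keep)

# Synthetic example: 17 samples (e.g., the hHT group) against 94 lectins.
rng = np.random.default_rng(0)
snr = rng.uniform(0.5, 5.0, size=(17, 94))
pbs = rng.uniform(0.5, 1.0, size=94)
print(detectable_lectins(snr, pbs))
```

Running the filter separately for the mHT, hHT, and control groups and taking the union of the resulting index sets corresponds to the "at least one group" clause above.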
Amounts of glycans present on purified TgAb IgG differ between HT patients and healthy controls
Our results revealed the following changes in the glycosylation of TgAb IgG in HT patients relative to controls (Table 2): (1) increased mannose (all P < .001); (2) increased terminal Neu5Ac (sialic acid), as indicated by increased binding to Polyporus squamosus lectin (PSA) and Sambucus nigra I lectin (SNA-I) (both P < .001); (3) increased core fucose, detected by increased binding to Lens culinaris lectin (LcH) (P < .001); and (4) increased Gal(β1-4)GlcNAc(β1-2)Man glycans, detected by increased binding to Phaseolus vulgaris lectin (PHA-L) (P = .00283). There was no significant difference in Sambucus nigra lectin (SNA) binding between the HT group and the control group.
Discussion
Over the past decade, numerous studies have explored the structures as well as the biological and clinical roles of IgG glycosylation. HT is an autoimmune thyroid disease, and TgAbs are commonly found at high titers in the sera of HT patients (2). Studies aimed at understanding the mechanisms of the loss of self-tolerance in HT have found that the titers and epitopes of serum TgAbs in HT patients differ from those in healthy individuals (4,23,24). As protein glycosylation is one of the major post-translational modifications, and because of its important role in the Fc-mediated biological function of IgGs, studying the differential expression of TgAb IgG glycosylation patterns between HT patients and healthy controls will help in understanding the biological role of TgAb IgG in the pathogenesis of HT. It was shown previously that the N-linked glycans at the highly conserved asparagine 297 residue in the Fc region of IgG are mainly biantennary complex-type structures with a core heptasaccharide GlcNAc(β1-2)Man(α1-6)(GlcNAc(β1-2)Man(α1-3))Man(β1-4)GlcNAc(β1-4)GlcNAc (8). This can vary by the addition of fucose to the core GlcNAc, by the addition of a bisecting GlcNAc to the core mannose, or by extension of the arms with galactose and sialic acid (8). The study presented here focused on the glycosylation of this thyroid-specific antibody. Because the amount of purified TgAb IgG from any one participant is relatively small, we used a mixture of TgAb IgG from all participants in each group, with the glycoforms of 500 μg of TgAb IgG from the mHT, hHT, and control groups being profiled separately by MALDI-QIT-TOF-MS/MS. A total of 34 glycoforms were found on TgAb IgG in our study. The results showed that there was no significant difference in the glycosylation profiles of purified TgAb IgG between Hashimoto's patients and healthy controls. The most abundant structures of TgAb IgG were fucosylated in a biantennary manner, containing either one sialic acid or none.
Although MALDI-QIT-TOF-MS/MS can reveal molecular details of glycans, it requires a significant investment of time and an abundant amount of purified protein. In contrast, the lectin microarray used here has been shown to facilitate the extraction of glycan structure information from glycoproteins at the nanogram level (20), in both a high-throughput and a sensitive fashion (25). It was therefore used to further compare the amounts of different glycans on TgAb IgG between the HT and control groups. Our results showed that the glycosylation levels of TgAb IgG differed between the HT group and the control group, with greater amounts of mannose, terminal sialic acid, core fucose, and Gal(β1-4)GlcNAc(β1-2)Man associated with TgAb IgG in HT patients. Changes in the patterns as well as the levels of IgG glycosylation have been reported for many different autoimmune diseases, including primary Sjogren's syndrome, ANCA-associated systemic vasculitis, and myositis syndromes (13)(14)(15). Importantly, as several studies have shown, the changes in IgG glycosylation are correlated with the severity of rheumatoid arthritis, a fact that supports the idea that glycosylation changes play an important role in the inflammatory processes of rheumatoid arthritis (16,26). Considering the important role of glycosylation in the structure and function of IgG, we speculate that the changes in TgAb IgG glycosylation patterns in HT patients might affect the structure and function of TgAb, especially its Fc-mediated functions, such as ADCC. In conclusion, the altered TgAb IgG glycosylation patterns observed here might contribute to the pathogenesis of HT. However, further studies are required to test the precise connection between TgAb IgG glycosylation and HT pathogenesis.
An earlier study by Wang (27) demonstrated that cytokines in the microenvironment of B cells can not only determine the subsequent differentiation of B cells into antibody-secreting cells (e.g., interleukin-2 (IL-2) and IL-10) but also regulate the glycosylation of the antibodies produced (e.g., interferon-γ and IL-21). TgAbs are produced by lymphocytes infiltrating the thyroid tissue (28). It has been reported that there is an imbalance of Th1/Th2 cells and an increase of Th17 cells in HT patients, and that the cytokine profiles produced in HT patients differ from those of healthy donors, with increased levels of cytokines such as interferon-γ, IL-2, and/or IL-17 (29-32). We speculate that the different cytokine patterns in the thyroid tissue of HT patients might contribute to the differentiation of different subsets of plasma cells and subsequently affect the levels of glycosylation on TgAb IgG. Further studies will be required to investigate the role of thyroid-tissue cytokines in TgAb glycosylation in greater detail.
Glycosylation is an enzyme-mediated post-translational process (33), and changes in glycans therefore reflect altered enzyme expression levels and/or activities (34). Thus, the different patterns of TgAb IgG glycosylation in HT patients and healthy controls can be attributed to different levels or activities of the corresponding enzymes in different subsets of plasma cells (35). Because the observed levels of several kinds of glycans on TgAb IgG were higher in HT patients than in healthy controls in our study, we would argue that a change in a single glycosylation-related enzyme might not be sufficient to explain the extensive differences. Further studies on the expression patterns of these enzymes in plasma cells should shed light on the underlying mechanisms in further detail.
In addition, our present study showed that there were greater amounts of mannose, terminal sialic acid, core fucose, and Gal(β1-4)GlcNAc(β1-2)Man on TgAb IgG from the hHT group than on that from the mHT group. This indicates that the glycosylation pattern of TgAb might be related to the serum TgAb levels. In contrast, our previous study, which used the elderberry lectin SNA, showed that the sialic acid content of TgAb was negatively correlated with the serum TgAb IgG levels across patients with different thyroid diseases (19). This discrepancy may be due to the different sources of SNA, the different methods, or the different experimental material, with the former study using serum and the study presented here using purified TgAb IgG.
In conclusion, our present study provides for the first time evidence that the glycosylation levels of TgAb IgG in HT patients are elevated and that the glycosylation patterns are altered relative to the TgAb IgG found in healthy donors. Thus, our study provides new clues that should allow for a more detailed exploration of the role of TgAb in the pathogenesis of HT in the future.
"Biology",
"Medicine"
] |
Experimental and Numerical Investigation of the Internal Temperature of an Oil-Immersed Power Transformer with DOFS
To accurately detect and monitor the internal temperature of an operating power transformer, a distributed optical fiber sensor (DOFS) was creatively applied inside an oil-immersed 35 kV transformer through high integration with the winding wire. On this basis, a power transformer prototype with a completely global internal temperature sensing capability was successfully developed, and it was qualified for power grid operation through the ex-factory type tests. The internal, spatially continuous temperature distribution of the operating transformer was then revealed through a heat-run test, and numerical simulation was applied for further analysis. Hotspots of the windings were continuously located and monitored (emerging at about 89%/90% of the height of the low/high voltage winding) and were further compared with the IEC calculation results. This new nondestructive internal sensing method shows broad application prospects in the electrical equipment field. Moreover, the revelation of the transformer's internal distributed temperature can offer a solid reference for both researchers and field operation staff.
Introduction
The transformer is the core equipment of a power system, and its safe operation is of great significance to the stability of the power supply. The failure of a large power transformer often leads to a blackout over an entire area, causing huge economic losses. As such, real-time dynamic monitoring of the online transformer status has aroused wide interest [1][2][3].
The internal temperature of a transformer, especially the winding hotspot, has a direct influence on the insulation performance and the service life. Overheating during operation decreases the life expectancy of the insulating materials and thus threatens the safety of the local grid, while running at a lower temperature means less load and a sacrifice of economic benefit [4]. Hence, finding the optimal balance between safety and economy requires criteria based on actual measured data. Moreover, the dynamic capacity-increasing and energy-saving control of transformers both rely on empirical models, still lacking quantitative, reliable support [5]. It is therefore necessary to obtain the real-time temperature distribution inside the transformer.
Currently, traditional transformer temperature monitoring methods can be divided primarily into four types: the empirical formula method [6,7], numerical simulation, infrared measurement [8,9], and optical fiber sensing, including OTDR (Optical Time Domain Reflection) technology [26]. Lu applied a DOFS to a laboratory energized transformer core based on Rayleigh scattering, but its limited detecting range makes it hard to monitor a field power transformer [27]. Among commercial products, Sensornet Ltd (Hertfordshire, UK) has reached a resolution of 0.02 °C per meter along the sensing fiber for 45 km in each direction, with fibers that can withstand up to 700 °C in corrosive environments [28].
Therefore, the DOFS (distributed optical fiber sensor) exhibits great potential for application in the electrical apparatus field. In this contribution, according to the actual structure of an oil-immersed 35 kV power transformer, different laying schemes were designed for the DOFS. Verified by the corresponding tests, the transformer prototype with built-in distributed sensing fibers was successfully developed and met the standards for power grid operation through the ex-factory type tests. Moreover, the internal real-time online temperature of an operating power transformer was revealed in a distributed manner. The hotspots were accurately located and continuously monitored. Assisted by numerical simulations, the measured data were also compared with the traditional IEC calculation results. These first-hand data may provide a solid reference for the delicate management of power transformers.
Sensing Principle
When light is transmitted in an optical fiber, the light waves are scattered to different degrees under the influence of the medium's molecules, resulting in a scattering spectrum with different frequencies [28]. Elastic scattering (Rayleigh scattering) and inelastic scattering (Brillouin scattering and Raman scattering) can thus be identified according to their frequencies (Figure 1). Among these scattered components, Raman scattering, which comprises two different frequency parts, has been found to have a strong temperature sensitivity, especially in its high-frequency (anti-Stokes) region.
When light propagates through the fiber, the luminous flux of the Stokes Raman scattering generated by each light pulse [29] is presented in Equation (1), and the luminous flux of the anti-Stokes Raman scattering is given by Equation (2), where $K_S$ and $K_{AS}$ are the cross-section coefficients of the optical fiber for Stokes and anti-Stokes scattering, respectively; $S$ is the backscattering factor of the fiber; $\nu_S$ and $\nu_{AS}$ are the frequencies of the Stokes and anti-Stokes scattered photons; $\phi_e$ is the number of incident laser pulse photons; $\alpha_0$, $\alpha_S$, and $\alpha_{AS}$ are the average propagation loss factors of the incident light, the Stokes scattered light, and the anti-Stokes scattered light, respectively; $L$ is the distance between the incident end of the fiber and the measured point; and $R_S(T)$ and $R_{AS}(T)$ are the corresponding coefficients, related to the population of the fiber molecules at different energy levels, which act as the temperature modulation functions of the Stokes and anti-Stokes Raman scattering [30], as shown in Equations (3) and (4), where $h$ is the Planck constant ($h = 6.626 \times 10^{-34}$ J·s), $\Delta\nu$ is the Raman phonon frequency ($\Delta\nu = 1.32 \times 10^{13}$ Hz), $k$ is the Boltzmann constant ($k = 1.38 \times 10^{-23}$ J·K$^{-1}$), and $T$ is the thermodynamic temperature.
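The displays for Equations (1)-(4) are not legible in this copy. From the symbol definitions just given, they take the standard Raman-DTS form shown below; this is a reconstruction, and the exact prefactors used in the paper may differ:

```latex
% Equations (1)-(4) in the standard Raman-DTS form (reconstructed):
\begin{aligned}
\Phi_{S}(T)  &= K_{S}\, S\, \nu_{S}^{4}\, \phi_{e}\,
                \exp\!\bigl[-(\alpha_{0} + \alpha_{S})L\bigr]\, R_{S}(T), \\
\Phi_{AS}(T) &= K_{AS}\, S\, \nu_{AS}^{4}\, \phi_{e}\,
                \exp\!\bigl[-(\alpha_{0} + \alpha_{AS})L\bigr]\, R_{AS}(T), \\
R_{S}(T)     &= \bigl(1 - e^{-h\Delta\nu/(kT)}\bigr)^{-1}, \qquad
R_{AS}(T)     = \bigl(e^{h\Delta\nu/(kT)} - 1\bigr)^{-1} .
\end{aligned}
```

With these forms, $R_{AS}(T)/R_{S}(T) = e^{-h\Delta\nu/(kT)}$, so the ratio $\Phi_{AS}/\Phi_{S}$ carries the temperature dependence that the demodulation of Equation (5) exploits.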
As the anti-Stokes Raman scattering has an obvious temperature sensitivity, it can be used as the signal channel. The temperature field can then be obtained by demodulating the two scattering components through their ratio, as shown in Equation (5); substituting Equations (3) and (4) into this formula at the temperature $T_0$ yields Equation (6).
In order to perform temperature calibration, some front sections of the fiber sensor are selected as the calibration fiber and placed in a thermostatic bath at temperature $T_0$ [29]. In practical application, the temperature distribution curve along the entire optical fiber can then be obtained by measuring the electrical levels of $\Phi_{AS}(T)$, $\Phi_S(T)$, $\Phi_{AS}(T_0)$, and $\Phi_S(T_0)$ after photoelectric conversion, as shown in Equation (7). The relative sensitivity $S_R$ of this demodulation method can be calculated by differentiating Equation (6) with respect to the temperature $T$, as given in Equation (8); in the range of 0 °C to 120 °C, the average temperature sensitivity is $S_R = 1.065\%$/°C. The working process of the optical fiber temperature sensor is shown in Figure 2. The pulsed laser enters the fiber through one end of the integrated wavelength division multiplexer (comprising a 1 × 2 bidirectional coupler (BDC) and an optical fiber wavelength division multiplexer (OWDM)). Its backscattering is then divided into Stokes and anti-Stokes Raman light by the integrated wavelength division multiplexer. After photoelectric conversion in an avalanche photodiode (APD) and high-speed analog-to-digital conversion, the processed signal is delivered to a computer for temperature demodulation and data storage, achieving online distributed temperature measurement [30].
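As a concrete illustration of the ratio demodulation of Equations (5)-(7), the sketch below recovers the temperature from the anti-Stokes and Stokes levels, assuming the differential attenuation between the two channels is negligible or already corrected; the function and variable names are ours.

```python
# Sketch of Raman-DTS ratio demodulation (Equations (5)-(7)); assumes the
# differential loss term between the two channels cancels or is corrected.
import numpy as np

h = 6.626e-34    # Planck constant, J s
dv = 1.32e13     # Raman phonon frequency, Hz
kB = 1.38e-23    # Boltzmann constant, J / K
T0 = 298.15      # calibration-bath temperature, K

def demodulate(phi_as, phi_s, phi_as0, phi_s0):
    """Temperature profile (K) from anti-Stokes/Stokes electrical levels.

    phi_as, phi_s   : levels along the fiber at the unknown temperature T
    phi_as0, phi_s0 : levels of the calibration section held at T0
    Uses 1/T = 1/T0 - (kB / (h dv)) * ln[(phi_as/phi_s) / (phi_as0/phi_s0)].
    """
    ratio = (phi_as / phi_s) / (phi_as0 / phi_s0)
    return 1.0 / (1.0 / T0 - (kB / (h * dv)) * np.log(ratio))

# Synthetic check: generate the ratio for a known 60 C section and recover it.
T_true = 333.15
r_rel = np.exp(-h * dv / kB * (1.0 / T_true - 1.0 / T0))
T_rec = demodulate(np.array([r_rel]), np.array([1.0]),
                   np.array([1.0]), np.array([1.0]))
print(T_rec - 273.15)   # ~[60.]
```

Under this simplified model, the relative temperature sensitivity of the ratio behaves as $h\Delta\nu/(kT^2)$ and decreases with temperature; the paper's Equation (8), derived from the full Equation (6), yields the quoted average of 1.065%/°C over 0-120 °C.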
Design of the Distributed Optical Fiber Sensor Laying Scheme
The transformer prototype developed in this paper adopts a layered winding structure, and its specific parameters are listed in Table 1. The detecting equipment is a commercial product (BY-DTS-4020, Weihai Beiyang Optoelectronic Info-Tech Co. Ltd., Weihai, China), and the relevant parameters of the optical fiber and the instrument are also listed there. The high voltage (HV) winding uses round copper wire, while the low voltage (LV) winding uses composite wire. The optical fiber laying scheme on the windings is shown in Figure 3. The LV winding of the 35 kV transformer is composed of composite conductors, each unit of which consists of six parallel flat copper wires, leaving no oil ducts between adjacent wires (as shown in Figure 3a). Temperature sensing is realized by attaching the optical fiber to the wire surface of the outermost turn. Field experience shows that the optical fiber becomes highly integrated with the winding owing to the friction and stress generated during the laying process. In our experiment, the fiber never broke and the mechanical stress remained moderate throughout. Meanwhile, a layer of insulating paper is wrapped around the fiber composite wire to weaken the negative measurement deviation caused by the cooling medium.
The HV winding is made up of enameled round copper conductors, which likewise means direct contact between adjacent wires. For the convenience of fiber laying, a layer of insulating paper (0.2 mm) is first applied to the surface of the winding, after which the optical fiber is spirally wound on the paper, closely attached to the wire (as shown in Figure 3b). The remaining steps are the same as above.
Through the high integration and close contact of the DOFS with the winding wire, the temperature of the laid optical fiber is synchronized with that of the adjacent conductor during actual transformer operation. Thus, real-time distributed temperature monitoring of the whole winding can be achieved. Meanwhile, a pulley guide should be used during the fiber laying process to ensure that the DOFS is evenly wound and adheres closely to the winding. The optical fiber was laid between the wire and the insulating paper, which maintains the original winding structure and helps buffer possible vibrations and knocks during the manufacturing process. The iron core limbs and the inner wall of the oil tank were also uniformly wound with distributed optical fiber sensors. The whole system is based on Raman scattering and is therefore only temperature-dependent, so mechanical stress or possible vibrations will not cause any interference.
Pre-Experiments
To ensure that the optical fiber works stably in the high-temperature environment of a transformer and has good compatibility with transformer oil, a safety test was performed in our earlier work [31] using the accelerated thermal aging method (the optical fiber was immersed in transformer oil at 130 °C for 576 h), which can be considered equivalent to the fiber working continuously for 21 years under normal transformer operation according to the IEC standards [13]. Ethylene tetrafluoroethylene (ETFE) and polyimide (PI) were finally selected as the optical fiber sheath and coating layer materials owing to their stable performance after the long-term aging process in the insulating oil. Meanwhile, the electrical performance of the selected optical fiber was also qualified for actual transformer operation owing to its good insulation properties [11].
To further test the hot-region measuring accuracy of the as-designed DOFS composite transformer windings, a temperature-rise experiment was performed on an assembled winding. The schematic diagram of the test platform is shown in Figure 4. The distributed optical fiber sensor was fixed on the outermost turn of the winding with insulating paper, while the inner side of the winding wire was closely adhered to a heating tape to realize temperature control. The heating tape adopted multi-point sampling through thermocouples to ensure a temperature control error of 0.1 °C. The hot-region detecting accuracy was explored by heating discontinuous winding turns to different stable states (from 55 °C to 75 °C in steps of 5 °C), and the measured results are shown in Figure 5. The experimental test shows that the as-designed DOFS composite winding has a spatial resolution of 0.8 m (each hot region is 3.2 m wide and consists of four sampling points) and a temperature accuracy better than 0.3 °C. For the continuously wound windings inside the transformer, this accuracy is sufficient to locate the exact local overheating turn. Meanwhile, there are no oil passages between adjacent wires in a layered-structure winding; thus, the temperature along the winding is uniform and continuous. With a densely wound DOFS, massive data can be obtained, and interpolation can be used to estimate the temperature at locations below the spatial resolution, as sketched below. The commercial instrument already integrates a section of calibrating optical fiber, and the reference temperature is stable according to the manufacturer. Nevertheless, the equipment was also calibrated before the experiment; the temperature calibration was conducted in a thermostatic bath with a temperature error of 0.01 °C. Also, in the actual experiments, a section of fiber was exposed directly to the ambient environment and compared with thermocouples (see Figure 6). The average ambient temperature detecting error was less than 0.3 °C, indicating that the instrument was reliable.
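To make the interpolation step mentioned above concrete, the following is a minimal sketch: the DTS returns one temperature sample roughly every 0.8 m of fiber, and intermediate positions are estimated by interpolating the trace; the trace used here is a synthetic placeholder, not measured data.

```python
# Sketch of interpolating a coarse DTS trace below its spatial resolution.
import numpy as np

fiber_pos = np.arange(0.0, 40.0, 0.8)            # sampling points along fiber, m
temps = 55.0 + 5.0 * np.sin(fiber_pos / 6.0)     # placeholder DTS trace, deg C
query = np.arange(0.0, 39.2, 0.1)                # finer 0.1 m grid
temps_fine = np.interp(query, fiber_pos, temps)  # linear interpolation
print(temps_fine[:5])
```

Because the layered winding has no oil ducts between adjacent turns, the true temperature varies smoothly along the fiber, which is what justifies this kind of interpolation.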
Figure 6. Ambient temperature detecting results during the whole test.
Platform Setup
The power transformer integrated with distributed optical fiber sensors was fabricated on the basis of the aforesaid laying schemes, in strict accordance with the normal manufacturing process. An optical fiber flange was sealed onto the oil tank for signal extraction. The transformer prototype met the standards for power grid operation through the corresponding ex-factory tests performed strictly according to the relevant IEC industry standards [13] (including the induced overvoltage withstand test, power-frequency voltage withstand test, tightness test, load-loss and no-load-loss measurements, dielectric routine tests, etc.).
This serves as strong proof that the as-designed distributed optical fiber sensor enjoys high safety and stability inside transformers and can be used further in actual industrial applications. The real-time internal distributed temperature information was obtained through the temperature-rise test. The optical fiber was connected to the analysis equipment through the fiber flange. The sensing framework is shown in Figure 7, and the field application is shown in Figure 8.
Fully Distributed Internal Temperature Revelation
The temperature-rise test was performed strictly according to the corresponding IEC standard [13]; simultaneously, the spatiotemporal temperature changes inside the operating transformer were monitored in a distributed manner. The test used the short-circuit method and consisted of two steps: applying the total losses (first 8 h) and applying the rated current (the last hour). The as-designed optical fiber sensor displayed effective sensing performance under the complex thermal conditions inside the power transformer and worked stably throughout. The fiber laying length of each monitored area is also given in Figure 9.
For all the windings, the sensing fibers were connected with each other through optical fiber patch cords on the outer side of the fiber flange (data from these extra fibers are not included in the results). The temperature distributions of the iron core limbs and the inside wall of the oil tank are shown in Figure 10.
As shown in Figure 9a,c,e, the HV winding temperature increases with height (the fiber was uniformly and spirally wound along the winding surface, so the fiber length can be normalized to a percentage of the winding height). The hotspot gradually appeared after 3 h (at 80% of the highest temperature), at around 44 °C, 43 °C, and 42 °C for phases A, B, and C, respectively. At the end of the first step (8 h), the hotspot appeared at around 89%, 90%, and 91% of the winding height for phases A, B, and C, respectively.
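Because the fiber is wound uniformly, mapping a DTS reading to a winding-height percentage is a linear rescaling. The following is a minimal sketch of that normalization and of locating the hotspot; the array names and the example values are illustrative, not data from the paper.

```python
import numpy as np

def locate_hotspot(temps, fiber_pos, fiber_len):
    """temps: DTS readings along the winding fiber (deg C);
    fiber_pos: distance along the fiber for each reading (m);
    fiber_len: total fiber length laid on this winding (m).
    A uniform spiral winding maps fiber length linearly to height."""
    height_pct = 100.0 * fiber_pos / fiber_len
    i = int(np.argmax(temps))
    return height_pct[i], temps[i]

# Illustrative example: a hotspot near 90% of the winding height.
pos = np.linspace(0.0, 120.0, 150)                      # 120 m of fiber
temps = 40.0 + 8.0 * np.exp(-((pos / 120.0 - 0.9) ** 2) / 0.005)
print(locate_hotspot(temps, pos, 120.0))                # ~ (90.1, 48.0)
```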
The LV winding displays a higher temperature than the HV winding due to its higher current, as shown in Figure 9b,d,f. The temperature increases with the winding height except for a downward trend in the top area, which may be caused by the relatively better heat dissipation conditions there. The hotspot gradually arose after 2 h, at almost 60 °C, located at 85~92% of the winding height, and continued to spread to a larger region with the passage of time. At 8 h, the hotspots for phases A, B, and C appeared at 88%, 91%, and 88% of the winding height, respectively.
For the core limbs, exhibited in Figure 10a-c, the temperature distribution shows a positive correlation with height for each phase. The hotspots appeared between 94% and 96% of the limb height for phases A, B, and C. However, there is little magnetic flux in the iron core during the short-circuit temperature-rise test; thus, the detected temperature may be lower than in actual operating situations. The temperature along the inner wall of the oil tank is shown in Figure 10d and displays an increasing trend with time. However, the temperature was distributed unevenly due to the continuously unstable oil flows along the tank wall, and the hotspot appeared alternately on the upper or lower part of the tank.
The hotspots of all the windings were closely monitored and continuously located, as exhibited in Figure 11, and the DOFS detection results are compared with the traditional IEC calculation results in Table 2. As shown in Figure 11, the hotspots gradually came to a steady position after around 1 h for both the HV and LV windings. This can probably be attributed to the fact that the oil gradually started to circulate at around 1 h: before this time, the cold oil has a relatively high viscosity and there is almost no heat convection inside the transformer, leading to a random hotspot position.
Figure 11. Hotspot trajectories of (a) HV windings and (b) LV windings.
The actual status and life expectancy of a transformer are mainly determined by its insulation condition, which is directly influenced by the hotspot temperature (HST). According to the traditional IEC calculation [13], the hotspot always appears at the top of the windings with a relatively higher temperature, which may be attributed to the neglect of the top-area heat dissipation conditions in its idealized model. In fact, the cooling conditions in the top area have a great impact and cannot be easily ignored, as shown in Table 2; they cause different shifts of the hotspot location for different windings. Meanwhile, the IEC standard model cannot provide distributed temperature data for different windings and lacks information about the iron core, oil tank, etc.
Obviously, the actual internal temperature distribution exhibits a strong position dependence, which can possibly be attributed to the different surrounding circumstances, such as irregular oil flows, various structural components, etc. Thus, point-type detecting methods inevitably leave huge monitoring blind zones, posing hidden dangers to the transformer's safe operation.
Thus, the direct-contact detection of the DOFS is of great significance for field applications compared to traditional model-based calculation and point-to-point measuring methods.
The hotspot spatial distribution (phase A) and the transient temperature changes are also exhibited in Figures 12 and 13.
Figure 12. Hotspot distribution in phase A of (a) HV windings and (b) LV windings.
The transient hotspot temperature showed a rapid increase before 1 h and then a slow upward trend. This phenomenon was caused by the insulating oil flow. At the very start, the cold oil was static, resulting in a fast local temperature rise (no flow means no heat convection). However, as the heat generated by the windings continued to accumulate, once the buoyant lift of the oil exceeded its gravity (at 1 h for this transformer), the static oil, which carried a lot of heat, began to circulate. This process explains the slow ascent of the temperature in the later part.
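The onset of oil circulation shows up in the DTS trace as a knee where the heating rate drops sharply. A minimal sketch of recovering that time from a measured hotspot curve follows; the threshold fraction is an assumption for illustration, not a value from the paper.

```python
import numpy as np

def circulation_onset(t_h, hotspot_temp, drop=0.25):
    """Return the first time (hours) at which the heating rate falls
    below `drop` times its initial value, i.e. the knee attributed
    to the start of oil circulation."""
    rate = np.gradient(hotspot_temp, t_h)    # deg C per hour
    below = rate < drop * rate[0]
    if not below.any():
        return None                          # no knee found
    return t_h[np.argmax(below)]             # index of the first True
```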
Finite Element Simulation Results
The revelation of the distributed temperature information inside an operating power transformer has provided massive detailed data for both researchers and field staff. However, the DOFS detecting method only has great advantages in temperature sensing, and it is still hard to fully understand what is happening inside the power transformer. Thus, the Finite Element Method (FEM) has been utilized to further analyze the internal physical processes.
This simulation mainly focused on the internal transient processes of the transformer, based on hydrodynamic theory. The corresponding parameters used in the calculation all come from the manufacturer and are listed in Table 3. The simulation results are exhibited in Figure 14.
According to the real structure of the studied 35 kV oil-immersed power transformer, a 3D thermal-fluid simulation model was established. The calculation model retains as many structural components as possible, such as wood padding blocks, insulation cardboards, the detailed iron core (composed of many layers of laminates), insulation washers, cooling fins, the oil tank, windings, etc. However, it is impossible to reconstruct a real transformer completely and accurately due to the model complexity and the calculation time (or rather, the convergence); thereby, some necessary simplifications are inevitable. In this model, the windings are treated as coaxial cylinders, ignoring the detailed electromagnetic wire structures and the insulating papers wrapped around them, since the transformer adopts a layered winding structure. This simplification leaves out plenty of tiny components which would only add to the calculation difficulty and weaken the convergence.
The whole simulation was based on the COMSOL Multiphysics software (5.4, COMSOL AB, Stockholm, Sweden) and took around 60 h on our current hardware. The whole model has around 6 million elements in total, including 1.66 million mesh vertices, 70 thousand edge units, etc. Mesh independence was also verified by further increasing the element density (no more than 3% change in the result). Meanwhile, structured grids (mainly hexahedral elements) and unstructured grids (basically tetrahedral elements) were applied to different areas according to their characteristics.
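The mesh-independence criterion used above amounts to comparing a monitored scalar (e.g., the hotspot temperature) across successive refinements. A minimal sketch, with the tolerance set to the paper's stated 3% and the function name an illustrative assumption:

```python
def mesh_independent(coarse_result, fine_result, tol=0.03):
    """True when refining the mesh changes the monitored result
    (e.g., hotspot temperature in deg C) by no more than tol."""
    return abs(fine_result - coarse_result) <= tol * abs(coarse_result)

# Illustrative: 57.0 -> 56.2 deg C after refinement is a 1.4% change.
print(mesh_independent(57.0, 56.2))   # True
```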
The thermal field transient simulation result (Figure 14a) showed a temperature distribution similar to the actual detected data. The HV winding hottest temperature reached about 57 °C, a little higher than the measured value. The flow field simulation result is exhibited in Figure 14b. There were several obvious vortexes in the top area of the windings, with a maximum flow velocity of 2 cm/s. For an ONAN power transformer, the circulation of insulating oil is mainly driven by the heat generated from the windings. Thus, the heated oil flows tend to climb up along the windings or the cardboards and gather in the top area, where the heat emission conditions are relatively better. Then, part of the cooled oil begins to sink along the cooling fins, while the rest falls down directly and meets the upward streams, continuously forming the vortexes (shown in detail in Figure 14c). The thermal flux distribution, displayed in Figure 14d, shows a process similar to the fluid field: the main heat flux is basically along the windings and reaches its highest value in the top area (the total heat exchange along the fins is also very active, but it is not obvious when split into each fin). The internal heat convection process is almost entirely completed by the circulating oil, so it is not strange that these two field simulations share a similar distribution.
The vector diagram of the oil streams is displayed in Figure 15, where different perspectives are presented to aid further understanding.
The flow vectors exhibit the same whole-region oil circulation as the aforesaid process. Indeed, the relatively good cooling conditions in the winding top area have a considerable impact on the hotspot locations, which is usually ignored by field operation and maintenance staff or by manufacturers. Also, according to our simulation and the actual detected data, the real winding hotspot tends to appear at around 90% of the winding height rather than at the winding top.
Conclusions
In this paper, the DOFS was applied inside an operating power transformer and verified by the corresponding tests. A transformer with complete internal temperature sensing capability was successfully developed and qualified for actual application through the ex-factory type tests. The internal temperature was obtained in a distributed manner and can serve as an important reference for both field operation staff and the relevant engineers.
From this research, it was proven that distributed optical fiber sensors can be applied in power transformers for spatiotemporally continuous temperature monitoring with a spatial resolution of 80 cm and a temperature accuracy better than 0.5 °C. For large transformers, the spatial positioning accuracy is enough to locate the exact overheated winding turn. For small transformers, especially those with a layered winding structure, the proposed densely wound optical fiber laying scheme is a practical way to obtain the temperature at locations that would otherwise fall below the spatial resolution.
Meanwhile, the real-time internal temperature revelation shows that the real hotspot location is more likely at 90% of the winding height, rather than at the winding top as conventionally assumed. The subsequent numerical simulation may partially account for this 10% deviation from the fluid field and thermal flux perspectives.
In conclusion, distributed sensing technology displays a promising future in online temperature monitoring of power transformers and may play an increasingly important role in the electrical apparatus field.
Author Contributions: Y.L. and X.L. proposed the idea and designed the experiment; H.L. and X.L. performed the calculation; J.W. conducted the experiment; X.L. and X.F. prepared the paper. All authors have read and agreed to the published version of the manuscript.
"Engineering",
"Physics"
] |
SUSY Searches at ATLAS
Recent results of searches for supersymmetry by the ATLAS collaboration in up to 2 fb−1 of √s = 7 TeV pp collisions at the LHC are reported.
Introduction
Due to the high centre-of-mass energy of 7 TeV, the LHC has discovery potential for new heavy particles beyond the Tevatron limits even with a modest amount of integrated luminosity. This holds in particular for particles with colour charge, such as squarks and gluinos in supersymmetry (SUSY) [1]. However, due to the excellent luminosity performance of the LHC in 2011, sensitivity also exists for electroweak production of charginos and neutralinos, the supersymmetric partners of the electroweak gauge bosons and the Higgs boson. In this document, a number of results of ATLAS searches for supersymmetry with up to 2 fb−1 of LHC pp data at √s = 7 TeV are summarized. Since none of the analyses have observed any excess above the Standard Model expectation, limits on SUSY parameters or masses of SUSY particles are set. It is, however, important to consider carefully the assumptions made in each of the limits, and the true constraints that they impose on supersymmetry.
Searches with jets and missing momentum
Assuming conservation of R-parity, the lightest supersymmetric particle (LSP) is stable and weakly interacting, and will typically escape detection. If the primary produced particles are squarks or gluinos (and assuming a negligible lifetime of these particles), this will lead to final states with energetic jets and significant missing transverse momentum.
ATLAS carries out analyses with a lepton veto [2], requiring one isolated lepton [3], or requiring two or more leptons [4]. In addition, a dedicated search is performed for events with high jet multiplicity [5]. Data samples corresponding to luminosities between 1.0 and 1.3 fb−1 are used. Events are triggered either on the presence of a jet plus large missing momentum, or on the presence of at least one high-pT lepton. Backgrounds to the searches arise from Standard Model processes such as vector boson production plus jets (W + jets, Z + jets), top quark pair production and single top production, QCD multijet production, and diboson production. Backgrounds are estimated
in a semi-data-driven way, using control regions in combination with a transfer factor obtained from simulation.
Fig. 1. Exclusion contours in the MSUGRA/CMSSM m0–m1/2 plane for A0 = 0, tan β = 10 and µ > 0, arising from the analysis with ≥ 2, ≥ 3 or ≥ 4 jets plus missing momentum, and the multijets plus missing momentum analysis.
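The semi-data-driven estimate described above can be written compactly: simulation provides the ratio of expected background yields between the signal region and the control region, and the observed control-region yield in data is scaled by that transfer factor. A minimal sketch with illustrative variable names and numbers:

```python
def background_estimate(n_cr_data, n_sr_mc, n_cr_mc):
    """Scale the observed control-region yield by the simulated
    signal-region / control-region transfer factor."""
    transfer_factor = n_sr_mc / n_cr_mc
    return n_cr_data * transfer_factor

# Illustrative: 200 control-region events in data, with simulation
# predicting 12 (signal region) and 180 (control region) events.
print(background_estimate(200.0, 12.0, 180.0))   # ~13.3 expected
```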
The results are interpreted in the MSUGRA/CMSSM model, and in particular as limits in the plane spanned by the common scalar mass parameter at the GUT scale m 0 and the common gaugino mass parameter at the GUT scale m 1/2 , for values of the common trilinear coupling parameter A 0 = 0, Higgs mixing parameter µ > 0, and ratio of the vacuum expectation values of the two Higgs doublets tan β = 10. Figure 1 shows the results for the analyses with ≥ 2, ≥ 3 or ≥ 4 jets plus missing momentum, and the multijets plus missing momentum analysis. For a choice of parameters leading to equal squark and gluino masses, squark and gluino masses below approximately 1 TeV are excluded. The 1-lepton and 2-lepton results are less constraining in MSUGRA/CMSSM for this choice of parameters, but these analyses are complementary, and therefore no less important.
The search with two isolated opposite-charge leptons is also interpreted in the framework of minimal gauge mediated supersymmetry breaking (GMSB), as shown in Figure 2 [6]. Assuming a messenger mass scale Mmes of 250 TeV, 3 generations of messengers (N5 = 3) and µ > 0, limits are set on the effective SUSY breaking scale Λ and on tan β. These limits significantly improve on the LEP results.
Simplified model interpretation
ATLAS has found it useful to not only interpret the results in constrained models, but also in terms of simplified models assuming specific production and decay modes [7]. In such simplified models, the constraints implied by models like MSUGRA/CMSSM or GMSB are relaxed, leaving more freedom for variation of particle masses and decay modes. Interpretations in simplified models thus show better the limitations of the analyses as a function of the relevant kinematic variables, and aid in drawing conclusions from the results.
Inclusive search results with jets and missing momentum are interpreted using simplified models with either pair production of squarks or of gluinos, or production of squark-gluino pairs. Direct squark decays (q̃ → qχ̃01) or direct gluino decays (g̃ → qqχ̃01) are dominant if all other particle masses have multi-TeV values, so that those do not play a role. Additional complexity may be built in, for example by allowing one-step decays to intermediate charginos, χ̃±, or heavier neutralinos, χ̃02. Figure 3 shows the ATLAS results interpreted in terms of limits on (first and second generation) squark and gluino masses, for three values of the LSP (χ̃01) mass, and assuming that all other SUSY particles are very massive [8]. Further interpretations are done in terms of limits on gluino mass vs LSP mass assuming high squark masses, as shown for example in Figure 4 for direct decays, or in terms of limits on squark mass vs LSP mass assuming high gluino masses [3,8]. Figure 5 shows an example of limits in the gluino-LSP mass plane obtained from one-step gluino decays, g̃ → qq′χ̃±, χ̃± → W(∗)χ̃01, by the one-lepton analysis. The chargino mass in such decays is a free parameter, characterized by x = (mχ̃± − mχ̃01)/(mg̃ − mχ̃01), and Figure 5 shows x = 1/2 as an example.
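For reference, the chargino mass implied by a given x is just a linear interpolation between the LSP and gluino masses; the numbers below are illustrative, not taken from the analyses.

```python
def chargino_mass(m_gluino, m_lsp, x):
    """Invert x = (m_chargino - m_lsp) / (m_gluino - m_lsp)."""
    return m_lsp + x * (m_gluino - m_lsp)

# Example: x = 1/2, m_gluino = 600 GeV, m_lsp = 100 GeV -> 350 GeV.
print(chargino_mass(600.0, 100.0, 0.5))
```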
The results of the inclusive jets plus missing momentum searches, interpreted in these simplified models, indicate that masses of first and second generation squarks and of gluinos must be above approximately 750 GeV. An important caveat in this interpretation is that this is only true for neutralino LSP masses below approximately 250 GeV (as in MSUGRA/CMSSM for values of m1/2 below O(600) GeV). For higher LSP masses, the squark and gluino mass limits are significantly less constraining. It will be a challenge for further analyses to extend the sensitivity of inclusive squark and gluino searches to the case of heavy neutralinos. If the LSP is heavy, events are characterized by less energetic jets and less missing transverse momentum. This will be more difficult to trigger on, and leads to higher Standard Model backgrounds in the analysis.
Fig. 4. Cross section limits and exclusion contours in the gluino-neutralino mass plane, for direct gluino decays, g̃ → qqχ̃01, as obtained by the no-lepton analysis. All squark masses are assumed to be multi-TeV, so that only gluino pair production takes place, and the direct decay is assumed to occur with 100% branching fraction.
Fig. 5. Cross section limits and exclusion contours in the gluino-neutralino mass plane, for one-step gluino decays, g̃ → qq′χ̃±, χ̃± → W(∗)χ̃01, as obtained by the one-lepton analysis. Only gluino pair production is considered, the one-step gluino decay is assumed to occur with 100% branching fraction, and the chargino mass is characterized by x = 1/2 (see text).
SUSY and naturalness
Important motivations for electroweak-scale supersymmetry are the facts that SUSY might provide a natural solution to the hierarchy problem by preventing "unnatural" fine-tuning of the Higgs sector, and that the lightest stable SUSY particle is an excellent dark matter candidate. It is instructive to consider what such a motivation really requires from SUSY: a relatively light top quark partner (the stop, t̃) and an associated left-handed sbottom quark, b̃L; a gluino not much heavier than about 1.5 TeV to keep the stop light (the stop receives radiative corrections from loops like t̃ → g̃t → t̃); and electroweak gauginos below the TeV scale [9]. There are no strong constraints on first and second generation squarks and sleptons; in fact, heavy squarks and sleptons make it easier for SUSY to satisfy the strong constraints from flavour physics. Motivated by these considerations, ATLAS explicitly searches for third generation squarks and for electroweak gauginos.
Stop and sbottom searches
ATLAS has carried out a number of searches for supersymmetry with b-tagged jets, which are sensitive to sbottom and stop quark production, either direct or in gluino decays. Jets are tagged as originating from b-quarks by an algorithm that exploits both track impact parameter and secondary vertex information.
Direct sbottom pair production is searched for in a data sample corresponding to 2 fb−1 by requiring two b-tagged jets with pT > 130, 50 GeV and significant missing transverse momentum of more than 130 GeV [10]. The final discriminant is the boost-corrected contransverse mass mCT [11], and signal regions with mCT > 100, 150, 200 GeV are considered. No excesses are observed above the expected backgrounds of top, W + heavy flavour and Z + heavy flavour production. Figure 6 shows the resulting limits in the sbottom-neutralino mass plane, assuming sbottom quark pair production and sbottom quark decay into a bottom quark plus a neutralino (LSP) with a 100% branching fraction. Under these assumptions, sbottom masses up to 390 GeV are excluded for neutralino masses below 60 GeV. ATLAS has searched for stop quark production in gluino decays [12] using an analysis requiring at least four high-pT jets of which at least one should be b-tagged, one isolated lepton, and significant missing transverse momentum. After applying the selection criteria, 74 events are observed in 1.0 fb−1 of data, where 55 ± 14 background events are expected from a data-driven estimation procedure, or 52 ± 28 from Monte Carlo simulations. Since there is no significant excess, limits are set in the gluino-stop mass plane, assuming the gluino to decay as g̃ → t̃t, and the stop quark to decay as t̃ → bχ̃±1, as shown in Figure 7. In addition, ATLAS has searched for sbottom production in gluino decays, setting limits in the gluino-sbottom mass plane and in the gluino-neutralino mass plane [13].
Further searches for direct stop quark pair production are in progress. These searches are challenging due to the similarity with the top quark pair production final state for stop masses similar to the top mass, and due to the low cross section for high stop masses. ATLAS has searched for signs of new phenomena in top quark pair events with large missing transverse momentum [14]; such an analysis is sensitive to pair production of massive partners of the top quark, decaying to a top quark and a long-lived undetected neutral particle. No excess above background was observed, and limits on the cross section for pair production of top quark partners are set. These limits constrain fermionic exotic fourth generation quarks, but not yet scalar partners of the top quark, such as the stop quark.
Electroweak gaugino searches
Searches for charginos and neutralinos are carried out via analyses of final states involving photons plus missing momentum, or multileptons plus missing momentum. In gauge mediation models, neutralinos decay to gravitinos plus one or more standard model particles, depending on the neutralino composition. For bino-like neutralinos, the final state consists of a pair of high-p T photons plus missing transverse momentum. ATLAS has searched for an excess in such final states using 1.1 fb −1 of data [15]. The selection requires two photons, identified with "tight" criteria, with p T > 25 GeV, and significant missing transverse momentum. The results are interpreted in the general gauge mediation model (GGM), in terms of limits in the gluino-neutralino mass plane, and assuming the neutralino to be the NLSP. The results are shown in Figure 8. The assumption is made that photons are produced promptly, i.e. cτ of the NLSP is assumed to be less than 0.1 mm. In this model, a gluino mass below 805 GeV is excluded for bino masses above 50 GeV.
The diphoton plus missing transverse momentum analysis is also interpreted in the minimal gauge mediation model (GMSB), for the SPS8 parameters M mes = 2Λ, N 5 = 1, tan β = 15 and µ > 0. The ATLAS results imply a lower limit on Λ for the SPS8 parameters of 145 TeV at 95% CL.
Multilepton analyses [4,16] are sensitive to production of charginos and/or neutralinos other than the LSP, decaying leptonically to the LSP. These analyses comprise the golden search modes at the Tevatron, but are also rapidly gaining relevance at the LHC. ATLAS searches for excesses in final states with three or more leptons on the 2011 data are in progress. ATLAS has published results of various analyses searching for dilepton events plus missing momentum, in 1.0 fb −1 of data [4]. Three signal regions are defined for opposite-charge leptons, and two signal regions are defined for same-charge leptons, with varying selection criteria on jets and on the missing transverse momentum. For all signal regions, the observed event count agrees with the expected background. The analysis selecting same-charge leptons plus large missing momentum is sensitive to electroweak gaugino production, and results for this analysis are shown in Figure 9. The interpretation is done in a simplified model assuming chargino (χ
Special final states
The number of different final states sensitive to SUSY production is very large. SUSY particles may be long-lived when their decay is suppressed kinematically (split SUSY, R-hadrons, anomaly-mediated SUSY breaking, certain parts of the phase space of gauge-mediated SUSY breaking) or by very small couplings (e.g., R-parity violation). ATLAS has dedicated searches for such long-lived particles and for secondary vertices of decaying massive particles [20]. Furthermore, there is a dedicated search for third generation sneutrinos decaying to an electron-muon pair in R-parity violation scenarios [21]. It is also noteworthy that ATLAS has searched for a scalar partner of the gluon [22]. Kinked or disappearing tracks are a possible signature of high-pT massive particles decaying in the detector volume to an almost degenerate daughter particle, such as χ̃±1 → χ̃01 π± in anomaly-mediated SUSY breaking (AMSB) models, where χ̃±1 and χ̃01 are almost degenerate, and the resulting pion track has low pT and is easily missed in the reconstruction. ATLAS has searched for such signatures in 1.0 fb−1 of data [19], demanding a track pT of at least 10 GeV, good reconstruction quality in the silicon tracking detectors and in the inner layers of the transition radiation tracker (TRT), but no, or only few, hits in the outer layer of the TRT. Backgrounds arise from tracks interacting with the TRT material (dominant), or from misreconstructed low-pT tracks. Figure 10 (top) shows probability density functions (pdfs) in pT for signal and background tracks; Figure 10 (bottom) shows the pT distribution of the 185 tracks in data satisfying the selection criteria, and the pdf fit to the data. The data is consistent with the background expectation, and upper limits on the signal are set. ATLAS has also searched for high-mass secondary vertices, consistent with the decay of massive particles, in 33 pb−1 of data collected in 2010. The analysis is designed in particular for the decay χ̃0 → μ̃µ and the R-parity violating decay μ̃ → qq′ through a non-zero λ′2ij coupling [20]. Backgrounds arise from interactions in the inner detector material, and the fiducial volume of this analysis excludes regions with such detector material. A signal region is defined requiring a vertex mass of 10 GeV or more, with at least four tracks in the vertex, as shown in Figure 11. The data is consistent with the background hypothesis.
Conclusion and Outlook
The results of ATLAS supersymmetry searches are summarized in Figure 12.
Although no signs of SUSY have been found so far, it is important to realize that actual tests of "natural" SUSY are only just beginning [23]. In this respect, the LHC run of 2012, with an expected luminosity of more than 10 fb−1, possibly at √s = 8 TeV, will be very important. However, experimentally there will be considerable challenges in triggering, and in dealing with high pile-up conditions. In the longer term, increasing the LHC beam energy to > 6 TeV will again enable the crossing of kinematical barriers and open the way for multi-TeV SUSY searches.
"Physics"
] |
Orientation-Encoding CNN for Point Cloud Classification and Segmentation
With the introduction of effective and general deep learning network frameworks, deep-learning-based methods have achieved remarkable success in various visual tasks. However, applying convolutional neural networks to point clouds still poses tough challenges due to the lack of a regular underlying structure in point clouds. Therefore, taking original point clouds as the input data, this paper proposes an orientation-encoding (OE) convolutional module and designs a convolutional neural network for effectively extracting the local geometric features of point sets. The same number of points is searched in each of 8 directions and arranged in order by direction; the OE convolution is then carried out according to the number of points in each direction, which realizes effective feature learning of the local structure of point sets. Experiments on diverse datasets show that the proposed method has competitive performance on the classification and segmentation tasks of point sets.
Introduction
At present, deep learning has achieved significant success in image recognition tasks, such as image classification [1][2][3] and semantic segmentation [4,5]. The rapid development of the two-dimensional data field has promoted researchers' interest in three-dimensional data recognition and segmentation tasks. With the extensive application of 3D laser scanners and 3D depth sensors, algorithms for the effective analysis of point cloud data are required for autonomous driving, robots, unmanned aerial vehicles, and virtual reality. It is not always feasible to directly apply two-dimensional image deep learning methods to three-dimensional data tasks, because in a three-dimensional scene composed of point clouds, the point set objects are disordered and scattered in three-dimensional space. It is also unreasonable to simply apply two-dimensional convolution operators to irregular point clouds, because these operations are defined on regular grids. The methods of [6][7][8] try to address this problem by voxelizing scenes for three-dimensional convolutional neural networks. However, the main challenges of the voxel representation are spatial sparsity and computational complexity; the researchers in [9,10] try to use special structures (such as octrees) to mitigate the sparsity problem, but converting the point cloud into voxels still takes a certain amount of time.
Because of the limitations of the explorations above, the PointNet [11] structure directly takes the point cloud as input, uses a T-Net module to transform the input point cloud and address the rotation invariance of point cloud objects, applies a Multi-Layer Perceptron (MLP) to extract high-level semantic information, and finally uses max pooling to extract global information. The PointNet architecture solves the problem of point cloud disorder and provides a general network architecture for directly processing point cloud data. However, it does not take the local geometric features of point cloud objects into account when extracting high-level semantic information. Afterwards, PointNet++ [12] downsamples the data by means of the Farthest Point Sampling (FPS) algorithm, uses the ball query algorithm to search for a set of adjacent points within a certain range, and then learns high-level semantic features from the grouped point features through convolution operations, as sketched below. Its core idea is a hierarchical structure, which remedies PointNet's weakness in local feature extraction and further improves the performance of the network.
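For concreteness, the two sampling-and-grouping steps that PointNet++ relies on can be sketched as follows; this is a minimal NumPy illustration of the published algorithms, not the authors' code.

```python
import numpy as np

def farthest_point_sampling(pts, k):
    """Greedy FPS: repeatedly pick the point farthest from the
    already-chosen set. pts: (N, 3); returns k indices."""
    n = pts.shape[0]
    chosen = np.zeros(k, dtype=int)
    dist = np.full(n, np.inf)
    chosen[0] = 0
    for i in range(1, k):
        d = np.sum((pts - pts[chosen[i - 1]]) ** 2, axis=1)
        dist = np.minimum(dist, d)       # distance to the chosen set
        chosen[i] = int(np.argmax(dist))
    return chosen

def ball_query(pts, center, radius, max_k):
    """Indices of up to max_k points within `radius` of `center`."""
    idx = np.where(np.sum((pts - center) ** 2, axis=1) <= radius ** 2)[0]
    return idx[:max_k]
```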
PointNet [11] and PointNet++ [12] are the first deep network frameworks for point set processing, and several studies have promoted this research direction by proposing improvements in structure or composition [13,14]. Considering the relative layout of adjacent points and their features, a new pooling strategy is combined to carry out spectral convolution on local graphs [13]. SpiderCNN [14] proposes a convolution kernel with parameterization by learning the weighting parameters from the features of the input point sets. These methods attempt to enrich feature sets through original point cloud data features to enhance the performance of point cloud classification and segmentation tasks. However, these schemes still have problems such as insufficient extraction of local features of the point cloud and poor universality and robustness of the network architecture; hence, the 3D point cloud data task is still a long-term and challenging process.
In this paper, we propose a new orientation-encoding convolutional neural network (OECNN) for point cloud data. In order to overcome the problems of low accuracy and poor robustness of the network architecture, we adopt a special convolution method and a pooling strategy. Our main contributions in this paper are as follows:
• We propose a general network architecture for point cloud classification and segmentation.
• The framework is simple and effective.
• The network has certain adaptability.
• Our OE convolution and pooling strategies are perceptive to the local geometric features of point sets.
Point Cloud Classification and Segmentation
In the point cloud model, each sample is composed of point sets. Point cloud classification can be stated as follows: given a set of sample points in three-dimensional space, we learn the high-level semantic feature information of the samples through a neural network to match the sample label. Each sample matches a corresponding label, which makes this an end-to-end supervised learning process. The point cloud segmentation task is a further extension of the classification task, and its purpose is to match the category label of each point in the sample. As we have entered the era of big data, deep learning has been widely studied through the application of optimization algorithms in neural networks, and various tasks that take point clouds as the research object have attracted researchers' attention.
Voxel Data
Voxel data is a regular data structure which is easy to process. VoxNet [7] and NormalNet [15] apply 3D convolution to a voxelization of point clouds. However, there are high computational and memory costs associated with using 3D convolution. A variety of work [9,16] is devoted to exploring the sparsity of voxelized point clouds to improve the efficiency of computing and memory. OctNet [9] uses the sparsity of the input data to divide the space using a series of unbalanced octrees, and each leaf node in the octree stores a pooled feature representation. This representation focuses on memory allocation and computation in the relevant dense regions and enables deeper networks to handle higher resolutions. The Sparse Submanifold CNN architecture [16] proposes sparse convolution operations to deal with spatial sparse data more effectively and use them to construct spatial sparse convolutional networks. In comparison, our OECNN is able to directly use point clouds as input data and process very sparse data.
Spatial Domain
The GeodesicCNN [17] is a generalization of the convolution network paradigm to non-Euclidean manifolds. Its construction is based on a local geodesic system consisting of polar coordinates to extract "patches", and the coefficients of the filters and linear combination weights are optimization variables that are used to learn to minimize specific cost functions.
An image is a function on regular grids, F : Z² → R. Let W be a (2m + 1) × (2m + 1) filter matrix, where m is a positive integer. The convolution in classic CNNs is
(F ∗ W)(i, j) = Σ_{s=−m}^{m} Σ_{t=−m}^{m} F(i + s, j + t) W(s, t). (1)
GeodesicCNN uses the patch operator D to map a point p and its neighbors, and then applies Equation (1). The method learns the influence of the patch operation in the local polar coordinate system of the point p. We offer an alternative viewpoint: instead of finding local parametrizations of the manifold, we view it as embedded in the Euclidean space R^n and design convolution methods accordingly. Our method is more efficient for point cloud processing in Euclidean space.
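Equation (1) can be made concrete with a direct, unoptimized grid implementation; the sketch below is illustrative only and highlights the regular-grid assumption that point clouds violate.

```python
import numpy as np

def grid_conv(F, W):
    """Eq. (1): slide a (2m+1)x(2m+1) filter W over the grid
    function F and sum elementwise products (valid region only)."""
    m = W.shape[0] // 2
    H, V = F.shape
    out = np.zeros((H - 2 * m, V - 2 * m))
    for i in range(m, H - m):
        for j in range(m, V - m):
            out[i - m, j - m] = np.sum(F[i - m:i + m + 1, j - m:j + m + 1] * W)
    return out
```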
Method Design
We studied a series of different convolution operations [13,14,18] and pooling methods [14] for point cloud data. In PointSIFT [18], the authors proposed an operator with orientation-encoding and scale perception. They search eight nearest points for each point in eight directions and extract the features of point sets through three-layer convolution (PointSIFT convolution will carry out three-layer convolution according to three directions, xyz) and max pooling. However, when searching each nearest point, all the input points need to be traversed. The system also has some limitations in that only the eight nearest points can be searched for each point.
Unlike PointSIFT, in this paper we propose an orientation-encoding operator and carry out effective convolution in each direction. We divide the spherical region within a certain range into eight directions, search for the same number of points in each direction for each point, and sort the searched point sets according to direction. After that, we extract the corresponding point set features through a two-layer convolution operation (the OE convolution convolves over the number of points in each direction) and a top-k pooling strategy. The method in this paper can adjust the scale (such as the radius and the number of points) according to the features learned by the orientation-encoding convolutional blocks, which gives it a certain adaptability. Moreover, convincing experimental results have been obtained on the ModelNet40 and ShapeNet part datasets. PointSIFT convolution and OE convolution are shown in Figure 1.
Orientation-Encoding (OE) Architecture
We present our OE convolution module in this section. In order to capture shape patterns adaptively, we hope that shape information can be clearly encoded in different directions. Hence, we propose a new orientation-encoding convolution for all point operations. As illustrated in Figure 2a, PointSIFT can ideally search for the 8 nearest points (red points) in 8 directions for each point in the cube. However, this search method has a large fault tolerance and increases the computational complexity. Our OE search can selectively search for the desired number of points in 8 directions of the spherical area, with some flexibility, and better represents the surrounding point set features. Figure 2b shows searching for 4 points in each of the 8 directions.
We present our OE convolution module in this section. In order to capture patterns adaptively, we hope that shape information can be clearly encoded in di directions. Hence, we propose a new orientation-encoding convolution for all poin ations. As illustrated in Figure 2a Before the training, we rotate, jitter, and randomly select a fixed number of th ple data in each epoch; hence, the search of local points around the central point be regarded as a random process. When we have searched the corresponding num points within the radius r, then we do not need to search other points. In the pro searching local points for each central point, the worst situation is to traverse all s points once. In PointSIFT, because the search objective is to find the nearest point i direction, all points in the sample need to be traversed eight times. Therefore, in t In three-dimensional space, with an input of n points with d dimension features, for each point p 0 (with d dimensions), the 3D space is divided into 8 partitions with p 0 as the center, indicating 8 directions. We define variables for each direction to store the index of the points to be searched in each direction and define the corresponding indicators to indicate the number of local points to be searched in each direction. In the spherical region with radius r, we find m points for p 0 in each direction (let the number of local points searched in the space of each point be M, m = M/8). m points represent local geometric features in one direction.
Before the training, we rotate, jitter, and randomly select a fixed number of the sample data in each epoch; hence, the search of local points around the central point p 0 can be regarded as a random process. When we have searched the corresponding number of points within the radius r, then we do not need to search other points. In the process of searching local points for each central point, the worst situation is to traverse all sample points once. In PointSIFT, because the search objective is to find the nearest point in each direction, all points in the sample need to be traversed eight times. Therefore, in theory, our method has a certain speed advantage and reduces the computational complexity. We can adjust the search range according to the radius r, so as to better capture the local information in each direction. In order to prevent not searching enough points in radius r, we use the point p 0 to initialize the required number of points (with d dimensions).
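A minimal NumPy sketch of the OE search follows. The octant of a neighbor is read off from the signs of its offset from p0; each octant is padded with p0 itself when fewer than m points are found, as described above. Details such as the candidate ordering are assumptions for illustration.

```python
import numpy as np

def oe_search(points, p0, radius, m):
    """Gather m neighbors per octant (8 directions) around p0 within
    `radius`, padding short octants with p0. Returns (8, m, 3)."""
    d = points - p0
    inside = np.sum(d ** 2, axis=1) <= radius ** 2
    # Octant id 0..7 from the sign pattern of (dx, dy, dz).
    octant = ((d[:, 0] > 0).astype(int) * 4
              + (d[:, 1] > 0).astype(int) * 2
              + (d[:, 2] > 0).astype(int))
    out = np.broadcast_to(p0, (8, m, 3)).copy()   # initialize with p0
    for o in range(8):
        cand = points[inside & (octant == o)]
        k = min(m, len(cand))
        out[o, :k] = cand[:k]                     # stop once m are found
    return out
```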
As shown in Figure 3, we propose the OE convolution module, which has two paths to extract local high-dimensional semantic information for each point in the sample. On the one hand, we first use OE search to find the local point sets and the corresponding features for each point and store the corresponding information by adding one dimension. In order to make the convolution orientation-aware, we conduct a two-layer convolution. The first layer of convolution is performed according to the number of points in each direction to obtain the remaining 8 points in 8 directions, with one point in each direction. The second convolution convolves the remaining 8 points and then reduces the dimensions to obtain the corresponding point set features. We used the same output channel e for both convolutions. At this point, each point has local high-dimensional semantic feature information. On the other hand, we use the input point set features to directly perform a convolution with output channel e to obtain the high-dimensional semantic features of each point. After that, we obtain richer local high-dimensional semantic information by performing an addition operation.
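Putting the pieces together, the module just described can be sketched as follows: the directional branch applies the two convolutions to the OE-searched neighborhood, the shortcut branch applies a pointwise convolution to the input features, and the two are added. The weight shapes and the pointwise form of the shortcut are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def oe_module(S, feats, A_m, A_8, W_point):
    """One OE convolution module (sketch).
    S: (n, 8, m, d) OE-searched neighborhood features;
    feats: (n, d) input point features; A_m: (m, d, e);
    A_8: (8, e, e); W_point: (d, e). Returns (n, e)."""
    # Directional path: collapse the m points per direction, then the
    # 8 directional responses.
    S1 = relu(np.einsum('nkmd,mde->nke', S, A_m))   # (n, 8, e)
    S2 = relu(np.einsum('nke,kef->nf', S1, A_8))    # (n, e)
    # Shortcut path: pointwise convolution of the input features.
    shortcut = relu(feats @ W_point)                # (n, e)
    return S2 + shortcut                            # fuse by addition
```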
We put the obtained features of the points into a tensor S ∈ R^{n×M×d}. The two stages of directional convolution are
S1 = g[Conv_m(A_m, S)], S1 ∈ R^{n×8×e},
S2 = g[Conv_8(A_8, S1)], S2 ∈ R^{n×e},
where A_m and A_8 represent the weight parameters to be optimized, Conv_m represents the convolution of the m points along each direction, and Conv_8 represents the convolution of the remaining eight points in the eight directions. In this paper, we set g[·] = ReLU[Batch_norm(·)]. After the convolution, each point is represented as a vector with e dimensions. This vector represents the shape pattern around p0.
Multi-Scale Architecture
Using an OE convolution module as a basic unit, we are able to build a multi-scale network structure. An OE convolution module can capture arbitrary scale information from eight directions and select any number of points in each direction. If we stack several OE convolution modules to generate a deeper network structure, then the last layer can observe a larger three-dimensional region, and different OE units can have different scales. As illustrated in Figure 4, we can choose the appropriate scale and the number of points according to the features of the network and strive to better optimize the performance of the network. A simple but effective way to capture multi-scale patterns is to concatenate the output of different stacked units as a shortcut.
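A rough sketch of the stacking-with-shortcut idea just described; the stand-in units (plain ReLU matrix multiplies) and the channel widths are illustrative assumptions in place of real OE convolution modules.

```python
import numpy as np

def stack_oe_modules(features, modules):
    """Feed features through stacked modules and concatenate every
    module's output as a shortcut, yielding a multi-scale feature."""
    outputs = []
    for module in modules:
        features = module(features)
        outputs.append(features)
    return np.concatenate(outputs, axis=-1)

# Toy stand-ins for OE convolution units with growing channel widths.
rng = np.random.default_rng(1)
def make_unit(c_in, c_out):
    W = rng.normal(size=(c_in, c_out))
    return lambda x: np.maximum(x @ W, 0.0)

units = [make_unit(3, 64), make_unit(64, 128), make_unit(128, 256)]
points = rng.normal(size=(1024, 3))
print(stack_oe_modules(points, units).shape)  # (1024, 448) = 64 + 128 + 256
```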
For a layer of the OE convolution module, searching for M local points for each point in the sample can be regarded as a random process. When we stack OE convolution modules, if we use the same scale and search the same number of local points for each layer of the OE convolution module but with different numbers of output channels, the M points searched by each layer of the OE convolution module are the same. In this way, we can use neural networks to learn different high-dimensional semantic information of the same local point set in a certain range, and finally fuse the feature information by concatenation. If we use different scales and different numbers of local search points in each layer of the OE convolution module, then the M points in each layer of the OE convolution module are different, which will lead to a different local scope and different local semantic feature information collected by each layer of the OE convolution module, which will generate information redundancy. This is not conducive to the feature learning of the local range; hence, we set the same scale and the same number of local points in each layer of the OE convolution module to generate more representative local high-dimensional semantic information for experiments. In the following sections, we also conducted comparative experiments on multi-scale and fixed-scale structures.
Top-k Pooling vs. Max Pooling
Max pooling can be seen as a special type of top-k pooling. By applying max pooling, we can extract global point cloud features. However, because it lacks scalability and loses data information, we adopt the selective top-k pooling strategy proposed in SpiderCNN [14]. Both max pooling and top-k pooling use a simple symmetric function to gather information from each point. Here, a symmetric function takes n vectors as inputs and outputs a vector representing global point cloud information in a sample, which is invariant to the input order.
Our idea is to generate a function that can extract global features by applying a symmetric function in the feature space of a point set, where h is composed of a single-variable function and max pooling (or top-k pooling), and f is the corresponding sample features; the number of features of f for top-k pooling is k times that for max pooling. The value of k gives top-k pooling its selectivity. Through the collection of h, we can learn a number of features to capture different properties of the set in different directions. Under the same experimental conditions, we compare the two pooling methods on ModelNet40 [19]. The max pooling classification accuracy is 92.2%, and the top-k pooling classification accuracy is 92.5% when the value of k is 2, which reflects the advantages of top-k pooling in extracting global feature information. In Figure 5, we use the 2 × 2 matrix to give the calculation process of max pooling and top-k pooling and show the selectivity of top-k pooling.
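A minimal sketch contrasting the two pooling operators; the per-channel selection follows the description above, while the exact ordering and flattening of the k retained values is an assumption. Both functions are invariant to the order of the input points, matching the definition of a symmetric function given above.

```python
import numpy as np

def max_pool(features):
    """(n, e) per-point features -> (e,) global descriptor."""
    return features.max(axis=0)

def top_k_pool(features, k=2):
    """Keep the k largest responses per channel instead of only the
    maximum, so the global descriptor is k times wider (k * e values)."""
    top = np.sort(features, axis=0)[-k:][::-1]  # (k, e), largest first
    return top.reshape(-1)

f = np.array([[1.0, 4.0],
              [3.0, 2.0],
              [2.0, 5.0]])
print(max_pool(f))       # [3. 5.]
print(top_k_pool(f, 2))  # [3. 5. 2. 4.]
```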
Experimental Environment
We evaluated and analyzed the OE convolution (OEConv) module on 3D point cloud classification and segmentation. Through the 4-layer OE structure, we empirically studied the key parameters and compared our model with the state-of-the-art methods. All models were constructed with Tensorflow 1.5 on a 1080Ti GPU and trained using the Adam optimizer with a learning rate of 0.001. The same data augmentation strategy as for PointNet [11] was applied: the point cloud was randomly rotated along the up-axis and the position of each point was jittered by a Gaussian noise with zero mean and 0.02 standard deviation. The system used was Ubuntu 16.04. A dropout rate of 0.5 was used with the fully connected layer. Batch normalization was used at the end of each OE convolution module with the decay set to 0.5 or 0.7. On a GTX 1080Ti, the forward time of an OEConv layer (batch size 16) with in-channel 32 and out-channel 64 was 0.052 s. For the 4-layer OECNN (batch size 16), the total forward-pass time was 0.615 s.
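The augmentation strategy just described can be sketched as follows; treating z as the up-axis and using NumPy instead of the paper's TensorFlow 1.5 pipeline are assumptions.

```python
import numpy as np

def augment(points, sigma=0.02, rng=np.random.default_rng()):
    """Random rotation about the up-axis plus per-point Gaussian jitter
    with zero mean and 0.02 standard deviation, as described above."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot_z = np.array([[c, -s, 0.0],
                      [s,  c, 0.0],
                      [0.0, 0.0, 1.0]])
    return points @ rot_z.T + rng.normal(scale=sigma, size=points.shape)

cloud = np.random.default_rng(2).normal(size=(1024, 3))
print(augment(cloud).shape)  # (1024, 3)
```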
Classification on ModelNet40
ModelNet40 [19] is used for the classification task. We compare three key parameters (the number of search points, different scales, and different values of k for top-k pooling) to improve the performance of the optimized network by using the single-variable principle. The results are summarized in Figure 7. We saw that 16 is the optimal choice among 8, 16, 24, and 32 search points, and we chose a scale of 0.2 with top-2 pooling to get an accuracy of 92.5%. Then we used a fixed-parameter module to stack a 4-layer network structure, using top-4 pooling to get the best accuracy of 92.7%. We use a 4-layer multi-scale structure with different key parameters, and the classification accuracy is 92.6%. The result is slightly worse than for a 4-layer single-scale network with fixed parameters. We suspect that it may be due to the insufficient local features extracted from the multi-scale structure. To prevent overfitting, we apply the data augmentation method DP (random input dropout) introduced in [12] during training. Table 1 shows a comparison between OECNN and other models on ModelNet40. We also added the convolution operator proposed by PointSIFT into the OECNN network for comparison, and the result was only 90.3%. The 4-layer OECNN achieved an accuracy of 92.7%, which improves over the best reported result of models with 1024 input points. In Figure 8, we give a visualization of the misclassified samples of the two categories. We find that the reason for the misclassification is that they all have similar 3D geometric spatial features.
Figure 8. The visualization of misclassified samples on ModelNet40 [19].
Segmentation on ShapeNet Parts
The ShapeNet Parts segmentation dataset [6] contains 16,881 shapes from 16 classes, with the points of each sample labeled into one of 50 part types. We used the official training/testing split with 14,006 for training and 2847 for testing. The challenge of the task is to assign a part label to each point in the test set. The mIoU (mean intersection over union) as the evaluation metric is the average of all part categories. As shown in Figure 9, like classification, we also compared three key parameters in the segmentation task. We used an OECNN with one layer of OEConv (the output channel is 64) to explore the learning situation of local features and compare the impact of different scales, the numbers of search points, and the value of k for top-k pooling. We found that the best result was 85.01% using a radius of 0.2, 24 search points, and top_2 pooling. Then we used a radius of 0.2, 24 search points, and top_2 pooling to stack an OEConv structure into a 4-layer OECNN structure. The structure shown in Figure 10 was trained with a batch size of 16. We used the point coordinates as the input and assumed that category labels were known. The experimental results are summarized in Table 2. We see that the OECNN network structure achieved competitive experimental results on the ShapeNet Parts dataset.
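For reference, a hedged sketch of a per-shape part-mIoU computation; counting a part absent from both prediction and ground truth as IoU 1 is a common convention, and the paper's exact evaluation protocol may differ.

```python
import numpy as np

def part_miou(pred, label, num_parts):
    """Mean intersection-over-union across the part labels of one shape."""
    ious = []
    for part in range(num_parts):
        p, l = pred == part, label == part
        union = np.logical_or(p, l).sum()
        inter = np.logical_and(p, l).sum()
        ious.append(1.0 if union == 0 else inter / union)
    return float(np.mean(ious))

pred = np.array([0, 0, 1, 1, 2])
label = np.array([0, 1, 1, 1, 2])
print(round(part_miou(pred, label, 3), 3))  # 0.722
```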
In comparison to the other five methods in Table 2, our method has a mIoU of 85.5% for all shapes on the ShapeNet dataset, and 10 categories of mIoU are superior to the other methods. Further, we tested an implementation of our model with the operator proposed for PointSIFT; its mIoU only reached 84.9%. Based on our analysis, we conclude that our method is sensitive to local information with a similar spherical shape because our OEConv module is able to capture local scale information in any spherical range.
Figure 10. The architecture of OECNN in the ShapeNet Parts segmentation [6] task.

We show the qualitative results of segmentation on the ShapeNet Part dataset in Figure 11, where ground truth represents the visualization results made by real labels, prediction is the result predicted by the network, and difference represents the misclassified points (red points) between ground truth and prediction. Different colors (ground truth, prediction) represent different part labels. We can see that the segmentation of some points was not very good at the occlusion and the intersection of different parts. This may have lost some effective points for local feature learning.
Robustness Test
In this section, we additionally tested and analyzed the robustness of the OECNN on ModelNet40. We studied the effect of the OECNN losing points. Following the settings for the experiments in Section 4.2, we trained a 4-layer OECNN and SpiderCNN with 512, 256, 128, 64, and 32 points as the input data. As shown in Figure 12, as the number of input points decreased, our classification accuracy on ModelNet40 decreased slightly until the number of input points dropped to 256. Our classification accuracy was 92.6% when the number of input points was 512. When there were only 32 input points, our OECNN classification accuracy was 87.9%, which was better than that of the SpiderCNN. The disadvantage of our method is that we may not find the corresponding number of points in each direction in a local range, although we use the center point for initialization. This is not conducive to the semantic learning of local features, but the comparison shows the effectiveness of our method.
Ablation Experiments
To verify the effectiveness of OEConv, in Table 3, we calculated the results of classification and segmentation when the points in 8 directions were all filled by the center point p_0. With these comparisons, we conclude that OEConv, which randomly selects points in 8 directions within a certain range, is key to the performance of OECNN.
Table 3. The selection of points in OEConv.
                          Classification (Accuracy)    Segmentation (mIoU)
OEConv (filled by p_0)    91.5%                         84.5%
OEConv (random)           92.7%                         85.5%

Table 4 summarizes the space (number of parameters in the network) and time (floating-point operations/sample, forward time) complexity of our classification OECNN. We also compare OECNN to the SpiderCNN and PointSIFT (the convolution operator proposed by PointSIFT put into the OECNN) architectures from previous work. While SpiderCNN and PointSIFT achieve high performance, OECNN is more efficient in computational cost (measured by FLOPs/sample and forward time). Besides, OECNN is much more space-efficient than SpiderCNN in terms of parameters in the network. In the future, we will reduce the number of network parameters and further improve feature learning.
Conclusions
In this paper, an orientation-encoding CNN is proposed, which improves the performance of classification and segmentation for unorganized 3D point clouds. First, an orientation-encoding module is used to search for points within a certain range of each point. Subsequently, we convolve the corresponding point set features in several directions to obtain richer local features for each point. After that, top-k pooling is used to extract the global point set features. OECNN was trained more efficiently with augmented datasets using the proposed scheme. The experimental results show that the proposed method generates a significantly higher classification accuracy (92.7%) on ModelNet40 and achieves an mIoU of 85.5% on the ShapeNet Parts dataset. | 8,964.2 | 2021-08-02T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Building effective core/shell polymer nanoparticles for epoxy composite toughening based on Hansen solubility parameters
Particles have been demonstrated to toughen epoxy resins, especially for fiber-reinforced epoxy composites, and core/shell particles are one of them. It is known that not all particles toughen the same but most evaluations are through experimentation, and few studies have been conducted to accurately predict the particles’ toughening effect or guide the design of effective particles. In this study, efforts were made to find the control factors of core/shell particles, primarily interfacial compatibility and degree of dispersion, and how to predict them. Nanocomposites were fabricated by incorporating core/shell nanoparticles having various shell polymer compositions, especially their polarities. Their compatibility was estimated using a novel quantitative approach via adopting the theory of Hansen solubility parameters (HSP), in which the HSP of core/shell nanoparticles and the epoxy matrix were experimentally determined and compared. It was found that the HSP distance was a good predictor for particle dispersion and interfacial interaction. Particles having a small HSP distance (Ra) to the epoxy resin, represented by the polybutylacrylate core/polymethyl methacrylate shell particle having the smallest Ra of 0.50, indicated a uniform dispersion and strong interfacial bonding with the matrix and yielded outstanding toughening performance. In contrast, polybutylacrylate core/polyacrylonitrile shell particle having the largest HSP distance (6.56) formed aggregates and exhibited low interfacial interaction, leading to poor toughness. It was also demonstrated that HSP can provide an effective strategy to facilitate the design of effective core/shell nanoparticles for epoxy toughening.
Introduction
Epoxy resins are one of the most important thermoset polymers in composites. As thermosetting materials, epoxies exhibit a high degree of crosslinking, which endows them with high rigidity and strength [1]. However, the highly crosslinked structure also makes epoxies inherently brittle, so they consequently suffer from poor crack resistance, characterized by low fracture toughness [2,3]. For decades, research on toughening brittle epoxy resins has been active in both academia and industry. A major strategy to improve their fracture toughness is the incorporation of fillers, either soft or rigid, organic or inorganic. However, in most cases, these fillers are unsatisfactory because there is a trade-off between toughness and other important mechanical properties, such as strength, modulus, and glass transition temperature (T g) [4]. In principle, as a measure for property balance, adding fillers containing soft and rigid phases into epoxy resin will improve its toughness without significantly affecting mechanical properties [5][6][7], wherein core/shell particles were proven effective tougheners for brittle epoxy resin [8][9][10]. Although core/shell particles can increase the fracture toughness of an epoxy matrix without significantly lowering other mechanical properties, previous work tends to focus on the effects of particle size and particle loading; there are almost no established particle design guidelines, and a theoretical explanation of the toughening mechanism involving these particles is missing. In this work, by carefully controlling the composition and morphology of nanoparticles, a credible theory is proposed for the first time to quantitatively explain and predict the toughening effect and the performance of core/shell nanoparticles.
Rubber particles are the most commonly used and effective toughening additive for brittle epoxy resins, whether they are amorphous or semi-crystalline [11,12], thermosets or thermoplastics [13,14]. Nonetheless, the toughness improvement usually comes at the expense of a reduction in strength, modulus, and T g because of the low modulus of rubber particles [15,16]. Rigid particles, such as silica [17], carbon nanotubes [18,19], carbon nanofibers [20,21], and graphene and its derivates [22,23] are also used as tougheners for brittle epoxy resin due to low reductions in modulus and strength. However, these rigid fillers often do not impart significant toughness improvement.
Fillers containing soft and rigid phases prove effective to boost the toughness of epoxy resins with a low reduction in strength and modulus [4,24]. Core/shell polymer particles, a type of structured composite material formed by at least two substances of different chemical compositions, are considered an ideal candidate as a toughening agent [25]. Jiang et al. reported that a simultaneous reinforcing and toughening effect could be obtained by adjusting the loading of core/shell particles and the ratio between the shell thickness and core diameter [8]. It was also demonstrated in an earlier publication that the critical stress intensity factor (K IC) of epoxy resins increased by 220% via incorporation of soft poly(butyl acrylate) core and rigid poly(methyl methacrylate) shell nanoparticles [26].
The toughening efficiency of nanofillers depends heavily on their dispersion states and interfacial interaction between nanofillers and resin matrix [27][28][29], and core/shell particles are no exception. A well-dispersed filler in the epoxy matrix is a prerequisite for epoxy toughening, and strong filler-matrix interfacial interaction facilitates effective stress transfer between the two phases [30,31].
Good dispersion and strong interfacial interaction both require high surface compatibility between the filler and the epoxy matrix [32]. The most commonly used method to improve the compatibility between a filler and the epoxy matrix is to introduce functional groups onto the filler surface [32,33]. Literature studies showed that filler surface modification with amine, carboxylic or epoxide groups not only improves the dispersion of the filler in an epoxy matrix but also enhances the interfacial stress transfer between the filler and epoxy matrix through interfacial reactions induced by these surface-reactive groups [27,30].
However, few studies have explored the relationship among particle compositions, morphologies and compatibility with the epoxy resin from the perspective of particle design. In addition, to the best of our knowledge, most studies on epoxy toughening with core/shell particles were from the perspective of particle sizes or particle loading levels, and there is almost no description of a core/shell particle's toughening effect based on its composition, especially based on the shell polymer polarity. Moreover, there is a clear absence of theories satisfactorily explaining why some particular core/shell particles toughen epoxy resins well, but others do not, and some of them even significantly lower the toughness.
The design of effective core/shell particles today remains a work of art, even with the commercialization of a few successful products. Researchers rely largely on experience and trial-and-error when it comes to core/shell particle synthesis, which limits the design space and prolongs the development cycle. It becomes essential to establish design guidelines by building a fundamental understanding of composite performance as a function of particle composition and structure.
Therefore, it was the intention of this work, through careful control of nanoparticle composition and morphology, to obtain important insights into the fundamentals of core/shell nanoparticle toughening, from which credible theories were proposed for the first time to quantitatively explain and predict the toughening performance of core/shell nanoparticles. Three types of nanoparticles with the same core but different shell compositions were prepared. The influence of these three types of core/shell nanoparticles in epoxy resins was investigated by evaluating their dispersion state, rheological and mechanical properties. Furthermore, the Hansen solubility parameters (HSP) of the formulated epoxy resin and the shell compositions were measured experimentally and were used to quantitatively describe the compatibility between core/shell nanoparticles and the epoxy matrix. The results not only showed that the performances of the core/shell nanoparticles in the epoxy matrix were greatly affected by the polarity of the shell polymer but also demonstrated good quantitative agreement with compatibility predictions based on the HSP theory. It is further illustrated that nanoparticles used for epoxy resin toughening could be designed or modified under the guidance of the HSP theory.
Preparation of core/shell nanoparticles
Formulations for nanoparticle synthesis are given in Table S1 (Supplementary Information). A particle with a soft polymer core and a hard polymer shell was selected as the core/shell configuration because it provided the largest toughness improvement, based on the published work [26]. For the precise control of the particle composition, all core/shell particles were synthesized with the same soft core and different rigid shells via a two-stage polymerization. Furthermore, to minimize the effect of varying T g of the shell composition, three polymers having almost the same T g while exhibiting significant differences in polarity were selected to construct the shell, namely polymethyl methacrylate (PMMA, designated as M), polystyrene (PST, designated as S) and polyacrylonitrile (PAN, designated as A). The resulting core/shell structures were denoted B/M, B/A, B/S, wherein B was the poly(butyl acrylate) core and M was the poly(methyl methacrylate) shell, and so on. The B/S particles that were surface-modified with glycidyl methacrylate (GMA) at different loading levels were named B/S-5, B/S-10 and B/S-15, wherein the number denoted the percentage of the functional monomer GMA in the shell polymer composition. The synthesis method and procedure were the same as reported in a previous study [26].
Epoxy composites fabrication
In order to maintain the particle morphology, the core/shell nanoparticles were re-dispersed in the epoxy resin using a phase-transfer method described in an earlier publication [26]. The epoxy resins were mixed with the anhydride hardener, with a 1:1 stoichiometric ratio of epoxy groups/anhydride groups. For all samples, the loadings of core/shell particles and cure accelerator (2E4MI) were kept at 10 and 0.5 wt%, respectively. The resin mixture was poured into aluminum molds with specific dimensions and degassed for 1 h at 60°C. All the particles were incorporated into the same resin formula and cured under the same profile, which was 120°C for 1 h, plus 140°C for 2 h and finally 170°C for 1 h.
Determination of HSP
HSP of core/shell particles and epoxy matrix were determined based on the observation of the interaction between the material and solvents [32,34]. In the case of formulated epoxy resins, 0.5 g of the resin was added into 18 different solvents with known HSP and dwelled for 24 h. The mixture was rated either as "1" or "0", based on the observation of the dispersion state of epoxy resins in the solvent. A rating of "1" represented a resin being fully dissolved in that particular solvent, while a rating of "0" denoting a resin that was insoluble. Due to the crosslinked structure and particulate form of the core/shell particles, they were dried into films and the film's degree of swelling was used for the rating. For higher precisions, the degree of swelling was rated according to six grades. A rating of "5" designated the highest degree of swelling in a particular solvent. A rating of "0" denoted no swelling or minimal degree of swelling. It was important that the particles be completely dried at a temperature higher than their minimum film formation temperature (MFFT), so that a film, instead of powder, could be obtained. However, by experimentally determining the degree of swelling for core/shell particles, two assumptions were made: (1) the synthesized core/shell particle had a perfect core/shell structure, that is, the core layer polymer was completely wrapped in the shell layer polymer, and (2) the swelling of core/shell particles was only related to the shell polymer and the contribution of the core layer polymer was neglected. The ratings for the resin and the core/shell particle film in the selected solvents were processed by HSPiP Software (Hansen Solubility Parameters in Practice, 5th Edition), and the HSP components (δ D , δ P , and δ H ), and the radius value (R o ) of the sphere of interaction were calculated and fitted.
Characterization
The particle size and size distribution of the core/shell particles in their original aqueous medium were characterized by dynamic light scattering (DLS) on a Malvern Nano-ZS analyzer. Particle morphology was observed using a JEOL JEM-2100 transmission electron microscope (TEM), in which the particles were first stained with phosphotungstic acid (PTA) before being subjected to TEM. Fourier transform infrared (FTIR) spectra were recorded on a Nicolet 6700 FTIR spectrometer using the attenuated total reflection (ATR) mode. The rheological properties of epoxy resins containing core/shell particles were investigated on a TA DHR Rheometer with frequency ramping from 0.1 to 1,000 s −1. The degree of dispersion of core/shell particles in the epoxy matrix was qualitatively characterized by TEM on thin composite sections with thicknesses ranging from 30 to 50 nm, which were prepared by freezing ultramicrotome. Tests on tensile properties and fracture toughness were performed on a Wance ETM104B-EX electronic universal testing machine with a 2 kN load cell, using appropriate fixtures. The dog-bone-shaped tensile samples were prepared according to ASTM D638 and an extensometer was used to measure the strain. Fracture toughness was measured according to ASTM D5045 using a single-edge-notch bending geometry. A pre-crack was initiated by tapping a sharp chilled blade in the notch. The critical stress intensity factor (K IC) and critical strain energy release rate (G IC) were calculated according to the equations given in ASTM D5045. Each reported value for the tensile and fracture toughness tests was the average of at least six specimens. The fracture surfaces from tensile tests were coated with gold and their morphologies inspected on a Hitachi S4800 scanning electron microscope (SEM).
Core/shell nanoparticle characterization
Three types of nanoparticles with the same core but different shell compositions were prepared via emulsion polymerization. The DLS analyses of the particle size and distribution are shown in Figure 1a-c. The Z-average intensity particle diameters (d) and polydispersity index (PDI) are presented in Table S2 (Supplementary Information). The particle size distribution curves all had a single peak, indicating that the particles were of monomodal distribution. Moreover, the increase in particle sizes relative to the core composition indicated that the shell monomers were polymerized on top of the cores without new particle nucleation during the second polymerization stage, namely the shell-forming stage.
The synthesized nanoparticles were subjected to TEM to observe their morphologies. The PTA negative staining technique was employed to improve the weak contrast between different polymer phases in the nanoparticles. Figure 1 presents images of B/M (d), B/S (e), and B/A (f) nanoparticles, wherein the light and dark domains corresponded to the core and shell phases, respectively. It is evident from Figure 1(d-f) that the nanoparticles synthesized were spherical with relatively uniform sizes, showing no signs of undesirable secondary particle nucleation.
The particle compositions were verified by FTIR. As shown in Figure 1(h), all three particles exhibited a significant peak at 1,730 cm −1, ascribed to the C=O stretching vibration of ester groups. As expected, the peak in B/A particles at 2,242 cm −1 represented the characteristic absorption of -C≡N from PAN. The B/S particles had strong absorptions at 3,027, 1,602, 1,494, 761, and 700 cm −1, which could be attributed to the characteristic bands of -CH and C=C bonds on the benzene ring in PST.
Dispersion of nanoparticles in epoxy resin
To obtain a uniform epoxy-nanoparticle mixture without compromising its morphology and particle size distribution, the aqueous medium was replaced by an epoxy resin via a phase-transfer process, which was a proven technique described in an earlier study [26]. It is worth noting that due to the large difference in the shell polymer polarity of the three types of particles, the actual phase transfer processes might be slightly altered in terms of solvent water ratios. Therefore, it was also necessary to verify whether the particle morphology and degree of dispersion were preserved after the phase transfer. The TEM images of the epoxy resins containing various core/shell nanoparticles are shown in Figure 2. As can be seen, the three types of shell polymers, which differed significantly in polarity, led to three markedly different particle dispersion states in the epoxy resin. The B/M particles in Figure 2(a), having the moderate-polarity PMMA as the shell polymer, were well dispersed. In contrast, both particles having shell polymers of lower polarity (B/S) and higher polarity (B/A), as shown in Figure 2(b) and (c), exhibited aggregation, with the aggregation of B/A particles being the most significant.
The degree of nanoparticle dispersion in a resin depends on their mutual compatibility. Since the shell polymer is in contact with the resin, it determines the particles' compatibility with and the degree of dispersion in the epoxy resin. In order to quantitatively characterize, describe and predict the particle compatibility with the epoxy resin, the authors adopted the theory of solubility parameter. According to the principle of "like dissolves like", the solubility of a given polymer in various solvents is largely determined by its chemical structure [35]. Therefore, the HSP are extensively used as a method to quantitatively characterize the structure-property relationship of polymer materials. In addition, HSP are used to predict the compatibility of polymers and to characterize the surfaces of fillers to improve their dispersion and adhesion [32,[36][37][38]. Based on a similar principle of "like seeks like," materials with similar HSP values would exhibit high mutual physical affinity or rather compatibility [37]. Thus, a comparison of HSP of the core/shell nanoparticles and the epoxy resin potentially provides insights into the compatibility between them as well as a quantitative view of nanoparticles' degree of dispersion.
The solubility or degree of swelling of a polymer can be measured using a series of solvents with known HSP, and its HSP can then be calculated conveniently by HSPiP. The results are summarized in Table 1, together with the relative energy difference (RED, calculated by HSPiP) values of materials in various solvents. The RED value provides an estimate of whether two materials will be miscible or not, i.e., miscible when RED <1, partially miscible when RED = 1, and immiscible when RED >1 [39,40]. Figure 3 presents the HSP spheres of the neat epoxy resin and the various core/shell nanoparticles, with each color representing one composition. The HSP values are presented in Table 2. The solubility parameter components, δ D, δ P and δ H, correspond to the three types of interactions, namely dispersion, polar and hydrogen bond [37,40,41]. The radius R o of the sphere is the interaction radius. Solvents that fall within R o are expected to dissolve or swell the corresponding polymer [42]. As shown in Table 2, the δ D, δ P and δ H values for B/S particles were the smallest among the three types of particles, suggesting a weak particle-particle interaction, while the highest values of δ D and δ P for B/A particles indicated a strong particle-particle interaction. Good compatibility generally requires that the respective HSP values for the two polymers are close to each other; thus their R a, corresponding to the distance between the centers of the two polymer spheres, is considered to be a good measure of their compatibility [32,43]. The HSP distance R a was calculated by the following equation:

R a ² = 4(δ D1 − δ D2)² + (δ P1 − δ P2)² + (δ H1 − δ H2)²,

where subscripts 1 and 2 represent the two different polymers, respectively. In addition, the extent of the overlapping of HSP spheres is an intuitive way to represent compatibility [32]. Therefore, the HSP distance R a and the overlapping of the spheres were combined to estimate the compatibility between two polymers (Figure 4). It is evident from Figure 4 that the distance R a between B/M and the epoxy matrix resin was the smallest (0.50), and the overlapping was the largest. As a matter of fact, the HSP sphere of B/M was completely included in the HSP sphere of the epoxy resin, indicating that the B/M particles had excellent compatibility with the epoxy resin. Differently, the overlap between B/S and the epoxy resin was smaller and the R a was larger (4.48), so the B/S particles and epoxy resin were less compatible. The R a of the B/A particles was the largest (6.56), giving the lowest compatibility between B/A and the epoxy resin.
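The distance and RED calculations are simple enough to sketch directly; the function names and the numeric inputs below are placeholders, not the measured values reported in Tables 1 and 2.

```python
import math

def hsp_distance(hsp1, hsp2):
    """Hansen distance Ra between two materials, each given as a
    (delta_D, delta_P, delta_H) tuple in MPa^0.5."""
    dD, dP, dH = (a - b for a, b in zip(hsp1, hsp2))
    return math.sqrt(4.0 * dD**2 + dP**2 + dH**2)

def red(particle, resin, R0):
    """Relative energy difference: <1 miscible, =1 boundary, >1 immiscible."""
    return hsp_distance(particle, resin) / R0

# Placeholder HSP values (delta_D, delta_P, delta_H) for illustration only.
epoxy = (17.8, 9.5, 8.7)
shell = (17.5, 9.8, 8.9)
print(round(hsp_distance(epoxy, shell), 2))  # 0.7
```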
The above analysis demonstrated that the HSP method satisfactorily described and predicted the compatibility and the degree of dispersion of the core/shell particles in the epoxy matrix resin, which agreed with the experimental observations in this study.
Rheological behaviors
The influence of nanofillers on rheological properties is essentially important for the processing of nanocomposites. Rheology studies provide a convenient way to evaluate resin-filler interactions, filler-filler interactions, as well as the dispersion of fillers in the resin matrix [44,45]. The shear response is one of the most important characteristics for evaluating the rheological properties. Figure 5 shows the viscosity as a function of shear rate for the neat epoxy and epoxy resins containing core/shell nanoparticles. The neat epoxy resin had the lowest viscosity and was mostly Newtonian. As expected, adding nanoparticles increased the viscosity. More specifically, the viscosity of epoxy-B/M and epoxy-B/A increased by about 14 and 17 times, respectively, compared to the neat epoxy. Moreover, adding the B/A particles also made the blend significantly pseudoplastic, i.e., shear thinning.
Changes in rheological properties when nanoparticles are added to a matrix resin are determined by known factors such as particle morphology and the degree of dispersion, particle-resin and particle-particle interactions. The effect of these factors, however, may not contribute in the same direction or equally. For example, according to the above conclusions, the B/A particles had moderate interaction with the epoxy resin but strong particle-particle attraction due to their high polarity, which formed large aggregates and associative networks between particles, leading to high flow resistance, i.e., viscosity. Moreover, these networks of physical association would break apart under high shear, decreasing the viscosity and producing the effect of shear thinning. As for the B/M particles, their good compatibility with epoxy resin meant strong interfacial interaction, consequently inducing a significant increase in the viscosity of the resin mixture. The B/S particles had the least increase in the resin viscosity due to the relatively weak polar force and hydrogen bond in its shell polymer, which led to low particle-particle and epoxy-particle interactions.
Mechanical properties
Literature studies have shown, similarly to the physical properties discussed above, that the nanoparticles' degree of dispersion and their interfacial interaction with resins significantly influence the mechanical performance of composite materials via load distribution and interfacial stress transfer [30,46]. Based on the above understanding of the effects of the shell polymer polarity on the core/shell nanoparticles' dispersion and interfacial interaction, it was expected that the composite mechanical properties would also be a function of the shell composition. The results of tensile and fracture toughness tests are shown in Figure 6 and the data are summarized in Table S3 (Supplementary Information). Compared to the neat epoxy resin, the tensile strengths of the nanoparticle-containing resins were lower (Figure 6a). More specifically, when B/M, B/S and B/A nanoparticles were incorporated, the tensile strengths decreased by 14.6, 40.0, and 51.4%, respectively. However, their Young's moduli changed little (Figure 6b). Since tensile strengths are normally controlled by the type and number of defects in the materials, and considering that the incorporation of particles introduces a different phase that may act as defects, it would be expected that the tensile strength of the epoxy resin containing particles would decrease. As demonstrated in previous sections, the particle aggregations, both in size and severity, were minimal for B/M, but were significant for B/S and the largest for B/A; therefore, the cured resin having the B/A particles would contain the most significant defects and exhibit the highest tensile strength drop, while the resin having B/M particles would show the least strength reduction. Different from the tensile strengths, Young's modulus is determined by the rigidity of the material. With the T g and the modulus of the shell polymers close to those of the cured epoxy resin, no significant change in the modulus was expected.
As for the effect of core/shell nanoparticles on fracture toughness, which is the main focus of this study, interesting but unsurprising results were observed. It was apparent from Figure 6c and d that only the B/M nanoparticles significantly improved the fracture toughness, with K IC and G IC increasing by 182 and 852%, respectively, compared to the control epoxy resin. This is a higher improvement in toughness than that reported for leading commercial products, i.e., MX125 from Kaneka Corp. gave 127 and 674% gains for K IC and G IC, respectively [47]. Not surprisingly, nanoparticles B/S and B/A showed little toughening effect.
Again, the root cause for the mechanical property discrepancies was the degree of dispersion of the nanoparticles as observed in the aforementioned TEM, and their compatibility with the epoxy resin as demonstrated by the theory of HSP. In order to further evaluate the effect of the shell polymer polarity on compatibility and interfacial bonding, the fracture surface morphologies were observed by SEM, as shown in Figure 7.
The observation of the fracture surface morphology confirmed the earlier conclusions by TEM. The nanoparticle B/M was uniformly dispersed in the cured epoxy resin, and the fracture surfaces exhibited multiplane features with numerous microcracks, indicating high-energy absorption that was favorable for high fracture toughness. Higher magnification images (Figure 7b-c) revealed that B/M particles were uniformly embedded in the epoxy matrix and no particle pull-out was observed, denoting strong interactions between B/M particles and the resin. In contrast, nanoparticles B/S and B/A both showed aggregation, with a higher level of aggregation for B/A particles than for B/S particles. The aggregated particles could not effectively block or deflect the crack propagation and absorb the energy, resulting in no improvement in the toughness. Moreover, it can be observed from Figure 7f and j that a large number of B/S and B/A particles were pulled out from the resin and left empty holes behind them, indicating weak interfacial bonding between the particles and the epoxy matrix. These aggregations would readily act as defects that led to decreased tensile strengths, as observed earlier.
Modifying nanoparticles under the guidance of HSP theory
The above results demonstrated that the degree of dispersions of the nanoparticles and their interaction with epoxy matrix resin could be well described and predicted by the HSP theory, from which fundamental and reasonable explanations can be derived to describe the physical mechanical behaviors of the particle containing resins.
To take this argument further, if the HSP theory truly holds for these nanoparticle-containing epoxy resin systems, it shall yield predictions on what constitutes the most favorable conditions, as well as point to the direction toward which modifications could be made to a "bad" particle to turn it into a "good" one. For example, if a particle has a much lower HSP than the resin it intends to toughen, the HSP theory should not only forecast its poor toughening effect but also predict that increasing the particle's HSP to match the resin's would result in higher toughness.
To test the hypothesis, the B/S nanoparticle was selected even though the B/A particle was the lowest performer because the emulsion copolymerization of styrene with a number of co-monomers could be better controlled than that of acrylonitrile. The idea was to use a co-monomer to modify the polarity of the polystyrene shell composition so that it would fall into the solubility parameter ranges calculated from the HSP theory. The degree of dispersion, interfacial interaction and mechanical properties were subsequently studied.
In this study, GMA was selected as the co-monomer, for it carried the more polar epoxy functional groups, which happened to be the same as in epoxy resins, and it copolymerized well with styrene. Three levels of GMA were used to copolymerize with styrene, and the copolymer became the new shell on the PBA core, producing the modified B/S nanoparticles. The solubility results are presented in Table S4 (Supplementary Information). Table 3 presents the HSP of the modified B/S nanoparticles. It can be seen from Table 3 that, for the modified B/S particles, their HSP values increase while their HSP distance R a decreases with increasing GMA levels, indicating that their compatibility with the epoxy resin would become better.
Tensile and fracture toughness tests were performed and the results are shown in Figure 8 and Table S3 (Supplementary Information). Compared to the pristine B/S particles, the modified particles produced over 70% increase in tensile strength, reaching the same strength as the neat epoxy resin. More importantly, the modified particles showed significant 46 and 114% increases in K IC and G IC , respectively. The effect of incorporating GMA on the dispersion state of B/S particles in the epoxy matrix was inspected under a TEM, as shown in Figure 9. It is noted that due to the very close performances of these three modified particles, only the middle-level GMA-modified particle (i.e. B/S-10) was selected for the TEM observation. As expected, the GMA-modified-B/S particles showed a much more uniform dispersion in the epoxy matrix, compared to the unmodified particles. To further study the effect of GMA modification on epoxy-particle interaction, the fracture microstructure from the tensile test was examined using an SEM. In the case of unmodified B/S particles (Figure 10a and b), evident particle agglomerates were present and were pulled out of the resin. However, the modified particles in the cured epoxy resin not only exhibited a much more uniform distribution but also showed a much stronger bonding to the resin matrix, as illustrated by Figure 10c and d, which displayed no particle pull-outs. The above results demonstrated that the HSP theory provides a fundamental understanding and explanation of the physical mechanical performances of polymer composites containing nanoparticles. It further confirms the validity and application of HSP theory as a guide for designing and/or modifying polymer compositions, from which high-performance particles, including but not limited to core/shell nanoparticles, can be synthesized or constructed.
Conclusions
This study aimed at understanding the design principles of core/shell nanoparticles for epoxy toughening. To achieve this, the composition of the core/shell nanoparticles, particularly the shell polymer polarity, was carefully controlled, and their effects on the dispersion, rheological and mechanical properties in the epoxy matrix were investigated.
1. The B/M particles, with PMMA as the shell polymer, which had the polarity matching that of the epoxy matrix resin, would form a uniform dispersion, yielding the most significant toughening effect, which was 220 and 851% enhancement in K IC and G IC, respectively.
2. If the polarity of the shell polymer did not match that of the epoxy matrix resin, such as the B/S particles with low shell polymer polarity or the B/A particles with high shell polarity, the particles would more or less aggregate, lowering mechanical performance and leading to little or no toughening.
3. Theoretical HSP compatibility analysis verified the experimental observations, i.e., the difference in polarity of the shell polymer would greatly affect the compatibility of the core/shell particles with the epoxy matrix, which in turn determined their dispersibility and interfacial interaction.
4. The HSP compatibility not only helped to explain the differences in particle toughening performances but also facilitated the design of better core/shell nanoparticles for effective epoxy toughening. | 7,084 | 2021-01-01T00:00:00.000 | [
"Materials Science"
] |
RSSI based Device-free Human Identification
—Researchers have explored many methods and techniques for human detection and identification in diverse contexts. One such approach is studying variations of Radio Frequency (RF) signals (e.g., Received Signal Strength Indicator (RSSI)). RSSI based techniques have been widely used in human detection, but not for recognizing the identity of a person from a group of people due to its noisy nature. This research focused on investigating the possibilities and limitations of device-free human identification using WiFi RSSI data with machine learning-based classification techniques. To inspect the characteristics of WiFi RSSI data, the authors have conducted multiple statistical analyses. A Kalman filter was applied to minimize the noise in WiFi RSSI data, followed by a feature extraction process. Furthermore, the authors have conducted several research experiments in different configurations of receivers and participant numbers. The experimental results show that the human identification accuracy level increases with the number of receivers used for the data collection. Moreover, the authors have identified that human identification accuracy can be further improved by leveraging proper noise reduction methods and feature extraction processes. For a Kalman filter applied and feature-extracted WiFi RSSI dataset of 20 people, the Support Vector Machine (SVM) - Radial Kernel model recorded the highest average identification accuracy of 99.58%.
I. INTRODUCTION
Human detection and identification in open spaces are vital in many research as well as for industry-level applications. Human detection refers to distinguishing a human's presence from other physical objects (e.g., animals). In contrast, human identification refers to the unique identification of a particular person from a finite set of individuals. In this paper, the size of that particular set is limited to 20 since a typical office or meeting room accommodates around 15-20 people.
Some applications, such as trip-wire systems, need only to detect a human's presence, whereas sophisticated authentication systems need to uniquely identify the person. Hence, depending on the system requirements, system developers/researchers must deploy suitable human detection and identification models accordingly.
Many research studies have investigated and implemented human detection and identification in the current literature. The use of properties and behaviors of Radio Frequency (RF) signals is one such approach. When RF signals travel through solid objects and open spaces, they attenuate and demonstrate variations in signal strength. This behavior has been studied for application in localization and human identification. Moreover, since RF signals can penetrate through solid objects, these techniques can be used in dark environments as well [1]. Therefore, they have the potential to be used in many scenarios, even when traditional human identification methods such as video cameras and thermal cameras fail to provide sufficient accuracy.
RF signals comprise multiple attributes from both the frequency and time domains. The Received Signal Strength Indicator (RSSI) is one such attribute, obtainable from any wireless transmitter, and it is available in widely used technologies such as WiFi, Bluetooth, and ZigBee. The primary purpose of RSSI is to provide information on the signal strength to other wireless devices [2]. It has been identified that when RF signals travel through objects, RSSI variations occur, and these variations can be used for human detection in open spaces [1]. Wilson and Patwari have named this RSSI reduction 'shadow-loss' in their research [1].
According to the existing literature, the emerging device-free localization approach 'Radio Tomographic Imaging (RTI)' is based on the 'shadow-loss' of RSSI values [1]. Furthermore, researchers have used these RTI-based techniques for human tracking and activity detection as well [3], [4], [5]. Since participants do not need to carry any device dedicated to the identification, these approaches are considered device-free methods. However, it is important to note that RTI or RSSI based approaches for unique human identification are limited in the current literature.
Researchers have highlighted that RSSI data suffer from high noise and lack sufficient information to identify humans uniquely [6], [7]. Therefore, WiFi Channel State Information (CSI) has been used for human identification studies [8], [9], [10]. This is because CSI data are considered information-rich, providing information including, but not limited to, signal amplitude, phase, Doppler shift, delay spread and channel frequency response. Moreover, these studies utilize supervised machine learning models for the classification process. Similar to RSSI based approaches, these techniques also follow device-free principles.
Even though the current literature pays limited attention to RSSI based human identification, RSSI data are widely available from many wireless networks and can be extracted even without connecting to the network. Most importantly, as opposed to CSI, RSSI does not need any special device or equipment for data extraction, as any wireless receiver can capture it [11]. Hence, RSSI based approaches have a wide range of applicability in real-world scenarios. However, due to the noisy nature and limited information content of RSSI data, it is crucial to examine their feasibility for unique human identification. Therefore, there is a clear research gap in identifying and exploring the possibilities of RSSI based human identification methods and designing solutions that address real-world setting limitations.
This research contributes to the human detection and identification research domains by empirically investigating the possibilities and limitations of using WiFi RSSI as a device-free human identification method. This study aims to observe, analyze, and critically evaluate human identification accuracy levels based on different machine learning models, experimental setup arrangements, and data processing methods. As a preliminary study, a statistical analysis was conducted to examine the properties and behavior of WiFi RSSI data. Then, several research experiments were performed to explore the differences in human identification accuracy according to the machine learning model used, the experimental setup configurations and multiple data processing steps.
The rest of the paper is arranged as follows. Section 2 presents the most related existing literature on human detection and identification approaches based on radio waves. The research methodology, consisting of preliminary studies and research experiments, is explained under section 3. A critical analysis of the research experiment results is presented in section 4, and finally, section 5 presents the research conclusion and possible future directions.
II. BACKGROUND
This section presents the existing literature on RF signal-based device-free human detection and identification under the following subsections.
A. RSSI based Human Detection and Tracking
Wilson and Patwari have presented the RSSI based Radio Tomographic Imaging (RTI) approach for localization and human presence detection [1]. They state that this method can be used in many real-world scenarios, such as emergency rescue missions and physical intrusion detection systems. Furthermore, RTI is considered a device-free approach, since external transceivers collect the RSSI data instead of participants carrying devices. The RTI method is based on Computed Tomography principles, where signal strength is the main measurement. Figure 1 presents a typical setup used in RTI-based research studies. In one of their studies, Wilson and Patwari collected RSSI data using 28 transceiver nodes in a 21 x 21 foot square open space. For the data collection, the researchers used TelosB wireless nodes, which are based on the IEEE 802.15.4 standard and operate on the 2.4 GHz frequency. They concluded that the proposed RTI method can visualize a human's presence based on RSSI attenuation values [1]. As an extension of this research, the same researchers presented an improved human tracking method using RTI techniques on RSSI data [3]. In their approach, a Kalman filter is used for noise reduction, and variance-based RTI, a variation of RTI, is used to improve human presence detection and tracking accuracy [3]. Following a similar approach, Piumwardane et al. [12] and Niroshan et al. [13] have empirically investigated the possibility of applying these RTI techniques in WiFi networks and increasing human detection accuracy through a human-interference model, respectively.
Scholz et al. [14] have described an RSSI-based human activity recognition approach that compares the accuracy levels of device-free and device-bound methods. The researchers deployed transceivers based on the IEEE 802.15.4 standard and used machine learning models for activity classification. It was asserted that both the device-free and device-bound methods yielded similar activity recognition accuracy levels [14].
Konings et al. [15] have proposed an RSSI based device-free localization method which, in contrast to RTI methods, does not require offline calibration. However, similar to RTI methods, the transceiver locations must be known in advance in this approach. Their spring relaxation approach utilizes RSSI data from commercial off-the-shelf (COTS) wireless devices, and the researchers discuss the unavailability of CSI data from COTS devices as a limitation. Booranawong et al. [16] have conducted a literature review on existing filtering methods for RSSI data and presented a novel filtration method that considers both the accuracy and the computational complexity of RSSI data filtration; subsequently, they introduced an adaptive filtration method for human detection and tracking with reduced computational complexity [16]. Kaltiokallio et al. [17] have proposed Bayesian filter based methods, and the researchers state that their approach outperforms state-of-the-art imaging based localization methods by 30%-48%. Panwar et al. [18] have proposed a novel localization method leveraging both time-of-arrival and RSSI information. The authors claim that their approach works in non-line-of-sight situations without prior knowledge of the sensing environment, and they use a majorization-minimization algorithm to improve computational efficiency.
B. CSI based Human Identification
Zhang et al. [8] were the first to use WiFi signals for unique human identification. In their research, WiFi CSI data were collected for 6 people by asking them to walk between two transceivers. Their research methodology comprises silence removal, segmentation, feature extraction, and classification. For a group of 6 people, they achieved a human identification accuracy of 77% [8]. Hong et al. have followed a similar approach for unique human identification [9]. Their research explored the possibility of using WiFi CSI data for human identification in 3 scenarios: standing, walking, and marching. They proposed a novel CSI feature named subcarrier-amplitude frequency (SAF) for use in the classification process with an SVM-Linear kernel machine learning model [9]. 'Wii' [10] is another attempt to identify humans uniquely using WiFi CSI data with an SVM-Radial Kernel model. A Principal Component Analysis (PCA) was conducted, followed by a low-pass filter to minimize the noise in the CSI data. In their research experiments, over 1500 gait instances were recorded for eight human subjects. They achieved over a 90% accuracy level and discussed the possibility of using 'Wii' in home security systems as well [10].
Nipu et al. [7] have evaluated two machine learning-based classification models to determine the human identification accuracy for a group of five participants. According to the reported results, the decision tree and random forest models recorded average human identification accuracies of 84% and 78%, respectively. Mo et al. [19] have developed a deep learning model, Convolutional Long Short-Term Memory (CLSTM), to identify humans uniquely by leveraging WiFi CSI data, and proposed a data augmentation method to reduce the data collection cost. Their approach achieved 92% accuracy for a group of 8 people. Zou et al. [20] have proposed AutoID, a human identification system underpinned by WiFi CSI data; they leverage human gait information and treat CSI data as a human fingerprint. Their approach achieved 91% accuracy for a group of 20 people. Similarly, Ming et al. [21] also leveraged human gait to uniquely identify humans using CSI data; they used an LSTM model with WiFi CSI data and achieved 96% accuracy for a group of 24 people.
In a different approach, Chen et al. [22] have proposed a fusion method that integrates the information yielded by WiFi CSI data and camera videos. Importantly, this approach achieved 97.01% real-time detection accuracy for a group of 25 people. However, in this method, the participants had to carry a WiFi device, and the device ID alongside the WiFi CSI data was used to build user profiles.
C. RSSI based Human Identification
'Radio Biometrics' by Xu et al. [6] is one key piece of research that has examined and compared both WiFi CSI and RSSI data for human identification. Data collection was carried out with the participation of 12 people, using a WiFi chipset comprising a 3 × 3 multiple-input multiple-output (MIMO) antenna. With their novel approach, the Radio Biometrics Refinement Algorithm, the CSI based technique recorded a 98.78% accuracy level, whereas the RSSI based method achieved only a 31.93% accuracy level for human identification. In their conclusions, the researchers asserted that this is mainly due to the noise in WiFi RSSI data [6]. In a similar study, Zhanyong et al. [23] have proposed CrossSense, a WiFi sensing approach that considers both RSSI and CSI data for gait and gesture recognition. Their experimental results also suggest that WiFi CSI data based models have higher accuracy compared to RSSI based models.
Dharmadasa et al. [24] have proposed a WiFi RSSI based human identification approach using a supervised machine learning method. The authors applied a multinomial logistic regression model to WiFi RSSI data for both human detection and identification. However, the study was limited to 5 participants and did not consider any noise reduction techniques. Moreover, it used 7 transmitters and a single receiver for data collection, which is challenging to implement in real-world scenarios.
D. Summary
In summary, it is evident that RSSI based methods and techniques are widely used for human detection and localization. Approaches such as RTI employ a large number of transceivers for data collection, making them impractical in real-world settings. Furthermore, to use RTI methods, prior knowledge of transceiver placement is needed, which makes them further challenging to use in closed or private environments. Since RSSI suffers from noise, researchers are more interested in using WiFi CSI data for human identification; hence, the literature on RSSI based human identification is limited. Many WiFi CSI based approaches have used machine learning models for human classification. However, it was identified that the participant number (sample size) is low in many studies, and the possibility of using multiple transceivers for data collection has not been investigated. Hence, a research niche is visible in WiFi RSSI based human identification methods in terms of the data collection, processing and classification phases.
III. RESEARCH METHODOLOGY
Figure 2 shows the main stages followed in this research study.
A. Experimental Setup Design and Implementation
All research experiments were conducted in a covered empty room. Figure 3a illustrates the room dimensions and the placement of the partition board wall. The main reason for separating the WiFi receivers from the WiFi transmitter was to simulate a scenario where an outsider monitors the inside of a room without authorization. Furthermore, as shown in Figure 3b, the WiFi transmitter was placed 2.75m above the room floor, centered on the wall, and the WiFi receiver was placed 0.5m above the room floor. Both placements were made to ensure that the experimental setup resembles a real-world setting. To reduce any interference that could be caused by movements, the researchers ensured that, except for the individuals who participated in the data collection and the researcher conducting the experiments, no one else was present in the experimental environment while collecting the data. Since this research aimed at exploring the possibilities of using RSSI for human identification in real-world settings, all other environmental factors were kept unchanged. For example, the researchers did not use shields to reduce the interference from other WiFi networks, ensuring that the experimental environment was as similar as possible to real-world settings.
Prolink -H5004NK WiFi router was used as the WiFi transmitter, and Dell Inspiron 5110 laptops were used as WiFi receivers, which were equipped with Intel Centrino N1030 WiFi adapters. The laptops were installed with Ubuntu-18.10-Desktop (64bit) operating system. The researchers developed a shell script to scan and capture RSSI values from the WiFi transmitter and store them in laptop hard-drives. Furthermore, after several attempts, 0.1s was selected as the scanning time. Details on the enhanced experimental setup design are presented at the end of this section.
B. Data Collection and Analysis
1) RSSI data collection: The data collection was carried out with the participation of 20 people. A convenience sampling method was used to select these participants, and their composition is as follows.
• Age: 20 - 60 years
• Gender: Male - 11, Female - 09
• Height: 1.48m - 1.86m
• Weight: 44kg - 92kg
As shown in Figure 4, the room floor center was marked to place human subjects. The following steps were carried out in the data collection process, and Step 04 was repeated for all twenty participants.
2) RSSI data visualization: To examine patterns and anomalies in the collected WiFi RSSI data, the records were visualized using line graphs. As shown in Figure 5, it was observed that in the first and last 10 seconds (approximately) of the data collection period, the WiFi RSSI data show exceptional variations. It was identified that this was due to the movements of the person conducting the research experiments: as the researcher walks near the WiFi receivers to start and end the data collection process, the researcher's movements impact the WiFi RSSI values. Hence, to minimize the interference caused by these external factors, the first and last 10 seconds of each WiFi RSSI data record were filtered out.
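A minimal sketch of this trimming step, assuming a record is a NumPy array sampled at the 0.1 s scan interval reported earlier (names and values are illustrative, not from the study's code):

```python
import numpy as np

def trim_rssi(record, scan_interval=0.1, trim_seconds=10):
    """Drop the first and last ~10 s of an RSSI record, where researcher
    movement near the receivers distorts the readings."""
    n = int(trim_seconds / scan_interval)  # 100 samples per 10 s at 0.1 s scans
    return np.asarray(record)[n:-n]

# Example: a hypothetical 120 s record of RSSI readings in dBm
record = np.random.default_rng(0).normal(-55.0, 2.0, size=1200)
print(trim_rssi(record).shape)  # (1000,): the middle 100 s are retained
```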
C. Preliminary Studies
1) Statistical analysis on WiFi RSSI data: As the first step, a normality test was conducted on the collected WiFi RSSI data to investigate whether WiFi RSSI data follow a normal distribution in the presence of people. For each data record, the Shapiro-Wilk normality test was performed with the following null hypothesis.
Hypothesis H0: WiFi RSSI data has a normal distribution.
Since the returned p-value for each data record was lower than 0.05, the null hypothesis was rejected, and the researchers decided to use non-parametric tests for the subsequent steps.
To determine whether the collected RSSI data are statistically different from person to person, the researchers conducted the Mann-Whitney U test for all participant pairs with the following null hypothesis.
Hypothesis H0: X and Y data samples are taken from the same population.
Here, X and Y refer to the WiFi RSSI data records of two people who participated in the data collection process. The Mann-Whitney U test was conducted at a 95% confidence level, and since the returned p-value was lower than 0.05, the null hypothesis was rejected. Hence, it was asserted that the WiFi RSSI data of different people are statistically different.
2) Investigation on machine learning models: The existing literature was examined to determine the machine learning models applied in RSSI/CSI based human identification studies. It was identified that researchers have leveraged Support Vector Machine (SVM) based models and the Multinomial Logistic Regression model for human identification in both CSI and RSSI based research [9], [10], [24]. Hence, the SVM-Linear Kernel (SVM-L), SVM-Radial Kernel (SVM-R), SVM-Polynomial Kernel (SVM-P), SVM-Sigmoid Kernel (SVM-S) and Multinomial Logistic Regression (MLR) models were examined in the preliminary experiments of this study. Here, Human Identification Accuracy (HIA) refers to correctly identifying a particular person from a given set of participants and was calculated as HIA = (CP / TP) × 100, where CP and TP denote the number of correctly identified RSSI data points and the total number of data points, respectively. The average human identification accuracy refers to the average of all HIAs for a particular set of participants.
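The two hypothesis tests and the HIA formula above can be sketched as follows; the per-person RSSI records here are hypothetical stand-ins, and SciPy's shapiro and mannwhitneyu implement the tests named in the text:

```python
import numpy as np
from scipy.stats import shapiro, mannwhitneyu

rng = np.random.default_rng(1)
rssi_a = rng.normal(-54.0, 2.0, 600)  # hypothetical record, person A
rssi_b = rng.normal(-57.0, 2.0, 600)  # hypothetical record, person B

# Shapiro-Wilk: H0 = the record follows a normal distribution
_, p_norm = shapiro(rssi_a)
print("normal" if p_norm >= 0.05 else "not normal -> use non-parametric tests")

# Mann-Whitney U: H0 = both records come from the same population
_, p_mwu = mannwhitneyu(rssi_a, rssi_b, alternative="two-sided")
print("same population" if p_mwu >= 0.05 else "statistically different people")

def hia(cp, tp):
    """Human Identification Accuracy: HIA = (CP / TP) * 100."""
    return cp / tp * 100.0

print(hia(1195, 1200))  # 99.58% for these illustrative counts
```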
For the preliminary study, all 20 participants were considered, but limited to WiFi RSSI data from a single receiver (as shown in Figure 4). Table I presents the average human identification accuracy alongside the machine learning model used. Since the Multinomial Logistic Regression, SVM-Linear Kernel and SVM-Radial Kernel models outperformed the other models, they were selected for the next phase of the research experiments. Furthermore, the k-fold cross-validation method was used for the evaluation, where k was set to 10 after considering the size of the WiFi RSSI dataset.
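A sketch of this preliminary model comparison under the stated choices; the feature matrix X and person labels y below are random placeholders for the single-receiver RSSI features, not the study's data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((2000, 6))          # placeholder RSSI feature matrix
y = rng.integers(0, 20, 2000)      # person labels: 0..19

models = {
    "SVM-L": SVC(kernel="linear"),
    "SVM-R": SVC(kernel="rbf"),
    "SVM-P": SVC(kernel="poly"),
    "SVM-S": SVC(kernel="sigmoid"),
    "MLR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=10)  # 10-fold cross-validation
    print(f"{name}: average HIA = {scores.mean():.2%}")
```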
D. Data Processing
1) WiFi RSSI data noise reduction: WiFi RSSI data are considered noisy and highly susceptible to interference [2], [26], [27]. Furthermore, Wilson and Patwari also used a Kalman filter for RSSI based human detection and tracking [3]. The Kalman filter, also known as linear quadratic estimation (LQE), is an iterative process that provides state estimations of a non-observable variable based on observed noisy data. Two conditions must be fulfilled to apply a regular Kalman filter [28]: the model or system should be linear, and the observed error must follow a Gaussian distribution. The RSSI data collected in this study fulfill both conditions. The RSSI data were collected on stationary people, so the estimation model can be approximated as linear, and, as per the existing literature, RSSI noise (e.g., multi-path fading, device sensing errors) can be approximated by a Gaussian distribution [3], [2].
The researchers adopted the approach followed by Wouter et al. [2], where a Kalman filter is applied to noisy RSSI data in a stationary environment setting. The following steps explain the application of a Kalman filter to RSSI data. Equation 2 defines the general transition model:
x_t = A·x_{t-1} + B·u_t + ε_t    (2)
x_t is the current state vector, A is the state transition matrix applied to the previous state vector x_{t-1}, B is the input matrix applied to the control input vector u_t, and ε_t represents the system noise.
Since in this study the transmitter and receivers are static and the test subject (human participant) remains stationary throughout the data collection time, it can be assumed that the RSSI values should be consistent and that all variations are due to noise. Based on that assumption, equation 2 can be simplified to:
x_t = x_{t-1} + ε_t    (3)
Equation 4 defines the observational model:
z_t = C_t·x_t + δ_t    (4)
z_t is the measurement produced from the state x_t and the measurement noise δ_t, and C_t is a transformation matrix. Since in this research the state and the measurement are equal (C_t = 1), equations 3 and 4 can be combined as shown in equation 5:
z_t = x_{t-1} + ε_t + δ_t    (5)
To update the Kalman filter, the following prediction steps were performed:
µ̄_t = µ_{t-1}    (6)
Σ̄_t = Σ_{t-1} + R_t    (7)
µ̄_t defines the expected state, which is based on the previous state, and Σ̄_t denotes the certainty of the prediction, which is again based on the previous certainty. R_t is the system noise.
Equation 8 defines a simplified Kalman gain:
K_t = Σ̄_t / (Σ̄_t + Q)    (8)
Q denotes the measurement noise, which in this research relates to the variance of the RSSI data.
Equations 9 and 10 define the Kalman filter update steps:
µ_t = µ̄_t + K_t·(z_t - µ̄_t)    (9)
Σ_t = (1 - K_t)·Σ̄_t    (10)
µ_t is the final predicted value from the system. Accordingly, in this research, a Kalman filter was applied to reduce the noise in the WiFi RSSI data. The KalmanJS JavaScript library was used to apply the Kalman filter to the collected raw RSSI data. As shown in Figure 6, the sharp fluctuations and variations of the raw RSSI were significantly reduced after applying this filter.
Fig. 6: Raw RSSI and Kalman filter applied RSSI
2) RSSI data feature extraction: Defining the feature vector is one of the critical steps in a machine learning-based approach. Hence, to identify the most relevant features of WiFi data, the existing literature was examined. After carefully reviewing machine learning-based human identification research, a set of RF features was listed. Subsequently, by leveraging ReliefF, a feature selection and ranking algorithm, the following features were selected [8], [9], [10].
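Returning to the noise reduction step, a minimal scalar implementation of Equations 3 and 5-10 along the lines of what the KalmanJS library computes; R and the initial uncertainty are assumed tuning values, while Q is taken from the data variance as the text describes:

```python
import numpy as np

def kalman_filter_rssi(z, R=0.01, Q=None, sigma0=1.0):
    """Scalar Kalman filter for a stationary RSSI stream (Eqs. 3, 5-10)."""
    z = np.asarray(z, dtype=float)
    Q = float(np.var(z)) if Q is None else Q  # measurement noise ~ RSSI variance
    mu, sigma = z[0], sigma0                  # initialize from the first sample
    filtered = np.empty_like(z)
    for t, z_t in enumerate(z):
        mu_bar, sigma_bar = mu, sigma + R     # prediction, Eqs. 6-7 (static model)
        k = sigma_bar / (sigma_bar + Q)       # Kalman gain, Eq. 8
        mu = mu_bar + k * (z_t - mu_bar)      # state update, Eq. 9
        sigma = (1.0 - k) * sigma_bar         # uncertainty update, Eq. 10
        filtered[t] = mu
    return filtered

raw = np.random.default_rng(0).normal(-55.0, 2.0, 1000)  # noisy stand-in record
print(raw.std(), kalman_filter_rssi(raw).std())          # fluctuations shrink
```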
E. Experimental setup enhancement
As observed in the preliminary experiments, the human identification accuracy was too low when using only one WiFi receiver (less than 50%) and unsuitable for real-world settings. Therefore, the experimental setup was enhanced with multiple receivers to increase the average human identification accuracy. Figure 7 illustrates the receiver placement and arrangement for 2, 3, 4 and 5 WiFi receivers. The data collection process described in section III-B was followed to collect WiFi RSSI data for each of the above experimental setup configurations. A total of 300,000 WiFi RSSI data points were collected. These data points were then labeled with the participant's identification number (i.e., Person 1 - Person 20).
For example, Table II shows a sample of a 5-receiver dataset that was later used with the machine learning models.
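Since Table II itself is not reproduced here, the following hypothetical rows illustrate the implied layout: one RSSI column per receiver plus the participant label used as the class.

```python
import pandas as pd

# Illustrative values only; real rows come from the five laptop receivers.
df = pd.DataFrame({
    "rssi_rx1": [-52, -53, -60],
    "rssi_rx2": [-57, -56, -58],
    "rssi_rx3": [-49, -50, -62],
    "rssi_rx4": [-61, -60, -55],
    "rssi_rx5": [-55, -54, -59],
    "person": ["Person 1", "Person 1", "Person 2"],
})
X, y = df.drop(columns="person"), df["person"]  # features and labels for the models
```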
IV. RESULTS AND ANALYSIS
This section discusses the experimental results with their interpretations and explanations.
A. Human Identification Accuracy with the Number of Participants
Figure 8 presents box-plot diagrams of average Human Identification Accuracy (HIA) against the number of participants (group size). Here, the number of participants refers to the sample size of the dataset. For example, if the number of participants is two, the dataset comprises RSSI data records from two participants; when it is increased from two to three participants, another data record is added to the dataset. These data records were selected randomly, and each data record was considered only once for each conducted experiment. As Figure 8 clearly depicts, the average HIA decreased drastically with the number of participants, and a similar pattern is visible across all three machine learning models. Importantly, it was observed that the Interquartile Range (IQR) also increased with the number of participants, resulting in low confidence in the HIA measures. In brief, the highest loss of accuracy was recorded for the MLR model, with a loss of 66.41% (from 90.8% to 30.5%), while the SVM-R model recorded the lowest loss of 64.71% (from 93.8% to 33.6%). Furthermore, the SVM-R model recorded the highest average HIA for every group size, while the MLR model yielded the lowest. Thus, the SVM-R model was selected for the next steps of this study.
B. Impact of Data Processing Steps on the Human Identification Accuracy
Figure 9 illustrates box-plot diagrams of average HIA with regard to the applied data processing steps. Figures 9(a), (b) and (c) denote the data processing steps: raw RSSI; Kalman filtered RSSI; and Kalman filtered and feature extracted RSSI data, respectively. Overall, in all three datasets, the average HIA decreased as the number of participants was incremented. In brief, the average HIA decreased from 93.8% to 31.6% (accuracy loss: 66.31%) for raw RSSI, from 94.8% to 35.4% (accuracy loss: 62.66%) for Kalman filtered RSSI, and from 98.8% to 37.4% (accuracy loss: 62.15%) for feature-extracted RSSI datasets. The highest HIA was recorded by the SVM-R classification model with the Kalman filter applied and feature extracted RSSI dataset, which also had the lowest average human identification loss as the number of participants was incremented. Thus, these results assert that the use of proper noise reduction methods and feature-extraction approaches can increase RSSI based HIA. However, the accuracy gains from these methods become insignificant when there is a large number of participants. Therefore, to further increase the HIA, the researchers decided to increase the number of receivers used for data collection.
C. Human Identification Accuracy with the Number of Receivers
Figure 10 illustrates the average HIA for 20 participants against the number of receivers used for RSSI data collection. Note that the SVM-R model with the Kalman filtered and feature-extracted dataset was used in all these instances. It is evident that the average HIA increased rapidly with the number of receivers employed for RSSI data collection. The main reason behind this improvement is the increased level of information captured by the WiFi receivers. For example, as illustrated in Figure 11, when only one receiver is used for the data collection, it captures only a limited number of WiFi links, but when two receivers are used, the receivers can intercept more RSSI links. Hence, these incorporate more information regarding the participant's physical features and location [29], and the machine learning model receives more information for training.
Fig. 9: Average human identification accuracy with the SVM-R model and multiple data processing steps: (a) raw RSSI, (b) RSSI data after applying a Kalman filter, and (c) feature extracted RSSI data after applying a Kalman filter.
Similarly, in the 3-receiver, 4-receiver and 5-receiver configurations, additional information is incorporated into the machine learning models by adding extra receivers. However, as per Figure 10, the amount of new information gained diminishes as receivers are added, since the accuracy gain rate decreases with the number of receivers (e.g., the 1-receiver to 2-receiver accuracy gain is 40.59%, whereas the 2-receiver to 3-receiver gain is 11.87%).
V. CONCLUSION AND FUTURE WORKS
This research investigates the possibilities and limitations of utilizing WiFi RSSI data with machine learning models as a device-free approach to identify humans uniquely.
It was determined that the existing RTI based research has employed many transceivers for RSSI/CSI data collection and knowing the locations of these transceivers is a mandatory requirement. Thus, the applicability of these approaches in real-world settings is limited. Therefore, in this research, the researchers investigated the possible human identification accuracies that can be gained from minimal resources (i.e., one transmitter and one receiver) and the potential of increasing the accuracy level by employing more resources (i.e., by increasing the number of receivers used for RSSI data collection). As per the statistical analysis conducted under the preliminary study of this research, it was determined that the WiFi RSSI data can be used to distinguish humans uniquely since statistically different WiFi RSSI signatures were identified for different people.
By considering the results of all performed experiments, it was asserted that the human identification accuracy decreases with the number of participants (sample size). While data processing steps such as noise reduction and feature extraction help to increase the HIA, their accuracy gains become insignificant when there is a large number of participants. In contrast, a significant improvement in human identification accuracy can be achieved by increasing the number of receivers used for RSSI data collection. However, deciding on the number of receivers to use for data collection is critical, as the accuracy gain rate decreases with each additional receiver. Thus, researchers must carefully examine and balance their expected accuracy levels against the resources at their disposal.
In respect to the applied machine learning models, Support Vector Machine -Radial Kernel model showed the highest accuracy, whereas the Multinomial Logistic Regression model recorded the lowest. However, all three machine learning models followed a similar accuracy increase/decrease pattern in all experiments.
While this research presents promising results for human identification using WiFi RSSI data, it was observed that WiFi RSSI suffers from noise. Hence, as a future improvement, a comprehensive study on noise reduction mechanisms can be carried out, considering methods such as the Extended Kalman Filter and Particle Filter. It is also vital to investigate the human identification accuracy with respect to the data collection time duration, which was set to a fixed value in this research. Furthermore, this study does not consider participants' physical properties (weight, height, body mass) or environmental factors (interference from other WiFi networks, different weather conditions). Therefore, future research will need to validate the accuracy and usability of the proposed human identification method under different conditions and situations.
Intelligent Software-Aided Contact Tracing Framework: Towards Real-Time Model-Driven Prediction of Covid-19 Cases in Nigeria
As many countries around the world try to live with the deadly coronavirus by adhering to the safety measures put in place by their governments as regulated by the World Health Organization (WHO), it becomes vital to continuously trace patients with COVID-19 symptoms for isolation, quarantine and treatment. In this work, an intelligent software-aided contact tracing framework for real-time model-driven prediction of COVID-19 cases is proposed, utilizing a COVID-19 dataset from kaggle.com. The dataset is preprocessed using One-Hot encoding and Principal Component Analysis. The Isolation Forest algorithm is used to train and predict COVID-19 cases, and the performance of the model is evaluated using Accuracy, Precision, Recall and F1-Score. The intelligent software-aided contact tracing framework has four layers: symptoms, modeling/prediction, cloud storage/contact routing and contact tracers. The contact tracing system is an android application that receives symptom values, predicts the case and automatically sends the prediction result together with the user's contact and location details to the closest contact tracer via the Firebase real-time database. The closest contact tracer is determined by employing a dynamic routing algorithm (contact routing algorithm) that uses the Open Shortest Path First (OSPF) protocol to compute the distance between two geographic locations (user and contact tracer) and chooses the contact tracer with the shortest distance to the patient, utilizing a unicast routing technique (routing a patient to a contact tracer in a one-to-one relationship). The predictive model and the android application are implemented using the Python and Java programming languages on the PyCharm and Android Studio IDEs, respectively. This framework is capable of predicting COVID-19 patients and notifying contact tracers of positive cases for proper follow-up, which can subsequently curtail the spread of the virus.
INTRODUCTION
Countries of the world that were seriously ravaged by the deadly coronavirus (COVID-19) have reopened their businesses and territories so as to rebuild their local and social economies while trying to cope with the existence of the virus in the absence of proper vaccination and drugs to combat its spread. This decision therefore implies that these countries need to adopt a highly effective approach to testing, tracing, isolating cases and quarantining close contacts of cases. Developing an intelligent real-time software-aided prediction and contact tracing system is one of the techniques these countries can adopt to curb the spread of the virus.
The present-day strategy practiced in Nigeria for epidemic prevention and control, where patients with vital symptoms go to medical institutions for clinical examination and diagnosis and doctors collect their biomedical data and interview them to identify their close contacts within a stipulated period for subsequent tracing, is obviously ineffective in a contemporary society characterized by high-speed mobility and complex social existence. This approach does not allow for prompt tracing and quarantining of suspected sources of infection, and it is also difficult to cover a wide population given the limited availability of trained personnel.
Traditional Contact Tracing (CT) is a highly demanding process, as it involves a lot of interviews, detective work and monitoring. Ideally, manual CT by trained professionals can help identify persons for testing and quarantine to help contain the spread of COVID-19, but a variety of policy experts, technology companies and public health officials have argued that digital tools may be able to expand the reach of traditional manual contact tracing systems and provide a rapid alert system that enables potentially exposed individuals to seek testing [1]. To date, this automatic CT process has not been demonstrated to be reliable, though several companies and institutions of higher learning are attempting to develop the infrastructure that will permit automated CT [2].
It has been found that 70% of contacts need to be traced to curtail the spread of the majority of outbreaks [3] like COVID-19, which has a very high rate of transmission. Considering this, the manual approach to contact tracing may not be effective, as many contact tracing personnel would be required. Rapid identification and isolation of cases and exposures, made possible by digital technologies, remains the most effective approach to reducing the contact rate [4].
Countries like South Korea and Iceland, which successfully responded to COVID-19, made the digital CT approach their strong pillar. The USA and Germany, with high numbers of confirmed cases, have prioritized CT as a means of breaking the transmission chain of the disease after lockdown. These countries are relying on models that center around testing and extensive contact tracing [5]. Early tracing and quarantining of close contacts are critical in cutting off the transmission chain and limiting the scale of any epidemic [6].
CT is a decades-old, traditional, essential public health technique/measure used for combating the spread of infectious diseases [5,7,8]. It is a critical component of comprehensive strategies to control the spread of COVID-19. In practice, CT begins with those who test positive for COVID-19; those who may have been exposed to the patients are identified and followed up daily for 14 days (quarantine) until they develop symptoms, pass the window of risk or are proven not to have been exposed. CT breaks the chains of human-to-human transmission [7] and is essential in swiftly identifying cases and their contacts, thereby preventing recurrence [9].
[10] has broken the contact tracing task into three basic steps, as shown in Figure 1. Speed and capacity are essential requirements for effective CT; there must be enough CT personnel to test suspected cases in a timely manner [8]. These requirements are obviously lacking in traditional CT approaches. For this reason, there is a need to design digital tools to assist in CT efforts. These tools are only meant to enhance and augment the capabilities of public health officials, not to replace them [5].
Some of the digital tools used in CT efforts are mobile applications and smartphones. Countries like Singapore, South Korea, Israel and USA have included mobile applications (apps) as one of the technologies to facilitate CT of COVID-19 [1]. Smartphones are widely used electronic products that can detect characteristics of users, such as spatiotemporal trajectory and social contacts. They are already used in health care through websites and apps such as WeChat, Twitter, and Facebook, opening up a new field in medical and scientific research known as mobile health [11].
[7] classified digital tools for contact tracing into three categories, as depicted in Table 1.

Table 1. Categories of digital contact tracing tools [7].

Outbreak Response Tools
Use: Specifically useful for initial localized outbreak response, early cluster investigations, and limited populations; optimally designed for field staff and run on smartphones or tablets that can synchronize across mobile and internet networks.
Limitations: They need standardized data formats/data dictionaries and reporting templates to link case-based line lists with contact tracing and laboratory testing data.

Proximity Tracing/Tracking Tools
Use: Use location-based (GPS) or Bluetooth technology to find and trace the movements of individuals who may have been exposed to an infected person in order to identify them.
Limitations: Proximity is not a complete assessment of exposure, since exposure may vary independently of proximity; more evidence is needed on the effectiveness of proximity tracing tools for contact tracing; the feasibility and threshold scale required for implementation are not certain; overreliance on proximity tracing tools may result in the exclusion of contacts such as children or people who do not have a suitable device.

Symptoms Tracking Tools
Use: Use applications designed to routinely collect self-reported signs and symptoms; could be used to augment in-person visits by receiving reports from contacts of confirmed cases more than once a day.
Limitations: Limited specificity and positive predictive value for respiratory infections; potential for misdiagnosis or non-diagnosis of other illnesses; limited ability to offer differential diagnoses, and as such must be used with caution in order not to increase the risk of adverse clinical outcomes for diseases not encompassed in the tool.

Table 1 comprises outbreak response, proximity tracing/tracking and symptoms tracking tools, designed for public health officers, based on location technology, and for routinely collecting self-reported signs/symptoms, respectively. Bluetooth signaling in proximity tracing enables users to know if they have been in close contact with a case without necessarily providing location information. Symptom tracking tools are used to assess disease severity; users therefore need to know how to take follow-up actions in case of indications of serious illness.
Figure 1. The basic contact tracing steps, beginning with contact identification.
Our intelligent software-aided CT framework combines the proximity tracing and symptoms tracking approaches for real-time prediction and tracing of COVID-19 cases in Nigeria. This supports the recommendations of the World Health Organization that a proper CT system should identify cases, have a system to feed back data, and have people to identify and follow up contacts [9].
The rest of this paper is organized as follows: Section 2 highlights some of the different technologies used by different countries to trace COVID-19 patients; section 3 explains the proposed framework for intelligent software-aided CT for real-time prediction as well as Firebase, contact routing and the algorithm used; section 4 discusses the modules of the software-aided contact tracing; and section 5 concludes the work.
RELATED WORK
Many countries around the world have employed different technologies and mobile phone software applications to trace COVID-19 cases. [25] used a mathematical model to investigate the possibility of reducing the spread of COVID-19 with contact tracing and found it to be feasible, but added that CT must be implemented alongside extensive community case detection, reaching out to as many contacts as possible. China asked its citizens to use the Alipay Health Code application (app): the government receives the travel history and current symptoms of citizens and issues app users a colour-based QR code indicating their health status (red: to be quarantined for 14 days; yellow: to remain indoors for 7 days; green: free to go about normal business). Israel is tracking individuals' phones to unveil where a suspected carrier has been and with whom they had contact; the contacts are notified with text messages to self-quarantine and are monitored through the location-tracking capabilities of cell phones [12]. Singapore released the TraceTogether app, which uses Bluetooth to track the proximity of two persons who have activated the app. The app exchanges time-limited tokens between the two persons, which are sent to a central server. When a person reports being diagnosed with COVID-19, the Ministry of Health, through the central server, determines the person's contacts within a particular time; a human contact tracer then alerts these contacts and determines appropriate follow-up actions [4]. The Corona100m app in South Korea collects data from government sources and alerts users of COVID-19 patients who have been diagnosed within a 10-meter radius. The alert contains the diagnosis date, nationality, age, gender and prior location. The app also plots the locations of diagnosed patients to help those who want to avoid these areas.
Other applications of mobile health (mHealth) for COVID-19, including Aarogya Setu (Big Sensor Data) from India, Immuni from Italy, CovidSafe from Washington, Care19 from North Dakota and SafePlaces, use Bluetooth or SMS to help monitor contacts in different ways [13-16]. In [26], a model for an interactive computer system using mobile phones and the internet was developed for real-time collection and transmission of COVID-19 related events.
The GoData software application (an example of an outbreak response tool), developed by the WHO in collaboration with the Global Outbreak Alert and Response Network, was designed specifically for field workers and has been implemented in many countries [17]. In [27], a CT management system integrating traditional contact tracing measures with symptom tracking and contact management was developed to assist public health officers in tracing COVID-19 cases. A good CT system requires promptness in reporting and follow-up actions; the problem with the work of [27] is therefore that it lacks proximity measures as well as real-time reporting of COVID-19 symptoms. [28] developed a COVID-19 symptom monitoring and CT web application for people to individually and manually record their details (temperature, symptoms, travel history, and suspected and confirmed cases they have come in contact with). What people record in the web application is sent to their email to be used for potential symptom monitoring and contact tracing if they are later affected. However, requiring people to record their details in the web portal individually and daily would demand some enforcement from the government, and the authenticity of the recorded details may be questionable. [11] proposed spatiotemporal reporting over network and GPS (STRONG), a system integrating GPS and social media via a smartphone app and GeoAI, a combination of Artificial Intelligence and Geographic Information Systems. They developed a mini-program called Geo-WAS (Geo Wechat Artificial Intelligence System), published on January 31, 2020. Geo-WAS runs within the Wechat app and collects data from users' Wechat activities, including time and location labels, together with the history of phone activities over the previous 14 days. These are used to generate and update a space-time quick response code for identification. An individual dynamic spatiotemporal risk index to quantify the real-time cumulative exposure risk for each user was also created.
The tools, technologies and software applications used by the authors highlighted above make use of Global Positioning System (GPS) and Bluetooth technologies to collect individuals' data. Such information is not always useful or appropriate: phone location data is not always precise enough to decide whether a particular individual was close enough to another to transmit the virus; the accuracy of GPS is questionable, as it works well only when people are outdoors; individuals' travel paths may be exposed; and noise and false-positive identification are issues associated with GPS data [18,19,11]. The accuracy of Bluetooth technology is also quite uncertain [20]. The Care19 app, combining Bluetooth and GPS technologies, has been challenged with accuracy issues due to inconsistent recording of GPS data and the insufficient granularity of the data it records [21].
This work therefore proposes a software-aided contact tracing system: an android application that receives COVID-19 symptom values, predicts the outcome (positive or negative) and automatically sends a positive prediction result together with the user's contact and location details to the closest contact tracer via the Firebase real-time database. The closest contact tracer is determined by employing a dynamic routing algorithm (contact routing algorithm) that uses the Open Shortest Path First (OSPF) protocol to compute the distance between two geographic locations (user and contact tracer) and chooses the contact tracer with the shortest distance to the patient, utilizing a unicast routing technique (routing a patient to a contact tracer in a one-to-one relationship).
Figure 2 shows the intelligent software-aided contact tracing framework for real-time model-driven prediction of COVID-19 cases, comprising four (4) layers.
Layer 1: The COVID-19 symptoms layer, responsible for accepting COVID-19 symptoms (features) and their values as input into the system and passing them to the next layer for modeling and prediction.
Layer 2: The COVID-19 modeling and prediction layer, which accepts the layer 1 inputs, subjects them to the COVID-19 predictive model, predicts COVID-19 as either positive or negative using the Isolation Forest (I-Forest) algorithm, and sends the prediction results to the user via an android device while also transmitting the predicted result to the Firebase cloud database using a Firebase client Application Programming Interface (API) on the android device.
Layers 3 and 4 are the cloud storage/contact routing layer and the contact tracers layer, respectively.
Modelling and prediction
The modeling and prediction layer of the intelligent software-aided contact tracing framework of Figure 2 is decomposed in order to highlight the various activities and algorithms employed to predict COVID-19. The prediction model is depicted in Figure 3 with the following compositions: . . . [22,23] 4. Model Evaluation - To ascertain the effectiveness of the model, a performance evaluation is carried out. The trained I-Forest algorithm is used by the android software application for the prediction of COVID-19 cases in our intelligent software-aided contact tracing system. 70% of the COVID-19 dataset was used for training, while 30% was used for testing. The training model used was Isolation Forest, which was ranked in [24] as the best model for predicting COVID-19 cases. The training and test sets share the same features (fever, chills, fatigue, malaise, weakness, runny nose, breathing difficulty and sore throat), and the output variable represents a COVID-19 positive or negative case.
COVID-19 Prediction
The class distribution of the training dataset reveals that 91% of the dataset represents COVID-19 positive cases while 9% represents negative cases. In the test dataset, 96% represents positive cases while 4% represents negative cases. The high percentage of positive cases reflects the imbalanced nature of the dataset, which makes it suitable only for one-class classification algorithms such as I-Forest.
The performance of I-Forest on both the training and test datasets, evaluated using Accuracy, Precision, Recall and F1-Score, is shown in Table 2, indicating good performance.
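A sketch of the training and evaluation just described, assuming preprocessed symptom features and labels (+1 positive, -1 negative); the arrays below are random placeholders for the Kaggle dataset, not the actual data:

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
X = rng.random((1000, 8))                          # 8 symptom features
y = np.where(rng.random(1000) < 0.91, 1, -1)       # ~91% positive, as in the paper

split = int(0.7 * len(X))                          # 70/30 train-test split
X_tr, X_te, y_te = X[:split], X[split:], y[split:]

# One-class view: positives are treated as inliers, negatives as outliers.
clf = IsolationForest(contamination=0.09, random_state=0).fit(X_tr)
pred = clf.predict(X_te)                           # +1 = inlier, -1 = outlier

print("Accuracy :", accuracy_score(y_te, pred))
print("Precision:", precision_score(y_te, pred))
print("Recall   :", recall_score(y_te, pred))
print("F1-Score :", f1_score(y_te, pred))
```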
Firebase
Firebase was founded in 2011 as an independent company, was acquired by Google in 2014, and has rapidly evolved into a platform that provides software developers with many tools and services for creating mobile and web applications. This work utilizes the Firebase real-time database to facilitate the transmission of a COVID-19 patient's geospatial data to contact tracers. The Firebase real-time database stores and syncs data in real time. Firebase supports the following data types in JSON (JavaScript Object Notation) format: String, Long, Double, Boolean, Map<String, Object> and List<Object>.
After initializing the Firebase real-time database references, the patient's data and the contact routing table are read, written, deleted and updated using the following Java functions: getValue(); setValue(); deleteValue(); updateChildren();
Contact routing
Routing is the process of selecting a path across one or more networks. The principles of routing can apply to any type of network, from telephone networks to public transportation. The type of routing protocol used in this work is similar to the Open Shortest Path First (OSPF) protocol commonly used by network routers to dynamically identify the fastest and shortest available routes for sending packets to their destination.
In this work, a dynamic routing algorithm that delivers the patient's COVID-19 prediction information to the contact tracers using the "unicast" delivery scheme is employed. The patient's COVID-19 alert message is delivered to a single specific node (in this case, a contact tracer) using a one-to-one association between the patient and the contact tracer. Each destination address that uniquely identifies a single contact tracer is managed using a contact routing table. The flow diagram of the contact routing algorithm used in this work is presented in Figure 4. The contact routing algorithm works by calculating the distance between the COVID-19 patient and all available contact tracers; the closest contact tracer receives the COVID-19 alert information. While this information transmission is real-time, the contact tracer should also act in a timely fashion in order to isolate the COVID-19 patient.
The contact routing algorithm is formulated mathematically as:

T = min_{i=0..s} Λ(P, t_i)    (1)

where T is the final selected tracer (the tracer with the shortest distance), i is the loop control variable, s is the size of the routing table (i.e., the number of contact tracers in the routing table), P is the patient's location, t_i is the current tracer from the routing table, min selects the minimum distance, and Λ is the distance function. A routing table entry has the form:

RoutingTable = { "contactTracer" : "Matthew James", "tracerID" : "10 Mushin Street, Lagos", "latitude" : "0.237465547", "longitude" : "0.886636734" }

Λ calculates the distance between two coordinates and is defined as the Haversine distance:

Λ = 2R · arcsin( sqrt( sin²(Δφ/2) + cos(φ_P) · cos(φ_t) · sin²(Δλ/2) ) )

where R is the Earth's radius, φ_P and φ_t are the patient's and tracer's latitudes, and Δφ and Δλ are the differences in latitude and longitude between the two points.
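A sketch of Equation 1 and the Haversine distance in Python; the routing-table entries mirror the fields shown above, but the names and coordinates are illustrative:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Haversine distance in km between two (lat, lon) points in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def closest_tracer(patient, routing_table):
    """Equation 1: the tracer minimizing the Haversine distance to the patient."""
    return min(routing_table,
               key=lambda t: haversine_km(patient["latitude"], patient["longitude"],
                                          t["latitude"], t["longitude"]))

routing_table = [
    {"contactTracer": "Matthew James", "latitude": 6.527, "longitude": 3.352},
    {"contactTracer": "Ada Obi", "latitude": 6.465, "longitude": 3.406},
]
patient = {"latitude": 6.450, "longitude": 3.400}
print(closest_tracer(patient, routing_table)["contactTracer"])  # unicast target
```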
SOFTWARE-AIDED CONTACT TRACING
An android application was developed using the Java programming language on the Android Studio Integrated Development Environment (IDE) for the collection, prediction and transmission of a patient's COVID-19 positive status. This application comprises the user registration, input, prediction and contact tracing modules. Figure 5 shows the transition diagram of the software-aided contact tracer.
i. User Registration Module
The user registration module is the entry point for the software-aided contact tracing; it collects the user's information (name, phone number and residential address), the name, address and phone number of the next of kin, and the user's passport photograph. This information is sent, together with location information, to Firebase only when the prediction result is positive. The user registration module launches the input module once the save button is clicked (for a first-time user). For returning users, the input module is the entry point.
ii. Input Module
The input module accepts COVID-19 symptom values from the user. These symptoms include fever, chills, fatigue, malaise, weakness, runny nose, sore throat, breathing difficulty and cough. After supplying the input, clicking the predict button launches the COVID-19 prediction module.
iii. The Prediction Module
The prediction module utilizes the Isolation Forest algorithm and the symptom values from the input module to predict cases of COVID-19 as either positive or negative, and sends the result to the user and the nearest contact tracer. The current geographical location of the patient is also sent to the nearest contact tracer using the Firebase real-time database.
Clicking the save button on the prediction module closes the android application on the user's device. The transmitted COVID-19 information is routed through the cloud, via the contact routing algorithm, to the nearest contact tracer.
iv. Contact Tracing Module
The contact tracing module is managed by an android application that receives a patient's COVID-19 event from Firebase and displays this information on a map interface. The module helps the contact tracer receive positive COVID-19 cases that are in close proximity. As depicted in Figure 6, it is a map interface showing the location and contact details (name, address and phone number) of the COVID-19 positive cases closest to the contact tracer, as predicted by the prediction module of the android application. The test results consist of data inputs (feature values), actual predictions and model predictions. The actual prediction is the ideal output for a given feature value (data point), while the model prediction is the output of the Isolation Forest algorithm for that data point. Highlighted cells indicate incorrect predictions, while every other cell represents a correct prediction. It is observed from the results that Isolation Forest predicted cases of COVID-19 with minimal misclassified points.
CONCLUSION
In this work, an intelligent software-aided contact tracing system for real-time model-driven prediction of COVID-19 was developed. Isolation Forest was successfully trained for COVID-19 prediction using a transformed dataset preprocessed with One-Hot encoding and Principal Component Analysis. The Isolation Forest was capable of predicting cases of COVID-19 in the dataset with a high degree of accuracy. An android application was developed using the Java programming language on Android Studio to facilitate the transmission of data between the COVID-19 patient and the closest contact tracer, who was determined using a contact routing algorithm that routes patients to the closest contact tracer.
3,6-Didehydro-5-hydroxy-1,2-O-isopropylidene-5-C-nitromethyl-α-D-glucofuranose.
The title compound, C10H15NO7, consists of one methylenedioxy ring and two fused tetrahydrofuran rings. The three fused rings exhibit cis arrangements at the ring junctions. One O atom of a tetrahydrofuran ring and the H atoms bound to the neighboring C atoms are disordered over two orientations with site-occupancy factors of 0.69 (1) and 0.31 (1). Intramolecular O-H⋯O and C-H⋯O interactions stabilize the molecular conformation. In the crystal structure, intermolecular O-H⋯O and C-H⋯O interactions link the molecules into a three-dimensional network.
Comment
Azasugars containing novel glycosyls, such as bicycloglycosyl and heterocyclic glycosyl, whose synthesis has been known for many years (Choi et al., 1991; Kvaernø et al., 2001), have attracted growing interest due to their potent antiviral activity. As a contribution to research on carbohydrate and azasugar compounds (Liu et al., 2004; Ke et al., 2009), we report here the synthesis and X-ray crystal structure of the title compound, an intermediate of a bicycloglycosyl. The title compound, which shows a structure similar to that previously reported by Zhang & Yang (2010), was enantiomerically synthesized at room temperature by means of the Henry reaction (Saito et al., 2002).
The title compound, C10H15NO7, consists of one methylenedioxy ring and two fused tetrahydrofuran rings. The three fused rings exhibit cis arrangements at the ring junctions and give two V-shaped molecules. One O atom of a tetrahydrofuran ring moiety is disordered over two positions with site-occupancy factors of 0.69 (1) and 0.31 (1); the H atoms bound to the neighboring C atoms are disordered as well. The bond angles O2-C7-O1 and C8-C7-C9 around the isopropylidene group are 105.6 (2)° and 113.2 (3)°, which are almost equal to the corresponding bond angles reported by Zhang & Yang (2010).
In the crystal structure, intra- and intermolecular O-H⋯O and C-H⋯O interactions stabilize the molecular conformation and link the molecules into a three-dimensional network.
Experimental
The title compound was synthesized from 3,6-didehydro-1,2-O-isopropylidene-5-carbonyl-α-D-glucofuranose by the Henry reaction, as described previously by Saito et al. (2002), whose starting material was D-glucose. To a solution of the starting material (2.4 g, 9.2 mmol) in tetrahydrofuran (30 ml) were added CH3NO2 (0.82 ml) and potassium fluoride (0.84 g) in an ice bath. The mixture was stirred at room temperature for 12 h. After the starting material was consumed, the reaction mixture was filtered to remove the KF. The filtrate was concentrated in vacuo, and the residue was recrystallized from CH3OH to give the title compound as a white solid. Crystals suitable for X-ray analysis were grown by slow evaporation from methanol at room temperature over two weeks.
Refinement
All H atoms were placed geometrically and treated as riding on their parent atoms, with C-H = 0.96 Å and Uiso(H) = 1.5Ueq(C) for methyl H atoms, C-H = 0.97 Å and Uiso(H) = 1.2Ueq(C) for methylene H atoms, and C-H = 0.98 Å and Uiso(H) = 1.2Ueq(C) for methine H atoms. The hydroxy H atom was freely refined. In the absence of any significant anomalous scatterers in the molecule, attempts to confirm the absolute structure by refinement of the Flack parameter in the presence of 896 sets of Friedel equivalents led to an inconclusive value of 0.0 (3). Therefore, the Friedel pairs were merged before the final refinement, and the absolute configuration was assigned to correspond with that of the known chiral centres in the precursor molecule, which remained unchanged during the synthesis of the title compound.
One O atom of a tetrahydrofuran ring moiety is disordered over two positions with site-occupancy factors of 0.69 (1) and 0.31 (1); the H atoms bound to the neighboring atoms C3 and C6 are disordered over two positions as well.
Fig. 1. The molecular structure of the title compound, showing the atomic numbering and 30% probability displacement ellipsoids. | 1,043 | 2011-06-18T00:00:00.000 | [
"Chemistry"
] |
Effects on quantum physics of the local availability of mathematics and space time dependent scaling factors for number systems
The work is based on two premises: local availability of mathematics to an observer at any space time location, and the observation that number systems, as structures satisfying axioms for the number type being considered, can be scaled by arbitrary, positive real numbers. Local availability leads to the assignment of mathematical universes, $V_{x},$ to each point, $x,$ of space time. $V_{x}$ contains all the mathematics that an observer, $O_{x},$ at $x,$ can know. Each $V_{x}$ contains many types of mathematical systems. These include the different types of numbers (natural numbers, integers, rationals, and real and complex numbers), Hilbert spaces, algebras, and many other types of systems. Space time dependent scaling of number systems is used to define representations, in $V_{x}$, of real and complex number systems in $V_{y}$. The representations are scaled by a factor $r_{y,x}$ relative to the systems in $V_{x}.$ For $y$ a neighbor point of $x,$ $r_{y,x}$ is the exponential of the scalar product of a gauge field, $\vec{A}(x),$ and the vector from $x$ to $y.$ For $y$ distant from $x,$ $r_{y,x}$ is given by a path integral from $x$ to $y.$ Some consequences of the two premises will be examined. Number scaling has no effect on general comparisons of numbers obtained as computations or as experimental outputs. The effect is limited to mathematical expressions that include space or space time integrals or derivatives. The effects of $\vec{A}$ on wave packets and canonical momenta in quantum theory, and some properties of $\vec{A}$ in gauge theories, are described.
Universe equivalence means here that, for any system type $S$, $\bar{S}_y$ is the same system in $V_y$ as $\bar{S}_x$ is in $V_x$.
For the purposes of this work, it is useful to have a specific definition of mathematical systems. Here the mathematical logical definition of a system of a given type as a structure [12,13] is used. A structure consists of a base set, a few basic operations, none or a few basic relations, and a few constants. The structure must satisfy a set of axioms appropriate for the type of system being considered. For example, $\bar{N}$ satisfies a set of axioms for the natural numbers as the nonnegative elements of a discrete ordered commutative ring with identity [14], $\bar{R}$ is a real number structure that satisfies the axioms for a complete ordered field [15], and $\bar{C}$ is a complex number structure that satisfies the axioms for an algebraically closed field of characteristic 0 [16].
$\bar{H}_x = \{H_x, +, -, \cdot, \langle\cdot,\cdot\rangle, \psi_x\}$ is a structure that satisfies the axioms for a Hilbert space [17]. Here $\psi_x$ is a state variable in $\bar{H}_x$. There are no constants in $\bar{H}_x$. The subscript $x$ indicates that these structures are contained in $V_x$. The other idea introduced here is the use of scaling factors for structures for the different number types. These scaled structures are based on the observation [18,19,20] that it is possible to define, for each number type, structures in which number values are scaled relative to those in the structures shown above. The scaling of number values must be compensated for by scaling of the basic operations and constants in such a way that the scaled structure satisfies the relevant set of axioms if and only if the original structure does.
Scaling of number structures introduces scaling into other mathematical systems that are based on numbers as scalars for the system. Hilbert spaces are examples as they are based on the complex numbers as scalars.
The fact that number structures can be scaled allows one to introduce scaling factors that depend on space time or on space and time. If $y = x + \hat{\mu}\,dx$ is a neighbor point of $x$, then the real scaling factor from $x$ to $y$ is defined by
$$r_{y,x} = e^{\vec{A}(x)\cdot\hat{\mu}\,dx}.$$
Here $\vec{A}$ is a real valued gauge field that determines the amount of scaling, and $\hat{\mu}$ and $dx$ are, respectively, a unit vector and the length of the vector from $x$ to $y$. Also $\cdot$ denotes the scalar product. For $y$ distant from $x$, $r_{y,x}$ is obtained by a suitable path integral from $x$ to $y$.
Space time scaling of numbers would seem to be problematic, since it appears to imply that theoretical and experimental numbers obtained at different space time points have to be scaled before they can be compared. This is not the case. As will be seen, number scaling plays no role in such comparisons. More generally, it plays no role in what might be called "the commerce of mathematics and physics".
Space time dependent number scaling is limited to expressions in theoretical physics that require the mathematical comparison of mathematical entities at different space time points. Typical examples are space time derivatives or integrals. Local availability of mathematics makes such a comparison problematic. If $f$ is a space time functional that takes values in some structure $\bar{S}$, then "mathematics is local" requires that, for each point $y$, $f(y)$ be an element of $\bar{S}_y$. In this case space time integrals or derivatives of $f$ make no sense, as they require addition or subtraction of values of $f$ in different structures. Addition and subtraction are defined only within structures, not between structures.
This problem is solved by choosing some point $x$, such as an observer's location, and transforming each $\bar{S}_y$ into a local representation of $\bar{S}_y$ on $\bar{S}_x$. Two methods are available for doing this: parallel transformations, for which the local representation of $\bar{S}_y$ on $\bar{S}_x$ is $\bar{S}_x$ itself, and correspondence transformations. These give a local, scaled representation of $\bar{S}_y$ on $\bar{S}_x$ in that each element of $\bar{S}_y$ corresponds to the same element of $\bar{S}_x$, multiplied by the factor $r_{y,x}$.
The rest of this paper explains, in more detail, these ideas and some consequences for physics. The next section describes representations of number types that differ by scaling factors. Sections 3 and 4 describe space time fields of complex and real number structures and the representation of $r_{y,x}$ in terms of a gauge field, as in Eq. 5. This is followed by a discussion of the local availability of mathematics and the assignment of separate mathematical universes to each space time point. Section 6 describes correspondence and parallel transforms. It is shown that $\vec{A}$ plays no role in the commerce of mathematics and physics. This involves the comparison and movement of the outcomes of theoretical predictions and experiments and the general use of numbers. Section 7 applies these ideas to quantum theory, both with and without the presence of $\vec{A}$. Parallel and correspondence transformations are used to describe the wave packet representation of a quantum system. It is seen that there is a wave packet description that closely follows what is actually done in measuring the position distribution and position expectation value. The coherence is unchanged in such a description.
The next to last section uses "mathematics is local" and the scaling of numbers to insert A into gauge theories. The discussion is brief as it has already been covered elsewhere [18,20]. A appears in the Lagrangians as a boson for which a mass term is not forbidden. The last section concludes the paper.
The origin of this work is based on aspects of mathematical locality that are already used in gauge theories [21,22] and their use in the standard model [23]. In these theories, an $n$ dimensional vector space, $\bar{V}_x$, is associated with each point, $x$, in space time. A matter field $\psi(x)$ takes values in $\bar{V}_x$. Ordinary derivatives are replaced by covariant derivatives, $D_{\mu,x}$, because of the problem of comparing values of $\psi(x)$ with $\psi(y)$ and to introduce the freedom of choice of bases. These derivatives use elements of the gauge group, $U(n)$, and their representations in terms of generators of the Lie algebra, $u(n)$, to introduce gauge bosons into the theories.
Representations of different number types
Here the mathematical logical definition [12,13] of mathematical systems as structures is used. A structure consists of a base set, basic operations, relations, and constants that satisfy a set of axioms relevant to the system being considered. As each type of number is a mathematical system, this description leads to structure representations of each number type.
The types of numbers usually considered are the natural numbers, $\bar{N}$, the integers, $\bar{I}$, the rational numbers, $\bar{Ra}$, the real numbers, $\bar{R}$, and the complex numbers, $\bar{C}$. Structures for the real and complex numbers can be defined by
$$\bar{R} = \{R, +, -, \times, \div, <, 0, 1\}, \qquad \bar{C} = \{C, +, -, \times, \div, 0, 1\}. \qquad (6)$$
A letter with an overline, such as $\bar{R}$, denotes a structure. A letter without an overline, such as $R$ in the definition of $\bar{R}$, denotes the base set of a structure. The main point of this section is to show, for each type of number, the existence of many structures that differ from one another by scale factors. To see how this works, it is useful to consider a simple case for the natural numbers, $0, 1, 2, \cdots$. Let $\bar{N}$ be represented by
$$\bar{N} = \{N, +, \times, <, 0, 1\},$$
where $\bar{N}$ satisfies the axioms of arithmetic [14]. The structure $\bar{N}$ is a representation of the fact that $0, 1, 2, \cdots$ with appropriate basic operations and relations are natural numbers. However, subsets of $0, 1, 2, \cdots$, along with appropriate definitions of the basic operations, relations, and constants, are also natural number structures.
As an example, consider the even numbers, $0, 2, 4, \cdots$ in $\bar{N}$. Let $\bar{N}_2$ be a structure for these numbers, where
$$\bar{N}_2 = \{N_2, +_2, \times_2, <_2, 0_2, 1_2\}.$$
Here $N_2$ consists of the elements of $N$ with even number values in $\bar{N}$. The structure $\bar{N}_2$ shows that the elements of $N$ that have value $2n$ in $\bar{N}$ have value $n$ in $\bar{N}_2$. Thus the element that has value $2$ in $\bar{N}$ has value $1$ in $\bar{N}_2$, etc. The subscript $2$ on the constants, basic operations, and relations in $\bar{N}_2$ denotes the relation of these structure elements to those in $\bar{N}$. The definition of $\bar{N}_2$ floats in the sense that the specific relations of the basic operations, relations, and constants to those in $\bar{N}$ must be specified. These are chosen so that $\bar{N}_2$ satisfies the axioms of arithmetic if and only if $\bar{N}$ does. A suitable choice that satisfies this requirement is another representation of $\bar{N}_2$, defined by
$$\bar{N}^2_2 = \{N_2, +, \frac{\times}{2}, <, 0, 2\}.$$
This structure is called the representation of $\bar{N}_2$ on $\bar{N}$. $\bar{N}^2_2$ shows explicitly the relations between the basic operations, relations, and number values in $\bar{N}_2$ and those in $\bar{N}$. For example, $1_2 \leftrightarrow 2$, $+_2 \leftrightarrow +$, $\times_2 \leftrightarrow \times/2$, $<_2 \leftrightarrow <$. These relations are such that $\bar{N}^2_2$, and thereby $\bar{N}_2$, satisfies the axioms of arithmetic if and only if $\bar{N}$ does. $\bar{N}^2_2$ also shows the presence of $2$ as a scaling factor. Elements of the base set $N_2$ that have value $n$ in $\bar{N}_2$ have value $2n$ in $\bar{N}$. Note that, by themselves, the elements of the base set have no intrinsic number values. The values are determined by the axiomatic properties of the basic operations, relations, and constants in the structure containing them.
This description of scaled representations applies to the other types of numbers as well. For real numbers, let $r$ be a positive real number in $\bar{R}$, Eq. 6. Let $\bar{R}_r$ be another real number structure. Define the representation of $\bar{R}_r$ on $\bar{R}$ by the structure
$$\bar{R}^r_r = \{R, +, -, \frac{\times}{r}, r\div, <, 0, r\}.$$
$\bar{R}^r_r$ shows that number values in $\bar{R}_r$ are related to those in $\bar{R}$ by a scaling factor $r$. $\bar{R}^r_r$ gives the definitions of the basic operations, relation, and constants in $\bar{R}_r$ in terms of those in $\bar{R}$. These definitions must satisfy the requirement that $\bar{R}_r$ satisfies the real number axioms if and only if $\bar{R}^r_r$ does, if and only if $\bar{R}$ does. Note that the base set $R$ is the same for all three structures. Also, the elements of $R$ do not have intrinsic number values independent of the structure containing $R$. They attain number values only inside a structure, where the values depend on the structure containing $R$.
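As a quick worked check (ours, not in the source), the scaled operations are consistent with the scaling of number values: with $a_r$ in $\bar{R}_r$ related to $ra$ in $\bar{R}$, one has
$$a\,\frac{\times}{r}\,r = \frac{a \times r}{r} = a, \qquad (ra)\,\frac{\times}{r}\,(rb) = \frac{(ra)(rb)}{r} = r(ab),$$
so $r$ acts as the multiplicative identity of $\bar{R}^r_r$, and the product of the values related to $a_r$ and $b_r$ is the value related to their product in $\bar{R}_r$, as the axioms require.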
The relationships between number values in $\bar{R}^r_r$, $\bar{R}_r$, and $\bar{R}$ can be represented by a new term, correspondence. One says that the number value $a_r$ in $\bar{R}_r$ corresponds to the number value $ra$ in $\bar{R}$. This is different from the notion of sameness. In $\bar{R}$, $ra$ is different from the value $a$. However, $a$ is the same value in $\bar{R}$ as $a_r$ is in $\bar{R}_r$ and as $ra$ is in $\bar{R}^r_r$. The distinction between the concepts of correspondence and sameness does not arise in the usual treatments of numbers. The reason is that sameness and correspondence coincide when $r = 1$.
For complex numbers, the structures, in addition to $\bar{C}$, Eq. 6, are
$$\bar{C}_r = \{C, +_r, -_r, \times_r, \div_r, 0_r, 1_r\}$$
and the representation of $\bar{C}_r$ on $\bar{C}$,
$$\bar{C}^r_r = \{C, +, -, \frac{\times}{r}, r\div, 0, r\}.$$
Here $r$ is a real number value in $\bar{C}$. $a$ is the same number value in $\bar{C}$ as $a_r$ is in $\bar{C}_r$. Otherwise the description is similar to that for the natural and real numbers. More details on these and other number type representations are given in [19].
Fields of mathematical structures
As was noted in the introduction, the local availability of mathematics results in the assignment of separate structures, $\bar{S}_x$, to each point, $x$, of space time. Here $S$ denotes a type of mathematical structure. The discussion is limited to the main system types of concern. These are the real numbers, the complex numbers, and Hilbert spaces. Hilbert spaces are included here because the freedom of choice of scaling factors for number types affects Hilbert spaces, as they are based on complex numbers as scalars.
Complex numbers
Parallel transformations between $\bar{C}_x$ and $\bar{C}_y$ for two points, $x, y$, define the notion of same number values between the structures. Let $F_{y,x}$ be an isomorphism from $\bar{C}_x$ onto $\bar{C}_y$. $F_{y,x}$ defines the notion of same number value and same operation in $\bar{C}_y$ as in $\bar{C}_x$. This is expressed by
$$a_y = F_{y,x}(a_x), \qquad Op_y = F_{y,x}(Op_x).$$
Here $a_y$ is the same (or $F_{y,x}$-same) number value in $\bar{C}_y$ as $a_x$ is in $\bar{C}_x$. $Op_y$ is the same operation in $\bar{C}_y$ as $Op_x$ is in $\bar{C}_x$. $Op$ denotes any one of the operations $+, -, \times, \div$. Note that $F_{y,x}$ is independent of paths between $x$ and $y$. This follows from the requirement that, for a path $P$ from $x$ to $y$ and a path $Q$ from $y$ to $z$,
$$F^{Q*P}_{z,x} = F^Q_{z,y}F^P_{y,x}.$$
Here $Q*P$ is the concatenation of $Q$ to $P$. If $z = x$ then the path is cyclic and the final structure is identical to the initial one. This gives the result that
$$F^{Q*P}_{x,x} = F^Q_{x,y}F^P_{y,x} = 1.$$
This shows that $F^P_{y,x}$ is path independent, so that a path label is not needed. Note that
$$F_{x,y} = (F_{y,x})^{-1}.$$
The subscript order in $F_{y,x}$ gives the path direction, from $x$ to $y$. At this point the freedom to choose complex number structures at each space time point is introduced. This is an extension, to number structures, of the freedom to choose basis sets in vector spaces as is used in gauge theories [22,21]. This can be accounted for by factoring $F_{y,x}$ into a product of two isomorphisms, as in
$$F_{y,x} = W^y_r W^r_x.$$
Here $y = x + \hat{\nu}\,dx$ is taken to be a neighbor point of $x$.
The action of $W^r_x$ and $W^y_r$ is given by $W^r_x\,\bar{C}_x = \bar{C}^r_x$ and $W^y_r\,\bar{C}^r_x = \bar{C}_y$. Here $r_{y,x}$ is a real number in $\bar{C}_x$ that is associated with the link from $x$ to $y$.
As was the case for $F_{y,x}$, the order of the subscripts determines the direction of the link. Thus $r_{x,y}$ is a number in $\bar{C}_y$ for the same link but in the opposite direction, and
$$(r_{x,y})_x\,r_{y,x} = 1.$$
Here $(r_{x,y})_x$ is the same number value in $\bar{C}_x$ as $r_{x,y}$ is in $\bar{C}_y$. In the following, the subscripts $y,x$ are often suppressed on $r_{y,x}$ to simplify the notation. The structure $\bar{C}^r_x$ is defined to be the representation of $\bar{C}_y$ on $\bar{C}_x$. As is the case for $\bar{C}^r_r$, Eq. 13, the number values and operations in $\bar{C}^r_x$ are defined in terms of the corresponding number values and operations in $\bar{C}_x$:
$$\bar{C}^r_x = \{C_x, +_x, -_x, \frac{\times_x}{r}, r\div_x, 0_x, r_x\}.$$
The multiplication and division by $r$, shown in $\times_x/r$ and $r\div_x$, are operations in $\bar{C}_x$. Note that the number value $r$ in $\bar{C}_x$ is the multiplicative identity in $\bar{C}^r_x$. Also $\bar{C}^r_x$ has the same base set, $C_x$, as does $\bar{C}_x$. The corresponding definition of $W^r_x$ maps $\bar{C}_x$ onto $\bar{C}^r_x$. $W^r_x$ is an isomorphism in that
$$W^r_x(a_x\,O_x\,b_x) = W^r_x(a_x)\,O^r_x\,W^r_x(b_x).$$
Here $a_x$ and $b_x$ are number values in $\bar{C}_x$ and $O_x$ denotes the basic operations in $\bar{C}_x$. $W^y_r$ has a similar definition, as it is an isomorphism from $\bar{C}^r_x$ to $\bar{C}_y$. Since the definition is similar, it will not be given here.
$\bar{C}^r_x$ can also be represented in a form similar to that of Eq. 12, as
$$\bar{C}_{r,x} = \{C_x, +_{r,x}, -_{r,x}, \times_{r,x}, \div_{r,x}, 0_{r,x}, 1_{r,x}\}.$$
This structure can be described as the representation of $\bar{C}_y$ at $x$. The relation between the number values and operations in $\bar{C}_{r,x}$ and those in $\bar{C}_x$ is provided by $\bar{C}^r_x$, which defines the number values and operations of $\bar{C}_{r,x}$ in terms of those in $\bar{C}_x$. In this sense both $\bar{C}_{r,x}$ and $\bar{C}^r_x$ are different representations of the same structure. From now on $\bar{C}_{r,x}$ and $\bar{C}^r_x$ will be referred to as the representation of $\bar{C}_y$ at $x$ and on $\bar{C}_x$, respectively.
The relations between the basic operations and constants of $\bar{C}^r_x$ and those of $\bar{C}_x$ lead to an interesting property. Let $f^r_x(a^r_x)$ be any analytic function on $\bar{C}^r_x$. It follows that
$$f^r_x(a^r_x) = r f_x(a_x). \qquad (27)$$
Here $f_x$ is the same function on $\bar{C}_x$ as $f^r_x$ is on $\bar{C}^r_x$. Also $a_x$ and $b_x$ are the same number values in $\bar{C}_x$ as $a^r_x$ and $b^r_x$ are in $\bar{C}^r_x$. This result follows from the observation that any term of a power series on $\bar{C}^r_x$ satisfies
$$(a^r_x)^n\,\div^r_x\,(b^r_x)^m = r\,\frac{r\,a_x^n}{r\,b_x^m} = r\,\frac{a_x^n}{b_x^m}. \qquad (28)$$
The $n$ factors and $n-1$ multiplications in the numerator contribute a factor of $r$. This is canceled by a factor of $r$ in the denominator. The one remaining $r$ factor arises from the relation of division in $\bar{C}^r_x$ to that in $\bar{C}_x$. Eq. 27 follows from the fact that Eq. 28 holds for each term in any convergent power series. As a result it holds for the power series itself.
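A concrete instance (our check, not in the source) of Eq. 27 for the polynomial $f(a) = a^2 + c$: evaluating term by term in $\bar{C}^r_x$, with $a^r_x = ra_x$ and $c^r_x = rc_x$,
$$a^r_x\,\frac{\times_x}{r}\,a^r_x \;+_x\; c^r_x = \frac{(ra_x)(ra_x)}{r} + rc_x = r\,(a_x^2 + c_x) = r f_x(a_x),$$
in agreement with $f^r_x(a^r_x) = r f_x(a_x)$.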
Real numbers
Since the treatment for real numbers is similar to that for complex numbers, it will only be summarized here. The representations of $\bar{R}_y$ at $x$ and on $\bar{R}_x$ are given by Eqs. 10 and 11 as
$$\bar{R}_{r,x} = \{R_x, +_{r,x}, -_{r,x}, \times_{r,x}, \div_{r,x}, <_{r,x}, 0_{r,x}, 1_{r,x}\}, \qquad \bar{R}^r_x = \{R_x, +_x, -_x, \frac{\times_x}{r}, r\div_x, <_x, 0_x, r_x\}.$$
Here $r = r_{y,x}$ is a positive real number. The definition of parallel transforms for complex number structures applies here also. Let $F_{y,x}$ transform $\bar{R}_x$ to $\bar{R}_y$. $F_{y,x}$ defines the notion of same real number in that $a_y = F_{y,x}(a_x)$ is the same real number in $\bar{R}_y$ as $a_x$ is in $\bar{R}_x$. As was shown in Eqs. 20 and 21, $F_{y,x}$ can be factored into two operators, as in $F_{y,x} = W^y_r W^r_x$. $W^r_x$ defines the scaled representation of $\bar{R}_y$ on $\bar{R}_x$. It is given explicitly by Eq. 24. $W^y_r$ maps the scaled representation onto $\bar{R}_y$. Eqs. 27 and 28 also hold for the relations between any real valued analytic function on $\bar{R}^r_x$ and its correspondent on $\bar{R}_x$, in that $f^r_x(a^r_x) = rf_x(a_x)$.
Hilbert spaces
As noted in the introduction, Hilbert space structures have the form shown in Eq. 4 as
$$\bar{H}_x = \{H_x, +, -, \cdot, \langle\cdot,\cdot\rangle, \psi_x\}.$$
Complex numbers are included implicitly in that Hilbert spaces are closed under multiplication of vectors by complex numbers. Also, scalar products are bilinear maps with complex values. As was the case for numbers, parallel transformation of $\bar{H}_y$ to $x$ maps $\bar{H}_y$ onto $\bar{H}_x$. If scaling of the numbers is included, then the local representation of $\bar{H}_y$ on $\bar{H}_x$ is given by
$$\bar{H}^r_x = \{H_x, +_x, -_x, \frac{\cdot_x}{r}, \langle\cdot,\cdot\rangle^r_x, r\psi_x\}. \qquad (32)$$
This equation gives explicitly the relations of operations and vectors of the local representation of $\bar{H}_y$ to those in $\bar{H}_x$. The relations are defined by the requirement that $\bar{H}^r_x$ satisfy the Hilbert space axioms [17] if and only if $\bar{H}_x$ does. Here $r\psi_x$ is the same vector in $\bar{H}^r_x$ as $\psi_x$ is in $\bar{H}_x$. The description of $\bar{H}^r_x$ given here is suitable for use in section 7, where wave packets for quantum systems are discussed. For gauge theories, the Hilbert spaces contain vectors for the internal variables of matter fields. In this case one has to include a gauge field to account for the freedom to choose bases [22,21]. The local representation of $\bar{H}_y$ on $\bar{H}_x$ is then obtained by combining the scaling factor $r$ with a unitary basis-change operator $V$ [18,20]. If the $\bar{H}_x$ are $n$ dimensional, then $V$ is an element of the gauge group, $U(n)$.
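A short consistency check (ours): with $\cdot^r_x = \cdot_x/r$ and the correspondences $a^r_x \leftrightarrow ra_x$ for scalars and $\psi_x \leftrightarrow r\psi_x$ for vectors, scalar multiplication is preserved:
$$a^r_x \cdot^r_x (r\psi_x) = (ra_x)\,\frac{\cdot_x}{r}\,(r\psi_x) = r\,(a_x \cdot_x \psi_x),$$
so the product of the representatives of $a$ and $\psi$ is the representative of $a\cdot\psi$, as the Hilbert space axioms for $\bar{H}^r_x$ require.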
Gauge fields
As was noted in the introduction, for $y = x + \hat{\nu}\,dx$, $r_{y,x}$ can be represented as the exponential of a vector field:
[Footnote 2: Support for the inclusion of $r$ as a vector multiplier, as in $\bar{H}^r_x$, Eq. 32, is based on the equivalence between finite dimensional vector spaces and products of complex number fields [17]. If $\bar{H}_y$ and $\bar{H}_x$ are $n$ dimensional spaces, then $\bar{H}_y \simeq \bar{C}^n_y$ and $\bar{H}_x \simeq \bar{C}^n_x$. These equivalences extend to the local representation of $\bar{H}_y$ on $\bar{H}_x$. As the local representation of $\bar{C}_y$ on $\bar{C}_x$, $\bar{C}^r_x$ is the scalar field base for the local Hilbert space representation. It follows that $\bar{H}^r_x$ is equivalent to $(\bar{C}^r_x)^n$. A vector in $(\bar{C}^r_x)^n$ corresponds to an $n$-tuple, $\{a^r_{x,j} : j = 1, \cdots, n\}$, of number values in $\bar{C}^r_x$. Use of the fact that the value $a^r_{x,j}$ in $\bar{C}^r_x$ corresponds to the number value $ra_{j,x}$ in $\bar{C}_x$ shows that the $n$-tuple in $(\bar{C}^r_x)^n$ corresponds to the $n$-tuple $r\{a_{j,x} : j = 1, \cdots, n\}$ in $\bar{C}^n_x$. These equivalences should extend to the case where $\bar{H}_y$ and $\bar{H}_x$ are separable, which is the case here.]
$$r_{y,x} = e^{\vec{A}(x)\cdot\hat{\nu}\,dx} = e^{A_\mu(x)\nu^\mu\,dx} \qquad (34)$$
(sum over repeated indices implied). $\vec{A}(x)$ is also referred to as a gauge field, as it gives the relations between neighboring complex number structures at different space time points. To first order in small quantities,
$$r_{y,x} \approx 1 + \vec{A}(x)\cdot\hat{\nu}\,dx.$$
The use of $r_{y,x}$ makes clear the fact that the setup described here is a generalization of the usual one. To see this, set $\vec{A}(x) = 0$ everywhere. Then $r_{y,x} = 1$ for all $y, x$, and the local representations of $\bar{C}_y$ and $\bar{R}_y$ on $\bar{C}_x$ and $\bar{R}_x$ are $\bar{C}_x$ and $\bar{R}_x$. Since the $\bar{C}_x$ and $\bar{R}_x$ are then independent of $x$, one can replace $\bar{C}_x$ and $\bar{R}_x$ with just one complex and one real number structure, $\bar{C}$ and $\bar{R}$.
Scale factors for distant points
The description of $r_{y,x}$ can be extended to points $y$ distant from $x$. Let $P$ be a path from $x$ to $y$ parameterized by a real number, $s$, such that $P(0) = x$ and $P(1) = y$. Let $r^P_{y,x}$ be the scale factor associated with the path $P$. If $a_y$ is a number value in $\bar{C}_y$, then $a_y$ corresponds to the number value $r^P_{y,x}a_x$ in $\bar{C}_x$, where $a_x = F_{x,y}a_y$ is the same number value in $\bar{C}_x$ as $a_y$ is in $\bar{C}_y$.
One would like to express $r^P_{y,x}$ as an exponential of a line integral along $P$ of the field $\vec{A}(x)$. However, this is problematic because the integral corresponds to a sum over $s$ of complex number values in $\bar{C}_{P(s)}$. Such a sum is not defined, because addition is defined only within a number structure. It is not defined between different structures. This can be remedied by referring all terms in the sum to one number structure, such as $\bar{C}_x$. To see how this works, consider a two step path from $x$ to $y = x + \hat{\nu}_1\Delta_x$ and from $y$ to $z = y + \hat{\nu}_2\Delta_y$. $\Delta_y$ is the same number in $\bar{C}_y$ as $\Delta_x$ is in $\bar{C}_x$.
Let $a_z$ be a number value in $\bar{C}_z$. $a_z$ corresponds to the number value $r_{z,y}\times_y a_y$ in $\bar{C}_y$. Here $a_y = F_{y,z}a_z$ is the same number value in $\bar{C}_y$ as $a_z$ is in $\bar{C}_z$. In $\bar{C}_x$, $r_{z,y}\times_y a_y$ corresponds to the number value given by
$$(r_{z,y})_x\,r_{y,x}\,a_x.$$
Here $(r_{z,y})_x = F_{x,y}(r_{z,y})$ is the same number in $\bar{C}_x$ as $r_{z,y}$ is in $\bar{C}_y$, and $\times_x/r_{y,x}$ is the representation of $\times_y$ on $\bar{C}_x$. The $\bar{C}_x$ multiplications are implied in the right-hand term, and $a_x = F_{x,y}a_y$. The factor $(r_{z,y})_x\,r_{y,x}$ can be expressed in terms of the field $\vec{A}$. It is
$$(r_{z,y})_x\,r_{y,x} = e^{\vec{A}(y)\cdot\hat{\nu}_2\Delta_x + \vec{A}(x)\cdot\hat{\nu}_1\Delta_x}.$$
$\Delta_y$ is replaced here by its same value $\Delta_x$ in $\bar{C}_x$. Let $P$ be an $n$ step path where $P(0) = x_0 = x$, $P(j) = x_j$, $P(n-1) = x_{n-1} = y$, and $x_{j+1} = x_j + \hat{\nu}_j\Delta_{x_j}$. Then $r^P_{y,x}a_x$ is given by
$$r^P_{y,x}a_x = \left[\prod_{j=0}^{n-2}(r_{x_{j+1},x_j})_x\right]a_x = \exp\left\{\sum_{j=0}^{n-2}\vec{A}(x_j)\cdot\hat{\nu}_j\Delta_x\right\}a_x. \qquad (38)$$
The subscript $x$ denotes the fact that all terms in the product, and in the exponential, are values in $\bar{C}_x$. For example, $(r_{x_{j+1},x_j})_x = F_{x,x_j}r_{x_{j+1},x_j}$ is the same value in $\bar{C}_x$ as $r_{x_{j+1},x_j}$ is in $\bar{C}_{x_j}$. An ordering of terms in the product of Eq. 38 is not needed, because the different $r$ factors commute with one another. This can be extended to a line integral along $P$. The result is [20]
$$r^P_{y,x} = \exp\left\{\int_0^1 \left(\vec{A}(P(s))\cdot\frac{dP(s)}{ds}\right)_x ds\right\}. \qquad (39)$$
The subscript $x$ on the factors in the integral means that the terms are all evaluated in $\bar{C}_x$. It is unknown whether the field $\vec{A}$, and thereby $r^P_{y,x}$, is or is not independent of the path $P$ from $x$ to $y$. If $\vec{A}$ is not integrable, then the path dependence introduces complications. In particular, it means that for $y$ distant from $x$ there is no path independent way to describe the local representation of $\bar{C}_y$ on $\bar{C}_x$. The local representation would have to include a path variable, as in $\bar{C}^{r,P}_x$. In this work this complication will be avoided by assuming that $\vec{A}$ is integrable. Then $r^P_{y,x} = r_{y,x}$, independent of $P$. Let $P$ be a path from $x$ to $y$ and $Q$ be another path from $y$ to $x$. Then integrability gives
$$r^{Q*P}_{x,x} = (r^Q_{x,y})_x\,r^P_{y,x} = 1.$$
Here $Q*P$ is the concatenation of $Q$ to $P$. This result gives
$$(r_{x,y})_x = (r_{y,x})^{-1}. \qquad (41)$$
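A simple illustration of integrability (our example, under the stated assumption): if $\vec{A}$ is a gradient field, $\vec{A}(x) = \vec{\nabla}\phi(x)$ for some scalar function $\phi$, then the line integral depends only on the endpoints,
$$r^P_{y,x} = \exp\left\{\int_P \vec{\nabla}\phi\cdot d\vec{P}\right\} = e^{\phi(y) - \phi(x)},$$
which is manifestly independent of $P$ and satisfies $(r_{x,y})_x = e^{\phi(x)-\phi(y)} = (r_{y,x})^{-1}$, Eq. 41.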
Local availability of mathematics
The local availability of mathematics means that, for an observer, $O_x$, at point $x$, the mathematics that $O_x$ can use, or is aware of, is locally available at $x$. Since mathematical systems are represented by structures [12,13], one can use $V_x$ to denote the collection of all these structures.
$V_x$ includes real and complex number and Hilbert space structures, $\bar{R}_x$, $\bar{C}_x$, $\bar{H}_x$, structures for operator algebras, as well as many other structure types. All the mathematics that $O_x$ uses to make physical predictions and physical theories uses the systems in $V_x$. Similarly, all the mathematics available to an observer $O_y$ at point $y$ is contained in $V_y$.
An important requirement is that the mathematics available, in principle at least, to an observer must be independent of the observer's location. This means that $V_y$ must be equivalent to $V_x$. For each system structure in $V_y$, there must be a corresponding structure in $V_x$. Conversely, for each system structure in $V_x$ there must be a corresponding structure in $V_y$. Furthermore, the corresponding structures in $V_y$ and $V_x$ must be related by parallel transforms that map one structure to another. These parallel transforms define what is meant by the same structure and the same structure elements and operations in $V_x$ as in $V_y$, and conversely. This use of parallel transforms is an extension, to other types of mathematical systems, of the definitions and use of parallel transforms, Section 3, to relate complex and real number structures at different space time points. It is based on the description of each type of mathematical system as a structure, each consisting of a base set, basic operations, relations and constants, that satisfies a set of axioms relevant to the system type [12,13].
The association of an observer to a point, as in O x , is an idealization, mainly because observers, e.g. humans, have a finite size. Because of this, an observer's location is a region and not a point. This is the case if one notes that the observer's brain is the seat of all mathematical knowledge and limits consideration to the brain. In addition, quantum mechanics introduces an inherent uncertainty to the location of any system. In spite of these caveats, the association of an observer to a point will be used here.
An important aspect of $V_x$ is that $O_x$ must be able to use the systems in $V_x$ to describe the systems in $V_y$. This can be done by means of parallel transform maps or correspondence maps from systems in $V_y$ to those in $V_x$. Parallel transforms map elements and operations of system structures $\bar{S}_y$ to the same elements and operations of $\bar{S}_x$. In this case $O_x$ can use the mathematics of $\bar{S}_x$ as a stand-in for the mathematics of $\bar{S}_y$.
Correspondence maps take account of the scaling of real and complex numbers in relating systems at $y$ to those at $x$. In this case $O_x$ describes the mathematics of $\bar{S}_y$ in terms of the local representation, $\bar{S}^r_x$, of $\bar{S}_y$ on $\bar{S}_x$. If $S = R$ or $S = C$, then $O_x$ would describe the properties of $\bar{R}_y$ or $\bar{C}_y$ in terms of the local scaled systems $\bar{R}^r_x$ and $\bar{C}^r_x$. The existence of correspondence maps means that, for each system type $S$, $V_x$ contains all the scaled systems, $\bar{S}^r_x$, for each point $y$, in addition to $\bar{S}_x$. (Recall $r = r_{y,x}$.) They include scaled real numbers, $\bar{R}^r_x$, complex numbers, $\bar{C}^r_x$, and scaled Hilbert spaces, $\bar{H}^r_x$, as well as many other system types. All these scaled systems are available to an observer, $O_x$, at $x$. Since they are locally available, $O_x$ takes account of scaling by using them to make theoretical calculations that require the inclusion of numbers or vectors at different space or space time points. If $O_x$ does not use these correspondence maps and restricts use to parallel transform maps only, then the setup becomes simpler, in that each $\bar{S}^r_x$ is identical to $\bar{S}_x$. This raises the question of when correspondence maps can be used instead of parallel transform maps. This will be discussed in the next sections. Here the use of correspondence maps follows from the inclusion into physics of the freedom to choose number systems at different space time points. In this sense it extends the freedom to choose bases in vector spaces in gauge field theory [21,22] to the underlying scalars.
Correspondence maps and parallel transform maps
It is proposed here that correspondence maps be used in any theoretical physics expression that requires the mathematical comparison of mathematical entities at different space time points. Parallel transform maps suffice for all other comparisons and uses. These involve what is referred to here as the commerce of mathematics and physics. To see this, suppose $O_x$ wants to compare the numerical output of either an experiment or a theoretical computation, done at $x$, with the numerical output of either an experiment or a computation done at $y$. Let $b_x$ and $d_y$ be the real valued numerical outcomes obtained. Use of the correspondence maps means that $O_x$ would compare $b_x$ with the local representation of $d_y$ at $x$, that is, with the number $r_{y,x}d_x$.
Here $d_x$ is the same number in $\bar{R}_x$ as $d_y$ is in $\bar{R}_y$. This is contradicted by experience. There is no hint of a factor $r_{y,x}$ in comparing outcomes of repeated experiments, or in comparing experimental outcomes with theoretical predictions, or in any other use of numbers in commerce. If one ignores statistical and quantum uncertainties, numerical outcomes of repeated experiments or repeated computations are the same irrespective of when and where they are done. The reason for this is a consequence of a basic fact: no experiment and no computation ever directly yields a numerical value as an outcome. Instead, the outcome of any experiment is a physical system in a physical state that is interpreted as a numerical value. Similarly, the outcome of a computation is a physical system in a state that is interpreted as a numerical value.
The crucial word here is interpreted. If $\psi_y$ is the output state of a measurement apparatus for an experiment at point $y$, and $\phi_x$ is the output state of a computation at point $x$, then the numerical values of these output states are given by $a_y = I_y(\psi_y)$ and $b_x = I_x(\phi_x)$. Here $I_y$ and $I_x$ are interpretive maps from the output states of the measurement system into $\bar{R}_y$ and from the computation output states into $\bar{R}_x$, respectively. The space time dependence of the maps is indicated by the $x, y$ subscripts.
The "Naheinformationsprinzip", or no-information-at-a-distance principle [24,22], forbids direct comparison of the information in $\psi_y$ with that in $\phi_x$. This means that $I_y(\psi_y)$ and $I_x(\phi_x)$ cannot be directly compared. Instead, the information contained in $\psi_y$ and that contained in $\phi_x$ must be transported, by physical means, to a common point for comparison.
There are many different methods of physical information transmission. Included are optical and electronic methods, as well as older, slower methods. All methods involve motion of an information carrying physical system from one point to another: "information is physical" [25]. The physical system used should be such that the state of the information carrying degrees of freedom does not change during transmission from one point to another. Figure 1 illustrates schematically, in one dimensional space and time, the nonrelativistic transmission of a theory computation output state obtained at $x', u$ and an experimental output state obtained at $y, v$ to a common point, $x, t$, for comparison. Here $x, x', y$ are space locations and $u, v, t$ are times.
Figure 1: A simple example of comparing theory with experiment. The ovals denote the output computation and experiment systems in states $\phi_c$ and $\phi_e$ at space locations and times $x', u$ and $y, v$. $P$ and $Q$ denote the paths followed by these systems. One has $P(u) = x'$ and $Q(v) = y$. The double oval in the center denotes the two systems at the point, $x$, of path intersection, where $P(t) = Q(t) = x$. The interpretation maps are denoted by $I_{P(s),s}$ and $I_{Q(s),s}$ for different times $s$. The real number structures $\bar{R}_{P(s),s}$ and $\bar{R}_{Q(s),s}$ are associated with each point on the paths $P$ and $Q$. $F_{x,t;x',u}$ and $F_{x,t;y,v}$ are parallel transform operators that map the real number structures at the points of theory and experiment completion to the point of path intersection.
The figure, and the discussion, illustrate a general principle. All activities in the commerce of mathematics and physics consist of physical procedures and operations that generate physical output systems in states that are interpreted as numerical values. The "no information at a distance" principle forbids direct comparison of the associated number values at different points. Instead the systems or suitable information carriers must be brought to a common point where the numerical information, as number values in just one real number structure, can be locally compared.
Similar considerations apply to the storage of outcomes of experiments or computations, either in external systems or in the observer's brain. As physical dynamic systems, observers move in space time. If $P$ is a path taken by an observer, with $P(\tau)$ the observer's location at proper time $\tau$, the mathematics available to $O_{P(\tau)}$ is that in $V_{P(\tau)}$. If $\phi(P(\tau))$ denotes the state of a real number memory trace in an observer's brain, then the number value represented by $\phi(P(\tau))$ is given by $I_{P(\tau)}(\phi(P(\tau)))$. This is a number value in $\bar{R}_{P(\tau)}$. At a later proper time $\tau'$, the number value represented by the memory trace is $I_{P(\tau')}(\phi(P(\tau')))$. If there is no degradation of the memory trace, then $I_{P(\tau)}(\phi(P(\tau)))$ is the same number value in $\bar{R}_{P(\tau)}$ as $I_{P(\tau')}(\phi(P(\tau')))$ is in $\bar{R}_{P(\tau')}$. Correspondence maps play no role here either.
Quantum theory
As might be expected, the local availability of mathematics and the freedom of choice of number scaling factors have an effect on quantum theory. This is a consequence of the use of space time integrals and derivatives in the theory. For example, one would expect to see the gauge field $\vec{A}$ appear in quantum descriptions of physical systems. To see how this effect arises, it is useful to limit the treatment to nonrelativistic quantum mechanics on three dimensional Euclidean space, $R^3$.
Effect of the local availability of mathematics on quantum theory
The local availability of mathematics requires that the usual setup of just one $\bar{C}$, $\bar{R}$, $\bar{H}$ be replaced by separate number structures, $\bar{R}_x$, $\bar{C}_x$, and separate Hilbert spaces, $\bar{H}_x$, associated with each $x$ in $R^3$. It follows that mathematical operations, such as space or time derivatives or integrals, which involve nonlocal mathematical operations on numbers or vectors at different points, cannot be done. The reason is that these operations violate mathematical locality.
To preserve locality, one must use either parallel transformations or correspondence transformations. These two methods are well illustrated by considering a single particle wave packet. The usual representation has the form
$$\psi = \int \psi(y)|y\rangle\,dy, \qquad (42)$$
where the integral is over all space points in $R^3$. One result of the local availability of mathematics is that, for each $y$, the vector $\psi(y)|y\rangle$ is in $\bar{H}_y$, just as $\psi(y)$ is a number value in $\bar{C}_y$. It follows that the space integral over $y$ makes no sense. It describes a suitable limit of adding vectors that belong to different Hilbert spaces. Addition is not defined between different spaces; it is defined only within one Hilbert space and complex number structure.
The use of parallel transformations replaces Eq. 42 by
$$\psi_x = \int_x \psi(y)_x\,|y_x\rangle_x\,dy_x. \qquad (43)$$
Here $\psi(y)_x = F_{x,y}\psi(y)$ is the same number value in $\bar{C}_x$ as $\psi(y)$ is in $\bar{C}_y$; the number triple, $y_x$, in $|y_x\rangle_x$ is the same triple in $R^3_x$ as $y$ is in $R^3_y$; and $|y_x\rangle_x$ is the same state in $\bar{H}_x$ as $|y\rangle$ is in $\bar{H}_y$. The differentials $dy_x = dy^1_x dy^2_x dy^3_x$ refer to $R^3_x$. The subscript $x$ on the integral indicates that the integral is based on $\bar{H}_x$, $\bar{C}_x$. The representations of sameness given above are shown explicitly by
$$\psi(y)_x = F_{x,y}\psi(y), \qquad |y_x\rangle_x = |F_{x,y}(y)\rangle_x.$$
Also $|F_{x,y}(y)\rangle_x$ is the same basis vector in $\bar{H}_x$ as $|y\rangle$ is in $\bar{H}_y$.
Note that the point $x$ on which the integral is based is arbitrary. Eq. 43 holds if the subscript $x$ is replaced by another point $z$. Then the integral is based on $\bar{C}_z$, $\bar{H}_z$.
The use of parallel transforms is applicable to other aspects of quantum mechanics. For each $y$ in $R^3$, the momentum operator, $p_y$, for vectors in $\bar{H}_y$ is given by
$$p_y = \frac{\hbar_y}{i_y}\vec{\nabla}_y.$$
Here $i_y$, $\hbar_y$ are numbers in $\bar{C}_y$. The action of $p_y$ on a vector $\psi$ at point $y$ gives, for the $j$th component,
$$p_{y,j}\psi = \frac{\hbar_y}{i_y}\partial_{j,y}\psi = \frac{\hbar_y}{i_y}\,\frac{\psi(y + dy^j) - \psi(y)}{dy^j}.$$
As was the case for the space integral, this expression makes no sense, because $\psi(y + dy^j)$ is in $\bar{C}_{y+dy^j}$ and $\psi(y)$ is in $\bar{C}_y$. This can be remedied by replacing $\partial_{j,y}$ by $\partial'_{j,y}$, where
$$\partial'_{j,y}\psi = \frac{\psi(y + dy^j)_y - \psi(y)}{dy^j}. \qquad (47)$$
Here $\psi(y + dy^j)_y = F_{y,y+dy^j}\psi(y + dy^j)$. It follows from this that the expression for the momentum becomes
$$p'_{y,j}\psi = \frac{\hbar_y}{i_y}\partial'_{j,y}\psi. \qquad (48)$$
The Hamiltonian for a single quantum system in an external potential, acting on a state $\psi_y$ at point $y$, is given by
$$\tilde{H}_y\psi_y = -\frac{\hbar^2_y}{2m_y}\sum_j(\partial'_{j,y})^2\psi_y + V(y)\psi_y. \qquad (49)$$
Here $\hbar_y$ and $m_y$ are Planck's constant and the particle mass. They have values in $\bar{R}_y$. The values of the external potential, $V(y)$, are also in $\bar{R}_y$.
The main difference between this and the usual expression for a Hamiltonian is the replacement of $\partial_{j,y}$ with $\partial'_{j,y}$. Otherwise, the expressions are the same. For a single particle state, $\psi$, the momentum representation is
$$\psi = \int \psi(p)|p\rangle\,dp.$$
Here $dp = dp^1dp^2dp^3$. Since the amplitude $\psi(p)$ is a complex number value and no location for the value is specified, one may choose any location, $x$, such as that of an observer, $O_x$, to assign $\psi(p)$ as a number value in $\bar{C}_x$ and the integral as an element of $V_x$.
The relation between $\psi(p)$ and $\psi(x)$ is given by the Fourier transform. The components of the space integral in the transform must all be mapped to a common point, $x$, for the integral to make sense. This gives an expansion in which each Fourier kernel value, $e^{i_zp_zz}$ in $\bar{C}_z$, is replaced by $(e^{ipz})_x$, the same number in $\bar{C}_x$ as $e^{i_zp_zz}$ is in $\bar{C}_z$. The treatment described can be extended to multiparticle entangled states. It is sufficient to consider two particle states. For example, a two particle state $\psi_{1,2}$ in which the total momentum of the two particles is $0$ can be expressed by
$$\psi_{1,2} = \int \psi(p)\,|p\rangle_1|-p\rangle_2\,dp.$$
Use of Fourier transforms gives
$$\psi_{1,2} = \int\!\!\int\!\!\int \psi(p)\,e^{ipz_1}e^{-ipz_2}\,|z_1\rangle|z_2\rangle\,dz_1\,dz_2\,dp. \qquad (54)$$
The integral must be transformed to a Hilbert space with just one scalar field. For a point $x$ with $\bar{C}_x$, $\bar{H}_x$, the integrand factors are parallel transformed to obtain
$$(\psi_{1,2})_x = \int_x \psi(p)_x\,(e^{ipz_1})_x(e^{-ipz_2})_x\,|(z_1)_x\rangle|(z_2)_x\rangle\,(dz_1)_x(dz_2)_x\,dp_x. \qquad (55)$$
Here $p_x$ is the same value in $\bar{C}_x$ as $p$ is in $\bar{C}_{z_1}$ in the $z_1$ integral, Eq. 54, and $-p_x$ is the same value in $\bar{C}_x$ as $-p$ is in $\bar{C}_{z_2}$ in the $z_2$ integral.
Inclusion of number system scale factors
The above shows that the imposition of "mathematics is local" on quantum theory is more complex than the usual treatment, with just one scalar field and one Hilbert space for all space points. Since the description with parallel transforms is equivalent to the usual one, the added complexity is not needed if one goes no further with it. This is not the case if one extends the treatment to include space dependent scaling factors for the different $\bar{C}_x$, $\bar{R}_x$. For a given $x$, the local representations of $\bar{C}_y$ on $\bar{C}_x$ are given by scaled representations, $\bar{C}^{r_{y,x}}_x$, of $\bar{C}_y$ on $\bar{C}_x$. Also, the local representation of $\bar{H}_y$ on $\bar{H}_x$, with the effects of the number scaling included, is given by $\bar{H}^{r_{y,x}}_x$. For $y = x + \hat{\nu}dx$, a neighbor point of $x$, the scaling factor $r_{y,x}$ is given by $r_{y,x} = e^{\vec{A}(x)\cdot\hat{\nu}dx}$, Eq. 34. If $y$ is distant from $x$ and $\vec{A}(x)$ is integrable, then expressing $r_{y,x}$ as an integral along a straight line path from $x$ to $y$ gives (Eq. 39)
$$r_{y,x} = \exp\left\{\sum_i \int_{x_i}^{y_i}\left(A_i(w)\,dw_i\right)_x\right\} = \prod_i r_{y,x,i}. \qquad (57)$$
Here $x_i = x\cdot\hat{i}$ and $y_i = y\cdot\hat{i}$ are the components of $x$ and $y$ in the direction $i$. The last equality assumes that the components of $\vec{A}$ commute with one another. The subscript $x$ indicates that the integrals are defined on $\bar{R}_x$.
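For orientation (our worked special case): if $\vec{A}$ is constant, the component integrals in Eq. 57 are elementary and
$$r_{y,x} = \exp\left\{\sum_i A_i\,(y_i - x_i)\right\} = e^{\vec{A}\cdot(y - x)},$$
so the scale factor grows or shrinks exponentially with the separation of the two points along $\vec{A}$.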
The presence of $\vec{A}(x)$ affects the expression for a wave packet state $\psi$ as given by Eq. 43. In this case the wave packet expansion of $\psi_x$ is given by
$$\psi_x = \int_x r_{y,x}\,\psi(y)_x\,|y_x\rangle_x\,dy_x, \qquad (58)$$
where $r_{y,x}$ is given by Eq. 57. This result is obtained by noting that, for each $y$, the local representation of $\bar{H}_y$, $\bar{C}_y$ on $\bar{H}_x$, $\bar{C}_x$, with scaling factor included, is $\bar{H}^{r_{y,x}}_x$, $\bar{C}^{r_{y,x}}_x$. The vector in $\bar{H}^{r_{y,x}}_x$ that is the same vector as $\psi(y)|y\rangle$ is in $\bar{H}_y$ is denoted by $\psi(y)^r_x \cdot^r_x |y^r_x\rangle$. Here $\psi(y)^r_x$ is the same number value in $\bar{C}^{r_{y,x}}_x$ as $\psi(y)$ is in $\bar{C}_y$, and $|y^r_x\rangle$ is the same vector in $\bar{H}^{r_{y,x}}_x$ as $|y\rangle$ is in $\bar{H}_y$. This follows from the observation that $y^r_x$ and $y$ are the space positions associated with $|y^r_x\rangle$ and $|y\rangle$ in $\bar{H}^{r_{y,x}}_x$ and $\bar{H}_y$, respectively. Scalar vector multiplication in $\bar{H}^{r_{y,x}}_x$ is shown by $\cdot^r_x$.
The corresponding state on $\bar{H}_x$ is obtained by noting that
$$\psi(y)^r_x \cdot^r_x |y^r_x\rangle = r_{y,x}\psi(y)_x\,\frac{\cdot_x}{r_{y,x}}\,r_{y,x}|y_x\rangle = r_{y,x}\,\psi(y)_x \cdot_x |y_x\rangle. \qquad (59)$$
This is the result shown in Eq. 58. The use of $(\cdot^r_x)_x = \cdot_x/r_{y,x}$, Eq. 32, is based on the requirement that $\bar{H}^{r_{y,x}}_x$ satisfies the same Hilbert space axioms [17] as does $\bar{H}_x$.
It must be emphasized that the usual predictions of the quantum mechanical properties of wave packets, with $\vec{A}(x) = 0$ everywhere, do a good job of predicting experimental results. So far, quantum mechanical predictions and experiments have not shown the need for the presence of $\vec{A}$. This shows that the effect of the $\vec{A}$ field must be very small, either through the values of the field itself or through a very small coupling constant, $g$, of the field to numbers and vectors. This would be accounted for by replacing $\vec{A}$ in Eqs. 34 and 57 by $g\vec{A}$.
In this sense the presence of A is no different than the presence of the gravitational field. In theory, a proper description of quantum mechanics of systems should include the effects of the gravitational field. However, it can be safely neglected because the field is so small, at least far away from black holes where quantum physics is done.
Another feature of Eq. 58 is the dependence on the reference point x. This can be removed by restricting the integration volume to a region of space, excluding x, where the region contains essentially all of ψ. This is what one does in any experiment since ψ is prepared in a region that does not include the observer.
The removal of the $x$ dependence then follows from expressing Eq. 58 as a sum of two terms, one an integral over $V$ and the other an integral over all space outside $V$:
$$\psi_x = \int_{x,V} r_{y,x}\,\psi(y)_x\,|y_x\rangle\,dy_x + \int_{x,W} r_{y,x}\,\psi(y)_x\,|y_x\rangle\,dy_x.$$
The subscript $W$ on the second integral means that it refers to integration over all space outside $V$. Here $x$ is a point in $W$. Assume that $V$ is chosen so that the integral over $W$ can be neglected. Then
$$\psi_x \approx \int_{x,V} r_{y,x}\,\psi(y)_x\,|y_x\rangle\,dy_x.$$
This equation has a problem, in that the correspondence transforms are extended from any point in $V$ to a point outside $V$. However, these transforms are restricted here to apply within space or space time integrals, and not outside the integration volume. This can be fixed by choosing a point $z$ on the surface of $V$ and replacing $r_{y,x}$ by $U_{x,z}r_{y,z}$. The factor $r_{y,z}$ accounts for the correspondence transform from a point $y$ in $V$ to the point $z$ on the surface of $V$, and $U_{x,z}$ is a unitary operator that parallel transforms the result from $z$ to $x$. Then one has
$$\psi_{x,z} = U_{x,z}\int_{z,V} r_{y,z}\,\psi(y)_z\,|y_z\rangle\,dy_z. \qquad (62)$$
The subscript $z$ on $\psi_{x,z}$ indicates a possible dependence on the choice of $z$ on the surface of $V$. Figure 2 illustrates the setup for two points $y, u$ in $V$.
Figure 2: Representation of scaling factors in the integrals from a point $z$ on the surface of $V$ to points $y$ and $u$. The direction implied in the order of the subscripts of $r$ is opposite to the direction of the parallel transformations taking the integrand factors from $y$ and $u$ to $z$. $U_{x,z}$ parallel transforms $\psi_z$ to $\psi_{x,z}$. This is the same vector in $\bar{H}_x$ as $\psi_z$ is in $\bar{H}_z$.
This result shows that the wave packet representation is independent of $x$, such as an observer's location, provided it is outside of $V$. However, to the extent that $\vec{A}$ cannot be neglected, the representation does depend on the location of $z$. To see what this dependence is, let $w$ be another point on the surface of $V$. Then, following Eq. 62, $\psi_{x,w}$ is given by
$$\psi_{x,w} = U_{x,w}\int_{w,V} r_{y,w}\,\psi(y)_w\,|y_w\rangle\,dy_w. \qquad (63)$$
For the comparison, at $x$, of $\psi_{x,z}$ with $\psi_{x,w}$, it is sufficient to compare, in $\bar{H}_z$, the parallel transformation of $\psi_w$ to $z$ with $\psi_z$. Use of the fact that parallel transforms of numbers and vectors from $y$ to $w$ and then to $z$ are the same as transforms from $y$ to $z$ gives the parallel transformation of $\psi_w$ to $z$ as
$$(\psi_w)_z = \int_{z,V}(r_{y,w})_z\,\psi(y)_z\,|y_z\rangle\,dy_z.$$
Since $\vec{A}$ is integrable, one can write $(r_{y,w})_z = (r_{y,z}r_{z,w})_z = r_{y,z}(r_{z,w})_z$ to obtain
$$(\psi_w)_z = (r_{z,w})_z\int_{z,V} r_{y,z}\,\psi(y)_z\,|y_z\rangle\,dy_z.$$
The subscript $z$ denotes parallel transformation to $z$ of mathematical elements that are not in $V_z$. No subscript appears on $r_{y,z}$, as it is already a number value in $\bar{R}_z$. This shows that $(\psi_w)_z$ differs from $\psi_z$ by a factor $(r_{z,w})_z = (r_{w,z})^{-1}$, Eq. 41. The difference is preserved on parallel transformation to $x$, in that $(\psi_w)_x = \psi_{x,w}$ differs from $(\psi_z)_x = \psi_{x,z}$ by a factor $(r_{z,w})_x$.
If the effect of $\vec{A}$ is small, then it is useful to express $r_{y,x}$ as an expansion of the exponential to first order. For example, the expression for $\psi_x$ in Eq. 58 becomes
$$\psi_x = \int_x\left(1 + \int_x^y\left(\vec{A}(w)\cdot\hat{\nu}\,dw\right)_x\right)\psi(y)_x\,|y_x\rangle\,dy_x. \qquad (68)$$
Here $\hat{\nu}$ is a unit vector along the direction from $x$ to $y$. The first term of the expansion corresponds to the usual case, with $\vec{A}$ equal to $0$ everywhere. The $x$ dependence arises from the second term, which gives the correction due to the presence of $\vec{A}$.
The restriction of the integration to a finite volume $V$, as in Eq. 62, removes the dependence on $x$, in that $\psi_x$ is the same vector in $\bar{H}_x$ as $\psi_y$ is in $\bar{H}_y$, provided $y$ is not in $V$. Expansion of $(r_{y,z})_x$ to first order in small terms shows that the $z$ dependence arises from the $\vec{A}$ containing term, as in Eq. 68.
The dependence on z can be appreciable because z can be any point on the surface of V . What is interesting is that this dependence can be greatly reduced by using the properties of actual measurements to minimize the effect of A on the predicted expectation value.
Consider a position measurement on a system in state $\psi_x$, Eq. 58. The expectation value for this measurement, calculated at $x$, is given by
$$\langle\tilde{y}\rangle_x = \langle\psi_x|\,\tilde{y}\,|\psi_x\rangle.$$
This expectation value is an idealization of what one does. It does not take account of what one actually does. Typically, position measurements are done by dividing a volume of space up into cubes and measuring the relative frequency of occurrence of the quantum system in the different cubes. A measurement consists of a large number of repetitions of this measurement on repeated preparations of the system in state $\psi$. Assume the cubes in space have volume $\Delta^3$, where $\Delta$ is the length of a side. The outcomes of the repeated experiment are then "yeses" from the cube detectors, whose locations are denoted by triples, $j, k, l$, of integers. Each "yes" means that the location is somewhere in the volume of the responding detector cube located at position $z_{j,k,l} = j\Delta, k\Delta, l\Delta$. The local availability of mathematics means that $z_{j,k,l}$ is a triple of numbers in $\bar{R}_{z_{j,k,l}}$.
The presence of parallel and correspondence transformations enables physical theory to express exactly what is done experimentally. Eq. 58 for $\psi_x$ is replaced by an expression that limits integrals with scaling factors to the cube volumes and parallel transports these integrals to a common point $x$, where they can be added together. The result, $\psi'_x$, is given by
$$\psi'_x = \sum_{j,k,l} U_{x,z}\int_{z,V_{j,k,l}} r_{w,z}\,\psi(w)_z\,|w_z\rangle\,dw_z.$$
Here the sum is over all cubes. Each cube is labeled by a point $z = z_{j,k,l}$ on the cube surface. Each integral over the volume, $V_{j,k,l} = \Delta^3$, of cube $j, k, l$ is a vector in $\bar{H}_z$. Within each integral, the $r$ factor scales the values of the integrand at point $w$ to values at $z$. $U_{x,z}$ parallel transforms the integrals at different $z$ to a common point $x$.
The theoretical expectation value for the experimental setup described here is given by
$$\langle\tilde{y}\rangle'_x = \langle\psi'_x|\,\tilde{y}\,|\psi'_x\rangle.$$
The effect of the $r$ factor is smaller here than it is in the expectation value using $\psi_x$. The reason is that it is limited to integrations over small volumes. This representation of the prediction is supported by the discussion on mathematical and physical commerce. The "no information at a distance" principle requires that the information contained in the outcomes of each position measurement, as physical systems in "yes" or "no" states for each point $z_{j,k,l}$, be transmitted by physical means to $x$, where the results of the repeated measurements are tabulated. The tabulation is all done at $x$. No factor involving $\vec{A}$ appears in the transmission or tabulation.
As noted, the effect of $\vec{A}$ appears only in the integrals over the volumes $\Delta^3$. In these integrals, to first order,
$$r_{w,z} \approx 1 + \int_z^w\left(\vec{A}(u)\cdot\hat{\nu}\,du\right)_z.$$
Since the integral is limited to points within the volume $\Delta^3$, it is clear that, as $\Delta \to 0$, the correction integrals for each cube also approach $0$. This shows that the effect of the $\vec{A}$ field diminishes as the accuracy of the measurement increases. In the limit $\Delta = 0$, the $\vec{A}$ field disappears and one gets the usual theoretical prediction without $\vec{A}$ present. However, the Heisenberg uncertainty principle prevents the limit $\Delta = 0$ from actually being achieved.
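The rate at which the correction vanishes can be made explicit (our estimate): for $w$ inside a cube of side $\Delta$ with corner $z$, $|w - z| \le \sqrt{3}\,\Delta$, so
$$\left|\int_z^w \left(\vec{A}(u)\cdot\hat{\nu}\,du\right)_z\right| \;\le\; \sqrt{3}\,\Delta\,\sup_{u \in V_{j,k,l}}|\vec{A}(u)|,$$
which is first order in $\Delta$ and goes to $0$ as $\Delta \to 0$.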
The presence of the $\vec{A}$ field affects other quantum mechanical properties of systems. For example, the description of the momentum operator with $\vec{A} \neq 0$ replaces Eq. 48 by
$$p_{A,j,y}\psi = \frac{\hbar_y}{i_y}D_{j,y}\psi.$$
$D_{j,y}\psi$ is given by altering Eq. 47 to read
$$D_{j,y}\psi = \frac{r_{y+dy^j,y}\,\psi(y + dy^j)_y - \psi(y)}{dy^j}.$$
Here $r_{y+dy^j,y}\,\psi(y + dy^j)_y$ is the number value in $\bar{C}_y$ that corresponds to $\psi(y + dy^j)$ in $\bar{C}_{y+dy^j}$.
Using the fact that $r_{y+dy^j,y} = e^{A_j(y)dy^j}$ and expanding the exponential to first order gives
$$D_{j,y}\psi = \left(\partial'_{j,y} + A_j(y)\right)\psi. \qquad (76)$$
The momentum components become
$$p_{A,j,y} = \frac{\hbar_y}{i_y}D_{j,y} = \frac{\hbar_y}{i_y}\left(\partial'_{j,y} + A_j(y)\right).$$
This expression is similar to that for the canonical momentum in the presence of an electromagnetic field. Note that $A_j(y)$ is pure real. The expressions for the Hamiltonian for a single particle remain as shown in Eq. 49, except that $\partial'$ is replaced by $D$. For example, Eq. 49 becomes
$$\tilde{H}_y\psi_y = -\frac{\hbar^2_y}{2m_y}\sum_j(D_{j,y})^2\psi_y + V(y)\psi_y,$$
with $D_{j,y}$ given by Eq. 76. Inclusion of scaling factors into the two particle state entangled by momentum conservation is straightforward. This is achieved by including scale factors in the two particle space integral, $\int_x (dz_1)_x(dz_2)_x$, in Eq. 55. The result is
$$(\psi_{1,2})_x = \int_x r_{z_1,x}\,r_{z_2,x}\,\psi(p)_x\,(e^{ipz_1})_x(e^{-ipz_2})_x\,|(z_1)_x\rangle|(z_2)_x\rangle\,(dz_1)_x(dz_2)_x\,dp_x.$$
Here $r_{z_1,x}$ and $r_{z_2,x}$ are given by Eq. 57.
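Returning to the momentum expression above, a side-by-side comparison (ours, not in the source) with minimal coupling to an electromagnetic vector potential $A^{em}$ makes the similarity, and the difference, explicit:
$$p^{em}_j = \frac{\hbar}{i}\partial_j - \frac{e}{c}A^{em}_j, \qquad p_{A,j,y} = \frac{\hbar_y}{i_y}\partial'_{j,y} + \frac{\hbar_y}{i_y}A_j(y).$$
In the electromagnetic case the derivative is shifted by the real term $-\frac{e}{c}A^{em}_j$, while here the real field $A_j(y)$ enters multiplied by $\hbar_y/i_y$, reflecting its different, scaling, origin.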
Gauge theories
One approach [22] to gauge theories already makes partial use of the local availability of mathematics, with the assignment of an $n$ dimensional vector space to each $x$. Here the vector space is assumed to be a Hilbert space, $\bar{H}_x$, at each $x$. This $\bar{H}_x$ is quite different from that discussed in the previous section, in that the vectors in $\bar{H}_x$ refer to the internal states of matter fields. Matter fields $\psi$ are functionals where, for each space time point $x$, $\psi(x)$ is a vector in $\bar{H}_x$.
The freedom of choice of a basis [21,22] in each $\bar{H}_x$ is reflected in the factorization of a parallel transform operator, $U_{y,x}$, [24] from $\bar{H}_x$ to $\bar{H}_y$, where $y = x + \hat{\nu}dx$ is a neighbor point of $x$. The unitary operator $V_{y,x}$ expresses the freedom of basis choice. As such it is an element of the gauge group $U(n)$, with a Lie algebra representation [27,28]
$$V_{y,x} = e^{i\left(\Xi_\mu(x) + \Omega^j_\mu(x)\tau_j\right)\nu^\mu dx}.$$
Sum over repeated indices is implied. The $\tau_j$ are the generators of the Lie algebra $su(n)$ and the $\Omega^j_\mu(x)$ are the components of the $n$ different gauge fields, $\Omega^j(x)$. $\Xi(x)$ is the gauge field for the $U(1)$ factor of $U(n)$.
The covariant derivative of the field, $\psi$, is expressed by
$$D_{\mu,x}\psi = \frac{V_{\mu,x}\psi(x + dx^\mu)_x - \psi(x)}{dx^\mu}. \qquad (82)$$
Here $V_{\mu,x}$ is the $\mu$ component of $V_{y,x}$. Expansion of the exponential to first order in small quantities gives
$$D_{\mu,x} = \partial'_{\mu,x} + i\left(g_1\Xi_\mu(x) + g_2\Omega^j_\mu(x)\tau_j\right). \qquad (83)$$
Coupling constants, $g_1$ and $g_2$, have been added. The definition of $\partial'_{\mu,x}\psi$ is essentially the same as that given in Eq. 47. It is given by
$$\partial'_{\mu,x}\psi = \frac{\psi(x + dx^\mu)_x - \psi(x)}{dx^\mu}.$$
Here $\psi(x + dx^\mu)_x$ is the parallel transform of $\psi(x + dx^\mu)$ to $x$. The covariant derivative, Eq. 83, accounts for the local availability of mathematics and the freedom of basis choice. It does not include the effects of scaling factors for numbers. This is taken care of by replacing $V_{\mu,x}\psi(x + dx^\mu)_x$ in Eq. 82 by $r_{x+dx^\mu,x}V_{\mu,x}\psi(x + dx^\mu)_x$. This is a vector in the local representation, $\bar{H}^r_x$, Eq. 32, of $\bar{H}_y$ on $\bar{H}_x$.
Expansion of the exponentials to first order adds another term to $D_{\mu,x}$ in Eq. 83. One obtains [18,20]
$$D_{\mu,x} = \partial'_{\mu,x} + g_rA_\mu(x) + i\left(g_1\Xi_\mu(x) + g_2\Omega^j_\mu(x)\tau_j\right).$$
A coupling constant, $g_r$, for $\vec{A}(x)$ has been added. The coupling constants, and $i$, are all number values in $\bar{C}_x$. The physical properties of the gauge fields in $D_{\mu,x}$ are obtained by restricting the Lagrangians to only those terms that are invariant under local and global gauge transformations [28]. For Abelian gauge theories, such as QED, $\Omega(x)$ is absent. Invariance under local gauge transformations, $\Lambda(x)$, requires that the covariant derivative satisfy [28]
$$D'_{\mu,x}\,e^{i\Lambda(x)}\psi(x) = e^{i\Lambda(x)}\,D_{\mu,x}\psi(x).$$
$D'_{\mu,x}$ is obtained from $D_{\mu,x}$ by replacing $A_\mu(x)$ and $\Xi_\mu(x)$ with their primed values, $A'_\mu(x)$, $\Xi'_\mu(x)$. The presence of the primes allows for the possible dependence of the fields on the local $U(1)$ gauge transformation, $\Lambda(x)$. Use of Eq. 84 and separate treatment of real and imaginary terms gives the following results [28]:
$$A'_\mu(x) = A_\mu(x), \qquad \Xi'_\mu(x) = \Xi_\mu(x) - \frac{1}{g_1}\partial_{\mu,x}\Lambda(x). \qquad (88)$$
This shows that the real field $\vec{A}$ is unaffected by a $U(1)$ gauge transformation. It also shows that $\Xi_\mu(x)$ transforms in the expected way as the electromagnetic field.
As is well known, the properties of the $\Xi$ field show that it is massless. The reason is that a mass term for this field is not locally gauge invariant [22,28].
Unlike the case for the Ξ field, a mass term can be present for the real A field. This suggests that it represents a gauge boson for which mass is optional. That is, depending on what physical system A represents, if any, the presence of a mass term in Lagrangians is not forbidden.
For nonabelian gauge theories, such as U (2) theories, Eq. 88 still holds. However there is an additional equation giving the transformation properties of the three vector gauge fields under local SU (2) gauge transformations. These properties result in the physical representation of these fields in Lagrangians as charged vector bosons [28]. The A and Ξ bosons are still present.
Physical properties of the A field from the gauge theory viewpoint
At this point it is not known what physical system, if any, is represented by the A field. Candidates include the inflaton field [29,30], the Higgs boson, the graviton, dark matter, and dark energy. One aspect of which one can be reasonably certain is that the ratio of g_r, the coupling constant of the A field to matter fields, to the fine structure constant, α, must be very small. This is a consequence of the great accuracy of the QED Lagrangian and the fact that the A field appears in covariant derivatives for all gauge theory (and other) Lagrangians. As was noted, Ξ is the photon field. Inclusion of this field, and of a Yang-Mills term for its dynamics, into the Dirac Lagrangian gives the QED Lagrangian [28].
Conclusion
This work is based on two premises: the local availability of mathematics and the existence of scaling factors for number systems. Local availability is based on the idea that the only mathematics that is directly available to an observer is that which is, or can be, in his or her head. Mathematical information that is separate from an observer, O x , at space time point x, such as a textbook or a lecturer at point y, must be physically transmitted, e.g. by acoustic or light waves, to O x where it becomes directly available.
This leads to a setup in which a mathematical universe is associated with each point x. If an observer moves through space time on a world line, P(τ), parameterized by the proper time τ, the mathematics directly available to O_{P(τ)} at time τ is that in the universe at P(τ).
Each of these universes contains many types of mathematical systems. If a system S_x is in the universe at x, then the universe at y contains the same system type, S_y, and conversely. Each universe contains the different types of number systems and many other systems that are based on numbers. Included are the real and complex numbers R_x and C_x.
Here the mathematical logical definition [12,13] of each type of system as a structure is used. A structure consists of a base set, basic operations, relations, and constants that satisfies axioms appropriate for the type of structure considered. Examples are R and C, Eq. 6, for the real and complex numbers.
For each type of number structure it is possible to define many structures of the same type that differ by scaling factors [19]. For each real number r, one can define structures R_r, Eq. 11, and C_r, Eq. 13, in which a scale factor r relates the number values in R_r and C_r to those in R and C. The scaling of number values must be compensated for by scaling of the basic operations and relations in a manner such that R_r and C_r satisfy the relevant axioms for real and complex numbers if and only if R and C do.
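To make the compensation concrete, here is a minimal sketch (our own illustration; the class and map names are not notation from the text) showing that the correspondence a → r·a, together with a rescaled multiplication, transfers the axioms:

```python
from dataclasses import dataclass

# A minimal sketch of a scaled real-number structure R_r, assuming the
# correspondence a -> r*a together with compensating operations; the class
# and method names are illustrative, not taken from the text.
@dataclass(frozen=True)
class ScaledReal:
    value: float   # number value, as an element of R_r
    r: float       # the scaling factor defining the structure

    def add(self, other):
        # addition needs no compensation: r*a + r*b = r*(a + b)
        return ScaledReal(self.value + other.value, self.r)

    def mul(self, other):
        # multiplication is compensated by 1/r so that (r*a) x_r (r*b) = r*(a*b)
        return ScaledReal(self.value * other.value / self.r, self.r)

def lift(a, r):
    # correspondence map: the element a of R is represented by r*a in R_r
    return ScaledReal(r * a, r)

r, a, b = 3.0, 2.0, 5.0
assert lift(a, r).add(lift(b, r)) == lift(a + b, r)   # axioms transfer
assert lift(a, r).mul(lift(b, r)) == lift(a * b, r)   # via the isomorphism a -> r*a
print(lift(1.0, r))   # the multiplicative identity of R_r is r * 1
```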
The local availability of mathematics requires that one be able to construct local representations of C_y on C_x. Two methods were described. One uses parallel transformations. These define or represent the notion of sameness between mathematical systems at different points. If F_{x,y} is a parallel transform map from S_y onto S_x, then for each element, w_y, in S_y, w_x = F_{x,y}(w_y) is the same element in S_x as w_y is in S_y. In this case the local representation of S_y on S_x is S_x itself.
The other method uses what are called correspondence maps. These combine parallel transformations with scaling. The local representation of C_y on C_x is C^r_x, which is a scaling of C_x by a factor r = r_{y,x}. (From now on R is not explicitly mentioned, as it is implicitly assumed to be part of C.) The local representation of an element, a_y, of C_y corresponds to the element r_{y,x}a_x in C_x. Here a_x = F_{y,x}a_y is the same element in C_x as a_y is in C_y.
It was seen that the scaling of numbers plays no role in the general use of numbers in mathematics and physics. This includes such things as comparing outcomes of theory predictions with experimental results or in comparing outcomes of different experiments. More generally it plays no role in the use of numbers in the commerce of mathematics and physics. The reason is that theory computations and experiment outcomes obtained at different locations are never directly compared. Instead the information contained in the outcomes as physical states must be transmitted by physical systems to a common point. There the states of the physical transmittal systems are interpreted locally as numbers, and then compared.
In this work, number scaling was limited to theory calculations that involve space time derivatives or integrals. Examples of this were described in quantum theory and in gauge theories. An example discussed in some detail was the expansion of a wave packet ψ = ∫ψ(y)|y⟩dy. Since ψ(y)|y⟩ is a vector in H_y, the integrand has to be moved to a common point, x, for the integral to make sense. This can be done by parallel transform maps, or by correspondence maps; in the latter case the scaling factor, r_{y,x}, is the integral from x to y of the exponential of the gauge field, A(y), as in Eq. 57. It was also seen that one can use both transform and correspondence maps to express the wave packet in a form that reflects exactly what one does in an experiment that measures either the spatial distribution or the position expectation value of a quantum particle. If the experiment setup consists of a collection of cube detectors of volume ∆³ that fill 3 dimensional Euclidean space, the outcome of each of many repeated experiments is a triple of numbers, j, k, l, that labels the position, j∆, k∆, l∆, of the detector that fired.
As was seen in the discussion of mathematical and physical commerce, the outcomes of repeated experiments must be physically transported to a common point, x, where they are interpreted as numbers in R_x for mathematical combination. It follows that the scaling factors are limited to integration over the volume of each detector. This results in the replacement of ψ_x by ψ'_x where, as in Eq. 70,

ψ'_x = Σ_{j,k,l} U_{x,z} ∫_{V_{j,k,l}} r_{w,z} ψ(w)_z |w⟩_z dw_z.
The sum is over all cubes; z = z_{j,k,l} is a point on each cube surface. Each integral is over the volume, V_{j,k,l}, of each cube. The r factor, Eq. 72, scales the values of each integrand at point w to values at z. U_{x,z} parallel transforms the integrals at different z to a common point x.
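A one-dimensional numerical sketch of this construction follows; the constant toy gauge field A0 and the Gaussian packet are our own illustrative assumptions, and the squared norm is used only as a simple proxy for the size of the scaling effect:

```python
import numpy as np

# A 1-D sketch of the detector-volume scaling described above, assuming a
# constant toy gauge field A0; the names (A0, delta) are illustrative.
def scaled_norm(delta, A0=0.05, L=8.0, pts=64):
    total = 0.0
    for z in np.arange(-L, L, delta):        # z: reference point of one detector cell
        w = np.linspace(z, z + delta, pts)   # points inside that cell only
        dw = w[1] - w[0]
        psi = np.exp(-w**2 / 2.0) / np.pi**0.25
        r = np.exp(A0 * (z - w))             # scaling of integrand values from w to z
        total += np.sum(np.abs(r * psi)**2) * dw
    return total

# As the detector volumes go to 0, so does the effect of A:
for d in (2.0, 1.0, 0.25, 0.01):
    print(d, round(scaled_norm(d), 6))       # tends to the unscaled norm, 1
```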
The state, ψ'_x, differs from ψ_x, or the usual quantum mechanical wave packet expression for ψ, in that it ties theory closer to experiment. It also reduces the effect of scaling to the sum of the effects for the volumes of each of the detectors. The effect is reduced because, for any point w in the sensitive volume of the experiment, the effect of A on the transform from w to x is limited to the part of the path in the detector volume containing w. As the detector volumes go to 0, so does the effect of A. The increase in the number of detectors as the volume of each gets smaller does not remove this effect.
In this sense the usual quantum theory wave packet integral for ψ is a limit in that it is independent of experimental details. Unlike the case for the usual representation of ψ, use of ψ'_x to make predictions will give values that depend on experimental details. The fact that there is no indication, so far, of such dependence, at least to the accuracy of experiment, means that the effect of A(x) must be very small. Whether the small effect is due to small values of A itself or a small value of a coupling constant of A to states and matter fields is not known at present.
The relation between ψ'_x and the usual integral for ψ is further clarified by the observation that ψ'_x → ψ_x as the detector volume goes to 0. However, the Heisenberg uncertainty principle prevents experimental attainment of this limit.
It must be emphasized that tying the wave packet integral to experiment details, as with ψ'_x, has nothing to do with collapse of the wave packet during the carrying out of the experiment. ψ'_x is just as coherent a state as is ψ. The gauge field A also appears in the expression for the canonical momentum. The usual expression for the momentum, p = Σ_j (ℏ/i)∂_{j,x}, is replaced by Eq. 73, where ∂'_{j,x}, Eq. 47, accounts for the local availability of mathematics. As a covariant derivative, D_{µ,x} appears in gauge theories with additional terms. It was seen that the limitation of Lagrangians to terms that are invariant under local gauge transformations [22,28] results in A appearing as a gauge boson for which mass is optional. This is the case for both Abelian and nonabelian gauge theories.
The physical nature of A, if any, is unknown. What is known is that the great accuracy of QED requires that the coupling constant of A to matter fields must be very small.
It must be emphasized that this work is only a first step in combining "mathematics is local" with the freedom of choice of scaling factors for number structures. An example of work for the future is to determine the effect of number scaling factors on geometry. It is suspected that the scaling factors may induce conformal transformations into geometry. More work also needs to be done on the effects of number scaling on quantum mechanics. An interesting question here is whether scaling factors are needed at all in classical mechanics.
Finally, one may hope that this work provides a real entry into the description of a coherent theory of physics and mathematics together. Such a theory would be expected to describe mathematics and physics together as part of a coherent whole instead of as two separate but closely related disciplines.
"Mathematics"
] |
New goodness-of-fit tests for exponentiality based on a conditional moment characterisation
The exponential distribution plays a key role in the practical application of reliability theory, survival analysis, engineering and queuing theory. These applications often rely on the underlying assumption that the observed data originate from an exponential distribution. In this paper, two new tests for exponentiality are proposed, which are based on a conditional second moment characterisation. The proposed tests are compared to various established tests for exponentiality by means of a simulation study where it is found that the new tests perform favourably relative to the existing tests. The tests are also applied to real-world data sets with independent and identically distributed data as well as to simulated data from a Cox proportional hazards model, to determine whether the residuals obtained from the fitted model follow a standard exponential distribution.
Introduction
The exponential distribution is an important and commonly used statistical model for a multitude of real-life phenomena, such as lifetimes, time to default of loans, and many other time-to-event scenarios. As a result, this distribution plays a vital role in the practical application of reliability theory, survival analysis, engineering, and queuing theory (to name only a few), as the underlying theory governing these applications often assumes an exponential distribution for the data. Therefore, to effectively implement these applications, it is necessary to perform goodness-of-fit tests to determine whether these fundamental distributional assumptions are satisfied or not. Examples where the assumption of exponentiality is necessary (and hence the need to test for this assumption) range from the analysis of queuing networks [26], to cancer clinical trials [15], and the time-to-failure of systems of machines and operators [12]; for further examples of data sets see the papers by Shanker, Fesshaye, and Selvaraj [23,24].
Suppose that a random variable X follows an exponential distribution with scale parameter λ (written X ∼ Exp(λ)). This random variable has a number of unique distributional properties, which include the forms of its cumulative distribution function (CDF), survival function, probability density function, and characteristic function (CF), given by F(x) = P(X < x) = 1 − e^{−λx}, S(x) = 1 − F(x) = e^{−λx}, f(x) = λe^{−λx}, and φ(t) = E(e^{itX}) = λ/(λ − it), respectively, with x > 0 and where λ > 0 is the scale parameter, with E(X) = 1/λ. In addition, the exponential distribution also exhibits many other unique distributional properties, called characterisations. These characterisations help in the development of tests for exponentiality since, if one can verify that the data have these properties, then one can conclude that the data were obtained from an exponential distribution. One such property is the so-called 'memoryless' property, which states that, if X follows an exponential distribution, then P(X > s + t | X > s) = P(X > t) (1) for s, t > 0. This property implies that, if X represents the lifetime of a certain component, then the remaining lifetime of that component is independent of its current age. For components that suffer from wear-and-tear (i.e., where the lifetime is dependent on its current age), the exponential distribution would not be an appropriate model. A second property states that the exponential distribution uniquely has the feature that the hazard rate is constant, that is, h(x) = f(x)/(1 − F(x)) = λ for all x > 0. This feature is directly tied to the memoryless property since the failure rate is constant throughout the lifetime of the component.
Suppose now that X_1, X_2, ..., X_n are realisations from some random variable X with unknown distribution function F; then the process of testing whether or not these data are realisations from an exponential distribution with parameter λ involves the use of statistical inference via goodness-of-fit tests. The inferential question can be framed in the form of the following composite hypothesis statement: H_0: X ∼ Exp(λ) for some λ > 0 (2), against the alternative hypothesis that the distribution of X is something other than exponential. Note that the tests that will be discussed in this paper all make use of a scaled version of the original data, defined as Y_j = X_j λ̂, j = 1, 2, ..., n, where λ̂ denotes the maximum likelihood estimator (MLE) of the parameter λ and is given by λ̂ = 1/X̄_n with X̄_n = (1/n) Σ_{j=1}^{n} X_j. The motivation for the use of this scaling factor primarily comes from the fact that the distribution of exponential random variables is invariant to simple scale transformations; that is, X is exponentially distributed if, and only if, cX is also exponentially distributed for every constant c > 0. Therefore, conclusions drawn regarding exponentiality based on the sample Y_1, Y_2, ..., Y_n can reasonably be extended to the exponentiality of X (from which X_1, X_2, ..., X_n was obtained). Furthermore, many statistics discussed will also employ the order statistics of X_j and Y_j, defined as X_(1) < X_(2) < ... < X_(n) and Y_(1) < Y_(2) < ... < Y_(n), respectively.
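As a small illustration of this rescaling step (a minimal sketch; the seed, sample size and variable names are ours):

```python
import numpy as np

# Rescaling the data by the MLE lambda_hat = 1 / mean(X), so that Y has unit
# mean regardless of the (unknown) scale of X.
rng = np.random.default_rng(1)
x = rng.exponential(scale=3.0, size=30)   # X_1,...,X_n with lambda = 1/3
lam_hat = 1.0 / x.mean()                  # MLE of lambda
y = x * lam_hat                           # Y_j = X_j * lambda_hat
print(y.mean())                           # exactly 1 by construction
```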
To test the hypothesis in (2), formal test statistics are used, some of which are general statistics that can be applied to test for almost any distribution, whereas others exploit the various unique characteristics of exponentially distributed data, such as the memoryless or constant hazard properties. Examples of general tests employed to test exponentiality include the Kolmogorov-Smirnov test and the Cramér-von Mises test, both of which are based on the same basic principle of measuring the discrepancies between the theoretical CDF of the exponential distribution and its empirical equivalent (see, e.g., Chapter 4 of D'Agostino and Stephens, 1986 [11]). Another class of general tests involves a similar approach, but replaces the CDF with the CF; examples of these include the test by Epps and Pulley [13], and the one by Meintanis, Swanepoel and Allison [21]. In addition, there are many goodness-of-fit tests based on the unique properties of the exponential distribution, which are occasionally desirable as they tend to focus on much more specific aspects of the distribution and are potentially more robust than the more general tests. For example, since the memoryless property uniquely characterises the exponential distribution, this implies that the exponentially distributed random variable X will have this property and conversely, if X exhibits this property it must be exponentially distributed. Therefore, a test based on this property will involve first determining sample estimates of the two probabilities appearing on either side of the expression of (1), and the test can then be designed to measure the equality of these two estimates. For examples of tests based on this characterisation, see [2], [5], and [6].
There are many more such unique characterisations of the exponential distribution and the literature on goodness-of-fit contains numerous test statistics based on these characterisations. For example, for tests based on the mean residual life, see [8], [25], [17], and [9]. For a test based on the Arnold-Villasenor characterisation, see [18], and for a test based on the Rossberg characterisation see [27]. For a comprehensive review of tests for exponentiality, the interested reader is referred to the review papers by Ascher [7], Henze and Meintanis [16], and Allison, Santana, Smit and Visagie [4].
The remainder of the paper is organised as follows: In Section 2 we propose new tests for exponentiality which are based on a conditional second moment characterisation of the exponential distribution and, in Section 3, the results of a brief Monte Carlo simulation are presented to compare the power performance of the newly proposed tests to some commonly used existing tests for exponentiality. The paper concludes in Section 4 where the tests are applied to some real-world data sets with independent and identically distributed random values, as well as to data simulated from a Cox proportional hazards model, to determine whether the residuals obtained from the fitted model follow a standard exponential distribution.
New tests for exponentiality based on a characterisation
Consider the following characterisation of the exponential distribution by Afify, Nofal and Ahmed [1]: Characterisation. Let X be a non-negative random variable with continuous distribution function F and density f. If E(X²) < ∞, then X has an exponential distribution with parameter λ (that is, F(x) = 1 − e^{−λx}) if, and only if, E(X² | X > t) = t² + 2t/λ + 2/λ² for all t > 0. From this characterisation we can deduce the following corollary. Corollary 1. Let X be a non-negative random variable with continuous distribution function F. If E(X²) < ∞, then X has an exponential distribution with parameter λ if, and only if, E(X² I(X > t)) = r_λ(t)(1 − F(t)) for all t > 0, where r_λ(t) := 2/λ² + h(t)(t² + 2t/λ)/λ, h(t) = f(t)/(1 − F(t)) is the hazard rate of X, and I(·) denotes the indicator function.
Proof: Straightforward calculations yield, for all t > 0, an expression for E(X² I(X > t)) in terms of F and f. From the Characterisation, it then follows that X has an exponential distribution with parameter λ if, and only if, this expression equals r_λ(t)(1 − F(t)) for all t > 0. Based on Y, the characterisation in Corollary 1 can be restated as follows: Y is exponentially distributed if, and only if, E(Y² I(Y > t)) = r_1(t)S(t) for all t > 0, where S(t) = P(Y > t) and r_1(t) = 2 + h(t)(t² + 2t), with h(t) the hazard rate of Y.
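A quick Monte Carlo sanity check of this restatement (a sketch we add for illustration; the sample size and seed are arbitrary) confirms that, for the standard exponential with h(t) = 1, the empirical conditional second moment matches r_1(t) = 2 + (t² + 2t):

```python
import numpy as np

# Check E(Y^2 | Y > t) = 2 + h(t)*(t^2 + 2t) for Y ~ Exp(1), where h(t) = 1.
rng = np.random.default_rng(0)
y = rng.exponential(size=1_000_000)
for t in (0.5, 1.0, 2.0):
    emp = np.mean(y[y > t] ** 2)          # empirical conditional second moment
    print(t, emp, 2 + (t**2 + 2*t))       # the two columns should agree closely
```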
Based on this restatement, a random variable Y has a standard exponential distribution if, and only if, the difference ψ(t) := E(Y² I(Y > t)) − r_1(t)S(t) vanishes for all t > 0, or equivalently if, and only if, an empirically estimable version of this difference vanishes for all t > 0. Naturally, ψ(t) is unknown and hence must be estimated from the data Y_1, Y_2, ..., Y_n. Define two possible estimators, ψ_n^(1)(t) and ψ_n^(2)(t), for ψ(t) by replacing the expectation, the survival function and the hazard rate by their empirical counterparts. Here, f̂(t) denotes the kernel density estimate of f(t), which is defined as f̂(t) = (1/(nh)) Σ_{j=1}^{n} φ((t − Y_j)/h), where φ(·) is the standard normal density function and h is a suitably chosen bandwidth (for an in-depth discussion on kernel density estimators, the interested reader is referred to the monograph by Wand and Jones [28]). The only difference between the estimators ψ_n^(1) and ψ_n^(2) is that in ψ_n^(1) we choose f(t) = e^{−t}, the density function specified under the null hypothesis, whilst in ψ_n^(2) we estimate f by f̂. Now, if the observed data originated from an exponential distribution, then both ψ_n^(1) and ψ_n^(2) should be close to zero. This leads to two Cramér-von Mises type test statistics, S_n and T_n, which measure the weighted squared deviations of ψ_n^(1) and ψ_n^(2), respectively, from zero; here F_n(t) = (1/n) Σ_{i=1}^{n} I(Y_i ≤ t) denotes the empirical distribution function of Y_1, Y_2, ..., Y_n and w(t) is a suitable, positive weight function satisfying some standard integrability conditions. For implementation of the proposed test statistics, we will use w(t) = e^{−at}, where a > 0 is a user-defined tuning parameter.
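The Gaussian kernel density estimate above translates directly into code; a minimal sketch follows, with a Silverman-type bandwidth as an illustrative default (the paper does not prescribe one here):

```python
import numpy as np

# Gaussian kernel density estimate f_hat(t) = (1/(n*h)) * sum_j phi((t - Y_j)/h).
def kde(t, y, h=None):
    y = np.asarray(y)
    if h is None:
        h = 1.06 * y.std(ddof=1) * len(y) ** (-1 / 5)   # Silverman-type rule (assumption)
    u = (t[:, None] - y[None, :]) / h
    phi = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)      # standard normal density
    return phi.mean(axis=1) / h                         # f_hat evaluated at each t

t = np.linspace(0.0, 5.0, 6)
y = np.random.default_rng(2).exponential(size=100)
print(kde(t, y))          # should roughly track e^{-t} under the null hypothesis
```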
With this choice of w(t) the following easily calculable form of the proposed test statistics S n and T n is obtained.
Both tests reject the null hypothesis in (2) for large values of the test statistics. The critical values for the test statistics can easily be calculated using the following Monte Carlo procedure.
1. Simulate a sample X_1, X_2, ..., X_n from a standard exponential distribution. 2. Compute the scaled values Y_j = X_j λ̂, j = 1, 2, ..., n, where λ̂ = 1/X̄_n. 3. Calculate the test statistic, say S = S_n(Y_1, Y_2, ..., Y_n). 4. Repeat steps 1-3 a large number of times, say MC times, to obtain MC copies of S denoted S_1, S_2, ..., S_MC. 5. Obtain the order statistics S_(1) ≤ S_(2) ≤ ... ≤ S_(MC). 6. The critical value at an α% significance level is then given by S_(⌊MC(1−α)⌋), where ⌊x⌋ denotes the largest integer less than or equal to x.
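The procedure is straightforward to implement; the sketch below uses a toy placeholder statistic, since the closed forms of S_n and T_n are not repeated here:

```python
import numpy as np

# Monte Carlo critical-value procedure; `statistic` stands in for S_n or T_n.
def critical_value(statistic, n, alpha=0.05, mc=10_000, seed=0):
    rng = np.random.default_rng(seed)
    stats = np.empty(mc)
    for m in range(mc):
        x = rng.exponential(size=n)       # step 1: standard exponential sample
        y = x / x.mean()                  # step 2: Y_j = X_j * lambda_hat
        stats[m] = statistic(y)           # step 3
    stats.sort()                          # step 5
    return stats[int(np.floor(mc * (1 - alpha))) - 1]   # step 6 (0-based index)

# Toy stand-in statistic (NOT S_n): distance of the sample mean of Y^2 from 2.
cv = critical_value(lambda y: abs(np.mean(y**2) - 2.0), n=20)
print(cv)
```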
Simulation study and results
In this section Monte Carlo simulations are used to compare the finite-sample performance of the newly proposed tests T n,a and S n,a to the following existing tests for exponentiality.
• The traditional tests of Kolmogorov-Smirnov (KS_n) and Cramér-von Mises (CM_n), whose statistics measure the discrepancy between the empirical distribution function and the exponential CDF. Both of these tests reject the null hypothesis for large values of the test statistics.
• A Kolmogorov-Smirnov type test and a Cramér-von Mises type test based on the mean residual life, as developed by Baringhaus and Henze [8]; to distinguish them from the traditional tests, these are denoted KS*_n and CM*_n in what follows. Both KS*_n and CM*_n reject the null hypothesis for large values.
• The Epps and Pulley test EP_n [13], which is based on the characteristic function, φ(x). The null hypothesis is rejected for large values of |EP_n|.
Simulation setting
A significance level of 5% was used throughout the study. Empirical critical values of all the tests were obtained from 10 000 independent Monte Carlo replications using the procedure given at the end of Section 2. Power estimates were calculated for sample sizes n = 20 and n = 30 using 10 000 independent Monte Carlo replications for the various alternative distributions given in Table 1.
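A sketch of the corresponding power-estimation loop follows (the statistic and the critical value cv are placeholders; alternatives such as Γ(0.7) are drawn as in Table 1):

```python
import numpy as np

# Power estimation: draw from an alternative, rescale, and record the
# rejection rate at the Monte Carlo critical value cv obtained under H0.
def power(statistic, sampler, n, cv, reps=10_000, seed=1):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        x = sampler(rng, n)                  # draw from the alternative distribution
        y = x / x.mean()                     # rescale: Y_j = X_j * lambda_hat
        rejections += statistic(y) > cv      # reject H0 for large values
    return round(100.0 * rejections / reps)  # estimated power, in percent

toy_stat = lambda y: abs(np.mean(y**2) - 2.0)     # placeholder statistic (NOT S_n)
gamma07 = lambda rng, n: rng.gamma(0.7, size=n)   # one of the Table 1 alternatives
print(power(toy_stat, gamma07, n=20, cv=0.5))     # cv from the null simulation
```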
The two new tests, in which a tuning parameter appears, were evaluated for a = 0.25 and a = 1. All calculations and simulations were performed in R [22]. Table 2 contains the percentage of the 10 000 Monte Carlo samples that resulted in the rejection of the null hypothesis in (2), rounded to the nearest integer. For each alternative, the top row corresponds to the estimated powers obtained for n = 20, whereas the row below corresponds to the estimated powers for n = 30.
The highest power for each alternative distribution is highlighted for ease of comparison. From Table 2 it is clear that there is no single test that dominates all of the other tests. However, S_n,a outperforms all its competitors for the EG(θ), EL(θ) and Γ(0.7) alternatives and for both sample sizes. No single test dominates for the majority of the other alternatives, with the exception of T_n,a, which performs well for the LF(4) and PW(1) alternatives. Overall, the two newly proposed tests produce estimated powers which are competitive relative to the other tests and, hence, this limited Monte Carlo study shows that they can be used in practice to test whether observed data are realised from an exponential distribution.
Practical applications and conclusion
In this section all the tests considered in the simulation study will be applied to both real-world and simulated data sets. The two real-world data sets considered in this study respectively contain the failure times of air conditioning systems and the waiting times of bank customers. For these two data sets, the tests for exponentiality will be used to determine whether the observed values are realisations from an exponential distribution. On the other hand, the remaining two data sets that will be considered are simulated from a Cox proportional hazards (CPH) model and the tests for exponentiality will be used to determine the adequacy of a specific CPH model fitted to the data.
Practical application to real-world data sets
The first data set contains 30 failure times of the air conditioning system of an airplane as given by Linhart and Zucchini [20], whereas the second data set contains the waiting times (in minutes) of 100 bank customers before service as obtained from Ghitany, Atieh and Nadarajah [14]; both data sets can be found in the Appendix in Tables 7 and 8. In Tables 3 and 4 a summary of the results of all the different tests for exponentiality can be found. The summary contains the value of each test statistic and the associated p-value used to test whether the data originated from an exponential distribution. For the failure time data, all of the tests, except the KS_n test, do not reject the null hypothesis of exponentiality using a 5% significance level. In contrast, for the waiting time data, all of the tests reject the null hypothesis of exponentiality at the same significance level. This illustrates that the newly proposed tests at least agree with the more traditional tests for exponentiality.
Practical application to simulated data sets
The following two data sets, given in the Appendix in Tables 9 and 10, contain simulated lifetimes (t i , i = 1, 2, . . . , 100) together with a single covariate (x i , i = 1, 2, . . . , 100) which can take on the values 0, 1, 2 or 3. The first data set was obtained by simulating data from a CPH model with a Weibull cumulative baseline hazard function, whereas the second data set was simulated from a CPH model with a log-normal cumulative baseline hazard function.
Recall that the cumulative hazard function of the j-th individual follows a CPH model with a single covariate if Λ_j(t) = e^{βx_j} H(t), where H(·) is some unspecified baseline cumulative hazard function, x_j is the value of the covariate of the j-th individual, and β is an unknown regression parameter.
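For illustration, lifetimes from such a CPH model can be generated by inverse-transform sampling; the sketch below assumes the Weibull parameterisation H(t) = t^b/a used below, with arbitrary parameter values of our own choosing:

```python
import numpy as np

# Inverse-transform simulation from Lambda_j(t) = exp(beta*x_j)*H(t), assuming
# the Weibull baseline H(t) = t**b / a; beta, a, b and the seed are illustrative
# values, not those used for the paper's data sets.
rng = np.random.default_rng(3)
beta, a, b, n = 0.1, 1.0, 1.5, 100
x = rng.integers(0, 4, size=n)            # covariate values 0, 1, 2 or 3
u = rng.uniform(size=n)
# Solve exp(beta*x)*H(T) = -log(U), i.e. T = (a*(-log U)*exp(-beta*x))**(1/b)
t = (a * (-np.log(u)) * np.exp(-beta * x)) ** (1.0 / b)
print(t[:5])
```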
On the basis of the observed data (t_j, x_j), j = 1, 2, ..., 100, we wish to test the null hypothesis (3) that H(t) = H_0(t; a, b) for some a, b > 0, where H_0(t; a, b) = t^b/a is the Weibull cumulative baseline hazard function with unknown parameters a and b.
We can now estimate the parameters β, a and b by their maximum likelihood estimators β̂, â and b̂. Based on these estimators we can obtain the (so-called) Cox-Snell residuals, defined as ε̂_j = e^{β̂x_j} H_0(t_j; â, b̂).
If the null hypothesis is true (i.e., if the cumulative baseline hazard was correctly specified as the Weibull cumulative baseline hazard), then the Cox-Snell residuals should (approximately) follow a standard exponential distribution (see, e.g., Chapter 11 of Klein and Moeschberger [19]). Hence, any test for exponentiality applied to ε̂_j, j = 1, 2, ..., 100, constitutes in effect a goodness-of-fit test for the CPH model itself.
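A minimal sketch of this residual computation (assuming the Weibull parameterisation above; the toy lifetimes and covariates are ours):

```python
import numpy as np

# Cox-Snell residuals under the fitted Weibull baseline H0(t; a, b) = t**b / a;
# the hat-parameters stand for MLEs such as those reported below in the text.
def cox_snell(t, x, beta_hat, a_hat, b_hat):
    H0 = t ** b_hat / a_hat                 # fitted cumulative baseline hazard
    return np.exp(beta_hat * x) * H0        # ~ Exp(1) under a correct model

t = np.array([0.5, 1.2, 2.0])               # toy lifetimes
x = np.array([0, 1, 3])                     # toy covariate values
print(cox_snell(t, x, 0.090, 0.880, 0.763)) # MLEs of the first simulated set
```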
It is, therefore, expected that tests for exponentiality will not reject the null hypothesis for the first simulated data set (recall that this data was generated from a CPH model with a Weibull cumulative baseline hazard), whereas the tests should reject the null hypothesis of exponentiality for the second simulated data set (which was generated from a CPH model with a log-normal cumulative baseline hazard).
The results of all the different tests for exponentiality for the two simulated data sets are summarised in Tables 5 and 6, which display both the test statistics and the associated p-values used to test whether the residuals originate from a standard exponential distribution, i.e., whether the cumulative baseline hazard is correctly specified as Weibull. Because the null hypothesis in (3) involves unknown parameters (which must therefore be estimated under H_0), the p-values had to be obtained using the bootstrap algorithm described in Cockeran, Allison and Meintanis [10]. For the first simulated data set, the MLEs are β̂ = 0.090, â = 0.880 and b̂ = 0.763. Table 5 shows that all tests correctly do not reject the null hypothesis, which is expected, as the data were known to be generated using a Weibull cumulative baseline hazard.
The second simulated data set produced the following MLEs: β̂ = 0.008, â = 0.854 and b̂ = 3.302, and the resulting p-values displayed in Table 6 indicate that the null hypothesis was rejected by all of the tests. This is not surprising, since a good test for exponentiality should be able to detect the mis-specification of the Weibull cumulative baseline hazard when the data originated from a CPH model with a log-normal cumulative baseline hazard.
To use the two new tests for exponentiality in practice, one needs to choose the value of the tuning parameter a. From extensive simulation studies conducted (not displayed here), it was concluded that a = 1 produces satisfactory results. If one would prefer a data-dependent choice of this parameter instead, one can employ the method outlined in Allison and Santana [3].
"Engineering",
"Mathematics"
] |
Multifrequency excitation of a clamped–clamped microbeam: Analytical and experimental investigation
Using partial electrodes and a multifrequency electrical source, we present a large-bandwidth, large-amplitude clamped-clamped microbeam resonator excited near the higher order modes of vibration. We analytically and experimentally investigate the nonlinear dynamics of the microbeam under a two-source harmonic excitation. The first source's frequency is swept around the first three modes of vibration, whereas the second source's frequency remains fixed. New additive and subtractive resonances are demonstrated. We illustrate that, by properly tuning the frequency and amplitude of the excitation force, the frequency bandwidth of the resonator can be controlled. The microbeam is fabricated using polyimide as a structural layer coated with nickel from the top and chromium and gold layers from the bottom. Using the Galerkin method, a reduced order model is derived to simulate the static and dynamic response of the device. A good agreement between the theoretical and experimental data is reported.
INTRODUCTION
Microelectromechanical systems (MEMS) resonators are the primary building blocks of several MEMS sensors and actuators that are used in a variety of applications, such as toxic gas sensors 1 , mass and biological sensors [2][3][4][5] , temperature sensors 6 , force and acceleration sensors 7 , and earthquake actuated switches 8 . MEMS resonators can be based on thin-film surface micromachining, yielding compliant resonating structures, or bulk micromachining, for example, in the case of bulk resonators. These are primarily based on the wave propagation within the bulk structure. This article addresses the first category, that is, primarily clamped-clamped microbeam resonators. MEMS resonators are excited using different types of forces, such as piezoelectric 9 , electromagnetic 10 , thermal 11 , and electrostatic 8,12 . The electrostatic excitation of resonators is the most commonly used method because of its simplicity and availability 12 . However, electrostatic forces are inherently nonlinear, thus adding complexity to the dynamics of these resonators, especially when they undergo large motions. The nonlinear dynamics of electrostatically actuated resonators have been thoroughly studied over the past two decades [12][13][14][15][16][17][18][19] .
There has been increasing interest in obtaining resonant sensors with large frequency bands, especially with a high quality factor and near higher order modes of vibration, where a high sensitivity of detection is demanded 1,2. A few of the approaches that have been investigated to improve the vibration of resonators and increase their frequency bandwidth are parametric excitation 16, secondary resonance 20, slightly buckled resonators 21, and multifrequency excitation 22. Challa et al. 23 designed and tested a device with a tunable resonant frequency for energy harvesting applications. The resonant frequency band was increased up to ±20% of the original resonant frequency using a permanent magnet. The effects of double potential well systems on the resonant frequency band and their application in energy harvesting are reviewed in Ref. 24. Recent studies on a carbon nanotube-based nano-resonator for mass detection applications proved that the resonator bandwidth is directly proportional to the forcing amplitude 25.
Recent studies have highlighted the interesting dynamics of mixed frequency excitation and their applications in sensors and actuators. The mixed frequency excitation of a micromirror has been studied extensively in Ref. 22, where it is proposed as a method to improve the bandwidth in resonators. Erbe et al. 26 demonstrated the use of the nonlinear response of a strongly driven nanoelectromechanical system resonator as a mechanical mixer in the radiofrequency regime. They used a magnetic field at an extremely low temperature (4.2 K) to excite a clamped-clamped microbeam using two AC signals of frequencies that were extremely close to each other and to the fundamental natural frequency of the beam. They determined that, upon exceeding a certain threshold of the excitation amplitude, higher order harmonics appeared. By increasing the excitation amplitude further, a multitude of satellite peaks with limited bandwidth occurred, thus allowing effective signal filtering. These results were verified by applying perturbation theory to the Duffing equation with cubic nonlinearity and by numerically integrating the Duffing equation and calculating the power spectrum of the response. They determined from both analysis and experiment that the cubic nonlinearity was responsible for generating the frequency peaks. A parametrically and harmonically excited microring gyroscope was investigated at two different frequencies in Ref. 27. The method proposed in that study increases the signal-to-noise ratio and improves the gyroscope performance. Liu et al. 28 fabricated and characterized an electromagnetic energy harvester, which harvested energy at three different modes of vibration. Moreover, the method of multifrequency excitation was implemented to perform mechanical logic operations, where each frequency carried a different bit of information 29. Mixed-frequency excitation has also been used in atomic force microscope resonators to generate high-resolution images and extract surface properties 30.
Motivated by the interesting dynamics and the wide range of applications of a large bandwidth resonator excited near the higher order modes of vibration, the objective of this article is to excite higher order modes of vibrations combined with multifrequency excitation to broaden the frequency bandwidth around the excited modes. The behavior of clamped-clamped microbeams excited by a multifrequency electrical source has been investigated experimentally and analytically.
MATERIALS AND METHODS

Fabrication
The clamped-clamped microbeam resonator, as depicted in Figure 1a, is fabricated using the in-house process developed in Refs. 31,32. The microbeam consists of a 6-μm polyimide structural layer coated with a 500-nm nickel layer from the top and 50 nm chrome, 250 nm gold, and 50 nm chrome layers from the bottom. The nickel layer acts as a hard mask to protect the microbeam during the reactive ion etching process and defines the length and width of the beam. The lower electrode is placed directly underneath the microbeams and is composed of gold and chrome layers. The lower electrode provides the electrical actuation force to the resonator. The two electrodes are separated by a 2-μm air gap. When the two electrodes are connected to an external excitation voltage, the resonator vibrates in the out-of-plane direction. Figure 1b illustrates the various layers of the fabricated resonator.
Problem formulation
We investigate the governing equation for a clamped-clamped microbeam, depicted in Figure 2, which is electrostatically actuated by two AC harmonic loads V_AC1 and V_AC2 of frequencies Ω1 and Ω2, respectively, superimposed onto a DC load V_DC. In the equation of motion governing the dynamics of the microbeam, Equation (1), E is the modulus of elasticity; I is the microbeam moment of inertia; c is the damping coefficient; A is the cross-sectional area; ρ is the density; ε is the air permittivity; d is the air-gap thickness; t is time; x is the position along the beam (of length L); N is the axial force; b is the beam width; and w is the microbeam deflection. The boundary conditions of the clamped-clamped microbeam, Equation (2), are w = 0 and ∂w/∂x = 0 at both ends, x = 0 and x = L. Next, we non-dimensionalize the equation of motion and its boundary conditions for convenience. Accordingly, the non-dimensional variables (denoted by hats) are introduced in Equation (3) as ŵ = w/d, x̂ = x/L, and t̂ = t/T, where, as in Equation (4), T = √(ρAL⁴/EI) is a time scale. By substituting Equations (3) and (4) into Equations (1) and (2) and dropping the hats from the non-dimensional variables for convenience, the non-dimensional equation of motion, Equation (5), is derived, with the normalized boundary conditions of Equation (6), w(0, t) = w(1, t) = 0 and ∂w/∂x(0, t) = ∂w/∂x(1, t) = 0; the parameters appearing in Equation (5) are defined in Equation (7). To calculate the beam response, we solve the normalized microbeam equation, Equation (5), in conjunction with its boundary conditions, Equation (6), using the Galerkin method 12. This method reduces the partial differential equation to a set of coupled second order differential equations. The microbeam deflection is approximated, as in Equation (8), by w(x, t) = Σ_{i=1}^{n} u_i(t) ϕ_i(x), where ϕ_i(x) is selected to be the i-th undamped, unforced, linear orthonormal clamped-clamped beam mode shape; u_i(t) is the i-th modal coordinate; and n is the number of assumed modes.
To determine the mode shape functions ϕ(x), we solve the associated linear undamped eigenvalue problem, Equation (9), where ω_non is the non-dimensional eigenfrequency. Both sides of Equation (5) are multiplied by (1 − w)² to simplify the spatial integration of the forcing term 12. Then, we substitute Equation (8) into Equation (5) and multiply the outcome by the mode shape ϕ_i(x). Next, we integrate the resulting equation over the spatial domain from 0 to 1, which yields Equation (10). Evaluating the spatial integration in Equation (10) produces a set of coupled ordinary differential equations, which are solved numerically using the Runge-Kutta method. We implement the first three mode shapes to produce converged and accurate simulation results.
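To make the reduced-order-model step concrete, here is a minimal single-mode sketch (our own illustration, not the paper's code): the coefficient values, the cubic term standing in for mid-plane stretching, and the simplified electrostatic drive are all assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Schematic single-degree-of-freedom reduced-order model with a cubic
# (mid-plane-stretching-like) term and a two-source electrostatic drive;
# w1, zeta, k3, F, Om1, Om2 are illustrative placeholders.
w1, zeta, k3, F = 1.0, 0.002, 0.1, 0.05
Om1, Om2 = 1.0, 0.01                  # swept and fixed source frequencies

def rhs(t, s):
    u, v = s
    volt = 1.0 + 0.3 * np.cos(Om1 * t) + 0.3 * np.cos(Om2 * t)   # ~ V_DC + V_AC1 + V_AC2
    return [v, -2.0*zeta*w1*v - w1**2*u - k3*u**3 + F*volt**2/(1.0 - u)**2]

sol = solve_ivp(rhs, (0.0, 2000.0), [0.0, 0.0],
                t_eval=np.linspace(1500.0, 2000.0, 5000))  # keep the steady state
Wmax = float(np.max(np.abs(sol.y[0])))   # steady-state maximum amplitude
print(Wmax)
```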
Experimental characterization
The experimental characterization setup used for testing the device and measuring the initial profile, gap thickness and out-of-plane vibration is depicted in Supplementary Figure S1. The experiment is conducted on a 400-μm microbeam with a lower electrode that spans half of the beam length. This electrode provides an anti-symmetric electrical force to excite the symmetric and anti-symmetric resonance frequencies. The experimental setup consists of a microsystem analyzer, which is a high-frequency laser Doppler vibrometer under which the microbeam is placed to measure the vibration; a data acquisition card connected to an amplifier to provide actuation signals over a wide range of frequencies and amplitudes; and a vacuum chamber, which is equipped with ports to pass the actuation signal and measure the pressure. In addition, the chamber is connected to a vacuum pump that can reduce the pressure to 4 mtorr. The initial profile of the microbeam is revealed using an optical profilometer. After defining the vertical scanning range and exposure time, a 3D map of the microbeam is generated (Supplementary Figure S2). The combined thickness of the microbeam and air gap is measured to be ~9 μm. In addition, the total length of the microbeam is 400 μm, with a fully straight profile without any curvature or curling.
To characterize the static behavior of the device, we initially biased the microbeam by a slow DC ramp voltage, generated using the data acquisition card, and measured the static deflection. The experimental result is reported in Supplementary Figure S3. The deflection increases until pull-in is exhibited at 168 V.
We experimentally measured the first three natural frequencies by exciting the device with a white noise signal of V_DC = 30 V and V_AC = 50 V. The vibration at different points along the beam length is scanned to extract the vibration mode shapes and resonance frequencies. The acquired frequency response curve is depicted in Figure 3, which reveals the values of the first three natural frequencies: ω1 = 160 kHz, ω2 = 402 kHz, and ω3 = 738 kHz. The mode shapes (root mean squared absolute values) are reported in the insets of Figure 3. We observed that all of the points vibrate at ω1, whereas the mid-point is a nodal point at ω2. In addition, at ω3, there are two nodal points. These results match the clamped-clamped structure's first, second, and third vibration mode shapes.
Frequency response curves
The nonlinear response of the microbeam is experimentally investigated near the first three modes of vibration. The microbeam is excited using the data acquisition card, and the vibration is detected using the laser Doppler vibrometer. The excitation signal is composed of two AC signals, V AC1 and V AC2 , superimposed on a DC signal V DC . The measurements are performed by focusing the laser at the mid-point for the first and third mode measurements and at a quarter of the beam length for the second mode measurements. Then, the frequency response curve is generated by taking the steady-state maximum amplitude of the motion W max . The generated frequency response curves near the first mode are depicted in Figure 4a. Each curve denotes the frequency response for different values of V AC2 . The results are obtained by sweeping the frequency of the first AC source Ω 1 around the first mode and fixing the second source frequency Ω 2 at 1 kHz. The swept source voltage V AC1 and the DC voltage are fixed at 5 and 15 V, respectively. The results of sweeping Ω 1 near the second mode while fixing the second source frequency Ω 2 at 5 kHz is depicted in Figure 4b. The swept source voltage V AC1 and the DC voltage are fixed at 20 and 15 V, respectively. In addition, this experiment is repeated near the third mode, as indicated in Figure 4c, where Ω 2 is fixed at 10 kHz and the actuation voltages V AC1 and the DC are fixed at 40 and 20 V, respectively. The chamber pressure is fixed at 4 mtorr.
The curves of Figure 4 highlight the effects of V_AC2 on the combination resonances, where new resonance peaks appear at frequencies of the additive type at (Ω1 + Ω2), (Ω1 + 2Ω2), and (Ω1 + 3Ω2) and of the subtractive type at (Ω1 − Ω2), (Ω1 − 2Ω2), and (Ω1 − 3Ω2) 33. These resonances appear due to the quadratic nonlinearity of the electrostatic force as well as the cubic nonlinearity caused by mid-plane stretching. It should be noted that in Equation (5), the integral term representing mid-plane stretching involves w and its derivatives as a positive cubic term, which tends to cause hardening behavior. However, expanding the electrostatic force term in Equation (5) in a Taylor series results in a constant term representing the static effect, a linear term representing the linear decrease in the natural frequency due to voltage loads, a quadratic nonlinearity, and other higher order nonlinearities. The strongest nonlinearity is the quadratic one, which is known to cause a softening effect regardless of its sign 17. In addition, hardening behavior is reported near the first and second resonances. As V_AC2 increases near the first resonance (Figure 4a), the response curves tilt toward the lower frequency values (softening), where the quadratic nonlinearity from the electrostatic force dominates the cubic nonlinearity from mid-plane stretching.
Figures 5a-c report the results for different values of Ω2 under the same electrodynamic loading condition near the first, second and third resonance frequencies.
As Ω 2 decreases further, a continuous band of high amplitude is formed. This result demonstrates that the multifrequency excitation can be used to broaden the large amplitude response near the main resonance, hence increasing the bandwidth, even for higher order modes.
In addition to the previous results, Supplementary Figure S4 compares the experimentally obtained response due to a single-frequency excitation with parameters V_DC = 15 V and V_AC = 5 V to that of a two-source excitation, where another harmonic source with frequency fixed at 1 kHz and amplitude of 10 V is added. The multifrequency response shows a clear contrast and a clear advantage in terms of bandwidth, which can have several practical applications. Typically, the resonators in resonant sensors may not be driven at the exact sharp peak due to noise, temperature fluctuation and other uncertainties, which results in significant losses and weak signal-to-noise ratios. The above results prove the ability to control the resonator bandwidth by properly tuning the excitation force frequencies. In addition, by using the partial lower electrode configuration and properly tuning the excitation voltages, the higher order modes of vibration are excited with high amplitudes above the noise level.
Simulation results
The microbeam dynamical behavior is modeled according to Equation (5) with the unknown parameters EI, N, and c, which are extracted experimentally. All of the results are obtained based on the derived reduced order model. The eigenvalue problem of Equation (9) is solved for different values of the non-dimensional internal axial force N_non to determine the theoretical frequency ratio ω2/ω1 that matches the measured ratio. The theoretical and experimental ratios are matched for N_non = 20.82, as reported in Supplementary Figure S5. The axial forces in the surface micromachining process arise due to the residual stress from depositing the different layers of the microbeam at high temperatures and then cooling them down to room temperature. These forces affect the resonance frequency values and their ratio. To extract the flexural rigidity EI, we use the static deflection curve and match the theoretical result with the experimental data (Supplementary Figure S3). On the basis of the static solution of Equation (5), we determined that EI = 0.106 × 10⁻⁹ N m². The damping ratio ς is extracted from the frequency response curve of the beam to a single and small AC excitation, where the experimental and theoretical results are matched at a damping ratio ς = 0.002, as depicted in Supplementary Figure S6.
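The axial-load extraction can be reproduced numerically; below is a finite-difference sketch (our own discretization and sign convention, not the authors' code) of the eigenvalue problem ϕ'''' − N_non ϕ'' = ω²ϕ with clamped-clamped boundary conditions, scanned over N_non to match the measured ratio ω2/ω1 = 402/160 ≈ 2.51:

```python
import numpy as np

# Finite-difference sketch of phi'''' - N_non*phi'' = omega^2*phi with
# clamped-clamped BCs (phi = phi' = 0 at both ends); the discretization and
# the convention N_non > 0 for tension are illustrative choices of ours.
def freq_ratio(N_non, M=200):
    h = 1.0 / M
    m = M - 1                                   # interior grid points
    D2 = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
          + np.diag(np.ones(m - 1), -1)) / h**2
    D4 = D2 @ D2                                # phi = 0 at the ends is built in
    D4[0, 0] += 2.0 / h**4                      # clamped slope: ghost node phi_{-1} = phi_1
    D4[-1, -1] += 2.0 / h**4                    # likewise at the right end
    w2 = np.sort(np.linalg.eigvalsh(D4 - N_non * D2))
    return float(np.sqrt(w2[1] / w2[0]))        # omega_2 / omega_1

for N in (0.0, 10.0, 20.82, 30.0):
    print(N, round(freq_ratio(N), 4))           # N_non ~ 20.8 reproduces 402/160 ~ 2.51
```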
The simulated dynamic response is based on a long-time integration of the modal equations of the reduced-order model of Equation (10) to reach a steady-state response. The first three mode shapes are used in the reduced-order model to approximate the response. The simulation and experimental results for the multifrequency excitation near the first three modes of vibration are reported in Figures 6a-c. Using the Galerkin approximation, the model predicts the resonator response accurately near the first and third modes. Near the second mode, the long-time integration method failed to capture the complete solution due to the weak basin of attraction near the large response curve, as indicated in Figure 6b. As reported by Batineh and Younis 34, the long-time integration method depends on the size and robustness of the basin of attraction to capture a solution. Another numerical technique needs to be implemented to accurately predict the complete response, such as the shooting technique, which can determine the entire response as well as capture the stable and unstable periodic solutions 12,34.
CONCLUSIONS
In this report, we investigated the dynamics of an electrically actuated clamped-clamped microbeam excited by two harmonic AC sources with different frequencies superimposed onto a DC voltage near the first three modes of vibration. After recording the static deflection curve and detecting the first three natural frequencies, a numerical analysis was conducted to extract the device parameters. Then, the governing equation was solved using three mode shapes, which provided a good agreement between the simulation and the experimental results. Moreover, we proved the ability to excite combination resonances of both the additive and subtractive types. In addition, the ability to broaden and control the bandwidth of the resonator near the higher order modes has been illustrated by properly tuning the frequency of the fixed source. Furthermore, by increasing the fixed frequency source voltage, the vibration amplitude with respect to noise near the higher order modes is enhanced. These capabilities of generating multiple peaks and a wide continuous response band, with the ability to control its amplitude and location, can have promising applications in increasing the resonator bandwidth for uses such as mechanical logic circuits, energy harvesting and mass sensing.
"Engineering",
"Physics"
] |
The Identification and Characterization of a Noncontinuous Calmodulin-binding Site in Noninactivating Voltage-dependent KCNQ Potassium Channels*
We show here that in a yeast two-hybrid assay calmodulin (CaM) interacts with the intracellular C-terminal region of several members of the KCNQ family of potassium channels. CaM co-immunoprecipitates with KCNQ2, KCNQ3, or KCNQ5 subunits better in the absence than in the presence of Ca2+. Moreover, in two-hybrid assays where it is possible to detect interactions with apo-CaM but not with Ca2+-bound calmodulin, we localized the CaM-binding site to a region that is predicted to contain two α-helices (A and B). These two helices encompass ∼85 amino acids, and in KCNQ2 they are separated by a dispensable stretch of ∼130 amino acids. Within this CaM-binding domain, we found an IQ-like CaM-binding motif in helix A and two overlapping consensus 1–5-10 CaM-binding motifs in helix B. Point mutations in helix A or B were capable of abolishing CaM binding in the two-hybrid assay. Moreover, glutathione S-transferase fusion proteins containing helices A and B were capable of binding to CaM, indicating that the interaction with KCNQ channels is direct. Full-length CaM (both N and C lobes) and a functional EF-1 hand were required for these interactions to occur. These observations suggest that apo-CaM is bound to neuronal KCNQ channels at low resting Ca2+ levels and that this interaction is disturbed when the [Ca2+] is raised. Thus, we propose that CaM acts as a mediator in the Ca2+-dependent modulation of KCNQ channels.
The KCNQ transmembrane proteins are members of a family of voltage-dependent potassium selective channels that are involved in the control of cellular excitability. Remarkably, mutations in four of the five known members of this family have been associated with different hereditary human disorders. While mutations in the KCNQ1 subunit (KvQT1) lead to arrhythmia in the human long QT syndrome, mutations in KCNQ2 or KCNQ3 are associated with a benign form of epilepsy. It has also been shown that KCNQ4 is mutated in a dominant form of progressive hearing loss (1).
With regard to the normal physiology of this protein family, the KCNQ2 and KCNQ3 subunits have been shown to form M-type potassium channels whose expression is restricted to neuronal tissue (2). Moreover, in some brain areas and neuronal tissues, KCNQ4 and KCNQ5 also contribute to the formation of M channels, suggesting that the different combinations of KCNQ subunits may be in part responsible for the diversity of M channel properties (1). The M current (I_M) is a subthreshold noninactivating voltage-dependent potassium current that is found in many neuronal cell types. The M current controls membrane excitability, and it has been shown to be modulated by a variety of intracellular signals that in turn dramatically affect the firing rate of neurons. Among those intracellular signals, Ca2+ has been shown to mediate the inhibition of I_M by B2 bradykinin receptors in sympathetic neurons (3). Indeed, intracellular Ca2+ can suppress the activity of M channels under conditions that do not support enzymatic activities such as phosphorylation (4). This phenomenon suggests that an intermediary might be involved in this Ca2+-dependent modulation.
In a search for candidates that might mediate the effects of Ca2+ in modulating I_M, we screened a human brain cDNA library using the yeast two-hybrid system. We found that calmodulin (CaM)1 bound to the C-terminal region of KCNQ channels. CaM is a small Ca2+-binding protein that acts as a ubiquitous intracellular Ca2+ sensor in the regulation of a growing diverse array of ion channels (5). Efforts to define common characteristics of CaM binding have indicated that it associates with short α-helical sequences within its targets (6-8). However, it appears that the interaction of CaM with KCNQ channels does not conform to this simple model. Rather, our data suggest that the CaM-binding site in KCNQ channels is formed by two α-helices that are separated by a stretch of ∼130 amino acids. We hypothesize that those two helices come into close proximity in the tertiary structure, facilitating CaM binding.
EXPERIMENTAL PROCEDURES
Yeast Two-hybrid Analysis-A cDNA generated by PCR encoding amino acids 310-844 of the human KCNQ2 subunit (9) was subcloned in frame with the GAL4 DNA-binding domain of the yeast vector pGBKT7 (CLONTECH) to be used as bait in a yeast two-hybrid screen. The reporter yeast strain Y190 was sequentially transformed with this plasmid and with a human brain cDNA library subcloned in pACT2 (CLONTECH, catalog number HL 4004 AH, lot 5008, mRNA source: normal, whole brain from a 37-year-old Caucasian male, whose probable cause of death was trauma). We screened >2.5 × 10^6 co-transformants that were selected on medium lacking histidine (in the presence of 25 mM 3-aminotriazole), leucine, and tryptophan and assayed for β-galactosidase activity.
Constructs containing point mutations and deletions were generated by PCR as described previously (10), sequenced, and subcloned into pGBKT7 (CLONTECH bait vector). The mutated constructs were co-transformed, along with a rat CaM cDNA in pGADT7 (CLONTECH prey vector), into Y190 to assess their interaction in vivo. Yeast extracts were analyzed to confirm the presence of Myc-tagged bait proteins by Western blotting with the 9E10 monoclonal anti-c-Myc antibody. For liquid quantitative β-galactosidase assays, o-nitrophenyl-β-D-galactopyranoside was used as the substrate, and the number of β-galactosidase units was calculated according to the CLONTECH protocol.
We also monitored the activity of the His reporter when using the low copy two-hybrid system, pPC97 and pPC86 (Invitrogen), with the yeast Y190 strain. Colonies growing in medium lacking leucine and tryptophan were grown in 1 ml of liquid medium overnight. The following morning the culture was diluted 100-fold in 10 mM Tris, 1 mM EDTA buffer and spotted on a Leu− Trp− His− plate with 10-50 mM 3-aminotriazole. After 2-4 days at 30°C, the strength of the interaction was assessed by the size and color of the colonies.
In Vitro Binding and Western Blot-Recombinant rat CaM was produced in BL21 Escherichia coli and purified as described (11). Different regions of human KCNQ subunits were generated by PCR, subcloned into the glutathione S-transferase (GST) fusion vector pGEX (Pharmacia Corp.), and transformed into BL21 E. coli. The synthesis of fusion proteins was induced with 0.5 mM isopropyl-β-D-thiogalactopyranoside for 4 h at 30°C. The cells were resuspended in chilled GST buffer that included protease inhibitors (20 mM Tris-HCl, 100 mM NaCl, 1 mM EDTA, 0.5% Triton X-100, pH 8, plus 1 mM phenylmethylsulfonyl fluoride and 1 μg/ml each of aprotinin and leupeptin) and lysed by sonication at 4°C, and the protein was recovered by immobilization on glutathione-Sepharose 4B beads (Amersham Biosciences). After extensive washing, the immobilized proteins were equilibrated in pull-down buffer (25 mM Hepes, 120 mM KCl, 5 mM NaCl, pH 7.5) with either 2 mM CaCl2 or 5 mM EGTA. Rat calmodulin (10 μg) was added to the beads and incubated for 45 min at room temperature. After three washes, the proteins were recovered, separated by 15% SDS-PAGE in the presence of 5 mM EGTA, and transferred to Probond nitrocellulose (Schleicher & Schuell) for Western blotting. The nitrocellulose was blocked with 5% nonfat dry milk in 0.05% Tween 20 in phosphate-buffered saline, incubated with the primary antibody (monoclonal anti-CaM from Upstate, diluted 1:2000 in blocking buffer) overnight at 4°C, washed, and incubated with horseradish peroxidase-conjugated goat anti-mouse IgG secondary antibody (Bio-Rad) diluted 1:5000 in blocking buffer. Antibody binding was detected using enhanced chemiluminescence and ECL hyperfilm (Amersham Biosciences).
Antibody Production-Divergent sequences from the intracellular N- and C-terminal regions of the different KCNQ channels were used to generate GST fusion proteins that were then used to produce antisera in rabbits. The specificity of the antisera produced was tested in immunoblots of membrane lysates of cells stably or transiently expressing different KCNQ subunits. A full description of the characterization of these antisera will be published elsewhere.2

Immunoprecipitation-Stable (kindly provided by B. S. Jensen, NeuroSearch) or transient HEK293 cells expressing human KCNQ subunits were maintained in Dulbecco's modified Eagle's medium supplemented with 10% fetal calf serum at 37°C in 5% CO2. Transient expression was achieved with the KCNQ expression plasmid using LipofectAMINE 2000 (Invitrogen). For immunoprecipitation experiments, confluent 60-mm dishes were washed twice with ice-cold phosphate-buffered saline and solubilized for 1 h in 400 μl of immunoprecipitation buffer (50 mM Tris-HCl, 150 mM NaCl, 2 mM EDTA, 1% Triton X-100, pH 7.5) including protease inhibitors as above and phosphatase inhibitors (1 mM NaF, 1 mM β-glycerophosphate, and 5 mM pyrophosphate). The cell lysates were centrifuged at 12,000 × g for 20 min to remove insoluble material, and the protein concentration was determined with the Bio-Rad protein assay. The lysate was diluted 10-fold in immunoprecipitation buffer, and bovine serum albumin was added to a final concentration of 1 mg/ml. Rabbit anti-KCNQ subunit-specific antibodies were added at 4 mg/ml and incubated for 3 h at 4°C with agitation. Immunocomplexes were recovered with 40 μl of equilibrated protein A-agarose (Santa Cruz) and washed with immunoprecipitation buffer. The proteins were eluted in Laemmli's buffer and resolved in 8% SDS-PAGE for the channels or 15% SDS-PAGE for CaM. The rabbit antisera to KCNQ channels were used at 1:500 dilution, and the peroxidase-conjugated protein A/G (Pierce) was used at 1:5000 dilution.
Fluorimetric Determination of CaM Binding-CaM (100 μl at 10 mg/ml) was diluted 10-fold in 100 mM Tris-HCl, 20 mM CaCl2, pH 8.5. Dansyl chloride dissolved in acetone (2.17 mg/ml) was added (12.5 μl) to achieve a final concentration of 100 μM. The mixture was incubated at room temperature in the dark for 2 h, vortexing every 20 min. Unincorporated dansyl was eliminated using a 1-ml G-25 Sepharose column. The concentration of dansyl-calmodulin used in all experiments was 200 nM, unless noted otherwise.
Fluorescence spectra were recorded in a Perkin-Elmer fluorescence spectrophotometer in a final volume of 3 ml (light path, 1 cm). Dansyl-CaM was diluted in binding buffer (25 mM Tris-HCl, 120 mM KCl, 5 mM NaCl, with either 5 mM EGTA or 2 mM Ca2+, pH 7.4). The excitation wavelength was 340 nm, and the emissions were collected from 450 to 600 nm (0.5-nm steps).
RESULTS
CaM Interacts with the Intracellular C-terminal Region of KCNQ Channels-Potassium channels that are members of the KCNQ protein family have been implicated in pathological conditions affecting the nervous system and in the physiological regulation of excitable cells. Indeed, the activity of channels containing KCNQ subunits can affect aspects of the behavior of the cell as important as their firing rate. To identify proteins capable of modulating the activity of these channels, we have performed a yeast two-hybrid screen of a human brain cDNA library using the intracellular C-terminal region of human KCNQ2 as bait (534 amino acids).
We identified 32 positive clones that grew in the absence of histidine (25 mM 3-aminotriazole added), leucine, and tryptophan and that presented β-galactosidase activity. Of these, 19 clones were identified as the product of the gene CALM2, and 13 clones were identified as the product of the gene CALM3. Both genes encode variants of CaM and have an identical amino acid sequence. This interaction was confirmed in a second assay where CaM was used as bait and the C-terminal region of KCNQ2 or KCNQ3 was used as prey, demonstrating that the association of CaM with KCNQ subunits was independent of which protein is fused to the binding or activation domain.
Other regions of the KCNQ2 intracellular domain were also studied to determine whether they too were capable of interacting with CaM. We tested the N-terminal region (Met1-Arg89), the loop connecting the second and third transmembrane domains (Arg144-Ile173), and the loop connecting the fourth and fifth transmembrane domains (Leu206-Ala235). We also tested the pore region (Glu254-Leu292) that connects the fifth and sixth transmembrane domains because an interaction of the equivalent region of KCNQ1 with the C-terminal region of minK has been demonstrated in the two-hybrid assay (12). We were unable to observe any interaction between the constructs encoding these regions and CaM in the two-hybrid assay, despite the detection of these Myc-tagged hybrid proteins in immunoblots of lysates from transformed yeast (not shown).
The C-terminal domain of KCNQ channels commences with a region that is highly conserved between several members of this family and that is followed by a more divergent region. To study the domain to which CaM binds in more detail, we divided the C-terminal region of KCNQ2 and KCNQ3 into two overlapping parts and found that CaM interacted only with the initial, more highly conserved region (see Fig. 2). The overall similarity of this initial C-terminal region as defined by the Clustal method of DNA-Star software ranges from 38 to 45% for KCNQ2-5 subunits, and the similarity between KCNQ1 and the other KCNQ channels ranges from 22 to 25%. Using this C-terminal region as bait (or the whole C-terminal region; not shown), we found that each member of the KCNQ channel family was capable of interacting with CaM. Furthermore, when the interaction was quantified with a liquid β-galactosidase assay, the binding of CaM appeared to be stronger for the KCNQ1 and KCNQ3 C-terminal regions than for those of KCNQ2, KCNQ4, or KCNQ5. The values relative to the quantification obtained with KCNQ1 (aa 250-456, n ≥ 5) were 48.7 ± 3.3% for KCNQ2 (aa 310-550), 150.0 ± 14.8% for KCNQ3 (aa 349-556), 22.1 ± 2.7% for KCNQ4 (aa 316-571), and 77.7 ± 24.6% for KCNQ5 (aa 309-524). The significance of these differences remains unclear.
CaM Interacts with Full-length KCNQ Channels-To determine whether CaM associates directly with full-length channels inserted into the membrane, we immunoprecipitated KCNQ2 from solubilized membranes of transiently transfected HEK293T cells. A ~20-kDa band was recognized by a CaM-specific antibody in the immunoprecipitate obtained in Ca2+-free conditions with antisera to the KCNQ2 subunits but not with preimmune serum (Fig. 1). In the presence of calcium, CaM was also co-immunoprecipitated, although the amount that could be detected was reduced.
When specific antisera were used to immunoprecipitate KCNQ3 and KCNQ5 from solubilized membranes of stable HEK293 cell lines expressing these subunits,2 CaM was present in the immunoprecipitate in Ca2+-free conditions. As for KCNQ2, in the presence of Ca2+, less CaM appeared to be associated with these subunits (Fig. 1). Moreover, no CaM was detected when preimmune antisera were used. Thus, we concluded that native neuronal KCNQ channels interact preferentially with apo-CaM.
Mapping of the KCNQ Regions Necessary for the Interaction with CaM-Extensive studies have shown that CaM binds to amphipathic α-helices (6-8). To detect potential α-helices within the C-terminal region of KCNQ channels, we used two secondary structure prediction algorithms (GOR4 and Predator; www.expasy). These algorithms highlighted four regions with a high probability of forming an α-helix in several members of the KCNQ channel family (Fig. 2B, helices A-D). Helix D corresponds to the putative assembly domain (13, but see Ref. 14), and helix C corresponds to the A domain (15). We tested the capacity of all of these potential α-helical domains to interact with CaM using the high copy plasmid two-hybrid CLONTECH system (pGBKT7 bait, pGADT7 prey). Although the proximal C-terminal region (that includes helices A and B) interacted with CaM, distal regions including helices B, C, and D did not (Fig. 2C). Similarly, a region including helix A (Gly310-Ser448) or helix B and part of helix C (Ser448-Asp549) did not bind CaM. We next performed a series of lateral and internal deletions in the KCNQ2 C-terminal amino acid sequence. As a result of analyzing these deletions, we determined that the binding of CaM required the simultaneous presence of two discontinuous regions (helix A (Gly310-Tyr372) and helix B (Thr501-Glu529); Fig. 2C).
Several consensus sites for CaM binding have already been described (6,7). A closer analysis of the region that contains helix A showed that it includes a sequence that resembles the IQ CaM binding motif and that this is conserved among several members of the KCNQ family (Fig. 3A). Mutations within the IQ motif have been shown to abolish the interaction between CaM and neurogranin in a two-hybrid assay (16) or to alter the Ca2+-dependent regulation of ion channels (17). We introduced several point mutations to determine whether amino acids within the IQ domain are necessary for the interaction of KCNQ2 with CaM. These mutations included Ile340→Ala, Ile340→Glu, Ser342→Asp, and Ala343→Asp.
Amino acid alignment of the IQ motifs indicates that Ala343 of KCNQ2 corresponds to Ser36 of neurogranin (not shown). Mutating Ser36→Ala of neurogranin makes the IQ motif of neurogranin resemble that of KCNQ2 and does not affect (or may even increase) CaM binding. However, mutating Ser36→Asp, thereby introducing a negative charge that mimics the effect of protein kinase C, abolishes the interaction between neurogranin and CaM in the two-hybrid assay (16). Similarly, we found that the interaction between CaM and the KCNQ2 bait was lost in the equivalent Ala343→Asp mutant. The interaction was also disturbed when the Ile340→Ala and Ile340→Glu mutations were introduced, and the Ser342→Asp mutation appeared to partially perturb CaM binding because the filter β-galactosidase assay took longer to develop and gave a weaker signal (Fig. 3E). Thus, it appears that the IQ binding motif is necessary to sustain CaM binding in the yeast two-hybrid assay.
Helix B contains two overlapping 1-5-10 CaM binding motifs (Fig. 3C and Ref. 7). We determined whether the introduction of negative charges in this region affected CaM binding in a manner similar to that used when they are introduced into key positions of the IQ binding motif. Within helix B of KCNQ2 there are three protein kinase C phosphorylation consensus sites (Ser511, Ser523, and Ser530), and thus we investigated the effect of introducing a negative charge (mutating Ser→Asp) at these positions to mimic the effect of phosphorylation. We also evaluated the effect of mutating serine 406, which is also a potential target for protein kinase C but that lies in a region not required for CaM binding. As expected, the introduction of an aspartate at position 406 did not alter CaM binding (not shown). In contrast, the interaction with CaM was lost when serine 511 was mutated to aspartate (Ser511→Asp) but appeared to be only slightly affected (the signal was fainter than for the wild type) when the other serine residues were mutated (Fig. 3, D and E). These results suggest that protein kinase C might regulate the binding of CaM to KCNQ2 channels through the phosphorylation of Ser511.
To further characterize the interaction between CaM and the KCNQ channels, we performed GST pull-down experiments (Fig. 4). Different fragments of the C-terminal region of KCNQ2 were fused in frame to GST, and their binding to apo-CaM and Ca2+-CaM was compared with that of GST-neurogranin and GST fused to the C-terminal region of the NR1 subunit (18). The association of purified recombinant rat CaM to fusion proteins was analyzed both in the absence and in the presence of Ca2+. Western blots probed with a monoclonal anti-CaM antibody showed that the fragments including helix A (aa 310-451) or helix B (aa 445-548) bound CaM in the presence of Ca2+ (Fig. 4). In the absence of Ca2+, helix A did not bind CaM, whereas helix B appeared to show a weak interaction with CaM. In the two-hybrid assay, both of these regions appeared to be incapable of interacting with CaM; however, as discussed later, this can be explained by the failure to detect interactions with Ca2+-CaM with the two-hybrid assay.

FIG. 1. CaM and KCNQ2, KCNQ3, or KCNQ5 co-immunoprecipitate from transfected cells. Membrane preparations from HEK293T cells transiently expressing KCNQ2 or stably expressing KCNQ3 or KCNQ5 subunits were solubilized and immunoprecipitated in the absence (−) or presence (+) of Ca2+ (with the addition of 5 mM EGTA or 2 mM Ca2+, respectively). The immunoprecipitations were performed with antisera raised against a fusion protein of the N-terminal region of KCNQ2 (α-KN2), the C-tail of KCNQ3 (α-KC3), or the N terminus of KCNQ5 (α-KN5). The precipitated proteins were resolved in 8 or 15% polyacrylamide denaturing gels, and the channels or CaM were then detected by Western blotting. The association between CaM and KCNQ subunits was more clearly detected in lysates processed in the absence of Ca2+. C, control lysate; pI, preimmune serum. Molecular masses in kDa are indicated on the left of the figure.
A GST fusion protein containing the most C-terminal region of KCNQ2 (aa 549-844) was incapable of binding CaM in either the presence or the absence of Ca2+. In contrast, CaM was pulled down independently of the presence of Ca2+ (although more CaM was pulled down in the absence of Ca2+) when the GST fusion included a fragment that contained helices A and B (aa 310-548; Fig. 4), again indicating that CaM interacts directly with KCNQ channels.
The interaction of CaM with the C-terminal tail of KCNQ2 was studied by fluorimetry (Fig. 5). The fluorescence spectrum of CaM dansylated at Lys75 is shifted, and the intensity increases when the environment of the fluorophore becomes hydrophobic (19). Dansyl-CaM was incubated with GST, GST-helix A, GST-helix B, or GST-helices A+B fusion proteins, and the changes in fluorescence emission induced by the interaction were studied in the presence (2 mM Ca2+) and absence of Ca2+ (in the presence of 5 mM EGTA). As previously shown, GST alone had no effect on the emission spectrum of dansyl-CaM independent of the [Ca2+] (20). In the presence of Ca2+ and equimolar concentrations of the GST fusion proteins that included helices A and B or helix B alone, modest changes in the fluorescent emission were detected. These changes were even more modest in the presence of the GST-helix A fusion protein.
In contrast, a substantial enhancement in fluorescence was observed in the absence of Ca2+ with the GST-helices A+B fusion protein. Using a series of dansyl-CaM concentrations (50-400 nM), the EC50 estimated in the absence of Ca2+ for GST-AB binding to dansyl-CaM ranged from 186 to 320 nM.
Under similar conditions, the change in fluorescence remained modest with the GST-helix B fusion and could not be detected with the GST-helix A fusion. In addition, no synergism was seen in the enhancement of fluorescent emission when both GST-helix B and GST-helix A were used together. Thus, the most significant changes in fluorescence were seen when both helices are part of the same polypeptide, suggesting that in the absence of Ca2+, this peptide folds in such a way that it provides a better CaM-binding site than helix B alone (Fig. 5).
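The paper does not state how the EC50 was extracted from the titration; a common approach would be to fit a simple one-site binding isotherm to the fluorescence enhancement. A minimal sketch in Python, with invented readings chosen only to illustrate the procedure:

import numpy as np
from scipy.optimize import curve_fit

conc = np.array([50.0, 100.0, 150.0, 200.0, 300.0, 400.0])  # dansyl-CaM, nM (hypothetical)
d_f = np.array([0.17, 0.30, 0.39, 0.45, 0.53, 0.58])        # fluorescence enhancement (hypothetical)

def isotherm(x, f_max, ec50):
    # Hyperbolic one-site binding curve: enhancement saturates at f_max.
    return f_max * x / (ec50 + x)

popt, _ = curve_fit(isotherm, conc, d_f, p0=[1.0, 200.0])
print(f"f_max = {popt[0]:.2f}, EC50 = {popt[1]:.0f} nM")

Whether the authors used such a fit or a graphical estimate is not specified; the sketch only shows how a value in the reported 186-320 nM range could be obtained from titration data.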
The role of helices A and B was further examined by transiently expressing in HEK cells mutant KCNQ2 channels in which the helices were deleted and assessing their ability to associate with CaM by immunoprecipitation. CaM was not co-immunoprecipitated by antisera against KCNQ2 in cells expressing mutants devoid of helix A (ΔIQ Leu339-Thr358, ΔpIQ Trp359-Met371), helix B (ΔΦ Ser511-Ser523), or both (ΔIQ+ΔΦ; Fig. 6). In contrast, the deletion of aa 372-493 did not appreciably inhibit CaM binding (Fig. 6A). In addition, we analyzed the point mutant Ser342→Asp, which gave a weak signal in the two-hybrid assay (Fig. 3E). With this mutant, CaM was only seen to co-immunoprecipitate with the channel in the absence of Ca2+. In contrast, the association of CaM was not observed with the Ser511→Asp mutant in the presence or absence of Ca2+. Thus, the results of the co-immunoprecipitation experiments paralleled those obtained in the two-hybrid assay, reinforcing the proposal that both helices are necessary for interaction with CaM in the intact channel.

Determinants in CaM That Are Required for Binding to KCNQ-CaM is composed of four Ca2+-binding helix-loop-helix motifs, called EF hands. These are arranged in two pairs, each pair forming a distinct domain or lobe. The domains are arranged in a dumbbell-like conformation at the ends of a flexible central helix (aa 65-92). The N-terminal lobe (aa 1-77) contains EF hands 1 and 2, and the C-terminal lobe (aa 78-148) contains EF hands 3 and 4. It is thought that, in evolutionary terms, CaM arose by duplication of a gene that represented one lobe. As a consequence, EF hands 1 and 3 are alike, and EF hand 2 is most similar to EF hand 4 (8). Ca2+ binds to the four EF hand motifs in a highly cooperative manner. First it associates with EF hands 4 and 3, and subsequently it associates with EF hands 2 and 1. The binding of Ca2+ to the EF hands results in the creation of a surface that serves as an interface for the association of CaM with the target protein. Each lobe can control distinct processes in the target protein (21,22), and binding to a target can change the affinity for Ca2+ (23).
To investigate the role of the different EF hands, we studied the interaction of a mutated CaM where the first aspartate of EF hands 2, 3, and 4 has been replaced with alanine and another CaM mutated at EF hands 3 and 4. These Asp to Ala mutations in the EF hands greatly diminish or abolish their ability to bind Ca2+ (21,24,25). Both mutants were able to interact with the C-terminal region of KCNQ2 or KCNQ3, although less strongly than wild type CaM. In the liquid β-galactosidase assay the intensity of the signal obtained with the EF-2,3,4 mutant was ~40% of wild type CaM (not shown, but see Fig. 7C).
The determinants for this interaction were further studied using the low copy number pPC97 (bait) and pPC86 (prey-CaM) vectors. These plasmids produce more physiological levels of protein than the ones used to perform the previous experiments. Thus, this low copy number system allows for better discrimination of small changes in binding strength caused by the point mutations introduced into either the bait or the prey. It has been proposed that this two-hybrid assay only detects Ca2+-independent interactions with CaM (21). While this still remains unclear (26), this may be an important technical consideration to bear in mind given the difficulties encountered in identifying the site of interaction of apo-CaM to some proteins (27). As mentioned earlier, we found that in vitro, helix A and helix B can associate with Ca2+-CaM (Fig. 4), although we were unable to detect this interaction in vivo with the two-hybrid assay. This is in keeping with Keen's proposal (21). Moreover, using this assay, we were unable to detect an interaction of CaM with baits known to interact with Ca2+-CaM but not with apo-CaM (baits such as the C-terminal region of the NMDA receptor (18); the Eag potassium channel (28); and the CaM binding site of myosin light chain kinase (7)). Conversely, when neurogranin, which binds to apo-CaM but not to Ca2+-CaM, was used (7), a clear interaction could be seen (Fig. 7A).
We tested fragments of CaM to determine which regions were required for binding to KCNQ2 and to KCNQ3 (Fig. 7B).
We also studied the interaction of CaM with the full C-terminal region of the SK2 potassium channel as well as neurogranin and the C-terminal region of the P/Q calcium channel for reference (16,21,29). As previously reported, we found that the C-terminal lobe of CaM (aa 78-148, EF-3,4) interacted with the C-terminal domain of SK2 (21). In contrast, full-length CaM (i.e. the four EF hands) was required to bind to the C-terminal AB region of KCNQ2 or KCNQ3 subunits.
CaM carrying different combinations of EF hands mutated at the first aspartate were tested for their ability to associate with the AB region of KCNQ2 (Fig. 7C). Although apo-CaM binds to both the SK (25) and KCNQ family of channels, there is no significant sequence homology in the C-terminal region of these K+ channels. It has been reported that CaM carrying mutations in any or all combinations of the EF hands associates with a fragment of the C-terminal region of SK2 (aa 390-487) (21). In the experiments reported here we used a longer bait (aa 390-707) that produced a weaker signal (i.e. the strength of the interaction of the hybrid proteins is closer to the threshold level of detection). With the full C-terminal region of SK2, the assay produced a signal that was difficult to detect when either EF hands 3 or 4 were mutated alone or in combination (M3, M4, and M34). Because it has been clearly demonstrated that EF hands 3 and 4 directly mediate the interaction of apo-CaM with SK2 channels (21,23), these results indicate that the strength of binding is reduced when Asp→Ala mutations are introduced into these EF hands, sometimes to below the threshold of detection.
In contrast to SK, the interaction with the KCNQ2 or KCNQ3-AB region was lost or deficient when EF hands 1 or 3 were mutated (M1, M3, and M13), suggesting that these EF hands directly mediate the interaction of apo-CaM with KCNQ channels. In addition, the complementary M1 and M3 mutants (M234 and M124) retained the ability to interact with the KCNQ-AB region. Interestingly, whereas M1 did not bind to the KCNQ-AB region, the interaction was partially recovered when EF hands 1 and 2 were mutated simultaneously (M12 and M124), suggesting the existence of important cross-talk between EF hands 1 and 2 that influences the binding of CaM to KCNQ channels. Similarly, the interaction was recovered when EF hands 1 and 4 were mutated simultaneously (M14 and M124), indicating that both CaM lobes are functionally interconnected when interacting with the KCNQ-AB region.

FIG. 6. A, membrane preparations from HEK293T cells transiently expressing KCNQ2 mutants were solubilized and immunoprecipitated with α-KN2 antisera in the absence (−) or presence (+) of Ca2+ (with the addition of 5 mM EGTA or 2 mM Ca2+, respectively). The precipitated proteins were resolved in 8 or 15% polyacrylamide denaturing gels to detect the channels or CaM by Western blotting, respectively. The association between CaM and ΔIQ, ΔpIQ, and ΔΦ was detected neither in the presence nor the absence of Ca2+. The deleted amino acids were: ΔIQ L339-T358, ΔpIQ W359-M371, and ΔΦ S511-S523. B, CaM was co-immunoprecipitated with the Ser342→Asp mutant in the absence of Ca2+, but an interaction was not detected with the Ser511→Asp mutant in any Ca2+ condition. WT, wild type.
DISCUSSION
We have demonstrated here that CaM binds to the intracellular C-terminal domain of neuronal KCNQ2, KCNQ3, and KCNQ5 transmembrane channels. Two-hybrid experiments suggest that CaM also binds to KCNQ1 and KCNQ4, but more experiments are necessary to unequivocally confirm this interaction. The voltage-dependent channels to which members of this family of proteins contribute have been implicated in a variety of physiological processes and pathologies. As a result, the modulation of the activity of these channels through intracellular signaling is important in maintaining the physiological homeostasis of nervous tissue (30). An example of this can be seen in the Ca2+-dependent regulation of M channels (made up of KCNQ subunits) that influences the firing rate of the sympathetic cells in which they are expressed. Our results indicate that through its association with these proteins, CaM may mediate the Ca2+-dependent modulation of channels that include KCNQ subunits.
FIG. 7. CaM domains involved in binding to KCNQ channels. A, the two-hybrid binding assay detected an interaction of CaM with targets known to interact with apo-CaM (C-terminal region of SK2) as well as for the C terminus of KCNQ2, KCNQ3, and P/Q voltage-dependent calcium channels. No interaction was seen for targets known to interact with Ca2+-CaM (C-terminal region of the NMDA receptor (C0C1C2), the CaM-binding region of myosin light chain kinase, or the C-terminal region of Eag voltage-dependent potassium channels). The regions used were: SK2, 390-707; KCNQ2, 510-550; neurogranin (Nrg), 1-78; P/Q, 1761-2213; NR1, 815-922; Eag, 666-1307; and myosin light chain kinase (MLCK), 959-1332. B, full-length CaM was required for binding to KCNQ channels. The fragments of CaM indicated were tested for interaction with KCNQ2-BD and KCNQ3-BD. The whole C-terminal region of SK2 was used as a positive control. The CaM fragments tested were: EF12, 1-82; EF34, 78-148; EF123, 1-132; EF234, 42-148; EF1, 1-46; and EF4, 109-148. C, EF hand 1 (and EF hand 3 to a lesser degree) is important for the interaction of CaM with KCNQ2-BD in the two-hybrid assay. The interaction of CaM with an Asp→Ala mutation at the indicated EF hands was tested in the two-hybrid assay. D, scores of interaction with the different EF hand mutants. +, growth in His− medium but without a clear blue signal in the β-galactosidase assay; +++, strong β-galactosidase signal.

In addition to demonstrating here that CaM associates with members of the KCNQ family, we have also defined its binding site. The binding site identified in this study is unusual in that it is composed of two discontinuous regions, helix A and helix B. Helix A contains an IQ-like binding motif, a motif (IQXXXRXXXXR) that mediates apo-CaM binding in a variety of proteins (31) and that contains positively charged residues at positions 6 and 11. When compared with other IQ motifs, in the helix A of the KCNQ channels the second Arg is replaced by a negatively charged or neutral amino acid. As a result, this domain resembles the second "incomplete" IQ motif on myosin II, the region to which the regulatory myosin light chain (structurally similar to CaM) binds. A model of apo-CaM binding derived from this and other light chain structures bound to myosin IQ motifs reveals that the initial portion of this motif (IQXXXR) is the most critical part. Moreover, it is this region that is specifically recognized by the loop between EF hands 3 and 4 and that determines a semi-open lobe conformation (32).
In accordance with this model, the interaction of KCNQ2 with CaM is destabilized when point mutations are introduced in this first part of the IQ motif.
On many occasions, the C lobe of CaM has been shown to be the one that interacts with peptides, as also occurs with CaM-like proteins that associate with peptides. Moreover, the bound peptide essentially occupies the same position relative to the C-terminal EF hand domain (33). The finding that the C lobe is sufficient to bind to SK channels (21), P/Q Ca2+ channels, and neurogranin supports this. However, in contrast, it appears that to bind to KCNQ2 or KCNQ3 channels, both the C and N lobes are required.
Surprisingly, we found that when using the full-length C-terminal region of SK2 as bait, point mutations that abolish Ca2+ binding to EF hands 3 or 4 also abrogate the interaction with CaM. There is, however, ample biochemical, functional, and structural evidence to indicate that Ca2+ is not bound to EF hands 3 or 4 when CaM interacts with SK2. In addition, it has been shown that the Ca2+-free C-terminal lobe is that which mediates the binding of CaM to the SK2 CaM-binding domain (21,23). Our results indicate that mutating the first aspartate to alanine in the EF hands that directly mediate the interaction of apo-CaM with the target causes a reduction in the binding strength. By analogy with the SK2 CaM-binding domain, the observation that the interaction of CaM with KCNQ2 or KCNQ3 does not tolerate point mutations at EF hand 1 and (to a lesser degree) 3 suggests that binding to the KCNQ-BD is mainly mediated by the apo-EF hands 1 and 3. Interestingly, EF hand 1 is most similar to EF hand 3 (8), suggesting that both play a similar role in stabilizing the target complex.
The difficulties in identifying apo-CaM interactions have been highlighted by Erickson et al. (27). To overcome this problem, a very elegant technique has been developed, three-cube fluorescence resonance energy transfer, that demonstrates the preassociation of CaM with calcium channels in living cells (27). The yeast two-hybrid system is another viable alternative for approaching this problem. Our results provide further evidence that the two-hybrid assay is capable of detecting interactions between the Ca2+-free form of CaM and target proteins such as the SK2 K+ channels (21). It should be borne in mind that a limitation of the two-hybrid system is that the transmembrane segments of target proteins must be eliminated to allow targeting of the bait to the nucleus (34). However, the main advantage is that this is a relatively simple assay and that it does offer us the opportunity to study protein-protein interactions in a living cell (16).
The recent resolution of the structure of CaM associated with the SK-binding domain has shown that the binding domain is composed of two α-helices connected by a short loop (23). The high probability that the two KCNQ regions contain α-helices suggests that a conformation similar to the SK2 CaM-binding domain may also arise in KCNQ. However, some differences are evident. Although in SK channels the connecting loop is only 5 aa long, in KCNQ channels this varies from ~100 to ~150 aa. Secondly, the C lobe of CaM (EF hands 3 and 4) is sufficient for binding to SK channels (21), whereas the complete CaM molecule is necessary for binding to KCNQ2 or KCNQ3 channels. In essence, our data suggest that the C-terminal domain of KCNQ channels folds in such a way that helix A and helix B form a compact structure that can be engulfed between the N- and C-terminal lobes of apo-CaM (Fig. 8).
What is the role of CaM in KCNQ function? The preassociation of apo-CaM to a target protein whose activity may be regulated by CaM generally ensures a rapid and selective response to local elevations in Ca2+ (8,27). Indeed, the suppression of the M current by bradykinin in rat sympathetic neurons is mediated by this cation (3), and apo-CaM has been shown to modulate the Ca2+-dependent gating of many channels (27,35). The Ca2+-dependent modulation of M channels can also be observed in excised patches. Under these conditions, signaling pathways such as phosphorylation are not supported, and because the effects observed are fully reversible, it becomes very unlikely that the modulation of IM via Ca2+ is due to dephosphorylation of the M channel or of an associated protein. Furthermore, the absence of a "signature" flickering behavior and the tendency to "desensitize" in inside-out patches (i.e. the effect is transient) suggests that the activity of Ca2+ does not involve the direct blockage of the internal mouth of the channel (4). These observations indicate that the mediator exists in limited amounts in excised patches and that it is washed out after the application of Ca2+. These properties suggest the involvement of a Ca2+ sensor such as CaM, which, as we have shown, interacts with the C-terminal intracellular region of KCNQ channels. In this respect, it is interesting to reflect on the fact that helix A is adjacent to the end of S6, the last transmembrane domain (Fig. 8). In other channels with a similar six transmembrane architecture, such as SK Ca2+-activated K+ channels and cyclic nucleotide-gated channels, gating is modulated by a module that attaches to the end of S6 (23,36). However, we should also bear in mind that the binding of CaM may also be important in other processes such as assembly or trafficking (37,38). The functional analysis of mutants unable to bind CaM should help to unveil the role of CaM in KCNQ channel function.

FIG. 8. The main features of the model are that helix A and helix B come into close proximity in the tertiary structure and are engulfed by CaM. The S6 transmembrane segment and pore region of only two subunits of a tetrameric potassium channel are shown for clarity. The relative orientation of helix A of the two subunits is a suggestion based on the proposed structure of cyclic nucleotide gated channels (36). The crystal structure of KcsA potassium channel (S6 and pore) (39) and the crystal structure of the SK potassium channel binding domain complexed with CaM (23) have been used as templates to draw this cartoon to scale.
"Biology",
"Chemistry"
] |
Cost Escalation in Road Construction Contracts
This paper presents a study of cost escalation in unit price road construction contracts. The aim is to investigate why the final cost of contracts differs from the contract cost agreed at tendering, both to identify causes of the observed discrepancies and to suggest measures that could improve the planning and delivery of future projects. Road projects often consist of several contracts, and because the contracts account for the bulk of project costs, cost escalation in the contracts may increase the risk of project cost overrun. Although contract cost performance is an important indicator, it would be too simplistic to equate it with project success: a project can be delivered within budget even if contract costs escalate, as long as the project cost contingency is adequate to cover such escalations. Nevertheless, escalations in contracts increase the risk of project overrun and may lead to other problems such as conflicts and delays. The results show that most of the studied contracts experienced cost escalation. The main cause of the escalation was change orders for work not covered by the original contract. In addition, the results indicate that complexity, represented by contract size, duration, and urban location, increases both the risk and the size of cost overrun. Based on these findings, the paper presents recommendations on how contract delivery can be improved, as well as some implications for future research.
The construction industry accounts for a large share of gross domestic product and employs many people in all developed countries. The industry's size suggests that its efficiency is important to the overall performance of the economy. Thus, both client organizations and contractors should work to reduce wasteful practices.
The literature on cost overruns is extensive, perhaps especially so in the transport industry. However, beyond demonstrating poor cost performance, many of the studies are repetitive and often demonstrate a lack of insight into how large and complex projects are organized (1). Most of the research to date has addressed cost overruns at the project level, leaving overruns at the contract level largely unexamined.
A road project may consist of a single contract awarded to a contractor, or it may consist of many contracts awarded to several contractors. The cost of the contracts will affect the performance of a project. As contracts normally account for the biggest costs in projects, the cost performance in contracts is important for successful project delivery. If contracts are not delivered on time and according to agreed costs, both the client's value for money and the contractor's profits are affected.
Traditionally, project success has been measured against targets for time, cost, and quality. In recent decades, the academic literature has recognized that a wider range of criteria is needed for measuring project success, but efficient project management to convert resources into results still remains at the heart of project-based organizations (2). Although certain changes to the scope of a project may be expected, major changes to a contract can indicate poor project management. Additions through change orders often result in cost escalations in contracts and may ultimately cause cost overruns at the project level.
Cost escalations in contracts can have different causes. They can be a result of wrong assumptions in the description of the works, which in turn may result in changes and additions to what was originally agreed (3). Cost escalations may also be a result of unforeseen external circumstances that need to be considered and budgeted for, or when the client requires changes to the agreed scope to achieve an intended function. Cost estimation is not an exact science. Therefore, contingency is needed so that, for example, changes to the scope do not automatically result in cost overruns (4). By adding contingency, cost overruns in contracts may be absorbed so that project overruns are avoided.
It is possible that the final cost of contracts will be higher than originally agreed and that an acceptable result at the project level still will be achieved. However, knowledge of past contract cost performance may be useful when estimating the costs of projects and in particular the necessary contingency that is needed to cover inaccuracies in estimates or unforeseen events.
The purpose of this paper is to investigate the extent to which original agreed contract prices differ from the final cost in construction contracts. This in turn enables an exploration of some of the causes of the observed escalations and suggests measures that could lead to improvements in the planning and delivery of future projects.
The paper is organized as follows. Section 2 reviews some previous studies of cost escalation in construction contracts. Section 3 briefly describes some crucial elements for managing and delivering construction contracts and projects. Section 4 describes the data and methodology used in the paper, and Section 5 reports the results. Section 6 presents some conclusions.
Previous Studies
Many studies have documented that cost overruns in projects are a challenge in most countries and in most industries. The reasons for the overruns vary, and different authors seem to have completely different theories as to why projects exceed their budgets. Flyvbjerg et al. (5) argued that overruns are because of deliberate underestimation, fraud, and optimism bias. According to Siemiatycki (6), Flyvbjerg et al. received much attention for their sensational accusations at the time, but their work has since received a lot of criticism for relying on secondary data sources and for comparing estimates prepared at different stages of project development. Other authors, such as Love et al. (7) and Love and Ahiaga-Dagbui (8), have therefore claimed that project-internal issues such as changes in scope, ground conditions, and weak project management are the main reasons for overruns.
One reason why both the size of the overruns and the supposed causes vary so much between different studies is that the basis for comparison varies. Some studies compare final costs with the first estimate, most studies compare final costs with the formal budget, and others compare final costs with the contract cost agreed with the contractor. Consequently, different comparisons can cause huge variations in reported results. Surprisingly, professional associations such as the Project Management Institute and the Association for Project Management do not provide a clear definition of cost overruns (9). Probably the most common definition of cost overruns is the one used by Flyvbjerg et al. (5), namely the difference between final costs and the cost estimate at the time of the decision to build. Flyvbjerg et al. (10) argued that this is the most relevant point of departure for comparisons of cost performance because it measures the accuracy of the information that was available to decision makers and how well-informed they were when they made their investment decision. If, instead, one was to compare the final cost with estimates produced while projects were in the earliest stages of project development, when the uncertainty about scope and design is large, the deviation might range from tens of percent (11) to several hundreds of percent (12). Siemiatycki (13) reviewed 13 studies conducted by researchers and auditors and found that one-third of the studies compared final costs with contracts, whereas the other two-thirds compared costs with the budget.
Different cost estimates are prepared throughout the development of a project. The expected accuracy of estimates varies according to, among many things, the extent of project definition and the degree of effort made to prepare the estimate. AACE International provides a classification of estimate classes from concept screening (Class 5) through to estimates for bid/tender (Class 1). Through the stages, the expected accuracy range decreases from −50%/+100% to −10%/+15% (14). Thus, any evaluation of cost estimates should consider the stage at which the estimate has been prepared.
Changes in the contract scope are a source of uncertainty in most projects. If the client requests changes and additions beyond the agreed scope of the contract, the contractor will normally require compensation for carrying out such work. If contractors speculate on change orders for profit, this may create an unfortunate incentive to submit unrealistically low tenders (15). Burnett and Wampler (16) argued that in tenders in which the lowest bid is crucial to winning, such as in unit price contracts, the contractor must not only calculate the cost of carrying out the specified works according to the client's description but also estimate likely changes to the contract that could increase their profit margins.
Although most studies of cost overruns compare the project's costs with the budget, some studies have investigated cost escalation at contract level. Thurgood et al. (17) studied 499 road construction contracts in Utah from 1980 to 1989 and found that the average deviation between final cost and agreed contract price ranged from 0% to 10%. Hinze et al. (18) reviewed 468 unit price contracts in projects undertaken in the state of Washington between 1985 and 1989 and found that the average escalation was 5.1%. Ellis et al. (19) studied 3,130 road projects in Florida and examined whether unit price contracts were more vulnerable to escalations than other forms of contracts. They found that there was a difference, although small: unit price contracts were on average 9% more expensive than the agreed contract price, whereas the corresponding figure for other types of contracts was 8%. The North Carolina Department of Transportation found similar results: in 390 road projects, the payment to the contractors averaged 7% over the original contract price (20).
The identified causes of escalations in the above-discussed studies in the United States varied, but in all cases the authors pointed to changes in contract volume as an important source of uncertainty and escalations. Furthermore, strong competition and price pressure, insufficiencies in the contract documents, and poor capacity and competence in the client organization were considered important reasons for escalations. These findings are similar to those of Williams et al. (21), who suggested that cost escalations in contracts in the United States and the United Kingdom were caused by unrealistically low bids, poor project management, deficient design requiring change orders, design difficulties, and unforeseen external events.
Cost escalation in contracts has also been studied in Australia. In a study of 67 construction contracts, Love et al. (3) found that the deviation between the agreed contract price and the final cost was on average 23.8%. They did not find any connection between deviations and project size, type of project, or type of contract. The main reasons for the deviations were changes in the agreed project scope.
Furthermore, escalation in contracts is linked to the wider issue of disputes between contractors and clients, which is a global problem that causes significant costs and may lead to industry inefficiencies. Sabri et al. (22) investigated conflicts in the Norwegian construction industry and found that tender specifications and differences in understandings of contracts were among the main reasons why the conflicts occurred.
Managing and Delivering Construction Contracts and Projects
The contract strategy is an important element in a construction project. The contract strategy should describe how to ensure appropriate competition in the selection phase, how to allocate tasks, responsibilities and uncertainties, and which contractual instruments should be established to support governance during the implementation phase. Most large client organizations have a general, overall strategy for contracts, which in turn provides the premises for the contract strategy for the individual projects. The purpose of the contract strategy is to achieve the best possible value for money in the contracts, considering the market situation, the characteristics of the contract, and the maturity of the plans, as well as the client's expertise and capacity. The contract strategy can have major impacts on the timescale and cost of projects (23,24), and therefore the choice of contract is important when making investment decisions.
The costs of carrying out a project are made up of different elements, depending on the contract strategy of the client organization. Traditionally, the most common contracts in the construction industry have been design-bid-build unit price contracts. This type of contract is based on estimated quantities of items and unit prices (e.g., hourly rates, rate per unit work volume) set out by the client in the tendering documents. As such, the risk of inaccurate estimation of quantities has been removed from the contractor. In general, the contractor's overhead and profit are included in the rate. Consequently, some contractors may submit an "unbalanced bid" when they discover large discrepancies between their own estimates of quantities and the owner's estimates. Depending on the contractor's own estimates and risk propensity, the contractor may slightly raise the unit prices on the underestimated tasks and lower the unit prices on other tasks. If the contractor is correct in its assessment, it can increase its profit substantially as the payment is made on the actual quantities of tasks; if the reverse is the case, the contractor can lose on this basis (25).
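To make the mechanics concrete, the following sketch illustrates an unbalanced bid with invented quantities and unit prices (none of these numbers come from the study). The bid is evaluated on the client's estimated quantities, but payment follows the actual quantities:

# Hypothetical unbalanced bid; all quantities and prices are invented.
# The contractor expects item A to be underestimated and item B overestimated,
# so the price on A is raised and the price on B is lowered.
items = {
    "A": {"client_qty": 100, "actual_qty": 150, "unit_price": 12.0},
    "B": {"client_qty": 200, "actual_qty": 150, "unit_price": 8.0},
}
bid_total = sum(v["client_qty"] * v["unit_price"] for v in items.values())   # 2800.0
paid_total = sum(v["actual_qty"] * v["unit_price"] for v in items.values())  # 3000.0
print(bid_total, paid_total)

With a balanced price of 10.0 on both items, both the evaluated bid and the payment would be 3000; shifting prices between items lowers the evaluated bid without lowering the expected payment, which is exactly the speculation described above.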
The final price of the project depends on the actual quantities needed to carry out and complete the work. Contracts allow the client flexibility in that the scope can be changed to accommodate for changes in needs and unforeseen circumstances. They also provide transparency, particularly when the client is trying to procure the contractor with the lowest bid. The difference between estimated and actual quantities represents a risk to both owners and contractors and may affect a client's costs and a contractor's profit (26). Thus, unit price contracts are best suited for projects with well-known elements but unknown quantities.
A shortcoming in traditional contract arrangements is that contractors who carry out the contracts are not involved in developing them, even in cases when contractors may have better construction knowledge and experience than the clients or designers (27). This may increase the risk of changes to the agreed contract scope, which may come at an additional cost to the client. Molenaar et al. (28) claimed that unit price contracts and similar traditional practices have led to diminished trust between clients and contractors, inhibited innovation and efficiency, and contributed to cost and schedule overruns.
A budget in a construction project may consist of several elements (Figure 1). In Norway, where costs are estimated by stochastic cost estimation, the budget in large projects is normally set at the P85 level, meaning that the risk of overrun should be no more than 15%.
Construction projects in Norway include a contingency for uncertainty and for "known-unknowns", the client's own costs of organizing and managing the project, the costs of ground acquisition, and the costs of the different contracts needed to deliver the project. The contingency is reserved at project manager or project owner level. The cost of the contracts is estimated by deterministic methods with no contingency for unforeseen events outside the agreed contract scope. The agreed contract cost with the contractor may differ from the client's own contract estimate, and the final cost depends on the completion of the works.
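As a minimal sketch of how a P85 budget level follows from stochastic estimation (assuming a Monte Carlo sample of total-cost outcomes; the lognormal parameters are invented for illustration):

import numpy as np

rng = np.random.default_rng(0)
# Simulated total-cost outcomes in million USD.
costs = rng.lognormal(mean=np.log(100.0), sigma=0.2, size=10_000)
p50, p85 = np.percentile(costs, [50, 85])
print(f"P50 = {p50:.1f}, P85 = {p85:.1f}, contingency above P50 = {p85 - p50:.1f}")

Setting the budget at P85 then means that roughly 15% of the simulated outcomes exceed the budget, matching the stated risk tolerance.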
Some projects are based on just one contract with one large contractor, whereas others are based on many contracts with several small contractors. As shown in Figure 1, even if the total cost of the project is made up of the costs of the individual contracts, escalations in the individual contracts may not result in project overrun if project contingencies are adequate to cover contract cost uncertainties. In a large project, a client may also diversify risk between different contracts so that escalations in one contract are offset by decreases in others.
However, large changes to a contract may indicate that the descriptions of works have been poor, and that the management of the project and the costs have been inadequate. In addition, major changes may pose challenges to staffing, progress, and costs for both the client and the contractor. This may result in lower cost-efficiency, and there is a broad agreement in the industry that this should be avoided (22).
Projects are delivered differently throughout the construction industry, but with the growth of increasingly more complex projects there has been a development toward early contractor involvement and toward transferring a larger proportion of the risks to contractors. In such situations, contractors may be forced to factor in the risk of changes and additions in their bids, which will lead to greater cost certainty for clients, but may require a risk premium. Figure 1 shows a project organized through traditional design-bid-build contracts. The client awards contracts to separate companies for the design and construction. The project consists of different elements that need to be managed to ensure successful project delivery. If the sum of the cost of all elements in Figure 1 exceeds the budget, the project will overrun its budget.
Data and Methodology
This paper is based on data from the Norwegian Public Roads Administration (NPRA) and a study of 712 different contracts tendered between 2009 and 2014 and completed between 2012 and 2016 (29). The contract prices varied from USD 35,000 to USD 195 million and covered typical highway engineering works such as tunnels, bridges, groundworks, and water drainage.
The contracts were all unit price contracts. The NPRA is increasingly using contract strategies whereby contractors are responsible for the design and in some cases the maintenance of the road over a specified time, but the initial dataset only contained a handful of such projects, so they were omitted for purposes of data consistency.
To gain an overview of the cost performance of the contracts, first the main features of the data were summarized through descriptive statistics. Traditional statistical measures such as average escalation, the median, standard deviation, and min and max were used. To measure cost escalation, both the size and probability of escalation were examined.
The escalation from agreed contract to final cost was defined as:

Escalation (%) = (Final cost − Agreed contract) / Agreed contract × 100

The total cost in a unit price contract is normally made up of non-adjustable items, such as items that are easy to specify; adjustable items that are difficult to specify and for which quantities can be adjusted at an agreed price; and change orders beyond the description of works, priced at a higher rate. Lump sum/non-adjustable items made up a very small proportion of the total cost in the study data, and therefore their costs were merged with the adjustable items. Ideally, the deviation between agreed contract and final cost should be zero, but as the purpose of unit price contracts is flexibility, some degree of cost deviation should be expected. Errors in the description of work can occur, and changes to the description can add value to the contract even if this results in a cost escalation.
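A minimal sketch of the escalation measure and the descriptive statistics used below, with invented placeholder costs rather than the NPRA data:

import numpy as np

agreed = np.array([1.8, 5.5, 30.0, 0.9])  # agreed contract prices, million USD (invented)
final = np.array([2.4, 5.3, 34.5, 1.3])   # final costs, million USD (invented)
escalation = (final - agreed) / agreed * 100.0  # percent escalation per contract
print(escalation.round(1))
print(escalation.mean(), np.median(escalation), escalation.std(), escalation.min(), escalation.max())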
An escalation beyond the agreed contract does not have to be above the client's own estimate. Hinze et al. (18) found that even when the contracts in their sample overran by an average of 4.68%, the total cost of the contracts was close to the engineer's own estimate, as bids were usually below this.
Nevertheless, final costs in contracts should be reasonably close to the agreed contract. AACE International expects the accuracy of estimates prepared for the bid/tender stage to be within the range from −10% to +15% (14). Although the agreed contract price might differ from the client's own (unknown) estimate, this range was used as a proxy for target accuracy.
Additionally, the data were used to develop a best-fit cumulative distribution function (CDF) to determine the probability of escalation. A CDF F(x) describes the probability that a random variable X is less than or equal to x:

F(x) = P(X ≤ x)

The probability density function can be determined from the CDF by differentiation:

f(x) = dF(x)/dx

The CDF can be expressed as an integral of its probability density function:

F(x) = ∫ from −∞ to x of f(t) dt

The distribution can be tested for best fit, for example, by using the chi-squared statistic (χ²):

χ² = Σ over i of (N_i − E_i)² / E_i

where N_i is the observed frequency for bin i and E_i is the expected frequency for bin i (30).
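A sketch of how such a chi-squared check could be run with scipy, here against a normal candidate distribution (the data array is an invented placeholder, not the study sample):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(17.0, 25.0, 712)               # placeholder escalations in percent
edges = np.linspace(data.min(), data.max(), 11)  # 10 equal-width bins
observed, _ = np.histogram(data, bins=edges)
mu, sigma = stats.norm.fit(data)                 # candidate distribution to be tested
expected = np.diff(stats.norm.cdf(edges, mu, sigma)) * data.size
chi2 = np.sum((observed - expected) ** 2 / expected)
print(f"chi-squared = {chi2:.1f}")  # compare with the critical value for the chosen degrees of freedom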
The CDF is useful when historical data are accessible. If past results are considered likely to be representative of future events, the CDF can be used to estimate the necessary ranges around estimates of different cost elements, so that the total project cost necessary at different levels of probability can be estimated.
The data allowed for the testing of several contract features such as the impact of contract size, contract duration, and geographical location (urban versus nonurban). The sources of escalation were also investigated by examining whether the escalations were the results of changes in quantities through adjustable items in the contracts or change orders to the contract scope.
All costs were adjusted to real prices using the construction cost index developed by Statistics Norway (31). The contract accounts were made up of three elements: the original contract price, changes in the agreed scope (i.e., different quantities), and change orders to the scope. Generally, change orders to the scope were priced higher than quantity changes within the agreed scope.
Results
This section presents the results of the research.
Cost Escalation Overview
The cost escalation of the contracts in the study sample is summarized in Table 1. On average, the contracts that the NPRA was responsible for turned out to be about 17% more expensive than the agreed price. The statistical dispersion shown in Table 1 is large, ranging from 46% under the agreed contract price to 185% over. Only 37% of the contracts had deviations within ±10% of the original contract. There was a relatively large difference between the median (P50) and the mean, which indicated a high number of large escalations.
Half of the contracts experienced escalations of more than 10%. The NPRA factors in the risk of contract escalations through projects' contingencies, but if escalations at the contract level are too large, this can lead to cost overruns at the project level.
The results for contract cost escalation differ from the results for project cost overrun. The average overrun against the P50 estimate in large (>USD 60 million) Norwegian road projects is relatively small, ca. 1%-2% (32). Average overruns in all Norwegian road projects, large and small, are somewhat larger. The yearly average overrun for road projects completed between 2007 and 2018 was 5.8% (33). The relatively large escalations in contracts, as shown in Table 1, may explain the consistent overruns in projects. If the contracts overrun in a project, a higher contingency will be required to deal with the overruns. The results in Table 1 indicate that the cost performance in contracts may be a source of project risk, for which budgets need to be adjusted.
Distribution Fit
The large sample size (712 contracts) of this study enabled the authors to determine the probability of cost escalation expressed as a CDF.
The CDF was produced using the "best fit" command in Palisade's @Risk software. The distribution of final costs relative to the awarded contract prices is shown in Figure 2.
The data shown in Figure 2 have a right skew. Based on these historical data, there is a strong probability that final costs in construction contracts will be significantly higher than the awarded contract price. For example, the probability of an escalation between 10% and 50% is 40%, whereas the probability of a large escalation (>50%) is 10%.
The best fit for the data is the log-logistic distribution, which is a typical right-skewed distribution with a heavy tail. Ideally, the deviations should follow a normal distribution with a mean of zero, but that was not the case for data relating to road construction contracts.
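A minimal sketch of this fitting step, using scipy's fisk distribution (the usual library name for the log-logistic) in place of the proprietary @Risk software; the synthetic data and parameter values are illustrative stand-ins for the contract accounts.

    import numpy as np
    from scipy import stats

    # Synthetic stand-in for the ratio of final cost to awarded contract price
    data = stats.fisk.rvs(c=2.5, scale=1.15, size=712, random_state=0)

    # Fit a log-logistic (Fisk) distribution, fixing the location at zero
    c, loc, scale = stats.fisk.fit(data, floc=0)
    fitted = stats.fisk(c, loc=loc, scale=scale)

    # Probability that the final cost exceeds the awarded price by more than 50%
    print(1.0 - fitted.cdf(1.5))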
Cost estimation practices vary between industries and countries, and accounting for risk may vary from a simple uplift to more sophisticated methods aimed at identifying the main risk drivers and quantifying risk in individual projects. Knowledge of past contract performance and its distribution may be useful when applying probability-based cost estimation.
Contract Size
The study data allowed cost escalations to be tested against some contractual characteristics. First, contract size may give an indication of the complexity of the work and thus the likelihood of escalation. At the same time, cost escalation in large contracts may have a bigger impact on a project's financial success, as even small escalations may cause considerable monetary cost overruns. Table 2 provides a summary of the results when considering small contracts (≤USD 2 million), medium-sized contracts (between USD 2 million and USD 12 million) and large contracts (>USD 12 million). Although most of the contracts were small, the dataset was sufficiently large for a statistical comparison of the categories.
Although the risk of cost escalation increased with contract size (from 66% for small contracts to 86% for large contracts), the escalation percentage was larger for the smaller contracts (17% for small contracts versus 14% for the largest contracts). This was even more evident when only the contracts with a cost escalation were considered: small contracts on average experienced a 32% escalation, whereas large contracts only experienced an 18% escalation on average. These results are similar to those of Odeck (34), who found that the mean overrun in small road projects was 10.6% and that large projects experienced a mean underrun of 2.5%.
The effect might be a result of better project management in the larger projects, as cost escalations in large contracts are prioritized in Norway, and damage control, incentives, or both, in the contracts allow for a better outcome when a cost escalation occurs. Furthermore, large projects generally have more client resources at hand, with which to follow up the contractor more closely. Large contracts are also more complex, which is why they have a greater probability of cost escalation compared with smaller contracts.
Contract Duration
Contracts with a long duration may be more vulnerable to escalations than contracts with shorter durations. Time may be an indicator of complexity, and delays may be caused by inefficiencies on the part of the client or the contractor. A total of 44% of the contracts in the dataset had a duration of less than 1 year, 41% had a duration between 1 and 2 years, and 15% had a duration of over 2 years. The differences in escalation between different durations of the contracts are listed in Table 3.
Shorter contracts had, on average, lower escalations than medium and long contracts, and long contracts had a higher probability of escalation. However, among the contracts that experienced an escalation, the average escalation was the same regardless of duration.
Geographical Location
It is widely acknowledged that civil engineering projects in urban areas are high-risk ventures, as demonstrated by, for example, Crossrail in London, Boston's Big Dig, and a range of light rail transit projects worldwide. Welde (32) found that large governmental projects in urban areas experienced significantly higher overruns than other projects (which, on average, experienced underruns). Ideally, increased risk should be covered by a higher contingency in the budget, but the estimates for urban projects did not take the increased risk of such projects fully into account, as shown in Table 4.
Among the 554 contracts for which the authors were able to distinguish between urban and nonurban locations, the escalation in urban contracts was significantly higher than in nonurban areas. The probability of escalation also appeared to be higher, but the difference was not statistically significant.
Sources of Cost Escalation
Cost escalation in contracts can arise from two sources: changes in quantities through adjustable items in the contract, and change orders for work beyond the description of works in the contract. The impact of the two sources of changes on cost escalation in the studied road construction contracts is shown in Table 5. The results listed in Table 5 are interesting for the following reasons. All the escalations were caused by change orders to the contracts. Unit price contracts are based on items for which quantities can be adjusted according to an agreed fixed price. In line with the intention of such contracts, the variation with respect to the expected value is normally distributed with a mean close to zero. However, client-initiated change orders account for a significant increase in the cost of the contracts. Just 10% of the contracts had change order costs of 2% or less. The cause of the escalation was work that was not included in the description of works.
Conclusions
This paper studied cost escalation in unit price contracts in the Norwegian road construction industry. There are many studies of project cost overruns in the transport literature, but as argued in this paper, there is a need for more knowledge of the cost performance of the contracts that make up a project, as this has implications for project performance.
The results of the analyses revealed that contracts for road construction projects tendered between 2009 and 2014 experienced higher escalations than reported in most studies from the United States, where most of the reported escalations have been in the order of 0%-10%.
The average escalation in the dataset was 17%, and the dispersion of the data indicates that there are large risks remaining at the time when contracts are agreed. The results are well beyond the target accuracy range suggested by the AACE International (14) and may explain why final costs in Norwegian projects are consistently skewed to the right.
The poor cost predictability and the right skew of the distribution of final costs should be a cause for concern for project managers and project owners, and may ultimately lead to project cost overruns, unless the project cost contingency is large enough to account for these uncertainties. Even if some changes to the scope of works may be necessary to accommodate changing needs and to deliver higher benefits, the volume of escalations may indicate a risk of poor efficiency because of delays, opportunistic behavior, and conflicts.
Our results revealed that some road construction contracts are more at risk of escalation than others. Large and complex contracts have a higher probability of escalation even if their average escalations are lower than those for small contracts. The same relationship was found for contracts with a long duration, and therefore project managers are advised to treat such contracts with caution, especially in urban locations, where both the probability of escalation and the average escalation are higher than in other locations. The main source of escalation was change orders. This indicates that in many cases the description of works was inadequate.
One can only speculate as to why the results from Norway differ so much from the results from Utah (17), Washington (18), Florida (19), and North Carolina (20). The studies in the United States suggest that factors such as project size, the client's staff skills, and a competitive bidding process may affect overruns. The studies also found that change orders were a major source of risk. The suggested causes in those studies are similar to those of the present study; however, as Hinze et al. (18) argued, relatively little can be explained from large sample studies that use independent variables that are easily observed. For more specific knowledge, it is probably necessary to conduct in-depth studies of considerable magnitude.
However, as argued by Williams et al. (21), unit price contracts create a competitive environment in which contractors may have a lot to gain by submitting unrealistically low bids and deliberately misinterpreting the contracts, which in turn will lead to costly change orders. Therefore, in many countries there has been a development toward contract arrangements that require closer alignment of the incentives of clients and contractors. The Norwegian experience suggests that Norway, too, may have something to gain from exploring the use of contracts in which the contractors are responsible for parts of the design.
The present study is not without limitations. The dataset was large, but it could not be used to differentiate clearly between the types of work (e.g., rehabilitation, resurfacing, reconstruction, bridges). Most contracts are classed as "mixed." More detailed data could have allowed for further testing of the relationships explored in this paper. Furthermore, the NPRA has not provided information on its own estimate of the contract costs, only the agreed contract price, which may be either below or above that estimate. In a tender process, bids are often very dispersed and may create incentives for low bidding. Therefore, it may very well be the case, as also suggested by Hinze et al. (18), that the real deviation between final cost and the client's estimate is lower than the observed escalations. Finally, intuitively, contract cost escalation can be expected to correlate with project cost overrun, but that is beyond the scope of this study. In future studies, it would be worth mapping contract performance against project performance. The knowledge gained from such mapping could allow better modeling of cost performance to improve estimation and project delivery in future projects.
Author Contributions
The authors confirm contribution to the paper as follows: study conception and design: M. Welde; data collection: R.E. Dahl and M. Welde; analysis and interpretation of results: R.E. Dahl and M. Welde; draft manuscript preparation: M. Welde. All authors reviewed the results and approved the final version of the manuscript.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 7,708.4 | 2021-04-02T00:00:00.000 | [
"Engineering",
"Business"
] |
SVD-Based Robust Blind Video Watermarking Technique for Different Attacks
Nowadays, digital watermarking protects intellectual property in the digital world. In this article, a new singular value decomposition (SVD) based robust blind video watermarking technique is used for the protection of copyright in one's individual work. Singular value decomposition is used in this method to embed information in a video file. In the embedding process, the information, generally referred to as the watermark, can take the form of audio, text, video, or any binary or digital image. The embedded information can be identified without knowledge of the original video or of the encoded video's main singular values. Singular value decomposition is used both for embedding the information and for retrieving it from the watermarked video. The SVD-based robust blind video technique shows the required robustness against various attacks, such as MPEG-2 compression, median filtering, small rescaling, and rotation. The experimental results show that the blind video watermarking technique has good visual quality and robustness against various attacks. The watermark is embedded in uniquely chosen singular values of the video, whose original order can be maintained for recovering the data, and the method has been shown to be robust to different attacks.
is most likely the motivation behind why so much attention has been drawn toward the development of digital image protection schemes.
In this report, another robust blind video watermarking process based on singular value decomposition (SVD) is presented for the copyright protection of one's individual work. The SVD is used in this technique to embed the data in a video file. The embedded data, called the watermark, can be text, audio, or any binary or digital image. The watermark can be hidden in digital video watermarking techniques [1] either in the spatial domain or in the frequency domain, including the DCT (Discrete Cosine Transform) and DWT (Discrete Wavelet Transform) [2]. Let us consider the Singular Value Decomposition (SVD) [3], which is a suitable method of watermarking through decomposition. The SVD method is primarily used in the watermarking of images. SVD-based image watermarking techniques are presented in several papers [4][5][6] and [7], which report that their watermarking techniques provide good robustness. To recover the original watermark, these techniques require the singular values or their orthogonal matrices [8]. It is noted that if the singular values are changed, then the order of the singular values can be altered, leading to incorrect recognition of the watermarks that we add. So, the position information of the updated singular values must always be maintained in advance. Since these techniques need to carry work-relevant knowledge for retrieving the watermark information, they cannot be used in applications for video watermarking [9][10][11][12][13][14][15], given the considerable overhead of processing the required data.
Here, a robust blind video watermarking procedure based on SVD is proposed against various attacks [16][17][18]. Considering the required imperceptibility and robustness, the watermarks are embedded in targeted singular values, and our method can maintain their original order. The experimental results reveal appropriate robustness of the proposed method to MPEG-2 compression, motion averaging, slight rescaling, median filtering, etc.
Simple SVD method
The singular values of an image are essentially invariant under the following parameter variations [19,20]:
• A matrix and its transpose have the same nonzero singular values.
• A matrix and its row- or column-reversed matrix have the same nonzero singular values.
• The nonzero singular values of a matrix are in a constant ratio to those of its scaled matrix (which is replicated several times by row or column).
The SVD of a matrix A can be written as $A = U\Sigma V^T = \sum_{i=1}^{r} \lambda_i u_i v_i^T$, where r is the rank of matrix A. Because of the above properties, SVD can be used for watermarking. The secret key used in blind watermarking is the maintained original order of the singular values.
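These invariance properties can be checked directly; the following is a minimal numpy sketch with an arbitrary test matrix, not code from the paper.

    import numpy as np

    A = np.random.default_rng(1).random((6, 8))
    s = np.linalg.svd(A, compute_uv=False)

    # A matrix and its transpose share the same nonzero singular values
    assert np.allclose(s, np.linalg.svd(A.T, compute_uv=False))

    # Row/column reversal (a permutation) leaves the singular values unchanged
    assert np.allclose(s, np.linalg.svd(A[::-1, ::-1], compute_uv=False))

    # Scaling the matrix by k scales every singular value by |k| (constant ratio)
    assert np.allclose(3.0 * s, np.linalg.svd(3.0 * A, compute_uv=False))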
Embedding Watermark process
Before embedding, the watermark's binary sequence is modulated into a sequence consisting of "1" and "−1" (that is, "1" is modulated from bit "1" and "−1" from bit "0", respectively). Keeping the order of the singular values unchanged during embedding provides robustness against some attacks. The watermark embedding process steps are: i. Compute the singular value decomposition of each cover frame A: $A = U\Sigma V^T$, where λ1, λ2, λ3, … are the singular values of the cover frame.
ii. To achieve robustness and imperceptibility, adjust the selected singular values of the cover frame according to the watermark sequence, obtaining the watermarked frame A′, where r is the rank of A′.
Watermark Extraction Process
The watermark extraction process is as follows: i. Compute the SVD of each watermarked frame A′: $A' = U'\Sigma' V'^T$, where λ′1, λ′2, …, λ′r are the singular values of the watermarked frame. ii. Extract the watermark from the modified singular values, using the maintained original order as the key.
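Since the paper's exact modulation equation is not recoverable from the text, the following sketch assumes a simple additive modulation of selected singular values, with the kept original values acting as the secret key, mirroring the description above. All names, the frame size, and the 0.48 strength are illustrative only.

    import numpy as np

    def embed(frame, bits, indices, alpha=0.48):
        U, s, Vt = np.linalg.svd(frame, full_matrices=False)
        w = 2.0 * np.asarray(bits, dtype=float) - 1.0  # modulate bits {0,1} -> {-1,+1}
        s_marked = s.copy()
        s_marked[indices] += alpha * w                 # small shift keeps the original order
        key = s[indices]                               # original values kept as the secret key
        return U @ np.diag(s_marked) @ Vt, key

    def extract(frame_marked, key, indices, alpha=0.48):
        s = np.linalg.svd(frame_marked, compute_uv=False)
        w = np.sign(s[indices] - key)                  # recover the modulated sequence
        return ((w + 1) / 2).astype(int)               # demodulate {-1,+1} -> {0,1}

    frame = np.random.default_rng(2).random((64, 64)) * 255.0
    marked, key = embed(frame, [1, 0, 1, 1], indices=[1, 3, 5, 7])
    print(extract(marked, key, indices=[1, 3, 5, 7]))  # -> [1 0 1 1]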
3. Simulation Results and Discussion
The watermarks are a fundamental M-bit sequence. In the trials performed [29], the watermark has a length of 880 bits, and the test video is the MPEG-2 standard test sequence "Mobile". The video sequence conforms to the Phase Alternation Line (PAL) standard with a resolution of 756 × 576. There are 30 frames in the video sequence. Watermarks were inserted through the chrominance plane of the video frames. The watermark strength is 0.48, and the bits are embedded at the second, fourth, sixth, and eighth singular values of each frame. Taking the watermarking parameters to be λi (1 ≤ i ≤ 256), the method can achieve good robustness against different attacks. The stated technique was tested against various attacks such as MPEG-2 compression, shifting, rescaling, and median filtering of the watermarked "Mobile" video frames; Table 1 shows the Peak Signal to Noise Ratio (PSNR) of the first 25 watermarked "Mobile" frames and the results against those attacks.
Our strategy has good robustness against attacks, in particular median filtering, rotation and rescaling, and MPEG-2 compression, according to the findings in Table 1. If we embed one bit per frame, the technique is robust to rotation and rescaling too. The experiment was also performed on other video sequences of short duration, such as "winter", a small PAL-standard video with a total of 30 frames. As per the above-mentioned technique, four bits are inserted in each frame at the second, fourth, sixth, and eighth SVD coefficients. Experimental results against various attacks such as MPEG-2 compression, rotation and rescaling, shifting, and median filtering are shown. Figure 5. Watermarked frames of "Mobile" and extracted watermark after the rotation and rescale attack (1 degree) with PSNR = 26.4844 dB, Normalized Cross Correlation (NCC) = 1 and Bit Error Rate (BER) = 0.
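For reference, the quality and robustness measures quoted in the results can be computed as follows. This is a generic sketch (the 8-bit peak value and array-based watermarks are assumptions), not the authors' code.

    import numpy as np

    def psnr(original, attacked, peak=255.0):
        # Peak Signal to Noise Ratio in dB
        mse = np.mean((np.asarray(original, dtype=float) - np.asarray(attacked, dtype=float)) ** 2)
        return 10.0 * np.log10(peak ** 2 / mse)

    def ncc(w_ref, w_ext):
        # Normalized cross correlation between reference and extracted watermark
        w_ref, w_ext = np.asarray(w_ref, dtype=float), np.asarray(w_ext, dtype=float)
        return np.sum(w_ref * w_ext) / np.sqrt(np.sum(w_ref ** 2) * np.sum(w_ext ** 2))

    def ber(bits_ref, bits_ext):
        # Bit Error Rate: fraction of watermark bits recovered incorrectly
        return np.mean(np.asarray(bits_ref) != np.asarray(bits_ext))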
5. Conclusion
Digital watermarking is an important requirement for protecting the intellectual property rights of digital multimedia data. In video watermarking, a video is used to embed the information, and some issues present in image watermarking are removed. The proposed video watermarking technique provides good robustness and great visual quality. A new blind video watermarking method using the SVD technique is proposed. The SVD method is quite different from other commonly used transforms such as the DCT (Discrete Cosine Transform), DFT (Discrete Fourier Transform), and DWT (Discrete Wavelet Transform). Non-fixed orthogonal basis functions and one-dimensional non-symmetric decomposition are employed in SVD; these properties provide the advantage of transformations of various sizes and improved security.
The proposed video watermarking method has provided good performance and has been shown to achieve good robustness and security against various attacks. The relationship between the U, V and S components was explored in the proposed method, and it provided robustness against different attacks and better quality, as shown in Figures 3 to 7. Figure 3 shows watermarked frames of "Mobile" and the extracted watermark with PSNR = 38.6343 dB, Bit Error Rate (BER) = 0, and Normalized Cross Correlation (NCC) = 1. Figure 4 shows the extracted watermark after the median filtering attack with PSNR = 24.1686 dB, NCC = 0.98195 and BER = 0.01805. Figure 5 shows the extracted watermark after the rotation and rescale attack (1 degree) with PSNR = 26.4844 dB, NCC = 1 and BER = 0. Figure 6 shows the extracted watermark after the shifting attack (1 degree) with PSNR = 24.9798 dB, NCC = 0.9277 and BER = 0.0723. Figure 7 shows watermarked frames of "Mobile" and the extracted watermark after the MPEG-2 attack with PSNR = 22.864 dB, NCC = 0.90185 and BER = 0.0985.
As described above, this strategy is robust to the greater part of the attacks, such as median filtering, small scaling and rescaling, shifting, and MPEG-2 compression. In addition, it does not need any extra information in the detection procedure, whereas other methods need the information of the singular values. The experimental results show that the blind video watermarking technique has great visual quality and good robustness against various attacks. | 2,137.8 | 2020-12-05T00:00:00.000 | [
"Computer Science"
] |
Gamma-widths, lifetimes and fluctuations in the nuclear quasi-continuum
Statistical $\gamma$-decay from highly excited states is determined by the nuclear level density (NLD) and the $\gamma$-ray strength function ($\gamma$SF). These average quantities have been measured for several nuclei using the Oslo method. For the first time, we exploit the NLD and $\gamma$SF to evaluate the $\gamma$-width in the energy region below the neutron binding energy, often called the quasi-continuum region. The lifetimes of states in the quasi-continuum are important benchmarks for a theoretical description of nuclear structure and dynamics at high temperature. The lifetimes may also have impact on reaction rates for the rapid neutron-capture process, now demonstrated to take place in neutron star mergers.
Introduction
Nature displays a huge span of lifetimes, from the birth and death of stars to the population and decay of states in the micro-cosmos. In the world of quantum physics, unstable states are associated with an energy width Γ, which is related to the lifetime through $\tau \Gamma = \hbar$. Both quantities depend on available final states and the γ strength into these states.
The nuclear level density (NLD) is an exponentially increasing function of excitation energy. When the number of states reaches 100-1000 levels per MeV, detailed spectroscopy becomes almost impossible and less useful. In this quasi-continuum region, the NLD and the average γ-ray strength function (γSF) become fruitful concepts. These two quantities replace the accurate position of initial and final states and the transition probabilities between them in conventional discrete spectroscopy.
The Oslo method has provided NLDs and γSFs for many nuclei in the vicinity of the β-stability line [1]. From these observables, lifetimes, γ widths, and fluctuations can be explored in the quasi-continuum. In this work, we will demonstrate the wealth of information that is hidden in these data.
The present study deals with the properties of levels in the quasi-continuum excitation region below the neutron separation energy S_n. The level density ranges from around $10^3$ to $10^7$ levels per MeV at S_n, when going from nuclear masses of A ∼ 50 to 240. For γ energies around 3 MeV, the corresponding increase in γ strength is only one order of magnitude. This makes sense, because the NLD is fundamentally a combinatorial problem of the number of active quasi-particles, while the electric-dipole γ strength scales linearly with the number of protons.
The Oslo Method
In this section we give a short review of the Oslo method [1], for which the starting point is a set of γ-ray spectra measured as a function of initial excitation energy. The γ rays are measured in coincidence with the charged ejectile from light-ion reactions such as (d, pγ), (p, p′γ) and (³He, αγ), where the ejectile determines the initial excitation energy of each γ spectrum. Typical beam energies for the three reactions are 12 MeV, 16 MeV and 30 MeV, respectively. Figure 1 shows a schematic drawing of the set-up. A silicon particle detection system (SiRi) [2], which consists of 64 telescopes, is used for the selection of a certain ejectile type and to determine its energy. The front ∆E and back E detectors have thicknesses of 130 µm and 1550 µm, respectively. Coincidences with γ rays are recorded with the CACTUS array [3], consisting of 26 collimated 5″ × 5″ NaI(Tl) detectors with a total efficiency of 14.1% at E_γ = 1.33 MeV.
With the raw γ-ray spectra at hand, we arrange these into a particle-γ matrix R(E, E γ ). Then, for all initial excitation energies E, the γ spectra are unfolded with the NaI response functions giving the matrix U(E, E γ ) [4]. The procedure is iterative and stops when the folding F of the unfolded matrix equals the raw matrix within the statistical fluctuations, i.e. when F (U) ≈ R.
In the next step, the primary γ-ray spectra are extracted from the unfolded matrix U. This is obtained by subtracting a weighted sum of U(E′, E_γ) spectra below excitation energy E:

$$P(E, E_\gamma) = U(E, E_\gamma) - \sum_{E' < E} w(E, E')\, U(E', E_\gamma).$$

The weighting coefficients w(E, E′) are determined in an iterative way described in Ref. [5]. After a few iterations, w(E, E′) converges to P(E, E_γ), where we have normalized each γ spectrum by $\sum_{E_\gamma} P(E, E_\gamma) = 1$. This conversion of w → P is exactly what is expected, namely that the weighting function should equal the primary γ-ray spectrum. We rely on the fact that quasi-continuum decay is dominated by dipole transitions [9,13], and consider only E1 and M1 transitions in the following. It should be noted that the validity of the procedure rests on the assumption that the γ-energy distribution is the same whether the levels were populated directly by the nuclear reaction or by γ decay from higher-lying states.
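A schematic numpy sketch of this subtraction step follows; the weights are taken as given, whereas the actual method iterates them to convergence as described above. Names and shapes are illustrative assumptions.

    import numpy as np

    def first_generation(U, w):
        # U[i, g]: unfolded gamma spectrum at excitation bin i
        # w[i, j]: weight of the spectrum at lower excitation bin j < i
        P = U.astype(float).copy()
        for i in range(U.shape[0]):
            for j in range(i):
                P[i] -= w[i, j] * U[j]
        P = np.clip(P, 0.0, None)                # discard unphysical negative counts
        return P / P.sum(axis=1, keepdims=True)  # normalize: sum over Eg of P(E, Eg) = 1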
To extract the level density and the γ-ray strength function, we exploit a part of the primary P(E, E_γ) matrix where the level density is high, typically well above 2∆ (the pairing gap), and where no single γ lines dominate. This statistical part of the matrix is described by the product of two vectors:

$$P(E, E_\gamma) \propto \rho(E - E_\gamma)\, \mathcal{T}(E_\gamma),$$

where the decay probability is assumed to be proportional to the NLD at the final energy ρ(E − E_γ) according to Fermi's golden rule [6,7]. The decay is also proportional to the γ-ray transmission coefficient $\mathcal{T}$, which according to a generalized version of the Brink hypothesis [8] is independent of spin and excitation energy; only the transition energy E_γ plays a role. The γSF can be calculated from our measured transmission coefficient through [9]

$$f_{XL}(E_\gamma) = \frac{1}{2\pi}\, \frac{\mathcal{T}_{XL}(E_\gamma)}{E_\gamma^{2L+1}}.$$

It remains to normalize ρ and $\mathcal{T}$ to known experimental information from other experiments. The normalization procedures and the precision obtained depend on the available external data. Further description and tests of the Oslo method and the normalization procedures are given in Refs. [1,10]. One could argue that the level densities and γSFs would depend on the light-ion reaction used. However, although these reactions are very selective, the γ decay appears much later and thus from a thermalized, compound-like system. This has been demonstrated by the Oslo group for many reactions. As an example, the (³He, αγ) and (³He, ³He′γ) reactions have been studied populating the same final nuclei, ⁹⁶Mo and ⁹⁷Mo [11,12]. Also the very different reactions (p, p′γ) and (³He, αγ) into ⁵⁶Fe confirm the independence of the reaction [13]. Minor differences may appear, which are probably due to deviations in the spin distributions populated by the various reactions.
The evaluation of γ width and lifetime
The γ width (Γ) and lifetime (τ) can be evaluated from the measured NLD and γSF obtained with the Oslo method. However, one should note some differences when comparing with neutron capture data. First of all, significantly more levels are populated in the charged-particle reaction, giving a large spin distribution of typically J ≈ 0−6 and populations of both parities. Secondly, the initial excitation bin is much larger (100−200 keV) than for neutron capture data, which may even select only one or a few resonances. These conditions ensure that the Oslo type of data represents an averaging over a broader initial excitation-energy region and over spins and parities.
The γ-decay strength function for γ-ray emission of multipole XL from levels of spin J and parity π at E_x is defined by Bartholomew et al. [14] as

$$f_{XL}(E_\gamma) = \frac{\langle \Gamma^{XL}_{E_\gamma}(E_x, J, \pi) \rangle\, \rho(E_x, J, \pi)}{E_\gamma^{2L+1}},$$

where $\Gamma^{XL}_{E_\gamma}(E_x, J, \pi)$ is the partial γ width for the transition E_x → E_x − E_γ. In the equation, it is assumed that E_x takes a fixed value while E_γ takes variable values, i.e. the final excitation energy E_x − E_γ varies. We now apply Eq. (3) with the assumption that f_{XL}(E_γ, E_x) is independent of E_x [8] and find

$$\langle \Gamma^{XL}_{E_\gamma}(E_x, J, \pi) \rangle = \frac{f_{XL}(E_\gamma)\, E_\gamma^{2L+1}}{\rho(E_x, J, \pi)}.$$

In order to obtain the average total γ width of levels with excitation energy E_x, spin J and parity π, we sum up the strength for all possible primary transitions below E_x as prescribed by Kopecky and Uhl [9]:

$$\langle \Gamma(E_x, J, \pi) \rangle = \frac{1}{\rho(E_x, J, \pi)} \sum_{XL} \sum_{J_f, \pi_f} \int_0^{E_x} \mathrm{d}E_\gamma\, E_\gamma^{2L+1}\, f_{XL}(E_\gamma)\, \rho(E_x - E_\gamma, J_f, \pi_f),$$

where the summations run over all final levels with spin J_f and parity π_f that are accessible by XL transitions with energy E_γ. It should be noted that for the normalization of $\mathcal{T}$, we apply Eq. (6) using the initial spin(s) populated in neutron capture to reproduce the total γ width ⟨Γ(S_n)⟩. If we for simplicity assume that all levels within the initial energy bin are populated in the charged-particle reaction, we obtain the average total γ width by

$$\langle \Gamma(E_x) \rangle = \sum_{J, \pi} g(E_x, J)\, P(\pi)\, \langle \Gamma(E_x, J, \pi) \rangle,$$

where g and P are the spin and parity distributions, respectively. From ⟨Γ(E_x)⟩, we get the lifetime in the quasi-continuum by

$$\tau(E_x) = \frac{\hbar}{\langle \Gamma(E_x) \rangle} \approx \frac{6.58 \times 10^{-13}\ \mathrm{s}}{\langle \Gamma(E_x) \rangle / \mathrm{meV}},$$

where the γ width is in units of meV.
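As a numerical illustration of these relations, the following toy Python sketch assumes a constant-temperature level density and a Lorentzian-like dipole strength; all parameter values are illustrative, not fitted values from the paper.

    import numpy as np

    HBAR_MEV_S = 6.582e-13  # hbar in meV * s

    def rho(Ex, T=0.6, E0=-1.0):
        # Toy constant-temperature level density [1/MeV]
        return np.exp((Ex - E0) / T) / T

    def f_dipole(Eg):
        # Toy Lorentzian-like dipole gamma-ray strength function [1/MeV^3]
        return 1e-8 / (1.0 + ((Eg - 15.0) / 5.0) ** 2)

    def avg_gamma_width(Ex, n=2000):
        # <Gamma(Ex)> = (1/rho(Ex)) * integral dEg Eg^3 f(Eg) rho(Ex - Eg), in meV
        Eg = np.linspace(1e-3, Ex, n)
        integrand = Eg ** 3 * f_dipole(Eg) * rho(Ex - Eg)
        return 1e9 * np.trapz(integrand, Eg) / rho(Ex)  # MeV -> meV

    Ex = 7.0                           # excitation energy [MeV]
    width = avg_gamma_width(Ex)
    print(width, HBAR_MEV_S / width)   # width [meV] and lifetime [s]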
Results and discussion
The γ widths and lifetimes of J^π = 1⁺ and 1⁻ states in the quasi-continuum of ⁵⁶Fe have been measured by photon-scattering experiments [15,16]. In Fig. 2(a) we show the measured lifetimes and compare them with the corresponding estimates of lifetimes evaluated from the NLD and γSF obtained from the Oslo method [13]. We observe that the experimental data fluctuate up to a factor of ten, which is a result of the random structural overlap with few final states. Assuming Porter-Thomas fluctuations [17], the relative fluctuations are $\sqrt{2/\nu}$, where the degree of freedom ν can be estimated by the number of primary transitions from the excited state. Figure 2(b) shows that the spin-1 states represent the fastest dipole transitions in the quasi-continuum, which can be explained by their direct decay to the 0⁺ ground and first excited 2⁺ states. | 2,373 | 2018-04-03T00:00:00.000 | [
"Physics"
] |
Being a body and having a body. The twofold temporality of embodied intentionality
The body is both the subject and object of intentionality: qua Leib, it experiences worldly things and qua Körper, it is experienced as a thing in the world. This phenomenological differentiation forms the basis for Helmuth Plessner’s anthropological theory of the mediated or eccentric nature of human embodiment, that is, simultaneously we both are a body and have a body. Here, I want to focus on the extent to which this double aspect of embodiment (qua Leib and Körper) relates to our experience of temporality. Indeed, to question, does this double bodily relation correspond to a twofold temporality of embodied intentionality? In the first part of this paper, I differentiate between the intentional temporality of being a body and the temporal experience of having a body. To further my argument, in the second part, I present examples of specific pathologies, as well as liminal cases of bodily experiences, wherein these temporal dimensions, which otherwise go hand-in-hand, become dissociated. Phenomenologically, I want to argue that Husserl’s differentiation between Leib and Körper corresponds to two genetic forms of intentionality – operative and act (or object) intentionality – and that these are, in turn, characterized by different temporalities. Anthropologically, I want to argue that having a body – what occurs as an inherent break to human embodiment – is the presupposition for the experience of a stable and object-like time. I will conclude that the double aspect of human embodiment and in particular the thematic experience of having a body enables both the experience of a past, which is remembered, and a future that is planned.
body' (and thus also of being a body) necessarily involves that we are physical and external, otherwise we could neither sense nor be sensed. Secondly, and even more importantly, Husserl stresses our ability to address ourselves thematically in terms of a physical object or Körper. The crucial point in Husserl's differentiation that is taken over by Plessner is not the difference between the non-material and the material, or the internal and external, but rather the difference between a non-thematic (operative) bodily being or subjectivity and the body as thematic, explicit or intended object, that is, as Plessner puts it, the sense of having a body. Depending on one's attention and given circumstances, we are more or less aware of our body as such. 'Being' and 'having' a body can thus be understood as two poles (cf. Breyer 2017) within lived human embodiment that can be descriptively differentiated as two aspects of embodiment 1: While being a body refers to both being a lived and material body, having a body represents the fact that we can address ourselves as Körper. 2 It is this second aspect of Husserl's differentiation between Leib and Körper which is crucial for the argument of this paper.
In the following, then, I want to argue that this double aspect of embodiment corresponds to a twofold temporality of embodied intentionality. In doing so, I seek to phenomenologically ground and describe this twofold experience of time. I will differentiate in this regard between: (i) the intentional temporality of being a body; and (ii) the temporal experience of having a body.
The paper is composed of three parts. After setting the scene by means of a short explanation of Plessner's theory and argument concerning a twofold embodiment (1), I will investigate the theoretical and descriptive phenomenological fundamentals of this difference (2). This second part will be taken in two steps: Firstly, in section (2.1), I will analyze the transcendental relation between time consciousness, intentionality, and the body. To do so, I link Husserl's theory of inner-time consciousness to his theory of intentionality. By doing this, I intend to build upon Dan Zahavi's thesis of time consciousness as the pre-reflective self-awareness inherent to every intentional act. The next step (2.2) relates these Husserlian insights directly to what Thomas Fuchs defines as implicit and explicit temporality. In what follows, then, I argue that time consciousness as such is concretely expressed via an operative or bodily form of temporal intentionality, which corresponds to the anthropological level of being a body.
The third part (3) comprises a concrete illustration of this dual temporality. I will present examples of specific pathologies, as well as liminal cases of bodily experiences, whereby these two temporal dimensions, which otherwise go hand in hand, become dissociated.
To this methodological end, I pose both a phenomenological and anthropological argument. Phenomenologically, I want to show that Husserl's differentiation between Leib and Körper corresponds to two genetic forms of intentionality, operative and act-or object intentionality, which are characterized through their different temporality. Anthropologically, I want to argue that having a body, i.e. the inherent break within human embodiment, might be the presupposition for the experience of a stable and object-like time, in that it allows us to have a past to be remembered and a future to be planned.
1 Setting the scene: Plessner's theory of the double aspect of human embodiment According to Plessner, every form of life can be defined according to the relation it has towards its own 'borders' [Grenzen]. While the border of a non-living object refers merely to its spatial boundaries, the borders of living beings are a part of those beings themselves. In this regard, their boundaries are not spatially fixed, but have to be dynamically realized by the organism in relation to its environment. 3 With regard to the process of realization, the living form of plants can be distinguished from that of animals: while the living functions of plants are open to and ultimately intertwined with their environment, animals and humans are characterized by their independence and ability to close themselves off from their surroundings. As non-open but closed and centralized forms of organization, they are not fully embedded within their environment, but must actively seek out a relation to their environment, as well as to themselves. Because of the environmental distance and relative independence of the organism, a notable distinction thus looms between a lived interior and an experienced exterior. As such, all their interactions with their environment are mediated by a 'center' (that is, in the biological sense, a central organ, such as the brain for most mammals) that lies within their bodies. Plessner calls this particular organization one's 'centralized positionality' (Plessner 1975, 129). This center may only physically be a small part of our bodies, but it often functions as if it were the center of everything, as it effectively steers and controls the body. In this sense, a duplification of the body takes place: we are a body, but at the same time, it is something we have (Plessner 1975, 159).
What makes human life unique is our reflective capacity on this mediation. While animals act out from this center, and sometimes towards this center, by establishing a relation towards their environment and themselves, humans live as a center; in other words, they can relate to this relation (Plessner 1975, 288). 4 One could summarize this as follows: plants merely live, whereas animals and humans undergo experiences; humans, in contrast to animals, however, can also experience their experiences (cf. De Mul 2018). This means that, as humans, we are doubly mediated. Not only do we have a first-person perspective, in which we are oriented towards the world and implicitly towards ourselves, but we also perceive ourselves from without, that is, we have an "ex-centric positionality" (Plessner 1975, 325). This additional point of reference or ex-centric positionality is the reason why humans do not only have an inherent self-relation, but also relate to their body explicitly, that is to say, we are able to look upon ourselves as physical objects in the world: "[a] human being always and conjointly is a living body and has this living body as his physical thing" (Plessner 1970, 34). 3 Within the scope of this paper, I cannot go into the philosophical and biological details of Plessner's argumentation and descriptions. Nonetheless, I want to indicate that Plessner's definition of life (which is inspired by Aristotle) is compatible with autopoietic approaches to life, cf. Varela and Maturana 1980, Varela et al. 1991, Thompson 2007. Life as autopoietic is characterized by its circular self-generating organization: it maintains its identity with reference to a changing environment (criterion of identity); at the same time, life stands out against its environment and interacts with it, whereby the individual structures this very environment according to its needs, preferences and valences (criterion of sense-making). 4 "The animal lives out of its center and into its center, but it does not live as a center" (Plessner 1975, 288; translated from the German original). Humans thus have a relation towards themselves as living beings; that means they have found out about themselves, "the living thing has now really come up behind itself" (291; translated from the German original).
This anthropological difference has fundamental implications for our experience of temporality, as Plessner briefly mentions in his descriptions of the lifeforms. For Plessner, all lifeforms have an inherent temporality, including plants; that is to say, they change with time, as well as preserve and integrate these changes. Starting from a centralized form of bodily organization, however, temporality becomes more distinct. For example, animals are able to correct their actions according to past experiences and thus constitute a habitual memory. But these behavioral adjustments are associatively informed by the past and, as such, must be embedded within a concrete situation. In this regard, I would like to argue that time operates implicitly, thereby functioning as what Plessner calls (drawing on a term by the German biologist Hans Driesch) a historical basis for reaction (Plessner 1975, 277). It is only due to their ex-centric positionality, and the distance and experience of mediation involved, that humans can speak of an explicit relation to oneself and, in turn, an explicit temporality. However, this does not imply that humans have only an explicit experience of time. As ex-centric, they are both centered and de-centered, meaning we can experience time both implicitly and explicitly. In the same sense that humans can never be merely a subject nor an object, as they constantly have to manage the tension between these two 'poles' (i.e. between being and having a body), one may assume that this holds true for the human relation to time.
2 The two temporalities of human embodiment 2.1 Inner time consciousness, intentionality and embodiment From an anthropological perspective (i.e. the third-person perspective, as proposed by Plessner), the fact of both being a body and having a body is tantamount to human embodiment; this is the basis of our double aspectivity. However, from a phenomenological perspective, being a body, i.e. the subjective and living body, is more fundamental and thus deemed primary. This arises from the premise that before one can perceive or attend to one's body as an object, one has to be a living body that experiences and perceives.
This becomes clear when one takes a closer look at what experience amounts to. As Husserl points out, whenever I attentively or reflectively direct myself towards something, this something must have already been there within my lived experience, i.e. it was experienced, so to speak. This, for Husserl, means that every experience is 'conscious', but this consciousness is not necessarily thematically experienced, nor is it yet to be characterized as a singled-out object consciousness in the strict sense. Put in Plessnerian terms, this would mean that before one can have one's body as an object, one must already be a body; the body exists as lived or operative, lingering in the background of our consciousness and remaining immanent as a potentially perceivable object. As Zahavi has famously shown, this also implies that there must be a pre-reflective, non-thematic or passive form of self-awareness or self-affection within living experience that can motivate such an explicit thematization of my body: "I can thematize myself, because I am already passively self-aware; I can grasp myself, because I am already affected by myself" (Zahavi 2003, 163). To understand the necessity of such a passive and inherent self-relation that grounds every thematic relation (as in having a body), one has to turn to Husserl's genetic analysis of how objects are constituted as temporal.
In his genetic phenomenology, Husserl did not only make a 'static' (or spatial) distinction between background and foreground consciousness, and between noticed and unnoticed regions or 'objects'; in addition to this, he attempts to describe how such stable and identifiable 'objects' are (or must be) temporally constituted in inner time consciousness. With his well-known example of the melody, Husserl shows that we would not be able to hear a melody or tune, which is a temporal continuity of tones, if we experienced the sounds as only an unconnected series of sensual inputs or points (cf. Husserl 1991). Temporally-extended objects can never appear if lived purely in the present, in the 'here and now' so to speak. Rather, this experience of here and now, which Husserl calls 'primal impression' (e.g. as expressed in the tone of a melody), must be embedded in a temporal horizon or field. This implies that every incoming tone is retained in consciousness (as a retention of the former primal impression of the tone), and thereby gradually modified: with every new tone, the retention of the former tone sinks deeper into the background of consciousness, losing its liveliness and clarity; the formerly perceived tone thus becomes part of a continuity of momentary gradated retentions. In turn, every incoming tone must also be part of a horizon of upcoming or anticipated tones, a horizon of the protention of "constantly gradated coming" (Husserl 1968, 202; cf. Zahavi 2003, 165).
According to Husserl, this passive synthesis of inner time consciousness provides us with a temporal experience that is continuous. It passively synthesizes the incoming sensual input into a stable object with a fixed localization in objective time. This passive constitution in lived experience thus makes the reproduction of a formerly perceived situation, person or thing possible as an object of memory. We can only remember events or time passed, which effectively means reproducing a once perceived object, because it was temporally constituted as a stable and permanent object with a fixed position in time. This retention, which Husserl misleadingly named 'primal memory', is thus not memory in and of itself, but a part of the present temporal constituting of an object that can then be identified repeatedly as the same 'again and again' (Husserl 1932, Ms. C16 59a).
And yet, what is this inner time consciousness, along with its quasi-temporal structure of retention-impression-protention? Is this constituting something that is itself temporal, i.e. that has an inherent duration? How can we be aware of the very acts that constitute temporal objects? To avoid an infinite regress, whereby every constituted object (i.e. also the constituting consciousness as object of current reflection) again asks for a deeper and more primary constituting consciousness, Dan Zahavi suggested the following solution: instead of making a difference between the intentional acts and time consciousness itself, one must understand the latter (i.e. the dimension of constituting time) as "the pre-reflective self-awareness" inherent in every act: "It is called inner time consciousness because it belongs intrinsically to the innermost structure of the act itself" (Zahavi 2003, 168). Intentional acts are not simply conscious of something; they do not only permit the manifestation of something other than the ego ('hetero-manifestation'), but at the same time they literally manifest themselves. Although one can distinguish this constituting part of time, this does not mean that inner time consciousness or self-awareness exists separately from external experience. As Zahavi puts it, there is no "pure or empty field of self-manifestation upon which the concrete experiences subsequently make their entry" (170). Rather, one could say that this 'minimal self', as Zahavi defines it in later texts (cf. Zahavi 2017), only exists in correlation with the manifestation of an object (other than oneself, i.e. 'hetero-manifestation').
Time consciousness with its inherent self-awareness thus passively constitutes a temporal continuity by qualifying all sensual input as my subsequent experience. With respect to Husserl's example of the melody, Zahavi summarizes this as follows: "The tone is not only given as having-just-been, but as having-just-been experienced [by someone, auth. amendment]" (Zahavi 2003, 172). The retentional modification of every input hence allows us not only to experience an enduring temporal object, nor does it merely enable the "constitution of the identity of the object in a manifold of temporal phases" (an object that can later and repeatedly be reproduced as a whole, i.e. as the melody I heard at a concert or music festival), but these retentional modifications also provide us with a pre-reflective and inherent temporal self-awareness (cf. Zahavi 2003, 170). Therefore, what Zahavi emphasizes is that there is not only a thematic self-awareness in the mode of an object-relation, but the very structure of temporal experiencing and being itself must comprise an inherent and pre-reflective self-relation. Indeed, this inherent self-relation is the phenomenological presupposition for every explicit form of self-perception or reflection, as well as object perception. Having admitted that it was out of the scope of his seminal paper, Zahavi leaves us to question: how is time consciousness and its inherent self-awareness related to the body and intentionality in general? What does this mean for our analysis of the twofold temporal experience of the body's double aspectivity (i.e. being and having a body)? This draws us to focus on the connection between time consciousness and kinaesthesis, as well as between intentionality and self-awareness.
It is my contention in this regard that the concrete form of a temporally constituting intentionality must be a bodily one (cf. also Legrand 2006). This is because, firstly, every time consciousness relies on impressions, and thus affection and sensual receptivity, which presupposes a body with localized sensations. 5 Secondly, all object perception presupposes a moving body with kinaesthetic skills, that is, the fact that perception is dependent on potential movement and action, as Husserl (cf. Husserl 1966, 1973a, 1973b) and, after him, Alva Noë (2004) have shown. 6 5 Cf. for a similar argument Landgrebe 1981: "without impressions there are no time constituting accomplishments and without kinaestheses there are no impressions" (59); cf. also Varela 1999, Depraz 2000, Zahavi 1999. 6 In the perception of a house, for instance, only one side of this object (say, the front) falls within one's visual field, and is thus actually perceived, while the other sides (the sides and back) are emptily intended or, in Noë's words, virtually present. The given side of the perceived object thus carries a sense of the whole object, and includes indications of possible future locations of my body under which other aspects of the object could be given. The horizon of the co-intended, but momentarily absent, profiles of the object is correlated with my kinaesthetic horizon. The absent profiles are experienced in an intentional 'if-then' relation: my relation to them is characterized by my awareness that if I move in this way, then this or that profile will become accessible (cf. Husserl 1997 §55; Husserl 1966 §3).
If we assume that inner time consciousness is the self-awareness inherent in every act of experience, then the concrete and primary form of such a constituting time/self-awareness is to be found in what Husserl (and after him, Merleau-Ponty) called operative intentionality. 7 In contrast to an act- or object-intentionality, one could define such an operative intentionality as practically and holistically oriented towards the world. In this mode of intentionality, one encounters or discloses the things in the world not with regard to what they are, but with regard to how one can use them or what can be done with them; in Heideggerian terms, they are not presented as separate perceived or imagined objects but as 'ready-to-hand' (Heidegger 1962). Operative intentionality is, in this respect, genetically prior and thus a more fundamental form of bodily receptivity and action, whereas object-intentionality refers to acts of thematic perception, reflection or remembering that present or represent an already constituted, i.e. distinct and singled-out, object. As such, the former belongs to the constituting dimension of objectivity, while the latter represents an already constituted permanent object.
If we now apply these phenomenological insights to our hypothesis of the twofold temporality of human embodiment, this would mean that being a body refers to the domain of constituting time, while the experience of having a body refers to an already constituted time. We could thus conclude that being a body is characterized by an operative and temporal intentionality; conversely, the body we have is an object of a higher-order act or object-intentionality, and thus a temporal object. Furthermore, being a body must be characterized by an inherent pre-reflective self-awareness, whereas having a body refers to the realm of object perception or thematic reflection that, in turn, is founded on the former, more primary form of intention and self-awareness.
Implicit and explicit temporality
On a descriptive level, the temporal experience of being a body can be equated with what Thomas Fuchs defines as implicit temporality, and the experience of having a body with what he calls an explicit temporality. Fuchs maps these two temporalities onto Husserl's distinction between the lived body (Leib) and the 'body as corporeal' (Körper). 8 Implicit or lived time, as he also calls it, is regarded as a "function of the lived body, opened up by its potentiality and capability" (Fuchs 2006, 196). The lived body is thereby the concrete realization of lived time. In this mode, we do not have time, but are 'inside time'; that is, completely engaged in the world and our tasks, and hence we "forget time as well as the body" (Fuchs 2006, 196). To illustrate this better, Fuchs gives the example of a child with his toys, whereby the child is absorbed in play and lost in his own world, oblivious to all else, including the time passing and his body moving. This mode of temporal experiencing implies what Fuchs describes as an 'implicit self-awareness' (Fuchs 2007, 231). 9 Fuchs is less interested in the foundational relation between constituting and constituted time (respectively, operative and act-intentionality) than in the concrete motivational problem: how can a formerly implicit time become explicit? That is, to consider how an experiential switch from merely implicit time to an explicit experience of time takes place. According to Fuchs, an explicit experience of time, i.e. temporal differentiations or evaluations like 'earlier' or 'later', 'not yet' or 'yet to come', arises through a gap within the structure of intentionality; a gap between, say, "need and satisfaction, desire and fulfillment, or plan and execution" (Fuchs 2006, 195). For Fuchs, embodiment and temporality are thus intimately connected in that they have a "parallel background-foreground structure" (Fuchs 2006, 196). 7 Operative, motor or functioning intentionality is a concept originally developed by Edmund Husserl (fungierende Intentionalität, Husserl 1969, 234; 1973a) and prominently picked up and developed further by Merleau-Ponty (Merleau-Ponty 2012, 441; cf. Moran 2018, 594). Operative intentionality is thereby related to what Husserl later called drive intentionality. Both are characterized as a tendency rather than an object intention; Husserl also differentiates in this regard between a latent and a patent intentionality, cf. for a detailed analysis Summa 2014, 209 f. If we leave the Husserlian framework, such an operative intentionality can be understood as general intentional directedness or embodied action and engagement. Theorists from enactivism (cf. Weber 2002, Thompson 2007) would argue that this directedness is rooted in the structural dynamics associated with metabolism, adaptive self-regulation, and self-maintenance. Living organisms thereby already engage in a kind of sense-making, as Thompson argues, in that they interpret environmental stimuli in terms of their significance, and create a domain of meaningfulness by maintaining and preserving their identity (Thompson 2007, cf. Maiese 2018). 8 Fuchs uses the notions 'lived body' and 'corporeal body', but as argued in the introduction, this is not entirely correct. The crucial point here is not that the body is material or physical, but that we experience the body as such. Thus I prefer the fuller expression of 'the body as corporeal'.
As Fuchs emphasizes with the aid of Husserl, intentionality at this concrete bodily level has a dynamic temporal structure of intention and fulfillment (cf. Husserl 1973a). Furthermore, it could be argued that operative intentionality or lived time is to be understood as an "affective intentionality" (Slaby 2008) 10 or as 'affectively framed' (cf. Maiese 2018), and thus comprises a specific affective motivation and quality (Wehrle 2015a).
In contrast to implicit time, then, an explicit experience of time is deemed secondary. It appears on the scene when actions are either interrupted or fail, that is, when an experience or perception seems to be incomplete or fragmentary, or something unexpected happens. In this sense, one could argue that explicit time is motivated by a deviation from the concordant or normal structure of experience. These deviations are, in turn, dependent on the former experiences, as well as affective states of the respective bodily subject (Wehrle 2015b). When such a break or interruption happens, the lived body (and its external objects) becomes thematic, changing from a tacit to an attentive mode of awareness.
Applied to the anthropological domain of being and having a body, it can be concluded that in the former we are practically and implicitly temporally related towards the world by moving, sensing and perceiving. In being a body, an individual discloses and explores her environments, and may be directed towards particular things or spaces that are relevant for immediate actions. This subjective, but pre-reflective mode of actual embodiment functions in an operative way. Here, within the flow of experiences, time is unfolding via the actions of the lived body. In this, as Merleau-Ponty would put it, we do not actually have a temporal experience; rather, we are temporal (Merleau-Ponty 2012, 451). In the mode of having a body, we instead explicitly refer to our body, whether to particular parts of it or to our bodily abilities, as further objects of perception and evaluation. To have one's body thus presupposes a temporal distance from our ongoing and operative embodiment, in which our bodily or functioning intentionality must be (at least in part) interrupted.
Time in the primary and implicit sense is, in this regard, not something we think about; rather, time is intentionally performed. Operative intentionality is essentially temporal. Building on Husserl's theory of time consciousness and concept of the lived body, we can thus argue with Merleau-Ponty that temporal constitution concretely takes place in the lived body's actual performance of movements, which integrate, in turn, the dimensions of past, present and future by means of an intentional arc (Merleau-Ponty 2012, 137 f.). Bodily movement always points beyond itself, spatially and temporally. While engaged in a bodily movement, we are 'here', but also already 'there'; that is to say, we are already anticipating the thing or the action that drives our intentional project of activity. The inherent temporality of bodily intentionality is illustrated by the gesture of pointing: "The gesture of reaching one's hand out toward an object contains a reference to this object, not as a representation, but as this highly determinate thing toward which we are thrown, next to which we are through anticipation, and which we haunt" (Merleau-Ponty 2012, 140).
In contrast to empirical time, this implicit time of operative intentionality is not correlated to the coexistence of particular objects or events; rather, it is an ability that keeps these objects and events together, and at the same time holds them apart. It is the temporally and spatially situated body, with its goal-directed actions and movements, that operatively individuates and hence differentiates between temporal dimensions, objects and events, as well as relates them to each other. In this context, it is not a formal or bodyless inner time consciousness, but a concrete bodily subjectivity that constitutes time in the sense of a subsequent continuity and coherence of experience. 11 For this reason, being a temporal body does not only mean spontaneity and agency; it also means being situated within a world and time that has existed long before we even appeared on the scene. In this sense, the past (that is, our individual past along with the cultural or evolutionary past) is literally incorporated into and thus a part of being a body. This can be illustrated by the concept of the body schema. 12 The body schema provides one with immediate 'knowledge' about localization, size, and position within an intersensorial world. We 'know', for instance, whether we will be able to fit through a particular door or lift a certain object, and how to move or behave in a certain situation, without even thinking about it. Such practical knowledge has its locus in our body and its parts; it is not thematic as such, but is automatically retrieved in the situation. 13 Acquired habits, practical know-how and bodily abilities provide us with orientation and skills that do not need constant attention or intellectual interference. 14 In this regard, the body schema constantly mediates between the currently performing body, which behaves and projects itself toward the world, and the habitual body, which determines the practical possibilities of this very body through already acquired skills and know-how. 15

11 Merleau-Ponty is thus concretely combining Husserl's theory of time consciousness with his ideas of operative intentionality and the body, as was theoretically indicated in section (a). Also, Fuchs refers to Merleau-Ponty's concept of an 'intentional arc' in order to describe and explain temporal disturbances in the experience of schizophrenic, melancholic or depressed persons that will be discussed in part 2.

12 Body schema is a term Merleau-Ponty took over from the psychology of his time (cf. Henry Head 1926, Paul Schilder 1923, Gelb and Goldstein 1920, Gallagher 2005a) and re-formulated in the spirit of gestalt theory. In this sense, the body schema is not a mere sum of information regarding different bodily functions (for example tactile and kinaesthetic sensations), but a holistic form of bodily organization in direct reference to its environment: "I hold my body as an indivisible possession and I know the position of each of my limbs through a body schema (schéma corporel)" (Merleau-Ponty 2012, 100-101).

13 Shaun Gallagher defines the body schema in this regard as a "system of processes that constantly regulate posture and movement, a system of motor-sensory capacities that function below the threshold of awareness, and without the necessity of perceptual monitoring" (Gallagher 2005a, 234; Gallagher 2005b, 24).

14 Cf. recent research in philosophy, cognitive sciences and neurosciences that supports this: Dreyfus 1972; Dreyfus & Wrathall 2014; Milner and Goodale 2006.
Moreover, the body schema represents evolutionary, biological sedimentations or cultural influences of the past. Therefore, our temporal being is passive throughout; it is comprised of times and generations we never actually experienced, but which continue to influence us in an implicit, nonetheless powerful, way. In this sense, we are never purely individual, but always already part of a history, culture and generativity, and indeed, a past that has never been present for us (Merleau-Ponty 2012, 252). This relates well to Plessner's idea that all living beings (from plants to humans) are inherently temporal, i.e. shaped by time, whereas animals (and so too humans), in addition to this, shape their behavior informed by time (e.g. past experiences); to reiterate, animals and humans have a historical base of reaction.
Along with Merleau-Ponty, we can thus state that as living temporal beings we are at the same time "entirely active and entirely passive because we are the sudden upsurge of time" (Merleau-Ponty 2012, 452). With further respect to Husserl, one could argue that we are not only constituting time, but, as concrete bodily beings, are simultaneously constituted and shaped by time. In being a body, we are thus already ahead of ourselves, as well as behind ourselves. It is in this sense that Merleau-Ponty argues that subjectivity is time, and that time is subjectivity (Merleau-Ponty 2012, 444).
According to the anthropological understanding of Plessner, we are not only the 'upsurge of time', but can also grasp and thematize our body as an object in time, and reflect on how time has shaped (and may continue to shape) our bodies. As Fuchs has shown, this act of attention towards our body must be, in part, accompanied by a temporal disruption or distance towards our current movements or operative doings. For example, if I were to fall over a stone and hurt my knee, my body switches from the mode of being a mediator of experience and movement to being a thematic object of my perception and concern. In making our body thematic, we refer to it as an intentional and thus temporal object. This more explicit access to our body is given in the mode of a body-image, that is, a "system of (sometimes conscious) perceptions, attitudes, and beliefs pertaining to one's own body" (Gallagher 2005a, 234). This also refers to social and cultural images and the objectifying gazes and evaluations of others (cf. Sartre 2001). In this sense, the body is perceived as an object in space, as well as an object in time. It is temporal because we can perceive changes of this object: having fallen painfully, my knee is bleeding where before it was not. Through this deviation, we become aware of our 'normal' movements, of which we were not explicitly aware as we enacted them. To explain this change, I infer that 'I fell'. Thus, in retrospect, I make this movement (e.g. the act of falling) and its temporality explicit. It is exactly this rupture or process of objectifying our body that makes the explicit experience of time (i.e. time pre- and post-accident), and of our body as an object with temporal aspects (i.e. unhurt vs. bleeding), possible.
Anthropologically, it can thus be argued that in order to have an explicit temporal experience, one has to be able to experience one's body as an object. As we have seen in the example of falling, we are able to grasp or trace back the temporal nature of our embodiment because of the distance created within our interrupted operational doings. We only become aware of ourselves as having a past as we experience the contrast between the functioning body we had in the past and the injured body we now have to encounter in the present. As we have seen with regard to Husserl's theory of time consciousness, it is only possible to refer to a lasting and coherent perceptual object thanks to its temporal constitution within a field of presence, meaning, in the retention of sensations. Only after we hear the complete melody, and it is retained as a whole, can we explicitly remember it as a temporal object that has a definite location in time. The same holds true for the relation of being and having a body. Without being temporal, we could not refer to ourselves as objects in time. However, without experiencing ourselves as objects, we would never be able to grasp this very temporal structure explicitly.
Temporal dissociation and human double aspectivity
In the following, I want to illustrate the above argument, i.e. that we can descriptively differentiate between a twofold temporality that corresponds to the double aspect (being and having a body) of human embodiment. Although these two aspects are not separate dimensions of experience, but two relational poles of one continuum (cf. Breyer 2016), they can be distinguished on the level of description, especially when it comes to non-normal experience.
In section 3.1 (a), I will present examples of pathological experience in schizophrenia and melancholic depression, investigated by Thomas Fuchs, that show a dissociation of implicit and explicit time. Here, implicit temporality and self-awareness, i.e. the lived time of being a body, are disturbed, which leads to an overemphasis on explicit temporality. As Fuchs' perspective is that of psychopathology, such an overcompensation is necessarily interpreted as a negative fragmentation of experience, a lack of attunement with one's environment and others, that leads to suffering and isolation of the respective patients. That the possibility to experience time (and the body) explicitly could, in turn, also have positive aspects will be argued in (b), where an example by Merleau-Ponty is used to show that the lack of this possibility results in a limited scope of temporal (planned) action and thus of the explicit experience of past and future.
Disturbances of implicit time in schizophrenia and melancholic depression
According to Fuchs and others (Fuchs 2013; cf. Maiese 2018; Gallagher 2005b; Bovet & Parnas 1993), mental conditions such as schizophrenia, melancholia and depression are best described as a psychopathology of implicit temporality and self-awareness (Zahavi & Parnas 1998). Major symptoms of schizophrenia, such as thought disorders, thought insertions, hallucinations and experienced passivity, are explained by a disturbance of implicit time, i.e. an interruption of the "constitutive synthesis of time consciousness" (cf. Fuchs 2013, 75). Because implicit or lived time is tied to the lived body and implies a pre-reflective self-awareness, this leads to a fragmentation of the (bodily) self, as well as of experience.
Concerning schizophrenia, Fuchs (cf. Fuchs 2013, 2006) argues in line with Merleau-Ponty that the intentional arc, which constitutes the continuity of implicit time, is fragmented, and so the passive temporal synthesis is disturbed. Schizophrenic patients therefore have problems following a conversation or focusing on a train of thought, as they no longer experience them as coherent and meaningful wholes. In this regard, temporal gaps occur and there is an inability to anticipate what comes next (as in a melody or conversation). According to Fuchs, this refers to a lack in the protentional function of temporality that renders the occurrence of events too rapid, and the patients thus feel overwhelmed and even intruded upon by external events or their own thoughts. With this lack of protention, one additionally loses the ability to actively direct one's actions towards the future; one is stuck in the present or, better put, in that-which-has-just-passed. Hence, patients have to focus on what just occurred and on the sensory feedback of their just-passed movements (cf. Fuchs 2013, 86). Experiences therefore no longer feel as if they were one's own; in a sense, consciousness is continually surprised by itself, and so schizophrenics often experience their own thoughts as if inserted or manipulated (potentially, by another) from without. This leads not only to a fragmentation of experience, but also to a de-personalization. 16 Disturbances in implicit time lead to a "disintegration and alienation of routine units of activity", and this forces patients in turn to produce "every single movement intentionally: the body's implicit knowledge has been lost, and its place taken by 'hyper-reflexive' self-observation and self-control" (Fuchs 2013, 90). 17 Disturbances in implicit time, therefore, cause a break in the affective attunement to the world that must be overcompensated with explicit, that is, intellectual aspects. 18 Moreover, as Fuchs emphasizes, implicit and explicit time correspond to two different forms of intersubjectivity. While we have an explicit form of intersubjectivity in which we are linguistically and socially related to each other on the level of explicit time, the level of lived intersubjectivity is characterized by an intercorporeality (cf. Merleau-Ponty 1964). Normally, the temporalities of individual living and operating bodies are, in this sense, synchronized into an "intersubjective now" (Fuchs 2013, 82), and thus in intercorporeal resonance. Only when this synchronization or implicit temporal normality fails does one explicitly experience time, such as the experience that one is too late or too early with respect to the normal intersubjective standard of others. Time is experienced as such as a "loss of simultaneity" (cf. Fuchs 2013, 83). Such a desynchronization of the individual and intersubjective levels of temporality is, for example, crucial for melancholic depression. Here, as Fuchs argues, time becomes explicit to such an extent that it is experienced as a "burden of guilt and omission" (Fuchs 2013, 94). The typical "triggering constellation" of melancholia is to be described as the experience of 'falling or lagging behind'. This leads to a lack of affective attunement, whereby patients experience themselves, with regard to the dynamic life going on around them, as comparatively lifeless and rigid. Depressive disturbances are, in this sense, also characterized by a perception of oneself as physically static, lifeless and rigid; in other words, experience involves a "corporealization of the lived body" (Fuchs 2013, 96; cf. Fuchs 2005). 19 In Fuchs' research on time and psychopathology, external time is merely understood as a sign of the disturbance or breakdown of implicit time. Explicit time is thus described entirely in negative terms, namely, as something "experienced as a painful burden" (Fuchs 2010, 97) or as the cause of pain and suffering. This emphasis on the negative effects of explicit temporality is, of course, due to his psychopathological focus. But, as I want to argue here, the explicit experience of time that corresponds to an explicit corporeality (the body we have, so to speak) does not necessitate negative phenomenological effects alone. From an anthropological perspective, explicit temporality and corporeality are necessary aspects of specifically human embodiment, and thus not just a default mode. This alternative perspective can go so far as to argue that the split within our embodiment that brings these radicalizations of distance or disruption about is simultaneously the source for all modes of explicit perception, memory and reflection. The inherent mediation in embodiment that accompanies a break in temporal experience may not only lead to disintegration and alienation, or be experienced as burden, displeasure or suffering. Instead, it could engender a reflective distance towards one's body and behavior, allowing one, in turn, to control one's movements or to evaluate and optimize one's body as a whole. In the following, then, I put forward a case in which explicit time, or the capacity to have a body, is disturbed, in order to reflect on the extent to which the ability to experience time explicitly can be regarded as a positive or necessary aspect of human embodiment.

16 Fuchs' phenomenological descriptions are in accordance with research that points to reduced attention spans, deficits in working memory and executive control functions (cf. Vogeley et al. 1999; Vogeley and Kupke 2007; Manoach 2003), and deficits in performing sequential movements, discriminating stimuli in close temporal vicinity (Braus 2002) or estimating time intervals (Mishara 2007).

17 As we have seen in part 1 of this paper, disturbances of implicit temporality are accompanied by a disturbance of pre-reflective self-awareness or the inherent mine-ness of experience. In this regard, patients then have to explicitly assure themselves via introspection that their experiences really belong to them (cf. Zahavi & Parnas 1998, 700).

18 A similar effect with regard to bodily behavior can be seen in the case of Ian Waterman, who suffers from a lack of touch and proprioception from the neck down (cf. Cole 1995; Gallagher and Cole 1995). Here, the loss of bodily self-affection and sensorial feedback forces him to control every single movement explicitly, i.e. with mental concentration and constant visual monitoring of his body. With this, so one could argue, the implicit experience of time is also lost. One's movements are explicitly temporal and experienced as too slow in comparison to others.
The case of Schneider: Concrete and abstract movements
In his Phenomenology of Perception, Merleau-Ponty illustrates the 'normal' functioning of motor intentionality and embodiment through a contrasting analysis of pathological cases. Throughout the book, he refers several times to a patient of the German neurologists Adhémar Gelb and Kurt Goldstein named Schneider. As a result of a war injury, Schneider suffered severe brain injuries and shows functional distortions in visual perception, movement, memory, thinking, and social behavior. In the context of this paper, I will focus only on a specific motoric problem and its relevance for Schneider's temporal intentionality (i.e. his being a body). Gelb and Goldstein report that after his brain injury, Schneider was no longer able to perform what they call abstract movements. When they asked Schneider to point to his own nose when blindfolded, for instance, he was unable to quickly perform this task. Although he perfectly 'knew' where his nose was, Schneider was unable to point to it when asked to do so unless this movement was embedded in a practical or operative action, like sneezing or brushing away a fly. Schneider can thus 'grasp' his nose whenever this grasping is part of a current action or situation, but he cannot 'point' to his nose upon request. 20 Merleau-Ponty questions the juxtaposition of these situations: he concludes that there must be a difference between grasping one's nose in a current, practical or highly habitual situation and having to point to the same nose without being actually engaged in a practical and meaningful task. Grasping, therefore, is deemed a concrete movement, whereas pointing relates to a merely abstract or virtual movement. 21 A similar problem occurs when Schneider is asked to show where one of his doctors lives: although he had visited the place several times, and so 'knew' where it was, he was not able to 'indicate' its location on cue. Again, it seems that things and events have no meaning for him when they are too 'abstract', viz. when they are not integrated in a 'concrete' situation. For this reason, Merleau-Ponty affirms that Schneider is conscious of his own body and of its surroundings as an "envelope of his habitual action but not as an objective milieu", which is why he is only able to act habitually, but not spontaneously (Merleau-Ponty 2012, 106). Although his future-oriented engagement with the world is disturbed, Schneider is still embedded in the world and perfectly able to operate within it in a habitual way.
Applied to Plessner's double aspectivity of embodiment, there are arguably two different kinds of knowledge and temporality involved. In concrete movements, 'knowledge' is part of being a body; that is, it is aligned with the body-schema, comprised of proprioceptive information, and operates automatically in current movements and actions. In the case of an abstract movement, however, whereby one is asked to point to a part of one's body, we need to refer to our body as an object. We need a body-image to point to the objective, external location of a specific part of our body in objective space. While a concrete movement realizes itself within the immediate field of presence or reality, the abstract movement refers to the realm of possibility and virtuality, that is, an imaginary situation, an abstract or even fictional space. To execute an abstract movement, one must be able to plan movements in advance, to imagine possible movements of one's body. In normal motor intentionality, these two aspects, the real and the virtual, the body-schema and the body-image, come together to guarantee a smooth orientation and adjustment to new environments and tasks. Here, we seldom come across a movement that is purely concrete or merely abstract. Most of our everyday activities require an interplay of concrete and abstract aspects. But, in the case of Schneider, it seems that the openness for the realm of the possible is disturbed. His field of action is limited to immediate presence or habitual actions; he cannot imagine how to do things in advance or how to do them otherwise, and appears stuck in the lived time of presence. Therefore, to be able to execute an abstract movement, like pointing, he needs to make this movement concrete for himself. For example, when he is asked to make a circle in the air, he first needs to locate the relevant limb through a preparatory movement. Schneider thus shows a "contraction of the awareness of action possibilities" (cf. Jensen 2009, 386).

20 Included in a current action, Schneider has no problem with his being a physically extended body; he can immediately find the 'external' aspect of his body. His difficulties, rather, relate to experiencing his body as a spatial thing in the physical world, i.e. to perceiving himself as Körper. I offer gracious thanks to the anonymous reviewer for making me aware of this point.

21 For a critical discussion of Merleau-Ponty's interpretation, see Jensen 2009.
What becomes clear here is that the ability to perform abstract movements properly requires a certain planning and adjustment of one's movements, and the capacity to project one's movements into the future. This, in turn, requires a distance from the operational doings of one's body. Apart from a body-schema, one needs a body-image. To anticipate or have an explicit future, as well as a past that can be remembered, one needs to be able to actively objectify one's body. This is precisely what Schneider lacks. But it would be too quick to conclude that Schneider's embodiment is like that of animals, reduced merely to being a body. What is at stake here is the disintegration of the interplay of both aspects. In human embodiment, the temporal aspects of being and having a body are interdependent, normally informing and influencing each other mutually. But in Schneider's case, his body-image does not inform his doings, and vice versa. So, we can conclude that without the aspect of having a body, motor intentionality would be limited to the immediate field of temporal presence.
Examples from normal and liminal experiences
In the following, I will turn to normal and liminal experiences to illustrate the double aspectivity of human embodiment in its relation to time. First, I will describe and discuss the experience of ageing. Here, the inherent tension of being and having a body becomes most obvious. In ageing, we necessarily become aware of our inherent bodily and temporal nature, namely that we are material and finite bodies: while ageing is, on the one hand, gradually lived, it is at the same time experienced as a sudden shock and confrontation with one's actual body, which differs from the body (schema) we were used to and the body (image) we had in mind. Moreover, this example shows another important aspect of having a body: the attention to the bodies we have or had is heavily influenced, motivated or even constituted by the gazes of others. Having a body is thus always mediated by others, and in turn a confrontation with ourselves as other.
In the last section (b), I will briefly discuss liminal experiences of pain, torture and rape. In this regard, I cannot give a full phenomenological description, nor can I do justice to the respective ethical and political implications. I use these examples only to indicate that the ability to objectify the bodies we are can be regarded not only as an increase in suffering or fragmentation, but also as an ability to distance oneself from one's pain and situation. I will argue in this regard that to extinguish the possibility of objectifying oneself is precisely what makes torture and rape so brutal and humiliating.
The experience of ageing
The double aspect of embodiment creates a tension that becomes especially radical and relevant with old age. Here, we come to experience a mismatch between the gradual experience of ageing and the appearance of the aged body; this shows, for example, in the widely known phenomenon that one continues to feel young, but imprisoned in an old body. But, as I want to argue, this split is not the result of a static personal conception of self (that is forever young) and an inevitably changing appearance; rather, it results strategically from the attempt to cover or even deny the dynamical tension inherent in every human embodiment.
Being a body, we experience ageing as a gradual change, but we do not experience ourselves as 'old'. Oldness is an objective or relative category that belongs to the body we have. As De Beauvoir would put it, oldness is assigned to us by others (De Beauvoir 1996). What we experience from within are gradual or sudden changes in our embodied actions: the decrease in strength or fitness, slower reactions and diminished bodily abilities that deviate from past performance and no longer easily accommodate our environment; or we feel out of sync with regard to the performances of others, as Fuchs has explained in relation to melancholic depression. Despite their limited magnitude, these apparent changes are experienced as sudden disruptions. They either gradually add up until they reach a threshold and the mismatch becomes explicit; or the changes are recognized suddenly, i.e. when we try to do things we have not done in a while. In both cases, the operating, lived body becomes thematic and thus turns into the material and aged body we have.
This split becomes especially visible and radical the older we become: the more we experience our being a body as an expression of 'I cannot' instead of 'I can', the more we have to attend to our body as a material object. Embodied immersion in the world becomes rare as we are constantly forced to deal with our bodily shortcomings and changes as a result of old age. To experience ourselves as old implies that we have to be aware of our body as a body, in other words, as Körper; that is to say, the body's materiality, physicality and finiteness is made thematic to us. With regard to age, it becomes obvious that our concrete body-image is heavily mediated by others, either directly or through the mediated gaze of others, i.e. when we see ourselves in the mirror with the eyes of others. Having a body is in this sense never purely an individual experience, as we experience having bodies through the gazes of others, as well as through internalized cultural and social images and norms. According to De Beauvoir, the recognition of ourselves 'as old' is not something we can experience merely from within, but something that is assigned to us from without, through the judgments of other subjectivities that, in turn, represent the norms of a certain society. Therefore, De Beauvoir argues along with Sartre that old age is unrealizable: one cannot experience what "we are for others in the for-itself mode" (De Beauvoir 1996, 291). The appearance of the aged body is, in this sense, more apparent to others than to the subject itself. In turn, we experience ageing or having an aged body through the appearance and presence of others. It is thus also in the perception of the aged bodies of other subjects that we are confronted by our own future or current bodily stages and selves.
Being a body, we experience changes gradually and while directed towards the environment; only from time to time do we in fact attend to our bodies, for instance, when our embodied being has been physically disturbed or requires habitual adjustments, or when our appearance is brought to our attention through others' gaze or the presence of a mirror. With old age we inevitably and increasingly attend to the body we are (as embodied beings), which is thus increasingly perceived as a body we have. The internal subject-object split inherent to every human embodiment becomes more self-apparent, and the achieved balance between the two shakier. We experience our body more and more as an object. This is the result of either internal physical modifications, changes in performance, or the awareness that we deviate from the cultural standards of youth and beauty. But the aged body we see in the mirror, or the 'old body' that is assigned to us through social categorization via others' gaze, diverges from our gradual experience of ageing, as well as from our habituated body-image. Therefore, the sudden image of our aged body diverges not only from one's experience, but also from our habituated body-image. As such, this habitual body-image always temporally lags behind with respect to the actual external representation or categorization we are confronted with.
In the same sense that we are "habituated to a different [past, auth.] body", a sensing and moving body with more capacities, strength, and a broader environment (cf. Heinämaa 2014, 177), we are also habituated to a 'past' body-image: namely, a more or less stable image that we have created of ourselves in the course of our former lifetime. I would thus argue that not only the body-schema, but also the body-image is developed over time, and then becomes more fixed in adult life, especially once we achieve certain life stabilities by means of, for instance, a reliable profession, social or family position. Although we constantly perceive ourselves daily in the mirror, we rarely update our body-image by these means. This is because we cannot perceive the gradual changes in our faces. Rather than look carefully at ourselves each time, we anticipate our appearance according to an already acquired body-image. Inevitably, the gradual physical changes accumulate to reach a moment at which the actual 'change' becomes apparent. However, even this moment is achieved largely through the mediation of either others' comments or a comparison with an old picture of oneself. The aged body we then suddenly discover is experienced not as something we are or even have become, but as something truly 'other'. 22
Limit cases of experience: pain, torture and rape
When experiencing pain, the aspect of having a body is especially important. The ability to objectify ourselves allows us, at least temporarily, to gain some distance, and so some control, over our pain. Nietzsche illustrates this in the following citation: "I have given a name to my pain and called it a dog […] it is just as faithful, just as obtrusive and shameless […] and I can scold it and vent my bad mood on it, as others do with their dogs, servants, and wives." (Nietzsche 1974, 249-250). This citation points tacitly to the dangers of objectification, its ability to render certain people and animals inferior, but in doing so, Nietzsche also highlights the relief achieved in referring to pain as something exterior to oneself; something that is manifest, graspable and potentially manageable. With intense and chronic pain that extends all over the body, and is thus no longer easily locatable, this objectification becomes increasingly impossible. In such situations, we are our pain, and we are unable to grasp or, indeed, have it anymore. In such cases of intense pain, we are not only unable to distance ourselves from our body and make it the object of our intention, but we also lose our directedness towards the world, and in turn intentionality in the strict sense. Reduced entirely to our being a body, we lose our body's disclosing meaning and functions; we are hence reduced to the body as a mere field of sensation and thus thrown back on the materiality and vulnerability that accompanies it. In this alienated situation, we also lose every sense of time (and space), because we neither experience change and movement on the level of implicit time or intentional temporality, nor can we refer to our body as a temporal or perceived object anymore.
This total breakdown of the mediated structure of our embodiment becomes especially obvious in the case of violently induced pain. In the case of purposefully inflicted pain, as in torture, the torturer objectifies the victim in a radical sense, thereby depriving the victim of subjectivity (cf. Scarry 1985; Grüny 2003). These victims find themselves in an entirely alienated situation, where their being a body is reduced to a mere passive receptivity. Yet the loss of subjectivity does not mean that we are reduced to a mere object; the materiality is intertwined with the sensibility of living beings, and thus remains a part of our being a body. Instead, what we lose is the ability of having a body: the victim of torture is deprived of the ability to actively objectify herself. Torture can thus be described as radicalizing the subjective dimension of embodiment: "since the experienced pain completely fills subjective space, so that nothing else can be felt, up to a point where consciousness [and thus experience] might faint altogether" (Breyer 2016, 743). At the same time, through such a radicalization the victims lose any distance to their bodies, and thus the possibility to imagine, reflect on or control them (and along with that, their pain).
Moreover, due to the monotony of torture practices, and the typically perpetually dark rooms in which torture takes place, the victim loses track of time and space. Deprived of the possibility to move, disclose or direct themselves towards the world and external objects, the victim loses the ability to bodily inhabit space or engage in lived time (through their operative intentionality). In such a situation of radical asynchrony, where the torturer inhabits more and more space and is also the master of time, the victim loses all sense of an objective or intersubjectively shared time. In this regard, human embodiment, the 'I can' as well as the 'I have', will be slowly disintegrated until the victim becomes incommensurable with pain and appears merely as an object (at least from a third-person perspective), moved and touched at another's will.
In extreme situations of violence, especially in cases of rape, characterized as an 'invasion of the body', some people report 'out of body' experiences. This expresses the way in which a rape victim will 'leave' the body being violated and regard it as if from a distance. As one's sense of being a body and lived time is literally penetrated and dominated by another, it becomes an unlivable and unbearable state of being; as such, a dissociation from one's body takes place. 23 This split or distance helps the victim to survive this "affront to the embodied subject" (Cahill 2001, 13). Although the victims can restore neither their lived nor the intersubjectively shared time, one could argue that in such an out-of-body experience a second, 'frozen' lived time is established as a substitution, in which one can observe what happens. Paradoxically, this split functions as a reaction of the embodied subject to retain its wholeness and dignity. In line with Plessner, it can be argued that a rape victim takes shelter in their 'eccentric positionality', and in doing so, they are able to re-establish the mediated relation between being and having a body, lived and explicit time.
Conclusion
Human embodiment implies that there is a rupture at its heart. And it is this rupture, as I have tried to show throughout this paper, that is essentially temporal.
This means that the rupture of human embodiment constitutes our experience of a past and a future, and the very awareness of ourselves as temporal beings, in both an individual and a generative sense. This creates a need to constantly mediate these two poles, which normally go hand in hand. These poles stand for our operational and (re)presentational intentionality, pre-reflective and reflective awareness, implicit and explicit temporality.
While most phenomenological and psychopathological approaches emphasize the alienating and de-synchronizing disturbances of experience that occur when our body and temporal experience become thematic, an anthropological perspective can shed light on the positive aspects of self-objectification. Following Plessner, the distance or mediatedness within our embodiment can be understood as the precondition for explicit learning and reflection, as well as for the development of culture and technology. Phenomenologically, it has been shown that the temporal and operative intentionality of being a body is fundamental for stable object perception, as well as for an explicit temporality. Concretely, this enables us to make abstract movements, which involve planning and imagining what we do in advance of acting; that is to say, it gives us a sense of the possible. In addition, we relate to ourselves as past bodies, which becomes apparent in the example of ageing. Anthropologically, then, this split within human embodiment provides us with a past and a future, as well as a past and a future body, that not only shapes or motivates us, but to which we can and must relate as such. For these reasons, having a body, as explicit corporeality and temporality, is not merely a default mode of embodiment; this decentering and fragmenting aspect necessarily belongs to it. While it certainly can be alienating, it also implies the capacity to take a distance towards one's immediate actions and feelings, and thus to gain a sense of distance and control, making it possible to reflect on and evaluate one's bodily behavior. The body as object thus need not be a burden, but can also be a blessing (depending on the respective circumstances).
It is by having a body, imperfectly construed as it is, that we are further enabled to actively, and thus also temporarily, distance ourselves from the body we are. This necessarily implies a temporal distance as well, one that makes us aware of our bodies not only passively, as material, changing and finite objects, but also actively, as possible bodies related to our future actions and plans. Therefore, this temporal distance from the body we are bears a tendency towards reflection already at a bodily level. That the body is sensible for itself, as Merleau-Ponty emphasizes in his later works, does not only mean that it is sensible in its materiality or spatiality, but above all that it is sensible for itself as temporal. This temporal aspect of being self-reflexive illustrates what Merleau-Ponty meant when he claimed that bodily reflection is the precondition for any other form of self-consciousness. 24

24 The fact that one can experience oneself as a material body in the world, one that is not only visible to or within physical reach of oneself, but most of all for others, could also be interpreted as the presupposition of explicit empathy. This is because we know first-hand that externally perceived material bodies (like ours) can have an interior, i.e. can be experiencing subjects with perspectives and gazes themselves (Merleau-Ponty 1964, 168). In a sense, one could say that the distance we have towards ourselves makes room for the other, i.e. the encounter with ourselves as other is, in turn, the presupposition for experiencing the other as similar to oneself. Therefore, while there is already a pre-reflective self-awareness and intersubjectivity (intercorporeality) at the level of being a body, the fact that we also have a body may be interpreted as the source of an explicit self-consciousness, as well as the possibility of an explicit form of empathy.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. | 15,505 | 2019-02-07T00:00:00.000 | [ "Philosophy" ] |
Optimal photon polarization toward the observation of the nonlinear Breit-Wheeler pair production
We investigate the optimization of the photon polarization to increase the yield of Breit-Wheeler pair production in arbitrarily polarized plane-wave backgrounds. We show that the optimized photon polarization can improve the positron yield by more than $20\%$ compared to the unpolarized case, in the intensity regime of current laser-particle experiments. The seed photon's optimal polarization results from the polarization coupling with the polarization of the laser pulse. Compact expressions for the coupling coefficients in both the perturbative and nonperturbative regimes are given. Because of the evident difference in the coupling coefficients for the linear and circular polarization components, the seed photon's optimal polarization state in an elliptically polarized laser background deviates considerably from the orthogonal state of the laser polarization.
I. INTRODUCTION
The production of an electron-positron pair in the collision of two high-energy photons, now referred to as the linear Breit-Wheeler (LBW) process, was first proposed in the 1930s [1]. The production yield depends not only on the photons' dynamical parameters, but also on the relative polarization of the two photons [1][2][3].
With the improvement of laser intensity, the decay of a single high-energy photon into an electron-positron pair in the collision with an intense laser pulse, often referred to as nonlinear Breit-Wheeler (NBW) pair production [4][5][6][7], was measured in the multiphoton perturbative regime via the landmark E144 experiment more than two decades ago [8,9] and has been broadly studied within different types of laser fields [10][11][12][13][14][15][16][17][18][19][20][21][22][23]. The dependence of the NBW process on the polarization state of the seed photon has also been partially investigated in the current literature [24][25][26][27][28][29][30][31], in which the laser backgrounds are commonly specified with pure linear and/or circular polarization, and the production yield can be considerably improved/suppressed when the polarization of the seed photon is set to be orthogonal/parallel to that of the background field [29][30][31]. However, in an arbitrarily polarized laser background, how to assign the photon polarization to acquire the maximal production yield has not been clearly investigated.
In the LBW process, the polarization dependence of the production results from the polarization coupling between the two high-energy photons [1][2][3]. However, how the polarization of the seed photon couples with that of the laser pulse (or multiple laser photons) in the NBW process is still not clear. In this paper, we concentrate on the properties of the polarization coupling between the seed photon and the laser pulse and reveal the optimal polarization of the seed photon for the maximal yield of the NBW process in arbitrarily polarized laser backgrounds. We find that the linear and circular polarization components of the seed photon couple with the corresponding components of the laser polarization with quite different coefficients, and thus, in an elliptically polarized laser pulse, the optimal polarization state of the seed photon deviates considerably from the orthogonal state of the laser polarization.
The study of the optimal photon polarization for the maximal production yield is partly motivated by the upcoming high-energy laser-particle experiments, i.e., LUXE at DESY [32][33][34][35] and E320 at SLAC [36][37][38][39], in which beams of photons with energy O(10 GeV) are generated to collide with laser pulses of intermediate intensity ξ ∼ O(1), and one of the main goals is to detect the NBW process in the transition from the perturbative to the non-perturbative regime [34,35], where ξ is the classical nonlinearity parameter for the laser intensity. In this planned intensity regime, the production yield can be enhanced/suppressed considerably by the photon polarization effect [28,31].
The paper is organised as follows. The theoretical model and relevant parameters are introduced in Sec. II. In Sec. III, we first explore the perturbative intensity regime and discuss the photon polarization coupling in the LBW process; then, in Sec. IV, we move to the nonperturbative intensity regime and discuss the polarization coupling between the seed photon and the laser pulse in the NBW process. We conclude in Sec. V. In the following discussions, natural units ℏ = c = 1 are used, and the fine structure constant is α = e² ≈ 1/137.
II. THEORETICAL MODEL
We consider the typical scenario in modern-day laser-particle experiments in which a beam of high-energy photons interacts with an intense laser pulse in a geometry close to head-on collision. The laser pulse is modelled as a plane wave with scaled vector potential a^μ(φ) = |e|A^μ(φ) depending only on the laser phase φ = k·x, where k^μ = ω(1, 0, 0, −1) is the laser wave vector, ω is the central frequency of the laser pulse and |e| is the charge of the positron. This plane-wave background is a good approximation for collisions between high-energy particles and weakly focussed pulses [10,[40][41][42][43]]. The collision is characterized by the energy parameter η = k·ℓ/m² and the laser intensity parameter ξ, where ℓ^μ is the seed photon's momentum and m is the electron rest mass.
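To make the kinematics concrete, the following minimal Python sketch (not part of the paper; the function name and hard-coded electron mass are our own) evaluates η for an exactly head-on collision, where k·ℓ = 2ωε_γ:

    M_E = 0.511e6  # electron rest mass in eV

    def eta_head_on(omega_ev, photon_energy_ev):
        """Energy parameter eta = k.l/m^2; k.l = 2*omega*eps for head-on beams."""
        return 2.0 * omega_ev * photon_energy_ev / M_E**2

    # Example: a 4.65 eV laser photon against a 16.5 GeV seed photon
    # (the parameters used later in Sec. IV) gives eta ~ 0.59.
    print(eta_head_on(4.65, 16.5e9))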
Based on (1), the total yield of the NBW process can be phenomenologically written as

N = n0 + Γ1 n1 + Γ2 n2 + Γ3 n3, (2)

where n0 is the unpolarized contribution independent of the photon polarization (Γ1,2,3 = 0) [22,23], and n1,2,3 denote the contributions coupling to the polarization of the seed photon. As one can simply infer, to maximize the production yield, the photon polarization should be selected such that the yield reaches its maximum

N_max = n0 + np, (3)

which prompts the existence of an optimal photon polarization, for the specified laser pulse and collision parameter η,

(Γ1, Γ2, Γ3) = (n1, n2, n3)/np, (4)

where np = (n1² + n2² + n3²)^(1/2) is the maximal contribution from the photon polarization. If, however, one reverses the optimal polarization of the seed photon, i.e. Γ1,2,3 → −Γ1,2,3, the pair production is largely suppressed.
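As a sketch of Eqs. (2)-(4), the optimal Stokes vector and the extremal yields can be evaluated as below; the numerical values of n0 and (n1, n2, n3) are placeholders for illustration, not results from this paper:

    import numpy as np

    def optimal_polarization(n_vec):
        """Eq. (4): Gamma_i = n_i/n_p with n_p = |n_vec|."""
        n_p = np.linalg.norm(n_vec)
        return n_vec / n_p, n_p

    def total_yield(n0, n_vec, gamma):
        """Eq. (2): N = n0 + Gamma . n."""
        return n0 + float(np.dot(gamma, n_vec))

    n0, n_vec = 1.0, np.array([0.2, 0.0, -0.1])  # hypothetical contributions
    gamma_opt, n_p = optimal_polarization(n_vec)
    print(total_yield(n0, n_vec, gamma_opt))     # maximal yield n0 + n_p, Eq. (3)
    print(total_yield(n0, n_vec, -gamma_opt))    # reversed: suppressed to n0 - n_p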
III. LINEAR BREIT-WHEELER PROCESS
One may realize that the polarization contribution Γi ni in (2) comes from the polarization coupling between the seed and laser photons, and thus the optimal photon polarization (4) depends on the polarization of the laser photons. To manifest this polarization coupling effect, we resort to the perturbative approximation of (1), which is often referred to as the LBW process, by expanding the integrand in (1), keeping only the O(ξ²) terms and integrating over s. Schematically, the resulting yield takes the form

N_LBW ∝ ∫_{ν*}^{∞} dν D(ν) [ Ξ + κl (Γ1 ς1(ν) + Γ2 ς2(ν)) + κc Γ3 ς3(ν) ], (5)

where ν* = 2/η is the frequency threshold of the laser photon required to trigger the pair production, D(ν) = ν|ã(ν)|²/(4π²αλe²m²) is the (areal) number density of laser photons with frequency νω, λe = 1/m = 386.16 fm is the electron's reduced Compton wavelength, ã(ν) = ∫dφ [a_x(φ), a_y(φ)] exp(iνφ), and ς1,2,3(ν), constructed from ã(ν), are the classical Stokes parameters (6) of the laser photon νω [45], satisfying ς1²(ν) + ς2²(ν) + ς3²(ν) = 1. As for the seed photon, ς1,2,3(ν) characterize the polarization of the laser photon: ς1(ν) [ς2(ν)] describes the preponderance of the x (45°)-linear polarization over the y (135°)-linear polarization, and ς3(ν) denotes the preponderance of the (+)-circular polarization over the (−)-circular polarization. The parameter Ξ (7) is the contribution from unpolarized photons [46,47], and κc and κl (8) are, respectively, the circular- and linear-polarization coupling coefficients, which indicate the amplitude of the contributions from each kind of polarization coupling between the seed and laser photons, where β = (1 − ν*/ν)^(1/2) is the normalized velocity of the produced particles in the center-of-mass frame.
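A small sketch of the threshold kinematics just introduced (the function name is ours):

    def beta_cm(nu, eta):
        """Pair velocity beta = (1 - nu*/nu)^(1/2) with threshold nu* = 2/eta."""
        nu_star = 2.0 / eta
        if nu <= nu_star:
            return 0.0  # below threshold, no pair can be produced
        return (1.0 - nu_star / nu) ** 0.5

    print(beta_cm(nu=8.0, eta=0.59))  # ~0.76 for a harmonic well above threshold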
In (5), we can clearly see the contributions from the polarization coupling between the seed and laser photons. To maximize the polarization contribution in the LBW process, the polarization of the seed photon is optimized, based on the polarization of the laser photon, as

(Γ1, Γ2, Γ3) = κ̂l (ς1(ν), ς2(ν), σl ς3(ν)) / [ς1²(ν) + ς2²(ν) + σl² ς3²(ν)]^(1/2), (9)

where σl = κc/κl, and κ̂l is the sign of κl. As we can also see in (5), the two sets of linear polarizations have the identical coupling coefficient κl, because of the symmetry under rotating the linear polarization axis by 45°. This identity results in the orthogonality between the linear polarization components of the seed and laser photons; when, moreover, κc ≈ κl, the optimum (9) reduces to the fully orthogonal state

(Γ1, Γ2, Γ3) = −(ς1(ν), ς2(ν), ς3(ν)), (10)

where κ̂l = −1 is read off from Fig. 1. In Fig. 1, the unpolarized contribution Ξ and the polarization coupling coefficients κl,c are presented as functions of the parameter β. As shown, the polarization contributions are indeed appreciable compared with the unpolarized contribution, especially in the low-energy region β < 0.2, where Ξ ≈ −κc ≈ −κl and the energy of the laser photon is close to the frequency threshold ν → ν*. With the proper photon polarization, the production could be doubled if Γ·ς(ν) → −1 or completely suppressed if Γ·ς(ν) → 1. Similar to the variation of the unpolarized contribution Ξ with β ∈ (0, 1) [47], the amplitudes of the coupling coefficients κc,l increase from zero at β = 0 to a maximum at around β ≈ 0.45 and then fall off again to zero at β = 1. In the region β < 0.4, the two kinds of polarization have the same coupling coefficient, κc ≈ κl. This means that, to acquire the maximal polarization contribution, the seed photon should be fully polarized in the state orthogonal to that of the laser photon, i.e. (10). However, in the higher-energy region with β > 0.4, the difference between κc and κl becomes considerable, which implies that the highest production yield is acquired with the seed photon polarized in a state deviating from the orthogonal state of the laser photon. Especially in the extremely high-energy region with β > 0.95, in which κl is close to zero and κc becomes positive and dominates the polarization contribution, the highest yield appears when the seed and laser photons have pure circular polarizations parallel to each other.
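Under the reconstruction of Eq. (9) given above, the optimal seed-photon Stokes vector can be sketched as follows; sigma_l = kappa_c/kappa_l, and sign_kl is the sign of kappa_l (negative over most of the range in Fig. 1). The function name and example values are our own:

    import numpy as np

    def gamma_opt_lbw(stokes_laser, sigma_l, sign_kl=-1.0):
        """Optimal seed-photon polarization built from the laser photon's Stokes vector."""
        s1, s2, s3 = stokes_laser
        v = np.array([s1, s2, sigma_l * s3])
        return sign_kl * v / np.linalg.norm(v)

    # For beta < 0.4, kappa_c ~ kappa_l so sigma_l ~ 1, and the optimum is simply
    # the state opposite to the laser photon's polarization, Eq. (10):
    print(gamma_opt_lbw((0.6, 0.0, 0.8), sigma_l=1.0))  # -> [-0.6, -0.0, -0.8]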
We now know that the polarization coupling between the two photons in the LBW process can contribute considerably to the production yield, and that the polarization contributions n1,2,3 in (2) are proportional to the Stokes parameters of the laser photon as n1,2 ∼ D(ν) κl ς1,2(ν) and n3 ∼ D(ν) κc ς3(ν), with the coupling coefficients κl,c depending only on the dynamic parameter β in the perturbative regime ξ ≪ 1. In the upcoming laser-particle experiments [32,37], however, the laser intensity has increased to the magnitude ξ ∼ O(1), in which the Breit-Wheeler pair production is in the transition regime from the perturbative to the non-perturbative regime: a high number of laser photons is involved to satisfy the energy threshold in the center-of-mass frame, and the NBW process dominates the pair production. The polarization contributions would, therefore, come from the polarization coupling with the laser pulse, i.e. with multiple laser photons rather than with a single laser photon, and the coupling coefficients would depend also on the laser intensity and field ellipticity.
IV. NONLINEAR BREIT-WHEELER PROCESS
In this section, we consider the NBW process stimulated by a high-energy photon colliding with a laser pulse in the intermediate intensity region ξ ∼ O(1). This is the typical setup for the upcoming laser-particle experiment LUXE [32,35]. To show the polarization effect clearly, we fix the energy parameter η and adjust the relative polarization of the seed photon and the laser pulse.
The background laser field is expressed as

a⊥(φ) = m ξ f(φ) Re{(a_x(θ), a_y(θ)) e^{iφ}}, (11)

where Re{·} denotes the real part of the argument, a_x(θ) = cos θ − iδ sin θ, and a_y(θ) = sin θ + iδ cos θ. δ ∈ [−1, 1] characterizes not only the rotation of the laser field (δ/|δ| = 1 means left-hand rotation and δ/|δ| = −1 right-hand rotation), but also the ellipticity |δ| of the laser pulse: |δ| = 0, 1 corresponds, respectively, to a linearly and a circularly polarized laser background, and 0 < |δ| < 1 gives a laser pulse with elliptical polarization. The semi-major axis of the elliptical laser field is along (cos θ, sin θ), with deflection angle θ ∈ [−π, π] in the transverse plane. f(φ) depicts the envelope of the laser pulse. The polarization of the laser field can also be described with the classical Stokes parameters (ς1, ς2, ς3) [45] as

ς1 = [(1 − δ²)/(1 + δ²)] cos 2θ, ς2 = [(1 − δ²)/(1 + δ²)] sin 2θ, ς3 = 2δ/(1 + δ²). (12)

The total linear polarization degree of the laser pulse is given as (ς1² + ς2²)^(1/2) = (1 − δ²)/(1 + δ²), and the laser's circular polarization degree is given by ς3. The equivalence between the laser Stokes parameters (12) and those of the laser photon (6) can be seen when we consider a relatively long laser pulse with a slowly varying envelope, f′(φ) ≈ 0 and |f̃(ν + 1)| ≪ |f̃(ν − 1)| at ν ≥ 1 [23].
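A sketch of Eq. (12) as written above (the function name is ours): the Stokes parameters of the elliptical pulse follow directly from δ and θ, and satisfy ς1² + ς2² + ς3² = 1 for a fully polarized field:

    import numpy as np

    def laser_stokes(delta, theta):
        """Stokes parameters (s1, s2, s3) of an elliptical plane wave, Eq. (12)."""
        lin = (1.0 - delta**2) / (1.0 + delta**2)  # total linear degree
        s1 = lin * np.cos(2.0 * theta)
        s2 = lin * np.sin(2.0 * theta)
        s3 = 2.0 * delta / (1.0 + delta**2)        # circular degree
        return s1, s2, s3

    print(laser_stokes(0.5, np.pi / 9))  # the pulse used in Figs. 3 and 4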
A. Numerical results
To show the importance of the polarization contributions and their dependence on the corresponding laser Stokes parameters, we first present the numerical results for the NBW process stimulated by a 16.5 GeV photon in a head-on collision with the laser pulse in the intermediate intensity region ξ ∼ O(1). The pulse envelope is given as f(φ) = cos²[φ/(4σ)] in |φ| < 2πσ and f(φ) = 0 otherwise, where σ = 8. The calculations have been done with the laser central frequency ω = 4.65 eV, as an example, which is the third harmonic of a standard laser with wavelength λ = 0.8 µm. For the detailed calculation of (1), one can refer to the presentation in Ref. [22] and the analogous calculation in Ref. [48] for polarized nonlinear Compton scattering.
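For reference, the pulse envelope used in the numerics can be sketched as follows (our own helper, mirroring the definition above):

    import numpy as np

    SIGMA = 8.0  # pulse-duration parameter used in the text

    def envelope(phi):
        """f(phi) = cos^2[phi/(4*sigma)] for |phi| < 2*pi*sigma, zero otherwise."""
        f = np.cos(phi / (4.0 * SIGMA)) ** 2
        return np.where(np.abs(phi) < 2.0 * np.pi * SIGMA, f, 0.0)

    phi = np.linspace(-3.0 * np.pi * SIGMA, 3.0 * np.pi * SIGMA, 7)
    print(envelope(phi))  # vanishes outside the pulse, peaks at phi = 0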
In Fig. 2, the contribution n₂ is always zero as the laser ellipticity δ changes. This is because the laser has no polarization preponderance along the direction of θ = π/4, i.e., ς₂ = 0. To see the effect of the field deflection angle θ, we plot the variation of the polarization contributions n_i with the change of θ in Fig. 3 (a) for ξ = 1 and δ = 0.5. As shown, the polarization contributions n_{1,2} vary in the trend (n₁, n₂) ∝ −(cos 2θ, sin 2θ), and n₃ is unchanged for different θ. All follow the same trend as the variation of the corresponding laser Stokes parameters ς_{1,2,3} in (12). However, we also note that the amplitude of the linearly polarized contribution (n₁² + n₂²)^{1/2} is constant with the change of θ, shown as the green dotted lines in Fig. 3 (a). Therefore, the maximized polarization contribution n_p in (3) from the optimized polarization (4) is independent of the field's deflection angle θ, as shown in Fig. 3 (b), in which we also find that the unpolarized contribution n₀ is unchanged for different θ. This is because of the azimuthal symmetry of the interaction geometry. We can thus conclude that, for laser pulses with fixed ellipticity δ and intensity ξ, the field's deflection angle θ can only alter the relative values of the linear polarization contributions n_{1,2} while keeping the amplitude (n₁² + n₂²)^{1/2} constant, but does not change the circularly polarized (n₃) and unpolarized (n₀) contributions. To show the correlation between the polarization contribution n_i and the corresponding laser Stokes parameter ς_i, we fit the numerical results in Fig. 3 (a) respectively as n₁: [n₁(θ = 0)/ς₁(θ = 0)] ς₁, n₂: [n₂(θ = π/4)/ς₂(θ = π/4)] ς₂, and n₃: [n₃(θ = 0)/ς₃(θ = 0)] ς₃, and find precise agreement between the numerical results and the fits.
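The single-point fitting used for Fig. 3 (a) is easy to reproduce in outline. The sketch below generates stand-in data obeying n_i ∝ ς_i (the true values would come from evaluating the full QED expression (1)), calibrates the coefficient at one angle exactly as in the text, and checks the prediction at the other angles; all numbers here are illustrative placeholders, not results from the paper.

```python
import numpy as np

def stokes(delta, theta):
    """Laser Stokes parameters (12) for ellipticity delta and deflection angle theta."""
    return np.array([(1 - delta**2) * np.cos(2 * theta),
                     (1 - delta**2) * np.sin(2 * theta),
                     2 * delta * np.ones_like(theta)]) / (1 + delta**2)

rng = np.random.default_rng(0)
delta = 0.5
thetas = np.linspace(0.0, np.pi, 19)
s1, s2, s3 = stokes(delta, thetas)

# Stand-in for the full QED result: n1 = c1*s1 plus small numerical scatter.
c1_true = -0.12
n1 = c1_true * s1 + 1e-4 * rng.standard_normal(thetas.size)

# Single-point calibration at theta = 0, as described in the text.
c1_fit = n1[0] / s1[0]
print(c1_fit, np.max(np.abs(n1 - c1_fit * s1)))     # coefficient and worst residual
```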
In Fig. 4, we show the variation of the different contributions to the positron yield with the change of the laser ellipticity δ for the fixed deflection angle θ = π/9 and laser power density I = 1, corresponding to 3.84 × 10^19 W/cm². As shown in Fig. 4 (a), both the unpolarized contribution n₀ and the maximized polarization contribution n_p from the optimal polarization (4) [shown in Fig. 5 (b)] decrease with the increase of the laser ellipticity δ from 0 to 1. This is because of the decrease of the field intensity ξ = [2I/(1 + δ²)]^{1/2}. Simultaneously, the relative importance, n_p/n₀, of the maximized polarization contribution decreases from about 31.6% at δ = 0 for a linearly polarized laser pulse to about 22.3% at δ = 1 for a laser pulse with pure circular polarization. For comparison, we also plot the importance of the polarization contribution n_p = −(ς₁n₁ + ς₂n₂ + ς₃n₃) from the orthogonal state of the laser polarization, which is clearly smaller than that from the optimal polarization state, especially for the elliptically polarized laser with δ ≈ 0.5. In Fig. 4 (b), we see that the amplitudes of the linear polarization contributions n_{1,2} decrease with the increase of δ, while the amplitude of the contribution from the circular polarization, n₃, increases. These variations are again in the same trend as the laser Stokes parameters in (12). The difference between the two linear polarization contributions can be depicted as n₁/n₂ ≈ ς₁/ς₂ = cos 2θ/sin 2θ. The numerical results in Fig. 4 (b) are respectively fitted as n_{1,2}: [n_{1,2}(δ = 0)/ς_{1,2}(δ = 0)] ς_{1,2} and n₃: [n₃(δ = 1)/ς₃(δ = 1)] ς₃, and again we see agreement between the numerical results and the fits. The slight difference around δ ≈ 0.4 implies the dependence of the polarization coupling between the seed photon and laser pulse on the laser ellipticity, as we will see later.

FIG. 4. Different contributions to the positron yield of the NBW process in the laser pulse with different ellipticity δ ∈ [0, 1], but fixed laser power density I = ξ²(1 + δ²)/2 = 1 and deflection angle θ = π/9. (a) The unpolarized contribution n₀ and the maximized polarization contribution n_p from the seed photon with the optimal polarization in (4). The relative importance n_p/n₀ of the maximal polarization contribution n_p is also plotted and compared with that of the polarization contribution n_p = −(ς₁n₁ + ς₂n₂ + ς₃n₃) from the photon state orthogonal to the laser polarization. (b) The variation of the polarization contributions n_{1,2,3} with the change of the laser ellipticity. The full QED results ('circle', 'plus' and 'square') are fitted with the corresponding laser Stokes parameters as c_{1,2,3} ς_{1,2,3}, where c_{1,2} = n_{1,2}(δ = 0)/ς_{1,2}(δ = 0) and c₃ = n₃(δ = 1)/ς₃(δ = 1). The laser power density I = 1 corresponds to the real power density I ≈ 3.84 × 10^19 W cm⁻². The other parameters are the same as in Fig. 2.
In this section, we investigate the NBW process in the laser pulse with the ellipticity δ ∈ [0, 1] and deflection angle θ ∈ [0, π]. For a laser pulse with ellipticity δ ∈ [−1, 0], the laser field would rotate in the opposite direction to the laser with ellipticity −δ (see the expression for ς₃). The calculations would be consistent with the above results, except that the polarized contribution n₃ would change sign while keeping the same amplitude. For a laser pulse with the deflection angle θ ∈ [−π, 0], all the above results would also be the same, except that the polarized contribution n₂ would change sign because of the odd parity of ς₂. All the calculations have been done for a relatively long laser pulse; for an ultra-short laser pulse, the conclusions would be different.
As we can see, in the NBW process the polarization contribution n_i is also directly proportional to the corresponding laser Stokes parameter ς_i, as shown in Figs. 3 and 4, i.e.,

n_{1,2} = αI κ_nl ς_{1,2},  n₃ = αI κ_nc ς₃,   (14)

with the coupling coefficients depending not only on the laser power but also on the field ellipticity. The two linear polarization components share, again, the same coupling coefficient because of the symmetry of rotating the linear polarization axis, as discussed for Fig. 3. We put the fine structure constant α out of the coupling coefficients because the NBW process is a single-vertex process, and the factor I because the contributions increase with the laser power, n_i ∝ ξ² in (5) in the perturbative regime.
In Fig. 5 (a), we present the dependence of the coupling coefficients κ_nl and κ_nc on the field ellipticity for lasers with the fixed power I = 1 and relatively long duration. As shown, the values of κ_nl and κ_nc vary only slightly with the change of the field ellipticity δ, while there is a significant difference between κ_nl and κ_nc, with the ratio κ_nc/κ_nl < 1 also changing for different δ.
The dependence of κ_nl and κ_nc on the laser power density is presented in Fig. 6 (a) for the fixed field ellipticity δ = 0.5 and deflection angle θ = π/8. As shown, in the low-power density region I < 10⁻³, κ_nl and κ_nc are independent of the laser power I because the LBW process dominates the production; κ_nl and κ_nc can then be acquired alternatively from the perturbative result (5), with κ_l and κ_c depending only on the parameter β. The values of κ_nl and κ_nc are determined by the energy parameter η and the pulse envelope. In this region, the positron yield increases as n₀, n_p ∝ I, shown in Fig. 6 (c), because of the single-photon effect with the high-frequency components from the finite-pulse effect [23]. In the intermediate laser power region, 10⁻³ < I < 10⁻¹, the coupling coefficients increase as κ_nl, κ_nc ∝ I³ because of the multiphoton perturbative effect, in which ⌈2/η⌉ = 4 laser photons are involved in the production process and the positron yield increases in the trend n₀, n_p ∝ I⁴ in Fig. 6 (c), where ⌈x⌉ denotes the minimal integer larger than x. With the further increase of the laser power, I ≳ 0.5, this 4-photon channel is forbidden and a higher number of laser photons, n = ⌈2(1 + I)/η⌉, would be involved in the production process. Therefore, the fully non-perturbative effect would be dominant. The increase of the coupling coefficients κ_nl and κ_nc becomes slower, as does the increase of the positron yield in Fig. 6 (c). In Fig. 6 (a), we can also see the evident difference between κ_nl and κ_nc in the broad laser power region, with the ratio κ_nc/κ_nl < 1 depending also sensitively on the laser power. This difference would result in the deviation of the optimal photon polarization from the completely orthogonal state of the laser polarization.
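The photon-number bookkeeping in this paragraph can be made concrete with a few lines of arithmetic. The helper below evaluates n = ⌈2(1 + I)/η⌉; the value η = 0.6 is an illustrative assumption chosen so that the perturbative channel needs ⌈2/η⌉ = 4 photons, as in the text.

```python
import math

def photons_needed(eta: float, intensity: float) -> int:
    """Minimal number of laser photons, n = ceil(2*(1 + I)/eta), including the
    intensity-dependent shift of the effective threshold."""
    return math.ceil(2.0 * (1.0 + intensity) / eta)

eta = 0.6  # illustrative value: ceil(2/eta) = 4 at vanishing intensity
for I in (1e-4, 1e-2, 0.1, 0.5, 1.0, 10.0):
    print(f"I = {I:8.4f}  ->  n = {photons_needed(eta, I)} laser photons")
```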
C. Optimal photon polarization
From (14), the optimal polarization of the seed photon (4) can be written as

Γ = κ̂_nl (ς₁, ς₂, σ_n ς₃) / √(ς₁² + ς₂² + σ_n² ς₃²),   (15)

based on the polarization of the laser pulse, where κ̂_nl = −1 is the sign of κ_nl acquired numerically, and σ_n = κ_nc/κ_nl denotes the difference between the coupling coefficients κ_nl and κ_nc. If σ_n ≠ 1, the photon's optimal polarization state would deviate from the orthogonal state −(ς₁, ς₂, ς₃) of the laser polarization. As shown in Fig. 5 (a), σ_n is much smaller than 1 for different δ. Therefore, the optimal polarization state of the seed photon, for the maximal yield, is much different from the orthogonal state −(ς₁, ς₂, ς₃) of the laser polarization, as one can see in Fig. 5 (b), except in the regions around δ ≈ 0, 1, where the laser is linearly and circularly polarized, respectively. With the optimized photon polarization in Fig. 5 (b), the production yield could be enhanced by more than 20% compared to the unpolarized case, as shown in Fig. 4 (a).
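A minimal sketch of the optimization in (15): given the laser Stokes vector and the measured ratio σ_n = κ_nc/κ_nl, it returns the fully polarized seed-photon state that maximizes the yield. The function is our own illustration, and the σ_n values below are placeholders rather than the numerical results of Fig. 5.

```python
import numpy as np

def optimal_photon_polarization(stokes_laser, sigma_n, sign_knl=-1.0):
    """Seed-photon Stokes vector maximizing the yield, cf. (15):
    Gamma ~ sign(k_nl) * (s1, s2, sigma_n * s3), normalized to |Gamma| = 1."""
    s1, s2, s3 = stokes_laser
    gamma = sign_knl * np.array([s1, s2, sigma_n * s3])
    return gamma / np.linalg.norm(gamma)

# Laser with delta = 0.5, theta = pi/8, so the linear components are equal.
delta, theta = 0.5, np.pi / 8
s = np.array([(1 - delta**2) * np.cos(2 * theta),
              (1 - delta**2) * np.sin(2 * theta),
              2 * delta]) / (1 + delta**2)

for sigma_n in (1.0, 0.5, 0.1):  # sigma_n = 1 reproduces the orthogonal state -s/|s|
    print(sigma_n, optimal_photon_polarization(s, sigma_n))
```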
In Fig. 6 (b), the optimal polarization state of the seed photon is presented in a broad laser power region for the specified ellipticity δ = 0.5. Because the field deflection angle is θ = π/8, the two linear polarization components are equal, Γ₁ = Γ₂. Again, because of the evident difference between κ_nl and κ_nc in Fig. 6 (a), the photon's optimal polarization state deviates considerably from the orthogonal state of the laser polarization, as shown in Fig. 6 (b). Especially in the non-perturbative regime I > 0.5, the circular polarization degree |Γ₃| of the optimal polarization decreases rapidly with the increase of I, because of the rapid decrease of the ratio κ_nc/κ_nl for larger I in Fig. 6 (a), which means that the contribution from the circular polarization becomes less important. In the ultra-high intensity regime ξ ≳ 10 (not shown in Fig. 6), in which the locally constant field approximation would work precisely [31,52], the contribution from the circular polarization would be negligible, i.e., κ_nc → 0 and Γ₃ → 0. This is because the formation length of the NBW process becomes much shorter than the typical length of the field variation [53], and the laser pulse would work as a linearly polarized field with the direction varying with the laser phase [31].

FIG. 6. (a) The variation of the coupling coefficients κ_nl, κ_nc with the increase of the laser power density. The dependence of the ratio σ_n = κ_nc/κ_nl on the laser power is also presented with the right y-axis. (b) The Stokes parameters of the photon's optimal polarization with the change of the laser power. Γ₁ = Γ₂ as the field deflection angle is θ = π/8. (c) The yield from the unpolarized contribution n₀ and the maximal polarization contribution n_p, and the relative importance of the polarization effect n_p/n₀. In (a) and (b), the pink dotted lines are the corresponding perturbative results acquired from (5), and the black dotted lines show the varying trend of the curves. The field ellipticity is δ = 0.5. The other parameters are the same as in Fig. 4.
With the polarization-optimized seed photon, the positron yield could be enhanced appreciably, as shown in Fig. 6 (c). In the perturbative intensity region I < 10⁻³, the positron yield could be enhanced by more than 55% by the polarization effect compared with the unpolarized case, and in the multi-photon perturbative region 10⁻³ < I < 10⁻¹, the yield enhancement from the optimized polarization state is about 34%. With the further increase of the laser power, even though the relative importance of the polarization contribution becomes smaller, the positron yield could still be improved by more than 16% at I ≳ 50.
V. CONCLUSION
The optimization of the photon polarization state for the maximal positron yield of Breit-Wheeler pair production has been investigated in arbitrarily polarized plane-wave backgrounds over a broad intensity region. Both the polarization of the seed photon and that of the laser pulse are described comprehensively with the classical Stokes parameters.
The optimal polarization state of the seed photon results from the polarization coupling with the laser pulse/photon in the production process. For a laser pulse with pure linear or circular polarization, the seed photon's optimal polarization is the orthogonal state of the laser polarization. However, because of the evident difference between the coupling coefficients for the linear and circular polarization components, the seed photon's optimal polarization state in elliptically polarized laser backgrounds deviates considerably from the orthogonal state of the laser polarization, especially in the ultra-high intensity regime, in which the linear-polarization coupling coefficient is much larger than that of the circular polarization and thus the seed photon's optimal polarization tends to the linear polarization.
With the polarization-optimized seed photon, the positron yield could be considerably enhanced in a broad intensity region. For the laser intensity region, ξ ∼ O(1), of current laser-particle experiments, the yield enhancement from the optimized photon polarization could be more than 20% compared to the unpolarized case.
VI. ACKNOWLEDGMENTS
The author thanks A. Ilderton for helpful suggestions and comments on the manuscript. The author acknowledges support from the National Natural Science Foundation of China, Grant No. 12104428. The work was carried out at the Marine Big Data Center of the Institute for Advanced Ocean Study of Ocean University of China.
"Physics"
] |
Marine Alga Ecklonia cava Extract and Dieckol Attenuate Prostaglandin E2 Production in HaCaT Keratinocytes Exposed to Airborne Particulate Matter
Atmospheric particulate matter (PM) is an important cause of skin damage, and an increasing number of studies have been conducted to discover safe, natural materials that can alleviate the oxidative stress and inflammation caused by PM. It has been previously shown that the extract of Ecklonia cava Kjellman, a perennial brown macroalga, can alleviate oxidative stress in epidermal keratinocytes exposed to PM less than 10 microns in diameter (PM10). The present study was undertaken to further examine the anti-inflammatory effects of E. cava extract and its major polyphenolic constituent, dieckol. HaCaT keratinocytes were exposed to PM10 in the presence or absence of E. cava extract or dieckol and analyzed for their viability, prostaglandin E2 (PGE2) release, and gene expression of cyclooxygenase (COX)-1, COX-2, microsomal prostaglandin E2 synthase (mPGES)-1, mPGES-2, and cytosolic prostaglandin E2 synthase (cPGES). PM10 treatment decreased cell viability and increased the production of PGE2, and these changes were partially abrogated by E. cava extract. E. cava extract also attenuated the expression of COX-1, COX-2, and mPGES-2 stimulated by PM10. Dieckol attenuated PGE2 production and the gene expression of COX-1, COX-2, and mPGES-1 stimulated by PM10. This study demonstrates that E. cava extract and dieckol alleviate airborne PM10-induced PGE2 production in keratinocytes through the inhibition of gene expression of COX-1, COX-2, mPGES-1, and/or mPGES-2. Thus, E. cava extract and dieckol are potentially useful natural cosmetic ingredients for counteracting the pro-inflammatory effects of airborne PM.
Introduction
The World Health Organization (WHO) reported that more than 4.2 million people died in 2018 from air pollution, making it the largest single environmental health risk factor (https://www.who.int). Air pollutants causing serious health risks include particulate matter (PM), ozone (O 3 ), nitrogen dioxide (NO 2 ), and sulphur dioxide (SO 2 ) [1]. PM, the main component of air pollution, is composed of inorganic and organic, solid, and liquid particles suspended in the air [2]. PM can be produced directly or indirectly from several sources including agriculture, industry, power plants, automobiles, construction, and forest fires [3]. PM less than 10 and 2.5 microns in diameter (PM10 and PM2.5) can penetrate deep into the lungs and enter the bloodstream. Exposure to PM increases the incidence of cardiovascular, cerebrovascular, and respiratory diseases [4][5][6][7].
The skin is a barrier between the body and the outer environment and is directly exposed to harmful environmental pollutants. Patients with compromised skin barriers are more affected by PM through increased absorption thereof by the percutaneous tract [8,9]. PM itself can impede the barrier function, enhancing subsequent drug absorption [10]. PM that infiltrates the skin can aggravate skin diseases, such as atopic dermatitis, acne, and psoriasis [11]. PM is also associated with premature skin aging [12] and hyperpigmentation [13]. Simultaneous PM and UV ray exposure synergistically exert negative effects on the skin and can lead to photo-aging and cancer [14,15].
Airborne PM has been implicated in the production of reactive oxygen species (ROS) and the expressions of cytokines and matrix metalloproteinases involved in oxidative stress and inflammation, as demonstrated in human dermal fibroblasts, epidermal keratinocytes, and reconstructed epidermis models [16][17][18][19]. PM increases the production of the eicosanoid mediator prostaglandin (PG) E 2 and decreases filaggrin expression in human keratinocytes, leading to reduced skin barrier function [20,21]. In contrast, eupafolin, derived from the medicinal herb Phyla nodiflora, inhibited PM-induced cyclooxygenase (COX)-2 expression and PGE 2 production in HaCaT keratinocytes [22]. Resveratrol, a polyphenol found in grapes and red wine, reduced PM-induced COX-2 expression and PGE 2 production in human fibroblast-like synoviocytes [23]. Therefore, dermatological and cosmetic approaches using safe and effective antioxidants might alleviate the adverse skin reactions that arise from PM exposure [24].
Ecklonia cava Kjellman, which belongs to the Laminariaceae family, is a perennial brown macroalga widely distributed along the coast of Korea and is used in traditional medicine [25]. E. cava contains phlorotannins such as eckol and dieckol [26] and has been reported to have antioxidant, anti-inflammatory, antibacterial, antidiabetic, and anticancer properties [27][28][29][30][31][32]. In a previous study, this laboratory showed that E. cava extract and dieckol attenuated lipid peroxidation and the expression of inflammatory cytokines in human keratinocytes exposed to PM10 [33]. Building on this previous work, we further examined here whether E. cava extract and dieckol affect PM10-induced PGE 2 release and the gene expression of enzymes involved in the synthesis of PGE 2 in human keratinocytes.
Marine Alga Extracts
The extracts of 50 different marine algae were purchased from Jeju Biodiversity Research Institute of Jeju Technopark (Jeju, Korea), as previously reported [34].
Purification of Dieckol from E. cava
Dried E. cava was purchased from Jayeoncho (http://www.jherb.com) (Seoul, Korea), and 200 g of powder was extracted with 1.0 L of 80% v/v aqueous ethanol for 7 days at room temperature (usually 25 °C). The slurry was then filtered through a Whatman No. 1 filter paper (Sigma-Aldrich, St. Louis, MO, USA) and the filtrate was evaporated under reduced pressure, yielding 18 g of crude extract. The crude extract was dispersed in 0.2 L of water and partitioned with organic solvents, yielding 1.49 g of methylene chloride fraction, 2.83 g of ethyl acetate fraction, 3.46 g of 1-butanol fraction, and 8.65 g of water fraction. A portion of the ethyl acetate fraction (2.45 g) was further fractionated by normal phase chromatography on a φ3 cm × 20 cm column of silica gel (Sigma-Aldrich) and eluted with a 4:1 v/v mixed solvent of methylene chloride and methanol (MeOH). Fractions that contained a significant amount of dieckol were combined and evaporated under reduced pressure, yielding 0.81 g of dry material. This material was subjected to reversed phase chromatography on a φ3 cm × 20 cm column of YMC-GEL ODS-A (YMC Co., Ltd., Kyoto, Japan) and eluted using a stepped gradient of 30-70% v/v aqueous MeOH. The fractions that contained dieckol were pooled and evaporated under reduced pressure to dryness, yielding 60 mg of compound 1 (purity, 97%).
Instrumental Analysis
Nuclear magnetic resonance (NMR) spectra were obtained using a Bruker Ascend III 700 (CryoProbe) spectrometer (Bruker BioSpin, Rheinstetten, Germany). Chemical shifts in δ values were referenced to an internal standard, tetramethylsilane (TMS). Electrospray ionization mass spectra (ESI-MS) were obtained using a TSQ Quantum Discovery MAX spectrometer (Thermo Fisher Scientific Inc., Waltham, MA, USA). HPLC was carried out using a Waters Alliance HPLC system (Waters, Milford, MA, USA) consisting of a Waters e2695 Separation Module and a Waters 2996 photodiode array detector. The stationary phase was a 5 µm, 4.6 mm × 250 mm Hector-M C18 column (RS Tech Co., Daejeon, Korea), and the mobile phase was a gradient mixture of 0.1% phosphoric acid (A) and acetonitrile (B). The solvent gradient program was as follows: 0-30 min, a linear gradient from 0-100% B; 30-40 min, 100% B. The flow rate of the mobile phase was 0.6 mL min−1.
Cell Culture
HaCaT cells, an immortalized human keratinocyte cell line established by Norbert E. Fusenig [35], so named to denote its origin from human adult skin keratinocytes propagated under low Ca2+ conditions and elevated temperature, were obtained from In-San Kim (Kyungpook National University, Daegu, Korea) [36]. Cells were cultured in a closed incubator at 37 °C in humidified air containing 5% CO2. Cells were given fresh DMEM/F-12 medium (GIBCO-BRL, Grand Island, NY, USA) containing 10% fetal bovine serum, 100 U mL−1 penicillin, 100 µg mL−1 streptomycin, 0.25 µg mL−1 amphotericin B, and 10 µg mL−1 hydrocortisone every three days.
Treatment of Cells with PM10
The cells were plated onto 6-well culture plates (SPL Life Sciences, Pocheon, Korea) at 8 × 10 4 cells/well and cultured in a growth medium for 24 h.
A standardized PM 10 -like fine dust (European Reference Material ERM-CZ120PM10) (Sigma-Aldrich) was suspended in phosphate-buffered saline (PBS) at 100 times the final concentration of each treatment before each experiment. Cells were treated with PM10 at specific concentrations ranging from 25 to 400 µg mL −1 for 24 to 48 h, depending on the experimental purpose, with or without E. cava extract or dieckol at specified concentrations. N-acetyl cysteine (NAC) (Sigma-Aldrich) was used as a positive control antioxidant.
Enzyme-Linked Immunosorbent Assay (ELISA)
Levels of PGE2 protein in the culture medium were determined using a prostaglandin E2 express ELISA kit (Cayman Chemical Co., Ann Arbor, MI, USA). In this assay, a fixed amount of PGE2-acetylcholinesterase (AChE) conjugate is used as a PGE2 tracer, whose binding to the PGE2 monoclonal antibody is inversely proportional to the amount of PGE2 derived from the sample. Briefly, 50 µL of 4-fold-diluted cell culture media or standard PGE2 solutions were transferred to microplate wells containing immobilized goat polyclonal anti-mouse IgG. PGE2 tracer and PGE2 monoclonal antibody were then added to each well, and the mixtures were incubated at 4 °C for 18 h. The wells were rinsed five times with wash buffer, and Ellman's reagent containing acetylthiocholine and 5,5′-dithio-bis-(2-nitrobenzoic acid) was added to initiate the AChE reaction. After 60 min, absorbances were measured at 405 nm with a SPECTROstar Nano microplate reader (BMG LABTECH GmbH). The amount of PGE2 was estimated using a standard curve.
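As an illustration of how concentrations are read off such a competitive ELISA, the sketch below fits a four-parameter logistic standard curve and inverts it for an unknown well. The standard concentrations and absorbances are invented numbers for the sketch, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, bottom, top, ec50, hill):
    """Four-parameter logistic curve for the competitive ELISA signal."""
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

# Hypothetical PGE2 standards (pg/mL) and absorbances at 405 nm; in a competitive
# assay the signal falls as the analyte concentration rises.
std_conc = np.array([7.8, 15.6, 31.3, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_abs = np.array([1.42, 1.31, 1.12, 0.88, 0.62, 0.41, 0.26, 0.17])

popt, _ = curve_fit(logistic4, std_conc, std_abs, p0=[0.1, 1.5, 100.0, 1.0])

def conc_from_abs(a, bottom, top, ec50, hill):
    """Invert the fitted curve to estimate the sample concentration."""
    return ec50 * ((top - bottom) / (a - bottom) - 1.0) ** (1.0 / hill)

sample = conc_from_abs(0.75, *popt) * 4.0  # x4 for the 4-fold dilution described above
print(f"estimated PGE2: {sample:.1f} pg/mL")
```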
Quantitative Reverse-Transcriptase Polymerase Chain Reaction (qRT-PCR) Analysis
The mRNA levels of COX-1, COX-2, microsomal prostaglandin E2 synthase (mPGES)-1, mPGES-2, and cytosolic prostaglandin E2 synthase (cPGES) were determined by qRT-PCR using a StepOnePlus™ Real-Time PCR System (Applied Biosystems, Foster City, CA, USA). Total RNA was extracted from cells with an RNeasy kit (Qiagen, Valencia, CA, USA), and this RNA was used as a template for the synthesis of complementary DNA (cDNA) with a high capacity cDNA archive kit (Applied Biosystems). Gene-specific primers for qRT-PCR analysis were purchased from Macrogen (Seoul, Korea), and their nucleotide sequences are shown in Table 2. The qRT-PCR reaction mixture (20 µL) consisted of SYBR® Green PCR Master Mix (Applied Biosystems), cDNA (60 ng), and gene-specific primer sets (2 pmole). Thermal cycling parameters were set as follows: 50 °C for 2 min, 95 °C for 10 min, 40 amplification cycles of 95 °C for 15 s and 60 °C for 1 min, and a dissociation step. In each run, the melting curve analysis confirmed homogeneity of the PCR product. The mRNA levels of each gene were calculated relative to that of the internal reference, glyceraldehyde 3-phosphate dehydrogenase (GAPDH), using the comparative Ct method [37]. Ct is defined as the number of cycles required for the PCR signal to exceed the threshold level. Fold changes in the test group compared to the control group were calculated using the 2^−ΔΔCt formula.
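For reference, the comparative Ct calculation reduces to a few lines; the sketch below implements the 2^(−ΔΔCt) formula with invented Ct values (a real analysis would average replicate wells first).

```python
def fold_change(ct_gene_test, ct_ref_test, ct_gene_ctrl, ct_ref_ctrl):
    """Relative mRNA level by the comparative Ct (2^-ddCt) method."""
    d_ct_test = ct_gene_test - ct_ref_test   # normalize to GAPDH in each group
    d_ct_ctrl = ct_gene_ctrl - ct_ref_ctrl
    dd_ct = d_ct_test - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: a target gene and GAPDH in PM10-treated vs control cells.
print(fold_change(ct_gene_test=24.1, ct_ref_test=17.9,
                  ct_gene_ctrl=26.3, ct_ref_ctrl=18.0))  # ~4.3-fold induction
```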
Assay for Cellular ROS Production
Cellular ROS production was assessed using 2′,7′-dichlorodihydrofluorescein diacetate (DCFH-DA), a cell-permeable fluorescent probe sensitive to changes in the redox state of a cell [41]. The cells were plated onto 12-well culture plates (SPL Life Sciences) at 4 × 10⁴ cells/well for 24 h. Cells were pre-labeled with 10 µM DCFH-DA (Sigma-Aldrich) for 30 min and treated with 100 µg mL−1 PM10 alone or in combination with a test material at different concentrations for 30 min. Cells were extracted with 20 mM Tris-Cl buffer (pH 7.5) containing 1% sodium dodecyl sulfate (SDS) and 2.5 mM ethylenediamine-N,N,N′,N′-tetraacetic acid (EDTA) (150 µL/well). The extracted solution was centrifuged at 13,000 rpm for 15 min, and the supernatant was used for the measurement of fluorescence intensity (excitation at 485 nm and emission at 538 nm) with a Gemini EM fluorescence microplate reader (Molecular Devices, Sunnyvale, CA, USA).
The 3D-reconstructed Human Skin Models
A 3D-reconstructed human skin model (Neoderm-ED®) in a 12-well plate format, produced by culturing human epidermal keratinocytes on top of human dermal fibroblasts at an air-medium interface (air-lift culture) for 12 days, was purchased from TEGO Science (Seoul, Korea). The skin model was air-lift cultured for an additional 1 day in this laboratory at 37 °C in humidified air containing 5% carbon dioxide. The skin models were then treated with 200 µg mL−1 PM10 in the presence or absence of 20 µM dieckol for 48 h. The skin model was fixed in 4% paraformaldehyde in PBS and embedded in paraffin. The 6 µm thick sections of paraffin blocks were stained with hematoxylin and eosin and observed with an Eclipse 80i microscope (Nikon Instruments Inc., Melville, NY, USA).
Statistical Analysis
Data are expressed as a mean ± standard deviation (SD) of three or more independent experiments. Experimental results were statistically analysed using SigmaStat v.3.11 software (Systat Software Inc., San Jose, CA, USA), by one-way analysis of variance (ANOVA), followed by Dunnett's test comparing all treatment groups to a single control group. A p-value of less than 0.05 was considered statistically significant.
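The same analysis pipeline can be reproduced with open-source tools; a minimal sketch with invented viability data is shown below (scipy.stats.dunnett requires SciPy ≥ 1.11; SigmaStat itself was used in this study).

```python
import numpy as np
from scipy.stats import f_oneway, dunnett  # dunnett requires SciPy >= 1.11

rng = np.random.default_rng(1)

# Hypothetical viability data (% of control, n = 4) for the PM10 control and
# two treatment groups, mimicking the design described above.
pm10 = rng.normal(62.0, 4.0, size=4)
trt_50 = rng.normal(74.0, 4.0, size=4)
trt_100 = rng.normal(85.0, 4.0, size=4)

print(f_oneway(pm10, trt_50, trt_100))          # one-way ANOVA across all groups
res = dunnett(trt_50, trt_100, control=pm10)    # each treatment vs the PM10 control
print(res.pvalue)                               # significant if p < 0.05
```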
PM10 Induces Cytotoxicity and PGE 2 Release of Keratinocytes
To examine whether airborne PM10 causes cytotoxicity and inflammation, HaCaT cells were exposed to PM10 in vitro. PM10 treatments at 100 to 400 µg mL−1 for 48 h decreased cell viability (Figure 1a). Conditioned cell culture media were used for the determination of PGE2. PGE2 production increased in the cells exposed to PM10 at 100 to 400 µg mL−1 for 48 h (Figure 1b).
Figure 1. Control cells were treated with phosphate-buffered saline. Data are presented as mean ± standard deviation (SD) (n = 4). All treatments were compared to the control using one-way analysis of variance (ANOVA) followed by Dunnett's test. * p < 0.05.
Effects of Marine Alga Extracts on PM10-induced Cytotoxicity
To identify the marine alga extracts that alleviated the cytotoxic effects of PM10, HaCaT cells were exposed to PM10 at 200 µg mL−1 with or without each alga extract at 50 µg mL−1 for 48 h. Of the 50 marine alga extracts, the extract of E. cava showed the most protective effects, followed by the extract of Hypnea charoides Lamouroux (Figure 2). E. cava extract was thus chosen for further study.

Figure 2. Effects of marine alga extracts on the viability of HaCaT keratinocytes exposed to PM10. Cells were treated with 200 µg mL−1 PM10 for 48 h in the presence or absence of each extract at 50 µg mL−1. Data are presented as a mean ± SD (n = 4). All treatments (50 extracts) were compared to the PM10 control using one-way ANOVA followed by Dunnett's test. * p < 0.05.
Effects of E. cava Extract on PM10-induced Cytotoxicity and PGE 2 Release
To examine the effects of E. cava extract on cell viability and inflammatory responses in HaCaT keratinocytes exposed to PM10, cells were treated with the extract at concentrations ranging from 25 to 100 µg mL −1 with or without 200 µg mL −1 PM10. E. cava extract decreased cell viability and increased PGE 2 release to a small degree at high concentrations but rescued the cell viability and attenuated the PGE 2 release stimulated by PM10 in a dose-dependent manner (Figure 3a-b). In additional experiments, cells were treated with 100 µg mL −1 of E. cava extract and concentrations of PM10 ranging from 25 to 100 µg mL −1 . NAC was also tested at 100 µg mL −1 as a positive control antioxidant. E. cava extract-treated cells demonstrated more cell viability than non-treated controls or NAC-treated positive control under various PM10-exposed conditions, although differences among the test, control, and positive control groups were not statistically significant (Figure 3c). E. cava extract significantly attenuated the PGE 2 release stimulated by different concentrations (25-100 µg mL −1 ) of PM10, while NAC showed an inhibitory effect only at 100 µg mL −1 PM10 (Figure 3d).
Figure 3. Effects of Ecklonia cava extract on the viability and PGE2 release of HaCaT keratinocytes exposed to PM10. Cells were treated with PM10 in the absence or presence of E. cava extract or N-acetyl cysteine (NAC) for 48 h for the viability assay (a,c) and for the PGE2 assay (b,d). Data are presented as a mean ± SD (n = 4). All treatments were compared to the PM10 control using one-way ANOVA followed by Dunnett's test. * p < 0.05.
Effects of E. cava Extract on the PM10-induced Gene Expression of the Enzymes Involved in the PGE2 Synthesis
Because PM10-induced PGE2 release was attenuated by E. cava extract, additional experiments were undertaken to determine the mRNA expression levels of COX-1, COX-2, mPGES-1, mPGES-2, and cPGES, the enzymes involved in the PGE2 synthesis [42]. PM10 at a concentration of 100 μg mL −1 increased the expression of COX-1 and COX-2 at the mRNA level, changes that were significantly attenuated by E. cava extract (100 μg mL −1 ) and NAC (100 μg mL −1 ) (Figure 4a-b). PM10 also increased the mRNA levels of mPGES-1 and mPGES-2 but did not increase cPGES mRNA ( Figure 4c-e). The PM10-induced increase of mPGES-2 mRNA was attenuated by E. cava extract (100 μg mL −1 ).
Figure 4. Effects of E. cava extract on the gene expression of enzymes involved in PGE2 synthesis in HaCaT keratinocytes exposed to PM10. Cells were treated with PM10 in the presence or absence of E. cava extract or NAC for 24 h for the mRNA assays of cyclooxygenase (COX)-1 (a), COX-2 (b), microsomal prostaglandin E2 synthase (mPGES)-1 (c), mPGES-2 (d), and cytosolic prostaglandin E2 synthase (cPGES) (e). Data are presented as a mean ± SD (n = 3). All treatments were compared to the PM10 control using one-way ANOVA followed by Dunnett's test. * p < 0.05; n.s., not significant.
Purification of Dieckol from E. cava
Dieckol is a major polyphenolic constituent of E. cava that exhibits antioxidant activity [33,43]. Dieckol was purified from E. cava extract through solvent fractionation and subsequent chromatography on a normal phase silica gel column and a reversed phase octadecyl silane column. The HPLC profile of purified dieckol is shown in Figure 5.
Effects of Dieckol on PM10-induced Cytotoxicity and PGE2 Release of Keratinocytes
Dieckol did not change the viability of the HaCaT cells at the tested concentrations up to 30 μM, but it showed toxic effects at concentrations above 100 μM (Figure 7a). In the subsequent experiments, dieckol was used at 10-30 μM, to remain within a non-toxic concentration range. Dieckol attenuated PGE2 release in keratinocytes exposed to PM10 in a dose-dependent manner, although it did not rescue cell viability (Figure 7b-c).
Figure 7. Effects of dieckol on the viability and PGE2 release of HaCaT keratinocytes exposed to PM10. Cells were treated with dieckol at varied concentrations for 48 h for the viability assay (a). Cells were treated with 100 µg mL−1 PM10 in the presence or absence of dieckol at the indicated concentrations for 48 h for the viability assay (b) and the PGE2 assay (c). Data are presented as a mean ± SD (n = 4). All treatments were compared to the PM10 control using one-way ANOVA followed by Dunnett's test. * p < 0.05.
Effects of Dieckol on the PM10-Induced ROS Production and the PM10-Induced Gene Expression of the Enzymes Involved in the PGE 2 Synthesis.
PM10 treatment of HaCaT cells increased ROS production, and this PM10-induced change was attenuated by dieckol (Figure 8a). In addition, dieckol attenuated the mRNA expression of COX-1, COX-2, and mPGES-1 induced by PM10 (Figure 8b-f).
Figure 8. Effects of dieckol on the production of reactive oxygen species (ROS) and the gene expression of enzymes involved in PGE2 synthesis in HaCaT keratinocytes exposed to PM10. Cells were treated with 100 µg mL−1 PM10 in the presence or absence of dieckol at the indicated concentrations for 30 min for the ROS assay (a), and for 24 h for the mRNA assays for COX-1 (b), COX-2 (c), mPGES-1 (d), mPGES-2 (e), and cPGES (f). Data are presented as a mean ± SD (n = 4 for a and n = 3 for others). All treatments were compared to the PM10 control using one-way ANOVA followed by Dunnett's test. * p < 0.05; n.s., not significant.
Protective Effects of Dieckol against PM10 in a 3D-reconstructed Skin Model
The protective effects of dieckol against PM10 were further studied using a 3D-reconstructed skin model (Figure 9a-d). The tissue sections stained with hematoxylin and eosin showed morphological differences between the control and PM10-treated tissues. PM10 treatment decreased the number of intact cells in the upper epidermal layer. PM10 tended to decrease the thickness of the epidermal layer, but the change was statistically insignificant. Dieckol itself did not induce significant morphological changes and partially attenuated the morphological changes induced by PM10.
Discussion
Marine algae have attracted increasing attention as a potential resource for cosmeceutical ingredients [34,45]. E. cava is a rich source of phlorotannins, a unique group of polyphenol compounds found in marine brown algae [25,46]. The total phenolic content of E. cava extract was estimated to be the highest of the 50 marine plants tested in our previous study [33]. In the present study, of the 50 marine alga extracts tested, E. cava extract was the most protective against PM10 toxicity in HaCaT keratinocytes. E. cava extract attenuated PGE2 production in cells exposed to varying concentrations of PM10 more effectively than NAC, a positive control antioxidant. Dieckol purified from E. cava extract also exhibited inhibitory activity against PM10-induced PGE2 production.
The synthesis of PGE2 begins with the production of arachidonic acid from membrane phospholipids by the enzymatic action of phospholipase A2, followed by the conversion of arachidonic acid to PGG2 and then to PGH2 by reactions catalyzed by COX-1 and COX-2 [42]. Both isoforms are present in many normal human tissues, and both isoforms are upregulated in a variety of pathological conditions [47]. PGE2 synthesis from PGH2 is catalyzed by mPGES-1, mPGES-2, and cPGES [48]. Of these isoforms, mPGES-1 is considered responsible for the increased PGE2 synthesis during inflammation [49].
Previous studies have shown that dieckol and phlorotannins-rich brown alga extracts attenuated PGE2 production and COX-2 expression in lipopolysaccharide (LPS)-stimulated RAW 264.7 murine macrophage cells [43], in LPS-stimulated murine BV2 microglia [50], and in UVB radiation-induced skin carcinogenesis in SKH-1 mice [51]. In the present study, PM10 increased the gene expression of both COX-1 and COX-2 in keratinocytes, and these PM-induced COX-1 and COX-2 expressions were ameliorated by E. cava extract and dieckol, as well as by NAC (positive control antioxidant). In addition, PM10 increased the expression of mPGES-1 and mPGES-2, and PM10-induced mPGES-2 expression was reduced by E. cava extract. Dieckol attenuated the expression of mPGES-1 stimulated by PM10. This suggests that the E. cava extract and dieckol can alleviate PM10-induced PGE2 production, at least partially, through the inhibition of COX-1, COX-2, mPGES-1, and/or mPGES-2 gene expression ( Figure 10). The present study showed that dieckol alleviated the PM-induced inflammatory responses of keratinocytes and PM-induced morphological changes in a 3D-reconstructed skin model. Future studies are warranted to examine clinical efficacy.
Although the composition of airborne PM differs depending on location, altitude, and season, it nearly always contains toxic components, such as heavy metals and polycyclic hydrocarbons, that exert pro-oxidative and pro-inflammatory activity in exposed tissues [52][53][54][55]. PM10 causes the production of ROS through the aryl hydrocarbon receptor/NADPH oxidase-dependent pathway [20]. The results in [56][57][58] and our recent study also suggested that dual oxidase 2 plays a critical role in ROS production in keratinocytes exposed to PM [59]. Thus, antioxidants have the potential to alleviate adverse skin reactions that arise from PM exposure [24].
In the previous study, pomegranate peel extract and punicalagin attenuated PM10-induced inflammatory monocytes adhesion to endothelial cells [55]. Epigallocatechin gallate derived from green tea and punicalagin reduced the PM10-induced expression of inflammatory cytokines, such as the tumor necrosis factor (TNF)-α, interleukin (IL)-1β, IL-6, and IL-8 [54]. Resveratrol and resveratryl triacetate inhibited the expression of PM10-induced IL-6 in keratinocytes [60]. In addition, E. cava extract attenuated cellular lipid peroxidation in keratinocytes induced by PM10 [33]. Dieckol, one of the major phlorotannins of E. cava, attenuated cellular lipid peroxidation and the expression of inflammatory cytokines TNF-α, IL-1β, IL-6, and IL-8 at the mRNA and protein levels in human epidermal keratinocytes exposed to PM10 [33]. Taken together, data from these previous studies and the current study suggest that polyphenol-rich plant extracts and individual polyphenolic compounds can mitigate oxidative stress and inflammation in the skin that occur as a result of exposure to airborne PM10.
It was previously shown that PM-induced cellular ROS production was attenuated by various antioxidants, such as NAC, apocynin, resveratrol, resveratryl triacetate, punicalagin, (−)-epigallocatechin gallate, and eupafolin [22,54,55,60]. In the present study, dieckol was shown to attenuate the PM-induced ROS production in keratinocytes. PM-derived ROS can lead to the activation of the mitogen activated protein kinase (MAPK) family including extracellular signal regulated kinase (ERK), c-Jun N-terminal kinase (JNK), and p38 kinase, and the stimulation of nuclear factor-kappa B (NF-κB) signaling pathway, leading to the activation of redox-sensitive transcription factors activator protein 1 (AP-1) and NF-κB [61,62]. The expression of COX-2 mRNA is regulated by several transcription factors including the cyclic-AMP response element binding protein, NF-κB and the CCAAT-enhancer binding protein, which are activated by various MAPKs and other protein kinases [63]. PM stimulates MAPKs such as ERK, p38 and JNK in keratinocytes which ultimately induce the expression of COX-2 [20,64]. Therefore, a variety of redox-sensitive signaling pathways are involved in the regulation of PGE 2 production in response to PM, and antioxidants contained in E. cava, such as dieckol, are assumed to interfere with these multiple signaling pathways, attenuating PM-induced PGE 2 production. Further studies are needed to verify this notion and to examine in vivo efficacy of dieckol.
Conclusions
In conclusion, this study demonstrated that airborne PM10 stimulated COX-1, COX-2, mPGES-1, and mPGES-2 gene expression, and thereby PGE2 production, in keratinocytes. E. cava extract and dieckol were shown to alleviate PM10-induced PGE2 production through the inhibition of gene expression of COX-1, COX-2, mPGES-1, and/or mPGES-2. E. cava extract and dieckol are potentially useful natural cosmetic ingredients for counteracting the pro-inflammatory effects of airborne PM on the skin.
"Biology"
] |
A Central Limit Theorem for Weighted Averages of Spins in the High Temperature Region of the Sherrington-Kirkpatrick Model
In this paper we prove that, in the high temperature region of the Sherrington-Kirkpatrick model, for a typical realization of the disorder the weighted average of spins Σ_{i≤N} t_i σ_i will be approximately Gaussian provided that max_{i≤N} |t_i| is small, where the weights are normalized by Σ_{i≤N} t_i² = 1.
Introduction.
Consider a space of configurations Σ_N = {−1, +1}^N. A configuration σ ∈ Σ_N is a vector (σ_1, ..., σ_N) of spins σ_i, each of which can take the values ±1. Consider an array (g_ij)_{i,j≤N} of i.i.d. standard normal random variables that is called the disorder. Given parameters β > 0 and h ≥ 0, let us define a Hamiltonian on Σ_N by

H_N(σ) = (β/√N) Σ_{1≤i<j≤N} g_ij σ_i σ_j + h Σ_{i≤N} σ_i,

and define the Gibbs' measure G on Σ_N by

G({σ}) = exp(H_N(σ))/Z_N,  where  Z_N = Σ_{σ∈Σ_N} exp(H_N(σ)).

The normalizing factor Z_N is called the partition function. Gibbs' measure G is a random measure on Σ_N since it depends on the disorder (g_ij). The parameter β physically represents the inverse of the temperature, and in this paper we will consider only the (very) high temperature region of the Sherrington-Kirkpatrick model, which corresponds to β < β_0 (1.1) for some small absolute constant β_0 > 0. The actual value of β_0 is not specified here but, in principle, it can be determined through careful analysis of all the arguments of this paper and references to other papers. For any n ≥ 1 and a function f on the product space (Σ_N^n, G^⊗n), ⟨f⟩ will denote its expectation with respect to G^⊗n:

⟨f⟩ = Σ_{Σ_N^n} f(σ^1, ..., σ^n) G^⊗n({(σ^1, ..., σ^n)}).
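Although the paper's results are analytic, the Gibbs measure just defined is easy to sample numerically for a fixed realization of the disorder. The following is a minimal heat-bath (Glauber) sampler, assuming the standard SK Hamiltonian written above; all parameter values are illustrative choices of ours.

```python
import numpy as np

def sample_sk_gibbs(couplings, beta, h, n_sweeps, rng):
    """Heat-bath (Glauber) sampler for the Gibbs measure
    G(s) ~ exp( (beta/sqrt(N)) sum_{i<j} g_ij s_i s_j + h sum_i s_i )."""
    N = couplings.shape[0]
    s = rng.choice([-1.0, 1.0], size=N)
    for _ in range(n_sweeps):
        for i in range(N):
            # local field on spin i; couplings[i, i] = 0, so s_i itself drops out
            field = beta / np.sqrt(N) * couplings[i] @ s + h
            # P(s_i = +1 | rest) = 1 / (1 + exp(-2*field))
            s[i] = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * field)) else -1.0
    return s

rng = np.random.default_rng(0)
N, beta, h = 200, 0.2, 0.1                    # small beta: high temperature region
g = np.triu(rng.standard_normal((N, N)), 1)
C = g + g.T                                    # one fixed realization of the disorder

sigma = sample_sk_gibbs(C, beta, h, n_sweeps=200, rng=rng)
print(sigma[:10])
```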
In this paper we will prove the following result concerning the high temperature region (1.1). Given a vector $(t_1, \ldots, t_N)$ such that $\sum_{i \leq N} t_i^2 = 1$, (1.2) let us consider a random variable on $(\Sigma_N, G)$ defined as $X = \sum_{i \leq N} t_i \sigma_i$. (1.3) The main goal of this paper is to show that in the high temperature region (1.1) the following holds. If $\max_{i \leq N} |t_i|$ is small then for a typical realization of the disorder $(g_{ij})$ the random variable $X$ is approximately a Gaussian r.v. with mean $\langle X \rangle$ and variance $\langle X^2 \rangle - \langle X \rangle^2$. By a "typical realization" we understand that the statement holds on a set of measure close to one. This result is the analogue of a very classical result for independent random variables. Namely, given a sequence of independent random variables $\xi_1, \ldots, \xi_N$ satisfying some integrability conditions, the random variable $\xi_1 + \cdots + \xi_N$ will be approximately Gaussian if $\max_{i \leq N} \mathrm{Var}(\xi_i) / \sum_{i \leq N} \mathrm{Var}(\xi_i)$ is small (see, for example, [5]). In particular, if $\sigma_1, \ldots, \sigma_N$ in (1.3) were i.i.d. Bernoulli random variables then $X$ would be approximately Gaussian provided that $\max_{i \leq N} |t_i|$ is small. It is important to note at this point that the main claim of this paper is in some sense a well-expected result, since it is well known that in the high temperature region the spins become "decoupled" in the limit $N \to \infty$. For example, Theorem 2.4.10 in [12] states that, for a fixed $n \geq 1$ and a typical realization of the disorder $(g_{ij})$, the distribution $G^{\otimes n}$ becomes a product measure when $N \to \infty$. Thus, in essence the claim that $X$ in (1.3) is approximately Gaussian is a central limit theorem for weakly dependent random variables. However, the entire sequence $(\sigma_1, \ldots, \sigma_N)$ is a much more complicated object than a fixed finite subset $(\sigma_1, \ldots, \sigma_n)$, and some unexpected complications arise that we will try to describe after we state our main result, Theorem 1 below.
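The independent-spin analogue cited above is easy to check numerically. In the hypothetical sketch below (our own illustration with made-up weight vectors), the spins are i.i.d. $\pm 1$, the weights are normalized so that (1.2) holds, and the excess kurtosis serves as a crude measure of non-Gaussianity: it is near 0 for flat weights and far from 0 when a single weight dominates.

```python
# Classical CLT illustration for weighted sums of i.i.d. +/-1 spins.
# This is the independent case only, not the SK model.
import numpy as np

rng = np.random.default_rng(1)

def weighted_spin_samples(t, n_samples=20_000):
    t = np.asarray(t, dtype=float)
    t = t / np.linalg.norm(t)                       # enforce sum_i t_i^2 = 1
    spins = rng.choice([-1.0, 1.0], size=(n_samples, len(t)))
    return spins @ t                                # X = sum_i t_i sigma_i

flat = weighted_spin_samples(np.ones(400))          # max|t_i| = 0.05
spiky = weighted_spin_samples([3.0] + [0.1] * 399)  # one dominant weight

for name, x in [("flat", flat), ("spiky", spiky)]:
    excess_kurtosis = np.mean(x**4) / np.mean(x**2) ** 2 - 3.0  # 0 if Gaussian
    print(name, "var:", round(float(x.var()), 3),
          "excess kurtosis:", round(float(excess_kurtosis), 3))
```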
Instead of dealing with the random variable $X$ we will look at its symmetrized version $Y = X - X'$, where $X'$ is an independent copy of $X$. If we can show that $Y$ is approximately Gaussian then, obviously, $X$ will also be approximately Gaussian. The main reason to consider a symmetrized version of $X$ is very simple: it makes it much easier to keep track of numerous indices in all the arguments below, even though it would be possible to carry out similar arguments for the centered version $X - \langle X \rangle$.
In order to show that, for a typical realization $(g_{ij})$ and small $\max_{i \leq N} |t_i|$, $Y$ is approximately Gaussian with mean 0 and variance $\langle Y^2 \rangle$, we will proceed by showing that its moments behave like the moments of a Gaussian random variable, i.e.
where $a(l) = E g^l$ for a standard normal random variable $g$. The moments of the standard normal random variable are characterized by the recursive formulas $a(0) = 1$, $a(1) = 0$ and $a(l) = (l - 1)a(l - 2)$. Let us define two sequences $(\sigma^{1(l)})_{l \geq 0}$ and $(\sigma^{2(l)})_{l \geq 0}$ of jointly independent random variables with Gibbs' distribution $G$. We will assume that all the indices $1(l)$ and $2(l)$ are different, and one can think of $\sigma^{1(l)}$ and $\sigma^{2(l)}$ as different coordinates of the infinite product space $(\Sigma_N^{\infty}, G^{\otimes \infty})$. Let us define a sequence $S_l$ by $S_l = \sum_{i \leq N} t_i \bigl( \sigma_i^{1(l)} - \sigma_i^{2(l)} \bigr)$. (1.5) In other words, the $S_l$ are independent copies of $Y$.
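As a quick sanity check (ours, not the paper's), the recursion $a(0) = 1$, $a(1) = 0$, $a(l) = (l - 1)a(l - 2)$ can be compared against Monte Carlo estimates of $E g^l$:

```python
# Gaussian moment recursion vs. Monte Carlo estimates of E[g^l].
import numpy as np

def a(l):
    # a(l) = E[g^l] for g ~ N(0,1): 1, 0, 1, 0, 3, 0, 15, 0, 105, ...
    if l == 0:
        return 1.0
    if l == 1:
        return 0.0
    return (l - 1) * a(l - 2)

g = np.random.default_rng(2).standard_normal(2_000_000)
for l in range(9):
    print(l, "recursion:", a(l), "Monte Carlo:", round(float(np.mean(g**l)), 2))
```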
The following Theorem is the main result of the paper.
Remark. Theorem 1 answers the question raised in Research problem 2.4.11 in [12]. Theorem 1 easily implies (1.7): indeed, for the first and second terms on the right hand side we can use independent copies to represent the powers of $\langle \cdot \rangle$ and then apply Theorem 1, and combining the resulting equations proves (1.7). Now one can show that for $N \to \infty$ and $\max_{i \leq N} |t_i| \to 0$ the characteristic function of $(S_1, \ldots, S_n)$ can be approximated by the characteristic function of $n$ independent Gaussian random variables with variance $\langle S_1^2 \rangle$, for $(g_{ij})$ on a set of measure converging to 1. Given (1.7) this should be a mere exercise and we omit the details. This, of course, implies that $(S_1, \ldots, S_n)$ are approximately independent Gaussian random variables with respect to the measure $G^{\otimes \infty}$ and, in particular, $S_1 = \sum_{i \leq N} t_i (\sigma_i^1 - \sigma_i^2)$ is approximately Gaussian with respect to the measure $G^{\otimes 2}$.
Theorem 1 looks very similar to the central limit theorem for the overlap $R_{1,2} = \frac{1}{N} \sum_{i \leq N} \sigma_i^1 \sigma_i^2$, where $\sigma^1, \sigma^2$ are two independent copies of $\sigma$ (see, for example, Theorem 2.3.9 and Section 2.7 in [12]). In fact, in our proofs we follow the main ideas and techniques of Sections 2.4 - 2.7 in [12]. However, the proof of the central limit theorem for $X$ in (1.3) turned out to be at least an order of magnitude more technically involved than the proof of the central limit theorem for the overlap $R_{1,2}$ (at least we do not know any easier proof). One of the main reasons why the situation here gets more complicated is the absence of symmetry. Let us try to explain this informally. When dealing with the overlaps $R_{i,j}$ one considers quantities of the type $E\bigl\langle \prod_{i<j} R_{i,j}^{k_{i,j}} \bigr\rangle$ (1.8) and approximates them by other similar quantities using the cavity method. In the cavity method one approximates the Gibbs' measure $G$ by the measure with the last coordinate $\sigma_N$ independent of the other coordinates, and this is achieved by a proper interpolation between these measures. As a result, the average (1.8) is approximated by the Taylor expansion along this interpolation, and at the second order of approximation one gets terms that have "smaller complexity" and a term that is a factor of (1.8); one can then solve for (1.8) and proceed by induction on the "complexity". The main reason this trick works is the symmetry of the expression (1.8) with respect to all coordinates of the configurations. Unfortunately, this no longer happens in the setting of Theorem 1, due to the lack of symmetry of $X$ in the coordinates of $\sigma$. Instead, we will have to consider both terms on the left hand side of (1.6), and approximate both of them using the cavity method. The technical difficulty of the proof comes from the fact that at the second order of approximation it is not immediately obvious which terms corresponding to the two expressions in (1.9) cancel each other up to the correct error terms, and this requires some work. Moreover, in order to obtain the correct error terms, we will need to make two coordinates $\sigma_N$ and $\sigma_{N-1}$ independent of the other coordinates and, to simplify the computations and avoid using the cavity method on each coordinate separately, we will develop the cavity method for two coordinates. Finally, another difficulty that arises from the lack of symmetry is that, unlike in the case of the overlaps $R_{i,j}$, we were not able to compute explicitly the expectation $\langle X \rangle$ and the variance $\langle X^2 \rangle - \langle X \rangle^2$ in terms of the parameters of the model.
Preliminary results.
We will first state several results from [12] that will be used constantly throughout the paper. Lemmas 1 through 6 below are either taken directly from [12] or almost identical to some of the results in [12] and, therefore, we will state them without proof. Let us consider a standard normal r.v. $z$ independent of the disorder $(g_{ij})$, and let $q$ be the unique solution of the equation $q = E \tanh^2(\beta z \sqrt{q} + h)$. (2.1) For $0 \leq t \leq 1$ let us consider the interpolating Hamiltonian $-H_{N,t}(\sigma)$ (2.2) and define Gibbs' measure $G_t$ and expectation $\langle \cdot \rangle_t$ similarly to $G$ and $\langle \cdot \rangle$ above, only using the Hamiltonian $-H_{N,t}(\sigma)$. For any $n \geq 1$ and a function $f$ on $\Sigma_N^n$ let us define $\nu_t(f) = E \langle f \rangle_t$, and write $\nu(f) = \nu_1(f)$. The case $t = 1$ corresponds to the Hamiltonian $-H_N(\sigma)$, and the case $t = 0$ has the very special property that the last coordinate $\sigma_N$ is independent of the other coordinates, which is the main idea of the cavity method (see [12]). (The cavity method is a classical and fruitful idea in physics, but in this paper we refer to a specific version of the cavity method invented by Talagrand.) Given indices $l, l'$, let us define the overlaps $R_{l,l'} = \frac{1}{N} \sum_{i \leq N} \sigma_i^l \sigma_i^{l'}$ and $R^-_{l,l'} = \frac{1}{N} \sum_{i \leq N-1} \sigma_i^l \sigma_i^{l'}$. The following lemma holds.
Lemma 1 For $0 \leq t < 1$ and for all functions $f$ on $\Sigma_N^n$ we have (2.3). This is Proposition 2.4.5 in [12].
Roughly speaking, these two results explain the main idea behind the key methods of [12]: the cavity method and the smart path method. The Hamiltonian (2.2) represents a "smart path" between the measures $G$ and $G_0$, since along this path the derivative $\nu_t'(f)$ is small, because all terms in (2.3) contain a factor $R_{l,l'} - q$, which is small due to (2.4). The measure $G_0$ has a special coordinate (the cavity) $\sigma_N$ that is independent of the other coordinates, which in many cases makes it easier to analyze $\nu_0(f)$.
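Since (2.1) is a fixed-point equation for $q$, it can be solved numerically. The sketch below is our own illustration, assuming the form $q = E \tanh^2(\beta z \sqrt{q} + h)$ stated above, with Gauss-Hermite quadrature for the Gaussian expectation and arbitrary parameter values.

```python
# Fixed-point iteration for q = E tanh^2(beta * z * sqrt(q) + h), z ~ N(0,1).
# Illustrative sketch; the parameter values are arbitrary.
import numpy as np

def solve_q(beta, h, tol=1e-12, max_iter=10_000):
    # Gauss-Hermite nodes/weights for the weight exp(-x^2/2)
    nodes, weights = np.polynomial.hermite_e.hermegauss(80)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize so sum(weights) = 1
    q = 0.5
    for _ in range(max_iter):
        q_new = float(weights @ np.tanh(beta * nodes * np.sqrt(q) + h) ** 2)
        if abs(q_new - q) < tol:
            break
        q = q_new
    return q

print(solve_q(beta=0.3, h=0.2))   # small beta: the high temperature region
```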
These two lemmas imply the following Taylor expansion for $\nu(f)$.
Lemma 3
For a function $f$ on $\Sigma_N^n$ we have (2.5). Proof. The proof is identical to Proposition 2.5.3 in [12].
Cavity method with two coordinates. In this paper we will also use another case of the cavity method, with the two coordinates $\sigma_N$, $\sigma_{N-1}$ playing the special role. In this new case we will consider a "smart path" that makes both coordinates $\sigma_N$ and $\sigma_{N-1}$ independent of the other coordinates and of each other. This is done by slightly modifying the definition of the Hamiltonian (2.2). Since it will always be clear from the context which "smart path" we are using, we will abuse notation and use the same notation as in the case of the Hamiltonian (2.2).
Let us consider the analogous construction, where $z_1$, $z_2$ are standard normal r.v.'s independent of the disorder $(g_{ij})$.
For $0 \leq t \leq 1$ let us now consider the Hamiltonian (2.6) and define Gibbs' measure $G_t$ and expectation $\langle \cdot \rangle_t$ similarly to $G$ and $\langle \cdot \rangle$ above, only using the Hamiltonian (2.6). For any $n \geq 1$ and a function $f$ on $\Sigma_N^n$ let us define $\nu_t(f) = E \langle f \rangle_t$ as before. We will make one distinction in the notation between the cases (2.2) and (2.6): namely, for $t = 0$ in the case of the Hamiltonian (2.6) we will denote the corresponding quantities by $\nu_{00}(\cdot)$ and $\langle \cdot \rangle_{00}$. It is clear that with respect to the Gibbs' measure $G_0$ the last two coordinates $\sigma_N$ and $\sigma_{N-1}$ are independent of the other coordinates and of each other. Given indices $l, l'$, let us define $R^{=}_{l,l'} = \frac{1}{N} \sum_{i \leq N-2} \sigma_i^l \sigma_i^{l'}$. The following lemma is the analogue of Lemma 1 for the case of the Hamiltonian (2.6).
Lemma 4 Consider ν t (·) that corresponds to the Hamiltonian (2.6). Then, for 0 ≤ t < 1, and for all functions f on Σ n N we have Proof. The proof repeats the proof of Proposition 2.4.5 in [12] almost without changes.
Lemma 5 There exist $\beta_0 > 0$ and $L > 0$ such that for $\beta < \beta_0$ and for any $k \geq 1$ the bounds (2.12) and (2.13) hold. The second inequality is similar to (2.4) and follows easily from it, since $|R_{1,2} - R^{=}_{1,2}| \leq 2/N$ (see, for example, the proof of Lemma 2.5.2 in [12]). The first inequality follows easily from Lemma 4 (see, for example, Proposition 2.4.6 in [12]). Lemma 3 above also holds in the case of the Hamiltonian (2.6).
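For concreteness, assuming (as the two-coordinate cavity setup suggests) that $R^{=}_{1,2}$ denotes the overlap computed on the first $N - 2$ coordinates but still normalized by $N$, the bound $|R_{1,2} - R^{=}_{1,2}| \leq 2/N$ used above is immediate:

```latex
% Assuming R^{=}_{1,2} = (1/N) \sum_{i \le N-2} \sigma^1_i \sigma^2_i:
R_{1,2}-R^{=}_{1,2}
  =\frac{1}{N}\sum_{i\le N}\sigma^1_i\sigma^2_i
   -\frac{1}{N}\sum_{i\le N-2}\sigma^1_i\sigma^2_i
  =\frac{\sigma^1_{N-1}\sigma^2_{N-1}+\sigma^1_N\sigma^2_N}{N},
\qquad
\bigl|R_{1,2}-R^{=}_{1,2}\bigr|\le\frac{2}{N}.
```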
Lemma 6
For a function $f$ on $\Sigma_N^n$ we have the analogue of (2.5). The proof is identical to the proof of Proposition 2.5.3 in [12].
To prove Theorem 1 we will need several preliminary results. First, it will be very important to control the size of the random variables S l and we will start by proving exponential integrability of S l .
Theorem 2 There exist $\beta_0 > 0$ and $L > 0$ such that for all $\beta \leq \beta_0$ and for all $k \geq 1$ the moment bound (2.14) holds. The statement of Theorem 2 is, obviously, equivalent to an exponential integrability bound on $S_l$ for large enough $L$.
Proof. The proof mimics the proof of Theorem 2.5.1 in [12] (stated in Lemma 2 above). We will prove Theorem 2 by induction over $k$. Our induction assumption will be the following: there exist $\beta_0 > 0$ and $L > 0$ such that (2.15) holds. Let us start by proving this statement for $k = 1$. We will show that $\nu_0(\bar\sigma_1 \bar\sigma_N) = 0$ and $\nu_0'(\bar\sigma_1 \bar\sigma_N) = O(N^{-1})$. The fact that $\nu_0(\bar\sigma_1 \bar\sigma_N) = 0$ is obvious, since for the measure $G_0^{\otimes 2}$ the last coordinates $\sigma_N^1, \sigma_N^2$ are independent of the first $N - 1$ coordinates. To bound $\nu_0'(\bar\sigma_1 \bar\sigma_N)$, note that for a fixed disorder (the r.v.'s $g_{ij}$ and $z$) the last coordinates $\sigma_N^i$, $i \leq 4$, are independent of the first $N - 1$ coordinates and independent of each other, so we can write out the derivative term by term. First of all, the first and the last terms are equal to zero because $\langle \sigma_N^1 - \sigma_N^2 \rangle_0 = 0$. Next, the remaining terms are handled by symmetry, and we get the required bound. In order to avoid introducing new notation we notice that it is equivalent to proving the analogous estimate for the $(N - 1)$-coordinate system, where $q^-$ is the solution of (2.1) with $\beta$ substituted with $\beta^-$. Lemma 2.4.15 in [12] states that for $\beta \leq \beta_0$, $|q - q^-| \leq L N^{-1}$ and, therefore, the above inequality would imply that $\nu_0(\bar\sigma_1 (R^-_{1,3} - q)) = O(N^{-1})$. To prove (2.16) we notice that by symmetry $\nu(\bar\sigma_1 (R_{1,3} - q)) = \nu(\bar\sigma_N (R_{1,3} - q))$, and we apply (2.5), which in this case implies the required estimate, where in the last inequality we used (2.4). This finishes the proof of (2.15) for $k = 1$. It remains to prove the induction step. Let us define $\nu_i(\cdot)$ in the same way we defined $\nu_0(\cdot)$, only now the $i$-th coordinate plays the same role as the $N$-th coordinate played for $\nu_0$. Using Proposition 2.4.7 in [12] we get that, for any $\tau_1, \tau_2 > 1$ such that $1/\tau_1 + 1/\tau_2 = 1$, the corresponding Hölder-type bound holds. Let us take $\tau_1 = (2k + 2)/(2k + 1)$ and $\tau_2 = 2k + 2$. By (2.4) we can estimate the second factor; for these parameters the induction step is not needed, since this inequality is precisely what we are trying to prove. Thus, without loss of generality, we can assume the corresponding bound on $\nu_i$. Combining this with (2.18), (2.19) and (2.20), and plugging the resulting estimate into (2.17), we get the following. One can write
(2.22)
First of all, by the induction hypothesis (2.15) we have the required bound on the first term, since this is exactly (2.15) for the parameters $N - 1$ and $\beta^- = \beta \sqrt{1 - 1/N}$, and since $\sum_{j \neq i} t_j^2 \leq 1$. Next, by Proposition 2.4.6 in [12] we have the bound on the second term, where in the last inequality we again used (2.15). Thus, (2.21) and (2.22) imply the desired estimate for $L$ large enough. This completes the proof of the induction step and of Theorem 2.
Remark. Theorem 2 and Lemmas 2 and 5 will often be used implicitly in the proof of Theorem 1 in the following way. For example, if we consider the sequence $S_l$ defined in (1.5), then by Hölder's inequality (first with respect to $\langle \cdot \rangle$ and then with respect to $E$) one can write the corresponding chain of bounds, where in the last step we apply Theorem 2 and Lemma 2. Similarly, when we consider a function that is a product of factors of the type $R_{l,l'} - q$ or $S_l$, we will simply say that each factor $R_{l,l'} - q$ contributes $O(N^{-1/2})$ and each factor $S_l$ contributes $O(1)$.
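One hypothetical instance of this bookkeeping (our own worked example: we write the lost display of Theorem 2 in the form $\nu(S_l^{2k}) \leq (Lk)^k$ and the bound (2.4) in the form $\nu((R_{l,l'} - q)^2) \leq L/N$, both consistent with how they are used in this paper) reads:

```latex
% Cauchy-Schwarz (Holder with exponent 2), first for <.> and then for E:
\nu\bigl(S_l^{k}\,(R_{l,l'}-q)\bigr)
  \le \nu\bigl(S_l^{2k}\bigr)^{1/2}\,
      \nu\bigl((R_{l,l'}-q)^{2}\bigr)^{1/2}
  \le (Lk)^{k/2}\Bigl(\frac{L}{N}\Bigr)^{1/2}
  = O\bigl(N^{-1/2}\bigr).
```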
The following result plays a central role in the proof of Theorem 1. We consider a function $\varphi = \prod_{l=1}^{n} S_l^{q_l}$, where the $S_l$ are defined in (1.5) and the $q_l$ are arbitrary natural numbers, and we consider the quantity $\nu\bigl((R_{l,l'} - q)(R_{m,m'} - q)\varphi\bigr)$.
We will show that this quantity essentially does not depend on the choice of the pairs $(l, l')$ and $(m, m')$ or, more accurately, that it depends only on their joint configuration. Quantities of this type will appear when one considers the second derivative of $\nu(\varphi)$, after two applications of Lemma 1 or Lemma 4, and we will be able to cancel some of these terms up to a smaller order approximation.
Proof. The proof is based on the following observation. Given $(l, l')$, consider the quantities (2.24), where $\bar\sigma = (\bar\sigma_i)_{i \leq N}$. One can express $R_{l,l'} - q$ as $R_{l,l'} - q = T_{l,l'} + T_l + T_{l'} + T$. (2.25) The joint behavior of the quantities (2.24) was completely described in Sections 6 and 7 of [12]. Our main observation here is that under the restrictions on the indices made in the statement of Lemma 7 the function $\varphi$ will be "almost" independent of these quantities, and all the proofs in [12] can be carried out with some minor modifications. Let us consider the case when $(l, l') \neq (m, m')$ and $(p, p') \neq (r, r')$. Using (2.25) we can write $(R_{l,l'} - q)(R_{m,m'} - q)$ as the sum of terms of the following types: $T_{l,l'}T_{m,m'}$, $T_{l,l'}T_m$, $T_{l,l'}T$, $T_lT_m$, $T_lT$ and $TT$.
Similarly, we can decompose $(R_{p,p'} - q)(R_{r,r'} - q)$. The terms on the left hand side of (2.23) containing a factor $TT$ will obviously cancel out. Thus, we only need to prove that any other term multiplied by $\varphi$ will produce a quantity of order $O(\max |t_i| N^{-1})$. Let us consider, for example, the term $\nu(T_{l,l'}T_{m,m'}\varphi)$. To prove that $\nu(T_{l,l'}T_{m,m'}\varphi) = O(\max |t_i| N^{-1})$ we will follow the proof of Proposition 2.6.5 in [12] with some necessary adjustments. Let us consider indices $i(1), i(2), i(3), i(4)$ that are not equal to any of the indices that appear in $T_{l,l'}$, $T_{m,m'}$ or $\varphi$. Then we can write the sum (2.26). Let us consider one term in this sum, for example (2.27); then we can decompose (2.27) as in (2.28), where $R_1$ is the sum of terms of the first type, with $\varphi_j^- = \prod_{l=1}^{n} (S_l^-)^{q_l} / S_j^-$, and $R_2$ is the sum of terms of the second type, bounded using Theorem 2 and Lemma 2. To bound $R_1^j$ we notice that $\nu_0(R_1^j) = 0$ and, moreover, $\nu_0'(R_1^j) = O(N^{-1})$, since by (2.3) each term in the derivative will have another factor $R^-_{l,l'} - q$. Therefore, using (2.5) we get $\nu(R_1^j) = O(N^{-1})$. The second term in (2.28) will have order $O(N^{-3/2})$, since one can again apply (2.5). Thus the last two lines in (2.28) will be of that order. To estimate the first term in (2.28) we apply Proposition 2.6.3 in [12]. Now, using a decomposition similar to (2.27), (2.28) one can easily obtain the analogous estimates and, combining all the estimates, the term (2.27) takes the required form. All other terms on the right-hand side of (2.26) can be written in exactly the same way, by using the cavity method in the corresponding coordinate. For small enough $\beta$, e.g. $L\beta^2 \leq 1/2$, this implies that $\nu(T_{l,l'}T_{m,m'}\varphi) = O(\max |t_i| N^{-1})$. To prove (2.23) in the case when $(l, l') \neq (m, m')$ and $(p, p') \neq (r, r')$, it remains to estimate all the other terms produced by the decomposition (2.25), and this is done by following the proofs of the corresponding results in Section 2.6 of [12]. The case when $(l, l') = (m, m')$ and $(p, p') = (r, r')$ is slightly different. The decomposition of $(R_{l,l'} - q)^2$ using (2.25) will produce new terms $\nu(T_{l,l'}^2\varphi)$ and $\nu(T_l^2\varphi)$, which are not small but, up to terms of order $O(\max |t_i| N^{-1})$, will be equal to the corresponding terms produced by the decomposition of $(R_{p,p'} - q)^2$. To see this, once again, one should follow the proofs of the corresponding results in Section 2.6 of [12] with minor changes.
Proof of Theorem 1
Theorem 1 is obvious if at least one $k_l$ is odd, since in this case the left hand side of (1.6) will be equal to 0. We will assume that all $k_l$ are even and, moreover, that at least one of them is greater than 2, say $k_1 \geq 4$. Since $a(l) = (l - 1)a(l - 2)$, in order to prove (1.6) it is, obviously, enough to prove (3.1). We will try to analyze and compare the terms on the left hand side, written out as (3.2) and (3.3). From now on we will carefully analyze the terms in (3.2) in several steps, and at each step we will notice that one of two things happens: (a) the term produced at the same step of our analysis carried out for (3.3) is exactly the same up to a constant $k_1 - 1$; (b) the term is "small", meaning that after combining all the steps one would get something of order $O(\max |t_i|)$.
Let us look at one term in (3.2) and (3.3). If we define $S_l^-$ by the corresponding equation, then, first of all, $\nu_0(\mathrm{III}) = \nu_0(\mathrm{VI}) = 0$ and, therefore, we can apply (2.5); next, again using (2.5), the contribution of the terms II and V in (3.1) will cancel out, which is the first appearance of case (a) mentioned above. The terms of order $O(t_N^2 + t_N N^{-1/2})$, when plugged back into (3.2) and (3.3), will produce a contribution of order $O(\max |t_i|)$. Here we, of course, assume that a similar analysis is carried out for the $i$-th term in (3.2) and (3.3), with the only difference that the $i$-th coordinate plays the special role in the definition of $\nu_0$. We now proceed to analyze the terms I and IV. If we define $S_l^=$ by the corresponding equation, then I and IV can be expanded, where $R_{23}$ is the sum of terms of one type for some (not important here) powers $q_l$, and where $R_3$ is the sum of terms of another type; similarly for the other expansion, where $\bar{R}_{23}$ is the sum of terms of the same type for some (not important here) powers $q_l$, and where $\bar{R}_3$ is the sum of the corresponding terms. Indeed, one needs to note that $\nu_{00}(R_{23}) = 0$ and, using Lemma 4, $\nu_{00}'(R_{23}) = O(N^{-1})$, since each term produced by (2.9) will have a factor $\langle \sigma^l_{N-1} \rangle_{00} = 0$, each term produced by (2.10) will have a factor $\langle \sigma^1_N \rangle_{00} = 0$, and each term produced by (2.11) will be of order $O(N^{-1})$. Obviously, $\nu_{00}(R_{1l}) = 0$. To show that $\nu_{00}'(R_{1l}) = 0$, let us first note that the terms produced by (2.9) will contain a factor $\langle \sigma^l_{N-1} \rangle_{00} = 0$, the terms produced by (2.10) will contain a factor $\langle \sigma^1_N \rangle_{00} = 0$, and the terms produced by (2.11) will contain a factor $\langle (S_1^=)^{k_1 - 1} \rangle_{00} = 0$, since $k_1 - 1$ is odd and $S_1^=$ is symmetric. For the second derivative we will have different types of terms produced by a combination of (2.9), (2.10) and (2.11). The terms produced by using (2.11) twice will have order $O(N^{-2})$; the terms produced by using (2.11) and either (2.10) or (2.9) will have order $O(N^{-3/2})$, since the factor $R^=_{l,l'} - q$ will produce $N^{-1/2}$; the terms produced by (2.9) and (2.9), or by (2.10) and (2.10), will be equal to 0 since they will contain the factors $\langle \sigma^l_{N-1} \rangle_{00} = 0$ and $\langle \sigma^1_N \rangle_{00} = 0$ correspondingly. Finally, let us consider the terms produced by (2.9) and (2.10), e.g.
It will obviously be equal to 0 unless $m, p \in \{1(1), 2(1)\}$ and $m', p' \in \{1(l), 2(l)\}$, since otherwise there will be a factor $\langle \sigma^1_N \rangle_{00} = 0$ or $\langle \sigma^l_N \rangle_{00} = 0$. All nonzero terms will cancel due to the following observation. Consider, for example, the term which corresponds to $m = 1(1)$, $m' = 1(l)$, $p = 2(1)$ and $p' = 2(l)$. There will also be a similar term that corresponds to $m = 2(1)$, $m' = 1(l)$, $p = 1(1)$ and $p' = 2(l)$ (the indices $m$ and $p$ are exchanged). These two terms will cancel since the product of the first two factors is unchanged and, making the change of variables $1(1) \to 2(1)$, $2(1) \to 1(1)$ in the last factor, the sign changes. We will prove only (3.6), since (3.7) is proved similarly. Since $\nu_{00}(R_{21}) = \nu_{00}'(R_{21}) = 0$, it is enough to prove (3.8). On both sides the terms produced by (2.10) will be equal to 0 and the terms produced by (2.11) will be of order $O(N^{-1})$; thus, it suffices to compare the terms produced by (2.9). For the left hand side the terms produced by (2.9) will be of the type $\langle (S_l^=)^{k_l} \rangle$ and will be equal to 0 unless $m \in \{1(1), 2(1)\}$. For a fixed $m'$ consider the sum of the two terms that correspond to $m = 1(1)$ and $m = 2(1)$, i.e.
For $m' \in \{1(2), 2(2), \ldots, 1(n), 2(n)\}$ this term will have a factor $\beta^2$, and for $m' = 2n + 1$ it will have a factor $-\beta^2(2n)$. Similarly, the derivative on the right hand side of (3.8) will consist of terms of the same type. For $m' \in \{1(1), 2(1), \ldots, 1(n), 2(n)\}$ this term will have a factor $\beta^2$, and for $m' = 2n + 3$ it will have a factor $-\beta^2(2n + 2)$. We will show next that (3.9) holds for any $m$ and $m'$. This implies, for example, that all the terms in the derivatives are "almost" independent of the index $m'$. This will also imply (3.8) since, given an arbitrary fixed $m'$, the left hand side and the right hand side of (3.8) will be the same up to terms of order $O(N^{-1})$. For simplicity of notation, instead of proving (3.9) we will prove (3.10). Let us write the left hand side as a sum and consider one term in this sum, for example $\nu(U_N)$. Using (2.5), one can write the first order approximation, since each term in the derivative already contains a factor $R^-_{l,l'} - q$. Thus, we can pass to $\nu_i$, where $\nu_i$ is defined in the same way as $\nu_0$, only now the $i$-th coordinate plays the same role as the $N$-th coordinate plays for $\nu_0$ ($= \nu_N$). Therefore, again using (2.13) and (2.12), one can write the corresponding representation; similarly one can write the right hand side. If we can finally show that the remaining terms coincide, this will prove (3.10) and (3.8). For example, if we consider $\nu_0(U_N)$, all other terms are equal to 0. Similarly, one can easily see the analogous identity for the other terms. This finishes the proof of (3.8).
The comparison of $R_{22}$ and $\bar{R}_{22}$ can be carried out in exactly the same way. | 7,208.6 | 2004-05-18T00:00:00.000 | [
"Computer Science"
] |
Challenges in deploying educational technologies for tertiary education in the carceral setting: Reconnecting or connecting?
With the COVID-19 pandemic, educators across the globe pivoted to using educational technologies such as lecture capture, video conferencing and discussion boards to reconnect with learners. For incarcerated learners, this was not an option due to the dearth of technologies and internet access in most correctional jurisdictions. As many tertiary education institutions leverage the affordances of digital technologies to increase access to learning and reconnect with learners, they are inadvertently excluding a large cohort, incarcerated learners. Prisons are typically technology poor and prohibit access, at least to some degree, to the internet. This paper examines some of the common challenges to the deployment of educational technology in prisons to reconnect with incarcerated learners. They are classified as physical challenges, operational challenges, attitudinal challenges, and human challenges.
Introduction
Prisons across Australasia deploy tertiary education for people in prison. This usually takes the form of vocational education, pre-tertiary programs and higher education (Barrow et al., 2019). Prisoner engagement with education serves many purposes. Among them, it keeps prisoners occupied and out of trouble (Rochealeau, 2013); it can help create vocational pathways that lead to employment on release (Rosmilawati & Darmawan, 2020); it promotes prosocial behaviour among participants (Farley & Pike, 2016); and, in and of itself, it reduces recidivism rates (Davis et al., 2013). During the prison lockdowns precipitated by the COVID-19 pandemic, prisoners were only able to leave their cells for an hour or two at a time, and education providers and volunteers were prohibited from visiting prison sites. This happened in Aotearoa New Zealand and in most, if not all, Australian jurisdictions. At the time of writing, educators are still not allowed on prison sites and educational programming has not resumed in many places. While educators 'outside the wire' pivoted to use educational technologies to connect with learners, these options were not available for incarcerated learners (Bradley & Davies, 2021). In Aotearoa New Zealand, Ara Poutama Aotearoa Department of Corrections did consider the rapid deployment of tablet technologies for prisoners, but it was ultimately decided that it would be detrimental to rush such an initiative without due consideration of content strategies and a rigorous hardware options analysis. Instead, they opted for the delivery of hard copy activity booklets and a database of resources that could be readily printed off by corrections officers. This strategy has been maintained over the two and a half years of the pandemic.
Especially in the wake of the pandemic, the tertiary landscape is moving away from hard copy materials and face-to-face delivery towards one that leverages the affordances of educational technologies (Bradley & Davies, 2021). Even face-to-face programs usually incorporate some online and digitally enabled aspects in a blended learning model, usually requiring access to a learning management system and the internet. Broadly speaking, prisons do not provide learners with access to the internet or to contemporary digital technologies, making it very difficult for prisoners to meaningfully participate in digital tertiary education (Willems, Farley & Garner, 2018), particularly during the prison lockdowns precipitated by the COVID-19 pandemic. Video conferencing technologies were introduced to facilitate virtual visits from whānau/family members (Dallaire, et al., 2021), and it was initially envisaged that these could be leveraged for educational purposes, but the heightened demand for family contact, staff availability and site logistics made this option untenable.
Educational technology in the carceral setting
Despite the many challenges, educational technologies have been unevenly deployed across many jurisdictions. The Making the Connection project led by the University of Southern Queensland (UniSQ) provided access to digital higher education by providing servers networked into existing computer labs and laptops that could be used in-cell, preloaded with course materials and requiring no access to the internet (Farley et al., 2015). The processes and technologies have since been incorporated into standard operating procedures for UniSQ. The Otago Corrections Facility in Aotearoa New Zealand has been using virtual reality to teach literacy and numeracy to people in prison. Again, this system required no internet (McLauchlan & Farley, 2019). Across the US and Europe, and more recently in New South Wales, digital tablets have been supplied to prisoners to provide access to entertainment, administrative functions such as appointment booking, and education. These technologies are often supplied free of charge to prisons, with the technology companies charging prisoners for access to materials and resources (McKay, 2022). Every prison in Aotearoa New Zealand has Secure Online Learning suites that offer learners access to a limited range of whitelisted websites focused on driver licensing and literacy and numeracy (McLauchlan & Farley, 2019).
Challenges to the deployment of educational technologies in the carceral space
There is wide agreement that education is beneficial to prisoners and that educational technologies could enhance the delivery of that education. It is also acknowledged that the biggest impacts come with the delivery of tertiary education (Davis, et al, 2013). Given the impacts of COVID-19, it has become necessary to use technologies to connect with learners who have often been restricted to learning from home (Christopoulos & Sprangers, 2021). Ideally, these same technologies would also be deployed with incarcerated learners, however, several challenges remain. Education in prisons is almost always facilitated by face-to-face delivery by external providers or by educators directly employed by correctional jurisdictions. The options for reconnection to learners via educational technologies for prison educators simply do not exist.
Physical challenges
Prisons are designed to be impenetrable fortresses made of concrete and cinder blocks. Many prisons across Australia and New Zealand date back to Victorian times when less thought was given to prison design and the movement of prisoners around a facility. Physical barriers to the outside are made obvious (Engstrom & Van Ginneken, 2022). Though this serves the security demands of a prison well, it is less amenable to the post-build installation of wi-fi to accommodate learner needs. Most modern builds are installing the capacity for either wired or wireless internet, even if the purpose of that has not been determined. Even where attempts have been made to create connectivity, in the wake of the COVID-19 pandemic, access to prisons by outside contractors has been prohibited. This leaves prisons unable to become 'connected' in a timely or efficient manner to facilitate learning.
Operational challenges
The first purpose of a prison is to keep the community safe and to contain those who are perceived to threaten that safety. People are imprisoned and allocated to a security classification and an individual may move up and down through those classifications, dependent on their behaviour and participation in rehabilitative programs (Tahamont & Frisch, 2019). People from different security classifications are not allowed to meet. In Aotearoa New Zealand, this is further complicated by the necessity to keep members of different gangs apart (Breetzke, et al., 2021). This results in a prison with numerous cohorts. For example, Christchurch Men's Prison has 28 different cohorts that can never meet. This makes movement around the facility difficult; these people cannot meet, even in a walkway or classroom. Moving learner cohorts to classrooms and computer labs becomes a significant challenge, resulting in no or little time in those spaces (Farley & Doyle, 2014). Isolation requirements exacerbate these issues with prisoners being shuffled around to accommodate new prisoners or those exhibiting symptoms suggestive of COVID-19 (Ayhan, et al., 2022).
Corrections officers are needed to accompany learners to computer labs, supervise the use of technologies by prisoners, and sometimes to act as intermediaries between learners and prison education staff. The COVID-19 pandemic has taken a significant toll on frontline corrections officers (Smith, 2022). They often have to wear full PPE all day, they have to deal with prisoners frustrated at not being able to see visitors or engage with programs, and they have to cover for colleagues who have COVID-19 or are too immunocompromised to work in the prison during the pandemic. This has resulted in a large number of corrections officers leaving due to workload and conditions, or because they are close to retirement. Across the world, prisons are reporting being understaffed. In these situations, activities are prioritised, and education is usually not near the top of the list of those priorities (Bradley & Davies, 2021).
Attitudinal challenges
'Security' is often bandied about as the reason that educational technologies and the internet cannot be introduced (Farley & Doyle, 2014). Unrestricted access to the internet is seen as posing too great a risk to the community and to victims of crime. Prisoners would be able to monitor victims or potential victims through social media channels, view prohibited content related to their crime, or run illegal businesses. Though there are ways to mitigate risk, for example through whitelisting websites, a blanket ban often persists. Though the public is kept safe by this prohibition, it also excludes access to the websites and learning management systems of tertiary education institutions. Most of these sites are not suitable for whitelisting as they often incorporate third-party links and resources, such as journals, and rely on dynamic IP addresses (Taugerbeck, 2019). Even when viable solutions are offered, these are often disregarded by those with an incomplete understanding of the technologies, and 'security' is usually the excuse proffered.
In most jurisdictions, the relationship between prison education teams and correctional officers is fraught. Education teams are seen to be 'soft' on prisoners. Corrections officers perceive that prisoners receive concessions and opportunities that they, or people on the outside, are not entitled to (Novek, 2019). This is frequently a misconception. Prisoners bear the costs of tertiary education and are subject to the same rules as learners outside of prison. Some of this resentment stems from corrections officers, outside of Scandinavian jurisdictions, needing very little education for their role. They become uncomfortable if a prisoner is studying for a higher qualification than the one they hold. This is particularly an issue with incarcerated higher education learners. In addition, corrections officers may poorly understand educational technologies and perceive them as a risk to security (Kerr & Willis, 2018). A personal example illustrates this: I visited an Australian prison to meet with corrections officers about a technology project I was leading in their prison. One of the main concerns I had to address was their perception that phone calls could be made from a scientific calculator. Obviously, this is not true, but it was a widely held belief in that prison at that time.
Human challenges
Access to technology is not enough to ensure access to learning. Both education tutors and learners in carceral settings need to understand how the technologies can be used and why they would be used. The lack of digital literacies of both staff and learners has slowed down my own technology projects in the carceral space. People in prison are frequently from disadvantaged communities and have already experienced the digital divide, even before their incarceration. They come to prison without the skills and knowledge they need to fully participate in the digital world (Smith, Willems & Farley, 2021). This is exacerbated by the dearth of contemporary technologies in prison. These people are not given the opportunity to build their digital literacies while incarcerated. This undermines the rehabilitative potential of education; digital literacies are necessary for higher-level employment. A lack of digital literacies restricts an ex-prisoner to low-paying and often physical jobs that have little impact on recidivism (Bhuller, et al., 2020). When technologies are provided, both staff and learners are unsure how to use them, and they may be put aside in favour of hard copy materials (Farley, Murphy & Bedford, 2014).
Conclusion
Incarcerated learners lack access to the educational technologies that learners 'outside the wire' enjoy. In pre-COVID-19 times, tertiary education was normally delivered face-to-face or supported by education tutors or education officers. During the lockdowns that have resulted from the COVID-19 pandemic, these educators have been prohibited from visiting prisons, forcing them to rely on the delivery of hard copy materials to prisoners by corrections officers. In most cases, the delivery of regular educational programming has ceased, and technological solutions have not been in place. The range of challenges to the implementation of educational technologies that could overcome these barriers has been listed and broadly categorised into physical, operational, attitudinal, and human challenges. Educators have not been able to reconnect with their incarcerated learners, and those learners have been languishing in their cells without the opportunity to learn. If they are to reap the benefits of reduced recidivism rates and the dynamic security afforded by educating prisoners, correctional jurisdictions need to address the challenges of implementing educational technologies and recognise the very human foibles that often prevent progress.
"Education",
"Computer Science"
] |
Sp1 Plays a Key Role in Vasculogenic Mimicry of Human Prostate Cancer Cells
Sp1 transcription factor regulates genes involved in various phenomena of tumor progression. Vasculogenic mimicry (VM) is the alternative neovascularization by aggressive tumor cells. However, there is no evidence of the relationship between Sp1 and VM. This study investigated whether and how Sp1 plays a crucial role in the process of VM in human prostate cancer (PCa) cell lines, PC-3 and DU145. A cell viability assay and three-dimensional culture VM tube formation assay were performed. Protein and mRNA expression levels were detected by Western blot and reverse transcriptase-polymerase chain reaction, respectively. The nuclear twist expression was observed by immunofluorescence assay. A co-immunoprecipitation assay was performed. Mithramycin A (MiA) and Sp1 siRNA significantly decreased serum-induced VM, whereas Sp1 overexpression caused a significant induction of VM. Serum-upregulated vascular endothelial cadherin (VE-cadherin) protein and mRNA expression levels were decreased after MiA treatment or Sp1 silencing. The protein expression and the nuclear localization of twist were increased by serum, which was effectively inhibited after MiA treatment or Sp1 silencing. The interaction between Sp1 and twist was reduced by MiA. On the contrary, Sp1 overexpression enhanced VE-cadherin and twist expressions. Serum phosphorylated AKT and raised matrix metalloproteinase-2 (MMP-2) and laminin subunit 5 gamma-2 (LAMC2) expressions. MiA or Sp1 silencing impaired these effects. However, Sp1 overexpression upregulated phosphor-AKT, MMP-2 and LAMC2 expressions. Serum-upregulated Sp1 was significantly reduced by an AKT inhibitor, wortmannin. These results demonstrate that Sp1 mediates VM formation through interacting with the twist/VE-cadherin/AKT pathway in human PCa cells.
Introduction
Prostate cancer (PCa) is a common cancer among men around the world [1]. PCa spreads to nearby organs and tissues and to other parts of the body, including the lymph nodes and bones [2]. After spreading, cancer cells attach to other tissues and grow to form new tumors that can cause damage where they land [3]. Reportedly, a quarter of men with PCa in the world have metastatic disease, and the 5-year survival rate of patients with metastasis to distant sites is 29% [4]. PCa cells are known to have largely aggressive properties [5]. Since these tumors need a blood supply to grow and spread through the blood circulation [6], it is important to shut off the blood supply to prevent tumor growth and metastasis in PCa.
Vasculogenic mimicry (VM), discovered in 1999 [7], is an alternative form of neovascularization by aggressive tumor cells without the presence of endothelial cells (ECs), and it functions like the blood vessels formed by ECs [8][9][10]. A blood supply is an indispensable requirement for cancer cells to grow and metastasize, providing oxygen and nutrients [11]. However, the therapeutic efficacy of drugs targeting only the blood vessels formed by ECs is limited due to an adequate blood supply through alternative patterns such as VM [12][13][14]. VM has been observed in various types of cancer including PCa and, by meta-analysis, is related to a poor prognosis of cancer patients [15,16]. Overall survival (OS) and disease-free survival (DFS) were significantly lower in VM-positive PCa patients [17]. Since VM has essential effects on tumor progression, targeting VM is a new therapeutic strategy to improve the therapeutic efficacy for cancer patients including PCa.
The Sp1 transcription factor is overexpressed in many types of cancer cells including PCa and controls several genes that are involved in many cellular processes, including cell differentiation, cell growth, apoptosis, angiogenesis, and the response to DNA damage [18][19][20][21]. Additionally, it contributes to the progression and metastasis of PCa [21]. Therefore, Sp1 is an attractive target for cancer treatment in PCa patients. Although there are many studies on the functions of Sp1, there is no evidence of a relationship between Sp1 and VM formation. Among human PCa cell lines, PC-3 and DU145 cells have a stronger capacity for VM formation compared with LNCaP cells [22]. Thus, this study investigated whether and how Sp1 affects VM formation in human PCa PC-3 and DU145 cells.
Sp1 Mediates VM Formation in PCa Cells
PC-3 cells were treated with increasing concentrations of serum for 24 h and then the expression level of Sp1 was checked by Western blot. Sp1 was dramatically upregulated by serum in a dose-dependent manner (Figure 1A). Since serum promotes VM formation of PC-3 cells [23], to determine the role of Sp1 in VM formation, a loss-of-function approach was introduced. PC-3 cells were treated with a selective Sp1 inhibitor, mithramycin A (MiA), or were transfected with siRNA targeting the Sp1 gene. First, a cell viability assay was performed to determine non-cytotoxic concentrations of MiA and Sp1 siRNA. There was no cytotoxic effect of MiA or siRNA up to 200 nM or 15 nM, respectively (Figure 1B,C). This study used 100 and 200 nM of MiA or 15 nM of siRNA for subsequent experiments. Serum-upregulated Sp1 expression was effectively inhibited by MiA or Sp1 silencing (Figure 1D,E). To determine whether Sp1 is associated with VM formation, a 3D culture VM formation assay was performed in PC-3 cells after MiA treatment or transfection with Sp1 siRNA. Serum stimulation led to the induction of tubular channels by PC-3 cells, which was effectively reduced by MiA in a dose-dependent manner (Figure 1F). Similarly, Sp1 silencing had an obvious inhibitory effect on the serum-induced formation of tubular channels (Figure 1G).
To verify the role of Sp1 in VM formation, a Western blot for Sp1 after serum treatment and transfection with Sp1 siRNA, and a 3D culture VM formation assay after transfection with Sp1 siRNA, were performed in another PCa cell line, DU145. Consistent with the results from PC-3 cells, Sp1 was upregulated by serum in DU145 cells (Figure 2A). Additionally, serum-induced VM formation was significantly reduced after Sp1 silencing in DU145 cells (Figure 2B,C).
To confirm a novel functional role of Sp1 in VM formation, a gain-of-function approach was introduced using Sp1 CRISPR activation plasmid in both PC-3 and DU145 PCa cells. Sp1 overexpression caused an effective increase in VM tubular formation compared with control plasmid without serum in both PC-3 ( Figure 3A) and DU145 cells ( Figure 3B) by a 3D culture VM formation assay.
Taken together, Sp1 silencing inhibited serum-stimulated VM formation, whereas Sp1 overexpression triggered VM formation in PCa cells, suggesting that Sp1 is required to induce VM formation in PCa cells.
Figure 1 (legend, partial): Western blot was performed in MiA-treated cells with serum (D) and in siRNA-transfected cells with serum (E) for 24 h. Cell viability was measured by MTT assay in MiA-treated cells with serum (B) and in siRNA-transfected cells with serum (C) for 24 h. VM tube formation assay was carried out in MiA-treated cells with serum (F) and in siRNA-transfected cells with serum (G). After 16 h incubation, images were obtained under an inverted light microscope at 40× magnification. Scale bar = 250 μm. The number of formed VM structures was counted. Data are shown as mean ± SD and were statistically calculated by one-way ANOVA followed by Tukey's studentized range test. * Means with different letters are significantly different between groups.
Figure 3. Sp1 mediates VM formation in PCa cells. Western blot was performed in PC-3 cells (A) and DU145 cells (B) after transfection with CRISPR activation plasmid. VM tube formation assay was carried out in PC-3 cells (C) and DU145 cells (D) after transfection with CRISPR activation plasmid.
After 16 h incubation, images were obtained under an inverted light microscope at 40× magnification. Scale bar = 250 µm. The number of formed VM structures was counted. Data are shown as mean ± SD and were statistically calculated by one-way ANOVA followed by Tukey's studentized range test. * Means with different letters are significantly different between groups.
Sp1 Upregulates VE-Cadherin Expression through the Nuclear Twist in PC-3 Cells
To reveal whether Sp1 affects the expression of VE-cadherin to induce VM formation, a Western blot was conducted in PC-3 cells. Serum upregulated VE-cadherin protein expression, which was attenuated by MiA in a dose-dependent manner ( Figure 4A). Additionally, VE-cadherin protein expression by serum was markedly inhibited in Sp1 siRNA-treated cells ( Figure 4B). However, Sp1 overexpression slightly upregulated VE-cadherin protein expression without serum ( Figure 4C). To assess whether the VE-cadherin protein level was affected by the transcriptional level, the mRNA expression level of VE-cadherin was detected by RT-PCR. Consistent with the protein expression of VE-cadherin, the serum-upregulated mRNA level of VE-cadherin was decreased after treatment with MiA ( Figure 4D) or Sp1 siRNA ( Figure 4E). These results indicated that Sp1 regulates VE-cadherin expression at the transcription level.
To identify the transcriptional regulation of VE-cadherin, a Western blot and immunofluorescence analysis were performed in PC-3 cells. Twist was elevated by serum, and this elevation was decreased by MiA treatment (Figure 5A) or Sp1 silencing (Figure 5B). However, the overexpression of Sp1 increased the expression level of twist without serum compared with the control plasmid (Figure 5C). Immunofluorescence staining showed that the enhanced twist expression in the nucleus by serum was attenuated after MiA treatment (Figure 5D) or Sp1 silencing (Figure 5E). As shown in Figure 5F, the interaction between Sp1 and twist was induced by serum and was significantly reduced by MiA treatment. Taken together, these results demonstrated that the nuclear twist upregulates VE-cadherin expression, a process which is mediated by Sp1.
Figure 5 (legend, partial): After incubating with twist antibody (green) followed by FITC-conjugated secondary antibody, the nuclei were counterstained with propidium iodide (red). Images were obtained by a fluorescence microscope at 400× magnification. Scale bar = 40 µm. (F) Co-IP was performed in MiA-treated cells with serum. IgG: negative control. Data were statistically calculated by one-way ANOVA followed by Tukey's studentized range test. * Means with different letters are significantly different between groups.
Sp1 Promotes the Activation of AKT Pathway in PC-3 Cells
To investigate whether Sp1 is involved in the AKT pathway to induce VM formation, Western blot analysis was performed in PC-3 cells. The phosphorylation of AKT and the expression levels of MMP-2 and LAMC2 were augmented by serum, and MiA treatment (Figure 6A) or Sp1 silencing (Figure 6B) decreased these effects of serum. In contrast, Sp1 overexpression elevated the phosphorylation of AKT and the expression levels of MMP-2 and LAMC2 without serum compared to the control plasmid (Figure 6C). Serum-upregulated Sp1, but not twist, was significantly reduced by the AKT inhibitor wortmannin (Figure 6D). These results indicate that Sp1 contributes to the activation of VM-related AKT signaling and that Sp1 expression is, in turn, regulated by AKT.
Discussion
VM is the formation of a vessel-like network lined by cancer cells. The function of VM is similar to that of blood vessels formed by ECs [8][9][10]. VM strongly participates in tumor invasion, metastasis, and growth through a blood supply and is closely related to poor prognosis in cancer patients [8,15,24]. VM-positive PCa patients showed high Gleason scores and distance metastasis as well as short OS and DFS [17]. The Sp1 transcription factor plays a crucial role in the progression and metastasis of PCa [21]. However, the involvement of Sp1 in VM formation has not been determined yet. Therefore, this study investigated a novel functional role of Sp1 in the process of VM in human PCa cells.
Since a previous study demonstrated that serum promotes VM formation in human PCa PC-3 cells [23], this study focused on Sp1 to explore an underlying molecular mechanism of VM. As expected, serum dramatically upregulated the expression of Sp1 at the protein level in both PCa PC-3 and DU145 cells (Figures 1A and 2A). To elucidate a novel functional role of Sp1 in VM formation, MiA and Sp1 siRNA were used for a loss-of-function approach and Sp1 CRISPR activation plasmid was used for a gain-of-function approach. The inhibition of Sp1 by MiA and Sp1 siRNA caused a complete blockage of VM formation induced by serum (Figures 1 and 2). On the contrary, despite the absence of serum, the overexpression of Sp1 by CRISPR activation plasmid sufficiently induced VM formation (Figure 3). Therefore, these results clearly demonstrate that Sp1 may be an important factor in the process of VM formation in PCa cells.
Highly aggressive tumor cells, but not non-aggressive tumor cells, overexpress VE-cadherin [25]. VE-cadherin, an endothelial-specific junction molecule, is a biomarker of VM and plays a crucial role in VM formation [26][27][28]. The endothelial-specific transcriptionally active region of VE-cadherin contains an Sp1 binding site [29,30], highlighting the relationship between Sp1 and VE-cadherin. In this study, the serum-upregulated expression of VE-cadherin at the protein and mRNA levels was decreased after treatment with MiA or Sp1 siRNA (Figure 4), highlighting the transcriptional regulation of VE-cadherin expression. However, the overexpression of Sp1 upregulated the protein expression of VE-cadherin (Figure 4C). Twist is a transcription factor that regulates the expression of VE-cadherin [31,32]. Twist has been reported to be associated with tumor metastasis and angiogenesis [33] and also regulates VM formation [32]. In this study, serum-treated PC-3 cells were found to increase the expression of twist in the nucleus, which was reduced by the inhibition of Sp1 by MiA or siRNA (Figure 5A,B). However, the overexpression of Sp1 elevated the protein expression of twist (Figure 5C). Sp1 interacted with twist, and this interaction was significantly reduced by MiA treatment (Figure 5F). Taken together, these results revealed that Sp1 regulates the expression of VE-cadherin by interacting with twist in the nucleus.
Multiple signaling pathways, such as AKT, FAK, hypoxia, and nodal/notch, contribute to VM formation [8,24]. Among them, AKT, a downstream effector of VE-cadherin, is activated by VE-cadherin [8,34]. Activated AKT then elevates the expression of matrix metalloproteinases (MMPs) such as MMP-2 and -14, thereby leading to VM formation through the remodeling of the extracellular matrix including LAMC2 [8,24]. Additionally, AKT promotes cancer cell growth, proliferation, and malignant behavior [35]. A previous study demonstrated that the AKT/MMP-2/LAMC2 signal transduction pathway participates in VM formation in response to serum [23]. Sp1 knockdown suppressed tumor progression by inhibiting AKT and ERK signaling [36]. AKT-mediated VEGF mRNA expression required Sp1 [37]. These reports indicated that Sp1 may be involved in the AKT signaling pathway. In this study, the serum-induced phosphorylation of AKT in PC-3 cells was seen to decrease when Sp1 was suppressed by MiA or siRNA (Figure 6A,B). However, the overexpression of Sp1 enhanced the phosphorylation of AKT (Figure 6C). Meanwhile, AKT signaling also regulated the Sp1 expression. Both serum-upregulated MMP-2 and LAMC2 expressions were decreased when Sp1 was inhibited by MiA or siRNA (Figure 6A,B). On the contrary, the overexpression of Sp1 enhanced the expression levels of MMP-2 and LAMC2 (Figure 6C). These results verified that Sp1 is involved in the AKT pathway to induce VM in PC-3 cells.
In conclusion, this study demonstrated a novel functional role of Sp1 in VM formation through loss- and gain-of-function approaches; the results are summarized in Figure 7. Sp1 regulated the expression of VE-cadherin by controlling the nuclear expression of the transcription factor Twist. The Sp1-induced upregulation of Twist/VE-cadherin in turn activated the AKT pathway, including MMP-2 and LAMC2, thereby inducing VM. Taken together, Sp1 plays a key role in VM formation through the Twist/VE-cadherin/AKT pathway in human PCa cells. These results may provide a new therapeutic strategy for treating PCa patients with VM by targeting Sp1.
Three-Dimensional (3D) Culture VM Tube Formation Assay
VM tube formation was assessed as described previously [23,41]. Cells (3.6 × 10⁵) were seeded on a matrigel-polymerized 24-well plate and then treated with serum, with or without MiA, for 16 h at 37 °C. For siRNA-transfected cells, cells (3.6 × 10⁵) were seeded after 48 h of transfection and then treated with serum. For CRISPR activation plasmid-transfected cells, cells (3.6 × 10⁵) were seeded after 48 h of transfection without serum. Tubular shapes were counted after imaging with an inverted light microscope Ts2_PH (Nikon, Tokyo, Japan) at 40× magnification.
Western Blot Analysis
Western blotting was performed on MiA-treated cells with serum, siRNA-transfected cells with serum for 24 h, CRISPR activation plasmid-treated cells, and serum-treated cells with or without wortmannin (WM, Merck, Darmstadt, Germany) for 24 h. Total proteins were isolated using RIPA buffer (Thermo Scientific, Rockford, IL, USA) supplemented with phosphatase inhibitor cocktail (Thermo Scientific) and protease inhibitor cocktail (Thermo Scientific). Protein samples (30-35 µg) were separated by SDS-polyacrylamide gel (8-12%) electrophoresis and then transferred onto a membrane (Pall Corporation, Port Washington, NY, USA). The membrane was incubated with the indicated primary antibodies (Table 1) overnight at 4 °C, followed by incubation with specific secondary antibodies for 2 h at room temperature (RT). Protein bands were visualized using an enhanced chemiluminescence reagent (GE Healthcare, Chicago, IL, USA), and ImageJ 1.40g software (National Institutes of Health, Bethesda, MD, USA) was used to quantify each protein band.
Isolation of RNA and Reverse Transcriptase Polymerase Chain Reaction (RT-PCR)
Total RNA extraction was carried out on MiA-treated or Sp1 siRNA-transfected cells using TRIzol reagent (Invitrogen, Carlsbad, CA, USA). cDNA synthesis and PCR were performed as described previously [23]. ImageJ 1.40g software was used to quantify each PCR product band.
Immunofluorescence Assay
Cells were seeded on an 8-well chamber slide with serum, with or without MiA. For siRNA transfection, cells (7 × 10⁴) were seeded on an 8-well chamber slide, transfected with siRNA for 48 h, and treated with serum. The immunofluorescence assay was performed as described previously [23]. Images were captured using an ECLIPSE Ts2-FL (Nikon, Tokyo, Japan) at 400× magnification.
Co-Immunoprecipitation (Co-IP)
Total cell lysate (300 µg) was mixed with 0.5 µg of Twist antibody (Abcam plc., Cambridge, UK) for 1 h at 4 °C, and protein A/G agarose (Santa Cruz Biotechnology, Inc., Danvers, MA, USA) was then added for 1 h at 4 °C. The beads were collected by centrifugation and washed 3 times with lysis buffer. The immunoprecipitated protein complexes were analyzed by Western blot.
Statistical Analysis
All experiments were performed at least three times. Data are shown as mean ± standard deviation (SD). All data were analyzed by one-way ANOVA followed by Tukey's studentized range test using GraphPad Prism software (GraphPad Software Inc., San Diego, CA, USA). Means with different letters are significantly different between groups. | 5,292 | 2022-01-25T00:00:00.000 | [
"Biology"
] |
A Novel Fusion Pruning Algorithm Based on Information Entropy Stratification and IoT Application
To further reduce the size of the neural network model and enable the network to be deployed on mobile devices, a novel fusion pruning algorithm based on information entropy stratification is proposed in this paper. Firstly, the method finds similar filters and removes redundant parts by Affinity Propagation clustering; secondly, it further prunes channels using information entropy stratification and the batch normalization (BN) layer scaling factor; finally, it restores accuracy by fine-tuning, achieving a reduced network model size without losing network accuracy. Experiments are conducted on the VGG16 and ResNet56 networks using the CIFAR-10 dataset. On VGG16, the results show that, compared with the original model, the parameter count of the proposed algorithm is reduced by 90.69% and the computation is reduced to 24.46% of the original. On ResNet56, we achieve a 63.82% FLOPs reduction by removing 63.53% of the parameters. The memory occupation and computation speed of the new model are better than those of the baseline model while maintaining high network accuracy. Compared with similar algorithms, the proposed algorithm has obvious advantages in computational speed and model size. The pruned model is also deployed to the Internet of Things (IoT) as a target detection system. In addition, experiments show that the proposed model is able to detect targets accurately with low inference time and memory. It takes only 252.84 ms on embedded devices, thus matching the limited resources of IoT.
Introduction
Neural networks have evolved rapidly in recent years, from VGG [1], GoogLeNet [2], ResNet [3], and DenseNet [4], to the newer networks SqueezeNet [5], MobileNet [6], and ShuffleNet [7], all of which have achieved very good results. As algorithmic models become more complex, neural networks grow in the number of layers, parameters, and computational effort. However, due to the hardware limitations of embedded devices, model compression algorithms have been developed in order to successfully deploy such large network models on mobile devices. He et al. [8] proposed a new filter pruning method that prunes redundant convolutional kernels based on a geometric-center criterion instead of parameter magnitudes to achieve network acceleration. Li et al. [9] proposed a pruning operation on the convolutional layer, scoring each convolution kernel by its ℓ1-norm and deleting kernels with smaller norm values along with the corresponding feature maps, reducing considerable computational cost. He et al. [10] proposed a LASSO-based filter selection strategy to identify representative filters and a least-squares reconstruction error to reconstruct the output. Zhao [11] proposed a method based on knowledge distillation and quantization, improving the model through a teacher network that guides the student network, followed by quantization. Lin [12] computed the rank of each layer's feature maps from a small amount of input, concluded that feature maps with larger rank contain more information, ranked them accordingly, and retained the corresponding filters. Wang [13] proposed a global pruning method that measures the relationship between filters with the Pearson correlation coefficient, reflecting the replaceability between filters, and adds a hierarchical constraint term on the global importance to obtain a better result. Ghimire [14] investigated quantization/binarization models, optimized architectures, and resource-constrained systems to improve the efficiency of deep learning research. The method in [15] proposed a learnable global importance ranking, measuring filter importance with the ℓ2-norm and normalizing all layers by a linear transformation; the parameters of each layer's linear transformation are solved by an evolutionary algorithm and the global importance is sorted. Souza [16] proposed BR-ELM, a pruning method using the Bootstrapped Lasso, which selects the most representative neurons for model responses based on regularization and resampling techniques.
The aforementioned methods have made progress in neural network model compression and related dimensions, but the degree of model compression and computational acceleration is still insufficient, and they are not necessarily suitable for deployment on mobile terminal devices. Based on this, a fusion pruning algorithm based on information entropy stratification is proposed in this paper. The main contributions of the paper are listed below.
• Through the network pruning operation, the model size of the network becomes smaller, the inference time shorter, and the number of operations fewer.
• Compared to the baseline models (VGG16, ResNet56), the pruned model has better performance.
• It maintains good target detection results on embedded devices.
The remainder of this paper is organized as follows. Section 2 introduces the fundamentals of pruning algorithms. Section 3 describes the steps and details of the proposed algorithm. Section 4 presents the experimental results and their analysis. Section 5 covers deployment to mobile devices and testing. Section 6 summarizes the algorithm's comprehensive performance and outlook.
Related Work
Network pruning is a widely used method in model compression. Pruning generally proceeds in three steps: model pre-training, pruning, and parameter fine-tuning. Figure 1 shows the general network pruning process. The initial step is model pre-training, in which the original model is trained to adjust the weight parameters; the more important purpose of pre-training is to find out what is "important". The second step is pruning, which generally involves determining which weights are to be removed based on the proposed decision criteria. The final step is fine-tuning, i.e., retraining the pruned network model; since pruning may cause a loss in accuracy, this step is used to restore it. A minimal sketch of this pipeline is given below.
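The following is a minimal PyTorch-style sketch of the three-step pipeline just described. `prune_step` is a hypothetical stand-in for whatever decision criterion is used (norm-based, clustering-based, etc.), not the paper's specific method; the epoch counts and learning rate are illustrative assumptions.

```python
import torch

def train(model, loader, epochs, lr=0.01):
    """Plain supervised training loop, used for both pre-training and fine-tuning."""
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

def prune_pipeline(model, loader, prune_step, pretrain_epochs=10, finetune_epochs=5):
    train(model, loader, pretrain_epochs)    # step 1: pre-train to find what is "important"
    pruned = prune_step(model)               # step 2: prune by some decision criterion
    train(pruned, loader, finetune_epochs)   # step 3: fine-tune to restore accuracy
    return pruned
```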
Unstructured Pruning
Pruning algorithms can be classified into structured pruning and unstructured pruning based on their granularity. In the early days, LeCun et al. [17] and Hassibi et al. [18] employed the Hessian matrix of the loss function to determine the redundant connections in the network. However, the second-order computation of the Hessian matrix itself consumes a lot of computation time and makes training slow. Dong et al. [19] further improved the method by restricting the computation of the Hessian matrix to a single-layer network, which greatly reduced the computational effort. Han et al. [20] proposed iterative pruning, which continuously prunes and retrains the network and obtains a simplified model after convergence, shortening training time compared to the method of LeCun et al. [17]. In addition, Guo et al. [21] improved the method of Han et al. [20]: faced with the problem that important filters may be removed during the pruning process, causing a drop in accuracy, their method allows pruned neurons to be restored during training. Similarly, Zhou et al. [22] proposed pruning unimportant nodes based on the magnitude of activation values. Srinivas and Babu [23] proposed a pruning framework that does not rely on training data, starting from the observation that redundancy exists among neurons; the redundancy of nodes is calculated and redundant nodes are removed. Chen et al. [24] proposed the HashedNets model, which introduces a hash function to group weights according to the Hamming distance between parameters to achieve parameter sharing. However, weight pruning is unstructured: the resulting sparse structure is not conducive to parallel computing and requires special software or hardware for acceleration, whereas structured pruning does not have these limitations.
Structured Pruning
Structured pruning operates on channels or entire filters without destroying the original convolutional structure; it is compatible with existing hardware and libraries and more suitable for deployment. Both unstructured and structured pruning require evaluating parameter importance. Liu et al. [25] proposed a channel-level pruning method that uses the scaling factor of the BN layer as the measure, achieving compressed model size and acceleration. Inspired by this, Kang and Han [26] considered channel scaling and shifting parameters for pruning. Yan et al. [27] combined ℓ1-norm parameters and computational power as pruning criteria. SFP [28] allows pruned filters to be updated during training: convolutional kernels pruned in a previous training round still participate in the iterations of the current round, so these kernels are not directly discarded. This approach largely maintains the capacity of the model and obtains excellent performance. Luo et al. [29] proposed the ThiNet channel pruning algorithm. They defined channel pruning as an optimization problem, using the statistics of the next layer to guide the pruning of the current layer; redundant channels are selected based on a greedy strategy, and the model is then fine-tuned by minimizing the reconstruction error before and after pruning. Jin et al. [30] proposed structured pruning of neural networks followed by weight pruning to further compress the network model. Hu et al. [31] proposed the Average Percentage Of Zeros (APOZ), which measures the number of zero activations in each convolutional kernel as a way to evaluate the importance of convolutional kernels and perform pruning. Molchanov et al. [32] viewed pruning as a combinatorial optimization problem in which an optimal subset of parameters is selected to minimize the change in the model loss function after pruning; redundant channels were selected by a Taylor expansion that evaluates the effect of channel pruning on the model. Luo and Wu [33] used the information entropy of global average pooling (GAP) over the output feature maps to remove redundant filters. Similarly, Yu et al. [34] optimized the reconstruction error of the final output response and propagated an importance score to each channel. Lin et al. [35] introduced dynamic coding filter fusion (DCFF) to train compact convolutional neural networks. Wen et al. [36] used Group Lasso for structured sparsity. Huang and Wang [37] performed structured pruning by introducing learnable masks and using the APG algorithm for mask sparsification.
All of the above structured pruning methods use only the parameter information of a single layer to select redundant parameters and do not take advantage of the dynamics of network parameter updates to select redundant filters flexibly. In addition, the filter parameters themselves contain noise, and these methods do not reduce the influence of this interfering information, which affects the correct selection of redundant filters. The fusion pruning method proposed in this paper uses the filters themselves together with information entropy stratification and BN layer parameter information, combining multiple determination values to select redundant parameters more accurately.
Fusion Pruning Algorithm
The main idea of network pruning is to identify the weights or convolutional kernels that are less important in the model, remove them, and then recover the model performance by fine-tuning. The aim is to compress the neural network parameters to the maximum extent and accelerate the model while guaranteeing its performance.
In previous filter pruning methods, importance was mostly judged by the magnitude of each filter's own parameters in isolation.
Filter Pruning Based on Affinity Propagation
Affinity Propagation (AP) was originally proposed as a method for selecting representative exemplars among data points with different attributes. All samples are considered as nodes in a network, and the clustering center of each sample is calculated based on message passing along each edge of the network. Two kinds of messages are passed among the nodes during clustering: responsibility and availability. The Affinity Propagation algorithm [38] continuously updates the responsibility and availability values of each point through an iterative process until high-quality exemplars (similar to centroids) are produced, while the remaining data points are assigned to the corresponding clusters.
As shown in Figure 3, each filter is reformatted as a high-dimensional data point in vector form. For any two filters, Affinity Propagation takes their similarity as input, which reflects the extent to which one filter is suitable as an exemplar of the other. The similarity is the negative squared Euclidean distance between the filters of the k-th layer:

$$s_k(i, j) = -\left\| w_{ki} - w_{kj} \right\|_2^2. \tag{1}$$

When $i = j$, $s_k(i, i)$ expresses the adaptability of the filter to its own samples (self-similarity). It can be defined as:

$$s_k(i, i) = \mathrm{median}\big(s_k\big), \tag{2}$$

where the median(·) function returns the middle value of its input. A larger $s_k(i, i)$ results in more exemplar filters; however, this increases complexity. Using the median over the total weights of the k-th layer in Equation (2), a moderate number of exemplars can be obtained. Equation (2) was then restated as follows:

$$s_k(i, i) = \beta \cdot \underset{j \neq i}{\mathrm{median}}\big(s_k(i, j)\big), \tag{3}$$

where β is a pre-given hyperparameter. Equation (3) differs from Equation (2) in two ways. First, the median is taken over the similarities of the i-th filter rather than over the whole weight set, so the similarity $s_k$ is better adapted to the filter $w_{ki}$. Second, the introduced β provides an adjustable reduction of model complexity, with larger β reducing very high complexity and vice versa.
In addition to similarity, two other messages, responsibility and availability, are passed between filters to determine which filters serve as exemplars and, for every other filter, which exemplar it belongs to.
Responsibility $r(i, j)$ indicates, considering other potential exemplars of the filter $w_{ki}$, how well suited the filter $w_{kj}$ is to serve as the exemplar of $w_{ki}$. The update of $r(i, j)$ is as follows:

$$r(i, j) \leftarrow s_k(i, j) - \max_{j' \neq j}\big\{ a(i, j') + s_k(i, j') \big\}, \tag{4}$$

where $a(i, j)$ is the availability defined below and is initialized to zero. Thus $r(i, j)$ is set to $s_k(i, j)$ minus the maximum competing similarity between the filter $w_{ki}$ and the other filters. Afterwards, if a filter is assigned to other exemplars, its availability values are all less than zero by Equation (6), which further reduces the effective value of $s_k(i, j')$ in Equation (4), so that it is removed from the candidate exemplars. For $i = j$, the "self-responsibility" is defined as:

$$r(i, i) \leftarrow s_k(i, i) - \max_{j \neq i}\big\{ a(i, j) + s_k(i, j) \big\}. \tag{5}$$

It is set to $s_k(i, i)$ minus the maximum similarity between the filter and the other filters, and reflects the likelihood that the filter is an exemplar.
As for availability, its update rule for $i \neq j$ is:

$$a(i, j) \leftarrow \min\Big\{ 0,\; r(j, j) + \sum_{i' \notin \{i, j\}} \max\big\{ 0,\, r(i', j) \big\} \Big\}. \tag{6}$$
The availability $a(i, j)$ is set to $r(j, j)$ plus the sum of the positive responsibilities that the filter receives from the other filters. The max(·) excludes negative responsibilities, since only good filters (positive responsibility) need to be considered. $r(j, j) < 0$ indicates that the filter is better suited to belong to another exemplar rather than being an exemplar itself. It can be seen that the availability of a filter as an exemplar increases if some other filters have a positive responsibility towards it. Thus, availability reflects the suitability of choosing a filter as one's exemplar, as it takes into account the support of the other filters for that choice. Finally, the min(·) limits the effect of strong positive responsibilities so that the sum cannot exceed zero.
For $i = j$, the "self-availability" is given as:

$$a(j, j) \leftarrow \sum_{i' \neq j} \max\big\{ 0,\, r(i', j) \big\}. \tag{7}$$

This reflects the accumulated evidence, based on the positive responsibilities from the other filters, that the filter $w_{kj}$ is an exemplar.
The updates of responsibility and availability are iterative. To avoid numerical oscillations, each message at the t-th update stage is taken as a weighted sum of the previous and newly computed values:

$$r_t(i, j) = \lambda\, r_{t-1}(i, j) + (1 - \lambda)\, \hat{r}_t(i, j), \qquad a_t(i, j) = \lambda\, a_{t-1}(i, j) + (1 - \lambda)\, \hat{a}_t(i, j), \tag{8}$$

where $\hat{r}_t$ and $\hat{a}_t$ are the values computed from the update rules above.
Here $0 \leq \lambda \leq 1$ is the weighting factor. After a fixed number of iterations, a filter that serves as its own exemplar satisfies

$$\arg\max_{j}\big\{ a(i, j) + r(i, j) \big\} = i. \tag{9}$$
When $i = j$ maximizes this quantity, the filter selects itself as an exemplar. All the selected filters make up the exemplar set. Therefore, the number of exemplars is adaptive and requires no manual specification.
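As a concrete illustration, here is a minimal sketch of exemplar-based filter selection, assuming PyTorch and scikit-learn. It uses scikit-learn's `AffinityPropagation` (with its default negative-squared-Euclidean similarity and median preference, mirroring Equation (2)) in place of the explicit update rules above, so it approximates the procedure rather than reproducing the paper's exact implementation.

```python
import torch
from sklearn.cluster import AffinityPropagation

def redundant_filter_ids(conv: torch.nn.Conv2d):
    # Flatten each filter of the layer into one high-dimensional data point.
    W = conv.weight.detach().reshape(conv.out_channels, -1).cpu().numpy()
    # preference=None uses the median of the input similarities as s(i, i).
    ap = AffinityPropagation(damping=0.9, preference=None, random_state=0)
    ap.fit(W)
    exemplars = set(ap.cluster_centers_indices_.tolist())
    # Non-exemplar filters are the redundancy candidates to remove.
    return [i for i in range(conv.out_channels) if i not in exemplars]
```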
Channel Pruning Based on Batch Normalization (BN) Layer Scaling Factor
The Batch Normalization (BN) layer is widely used in neural networks because it accelerates the convergence of the network. It is generally placed after a convolutional layer to normalize the convolutional output features, allowing each layer of the network to learn somewhat independently of the other layers. The BN layer has two learnable parameters, the scaling factor and the offset factor, which fine-tune the normalized feature data so that each layer can recover the distribution of its features. Algorithm 1 gives the forward propagation process.
The forward pass first normalizes the input and then applies the scale and shift:

$$\hat{x} = \frac{x - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}, \qquad y = \gamma \hat{x} + \beta,$$

where $\mu_B$ is the mean of the input mini-batch, $\sigma_B^2$ is its variance, $\gamma$ is the scaling factor, and $\beta$ is the offset factor. The convolution layer produces a feature map for each filter, and when normalization is performed each feature map has a unique correspondence to one normalization channel. Thus, the feature map can be assessed by its scaling factor, the corresponding filter can be selected through the feature map, and from this the importance of the filter can be determined.
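A minimal sketch of this criterion, assuming PyTorch: collect each BN layer's per-channel scaling factor $\gamma$ and treat its magnitude as the importance score of the corresponding filter.

```python
import torch

def bn_channel_scores(model: torch.nn.Module):
    """Return each BN layer's |gamma| vector; one score per channel/filter."""
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, torch.nn.BatchNorm2d):
            scores[name] = module.weight.detach().abs()  # gamma per channel
    return scores

# Channels with the smallest |gamma| in a layer are its pruning candidates.
```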
Fusion Pruning
To achieve the most refined and simplified network structure, the fusion pruning method combines two pruning methods, filter clustering and information entropy stratification, to remove redundant filters and channels. On the one hand, redundant filters are searched for from the filter perspective. On the other hand, based on information entropy stratification, the importance of each layer is judged from the layer perspective to determine its pruning rate, and channel pruning is then performed to remove redundant channels.
Since the importance of each convolutional layer differs, pruning every layer with the same rate may remove important features of some layers. Therefore, a channel pruning method based on information entropy stratification of the BN layer scaling factors is proposed. Based on the range of scaling factor values, the value interval is divided equally into a number of sub-intervals, denoted as N, and the probability of the scaling factors falling in each sub-interval is counted. The entropy value can then be calculated by the following equation:

$$S_i^j = -\sum_{n=1}^{N} p_n \log p_n,$$
where $S_i^j$ denotes the entropy value of the j-th scaling coefficient in the i-th layer, and $p_n$ denotes the probability that the scaling factor falls in the n-th interval.
The layer score is then aggregated as

$$T_i = \frac{1}{J} \sum_{j=1}^{J} S_i^j,$$

where J denotes the total number of features in the current layer and $T_i$ is the final entropy score of the layer, reflecting its degree of dispersion. The degree of dispersion indicates the size of the layer's weight fluctuation: the greater the fluctuation, the more features are contained, and vice versa. K classes of pruning rates can be set, and the entropy values are classified using k-means. According to the classification results, each class corresponds to a different pruning rate: a high entropy value corresponds to a low pruning rate and a low entropy value corresponds to a high pruning rate. Algorithm 2 represents the k-means clustering process.
In this paper, the network is pruned by the methods above to achieve maximum proportional compression of the network model. Figure 4 shows the different pruning strategies for different network structures. Algorithm 3 describes the fusion pruning process: it takes the training data, the original VGG16 network, the initialized weights, and the high and low pruning rates as input, and outputs the network obtained after pruning. A code-level sketch of the stratification step follows the two algorithms.

Algorithm 2 (k-means clustering of the layer entropy values):
1: Initialize K mean vectors ξ_1, ..., ξ_K.
2: repeat
3:   for j = 1, 2, ..., m do
4:     Calculate the distance d_ji between the sample T_j and each mean vector ξ_i.
5:     Determine the cluster marker of T_j from the nearest mean vector: ρ_j = arg min_{i∈{1,2,...,K}} d_ji.
6:     Classify the sample T_j into the appropriate cluster: C_{ρ_j} = C_{ρ_j} ∪ {T_j}.
7:   end for
8:   for i = 1, 2, ..., K do
9:     Calculate the new mean vector ξ'_i.
10:    if ξ'_i ≠ ξ_i then update the current mean vector ξ_i to ξ'_i; else keep ξ_i unchanged.
11:  end for
12: until all mean vectors are unchanged.

Algorithm 3 (fusion pruning; the earlier steps cover training the network and clustering its filters by Affinity Propagation, as described above):
4: Obtain the redundant filters to be removed, based on the exemplars of each category.
5: After step 2, calculate the entropy values using the range of scaling factor values.
6: Determine the pruning rate of each layer by dividing the entropy values into N classes using k-means.
7: Rank the scaling factors of each convolutional layer and obtain the channels to be removed according to the pruning rate P of that layer.
8: Merge the results obtained in steps 4 and 7.
9: Remove the parameters obtained in step 8 from the network of step 2 to obtain the streamlined network structure.
10: Fine-tune the new network on the training data to get the final model.
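To make the stratification step concrete, here is a minimal sketch assuming NumPy and scikit-learn. The bin count, the number of classes K, and the example pruning rates are illustrative assumptions, not values taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def layer_entropy(gammas: np.ndarray, n_bins: int = 10) -> float:
    """Entropy of one layer's BN scaling factors over n_bins equal sub-intervals."""
    counts, _ = np.histogram(gammas, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum())

def pruning_rates(entropies, K=3, rates=(0.7, 0.5, 0.3)):
    """k-means the per-layer entropies into K classes and assign each class a rate."""
    T = np.asarray(entropies).reshape(-1, 1)
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(T)
    # Order clusters by mean entropy: low entropy -> high pruning rate.
    order = np.argsort([T[labels == c].mean() for c in range(K)])
    rate_of_cluster = {c: rates[rank] for rank, c in enumerate(order)}
    return [rate_of_cluster[l] for l in labels]
```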
Experiment
To verify the effectiveness of the model pruning algorithm, the experiments in this paper are based on the PyTorch framework, using the VGG and ResNet models.
First, pruning experiments were conducted on CIFAR-10, a dataset containing 10 common object classes. The CIFAR-10 dataset has a total of 60,000 color images, all 32 × 32 pixels in size, divided into 10 categories. For the experiments, they were divided into 50,000 training images and 10,000 validation images.
We select several existing algorithms and compare them objectively with the proposed algorithm in accuracy and parameter compression ratio to verify the advantages and disadvantages of the fusion pruning algorithm. The experimental results of different pruning algorithms are compared on the CIFAR-10 dataset based on the VGG16 and ResNet56 models.
As shown in Table 1, we analyze VGG16 and ResNet56 on CIFAR-10. More detailed analyses are provided below. VGGNet-16: after pruning and fine-tuning, the accuracy variation of several algorithms is shown in Figure 5, and the final convergence values do not differ much. The proposed pruning method did not lose much accuracy compared to the other algorithms, in accordance with the experimental expectations. As shown in Table 1, FPGM uses a geometric-center-based filter evaluation metric with a small difference in accuracy, but only a 34.22% reduction in floating-point computation. The method of [25] halves the computation by sparsifying the scale-factor evaluation and reduces the parameter volume to 1.83 M. Compared with other algorithms, the algorithm in this paper prunes from both the filter and channel perspectives, which greatly compresses the number of parameters as well as the computation volume while keeping the pruned accuracy basically the same, achieving the purpose of streamlining and accelerating the network model. ResNet-56: on CIFAR-10, with similar parameters and FLOPs, our algorithm based on information entropy stratification enables ResNet56 to obtain an accuracy of 93.21% with 0.31 M parameters and 45.78 M FLOPs. In addition, our fusion pruning algorithm is effective in reducing computation with only a slight loss of accuracy, which shows that the algorithm is suitable for pruning residual blocks.
Development of the Internet of Things (IoT)
Since the launch of Google's head-mounted device, wearable (electronic) devices have attracted considerable attention. In recent years, major technology companies have also started to develop novel wearable devices. Samsung launched its first smartwatch, the Samsung Galaxy Gear, in 2013, and Apple followed about a year later with the Apple Watch. In the field of virtual reality, HTC announced a new generation of VR headsets in 2016, allowing users to move freely in virtual space-time. This shows that wearable devices have significant market space and a wide range of applications, including health, entertainment, and military uses. For example, wearable sensing can provide responsive early warning to patients during episodes of Parkinson's disease, heart attack, sleep apnea, and other diseases. In these sudden moments, patients are likely to lose consciousness or mobility instantly, and wearable devices can save lives. For certain body signals that are difficult to observe directly, such as pulse and sweat secretion, IoT devices can achieve timely and continuous detection and can sustainably make health assessments to protect life. On the other hand, IoT applications are not uncommon in daily life: Kumar [41] proposes a mask detection system for public transportation to guarantee its safety, and Tarek [42] designs a tomato leaf disease identification workstation to guard the healthy growth of vegetables.
The proliferation of mobile Internet devices and various embedded devices has created a world full of rich information. Applying deep learning techniques to IoT applications can make better use of the data collected by IoT sensors such as sound, images, etc.
Deploying deep learning solutions in IoT applications typically follows a train-then-deploy process. Deep learning algorithms include both training and inference components: training is the process of extracting valid information from existing data, while inference is the procedure of using the extracted information to process new data. Training a large deep learning model is resource-intensive. When applying deep learning techniques to an IoT environment, the training process is usually done on a server with large computing power before the trained model is deployed on an IoT device. However, a major design requirement for IoT devices is low power consumption, which often means limited computing power as well as storage space. Therefore, we pruned the complex network before deployment. Network pruning is intended to make the model smaller and faster, suitable for deployment on IoT devices or wearable devices with weak computing power.
The purpose of the research in this section is to deploy the algorithm to an embedded platform. To demonstrate the performance on IoT devices, this experiment uses a common mobile terminal as the platform for target recognition testing. The experimental environment is Android Studio 2020.3.1 with a simulated Pixel 4 device (API 30, Android 11.0, 1080 × 2280, 440 dpi, x86 CPU).
NCNN Framework
Deploying algorithms to end devices requires a deep learning framework that can achieve inference acceleration on embedded platforms. NCNN is Tencent's open-source deep learning inference framework; it has no dependency on third-party libraries, is relatively small in size, and is convenient for deployment on embedded platforms. NCNN is not only optimized for CPU acceleration on ARM platforms but also supports GPU acceleration. NCNN provides an interface to Vulkan, so in practice one only needs to compile the GPU version of the NCNN library and then enable GPU-accelerated model inference via Vulkan with a single line of code.
The NCNN framework builds the network and the inference process as shown in Figure 6. First, the NCNN network interface is built, then the model is initialized, and the model parameter file and the model weight file are loaded. Second, the processed network input is read, and then the network extractor is created and used to perform the forward inference process to obtain the network output, where the names of the network input and output should be consistent with those in the model parameter file. Figure 6. The NCNN framework for building networks and the inference process.
Model Conversion
After design and training in the PyTorch framework, the algorithm in this paper produces a model file and a weight file, but NCNN does not provide an interface to convert a PyTorch model to an NCNN model directly. NCNN provides conversion interfaces for the Caffe framework and the ONNX model format, but the Caffe framework does not provide an interface for converting models from PyTorch to Caffe. Therefore, in this paper, we choose to first convert the model from torch to the intermediate ONNX format and then from ONNX to the NCNN model, verifying the results at each conversion step. Finally, we obtain the model parameter file and model weight file in NCNN format. The model conversion process is shown in Figure 7.
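A minimal sketch of the first conversion step, assuming PyTorch; the input shape and file names are hypothetical, and the subsequent ONNX-to-NCNN step uses NCNN's `onnx2ncnn` command-line tool outside Python.

```python
import torch

# `model` is the pruned network produced in the previous sections.
model.eval()
dummy_input = torch.randn(1, 3, 224, 224)        # assumed input shape
torch.onnx.export(model, dummy_input, "pruned_vgg16.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)
# Then, on the command line:
#   onnx2ncnn pruned_vgg16.onnx pruned_vgg16.param pruned_vgg16.bin
```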
System Construction
The flow of deploying and running the target recognition algorithm based on the NCNN framework is shown in Figure 8. After obtaining the compressed model, we deployed it on a mobile platform with the ARM instruction set, taking VGG16 as the test example. The over-parameterized model was implemented and trained using the PyTorch deep learning framework, which provides good computational support for x86 CPUs and Nvidia GPUs but cannot run on ARM CPUs. For this reason, we use the ONNX (Open Neural Network Exchange) specification as an intermediary to convert the computational graph and parameters of the model to the file format used by NCNN, and then use NCNN to perform inference on the phone. Figure 9 shows the running results, and Table 2 compares the time consumption before and after pruning.
Conclusions
To obtain a more streamlined and compact network model with essentially unchanged accuracy, this paper proposes a fusion pruning method, which first uses filter clustering to avoid an overly concentrated weight distribution, and then removes unimportant channels according to the BN layer scaling factor based on information entropy stratification; the two are combined to achieve maximum compression. The experimental results show that on VGG16 the algorithm compresses the computation by a factor of 4.1 and the number of parameters by a factor of 10.74 while maintaining accuracy, with similarly good results on ResNet56. After deployment to the device for testing, the algorithm still shows excellent performance. The current experiments prove that, compared with the original deep learning model, the proposed algorithm brings large improvements in both running time and space. However, the optimized model has only been deployed on the mobile side, not actually run on wearable devices, so its performance and room for expansion have not been fully verified. Our future work will address this area, and we will try to add quantization methods to further optimize the model.
Conflicts of Interest:
The authors declare no conflict of interest. | 6,909 | 2022-04-11T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
DNA sequences perform as natural language processing by exploiting a deep learning algorithm for the identification of N4-methylcytosine
N4-methylcytosine is a biochemical alteration of DNA that affects genetic operations without modifying the DNA nucleotides, such as gene expression, genomic imprinting, chromosome stability, and the development of the cell. In the proposed work, a computational model, 4mCNLP-Deep, uses the word embedding approach as a vector formulation and exploits a deep learning based CNN algorithm to predict 4mC and non-4mC sites on the C. elegans genome dataset. A diversity of ranges was employed in the experiments, such as the corpus k-mer and k-fold cross-validation, to obtain the prevailing capabilities. 4mCNLP-Deep outperforms the state-of-the-art predictor in five evaluation metrics, achieving an Accuracy (ACC) of 0.9354, Mathew’s correlation coefficient (MCC) of 0.8608, Specificity (Sp) of 0.8996, Sensitivity (Sn) of 0.9563, and Area under curve (AUC) of 0.9731 using the 3-mer corpus word2vec and 3-fold cross-validation, attaining increments of 1.1%, 0.6%, 0.58%, 0.77%, and 4.89%, respectively. Finally, we developed the online webserver http://nsclbio.jbnu.ac.kr/tools/4mCNLP-Deep/ so that experimental researchers can obtain the results easily.
DNA methylation is a mechanism that entails the chemical modification of DNA sequences, changing hereditary performance without altering the DNA's nucleobases. DNA modification through methylation and demethylation plays a significant role in gene expression. DNA methylation can regulate various biological processes, including genomic imprinting, chromosome stability, and cell development, and extends the assortment of genes through its structural changes in DNA 1 . Prokaryotic and eukaryotic genomes undergo three types of methylation: N4-methylcytosine (4mC) 2 , 5-methylcytosine (5mC) 3 , and N6-methyladenine (6mA) 4 .
Gene modifications are assembled by distinct DNA methyltransferases (DNMTs), which transfer a methyl group to a particular exocyclic amino group 5 . 5mC is one of the most extensively studied types of cytosine methylation as a consequence of its widespread dissemination and complicated aspects 6 . 5mC plays an important role in numerous biological processes 7 associated with neurological diseases, diabetes, and cancer. 6mA, in contrast, occurs only on a very small scale and is only found in eukaryotes using high-sensitivity methods. 4mC is considered a dynamic epigenetic modification because of its role in the restriction-modification (R-M) system, which protects self-DNA from degradation by restriction enzymes. It was first discovered in 1983 8 . 4mC plays a significant role in the regulation of a number of processes intrinsic to the cell cycle, including gene expression, defining self and non-self DNA, DNA replication, and correcting DNA replication errors 9,10 . Investigational studies related to 4mC have waned in part due to a lack of sufficient identification techniques. While there are several experimental procedures capable of identifying 4mC sites, including mass spectrometry, whole-genome 4mC-Tet-assisted bisulfite sequencing, Single-Molecule Real-Time (SMRT) sequencing, and methylation-precise PCR [11][12][13][14] , these approaches are regarded as expensive and time-consuming when applied across an entire genome. Consequently, several computational predictors have been proposed [15][16][17][18][19] . These methods rely on state-of-the-art machine learning (ML) algorithms to make their predictions for 4mC sites. Each of these predictors used different encoding techniques, such as binary encoding, nucleotide chemical properties, and nucleotide frequencies, with various algorithms like the support vector machine, random forest, and decision tree. Recently, a new predictor, DNC4mC-Deep 20 , was proposed to identify and analyze 4mC sites in the Rosaceae genome, implementing a deep learning based Convolutional Neural Network (CNN) algorithm with six different encoding methods: binary encoding (BE), dinucleotide composition (DNC), trinucleotide composition (TNC), nucleotide chemical property (NCP), nucleotide chemical property and nucleotide frequency (NCPNF), and multivariate mutual information (MMI). Another deep learning based model, 4mCDeep-CBI 21 , was established to identify N4-methylcytosine sites in a newly developed dataset of Caenorhabditis elegans, implementing a 3-CNN and a Bidirectional Long Short-Term Memory (BLSTM) network to extract deep features for the prediction.
In this work, we developed a new tool, named 4mCNLP-Deep, to identify and analyze 4mC sites in the C. elegans dataset, which was recently expanded by increasing the number of samples. The structure of the proposed model was built as follows. First, we used the word2vec encoding method, which has never been used before in N4-methylcytosine identification, to transform sequences into vector form using word embedding. The word-embedding approach mostly operates in Natural Language Processing (NLP) 22 but has since been executed efficaciously in wide-genome identification [23][24][25][26][27][28] . We obtained the final CNN model by applying a grid search algorithm with tuned hyper-parameters and fed the word embedding vectors into it. We used k-fold cross-validation for different values of k. Then, we applied five evaluation metrics to assess the model. We also employed two applications, silico mutagenesis 29 and the saliency map 30 , to interpret the model.
Materials and methods
Benchmark datasets. The benchmark dataset of Caenorhabditis elegans (C. elegans) was obtained from Feng Zeng et al. 21 , who extended the existing dataset of Ye et al. 31 by producing new samples. The new samples were taken from the MethSMRT database and consisted of 4mC and non-4mC sites, each with a length of 41 bp. Two steps were taken. First, as shown in the Methylome Analysis Technical Note, the modification QV (modQV) score for the IPD ratio was remarkably dissimilar from the estimate; samples with a modQV score of greater than 30 were removed. Next, they used the CD-HIT 32 software to remove redundant, biased samples, to make sure a biased dataset would not miscalculate the accuracy results. The cut-off frequency used was 0.80.
Subsequently, the newly acquired samples were integrated with the benchmark dataset used in several research works, forming a dataset of 18747 samples; to reduce the similarity between the new and old samples, CD-HIT was used again. This produced a new C. elegans dataset with a total of 17808 samples, of which 11173 are 4mC samples and 6635 are non-4mC samples.
Distributed feature representation. Raw genomic datasets are by nature complicated and noisy. Out of this necessity, we focused on applying a computational model with an automatic feature representation learning approach to genomic data 33 . This method induces an optimal feature set and increases the performance of the computational model by reducing model complexity.
Vector representation of words, or word embedding, is the most well-known technique in natural language processing (NLP). Theoretically, it transforms each one-dimensional word into a continuous N-dimensional vector. The first word2vec model was proposed by Mikolov et al. 22 based on a neural network whose resulting outcomes are distributed representations characterizing sentences of linguistic words. This technique was much faster than preceding methods at training a continuous vector space model and lowering the dimensions. In recent years, the success of NLP, shown by advantageous applications such as speech recognition, language assistants, and translation devices, has brought substantial progress to word embedding methods. Furthermore, researchers have revealed that genetic data, whether DNA or RNA samples occurring within the structure of the cell, can be treated as language [34][35][36] . Additionally, several complex biological problems have been successfully addressed through NLP approaches [25][26][27][28]37 .
Corpus development is the first step in implementing the word2vec model; it divides the continuous biological sequence into groups of nucleotides of length k (k-mers) 38 , formulated as words, and identifies the linguistic associations among them. In this work, we carried out the preprocessing to compose the text corpus from the C. elegans genome and then trained the word2vec model. We produced the corpus using the whole C. elegans genome assembly (WBcel235/ce11), downloaded from http://hgdownload.soe.ucsc.edu/goldenPath/ce11/chromosomes/.
Firstly, the genome assembly was divided into seven chromosomes (chrI, chrII, chrIII, chrIV, chrM, chrV, chrX), and each chromosome was split into sequences of 41 nt to shape the sentences. A continuous bag-of-words (CBOW) approach was employed to train the word2vec model. CBOW determines the current word w(t) from the surrounding context words based on a predefined window size, which was set to 5. A biological sequence, mostly a combination of A, C, G, T nucleotides, is transformed into a sequence of words by setting the k-mer value. With k = 3 and overlapping, a DNA sequence ACGTCAGT forms the words ACG, CGT, GTC, TCA, CAG, AGT. Each 3-mer word is represented by a 100-dimensional vector. We experimented with the word2vec model for different values of k-mer, such as k = 2, k = 3, and k = 4. Complete details of the parameters used are shown in Table 1.
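A minimal sketch of this corpus construction and CBOW training, assuming gensim (version 4 or later, where the parameter is named `vector_size`); the corpus file name is hypothetical.

```python
from gensim.models import Word2Vec

def kmer_sentence(seq: str, k: int = 3):
    """Split a DNA sequence into overlapping k-mer 'words' (L - k + 1 of them)."""
    seq = seq.upper()
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# e.g. "ACGTCAGT" -> ['ACG', 'CGT', 'GTC', 'TCA', 'CAG', 'AGT']
sentences = [kmer_sentence(line.strip())
             for line in open("celegans_41nt_sentences.txt")]  # hypothetical corpus file

model = Word2Vec(sentences, vector_size=100, window=5, sg=0,   # sg=0 -> CBOW
                 min_count=1, workers=4)
vec = model.wv["ACG"]   # 100-dimensional vector for the 3-mer 'ACG'
```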
The proposed model
In this work, an adequate deep learning based CNN model is proposed for the prediction of N4-methylcytosine sites in the C. elegans genome. A CNN can acquire high-quality features automatically for classification instead of relying on manual handcrafting as in traditional supervised learning methods, although an assorted CNN model can also be built using handcrafted features. Convolutional Neural Networks have been utilized in several research areas, such as image processing 39,40 , natural language processing 41 , and computational biology [42][43][44][45][46][47] . A grid search algorithm was implemented with different hyper-parameter values to obtain the most optimal CNN model during learning. The range of parameters is shown in Table 2.
In the proposed work, the word2vec feature representation was introduced, which is completely different from previous works on N4-methylcytosine. A CNN based word2vec model was trained with the optimal settings obtained from the grid search. The input of the model is $(L - k + 1) \times 100$, where L is the length of the input sequence, k is the value of the k-mer, and 100 is the dimension of the vector for each word in the sequence sentence. The model contains three blocks, each with several layers and diverse parameter ranges. Each block comprises a convolution layer (Conv1D) with filter numbers of 32, 32, and 16, respectively, kernel sizes of 5, 5, and 4, correspondingly, and a stride of 1 for all. The convolution layers extract features automatically by activating on the appropriate input. In all convolution layers, L2 regularization of weights and biases is used to protect the model from overfitting; the values of both regularizations were set to 0.0001, and for all three Conv1D layers an exponential linear unit (ELU) was utilized as the activation function. Each block's Conv1D is followed by a group normalization (GN) layer, which condenses the outputs of the convolution layers. GN distributes the feature map into the desired number of groups and normalizes within each group; the number of groups was fixed at 4, 4, and 2, respectively, in each block. Moreover, a max-pooling layer (MaxPooling1D) was applied after the GN layer to minimize dispensable features and help reduce the dimensionality; the pool sizes were set to 4, 4, and 2, correspondingly, and the strides were set to 2 for all max-pooling layers. Right after MaxPooling1D, dropout layers were used to avoid overfitting while training the model, turning off the operations of some hidden nodes by setting their neurons to zero in the learning process.
After the convolution blocks, a flatten layer was used to unstack the outcomes and squash the feature vectors from the preceding layers. Furthermore, a fully connected (FC) layer with 32 neurons was implemented, with L2 weight and bias regularization set to 0.001 and ELU activation. Finally, the last FC layer was employed with a sigmoid activation function for binary classification. The sigmoid function squeezes the output onto a scale between 0 and 1 and represents the likelihood of the 4mC and non-4mC sites. Figure 1 shows the detailed architecture of the presented CNN model and the feature learning model. The proposed model 4mCNLP-Deep was implemented with Keras 48 . A stochastic gradient descent (SGD) optimizer was used with a momentum of 0.95 and a learning rate of 0.004. Binary cross-entropy is deployed as the loss function. We fixed 100 epochs and a batch size of 32 for the fit function. A callback function stored models and their corresponding weights through checkpoints, while early stopping halts training once the validation performance stops improving; the early-stopping patience was set to 20.
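The following is a minimal Keras sketch of the architecture just described, assuming TensorFlow 2.11 or later (where `GroupNormalization` is built in). The dropout rates are assumptions, since the text does not state them; all other hyper-parameters follow the description above.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

L, k = 41, 3                                       # sequence length and k-mer size
inputs = tf.keras.Input(shape=(L - k + 1, 100))    # (39, 100) word2vec matrix

x = inputs
# Three blocks: (filters, kernel, GN groups, pool size) = (32,5,4,4), (32,5,4,4), (16,4,2,2)
for filters, kernel, groups, pool in [(32, 5, 4, 4), (32, 5, 4, 4), (16, 4, 2, 2)]:
    x = layers.Conv1D(filters, kernel, strides=1, activation="elu",
                      kernel_regularizer=regularizers.l2(1e-4),
                      bias_regularizer=regularizers.l2(1e-4))(x)
    x = layers.GroupNormalization(groups=groups)(x)
    x = layers.MaxPooling1D(pool_size=pool, strides=2)(x)
    x = layers.Dropout(0.3)(x)                     # assumed rate

x = layers.Flatten()(x)
x = layers.Dense(32, activation="elu",
                 kernel_regularizer=regularizers.l2(1e-3),
                 bias_regularizer=regularizers.l2(1e-3))(x)
outputs = layers.Dense(1, activation="sigmoid")(x)  # P(4mC)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.004, momentum=0.95),
              loss="binary_crossentropy", metrics=["accuracy"])
```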
Performance evaluation metrics
The performance of the 4mCNLP-Deep model was measured by k-fold cross-validation; we used three different values of k (3-fold, 5-fold, and 10-fold) to carry out the preeminent identification. Cross-validation estimates the performance of the desired model by resampling: the whole dataset is merged and split into k groups, where, in 10-fold cross-validation for example, eight folds are used for training, one for validation, and one for testing, and the proposed CNN model is trained and tested over k rounds. Four metrics evaluate the performance of the model, Accuracy (ACC), Mathew's correlation coefficient (MCC), Specificity (Sp), and Sensitivity (Sn), with the following mathematical formulation [49][50][51] :

$$ACC = \frac{TP + TN}{TP + TN + FP + FN}, \qquad Sn = \frac{TP}{TP + FN}, \qquad Sp = \frac{TN}{TN + FP},$$

$$MCC = \frac{TP \cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}.$$
Here TN and TP represent the true negatives and true positives, the correctly identified 4mC and non-4mC sequences, respectively, whereas FN and FP denote the false negatives and false positives, the incorrectly identified sequences. Moreover, the receiver operating characteristic curve (ROC) and the area under the ROC curve (AUC) were also deployed to demonstrate the achievement of the presented deep learning model.
Results and discussion
A word2vec formulation technique was utilized with different ranges of k-fold cross-validation to predict N4-methylcytosine with the implemented optimal predictor and obtain the best performance.
Performance evaluation. In the proposed model 4mCNLP-Deep, we performed diverse experiments with a distinct assortment of values for the corpus k-mer (2, 3, 4) and k-fold (3, 5, 10) on the C. elegans dataset. Each model utilized word2vec with a different k-mer value and various numbers of folds to check the best performance; for example, 2-mer word2vec was implemented with 3, 5, and 10 folds of cross-validation. As the value of k increases, the disparity in size between the training set and the resampling subset gets smaller, and the resulting model returns more stable results; if the difference increases, the bias of the procedure becomes larger and affects the model outcomes. Accordingly, the proposed model gives better results with 10 folds for each k-mer of word2vec. For a constructive comparison, we compared our predictor with the state-of-the-art model 4mCDeep-CBI 21 and scrutinized the credibility of the model in identifying the 4mC and non-4mC sites. 4mCDeep-CBI applied 3-fold cross-validation; with 3-mer word2vec and 3-fold cross-validation, our model outperformed it, reporting an ACC of 0.9354, MCC of 0.8608, Sp of 0.8996, Sn of 0.9563, and AUC of 0.9731, accomplishing increments of 1.1%, 0.6%, 0.58%, 0.77%, and 4.89%, respectively. The detailed experimental results of 4mCNLP-Deep over all ranges are shown in Table 3, and the performance evaluation of 4mCNLP-Deep against the state of the art is demonstrated in Fig. 2. The first method for deciphering a convolutional neural network model in computational and statistical biology is silico mutagenesis, which has been used in various scientific works 29,52 . It operates by mutating each nucleotide of the fixed-length sequences to each of the four bases A, C, G, T, one position at a time. In this systematic methodology, the model records the outcome of each resulting mutation and keeps the absolute difference in output; the mutated predictions are then averaged over the complete dataset.
A heat map was implemented to show the impact of the mutations. Figure 3 visualizes the mutations on the C. elegans dataset as an indigenous feature of the model's learning phase. The influence of mutation on the final identification is smaller at the center of the sequence because of the C nucleotide that marks the N4-methylcytosine modification; recasting this C nucleobase can indicate a unique kind of gene modification. In comparison, a larger influence of mutation can be seen at the flanks of the heat map, which indicates that nucleotide modifications there can change the result of cytosine recognition.
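A minimal sketch of this mutagenesis scan, assuming the Keras model above; `embed` is a hypothetical helper that maps a 41-nt sequence to a batched word2vec matrix of shape (1, L - k + 1, 100).

```python
import numpy as np

BASES = "ACGT"

def mutagenesis_map(seqs, model, embed):
    """Average |change in prediction| per (base, position) over a set of sequences."""
    effect = np.zeros((len(BASES), len(seqs[0])))
    for seq in seqs:
        base_pred = model.predict(embed(seq), verbose=0)[0, 0]
        for pos in range(len(seq)):
            for b, base in enumerate(BASES):
                if base == seq[pos]:
                    continue                     # skip the identity "mutation"
                mut = seq[:pos] + base + seq[pos + 1:]
                effect[b, pos] += abs(model.predict(embed(mut), verbose=0)[0, 0]
                                      - base_pred)
    return effect / len(seqs)   # aggregated average over the dataset (heat map)
```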
The second technique for interpreting the deep learning based CNN model is the saliency map, which helps identify the most influential features of the sequence through the gradient of the model with respect to the final prediction. It points out the most significant characteristics in the samples for classifying the modification class, and several investigators have used it in their work 30,53 . For visualization, the efficiency of each location was derived by the pointwise product of the saliency map with the vector encoding, to obtain the derived values for the actual nucleotide characters A, C, G, T of the sequences. We experimented by splitting the samples into 3-mer chunks across the whole sequence using the formulation $L - k + 1$. The effect of tri-nucleotide letters at each place on the outcome over the whole C. elegans dataset is shown in Figure 4. In the middle of the words, the CAA motif has a significant magnitude, illustrating the most vital features in the sample for the CNN model's identification; the underlying 4mCNLP-Deep thereby also indicates the gene modification related to N4-methylcytosine.
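A minimal sketch of the gradient saliency computation, assuming TensorFlow and the Keras model above; `x` is the (L - k + 1) × 100 embedding matrix of one sequence.

```python
import tensorflow as tf

def saliency_scores(model, x):
    """One importance value per k-mer position: gradient times input encoding."""
    x = tf.convert_to_tensor(x[None, ...], dtype=tf.float32)   # add batch dim
    with tf.GradientTape() as tape:
        tape.watch(x)
        pred = model(x)[:, 0]              # sigmoid output for the sequence
    grad = tape.gradient(pred, x)[0]       # d(prediction) / d(input)
    # Pointwise product with the encoding, summed over the embedding dimension.
    return tf.reduce_sum(grad * x[0], axis=-1).numpy()
```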
Application of clinical research. C. elegans is considered a hereditary model organism used in the study of physiology that sums up aspects of human disease. It is a widely applied non-mammalian animal model, well proven for highly versatile experiments in research on genetics, development, aging, muscle physiology, and radiobiology. The main purpose of clinical research on C. elegans is to identify the genes that provide information about the mechanisms of human disease development and help enhance diagnosis and treatment. Clinical experiments are costly and time-consuming when used for the whole genome; therefore, computational approaches [25][26][27][28]37 contribute a better solution. Applications of this type help biologists through freely accessible online tools.
Conclusion
In the presented work, we introduced a persuasive computational biological model, 4mCNLP-Deep, for the prediction of 4mC and non-4mC sites. The expanded dataset of C. elegans was utilized for training and testing the deep learning model. Furthermore, a unique encoding technique was applied to transform the sequences into vector representations using word embedding for the deep learning model. An optimal CNN algorithm was deployed after obtaining the best settings through hyperparameter tuning with a grid search. We performed several experiments over the values of the corpus k-mer and the k of the cross-validation, and all the experimental results outperform the existing model. For a rational comparison, 3-mer word2vec with 3-fold cross-validation showed a prominent result, which indicates the effective performance and high intelligence of the model in predicting N4-methylcytosine sites. In the proposed work, five evaluation metrics were used, ACC, MCC, Sp, Sn, and AUC, to measure the robustness and productivity of the identification. Lastly, two diverse approaches, silico mutagenesis and the saliency map, were employed to interpret our deep learning based CNN model and understand the biological significance of the gene modification. 4mCNLP-Deep can be adopted by biologists and create a high impact in identifying different kinds of gene modification, specifically N4-methylcytosine, and in specifying brain-related diseases or developmental irregularities. In the future, we will expand the model in a proper and efficient way for the prediction of all kinds of gene modification, which will make a large contribution to the fields of bioinformatics and computational biology. Moreover, we developed the online webserver http://nsclbio.jbnu.ac.kr/tools/4mCNLP-Deep/ so that experimental researchers can obtain the results easily. | 4,621.4 | 2021-01-08T00:00:00.000 | [
"Computer Science",
"Biology"
] |
A Novel Method for Ocean Wind Speed Detection Based on Energy Distribution of Beidou Reflections
The Global Navigation Satellite System Reflectometry (GNSS-R) technique exploits the characteristics of reflected GNSS signals to estimate the geophysical parameters of the earth’s surface. This paper focuses on investigating the wind speed retrieval method using ocean scattered signals from a Beidou Geostationary Earth Orbit (GEO) satellite. Two new observables are proposed by computing the ratio of the low energy zone and the high energy zone of the delay waveform. Coastal experimental raw data from a Beidou GEO satellite are processed to establish the relationship between the energy-related observables and the sea surface wind. When the delay waveform normalized amplitude (this will be referred to as “threshold” in what follows) is 0.3, fitting results show that the coefficient of determination is more than 0.76 in the gentle wind scenario (<10 m/s), with a root mean square error (RMSE) of less than 1.0 m/s. In the Typhoon UTOR scenario (12.7 m/s~37.3 m/s), the correlation level exceeds 0.82 when the threshold is 0.25, with a RMSE of less than 3.10 m/s. Finally, the impact of the threshold and coherent integration time on wind speed retrieval is discussed to obtain an optimal result. When the coherent integration time is 50 milliseconds and the threshold is 0.15, the best wind speed retrieval error of 2.63 m/s and a correlation level of 0.871 are obtained in the UTOR scenario.
Introduction
As one of the most important links in the global climate system, the ocean plays a decisive role in regulating the climate through the exchange of energy with the atmosphere and water circulation. Coastal areas are threatened by storms and typhoons, especially in the northwest Pacific Ocean. Therefore, monitoring the offshore sea-state is necessary to ensure the safety of social activities in local areas. Traditional methods such as buoys and active radars have performed well in sea-state detection. However, their high cost and geographic dependence limit their quantitative distribution. The Global Navigation Satellite System Reflectometry (GNSS-R) technique has been an innovative option for remote sensing since it was first proposed for mesoscale altimetry by Martin-Neira in 1993 [1]. This technique exploits signals of opportunity from GPS or other GNSS constellations (Galileo, Beidou, Glonass, etc.) reflected off the Earth's surface to retrieve various geophysical parameters of the Earth's surface. During the initial period, scientists mainly focused on GNSS-R based ocean remote sensing, such as sea altimetry and scatterometry [1][2][3][4][5][6][7][8]. During the last two decades, the applications of the GNSS-R technique have expanded to various fields, such as monitoring of sea-ice, sea salinity, snow depth, oil spilling and soil moisture [9][10][11][12][13]. Meanwhile, various experimental activities have been performed to demonstrate the performance of this technique. A detailed review of GNSS-R principles, applications and future space-borne missions can be found in [14,15].
Theoretical Analysis
This section analyzes the relationship between GNSS-R waveforms and wind speed to derive the sensitivity of the observables. The shape of the GNSS-R waveform characterizes the sea surface roughness: the higher the wind speed, the rougher the sea surface and, thus, the greater the extension of the waveform. This section focuses on the energy distribution of the GNSS-R waveform and defines the bistatic radar model, which describes the total average correlation power of the scattered GNSS signal as a function of the time delay ∆τ and the frequency offset ∆f with respect to the time delay and Doppler frequency associated with the nominal specular point on the ocean's surface [30]:

Y_r(∆τ, ∆f) = ∬_D [G_r σ⁰ / (4π R_ts² R_sr²)] R_AC²(∆τ − τ(ρ)) |S(∆f − f(ρ))|² d²ρ

where Y_r(·) is the averaged power waveform; D is the integration area; G_r is the receiver's antenna gain; R_ts is the distance from the transmitter to the reflection point ρ; R_sr is the distance from ρ to the receiver; ∆τ and ∆f are the delay offset and Doppler frequency offset, respectively; and τ(ρ) and f(ρ) are the delay offset and Doppler frequency shift at the scatter point, respectively. R_AC is the auto-correlation function of the GNSS ranging code, defined as R_AC(τ) = 1 − |τ|/τ_c when |τ| ≤ τ_c and R_AC(τ) = 0 elsewhere (τ_c is the length of one chip of the C/A code). |S| is the Doppler frequency filter function, defined as |S| = |sin(πf)/(πf)|. σ⁰ is the normalized bistatic radar cross section of the sea surface, which is calculated and expressed as a function of the probability density function (PDF) of the slopes in the geometric optics limit:
σ⁰ = π |ℜ|² (q/q_z)⁴ P(−q_⊥/q_z)

where ℜ is the Fresnel reflection coefficient, q is the scattering vector (with vertical component q_z and horizontal component q_⊥), and P is the PDF of the surface slopes. In the following simulation, the PDF is assumed to be a 2-D zero-mean Gaussian distribution [31]. The mean square slopes of the up-wind and cross-wind directions, σ_u and σ_c, are approximated with the simplified sea roughness model proposed by Katzberg [32], where f(u) is a function of the wind speed at a height of 10 m above the sea surface. To analyze the relationship between the wind speed and the energy distribution of the waveform, a simulation was performed with the parameters listed in Table 1. Considering the rather small effect of Doppler frequency spreading in the following experimental scenario, the simulation mainly focuses on the variation of the delay waveform. Figure 1 shows the normalized delay waveforms under different wind conditions. As shown in Figure 1a, we set a constant wind direction of 20 deg and simulate 1-D delay waveforms corresponding to different wind speeds. The extension of the trailing edge shows a dependence on wind speed, while the leading edge of the waveform remains relatively stable across wind speeds. As shown in Figure 1b, we set a constant wind speed of 6 m/s and simulate the waveforms corresponding to different wind directions; this shows only a slight change in the trailing edge of the waveform.
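As an illustration of the model's building blocks, a minimal sketch of the code autocorrelation R_AC and the Doppler filter |S| defined above; the scaling of the sinc argument by the coherent integration time is our assumption, since the text writes |S| = sin(πf)/(πf) without specifying the argument:

```python
import numpy as np

TAU_C = 1.0  # length of one C/A-code chip, in chip units

def r_ac(tau, tau_c=TAU_C):
    """Triangular code autocorrelation: 1 - |tau|/tau_c within one chip,
    zero elsewhere."""
    tau = np.asarray(tau, dtype=float)
    return np.where(np.abs(tau) <= tau_c, 1.0 - np.abs(tau) / tau_c, 0.0)

def doppler_filter(delta_f, t_coh=1e-3):
    """|S| as a sinc of the Doppler offset; np.sinc(x) = sin(pi x)/(pi x)."""
    return np.abs(np.sinc(delta_f * t_coh))

print(r_ac([-1.5, -0.5, 0.0, 0.5, 1.5]))              # [0. 0.5 1. 0.5 0.]
print(doppler_filter(np.array([0.0, 500.0, 1000.0]))) # null at 1/t_coh
```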
Methods
The trailing edge of the delay waveform is divided into two parts, named the low energy zone and the high energy zone, respectively. Figure 2 illustrates these two energy zones at a wind speed of 2 m/s.
As shown in Figure 2, the trailing edge is divided into two energy zones by two thresholds. We set Threshold1 = P_noise, where P_noise is the noise power. Threshold2 is the boundary value between the high energy zone and the low energy zone: Threshold1 < Threshold2 < 1; a detailed discussion of the threshold is presented in Section 4.3. The delay range of the high energy zone is (τ0, τ1), where τ0 and τ1 are the delay values at which the correlation amplitude reaches its maximum and equals Threshold2, respectively. The delay range of the low energy zone is (τ1, τ2), where τ2 is the delay value at which the amplitude equals Threshold1. The new observables used to derive wind speed are described in the following sections.
Averaged Amplitude Ratio of Low Energy Zone and High Energy Zone
The first observable, named EMR, is the ratio of the averaged correlation amplitudes of the low energy zone and the high energy zone:

EMR = Y_r^l / Y_r^h

where Y_r^l and Y_r^h are the average amplitudes of the low energy zone and the high energy zone, computed over the delay ranges (τ1, τ2) and (τ0, τ1), respectively.
Area Ratio of the Low Energy Zone and High Energy Zone
The second observable, named EDR, is the area ratio of the low energy zone and the high energy zone:

EDR = Area_l / Area_h

where Area_l and Area_h are the areas of the low energy zone and the high energy zone, respectively.
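Since the original equations did not survive extraction cleanly, the following sketch implements EMR and EDR directly from the verbal definitions above; the first-crossing search along the trailing edge is an implementation assumption:

```python
import numpy as np

def energy_observables(tau, y, threshold1, threshold2):
    """EMR and EDR of a normalized delay waveform y sampled at delays tau.

    tau0 is the peak delay; tau1 and tau2 are the first trailing-edge
    delays where the amplitude drops to threshold2 and threshold1."""
    i0 = int(np.argmax(y))                         # specular peak -> tau0
    trail = y[i0:]
    i1 = i0 + int(np.argmax(trail <= threshold2))  # crossing of Threshold2
    i2 = i0 + int(np.argmax(trail <= threshold1))  # crossing of Threshold1
    high, low = y[i0:i1 + 1], y[i1:i2 + 1]
    emr = low.mean() / high.mean()                 # averaged-amplitude ratio
    edr = np.trapz(low, tau[i1:i2 + 1]) / np.trapz(high, tau[i0:i1 + 1])
    return emr, edr
```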
Experimental Setup
The coastal experiment was conducted to observe typhoon events in the Yangjiang site during the summer of 2013 within the cooperation of the ESA-China in GNSS Reflectometry [25].
As shown in Figure 3, the antennas were mounted directly on the roof of a building on the mountain (21.56° N, 111.86° E), approximately 120 m above the sea surface. Both antennas are compatible with frequencies of 1575.42 MHz and 1561.098 MHz. The right-hand circular polarization antenna was used to collect direct signals from GNSS satellites with zenith-looking, and the left-hand circular polarization antenna, pointing toward the sea surface, was used to collect reflected signals, as shown in Figure 3. To collect the weak reflected signal, the left-hand circular antenna has a high gain of 12 dB and a narrow beam width of 38°. A two-channel GNSS receiver, with detailed parameters shown in Table 2, was employed to collect GPS and Beidou signals.
During the experiment, Beidou signals were collected with a fixed time length of 250 s each time. In this paper, the reflected signals from the Beidou GEO4 satellite (elevation 31°, azimuth 108°) are processed. The in situ wind speed measurements from the Zhapo meteorological station (No. 59674) are used to assess the performance of the results.
Data Processing
The data processing mainly included three parts: raw data preprocessing, averaged-waveform computation and observable computation.
• Raw data preprocessing. The direct intermediate frequency (IF) signals were tracked to calculate the precise code delay and Doppler frequency. The reflected IF signals were cross-correlated against locally generated code replicas at different delays, which were estimated from the direct-signal code phase, as shown in Figure 4.
• Averaged-waveform computation. In the experimental scenario, the collected reflected signals were contaminated by different kinds of factors, such as thermal noise, speckle noise, and direct-signal crosstalk, so the waveform output by the Beidou-Reflected software receiver must be post-processed. The output 1 ms complex waveform of the reflected signal is

y_r(t, τ) = I_r(t, τ) + jQ_r(t, τ)

where I_r(t, τ) and Q_r(t, τ) are the in-phase and quadrature components of the complex waveform, respectively. As discussed in [33], coherent and incoherent averaging were employed to increase the SNR of the waveform. Here, the coherent integration time was set to 50 ms. Then, exploiting the different coherence properties of the direct and reflected signals, the crosstalk of the direct signal was removed, yielding the averaged power waveform Y_r(τ), where y_r_50(t_k, τ) is the 50 ms coherently integrated complex waveform and N is the number of incoherent averages (N = 5000); a sketch of this computation is given after this list. Figure 5 shows the power waveforms at 12:00 on August 13, 2013 in the Yangjiang experiment: Figure 5a plots 250,000 consecutive 1 ms power waveforms and Figure 5b plots the averaged power waveform.
• Observable computation
The newly proposed observables EMR and EDR were then extracted from the normalized power waveforms recorded during the Yangjiang coastal experiment.
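A minimal sketch of this processing chain; the crosstalk-removal step, subtracting the squared magnitude of the coherent mean from the incoherent average, is our reading of the method attributed to [33], since the original equation did not survive extraction:

```python
import numpy as np

def averaged_power_waveform(y_1ms, n_coh=50):
    """Coherently sum n_coh consecutive 1 ms complex waveforms, then
    incoherently average the resulting power waveforms.

    y_1ms: complex array of shape (M, n_delay), M a multiple of n_coh."""
    m, n_delay = y_1ms.shape
    blocks = y_1ms[: (m // n_coh) * n_coh].reshape(-1, n_coh, n_delay)
    y_coh = blocks.sum(axis=1)                   # 50 ms coherent waveforms
    power = np.mean(np.abs(y_coh) ** 2, axis=0)  # incoherent average (N blocks)
    crosstalk = np.abs(y_coh.mean(axis=0)) ** 2  # coherent (direct-signal) part
    return power - crosstalk
```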
Results, Analysis and Discussion
To investigate the performance of the proposed method, we processed the reflected signals from the Beidou GEO4 satellite from August 3 to August 5 and from August 13 to August 14 in 2013. In the second period, the Typhoon UTOR approached the coast of Yangjiang of Guangdong province.
Wind Speed from the Meteorological Station
During the coastal experiment, the sea surface wind data from the Zhapo meteorological station were collected every 5 minutes in two periods, i.e., from August 3 to August 5 and from August 13 to August 14. Since these two periods include two typical wind states, gentle wind (1~10 m/s) and high wind (10 m/s~40 m/s), we chose them to verify the performance of the new observables for wind retrieval. From the afternoon of August 3 to the early morning of August 5, the wind showed a decreasing trend from 10 m/s to 2 m/s, as shown in Figure 6a. On the contrary, the wind speed showed an increasing trend from noon on August 13 to the early morning of August 14, ranging from 12.7 m/s to 37.3 m/s during the observation time.
Gentle Wind Scenario
The energy observables (EMR and EDR) extracted from August 3 to August 5 are shown in Figure 6b,c with Threshold2 = 0.3. To study the relationship between the observables and in situ wind speed, we resampled the in situ data to match the temporal resolution of the observables. As seen in Figure 6b,c, both observables show the same evolution as the in situ wind speed.
The scatter plots between wind speed and EMR are presented in Figure 7c, in which a strong linear dependence on wind speed of EMR can be observed. A simple linear polynomial is employed to describe their relationships.
WS = m · EMR + n
where m and n are coefficients obtained by least-squares fitting of EMR to the in situ wind speed. To evaluate the wind speed retrieval performance, two metrics were computed: the root mean square error (RMSE) and the coefficient of determination. As shown in Table 3, the correlation between EMR and wind speed is 0.769, with a RMSE of 0.89 m/s. As shown in Figure 7d, the scatter plots also show a strong linear relationship between wind speed and EDR. A similar linear polynomial was employed to describe their relationship.
WS = a · EDR + b

where a and b are fitted coefficients. The fitting result indicates a correlation of 0.765, with a RMSE of 0.90 m/s. Two existing wind speed retrieval methods, based on the coherent time and the relative amplitude (RA) [18,19], were used to compare retrieval performance with the new observables. The coherent time is the time over which the autocorrelation of the complex value at the specular delay point decays to 1/e of its maximum. The RA is the amplitude ratio between a delay point on the trailing edge and the specular point. Figure 7a,b illustrate the relationships between the wind speed and the RA and the coherent time, respectively. As presented in Table 3, both methods show good retrieval performance in the gentle wind scenario.
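A minimal sketch of the fitting and scoring used throughout this section, with hypothetical observable and wind-speed arrays:

```python
import numpy as np

def fit_and_score(observable, wind_speed):
    """Least-squares linear model WS = m * observable + n,
    scored by RMSE and the coefficient of determination R^2."""
    m, n = np.polyfit(observable, wind_speed, deg=1)
    predicted = m * observable + n
    residual = wind_speed - predicted
    rmse = np.sqrt(np.mean(residual ** 2))
    r2 = 1.0 - residual.dot(residual) / np.sum((wind_speed - wind_speed.mean()) ** 2)
    return (m, n), rmse, r2

emr = np.array([0.21, 0.25, 0.30, 0.36, 0.41])  # hypothetical EMR values
ws = np.array([2.1, 3.9, 5.8, 8.2, 9.7])        # hypothetical in situ winds
print(fit_and_score(emr, ws))
```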
High Wind Scenario
In this section, the proposed observables were measured during Typhoon UTOR. UTOR started as a tropical depression and evolved into a typhoon on August 11 in the western north Pacific. As an example, EMR and EDR were computed with Threshold2 set to 0.25; a detailed discussion of the threshold is presented in Section 4.3. The evolution of EMR and EDR from August 13 to August 14 is presented in Figure 8b,c. Compared with the in situ wind speed in Figure 8a, both EDR and EMR present the same trends, as illustrated in Figure 8b,c. The scatter plots between the wind speed and EMR and EDR during Typhoon UTOR are presented in Figure 9c,d, respectively. Both EMR and EDR show an obvious linear dependence on wind speed. A simple linear polynomial is employed to describe the relationship between wind speed and EMR.
WS = p · EMR + q (13)

where p and q are fitting coefficients. As shown in Table 4, the correlation between EMR and wind speed is 0.822, with a RMSE of 3.10 m/s. A similar linear polynomial was employed to establish the relationship between wind speed and EDR.
WS = k · EDR + c (14)

where k and c are fitting coefficients. The fitting result has a correlation of 0.866 and a RMSE of 2.69 m/s. As shown in Figure 9a,b, the coherent time and the RA method were also used for wind speed retrieval. Comparing the two, the coherent time shows a slightly better retrieval performance when employing a non-linear fitting, with a correlation coefficient of 0.875 and a RMSE of 2.61 m/s.
Threshold Effects
This paper defined two energy zones in the trailing edge of the normalized power waveform by setting two thresholds:

Threshold1 = P_noise, Threshold1 < Threshold2 < 1 (15)

where P_noise is the noise power, which can be estimated by averaging the samples of the waveform containing no scattered signal. Specifically, we used delay values between −2 and −1.5 chips before the specular point to estimate the noise. Since Threshold2 is an important factor when defining the energy zones, this section compares different values of Threshold2 in order to find the optimal threshold for wind speed retrieval. Table 5 shows the wind speed retrieval performances under different threshold values. As presented in Table 5, the EMR method shows a fluctuating correlation level with respect to Threshold2 in both scenarios. Alternatively, the EDR method presents relatively stable correlation values of around 0.73 and 0.85 in the two wind scenarios, respectively. Figure 10 illustrates the wind speed retrieval performances with Threshold2 ranging from 0.15 to 0.5. In the gentle wind scenario (Case 1, the blue line), the EMR reaches a relatively better correlation of 0.769 when Threshold2 is 0.3 and degrades rapidly as Threshold2 increases from 0.3 to 0.38. In the high wind scenario (Case 2, the red line), the EMR correlation level shows considerable fluctuation when Threshold2 ranges from 0.28 to 0.25. The EDR keeps a stable retrieval performance in both scenarios. As shown in Figure 10, in the gentle wind scenario, when Threshold2 is 0.3, the wind speed retrieval RMSE by the EMR and EDR methods are 0.89 m/s and 0.90 m/s and R² are 0.769 and 0.765, respectively. In the high wind scenario, when Threshold2 is 0.15, the RMSE by the EMR and EDR methods are 2.93 m/s and 2.63 m/s and R² are 0.841 and 0.871, respectively. According to these results, the optimal value of Threshold2 for wind speed retrieval is 0.3 in the gentle wind scenario and 0.15 in the high wind scenario.
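A sketch of the threshold sweep behind Table 5 and Figure 10, reusing energy_observables() and fit_and_score() from the earlier sketches; waveforms, tau and wind_speed are placeholders for the experimental data:

```python
import numpy as np

def sweep_threshold2(waveforms, tau, wind_speed, threshold1,
                     grid=np.arange(0.15, 0.51, 0.01)):
    """Return {Threshold2: (RMSE, R^2)} for the EMR observable."""
    results = {}
    for t2 in grid:
        emr = np.array([energy_observables(tau, y, threshold1, t2)[0]
                        for y in waveforms])
        _, rmse, r2 = fit_and_score(emr, wind_speed)
        results[round(float(t2), 2)] = (rmse, r2)
    return results
```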
Coherent Integration Time Effects
As the scattered signals are contaminated by additive thermal noise and speckle noise, coherent integration was employed over the complex waveforms to suppress these effects. We tested different lengths of coherent integration time to study their effect on wind speed retrieval and to determine an optimal value. Figure 11 presents the results for different coherent integration times. Both methods show degradation in wind speed retrieval performance with increasing coherent integration time. In order to preserve the correlation of the waveform, the coherent integration time should be smaller than the coherent time of the scattered signal [18]. In our coastal experiment scenario, the coherent time ranged from 50 ms to 150 ms, depending on the sea state. Therefore, the best performances were obtained, as expected, for a coherent integration time of 50 ms, in agreement with the principle that the coherent integration time should be smaller than the correlation time.
Conclusions
Two new observables that describe the energy distribution of the delay waveform were proposed in this paper. To study the dependence of the reflected GNSS signal on the sea surface wind speed, we divided the waveform into two zones, i.e., a low-energy zone and a high-energy zone, and proposed two new observables as inputs for alternative wind speed retrieval methods. EMR is the ratio of energy between the low-energy zone and the high-energy zone. EDR is based on the ratio of the pixel volume between the low-energy zone and the high-energy zone. This paper processed two periods of data from the Beidou GEO satellite collected during the coastal experiment in Yangjiang, representing two typical wind states: gentle and high wind scenarios. Both observables showed good wind speed retrieval performance in the gentle wind scenario: the wind speed retrieval errors were less than 1.0 m/s, with a correlation R² of 0.76, when Threshold2 = 0.3. In the high wind scenario, the dataset recorded during Typhoon UTOR was processed. When Threshold2 = 0.25, EMR and EDR showed a strong linear relationship with the high wind speed, reaching a RMSE of 3.10 m/s and 2.69 m/s with correlation R² of 0.822 and 0.866, respectively.
To obtain an optimal retrieval result, the influences of threshold and coherent time on wind speed retrieval were analyzed. Finally, the optimal retrieval performances were obtained, with an RMSE of 2.63 m/s and a correlation of 0.871 in the high wind scenario. In the gentle wind scenario, the optimal RMSE of wind speed reached 0.89 m/s and the correlation was 0.769.
We neglected the wind direction when setting up the wind speed retrieval models in this paper. In the future, the relationship between wind direction and the delay waveform, and the influence of wind direction on the waveform, will be considered, which should lead to a more accurate wind speed retrieval model. Besides, it should be noted that the buoy is located near the gulf of Yangjiang Island and that the distance between the GNSS coastal experiment station and the in situ measurement station is around 10 km, which may introduce additional errors when fitting the retrieval model. Referring to additional ocean meteorological information, such as tide and swell during the experiment, may provide further opportunities to investigate the relationship between wind speed and the observables in the future.
Author Contributions: Q.W. conceived the main idea of this article and wrote this paper. Y.Z. analyzed the wind retrieval method based on the coherent time. Q.W. and Y.Z. discussed the influences of the factors on wind speed retrieval. K.K. and Q.W. supplemented the theory part during the manuscript modification.
"Environmental Science",
"Engineering",
"Physics"
] |
Computational and experimental characterization of the novel ECM glycoprotein SNED1 and prediction of its interactome
The extracellular matrix (ECM) protein SNED1 has been shown to promote breast cancer metastasis and to control neural-crest-cell-specific craniofacial development, but the cellular and molecular mechanisms by which it does so remain unknown. ECM proteins exert their functions by binding to cell surface receptors, sequestering growth factors, and interacting with other ECM proteins, actions that can be predicted using knowledge of a protein's sequence, structure, and post-translational modifications. Here, we combined in-silico and in-vitro approaches to characterize the physico-chemical properties of SNED1 and infer its putative functions. To do so, we established a mammalian cell system to produce and purify SNED1 and its N-terminal fragment, which contains a NIDO domain. We determined experimentally SNED1's potential to be glycosylated, phosphorylated, and incorporated into the insoluble ECM produced by cells. In addition, we used biophysical and computational methods to determine the secondary and tertiary structures of SNED1 and its N-terminal fragment. The tentative ab-initio model we built of SNED1 suggests that it is an elongated protein presumably able to bind multiple partners. Using computational predictions, we identified 114 proteins as putative SNED1 interactors. Pathway analysis of the newly predicted SNED1 interactome further revealed that binding partners of SNED1 contribute to signaling through cell surface receptors, such as integrins, and participate in the regulation of ECM organization and developmental processes. Altogether, we provide a wealth of information on an understudied yet important ECM protein, with the potential to decipher its functions in physiology and disease.
INTRODUCTION
The extracellular matrix (ECM) is a complex scaffold made of hundreds of proteins that instructs cell behaviors, organizes tissue architecture, and regulates organ function (1). It plays prominent roles during embryonic development, aging, and diseases (2)(3)(4)(5)(6)(7). Mechanistically, ECM proteins can play these roles through their interactions with each other, with growth factors or morphogens, and with receptors present at the cell surface (1,8,9). These molecular interactions are mediated by specific protein domains, motifs, or sequences and govern the nature of the chemical and mechanical signals conveyed by the ECM. Characterizing the composition of the ECM alongside the interactions taking place within this compartment and determining how they regulate cellular functions is the first step towards building a systems biology view of the ECM.
We previously used the characteristic domain-based organization of known ECM proteins (10)(11)(12) to computationally predict, via sequence analysis, the ensemble of genes coding for ECM proteins and ECM-associated proteins. We termed this ensemble the "matrisome" (13). Our interrogation of the human genome found that 1027 genes encoded matrisome proteins, among which 274 encoded "core" ECM components such as collagens, proteoglycans, and glycoproteins. While all 44 collagen genes (14) and 35 proteoglycan genes (15) had previously been reported, several of the 195 genes predicted to encode structural ECM glycoproteins based on the protein domains present were, or still are, of unknown function (13,16,17). One such gene is SNED1. It encodes the Sushi, Nidogen, and EGF-like domain-containing protein 1 (SNED1) and was initially named Snep, for stromal nidogen extracellular matrix protein, since the murine gene was cloned from stromal cells of the developing renal interstitium and its pattern of expression overlapped with that of the ECM basement membrane proteins nidogens 1 and 2 (18). Sned1 is broadly expressed during mouse development, particularly in neural-crest-cell and mesoderm derivatives (18,19). The interrogation of RNA-seq databases indicates that SNED1 is also expressed in multiple human adult tissues, although at a low level (unpublished data from the Naba lab). A decade after the cloning of this gene, we identified SNED1 in a proteomic screen comparing the ECM of poorly and highly metastatic mammary tumors and further reported the first function of this protein as a promoter of mammary tumor metastasis (20). Intrigued by this novel protein, we sought to identify its physiological roles. To do so, we generated a Sned1 knockout mouse model and demonstrated that SNED1 controls craniofacial development (19).

The antibody was eluted with glycine solution at pH 3 and pH 2.5, dialyzed against phosphate-buffered saline (PBS), and stored at 4°C. The reactivity and specificity of the antibody were assessed by western blot (Supplementary Figure S1).
Plasmid constructs
The cDNA encoding full-length human SNED1 (fl-SNED1) cloned into pCMV-XL5 (clone SC315884) was obtained from Origene. The cDNA encoding full-length murine Sned1 cloned into pCRL-XL-TOPO (clone 40131189) was obtained from Open Biosystems (now Thermo Fisher). Fl-SNED1 and a construct spanning the most N-terminal region of SNED1 and including the NIDO domain (amino acids 1 to 260, referred to as "N-terminal fragment" of SNED1 in the text and as "N-ter" in the figures) were subcloned into the bicistronic retroviral vector pMSCV-IRES-Hygromycin between the BglII and HpaI sites, and a FLAG tag (DYKDDDDK) was added at the C-terminus of both constructs ( Figure 1A). These constructs were used to establish stable cell lines (see below). 6x-His-tagged constructs of human and murine SNED1 cloned into pCDNA5/FRT (Thermo Fisher) between the FseI and AscI sites were used to transiently transfect 293T cells to validate the anti-SNED1 antibody generated in this study (Supplementary Figure S1). Fl-SNED1 was subcloned into p-Select-eGFP-Blasti (Invivogen) between the AgeI and NcoI restriction sites. Fl-SNED1-GFP or GFP alone were then shuttled into the bicistronic retroviral vector pMSCV-IRES-Puromycin between the BglII and EcoRI sites. Retroviral particles were obtained as described below and used to express GFP and fl-SNED1-GFP in immortalized mouse embryonic fibroblasts (see below). All primers used are listed in Supplementary Table S1. All constructs were validated by Sanger sequencing.
Cell culture
Human embryonic kidney (HEK) 293T cells (further referred to as 293T cells) were cultured in Dulbecco's Modified Eagle's medium (DMEM) supplemented with 10% fetal bovine serum and 2 mM glutamine.
Retrovirus production
293T cells were plated at ~30% confluency and transfected 24 h later using the Lipofectamine 3000 system (Invitrogen) with a mixture containing 1 μg of retroviral vector with the construct of interest, 0.5 μg of packaging vector (pCL-Gag/Pol), and 0.5 μg of coat protein (VSVG). The transfection mix was prepared according to the manufacturer's instructions and added to the cells for 24 h, after which the transfection mix was removed and cells were fed with fresh culture medium. Cells were then cultured for an additional 24 h, after which the viral-particle-containing culture medium was collected, filtered through a 0.45-μm filter, and then either stored at −80°C or immediately used.
Establishment of 293T cells stably expressing fl-SNED1 or the N-terminal fragment of SNED1
293T cells were plated at ~40% confluency. Undiluted viral-particle-containing conditioned medium (see above) was added to the cells 24 h after seeding, and cells were fed with fresh culture medium 24 h after transduction. Transduced cells were selected with hygromycin (100 μg/mL) over a period of 10 days. Protein expression and secretion were monitored by performing western blot analysis on cellular protein extracts obtained by lysing cells using 3X Laemmli buffer (0.1875 M Tris-HCl, 6% SDS, 30% Glycerol) supplemented with 100 mM dithiothreitol, and on the cell conditioned media (CM) with the rabbit polyclonal anti-SNED1 antibody (2 μg/mL) described below, a rabbit polyclonal anti-FLAG antibody (2 μg/mL; Sigma, F7425), or the monoclonal anti-FLAG M2 antibody (Sigma, F3165).
Secondary anti-rabbit antibody conjugated to the horseradish peroxidase (Thermo Fisher, 31460) was used and immunoreactive bands were detected by chemiluminescence (SuperSignal West Pico PLUS, Thermo Fisher or ECL Prime Western Blotting System, GE Healthcare).
For large-scale expression of fl-SNED1 and the N-terminal fragment of SNED1, cells were cultured in HYPERFlasks™ in DMEM (Sigma-Aldrich, D5796) supplemented with 50 μg/mL of gentamicin (Sigma-Aldrich, G1272) as previously described (21). Culture media were harvested every 48 h for up to 18 days. After collection, 3 tablets of EDTA-free cOmplete inhibitor (Roche) were added to the culture medium, which was then centrifuged at 14,000 × g for 30 min at 4°C. Supernatants were stored at -80°C until use. Fl-SNED1 and the N-terminal fragment were purified by affinity chromatography on an anti-FLAG resin (Sigma-Aldrich, A2220) as previously described (21) in the presence of 150 mM NaCl. In brief, FLAG-tagged proteins were purified at 4°C on the anti-FLAG resin at a flow rate of 20 mL/h with a P1 pump (GE Healthcare) and eluted by competition with a FLAG peptide solution at 200 μg/mL in 10 mM HEPES, 150 mM NaCl, pH 7.4 (HEPES-buffered saline, HBS). The purified proteins were then concentrated on either Amicon (Merck Millipore, MWCO 10 kDa) or Vivaspin (Sartorius, MWCO 5 kDa) concentration columns. The yield was approximately 200 μg/L and 600 μg/L of culture medium for the N-terminal fragment of SNED1 and fl-SNED1, respectively.

iMEFs were plated at ~40% confluency. Undiluted viral-particle-containing conditioned medium containing the cDNA encoding fl-SNED1-GFP (see above) was added to the cells 24 h after seeding, and cells were fed with fresh culture medium 24 h after transduction. Transduced cells were selected with hygromycin (100 μg/mL) over a period of 10 days. Protein expression and secretion in the culture medium were monitored by performing western blot analysis on cellular protein extracts and conditioned medium obtained as described above, using the mouse monoclonal anti-GFP antibody [9F9.F9] (Abcam #ab1218; used at a final concentration of 2 μg/mL) and a secondary HRP-coupled anti-mouse antibody.
Deoxycholate (DOC) solubility assay
293T cells stably expressing FLAG-tagged fl-SNED1 were grown to full confluency and lysed in DOC buffer: 2% deoxycholate; 20 mM Tris-HCl, pH 8.8 containing 2mM EDTA, 2mM N-ethylamine, 2mM iodoacetic acid, 167 μg/mL DNase and 1X protease inhibitor (Thermo Scientific, A32953) as previously described (22). Lysate was then passed through a 26G needle to further shear DNA and reduce viscosity. Centrifugation was used to pellet the DOC-insoluble, ECM-enriched, protein fraction from the DOC-soluble supernatant, enriched for intracellular components. These fractions were analyzed for the presence of SNED1 by western blot as described above.
Preparation of cell-derived matrices
Cell-derived matrices (CDMs) from iMEFs were prepared following the protocol published by the Schwarzbauer lab (23). In brief, glass coverslips were coated with 0.2% gelatin (Sigma, G1890) for 1 h
Immunofluorescence
CDMs were fixed with 4% paraformaldehyde, and immunofluorescence staining was performed using the following primary antibodies: mouse monoclonal anti-GFP antibody [9F9.F9] (Abcam #ab1218; used at a final concentration of 10 μg/mL) and rabbit serum containing anti-fibronectin polyclonal antibodies (a kind gift from Richard Hynes, MIT; used at a 1:100 dilution), and the following secondary antibodies: goat anti-mouse coupled to Alexa Fluor 647 (Thermo Fisher A21236; used at 4 μg/mL) and goat anti-rabbit coupled to Alexa Fluor 568 (Thermo Fisher A11036; used at 4 μg/mL).
Coverslips were mounted on glass slides and imaged using the Zeiss Axio Imager Z2 or the Zeiss Confocal LSM 880. Images were acquired and processed using ZEN v2.3. All negative control stainings are provided in Supplementary Figure S9.
Analysis of SNED1 post-translational modifications by SDS-PAGE and western blot
Conditioned media from 293T cells stably expressing FLAG-tagged fl-SNED1 and the N-terminal fragment of SNED1 were incubated with PNGase F as previously described (21), or with heparinase III and chondroitinase ABC (2 mU per 40 μL of conditioned medium) as previously described (24).
Proteins were separated by SDS-PAGE and transferred onto nitrocellulose membranes. Membranes were probed with an anti-FLAG antibody or the rabbit polyclonal anti-SNED1 antibody generated in this study to identify recombinant SNED1. To determine whether SNED1 is phosphorylated, FLAG-tagged fl-SNED1 was immunoprecipitated from 1.25 mL of medium conditioned by cells for 72 h using an anti-FLAG resin (Sigma-Aldrich, A2220). Bound proteins were resolved by SDS-PAGE, and western blots were performed with the anti-FLAG antibody to validate the immunoprecipitation of FLAG-tagged fl-SNED1 and with anti-phosphoserine (1 μg/mL; Abcam, ab9332), anti-phosphothreonine (1 μg/mL; Sigma-Aldrich, AB1607), or anti-phosphotyrosine (1 μg/mL; Sigma-Aldrich, 05-321) antibodies.
Circular dichroism (CD)
Far-UV CD spectra were recorded in a quartz cuvette at 20°C with a path length of 0.1 cm. Dynamic light scattering data were analyzed with v7.12 software (Malvern Instruments Ltd). The theoretical hydrodynamic radii of fl-SNED1 and the N-terminal fragment of SNED1 were calculated using folded-protein parameters and the number of amino acid residues of each protein (27).
Bioinformatic analysis of the amino acid sequence of human SNED1
The sequence of human SNED1, without its signal peptide, was used for all queries unless stated otherwise (UniProtKB Q8TER0, residues 25-1413; Figure 1A). The domain organization of human SNED1 was drawn with Illustrator of Biological Sequences 1.0.3 (28). The secondary structures of SNED1 and of the NIDO domain were predicted using Proteus2 (29). Ser-Gly-X-Gly sequences, X being any amino acid residue except proline (35), and Glu/Asp-X-Ser-Gly sequences (36), corresponding to glycosaminoglycan (GAG) attachment sites, were searched with PATTINPROT (https://npsa-prabi.ibcp.fr) (37). The Ser-Gly pattern was searched manually in the SNED1 sequence. Identification of disulfide-bond-forming cysteines and ternary cysteine classification were performed using DISULFIND (34). A template-free, ab-initio protein structure prediction tool, QUARK (41), was used to generate models of the NIDO domain. The model was refined with ModRefiner (42) and its quality was assessed with ProSAII (43), Verify-3D (44), and PROCHECK Ramachandran plot analysis (45).
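As an illustration of these pattern searches, a short regex-based sketch equivalent in spirit to the PATTINPROT queries (PATTINPROT itself is a web service; this is not its API):

```python
import re

def find_gag_motifs(sequence):
    """Locate candidate GAG attachment motifs in a protein sequence."""
    patterns = {
        "Ser-Gly":           r"SG",       # plain Ser-Gly pairs
        "Ser-Gly-X-Gly":     r"SG[^P]G",  # X = any residue except Pro
        "Glu/Asp-X-Ser-Gly": r"[ED].SG",
    }
    hits = {}
    for name, pattern in patterns.items():
        # lookahead so overlapping motifs are all reported (1-based positions)
        hits[name] = [m.start() + 1
                      for m in re.finditer(f"(?={pattern})", sequence)]
    return hits

print(find_gag_motifs("MEDYSGGGKSGAESGK"))  # toy sequence, not SNED1
```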
Reactome Pathway Analysis
The 114 proteins predicted to interact with SNED1 were input as a dataset into the Reactome pathway algorithm (https://reactome.org/) (57,58). In brief, a statistical test determines whether certain Reactome pathways and biological processes are enriched in the submitted dataset. The test produces a probability score corrected for false discovery rate using the Benjamini-Hochberg method.
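For concreteness, a minimal sketch of the Benjamini-Hochberg correction named above (Reactome's own implementation may differ in detail):

```python
import numpy as np

def benjamini_hochberg(pvalues):
    """Adjust p-values for false discovery rate (Benjamini-Hochberg)."""
    p = np.asarray(pvalues, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)         # p * n / rank
    scaled = np.minimum.accumulate(scaled[::-1])[::-1]  # keep monotone
    adjusted = np.empty(n)
    adjusted[order] = np.clip(scaled, 0.0, 1.0)
    return adjusted

print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.20]))
```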
Computational analysis of the sequence of the ECM protein SNED1
As previously described (18,19), SNED1 is a multidomain protein containing one NIDO domain, one follistatin domain, one Sushi domain, also known as complement control protein (CCP) domain, 15 EGF-like and EGF-Ca²⁺ domains, and 3 fibronectin type III domains in its C-terminal region ( Figure 1A; domain boundaries were predicted using SMART (59)). While these protein domains are found in lower organisms (60,61), the results of our phylogenetic analysis revealed that orthologs of SNED1 are found in vertebrates, including the following model organisms: mouse, rat, chicken, zebrafish, and Xenopus. Sequence homology ranges from 85% between mammalian species to ~45-50% with other vertebrates. No ortholog of SNED1 was found in lower organisms (19).
An interesting feature of SNED1 is the presence of a NIDO domain (SMART: SM00539) in its N-terminal region (amino acids 103-260). This domain is only found in 4 other human or rodent proteins in addition to SNED1: the basement membrane proteins nidogen-1 and nidogen-2 (62,63), mucin-4, and alpha-tectorin, a component of the tectorial membrane, the apical ECM of the inner ear (64)(65)(66).
Identity between the NIDO domain of human SNED1 and that of other vertebrate SNED1 orthologs ranges between 73% and 92% (19). Sequence alignment of the NIDO domains of the 5 human proteins showed that the NIDO domain of SNED1 is most closely related to that of alpha-tectorin (77% similarity and 58% identity; Figure 1B).
Development of a mammalian cell system to produce and purify SNED1 in vitro
In order to study the biochemical and biophysical properties of SNED1, we devised a mammalian cell system to produce recombinant FLAG-tagged full-length SNED1 (fl-SNED1) or the N-terminal fragment that contains the NIDO domain of SNED1 ( Figure 1A). We found that both proteins were secreted by the cells, since we detected them in the conditioned medium of 293T cells stably expressing them, from which we can purify the proteins using the FLAG-tag added to their C-terminal ends (Figure 2A). In order to study SNED1, we also generated a rabbit polyclonal antibody that we validated and found to be specific to human SNED1 since it did not recognize murine SNED1 (Supplementary Figure S1A and S1B).
The canonical DOC solubility assay has been used to demonstrate the incorporation of proteins, such as fibronectin, into the ECM (22,70). Here, we show that fl-SNED1 is detected in the DOC-insoluble fraction, indicating its relative insolubility and likely incorporation in the ECM deposited by 293T cells in vitro ( Figure 2B).
Determination of the secondary and tertiary structures of full-length and of the N-terminal fragment of SNED1
Determining the structure of SNED1 has the potential to shed light on the mechanisms underlying its possible functions and signaling mechanisms. We thus turned to molecular modeling and biophysical assays using purified proteins to determine the secondary and tertiary structures of SNED1 and its N-terminal fragment.
Secondary structures of SNED1 and its N-terminal fragment
The predicted secondary structure of the N-terminal fragment of SNED1 using Proteus2 (29) was 33% β-strand, 9.3% helix, and 57.6% random coil (Supplementary Figure S2), whereas the deconvolution of its circular dichroism (CD) spectra showed the presence of 39% β-strand, 5% helix, and 36% random coil ( Figure 3A). These results confirmed that the N-terminal fragment of SNED1 containing the NIDO domain is mostly composed of β-strands, with a small percentage of helices (≤9%). Two helices were predicted in the NIDO domain itself (PAMLRRATEDVRHY, residues 124-137, and DMAEVETT, residues 235-242). A large proportion of random coil (73%) was predicted in fl-SNED1, together with 26% β-strand and 1% helix, the latter corresponding to a sequence also found in the N-terminal fragment of SNED1 (PAMLRRATEDVR, residues 124-135) ( Figure 3A). However, a higher percentage of β-strands (52%) and a lower amount of random coil (41%), together with 9% turns, were found by CD analysis. Interestingly, no helix was experimentally detected in fl-SNED1.
Determination of the molecular weight of SNED1 and its N-terminal fragment using size-exclusion chromatography-multi-angle laser light scattering
The theoretical molecular weight (Mw) of fl-SNED1 was determined using the ProtParam tool.
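A minimal sketch of this computation using Biopython's ProtParam module; the short peptide below is a placeholder, and the mature SNED1 sequence (UniProtKB Q8TER0, residues 25-1413) would be substituted for the actual value:

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

sequence = "ACDEFGHIKLMNPQRSTVWY"  # placeholder, not the SNED1 sequence
analysis = ProteinAnalysis(sequence)
print(f"theoretical Mw: {analysis.molecular_weight():.1f} Da")
```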
SNED1 and its N-terminal fragment are disulfide-bonded
The sequence of human SNED1 contains 107 cysteine residues. All the cysteine residues except one are located in two regions: residues 265-902 and 1311-1391. DISULFIND (73) predicted the presence of 53 disulfide bonds in the SNED1 sequence (Supplementary Table S3). Most domains of SNED1 were predicted to be disulfide-bonded except the fibronectin III domains (Supplementary Table S3).
Only one cysteine residue, Cys 99 , is present in the N-terminus of SNED1.
To experimentally determine if SNED1 is stabilized by disulfide bonding in vitro, purified
Determination of the hydrodynamic radius of SNED1 and its N-terminal fragment
The hydrodynamic radius (Rh) of fl-SNED1 determined by dynamic light scattering (DLS) was 9.02 ± 0.97 nm. This value is more than two-fold greater than the value calculated for a fully folded protein comprising the same number of amino acid residues as SNED1 (3.7-3.9 nm), suggesting that SNED1 is an elongated protein ( Figure 3A). We did not obtain a concentration of the N-terminal fragment of SNED1 sufficiently high to get a clear signal in the DLS experiment and a reliable experimental value of its hydrodynamic radius, but its theoretical hydrodynamic radius, calculated assuming that it folds as a globular protein, is 2.3-2.4 nm. The Stokes radius of the N-terminal fragment estimated by SEC was 3.4 nm. Altogether, our results provide the first computational and experimental determination of the structural and biophysical parameters of SNED1 ( Table 1).
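A sketch of the folded-protein estimate, assuming reference (27) uses the empirical scaling Rh = 4.75 · N^0.29 angstroms of Wilkins et al.; under this assumption the numbers reproduce the 3.7-3.9 nm and 2.3-2.4 nm values quoted above:

```python
def folded_rh_nm(n_residues):
    """Empirical hydrodynamic radius of a folded globular protein,
    converted from angstroms to nanometers."""
    return 4.75 * n_residues ** 0.29 / 10.0

print(f"fl-SNED1 (1389 aa):       {folded_rh_nm(1389):.1f} nm")  # ~3.9 nm
print(f"N-terminal frag (260 aa): {folded_rh_nm(260):.1f} nm")   # ~2.4 nm
```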
3D model of the NIDO domain of SNED1
Since no crystal structure or model of SNED1 or its NIDO domain are available, we sought to build a computational model of the NIDO domain using QUARK and then refined it with ModRefiner (coordinates file provided in Supplementary File S1). The TM-score was 0.9 (the topology is assumed to be correct if this value is > 0.5), and the QMEAN, which should be above -4, was -3.77.
The model contained the two helices predicted by Proteus2 (although the second helix was longer than predicted: TADMAEVETTT, residues 233-243), a short β-sheet, and two additional β-strands ( Figure 3E). The ProSA z-score, which indicates the overall model quality, was -4.21, within the range of scores typically found for X-ray structures of proteins of similar size (Supplementary Figure S4). The model was also deemed correct according to Verify-3D (Supplementary Figure S5) and according to PROCHECK, which reported that 99.7% of the residues were in allowed regions and 69.9% in the most favored regions (Supplementary Figure S6). Only 2 residues, excluding glycine and proline residues, were found in disallowed regions. The overall quality of the model of the NIDO domain was thus considered good.
SNED1 is a glyco-phosphoprotein
A key feature of ECM proteins is their high level of post-translational modifications (PTMs), including glycosylation (75) and phosphorylation (76)(77)(78), which potentially mediate protein-protein interactions and scaffolding (especially of mineralized tissues in the case of phosphorylation). We thus used several algorithms and queried multiple databases to determine whether SNED1 was predicted to be subject to PTMs (Supplementary Table S4) and further tested our findings experimentally. In addition to glycosylation sites, several potential glycosaminoglycan (GAG) attachment sites were identified within the sequence of SNED1, including 8 Ser-Gly motifs, 1 Ser-Gly-X-Gly sequence (SGGG, residues 846-849), and 2 Glu/Asp-X-Ser-Gly sequences (Supplementary Figure S7B).
Phosphorylation
Sequence analysis also revealed 133 predicted phosphorylation sites in SNED1 (Supplementary Table S4). Some of these residues are predicted to be phosphorylated by casein kinase II, which is known to phosphorylate ECM proteins, including collagen XVII (ecto-caseinase 2), fibronectin, and vitronectin (78).
Through database interrogation, we found experimental evidence showing the phosphorylation of 12 of these residues: 5 serine, 5 threonine, and 2 tyrosine residues, none of which lie within the N-terminal domain of SNED1 (Supplementary Table S4D). In order to determine whether SNED1 was phosphorylated when secreted by 293T cells, we immunoprecipitated FLAG-tagged SNED1 and conducted western blot analysis of the immobilized protein using anti-phosphoserine, antiphosphothreonine, and anti-phosphotyrosine antibodies. While we were not able to obtain consistent results with the anti-phosphotyrosine antibody, our results show that both human fl-SNED1 ( Figure 4C, left panel) and its N-terminal fragment ( Figure 4C, right panel) were phosphorylated on serine and threonine residues.
Altogether our results provide evidence that SNED1 secreted by 293T cells is both N-glycosylated and serine-and threonine-phosphorylated, and that some of the modified residues lie within the N-terminal region of SNED1. Future studies are needed to determine which enzymes are responsible for these PTMs and how these PTMs relate to SNED1 structure, interactions, and functions.
Domain-domain interaction network of SNED1
A domain-domain interaction network of fl-SNED1 was built using 3did, the database of 3-dimensional interacting domains, and returned 106 unique interactions (Figure 5 and Supplementary Figure 5). Two interactions were retrieved twice (Sushi/EGF and EGF/EGF_CA). Of note, the lack of knowledge about the NIDO domain was further exemplified here, since NIDO is simply absent from the 3did database, and no other protein domain has ever been reported to interact with it.
Prediction of the protein-level SNED1 interactome
The query of the interaction databases MatrixDB (49) and IntAct (50) returned candidate binding partners of SNED1 (see Methods). We focused on secreted proteins and membrane proteins, which resulted in the prediction of 114 unique interactions by at least one algorithm, including SNED1 auto-interaction (Figure 6A).
Sequence analysis of SNED1 revealed that it displays two integrin-binding consensus sequences, RGD and LDV (Figure 1A), which suggests that integrins may serve as SNED1 receptors. Two additional membrane proteins, Indian hedgehog (IHH) and tissue factor (F3), are also annotated as matrisome-associated and secreted proteins, respectively (Figure 6A and Supplementary Table S6). Last, 47 partners of SNED1 are identified as being extracellular proteins, including 30 matrisome proteins, 10 matrisome-associated proteins, and 7 secreted proteins (Figure 6A and Supplementary Table S6). 45 unique interactions were also predicted to involve intracellular proteins (Supplementary Table S6).
Comparison of the predictions made by the different algorithms revealed that 10 interactions were predicted by at least two methods, including those with the ECM proteins collagen VII (COL7A1), tenascin N (TNN), and fibronectin (FN1), the ECM receptor integrin β4 subunit (ITGB4), and a related secreted protein. We then focused on the 13 binding partners predicted to interact with SNED1 by HOMCOS (Supplementary Table S6E). This tool specifically allows the 3D modeling of structures and interactions using 3D molecular similarities, resulting in the mapping of the potential interactor binding sites to SNED1 domains. We found that most putative interactor binding sites were located within EGF-like domains, whereas only a few partners, including the proteoglycan aggrecan (ACAN), mapped to other regions. Of note, no partner was predicted to interact with the NIDO, follistatin, or C-terminal domains of SNED1 (Supplementary Figure S8B and C), which, again, may reflect the limited experimental data available for these domains. Future studies will be aimed at testing experimentally whether these predicted interactors indeed bind SNED1 and, if so, we will further determine the characteristics of these interactions (e.g. binding affinity, precise mapping of interaction sites) and their biological relevance.
Potential binding partners of SNED1 are involved in multiple signaling pathways
The in-silico interaction network of SNED1 was then analyzed using Reactome (58) to identify associated biological pathways. No annotation could be retrieved for SNED1 itself, further highlighting the critical gap in knowledge about this protein (Reactome, version 73, released June 17, 2020). The Reactome database included information on 94 of the 114 predicted SNED1 partners, and since the majority of them are either part of the matrisome or are transmembrane receptors, the processes most over-represented in SNED1's network were "signal transduction", "cell-cell communication", "ECM organization", and "developmental biology" (Figure 6B). More specifically, the predicted SNED1 interactors were found to be part of, or contribute significantly to, more defined pathways, including "integrin cell surface interaction" and "ECM proteoglycans" (Figure 6C). This analysis, together with the list of predicted SNED1 interactors, will help prioritize future lines of investigation focused on uncovering the molecular mechanisms by which SNED1 controls aspects of embryonic development (19) and breast cancer metastasis (20). While 293T cells are an excellent mammalian system to produce and purify recombinant proteins, including ECM proteins, they do not assemble an ECM scaffold in vitro. In order to test the interaction of SNED1 with its predicted partners, we needed a cellular system in which we could assess ECM proteins in situ.
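For readers unfamiliar with over-representation scoring, the Python sketch below shows the generic hypergeometric test that underlies most pathway-enrichment tools; all counts are illustrative assumptions, and Reactome's own statistics may differ in detail.

```python
from scipy.stats import hypergeom

# Generic over-representation score: the probability of observing at least
# `hits_in_pathway` pathway members among a network of `network_size`
# proteins, drawn from an annotated universe. Not Reactome's exact code.
def enrichment_p(hits_in_pathway: int, network_size: int,
                 pathway_size: int, annotated_universe: int) -> float:
    """P(X >= hits_in_pathway) under the hypergeometric null."""
    return hypergeom.sf(hits_in_pathway - 1, annotated_universe,
                        pathway_size, network_size)

# e.g. 20 of the 94 annotated predicted partners fall into a pathway of
# 400 proteins, out of ~11000 annotated proteins (all values hypothetical).
print(f"p = {enrichment_p(20, 94, 400, 11000):.2e}")
```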
SNED1 is a fibrillar ECM protein and colocalizes with fibronectin
Fibroblasts are the main producers of ECM proteins in vivo. In vitro, these cells can secrete, deposit, and assemble ECM proteins into a structural ECM scaffold (23,86,87). To study the pattern of deposition of SNED1 within the ECM and test its interaction with other ECM partners, we sought to take advantage of mouse embryonic fibroblasts (MEFs) we recently obtained from the Sned1 knockout mouse model we generated (19). While the SNED1 antibody reported in the present study detected SNED1 in applications such as western blotting (Figure 2), it failed to detect SNED1 in situ. We therefore detected tagged SNED1 with the anti-GFP antibody, further confirming the specificity of the fibrillar pattern observed. This fibrillar pattern was reminiscent of that of other known fibrillar ECM proteins such as fibronectin (23,86,87). Since we predicted that fibronectin could be a potential interactor of SNED1 (Figure 6A), we sought to determine whether SNED1 and fibronectin colocalized within the ECM. We observed a partial overlap and co-alignment between these two proteins (Figure 7C and Supplementary Figure 9C). Future studies are now needed to determine whether SNED1 and fibronectin are capable of physically interacting and, if so, whether their interaction is direct or mediated by other ECM proteins or GAGs. It would also be interesting to determine, in future studies, the role of this possible interaction in ECM deposition, assembly, or remodeling.
DISCUSSION
Deciphering the nature of protein-protein interactions within the ECM is critical to understanding the mechanisms governing proper ECM assembly and signaling functions in health and disease. We report here the computational prediction of the structure and interaction network of the novel ECM protein SNED1 and provide experimental insight into this protein's properties. While SNED1 shares structural features with other ECM proteins, the NIDO domain found in its N-terminal region is present in only 4 other human proteins.
Structure/function analysis of the NIDO domain of mucin-4 has revealed its role in promoting the invasiveness of pancreatic tumor cells (68,88). We have previously demonstrated that SNED1 promotes mammary tumor metastasis (20), and SNED1 was also identified in a screen as a mediator of the p53-dependent pancreatic tumor cell invasive phenotype (89). In addition, the analysis of the pathways to which the predicted SNED1 interactors contribute identified "ECM organization" as one of the most significantly enriched pathways, and 6 collagen chains (COL6A3, COL7A1, COL12A1, COL14A1, COL16A1, and COL20A1) were predicted to bind to SNED1. Together, these results hint at a potential role for SNED1 in regulating collagen deposition and organization, which will need to be experimentally assessed.
While our focus is on the full-length, secreted ECM protein SNED1, early reports indicated that the 3' half of SNED1 could encode an intracellular protein, then named insulin response element-binding protein 1 (IRE-BP1), since it was identified by phage display to bind the IRE of the gene encoding insulin-like growth factor-binding protein 3 (IGFBP3) (91,92). Sequence analysis suggests that an alternative start codon could generate this shorter isoform. Further database interrogation revealed multiple putative isoforms of SNED1; however, none of them has been confirmed experimentally yet.
Last, the only experimentally detected interactor of SNED1 is the intracellular estrogen receptor beta (79). Whether it is the full-length secreted SNED1, a shorter intracellular isoform, or a truncated form of SNED1 that interacts with this protein remains to be determined, as does the physiological relevance of this interaction.
CONCLUSION
In summary, our study has provided the first biochemical and biophysical insights into the novel ECM protein SNED1 and is paving the way for future mechanistic studies that will eventually help us understand its multi-faceted roles in development, health, and disease.
DATA AVAILABILITY STATEMENT
All antibodies, constructs, and cell lines generated for this study are available upon request to Dr. Naba. Additional information on experimental design can be obtained from Dr. Ricard-Blum and Dr. Naba.
FUNDING SOURCES
This work has been supported by a grant from the Fondation pour la Recherche Médicale n°DBI20141231336 to SRB, and by a start-up fund from the Department of Physiology and Biophysics at UIC to AN.
CONFLICT OF INTEREST
The authors declare that they have no conflicts of interest with the contents of this article.
"Biology",
"Medicine",
"Computer Science"
] |
Closed‐Loop Recyclable Silica‐Based Nanocomposites with Multifunctional Properties and Versatile Processability
Abstract Most plastics originate from limited petroleum reserves and cannot be effectively recycled at the end of their life cycle, making them a significant threat to the environment and human health. Closed-loop chemical recycling, which depolymerizes plastics into monomers that can be repolymerized, offers a promising solution for recycling otherwise wasted plastics. However, most current chemically recyclable polymers can only be prepared at the gram scale, and their depolymerization typically requires harsh conditions and high energy consumption. Herein, we report less petroleum-dependent, closed-loop recyclable silica-based nanocomposites that can be prepared on a large scale and have a fully reversible polymerization/depolymerization capability at room temperature, based on catalysis by free aminopropyl groups with the assistance of diethylamine or ethylenediamine. The nanocomposites show glass-like hardness yet plastic-like light weight and toughness, exhibiting a specific mechanical strength superior even to that of common materials such as poly(methyl methacrylate), glass, and ZrO2 ceramic, as well as demonstrating multifunctionality such as anti-fouling, low thermal conductivity, and flame retardancy. Meanwhile, these nanocomposites can be easily processed by various plastic-like scalable manufacturing methods, such as compression molding and 3D printing. These nanocomposites are expected to provide an alternative to petroleum-based plastics and contribute to a closed-loop materials economy.
Table S3. The molecular weights in the MALDI-TOF-MS spectra of the prepolymers and the depolymerized products (Figure 2h, Figure S4) of the hybrid Si-O-Si networks formed from the co-condensation of APTMS and TEOS with DEA or EDA as the catalyst, and their possible structural units.
Figure S2. Schematic chemical structures of the closed-loop recycled nanocomposite. The hybrid Si-O-Si networks with (a) DEA, (b) EDA, and (c) DEA and EDA. (f) The resulting silica-based nanocomposites containing (d) PTFPMS micelles with an APTES shell and (e) FAS.
Figure S3. The high-resolution N 1s peaks in the XPS curves of the original and recycled solid prepolymers with DEA and EDA.
Figure S4. Schematic illustration of the partial depolymerization of the hybrid Si-O-Si network without a catalyst. Strong hydrogen bonds are formed between silanols and aminopropyl groups in the network without a catalyst. The hydrogen-bonded aminopropyl groups are less reactive, leading to the hybrid Si-O-Si network being only partially depolymerized.
Figure S5. The MALDI-TOF-MS spectra of the prepolymers and the depolymerized products of the hybrid Si-O-Si networks formed from the co-condensation of APTMS and TEOS with the catalyst: (a) DEA, (b) EDA, and (c) DEA and EDA.
Figure S6. The MALDI-TOF-MS spectra of the prepolymers and the depolymerized products of the poly(silsesquioxane) networks formed from the self-condensation of APTMS with the catalyst: (a) DEA, (b) EDA, and (c) DEA and EDA.
Figure S7. The depolymerization of the hybrid Si-O-Si networks formed from the co-condensation of APTMS and TEOS with DEA, EDA, and the mixture of DEA and EDA: (a) 1H-NMR spectra and (b) 13C-NMR spectra of the corresponding prepolymers and the depolymerized products.
Figure S8. The depolymerization of the poly(silsesquioxane) networks formed from the self-condensation of APTMS with DEA, EDA, and the mixture of DEA and EDA: (a) 1H-NMR spectra and (b) 13C-NMR spectra of the corresponding prepolymers and the depolymerized products.
Figure S9. Effect of the micelles on the formation of defect-free bulk materials after drying. Photographs showing that, without micelles, the materials containing (a) DEA, (b) EDA, and (c) DEA and EDA cracked after drying at 80°C for 3 days. (d) Photograph of the intact nanocomposite containing DEA and EDA with micelles after drying at 80°C for 3 days.
Figure S10. Elemental analysis of the nanocomposite: (a) XPS spectrum; (b) quantification table indicating the atomic species and their atomic percentages; (c) SEM image and corresponding element mapping of the fracture surface.
Figure S11. Mechanical properties of the nanocomposites containing DEA, EDA, and a mixture of DEA and EDA. (a-b) Nanoindentation tests: (a) load-displacement curves, and (b) hardness and modulus. (c-d) Three-point bending tests: (c) stress-strain curves, and (d) flexural strength and flexural modulus.
Figure S12. The weight change of the nanocomposites with different catalysts in water over time. The weight ratio of nanocomposite/water is 20/500.
Figure S13. Photos showing that (a) the surface of glass can be easily scratched by a glass cutter, while (b) no scratch is formed on the surface of our nanocomposite containing DEA and EDA by the cutter under the same force.
Figure S14. TGA curves of the nanocomposite containing DEA and EDA in air and nitrogen.
Figure S15. (a) Contact angles and (b) mechanical properties of the nanocomposite containing DEA and EDA before and after immersion in different solvents for 24 h.
Figure S16. Photos of the polluted (a) glass and (b) nanocomposite containing DEA and EDA before and after wiping. (c) Photo of the water-based acrylic paint spray used as the pollutant.
Figure S17. Recycling ratio of the nanocomposites after different recycling cycles.
Figure S18. Photos of (a) the PTFPMS@APTES micelle dispersion and (b) the solution of the depolymerized product in water. The size distributions of micelles in (c) the original prepolymer sol and (d) the depolymerized solution. TEM images of (e) the pristine PTFPMS@APTES micelles and (f) the micelles in the depolymerized solution.
Figure S19. The impact of the water temperature on the depolymerization time of the nanocomposite. The weight ratio of nanocomposite/water was 20/500, and the depolymerization time was judged by the complete disappearance of the nanocomposite containing DEA and EDA and the formation of a clear solution.
Figure S20. TGA curves of the solid prepolymer powder.
Figure S21. (a) Photos of SiO2 nanoparticles mixed with the liquid prepolymer. (b) Photos of the composite powders, photos of the molded nanocomposites, and SEM images of the nanocomposites with different SiO2 contents.
Figure S22. Material properties of the nanocomposites with 0, 25, and 28 wt% SiO2 nanoparticles. (a-d) Mechanical properties of the nanocomposites: (a) flexural stress-strain curves, (b) flexural strength and flexural modulus, (c) specific flexural strength, (d) impact toughness. (e-f) Thermal properties of the nanocomposites: (e) thermal conductivity and (f) TGA curves showing the thermal stability.
Figure S23. (a) Contact angles and (b) mechanical properties of the nanocomposites containing CuCl2 and SiO2 before and after immersion in water for 3 days.
Table S2. The chemical structural units of the prepolymers and the depolymerized products of the hybrid Si-O-Si networks formed from the co-condensation of APTMS and TEOS, as well as their molecular formulae, molecular weights, and symbols.
Table S4. Density and mechanical properties from nanoindentation tests of our nanocomposites and some common materials.
"Materials Science"
] |
Data Flow Construction and Quality Evaluation of Electronic Source Data in Clinical Trials: Pilot Study Based on Hospital Electronic Medical Records in China
Background: The traditional clinical trial data collection process requires a clinical research coordinator who is authorized by the investigators to read from the hospital's electronic medical record. Using electronic source data opens a new path to extract patients' data from electronic health records (EHRs) and transfer them directly to an electronic data capture (EDC) system; this method is often referred to as eSource. eSource technology in a clinical trial data flow can improve data quality without compromising timeliness. At the same time, improved data collection efficiency reduces clinical trial costs.
Introduction
Source data are the original records from clinical trials or all information recorded on certified copies, including clinical findings, observations, and records of other relevant activities necessary for the reconstruction and evaluation of the trial [1]. Electronic source data are data initially recorded in an electronic format (electronic source data or eSource) [2,3].
The traditional clinical trial data collection process requires a clinical research coordinator (CRC) who is authorized by the investigators to read the hospital's electronic medical record and other clinical trial-related data from the hospital information system and then manually enter the patient's data into the electronic data capture (EDC) system. After data entry, the clinical research associate visits the site to perform source data verification and source data review. The drawbacks of collecting data by manual transcription are that data quality and timeliness cannot be guaranteed and that it is a waste of human and material resources. Using electronic source data opens a new path to extract patients' data from electronic health records (EHRs) and transfer them directly to EDC systems (this method is often referred to as eSource) [4]. eSource technology in a clinical trial data flow can improve data quality without compromising timeliness [5]. At the same time, improved data collection efficiency reduces clinical trial costs [6].
eSource can be divided into two levels. The first level is to enable the hospital information system to obtain complete data sets; the second level is to allow direct data transfer to EDC systems based on the clinical trial patients' electronic data in hospitals, to avoid the electronic data being transcribed manually again, which is the core purpose of eSource [7]. This project will explore the use of eSource technology to extract clinical trial data from EHRs, send them to the sponsor data environment, and discuss the issues and challenges occurring in the application process.
Ethics Approval
This study was approved by the Ethics Committee and Human Genetic Resource Administration of China (2020YW135). During the ethical review process, the most significant challenges were patients' informed consent, privacy protection, and data security. The B7461024 Informed Consent Form (Version 4) states that "interested parties may use subjects' personal information to improve the quality, design, and safety of this and other studies," and "Is my personal information likely to be used in other studies? Your coded information may be used to advance scientific research and public health in other projects conducted in the future." This project is an exploration of using electronic source data technology instead of traditional manual transcription in the process of transferring data from hospital EHRs to EDC systems, which will improve the data quality of clinical trials and the data flow in the future. Therefore, this project is within the scope of the informed consent form for study B7461024, and it was approved by the ethics committee after clarification.
Project Information
This project was conducted from December 15, 2020, to November 19, 2021, which was before China's personal information protection law and data security law were introduced. The data for this project were obtained from an ongoing phase 2, multicenter, open-label, dual-cohort study to evaluate the efficacy and safety of Lorlatinib (PF-06463922) monotherapy in anaplastic lymphoma kinase (ALK) inhibitor-treated locally advanced or metastatic ALK-positive non-small cell lung cancer patients in China (B7461024), registered by the sponsor on the Drug Clinical Trials Registration and Disclosure Platform (CTR20181867). The data extraction involved 4 case report form (CRF) data modules (demographics, concomitant medication, local lab, and vital signs), which were collected in the following ways:
• Demographics: Originally entered directly into the hospital EHR, then manually transcribed by the CRC into the sponsor's EDC system
• Local lab: Laboratory data collected by the hospital laboratory information management system (LIMS) and then manually transcribed by the CRC into the EDC system
• Vital signs: The hospital uses a paper-based tracking form provided by the sponsor to record patients' vital signs, and investigators transcribe the vital signs data into the hospital medical record
• Concomitant medication: Similar to vital signs, the hospital uses the paper tracking form provided by the sponsor to record adverse reactions and concomitant medication; investigators might also transfer the concomitant medication data into the hospital EHR, but there was no mandatory requirement to transfer these data into patients' medical records
All information was collected from 6 patients in a total of 29 fields (Textbox 1).
Overview
The study chosen in our project used the traditional manual data entry method to transcribe patients' CRF data into the EDC system. This project proposes testing the acquisition of data directly from the hospital EHR, deidentification of the patients' electronic data on the hospital medical data intelligence platform, mapping and transforming the data based on the sponsor's EDC data standard, and transferring the data into the sponsor's environment. The data transferred from the hospital to the sponsor's data environment were compared with data captured by the traditional manual entry method to verify the availability, completeness, and accuracy of the eSource technology.
In the network environment of this project, the technology provider accessed the hospital network through a virtual private network (VPN) and a bastion host, and processed the data of this project as a private cloud, thus ensuring the security of the hospital data.
Data Integration
The hospital information system involved in this project has reached the national standards of "Level 3 Equivalence," "Electronic Medical Record Level 5," and "Interoperability Level 4." The medical data intelligence platform in this project is deployed in the hospital intranet, isolated from external networks. Integrated data from different information systems, including the hospital information system, LIMS, picture archiving and communication system, etc, were deidentified on the platform and transferred to a third-party private cloud platform for translation and data format conversion after authorization by the hospital, through a VPN.
The scope of data collection in this project was limited to patients who signed Informed Consent Form (Version 4) for study B7461024. The structured data of the four CRF data modules (demographics, concomitant medications, local lab, and vital signs) were extracted from the source data in the hospital systems, and data processing was completed.
Three-Layer Deidentification of Data
In this project, three layers of deidentification were performed on the electronic source data to ensure data security.The first layer of deidentification was performed before the certified copy of data was loaded to the hospital's medical data intelligence platform.The second layer of deidentification follows the Health Insurance Portability and Accountability Act (HIPAA) by deidentifying 18 data fields at the system level.A third layer of deidentification was performed when mapping and transforming third-party databases for the clinical trial data (demographics, concomitant medications, laboratory tests, and vital signs) collected for this study, as required by the project design.
Collected data did not contain any sensitive information with personal identifiers of the patients, and all deidentification processes were conducted in the internal environment of the hospital. In addition to complying with the relevant laws and regulations, we followed the requirements of Good Clinical Practice regarding patient privacy and confidentiality, and further complied with the requirements of HIPAA to deidentify the 18 basic data fields. Data fields outside the scope of HIPAA were deidentified and processed in accordance with the TransCelerate guidelines published in April 2015 to ensure the security of patients' personal information and to eliminate the possibility of patient information leakage [8].
The general rules for the third layer of deidentification were as follows:
• Time field: A specific time point is used as the base time, and the encrypted time value is the difference between the recorded time and the base time
• ID field: Categorized according to the value, showing only the category
• Age field: Categorized according to the value, showing only the category
• Low-frequency field: Set to null
In addition, all data flows keep audit trails throughout and are available for audit.
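To make these rules concrete, the minimal Python sketch below applies them to a single hypothetical record; the field names, base time, and category definitions are illustrative assumptions, not the project's actual schema.

```python
from datetime import datetime

BASE_TIME = datetime(2020, 1, 1)  # assumed base time point

def deidentify(record: dict) -> dict:
    """Sketch of the third-layer rules described above (field names are
    hypothetical; the project's real schema is not public)."""
    out = {}
    # Time field: keep only the offset from the agreed base time.
    visit = datetime.fromisoformat(record["visit_time"])
    out["visit_offset_days"] = (visit - BASE_TIME).days
    # ID field: show only a category, never the raw identifier.
    out["id_category"] = "inpatient" if record["patient_id"].startswith("IP") else "outpatient"
    # Age field: categorized into bands.
    age = record["age"]
    out["age_band"] = "<18" if age < 18 else "18-64" if age < 65 else ">=65"
    # Low-frequency field: set to null to avoid re-identification.
    out["rare_diagnosis"] = None
    return out

print(deidentify({"visit_time": "2021-03-15T09:30:00",
                  "patient_id": "IP000123", "age": 58}))
```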
Data Normalization and Information Extraction
After the three layers of deidentification, the data were transferred from the hospital to a third-party private cloud platform through a VPN, where translation from Chinese to English and data format conversion were implemented. The whole transfer process was performed for the data collected for the clinical trial of this study. Standardization of data is a crucial task during the data preparation phase. This process involves consolidating data from different systems and structures into a consistent, comprehensible, and operable format. First, a thorough examination of data from the various systems is necessary: understanding the data structure, format, and meaning of each system is essential. The second step involves establishing a data dictionary that clearly outlines the meaning, format, and possible values of each data element. Next, selecting a data standard is necessary to ensure consistency and comparability; in this study, we adopted the Health Level 7 (HL7) standard. Additionally, data cleansing and transformation are needed to meet the standard's requirements, including handling missing data, resolving mismatched data formats, or performing data type conversions. Extract, transform, and load (ETL) tools were used to integrate data from the different systems. Data security must be ensured throughout the data integration process, including encrypting sensitive information and strictly managing data permissions. Data verification and validation steps were then performed by professional staff on the translated data. The data from the hospital's medical data intelligence platform were then converted from JSON format to XML and Excel formats. The processed data were transferred back to the hospital via a VPN to a designated location for final adjudication before loading to the sponsor's environment.
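As a minimal illustration of the format-conversion step, the Python sketch below turns one JSON record into flat XML using only the standard library; the record and its field names are hypothetical, and the project's real converter (with its HL7-conformant output) would be considerably richer.

```python
import json
import xml.etree.ElementTree as ET

# Illustrative deidentified record; not the project's actual schema.
record_json = '{"module": "vital_signs", "visit_offset_days": 439, "sbp": 118, "dbp": 76}'

def json_to_xml(json_text: str, root_tag: str = "record") -> str:
    """Convert a flat JSON object into a simple XML string."""
    data = json.loads(json_text)
    root = ET.Element(root_tag)
    for key, value in data.items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

print(json_to_xml(record_json))
# <record><module>vital_signs</module><visit_offset_days>439</visit_offset_days>...
```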
One-Time Data Push and Quality Assessment
After the hospital received the processed data, it was then pushed by the hospital to the sponsor's secure and controlled environment (Figure 1). All data deidentification processes were conducted in the hospital's environment, and none of the data obtained by the sponsor can be traced back to patients' personal information, ensuring their privacy and information security.
The data quality of this project was assessed using industry data quality assessment rules [9], which are shown in Table 1.
e Total number of data fields captured (processed and sent to the sponsor) through the eSource method.
f Total number of nonempty data fields captured (processed and sent to the sponsor) through the eSource method.
Results
In this project, we collected patients' demographics, vital signs information, local laboratory data, and concomitant medication data from EHRs, successfully pushed the data directly to the designated sponsor environment, and evaluated the data quality from three perspectives including availability, completeness, and accuracy (Table 2).
• The eSource-CRF availability score, which is used to evaluate the ratio of fields in the EHR that can be collected by eSource and used for the CRF, was low for demographics, blood tests, and urine sample tests but higher for vital signs and concomitant medications.
• Data completeness, defined as the ratio of the total number of nonnull data captured by eSource to the total number of data fields required in the electronic CRF, was used to evaluate the ratio of nonnull data fields in the CRF that can be captured by eSource. In this study, the completeness score of the vital signs module was only 1.32%, and the concomitant medications and laboratory test modules also performed poorly in the data completeness evaluation.
• Data accuracy, defined as the compatibility between the data field values in the hospital EHR and the data field values that can be collected using eSource, was 100% for all modules.
• EHR-CRF availability, which is used to evaluate the ratio of fields in the EHR that can be used for the CRF, was 50%, 60%, and 66.67% for demographics, blood tests, and urine sample tests, respectively, in this study; the rest of the data were 100% available.
c Checks were made with the relevant clinical research associates (CRAs) regarding the original data collection and CRF completion methods for the following reasons: vital signs were obtained using paper tracking forms provided by the sponsor as the original data source, and the data may not be transcribed into the hospital information system (HIS) by the researcher. Therefore, data from many visits are not available in the HIS.
d A total of 2708 blood biochemistry tests were involved.
e Concomitant medication uses tracking forms to record adverse events and ConMed (a paper source), and data may not be transcribed into the HIS. As confirmed by the CRA, the percentage of paper ConMed sources was approximately 80%.
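The sketch below shows how these scores reduce to simple ratio computations; the demographics counts (3 of 6 fields present in the EHR, 2 of 6 capturable by eSource) come from this study, while the function names are ours, not the authors'.

```python
# Quality scores as percentages, following the definitions above.
def availability(capturable_fields: int, required_fields: int) -> float:
    """EHR-CRF or eSource-CRF availability."""
    return capturable_fields / required_fields * 100

def completeness(nonnull_captured: int, required_fields: int) -> float:
    """Nonnull fields captured by eSource over fields required in the CRF."""
    return nonnull_captured / required_fields * 100

def accuracy(matching_values: int, captured_values: int) -> float:
    """Captured values that match the hospital EHR."""
    return matching_values / captured_values * 100

print(f"EHR-CRF availability:     {availability(3, 6):.0f}%")  # 50%
print(f"eSource-CRF availability: {availability(2, 6):.0f}%")  # 33%
```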
Discussion
Although EHRs have been widely used, the degree of structure of EHR data varies substantially among different data modules. In EHRs, demographics, vital signs, local lab data, and concomitant medications are more structured than patient history or progress notes, which often contain unstructured text [10]. Therefore, we selected these 4 well-structured data modules for exploration in this project.
For demographics data, among the 6 required fields (subject ID, date of birth, sex, ethnicity, race, and age), subject ID (the subject code number/identifier in the trial, not the patient code number/identifier in the EHR system), ethnicity, and race were not available in the EHR, so the EHR-CRF availability score was 50%. Since this was an exploratory project, the date of birth field was also deidentified and thus could not be collected based on our deidentification rule, so the eSource-CRF availability score was 33%. In the future, the availability score can reach close to 100% by bidirectional design of the EHR and CRF, under the premise of obtaining compliance for industrial-level applications.
The low availability score of local laboratory data on EHR-CRFs is due to the lack of required fields in the hospital system; "Lab ID" and "Not Done" do not exist in the LIMS, and for the "Clinically Significant" field, the meaning of laboratory test results needs to be manually interpreted by an investigator, so they cannot be transcribed directly.The availability score of eSource-CRFs was further decreased because the field "Laboratory Name and Address" is not an independent structured field in the EHR.The completeness score of urine sample test data was only 37.56% because during the actual clinical trial, especially amid the COVID-19 pandemic period, patients completed study-related laboratory tests at other sites, and those test results were collected via paper-based reports, so the complete data sets cannot be extracted from the site's system.
To improve data availability in future applications, clinical trial-specific fields need to be added to EHR designs for those data that require an investigator's interpretation such as "Clinically Significant," and data transfer and mapping processes for the determination of the scope of data collection also needs to be optimized.Based on these two conditions, the completeness score can be improved to over 90%.
The availability and accuracy of vital signs data were ideal. However, since not all vital signs data collection was recorded by the electronic system during the actual study visits, many vital signs data were collected in "patient diary" and other types of paper-based documents during the study, resulting in a serious limitation in data completeness. With the development of more clinical trial-related electronic hardware and enhancements in product intelligence, more vital signs data will be directly collected by electronic systems, and the completeness of vital signs data transferred from the EHR to the EDC will be greatly improved in the future.
In the concomitant medication module, there were good scores for availability and accuracy because the standardization and structuring of prescriptions are well done in this hospital's system. However, the patient's medication use period during hospitalization is recorded in unstructured text, so these data could not be captured for this study, resulting in a low completeness score of 18.42% for concomitant medication.
In summary, the accuracy score of eSource data in this study was high (100% for all fields). A study by Memorial Sloan Kettering Cancer Center and Yale University confirmed that automatic transcription reduced the error rate from the 6.7% of manual transcription to 0% [10]. However, data availability and completeness have not reached a good level. Data availability varies widely across studies, ranging from 13.4% in the Retrieving EHR Useful Data for Secondary Exploitation (REUSE) project [11] to 75% in the STARBRITE proof-of-concept study [12], mainly related to the coverage and structure of the EHR.
National drug regulatory agencies (eg, the US Food and Drug Administration [FDA], European Medicines Agency, Medicines and Healthcare products Regulatory Agency, and Pharmaceuticals and Medical Devices Agency) have developed guidelines to support the application of eSource to clinical trials [3,13-15]. The new Good Clinical Practice issued by the Center for Drug Evaluation in 2020 encourages investigators to use clinical trials' electronic medical records for source data documentation [1]. Despite this, we still encountered challenges, including ethical review and data security, during this study's implementation process. Without knowing of any precedents, the project team decided to follow the requirements for clinical trials to control the quality of the study. There were no existing regulatory policies or national guidance on eSource in China at the time of this study. The project team provided explanations for inapplicable documents and communicated several times to ensure the approval of the relevant institutional departments before finally becoming the first eSource technology study to be approved by the Ethics Committee and Human Genetic Resource Administration of China.
In the absence of regulatory guidelines, our eSource study, the first in China's international multicenter clinical trials, navigated challenges in data deidentification. We adopted HIPAA and TransCelerate's guidelines [8]. Securing approval under the "China International Cooperative Scientific Research Approval for Human Genetic Resources," we answered queries and achieved unprecedented recognition.
For transferring data from the hospital to the sponsor's environment, we prioritized security and obtained the necessary approvals. Iterative revisions ensured a robust data flow design. Challenges in mapping the hospital EHR to EDC standards highlighted the need for a scalable mechanism. This study pioneers eSource technology integration in China, emphasizing the importance of seamless data mapping. In the process of executing data standardization, several challenges may arise, including inconsistent data definitions. Data from different systems may use different definitions due to the independent development of these systems, leading to varied interpretations of even identical concepts. To address this issue, establishing a unified data dictionary is crucial to ensure consensus on the definition of each data element. Different systems might also use distinct data formats, such as text encodings. Preintegration format conversion is required, and extract, transform, and load tools or scripts can assist in standardizing these formats. During the integration of data from multiple systems, it is possible to discover data in one system that is not present in another. In the data standardization process, considerations must be made on how to handle missing data, which may involve interpolation, setting default values, etc. Quality issues like errors, duplicates, or inaccuracies may exist in data from different systems. Data cleansing, involving deduplication, error correction, logical validation, etc, is necessary to address these quality issues. Different systems may generate data based on diverse business rules and hospital use scenarios. In data standardization, unifying these rules requires collaboration with domain experts to ensure consistency.
Internationally, multiple research studies and publications have been released on the regulations, guidelines, and validation of eSource. The FDA provided guidance on the use of electronic source data in clinical trials in 2013, aiming to address barriers to capturing electronic source data for clinical trials, including the lack of interoperability between EHRs and EDC systems. The Europe-wide Electronic Health Records for Clinical Research (EHR4CR) project was launched in 2011 to explore technical options for the direct capture of EHR data within 35 institutions, and the project was completed in 2016 [16]. The second phase of the project connected EHRs to EDC systems [17] and aimed to realize the interoperability of EHRs and EDC systems. The US experience focuses more on improving and standardizing the existing EHRs to make them more uniform.
In Europe, the experience focuses on breaking down the technical barrier of interoperability between EHRs and EDC systems. In China, the current industry trends focus on the governance of existing EHR data in hospitals and the building of clinical data repository platforms [7]. Clinical data repository platforms focus on data integration and cleaning between EHRs and other systems in hospital environments, and on unstructured data normalization and standardization by natural language processing and other AI technologies [18]. At the national level, China is also actively promoting the digitization of medical big data and is committed to the formation of regional health care databases [19], which lays the foundation for the future implementation of eSource in China [20].
This study evaluates the practical application value of eSource in terms of availability, completeness, and accuracy. To improve availability, the structure of the CRF needs to be designed according to the information in the EHR data at the design stage of clinical trials. Even so, since EHRs are designed for physicians to conduct daily health care activities, certain fields in clinical trials (eg, judgment of normal or abnormal values of laboratory tests and judgment of correlations of adverse events and combined medications) are still not available, and clinical trial-specific fields need to be added to EHR designs for those data that require investigators' interpretation to improve data availability. Completeness could be improved by the development of hospital digitalization that ensures patients' data are collected electronically rather than on paper. Additionally, 2708 blood test records were successfully collected from only 6 patients via eSource in this study, which indicates that laboratory tests often contain large amounts of highly structured data that are suitable for eSource. EHR-EDC end-to-end automatic data extraction by eSource is suitable for laboratory examinations and can significantly improve the efficiency and accuracy of data extraction, as well as reduce redundant manual transcriptions and labor costs. Processing unstructured or even paper-based data in eSource is still a big challenge. Using machine learning tools (eg, natural language processing tools) for autostructuring can be explored in the future. The goal is to have common data standards and better top-level design to facilitate data integrity, interoperability, data security, and patient privacy protection in eSource applications. During deidentification, we processed certain data with a specific logic to protect privacy. The accuracy assessment was performed during the deidentification step to ensure that the data were still sufficiently accurate while meeting privacy requirements. Reversible methods need to be used when performing deidentification, as well as providing controlled access mechanisms to the data so that the raw data can be accessed when needed. It is worth noting that different regions and industries may have different privacy regulations and compliance requirements. When deidentifying, one needs to ensure compliance with the relevant regulations and understand the limitations of data use. This may require working closely with a legal team.
In the future, we can consider adding performance analysis, including an assessment of data import performance. This involves evaluating the speed and efficiency of data import to ensure it is completed within a reasonable timeframe. Additionally, analyzing data query performance is crucial in practical applications to ensure that the imported data meet the expected query performance in the application. For long-term applications involving a larger number of patients, it is advisable to consider adding analyses related to maintainability and cost-effectiveness. This includes implementing detailed logging and monitoring mechanisms to promptly identify and address potential issues. Furthermore, for the imported data, establishing a version control mechanism is essential for tracing and tracking changes in the data. Simultaneously, for overall resource use, evaluating the resources required during the data import process ensures completion within a cost-effective framework. It is also important to consider the value of the imported data for clinical trial operations and related decision-making, providing a comparative analysis between cost and value.
Table 1. Introduction of data quality assessment rules.
• EHR-CRF availability verification (field dimension): the ratio of the total number of data fields available in the hospital EHR to the total number of data fields requested in the electronic CRF: EHR c / CRF d × 100%. Example: based on the electronic CRF, 6 data fields in the demography module need to be captured, and 3 of them have records in the EHR; data availability: 3/6 × 100% = 50%.
• eSource-CRF availability verification (field dimension): the ratio of the total number of data fields in the clinical trial CRF that can be transmitted electronically from the hospital (eSource) to the total number of data fields required in the electronic CRF: eSource / CRF d × 100%. Example: based on the electronic CRF, 6 data fields in the demography module need to be captured, and 2 data fields can be transmitted electronically; availability: 2/6 × 100% = 33%.
a CRF: case report form.
b EHR: electronic health record.
c Total number of data fields in the hospital's EHR.
d Total number of data fields requested in the electronic CRF.
"Medicine",
"Computer Science"
] |
India's Contribution and Research Impact in Leishmaniasis Research: A Bibliometric Analysis
Copyright © The Author(s). 2018 This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
INTRODUCTION
The health sector is very critical towards the right information on certain diseases. As a critical input for policy decision making, it is essential to provide the right source of information about leishmaniasis, a parasitic disease spread by the bite of female phlebotomine sandflies. There are three forms of the disease: cutaneous, mucocutaneous, and visceral leishmaniasis, of which the last has been estimated as the most dangerous of all. It accounts for 90% of the prevalence across the globe and is frequent in the populations residing in Bangladesh, Brazil, Ethiopia, India, Nepal, Saudi Arabia, Afghanistan, and the Sudan. [5] The research focused on neglected tropical diseases across the globe has posed quite a challenge for everyone, be they medical practitioners or policy makers. The World Health Organization report on leishmaniasis describes an alarming situation: new cases in the region of East Africa (Ethiopia, Kenya, South Sudan, and Sudan) have caused high morbidity and mortality due to visceral leishmaniasis. Visceral leishmaniasis accounts for about 67% of the global burden of the disease in India, Bangladesh, and Nepal. [8] Governmental and non-governmental agencies are playing a key role in the elimination of the disease from the affected communities with the help of effective medication, vaccination, and community services. Simultaneously, medical practitioners are engaged in discovering effective medications to control the disease. Scientists, especially in the countries of the most affected regions, are working to discover new medicines and treatment methods. These efforts are available as published literature and serve as a guiding path for the future.
Various qualitative and quantitative assessment measures are being taken to analyze the research progress in scientific disciplines, including leishmaniasis. In earlier days, bibliometric studies covered different aspects of neglected tropical diseases, such as Latin American studies [9][10] and social science research [11]. From 2011 onwards, specific studies were conducted on leishmaniasis in general [12][13] or for specific countries and regions such as Iran [14][15], Latin America [16], and South America [17]. Bibliometric studies focusing on India were reported for parasitic and neglected tropical diseases [18], lymphatic filariasis [19], and other neglected tropical diseases such as ascariasis/toxocariasis [20] and schistosomiasis [21]. However, this literature does not reveal any study covering the research impact of India in the field of leishmaniasis. The objective of this study is to analyze the literature on leishmaniasis published from India and its research impact in terms of total publications compared with the global output, productive institutions, productive authors, productive journals, international collaboration, and most popular articles, on the qualitative parameters of citations and the Hirsch index (h-index). [22]
METHODOLOGY
This study was undertaken on the publications on leishmaniasis by the Indian research community. The data were collected from the SCOPUS multidisciplinary bibliographic database, available at http://www.scopus.com/home.url. SCOPUS has complete coverage of PubMed data. The Medical Subject Heading (MeSH) terms "Leishmania", "Leishmaniasis", "Cutaneous Leishmaniasis", "Mucocutaneous Leishmaniasis", and "Visceral leishmaniasis", searched in the article title, abstract, and keywords, were used to retrieve the bibliographic data, and "India" was used for the country of affiliation of the authors, using the corresponding search string, for the period 1968 till 2017.
The research impact of publications was measured in terms of citation counts. The citation study considered the number of citations received by an article in the first year of its publication (C0) and till the year 2017 (C2017). For international collaboration, each article published from India was manually analyzed to identify collaborations between Indian and international authors. In combination with the above string, separate search strategies were adopted to retrieve the data for individual, institutional, and journal output. The data obtained using the different search strategies were then subjected to analysis and interpretation of the results. Another quality indicator, the h-index, which quantifies the individual quality of an author or institution, was obtained from the database. The data thus obtained from SCOPUS were analyzed for global output, Indian output, share of publications, productive countries, productive Indian institutes, productive Indian authors, journals publishing Indian articles, international collaboration by Indian authors, and the most cited articles authored by Indian authors.
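As an illustration of the indicators defined here, the short Python sketch below computes the average citations per paper (ACPP) and the h-index from a list of per-article citation counts; the counts used are hypothetical.

```python
# ACPP and h-index from per-article citation counts (hypothetical data).
def acpp(total_citations: int, total_papers: int) -> float:
    """Average citations per paper."""
    return total_citations / total_papers

def h_index(citations: list) -> int:
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

papers = [163, 42, 42, 10, 5, 3, 0]              # hypothetical counts
print(round(acpp(sum(papers), len(papers)), 2))  # ACPP = 37.86
print(h_index(papers))                           # h-index = 5
```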
RESULTS
There were 39,302 articles on leishmaniasis available in SCOPUS during 1968 till 2017 (50 years). Out of these, 3391 articles were published by authors from India. Figure 1 shows the trends of growth of the literature output published globally and from India. The global output grew at an average annual rate of 6.19%, while India achieved an average annual growth rate of 20.60%. Indian research on leishmaniasis was in an infancy state till 1987 (3.45% share), adolescent up to 2007 (36.21% share), and matured thereafter (60.34% share) (Table 1). In the case of the global publications, 44.64% of the literature appeared in the period 2008-17, and in the case of India, it was 60.34%. The period 1978-1987 was the most productive in terms of average percent growth rate, when world literature grew by 127.94% and Indian literature by 357.14% over the previous period. For India, the most productive years were 1979, 1999, and 2014.
Table 2 presents the distribution of the 11 most productive countries in leishmaniasis research, each with more than two percent of the global publications. The shares range from 2.63% to 19.36%: the United States holds the top rank with the highest number of publications (7609 papers; 19.36% share), followed by Brazil (6019 papers; 15.31% share) and the United Kingdom (3548 papers; 9.03% share). India ranked fourth in terms of total publications (3391 papers; 8.64% share). France, Spain, Germany, Iran, Italy, Switzerland, and Canada ranked fifth to eleventh. The etiological evidence of leishmaniasis in these countries ranges from sporadic cases to epidemics of the disease. [26]
In terms of citation impact, the articles published from the USA accumulated the highest number of citations (317,646 citations), with an average citation per paper (ACPP) of 41.75. The articles published from the United Kingdom had 139,870 citations with an ACPP of 39.42 citations per paper, and Brazil had 106,261 citations with an ACPP of 17.65 citations per paper. Articles published from Canada were the most frequently cited and most impactful, with an ACPP of 44.99 citations per paper, followed by Switzerland (ACPP=43.59). Indian papers accumulated 69,985 citations and ranked fifth, with an ACPP of 20.61 citations.
On the quality parameter used for benchmarking [27], the Hirsch index, the USA has the highest value (h-index=211), followed by the United Kingdom (h-index=149) and Germany (h-index=110). India has an h-index value of 93, which ranks it eighth, higher than Spain, Italy, and Iran.
In this study, it was found that ten Indian institutes had published one hundred or more articles on various aspects of leishmaniasis during 1968-2017. The publication performance of these ten institutions, measured in terms of total publications, publication share, total citations, average citations per paper, and h-index, is given in Table 3. These ten institutes contributed 76.41% of the total Indian publications on leishmaniasis. The Indian Institute of Chemical Biology, Kolkata topped the rank with 566 articles (16.69% share), followed by Banaras Hindu University Varanasi (419 publications; 12.36% share) and the Central Drug Research Institute Lucknow (376 publications; 11.09% share). The other institutes ranked from 4th to 10th, with publication shares ranging from 9.73% to 3.04%. These most productive Indian institutions accumulated a total of 67,250 citations, with an ACPP of 25.96 citations; Banaras Hindu University was the most impactful, with the highest citation impact [30,31]. The top three institutions on the h-index parameter were Banaras Hindu University Varanasi (h-index=62), Banaras Hindu University Institute of Medical Sciences Varanasi (h-index=54), and the Indian Institute of Chemical Biology Kolkata (h-index=49).
P. Salotra has the highest h-index value among these authors. The second most impactful author was S. Sundar, with 16,255 citations (h-index=65) and an ACPP of 43.81 citations, followed by C.P. Thakur with 3869 citations (h-index=34) and an ACPP of 50.25 citations. Other than these three most productive authors, P. Das ranked fourth in terms of citations but eleventh in average citations per paper, and H.K. Majumdar ranked tenth in overall publications and sixth in overall citations but fourth in average citations per paper. On the h-index parameter, A. Dubey ranked fourth; however, he ranked fifth on total citations (Table 4).
Most productive journals publishing Indian Leishmaniasis research
The Indian-authored papers were published in 3348 national and international journals.
The 3391 articles were published by 19,788 authors, either singly or jointly with authors from national or international institutions.
CONCLUSION
Leishmaniasis is one of the thirteen neglected tropical diseases identified by the WHO. About 98 countries experience its etiological impact, with Nepal, Bangladesh, India, Afghanistan, Saudi Arabia, Brazil, and Sudan among the most affected regions. Agencies are working towards better health remedies to eradicate the disease from society. The assessment of research impact helps policy and decision makers to frame appropriate policies and programs.
Figure 1: Comparison of Indian and Global Research Publication.
Moreover, the most cited article, Lozano et al. (2012), was in a sleeping state for two years (C0=1), with only two citations in the first two years after publication and nine citations in the third year. Thereafter, it gained momentum and became the most cited article (TC2017=4920).
Figure 2: Citation life of seven most cited articles by Indian authors.
Table 2: Most productive countries of leishmaniasis research.
Table 4 presents the status of the ten most productive Indian authors and their citation impact; these authors published more than one percent of the Indian leishmaniasis articles during 1968-2017. They were associated with five Indian institutes: Rajendra Medical College and Hospital Patna, the Central Drug Research Institute Lucknow, and the Indian Institute of Chemical Biology Kolkata have two authors each, while Banaras Hindu University Institute of Medical Sciences Varanasi, Jawahar Lal Nehru University Delhi, and Vardhman Mahavir Medical College New Delhi have one author each. These ten authors contributed 1304 articles, a 38.45% share of the cumulative Indian output. Four authors published more than one hundred articles on leishmaniasis. P. Salotra from Vardhman Mahavir Medical College, New Delhi was the most impactful, with the highest number of citations for his 102 publications: a total of 68,379 citations at a rate of 670.38 citations per paper. Unlike other bibliometric measures, the h-index is a very important parameter to account for the lifetime achievement of a scholar's work and can give a fairer measure of an academic's overall impact.
Table 5 lists the most productive journals publishing Indian leishmaniasis research; the journal with the most articles has an impact factor of 4.856. Forty-seven articles appeared in Antimicrobial Agents and Chemotherapy (IF2016=4.302) and forty-two papers in the Journal of Biological Chemistry (IF2016=4.125). The other high-impact-factor journals where Indian authors have published their papers were the New England Journal of Medicine (IF2016=72.406), Lancet (IF2016=47.831), and Nature (IF2016=40.137).
Most Cited Articles Authored by Indian Authors (Table 6)
There were 280 articles which scored one citation, and the rest scored two or more. Six articles received 500 or more citations from their publication until 2017 and can be referred to as highly cited articles. These six articles appeared in different national and international journals. Of these six articles, five were published in collaboration with international institutions and one was published from a single Indian institution. Two articles appeared in the Lancet (IF2016=47.831), and one each appeared in Colloids and Surfaces B: Biointerfaces (IF2016=3.887), Clinical Microbiology Reviews (IF2016=19.958), Nature Reviews Microbiology (IF2016=26.819), and Lancet Infectious Diseases (IF2016=19.864). Figure 2 presents the status of the most cited Indian articles with more than 500 citations. The article by Lozano et al. (2012) scored the highest (TC2017=4920), with an average annual citation rate of 615 citations per year, and ranked first. Naghavi et al. (2015) was the second most highly cited article, with a TC2017 of 1744 citations and an average annual citation rate of 581.33 citations. Kumari et al. (2010) was the only highly cited article with a sole Indian affiliation; it was cited 1323 times, with an average annual citation rate of 147 citations, and ranked third. Other articles ranked from fourth to sixth. The article by Naghavi et al. (2015) was also the most impactful in terms of early recognition, judged by the number of citations received in its first year of publication: it was cited immediately in the year it was published (C0=163), with a very early rise in citations. An article which gains recognition through early citations can be considered most impactful. [32]
Out of 3391 papers published from India during 1968-2017, 2936 (86.58%) articles received at least one citation by 2017 (TC2017), with a cumulative total of 69,985 citations. | 2,914.8 | 2018-05-22T00:00:00.000 | [
"Economics"
] |
An Efficient Probabilistic Algorithm to Detect Periodic Patterns in Spatio-Temporal Datasets
Deriving insight from data is a challenging task for researchers and practitioners, especially when working on spatio-temporal domains. If pattern searching is involved, the complications introduced by temporal data dimensions create additional obstacles, as traditional data mining techniques are insufficient to address spatio-temporal databases (STDBs). We hereby present a new algorithm, which we refer to as F1/FP and which can be described as a probabilistic version of the Minus-F1 algorithm to look for periodic patterns. To the best of our knowledge, no previous work has compared the most cited algorithms in the literature to look for periodic patterns, namely Apriori, MS-Apriori, FP-Growth, Max-Subpattern, and PPA. Thus, we have carried out such comparisons and then evaluated our algorithm empirically using two datasets, showcasing its ability to handle different types of periodicity and data distributions. By conducting such a comprehensive comparative analysis, we have demonstrated that our newly proposed algorithm has a smaller complexity than the existing alternatives and speeds up the performance regardless of the size of the dataset. We expect our work to contribute greatly to the mining of astronomical data and the permanently growing online streams derived from social media.
Introduction
Recent technological developments have led to a data deluge [1], a scenario where more data are generated than can be successfully and efficiently managed or capped. This results in missed chances to analyze and interpret data to make informed decisions.
When decision making calls for pattern discovery, the complexity is further expanded if the data have spatio-temporal features, because traditional algorithms are not meant to handle the search for correlations which have a time dimension. This is, for example, the case of global positioning systems [2] and geographic information systems [3], which can be represented as spatio-temporal databases (STDBs)-that is, extensions to existing information systems that include time to better describe a dynamic environment [4].
The exploitation of STDBs can provide valuable knowledge, for instance, in the context of road traffic control and monitoring [5], weather analysis [6], and location-based sociological behavior in social networks [7]. However, as stated above, traditional data mining techniques cannot be directly applied to STDBs, which complicates not only data exploitation but also processing times.
We are interested in the discovery of periodic patterns, which can be seen as events occurring with a certain "periodicity"-for example, the subway's arrival at Central Park Station every 15 min defines a periodic pattern. A period corresponds to any unit of time, such as hours, days, weeks, et cetera. To be precise, a period is the time elapsed between two occurrences of a pattern, and it can be counted in terms of time or a number of transactions.
Sequential pattern mining is also concerned with finding statistically relevant patterns where data appear in a sequence [8]. The sequence is analyzed in such a manner that the possible patterns satisfy a minimum threshold while considering the length of the periods to be analyzed. From the point of view of performance, the discovery of valuable knowledge depends on two aspects: the volume of data and the processing power. Hence, in a context where data grow exponentially, it is critical to ensure the use of efficient algorithms, regardless of the available processing power.
Problem Definition
Let o(s′, t) be a spatio-temporal object defined by a point in time t and a spatial location s′. A change in the shape of the object or in the object's location is known as an event. We will denote an event as e(o_x, t_i), where o_x is the object at a location s′_m and a time t_i. For simplicity, the space where the objects are located is segmented into a set of n × n disjoint cells of equal size. A cell is denoted as s′_m, and a sequence of localized events for the object o_x is denoted as S_x. Events belonging to S_x take place over a time series τ, such as τ = {t_1, ..., t_n}, where t_i < t_{i+1}.
Definition 1. Given a minimum support sup(X)_min provided by a user, X is a p-periodic pattern if and only if sup(X) ≥ sup(X)_min, where the length of X is p and p corresponds to the period. X is a p-periodic pattern over S_x if it satisfies the two user requirements: p and sup(X)_min. To illustrate this, consider a sequence with sup(X)_min = 1/3 and p = 3 that splits into three subsequences, each containing three events. It is then feasible to obtain the p-periodic pattern {a}{*}{c}, which corresponds to a partial periodic pattern, because {*} can represent any event. This pattern is also a perfect periodic pattern, as it appears across all three subsequences.
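A minimal Python sketch of Definition 1 (our own illustration; the sequence shown is a hypothetical instance matching the description above):

# Minimal sketch of Definition 1. Events are single characters; None plays
# the role of the wildcard '*'. The sequence is split into segments of
# length p, and the pattern's support is the fraction of matching segments.
def support(sequence, pattern):
    p = len(pattern)
    segments = [sequence[i:i + p] for i in range(0, len(sequence) - p + 1, p)]
    hits = sum(all(e is None or e == seg[j] for j, e in enumerate(pattern))
               for seg in segments)
    return hits / len(segments)

def is_p_periodic(sequence, pattern, min_sup):
    return support(sequence, pattern) >= min_sup

seq = "abcadcaec"                                    # hypothetical: three subsequences of three events
print(is_p_periodic(seq, ["a", None, "c"], 1 / 3))   # True: a perfect pattern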
The main contributions of this paper are as follows: Extensive experimentation: To the best of our knowledge, no previous work has compared, empirically, the performance of the most cited algorithms based on association rules, such as Apriori [9], MS-Apriori [10], FP-Growth [11], PPA [12], and Max-Subpattern [11]. Thus, we have conducted a comprehensive comparison of these algorithms over two STDBs-first, a synthetic one, then a real one. As part of our experiments, we have also included the Minus-F1 algorithm [13], which has been proven to achieve good results, and a new probabilistic version of it, which we have developed.
An efficient probabilistic algorithm: Although recent developments have produced several off-the-shelf libraries for pattern mining-for instance, apyori [14] is a library that implements the Apriori algorithm in Python-our experiments have confirmed that the performance of the most well-known algorithms is not ideal for STDBs. Thus, we have developed a new, probabilistic version of the Minus-F1 algorithm [13], which we refer to as F1/FP. This new algorithm allows for periodic pattern discovery in STDBs. As in the case of Minus-F1, F1/FP is an algorithm of the Las Vegas type [15], which always provides the correct answer when searching for a pattern, and has a polynomial behavior matched with a better performance in STDBs.
Complexity analysis:
A calculation of the complexity of the F1/FP algorithm. The complexities of association rule algorithms have not been discussed sufficiently in the literature. Indeed, we have struggled to find sources where this kind of analysis is undertaken. Thus, we have endeavoured to prove that the complexity of our newly proposed algorithm is better than that of the alternatives.
We expect our work to contribute significantly towards future research on pattern searching, especially in the case of the exploration of massive datasets-such as those required for the mining of astronomical data-and online streams which continue to grow uninterruptedly-such as those derived from social media.
The remainder of this paper is organized as follows: Section 2 consists of a bibliographical review of pattern searching. Section 3 presents the main algorithms based on association rules, and Section 4 analyzes the complexity of our proposal. Section 5 reports on the experimental environment and Section 6 introduces our results. Lastly, Section 7 offers our conclusions and comments on future work.
Related Work
There are three types of sequential pattern-mining algorithms: machine learning algorithms, algorithms based on mathematical techniques, and algorithms based on association rules. Machine learning algorithms require an objective function and a training dataset to define "correct" patterns [16,17]. This approach often involves a complex model selection process and hyperparameter tuning, which can be challenging for users who lack sufficient domain knowledge and experience. Thus, this approach is unsuitable for users who are not well versed in the intricacies of training and tuning machine learning models.
Algorithms based on mathematical techniques involve the utilization of the Fourier transform to calculate the circular autocorrelation [18]. This allows customization. For instance, Khanna and Kasurkar [19] addressed three types of periodicity-symbol periodicity, segment periodicity, and partial periodicity-by proposing corresponding variants of an algorithm based on autocorrelation. Methods based on mathematics are also robust against noise and efficient at extracting partial periodic patterns, without additional domain knowledge. Regrettably, they prioritize computational efficiency by employing approximations, which may miss some periodic patterns [13]. In other words, mathematical methods trade off the guarantee of finding all the qualifying patterns for faster execution times.
Association rule mining algorithms are those derived from the Apriori-based association rule proposed by Agrawal and Srikant [9]. These algorithms exploit the fact that "any superset of an infrequent item set is also infrequent". Indeed, Apriori identifies frequent item sets from smaller to larger candidates by pruning infrequent ones to prevent an explosion of the number of combinations to be examined.
Even though Apriori remains a well-regarded algorithm [20], it has limitations. First, it only allows for a single minimum support (MS), which can restrict its scope. Second, its efficiency may be lacking in certain situations. To address the first drawback, the MS-Apriori algorithm [10] has been developed to enable the discovery of frequent patterns across multiple thresholds. To address the second drawback, optimization strategies have been used to take advantage of the inherent properties of periodic pattern mining [21,22]. For example, it is not necessary to assess the frequency of an item set in position t if it is not frequent at any position contained within the cycles involving t. Also, other researchers have looked into algorithms that use properties specific to the types of patterns they are interested in, for instance, partial periodic patterns [23], asynchronous periodic patterns [24], symbol periodicity, sequence periodicity, and segment periodicity [25].
Spatio-temporal databases are another area which extends the scope of the problem with many new applications, such as disease diffusion analysis [26], user activity analysis [27], and local trend discovery in social networks [28,29]. Several approaches have been proposed to deal with spatial information [30], treating it as a continuous variable [31,32], formulating it as a dynamic graph mining problem [33], and encoding spatial features as discrete symbols [13]. We have adopted the discrete symbol encoding approach to fully exploit our former research on sequential periodic pattern mining [13].
Han et al. [11] proposed the Max-Subpattern Hit-Set algorithm, often referred to simply as Max-Subpattern. They based their development on a custom data structure called a max-subpattern tree to efficiently generate larger partial periodic patterns from combinations of smaller patterns. Yang et al. [12] proposed the projection-based partial periodic pattern algorithm (PPA), derived from a strategy to encode events in tuples. The empirical results show that the PPA algorithm is better at discovering partial periodic patterns than Max-Subpattern and Apriori. Han et al. [34] also proposed another algorithm called partial frequent pattern growth (PFP-Growth).
PFP-Growth has two stages: the first stage constructs an FP-tree, and the second stage recursively projects the tree to output a complete set of frequent patterns. Experiments were carried out comparing PFP-Growth with the Max-Subpattern algorithm on synthetic data. Results show that PFP-Growth performs better than Max-Subpattern.
Then, Gutiérrez-Soto et al. suggested the Minus-F1 algorithm in 2022 [13]. This is an algorithm designed specifically to search for periodic patterns in STDBs. Gutiérrez-Soto et al. showed that Minus-F1 has a polynomial behavior, which makes it more efficient than other alternatives, such as Apriori, Max-Subpattern, and the PPA. Recently, Gutiérrez-Soto et al. [35] proposed an alternative called HashCycle to find cyclical patterns. Although highly relevant, HashCycle is not appropriate for periodic pattern discovery.
Xun et al. [36] proposed a new pattern called a relevant partial periodic pattern and its corresponding mining algorithm (PMMS-Eclat) to effectively reflect and mine the correlations of multi-source time series data. PMMS-Eclat uses an improved version of Eclat to determine frequent partial periodic patterns and then applies the locality-sensitive hashing (LSH) principle to capture the correlation among these patterns [37].
Jiang et al. [38] addressed the discovery of periodic frequent travel patterns of individual metro passengers, considering different time granularities and station attributes. The authors proposed a new pattern called a "periodic frequent passenger traffic pattern with time granularities and station attributes" (PFPTS) and developed a complete mining algorithm with a PFPTS-Tree structure. The proposed algorithm was evaluated on real smart card data collected by an automatic fare collection system in a large metro network. As opposed to Jiang et al., our work can be applied in different situations rather than being restricted specifically to individual-traveller contexts.
Whilst existing algorithms have been designed to handle various aspects of periodic pattern mining and spatio-temporal data, they often focus on optimizing computational efficiency or addressing specific pattern types. In contrast, our work presents a novel probabilistic variant of the Minus-F1 algorithm that aims to balance efficiency and effectiveness in a wide range of scenarios. The proposed algorithm is exhaustively evaluated against most of the previously mentioned algorithms using two datasets with diverse characteristics, showcasing its ability to handle different types of periodicity and data distributions. By conducting a comprehensive comparative analysis, we will highlight the unique contributions and advantages of our probabilistic variant of Minus-F1.
Algorithms
Sequential pattern mining is concerned with finding statistically relevant data patterns where the values appear in a sequence [8]. Several algorithms have been designed for this purpose, and we want to compare our newly suggested alternative with the most well-regarded options, namely, Apriori, Max-Subpattern, PPA, Minus-F1, and FP-Growth. We will describe these options below and illustrate our explanations with examples.
Apriori
Apriori is an algorithm for frequent item mining on relational databases [9]. It identifies items retrieved frequently in a database and creates a set containing such items. Over time, the set becomes larger, as items continue to be added if they are retrieved often. These sets can later be used to establish association rules [39], which highlight trends in the database. Although Apriori was not originally designed to have a temporal dimension, we have amended it to include one.
Consider the following example. Let us assume that the string below represents a time series with periodicity four-the periodicity has been determined in advance. Note that each character in the string represents a separate event, and the events within curly braces are those that occur simultaneously.
a{b, c}ddab{c, d}daabbacbda{b, d}da
Given that the periodicity of the time series is four, we can confirm that the number of periods is five. We have used hyphens to separate each period in the line below.
a{b, c}dd − ab{c, d}d − aabb − acbd − a{b, d}da
Apriori identifies the sets of frequent items by making subsequent passes through the database. In the first pass, it gathers the set of frequent items of size 1; then, in the second pass, the set of frequent items of size 2, and so on.
Let us call F_k the set of frequent items of size k. Then, assuming a minimum support of 3, F_1 can be derived from the single-item candidates; subsequently, F_2 can be derived from the candidate pairs built from F_1; finally, there is only one candidate for F_3. The algorithm finishes when F_k = ∅. Hence, we finish with F_3 in this example, as the number of events in F_3 cannot generate an F_4 set.
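A hedged Python sketch of these passes (our own illustration; an "item" here is a (position, event) pair so that the period positions are respected):

from itertools import combinations

# The five periods of the example; each position holds a set of
# simultaneous events.
periods = [
    [{'a'}, {'b', 'c'}, {'d'}, {'d'}],
    [{'a'}, {'b'}, {'c', 'd'}, {'d'}],
    [{'a'}, {'a'}, {'b'}, {'b'}],
    [{'a'}, {'c'}, {'b'}, {'d'}],
    [{'a'}, {'b', 'd'}, {'d'}, {'a'}],
]
min_sup = 3

def count(itemset):
    # A period supports an itemset if every (position, event) item occurs.
    return sum(all(e in per[pos] for pos, e in itemset) for per in periods)

items = {(pos, e) for per in periods
         for pos, cell in enumerate(per) for e in cell}
levels = [{frozenset([it]) for it in items if count([it]) >= min_sup}]
k = 1
while levels[-1]:
    # Join step: build size-(k+1) candidates from unions of frequent sets,
    # then prune those below the minimum support.
    cands = {a | b for a, b in combinations(levels[-1], 2) if len(a | b) == k + 1}
    levels.append({c for c in cands if count(c) >= min_sup})
    k += 1
levels.pop()                      # drop the empty level that ended the loop
for k, f in enumerate(levels, 1):
    print(f"F{k}:", [sorted(s) for s in f])

Running this sketch ends at F_3 with a single frequent set, matching the behaviour described in the text.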
Max-Subpattern
Max-Subpattern was originally proposed by Han et al. [11] as an attempt to reduce the number of sets needed to determine periodic patterns [40]. It builds as many trees as the number of periods encountered in a time series representing a sequence of events. However, period 1, which is equivalent to a period formed by a single event, is not taken into account. If a sequence has size n, the maximum number of periods to evaluate is n/2. Thus, Max-Subpattern builds up to n/2 − 1 trees. Let us call C_max the root of the tree. Then, for each set of candidates F_k,Candidates, there is a different C_max. Also, each level of the tree will have subpatterns. For instance, if C_max is formed by four events, the next level in the tree (Level 1) will be formed by four nodes, and each node will represent a subpattern composed of |C_max| − 1 events. Then, Level 2 is formed by nodes with |C_max| − 2 events whose ancestor belongs to Level 1. Each node is made up of at least two events, that is, without considering F_1. Thus, the maximum height of each tree is |C_max| − 1.
Let us consider the same example used for Apriori in Section 3.1. Once F_1 has been determined, C_max is formed. Then, we proceed to find the subpattern hits, discarding all the matches with only one non-* element.
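A hedged sketch of the hit step (our own illustration; the C_max used follows from the F_1 found in the Apriori sketch above):

from collections import Counter

# C_max is a tuple of event sets, one per position; a period's hit is its
# maximal subpattern of C_max, and hits with fewer than two non-* positions
# are discarded, as described in the text.
def max_hit(cmax, period):
    sub = tuple(frozenset(c & p) for c, p in zip(cmax, period))
    return sub if sum(1 for cell in sub if cell) >= 2 else None

cmax = ({'a'}, {'b'}, {'d'}, {'d'})   # from the F_1 of the example above
periods = [
    [{'a'}, {'b', 'c'}, {'d'}, {'d'}],
    [{'a'}, {'b'}, {'c', 'd'}, {'d'}],
    [{'a'}, {'a'}, {'b'}, {'b'}],
    [{'a'}, {'c'}, {'b'}, {'d'}],
    [{'a'}, {'b', 'd'}, {'d'}, {'a'}],
]
hits = Counter(h for per in periods if (h := max_hit(cmax, per)) is not None)
for pattern, n in hits.items():
    print(pattern, n)

In the full algorithm, these hits populate the max-subpattern tree, from which the counts of all smaller subpatterns are then derived.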
PPA
After discovering that Max-Subpattern spends a large amount of time calculating frequency counts from redundant candidate nodes, Yang et al. [12] developed the projection-based partial periodic patterns algorithm-abbreviated as PPA-for mining partial periodic patterns with a specific period length in an event sequence.
The PPA starts by going over the time series which represents the sequence of events and splits it into partial periods of size l. Afterwards, each event is codified-that is, the position of each event inside the partial period is recorded. Codified events can be seen as a matrix, where the first row corresponds to the first codified events and each column corresponds to the event's position inside the partial periods. The matrix was referred to by Yang et al. as an encoded period segment database (EPSD) [12].
By following this approach, it is possible to count the instances of each event by column, and the result is used to check whether the events comply with the required support. Consider a particular instance of the original example defined for Apriori in Section 3.1, namely,
abdd − abdd − aabb − acbd − abda
Consequently, the matrix is defined so that the element x_i corresponds to event x in position i. Once the instances of each event are counted by column, and the minimum support is satisfied, a candidate subsequence can be derived. Then, the events that form this subsequence are sorted, considering first the partial positions and then the lexicographic nomenclature of each event. The last subsequence S_c is equivalent to F_1. Indeed, according to Yang et al. [12], S_c is used to look for the other F_k candidate patterns. Each event of S_c is used as a prefix to obtain the patterns that comply with the minimum support over the EPSD. Finally, all the F_k sets that fulfil the minimum support are gathered.
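A minimal Python sketch of this encoding and counting step on the same instance:

# The sequence is split into periods of length l, every event is tagged
# with its position to form the EPSD, the tagged events are counted per
# column, and those meeting the minimum support form S_c (ordered by
# position, then lexicographically).
seq = "abddabddaabbacbdabda"           # the instance used in the text
l, min_sup = 4, 3

epsd = [seq[r:r + l] for r in range(0, len(seq), l)]   # one row per period
counts = {}
for row in epsd:
    for i, e in enumerate(row):
        counts[(i, e)] = counts.get((i, e), 0) + 1

s_c = sorted(key for key, n in counts.items() if n >= min_sup)
print([f"{e}{i}" for i, e in s_c])     # ['a0', 'b1', 'd2', 'd3']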
Minus-F1
Minus-F1 operates by using two counters: one which is increased by 1 every time there is a match with the candidate pattern, and a second one which decreases until it reaches zero when the subsequence is consumed. In the first run of the algorithm, the sequence's probability distribution is calculated-this can be seen as capturing the entropy of all the events in the sequence. To achieve this, Minus-F1 finds out how many times each event occurs. When an event occurs, its counter is decreased. Thus, when the counter reaches zero, we can confirm that it is unnecessary to keep looking for it-it can no longer occur.
The worst-case scenario for Minus-F1 happens when the events are distributed uniformly [13]. In contrast, when the distribution is not uniform, the algorithm performs the pruning efficiently. To illustrate this, let us consider the following sequence S, which comprises the subsequences s_1 = abc, s_2 = abj, s_3 = efg, and s_4 = hij, namely, S = {abc − abj − efg − hij}.
Note that all the subsequences have period 3. Assuming a minimum support of 2, the only two elements S_{i,j} which satisfy the minimum support and form a partial pattern are S_{1,1} = a and S_{1,2} = b. In other words, ab* is the only partial pattern.
Once the subsequences s_1 and s_2 have been consumed, it makes no sense to continue searching for them-in our example, the events a and b cannot occur in s_3 and s_4. Hence, we can prune the search space.
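The following simplified sketch reproduces the example (our own illustration; the actual algorithm interleaves the counter updates with the candidate search so that the pruning happens during the scan):

from collections import Counter
from itertools import combinations

S = ["abc", "abj", "efg", "hij"]      # subsequences of period 3
min_sup = 2

remaining = Counter("".join(S))       # first pass: occurrences per event
support = Counter()                   # matches per (position, event)
for sub in S:
    for pos, e in enumerate(sub):
        support[(pos, e)] += 1
        remaining[e] -= 1             # once this reaches 0, later
                                      # subsequences need not be searched
                                      # for the event e at all

freq = sorted(k for k, n in support.items() if n >= min_sup)
# Keep only patterns with at least two non-* positions that co-occur often
# enough, which yields ab* as in the text.
patterns = [(x, y) for x, y in combinations(freq, 2)
            if sum(sub[x[0]] == x[1] and sub[y[0]] == y[1] for sub in S) >= min_sup]
print(patterns)                       # [((0, 'a'), (1, 'b'))]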
It appears that Minus-F1 is mainly affected by the size of the period [13], as opposed to the number of patterns found, which differs from the rest of the algorithms reviewed here. In fact, Minus-F1 goes through the entire sequence of events once for each period under consideration. Thus, Gutiérrez-Soto et al. [13] have pointed out that Minus-F1's best performance is achieved as the minimum support tends to zero. We aimed to fix this issue in the new algorithm that we are proposing.
FP-Growth
FP-Growth was designed to derive sets of frequent items from sequences without a pre-defined period. The algorithm begins by creating a table comprising the frequent items which satisfy the minimum support. Then, the table is sorted in descending order.
Once the items which satisfy the minimum support have been identified, FP-Growth removes from the items the segments that do not satisfy the minimum support and separates them to search for partial patterns. Finally, the patterns are sorted according to the position they have in the original segments. In the case of our example, the results are displayed in Table 1.
Minus-F1 (Probabilistic Version)
Our version of Minus-F1, which we have called F1/FP, is a Las Vegas type of algorithm, which always provides the correct answer. This means that its performance in the worst-case scenario corresponds to the deterministic algorithm's performance. Note that this situation arises only when the probability distribution of the algorithm's input data reaches the worst case, which is uncommon. Therefore, the time complexities for this type of algorithm are expressed as expected time, denoted by Θ(f(n)).
F1/FP operates similarly to Minus-F1, except that, when searching for subsequences, these are selected randomly, assuming their occurrence likelihood follows a uniform distribution. This can be seen in Line 9-the swap procedure-of Algorithm 1, where we have listed the pseudo-code for F1/FP to illustrate our explanation. This simple modification of Minus-F1 provides a better performance. It is worth noting that the literature offers plenty of such subtle improvements, which result in better performances and running times.
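As an illustration, the following Python sketch shows our reading of the swap procedure: each swap partner of position i is drawn uniformly from the later positions, following Definition 3 in the Time Complexity subsection below:

import random

# Hedged sketch of the F1/FP randomization: the subsequences are visited
# in an order produced by n' - 1 uniform swaps.
def random_order(subsequences):
    subs = list(subsequences)
    n = len(subs)                          # n' in the paper's notation
    for i in range(n - 1):
        j = random.randint(i + 1, n - 1)   # uniform over positions i+1 .. n'
        subs[i], subs[j] = subs[j], subs[i]
    return subs

print(random_order(["abc", "abj", "efg", "hij"]))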
Time Complexity
To show how random swaps affect the running time, determined by its expected value, we provide the following definitions:
Definition 2. Let F(s_i) be a function determining the occurrence of subsequence s_i within the sequence S, such that F(s_i) = 1 if s_i occurs at moment t_i over S, and F(s_i) = 0 if it does not.
Definition 3. Let Pr[s_i] be the probability of choosing some subsequence within the sequence S, such that its position can be between i + 1 and n′, where n′ is the number of subsequences available to carry out a swap (n′ = n/p).
We assume that all subsequences have the same probability of being selected-in other words, a uniform distribution is assumed-and Pr[s_i] is defined accordingly.
Definition 4. Given a random subsequence s whose position within S is i, the expected value of carrying out a swap is defined from F and Pr[s_i].
Lemma 1. The number of swaps carried out by the probabilistic version of Minus-F1 is given by the number of subsequences minus one, that is, n′ − 1.
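Under this uniform assumption, a hedged LaTeX reconstruction of the quantities in Definitions 3 and 4 (our reading, consistent with Lemma 1) is:

% Hedged reconstruction: with the swap partner of position i drawn
% uniformly from the n' - i positions i+1, ..., n',
\Pr[s_i] \;=\; \frac{1}{n' - i}, \qquad i = 1, \dots, n' - 1,
% and, since each of the n' - 1 iterations performs exactly one swap,
E[\text{number of swaps}] \;=\; \sum_{i=1}^{n'-1} 1 \;=\; n' - 1 .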
Therefore, the time complexity of carrying out the swaps follows from Lemma 1. Proof by induction. Base case (n′ = 2): using the loop invariant in Lines 2-3 of Algorithm 1, we notice that there is exactly one swap, which equals n′ − 1. Note that S has two events; the random event is then chosen from the first event of the sequence. Inductive step: this case arises when n′ ≥ 3, for any k-th iteration from i = 2 until n′, such that k ≤ n′. Using the loop invariant in Lines 2-3, there are always n′ − 1 swaps.
Although our random procedure provides notable improvements in running times, its time complexity does not change in general. This is because the sequence's length is n, and the algorithm must run through all subsequences of length n/p over the p periods. Consequently, running over all subsequences s_i takes Θ(n). Given that all algorithm loops operate on subsequences chosen randomly, a portion of this version can be denoted as Θ(f(n)) instead of O(f(n)), except for the loops between Lines 2 and 8, which are related to the m events. Thus, since Minus-F1's time complexity is O(mn^2), this probabilistic version can be characterized as Θ(mn^2), which is bounded by O(mn^2).
Experimentation
To check the algorithms' performance, two datasets were used. The first one is composed of synthetic data, and it was used to corroborate that each algorithm was implemented correctly-that is, to confirm that each algorithm was able to find the required patterns. Once correctness had been verified, we used a second dataset to confirm that the algorithms could handle real data. The second dataset is a sample of the Geolife GPS trajectory dataset [41].
Geolife records a broad range of users' outdoor movements, including daily routines-going to work or returning home-and activities like travelling to entertainment, shopping, and sport activities [41]. Geolife has been widely used in mobility pattern mining and location-based social networks [26], which are potential applications for our work. Therefore, we thought this dataset would fit our experimentation adequately.
The Geolife dataset comprises GPS trajectories undertaken by 182 people over a period of three years-between April 2007 and August 2012-and it was collected by Microsoft Research Asia.Each GPS trajectory is represented by a sequence of time-stamped points labelled by latitude, longitude, and altitude.
To characterize Geolife as an STDB for our experiments, the space was represented by a set of cells forming a grid. The location of each object within the grid was determined by its latitude and longitude. Time was modelled as a timestamp. At timestamp 0, all the objects are situated in their initial positions. Subsequently, objects move to different positions across the grid. An object's motion was characterized as a contiguous sequence of characters, facilitating pattern searching within the sequence.
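A minimal sketch of such an encoding (our own illustration; the grid size and bounding box are hypothetical):

# Latitude/longitude pairs are mapped to cells of an n x n grid, and each
# cell is given a character so that a trajectory becomes a plain sequence.
def to_cell(lat, lon, bounds, n):
    (lat0, lat1), (lon0, lon1) = bounds
    row = min(int((lat - lat0) / (lat1 - lat0) * n), n - 1)
    col = min(int((lon - lon0) / (lon1 - lon0) * n), n - 1)
    return chr(ord('a') + row * n + col)       # one symbol per cell

# Hypothetical bounding box around Beijing, where most Geolife data lie.
bounds = ((39.5, 40.5), (116.0, 117.0))
track = [(39.90, 116.40), (39.91, 116.41), (40.05, 116.30)]
sequence = "".join(to_cell(lat, lon, bounds, 4) for lat, lon in track)
print(sequence)                                # e.g. "ffj"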
It should be observed that our representation of motion can have an impact on pattern detection only if movement occurs within a time window whose granularity is smaller than what has been represented. For instance, if we were measuring time in minutes, we could lose some patterns occurring within seconds. However, this is not the case. The efficiency of the algorithms considered here does not depend on the granularity of the grid, but on the length of the sequence.
The results displayed below correspond to the average of five executions for each experiment. From an empirical perspective, the performance of each algorithm is determined by its running time. To define a pattern, a range of 2 to n/2 events was considered, where n represents the length of the sequence. This implies that all patterns consist of at least two events-at least one event repetition-occurring up to half the length of the sequence. For a pattern to be valid, it must occur at least twice within the sequence.
All experiments were limited to a maximum of 3 h-results exceeding this length are not shown. The experiments were carried out on a server equipped with an Intel Xeon Processor E3-1220 at 3.00 GHz and 16 GB of RAM operating at 2133 MHz, with a 1 TB 7200 RPM hard drive, running under Linux (Debian Jessie 8.4). Table 2 displays the abbreviations used later in our results to refer to the different algorithms.
Results Derived from the Synthetic Dataset
The experiments contemplated sequences of size 500, 750, and 1000, considering periods of 4, 8, 16, and 20. To link the running times with the corresponding computational complexities for each algorithm, two experiments were performed. The experiments cover pattern searching over the synthetic database, which has a full pattern with period 48 repeated until achieving the sequence size. Supports of 25% (Tables 3-5), 50% (Tables 6-8), and 75% (Tables 9-11) were considered.
Discussion
As mentioned previously, timestamped data on the grid are mapped to a character string representing a sequence. All the algorithms which we have included in our research operate on such sequences, and both their performance and scalability depend solely on the sequence's length and the minimum support. Our results are independent of the size of the grid and the configuration of its cells. Thus, the impact of the mapping is not considered here. There is, on the other hand, a separate body of research that studies indexing and searching methods in spatio-temporal databases. These works are based on indexing structures such as the R-tree and its variants [42], namely the HR-tree and MVR-tree, to mention a couple. Given that such works depend on these structures, it is not possible to compare them directly with the association rule algorithms that we have described here.
Synthetic Data Results
Tables 3-5 show the experimental results over the synthetic STDB. In these tables, the minimum support was set to 25%. The sequence length is 500 in Table 3, 750 in Table 4, and 1000 in Table 5.
Even though our experiments were limited to a maximum of three hours, they shed light on the algorithms' performance. In Table 3, the best results for average processing time are provided by F1/FP (6.4 ms), followed by F1 (26.6 ms), M-SP, FP-G, and PPA. The first results are in line with the standard deviations presented by the first two algorithms-1.85 for F1/FP and 17.02 for F1. The worst results were by APR and MSA. These two algorithms also had the worst standard deviations-2.32 × 10^7 for APR with an average time of 1.74 × 10^7 ms, and 9.20 × 10^6 for MSA with an average time of 9.20 × 10^6 ms. Table 4 reflects the same behavior as Table 3, maintaining the same order of less and more efficient algorithms in terms of processing time. According to Table 5, it is possible to see the same performance trend for both the least and most efficient algorithms. Whenever the sequence length was increased in Tables 3 and 4, the processing times also increased for all the algorithms. Tables 6-8 present the results considering a minimum support of 50%. The sequence length is 500 in Table 6, 750 in Table 7, and 1000 in Table 8. In Table 6, the worst performance is by MSA, with an average time of 1.0 × 10^4 ms and a standard deviation of 3.99 × 10^3, followed by PPA, whose average time was 1.77 × 10^3 ms with a standard deviation of 3.52 × 10^3. Conversely, the best times are provided by F1/FP and F1. F1/FP has an average of 4.8 ms and a standard deviation of 1.16, while F1 has an average of 20.6 ms with a standard deviation of 13.23.
Table 7 exhibits the same behavior as Table 6-that is, the same order of performance for the two most efficient and the two least efficient algorithms. The worst average time in Table 8 was recorded by PPA (2.82 × 10^3 ms), while its standard deviation was 6.79 × 10^3. The second worst average-2.82 × 10^3-was recorded by M-SP, while the second worst standard deviation-4.79 × 10^3-was presented by MSA.
It is worth noting that the PPA is particularly affected when the period is 12 in Tables 6-8, as both its average time and standard deviation increase. However, the PPA is not the only one affected. All algorithms are impacted negatively by the same period, except for F1/FP and F1. This peculiarity with period 12 could be attributed to how the pattern is formed, as both APR and MSA are not affected as much as the PPA. Following the same trend observed in Tables 3-5, the best average time along with the best standard deviation is yielded by F1/FP and F1.
Tables 9-11 display the results considering a minimum support of 75%, with sequences of lengths 500 (Table 9), 750 (Table 10), and 1000 (Table 11). In Table 9, the worst average time was yielded by M-SP with 442.2 ms, while the second worst was from APR-41.6 ms with a standard deviation of 24.62. Remarkably, the standard deviation for F1 (13.141) was the second worst. The most efficient algorithm was the PPA, with an average of 3.8 ms and a standard deviation of 1.30. The second most efficient one was MSA, whose average time was 4 ms. Note that FP-G registered the lowest standard deviation, with a value of 0.707.
As in the case of Table 9, the less efficient algorithms in Table 10 are M-SP, with an average of 1.22 × 10^3 ms and a standard deviation of 44.7, and APR, which presents an average time of 46.8 ms with a standard deviation of 27.36. FP-G exhibits the lowest standard deviation, and the PPA proves to be the most efficient with an average of 4.4 ms and the second-lowest standard deviation of 1.14. F1/FP offers the second-best average-that is, 4.8 ms-and the third-best standard deviation of 1.30. Notably, F1 continues to have a better average time and standard deviation than APR and M-SP.
Finally, Table 11 shows the same trend as Tables 9 and 10. The highest standard deviation was displayed by M-SP at 2.78 × 10^3, and the lowest time average was exhibited by F1/FP at 5.4 ms, followed by the PPA at 5.6 ms. Note that F1/FP presents a high standard deviation, though it is negligible in comparison with the PPA's, and F1 has better averages than M-SP and APR.
To summarise, from this set of experiments, we can appreciate that every time the sequence length is increased, the processing time also increases. In addition, whenever the support is raised, all algorithms tend to reduce their average time and their standard deviation, which implies lower processing times for each of them. PPA, M-SP, and FP-G greatly benefit from the minimum support being increased. On the other hand, F1/FP and F1 exhibit a scalable performance that is independent of both increments, the minimum support and the sequence length. This is particularly notable in comparison with the performance of the other algorithms, especially when the support is low.
Real Data Results
Tables 12-14 display the experimental results on the real dataset. In these three tables, the minimum support was set to 25%, and the sequence lengths are 500, 750, and 1000, respectively. In Table 12, three algorithms exceed the maximum of three hours, particularly when the periods are 50 and 100. These algorithms correspond to APR, MSA, and PPA, which present the highest averages for time along with their corresponding standard deviations-that is, replacing "-" with three hours in milliseconds. According to the results of this table, the most efficient algorithms are F1/FP with an average of 19.6 ms and F1 with 454 ms. Their standard deviations were 12.11 and 577.23, respectively. Table 13 exhibits the same behavior as Table 12, maintaining the same positions for the least efficient algorithms in terms of running time, particularly when the period is 100. Following the same pattern as Table 12, F1/FP and F1 had the lowest averages and standard deviations. Continuing this trend, Table 14 provides the same rankings for the best and worst averages along with their standard deviations.
Four algorithms-APR, MSA, PPA, and FP-G-exceeded the time limit in Table 15. These four algorithms yielded the highest standard deviations. Conversely, the lowest averages and standard deviations corresponded to F1/FP and F1. No algorithm exceeded the time limit in Table 16. However, the highest averages were provided by APR (3.25 × 10^3 ms with a standard deviation of 6.04 × 10^3), followed by M-SP with an average of 1.60 × 10^3 ms and a standard deviation of 6.20 × 10^2.
Two algorithms obtained the lowest averages: PPA with 22 ms and a standard deviation of 7, followed by F1/FP with 25.6 ms. As for Table 17, three algorithms exceeded the time limit-APR, MSA, and PPA-when the period was 100. Also, note that these algorithms presented the highest standard deviations. The algorithm with the lowest average was FP-G-34 ms with a standard deviation of 10.99-followed by F1/FP-41.2 ms with a standard deviation of 31.956.
Table 18 is no exception to the fact that some algorithms exceeded the time limit, namely APR, MSA, and PPA, specifically when the period was 100. The lowest averages were given by F1/FP-15.4 ms with a standard deviation of 11.41-and F1-192.6 ms with a standard deviation of 221.14. As for Table 19, no algorithm exceeded 3 h of processing. The highest average times were given by APR with 2.83 × 10^3 ms and a standard deviation of 5.16 × 10^3. The second-highest times corresponded to M-SP with an average of 1.54 × 10^3 ms and a standard deviation of 568.33.
Finally, Table 20 shows that no algorithm exceeded the maximum time limit. The lowest average time was provided by PPA-27 ms with a standard deviation of 6.782. The second-best average time was for FP-G-32 ms with a standard deviation of 7.211.
Just as with the synthetic dataset, every time the support was increased in the real dataset, the running times decreased, except for F1/FP and F1. Similarly, when the sequence length was increased, the running times also increased.
At first glance, the running times are higher on the real dataset than on the synthetic one. However, for both datasets, the running times of the algorithms that use a minimum support decreased every time the minimum support was increased. F1/FP always showed remarkable running times. Indeed, this algorithm was always among the best ones. Incidentally, when the minimum support was increased, PPA and M-SP also achieved good results on the real dataset.
Conclusions
Mining periodic patterns became a topic of relevance in the 1990s, mostly after the development of the Apriori algorithm. Since then, the discovery of patterns has turned out to be one of the main techniques for characterizing data. Over the years, several improvements to the basic Apriori idea have been considered, focusing on larger and larger datasets as time has progressed, increasingly stressing the storage and processing capabilities of modern computers.
Table 3. Processing time (ms) for each algorithm over sequences of length 500 with a minimum support of 25% over synthetic data.
Table 4. Processing time (ms) for each algorithm over sequences of length 750 with a minimum support of 25% over synthetic data.
Table 5. Processing time (ms) for each algorithm over sequences of length 1000 with a minimum support of 25% over synthetic data.
Table 6. Processing time (ms) for each algorithm over sequences of length 500 with a minimum support of 50% over synthetic data.
Table 7. Processing time (ms) for each algorithm over sequences of length 750 with a minimum support of 50% over synthetic data.
Table 8. Processing time (ms) for each algorithm over sequences of length 1000 with a minimum support of 50% over synthetic data.
Table 9. Processing time (ms) for each algorithm over sequences of length 500 with a minimum support of 75% over synthetic data.
Table 10. Processing time (ms) for each algorithm over sequences of length 750 with a minimum support of 75% over synthetic data.
Table 11. Processing time (ms) for each algorithm over sequences of length 1000 with a minimum support of 75% over synthetic data.
Table 12. Processing time (ms) for each algorithm over sequences of length 500 with a minimum support of 25% over real data.
Table 13. Processing time (ms) for each algorithm over sequences of length 750 with a minimum support of 25% over real data.
Table 14. Processing time (ms) for each algorithm over sequences of length 1000 with a minimum support of 25% over real data.
Table 15. Processing time (ms) for each algorithm over sequences of length 500 with a minimum support of 50% over real data.
Table 16. Processing time (ms) for each algorithm over sequences of length 750 with a minimum support of 50% over real data.
Table 17. Processing time (ms) for each algorithm over sequences of length 1000 with a minimum support of 50% over real data.
Table 18. Processing time (ms) for each algorithm over sequences of length 500 with a minimum support of 75% over real data.
Table 19. Processing time (ms) for each algorithm over sequences of length 750 with a minimum support of 75% over real data.
Table 20. Processing time (ms) for each algorithm over sequences of length 1000 with a minimum support of 75% over real data.
"Computer Science",
"Mathematics"
] |
Modified U-Net for liver cancer segmentation from computed tomography images with a new class balancing method
Background: Liver cancer is the sixth most common cancer worldwide. It is mostly diagnosed with a computed tomography (CT) scan. Nowadays, deep learning methods are used for the segmentation of the liver and its tumor from CT scan images. This research mainly focused on segmenting the liver and its tumor from abdominal CT scan images using a deep learning method and on minimizing the effort and time used for a liver cancer diagnosis. The algorithm is based on the original UNet architecture, but here the number of filters in each convolutional block was reduced, and a batch normalization and a dropout layer were added after each convolutional block of the contracting path. Results: Using this algorithm, dice scores of 0.96, 0.74, and 0.63 were obtained for liver segmentation, segmentation of tumors from the liver, and segmentation of tumors from abdominal CT scan images, respectively. The segmentation results for the liver and for tumors from the liver showed improvements of 0.01 and 0.11, respectively, over other works. Conclusion: This work proposed a liver and tumor segmentation method using a UNet architecture as a baseline. Modifications regarding the number of filters and network layers were made to the original UNet model to reduce the network complexity and improve segmentation performance. A new class balancing method is also introduced to minimize the class imbalance problem. Through these, the algorithm attained better segmentation results and showed good improvement. However, it faced difficulty in segmenting small and irregular tumors.
Background
Liver cancer is the sixth most common cancer worldwide. According to the Global Cancer Statistics report, it is the second and sixth cause of cancer death for men and women, respectively [1]. According to WHO data, the percentage of liver cancer deaths in Ethiopia out of the total deaths in 2017 was about 0.16% [2]. In general, there are two types of liver cancer, primary and secondary. Among primary types of cancer, hepatocellular carcinoma (HCC) accounts for 80% of the cases [3]. HCC is the third cause of cancer deaths and results in the death of around 700,000 people each year worldwide [4]. The major risk factors associated with primary liver cancers are cirrhosis resulting from alcohol usage, hepatitis B and C viruses, and a fatty liver disease caused by obesity [5]. Liver cancer can be diagnosed and detected by using different imaging tests like ultrasound, magnetic resonance imaging (MRI), and computed tomography (CT). Of these, a CT scan is the most frequently used imaging test [6].
A CT scan gives detailed cross-sectional images of the abdominal region. Most of the time, further processing of these abdominal CT scan images is required to segment the liver and its tumorous areas from the rest of the CT image contents.
Still, the intensity similarity between the tumor and other nearby tissues in the CT images makes the segmentation of the tumorous areas difficult [5]. Therefore, these images need to be processed and enhanced to differentiate the cancerous tissue.
In a CT scan, the presence of liver cancer can be identified by the difference in pixel intensity in comparison to the surrounding healthy liver, i.e., the tumor area may be darker (hypodense) or brighter (hyperdense) than the surrounding healthy liver [7]. The manual segmentation of CT scan images is laborious and time-consuming in a clinical setting because of various factors: the liver typically stretches over 150 slices in a CT volume, the shapes of the lesions are indefinite, the contrast between the lesions and the nearby tissue might be low, the shape and size of the liver vary among patients, and the intensity of the liver might be similar to that of other organs [5,8]. Considering these problems, researchers have designed different computer-aided diagnostic systems for the segmentation of the liver and its tumor from abdominal CT scan images.
In earlier days, different traditional techniques were used to extract tumors from liver images, but these methods were not fully effective in the extraction of the tumor. Most of them are manual or semi-automatic and depend on edge detectors rather than analyzing the image pixel by pixel. After hardware improvements in the 2000s, machine learning approaches became widely applicable in image processing tasks like segmentation [9]. A variety of deep learning methods have also been developed for automatic or semi-automatic segmentation of liver tumors. Among those, convolutional neural networks (CNN) are currently the most widely used method [10]. Researchers have used CNN and its extensions, the fully convolutional network and UNet, for liver and tumor segmentation.
Recent techniques for the segmentation of liver tumors can be classified into three classes according to the method that they implemented. These are convolutional neural networks (CNN), fully convolutional networks (FCN), and UNet convolutional networks. But CNN is the baseline for all methods.
The first method is the convolutional neural network (CNN). In this method, researchers had used pure CNN architectures for the segmentation of the liver and the tumor. In 2019, Budak et al. developed two cascaded encoder-decoder convolutional neural networks for efficient segmentation of liver and tumor. They proposed the EDCNN algorithm that includes two symmetric encoder and decoder parts. Each part consists of ten convolutional layers with batch normalization and ReLU activation followed by a max-pooling layer [11].
The other method is the fully convolutional network (FCN). FCN is an extension of CNN that substitutes the fully connected layer of CNN with a 1 × 1 convolution, where the final output layer has a large receptive field that matches the width and height of the original image, enabling every pixel to be classified. FCNs have two parts, a downsampling and an upsampling path. In the downsampling path, there are seven convolutional and five max-pooling layers that downsize the input image through convolution and max-pooling operations. Researchers have also used this method for liver and tumor segmentation [12,13].
The third method is the UNet convolutional neural network. UNet was designed for biomedical image segmentation by extending the work published in 2014 [14]. It works with small training samples and gives more accurate segmentation results. This network consists of a contracting path that extracts semantic or contextual information from the image and an expansive path which adds location information for each pixel and answers where each of them is localized. The two paths are more or less symmetric to each other and yield a U-shaped architecture [15]. Researchers have used this model for tumor segmentation by modifying and improving the architecture, increasing the depth of the structure and adding more skip connections and dropout layers, and have combined it with other methods like graph cut and 3D conditional random fields for better segmentation results [8,10,16].
According to the available literature regarding U-Net, the maximum dice scores obtained for liver and tumor segmentation are 0.9522 and 0.63, respectively. Additionally, Christ et al. and Chlebus et al. had used 3D postprocessing methods for better segmentation results [8,16]. But still, the segmentation performance was comparatively poor.
In this paper, a deep learning-based segmentation algorithm was employed for liver and tumor segmentation from abdominal CT scan images. The main contributions of this work are as follows: first, it applies data augmentation, which eases the limited availability of biomedical image data; second, it greatly reduces the time needed for training by reducing the number of filters in each convolutional block, thereby reducing the number of trainable parameters, as sketched below; and third, it minimizes the class imbalance between the tumor and the background by discarding slices with no tumor information from the datasets and using only slices with full information. These modifications improve the performance of the algorithm in detecting the tumor from the CT images. Finally, this work also shows the direct segmentation of liver tumors from the abdominal CT scan images without segmenting the liver first. By this, we were able to show the results of the three segmentation experiments in one paper, unlike others.
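As a hedged illustration of the second contribution, a contracting-path block could look as follows in Keras (a sketch under our assumptions; the actual filter counts and dropout rate are not given in this excerpt):

from tensorflow.keras import layers

# A contracting-path block with a reduced filter count, followed by batch
# normalization and dropout, as stated in the contributions. Filter counts
# and dropout rate are placeholders, not the values used in this work.
def contracting_block(x, filters, rate=0.1):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Dropout(rate)(x)
    skip = x                              # kept for the expansive path
    return layers.MaxPooling2D(2)(x), skip

inputs = layers.Input((128, 128, 1))      # the resized CT slices
x, s1 = contracting_block(inputs, 16)     # hypothetical filter counts
x, s2 = contracting_block(x, 32)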
Results
For training, three separate models with similar architectures were used. The first model was trained using abdominal CT scan images with liver annotations for liver segmentation. The second model was trained using liver images with tumor annotations for the segmentation of the tumor from the liver. Finally, the third model was trained using abdominal CT scan images with tumor annotations for the segmentation of the tumor directly from the abdominal CT scan images.
Each network was trained from scratch using 2346 images with data augmentation. The images were 512 × 512 in dimension. Since processing whole images of this size is difficult due to limited GPU memory, the images were resized to 128 × 128, even if degradation of image quality and information loss is inevitable. Weighted dice loss was chosen as the loss function for the first two networks and showed better performance during training. For the last model, which was trained to segment tumors directly from abdominal CT scan images, binary cross-entropy was chosen as the loss function, and for all three models, Adam was selected as the optimizer through experiments. The networks' model DSC and model loss for liver segmentation, tumor segmentation from the liver, and tumor segmentation from the abdominal CT scan images are plotted in Figs. 1, 2 and 3.
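A minimal sketch of a weighted soft-Dice loss of the kind described, assuming the foreground term is scaled by a class weight w (the exact weighting is not spelled out here, so w is an assumption; minimizing the negative weighted score is consistent with the negative loss values reported below):

import tensorflow.keras.backend as K

def weighted_dice_loss(w=2.0, smooth=1.0):
    # Foreground pixels are up-weighted by w to counter class imbalance;
    # the negative score is returned so that training minimizes it.
    def loss(y_true, y_pred):
        inter = K.sum(w * y_true * y_pred)
        union = K.sum(w * y_true) + K.sum(y_pred)
        return -(2.0 * inter + smooth) / (union + smooth)
    return loss

# model.compile(optimizer="adam", loss=weighted_dice_loss(), metrics=[...])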
The first two plots (a and b) in Fig. 1 show the model DSCs for training and validation data during the training of the model for liver segmentation, and the third plot in Fig. 1 shows the model losses for training and validation data.
As can be inferred from the two graphs, the model has good performance for both training and validation data. The DSC in both graphs increased sharply over roughly the first 100 epochs, after which its increase became gradual and then nearly constant. Finally, the DSCs became 0.9511 and 0.9633 for training and validation data, respectively.
As observed from the third graph, the losses for both sets decreased sharply up to around the first 100 epochs and after that became nearly constant. The final losses for training and validation were −1.7567 and −2.1753, respectively.
As can be inferred from the three graphs in Fig. 1, the model performs well in segmenting the liver from the abdominal CT scan images.
As Fig. 2a shows, the model DSC for the training data increased up to a point and then became nearly constant, indicating that the network trained well. In the second plot (b), the model DSC for the validation data was also plotted, and some fluctuations were observed. At the last epoch, DSCs of 0.7769 and 0.8375 were obtained for training and validation data, respectively.
In Fig. 2c, the model losses were plotted. The losses for the training and validation data decreased as expected and became nearly constant. Finally, losses of −1.6291 and −2.0278 were obtained for training and validation data, respectively.
In Fig. 3, the model DSCs and model losses are plotted for tumor segmentation from the abdominal CT scan images. The first two plots are model DSCs for training and validation data. At the last epoch, DSCs of 0.7734 and 0.8240 were obtained for training and validation data, respectively.
And in the third plot of Fig. 3, the model losses for the two datasets were plotted. Here also, some fluctuation in the validation losses was observed, but the training loss decreased almost monotonically. Losses of 0.0093 and 0.001 were obtained for training and validation, respectively.
Test results for liver segmentation
The performance of the liver segmentation algorithm was evaluated using different performance metrics and the result is included in Table 1. The segmentation result of the algorithm with the respective ground truth images is included in Fig. 4.
Row 1 shows the result of the model, row 2 shows the respective masks, row 3 shows overlap images of the result with the mask, and row 4 shows both result and mask on the original CT scan image. As shown in Fig. 4, the liver segmentation result is satisfactory and the algorithm could almost fully segment the liver from the abdominal CT scan images. It has an average dice score of 0.96, which is greater than the others by 0.01. But in some cases, it missed some portion of the liver, shown in cyan, and segmented nearby tissues as liver, shown in magenta, in row 3.
Test results for tumor segmentation
The segmentation result of this network on segmenting liver tumors from the liver and directly from the abdominal CT scan images was evaluated using different performance metrics and the result is included in Table 2. The segmentation result of the algorithm with the respective ground truth images is included in Figs. 5 and 6.
Row 1 shows the result of the model, row 2 shows the respective masks, and row 3 shows overlap images of the results with the masks. As shown in Fig. 5, the algorithm has good segmentation ability on circular tumors and could also detect distributed tumors in the same liver slice. It has an average dice score of 0.74, which is greater than the others by 0.11. But in some cases, it failed to segment some tumors, shown in cyan, and segmented other tissues as tumor, shown in magenta, in row 3.
Row 1 shows the result of the model, row 2 shows the respective masks, row 3 shows overlap images of the result with a mask and row 4 shows both results and mask on the abdominal CT image.
As shown in Fig. 6, the tumor segmentation directly from the abdominal CT scan image showed good performance relative to works done by other researchers, with a performance comparable to those works that segment the tumor with a two-step process. It has an average dice score of 0.63. But it failed to segment some tumors, shown in cyan in row 3, and it also segmented other nearby tissues as tumor, shown in magenta in row 3.
Discussion
In the original UNet paper, a batch size of 1 was used for maximum usage of GPU memory, without considering the time it took for training [15]. As the batch size decreases, the training time increases and the probability of using the maximum GPU memory increases. Therefore, the selection of the batch size needs great care. Unlike [15], in this work a batch size of 8 was used, which, after many trials, balanced GPU memory constraints and training time. That means the network was trained using eight images at a time.
To test the liver segmentation performance of the developed network, 392 images were used. Those images were preprocessed using the same preprocessing technique that was applied to the training data. The result of the network was evaluated using the respective ground truths of the images, and a comparison of this algorithm with the works of Christ et al. [16] and Budak et al. [11] was also included. Table 1 shows the results obtained from this work and other works.
To test the tumor segmentation ability of the developed algorithm, a total of 392 images with their respective ground truths were used. The tumor was segmented in two ways: directly from the abdominal CT scan image, and from the liver after segmenting it first. The result of the network was evaluated using the respective ground truths of the images, and comparisons with the works of Chlebus et al., who used a UNet modified with object-based post-processing to segment liver tumors [8], and Budak et al., who implemented an encoder-decoder convolutional neural network for liver tumor segmentation [11], were also included. Table 2 shows the tumor segmentation results of the two papers and the current work.
This algorithm greatly reduces the complexity of the network by reducing the number of filters in each convolutional block, which decreases the time needed to train the network from a few hours to 40 min. In this work, 2346 images with data augmentation were used to train the network, which is very small compared with other works that used more than 20,000 images.
Here, the class frequency difference between the liver and the background was minimized by removing CT slices with no liver, which otherwise hurt segmentation performance, in addition to introducing a weight vector into the loss function. The results of this algorithm were compared with other works to show how it improves liver segmentation performance. The work also demonstrates a new way of segmenting liver tumors: it can segment the tumor directly from the abdominal CT scan images, unlike other methods that follow two steps. In other work, the liver has to be segmented first and only then does tumor segmentation follow.
This work instead segments the tumor directly, without liver segmentation, and obtained a comparable segmentation result of 0.63 DSC. In addition, the tumor segmentation was also done the previous way, following a two-step process like the others, and obtained a DSC of 0.74, which exceeds previous works by an average of 0.11. Chlebus and colleagues used a post-processing method that includes 3D connected components and random forest classifiers; the segmentation result obtained from this algorithm is nevertheless greater than theirs by 0.16. This improvement is due to the class balancing implemented in this work. As discussed above, class balancing was done by removing slices with no tumor: the difference between the number of tumor pixels and background pixels strongly affects the segmentation result. This work therefore decreased the class imbalance by removing tumor-free slices from the dataset, in addition to the weight factor added to the loss function, and observed a performance improvement.
General results of the architecture
This segmentation algorithm substantially improves the efficiency of liver tumor segmentation. First, it reduces the complexity of the network by reducing the number of filters in each convolutional block, which decreases the number of trainable parameters. As a result, the time needed to train the network is greatly reduced: training for 250 epochs takes about 40 min on a Kaggle kernel. This is a notable achievement in deep learning-based segmentation, where the time and complexity of the network matter a lot.
Another pressing issue in deep learning-based segmentation is the absence of enough training samples. The developed algorithm addresses this as well: it needs only a small training set combined with extensive data augmentation, which increases the effective number of training samples. The augmentation applies affine deformations to the available images, helping the network learn invariance to such deformations, since deformation is the most common variation in biomedical images.
Another important consideration during liver tumor segmentation, or biomedical image segmentation in general, is the class imbalance between the two classes to be segmented. There is a large difference in size between the tissue to be segmented and the background, which strongly affects segmentation performance. For example, in Fig. 7 the numbers of white and black pixels differ greatly.
The ratio of white pixels to black pixels can be calculated using Eq. 1; it is 1:9 for liver masks and 1:85 for tumor masks. Because of this, the network sees far more black pixels than white pixels during training, and its chance of learning from white pixels is very small compared with the black ones. This results in poor performance of the network.
Ratio = Number of white pixels / Number of black pixels (1)
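For concreteness, a minimal NumPy sketch of this ratio computation (Eq. 1) follows; the helper name is illustrative and not from the paper:

    import numpy as np

    def foreground_ratio(mask):
        """Ratio of foreground (white) to background (black) pixels in a binary mask."""
        white = np.count_nonzero(mask)    # foreground pixels
        black = mask.size - white         # background pixels
        return white / black

    # Example: a liver mask with roughly 10% foreground yields a ratio near 1:9.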
In the original UNet paper, the authors included a weight map pre-computed from the ground truth images to balance the class frequencies. In addition, in this paper the imbalance is reduced by removing slices with no tumor information. During data preparation, the first step was to check all patients' data with tumors from both datasets; data from healthy patients were then removed, after which slices with no tumor were searched for and removed. Lastly, the tumor-containing data were arranged and saved sequentially.
The network was trained on these data, and a clear difference in performance was observed: it increased by 0.01 for the liver and by 0.11 for the tumor.
This work also introduces a new way of segmenting tumors. Before this work, tumor segmentation was done from the liver after first segmenting the liver from the abdominal CT scan image, a two-step process. Here, liver tumors can be detected and segmented directly from the abdominal CT scan images with comparable performance, which decreases the time and effort needed to segment the tumor.
Experiments were done to show the effect of filter reduction and of data augmentation on overall model performance. Table 3 reports the results of the models with the original and the reduced number of filters, and their performance before and after applying data augmentation.
As Table 3 shows, reducing the number of filters did not reduce the model's performance; rather, it shows small improvements in both liver and tumor segmentation, and the training time is also reduced by about one-third. The model's performance was checked with and without data augmentation. Without data augmentation it overfits: it performed well during training but worse at test time, especially for tumor segmentation, since most of the tumors are very small.
Although the algorithm showed good improvement on liver and tumor segmentation, it still fails to segment correctly in some slices. In liver segmentation, the algorithm almost always segments the liver correctly, but it fails in slices in which the full liver is not captured and in slices in which the liver is covered by overlapping organs and appears divided into parts. Regarding tumor segmentation, the algorithm mostly fails on tumors that are small and irregular in shape.
Conclusion
This paper focused on segmenting the liver and its tumors using a deep learning method. The method consists of three modified UNet models: for the liver, for the tumor from the liver, and for the tumor from the abdominal CT scan image. Using this algorithm, DSCs of 0.96 and 0.74 were attained for segmentation of the liver and of the tumor from the liver, respectively, improvements of around 0.01 and 0.11. This improvement was obtained by manually reducing the class imbalance in the data through removal of unnecessary images and by selecting good hyperparameters through many trials.
Description of materials
Datasets
Images used to train and test the liver and liver cancer segmentation algorithm developed in this paper were taken from two publicly available datasets: 3D-IRCADb01 (3D Image Reconstruction for Comparison of Algorithm Database) [17] and the LITS (Liver Tumor Segmentation) Challenge [18]. The 3D-IRCADb dataset is challenging to utilize because of the high variety of the data and the complexity of the livers and tumors [11]. Detailed information about the two datasets is included in Table 4.
Data preparation
Images taken from the two datasets had to be prepared before being used for training and testing the developed algorithm. The 3D-IRCADb01 dataset contains up to seven folders of tumor masks under each patient's data, depending on the anatomical position of the tumor on the liver. These tumor masks therefore had to be merged into one folder, since the main interest is the segmentation result, not the tumor's anatomical position.
The images in the LITS dataset are three-dimensional, and there is no separate mask for the liver and its tumor; both are found in the same mask image under the segmentation folder of the dataset. Since the developed algorithm is two-dimensional (2D), the data had to be converted into 2D, and separate masks for the liver and tumor had to be prepared. This data preparation was done using the ImageJ tool. From both datasets, patient data with no liver and tumor masks were discarded. From each patient's data, slices taken at the start and end of scanning, with no liver information, were also discarded to reduce the class imbalance between background and foreground. The number of images used for training and testing is included in Table 5.
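As a hedged illustration of this preparation step, the sketch below converts a 3D LITS volume into 2D slices and splits the combined mask into separate liver and tumor masks. It assumes the standard LITS label convention (0 = background, 1 = liver, 2 = tumor) and uses nibabel for illustration, whereas the paper itself performed this step with ImageJ:

    import nibabel as nib
    import numpy as np

    def split_lits_case(volume_path, segmentation_path):
        """Yield (ct_slice, liver_mask, tumor_mask) triples from one LITS case."""
        ct = nib.load(volume_path).get_fdata()
        seg = nib.load(segmentation_path).get_fdata()
        for k in range(ct.shape[2]):                        # iterate over axial slices
            liver = (seg[:, :, k] >= 1).astype(np.uint8)    # labels 1 and 2 both lie in the liver
            tumor = (seg[:, :, k] == 2).astype(np.uint8)
            if liver.any():                                 # discard slices with no liver information
                yield ct[:, :, k], liver, tumor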
Image preprocessing
The images were originally 512 × 512 pixels. Using them at full size is difficult with limited GPU memory, so all images were resized by a factor of 0.25, and intensities were normalized to values between 0 and 1.
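A minimal sketch of this preprocessing step follows, assuming scikit-image for resizing; the exact resampling method used in the paper is not stated, so treat the details as assumptions:

    import numpy as np
    from skimage.transform import resize

    def preprocess(image):
        """Resize a 512 x 512 CT slice by a factor of 0.25 and scale intensities to [0, 1]."""
        small = resize(image, (128, 128), preserve_range=True, anti_aliasing=True)
        small = (small - small.min()) / (small.max() - small.min() + 1e-8)
        return small.astype(np.float32)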
Computing platforms
The images acquired from both publicly available datasets were processed and analyzed on Kaggle. Kaggle is an online community of data scientists, owned by Google, that provides cloud infrastructure such as built-in Python Jupyter notebooks, graphical processing units (GPUs), tensor processing units (TPUs), and data storage to facilitate the work of data scientists.
The segmentation algorithm
The algorithm is based on the UNet architecture developed by Ronneberger et al. in 2015. It includes two 2D UNet architectures, one for the liver and one for its tumors, designed to segment the liver and tumors from abdominal CT scan images.
Network architecture
For both liver and tumor segmentation, the same U-shaped network architecture is used. It consists of a contracting path, an expansive path, and a bottleneck, like the original UNet. Here, however, batch normalization is added after each convolutional layer in all three parts of the network, and a 0.5 dropout layer is added after each convolutional block of the contracting path.
Batch normalization normalizes the outputs of the convolutional layers to zero mean and unit standard deviation, and the dropout layer randomly deactivates some neurons in the hidden layers to prevent overfitting of the network. The other modification concerns the number of filters in each convolutional block: the first block has 16 filters, and the number doubles in each of the three consecutive blocks to reach 128 in the last convolutional block. The details of the modified network architecture are shown in Fig. 8.
Contracting or downsampling path The contracting path, also called the encoder, is composed of 4 blocks, each consisting of 3 × 3 convolution layers with ReLU activation and batch normalization, a dropout layer, and a 2 × 2 max pooling operation. The purpose of this contracting path is to capture the context or semantics of the input image so that segmentation is possible. It extracts features that describe what is in the image using convolutional and pooling layers. During this process, the size of the feature map is reduced and deep, high-level features are obtained, but the network loses the spatial information about where those features are located.
Bottleneck This part of the network is between the contracting and expanding paths. The bottleneck is built from two convolutional layers with batch normalization.
Expanding or upsampling path The expanding path, also called the decoder, is composed of 4 blocks. Each of these blocks is composed of an up-convolution (deconvolution) layer with stride 2, concatenation with the corresponding cropped feature map from the contracting path, and a 3 × 3 convolution layer with ReLU activation and batch normalization.
The purpose of this expanding path is to recover the feature map size and to add spatial information for the segmentation image, for which it uses up-convolution layers.
The coarse contextual information from the contracting path is transferred to the upsampling path through skip connections.
Skip connections Low-level information can be lost during the decoding process. To recover this lost information and to let the decoder access the low-level features produced by the encoder layers, skip connections are used: intermediate outputs of the encoder are concatenated with the inputs of the intermediate layers of the decoder at the appropriate positions. This enables precise localization combined with contextual information from the contracting path.
Details of the network architecture and layers found in each part of the model are shown in Fig. 8.
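Based on the description above, a minimal Keras sketch of the modified architecture follows. The input shape (128 × 128 × 1, from the 0.25 resize of 512 × 512 slices), the exact placement of the 128-filter block, and the use of TensorFlow/Keras are all assumptions; this is an illustration, not the authors' exact code:

    from tensorflow.keras import layers, Model

    def conv_block(x, filters):
        # Two 3x3 convolutions, each followed by batch normalization and ReLU.
        for _ in range(2):
            x = layers.Conv2D(filters, 3, padding="same")(x)
            x = layers.BatchNormalization()(x)
            x = layers.Activation("relu")(x)
        return x

    def build_modified_unet(input_shape=(128, 128, 1)):
        inputs = layers.Input(shape=input_shape)
        x, skips = inputs, []
        for filters in (16, 32, 64, 128):            # four contracting blocks
            x = conv_block(x, filters)
            skips.append(x)
            x = layers.Dropout(0.5)(x)               # dropout after each contracting block
            x = layers.MaxPooling2D(2)(x)
        x = conv_block(x, 128)                       # bottleneck: two convolutions with batch norm
        for filters, skip in zip((128, 64, 32, 16), reversed(skips)):
            x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
            x = layers.Concatenate()([x, skip])      # skip connection from the encoder
            x = conv_block(x, filters)
        outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)  # per-pixel foreground probability
        return Model(inputs, outputs)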
Training
The network architecture is based on the original UNet. However, additional batch normalization and dropout layers are included in this work, and the number of filters in each convolutional block is reduced; the network therefore had to be trained from scratch. The input images and their corresponding segmentation masks were used to train the network: 2346 images from the two datasets, with data augmentation. During training, many experiments were done by tuning the hyperparameters of the network. Learning rate, batch size, number of epochs, number of filters, validation split, dropout value, optimizer, loss function, and activation function were all checked for different values and assignments. After much trial and error, a batch size of 8, 250 epochs, a validation split of 0.30, and a dropout of 0.5 were used.
Optimizer The Adam optimizer is an extension of stochastic gradient descent (SGD) and RMSProp (root mean square propagation). It is a method for efficient stochastic optimization that requires only first-order gradients and little memory, computing individual adaptive learning rates for each parameter in the network. Its name is derived from adaptive moment estimation [19]. In this work, the Adam optimizer with a learning rate of 0.0001 was used.
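Putting the reported hyperparameters together, a hedged Keras training sketch might look as follows; train_images and train_masks are hypothetical arrays of preprocessed slices and masks:

    from tensorflow.keras.optimizers import Adam

    model = build_modified_unet()                    # from the architecture sketch above
    model.compile(optimizer=Adam(learning_rate=1e-4),
                  loss="binary_crossentropy",        # stand-in; the paper's weighted dice
                  metrics=["accuracy"])              # loss is sketched after Eq. 3 below
    history = model.fit(train_images, train_masks,   # hypothetical training arrays
                        batch_size=8, epochs=250, validation_split=0.30)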
Loss function Weighted dice loss and binary crossentropy were used as loss functions to measure the variations of the predicted values from the actual values during the training of the network. The equations used for calculating weighted dice loss and binary cross-entropy are given in Eqs. 2 and 3 respectively.
Where TP is true positive, FN is a false negative, FP is false positive and W is a weight factor that is introduced to balance the class frequency difference between the foreground and the background.
Where BCE is binary cross-entropy, N is the total number of pixels, y_i is the label of pixel i, and p(y_i) is the predicted probability of that pixel being foreground or background.
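Since Eqs. 2 and 3 are not reproduced in the text, the sketch below implements standard forms consistent with the definitions above; in particular, the exact weighting scheme of the paper's dice loss is an assumption:

    import tensorflow as tf

    def weighted_dice_loss(y_true, y_pred, w=2.0):
        # Soft Dice loss with a weight on the foreground class; the precise
        # weighting of Eq. 2 is not given in the text, so this form is assumed.
        tp = tf.reduce_sum(w * y_true * y_pred)
        fp = tf.reduce_sum((1.0 - y_true) * y_pred)
        fn = tf.reduce_sum(w * y_true * (1.0 - y_pred))
        dice = (2.0 * tp) / (2.0 * tp + fp + fn + 1e-7)
        return 1.0 - dice

    def binary_cross_entropy(y_true, y_pred):
        # Standard pixel-wise binary cross-entropy (Eq. 3).
        y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
        return -tf.reduce_mean(y_true * tf.math.log(y_pred)
                               + (1.0 - y_true) * tf.math.log(1.0 - y_pred))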
Data augmentation Data augmentation is important for training the network effectively when only a few training samples are available. In biomedical image segmentation tasks, there is often very little training data. Therefore, extensive data augmentation is used, applying affine deformations to the available training images, which allows the network to learn invariance to such deformations.
Data augmentation is especially essential for biomedical image segmentation, where deformation is the most common variation in tissue appearance and too few training pairs result in overfitting [15].
In the proposed work, in-place or on-the-fly data augmentation was used [20]. This type of augmentation artificially increases the size of the dataset by applying data augmentation in real time: in each epoch, newly and randomly augmented data are given to the model. This increases the amount of data and the generalizability of the model.
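A minimal sketch of such on-the-fly augmentation with Keras' ImageDataGenerator follows; the specific affine-transform parameters are illustrative, as the paper does not list them:

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Affine deformations applied on the fly; parameter values are assumptions.
    aug = dict(rotation_range=15, width_shift_range=0.1, height_shift_range=0.1,
               shear_range=0.1, zoom_range=0.1, horizontal_flip=True,
               fill_mode="nearest")
    image_gen = ImageDataGenerator(**aug)
    mask_gen = ImageDataGenerator(**aug)

    # The same seed keeps each augmented image aligned with its augmented mask.
    seed = 42
    image_flow = image_gen.flow(train_images, batch_size=8, seed=seed)
    mask_flow = mask_gen.flow(train_masks, batch_size=8, seed=seed)
    train_flow = zip(image_flow, mask_flow)
    # model.fit(train_flow, steps_per_epoch=len(train_images) // 8, epochs=250)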
Performance metrics
For evaluating the performance of the segmentation method, the binary mask of the segmentation result is compared to the ground truth mask and their similarity is estimated. Different performance metrics like DSC, Jaccard similarity coefficient (JSC), accuracy, and symmetric volume difference (SVD) are used.
Dice similarity coefficient (DSC)
It measures the overlap between two binary masks. It is the size of the overlap of the two segmentations divided by the total size of the two objects. It ranges from 0 (no overlap) to 1 (perfect overlap). It represents the overall performance of the segmentation [21,22]. It is calculated using Eq. 4.
Where TP is a true positive, FN is a false negative, and FP is a false positive.
Jaccard similarity coefficient (JSC)
It measures the similarity between the segmented image and the binary mask. It is the ratio of the intersection of two binary masks to their union [22]. It is given by Eq. 5.
Where TP is a true positive, FN is a false negative, and FP is a false positive.
Accuracy
Accuracy represents the ratio of correctly segmented samples to the total samples. It is approximately one for good segmentation results. It is calculated using Eq. 6 [12].
Where TP is a true positive, TN is a true negative, FN is a false negative, and FP is a false positive.
Symmetric volume difference
SVD is a measure of the difference between the segmented image and the ground truth. For good segmentation results, SVD approaches zero. It is given by Eq. 7.
Where DSC is the Dice similarity coefficient.
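For reference, all four metrics can be computed from the confusion counts of a predicted and a ground truth mask, as in the sketch below. The formulas follow the standard definitions implied by Eqs. 4-7, with SVD taken as 1 − DSC (an assumption consistent with the text); the sketch assumes masks with at least one positive pixel:

    import numpy as np

    def confusion_counts(pred, truth):
        """TP, FP, FN, TN for two binary masks of the same shape."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        tp = np.sum(pred & truth)
        fp = np.sum(pred & ~truth)
        fn = np.sum(~pred & truth)
        tn = np.sum(~pred & ~truth)
        return tp, fp, fn, tn

    def dsc(tp, fp, fn):            # Eq. 4: Dice similarity coefficient
        return 2 * tp / (2 * tp + fp + fn)

    def jsc(tp, fp, fn):            # Eq. 5: Jaccard similarity coefficient
        return tp / (tp + fp + fn)

    def accuracy(tp, fp, fn, tn):   # Eq. 6: fraction of correctly labeled pixels
        return (tp + tn) / (tp + tn + fp + fn)

    def svd(tp, fp, fn):            # Eq. 7: symmetric volume difference
        return 1 - dsc(tp, fp, fn)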
"Medicine",
"Computer Science"
] |
No Significant Role for Smooth Muscle Cell Mineralocorticoid Receptors in Atherosclerosis in the Apolipoprotein-E Knockout Mouse Model
Objective: Elevated levels of the hormone aldosterone are associated with increased risk of myocardial infarction and stroke in humans and increased progression and inflammation of atherosclerotic plaques in animal models. Aldosterone acts through the mineralocorticoid receptor (MR) which is expressed in vascular smooth muscle cells (SMCs) where it promotes SMC calcification and chemokine secretion in vitro. The objective of this study is to explore the role of the MR specifically in SMCs in the progression of atherosclerosis and the associated vascular inflammation in vivo in the apolipoprotein E knockout (ApoE−/−) mouse model. Methods and Results: Male ApoE−/− mice were bred with mice in which MR could be deleted specifically from SMCs by tamoxifen injection. The resulting atheroprone SMC-MR-KO mice were compared to their MR-Intact littermates after high fat diet (HFD) feeding for 8 or 16 weeks or normal diet for 12 months. Body weight, tail cuff blood pressure, heart and spleen weight, and serum levels of glucose, cholesterol, and aldosterone were measured for all mice at the end of the treatment period. Serial histologic sections of the aortic root were stained with Oil Red O to assess plaque size, lipid content, and necrotic core area; with PicroSirius Red for quantification of collagen content; by immunofluorescent staining with anti-Mac2/Galectin-3 and anti-smooth muscle α-actin antibodies to assess inflammation and SMC marker expression; and with Von Kossa stain to detect plaque calcification. In the 16-week HFD study, these analyses were also performed in sections from the brachiocephalic artery. Flow cytometry of cell suspensions derived from the aortic arch was also performed to quantify vascular inflammation after 8 and 16 weeks of HFD. Deletion of the MR specifically from SMCs did not significantly change plaque size, lipid content, necrotic core, collagen content, inflammatory staining, actin staining, or calcification, nor were there differences in the extent of vascular inflammation between MR-Intact and SMC-MR-KO mice in the three experiments. Conclusion: SMC-MR does not directly contribute to the formation, progression, or inflammation of atherosclerotic plaques in the ApoE−/− mouse model of atherosclerosis. This indicates that the MR in non-SMCs mediates the pro-atherogenic effects of MR activation.
INTRODUCTION
The majority of myocardial infarctions and ischemic strokes are caused by rupture and thrombosis of atherosclerotic plaques, making atherosclerosis the leading cause of death worldwide (1,2). Ample clinical data reveal that elevated levels of the hormone aldosterone are associated with an increased risk of myocardial infarction and stroke (3)(4)(5). Aldosterone is a steroid hormone that functions by activating the mineralocorticoid receptor (MR), a hormone-activated transcription factor. In the kidney, MR activation promotes sodium retention to regulate blood pressure. However, the increased risk of cardiovascular ischemia with elevated aldosterone appears to be independent of blood pressure (3,5), supporting a potential pro-atherosclerotic role for extra-renal MR. Preclinical studies using animal models of atherosclerosis to explore this phenomenon have demonstrated that administration of aldosterone to apolipoprotein E knockout (ApoE −/− ) mice accelerates atherosclerosis and plaque inflammation (6,7). Conversely, pharmacologic MR blockade attenuates plaque development and inflammation in mouse (8)(9)(10), rabbit (11), and non-human primate models of atherosclerosis (12) through unknown mechanisms.
Atherogenesis begins with vascular damage induced by hypercholesterolemia and other cardiovascular risk factors. The damaged vessel activates pro-inflammatory pathways resulting in cytokine release and enhanced adhesion molecule expression on endothelial cells lining the vessel, thereby attracting inflammatory cells to the site of vascular damage. Macrophages in the nascent plaque phagocytose lipids to become foam cells in the plaque core. Additionally, smooth muscle cells (SMCs) migrate into and contribute to the developing atherosclerotic plaque in several ways. SMCs produce extracellular matrix components of the plaque fibrous cap, which stabilizes the plaque and prevents rupture (13). Later in plaque progression, some SMCs transform into osteogenic cells and contribute to plaque calcification (14), which correlates with risk of plaque rupture in humans (15). Finally, some SMCs within the plaque have recently been found to dedifferentiate and become inflammatory-like, losing expression of smooth muscle α-actin and instead expressing traditionally leukocyte-specific markers, even acting as foam cells themselves [reviewed in Bennett et al. (16)]. Inflammatory cells in the plaque produce cytokines and reactive oxygen species to further promote plaque inflammation and produce matrix metalloproteases which can destabilize the fibrous cap. The resulting plaque rupture and thrombosis leads to ischemia of downstream tissues, which manifest as cardiovascular events such as myocardial infarction, ischemic stroke, and critical limb ischemia.
Clinical and animal data support that aldosterone and MR signaling promote plaque progression and increase inflammation, which may contribute to destabilization and rupture of the atherosclerotic plaque [reviewed in Moss et al. (17), Brown (18)]. However, the mechanisms underlying the contribution of the MR to plaque progression and inflammation have yet to be elucidated. In addition to the kidney, the MR is expressed in all cells of the vasculature, including in SMCs (19)(20)(21). When the MR is activated in SMCs in culture, these cells produce chemotactic cytokines that recruit inflammatory cells (7) and upregulate osteogenic factors that contribute to deposition of calcified material (22,23). These in vitro studies support a potential role for SMC-MR in plaque inflammation and/or calcification. However, the contribution of SMC-specific MR to the development and phenotype of the atherosclerotic plaque has never been studied in vivo. In the present study, we developed an atheroprone mouse model in which the MR can be specifically deleted from SMCs in an inducible fashion in order to investigate the hypothesis that SMC-MR contributes to the development and progression of atherosclerosis and promotes vascular inflammation.
Mouse Models
All animals were handled in accordance with National Institutes of Health standards and all experiments were conducted with the approval of the Tufts Medical Center Institutional Animal Care and Use Committee. All mouse models were generated on the C57BL/6 background. Atheroprone mice with inducible SMC-specific MR deletion were generated by crossing our previously described MR flox/flox × SMA-Cre-ERT2 +/− mice (20,24,25) onto the ApoE −/− genetic background (Jackson Laboratories, Stock #002052). The Cre-ERT2 driver used in this mouse model is a ligand-dependent Cre recombinase construct whose activity is induced by selective estrogen receptor modulation via tamoxifen administration (26). The insertion of this Cre construct into the smooth muscle α-actin promoter produced mice containing a Cre recombinase that is transiently activated during tamoxifen administration only in smooth muscle α-actin-positive cells (SMA-Cre-ERT2 +/− ). ApoE −/− × MR flox/flox × SMA-Cre-ERT2 +/− mice and ApoE −/− × MR flox/flox × SMA-Cre-ERT2 −/− littermate controls were all treated with intraperitoneal injection of 1 mg of tamoxifen daily for 5 days at 6-8 weeks of age, prior to starting high fat feeding. The resultant male ApoE −/− × MR flox/flox × SMA-Cre-ERT2 +/− mice with SMC-specific MR recombination, hereafter referred to as "SMC-MR-KO", and tamoxifen-treated ApoE −/− × MR flox/flox × SMA-Cre-ERT2 −/− littermates, hereafter referred to as "MR-Intact", were used for all studies.
PCR to Confirm MR Recombination
SMC-specific and nearly complete MR recombination in the vasculature has been previously confirmed in tamoxifen-induced MR flox/flox × SMA-Cre-ERT2 +/− mice (24). To confirm SMC-MR recombination after crossing to the ApoE −/− background, tissues were harvested from male SMA-Cre-ERT2 +/− and SMA-Cre-ERT2 −/− mice 2-4 weeks after tamoxifen induction. Genomic DNA was isolated using the Qiagen DNeasy kit. A PCR strategy was used such that LoxP-MR generates a smaller band (364 base pairs) while recombined MR yields a larger band (454 base pairs) as in Figure 1. The smooth muscle cell-containing bladder from MR flox/flox × SMA-Cre-ERT2 +/− mice with intact ApoE was used as a positive control to compare the efficiency of MR recombination between the MR flox/flox × SMA-Cre-ERT2 +/− and the ApoE −/− × MR flox/flox × SMA-Cre-ERT2 +/− double-KO. PCR was performed as previously described (24) with a combination of three primers:
Atherosclerosis Protocols
Three experimental atherosclerosis protocols were used in this study. In the short-term early atherosclerosis protocol, tamoxifen-induced MR-Intact and SMC-MR-KO littermates were fed high fat diet (HFD, Envigo Teklad 88137, 42% calories from fat) for 8 weeks. In the long-term atherosclerosis protocol, mice were fed HFD for 16 weeks. In the aging atherosclerosis protocol, mice were allowed to age to 12 months on normal chow diet (Envigo Teklad 2918). All mice had free access to water. Mice were fasted for 4 h prior to tissue harvest and blood collection to measure fasting glucose (Nipro Diagnostics TrueBalance glucometer and strips) and cholesterol levels (Molecular Probes Amplex Red Cholesterol Assay Kit, Fisher). At the time of harvest, mice were weighed and then anesthetized with 2.5% isoflurane gas, blood was collected into heparinized syringes, and then the circulatory system was perfused with 0.9% sodium chloride solution via the left ventricle. The aortic arch was carefully isolated, cleaned, and separated from the heart approximately 1 mm from the aortic root and 2 mm distal to the left subclavian artery and stored on ice in PBS containing 0.5 mM EDTA until tissue digestion for flow cytometry (described below). In the 16-week HFD protocol, aortic arches were isolated for flow cytometry in one cohort, and in another cohort the brachiocephalic artery was removed at the level of the greater curvature of the aortic arch and frozen in OCT compound (Tissue-Tek #4583) for histology. The heart was first weighed then bisected horizontally at the level of the atria, and the upper portion of the heart was frozen in OCT for aortic root histology. The spleen was excised and weighed; the tibia was isolated and its length measured with calipers; the bladder was retained and snap-frozen for subsequent testing for MR gene recombination to confirm successful tamoxifen induction as needed.
Measurement of Tail Cuff Blood Pressure and Serum Aldosterone Levels
In the 5 days prior to animal sacrifice, tail cuff blood pressure was measured with the Kent Coda6 system using a protocol with 3 days of training and 2 days of measurement as we previously described and validated (27). To isolate serum, whole blood collected as above was incubated on ice for 1-4 h, centrifuged at 700 × g for 10 min, and the resulting serum transferred to a fresh tube. Aldosterone levels were measured using an Aldosterone Radioimmunoassay Kit (Tecan MG13051).
Histology
Sequential cryosections of OCT-embedded aortic roots were cut such that all three leaflets of the aortic valve could be visualized. Cryosections of brachiocephalic arteries were cut sequentially from the origin of the artery from the aorta. Serial 10 µm sections were processed for staining with Oil Red O (ORO) as previously described (7) or for immunofluorescent staining with Alexa-594-conjugated anti-Mac2/Galectin-3 (Cedarlane, 1:500), FITC-conjugated anti-Acta2/smooth muscle α-actin (Sigma-Aldrich, 1:500), and DAPI (Fisher, 1:100) as previously described (28). Serial 6 µm sections were used for PicroSirius Red and Von Kossa staining. Immunofluorescent images were acquired using a Nikon A1R confocal microscope and brightfield images were acquired using an Olympus BX40 microscope and SPOT Insight camera and software. ORO-stained sections were analyzed to quantify plaque area, lipid content, and necrotic core area using ImagePro Premier v.9.0. All other stained sections were analyzed using ImageJ. Von Kossa staining was analyzed by a scoring method illustrated in Figure 9A, with 2-3 sections scored and averaged per mouse. All analyses were performed by genotype-blinded investigators.
Flow Cytometry
Immediately after animal sacrifice, aortic arches were minced with sharp scissors and digested with 125 U/mL collagenase type XI, 60 U/mL hyaluronidase type I-s, 450 U/mL collagenase type I (Sigma-Aldrich), and 60 U/mL DNase-I (New England Biolabs) in PBS with 20 mM HEPES in a 37 °C shaker at 325 RPM for 1.5 h. The tissue was then ground through a 100 µm filter to obtain a single cell suspension and re-filtered through a 70 µm pipet-tip filter (Flowmi) to remove non-cellular debris. Cells were stained with APC-Cy7- or PE-conjugated anti-CD45.2, FITC-conjugated anti-CD3ε, APC-conjugated anti-CD11b, and PE-Cy7- or PE-conjugated anti-Ly6C antibodies (Biolegend, all 1:50), with Fc block (BD Pharmingen, 1:50) added to the staining cocktail. After overnight storage at 4 °C, stained cells were counted using a BD LSRII flow cytometer; data was captured with the BD FACS Diva software and analyzed using FlowJo v.10. Flow cytometry data from single-cell aortic arch suspensions were first gated on size to exclude cell debris and non-leukocyte populations. The resulting population was then gated on CD45+ status for quantification of total leukocytes. The CD45+ population was further separated into CD3+ and CD11b+ mutually exclusive populations. Within the CD11b+/CD3- population, the proportion of Ly6C negative, Ly6C-lo, and Ly6C-hi cells was determined. Flow cytometry measurements were performed in 2-3 independent experiments with all experimental groups represented in each study.
Statistics
All mice that survived to termination of the study and developed atherosclerotic plaque were included in the analysis (one mouse was excluded due to no discernable plaque and one was euthanized due to poor health prior to study termination).
Each study was performed in 2-3 batches of mice and the histologic analysis data were compared to the average value for the MR-Intact group for each batch. Data are reported as mean ± SEM. Means between two groups were compared with unpaired Student's t-test using GraphPad Prism v.7.01. Normally distributed data in Table 1 (blood pressure, weight, serum cholesterol, and serum aldosterone) were analyzed by 2-factor ANOVA with Holm-Sidak post-test using SigmaPlot v.12.5. Non-normally distributed data (fasting glucose, heart weight, and spleen weight in Table 1 and smooth muscle α-actin quantification in Figure 8B) were analyzed by Kruskal-Wallis ANOVA. The Von Kossa scoring data in Figure 9 was analyzed by Mann-Whitney Rank-Sum test using SigmaPlot v.12.5. Statistical significance was defined as p < 0.05.
Confirmation of SMC-Specific MR Recombination in an Atheroprone Mouse Model
In order to explore the role of SMC-MR in atherosclerosis, a well validated inducible SMC-specific MR knockout mouse (20,24) was crossed to the atheroprone ApoE −/− background. SMC-specific MR gene recombination on the ApoE −/− background was confirmed using PCR of genomic DNA isolated from tissues collected from male tamoxifen-induced SMC-MR-KO and MR-Intact littermate controls. As shown in Figure 1, DNA from Cre-recombinase negative (MR-Intact) animals produced only the smaller PCR product, consistent with an intact floxed MR gene. The larger, 454 base pair recombined MR gene product is observed only with DNA from SMC-containing tissues from Cre-recombinase positive (SMC-MR-KO) animals that have been induced by tamoxifen. Specifically, bladder, aorta, and colon DNA produce both bands, consistent with MR recombination in these SMC-containing tissues. Compared to the colon, the aorta shows a greater proportion of the recombined MR band, consistent with a greater proportion of SMCs relative to endothelial and other non-SMCs, whereas the colon has a smaller proportion of SMCs and hence relatively more of the intact MR PCR product. Tissues in which SMCs are relatively scarce (heart, lung, kidney, lymph node, and spleen) did not produce evidence of MR recombination. Bladder DNA from the previously described SMC-MR-KO mouse without ApoE disruption (20,24) served as a positive control for each PCR experiment. When compared to the SMC-MR-KO on the ApoE-intact background, the degree of recombination in the bladder was the same or greater on the ApoE −/− background.
SMC-MR Deletion Does Not Significantly Alter Aortic Root Plaque Size or Composition Early in Atherogenesis
We first investigated whether SMC-MR influences early atherosclerotic plaque development by examining aortic root plaque size and composition after 8 weeks of HFD feeding in the ApoE −/− model. Importantly, the absence of SMC-MR did not affect the degree of weight gain, blood pressure, serum glucose, cholesterol, or aldosterone levels at this time point ( Table 1). Histology of the aortic root after 8 weeks of HFD feeding revealed no differences between SMC-MR-KO mice and MR-Intact littermates in atherosclerotic plaque size or in the percent of the plaque that is made up of lipids, necrotic core (Figures 2A,F,G), or collagen (Figures 2B,G). The degree of inflammatory staining within the plaque, as evidenced by Mac2 immunofluorescence (Figures 2C,H), was unchanged with SMC-MR deletion. We analyzed smooth muscle α-actin immunostaining within the plaque (Figures 2D,I) to identify those cells expressing this marker, and the levels of intra-plaque smooth muscle α-actin staining were also unaffected by SMC-MR deletion. It is important to note that this method does not identify all SMCs within the atherosclerotic lesion, nor does it identify only SMCs, as SMCs are known to downregulate SM-actin expression in atherosclerosis (28), and other cell types such as myofibroblasts may also express smooth muscle α-actin (29). Nevertheless, from these data we conclude that SMC-MR does not influence plaque accumulation or histologic indices of plaque vulnerability in the aortic root of ApoE −/− mice after 8 weeks of HFD.
SMC-MR Does Not Contribute to Vascular Inflammation in ApoE −/− Mice After 8 Weeks of HFD
As plaque inflammation is highly correlated with plaque instability in humans (30), aortic arch inflammation was quantified using flow cytometry, a sensitive measure of vascular inflammation. After 8 weeks of HFD, freshly isolated aortic arches were digested to a single cell suspension and cells sorted to quantify the total number of CD45+ leukocytes, CD45+/CD11b-/CD3+ T cells, and CD45+/CD3-/CD11b+ cells within the vessel wall. Previous flow cytometry studies revealed that very few of the CD11b+ cells were neutrophils regardless of genotype (data not shown), thus we concluded that this population was predominantly monocytes and macrophages and further staining for neutrophils was omitted. As MR has been previously reported to contribute to macrophage phenotype (31), we further stratified CD11b+ cells into Ly6C negative, Ly6C-lo, and Ly6C-hi populations to differentiate between the more pro-inflammatory macrophage phenotype (Ly6C-hi) and the reparative macrophage phenotype (Ly6C-lo-negative) (32). SMC-MR deletion did not significantly influence the number of total leukocytes, T cells, or monocytes/macrophages in the aortic arch, nor did it alter the proportion of Ly6C-lo-negative vs. Ly6C-hi cells within the monocyte/macrophage population (Figure 3). From these data we conclude that SMC-MR does not play a significant role in vascular inflammation or atherogenesis after 8 weeks of high fat feeding, corresponding to early atherogenesis in the ApoE −/− model.
SMC-MR Deletion Does Not Significantly Alter Aortic Root Plaque Size or Composition in an Aging Model of Atherosclerosis
In order to assess the potential role of SMC-MR in the slower atherogenesis that occurs when the ApoE −/− mouse is allowed to age without high fat feeding, tamoxifen-induced SMC-MR-KO and MR-Intact littermates were aged to 12 months on normal chow diet. At 12 months of age, we observed a trend toward lower tail cuff blood pressure in the SMC-MR-KO mice that was not statistically significant ( Table 2). There was also no difference in body, heart, or spleen weight, nor was there a difference in serum levels of glucose, cholesterol, or aldosterone between MR-Intact and SMC-MR-KO animals ( Table 2). Histologic staining of aortic root sections likewise revealed no effect of SMC-MR deletion on plaque size, lipid content, necrotic core area, collagen content, Mac2 inflammatory staining, or smooth muscle α-actin staining in these animals (Figure 4). These data support that SMC-MR does not play a role in the development or progression of atherosclerotic plaques in the aging ApoE −/− mouse model of atherosclerosis.
SMC-MR-KO Mice Have Lower Body Weight After 16 Weeks of HFD
Next we sought to determine whether SMC-MR influenced plaque characteristics in advanced disease, after 16 weeks of high fat feeding. As in previous experiments, we measured traditional cardiovascular risk factors as well as aldosterone levels and heart and spleen weights after this treatment regimen ( Table 1). As expected, body weight in mice fed HFD for 16 weeks was significantly higher than that of mice fed HFD for 8 weeks. This increase in body weight was associated with higher aldosterone levels, as expected since the degree of obesity is known to correlate with increased levels of aldosterone in mice (33) and in humans (34). Other parameters were not significantly different between MR-Intact and SMC-MR-KO mice after 16 weeks of HFD. One exception was that there was a statistically significant (p < 0.05) 10% reduction in average body weight in SMC-MR-KO mice compared to MR-Intact controls after 16 weeks of HFD that was accompanied by a non-significant trend (p = 0.07) toward a reduction in fasting glucose levels in SMC-MR-KO mice compared to their MR-Intact littermates.
SMC-MR Deletion Does Not Significantly Alter Aortic Root Plaque Size, Plaque Composition, or Vascular Inflammation After 16 Weeks of HFD Feeding
Histologic analysis of the aortic roots of MR-Intact and SMC-MR-KO ApoE −/− mice fed HFD for 16 weeks revealed a trend toward a reduction in plaque area (Figures 5A,F left, p = 0.0503) in mice lacking SMC-MR, though this did not reach statistical significance. In addition, we observed no difference in plaque lipid content, necrotic core area (Figures 5A,F right), collagen content (Figures 5B,G), Mac2 inflammatory staining (Figures 5C,H), or smooth muscle α-actin staining between SMC-MR-KO and MR-Intact mice (Figures 5D,I). As expected, the smooth muscle α-actin staining was low in these advanced lesions, but this was unaffected by the absence of SMC-MR. We therefore conclude that SMC-MR does not significantly modulate aortic root plaque histologic parameters in a 16 week HFD model of advanced atherosclerosis. We next quantified by flow cytometry the leukocyte populations present in the aortic arches of MR-Intact and SMC-MR-KO ApoE −/− mice following 16 weeks of HFD feeding. As in the 8 week study, there was no significant difference in the number of total leukocytes, T cells, or monocytes and macrophages in the aortae of SMC-MR-KO compared to MR-Intact mice, nor did we detect a difference in the proportion of Ly6C-hi versus Ly6C-lo-negative macrophages (Figure 6). We therefore conclude that SMC-MR does not affect the inflammatory profile of plaques in this late atherosclerosis model using a sensitive, quantitative flow cytometry analysis.
SMC-MR Deletion Does Not Significantly Alter Brachiocephalic Artery Plaque Size or Composition After 16 Weeks of HFD Feeding
The data thus far demonstrate that in the ApoE −/− model of atherosclerosis, deletion of MR specifically from SMCs does not significantly alter the size, composition, or degree of inflammation of atherosclerotic plaques in the aortic root under all 3 diet conditions (8 or 16 weeks of HFD or 12 months on normal chow). As SMCs can contribute to the formation of the fibrous cap, which stabilizes the plaque, we next investigated plaque characteristics in the brachiocephalic artery, a common anatomical location for analysis of advanced atherosclerotic lesions since plaques in this region can develop the classical lipid core with outward remodeling and fibrous cap formation that is typically not seen in the aortic root in the ApoE −/− model (35). After 16 weeks of HFD, there was no difference in the brachiocephalic arterial plaques from SMC-MR-KO mice compared to their MR-Intact littermates in overall plaque size, lipid content, necrotic core (Figures 7A,F), collagen content (Figures 7B,G), Mac2 inflammatory staining (Figures 7C,H), or intra-plaque smooth muscle α-actin staining (Figures 7D,I). Of note, there is smooth muscle α-actin staining at the luminal surface of plaques in both groups, as shown in the representative images in Figure 7D (yellow arrowheads), indicating the presence of smooth muscle α-actin-positive fibrous cap formation in plaques of the brachiocephalic artery. However, when actin staining within the plaque was blindly quantified, there was no significant difference in staining in the brachiocephalic artery between SMC-MR-KO and MR-Intact mice ( Figure 7I).
Smooth Muscle α-Actin Staining in the Tunica Media of the Aortic Root Diminishes With Advanced Atherosclerosis, With No Effect of SMC-MR Deletion
As the plaque develops, SMCs de-differentiate from the contractile phenotype to the proliferative phenotype, losing expression of markers such as smooth muscle α-actin, and exhibit an alternative phenotype in which they migrate into the intima and proliferate. A subpopulation of smooth muscle α-actin-negative SMCs can even express inflammatory markers and may themselves contribute to atherosclerotic plaque instability (28). To consider more carefully whether the degree of dedifferentiation of medial SMCs was modulated by the presence of SMC-MR, we further analyzed the smooth muscle α-actin staining in the tunica media of aortic root histologic sections. In these sections, we observed the expected decrease in smooth muscle α-actin staining in the tunica media of aortic roots from mice with more advanced atherosclerosis (after 12 months on normal chow or 16 weeks of HFD) compared with mice displaying earlier atherosclerotic lesions (8 weeks of HFD) (Figure 8A, yellow arrowheads), despite positive staining in nearby coronary vessels in adjacent regions of tissue (white arrowheads). The percentage of the media staining positive for smooth muscle α-actin was quantified and found to be significantly decreased in both the 12 month normal chow and 16 week HFD groups compared to the 8 week HFD group (Figure 8B). However, medial smooth muscle α-actin staining was not affected by the presence of SMC-MR. From these data we conclude that SMC-MR does not play a role in the decrease in smooth muscle α-actin expression in the tunica media with advancing atherosclerosis.
SMC-MR Deletion Does Not Impact Plaque Calcification in the Aortic Root or the Brachiocephalic Artery After 16 Weeks of HFD in the ApoE −/− Model
As SMC-MR has been implicated in calcification in previous in vitro studies (22,23), we examined whether SMC-MR plays a role in plaque calcification in these atherosclerosis models. As calcification is a late manifestation of atherosclerosis, there was no Von Kossa positive staining of aortic root sections in mice following only 8 weeks of HFD (data not shown), as expected in these early lesions. After 16 weeks of HFD feeding, Von Kossa staining of the aortic root and brachiocephalic artery sections was analyzed. Overall, very little Von Kossa positive staining was present even in these more advanced atherosclerotic lesions, therefore a scoring method was used to analyze the extent of calcification in these plaques. Representative images illustrating the scoring strategy can be found in Figure 9A. Using this scoring method, no significant difference in calcification was found between SMC-MR-KO and MR-Intact mice in either the average or maximum calcification score per mouse in either the aortic root (Figure 9B) or the brachiocephalic artery (Figure 9C).
DISCUSSION
This study explored, for the first time, the role of the MR specifically in SMCs in the process of atherosclerosis using the ApoE −/− mouse model. This was addressed by generating a novel atheroprone mouse model in which the MR was deleted from SMCs in an inducible fashion on the ApoE −/− background. The results reveal that the deletion of SMC-MR does not influence atherosclerotic plaque burden, composition (lipid, collagen, or necrotic core), or vascular inflammation after 8 or 16 weeks of high fat feeding or after 12 months of normal chow feeding in the ApoE −/− model. We therefore conclude that the MR in SMCs does not contribute to atherosclerotic plaque development or progression in this model. Rather, the blood pressure-independent role of aldosterone in enhancing plaque development and inflammation, and conversely the capacity of MR antagonism to prevent atherosclerosis in mouse models, is mediated by the MR in cells distinct from vascular SMCs. It is important to understand the role of the MR in atherosclerosis, as MR activation by aldosterone is strongly associated with the risk of cardiovascular ischemia, the leading cause of death. The rationale for exploring a specific role for SMC-MR in atherosclerosis was based on published reports showing that the MR contributes to the regulation of a variety of SMC functions that can contribute to atherosclerosis. Specifically, it has been shown in vitro that MR activation in vascular SMCs promotes vascular cell calcification (22,23,36) and release of chemokines and growth factors that promote leukocyte chemotaxis (7). Using tissue specific MR-KO mice, the MR in SMCs has also been shown in vivo to contribute to SMC proliferation after wire injury (37) and to vascular fibrosis and stiffening in response to injury, hypertension (38), and aging (25). The rationale for testing this in the ApoE −/− mouse model is based on multiple studies showing that aldosterone enhances, and MR antagonists inhibit, atherosclerosis in this model (7,8,39). Despite these prior findings, we now demonstrate that in this atherosclerosis model, SMC-MR did not contribute to intimal calcification, SMC remodeling, or vascular inflammation. One reason for this paradox may be that SMCs in vitro display a profound phenotypic plasticity, taking on "phenotype-switched" characteristics such as proliferation, collagen synthesis, and α-actin downregulation simply from the nature of tissue culture (40), a feature which complicates in vitro study of SMC activities in atherosclerosis. Further, our findings suggest that the role of SMC-MR in the vasculature may be stimulus-specific. While we and others have found SMC-MR to play a role in the proliferative and fibrotic responses to carotid wire injury, hypertension, and aging, as mentioned above, the results of the present study would suggest that these vascular roles for SMC-MR do not extend to atherosclerosis, at least in the genetic ApoE −/− model. Since MR generally has been shown to contribute to atherosclerosis and vascular inflammation specifically in this model (7,8,39), we conclude that these blood pressure-independent effects are mediated by non-SMC MR.
Ample data supports that MR activation by aldosterone is pro-inflammatory in atherosclerosis studies in animal models (7,8,11,18). SMCs have been shown in vitro and in other disease models to exert pro-inflammatory effects via adhesion molecule upregulation, induction of oxidative stress, and production of pro-inflammatory cytokines and growth factors (41). We previously showed that the conditioned media from aldosterone-treated human coronary artery SMCs promotes chemotaxis of monocytes in a SMC-MR-dependent manner (7). Based on this finding, we hypothesized that SMC-MR would promote inflammation and plaque progression in the context of atherosclerosis. We tested this hypothesis using a novel SMC-MR-KO atherosclerosis model at various stages of plaque development, in multiple vascular beds, and using both conventional histology and quantitative flow cytometry analysis. However, despite our predictions drawn from the existing literature, SMC-MR deletion did not influence vascular inflammation in this model of baseline atherosclerosis in the absence of exogenous aldosterone administration. Notably, our study leaves open the possibility that SMC-MR could contribute to atherosclerosis exacerbated by aldosterone administration or other perturbations.
In humans, increased serum aldosterone is associated with increased risk of myocardial infarction and stroke (3)(4)(5), complications associated with the degree of atherosclerotic plaque inflammation (42). This is consistent with animal models in which aldosterone infusion increases plaque size and vascular inflammation (7,39) and systemic MR inhibition reduces plaque burden and markers of inflammation (8,10,11). Synthesizing the current findings with this published data, we conclude that the MR acting in non-SMCs promotes plaque growth and inflammation. It was recently shown that deletion of the MR specifically from monocytes and macrophages reduces plaque size and histological markers of inflammation in both the angiotensin II-treated ApoE −/− and the LDL receptor knockout mouse atherosclerosis models, supporting a role for the MR acting in inflammatory cells in atherosclerosis (43). Further, endothelial cells are known to be important for the initiation of atherosclerotic plaques and for the inflammation associated with the disease (17). In vitro, activation of endothelial cell MR has been shown to up-regulate expression of endothelial adhesion molecules, including intracellular adhesion molecule-1 (ICAM-1), thereby promoting leukocyte adhesion (44). In vivo, ICAM-1 has been shown to be necessary for aldosterone induction of atherosclerosis and vascular inflammation in the ApoE −/− model (39). These data support the potential for endothelial cell MR to also contribute to the pro-inflammatory effects of MR activation in atherosclerosis. However, further studies are needed to test this in vivo.
Like inflammation, vascular calcification in humans is also associated with the risk of cardiovascular mortality. Calcification can occur in the intima, where it is associated with atherosclerosis, or in the media, where it is associated with vascular stiffness and valvular disease, thereby contributing to hypertension and aortic stenosis (45). In vitro data strongly supports a role for the MR in regulating the osteogenic differentiation of SMCs, with MR activation promoting SMC calcification and MR inhibition preventing the process (22,23,36). Thus, while it appears from this study that SMC-MR may not contribute to intimal calcification in vivo in this atherosclerosis model, it remains possible that SMC-MR contributes to the medial calcification commonly associated with renal failure and aging. Indeed, substantial recent data demonstrate that MR expression in vascular SMCs increases with age and contributes to hypertension and vascular stiffness in the aging vasculature (20,24,46). Further studies in models of medial vascular calcification are needed to test this possibility. It has previously been described that SMCs de-differentiate during the development of atherosclerosis, losing expression of traditional SMC markers such as smooth muscle α-actin and, in some cases, taking on a macrophage-like phenotype (16,28). Direct treatment of SMCs in culture with oxidized phospholipids can recapitulate this effect (47). Further, renal denervation of ApoE −/− mice, which resulted in a decrease in aldosterone levels, was shown to reduce atherosclerotic plaque burden and increase plaque smooth muscle α-actin staining (48). Moreover, in vitro studies have shown that SMC-MR contributes to SMC proliferation (49), while a recent in vivo study of rats with no cardiovascular risk factors indicated that MR activation promoted aortic collagen deposition, a hallmark of SMCs switching from the quiescent, contractile phenotype to the proliferative, synthetic one (50). We have also demonstrated previously that SMC-MR is necessary for the SMC hyperplasia and collagen deposition observed after carotid wire injury (37). Consistent with prior reports, we observed very little smooth muscle α-actin positive staining within the aortic root plaques, and we observed a significant decrease in actin staining in the tunica media of mice with advanced disease, indicating that SMCs within the plaque and the media had lost their characteristic marker staining, consistent with a switch in phenotype. However, the degree of smooth muscle α-actin staining was not affected by the presence of SMC-MR, indicating that the MR in these cells does not contribute to this phenomenon. It is important to note that this staining strategy does not identify all SMCs, nor does it differentiate between actin-positive SMCs and other actin-positive cell types, such as pericytes and myofibroblasts. Thus, although we see no role for SMC-MR, without more rigorous lineage tracing of SMCs, such as that described by Shankman et al. (28), we cannot definitively rule out the possibility that SMC-MR plays a role in pathogenic SMC phenotype switching in atherosclerosis.
Importantly, deletion of SMC-MR did not significantly affect blood pressure, fasting glucose or cholesterol levels under any of the conditions tested in our experiments, which could have altered the results of the atherosclerosis studies independent of direct effects of SMC-MR. It is interesting to note that after 12 months of aging on normal chow, blood pressure tended to be lower in SMC-MR-KO mice compared to MR-Intact littermates, as we previously showed by telemetry in aged mice with intact ApoE (24). However, this was not statistically significant, likely due to the less sensitive tail cuff method of blood pressure measurement used in these studies. Surprisingly, we did observe a reduction in body weight and a trend toward a decrease in fasting glucose (p = 0.07) in SMC-MR-KO mice fed HFD for 16 weeks compared to MR-Intact littermate controls. These findings were not noted after 8 weeks of HFD or in 12-month-old mice on normal chow. The trend toward a decrease in aortic root plaque area with SMC-MR deletion in the 16 week HFD study (p = 0.05) may be attributable to this significant difference in body weight, as obesity itself may be a risk factor for atherosclerosis (51). Potential mechanisms for this reduction in body weight with SMC-specific MR deletion are unclear. While aldosterone and MR signaling have been implicated in the components of the metabolic syndrome associated with obesity (52,53), and MR blockade prevents obesity-induced metabolic syndrome in some animal models (54,55), this relationship has not held true in human studies (56)(57)(58). Further, to our knowledge, no data currently exist linking SMC-specific MR to the development of obesity and metabolic syndrome. Thus, this area warrants further confirmation and investigation to characterize the possible link, if any, between SMC-MR and obesity.
Several limitations in this study must be acknowledged. First, we used endpoint PCR of genomic DNA, rather than more direct methods such as qRT-PCR or immunohistochemistry, to confirm SMC-specific MR deletion in our mouse model. This was due to the lack of available antibodies that are highly specific for mouse MR for immunohistochemistry, as the best MR antibodies that exist were raised in mouse (59). Instead, we confirm that genomic MR recombination is at least as efficient in the ApoE −/− cross as in our previously extensively characterized SMC-MR-KO with intact ApoE that we include here as a control (24). As mentioned, tail cuff plethysmography was used to measure blood pressure instead of more sensitive telemetry measurements because the telemetry catheter in the aortic arch can affect blood flow characteristics, thereby altering atherosclerosis. Thus, we cannot rule out small changes in blood pressure that are below the sensitivity of detection by this method. In addition, only male mice were used in this study; thus, it remains possible that SMC-MR may influence atherosclerosis in females differently from males. Indeed, emerging evidence supports a sex difference in the contribution of the MR in other cell types to cardiovascular disease development and progression (60,61). Thus, the possibility of sex differences in the role of SMC-MR in atherosclerosis deserves further study. Finally, although the ApoE −/− model of atherosclerosis is extensively used due to its development of plaques with similar composition to those of humans, no animal model completely reproduces the human disease. Importantly, the most commonly used atheroprone mouse models, ApoE −/− and LDL receptor knockout, do not exhibit rupture of atherosclerotic plaques, necessitating the use of inflammation and plaque composition as proxies for plaque stability. There are also substantial differences between the various animal atherosclerosis models in terms of lipoprotein levels, gene expression, inflammation, and even the extent of lesion development (62). It is thus possible that our finding that SMC-MR does not play a role in atherogenesis is specific to the ApoE −/− model and may differ if other models were tested.
Despite the limitations of this study, it is bolstered by several strengths. This investigation exhaustively and specifically explored the role of SMC-MR in atherosclerosis in vivo for the first time. Multiple plaque parameters were assessed, including size, lipid content, necrotic core, smooth muscle α-actin content, and calcification by histologic methods, and vascular inflammation was quantified by both conventional immunofluorescence and sensitive flow cytometry methods. These varied analyses were performed under three different treatment conditions (short-term HFD, long-term HFD, and aging on normal chow) and, for the 16 week HFD study, at two different locations in the vasculature (aortic root and brachiocephalic artery). Based on this thorough analysis, we conclude that the MR acting specifically in SMCs does not play a substantial role in plaque initiation, progression, or inflammation in the ApoE −/− mouse model of atherosclerosis, and thus the MR in non-SMCs mediates the pro-atherogenic effects of MR activation in this model.
ETHICS STATEMENT
This study was carried out in accordance with the recommendations of the Guide for the Care and Use of Laboratory Animals, published by the National Institutes of Health. The protocol was approved by the Tufts Medical Center Institutional Animal Care and Use Committee.
AUTHOR CONTRIBUTIONS
MEM performed experiments, analyzed data, and wrote the manuscript. JD performed experiments, analyzed data, and edited the manuscript. AM developed the mouse models and performed experiments. SI analyzed data. IZJ was involved in planning and oversight for all experiments, performed data analysis, wrote and edited the manuscript, and handled funding and regulatory requirements for all studies.
FUNDING
This work was supported by grants from the National Institutes of Health: HL095590 and HL119290 to IZJ and F30HL137255 to MEM; and the American Heart Association: EIA18290005 to IZJ, 15POST21300000 to JD, and 17PRE32910003 to MEM. | 9,664.2 | 2018-07-09T00:00:00.000 | [ "Biology", "Medicine" ] |
On the Risk Assessment of Terrorist Attacks Coupled with Multi-Source Factors
Terrorism has wreaked havoc on today's society and people. The discovery of the regularity of terrorist attacks is of great significance to the global counterterrorism strategy. In this study, we improve the traditional location recommendation algorithm by coupling it with multi-source factors and spatial characteristics. We used the data of terrorist attacks in Southeast Asia from 1970 to 2016 and comprehensively considered 17 influencing factors, including socioeconomic and natural resource factors. The improved recommendation algorithm is used to build a spatial risk assessment model of terrorist attacks, and its effectiveness is tested. The model trained in this study is tested with precision, recall, and F-Measure. The results show that, when the threshold is 0.4, the precision is as high as 88% and the F-Measure is the highest. We assess the spatial risk of the terrorist attacks in Southeast Asia through experiments. It can be seen that the southernmost part of the Indochina peninsula and the Philippines are high-risk areas and that the medium-risk and high-risk areas are mainly distributed in the coastal areas. Therefore, future anti-terrorism measures should pay more attention to these areas.
Introduction
Terrorism is one of the most important threats in today's society and has caused great harm to people all over the world [1]. Southeast Asia is not only a key node in the "One Belt and One Road" development initiative but also an area of frequent terrorist attacks. The spatial risk assessment of the terrorist attacks in Southeast Asia is of great significance to the implementation of both the One Belt One Road Initiative and the counterterrorism strategy. According to the statistics of the Global Terrorism Database (GTD), 1078 terrorist attacks occurred in Southeast Asia in 2016 alone, resulting in 533 deaths and causing great panic within the society. A large number of scholars at home and abroad have made great efforts to solve various problems related to the threat of terrorism [2][3][4][5][6][7][8][9][10][11][12][13][14]. However, the risk assessment of terrorist attacks remains a complex and uncertain problem. On the one hand, the existence of the Internet has brought the global community closer together in all corners, sectors, and fields; as a result, the number of sensitive variables and disturbance variables related to terrorist attacks has increased unprecedentedly. On the other hand, because of the unprecedented advancement of global digitization and the application of various advanced material collection methods, terrorist attack assessment can obtain more types and larger volumes of related data from various angles than ever before, requiring researchers to have smarter, more efficient complex data processing capabilities. With the achievements of artificial intelligence in many fields, Sivasamy et al., Minu et al., and Gohar et al. have introduced machine learning methods into the evaluation of terrorist attacks and have conducted a series of fruitful works [15][16][17]. Dong believes that the machine learning method can focus on extracting factor vectors from known information, forming pattern recognition and classification, and then use data outside the sample to perform pattern verification and prediction [18]. Moreover, machine learning can be used to automatically re-identify factor vectors, reconstruct conflict modes, and adjust predictive output results based on different data inputs. In addition, a machine learning-based terrorist attack assessment model can also accommodate and integrate unstructured data and can find discernible patterns in cluttered and mixed data [18].
Related Works
The previous related studies primarily involved research from three aspects, as shown in Table 1. A terrorist attack prediction project led by Blair et al. used a neural network to successfully predict the conflict in Liberia in 2010 with the data in 2008; the accuracy was between 0.65 and 0.74 [19]. Dong used the 2010-2016 forecast of terrorist attacks in India as an example to empirically examine the effectiveness of machine learning based on back propagation (BP) neural networks in real-life terrorist attacks. It was found that machine learning-based terrorist attack prediction paradigms, even without the support of specific social theories, have a certain ability to anticipate terrorist attacks and can discover new knowledge regarding conflicts [18]. However, these studies are only aimed at individual countries and predictions on a national scale. Sheehan used time-series methods to investigate the relationship between the number of global strategic armed forces-related incidents and the frequency of transnational terrorist attacks, the type of attacks, and the type of victims of terrorist attacks with data from transnational terrorism incidents from 1993 to 2004 [20]. Sivasamy et al. proposed a new prediction method that uses the mixed average model (MABM) to fit the civilian casualty data resulting from terrorist attacks in South Asia and predicted civilian casualties in 2014 [15]. Minu et al. used the wavelet neural network (WNN) for prediction and applied it to the nonstationary nonlinear time-series of terrorist attacks (the time-series of the monthly number of world terrorist attacks from February 1968 to January 2007); the results revealed that the WNN is the best model for analyzing the time-series of terrorist attacks [16]. These studies were based on the time-series of terrorist attacks. Faryal et al. proposed a new classification and prediction framework to predict terrorist organizations. This framework consisted of four basic classifiers: naive Bayes (NB), K nearest neighbour (KNN), Iterative Dichotomiser 3 (ID3), and decision stump (DS); compared with a separate classifier, this method was found to achieve a fairly good accuracy and a lower classification error rate [17]. Raghavan et al. used the hidden Markov model to establish a model for a terrorist organization's activity and detect the sudden situation of the organization [21]. Adam et al.
used a power-law distribution based on observations to calculate the likelihood of a single event [22]. These studies focused on the terrorist attack itself. Scheffran's study showed that many connections and feedbacks exist among the climate system, natural resources, human security, and social stability [6]. Nevertheless, previous studies on terrorist attacks have seldom considered the multi-source factors that affect terrorist attacks; most studies have been conducted at national or regional scales, and the research has generally been conducted from the time-series of the occurrence of an attack or the incident itself and has ignored the spatial distribution of the occurrence of terrorist attacks. To assess the risk of terrorist attacks in the places where terrorist attacks have not occurred, we combine the clustering algorithm and the location recommendation algorithm at the grid scale and conduct research using terrorist data from 1970 to 2016 in Southeast Asia. Based on a comprehensive analysis of the factors of the terrorist attack, we conduct a spatial risk assessment of terrorist attacks.
In our study, the assessment process mainly includes two parts: partitioning areas and risk assessment. Partitioning areas refers to the regional division of the study area in space according to the influencing factors. In machine learning, this method belongs to unsupervised learning, and the clustering algorithm is a typical unsupervised machine learning algorithm. We have selected four classical clustering algorithms: K-means, Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH), Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and Self-Organizing Maps (SOM). Through experimental comparison, we obtained the most suitable method for partitioning areas and obtained the weighted grid. The risk assessment part is mainly divided into three steps. First of all, the data of the weighted factors are used in the location recommendation algorithm to calculate the similarity between each grid, and a kernel density function is then built based on the severity of terrorist attacks. Then, combining the similarity with the kernel density, a score between 0 and 1 is calculated for each grid where no terrorist attack occurred. Finally, we conducted a validity test of the risk assessment model of terrorist attacks established by the research.
Data Processing
Southeast Asia was chosen as the research area to conduct a spatial risk assessment of terrorist attacks. There are 11 countries in the study area, covering an area of 4.57 × 10⁶ km²: Vietnam, Laos, Cambodia, Thailand, Myanmar, Malaysia, Singapore, Indonesia, Brunei, the Philippines, and Timor-Leste. Among these countries, Laos is the only landlocked country in Southeast Asia, and Vietnam, Laos, and Myanmar border the People's Republic of China by land. Southeast Asia is a frequent area of terrorist attacks, as shown in Figures 1 and 2. From the figures, we can see that the southernmost part of Thailand and the Philippines are high-risk density areas for terrorist attacks. These regions have long displayed an imbalance of political and economic development, and ethnic and religious conflicts are more serious, which is likely to lead to the breeding of terrorism. Therefore, the spatial risk assessment of terrorist attacks in Southeast Asia is of great significance. A risk assessment is a quantitative evaluation of the impact or loss potential of an event or thing [23]. The spatial risk assessment of terrorist attacks assesses the location and occurrence of a terrorist attack from a spatial perspective, including, but not limited to, the use of locations where terrorist attacks have occurred, to assess the risk of locations where no terrorist attack has occurred. We conducted our research from the perspective of location recommendation methods. For data, we collected 17 types of influencing factor data covering two aspects, socioeconomic factors and natural resource factors, which are shown in Table 2. Among these factors, socioeconomic factors include ethnic diversity, major drug areas, population density and nighttime lighting, accommodation outlets, catering outlets, transportation sites, religious sites, and political sites; natural resource factors include average precipitation, average temperature, terrain, the distance to the main navigable lake, the distance to the ice-free ocean, and the distance to the main navigable river. Then, standard grid spatial processing (0.1° × 0.1°) of the 17 factors and the terrorist attack data was performed, by which we obtain 36,978 standard grids that can be analyzed at the same scale. To unify the measurement scale, this study normalizes the influencing factors. We mainly use GIS software and the Python programming language for data processing, including ArcMap 10.3 (http://pro.arcgis.com/) and Python 3.6 (https://www.python.org/).
(1) Based on the GTD, the locations of terrorist attacks in Southeast Asia, as well as the numbers of casualties, can be obtained, and the information on the terrorist attacks is converted into raster data, selecting a grid with a 0.1° × 0.1° resolution. The grid serves as a spatial unit to facilitate the statistical determination of the number of terrorist incidents and the total number of casualties.
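For illustration, this gridding step can be sketched in Python, the language used in this study; the column names and the sample records below are hypothetical and are not taken from the GTD.

```python
import numpy as np
import pandas as pd

CELL = 0.1  # grid resolution in degrees

def grid_events(events: pd.DataFrame) -> pd.DataFrame:
    """Aggregate point events into 0.1-degree cells.

    `events` is assumed to hold one attack record per row with
    'lat', 'lon' and 'casualties' columns (illustrative names).
    """
    df = events.copy()
    # Snap each coordinate to the lower-left corner of its cell.
    df["cell_lat"] = np.floor(df["lat"] / CELL) * CELL
    df["cell_lon"] = np.floor(df["lon"] / CELL) * CELL
    # One row per cell: number of incidents and total casualties.
    return (df.groupby(["cell_lat", "cell_lon"])
              .agg(incidents=("lat", "size"),
                   casualties=("casualties", "sum"))
              .reset_index())

# Two dummy events in the same cell and one elsewhere.
demo = pd.DataFrame({"lat": [13.72, 13.74, 6.91],
                     "lon": [100.51, 100.52, 122.07],
                     "casualties": [2, 0, 5]})
print(grid_events(demo))
```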
(2) The raster data of five factors can be obtained from G-Econ 4.0 (a dataset of world economic activity): the distance from the main sailing lake (km), the distance from the main sailing river (km), the distance from the ice-free sea, the average precipitation (mm/a), and the average temperature (°C); subsequently, ArcMap 10.3 is used to sample the above raster data in a 0.1° × 0.1° grid.
(3) Ethnic diversity is based on the GeoEPR (National Relations Dataset); the main drug area is based on the World Drug Report and the national administrative border; nighttime lighting is based on the Earth Observation Organization; population density and topography are based on NASA's Earth Observatory. We use ArcMap 10.3 to sample the above data in a 0.1° × 0.1° grid.
(4) With respect to points of interest (POIs), we use the Google Places API to get POI data of Southeast Asia and then use ArcMap 10.3 to sample it in a 0.1° × 0.1° grid.
In addition, because the 17 factors have different units, to unify the measurement scale and avoid the differences between different units, we normalize the 17 influencing factors, and the normalization formula is given below:

$X_{norm} = \frac{X - X_{min}}{X_{max} - X_{min}}$,

where $X_{norm}$ is the normalized value, $X_{min}$ is the minimum value of the factor, $X_{max}$ is the maximum value of the factor, and n is the number of factors.
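The normalization itself reduces to a per-column min-max rescaling. A minimal sketch follows, assuming each factor is stored as one column of the grid table; the sample values are dummies.

```python
import numpy as np

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Scale one factor to [0, 1]: (X - X_min) / (X_max - X_min)."""
    x_min, x_max = np.nanmin(x), np.nanmax(x)
    if x_max == x_min:          # constant factor: avoid division by zero
        return np.zeros_like(x, dtype=float)
    return (x - x_min) / (x_max - x_min)

# factors: one row per grid cell, one column per influencing factor.
factors = np.array([[250.0, 12.0],
                    [900.0, 30.0],
                    [600.0, 21.0]])
normalized = np.apply_along_axis(min_max_normalize, axis=0, arr=factors)
print(normalized)
```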
Algorithm
The spatial risk assessment of terrorist attacks is used to assess the location and the occurrence risk of terrorist attacks from the perspective of spatial analysis, including, but not limited to, the use of locations where terrorist attacks have occurred, to assess the risk of locations where no terrorist attacks have occurred. The location recommendation algorithm can spatially extract the relationship between the terrorist attack and the location of the attack, thereby scientifically conducting a risk assessment. The traditional location recommendation algorithm mainly focuses on single-source factors, such as sign-in data; it has rarely been used with multi-source factors. In previous research, multiple regions are usually considered as a whole. However, factors such as economy and population often have different influences in different regions. Previous studies identified some spatial factors but did not integrate them into the location recommendation process for in-depth research. Building on this research, we integrate the multi-source factors, spatial factors, and regional divisions into the location recommendation algorithm.
First, the Southeast Asian region is divided into regions by a clustering method, and the weights of each factor are obtained by correlation analysis. Next, the location recommendation algorithm is improved by exploiting the fact that activities at geographical locations are clustered, and the spatial risk assessment of the terrorist attack is completed through the partitioned terrorist attack location factors and the terrorist attack attribute data. The evaluation process of this paper mainly includes two parts: partitioning areas and risk assessment. The flow chart is shown in Figure 3.
Partitioning Areas
Because the influence of various factors on the terrorist attacks in different regions is not the same, we first consider the spatial division of the study area according to the factor data and then use the correlation analysis method to determine the degree of impact of each influencing factor on the terrorist attacks in each sub-area. The extent of the impact on a terrorist attack is weighted. The spatial division involves dividing the study area spatially according to the factors. In machine learning, this method belongs to unsupervised learning, and the clustering algorithm is a typical unsupervised machine learning algorithm. Clustering is applied to a large amount of unlabeled data: according to the inherent similarity of the data, the dataset is divided into multiple clusters; the entities within a cluster are similar, and the entities of different clusters are not similar. A cluster is a convergence of points in the test space, such that the distance between any two points of the same cluster is less than the distance between any two points of different clusters [24]. We select four classical clustering algorithms, and through experimental comparison, the method that is most suitable for the spatial division of this study is obtained.
a. K-means algorithm. K-means is a partition-based clustering method. The K-means algorithm calculates the similarity based on the average value of the data objects in the cluster and takes the average (or the centroid) of the objects in the cluster as the center of the cluster. The algorithm first randomly selects k objects among the n data objects; each object represents the average of a cluster. Each remaining object is assigned to the nearest cluster according to its distance from the center of each cluster and the principle of minimum distance. On this basis, the average of each cluster is recalculated. This process is repeated until the sum of squared errors

$E = \sum_{j=1}^{k} \sum_{x_i \in C_j} \left\| x_i - w_j \right\|^2$

is minimized (at this point, the members in the clusters no longer change), where $x_i$ is the given data object and $w_j$ is the average value of cluster $C_j$ [25].
b. BIRCH algorithm. BIRCH is a comprehensive hierarchical clustering method that is commonly used for large-scale data sets. This algorithm introduces two concepts, the clustering feature (CF) and the clustering feature tree (CF-tree), which are used to summarize the clusters together with the distances between clusters. The equilibrium iteration of the hierarchical method is used to reduce the size of data sets and cluster them. The BIRCH method saves memory and calculates quickly, with only a simple scan of the data set required to build a tree and identify noise points. However, BIRCH does not cluster well for non-spherical clusters and high-dimensional data. In addition, the order of data input affects the results of the algorithm [26].
c. DBSCAN algorithm. DBSCAN is a representative density-based clustering algorithm. DBSCAN defines a cluster as the largest set of points connected by density and can divide a region having a sufficiently high density into clusters. The algorithm requires the user to input two parameters: one is the radius (Eps), which represents the extent of a circular neighborhood centered at a given point P; the other is the minimum number of points within the neighborhood centered on the point P (MinPts). These two parameters are difficult to set because they require the user to have a general understanding of the cluster dataset and to set them empirically [27].
d. SOM algorithm
The SOM algorithm is an unsupervised learning algorithm for clustering and high-dimensional visualization; it is an artificial neural network developed by simulating the characteristics of the human brain for signal processing. After the model was proposed by Professor Kohonen of the University of Helsinki in Finland in 1981, it became the most widely used self-organizing neural network method. The SOM network structure consists of an input layer and a competition layer (output layer). The number of input layer neurons is n, and the competition layer is a one-dimensional or two-dimensional planar array composed of m neurons. The network is fully connected, and each input node is connected with all output nodes. The SOM network can map arbitrary-dimensional input patterns into one-dimensional or two-dimensional graphics in the output layer and keep its topology unchanged. A "competitive learning" approach is used in training: each input sample finds the node in the hidden layer that best matches it, called its activation node or "winning neuron", and a stochastic gradient descent method then updates the parameters of the activation node. At the same time, the points adjacent to the activation node are also updated appropriately according to their distance from it. Excitatory feedback is sent to neighboring neurons, and inhibitory feedback is sent to distant neurons; in other words, near neighbors encourage each other, and distant neighbors suppress each other [28].
We use the clustering quality indicator called the Calinski-Harabasz (CH) index to evaluate the clustering effect. The CH indicator is the ratio of the degree of separation to the compactness of the data set. Tightness is measured by the sum of the squares of the distances between the data points in each class and the class's representative point, and the degree of separation is measured by the squares of the distances between each representative point and the center point of the data set. The larger the CH indicator value is, the tighter each class is, the more dispersed the classes are, and the better the clustering effect is:
$CH(K) = \frac{\sum_{i=1}^{K} n_i \, d^2(c_i, c) / (K - 1)}{\sum_{i=1}^{K} \sum_{x \in C_i} d^2(x, c_i) / (n - K)}$,

where K represents the number of clusters, $n_i$ represents the number of data points in the i-th class, $d(c_i, c)$ represents the distance between the representative point of the i-th class and the data center c, $d(x, c_i)$ denotes the distance between data point x and its representative point in class i, and n represents the total number of data points in the dataset [29].
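A sketch of this algorithm comparison with scikit-learn is given below. The factor matrix is simulated, the parameter values mirror those reported later in the Regional Division Results, and SOM is omitted because scikit-learn does not provide it (a separate SOM library would be needed).

```python
import numpy as np
from sklearn.cluster import KMeans, Birch, DBSCAN
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 17))  # stand-in for the normalized factor grid

candidates = {
    "KMeans(k=2)": KMeans(n_clusters=2, n_init=10, random_state=0),
    "BIRCH(k=4)": Birch(n_clusters=4),
    "DBSCAN(eps=0.5, min_samples=8)": DBSCAN(eps=0.5, min_samples=8),
}

for name, model in candidates.items():
    labels = model.fit_predict(X)
    # The CH index needs at least two clusters; DBSCAN may mark
    # everything as noise when its parameters do not fit the data.
    if len(set(labels)) > 1:
        print(f"{name}: CH = {calinski_harabasz_score(X, labels):.1f}")
    else:
        print(f"{name}: fewer than two clusters found")
```

The partitioning with the highest CH value would then be kept for the subsequent weighting step.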
With regard to the factor weights, this study uses the maximal information coefficient (MIC) to calculate correlations and assign weights to the factors. The maximal information coefficient is developed on the basis of mutual information. It is suitable for exploring the potential relationships between variable pairs in a data set, and it is fair and extensive:
$MIC(X, Y|D) = \max_{i \times j < B(n)} \left\{ M(X, Y|D)_{i,j} \right\}$,
where X, Y denote the variables; n denotes the sample size; i × j < B(n) restricts the division dimension of the grid G; G indicates that the pairs of variables are divided into i × j grids; and $M(X, Y|D)_{i,j}$ denotes the characteristic matrix of X and Y [30]. In this study, $B(n) = n^{0.6}$; obviously, 0 ≤ MIC ≤ 1.
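As an illustration, the MIC between one factor and the attack counts can be computed with the third-party minepy package (its availability is an assumption; it is not named in the paper). Setting alpha = 0.6 mirrors $B(n) = n^{0.6}$, and the data below are simulated.

```python
import numpy as np
from minepy import MINE  # third-party MIC implementation

rng = np.random.default_rng(1)
attacks = rng.poisson(3, size=1000).astype(float)   # dummy incident counts
factor = attacks * 0.5 + rng.normal(size=1000)      # correlated dummy factor

mine = MINE(alpha=0.6, c=15)   # alpha=0.6 matches B(n) = n^0.6
mine.compute_score(factor, attacks)
print("MIC =", mine.mic())     # value in [0, 1], usable as a factor weight
```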
Risk Assessment
The data of the weighted factors are used as the input for the location recommendation algorithm to calculate the similarity between each grid, and a kernel density function is then built based on the severity of terrorist attacks. Finally, combining the similarity with the kernel density, a score between 0 and 1 is calculated for each grid where no terrorist attack has occurred. This score indicates the probability of a terrorist attack on the grid.
(1) Similarity Calculation. The Euclidean metric (also called the Euclidean distance) is a commonly used distance definition, which refers to the true distance between two points in an m-dimensional space, or the natural length of a vector (that is, the distance from the point to the origin). The Euclidean distance in 2D and 3D space is the actual distance between two points.
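A small sketch of the grid-to-grid similarity: the Euclidean distance over weighted factor vectors, mapped into (0, 1]. The 1/(1 + d) mapping is one common choice and is an assumption here, since the paper does not state the exact conversion from distance to similarity.

```python
import numpy as np

def euclidean_distance(u: np.ndarray, v: np.ndarray) -> float:
    """True distance between two points in m-dimensional factor space."""
    return float(np.sqrt(np.sum((u - v) ** 2)))

def similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Map distance to (0, 1]; 1/(1+d) is one common choice (assumption)."""
    return 1.0 / (1.0 + euclidean_distance(u, v))

a = np.array([0.2, 0.7, 0.1])   # weighted factor vector of grid A
b = np.array([0.3, 0.6, 0.2])   # weighted factor vector of grid B
print(similarity(a, b))
```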
(2) Spatial Characteristic Analysis. Kernel density analysis is used in spatial analysis to calculate the density of elements in their surrounding neighborhoods; it considers the neighborhood of each element as a smooth surface. The position of the element has the highest value, and the value gradually decreases with increasing distance from the point, reaching 0 at the search radius [31]. With kernel density analysis, it is possible to vividly and intuitively show hot spots of geographical phenomena. The formula for the kernel density method is given by

$f(s) = \frac{1}{nh} \sum_{i=1}^{n} K\!\left( \frac{s - x_i}{h} \right)$,

where f(s) is the kernel density calculation function at the spatial position s; h is the distance attenuation threshold, which is the bandwidth; n is the number of element points whose distance from the position s is less than or equal to h; and K is the kernel function. The kernel function used here is the quartic kernel function described in the work of Silverman. The geometric meaning of this equation is that the density value is largest at each core element $x_i$ and decreases continuously as the distance from $x_i$ grows, dropping to 0 when the distance reaches the bandwidth h.
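A sketch of this surface with Silverman's quartic kernel follows. Folding the severity values in as per-event weights (the Population field described later) is an assumption about how the tool combines them, and the coordinates are dummies.

```python
import numpy as np

def quartic_kernel(t: np.ndarray) -> np.ndarray:
    """Silverman's 2D quartic kernel: (3/pi) * (1 - t^2)^2 for |t| < 1."""
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = (3.0 / np.pi) * (1.0 - t[inside] ** 2) ** 2
    return out

def kernel_density(s, points, weights, h):
    """Density at location s from weighted event points within bandwidth h."""
    d = np.linalg.norm(points - s, axis=1)        # distance to each event
    return float(np.sum(weights * quartic_kernel(d / h)) / h ** 2)

events = np.array([[0.0, 0.0], [0.3, 0.1], [2.0, 2.0]])  # event coordinates
severity = np.array([5.0, 2.0, 9.0])                     # per-event severity
print(kernel_density(np.array([0.1, 0.0]), events, severity, h=1.0))
```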
(3) Spatial Risk Assessment. We use the data of the weighted factors to calculate the similarity between each grid where no terrorist attack has occurred and the grids where terrorist attacks occurred; we then select the three attack grids with the highest similarity to the grid without terrorist attacks; finally, we compute the similarity-weighted average of the kernel density values of those three grids. The calculated score is the degree of possibility of an attack incident occurring in a grid where no terrorist attacks have occurred. The calculation process is shown in Figure 4.
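A compact sketch of this scoring step, reusing the similarity function of the earlier sketch: the similarity-weighted average of the top-3 kernel densities follows the description above, while the input values are dummies and the scaling of the result to [0, 1] is assumed to come from the normalized density values.

```python
import numpy as np

def risk_score(target, attack_factors, attack_density, k=3):
    """Score a no-attack grid from its k most similar attack grids."""
    dists = np.linalg.norm(attack_factors - target, axis=1)
    sims = 1.0 / (1.0 + dists)            # same similarity mapping as above
    top = np.argsort(sims)[-k:]           # indices of the k most similar grids
    # Similarity-weighted average of the kernel density values.
    return float(np.sum(sims[top] * attack_density[top]) / np.sum(sims[top]))

attack_factors = np.array([[0.1, 0.8], [0.2, 0.7], [0.9, 0.1], [0.5, 0.5]])
attack_density = np.array([0.9, 0.7, 0.1, 0.4])   # normalized kernel density
print(risk_score(np.array([0.15, 0.75]), attack_factors, attack_density))
```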
(4) Evaluation Index. We use precision, recall, and the combined F-Measure of the two to evaluate the spatial risk of terrorist attacks. The precision rate represents the proportion of actual terrorist attacks in the grids that the model assessed as high risk. The recall rate indicates the proportion of high-risk grids assessed by the model in the actual terrorist attack grids. The F-Measure is a comprehensive consideration of both rates and can comprehensively reflect the evaluation performance of the model.
$Precision = \frac{TP}{TP + FP}$ (7)

In Equation (7), Precision is used for the prediction result; it indicates how many samples in the positive prediction are true positive samples. There are two possibilities for the prediction to be positive: one possibility is to predict the true positive class as a positive class (TP); the other possibility is to predict the negative class as a positive class (FP).
$Recall = \frac{TP}{TP + FN}$ (8)

In Equation (8), Recall is used for the original sample. It indicates how many positive examples in the sample are correctly predicted. There are two possibilities for being correctly predicted: one is to predict the true positive class as a positive class (TP); the other is to predict the original positive class as a negative class (FN).
$F = \frac{2 \times P \times R}{P + R}$ (9)

In Equation (9), P means Precision and R means Recall. There are occasions when contradictory situations exist between the P and R indicators. Therefore, the comprehensive calculation formula F-Measure (F) of P and R was selected in this study for the overall evaluation of the model.
Regional Division Results
According to the collected data of the 17 multi-source factors, the clustering algorithms in the machine learning method are used to partition the Southeast Asian region. For the BIRCH and K-means algorithms, the number of clusters from 2 to 10 is used to tune the parameters. It is found that the BIRCH clustering effect is best when the number of clusters is 4 and that the K-means algorithm works best when the number of clusters is 2. For the DBSCAN clustering algorithm, eps (ε-neighborhood distance threshold) and min samples (ε-neighborhood point threshold) were used to tune the parameters. It is found that, when eps is 0.5 and min samples is 8, the clustering effect is best. For the SOM algorithm, the number of neurons is used to tune the parameters; the best effect is found when the number is 2. The tuning parameters of the four clustering algorithms are shown in Figure 5. By comparing the optimal parameter states of the four clustering algorithms, we found that the K-means algorithm has the highest clustering quality score. Therefore, K-means was selected for spatial division. The comparison of the clustering effects of the four algorithms is shown in Figure 6, and the result of the spatial area division is shown in Figure 7.
Spatial Characteristics
Kernel density analysis is used to calculate the density of elements in their surrounding neighborhoods. In the kernel density analysis tool of ArcMap 10.3, the Population field indicates counts or quantities distributed throughout the landscape that are used to create a continuous surface. This study uses ArcMap 10.3 for kernel density analysis and sets the Population field value to the severity of the terrorist attack represented by each point (combining the number of deaths, the number of injured, and property losses). The kernel density of each grid based on the severity of the terrorist attack is shown in Figure 8.
Assessment Results
The data set was divided into two parts: one for training the evaluation model and the other for testing the model. To train and test the performance of the spatial risk assessment model, a 10-fold cross-validation method was used: the data set is divided into ten parts, nine of which are taken as training data and one as test data. The sample data in each test set get a score between 0 and 1, which is verified by taking threshold values from 0.1 to 0.9, with the precision rate, recall rate, and F-Measure as evaluation indices. We conducted ten 10-fold cross-validations and took the average value as an estimate of the final model accuracy, as shown in Figure 9.
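A skeleton of one such validation round with scikit-learn's KFold is shown below; the evaluate function is a placeholder for the risk model, which is not reproduced here, and the paper's procedure repeats this ten times before averaging.

```python
import numpy as np
from sklearn.model_selection import KFold

def evaluate(train_idx, test_idx, data):
    """Placeholder: fit the risk model on the training folds and return
    its F-Measure on the held-out fold (model details omitted here)."""
    return np.random.default_rng(int(test_idx[0])).random()

data = np.arange(36978)          # one entry per standard grid
kf = KFold(n_splits=10, shuffle=True, random_state=0)

fold_scores = [evaluate(tr, te, data) for tr, te in kf.split(data)]
print("mean F-Measure over 10 folds:", np.mean(fold_scores))
```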
As seen in Figure 9, with the increase in the threshold, the precision rate increases but the recall rate decreases, and the F-Measure increases first and then decreases. The F-Measure can comprehensively represent the overall performance of the model. Therefore, the value with the highest F-Measure is selected in this study: the threshold is 0.4, and the precision is 88%. The results of the spatial risk assessment are shown in Figure 10. The high-risk areas in the figure have scores greater than 0.4, the medium-risk assessment scores range from 0.1 to 0.4, and the low-risk areas score less than 0.1. From Figure 10, we can see that the high-risk areas of terrorist attacks in Southeast Asia are generally concentrated and multi-centered. The southernmost part of Thailand and the Philippines are high-risk areas of terrorist attacks. The conflicts between religions and ethnic groups in these regions are serious, the economic development of these regions is not balanced, and they belong to the main drug areas, so it is easy for terrorism to breed there. The medium-risk areas of terrorist attacks are widely distributed; some coastal and border areas are medium-risk areas, where unbalanced economic development and quite serious ethnic and religious conflicts can lead to terrorist attacks. The low-risk areas of terrorist attacks are also widely distributed; there are fewer religions and ethnic groups in these regions, and terrorism does not arise there easily. Figure 11a shows that the precision rate increases as the threshold increases and that the precision of the model after K-means, DBSCAN, and SOM partitioning under different thresholds is greater than that of the unpartitioned model.
Figure 2. Southeast Asia terrorist attack death map.
Figure 5. Clustering quality of four algorithms using different parameters.
Figure 6. Comparison of the clustering quality of four algorithms.
Figure 11. Effect before and after partition. Figure 11a-c show comparisons of the accuracy, recall, and F-Measure for the model partitioned by the K-means, SOM, BIRCH, and DBSCAN algorithms, respectively, and the unpartitioned model.
Table 2. Impact factor data. Sources include the Center for Comparative and International Studies (CIS), International Conflict Research (http://www.icr.ethz.ch/data/index); the World Drug Report 2016, Division for Policy Analysis and Public Affairs, United Nations Office on Drugs and Crime (http://www.unvienna.org/unov/en/unodc.html); and NASA's Earth Observatory (http://neo.sci.gsfc.nasa.gov/). | 9,300.2 | 2018-08-27T00:00:00.000 | [ "Computer Science" ] |
Analysis of information and information flow in technological processes. Method of transmitting information unaltered
Decision makers in an organization's top management use a multitude of information to substantiate decisions. Execution staff also need a multitude of inside information about the development of the production process, the way tasks and objectives are met, the situation of material and energy resource stocks, the functioning of machines and installations, the degree of fulfilment and the quality level of production, etc. The organization's information system can be defined by all the data, information, information flows and circuits, information handling procedures, and their means of application. All this information is processed and directed to potential users in order to achieve the company's goals. However, this information can be distorted by factors such as illegible printing, loss of information on computers' magnetic storage media due to wear, viruses, or miscellaneous defects. Within the information system, it may often happen that we have to deal with the transmission of more information, from design to production and vice versa, for the conformity of the finished product.
Introduction
Along with the technological advancement of recent years, and because information is transmitted, stored, and managed electronically, the security of the organization's information system has become an increasingly prominent issue. Hardware and software products evolve constantly but retain vulnerabilities that can be exploited by some people through counter-engineering or reverse engineering. To maintain the highest level of security, a growing amount of data and information requires adequate protection and a higher level of security in terms of access to information, as well as a permanent adaptation of the user training process as a whole to the particularities of the information and production systems within the company.
Information flow. Method of transmitting information
Information entering the organization is recorded, processed, and stored in a database. All employees need to have access to that information, so it has to be planned, organized, and managed; thus, a management information system (MIS) is needed. The studied organization produces subassemblies for the finished bearing product. They can be divided into three distinct categories: bearing rings (bearing shirts), cages, and rollers.
For a more in-depth study of an informational flow, we will look at the Production department and the QS department.
In order to have an accurate picture of the flow of information between the two chosen departments, we start from the fact that the information system in the organization was very well established by implementing the SAP system as an informational base. The structure of the SAP system allows access to any type of information by calling different system functions, even if the name of the entity (the SAP command) is unknown, as long as elements of the command structure are known. The information flow between the two departments, in terms of the frequency of information transmission, is a permanent flow because information is transmitted via computer systems several times a day. From the point of view of its direction, the information flow is horizontal.
Within these departments there is also another type of information flow and informational circuit established between the production department and the line control, a flow that is permanent and horizontal, and between the line control and the Laboratory of measuring technique with the same characteristics. The informational circuit between the line control and the production department is internal from the organizational point of view, with a horizontal trajectory, and in the relation with the Laboratory of measuring technique it is of the same type (internal, horizontal).
Figure 1. Equipment chart of the Laboratory of measurement technique (control machines in automatic coordinates (ZEISS), control machines in manual coordinates, handheld rugosimeter).
The information transmitted between the Production Department and the Laboratory of measurement technique is expressed in written form, is horizontal and, from the point of view of its usefulness, serves control and regulation. Between the line control and the Laboratory, the information can be transmitted orally or in writing, is horizontal, and is used for evaluation and reporting. Inside the Laboratory, orders are received with information to evaluate some features of the product with the help of the equipment. The result of these assessments is sent to the Production Department and the line control. All these evaluations are done in the Laboratory and stored in an archive consisting of an electronic database to which other departments, such as accounting and production, have access.
The periodic provision of such predetermined reports in the form of syntheses in the database is the component of the information system that is also called the "management alert system", as it is intended to specifically warn managers about the existence or possibility of existence of problems or opportunities. Management of the information system can be represented graphically as in the figure below:
Figure 2. The role of information system management
2.1 Information system deficiencies
Even though the informational system between the two departments appears to be well established, it has shortcomings, primarily the distortion of information generated by the difference in professional training between operators in the Laboratory of measurement technique and other employees or line controllers. This information may also be distorted by other factors such as illegible writing, malfunction of copy machines (not rendering paper information correctly), and loss of information on computer magnetic supports due to wear, viruses, or mechanical defects. Filtration deficiencies often occur that change the content of the information, intentionally or not. Within the information system, the description of the information system sequences must be accompanied by graphic representations. For the material reception activity, the horizontal graphic representation is shown in the figure below. As a result of studies conducted within organizations on the information system, it has been found that there are some typical, relatively frequent deficiencies due to errors in its design and/or operation. These typical deficiencies are: Distortion - the partial or total unintentional modification of the content of some information during collection and transmission from the transmitter to the receiver; Filtering - the partial or total intentional modification of the content of information during collection, recording, processing, and transmission from the transmitter to the receiver; Redundancy - the repeated collection, recording, processing, and transmission of data and information; Overloading communication channels - the collection, processing, and transmission of unnecessary data and/or information by means of communications [1].
Identification of criteria for optimization of information flows
Optimizing information flows, as a process or as a product, is subject to the same rules based on value and cost. In this regard, many authors link the value of information to four main factors, namely: quality, speed, quantity, and relevance to managerial ability in decision-making. Information quality - to assess the quality of specific information, managers need to be able to compare the given facts with reality. Deadline for delivery of information - for a control to be effective, corrective measures are needed before a deviation from the standard plan takes place. Quantitative information sufficiency - a message cannot be considered either of a proper quality or opportune as long as it does not contain enough information. Relevance of information - at the same time, the information that the manager receives must be relevant to the responsibility and work tasks.
Compared to the foregoing, it should be noted that there are technical indicators by which the benefits of an information system can be appreciated. Among these are: accuracy, complexity, opportunity, frequency of elaboration, appropriate content, appropriate presentation, integrating capacity in the information system, and utility. Utility is realized by what the system performs, and performance by the way the utility is fulfilled [2].
Accuracy. It can be appreciated by two indicators:
• the ratio (number of fair answers) / (total number of responses) given to a specific event, a useful indicator for expert systems;
• the precision of the data at one's disposal, measured by a dedicated ratio.
Complexity. This is the quality of containing elements of knowledge that allow a more complete picture of the event.
Opportunity. It can be regarded as the ratio of the information that arrived within the available disposal time to the total amount of information transferred.
An empirical evaluation of the usefulness of information can be made on the basis of the ratio between the amount of useful and unnecessary information. The following thresholds are set: above 0.5, essentially useful information; between 0.1 and 0.5, a normal weight of useful information; below 0.1, an informational void.
The "usefulness ratio coefficient" ( ) can provide a qualitative appreciation of the information where: , amounts of information that come in or out of a compartment; , -the corresponding value in the money terms of the information unit.
As regards the itineraries of data and information from the place where the data are collected to the recipients of the information, the circuits must have the following characteristics: they should be as short as possible, rational and economical, and the volume of intermediate processing unseen by the end user should be minimal. The optimization of information flows is achieved by bringing the indicators or parameters of the mentioned evaluation models to levels that provide maximum efficiency to the information system [3].

4. Critical analysis of the existing situation by determining weaknesses and strengths and evaluating them according to the quality criteria of the information system
Determination of the quality characteristics of the analysed information system
The purpose of this stage is to determine the internal and external characteristics of the quality, the quality analysis methodology, the quality indicators and to determine the work necessary to improve the quality of the information system, requiring organizational, technical, technological and methodological assurance.
To describe the characteristics of the quality of the information system, the following processes must be covered: • Selection and argumentation of the initial set of data that reflects the general peculiarities and stages of the informational system's life cycle, that influence certain quality characteristics of the system; • Selection, establishment and confirmation of concrete parameters and scales for measuring the characteristics and attributes of the quality of the information system for their subsequent estimation and comparison of the requirements of the specifications in the process of qualification tests or certification at certain stages of the life cycle of the information system.
Collecting information about the existing information system
Various methods can be used to accomplish this step, among which the most common are: interviews with persons involved in the company's activity; consultation of records; studying documents circulated within and outside the firm; and studying the documentation governing the conception, operation and control of the information system. We chose the interview method and, for its realization, developed two questionnaires: one for managers at any level within the organization and one for the other persons involved.
Quality characteristics of the information system
The questionnaires were drafted following the characteristics of the quality of the information system listed in Table 1, with the aim of clarifying aspects related to: rigorous control of information sources; degree of degradation of information during traffic; equipment for information processing; level of training of information managers; level of completeness of information; level of accuracy of managers' decisions; methods for storing information; design and structure of databases and the perspective of their computerization; orientation towards qualified staff with computer literacy; growth of decision performance; and definition of the beneficiaries of information and their access to it. The final result of the analysis of the existing situation at the organizational level is reflected in Table 2. Based on the calculations above, we obtained a score of 2.45, which lies between 2 (weak) and 3 (good), indicating a reasonably capable information system at the organizational level. It is therefore only suggested to improve the existing information system. The transition to the new, improved information system can be done through a pilot system running parallel to the old one or through progressive passage [4].
"Computer Science"
] |
An efficient blockchain-based framework for file sharing
File sharing, being a foundation of the Internet, has traditionally relied on a centralized service architecture, resulting in significant maintenance costs. Moreover, due to the lack of an effective file management system, instances of sensitive information going out of control and losses of confidentiality in file sharing have occurred frequently. In order to address the difficulty of tamper detection and the lack of supervision over the entire file transfer process in the current Internet environment, this paper designs a blockchain-based system architecture for the secure sharing of electronic documents. An efficient blockchain model is used in our framework, and with the help of a distributed storage system and asymmetric encryption technology, file sharing can be made controlled, reliable and traceable throughout the transfer process. Referring to existing consensus mechanisms, e.g., Delegated Proof of Stake (DPoS) and Practical Byzantine Fault Tolerance (PBFT), we propose a new consensus for efficient and secure file sharing. Our experimental results show that our framework can maintain a higher throughput than existing schemes.
The growth of the Internet of Things (IoT) has led to increased research on distributed information systems due to its distributed nature 5, and traceable distributed data sharing solutions have emerged in IoT 6. With the popularity of cryptocurrency all over the world, blockchain technology has attracted tremendous interest from both academia and industry 7 and has been applied in various fields, e.g., healthcare, the Internet of Things (IoT), and cloud storage 8. The decentralized nature and reliable security features of blockchain technology offer new perspectives on file transfer reliability. Leveraging this idea, we develop a more efficient file-sharing system that saves server resource consumption.
The main contributions of this paper are as follows:
• A proposed efficient method for file sharing utilizing blockchain technology. In this method, an existing storage system (e.g., a cloud storage platform or P2P system) is used to store files, and the blockchain is only used to save information about file sharing.
• We build a blockchain with a new framework, which contains two chains. The information of each file is stored in a particular chain. We design a special data structure for file information, so we can reliably track the source and monitor the lifecycle of a file.
• A new consensus is proposed, which groups nodes and conducts transactions efficiently. We demonstrate its feasibility through evaluation, and our experimental results show that our framework is more efficient than existing frameworks.
Related work
After the emergence of blockchain, people considered using it for cloud storage 9. Initially, the application of blockchain for cloud storage was rudimentary and centralized, leveraging its inherent properties for enhanced security and integrity. However, this approach did not fully exploit the potential of blockchain's decentralization. Then Benet created the Inter Planetary File System (IPFS) 10. IPFS is a distributed file system and, like a blockchain, a P2P network run by multiple nodes, so many people began to combine it with blockchain technology for file transfer. Chen et al. 11 proposed an enhanced P2P file system scheme, improving IPFS's block storage model with a zigzag-based storage solution and employing blockchain to facilitate better coordination among nodes for efficient data exchange. Vimal et al. 12 utilized Filecoin as an incentivization mechanism for content providers based on the integration of IPFS and blockchain technology. Subsequently, some schemes have sought to enhance IPFS with Hyperledger Fabric 13,14. While these schemes improve file sharing security and reliability, they typically rely on existing blockchain systems like Ethereum or Hyperledger Fabric for implementation, which may lack efficiency.
There has been growing interest in high-performance information-sharing blockchains in the field of IoT. Dorri et al. 15 proposed a lightweight blockchain architecture for IoT. Xu et al. 16 proposed DIoTA, a decentralized ledger-based framework to authenticate IoT devices and the data generated from them. People are also beginning to use the next generation of blockchains for data sharing: Directed Acyclic Graph (DAG) distributed ledgers, e.g., IOTA 17 and Nano 18. The DAG structure allows parallel validation of transactions and reduces the cost of transactions, so DAG distributed ledgers can establish more efficient and scalable file-sharing systems, such as FileDAG 19.
Method and functions
The main aim of our system is to share files safely, so we design a complete file-sharing method, depicted in Fig. 1. It can be divided into two parts: in the first part, the file is encrypted and stored in IPFS; in the second part, information about the file and user is stored in the blockchain. We define two main functions to accomplish file transfer.
Upload
The user encrypts the local file with a randomly generated symmetric encryption key (the "file key") and uploads it to IPFS to obtain the file hash (i.e., the IPFS content identifier, which is used to retrieve the file). Other storage platforms (e.g., cloud storage platforms) can also be used. Each user needs to generate a "blockchain wallet", simplified here as an asymmetric key pair. The user's public key encrypts the file key, which is then stored in the blockchain transaction alongside the file hash and relevant information. The process is depicted in Algorithm 1.
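The following is a minimal sketch of this upload flow, not the paper's Algorithm 1 itself: the ipfs_add helper is a hypothetical stand-in for a real IPFS client, and the hybrid encryption uses the widely available Python cryptography package.

```python
import hashlib
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def ipfs_add(data: bytes) -> str:
    # hypothetical stand-in for an IPFS client call; returns a content identifier
    return hashlib.sha256(data).hexdigest()

def upload(file_bytes: bytes, owner_public_key) -> dict:
    file_key = Fernet.generate_key()                   # random symmetric "file key"
    ciphertext = Fernet(file_key).encrypt(file_bytes)  # encrypt the local file
    file_hash = ipfs_add(ciphertext)                   # store ciphertext, get file hash
    encrypted_key = owner_public_key.encrypt(file_key, OAEP)
    # this record is what the blockchain transaction would carry
    return {"file_hash": file_hash, "encrypted_key": encrypted_key}

# usage sketch: the "blockchain wallet" is simplified to an RSA key pair
owner_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
record = upload(b"report contents", owner_sk.public_key())
```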
Download and transfer
During the download process, the file owner retrieves the file hash and encrypted key from the blockchain, downloads the file from IPFS using the file hash, and decrypts the file key with his private key in order to decrypt the file. During transfer, the file key is first decrypted using the owner's private key and then encrypted using the recipient's public key. Finally, the encrypted file key and the file hash are stored in the transaction. Essentially, the file hash serves as an equivalent representation of the file on the blockchain, facilitating secure retrieval and transfer of the file. The process is depicted in Algorithm 2.
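Continuing the sketch above (reusing its OAEP, rsa, record and owner_sk names), the transfer step re-wraps the file key for the recipient; this illustrates the idea of Algorithm 2 rather than reproducing it.

```python
def transfer(record: dict, owner_private_key, recipient_public_key) -> dict:
    # decrypt the file key with the owner's private key ...
    file_key = owner_private_key.decrypt(record["encrypted_key"], OAEP)
    # ... and re-encrypt it under the recipient's public key
    new_key = recipient_public_key.encrypt(file_key, OAEP)
    return {"file_hash": record["file_hash"], "encrypted_key": new_key}

# usage sketch: hand the file over to a second wallet
recipient_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
shared = transfer(record, owner_sk, recipient_sk.public_key())
```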
Framework
For efficient transfer, we need a novel blockchain that can achieve high concurrency and security. Therefore, we borrow from traditional blockchains and the consensus algorithms widely used in cryptocurrency systems and propose our framework. We utilize a two-chain structure for optimal performance. The File Transaction Chain is designed with specialized data structures to efficiently store transactions and maintain data integrity, while the File Info Chain functions as a traditional blockchain to safeguard the security and stability of the system. We discuss only one kind of node: full nodes, which validate transactions and blocks, ensuring they adhere to the network's consensus rules. A full node stores a complete copy of the two-chain blockchain and relays new transactions and blocks to other nodes in the network. In practical deployments, there will also be lightweight nodes 20 in the system.
File transaction chain (FTC)
This chain handles transactions about files and provides file information to users. When a transaction is validated, it is stored in the following structure: to facilitate traceability, the file is used as a root block of an array; the user group is attached to the corresponding file, where each user points to the user who shared the file with him, forming a chain structure; and transactions are sorted chronologically and attached to the corresponding users as an array. The overall structure is shown in Fig. 2, and a code sketch of it follows the list below. The main information is stored in the following sections:
• File: file name, file hash, creation time and other related information.
• User: user's public key and the encrypted file key.
• Transaction: address of both parties, transaction type, timestamp, transaction information, signature of validation node group, signature of transaction creator and transaction hash value.
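One way to read the structure of Fig. 2 is the sketch below; the field names are illustrative assumptions, not the paper's exact schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Transaction:
    sender: str
    receiver: str
    tx_type: str
    timestamp: float
    info: str = ""
    group_signature: str = ""    # signature of the validation node group
    creator_signature: str = ""  # signature of the transaction creator
    tx_hash: str = ""

@dataclass
class UserEntry:
    public_key: str
    encrypted_file_key: bytes
    shared_by: Optional["UserEntry"] = None  # points back to the sharer (chain)
    transactions: List[Transaction] = field(default_factory=list)  # chronological

@dataclass
class FileRecord:  # the root block of the per-file array
    name: str
    file_hash: str
    created: float
    users: List[UserEntry] = field(default_factory=list)
```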
File info chain (FIC)
This chain functions similarly to a traditional blockchain and is used to manage system information such as voting results, node grouping, and node efficiency and reputation records (see the consensus steps below).
Normal-case operation
There are 5 main steps in the whole system's lifecycle. These steps also constitute the consensus, which we propose based on the concepts of sharding and PBFT. The lifecycle is shown in Algorithm 3.
(1) Vote to select the leadership group. Nodes vote based on the efficiency and reputation of each node. The number of votes a node may cast is determined by the node ratings on the FIC (File Info Chain); each node has equal rights the first time. Nodes broadcast their voting results as a transaction to all nodes and, after obtaining all voting results, each node calculates the node ranking. The top 1/5 of nodes are selected as the leadership group.
(2) Divide node groups by the leadership group. The members of the leadership group rotate as chair according to the ranking order. Based on the information blocks on the FIC, each node is rated and scored. The chair divides the nodes into 6 groups (the number of groups is adjusted according to the node count and transaction volume) and ensures that each group has a similar total score.
(3) Create a block on the FIC. The chair packages the information about voting results, node grouping and other system data (e.g., efficiency and reputation information) as a transaction, then requests the transaction according to the PBFT algorithm 22. When 2/3 of the leadership group nodes confirm the result, the block is created on the FIC. If more than 1/3 of the members do not agree with the grouping, the process returns to the second step until a grouping is formed.
(4) Conduct transactions. Nodes in each group form an independent peer-to-peer network, and adjacent groups establish P2P channels to form a ring network structure. When a user wants to upload or transfer a file, he initiates a transaction on the FTC at a node of the previous group, which sends it to a random node in the next group. After receiving the transaction, the receiving node broadcasts it within the transaction-processing group. After more than 2/3 of the nodes in the group have validated and signed it, the receiving node attaches all signatures to the transaction and broadcasts it to its group and the leadership group. The leadership group maintains a P2P channel with each group. After receiving and validating the transaction, the leadership group nodes broadcast it to every group, and each node inserts the transaction into the user's transaction array, in chronological order, under the corresponding file on the File Transaction Chain. Any transaction that fails validation at 1/3 of the validation nodes is discarded, and the transaction initiator and leadership group are notified.
(5) Supervise transactions. The leadership group evaluates the efficiency of each node based on transaction speed, initiates transactions containing efficiency information on the FIC, randomly selects transactions for validation, labels transactions based on the results, and initiates transactions containing node reputation information. These transactions are validated by the leadership group at the next grouping, and the produced blocks are added to the FIC.
A sketch of the election and grouping logic of steps (1) and (2) is given below.
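This minimal sketch makes steps (1) and (2) concrete; the greedy score-balancing rule is our own assumption, since the paper only requires the groups to have similar total scores.

```python
def elect_and_group(scores: dict, beta: int = 6):
    """scores: node id -> rating derived from FIC efficiency/reputation records."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    leaders = ranked[: max(1, len(ranked) // 5)]  # top 1/5 become the leadership group
    groups = [[] for _ in range(beta)]
    totals = [0.0] * beta
    for node in ranked[len(leaders):]:
        i = totals.index(min(totals))             # greedy: fill the lightest group
        groups[i].append(node)
        totals[i] += scores[node]
    return leaders, groups

leaders, groups = elect_and_group({f"node{i}": float(i % 17) for i in range(300)})
```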
Security analysis
In our framework design, security is an important criterion. Our framework runs safely and reliably through the use of cryptography and decentralization. We strive for efficiency while ensuring security; therefore, we do not conduct a thorough security audit and only analyze the security of this scheme against common threats to blockchains.
Double spending attack. In a normal blockchain system, an attacker waits for specific conditions and spends cryptocurrency twice or more. This type of attack poses a significant threat to the integrity and security of blockchain systems by undermining a fundamental principle of cryptocurrency. In our system, however, the crypto asset is a file; the owner can copy and transfer the file to anybody, so there is no risk of double spending. The key point is to secure ownership through signatures, as files are never "consumed".
Replay attack. When a user requests a transaction, an attacker may listen in and steal the user's information, then send the same transaction again, or even modify the transaction's key information to steal the file. To prevent this attack, communication between nodes is encrypted with one-time key pairs in our system, and each transaction is created with a timestamp and an expiration time. Nodes promptly identify and reject any transaction that is outdated or exhibits suspicious characteristics, such as duplicate timestamps or users, preventing potential security breaches.
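A minimal version of this freshness check might look as follows; the field names and the 60-second lifetime are illustrative assumptions.

```python
import time

_seen_hashes: set = set()

def is_fresh(tx: dict, ttl_seconds: float = 60.0) -> bool:
    # reject duplicates and transactions past their expiration time
    if tx["tx_hash"] in _seen_hashes:
        return False
    if time.time() - tx["timestamp"] > ttl_seconds:
        return False
    _seen_hashes.add(tx["tx_hash"])
    return True
```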
Impersonation attack. An attacker impersonates a legitimate user in order to gain access. Our framework uses the RSA algorithm to create the user's wallet with a key length of 2048 bits, which provides sufficient security strength 23. This emphasizes the need for advanced cryptographic techniques in blockchains. While more intricate encryption algorithms and longer keys offer increased security, they may also cause greater performance degradation. Therefore, it is crucial to choose algorithms that balance security and performance for the specific environment in which the system is deployed.
Sybil attack. An attacker subverts the reputation system of a peer-to-peer network by creating a large number of pseudonymous entities and using them to gain disproportionately large influence 4. Proof of Work (PoW) consensus does not depend on the number of nodes, so a Sybil attack can cause only limited damage to it and can hardly influence the entire blockchain network; but this type of attack is very dangerous to consensus schemes that run on voting (e.g., DPoS). In our consensus, if an attacker controls 2/3 of the nodes of a group, he can forge transactions. Because node groups are divided by the leadership group based on node reputation and efficiency ratings, the attacker needs to control at least 2n/(3β) nodes (where β is the number of groups and n the total number of nodes) without a record of wrongdoing before he can control a group and create fraudulent transactions. But it is not enough to just fake transactions: the leadership group can determine whether a transaction is abnormal and record the malicious behavior by checking its hash and signatures. So the fair election of the leadership group is an important guarantee of security. To measure the participation and credibility of the nodes and assign them different voting rights, this scheme uses the effective transaction volume of a node as an equivalent substitute for its computing resources, together with the reputation record, to score the nodes. Unless a 51% attack is realized, the selection of the leadership group is safe.
51% attack. An attacker exhibits malicious behavior, such as tampering with transactions and forging blocks, by controlling 51% of the computing power of the entire network 24. In our framework, if an attacker controls 51% of the computing power, he can obtain enough voting rights to select the leadership group and control the entire system. This attack is challenging to defend against, but it is difficult to achieve in a large blockchain; the best prevention is to establish a sufficiently large blockchain.
Evaluation
To assess the progressiveness of this framework, we analyze its time consumption. The total process time (from the beginning of voting to the next re-voting) between two successive leadership groups can be analyzed in two aspects: communication consumption and calculation consumption.
Communication consumption
When a large number of nodes are evenly distributed in a network and broadcasting causes no congestion, the average communication time RTT can be treated as a fixed value. There are two types of communication consumption.
(1) Vote and divide. Each node needs to broadcast its own voting results to all nodes, and the chair divides the nodes and broadcasts once, requiring a total of 2 RTT.
(2) Process and supervise transactions, create blocks on the FIC. When a group processes a transaction, the transaction is initiated, and the receiving node receives it and broadcasts it to all nodes within the group for signature. Each node then sends its result to the receiving node for integration, and the receiving node broadcasts the signed transaction to the group and the leadership group, requiring a total of 4 RTT. While the other groups conduct transactions, the leadership group conducts transaction supervision. In extreme cases, all leadership group nodes record, validate, and broadcast the results to all nodes, with a broadcast time of 1 RTT. FIC block generation adopts the PBFT algorithm, which takes 5 RTT over its 5 stages. The leadership group thus takes a total of 6 RTT, which is longer than transaction processing.
Overall, in the lifecycle of a leadership group, each node mainly spends time processing transactions. The individual communications in part 1 consume little time and occur rarely, so communication consumption is mainly incurred in part 2.
Calculation consumption
If we have a total of n nodes, γ leadership group nodes, α transactions and β groups, we can derive the algorithmic complexity of the entire system.
(1) Leadership group management. Dividing nodes requires iterating over the blocks of the FIC and scoring each node, with a complexity of O(n). The time consumption of the transaction supervision part is linearly related to the number of transactions; in the extreme case, γ leadership group nodes record and validate α transactions with a complexity of O(α/γ). Each transaction can be completed by iterating over the transaction chain once per node, and this time consumption can be ignored. Block generation adopts the PBFT algorithm with a complexity of O(γ²). Setting the average coefficient as C_1, the calculation consumption of the leadership group is

T_1 = C_1 (n + α/γ + γ²).   (1)

(2) Transaction processing. The β groups work simultaneously, and a transaction is validated and signed by all nodes within a group after being broadcast by the receiving node. Each node then sends it back to the receiving node for integration, with a complexity of O((α/β)·((n−γ)/β)). Setting the average coefficient as C_2, the transaction processing time of each group is

T_2 = C_2 (α/β)((n−γ)/β).   (2)

Generally, α is much greater than γ. The analysis makes it evident that increasing the number of nodes in the leadership group decreases the number of transactions handled by each node within it, which reduces time consumption while enhancing the degree of decentralization. However, increasing γ makes the group size (n−γ)/β too small, which cannot guarantee the credibility of transactions; moreover, the leadership group nodes lose their transaction ability, and users can only create transactions through the other nodes. Due to the limited processing capacity of the other nodes, an excessive number of leadership group nodes can lead to transaction congestion. So we need to select an appropriate number of groups and leader nodes based on the total number of nodes to keep the overall system reliable and efficient.
C_1 is mainly caused by the program for transaction validation and recording, while C_2 is mainly caused by the program for transaction validation and signature; since the consumption of signature algorithms is much greater than that of recording algorithms, C_2 > C_1 and T_2 > T_1.
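To make the trade-off tangible, the sketch below evaluates the two cost expressions for a range of leadership-group sizes; note that the closed forms of (1) and (2) above are reconstructed from the listed complexity terms, and the coefficient values here are arbitrary assumptions.

```python
def t_leader(n, alpha, gamma, c1=1.0):
    # Eq. (1) as reconstructed: scoring O(n), supervision O(alpha/gamma), PBFT O(gamma^2)
    return c1 * (n + alpha / gamma + gamma ** 2)

def t_group(n, alpha, gamma, beta, c2=2.0):
    # Eq. (2) as reconstructed: alpha/beta transactions, each over (n - gamma)/beta nodes
    return c2 * (alpha / beta) * ((n - gamma) / beta)

# scan gamma at roughly the experimental scale (n = 300 nodes, 8 groups)
n, alpha, beta = 300, 1000, 8
for gamma in (30, 60, 90):
    print(gamma, t_leader(n, alpha, gamma), t_group(n, alpha, gamma, beta))
```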
Experiment
We direct our attention to the analysis of T_2 through a series of system simulation experiments. In our experimental setup, we established a network environment where each server hosts multiple nodes within a local LAN, engaging in peer-to-peer communication over TCP to approximate "zero-latency" exchange. We employ a cluster of six computers, each equipped with an Intel Core i5 processor at 3.6 GHz, 16 GB of RAM, Microsoft Windows 11 64-bit, and a 500 GB hard drive. Each computer runs 30 nodes on different ports, for a total of 300 nodes. Of these, 60 belong to the leadership group, while the remaining nodes are divided into 8 normal groups.
We ensure that a fixed number of transactions is allocated to each normal group. Once all the transactions have been validated by the nodes in the leadership group, we gather and compute the average time spent on each individual transaction. The results can be observed in Fig. 3. When the number of instantaneous transactions is below 300, the processing capacity of the group proves to be adequate, and the time consumption of the leadership group remains stable. Therefore, as the number of transactions increases, there is a noticeable decline in the average time consumed per individual transaction.
We change the number of nodes in each group to determine the impact of group size on transaction validation. After each change, every group requests 10 transactions. For each transaction, we collect the integration time on the receiving node, which organizes the signatures and broadcasts the result. The results, shown in Fig. 4, indicate that the integration time is short and increases slowly.
The impact of the number of nodes on transaction efficiency within a group is significant. The average number of transactions initiated by each node per second is the TPS. To assess the processing capacity of groups of different sizes, we assign different TPS values and vary the number of nodes within a group. We measure the average time taken by each group to complete the requested transactions within one second; the resulting data are presented in Table 1. Blockage occurs when the total time consumed exceeds 1 second. From the table, it can be observed that when the TPS is lower than 20, a group of 40 nodes is an optimal choice.
Directed Acyclic Graph (DAG) blockchains are the state-of-the-art solution for blockchain-based file transfer. They allow parallel, rapid processing of transactions and are designed for high performance. In contrast to traditional blockchains, which validate transactions and create new blocks at fixed time intervals, DAG blockchains offer a more efficient solution for real-time file transactions; Ethereum, for example, commits blocks once every twelve seconds, limiting its ability to meet real-time transaction requirements. Both in our framework and in DAG blockchains, each node can conduct transactions. However, unlike DAG blockchains, our framework incorporates a "central" group to enhance efficiency and security. It is important to note that while our framework achieves a higher degree of efficiency and security, DAG blockchains generally exhibit greater decentralization. To compare with the performance of DAG blockchains (e.g., IOTA), we follow the experimental settings of another study 25: the number of groups was increased to 30, each consisting of 8 nodes. We test the average processing speed of each group from 15 to 150 TPS over 300 seconds. The result is shown in Fig. 5.
Among the three implementations of IOTA, Nano, and Byteball in that paper 25, Nano achieves the highest throughput, 60 transactions per second. Our system achieves 85 transactions per second per group according to Fig. 5, and our global throughput is multiplied by the number of groups. Compared to the state-of-the-art method, our method therefore makes clear progress.
Conclusion
In this paper, to address the fundamental issue of file transfer, we propose a new blockchain-based framework for file transfer. We first propose the core functions of the entire system based on security requirements. Then, in order to complete the task efficiently, we design a dual-chain blockchain structure and a new consensus based on the PBFT algorithm and the sharding concept. Furthermore, we analyze the security, feasibility and efficiency of the framework. Finally, we conduct quantitative experiments in a simulated environment. Our proposed framework differs from existing solutions in two respects. First, we adopt a relatively centralized consensus through the leadership group to ensure efficient operation while preserving security. Second, transaction processing is highly parallel within each group, which addresses the low efficiency of existing blockchain file transfer solutions. However, this framework has not been exercised under complex network conditions, and we have not yet designed a communication scheme for adverse networks.
Figure 1. The process of the file sharing method.
Figure 2. The data structure of the File Transaction Chain.
Figure 3. Average time consumption for each transaction.
Figure 4. Integration time consumption for each transaction.
Table 1. Transaction calculation time consumption (columns: TPS, time).
"Computer Science"
] |
Enhancing English Learners' Willingness to Communicate through Debate and Philosophy Inquiry Discussion
The present study investigated the impact of two instructional methods, Debate and Philosophy Inquiry (PI), in enhancing Willingness to Communicate (WTC) among two randomly selected groups of English as a Second Language (ESL) learners, with sixteen participants in each group. The researchers used independent-samples and paired-samples t-tests to analyze the collected data. The paired-samples t-test showed that both methods of instruction have a significant effect on learners' WTC. However, the learners' WTC increased more in the Debate group compared to the Philosophy Inquiry classroom discussion group. The results indicate that Debate is more effective than Philosophy Inquiry classroom discussion in enhancing ESL learners' WTC.
Introduction
Willingness to Communicate (WTC) is "a personality-based, trait-like predisposition which is relatively consistent across a variety of communication contexts and types of receivers" (McCroskey & Baer, 1985, p. 6). Moreover, it is viewed "as a readiness to speak in the L2 at a particular time with a specific person, and as such, is the final psychological step to the initiation of L2 communication" (MacIntyre & Doucette, 2010, p. 162). In recent years researchers have attempted to examine learners studying English as a second language (ESL) or English as a foreign language (EFL) to determine their Willingness to Communicate; however, previous studies have not addressed the influence of the two instructional methods considered here, Debate and Philosophy Inquiry (PI) discussion, on students' WTC. Both approaches are known to be student-centered and to promote learners' communicative skills. This study therefore examined the effectiveness of these two student-centered methods in promoting TESL undergraduates' WTC in classrooms, and attempted to answer the following research questions:
1) Do PI classroom discussion and Debate instructional methods make any difference in ESL learners' WTC scores?
2) Is there any significant difference between mean scores of Debate and PI classroom discussion groups?
3) Which method of instruction promotes Self-Communicative Competence (CC) more: PI discussion or Debate?
4) Which method of instruction increases Communicative Apprehension (CA) more: PI discussion or Debate?
Willingness to Communicate (WTC)
Willingness to Communicate (WTC) refers to a person's motivation to use the target language to communicate in a situation (Dörnyei, 2003, 2005). Speaking is important in language development and acquisition. As stated by Swain (1985), quality language output between a learner and a teacher has a direct impact on the achievement of language progress. Hence the assertion by MacIntyre et al. (2003), who believe that the fundamental goal of language instruction is to trigger this psychologically driven process in a learner. According to Skehan (1989, p. 48), once initiated, the learner will "talk the language" and in the process learn or acquire it. Many factors affect one's Willingness to Communicate, basically people- and/or situation-specific qualities such as aptitude, motivation, interlocutors (friends or acquaintances) and context. Researchers also distinguish between WTC inside the classroom, when students are requested to answer questions, and WTC outside the classroom, when interacting with friends or acquaintances. What a teacher chooses to do in the classroom context will also affect the students' WTC.
Willingness to Communicate depends mostly on two elements: Communication Apprehension (CA) and Self-Perceived Communicative Competence (CC). Communication Apprehension refers to the anxiety that learners experience in communicating. This is most prominent in adult learners, as speaking is a public activity that indirectly affects one's self-esteem. The moment individuals attempt to deliver an idea, they inevitably become conscious of others' perception of their ability, and it is this anxiety that impedes their attempts to speak up (MacIntyre & Gardner, 1994). McCroskey and Richmond (1987) asserted that those who experience high communicative apprehension will most likely withdraw from or avoid communication. The other important element of WTC is Communicative Competence, defined as the feeling of confidence one has to communicate effectively in a particular situation (MacIntyre et al., 1998). The sense of efficacy arises from previous successful communication encounters and from having the knowledge and skills to carry out the task again (Weaver, 2010). Hence, the feeling of being able to carry out a communicative task successfully correlates with a lack of anxiety about performing it. McCroskey and Richmond (1990) argued that people who perceive themselves as poor communicators are less willing to communicate. Baker and MacIntyre (2001) concluded that reduced anxiety and increased perceived competence mean that students are more willing to communicate. However, they pointed out that anxiety is more crucial among advanced learners, while for less experienced students perceived competence is the key factor.
Debate
Debate is an ancient practice, some 2,400 years old (Garrett et al., 1996). Protagoras, "the father of Debate", introduced it as a teaching method in Ancient Greece (Darby, 2007), and later, in the twelfth century, Muslim scholars in colleges used this pedagogy to teach Islamic jurisprudence (Makdisi, 1981). Debate is an activity which involves discussing a matter with people whose opinions differ or contradict one another. It requires participants to be open-minded, able to decide on the best solution when voting after listening to and arguing opinions. Being open-minded is one of the predispositions of a critical thinker. Another attribute of Debate is boldness in expressing one's opinion. Perhaps this is why many claim that Debate is related to democracy and freedom of speech (Ericson et al., 2003).
Today, in the United States, Debate is a popular teaching tool in schools and at the tertiary level, where it is used in various disciplines (Jugdev et al., 2004). Research on Debate classrooms at university level reveals that Debate promotes critical thinking (Hall, 2011) and offers many advantages in one go. Among them are: clarifying ideas and presenting arguments (Bellon, 2000), better understanding of content knowledge (Vo & Morris, 2006), improving personal skills and critical understanding (Moon, 2005; Kennedy, 2007), improving persuasive public speaking and listening skills (Oros, 2007) and bolstering teamwork (Gervey, 2009).
Despite many benefits of Debate as a teaching method, it is rarely used in language classrooms.In Malaysia, Debate is more commonly carried out as a co-curricular activity where each year at the district level, each school would send three representatives to participate in its Annual Debate Competition (Othman, 2005;Othman et al., 2013).Hence, it is an exclusive activity often open to those who are proficient in the language.Occasionally, a professional development course on how to adjudicate Debate is carried out, but teachers seldom use Debate as a pedagogy tool in their language or content classroom.
Philosophical Inquiry Classroom Discussion
Philosophy Inquiry discussion is the pedagogy used in the children's thinking program Philosophy for Children (P4C), created by Mathew Lipman in the 1960s (Lipman, 1980). For this program, Lipman created his own texts of stories about children who were inquisitive and constantly deliberated on matters that puzzled them. The texts were dialogic, for the characters would think aloud their thoughts, or what is termed "inner dialogues" (Vygotsky, 1986). By reading these texts, students internalized the inquisitive nature of the characters. Lipman also incorporated the Socratic Method into this pedagogy: after reading the text, students would pose questions based on it. In the teacher-led discussion, the teacher initially leads, and students are probed to think critically about the opinions expressed by their friends. Thus, with the teacher acting as facilitator, students engage in a dialogic discussion and deliberate among themselves. Through the Socratic Method, the process of deliberation is internalized; students gradually become reflective and begin to think for themselves. In the process a community of inquiry is created, and gradually a student-led discussion takes place. As stated by Lipman (2009), the program P4C "aims to encourage children to develop their own way of thinking by confidently expressing their opinion about the world in a safe environment" (p. 166). Since its introduction, the P4C program has been successfully implemented in various countries, including the United Kingdom, Australia, New Zealand, Iran, Mexico and South Korea. In Singapore it was selectively carried out in premier schools such as Raffles. Research on this program has shown that it facilitates students to think and argue well, to be reasonable, and to improve their spoken communicative skills. Students who undergo the P4C program are said to acquire four attributes: becoming creative, critical, caring and collaborative (Sutclliffe, 2015).
In Malaysia, the Philosophical Inquiry pedagogy was introduced in 2002 via the children's thinking program, the P4C program, by Rosnani Hashim, who received first-hand training from Mathew Lipman. The program was named the "Hikmah" Program, and in 2006 the Centre for Philosophical Inquiry in Education (CPIE) was set up at the Institute of Education, IIUM, to promote the Philosophical Inquiry pedagogy in Malaysian education through the Philosophy for Children (P4C) program (Preece & Juperi, 2014). Several studies related to Philosophical Inquiry were carried out at the primary, secondary and tertiary levels by its members, among them Hashim (2003, 2009), Othman (2005), Juperi (2010), Abdullah (2011) and Preece (2012). The results showed positive improvement in the subjects' cognitive and English language skills. Presently, the program is carried out in private schools as a stand-alone program.
Method
The present study used a quantitative method, a pretest–posttest experimental design, to determine the effect of the two programs on ESL learners, who were randomly divided into two groups and given two different treatments: Group A received Debate instruction, and Group B received Philosophy Inquiry classroom discussion instruction.
Participants and Data Collection Procedures
The study focused on 32 ESL undergraduate students attending the Oral and Aural course at University Putra Malaysia. The researchers used the Fish Bowl method to divide the students into two equal groups. At the beginning of the semester, a pre-test questionnaire was given to the participants; before that, the researchers obtained consent from the class lecturer and students to perform the research, and clarification on the items of the questionnaire was given to the participants. Throughout the semester, the participants took part in various tasks according to their groups. During the 8th week of the semester, a post-test was conducted to collect data on their WTC after the respective treatments.
Treatment
There were two kinds of treatments in the study. First, group A received Debate instruction as a treatment. During the introduction, the instructor explained Debate: its structure, format, the roles of each speaker, and the structure of an argument and a rebuttal. Subsequently, students carried out a mock Debate, at the end of which the instructor evaluated their delivery and arguments. In subsequent lessons, students debated topics related to the course outline. Topics were given before the actual class so that students could research them. In each class, students were given half an hour to brainstorm and discuss their points. Throughout the lesson, students came to the front to deliver their arguments.
The second treatment was the Philosophy Inquiry discussion for group B. During the introduction, the instructor showed the students how to carry out the PI discussion. They were seated in a circle and each given a text, which was read out loud paragraph by paragraph by each student. After reading the text, students raised controversial issues pertaining to it, which became the agenda of each discussion. The instructor, as facilitator, led the discussion initially; a student was appointed as moderator, and the instructor facilitated the discussion while mentoring the moderator. Each week students were given the text beforehand so that they would come prepared.
Instruments
The data were collected using the WTC questionnaire (McCroskey, 1992) to determine the participants' WTC before and after the Debate and PI discussions. This questionnaire consists of three parts: section A, which measures WTC; section B, which assesses Communication Apprehension; and section C, which measures Communicative Competence. Section A comprises 12 items developed by McCroskey (1992) to assess students' Willingness to Communicate in English with strangers, acquaintances and friends in different communicative contexts, among them public speaking, talking in meetings, group discussions, and interpersonal conversations. For this questionnaire, respondents chose their degree of willingness as a percentage between 0 (totally unwilling) and 100 (totally willing).
Data Analysis
The pre- and post-test data from both the Debate and Inquiry classroom discussion groups were analyzed using IBM SPSS version 20 to obtain descriptive statistics and to conduct t-tests comparing between and within groups.
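An equivalent analysis can be reproduced outside SPSS; the sketch below uses SciPy on synthetic score vectors (the real questionnaire data are not available here), so the numbers it prints are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# synthetic WTC scores for 16 learners per group (placeholders, not study data)
debate_pre = rng.normal(60, 10, 16)
debate_post = debate_pre + rng.normal(6, 4, 16)
pi_pre = rng.normal(60, 10, 16)
pi_post = pi_pre + rng.normal(3, 4, 16)

# within-group effect of each treatment (paired samples)
t_d, p_d = stats.ttest_rel(debate_post, debate_pre)
t_p, p_p = stats.ttest_rel(pi_post, pi_pre)
# between-group comparison on the post-test (independent samples)
t_i, p_i = stats.ttest_ind(debate_post, pi_post)
print(f"Debate: t={t_d:.2f}, p={p_d:.3f}; PI: t={t_p:.2f}, p={p_p:.3f}; "
      f"between groups: t={t_i:.2f}, p={p_i:.3f}")
```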
Results on Willingness to Communicate (WTC)
(i) Pre-test Results for the Two Groups
There were two equal-sized groups in this study, the Debate group (A) and the Philosophical Inquiry discussion group (B). An independent t-test was used to determine whether there was any significant difference between the two groups at the start of the study. The result (Table 1) shows no significant difference between the two groups on WTC before the treatments.
(ii) Pretest/Posttest Results for the Two Groups
To answer the first question (Do PI classroom discussion and Debate instructional methods make any difference in ESL learners' WTC scores?), a paired t-test was run to test the following hypothesis: H0: There is no statistically significant difference in means between the pre- and post-tests for each treatment. For Debate, the result (Table 2) shows a statistically significant mean difference for WTC (t = 6.76, p < 0.05). Therefore, there is adequate evidence to reject the null hypothesis (H0), as there is a statistically significant difference in means between the pre- and the post-test. Similarly, for the PI discussion, the result (Table 3) shows a statistically significant mean difference for WTC. Hence, there is also adequate evidence to reject the null hypothesis for the PI discussion group. Thus, it can be concluded that there is a significant mean difference between the pre- and post-test WTC scores in both the Debate group and the PI discussion group.
(iii) Post-test Results for the Two Groups
This section aims to answer the second question: Is there any significant difference in mean scores between the Debate and PI discussion groups? An independent t-test was employed to test the following hypothesis: H0: There is no significant difference in mean scores between the Debate and PI discussion groups for WTC. The result of the t-test (Table 4) reveals a statistically significant difference in mean scores between the PI discussion and Debate groups for WTC. This indicates that the Debate participants recorded a significant improvement in WTC compared to the PI discussion group.
Results on Self-Perceived Communicative Competence (CC)
This section aims to answer the third research question: Which approach promotes Self-Perceived Communicative Competence (CC) more: PI classroom discussion or Debate?
A pre- and post-test for Communicative Competence was undertaken for the two groups. In order to find which of the two methods promotes Communicative Competence more, the learners' scores on the pre- and post-tests for each group were compared; the results are shown in Table 5. From the table, it can be seen that the participants in the Debate program showed a higher level of Communicative Competence: an increment of 4.39, against an increment of 2.46 in the PI discussion group. Thus, those in Debate showed more improvement in Self-Perceived Communicative Competence.
Results on Communicative Apprehension (CA)
This section aims to answer the last research question: Which approach increases Communicative Apprehension (CA) more: PI classroom discussion or Debate?
Discussion
The results of the study show that both instructional methods improved students' Willingness to Communicate and Self-Communicative Competence. In comparison, Debate was more effective in promoting both. In terms of Communicative Apprehension, participants in both programs experienced apprehension as well; however, Debate triggered apprehensiveness more.
Self-Communicative Competence is the feeling of having confidence to communicate effectively in a particular situation (MacIntyre et al., 1998), which can arise from previous successful communication encounters and from having the knowledge and skills to keep communication going (Weaver, 2010). This might explain why Debate resulted in more improvement in Self-Communicative Competence and Willingness to Communicate. In Debate, everyone had a role as a speaker, and it was mandatory for each speaker to deliver an argument. Thus, participants had more opportunity to practice speaking and consequently improved their communicative skills. As stated by Liu and Littlewood (1997), with more practice in speaking and more engagement in it, students improve their language, communication skills and confidence. Thus, with more opportunities, they become more confident and more willing to communicate. In the PI discussion group, however, interactions were among the participants, the moderator and the instructor, each of whom played an important role in driving the discussion. Moreover, in the PI classroom it was not compulsory for all participants to give their point of view during the discussion. Since not everyone in PI participated in the discussion, the general levels of Willingness to Communicate and Self-Communicative Competence were lower in the PI discussion group than in Debate.
The level of apprehensiveness was higher in Debate than in the PI discussion. There are "many of the skills [that] take a year to build, as do the relationships that support rich conversations" (Zwiers & Crawford, 2011, p. 29). In the PI discussion it is not mandatory for participants to engage, so students who lack these skills can abstain; such a choice is not available in Debate, where it is compulsory for all speakers to deliver their arguments. In Debate, participants may often feel anxious, as they have to prepare and deliver their speeches one after another, and each speaker has a specific role: to define the key words, to give a point, to rebut, or to deliver a summary of the arguments. Furthermore, they have to be mentally prepared to answer impromptu questions posed by members of the opposing team.
Table 1. Independent samples t-test for the performance of both groups on the pre-test for WTC.
Table 2. Paired samples t-test for Debate (Group A) on WTC.
Table 4. Independent samples t-test for the performance of groups A and B on the post-test.
Table 5. Comparison between mean scores of Communicative Competence of both groups.
Table 6. Comparison between mean scores of Communicative Apprehension (CA) of both groups. The means for both groups, Debate and PI discussion, as presented in Table 6, show that the mean difference in Communicative Apprehension is slightly higher in the Debate group, 1.96, compared to 1.82 in the PI discussion group. Thus, students in Debate experienced more apprehensiveness than the PI discussion group.
"Education",
"Linguistics"
] |
Human Machine Interface Design for a 3 DoF Robot Manipulator
This paper presents a Human Machine Interface (HMI) design to control a 3 DoF robot manipulator. The manipulator has two parallelograms that keep the moving platform always parallel to the ground. We use the inverse kinematic analysis of the robot manipulator to control the end point location, and the inverse kinematic results are verified using the design parameters and the end effector location. According to our algorithm, the user defines the end point location in the HMI, the program solves the inverse kinematics of the robot manipulator, and the angles are sent to an Arduino microcontroller to set the positions of the servo motors. Using this HMI, the user can pick and place objects in real time and can also command linear, circular and rectangular paths.
Introduction
The world's industry has developed rapidly since the beginning of the first industrial revolution. This evolution continues, and the fourth industrial revolution has now begun. In this latest revolution, automation lines must be designed to manufacture flexible products specified by customer needs. In planning automated production lines, engineers need various designs of robot manipulators for picking and placing, painting, welding, assembling and similar tasks. The robot manipulator is therefore the most important part of a flexible automated line. Depending on the task of the robot manipulator, the motion of the end effector can be spatial or planar.
Huang et al. [1] designed and synthesized a two DoF parallel robot manipulator for pick-and-place operations. This parallel manipulator consists of two parallelograms that keep the moving platform always parallel to the ground link.
A five-bar two DoF parallel manipulator was investigated for performance evaluation by Feng et al. [2]. The performance evaluation uses global conditioning and velocity indices.
Another two DoF parallel manipulator structure, using two sliding joints, was researched by Xin-Jun Liu et al. [3]. These sliding joints are vertically aligned with the moving platform of the manipulator.
The Diamond robot, a kind of two DoF parallel robot manipulator, is presented by Huang et al. [4]. In this study, the moving platform path generation is planned considering joint torque and velocity constraints.
Well-shaped workspace optimization of a two DoF parallel robot manipulator was carried out by designing the link parameters in the study of Huang et al. [5]. An optimal manipulator configuration in terms of force-transmission behavior and isotropy is the first step of their approach; the second step determines the independent kinematic parameters by optimizing a global index.
Dynamic analysis of a two DoF parallel manipulator is used to investigate clearances in the manipulator joints in the investigation of Xu Li-xin and Li Yong-gang [6]. Trajectories of the end effector for both ideal and clearance joints are computed and plotted to observe the difference in the end effector trajectory.
A new measuring mechanism was designed and tested for zero-offset calibration of a similar 2-DoF parallel robot manipulator by Jiangping et al. [7]. The results of the calibration mechanism are illustrated and discussed in plots.
In this study, we developed an HMI for a 3 DoF robot manipulator whose 2 DoF kinematic structure was previously investigated in [1,4,5,6,7]. All links of our manipulator mechanism are manufactured using a 3D printer. Our purpose is to control the end effector of the robot manipulator from a graphical user interface by giving positions. To do this, the inverse kinematics of the robot manipulator is solved analytically and verified using specific end effector positions. Using the inverse kinematics, the interface redraws the manipulator links for each end effector position, and the robot manipulator moves simultaneously. The user can select different paths, such as linear, circular and rectangular paths, by clicking buttons in the graphical user interface. Using our interface, one can determine the path of the end effector and easily perform pick-and-place operations.
Robot Manipulator Prototype Structure and HMI Design
Our design of the robot manipulator is shown in Figure 1. The manipulator moves linearly along the z axis using a stepper motor connected to a trapezoidal lead screw linear mechanical element. The other two translational movements, in the x-y plane, are obtained by the closed kinematic chain depicted in Figure 2. The links of this two degree of freedom closed mechanism are manufactured with a 3D printer.
Inverse Kinematics of the Manipulator
The inverse kinematics of the manipulator must be solved to control the motion of the end effector. Our algorithm solves the inverse kinematics in Visual C# and then sends the data from the HMI to an Arduino that controls the two servo motors of the manipulator.
Writing Equation (1) in Euler (complex exponential) notation yields the vector loop-closure equation (2). Rearranging (2) gives (3), and taking the complex conjugate gives (4). Multiplying (3) and (4) eliminates the passive angle θ3 and yields a scalar equation of the form

e1 sin θ1 + e2 cos θ1 + e3 = 0,   (5)

where e1, e2 and e3 depend on the link lengths and the end effector position. Using the half-tangent relations sin θ1 = 2t/(1+t²) and cos θ1 = (1−t²)/(1+t²) with t = tan(θ1/2), equation (5) becomes the quadratic polynomial

(e3 − e2) t² + 2 e1 t + (e2 + e3) = 0.   (6)

The solution of this polynomial equation is described in (7):

t = (−e1 ± √(e1² + e2² − e3²)) / (e3 − e2).   (7)

Two real or two imaginary results can be obtained; imaginary results are unusable. Lastly, the first actuation angle is calculated as

θ1 = 2 arctan(t).   (8)

On the other side of the closed kinematic chain, the second actuator position is solved by a similar procedure with similar equations; the corresponding equation is given in (9).
The second actuator angle of the robot manipulator is calculated in (13).
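To make the half-tangent step concrete, a minimal C# sketch is given below. It assumes the loop-closure equation has already been reduced to the standard form A cos θ + B sin θ + C = 0, where the coefficients A, B and C collect the link lengths and the end effector position Px, Py according to equations (1)-(5); their exact expressions depend on the manipulator geometry, so they are left as inputs here. The class and method names are illustrative, not taken from the original program.

using System;

static class FiveBarInverseKinematics
{
    // Solves A*cos(theta) + B*sin(theta) + C = 0 for theta via the
    // half-tangent substitution t = tan(theta/2), which turns the
    // equation into the quadratic (C - A)*t^2 + 2*B*t + (A + C) = 0.
    // Returns the real solutions in radians; an empty array means
    // the discriminant is negative (a physically unreachable pose).
    static double[] SolveHalfTangent(double A, double B, double C)
    {
        double a = C - A, b = 2.0 * B, c = A + C;
        double discr = b * b - 4.0 * a * c;    // "discr1"/"discr2" of the flowchart
        if (discr < 0.0) return new double[0]; // imaginary roots are unusable
        if (Math.Abs(a) < 1e-12)               // degenerate case: linear in t
            return new[] { 2.0 * Math.Atan(-c / b) };
        double sqrtD = Math.Sqrt(discr);
        return new[]
        {
            2.0 * Math.Atan((-b + sqrtD) / (2.0 * a)),
            2.0 * Math.Atan((-b - sqrtD) / (2.0 * a))
        };
    }
}

Each of the two actuation angles is obtained from one such call (with its own A, B, C), and the two real roots of each quadratic correspond to the alternative assembly modes of the closed chain.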
The kinematic analysis yields four solution modes. However, for our robot manipulator we always use the mode depicted in Figures 2 and 3.
Verification of the Inverse Kinematics
Before programming the graphical user interface, the kinematic analysis needs to be verified to be sure that it is correct and usable. We assumed given values for the link lengths of the manipulator. The position of the end effector is taken to be Px = 100 mm and Py = 150 mm. Substituting these values into the kinematic equations gives the corresponding actuation angles, confirming the analysis.
HMI Design for the Manipulator
We designed our HMI (human machine interface) in the Visual C#.NET environment. Before using the program, the user must select the COM port and baud rate of the Arduino microcontroller from the available ports listed in a ComboBox. Then, the Connect Arduino button must be clicked to start communication between the HMI and the Arduino. If the communication is successfully established, the progress bar turns fully green; if it fails, the program shows a message asking the user to connect the Arduino correctly. The user can then send commands from the interface to the Arduino. Using the inverse kinematics of the previous section, the program calculates the actuation angles and draws the manipulator on the form. The user changes the end effector location with the mouse pointer, and the manipulator moves simultaneously according to this position change through the inverse kinematics. Our interface design is shown in Figure 5. The main steps of the control flowchart are the following. Calculate: discr1 and discr2 are calculated according to the inverse kinematic analysis of the robot.
Are the roots of discr1 and discr2 real?: The program must decide whether the results are physically realizable. Non-real results cannot be used in the real environment and are therefore discarded.
Prepare and send data packages to Arduino: When the two angles are calculated, they are packaged for sending. The data package includes plus and minus symbols, which show the direction of rotation, and two-digit numbers, which indicate the magnitude of rotation, as shown in the sketch after the flowchart description below.
Figure 6 shows the flowchart of the inverse-kinematics-based control procedure implemented in Visual C#, and the working procedure of the Arduino code is depicted in Figure 7. According to these procedures, a data package is generated in the Visual C# code and then evaluated in the Arduino code. The data package is a string containing 7 characters. The Arduino code divides this string into two parts for the servo motor angles and then converts the string variables into integers.
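As an illustration of the packaging step, a minimal C# sketch follows. The exact byte layout of the 7-character package is not spelled out in the text, so the layout used here (a sign and a two-digit magnitude for each servo angle, followed by a terminating newline) is an assumption consistent with the description above; the helper names are hypothetical.

using System;

static class CommandPacker
{
    // Builds a 7-character command string such as "+45-07\n":
    // sign + two-digit magnitude for servo 1, the same for servo 2,
    // and a terminator the Arduino code can use to detect packet end.
    static string PackAngles(int angle1Deg, int angle2Deg)
    {
        string Part(int a) => (a < 0 ? "-" : "+") + Math.Abs(a).ToString("00");
        return Part(angle1Deg) + Part(angle2Deg) + "\n"; // 7 characters in total
    }
}

On the Arduino side, the received string is split after the third character, and each half (sign and magnitude) is converted to an integer servo command, matching the procedure in Fig. 7.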
Three DoF Robot Manipulator Setup and Test Results
The setup of our 3 DoF robot manipulator is shown in Figure 8. An electromagnet end effector is connected to the moving platform of the manipulator to pick a metal object and place it at a position in the workspace of the manipulator. The electromagnet is controlled by a button to grasp or release a metal object. Two Arduino microcontrollers are used to command the servo and stepper motors simultaneously: an Arduino Micro is chosen to control the stepper motor, whereas an Arduino Uno is used to control the servo motors and the electromagnet. The link lengths of the manipulator are designed to be L1 = L2 = 70 mm and s = 20 mm. First, we tested our robot manipulator using the HMI for circular and linear paths. The test results are depicted in Figure 9. Figure 9 (a), (b) and (c) shows the linear path drawn by the robot manipulator setup, whereas Figure 9 (a1), (b1) and (c1) illustrates the linear path generated by the HMI program. As seen from these figures, the manipulator end effector moves very close to the linear path given from the HMI. The second test path is circular. Similarly, the manipulator movements are shown in Figure 9 (d), (e) and (f), and the simulation of the movement is shown in (d1), (e1) and (f1). Consequently, our manipulator end effector can follow a circular path.
Another test for our manipulator is to pick and place a plastic solid object fitted with a metal bolt for grasping. The electromagnet was activated using the button. A nearly rectangular path was defined by the user. The starting position of the solid object is seen in Figure 10 (a). The object was moved up vertically, as illustrated in Figure 10 (b). Then, it was translated horizontally, as shown in Figure 10 (c). Finally, the object was moved down vertically to the ground in Figure 10 (d).
All four positions and the path are depicted on the HMI in Figure 10 (f).
Conclusions
A 3 DoF robot manipulator is designed and manufactured in this study. The inverse kinematic analysis is solved analytically and verified using some design parameters. A human machine interface program is developed based on the inverse kinematic analysis. The cost of our manipulator is low thanks to the 3D printing prototyping method used in this project (the total cost is nearly $120). Therefore, the manipulator can be manufactured by engineering students in order to learn to program similar types of manipulators.
In the future, the manipulator will be manufactured from metal as an industrial pick-and-place robot. Of course, the cost will increase due to the industrial components. However, our procedure can be applied to industrial tasks using the robot manipulator, and this industrial robot will be programmed using the same control idea. Training and learning algorithms will be added to the working HMI procedure, so that the user will be able to plan and manage the motion of the manipulator for a desired industrial task. The accuracy and precision of the manipulator will be measured in its workspace.
Supplements
A video of the manipulator in operation can be watched at the following link: https://www.youtube.com/watch?v=hVkBJ0UeEIA
"Engineering",
"Computer Science"
] |
A discerning gravitational property for gravitational equation in higher dimensions
It is well known that Einstein gravity is kinematic (there is no non-trivial vacuum solution; i.e. Riemann vanishes whenever Ricci does so) in $3$ dimensions because Riemann is entirely given in terms of Ricci. Could this property be universalized for all odd dimensions in a generalized theory? The answer is yes, and this property uniquely singles out pure Lovelock gravity (whose action has only one $N$th order term) for which the $N$th order Lovelock Riemann tensor is indeed given in terms of the corresponding Ricci in all odd $d=2N+1$ dimensions. This feature of gravity is realized only in higher dimensions and uniquely picks out pure Lovelock gravity from all other generalizations of Einstein gravity. It serves as a good discerning and guiding criterion for the gravitational equation in higher dimensions.
Introduction
The absence of all forces is characterized by a maximally symmetric spacetime of constant (homogeneous) curvature, and Einstein gravity (GR) naturally arises when spacetime turns inhomogeneous [1]. It is the Riemann curvature that should determine the dynamics of the force responsible for this inhomogeneity. The Riemann curvature satisfies the Bianchi differential identity, a purely differential geometric property, and its trace yields a second rank symmetric tensor with vanishing divergence, the Einstein tensor, giving the second order differential operator for the equation of motion. This is how we are uniquely led to the Einstein gravitational equation on identifying the cause of inhomogeneity as the matter-energy distribution, a universal physical property of all that physically exists [1]. Thus gravitational dynamics is entirely determined by spacetime curvature and resides in it.
It is very illuminating that, without asking for an equation for gravity, GR simply follows from the geometric properties of the Riemann curvature. Similarly, does geometry also determine spacetime dimension? The second order differential operator in the equation is given by the Einstein tensor, which is non-trivial only in dimensions d > 2. Next, the equation should admit a non-trivial vacuum solution for free propagation, which requires d > 3. In d = 3, the Riemann curvature is entirely given in terms of the Ricci tensor; i.e. it vanishes whenever Ricci vanishes. That is how gravity is kinematic in d = 3: vacuum is flat, and the absence of a non-trivial vacuum solution signifies the absence of free degrees of freedom for propagation of the field. This is how we come to the usual four dimensional spacetime that admits a non-trivial vacuum solution. The Einstein equation would, however, be valid in all higher dimensions as well.
We could, however, ask the question: could the kinematic property of GR in odd d = 3 dimensions be universalized to all odd dimensions? Naturally this requires a generalization of GR, because GR can be kinematic only in three dimensions and none else. This gravitational property may nevertheless serve as a good guiding principle for a gravitational equation in higher dimensions. That is what we shall probe, and we shall establish that it uniquely singles out pure Lovelock gravity, for which the action consists of only one Nth order term of the Lovelock polynomial Lagrangian. Lovelock is the most natural generalization of GR because it is the only one that remarkably retains the second order character of the equation despite the action being a homogeneous polynomial in Riemann. This is quintessentially a higher dimensional generalization of GR.
In this essay we shall proceed as follows. First, we shall establish that gravity is indeed kinematic [2,3] in all odd d = 2N + 1 dimensions in pure Lovelock gravity, relative to properly defined Lovelock analogues of the Riemann and Ricci tensors [4,5,3]. Since this generalization is effective only in higher dimensions, it is pertinent to ask what it is that demands higher dimension(s), and why they are small, so that they are not accessible to present day observations. That is what we shall probe next by appealing to a general principle and some gravitational properties. Next we shall argue that pure Lovelock is thus the proper gravitational equation [6] in higher dimensions, and it is the only one that obeys kinematicity of gravity in all odd d = 2N + 1 dimensions. It is valid only for two dimensions at a time, odd d = 2N + 1 and even d = 2N + 2, and it includes GR at the linear order N = 1. We shall wind up the discourse with a discussion.
Kinematic property
Dadhich [4] defined an appropriate Lovelock analogue of Riemann, a homogeneous polynomial in Riemann, with the property that the trace of its Bianchi derivative vanishes. This gave rise to a corresponding analogue of the Einstein tensor, which is the same as the one obtained by varying the corresponding Lovelock action. Using this Lovelock Riemann generalization, it was first shown that static vacuum spacetime is kinematic in all odd d = 2N + 1 dimensions; i.e. vanishing of Ricci implies vanishing of Riemann [2]. Right on the heels of this discovery came yet another, parallel definition of the Lovelock Riemann tensor by Kastor [5], which involves a (2N, 2N)-rank tensor, completely antisymmetric both in its upper and lower indices. Though the two Lovelock Riemann analogues are not completely equivalent, interestingly they both yield the same Einstein tensor, and hence the same equation of motion. The difference between them came to the fore recently while studying vacuum solutions for the Kasner metric [7], where the kinematic property held good for the Kastor Lovelock Riemann but not for the Dadhich one. Interestingly, for static spacetime the difference between the two vanishes, and that is why it was not noticed earlier.
In view of its general validity, we shall employ the Kastor Lovelock Riemann tensor for establishing the kinematic property in all odd d = 2N + 1 dimensions. This (2N, 2N)-rank tensor is defined as the product of N Riemann tensors, completely antisymmetric in both its upper and lower indices [5]. With all indices lowered, it is also symmetric under exchange of the two groups of indices. The Lovelock Lagrangian is built from this tensor and gives rise to the corresponding Einstein tensor. It is purely an algebraic property that for d = 2N + 1 the above-defined 4N-rank Lovelock Riemann tensor can be written entirely in terms of its contraction, the Ricci, and thereby the Einstein tensor [3]. This clearly establishes the kinematic property that the Lovelock Riemann vanishes in all odd d = 2N + 1 dimensions whenever the corresponding Einstein (Ricci) tensor vanishes. It may however be mentioned that though vacuum spacetime in odd dimensions would be Lovelock Riemann flat, it would not in general be Riemann flat [2]. Another way of characterizing the kinematic property is that the corresponding Weyl curvature vanishes in all odd d = 2N + 1 dimensions.
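For orientation, the Nth order Lovelock Lagrangian can be written compactly with the totally antisymmetric generalized Kronecker delta; the normalization below is one common convention in the Lovelock literature and is given only as a sketch (the precise conventions of [5,3] may differ by a constant factor):

$$\mathcal{L}_N = \frac{1}{2^N}\,\delta^{\mu_1 \nu_1 \cdots \mu_N \nu_N}_{\alpha_1 \beta_1 \cdots \alpha_N \beta_N}\, R^{\alpha_1 \beta_1}{}_{\mu_1 \nu_1} \cdots R^{\alpha_N \beta_N}{}_{\mu_N \nu_N},$$

which reduces to the Einstein-Hilbert Lagrangian $\mathcal{L}_1 = R$ for $N = 1$; the corresponding Nth order Einstein tensor follows from its variation.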
Why (small) higher dimensions?
It is the symmetries of field theory, for a consistent theory of fundamental particles and their interactions, that lead naturally to higher dimensions; this paradigm is popularly known as string theory. It is all driven by field theoretic considerations without any direct reference to gravity. Here I would instead like to confine myself to gravitation and ask: are there any gravitational features that have so far remained unaddressed, and does their inclusion ask for higher dimensions?
One such possible feature is the probing of gravity in the high energy regime [1,8]. For addressing the high energy effects of any theory, we generally include higher powers of the basic field entity, which in the present case is the Riemann tensor. In Einstein gravity, Riemann occurs linearly in the action; for high energy considerations we should therefore include higher powers of Riemann in the action. At the same time we demand that the basic character of the equation should not change; i.e. it should continue to be a second order differential equation, which is also required for warding off undesirable features like the occurrence of ghosts. This uniquely singles out the Lovelock action, which alone has the remarkable property that the equation remains second order despite the action being a homogeneous polynomial in the Riemann tensor. But the higher order Riemann terms in the Lovelock action make a non-zero contribution to the equation only in dimensions > 4. That is why higher dimensions are required to probe the high energy effects of gravity [8].
Second, we appeal to the general principle that the total charge of a classical field must be zero when summed over all charges in the universe. This is obviously true for the electric field, because charge is created from a neutral entity like an atom by kicking one polarity out, and what remains behind has equal and opposite charge. When all charges are summed over, they add to zero. This should also happen for gravity. The charge for gravity is the matter-energy distribution, which has only one polarity, always positive by convention. How could this be balanced, as there exists no matter-energy of opposite polarity? Yet balance it must, to obey the general principle of the total charge being zero. The only way out is that the gravitational field it produces must itself have charge of the opposite polarity; i.e. gravitational field energy must be negative. This is why gravity can only be attractive, and it is dictated by this general principle. Though the negative gravitational charge is non-localizable, being spread over the whole of space, if we integrate it over the entire space it completely balances the positive charge of the matter distribution. This is exactly what has been rigorously shown in the famous ADM paper [9]. Now consider a 3-ball of some finite radius around a mass point. Since the total charge on the ball is non-zero, because the negative charge in the field lying outside the ball has been cut off, the field must propagate off the ball into a higher dimension. However, as it propagates, its past lightcone includes the region, and hence the negative charge of the field, lying outside. Thus propagation in the extra dimension is not free but comes with diminishing field strength (equivalently, it could be viewed as massive propagation). Thus gravity propagates in a higher dimension but not deep enough [8]. This is precisely what the Randall-Sundrum braneworld gravity [10] envisages, where the usual massless free propagation remains confined to the brane while off-brane propagation is massive. The braneworld model is string theory inspired, while ours is a purely classical consideration based on a very general principle.
These two are purely classical arguments that appeal to general principles and considerations for a classical field. They clearly point to the fact that gravity cannot remain confined entirely to a particular dimension. This is because gravitational dynamics resides in spacetime curvature and hence cannot be constrained by any external prescription. Gravity is thus entirely self driven. This is the critical property that distinguishes gravity from all other forces, and it is why only gravity propagates in higher dimensions while all other matter fields are believed to remain confined to the usual four dimensions [11]. The existence or realization of a spacetime dimension is probed by observing the propagation of a field in it. Since only gravity can propagate in a higher dimension, and not deep enough, the higher dimension(s) cannot be large. The application of the general principle of total charge being zero not only asks for a higher dimension but also prescribes it to be small. This is a remarkable conclusion following from a very general principle.
Gravitational equation in higher dimensions
Since gravity cannot remain entirely confined to a given dimension, the consideration of higher dimensions becomes pertinent, and then arises the question: what should be the equation of motion in higher dimensions? Note that we are not seeking an effective equation that takes into account some semi-classical corrections; we are asking for a classical equation in higher dimensions. For that, the first and foremost requirement is that it should be of second order, which uniquely picks out the Lovelock polynomial action, in which each term comes with a dimensionful coupling constant. Also note that the Lovelock Lagrangian is the most general invariant that can be constructed from the Riemann tensor giving a second order equation of motion.
On the other hand, one could carry on with the Einstein equation itself, which is valid in all dimensions d ≥ 3. This would, however, not be the most general equation in dimensions > 4, while the Lovelock polynomial action gives the most general equation for all d ≥ 2N + 1, and it includes GR for N = 1. The problem with the Lovelock equation is that it has a dimensionful coupling for each N, and there is no way to determine more than one coupling by measuring the strength of the field, which is only one. Thus there is arbitrariness in fixing the couplings.
There is one way out: by invoking some property of gravity, we may justify that the Lovelock polynomial should involve only one Nth order term. That is what we have called pure Lovelock. And that property is indeed the universalization of the kinematic property; i.e. that gravity be kinematic (non-existence of a non-trivial vacuum spacetime) in all odd d = 2N + 1 dimensions. Thus the kinematic property uniquely picks out pure Lovelock gravity. This equation is valid only for two (odd and even, d = 2N + 1, 2N + 2) dimensions, because otherwise the kinematic property would be violated.
Note that the pure Lovelock equation [6] has several interesting and desirable features. For instance, even though the equation is completely free of the Einstein term, the static vacuum solution with Λ asymptotically goes over to the Einstein-dS solution in the given dimension [12]. It is quite remarkable that the pure Lovelock solution includes Einstein gravity asymptotically even though the equation is completely free of it. Similarly, bound orbits around a static black hole exist in pure Lovelock gravity in all even d = 2N + 2 dimensions, whereas for Einstein gravity they exist only in 4 dimensions and none else [13]. Also, the thermodynamical parameters, temperature and entropy, bear universal relations with the horizon radius in all odd and even d = 2N + 1, 2N + 2 dimensions; in particular, entropy always goes as the square of the horizon radius in all even dimensions [14].
Thus the pure Lovelock equation has all the features one would ask of a gravitational equation. The newly recognised kinematic property is clearly its distinguishing feature. It is thus the right gravitational equation [6] in higher dimensions d = 2N + 1, 2N + 2. That is, for each N the equation holds only for the corresponding pair of odd and even dimensions: for N = 1, the Einstein equation holds only for d = 3, 4; for N = 2, the pure GB equation only for d = 5, 6; and so on. The Einstein equation is therefore good only for three and four dimensions, and in higher dimensions we should go over to the next order of N. Thus pure Lovelock gravity is a new paradigm for higher dimensions, and it is the kinematic property that has played the key discerning role.
Discussion
By appealing to the universalization of the kinematic property (that there exists no non-trivial vacuum solution in odd dimensions) for all odd dimensions, we have arrived at a unique gravitational equation, which is pure Lovelock involving only one Nth order term. Thus the kinematic property plays the key role as a discerning criterion as well as a guiding principle for gravitational dynamics in higher dimensions. We have simply universalized an already existing property of Einstein gravity to get to the proper equation in higher dimensions. A good and enlightening generalization of a theory always stems from extending some key property beyond the normal premise of the theory, and then the existing theory is automatically included in the new one. Pure Lovelock gravity, which uniquely incorporates the kinematic property in all odd d = 2N + 1 dimensions, includes Einstein gravity for N = 1.
Again, by appealing to another general principle, that of the total charge of a classical field being zero, we have argued that gravity cannot remain confined to a given dimension but instead propagates off into higher dimension(s), though not deep enough. It is remarkable that this very general consideration has not only asked for higher dimension(s) but has also prescribed that they be small.
The principal aim of the essay is to demonstrate the key discerning role the kinematic property plays in picking out the right equation for gravity in higher dimensions. Having done that, let us ask what more it entails.
There is the famous BTZ black hole solution [15] in 3 dimensions, which is a Λ-vacuum solution. Note that it is the presence of Λ that makes the spacetime non-flat. For Einstein gravity, it can therefore occur only in 3 dimensions and none else. Since pure Lovelock gravity is kinematic in all odd dimensions, analogues of the BTZ black hole exist in all odd d = 2N + 1 dimensions [2].
All this is very fine; however, the key question remains: how does the higher dimensional equation impact the 4-spacetime we live in? The braneworld model [10], which envisages propagation of gravity in a higher dimension but not deep enough, predicts a $1/r^3$ correction to the Newtonian potential on the brane corresponding to an AdS bulk. The situation should be similar in what we are proposing, except that we would rather employ the pure Gauss-Bonnet equation in the bulk instead of Einstein. This may not, however, be very relevant so far as an AdS bulk (which is a solution of the pure GB equation as well) is concerned. The situation would be different if we consider a pure GB-BTZ black hole, so that the Weyl curvature in the 5 dimensional bulk is non-zero; it will project down on the brane as tracefree black radiation in the equation. Then we would have a black hole on the brane given by the Reissner-Nordstrom metric obtained by Dadhich et al. [16], where $Q^2$ is not the Maxwell charge but the Weyl charge, appearing in the metric as $-Q^2/r^2$. Though very insightful, there exists no complete solution of the bulk-brane system.
It is indeed very remarkable and insightful that the universalization of a certain gravitational property uniquely picks out an equation in higher dimensions. In the same vein, let us further ask: is there any other similar instance of insightful deduction? The one that comes to mind is how vacuum energy should gravitate [1,17]. It was argued that vacuum energy is on the same footing as gravitational field energy. Both are created by matter and hence have no independent existence of their own, and therefore they should not gravitate through a stress tensor in the equation, notwithstanding whether a stress tensor could be written or not. Clearly we write no stress tensor for gravitational field energy on the right hand side; in fact it gravitates in a much subtler manner, by enlarging the spacetime framework, by curving 3-space [18]. That is why Newton's inverse square law remains intact in GR. Something similar should happen for vacuum energy.
It is therefore a matter of principle that vacuum energy should not gravitate through a stress tensor but instead by an enlargement of the framework. How this happens we will not know until we have a quantum theory of gravity. Be that as it may, Λ becomes free of the Planck length and hence, as a true constant of spacetime structure [1], it could have any value, as determined by the acceleration of the Universe [19]. Thus it gets liberated and has nothing to do with vacuum energy [1,17]. At the conceptual level this is a very important realization.
The discovery of GR was solely driven by principle and concept, and hence in its centennial year the present exercise is a fitting tribute to that spirit of doing science and to its great creator.
"Mathematics"
] |
C-ERB-2, P53, Ki67 Proteins and Receptors of Estrogen and Progesterone on the Prognosis of Epithelial Ovarian Cancer Current Opinion in Gynecology and Obstetrics
Ovarian cancer is the seventh most common cancer diagnosed in women worldwide. To date, many studies in epithelial ovarian cancer (EOC) have reported on the association of HER-2/neu, p53 proteins and steroid hormones and their respective receptors with prognosis and/or the carcinogenesis process, but no definitive conclusion has been reached. Objectives: To assess the proteins c-erbB-2, p53, Ki67 and the receptors of estrogen (ER) and progesterone (PR) in EOC with regard to clinical stage and their effect on survival. Methods: 125 patients with a diagnosis of EOC treated by primary surgery and chemotherapy participated. The surgical stage was recorded, and its correlation with c-erbB-2, p53, Ki67, ER and PR was analyzed. Immunohistochemical analysis used the anti-c-erbB-2, p53 and Ki67 monoclonal antibodies, the PR antibody (clone PgR) and the anti-human estrogen antibody (code ER-6-F11). The c-erbB-2 study was complemented by genetic amplification, and univariate and multivariate analyses were reported. Results: Age 55.7 ± 16; 50.2% with residual disease (< 2 cm); initial (54.6%) and advanced (45.4%) stage. Univariate analysis showed positive staining for c-erbB-2, p53, Ki67, PR and ER. Patients with negative receptors had a significantly shortened survival time (p = 0.01) compared with patients with positive receptors. Multivariable analysis revealed only clinical FIGO stage as an independent prognostic factor for overall survival (p = 0.002). Other variables, such as c-erbB-2, p53, Ki67 and ER, were not significantly related to survival. Conclusions: We conclude that patients with negative PR had a significantly shortened survival time compared with patients with positive receptors. Overexpression of the markers c-erbB-2, p53, Ki67 and ER was not significantly related to survival in EOC. Only the FIGO stage proved to be an independent predictor of overall survival. These markers should be evaluated together with the patient's clinical status and other prognostic factors.
Introduction
Ovarian cancer is the seventh most common cancer diagnosed in women worldwide [1]. Approximately one-fourth of all gynecologic malignancies are of ovarian origin, and 47% of all gynecologic cancer-related deaths are due to ovarian cancer. Ovarian cancer carries the highest mortality among all gynecological malignancies. The high mortality is due mostly to the fact that the tumor is frequently diagnosed late, in advanced stages (III, or even IV), because the early stages are often asymptomatic and no effective screening methods are available [2]. The average 5-year survival of patients across all stages of the disease is only 40%, and in patients with advanced disease only 10-20% [3]. Some studies reported a decreasing incidence in women, attributed to the use of the oral contraceptive pill [4,5].
There seems to have been no significant decrease in the incidence of or mortality from ovarian cancer since the early 1980s, although imaging exams have increasingly permitted diagnosis of the disease in early stages. In the absence of preventable etiologic factors or effective tools for screening, the only possible means of improving survival currently lies with the optimal management of patients after the initial diagnosis. The prognosis of epithelial ovarian cancer can be correlated with biological (age), social (performance status) and clinical factors (tumor stage, histological grade, histological type, presence or absence of ascites, size and number of residual lesions after primary cytoreduction surgery, and chemotherapy) [6][7][8]. Identification of new prognostic factors might be useful in directing therapy and intensifying the follow-up of selected groups of patients. A variety of prognostic factors have been reported, but their independent prognostic significance remains unclear. Immunohistochemistry has been widely used in the search for biomarkers.
The oncogene c-erbB-2 (HER-2/neu; Neu) encodes a transmembrane glycoprotein that is a member of the class I receptor tyrosine kinase family, which includes the epidermal growth factor receptor, HER-2/neu, HER-3 and HER-4 [9]. The proto-oncogene HER-2/neu is located on chromosome 17 and is activated not by a point mutation but through amplification and overexpression of the wild-type gene. Amplification of the HER-2/neu oncogene may be observed in 20-30% of cases across a wide spectrum of neoplastic disorders (e.g., breast and lung carcinomas, among others), and HER-2/neu overexpression has been associated with a poor prognosis in patients with cancers arising from other primary sites, but studies of ovarian cancer have produced conflicting results [10][11][12]. Patients with breast carcinomas with amplified or overexpressed HER-2/neu can benefit from anthracycline-based regimens as well as from Trastuzumab (Herceptin), a recombinant humanized monoclonal antibody against the HER-2/neu protein [13][14][15]. HER-2/neu is generally assessed as protein overexpression by immunohistochemistry (IHC), and patients whose tumors score 2+ or 3+ with this method become good candidates for treatment with Trastuzumab.
However, studies indicate that HER-2/neu determined as gene amplification provides better prognostic information and is associated with a better response to Trastuzumab. HER-2/neu gene amplification is primarily detected by in situ hybridization using fluorescence (FISH) to detect the signals [16]. This method is expensive; it requires a fluorescence microscope, appropriate filters and a sophisticated camera, so it is not practical as a screening tool. Chromogenic in situ hybridization (CISH) is a recently introduced method: although it makes use of the in situ hybridization technology of FISH, it also takes advantage of the chromogenic signal detection of IHC, which can be observed with an ordinary light microscope at lower cost [17,18]. CISH is potentially able to detect HER-2/neu gene amplification and to minimize, if not eliminate, the false positive fraction of the IHC procedure.
To date, many studies in epithelial ovarian cancer have reported on the association between HER-2/neu expression and outcome; some earlier studies reported that HER-2/neu overexpression was a poor prognostic factor, but later studies reported that HER-2/neu expression had no relationship with prognosis. Thus, no definitive conclusion has been reached as to the relationship between HER-2/neu expression and prognosis [14,19]. As with breast cancer, more studies of the ovary should be conducted in order to establish the prognostic value of HER-2 and the advantage of therapy with monoclonal antibodies (Trastuzumab or others) [9,20].
P53 is a tumor suppressor gene (it inhibits cell division and/or promotes cell death/apoptosis) located on the short arm of chromosome 17 [21]. It suppresses cell growth by controlling entry into the S-phase of the cell cycle. Mutation or deletion of the p53 gene is believed to result in uncontrolled cell proliferation. Most p53 gene mutations result in stabilization of the protein: in contrast to the short half-life of the wild-type p53 protein, the increased stability of the mutant forms allows their detection by immunohistochemical techniques. Mutations of the p53 gene are the most common genetic abnormalities described in human cancers and have been implicated in the pathogenesis of several human tumors [22]. Mutations of p53 have been found in approximately 40-80% of epithelial ovarian cancer cases. Studies have demonstrated an association between p53 protein overexpression and poor prognosis in patients with several tumor types. In epithelial ovarian carcinoma, the role of the p53 protein is contentious, and there are a number of studies with contradictory results [23]. Several studies have identified the p53 protein as an adverse prognostic factor for survival in ovarian cancer; other studies suggested that alterations in p53 expression in ovarian cancer can affect sensitivity to chemotherapy [24]. In contrast, a number of studies suggest that p53 expression has no prognostic value in epithelial ovarian cancer [25].
The proliferative activity of tumor cells can be determined using a variety of methods, but many of these have significant technical limitations (DNA flow cytometry, DNA image cytometry, immunohistochemistry and others) [26]. Immunohistochemistry allows evaluation of Ki67, a nuclear non-histone protein expressed in cells in the G1, S, G2 and M cell cycle phases but absent from quiescent cells in G0. High cellular proliferative activity has been associated with poor outcome. On the other hand, other studies did not confirm the relationship between proliferative activity and prognosis in epithelial ovarian cancer [27].
The steroid hormones estrogen and progesterone are important hormones secreted by the ovary and act through specific receptors. The interaction between steroid hormones and their respective receptors (the estrogen receptor (ER) and the progesterone receptor (PR)) is thought to play an important role in the process of carcinogenesis in gynecologic cancers as well as in other primary tumors. Since ER and PR were first recognized as prognostic factors for breast cancer, much interest has been focused on steroid receptors in tumors thought to be related to gonadal hormones (endometrial, prostatic, ovarian cancer). ER and PR have been found in about 50% of ovarian tumors. Although the significance of their presence in the pathogenesis of epithelial ovarian tumors has not yet been defined [28], a role similar to that in breast cancer has been claimed, in that their presence seems to be inversely related to tumor differentiation, although this relationship has not yet been confirmed. Tumor expression of ER and/or PR, as well as their pattern of combinations (ER+/PR+, ER+/PR-, ER-/PR-), has been identified as predictive of response to endocrine treatment. Some studies found expression of PR to be an independent indicator of favorable prognosis in epithelial ovarian cancer [29,30]; however, other studies did not confirm these results. Thus, we studied c-erbB-2 (ERBB2, HER-2/neu, neu), p53, Ki67 and steroid receptor (ER and PR) tumor expression and their possible prognostic value in epithelial ovarian cancer.
Objective
The objective was to evaluate the value of the proteins c-erbB-2, p53, Ki67 and the steroid receptors (ER and PR) in predicting the long-term survival of patients with epithelial ovarian cancer.
Material and Methods
This retrospective study comprised one hundred and twenty-five patients with a diagnosis of epithelial ovarian cancer treated at the Gynecologic Oncology Centre, Hospital Geral Santo Antonio, Porto, Portugal.
All patients were treated by multidisciplinary medical-surgical teams according to international protocols. All patients were staged according to the criteria of the International Federation of Gynecology and Obstetrics (FIGO) staging system (I-IV) and, after surgery, received six courses of platinum-based chemotherapy.
In this series, all tumors were invasive. All histological sections were reviewed by a reference pathologist, and histological classifications were performed using the criteria defined by the World Health Organization (WHO). The tumors were graded according to the WHO histologic grading system as grade 1, 2 or 3. Clinical information was available for all patients (age, date of initial diagnosis, surgical stage, histological type, tumor grade, initial tumor volume, residual tumor volume, treatment, follow-up), and the date of death was confirmed.
To improve antigen detection, the sections were pretreated in a microwave oven (900 W) for 20 min in a 10 mM citrate buffer (pH 6.0) or EDTA buffer (pH 8.0), depending on the marker. After cooling, the sections were immersed in 3% hydrogen peroxide (H2O2) in distilled water for 30 min to block endogenous peroxidase activity. Nonspecific staining was eliminated by a 60 min incubation. Excess normal serum was removed and replaced by the primary antibodies, which were incubated overnight (4°C) in a humidified chamber. After washing the slides, the sections were incubated in streptavidin-biotin complex (HRP, Labvision Corporation) for 20 min at room temperature. Subsequently, the color was developed with 3,3'-diaminobenzidine tetrahydrochloride with H2O2 in PBS buffer for 5 min. Slides were counterstained with Gill's hematoxylin, dehydrated and mounted. Primary antibodies and biotinylated secondary antibodies were diluted in PBS. Negative controls were carried out by replacing the primary antibody with PBS. Paraffin sections from ovarian cancers with known immunoreactivity to the c-erbB-2, p53, Ki67, ER and PR antigens were used as positive controls. For each case, positively stained tumor cells within the five microscopic fields with the highest immunoreactivity ("hot spot" areas) were counted at high magnification (400x) using a 10×10 grid.
The study of c-erbB-2 was complemented by genetic amplification through the chromogenic in situ hybridization (CISH) technique, with the following procedure: tissue sections 4-5 μm thick were mounted on Histogrip-treated microscope slides, dried at 37°C, and baked for 2-4 hours at 60°C. The slides were deparaffinized for 15 min three times in xylene at room temperature (22-27°C) and washed for 2 min three times in 100% ethanol at room temperature. The slides were microwaved in SPOT-Light Tissue Heat Pretreatment Buffer for 10 min at 92°C and washed for 3 min twice in PBS. They were covered with 100 μL SPOT-Light Tissue Pretreatment Enzyme for 10 min at 37°C and washed for 2 min three times in PBS at room temperature. The slides were then dehydrated in 70%, 85%, 95% and 100% ethanol for 2 min each, then air-dried. Denatured probe (15 μL) was added to the center of each sample and covered with a 24 mm × 32 mm coverslip, the edges of which were sealed with a thin layer of rubber cement to prevent evaporation of the probe solution during incubation. The slides were denatured at 94°C for 3 min and placed in a dark humidity box for 16-24 hours at 37°C. After removal of the rubber cement and coverslip, the slides were immersed in 0.5 × SSC buffer in a Coplin jar for 5 min at 75°C. They were then washed for 2 min three times in PBS-Tween 20 buffer at room temperature. The slides were submerged in peroxidase quenching solution and then washed for 2 min three times with PBS, after which endogenous biotin blocking was performed with Reagent A (100 μL of CAS Block). Using Zymed's SPOT-Light Detection Kit, 100 μL each of fluorescein isothiocyanate-labeled sheep anti-digoxigenin, horseradish peroxidase-labeled goat anti-fluorescein isothiocyanate, and diaminobenzidine chromogen were sequentially added to the slides, with three 2 min rinses with PBS-Tween between the additions of reagents. The slides were counterstained with 150 μL of Gill-2 hematoxylin and incubated for 3 min. They were then dehydrated with a graded series of alcohol, cleared in xylene, and mounted with a coverslip. The results of amplification were recorded as follows: two to four signals, lack of amplification; four to six signals, equivocal result; more than six signals, presence of amplification. In all cases with amplification, the number of signals always clearly exceeded 6. The signals were either clearly distinct from each other or confluent, forming a stain.
Positive staining for p53 was nuclear. The reaction for p53 was considered positive when more than 25% of the tumor cells exhibited strong diffuse immunostaining. The Ki67 labeling index (LI) was calculated as the percentage of positive nuclei divided by the total number of cells examined; at least 1000 cells per tumor were examined, and staining in more than 10% (LI) of the tumor cells was considered positive. For ER and PR, the percentage of tumor cells exhibiting nuclear staining for the receptor, regardless of intensity, was determined, and a case was considered positive when more than 10% of the cells were stained.
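The cut-offs above amount to a simple per-marker classification rule. Purely as a restatement of these thresholds (an illustrative sketch, not code used in the study), in C#:

static class MarkerScoring
{
    // p53: positive if more than 25% of tumor cells show strong staining.
    static bool P53Positive(int positiveCells, int totalCells) =>
        100.0 * positiveCells / totalCells > 25.0;

    // Ki67: labeling index (LI) = positive nuclei / total cells examined,
    // evaluated on at least 1000 cells; positive if the LI exceeds 10%.
    static bool Ki67Positive(int positiveNuclei, int totalCells) =>
        totalCells >= 1000 && 100.0 * positiveNuclei / totalCells > 10.0;

    // ER/PR: positive if more than 10% of tumor cells show nuclear
    // staining, regardless of staining intensity.
    static bool ReceptorPositive(int stainedCells, int totalCells) =>
        100.0 * stainedCells / totalCells > 10.0;
}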
Statistical analysis
The statistical analysis was done using the SPSS statistical package for Windows, version 22 (IBM SPSS, Chicago, IL). For univariate analysis, survival time was analyzed by the Kaplan-Meier method, and the log-rank test was used to assess differences among groups. For multivariate analysis, the Cox proportional hazards regression model was used to examine simultaneously all factors found to be predictive of survival in the univariate analysis. Associations between the tested parameters were studied by Spearman rank correlation. Differences were considered statistically significant at p = 0.20.
Results
The mean age of the 86 patients was 55.7 ± 16 years (range 23-85 years). The median follow-up of the patients was 70.5 months. In this series, FIGO stages I and II comprised 47 cases and stages III and IV 39 cases. For the survival analysis, stages I and II were grouped together, as were stages III and IV. Overall survival was defined as the time from diagnosis until death or the date of the last follow-up.
The immunohistochemical results (N and %) for Ki67, p53, PR and ER are presented in Table 1.
The study of HER-2/neu in our series presented some difficulties, arising largely because the older cases had undergone more or less time-consuming histological processing with Bouin fixation, which is an excellent fixative for certain tissues (e.g., for lymphomas) but has the disadvantage of hampering immunocytochemical studies, as occurs with HER-2/neu. This limitation is felt in immunocytochemistry studies and becomes much more obvious when molecular genetic studies, such as CISH, are performed. Therefore, only 42 cases yielded material with an acceptable quality of fixation for immunocytochemistry and CISH. The results are shown in Table 2, which underlines the following data: of the 42 cases included, 7 had a staining intensity of 2 and 8 an intensity of 3; of the remainder, 22 cases presented intensity 1 and 5 cases intensity 0.
In all cases where the intensity of the reaction was 0 or 1, gene amplification was assessed through CISH. Of the 7 cases in which the staining intensity was 2, 4 showed amplification and the other 3 did not. In the 8 cases where the intensity was 3, all showed gene amplification by CISH. Thus, the concordance between 3+ IHC and CISH-amplified cases was 100%, denoting that all gene-amplified cases of this group overexpressed the HER-2/neu protein.
In total, 12 of the 42 cases (28.57%) showed gene amplification and were thus positive for HER-2/neu. We did not find an association between overexpression/amplification (IHC/CISH) and prognosis. We noted that IHC/CISH-positive cases, as well as CISH-positive-only cases, had the same prognosis regarding survival, whereas IHC-positive-only cases had a prognosis similar to that of IHC/CISH-negative tumors. The amplification evidenced by CISH followed two patterns: the signals either presented as individual, independent points, or were confluent, presenting as a stain (Figures 1 and 2). The results for FIGO stage showed that this variable correlated significantly with survival (p = 0.002) (Table 3 and Figure 3).
In univariate analysis, the survival time was longer in patients with stage I-II disease than in those with stage III-IV disease (Cox p = 0.002). In the first group (I-II) the overall survival was 179.7 months (CI = 143.2-216.2), and in the second group (III-IV) it was 73.4 months (CI = 39.4-107.0). The univariate analysis did not show an association of HER-2/neu protein expression with overall survival; likewise, in the multivariate analysis the overexpression/amplification of the protein was not proven to be an independent prognostic indicator. Regarding survival in relation to the markers HER-2/neu, p53, Ki67 and the steroid receptors (ER and PR), the univariate analysis of clinical follow-up data revealed that patients with negative PR had a significantly shortened survival time (p = 0.01) compared with PR-positive patients (Table 3 and Figure 4). Other variables, such as HER-2/neu, p53, Ki67 and ER, were not significantly related to survival (Table 3). In the multivariable statistical analysis, only FIGO stage (p = 0.002) proved to be an independent predictor of overall survival (Table 4). None of the other variables showed any independent predictive value for patient prognosis.
In this study, the p53 suppressor gene showed a positive percentage of 41.6%, and the univariate analysis did not show an association between p53 overexpression and prognosis (overall survival) (Table 1). Likewise, in the multivariate analysis p53 protein overexpression was not proven to be an independent prognostic indicator.
The other marker studied by immunohistochemistry, the proliferation marker Ki67 (positive in 24.0% of cases), did not show an association between Ki67 expression and overall survival in the univariate analysis; likewise, in the multivariate analysis Ki67 was not proven to be an independent prognostic indicator (Table 1).
The percentage of PR-positive cases was 19.2% and of ER-positive cases 20.0%. We found that PR was associated with a better prognosis: univariate analysis of clinical follow-up data revealed that patients with negative PR had a significantly shortened survival time (p = 0.01) compared with PR-positive patients (Table 3 and Figure 4). The same was not observed for ER.
Discussion
Substantial progress has been made, and although more and more patients are living longer with their disease, the majority of patients with advanced ovarian cancer are not cured. The prognosis of ovarian cancer is discouraging compared with other malignancies of the female genital tract. Despite aggressive surgery and intensified chemotherapy, the outcome of patients with stage III and IV disease is poor. The importance of staging with regard to survival also stems from the influence it has on subsequent patient management. The prognostic value of FIGO stage has been well established [8]. In the surgical staging procedure there is some controversy over certain aspects, especially in the early stages (I-II), regarding the status of the retroperitoneal lymph nodes. In presumed FIGO stage I and II disease, extensive surgical staging has shown that some tumors are at a more advanced stage, in general stage III. "Aggressive surgical staging" cannot significantly change the survival of patients, but it can increase the operative risk [3,31]. In our study, unlike in the great majority of series, the percentage of early stage (I-II) cancers was higher than that of advanced stage cancers. This is due to many of these cases being referred to the hospital: most of the patients were diagnosed in routine gynecologic exams or by other specialties after imaging exams performed for different clinical indications. In fact, nowadays, it seems that patients with ovarian cancer may be diagnosed at earlier stages as a result of better primary care by general practitioners. Nevertheless, we still diagnose a great number of advanced stage cancers, although this number may be decreasing.
In this study, only the FIGO stage proved to be an independent predictor of overall survival in the multivariable statistical analysis. None of the other variables showed any independent predictive value for patient prognosis. Stage is recognized as the most important prognostic factor in epithelial ovarian cancer [8,32,33].
Valid prognostic factors are necessary to estimate the course of the disease and to define biologically similar subgroups for the analysis of therapeutic efficacy [32,34]. Many studies have been devoted to finding prognostic factors, and numerous features have been described that can help predict the prognosis of early and advanced ovarian cancer with varying degrees of accuracy and may provide a better understanding of the biological behavior of ovarian tumors. Immunohistochemistry has been widely used in the search for such markers [35][36][37].
HER-2/neu expression can be determined by IHC, FISH, CISH and ELISA, among other tests. This oncogene has been studied mainly in breast cancer by IHC and FISH, where it has prognostic, predictive and therapeutic-target value. HER-2/neu expression in epithelial ovarian cancer has been studied less, and we studied overexpression by IHC and amplification by CISH, a promising practical alternative to FISH. After the first CISH study by Tanner and colleagues [38], other reports favorably validated CISH results and the concordance between CISH and FISH [17,[39][40][41]. Some reports noted the advantages of CISH over FISH, because CISH is a specific, sensitive and easily applicable method for the detection of HER-2/neu gene amplification and can be used together with IHC for the evaluation of patients with breast carcinoma [17,18]. The different positivity frequencies of overexpression reported by different series may be attributable to the type of material analyzed (fresh or paraffin-embedded) and to differences in specificity among the antibodies used. Enzyme and microwave treatment of the tissue during the staining process may greatly affect the staining results, and tissue fixation procedures may also influence immunostaining. Different scoring methods and the subjective interpretation of immunohistochemical analysis may also be reasons for the different results obtained in different studies.
In our study, we did not find an association between HER-2/neu overexpression/amplification (IHC/CISH) and prognosis, and it was not proven to be an independent prognostic indicator. Researchers have been trying to assess the real prognostic significance of HER-2/neu in ovarian cancer, but the results of their studies have been controversial. Some earlier studies reported that HER-2/neu overexpression was a poor prognostic factor [42,43], but later studies reported that HER-2/neu expression had no relationship with prognosis [11,12,44,45]. Thus, no definitive conclusion has been reached as to the relationship between HER-2/neu expression and prognosis.
The same happens with the role of the p53 protein in epithelial ovarian cancer: several studies have identified the p53 protein as an adverse prognostic factor for survival [46], while other studies suggest that p53 expression has no prognostic value in epithelial ovarian cancer [47]. De Graeff and collaborators showed in a meta-analysis that the outcome of the analysis was influenced by FIGO stage; for example, in some studies p53 overexpression was associated with shorter survival of patients in stages I and II but not in advanced stages III and IV, while other studies found the opposite, with shorter survival in the advanced stage but not in the early stage for tumors with p53 overexpression [47]. In our study, we did not find any association of p53 overexpression with overall survival in either the early or the advanced stage.
As for the other markers studied by immunohistochemistry, the proliferation marker Ki67 did not show an association with overall survival, and Ki67 was not proven to be an independent prognostic indicator. Studies on DNA content and proliferation in epithelial ovarian cancer have yielded contradictory results regarding the prognostic significance of these parameters. Some authors showed that ovarian cancers had a higher median percentage of Ki67 staining than borderline and benign tumors, and they found a significant relationship between the proliferation marker Ki67 and disease-free survival that was independent of other parameters such as histologic type, histologic grade and stage [48]. Other authors also observed that Ki67 is a marker whose expression differs significantly between short- and long-term survivors [14,48]. Poor outcome has been associated with high proliferative activity; however, a series of studies did not confirm the relationship between proliferative activity and prognosis in epithelial ovarian cancer.
In our study, PR positivity was associated with a better prognosis: the univariate analysis of clinical follow-up data revealed that patients with negative PR had a significantly shortened survival time compared with PR-positive patients. The same was not observed for ER. Indeed, some of the latest studies of ER and PR status have shown that steroid receptors have significant favorable prognostic value in ovarian cancer; in particular, progesterone receptor positive tumor status has proved to be an independent prognostic variable for improved progression-free survival among patients with ovarian carcinoma. Two hypotheses have been proposed for this anti-tumor effect of PR (significant survival benefit). Estrogen-responsive cells efficiently repair DNA and avoid apoptosis, whereas progesterone promotes cell differentiation and apoptosis, and stimulation of PR inhibits DNA synthesis and cell division. This fits the "incessant ovulation" hypothesis proposed to explain the causal mechanism of carcinogenesis, which argues that repeated cycles of ovulation induce trauma and repair of the ovarian surface epithelium (OSE) at the site of ovulation. According to this hypothesis, the protective effect of progesterone consists in decreasing the exposure of the OSE to high levels of estrogens, antagonizing the growth-promoting effect of estrogens on the OSE, and inducing apoptosis of tumor cells [49]; in addition, loss of chromosome 11q23.3-24.3 decreases PR expression and is associated with an elevated risk for ovarian cancer and a poorer prognosis. This may explain why patients with ER+/PR- tumors have the worst and patients with ER-/PR+ tumors the best prognosis. It is important to identify reliable prognostic markers, such as PR, which may also be possible targets of therapy.
Conclusion
Although this study comprises a relatively small number of cases, we conclude that patients with negative PR had a significantly shortened survival time (p = 0.01) compared with PR-positive patients. Overexpression of the markers HER-2/neu, p53, Ki67 and ER was not significantly related to survival in ovarian carcinoma. CISH is a promising, practical alternative to FISH that can be used in conjunction with IHC, which remains the first screening procedure of choice. IHC is easy to perform, relatively inexpensive, and able to detect the majority of ovarian cancer patients whose tumors have negative (0 or 1+) or positive (3+) HER-2/neu status, with complete concordance with CISH.
Only the FIGO stage proved to be an independent predictor of overall survival. These markers should be evaluated together with the patient's clinical status and other prognostic factors.
Figure 3: Overall survival of patients with epithelial ovarian carcinoma related to the FIGO stage.
Figure 4: Overall survivals of patients with epithelial ovarian carcinoma relative to progesterone receptor.
Table 4: Multivariable analysis of survival time using Cox's regression proportional hazards model for identification of independent prognostic factors for patients with ovarian carcinoma.
"Medicine",
"Biology"
] |
Varied Ways to Teach the Definite Integral Concept
In this paper, we report on a collaborative teaching experiment based on the Learning Study model (LS model), which is grounded in the Variation Theory. To date, most such studies have focused on the teaching and learning of elementary school mathematics; ours was carried out in undergraduate mathematics education. In the following, we discuss how we managed to promote students' conceptual learning by varying the treatment of the object of learning (the concept of the definite integral and the Fundamental Theorem of Calculus) during three lectures of an introductory course in calculus. We also discuss the challenges and possibilities of the LS model and the Variation Theory in the development of the teaching of tertiary mathematics in general. The experiment was carried out at a Swedish university. The data of the study consist of the documented observations of three lectures and the students' answers to the pre- and post-tests of each lesson. The analysis of the learning results revealed some critical aspects of the definite integral concept and patterns of variation that seem to be effective to a significant degree. For example, we found several possibilities to use GeoGebra to enrich students' learning opportunities.
In Sweden, as in many other countries (Artigue, 2001), the concept of the definite integral is first met during the last two years of upper secondary school. The integral function is usually introduced using the notion of the anti-derivative, along with the Fundamental Theorem of Calculus connecting the concept of the definite integral with the intuitive idea of area. The theory of integration and Riemann integrals are systematically discussed only at universities.
Several studies have highlighted difficulties that students encounter with the integral concept. In early studies carried out by Orton (1983, 1984), it was noticed that some students have difficulties in solving problems that require the capacity to see integration as a limit process of sums. Orton's studies also showed that students interpret the integral sign as a signal "to do something" (cf. Attorps, 2006). Like Orton (1984), Artigue (2001) also found that although some students' technical ability to calculate definite integrals can be quite impressive, their conceptual understanding of the concept itself may be poor. Similarly, Rasslan and Tall (2002) verified that a majority of students cannot write meaningfully about the definition of the definite integral. Many recent studies concerning the learning of other concepts of calculus (e.g., Attorps, 2006; Rösken & Rolka, 2007; Viirman, Attorps & Tossavainen, 2011; Tossavainen, Haukkanen & Pesonen, 2013) have also verified that formal definitions play only a marginal role in students' learning; intuition and non-formal representations dominate their concept learning. For example, Attorps, Björk, Radic and Tossavainen (2010), Blum (2000), Calvo (1997) and Camacho, Depool and Santos-Trigo (2010) have verified that students have a strong tendency to identify the definite integral with the area of a domain restricted by the integrand and the coordinate axes.
On the other hand, it seems that students' learning of the definite integral can be supported by using graphing calculators in the classroom (Touval, 1997). Machín and Rivero (2003) also noticed that students may benefit from ICT in tasks which concern the graphic and procedural aspects of the definite integral. Nevertheless, the research reports cited above reveal the limitations of standard teaching methods. Although some students become reasonably successful in standard tasks and develop procedural skills, most of them have difficulties in developing a solid conceptual understanding of the topic itself (Artigue, 2001).
The aim of this study is to investigate whether it is possible, by using technology-assisted teaching (in this case, the dynamic geometric software GeoGebra), to design such teaching sequences of the definite integral concept that help us to improve university students' conceptual understanding of the concept. The theoretical framework for our experiment is based on the Variation Theory which is described in the next section. In its terminology, we seek an answer to the following questions: Which critical aspects of the definite integral concept arise during the lectures? How can we compose effective patterns of variation (of the object of learning) that support students to discern these critical aspects and learn from them?
From a practical point of view, the design of our teaching experiment is that of the Learning Study model (LS model). The LS model is a synthesis of the Japanese Lesson Study (Lewis, 2002; Stigler & Hiebert, 1999) and Design Experiments (Brown, 1992; Cobb et al., 2003; Collins, 1992). The LS model goes beyond the Japanese Lesson Study in two major aspects. The first is its theoretical basis: the design of teaching is based on the Variation Theory, and researchers and teachers work together to establish a framework for the joint inquiry. The second is its method for the evaluation of learning. In the Japanese version, the learners' understanding is evaluated as a long developmental process. In the LS model, pre- and post-tests are administered before and after every intervention in order to get an immediate conception of what the students have learned (see e.g. Runesson, 1999; Häggström, 2008).
The LS model makes up a cyclic process as follows:
• A learning study group of teachers determines a common object of learning (in our case, the definite integral concept). Previous teaching experiences, theories of concept learning (e.g., Tall & Vinner, 1981) and results from prior research on the teaching and learning of the object are taken as a starting point for the design of a pre-test.
• Based on the results of the pre-test, the learning study group plans the first lecture. The Variation Theory is used as a theoretical framework for designing the lecture.
• One of the teachers conducts the first lecture. The lecture is video recorded or observed by the other teachers (in our case, the teacher group made observations). The students' learning is tested in a post-test designed collaboratively.
• Both the test results and the video recordings or the documented observations are analysed by the learning study group. If the students' learning results are not sufficient with respect to the goals, the group revises the plan for the same lecture for the next group of students.
• A teacher of the group implements the new plan in another class. In an ideal setting, the cyclic process continues until the students' learning results are optimal.
In our experiment, altogether three researchers participated in the design and analysis of three lessons, and a fourth researcher participated in the analysis of the results.
Theoretical Framework
The Variation Theory is a theory of learning which is based on the phenomenographic research tradition (Marton & Booth, 1997). The main idea of phenomenography is to identify and describe the qualitatively different ways in which people experience certain phenomena in the world, especially in an educational context (Marton, 1993).
A significant feature of the Variation Theory is its strong focus on the object of learning. A central assumption is that variation is a prerequisite for discerning different aspects of the object of learning. Hence, the most powerful didactic factor for students' learning is how the object of learning is represented in a teaching situation. In order to understand what enables learning in one teaching situation and not in another, a researcher should focus on discerning what varies and what remains invariant during a lesson (Marton & Morris, 2001). Marton, Runesson and Tsui (2004) have identified four patterns of variation, or approaches to discussing the object of learning: contrast, generalization, separation and fusion. The following excerpts illuminate the essence of them: Contrast: … in order to experience something, a person must experience something else to compare it with.
Generalization: … in order to fully understand what ''three'' is, we must also experience varying appearances of ''three''.
Separation: … in order to experience a certain aspect of something, and in order to separate this aspect from other aspects, it must vary while other aspects remain invariant.
Fusion: If there are several critical aspects that the learner has to take into consideration at the same time, they must all be experienced simultaneously. (Marton et al., 2004, 16).
According to Leung (2003), these patterns of variation create opportunities for the students to understand the underlying formal abstract concept.
The object of learning can be seen from various different perspectives: that of a teacher, a student or a researcher. The intended object of learning refers to the object of learning seen from the teacher's perspective. It includes what the teacher says and wants the students to learn during the lecture. The students experience this in their own ways, and what they recognize and learn is called the lived object of learning. Obviously, what students really learn does not always correspond to what the teacher's intention was. The enacted object of learning is observed from the researcher's perspective, and it defines what is possible to learn during the lecture, to what extent, and in which forms the necessary conditions of a specific object of learning actualize in the classroom. The enacted object of learning describes the space of learning that students and teacher create together, i.e., the circumstances for discerning the critical aspects of the object of learning.
In the Variation Theory, the necessary conditions for learning are the experiences of discernment, simultaneity and variation. Variation is the primary factor to support students' learning. In order to understand what variations a teacher should use, he or she must first become aware of the varying ways students may experience the object of learning. This information is needed for identifying potential ways to help students to discern those aspects of the learning object they have not previously noticed (Marton, Runesson & Tsui, 2004).
Every concept, situation and phenomenon has particular aspects of its own. If one aspect is varied and the others are kept invariant, the varied aspect should arise and be discerned. A thorough understanding of the object of learning, e.g., a mathematical concept, requires the simultaneous discernment of all critical aspects of the object of learning (Marton & Morris, 2001; Marton, Runesson & Tsui, 2004). Consequently, the triangle of discernment, simultaneity and variation can also be used as a framework for analyzing teaching (ibid.).
Although the theoretical framework in our study is mostly based on the Variation Theory, we also acknowledge the theory of concept image and concept definition. Tall and Vinner (1981, 152) use the term concept image "to describe the total cognitive structure that is associated with the concept, which includes all the mental pictures and associated properties and processes". They suggest that when we think of a mathematical concept, something is evoked in our memory. Often these images do not relate to the formal definition of a concept, i.e., the concept definition; instead, students prefer to focus, for instance, on archetypical examples when discussing a concept (e.g., Tall, 1994; Viirman, Attorps & Tossavainen, 2011; Tossavainen, Haukkanen & Pesonen, 2013). Vinner (1991) claims that the role of definition in mathematical thinking is also neglected in the teaching of mathematics, in textbooks, and even in documents about the goals of teaching mathematics. He encourages teachers not only to discuss definitions with students but to train them to use definitions as an ultimate criterion in mathematical reasoning (ibid.). The Variation Theory implies that, in addition to typical examples, it is useful also to pay attention to non-examples of mathematical concepts, even weird ones.
Method
The study took place at a Swedish university. Altogether 85 first-year undergraduate students (engineering and teacher students) and four university teachers participated in the study. The data consist of photos, observations, notebooks and the video recordings of three lectures in an introductory calculus course. The students' learning was measured using written pre- and post-tests and interviews.
The interviews focused on the participants' understanding of the concept of the definite integral. They were first transcribed and then analysed following the phenomenographic research tradition (Marton, 1993): the main goal is to describe how many qualitatively different conceptions of a certain phenomenon appear, rather than to determine how many people hold a certain conception. In our case, the analysis should result in a number of categories of description, i.e., categories representing the qualitatively different ways in which students comprehend the definite integral concept (Booth, 1992).
The pre- and post-tests for measuring students' knowledge about the definite integral and the Fundamental Theorem of Calculus consisted of six problems; the items are given below. In both tests, the same set of questions was used in order to make the learning outcomes statistically comparable. The maximum score on each problem was three points. To get three points, the answer needed to be correct and well motivated. For minor faults in calculations, we deducted one point. For a correct but not satisfactorily motivated answer, we awarded one point. An empty or meaningless answer resulted in zero points.
Students were given 25 minutes to do the test. The use of technical aids such as graphing calculators was not allowed. The results were analysed using the statistical program Minitab.
One can obviously ask whether the observed improvements in the post-tests are due to the familiarity of the problems rather than a consequence of the implemented lecture design. In order to minimize this effect, we did not reveal the answers or the results of the pre-test to the students. Moreover, they took the post-test without any advance notice. Furthermore, the participating groups were equivalent with respect to their preliminary education; all students were first-year undergraduates from the engineering or teacher programme studying the same introductory course in calculus.
A more detailed description of how we designed and implemented each lesson will be given together with the report on our findings since the design of subsequent lectures was based on the analysis of the previous one(s). The first lecture is to be considered as a reference one. It was prepared without any knowledge of the pre-test results.
The pre- and post-test questionnaire was originally in Swedish. The English translations of the items are as follows: Question 1: If you want to calculate the area between the curve and the x-axis and the lines x = 0 and x = 5 (see the graphs below), you can get an approximate value of this area by calculating and summing the areas of the columns. a) Which of the following graphs should you choose in order to make the error as small as possible? b) Explain your answer.
The aim of the first question was to test a student's intuitive conception, or concept image, of the exact area as the result of a limiting process (of the upper Riemann sums). By observing that the width of each column is halved as we move from Graph 1 to Graph 3, a student should be able to discern that the area representing the error of approximation also decreases.
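To make the limiting process concrete, the following short Python sketch (ours, not part of the original test material; the function x² + 1 is an arbitrary choice) computes lower and upper Riemann sums on [0, 5] and shows how their gap shrinks as the column width is halved:

```python
import numpy as np

def riemann_sums(f, a, b, n):
    """Lower and upper Riemann sums of f on [a, b] with n equal-width columns."""
    x = np.linspace(a, b, n + 1)          # subinterval endpoints
    width = (b - a) / n
    lower = upper = 0.0
    for i in range(n):
        # Sample f densely on each subinterval to approximate its inf and sup.
        ys = f(np.linspace(x[i], x[i + 1], 50))
        lower += ys.min() * width
        upper += ys.max() * width
    return lower, upper

f = lambda x: x**2 + 1                    # arbitrary increasing example function
for n in (4, 8, 16, 32):                  # halving the column width at each step
    lo, up = riemann_sums(f, 0, 5, n)
    print(f"n={n:3d}: lower={lo:8.4f}, upper={up:8.4f}, gap={up - lo:.4f}")
# The gap (the error area the students should discern) halves at each step.
```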
The second question aims at measuring whether a student is familiar with the symbol of the definite integral and, if so, what this symbol evokes in his or her concept image of the definite integral.
Question 3: There are some approximate values of x and F(x) given in the table below (table not reproduced here). The purpose of the third question was to test whether this kind of problem evokes a link to the Fundamental Theorem of Calculus in a student's concept image of the definite integral.
The fourth question tests whether a student can apply the additive properties of the definite integral.
Question 5. Can you find any error in the following reasoning?
The aim of the fifth question was to examine whether a student has a correct conception of the prerequisites for applying the Fundamental Theorem of Calculus.
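The reasoning presented in Question 5 is not reproduced in the source. A canonical example of the kind of error such an item targets, consistent with the integral from 0 to 2 of 1/(x-1) discussed later in the lectures, is the following naive application of the Fundamental Theorem of Calculus:

\[
\int_0^2 \frac{1}{x-1}\,dx \;\overset{?}{=}\; \Big[\ln|x-1|\Big]_0^2 \;=\; \ln|2-1| - \ln|0-1| \;=\; 0.
\]

The conclusion is false: the integrand is neither defined nor continuous at x = 1, which lies inside [0, 2], so the hypotheses of the theorem fail and the integral in fact diverges.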
Question 6: Find the area of the region limited by the functions f(x) = 0.5x² and g(x) = x³. Give the exact value of it.
The idea of the last question was to test the students' procedural skills in applying the Fundamental Theorem of Calculus.
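For reference, here is a worked solution (ours, not reproduced from the source) to Question 6. The curves intersect where 0.5x² = x³, i.e., at x = 0 and x = 1/2, and on (0, 1/2) we have 0.5x² > x³, so the sought area is

\[
A = \int_0^{1/2}\Big(\tfrac{1}{2}x^{2} - x^{3}\Big)\,dx
  = \left[\frac{x^{3}}{6} - \frac{x^{4}}{4}\right]_0^{1/2}
  = \frac{1}{48} - \frac{1}{64}
  = \frac{1}{192}.
\]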
In the next section we present the results of our study, which consisted of three lectures on the same topic. The first lecture is to be considered a reference one; it was prepared without any knowledge of the pre-test results. The second and the third lectures were designed on the basis of the post-test results of the first and the second lecture, respectively. Having this information available, we revised the patterns of variation of the observed critical aspects of the object of learning in lectures two and three.
Results
The analysis of our findings follows the hypothesis of Marton and Morris (2002) that different patterns of variation create different learning opportunities. Therefore, we begin by illuminating the progression of each lecture.
Lecture One
The first lecture (LS1) was designed by the first lecturer alone, without having any prior knowledge of the pre-test results. Two researchers observed the lecture. The first group is therefore to be considered as a reference group; it consisted of engineering students only.
The lecture started with a discussion about the area concept and how to calculate the area of common figures such as rectangles, triangles and parallelograms. For example, the area of a circle was estimated by transforming the circle into a parallelogram. This was done by cutting the circle into wedges which were then organized into the shape of a parallelogram. As the number of wedges increases, the area of the parallelogram approaches the area of the original circle. The lecture continued with a discussion about how to calculate the areas of irregular regions, such as the area between an arbitrary continuous function and the x-axis. In this context, the sigma symbol (summation) and the concepts of Lower and Upper Riemann sums were introduced. The end of the lecture was spent on demonstrating how to proceed when calculating the area of the plane region lying above the x-axis and under the curve y = e^x. The problem was studied first in terms of Lower and Upper Riemann sums and the limiting process, and then solved by applying the Fundamental Theorem of Calculus. In the discussion, the conditions for applying the theorem were not mentioned explicitly. After the lecture, the students answered the post-test anonymously.
Lecture Two
Before designing the second lecture, we decided that, in order to improve the precision of our statistical evaluation, we should compare the results of the pre- and post-tests in the subsequent learning studies LS2 and LS3 at the individual level instead of the group level, as was the case in LS1. Furthermore, we decided to videotape our next lectures.
Before the second lecture, we carefully analysed the observations and the results of the post-test. Table 1 summarizes the learning results of the first group. In a more thorough inquiry into LS1, we identified the following three critical aspects.
First, we noticed that most of the students who answered the second question had interpreted the definite integral in Question 2 merely as an area and not as a real number that can have a negative, zero or positive value. Second, the results in both the pre- and post-tests indicated that the students have difficulties in discerning the correct conditions for applying the Fundamental Theorem of Calculus, especially in the case when it is not applicable (Question 5). Third, a large majority of the students failed in solving the ordinary routine exercise (Question 6). For example, they could not decide which of the functions represents the upper or lower function or determine the intersection points between the functions. Some of them even had problems with the arithmetic of fractions.
Having this information available, we revised the patterns of variation of these three critical aspects in the next lectures so that the correct aspect should be easier to discern. For example, we decided to emphasize the formal definition of the definite integral and the fact that it cannot always be interpreted as an area. Further, students should pay more attention to the conditions of theorems to be applied.
The second lecture was carried out by a teacher in the research group to a mixed group of engineering and teacher students. It started with a discussion about the concept of area and regular (polygonal) and irregular regions in the plane. After that, the definite integral concept was introduced and discussed through a typical example from upper secondary school (the worked example is not reproduced here). The geometric interpretation of the problem was illustrated, and the problem was solved using the Fundamental Theorem of Calculus, emphasizing the conditions for applying it. Then another variant of the same problem was discussed graphically by studying the functions shown in Figure 3. Further, by using two different approaches to solve the same problem, we especially aimed at the experiences of generalization and separation. The notions of upper and lower functions were introduced in this connection, and we also recalled how to find the intersection points of the functions. After that, the second lecture continued similarly to the first one, with discussions about how to find the area by using estimation (Lower and Upper Riemann sums) and the limiting process for arbitrary irregular regions above the x-axis. However, in order to show how to interpret the definite integral in the general case (i.e., not only as an area), an additional example was considered thoroughly.
In order to stress the importance of the necessary conditions for applying the Fundamental Theorem of Calculus, here we emphasized the experiences of separation and fusion.
Lecture Three
The test results (Table 2) for the second group of engineering and teacher students revealed that students' understanding of the concept of the definite integral was still inadequate, although some statistically significant improvements were observed. Most of the students again interpreted the definite integral only as an area. Similarly, the problems related to the conditions for applying the Fundamental Theorem of Calculus (Question 5) persisted, as did the problems in solving the ordinary routine exercise (Question 6). In order to gain a more detailed view of students' conceptions of the definite integral, we interviewed five students from the second group. The analysis of the interviews revealed three different categories of description: the definite integral is seen as 1) a limiting process, 2) an area, or 3) a procedure.
The first category represents those students whose conceptions of the definite integral focus on a limiting process, the approximation of the area of a curvilinear region by breaking it into thin vertical rectangles. One of the students described the process in the following way: "The error decreases as the number of columns approaches infinity. The columns will look more and more like the curve." This excerpt and the test results from lectures one and two indicated that some students have a relatively good intuitive understanding of the definite integral as a limiting process.
For the students in the second category of description, the definite integral ∫_a^b f(x) dx stands for the area between f(x) and the x-axis. "It is an area between y = 0 and y = f(x) in the interval [a, b]," as one of the students explained in the interview. Most of the students in this study described the definite integral in the pre- and post-tests in this or a similar way. The students belonging to the third category viewed the definite integral as a procedure. For them, the definite integral seems to be merely a formula, and they use procedures without considering definitions and theorems when solving related problems. One of the interviewed students described his conception in the following way: "This I had to learn in upper secondary school. You write down the primitive function with brackets. I take the value at the end point minus the value at the starting point; then it's just a simple subtraction." Another student said, when looking at Question 5, "It looks like an ordinary integral calculation. That is correct…" The weakest students of the study typically fell into this category. These students mentioned in interviews that theorems were not much discussed from a theoretical point of view in upper secondary school; theorems were applied more like formulas.
Taking into account the results from the pre- and post-tests and the interviews, we again revised our plan for the next lecture. The most notable difference between the third and the previous lectures is that we decided to use the free dynamic mathematics software GeoGebra for the illustration of the critical aspects.
The third lecture was given by the same teacher as the second one but now to a new group consisting of only engineering students. It began with a short discussion about how to find an area for a (polygonal) regular and an irregular region lying above the x-axis.
The first exercise with GeoGebra (see Figure 6) focused on the numerical approximation of the area via the Lower and Upper Riemann sums and on the definition of the definite integral as a limiting process. In Figure 6, two points, a and b, are shown, and they can be moved along the x-axis in order to modify the investigated interval. The values of the Upper and Lower sums together with their difference are displayed as a dynamic text automatically adapting to the modifications. In this exercise, we kept f(x) and the interval invariant and varied the number of subintervals. Our intention was to show that, by increasing the number of subintervals, the difference between the lower and upper sums can be made to decrease, suggesting that the lower and upper sums eventually coincide with the value of the definite integral. By utilizing GeoGebra we created the pattern of generalization dynamically. We also emphasized that, for the area interpretation, the function under consideration must satisfy the following assumptions: it must be a defined, continuous and nonnegative function on the closed interval [a, b]. The following two figures demonstrate how we illustrated the conflict between the definition of the definite integral concept and the area interpretation of it. GeoGebra gave us a good opportunity to dynamically demonstrate contrast, which was one of the patterns of variation. In the second GeoGebra application, related to Figures 7 and 8, two points, a and b, are shown so that they can be moved along the x-axis. The area and the value of the definite integral are displayed as a dynamic text. In this exercise, we kept only f(x) invariant and varied both the length of the interval and the upper and lower limit points in order to show that the value of the area between the function and the x-axis and the value of the definite integral do not always coincide. Our goal with the third exercise (Figure 9) was to help the students to discern situations where it is possible to apply the Fundamental Theorem of Calculus and to notice when it is not.
Figure 9: The illustration of the conditions of the Fundamental Theorem of Calculus.
By moving the point A along the x-axis, we can vary the position of the investigated interval. In this exercise, we kept the length of the interval and the functions f(x) and g(x) invariant and varied the location of the point A. By using the dynamic nature of GeoGebra we were able to demonstrate all the aspects of variation, i.e., contrast, generalization, separation and fusion. At the end of the third lecture, the same problem (∫_0^2 1/(x-1) dx) as shown in Figure 5 was studied.
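The contrast demonstrated in the second GeoGebra application can also be reproduced numerically. The following small Python sketch (ours; the sample function sin x is an arbitrary choice, not the function used in the lectures) compares the signed value of the definite integral with the geometric area between the curve and the x-axis:

```python
import numpy as np
from scipy.integrate import quad

f = np.sin                      # arbitrary sign-changing example function

for a, b in [(0.0, np.pi), (0.0, 2 * np.pi)]:
    signed, _ = quad(f, a, b)                     # definite integral (a real number)
    area, _ = quad(lambda x: abs(f(x)), a, b)     # geometric area between curve and x-axis
    print(f"[{a:.2f}, {b:.2f}]: integral = {signed:+.4f}, area = {area:.4f}")

# On [0, pi] the two values coincide (f >= 0 there); on [0, 2*pi] the
# integral is 0 while the area is 4: the definite integral is not always an area.
```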
Quantitative Analysis of the Pre- and Post-Tests
We analysed the scores of the pre- and post-tests with the Minitab software, using both the independent, two-sided, two-sample t-test (Lecture 1) and the dependent, two-sided t-test for paired samples (Lectures 2 and 3) at the significance level of 5% (0.05). In the pre- and post-test of the first lesson, the numbers of participants were 28 and 24, respectively. The results of the pre- and post-tests were recorded on each item only at the group level, which explains why we use a different t-test for this group. Concerning the following lessons, we compared the means of the test results on each item at the individual level. The second group (18/18 students) consists of both engineering and teacher students, and the third group (39/39 students) only of engineering students. Tables 1 and 2 show the results of the analyses.
Table 1: The quantitative results of the pre- and post-tests (unpaired t-test).
In Table 1, we see that there are no statistically significant differences in the learning results concerning the first lecture. The results related to the second and third lectures are given together in Table 2.
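As an aside, the same two comparisons could be reproduced outside Minitab; a minimal Python sketch (ours, with made-up scores) using SciPy would be:

```python
from scipy import stats

# Lecture 1: scores recorded only at the group level -> unpaired (independent) t-test.
pre_group1 = [4, 6, 5, 7, 3, 6, 5, 4]     # hypothetical pre-test totals
post_group1 = [5, 7, 6, 8, 4, 7, 6, 5]    # hypothetical post-test totals
t, p = stats.ttest_ind(pre_group1, post_group1)
print(f"Lecture 1 (unpaired): t = {t:.2f}, p = {p:.3f}")

# Lectures 2 and 3: the same students took pre- and post-test -> paired t-test.
pre_paired = [3, 5, 4, 6, 2, 5]
post_paired = [5, 6, 6, 8, 4, 7]
t, p = stats.ttest_rel(pre_paired, post_paired)
print(f"Lectures 2-3 (paired): t = {t:.2f}, p = {p:.3f}")
# p < 0.05 would indicate a statistically significant change at the 5% level.
```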
As Table 2 shows, the third lecture seems to have succeeded best: statistically significant improvements occurred in many test items. The students' scores on Questions 1a) and 1b) show that the students' intuitive understanding of the definite integral concept as an infinite process was quite good already at the beginning, as was their capacity to apply the additive property of definite integrals.
Almost all students failed to give an adequate response to Question 5; most of them could not find any error at all. In Question 6, a majority of students could not discern which of the functions represented the upper and lower functions, determine the intersection points between the functions, calculate with fractions, or give an exact answer.
Discussion
The purpose of this study was to find out whether university students' learning can be supported by finding suitable teaching sequences that help students to discern and experience mathematical concepts from meaningful points of view. According to the Variation Theory, experiencing variations of the critical features of the object of learning should be a primary factor in enhancing students' learning (Marton & Booth, 1997; Marton & Morris, 2002).
In our study, two university teachers taught the definite integral concept to three student groups on an introductory course in calculus. Two of the lectures were prepared and planned with extraordinary care, taking into account the results from the written pre- and post-tests. Although the study consisted of only three lectures, it revealed that different teaching approaches had a significant influence on how students' learning outcomes developed during the lectures.
We succeeded best in designing teaching sequences for the definite integral concept when we used the GeoGebra software. We interpret this as being mainly due to the fact that GeoGebra is an effective tool for the illustration of dynamic processes, e.g., the limiting process of Riemann sums, and it allows a learner to experience many critical aspects simultaneously, e.g., how the area and the value of the definite integral are affected when the interval is modified. Earlier research (e.g., Leung, 2003) also shows that GeoGebra is a suitable pedagogical tool for creating the patterns of variation.
It is worth noticing that GeoGebra did not provide remarkable aid in Question 6 (although the difference between the mean scores of the pre- and post-tests improved for the third group in a statistically significant way). A plausible explanation is that GeoGebra, or any other software, cannot compensate for the lack of fundamental arithmetic skills, although it often helps us to bypass challenging calculations and focus on the conceptual understanding of a mathematical problem.
In this study, we observed three critical aspects of the definite integral that seem to be important for the successful teaching of this concept and, consequently, for the design of the relevant patterns of variations. All these aspects can be discussed using GeoGebra.
First, it is important to consider the definite integral as a real number (i.e., the result of a limiting process) in a wider context and to separate it from seeing it only as an area. This aspect was not elaborated during the first lecture, which can also be seen in the results of Tables 1 and 2. The use of GeoGebra during the third lecture seemed to extend students' possibilities to experience the concept of the definite integral in this wider context.
In the teaching sequences related to Figures 7 and 8, the students were given opportunities to experience an effective contrast, i.e., to discern the definite integral not only as an area but, simultaneously, also as a real number. This allowed them to experience a generalization, that is to say, to experience that the definite integral can be a negative number, zero or a positive number.
Second, in spite of many efforts, it is plausible that many students' concept images of the definite integral will remain based on the area interpretation (cf. Blum, 2000; Tall & Vinner, 1981). Changing this may require a thorough revision of school mathematics textbooks, since they seem to emphasize this aspect. It is hard for an individual teacher to resist such a tradition, but as our third lecture verifies, it is possible in a technological environment.
Third, the results also indicated that most students have difficulties in applying the Fundamental Theorem of Calculus, especially when the assumptions of the theorem are not satisfied. During the first lecture, the theorem was mentioned only quite superficially. In the other lectures, the issue was given more attention; both examples and counterexamples were elaborated. In the teaching sequence particularly related to Figure 9, the students were given an opportunity to experience a separation and a fusion. In order to experience a specific aspect (when it is not possible to apply the Fundamental Theorem of Calculus), and in order to separate this aspect from other aspects, the aspect must be varied while other aspects remain constant.
In our teaching sequence, we kept the length of the interval and the functions f(x) and g(x) invariant, and by moving the point a along the x-axis we could vary the position of the investigated interval. The same sequence also gave the students an opportunity to experience the pattern of variation called fusion: if there are several critical aspects, such as 'it is possible to apply the Fundamental Theorem of Calculus', 'it is not possible to apply the Fundamental Theorem of Calculus', 'the function is defined and continuous on the closed and bounded interval', 'the function is not defined and continuous on the closed and bounded interval', and so on, they must all be experienced simultaneously.
The students' learning outcomes in Question 5 show that their conceptions of the conditions for applying the theorem had not changed after the second lecture. Only after the GeoGebra-based teaching sequence could we notice some statistically significant improvements in their results. We agree with Vinner (1991) that students should be trained to use definitions as an ultimate criterion in mathematical issues in the teaching and learning of mathematics. The students even mentioned in interviews that theorems were not discussed from a theoretical point of view; they were used as formulas. Students use procedures without considering definitions and theorems when solving problems. In order to develop a deeper understanding of the definite integral concept, it is therefore important that the varying aspects of mathematical concepts are illuminated by using both examples and non-examples of the concepts in the teaching of mathematics.
Yet another critical aspect we found is that students' poor arithmetic skills (Question 6) prevent them from gaining a deeper conceptual understanding of mathematical phenomena. Varying methods for solving this type of problem were applied during the second and third lectures, but with negligible effect.
All in all, we are not very satisfied with the students' learning outcomes in this study. Further studies need to be undertaken to identify which factors, other than the integration of technology and the LS model into the teaching and learning of mathematics, can benefit both mathematics educators and students. It must be stressed once again that teaching and learning are very complex phenomena and the relation between them is not one-to-one. In a teaching experiment like this, it would also be important to analyse what happens in the classroom in the interaction between the teacher and the students and among the students. Not even a good design of a lecture guarantees students' learning, but it can increase the possibilities for learning if students' conceptions and misconceptions of mathematical concepts are taken into account.
Finally, the study gave us a rare opportunity to collaborate with colleagues in teaching and preparing lectures. It was a rewarding experience to reflect on and analyse students' learning together. We all agree that the LS model and the Variation Theory are effective tools for developing the teaching of mathematics; they provide a useful means of increasing teachers' awareness of the critical aspects of students' learning and of enhancing the learning of mathematics in higher education.
"Mathematics"
] |
Atomic-Scale Imaging of Organic-Inorganic Hybrid Perovskite Using Transmission Electron Microscope
Transmission electron microscopy (TEM) is regarded as a powerful tool for imaging the atomic-level structure of organic-inorganic hybrid perovskite (OIHP) materials, providing valuable and essential guidance toward high-performance OIHP-related devices. However, OIHPs exhibit poor electron-beam stability, which severely limits practical TEM studies of them. In this article, we review the application of TEM to obtain atomic-scale images of OIHPs, the main obstacles in identifying degradation products, and the future prospects of TEM in the characterization of OIHP materials. Three potential strategies (sample protection, low-temperature technology, and low-dose technologies) are also proposed to overcome the current drawbacks of the TEM technique.
Introduction
Organic-inorganic hybrid perovskites (OIHPs) have attracted broad attention due to their excellent opto-electronic properties [1-7], including long diffusion lengths, high defect tolerance, decent absorption properties, etc., and have been widely used in photovoltaic, photocatalytic [8], and photoelectronic devices, including solar cells, LEDs [9], and photodetectors. In the past decade, the power conversion efficiency (PCE) of perovskite solar cells (PSCs) has rapidly increased from the initial 3.8% [10] to 26.0% [11], already rivaling the best silicon cells (~26.8%). OIHPs possess a general chemical formula of ABX3, as illustrated in Figure 1, where A represents a monovalent cation such as methylammonium (CH3NH3+, denoted as MA+) or formamidinium (HC(NH2)2+, denoted as FA+), B represents a bivalent metal cation such as Pb2+ or Sn2+, and X stands for Cl−, Br−, or I−. Previous studies unraveled that even atomic-level structural changes can affect the resultant device performance, as can be observed from the deteriorated device PCE under operational conditions [12], which hampers PSC commercialization. It is thus imperative to precisely determine the atomic configurations of OIHP materials and establish a comprehensive relationship between their structures and properties. (Figure 1 reprinted with permission from Ref. [13]; copyright 2020, John Wiley and Sons.)
Transmission electron microscopy (TEM) has been recognized as a powerful tool to monitor material structures at atomic resolution [14,15]. Moreover, TEM can be performed under various imaging modes, as well as be integrated with electron diffraction (ED) and spectroscopic techniques (e.g., energy-dispersive X-ray spectroscopy (EDS) and electron energy-loss spectroscopy (EELS)) to gain both structural and compositional information with very high resolution (spatial resolution < 1 Å, energy resolution < 0.1 eV). As shown in Figure 2, two main imaging modes exist in the TEM technique: the TEM mode and the scanning TEM (STEM) mode. The TEM mode uses a parallel electron beam, and the obtained images are interference patterns of the scattered electrons formed by the objective lens (Figure 2a). In contrast, the STEM mode employs a focused electron beam to scan the specimen, and the images are formed by collecting transmitted electrons within a certain range of scattering angles using annular detectors (Figure 2b). In general, it is much more challenging to measure beam-sensitive materials (i.e., OIHPs) via the STEM mode, due presumably to the intense interaction between the sample and the focused beam (a high dose rate). (Figure 2 reprinted with permission from Ref. [13]; copyright 2020, John Wiley and Sons.)
Although TEM may be employed to accurately reveal the atomic-level microstructure of OIHPs, these hybrid materials, with their soft ionic lattices, are extremely sensitive to electron beams. The critical dose for probing perovskite materials is estimated to be tens of electrons per Å2 [13]. Although all-inorganic perovskites exhibit better electron-beam stability than the hybrid ones, possibly due to the lack of organic moieties, keeping the dose below the critical value is still challenging [16-18]. A high dose might damage the perovskite structure and even induce phase transitions during the TEM measurement, which limits its application scenarios [19-21]. It is thus necessary to develop low-dose yet high-resolution TEM techniques, such as sample protection, cryo-TEM, and low-dose imaging methods, to acquire atomic-level images of the highly sensitive OIHP materials.
In this article, the research status and major challenges of TEM as a characterization tool in PSC investigations are summarized. Specifically, several conventional low-dose technologies, the common misidentification of OIHP phases, and future advancements of TEM technology are discussed, which hopefully could provide essential guidance toward more efficient and accurate TEM characterizations of OIHP materials.
Application Status of TEM in OIHPs
The unusual optoelectronic properties and performance of OIHPs are closely related to their unique crystal structure and microstructure, e.g., the crystal symmetry, the vibration and ordered arrangement of the organic groups, and the tilt of the [PbI6]4− octahedra [22]. Therefore, in recent years, TEM has been widely employed to reveal the atomic-level structure of OIHP materials, which promotes deeper understanding of their material properties and the related device performance [23,24].
When OIHPs are imaged using conventional TEM, the structure of the perovskites might be destroyed within several seconds, which is classified as irradiation damage, as shown in Figure 3. Chen et al. noticed severe irradiation damage in a MAPbI3 perovskite polycrystalline film imaged via conventional TEM at a high electron dose rate of ~9870 e/(Å2·s), where nanoparticles precipitated quickly within the irradiated area (Figure 3a) [25]. Kim et al. revealed the generation and expansion of "bubbles" through a series of TEM images taken under continuous irradiation of MAPbI3 perovskite single crystals (Figure 3b) [26]. The above results indicate that it is difficult to obtain even the low-magnification morphology of OIHP films via the traditional imaging mode of TEM, which should be caused by the electron-beam-induced electrical field [25] and the direction-selectivity of the electron-beam damage in OIHPs [26]. The electrical field forms when positive charges accumulate in irradiated sample regions, following the emission of Auger and secondary electrons into the vacuum. The beam-periphery damage in the images has been unraveled in previous related literature, where various electron doses and different accelerating voltages were attempted [25]. (Figure 3 reprinted with permission from Ref. [26]; copyright 2020, IOP Publishing on behalf of the Japan Society of Applied Physics (JSAP).)
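The scale of the problem is easy to quantify from the numbers in this review. The following small Python sketch (ours) estimates how long a specimen survives at a given dose rate before the critical dose (~150 eÅ−2 for MAPbI3, discussed below) is exceeded; the dose rates are the values quoted in the text.

```python
# Time available before the accumulated electron dose exceeds the critical value.
def time_to_critical(dose_rate_e_per_A2_s, critical_dose_e_per_A2):
    return critical_dose_e_per_A2 / dose_rate_e_per_A2_s  # seconds

critical = 150.0                     # e/Å², approximate critical dose of MAPbI3
for rate in (9870.0, 2000.0, 800.0):  # conventional TEM and normal HRTEM dose rates
    t = time_to_critical(rate, critical)
    print(f"dose rate {rate:7.1f} e/(Å²·s) -> structure survives ~{t:6.3f} s")
# At ~9870 e/(Å²·s) the critical dose is reached in about 15 ms, which is why
# perovskite lattices collapse before a conventional image can be recorded.
```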
The structural instability of OIHP materials under various conditions (e.g., high temperature, oxygen, humid environments, light) is a vital issue hampering the commercialization of PSCs [27-29]. By employing the SAED technique, Chen et al. investigated the decomposition mechanism of OIHPs under electron-beam irradiation [30,31]. A possible decomposition pathway was thus proposed, as shown in Figure 4. Under continuous beam illumination, the structure evolution of MAPbI3 (along the [110] zone axis) and MAPbBr3 (along the [001] zone axis) is exhibited by the SAED patterns in Figure 4a-d and Figure 4g-j, respectively. With increasing electron-beam dose, the loss of methylamine and halide ions eventually causes the collapse of the perovskite structure to PbX2 (X = I, Br); the corresponding atomic-resolution images can be seen in Figure 4f,l. The structural illustrations of the decomposition from tetragonal MAPbI3 (viewed along [110]) to PbI2 and from cubic MAPbBr3 (viewed along [110]) to PbBr2 are given in Figure 4m-p. Their results indicated that tetragonal CH3NH3PbI3 and cubic CH3NH3PbBr3 may lose some halides during the irradiation, first forming an intermediate perovskite superstructure with ordered vacancies (i.e., CH3NH3PbX2.5, X = I, Br), as seen in Figure 4e,k. The structural degradation behaviors of perovskites under various experimental conditions were also investigated via low-dose electron diffraction and imaging techniques, which optimized the operating conditions of TEM for characterizing OIHPs [30]. As shown in Figure 5, a TEM cryogenic holder (Gatan 636) was employed to study the SAED patterns of MAPbI3 at different temperatures; the phases are reported to be orthorhombic below −(111 ± 2) °C, tetragonal between −(111 ± 2) and (58 ± 5) °C, and cubic above (58 ± 5) °C, as shown in Figure 5a-c [31]. The MAPbI3 was grown as a tetragonal phase, whose SAED pattern (Figure 5e) matches the simulated one (Figure 5h). The SAED pattern acquired at −180 °C (Figure 5d) shows no superstructure diffraction spots of the orthorhombic phase, highlighted by the circle on the simulated ED pattern (Figure 5g), suggesting that a low temperature in vacuum does not cause the transition from the tetragonal to the orthorhombic phase for single-crystal MAPbI3. The SAED pattern at 90 °C (Figure 5f) matches either the [110] direction of the cubic phase (Figure 5i) or the [100] direction of the tetragonal phase (Figure 5h), making it impossible to identify the specific phase. A liquid-nitrogen side-entry specimen holder was applied to cool down the specimen. At −180 °C, a rapid crystalline-to-amorphous phase transition was observed at low doses (129 to 150 eÅ−2). Interestingly, a much larger electron-beam dose (450-520 eÅ−2) is required to induce the transition from MAPbI3 to PbI2 at higher temperatures. This phenomenon suggests that lowering the temperature may not hinder the decomposition of OIHPs, but rather leads to a rapid, undesirable phase transformation.
Figure 5: SAED patterns of MAPbI3 at different temperatures [31]. The inset frames mark the selected areas used to compare whether the acquired SAED patterns match the simulated diffraction patterns. Reprinted with permission from Ref. [31]. Copyright 2020, Elsevier.
Main Issues of TEM in Characterizing OIHPs
Although the importance and necessity of TEM in OIHP characterization have been gradually realized, major challenges remain: these perovskite materials are electron-beam sensitive [32,33], which limits the practical application of TEM. Taking the well-known MAPbI3 as an example, owing to neglect of its electron-beam sensitivity, decomposition products such as PbI2, Pb and other intermediates have widely been misidentified as perovskite in TEM characterizations, which has negatively influenced the development of the perovskite field.
In general, the electron dose rate of normal HRTEM is within 800-2000 eÅ−2 s−1, far above the critical dose of MAPbI3 (~150 eÅ−2) [32,33]. Meanwhile, several interplanar spacings and angles of the decomposition products (e.g., PbI2) are similar to those of MAPbI3. For example, Figure 6 shows simulated electron diffraction (ED) patterns of tetragonal MAPbI3 and hexagonal PbI2 along different zone axes. The ED pattern of MAPbI3 along the [110] zone axis is illustrated in Figure 6a, where the (−110) and (002) crystal planes were missed in previous HRTEM characterization work [34]. Figure 6b demonstrates the simulated ED pattern of PbI2 along the [44−1] zone axis. As can be observed, the (014) and (−104) crystal planes of PbI2 exhibit interplanar spacings and angles confusable with those of the (−220) and (004) crystal planes of MAPbI3. Indeed, MAPbI3 may be damaged into PbI2 when exposed to electron beams. Similar ambiguities arise for the other zone axes compared in Figure 6 [34]. We think the phase transformation may be local, making it hard to distinguish the main phase from the secondary phase by SAED; otherwise, different phases can be told apart by their distinct diffraction conditions. Owing to such inaccurate identification of crystal planes, some researchers have identified PbI2 as MAPbI3 [35-43] even when using low-dose electron diffraction (ED) technology. Some examples are shown in Figures 7 and 8 [44]. For instance, in contrast to the MAPbI3 perovskite, the structures of decomposition products were misidentified as "pseudo" perovskite. Figure 7a,b show the HRTEM image and fast Fourier transform (FFT) of the pseudo MAPbI3 perovskite, respectively, obtained at high doses under conventional TEM conditions [40]. The FFT is consistent with the simulated ED pattern of PbI2 along the [44−1] zone axis (Figure 7c), yet it was identified as the perovskite. In fact, the HRTEM image and FFT of intrinsic MAPbI3 along the [001] zone axis at a total dose of 1.5 eÅ−2 at room temperature have been obtained [13], as shown in Figure 7d,e. Obviously, the (1−10) and (110) planes with 0.62 nm interplanar spacing can be seen in the images, matching the ED pattern (Figure 7f) and the XRD data of intrinsic MAPbI3 [5,43]. Comparing the simulated ED pattern of PbI2 along the [44−1] zone axis with that of the intrinsic perovskite along the [001] zone axis, it was found that they are very similar except that the (1−10) and (110) planes are missing and only the (2−20) and (220) planes remain, which results in the misidentification of the perovskite structure. Similarly, Zhu et al. [45] obtained HRTEM images of intrinsic MAPbI3 along the [−201] zone axis at a total dose of 3 eÅ−2 at liquid-nitrogen temperature (Figure 8d), with the FFT and the simulated ED pattern shown in Figure 8e. Figure 8a,b show the HRTEM image and FFT of the pseudo perovskite under normal TEM conditions, which should be identified as PbI2 rather than MAPbI3 owing to the absence of the (1−12) and (112) planes and the matching ED pattern (Figure 8c).
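The cross-check described above, comparing interplanar spacings of candidate phases before assigning an HRTEM lattice image, can be sketched in a few lines of Python. The d-spacing formulas below are the standard tetragonal and hexagonal relations; the lattice parameters are approximate literature values that we supply for illustration, not values given in this article.

```python
import math

def d_tetragonal(h, k, l, a, c):
    """Interplanar spacing for a tetragonal lattice: 1/d^2 = (h^2+k^2)/a^2 + l^2/c^2."""
    return 1.0 / math.sqrt((h**2 + k**2) / a**2 + l**2 / c**2)

def d_hexagonal(h, k, l, a, c):
    """Interplanar spacing for a hexagonal lattice: 1/d^2 = 4(h^2+hk+k^2)/(3a^2) + l^2/c^2."""
    return 1.0 / math.sqrt(4.0 * (h**2 + h * k + k**2) / (3.0 * a**2) + l**2 / c**2)

# Assumed approximate room-temperature lattice parameters (from the literature):
# tetragonal MAPbI3: a ~ 8.87 Å, c ~ 12.65 Å; hexagonal 2H-PbI2: a ~ 4.56 Å, c ~ 6.98 Å.
print("MAPbI3 (220):", round(d_tetragonal(2, 2, 0, 8.87, 12.65), 3), "Å")
print("MAPbI3 (004):", round(d_tetragonal(0, 0, 4, 8.87, 12.65), 3), "Å")
print("PbI2   (011):", round(d_hexagonal(0, 1, 1, 4.56, 6.98), 3), "Å")
# Tabulating spacings for both candidate phases in this way, and checking which
# reflections should be present, helps avoid assigning a PbI2 pattern to the perovskite.
```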
Main Issues of TEM in Characterizing OIHPs
Although the importance and necessity of TEM in OIHP characterization have been gradually recognized, major challenges remain: these perovskite materials are electron beam sensitive [32,33], which limits the practical application of TEM. Taking the well-known MAPbI3 as an example, owing to neglect of its electron-beam sensitivity, decomposition products such as PbI2, Pb and other intermediates have been widely misidentified as perovskite in TEM characterizations, which has negatively influenced the development of the perovskite field.
In general, the electron dose rate of normal HRTEM is within 800-2000 e Å−2 s−1, which is much higher than the critical dose of MAPbI3 (~150 e Å−2) [32,33]. Meanwhile, several interplanar spacings and angles of the decomposition product (e.g., PbI2) are similar to those of MAPbI3. For example, Figure 6 shows simulated electron diffraction (ED) patterns of tetragonal MAPbI3 and hexagonal PbI2 along different zone axes. The ED pattern of MAPbI3 along the [110] zone axis is illustrated in Figure 6a, where the (1̄10) and (002) crystal planes were missed in previous HRTEM characterization work [34]. Figure 6b demonstrates the simulated ED pattern of PbI2 along the [44̄1] zone axis. As can be observed, the (014) and (1̄04) crystal planes of PbI2 exhibit interplanar spacings and angles that are easily confused with the (2̄20) and (004) crystal planes of MAPbI3. Indeed, MAPbI3 may be damaged into PbI2 when exposed to electron beams. Similarly, the remaining panels of Figure 6 show further confusable simulated ED patterns along other zone axes [34]. We think the phase transformation may be local, making it hard to distinguish the main phase from the secondary phase by SAED; otherwise, the different phases can be told apart by the distinct conditions under which they appear. Owing to inaccurate recognition of crystal planes, some researchers may identify PbI2 as MAPbI3 [35][36][37][38][39][40][41][42][43] even when using low-dose electron diffraction (ED) technology. Some examples are shown in Figures 7 and 8 [44]. For instance, in contrast to the MAPbI3 perovskite, the structures of decomposition products were misidentified as 'pseudo' perovskite. Figure 7a,b show the HRTEM image and fast Fourier transform (FFT) of the pseudo MAPbI3 perovskite, respectively, at high doses under conventional TEM conditions [40].
The FFT was consistent with the simulated ED pattern along the [44̄1] zone axis (Figure 7c), which was identified as the perovskite. In fact, the HRTEM image and FFT of intrinsic MAPbI3 along the [001] zone axis, acquired at a total dose of 1.5 e Å−2 at room temperature, were obtained [13], as shown in Figure 7d,e. Clearly, the (1̄10) and (110) planes with 0.62 nm interplanar spacing can be seen in the images, matching the ED pattern (Figure 7f) and XRD data of intrinsic MAPbI3 [5,43]. Comparing the simulated ED of PbI2 along the [44̄1] zone axis with that of the intrinsic perovskite along the [001] zone axis, it was found that they are very similar, except that the (1̄10) and (110) planes are missing and only the (2̄20) and (220) planes remain, which results in misidentification of the perovskite structure. Similarly, Zhu et al. [45] obtained HRTEM images of intrinsic MAPbI3 along the [2̄01] zone axis at a total dose of 3 e Å−2 at liquid-nitrogen temperature (Figure 8d), with the FFT and simulated ED pattern shown in Figure 8e. Figure 8a,b show the HRTEM image and FFT of the pseudo perovskite under normal TEM conditions, which was identified as PbI2 rather than MAPbI3 owing to the missing (11̄2) and (112) planes and the matched ED pattern (Figure 8c). (Figure 8 caption: HRTEM images along the corresponding zone axes were also analysed; the newly added annotations in the reproduced HRTEM images are marked in yellow font [45]. Reprinted with permission from Ref. [45]. Copyright 2020, Elsevier.)
The crystal planes that could be observed with other Bragg's-law-based characterization tools, such as SAED and XRD [4,5,21,46], were missed in TEM results, which is attributed to excessive electron beam irradiation of MAPbI3 damaging its original structure. In particular, if the {2h, 2k, 0} diffraction spots along the [001] direction are observed while the {2h + 1, 2k + 1, 0} reflections [e.g., (110)] are absent, it is reasonable to presume that the perovskite structure has already decomposed into PbI2 [33]. Therefore, when using HRTEM images to identify phases, it is easy to misidentify perovskite phases by merely comparing interplanar spacings and angles. During phase identification, misidentification may occur owing to the similarity of certain crystal parameters, missing crystal planes, measurement errors, and other reasons. It is thus necessary to combine other relevant diffractograms, simulated ED, nanodiffraction, or XRD data from the same specimen [47] to conduct accurate phase identification.
Strategies to Improve the Compatibility of TEM with OIHPs
Driven by the urgent demand to understand the structure-property relationships of OIHPs, novel approaches have been developed to reduce electron-beam irradiation damage, which may help in obtaining the atomic-level structure of OIHPs via TEM characterization. The specific mechanisms of beam damage are complex and vary with the type of material. The damage caused by electron beam radiation can be categorized into three main mechanisms: knock-on damage, radiolysis, and the rise of local temperature caused by phonons excited by the beam [48]. Knock-on damage is closely related to beam energy, while heating effects and radiolysis are more related to electron dose [49]. Cai et al. calculated the knock-on damage in OIHPs using first-principles calculations, and the results showed that iodine is only knocked out when the accelerating voltage is higher than approximately 250 kV. This is consistent with the experimental data, where low acceleration voltages were used to study the degradation of OIHPs and the decomposition was not noticeably reduced. Previous investigations demonstrate that radiolysis dominates the degradation of OIHPs under electron beam irradiation [50][51][52][53]. Developing low-dose TEM is therefore vital for imaging OIHPs without causing negative impacts on the materials/films. Motivated by the OIHP irradiation damage mechanisms, various methods have been proposed to achieve atomic-resolution imaging of OIHPs, including sample protection, cryo-TEM, and low-dose technology (e.g., direct-detection electron-counting, abbreviated as DDEC).
Sample Protection
Sample protection, as its name indicates, directly protects the material and improves its stability [54]. By coating about 6-10 nm of carbon on MAPbI3, Chen et al. revealed that the decomposition of OIHPs could be significantly suppressed, because the thin carbon coating layer serves as a diffusion barrier, reducing the escape rate of volatile species (e.g., halogen atoms and CH3NH2) and helping to maintain the structural framework of the perovskite [30]. However, for a specimen coated on one side only, the degradation was not slowed down, likely because the volatile species can escape from the uncoated side. Furthermore, hexagonal boron nitride thin films have been deposited as an encapsulation layer, which successfully extended the stability of MAPbI3 and reduced the radiation damage induced by the electron beam [36].
Low-Temperature-Based Technologies
To mitigate electron beam damage, low-temperature technologies have also been developed, which can effectively reduce mass loss and heat damage [55,56]. Indeed, cryo-electron microscopy (cryo-EM) has already been applied to characterize electron-beam-sensitive materials such as lithium-ion battery materials [53,[57][58][59]. Efforts have also been devoted to investigating the effect of low temperature on the structural stability of OIHPs under electron beam irradiation [17,33,[60][61][62]. It was found that the intrinsic structure of MAPbI3 could be maintained at room temperature when the total electron dose is ~1.5 e Å−2 [13]. When the total dose reaches 5.95 e Å−2, a superlattice forms, which damages the original perovskite structure. By employing cryo-TEM, the critical dose of MAPbI3 increases to 12 e Å−2, much higher than at room temperature [31]. As a result, a more "stable" OIHP is achieved, allowing the use of higher electron doses to increase the signal-to-noise ratio of the image. However, conflicting results were reported in Rothmann's research, which suggests that low temperatures may lead to rapid amorphization [46]. Chen et al. [31] also found that low temperature (−180 °C) causes a rapid crystalline-to-amorphous transition even at low doses (129 to 150 e Å−2), suggesting that low temperature may not help to reduce electron beam damage. These inconsistencies might stem from the specimen properties or from the discrepancy between cryo-holder and cryo-microscope methods, which needs to be investigated in the near future.
The third approach is low-dose imaging technology, which is also an effective strategy to obtain atomic-resolution images of electron-beam-sensitive materials [18]. By combining low-dose LAADF-STEM imaging with simple Butterworth and Bragg filters, atomic-level high-resolution images of FAPbI3 perovskite films with only minor damage were acquired [51], unraveling some unique phenomena of these perovskite materials that may not be feasibly measured with other techniques. Figure 9a shows an image of damaged FAPbI3 after mild irradiation, where light and dark lattice patterns can be observed, as highlighted by the white and black circles. In Figure 9b, an unexpectedly coherent transition boundary between the residual PbI2 (yellow areas in Figure 9b) and FAPbI3 grains was observed, with an undetectable lattice misfit. The existence of a low-mismatch, low-lattice-strain interface between PbI2 and FAPbI3 suggests that a small amount of PbI2 may not deteriorate PSC device performance, in accordance with previous reports [63,64]. Figure 9c shows a high-resolution image of boundaries between FAPbI3 grains. The perovskite lattice at the boundaries is highly crystalline, indicating that the presence of boundaries might not disrupt the long-range crystal quality of the surrounding perovskite lattice. Additionally, aligned point defects (mainly vacancies) at the Pb-I sublattice of FAPbI3 were found in TEM measurements, such as stacking faults (Figure 9d, left) and edge dislocations (Figure 9d, right), which may provide valuable structural and defect information for future theoretical calculations and defect-related studies. In general, lowering the accelerating voltage of the incident electron beam reduces knock-on damage, but it also reduces imaging quality and can increase radiolysis damage. Therefore, further research is needed on methods of obtaining atomic-resolution images of OIHPs at reduced voltage and low dose.
(Figure 9 reprinted with permission from Ref. [51]. Copyright 2020, The American Association for the Advancement of Science.)
The invention of the DDEC camera provides an alternative route toward high-resolution TEM imaging of OIHPs. Early in 2018, Han and coworkers reported the employment of DDEC cameras in TEM; these cameras exhibit high detective quantum efficiency, thus enabling HRTEM at the ultralow electron doses suitable for imaging OIHPs [64]. Moreover, the intrinsic structure of MAPbI3 has been revealed successfully at a total electron dose of only 3 e Å−2 by using DDEC cameras [31,47]. Li et al. obtained cryo-TEM images of MAPbBr3 and MAPbI3 at different cumulative electron doses via a DDEC camera and investigated their electron dose thresholds at cryogenic temperatures; the resultant critical doses of MAPbI3 and MAPbBr3 were approximately 12 e Å−2 and 46 e Å−2, respectively [32]. Song et al. also used a DDEC camera to obtain a set of high-resolution images of MAPbI3 along the [001] zone axis, which matched well with the expected structure [13]. Nevertheless, although the employment of DDEC is one of the prerequisites for HRTEM probing of sensitive OIHP materials, the DDEC camera alone is insufficient to obtain high-quality images; several obstacles remain. First, the desired zone axis must be aligned with the electron beam very quickly to prevent damage to the crystalline structure. Second, the successive short-exposure, low-dose frames must be precisely aligned to avoid any loss of resolution. Last but not least, the accurate defocus value must be known to obtain an interpretable image by image processing. Han and co-workers developed a simple program to achieve one-step, automatic alignment of the zone axis, as well as an "amplitude filter" to retrieve the high-resolution information hidden in the image stack and a method to determine the defocus value of the image. By applying these methods, they successfully acquired the first atomic-resolution (≈1.5 Å) HRTEM image of hybrid CH3NH3PbBr3 at 300 kV with a total electron dose of 11 e Å−2 [64].
At the same time, the exposure dose for electron-beam-sensitive materials may be reduced through other techniques during testing, such as zone-axis auto-alignment and adjusting TEM parameters on a non-region of interest (non-ROI). Instead of real-time observation, automatic zone-axis alignment uses a single diffraction pattern to determine and rotate the sample to the desired zone axis "blindly" under program control, which saves a great deal of avoidable exposure. Moreover, focusing on a region adjacent to the ROI, rather than directly on the ROI, and restoring the parameters in advance can further reduce electron irradiation. These dose-control strategies can diminish unnecessary electron exposure [65].
For electron-beam-sensitive materials such as OIHPs, low-dose technology is required to obtain atomic-resolution images; however, problems remain, such as sample drift during imaging, low signal-to-noise ratio of the images, and difficulty in data processing due to the large data volumes. Therefore, to observe the atomic structure of OIHPs precisely, processing data more efficiently and increasing the input-output ratio is also indispensable in practical HRTEM imaging. A combination of machine learning and the development of algorithms for drift correction, denoising, and image reconstruction would benefit low-dose imaging.
Summary and Outlook
OIHP materials are highly sensitive to electron beams, which restricts their atomic-level structural characterization by electron microscopy. The lack of structural information on perovskites may hamper their further development. It is thus crucial to minimize beam damage to perovskite materials, which may be achieved by controlling the imaging voltage and temperature, as well as by developing low-dose imaging technologies. Nevertheless, low-dose technology inevitably generates large amounts of data, which must be analyzed to obtain atomic-structure information. The development of appropriate algorithms for drift correction, denoising, and image reconstruction will hopefully facilitate this process. Therefore, the combination of low-dose imaging and machine learning is expected to be among the next research hotspots in TEM- and perovskite-related studies. | 8,821.4 | 2023-06-19T00:00:00.000 | [
"Materials Science"
] |
WheatExp: an RNA-seq expression database for polyploid wheat
Background For functional genomics studies, it is important to understand the dynamic expression profiles of transcribed genes in different tissues, stages of development and in response to environmental stimuli. The proliferation in the use of next-generation sequencing technologies by the plant research community has led to the accumulation of large volumes of expression data. However, analysis of these datasets is complicated by the frequent occurrence of polyploidy among economically-important crop species. In addition, processing and analyzing such large volumes of sequence data is a technical and time-consuming task, limiting their application in functional genomics studies, particularly for smaller laboratories which lack access to high-powered computing infrastructure. Wheat is a good example of a young polyploid species with three similar genomes (97 % identical among homoeologous genes), rapidly accumulating RNA-seq datasets and a large research community. Description We present WheatExp, an expression database and visualization tool to analyze and compare homoeologue-specific transcript profiles across a broad range of tissues from different developmental stages in polyploid wheat. Beginning with publicly-available RNA-seq datasets, we developed a pipeline to distinguish between homoeologous transcripts from annotated genes in tetraploid and hexaploid wheat. Data from multiple studies is processed and compiled into a database which can be queried either by BLAST or by searching for a known gene of interest by name or functional domain. Expression data of multiple genes can be displayed side-by-side across all expression datasets providing immediate access to a comprehensive panel of expression data for specific subsets of wheat genes. Conclusions The development of a publicly accessible expression database hosted on the GrainGenes website - http://wheat.pw.usda.gov/WheatExp/ - coupled with a simple and readily-comparable visualization tool will empower the wheat research community to use RNA-seq data and to perform functional analyses of target genes. The presented expression data is homoeologue-specific allowing for the analysis of relative contributions from each genome to the overall expression of a gene, a critical consideration for breeding applications. Our approach can be expanded to other polyploid species by adjusting sequence mapping parameters according to the specific divergence of their genomes.
Background
Cereal crops provide a significant proportion of the calories consumed by humanity (http://faostat3.fao.org/) so maintaining and improving upon current production levels will be critical to provide food security for a growing world population. To meet this demand, continued and dedicated research efforts will be required to engineer solutions for the most pressing problems restricting agricultural production [1]. One important aspect of this research will be the identification and functional characterization of genes regulating the developmental stages most critical for determining yield and of genes which aid plant adaptation to a changing environment. Analyzing the dynamic expression profiles of each gene to describe their transcriptional regulation during the course of development, in different tissues and in response to specific environmental stimuli will be central to functional genetic studies.
In many economically-important crop species, such studies are complicated by polyploidy, the presence of two or more homoeologous genomes within a single nucleus. Polyploidy is widespread among plant species and is thought to aid the plant's adaptation to diverse environmental conditions [2]. This increased adaptability is favored by the possibility of increased diversity in multimeric protein complexes and by global gene redundancy, which in some instances may be followed by gene divergence and sub- or neo-functionalization [2].
Wheat is one example of a recent allopolyploid species. The diploid species of the Triticum-Aegilops complex diverged from one another 3-5 million years ago and are, on average, 97 % identical within the protein coding regions [3]. The hybridization of diploids T. urartu (AA genome) and a species of the Sitopsis group (BB genome) less than 500,000 years ago generated the tetraploid wheat species (AABB genomes) currently used predominantly for pasta. The hybridization of tetraploid wheat with Aegilops tauschii less than 10,000 years ago resulted in the hexaploid wheats (AABBDD genomes) currently used to make breads and pastries [4].
The complexity of the wheat genome, together with its economic importance and the existence of a large public research and breeding community make wheat an ideal target for the development of an expression database and the tools required to analyze and distinguish between homoeologues. This is now possible, owing to the recent release of a homoeologue-specific draft assembly of the wheat genome by the International Wheat Genome Sequencing Consortium (IWGSC) [3] and the publication of several RNA-seq expression datasets [5][6][7][8][9][10].
To assemble the wheat draft genome, individual chromosome arms were first separated according to size using flow cytometry. This allowed for the sequencing and subsequent assembly of each homoeologous chromosome arm separately. This was coupled with a broad effort to annotate gene-coding regions, using species-specific transcripts and prediction algorithms, as well as manual annotation. Annotated gene sets are regularly updated and released through the Ensembl genomics platform [11]. Thus, for the first time, comprehensive transcript profiling can be applied directly in hexaploid wheat to support functional genomics studies, including accurate separation of distinct homoeologous genes.
The recent, rapid advances in next generation sequencing technologies have proved transformative for wheat, as for multiple other species, by providing the ability to sequence the entire transcriptomes of multiple biological samples at great depth, an approach known as RNA-seq [12]. Falling sequencing costs and streamlined library construction protocols have resulted in the proliferation of RNA-seq studies in diverse plant species [13]. Increasingly, large volumes of raw sequencing data generated from these studies are deposited in online repositories (e.g. Sequence Read Archive [14], Gene Expression Omnibus [15] or European Nucleotide Archive [16]). In addition to the specific research questions addressed by the authors of these studies, these datasets also represent a rich source of information for the wider research community. However, processing and analyzing such large volumes of data is a technically difficult, time-consuming task which requires bioinformatics expertise and access to computing clusters with high-performance infrastructure. This has limited the ability of small research laboratories and individual researchers to benefit from the wealth of information available in RNA-seq studies. To address this limitation and provide simple, free access to this data, we developed a pipeline to analyze transcriptomic data in polyploid genomes using wheat as a test case. Here we present WheatExp (http://wheat.pw.usda.gov/WheatExp/), an RNA-seq expression database and visualization tool that facilitates the analysis and comparison of homoeologous transcript profiles across a wide range of developmental and tissue samples in polyploid wheat.
Data sources and generation
All data contained within WheatExp is derived from RNA-seq reads deposited in online sequence repositories [14][15][16]. Currently, six complementary studies are included: a broad study of five different tissues across multiple timepoints [5], a study of seedling photomorphogenesis [6], a study of drought and heat stress in wheat seedlings [7], a study of wheat grain layers at a single timepoint [8], a senescing leaf timecourse [9] and a timecourse of different grain tissue layers during development [10] (Table 1). In combination, these datasets represent a diverse set of wheat expression data across multiple tissues, developmental stages and environmental treatments.
We designed a pipeline specific to polyploid wheat sequence data to analyze previously published RNA-seq datasets using a uniform set of tools and quality controls. The output of our pipeline is a set of expression values for all annotated wheat genes from the IWGSC project. Briefly, raw RNA-seq reads are first trimmed for quality and adapter contamination using two open-source packages, "Sickle" (https://github.com/ucdavisbioinformatics/sickle) and "Scythe" (https://github.com/vsbuffalo/scythe), respectively, ensuring that only high-quality reads are considered when generating expression profiles. Trimmed reads are mapped to the full set of annotated homoeologue-specific wheat transcripts from the Ensembl genomics platform using BWA [17]. Uniquely-mapped reads are counted using "Htseq-count" [18] and then adjusted to derive RPKM/FPKM (Reads/Fragments per kilobase of transcript per million mapped reads) values for each gene based upon mapping rate, transcript length and library size. This normalization means that within a dataset, expression values are directly comparable across different tissues and developmental time points. Although the same reference was used for each dataset, comparisons across different datasets are less reliable because of differences in the number and length of sequencing reads between datasets. Our mapping parameters are selected to report only those reads with a mapping quality (MAPQ) score of 40 in the Sequence Alignment/Map (SAM) file, a value which signifies that the read was mapped uniquely. Reads which map ambiguously, either to multiple homoeologues or to other identical sequences, have a lower associated MAPQ score and are excluded in this step. Table 1 reports the percentage of reads mapped from each dataset after applying this selection criterion. Across all six datasets, an average of 50.1 % of reads were mapped uniquely, resulting in homoeologue-specific expression data for each gene. In general, datasets with longer reads (e.g. 101 bp PE reads) resulted in a higher proportion of uniquely-mapped reads than those comprised of shorter reads (e.g. 50 bp SE reads).
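For illustration, the uniqueness filter and RPKM conversion described above might be sketched as follows (a sketch under stated assumptions, not the pipeline's actual scripts; the use of pysam and all function names are ours):

```python
# Illustrative sketch of the MAPQ-40 filter and RPKM normalization described
# above; not the authors' pipeline code. Assumes a BAM of reads mapped to the
# homoeologue-specific transcript reference and a dict of transcript lengths.
import pysam

MIN_MAPQ = 40  # MAPQ >= 40 is the paper's criterion for a uniquely-mapped read

def count_unique_reads(bam_path):
    """Count reads per transcript, keeping only uniquely-mapped reads."""
    counts = {}
    with pysam.AlignmentFile(bam_path, "rb") as bam:
        for read in bam:
            if read.is_unmapped or read.mapping_quality < MIN_MAPQ:
                continue  # ambiguous (multi-homoeologue) reads are excluded
            counts[read.reference_name] = counts.get(read.reference_name, 0) + 1
    return counts

def rpkm(counts, transcript_length_bp):
    """RPKM = reads / (transcript length in kb x total mapped reads in millions)."""
    total_millions = sum(counts.values()) / 1e6
    return {tx: n / ((transcript_length_bp[tx] / 1e3) * total_millions)
            for tx, n in counts.items()}
```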
Web implementation
The web interface was constructed using several different programming packages. The code base for the majority of the project is PHP (https://secure.php.net/) and JavaScript (https://www.javascript.com/). Relational database queries to the backend are performed with the PHP Data Object (PDO) module, allowing for secure queries. An additional advantage of using the PDO module is that the code is compatible with standard database engines such as MySQL, PostgreSQL and SQLite. In order to display dynamic graphs of the data, we implemented the HighCharts JavaScript library (https://github.com/highslide-software/highcharts.com). Specifically, this project uses a PHP module which implements the HighCharts JavaScript library, freely available on GitHub (https://github.com/ghunti/HighchartsPHP). For dynamic text searches in portions of the website, the project implements Asynchronous JavaScript and XML (AJAX) technology using the package jQuery 1.11.3 (https://jquery.com/). Custom PHP and JavaScript code was written to develop a frontend website enabling BLAST [19] searches and the selection of multiple results for expression display. The site's frontend was written in HTML and JavaScript with BLAST search [19] and AJAX gene-identifier search forms allowing the user to select multiple results for expression display. (Abbreviations used in Table 1: SRA, Short Read Archive, NCBI [14]; GEO, Gene Expression Omnibus, NCBI [15]; ENA, European Nucleotide Archive [16].)
Database implementation
The database implementation uses a flexible storage schema to house the data. The storage table has the following MySQL (https://www.mysql.com/) storage datatypes: study_id (varchar), seq_id (varchar), tissue (varchar), mean (float), se (float), se_low (float) and se_high (float). Binary search tree indices (BTREE) were implemented to increase the speed of queries using the study_id and seq_id columns.
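As a minimal sketch, the schema above can be reproduced with any PDO-compatible engine; the snippet below uses SQLite via Python purely for illustration (the table name expression and the example query are our assumptions):

```python
# Illustrative re-creation of the WheatExp storage schema; the table name
# "expression" is an assumption, and in MySQL the indices default to BTREE.
import sqlite3

conn = sqlite3.connect("wheatexp.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS expression (
    study_id VARCHAR(64),  -- source RNA-seq study
    seq_id   VARCHAR(64),  -- Ensembl gene/transcript identifier
    tissue   VARCHAR(64),  -- tissue / developmental timepoint label
    mean     FLOAT,        -- mean RPKM/FPKM across replicates
    se       FLOAT,        -- standard error
    se_low   FLOAT,
    se_high  FLOAT
);
CREATE INDEX IF NOT EXISTS idx_study ON expression (study_id);
CREATE INDEX IF NOT EXISTS idx_seq   ON expression (seq_id);
""")

# The kind of indexed lookup the web frontend issues for one gene of interest:
rows = conn.execute("SELECT tissue, mean, se FROM expression WHERE seq_id = ?",
                    ("Traes_6AS_9E38A95CB.1",)).fetchall()
```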
System architecture
The WheatExp tool is housed on the GrainGenes server at the following URL: http://wheat.pw.usda.gov/WheatExp/.
Web interface
The WheatExp homepage includes a brief description of the database and project design as well as details of all currently available datasets (Fig. 1a). Our data processing pipeline allows for the rapid incorporation of complementary RNA-seq expression datasets as they are published, and we invite suggestions for the addition of new datasets from the user community. We anticipate regular expansions of the database to broaden the range of available expression data. This approach will maximize the utility of the database for researchers studying diverse aspects of wheat development and ensures access to the most relevant high-quality expression datasets. From this main hub (Fig. 1a), the database can be queried in one of two ways: either by entering the DNA or protein sequence of a gene of interest as a BLAST query, or by a text search for a known gene ID from the Ensembl genomics annotation platform [5] (e.g. Traes_6AS_9E38A95CB.1) or for an annotated functional term associated with the gene's encoded protein (e.g. "bHLH" or "Cytochrome P450"). For BLAST searches, results are displayed on a new page and include details of each BLAST alignment, the sequence and a link to the corresponding gene ID page on the external Ensembl genomics hub for simple cross-referencing (Fig. 1b). A maximum of six matched results may be selected for side-by-side display within the same graph to allow simple comparisons between multiple genes. While this feature was originally implemented to enable comparisons among wheat homoeologues, any set of up to six genes may be selected for comparison, regardless of their relationship.
Likewise, when browsing using the text search function, up to six genes can be selected for addition to the results list, which can subsequently be viewed side-by-side in the results window. For larger-scale analyses, tabular expression data for any number of genes can be downloaded by providing a list of Ensembl gene IDs of interest. The functional terms associated with each gene are obtained through standard gene annotation files in GFF3 format from the IWGSC, which are stored within the database for the text search function. We chose to adhere to the widely-used standard gene nomenclature format employed by the IWGSC and Ensembl genomics platform [5] and selected the set of annotated cDNA sequences from this platform as our mapping reference. External links to the annotated sequences for each gene are included in the results. This nomenclature format is increasingly becoming the standard for gene annotations within the plant research community, so our use of this reference will allow for simple translation between projects and will maintain complementarity with the IWGSC project. This will facilitate comparative genomics studies with model plant species and other economically-important crops, such as rice, barley and maize, as the genomic resources contained within the Ensembl platform for each of these species improve. Additionally, comparisons can be made with more distantly related species to analyze functional gene divergence during the course of evolution.
Graphical expression profiles from all datasets are presented on a single results page, displaying mean RPKM/FPKM values ± standard error of the mean (SEM) (Fig. 1c). Graphs can be downloaded in one of four image formats, and the data are also presented in an accompanying table, which can be exported in '.csv' format (Fig. 1c). Gene-level expression data can be downloaded separately, or in bulk as a single tabular file containing all data.
Expression data
All expression profiles in WheatExp are generated from RNA-seq datasets. This approach has several advantages over existing expression studies derived from microarray data, which until recently, was the standard technology used for large-scale expression analysis (e.g. "Plant Expression database (PLEXdb)", a database of microarray-based expression profiles in different plant species [20]). One of the advantages of RNA-seq is that it is an open platform that does not rely on predetermined sets of probes printed on a gene chip. In addition, this technology provides more reliable expression profiling across a broader dynamic range than is possible with microarrays.
An important advantage of the application of RNA-seq data in polyploid species is that it facilitates the distinction among homoeologues and recently-diverged paralogous genes by allowing the application of stringent read mapping thresholds. Our selection of only uniquely-mapped reads has the dual benefit that the expression data are not only robust, but also homoeologue-specific, since the differences between these genomes (on average 97 % identical) are distinguished by the selected mapping parameters. This is illustrated in Fig. 2 by two examples: CIRCADIAN CLOCK ASSOCIATED1, where the expression of the three homoeologous genes is approximately equal (Fig. 2a), and CONSTANS1, where the D-genome homoeologue contributes the majority of transcripts to the overall expression (Fig. 2b).
Simulated RNA-seq data
One drawback of using uniquely-mapped RNA-seq reads for expression analysis is that any read which maps equally well to identical regions in different genes is discarded, potentially resulting in an underestimation of the expression levels of highly similar genes [21]. To determine the extent of this effect in our database, we performed a simulated RNA-seq experiment. We generated 29.4 M synthetic 100 bp paired-end reads with random expression levels and Illumina HiSeq2000 error profiles ('ART', mode art-illumina, default parameters except -m 500, -s 100, -ss HS20 [22]). All reads were processed using the same pipeline as for all biological RNA-seq data. By comparing the known number of simulated reads with the number of mapped reads, we can determine for each contig the proportion of reads discarded during mapping. Using a set of 3,476 homoeologous triplets (=10,428 genes) identified in a previous study [7], we mapped the subset of reads originating from each homoeologue to a reference comprised only of their genome of origin (i.e. A-genome reads were mapped to A-genome transcripts, etc.). For the A, B and D genomes, an average of 98.6, 98.4, and 98.4 % of reads mapped uniquely to their transcripts of origin, respectively, demonstrating that only a small proportion of reads are discarded during mapping when their homoeologous genes are absent from the reference. When we repeated the mapping of all generated reads to the full reference, unique mapping rates were reduced to 82.4, 83.6 and 80.6 % for the A, B and D homoeologous triplets. In each case, this was a slightly lower unique mapping rate than for all remaining transcripts in our dataset (84.4 %). Despite this reduction in the mapping rate, we observed a high level of correlation between the number of generated reads and the observed mapped reads (r = 0.95, 0.96, 0.95 for A, B and D homoeologous triplets, Fig. 3). Therefore, while the estimated expression levels of homoeologous genes in our database are, on average, slightly reduced due to their sequence similarity, the reported expression remains closely correlated with the true expression level. Furthermore, this effect is approximately equal for transcripts originating from the three homoeologous wheat genomes (Fig. 3), demonstrating the absence of bias when comparing homoeologue-specific expression profiles for a gene of interest.
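The core of this comparison can be sketched in a few lines (illustrative only; the function and variable names below are ours, not the study's analysis scripts):

```python
# Illustrative check of mapping fidelity: correlate the known number of
# simulated reads per transcript with the number recovered after unique mapping.
import numpy as np

def mapping_fidelity(generated_counts, mapped_counts):
    """Return (fraction of reads recovered, Pearson r between counts)."""
    txs = sorted(generated_counts)
    gen = np.array([generated_counts[t] for t in txs], dtype=float)
    obs = np.array([mapped_counts.get(t, 0) for t in txs], dtype=float)
    recovery = obs.sum() / gen.sum()      # ~0.81-0.84 for homoeologous triplets
    r = np.corrcoef(gen, obs)[0, 1]       # the paper reports r = 0.95-0.96
    return recovery, r
```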
Limitations
The main application of WheatExp is to compare the relative expression levels of the different homoeologues of a single gene across different tissues, developmental stages, environmental conditions and genetic backgrounds. For users interested in comparing the expression of different genes, we have included a statement on the website indicating that comparisons among genes are valid only when the genes being compared have the same number of homoeologues in the reference genome. Based upon results from our simulated RNA-seq experiment, genes where one homoeologue is absent from the reference will exhibit a higher proportion of uniquely-mapped reads and the expression levels of the two remaining homoeologues may also be inflated by the incorrect mapping of reads from the absent homoeologue. Additionally, no expression data will be reported for any genes which lack annotation within the current IWGSC release and any contig assemblies which are duplicated in the reference assembly will exhibit a reduced number of uniquely mapped reads. However, our project design allows for regular updates and refining of the mapping reference as this is expanded through the IWGSC project. As the mapping reference is improved we will re-map and re-process each dataset to generate updated expression sets using new versions of the reference, reducing the incidence and impact of such bias.
Our approach and data analysis pipeline can be applied to other polyploid species for which a homoeologue-specific genomic assembly is available as a reference. A critical parameter that must be considered in this application is the average level of identity among homoeologues, since this will affect the selection of the threshold for uniquely-mapped reads and thus the ability to discriminate between homoeologues.
Conclusions
The increasing volume of expression data from RNA-seq studies represents a valuable source of information for the plant research community. We developed a pipeline tailored to polyploid wheat to rapidly process and analyze these data, and describe WheatExp, a database allowing the simple comparison of wheat homoeologue-specific sequences across a diverse set of temporal and spatial transcriptional profiles. Our database management is flexible, allowing for the incorporation of improvements in the coverage of the wheat genomic reference and the addition of complementary RNA-seq datasets released by third-party research groups. WheatExp provides simple, free access to a comprehensive array of expression data, empowering small labs and individual researchers to mine complex and valuable expression datasets.
Availability and requirements
WheatExp is a free database and visualization tool open to all users with no login requirements and can be accessed at the following URL: http://wheat.pw.usda.gov/WheatExp/. The web tool is functional in all modern web browsing environments including Google Chrome, Mozilla Firefox and Safari.
Availability of supporting data
All raw sequence data used to generate processed expression data for WheatExp is accessible from public sequence databases as described in Table 1. Processed counts and reference files are available for download through the WheatExp website. | 4,732 | 2015-12-01T00:00:00.000 | [
"Biology",
"Computer Science",
"Environmental Science"
] |
Continuous Bangla Speech Segmentation using Short-term Speech Features Extraction Approaches
This paper presents simple and novel feature extraction approaches for segmenting continuous Bangla speech sentences into words/sub-words. These methods are based on two simple speech features, namely the time-domain features and the frequency-domain features. The time-domain features, such as short-time signal energy, short-time average zero crossing rate and the frequency-domain features, such as spectral centroid and spectral flux features are extracted in this research work. After the feature sequences are extracted, a simple dynamic thresholding criterion is applied in order to detect the word boundaries and label the entire speech sentence into a sequence of words/sub-words. All the algorithms used in this research are implemented in Matlab and the implemented automatic speech segmentation system achieved segmentation accuracy of 96%. Keywords-Speech Segmentation; Features Extraction; Short-time Energy; Spectral Centroid; Dynamic Thresholding.
INTRODUCTION
Automated segmentation of speech signals has been under research for over 30 years [1]. Speech recognition systems require segmentation of the speech waveform into fundamental acoustic units [2]. Segmentation is the process of decomposing a speech signal into smaller units, and it is the very basic step in any voice-activated system, such as speech recognition and speech synthesis systems. Speech segmentation has been performed using wavelets [3], fuzzy methods [4], artificial neural networks [5] and Hidden Markov Models [6], but it was found that the results still do not meet expectations. In order to make results more accurate, groups of several features have been used [7, 8, 9 and 10]. This paper is a continuation of feature-extraction research for speech segmentation. The method implemented here is a very simple example of how the detection of speech segments can be achieved. This paper is organized as follows: Section 2 describes techniques for segmentation of the speech signal. In Section 3, we describe different short-term speech features. In Section 4, the methodological steps of the proposed system are discussed. Sections 5 and 6 describe the experimental results and conclusion, respectively.
II. SPEECH SEGMENTATION
Automatic speech segmentation is a necessary step used in speech recognition and synthesis systems. Speech segmentation means breaking continuous streams of sound into basic units, such as words, phonemes or syllables, that can be recognized. The general idea of segmentation can be described as dividing something continuous into discrete, non-overlapping entities [11]. Segmentation can also be used to distinguish different types of audio signals from large amounts of audio data, often referred to as audio classification [12].
Automatic speech segmentation methods can be classified in many ways, but one very common classification is the division into blind and aided segmentation algorithms. A central difference between aided and blind methods is in how much the segmentation algorithm uses previously obtained data or external knowledge to process the expected speech. We discuss these two approaches in the following subsections.
A. Blind segmentation
The term blind segmentation refers to methods where there is no pre-existing or external knowledge regarding the linguistic properties, such as orthography or the full phonetic annotation, of the signal to be segmented. Blind segmentation is applied in different applications, such as speaker verification systems, speech recognition systems, language identification systems, and speech corpus segmentation and labeling [13].
Due to the lack of external or top-down information, the first phase of blind segmentation relies entirely on the acoustic features present in the signal. The second phase, or bottom-up processing, is usually built on a front-end parametrization of the speech signal, often using MFCCs, LP coefficients, or the pure FFT spectrum [14].
B. Aided segmentation
Aided segmentation algorithms use some sort of external linguistic knowledge of the speech stream to segment it into corresponding segments of the desired type. An orthographic or phonetic transcription is used as a parallel input with the speech, or is used to train the algorithm [15]. One of the most common methods in ASR for utilizing phonetic annotations is HMM-based systems [16]. HMM-based algorithms have dominated most speech recognition applications since the 1980s due to their so far superior recognition performance and relatively small computational complexity in the field of speech recognition [17].
III. FEATURE EXTRACTION FOR SPEECH SEGMENTATION
The segmentation method described here is a purely bottom-up, blind speech segmentation algorithm. The general principle of the algorithm is to track the amplitude or spectral changes in the signal by using short-time energy or spectral features, and to detect segment boundaries at the locations where amplitude or spectral changes exceed a minimum threshold level. Two types of features are used for segmenting the speech signal: time-domain signal features and frequency-domain signal features.
A. Time-Domain Signal Features
Time-domain features are widely used for speech segment extraction. These features are useful when an algorithm with simple implementation and efficient calculation is needed. The most commonly used features are short-time energy and the short-term average zero-crossing rate.
1) Short-Time Signal Energy
Short-term energy is the principal and most natural feature that has been used. Physically, energy is a measure of how much signal there is at any one time. Energy is used to discover voiced sounds, which have higher energy than silence/unvoiced sounds, in continuous speech, as shown in Figure 1.
The energy of a signal is typically calculated on a short-time basis, by windowing the signal at a particular time, squaring the samples and taking the average [18]. The square root of this result, the engineering quantity known as the root-mean-square (RMS) value, is also used. The short-time energy function of a speech frame with length $N$ is defined as
$$E_n = \frac{1}{N}\sum_{m} \left[x(m)\,w(n-m)\right]^2 \qquad (1)$$
The short-term root-mean-square (RMS) energy of this frame is given by
$$E_{\mathrm{RMS},n} = \sqrt{\frac{1}{N}\sum_{m}\left[x(m)\,w(n-m)\right]^2} \qquad (2)$$
where $x(m)$ is the discrete-time audio signal and $w(m)$ is the rectangular window function, given by
$$w(n) = \begin{cases} 1, & 0 \le n \le N-1 \\ 0, & \text{otherwise} \end{cases} \qquad (3)$$
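For illustration, equations (1)-(3) can be sketched in a few lines (a Python sketch for clarity only; the system itself was implemented in Matlab, and the frame settings below follow Section IV):

```python
# Illustrative computation of short-time energy and RMS (Eqs. (1)-(3));
# the paper's implementation is in Matlab, this Python sketch is ours.
import numpy as np

def frame_signal(x, frame_len, hop_len):
    """Split a 1-D signal into overlapping frames (rectangular window, Eq. (3))."""
    n_frames = 1 + (len(x) - frame_len) // hop_len
    idx = np.arange(frame_len)[None, :] + hop_len * np.arange(n_frames)[:, None]
    return x[idx]

def short_time_energy(frames):
    """E_n = (1/N) * sum of squared samples per frame; RMS is its square root."""
    energy = np.mean(frames.astype(float) ** 2, axis=1)
    return energy, np.sqrt(energy)

# Section IV settings: 16 kHz sampling, 50 ms frames, 10 ms overlap (40 ms hop)
fs = 16000
x = np.random.randn(fs)  # stand-in for one second of recorded speech
frames = frame_signal(x, int(0.05 * fs), int(0.04 * fs))
energy, rms = short_time_energy(frames)
```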
2) Short-Time Average Zero-Crossing Rate
The average zero-crossing rate refers to the number of times speech samples change algebraic sign in a given frame. The rate at which zero crossings occur is a simple measure of the frequency content of a signal: it counts the number of times in a given time interval/frame that the amplitude of the speech signal passes through zero [19]. Unvoiced speech components normally have much higher ZCR values than voiced ones, as shown in Figure 2. The short-time average zero-crossing rate is defined as
$$Z_n = \frac{1}{2N}\sum_{m} \left|\operatorname{sgn}[x(m)] - \operatorname{sgn}[x(m-1)]\right|\, w(n-m) \qquad (4)$$
where
$$\operatorname{sgn}[x(m)] = \begin{cases} 1, & x(m) \ge 0 \\ -1, & x(m) < 0 \end{cases} \qquad (5)$$
and $w(n)$ is a rectangular window of length $N$, given in equation (3).
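A matching sketch of equations (4)-(5), again in illustrative Python rather than the paper's Matlab, reusing the frame_signal() helper from the energy example:

```python
# Illustrative short-time average zero-crossing rate (Eqs. (4)-(5)); operates
# on the (n_frames, N) array produced by frame_signal() in the energy sketch.
import numpy as np

def short_time_zcr(frames):
    """Z_n = (1/(2N)) * sum |sgn(x(m)) - sgn(x(m-1))| within each frame."""
    signs = np.where(frames >= 0, 1.0, -1.0)  # sgn as defined in Eq. (5)
    return 0.5 * np.mean(np.abs(np.diff(signs, axis=1)), axis=1)
```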
B. Frequency-Domain Signal Features
Most of the information in speech is concentrated in the 250 Hz - 6800 Hz frequency range [20]. In order to extract frequency-domain features, the discrete Fourier transform (which provides information about how much of each frequency is present in a signal) can be used. The Fourier representation of a signal shows the spectral composition of the signal. Widely used frequency-domain features are the spectral centroid and spectral flux feature sequences, both based on the discrete Fourier transform.
1) Spectral Centroid
The spectral centroid is a measure used in digital signal processing to characterize a spectrum. It indicates where the "center of gravity" of the spectrum lies. This feature is a measure of spectral position, with high values corresponding to "brighter" sounds [21], as shown in Figure 3. The spectral centroid $SC_i$ of the $i$-th frame is defined as the center of gravity of its spectrum and is given by
$$SC_i = \frac{\sum_{m=1}^{N} f(m)\, X_i(m)}{\sum_{m=1}^{N} X_i(m)} \qquad (6)$$
Here, $f(m)$ represents the center frequency of the $m$-th bin and $X_i(m)$ is the amplitude corresponding to that bin in the DFT spectrum. The DFT is given by
$$X_i(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi k n / N}, \qquad k = 0, 1, \ldots, N-1 \qquad (7)$$
and can be computed efficiently using a fast Fourier transform (FFT) algorithm [22].
2) Spectral flux
Spectral flux is a measure of how quickly the power spectrum of a signal is changing (as shown in Figure 4), calculated by comparing the power spectrum of one frame against that of the previous frame; it is the Euclidean distance between the two normalized spectra. Spectral flux can be used to determine the timbre of an audio signal, or in onset detection [23], among other things. The spectral flux $SF_i$ is given by
$$SF_i = \sum_{k=1}^{N} \left[EN_i(k) - EN_{i-1}(k)\right]^2 \qquad (8)$$
where
$$EN_i(k) = \frac{X_i(k)}{\sum_{l=1}^{N} X_i(l)} \qquad (9)$$
are the normalized DFT coefficients of the $i$-th short-term frame with length $N$, with $X_i(k)$ given in equation (7).
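Equations (6)-(9) can be sketched analogously (illustrative Python; frames as in the energy example, fs the sampling frequency):

```python
# Illustrative spectral centroid (Eq. (6)) and spectral flux (Eqs. (8)-(9))
# computed from the magnitude DFT (Eq. (7)) of each frame.
import numpy as np

def spectral_centroid(frames, fs):
    """Centre of gravity of each frame's magnitude spectrum, in Hz."""
    mag = np.abs(np.fft.rfft(frames, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / fs)  # bin centre frequencies
    return (mag * freqs).sum(axis=1) / (mag.sum(axis=1) + 1e-12)

def spectral_flux(frames):
    """Squared Euclidean distance between successive normalized spectra."""
    mag = np.abs(np.fft.rfft(frames, axis=1))
    norm = mag / (mag.sum(axis=1, keepdims=True) + 1e-12)
    flux = np.sum(np.diff(norm, axis=0) ** 2, axis=1)
    return np.concatenate([[0.0], flux])  # flux is undefined for the first frame
```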
C. Speech Segments Detection
After computing the speech feature sequences, a simple dynamic threshold-based algorithm is applied in order to detect the speech word segments. The following steps are included in this thresholding algorithm:
1. Get the feature sequences from the previous feature extraction module.
2. Apply median filtering to the feature sequences.
3. Compute the mean (average) values of the smoothed feature sequences.
4. Compute the histogram of the smoothed feature sequences.
5. Find the local maxima of the histogram.
6. If at least two maxima $M_1$ and $M_2$ have been found, compute the threshold as
$$T = \frac{W \cdot M_1 + M_2}{W + 1} \qquad (10)$$
where $W$ is a user-defined weight parameter [24]. Large values of $W$ obviously lead to threshold values closer to $M_1$. Here, we used $W = 100$.
The above process is applied to both feature sequences, yielding two thresholds: T1, based on the energy sequence, and T2, based on the spectral centroid sequence. After computing the two thresholds, the speech word segments are formed by successive frames for which the respective feature values are larger than both computed thresholds.
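A hedged sketch of this detection step follows; the histogram bin count, the median-filter kernel size and the fallback when fewer than two maxima are found are our assumptions, while W = 100 and the two-maxima rule come from the text:

```python
# Illustrative dynamic thresholding (Eq. (10)) and segment detection; bin
# count, kernel size and the single-maximum fallback are assumptions.
import numpy as np
from scipy.signal import medfilt, argrelextrema

def dynamic_threshold(feature, W=100, bins=30, kernel=5):
    smoothed = medfilt(feature, kernel_size=kernel)   # step 2: median filtering
    hist, edges = np.histogram(smoothed, bins=bins)   # step 4: histogram
    centers = 0.5 * (edges[:-1] + edges[1:])
    maxima = argrelextrema(hist, np.greater)[0]       # step 5: local maxima
    if len(maxima) < 2:                               # fallback (our assumption)
        return smoothed.mean(), smoothed
    M1, M2 = centers[maxima[0]], centers[maxima[1]]
    return (W * M1 + M2) / (W + 1), smoothed          # step 6: Eq. (10)

def detect_segments(energy, centroid, W=100):
    """Frames where BOTH features exceed their thresholds form word segments."""
    T1, e = dynamic_threshold(energy, W)
    T2, c = dynamic_threshold(centroid, W)
    return (e > T1) & (c > T2)  # boolean mask; runs of True are segments
```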
D. Post Processing of Detected Segments
As shown in Figure 6, detected speech segments are analyzed in a post-processing stage. Common segmentation errors are: short segments are usually noise/silence, and two segments with a short space in between may belong to the same word segment.
Post-processing with a rule base can fix these and similar mistakes. Waheed [25] proposed two rules: if a detected segment is shorter than a minimum duration, it is treated as noise/silence and discarded; and if the space between two segments is shorter than a minimum gap, the two segments are merged, and anything between the two segments that was previously left out is made part of the speech.
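These rules can be sketched as below; the duration thresholds min_len and max_gap are illustrative assumptions, since the exact values of Ref. [25] are not reproduced in this paper:

```python
# Illustrative post-processing: merge segments separated by a short gap,
# then drop segments too short to be words; both thresholds are assumptions.
def post_process(segments, min_len=0.1, max_gap=0.15):
    """segments: sorted list of (start_s, end_s) tuples in seconds."""
    merged = []
    for seg in segments:
        if merged and seg[0] - merged[-1][1] < max_gap:
            merged[-1] = (merged[-1][0], seg[1])  # merge across the short gap
        else:
            merged.append(seg)
    return [s for s in merged if s[1] - s[0] >= min_len]  # discard noise bursts
```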
IV. IMPLEMENTATION
The automatic speech segmentation system was implemented in a Windows environment using the MATLAB toolkit. The proposed speech segmentation system has six major steps, as shown in Figure 7.
A. Speech Acquisition
Speech acquisition is the capture of continuous Bangla speech sentences through a microphone.
Speech capturing, or speech recording, is the first step of the implementation. Recording was done by a native male speaker of Bangla. The sampling frequency is 16 kHz, the sample size is 8 bits, and a mono channel is used.
B. Signal Preprocessing
This step includes elimination of background noise, framing and windowing. Background noise is removed from the data so that only speech samples are input to further processing. The continuous speech signal is separated into a number of segments called frames, a process known as framing. After pre-emphasis, the filtered samples are converted into frames with a frame size of 50 ms, and each frame overlaps by 10 ms. To reduce the edge effects of each frame segment, windowing is performed. The window, w(n), determines the portion of the speech signal that is to be processed, by zeroing out the signal outside the region of interest. A rectangular window has been used.
C. Short-term Feature Extraction
After windowing, the short-term energy and spectral centroid features of each frame of the speech signal are computed. These features were discussed in detail in Section 3. In this step, median filtering of the feature sequences is also performed.
D. Histogram Computation
Histograms of both smoothed feature sequences are computed in order to find their local maxima, from which the threshold values are calculated.
E. Dynamic Thresholding
Dynamic thresholding is applied to both feature sequences, yielding two thresholds, T1 and T2, based on the energy sequence and the spectral centroid sequence, respectively.
After computing two thresholds, the speech word segments are formed by successive frames for which the respective feature values are larger than the computed thresholds.
F. Post-Processing
In order to segment words/sub-words, the detected speech segments are lengthened by five short-term windows (each of 50 ms) on both sides in the post-processing step. Two segments with a short gap in between are merged to obtain the final speech segments. These segmented speech words are saved in *.wav file format for further use.
V. EXPERIMENTAL RESULTS
In order to evaluate the performance of the proposed system, different experiments were carried out. All the techniques and algorithms discussed in this paper have been implemented in MATLAB 7.12.0. In this experiment, various speech sentences in the Bangla language have been recorded, analyzed, and segmented using time-domain and frequency-domain features with the dynamic thresholding technique. Figure 8 shows the filtered short-time energy and spectral centroid features of a Bangla speech sentence, where the boundaries of words are marked automatically by the system. Table 1 shows the detailed segmentation results for ten speech sentences and reveals that the average segmentation accuracy rate is 96.25%, which is quite satisfactory.
VI. CONCLUSION AND FURTHER RESEARCH
We have presented a simple speech feature extraction approach for segmenting continuous speech into words/sub-words in a simple and efficient way. The short-term speech features were selected for several reasons. First, they provide a basis for distinguishing voiced speech components from unvoiced ones: if the level of background noise is not very high, the energy of voiced segments is larger than the energy of silent or unvoiced segments. Second, if unvoiced segments simply contain environmental sounds, the spectral centroid of the voiced segments is again larger. Third, their change pattern over time may reveal the rhythm and periodicity of the underlying sound.
From the experiments, it was observed that some of the words were not segmented properly. This is due to several causes: (i) the utterance of words and sub-words differs depending on their position in the sentence, (ii) the pauses between words or sub-words are not identical in all cases because of the variability of the speech signals, and (iii) the non-uniform articulation of speech. Also, the speech signal is very sensitive to the speaker's properties such as age, sex, and emotion. The proposed approach shows good results in speech segmentation, achieving about 96% segmentation accuracy. This reduces the memory requirement and computational time of any speech recognition system.
The major goal of future research is to search for possible mechanisms that can be employed to enable top-down feedback and, ultimately, pattern discovery by learning. To design a more reliable system, future systems should employ linguistic knowledge (syntactic or semantic) and more powerful recognition approaches such as Gaussian Mixture Models (GMMs), Time-Delay Neural Networks (TDNNs), Hidden Markov Models (HMMs), fuzzy logic, and so on.
Figure 1. Original signal and short-time energy curves of a speech sentence.
Figure 2. Original signal and short-term average zero-crossing rate curves of a speech sentence.
Steps of the dynamic thresholding algorithm:
1. Get the feature sequences from the previous feature extraction module.
2. Apply median filtering to the feature sequences.
3. Compute the mean (average) values of the smoothed feature sequences.
4. Compute the histogram of the smoothed feature sequences.
5. Find the local maxima of the histogram.
6. If at least two maxima M1 and M2 have been found, compute the threshold from equation (10).
Figure 3. Original signal and spectral centroid features of a speech sentence.
Figure 4. Original signal and spectral flux features of a speech sentence.
Figure 5. Original speech signal and median-filtered feature sequence curves with threshold values for a speech sentence.
Figure 6. Post-processing of the detected speech segments.
Figure 7. Block diagram of the proposed automatic speech segmentation system: A. Speech Acquisition; B. Signal Preprocessing; C. Speech Features Extraction; D. Histogram Computation; E. Dynamic Thresholding; F. Post-Processing.
Figure 8. Segmentation results for a speech sentence containing five speech words. The first subfigure shows the original signal. The second subfigure shows the sequence of the signal's energy. In the third subfigure, the spectral centroid sequence is presented. In both cases, the respective thresholds are also shown. The final subfigure presents the segmented words in dashed circles.
| 3,522.4 | 2012-01-01T00:00:00.000 | ["Computer Science"] |
A reliable analytical technique for fractional Caudrey-Dodd-Gibbon equation with Mittag-Leffler kernel
Abstract The pivotal aim of the present work is to find the solution of the fractional Caudrey-Dodd-Gibbon (CDG) equation using the q-homotopy analysis transform method (q-HATM). The considered technique is a graceful amalgamation of the Laplace transform with the q-homotopy analysis scheme, with the fractional derivative defined by the Atangana-Baleanu (AB) operator. The fixed-point hypothesis is considered in order to demonstrate the existence and uniqueness of the obtained solution for the projected fractional-order model. In order to illustrate and validate the efficiency of the proposed technique, we analyse the projected model in terms of fractional order. Moreover, the physical behaviour of the q-HATM solutions is captured in plots for diverse fractional orders, and a numerical simulation is also demonstrated. The obtained results elucidate that the considered algorithm is easy to implement, highly methodical, accurate, and very effective for examining the nature of nonlinear differential equations of arbitrary order arising in the connected areas of science and engineering.
Introduction
Fractional calculus (FC) originated in Newton's time, but only lately has it fascinated the attention of many scholars. Over the last thirty years, the most intriguing leaps in scientific and engineering applications have been found within the framework of FC. The concept of the fractional derivative has been developed due to the complexities associated with heterogeneous phenomena. Fractional differential operators are capable of capturing the behaviour of multifaceted media having diffusion processes. FC has become a very essential tool, and many problems can be illustrated more conveniently and more accurately with differential equations of arbitrary order. Due to the swift development of mathematical techniques and computer software, many researchers have started to work on generalised calculus to present their viewpoints while analysing many complex phenomena.
Numerous pioneering directions are prescribed in the diverse definitions of fractional calculus by many senior researchers, which laid the foundation [1][2][3][4][5][6]. Calculus of fractional order is associated with practical ventures and is extensively employed in nanotechnology [7], human diseases [8,9], chaos theory [10], and other areas. The numerical and analytical solutions of the equations illustrating these models have an important role in portraying the nature of nonlinear problems arising in the connected areas of science.
In order to demonstrate the efficiency of the proposed scheme, we consider the fifth-order nonlinear CDG equation of the form [35,36]

$\frac{\partial u}{\partial t} + \frac{\partial^{5} u}{\partial x^{5}} + 30\, u \frac{\partial^{3} u}{\partial x^{3}} + 30 \frac{\partial u}{\partial x}\frac{\partial^{2} u}{\partial x^{2}} + 180\, u^{2} \frac{\partial u}{\partial x} = 0. \qquad (1)$

The above equation is a class of KdV equation and possesses distinct and diverse properties. The CDG equation is also familiar as the Sawada-Kotera equation [37]. Due to the importance of the considered problem, it has attracted the attention of many researchers from diverse areas. In 1984, Weiss illustrated the Painlevé property for Eq. (1) [38]. It has been proved that it has a strong physical background in fluids [39] and also has N-soliton solutions [40].
In the present scenario, many important nonlinear models are methodically and effectively analysed with the help of fractional calculus. Diverse definitions have been suggested by many senior research scholars, for instance Riemann, Liouville, Caputo, and Fabrizio. However, these definitions have their own limitations. The Riemann-Liouville derivative is unable to account for the importance of the initial conditions; the Caputo derivative overcame this shortcoming but cannot avoid the singular kernel of the phenomena. Later, in 2015, Caputo and Fabrizio overcame the above obstacles [41], and many researchers have considered this derivative in order to analyse and find the solution of diverse classes of nonlinear complex problems. But some issues were pointed out in the CF derivative: a kernel that is both non-singular and non-local is very essential in describing the physical behaviour and nature of nonlinear problems, and the CF kernel does not possess both properties. In 2016, Atangana and Baleanu introduced and nurtured a novel fractional derivative, namely the AB derivative. This novel derivative is defined with the aid of the Mittag-Leffler function [42]. This fractional derivative buries all the above-cited issues and helps us to understand natural phenomena systematically and effectively.
In the present framework, we consider the fractional Caudrey-Dodd-Gibbon (FCDG) equation of the form

$^{ABC}_{0}D^{\alpha}_{t} u(x,t) + \frac{\partial^{5} u}{\partial x^{5}} + 30\, u \frac{\partial^{3} u}{\partial x^{3}} + 30 \frac{\partial u}{\partial x}\frac{\partial^{2} u}{\partial x^{2}} + 180\, u^{2} \frac{\partial u}{\partial x} = 0, \quad 0 < \alpha \leq 1, \qquad (2)$

where α is the fractional order defined with the AB fractional operator. The fractional order is introduced in order to incorporate memory effects and hereditary consequences in the phenomenon, and these properties aid us to capture essential physical properties of the nonlinear problems.
Recently, many mathematicians and physicists have developed very effective and more accurate methods in order to find and analyse the solutions of complex and nonlinear problems arising in science and engineering. In this connection, the homotopy analysis method (HAM) was proposed by the Chinese mathematician Liao Shijun [43]. HAM has been profitably and effectively applied to study the behaviour of nonlinear problems without perturbation or linearization. But, for computational work, HAM requires huge time and computer memory. To overcome this, there is a need for the amalgamation of the considered method with well-known transform techniques.
In the present investigation, we put an effort to find and analyse the behaviour of the solution obtained for the FCDG equation by applying q-HATM. The proposed algorithm is the combination of q-HAM with the Laplace transform (LT) [44]. Since q-HATM is an improved scheme of HAM, it does not require discretization, perturbation, or linearization. Recently, due to its reliability and efficacy, the considered method has been extensively applied by many researchers to understand the physical behaviour of diverse classes of complex problems [45][46][47][48][49][50][51][52][53]. The projected method offers us more freedom to consider diverse classes of initial guesses and complex as well as nonlinear equation types; because of this, complex NDEs can be solved directly. The novelty of the proposed method is that it provides a modest algorithm to evaluate the solution, governed by the homotopy and auxiliary parameters, which provide rapid convergence of the obtained solution for the nonlinear portion of the given problem. Meanwhile, it has prodigious generality because it plausibly contains the results obtained by many algorithms such as q-HAM, HPM, ADM, and some other traditional techniques. The considered method can preserve great accuracy while decreasing the computational time and work in comparison with other methods. The considered nonlinear problem has recently fascinated the attention of researchers from different areas of science. Since the FCDG equation plays a significant role in portraying several nonlinear phenomena, which are generalizations of diverse complex phenomena, many authors have found and analysed its solution using analytical as well as numerical schemes [54][55][56][57][58][59][60][61].
Preliminaries
Recently, many authors have considered these derivatives to analyse diverse classes of models in comparison with the classical order as well as other fractional derivatives, and they show that the AB derivative is more effective in analysing the nature and physical behaviour of the models [62][63][64][65]. Here, we define the basic notions of Atangana-Baleanu derivatives and integrals [42].
Definition 2. The AB derivative of fractional order α ∈ [0,1] for a function f ∈ H¹(a,b), b > a, is defined (in the Caputo sense) as

$^{ABC}_{a}D^{\alpha}_{t}f(t) = \frac{B[\alpha]}{1-\alpha}\int_{a}^{t} f'(\vartheta)\,E_{\alpha}\!\left[-\alpha\frac{(t-\vartheta)^{\alpha}}{1-\alpha}\right]d\vartheta, \qquad (3)$

where B[α] is a normalization function with B[0] = B[1] = 1 and $E_{\alpha}$ is the Mittag-Leffler function.

Definition 3. The fractional AB integral related to the non-local kernel is defined by

$^{AB}_{a}I^{\alpha}_{t}f(t) = \frac{1-\alpha}{B[\alpha]}f(t) + \frac{\alpha}{B[\alpha]\Gamma(\alpha)}\int_{a}^{t} f(\vartheta)(t-\vartheta)^{\alpha-1}\,d\vartheta. \qquad (4)$

The Lipschitz condition holds for both the Riemann-Liouville-sense and Caputo-sense AB derivatives [42], and the corresponding fractional differential equation has a unique solution, which is defined through the AB integral (4) [42].
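Since the AB kernel is built from the Mittag-Leffler function, a small numerical sketch of its series evaluation may be useful; the truncation length and the sample values below are illustrative, and for production work a dedicated routine (e.g., a Garrappa-type algorithm) would be preferable.

```python
import math

def mittag_leffler(z, alpha, terms=100):
    """Truncated series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1).
    Adequate for moderate |z|; the series converges for all z but
    loses numerical accuracy for large negative arguments."""
    return sum(z**k / math.gamma(alpha * k + 1) for k in range(terms))

# Kernel of the ABC derivative at lag (t - tau), with alpha = 0.9:
alpha = 0.9
lag = 0.5
kernel = mittag_leffler(-alpha * lag**alpha / (1 - alpha), alpha)
print(kernel)
```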
Fundamental idea of the considered scheme
In this segment, we consider an arbitrary-order differential equation in order to demonstrate the fundamental solution procedure of the projected algorithm:

$^{ABC}_{0}D^{\alpha}_{t}v(x,t) + \mathcal{R}v(x,t) + \mathcal{N}v(x,t) = f(x,t), \quad 0 < \alpha \leq 1, \qquad (10)$

where $^{ABC}_{0}D^{\alpha}_{t}$ signifies the AB derivative of order α, f(x,t) is the source term, and $\mathcal{R}$ and $\mathcal{N}$ respectively denote the linear and nonlinear differential operators. On using the LT on Eq. (10), we have after simplification

$L[v(x,t)] - \frac{v(x,0)}{s} + \frac{1}{B[\alpha]}\left(1-\alpha+\frac{\alpha}{s^{\alpha}}\right)L[\mathcal{R}v + \mathcal{N}v - f] = 0. \qquad (11)$

The nonlinear operator is defined as follows:

$N[\varphi(x,t;q)] = L[\varphi(x,t;q)] - \frac{\varphi(x,0;q)}{s} + \frac{1}{B[\alpha]}\left(1-\alpha+\frac{\alpha}{s^{\alpha}}\right)L[\mathcal{R}\varphi + \mathcal{N}\varphi - f]. \qquad (12)$

Here, φ(x,t;q) is a real-valued function with respect to x, t, and $q \in [0, \frac{1}{n}]$. Now, we define a homotopy as follows:

$(1-nq)\,L[\varphi(x,t;q) - v_{0}(x,t)] = \hbar q\, N[\varphi(x,t;q)], \qquad (14)$

where L signifies the LT, $q \in [0, \frac{1}{n}]$ (n ≥ 1) is the embedding parameter, and ħ ≠ 0 is an auxiliary parameter. For q = 0 and q = 1/n, the following results hold true:

$\varphi(x,t;0) = v_{0}(x,t), \qquad \varphi\left(x,t;\tfrac{1}{n}\right) = v(x,t). \qquad (15)$

Thus, by intensifying q from 0 to 1/n, the solution φ(x,t;q) varies from the initial guess $v_{0}(x,t)$ to the solution v(x,t). By using the Taylor theorem near q, we expand φ(x,t;q) in series form:

$\varphi(x,t;q) = v_{0}(x,t) + \sum_{m=1}^{\infty} v_{m}(x,t)\,q^{m}, \quad \text{where} \quad v_{m}(x,t) = \frac{1}{m!}\left.\frac{\partial^{m}\varphi(x,t;q)}{\partial q^{m}}\right|_{q=0}. \qquad (16)$

The series (16) converges at q = 1/n for a proper choice of $v_{0}(x,t)$, n, and ħ. Now, differentiating the homotopy (14) m times with respect to q, dividing by m!, and then putting q = 0, we obtain

$L[v_{m}(x,t) - k_{m}v_{m-1}(x,t)] = \hbar\,\Re_{m}(\vec{v}_{m-1}), \qquad (19)$

where the vectors are defined as $\vec{v}_{m} = \{v_{0}(x,t), v_{1}(x,t), \ldots, v_{m}(x,t)\}$. On applying the inverse LT to Eq. (19), one gets

$v_{m}(x,t) = k_{m}v_{m-1}(x,t) + \hbar\,L^{-1}[\Re_{m}(\vec{v}_{m-1})], \qquad (21)$

where

$\Re_{m}(\vec{v}_{m-1}) = L[v_{m-1}] - \left(1-\frac{k_{m}}{n}\right)\left(\frac{v(x,0)}{s} + \frac{1}{B[\alpha]}\left(1-\alpha+\frac{\alpha}{s^{\alpha}}\right)L[f]\right) + \frac{1}{B[\alpha]}\left(1-\alpha+\frac{\alpha}{s^{\alpha}}\right)L[\mathcal{R}v_{m-1} + H_{m-1}] \qquad (22)$

and $k_{m} = 0$ for m ≤ 1, $k_{m} = n$ for m > 1. In Eq. (22), $H_{m}$ signifies the homotopy polynomial, presented as follows:

$H_{m} = \frac{1}{m!}\left[\frac{\partial^{m} N[\varphi(x,t;q)]}{\partial q^{m}}\right]_{q=0}, \quad \text{with} \quad \varphi(x,t;q) = \varphi_{0} + q\varphi_{1} + q^{2}\varphi_{2} + \cdots. \qquad (24)$

By the aid of Eqs. (21) and (22), one obtains the recursive scheme (25) for the terms $v_{m}(x,t)$, and using Eq. (25) one can generate the series of $v_{m}(x,t)$. Lastly, the series q-HATM solution is defined as

$v(x,t) = v_{0}(x,t) + \sum_{m=1}^{\infty} v_{m}(x,t)\left(\frac{1}{n}\right)^{m}. \qquad (26)$
Solution for FCDG equation
In order to present the solution procedure and efficiency of the proposed scheme, in this segment we consider the FCDG equation of fractional order. Further, with the help of the obtained results, we attempt to capture the behaviour of the q-HATM solution for different fractional orders. With the help of Eq. (2), we take the projected equation (27) with the initial condition

$u(x,0) = \mu\,\mathrm{sech}^{2}(\mu x). \qquad (28)$
Taking the LT of Eq. (27) and then using Eq. (28), we get the transformed equation (29). The nonlinear operator N is presented with the help of the proposed algorithm in Eq. (30). The m-th order deformation equation obtained by q-HATM with H(x,t) = 1 is given by Eq. (31), with $\Re_{m}$ defined in Eq. (32). On applying the inverse LT to Eq. (31), it reduces to Eq. (33). Simplifying the above equation systematically with the initial guess $u_{0}(x,t) = \mu\,\mathrm{sech}^{2}(\mu x)$, we can evaluate the terms of the series solution.
Existence of solutions for the projected model
Here, we consider the fixed-point theorem in order to demonstrate the existence of the solution of the considered model. Since the considered model cited in system (27) is non-local as well as complex, no particular algorithm or method exists to evaluate its exact solution; however, under some particular conditions, the existence of the solution is assured. Now, the system (27) is rewritten, using Theorem 2, as the equivalent Volterra integral equation (36). Theorem 3. The kernel G satisfies the Lipschitz condition and contraction if the condition 0 ≤ η < 1 holds, where η is the constant, depending on δ and on the bounds a and b, defined in the proof below.
Proof. In order to prove the required result, we consider two functions $u_{1}$ and $u_{2}$ and bound $\|G(u_{1}) - G(u_{2})\|$ in terms of $\|u_{1} - u_{2}\|$, where δ denotes the bound of the differential operator. Since $u_{1}$ and $u_{2}$ are bounded, we have $\|u_{1}(x,t)\| \leq a$ and $\|u_{2}(x,t)\| \leq b$.
Putting η for the resulting constant (a combination of δ and the bounds a and b) in the above inequality, we then have

$\|G(u_{1}) - G(u_{2})\| \leq \eta\,\|u_{1} - u_{2}\|. \qquad (38)$

This gives the Lipschitz condition for G; further, if 0 ≤ η < 1, it also implies the contraction. The recursive form of Eq. (36) is defined in Eq. (39), with the associated initial condition (40). The successive difference between the terms is presented in Eq. (41); notice that the partial sums satisfy Eq. (42). By using Eq. (38) after applying the norm to Eq. (41), one gets the bound (43). We prove the following theorem by using the above result. Theorem 4. The solution of the system (27) exists and is unique if there is a specific t for which ηt < 1. Proof. Let us consider the bounded function u(x,t) satisfying the Lipschitz condition. Then, by Eqs. (42) and (44), the continuity as well as the existence of the obtained solution is proved. Subsequently, in order to show that Eq. (44) is a solution of Eq. (29), we consider the remainder terms $K_{n}(x,t)$; similarly, at $t_{1}$ we obtain the analogous bound. As n approaches ∞, we can see from Eq. (50) that $K_{n}(x,t)$ tends to 0.
Next, it is necessary to demonstrate the uniqueness of the solution of the considered model. Suppose $u^{*}(x,t)$ is another solution; then, applying the norm to the difference of the two solutions and simplifying with Eq. (48), we obtain a bound of the form $(1-\eta)\|u - u^{*}\| \leq 0$, from which it is clear that $u = u^{*}$. But 0 < λ < 1; therefore $\lim_{n,m\to\infty} \|S_{n} - S_{m}\| = 0$. Hence, $\{S_{n}\}$ is a Cauchy sequence. Similarly, we can demonstrate the second case. This proves the required result. Theorem 6. For the series solution (26) of Eq. (10), the maximum absolute error is bounded as presented in Eq. (56). Proof: With the help of Eq. (56), the stated bound follows directly. This ends the proof.
Results and discussion
In this manuscript, we find the solution of the CDG equation of arbitrary order using a novel scheme, namely q-HATM, with the help of the Mittag-Leffler law. In the present segment, we demonstrate the effect of the fractional order on the obtained solution with the distinct parameters offered by the proposed method. In Figures 1 to 3, the nature of the q-HATM solution for different arbitrary orders is presented in terms of 2D plots. From these plots, we can see that the considered problem conspicuously depends on the fractional order. In order to analyse the behaviour of the obtained solution with respect to the homotopy parameter (ħ), the ħ-curves are drawn for diverse µ and presented in Figure 4. In these plots, the horizontal line segment represents the convergence region of the q-HATM solution, and these curves aid us to adjust and control the convergence region of the solution. For an appropriate value of ħ, the achieved solution quickly converges to the exact solution. Further, small variations in the physical behaviour of complex models stimulate enormous new results for analysing and understanding nature in a better and more systematic manner. Moreover, from all the plots we can see that the considered method is accurate and very effective for analysing the considered complex fractional-order equations.
Conclusion
In this study, q-HATM is applied lucratively to find the solution of the arbitrary-order CDG equation. Since AB derivatives and integrals of fractional order are defined with the help of the generalized Mittag-Leffler function as a non-singular and non-local kernel, the present investigation illuminates the effectiveness of the considered derivative. The existence and uniqueness of the obtained solution are demonstrated with the fixed-point hypothesis. The results obtained by the proposed scheme are more stimulating as compared to the results available in the literature. Further, the projected algorithm finds the solution of the nonlinear problem without considering any discretization, perturbation, or transformations. The present investigation illuminates that the considered nonlinear phenomena noticeably depend on the time history and the time instant, which can be proficiently analysed by applying the concept of calculus with fractional order. The present investigation helps researchers to study the behaviour of nonlinear problems and gives very interesting and useful consequences. Lastly, we can conclude that the projected method is extremely methodical, effective, and very accurate, and can be applied to analyse diverse classes of nonlinear problems arising in science and technology.
[5] Podlubny I., Fractional Differential Equations, Academic Press, New York, 1999.
| 3,588.2 | 2020-01-01T00:00:00.000 | ["Mathematics"] |
Fish herpesvirus diseases : a short review of current knowledge
Fish herpesviruses can cause significant economic losses in aquaculture, and some of these viruses are oncogenic. The virion morphology and genome organization of fish herpesviruses are generally similar to those of higher vertebrates, but the phylogenetic connections between herpesvirus families are tenuous. In accordance with the new taxonomy, fish herpesviruses belong to the family Alloherpesviridae in the order Herpesvirales. Fish herpesviruses can induce diseases ranging from mild, inapparent infections to serious ones that cause mass mortality. The aim of this work was to summarize the present knowledge about fish herpesvirus diseases. Keywords: Alloherpesviridae, CyHV-3, CyHV-2, CyHV-1, IcHV-1, AngHV-1. Herpesviruses comprise a numerous group of large DNA viruses with a common virion structure and biological properties (McGeoch et al. 2008; Mettenleiter et al. 2008). They are host-specific pathogens. Apart from three herpesviruses found recently in invertebrate species, all known herpesviruses infect vertebrates, from fish to mammals (Davison et al. 2005a; Savin et al. 2010). According to a new classification accepted by the International Committee on Taxonomy of Viruses (http://ictvonline.org), all herpesviruses have been incorporated into a new order named Herpesvirales, which has been split into three families. The revised family Herpesviridae contains mammalian, avian, and reptilian viruses; the newly-created family Alloherpesviridae contains herpesviruses of fish and amphibians; and the new family Malacoherpesviridae comprises a single invertebrate herpesvirus (Ostreid herpesvirus). The current taxonomy of fish herpesviruses, including most known and unassigned virus species, is shown in Table 1. The virion of herpesviruses comprises four morphologically distinct parts: a core consisting of linear dsDNA; an icosahedral capsid; a proteinaceous tegument; and a host-derived lipid envelope with viral proteins embedded in it (Davison et al. 2005b). Although the specific virion architecture is common among all herpesviruses, the genetic relationships between the three families of Herpesvirales are very tenuous. The only sequence-based evidence for a common ancestor of all herpesviruses is the gene encoding the ATPase subunit of the terminase, a protein complex responsible for DNA packaging into the procapsid (Davison 2002; McGeoch et al. 2006). The family Alloherpesviridae is a highly diverse group with a genome size ranging from 134 kbp for channel catfish virus (the smallest sequenced genome) to 295 kbp for cyprinid herpesvirus 3, which is the largest known genome in the order Herpesvirales (McGeoch et al. 2006; Davison 2010). The phylogenetic studies of Waltzek et al. (2009) revealed two major clades within the family Alloherpesviridae: the first comprises viruses from cyprinid and anguillid hosts, and the second comprises ictalurid, salmonid, acipenserid, and ranid taxa.
Cyprinid herpesvirus 1 (CyHV-1), the agent of carp pox, has been reported in the United States, Israel, Russia, and some Asian regions (China, Japan, Korea, Malaysia). Tumours found in fish are white to gray and can cover large areas of the body surface, including the head and fins. Adult infected fish show no distinct behavioral or clinical signs; however, in juveniles, CyHV-1 infection results in clinical disease and high mortality. Infected juvenile carp may develop appetite loss, distended abdomens, exophthalmia, haemorrhages on the operculum and abdomen, and darkened skin pigmentation (Plumb and Hanson 2011a). The disease is seasonal: lesions usually develop when the water temperature is lower than 15 °C and regress when the water temperature increases (Palmeiro and Scott Weber III 2010). Sano et al. (1993) demonstrated that the CyHV-1 genome is present in nervous and subcutaneous tissues after disease regression, suggesting that the virus becomes latent, which might explain the recurrence of lesions.
Genus Cyprinivirus

Cyprinid herpesvirus 3 (CyHV-3, also named koi herpesvirus), first reported in 1998 (Ariav et al. 1998), is responsible for severe epizootic disease and mass mortality among Cyprinus carpio, causing enormous economic losses in carp industries worldwide (Hedrick et al. 2000; Ilouze et al. 2006). The disease affects common carp and its varieties (koi carp, ghost carp) and has spread around the world except for Australia and South America (Manual of Diagnostic Tests for Aquatic Animals 2010). Outbreaks appear seasonally when the water temperature is 18-28 °C (Perelberg et al. 2003). Affected fish exhibit apathy, gill epithelium necrosis, pale patches on the skin, pale and irregularly colored gills, increased mucus production, and enophthalmia (Hedrick et al. 2000; Pokorova et al. 2005). The study by Perelberg et al. (2003) found fry to be more susceptible to infection than mature fish. Further experimental trials conducted in Japan indicated that carp larvae (3-4 days post hatch) are resistant to CyHV-3 infection, although the same fish died when exposed to the virus two months later (Ito et al. 2007). The latest research using bioluminescence imaging showed that the skin is the major portal of entry for CyHV-3 in carp (Costes et al. 2009). It has been demonstrated that the virus is transmitted horizontally through fish excrement (Dishon et al. 2005) and viral particle secretion into the water (Perelberg et al. 2003). Ilouze et al. (2011) suggested that the virus might be transferred by birds moving sick fish from pond to pond. There are no data indicating that CyHV-3 is transmitted vertically. Most herpesviruses have been demonstrated to persist in the host as a latent virus, and there is some evidence suggesting that CyHV-3 can remain latent in clinically healthy fish, and that the infection can be reactivated by temperature stress (Gilad et al. 2004; Eide et al. 2011). Nested PCR, real-time PCR, and semi-nested PCR seem to be the most useful and sensitive techniques for detecting CyHV-3 (Bergmann et al. 2010). Since koi herpesvirus seems to pose the greatest threat to the world carp population, most investigations are currently focused on developing efficient prophylaxis and control methods for CyHV-3 disease.
Anguillid herpesvirus 1 (AngHV-1) frequently causes disease in cultured and wild eels (Sano et al. 1990; Haenen et al. 2002). Van Beurden et al. (2010) sequenced the complete genome of the virus and confirmed that AngHV-1 is a new virus species within the family Alloherpesviridae. Although AngHV-1 infects eels rather than cyprinid species, phylogenetic studies have shown that AngHV-1 is most closely related to the cyprinid herpesviruses (van Beurden et al. 2010). Thus, in the newest ICTV (International Committee on Taxonomy of Viruses) classification, AngHV-1 is assigned to the genus Cyprinivirus. AngHV-1 was isolated from farmed European eel (Anguilla anguilla) and Japanese eel (Anguilla japonica) in Japan (Sano et al. 1990), from Japanese eel in Taiwan (Ueno et al. 1992), and from European eel in many European countries (Davidse et al. 1999; Haenen et al. 2002; van Ginneken et al. 2004; Jakob et al. 2009). The virus was also found in wild eel populations in the Netherlands (van Ginneken et al. 2004) and Germany (Jakob et al. 2009). While clinical signs can vary among and within outbreaks, apathy, haemorrhages in the skin, fins, and gills, congested gill epithelium, and pale liver are the most frequent symptoms. Mortality ranges from 1 to 10% (Sano et al. 1990; Haenen et al. 2002). Diagnosis is usually made by virus isolation in cell culture (Sano et al. 1990; Haenen et al. 2002) or AngHV-1-specific PCR (Rijsewijk et al. 2005). Although the mortality rates are not very high, the control of disease outbreaks can be difficult because the virus is present in wild eel populations and has the ability to remain latent in apparently healthy fish (van Nieuwstadt et al. 2001).
Genus Ictalurivirus
Ictalurid herpesvirus 1 (IcHV-1) causes an acute haemorrhagic disease in cultured juvenile channel catfish, Ictalurus punctatus. Epizootics usually involve high mortality and occur in the southern United States on commercial fish farms. Outbreaks of IcHV-1 disease have yet to be observed in wild populations, suggesting that intensive fish farming may be a predictive factor. The disease occurs in the summer months, and mortality rates can reach 100% at 28 °C (Kucuktas and Brady 1999; Davison 2008). Clinical signs of IcHV-1 disease include erratic or spiral swimming, exophthalmia, swollen abdomen, pale or haemorrhagic gills, and haemorrhages at the fin bases and throughout the skin. Internal signs include yellowish fluid in the peritoneal cavity, a dark, enlarged spleen, and extensive haemorrhages in the liver and kidneys (Smail and Munro 2001; Davison 2008; Plumb and Hanson 2011b). Transmission of the virus occurs by two routes: horizontally, through the water and by direct contact; and also vertically (Kucuktas and Brady 1999). There is some evidence suggesting that survivors can be a reservoir for latent virus, which could cause disease reactivation and transmission (Davison 2008). Although IcHV-1 can cause significant losses on farms because of high mortality among young fish, its overall significance in the channel catfish industry is not very high (Plumb and Hanson 2011b).
Ictalurid herpesvirus 2 (IcHV-2), also named black bullhead herpesvirus and Ameiurine herpesvirus 1, is associated with disease causing mass mortality among black bullhead, Ameiurus melas, in Italy (Hedrick et al. 2003). The virus causes clinical signs similar to those of IcHV-1 in channel catfish, including spiral swimming and haemorrhages in the skin and internal organs and at the bases of fins (Hedrick et al. 2003; Plumb and Hanson 2011b). Hedrick et al. showed that IcHV-2 is also pathogenic for channel catfish fry and fingerlings, which poses a potential danger to the channel catfish industry (Hedrick et al. 2003).
Acipenserid herpesvirus 2 (AciHV-2), commonly named white sturgeon herpesvirus 2 (WSHV-2), is associated with disease of cultured and wild white sturgeon, Acipenser transmontanus, in North America and Italy. Affected fish display pale lesions on the body surface, lethargy, appetite loss, and erratic swimming. Mortality in adult fish is less than 10%. Experimental studies revealed that pallid sturgeon and shovelnose sturgeon are also susceptible to AciHV-2 infection. Phylogenetic research showed that AciHV-2 is more closely related to IcHV-1 than to AciHV-1 and, according to the current taxonomy of fish herpesviruses, is assigned to the genus Ictalurivirus (Hua and Wang 2005; Plumb and Hanson 2011c).
Genus Salmonivirus
Salmonid herpesvirus 1 (SalHV-1) is a causative agent of mild disease in rainbow and steelhead trout and was reported in the USA. Disease outbreaks occur when the water temperature is 10 °C or less. It causes darkened pigmentation, apathy, pale gills, exophthalmia, and a distended abdomen. In experimental trials, the virus was also pathogenic for chum salmon and chinook salmon, while brook trout, brown trout, Atlantic salmon, and coho salmon were not susceptible to SalHV-1 infection (Plumb and Hanson 2011d). Sequence analysis by Davison (1998) indicated that SalHV-1 is closely related to SalHV-2.
Salmonid herpesvirus 2 (SalHV-2), more commonly known as Oncorhynchus masou virus, causes significant economic losses of farmed and wild salmonid fish in Japan. The susceptible fish species include masu, coho, sockeye, and chum salmon, and rainbow trout. Clinical signs of the disease depend on fish age. In juveniles, the virus causes acute infection with apathy, exophthalmia, and skin ulcers, and cumulative mortality can be as high as 100% (in 1-month-old sockeye salmon). Four months to one year post infection, 12-100% of surviving fish develop tumours, located mainly around the mouth and head. SalHV-2 is transmitted horizontally through the water and vertically via ovarian fluids (Plumb and Hanson 2011d). The disease is successfully controlled in hatcheries by disinfecting all collected eggs with iodophore after fertilization and again at the early-eyed stage (Yoshimizu and Nomura 2001).
Salmonid herpesvirus 3 (SalHV-3) was originally found in cultured juvenile lake trout, and it causes acute disease with mortality approaching 100%. Clinical signs are nonspecific and include lethargy interspersed with periods of hyperexcitability, spiral swimming, and haemorrhages in the eyes and mouth and at the base of the fins. Histopathologically, affected fish exhibit hyperplasia, hypertrophy, and necrosis of epidermal cells. Transmission experiments showed that only lake trout and hybrids of lake and brown trout are susceptible to the disease (Bradley et al. 1989; McAllister and Herman 1989). The disease is known to appear only in the region of the Great Lakes in North America when water temperatures are 6-15 °C (Plumb and Hanson 2011d). Salmonid herpesvirus 3 is transmitted horizontally by waterborne contact (McAllister and Herman 1989), and some data suggest that it can also be transmitted vertically (Kurobe et al. 2009). Survivors of SalHV-3 infection might become long-term virus carriers (McAllister 1991).
Unassigned fish herpesviruses
Acipenserid herpesvirus 1 (AciHV-1), also known as white sturgeon herpesvirus 1, was initially isolated from cultured juvenile white sturgeon, Acipenser transmontanus, in California (USA) hatcheries. Experimental infection resulted in a 35% mortality rate with no specific external signs. Histopathologically, the virus causes epidermal lesions and diffuse dermatitis. AciHV-1 appears to be less virulent under experimental conditions than AciHV-2 (Plumb and Hanson 2011c).
In conclusion, the number of identified fish herpesviruses is increasing every year. Most fish herpesviruses cause mild infections in the natural environment, but under aquaculture conditions these viruses can cause severe clinical diseases with high rates of mortality. Most of the discovered alloherpesviruses are epitheliotropic, and some might be oncogenic.
These viruses are a highly diverse group with two apparent major classes: the first includes viruses that have large genomes (anguillid and cyprinid species), while the second includes viruses with smaller genomes, such as IcHV-1. Although the phylogenetic connections among all herpesviruses are tenuous, fish herpesviruses share at least several structural and biological properties with other herpesvirus families: a similar virion structure; the ability of many fish herpesviruses to cause persistent, life-long (latent) infections; and a high level of host specificity, although there is some evidence to suggest that interspecies transmission of fish herpesviruses does exist (Waltzek et al. 2009; Bandín and Dopazo 2011).
Table 1. The current taxonomy of fish herpesviruses, including most known and unassigned virus species (according to:
| 3,008.6 | 2012-01-01T00:00:00.000 | ["Biology", "Environmental Science"] |
Molecular Dynamic Investigation of the Anisotropic Response of Aluminum Surface by Ions Beam Sputtering
Aluminum optics are widely used in modern optical systems because of their high specific stiffness and high reflectance. With the applied optical frequency band moving to the visible, traditional processing technology cannot meet the required processing precision. Ion beam sputtering (IBS) provides a highly deterministic technology for high-precision aluminum optics fabrication. However, the surface quality deteriorates after IBS; the interaction between the bombarding atoms and the surface, and the surface morphology evolution mechanism, are not clear, and systematic research is needed. Thus, in this paper, the IBS process for single crystal aluminum with different crystallographic orientations is studied by the molecular dynamics method. The ion beam sputtering process is first demonstrated. Then, the variation of the sputter yield across the three crystal faces is analyzed; the sputter yield difference between crystal surfaces causes the appearance of the relief structure. Subsequently, the gravel structure generated on the single crystal surfaces comes to dominate the morphology evolution, and the state of atom diffusion on the specific crystal surfaces determines the form of the gravel structure. Furthermore, the form and distribution of subsurface damage and the stress distribution of the three crystal surfaces are analyzed. Although there are great differences in defect distribution, no stress concentration was found in the three workpieces, which verifies that ion beam sputtering is a stress-free machining method. The process of IBS and the mechanism of morphology evolution of aluminum are revealed. The observed regularities and mechanisms will provide guidance for the application of IBS in aluminum optics manufacturing.
Introduction
With fine mechanical properties, light weight, and high reflectivity, aluminum has been widely used in optical systems in recent years, especially in micro-satellites with extreme requirements on weight and volume [1][2][3]. Currently, the applied optical frequency band of aluminum optics is moving from the infrared (IR) to the visible (VIS), which brings a great challenge to fabrication [4]. For application in the visible band, aluminum optics should possess nanometer-scale surface profile precision and subnanometer-scale surface roughness. Usually, single-point diamond turning (SPDT) and magnetorheological finishing (MRF) are widely used in the fabrication of aluminum optics [5,6]. However, the precision of these methods cannot meet the requirements for visible-light usage. Moreover, due to its high chemical reactivity and low surface hardness, contact machining will contaminate the aluminum surface, which is often associated with a reduction in surface quality [7].
As a highly deterministic machining method, ion beam sputtering (IBS) achieves surface profile correction by removing material through physical sputtering effects [8,9]. When receiving enough energy from the bombarding atoms, surface atoms are sputtered from the surface. IBS is believed to possess the highest machining precision. Moreover, the whole process is conducted in a near-vacuum environment, which does not cause contamination [10,11]. All these advantages show that IBS can be a better polishing method for visible-range aluminum optics than contact polishing methods such as MRF. In recent years, more research has focused on roughness and surface morphology evolution during IBS. Remarkable experimental work has demonstrated the formation of specific micro-topographies, such as ripples [12,13] and nanoparticles [14,15]. However, most of this research has been conducted on amorphous materials such as fused silica; there is a great lack of studies on aluminum. A systematic experiment on the roughness evolution of different materials during IBS was carried out by C. M. Egert [16]: aluminum shows poor performance in roughness reduction. The same phenomenon has also been observed in experiments conducted by our research group. Unlike the materials traditionally used in IBS, peculiar relief and gravel structures emerge on the aluminum surface during IBS [17]. In order to regulate the surface microscopic morphology and roughness, the process of IBS and the mechanism of morphology evolution need to be revealed. However, experimentalists face the daunting task of characterizing the material removal and surface evolution of aluminum at nanoscale time and space scales. Thus, the mechanism is still unrevealed.
Molecular dynamics (MD) simulations play a key role in understanding experimental results, revealing mechanisms, and predicting outcomes [18][19][20]. Remarkable works have been successfully conducted, revealing the mechanism of interaction between energetic particles and many other materials. Using MD, Wang et al. revealed the elastoplastic transformation process of monocrystalline silicon induced by ion implantation [21]; by optimizing the ion implantation, an amorphous layer on the micron scale can be generated. Xiao et al. studied the surface damage evolution of nanoscale silicon caused by Ga-focused ion beam machining, with simulation results in good agreement with experiments [22]. Xiao et al. also successfully revealed the material removal process and surface morphology evolution of single crystal silicon during IBS [23]. However, there is limited work related to metallic materials such as aluminum [24][25][26][27][28][29]. Moreover, the aluminum used in IBS is usually in an alloy state, with multiple crystal faces on the surface. In order to regulate the surface microscopic morphology of aluminum, it is necessary to fully understand the mechanism for each crystallographic orientation through MD.
In this work, the IBS process and surface morphology evolution of three kinds of crystal surfaces are studied. In Section 2, the MD simulation method, defect analysis, and visualization techniques are presented. In Section 3, the ion beam sputtering process and its mechanism, surface morphology evolution, sputter yield, and subsurface damage are analyzed and discussed. Finally, a conclusion is summarized in Section 4. The results of this study will be beneficial to understanding the IBS of aluminum and promoting the application of IBS in the field of aluminum optics manufacture, which will significantly improve the machining efficiency and precision of aluminum optics.
Simulation Description
The MD model of Ar ion sputtering consists of a single crystal aluminum workpiece and Ar atoms, as indicated in Figure 1. The aluminum workpiece has dimensions of 15, 15, and 38 nm in the X, Y, and Z directions, respectively. To investigate the influence of crystal orientation, three aluminum workpieces with Al(110), Al(111), and Al(001) free surfaces in the Z direction are considered. Ar atoms bombard the workpiece vertically along the Z direction. Because IBS is conducted in a near-vacuum environment, the influence of the environment can be ignored. As shown in Figure 1, the aluminum sample is divided into three regions: boundary atoms that fix the sample in space, thermostat atoms that imitate heat dissipation, and Newtonian atoms that obey Newton's second law [30][31][32][33]. The thicknesses of the thermostat layer and boundary layer are both 1 nm. The initial temperature of the aluminum sample is maintained at 293 K. Periodic boundary conditions are applied in the X and Y directions to eliminate size effects.
During IBS, Ar is first ionized in the cavity and accelerated by the screen-grid voltage. Then, a neutralizer supplies electrons to neutralize the charge of Ar+. Thus, the Ar atoms have the same bombardment speed, as shown in Table 1. The Ar atoms bombard the workpiece surface in a steady stream; thus, in this study, we adopt a continuous bombardment scheme. First, k Ar atoms are distributed randomly in a 10 × 10 × 1 nm³ box to represent a specific ionic concentration in the cavity. Then, the box is replicated n times in the Z direction, with the Ar atoms in each box randomly distributed, so the total number of Ar atoms bombarding the surface is k × n. All MD simulations are based on LAMMPS, developed by Sandia National Laboratory (PO Box 5800, Albuquerque, NM, USA). OVITO is utilized to visualize the MD simulation of the IBS process. The velocity-Verlet algorithm is applied to integrate Newton's equations of motion with a time step of 1 fs.

The common neighbor analysis (CNA) is used to identify the crystal structure during ion sputtering. First, energy minimization is carried out by the conjugate gradient method to avoid overlap of atomic positions. Then, the temperature of the workpiece is equilibrated to 293 K by the Nose-Hoover thermostat for 70 ps. Both the relaxation stage and the sputtering simulation are performed in the microcanonical ensemble (NVE) [34][35][36]. The Ar atoms are placed at a height of 2 nm above the initial top surface of the workpiece. Considering the common process parameters of IBS, the ion energy is chosen to be 500 eV and the incident angle is 90°. The total number of bombarding Ar atoms is 50. The ion dose is defined as the total number of Ar atoms trapped in the workpiece divided by the upper surface area of the workpiece. The simulated parameters are presented in Table 1.
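As a rough illustration of the bombardment setup described above, the following sketch generates k × n randomized Ar starting positions; the box sizes follow the text, while the height offset and random seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def place_ar_atoms(k, n, box_xy=10.0, box_z=1.0, z_offset=20.0):
    """Generate k*n Ar positions: k atoms randomly placed in each of n
    stacked 10 x 10 x 1 nm^3 boxes above the surface, mirroring the
    continuous-bombardment setup (lengths in nm; z_offset is the
    assumed height of the first box above the workpiece)."""
    positions = []
    for i in range(n):
        xy = rng.uniform(0.0, box_xy, size=(k, 2))
        z = rng.uniform(z_offset + i * box_z,
                        z_offset + (i + 1) * box_z, size=(k, 1))
        positions.append(np.hstack([xy, z]))
    return np.vstack(positions)

atoms = place_ar_atoms(k=5, n=10)  # 50 Ar atoms in total, as in the paper
print(atoms.shape)
```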
Potential Description
Mixed potentials are used in the ion beam sputtering simulation. The interaction potentials between atoms are described as follows: (1) For the Ar-Ar atomic interaction, the Ziegler-Biersack-Littmark (ZBL) potential is adopted, which can be expressed as [37]

$E_{ij} = \frac{1}{4\pi\varepsilon_0}\frac{Z_i Z_j e^2}{r_{ij}}\,\phi\!\left(\frac{r_{ij}}{a}\right) + S(r_{ij}),$

where e is the electron charge, ε₀ is the electrical permittivity of vacuum, $Z_i$ and $Z_j$ are the nuclear charges of the two atoms, φ is the universal ZBL screening function with screening length a, and S(r) is the switching function.
(2) For the Al-Al atomic interaction, the embedded-atom method (EAM) potential is adopted, which can be expressed as [38]

$E_i = F_{\alpha}\!\left(\sum_{j \neq i}\rho_{\beta}(r_{ij})\right) + \frac{1}{2}\sum_{j \neq i}\varphi_{\alpha\beta}(r_{ij}),$

where $E_i$ is the total energy, F is the embedding energy, which is a function of the atomic electron density ρ, φ is a pair potential interaction, and α and β are the element types of atoms i and j.
(3) For the Al-Ar atomic interaction, spliced potentials are adopted: ZBL, a second-order polynomial function, and the Lennard-Jones (LJ) potential are used over different ranges of atomic spacing to construct the Ar-Al potential function. The LJ potential is expressed as

$E = 4\varepsilon\left[\left(\frac{\sigma}{r}\right)^{12} - \left(\frac{\sigma}{r}\right)^{6}\right],$

where ε is the depth of the potential well and σ is the distance at which the potential is zero. The ZBL potential is used in the range 0-0.31 nm and the LJ potential in the range 0.37 nm to ∞. In the range 0.31-0.37 nm, a second-order polynomial function is used to join the two potentials [39].
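A minimal sketch of such a spliced pair potential is shown below. The LJ ε and σ values are illustrative placeholders rather than the paper's fitted parameters, and the quadratic bridge here matches the values at both ends plus the ZBL slope at the left end (a production splice might match both slopes).

```python
import numpy as np

EPS, SIGMA = 0.0103, 0.32        # illustrative LJ parameters (eV, nm)
Z_AL, Z_AR = 13, 18
KE2 = 1.44                        # e^2 / (4*pi*eps0) in eV*nm

def zbl(r):
    """Universal ZBL screened Coulomb potential (energy in eV, r in nm)."""
    a = 0.04685 / (Z_AL**0.23 + Z_AR**0.23)   # universal screening length, nm
    x = r / a
    phi = (0.18175 * np.exp(-3.19980 * x) + 0.50986 * np.exp(-0.94229 * x)
           + 0.28022 * np.exp(-0.40290 * x) + 0.02817 * np.exp(-0.20162 * x))
    return KE2 * Z_AL * Z_AR / r * phi

def lj(r):
    return 4 * EPS * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)

# Quadratic bridge on [0.31, 0.37] nm: match both endpoint values and
# the slope of the ZBL branch at the left end.
r1, r2, h = 0.31, 0.37, 1e-5
slope1 = (zbl(r1 + h) - zbl(r1 - h)) / (2 * h)
c = np.linalg.solve(
    [[r1**2, r1, 1], [r2**2, r2, 1], [2 * r1, 1, 0]],
    [zbl(r1), lj(r2), slope1])

def spliced(r):
    if r < r1:
        return zbl(r)
    if r > r2:
        return lj(r)
    return c[0] * r**2 + c[1] * r + c[2]
```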
Ion Beam Sputtering Mechanism
IBS can be a complicated process. The Ar atoms bombard the surface and perturb the Al atoms; Al atoms that receive enough energy are sputtered from the workpiece surface. During IBS, there are two types of Ar ion behavior: ion bounce and ion implantation, which are shown in Figures 2 and 3, respectively. In ion bounce, the Ar atom impacts the aluminum surface and embeds into the substrate; few surface Al atoms are disturbed, as shown in Figure 2d. Then, the Ar ion collides with an Al atom, with a rapid transfer of kinetic energy; the Ar ion bounces back, and the recoiling Al atom continues its motion, as shown in Figure 2f. In ion bounce, few Al atoms are displaced by Ar atoms during the bombardment, and there is a rapid exchange of kinetic energy during the impact. Ion implantation occurs for most of the Ar atoms. As shown in Figure 3a,d, implantation causes perturbation of the deeper Al atoms. With the motion of the Ar atoms, large numbers of Al atoms in the bombardment area gain kinetic energy. Some atoms are sputtered out directly after colliding with Ar atoms, as shown in Figure 3e, which is referred to as the primary sputter phenomenon. Al atoms with kinetic energy bump into other atoms in a cascade collision, which causes the secondary sputter phenomenon, as shown in Figure 3f.
Sputter Yield Analysis
IBS is believed to possess the highest machining precision. The steady erosion rate of IBS can be expressed as

$v = J\,\Omega\,Y_0,$

where J is the ion flux (ion current density per unit charge), Ω is the atomic volume, and $Y_0$ is the sputter yield. In order to achieve high-precision material removal, it is necessary to acquire a precise sputter yield for the corresponding material. Usually, Monte Carlo methods are used to simulate sputter yields, which have high accuracy for isotropic materials such as fused silica. However, the simulation setup of many Monte Carlo methods is relatively simple, which causes great deviation from experimental results. Commonly used Monte Carlo simulations for IBS are based on SRIM. Figure 4 shows the Monte Carlo simulation results for IBS of aluminum. With the same simulation parameters, the Ar atoms mainly distribute at a depth of around 58 Å or less. The displacements and vacancies have the same distribution, which is similar to the Ar distribution. The sputter yield calculated by SRIM is 0.6707 atoms/ion. However, the IBS process for different crystal orientations cannot be revealed by SRIM, which somewhat limits the application of the Monte Carlo simulation.
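As a back-of-the-envelope illustration of this erosion-rate relation, the sketch below combines the SRIM yield quoted above with an assumed 1 mA/cm² Ar⁺ beam (the paper does not state a beam current) and the atomic volume of fcc aluminum.

```python
# Steady-state erosion rate v = J * Omega * Y0, with illustrative numbers:
# J     : ion flux [ions / (nm^2 * s)] for an assumed 1 mA/cm^2 Ar+ beam
# Omega : atomic volume of Al [nm^3], from fcc a = 0.405 nm (4 atoms/cell)
# Y0    : sputter yield [atoms/ion], the SRIM value quoted above

E_CHARGE = 1.602e-19                # C
current_density = 1e-3 / 1e14       # A/cm^2 converted to A/nm^2
J = current_density / E_CHARGE      # ~62.4 ions / (nm^2 s)
Omega = 0.405**3 / 4                # ~0.0166 nm^3 per Al atom
Y0 = 0.6707

v = J * Omega * Y0                  # nm/s of surface recession (~0.7 nm/s)
print(f"erosion rate ~ {v:.2f} nm/s")
```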
Micromachines 2021, 12, x FOR PEER REVIEW 6 of 14 where J is the ion current, Ω is the atomic volume, and Y0 is the sputter yield. In order to achieve high precision material removal, it is necessary to acquire precise sputtered yield for corresponding materials. Usually, Monte Carlo methods are used to simulate sputter yields, which have a high accuracy for isotropic materials such as fused silica. However, the simulation setup of many Monte Carlo methods is relatively simple, which cause great deviation from experimental results. Commonly used Monte Carlo simulations for IBS are based on SRIM. Figure 4 shows the Monte Carlo simulation results of IBS of aluminum. With same simulation parameters, the Ar atoms mainly distribute in the depth of around 58 Å or less. The displacements and vacancies have the same distributions, which is similar to the Ar distribution. The sputter yield calculated by SRIM is 0.6707 Atoms/Ions. However, the IBS process of different crystal orientations cannot be revealed by SRIM which slightly limits the application of the Monte Carlo simulation. Based on MD simulation, the Ar atoms distribution and corresponding Gaussian distribution fitting in the z direction are presented in Figure 5. For Al(001), the average depth of bombardment is around 70 Å. As the depth increases, the number of Ar atoms decreases, which conforms to the actual processing conditions. Al(111) shares the same regularity with Al(001). However, according to the expectation and variance, the distribution of Ar atoms is more concentrated and the depth is shallower. For Al(110), Ar atoms are more evenly distributed relatively. The average depth of bombardment is 146 Å, which is the deepest among the three crystal orientations. Based on MD simulation, the Ar atoms distribution and corresponding Gaussian distribution fitting in the z direction are presented in Figure 5. For Al(001), the average depth of bombardment is around 70 Å. As the depth increases, the number of Ar atoms decreases, which conforms to the actual processing conditions. Al(111) shares the same regularity with Al(001). However, according to the expectation and variance, the distribution of Ar atoms is more concentrated and the depth is shallower. For Al(110), Ar atoms are more evenly distributed relatively. The average depth of bombardment is 146 Å, which is the deepest among the three crystal orientations.
The sputter yields of Al(001), Al(110), and Al(111) are 1.24 atoms/ion, 0.84 atoms/ion, and 1.7 atoms/ion, respectively. The sputter yield of Al(111) is nearly twice that of Al(110). The experimental result of IBS of polycrystalline aluminum is shown in Figure 6. The experiment is conducted on a φ100 mm planar aluminum surface. The surface is polished to a roughness of 2 nm Ra with no particular micro-morphology. The processing parameters in Table 1 are used, and the incident angle is 90°. When the processing time is short, the sputter-yield difference between crystal surfaces causes an obvious relief structure. The size of the relief structure is similar to that of the aluminum grains, which also verifies our analysis. Compared with Monte Carlo simulations, MD simulations are closer to the experiments, and are more comprehensive and accurate.
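Because the removal rate quoted at the start of this section scales linearly with the sputter yield, the yield spread above translates directly into differential etch depths between grains, which is what produces the relief. A minimal Python sketch of that relation follows; the beam current density and the atomic volume of Al are illustrative assumptions, not values from this study.

```python
# Etch rate v = (J / e) * Y * Omega for singly charged ions:
# J / e is the ion flux, Y the sputter yield, Omega the atomic volume.
E_CHARGE = 1.602e-19            # elementary charge, C
OMEGA_AL = 1.66e-29             # atomic volume of Al, m^3 (assumed)

def removal_rate(j_a_per_m2, sputter_yield):
    """Surface removal rate in m/s."""
    flux = j_a_per_m2 / E_CHARGE          # ions / (m^2 s)
    return flux * sputter_yield * OMEGA_AL

# Yields reported above for the three crystal surfaces (atoms/ion),
# with an assumed 1 mA/cm^2 (= 10 A/m^2) Ar+ beam.
for face, y in [("Al(001)", 1.24), ("Al(110)", 0.84), ("Al(111)", 1.70)]:
    v = removal_rate(10.0, y)
    print(f"{face}: {v * 1e9 * 3600:.0f} nm/h")
```

The factor-of-two yield gap between Al(110) and Al(111) then appears directly as a factor-of-two difference in etch depth over the same exposure, consistent with the grain-scale relief described above.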
Surface Morphology and Roughness Evolution
Due to its extremely high machining precision, IBS is usually used in optics fabrication as the final process. Thus, the surface quality and morphology evolution are important concerns during IBS. Generally speaking, the surface quality can be preserved during IBS for many non-metallic materials such as fused silica and monocrystalline silicon. However, for aluminum, the surface quality deteriorated significantly during IBS in our previous experiments. Figure 7 shows the surface morphology evolution of the different crystal surfaces. For Al(001), the evolution of the morphology can be divided into four stages. Firstly, as shown in Figure 7a, the Ar atom bombardment causes obvious pits on the surface. The surface atoms are disturbed and fluctuations appear on the surface. The atoms outside the sputtering area are also affected by the bombardment and deviate from their original positions. However, the bombardment and cascade collision process ends after a few ps. Thus, the form of the morphology varies, as shown in Figure 7b. The atoms in the sputtering area are disturbed and an embossment is formed. Meanwhile, most of the atoms outside the sputtering area tend to restore to their original state, which leads the surface of the unbombarded area to flatten out. Then, with increasing relaxation time, atom diffusion dominates the evolution of the surface morphology. The atoms outside the sputtering area are subject to a secondary perturbation. The height of the embossment decreases with diffusion, and a specific morphology emerges gradually. Finally, with long-time relaxation, a surface morphology with local gravel structures is formed and stabilized. As for the crystal surfaces of Al(110) and Al(111), the evolution of the morphology can also be described by the same four stages. However, at the same stage, there are differences between crystal surfaces. For Al(110), the atoms outside the sputtering area do not restore after the disturbance, which strongly affects the subsequent atom diffusion. Thus, the surface morphology of Al(110) shows no obvious specific structure after stabilization. For Al(111), the atoms outside the sputtering area are barely affected by the bombardment and diffusion. By comparison, the stabilized morphology of Al(111) has the most obvious morphological features. With increasing processing time, the crystal surface begins to coarsen under the increasing number of bombarding atoms. The relief structure will gradually disappear and evolve into a gravel structure, as shown in Figure 8. In Figure 8, the regionalization of the morphology is also obvious: the number of gravels in a specific area is significantly larger than in other areas. Based on the above analysis, different crystal surfaces differ greatly in the generation of gravels, which is in good agreement with the experimental results.
The surface roughness is an important characteristic for optical components. The surface contour arithmetic mean deviation $R_a$ is usually used to evaluate surface roughness, which can be expressed as follows:

$$R_a = \frac{1}{l_r}\int_0^{l_r} \lvert y(x)\rvert \, dx \approx \frac{1}{n}\sum_{i=1}^{n} \lvert y_i \rvert,$$

where $y_i$ is the height of a sampling point measured from the mean line, and $l_r$ is the sampling length. After stabilization of the system, the surface roughness is calculated for each crystal surface, as shown in Figure 9. Al(110) has the lowest surface roughness of 2.02 Å; Al(111) and Al(001) have surface roughnesses of 3.4 Å and 3.8 Å, respectively. For Al(110), the surface pits caused by bombardment are not obvious and the surface state is relatively uniform after the surface is disturbed. Thus, the final surface roughness of Al(110) is better than that of the other two crystal surfaces. Compared with Al(111), the atoms outside the sputtering area of Al(001) are easily disturbed and are harder to stabilize during relaxation, which causes resistance to atom diffusion and deterioration of the surface roughness. Based on the above analysis, the surface-disturbed state and the atom diffusion state dominate the final surface roughness. The surface roughness of different crystal surfaces varies greatly. In the actual IBS process of an aluminum alloy, the surface is composed of various crystal surfaces. Thus, the surface roughness of an aluminum alloy is harder to maintain during the IBS process, which is quite consistent with the experimental results.
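A direct implementation of the Ra definition above, assuming heights sampled on a uniform grid along the sampling length (the profile array is a placeholder, not simulation data):

```python
import numpy as np

def surface_ra(heights):
    """Arithmetic mean deviation Ra of sampled heights y_i,
    measured from the mean surface line."""
    y = np.asarray(heights, dtype=float)
    return np.mean(np.abs(y - y.mean()))

# Placeholder profile (Angstrom); the text reports Ra between 2.02 A
# and 3.8 A for the three crystal surfaces after stabilization.
profile = [1.2, -0.8, 2.5, -1.9, 0.3, -0.6, 1.1, -1.8]
print(f"Ra = {surface_ra(profile):.2f} A")
```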
Subsurface Damage and Machining Stress Analysis
The subsurface damage and machining stress are important concerns in optics fabrication. The subsurface damage in aluminum optics may cause surface corrosion, which severely degrades the surface quality. The subsurface damage evolution processes of the different crystal surfaces are displayed in Figure 10. The lattice defects are identified by the CNA method. The evolution of the subsurface damage can be roughly divided into four stages. Taking Al(001) as an example, the bombarding Ar atoms first cause a disturbance of the subsurface. The bombardment ends quickly, in less than 0.1 ps. The cascade collision of Al atoms then affects the deeper atoms, as shown in Figure 10b. Then, stacking faults (SFs) are generated as the system stabilizes. The number of disturbed Al atoms is reduced and the system finally reaches a stable state in Figure 10d. Compared with Al(001), the disturbance layers of Al(110) and Al(111) are obviously deeper, which is consistent with the analysis in Section 3.3. The appearance of SFs in Al(110) and Al(111) is earlier than in Al(001). After stabilization, the distribution of SFs in Al(110) and Al(111) is deeper than in Al(001). The type of defects in the three crystal surfaces is consistent (mainly SFs), but the distribution features are quite different. For Al(001), the SFs mainly concentrate in the bombardment area and have a shallow distribution. However, in Al(110) the SFs exist in a grid pattern and extend to a deeper location. For Al(111), the SFs appear in a laminated structure and their number is clearly the highest.
to stabilize. Comparing the stable stages in Figure 11a-c, the Shockley dislocations are more obvious in Al(111) than in Al(001) and Al(110), while the stair-rod dislocations are also relatively more numerous in Al(111). This phenomenon demonstrates that, after IBS, the density of dislocations in Al(111) is the largest, thereby bringing about significant dislocation strengthening for Al(111). In addition, it can be seen from Figure 11d-f that the number of face-centered cubic (FCC) atoms rapidly decreases with an abrupt increase in the number of Other atoms, which reveals that the FCC atoms in the workpiece are mainly turned into Other atoms due to the high-speed intrusion of Ar atoms. This result is consistent with the phenomena shown in Figure 10a,e,i. As time increases, the number of Other atoms decreases gradually while that of hexagonal close-packed (HCP) atoms increases, indicating that the Other atoms are further transformed into HCP atoms. As a result, the numbers of FCC, Other and HCP atoms approach steady values. The number of HCP atoms is highest in Al(111), which is consistent with the results in Figure 10.
The above phenomena indicate that the crystal orientation of a machined surface exerts an apparent influence on the subsurface damage. Among the three surfaces Al(001), Al(110), and Al(111), the subsurface damage of Al(111) is the most severe, because numerous SFs are generated and distributed in the deepest part of the workpiece. Figure 12a-c show the stress distribution of the three crystal surfaces after IBS. There is no significant concentration of stress after IBS. In addition, Figure 12d-f display the cross-sections of the stress distribution in Al(001), Al(110), and Al(111) after IBS. Similarly, hardly any stress concentration is introduced into the workpiece by the IBS machining. This phenomenon demonstrates that the Ar ion beam bombardment causes no additional stress in the workpiece; in addition, IBS can also be used to release stress caused by other processing methods. Therefore, IBS is believed to be stress-free machining in the optical fabrication field. The simulation results are quite consistent with the experimental results [17].
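Counts of FCC, HCP and Other atoms over time, like those discussed for Figure 11d-f, are typically obtained by running common neighbor analysis frame by frame. The OVITO-Python sketch below shows one way to do this; the trajectory file name is a placeholder, and the attribute keys follow OVITO's documented CNA output, so treat the exact names as an assumption to check against your OVITO version.

```python
from ovito.io import import_file
from ovito.modifiers import CommonNeighborAnalysisModifier

# Placeholder dump file from the MD bombardment run.
pipeline = import_file("ibs_al001.lammpstrj")
pipeline.modifiers.append(CommonNeighborAnalysisModifier())

# Tally the structure-type populations at every frame of the trajectory.
for frame in range(pipeline.source.num_frames):
    data = pipeline.compute(frame)
    fcc = data.attributes["CommonNeighborAnalysis.counts.FCC"]
    hcp = data.attributes["CommonNeighborAnalysis.counts.HCP"]
    other = data.attributes["CommonNeighborAnalysis.counts.OTHER"]
    print(f"frame {frame}: FCC={fcc} HCP={hcp} Other={other}")
```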
Conclusions
In this paper, the MD simulation of the IBS process of aluminum with different crystal orientations is studied. The influences of different crystallographic orientations on the IBS process and the manufactured results are revealed.
Firstly, the ion beam sputtering mechanism is revealed. Two states of Ar atoms (implantation and bounce) are observed. The implantation of Ar atoms causes massive disturbance of the shallow Al atoms, which leads to the primary sputtering effect. The shallow Al atoms then cause cascade collisions and lead to a secondary sputtering effect. The simulation results are consistent with the traditional sputtering theory, which verifies the validity of the MD simulation.
Secondly, the sputter yield, morphology evolution, and surface roughness are revealed by the simulation results. The three crystal surfaces show great variety. The sputter yield of Al(111) is nearly twice that of Al(110). When the processing time of IBS is short, the varied sputter yield of different crystal surfaces causes the emergence of the relief structure. With increased bombardment time, the gravel structure of the single crystal surface will dominate the morphology evolution. The state of atom diffusion (during the bombardment and during the relaxation) determines the final morphology and roughness of a specific crystal surface. With easier atom diffusion, Al(110) has the lowest roughness. However, with poor atom diffusion during the bombardment and large disturbance during relaxation, the roughness of Al(001) is nearly twice that of Al(110).
Finally, the subsurface damage and machining stress are analyzed. The main defects for the different crystal surfaces are identical, namely stacking faults, Shockley dislocations, and stair-rod dislocations. However, their form and distribution show great differences. For Al(001), the defects are generated in the bombardment area and have a shallow distribution. The defects are in a grid pattern and extend to a deeper location in Al(110). For Al(111), the defects have a laminated structure and the highest amount. IBS is believed to be stress-free machining in the optical fabrication field. There are no significant concentrations of stress after IBS for all three crystal surfaces, which is consistent with the experimental results.
The process of IBS and the mechanism of morphology evolution of aluminum are revealed. This regularity and mechanism lay a foundation for the application of IBS in the field of aluminum optics manufacturing. | 9,730 | 2021-07-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Differential Phosphorylation of GluN1-MAPKs in Rat Brain Reward Circuits following Long-Term Alcohol Exposure
The effects of long-term alcohol consumption on the mitogen-activated protein kinase (MAPK) pathway and the N-methyl-D-aspartate-type glutamate receptor 1 (GluN1) subunit in the mesocorticolimbic system remain unclear. In the present study, rats were allowed to consume a 6% (v/v) alcohol solution for 28 consecutive days. Locomotor activity and behavioral signs of withdrawal were observed. Phosphorylation and expression of extracellular signal-regulated protein kinase (ERK), c-Jun N-terminal kinase (JNK), p38 protein kinase and GluN1 in the nucleus accumbens, caudate putamen, amygdala, hippocampus and prefrontal cortex of these rats were also measured. Phosphorylation of ERK, but not of JNK or p38, was decreased in all five brain regions studied in alcohol-drinking rats. The ratio of phospho- to total GluN1 was likewise reduced in all five brain regions studied. These results suggest that long-term alcohol consumption can inhibit GluN1 and ERK phosphorylation, but not that of JNK or p38, in the mesocorticolimbic system, and these changes may be relevant to alcohol dependence. To differentiate alcohol-induced changes in ERK and GluN1 between acute and chronic alcohol exposure, we determined levels of phospho-ERK, phospho-GluN1 and total GluN1 after acute alcohol exposure. Our data show that 30 min following a 2.5 g/kg dose of alcohol (administered intragastrically), levels of phospho-ERK are decreased while those of phospho-GluN1 are elevated, with no change in total GluN1 levels. At 24 h following the single alcohol dose, levels of phospho-ERK are elevated in several brain regions, while there are no differences between controls and alcohol-treated animals in phospho-GluN1 or total GluN1. These results suggest that alcohol may differentially regulate GluN1 function and ERK activation depending on alcohol dose and exposure time in the central nervous system.
Introduction
Alcohol dependence is a complex neuropsychiatric disorder characterized by chronic drinking, abstinence, relapse and behavioral impairments. Long-term consumption of alcohol has been reported to modify a multitude of molecular events such as the function of neurotransmitter receptors, intracellular signal transduction systems and biochemical processes in the central nervous system [1].
Drugs of abuse enhance the activity of the dopaminergic mesocorticolimbic pathway, which arises from the ventral tegmental area (VTA) and projects to the nucleus accumbens (NAc), caudate putamen (CPu), amygdala (Amy), hippocampus (Hip) and the prefrontal cortex (PFC) [2,3]. The NAc modulates motivation for drug seeking by integrating information from the basolateral Amy and PFC [4]. The CPu is thought to provide a link between motivation and motor outcomes, due to connections between the NAc and CPu via the ventral midbrain [5][6][7]. The Amy projects heavily to the NAc and is involved in conditioned learning of drug reinforcement and drug-associated cues [2]. The Hip plays important roles in the consolidation of information from short-term to long-term memory, which is thought to be involved in addiction [8]. The PFC sends reciprocal connections to the VTA, modulating the activity of this nucleus and its subsequent output to limbic structures [9], while convergent inputs from the Amy, Hip and PFC to the NAc modulate outputs to motor relay circuits that oversee motor actions and outcomes [6,10,11]. These pathways coordinate reward-related associative learning and motivated behaviors, and the alteration of limbic-associated pathways by drugs of abuse is thought to contribute to the changes in learning, memory and behavior that underlie addiction [12,13].
The mitogen-activated protein kinase (MAPK) pathway represents a converging point for many signaling pathways and can be activated by chronic drug treatments [14]. MAPKs include the extracellular signal-regulated kinase (ERK), the c-Jun N-terminal kinase (JNK) and the p38 protein kinase [15]. Upon phosphorylation, MAPKs translocate to the nucleus and facilitate gene transcription [16]. A mounting body of research has demonstrated that ERK is involved in neural plasticity, learning and memory, and drug reinforcement [17,18]. Enhanced ERK phosphorylation has been found in the PFC, the shell of the NAc, the central and basolateral nuclei of the Amy, the paraventricular nucleus of the hypothalamus, and the Edinger-Westphal nucleus following acute alcohol administration [19,20]. However, little is known about the effects of chronic alcohol consumption on the phosphorylation of these MAPKs in mesocorticolimbic areas.
The N-methyl-D-aspartate-type glutamate receptors (NMDARs) are heteromeric complexes that incorporate NMDAR1 (GluN1), NMDAR2 and NMDAR3 subunits. Without GluN1, NMDAR complexes are not functional [21]. GluN1 subunits are also an important determinant of alcohol sensitivity [22]. It has been suggested that alcohol binds to the third transmembrane domain (TM3) of the GluN1 subunit [23]. Chronic exposure to alcohol induces a number of adaptive processes in the central nervous system, including an upregulation of NMDARs and inhibition of their function [24]. Phosphorylation of GluN1, especially at Ser897, is known to enhance NMDAR activity [25,26]. It has been demonstrated that NMDARs can activate the MAPK pathway, mainly through a Ca2+-dependent signaling pathway. Increasing evidence also shows that glutamate receptor-dependent activation of the MAPK pathway is critical for the development of striatal neuronal plasticity and is an important molecular mechanism for long-lasting behavioral plasticity [27].
Although the association between drug dependence and dysfunction of mesocorticolimbic systems is well documented, the molecular mechanisms, and particularly the effects of long-term alcohol consumption on the phosphorylation of MAPKs and GluN1 subunits, have not been elucidated. In the present study, we hypothesized that alcohol would alter the phosphorylation of ERK, JNK, p38 and GluN1 in the mesocorticolimbic system of the rat. To test this hypothesis, rats were exposed to a 6% (v/v) alcohol solution for 28 d or to a single dose of alcohol (2.5 g/kg, intragastrically). The phosphorylation of MAPKs and GluN1 in the NAc, CPu, Amy, Hip and PFC was then examined.
Animals
Sixty-four male Sprague-Dawley rats (Laboratory Animal Center of Xi'an Jiaotong University, China) that were 16 weeks old and weighed 260-280 g at the beginning of the experiment were habituated for 7 d before the experiments. Rats were individually housed in a temperature- and humidity-controlled room (22 ± 2 °C and 60 ± 5%, respectively) under a 12 h light/dark cycle (lights on at 8:00). Food was available ad libitum. All experiments were carried out between 9:00 and 16:00. All experiments were approved by the Animal Care and Use Committee of Xi'an Jiaotong University, and the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines were followed. Efforts were made to minimize animal suffering and to reduce the number of animals used.
Experiment 1
Long-term alcohol consumption. Rats were assigned randomly to two groups: a control group (n = 10) and an alcohol group (n = 10), receiving water and alcohol solution ad libitum, respectively, as the only liquid source during the experimental period. In order to induce alcohol consumption, an alcohol solution was administered at a concentration from 0.5% v/v to 6% v/v for adaptation in the first 12 d and 6% v/v in the following 28 d, as described by Turchan J. et al. [28]. Locomotor activity was assessed after 7, 14, 21 and 28 d of 6% alcohol treatment. At the end of the 28-d treatment, alcohol solution was replaced with water. Withdrawal syndromes were evaluated at different time points. The amount of alcohol consumed (g of pure alcohol per kg of body weight) was recorded daily. The body weight of the rats was monitored weekly.
To minimize the effects of handling and behavioral assessments, a separate cohort of rats was allowed to drink the 6% alcohol solution or water (n = 10 per group) for 28 d. These rats underwent neither behavioral tests nor alcohol withdrawal. The rats were sacrificed by decapitation at the end of the 28-d treatment; brains were quickly removed and stored at −80 °C until use.
Open field test. According to the method of Erden et al. [29], a black rectangular box (100 × 50 × 60 cm) was used for the open field test. The box was illuminated with three 30 W fluorescent bulbs placed 2 m above the box. The experiment was carried out in a sound-attenuating room at 9:00. Rats were placed in the central area of the box and allowed 10 min of exploration. The total distance travelled was analyzed by a computerized video-tracking system (SMART, Panlab SL, Barcelona, Spain). Rearing (lifting both forelimbs off the floor) was counted by an observer blind to the treatment.
Evaluation of Ethanol Withdrawal Syndrome (EWS). At the end of the 28-d exposure to the 6% alcohol solution, alcohol was withdrawn from the drinking water at 9:00. The rats were then observed for 10 min at 0 (before the removal of alcohol), 2, 6, 24, 48 and 72 h of the withdrawal period. At each observation time, rats were assessed simultaneously for the following behavioral signs: stereotyped behavior, agitation (irritability to touch), tail stiffness, and abnormal posture and gait. In this study, grooming, sniffing, head weaving, gnawing, and chewing were observed as the major stereotyped behaviors during alcohol withdrawal. Stereotyped behaviors, abnormal posture and gait, agitation, and tail stiffness were scored using a rating scale as previously described (Table S1) [29]. The first 5 min of the scoring period were excluded from the analyses to allow the subjects sufficient time to habituate to the observation cage. The observation was carried out by an observer blind to the treatment.
(following Carlos Arias et al. [30]). Control animals were treated in precisely the same way, but sterile water was administered.
Blood samples were taken at 0 min (before the administration of alcohol), and at 30 min, 2 h, 6 h and 24 h after alcohol administration. Blood alcohol levels (BALs) were determined. To reduce the number of animals used, animals were decapitated at the times when an acute effect was expected to be present and to have disappeared, i.e., 30 min (n = 6) and 24 h (n = 6). Controls (n = 12) were administered water and killed together with the alcohol-treated rats. Brains were quickly removed and stored at −80 °C until use.
Western blotting. The NAc, CPu, Amy, Hip and PFC were dissected on ice using a rat brain atlas (Figure S1). Brain tissues were homogenized in a pre-cooled RIPA buffer (50 mM Tris-HCl pH 7.5, 50 mM NaCl, 5 mM EDTA, 10 mM EGTA, 2 mM sodium pyrophosphate, 4 mM para-nitrophenylphosphate, 1 mM sodium orthovanadate, 1 mM phenylmethylsulfonyl fluoride, 2 mg/ml aprotinin, 2 mg/ml leupeptin and 2 mg/ml pepstatin). The homogenates were incubated on ice for 30 min and centrifuged at 12,000 × g for 15 min at 4 °C. The protein content was determined using a bicinchoninic acid method (Joincare Co., Zhuhai, China). The protein samples were subjected to 12% SDS-PAGE and transferred to PVDF membranes. The membranes were blocked with 5% fat milk in Tris-buffered saline (TBS) (500 mM NaCl, 20 mM Tris-HCl pH 7.5) containing 0.05% Tween-20 for 1 h and incubated overnight at 4 °C with one of the following primary antibodies: antibodies against phospho-ERK, phospho-JNK, phospho-p38 or phospho-GluN1 at a 1:1000 dilution. The same dilution was used for the antibodies against total protein. The next day, the membranes were washed three times with 0.1% Tween-20 TBS (pH 7.6) and incubated with horseradish peroxidase-conjugated anti-rabbit or anti-mouse secondary antibodies. An enhanced chemiluminescence kit (Millipore, MA, USA) was used for detection. Data analyses. Data analyses were performed using SPSS (Ver 13.0, SPSS Inc., USA). Alcohol consumption and BALs were analyzed using a one-way analysis of variance (ANOVA) with repeated measures. Weight gain, total fluid intake, locomotor activity and global EWS scores were compared between the two treatment groups using two-way ANOVA with repeated measures. Immunoreactive protein bands were quantified by densitometry using Quantity One (Bio-Rad, Hercules, US). Protein levels in the water-drinking control rats were set at 100%. Independent Student's t-tests were conducted to analyze immunoreactivity changes in MAPKs and GluN1. All data are presented as means ± standard deviation (SD). Statistical significance was accepted at p < 0.05.
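As a concrete illustration of the normalization and testing just described (band densities expressed relative to water controls set at 100%, then compared by independent t-tests), the short Python sketch below mirrors that workflow; the density arrays are placeholders, not data from this study.

```python
import numpy as np
from scipy import stats

# Placeholder densitometry readings (arbitrary units) for one brain region.
control = np.array([1.02, 0.95, 1.10, 0.98, 1.01])
alcohol = np.array([0.70, 0.82, 0.66, 0.75, 0.79])

# Express both groups as a percentage of the control mean (controls = 100%).
scale = 100.0 / control.mean()
t, p = stats.ttest_ind(alcohol * scale, control * scale)
print(f"alcohol group = {np.mean(alcohol * scale):.1f}% of control, p = {p:.4f}")
```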
Experiment 1: Long-term Alcohol Exposure
1.1. Weight gain, alcohol consumption and BALs. No significant differences in weight gain between the alcohol- and water-drinking rats were noted [treatment: F(4,190) = 2.54, p = 0.1128] (Fig. 1A). All rats exhibited an increase in body weight during the alcohol- or water-drinking period [time: F(1,190) = 179, p < 0.0001]. The mean body weight gain for the period was 121 g in the alcohol-drinking group and 125 g in the water-drinking control group.
Thirty minutes after a single dose of alcohol, a significant increase in the ratio of phospho-GluN1(Ser897) to total GluN1 was found in the NAc (main effect: F(2,15); Fig. 4). Twenty-four hours later, no obvious change was observed in phospho-GluN1/total-GluN1 in any of the five brain regions (Fig. 6D).
Discussion
In our experiments, no significant differences were found in body weight between the alcohol-drinking and control rats, suggesting that food intake was not affected during the 28-d period of 6% alcohol drinking. Rats stably consumed an average of 4.85 ± 0.79 g/kg/day of alcohol in their home cages for 28 d, which resulted in relatively low BALs (average: 22.35 ± 1.3 mg/dl). Shah et al. reported that the blood alcohol levels (BALs) in Swiss-Webster mice reached 20.06 ± 8.32 mg/dl on the first day, and 14.75 ± 3.98 mg/dl on the 7th day, when administering a 7% v/v alcohol solution for 7 d [9]. Furthermore, the report by Barson showed that the BALs in SD rats reached 18.9 ± 4.0 mg/dl when administering an alcohol solution of increasing concentration from 1% to 7% v/v for 16 d [10]. In our study, the blood samples were taken from the tail vein at 9:00 AM. The BALs were low because the rats consumed alcohol at different time points during the dark phase. Rats with lower BALs may have consumed ethanol in the early period of the dark phase, whereas rats with higher BALs probably consumed alcohol in the later period of the dark phase [31]. Taken together, these results indicate that the alterations in behavior and protein expression in the alcohol-drinking rats resulted from adaptations in brain function rather than from insufficient food intake or alcohol accumulation in the blood.
Alcohol consumption inhibits locomotor activity [32]. In our hands, locomotor activity started to decrease in alcohol-drinking rats after 7 d. Rearing activity also decreased. Notably, the distance travelled and rearing were not different from control rats at the end of the 28-day alcohol exposure, suggesting adaptation and compensation. Discontinuation of alcohol intake results in nervous system hyperactivity and dysfunction [33]. EWS is the most important evidence indicating the presence of physical alcohol dependence in both humans and experimental animals [34]. Withdrawal symptoms include increases in stereotyped behaviors, tail stiffness, hyper-reflexia, agitation, and anxiety [35]. In our study, rats exhibited obvious withdrawal signs after alcohol discontinuation. In line with previous studies [36,37], the global EWS score was highest at 6 h. Our results indicate that 28 d of continuous 6% alcohol drinking induces physical dependence in rats.
Activated ERK phosphorylates cellular targets or translocates into the nucleus, where it activates specific gene transcription factors [38][39][40]. By regulating cellular activities and gene transcription, the ERK cascade transduces the activity of a variety of extracellular and intracellular signals into enduring changes in the central nervous system. Several studies have shown that alcohol exposure alters ERK phosphorylation. Moderate doses of acute alcohol (1.5-3.5 g/kg) produce a dose- and time-dependent decrease in phosphorylated ERK in mouse cortex [41]. Another study extended these findings by showing that acute alcohol can reduce phosphorylated ERK in the cerebral cortex and Hip in both young and adult rats [42]. Forced exposure to chronic ethanol vapor suppressed ERK phosphorylation in the Amy, cortex, cerebellum and CPu in rats [43]. Our results showed that chronic or acute exposure to alcohol significantly decreased ERK phosphorylation in the NAc, CPu, Amy, Hip, and PFC. The most intriguing finding of the present study is that ERK phosphorylation also decreased significantly in the NAc, which is inconsistent with the finding of Sanna et al., who found only minor and mostly non-significant changes [43]. This might be the result of the different sample sizes (n = 5 vs. n = 10) or the duration of the induction periods (12 days vs. 28 days). Although drugs of abuse possess diverse neuropharmacological profiles, activation of the mesocorticolimbic system, particularly the NAc, Amy, CPu, PFC and Hip via dopaminergic and glutamatergic pathways, constitutes a common pathway by which various drugs of abuse mediate their reinforcing effects [12]. Similar alterations of ERK have been thought to contribute to the drug's rewarding effects and to the long-term maladaptation induced by drug abuse (including cocaine, amphetamine, Δ9-tetrahydrocannabinol, nicotine, morphine and alcohol) [44]. Although long-term alcohol exposure induced significant inhibition of ERK phosphorylation, distinct patterns of the p-ERK/ERK ratio were observed in different brain regions. This may reveal that these brain regions play different roles and have different sensitivities to alcohol. Together, our results demonstrate that chronic or acute alcohol exposure induces decreased ERK phosphorylation in the brain reward circuit.

Alcohol inhibits glutamatergic neurotransmission, primarily by acting on ionotropic glutamate receptors (iGluRs). Many reports have demonstrated that acute ethanol exposure inhibits NMDAR channel function in the Hip, cerebellum, cerebral cortex, NAc, Amy and VTA [45][46][47]. After chronic alcohol exposure, the number of NMDA receptor complexes was increased [48][49][50], perhaps indicating an adaptation to the chronic presence of ethanol to maintain receptor activity. These results confirm that the activity of glutamate receptors, but not the number of these receptors, is inhibited by chronic alcohol exposure. One well-characterized cascade includes the calcium/calmodulin-dependent inhibition of adenylyl cyclase and inhibition of cAMP-dependent protein kinase (PKA). PKA then triggers the inhibition of the ERK signaling cascade, resulting in decreased ERK phosphorylation and nuclear translocation and further down-regulation of gene transcription [51,52].
Our recent study has demonstrated that continuous alcohol drinking inhibits the phosphorylation of ERK, and this inhibition is correlated with a decrease in the phosphorylation of CaMKII (Thr286) in hippocampal CA1 and DG subregions [53].
The GluN1 subunit forms the molecular backbone of functional NMDA receptors and has been hypothesized to play a role in chronic alcohol consumption. To address this issue, we examined the expression and phosphorylation levels of the GluN1 subunit. (Figure 5: Effects of long-term alcohol intake on N-methyl-D-aspartate-type glutamate receptor 1 (GluN1) expression and phosphorylation (Ser897) in the nucleus accumbens (NAc), caudate putamen (CPu), amygdala (Amy), hippocampus (Hip) and the prefrontal cortex (PFC). Total-GluN1 expression was significantly increased; obvious decreases were found in phospho/total-GluN1 in all five brain regions examined. Data represent mean ± SD relative to water-drinking controls set as 100%; β-actin was used as a loading control. *p < 0.05, **p < 0.01, ***p < 0.001 vs. water controls. doi:10.1371/journal.pone.0054930.g005) Long-term alcohol consumption induced significant up-regulation of GluN1 subunits in all five brain regions. Enhanced GluN1 subunit mRNA expression was evident in amygdala neurons from chronically ethanol-exposed animals [54]. Previous studies have shown that alcohol consumption is accompanied by increased GluN1 protein levels in the Hip [55], Amy [48], striatum and medial PFC [56]. In the study by Kroener et al., chronic alcohol exposure led to an increase in the expression of the GluN1 subunit in the insoluble postsynaptic density fraction, but this increase was more transient and was no longer observed after 1 week of withdrawal [57]. The up-regulation of GluN1 subunits caused by chronic alcohol consumption may be a compensatory reaction that leads to recovery of NMDAR functional activity following ethanol-induced GluN1 inhibition [58,59].
Protein phosphorylation has been recognized as a major mechanism for the regulation of NMDA receptor function [60,61]. In our hands, the ratio of phospho- to total GluN1 was significantly lower in all brain regions examined. Phosphorylation of GluN1 at Ser897 alters NMDA receptor activity by regulating its sensitivity to glutamate or by activating downstream signal transduction pathways [62]. A serine at position 897 in the C1 cassette represents the major PKA phosphorylation site of the GluN1 subunit [63]. Dopamine D1 receptors have been shown to directly bind GluN1 and GluN2A subunits [64,65]; together with the fact that Ser897 is a PKA site, these results suggest that the phosphorylation status of the GluN1 S897 site may be a critical factor that regulates the overall sensitivity of NMDA receptors to alcohol. However, a previous study showed that conditions that favor the phosphorylation of S897 in the GluN1 subunit had no effect on the acute alcohol sensitivity of NMDARs [66]. Aside from methodological differences between Xu et al. and the present study, it is also possible that neuronal NMDA receptors exist in a complex intracellular environment characterized by a wide array of signaling proteins that are presumably missing in a heterologous cell expression system such as human embryonic kidney cells. Our results confirmed previous observations that continuous alcohol drinking greatly increases GluN1 in both its total and its phosphorylated form [67]. Moreover, our results also showed that the ratio of phospho- to total GluN1 was reduced in the brain regions studied, which indicates that total GluN1 increased more than phospho-GluN1 in the continuous-alcohol-drinking group. It is plausible that chronic alcohol use, by long-term inhibition of NMDA function, triggers compensatory adaptations. Zhao et al. have demonstrated that decreased phospho/total-GluN1 is consistent with decreased NMDAR function [68]. Moreover, we also demonstrated that acute exposure to alcohol induces increases in GluN1 phosphorylation in rat brain. The upregulated phosphorylation of GluN1 may lead to recovery of NMDAR functional activity under the acute effect of alcohol [30]. These effects of alcohol on GluN1 may underlie the mechanisms that compensate for alcohol-induced inhibition of NMDARs. A previous study documented a profound increase in phospho-ERK in response to pharmacological activation of NMDA receptors in hippocampal, cortical and striatal neurons [39]. Administration of the NMDAR antagonist MK-801 prevented the increase in ERK phosphorylation in the Amy in alcohol-withdrawn rats [69]. Thus, it may be speculated that the decreased ERK phosphorylation is associated with the inhibition of phospho-GluN1 by long-term alcohol drinking.
In conclusion, the findings of the present study indicate that alcohol can affect the phosphorylation of ERK and GluN1 in brain reward circuits. JNK and p38 phosphorylation in mesocorticolimbic areas was not significantly influenced by this treatment. Our findings support previously reported associations between GluN1-ERK and dependence on alcohol and other substances, and emphasize the relevance of decreased phosphorylation of GluN1 and ERK in the mesocorticolimbic system for chronic alcohol exposure.
Author Contributions
Conceived and designed the experiments: YZ JL. Performed the experiments: YW BZ. Analyzed the data: SW MX. Contributed reagents/materials/analysis tools: EL. Wrote the paper: YW YZ. (Figure 6: Effects of a single exposure to alcohol on blood alcohol levels (A) and on the ratios of phospho/total-ERK (B), total GluN1 (C) and phospho/total-GluN1 (D) protein levels in the nucleus accumbens (NAc), caudate putamen (CPu), amygdala (Amy), hippocampus (Hip) and the prefrontal cortex (PFC) in rats following a single dose of alcohol. Data are expressed as mean ± SD. For panel A, ***p < 0.0001 vs. 0 min (before alcohol administration). For panels B, C and D, water controls were set as 100%; *p < 0.05, **p < 0.01, ***p < 0.001 vs. water controls. doi:10.1371/journal.pone.0054930.g006) | 5,283.6 | 2013-01-23T00:00:00.000 | [
"Biology",
"Medicine"
] |
Method for constructing estimates of accuracy of measuring equipment based on Bayesian scientific approach
Before putting new unique samples of technical systems into commercial operation, as well as before introducing new technologies into production, as a rule, all kinds of tests are carried out. A small or very small volume of statistical data during testing is a characteristic feature of unique and small-scale products and technical systems. Therefore, the problem of constructing effective statistical estimates with a limited amount of statistical information is an important practical problem. The article proposes a development of the Bayesian approach to the construction of point and interval estimates of the parameters of known distribution laws. The joint use of a priori and a posteriori information in the processing of statistical data of a limited volume can significantly increase the reliability of the result. As an example, we consider the two most typical distribution laws that arise when testing new unique samples of measuring devices and equipment: the normal distribution with an unknown average value and a known dispersion, and the normal distribution with an unknown average value and an unknown dispersion. It is shown that for these cases the parameters of the distribution laws are themselves random variables and obey the normal law and the gamma-normal law, respectively. Recalculation formulas are obtained to refine the parameters of these laws, taking into account a posteriori information. If these formulas are applied several times successively, a process of self-learning or self-tuning of the system occurs. Thus, the proposed scientific approach can find application in the development of intelligent self-learning and self-tuning systems.
Introduction
The Bayesian scientific approach is widely used to create effective statistical estimates in various fields of activity [1][2][3][4][5][6][7][8][9][10][11][12]. Radio engineering, classification theory, machine learning, and the creation of self-learning and self-tuning systems are just some of the areas where the Bayesian approach is effectively used. This paper describes the application of the Bayesian approach to the problem of constructing effective statistical estimates of the accuracy of new and innovative measurement devices and instruments [13][14][15]. The algorithms for constructing the distribution density function of the average value and the distribution function of the mean square deviation are described. The developed algorithm is based on taking into account the available statistical data together with a priori information about the process or object under investigation. The results of the paper are used in determining the accuracy class of measuring instruments according to the results of acceptance tests, state tests, or tests directed at confirming the type of a measuring device or instrument [13][14][15].
The logical scheme of the Bayesian scientific approach
Suppose $\Theta = (\theta_1, \theta_2, \dots, \theta_s)^T$ is a random vector-parameter involved in the description of the distribution law, where $s$ is the dimension of $\Theta$. It is required to construct the best, in a certain sense, statistical estimate of this vector of parameters from the available $k$-dimensional observations $X = (x_1, x_2, \dots, x_k)^T$ (2); hereinafter, $T$ means the operation of transposing a vector. Uppercase letters will denote vector quantities, lowercase letters will be used to denote the one-dimensional (possible or observed) values of the random variables being analyzed, and matrices (vectors whose components are also vectors) will be denoted accordingly. A priori information is a probability distribution function of the analyzed unknown parameter. It is assumed that this information was obtained prior to the collection of statistical data. As new statistics become available, the distribution function is refined. Under certain assumptions, the transition from an a priori distribution to an a posteriori distribution is made using the Bayes formula [1]:

$$\pi(\Theta \mid x_1, \dots, x_k) = \frac{L(x_1, \dots, x_k \mid \Theta)\,\pi(\Theta)}{\int L(x_1, \dots, x_k \mid \Theta)\,\pi(\Theta)\,d\Theta}, \qquad (1)$$
where the likelihood function $L(x_1, \dots, x_k \mid \Theta)$ is constructed from the observations (2) according to the Maximum Likelihood Method (MLM) [1]. The general logical scheme of the Bayesian method for estimating the values of the distribution parameters is presented in Fig. 1. We describe, in accordance with the scheme, the main steps of its implementation. A priori information about the parameter is based on the history of the functioning of the process under study, as well as on theoretical propositions about its essence and specificity. This a priori information should be presented in the form of the density function $\pi(\Theta)$ of the a priori distribution law of the parameter $\Theta$.
Let additional statistics appear in the measurement result $X = (x_1, \dots, x_k)^T$, drawn in accordance with the law of probability distribution of the observed random variable. It is assumed that the observations (2), for a fixed $\Theta$, are independent. The calculation of the posterior distribution is carried out using formula (1). The construction of Bayesian point and interval estimates is based on knowledge of the posterior distribution law $\pi(\Theta \mid x_1, \dots, x_k)$ (4). To calculate the Bayesian confidence interval or Bayesian confidence area for a parameter, it is necessary: A) in the case of a one-dimensional distribution law, to calculate the posterior according to formula (1), where $x$ is the observed random variable. In [1] it was shown that the conjugate law for Problem 1 is the normal distribution law with an unknown mean value, and for Problem 2 the conjugate law is the gamma-normal distribution.
Note that the general form of the posterior distribution law $\pi(\Theta \mid x_1, \dots, x_k)$ in (1) is determined, up to the normalizing constant, only by the numerator of the right-hand side of this formula. Therefore, below, when analyzing equalities that hold up to a normalizing constant, we will use the sign "$\propto$".
According to [1], for Problem 1 the right-hand side of this relation is (up to a normalizing factor independent of $\theta$) the density of the normal distribution with average value $\bar{x}$ and dispersion $\sigma^2 / n$, so the posterior itself belongs to the class of normal distribution laws [1].
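As a concrete illustration of this conjugacy, the sketch below implements the standard closed-form update for Problem 1 (unknown mean, known dispersion, normal prior on the mean); the hyperparameter names and the numbers are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def normal_known_var_update(mu0, tau0_sq, sigma_sq, x):
    """Posterior N(mu_n, tau_n_sq) for the mean of N(theta, sigma_sq)
    given the prior theta ~ N(mu0, tau0_sq); standard conjugate result."""
    x = np.asarray(x, dtype=float)
    n = x.size
    precision = 1.0 / tau0_sq + n / sigma_sq      # posterior precision
    tau_n_sq = 1.0 / precision
    mu_n = tau_n_sq * (mu0 / tau0_sq + n * x.mean() / sigma_sq)
    return mu_n, tau_n_sq

# Illustrative: prior N(10, 4), known sigma^2 = 1, four new measurements.
mu_n, tau_n_sq = normal_known_var_update(10.0, 4.0, 1.0, [9.4, 9.9, 10.3, 9.7])
print(f"posterior mean {mu_n:.3f}, posterior variance {tau_n_sq:.3f}")
```

The posterior mean is a precision-weighted average of the prior mean and the sample mean, matching the "weighted average" property noted later in the text.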
Note that for Problem 2 the right-hand side is (up to a normalizing factor independent of $\theta$ and $h$) the density of the two-dimensional gamma-normal distribution law. Consequently, the set of conjugate a priori distribution laws of the two-dimensional parameter belongs to the class of two-dimensional gamma-normal distribution laws [1].
Method for calculating specific parameter values in conjugate prior distribution laws
Using as a priori laws the probability distributions associated with the observed general population allows us to determine their general form, i.e., it defines a whole set of a priori distributions. The parameters of the a priori distribution law can then be determined by the method of moments [1].
Since the calculation of the parameters for Problem 1 is obvious, we describe the procedure for calculating the parameters only for Problem 2. From the properties of the two-dimensional gamma-normal distribution law it follows that the partial a priori distribution of the parameter $h$ is a gamma distribution law with parameters $\alpha$ and $\beta$. Therefore, the given values of the a priori moments can be used to determine these parameters.
Method for recalculation of parameter values
Note that the average value $d_1$ and dispersion $d_2$ of the a posteriori normal distribution law are the weighted averages of the a priori and sample means and variances, respectively. For Problem 2, when implementing the general scheme for converting a priori parameters into a posteriori ones, one should take into account the representation of the likelihood function $L$ in the form (6); the a priori density also has the form of the two-dimensional gamma-normal distribution (6), and the vector of parameters is given in Table 1. Table 2 shows the point estimates and confidence intervals based on the Bayesian approach and the MLM. It can be seen that the application of the Bayesian approach allows one to construct more accurate and reliable estimates. Fig. 2 shows a general view of the gamma-normal distribution law, with confidence levels for $h$ of 0.95 and 0.975, respectively. Note that with increasing $n$ the confidence areas will become more and more similar to ellipses, since the gamma-normal distribution will tend to a two-dimensional normal law. We also note that methods for constructing confidence regions in the shapes of ellipses, rectangles and ellipsoids are currently described and implemented in the scientific works of other authors.
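Since the recalculation formulas themselves did not survive extraction here, the sketch below implements the textbook gamma-normal (normal-gamma) conjugate update for Problem 2, of which the paper's formulas are a variant; the hyperparameter names (mu0, kappa0, alpha0, beta0) follow the standard parameterization and are an assumption on our part, not the paper's notation.

```python
import numpy as np

def normal_gamma_update(mu0, kappa0, alpha0, beta0, x):
    """Posterior hyperparameters of a normal-gamma prior on (mean, precision h)
    of a normal sample; standard conjugate result."""
    x = np.asarray(x, dtype=float)
    n, xbar = x.size, x.mean()
    ss = ((x - xbar) ** 2).sum()                  # within-sample scatter
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n    # weighted average of means
    alpha_n = alpha0 + n / 2.0
    beta_n = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2.0 * kappa_n)
    return mu_n, kappa_n, alpha_n, beta_n
```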
Discussion
Modern innovative projects lead to the need to develop new technical measurement means and devices with specified technical, metrological and operational characteristics. These characteristics of a newly created product are detailed in the corresponding technical specifications for the development of the new product. Before the introduction of these technical means and devices, or before their state acceptance, a whole test cycle is carried out. The goal of testing these devices is to confirm the specified characteristics defined in the technical projects. To achieve this goal, the test results are carefully analyzed, including processing by statistical methods.
The algorithms and results obtained in the article are aimed at the methodological support of the problems described above. The algorithms are based on taking into account the available statistical data together with a priori information about the process or object under study.
It is important that the distribution law obtained in this paper has an analytical form. We note that in most practical problems it is possible to construct the distribution function only by numerical methods.
The method developed in this paper can be used to create self-learning and self-tuning systems. For this, it is necessary to apply formulas (7)-(9) sequentially, as sketched below.
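A minimal sketch of such sequential self-tuning (hypothetical data; it reuses the normal_gamma_update function from the sketch above, which stands in for formulas (7)-(9)):

```python
# Each incoming batch updates the posterior, which then serves as the
# prior for the next batch (assumes normal_gamma_update defined above).
params = (0.0, 1.0, 2.0, 2.0)  # initial prior: mu0, kappa0, alpha0, beta0
for batch in ([1.1, 0.9, 1.3], [1.0, 1.2], [0.8, 1.1, 1.0]):
    params = normal_gamma_update(*params, batch)
print(params)  # refined (mu, kappa, alpha, beta) after all batches
```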
Conclusions
The use of a priori information about the unknown value of the parameter and the application of the Bayesian approach made it possible to refine the estimates and, in particular, to narrow the interval estimate by a factor of 1.5 to 2 in comparison with the MLM. Note that the developed Bayesian approach can yield significant gains in accuracy for limited sample sizes compared to the traditional approach (the MLM). With an increase in the volume of additional data or the arrival of a large number of different samples (statistical information), both approaches, due to their consistency, will give more and more similar results. | 2,282 | 2019-09-01T00:00:00.000 | [
"Engineering",
"Mathematics"
] |
The Origin of a “Zebra” Chromosome in Wheat Suggests Nonhomologous Recombination as a Novel Mechanism for New Chromosome Evolution and Step Changes in Chromosome Number
An alloplasmic wheat line, TA5536, with the "zebra" chromosome z5A was isolated from an Elymus trachycaulus/Triticum aestivum backcross derivative. This chromosome was named "zebra" because of its striped genomic in situ hybridization pattern. Its origin was traced to the nonhomologous chromosomes 5A of wheat and 1Ht of Elymus; four chromatin segments were derived from chromosome 1Ht and five chromatin segments, including the centromere, from 5A. In this study, our objective was to determine the mechanism of origin of chromosome z5A, whether by nonhomologous recombination or by multiple translocation events. Different crossing schemes were used to recover recombinants containing various Elymus chromatin segments of the z5A chromosome. In addition, one z5AL telocentric chromosome and three z5AL isochromosomes were recovered. The dissection of the Elymus segments into different stocks allowed us to determine the chromosomal origin of the different chromosome fragments on the basis of the order of the RFLP markers employed and suggested that the zebra chromosome originated from nonhomologous recombination. We present a model of possible mechanism(s) of chromosome evolution and step changes in chromosome number applicable to a wide range of organisms.
Classical phenomena of changes in multiple sets of basic chromosome number by polyploidy and step changes in basic chromosome number through aneuploidy have been documented for over a century. Stebbins (1950) provided the first synthesis of the enormous literature on chromosome variation and evolution in plants. However, such studies were limited to groups of species that could be hybridized and their meiotic behavior analyzed. With the dawn of the genomics era, comparative genomics and fluorescence in situ hybridization (FISH) are providing new insights into mechanisms of chromosome evolution, including changes in size, morphology, and number at the levels of tribes, families, or higher hierarchies sharing and spanning tens or even hundreds of millions of years of coevolutionary history. Typically, one or more rounds of whole-genome duplications, followed by diploidization and step changes in basic chromosome numbers, have been hypothesized. In grasses, Salse et al. (2008) have provided a synthesis of the whole-genome duplications and step changes in basic chromosome numbers, including chromosome fissions, fusions, and translocations that have spawned basic chromosome changes of 1x = 5, 7, 10, or 12 in maize (1x = 10), wheat (1x = 7), rice (1x = 12), and sorghum (1x = 10), sharing 50 million years of evolutionary history. Apart from chromosome fission and fusion, unequal translocations followed by the elimination of a diminutive chromosome have been hypothesized in Arabidopsis species, leading to step changes of decrease in chromosome number from 1x = 8 to 1x = 5 (Lysak et al. 2006; Schubert 2007).
However, there have been only a few experimental demonstrations of step changes in basic chromosome number (reviewed in Schubert 2007). In wheat, there is a long history of interspecific hybridization and experimental introgression aimed at transferring useful genes from alien species into wheat. There is also a large body of literature on chromosome behavior and structural aberrations in interspecific hybrids. One particularly useful aspect of the wheat genetic system is buffering of the genome due to polyploidy that allows study of chromosome structure and behavior in isolation and over a large number of generations. Some key aspects of these studies were summarized by Gill (1991): (1) Interspecific hybridization is mutagenic; (2) many chromosomes from either one of the parents exhibit meiotic drive and show preferential transmission; and (3) cytoplasmic sterility resulting from nucleo-cytoplasmic interactions is common and leads to unilateral introgression of fertility restoration (Rf) and/or genes specific to the cytoplasmic donor in backcross derivatives.
We have studied one particular wide hybrid between Elymus trachycaulus (Link) Gould ex Shinners (2n = 4x = 28, StStHtHt) and wheat for over 25 years. The original hybrid could only be made using E. trachycaulus as female and common wheat as male (Sharma and Gill 1981). Upon backcrossing with wheat, "unfavorable nucleocytoplasmic interactions were evident that led to seed shriveling, embryo abortion, failure of seed or cultured embryos to germinate, seedling death, plant weakness and sterility in the backcross derivatives" (Sharma and Gill 1983a,b). All viable and partially fertile alloplasmic derivatives carried chromosomes 1Ht or 1St or their short arms as telocentrics. Twelve simple or complex translocations involving 1St or 1StS with other E. trachycaulus or wheat chromosomes were recovered (Morris et al. 1990). Among them, an alloplasmic wheat line carrying a zebra chromosome designated z5A, a complex translocation involving 1HtS and wheat chromosome 5A that replaced wheat chromosome 5A, was recovered (Jiang and Gill 1993). Chromosome z5A gave a striped appearance upon genomic in situ hybridization (GISH) and was made up of four segments from E. trachycaulus and five segments including the centromere from wheat. This line was fully fertile and flowered 10 days earlier than the wheat recurrent parent.
In this article, our objective was to determine the mechanism of origin of chromosome z5A, whether by nonhomologous recombination or by multiple translocation events. By separating and genetically isolating the four E. trachycaulus segments of z5A using centric breakage and homologous recombination, we obtained evidence that chromosome z5A likely arose by nonhomologous recombination, resulting in a mosaic chromosome comprised of alternate blocks of Triticum aestivum and E. trachycaulus chromatin segments. A model is presented and the significance of this phenomenon is discussed in relation to chromosome evolution and repatterning forming new chromosomes and also leading to step changes in basic chromosome number in diverse organisms.
MATERIALS AND METHODS
Plant material: An alloplasmic wheat line, TA5536, carrying the zebra chromosome z5A pair substituting for chromosome 5A of wheat in addition to 20 pairs of wheat chromosomes, was isolated from a derivative of an E. trachycaulus/T. aestivum cv. "Chinese Spring" (CS) hybrid (Sharma and Gill 1983a,b; Morris et al. 1990; Jiang 1993; Jiang and Gill 1993). Chromosome z5A was recovered in progeny that had a complete 1Ht chromosome, a 1HtS telosome or isochromosome, or a T1HtS·5AL translocation chromosome in addition to a monosomic wheat chromosome 5A. These observations suggested that the E. trachycaulus chromatin in z5A was derived from chromosome 1Ht (Jiang and Gill 1993). Alloplasmic euploid wheat plants in E. trachycaulus cytoplasm are male sterile with reduced vigor. The short arm of chromosome 1Ht has the fertility restoration gene Rf-Ht1. The alloplasmic stock TA5536 is vigorous and fully fertile, indicating that the 1Ht segments present in chromosome z5A have the Rf-Ht1 gene. The long arm of chromosome 5A has the major domestication gene Q, which controls the free-threshing character and square spike morphology (Huskins 1946; Unrau et al. 1950; Sears 1952, 1954; Mackey 1954; Simons et al. 2006). TA5536 has a square head and is free threshing, indicating that the chromosome 5A segment containing the Q gene is present in z5A. Meiotic metaphase I pairing revealed that the short arm of z5A paired with the short arm of chromosome 1Ht, whereas the distal region of the long arm of z5A paired with the distal region of 5AL. These data confirmed that chromosome z5A was derived from chromosomes 1Ht of E. trachycaulus and 5A of wheat (Jiang and Gill 1993).
Different crossing schemes were used to separate the four Elymus chromatin bands in chromosome z5A. For obtaining recombinants with different Elymus chromatin bands, the CS double ditelosomic dDt5A stock (20″ + 5AS″ + 5AL″) was crossed as female with TA5536. The F1 was either backcrossed with CS or selfed. To obtain z5A telosomes, the CS monosomic 5A stock was crossed as a female with TA5536. Because chromosome 5A is expected to be absent in ~75% of the female gametes, the F1 plants were expected to segregate 25% with the chromosome constitution 20″ + z5A′ + 5A′ (42 chromosomes), with normal square spike morphology (two doses of Q), and 75% with 20″ + z5A′ (41 chromosomes) and speltoid (spelt-like) spikes (one dose of Q). Plants with speltoid spikes were selfed and screened for z5A telosomes using GISH. In addition, ditelosomic addition (DtA) stocks DtA1HtS and DtA1HtL were included in the Southern hybridization analysis to enable the assignment of DNA markers to either the 1Ht arms or Elymus chromatin bands. All materials are maintained at the Wheat Genetic and Genomic Resources Center, Kansas State University, Manhattan, Kansas.
Mitotic and meiotic analyses: Chromosome measurements were performed on 20 z5A and 1Ht chromosomes, using chromosome 3B of wheat as a standard. Anthers from F1 plants of dDt5A/TA5536 undergoing meiotic metaphase I were fixed in Carnoy's solution I (100% ethanol:glacial acetic acid = 3:1), stained in 1% acetocarmine, and squashed in one drop of 45% acetic acid. Meiotic metaphase I pairing was analyzed in pollen mother cells (PMCs) after GISH.
FISH and GISH analyses: Seeds were germinated in distilled water on filter paper in a petri dish at room temperature for 2-3 days, until roots were ~2 cm long. Roots were cut, pretreated in ice water for 24 hr, and fixed in Carnoy's solution I. FISH and GISH were performed according to Zhang et al. (2001).
The Aegilops tauschii Coss. clone pAet6-J9, containing a 750-bp sequence with high sequence similarity to the gag-pol polyprotein of the Ty3/gypsy retrotransposon cereba, was cloned into the plasmid pCR4Blunt-TOPO (Invitrogen Life Technologies, Carlsbad, CA) (Zhang et al. 2004). This clone contains a centromere-specific repetitive sequence that hybridizes to the centromeric regions of wheat, rye, barley, and maize.
Plasmid DNA was isolated using a QIAprep Spin miniprep kit (QIAGEN, Valencia, CA). Genomic DNA from E. trachycaulus was isolated using the standard CTAB method. One microgram of plasmid DNA was labeled with either rhodamine-6-dUTP using rhodamine-nick translation mix or fluorescein-12-dUTP (Roche Applied Science, Indianapolis) using nick translation according to the manufacturer's protocol. One microgram of total genomic DNA from E. trachycaulus was labeled with fluorescein-12-dUTP. Probes were purified with the QIAquick nucleotide removal kit (QIAGEN).
Southern hybridization: Total genomic DNA isolation, digestion, gel electrophoresis, and Southern blot hybridization followed Faris et al. (2000). Probes used were selected from the consensus map of wheat homeologous group 1 chromosomes (Van Deynze et al. 1995).
RESULTS
Characterization of chromosome z5A by GISH, FISH, and meiotic pairing analyses: Chromosome z5A has a length of 9.47 ± 0.79 µm (67% of 3B length) with an arm ratio (long/short) of 4.6. Chromosome 1Ht has a length of 8.23 ± 0.68 µm (60% of 3B length) with an arm ratio of 1.8. Chromosome 3B is the largest wheat chromosome, with a size of 13.8 µm. GISH revealed that 33% of chromosome z5A (3.13 µm) was derived from E. trachycaulus and 67% (6.34 µm) was derived from wheat. The calculated size of 1HtS (2.99 µm) is similar to that of the 1HtS segment present in chromosome z5A as determined after GISH (3.13 µm). Similarly, the 5AL segment in z5A is 6.34 µm, which is just a little shorter than the 5AL length reported earlier (7.7 µm, Gill et al. 1991). The measurement data suggested that almost the complete 5AL and 1HtS arms were retained in chromosome z5A.
GISH analysis confirmed that chromosome z5A consists of four E. trachycaulus segments designated E1, E2, E3, and E4, and five wheat segments, including the centromere, designated W1, W2, W3, W4, and W5 (Figure 1A, Figure 2). The W1 segment contains the centromere derived from chromosome 5A. FISH with the centromeric clone pAet6-J9 on the disomic addition line DA1Ht revealed strong centromeric FISH sites on all wheat chromosomes, whereas chromosome 1Ht showed a very weak centromeric FISH signal (Figure 1B). Clone pAet6-J9 on TA5536 revealed three FISH sites on chromosome z5A (Figure 1E). Sequential GISH, performed on the same slide after stripping off the FISH probe, revealed that the major pAet6-J9 FISH site mapped to the centromeric W1 segment, whereas the second pAet6-J9 FISH site mapped between the W2 and W3 segments and the third, minor pAet6-J9 FISH site mapped to the distal region of the E4 segment (Figure 1E). Colocalization of the minor pAet6-J9 FISH site to the E4 segment suggested that this site was derived from E. trachycaulus, in agreement with the much smaller centromeric pAet6-J9 FISH site in chromosome 1Ht compared to those of wheat chromosomes.
Meiotic metaphase I pairing was analyzed in 197 PMCs of plants with the chromosome constitution 20″ + z5A′ + 5AS′ + 5AL′. Chromosome z5A was univalent in 37 PMCs (19%) (Figure 1C) and paired in the distal region of the long arm with the 5AL telosome as a rod bivalent in 160 PMCs (81%) (Figure 1D). These data confirmed that the distal W5 segment in chromosome z5A was derived from the distal region of 5AL. The chiasmata were almost exclusively in the distal regions of z5A and 5AL, with no chiasma formed in proximal regions.
Identification of z5A recombinants: The F1 hybrids of the cross dDt5A/TA5536 were either backcrossed with CS or selfed. One hundred and nineteen BC1 plants and 68 F2 plants were screened for recombinants using GISH. Almost all detected recombination events (55 of 56) occurred in the W4 segment, resulting in 28 recombinants having the first three E. trachycaulus segments (E1, E2, and E3) on a complete z5A chromosome, designated rec1, and 27 recombinants with only the E4 segment on a 5AL telosome, designated rec3 (Figure 1F). The frequent recovery of the rec1 and rec3 recombinants further suggested that the W4 segment in z5A is homologous and not rearranged relative to the corresponding region of wheat chromosome 5A. Only one recombination event occurred in the W3 segment and resulted in a z5A recombinant chromosome with only the E1 and E2 segments (rec2, Figure 1F). The recovery of the rec2 recombinant likewise suggested that the W3 segment in z5A is homologous and not rearranged relative to the corresponding region of wheat chromosome 5A.
Separation of the first E. trachycaulus chromatin segment (E1) from the remaining three segments (E2, E3, and E4): F1 hybrids from the cross of a monosomic 5A plant with TA5536 with speltoid spikes, and thus putatively monosomic for z5A, were selfed. One hundred and ten F2 individuals were screened for telocentric chromosomes using GISH. One plant with a tz5AL telosomic chromosome and a normal z5A chromosome (2n = 41 + t), having a square spike (two doses of the Q gene), one plant with a z5AL isochromosome (iz5AL) (2n = 40 + iz5AL) (square spike, two doses of the Q gene), and two plants with an iz5AL and a normal z5A chromosome (2n = 41 + iz5AL) (compactoid spike, three doses of the Q gene) were recovered (Figure 1G). In addition, plants with three different types of translocation chromosomes were identified (Figure 1G), including (1) T1, with the short arm from an unknown wheat chromosome and the long arm from z5AL; (2) T2, with three Elymus chromatin bands (one band in the short arm and two bands in the long arm); and (3) T3, with five Elymus chromatin bands. The centromere-specific probe pAet6-J9 was used to determine the location of the centromere in T2 by FISH and identified T2 as a small metacentric chromosome (Figure 1G). The GISH pattern of T2 suggested that the GISH band in the short arm was derived from the E4 segment of z5A, whereas the two Elymus segments in the long arm corresponded to the E2 and E3 segments of z5A. The GISH pattern of T3 suggested that the two Elymus segments in the short arm correspond to the E3 and E4 segments of z5A, whereas the long arm of T3 is identical to the long arm of z5A. The T3 translocation stock had speltoid spikes and low fertility, with only 20 seeds produced on 16 tillers. Although z5A long-arm telosomes and isochromosomes were recovered, no z5A short-arm telosome was obtained.
Southern hybridization analysis: Chromosome 1Ht is homeologous to the group 1 chromosomes of wheat. Molecular markers spanning the short and long arms of wheat group 1 chromosomes were used to map the Elymus segments in chromosome z5A by Southern hybridization analysis (Figure 2). The recovered recombinants, the derived tz5AL telosome, and the iz5AL isochromosomes allowed the assignment of wheat group 1 short-arm markers to specific Elymus segments in chromosome z5A. None of the group 1 long-arm markers hybridized to z5A, indicating that all the Elymus segments in z5A were derived from the short arm of chromosome 1Ht. The presence of centromeric marker BCD1072 in 1HtS and its absence in z5A indicated that not all of the short arm of 1Ht is present in chromosome z5A. Jiang and Gill (1993) similarly reported that the short-arm marker PSR161, previously mapped to 1HtS, was absent in chromosome z5A. Later it was shown that marker PSR161 mapped to the functional centromere of wheat group 1 chromosomes (Francki et al. 2002), indicating that the 1HtS chromatin missing in z5A is closely associated with the centromere.

Figure 1.-GISH and FISH patterns of chromosome z5A (A-E), z5A recombinant chromosomes (rec1-rec3) (F), and telosomic chromosome tz5AL, isochromosome iz5AL, and translocation chromosomes (T1-T3) (G). Total genomic DNA from E. trachycaulus and the centromeric clone pAet6-J9 were labeled with fluorescein-12-dUTP (except for the pAet6-J9 FISH pattern on T2 in G) and visualized by yellow-green fluorescence. Chromosomes were counterstained with propidium iodide and fluoresced red. (A) The total genomic DNA from E. trachycaulus generated a striped GISH pattern on a pair of z5A chromosomes in TA5536. The four yellow-green chromosome segments originated from chromosome 1Ht of E. trachycaulus and the four red segments, including the centromere, originated from wheat chromosome 5A. (B) Clone pAet6-J9 hybridized strongly to all 42 wheat centromeres in line DA1Ht, but very weakly to the centromeres of chromosomes 1Ht (arrowheads). (C and D) GISH on meiotic metaphase I cells in pollen mother cells from anthers of F1 plants of dDt5A/TA5536 with the chromosome constitution 20″ + z5A′ + 5AS′ + 5AL′. Chromosome z5A was univalent (arrowhead) in C and was paired in the distal region of the long arm with the 5AL telosome (arrowhead) in the form of a rod bivalent in D. (E) (Middle) Three FISH sites on chromosome z5A probed with clone pAet6-J9; (left) the sequential GISH result on the same chromosome; and (right) a merged image from the left and middle images. The chromosome, the GISH signals, and the FISH signals were pseudocolored blue, green, and magenta, respectively. The merged image shows that the top FISH site mapped to the centromeric wheat segment (W1), the second site mapped to the second wheat segment (between W2 and W3), and the third site mapped to the distal region of the last Elymus segment (E4) (arrowhead). (F) GISH pattern on z5A chromosomes and three recombinant chromosomes derived from either selfed or backcrossed derivatives of dDt5A/TA5536. The chromosomes rec1, rec2, and rec3 had the first three (E1, E2, E3), first two (E1, E2), and the last Elymus segment(s) (E4), respectively. Rec3 is a telosomic chromosome. (G) tz5AL and iz5AL show a telosomic chromosome and an isochromosome, respectively; T1 shows a translocation chromosome with the short arm from an unknown wheat chromosome and the long arm from z5AL; T2 (GISH) shows a translocation chromosome with three Elymus bands (one in the short arm and two in the long arm). T2 (pAet6-J9) shows a sequential FISH pattern using clone pAet6-J9 labeled with rhodamine-6-dUTP and visualized by red fluorescence, identifying T2 as a small metacentric chromosome, with the GISH band in the short arm derived from the last Elymus segment of z5A and the two in the long arm corresponding to the middle two Elymus segments of z5A. T3 shows a translocation chromosome with five Elymus bands, two in the short arm corresponding to the last two Elymus segments of z5A, and the long arm of T3 is identical to the long arm of z5A. Bars, 10 µm.
Separation of the four E. trachycaulus segments in chromosome z5A also allowed the determination of the marker order in these segments. Southern hybridization analysis revealed that the marker order in the Elymus segments E1 and E4 was conserved relative to wheat, whereas it was reversed in segments E2 and E3 (Figure 2). The reversed marker order in the latter two segments was probably caused by a paracentric inversion in the short arm of chromosome 1Ht. The short-arm marker BCD200 mapped to both the E3 and E4 E. trachycaulus segments, suggesting that a duplication event occurred during the origin of chromosome z5A.
DISCUSSION
Southern hybridization analysis revealed that the four E. trachycaulus segments present in chromosome z5A were derived from the short arm of 1Ht. However, the 1HtS-specific centromeric marker PSR161 and the adjacent marker BCD1072 were missing in z5A, indicating that not all of the 1HtS arm was included in the formation of z5A. The mapping data further indicated the presence of a paracentric inversion in 1HtS.
Although Southern hybridization analysis did not allow the assignment of 5AL markers to the wheat segments in z5A, the structure of these segments can be inferred from GISH and meiotic pairing analyses. This study confirmed previous reports by showing that the distal part of the short arm of chromosome z5A paired with the distal region of the short arm of 1Ht and that the distal region of the long arm of z5A paired with the distal region of wheat arm 5AL (Jiang and Gill 1993). In the presence of the major meiotic pairing locus Ph1 in wheat, homology at chromosome ends triggers the formation of chiasmate metaphase I associations (Curtis et al. 1991; Jones et al. 2002; Qi et al. 2002). Thus, E1 was derived from the distal region of 1HtS, and W5 was derived from the distal region of 5AL.
GISH analysis showed that the functional centromere was derived from wheat and was located within the W1 segment. The homology of wheat segments W3 and W4 was inferred from the recovered recombinants rec2, rec1, and rec3. In the presence of Ph1, recombination occurs normally only between homologous regions in wheat. The recovery of these recombinants indicated that the W3 and W4 segments in chromosome z5A are not structurally rearranged but are homologous to corresponding regions in chromosome 5AL.
During the formation of the zebra chromosome, many changes occurred not only in chromosome 1Ht of Elymus, but also in chromosome 5A of wheat. As seen from our sequential FISH and GISH results, the original centromere on wheat chromosome 5A was split into two portions in the z5A chromosome (Figure 1E, pAet6-J9 on z5A; see also Figure 3). Although two of the group 5 long-arm markers, including Q (this study and Jiang and Gill 1993) and H1 (Jiang and Gill 1993), are also present in the z5A chromosome, some wheat markers in chromosome 5AL were deleted in z5A.

Figure 2.-(Top) Schematics for telosomic chromosomes 1HtS and 1HtL, z5A, three recombinant chromosomes (rec1-rec3), and isochromosome iz5AL. A scaled-up z5A chromosome is shown on the right. The Elymus short-arm chromatin is in black, the Elymus long-arm chromatin is in gray, and wheat chromatin is in white. The Elymus centromere is shown in blue and the wheat centromere in red. A schematic of the consensus map of wheat group 1 chromosomes is on the left. Only tested markers are listed. The "+" and "-" indicate presence and absence of markers, respectively. The marker order in Elymus segments E1 and E4 is conserved relative to wheat, whereas the order is reversed in segments E2 and E3. None of the group 1 long-arm markers hybridized to z5A.
In addition to the recombinants, telosome, and isochromosomes, three different types of translocation chromosomes were recovered. The overall high translocation frequency (3 of 110) involving the z5A chromosome indicated that this chromosome is prone to breakage and translocation. Jiang and Gill (1993) postulated two possible origins for the zebra chromosome. One was that it arose by illegitimate recombination between the nonhomologous chromosome arms 1HtS of E. trachycaulus and 5AL of wheat. In this scenario, the marker order of the four Elymus and five wheat segments is expected to be linear from one telomere to the other on chromosome z5A. The second possibility was that chromosome z5A arose from multiple breakage and fusion translocation events, in which case the marker order is expected to be random. With the exception of Elymus segments E2 and E3, where the marker orders are reversed relative to wheat (most likely resulting from a paracentric inversion in 1HtS), the marker orders or homologies of the remaining Elymus and wheat segments are linear from one telomere to the other. Such an order would be highly unlikely if chromosome z5A had originated by multiple translocation events.
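The logic of this argument can be illustrated with a small sketch (Python; the marker indices below are hypothetical stand-ins for the RFLP loci, not the actual mapping data): a single series of exchanges between prealigned chromosomes preserves a telomere-to-telomere order, while independently occurring translocations generally do not.

```python
import random

def is_collinear(blocks):
    """Check whether marker indices, read telomere to telomere across the
    mosaic chromosome, form a single increasing sequence."""
    flat = [m for block in blocks for m in block]
    return all(a < b for a, b in zip(flat, flat[1:]))

# Hypothetical marker indices for the alternating donor blocks:
single_event_order = [[1, 2], [3, 4], [5, 6], [7, 8]]            # nonhomologous recombination
random_translocations = random.sample(single_event_order, k=4)    # block order shuffled

print(is_collinear(single_event_order))     # True: linear, telomere to telomere
print(is_collinear(random_translocations))  # usually False for a random block order
```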
On the basis of our data, a possible mode of origin of chromosome z5A is shown in Figure 3. The alloplasmic wheat plant was double monosomic for 5A and 1HtS, and the two nonhomologous chromosomes may have prealigned prior to the S phase of meiosis or mitosis in germinal tissues during the transition from the vegetative to the reproductive phase. Chromosome z5A might have originated from nonhomologous recombination during the DNA double-strand break (DSB) repair process (Gorbunova and Levy 1999; Pacher et al. 2007) in the S phase. Initially, at least five breaks occurred in chromosome 5A, resulting in six chromatin blocks. Similarly, four breaks in the 1HtS telosome split it into five chromatin blocks. Rejoining of the prealigned 5AL and 1HtS blocks in succession resulted in the formation of chromosome z5A. It was accompanied by the complete loss of 5AS, including the telomere, and the loss of a very small distal centromeric end of 1HtS. The formation of chromosome z5A was not the result of a reciprocal segment exchange.
Amazingly, z5A has the centromere of wheat chromosome 5A embedded between the E1 and E2 segments from 1HtS of E. trachycaulus. Its short arm, including the telomere, was derived from E. trachycaulus, and its long arm is a chimera of 5AL and 1HtS segments with the distal part and the telomere derived from 5AL. This represents an aneuploid change in chromosome number from two to one and the beginning of the evolutionary history of a new z5A chromosome.
The aneuploid changes in chromosome number and/or the origin of structurally rearranged chromosomes may be associated with interspecific hybridization. The F1 hybrids are usually sterile, and one component of sterility is due to adverse nucleo-cytoplasmic interactions (NCI) (Gill 1991; Maan 1991). During the process of evolutionary introgressive hybridization in subsequent generations, certain chromosomes that overcome adverse NCI and restore fertility are exclusively transmitted to the progeny and are often involved in complex chromosomal rearrangements (Gill 1991). In alloplasmic hybrid derivatives between T. aestivum and E. trachycaulus, all progeny plants in backcrosses to wheat carried either chromosome 1Ht or 1St or their short arms as telocentrics or in complex rearrangements with wheat chromosomes (Jiang and Gill 1993). The z5A plant was recovered in one such progeny. It was fully fertile and flowered 10 days earlier than the Chinese Spring wheat, the recurrent parent. In nature, the z5A plant would be a candidate for a founder population for the evolution of a new species.

Figure 3.-Possible mode of origin of chromosome z5A from wheat chromosome 5A and (right) telosome 1HtS, from which the four Elymus segments present in z5A were derived. The centromeres of wheat and Elymus are shown in red and blue, respectively. The original centromere of wheat chromosome 5A was split into two portions in z5A. The major part is the functional centromere, located within the W1 segment, and the smaller part is located in W2. The third and smallest centromere was derived from part of the centromere of chromosome 1Ht (E4). Marker analysis indicated that the entire short arm of 1Ht was not present in z5A. GISH and meiotic pairing analyses showed that the E1 segment was derived from the distal region of 1HtS and W5 was derived from the distal region of 5AL. Because the marker order in segments E1 and E4 is conserved relative to wheat, but reversed in segments E2 and E3, either a paracentric inversion in the short arm preexisted in 1Ht or fusion of the broken chromatin blocks occurred, as indicated.
The two genomes in alloplasmic hybrids between T. aestivum and E. trachycaulus exist in a common nucleus in an Elymus cytoplasm. Because of the incompatibility and genomic stress imposed by interspecific hybridization, certain intergenomic and/or chromosomal structural rearrangements might have occurred (Gill 1991), similar to those reported by Naranjo et al. (1987) and others. Nonhomologous recombination might be initiated to repair chromosomal DSBs by joining sequences with little or no homology. If chromosomes 1Ht and 5A were in close proximity at the time of DSB repair, DNA ends or other chromosome breaks on 1Ht and 5A might have been joined by nonhomologous end joining. Alternatively, because there were free DNA ends from chromosomal DSBs, these free ends might have enhanced random integration by the copy-join process of nonhomologous recombination to link new DNA sequences together (Merrihew et al. 1996). In our case, the sequences from the Elymus 1Ht chromosome were linked to those from wheat chromosome 5A because these two chromosomes were by chance physically close at the time.
The molecular and cytogenetic research has documented extensive chromosome changes, including inversions, translocations, and fusions/fissions, leading to aneuploid changes and karyotype repatterning in the Brassica family (reviewed in Schubert 2007). It was suggested that aneuploid changes as well as extensive chromosome repatterning, including inversions, unequal translocations involving nonhomologous chromosomes, and elimination of minichromosomes, led to step changes in chromosome number from n = 8 in Arabidopsis lyrata to n = 5 in A. thaliana (Lysak et al. 2006). The experimental data on the origin of z5A allude to additional mechanisms. Results from our study indicated that the zebra chromosome arose by illegitimate recombination between nonhomologous chromosomes of T. aestivum and E. trachycaulus, resulting in a mosaic chromosome comprised of alternate blocks of T. aestivum and E. trachycaulus chromatin. Although our findings involved a polyploid species, it is possible that similar mechanisms might have been involved in diploid species.

Figure 4.-Hypothetical scheme leading to changes in chromosome number after interspecific hybridization, whole-genome duplication, and chromosomal rearrangements. Species A and B with n = 2 chromosomes have differentiated cytoplasms (indicated by open and shaded outer circles) and nuclear genomes. Species A has a fertility factor F1 on chromosome A1 that is essential for restoring fertility to species-A-specific cytoplasm, and species B has another fertility factor, F2 on chromosome B2, that is essential for gametophytic or sporophytic viability. Interspecific hybridization between species A and B followed by whole-genome duplication created genomic stress that resulted in chromosomal repatterning and the formation of the novel chromosome Z3 by nonhomologous recombination, harboring both fertility factors F1 and F2. Plants with such a chromosomal constitution have a strong selective advantage and will form species D with n = 3 chromosomes.
A hypothetical scheme showing possible evolutionary chromosomal changes during speciation is shown in Figure 4. We assume that species A and species B with n = 2 co-evolved from a common ancestor and have differentiated cytoplasms and nuclear genomes. Species A has fertility factor F1 on chromosome A1 that is essential for restoring fertility to species-A-specific cytoplasm. Species B has another fertility factor, F2 on chromosome B2, that is essential for the viability of the gametophyte or the sporophyte. Next, species A and B come in contact and undergo hybridization and a whole-genome duplication to form species C with n = 4 chromosomes. The whole-genome duplication is a polyploidy-associated mechanism that fixes hybrid vigor. However, hybridization is also mutagenic and may trigger chromosomal repatterning events. Genome duplication provides buffering for the formation of new chromosomes that otherwise may be deleterious because they involve segmental deletions and duplications. Most such events will be lethal; however, those that bring genetic factors F1 and F2 onto the novel chromosome Z3 will have a strong selective advantage and may be fixed to form species D with n = 3 chromosomes. In our experimental hybrids, the z5A chromosome in the alloplasmic z5A plant had the Rf gene from E. trachycaulus that restored fertility. The group 5 long-arm chromosomes in the Triticeae are known to carry genes essential for the viability of the gametophyte (Endo and Gill 1996), and alien group 5 chromosomes show exclusive preferential transmission (Jiang and Gill 1998). This gene is the hypothetical gene F2, and because z5A carried these two essential genes, the hybrid derivatives carrying z5A will be fertile and this chromosome will be fixed in the population. | 7,652.4 | 2008-07-01T00:00:00.000 | [
"Biology"
] |
Murine astrotactins 1 and 2 have similar membrane topology and mature via endoproteolytic cleavage catalyzed by signal peptidase
Astrotactins 1 (Astn1) and 2 (Astn2) are membrane proteins that function in glial-guided migration, receptor trafficking and synaptic plasticity in the brain, as well as in planar polarity pathways in skin. Here, we used glycosylation mapping and protease-protection approaches to map the topologies of mouse Astn1 and Astn2 in rough microsomal membranes (RMs), and found that Astn2 has a cleaved N-terminal signal peptide (SP), an N-terminal domain located in the lumen of the RMs (topologically equivalent to the extracellular surface in cells), two transmembrane helices (TMHs), and a large C-terminal lumenal domain. We also found that Astn1 has the same topology as Astn2, but we did not observe any evidence of SP cleavage in Astn1. Both Astn1 and Astn2 mature through endoproteolytic cleavage in the second TMH; importantly, we identified the endoprotease responsible for the maturation of Astn1 and Astn2 as the endoplasmic reticulum signal peptidase. Differences in the degree of Astn1 and Astn2 maturation possibly contribute to the higher levels of the C-terminal domain of Astn1 detected on neuronal membranes of the central nervous system. These differences may also explain the distinct cellular functions of Astn1 and Astn2, such as in membrane adhesion, receptor trafficking, and planar polarity signaling.
Astrotactins are vertebrate-specific integral membrane glycoproteins known to play critical roles in central nervous system (CNS) and skin development (1)(2)(3)(4). An understanding of the function of Astn1 and Astn2 in the control of neuronal migration and of synaptic function could be important for treatment of human brain disorders such as epilepsy and autism spectrum disorders. Although the number of gene mutations that can disrupt neuronal migration is large (5), Astn1 is one of a few adhesion receptors shown to directly function in migration (6).
In mouse, there are two astrotactin family members, Astn1 and Astn2 (ASTN1 and ASTN2 in humans). Astn1 is involved in glial-guided neuronal migration early in development (1,3,6,7) through the formation of an asymmetric complex with N-cadherin (CDH2) in the glial membrane (6). Astn2, which is 48% homologous to Astn1 and has two isoforms, is abundant in migrating cerebellar granule neurons, where it forms a complex with Astn1 and regulates the trafficking of Astn1 during migration (4). At later stages of development, Astn2 regulates synaptic function by trafficking of other membrane receptors, including the Neuroligins and other synaptic proteins (8). A recent structure of the C-terminal endodomain of Astn2 shows distinctive features responsible for its activity (9). Astn1 and Astn2 are believed to share the same membrane topology, with a cleaved N-terminal signal peptide (SP), two transmembrane helices (TMHs), and a large extracellular C-terminal domain (10). Both Astn1 and Astn2 undergo an endoproteolytic maturation step in which an unknown protease cleaves the protein just after the second TM segment, with the two fragments remaining attached through a single disulfide bond (10,11).
In the present work, we have mapped the topologies of mouse Astn1 and Astn2 in rough microsomal membranes using glycosylation mapping and protease-protection assays. We find that Astn2 has a cleaved N-terminal SP, an N-terminal domain located in the lumen of the RMs (topologically equivalent to the extracellular surface in cells), two TMHs, and a large C-terminal lumenal domain. We further show that Astn1 has the same topology as Astn2, but see no evidence of SP cleavage for Astn1. Finally, we identify the endoprotease responsible for the maturation of Astn1 and Astn2 as signal peptidase, an ER-localized enzyme that normally removes SPs from secreted and membrane proteins.
Results
Predicted topologies of mouse Astn1 and Astn2 - Topology predictions for mouse Astn1 (UniProtKB Q61137-1, splicing isoform 1) and Astn2 (UniProtKB Q80Z10-3, splicing isoform 3) produced by the TOPCONS server (12) agree with the topology model for Astn2 derived from epitope tagging and cell-surface staining (11), i.e., an N-terminal signal peptide (SP) followed by two transmembrane segments (TMH1 and TMH2) and a large C-terminal extracellular domain, Fig. 1. In cells, both Astn1 and Astn2 are cleaved by an unidentified endoprotease into two fragments that remain linked by a disulfide bond (11). Edman sequencing of the two Astn2 fragments showed that the N-terminal one starts at Gly52 (just after the predicted signal peptide) and the C-terminal one at Asn466 (corresponding to Asn414 in the isoform analyzed here). For Astn1, the C-terminal fragment starts at Ser402; no sequence could be obtained from the N-terminal fragment in this case.
Topology mapping of mouse Astn1 - To characterize the mouse Astn1 protein, we used a well-established in vitro glycosylation assay (13,14) to determine the topology of the protein when cotranslationally inserted into dog pancreas rough microsomes (RMs). The transfer of oligosaccharides from the oligosaccharyl transferase (OST) enzyme to natural or engineered acceptor sites for N-linked glycosylation (-Asn-Xxx-Ser/Thr-Yyy, where Xxx and Yyy cannot be Pro (15)(16)(17)(18)) in a nascent polypeptide chain is a characteristic protein modification that can only happen in the lumen of the ER, where the active site of the OST is located (19,20). The topology of Astn1 in RMs was also probed by treatment with proteinase K, which can only digest parts of the protein protruding from the cytosolic side of the RMs (21).
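As an aside, the acceptor-site rule quoted above is easy to scan for computationally; a minimal sketch (Python; the helper name and test sequence are hypothetical, with 1-based residue numbering):

```python
import re

def find_sequons(seq):
    """Return 1-based positions of Asn in N-linked glycosylation acceptor
    sites of the form Asn-Xxx-Ser/Thr-Yyy, where Xxx and Yyy are not Pro."""
    # The lookahead keeps overlapping sites; seq is a one-letter-code string.
    return [m.start() + 1 for m in re.finditer(r'N(?=[^P][ST][^P])', seq)]

print(find_sequons("MNGSAANPTWNVSL"))  # positions of qualifying Asn residues
```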
To be able to investigate the topology of the 1,302-residue-long and heavily glycosylated Astn1 protein, we chose to work with various truncated versions of the full-length protein. This was necessary both because in vitro translation of such large proteins is inefficient, and because the attachment of an oligosaccharide increases the size of the protein by only 2-3 kDa, a shift that is too small to be detectable by SDS-PAGE for the full-length protein but can easily be visualized when using truncated versions.
Truncated versions of Astn1 were expressed in vitro using the TNT® SP6 Quick Coupled System supplemented with column-washed dog pancreas rough microsomes (RMs) (14,21). The glycosylation status was investigated using SDS-PAGE, and truncated Astn1 versions were designed such that differences in glycosylation patterns could be used to infer the topology of the protein in the RM membrane.
As shown in Fig. 2A, Astn1 1-381, a version that extends from the putative SP to the end of the loop between TMH1 and TMH2, receives a single glycan when translated in the presence of RMs (compare lanes 1 and 2). Notably, there is no sign of the SP being cleaved (which would reduce the Mw of the protein by 2.6 kDa). Astn1 78-381 (lanes 3, 4) and Astn1 78-451 (lanes 5, 6) also receive only a single glycan, while Astn1 78-470 (lanes 7, 8) is glycosylated on two sites (note that glycan acceptor sites are rarely if ever modified to 100% in the in vitro translation system; hence molecules with both one and two added glycans are visible on the gel). The second glycan addition therefore must be on Asn453.
To determine whether the first glycan addition is on Asn115 or Asn226 (Asn328 is too close to TMH2 to be reached by the OST (22)), we expressed Astn1 versions lacking the entire N-terminal region, up to but not including TMH2, Fig. 2B. The two shorter versions were not glycosylated at all when expressed in the presence of RMs, while Astn1 160-470 was modified on a single glycosylation site. The latter must be Asn453, showing that neither Asn226 nor Asn328 becomes glycosylated. We conclude that the putative SP in Astn1 appears not to be cleaved by signal peptidase and probably forms an N-terminal transmembrane helix (TMH0), and that Astn1 has two segments (residues 22-152 and 402-1,302) exposed to the lumen of the RMs and one segment (residues 174-380) exposed to the cytosol. Further, since Asn115 is glycosylated in all four constructs, it appears that the N-terminal segment in the Astn1 constructs that start at Met78 can be translocated to the lumenal side of the RMs even though it lacks the putative SP.
We further used a protease-protection assay (21) to verify the proposed topology of Astn1. In order that segments of Astn1 protected from proteinase digestion by the RM membrane would be of a convenient size for SDS-PAGE separation, we first expressed Astn1 78-728. As seen in Fig. 2C, the protein becomes glycosylated (compare lanes 1 and 2), but it is difficult to determine on how many sites. Interestingly, two prominent bands at ~38 kDa (marked N) and ~36 kDa (marked C) were generated in the presence of RMs (lane 2), suggesting an internal endoproteolytic cleavage, in agreement with the published Edman sequencing results that identified a cleavage site between Ser401 and Ser402 (11). In addition, a third band at ~65 kDa that appears to receive a single glycan in the presence of RMs was also seen (lanes 1 and 2). The latter would be consistent with internal translation initiation at Met160, and indeed it comigrates with Astn1 160-728 (lane 4).
Proteinase K treatment of RMs carrying Astn1 78-728 digests cytoplasmically accessible parts of the protein and leaves only two protected fragments: one of identical size to the "endoproteolytic" 36 kDa band, and one at ~39 kDa (lane 3). The two protease-protected fragments are precisely what would be expected from the topology derived from the glycosylation study: the 39 kDa band (marked C*) represents the fragment 381-728 generated when proteinase K digests the cytosolic loop, and the 36 kDa band represents the slightly smaller C-terminal fragment 402-728 generated by endoproteolytic cleavage near the C-terminal end of TMH2. The expected protected N-terminal fragment 78-181 is too small to be resolved on the gel.
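These band assignments can be cross-checked with a back-of-the-envelope mass estimate (a rough sketch; it assumes an average residue mass of ~110 Da and ignores glycans and the actual amino acid composition):

```python
def approx_mass_kda(start, end, avg_residue_da=110.0):
    """Crude fragment mass from inclusive residue numbering."""
    return (end - start + 1) * avg_residue_da / 1000.0

print(approx_mass_kda(381, 728))  # ~38 kDa: the C* fragment left by PK
print(approx_mass_kda(402, 728))  # ~36 kDa: the endoproteolytic C fragment
print(approx_mass_kda(160, 401))  # ~27 kDa: the N fragment (~25 kDa observed)
```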
Similar results were obtained for Astn1 160-728. In addition to the full-length protein at ~65 kDa, two bands at ~36 kDa (marked C) and ~25 kDa (marked N) were seen in the presence of RMs (compare lanes 5 and 6); EndoH treatment shifted both the full-length band at ~65 kDa and the ~36 kDa band to a lower Mw, while the 25 kDa band did not shift (lane 8). Consistent with the Astn1 78-728 results, the glycosylated ~36 kDa band represents the same endoproteolytic C-terminal fragment 402-728, while the unglycosylated 25 kDa band represents the N-terminal endoproteolytic fragment 160-401.
Given the sequence context of the endoproteolytic cleavage site (see Discussion), we hypothesized that the responsible protease may be signal peptidase. Indeed, inclusion of a signal peptidase inhibitor (23) in the in vitro translation of Astn1 160-728 completely inhibits the formation of the ~36 kDa and ~25 kDa products (lane 11).
We conclude that Astn1 has the same topology as previously proposed for Astn2, namely with two lumenal domains (residues 22-152 and 402-1,302) and one cytosolic domain (residues 174-381). The putative SP appears not to be cleaved, but rather forms an N-terminal transmembrane helix (TMH0). We identify signal peptidase as the enzyme responsible for the endoproteolytic cleavage event at Ser401.
Topology mapping of mouse Astn2 - We used the same glycosylation mapping approach to determine the topology of the 1,300-residue-long mouse Astn2 protein (splice isoform 3, lacking exon 4, which encodes a 52-residue segment in the domain between TMH1 and TMH2). Astn2 1-482 includes the putative SP, the two predicted transmembrane helices TMH1 and TMH2, and a portion of the large C-terminal domain. A small amount of glycosylated full-length product at ~56 kDa, two weak bands at ~50 kDa that might represent glycosylated and unglycosylated products lacking the SP (which has a calculated Mw of 6.4 kDa), and a prominent product at ~43 kDa are seen in the presence of RMs, Fig. 3A (lanes 2, 4, 5). The latter is sensitive to EndoH digestion, and the two bands at ~50 kDa collapse to the lower-Mw form upon the same treatment (lane 6). The glycosylated 43 kDa band fits the Mw expected for a product resulting from removal of the signal peptide (residues 1-51) and the endoproteolytic cleavage at Asn413 observed by Edman sequencing (11) (note that we use a different splice version of Astn2 that lacks 52 residues in the cytosolic segment compared to the one used in this reference). This explains the limited amount of glycosylated full-length product (lanes 2, 4, 5), since most of the molecules that become glycosylated are cleaved after the SP and/or TMH2, as seen in lane 6.
To confirm this interpretation, we also analyzed Astn2 161-482, which lacks the putative SP. As seen in Fig. 3B, Astn2 161-482 yields four prominent bands when expressed in the presence of RMs (lane 2): unglycosylated full-length product at ~37 kDa, singly and doubly glycosylated full-length products at ~39 kDa and ~42 kDa, and a smaller endoproteolytic product at ~35 kDa. EndoH treatment collapses the ~39 kDa and ~42 kDa bands to the size of the unmodified full-length product at ~37 kDa, and the ~35 kDa band to a smaller ~30 kDa band (lane 5). Similar to Astn1, addition of the signal peptidase inhibitor to the in vitro translation completely inhibits the formation of the ~35 kDa endoproteolytic product (lane 3), and signal peptidase inhibitor plus EndoH treatment of RM-integrated Astn2 161-482 leaves only the unmodified full-length product (lane 7; for unknown reasons, the signal peptidase inhibitor makes bands run slightly higher in the gel).
These results are entirely consistent with the proposed topology of Astn2 (11) and identify signal peptidase as the enzyme responsible for the endoproteolytic cleavage event at Asn413.
Discussion
Earlier work using epitope mapping of Astn2 expressed in COS7 cells has shown that the N- and C-termini are exposed on the cell surface, while the domain between TMH1 and TMH2 can only be immunodecorated in detergent-permeabilized cells (11). Further, both Astn1 and Astn2 were shown to be cleaved by an unknown endoprotease into an N- and a C-terminal fragment, and Edman sequencing of the C-terminal fragments identified cleavage sites between Ser401-Ser402 in Astn1 and Gly465-Asn466 in Astn2, just after TMH2. In addition, for Astn2, Edman sequencing of the N-terminal endoproteolytic fragment indicated removal of the putative SP (residues 1-51); no sequence was obtained for Astn1, leaving open whether or not the putative SP is cleaved in this protein.
Here, we have confirmed and extended these results for Astn1 and Astn2 using glycosylation mapping and protease-protection assays in a coupled in vitro transcription-translation system supplemented with RMs. Our results for Astn2 are in perfect agreement with those from the earlier study: Astn2 has a cleaved N-terminal SP, an N-terminal domain located in the lumen of the RMs (topologically equivalent to the extracellular surface in cells), two TMHs, and a large C-terminal lumenal domain, Fig. 4. We find that Astn1 has the same topology as Astn2 but see no evidence of SP cleavage; rather, it seems that the putative N-terminal SP in Astn1 remains a part of the protein, presumably forming a third transmembrane helix (TMH0).
We further show that an inhibitor of the signal peptidase complex completely inhibits the endoproteolytic cleavage of both Astn1 and Astn2. The unknown endoprotease involved in the maturation of Astn1 and Astn2 is thus signal peptidase, the enzyme that cleaves SPs from secretory and membrane proteins in the ER (24). While it is uncommon that signal peptidase catalyzes internal cleavage reactions of this kind in cellular proteins, many viral polyproteins mature through signal peptidase-catalyzed cleavages after internal hydrophobic segments in the primary translation product (25,26). Indeed, the SP cleavage site and the cleavage site after TMH2 identified by Edman sequencing in Astn2 are precisely the ones predicted by the SignalP server (27), Supplementary Fig. S2.
The present findings raise the possibility that higher levels of SP-mediated cleavage of Astn2 relative to Astn1 explain the higher levels of the Astn1 C-terminus we previously detected on CNS neuronal surface membranes by antibody labeling and functional assays (6,8). This also likely contributes to the apparently distinct functions of Astn1 as a membrane adhesion receptor that functions in glial-guided migration (3,6,7), and of Astn2 as an endolysosomal trafficking protein that functions in both migration (4) and synaptic function (8). Finally, the exceptionally long Astn2 SP hints at the possibility that, after cleavage, the SP may have additional functions in the cell, as seen for many other very long SPs (28). It will therefore be of interest to determine whether the Astn2 SP domain functions in receptor trafficking or planar polarity signaling pathways.
Experimental procedures
Enzymes and chemicals - Unless otherwise stated, all chemicals were from Sigma-Aldrich (Germany).
DNA manipulations - The cDNAs of mouse astrotactin 1 and 2 (Astn1 and Astn2) (1,302 and 1,300 amino acid residues, respectively; see Supplementary Figure S1) were cloned into the pRK5 vector using the ClaI/SalI (Astn1) and BamHI/XbaI (Astn2) sites. The DNA was then transferred to the pGEMI vector (Promega) at the XbaI/SmaI sites together with a preceding Kozak sequence (29), as previously described (13). To create truncations in Astn1, deletions were made between amino acid positions 1-78 and 1-160, and stop codons were introduced at positions 382, 452, 471, and 729. Astn2 truncations were created in the same way, with a deletion between 1-161 and a stop codon at 483. The Astn1 and Astn2 cDNAs were amplified by PCR using Phusion DNA polymerase with appropriate primers, and site-specific mutagenesis was performed using the QuikChange Site-Directed Mutagenesis Kit from Stratagene. All mutants were confirmed by sequencing of plasmid DNA at Eurofins MWG Operon (Ebersberg, Germany) and BM labbet AB (Furulund, Sweden).
In vitro expression - All Astn constructs cloned in pGEMI and pRK5 were transcribed and translated in vitro using the TNT® SP6 Quick Coupled System from Promega. 150-200 ng of DNA template, 1 µl of [35S]-Met (5 µCi) and 0.5 µl of column-washed dog pancreas rough microsomes (RMs) (tRNA Probes, US) (30) were added to 10 µl of reticulocyte lysate at the start of the reaction, and the samples were incubated for 90 min at 30 °C (21).
Proteinase K treatment - PK treatment was performed by adding 1 µl of CaCl2 (200 mM) and 0.2 µl of Proteinase K (4.5 units/µl) to the translation reaction. After incubation on ice for 30 min, 1 µl of PMSF (20 mM ethanolic solution) was added to inactivate the PK, and the samples were further incubated on ice for 5 min (21).
Analysis and quantitations - Translation products were analyzed under reducing conditions by SDS-polyacrylamide gel electrophoresis, and proteins were visualized in a Fuji FLA 9000 phosphorimager (Fujifilm, Tokyo, JP) using the Image Reader FLA 9000/Image Gauge V 4.23 software (Fujifilm).

Figure 3. (A) Astn2 1-482 translated in vitro in the presence (+) or absence (-) of RMs (lanes 1 and 2). Unglycosylated products are indicated by an open circle, and singly glycosylated products by a filled circle. Two cleavage products potentially resulting from removal of the SP by signal peptidase are indicated by a bracket, and the N-terminal endoproteolytic fragment is marked by *. EndoH digestion of RMs with Astn2 1-482 is shown in lanes 3-6; note that the two products potentially generated by removal of the SP (bracket) coalesce into one band and that the endoproteolytic fragment (N) shifts to a lower molecular weight upon deglycosylation (lane 6). (B) Astn2 161-482 was translated in vitro with [35S]-Met in the presence (+) or absence (-) of RMs and the signal peptidase inhibitor N-methoxysuccinyl-Ala-Ala-Pro-Val-chloromethylketone (SPI). After translation, RMs were further treated with EndoH (EH) or subjected to mock treatment. The glycosylated Asn residues are indicated by a red circle in the cartoons.

Figure S1. Amino acid sequences of the splice variants of Astn1 and Astn2 used in this study. Hydrophobic regions identified by TOPPRED are shown in yellow, potential acceptor sites for N-linked glycosylation in red, confirmed signal peptidase cleavage sites in bold, and Met residues used as start codons in N-terminally truncated versions in green. | 4,564 | 2018-12-11T00:00:00.000 | [
"Biology",
"Physics"
] |
Neutrino Induced Doppler Broadening
When a nucleus undergoes beta decay via the electron capture reaction, it emits an electron neutrino. The neutrino emission gives a small recoil to the atom, which can be experimentally observed as a Doppler broadening on subsequently emitted gamma rays. Using the two-axis flat-crystal spectrometer GAMS4 and the electron capture reaction in 152Eu, the motion of atoms having an excess kinetic energy of 3 eV in the solid state was studied. It is shown how the motion of the atom during the first hundreds of femtoseconds can be reconstructed. The relevance of this knowledge for a new neutrino helicity experiment is discussed.
Introduction
During the last decade, ultra-high resolution gamma-ray spectroscopy with the two-axis flat crystal spectrometer GAMS4 [1,2], installed at the high flux beam reactor of the Institut Laue-Langevin (ILL), Grenoble, France, in an ILL/NIST collaboration, has allowed the observation of very small Doppler broadening of gamma-ray transitions [3]. The measured broadening is of the order of ΔE/E = 10^-4 to 10^-6 and is induced by preceding high-energy gamma-ray emissions following neutron capture. The resolving power of the GAMS4 spectrometer, which is of the order of 10^-6, even allows the measurement of the Doppler broadening caused by the thermal motion of the atoms in a solid state target [4]. The observation of such small broadenings led to the development of the GRID (Gamma Ray Induced Doppler broadening) method [4], which is used either to determine short lifetimes of nuclear excited states, or to study the slowing-down process of low-energy recoiling atoms in the bulk of the target. The latter permits the extraction of information on the form of the interatomic potential [5,6].
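To get a feel for these orders of magnitude, a small illustrative estimate (Python; the numbers are generic, not taken from the paper): the fractional thermal Doppler broadening is of order v/c with v ≈ sqrt(2kT/M), which for a heavy atom at room temperature indeed lands near the quoted 10^-6 resolving power.

```python
import math

K_B_EV = 8.617e-5   # Boltzmann constant in eV/K
AMU_EV = 931.494e6  # atomic mass unit in eV/c^2

def thermal_doppler_width(temp_k, mass_amu):
    """Fractional Doppler broadening ~ v/c for thermal motion, v = sqrt(2kT/M)."""
    return math.sqrt(2.0 * K_B_EV * temp_k / (mass_amu * AMU_EV))

# A mass-152 atom at 300 K: v/c ~ 6e-7, i.e. of the order of the 1e-6
# resolving power quoted for GAMS4 (illustrative numbers only).
print(thermal_doppler_width(300.0, 152.0))
```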
Besides the (n, γ) reaction, atomic electron capture provides both the excitation and the extra kinetic energy leading to Doppler-broadened energy profiles of subsequently emitted gamma rays. Electron capture is a beta-decay process in which a nucleus captures an atomic electron and emits a neutrino which, in first approximation, is the only particle to leave the atom. Because the neutrino has a well-defined energy, it induces a well-defined recoil velocity, in contrast to normal beta decay, which leads to a continuous range of initial recoil energies. Note that pure electron capture occurs only if the mass difference between the initial and final atom does not exceed 1022 keV, because then the competing β+ decay is forbidden.
The initial motivation to study Neutrino Induced Doppler broadening (NID) was the measurement of the neutrino helicity as proposed 10 years ago by H.G. Börner et al. [7]. The basic idea of this proposal was to redo the classical experiment of Goldhaber, Grodzins and Sunyar [8] in which the helicity of the neutrino is transferred to a Doppler-shift dependent circular polarisation of 963 keV gamma rays. In contrast to the earlier experiment the Doppler shift would be measured directly by GAMS4 without the need to rely on nuclear resonance fluorescence. An additional motivation was that the NID experiments allow the study of the slowing down process at very low kinetic energies [9]. At these energies the recoiling atom is not able to definitively leave its lattice site but instead performs vibrations around its equilibrium position.
Neutrino Induced Recoil
Beta decay associated with electron capture takes place via the nuclear reaction

^A_Z X + e^- → ^A_(Z-1) Y* + ν_e, (1)

which (as discussed above) results in a well-defined value for the initial recoil energy if the Q value is lower than 1.022 MeV. Because of this upper limit on the Q value, the recoil energies that can be obtained are only of the order of a few eV for heavy nuclei. The recoil velocity of the newly formed atom is given by conservation of momentum (for zero-rest-mass neutrinos and using nonrelativistic kinematics) as

v_r/c = E_ν / M(^A Y)c^2. (2)

The Q_ec value for the reaction is defined as

Q_ec = M(^A X)c^2 − M(^A Y)c^2 − B_e. (3)

In Eqs. (2) and (3), M(^A X)c^2 and M(^A Y)c^2 denote the atomic rest masses in the initial and final states, and B_e is the binding energy of the captured electron. After electron capture, a hole is created in an inner electron shell, and the filling of this hole leads to secondary effects such as x-ray emission or the emission of Auger electrons. In the former case the recoil energy equals

E_r = (E_ν^2 + E_γ^2) / 2M(^A Y)c^2, (4)

with E_γ the photon energy. When an Auger electron with energy E_e is emitted, one has

E_r = (E_ν^2 + E_e^2 + 2m_e c^2 E_e) / 2M(^A Y)c^2. (5)

The relevant velocity scales are the recoil velocity v_r, the thermal velocity of the target atoms,

v_T = (2k_B T / M)^(1/2), (6)

and the typical Bohr velocity of an electron. For NID the induced recoil is very low (3 eV in the Eu case), and so the recoiling atom is not able to definitively leave its equilibrium position. In order to analyse the measured line shapes, different approaches can be followed. The first, analytic, approach is based on a phonon creation model and was used to analyse the NID data in Ref. [9]. The second, numerical, approach relies on Molecular Dynamics (MD) simulations of the slowing-down process at ultra-low recoil energies, and is presented in [10][11][12]. Both approaches allow one either to obtain information on the lifetime of the nuclear state fed by the electron capture process, or to study the effect of ultra-low recoil energies on atoms located in a lattice of bulk matter.
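As a rough cross-check of these kinematics, the following Python sketch evaluates Eqs. (2)-(3) for the Eu case. The neutrino energy used is an assumed illustrative value (roughly the EC Q value minus the 963 keV excitation), not a number quoted above.

```python
# Cross-check of the recoil kinematics of Eqs. (2)-(5) for the Eu case.
# The neutrino energy below is an assumed illustrative value, not a
# number quoted in the text.

AMU_KEV = 931494.0            # atomic mass unit in keV/c^2
M_Y = 151.92 * AMU_KEV        # mass of the 152Sm daughter atom, keV/c^2

E_nu = 900.0                  # assumed neutrino energy, keV (illustrative)

# momentum conservation with a massless neutrino: p_atom = E_nu / c
E_r = E_nu**2 / (2.0 * M_Y)   # recoil energy, keV  (~3 eV)
v_over_c = E_nu / M_Y         # recoil velocity in units of c (~6.5e-6)

print(f"recoil energy   E_r ~ {E_r * 1e3:.1f} eV")
print(f"recoil velocity v/c ~ {v_over_c:.2e}")
```

With this assumed neutrino energy, the sketch reproduces the ~3 eV recoil and the v_r/c ~ 6.5×10^-6 scale quoted in the text.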
Performed Experiments
All NID experiments have relied on electron capture in the isomeric 0− state of 152Eu. In order to populate this state, which has a half-life of 9.3 h, natural europium is used. Placed at the GAMS4 in-pile target position, the isotope 151Eu, which has a 48% natural abundance, captures a thermal neutron and forms 152Eu. The cross section for this reaction is 9204 barns, making the targets black for thermal neutrons. Thus the neutron capture rate per cm^2 of target area is limited to 2.5×10^14 s^-1. Following the electron capture, a 1− level at 963 keV in 152Sm is populated. This 1− state decays to the nuclear ground state by emitting either a single 963.4 keV gamma ray, or an 841.4 keV-121.8 keV gamma-ray cascade via the long-lived 2+ state. These three gamma rays were measured with the GAMS4 spectrometer. Two different kinds of targets have been used for the experiments. In the first measurements, powder targets of different chemical composition were used [9]. These polycrystalline targets consisted of Eu2O3, EuF3, EuF2 and EuCl3 powders. In the more recent experiments, oriented single crystals of EuO were observed for different orientations towards the spectrometer [11,12]. The EuO crystal has an fcc cubic structure, and the NID measurements were performed once with the [100] direction towards the GAMS4 spectrometer and once with the [110] direction. Details on this experiment are given in Ref. [12]. Table 1 lists the thermal velocities v_T (cf. Eq. (6)) which were measured using the 121.8 keV transition. This transition decays from a long-lived state, such that its broadening is solely due to the thermal motion.
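The statement that the targets are "black" for thermal neutrons can be illustrated with a back-of-the-envelope mean-free-path estimate; the Eu2O3 density and molar mass below are assumed handbook values, not taken from the text.

```python
# Back-of-the-envelope check that natural-Eu targets are "black" to thermal
# neutrons: the capture mean free path 1/(n*sigma) on 151Eu. The Eu2O3
# density and molar mass are assumed handbook values, not from the text.

N_A = 6.022e23          # 1/mol
rho = 7.4               # g/cm^3, Eu2O3 density (assumed)
M = 352.0               # g/mol, Eu2O3 molar mass (assumed)
abund_151 = 0.48        # natural abundance of 151Eu (from the text)
sigma = 9204e-24        # capture cross section, cm^2 (from the text)

n_151 = 2.0 * rho / M * N_A * abund_151     # 151Eu atoms per cm^3
mfp = 1.0 / (n_151 * sigma)                 # cm

print(f"151Eu number density: {n_151:.2e} cm^-3")
print(f"thermal-neutron mean free path: {mfp * 1e4:.0f} um")   # ~90 um
```

A mean free path of order 0.1 mm means essentially every thermal neutron entering a macroscopic target is captured, consistent with the capture rate being limited by the incident flux.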
Description of the Slowing Down Process and Results
Before discussing the two different ways that were used to analyse the neutrino-induced Doppler broadening line shapes, we note the main differences to those observed using gamma-ray induced Doppler broadening. The recoils induced in medium-heavy nuclei by electron capture are too small to create significant damage to the regular lattice. Instead, after the initial recoil, the atom moves a small distance away from its equilibrium position due to the extra kinetic energy. In doing so, it pulls the neighbouring atoms away from their positions, loses energy, and slows down. When the recoiling atom has lost all of its kinetic energy to the forces exerted by the other atoms, it stops and starts to move back to its initial position. Now, regaining energy, its velocity increases and reaches a maximum near the equilibrium position. This time the velocity is smaller than the initial one, since energy is dissipated in the crystal by pulling more and more atoms away from their equilibrium positions. Thus a lattice vibration, or phonon, is created. Since, as the name suggests, this is a collective process, it is sensible to test collective descriptions of the slowing-down process. In contrast, in standard GRID measurements the recoil energies are about 400 eV, and the velocity function of the recoiling atom is, to a good approximation, determined by binary collisions until the atom moves at velocities on the order of the thermal motion of the target atoms [4]. In this respect the slowing-down process can be described by considering individual two-body collisions. Another difference is the influence of the thermal motion itself, which is important even for short lifetimes, because it is of the same order of magnitude as the recoil velocity.
To understand this process all data have been analyzed using two different approaches: the analytical phonon creation model and the semi-microscopic Molecular Dynamics simulations.
Analytic Description of the Slowing Down Process
For the slowing down of atoms with very low kinetic energies in solids, the Phonon Creation Model (PCM) gives an analytic and simple description of the velocity. Assuming the Debye approximation and neglecting phonon-phonon interactions and incoherent thermal oscillations of the atoms in the lattice, the velocity v(t, ω_D) of the recoiling atom, for an isotropic medium or a monoatomic cubic Bravais lattice, is given by the expression derived in [13], hereafter Eq. (7). Equation (7) gives the velocity of the recoiling atom as a function of the time t and the Debye frequency ω_D. In the Debye approximation the direction of the recoil is ignored because of the assumed isotropy and the neglect of temperature. Employing the Debye approximation, the Doppler-broadened line shape is described by

I(E) ∝ ∫_0^∞ dt (1/τ) e^(-t/τ) I_D(E, v(t, ω_D)), (8)

with v(t, ω_D) given by Eq. (7) and τ the lifetime of the deexciting nuclear state. When neglecting the natural linewidth of the deexciting state, I_D(E, v) is approximately the rectangular profile

I_D(E, v) = c/(2 E_0 v) for |E − E_0| ≤ E_0 v/c, and 0 otherwise. (9)

The integration of Eq. (8) is done numerically. Because the initial recoil velocity, v_r/c = 6.54×10^-6, is very close to the thermal velocity, one has to take into account the velocity spread due to the thermal motion. This is done by folding the theoretical line shape with a thermal width, as explained in detail in Ref. [9]. Table 2 lists the deduced frequencies ω_D for all five targets, using the lifetime value of τ = 29 fs obtained by Jungclaus et al. [14]. Note that all crystal effects connected to the EuO target were neglected. In this analysis the effects of x-ray and Auger electron emission are also neglected. Figure 2 shows the corresponding time-dependent velocities. One clearly notices the quick slowing down in Eu2O3 and EuCl3 compared with the other targets. For a helicity experiment it is clear that the ideal target would be EuF2. We have also tried to find an effect of a finite lifetime of the created phonons, but the fit of an exponentially damped Eq. (7) always converged to an infinite lifetime for the phonons. This shows that this effect is negligible when dealing with a nuclear lifetime of a few tens of fs, while typical phonon lifetimes are about 1 ps.
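A minimal numerical sketch of the line-shape integral of Eqs. (8)-(9) follows. Since the explicit PCM expression of Eq. (7) is not reproduced here, the slowing-down law v(t) used below is a placeholder damped oscillation, and the Debye frequency is an assumed value.

```python
# Numerical sketch of the lifetime-weighted Doppler line shape, Eqs. (8)-(9):
# rectangular Doppler profiles I_D(E, v(t)) are integrated over time with the
# decay weight exp(-t/tau). The slowing-down law v(t) below is a placeholder
# standing in for the PCM expression of Eq. (7).
import numpy as np

E0 = 963.4e3        # transition energy, eV
tau = 29e-15        # nuclear lifetime, s (from the text)
w_D = 2.0e13        # assumed Debye angular frequency, s^-1 (placeholder)
v0_c = 6.54e-6      # initial recoil velocity v_r/c (from the text)

def v_over_c(t):
    # placeholder damped oscillation, NOT the actual Eq. (7)
    return v0_c * np.cos(w_D * t) * np.exp(-w_D * t / 3.0)

t = np.linspace(0.0, 10.0 * tau, 4000)
decay = np.exp(-t / tau)
E = np.linspace(E0 - 15.0, E0 + 15.0, 601)      # eV grid around the line

I = np.zeros_like(E)
for ti, wi in zip(t, decay):
    half = E0 * abs(v_over_c(ti))               # half-width of the box profile
    if half > 0.0:
        I += wi * (np.abs(E - E0) <= half) / (2.0 * half)
I /= I.sum()
print("FWHM estimate:", np.ptp(E[I > 0.5 * I.max()]), "eV")
```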
Molecular Dynamics Description of the Slowing Down Process
Using Molecular Dynamics (MD) simulations and Monte Carlo simulations, the description of the slowing down can be done in a much more detailed way [10]. By solving the Newtonian equations for atoms recoiling in random directions, it is possible to study the spread in velocities and to treat in detail the effects of the thermal motion and of x-ray and Auger-electron emission on the slowing down. For the MD simulations the main input is the interatomic potential, which is treated as a set of pair potentials depending on the distance r_ij between atom i, carrying charge q_i, and atom j, carrying charge q_j. At NID energies it was found that a Buckingham-type potential describes the data best [10][11][12]. This potential has the standard form

V(r_ij) = A_ij exp(−r_ij/ρ_ij) − C_ij/r_ij^6 + q_i q_j e^2/(4πε_0 r_ij),

and Table 3 lists the parameters used for the analysis of the data. Knowing the interatomic potential, the lifetime can be fitted and compared to those available in the literature. The experimental line shapes were analysed using the computer code GRIDDLE [15], and Fig. 3 illustrates the very good quality of the fit of the line shape for the EuO measurement [11]. The fitted lifetimes are given in Table 4, and good agreement is obtained in comparison with the (n,n'γ) experiment, justifying the use of τ = 29 fs in the phonon creation model. In the study using the oriented EuO crystals, a small but observable dependence of the Doppler broadening on the crystal orientation was found [12]. In contrast to the analytic description, the MD simulations treat the thermal velocity and the emission of x rays and Auger electrons exactly. A particular, albeit marginal, problem encountered in the simulations is Coulomb implosion of the simulation cell. In rare cases the atom reaches a very high charge state after an Auger cascade, which leads, through the attractive Coulomb force, to the destruction of the lattice. Whether such an effect really takes place in nature is, however, not clear.
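A minimal sketch of such a Buckingham-plus-Coulomb pair potential is given below. All parameter values are illustrative placeholders; the actual fitted parameters are those listed in Table 3 of the paper.

```python
# Sketch of a Buckingham-plus-Coulomb pair potential of the form used for
# the MD analysis. Parameter values below are illustrative placeholders.
import numpy as np

E2_OVER_4PIEPS0 = 14.3996   # e^2/(4*pi*eps0) in eV*Angstrom

def pair_potential(r, A, rho, C, qi, qj):
    """Pair energy in eV for separation r in Angstrom."""
    repulsion = A * np.exp(-r / rho)            # Born-Mayer short-range term
    dispersion = -C / r**6                      # van der Waals attraction
    coulomb = E2_OVER_4PIEPS0 * qi * qj / r     # point-charge electrostatics
    return repulsion + dispersion + coulomb

# illustrative Eu-O interaction with made-up parameters
r = np.linspace(1.5, 6.0, 200)
V = pair_potential(r, A=1500.0, rho=0.35, C=20.0, qi=2.0, qj=-2.0)
print(f"minimum of the toy potential: {V.min():.2f} eV at r = {r[V.argmin()]:.2f} A")
```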
Discussion
From the MD simulations discussed above a detailed description of the slowing down is obtained which we analyse here in the context of the helicity measurement. Although EuF 2 is the ideal target we will rely on the EuO data for our study, because the experimental data set was the best defined for EuO due to the use of oriented single crystals and the good statistics. The slowing down in EuO and EuF 2 is also similar as was illustrated in Fig. 2.
Before proceeding we recall three positive points: the dependence on crystal structure is small, the Auger cascades are rare, and the lifetime of the 963 keV level in 152Sm is very short. However, a major problem remains.
Table 4. Lifetime values for the 963 keV level. Columns 1-5 are obtained from [10] and column 6 from [12]. They are compared to the lifetimes obtained using the (n,n'γ) reaction [14], nuclear resonance fluorescence [16] and the GRID technique [17].
Figure 4 shows the velocities obtained for 100 individual trajectories in EuO. While the oscillatory character of this motion is made clear by the turning point at 70 fs, one notices a large velocity spread at any time. This spread is due to the thermal motion at the moment of the initial recoil. Because of this large effect we have studied the time dependence of the angle α(t), defined as the angle between the recoil velocity v(t) at time t and the direction opposite to the neutrino momentum. This angle is directly related to the measurement of the neutrino helicity, because the helicity

H = σ·p_ν / |p_ν|

is measured via the Doppler shift related to v(t), and any deviation from the −p_ν direction is given by α(t). Figure 5 shows the values of α(t) for 25 individual recoils. At time t = 0 the averaged value of α(0) already equals 12.5°. The behavior of this value as a function of temperature can be approximated by a simple scaling with the thermal velocity, yielding in our case 14°. The individual thermal velocities follow the broad Maxwell distribution, and v_T is only the root-mean-square velocity of this distribution; therefore individual atoms can have very low or very high thermal velocities. This can be observed in Fig. 5. One finds that some atoms perform linear oscillations while others perform quasi-circular orbits around their equilibrium positions. In order to reduce this disturbing effect, which obscures the helicity measurement, the temperature should be reduced drastically. A spread of α(0) of 1° corresponds, for instance, to a v_T of 34 m/s, occurring at 7 K. This is clearly not attainable at an in-pile source. At this stage one might wonder how Goldhaber et al. [8] were able to reach definite conclusions in their work. Besides the fact that they only had to show an effect, it should be remembered that they relied on difference measurements and varied the composition of the target. The slowing down only affects the information on the neutrino momentum, independent from that on the neutrino spin. The measurement of the difference in circular polarisation then allowed the elimination of gross effects connected to the slowing down or temperature. Nevertheless, for a very precise measurement these effects become dominant.
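The effect of the thermal spread on α(0) can be illustrated with a simple Monte Carlo in which a thermal velocity is added to the recoil velocity. The per-axis thermal speed used is an assumed placeholder, and this free-atom picture ignores the lattice dynamics treated by the MD simulations above.

```python
# Illustrative Monte Carlo of the initial angle alpha(0) between the total
# initial velocity and the direction opposite to the neutrino momentum.
# The per-axis rms thermal speed is an assumed placeholder value.
import numpy as np

rng = np.random.default_rng(0)
c = 2.998e8                     # m/s
v_r = 6.54e-6 * c               # recoil speed from the text, ~1960 m/s
v_T = 310.0                     # assumed per-axis rms thermal speed, m/s

n = 100_000
v = np.zeros((n, 3))
v[:, 0] = v_r                               # recoil along +x
v += rng.normal(0.0, v_T, size=(n, 3))      # Maxwellian thermal kick

cos_a = np.clip(v[:, 0] / np.linalg.norm(v, axis=1), -1.0, 1.0)
alpha = np.degrees(np.arccos(cos_a))
print(f"<alpha(0)> = {alpha.mean():.1f} deg, rms spread = {alpha.std():.1f} deg")
```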
Conclusions
Over the last years we have analysed the different Neutrino Induced Doppler broadening experiments using Eu compounds as targets. Except for the problems associated with Auger cascades, a good understanding of the slowing-down process at kinetic energies around 3 eV was obtained with the Molecular Dynamics simulations. Moreover, the lifetime of the 963 keV state in 152Sm could be determined to be (28.7 ± 1.0) fs. This knowledge was then used to study the role of the slowing down and the temperature in the measurement of the neutrino helicity. We found important effects which need to be considered in detail if one wants to obtain a precise measurement of the helicity. We consider that all data needed for a full Monte Carlo simulation of the experiment proposed in [7] are now available. | 4,012.4 | 2000-02-01T00:00:00.000 | [
"Physics"
] |
TASK-3 Downregulation Triggers Cellular Senescence and Growth Inhibition in Breast Cancer Cell Lines
TASK-3 potassium channels are believed to promote proliferation and survival of cancer cells, in part, by augmenting their resistance to both hypoxia and serum deprivation. While overexpression of TASK-3 is frequently observed in cancers, the understanding of its role and regulation during tumorigenesis remains incomplete. Here, we evaluated the effect of reducing the expression of TASK-3 in MDA-MB-231 and MCF-10F human mammary epithelial cell lines through small hairpin RNA (shRNA)-mediated knockdown. Our results show that knocking down TASK-3 in fully transformed MDA-MB-231 cells reduces proliferation, which was accompanied by an induction of cellular senescence and cell cycle arrest, with an upregulation of cyclin-dependent kinase (CDK) inhibitors p21 and p27. In non-tumorigenic MCF-10F cells, however, TASK-3 downregulation did not lead to senescence induction, although cell proliferation was impaired and an upregulation of CDK inhibitors was also evident. Our observations implicate TASK-3 as a critical factor in cell cycle progression and corroborate its potential as a therapeutic target in breast cancer treatment.
Introduction
Breast cancer is one of the most prevalent types of cancer affecting women [1] and remains a leading cause of cancer-related mortality worldwide [2]. In spite of their common tissue of origin, breast tumors display an extensive heterogeneity, which is reflected by a diverse array of molecular and histological subtypes [3]. Despite important therapeutic advances in chemotherapy and adjuvant endocrine therapies [4], a number of breast cancers are likely to require the development of new therapeutic approaches in order to confront the almost certain development of drug resistance.
Over the last years, accumulating evidence has supported the involvement of potassium (K+) channels in cellular processes commonly disrupted in cancer [5,6]. Indeed, several reports have associated the expression and function of K+ channels with cancer progression, making these channels attractive targets for novel cancer therapies and useful diagnostic tools [7,8]. K+ channels have been associated with several hallmarks of cancer, such as sustained proliferation, migration, invasion, angiogenesis, and metastasis [6,9-14].
Mechanistically, K+ channels selectively transport K+ ions across cell membranes and play a crucial role in maintaining the resting membrane potential in various cell types [15,16]. The K+ channel-regulated cell membrane potential is also essential during cell cycle progression [17], as well as in the regulation of cell death by necrosis and apoptosis [12,18].
Recent studies have begun to unveil the contribution of two-pore domain (K2P) K + channels to the establishment of some of the hallmarks of cancer [6,10,19,20]. So far, fifteen members of the K2P family have been identified. This family can be divided into six subfamilies denoted as TREK, TALK, TASK, TWIK, THIK, and TRESK [21][22][23]. Of all K2P channel family members, four have been involved in cancer (TASK-1, -2, -3 and TREK-1) [12,24,25]. Among these, TASK-3 (also known as K2P9.1), encoded by the KCNK9 gene, has been recognized for its potential oncogenic properties [26]. TASK-3 is highly expressed in neurons of the central nervous system, including the cerebellum [15,16,27,28], where it contributes to generate resting and action potentials [15,16,29]. Importantly, KCNK9 can be overexpressed in up to 44% and 35% of human breast and lung tumors, respectively [30]. Additionally, KCNK9 has been reported to be overexpressed in over 90% of ovarian tumors [31]. More recently, overexpression of this channel at the protein level has been documented in colorectal cancer and melanoma [18,31,32]. Of note, heterologous overexpression of TASK-3 has been shown to induce tumorigenesis in experimental animal models, confirming its oncogenic properties [10].
Gain of function of TASK-3 is associated with the acquisition of several malignant characteristics, including resistance to hypoxia and serum deprivation [30]. Recently, it has been shown that the use of monoclonal antibodies against the cap domain of TASK-3 inhibits tumor growth and metastasis in animal models with no significant side effects [33,34].
Here we examine the expression of TASK-3 in the triple-negative (ER, PR, and HER-2 negative) breast cancer cell line MDA-MB-231, a cell line that is also deficient in the p53 suppressor gene [35], and in the non-tumorigenic human mammary epithelial cell line MCF-10F. From a clinical standpoint, triple-negative breast cancer cells are more aggressive and metastatic, commonly failing to respond to current pharmacological approaches (such as Herceptin and estrogen antagonists). Therefore, the development of more effective therapies to treat these tumors remains a challenge. Our results show that knocking down TASK-3 leads to reduced proliferation in MDA-MB-231 cells and identify cellular senescence as the likely mechanism involved. In addition, TASK-3 downregulation also reduced proliferation in the non-tumorigenic cell line MCF-10F, although we were unable to document signs of permanent cell cycle arrest (senescence).
Expression of TASK-3 Channels in MDA-MB-231 and MCF-10F Cells
We first examined the expression of TASK-3 by immunofluorescence in tumorigenic MDA-MB-231, as well as in non-tumorigenic MCF-10F cells. Positive staining for TASK-3 was detected in both types of cells (Figure 1A,B,D,E) with the expected membrane localization pattern (arrows, Figure 1B,E). This result indicates that the TASK-3 channel is stably expressed on the surface of both tumorigenic and non-tumorigenic mammary epithelial cell lines. The positive signal was not detected when the primary antibody was omitted (control, Figure 1C,F). In order to corroborate the immunofluorescence results, TASK-3 mRNA expression was determined by quantitative real-time PCR. In agreement with the immunofluorescence results, TASK-3 was also detectable at the mRNA level in both cell lines, although expression was clearly higher in MCF-10F cells (Supplementary Figure S1).
Figure 1 legend (continued): (C,F) immunostaining when the primary antibodies were omitted (control). DAPI was used for nuclear staining (blue fluorescence). The scale bar represents 20 µm; (G,J) expression of TASK-3 (KCNK9) and TASK-1 (KCNK3) genes in cells transduced with either pMKO.1 empty vector (control) or shRNAs directed against TASK-3 (shK2P9A, shK2P9B, and shK2P9C) was assessed by quantitative real-time PCR. Gene expression was normalized against Homo sapiens ribosomal protein L19 (RPL19) using the ∆∆Ct method. Error bars correspond to mean ± SEM (n = 3); (H,K) western blot analysis for TASK-3 detection following shRNA-mediated knockdown of TASK-3. Representative immunoblots for TASK-3 and GAPDH are shown. (I,L) The relative abundance of TASK-3 is expressed as the ratio between the intensity of the TASK-3 band of treated samples and the control sample, normalized to the intensity of the GAPDH band (loading control). Data are expressed as mean ± SEM of three independent experiments. For (G,I,J,L) * p < 0.05, compared with the control, based on one-way ANOVA with Tukey HSD (Honestly Significant Difference) post-test.
Short Hairpin RNA-Mediated Knockdown of TASK-3
In order to study the effects of reducing the expression of TASK-3 in mammary epithelial cells, shRNA-mediated knockdown of TASK-3 was implemented and confirmed by both qPCR and Western blotting. MDA-MB-231 and MCF-10F cells were transduced with the vector control (pMKO.1) or three different shRNAs targeting TASK-3. As shown in Figure 1G,J, TASK-3 (KCNK9) mRNA levels were significantly reduced in both cell lines following transduction with the shRNAs. The specificity of the knockdown was confirmed by assessing the expression of TASK-1, a highly homologous TASK channel. As shown in Figure 1G,J, two shRNA constructs (shK2P9B and shK2P9C) displayed the highest specificity for TASK-3.
Next, TASK-3 knockdown was also evaluated by Western blotting (Figure 1H,K). A densitometric analysis of the relative intensity of the ~47 kDa TASK-3 band, after normalization to GAPDH, is shown in Figure 1I,L. We observed a highly significant decrease in the levels of the TASK-3 protein in MDA-MB-231 cells subjected to shK2P9A-, shK2P9B-, and shK2P9C-mediated knockdown compared to the levels of the protein in cells that were transduced with the vector control (Figure 1I). In MCF-10F cells, however, the reduction in TASK-3 protein was only significant upon shK2P9B- and shK2P9C-mediated knockdown (Figure 1L). Given that the shK2P9B construct showed the greatest specificity for the TASK-3 channel, this construct was chosen for the next set of experiments.
Electrophysiological Characterization of TASK-3 Knockdown
We also evaluated the electrophysiological effects of knocking down TASK-3 in MDA-MB-231 and MCF-10F cells. To this end, the whole-cell patch-clamp technique was used to record the macroscopic potassium current in MDA-MB-231 and MCF-10F cells that had been previously transduced with either the vector control (pMKO.1-puro) or an shRNA targeting TASK-3 (shK2P9B). We observed that reducing the expression of TASK-3 inhibited the macroscopic potassium current (Figure 2A,B,D,E). As shown in Figure 2C,F, the shRNA-mediated depletion of TASK-3 also significantly decreased the basal activity of TASK-3 channels in MDA-MB-231 and MCF-10F cells. In addition, extracellular pH changes, within a physiological range from pH 8.0 to 5.0, were used to demonstrate the contribution of TASK-3 (a pH-sensitive K2P channel) to the macroscopic potassium current. In agreement with previous studies [36,37], the macroscopic potassium current was strongly inhibited at an external pH of 5.0, compared to currents recorded in the same cell at pH 7.4, while at an external alkaline pH (8.0) the opposite effect was observed (Figure 2G,H). As shown in Figure 2H, the shRNA-mediated depletion of TASK-3 also suppresses the pH-dependent potassium current in MCF-10F cells. These data clearly show that TASK-3 is a functional channel in both cell lines, and that the pH-dependent portion of the macroscopic potassium current is carried through TASK-3 channels.
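The inhibition at pH 5.0 and potentiation at pH 8.0 described here are often summarized by a single-site titration (Hill) curve. The sketch below is only illustrative; the pK and Hill coefficient are assumed values, not parameters fitted in this study.

```python
# Hedged sketch: pH dependence of a TASK-3-like background current modeled
# as a Hill-type titration curve. pK and n_h are illustrative assumptions.

def task3_open_fraction(pH, pK=6.7, n_h=1.6):
    """Fraction of maximal TASK-3 current at a given extracellular pH."""
    return 1.0 / (1.0 + 10.0 ** (n_h * (pK - pH)))

for pH in (5.0, 7.4, 8.0):
    print(f"pH {pH}: {task3_open_fraction(pH):.2f} of maximal current")
```

With these placeholder parameters the current is almost fully suppressed at pH 5.0 and near-maximal at pH 8.0, qualitatively matching the recordings described above.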
Reducing the Expression of TASK-3 Inhibits Proliferation of MDA-MB-231 and MCF-10F Cells
We next investigated the effects of knocking down TASK-3 on the proliferation of MDA-MB-231 and MCF-10F cells. To this end, cells previously transduced with either the pMKO.1-puro empty vector (control) or an shRNA targeting TASK-3 (shK2P9B) were counted after being propagated for 2, 4, and 6 days in culture. As shown in Figure 3A,C, the hairpin reduced the accumulation of cells over time. A significant effect was evident from day 2, but it was most prominent on days 4 and 6 in MDA-MB-231 cells (Figure 3A). In MCF-10F cells a significant effect was observed on day 4, which remained significant up to day 6 (Figure 3C). These results indicate that shRNA-mediated depletion of TASK-3 reduces the accumulation of both tumorigenic and non-tumorigenic mammary epithelial cells, although it was not clear whether the effect was secondary to cell cycle arrest or to an increase in cytotoxicity. To distinguish between these cell-fate alternatives, MDA-MB-231 and MCF-10F cells with reduced expression of TASK-3 were further evaluated for cell viability using the Trypan blue dye exclusion method. As shown in Figure 3B, MDA-MB-231 cells with reduced expression of TASK-3 displayed a marginal but statistically significant decrease in the percentage of viable cells compared to vector-transduced (control) cells (from 98.4 to 95.6%). The reduction in the viability of MCF-10F cells upon depletion of TASK-3 was also marginal but statistically significant (from 97.3 to 94.6%) (Figure 3D). These results indicate that reducing the expression of TASK-3 leads to a reduced accumulation of cells over time, and that the major mechanism involved was not an increase in cytotoxicity.
In order to complement these studies, we also investigated the effects of hK2P9G201E, a dominant-negative mutant of TASK-3. The dominant-negative effect occurs in conditions where both the wild type and mutant subunits are co-expressed. In these conditions, the wild type subunits co-assemble with mutant subunits (carrying a site-directed mutation in the sequence encoding the pore region). Incorporation of these mutant subunits suppresses the functional properties of the channel. The dominant-negative mutant of TASK-3 was able to phenocopy the anti-proliferative effect observed in MDA-MB-231 cells with reduced expression of TASK-3 ( Figure S2). A significant decrease in the number of cells was evident 48 h after hK2P9G201E transfection compared to cells transfected with an empty vector ( Figure S2). The dominant negative effect of hK2P9G201E was also validated in HEK-293 cells (see Supplementary Figure S3). Thus, while HEK-293 cells expressing wild-type TASK-3 displayed electrophysiological properties concordant with leak potassium currents ( Figure S3A,B), the co-expression of hK2P9G201E and wild type TASK-3 reduced the amplitude of TASK-3-associated currents ( Figure S3C). These results indicate that the activity of TASK-3 as a conductor of background K + currents is linked to the proliferative impairment observed in MDA-MB-231 cells with downregulation of TASK-3.
Cellular Senescence and Autophagy in TASK-3-Depleted MDA-MB-231 and MCF-10F Cells
To further understand the consequences of TASK-3 depletion in MDA-MB-231 and MCF-10F cells, cellular senescence was investigated. To this end, senescence-associated β-galactosidase (SA-β-gal) activity was assessed in cells that had been previously subjected to shRNA-mediated knockdown of TASK-3. The proportion of positive (blue) cells was determined by microscopic inspection (Figure 4A,B,D,E), and the intensity of SA-β-gal staining was quantified by densitometric analysis (Figure 4C,F). As shown in Figure 4C, shRNA-mediated knockdown of TASK-3 in MDA-MB-231 cells led to a significant increase in the proportion of SA-β-gal-positive cells. These results indicate that knocking down TASK-3 inhibits cell proliferation through the induction of senescence in MDA-MB-231 breast cancer cells. In MCF-10F cells, however, shRNA-mediated knockdown of TASK-3 did not lead to significant changes in SA-β-gal activity (Figure 4F). These results suggest that knocking down TASK-3 inhibits cell proliferation through different mechanisms in MDA-MB-231 and MCF-10F cells.
We also assessed the status of autophagy in TASK-3-deficient MDA-MB-231 and MCF-10F cells, since this process has also been postulated as a mechanism of cell death in some experimental settings. To assess autophagy, the protein levels of the microtubule-associated protein 1 light chain 3 B (LC3B) were determined in cells with reduced expression of TASK-3 ( Figure 4G,I). Following cleavage, LC3B becomes lipidated (LC3B-II) and localizes to autophagosome membranes, a modification that allows LC3B-II to be distinguished from the soluble form LC3B-I [38,39]. Densitometric analysis of the LC3B-II band (14 kDa), normalized to GAPDH, is shown in Figure 4H,J. For TASK-3-depleted MDA-MB-231 cells, we actually observed a decrease in the lipid-conjugated form of LC3B (LC3B-II) protein compared to control cells ( Figure 4H), while for TASK-3-depleted MCF-10F cells, we were unable to detect significant changes in the lipid-conjugated form of LC3B (LC3B-II) protein compared to control cells ( Figure 4I,J). Taken together, these results are consistent with an autophagy-independent mechanism of proliferative impairment in both cell lines with reduced expression of TASK-3.
TASK-3 Knockdown in MDA-MB-231 and MCF-10F Cells Is Not Accompanied by an Increased Rate of Apoptosis
Because TASK-3 has been previously associated with apoptotic cell death [40][41][42], we also explored the possibility that reducing the expression of TASK-3 might increase the apoptotic activity in both cell lines. To explore this possibility, the levels of cleaved caspase-3 were assessed by Western blotting in MDA-MB-231 cells ( Figure 5A). The results indicated that knocking down TASK-3 in MDA-MB-231 cells did not lead to changes in the levels of cleaved (active) caspase-3 when compared to control cells ( Figure 5A). Similarly, apoptosis induced by TASK-3 depletion was assessed by TUNEL assays and DAPI labelling in MCF-10F cells ( Figure 5B). The results indicated that knocking down TASK-3 in MCF-10F cells did not lead to the appearance of pyknotic or fragmented nuclei visualized with DAPI staining, or the appearance of bright green fluorescent signal following TUNEL assay ( Figure 5B). Altogether, these results indicate that apoptosis is not a factor contributing to the reduced proliferative capacity of TASK-3-depleted MDA-MB-231 and MCF-10F cells.
Analysis of the Cell Cycle Regulators
We next examined the expression of several cell cycle regulators involved in the G1/S cell cycle transition, in an attempt to explore the potential mechanisms involved in the implementation of senescence in TASK-3-deficient MDA-MB-231 cells and in the cell cycle arrest observed in MCF-10F cells following shRNA-mediated knockdown of TASK-3. To this end, mRNA expression of cell cycle regulators (CCNA1, CCND1 and CCNE1, encoding cyclins A1, D1, and E1, respectively; CDK4, Cyclin-Dependent Kinase 4; CDKN1A and CDKN1B, Cyclin-Dependent Kinase Inhibitors 1A and 1B, also known as p21 and p27) was determined by qPCR (Figure 6A,B). The sequences of the specific primers used are listed in Table 1. Surprisingly, we observed a significant increase in the expression of the genes encoding Cyclin A1 (CCNA1), Cyclin D1 (CCND1) and Cyclin E1 (CCNE1) following the knockdown of TASK-3 in MDA-MB-231 cells (Figure 6A). Similarly, a significant increase in the expression of the genes encoding Cyclin D1 and Cyclin E1 was also observed in TASK-3-depleted MCF-10F cells (Figure 6B). However, these changes were accompanied by a significant increase in the expression of the cell cycle inhibitory genes p21 (CDKN1A) and p27 (CDKN1B) (Figure 6A,B). These results suggest that the induction of senescence that follows the reduced expression of TASK-3 in MDA-MB-231 cells might be preceded by a G1/S arrest. A similar cell cycle arrest might be involved in TASK-3-depleted MCF-10F cells, although in this case it is not followed by the implementation of a senescent phenotype. In line with the suppression of cell cycle progression, the levels of the retinoblastoma tumor suppressor protein (pRB), the main substrate of cyclin-dependent kinases, were altered in TASK-3-depleted cells (Figure 6C). We observed an increase in total pRB, as well as a decrease in phospho-pRB, in response to TASK-3 knockdown in MDA-MB-231 cells when compared to control cells (Figure 6C). On the other hand, Western blot analyses revealed an increase of both pRB and phospho-pRB in MCF-10F cells following TASK-3 knockdown (Figure 6C). These results indicate that the shRNA against TASK-3 inhibits cell proliferation through a mechanism associated with activation of pRB.
Discussion
Human cancers are the result of a gradual and dynamic accumulation of genetic and epigenetic changes in somatic cells. These changes endow cancer cells with the ability to proliferate without control, invade surrounding tissues, and form colonies in distant parts of the organism. Efforts to systematize the cellular processes that are disrupted in cancer cells have produced a relatively short list of "hallmarks" of cancer [43]. Importantly, ion channels are emerging as important modulators in the orchestration of at least some of these hallmarks and, accordingly, they may now be considered as potential targets for the development of anti-cancer drugs [5]. However, it is presently difficult to assign a specific mechanism of action to a particular class of ion channel in the context of tumorigenesis.
The TASK-3 potassium channel is overexpressed in a variety of tumor cell lines and solid tumors from different histological origins, including breast, colon, lung and melanoma tissues [30,32,42,44,45]. However, the relative advantages for cancer cells to upregulate the expression of these channels and not others are far from clear.
Here, we investigated the role of TASK-3 channels in MDA-MB-231 human breast cancer cells and MCF-10F human mammary epithelial cells by first evaluating gene expression. We then tested the association of shRNA-mediated depletion of TASK-3 with senescence, autophagy, apoptosis and cell cycle arrest. We provide strong evidence for the quantitative detection of the mRNA transcript and the protein immunolocalization of TASK-3 channels in both cell lines. The immunofluorescence characterization of TASK-3 channels revealed a staining pattern consistent with membrane localization. These results indicate that the TASK-3 channel is stably expressed on the cell surface of MDA-MB-231 and MCF-10F cells, and are in agreement with what was previously reported [44].
In order to examine the effects of TASK-3 deficiency in MDA-MB-231 and MCF-10F cells, we designed shRNA constructs that effectively reduced the expression of this channel. Importantly, the expression of the highly homologous TASK-1 channel was not affected, indicating that the shRNA-mediated knockdown of TASK-3 was specific. We also conducted Western blot analyses and a functional evaluation (macroscopic outward K+ currents) of TASK-3 channels in order to confirm that the protein was reduced in MDA-MB-231 and MCF-10F cells. Our results corroborate the effectiveness and specificity of the TASK-3 knockdown.
The possibility of a role of TASK-3 channels in the proliferation of MDA-MB-231 and MCF-10F cells was also explored. In keeping with other reports [46], the proliferative ability of cells deficient in TASK-3 was greatly impaired. Therefore, based on our cell proliferation experiments, we conclude that knocking down TASK-3 causes a strong reduction in cell proliferation.
To elucidate the mechanism behind the inhibition of proliferation observed in TASK-3-depleted cells, senescence, autophagy, apoptosis, and cell cycle arrest were examined. The first marker used for the identification of senescent cells was senescence-associated β-galactosidase (SA-β-gal) staining [47]. We showed that this marker was significantly increased only in TASK-3-depleted MDA-MB-231 cells (Figure 4A-C). Autophagy, another plausible cell-death mechanism investigated, was also unlikely to contribute to the proliferative arrest of these cells (Figure 4G-J).
Reduced expression of the TASK-3 channel has been associated with apoptotic cell death [40][41][42]. Nonetheless, in our hands, TASK-3 knockdown had no significant effect on apoptotic rates (Figure 5A,B). This discrepancy may be explained by differences in the cellular phenotype displayed by cancerous cells versus non-tumorigenic cells such as MCF-10F cells. Of note, MDA-MB-231 cells have very high levels of phospholipase D (PLD) activity relative to other breast cancer cells, providing a survival signal that suppresses apoptosis when these cells are subjected to apoptotic stress [48]. Also, MDA-MB-231 cells have shown resistance to genotoxic drugs, such as etoposide and cisplatin, chemotherapeutic agents that activate the mitochondrial apoptotic pathway [49]. Taken together, senescence induction in MDA-MB-231 cells was independent of autophagy and apoptosis, ruling out these mechanisms as the main contributors to the cell proliferation impairment observed in TASK-3-depleted cells.
Surprisingly, TASK-3-deficient MDA-MB-231 and MCF-10F cells showed a significant increase in the expression of cell-cycle-promoting cyclin genes (encoding Cyclin D1 and Cyclin E1), which was, however, accompanied by an increase in the expression of the CDK inhibitors p21 and p27 [50,51] (Figure 6A,B).
CDK inhibitors (CKI) play a crucial role in cell cycle arrest. CKIs bind either CDK or CDK/Cyclin complexes to inhibit CDK activity and cause cell cycle arrest [52]. In particular, p21 and p27, which belong to the Cip/Kip family of CKIs, cause cell cycle arrest specifically by inactivating the CDK2/Cyclin E complexes in the G1 phase. In addition, p21 has also been shown to inhibit DNA synthesis [52,53].
The canonical model of the G1-to-S cell cycle transition involves a CDK4/Cyclin D-dependent initial phosphorylation of pRB and the consequent release of E2F transcription factors [54]. In turn, E2F factors promote transcription of Cyclin E, leading to activation of CDK2/Cyclin E complexes, which further phosphorylates pRB and release more E2F, thus providing a positive feedback loop.
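A toy ODE sketch of this feedback loop, with all rate constants chosen arbitrarily for illustration, shows how raising a p21/p27-like CDK-inhibitor level can keep pRB hypophosphorylated and E2F off:

```python
# Toy ODE sketch of the pRB/E2F positive feedback described above: basal
# CDK activity phosphorylates pRB, freeing E2F, which drives Cyclin E and
# hence more CDK activity; a p21/p27-like inhibitor (cki) damps the loop.
# All rate constants are arbitrary illustrative choices.

def steady_e2f(cki, steps=40_000, dt=0.01):
    e2f, cycE = 0.05, 0.05
    for _ in range(steps):
        cdk = (0.4 + cycE) / (1.0 + cki)        # CKI inhibits CDK/cyclin
        prb_p = cdk**2 / (1.0 + cdk**2)         # phosphorylated-pRB fraction
        e2f += dt * (0.5 * prb_p - 0.2 * e2f)   # pRB-P releases free E2F
        cycE += dt * (0.4 * e2f - 0.2 * cycE)   # E2F transcribes Cyclin E
    return e2f

for cki in (0.0, 5.0):
    print(f"CKI level {cki}: steady-state free E2F ~ {steady_e2f(cki):.2f}")
```

In this caricature, the loop settles into a high-E2F "on" state without inhibitor and a low-E2F "off" state when the inhibitor dominates, mirroring the arrest attributed to p21/p27 upregulation.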
In line with this model, TASK-3 depletion seems to affect the kinase activity of CDK-containing complexes, as reflected in a reduction in the levels of phosphorylated pRB, at least in MDA-MB-231 cells (Figure 6C). Therefore, it is plausible that inhibition of Cyclin/CDK complexes secondary to upregulation of p21 and p27 contributes to orchestrating cell cycle arrest and senescence in MDA-MB-231 cells in the context of TASK-3 depletion.
On the other hand, TASK-3 depletion in MCF-10F cells did not have a significant effect on the phosphorylation of pRB (Figure 6C). Although both qPCR and Western blot analyses show an increase in the expression of the cell cycle drivers Cyclin D1, Cyclin E1 and pRB, the concomitant increase in the expression of the CKIs p21 and p27 may explain the cell cycle arrest observed at the G1/S phase, even though it does not seem to involve changes in pRB phosphorylation. The increases in CKI gene expression thus support the conclusion that TASK-3 knockdown induces cell cycle arrest in MCF-10F human mammary epithelial cells via inhibition of the activity of CDK/Cyclin complexes.
Our current results indicate that downregulation of TASK-3 expression reduces the proliferation rate of human mammary epithelial MCF-10F cells: shRNA-mediated silencing of TASK-3 indeed produced growth-inhibitory effects in these cells. These results support the previously described finding that TASK-3 confers proliferative advantages on breast cancer cell lines [24,55].
In summary, in this work we corroborated the presence and localization pattern of TASK-3 channels in MDA-MB-231 and MCF-10F cells by quantitative real-time PCR, Western blotting, and immunofluorescence. In addition, TASK-3 knockdown exerted an inhibitory effect on the proliferation rate of both cell lines, produced by cell cycle arrest mediated by CDK inhibitor upregulation.
Immunocytochemistry
Cellular localization of TASK-3 in MDA-MB-231 and MCF-10F cells was analyzed by immunofluorescence as described previously [16]. Briefly, cells were seeded on coverslips, fixed in 4% paraformaldehyde (PFA)/1× phosphate-buffered saline (PBS) for 20 min at room temperature, and permeabilized with 2% bovine serum albumin in 1× PBS containing 0.1% Triton X-100 for 30 min. After incubation in blocking buffer, cells were incubated with anti-TASK-3 antibody (1:100; sc-11317, Santa Cruz Biotechnology, USA) overnight at 4 °C. Negative controls were treated in the same way, replacing the primary antibody with 1× PBS. Cells were then incubated with an Alexa Fluor 594-conjugated secondary antibody (ab150132; Abcam, Cambridge, MA, USA) at 1:1000 dilution for 1 h at room temperature. Finally, DAPI (4′,6-diamidino-2-phenylindole, 0.1 µg/mL for 5 min) was used to stain cell nuclei. Immunofluorescence images were acquired in a fluorescence microscope (Olympus BX53; Center Valley, PA, USA) coupled to a CCD camera. Digital images were acquired using the Q-Capture Pro 7 software (QImaging, Surrey, Canada). Each staining was done in triplicate in 3 independent experiments.
TASK-3 Silencing with Short Hairpin RNA (shRNA)
In order to generate retroviral vectors expressing short hairpin RNAs (shRNAs) targeting TASK-3, the following oligodeoxyribonucleotide sequences were annealed and subcloned between the AgeI and EcoRI restriction sites of pMKO.1 puro.
Trypan Blue Exclusion
Cell viability was assessed using the Trypan blue exclusion assay. MDA-MB-231 and MCF-10F cells undergoing shRNA-mediated knockdown and growing in six-well tissue culture plates were treated for 48 h with DMEM containing 0.1% ethanol. The cells were then harvested by trypsinization and centrifugation at 300× g for 5 min. Pellets were resuspended in 0.4% Trypan blue solution (Sigma-Aldrich, St. Louis, MO, USA), and live (unstained) and dead (stained blue) cells were counted using a hemocytometer to determine the total number of viable cells. The percentage of surviving cells was calculated as the ratio of viable cells to the total cell population in each well. The proliferation rate was calculated from the number of viable MDA-MB-231 or MCF-10F cells infected with the vector control (pMKO.1 puro) versus those infected with shRNAs against K2P9 (shK2P9A, shK2P9B, and shK2P9C). Experiments were repeated to confirm the accuracy of the results.
Extraction and Quantification of mRNA by Real Time PCR
Total RNA was extracted from MDA-MB-231 and MCF-10F cells using TRIzol Reagent (Life Technologies), followed by an additional DNase treatment (TURBO DNA-free kit; Life Technologies). First-strand cDNA was primed with oligo(dT) from 1 µg of RNA and synthesized using the RevertAid H Minus First Strand cDNA Synthesis Kit (Thermo Fisher Scientific, Waltham, MA, USA) at 42 °C for 60 min. The cDNA generated was used as a template for PCR amplification using specific primers (Table 1).
Real-time PCR reactions consisted of 5 µL of 2× Maxima SYBR Green/ROX qPCR Master Mix (Thermo Fisher), 250 nM of each primer, and 100 ng of cDNA template. The cycling conditions were as follows: one cycle of 95 °C for 10 min, followed by 40 cycles of 95 °C for 15 s, 60 °C for 15 s, and 72 °C for 20 s; a final cycle at 95 °C for 1 min; and a melting curve from 55 °C to 95 °C in 0.5 °C/s increments. These assays were performed in triplicate in a Stratagene Mx3000P real-time thermal cycler and analyzed with the MxPro qPCR software (Agilent Technologies, Santa Clara, CA, USA). All pairs of primers were tested and their efficiencies evaluated (only those giving efficiencies of 90-100% were selected). Additionally, gel electrophoresis and melting curve analyses were performed in order to confirm the specificities of the PCR products. The expression of each gene was normalized to ribosomal protein L19 (RPL19).
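For reference, a minimal sketch of the ΔΔCt quantitation used for these data follows: target Ct values are normalized to the RPL19 reference gene and then to the control (empty-vector) sample. The Ct numbers in the example are made up for illustration.

```python
# Minimal sketch of delta-delta-Ct relative quantitation. Assumes ~100%
# amplification efficiency, as verified for the primers described above.

def ddct_fold_change(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    dct_sample = ct_target_sample - ct_ref_sample      # normalize to RPL19
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_sample - dct_control                    # normalize to control
    return 2.0 ** (-ddct)

# e.g. KCNK9 in shK2P9B cells vs pMKO.1 control (illustrative Ct values)
print(ddct_fold_change(27.5, 18.0, 24.5, 18.0))        # 0.125 -> ~87% knockdown
```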
Protein Extraction and Western Blotting
Cultured MDA-MB-231 and MCF-10F cells were gently scraped, pelleted by centrifugation at 5000× g for 5 min at 4 °C, and resuspended and lysed in ice-cold RIPA buffer supplemented with protease and phosphatase inhibitors. For Western blotting, 60 µg of protein were subjected to 10% SDS-polyacrylamide gel electrophoresis. Proteins were then transferred to nitrocellulose membranes (Thermo Fisher) and incubated with primary antibodies against TASK-3 (1:1000 antibody dilution).
Assessment of Apoptosis
Apoptosis in adherent cells was assessed by terminal deoxynucleotidyl transferase-mediated DNA nick-end labelling (TUNEL) assays. For nuclear staining purposes, cells were fixed with freshly prepared 4% PFA and incubated in a DAPI solution (0.1 µg/mL for 5 min). For TUNEL labelling, the DeadEnd Fluorometric TUNEL system (Promega) was used according to the manufacturer's instructions. Cells were analyzed by fluorescence microscopy, and images were digitally acquired as indicated above.
Statistical Analysis
Data were compiled and analyzed with the SigmaPlot software version 12.0 (Systat Software, San Jose, CA, USA). Group differences were assessed with one- or two-way ANOVA with post-hoc Tukey HSD tests. p < 0.05 was considered statistically significant, and all data shown are mean ± standard error of the mean (SEM).
Conclusions
We corroborated the overexpression and functional status of TASK-3 in the MDA-MB-231 human breast cancer cell line and in the non-tumorigenic human mammary epithelial cell line MCF-10F. In addition, we show that TASK-3 silencing in MDA-MB-231 cells suppresses cell proliferation by inducing senescence. In MCF-10F cells, TASK-3 knockdown exerted an inhibitory effect on the proliferation rate, generated by cell cycle arrest mediated by CDK inhibitor upregulation. This anti-proliferative effect is likely mediated by p21 or p27. Altogether, our results confirm the role of TASK-3 in proliferation and highlight this channel as a potential target for the development of specific inhibitors for the treatment of triple-negative breast cancers. | 6,871.2 | 2018-03-29T00:00:00.000 | [
"Biology"
] |
An analytical velocity field of spiral tips in reaction–diffusion systems
Spiral waves are ubiquitous in diverse physical, chemical, and biological systems. The tip (phase singularity) of a spiral wave is considered to represent its organizing center. Here, we derive an analytical velocity field of spiral tips based on the variables of a general two-variable reaction–diffusion (RD) equation. From this velocity field, we can predict the velocities of spiral tips at time t as long as the values of the variables are given at that time. Numerical simulations with two-variable RD models are in quantitative agreement with the analytical results. Furthermore, we also demonstrate the velocity field of spiral tips in the Luo–Rudy model for cardiac excitation.
Introduction
Spiral waves represent a very prominent example of self-organization in quite different active distributed systems, including the Belousov-Zhabotinsky (BZ) reaction [1], the catalytic oxidation of CO on platinum [2], aggregations of Dictyostelium discoideum amoebae [3], eye retina [4], and cardiac tissue [5]. In cardiology, such self-sustained spiral wave activities play an essential role in cardiac arrhythmia and fibrillation [6,7].
The spatiotemporal behavior of a spiral wave is quite complex, but some of its features, e.g., rigid rotation, meander, and drift, can be well described by the motion of the spiral's tip, a phase singularity that is a kind of topological defect [8,9]. The simplest motion of a spiral wave is rigid rotation, in which the tip follows a circular orbit. Quasi-periodic motions of spiral tips are also possible; this phenomenon is called meander [10,11]. Close to the bifurcation, meander generally occurs in two types, called outward and inward meandering, depending on whether the tip trajectory forms a flower-like orbit with petals pointing outward or inward, respectively. It has been shown that the meandering dynamics of spiral waves is organized around a codimension-2 bifurcation where Hopf eigenmodes interact with eigenmodes resulting from Euclidean symmetry [12]. Experiments on the BZ reaction revealed the bifurcation from simply rotating spirals to meandering spirals in the neighborhood of a codimension-2 point [13]. Under certain conditions, spiral waves can break up due to strong meandering of their tips [14][15][16].
If the velocity of the spiral tip is known, then its movement can be quantitatively described. Generally, this velocity can be calculated from the locations of the spiral tip at time t and at a subsequent time t + Δt, where Δt ≪ T and T is the rotation period of the spiral wave. Many studies have demonstrated that many important features of spiral wave dynamics (e.g. meander, breakup) can be reproduced by two-variable reaction-diffusion (RD) systems [17,18]. This paper uses a general two-variable RD equation and the velocity field of topological defects [19] to derive an analytical velocity field of spiral tips that is independent of the specific model. The analytical velocity field can predict the velocities of spiral tips at time t as long as the values of the variables are given at that time, so it is no longer necessary to know the values of the variables at two successive moments. Besides simple cases, e.g., a rigidly rotating spiral wave or a meandering spiral wave, the analytical velocity field can also give the velocities of spiral tips in complex cases, such as spiral turbulence. In addition, the analytical velocity field of spiral tips can be applied to the Luo-Rudy model, a more realistic model of cardiac excitation.
Theoretical results
Theoretical investigations of spiral waves have been performed predominantly in RD systems [20][21][22]. A general two-variable RD system in two dimensions can be described by the following partial differential equations [15,23-25]:

∂u/∂t = f(u, v) + D_u ∇^2 u,
∂v/∂t = g(u, v) + D_v ∇^2 v, (1)

where the variables u and v can be chemical concentrations in a hypothetical chemical reaction, or the membrane potential and current in a hypothetical physiological medium. The functions f(u, v) and g(u, v) model the local dynamics, e.g. chemical reaction kinetics, and D_u and D_v are the diffusion coefficients of u and v, respectively. The spiral tip can be defined as the intersection of the isolines u = u* and v = v* [11,26,27]. Namely,

u(x, y, t) = u*, v(x, y, t) = v* (2)

is the locus of the spiral tip. The solution of equation (2) can be generally expressed as x = x_tip(t), y = y_tip(t). Note that different sets of u* and v* give different spiral tip locations and can thus produce different tip trajectories, depending on the choice of u* and v* [28]. A phase variable φ can be computed at each point according to [28,29]

φ(x, y, t) = arctan[(v − v*)/(u − u*)]. (3)

The spiral tip can then be described according to the topological charge [30,31], which is defined as

n_t = (1/2π) ∮_Γ ∇φ · dl, (4)

where Γ is a closed curve encircling the spiral tip. The medium in each phase of the activation-recovery cycle encircles the spiral tip, but at the tip itself the phase is undefined, and the tip is thus a phase singularity. A phase singularity appears when the line integral of the phase change around a point is 2π or −2π, i.e., n_t = 1 or −1.
Topological current theory [19,32] shows that

(1/2π) ∮_{∂S} ∇φ · dl = ∫_S δ(u − u*) δ(v − v*) D_0(u/x) dx dy,

where ∂S = Γ is the boundary of the surface S, D_0(u/x) = ∂_x u ∂_y v − ∂_y u ∂_x v is the Jacobian determinant, and δ(u − u*) δ(v − v*) is the two-dimensional δ-function concentrated on the tip locus. So, the charge density of the spiral tip is

ρ(x, y, t) = δ(u − u*) δ(v − v*) D_0(u/x).

It can be proved that (more details can be found in the appendix)

ρ(x, y, t) = sgn[D_0(u/x)_tip] δ(x − x_tip(t)) δ(y − y_tip(t)),

where D_0(u/x)_tip is the value of D_0(u/x) at the spiral tip; in particular, n_t = sgn D_0(u/x)_tip.
Figure caption (fragment): V_1 and V_2 are calculated according to equation (7); the theoretical descriptions oscillate around the numerical results due to the discretization error. Model parameters are a = 0.84, b = 0.07, and ε = 0.04.
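In practice, the topological charge of Eq. (4) can be evaluated on gridded data as a winding number. The following sketch uses function and variable names of our own choosing.

```python
# Winding-number evaluation of the topological charge of Eq. (4) on gridded
# data: the phase phi = atan2(v - v*, u - u*) is accumulated around a closed
# pixel loop.
import numpy as np

def winding_number(u, v, ustar, vstar, loop):
    """loop: list of (i, j) pixel indices tracing a closed curve."""
    phi = [np.arctan2(v[i, j] - vstar, u[i, j] - ustar) for i, j in loop]
    dphi = np.diff(phi + phi[:1])                   # close the loop
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi   # wrap to (-pi, pi]
    return int(round(dphi.sum() / (2.0 * np.pi)))

# demo: the synthetic defect u = x, v = y carries unit charge at the origin
m = 64
y, x = np.mgrid[-1:1:m*1j, -1:1:m*1j]
idx = list(range(8, 56))
loop = ([(8, j) for j in idx] + [(i, 55) for i in idx]
        + [(55, j) for j in reversed(idx)] + [(i, 8) for i in reversed(idx)])
print(winding_number(x, y, 0.0, 0.0, loop))         # -> 1 (sign = orientation)
```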
From topological current theory, we can also obtain the continuity equation satisfied by the charge density (more details can be found in the appendix):

∂ρ/∂t + ∇ · (ρV) = 0, (5)

where ∇ = i ∂/∂x + j ∂/∂y, and V(x, y, t) is the velocity field of spiral tips:

V_1 = (∂_y u ∂_t v − ∂_t u ∂_y v) / D_0(u/x),
V_2 = (∂_t u ∂_x v − ∂_x u ∂_t v) / D_0(u/x). (6)

Note that the above formulas are also applicable to the velocity field of other topological defects, e.g., topological defects in phase-ordering systems [33] and wave dislocations in light beams [34]. In equation (6), to calculate ∂_t u and ∂_t v, we must know u and v at time t and at a subsequent time t + Δt (Δt ≪ T). Interestingly, from the general two-variable RD system (1), we can calculate ∂_t u and ∂_t v from u and v at time t alone. So, we can predict the velocities of the spiral tips at time t as long as we know the values of u and v at that time:

V_1 = [∂_y u (g + D_v ∇^2 v) − (f + D_u ∇^2 u) ∂_y v] / D_0(u/x),
V_2 = [(f + D_u ∇^2 u) ∂_x v − ∂_x u (g + D_v ∇^2 v)] / D_0(u/x). (7)

The expressions given by equation (7) for the velocity field of spiral tips are useful, because they avoid the need to explicitly specify the positions of the tips.
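Equation (7) translates directly into a grid computation. The sketch below (our own naming, periodic finite differences for simplicity) evaluates V_1, V_2, and D_0(u/x) from snapshots of u and v.

```python
# Grid evaluation of the tip-velocity field of Eq. (7): time derivatives come
# from the RD right-hand sides f, g and spatial derivatives from finite
# differences (periodic here for simplicity). As in the text, V is only
# meaningful where |D0| exceeds a small threshold.
import numpy as np

def tip_velocity(u, v, f, g, Du, Dv, dx, d0_min=1e-12):
    lap = lambda a: (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                     np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a) / dx**2
    ut = f(u, v) + Du * lap(u)               # du/dt from the RD equation (1)
    vt = g(u, v) + Dv * lap(v)               # dv/dt
    uy, ux = np.gradient(u, dx)              # axis 0 = y, axis 1 = x
    vy, vx = np.gradient(v, dx)
    D0 = ux * vy - uy * vx                   # Jacobian determinant D0(u/x)
    D0 = np.where(np.abs(D0) < d0_min, np.nan, D0)
    V1 = (uy * vt - ut * vy) / D0            # x component of Eq. (7)
    V2 = (ut * vx - ux * vt) / D0            # y component of Eq. (7)
    return V1, V2, D0
```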
Numerical simulations
To test the validity of the velocity field of spiral tips in equation (7), we use the Bär model [15] (whose spirals display rich behaviors including rigid rotation, meander, and breakup) to numerically study four different types of spiral wave dynamics: a rigidly rotating spiral wave, a meandering spiral wave, the generation of a spiral pair, and the annihilation of a spiral pair. The model kinetics is of the standard Bär-Eiswirth form,

f(u, v) = (1/ε) u (1 − u) [u − (v + b)/a], g(u, v) = h(u) − v,

with h(u) = 0 for u < 1/3, h(u) = 1 − 6.75 u (u − 1)^2 for 1/3 ≤ u ≤ 1, and h(u) = 1 for u > 1. The simulations are conducted on 1024 × 1024 grid points employing the explicit Euler method. The space and time steps are Δx = Δy = 0.025 and Δt = (Δx)^2/5, respectively. No-flux conditions are imposed at the boundaries. In this paper, to avoid boundary effects, the regions near the boundaries are not considered. Figure 1(a) shows the u field of a rigidly rotating spiral wave at a certain time, and figure 1(b) shows the field of D_0(u/x). We can see that the Jacobian determinant D_0(u/x) is localized at the rotation center: it has a maximum approximately at the rotation center and is almost zero in other regions [35]. The isocontours of the two state variables intersect only near the rotation center and are parallel to each other far from it. That is, far away from the rotation center equation (2) has no solutions, and then, according to the implicit function theorem, D_0(u/x) = 0. Thus, tips only exist near the rotation center. Due to the discretization error in the numerical simulations, D_0(u/x) can be very small but not exactly zero in the areas far from the rotation center. Therefore, we require the absolute value of D_0(u/x) to be greater than a certain threshold (determined from the numerical simulations) when calculating the values of V_1 and V_2. We find that all tips (for different sets of u* and v*) rotate rigidly.
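A minimal explicit-Euler sketch of this setup is given below; the kinetics is the Bär-Eiswirth form as reconstructed above, on a reduced grid with periodic rather than no-flux boundaries for brevity, and with random initial data serving only as a smoke test (spiral studies would initialize crossed gradients of u and v).

```python
# Minimal explicit-Euler sketch of the Bar model on a small periodic grid
# (the paper uses 1024x1024 with no-flux boundaries).
import numpy as np

a, b, eps = 0.84, 0.07, 0.04
dx = 0.025
dt = dx**2 / 5.0
m = 256
rng = np.random.default_rng(1)
u = rng.random((m, m))
v = rng.random((m, m))

def h(w):                                    # delayed inhibitor production
    return np.where(w < 1.0/3.0, 0.0,
           np.where(w <= 1.0, 1.0 - 6.75 * w * (w - 1.0)**2, 1.0))

def lap(w):
    return (np.roll(w, 1, 0) + np.roll(w, -1, 0) +
            np.roll(w, 1, 1) + np.roll(w, -1, 1) - 4.0 * w) / dx**2

for _ in range(2000):
    f = u * (1.0 - u) * (u - (v + b) / a) / eps
    u, v = u + dt * (f + lap(u)), v + dt * (h(u) - v)   # D_u = 1, D_v = 0

print("u range after 2000 steps:", float(u.min()), float(u.max()))
```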
We calculate the velocity field according to equation (7), and figure 1(c) shows the field distributions of velocity V. Since spiral tips only exist near the rotation center, we can see that the velocity vectors V only exist around the rotation center of the spiral and do not occur in other regions, and that the velocity field is rotationally symmetric about the point where V = 0. Figure 1(d) shows three different isolines of u * (solid lines) and three different isolines of v * (dashed lines). The u * isolines intersect the v * isolines and produce nine intersections that can all be regarded as tips of a spiral.
Additionally, from equation (7), we calculate V and determine the location where V = 0 (marked as '•' in figure 1(e)). Figure 1(e) also shows the tip orbit (red circle) of the spiral and the orbit center (marked as '+'). The location where V = 0 coincides with the center of the tip orbit, a result that follows from the rotational symmetry. To further confirm the validity of equation (7), figure 1(f) compares the values of the speed V obtained by theory with those from direct numerical simulation; a simple locating procedure is sketched below.
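The paper does not state how the V = 0 point is located numerically, so the following grid argmin is only one plausible choice, reusing V1, V2, and D0 from the earlier sketch.

```python
import numpy as np

def locate_zero_velocity(V1, V2, d0, dx, d0_min=1e-3):
    """Return (x, y) of the grid point minimising |V| inside the tip region.

    The restriction |D0| > d0_min keeps the search near the rotation center,
    where the velocity field is meaningful.
    """
    speed = np.hypot(V1, V2)
    speed = np.where(np.abs(d0) > d0_min, speed, np.inf)
    iy, ix = np.unravel_index(np.argmin(speed), speed.shape)
    return ix * dx, iy * dx
```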
We consider a meandering spiral wave in figure 2(a). Similar to the rigidly rotating case, D0(u/x) is localized, and there is a rotating velocity field for the meandering spiral wave, as shown in figures 2(b) and (c). Interestingly, there also exists a point where V = 0 in the velocity field, which moves in space with time. The velocity field is not rotationally symmetric about the point where V = 0, in contrast to the rigid-rotation case (see figure 1(c)). Figure 2(d) shows the meandering tip trajectory (red curve) with outward petals [12], given by the intersection of the isolines u* = 0.50 and v* = 0.35. The trajectory of the point where V = 0 is shown by the blue curve. Like the spiral tip, the point V = 0 can be considered to represent the organizing center of the spiral wave. Now, we consider a spiral wave that is breaking up [18], as shown in figures 3(a) and (b). Figure 3(c) shows that, when a new spiral pair is generated, the velocity directions near the breakup region (i.e., the area enclosed by the dashed rectangle) are opposite and the speeds are very large. The open ends become the new organizing centers for spirals later in the simulation. These findings agree with the theoretical results of figure 6(a) in [19], which show that the speeds of topological defects are very large when they are generated.
In addition, we can also find some regularities in the breakup process by observing the distributions of D0(u/x), as shown in figure 3(d). Specifically, the maximum of D0(u/x) is located at the original spiral tip. When new spirals are generated (the upper part of the panels), the value of D0(u/x) is positive on one side and negative on the other, which means that the topological charges of the two newly generated spirals are opposite, since n_t = sgn D0(u/x)|_tip [35]. Furthermore, the value of D0(u/x) in the intermediate region between the two sides equals zero; this finding is consistent with the results from the branch process of the limit point in section 3.3 of reference [19].
To better observe the velocity vector distributions, we magnify the dashed rectangular areas of figures 3(a) and (c), as shown in figure 4. The left, the middle and the right panels show the distributions of the u field, the velocity field and the isolines of u and v, respectively. From the patterns of the u field in figures 4(a) and (d), it seems that there are no tips, but through the corresponding isolines of u and v as shown in figures 4(c) and (f), we can see the intersections generated by the isolines of u and v. That is, tips exist in these regions.
In figure 5, we also study the annihilation of a spiral pair. Figures 5(a) and (b) show the evolution of the u field for a spiral pair. Figure 5(c) shows the isolines u* = 0.50 (blue curve) and v* = 0.35 (red curve), as well as the velocity vectors. When the two counter-rotating spiral waves collide and annihilate, the tip speeds around the annihilation region are also very large, which agrees with the results of figure 6(b) in reference [19]. Also, the value of D0(u/x) is positive on one side and negative on the other, and the intermediate region between them is zero, as shown in figure 5(d). These results are also consistent with the theoretical results from the branch process of the limit point [19].
Discussion
In figures 1-5, we use the Bär model with D_v = 0 to demonstrate the velocity field of spiral tips. To check whether the velocity field also works for D_v ≠ 0, we repeat the simulations with D_v = 0.6. Compared with figure 1 (D_v = 0), similar results are observed with D_v = 0.6, as shown in figure 6.
To further verify that the analytical formula (7) for the velocity field of spiral tips applies to different models, we test the Barkley model [25] and the FitzHugh-Nagumo model [23,24], as shown in figures 7 and 8, respectively. In both cases there is a rotating velocity field around the rotation center of the spiral, and the velocities obtained from the numerical simulations agree with the analytical results.
In order to check whether the analytical velocity field can be extended to more realistic RD models (e.g., ionic models for cardiac excitation), we analyze spiral waves generated by the Luo-Rudy model [36]: ∂V_m/∂t = −I_ion/C_m + D∇²V_m, where V_m is the membrane potential; C_m = 1 μF cm⁻² is the membrane capacitance; D = 0.001 cm² ms⁻¹ is the diffusion coefficient; and I_ion is the total ionic current, I_ion = I_Na + I_Si + I_K + I_K1 + I_Kp + I_b, where I_Na = G_Na m³hj(V_m − E_Na) is the fast inward Na⁺ current, I_Si the slow inward current, I_K the time-dependent K⁺ current, I_K1 the time-independent K⁺ current, I_Kp = G_Kp K_p(V_m − E_Kp) the plateau K⁺ current, and I_b the total background current. m, h, j, d, f, and x are gating variables satisfying the differential equation dy/dt = α_y(V_m)(1 − y) − β_y(V_m)y, where y represents any of the gating variables. The parameters are simplified as in reference [37]. The Luo-Rudy model is integrated on a 6 cm × 6 cm medium with no-flux boundary conditions via the Euler method. The space and time steps are Δx = Δy = 0.0075 cm and Δt = 0.00125 ms, respectively. A rigidly rotating spiral is considered, as shown in figure 9. To obtain the velocity field distributions, we arbitrarily choose two time-dependent variables in the Luo-Rudy model to replace the variables u and v in equation (7). For example, figure 9(b) shows the velocity field of spiral tips obtained from the membrane potential V_m and the gating variable m; figure 9(c) uses V_m and the gating variable x; and figure 9(d) uses the gating variables m and x. From figures 9(b)-(d), we can see that a rotating velocity field exists around the rotation center of the spiral, regardless of the selected time-dependent variables. These velocity field distributions are similar to those in figure 1(c).
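A minimal sketch of the two update rules just described (membrane potential plus one gating variable), assuming the standard monodomain form. The Luo-Rudy rate functions α_y and β_y are lengthy and not reproduced here, so they must be supplied by the caller; the function names are ours.

```python
import numpy as np

def update_gate(y, alpha, beta, Vm, dt):
    """Explicit Euler step of dy/dt = alpha(Vm)*(1 - y) - beta(Vm)*y.

    alpha and beta are the Luo-Rudy rate functions for the chosen gate
    (m, h, j, d, f, or x), supplied by the caller.
    """
    return y + dt * (alpha(Vm) * (1.0 - y) - beta(Vm) * y)

def update_Vm(Vm, I_ion, Cm, D, dt, dx):
    """Explicit Euler step of dVm/dt = -I_ion/Cm + D*lap(Vm), no-flux borders."""
    p = np.pad(Vm, 1, mode="edge")
    lap = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:] + p[1:-1, :-2] - 4.0 * Vm) / dx**2
    return Vm + dt * (D * lap - I_ion / Cm)
```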
Recently, in reference [38], the authors studied the creation and annihilation of a spiral pair in a state of spiral turbulence. They also found numerically that the tip speeds are large when two new spirals are generated or two spirals are annihilated, which is consistent with our results in figures 3-5 and with the topological current bifurcation theory [19]. The generation and annihilation of other topological defects have been studied before. Bray [33] studied the velocity distribution of topological defects in phase-ordering systems and provided a scaling argument associated with the final annihilation of defects, which leads to a large velocity tail. The authors in reference [34] showed that the speeds of wave dislocations in light beams are very large when they are annihilated or generated.
Conclusion
In conclusion, we proposed a quantitative description of the velocity field of spiral tips, providing analytical results that contribute to a deeper and more comprehensive understanding of the dynamics of spiral waves. From a general two-variable RD equation and a velocity field of topological defects, we obtained an analytical velocity field of spiral tips, which is independent of specific models. Using this velocity field, we can obtain the velocities of spiral tips at time t as long as we have the values of u and v at that time. The predictions of the velocity field are in agreement with the results obtained from numerical simulations of RD models and with the topological current bifurcation theory.
Namely, the continuity equation satisfied by the charge density is ∂ρ/∂t + ∇·(ρV) = 0. | 4,209.8 | 2020-09-16T00:00:00.000 | [ "Physics" ] |
The Behavioural Aspects of Financial Literacy
In this paper, we investigate the contribution of behavioural characteristics to the financial literacy of UAE residents after controlling for demographic factors. Specifically, we test the relationship between financial literacy and behavioural biases such as representativeness, self-serving, overconfidence, loss aversion, and hindsight bias. Using data collected through survey questionnaires, we apply the methodology developed by the Organisation for Economic Co-operation and Development (OECD) to compute financial literacy scores. Our overall results show that all behavioural biases except for overconfidence bias are positively related to financial literacy. Furthermore, some biases exhibit a stronger quantitative relationship with financial literacy than others. For example, hindsight bias displays the strongest link to financial literacy, followed by self-serving bias. The weakest but still statistically significant effect is that of loss aversion bias. Although biases, in general, have negative connotations, behavioural biases appear to be related to higher levels of financial literacy. The study examines five behavioural biases: representativeness, self-serving, overconfidence, loss aversion, and hindsight.
Introduction
Research on behavioural finance (e.g., Baker et al. 2019) suggests that individuals, investors, and markets do not always act rationally and systematically deviate from optimal financial decision making. Behavioural finance draws insights mainly from psychology and finance; considers financial behaviour in various market settings that may deviate from standard suppositions; and indicates that individual and market decisions are sometimes inefficient (Yoong and Ferreira 2013). Behavioural biases are often the driving force behind an individual's financial inconsistencies. Kahneman and Tversky (1979) suggest that people make irrational decisions under risk and uncertainty and violate the axioms of expected utility theory. Financial literacy is the ability to make informed judgments and effective decisions related to using and managing money (Noctor et al. 1992). Therefore, Baker et al. (2019) indicate that there is a need to improve individuals' behaviour concerning financial products and services. As economic and financial literacy are crucial for sound financial decisions, the impact of education on financial behaviour has increasingly been investigated by academics in recent years. For instance, education in the form of financial literacy has a significant effect on risk-taking behaviour. Bianchi (2018) shows that more financially literate households rebalance their portfolios to hold a greater portion of riskier funds. These more literate households are also contrarian investors who rebalance their portfolios toward funds that have experienced relatively lower returns in the past.
According to Baker et al. (2019), only slight evidence is available on how individuals' financial literacy and various demographic characteristics relate to behavioural biases, except for risk-taking behaviour. Of the studies in this area, Jonsson et al. (2017) show that financial literacy can mitigate disposition biases, while Kiymaz et al. (2016) indicate that overconfidence is related to financial literacy through investor sophistication. Carpena et al. (2019) highlight the limitations of financial education. They show that financial education improves financial awareness and attitudes but fails to improve longer-term behavioural outcomes. However, in conjunction with tailored interventions, financial education helps individuals circumvent behavioural constraints. Financial literacy may play a role in the way individuals react to certain situations within the confines of individuals' behavioural biases. This point is even more valid in the context of a crisis (see Ramiah 2013), and we propose that individuals may react in a similar way during the novel coronavirus pandemic.
While the virus outbreak is a human tragedy of many dimensions, its impact on the finances of individuals is alarming. Prior studies have purposefully investigated many scenarios around periods of turmoil analogous to the COVID-19 outbreak, such as the Global Financial Crisis (GFC). For instance, Klapper et al. (2013) investigate financial literacy and the ability of individuals to deal with macroeconomic shocks. Of specific importance is their main question of whether less financially literate individuals can deal with financial crises. Bucher-Koenen and Ziegelmeyer (2014) investigate behavioural characteristics following losses experienced during the GFC. The authors find that households with lower financial literacy are more likely to withdraw entirely from investing in the equity market, a dynamic akin to loss aversion bias. We caution the reader against inferring that financial crises cause behavioural biases; empirical evidence neither supports nor disproves this assumption. We believe that an individual's behavioural bias exists as an extension of their personality. Regardless, these biases are easier to identify during potentially traumatic events such as the COVID-19 pandemic. Accordingly, an investigation of the relationship between financial literacy and the behavioural biases of individuals who experienced a financial shock, 1 in the context of a crisis, is of utmost relevance. Consequently, it is interesting to answer the question: What role do behavioural biases play in improving financial literacy?
Thus, the purpose of this paper is to investigate the effects of behavioural biases on the financial literacy of individuals during the COVID-19 crisis while controlling for demographic factors. The study focuses primarily on five behavioural biases: overconfidence bias, loss aversion bias, hindsight bias (i.e., overreaction to new information), representativeness bias, and self-serving bias, and their role in explaining the financial literacy of individuals in times of a crisis.
Research has shown that overconfident individuals (e.g., investors) have a strong faith in their intuitive reasoning, abilities, and judgments (Pompian 2006). Barber and Odean (2000) find that individual investors often exhibit overconfidence bias, which leads to excessive trading and poor performance. Camerer et al. (1989) show that individuals who exhibit hindsight bias tend to be overconfident and overreact to new information. Chen et al. (2007) find that investors extrapolate the recent past returns of stocks they purchase. This evidence indicates that investors tend to exhibit representativeness bias by extrapolating past stock returns to recent investments while also being loss averse and expecting to earn higher returns without considering the associated risks (Chen et al. 2007).
Although research has investigated the impact of behavioural biases on the performance of investors and individuals, little research exists on how financial literacy relates to behavioural biases or demographics. Baker et al. (2019), for instance, focus on an everyday context rather than a crisis. There has also been little investigation relating to an emerging market, such as the UAE. The UAE is the site under study as it is an emerging market transitioning gradually toward developed-country status. The UAE has a population of 10 million and a GDP of USD 371 billion (GDP per capita of USD 37,500) (Statista 2021), and since 1997, the UAE dirham has been pegged to the US dollar at a rate of AED 3.6725 per USD 1. The COVID-19 pandemic has had severe economic and financial consequences for the UAE economy and population. In addition, individuals within the UAE have access to retirement funds and interact with businesses domestically and internationally. This access and these connections also expose the UAE to crises such as the COVID-19 pandemic. More importantly, the UAE is among the first countries that (1) implemented a stimulus package and (2) reopened its economy with social distancing rules.
Our findings show that all significant variables (hindsight, self-serving, loss aversion, and representativeness bias) have a positive statistical relationship with financial literacy. This trend indicates that respondents exhibiting any of these behavioural biases have higher financial literacy than respondents who do not exhibit them. Overconfidence bias does not show a significant relationship with financial literacy. We note, and reiterate, that these biases are likely to be present in non-crisis periods; however, crisis periods provide a natural event that allows us to measure them. Given that losses and financial hardship are likely to be prevalent during crisis periods, biases such as loss aversion and hindsight, amongst others, are easier to detect. The ability to detect these biases provides stakeholders, policymakers, and risk managers with information about the resilience or fragility of the population, which will, in turn, inform methods of recovering from crises such as the COVID-19 pandemic.
The paper is structured as follows. The following section reviews the relevant literature involving the behavioural biases of individuals and financial literacy, followed by the research design and the questionnaire in Section 3. Section 4 presents the empirical findings, followed by a conclusion.
Financial Literacy
Financial behaviour and the financial attitude of individuals help determine their financial decisions in terms of financial management, personal financial budgeting, or how individuals decide on investment strategies. Prior research demonstrates that financial attitude positively affects financial literacy (Ameliawati and Setiyani 2018; OECD 2013), financial planning (Agarwal et al. 2015; Atkinson and Messy 2012; Lusardi and Mitchell 2011), and the tendency to save (Agarwal et al. 2015; Atkinson and Messy 2012).
Understanding how people make financial decisions can advance understanding of how people prepare and recover from adverse events such as the COVID-19 pandemic. This section presents the financial planning literature covering retirement planning, superannuation, and insurance and highlights the importance of financial and economic knowledge contributing to sound financial decisions. In addition, we consider the literature on the demographic factors (gender, age, and income level) that influence these decisions.
Financial literacy and its importance in achieving financial wellbeing have been widely investigated, particularly for retirement planning. For example, Bernheim and Garrett (2003) state that a positive change in financial literacy positively affects retirement planning. Lusardi and Mitchell (2007) demonstrate that inefficient retirement planning can be a result of insufficient knowledge of economics and finance, which also confirms the findings of van Rooij et al. (2007) and Agnew et al. (2012). The most salient factors affecting retirement planning are personal finance (e.g., tax, savings pattern, estate planning) and retirement (e.g., longevity and health risk, insurance coverage, risk of bankruptcy, and clients who stay home instead of moving to care centres). Moreover, risk (i.e., inflation, debt management, market risk, and liquidity risk) and investments (i.e., diversification, asset allocation, clear financial goals, and number of meetings with clients) also affect retirement planning decisions (Chowk et al. 2016; Delpachitra and Beal 2002; Frank 1935; Ng et al. 2011; van Rooij et al. 2011). These studies document the various factors that are important when conducting any analysis in financial planning.
Many countries around the world use pension funds as a valuable type of retirement plan whereby employers make contributions on the employees' behalf. These funds are distributed to employees when they retire. The US, for instance, has 401(k) pension plans as well as traditional pension plans, generally consisting of defined benefit plans or defined contribution plans. A defined benefit plan promises a specified payment at retirement; in contrast, a defined contribution plan bases payments on the employee's balance at retirement, which is a function of the employer's and employee's monetary contributions. The UAE has similar characteristics in that fund managers manage employees' funds. The employee's retirement benefit is called a gratuity, which the employer holds as a liability until retirement. Australia also has a very similar structure to the US (and the UAE). More importantly, there is a large body of academic research into Australian retirement funds, namely superannuation funds, covering several aspects of financial literacy. The fundamental elements associated with pension plans are categories of funds (Morling and Subbaraman 1995; Rothman 2003, 2011), superannuation fees and disclosures (Finch 2005; Rice and McEwin 2002), superannuation contributions (Clare 2013; Fernandez 2007; MLC 2012; Rothman 2011), and relevant information sources (ANZ Survey of Adult Financial Literacy in Australia 2008). Gallery (2002) claims that insufficient information and the difficulty of understanding the architecture of financial planning systems are why the public is unable to select a fund. Other studies, such as Fry et al. (2007), suggest that people with inadequate financial knowledge are highly likely to change pension funds, ignore the returns offered by superannuation funds, and be uninformed of the market risk involved in changing their funds. Babiarz and Robb (2014) find that more financially knowledgeable and confident households are more likely to maintain emergency funds, supporting the growing literature on the relationship between financial knowledge and economic behaviour.
Furthermore, financial literacy is related to education level, profession, homeownership, health, and age, as shown by Xue et al. (2019). While Xue et al. (2019) document a level of financial illiteracy in elderly citizens, people aged 18 to 24 are unaware of how many funds they hold in their pension plan (ANZ Survey of Adult Financial Literacy in Australia 2008). Following these examples, the extensive body of research into Australian superannuation funds and the similarities between Australian and UAE funds indicate expected associations between individuals' financial literacy and their behaviour concerning UAE retirement funds.
According to Lin et al. (2019), it is essential to consider the frequency of meetings with advisors, social networks, publications, workshops, and any other additional information. Gender is a salient demographic factor as per the studies of Jefferson (2005) and Noone et al. (2010), who suggest that household responsibilities, low levels of education, and earnings cause inadequate retirement planning in women. Bernasek and Shwiff (2001) argue that women are more risk-averse when making investment decisions. Additionally, Grable and Joo (2001) argue that women are more likely to ask for financial advice than men. The above-cited literature highlights demographic factors such as gender, age, education, marital status, residency, and nationality as factors influencing financial literacy. In addition, economic factors can also be influential, such as income, employment status, presence of children in the household making financial decisions, homeownership, and sources of financial advice.
Behavioural Biases
Another stream of literature worth considering is the behavioural finance literature, which argues that individuals are not always rational. Irrationally acting individuals are more likely to follow what others do and to avoid losses more than they seek gains. They also believe in their abilities during good times while blaming external factors for their failures; such errors are explained by heuristics and behavioural biases (Chowk et al. 2016; Tversky and Kahneman 1974; Kahneman and Tversky 1979). Chowk et al. (2016) argue that the prevalent financial biases include loss aversion, self-serving bias, representativeness, and overconfidence.
Behavioural biases are not necessarily negative. In some circumstances, they may lead to better outcomes. For example, loss aversion bias (i.e., the perception of a loss as psychologically or emotionally more severe than an equivalent gain) deters risk-taking, leading to savings (Kahneman and Tversky 1979). Studies of loss aversion bias (Kahneman and Tversky 1979; Rabin 1998; Shalev 1997) show that the pain of losing outweighs the satisfaction of an equal gain. This imbalance leads to a tendency to evade losses rather than seek gains. Similarly, Guthrie (2003) shows that people tend to seek risk in order to avoid a loss but are less likely to adopt a risky strategy to achieve or increase gains.
Another well-documented bias is self-serving bias, which attributes positive outcomes to personal factors but adverse outcomes to external, situational factors (Kahneman and Tversky 1972). Miller and Ross (1975) and Zuckerman (1979) classify the factors to which people attribute their success or failure as either internal or external. Self-serving bias occurs when an individual blames failure on factors outside their control but attributes success to their own skills and abilities. Daniel et al. (1998) and Malmendier and Tate (2008) show that this bias results in (1) under- and over-reaction in the securities market, (2) poor acquisition decisions, (3) an over-reliance on debt financing, (4) an unnecessary increase in stock re-purchases, and (5) small and infrequent dividend payouts. Mezulis et al. (2004) argue that self-serving bias is the most common bias among individuals.
A common interpretation of representativeness bias, as defined by Grether (1980), Barberis et al. (1998), Kahneman and Tversky (1979), and Tversky and Kahneman (1974), is that people displaying this bias tend to see and act on patterns that have no fundamental basis while ignoring the probabilities associated with the scenario. Moreover, as Mitchell and Utkus (2004) indicate, an essential element of this bias is the tendency to change an arrangement when new information is available while disregarding previous knowledge related to the choice, especially when the decision is hard. Kahneman and Tversky's (1979) study on availability and representativeness highlighted scenarios whereby people convince themselves, after an event, that they accurately predicted the scenario before it happened. This behavioural trait is known as hindsight bias. Individuals suffering from this bias tend to use the phrase "knew it all along". Fischhoff (1975), Fischhoff and Beyth (1975), and Wood (1978) describe this bias as the tendency for individuals to change their perception once they already know an outcome. In recent studies, Chelley-Steeley et al. (2015) show that individuals exhibit greater hindsight bias when they earn more in a financial market-oriented environment. The authors also suggest that better investment performance may result in increased hindsight bias, which, in turn, may produce a degree of overconfidence. Hindsight bias is also prevalent in Australian independent director roles, where the bias influences the evaluation of these directors' performance (Bryce et al. 2021). As such, hindsight bias is prevalent in many settings, from ordinary households to company boardrooms.
As suggested previously, Baker et al. (2019) indicate that slight evidence exists on how financial literacy relates to behavioural biases. This slight evidence entails recent studies into the disposition effect (Jonsson et al. 2017), overconfidence (Kiymaz et al. 2016), and tailored interventions aimed at helping individuals circumvent behavioural constraints (Carpena et al. 2019). Given this dearth of literature and the notion that financial literacy helps individuals recover, or mitigate, from potentially traumatic experiences, this study aims to add to the literature on financial literacy and behavioural biases within the context of the COVID-19 pandemic. Based on the existing literature and the highlighted literature gap, we investigate these behavioural biases concerning individuals and household financial recovery from the COVID-19 pandemic in this paper. This paper examines five behavioural biases: representativeness, self-serving, overconfidence, loss aversion, and hindsight.
Questionnaire
We use a methodology similar to that of Asbi et al. (2020), Chowk et al. (2016), Greer et al. (2000), Linsky (1975), Ramiah et al. (2014, 2016), Sudman et al. (1996), and Gerth et al. (2021) to develop our questionnaire. Following ethics approval from the American University of Dubai, we interviewed 11 UAE households financially affected by the pandemic to align the prior literature with the effects of the COVID-19 pandemic. This exercise was fruitful in that we were able to: (1) identify charitable programs that emerged during the pandemic (the COVID-19 appeal, the Community Solidarity Fund against COVID-19, and the Together We Are Good program); (2) confirm the various relief packages offered by UAE banks; (3) identify that the lockdown period turned into a forced savings mechanism; (4) understand that seasonality adds to the resilience of particular businesses; (5) recognise that online platforms acted as a revenue stream; and (6) identify travel and leisure as a loss not documented in the literature.
The interviews with these 11 UAE households were crucial to the extent that they identified missing elements in our original, pilot questionnaire. Incorporating refinements gleaned from the interviews, a comprehensive questionnaire was developed and used for data collection. 2 Because of the social distancing rules imposed during the pandemic, traditional face-to-face data collection was not appropriate given the risk of infection. Consequently, we advertised the survey via online social media platforms such as LinkedIn, Facebook, WhatsApp, university portals, and emails. The data was collected from interested individuals through a survey platform facilitated by SurveyMonkey. We had over 1500 views on LinkedIn immediately after posting the questionnaire. The interested parties included university professors, research houses, banks, corporate finance specialists, business strategists, salespersons, portfolio managers, and many others. The quick response (QR) code reader seems to have made it easier and more time-efficient for respondents to complete the survey using their smartphones.
Our methodological approach differs from the above-cited papers. We used a snowball sampling approach, sending emails to several UAE residents from our mailing lists. Some of the emails invited the recipients to participate in the survey, while another stream of emails requested that the recipients forward the link to other people they suspected had experienced losses from the pandemic. We used similar strategies across the entire online platform to increase our response rate and to remain within our budget. Furthermore, we also shared our questionnaire with a group of final-year master's students at the University of Wollongong in Dubai with the sole purpose of collecting more data and allowing students to use the collected data for educational analyses. A caveat of the snowball approach is that it does not allow us to compute a response rate, given that we did not have a targeted audience.
In total, our questionnaire had 11 questions (nine closed-ended and two open-ended), and each question had sub-questions (52 questions in total). Respondents needed between 20 and 30 minutes to complete the survey, consistent with earlier studies by Bezhani (2010) and Truell et al. (2002). Based on our survey design, the time commitment of our respondents reflects the quality of the data collected. Question 1 documents factors such as gender, age, income level before COVID-19, marital status, residency status, level of education, industry of employment, employment status, duration of residency in the UAE, ethnicity, homeownership, number of children living with the respondent, household size, who manages household finances, the source of financial advice, and geographical location.
Question 2 aims at collecting information about the various losses encountered during COVID-19 and their corresponding recovery times. The loss variables are adopted and adapted from Asbi et al. (2020). We added travel and leisure as an extra variable based on the in-depth interviews that we conducted. For the loss variables, respondents answered in both financial terms (UAE dirhams) and on a five-point Likert scale ranging from 1 for "no loss" to 5 for "total loss". For the recovery variables, we collected data in months. Question 3 deals with health information, while Question 4 covers retirement planning. Questions 5 and 6 capture pension plans and insurance. Although Question 7's heading states it is the investment section, we use this section to conceal the behavioural bias questions, similar to Asbi et al. (2020), Chowk et al. (2016), and Ramiah et al. (2014, 2016). Question 8 is about financial planning and COVID-19, while Question 9 collects data on various tools to recover from the current pandemic, including our variables of interest, bank relief schemes. The last two questions are open-ended questions around COVID-19 impacts and finance in general. We adapt questions from Asbi et al. (2020), Chowk et al. (2016), and Ramiah et al. (2014, 2016) to construct questions measuring financial biases, specifically representativeness, self-serving, overconfidence, loss aversion, and hindsight biases, and in doing so we follow the definitions set by Kahneman and Tversky (1979) and Tversky and Kahneman (1974).
Construction of Financial Literacy Scores
The empirical analysis in this paper consists of two parts: (i) the construction of financial literacy scores and (ii) regression modelling that incorporates the information found in the construction of those scores. Regarding the former, the computation of the financial literacy scores follows the methodology developed by OECD/INFE (2015). 3 Since our questionnaire differs from the one conducted by the OECD in terms of scope and context, we adapted the score calculations while keeping their primary purpose unchanged. The ultimate aim of the measure is to gauge the respondent's financial knowledge, financial behaviour, and financial attitude. To do so, the score includes five categories: (i) financial advice, (ii) retirement planning, (iii) pension planning, (iv) insurance purchase behaviour, and (v) financial instruments. Through a battery of five-point Likert-scale questions, each category elicits the respondent's financial characteristics. The maximum achievable score is 105 points; a higher score represents a higher degree of financial literacy. 4 To facilitate understanding and comparability, we follow Morgan and Trinh (2019) and convert the different indicators into a common z-score, calculated as score_z = (score − score_mean)/score_sd, where score_z is the normalised score, score is the non-normalised financial literacy score for each individual, score_mean is the average score across all 446 respondents, and score_sd is the standard deviation of the financial literacy score variable.
Figure 1 shows a histogram of the variable score_z. The distribution skews to the positive with the central mass of observations (59%) to the left of the mean: 59% of the respondents have a degree of financial literacy below average and 41% above average. Furthermore, 23% are one standard deviation, and approximately 3% are two standard deviations, to the right of the mean, indicating that roughly a quarter of all respondents are highly skilled in financial matters. Figure 2 sub-divides the analysis by gender. It shows that, in our sample, more women than men have financial literacy scores at the bottom of the spectrum, while relatively more men have scores above the average. Nevertheless, women dominate the proportion of respondents with financial expertise 1.5 standard deviations above the mean. Figure 3 decomposes the financial literacy score into age groups (15-30, 31-45, 46-60, and 61+). The respondents in the lowest age group exhibit the highest proportion of scores at the bottom of the spectrum. As the age bracket increases, financial literacy increases: the proportion of respondents with a financial literacy score of −1 drops by about two-thirds for the 46-60 group, and no respondent in the last age group exhibits a score as low as −1. This descriptive analysis leads us to believe that younger respondents exhibit a higher degree of financial ignorance than older respondents. Figure 4 presents financial literacy by income group. The relationship is not linear: financial literacy seems to increase until the AED 10,001-29,000 income bracket and decreases thereafter. As expected, high-income groups exhibit a proportionately lower level of financial ignorance, or a higher level of financial literacy.
Figure 5 shows the respondent age decomposition. The dataset consists of 446 respondents, of whom 40% are female and 60% male. Regarding age, 54% are between 31 and 45 years; the proportion under 30 years is 25%, between 46 and 60 years is 18%, and above 60 years is 3%. Regarding the income levels of our respondents, displayed in Figure 6, 42.5% have monthly earnings between AED 10,001 and AED 29,000. Around 22% earn between AED 3001 and AED 10,000, 16.5% earn between AED 29,001 and AED 40,000, 9.5% earn above AED 40,001, 4.5% earn less than AED 3000, and around 5% prefer not to answer. Table 1 describes the characteristics of the five behavioural biases studied. Loss aversion and representativeness biases are the most common, with 180 respondents exhibiting the former and 177 the latter; overconfidence bias, with 24 observations, is the least common. Furthermore, across the whole range, men are more prone to exhibit behavioural impediments than women. In terms of monthly earnings, the second and third income groups (AED 3001-10,000 and AED 10,001-29,000) exhibit the most cases of behavioural biases. Finally, the two youngest age groups (15-30 and 31-45) are more prone to behavioural biases. A code sketch of the normalisation step follows.
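A small pandas sketch of the normalisation and decomposition just described, assuming a DataFrame with the raw score and demographic columns; the column names and toy values are illustrative, not the survey's actual field names.

```python
import pandas as pd

# Illustrative column names and values; the survey's field names are not given.
df = pd.DataFrame({
    "score":  [62, 48, 77, 55],          # raw financial literacy scores (0-105)
    "gender": ["F", "M", "M", "F"],
    "age":    ["15-30", "31-45", "46-60", "31-45"],
})

# Normalisation used in the paper: z = (score - mean) / standard deviation.
df["score_z"] = (df["score"] - df["score"].mean()) / df["score"].std()

# Decompositions analogous to Figures 2 and 3.
print(df.groupby("gender")["score_z"].describe())
print(df.groupby("age")["score_z"].mean())
```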
Empirical Evidence
In order to analyse the empirical relationship between (mis-)behaviour and financial literacy rates, we estimate the following model: FL_i = β_0 + Σ_r β_r Bias_{r,i} + Σ_n γ_n Control_{n,i} + ε_i (Equation (1)), where the dependent variable FL is the financial literacy z_score calculated above for individual i (i = 1, ..., 446). As mentioned in Section 2.2, the study focuses on five different behavioural biases: hindsight, self-serving, overconfidence, loss aversion, and representativeness bias. These categories appear in the variable Bias, where r indexes the individual bias classification (r = 1, ..., 6). 5 The second set of exogenous variables, Control, controls for socio-economic and demographic factors. The n different factors are gender (male, female, and others), age (i. 15-30, ii. 31-45, iii. 46-60, and iv. 60+), and income (i. AED 0-3000, ii. AED 3001-10,000, iii. AED 10,001-29,000, iv. AED 29,001-40,000, v. AED 40,001+ per month, and vi. prefer not to say). 6,7 The statistics for the dependent variable, its underlying concept, and the control variables age and income can be found in Table 2. 8 We use ordinary least squares (OLS) to model Equation (1) for comparison purposes and ease of reproducing our results. We apply adequate care during the estimation process to comply with the classical linear regression assumptions; consequently, the statistical behaviour of the residual terms and the parameter values were analysed and adapted where necessary. Furthermore, to prevent misleading diagnostic tests, we modelled the variance as an exponential function of the covariates specified in the model. This last step is necessary because OLS assumes that the residual variance is constant, which does not hold in our data. Concerning the stability of our results, we use the method-of-moments estimation technique as a robustness test; assuming predeterminedness and imposing the moment condition of non-stochastic covariates, we obtain the same qualitative and quantitative results. A sketch of the estimation follows.
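The following statsmodels sketch shows how Equation (1) with heteroscedasticity-consistent standard errors could be estimated; the file name, variable names, and the choice of the HC1 estimator are our assumptions (the paper does not specify them), but the structure mirrors the six bias dummies and three control factors described above.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file of coded responses; one dummy per bias (1 = exhibits it).
df = pd.read_csv("survey.csv")

model = smf.ols(
    "score_z ~ hindsight + self_serving + overconf_a + overconf_b"
    " + loss_aversion + representativeness + C(gender) + C(age) + C(income)",
    data=df,
)
# Heteroscedasticity-consistent covariance, since the residual analysis in
# the paper indicates non-constant variance.
result = model.fit(cov_type="HC1")
print(result.summary())
```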
The empirical results for Equation (1) appear in Table 3. 9 For the full regression model, please see Table A1 in Appendix A. The overall statistical model is significant at the 1% significance level. The coefficient of determination is 0.4180, indicating that the regression covariates explain 41.8% of the variability in the financial literacy score. In the realm of cross-sectional empirics, this signifies a high explanatory power for our empirical specification. Furthermore, except for the two variables capturing the overconfidence bias, all variables are statistically significant at least at the 5% significance level. 10 All significant variables (hindsight, self-serving, loss aversion, and representativeness bias) have a positive statistical relationship with the dependent variable, the financial literacy z_score. The positive coefficients mean that respondents who exhibit any of these behavioural biases have a higher financial literacy score than respondents who do not exhibit them. For example, a respondent who exhibits hindsight bias has a financial literacy z_score 0.7134 standard deviations higher than a respondent who does not. Overconfidence bias does not show a significant relationship with financial literacy. Note to Table 3: values in brackets below the coefficients represent their respective t-statistics; ***, **, and * indicate significance at the 1%, 5%, and 10% level, respectively.
Furthermore, the regression results, based on the coefficients of biases, suggest that some biases exhibit a stronger quantitative relationship with financial literacy than others.
For example, the hindsight bias, with a coefficient of 0.7134, exhibits the most substantial relationship with financial literacy. The second strongest is the self-serving bias with a coefficient of 0.5527, followed by the representativeness bias with a parameter value of 0.5299. The weakest statistically significant covariate is the loss aversion bias, with a coefficient of 0.4569. These numbers show that each behavioural bias shares an individual, statistically robust relationship with the respondents' financial literacy.
Conclusions
Research in financial literacy is an increasing focus of scholarly debate. Prior research shows that many factors influence financial literacy, with knowledge about personal finance at its core. Our research investigates whether cognitive and behavioural biases influence the financial literacy of individuals in times of crisis. Despite the extensive literature on behavioural finance, limited academic research (e.g., Baker et al. 2019) has attempted to unravel the relationship between financial literacy and behavioural biases, particularly the behaviour of individuals during a crisis period. Therefore, this study examines the presence of behavioural biases using a sample of individuals who experienced a loss in the UAE during the COVID-19 pandemic.
Our main results show that behaviourally biased people tend to have higher levels of financial literacy. Individuals with a high level of hindsight bias believe that certain adverse events will happen one day, as they constantly use the phrase "I knew it would happen". We suspect that people prone to this bias tend to research certain events before their occurrence in order to prepare themselves against any potential negative outcome, and this research into potential solutions is what improves their financial literacy. This finding is consistent with the study carried out by Asbi et al. (2020). People with loss aversion bias will similarly mitigate the risk of a potentially harmful outcome, as they feel more financial pain from events with potential adverse outcomes (such as a crisis). We argue that, to prevent the pain of any potential loss, they explore mitigating financial solutions that contribute to their financial literacy, a finding in line with the earlier study by Ramiah et al. (2016). As for representativeness bias, people tend to look at a pattern and, on detecting the possibility of a negative outcome, act on it before others. Their actions lead to finding the correct financial solution, which implies a higher financial literacy score, confirming the findings of Gerth et al. (2021).
Despite the challenges academics face in publishing in this field, we contribute to this important debate by looking at behavioural attributes. We remain convinced that industry partners (as evidenced by the interest shown on LinkedIn) have a strong interest in this area. Our unique contribution is establishing a link between behavioural biases, such as representativeness, self-serving, overconfidence, loss aversion, and hindsight, and financial literacy. We can also rank these biases by importance: hindsight bias, self-serving bias, representativeness bias, and loss aversion bias. The financial planning industry increasingly recognises the importance of behavioural biases, as evidenced by their inclusion in planning software such as Xplan, which includes a behavioural section. Practitioners believe it is a crucial element as it helps them understand their clients. Our study documents this market practice.
We caution readers against generalising the findings of this study, as it is based on the UAE, a developing economy that is highly influenced by the economic activities of the region and the globe. Additionally, the behaviours of individuals and households in the UAE may not represent the behaviours of people in other countries. Indeed, as we polled individuals and measured their biases during the crisis, and not before, we cannot make inferences about the evolution of biases in people or about differences between crisis and non-crisis periods. However, future researchers could investigate the behavioural biases of individuals in developed and developing countries during the COVID-19 pandemic period. Additionally, as a comparison, polling the same survey participants in the future would identify any differences in behaviour. In terms of the technical limitations of our estimation method, our regression modelling approach only tests for statistical correlation, not causation; consequently, cause-and-effect relationships cannot be assumed. We leave such considerations for further research.
Author Contributions: Conceptualization, methodology, formal analysis, investigation, resources, data curation, writing-original draft preparation, and writing-review and editing were undertaken collaboratively, and all authors contributed equally. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement:
The ethics approval for this research was obtained from the American University of Dubai.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author.
1 Financial shock refers to any expense or loss of income that individuals or households do not plan for when budgeting (The Pew Charitable Trust 2015).
2 For brevity, the questionnaire questions have not been included; however, they are available upon request from the authors.
3 Another paper that follows this method is the work by Morgan and Trinh (2019). The authors apply these calculations in order to find the determinants of financial literacy in Cambodia and Vietnam.
4 To construct the financial literacy score, the total sum of the questions for each category is added up. Consequently, the score might range from 0 (total financial ignorance) to 105 (total financial literacy).
5 Note that we have five different behavioural biases but six different bias variables. The reason is that overconfidence can be captured through two different behavioural patterns; therefore, we assign two different variables to it.
6 The regression model under Equation (1) might potentially suffer from reverse causality; that is, there might be feedback effects running from the independent to the dependent variable and back. To avoid such a problem, instrumental variable estimation might be used. The data obtained, unfortunately, do not allow us to construct instruments to control for such an issue. We leave this shortcoming to further research.
7 The reader should be cautious about the interpretation of the results. Regression modelling in its present form solely establishes a statistically robust relationship between variables; it does not refer to causality. We leave the issue of causality between the variables for further research.
8 To avoid duplicating the information given, we refrain from discussing the results represented by Table 2 and encourage the reader to study Section 4.2: Empirical Evidence. Furthermore, the control variable gender is not presented, because it is a dummy variable and the numerical analysis shown in Table 2 would be meaningless for it.
9 For ease of interpretation and display, the control variables have been omitted from Table 3. They were, nonetheless, included in the model, Equation (1). Furthermore, the results were estimated using a heteroscedasticity-consistent variance-covariance matrix.
10 As a robustness test, the same model was estimated using the method-of-moments estimation technique. In order to do so, we assumed predeterminedness and the moment condition of non-stochastic covariates. The quantitative and qualitative results remain the same. | 9,679.6 | 2021-08-25T00:00:00.000 | [ "Economics", "Business" ] |
On the existence and convergence of best proximity points in Menger probabilistic metric spaces
In this article we will study the existence of best proximity point of some special mappings like cyclic mappings in a Menger probabilistic metric space.
Introduction
If A and B are nonempty closed subsets of a complete metric space (X, d) such that A ≠ B, a non-self mapping T : A → B does not necessarily have a fixed point. Finding an optimal approximate solution to the equation Tx = x for a mapping T that has no fixed point is an important branch of nonlinear analysis. Ky Fan [9] initiated this subject in 1969 by introducing best approximation points, and after him many authors, such as Prolla [17], Reich [18], and Sehgal and Singh [21,22], extended it. Basha and Veeramani [4] in 1997 introduced another useful complement, called a best proximity point. This point is the most optimal solution to the problem of minimizing the real-valued function x → d(x, hx) over the domain A of a non-self mapping h : A → B. If the mapping is a self-mapping, the best proximity point reduces to a fixed point. On the other hand, the concept of a probabilistic metric space is a generalization of a metric space, introduced by Karl Menger [15] in 1942. Fixed point theory in probabilistic metric spaces is an important branch of probabilistic analysis, and many results on the existence of fixed points, or of solutions of nonlinear equations, under various types of conditions in Menger probabilistic metric spaces have been studied extensively. In this article we study the existence of best proximity points of some special mappings, such as cyclic mappings, in a Menger probabilistic metric space.
A special distribution function which is important for us is denoted by H and given by H(t) = 0 for t ≤ 0 and H(t) = 1 for t > 0.
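The preliminary definitions preceding this point did not survive extraction; for reference, the standard axioms that a Menger probabilistic metric space (X, F, T) with a continuous t-norm T satisfies are sketched below in LaTeX. The paper's own Definitions 2.1-2.3 are presumably of this form, though the exact statements may differ.

```latex
% Standard Menger probabilistic metric space axioms (sketch, assumed form):
% T is a continuous t-norm and F_{x,y} the distribution function of (x,y).
\begin{align*}
 &F_{x,y}(t) = H(t) \ \text{for all } t \iff x = y,\\
 &F_{x,y}(t) = F_{y,x}(t) \quad \text{for all } x,y \in X,\ t \ge 0,\\
 &F_{x,z}(t+s) \ \ge\ T\bigl(F_{x,y}(t),\, F_{y,z}(s)\bigr)
   \quad \text{for all } x,y,z \in X,\ t,s \ge 0 .
\end{align*}
```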
Remember that we can read F_{x,y}(t) = α as P(d(x, y) ≤ t) = α, that is, as the probability that the distance between x and y does not exceed t. Definition 2.4. Let (X, F, T) be a Menger probabilistic metric space.
(1) A sequence {x n } in X is said to converge to x ∈ X if for any given ε > 0 and λ > 0, there exists a positive integer N = N(ε, λ) such that F x n ,x (ε) > 1 − λ, whenever n ≥ N.
(2) A sequence {x n } in X is called a Cauchy sequence if for any ε > 0 and λ > 0, there exists a positive integer N = N(ε, λ) such that F x n ,x m (ε) > 1 − λ, whenever m, n ≥ N. (3) (X, F, T ) is said to be complete if each Cauchy sequence in X converges to some point in X.
If (X, F, T ) is a Menger probabilistic metric space and A and B are nonempty subsets of X, we define the probabilistic distance between A and B as follows. Definition 2.5. Let (X, F, T ) be a Menger probabilistic metric space and A and B be two nonempty subsets of X. The mapping h : Definition 2.6. Let (X, F, T ) be a Menger probabilistic metric space, A and B be two nonempty subsets of X and h : A → B be a mapping. We say that a* is a best proximity point of the mapping h if for all t > 0, Definition 2.7. Let (X, F, T ) be a Menger probabilistic metric space, A and B be two nonempty subsets of X and h : A ∪ B → A ∪ B be a cyclic mapping. We say that a* is a best proximity point of the cyclic mapping h if for all t > 0, Definition 2.8. Let (X, F, T ) be a Menger probabilistic metric space, A and B be two nonempty subsets of X and H, S : A → B be two mappings. We say that a* is a common best proximity point of H and S if for all t > 0, Lemma 2.1 [16]. Let (X, F, T ) be a Menger probabilistic metric space and define E λ,F for each λ ∈ (0, 1) and x, y ∈ X. Then we have: 1. For each µ ∈ (0, 1), there exists λ ∈ (0, 1) such that, for any x 1 , x 2 , ..., x n ∈ X.
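Several displayed formulas referenced in the definitions above were lost in extraction. The following block is a standard reconstruction from the best-proximity-point literature; it is an assumption about the lost displays, not a verbatim copy of the original paper:

```latex
% Probabilistic distance between the sets A and B:
F_{A,B}(t) = \sup\{\, F_{a,b}(t) : a \in A,\ b \in B \,\}, \qquad t > 0.
% Best proximity point of h (Definitions 2.6 and 2.7):
F_{a^{*},\, h a^{*}}(t) = F_{A,B}(t) \quad \text{for all } t > 0.
% Common best proximity point of H and S (Definition 2.8):
F_{a^{*},\, H a^{*}}(t) = F_{A,B}(t) = F_{a^{*},\, S a^{*}}(t) \quad \text{for all } t > 0.
% The functional of Lemma 2.1 and its chain inequality (item 1):
E_{\lambda,F}(x,y) = \inf\{\, t > 0 : F_{x,y}(t) > 1-\lambda \,\}, \qquad
E_{\mu,F}(x_1,x_n) \le \sum_{i=1}^{n-1} E_{\lambda,F}(x_i,x_{i+1}).
```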
Main section
To begin our main results, let us first recall the definition of A 0 and B 0 . For nonempty subsets A and B of a Menger probabilistic metric space (X, F, T ), A 0 and B 0 are the sets A 0 := {a ∈ A : F a,b (t) = F A,B (t) for some b ∈ B} and B 0 := {b ∈ B : F a,b (t) = F A,B (t) for some a ∈ A}. We define the class Θ of all functions θ : [0, +∞) → [0, +∞) such that θ is onto and strictly increasing with ∑_{n=1}^{+∞} θ^n (t) < ∞ for all t > 0, where θ^n (t) denotes the nth iterate of θ(t). Also, we define the class Γ of all functions γ : [0, +∞) → [0, +∞) such that γ is onto and nondecreasing with γ(t) ≥ t. Theorem 3.1. Let A and B be nonempty subsets of a complete Menger probabilistic metric space (X, F, T ) such that A ≠ B and A 0 and B 0 are nonempty and closed. Also, assume that {h n } is a sequence of cyclic mappings h n : A ∪ B → A ∪ B such that for each i, j ∈ N and t > 0, where m ∈ N, x, y ∈ X, γ ∈ Γ, and θ i,j : [0, +∞) → [0, +∞) is a function such that there exists θ ∈ Θ with θ i,j (t) ≤ θ(t).
If there exists x 0 ∈ A 0 with,
then there exist a sequence {x 2n } = {h 2m (x 2n−2 )} ⊆ A and a* ∈ A 0 such that lim n→+∞ x 2n = a* and a* is a best proximity point of {h n }.
Proof. By 3.2, there exists . Now, by using 3.1 we have, We can continue this process and so, by induction, for all λ ∈ (0, 1), Now, by Lemma 2.2 and 3.3, By 1 of Lemma 2.1, for all µ ∈ (0, 1), there exists λ ∈ (0, 1) such that, Since X is complete and A 0 is closed, there exists a* ∈ A 0 such that lim n→+∞ x 2n = a*, and since h n (A) ⊆ B and a* ∈ A 0 , consequently h n (a*) ∈ B 0 . Therefore, a* is a best proximity point of {h n }. Corollary 3.1. Let A and B be two nonempty subsets of a complete Menger probabilistic metric space (X, F, T ) such that A ≠ B and A 0 and B 0 are nonempty and closed. Also, assume that {h n } is a sequence of cyclic mappings h n : A ∪ B → A ∪ B such that for each i, j ∈ N and t > 0, where m ∈ N, x, y ∈ X, γ ∈ Γ, and θ ∈ Θ. If there exists x 0 ∈ A 0 with, then there exist a sequence {x 2n } = {h 2m (x 2n−2 )} ⊆ A and a* ∈ A 0 such that lim n→+∞ x 2n = a* and a* is a best proximity point of {h n }.
By defining x 2n+1 = h 2m (x 2n−1 ), with an argument similar to the proof of Theorem 3.1, we can conclude the next corollary.
Corollary 3.2. Let A and B be two nonempty subsets of a complete Menger probabilistic metric space (X, F, T )
such that A ≠ B and A 0 and B 0 are nonempty and closed. Also, assume that {h n } is a sequence of cyclic mappings h n : A ∪ B → A ∪ B such that for each i, j ∈ N and t > 0, where m ∈ N, x, y ∈ X, γ ∈ Γ, and θ i,j : [0, +∞) → [0, +∞) is a function such that there exists θ ∈ Θ with θ i,j (t) ≤ θ(t). If there exists x 1 ∈ B 0 with, Corollary 3.3. Let A and B be two nonempty subsets of a complete Menger probabilistic metric space (X, F, T ) such that A ≠ B and A 0 and B 0 are nonempty and closed. Also, assume that {h n } is a sequence of cyclic mappings h n : A ∪ B → A ∪ B such that for each i, j ∈ N and t > 0, where m ∈ N, x, y ∈ X, γ ∈ Γ, and θ ∈ Θ. If there exists x 1 ∈ B 0 with, 3. there exists x 0 ∈ A 0 with, then there exist a* ∈ A 0 and a sequence {x n } with x n = H(x n−1 ) such that H(x 2n ) = S(x 2n−2 ), lim n→+∞ x 2n = lim n→+∞ x 2n−2 = a*, and a* is a common best proximity point of H and S.
Proof. By 3, there exists (by 1, we can define a sequence with this property). Now, consider the subsequence {x 2n } ⊆ {x n }.
It is obvious that {x 2n } ⊆ A 0 too. Now, by 2, and We can continue this process and so, by induction, for all λ ∈ (0, 1), With the same argument as we used in Theorem 3.1, we conclude that {x 2n } is a Cauchy sequence. Since X is complete and A 0 is closed, there exists a* ∈ A 0 such that lim n→+∞ x 2n = a*. But since H and S are continuous, lim n→+∞ H(x 2n ) = H(a*) and lim n→+∞ S(x 2n−2 ) = S(a*). Since H is a cyclic mapping, we have H(a*) ∈ B 0 , and a* is a best proximity point of H. Also, since by 1, S(A 0 ) ⊆ H(A 0 ) and H is a cyclic mapping, S(a*) ∈ B 0 . Consequently, a* is a best proximity point of S too.
By defining {x n } such that x n = H(x n−1 ) and H(x 2n+1 ) = S(x 2n−1 ), the next corollary also holds.
Let Ω be the class of all real continuous functions ω : (R + ) 4 → R such that ω is nondecreasing in the first argument and, for u, v ≥ 0, if ω(u, v, u, v) ≥ 0 or ω(u, v, v, u) ≥ 0 then u ≥ v. The next theorem and the next corollary then hold.
Theorem 3.3. Let (X, F, T ) be a complete Menger probabilistic metric space and A and B be nonempty subsets of X such that A ≠ B and A 0 and B 0 are nonempty and closed. Also, let G, M : A → B and H, N : B → A be continuous mappings and the following conditions be satisfied for some ω ∈ Ω and all t > 0. Then for every {u n } and {v n } in X with n = 0, 1, 2, 3, ..., such that {u 2n } ⊆ A and , there exist a* and b* such that lim n→+∞ v 2n = a* and lim n→+∞ v 2n+1 = b*, and also, for all t > 0. Moreover, if, 6. NM(x) = HG(x) for all x ∈ A 0 and MN(x) = GH(x) for all x ∈ B 0 ; Proof. Since by 4, there exists u 0 ∈ A 0 such that E F (u 0 , HGu 0 ) = sup{E λ,F (u 0 , HGu 0 ) : λ ∈ (0, 1)} < ∞, we define v 2 = HGu 0 , v 4 = HGu 2 , ..., v 2n = HGu 2n−2 . By 1, it is obvious that v 2n ∈ A 0 . Also, since by 5, there exists On the other hand, since Gu 2n = v 2n+1 and Mu 2n = v 2n+3 , with Since ω ∈ Ω, we can conclude that, With the same argument as we used above, By continuing this process and by using induction,
Now, we claim that if Now, by 1 of Lemma 2.1, for all µ ∈ (0, 1) there exists λ ∈ (0, 1) such that, as m, n → +∞. By 2 of Lemma 2.1, {v 2n+1 } is a Cauchy sequence, and since X is complete and B 0 is closed, {v 2n+1 } converges to some b* ∈ B 0 . Also, since Hu 2n−1 = v 2n and Nu 2n−1 = v 2n+2 , with y 1 = u 2n+1 , y 2 = u 2n−3 , y 3 = u 2n−1 and y 4 = u 2n−5 and by 3, Since ω ∈ Ω, with the same argument as we used above, we can conclude that {v 2n } is a convergent sequence too and there exists a* ∈ A 0 such that lim n→+∞ v 2n = a*. On the other hand, by the hypotheses we have v 2n = Hu 2n−1 and consequently Gv 2n = GHu 2n−1 . Since by 1, GH(B 0 ) ⊆ B 0 , Gv 2n ∈ B 0 , and since B 0 is closed and G is continuous, by passing to the limit as n → +∞, we conclude that F a*,Ga* (t) = F A,B (t). Again, since v 2n+1 = Gu 2n , consequently Hv 2n+1 = HGu 2n . Since by 1, HG(A 0 ) ⊆ A 0 , Hv 2n+1 ∈ A 0 , and since A 0 is closed and H is continuous, by passing to the limit as n → +∞, To prove the last part, consider that by 6, we can use the same sequence which was defined at the beginning of the proof, and it is concluded that NM(A 0 ) ⊆ A 0 and MN(B 0 ) ⊆ B 0 . Also, we can say that v 2n = Nu 2n−3 . Consequently, Mv 2n = MNu 2n−3 and since MN(B 0 ) ⊆ B 0 , Mv 2n ∈ B 0 . Also, since B 0 is closed and M is continuous, by passing to the limit as n → +∞, we obtain F a*,Ma* (t) = F A,B (t) too. Again, since v 2n+1 = Mu 2n−2 , consequently Nv 2n+1 = NMu 2n−2 and since NM(A 0 ) ⊆ A 0 , Nv 2n+1 ∈ A 0 . Since A 0 is closed and N is continuous, by passing to the limit as n → +∞, we obtain F Nb*,b* (t) = F A,B (t), and the proof is complete.
A 0 := {a ∈ A : F a,b (t) = F A,B (t) for some b ∈ B}, and B 0 := {b ∈ B : F a,b (t) = F A,B (t) for some a ∈ A}.
Corollary 3.4.
Let A and B be two nonempty subsets of a complete Menger probabilistic metric space (X, F, T ) such that A ≠ B and A 0 and B 0 are nonempty and closed. If H, S : A ∪ B → A ∪ B are two continuous cyclic mappings with the following conditions, 1. S(B 0 ) ⊆ H(B 0 );
Proof.
It is enough to put x 1 = u 2n , x 2 = u 2n−2 , y 1 = u 2n−1 and y 2 = u 2n−3 . With an argument like the one used in Theorem 3.3, we can prove this corollary too. Now, let Ω′ be the class of all real continuous functions ω′ : (R + ) 4 → R such that ω′ is nonincreasing in the first argument and, for u, v ≥ 0, if ω′(u, v, u, v) ≤ 0 or ω′(u, v, v, u) ≤ 0 then u ≤ v. Theorem 3.3 and Corollary 3.5 can be adapted to the next corollaries.
Consider ω : (R + ) 4 → R such that ω(x, y, z, t) = x − y + z − t; it is obvious that for all x 1 , x 2 ∈ A and t > 0, ω(F | 3,880.8 | 2016-01-01T00:00:00.000 | [
"Mathematics"
] |
Edge information based object classification for NAO robots
This paper presents research regarding the development of a computationally cheap and reliable edge-information-based object detection and classification system for use on the NAO humanoid robots. The work covers ground detection, edge detection, edge clustering, and cluster classification, the latter task being equivalent to object recognition. In this work, a new geometric model for ground detection, a joint edge model using two edge detectors in unison for improved edge detection, and a hybrid edge clustering model have been proposed which can be implemented on NAO robots. Also, a classification model is outlined along with example classifiers and the values used.
ABOUT THE AUTHORS
Karl Tarval received his BSc in Computer Engineering from the University of Tartu in 2016. He is currently a senior software developer. Anastasia Bolotnikova is a master's student at the Faculty of Computer Science of the University of Tartu, where she obtained her BSc as well. She has been a member of the iCV Research Group, Institute of Technology, at the University of Tartu since 2014. She joined the RoboCup team of the University of Tartu, team Philosopher, in fall 2014. Currently she is working in the field of image processing, developing a real-time self-localization algorithm for NAO robots. Her BSc thesis was awarded second-best thesis in the Faculty of Computer Science. Gholamreza Anbarjafari received his PhD from the Department of Electrical and Electronic Engineering at Eastern Mediterranean University (EMU) in 2010. He has been working in the field of image processing and is currently focusing on research topics related to multimodal emotion recognition, image illumination enhancement, super resolution, image compression, watermarking, visualization and 3D modeling, and computer vision for robotics. He is currently head of the iCV Research Group and is working as an associate professor at the Institute of Technology, University of Tartu. He is an IEEE senior member and the vice chair of the Signal Processing Chair of the IEEE Estonian section. He also holds an Estonian Research Council grant (PUT).
PUBLIC INTEREST STATEMENT
This paper presents a research regarding the development of a computationally cheap and reliable edge information based object detection and classification system for use on the NAO humanoid robots. This work will introduce how a NAO robot can start to recognize the environment for better localization with minimum dependency on color information. The work covers ground detection, edge detection, edge clustering, and cluster classification, the latter task being equivalent to object recognition. In this work a new geometric model for ground detection, a joint edge model using two edge detectors in unison for improved edge detection, and a hybrid edge clustering model have been proposed which can be implemented on NAO robots. Also, a classification model is outlined along with example classifiers and used values.
Introduction
The NAO humanoid robot is a programmable robot developed by Aldebaran Robotics. The robot is widely used both in academia and in the private sector for research and other educational purposes (Unveiling of NAO Evolution, 2014). The NAO is currently the standard robot used for the Robot Soccer World Cup, RoboCup for short, in which teams from across the world compete in robot soccer and other events annually (RoboCup Standard Platform League rules, 2014).
The NAO is 58 cm tall, weighs 4.3 kg, and has a total of 25 degrees of freedom in its joint control. All of the robot's software is run on a single Intel Atom 1.6 GHz processor, making multitasking and complex procedures a challenge (Aldebaran NAO Documentation, 2015). All processing power must be shared between the robot's custom Linux-based OS NAOqi and different modules which handle moving, multiple sensors, communication, etc. As the hardware platform is fixed and no modifications are allowed, all teams compete on the same basis (RoboCup Standard Platform League rules, 2015). The NAO's main source of information is vision, provided by two cameras, each with a maximum resolution of 1,280 × 720 px. Additionally, the robot has infrared sensors, tactile sensors, pressure sensors, and other systems, none of which will be covered further herein.
The RoboCup hosts numerous different competitions for robots; only the Standard Platform League (SPL) soccer competition scenario will be addressed from here on out. The competition features two teams playing on opposite sides of a green field analogous to a scaled-down version of a regular soccer field. During the competition, the robots must operate autonomously, both to cooperate as a team and to play as individual players (RoboCup Standard Platform League rules, 2014). Interpreting information provided by the cameras quickly and accurately is a critical prerequisite for succeeding in that task.
In earlier years, the RoboCup competition field consisted of components with unique color characteristics: yellow goal posts, orange soccer ball, green field area, etc. As the complexity of the participating teams' software has improved, the field setup has been modified to better match that of an actual soccer field: the goal posts are now white and the ball is a black and white truncated icosahedron (RoboCup Standard Platform League rules, 2014), both shown in Figure 1.
Since numerous objects of interest, namely goal posts, robots, ball, and field lines, are now all dominantly white, an approach based solely on color information is insufficient for a reliable model, as demonstrated in Bolotnikova (2015). The main contribution of this work rests on the following three design principles: • Computation speed: information must be provided rapidly to enable the robot to make adequate decisions during the game; • Conservative use of resources: the NAO's single Intel Atom processor is shared by all its systems (Aldebaran NAO Documentation, 2015); • Universality: the module must work reliably regardless of fluctuations in lighting and noise.
To successfully implement the module, some important topics, namely ground detection, edge detection, edge clustering, and cluster classification, will be discussed within this work. Each section will be analyzed from the perspective of the above main principles. In this work, for more robust classification, a new method is introduced which combines edge detection using random forests with the Canny edge detector.
Ground detection
In order to detect and classify different objects properly, identifying the area of the playing field currently in view is a crucial prerequisite. Determining the field's area divides all items of interest into two categories ("in the field" or "not in the field") and gives valuable information regarding the robot's location on the playing area. Using the histogram normalization technique (Sridharan & Stone, 2009; Anbarjafari, Jafari, Jahromi, Ozcinar, & Demirel, 2015) and the initial mean values proposed in Bolotnikova (2015) for similar purposes, the green playing field area can be easily detected by setting a threshold value. This process, however, can exclude many areas where the view is obstructed by other robots, as demonstrated in Figure 2, where both the ball and the nearby robot are considered "not in the field" by the naive thresholding approach.
To bypass these occlusions, a simple geometric approach is proposed as follows: given an input image directly from the camera Ψ raw , the image is sized down to make all further operations computationally cheaper. The image is resized by a heuristically determined factor of 1/9, i.e. both the width and the height of the input image will be a third of their original size. From here on out, Ψ shall refer to the resized input frame.
Basic color thresholding is applied to Ψ based on values from Bolotnikova (2015), yielding a binary array Ψ B . All areas of set bits under a heuristically determined surface area are unset, reducing the amount of noise present in Ψ B . In practice, a cheap erode and dilate is used with a rectangular morph with a heuristically determined side length of 10 px; a sample is shown in Figure 3. The lowest corner points for the ground area are found, where R ⊆ Ψ B is the detected ground region and the subscripts l and r refer to left and right, respectively. Each lowest corner corresponds to a highest point roughly above it, i.e. there exist A l and A r which satisfy the relation above, where snap is chosen heuristically. In practice,
Note that:
Provided all work is conducted in the OpenCV standard Cartesian coordinate system (Laganière, 2011) demonstrated in Figure 3, for each point in R a weight is calculated, where W is the width of Ψ B in pixels. Weighted points for each half portion of the region between the bottom corners are then defined (Figures 4 and 5); while they overlap with previously found points in many generic scenarios, as can be seen in Figure 6, Additionally, for each half portion of the same region, minima are selected. Finally, all points defined above are snapped to the closest edges of Ψ B within the small threshold snap defined prior. This approach yields an eight-vertex polygon which is then padded to ensure all objects of interest that should be classified as "in the field" are classified as such reliably. The resulting polygon closely approximates the ground area regardless of occlusions and viewport orientation, as can be seen in Figure 7. The proposed method is a computationally cheap way (O(N), where N is the size of Ψ) to closely estimate the position of the playing field in the current video frame, i.e. to find a subset F representing the field from the input image Ψ. The approach can be sensitive to large areas of noise of matched color in areas outside the field. If necessary, additional filtering can be performed, but current testing has shown no need for further processing.
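A minimal sketch of the thresholding-and-morphology stage described above, assuming Python with OpenCV; the HSV green bounds stand in for the mean values of Bolotnikova (2015), and the eight-vertex polygon construction is reduced to its extreme corner points, since the paper's corner-weight equations did not survive extraction:

```python
import cv2
import numpy as np

def detect_ground(raw_bgr):
    # Resize by the heuristic factor 1/9 in area (1/3 per side).
    psi = cv2.resize(raw_bgr, None, fx=1/3, fy=1/3)

    # Naive green thresholding in HSV; the bounds are placeholders for the
    # initial mean values proposed in Bolotnikova (2015).
    hsv = cv2.cvtColor(psi, cv2.COLOR_BGR2HSV)
    psi_b = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))

    # Cheap noise removal: erode then dilate with a ~10 px rectangular morph.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (10, 10))
    psi_b = cv2.dilate(cv2.erode(psi_b, kernel), kernel)

    # Extreme corner points of the detected region R stand in for the paper's
    # full eight-vertex polygon construction (corner weights omitted here).
    ys, xs = np.nonzero(psi_b)
    if xs.size == 0:
        return psi, psi_b, None
    b_l = (xs.min(), ys[xs == xs.min()].max())   # lowest left corner
    b_r = (xs.max(), ys[xs == xs.max()].max())   # lowest right corner
    return psi, psi_b, (b_l, b_r)
```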
Motivation
An edge E ⊆ Ψ is a part of an image where significant variations in color intensity or brightness occur (Canny, 1986;Gomes, 2011;Oskoei & Hu, 2010). Discontinuities in said properties generally correspond to changes in either depth, surface orientation, material properties, or scene illumination (Oskoei & Hu, 2010), and as such offer valuable information regarding the contents of the image.
Edge detection refers to a collection of different algorithms which aim to identify the edges in an input image (Oskoei & Hu, 2010). Classical edge detection algorithms can broadly be divided into two categories: first derivative based, also known as Gradient, and second derivative based, also known as Laplacian (Šimec, 2014;Yitzhaky & Peli, 2003;Ziou & Tabbone, 1998). First-derivative-based methods look for local extrema in the first derivative of the input function, second-derivative-based methods look for zero crossings in the second derivative of the input function.
Classical edge detection algorithms convolve an input image with a two-dimensional operator O characteristic to that specific detector, yielding a grayscale response where edges are distinctively shown with either maxima or minima (Gomes, 2011). Giving O different properties affects a detector's sensitivity to fine detail (and as such, noise), to different types of edges (thick or thin, consistent or inconsistent, etc.) and to different edge orientations (Sharifi, Fathy, & Mahmoudi, 2002). The computational complexity of the filter is also directly related to both the size and the computation cost of the operator. A very commonly used (Sharifi et al., 2002) example of a gradient-based approach is the Sobel operator, which uses two kernels, one for each direction (Duda & Hart, 1973; Oskoei & Hu, 2010). Using two kernels yields two separate grayscale responses G x and G y . Given the above, the general gradient magnitude can be obtained by G = sqrt(G x ² + G y ²), commonly approximated instead by G ≈ |G x | + |G y |, as the latter is much faster to compute (Gomes, 2011). Using separate kernels also makes edge directions easily computable from the responses, e.g. θ = arctan(G y / G x ) given responses from the aforementioned kernels (Ziou & Tabbone, 1998). Other first-derivative-based methods compute gradient magnitude and edge direction in an analogous manner, with possible constants depending on kernel properties (Gomes, 2011). (Figure note: dashed red marks the area considered "in the field", i.e. dashed red marks F.)
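As a short illustration of the gradient computation just described (a sketch assuming OpenCV; "frame.png" is a placeholder input):

```python
import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY).astype(np.float32)

# Convolve with the horizontal and vertical Sobel kernels.
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)

g_exact = np.sqrt(gx ** 2 + gy ** 2)   # general gradient magnitude
g_fast = np.abs(gx) + np.abs(gy)       # cheaper |Gx| + |Gy| approximation
theta = np.arctan2(gy, gx)             # per-pixel edge direction
```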
Laplacian-based methods rely on the Laplace operator Δ, which is a differential operator given by the divergence of a function's gradient in Euclidean space (Vinogradov & Hazewinkel, 2001), given in two dimensions as Δf = ∂²f/∂x² + ∂²f/∂y² (Ziou & Tabbone, 1998). For edge detection, approximate two-dimensional convolution kernels are used instead, as the input space is discrete; a common example (Gomes, 2011) is the 3 × 3 kernel with −4 at the centre, 1 at the four edge-adjacent positions, and 0 at the corners. All of the above methods are highly sensitive to noise and are generally used with an additional smoothing step, commonly convolving with a discrete approximation of a Gaussian filter. Since convolution is associative, smoothing can be applied to O prior to convolving, instead of applying it directly to Ψ. This makes computation cheaper, as for all common cases the size of O is considerably smaller than the size of Ψ (Gomes, 2011).
Canny
The edge detection approach proposed in Canny (1986), commonly referred to as the "Canny edge detector" (Gomes, 2011), is a multiple-stage algorithm based on optimizing functionals, i.e. functions mapping an input vector to a scalar, for detection (identifying edges), localization (locating edges), and singularity (identifying each edge at most once) on the operator's impulse response (Canny, 1986). Prior to convolving, the input is smoothed by an approximation of a Gaussian filter. The detector's operators depend on the given implementation (Oskoei & Hu, 2010). For each operator's response, non-maximum suppression is applied, resulting in thinner, well-defined edge candidates, after which the responses are merged and hysteresis is applied, resulting in binary edges (Canny, 1986). Hysteresis in edge detection means tracking all edge candidates using two thresholds, a lower one and a higher one, denoted lower and higher, respectively.
All points with brightness above lower that can be connected to a point with brightness above higher, without any intermediate point having a value below lower, are set to full brightness; all others are suppressed (Gomes, 2011). This means hysteresis can be used to map a grayscale input to a binary output. In practice, a heuristically chosen combination of the mean value and the standard deviation of the grayscale input frame is used to find suitable threshold values. The approach yields accurate, consistent binary edges, demonstrated in Figure 8. The result is improved further when histogram equalization has been previously applied to the grayscale input (Rizon, 2006).
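A hedged sketch of the described thresholding scheme; the exact mean/standard-deviation combination is not given in the text, so the 0.5 factors below are assumptions:

```python
import cv2

gray = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2GRAY)
gray = cv2.equalizeHist(gray)  # histogram equalization improves the result

# Hysteresis thresholds from the frame statistics (assumed combination).
mu, sigma = float(gray.mean()), float(gray.std())
t_lower = max(0.0, mu - 0.5 * sigma)
t_higher = min(255.0, mu + 0.5 * sigma)

edges = cv2.Canny(gray, t_lower, t_higher)  # binary edge map
```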
Canny's algorithm is the most commonly used edge detection algorithm due to its reliability, low complexity, and availability (Sharifi et al., 2002). However, the algorithm is highly sensitive to fine detail, oftentimes more sensitive than required, and scenario-specific parametrization is a prerequisite for good results (Oskoei & Hu, 2010). The same problem applies to the current setting, as demonstrated in Figure 8: while sufficient detail is obtained in the playing area, objects outside of the playing area can create a lot of unrelated information which will still need to be processed. An efficient solution to the issue is proposed later in Section 3.4.
Random forests
Random forests is a generic supervised machine learning algorithm that can be used for classification, regression, and other similar tasks. The algorithm outlined in Breiman (2001) consists of independently training a large group of decision trees, then passing the input data to each tree individually; the trees then collectively vote to identify the best candidate output label according to an ensemble model. A crucial part of the system is recursively training each tree so that the remaining data is split at each new node to achieve a large identifying information difference between the branches. Geurts, Ernst, and Wehenkel (2006) demonstrate that, up to a certain limit, introducing more randomness at node level yields higher accuracy forests; therefore, perfect splits are actually detrimental to overall ensemble performance and, as such, undesired. Random forests can be trained and stored beforehand, which makes them a good candidate for systems with reasonable amounts of storage but no strong computing power, such as the NAO. Once trained, the importance of each input variable can be deduced from the model with reasonable ease, giving valuable insights for further configuration (Breiman, 2001). The algorithm is both fast (Breiman, 2001; Dollár & Zitnick, 2013; Roy & Larocque, 2012) and, given a reasonably large training set, very accurate (Geurts et al., 2006). Dollár and Zitnick (2013) propose extending random forests to general structured output spaces in such a manner that an input image patch P ⊆ Ψ can be mapped to a corresponding label, creating a novel type of edge detector that inherits the previously mentioned benefits of generic random forests. The central issue for the approach is comparing similarity during the training process, which is not well defined over the output space. To bypass the problem, an intermediate mapping Π simil. from the input space Ψ to a Euclidean space Z is used, where comparing similarity can simply be done by comparing Euclidean distance. In order to avoid the issues outlined by Geurts et al. (2006), a new mapping is randomly generated for each tree to ensure sufficient levels of deviation from the norm at the node level. To reduce the amount of noise generated by the randomness component, each point is oversampled, i.e. it is covered by two different patches P 1 and P 2 . The results are averaged across patches, which can potentially lead to a general loss of accuracy. To counter the issue, Dollár and Zitnick (2013) run the algorithm at multiple scales, labeling the input at the original, half, and double the resolution, and then averaging the results after resizing each back to the original input dimensions. Based on practical application in the industry, the original authors proceed to outline further scenario-specific optimizations in Dollár and Zitnick (2015).
An edge detector based on random forests trained with the properties proposed by Dollár and Zitnick (2013) has characteristically soft edges, as shown in Figure 9. As a result of oversampling, the detector inherently discriminates against noise and detects the dominant features in an image, which are more likely to hold interesting information (Dollár & Zitnick, 2013). While this works well for general edge detection cases, in the given setting crucial information may be lost in numerous scenarios, as demonstrated in Figure 9. This issue is addressed later in Section 3.4.
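For reference, OpenCV's contrib module ships an implementation of the Dollár and Zitnick structured forest detector, which could be used roughly as follows (a sketch; the pretrained model file name is a placeholder, and whether it matches the authors' BSDS500-trained model is an assumption):

```python
import cv2
import numpy as np

# Requires opencv-contrib and a pretrained structured-forest model file.
sed = cv2.ximgproc.createStructuredEdgeDetection("model.yml.gz")

bgr = cv2.imread("frame.png")
# detectEdges expects a float32 RGB image with values in [0, 1].
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
soft_edges = sed.detectEdges(rgb)   # soft, noise-suppressed edge map in [0, 1]
```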
Merged edge model
As covered prior, both Canny's edge detector and the random forests edge detector have failure cases in which either too much noise is detected or too much detail omitted, respectively, shown in Figures 8 and 9. As the problem can be isolated to the area considered "not in the field" for the former algorithm and to the area considered "in the field" for the latter, a simple combined model is proposed. Canny's algorithm is used to detect edges in the area considered "in the field", and the random forests edge detector is used to detect edges elsewhere. To avoid breaking consistent edges, both Ψ Canny and Ψ Rand.For. are padded by a small value overlap to create overlap, where overlap is chosen heuristically. Using too large values for overlap creates unwanted noise, while using too small values yields edges that are disconnected at the boundary of the two areas. As such, choosing an optimal value is critical.
The edges from the random forest edge detector are binarized using hysteresis with heuristic parameters assuming all values fall within [0, 1]. This approach has suboptimal accuracy as many edges are not singular, but it is fast and sufficiently accurate with the proposed classification model, outlined in the following sections.
The results from the two detectors, Ψ B Canny and Ψ B Rand.For.
, are then joined using bitwise or, annotated with |, resulting in a binary output, demonstrated in Figure 10.
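A sketch of the merged model, assuming the Canny map, the soft random-forest map, and the ground mask from the previous steps; the overlap width and the hysteresis bounds are assumed values, and the hysteresis step is approximated with a connected-component test rather than the exact procedure of Canny (1986):

```python
import cv2
import numpy as np

def merge_edges(canny_bin, rf_soft, field_mask, overlap=5, lo=0.1, hi=0.25):
    """Join Canny edges inside the field with binarized random-forest
    edges outside it; overlap, lo and hi are assumed values."""
    # Pad both regions by dilating the masks so edges stay connected
    # across the field boundary.
    kernel = cv2.getStructuringElement(
        cv2.MORPH_RECT, (2 * overlap + 1, 2 * overlap + 1))
    in_field = cv2.dilate(field_mask, kernel)
    out_field = cv2.dilate(cv2.bitwise_not(field_mask), kernel)

    # Approximate hysteresis on the soft map: keep weak responses only
    # when their connected component contains a strong response.
    strong = (rf_soft >= hi).astype(np.uint8)
    weak = (rf_soft >= lo).astype(np.uint8)
    n, labels = cv2.connectedComponents(weak)
    keep = np.zeros(n, bool)
    keep[np.unique(labels[strong > 0])] = True
    rf_bin = (keep[labels] * 255).astype(np.uint8)

    # Bitwise or of the two region-restricted detectors.
    return (canny_bin & in_field) | (rf_bin & out_field)
```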
Edge clustering
Each point Q ∈ E in every edge E ⊆ Ψ G has a corresponding direction Q that is equal to the gradient direction at that point, i.e.
All directions are quantized into four categories, analogous to the approach outlined in Canny (1986), and a label L is associated with every edge point using the mapping visualized in Figure 11.
Similar to the approach proposed in Zitnick and Dollár (2014), edges are then grouped by joining all eight-connected points, except that only edge points with an identical label are joined, forming a cluster C ⊆ E, demonstrated in Figure 12. Using the relative direction difference proposed in Zitnick and Dollár (2014) without quantized labels was tested, but proved less reliable in the given setting. Every point may be a member of at most one cluster, i.e. for any two clusters C 1 and C 2 , ∄Q : Q ∈ C 1 , Q ∈ C 2 . Grouping is done recursively, proceeding along the horizontal axis of Ψ G at first, as the memory addresses are sequential, and then vertically, as is the industry standard practice (Laganière, 2011). Clusters with mass under a small heuristically determined threshold (in practice, mass = 5 px) are discarded as noise.
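A compact sketch of the clustering step, assuming a binary merged edge map and a per-pixel gradient direction map of the same shape (e.g. from the earlier Sobel sketch); the four-bin quantization and the 5 px mass threshold follow the text:

```python
import cv2
import numpy as np

def cluster_edges(edge_bin, theta, min_mass=5):
    """Group 8-connected edge pixels sharing a quantized direction label."""
    # Quantize directions (mod pi) into 4 bins: 0, 45, 90, 135 degrees.
    labels4 = np.round(np.mod(theta, np.pi) / (np.pi / 4)).astype(int) % 4

    clusters = []
    for d in range(4):
        mask = ((edge_bin > 0) & (labels4 == d)).astype(np.uint8)
        n, cc = cv2.connectedComponents(mask, connectivity=8)
        for i in range(1, n):
            pts = np.column_stack(np.nonzero(cc == i))  # (y, x) pairs
            if len(pts) >= min_mass:                    # drop noise clusters
                clusters.append((d, pts))
    return clusters
```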
General overview
Multi-variable decision models have been shown to consistently outperform holistic, single-descriptor classification models (Serre, Wolf, & Poggio, 2005). Many general algorithms have been proposed for both detecting and classifying objects. Numerous commonly used approaches are unfeasible for the given setting: some approaches are licensed prohibitively (e.g. Bay, Tuytelaars, & Van Gool, 2006; Lowe, 2004), some are too general and, as such, computationally too expensive (e.g. Rosten & Drummond, 2006; Rublee, Rabaud, Konolige, & Bradski, 2011). Using the proposed merged edge model with the clustering approach based on Canny (1986) and Zitnick and Dollár (2014), the number of candidates both calculated and checked against can be reduced greatly. As such, a simple classification model is constructed on the general principles of Lowe (2004) and Zitnick and Dollár (2014): based on the computed cluster information, each observed variable is given a relatively wide acceptance range, as opposed to a single narrowly ranged variable-based model, i.e. a collection of weak classifiers is used instead of a single strong classifier. A weak classifier is a classifier that accepts a wide range of values as matching; a strong classifier accepts a narrow range. A sample comparison is shown in Figure 13. Multiple weak classifiers working in unison make noise in any input variable less relevant (Breiman, 2001).
Classifying the clusters can be done by storing a number of characteristic properties for each of them. For every cluster, two points contained in it with the longest distance between them, Q 1 and Q 2 , are found, which can be done cheaply during cluster creation.
The distance between Q 1 and Q 2 is used as a cheap approximation for the running length of a cluster. Additionally, Q 3 is found which is a point directly between Q 1 and Q 2 .
It is worth noting that Q 3 may not necessarily be a point on any edge. For all three points, the brightness of Ψ at the given point is stored, as well as whether the point is considered "in the field" or "not in the field" by ground detection. Additionally, the parameters of a bounding rectangle are found for each cluster, along with the average gradient orientation. The cluster mass based classification heuristic (where the mass of a cluster is equal to the number of unique pixels contained in it) proposed in Oskoei and Hu (2010) is also employed.
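The descriptors above could be computed per cluster roughly as follows (a sketch; `pts` is the (y, x) point array produced by the clustering sketch, and the brute-force farthest-pair search is acceptable only because the clusters are small):

```python
import numpy as np

def cluster_descriptors(pts, gray, field_mask):
    """Cheap descriptors for one cluster: farthest pair (Q1, Q2), their
    midpoint Q3, mass, bounding box, and per-point brightness/field flags."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    q1, q2 = pts[i], pts[j]
    q3 = ((q1 + q2) / 2).astype(int)          # may lie off the edge itself

    ys, xs = pts[:, 0], pts[:, 1]
    descr = {
        "length": float(d[i, j]),             # running-length approximation
        "mass": len(pts),                     # number of unique pixels
        "bbox": (xs.min(), ys.min(), xs.max(), ys.max()),
        "brightness": [int(gray[tuple(q)]) for q in (q1, q2, q3)],
        "in_field": sum(bool(field_mask[tuple(q)]) for q in (q1, q2, q3)),
    }
    return q1, q2, q3, descr
```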
Following, two sample classifications are described without probabilistic components, constructing a probabilistic model is considered out of scope for this work. All thresholds are determined heuristically given an input image Ψ, where Ψ is scaled down as outlined in "Ground detection".
Goals
A goal can be described by either one or two goalposts that have their bottoms on the ground area and also have a connecting part at the top. Identifying goals is a three-step process: identify candidate goalposts, identify top connecting parts, and finally remove all goalpost candidates that are not connected at the top. Initial goalpost candidates are picked using constraints on the average direction of the cluster, the running length of the cluster, and the properties of Q 1 , Q 2 and Q 3 for that cluster; all threshold values are chosen heuristically. The average direction for the cluster must fall within a given range; C length , the running length of the cluster, must be between given bounds; and C F , which shows how many of {Q 1 , Q 2 , Q 3 } for that specific cluster are considered "in the field", must be sufficiently high for initial goalpost candidates. If all the above criteria are met, the given cluster is added to the list of initial goalpost candidates. Top connectors are selected from the remaining clusters using constraints on position and distance.
The distance between either end of the given cluster, i.e. Q 1 or Q 2 for that cluster, and any goalpost candidate must be under a threshold. Additionally, the approximate center of the given cluster must be higher than the goalpost candidate's approximate center, i.e. Q 3 for the given cluster must have a lower ordinate value than the Q 3 of the candidate goalpost, given the standard OpenCV Cartesian system (Laganière, 2011). In practice, connector = 20 px.
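A sketch of the goalpost and top-connector tests; it assumes the descriptor dictionary from the earlier sketch extended with an average direction `mean_dir`, and every numeric threshold except the 20 px connector distance is a placeholder, since the paper's values were lost:

```python
import numpy as np

def is_goalpost_candidate(descr, dir_tol=np.pi / 8,
                          min_len=15, max_len=120, min_in_field=2):
    """Initial goalpost filter: roughly vertical average direction, a
    plausible running length, and enough of {Q1, Q2, Q3} in the field."""
    vertical = abs(descr["mean_dir"] - np.pi / 2) < dir_tol
    long_enough = min_len <= descr["length"] <= max_len
    return vertical and long_enough and descr["in_field"] >= min_in_field

def is_top_connector(conn_q1, conn_q2, conn_q3, post_q3, max_dist=20):
    """Top-connector test: one end near a post and the connector's centre
    above the post's centre (smaller row index in image coordinates)."""
    near = min(np.linalg.norm(conn_q1 - post_q3),
               np.linalg.norm(conn_q2 - post_q3)) < max_dist
    return near and conn_q3[0] < post_q3[0]
```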
Ball
A ball can be described as a collection of small clusters where both very high brightness (white) and very low brightness (black) are present nearby and the clusters are considered "in the field". Ball clusters are identified using constraints on brightness around cluster, darkness around cluster, cluster mass, and saturation, all with heuristically chosen threshold values.
C m , the cluster's mass in pixels, must be under a given size (in practice, C m < 20 px). C sat , the saturation around the cluster's end points Q 1 and Q 2 in a bounding rectangle with side r C sat , must not change more than a given value, assuming saturation values fall in [0, 1]. C bright , the highest brightness value in a bounding rectangle with a side length r C bright around both of the cluster's end points, must be at least a given value, assuming all values fall in range [0, 1]. C dark , the lowest brightness value with a similar configuration, must be below a given value. C F , described before, must be sufficiently high. These values reliably locate the ball in views where the ball is nearby; a sample detection is shown in Figure 16. The developed algorithm has been tested on 100 real-world scenarios in which the ball was placed at different positions, and the NAO robot detected the ball in 98 of them.
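A sketch of the ball test following the constraints above; the mass bound C m < 20 px is from the text, while the window radius and the brightness/saturation bounds are assumptions:

```python
import numpy as np

def is_ball_cluster(descr, gray, sat, q1, q2, r=8,
                    max_mass=20, sat_tol=0.25, min_bright=0.8, max_dark=0.2):
    """Ball test: small cluster, stable saturation, and both very bright
    and very dark pixels near the end points, all inside the field."""
    def window(img, q):
        y, x = q
        return img[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]

    # Brightness extremes near the end points, normalized to [0, 1].
    bright = max(window(gray, q1).max(), window(gray, q2).max()) / 255.0
    dark = min(window(gray, q1).min(), window(gray, q2).min()) / 255.0
    # Saturation must stay roughly constant around both end points.
    sat_jump = max(window(sat, q).ptp() for q in (q1, q2)) / 255.0

    return (descr["mass"] < max_mass and sat_jump <= sat_tol and
            bright >= min_bright and dark <= max_dark and
            descr["in_field"] >= 2)
```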
Notes: Yellow marks detected goal posts, red marks detected top connectors.
Conclusion and future work
The basis of a new edge-information-based vision module for the NAO humanoid robot has been proposed and implemented. A new method of ground detection has been outlined, building on the work in Bolotnikova (2015) and improving the previous approach by removing any existing occlusions. A new merged edge model has been proposed that uses the algorithms outlined in Canny (1986) and Dollár and Zitnick (2013) in unison. As a prerequisite for object classification, an edge clustering method is proposed by combining the approaches from Zitnick and Dollár (2014) and Canny (1986). Building upon it, a non-probabilistic edge-information-based object detector and classifier has been implemented. Sample classified objects have been outlined and tested, demonstrating the proposed classifier's functionality, its strong points, and places where improvements can be made. Each step has been covered in sufficient detail, explaining the design decisions. A solid foundation for further work has been laid, outlining both specific improvements to build upon and long-term outlooks for future research.
The current implementation is a coarse base demonstrating the viability and efficiency of the proposed approach. It is open to the model-scale optimizations covered in Hinterstoisser et al. (2012), the precision improvements outlined in Dollár and Zitnick (2015), and the cluster merging outlined in Zitnick and Dollár (2014). The binarization of the random forest edge detector can be improved using the approaches outlined in Canny (1986). The currently used random forest edge detection model is trained with parameters similar to those outlined in Breiman (2001) on the general BSDS500 dataset (Arbelaez, Maire, Fowlkes, & Malik, 2011). Using a different dataset, using a specifically constructed subset of the one currently used, or tuning the forest parameters may provide a final model that is smaller and faster to operate on. The implemented classifier is non-probabilistic and does not fully leverage all of the data available. However, the proposed classification model is a perfect candidate for machine learning. Notes: Red marks clusters detected to be on the ball. | 6,570.2 | 2016-12-05T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
Good-bye email, welcome Slack
Email is the standard in communication with university students; as a one-to-one communication device, it leads students to repeat their doubts across multiple messages, multiplying the teacher's work and preventing the participation and learning of other students. The topic forums on virtual campuses allow a better management of doubts but require the active participation of all the students to achieve a final outcome. We propose to abandon email and the traditional forums of the campuses and adopt tools with a higher professional profile to manage communication with the students. The results of this first experience, carried out with more than 90 postgraduate students, encourage its mass adoption in both undergraduate and postgraduate teaching. The survey conducted over 102 students in three different subjects of two different MScs shows how Slack, the tool we used, has been highly valued, with differences depending on the studies but not on gender or on the intensity of use of social networks.
Introduction
University teaching, both at degree and postgraduate level, is evolving with the integration of new technologies. We cannot pretend that our students, with access to global information in near real time, will be satisfied with traditional communication, which has historically occurred in the classroom and/or in the teacher's office, at the planned tutoring hours. Information arises at any time, anywhere, and the need to resolve doubts is growing in the face of the enormous volume of information available. However, while it is true that on many occasions the urgency is not real, in many others rapid intervention is required to solve doubts and problems that should, in fact, be solved sooner.
Technological evolution is undeniable, and the adoption of these technologies by the young is getting faster and faster. According to Pew (2018), in the US the smartphone penetration rate has risen from 35% in the first half of 2011 to 81% in the first quarter of 2019. 96% of 18- to 29-year-olds own a smartphone, while only three out of four adults declare having a PC or laptop. Currently, dependence on the smartphone has grown to important levels, with 22% of Americans between the ages of 18 and 29 reporting not having broadband at home but using a smartphone instead for their social and professional communications. The situation in Europe is similar. According to Eurostat (2019), 85% of European households have access to the internet, up from 55% in 2007. It is interesting to note how internet access in households with dependent children increases to 96%. In Spain, and always according to the same report, 90% of young people aged 18 to 29 access the Internet daily. More than 80% of young Europeans in that age range (more than 95% for Spain) access the internet via smartphone. As regards the use of email as a means of communication, more than 85% of young Europeans in the previous age range report doing so on a regular basis, according to Eurostat (2019).
Email is the communication tool par excellence between teachers and students. However, there are several problems in relation to its use and effectiveness. Thus, Ha et al. (2017) examined the effect that excessive connectivity of students could have on the effectiveness of emails. Through face-to-face interviews and a self-administered survey with quantitative data, they found that students' rejection of reading emails was not so much due to social media as to over-communication through this route. University departments, student organizations, and faculty advisors' e-mails were most likely to be avoided, and they recommend both university administrators and academic advisors "to reconsider the e-mail communication to students, target at the instant communicator social media users, and use Facebook to create a strong sense of community and campus involvement for their students." Almost 40% of students said they don't always read emails from academic advisers, and 54% of students said the same about emails from the university or from academic departments. Among all students, email use (12.1%) ranked behind social media (35.2%) and texting (50.2%), but ahead of phone calls (2.2%). The most important findings of the study, in relation to the use of emails by students, were that 72% of students treat emails from student groups like spam, more than 50% of students don't always read emails from their institution or academic department, and nearly 40% of students don't always read emails from their advisors.
The privacy of life outside the classroom is one of the main challenges that lecturers face. Thus, Hillman et al. (2019) maintain that handling real-time communication with students is problematic because, as the title of their paper graphically points out, "I have a life". Dawkins (2019) reviewed 19 studies to confirm that academia is behind the industry when it comes to mass email evaluation and optimization strategies. He cites technological limitations of learning management systems (LMS) and the limited expertise of teaching staff as the main obstacles to more effective mass email at university. Some other experiences have been set up to move beyond email as a communication tool with the students.
López Zafra and de Paz (2014) showed how Twitter could be a powerful tool to communicate insights to students and engage them with the subject. As early as the second semester of 2010-2011, they started to communicate with the students of a course in Business Statistics in the Business Administration degree at a private school in Madrid, Spain. Twitter was seen as very powerful in motivating students in a medium where they felt at ease. The latter observation is in line with what the CEO of Twitter Spain stated at the Talking About Twitter congress (Granada, Spain, June 22nd and 23rd): according to de Paz Cobo and López Zafra (2015), 85% of young users access Twitter via mobile. That is, by demographic target and usability, it seems that Twitter can be a tool with which to connect with the students of the university.
Twitter is a social network whose use in teaching can be very productive. Based on the experience of its use in the classroom over five years, always in subjects of the quantitative area and in different schools, De Paz Cobo et al. (2017) found that students demand more and more alternative means of communication to the traditional ones. Access to sources of information in many different formats (text, audio, video) on the Internet, with the additional feature of the constant renewal of content, requires the use of a tool with the maximum accessibility, dissemination, immediacy and versatility, as well as the most important feature of them all: to be accepted almost natively by students. Of course, Facebook has also been widely evaluated both as a communication and as a learning tool. Walsh (2010) was one of the first to announce the possibilities of the then increasingly important network; he cites a comment by Kristen Nicole Cardon, lecturer of a British Literary History course, where she describes the advantages of using Facebook in the classroom, such as motivation, pointing at matters that students really care about and not wasting time on those already understood, the possibility of getting insights from those who generally don't participate in the discussion, and the wider perspective coming from more and more students. Duncan and Barcyczk (2015) found that students in the Facebook-enhanced courses reported having more favorable attitudes toward the social media and a slight increase in their sense of classroom community compared to students in non-Facebook-enhanced courses.
The rise of Instagram (IG) has also led to its use in the classroom. Instagram, owned by Facebook, has 1 billion monthly and over 500 million daily active users; of those, 71% are under age 35, proving that Instagram is one of the most widely used social networks among young people, the target group of a college class. Daily time spent is just about 5 minutes behind the almost one hour spent on Facebook. Byrd and Denney (2018) report the success of that social network in a journalism course. Results shown by Arceneaux and Dinu (2018), from an experimental design confronting Twitter with Instagram, proved that information retention was most improved by visually based information published by professional news outlets. De Paz Cobo and López Zafra (2020) have been using IG during the first semester of the 2019-2020 course, showing a higher engagement of the students and an increase in interactions with respect to those in Twitter.
Anyway, it seems that email will continue to reign at the top of the digital communication system. According to Becker (2016), it is going to be very difficult to substitute a tool that has been evolving over 30 years, with around 3 billion users in 2019, an expectation of 319 billion emails sent in 2021 (Campaign Monitor, 2019), and a position as the third most influential source of information for B2B audiences, behind only colleague recommendations and industry-specific thought leaders (Finn, 2019). The main problem we have experienced as long-time users of Twitter and early adopters of Instagram is the difficulty of managing in a professional way the feedback of the students, their doubts and problems. In search of improving on the latter, trying to overcome email avoidance and to engage students in a two-way communication system, we decided to try Slack.
Slack in the classroom
As explained by Woodgate (2019), Slack is a workplace communication tool, "a single place for messaging, tools and files." This means Slack is an instant messaging system with lots of add-ins for other workplace tools. The add-ins aren't necessary to use Slack, though, because the main functionality is all about talking to other people. There are two methods of chat in Slack: channels (group chat), and direct message or DM (person-to-person chat). With over 12 million daily active users (Chan, 2019), Slack is close behind Microsoft Teams, its main competitor.
Being a rather young communication tool (the first version came out in 2013, while Facebook was founded back in 2004, Twitter in 2006 and Instagram in 2010, and email has been in place since the mid-seventies), and one clearly leaning on the professional market, not so many experiences have been reported on the matter. Talbot (2015) is one of the early adopters of Slack in teaching, because of the artificiality of the discussion rooms in former LMSs, the difficulty of sharing rich content such as videos in the LMS, and finally the lack of efficiency, when compared to email or SMS, of one-to-one communication. Talbot stresses the email-free possibilities of Slack as one of the motivating issues for adopting the tool, along with the increase in the amount of communication; students' perceptions were mostly favorable. Peck (2018) describes his first year as a professor as a terrible administration time, spending hours checking, reading and answering often redundant emails. He chose Slack because of the possibility of shifting conversations out of the inbox, into a platform where they could easily be met at any time during the semester, and the possibility of answering each issue only once. Teckchandani (2018) describes how Slack works, stating that the free version is enough for the classroom. Hussain et al. (2018) compare the use of WhatsApp groups to Slack for classroom activity; although they expected undergraduate students would use Slack heavily, their usage habits suggested the application was not as effective as other research has shown it to be at the graduate level.
We decided to promote the effective use of Slack in four different postgraduate classes in two different MScs: two in the Master's Program in Data Science for Finance (MDSF; courses in Reduction and Segmentation Techniques, on one hand, and Forecasting, on the other, with the same 21 students in each course) and another two in the Master's Program in Financial Markets (MMF; two different classes with a total of 60 students in a course in Quantitative Methods for Business). The scope of the master programs is different, but both groups of courses share as a common feature the strong use of programming languages, in particular the R software. The page mdsf.slack.com was built to communicate with the students of the MDSF and finanzascuantitativas.slack.com with those in the MMF. In every Slack page, a set of general threads was built: #general, for communication of whatever the issue in any course; #not_everything_is_somthg (where somthg stands for finance, in the case of the MMF, and data_science in the other case), a very popular thread among students where talking about anything, related or not to the course, was promoted; and a specific thread for each subject: #quantitative_finance in the MMF and #segmentation and #forecasting in the MDSF. The Direct Messaging possibilities of Slack were also used to solve particular problems.
Slack registered a quite heavy use of the platform, specifically among the MDSF students. As of January 23rd, days before the final exams, 3,129 messages out of the maximum 10,000 that the free version allows had been exchanged in the different channels of the MDSF area. The weekly traffic (measured in terms of active users, those with at least an open channel) was intense, with all the students reading messages and half of them publishing messages every week except during the Christmas period. #forecasting was the preferred thread, with 134 published messages, followed by #general with 76, #segmentation with 40 and #not_everything_is_data_science with 9. The same statistics in the MMF showed 914 messages (so one third, for over three times the number of students), a much lower weekly traffic, descending in terms of active users as the course advanced, with 179 messages published under the #quantitative_finance thread, 98 under the #not_everything_is_finance one, and 9 under the #general one.
We conducted a survey among the users to understand both their behavior and the experience. 102 students of the three different subjects were asked about different aspects of the use of Slack through a Google Form. 89 (87.25%) of them answered, 48 from the Quantitative Finance subject in the MMF and the remaining 41 from the subjects of Reduction and Segmentation Techniques and Forecasting in the MDSF. For 95% of them, it was their first experience with Slack in the classroom, and 77.3% of them heard about Slack for the first time in our subjects. A tiny 15% of them (20% among the MDSF students) had a Slack account prior to the present experience.
On a 5-point scale, where 1 means "totally disagree" and 5 "completely agree", the Slack global experience graded 3.78 points, but just 3.29 among those in the MMF against 4.39 among those in the MDSF, possibly showing that Slack is better suited for those in technical studies; recall that the use of the tool among these students was considerably heavier. The sentence "I love Slack" hit an average of 3.2 points, but fell to 2.71 among the students in the MMF and jumped to 3.8 among those in the MDSF. No significant differences were found in terms of gender.
It is quite interesting that the sentence "I think Slack is an interesting communication tool for the university" is valued with 4.1 out of 5 points (just 3.71 for those in the MMF, while the MDSF students increased the grade up to 4.55) and "Slack should be used by the remaining lecturers" got 3.98 points out of 5 (again, a lower value of 3.5 among those studying the MMF against 4.55 among those in the MDSF studies).
In terms of use of social networks, all but one were WhatsApp users, 78.7% IG, 71% FB and 50% Twitter. 84.3% of them declared accessing their social networks more than once daily. Neither the number of different social networks nor the intensity of use had any effect on the previous results.
Conclusion
Following previous experiences with Twitter and a parallel one with Instagram with undergraduate students, we decided to enroll four different classes in two master programs in the professional, workplace communication tool Slack. The results differ according to the profile of the programs. Those following the master's in data science were much more active and heavier users than their mates in the master's in finance, even if the different courses share the common feature of intense use of the R statistical software. As the survey showed, Slack was a good alternative to email and the usual forums in the LMS. The experience was enjoyed by the students, and as lecturers we were able to reduce the volume of emails while concentrating the problems on a professional platform.
"Education",
"Computer Science"
] |
A Space-Variant Deblur Method for Focal-Plane Microwave Imaging
In the research of passive millimetre wave (PMMW) imaging, the focal plane array (FPA) can realize fast, wide-range imaging and detection. However, it suffers from a limited aperture and off-axis aberration. Thus, the result of FPA is usually blurred by a space-variant point spread function (SVPSF) and is hard to restore. In this paper, a polar-coordinate point spread function (PCPSF) model is presented to describe the circularly symmetric characteristic of space-variant blur, and a log-polar-coordinate transformation (LPCT) method is proposed as the pre-processing step before the Lucy–Richardson algorithm to eliminate the space variance of the blur. Compared with the traditional image deblur method, LPCT solves the problem by analyzing the physical model instead of approximating it, which has proved to be a feasible way to deblur the FPA imaging system.
Introduction
In the research of passive millimetre wave (PMMW) imaging, the focal plane array (FPA) is widely applied for object detection, scene monitoring, and security checks because of its fast, wide-range imaging abilities. In the area of electromagnetic compatibility (EMC) detection, we adopt the FPA system to image the spatial electromagnetic leakage of electronic equipment (working in the microwave band), which can be called the passive focal-plane microwave (PFPM) imaging system, as shown in Figure 1. The reflector of the PFPM system is offset to avoid center obscuration, and its aperture is much larger than that in PMMW in order to match the microwave wavelengths according to diffraction theory. Compared with PMMW imaging, due to the much lower frequency of the source (3-6 GHz) and the limited aperture of the parabolic reflector, the PFPM imaging system is more diffraction-limited, so the image suffers larger blur degradation. Moreover, the aberration caused by the off-axis parabolic (OAP) reflector is quite serious, which increases the space-variant characteristic of the point spread function (PSF). Both of these factors make the restoration of blurred PFPM images much harder than for PMMW. In the field of optical imaging, several restoration algorithms for images degraded by space-variant blur have been proposed. There are three main kinds of methods: direct restoration, segmentation restoration, and space coordinate transformation restoration.
The direct restoration method considers image degradation from a global perspective: it decomposes, collates, and compresses a large amount of PSF data into a smaller storage quantity and performs recovery by an iterative or non-iterative method. Jain and Angel [1] proposed solving the SVPSF image restoration problem with the conjugate gradient method, experimenting on a 31 × 31 image. In 1977, John [2] proposed the Kalman filtering method, which was subsequently revised and improved [3]. Mehmet [4] used the projection onto convex sets (POCS) method for space-variant blurred image restoration. Fish [5] improved the singular value decomposition method and sped up the calculation. James [6] used an interpolation method to process the PSF when performing image restoration on Hubble telescope images. After analyzing the SVPSF with the diffraction limit and aberrations of the optical system, Thomas et al. [7] completed the image restoration using the Gauss-Seidel iterative algorithm.
The segmentation algorithm divides the image into several sub-blocks whose PSFs are considered space-invariant. A space-invariant restoration method is then used to complete the restoration in each sub-block, and finally the sub-blocks are spliced to obtain the final image. Trussell and Hunt proposed an image-block-based restoration algorithm in 1978 [8,9]. Thomas [10] discussed in detail the effects of different segmentation methods on restoration quality, with results affected by the computational complexity and the edge ringing effect. Guo et al. [11] used the image maximization algorithm to estimate the PSF after image segmentation, and then used the spatially adaptive least squares method to complete the restoration. Du Xin [12] proposed a new segmentation method based on the mean-shift algorithm to separate the signal region in an electromagnetic image.
The coordinate transformation restoration (CTR) method is mainly used for degradations such as linear motion blur and radially symmetric blur. By transforming the image into a coordinate space in which the point spread function is spatially invariant, restoration of the degraded image can be simplified and achieved effectively and accurately. In reference [13], the author used a polar-coordinate transformation to realize space invariance in the azimuth direction in optical imaging.
Recently, some new approaches have been proposed to solve this problem. Camera [14] improved the sectioning approach by applying a deconvolution method with boundary-effect correction in the segmentation algorithm, and accelerated the method with scaled gradient projection (SGP). Mourya [15] proposed a distributed shift-variant image deblurring algorithm to address limited resources when the image is extremely large (up to gigapixels). Zhang [16] estimated the blur map and adopted a BM3D-based, non-blind deconvolution algorithm to reconstruct the image. In the area of artificial intelligence, Schuler [17] built a neural network with a deep layered architecture and trained it to estimate the blur kernel and reconstruct the image alternately. Sun [18] used a convolutional neural network to predict the distribution of motion blur, extended by image rotation; a Markov random field model was then applied to remove the motion blur using patch-level image priors. The method proved effective at estimating and removing complex non-uniform motion blur.
In the field of PMMW, Li Liangchao [19] studied the relationship between the focal plane and the image plane and presented a spherical anti-projection transformation method to eliminate the space-variant characteristic of the PSF. After analyzing the PSF of our system, we propose using a log-polar-coordinate transformation (LPCT) method before the Lucy-Richardson algorithm to eliminate the space-variant blur in both the angular and radial directions. In this paper, we first introduce the PCPSF model of the OAP in Section 2; our algorithm is then explained in detail in Section 3. Finally, we apply the algorithm to both simulation and experiment, with the results displayed in Section 4, and the conclusions are given in the final section.
Space-Variant PSF of OAP System
The OAP is usually designed as the intersection of a symmetric parabolic surface, called the parent parabolic, with a circular aperture, as shown in Figure 2. So, the research on the PSF of the OAP can be carried out in two steps: the diffraction-limited effect of the parent parabolic without aberration, and the inherent aberration caused by the off-axis characteristic. In the next two subsections, we give a detailed explanation of how these two factors cause the space-variant blur.
Space-Variance Correction in the Angular Direction
Usually the research on the propagation and focusing of light or electromagnetic fields relies on the Fresnel diffraction integral formula, the Kirchhoff formula, or the Rayleigh-Sommerfeld diffraction integrals. To avoid these complex formulas, we derive the PSF of the parent parabolic from the Collins formula, which has been proved to be consistent with the Fresnel diffraction integral formula for the numerical analysis of symmetric paraxial optical systems [20]. As mentioned in [21], the parabolic mirror performs the same light-field transformation as a lens. Therefore, we can derive the ideal imaging of the parabolic reflector using the mature method of the lens. According to [20], the PSF of the parent parabolic can be written as in Equation (1), where $u_1(\xi_1, \eta_1)$ is the object plane, $u_2(x_2, y_2)$ is the image plane, and $u_0(r_0, \theta_0)$ is the pupil plane. It is obvious that the PSF varies with the coordinate parameters. Inspired by the circularly symmetric characteristics of the blur, we transform the object plane and the image plane from Cartesian coordinates into polar coordinates following Equation (2), and then derive the PSF in polar coordinates as Equation (3). In particular, taking the integral over $\theta$ gives Equation (4), which can be evaluated using the properties of trigonometric functions as Equation (5). As we can see, the PSF does not depend on $\theta$ in polar coordinates; in other words, the PSF is invariant to $\theta$ after the coordinate transformation. The PSF in polar coordinates can be called the PCPSF model, which decouples the angular direction from the two-dimensional space-variant blur.
Space Variance Correction in the Radial Direction
Since the object and image planes of the focal-plane imaging system are on the same side of the reflector, most systems use an off-axis approach to avoid center occlusion. For parabolic reflectors, the off-axis geometry and non-parallel incident rays cause relatively large aberrations. In the literature [22,23], Arguijo used the ray-tracing method of geometric optics to analyze the aberration of the off-axis parabolic reflection mirror in optical imaging systems, pointing out that the aberration mainly includes astigmatism and coma. According to Seidel's primary aberration theory, the astigmatism and coma can be described in polar coordinates as in Equation (6), where the two rows above represent the astigmatism aberration and the two rows below denote the coma aberration. Obviously, the aberration above depends on neither $x_\theta$ nor $u_\theta$, which means that there is no variance caused by $\theta$, the same as for the PSF of the parent parabolic. So, the work on the restoration of the blurred image focuses on the radial direction after the polar-coordinate transformation.
In Equation (6), $u_r$ contributes little to the astigmatism and nothing to the coma aberration in the $\theta$ direction, but it has a significant impact in the $r$ direction. Since the radial blur in Equation (6) is related to $u_r$ or $u_r^2$, we can apply a logarithm operation along the $r$ direction in the polar-coordinate transformation, which helps to suppress the radial variance. After that, the radial aberration described in Equation (6) can be written as follows, where the blur has only a weak or no relationship with $u_r$ anymore:

$\ln(x_r) - \ln(u_r) = \ln[1 + (2C + D)\,u_r\, r \cos\theta]$

Figure 3 gives an example of the effect of the logarithm: after the logarithm operation in the radial direction, the variance along $r$ has been almost removed. In Section 2, the description and correction in the angular and radial directions were given by analyzing the PSF in polar coordinates. By combining the two subsections, we set up a log-polar-coordinate transformation method to eliminate the space variance of the blurred image. The realization of this method is given in the next section.
Log-Polar Transformation
In image processing, the log-polar transformation has the properties of scaling invariance and rotation invariance and is usually used for target extraction and recognition [25,26]. In our method, however, the log-polar transformation is motivated by the analysis of the imaging system's characteristics, as derived above. Here, we present the steps and flow charts that describe the application of the log-polar transformation method to achieve resolution recovery.
• Step 1: Obtain the image data f(x, y) of size M × N from the photoelectric sensors, and the point spread function PSF(x, y) of size M × N by simulating an ideal point source in FEKO.
• Step 2: Transform the image data f(x, y) and PSF(x, y) into polar coordinates to get the new image g(r, θ) and the new PSF(r, θ) by Equation (8), and interpolate the new image using the bicubic interpolation method.
• Step 3: Use the Lucy-Richardson algorithm with g(r, θ) and PSF(r, θ) to reconstruct the high-resolution image g'(r, θ).
• Step 4: Inverse-transform g'(r, θ) into the Cartesian coordinate system and interpolate it by the bicubic method to obtain the final result with resolution recovery; a sketch of this forward and inverse mapping is given below.
Figure 4 shows the transformation process. As we can see, the space-variant blur in Cartesian coordinates becomes space-invariant in log-polar coordinates. However, this method has a blind spot at r = 0, and it will not work well close to the blind spot, since there are too few pixels to be sampled there. Mathematical methods can hardly improve this, and for now the feasible way is to avoid placing the detected object around the blind point. In the process of transformation, bicubic interpolation results in an uneven rise of values at different locations of the image, which can be suppressed by choosing appropriate filters, such as a median filter, or by improving the interpolation procedure.
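To make Steps 2 and 4 concrete, the following Python sketch implements the forward and inverse log-polar mappings with bicubic interpolation (order 3 in scipy.ndimage.map_coordinates). It is a minimal sketch under our own assumptions (single-channel image, user-chosen center, illustrative function names), not the paper's MATLAB implementation; handling of the θ seam and of the r = 0 blind spot is deliberately simplified.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_log_polar(img, center, n_r=256, n_theta=256):
    """Forward map img(y, x) -> g(log r, theta), bicubic (order=3)."""
    cy, cx = center
    r_max = np.hypot(*img.shape)                      # radius covering the image
    log_r = np.linspace(0.0, np.log(r_max), n_r)      # sample log r uniformly; skip r = 0
    theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(np.exp(log_r), theta, indexing="ij")
    coords = [cy + rr * np.sin(tt), cx + rr * np.cos(tt)]  # (row, col) sample points
    return map_coordinates(img, coords, order=3, mode="nearest")

def from_log_polar(lp, center, out_shape):
    """Inverse map g(log r, theta) -> image on the Cartesian grid."""
    cy, cx = center
    n_r, n_theta = lp.shape
    r_max = np.hypot(*out_shape)
    ys, xs = np.mgrid[0:out_shape[0], 0:out_shape[1]]
    r = np.maximum(np.hypot(ys - cy, xs - cx), 1.0)   # clamp near the r = 0 blind spot
    th = np.mod(np.arctan2(ys - cy, xs - cx), 2 * np.pi)
    i_r = np.log(r) / np.log(r_max) * (n_r - 1)       # fractional log-r index
    i_t = th / (2 * np.pi) * n_theta                  # fractional theta index
    # For brevity the theta seam is clamped; a full version would wrap it.
    return map_coordinates(lp, [i_r, i_t], order=3, mode="nearest")
```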
Lucy-Richardson Iterative Algorithm
According to Fourier optics theory, the finite aperture is equivalent to a low-pass filter in the spatial light-field transformation, which cuts off spatial frequencies above a certain value [20]. In the spatial domain, this is represented as a high-resolution image convolved with the point spread function, resulting in image blur, as described in Equation (9). Unlike the super-resolution recovery of ordinary under-sampled images, whose goal is to recover high-frequency components from the aliased spectrum, the main purpose of super-resolution in our system is to recover the high-frequency components that are filtered out by the finite aperture, which is called the inverse problem. To solve this problem, many methods have been proposed, such as the Wiener inverse filter, projection onto convex sets, etc. Our system adopts the Lucy-Richardson iterative algorithm, whose main principle is the maximum likelihood criterion. As shown in Equation (10), it attempts to find a high-resolution image estimate $f(x, y)$ that maximizes the likelihood function $P(g \mid f)$, where $g$ is the low-resolution image, resulting in a maximum likelihood estimate $\hat{f}(x, y)$:

$\hat{f} = \arg\max_f P(g \mid f), \quad (10)$

where $P(g \mid f)$ is a statistical model that reflects the radiation distribution of the object plane. The iterative equation of this algorithm is

$\hat{f}_{k+1}(x, y) = \hat{f}_k(x, y)\left[\hat{h}(x, y) * \frac{g(x, y)}{h(x, y) * \hat{f}_k(x, y)}\right]. \quad (11)$

In Equation (11), $*$ denotes the convolution operation, $h(x, y)$ is the PSF obtained from the FEKO simulation of an ideal point source with $\sum h(x, y) = 1$, and $\hat{h}(x, y)$ represents the spatially reversed version of $h(x, y)$. The initial iteration condition is generally set to $\hat{f}_0 = g(x, y)$. To deal with the ill-posedness of the problem, the iteration number should be limited [27,28]; it is set to 300 in this paper based on the restoration effect. Finally, the $\hat{f}_{k+1}$ that satisfies the iteration requirement is the estimated high-resolution image.
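For readers who want to reproduce this step, the following Python sketch runs the iteration of Equation (11) with FFT-based convolutions. It is a minimal sketch assuming a normalized PSF and a non-negative input image (scikit-image also ships a Richardson-Lucy routine); names are illustrative, not the paper's code.

```python
import numpy as np
from scipy.signal import fftconvolve

def lucy_richardson(g, h, n_iter=300, eps=1e-12):
    """Run the iteration of Equation (11); returns the estimate after n_iter steps."""
    g = np.asarray(g, dtype=float)
    h = np.asarray(h, dtype=float)
    h = h / h.sum()                       # enforce sum(h) = 1 as in the text
    h_flip = h[::-1, ::-1]                # h_hat: the spatially reversed PSF
    f = g.copy()                          # initial condition f_0 = g(x, y)
    for _ in range(n_iter):               # iteration count limited for regularization
        blurred = fftconvolve(f, h, mode="same")
        ratio = g / np.maximum(blurred, eps)
        f = f * fftconvolve(ratio, h_flip, mode="same")
    return f
```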
In conclusion, the flow chart of our LPCT and L-R iterative method that was presented in Section 3 is shown in Figure 5.
Results and Analysis
To verify the effectiveness of the algorithm, we applied it to the super-resolution process on both simulation and experimental results. In addition, we compared it with typical algorithms for solving such problems in optical imaging, including the direct L-R method and the segmentation method in the literature [29]. Since the super-resolution process is a typical inverse process, the recovered high-resolution image cannot simply be compared with the low-resolution one, so traditional image-quality metrics could not be used here; higher resolution also does not necessarily mean a better visual effect. Considering the requirements for imaging and locating the electromagnetic leakage of electronic equipment, we used the correctness of the position and the number of radiation sources as the criteria for evaluating the super-resolution algorithm.
PSF Used for the Results
Since the L-R iterative algorithm is a non-blind deconvolution method, the selection of the PSF is important. In our algorithm, the PSF is acquired from the FEKO simulation result at each frequency used in the experiment, and the sizes of the PSF and the image are both 51 × 101. Figure 6 shows the PSFs of the different methods at 3 GHz: (a) is the PSF of the direct deconvolution algorithm; (b) is the polar transformation's PSF; (c) is the PSF of our method, whose axes are set to the same range as the acquired image; (d) shows the 9 sub-PSFs of the PSF set in the segmentation method, acquired at different locations. The axes represent the pixel number of each sub-block, which is usually 17 × 33 or 17 × 35.
Deblur of Simulation Results
We used FEKO to build a system model and simulated the imaging of ten dipole sources (4 GHz) placed in a circle on the object plane. The model structure and the degraded pictures are shown in Figure 7. In Figure 8, the upper row shows the step-by-step result of the log-polar transformation algorithm. It can be seen that the log-polar transformation basically eliminates the spatial variation of the blur, and the number and positions of the electromagnetic radiation sources are consistent with the real conditions. The lower row shows the results of applying the other algorithms to Figure 8a. Due to the large blur and the obvious spatial variation, the direct super-resolution (SR) algorithm and the segmentation SR algorithm cannot satisfy the space-invariance hypothesis, which results in speckle noise and false signals. In addition, the polar-transformation-only method can recover part of the blur, but leaves the noise caused by the radially variant blur.
Deblur of Experiment Results
We built an imaging system in a microwave anechoic chamber and performed imaging experiments on three horizontally placed, double-ridged horn antennas (3-6 GHz, 10 dBm), as shown in Figure 9. The results are shown in Figure 10: the upper row is the step-by-step result of the log-polar transformation algorithm, and the lower row shows the results of the other algorithms. It can be seen that the direct SR and segmentation SR algorithms still generate noise and interference, which would cause mistakes in the judgment and localization of the electromagnetic leakage; the log-polar algorithm effectively suppresses this problem. Figure 11 shows the imaging results (upper) and resolution recovery results (lower) for radiation sources of different frequencies. As we can see, the algorithm can be applied across a wide spectrum as long as the PSF can be obtained.
In theory, the recovered image should have the same resolution and position as the target signal. The resolution can be measured by the beamwidth of the recovered signal, so we evaluated the results in Figure 11 by the consistency of the position and beamwidth of the three signals. In the evaluation, the positions of the different antennas were determined by the x and y coordinates (unit: pixels) of the maximum value of their beams. The beamwidth used in this evaluation was borrowed from the antenna field and was set at −6 dB, meaning the area (unit: pixels) whose values are within 6 dB of the beam peak. In this paper, it was calculated as follows: • Step 1 Convert the z-axis of the image matrix to a dB scale by a log operation.
• Step 2 Mark a pixel as 1 if its value is larger than the peak minus 6 dB, and 0 otherwise.
• Step 3 Count the number of 1s in the whole image as the −6 dB beamwidth; a sketch of this count follows.
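The following Python sketch carries out Steps 1-3, assuming `img` is a non-negative 2-D intensity map; whether 10·log10 or 20·log10 applies depends on whether the map is power-like or amplitude-like, which the text leaves open, so this is an assumption. Names are illustrative.

```python
import numpy as np

def beamwidth_minus6db(img, eps=1e-12):
    """Count the pixels whose value lies within 6 dB of the image peak."""
    img_db = 10.0 * np.log10(np.maximum(img, eps))  # Step 1: dB scale (assumes a power-like map)
    mask = img_db >= img_db.max() - 6.0             # Step 2: mark pixels within 6 dB of the peak
    return int(mask.sum())                          # Step 3: the -6 dB beamwidth in pixels
```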
The results are as follows. In Tables 1 and 2, "left" means the left signal in the recovered image, and similarly for "center" and "right". The units of both tables are pixels. Table 1 shows that the positions of the three signals were almost the same and matched the real antenna positions. Table 2 shows that the recovered signals had good consistency in beamwidth with each other at different frequencies.
The position and beamwidth results show that the method is suitable for a wide frequency range.
The transformation is not complex and the image size is not too large, so the time the algorithm consumes (on a laptop with MATLAB R2014b, Windows 7, and an Intel i5) is within 1 s. The number of iterations in the L-R method is set to 300 for regularization.
Conclusions
In the deblurring of focal-plane images, the space-variant characteristic is hard to eliminate due to its large dimensionality. This problem is particularly acute in our focal-plane microwave imaging system due to the low frequency and off-axis aberration. In this paper, we analyzed the physical model of the space-variant blur and proposed an LPCT method as the preprocessing step before Lucy-Richardson restoration. This algorithm starts from the principle of the image blur instead of approximating it, and has proved to be effective for the space-variant deblurring of focal-plane microwave imaging. It significantly improves the resolution and accuracy of electromagnetic imaging and detection in EMC testing. However, this method introduces a blind spot, as discussed above, and there is currently no better solution than to avoid it. In future work, the interpolation in the transformation will be studied further to suppress the uneven rise of values, and regularization within the L-R algorithm will be investigated to help with the noise problem. | 4,839.8 | 2018-11-06T00:00:00.000 | [
"Engineering",
"Physics"
] |
Pattern Classification of Signals Using Fisher Kernels
The intention of this study is to gauge the performance of Fisher kernels for dimension simplification and classification of time-series signals. Our research has indicated that Fisher kernels yield a substantial improvement in signal classification by enabling clearer pattern visualization in three-dimensional space. In this paper, we exhibit the performance of Fisher kernels in two domains: financial and biomedical. The financial study involves identifying the possibility of collapse or survival of a company trading in the stock market. For assessing the fate of each company, we collected financial time-series composed of weekly closing stock prices in a common time frame, using the Thomson Datastream software. The biomedical study involves knee signals collected using the vibration arthrometry technique, and uses the severity of cartilage degeneration for classifying normal and abnormal knee joints. In both studies, we apply Fisher kernels incorporated with a Gaussian mixture model (GMM) for dimension transformation into a feature space, which is rendered as a three-dimensional plot for visualization and for further classification using support vector machines. From our experiments we observe that Fisher kernels fit both kinds of signals very well, with low classification error rates.
Generative models are used for randomly generating observable data, whereas discriminative models are used in machine learning for assessing the dependency of an unobserved random variable on an observed variable using a conditional probability distribution function. Fisher kernels extract more information from a single generative model than just its output probability [5]. The features obtained after applying Fisher kernels are known as Fisher scores. We analyze how these scores depend on the probabilistic model, and how they give us information about the internal representation of the data items within the model.
The benefit of using Fisher kernels comes from the fact that they limit the dimensions of the feature space in most cases, thereby giving some regularity to the visualization. This is very important when we are dealing with inseparable classes [6]. A typical Fisher kernel representation usually consists of numerous small gradients for higher-probability objects in a model, and vice versa [7]. Fisher kernels combine the properties of generative models, such as the hidden Markov model, and discriminative methods, such as support vector machines [1]. The use of Fisher kernels for dimension transformation has the following two advantages [1,5]: (1) Fisher kernels can deal with variable-length time-series data; (2) discriminative methods such as SVMs, when used with Fisher kernels, can yield better results.
Fisher kernels have been applied in various domains such as speech recognition [8-10], protein homology detection [3], web audio classification [11], object recognition [12], image decoding [13,14], and so forth. For detailed information on Fisher kernels, refer to [5]. Table 1 lists some recent applications of Fisher kernels and their performance.
Applying Fisher Kernels to Time-Series Models
In this study, we investigate the application of Fisher kernels for identifying patterns emerging from a time-series model. We use a generative Gaussian mixture model [5,21,22] for our complex system. For a binary classification problem, there will be two Gaussian components for each time-series signal. These Gaussian components help us assess the tendency of the signal to be categorized in either of the two classes. Let us make the following assumptions before we mathematically derive the Fisher score model; the methodology in this section has been adopted from [23]. (i) The complex system to be modeled is a black box out of which a time-series signal emerges.
(ii) In order to make the data distribution in the signal i.i.d. (independent and identically distributed), we apply some mathematical relations, such as log-normal relations, or normalize the dataset. By doing so, we generate another set of data which is nearly i.i.d., thus giving us a transformed time-series signal.
(iii) Upon applying the previous assumptions, we then generate the Fisher score model for the complex system, as explained further below.
Before we proceed with deriving the model and its equations, we assume the variables and meanings shown in Table 2:
• $r_i$: normalized value for the ith sample;
• $\theta(a_j, \mu_j, \sigma_j)$: Gaussian estimates for the $N_g$ components, with $a_j$ being the weight vector, $\mu_j$ the mean vector, and $\sigma_j$ the variance vector;
• $R_{i,j}$: Gaussian mixture model for the ith week's returns, built using $j = 2$ Gaussian components;
• $P(r_i \mid \theta)$: probability density function for the ith normalized sample value;
• $P(C \mid \theta)$: probability density function for the entire input vector or time-series signal.
(2) Our study is on binary pattern classification, and in order to achieve this we use the Fisher scores generated as described further in this section for plotting and visualizing the two categories (e.g., active and dead companies, or abnormal and normal knee joints).
(3) The length of each input vector (the number of samples available) can be variable, or all the input vectors can be of the same length. We assume each input vector to be a unique time-series signal.
(4) Our study of binary pattern classification has been applied to two domains: the financial domain, wherein we classify between potentially dead and active companies; and the biomedical domain, wherein we classify between abnormal and normal knee joints for assessing the risk of cartilage degeneration. In the financial study, we find the log-normal stock returns using the weekly stock prices, whereas in the biomedical domain we normalize the knee-angle signals to the interval [0, 1]. These stock returns or normalized knee angles are taken as our $r_i$ values. We do this to make the distributions i.i.d. in nature, as per the assumptions mentioned at the beginning of this section.
(5) We first find the initial values of the Gaussian estimates $\theta(a_j, \mu_j, \sigma_j)$ using the expectation maximization algorithm [24,25] for $j = 1, \ldots, N_g$. The expectation maximization algorithm is used for estimating the likelihood parameters of certain probabilistic models.
(6) Using these estimates we create the Gaussian mixture model M. The diagonal-covariance GMM likelihood for the ith normalized sample value is given by the probability density function

$P(r_i \mid \theta) = \sum_{j=1}^{N_g} a_j \, \mathcal{N}(r_i; \mu_j, \sigma_j^2).$

The global log-likelihood of an input vector's normalized values $C = \{r_1, r_2, \ldots, r_{N_c}\}$ is given using the probability density function as

$\log P(C \mid M, \theta) = \sum_{i=1}^{N_c} \log P(r_i \mid \theta),$

a single value for each input time-series signal. The Fisher score vector is composed of derivatives with respect to each parameter in $\theta(a_j, \mu_j, \sigma_j)$; the likelihood Fisher score vector for each signal is thus

$U_C = \left( \frac{\partial \log P(C \mid \theta)}{\partial a_j},\; \frac{\partial \log P(C \mid \theta)}{\partial \mu_j},\; \frac{\partial \log P(C \mid \theta)}{\partial \sigma_j} \right).$

Each of the derivatives comprises two components (one per Gaussian component), giving a 1 × 2 matrix for each derivative; thus, for each input vector we get a 6 × 1 Fisher score matrix. In order to plot the scores, we then add up each pair of Fisher scores (with respect to the weights, means, and variances) to get a three-dimensional scatter plot, with SFS denoting the sum of Fisher scores:

$\mathrm{SFS}_a = \sum_{j} \frac{\partial \log P(C \mid \theta)}{\partial a_j}, \quad \mathrm{SFS}_\mu = \sum_{j} \frac{\partial \log P(C \mid \theta)}{\partial \mu_j}, \quad \mathrm{SFS}_\sigma = \sum_{j} \frac{\partial \log P(C \mid \theta)}{\partial \sigma_j}.$

A numerical sketch of this computation follows.
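The sketch below fits a two-component 1-D GMM with scikit-learn's EM implementation and evaluates the closed-form gradients of the log-likelihood with respect to the weights, means, and standard deviations. These are the standard GMM derivatives written under our own assumptions (the paper does not publish code, and the constraint $\sum_j a_j = 1$ is ignored in the weight gradient for simplicity); names are illustrative.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def fisher_scores(r, n_components=2):
    """Gradients of log P(C|theta) w.r.t. weights, means, and std devs."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(r.reshape(-1, 1))
    a = gmm.weights_                                  # (N_g,)
    mu = gmm.means_.ravel()                           # (N_g,)
    sigma = np.sqrt(gmm.covariances_.ravel())         # (N_g,)
    comp = norm.pdf(r[:, None], mu[None, :], sigma[None, :])  # component densities
    p = comp @ a                                      # mixture density P(r_i|theta)
    gamma = a[None, :] * comp / p[:, None]            # responsibilities gamma_ij
    d = r[:, None] - mu[None, :]
    d_a = (gamma / a[None, :]).sum(axis=0)                    # dL/da_j
    d_mu = (gamma * d / sigma[None, :] ** 2).sum(axis=0)      # dL/dmu_j
    d_sigma = (gamma * (d ** 2 / sigma[None, :] ** 3
                        - 1.0 / sigma[None, :])).sum(axis=0)  # dL/dsigma_j
    return d_a, d_mu, d_sigma

# One normalized time series -> one 3-D point (SFS_a, SFS_mu, SFS_sigma).
r = np.random.default_rng(0).normal(size=200)
d_a, d_mu, d_sigma = fisher_scores(r)
print(d_a.sum(), d_mu.sum(), d_sigma.sum())
```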
The Fisher scores obtained for each input vector are then used as input data for training and testing our SVM model. The SVM model performs binary classification, from which we can infer statements about the future state of the complex system under consideration. It should be noted that the datasets used in our financial time-series experiments are balanced (256 active and 256 dead companies), whereas the biomedical time-series datasets are imbalanced (38 abnormal and 51 normal cases). In order to avoid a loss of performance, we used the Gaussian radial basis function as our kernel function, which creates a good classifier for nonoverlapping classes, and applied the SMO (sequential minimal optimization) method for finding the hyperplane, which splits our large quadratic optimization problem into smaller portions; a sketch of this stage follows.
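A minimal sketch of the SVM stage, assuming the three summed Fisher scores per signal as features: scikit-learn's SVC defaults to the Gaussian RBF kernel and is built on an SMO-style solver (libsvm). The data below are random placeholders, not the paper's datasets.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X = np.random.default_rng(1).normal(size=(512, 3))  # placeholder 3-D Fisher scores
y = np.repeat([0, 1], 256)                          # 0 = dead, 1 = active (balanced)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)             # RBF kernel, SMO-style solver
print("test accuracy:", clf.score(X_te, y_te))
```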
The correctness of the classification performed by the SVM is further verified by applying the same Fisher score data to linear discriminant analysis (LDA), and by inspecting the false positives versus the false negatives. As a note, our study is not intended to analyze the performance of the LDA approach or compare it with the SVM. For detailed information on SVM concepts, the reader may refer to [26].
Section 3 below describes our experiments with real-world data.
Financial Time-Series
In this study, we have considered the companies falling under the Pharmaceuticals and Biotechnology sectors listed on the TSX [27], NYSE [28], TSX-Ventures [27], and NASDAQ [29] stock exchanges. We collected the weekly stock price data for the various companies over a common time frame from January 1950 to December 2008. Figures 1 and 2 illustrate the stock price distributions of the active and dead companies in the pharmaceuticals and biotechnology sector.
From observing the stock price distribution charts, it becomes clear that it is difficult or almost impossible to predict the next stock price, or even the future state of the stock price, that is, whether the price will be high or low. In our study, by classifying between the active and dead companies, we have tried to infer statements about the performance of each company and whether it would be a potential survivor in the long run. Our experiments are not intended to predict an active company's rise or fall within a specific time range, but rather to provide a qualitative measurement of its performance in the stock market with respect to the dead companies' cluster. In other words, based on cluster analysis, we can infer that a company represented in three dimensions has more inclination to survive if it is nearer to the active cluster, or to collapse if it is nearer to the dead cluster.
An "active" company in this context indicates that it is currently trading and is listed in a particular stock exchange.Whereas, a "dead" company indicates that the firm has been delisted from the stock exchange, and that it no longer performs stock trading.A company can be listed as "dead" for many reasons such as bankruptcy, mergers, or acquisitions.Thomson datastream 30 uses a flat plot or a constant value as an indication that the company has stopped trading in the exchange.This becomes clear from Figure 2.
As observed in Figures 1 and 2, the stock price distribution is not i.i.d. (independent and identically distributed). So, in order to normalize the distribution, the datasets of the various active and dead companies are processed to obtain the stock price returns using Black and Scholes theory [31-33]; Figures 3 and 4 show the resulting log-normal returns. A constant stock price line indicates that the company stopped trading at that price, and hence the stock returns from that point on must be close to zero; accordingly, the returns plot of each dead company in Figure 4 converges to the zero constant once the company stops trading. A small sketch of this return computation follows.
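As a small illustration, the log return $r_i = \ln(P_i / P_{i-1})$ of a weekly closing-price series can be computed as below; the flat price tail stands in for a dead company's plot, and the numbers are invented for the example.

```python
import numpy as np

prices = np.array([10.0, 10.5, 10.2, 10.2, 10.2])  # flat tail: trading has stopped
returns = np.diff(np.log(prices))                  # r_i = ln(P_i / P_{i-1})
print(returns)   # the trailing zeros mirror the convergence to zero in Figure 4
```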
The normalized stock returns data is then used for finding the initial estimates of the mean, variance, and weight vectors using the expectation maximization algorithm [24,25]. The normalized dataset of each company is then processed using Fisher kernels implemented with a Gaussian mixture model in order to obtain the Fisher scores with respect to three parameters. These parameters are the derivatives of the global log-likelihood of each dataset with respect to the mean, variance, and weight vectors. These Fisher scores, when plotted in three dimensions, provide a scope for visually classifying between the active and dead companies, as shown in Figure 5. At this stage, we have essentially performed a transformation of a financial time-series into six dimensions; that is, for each company we have processed its stock market data into a set of six Fisher scores. In order to plot these Fisher scores, we have summed up the Fisher score pairs for all the parameters.
SVMs were applied to both the three-dimensional (Figure 5) and six-dimensional Fisher scores for classification and prediction; the results are shown in Table 3. We randomly split the Fisher score dataset into training and testing groups. We then applied support vector machines for training and testing of the system, using the Gaussian radial basis function (RBF) as our kernel function. For finding the hyperplane separating the two classes, we used the method of sequential minimal optimization (SMO) [34].
In order to validate our results, we further used the method of linear discriminant analysis (LDA) [21] along with the leave-one-out cross-validation technique, as shown in Tables 4 and 5.
(i) In the case of the three-dimensional Fisher scores, we obtained a classification accuracy of 95.9% in the original grouped cases, and about 95.7% in the cross-validated cases.
(ii) Similarly, in the case of the six-dimensional scores, the classification accuracy for both the original grouped cases and the cross-validated cases was 95.7%.
Biomedical Time-Series
As mentioned, the Fisher kernel technique was also applied for classifying abnormal and normal knee joints. A database of 38 abnormal and 51 normal knee-joint cases was used in our experiments. The knee-joint signal data was collected using vibration arthrometry, as described in [35]. Sample plots of the signals are shown in Figures 6 and 7. In order to simplify our calculations, we normalize the dataset values for each case to the interval [0, 1] before generating the Fisher scores, as shown in Figures 8 and 9.
Once the Fisher scores are generated using the method described in Section 2, we plot them in the same way as the Fisher score plot for the financial time-series, as shown in Figure 10.
SVMs were then applied to both the three-dimensional and six-dimensional Fisher scores for classification and prediction; the results are shown in Table 6.
The correctness of the classification performed by the SVM is further verified by applying the same Fisher score data to linear discriminant analysis (LDA) [21], as shown in Tables 7 and 8.
(i) In the case of the three-dimensional Fisher scores, we obtained a classification accuracy of 82.0% in the original grouped cases, and about 75.3% in the cross-validated cases.
(ii) Similarly, in the case of the six-dimensional scores, we obtained a classification accuracy of 91.0% in the original grouped cases, and about 88.8% in the cross-validated cases.
Discussions, Conclusions, and Future Works
In our previous work [36], Fisher kernels were not able to perform binary classification in two dimensions. In this study, by introducing three-dimensional Fisher scores, we have been able to separate and visualize the two classes more accurately. The intention of our research in this study was to analyze the classification performance of time-series signals using Fisher kernels as feature extractors. Specifically, when we classified active companies versus dead companies in a given economic sector, our intention was to see how good the dimension transformation of the time-series was. Also, with regard to the separation between the two classes, we attempted to predict the potential survival of a company using SVMs. In other words, by visualizing the two clusters we observed that a few active companies were much nearer to the dead cluster, leading us to infer that these companies could potentially collapse in the long run. This was evident from the observation that these active companies exhibited stock price changes similar to those of dead companies before they collapsed. A similar observation could be derived from our biomedical time-series results.
A normal distribution is easier to analyze and model using a GMM than a non-i.i.d. distribution; in other words, Gaussian mixture models (GMMs) give the best fit for normally distributed datasets. A qualitative observation of the time-series signals used in our experiments reveals that the distribution is Laplacian in nature: although the histogram of a sample vector appears bell-shaped, as in a normal distribution, the curve is peaked around the mean value and has fat tails on either side. Upon further training, testing, and cross-validation using SVMs and LDA, we achieved high classification rates in both studies, as indicated in Tables 3 and 6. The factors behind such high classification rates could be as follows. (i) The characteristic property of Fisher kernels: retaining the essential features during dimension transformation.
(ii) The application of the SMO method for finding the hyperplane, which splits our large quadratic optimization problem into smaller portions for solving.
(iii) The findings of other studies, as indicated in Table 1, wherein Fisher kernels have performed exceptionally well.
Automation of our research work to yield dynamic outputs in the form of predictive statements and visualization plots can be pursued as a future study. Experimenting with variable-length time-series in this study has definitely opened doors for more research, such as assessing the time frame of cartilage degeneration and a scope for monitoring osteoarthritis. Analyzing these issues in the near future will be quite interesting and challenging.
Figure 1: Stock price distribution of active companies.
Figure 2: Stock price distribution of dead companies.
Figure 3: Log-normal stock returns distribution of active companies.
Figure 4: Log-normal stock returns distribution of dead companies.
Figure 10: Fisher score plot for visualizing biomedical time-series.
Table 1: Examples of Fisher kernel applications.
Table 2: List of variables used for Fisher scores' computation.
Table 3: SVM performance results for financial time-series.
Table 4: LDA results for 3D Fisher scores: financial data.
Table 5: LDA results for 6D Fisher scores: financial data.
Table 6: SVM performance results for biomedical time-series.
Table 7: LDA results for 3D Fisher scores: biomedical data.
Table 8: LDA results for 6D Fisher scores: biomedical data. | 4,110.4 | 2012-09-19T00:00:00.000 | [
"Computer Science"
] |
Research and Implementation of Millet Ear Detection Method Based on Lightweight YOLOv5
As millet ears are dense, small in size, and seriously occluded in the complex grain field scene, a target detection model suited to this environment requires high computing power, and it is difficult to deploy real-time detection of millet ears on mobile devices. A lightweight real-time detection method for millet ears based on YOLOv5 is proposed. First, the YOLOv5s model is improved by replacing the YOLOv5s backbone feature extraction network with the lightweight MobilenetV3 model to reduce the model size. Then, in the multi-feature fusion detection structure, a micro-scale detection layer is added to fuse high-level and low-level feature maps and reduce the loss of target information. The Merge-NMS technique is used in post-processing to reduce the influence of boundary blur on the detection effect and increase the detection accuracy of small and occluded targets. Finally, the models reconstructed by the different improvements are trained and tested on a self-built millet ear data set. The AP value of the improved model reaches 97.78%, the F1-score is 94.20%, and the model size is only 7.56 MB, 53.28% of the standard YOLOv5s model size, with a better detection speed. Compared with other classical target detection models, it shows strong robustness and generalization ability. The lightweight model also performs better in detecting pictures and videos on the Jetson Nano. The results show that the improved lightweight YOLOv5 millet detection model can overcome the influence of complex environments and significantly improve the detection of millet ears under dense distribution and occlusion. The millet detection model is deployed on the Jetson Nano, and a millet detection system is implemented based on the PyQt5 framework. The detection accuracy and speed of the system meet the actual needs of intelligent agricultural machinery and show good application prospects.
Introduction
Millet is one of the most important miscellaneous grain crops in China. Its planting area accounts for around 80% of the world's total planting area, while its output accounts for approximately 90% of the world's total output [1]. For a long time, counting ears has had to rely on manual observation and statistics in studies of millet cultivation and breeding, which is labor-intensive, time-consuming, and inefficient. In the actual mixed environment, the similarity, dense distribution, and occlusion of ears and the subjectivity of statisticians make counting grains and ears difficult, and mistakes are common. Millet ears are a key agronomic index for evaluating the yield and quality of foxtail millet, and play an important role in nutritional diagnosis, growth period detection, and pest detection. Therefore, rapid and accurate detection of millet ears on mobile devices can play an important role in yield estimation [2,3]. At present, research on grain spike detection is mainly based on wheat [4-6], rice [7-10], and other major grain crops, mainly aiming to improve the detection accuracy and detection speed of the models. Bao et al. [11] proposed a wheat spike recognition model based on a convolutional neural network; to improve the recognition accuracy, a sliding window was constructed using an image pyramid to realize multi-scale recognition of wheat spikes. The accuracy of the model was 97.30%, and the model was used to count wheat spikes and estimate wheat yield. Zhang et al. [12] realized a convolutional neural network recognition model for winter wheat spikes and combined it with non-maximum suppression to achieve rapid and accurate detection of wheat spikes in the actual environment. Wang et al. [13] realized the detection and counting of wheat spike targets in different periods by improving the YOLOv3 model; the improved model showed strong robustness, but it was still difficult to detect occluded and smaller wheat spikes. Bao et al. [14], based on the deep convolutional neural network CSRNet, studied the density map of a single wheat ear and counted the wheat ears according to the density value. Xu et al. [15] adopted the minimum area intersection ratio (MAIR) feature extraction algorithm and transfer learning to achieve automatic wheat ear counting based on the YOLOv5 model. Liu et al. [16] used an improved Bayes matting algorithm to segment the wheat ear from the complex background, and used smoothing filtering, erosion, filling, and other algorithms to segment the wheat spikelets into connected areas for marking and counting, which improved the counting accuracy. Xie et al. [17] proposed a wheat ear detection model based on deep learning (FCS R-CNN) and introduced methods such as the feature pyramid network (FPN) into Cascade R-CNN to improve detection accuracy and speed.
In the actual environment, millet ears are densely distributed and seriously occluded, and it is difficult for a model to detect the ear heads in such a complex environment. Therefore, when designing the model, it is necessary to consider small-scale targets, occluded targets, and blurred target boundaries, as well as the deployment of the model on an embedded platform in the actual environment. Jiang et al. [18] designed a rice panicle detection method based on a generating feature pyramid (GFP-PD); aiming at the noise of small-sized rice panicles and leaves blocking rice panicles, the structural feature pyramid and an occlusion sample repair module (OSIM) were used to improve the detection accuracy of the model. Zhang et al. [19] introduced dilated convolution into the Faster R-CNN model to address small rice panicle targets and used ROIAlign instead of ROIPooling to improve the average detection accuracy for rice panicles. Jiang et al. [20] proposed an improved NMS-based max intersection over portion (MIoP-NMS) algorithm, implemented it in the single-stage YOLOv4 detection framework, and estimated the number of banana trees in dense, occluded banana forests with about 98.7% accuracy. Bao et al. [21] designed a lightweight convolutional neural network, simple net, constructed from convolution and inverted residual blocks and combined with the convolutional attention mechanism (CBAM) module, which can be used for automatic recognition of wheat ear diseases on mobile terminals. Zhao et al. [22] proposed an improved YOLOv5-based method to detect wheat ears in UAV images; by adding a micro-scale detection layer and using the WBF algorithm, the detection problems caused by the dense distribution and occlusion of small wheat ears were solved. Yang et al. [23] proposed an improved YOLOv5 apple flower growth state detection method, introduced the CA attention module, and designed a multi-scale detection structure to improve the detection accuracy of the model. Zhang et al. [24] designed a potato detection model by improving the YOLOv4 model: the CSP-Darknet53 backbone was replaced by the MobilenetV3 network to reduce the model volume while maintaining the average detection accuracy; deployed on embedded devices, YOLOv4-MobilenetV3 showed strong robustness. Due to the growth characteristics of millet in the natural environment, the shape and spatial distribution of millet ears are irregular, so it is difficult to apply a standard target detection model to detect millet ears in the actual environment. In this study, the YOLOv5s model was used as the original model, and the main feature extraction network was replaced by the lightweight MobilenetV3 model to reduce the model size. On this basis, the feature fusion detection structure was improved, and the Merge-NMS algorithm was used to improve the lightweight model. Testing and evaluating the model on the self-built millet data set provides a theoretical basis for rapid and accurate detection of millet on mobile devices.
Image Acquisition
The millet ear images were collected from the experimental field of Shenfeng Village, Shanxi Agricultural University. The millet ear images (Figure 1) included 25 at the heading stage, 230 at the filling stage, and 45 at the mature stage, for a total of 300 images. The length of a millet ear is about 20-35 cm; the ear leans to one side along the end of the stem with the head downward, and the planting density of millet is very high, about 375,000 to 600,000 plants/ha, resulting in serious occlusion of the millet ears in the field, which degrades the performance of traditional target detection models on millet ears. Therefore, the images were taken from the upper side in this study. The resolution of the collected images is 4032 × 3024 pixels, stored in jpg format. Due to the limited computing resources in the laboratory, the original images were compressed to 1024 × 768 pixels to speed up data processing. The millet images collected in the natural environment contain many complex situations, such as ears covered by leaves and stems, ears intertwined with each other, and dense distribution of ears, which interfere with the model's detection of the ears.
Image Preprocessing
The LabelImg annotation tool is used to build the millet ear image data set in the PASCAL VOC format from the collected millet images; the ears in each image are marked (Figure 2) to generate the corresponding XML files. In order to prevent overfitting of the network model caused by the small data set, and to improve the generalization ability of the trained model, data augmentation is applied to the millet ear data set (Figure 3). In this study, the self-made millet data set was randomly augmented by rotation, flipping, mirroring, brightness adjustment, and other methods; the annotation files corresponding to each image were transformed at the same time (a sketch of this synchronized augmentation follows), and the data set was expanded to 2100 images. The data set was randomly divided into a training set, a validation set, and a test set in the ratio 8:1:1.
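The following Python sketch shows the kind of synchronized augmentation described above for one image and its VOC-style boxes; a horizontal flip and a brightness adjustment are given as representatives, with illustrative file and function names (the paper does not publish its augmentation code).

```python
import cv2
import numpy as np

def hflip_with_boxes(img, boxes):
    """Mirror the image horizontally and update the box x-coordinates."""
    h, w = img.shape[:2]
    flipped = cv2.flip(img, 1)
    boxes = np.asarray(boxes, dtype=float)
    boxes[:, [0, 2]] = w - boxes[:, [2, 0]]   # swap and mirror x_min / x_max
    return flipped, boxes

def adjust_brightness(img, factor):
    """Scale pixel intensities, clipping to the valid 8-bit range."""
    return np.clip(img.astype(float) * factor, 0, 255).astype(np.uint8)

img = cv2.imread("millet.jpg")                               # hypothetical file name
aug, new_boxes = hflip_with_boxes(img, [[100, 50, 220, 180]])  # VOC [x1, y1, x2, y2]
aug = adjust_brightness(aug, factor=1.2)
```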
YOLOv5 Model
The YOLO (You Only Look Once) [25] series is a single-stage target detection model that uses a regression method and has good performance; the structure of YOLOv5s is shown in Figure 4. The YOLOv5s input retains the same mosaic data augmentation method as YOLOv4, which randomly scales, cuts, distributes, and splices four images into a new picture, as shown in Figure 5.
YOLOv5 uses the gradient descent method to optimize the objective function during training. As the number of iterations increases, the loss value (LOSS) approaches the global minimum, and the learning rate becomes small. In order for the model to reach the best convergence state after training, YOLOv5 adopts a cosine annealing schedule, which reduces the learning rate through the cosine function: the learning rate decreases slowly at first, then drops rapidly, and then decreases slowly again. The purpose is to avoid falling into the current local optimum and to keep adjusting the learning rate so that the model converges to a new optimum until training stops. The cosine annealing learning rate is

$l_{new} = l^i_{min} + \frac{1}{2}\left(l^i_{max} - l^i_{min}\right)\left(1 + \cos\left(\frac{T_{cur}}{T_i}\pi\right)\right),$

where $l_{new}$ is the latest learning rate, $i$ is the index of the current run, $l^i_{min}$ is the minimum learning rate, $l^i_{max}$ is the maximum learning rate, $T_{cur}$ is the number of epochs executed so far, and $T_i$ is the total number of epochs in run $i$.
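A minimal sketch of this schedule, assuming a single annealing run with fixed endpoints (YOLOv5 itself drives the schedule from its hyperparameter file, and the values below are illustrative):

```python
import math

def cosine_annealing_lr(t_cur, t_i, lr_min=1e-5, lr_max=1e-2):
    """Learning rate after t_cur of t_i epochs in the current run."""
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * t_cur / t_i))

for epoch in range(0, 101, 20):
    print(epoch, f"{cosine_annealing_lr(epoch, 100):.6f}")  # slow-fast-slow decay
```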
Improvement of the YOLOv5 Model
Use MobilenetV3 to Modify the Model Structure of YOLOv5
MobilenetV3 [27] is a lightweight neural network that combines real-time capability, speed, and accuracy. The backbone network of MobilenetV3 is based on the Bneck structure composed of inverted residual blocks, including ordinary convolution and depthwise separable convolution, and adds an attention mechanism (SE module) to the fully connected layer. Compared with standard convolution, the depthwise separable convolution in the inverted residual block can significantly reduce the number of parameters of the overall model and thus the model size [28].
As shown in Figure 6, assume the input feature map has M channels, the convolution kernel size is k × k, and there are N output channels. The parameters of the standard convolution are then

$P_{std} = k \times k \times M \times N.$

The depthwise separable convolution is composed of a depthwise convolution and a pointwise convolution. The kernel size of the depthwise convolution is k × k × 1, with M such kernels, which filter each channel of the input; the kernel of the pointwise convolution is 1 × 1 × M, with N such kernels, which convert the channels. The parameters of the depthwise separable convolution are therefore

$P_{ds} = k \times k \times M + M \times N.$

Comparing the depthwise separable convolution with the standard convolution gives the parameter ratio

$\frac{P_{ds}}{P_{std}} = \frac{k^2 M + M N}{k^2 M N} = \frac{1}{N} + \frac{1}{k^2}.$
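The ratio can be checked numerically; the following PyTorch sketch builds both layer types and counts their parameters (layer sizes are illustrative, not taken from the paper's network):

```python
import torch.nn as nn

M, N, k = 32, 64, 3  # input channels, output channels, kernel size

standard = nn.Conv2d(M, N, kernel_size=k, padding=1, bias=False)
depthwise_separable = nn.Sequential(
    nn.Conv2d(M, M, kernel_size=k, padding=1, groups=M, bias=False),  # depthwise: k x k x 1, M kernels
    nn.Conv2d(M, N, kernel_size=1, bias=False),                       # pointwise: 1 x 1 x M, N kernels
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard))              # k*k*M*N = 18432
print(count(depthwise_separable))   # k*k*M + M*N = 2336
print(count(depthwise_separable) / count(standard), 1 / N + 1 / k**2)  # both ~0.1267
```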
Merge-NMS Algorithm
The resolution of the image affects the detection performance, i.e., blurred pixels lead to blurred boundaries of the detection target. Due to this factor, it is not easy to accurately distinguish overlapping and occluded millets. In this study, standard non-maximum suppression (NMS) was improved to fusion non-maximum suppression (Merge-NMS) [29] to reduce blurred grain target boundaries during post-processing. At the end of each iteration, standard NMS only retains the anchor box with the highest score, and the anchor boxes that overlap with it are suppressed, so a large number of valuable anchor boxes are also discarded. Merge-NMS utilizes the anchor box information suppressed by standard NMS and fuses it with other anchor boxes to obtain a more accurate prediction anchor box. In the pseudo-code of Merge-NMS, box is the detection anchor box, Cls is the classification confidence, and Loc is the location confidence. The final score S of the anchor box is obtained by multiplying Cls and Loc. At the beginning, all anchor boxes are sorted according to the score S. In each cycle, the anchor box ($b_m$) with the highest score is taken out from all anchor boxes. If the score of an anchor box that highly overlaps with $b_m$ is greater than the Merge-NMS threshold, $b_m$ merges with these boxes to form a new detection anchor box, which is put into the final detection set D. The new detection anchor box is calculated as follows:

$x_m = \frac{\sum_k loc_k \, x_k}{\sum_k loc_k}$

where $x_m$ is the coordinate of $b_m$, $loc_k$ is the location confidence of box k, and $x_k$ is the coordinate of the selected anchor box in each cycle. The higher the location confidence $loc_k$, the higher the weight of that anchor box in the new detection anchor box $x_m$.
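Since only the pseudo-code is described, the following NumPy sketch illustrates the idea. The IoU-based overlap test and the 0.5 threshold are illustrative assumptions; the score S = Cls × Loc and the location-confidence-weighted fusion follow the description above.

```python
import numpy as np

def iou(box, others):
    """IoU between one box and an array of boxes; boxes are [x1, y1, x2, y2]."""
    x1 = np.maximum(box[0], others[:, 0]); y1 = np.maximum(box[1], others[:, 1])
    x2 = np.minimum(box[2], others[:, 2]); y2 = np.minimum(box[3], others[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (others[:, 2] - others[:, 0]) * (others[:, 3] - others[:, 1])
    return inter / (area + areas - inter + 1e-9)

def merge_nms(boxes, cls, loc, thr=0.5):
    """Merge-NMS sketch: fuse the boxes overlapping b_m instead of discarding them."""
    boxes = np.asarray(boxes, float)
    loc = np.asarray(loc, float)
    score = np.asarray(cls, float) * loc            # final score S = Cls * Loc
    order, detections = np.argsort(-score), []
    while order.size:
        m, rest = order[0], order[1:]
        ious = iou(boxes[m], boxes[rest])
        group = np.append(rest[ious > thr], m)      # b_m plus its highly overlapping boxes
        w = loc[group][:, None]                     # location confidence as fusion weight
        merged = (w * boxes[group]).sum(axis=0) / w.sum()  # x_m = sum(loc_k x_k) / sum(loc_k)
        detections.append((merged, score[m]))
        order = rest[ious <= thr]                   # keep the non-overlapping boxes
    return detections
```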
Improvement of Multi-Feature Fusion Detection Structure
The original structure of YOLOv5s is designed with three scale feature detection layers. For the input image, feature maps down-sampled 8, 16, and 32 times are used to detect targets of different sizes. In the network model, the low-level feature map has higher resolution, the target features are obvious, and the target position is more accurate. After multiple convolution operations, the high-level feature map obtains rich semantic information, but its resolution is reduced. Because the grain sizes in the images obtained in the actual environment are uneven, the three detection layers of the original YOLOv5s structure use large down-sampling multiples, which easily lose the feature information of small targets, and the high-level feature map cannot easily capture the features of small targets. In this study, a micro-scale feature detection layer is added so that the low-level feature map and the high-level feature map are fused by splicing before detection, which can effectively improve the detection accuracy; a sketch of this fusion idea follows.
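Below is a minimal PyTorch sketch of this kind of low-level/high-level feature fusion. The channel widths, nearest-neighbor upsampling, and layer shapes are illustrative assumptions, not the exact YOLOv5s configuration.

```python
import torch
import torch.nn as nn

class MicroScaleFusion(nn.Module):
    """Sketch of the idea behind an extra micro-scale detection layer:
    upsample a semantically rich high-level map and concatenate it with a
    high-resolution low-level map before a finer detection head."""

    def __init__(self, c_high=256, c_low=64, c_out=128):
        super().__init__()
        self.reduce = nn.Conv2d(c_high, c_low, 1)          # align channel counts
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.fuse = nn.Conv2d(2 * c_low, c_out, 3, padding=1)

    def forward(self, high, low):
        x = self.up(self.reduce(high))                     # match the low-level resolution
        return self.fuse(torch.cat([x, low], dim=1))       # spliced micro-scale features

# Example: a stride-8 low-level map (80x80) fused with a stride-16 map (40x40).
low = torch.randn(1, 64, 80, 80)
high = torch.randn(1, 256, 40, 40)
print(MicroScaleFusion()(high, low).shape)                 # torch.Size([1, 128, 80, 80])
```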
Millet Ear Detection Model Based on Lightweight YOLOv5
As shown in Figure 7, the ear detection model is based on lightweight YOLOv5. The adaptive image scaling function of the input end processes the input image into a uniform size of 640 × 640 × 3, and the backbone module of YOLOv5 is replaced with MobilenetV3 as the feature extraction network, which reduces the complexity of the model and the amount of computation, but also makes it easier to miss overlapping and smaller ears. In the multi-feature fusion detection structure, the micro-scale feature detection layer is added to reduce the loss of information during feature fusion, which can better adapt to the detection of millet ears in the complex environment of natural fields, obtain more target information, and improve the detection of small targets. In the post-processing stage, the Merge-NMS algorithm merges the anchor boxes by using the location confidence obtained in the feature fusion structure, so as to reduce the false and missed detections caused by boundary blurring.
Jetson Nano Platform Test
The experiment was conducted in the laboratory of Shanxi Agricultural University from January 2022 to March 2023. Training and testing were based on the Pytorch deep learning framework. The hardware configuration was an AMD Ryzen 7 5800H processor and a 6 GB NVIDIA GeForce RTX 3060 Laptop GPU. The operating system was Windows 10, 64-bit, with Python 3.8.5, CUDA 11.4, and cuDNN 8.2.4. The batch size was 4, and the number of epochs was set to 500. The momentum factor was 0.937, the weight decay coefficient was 0.0005, and the initial learning rate was 0.01.
Evaluating Indicator
In this study, the average detection accuracy (AP, %), F1 score (F1, %), detection time (s), model size, and floating-point operations (GFLOPs) were used as evaluation indicators. The average detection accuracy is the area under the precision-recall curve (P-R curve), i.e., the area enclosed by the curve and the coordinate axes. The F1 score is an indicator for the comprehensive evaluation of precision and recall rates, reflecting the overall performance of the model. The detection time is the average time for the model to detect one image. The model size is the memory space occupied by the model in the system. Floating-point operations reflect the complexity of the model. The calculation formulas of the precision rate (P, %), recall rate (R, %), AP value (%), and F1 (%) are as follows:

$P = \frac{TP}{TP + FP} \times 100\%$

$R = \frac{TP}{TP + FN} \times 100\%$

$AP = \int_0^1 P(R)\, dR \times 100\%$

$F1 = \frac{2 \times P \times R}{P + R}$

where TP is a true positive sample, indicating the number of correctly identified millets, FP is a false positive sample, i.e., the number of other objects incorrectly identified as millets, and FN is a false negative sample, i.e., the number of unrecognized millet targets.
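A small Python sketch of these metrics is given below; the counts in the example are made up for illustration and are not the test-set statistics reported later.

```python
import numpy as np

def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 (as percentages) from detection counts."""
    p = tp / (tp + fp) * 100
    r = tp / (tp + fn) * 100
    f1 = 2 * p * r / (p + r)
    return p, r, f1

def average_precision(precision, recall):
    """AP as the area under the P-R curve, via a trapezoidal approximation."""
    order = np.argsort(recall)
    p, r = np.asarray(precision, float)[order], np.asarray(recall, float)[order]
    return np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(r))

# Illustrative counts only (not the paper's test-set statistics):
print(precision_recall_f1(tp=90, fp=10, fn=15))
```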
Because some millet ears were not detected or overlapping millet ears were identified as single during image testing, and there were multiple objects in the images, we calculated the number of detected millet ears as a percentage of the total, which was another parameter used to evaluate the model.
Analysis of Training Results
The change trend of the loss value with the number of iterations reflects the training effect of the model; the closer the loss value is to 0 at the end of training, the better the model performs. The training loss curves of the improved YOLOv5s model and the standard YOLOv5s model for this study are shown in Figure 10. It can be seen from the curves that the loss values of the two models decrease as the number of training iterations increases and gradually stabilize. After 200 iterations of the improved model, the training set loss value and the validation set loss value gradually converge; the training set loss value is less than 0.04, the validation set loss value is less than 0.02, and the loss values change smoothly after 300 iterations. For the standard YOLOv5s model, the training set and validation set loss values gradually converge after 350 iterations. After the standard YOLOv5s model stabilizes, its training set loss value is 59.27% higher than that of the improved model, and its validation set loss value is 55.72% higher. The loss values of the improved model's training and validation sets are closer to 0, indicating that the model trains better and that the generalization ability of the whole model is stronger.
Performance Comparison of Model Improvement
In order to verify the influence of each improved method on the performance of the model, this study conducts comparative experiments based on the standard YOLOv5s model. The experimental results are shown in Figure 11 and Table 1. Figure 12 shows a visual comparison of the detection effects of the different models to reflect the effectiveness of each method.
In this study, MobilenetV3 was used to replace the backbone structure of the standard YOLOv5s model to reduce the model volume. The experimental results are shown in Figure 11.
From Figure 11 and Table 1, it can be seen that the volume of the YOLOv5s-MobilenetV3 model is greatly reduced compared with that of the YOLOv5s model, but the average detection accuracy is also reduced, by 4.2%.
The floating-point operations of the YOLOv5s-MobilenetV3 model are 49.4% fewer than those of the YOLOv5s model, and the detection time is 0.010 s. This further proves that replacing the backbone structure of the standard YOLOv5s model with MobilenetV3 can reduce the model complexity and the detection time. The F1 score of the YOLOv5s-MobilenetV3 model is 5.84% lower than that of the YOLOv5s model, reflecting that the performance of the model structure is also degraded after the lightweight replacement. The micro-scale detection layer is used separately on the YOLOv5s-MobilenetV3 model and the YOLOv5s model. The number of floating-point operations of both models increases slightly, indicating that the micro-scale detection layer increases the complexity of the model to obtain more target information, and the average detection accuracy of the YOLOv5s-MobilenetV3 model increases from 95.20% to 97.70%, indicating that the micro-scale detection layer can improve the detection accuracy of millet targets to some degree. The percentage of detected millet ears ranged from 78.26% to 91.30% across the different models; the YOLOv5s-MobilenetV3-Microscale-MergeNMS combination had the highest value of 91.30%.
In the natural environment, the distribution of grain targets is very dense, targets of different sizes are alternately distributed, and there are many situations such as grain winding and grain occlusion. Target samples with blurred boundaries may be missed as negative samples, as shown in Figure 12(1).
The evaluation indicators show that the TP and FP values are directly related to the model performance. In order to improve the detection effect of the model, this study uses the Merge-NMS algorithm to reduce missed detections in the post-processing stage, and the detection results are shown in Figure 12. When the Merge-NMS algorithm is used in the post-processing stage of the YOLOv5s-MobilenetV3 model, the average detection accuracy increases to 95.56%. The statistics of the samples detected in the test set (a total of 2864 samples) are shown in Table 2. After YOLOv5s-MobilenetV3 adopts the Merge-NMS algorithm, the number of FN samples is reduced from 286 to 265. Finally, the number of FN samples of the improved model in this study is reduced to 180, and the recall rate increases from 90.00% to 93.70%, indicating that the Merge-NMS algorithm is effective in solving the problem of target boundary blurring.
After the lightweight improvement of the YOLOv5s model with MobilenetV3, the complexity of the model is reduced, which makes the feature extraction of the model insufficient. In this study, the micro-scale detection layer is added to the multi-feature fusion detection structure, and the target information extracted from the high-level and low-level feature maps is effectively fused to reduce the loss of target information and improve the detection of small targets. At the same time, the Merge-NMS algorithm can effectively detect targets with fuzzy boundaries in the feature map. As shown in the detection visualization of the improved model in Figure 12(6), the front ear targets are basically all detected and marked, and the occluded ears and the smaller ears in the yellow frame are also successfully detected, indicating that using both methods on the lightweight YOLOv5s-MobilenetV3 model can effectively improve its detection performance.
Comprehensive Comparison of Different Target Detection Networks
In order to verify the effectiveness of the millet detection model in practical applications, classical models such as YOLOv3, YOLOv3-tiny, and YOLOv5-shufflenetV2 were compared with the improved model in this study. The experiments use the same 640 × 640 images as input, set the same model parameters, and are conducted on the millet data set self-built in this study. The results are shown in Figure 13 and Table 3.
It can be seen intuitively from Figure 13 that the equilibrium points of the improved model and the YOLOv3 model are closer to the point (1, 1), and the areas under the P-R curves of these two models are larger than those of the other models, i.e., their average detection accuracy is higher. From the comparison of the test results of the different models in Table 3, it can be concluded that the model in this study has other advantages while ensuring detection accuracy, such as a small model volume and fewer floating-point operations. The model volume and floating-point operations of the YOLOv5-shufflenetV2 and YOLOv3-tiny models are relatively small, but their average detection accuracy is low. The detection accuracy of the YOLOv3 model is high, but its model size reaches 18.05 MB, and its floating-point operations are 2.7 times those of the improved model in this study. The results show that, compared with the other models, the improved model in this study maintains a balance between detection accuracy and detection speed while reducing model complexity and model volume.
Monitoring Results of Jetson Nano
The test results of the improved model on the Jetson Nano development board are shown in Table 4. The mean average precision of the lightweight model YOLOv5s-MobilenetV3-Microscale-MergeNMS was 91.80%, slightly lower than that of the standard YOLOv5s model. The detection speed was 6.95 FPS, indicating that the model maintains a good detection speed after being made lightweight. The size of the improved model was reduced by 6.63 MB. The comparison of the test results shows that the lightweight improved model on the Jetson Nano can meet the real-time and accuracy requirements of millet detection in the actual environment.
Figure 1. Data set of millet ear in different stages.
Figure 3. Millet ear image data set enhancement.
Figure 4. The structure of the YOLOv5s model.
Figure 5. Mosaic data augmentation. Note: 0 is millet ears in the heading stage, 1 is the mature stage, and 2 is the filling stage.

The backbone of YOLOv5s adds the Focus structure to realize the slicing operation of the input image. The size of the input feature map is 640 × 640 × 3, and the size of the output feature map obtained by the Focus structure is 320 × 320 × 32. The backbone network follows the cross-stage partial network (CSP) structure of YOLOv4 and mainly uses the residual network structure to extract the features of the input image, in which the convolution operations determine the complexity and parameter quantity of the whole model [26]. The neck uses the FPN-PAN structure. The feature pyramid network (FPN) transmits and fuses the high-level feature information and the information of the backbone feature extraction network from top to bottom through up-sampling. The path aggregation network (PAN) structure transmits the target positioning features from bottom to top through down-sampling. The combination of the two improves the detection ability of the model. The bounding box loss function of the prediction head uses the CIOU_LOSS (Complete IoU Loss) function together with the non-maximum suppression (NMS) method to effectively obtain the best prediction anchor box.
Figure 6. The depthwise separable convolution. Note: '*' represents the multiplication of two convolutions.
Figure 7. Structure diagram of the millet ear detection model based on lightweight YOLOv5.
Figure 9 shows the schematic diagram of the image detection and video detection of the grain detection system based on the Jetson Nano development board. The visualization results show that the lightweight model has a good detection effect on image and video detection on the Jetson Nano platform.

Figure 9. The grain detection system based on the Jetson Nano.
Figure 10. Model training loss value curve.
Figure 11. Comparison of model size.
Figure 12. Visual comparison of detection effects of different models.

The model size of YOLOv5s is 14.19 MB, and the model size of YOLOv5s-MobilenetV3 is 6.77 MB, a reduction of 7.42 MB. Adding the micro-scale detection layer alone to the YOLOv5s-MobilenetV3 model makes the detection part of the structure more complex, which slightly increases the size of the model; compared with YOLOv5s-MobilenetV3, the size increases by only 0.79 MB and is still 46.7% smaller than that of the YOLOv5s model. The Merge-NMS algorithm does not increase the model volume, so the volume of the YOLOv5s-MobilenetV3 model using the Merge-NMS algorithm alone is 6.77 MB. The improved model in this study combines the two methods on YOLOv5s-MobilenetV3. Its model size is 7.56 MB, which is still 6.63 MB smaller than that of the standard YOLOv5s model. This proves the effectiveness of replacing the backbone structure of YOLOv5s with MobilenetV3.
Figure 13. P-R curve of different networks.
Table 1. The influence of improved methods on model performance.
Table 2. Model set test sample statistics.
Table 3. Test results of different models.
Table 4. Comparison of test results of improved models on the Jetson Nano.
"Computer Science",
"Engineering"
] |
Mobile Computing Technologies for Health and Mobility Assessment: Research Design and Results of the Timed Up and Go Test in Older Adults
Due to the increasing age of the European population, there is a growing interest in performing research that will aid in the timely and unobtrusive detection of emerging diseases. For such tasks, mobile devices have several sensors, facilitating the acquisition of diverse data. This study focuses on the analysis of the data collected from the mobile device's sensors and from a pressure sensor connected to a Bitalino device for the measurement of the Timed-Up and Go test. The data acquisition was performed in different environments with multiple individuals with distinct types of diseases. These data were then analyzed to estimate the various parameters of the Timed-Up and Go test. Firstly, the pressure sensor is used to extract the reaction and total test times. Secondly, the magnetometer sensor is used to identify the total test time and different parameters related to turning around. Finally, the accelerometer sensor is used to extract the reaction time, total test time, duration of turning around, going time, return time, and many other derived metrics. Our experiments showed that these parameters can be automatically and reliably detected with a mobile device. Moreover, we identified that the time to perform the Timed-Up and Go test increases with age and with the presence of diseases related to locomotion.
Background
The increasing age of the world population has promoted research in several areas and advances in different types of sensors, which have contributed to the evolution of healthcare assessment methodologies [1]. The increased life expectancy has led to growing interest and the need for solutions that can improve the quality of life of the elderly. In Europe, the aging rate was 125.8% in 2017, and 94.1% in 2001 [2][3][4][5].
Mobile computing technologies have made it possible to aid individuals with different health statuses. Mobile devices now include multiple sensors, which can be used for a variety of functions [6]. The magnetometer and the accelerometer are essential because they facilitate the acquisition of physical and biological data from the user [7][8][9]. Moreover, these sensors can support the analysis of bodily functions like gait [10,11]. Furthermore, combining mobile computing technologies with external sensors can promote older people's quality of life [12]. However, in such studies, there are challenges related to choosing adequate tests and to the interpretation and analysis of the collected data [13][14][15][16][17].
Embedded sensors may help to monitor the different functional tests with the detection of different types of movements [18][19][20][21][22]. The Timed-Up-and-Go test is a quick and straightforward clinical test for assessing lower extremity performance related to balance, mobility and fall risk in the elderly population and people with pathologies (i.e., Parkinson's disease, amyotrophic lateral sclerosis, in post-stroke patients, in patients with orthopedic pathologies, and cardiovascular incidents) [23][24][25][26][27][28]. Aging effects can be identified with the Timed-Up-and-Go test, and it could be supplemented with smart technology to be used in clinical practice [29]. The automation of the measurement of sensor data when performing the Timed-Up and Go test can be valuable, particularly in older adults [30,31]. Some approaches, such as [32], make it possible to perform the Timed-Up and Go test using low-cost devices in a real-time setting with reduced needs of processing capabilities to be used in commonly used devices.
Motivation
The Timed-Up and Go test can provide a practical analysis of the degree of prevalence and level of certain diseases [33]. With this test, clinicians can assess physical conditions by evaluating the way the individual walks, and the time it takes to perform the analysis. Therefore, this test allows the medical team to assess whether the individual has an accelerated degree of disease development or is in the initial state [34].
Furthermore, the Timed-Up and Go test can be used in individuals with neurological diseases [35]. This test allows for the evaluation of their reaction time. It is possible to assess whether they get up quickly or still stop for a long time. Moreover, it is possible to evaluate whether the individual walks in a straight line or cannot maintain the correct direction [36,37]. Therefore, this test can also provide a practical assessment of cognitive problems that do not allow him to follow the right path.
This test is widely used in assessing a patient's recovery process associated with diseases that have affected their mobility [38]. The data collected in this test support the evaluation of patient recovery to establish standards for the reaction time, test time, angular deviation, and walking strength that an individual with different degrees of the disease might have [39]. This paper's motivation is to present a cost-effective method for the automatic measurement of the Timed-Up and Go test using sensors available on common smartphones. This document also describes the calculation of numerous features that aim to create a reliable dataset for pattern recognition of specific health symptoms. Moreover, this study provides a comparative analysis of different subjects living in nursing homes, separated by age, institution, and disease, concluding with a comparison against other results available in the literature to demonstrate the contribution of the proposed approach. Finally, the major challenge here is the definition of the best positioning of the sensors for correct data acquisition, as it affects the measurement of the different results of the Timed-Up and Go test; e.g., if the experiments are performed under adverse conditions, the probability of measuring incorrect results is very high. Technological constraints may also affect the data acquisition and processing, such as the low memory, processing power, connectivity, network, and battery constraints of mobile devices [40,41]. Previously, we explored and presented the positioning of the sensors available in a mobile device or connected to a Bitalino device, with preliminary results, in [42,43].
Prior Work
There are some studies available in the literature that involved the calculation of the different features related to the Timed-Up and Go test for further conclusions about the performance of the test. The inertial sensors, e.g., accelerometer, magnetometer, and gyroscope, available in a mobile device may be used to evaluate the benefits of the training based on the Timed-Up and Go test, calculating the velocity and the time of a sit-to-stand transition [44].
Fall risk assessment based on wearable inertial sensors was performed based on an instrumented Timed-Up and Go test in [45], relying on a variety of features, as summarized in Table A1. The types of gait and balance were evaluated with a similar set of features in [46]. The accelerometer sensor was used for the identification and measurement of the duration of each stage of the Timed-Up and Go test in individuals with spinal cord injury [47]. The different phases were also evaluated in [48] with an accelerometer sensor, measuring the mobility angles, and the average of the sit-to-stand transition time in frail elderly individuals with Parkinson's disease. In [49], the measurement of the Timed-Up and Go test results was performed with an accelerometer sensor for fall risk assessment. The different phases of the test for people with Parkinson's disease were analyzed in [50] and [51]. In [52], patients with Parkinson's disease were analyzed during a walking activity to measure the duration of the test. A smartphone application suite for assessing mobility is presented in [53]. Whether the individual was sitting during the Timed-Up and Go test is investigated in [32]. The authors of [54] perform analysis, mainly focusing on people with frailty syndrome. A wearable system for assessing mobility in older adults is presented in [55], relying on a variety of statistical features. Similarly, a wearable system for measuring the probability of human falls is introduced in [56], while [17] is concerned with identifying the reasons for falls. In [57], the authors show that the mobile device accelerometer can study and analyze the Romberg test's kinematic between frail and non-frail older adults.
Structure of the Study
The remainder of this paper is organized as follows: Section 2 presents the methods used for the development of the proposed analysis, including the study design and participants, description of the Timed-Up and Go test, the data acquisition and processing methods used, and the statistical analysis performed in this study. The mobile application developed for data acquisition, the requirements, and the statistical analysis are presented in Section 3. Furthermore, Section 4 offers a discussion on the main findings, limitations, and comparison with our study's prior work. In the end, Section 5 presents the conclusions of this study.
Study Design and Participants
We selected Android as the operating system for the data collection software development, as it is open-source software and a market leader. Moreover, we chose the external Bitalino sensors for their appropriate use in research projects in this research domain [59]. This technology could facilitate the creation of significant datasets for health assessment that can be used to support decision-making in medical diagnostics. The mobile device was incorporated in a sports belt worn on the waistline. The start of the Timed-Up and Go test was indicated by a sound alarm from the mobile application. The chair incorporated a pressure sensor to register the moment when the older adult reacted to this sound. The volunteer had to walk for 3 m, go back, and sit down again. All the data were collected on the mobile device and, after test finalization, a text file was sent to the Cloud using the Firebase service. Different mobile devices were used for data acquisition to compare the different frequencies of the data acquisition, which verified that the XIAOMI MI 6 was one of the devices that most accurately acquired the different types of data. As the experiments were controlled, we used the same device for the final data acquisition and analysis. The data acquisition showed an influence of the environment and varied with the place of data acquisition. It was associated with the study of older adults with different health conditions and ages and resulted in the creation of a dataset with diverse and heterogeneous data.
The data acquired were processed with the Java programming language to extract the different features for the statistical analysis. Firstly, the pressure sensor is used to measure the reaction and total test time. Secondly, the magnetometer sensors are used to extract the total test time, turning around instant by the magnitude of the vector and turning around instant by the absolute value of the z-axis. Finally, the accelerometer sensor is used to extract the reaction time, total test time, duration of turning around, going time, return time, and the averages of the acceleration, velocity, force, and power during going and returning time.
The proposed method was tested on 40 older adults aged 60 to 97 years (83.8 ± 7.95), privileging gender equality, from four institutions, i.e., Centro Comunitário das Lameiras, Lar Aldeia de Joanes, Lar Minas, and Lar da Misericórdia, plus an open group ("others") from different locations. They have several types of health complications, such as Parkinson's disease, scoliosis, mobility and cardiovascular problems, and dementia complications (presented in Table A2). The volunteers were institutionalized in nursing homes in the center of Portugal. The selection process was conducted in close collaboration with the nursing team; the inclusion criteria relied on the mobility capabilities needed to perform the test. The individuals were randomly selected, and there is no relationship between the individuals and the team of this study. The volunteers were informed about all the specifications and goals of the experiments. Furthermore, they signed an ethical agreement allowing us to share the results of the tests in an anonymous form. The agreement also provided the participants' informed consent considering the risks and the objective of the study. The Ethics Committee of Escola Superior de Saúde Dr. Lopes Dias at the Polytechnic Institute of Castelo Branco approved the study with the number 114/CE-ESALD/2019.
Moreover, other information such as age and weight were provided to support the conclusions of the study. These data were guaranteed to be used in an anonymous form. The data were then measured using a feature extraction method that will be explained in Section 2.2.
Only consistent data were considered in these results. The experiments were held between October and December 2019, and each volunteer underwent the test at three different times. These tests were conducted in an isolated environment to avoid any distractions that could impact the results. Each institution provided the chair used in the experiments. The volunteers had different health states: some were still healthy, some had diseases related to the spine, such as multiple sclerosis, some had diseases related to the heart, such as arrhythmia or angina pectoris, and others had illnesses associated with mental health, such as Parkinson's disease. These people had various health statuses and distinct degrees of progression for each disease, which means that the population's health status was variable. Thus, the data collected were heterogeneous.
The mobile application acquired the data from the sensors at intervals of milliseconds, but the timestamps were converted to seconds to improve readability. The collection process started with an audible signal. This sound signal represented the beginning of the data capture, which was recorded in text files and sent over the Internet using the Firebase service. Initially, the data were saved in text files. The accelerometer and magnetometer are tri-axial sensors, represented by four columns in the different files: a timestamp and one column for each axis of the sensor (x, y, and z). Furthermore, the pressure sensor recorded the force exerted by the user sitting on the chair. These sensors were complementary for the measurement of the different parameters of the Timed-Up and Go test.
Description of the Timed-Up and Go Test and Data Acquisition and Processing
The Timed-Up and Go test was developed in 1991 to examine functional mobility in the elderly [60,61]. This test allows the recognition of other different diseases, mainly related to walking activities. It has certain phases where it is possible to obtain different readings and calculations of various features, such as sitting on the chair, lifting from the chair, walking for three meters, reversing the march, walking another three meters toward the chair, and sitting on the chair.
The data acquisition was performed with a mobile device equipped with accelerometer and magnetometer sensors, placed in a belt at the waist of the person, and two Bitalino devices, i.e., one with a pressure sensor placed on the back of the chair, and the other with one ECG and one EEG sensor placed in a belt at the chest of the individual.
Currently, only the data acquired from the pressure sensor and the sensors available in the mobile device are processed. Thus, different calculations are performed, including reaction time, time of the end of data acquisition, the total time of the test, turning instant, turning time, walking time, returning time, the average of the acceleration, speed, force, and power. The measurements of the speed, strength, and power are essential to detect some abnormalities in the actions of older adults.
Statistical Analysis
After the acquisition of the data from the sensors available in off-the-shelf mobile devices and the sensors connected to the Bitalino device, the data analysis was performed. Firstly, the data acquired by the pressure sensor were processed, extracting the reaction time and the total test time. Secondly, the data obtained by the magnetometer sensor were processed, extracting the start time, the end time, the instant and acceleration value of turning around by the Euclidean norm, and the instant and acceleration value of turning around by the minimum absolute value of the acceleration. Thirdly, the data acquired by the accelerometer sensor were processed, extracting the start, reaction, end, and total test times, the instant and duration of turning around, time of walking the first three meters, time to walk back to the chair, and the mean of the acceleration, velocity, force, and power during the walk for the first three meters and during the walk back to the chair.
After measuring the different variables, a statistical comparison between them was performed, analyzing and comparing the results with the averages of each institution, person, and disease. Also, descriptive statistics, normality tests, and the detection of outliers were performed. After checking the conditions and making sure we could apply ANOVA, we used it to compare averages between institutions and age groups. Thirdly, the results were analyzed by disease. The ANOVA test was used to test the dependence between the different variables, i.e., the relation between the results obtained and the sample characteristics. ANOVA is a statistical test that allows the discovery of potential differences or relations between different variables, useful in testing with the distinct features of human beings [62,63]. It enables the assessment of possible relations and dependencies between different variables. As the Timed-Up and Go test is a physical test related to people's physical conditions, different variables may be affected.
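As an illustration of this procedure, the sketch below runs a one-way ANOVA with SciPy on made-up total-test-time samples for three hypothetical age groups; the numbers are not the study's measurements.

```python
import numpy as np
from scipy import stats

# Made-up total-test-time samples (s) for three hypothetical age groups.
group_a = np.array([28.1, 31.4, 30.2, 27.8])
group_b = np.array([33.0, 35.6, 31.9, 34.4])
group_c = np.array([32.5, 36.1, 38.0, 33.7])

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.3f}, Pr(F > F-test) = {p_value:.3f}")
# As in the study, Pr(F > F-test) > 0.05 means the group averages are
# treated as statistically equal.
```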
Data Acquisition with a Mobile Application
The mobile application was developed for Android devices using the Android Studio Integrated Development Environment (IDE). The mobile application has two main functionalities. On the one hand, it performs continuous data collection using the built-in magnetometer and accelerometer sensors. The data are collected with a sampling rate of 1 kHz and 16 bits of precision. On the other hand, the mobile application handles the communication technologies required to receive data through Bluetooth from the Bitalino device with a pressure sensor, and it is also responsible for sending the collected data to the Firebase service for storage. The analysis showed that mobile devices with embedded sensors provide reliability and automation in the Timed-Up and Go test, unlike traditional measurement methods that require manual measuring.
Requirements
There are two different types of requirements verified for the performance of the experiments, i.e., one related to the environment and the other to the individual. For the execution of the Timed-Up and Go test, the individual should be able to walk, stand up, and sit down on the chair independently. The test needs a chair, a tape measure to mark the three-meter walking distance, and adhesive tape to mark the site where the individual should reverse the gait. Also, electrodes to position the EEG and ECG sensors on the individual, adhesive tape to fix the pressure sensor on the chair, and two sports belts to carry the mobile device and the Bitalino device are used.
Comparison of Different Acquired Data
There are a few options to measure the turning around instant, which are:
• The minimum value of the magnitude of the accelerometer vector, calculated after the reaction time;
• The minimum absolute value of the z-axis of the magnetometer, calculated after the reaction time.
Based on the presented steps for the calculation of the turning around instant, the first moment of mobility, and the start time of the test can be measured by the accelerometer and the pressure sensor.
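A minimal Python sketch of these two estimates is given below; the function signature and variable names are ours, and the sensor arrays are assumed to be time-aligned samples of the same test.

```python
import numpy as np

def turning_instants(t, ax, ay, az, mz, reaction_time):
    """Estimate the turning-around instant in two ways, as described above.

    t: timestamps (s); ax, ay, az: accelerometer axes; mz: magnetometer
    z-axis; reaction_time: start of the walking phase (s).
    """
    after = t > reaction_time
    magnitude = np.sqrt(ax**2 + ay**2 + az**2)         # Euclidean norm of the accelerometer
    t_by_norm = t[after][np.argmin(magnitude[after])]  # minimum of the magnitude
    t_by_mz = t[after][np.argmin(np.abs(mz[after]))]   # minimum |z| of the magnetometer
    return t_by_norm, t_by_mz
```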
Incidentally, the analysis performed in this paper includes several values. These are:
• Pressure sensor: reaction time, total test time;
• Magnetometer: total acquisition time, turning around instant by the magnitude of the vector, turning around instant by the absolute value of the z-axis;
• Accelerometer: reaction time, total test time, duration of turning around, going time, return time, and the averages of acceleration, velocity, force, and power during the going and return times.
Next, these results are presented by age (Section 3.3.1), by institution (Section 3.3.2), and by disease (Section 3.3.3).
Results by Age
After checking the requirements, we used the ANOVA test. We found that there is no statistically significant difference (alpha = 0.05) between the three age groups for any of the variables/measurements of interest. Figure 1 shows the mean values for the different age ranges for the reaction time and total test time variables obtained with the pressure sensor. The results of the F-test, through the respective limit probability associated with the test statistic, allowed us to conclude that the average values of the three age groups are statistically equal for the pressure sensor variables, namely Pr (F > F-test) = 0.231 > 0.05 for the total test time variable and Pr (F > F-test) = 0.815 > 0.05 for the reaction time variable. Therefore, we accept the null hypothesis that the averages are statistically equal. Although the averages are statistically equal, it is interesting to note that, both for the reaction time and for the total test time, it is the younger individuals who have shorter times. Then, in Figure 2, we can observe the mean values for the different age ranges for the total test time, the turning around instant measured by the magnitude of the vector, and the turning around instant measured by the absolute value of the z-axis, obtained with the magnetometer sensor. The results of the ANOVA test allowed us to conclude that the average values of the three age groups are statistically equal for all the magnetometer variables, namely 32.88 s for the total test time (Pr (F > F-test) = 0.637 > 0.05), 20.21 s for the turning around instant measured by the magnitude of the vector (Pr (F > F-test) = 0.772 > 0.05), and 20.28 s for the turning around instant measured by the absolute value of the z-axis (Pr (F > F-test) = 0.735 > 0.05).
Results by Institution
Aiming to investigate any differences between the participating institutions in this study, we performed a set of ANOVA tests with alpha = 0.05. In cases where there is a statistically significant difference (p < alpha), we applied Tukey's multiple comparison tests to identify homogeneous institutions. For conciseness, we only list the parameters which are statistically significantly different between the institutions (p < alpha).
Namely, the variables with a significant difference in the mean between institutions (p-value < alpha = 0.05; e.g., p-value = 0.03 for the total test time) are: the total test time (s) by the pressure sensor; the turning around instant by the absolute value of the z-axis (s) by the magnetometer; and, by the accelerometer and magnetometer, the total test, going, and return times (s), the averages of velocity during the going and return times (m/s), and the averages of power during the going and return times (J).
Also, we concluded that the average values of all institutions are statistically equal for the reaction time, the duration of turning around, and the averages of acceleration, velocity, force, and power during the going and return times. This analysis shows that the more generic features are statistically equal across institutions, and therefore might be useful for drawing conclusions that apply to older adults in general.
Results by Disease
At this stage, approximately 40 different pathologies associated with the subjects were identified. Some individuals have only one pathology, but others have several, from very diverse areas, as shown in Table 1. Of the 40 individuals involved in the study, 11 patients have one pathology, nine have two pathologies, five have three pathologies, five have four pathologies, two have five pathologies, and one patient each has 6, 7, and 9 pathologies. We can also see the number of individuals identified by pathology and the classification of the respective pathologies into categories. This analysis reflects the great diversity of pathologies vs. individuals under study, which may complicate and even compromise the inferential statistical analysis. Also, it was not possible to read all sensors in the same way for all individuals, resulting in different numbers of samples for the different variables under study. As presented in Table 2, two groups were formed with the pathologies under analysis, including one for diseases directly related to mobility and another with the other conditions found in the population. In Figure 3, we can observe the mean and standard deviation values for the reaction time and total test time measured by the pressure sensor, by groups of diseases related to mobility and not directly related to movement. Using the Student's t-test to compare two groups of independent samples, it was possible to assess whether there are statistical differences in the measurements between individuals with diseases related to mobility and those with diseases not associated with movement.
In Figure 3, we can observe the mean and standard deviation values for the reaction time and the total test time measured by the pressure sensor, by groups of diseases related and not related to mobility. Using Student's t-test to compare two groups of independent samples, it was possible to assess whether there are statistical differences in the measurements between individuals with diseases related to mobility and those with diseases not related to mobility. First, we concluded that the variances are homogeneous (Pr (F > F-test) = 0.079 > 0.05). With Student's t-test, it was possible to conclude that the reaction time (s) is equal between the two groups of diseases not related and related to mobility (Pr (|T| > t-test) = 0.838 > 0.05), with an overall average of 37.133 s. Hence, it can be said that the 13 individuals with pathologies not related to mobility take less time to perform the test (36.044 vs. 38.222), but this difference is not statistically significant.
Furthermore, the same conclusions can be drawn for the total test time (s), which has identical variances between the groups of diseases not related and related to mobility (Pr (F > F-test) = 0.960 > 0.05) and whose average is statistically equal between the groups (Pr (|T| > t-test) = 0.710 > 0.05).
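The two-group procedure used above (an F-test for variance homogeneity followed by Student's t-test, falling back to Welch's test when variances differ) can be sketched as follows. The group sizes mirror the study (13 vs. 27 of 40 individuals), but the data below are synthetic placeholders, not the study's measurements.

```python
# Sketch of the two-group comparison: F-test for variance homogeneity,
# then Student's (or Welch's) t-test. Data are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
not_mobility = rng.normal(36.0, 8.0, 13)  # placeholder reaction times (s)
mobility = rng.normal(38.2, 8.0, 27)      # placeholder reaction times (s)

# Two-sided F-test for equality of variances
f = np.var(not_mobility, ddof=1) / np.var(mobility, ddof=1)
dfn, dfd = len(not_mobility) - 1, len(mobility) - 1
p_var = 2 * min(stats.f.sf(f, dfn, dfd), stats.f.cdf(f, dfn, dfd))

# Student's t-test if variances are homogeneous, otherwise Welch's
equal_var = p_var > 0.05
t_stat, p_mean = stats.ttest_ind(not_mobility, mobility, equal_var=equal_var)
print(f"Pr(F > F-test) = {p_var:.3f}, Pr(|T| > t-test) = {p_mean:.3f}")
```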
In Figure 4, it is possible to observe the mean values for the total test time (s), the turning around instant by the magnitude of the vector (s), and the turning around instant by the absolute value of the z-axis (s), measured by the magnetometer sensor, for diseases related and not related to mobility. Applying Student's t-test to the variables measured by the magnetometer sensor, by diseases related or not related to mobility, we concluded that there are no significant differences in the measurements between the two groups. In particular, we can verify the following conclusions:
• The total test time (s) has homogeneous variances between the groups of diseases not related and related to mobility (Pr (F > F-test) = 0.459 > 0.05), and the average is statistically equal (Pr (|T| > t-test) = 0.490 > 0.05);
• The turning around instant by the magnitude of the vector (s) has non-homogeneous variances between the groups of diseases not related and related to mobility (Pr (F > F-test) = 0.029 < 0.05), but the average is statistically equal (Pr (|T| > t-test) = 0.642 > 0.05);
• The turning around instant by the absolute value of the z-axis (s) has homogeneous variances between the groups of diseases not related and related to mobility (Pr (F > F-test) = 0.628 > 0.05), and the average is statistically equal (Pr (|T| > t-test) = 0.961 > 0.05).
Main Findings
The Timed-Up and Go test performed by the elderly population showed a considerable diversity of data because the participants had different types of diseases. The varied physical states of the participants demonstrated that the evaluation of the test with sensors was reliable. Thus, the sensors available in off-the-shelf mobile devices allowed practical data acquisition and conclusions in real-time. Further, we used a pressure sensor for the reliable detection of getting up from the chair. For additional findings, we extracted several features from the accelerometer and the magnetometer available in off-the-shelf mobile devices, and from the pressure sensors connected to the Bitalino device.
We anonymously collected the age and diseases of the participants, to be taken into account during the application of the test in older adults. The data were analyzed from different viewpoints, including the measurements by person, by institution, and by disease. It was verified that the environmental conditions were essential for the reliability of the analysis of the results.
The conditions of test performance, data acquisition, and network connection were adverse in two institutions, namely Lar Aldeia de Joanes and Lar Minas, as presented in Table 3. Considering the measurements performed with the data acquired from the magnetometer sensor, only the data obtained for 32 persons were reliable for further analyses, as reported in Table 3. It was verified that the time measured by the magnetometer sensor was lower than the time measured with the data acquired from the pressure sensor. Considering the measurements performed with the data received from the accelerometer sensor, we concluded that the use of the accelerometer sensor alone invalidated some tests in the calculation of the turning around instant: only 16 persons performed the experiments reliably, as Table 3 presents. However, fusing these data with the measurements performed by the magnetometer sensor and using the turning around instant measured by the magnitude of the vector, we found that 22 persons performed the experiments reliably; using the turning around instant measured by the absolute value of the z-axis, we found that 33 persons performed the examinations successfully. With the accelerometer sensor alone, only three institutions performed the experiments reliably, and only people with nine diseases were analyzed; fusing these data with the measurements performed by the magnetometer sensor, the six institutions performed the experiments reliably. We concluded that the return time was higher than the going time, with higher acceleration, velocity, force, and power during the return time. Fusing these data with the magnetometer measurements and using the turning around instant measured by the magnitude of the vector, we analyzed 16 diseases; using the turning around instant measured by the absolute value of the z-axis, we analyzed 27 illnesses. Table 3. Relation between sensors and results obtained.
[Table 3 columns: Sensors | Parameters | Analysis (By Age, By Institution, By Diseases).]
Some individuals showed an inconsistency between their diseases and the results obtained from the values acquired by the various sensors; this inconsistency could be attributed to the adverse conditions of the data acquisition. In general, older adults have more than one disease. Still, the best results obtained with the magnetometer came from the person with arthrosis as the only pathology, whereas the other people have several diseases. The same problem was observed for the person with an osteoarticular pathology and a prosthesis in the right humerus, whose going time was lower than that of the other people. In conclusion, the sensors might report bad data, and the findings might be disputed. Another observation was that people with an osteoarticular pathology and a prosthesis in the right humerus reported better results in the measurement of turning around than people with lumbar hernias and gastric ulcers. This was attributed to the fact that people with gastric ulcers had more than one disease, and people with several diseases reported higher times than the others. To ensure that these data collection methodologies can be used to assess physical and functional performance in the clinic, the data should be valid, reliable, and properly responsive, as has been demonstrated for the Timed-Up and Go test in a variety of conditions [64,65].
Limitations
As presented in Table 4, the limitations found have three possible origins: the individuals, the environment, and technical issues. The older adults and the environments of the different tests are heterogeneous. Further technical barriers were found, related to Internet and Bluetooth connection availability and to synchronization between the various devices. To mitigate some of these problems, the individuals performed the examination three consecutive times, and the acquisition started at the same time on all devices. Table 4. Relation between the origin and limitations of the study.
Origin | Limitation
Individuals | Different health conditions.
Environment | The experiments were performed in uncontrolled environments.
Technical | An Internet connection is needed for data synchronization.
Technical | The Bluetooth connection reported some failures.
Technical | A large volume of data needs to be processed on the mobile device.
Technical | Data cannot be processed in real-time.
Technical | Sometimes it was not possible to consistently synchronize the timestamps of the acquired data, because the Bitalino does not provide real timestamps.
Comparison with Prior Work
Different studies analyzed the performance of the Timed-Up and Go test with sensors to measure various parameters, but only two studies [45,50] report the values of the measured parameters. These values are not comparable with the ones obtained in our study, because those studies only calculate the power. There are multiple literature surveys of the Timed-Up and Go test [60,64,66], but they do not explicitly consider the inclusion of older adults. This is also evident from the discrepancy in the reported values of high power, which is uncommon for older adults, who usually exert less power. As the participants of other studies are younger, the power/energy used to perform the Timed-Up and Go test is higher than in our research, which reported −28,934.32 J. However, this depends on the diseases and age of the older adults in the study. The age range of the participants in our study is higher than in the studies available in the literature.
Among the other approaches that use mobile devices for automation of the Timed-Up and Go test, the most prominent ones are [32,45,49,67]. Like them, our study measures the duration of the Timed-Up and Go test and identifies its different stages. Unlike them, our study is mainly performed by older adults, uses multiple sensors to monitor the various movements, and measures parameters including power, velocity, acceleration, force, and reaction time, among others, to assess the performance of the test more accurately. The main differences and advantages of our study are presented in Table 5. Table 5. Comparison of the studies in the literature with our study.
Study | Differences Compared to Our Study | Advantages of Our Study
[45] | The study is related to fall risk assessment, while our research analyzes the performance of the Timed-Up and Go test for the creation of patterns by age, disease, and institution. | Our study showed that a relation between diseases related to mobility and the performance of the Timed-Up and Go test exists, allowing the creation of different patterns with the inertial sensors.
[49] | The study identified the different phases of the Timed-Up and Go test with sensors and calculated the Minimal Detectable Change based on the speed, whereas we identified the various stages and measured the force, power, and acceleration of the movement. | The older adults sometimes exerted more force and power than the other population; measuring these parameters is vital to verify the reliability of the test across the different repetitions.
[32] | The study tracks the different stages of the Timed-Up and Go test and the angles of the knee and ankle, while our study identified the different phases and made other measurements. | Our study focuses on older adults, who commonly have different pathologies, performing different measurements and relating them to diseases.
[67] | The authors implemented machine learning methods to distribute the individuals into different groups and cluster the types of diseases. | Our study analyzed the different extracted features with a focus on the diseases related to movement.
Conclusions
The Timed-Up and Go test is an easy test used to measure different types of mobility; this study performed the analysis in older adults. The test consists of the individual sitting on a chair, getting up from the chair, walking three meters, reversing the direction of walking, walking another three meters back to the chair, and sitting down on the chair.
The automatic measurement of the Timed-Up and Go test with mobile devices is possible, validating the different parts of the test. This work considers the data acquired from the various sensors available in the mobile device, including the accelerometer and magnetometer: the magnetometer helps in detecting the changes of direction during the test, while the accelerometer allows the measurement of acceleration, velocity, force, and power. A Bitalino device with a pressure sensor on the chair is used to detect the start of the movement. Another Bitalino device was used to acquire electrocardiography (ECG) and electroencephalography (EEG) signals for future processing.
This work aimed to analyze the data obtained in different elderly institutions under various conditions. It was verified that the acquisition conditions influenced the quality of the acquired data. The different diseases of the individuals also affect the results of the performance of the Timed-Up and Go test. Through the automatic calculation of the features, different values were obtained. Thus, various analyses were carried out by age, institution, and type of disease, which yielded interesting results. This study shows the possibility of creating different patterns of the physical states of people. However, several constraints may have influenced the experiment's results, including the test environment and the network reception conditions. The data are somewhat heterogeneous because we are analyzing older adults with different health conditions. The statistical grouping by different age ranges allows us to show the influence that age may have on the test results. The Timed-Up and Go test has been demonstrated to be an accessible and clinically relevant test to assess mobility, balance, and risk of falls in the elderly and other populations with health problems.
With the rise of chronic health conditions, it is fundamental to create accessible, valid, and reliable online instruments that evaluate and record physical health performance, such as the Timed-Up and Go test. It is also vital to guarantee that the follow-up reflects the real evolution of this performance under health treatments such as physiotherapy. | 10,225.6 | 2020-06-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Medicine"
] |
A Topology Control with Energy Balance in Underwater Wireless Sensor Networks for IoT-Based Application
As part of the IoT-based application landscape, underwater wireless sensor networks (UWSN), which are typically self-organized heterogeneous wireless networks, have recently become a research hot-spot, using various sensors in marine exploration and water environment monitoring applications. Due to the serious attenuation of radio waves in water, acoustic or hybrid communication is the usual way of transmitting information among nodes, and much more energy is dissipated to prevent network failure and guarantee the quality of service (QoS). To address this issue, a topology control with energy balance, namely TCEB, is proposed for UWSN to overcome time-delay and other interference, as well as to balance the load of the entire network. With the given underwater network model and its specialized energy consumption model, we introduce a non-cooperative-game-based scheme to select the nodes with better performance as the cluster-heads. Afterwards, intra-cluster and inter-cluster topology construction, respectively, form the effective communication links within and between clusters, aiming to build an energy-efficient topology that reduces energy consumption. Simulation results show that the proposed TCEB has better performance in energy efficiency and throughput than three other representative algorithms in complex underwater environments.
Introduction
The Internet of Things (IoT) is a widespread information technology using smart sensors, RFID, smartphones and various communication protocols [1]. In recent years, many IoT-based application scenarios have emerged, such as smart cities and smart environments (e.g., the smart home), environmental monitoring and disaster prevention, intelligent transportation and auxiliary navigation, as well as battlefield surveillance. As an important part of IoT technologies, underwater wireless sensor networks (UWSN) are typically self-organized heterogeneous wireless networks composed of many multi-functional underwater micro-sensor nodes with acoustic communication links [2]. With different assembled sensors, nodes are used to collaboratively sense the underwater environment and collect data.
Related Work
Due to the characteristics of the underwater acoustic channel, e.g., high propagation delay, high bit error rate, the multi-path effect and the Doppler effect, there is a large difference between UWSN and terrestrial wireless sensor networks (TWSN). In UWSN, more care must be taken with the complex underwater environment, especially in contaminated water, which requires extra design effort for real scenarios and adds computational complexity. Consequently, the traditional topology control commonly used in TWSN cannot be directly applied to UWSN, and a series of algorithms or protocols must be redesigned to meet the requirements of UWSN according to the underwater environment conditions.
Energy efficiency is one of the most important targets in designing a topology control mechanism for UWSN. To this end, clustering technology is the most common and available scheme for reliable communication and energy conservation. LEACH [7], a typical cluster-based topology control algorithm, periodically selects cluster-heads and drains energy uniformly by role rotation with a data fusion strategy. Nevertheless, it is more suitable for an ideal network model in homogeneous TWSN, and its clustering mechanism has some problems of its own, such as the uneven distribution of cluster-heads and poor energy balance. EDCS [8] is another cluster-based scheme, proposed for heterogeneous network scenarios. Compared with LEACH, EDCS achieves better energy-saving effects for the more general heterogeneous network because it introduces a more accurate average network estimation and gravitational strategies. However, it still does not apply to UWSN, which is an even more complex and general heterogeneous network.
Recently, many topology control algorithms have been proposed for UWSN applications [9]. Energy-efficient topology control, as one of the most important factors, was the first to be focused on by scholars for UWSN. Coutinho et al. [10] proposed two mechanisms, namely centralized topology control and distributed topology control, which organize the network by adjusting the depth of some nodes. Combined with the corresponding geographic forwarding protocol, the data packet delivery ratio can reach a higher level even in hard scenarios (e.g., very sparse or dense networks), while the energy consumption can also be reduced. Nevertheless, these mechanisms require precise node position information provided by an extra localization system; simultaneously, the network cannot guarantee connectivity because it may have isolated nodes. Jouhari et al. [11] proposed a new kind of greedy forwarding (NGF) strategy for geographic-based topology control in UWSN, using a splitting mechanism based on the Chinese remainder theorem. In NGF, it is effective for more than two nodes to participate in the forwarding of one packet instead of selecting only one node as the next hop, because the source node can reduce the number of transmitted bits through the splitting mechanism. NGF can increase the lifetime and other network performance metrics, but it needs an extra strategy to solve the isolated and void node problems appearing in the resulting topology.
A scale-free network model for calculating the edge probability is used to randomly generate the initial topology in [12]. Subsequently, a topology control strategy based on complex network theory is put forward to build a dual-clustering structure with two kinds of cluster-heads to ensure connectivity and coverage, as well as to optimize network energy consumption and propagation delay. This indicates that the scale-free model can be applied to hierarchical UWSN topologies, but it may not satisfy the demands of specific applications. Notably, the authors do not mention the trade-off between cluster-head vulnerability and the cost of role rotations when using clustering technology. Further, to meet the requirement of diverse coverage in UWSN, Liu et al. [13] proposed the traversal algorithm for diverse coverage (TADC) and the radius increment algorithm for diverse coverage (RIADC). Both TADC and RIADC satisfy the coverage requirement for topology control by altering the sensing radii of nodes; the only difference is that TADC adjusts the sensing radius of one node per round, while in RIADC multiple nodes may increase their sensing radii in each round simultaneously.
Because UWSN is typically an opportunistic network, QoS-based targets are a focal issue of constant concern. As we know, the links for instant messages in UWSN are often unstable and cut off [14,15]. The probabilistic multipath routing behavior driven by opportunistic routing protocols is modeled in [15]; simultaneously, a probabilistic multipath centrality metric is proposed to measure the importance of nodes to the data delivery task. It can be used to guide topology control toward better network performance by identifying the critical nodes. In [16], to gain preferable throughput efficiency of the network, the improved distributed topology control (iDTC) and the power adjustment distributed topology control (PADTC) are, respectively, proposed to guarantee the delivery of data by dealing with the communication void problem of geographic opportunistic routing. With the depth adjustment and power adjustment of a void node, both protocols can obtain better energy efficiency and perform minimum displacement in the case of a void node while maintaining the same throughput.
On the other hand, due to the characteristic of water flowing, node mobility is another critical factor in UWSN [17]. Therefore, many studies discuss mobility-based topology control techniques. Zhang et al. [18] and Liu et al. [19] proposed either mobility models for specialized applications or mobility-targeted topology control algorithms for movable UWSN. In addition, topology control can also contribute to other aspects of UWSN, e.g., localization. Usually, an unlocalized node can find its location by utilizing the spatiotemporal relation with reference nodes; however, most nodes lack the required number of reference nodes in sparse scenarios. To address this problem, Misra et al. [20] proposed opportunistic localization by topology control (namely, OLTC) for sparse UWSN. In OLTC, some reference nodes are discovered through the topology construction process, while a game-theoretic model based on a single-leader-multi-follower Stackelberg game for topology control is established to describe the relationship between the unlocalized and the localized nodes. Consequently, with the help of the OLTC scheme, the localization coverage and the energy efficiency can be improved.
In summary, topology control is indeed one of the techniques worth studying for UWSN, even if research scholars focus on different points and targets. Throughout almost all of the literature, energy efficiency remains the common concern for the whole network. However, the current studies on topology control in UWSN have the following drawbacks. (1) Overly idealized network models: the assumed network model should be built application-oriented for the real environment. (2) Topology control schemes without preferable load balance: since energy is the key point for underwater IoT-based applications, topology control without preferable load balance cannot ensure energy conservation and limits further application. Consequently, we propose a topology control algorithm with energy balance (namely, TCEB) for UWSN, which contributes to better energy efficiency under the complicated underwater communication mode. The main target is to make the underwater network load-balanced and to consume energy efficiently so that the lifetime of the network is prolonged.
Network Model
Actually, the underwater network model and its corresponding communication are not the same in shallow water and deep water. In this paper, we only focus on shallow water environments, since most IoT-based applications are located in inland lakes and rivers, e.g., aquaculture and water quality monitoring. As mentioned earlier, UWSN is a typical heterogeneous wireless network, for which we take the energy heterogeneity factor into account. Assume that the three-dimensional network can be mapped into a two-dimensional network, and let n nodes be randomly deployed in a static M × M shallow monitoring water area. All nodes can be regarded as being approximately in the same plane, with the following further characteristics:
(1) Initially, each node is equipped with a different energy over the interval $[E_0, (1+\lambda)E_0]$, where $E_0$ is the lower bound (namely, the unit energy), and $\lambda > 0$ is a constant without upper bound that determines the value of the maximal initial energy.
(2) The sink is located at the center of the monitoring area and is the only node not restricted by resources such as energy, memory, and calculation ability. Moreover, the sink can directly communicate with the gateway (base station).
(3) Owing to the adoption of the clustering mechanism, non-cluster-head nodes are permitted to communicate with the cluster-head through single or multiple hops, while they cannot directly send packets to the sink.
(4) Each node is anchored at a specified area with a buoy, which means the network is relatively stable.
(5) Assume that the surface and the bottom of the water are relatively smooth planes, so the influence of the underwater transmission delay and success rate is the same for each node.
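As a minimal sketch of this network model, the following snippet deploys n nodes uniformly in an M × M area, draws heterogeneous initial energies from $[E_0, (1+\lambda)E_0]$, and places the sink at the center. All numeric values are illustrative assumptions, since Table 1's actual settings are not reproduced here.

```python
# Minimal sketch of the assumed network model; values are illustrative.
import numpy as np

def deploy_network(n=100, M=100.0, E0=0.5, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    positions = rng.uniform(0.0, M, size=(n, 2))       # anchored node positions
    energies = E0 * (1.0 + lam * rng.uniform(size=n))  # E in [E0, (1+lam)*E0]
    sink = np.array([M / 2.0, M / 2.0])                # sink at the centre
    return positions, energies, sink

positions, energies, sink = deploy_network()
```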
Energy Consumption Model
Due to the different media of the underwater environment and the air, the traditional energy consumption model of TWSN cannot be applied to UWSN. To ensure the reliability of the communication, it may be necessary to send data multiple times, which accounts for a larger proportion of the entire energy consumption because of the additional propagation loss. Thus, the energy expended to transmit an l-bit message ($E_{tx}$) and to receive this message ($E_{rx}$) are, respectively, [21]:

$E_{tx}(l, r) = l E_{elec} + \frac{l}{R} P_t$, (1)

$E_{rx}(l) = l E_{elec}$, (2)

where $E_{elec}$ is the electronics energy dissipated per bit, R denotes the transmission rate (bit/s), and $P_t$ is the transmitted power. Specially, l/R expresses the time for sending the message. Generally, there are two main acoustic signal propagation mechanisms (namely, the Urick propagation model [22]) as the geometrical effect: one is cylindrical spreading and the other is spherical spreading. Cylindrical spreading refers to sound propagation in shallow water (i.e., depth lower than 100 m) within a cylinder bounded by the surface and the water bottom, while spherical spreading describes sound propagation between the sender and the receiver in deep water (i.e., the deeper ocean). According to the assumed network model, we focus on cylindrical spreading for shallow water in this paper. Let $I_t$ be the acoustic intensity and A be the cylindrical flank area; then the transmitted power is

$P_t = I_t A = 2\pi r H I_t$, (3)

where r and H are the radius and the height of the cylinder, which refer to the distance between the acoustic source and the receiver and to the water depth, respectively. To obtain $I_t$, the average intensity I of a plane wave with root-mean-squared pressure p in a medium of density ρ and sound speed c should be known previously, where $I = p^2/\rho c$ [23], and ρc is the acoustic impedance. Let SL denote the source level; the original $I_t$ can be expressed as the product of the intensity factor of the source level and the average intensity:

$I_t = I \cdot 10^{SL/10}$. (4)

Specially, ρc is $1.5 \times 10^6$ kg/(m²·s) in some underwater environments. Thus, a plane wave of root-mean-squared pressure ($10^{-6}$ Pa) has an intensity of $0.67 \times 10^{-18}$ W/m² [23,24], and from Equation (4), we can get SL in logarithmic form:

$SL = 10 \log_{10} \frac{I_t}{0.67 \times 10^{-18}}$. (5)

On the other hand, the source level can also be written via the passive sonar equation [21]:

$SL = SNR + TL + NL - DI$, (6)

where SNR refers to the signal-to-noise ratio, TL and NL are the transmission loss in the underwater environment and the noise level (i.e., ambient noise caused by turbulence, shipping, waves and thermal noise), respectively, and DI denotes the directivity index, which is 0 when using omni-directional hydrophones. Basically, the transmission loss (TL), which can be defined as the accumulated decrease in acoustic intensity as an acoustic wave propagates outwards from the source [21], has a significantly important effect on sound communication underwater. It can be estimated from a variety of underwater phenomena, e.g., geometrical spreading, absorption and scattering. With cylindrical spreading in shallow water, TL can be approximated as follows [24]:

$TL = 10 \log_{10} r + \alpha(f) \cdot r \times 10^{-3}$, (7)

where α(f) refers to the absorption loss in the medium, given by Thorpe's equation [23]. According to Equations (3)-(7), the transmitted power ($P_t$) can be written as

$P_t = \kappa \, r^2 \, 10^{\alpha(f) r \times 10^{-3}/10}$, (8)

where

$\kappa = 2\pi H \times 0.67 \times 10^{-18} \times 10^{(SNR + NL - DI)/10}$. (9)

Finally, substituting Equation (8) into Equation (1), the energy consumption for sending a message is

$E_{tx}(l, r) = l E_{elec} + \frac{l}{R} \, \kappa \, r^2 \, 10^{\alpha(f) r \times 10^{-3}/10}$. (10)
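The reconstructed model above can be checked numerically with a short script. The constants (electronics energy, rate, depth, SNR and noise levels) are illustrative assumptions rather than the paper's Table 1 values, and Thorpe's absorption formula is used for α(f) as the text indicates.

```python
# Sketch of the cylindrical-spreading transmit-energy model of
# Equations (1)-(10); all numerical values are illustrative.
import math

E_ELEC = 50e-9    # electronics energy per bit (J/bit), illustrative
R = 10e3          # transmission rate (bit/s), illustrative
H = 10.0          # water depth (m), shallow-water cylinder height
I_REF = 0.67e-18  # reference intensity (W/m^2) for 1 uPa rms pressure
SNR_DB = 20.0     # required signal-to-noise ratio (dB), illustrative
NL_DB = 50.0      # ambient noise level (dB), illustrative

def alpha_thorpe(f_khz):
    """Thorpe absorption coefficient (dB/km) for frequency in kHz."""
    f2 = f_khz ** 2
    return 0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2) + 2.75e-4 * f2 + 0.003

def transmit_power(r, f_khz=20.0):
    """P_t over distance r (m): Equation (8), cylindrical spreading."""
    tl_db = 10 * math.log10(r) + alpha_thorpe(f_khz) * r * 1e-3  # Eq. (7)
    sl_db = SNR_DB + tl_db + NL_DB                               # Eq. (6), DI = 0
    i_t = I_REF * 10 ** (sl_db / 10)                             # Eq. (5)
    return i_t * 2 * math.pi * r * H                             # Eq. (3)

def e_tx(l_bits, r):
    """Energy to send an l-bit message over distance r: Equation (10)."""
    return l_bits * E_ELEC + (l_bits / R) * transmit_power(r)

print(e_tx(4000, 30.0))  # e.g., a 4000-bit packet over 30 m
```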
The Proposed Algorithm-TCEB
From the aforementioned energy consumption analysis, we can see that the energy dissipated underwater is quite different from that in typical TWSN (the energy consumption in TWSN can be found in [7]). Usually, underwater acoustic communication needs to spend more resources, without any energy recharge. Due to the non-uniform distribution of nodes, a node at a critical location (e.g., serving as the relay node in a communication link) is prone to being burdened with forwarding too much data when sending the collected information. That will lead to network failure caused by premature node energy depletion. To account for the link conditions, we use a Markov model as the channel error model [25] in this paper. In addition, once the network load is imbalanced, some nodes need to compete for communication channels, queuing to send data, which greatly increases the end-to-end delay. Therefore, a topology control algorithm with energy balance (TCEB) using clustering technology is proposed for UWSN, considering how to improve energy efficiency and load balance to prolong the network lifetime as much as possible.
Non-Cooperative-Game-Based Cluster-Head Selection
With clustering technology, the number of cluster-heads in the network is critical to the final performance. Obtaining a reasonable number of clusters and selecting nodes with more residual energy as cluster-heads is the goal of extending lifetime under the condition of limited resources. As far as we know, how to divide the entire network into clusters, i.e., how to determine the number of clusters, is a typical NP-hard problem [7]. Therefore, we use the Nth-order nearest-neighbor analysis theory [26] to adaptively calculate the optimal number of clusters (namely, $k_{opt}$). Moreover, a non-cluster-head node is only permitted to communicate with the sink through its cluster-head, which means the cluster-head consumes more energy on the extra communication and on fusing the collected data. To ensure load balance and the greatest degree of energy saving, the nodes in each cluster should be elected cluster-head in turn. Thus, a strategy based on non-cooperative game theory from economics is adopted to determine which nodes are more suitable to be cluster-heads in each round of the communication process. Notice that we only use the main idea of the non-cooperative game to achieve energy balance.
In the non-cooperative game, each node can be selfish but rational, and whether a node becomes the cluster-head depends on the game. Let G = {N, S, U} be the game model, where N, S, and U are defined, respectively, as:
(1) $N = \{n_1, n_2, \ldots, n_n\}$: the set of players, i.e., $n_i \in N$ corresponds to node i in the network.
(2) $S = \{s_1, s_2, \ldots, s_n\}$: the set of strategies, i.e., $s_i \in S$ ($s_i = 0$ or $1$) is the strategy of player i (i.e., node i), where $s_i = 1$ means being the cluster-head, and otherwise $s_i = 0$, meaning the node does not want to be the cluster-head.
(3) $U = \{u_1, u_2, \ldots, u_n\}$: the set of payoffs, i.e., $u_i \in U$ is the payoff of player i (i.e., node i), which determines whether the node i is elected as the cluster-head.
To guarantee energy consumption balance, we consider a node's energy and path loss as the main factors of cluster-head selection. Thus, the payoff function $u_i$ is further expressed by Equation (12), where $E_r(i)$ and $\bar{E}_r$ are the residual energy of node i and the average residual energy of the whole network in the current round, respectively; $pl(i, j)$ denotes the path loss from node i to its one-hop neighbor node j; $Nei_i$ and $q_i$, respectively, refer to the set of one-hop neighbor nodes of node i and its cardinality; and β is a constant adjustment factor satisfying 0 < β < 1. Notice that $\sum_{j \in Nei_i} pl(i,j)/q_i$ denotes the average path loss from node i to its one-hop neighbor nodes. According to non-cooperative game theory, the player chooses the optimal strategy from the set of strategies to obtain the greater profit. From Equation (12), we can find that the higher the residual energy of the node and the lower the average path loss to its one-hop neighbor nodes, the greater the payoff of the node. To balance the energy consumption of the network, we set a rule that the node acting as a cluster-head obtains the better profit; in that case, we look at which node has obtained the greater profit. In other words, the node with greater $u_i$ is elected with higher probability as the cluster-head. In addition, β is used to adjust the proportion between the energy and the path loss so that we can obtain an optimum payoff for each node, i.e., it is a trade-off between the energy and the path loss.
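Since the exact closed form of Equation (12) is not reproduced in this excerpt, the following sketch uses an assumed weighted combination that matches the stated behaviour: the payoff grows with relative residual energy and shrinks with the average path loss to one-hop neighbors, with β as the trade-off factor.

```python
# Hedged sketch of a payoff consistent with the description of Eq. (12);
# the combination below (energy term minus path-loss term, weighted by
# beta) is an assumption, not the paper's exact expression.
import numpy as np

def payoff(E_r_i, E_r_bar, pl_to_neighbors, beta=0.6):
    """u_i grows with residual energy and shrinks with average path loss."""
    energy_term = E_r_i / E_r_bar                      # relative residual energy
    avg_pl = np.mean(pl_to_neighbors)                  # sum pl(i,j) / q_i
    return beta * energy_term - (1.0 - beta) * avg_pl  # assumed combination

u = payoff(E_r_i=0.8, E_r_bar=0.6, pl_to_neighbors=[0.2, 0.35, 0.3])
```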
The detailed steps for cluster-head selection can be described as follows. Algorithm 1 shows the pseudo-code of the cluster-head selection.
• Step 1 A triad game model, namely G = {N, S, U}, is first built for the network.
• Step 2 According to the current node distribution, the optimal number of clusters (i.e., $k_{opt}$) is calculated through the Nth-order nearest-neighbor analysis theory [26].
• Step 3 Each node broadcasts the HELLO message with its maximum transmission power and sets a timeout for waiting for the reply message. Meanwhile, it collects the neighbor nodes' messages within the specified timeout and establishes its neighbor list.
• Step 4 Each node calculates its own payoff (i.e., $u_i$) according to Equation (12), and then broadcasts its calculated $u_i$ within the timeout.
• Step 5 A node receives the payoff ($u_i$) of its neighbor nodes and stores each $u_i$ with the corresponding node in the neighbor list. After obtaining all payoffs of the nodes in the neighbor list, each node sorts the payoffs (including its own) in descending order.
• Step 6 According to the optimal cluster number, the $k_{opt}$ nodes with the greatest $u_i$ are selected as the final cluster-heads by the game. Then, the newly elected cluster-heads broadcast the message that they have been selected as the cluster-heads of the $k_{opt}$ clusters. That means all $k_{opt}$ nodes decide to be cluster-heads in this round during the game, and the process of cluster-head selection ends.
Algorithm 1 Cluster-Head Selection
Require: A triad game model G = (N, S, U)
1: Initialize the network where n[i] ← 'N'
2: Calculate k_opt using the Nth-order nearest-neighbor analysis method from [26]
3: n_i broadcasts the HELLO and sets a timeout
4: while t < timeout do
5:     n_i collects its neighbor nodes' HELLO
6: end while
7: n_i establishes its neighbor list
8: n_i calculates its u_i according to Equation (12)
9: n_i broadcasts its u_i and sets a timeout
10: while t < timeout do
11:     n_i receives the payoff (u_i) of its neighbor nodes and stores it in the neighbor list
12: end while
13: n_i sorts the payoffs (u_i) in descending order
14: chs ← choose the former k_opt nodes with greater u_i as the cluster-heads
15: for k ← 1 to k_opt do
16:     n[chs[k]] ← 'C'
17:     n_chs[k] broadcasts its message of being newly elected cluster-head
18: end for
Through the process of cluster-head selection, we can see that it is precisely the introduction of the game model and of the node payoff function that allows the $k_{opt}$ cluster-heads to be correctly elected under the premise of a comprehensive balance between energy consumption and path loss. Then, the intra-cluster and inter-cluster topologies are constructed one after the other.
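The net effect of Algorithm 1 can be summarized in a few lines: once every node's payoff is known, the $k_{opt}$ nodes with the greatest $u_i$ become cluster-heads. The centralized sketch below captures Steps 5-6; in the real protocol this happens distributively via broadcasts and timeouts.

```python
# Centralised sketch of the net effect of Algorithm 1: the k_opt nodes
# with the greatest payoff u_i are elected cluster-heads.
import numpy as np

def elect_cluster_heads(payoffs, k_opt):
    """Return indices of the k_opt nodes with the highest payoff."""
    order = np.argsort(payoffs)[::-1]   # descending by u_i (Step 5)
    return set(order[:k_opt].tolist())  # former k_opt nodes (Step 6)

payoffs = np.array([0.41, 0.77, 0.15, 0.62, 0.58])
heads = elect_cluster_heads(payoffs, k_opt=2)  # -> {1, 3}
```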
Intra-Cluster Topology Construction
After the non-cooperative-game-based cluster-head selection, the cluster-heads have been exactly determined, as well as the approximate partitioned region of each cluster. However, the current clusters constitute only the basic topology, so the intra-cluster and inter-cluster communication must be further built to carry out the data transmission. Usually, underwater acoustic communication is affected by multipath effects and high end-to-end delay, while also depending on the distance, the time and the frequency. Therefore, the topology construction will have a great impact on the network communication performance.
Initially, we take the delay with multipath effects into consideration when the intra-cluster topology is built. Due to the non-uniformity of the underwater medium, the acoustic channel exhibits a multipath phenomenon, which means the signal from the source to the destination may pass along different paths under a certain transmission power. On the other hand, because of the difference in path lengths, acoustic waves reaching the destination along different paths take different times and undergo different attenuation. As shown in Figure 1, multiple paths exist from the source node i to the destination node j. They can roughly be represented by $L_0$, $L_1$, and $L_2$, where $L_0$ indicates the direct path between the source and the destination, $L_1$ denotes a reflection arriving via the water surface, and $L_2$ is likewise a reflection, but via the water bottom. Obviously, $L_0$ is the shortest path for acoustic wave propagation, which costs the minimum duration with the least delay.
The delay is complicated to calculate correctly in the practical underwater environment; thus, we show the scenario in Figure 1. Inspired by Ibrahim et al. [27], the total delay between nodes i and j can be written as

$delay_{i \to j} = \frac{l}{C} + \frac{d(i,j)}{c} + \Delta\tau$, (13)

where l and c are the length of the data packet in bits and the propagation velocity of the acoustic wave in water, respectively, d(i, j) refers to the distance between nodes i and j, and ∆τ is the delay caused by multipath propagation. C is the channel capacity in bits per second, which can be expressed according to Shannon's theorem:

$C = B \log_2(1 + SNR)$, (14)

where B is the bandwidth of the channel and SNR refers to the signal-to-noise ratio. Assuming that the noise is Gaussian and the channel is time-invariant for some interval, the capacity can be computed by dividing the total bandwidth into many narrow sub-bands [28]. The ith sub-band is centered around the frequency $f_i$ (i = 1, 2, ...), and its width is ∆f. We introduce the power spectral density of the signal while considering the real scenario. Therefore, the more general channel capacity [29] can be expressed as

$C = \sum_i \Delta f \log_2\!\left(1 + \frac{X(f_i)\,|H(f_i)|^2}{N(f_i)}\right)$, (15)

where X(f) is the power spectral density of the transmitted signal from the source, H(f) denotes the channel transfer function, and N(f) is the noise power spectral density.
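A small numerical sketch of Equations (13) and (14) follows; the bandwidth, SNR, sound speed, and multipath delay values are illustrative assumptions.

```python
# Sketch of the delay model (Eq. (13)) with Shannon capacity (Eq. (14));
# parameter values are illustrative.
import math

def channel_capacity(B, snr):
    """Shannon capacity (bit/s) of a bandwidth-B channel, Eq. (14)."""
    return B * math.log2(1.0 + snr)

def total_delay(l_bits, d_ij, B=10e3, snr=100.0, c=1500.0, multipath=0.02):
    """Transmission + propagation + multipath delay (s), Eq. (13)."""
    C = channel_capacity(B, snr)
    return l_bits / C + d_ij / c + multipath

print(total_delay(l_bits=4000, d_ij=50.0))
```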
Let $delay^0_{i \to j} = l/C + d(i,j)/c + \tau_0$ denote the total delay when the signal propagates on path $L_0$. To reduce the complexity of the delay analysis under multipath effects, the communication paths from the source to the destination are restricted in this paper, which means data propagation through reflection paths is limited. Therefore, according to this rule, a path (any path except $L_0$) is defined as invalid if the delay for signal propagation from node i to j on that path is greater than $delay^0_{i \to j}$.

When the intra-cluster topology construction begins, all the non-cluster-head nodes wait to join one of the $k_{opt}$ clusters. After the cluster-head broadcasts the INVITATION message including its ID and $u_i$, each non-cluster-head node performs the next action promptly, depending on how many INVITATION messages it has received. Considering the case that a non-cluster-head node may receive INVITATION messages from multiple cluster-heads, various strategies are adopted according to the value of $u_i$ to keep all of the clusters load-balanced as much as possible. Once a non-cluster-head node receives only one INVITATION message, it naturally becomes a cluster member of the cluster-head which sent that message and replies with an ACK message. If the non-cluster-head node receives more than one INVITATION message, it chooses to be a cluster member of the cluster-head with the largest $u_i$ and sends back an ACK reply. Otherwise, the non-cluster-head node waits to join a cluster via multi-hop communication. Specially, if the non-cluster-head node receives multiple INVITATION messages from different cluster-heads with the same largest $u_i$, it chooses one of those clusters randomly to join while sending the corresponding ACK back. During this process, we notice that some non-cluster-heads may not receive any message from any cluster-head. To achieve the goal of minimizing the energy consumption of the network, it is necessary to design a strategy for selecting the next-hop neighbor node to form the best transmission path. Consequently, a relay node is selected in terms of the communication cost to make sure such nodes join one of the clusters. The mechanism for selecting the relay node is defined by Equation (16), where $E_r(j)$ is the residual energy of node j; $E_{cons}(i,j)$ is the total energy consumption of the communication between nodes i and j, i.e., the energy dissipated when node i sends the message and node j receives it; $R_{link}(i,j)$ and $pd_{loss}(j)$ are the link reliability between nodes i and j and the packet loss rate of node j, respectively; and γ is an adjustment factor satisfying 0 < γ < 1. Equation (16) reflects the probability that the non-cluster-head node i chooses its one-hop neighbor node j as the relay node to communicate with the cluster-head. It can easily be seen from Equation (16) that the more residual energy node j has and the lower the communication cost between the two nodes, the more easily node j is selected as the relay node. Simultaneously, the higher the link reliability and the smaller the packet loss rate, the more easily node j becomes the relay node. Therefore, to obtain an optimum relay node during the intra-cluster topology construction, we cannot merely look at one parameter or a subset of the parameters in this strategy, but must consider the trade-off between the various parameters. At this point, γ is the key factor for adjusting this trade-off to select the optimum relay nodes and build a better local topology.
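Since Equation (16) itself is not reproduced in this excerpt, the sketch below assumes a plausible scoring rule consistent with the description: higher residual energy and link reliability, and lower communication cost and packet loss, make a neighbor more likely to be chosen as the relay.

```python
# Hedged sketch of a relay-selection score consistent with the
# description of Eq. (16); the exact expression is an assumption.
def relay_score(E_r_j, E_cons_ij, R_link_ij, pd_loss_j, gamma=0.5):
    """Higher residual energy and reliability, lower cost and loss -> better."""
    energy_term = E_r_j / E_cons_ij            # energy vs. communication cost
    link_term = R_link_ij * (1.0 - pd_loss_j)  # reliable, low-loss links
    return gamma * energy_term + (1.0 - gamma) * link_term

def choose_relay(candidates):
    """candidates: dict node_id -> (E_r, E_cons, R_link, pd_loss)."""
    return max(candidates, key=lambda j: relay_score(*candidates[j]))

relay = choose_relay({2: (0.9, 0.05, 0.8, 0.1), 5: (0.6, 0.03, 0.9, 0.05)})
```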
Algorithm 2 gives the pseudo-code of intra-cluster topology construction. The detailed steps for intra-cluster topology construction are presented as follows.
• Step 1. The cluster-head broadcasts the INVITATION message including its ID and $u_i$, and sets a timeout as well.
• Step 2. If the non-cluster-head receives an INVITATION message from a cluster-head within the timeout, go to Step 3; otherwise, perform Step 5.
• Step 3. If the non-cluster-head node receives INVITATION messages from multiple cluster-heads, it chooses to join the cluster whose cluster-head has the largest $u_i$. If there exist equally largest $u_i$ from different cluster-heads, go to Step 4; otherwise, reply with an ACK message and perform Step 6 directly.
• Step 4. The non-cluster-head randomly chooses one of the clusters which have the same largest $u_i$, and then it sends the ACK reply.
• Step 5. If the non-cluster-head does not receive any INVITATION message from any cluster-head, it selects a relay node according to Equation (16). Afterwards, the non-cluster-head establishes the communication link with the relay node.
• Step 6. The intra-cluster topology construction continues until all non-cluster-heads have joined one of the $k_{opt}$ clusters.
Algorithm 2 Intra-cluster topology construction
1: for k ← 1 to k_opt do
2:     chs_k broadcasts the INVITATION and sets a timeout
3: end for
4: for i ← 1 to n do
5:     if n[i] != 'C' then
6:         if n_i receives the INVITATION within timeout then
7:             if Num(INVITATION) > 1 then
8:                 if there is no equal u_i then
9:                     n_i becomes the cluster member of the cluster-head with the largest u_i
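The joining rule of Algorithm 2 (Steps 2-5) reduces to a small decision function, sketched below with an illustrative invitations mapping (cluster-head id → $u_i$) and a fallback hook for the relay-selection case; names and structures are assumptions for illustration.

```python
# Sketch of the non-cluster-head joining rule of Algorithm 2; the
# invitations dict (head_id -> u_i) and fallback hook are illustrative.
import random

def join_cluster(invitations, select_relay):
    """Return the chosen cluster-head id, or a relay via select_relay()."""
    if not invitations:
        return select_relay()   # Step 5: join via a relay node, Eq. (16)
    best_u = max(invitations.values())
    best = [h for h, u in invitations.items() if u == best_u]
    return random.choice(best)  # Steps 3-4: largest u_i, random tie-break

head = join_cluster({7: 0.62, 3: 0.62}, select_relay=lambda: None)
```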
Inter-Cluster Topology Construction
The inter-cluster topology construction is the final process, connecting all of the parts (i.e., clusters) into a tree for further communication requirements. It is actually about how to build the communication links between the clusters and the sink: the construction of the global network, if the intra-cluster topology construction is regarded as the local network formation. Consequently, in this section, we need to find a way to rapidly build the path from each cluster-head to the sink. Specially, a cluster-head which is far away from the sink should be given more care, because it cannot directly communicate with the sink by one-hop transmission; it must rely on another cluster-head that is closer to the sink, completing the task of data transmission by multi-hop communication.
When the inter-cluster topology is built, the process begins from the sink. Initially, the sink broadcasts the HELLO message to the monitoring area at its minimum transmission power, and then gradually increases the power until its maximum transmission power is reached or the communication radius covers the entire monitoring region. As shown in Figure 2, according to the power level of the sink, the levels can be recorded as level I, level II, level III, etc. (from low to high), corresponding to different ring-shaped monitoring areas. If a cluster-head receives the HELLO message within one of the broadcast ranges corresponding to a power level of the sink, it records its current power level.
Once the grading is completed, the cluster-heads with the lowest power level of the sink (e.g., the area in which power level I is located in Figure 2) first establish the communication relationship with the sink. Then, the cluster-heads within power level II are next tasked with making the connection to the sink. Whether such a cluster-head chooses one of the cluster-heads within power level I as the relay node, or still communicates directly with the sink, depends on the communication cost it would spend. The main cost function of the communication between cluster-head i and cluster-head j (which may be the sink) is given as Equation (17), where $E_r(i)$ and $E_{Init}(i)$ are the residual energy and the initial energy of cluster-head i, respectively; d(i, j) refers to the distance between cluster-heads i and j; $r_{sens}(i)$ is the sensing radius of cluster-head i; and ω denotes an adjustment factor, which adjusts the proportion between the residual energy and the communication distance.
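As Equation (17) is not reproduced in this excerpt, the sketch below assumes a plausible cost consistent with the description (spent energy and normalized distance, traded off by ω), together with the relay-or-direct decision the text describes.

```python
# Hedged sketch of an inter-cluster cost consistent with the description
# of Eq. (17); the exact expression is an assumption.
def cost(E_r_i, E_init_i, d_ij, r_sens_i, omega=0.5):
    energy_term = 1.0 - E_r_i / E_init_i  # fraction of energy already spent
    dist_term = d_ij / r_sens_i           # normalised communication distance
    return omega * energy_term + (1.0 - omega) * dist_term

def better_via_relay(k, j, sink, costs):
    """True if routing k -> j -> sink is cheaper than k -> sink directly."""
    return costs[(k, j)] + costs[(j, sink)] < costs[(k, sink)]
```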
[Figure 2: the sink at the center of the monitoring area with concentric power-level rings I-V; cluster-heads are graded by the ring in which they receive the sink's HELLO.]

Algorithm 3 Inter-cluster topology construction (fragment)
if power[k] > 1 then
10:     best ← cost(k, sink) // Initial best cost by Equation (17)
11:     relay ← sink
12:     for j ← 1 to k_opt do
13:         if cost(k, j) + cost(j, sink) < best then
14:             best ← cost(k, j) + cost(j, sink)
15:             ...

With the cost given by Equation (17), we can make a correct decision on whether the current cluster-head needs a relay cluster-head to forward its data to the sink. If the cost spent by the cluster-head via the relay cluster-head is indeed less than that of the previous communication mode (i.e., direct communication with the sink), it establishes the multi-hop communication between the cluster-head and the sink. In that way, it also enhances the close connection between the different clusters. Traversing all of the cluster-heads of each power level one after another, the process of inter-cluster topology control continues until all of the cluster-heads have built a direct or indirect connection with the sink. Algorithm 3 presents the pseudo-code of the detailed inter-cluster topology construction.
Afterwards, the collected data can be forwarded through the intra-cluster and inter-cluster links. Both cluster-heads and non-cluster-heads perform the communication task according to their TDMA scheduling tables. On the other hand, during the process of intra-cluster and inter-cluster topology construction in every round, each non-cluster-head or cluster-head also stores its corresponding routing information to guard against node and link failures in the TCEB algorithm. This enables a quick response and a restart of the topology reconstruction, i.e., the process of topology maintenance is triggered to be performed immediately.
Simulation Environment
To validate the effectiveness of the proposed algorithm, we implement and evaluate TCEB against the typical clustering-based algorithms LEACH [7], EDCS [8], and GR-CTC [10] under almost the same parameter settings in Matlab R2013a and Atarraya [30]. As discussed before, all nodes are randomly deployed in a square of 100 m × 100 m, and the sink is located at the center of the area, i.e., (50, 50). More detailed parameters used in the simulation are given in Table 1. In particular, the weighting factors β, γ, and ω are taken from the results of hundreds of experiments. During the simulation, we mainly perform two kinds of experiments: one is a self-comparison, observing the efficiency of TCEB under different node numbers and energy factors; the other is a comparison with the typical clustering-based algorithms, to determine whether TCEB has preferable merits. To eliminate the randomness of the experimental results, all of the tests are performed at least 100 times to obtain average values.
Partial Parameters Impact Analysis
In this section, we first focus on the energy efficiency of TCEB, i.e., the changes of the network lifetime under different numbers of nodes and energy heterogeneity factors in UWSN. Usually, the lifetime contains two periods: the stable phase and the unstable phase. The stable phase refers to the time until the first node dies, and the unstable phase denotes the period between the death of the first node and the death of the last node. The round is used as the unit of network lifetime. Consequently, the variation of both periods should be watched as the number of nodes and the energy heterogeneity parameter change. Figures 3 and 4 show the variation of the stable period and the whole lifetime with different numbers of nodes under different energy heterogeneity factors, respectively. In Figure 3, with the increase of λ, i.e., as nodes are deployed with more energy, the death of the first node occurs later than before regardless of the current number of nodes, and vice versa. Simultaneously, from the longitudinal observation, as more nodes are deployed, the first node dies earlier under the same energy heterogeneity factor (λ). This is because the number of exchanged messages between nodes and the interference in the network increase sharply when the number of nodes exceeds a certain value. Obviously, this dissipates additional energy during networking, which directly leads to the first node dying earlier. Therefore, more nodes are not always better, especially in a specific concentrated area. As shown in Figure 4, with the increase of λ, the lifetime also becomes longer under different numbers of deployed nodes. Generally, increasing the number of deployed nodes is equivalent to increasing the energy of the network. Nevertheless, we can find that the whole network lifetime does not vary greatly under the same λ while increasing the number of deployed nodes. Similar to the situation of the first node's death, the main reason lies in the increased frequency of communication and the occurrence of interference: extra energy is spent on exchanging messages and controlling the topology, even as the total energy of the network increases. Consequently, enlarging the energy heterogeneity factor can definitely extend the network lifetime, while the number of nodes should be kept at a reasonable quantity in practical underwater scenarios.
We also pay close attention to the impact on the end-to-end delay caused by changes of the node number and the energy heterogeneity parameter in UWSN. The variation of the average end-to-end delay under different n and λ in TCEB is shown in Figure 5. As the energy heterogeneity factor λ grows, the average end-to-end delay basically does not change much for the same number of deployed nodes. The changes are only milliseconds, which means that equipping nodes with more energy almost does not influence the end-to-end delay. At the same time, this also indirectly reflects that the topology is scheduled in time, making nodes adapt to new roles by dynamic adjustment, which ensures the QoS of the network. Furthermore, with the increase in the number of deployed nodes, the average end-to-end delay inevitably rises. This is because the interference among nodes gradually increases once the number of deployed nodes reaches a certain degree of saturation, which could inadvertently reduce the QoS of the network. Consequently, we reach the conclusion again, from the average end-to-end delay, that more nodes are not always better for the network, even though many nodes can provide more total energy. In addition, we also need to investigate the overhead of control messages, since it is related to the lifetime and the final energy consumption of the entire network. As shown in Figure 6, the total energy consumption of control messages rises for any network scale (i.e., n = 100, 200, 300, 400) with an increasing energy heterogeneity parameter λ. This is because a network with more energy works for a longer lifetime, and the energy overhead of control messages continues to accumulate. Meanwhile, from the longitudinal observation, enlarging the network scale not only provides more total energy to the network, but also increases the overhead of control messages. Obviously, it is a normal phenomenon that adding more nodes to the network brings additional communication between nodes. Fortunately, this overhead has not grown exponentially and is still within acceptable limits.
Finally, according to our statistics, the control-message overhead accounts for 10-20% of the total communication cost.
Performance Evaluation
In this section, we compare the performance of TCEB with the typical LEACH, EDCS, and GR-CTC algorithms in the underwater environment. We focus on the times at which the first node and all nodes die, the average end-to-end delay, and the network throughput. The first two aspects measure energy efficiency, expressed as time periods (namely, rounds) covering the stable phase and the whole network lifetime. The latter two examine the communication performance of the network; throughput also indirectly reflects the QoS, e.g., a high packet loss rate substantially reduces the number of data packets finally received. All algorithms are evaluated under different deployed node densities (n = 100 or n = 200) and different equipped energy, and the results are averaged over multiple runs to eliminate randomness. Figures 7 and 8, respectively, compare the four algorithms with respect to the stable phase (i.e., the time when the first node dies) and the whole lifetime (i.e., the time when all nodes die) under different energy heterogeneity factors, with 100 deployed nodes. Basically, as λ grows, the death of the first node and the death of all nodes are postponed in all four algorithms; that is, equipping each node with more energy helps prolong both the stable period and the whole lifetime. Nevertheless, a longitudinal comparison shows that TCEB outperforms the other three algorithms under the same λ in both the stable period and the whole lifetime. The typical LEACH clearly has the worst energy efficiency, with the shortest lifetime. LEACH was once the classical clustering-based algorithm for homogeneous TWSNs; however, underwater is a very complicated environment that easily cuts down LEACH's communication efficiency through various factors. Since LEACH cannot respond to a variety of underwater emergencies, it is not suitable for a UWSN, which is a special heterogeneous wireless network. On the other hand, EDCS and GR-CTC also outlive LEACH at the network scale of 100 nodes, where EDCS is designed for general heterogeneous wireless sensor networks and GR-CTC is proposed for UWSNs. EDCS considers many impact factors and difficult problems in its design, but it still ignores the complex conditions of underwater acoustic communication; the same packets may be forwarded multiple times owing to a higher packet loss rate, which dissipates more energy on retransmissions. GR-CTC adopts a centralized strategy that requires extra interaction between the nodes; this consumes energy and reduces its efficiency even though it is specifically designed for underwater scenarios. Unlike LEACH, EDCS, and GR-CTC, the proposed TCEB combines the characteristics of heterogeneous networks with the particularities of the underwater environment; the larger underwater energy consumption, the multipath propagation, and the link stability are all taken into account to guide cluster-head selection and the construction of the multi-hop tree topology. Moreover, with a series of load-balancing strategies for the energy consumption between nodes, TCEB naturally prolongs both the stable period and the whole lifetime.
Next, we compare the stable period and the whole lifetime under a node density of n = 200 as the energy heterogeneity factor (λ) increases. As shown in Figures 9 and 10, the curves follow almost the same trend as in Figures 7 and 8, respectively: the stable period and the whole lifetime of all four algorithms are extended as λ increases under the network density of n = 200. However, comparing Figures 7 and 9 shows that, for most of the algorithms, the first node dies relatively earlier under n = 200 than under n = 100 at the same energy heterogeneity factor. The main reason is that too many nodes are located in the same small monitoring area, so many more messages are exchanged in the local network. At the same time, too many nodes inevitably create much more communication interference among clusters and nodes, which can result in node failure or even partial network paralysis. In particular, the complex underwater environment may increase the number of link hops, which enlarges the burden on the relay nodes. Hence, extra energy is depleted on frequently resending data to keep the network working normally and to keep the load balanced and stable. In Figure 10, the lifetime of TCEB is evidently better than that of the other three algorithms. Combined with the foregoing analysis, TCEB therefore has an energy-efficiency advantage over the other three algorithms under different network densities, even when the nodes are equipped with different energy heterogeneity parameters. From the energy-efficiency point of view, the proposed TCEB can contribute to future underwater applications.
Furthermore, Figures 11 and 12 compare the average end-to-end delay of TCEB with the other typical algorithms under n = 100 and n = 200, respectively, as the energy heterogeneity factor (λ) changes. In Figure 11, each algorithm's curve shows only tiny fluctuations under different λ; that is, the average end-to-end delay is relatively stable under n = 100 when the total energy of the network changes (i.e., when λ changes). Meanwhile, TCEB has a lower average end-to-end delay than the other algorithms because it takes more realistic conditions into consideration. In contrast, LEACH has the highest average end-to-end delay among the four algorithms, since it is better suited to ideal homogeneous environments. Figure 12 shows almost the same trends for each algorithm under n = 200 as in the n = 100 scenario. Comparing Figures 11 and 12, the average end-to-end delay of every algorithm rises as the network scale grows, but the amplitude of the variation is small, because each algorithm adopts mechanisms of its own to reduce the end-to-end delay.
Finally, Figures 13 and 14 compare the throughput of TCEB with the other typical algorithms under n = 100 and n = 200, respectively, as the energy heterogeneity factor (λ) changes. The throughput increases in all four algorithms under both network densities (n = 100 and n = 200) as λ becomes larger, following the rule that the more energy the network is equipped with, the higher the throughput. On the other hand, whether the network density is 100 or 200 nodes, the throughput of TCEB is higher than that of the other three algorithms at the same λ. This is because TCEB has a more efficient clustering-based scheme with a load-balancing strategy to cope with the complicated underwater communication; it not only saves energy and prolongs the network lifetime but also yields higher throughput. This also indicates that TCEB has a higher transmission success rate than the other three algorithms, thereby increasing the network throughput.
Conclusions
In this paper, we address the issue of energy efficiency under long time delays and multipath effects in underwater wireless sensor networks and propose a topology control with energy balance (TCEB) scheme to ensure network load balance and prolong the lifetime. Combining the path loss with the node energy and the average energy of the current network, the proposed TCEB adopts a game-based scheme to select the nodes with the best payoff as cluster-heads. Both the intra-cluster and the inter-cluster topology are constructed to choose reasonable candidate relay nodes and form optimal links in the underwater network. With the help of topology maintenance, TCEB can also dynamically adjust the topology when the underwater network becomes unavailable or non-optimal. Simulation results show that TCEB effectively prolongs the network lifetime and outperforms LEACH, EDCS, and GR-CTC. | 11,193 | 2018-07-01T00:00:00.000 | [
"Environmental Science",
"Engineering",
"Computer Science"
] |
IL-23 receptor deficiency results in lower bone mass via indirect regulation of bone formation
The IL-23 receptor (IL-23R) signaling pathway has pleiotropic effects on the differentiation of osteoclasts and osteoblasts, since it can inhibit or stimulate these processes via different pathways. However, the potential role of this pathway in the regulation of bone homeostasis remains elusive. Therefore, we studied the role of IL-23R signaling in physiological bone remodeling using IL-23R deficient mice. Using µCT, we demonstrate that 7-week-old IL-23R−/− mice have bone mass similar to that of age-matched littermate control mice. In contrast, 12-week-old IL-23R−/− mice have significantly lower trabecular and cortical bone mass, shorter femurs and more fragile bones. At the age of 26 weeks, there were no differences in trabecular bone mass and femur length, but most cortical bone mass parameters remained significantly lower in IL-23R−/− mice. The in vitro osteoclast differentiation and resorption capacity of 7- and 12-week-old IL-23R−/− mice are similar to WT. However, serum levels of the bone formation marker PINP are significantly lower in 12-week-old IL-23R−/− mice, but similar to WT at 7 and 26 weeks. Interestingly, Il23r gene expression was not detected in in vitro cultured osteoblasts, suggesting an indirect effect of IL-23R. In conclusion, IL-23R deficiency results in temporal and long-term changes in bone growth via regulation of bone formation.
Notably, in both extremes, a role for Interleukin-23 (IL-23) has been reported [6][7][8] . IL-23 belongs to the IL-12 cytokine family and is composed of a heterodimer of the subunits IL-23p19 and IL-12p40 9,10 . The receptor for IL-23 (IL-23R) is formed by the subunits IL-23R and IL-12Rβ1 11,12 . Due to its role in the induction of other proinflammatory cytokines, such as IL-17A, GM-CSF, and IL-22, the IL-23R signaling pathway has been the subject of interest in different immune-mediated inflammatory diseases accompanied by bone erosions 13 .
In this context, patients with rheumatoid arthritis and psoriatic arthritis have increased levels of IL-23 in their serum 7,8 . In mice, systemic overexpression of IL-23 via hydrodynamic delivery of IL-23 minicircle DNA induces chronic arthritis, increased osteoclast differentiation and systemic bone loss 14 . Similarly, a psoriasis-like disease develops in the novel K23 mouse model, which suffers from increased levels of IL-23 in the skin 15 . In these mice, the psoriasis-like disease precedes psoriatic arthritis, including enthesitis, dactylitis and bone destruction. Interestingly, overexpression of IL-23 via IL-23 minicircle DNA in a murine model of spondyloarthropathy leads to pathological new bone formation during the initial phase of disease, while destruction of articular surfaces occurs at later time points 16 . These studies indicate that increased levels of IL-23 result in inflammatory conditions accompanied by excessive bone formation and/or degradation.
On the other hand, studies have demonstrated that the absence of IL-23 also affects bone physiology. Reduced trabecular bone mineral density was detected in 12- and 26-week-old IL-23p19 −/− mice 17 . Other studies did not find any bone abnormalities in 8-week-old 14 and 12-week-old 18 IL-23p19 −/− mice, but reported a higher trabecular number (Tb.N) in 26-week-old IL-23p19 −/− mice 14 . Yet a recent study demonstrated higher trabecular bone mass in 2- and 12-month-old IL-12p40 −/− mice, which lack both IL-12 and IL-23 19 . Clearly, IL-23 deficiency results in altered bone physiology; however, the lack of consensus in the findings emphasizes the need for additional in vivo studies to unravel the precise role of IL-23 deficiency herein.
In this attempt, knowledge gained from in vitro studies, which have demonstrated both direct and indirect effects of IL-23 on osteoblasts and osteoclasts, is valuable 20 . While one study demonstrated that IL-23 can promote osteoclast formation by upregulation of RANK on bone marrow (BM)-derived osteoclast precursors 21 , another study demonstrated inhibitory effects of IL-23 on osteoclast formation through induction of GM-CSF in T cells 17 . It should be noted that although most studies use the whole BM population for differentiation of osteoclasts, it was demonstrated earlier that only the early blasts (CD31 + Ly6C − ), myeloid blasts (CD31 + Ly6C + ) and monocytes (CD31 − Ly6C + ) differentiate towards osteoclasts 22 . Of these three, myeloid blasts appeared to be the most potent in osteoclastogenesis.
In primary osteoblasts, IL-23R protein is absent, and IL-23 treatment does not influence alkaline phosphatase (ALP) activity, RANKL expression and the proliferation of these cells 23 . However, signals of the IL-23R pathway can affect osteoblasts indirectly through IL-17A and IL-22, since their receptors are present on osteoblasts. Indeed, IL-17A inhibits ALP activity of osteoblasts 24 , while IL-22 stimulates mesenchymal stem cell migration and osteogenesis-related genes 25 . The above studies demonstrate the complexity of the interplay between IL-23 and cells of the bone. Adding to this complexity, bone physiology is influenced by different factors such as endocrine hormones 26 or fat metabolism proteins such as leptin 27 , which could also affect IL-23 levels 28,29 .
Despite the ample amount of data suggesting pleiotropic effects of the IL-23R pathway and its downstream cytokines on bone cells, the effects of IL-23R signaling on bone remodeling during steady state are not well defined. We studied the role of IL-23R signaling in bone homeostasis at different ages using IL-23R deficient mice, and demonstrate that IL-23R deficiency results in changes in bone mass via indirect regulation of osteoblast function.
Material and methods
Animals. Knock-in IL-23R-GFP reporter (IL-23R GFP/+ ) mice were kindly provided by Dr. Mohamed Oukka, Seattle, USA and Prof. Dr. Vijay Kuchroo, Boston, USA 30 . For generation of IL-23R GFP/+ mice, an IRES-GFP cassette was introduced after exon 8 of the endogenous IL-23R gene. The targeting construct was electroporated into Bruce4 ES cells. Targeted ES cells were injected into BALB/c blastocysts and male chimeras were bred with female C57BL/6 mice 30 . IL-23R GFP/+ mice were bred to generate IL-23R −/− (IL-23R GFP/GFP ) and WT (IL-23R +/+ ) mice in the Erasmus MC experimental animal facility. Seven-, 12-and 26-week-old littermate male mice were used for this study. All mice were kept under specific pathogen-free conditions at the Erasmus MC experimental animal facility. Food and water were provided ad libitum. All animal experiments were performed in accordance with relevant guidelines and regulations and were approved by the Erasmus MC Dutch Animal Ethics Committee (DEC).
Micro-computed tomography (μCT).
The left femurs were dissected and fixed overnight in 10% formalin at 4 °C. The bones were then stored in 70% ethanol at 4 °C until micro-computed tomography (μCT) analysis was performed using a SkyScan 1076 at a 9 μm voxel resolution and 2300 ms exposure time. The following settings were used: X-ray tube voltage of 40 kV and tube current of 250 μA. Beam hardening (20%) was reduced using a 1 mm aluminum filter, the ring-artefact correction was set at 5, and an average of three photos (frame averaging) at each rotation step (0.8°) was taken to generate the final images. For the analysis of trabecular bone parameters, the distal metaphysis was scanned (a scan area of 1.35 mm from the distal growth plate towards the femoral center). Analysis of the cortical bone parameters was performed in the diaphyseal cortex, comprising a scan area of 0.45 mm in the femoral center. 3D reconstruction and data analysis were performed using manufacturer-provided software from Bruker MicroCT (NRecon, Data viewer, CT analyzer, SkyScan). Analyzed trabecular and cortical bone parameters are reported according to the 'Guidelines for assessment of bone microstructure in rodents using micro-computed tomography' of the American Society for Bone and Mineral Research 31 .

Three-point bending test. The same femurs used for the μCT analysis were subjected to a three-point bending test using a Chatillon TCD225 series force measurement system (Technex BV, The Netherlands) 32 . Displacement (mm) and force (N) were registered. Stiffness (N/mm) and work-to-failure (the total amount of energy required to fracture, indicated by the area under the load-displacement curve, N·mm) were calculated.
Flow cytometry. Monoclonal antibody stainings of BM cells were performed as described previously 33 .
Briefly, BM cells were incubated for 30 min with 50 μl anti-FCγRII/III antibodies (Bioceros) to block nonspecific binding. Cells were subsequently incubated for 30 min with anti-mouse CD31 (Biorad) and Ly6C (Bio-Legend) antibodies. For exclusion of dead cells, BM cells were incubated with Fixable Viability Dye eFluor506 (eBioscience) for 30 min. All incubation steps were performed at 4 °C in the dark. Samples were acquired on an LSRII flow cytometer (BD Biosciences), and data were analyzed using FlowJo v7.6 software (Tree Star Inc. Ashland, OR).
Cell culture. To obtain osteoclasts, BM cells from femurs and tibiae were cultured for 5-9 days in the presence of 30 ng/ml recombinant M-CSF (R&D Systems) and 20 ng/ml recombinant RANKL-TEC (R&D Systems). Cells were seeded in 96-well flat-bottom plates at a density of 1.0 × 10^5 BM cells/well in α-MEM (ThermoFisher Scientific) supplemented with 10% fetal calf serum, 100 U/ml penicillin/streptomycin (Lonza) and 250 ng/ml amphotericin B/Fungizone (antibiotic-antimycotic solution, Sigma). The medium was refreshed every 3 days. For the bone resorption assays, cells were cultured in a Corning osteoassay surface plate (Corning, USA). To differentiate osteoblasts, 1.0 × 10^6 BM cells were cultured in α-MEM supplemented with 15% fetal calf serum, 100 U/ml penicillin/streptomycin and 250 ng/ml amphotericin B/Fungizone in 24-well plates. Half of the medium was refreshed every 3 or 4 days, and L-ascorbic acid (Sigma-Aldrich) and β-glycerophosphate (Sigma-Aldrich) were added to the medium. At day three, 0.1 mM L-ascorbic acid and 0.01 M β-glycerophosphate were added. During subsequent medium refreshments, 0.05 mM L-ascorbic acid and 5 mM β-glycerophosphate were added.
Tartrate-resistant acid phosphatase (TRAP) assay. Cells were washed with PBS and fixed with 10%
formalin. TRAP + cells were stained using a TRAP leukocyte kit (Sigma-Aldrich). The staining was performed according to the manufacturer's instructions, except for the following adaptation: to visualize osteoclasts specifically, we used a 1 M tartrate solution instead of the 0.3 M recommended by the manufacturer. Per well, 7 photos were taken at different locations to minimize the effect of unequal osteoclast development across the wells. Osteoclasts with ≥ 3 nuclei were counted using the freely available ImageJ software (https://imagej.net/Welcome).
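The counting itself was done manually in ImageJ. Purely as a hypothetical illustration of the counting rule (an osteoclast is a TRAP+ region containing at least 3 nuclei), a scikit-image sketch could look as follows; the mask inputs and the function are assumptions, not part of the published pipeline.

```python
import numpy as np
from skimage import measure

def count_osteoclasts(trap_mask: np.ndarray, nuclei_mask: np.ndarray) -> int:
    """Count TRAP+ regions containing >= 3 nuclei (the criterion used above).

    Both arguments are boolean masks obtained from prior thresholding.
    """
    cells = measure.label(trap_mask)      # connected TRAP+ regions
    nuclei = measure.label(nuclei_mask)   # individual nuclei
    per_cell = {}
    for region in measure.regionprops(nuclei):
        r, c = (int(round(v)) for v in region.centroid)
        cell_id = cells[r, c]             # TRAP+ region this nucleus sits in
        if cell_id:
            per_cell[cell_id] = per_cell.get(cell_id, 0) + 1
    return sum(1 for k in per_cell.values() if k >= 3)
```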
Bone resorption assay. The supernatant of the cells was removed, and the wells were washed three times with distilled water to lyse the cells. The wells were stained with 5% silver nitrate (Sigma-Aldrich) in bright daylight for 30 min. The wells were subsequently fixed for 40-60 s in 5% sodium carbonate (Merck) solubilized in 25% formalin. Lastly, the wells were incubated for 2 min in 5% sodium thiosulphate (Merck) in deionized water. After each incubation step, the wells were washed three times with distilled water. Per well, 4 photos were taken to minimize the effect of unequal resorption activity across the wells. Bone resorption was quantified by measuring the percentage of resorbed area per photo using the freely available ImageJ software (https://imagej.net/Welcome). The mean percentage of the 4 photos was used to determine the osteoclast activity in each well.
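The per-photo resorbed-area percentage can be sketched similarly; the Otsu threshold below is an assumption, since the paper only states that the percentage of resorbed area was measured in ImageJ.

```python
import numpy as np
from skimage import io, filters

def resorbed_fraction(path: str) -> float:
    """Percentage of resorbed (unstained, lighter) area in one photo."""
    img = io.imread(path, as_gray=True)
    thresh = filters.threshold_otsu(img)         # split stained surface vs. pits
    return 100.0 * float(np.mean(img > thresh))  # lighter pixels = resorbed

# per well: average the 4 photos, as in the protocol above
# activity = np.mean([resorbed_fraction(p) for p in photo_paths])
```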
Real-time PCR. RNA was isolated from osteoblasts at day 10 of culture, using the GenElute Mammalian Total RNA Miniprep Kit according to the manufacturer's instructions (Sigma Aldrich). RNA was treated with 0.1 U/μl DNAse I Amplification Grade (Invitrogen). cDNA was synthesized using random hexamer primers, oligo(dT) primers and 10 U/μl Superscript II (Invitrogen). Primer sequences and probes are listed in Table 1. Real-time PCR was performed using the Viia7 (Applied Biosystems) system.
ELISA.
Leptin and testosterone were measured in serum samples using the Leptin or the Testosterone mouse/rat ELISA (Alpco Diagnostics). Procollagen I N-terminal propeptide (PINP) was measured using the mouse PINP ELISA (Abbexa), and Anti-Müllerian Hormone (AMH) was measured using the mouse AMH ELISA (Ansh Labs). ELISAs were performed according to the manufacturer's instructions.
Statistical analysis
Data are expressed as mean ± SEM. Data were tested for normality with the Kolmogorov-Smirnov method using IBM SPSS Statistics 24. Statistical difference between groups was assessed using unpaired t tests. To assess interactions between the age (7, 12, 26 weeks) and the genotype (WT and IL-23R −/−) of the mice, two-way ANOVA was performed. In case of a significant interaction between age and genotype, an unpaired t test was performed to compare WT vs IL-23R −/− at each age. One-way ANOVA with Tukey's multiple comparison test was used to assess differences within each genotype over time (7, 12 and 26 weeks). In case of no significant interaction between age and genotype according to the two-way ANOVA, but a significant difference for only one of the parameters age or genotype, the steps above were performed only for the parameter that came out as significantly different. Statistical differences were determined using GraphPad Prism version 5.01 (GraphPad Software), and p values < 0.05 were considered statistically significant.

IL-23R deficiency leads to temporal abnormalities in trabecular bone

At 7 weeks of age, trabecular bone parameters were similar between IL-23R −/− and WT mice (Fig. 1, Fig. S1A; Table S1). However, differences in these parameters were most notable at 12 weeks of age between both groups: IL-23R −/− BV/TV, Tb.Th, Tb.N and SMI were lower compared to WT, while Tb.Sp was higher (Fig. 1, Fig. S1A; Table S1). This difference can be explained by the stronger decrease in BV/TV (WT 3.5%; KO 33%) and Tb.N (WT 16%; KO 37%) of IL-23R −/− mice between the ages of 7 and 12 weeks compared to WT mice (Fig. 1B). Furthermore, Tb.Th increased significantly (14%; P < 0.01) in WT mice between 7 and 12 weeks, whereas this increase was not significant (6%) in IL-23R deficient mice.

Surprisingly, trabecular bone mass was similar to WT in 26-week-old IL-23R −/− mice. In WT mice, there was a significant decrease of 65% in BV/TV (P < 0.001), 54% in Tb.N (P < 0.001) and 22% in Tb.Th (P < 0.001) between 12 and 26 weeks, while in IL-23R −/− mice there was a smaller decrease of 29% in BV/TV, 23% in Tb.N and 9% in Tb.Th. These data demonstrate that IL-23R deficiency leads to temporal changes in trabecular bone mass parameters.
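The statistical decision flow above (two-way ANOVA first, then per-age unpaired t tests only if the age × genotype interaction is significant) was run in SPSS and GraphPad; a minimal Python sketch of the same logic, with a hypothetical data-frame layout, could be:

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
from statsmodels.formula.api import ols

def analyze(df: pd.DataFrame, alpha: float = 0.05) -> None:
    """df columns: 'value', 'age' (7/12/26), 'genotype' ('WT'/'KO')."""
    model = ols("value ~ C(age) * C(genotype)", data=df).fit()
    anova = sm.stats.anova_lm(model, typ=2)      # two-way ANOVA table
    print(anova)
    if anova.loc["C(age):C(genotype)", "PR(>F)"] < alpha:
        # significant interaction: compare WT vs KO at each age separately
        for age, grp in df.groupby("age"):
            wt = grp.loc[grp["genotype"] == "WT", "value"]
            ko = grp.loc[grp["genotype"] == "KO", "value"]
            print(age, stats.ttest_ind(wt, ko))
```

The within-genotype one-way ANOVA with Tukey's test is omitted here for brevity.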
IL-23R deficient mice have shorter femurs at 12 weeks.
Femur length was similar between both groups at 7 weeks. In line with the lower bone mass at 12 weeks, IL-23R −/− femurs were significantly shorter at this age than those of WT mice (WT 15.4 ± 0.2 mm; KO 14.9 ± 0.4 mm; P < 0.01; Fig. 3). Interestingly, femur length in 26-week-old IL-23R −/− mice was similar to WT mice. This suggests that IL-23R deficiency leads to temporal abnormalities in longitudinal bone growth.
Femurs of 12-week-old IL-23R deficient mice are more brittle than WT. The μCT data demonstrated that 12-week-old IL-23R −/− mice have lower bone mass. To investigate if the mechanical properties of IL-23R −/− femurs are also affected, we subjected the femurs to a three-point bending test. The total force required to fracture the bones was significantly lower in IL-23R −/− femurs (WT 20 ± 3.7 N; KO 16.1 ± 3.7 N; P < 0.05; Fig. 4). In agreement, there was a clear trend towards lower stiffness and energy required to fracture (work-to-failure) in IL-23R −/− femurs. Altogether, these data suggest that IL-23R deficiency leads to more fragile bones.
Differentiation and function of IL-23R −/− osteoclasts is unaltered at 7 and 12 weeks.
Since bone mass was lower in 12-week-old IL-23R −/− mice, we investigated if this was potentially induced by increased osteoclast development. To study if there were abnormalities in the osteoclast precursors of 12-week-old IL-23R −/− mice, we analyzed their bone marrow by flow cytometry (Fig. S2). The percentage of early blasts and myeloid blasts did not differ between both groups (Fig. S2B). However, the fraction of monocytes was reduced in IL-23R −/− BM. Next, we determined the ability of the BM cells to differentiate towards osteoclasts in vitro upon stimulation with RANKL and M-CSF at 7 and 12 weeks. We detected equal numbers of TRAP + cells, with similar sizes, in both WT and IL-23R −/− cultures (Fig. 5A,C).
In addition, the bone resorptive capacity of IL-23R −/− osteoclasts was similar to WT (Fig. 5B,D). In summary, these data demonstrate that IL-23R deficiency does not affect osteoclast function and differentiation in 7- and 12-week-old mice.

Bone formation marker PINP is lower in the serum of 12-week-old IL-23R −/− mice compared to WT. Next, we assessed serum levels of the bone turnover marker PINP in our mice. At 7 weeks, there was no difference between both groups (Fig. 6A). However, at 12 weeks, IL-23R −/− mice had significantly lower PINP levels compared to WT (Fig. 6B). At 26 weeks, PINP levels were similar between the two groups (Fig. 6C). To determine if IL-23 can act directly on osteoblasts, we cultured BM cells towards osteoblasts and analyzed gene expression of the IL-23R subunits in these cells at day 10 of culture. Although Il12rβ1 was expressed similarly in osteoblasts of both genotypes, Il23r could not be detected (Fig. S3). Combined, these data suggest that IL-23R signaling affects osteoblast activity indirectly.
Serum leptin is similar between 7- and 12-week-old WT and IL-23R −/− mice. Our data suggest
that the effects of IL-23 on osteoblasts are indirect; therefore, other factors are likely involved. To determine whether an altered gonadal function might be involved, we measured serum levels of testosterone and Anti-Müllerian hormone (AMH), but did not detect differences between both groups (data not shown). Next, we investigated whether the absence of IL-23R affected the metabolic system by assessing the body weight of the mice at 7, 12 and 26 weeks. Notably, 12-week-old, but not 7- and 26-week-old, IL-23R −/− mice had significantly lower body weight than WT mice (Fig. 7A, Fig. S4). This difference in weight prompted us to measure leptin levels in the mice. We detected similar serum levels of leptin in 7- and 12-week-old IL-23R −/− mice compared to WT (Fig. 7B). Our data suggest that testosterone, AMH and leptin are not involved in the IL-23R signaling-mediated regulation of osteoblasts.
Discussion
In this study, we demonstrate that 12-week-old IL-23R deficient mice have reduced bone mass, more fragile bones and shorter femurs compared to WT. While trabecular bone mass is restored at 26 weeks, the effects on cortical bone parameters appear to be long-term. No changes were found in osteoclasts of 7- or 12-week-old IL-23R −/− mice. However, serum PINP levels were significantly lower in 12-week-old IL-23R deficient mice. The effects of IL-23R signaling on osteoblasts are most likely indirect, since the Il23r subunit was not expressed in these cells. Apparently, mice lacking IL-23R undergo systemically driven temporal changes in osteoblast function, the exact cause of which has yet to be identified. These changes result in temporal and long-term alterations of the bone. Our finding that 7-week-old IL-23R deficient mice have bone mass similar to WT mice is in agreement with the study of Adamopoulos et al., who did not find any bone abnormalities in 8-week-old IL-23p19 −/− mice 14 . Furthermore, no defects in bone mass were found in 4-week-old IL-23p19 −/− mice 17 , suggesting that bone abnormalities due to disruption of IL-23/IL-23R signaling develop at later ages. Indeed, 12-week-old IL-23R deficient mice had significantly less bone mass compared to WT, which is in line with the previously reported trabecular bone phenotype in IL-23p19 −/− mice of similar age 17 . In contrast to our study and that of Quinn et al., Sato et al. did not find any bone abnormalities in 12-week-old IL-23p19 −/− mice 18 .
Earlier studies have demonstrated that cortical bone plays a major role in determining the mechanical properties of the bone and the risk of fracture, since most fragility fractures occur at non-vertebral sites where bone is composed mainly of cortical tissue 34 . To our knowledge, we demonstrate here for the first time that cortical bone mass is affected in the absence of IL-23R at 12 weeks, and that the bones of IL-23R deficient mice are more fragile than WT. We found that trabecular bone mass and femur length of 26-week-old IL-23R −/− mice were similar to WT. This appeared to result from reduced bone loss in IL-23R −/− mice between the ages of 12 and 26 weeks. Supporting this, a recent study demonstrated higher bone mass in 12-month-old IL-12p40 −/− mice, which lack both IL-12 and IL-23, suggesting that lack of IL-23 has a protective role in age-related bone loss 19 . In contrast, Quinn et al. reported reduced BV/TV, Tb.N and Tb.Th in 26-week-old IL-23p19 −/− mice 17 , while Adamopoulos et al. reported no significant differences in BV/TV, but higher Tb.N in 26-week-old IL-23p19 −/− mice 14 . These differences could be due to the use of equipment with different sensitivities for bone mass determination, differences in the genetic background of the mice and/or differences in the control mice used (littermate vs non-littermate). Additionally, due to the reduction in trabecular bone mass at older ages, detection of small differences in bone mass could be more challenging. Nonetheless, all these studies demonstrate that age is an important factor in the IL-23-dependent effects on the bone.
In contrast to trabecular bone, the effects of IL-23R deficiency on cortical bone, including radial bone growth, appear to be long-term. Considering the different effects of IL-23R deficiency on longitudinal (temporal) versus radial (long-term) bone growth, it would be interesting to follow these mice for longer periods to obtain a better understanding of their phenotype at older ages.
In contrast to osteoclasts, our data suggest the involvement of osteoblasts in the IL-23R-dependent regulation of bone mass. Notably, PINP levels were not different at 7 and 26 weeks, but were significantly lower in 12-week-old IL-23R −/− mice compared to WT. PINP is a bone formation marker used by a large number of studies as a good clinical marker of bone metabolism-related diseases 35 . Serum PINP levels correlated well with the bone phenotype of the mice over time, suggesting that the observed lower bone mass is due to defects in bone formation. The notion that osteoblast-related factors are involved in the effects of IL-23 on the bone is supported by a recent study reporting decreased ALP activity in mice lacking IL-12p40 19 .
Interestingly, the effects of the IL-23/IL-23R signaling pathway on osteoblasts seem to be indirect, since we did not detect expression of the Il23r subunit in osteoblasts. This is in line with previous studies which reported a lack of IL-23R on osteoblasts and no direct effect of IL-23 stimulation on these cells 17,24 . To study which factors were involved in the indirect effects of IL-23R signaling on osteoblasts, we assessed body weight and serum levels of AMH, testosterone and leptin. The lower bone mass and body weight of 12-week-old IL-23R −/− mice were not due to systemic changes in AMH, testosterone or leptin levels. Future studies should reveal whether IL-23R signaling has a role in other aspects of osteoblast function, and what causes the effects of IL-23R deficiency on bone formation to occur at 12 weeks, but not at earlier or later ages. Also, it should be investigated whether the lower body weight at 12 weeks is due to the lower bone mass or whether other factors are involved. The bone phenotype observed at 12 weeks of age in IL-23R −/− mice may resemble an osteoporotic phenotype. Our data support the study of Azizieh et al., who found lower IL-23 levels produced by peripheral blood mononuclear cells of osteoporotic women compared to women with normal bone mineral density 36 . This suggests that low IL-23 levels could serve as a marker for bone loss in these patients. On the other hand, in PsA patients, treatment with anti-IL-23 biologicals has been demonstrated to be effective in reducing disease activity and consequently halting inflammation-induced bone erosions 37 . However, the effects of long-term anti-IL-23 biological treatment on (systemic) bone mass and excessive local bone formation have not been clearly established.
It should be noted that our study has a few limitations. We studied the effects of IL-23R deficiency on the bone until 26 weeks of age. Despite having studied these mice at 3 different ages and having revealed both temporary and long-term effects of IL-23R deficiency, it would be interesting to study the long-term effects on bone mass and strength at ages beyond 26 weeks.
Moreover, we used in vitro differentiated cells and since cultured cells do not necessarily reflect the in vivo situation, further in vivo studies should be performed to understand the effects of IL-23R deficiency on bone mass.
In conclusion, we demonstrate here that IL-23R deficient mice have a temporary defect in their bone formation, which results in temporal effects on trabecular bone and long-term effects on cortical bone. Our study points towards possible implications for patients under long-term treatment with anti-IL-23 biologicals, who may suffer from loss of bone mass due to a prolonged decrease in IL-23 levels. | 5,745.6 | 2021-05-13T00:00:00.000 | [
"Biology",
"Medicine"
] |
China’s first step towards probing the expanding universe and the nature of gravity using a space borne gravitational wave antenna
In this perspective, we outline that a space-borne gravitational wave detector network combining LISA and Taiji can be used to measure the Hubble constant with an uncertainty of less than 0.5% in ten years, compared with the network of ground-based gravitational wave detectors, which can measure the Hubble constant within a 2% uncertainty in the next five years by the standard siren method. Taiji is a Chinese space-borne gravitational wave detection mission planned for launch in the early 2030s. The pilot satellite mission Taiji-1 was launched in August 2019 to verify the feasibility of Taiji. The results of a few technologies tested on Taiji-1 are presented in this paper. Gravitational wave astronomy has opened the door to testing general relativity and the effect of gravity in the Universe. The authors present the capabilities of an overlap between the space gravitational wave detectors LISA and Taiji to constrain the Hubble constant to 0.5% in 10 years, and what can be learned from the pilot satellite Taiji-1 launched in 2019.
The observation of gravitational waves (GWs) enables us to explore the Universe in more detail than is currently possible. By testing the theory of general relativity, it can unveil the nature of gravity. In particular, a GW can be used to determine the Hubble constant by the standard siren method 1,2 . This method 3 was first used by the Advanced LIGO 4 and Virgo 5 observatories when they discovered the GW event GW170817 6 . Despite the degeneracy problem of the ground-based GW detectors, the Hubble constant can reach a precision of 2% after a 5-year observation with the network of the current surface GW detectors 6 , although LIGO's O3 data have shown that the estimated chance of detecting an electromagnetic (EM) counterpart might be a little optimistic 7 .
In this paper, we discuss a method to further improve the fractional uncertainty of the Hubble constant to a precision of <1% with space-borne GW antennas. The improvement comes not only from the fact that a space-borne GW antenna such as LISA 8,9 or Taiji [10][11][12][13] can avoid the degeneracy problem by virtue of its orbital motion 14 , but also from the fact that the precision of a GW source's position and its luminosity distance can be improved by 2-3 orders of magnitude by the LISA-Taiji network 15 , compared with an individual antenna such as LISA or Taiji. Achieving this precision requires a 1-year overlap of the LISA and Taiji missions.
To enable the joint measurement, the Taiji scientific collaboration has established a three-step plan to guarantee the launch of Taiji in the 2030s 13 . The first step of this new observatory is to launch a pilot satellite, Taiji-1, to prepare the necessary technology for the second step, the Taiji pathfinder, also called Taiji-2. Consisting of two satellites, Taiji-2 will be used to demonstrate the Taiji technology around 2023-2025.
Launched in 2019, Taiji-1 accomplished multiple tasks 12,13 . It studied the manufacturing process for the Taiji payload, the on-orbital working sequence of the Taiji payload, the data processing stream for the Taiji mission, and the feasibility of a few Taiji key technologies in space. In this article, we will also report the main results of the payload test for Taiji-1.
Hubble's law

Hubble's law, V_H = H_0 d (Eq. 1), is interpreted as evidence that the universe has been expanding since the Big Bang occurred ~13.8 billion years ago 17 . In Eq. 1, H_0 is the mean expansion rate of the Universe, called the Hubble constant, V_H is the receding velocity of a galaxy, and d is the distance of the receding galaxy. According to the Friedmann-Robertson-Walker (FRW) cosmological model, the dynamics of our expanding universe are governed by the density and the curvature of our universe. By measuring the Hubble constant, one can deduce the age, the size, the current state, and even the fate of our Universe 18 . In general, there are two primary methods to measure the Hubble constant. The first estimates the distance by exploiting so-called "standard candles": Cepheid variable stars or type Ia supernovae of known luminosity. The data from the first method 19 imply a value of H_0 = (74.03 ± 1.42) km s −1 Mpc −1 at the 68% confidence level. The other method focuses on the cosmic microwave background and examines how the cosmic microwave background has evolved over time. The data from the cosmic microwave background 20 indicate a value of H_0 = (67.4 ± 0.5) km s −1 Mpc −1 at the 68% confidence level. The mismatch between the two measurements needs to be explained, since the values should agree if the models are correct.
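The size of this mismatch is easy to quantify: with the two values quoted above and Gaussian error propagation, the discrepancy is roughly 4.4σ. A minimal check:

```python
import numpy as np

h0_local, sig_local = 74.03, 1.42   # standard-candle (distance-ladder) value
h0_cmb, sig_cmb = 67.4, 0.5         # cosmic-microwave-background value

# difference in units of the combined 1-sigma uncertainty
tension = (h0_local - h0_cmb) / np.hypot(sig_local, sig_cmb)
print(f"discrepancy: {tension:.1f} sigma")   # -> ~4.4 sigma
```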
A new and independent determination of the Hubble constant using a GW as a standard siren 1,2 may solve this cosmic riddle. In 2017, the Advanced LIGO and Virgo detectors observed a GW signal (event GW170817) from the merger of a binary neutron-star system 21 . EM follow-up measurements of the area were then sequentially made, which triggered the first "multi-messenger" astronomical observation 22 . With the absolute distance to the source determined directly from the GW measurements, GW170817 was used as a "standard siren" to measure the Hubble constant 5 . The Hubble constant was found in this case to be 70.0 (+12.0/−8.0) km s −1 Mpc −1 . The uncertainty in this Hubble constant measurement (Eq. 1) largely comes from the inaccuracy of the absolute distance evaluation. The receding velocity, represented by a galaxy's redshift, can be measured precisely by taking the spectra of the galaxy. For a ground-based GW observatory, such as the LIGO and Virgo detectors, the degeneracy between the distance D_L and the inclination in the GW measurement 2 (Eqs. 2 and 3) means that a distant face-on or face-off binary has a gravitational-wave amplitude similar to that of a close edge-on binary. This degeneracy contaminates the precision of the distance measurement. For simplicity, we use the geometrized unit system where c = G = 1.
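A plausible reconstruction of Eqs. (2) and (3), consistent with the symbol definitions below though not necessarily the authors' exact form, is the standard restricted (quadrupole) inspiral waveform:

$$h_+ = \frac{2\,\mathcal{M}_z^{5/3}(\pi f)^{2/3}}{D_L}\left[1+(\hat{L}\cdot\hat{n})^2\right]\cos\Phi, \qquad h_\times = -\frac{4\,\mathcal{M}_z^{5/3}(\pi f)^{2/3}}{D_L}\,(\hat{L}\cdot\hat{n})\,\sin\Phi,$$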
where $h_+$ and $h_\times$ are the amplitudes of the two GW polarizations, $\mathcal{M}_z$ is the redshifted chirp mass, f is the wave frequency, Φ is the phase of the GW, $\hat{L}$ is the unit vector along the source's angular momentum, and $\hat{n}$ is the unit vector pointing in the direction of the source. Taking more standard sirens into account leads to an N −1/2 convergence of the uncertainty of the Hubble constant, where N is the number of binary neutron-star mergers 23 . It is predicted that, when more standard sirens are detected and another ground-based detector joins the network, the fractional uncertainty of the Hubble constant determined by the ground-based GW detectors will reach 2% within 5 years 6 . This result is slightly better than that of the standard candle method 19 . A space-borne GW antenna such as LISA or Taiji can avoid the degeneracy problem by virtue of its orbital motion. (For a detailed discussion of the orbital configuration of the space-borne antennas, please refer to refs. [9][10][11] .) When a space-borne GW antenna is orbiting around the sun, the position and the orientation of the source relative to the antenna are gradually changing. The motion of the detector thus modulates the measured signal, and the modulation depends on the position and the orientation of the source. As a result, the distance and the inclination of a GW source are no longer degenerate 2,8 . This reduction in ambiguity increases the space antenna's ability to determine the luminosity distance.
For instance, let us assume a binary black hole GW source randomly distributed in the universe, with redshift not larger than 1 and total mass below 10^6 solar masses. Then, there is a 90% likelihood that an individual space antenna can localize the GW source with a fractional distance error δD_L/D_L < 8% and an orientation error δΩ < 4 deg² 2,14,15 . For an individual antenna, without the EM counterpart of a GW source, the degeneracy between the luminosity distance and the orientation limits the precision of the distance determination 2,8 . The fractional distance precision improves dramatically if the sky position of the GW source can be pinpointed. Considering the same type of GW sources discussed above, the distance error can be greatly reduced, to 0.5%, once the correlation between the distance and the orientation is taken into account 2 . One way to pinpoint the GW source is to find its EM counterpart 24,25 . Due to the poor understanding of the relation between a binary black hole merger GW event and its EM counterpart, it is difficult to find the EM counterpart either in advance or simultaneously 26 .
Determining the Hubble constant with the LISA-Taiji network

It was recently calculated that the LISA-Taiji network can significantly improve the localization of GW sources without an EM counterpart 15 . Taking the above example, the orientation uncertainty of such a GW source can be reduced to δΩ < 0.005 deg². Consequently, the fractional distance precision improves to δD_L/D_L < 0.5%. However, the EM counterpart is still important, as it provides the redshift that is essential for calculating the Hubble constant. With such precise localization of a GW source, it is relatively easy to discover its counterpart galaxy. The distribution density of counterpart galaxies can be expressed as in Eq. (4) 27,28 , with R the co-moving distance and R* the Hubble distance. If we only consider small redshifts, z < 1, the exponential part of Eq. (4) is always ~1. The projected number density dN/dΩ is ~300 galaxies/arcmin², as given by the Hubble Deep Field 29 . We can normalize Eq. (4) by integrating it to the projected number density, which should be ≤300 galaxies/arcmin². This yields Eq. (5), where R_0 is the distance at z = 1. By assuming a cosmological model, we can convert the measured luminosity distance and its error to any other desired cosmic distance measure. By multiplying Eq. (5) by the GW error cube, the number of galaxies within the error cube can be derived. With δΩ < 0.005 deg² and δD_L/D_L < 0.5%, the number of candidate host galaxies for a GW source with z < 1 is no more than 54. According to the redshift-apparent magnitude relation, the apparent magnitude of galaxies at z = 1 is between 24 and 25. Typically, for a spectroscopic measurement, the limiting magnitude of the device should be 3-4 magnitudes greater, say 27-29, which challenges all existing telescopes. Fortunately, future telescopes such as LSST and WFIRST, assuming 2 years of observation, could reach limiting magnitudes of 27 and 29, respectively 30,31 . Thus, all the candidate galaxies could be traced by a fiber spectrograph on an LSST-like or WFIRST-like telescope. With such a low number of candidates, it is probable that an EM event can be correlated with a GW event. In some particular cases, the spin-induced precession effects may allow certain degeneracies to be broken, and the analysis can achieve ~1 arcmin² pointing accuracy 32 . In such cases, the number of candidate galaxies is reduced to a few. Thus, we can technically identify the counterpart galaxy of the GW source.
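As a rough cross-check of the quoted host-galaxy count, one can multiply an assumed comoving galaxy density by the volume of the GW error cube. The density n_gal below is an assumption (the paper instead normalizes Eq. (4) to the Hubble Deep Field counts), so this sketch only confirms the order of magnitude:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck18 as cosmo

z = 1.0
d_omega = 0.005 * (np.pi / 180.0) ** 2   # sky-localization error, deg^2 -> sr
frac_dl = 0.005                          # fractional luminosity-distance error

R = cosmo.comoving_distance(z)           # ~3.4 Gpc at z = 1
dR = frac_dl * R                         # crude radial extent of the error cube
dV = d_omega * R**2 * dR                 # error-cube volume (Mpc^3)

n_gal = 0.02 / u.Mpc**3                  # assumed comoving galaxy density
N = float(n_gal * dV)                    # dimensionless count
print(f"~{N:.0f} candidate hosts")       # order 10, consistent with <= 54
```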
Once the host galaxy of the GW source is identified, the redshift can be determined by the EM observation. With the distance of a GW source measured, the uncertainty of the Hubble constant δH 0 /H 0 will now be <0.5%.
Towards LISA-Taiji network
The LISA-Taiji network requires at least a 1-year overlap to realize the above purpose, which means Taiji needs to advance its schedule to match LISA's data-taking period. As the pioneer, LISA pathfinder was launched in 2015. The mission was a technology demonstration and achieved great success 33 . LISA pathfinder has paved the way for the full LISA project 34 , which will start operating in orbit ~2032-2034. Taiji, the Chinese space-borne GW detection mission, which has a heliocentric orbit similar to LISA's, has established a three-step plan in order to launch in the early 2030s. Thus, it will have an overlap with LISA's operating time 12,13 . The first step has been accomplished by launching a pilot study satellite known as Taiji-1 in 2019. The second step is to launch the Taiji pathfinder (also called Taiji-2) no later than 2025. Taiji-2 consists of two satellites, which are planned to demonstrate most of the technology of Taiji and to pave the way for the full Taiji project. The final step is to launch Taiji, which is similar to the LISA constellation, in the 2030s. Taiji (also called Taiji-3) consists of three identical satellites. The distance between each pair of satellites is three million kilometers 12,13 .
The Taiji pathfinder, consisting of two satellites, will be equipped with more technology related to the inter-satellite laser link than LISA pathfinder was. The high technical requirements make the Taiji pathfinder challenging 13 . As a completely new mission in this field, directly launching Taiji-2 would be quite risky. Thus, the pilot study satellite mission Taiji-1 was approved in 2018, not only to prepare the necessary technology for Taiji-2 but also to verify the performance capability of the Taiji mission. Taiji-1 also serves as a benchmark to test the feasibility of Taiji's three-step plan.
Taiji-1, the first step of China's efforts

Approved on 30 August 2018 and flown on 31 August 2019, Taiji-1, a 180 kg satellite, was a successful and quick mission. The orbit of Taiji-1 was a circular Sun-synchronous dawn/dusk orbit inclined at an angle of 97.69 deg. The orbit provided a relatively stable sun-facing angle, which ensured that the battery could be constantly charged and that the temperature of the satellite would not fluctuate drastically. The orbit altitude was chosen to be 600 km, a tradeoff between the launching costs and the air drag. Similar to LISA pathfinder 33 , two major technology units were tested on Taiji-1: the optical metrology system 35 and the drag-free control system 36 . Due to the short development cycle and the limited budget, the payload design was highly simplified (Fig. 1a, b). The optical metrology system consisted of an optical bench, a phasemeter and two laser sources. The drag-free control system was composed of a gravitational reference sensor (GRS) (consisting of a sensor head and the corresponding sensor electronics), a drag-free controller, and two types of micro-propulsion systems. Figure 1 shows the distribution of the payloads in Taiji-1.
Taiji-1 used two Nd:YAG lasers (the first and second lasers in Fig. 1a) with a wavelength of 1064.5 nm. Only one laser was working during the measurement process; the optical metrology system could switch from one laser to the other on command. The two laser beams were delivered to the optical bench by two fiber couplers (Fig. 1c), with a frequency difference of 1 kHz between the two delivered beams. Apart from the reference interferometer, the optical bench contained two primary interferometers. The first was the test mass interferometer (T.M. int.): one of the laser beams was aimed at a test mass and reflected back to the optical bench (Fig. 1c), so that this unit measured the test mass motion. The other, called the optical bench interferometer (O.B. int.), was used to monitor the optical bench noise. All of the interferometric beat notes were sensed and converted into sinusoidal voltages by the photodetectors, and the phases of the sinusoidal voltages were decoded by the phasemeter 37 .
From the phasemeter data, the precision of the two primary interferometers can be derived as δL = δφ·λ/(2π), where δL is the precision of the interferometer, δφ denotes the phase noise of the phasemeter data, and λ is the laser wavelength.
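As a quick illustration of this conversion (the sample phase noise below is back-computed from the 25 pm Hz⁻¹/² figure quoted later and is only illustrative):

```python
import numpy as np

LAMBDA = 1064.5e-9  # Taiji-1 laser wavelength (m)

def phase_to_displacement(dphi_rad_per_rtHz: float) -> float:
    """delta L = delta phi * lambda / (2 pi), in m/sqrt(Hz)."""
    return dphi_rad_per_rtHz * LAMBDA / (2.0 * np.pi)

print(phase_to_displacement(1.5e-4))  # ~2.5e-11 m/sqrt(Hz), i.e. ~25 pm/sqrt(Hz)
```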
The GRS in Taiji-1, serving as an accelerometer 38 , is composed of a sensor head and the corresponding electronics. The sensor head consists of a cage and a test mass (Fig. 1c). The GRS has three axes: one nonsensitive axis and two sensitive axes (Fig. 1c). The nonsensitive axis points to the earth, and drag-free control is applied along this axis. The first sensitive axis is along the flight direction.
By capacitive sensing, the GRS measured the disturbing acceleration of Taiji-1. The data were sent to a drag-free controller, which then commanded the thrusters to exert forces compensating the disturbing force experienced by Taiji-1. Two different types of thrusters were tested: a radio frequency ion thruster and a Hall effect thruster. Each type had four individual thrusters, assembled symmetrically on both sides of the satellite (Fig. 1a). These two types of thrusters, like the two lasers, backed each other up.
During the mission, all of the payloads were tested, and all results fulfilled the mission requirements. Some of the measurements are shown in Fig. 2. For the T.M. int. and the O.B. int. (different lasers were used for each), the precision evaluated in the frequency band between 0.01 and 10 Hz was found to be <1 nm Hz −1/2 ; in some frequency bins, the precision reached 25 pm Hz −1/2 (Fig. 2a, b). The Taiji-1 GRS noise was taken from the second sensitive axis. The dynamic range of the second sensitive axis was ±5.3 × 10 −5 m s −2 , and the acceleration noise of this axis, measured via the readout voltage fluctuation, was 10 −10 m s −2 Hz −1/2 (Fig. 2c). The disturbance accelerations of the three axes of the Taiji-1 satellite, read out by the Taiji-1 GRS, are shown in Fig. 2d. The nonsensitive axis was earth pointing, and its noise was mainly dominated by the GRS readout noise. The first sensitive axis was along the flight direction, and its noise was considered to be mainly caused by the air drag. The second sensitive axis was along the normal of the orbit plane, and the acceleration measured by the GRS was <2 × 10 −9 m s −2 Hz −1/2 . The noise of both thrusters measured by the GRS was found to be <1 μN Hz −1/2 (Fig. 2e), which was believed to be dominated by the GRS readout noise. The noise of the thrusters could also be calibrated from the data on the ion acceleration voltage, the gas pressure at the supply valve, and the temperature around the thruster. By this method, the true thruster noise of the radio frequency ion thruster was derived as ~0.15 μN Hz −1/2 (Fig. 2e).
A drag-free control experiment was also performed along the nonsensitive axis of the GRS. We used the thrusters on one side to exert a sinusoidal force (the modulated peak in Fig. 2f), and the feedback loop controlled the thrusters on the other side to compensate. The respective spectral densities are shown in Fig. 2f. The sinusoidal force was well suppressed by the drag-free control, and the residual acceleration of the satellite was <10 −8 m s −2 Hz −1/2 . The stability of the temperature control was ~±2.6 mK. A more detailed analysis of the Taiji-1 payload testing and the improved results will be presented in a forthcoming special issue.

Fig. 1 Anatomy of Taiji-1 and its payloads. a The distribution of the payloads in Taiji-1. The optical bench and the sensor head are integrated in order to ensure that the laser beam gains accurate access to the test mass. The difference between the nominal geometric center of the test mass and the mass center of the satellite is smaller than 0.15 mm after balancing, achieved with a 0.1 mm measurement accuracy. b The Taiji-1 satellite before assembly. c The core measurement unit, containing an optical bench and a sensor head. The plate of the optical bench is made of invar steel, and the mirrors are made of fused silica. The cage of the sensor head is made of low-thermal-expansion glass ceramics and is coated with gold. The test mass is also coated with gold and is made of titanium alloy. The stoppers prevent the test mass from contacting the inner surface of the cage during launch.
The GRS noise induced by voltage fluctuations is always proportional to its dynamic range 39 (Eqs. 6 and 7), where S is the area of the capacitor plate, m is the mass of the test mass, d is the gap of the capacitor, V_p is the preload voltage of the capacitor, V_r is the readout voltage, δV_p is the preload voltage noise, δV_r is the readout voltage noise, and ε_0 is the vacuum permittivity. It is obvious that reducing the dynamic range of the GRS will in turn reduce the GRS acceleration noise (Fig. 2c).
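A plausible reconstruction of Eqs. (6) and (7) for a parallel-plate capacitive readout, based only on the symbol definitions above and offered as an assumption rather than the authors' exact expressions, is

$$a_{\mathrm{read}} \simeq \frac{2\varepsilon_0 S V_p}{m d^2}\,V_r, \qquad \delta a \simeq \frac{2\varepsilon_0 S}{m d^2}\left(V_p\,\delta V_r + V_r\,\delta V_p\right),$$

which makes the stated proportionality explicit: shrinking the admissible readout voltage (the dynamic range) shrinks the voltage-induced acceleration noise in proportion.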
To summarize, the first on-orbit scientific run of Taiji-1 showed that the space-borne interferometers work properly: the distance measurement noise amplitude spectral density of the O.B. interferometer was at the level of 100 pm Hz −1/2 (10 mHz-1 Hz), approaching 25 pm Hz −1/2 in some higher frequency bins. The performance of the GRS also fulfilled the requirement, with an evaluated acceleration noise amplitude spectral density of 10 −10 m s −2 Hz −1/2 . The noise amplitude spectral density of the thruster force was calibrated to be <0.15 μN Hz −1/2 . The residual acceleration of the satellite under drag-free control was <1 × 10 −8 m s −2 Hz −1/2 . The on-orbit performance of Taiji-1 demonstrated the feasibility of the payloads; the design, manufacturing, assembly, and adjustment of the payloads were effectively verified.
Outlook
A new frontier was depicted above: the data of the LISA-Taiji network can be used to study cosmology more precisely. The search for GW signals with a network of space-borne detectors will help us understand not only the nature of gravity but also the expansion history of our universe. However, a few technological challenges faced by LISA and Taiji still need to be tackled in the near future.
The successful flight of Taiji-1 has verified the feasibility of the three-step plan of Taiji. Encouraged by the achievement of Taiji-1, the Taiji scientific collaboration looks forward to flying Taiji in the early 2030s. We optimistically expect that LISA and Taiji will both orbit the sun and detect a standard siren from a massive binary black hole merger. With the LISA-Taiji network, it is highly possible that the Hubble constant can be determined with an uncertainty <0.5%.
Data availability
The data that support the findings of this study are available from the authors on reasonable request; see Author contributions for specific data sets.

Fig. 2 (caption, panels b-f): b The precision of the two primary interferometers (using the second laser). c The noise of Taiji-1's gravitational reference sensor (GRS) and the equivalent noise of the GRS corresponding to smaller dynamic ranges. d The acceleration of the Taiji-1 satellite in three axes read out by the GRS. e The noise from the two types of thrusters read out by the GRS and the calibrated noise of the radio frequency ion thruster. f The gravitational wave readout in spectral density before and after the drag-free control. | 5,446 | 2021-02-24T00:00:00.000 | [
"Physics",
"Engineering"
] |
Wigner-function-based solution schemes for electromagnetic wave beams in fluctuating media
Electromagnetic waves are described by Maxwell's equations together with the constitutive equation of the considered medium. The latter equation in general may introduce complicated operators. As an example, for electron cyclotron (EC) waves in a hot plasma, an integral operator is present. Moreover, the wavelength and the computational domain may differ by orders of magnitude, making a direct numerical solution unfeasible with the available numerical techniques. On the other hand, given the scale separation between the free-space wavelength $\lambda_0$ and the scale L of the medium inhomogeneity, an asymptotic solution for a wave beam can be constructed in the limit $\kappa = 2\pi L / \lambda_0 \rightarrow \infty$, which is referred to as the semiclassical limit. One example is the paraxial Wentzel-Kramers-Brillouin (pWKB) approximation. However, the semiclassical limit of the wave field may be inaccurate when random short-scale fluctuations of the medium are present. A phase-space description based on the statistically averaged Wigner function may solve this problem. The Wigner function in the semiclassical limit is determined by the wave kinetic equation (WKE), derived from Maxwell's equations. We present a paraxial expansion of the Wigner function around the central ray and derive a set of ordinary differential equations (phase-space beam-tracing equations) for the Gaussian beam width along the central ray trajectory.
Introduction: a model wave equation
In this work, we analyze electromagnetic waves of high frequency (order of 100 GHz, with the associated vacuum wavelength of the order of a few mm) propagating through a fluctuating plasma. Such waves are applied in nuclear fusion devices like tokamaks for heating and current drive purposes, and they are referred to as electron cyclotron (EC) waves [1]. For this kind of waves in tokamaks, the wavelength is usually much smaller than the scale length L of the background medium (typically tens of centimeters). In such a limit, semiclassical methods apply. We start from a precise definition of the medium scale L in terms of the gradient of the background medium, that is, $L^{-1} \sim |\nabla_r f| / |f|$ for every spatial coordinate r and for every function f describing the background medium (for a plasma, f can represent the particle density, temperature, or magnetic field). Neglecting for a moment the fluctuations, we consider a stationary medium, that is, f depends on position but not on time. For simplicity, we absorb the scale length L into a normalized and dimensionless spatial coordinate $x = r/L$, with the consequence that gradients of any quantity f with respect to x are of the same order as f itself. The semiclassical limit then is defined by $\kappa = 2\pi L / \lambda_0 \rightarrow \infty$, where $\lambda_0$ is the free-space wavelength (in physical units).
Generally, the properties of a non-uniform dispersive medium enter the wave equation via the dielectric operator $\hat\varepsilon$ in the constitutive relation (1), $\vec D = \hat\varepsilon \vec E$, for the electric displacement $\vec D$. The operator $\hat\varepsilon$ is composed of a Hermitian contribution $\hat\varepsilon_h$ (describing the dispersive properties of the medium) and an anti-Hermitian contribution $i\hat\varepsilon_a$ (describing dissipation), $\hat\varepsilon = \hat\varepsilon_h + i\hat\varepsilon_a$. In the case of EC waves in fusion plasmas, it usually makes sense to apply the cold plasma approximation to the Hermitian part $\hat\varepsilon_h$ [1]. In this approximation, the thermal motion of plasma particles is neglected, and the Hermitian part $\hat\varepsilon_h$ of the operator $\hat\varepsilon$ reduces to a Hermitian matrix. The focus of this paper is on the effect of random fluctuations of the medium on the propagation of the beam. Therefore, we consider a simplified model, and dissipative effects are not addressed, i.e., $\hat\varepsilon_a = 0$.
It should be noted in this respect that the influence of the anti-Hermitian (dissipative) part of ̂ is usually limited to narrow regions in the plasma where the resonance condition between the wave and the (cyclotron) motion of the electrons is satisfied, and significant damping can occur. Outside these regions, it is justified (and a standard approach) to treat the propagation in the dissipationless limit.
We also consider an isotropic medium so that $\hat\varepsilon_h = \varepsilon(\omega, x)\,\mathbb{1}$, where $\mathbb{1}$ is the identity matrix and $\varepsilon = \varepsilon(\omega, x)$ is the frequency-dependent, real-valued dielectric function. The dependence on $\omega$ is implied in the remaining part of this paper.
Electromagnetic waves are vector valued. For simplicity, we consider the case of a linearly polarized wave with constant polarization unit vector $\vec e$, i.e., the electric field is written as $\vec E(x) = \vec e\, E(x)$. This is possible, e.g., when $\vec e$ is a symmetry direction ($\vec e \cdot \nabla \varepsilon = 0$) and we look for symmetric solutions $\vec e \cdot \nabla E = 0$. The choices made simplify the mathematical model, retaining the essential physics effects that we are interested in (refraction, diffraction, and scattering) without the technical difficulties. As an example, the assumptions hold for the O-mode of EC waves in slab geometry. Then the equation for electromagnetic waves in the frequency domain reduces to the Helmholtz equation (7), $\Delta E + \kappa^2 n^2(x) E = 0$, for the scalar wave field $E \equiv E(x)$. The dielectric function can be expressed as $\varepsilon = n^2$, and the function $n = n(x)$ is referred to as the refractive index and describes the local properties of the medium at a given frequency and spatial position x. In the case of EC waves in a plasma, the function n = n(x) can be computed from the plasma parameters [2].
In a turbulent fusion plasma, the correlation time $\tau_C$ of fluctuations is typically on the order of tens of microseconds [3], whereas the beam is turned on for a time of hundreds of milliseconds or larger [4], and both time scales are orders of magnitude larger than the time of propagation of the beam (estimated by the typical size of the machine divided by the speed of light). Heuristically, such time-scale separation can be invoked in order to justify a statistical approach to the description of the effects of random fluctuations on the wave field: Instead of physical time-dependent fluctuations of plasma parameters f(t, x), we consider time-independent random fields $f(\xi, x)$, where $\xi$ varies in an abstract probability space with probability density $\mathbb{P}$. Then, the refractive index becomes a random field $n(x) = n(\xi, x)$ (the variable $\xi$ is implied when not explicitly needed). For each point $\xi$, one can solve the frequency-domain problem (7) with $n = n(\xi, x)$. The solution $E = E(\xi, x)$ thus obtained is a random electric field, with the same probability density $\mathbb{P}$. The relation to physical quantities is provided by the ergodic hypothesis, e.g., for the wave field: the time average of the physical wave field E(t, x) can be computed as an ensemble average of the random wave field $E(\xi, x)$. The same hypothesis is made for any derived physical quantity, such as the electric-field energy density $|E|^2/8\pi$.
We always assume that the fluctuations are weak in the sense that $n^2(x) = n_0^2(x) + \kappa^{-1/2}\,\delta n^2(x)$, with a random field $\delta n^2$ of order O(1) and $\overline{\delta n^2(x)} = 0$. Further, we have the two-point correlation function $C(x_1, x_2) = \overline{\delta n^2(x_1)\,\delta n^2(x_2)}$. This function enters the subsequent analysis and is proportional to the two-point correlation function of the refractive-index perturbation $n^2 - n_0^2$, which is of order $O(\kappa^{-1})$. Finally, we will be looking for the ensemble-averaged Wigner function associated to the electric field [Eq. (12)], which is a function of position x and refractive-index vector N, together forming coordinates in the phase space z = (x, N). Unlike the refractive index defined as a function of the spatial position x in (8), here N is an independent coordinate. A connection between both is drawn in Sect. 2.
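The statistical model above lends itself to direct sampling. As a minimal illustration, the sketch below draws one realization of a weak random field with zero mean and a Gaussian two-point correlation by spectral filtering of white noise; the grid, the correlation shape, and all parameter values are assumptions for illustration, not quantities taken from the text.

```python
import numpy as np

# Sketch: draw one realization of a statistically homogeneous random field
# delta_n2(x) with zero mean and a Gaussian two-point correlation of width L_C,
# by spectral filtering of white noise. All grid parameters are illustrative.
rng = np.random.default_rng(0)

Nx, dx = 4096, 0.1          # grid size and spacing (arbitrary units)
L_C, F = 3.0, 0.02          # correlation length and rms fluctuation strength

k = 2.0 * np.pi * np.fft.fftfreq(Nx, d=dx)
power = np.exp(-(k * L_C) ** 2 / 4.0)       # Fourier transform of the Gaussian correlation

noise = rng.normal(size=Nx)
field = np.fft.ifft(np.sqrt(power) * np.fft.fft(noise)).real
field *= F / field.std()                    # normalize to the requested rms

# zero-lag autocorrelation (= variance) as a basic sanity check
print("rms:", field.std(), "  target:", F)
```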
Special attention must be paid to the fact that for wave beams (without dissipation) the wave field is not in $L^2$ (square integrable). This is in contrast to the case of wave packets usually analyzed in quantum mechanics, e.g., in the work of Graefe et al. [5]. However, in the literature a Wigner-function description of beams can be found for various applications when time is a parameter in the wave equation. For example, Hermite-Laguerre-Gaussian Wigner beams are described in [6], and a decomposition into multiple Gaussian beams is employed in [7]. In the field of plasma physics, a robust description of drift waves and zonal flows as well as geodesic acoustic modes is possible upon a Wigner-function description [8,9]. In the present paper, as an additional feature, "mixing of the Wigner function" is induced by plasma fluctuations, as suggested in the work of McDonald [10].
In Sect. 2, we exploit the well-known fact that under simplifying assumptions of a straight beam trajectory the wave equation may be reduced to a Schrödinger-type equation. From this starting point, an initial value problem for the Wigner function can be formulated [10]. A paraxial solution is proposed in Sect. 3. In Sect. 4, mixing is quantified by means of an entropy functional [11]. A benchmark and examples are presented in Sect. 5 for standard test cases. Finally, a numerical scheme which allows for super-diffusive scattering is introduced in Sect. 6.
Paraxial wave equation
In order to illustrate the proposed phase-space paraxial expansion in a simple context free of technical difficulties, we consider equation (7) under the following conditions on the refractive index of the medium: (i) the problem is formulated in Cartesian coordinates, and the z-axis is a symmetry direction; then $n^2(x, y) = n_0^2(x, y) + \frac{1}{\sqrt{\kappa}}\,\delta n^2(x, y)$.
(ii) $n_0(x, 0) > 0$, so that no reflection from a cut-off can occur; (iii) $\partial_y n_0(x, y)\big|_{y=0} = 0$. Under these conditions, we first simplify the wave equation (7) via the standard paraxial approximation. This preliminary step is, however, not essential. We consider a beam which is localized around the x-axis (to be referred to as the central ray) and propagating straight in the x-direction. The existence of such a solution requires the assumption made above in point (iii) that $\partial n_0(x, 0)/\partial y = 0$.
For the wave field E, we introduce a change of variable with a phase factor $e^{i\kappa S(x)}$, which will be chosen appropriately below [12][13][14], $E(x, y) = a(x, y)\, e^{i\kappa S(x)}$. In addition, we assume that the new field a(x, y) is slowly varying in the x-direction ($\partial^n a/\partial x^n = O(1)$) and rapidly decreasing in the y-direction, i.e., $y^n a(x, y) = O(\kappa^{-n/2})$ and $\partial^n a/\partial y^n = O(\kappa^{+n/2})$. A physically relevant example of such a function is the Gaussian $a(x, y) = a_0(x)\, e^{-\kappa y^2/w^2(x)}$. As a consequence, from the wave equation (7) we find the paraxial equation (14), in which the term proportional to $\partial^2 a/\partial x^2$ is of $O(\kappa^{-2})$. Here, the refractive index function has been Taylor expanded, with the integral remainder q(x, y). By assumption, the first-order derivative vanishes.
As an asymptotic series in $\kappa^{-1}$, the leading-order term in equation (14) is zero if we choose $dS/dx = n_0(x, 0)$. In equation (14), we have the first derivative of a with respect to x only and, thus, can consider the x-coordinate as a parameter of the evolution of a. However, we introduce the change of variable $x = x(\tau)$, defined by the transformation $dx/d\tau = n_0(x, 0)$, where we have assumed that $n_0^2(x, 0) > 0$. This absorbs the prefactor in front of $\partial a/\partial x$ in (14). Further, we define a rescaled field function $A(\tau, y)$ [Eq. (18)]. At last, to lowest order we are left with the evolution equation (19), where, with some abuse of notation, we write $q(\tau, y)$ and $\delta n^2(\tau, y)$ for $q(x(\tau), y)$ and $\delta n^2(x(\tau), y)$, respectively. Equation (19) is a one-dimensional Schrödinger equation, where $\kappa^{-1}$ plays the role of Planck's constant, the first term on the r.h.s. represents the kinetic energy operator, and the second term the potential.
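Equation (19) can also be integrated directly as a reference for the statistical treatment that follows. The sketch below uses a standard split-step Fourier method, assuming the equation takes the form $i\kappa^{-1}\partial_\tau A = -(2\kappa^2)^{-1}\partial_y^2 A + V(\tau, y)A$; the value of $\kappa$, the normalization, and the harmonic test potential are illustrative assumptions rather than the paper's setup.

```python
import numpy as np

# Sketch: split-step Fourier integration of the Schroedinger-type equation (19),
# assumed here in the form  i/kappa dA/dtau = -(1/(2 kappa^2)) d2A/dy2 + V A,
# with kappa^{-1} playing the role of Planck's constant. The value of kappa and
# the harmonic test potential (a lens-like medium, cf. Sect. 5.2) are
# illustrative assumptions, not the paper's setup.
kappa = 200.0
Ny, Ly = 512, 20.0
y = np.linspace(-Ly / 2, Ly / 2, Ny, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(Ny, d=Ly / Ny)
N_y = k / kappa                                # refractive-index variable

A = np.exp(-y**2).astype(complex)              # Gaussian initial beam, width ~1

def V(y):
    return 0.05 * y**2                         # harmonic "lens" potential

dtau, nsteps = 0.01, 500
kin = np.exp(-1j * kappa * dtau * N_y**2 / 2)  # full kinetic step in Fourier space
pot = np.exp(-1j * kappa * 0.5 * dtau * V(y))  # half-step potential factor
for _ in range(nsteps):                        # Strang splitting: V/2, T, V/2
    A = pot * np.fft.ifft(kin * np.fft.fft(pot * A))

width = np.sqrt((y**2 * np.abs(A)**2).sum() / (np.abs(A)**2).sum())
print("rms beam width after propagation:", width)
```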
The derivation of the WKE for this type of equations is one of the examples in the paper of McDonald [10], which can be applied to equation (19). In this specific case, the Hamiltonian (20) is the Weyl symbol of the operator on the right-hand side of equation (19), i.e., the sum of the kinetic term $N_y^2/2$ and the potential. The dispersion relation H = 0 with H given in (20) has been introduced. The Wigner function $W_A$ associated to the amplitude field A is related to the averaged Wigner function $W_E$ introduced in equation (12) by a relation that follows from the respective definitions.
One should notice that the substitution of (20) yields the dispersion relation which is the standard paraxial approximation of the Helmholtz dispersion relation. The WKE (23) is the starting point for the derivation of a set of phase-space beam-tracing equations in the next section.
Paraxial solution scheme
We search for approximate solutions of (23) such that the cross section in phase space, that is, w at constant $\tau$, is a Gaussian around $y = N_y = 0$ [ansatz (28)], with the amplitude c and the quadratic form in the exponential being the unknowns to be determined. The matrix G associated to the quadratic form is assumed to be strictly positive definite.
Initial conditions are given at $\tau = 0$. For EC beams, an initially Gaussian shape for the wave field is usually assumed, with $A_0$ a constant measuring the initial field amplitude, $R_0$ the initial wave-front curvature radius, and $\Delta_0$ the initial beam width. Explicit computation shows that the Wigner transformation (12) of this initial field in the y-direction is given by the ansatz (28) at $\tau = 0$ with the parameters of Eqs. (30). This set of equations yields the initial parameters $c(0)$, $G_{rr}(0)$, $G_{rN}(0)$, and $G_{NN}(0)$ in terms of the initial beam parameters.
From ansatz (28) and the fact that G is strictly positive definite, it can be proven that $y^m N_y^n\, w = O(\kappa^{-(m+n)/2})$, justifying a paraxial expansion of the WKE around $y = N_y = 0$ in the semiclassical limit.
In substituting (28) into (23), we observe the scalings of Eqs. (31). The derivatives of the Hamiltonian are given by Eqs. (32), where H is given in equation (20). The fluctuation spectrum to lowest order reads as in Eq. (33), where $\Gamma_0(\tau, N_y) = \Gamma(\tau, 0, 0, N_y)$, and the remainder has been estimated by means of the relation (21) between N, y, and $N_y$. We consider the diffusive limit of the scattering operator [15,16]. Heuristically, this holds when the width $\Delta N_y$ of w in the $N_y$-direction is large compared with the spectral width of $\Gamma_0$, which is set by the correlation length $L_C$ [17,18], defined as the width of the two-point correlation function $C(x + \frac{s}{2}, x - \frac{s}{2})$ with respect to s. Then, a Taylor expansion of the Wigner function $w(\tau, y, N_y')$ in the scattering operator around $N_y$ up to second order is appropriate. Under the integral, the zero-order terms cancel out, the first-order term does not contribute due to the symmetry of the Gaussian, and the resulting second-order term yields the diffusion operator (34). Here, the diffusion coefficient D has been defined in Eq. (35), and the remainder arises from the fourth-order term of the Taylor expansion. The second derivative of w in (28) is computed directly. Using (31)-(34) and (36) in equation (23) and collecting powers of $(y, N_y)$ yields the system (37) of ordinary differential equations, which describes the evolution of the beam parameters c, $G_{rr}$, $G_{rN}$, and $G_{NN}$ and is referred to as the phase-space beam-tracing equations. One should notice that when D = 0, the solution of (37) is independent of $\kappa$, but in general the diffusion term introduces a residual dependence on $\kappa$.
Entropy
In the presence of fluctuations, the statistically averaged Wigner function does not necessarily belong to the range of the Wigner transform, since in general we cannot guarantee that there exists a deterministic wave field $\tilde E$ such that $\overline{E(x)E(y)} = \tilde E(x)\tilde E(y)$. When this is the case, e.g., when $E = \tilde E$ with probability one, then we say that the wave field is in a pure state. In general, we speak of mixed states. Following [19], we use an entropy functional S which measures how distant w is from a pure state (Eq. (38)). Indeed, it can be shown that S = 0 for the initial cross section with parameters (30). In this paper, we focus on the paraxial approximation, and with ansatz (28), the integrals can be computed analytically, with the result given in Eq. (39). From (39), the time derivative can be evaluated upon making use of the phase-space beam-tracing equations (37), with the result of Eq. (40). This shows that with fluctuations suppressed (i.e., D = 0) the derivative of the entropy vanishes identically and, in consequence, the beam remains pure. By contrast, for D > 0 we have a monotonically increasing entropy with $S \to 1$ for $\tau \to +\infty$, which shows the increasing impact of fluctuations.
Examples
Convergence of the paraxial solution is demonstrated for the cases of free space and a lens-like medium, addressed in Sects. 5.1 and 5.2, respectively. In both cases, we choose a Gaussian two-point correlation function of width $L_C$, with the amplitude set by F, the (in general x-dependent) fluctuation strength, which corresponds to the root mean square of the refractive-index fluctuations.
The test scenarios are chosen with a focus on simplicity of the resulting expressions. However, background media with more complicated refractive index functions n would also be possible as far as the points (i)-(iii) of Sect. 2 are fulfilled. For the application of EC beams in tokamaks, one could e.g., imagine a beam propagating through a layer of increasing refractive index n = n(x) , following the example of [20].
For the example results, the Wigner function itself is considered as well as the projections on configuration space and refractive-index space, $E_y$ and $E_{N_y}$, given by Eqs. (42a) and (42b), respectively. With the paraxial ansatz, both $E_y$ and $E_{N_y}$ are Gaussians, with widths given by Eqs. (43) and (44), respectively. As a reference solution, we have analytical expressions (for the fluctuation-free case only) and the WKBeam solution [17] (including the effect of fluctuations).
Free space results
Here, we make the idealized assumption that the beam propagates in free space ($n_0^2 = 1$ and q = 0 in equation (15)) but that fluctuations of the refractive index around $n_0^2 = 1$ are possible. This reduces the phase-space beam-tracing equations (37) to the system (45). We first discuss a scenario without fluctuations, i.e., D = 0, and a focused beam propagating in the positive x-direction. Figure 1 shows the phase-space structure for several cross sections. Due to the Gaussian ansatz (28), elliptical contours are observed.
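Since the explicit free-space system (45) is not reproduced here, the following sketch rederives and integrates a version of it under an assumed form of the Gaussian ansatz (28). The structure (ballistic rotation of G plus a diffusion term acting on the $N_y$ moments) reproduces the qualitative statements of this section, namely conservation of the phase-space integral and entropy growth inside a fluctuation layer, but the prefactors are not guaranteed to match Eqs. (45a)-(45d).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: free-space phase-space beam tracing. Assuming the Gaussian ansatz
#   w = c * exp(-(Grr*y**2 + 2*GrN*y*Ny + GNN*Ny**2))
# (any kappa factor absorbed into the variables), inserting it into the
# diffusive WKE  dw/dtau + Ny dw/dy = D d2w/dNy2  and matching powers of
# (y, Ny) gives the system below. Prefactors may differ from Eqs. (45a)-(45d).
def rhs(tau, u, D):
    c, Grr, GrN, GNN = u
    d = D(tau)
    return [-2 * d * GNN * c,
            -4 * d * GrN**2,
            -Grr - 4 * d * GrN * GNN,
            -2 * GrN - 4 * d * GNN**2]

# fluctuation layer: constant diffusion coefficient inside 0 < tau < 5
D_layer = lambda tau: 0.01 if 0.0 < tau < 5.0 else 0.0

u0 = [1.0, 1.0, 0.3, 1.0]              # illustrative initial beam parameters
sol = solve_ivp(rhs, (0.0, 20.0), u0, args=(D_layer,), max_step=0.05)

c, Grr, GrN, GNN = sol.y
det = Grr * GNN - GrN**2
energy = c / np.sqrt(det)              # ~ phase-space integral of w (conserved)
purity = c**2 / np.sqrt(det)           # ~ integral of w^2
S = 1.0 - purity / purity[0]           # entropy-like mixing measure, cf. Sect. 4

print("relative energy drift:", (energy.max() - energy.min()) / energy[0])
print("final entropy-like measure:", S[-1])   # grows inside the layer, then flat
```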
In general, during the propagation of the beam the ellipse rotates and gets squeezed. Via (30c) it is seen that the off-diagonal term in the exponential of the Gaussian ansatz vanishes if and only if the beam curvature is infinite, and this occurs at the focal point of a Gaussian beam. Accordingly, the principal axes are aligned with the coordinate axes in that case (top right of Fig. 1). From equation (45a), it follows that the maximum value of the Wigner function at $y = N_y = 0$ remains constant. Along with equations (45b) to (45d), it can be proven that the phase-space integral $\int w\, dy\, dN_y$ is a constant of motion, as it should be due to energy conservation. Further, it can be checked that $S(\tau) \equiv 0$. This shows that, indeed, the Wigner function under study corresponds to a pure state, as expected when fluctuations are not considered.
The projection on configuration space (42a) is proportional to the energy density of the beam resolved in y and, thus, establishes a link to a wave field description. The result of equation (44a) along with (43a) is in agreement with the standard results on the width of a Gaussian beam in free space [21]. For the beam parameters as used in Fig. 1, the width is plotted in Fig. 2 (no fluctuations case). It shows a minimum at the position of the focal point.
In order to understand the effect of fluctuations on the beam, we consider a fluctuation layer with constant fluctuation strength F(x) = 0.02 for 0 < x < 10 cm and zero elsewhere, and a two-point correlation length $L_C$ = 3 cm.
The entropy as computed from formula (39) increases inside the fluctuation layer and then remains constant, cf. Figure 2. This confirms that mixing of the wave field is induced by fluctuations.
Due to the special structure of the diffusion operator (34), which modifies the shape of the cross section in N y -direction only, fluctuations do not directly induce beam broadening in configuration space (top panel of Fig. 2). By contrast, diffusion affects the refractive index projection directly (middle panel of the figure). Fluctuation-induced beam broadening in configuration space is observed after further propagation and must hence be understood as an indirect effect, provoked by the broadened spectrum.
In order to demonstrate the validity of the paraxial solution, in Figs. 3 and 4 we show a comparison of the computed beam width to the analytical solution disregarding fluctuations and to the WKBeam solution.
In both cases, a clear effect of fluctuations is found. We see good agreement of the paraxial solution with the WKBeam solution (not based on the diffusive approximation) in Fig. 3, whereas the paraxial approximation cannot reproduce the WKBeam results in Fig. 4. This can be explained by the small two-point correlation length in the latter case, which invalidates the diffusive approximation; see also the discussion in Sect. 3.

(Figure parameters: cross sections at x = 120 cm and x = 160 cm; beam frequency f = 140 GHz, initial beam width Δ0 = 3 cm, initial curvature radius R0 = 100 cm, no fluctuations.)
For the application of EC waves in tokamak plasmas, the fluctuation parameters are within the diffusive regime for an ITER scenario and beyond for an ASDEX Upgrade scenario [18].
Lens-like medium results
A background medium with $n_0^2(x, y) = 1 - y^2$ and a central refractive index $n_0(x, 0) = 1$ exhibits cut-off layers at $y = \pm 1$. The results for such a profile of the refractive index are presented in Fig. 5.
The reference solution [22] (a direct numerical solution of the Helmholtz equation (7)) is used for comparison.
Multi-Gaussian solution scheme
In this section, we propose a heuristic approach to the approximation of the solution for w in terms of Gaussian functions beyond the diffusion limit (34). The difficulty is due to the non-local structure of the full scattering operator in Eq. (47). We apply to equation (47) the Lie-Trotter splitting scheme [23]. Such a solution method requires a time discretization $\tau_k = k\,\Delta\tau$ with a small time step $\Delta\tau$. Formally, the solution $w(\tau_k)$ can be computed from the solution at the previous time $w(\tau_{k-1})$ as $w(\tau_k) = \Phi_{A+B}(\Delta\tau)\, w(\tau_{k-1})$, where $\Phi_{A+B}$ is the time-evolution operator. We consider the sub-problems solved by the evolution operators $\Phi_A$ and $\Phi_B$, respectively. In the Lie-Trotter scheme, the time-evolution operator of the complete problem is approximated by a composition of the separate operators, $\Phi_{A+B}(\Delta\tau) \approx \Phi_B(\Delta\tau)\,\Phi_A(\Delta\tau)$. For the operator A defined in (48), the effect of $\Phi_A$ on a beam mode has been derived in Sect. 3: the matrix G solves equations (37) with D = 0, which then can be integrated along the time step.
Integration of equation (53) in terms of the explicit Euler method yields Eq. (54). The factor in round brackets multiplying the function w leads to an amplitude decay of the propagating mode. The second summand on the r.h.s. amounts to the convolution of the Gaussian fluctuation spectrum $\Gamma_0$ with the Gaussian w. The result of such a convolution is again a Gaussian function with parameters $(c, G_{rr}, G_{rN}, G_{NN})$ to be determined explicitly from the convolution term. Thus, the operator $\Phi_B$ applied to a Gaussian generates a new Gaussian which is added to $w(\tau_{k-1})$. The total amount of energy carried by the beam (measured by the $y$-$N_y$-integral of the Wigner function) is conserved when both the amplitude decay and the generation of a new mode are accounted for.
The number of Gaussian contributions is exponentially growing during time propagation, as illustrated in Fig. 6.
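The bookkeeping behind this exponential growth can be made concrete with a toy sketch. Below, $\Phi_A$ (the ballistic transport of each mode, cf. Sect. 3) is omitted, and $\Phi_B$ is modeled as an amplitude decay plus the spawning of one convolved Gaussian per existing mode, with the amplitude of the new mode fixed by exact conservation of the $N_y$-integral; the decay rate and spectral width are placeholder numbers, not values from the text.

```python
import numpy as np

# Sketch of the multi-Gaussian bookkeeping in the Lie-Trotter scheme of Sect. 6.
# Each mode is (amplitude, G_NN). Phi_B damps every mode and spawns one extra
# Gaussian from the convolution with the fluctuation spectrum Gamma_0.
dtau, gamma, G_spec = 0.1, 0.5, 2.0   # placeholder step, decay rate, spectrum width

def phi_B(modes):
    out = []
    for amp, G in modes:
        # convolution of two Gaussians: inverse exponents add harmonically
        G_conv = 1.0 / (1.0 / G + 1.0 / G_spec)
        out.append(((1.0 - gamma * dtau) * amp, G))          # damped original mode
        # new-mode amplitude chosen so the N_y-integral of w is exactly conserved
        out.append((gamma * dtau * amp * np.sqrt(G_conv / G), G_conv))
    return out

modes = [(1.0, 5.0)]                                         # single initial mode
for k in range(8):
    modes = phi_B(modes)
    integral = sum(a * np.sqrt(np.pi / g) for a, g in modes)
    print(f"step {k+1}: {len(modes):4d} modes, integral = {integral:.6f}")
```

As expected, the mode count doubles at every step while the printed integral stays constant, which is exactly the trade-off discussed in the text.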
The scheme presented here can quickly exhaust computer resources, given the fast increase of the number of equations to be integrated. Rather than an out-of-the-box technique, the scheme, which remains practical only for a limited number of time steps, should be understood as a basis for more efficient numerical approaches which will be investigated in the future.
Summary
We have addressed paraxial solutions of the WKE, thus reducing the problem of beam propagation to the integration of ordinary differential equations. The results produced by such a method have been found accurate within the limits of validity of the paraxial approximation for a beam propagating in free space and in a lens-like medium. The evolution of the shape of the beam cross section in phase space has been discussed, and the entropy has been introduced. As expected, the paraxial approach is inaccurate whenever the diffusion approximation of the scattering operator breaks down. For such cases, however, we have shown how the paraxial approximation can be combined with a Lie-Trotter splitting scheme which in principle can also deal with such scenarios. While a naive implementation of the splitting method is computationally too expensive due to an exponential increase of Gaussian terms in the solution, more efficient generalizations of this idea are currently under investigation.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"Physics"
] |
A Wolf in Sheep's Clothing: SV40 Co-opts Host Genome Maintenance Proteins to Replicate Viral DNA
Simian virus 40 (SV40) was discovered in 1960 as a contaminant in early polio vaccines. Its discovery coincided with an explosion of knowledge in the new field of molecular biology, and SV40 was quickly adopted as a model to study eukaryotic genome structure, expression, replication, and cell growth regulation in cultured cells [1]. With a genome of only 5.2 kbp, SV40 relies heavily on host cell machinery to propagate, affording investigators a powerful tool to discover key host proteins that the virus manipulates. Indeed, a single multifunctional viral protein, the large tumor (T) antigen (Tag) (Figure 1A), is sufficient to orchestrate the replication of the viral mini-chromosome in infected monkey cells [2], [3]. The origin DNA binding domain of Tag binds specifically to the viral origin of DNA replication, and the C-terminal helicase domain of Tag unwinds parental DNA at SV40 replication forks. The development of a cell-free reaction containing purified Tag and primate cell extract enabled the identification of ten evolutionarily conserved host proteins that are necessary and sufficient, together with Tag, to replicate SV40 DNA in vitro [3], [4].
Figure 1
Assembly and activation of the SV40 pre-replication complex in vitro.
Initiation: How Does Tag Recognize Origin DNA?
Assembly of Tag on the viral core origin of DNA replication (64 bp) is the first step in replication [3,5]. The core origin DNA is composed of three elements: a central palindrome composed of four GAGGC sequences, flanked by a so-called EP element and an asymmetric AT-rich element ( Figure 1B). Binding of a Tag monomer to each GAGGC in the central palindrome nucleates cooperative assembly of additional Tag to form a double hexamer of ~1 MDa ( Figure 1B). The central lobe of the dodecamer consists of the N-terminal 250 residues of both Tag hexamers ( [6] and citations therein). The C-terminal helicase lobe of each hexamer (residues ~260-708) interacts with the EP or AT element of the origin DNA. This pre-replication complex, in the presence of Mg-ADP or -ATP, is sufficient to locally melt (EP element) or untwist (AT element) duplex origin DNA. These local distortions are necessary, but not sufficient, to activate bidirectional helicase activity of the Tag complex in vitro or in vivo.
Activation of Replication: How Does the Tag Double Hexamer Unwind DNA?
Activation of the double hexamer on origin DNA requires a unique phosphorylation state of Tag: phospho-Thr124, and unmodified Ser120 and 123 [7,8]. Cooperative interactions between the N-terminal regions of the two hexamers during assembly on the origin require this same hypo-phosphorylated form of Tag, which, fortuitously, is expressed by recombinant baculovirus. When hypo-phosphorylated Tag double hexamers assemble in the presence of Mg-ADP, which prevents helicase activity, they adopt two distinct conformations [6] ( Figure 1C). In one conformation (parallel), the duplex core origin DNA is buried in the central channel of the double hexamer. In each hexamer, the six origin DNA binding domains (OBDs) form a left-handed spiral structure surrounding the central palindrome [6,9]. In the displaced conformation, the central lobe of the dodecamer is more open, yielding a bent structure. Intriguingly, bacterially expressed Tag double hexamer displays only the parallel conformation, consistent with its inability to activate bidirectional origin unwinding [3,6,7]. Thus, we suggest that conformational changes in the central lobe, in concert with local distortions in the EP and AT elements bound to the helicase lobes, may allow single-stranded DNA (ssDNA) release from the central channel of the double hexamer ( Figure 1C, dashed lines). Hypothetically, the displaced protein conformation could shift back to the parallel conformation without fully recapturing both strands of ssDNA. Indeed, the observation that Tag double hexamer-ADP-origin DNA complexes dissociate into single hexamers after exposure to a single-strand-specific nuclease argues that ssDNA must become accessible outside of the protein complex [10]. The observed conformational flexibility [6] could thus generate an activated dodecamer poised for bidirectional unwinding by steric exclusion, as proposed for the cellular Mcm2-7 replicative helicase [11,12]. Future studies to define the path of the DNA through an active Tag helicase complex will be required to test this model.
Elongation and Termination: Is Movement of Sister Replication Forks Coupled?
Both unphosphorylated and phosphorylated forms of Tag assemble double hexamers on duplex SV40 origin DNA ( Figure 1D, I). However, only the hypo-phosphorylated form of Tag displays cooperative interactions between the two hexamers and undergoes remodeling to activate the helicase to unwind with 3′ to 5′ polarity ( Figure 1C, right). In vitro, purified hypo-phosphorylated Tag can unwind origin DNA bidirectionally without disrupting the cooperative interactions between the two hexamers, resulting in "rabbit-ear" DNA structures detectable by electron microscopy [13]. If this looped template were replicated, DNA synthesis at the two sister replisomes might be coupled ( Figure 1D, II). However, in infected primate cells, most of the Tag is additionally phosphorylated on Ser120 and Ser123. Alanine substitution of either residue abolishes viral DNA replication in vivo [7,14], implying that modification of both sites is important for replication. Since phosphorylation of Ser120 or Ser123 disrupts cooperative interactions between hexamers, we suggest that hyper-phosphorylation of Tag uncouples the two replisomes soon after initiation of replication ( Figure 1D, III). Since hyper-phosphorylation of Tag has no detectable effect on its unidirectional helicase activity [3,7,8], the sister replication forks could migrate independently and converge to complete replication in vivo ( Figure 1D, IV).

Funding: This work was supported by NIH RO1 GM52948 and T32 AI089554. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing Interests: The authors have declared that no competing interests exist.

* E-mail:<EMAIL_ADDRESS>

Figure 1 (caption). (A) Tag domains (origin DNA binding domain and helicase domain, the latter composed of the zinc (Zn) and AAA+ ATPase sub-domains) connected by flexible regions (white) (P, cluster of phosphorylated residues that regulates origin activation; HR, host range function). (B) Diagram of ADP-associated SV40 Tag double hexamer bound to the duplex SV40 core origin of DNA replication (EP, central palindrome, AT), with non-origin DNA protruding from the complex (adapted from [6]). (C) 3D cryo-electron microscopy reveals two conformations (parallel, displaced) of ADP-associated hypo-phosphorylated SV40 Tag double hexamer on SV40 origin DNA as in (B) (adapted from [6]). A hypothetical conformation for the activated double hexamer is shown at the right. Dashed lines suggest potential paths of the DNA strands through each protein conformation. (D) Stages of SV40 replication. I, Tag dodecamer assembled on duplex SV40 DNA as in (B); II, hypo-phosphorylated Tag dodecamer activated as in (C) unwinds DNA bidirectionally [13] and may assemble host proteins (not shown here) into two sister replisomes that interact physically through the central lobe of the Tag dodecamer; III, hyper-phosphorylation of Tag disrupts interactions between the hexamers [7,8], releasing the replisomes to progress independently along the template chromatin; IV, replication forks converge slowly, accompanied by DNA decatenation, to complete replication, which may involve additional host proteins [3,14,[18][19][20][21]]. doi:10.1371/journal.ppat.1002994.g001

Figure 2 (caption). (A) Protein machinery at a viral and a host fork. Topoisomerases, nucleosomes, and chromatin modifiers known to act at both forks are not shown (adapted from [24]). (B) DNA damage signaling in SV40 DNA replication centers at 48 hours post-infection, but not in host DNA replication centers. Mock-infected or SV40-infected BSC40 monkey cells were labeled with 10 μM EdU (a thymidine analog) for 5 minutes to visualize newly replicated DNA. Soluble proteins were pre-extracted and cells were fixed [18]. EdU (teal) was coupled to a fluorescent dye using click chemistry (Invitrogen) and DNA was stained with DAPI. Chromatin-bound Tag (green) and histone γH2AX (red) were stained for indirect immunofluorescence as described [18]. Cells were visualized with a 63× objective at a 0.6 μm z-axis slice using an Apotome (Zeiss). Scale bars represent 10 μm. doi:10.1371/journal.ppat.1002994.g002

SV40: A Simple Model for Host DNA Replication?
Investigation of SV40 replication has been motivated in part by anticipation that it would provide insight into host replication proteins and mechanisms. The architecture, dimensions, and assembly of Tag and yeast Mcm2-7 double hexamers on their cognate origin DNAs are closely related [6,15,16]. Much of the protein machinery at SV40 and host replication forks is also remarkably similar [2][3][4] (Figure 2A). Furthermore, the SV40 genome replicates in vivo as a mini-chromosome packaged in host nucleosomes and utilizes a variety of chromatin remodeling proteins and histone chaperones. Yet, the SV40 replisome clearly excludes several key components of host replication forks, e.g., the leading strand DNA polymerase ε, Mcm10, and Cdc45 ( [17,18]; G. Sowd, unpublished data), and all of the host proteins essential for SV40 replication in vitro (Figure 2A) function in host DNA repair, as well as replication, pathways. Lastly, SV40 infection induces host DNA damage signaling that is required to replicate viral chromatin in vivo [14,18,19]. These observations have prompted a re-evaluation of the viral replication strategy as a model for host chromosomal replication, and suggest the possibility that the virus may co-opt host repair pathways.
Host Genome Maintenance: A Niche for Viral Chromatin Replication?
Recently, fluorescence microscopy of SV40 chromatin replication in infected cells has revealed that Tag and the host proteins required for SV40 replication in vitro co-localize in prominent subnuclear foci that enlarge with time after infection in permissive cells [18] ( Figure 2B). Moreover, thymidine analogs, e.g., EdU, that are incorporated into nascent viral chromatin co-localize with these proteins, suggesting that these foci represent viral replication centers ( [20,21]; G. Sowd, unpublished data) ( Figure 2B). Intriguingly, a variety of host DNA damage signaling and repair proteins, e.g., γH2AX, Mre11, Nbs1, Rad51, and FancD2, also reside in SV40 replication centers [18,20,21] (Figure 2B). Although punctate foci of such genome maintenance proteins are observed in chromatin of uninfected cells exposed to DNA damaging agents, such foci are generally much smaller than SV40 replication centers [18]. Of note, the association of host genome maintenance proteins with viral replication centers is not unique to SV40 or polyomaviral infections, but also occurs in cells infected by other DNA viruses, including adeno-, papilloma-, and herpesviruses [22,23]. These findings suggest that host damage signaling and genome maintenance pathways serve important, though still poorly understood, roles in viral propagation, and raise questions about how viruses activate damage signaling. The localization of host genome maintenance proteins at SV40 replication centers suggests the possibility that viral chromatin may masquerade as "damage" to attract host proteins needed for replication (Figure 2A). A second possibility is that replicating viral chromatin may suffer actual DNA damage that host genome maintenance proteins could then repair. In either case, the activation of DNA damage checkpoints controlled by ATR and ATM signaling may arrest SV40-infected cells in a pseudo-S/G2 phase state that provides conditions favorable for viral DNA amplification [19,21,23]. Thus, much remains to be learned about how SV40 infection activates DNA damage signaling and uses it to facilitate viral propagation.
"Biology",
"Medicine"
] |
SU(3) truncated Wigner approximation for strongly interacting Bose gases
We develop and utilize the SU(3) truncated Wigner approximation (TWA) in order to analyze far-from-equilibrium quantum dynamics of strongly interacting Bose gases in an optical lattice. Specifically, we explicitly represent the corresponding Bose--Hubbard model at an arbitrary filling factor with restricted local Hilbert spaces in terms of SU(3) matrices. Moreover, we introduce a discrete Wigner sampling technique for the SU(3) TWA and examine its performance as well as that of the SU(3) TWA with the Gaussian approximation for the continuous Wigner function. We directly compare outputs of these two approaches with exact computations regarding dynamics of the Bose--Hubbard model at unit filling with a small size and that of a fully-connected spin-1 model with a large size. We show that both approaches can quantitatively capture quantum dynamics on a timescale of $\hbar/(Jz)$, where $J$ and $z$ denote the hopping energy and the coordination number. We apply the two kinds of SU(3) TWA to dynamical spreading of a two-point correlation function of the Bose--Hubbard model on a square lattice with a large system size, which has been measured in recent experiments. Noticeable deviations between the theories and experiments indicate that proper inclusion of effects of the spatial inhomogeneity, which is not straightforward in our formulation of the SU(3) TWA, may be necessary.
I. INTRODUCTION
Quantum simulators built with synthetic quantum platforms that are highly controllable have been applied for studying quantum many-body physics in and out of equilibrium. Examples of such quantum simulators include ultracold gases in optical lattices [1][2][3][4][5], Rydberg atoms in optical tweezer arrays [6], trapped ions [7], and superconducting circuits [8,9]. Of particular interest is far-from-equilibrium quantum dynamics of isolated many-body systems described by the tight-binding Hubbard-type models, which can be simulated with ultracold gases in optical lattices. The quantitative accuracy of such analog quantum simulators for non-equilibrium lattice systems has been examined through direct comparisons with outputs from exact computational methods for some special cases, such as the exact diagonalization for small systems [10] and the matrix-product-state (MPS) approaches for onedimensional (1D) systems [3,4]. With the high accuracy confirmed, results obtained from optical-lattice quantum simulators have been exploited in order to test approximate computational methods for quantum many-body dynamics in higher dimensions. For instance, it has been shown in Ref. [5] that the non-equilibrium dynamical mean-field theory can quantitatively capture dynamics of the three-dimensional (3D) Hubbard model subjected to a periodic driving. Moreover, in Ref. [11], the Gross-Pitaevskii truncated-Wigner approximation (GPTWA), which is a semiclassical phase-space method on the basis of the GP mean-field theory [12,13], has been directly compared with experimental data regarding dynamics of the 3D Bose-Hubbard model in a weakly interacting regime after a quantum quench. It has been shown that the outputs of GPTWA with no free parameter are in good agreement with experimental data for early-time regions.
In recent years, some experimental works have explored quantum quench dynamics of strongly interacting ultracold gases in two-dimensional (2D) and 3D optical lattices [14,15]. In Ref. [15], an experimental group at Kyoto University has studied sudden-quench dynamics of equal-time single-particle correlation functions for a strongly interacting 174 Yb gas loaded into a deep 2D lattice. In contrast to 1D systems, it is generally hard to numerically simulate time evolution of correlation functions in 2D and 3D even on a short timescale. It has been found in Ref. [15] that the ordinary GPTWA cannot fully capture characteristic properties of the correlation propagation after sudden quenches, e.g., peak and dip properties observed in the correlation signals and saturated values of the correlation at relatively long times. This can be attributed to the fact that in the strongly interacting regime the adequate classical limit of the system is not condensates of coherent bosons described by the GP theory.
In Ref. [16], Davidson and Polkovnikov have introduced a promising phase-space approach for analyzing strongly interacting Bose-Hubbard systems. This method is called the SU(3) TWA [hereafter SU(3)TWA]. For sufficiently large local interactions, the Bose-Hubbard model reduces to an effective pseudospin-1 model acting on a projected Hilbert space [17,18]. In the SU(2) TWA method, which is typically discussed and used in the context of experiments of large-spin systems and arrays of trapped Rydberg atoms [19,20], this effective model is treated as a Hamiltonian consisting of the SU(2) spin operators for S = 1 [13]. However, for the SU(3) TWA, the model is translated into a Hamiltonian consisting of SU(3) matrices, which gives an alternative phase-space representation of the system with extra five dimensions in addition to the three dimensions of the SU(2) phase space. Since the local interaction terms of the effective model can be linearized in the SU(3) matrices, the local particle and hole fluctuations, which produce key effects on the dynamical properties of the strongly interacting regime, are accurately captured at the level of the semiclassical approximation [16]. The TWA method based on the GP trajectories is not suitable to formulate those fluctuations in the strongly interacting limit, just as the Bogoliubov approximation for weakly interacting dilute Bose gases fails to describe the quantum phase transitions to the Mott-insulator phases at low temperatures [21]. We therefore expect that, the SU(3) TWA may simulate the dynamics in the strongly interacting regime of the experiment [15], beyond the capability of the GPTWA, and also the SU(2) TWA.
In their original work, the performance of the SU(3)TWA was tested by applying it to a fully connected spin-1 model, which has an all-to-all spin-exchange (or hopping) term and can be numerically diagonalized even at a large size. However, its quantitative accuracy in realistic cases, where the hopping reaches only nearest neighbors and the system size is large, has not been examined so far. Furthermore, an effective model that they used to describe Bose-Hubbard systems is valid only for high-filling cases. Therefore, their formulation is not directly applicable to unit-filling Bose-Hubbard systems, which are typically considered in the context of the quantum-simulation studies. We note that a numerical calculation of the SU(3)TWA for a unit-filling experimental setup has been presented in Ref. [22]; however, its explicit formalism has not been provided so far.
The goal of this paper is to examine the performance of the SU(3)TWA in simulating quench dynamics of strongly interacting Bose gases in a 2D optical lattice [15]. We extend the previous formalism, which was applied to an effective pseudospin-1 model for the Bose-Hubbard model with large filling factors and strong interactions [23,24], to the unit-filling case [17,18] corresponding to the experimental setup. As a technique to evaluate the phase-space integration emerging in the SU(3)TWA, we will employ two different approaches, i.e., the Gaussian approximation for the (continuous) Wigner function [16] and the discrete TWA (DTWA) approach [19,20,25]. In particular, the DTWA approach is thought to be better than the Gaussian approach. Indeed, the numerical sampling of the DTWA can be readily carried out without approximation of the probability distribution functions (see also Refs. [19,25]). In this paper, we also study the performance of a DTWA sampling for the SU(3)TWA via large-scale numerical simulations for a fully connected spin-1 model. A numerical simulation on the basis of the DTWA scheme will be compared with the experimental data as well as that of the Gaussian approximation.
The remainder of this paper is organized as follows: In Sec. II, we introduce an effective pseudospin-1 model for the Bose-Hubbard Hamiltonian in a strongly interacting regime and a fully connected spin-1 model, respectively. In Sec. III, we formulate the SU(3)TWA for the effective model. In Sec. IV, we study the Gaussian approximation and the DTWA approach for SU(3) phase-space variables. In Sec. V, using the SU(3)TWA, we calculate quench dynamics of equal-time single-particle correlation functions for a strongly interacting Bose gas in a 2D optical lattice. There, we compare some semiclassical results with actual experimental data obtained in Ref. [15]. In Sec. VI, we conclude this paper and present outlooks for future studies.
II. MODELS
In this paper, we study the time evolution of a strongly interacting Bose gas loaded into an optical lattice. To describe this system, we consider the Bose-Hubbard Hamiltonian on a certain lattice structure [26,27],
$$\hat H = -J\sum_{\langle j,k\rangle}\left(\hat a^\dagger_j \hat a_k + {\rm h.c.}\right) + \frac{U}{2}\sum_j \hat n_j\left(\hat n_j - 1\right),$$
where $\hat a^\dagger_j$ and $\hat a_j$ are the creation and annihilation operators of bosons at site j, and $\hat n_j = \hat a^\dagger_j \hat a_j$. The angular brackets $\langle j, k\rangle$ indicate a nearest-neighbor link on the lattice. The real parameters J and U denote the hopping amplitude and interaction strength, respectively. The ratio of the parameters, U/J, can be widely controlled by tuning the optical-lattice depth [15] or utilizing a Feshbach-resonance technique [14].
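As a concrete reference point for the restricted local Hilbert space used below, the following sketch evolves the two-site Bose-Hubbard model (1) exactly, with the local occupation truncated to {0, 1, 2} around unit filling; the parameter values are illustrative and not taken from the text.

```python
import numpy as np
from scipy.linalg import expm

# Sketch: exact dynamics of the two-site Bose-Hubbard model (1) with the local
# Hilbert space truncated to occupations {0, 1, 2}, i.e., the same restriction
# that underlies the pseudospin-1 description at unit filling.
nmax = 2
d = nmax + 1
a = np.diag(np.sqrt(np.arange(1.0, d)), k=1)   # truncated annihilation operator
n = a.T @ a
I = np.eye(d)

J, U = 1.0, 20.0                               # illustrative strong coupling
H = (-J * (np.kron(a.T, a) + np.kron(a, a.T))
     + 0.5 * U * (np.kron(n @ (n - I), I) + np.kron(I, n @ (n - I))))

psi = np.zeros(d * d, dtype=complex)
psi[1 * d + 1] = 1.0                           # unit-filling Mott state |1, 1>

dt, nsteps = 0.01, 400
U_dt = expm(-1j * H * dt)
hop = np.kron(a.T, a)                          # a_1^dagger a_2
corr = []
for _ in range(nsteps):
    psi = U_dt @ psi
    corr.append(np.vdot(psi, hop @ psi).real)  # equal-time correlation function

print("max |Re <a1^dag a2>| during the quench:", max(abs(c) for c in corr))
```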
In a strongly interacting regime of Eq. (1), fluctuations of the occupation per site are sufficiently suppressed around the mean filling $\bar n$. Therefore, only a subset of local Fock states near the mean filling is relevant to strongly interacting dynamics governed by Eq. (1). If the interaction is sufficiently strong, i.e., $U/(\bar n J) \gg 1$, one can safely assume that only three Fock states, i.e., $|\bar n - 1\rangle_j$, $|\bar n\rangle_j$, $|\bar n + 1\rangle_j$, are relevant to the time evolution of the interacting bosons. In the projected Hilbert space spanned by such a local basis, the Bose-Hubbard Hamiltonian (1) is approximated as an effective pseudospin-1 model [17,18], given by Eq. (2), where $\delta\nu_- = \sqrt{1 + 1/\bar n} - 1$ and $\hat S^\pm_j = \hat S^x_j \pm i\hat S^y_j$. The pseudospin operator $\hat S^\mu_j$ ($\mu = x, y, z$) satisfies the SU(2) Lie algebra
$$[\hat S^\mu_j, \hat S^\nu_k] = i\,\delta_{jk}\,\epsilon^{\mu\nu\gamma}\,\hat S^\gamma_j.$$
The three-leg tensor $\epsilon^{\mu\nu\gamma}$ is the fully antisymmetric structure constant satisfying $\epsilon^{xyz} = -\epsilon^{yxz} = \epsilon^{yzx} = \cdots = 1$. Hereinafter, repeated Greek indices indicate the contraction of tensors. It should be noticed that if one takes the high-filling limit, i.e., $\bar n \gg 1$, the effective model is simplified [23,24] to the form of Eq. (4), in which B can be interpreted as a magnetic field applied along the z-axis. In the previous work [16], the SU(3)TWA was applied to this high-filling model defined on a cubic lattice. However, in order to analyze experimental systems with a setup of $\bar n = 1$ as realized in Ref. [15], it is required to use Eq. (2) rather than the high-filling model. In Sec. III, we will explain how one generalizes the SU(3)TWA to Eq. (2). In Sec. IV, we present detailed investigations of the Monte Carlo integration methods employed for SU(3)TWA simulations. To examine the quantitative validity of our numerical approaches, especially the DTWA approach for SU(3) phase-space variables, we will revisit a fully connected spin-1 model, which is the model studied in Ref. [16]. The Hamiltonian of the fully connected model is given by Eq. (5); its spin-exchange coupling term describes all-to-all connections between distant spin operators. Hence, each lattice point has a coordination number z = M − 1. As M increases, the valid timescale of the SU(3)TWA for this model becomes longer for a given U/(zJ) [16]. Furthermore, due to a characteristic property described in Appendix A, exact quantum dynamics of this model can be easily simulated using classical computers even for considerably large M. Accordingly, the fully connected model is suitable for examining the performance of the sampling methods. See also Appendix A for details about how to implement exact numerical simulations of this model.
III. SU(3) TRUNCATED-WIGNER APPROXIMATION
The first step in building the SU(3)TWA for spin-1 models is to rewrite their Hamiltonian by means of eight SU(3) matrices [16]. Let us consider a set of SU(3) generators $\{\hat X_\mu\}$ ($\mu = 1, \cdots, 8$) obeying the SU(3) Lie algebra
$$[\hat X_\mu, \hat X_\nu] = i f_{\mu\nu\gamma}\,\hat X_\gamma.$$
Here $f_{\mu\nu\gamma}$ is a fully antisymmetric structure constant accompanying the SU(3) group. If we take the Jordan-Schwinger mapping into account, each generator can be written in the bilinear form of the SU(3) Schwinger bosons [17,18,23],
$$\hat X_\mu = \sum_{m,n}\hat b^\dagger_m T^{mn}_\mu \hat b_n.$$
To reproduce the original Hilbert space, the particle number must be preserved per site by the constraint $\sum_n \hat b^\dagger_n \hat b_n = 1$. The value of $f_{\mu\nu\gamma}$ depends on the details of $T^{mn}_\mu$. Our choice for $T_\mu$ will be shown later in Eq. (11), and the corresponding $f_{\mu\nu\gamma}$ will be given by Eq. (13). The SU(3) matrices $T_\mu$ form a complete set of 3 × 3 matrices, so that an arbitrary local operator acting on the three-state Hilbert space can be expressed as a linear combination of these matrices. Using this property, one can linearize the local interaction terms in spin-1 models, such as $\frac{U}{2}\sum_j (\hat S^z_j)^2$, in terms of SU(3) matrices. Specifically for the effective model (2), if the interaction U is sufficiently large compared to $\bar n J$, which characterizes the hopping term, then the Hamiltonian is regarded as being almost linear in the SU(3) matrices. Therefore, the SU(3)TWA for this model is expected to be valid over a long timescale. Furthermore, if the hopping term is negligible, the SU(3)TWA becomes exact at all times because there exists no truncation error stemming from higher-order derivatives of the time-evolving equation for the Wigner function [13].
Let us generalize the SU(3)TWA formalism to the arbitrary-filling model (2). First, we express the effective Hamiltonian by means of the local SU(3) generators, denoted by $\hat X^{(j)}_\mu$. A key point is that the local interaction term of the SU(2) spin operators is translated into a linear combination of such SU(3) generators. Then, we make a Wigner-Weyl transform of the Hamiltonian and obtain a classical Hamiltonian $H_W$ for the SU(3) phase-space variables [Eq. (8)], where $\delta\nu_+ = \sqrt{1 + 1/\bar n} + 1$. The SU(3)TWA states that, within a semiclassical approximation, the time evolution of the expectation value of an operator $\hat\Omega$, i.e., $\langle\hat\Omega\rangle(t)$, can be represented in terms of saddle-point trajectories of the SU(3) variables, which are governed by $H_W$ and weighted with a Wigner quasi-probability distribution function:
$$\langle\hat\Omega\rangle(t) \approx \int \prod_{j,\mu} dX^{(j)}_{0,\mu}\; W(X_0)\,\Omega_W\!\left(X_{\rm cl}(t)\right),$$
where $\prod_{j,\mu} dX^{(j)}_{0,\mu}$ is the integration measure and $\Omega_W$ is the Weyl symbol of $\hat\Omega$. The classical trajectory $X_{\rm cl}(t)$ obeys Hamilton's equation associated with the SU(3) Lie algebra,
$$\frac{dX^{(j)}_\mu}{dt} = f_{\mu\nu\gamma}\,\frac{\partial H_W}{\partial X^{(j)}_\nu}\, X^{(j)}_\gamma.$$
This equation of motion is integrated under an initial condition $X^{(j)}_{0,\mu}$ distributed according to $W(X_0)$. The width of the Wigner function gives quantum-fluctuation corrections to saddle-point or mean-field results, which formally correspond to the time-dependent Gutzwiller approximation with a single-site cluster consisting of three levels.
If we take the high-filling limit of the classical Hamiltonian (8), all the terms involving the five additional SU(3) variables disappear. Therefore, these additional variables are responsible for the differences between the high- and low-filling descriptions. It should be noted that a constant term has been eliminated from Eq. (8) because it does not affect Eq. (10). The above formalism will be used in Sec. V to analyze the experimental setup of Ref. [15].
In this paper, we will utilize the representation of the SU(3) matrices given in Eq. (11), following the notations of Davidson and Polkovnikov [16]. These matrices are normalized according to Eq. (12), and it is confirmed that, in this specific representation, the nonzero values of $f_{\mu\nu\gamma}$ are those given by Eq. (13). Of course, this is not the unique choice. Instead of this representation, one can also use the Gell-Mann matrices, which are more familiar in high-energy physics [28].
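For readers who prefer the Gell-Mann basis mentioned above, the sketch below constructs the eight matrices, extracts the structure constants from the commutators, and integrates the single-site classical precession generated by a linear Weyl symbol. The conventions assumed here ($\mathrm{Tr}[\lambda_a\lambda_b] = 2\delta_{ab}$, $[\lambda_a, \lambda_b] = 2i f_{abc}\lambda_c$) are the standard Gell-Mann ones and need not coincide with the normalization of Eqs. (11)-(13).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: SU(3) algebra in the standard Gell-Mann basis (mentioned in the text
# as an alternative to the representation of Ref. [16]). Conventions assumed:
# Tr[l_a l_b] = 2 delta_ab and [l_a, l_b] = 2i f_abc l_c.
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1.0
l[1][0, 1], l[1][1, 0] = -1.0j, 1.0j
l[2][0, 0], l[2][1, 1] = 1.0, -1.0
l[3][0, 2] = l[3][2, 0] = 1.0
l[4][0, 2], l[4][2, 0] = -1.0j, 1.0j
l[5][1, 2] = l[5][2, 1] = 1.0
l[6][1, 2], l[6][2, 1] = -1.0j, 1.0j
l[7] = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)

comms = np.array([[la @ lb - lb @ la for lb in l] for la in l])   # (8,8,3,3)
f = np.real(np.einsum('abij,cji->abc', comms, l) / 4.0j)          # f_abc
print("f_123 =", f[0, 1, 2])   # equals 1 in this convention

# Single-site classical precession for a linear Weyl symbol H_W = sum_n h_n X_n:
# dX_m/dt = 2 f_mng (dH_W/dX_n) X_g, exact for Hamiltonians linear in X.
h = np.zeros(8); h[2] = 0.7                   # "field" along lambda_3
v = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)  # initial pure state
rho = np.outer(v, v.conj())
X0 = np.real(np.trace(l @ rho, axis1=1, axis2=2))

sol = solve_ivp(lambda t, X: 2.0 * np.einsum('mng,n,g->m', f, h, X),
                (0.0, 10.0), X0, max_step=0.01)
drift = np.abs(np.linalg.norm(sol.y, axis=0) - np.linalg.norm(X0)).max()
print("norm drift along the trajectory:", drift)   # ~0: |X| is conserved
```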
IV. MONTE CARLO INTEGRATIONS
In this section, we study Monte Carlo integration methods for evaluating the phase-space integration of the initial Wigner function. In Ref. [16], an approximate Gaussian-Wigner function has been used to perform numerical simulations. This Gaussian approximation is a simple and efficient prescription for resolving a kind of minus-sign problem in TWA simulations, namely that the exact Wigner function defined by means of the Schwinger-boson coherent states typically takes negative values. In Sec. V, to simulate the experimental setup, we will indeed employ the Gaussian approach.
As an alternative sampling scheme that allows us to avoid the appearance of a negative-valued Wigner function, we also use a DTWA approach [19]. This approach is formulated on the basis of the discrete-Wigner representation of a finite-Hilbert-space quantum system. The concept of the discrete-Wigner representation was invented by Wootters in Ref. [29]. In this section, by extending the previous DTWA method for SU(2) spin systems [19], we develop a DTWA approach suited for the SU(3)TWA. To this end, we will introduce phase-point operators for the SU(3) generators, each of which is represented as a 3 × 3 matrix.
A. Gaussian approximation
In the Gaussian approximation for exact Wigner functions, an appropriate Gauss distribution is used to approximately express initial density matrices within a class of positive-definite functions [16]. To be specific, let us consider a fully polarized state along the x-axis, i.e., $\hat\rho_1 = |S^x{=}1\rangle\langle S^x{=}1|$. Its matrix form in the $S^z$ basis is given by
$$\hat\rho_1 = \frac{1}{4}\begin{pmatrix} 1 & \sqrt{2} & 1\\ \sqrt{2} & 2 & \sqrt{2}\\ 1 & \sqrt{2} & 1 \end{pmatrix}.$$
To obtain the corresponding Gaussian-Wigner function, we make the following ansatz with free parameters $R = (R_{\mu\nu})$, $m = (m_\mu)$, and $\sigma = (\sigma_\mu)$:
$$W(X) = \prod_{\mu=1}^{8}\frac{1}{\sqrt{2\pi}\,\sigma_\mu}\exp\!\left[-\frac{(R_{\mu\nu}X_\nu - m_\mu)^2}{2\sigma_\mu^2}\right].$$
This distribution defines the first- and second-order moments of the SU(3) phase-space variables. The free parameters are determined such that the Gaussian-Wigner function exactly reproduces the first and second moments of the density matrix (14), i.e.,
$$\overline{X_\mu} = \langle\hat X_\mu\rangle,\qquad \overline{X_\mu X_\nu} = \frac{1}{2}\langle\hat X_\mu\hat X_\nu + \hat X_\nu\hat X_\mu\rangle.$$
The angular brackets on the right-hand side denote the quantum-mechanical average with $\hat\rho_1$. To determine R in practice, we diagonalize the 8 × 8 matrix corresponding to the connected and symmetrized correlation function with respect to the density matrix,
$$C_{\mu\nu} = \frac{1}{2}\langle\hat X_\mu\hat X_\nu + \hat X_\nu\hat X_\mu\rangle - \langle\hat X_\mu\rangle\langle\hat X_\nu\rangle.$$
The eight-dimensional matrix R is constructed from the eigenvectors obtained when $C_{\mu\nu}$ is diagonalized. Each eigenvalue gives the squared covariance $\sigma^2_\mu$. The mean value $m_\mu$ is the rotation of the vector $(\langle\hat X_\mu\rangle)$, i.e., $R_{\mu\nu}\langle\hat X_\nu\rangle = m_\mu$. Direct calculation then yields the explicit parameters for the state (14). With these parameters, the Gauss distribution (15) randomly generates phase-space variables reproducing the exact low-order moments of the state in Eq. (14).
In the projected Hilbert space for the effective pseudospin-1 models, the deep Mott-insulator state, which is approximately realized in a sufficiently deep optical lattice, is expressed as a direct product state of $\hat\rho_2 = |S_z = 0\rangle\langle S_z = 0|$. The corresponding parameters of the Gauss distribution function are calculated in the same manner.
B. SU(3) discrete-Wigner representation
Let us consider a discrete-Wigner representation for a finite-level system whose Hilbert space is spanned by three basis vectors $\{|0\rangle, |1\rangle, |2\rangle\}$. The key building blocks for this representation are the so-called phase-point operators $\hat A_\alpha$, which are 3 × 3 matrices acting on the Hilbert space. The integer index $\alpha = (a_1, a_2)$ ($a_1, a_2 = 0, 1, 2$) labels a point in the discrete phase space Γ, which contains nine points. The phase-point operators are also called the Stratonovich-Weyl kernels [30].
The phase-point operators are important because they define a Wigner-Weyl transform of quantum-mechanical operators. In the discrete-Wigner representation, the Weyl symbol of an operator $\hat\Omega$ is defined as its projection onto a point $\alpha \in \Gamma$. Specifically, such a projection of a given density matrix $\hat\rho$ leads to the discrete Wigner function $w_\alpha$. The pre-factor 1/3 is needed to ensure the unit normalization of the Wigner function, $\sum_{\alpha\in\Gamma} w_\alpha = 1$; see also below. By analogy with continuous cases, where the coordinate and momentum operators $(\hat x, \hat p)$ define a continuous phase-point operator, the discrete phase-point operator should have the following properties [29]: 1. Hermiticity: $\hat A_\alpha^\dagger = \hat A_\alpha$ for any $\alpha \in \Gamma$. Then, the phase-space functions are real as long as the corresponding operators are Hermitian.
Such discrete phase-point operators can also be constructed for general cases where the Hilbert space is N-dimensional (N ≥ 2 being a prime number) [29]. Furthermore, it is possible to construct a discrete number-phase representation for Bose systems, whose Hilbert space is spanned by generators of the Heisenberg-Weyl group, and it provides a DTWA-like semiclassical approximation for their quantum dynamics if the allowed occupancy of particles is sufficiently large [31].
As an inverse transformation of Eqs. (28) and (29), the operators $\hat\Omega$ and $\hat\rho$ are linearly expanded in $\hat A_\alpha$. Then, the expectation value of $\hat\Omega$ for $\hat\rho$ follows, with the summation in the last expression taken over the whole Γ; in the second equality, we have used the trace orthogonality of $\hat A_\alpha$. The concrete forms of $\Omega_\alpha$ and $w_\alpha$ are specified once one determines $\hat A_\alpha$ for all $\alpha = (a_1, a_2)$ such that they satisfy the required conditions presented above. Here we adopt Wootters's representation of the phase-point operators [29]. It is convenient to expand $\hat A_\alpha$ in the generators of the SU(3) Lie algebra; the projection coefficient $x_\mu(\alpha) = \mathrm{Tr}[\hat A_\alpha \hat X_\mu]$ is the discrete Weyl symbol of $\hat X_\mu$. After direct calculations, we obtain the discrete phase-space variables for $\hat A^{(0)}_\alpha$, e.g., $x_4(\alpha) = 2\delta_{a_1,1}\cos(4\pi a_2/3)$. Viewed as a combined eight-dimensional vector on each phase point, $\alpha = (0, 1)$, $(1, 2)$, and $(2, 0)$ correspond to particular spin configurations. Two classical spins $x(\alpha)$ and $x(\alpha')$ at different points $\alpha \neq \alpha'$ are not orthogonal to each other; indeed, they have a finite inner product even for $\alpha \neq \alpha'$. In the DTWA simulation, such discretized spins are randomly distributed according to $w_\alpha$ and give a set of initial conditions for the classical trajectories. The discussion of the DTWA for SU(3) systems will be presented in Sec. IV C.
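The Wootters construction can be written down in a few lines. The sketch below uses one standard form valid for odd prime dimensions, $A(a_1,a_2)_{jk} = \omega^{a_2(j-k)}\,\delta_{j+k \equiv 2a_1 \,(\mathrm{mod}\,3)}$ with $\omega = e^{2\pi i/3}$; this is an assumption about the paper's (lost) explicit formula, but it satisfies every property quoted above, which the code verifies.

```python
import numpy as np

d = 3
w = np.exp(2j * np.pi / d)

def phase_point(a1, a2):
    """Wootters-type phase-point operator for an odd prime dimension d:
    A_{jk} = w^(a2 (j - k)) if j + k = 2 a1 (mod d), else 0."""
    A = np.zeros((d, d), dtype=complex)
    for j in range(d):
        for k in range(d):
            if (j + k) % d == (2 * a1) % d:
                A[j, k] = w ** (a2 * (j - k))
    return A

ops = {(a1, a2): phase_point(a1, a2) for a1 in range(d) for a2 in range(d)}

for A in ops.values():
    assert np.allclose(A, A.conj().T)          # 1. Hermiticity
    assert np.isclose(np.trace(A), 1)          # unit trace
for a, A in ops.items():
    for b, B in ops.items():                   # trace orthogonality
        assert np.isclose(np.trace(A @ B), d if a == b else 0)
assert np.allclose(sum(ops.values()), d * np.eye(d))   # completeness

print("all required properties hold for the", len(ops), "phase points")
```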
To clarify the sampling weight of DTWA simulations, which will be used in the following sections, let us calculate the discrete Wigner function for the Mott-insulator state [Eq. (24)] by using $\hat A^{(0)}_\alpha$. It results in a positive-definite distribution function. This result means that in the Mott-insulator state the three configurations at $\alpha = (1, 0)$, $(1, 1)$, $(1, 2)$ are realized with equal probability 1/3, while all other ones have zero probability. Therefore, we can directly evaluate averages with the Wigner function numerically without further approximation of the distribution function. However, the positivity of Eq. (36) is not a general property. For example, the x-polarized state in Eq. (14) yields oscillatory terms in the distribution: while the first term with $\delta_{a_1,1}$ is always positive, the second term with $\delta_{a_1,0}$ and $\delta_{a_1,2}$ takes negative values due to the oscillating contributions.
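These two Wigner functions are easy to reproduce numerically. The sketch below (same phase-point construction and basis ordering as above, both assumptions) recovers the uniform weight 1/3 on the three points with $a_1 = 1$ for the Mott state and exhibits explicit negativity for the x-polarized state.

```python
import numpy as np

d = 3
j, k = np.indices((d, d))

def A(a1, a2):   # phase-point operator, as in the previous sketch
    return np.where((j + k) % d == (2 * a1) % d,
                    np.exp(2j * np.pi * a2 * (j - k) / d), 0)

def wigner(rho):
    return {(a1, a2): np.trace(A(a1, a2) @ rho).real / d
            for a1 in range(d) for a2 in range(d)}

rho_mott = np.diag([0.0, 1.0, 0.0])       # |S_z = 0><S_z = 0|
for alpha, val in sorted(wigner(rho_mott).items()):
    if abs(val) > 1e-12:
        print(alpha, round(val, 4))       # 1/3 at (1,0), (1,1), (1,2) only

psi = np.array([1, np.sqrt(2), 1]) / 2.0  # |S_x = 1>
w_x = wigner(np.outer(psi, psi))
print(min(w_x.values()))                  # negative: this state has negativity
```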
As mentioned in previous works [29,32], the definition of the phase-point operators is not unique. In general, there exists a non-singular (or regular) transformation, $\hat A_\alpha \to \hat S^{-1}\hat A_\alpha\hat S$, which retains the required properties of the phase-point operators [29]. This type of ambiguity will be utilized in Appendix C to construct a reasonable set of phase-point operators for given density matrices.
C. SU(3)DTWA
Here we formulate the DTWA for the SU(3) phase-space variables. Throughout this paper, we refer to this approach as the SU(3)DTWA.
Let us consider the real-time dynamics of a many-body spin-1 system described by a Hamiltonian $\hat H$. The initial density matrix $\hat\rho_0 = \hat\rho(t = 0)$ can be expressed as an expansion in tensor products of local phase-point operators, where $w_{\boldsymbol\alpha} \equiv w_{\alpha_1,\cdots,\alpha_M}$ is a many-body discrete Wigner function defined on the M-body phase space $\Gamma^M \equiv \Gamma_1 \otimes \cdots \otimes \Gamma_M$. Note that M typically represents the total number of sites for lattice systems. Each local operator $\hat A_{\alpha_j}$ acts on the site j. Such an expansion is expected to exist for any state because the set of $\hat A_{\alpha_j}$ forms a local operator basis. An operator $\hat\Omega$ that we are interested in also has such an expansion. Then, the expectation value of $\hat\Omega$ at time t > 0 involves the propagation function $U_W(\boldsymbol\beta, \boldsymbol\alpha; t)$ connecting the two Weyl symbols $\Omega_{\boldsymbol\beta}$ and $w_{\boldsymbol\alpha}$, where $\boldsymbol\alpha = (\alpha_1, \ldots, \alpha_M)$ and $\hat U(t) = e^{-i\hat Ht/\hbar}$ is the unitary time-evolution operator. This propagator contains the complete information of the quantum many-body dynamics governed by $\hat H$. However, the unitary transformation $\hat U(t)\hat A_{\boldsymbol\alpha}\hat U^\dagger(t)$ changes the tensor product into complicated operator strings in the Hilbert space, so that the exact evaluation of $U_W(\boldsymbol\beta, \boldsymbol\alpha; t)$ is generally impossible.
The TWA for quantum dynamics is nothing else but an appropriate semiclassical approximation for the phase-space propagator $U_W(\boldsymbol\beta, \boldsymbol\alpha; t)$ [33]. In the treatment discussed in Ref. [19], one makes a direct-product ansatz for the many-body phase-point operators at time t > 0, in which the time dependence of the phase-space variables $x^{(j)}_\mu$ follows the classical trajectories. The classical Hamiltonian $H_W$ can be derived by replacing the operators $\hat X^{(j)}_\mu$ with the corresponding phase-space variables $x^{(j)}_\mu$. Inserting Eq. (44), we finally arrive at the SU(3)DTWA representation of $\langle\hat\Omega(t)\rangle$. If we put $\hat\Omega = \hat X^{(j)}_\mu \hat X^{(k)}_\nu$ ($j \neq k$) and perform the summation over $\boldsymbol\beta \in \Gamma^M$, we obtain the corresponding formulas. In typical cases, initial density matrices are factorized with respect to the single-body index j; then the discrete Wigner function factorizes as well. As these expressions show, the only difference of the SU(3)DTWA from the standard SU(3)TWA comes from the probability distributions for the phase-space variables. In other words, the classical dynamics in the SU(3)DTWA still take place in the continuous phase space. Compared to the Gaussian approximation, the DTWA method has the numerical advantage that it allows one to sample spin configurations with positive probabilities for typical product states, which would give rise to negative probabilities in the exact continuous representation [19,20]. Examples presented in the literature, such as Ref. [19], demonstrate that the DTWA improves revival properties of the quantum dynamics, which the Gaussian approximation fails to capture. A direct comparison between the two methods will be presented in Sec. V B.
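As a concrete illustration of the pipeline — sample discrete phase points, evolve the continuous classical equations of motion, average — the sketch below treats a single spin under a Hamiltonian linear in the generators, for which the semiclassical evolution is exact and can be checked against exact quantum mechanics. The Gell-Mann basis, the Poisson structure $\dot x_\mu = 2f_{\mu\nu\kappa}(\partial H_W/\partial x_\nu)x_\kappa$, and the initial Mott state are our assumptions standing in for the paper's lost equations.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

def gell_mann():
    L = np.zeros((8, 3, 3), dtype=complex)
    for n, (a, b) in enumerate([(0, 1), (0, 2), (1, 2)]):
        L[2*n][a, b] = L[2*n][b, a] = 1
        L[2*n+1][a, b] = -1j; L[2*n+1][b, a] = 1j
    L[6] = np.diag([1, -1, 0]); L[7] = np.diag([1, 1, -2]) / np.sqrt(3)
    return L

L = gell_mann()
# Structure constants from [L_m, L_n] = 2i f_{mnk} L_k.
comm = np.einsum('mab,nbc->mnac', L, L) - np.einsum('nab,mbc->mnac', L, L)
f = (np.einsum('mnab,kba->mnk', comm, L) / 4j).real

# Discrete initial conditions: the Mott state puts weight 1/3 on the three
# phase points alpha = (1, p); x_mu(alpha) = Tr[A_alpha L_mu].
j, k = np.indices((3, 3))
def A(q, p):
    return np.where((j + k) % 3 == (2*q) % 3, np.exp(2j*np.pi*p*(j-k)/3), 0)
x0 = np.array([[np.trace(A(1, p) @ L[m]).real for m in range(8)]
               for p in range(3)])

h = np.zeros(8); h[0] = 1.0                 # linear classical H_W = x_1

def eom(t, x):                              # dx_mu/dt = 2 f_{mu nu k} h_nu x_k
    return 2 * np.einsum('mnk,n,k->m', f, h, x)

t_end = 1.2
xs = [solve_ivp(eom, (0, t_end), x, rtol=1e-10, atol=1e-12).y[:, -1]
      for x in x0]
dtwa = np.mean(xs, axis=0)                  # equal weights w_alpha = 1/3

rho0 = np.diag([0.0, 1.0, 0.0])
U = expm(-1j * t_end * L[0])                # exact evolution under H = L_1
exact = np.array([np.trace(U @ rho0 @ U.conj().T @ Lm).real for Lm in L])
print(np.allclose(dtwa, exact, atol=1e-6))  # True: exact for a linear H_W
```

For an interacting Hamiltonian, h would depend on x (mean-field feedback) and the trajectory average is no longer exact, which is precisely where the choice of sampling distribution (Gaussian versus discrete) starts to matter.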
We mention that our description, which explicitly uses the phase-point operators and therefore explicitly defines a discrete Wigner function for a density matrix, is distinct from a similar discrete-sampling approach for general SU(N) systems developed in Ref. [25]. The latter approach does not introduce any phase-point operators explicitly but instead utilizes a quantum-tomography-like methodology to define probability distributions for each phase-space variable. This state-of-the-art sampling technique, which is also called the generalized DTWA (GDTWA) [25], has already been applied to actual experimental setups of large-spin systems such as $^{52}$Cr gases [34] and $^{167}$Er gases [35], and its performance has been evaluated against the experimental data. In Sec. V, we compare this sampling scheme to our schemes, specifically for the 2D Bose-Hubbard model with a small size.
To implement the tomography technique for the SU(3)TWA, we decompose each SU(3) matrix $T_\mu$ in its diagonalized basis, i.e., $T_\mu = \sum_s \lambda^{(s)}_\mu |\phi^{(s)}_\mu\rangle\langle\phi^{(s)}_\mu|$, where $|\phi^{(s)}_\mu\rangle$ denote the eigenvectors of $T_\mu$ associated with the eigenvalues $\lambda^{(s)}_\mu$. Note that, generally speaking, the matrices $T_\mu$ cannot be simultaneously diagonalized. We compute the expectation value of $T_\mu$ with a density matrix $\rho$ to obtain $\langle T_\mu\rangle = \sum_s \lambda^{(s)}_\mu p^{(s)}_\mu$. Following Ref. [25], the coefficients $p^{(s)}_\mu = \langle\phi^{(s)}_\mu|\hat\rho|\phi^{(s)}_\mu\rangle$ are regarded as the probabilities for the discrete spins $d_\mu \in \{\lambda^{(s)}_\mu\}_s$. Therefore, in the TWA simulations, $d_1$, $d_2$, $d_6$, and $d_7$ randomly take either 1 or −1 with equal probability, while $d_3 = d_4 = d_5 = 0$ and $d_8 = 2/\sqrt{3}$ for all samples. Note that the fluctuations of each variable are statistically independent of those of the other ones. More detailed discussions of the tomography technique are found in Ref. [25]. In Appendix D, we add a supplemental discussion on the relationship between this tomography method and our DTWA scheme, associated with the reproducibility of a second-order moment for a pure state.
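A minimal implementation of this tomography-style sampling is sketched below, again with the Gell-Mann matrices standing in for the paper's $T_\mu$ (so the specific values of $d_3$, $d_4$, $d_5$, $d_8$ quoted above, which refer to the Davidson-Polkovnikov basis, come out differently here); the mechanics — independent sampling of each $d_\mu$ from the eigenvalue spectrum of $T_\mu$ with Born-rule weights — is the same.

```python
import numpy as np

def gell_mann():
    L = np.zeros((8, 3, 3), dtype=complex)
    for n, (a, b) in enumerate([(0, 1), (0, 2), (1, 2)]):
        L[2*n][a, b] = L[2*n][b, a] = 1
        L[2*n+1][a, b] = -1j; L[2*n+1][b, a] = 1j
    L[6] = np.diag([1, -1, 0]); L[7] = np.diag([1, 1, -2]) / np.sqrt(3)
    return L

T = gell_mann()
rho = np.diag([0.0, 1.0, 0.0])               # Mott state (assumed basis)

# Spectral data and Born-rule probabilities p_mu^(s) = <phi_s|rho|phi_s>.
spect = [np.linalg.eigh(Tm) for Tm in T]
vals = [lam for lam, V in spect]
probs = []
for lam, V in spect:
    p = np.einsum('is,ij,js->s', V.conj(), rho, V).real
    probs.append(np.clip(p, 0, None) / p.sum())

rng = np.random.default_rng(0)
n = 100_000
d = np.column_stack([rng.choice(vals[m], size=n, p=probs[m])
                     for m in range(8)])     # each d_mu sampled independently

exact = np.array([np.trace(rho @ Tm).real for Tm in T])
print(np.allclose(d.mean(axis=0), exact, atol=0.02))   # first moments match
```

By construction only the single-variable (first) moments are guaranteed; cross-correlations between different $d_\mu$ are not imposed, which is the point taken up again in Appendix D.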
D. Fully connected spin-1 model
To compare the SU(3)DTWA with the Gaussian SU(3)TWA, we study the fully connected spin-1 model (4). To be specific, we calculate sudden-quench dynamics of several physical quantities by using the SU(3)TWA with the Gaussian-Wigner function and the SU(3)DTWA, respectively, and compare these semiclassical results with the exact ones.
In Fig. 2, we numerically simulate the time evolution of the fully connected spin-1 model of Eq. (4) after sudden quenches from the x-polarized direct-product state [Figs. 2(a) and 2(b)]. In the lower panels of Fig. 2, we also show the time evolution of $M^{-1}\sum_j \langle(\hat S^z_j)^2\rangle(t)$ starting from the same initial state. It should be noticed that the slight recurrence of the oscillation observed in Fig. 2(d) at late times, after $t \approx 60\hbar/U$, is not captured within the semiclassical approximation, as expected in typical TWA simulations [13,16].
In Fig. 2, we also simulate the same dynamics by using the SU(3)DTWA approach. For all the panels, the SU(3)DTWA results (red dotted lines) reasonably reproduce the same dynamics as those of the Gaussian SU(3)TWA. As explained in Appendix C, for the DTWA results in Fig. 2, we have prepared a statistical mixture of random initial conditions characterized by multiple sets of phase-point operators. A similar technique has been used in Ref. [32]. We emphasize that if we only use the Wootters representation for sampling, it fails to correctly produce the dynamics [see also Fig. 7(a)].
In Fig. 3, we compute the expectation value $M^{-1}\sum_j \langle(\hat S^z_j)^2\rangle(t)$ for another initial state. The Gaussian and discrete (red dotted) SU(3)TWA results reproduce the first and second peaks of the exact expectation value (blue solid) within $t < 30\hbar/U$. As U/J decreases, the timescale during which the exact quantum dynamics is reasonably captured by the semiclassical expressions is shortened. This tendency can be attributed to the nonlinearity of the system, which gives rise to a significant error in the semiclassical approximation to the exact time evolution of the many-body Wigner function. In Fig. 3(b), corresponding to U = 125J, both semiclassical approaches recover only the first peak within $t < 10\hbar/U$; they fail to describe the second peak and, especially, its amplitude.
Beyond the early-time stage, the SU(3)TWA clearly deviates from the exact dynamics. In particular, it is clearly seen in Fig. 3 that the Gaussian SU(3)TWA tends to saturate to a steady value rather than undergo a recurrence of the oscillation, both for U = 250J and 125J. Interestingly, especially in Fig. 3(b), while the SU(3)DTWA also fails to describe the exact dynamics for $t > 10\hbar/U$, it exhibits an oscillatory behavior rather than saturation. However, it should be emphasized that the discrete Monte Carlo sampling does not change the timescale itself over which the quantum dynamics is accurately captured by the semiclassical expressions. This seems reasonable because the classical equations of motion for the continuous and discrete cases are the same.
To close this section, we have demonstrated that the SU(3)DTWA is nearly as accurate as the Gaussian approximation in simulating the quench dynamics. In the next section, we apply these techniques to analyses of the experimental results for 2D Bose-Hubbard systems [15].
V. APPLICATION TO THE 2D BOSE-HUBBARD SYSTEM
In this section, we apply the SU(3)TWA approaches to study far-from-equilibrium dynamics of the Bose-Hubbard model on a square lattice at unit filling. We specifically analyze the dynamics of equal-time single-particle correlation functions after a quench from a Mott-insulating state to a parameter region near the quantum critical point [15]. Theoretical studies on the dynamics of equal-time correlation functions have been reported in Refs. [11, 36-45].
A. Experimental setup
First we briefly summarize the details of the experimental setup in Ref. [15]. Takasu and coworkers measured sudden-quench dynamics of the single-particle correlation functions inside the 2D Mott-insulator phase in the following steps: 1. They prepared a unit-filling Mott insulator of an ultracold $^{174}$Yb gas in an optical square lattice with $s = V_0/E_R = 15$, where $V_0$ and $E_R$ denote the optical-lattice depth and the recoil energy of this system, respectively. The prepared system is well described by the direct-product Fock state for bosons, where $\hat n_j |n_j\rangle = n_j |n_j\rangle$.
2. The lattice depth was abruptly decreased from s = 15 to s = 9. The time to ramp down the lattice depth is approximately 0.1 ms. The lattice depth after the quench implies U/J = 19.6.
3. After the quench, the resulting dynamics was observed by measuring the time-of-flight interference pattern, which can be converted to the equal-time single-particle correlation functions $K_\Delta(t)$ (see the sketch after this list), where $r_j = (x_j, y_j)$ indicates each site on the square lattice in units of the lattice constant $d_{\mathrm{lat}} = 266$ nm. The real-space summation is performed under the conditions $|x_j - x_{j'}| = \Delta_x$ and $|y_j - y_{j'}| = \Delta_y$, and we write $\Delta = (\Delta_x, \Delta_y)$. Recall that M is the total number of lattice points.
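In all simulations below, $K_\Delta(t)$ is evaluated from the equal-time single-particle correlations. Since the printed defining equation was lost, the sketch below assumes the natural reading of the stated conditions — sum $\langle\hat a^\dagger_j \hat a_{j'}\rangle$ over all site pairs with $|x_j - x_{j'}| = \Delta_x$ and $|y_j - y_{j'}| = \Delta_y$, normalized by M; the true prefactor may differ.

```python
import numpy as np

def K_delta(G, shape, delta):
    """Correlation at displacement delta = (dx, dy) from the matrix
    G[j, jp] = <a_j^dag a_jp>; open-boundary distances and the 1/M
    normalization are both assumptions (the printed equation was lost)."""
    Lx, Ly = shape
    dx, dy = delta
    M = Lx * Ly
    total = 0.0
    for jj in range(M):
        xj, yj = jj % Lx, jj // Lx
        for jp in range(M):
            xp, yp = jp % Lx, jp // Lx
            if jj != jp and abs(xj - xp) == dx and abs(yj - yp) == dy:
                total += G[jj, jp].real
    return total / M

# Example: 3x3 lattice with nearest-neighbour coherence g1 only.
Lx = Ly = 3
M = Lx * Ly
g1 = 0.1
G = np.eye(M, dtype=complex)
for jj in range(M):
    for jp in range(M):
        if abs(jj % Lx - jp % Lx) + abs(jj // Lx - jp // Lx) == 1:
            G[jj, jp] = g1
print(K_delta(G, (Lx, Ly), (1, 0)))   # picks up only the (1, 0) bonds
print(K_delta(G, (Lx, Ly), (1, 1)))   # zero in this toy example
```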
In this work, as a simplified setup, we neglect harmonic trap potentials in the numerical simulations. We simply assume that all the atoms participate in a uniform Mott-insulator state before the quench. Effects due to spatial inhomogeneity of the gases will be discussed in Sec. V D.
To close this subsection, we note that the qualitative behavior of the dynamics of the spatial correlation functions measured after quantum quenches can change depending on the initial states. For instance, for coherent states as the initial states, which describe a coherent condensation of bosons in the non-interacting limit, sudden changes of the interaction from zero to weak result in fine temporal oscillations of the density-density equal-time correlation function, reflecting the coherent motion of the Bogoliubov quasiparticles [11]. By contrast, if we choose Mott-insulator states as the initial conditions and propagate them with the Hamiltonian with the same weak interactions (i.e., quenches from infinite to weak interactions), we observe the propagation of a peak signal without fine oscillations in the same correlation function [11]. Its propagation velocity is well explained by the single-particle excitation spectrum of the Hartree-Fock approximation. Reliable TWA results on this kind of initial-state dependence of the quench dynamics can be found in our previous study of the 2D Bose-Hubbard model with a large filling factor [11].
B. Small size case
Before proceeding to our main results corresponding to the experimental setup, let us consider the quench dynamics of a small 2D Bose-Hubbard system, say 9 sites, in order to compare the outputs of the SU(3)TWA approaches with those of the exact numerical calculation. For simplicity, we focus on the sudden-quench limit, in which the ramp-down time is neglected. Figure 4 shows the numerical results for the sudden-quench dynamics of $K_\Delta(t)$ for a small Bose-Hubbard system. The simulation setup has $M = 3^2 = 9$ sites, and we adopt periodic boundary conditions [46]. In Fig. 4, the full quantum dynamics of the Bose-Hubbard system is evaluated by integrating the time-dependent Schrödinger equation for the Hamiltonian (1) (gray dotted line). The maximum occupation of each local site is $n_{\max} = 2$; hence, the three lowest states, i.e., $|0\rangle$, $|1\rangle$, $|2\rangle$, are allowed in this simulation. We observe that the correlation functions at $\Delta = (1, 0)$ and $(1, 1)$ form a first-peak region in the time range $0 < tJ/\hbar < 0.5$. At later times, $tJ/\hbar > 0.5$, the time evolution of the correlations exhibits an almost undamped oscillation, reflecting the small system size.
In Fig. 4, we also simulate the same dynamics by using the SU(3)TWA for the effective-model Hamiltonian (2), with the Gaussian and discrete-Wigner approaches of Monte Carlo sampling. The unit-filling Mott-insulator state is given by Eq. (53), i.e., $|\Psi_{\rm ini}\rangle \approx |\Psi_{\rm Mott}\rangle$. We observe that both the Gaussian SU(3)TWA (red circles) and the SU(3)DTWA (blue triangles) quantitatively capture the first-peak region in the range $0 < tJ/\hbar < 0.5$, especially its initial growth, the time point of the center of the region, and its correlation intensity. However, the later-time dynamics for $tJ/\hbar > 0.5$ cannot be well captured within the SU(3) semiclassical representation. Indeed, the semiclassical results exhibit almost saturated behavior rather than a temporal oscillation with a large amplitude. It should be emphasized that the difference between the two semiclassical results in the later-time dynamics comes from our choice of the initial distribution for the phase-space variables. Interestingly, it is clearly seen that, around $t = 0.5\hbar/J$ in Fig. 4, the SU(3)DTWA gives a slightly better result, i.e., it shows deeper dips of the correlations. For this comparison, the SU(3)DTWA can be seen as a better description than the Gaussian SU(3)TWA. Figure 4 also displays the simulation result based on the tomography technique presented in Sec. IV C. We numerically find that it is closer to the Gaussian result than to the DTWA one. This agreement with the Gaussian simulation indicates that the tomography technique also provides a reasonable sampling scheme for the initial condition. Since there is no considerable deviation from the Gaussian result, in the following discussions we do not use the tomography technique.
It is interesting and helpful to calculate the quench dynamics by using the GPTWA for the strongly interacting Bose-Hubbard system as a reference. In order to carry out an efficient simulation, we have used an approximate Gaussian distribution representing the Fock states [11]. The details of the GPTWA are briefly reviewed in Appendix B. In Fig. 4, the GPTWA simulation (green squares) fails to describe the correlation intensity in the first-peak region, although it reproduces well the very early growth of the correlation function at $\Delta = (1, 0)$ within $tJ/\hbar < 0.2$. Therefore, for the purpose of simulating the strongly interacting dynamics, the SU(3)TWA certainly provides a better description than the GPTWA.
C. Comparison to the experimental results
We calculate the quench dynamics for a larger system corresponding to the experimental setup. Figure 5 shows the correlation function $K_\Delta(t)$ for $M = 20^2 = 400$ with periodic boundary conditions. First, we prepare the system in the unit-filling Mott-insulator state (t < 0) and then abruptly decrease the lattice depth until t = 0. For t > 0, the system evolves in time at U = 19.6J. While the dynamics of the effective pseudospin-1 model (2) is computed in the SU(3)TWA simulations, that of the Bose-Hubbard model (1) with no truncation of the local Hilbert space is computed in the GPTWA. We note that the GPTWA result in Fig. 5 is a reproduction from Ref. [15].
In Fig. 5(a), we observe that all the semiclassical results explain well the growth of the nearest-neighbor correlation at $\Delta = (1, 0)$ in the early-time stage within $t < 0.1\hbar/J$. In addition, they reasonably describe the correlation offset at t = 0. The experimental data show a peak in the time domain $0 < tJ/\hbar < 0.2$. At longer times, the measured correlation gradually saturates to a steady value. In the comparison performed in Fig. 5(a), the experimental result is seemingly closer to the GPTWA than to the SU(3)TWA. In particular, the peak position and the correlation intensity in the time window indicated in Fig. 5(a) are relatively close to those simulated by the GPTWA. This is in contrast to the small-size case in Sec. V B, where the SU(3)TWA is closer to the exact dynamics and can provide a reasonable first-peak region at short times. Notice that the correlation intensity of the experiment is typically smaller than both the SU(3)TWA and GPTWA results.
Next, we focus on longer distances, say $\Delta = (1, 1)$ [Fig. 5(b)] and $\Delta = (2, 0)$ [Fig. 5(c)]. The experimental data are seen to reach a peak after $tJ/\hbar \approx 0.1$, although it is hard to locate the center of the first-peak region in the experimental data because of significant noise. Within the error bars, we expect that a peak region exists somewhere in the range $t < 0.5\hbar/J$. For these long distances, the correlation intensities in the GPTWA are suppressed because it cannot capture the strong quantum fluctuations in this parameter regime. In particular, no clear peak region is observed in the simulation even at short times. Therefore, its agreement with the experiment is worse. By contrast, the SU(3)TWA, which is expected to describe local quantum fluctuations in this regime more accurately, produces a reasonably strong correlation, comparable to the experiment, and clear peak regions in the range $t < 0.5\hbar/J$. Hence, in this case, the SU(3) simulations are closer to the experiment.
Finally, let us mention that, in the experiment, not only the nearest-neighbor correlation but also the longer-distance ones exhibit a finite and non-negligible offset at t = 0. However, according to the SU(3)TWA and the GPTWA, such an offset for longer distances should be almost zero. We will discuss this point in detail in the next section.
D. Discussions
In the direct comparisons for the large system, we observed that the experimental results for the spatial correlation function are closer to the GPTWA at short distances, while the SU(3)TWA looks better at long distances. As learned from the numerical simulations for the small system, the SU(3)TWA should work better than the GPTWA in the strongly interacting parameter regime. Moreover, we also recognized that the nonzero offsets of the correlations at distances beyond nearest neighbors are not consistent with any of the semiclassical results. We argue that these unexpected observations could be attributed to contributions present in the actual experiment that are not precisely taken into account in our SU(3)TWA simulations.
First, we discuss the boson occupations allowed in the SU(3)TWA for the Bose-Hubbard systems. The formalism of the SU(3)TWA for bosons is constructed under the assumption that the local Hilbert space is truncated to three states. If the dimension of the reduced state space is extended beyond three, it will improve the simulated results, more or less, quantitatively. To perform this extension, one needs to enlarge the local phase space accordingly. For instance, if five states are relevant locally, SU(5) matrices should be chosen as the phase-space variables. However, we expect that higher occupations give no significant effect, at least in our current case, in which the strength of the interaction is large enough to suppress them. In order to justify this expectation, in Appendix E we clarify the degree to which occupations greater than 2 affect the quench dynamics of the correlation function in the parameter regime of the experiment, by utilizing an exact numerical calculation for a small system.
Second, we comment on the effects of an inhomogeneous trap potential. In the experimental setup of Ref. [15], the prepared initial state actually contains a strongly correlated superfluid component with incommensurate fillings due to the harmonic trap, although the region of the unit-filling Mott insulator is much larger. Such a contribution is not dominant over the whole gas, but it is not completely negligible. In the Supplemental Material of Ref. [15], an MPS calculation was performed for a 1D trapped Bose gas in the presence of narrow superfluid regions. The numerical result shows that a finite offset appears at the end point of the quench at several distances in addition to the nearest neighbor. This strongly indicates that the presence of superfluid contributions, more or less, affects the time evolution of the correlation function. With our current SU(3)TWA techniques, it is difficult to initialize a system into such inhomogeneous states as prepared in the MPS simulation. In future works we will develop an efficient technique to treat this kind of initialization problem.
VI. CONCLUSIONS AND OUTLOOKS
In conclusion, we have analyzed far-from-equilibrium dynamics of strongly interacting Bose gases in an optical lattice by using the SU(3)TWA on the basis of different Monte Carlo sampling schemes. In the middle of this paper (Sec. IV), the SU(3)DTWA approach has been developed as a sampling scheme, and applied to the fully connected spin-1 model with a large size in order to examine this approach. We demonstrated that the SU(3)DTWA is nearly as accurate as the Gaussian SU(3)TWA in simulating time evolution after sudden quantum quenches.
In the main part of this paper (Sec. V), we have applied the SU(3)TWA to the sudden-quench dynamics of a strongly interacting Bose gas in a 2D optical lattice. The semiclassical methods based on the GPTWA, the SU(3)DTWA, and the Gaussian SU(3)TWA have been compared with exact numerical calculations for the 2D Bose-Hubbard model with a small size. We recognized that the SU(3)DTWA and the Gaussian SU(3)TWA can provide better descriptions than the GPTWA in the strongly interacting regime. The numerical results based on these semiclassical methods have also been compared with the recent experiment at Kyoto University. We found that at short distances the experiment is closer to the GPTWA while, at relatively long distances, it is reasonably close to the SU(3)DTWA and the Gaussian SU(3)TWA. We argued that this observation can be attributed to parts of the actual experimental realization, including an inhomogeneous trap potential, which are not precisely taken into account in our numerical simulations.
Beyond the scope of this work, it would be interesting to develop a cluster TWA approach [47] for the strongly interacting Bose-Hubbard systems. For applications of this strategy in dimensions higher than 1D, a reasonable scheme for reducing the dimensions of the cluster phase-space variables may be required to make simulations realistic and efficient.

Appendix A: Exact numerics for the fully connected spin-1 model

Subject to a fixed M, an arbitrary state of this system is spanned by Fock vectors labeled by two non-negative integers $\nu_1 \geq 0$ and $\nu_2 \geq 0$, where $0 \leq \nu_1 + \nu_2 \leq M$. This basis state is a simultaneous eigenstate of $\hat\Pi_3$ and $\hat\Pi_8$; in particular, $\hat\Pi_3|\nu_1, \nu_2\rangle = (\nu_1 - \nu_2)|\nu_1, \nu_2\rangle$. The rest of the operators, e.g., $\hat\Pi_1$, behave as ladder operators connecting different Fock states. One can evaluate the matrix elements of the Hamiltonian between $|\nu_1, \nu_2\rangle$ and $|\nu_1', \nu_2'\rangle$, i.e., $\langle\nu_1', \nu_2'|\hat H_{\rm fc}|\nu_1, \nu_2\rangle$. The dimension of this basis increases only algebraically with M, so that one can implement the exact numerical analysis on computers even at large M. In Fig. 6, we compute the time evolution of the expectation value $\sum_j\langle(\hat S^z_j)^2\rangle(t) = 2M/3 - \langle\hat\Pi_8(t)\rangle/\sqrt{3}$ for M = 8 using two different approaches: the dashed line is a numerical integration of the time-dependent Schrödinger equation for the Hamiltonian matrix expressed in terms of the collective spins $\hat\Pi_\alpha$, whereas the solid line is for the spin Hamiltonian in terms of $\hat S^\alpha_j$. The initial state for this simulation is the zero-magnetization direct-product state $|\Psi(t = 0)\rangle = \prod_j |S^z_j = 0\rangle$. The perfect agreement between the two results confirms that the collective-spin expression provides a more efficient route to the same result than the straightforward approach.
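The collective-spin bookkeeping described above is easy to set up. The sketch below enumerates the Fock basis and builds the two diagonal operators; the identification of $\nu_1 + \nu_2$ with $\sum_j (S^z_j)^2$ (and hence the $\hat\Pi_8$ eigenvalue) is inferred from the relation quoted in the text, not taken from the lost equations.

```python
import numpy as np

M = 8
basis = [(n1, n2) for n1 in range(M + 1) for n2 in range(M + 1 - n1)]
dim = len(basis)
print(dim, (M + 1) * (M + 2) // 2)   # dimension grows only algebraically in M

Pi3 = np.diag([float(n1 - n2) for n1, n2 in basis])        # quoted eigenvalue
# Inferred from sum_j <(S_j^z)^2> = 2M/3 - <Pi_8>/sqrt(3) with the
# identification sum_j (S_j^z)^2 -> n1 + n2 (number of S_z = +/-1 sites):
Pi8 = np.diag([np.sqrt(3) * (2 * M / 3 - (n1 + n2)) for n1, n2 in basis])

idx = {b: i for i, b in enumerate(basis)}
i0 = idx[(0, 0)]                     # the zero-magnetization product state
print(2 * M / 3 - Pi8[i0, i0] / np.sqrt(3))   # -> 0.0, as it must at t = 0
```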
Appendix B: Gross-Pitaevskii truncated-Wigner approximation for the Bose-Hubbard Hamiltonian
For the TWA in the coherent-state phase space, the classical time evolution of the Bose-Hubbard Hamiltonian is governed by the discrete GP equation associated with the Heisenberg-Weyl group. The classical function $H_W(\alpha, \alpha^*) = (\hat H_{\rm BH})_W$ is the Weyl symbol of $\hat H_{\rm BH}$. If we write $\alpha_{\rm cl}(t)$ for the solution of the GP equation with initial condition $\alpha_{\rm cl}(t = 0) = \alpha_0$, the expectation value of an operator $\hat\Omega$, i.e., $\langle\hat\Omega\rangle(t) = {\rm Tr}[\hat\Omega\hat\rho(t)] = {\rm Tr}[\hat\Omega(t)\hat\rho(t = 0)]$, is reduced to a phase-space integration (for details, see [11-13]). Here $d\alpha\, d\alpha^* = \pi^{-M}\prod_{j=1}^M d{\rm Re}[\alpha_j]\, d{\rm Im}[\alpha_j]$ is the measure of the phase-space integration, and the weight function over the phase space is the Wigner function defined by means of the coherent-state basis. We note that the GPTWA typically provides quantitative descriptions of the real-time dynamics of Bose-Hubbard systems when they have a sufficiently small interaction or a sufficiently large filling factor. In recent years, this type of semiclassical method has been applied to multiple dynamical problems of lattice bosons; see, e.g., Refs. [11, 15, 49-51] for details.
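The following is a minimal runnable sketch of the GPTWA loop for a small chain: sample initial amplitudes from a coherent-state Wigner function, integrate the discrete GP equation, and average Weyl symbols. Everything specific here is an assumption — ℏ = 1, open boundaries, coherent (rather than approximate Fock) initial sampling, and an interaction term without the symmetric-ordering shift — since the paper's displayed equations were lost.

```python
import numpy as np
from scipy.integrate import solve_ivp

M, J, U = 8, 1.0, 0.5
alpha0 = np.ones(M, dtype=complex)            # coherent state, <n_j> = 1

def gpe(t, y):
    """Discrete GP equation i da_j/dt = -J (a_{j-1} + a_{j+1}) + U |a_j|^2 a_j,
    packed as real/imaginary parts for the real-valued ODE solver."""
    a = y[:M] + 1j * y[M:]
    hop = np.zeros(M, dtype=complex)
    hop[:-1] += a[1:]
    hop[1:] += a[:-1]
    dadt = -1j * (-J * hop + U * np.abs(a) ** 2 * a)
    return np.concatenate([dadt.real, dadt.imag])

rng = np.random.default_rng(1)
nsamp, t_end = 2000, 2.0
nbar = np.zeros(M)
for _ in range(nsamp):
    # Wigner sampling of a coherent state: alpha0 plus half-unit complex noise.
    noise = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / 2
    y0 = np.concatenate([(alpha0 + noise).real, (alpha0 + noise).imag])
    sol = solve_ivp(gpe, (0, t_end), y0, rtol=1e-8, atol=1e-10)
    a_fin = sol.y[:M, -1] + 1j * sol.y[M:, -1]
    nbar += np.abs(a_fin) ** 2 - 0.5          # Weyl symbol of n_j: |a|^2 - 1/2
nbar /= nsamp
print(nbar.round(3), round(nbar.sum(), 3))    # total density stays near M = 8
```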
Appendix C: Details of the SU(3)DTWA simulation
First let us present the numerical sampling when the x-polarized state is chosen as our initial state. Generally speaking, the discrete Wigner function representing such a superposed state exhibits negativity. To carry out an efficient numerical simulation, we take the following steps. As the first step, we prepare the spin state polarized down along the z axis at $t = -\pi \equiv t_0$. If we use the Wootters representation for the phase-point operator, the corresponding discrete Wigner function is positive. Therefore, it is easy to sample randomized spins from this distribution.
Then, we shine a global pulse that evolves this state into the desired target state, i.e., the state polarized along the x axis, $|\Psi_0\rangle = \prod_j |S^x_j = 1\rangle$. Such a spin-flip process is designed via a unitary time evolution described by $\hat U_p(t) = e^{-i\hat H_p(t - t_0)/\hbar}$ with a pulse Hamiltonian $\hat H_p$. If the unitary operation of the pulse is applied from $t = t_0$ to $t = 0$, each spin state is locally flipped such that $|S^z_j = -1\rangle \to -|S^x_j = 1\rangle$. The minus sign in the final state has no effect on expectation values. Notice that the time evolution governed by $\hat U_p$ is exactly simulated by the SU(3)TWA because $\hat H_p$ is linear in the phase-space variables. At t = 0, the prepared random values of the spin configurations are expected to obey the Wigner distribution of $|\Psi_0\rangle = \prod_j |S^x_j = 1\rangle$. The result in Fig. 7(a) is calculated by using the Wootters representation $\hat A^{(0)}_\alpha$ and following the above procedure. In what follows, we write $S_0$ for the statistical ensemble of the discretized phase-space variables sampled from the discrete Wigner function for $\hat A^{(0)}_\alpha$. It is clear that the SU(3)DTWA with $S_0$ fails to reproduce the exact dynamics even though the Gaussian approach can do so. This consequence seems to be related to the fact that the realizable configurations in $S_0$ are quite restricted compared with those belonging to the Gaussian distribution.
To resolve this problem, we utilize a prescription in which we define a few other sets of phase-point operators and make a statistical mixture of them, as done in Ref. [32]. As a non-trivial example, we can construct two further sets of phase-point operators instead of $\hat A^{(0)}_\alpha$, with the corresponding discrete phase-space variables.

Appendix D: Relationship between the tomography method and our DTWA scheme

The discrete sampling scheme based on the ensemble $S_1$ ($S_2$) alone, however, fails to reproduce the second-order moment in the phase-space representation. In fact, the phase-space average $\langle\hat X_1^2\rangle \to \langle x_1^2\rangle$ produces 0 (1), as checked via direct computations. Hence, there is an underestimation (overestimation) of the quantum correlation in the classical ensemble generated by the naive phase-point-operator method. Note that if the squared operator $\hat X_1^2$ is first linearized in the SU(3) matrices and only afterwards transformed to the phase-space quantities, the phase-point-operator method accurately reproduces the moment. The statistical mixture $S_1 \cup S_2$ that we have made in the previous appendix adequately averages the fluctuations of the classical variable belonging to each ensemble and, as a consequence, produces the exact value of the moment as the phase-space average [namely, in this case, (0 + 1)/2 = 1/2]. This observation captures an underlying reason for the success of the DTWA simulation for the state $|S_x = 1\rangle$, which is prepared by the unitary evolution of $|S_z = -1\rangle$ (see also Appendix C). We expect that the tomography scheme will also give an adequate sampling for the simulation, but it is not explicitly implemented in this paper. Thorough analyses of the connection between our DTWA scheme and the tomography method will be addressed elsewhere, as they are beyond the central purpose of this work.

Appendix E: Effects of the three-state truncation

To visualize how the three-state truncation works in the parameter regime of the experiment, we numerically integrated the time-dependent Schrödinger equation for the 2D Bose-Hubbard Hamiltonian with $M = 2^2 = 4$ and several values of $n_{\max}$. Recall that $n_{\max}$ denotes the maximum occupation of each site. The initial state of the following simulation is the unit-filling, homogeneous Mott-insulator state.
In Fig. 8, we show exact numerical results for the quench dynamics of the single-particle correlation function $\langle\hat a^\dagger_j \hat a_{j'}\rangle$. The interaction during the time evolution is set to U/J = 20, which is close to the actual value of the experiment, i.e., U/J = 19.6. The results for $n_{\max} = 3$ (green solid line) and $n_{\max} = 4$ (red dotted line) agree with each other, indicating that 4-particle occupations are completely suppressed at least until $t = 50\hbar/U = 2.5\hbar/J$. Although the result for $n_{\max} = 2$ (blue dashed line), which corresponds to the assumption of the SU(3)TWA, fails to perfectly reproduce the result for $n_{\max} = 3$, it captures very well the short-time evolution of the peak region of the correlations within $t \leq \hbar/J$. Indeed, the peak region at early times agrees well with the one for $n_{\max} = 3$, and the intensity of the correlation is close to the exact one. As the system evolves in time, the deviation between the results for $n_{\max} = 2$ and 3 gradually becomes significant. | 12,446 | 2020-08-22T00:00:00.000 | [
"Physics"
] |
Projecting onto any two-photon polarization state using linear optics
Projectors are a simple but powerful tool for manipulating and probing quantum systems. For instance, projecting two-qubit systems onto maximally entangled states can enable quantum teleportation. While such projectors have been extensively studied, partially-entangling measurements have been largely overlooked, especially experimentally, despite their important role in quantum foundations and quantum information. Here, we propose a way to project two polarized photons onto any state with a single experimental setup. Our scheme does not require optical non-linearities or additional photons. Instead, the entangling operation is provided by Hong-Ou-Mandel interference and post-selection. The efficiency of the scheme is between 50% and 100%, depending on the projector. We perform an experimental demonstration and reconstruct the operator describing our measurement using detector tomography. Finally, we flip the usual role of measurement and state in Hardy's test by performing a partially-entangling projector on separable states. The results verify the entangling nature of our measurement with six standard deviations of confidence.
I. INTRODUCTION
In quantum physics, measurements are used for both controlling and probing quantum systems. The simplest measurement has two possible outcomes, 1 or 0, and is described by an operator P having a single eigenstate $|\psi\rangle$ with a non-zero eigenvalue, i.e. a projector $P = |\psi\rangle\langle\psi|$. Despite their simplicity, projectors are the archetypal measurement in many quantum information processing tasks such as secure key distribution [1], state estimation [2], and testing Bell's inequalities [3,4]. Usually, it is experimentally easy to project a single qubit onto any state. In the case of a photon's polarization, a combination of a quarter-wave plate and a polarizer can achieve any projector. However, quantum information processing aims to leverage the resources that emerge in multi-photon systems, especially entanglement. Projecting multi-photon systems onto maximally entangled states can enable optical quantum computing and communication protocols, including quantum logic gates [5][6][7][8] and quantum teleportation [9].
Much less studied are partially-entangling projectors. Some work has shown that these outperform ideal Bell measurements in realistic models of quantum teleportation protocols [10][11][12][13]. In quantum metrology, a seminal paper showed that partially-entangling measurements can optimally extract information from a finite number of copies of a system [14]. Such a collective measurement has been realized recently and provided a metrological advantage in state estimation [15]. Moreover, partially entangled states are central to Hardy's test, which demonstrates the incompatibility of quantum mechanics with local realism in an easily comprehensible manner and without the need for an inequality [16][17][18].
FIG. 1. Schematic sketch of the scheme. The input system is two polarized photons in spatial modes a and b. The projector works probabilistically by post-selecting on cases when each photon exits the 50:50 beam splitter (BS) into separate modes. Any projector can be measured by choosing the appropriate unitaries ($U_a$ and $U_b$) and transmission amplitudes ($t_H$, $t_V$) in each variable partially-polarizing BS (VPPBS).
All these applications motivate the need for a single measurement device capable of projecting a two-qubit system onto any desired state. In principle, this could be achieved using a CNOT gate combined with local operations on each qubit [19]. Although the CNOT gate has been realized experimentally with two-photon polarization states [6], this approach is neither the simplest nor the most efficient. Other proposed schemes require complications such as ancilla photons [20,21]. Here, we propose and experimentally demonstrate a straightforward scheme for measuring the projector $P = |\psi\rangle\langle\psi|$, where $|\psi\rangle = c_1|H_aH_b\rangle + c_2|H_aV_b\rangle + c_3|V_aH_b\rangle + c_4|V_aV_b\rangle$ (1) is a general two-photon polarization state (a and b label the two spatial modes, H is horizontally polarized, and V is vertically polarized).
II. THEORY
The scheme is shown schematically in Fig. 1. It consists only of linear optical elements, such as wave plates and beam splitters, and does not require ancillas. In general, the state $|\psi\rangle$ that we wish to project onto may be entangled, and thus our scheme needs an entangling operation. Because photon-photon interactions are weak, a deterministic entangling operation would require optical non-linearities or a complicated combination of ancillas and linear optical elements [22]. Taking inspiration from previous demonstrations of probabilistic quantum logic gates [5][6][7][8], our entangling operation is provided by Hong-Ou-Mandel interference at a beam splitter (BS) along with post-selection. This is well studied in the context of Bell measurements, where post-selection on two anti-bunched photons after a BS is used to project onto the maximally entangled anti-symmetric state [9,23]. By adding local unitaries (i.e. wave plates) before and after the BS, one can project onto any maximally entangled state. However, in order to project onto partially entangled states, local unitaries do not suffice since they cannot decrease the entanglement of the projected state. Instead, we induce controllable polarization-dependent loss in one of the modes before and after the BS, which imbalances the Hong-Ou-Mandel interference effect. This loss is achieved by a variable partially-polarizing BS, which we now describe in detail.
A polarizing BS completely separates horizontal (H) and vertical (V ) light into separate spatial modes. A more general operation can be achieved by allowing the splitting ratios for H and V to be independent and tunable. Previous works used such partially-polarizing BSs in probabilistic quantum logic gates [5][6][7][8]. However, in those experiments, the BSs had fixed splitting ratios.
Here we consider a device in which the splitting ratios can be tuned, i.e. a variable partially-polarizing beam splitter (VPPBS) [24], so that any projector can be implemented in a single setup. A VPPBS acting on mode a is described by a transformation in which $a^\dagger_j|0\rangle = |j\rangle_a$ is a creation operator, r is the reflected mode, and $t_j \in [0, 1]$ with j = H, V are independently tunable real transmission amplitudes for H and V polarized light, respectively. By ignoring the reflected mode of the VPPBS, we can induce polarization-dependent loss in mode a. In the two-photon basis, the transformation W describing a VPPBS in mode a and the identity operator in mode b is diagonal (Eq. 3). We assumed that both $t_H$ and $t_V$ are real, but we note that there could be a non-zero relative phase δ between the two in a physical realization of the VPPBS. After the VPPBS, both photons impinge onto different ports of a non-polarizing 50:50 BS. The photons are assumed to have the same spatial distribution and arrive at the BS at the same time. As such, if the two photons are in a symmetric polarization state (i.e. their combined state is unchanged when the modes of both photons are swapped), they always leave the BS from the same port due to Hong-Ou-Mandel interference [23]. Hence, by post-selecting on cases where the photons exit the BS from different ports, we project onto the anti-symmetric state $|s\rangle = (|H_aV_b\rangle - |V_aH_b\rangle)/\sqrt{2}$, where the matrix is written in the same basis as Eq. 3. Finally, a second VPPBS, with the same transmission amplitudes as the first one, is placed in mode a after the 50:50 BS. The result of the entire sequence can be found by multiplying the three transformations in the correct order (Eq. 5), where γ is the VPPBS splitting ratio and η is an efficiency factor that we discuss later. We derive the same result using second quantization in Appendix A. The quantity |γ| sets the degree of entanglement (i.e. concurrence) C of $|\psi'\rangle$ since $C(\psi') = \sqrt{1 - \gamma^2}$ [25]. When γ = 0 (|γ| = 1), the state is maximally entangled (separable).
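The algebra of this sequence is compact enough to verify numerically. In the sketch below, the two-photon basis ordering {|H_aH_b⟩, |H_aV_b⟩, |V_aH_b⟩, |V_aV_b⟩}, the diagonal form of W, and the definition $\gamma = (t_H^2 - t_V^2)/(t_H^2 + t_V^2)$ are our reconstruction of the lost displayed equations (including Eqs. 3 and 5), chosen to be consistent with the quoted endpoint behaviour and with the value pair γ = 0.645, C = 0.764 used later in the paper.

```python
import numpy as np

tH, tV = 1.0, 0.4

# Basis order {|HH>, |HV>, |VH>, |VV>}; the VPPBS loss acts on mode a only.
W = np.diag([tH, tH, tV, tV])
s = np.array([0, 1, -1, 0]) / np.sqrt(2)      # anti-symmetric state |s>
P_s = np.outer(s, s)                          # Hong-Ou-Mandel post-selection
Mop = W @ P_s @ W                             # VPPBS -> BS selection -> VPPBS

eta = (tH**2 + tV**2) / 2                     # efficiency factor
psi = np.array([0, tH, -tV, 0]) / np.sqrt(tH**2 + tV**2)
print(np.allclose(Mop, eta * np.outer(psi, psi)))   # M = eta |psi'><psi'|

# Concurrence of a pure two-qubit state: C = 2 |c_HH c_VV - c_HV c_VH|.
C = 2 * abs(psi[0] * psi[3] - psi[1] * psi[2])
gamma = (tH**2 - tV**2) / (tH**2 + tV**2)     # assumed splitting-ratio definition
print(np.isclose(C, np.sqrt(1 - gamma**2)))   # True: gamma tunes entanglement
print(round(C, 3), round(eta, 3))             # e.g. C ~ 0.69 at eta = 0.58
```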
Any state $|\psi\rangle$ with a degree of entanglement set by |γ| will have the same coefficients as $|\psi'\rangle$ when written in its Schmidt basis. One can apply local unitaries $U_a$ in mode a and $U_b$ in mode b to transform $|\psi'\rangle$ from its Schmidt basis to the $\{|H\rangle, |V\rangle\}$ basis, that is: $|\psi\rangle = U_aU_b|\psi'\rangle$. Thus, the remaining step to achieve the most general projector $|\psi\rangle\langle\psi|$ (see Eq. 1) is to apply the transformations $U_aU_b$ before the first VPPBS and $U^\dagger_aU^\dagger_b$ after the second VPPBS. These can be accomplished with a quarter-wave plate and a half-wave plate in each mode, as well as a birefringent element to control the phase δ between $t_H$ and $t_V$ (see Appendix B) [26]. In that case, our scheme involves four wave-plate angles (the angles for $U^\dagger_a$ and $U^\dagger_b$ are fixed by those used for $U_a$ and $U_b$, respectively), the phase δ, and the VPPBS splitting ratio γ. All together, these comprise six degrees of freedom, which is the same number as in a pure two-photon polarization state.
A successful projection is heralded by the presence of a photon in both modes a and b. For an input state $|\phi_{\rm in}\rangle$, this occurs with probability $\eta\,|\langle\phi_{\rm in}|\psi\rangle|^2$. Since $|\langle\phi_{\rm in}|\psi\rangle|^2$ is the ideal probability of a successful projection, we can treat $\eta = (t_H^2 + t_V^2)/2$ as the efficiency of our scheme. The total VPPBS transmission $t_H^2 + t_V^2$ should be maximized for a given $\gamma \in [-1, 1]$ to avoid unnecessary loss. If γ ≥ 0, this can be achieved by setting $t_H^2 = 1$, and analogously for γ < 0 by setting $t_V^2 = 1$. In both cases, note that the efficiency is independent of the input state but depends on the degree of entanglement C of the projector being performed. This is because post-selecting on anti-bunching after the 50:50 BS is an efficient way to project onto maximally entangled states (C = 1, η = 1) but not onto separable states (C = 0, η = 1/2). While the latter can be achieved with unit efficiency using a simpler setup consisting of wave plates and polarizers, we stress that our scheme is far more general. The measurement device presented thus far is non-destructive since the two photons are left in the state $|\psi\rangle$ whenever they exit the device from separate modes. More generally, quantum measurements do not necessarily leave the measured system in an eigenstate of the measurement operator. For example, a VPPBS in mode a followed by a 50:50 BS and post-selection on anti-bunching implements the transformation $\hat M_{\psi'} = |s\rangle\langle s| \times {\rm VPPBS}(t_H, t_V) = \eta^{1/2}|s\rangle\langle\psi'|$. Applied to some state $|\phi_{\rm in}\rangle$, $\hat M_{\psi'}$ projects the system onto the state $|s\rangle$ with probability $\eta\,|\langle\phi_{\rm in}|\psi'\rangle|^2$. The measurement enacted by the transformation $\hat M_{\psi'}$ can be described using the positive operator-valued measure (POVM) formalism, in which case the measurement operator is the POVM element $\Pi = \hat M^\dagger_{\psi'}\hat M_{\psi'} = \eta\,|\psi'\rangle\langle\psi'|$ [27]. This operator looks the same as the projector in Eq. 5, with the distinction that the measurement device leaves the system in the state $|s\rangle$ rather than $|\psi'\rangle$. As before, this measurement can be generalized to an arbitrary state $|\psi\rangle$ by adding the appropriate local unitaries $U_aU_b$ before the VPPBS, i.e. $\hat M_\psi = \eta^{1/2}|s\rangle\langle\psi'|U_aU_b$, where $|\psi\rangle = U_aU_b|\psi'\rangle$. Thus, the projector $P = |\psi\rangle\langle\psi|$ can be achieved with a simpler experimental setup if it is not a requirement that the photons be in the state $|\psi\rangle$ after the measurement.
There are many scenarios in which the post-measurement state of the system is not of importance. Perhaps the most obvious one is when two polarization-insensitive detectors are placed after the 50:50 BS, one in mode a and the other in mode b. Then, the coincidence rate of the two detectors is proportional to the expectation value $\langle\phi_{\rm in}|\Pi|\phi_{\rm in}\rangle$. By varying $|\phi_{\rm in}\rangle$ and keeping Π fixed, the measurement operator Π can be reconstructed using a technique known as detector tomography [28,29]. We demonstrate this idea experimentally in the next section.
III. EXPERIMENT
The experimental setup is shown in Fig. 2a. A 404-nm-wavelength diode laser pumps a type-II β-barium borate crystal with 40 mW of power. Through spontaneous parametric down-conversion, pairs of 808-nm-wavelength photons with orthogonal polarizations are generated collinearly with the pump laser. The latter is then blocked by a long-pass filter. The photon pair splits at a PBS into modes a and b. A pair of quarter-wave and half-wave plates (QWP and HWP) in each mode is used to define the input state $|\phi_{\rm in}\rangle$ (henceforth, $U_a = U_b = 1$).
Photons in path a are sent into a VPPBS which we describe below. The exit of the VPPBS and path b are then coupled into a single-mode-fiber non-polarizing 50:50 BS. A delay stage ensures that paths a and b have equal length such that the two photons can interfere. We precompensate for any polarization transformations in the fiber BS using an additional QWP and HWP pair in mode b. Finally, we measure the coincidence rate at the exit of the fiber BS using single-photon avalanche photodiodes (Excelitas SPCM-AQRH-24-FC).
The VPPBS (see Fig. 2b) consists of a displaced Sagnac interferometer, which benefits from passive phase stability. The probability amplitude for the photon to exit into mode a depends on the relative phase between the two paths in the interferometer. We adjust this relative phase for the H and V polarizations independently by introducing two phase shifters in the interferometer, one for each polarization. These phase shifters are birefringent liquid-crystal cells with their optical axes aligned with either H or V. Depending on the AC voltage applied to these liquid-crystal cells, we can directly control the transmission probabilities $T_H$ and $T_V$, as shown in Fig. 2c. We observed a voltage-dependent phase δ between $t_H$ and $t_V$, i.e. $t_H = \sqrt{T_H}$ and $t_V = \sqrt{T_V}\,e^{i\delta}$. Due to the limited interference visibility in the Sagnac interferometer (∼93%), we can vary $T_H \in [0.03, 0.95]$ and $T_V \in [0.02, 0.84]$. This limits the range of projectors we can achieve experimentally, but does not affect their quality.
A. Detector tomography
Detector tomography [28,29] is the ideal tool to verify that our experimental setup performs the desired measurement. We treat our setup as an unknown measurement device and probe it by determining $\langle\phi_{\rm in}|\Pi|\phi_{\rm in}\rangle$ for sixteen different input states, where $|D\rangle = (|H\rangle + |V\rangle)/\sqrt{2}$ and $|R\rangle = (|H\rangle + i|V\rangle)/\sqrt{2}$. The resulting counts are processed by a maximum-likelihood algorithm to reconstruct the closest matching positive and Hermitian operator $\Pi_{\rm exp}$.
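To make the tomography step concrete, the sketch below reconstructs a simulated Π by plain linear inversion from the sixteen probe states built from {|H⟩, |V⟩, |D⟩, |R⟩} in each mode, and evaluates the fidelity $F = {\rm Tr}[(\sqrt{\Pi_{\rm exp}}\,\Pi_{\rm th}\,\sqrt{\Pi_{\rm exp}})^{1/2}]$; the paper's maximum-likelihood fit differs in that it additionally enforces positivity, and the noise model here is an arbitrary stand-in.

```python
import numpy as np
from scipy.linalg import sqrtm

H = np.array([1.0, 0.0]); V = np.array([0.0, 1.0])
D = (H + V) / np.sqrt(2); R = (H + 1j * V) / np.sqrt(2)
probes = [np.kron(p, q) for p in (H, V, D, R) for q in (H, V, D, R)]

# Ground truth: the partially entangling POVM element from the main text.
tH, tV = 1.0, 0.4
psi = np.array([0, tH, -tV, 0]) / np.sqrt(tH**2 + tV**2)
Pi_true = (tH**2 + tV**2) / 2 * np.outer(psi, psi.conj())

rng = np.random.default_rng(7)
n = np.array([(p.conj() @ Pi_true @ p).real for p in probes])
n += 0.005 * rng.standard_normal(len(n))      # mock measurement noise

# Linear system n_i = Tr[|phi_i><phi_i| Pi]; since |phi><phi| is Hermitian,
# its conjugate equals its transpose, so each row below is E_i^T flattened.
Amat = np.array([np.outer(p, p.conj()).conj().ravel() for p in probes])
Pi_fit = np.linalg.lstsq(Amat, n.astype(complex), rcond=None)[0].reshape(4, 4)
Pi_fit = (Pi_fit + Pi_fit.conj().T) / 2       # hermitize

A = Pi_fit / np.trace(Pi_fit).real            # compare normalized operators
B = Pi_true / np.trace(Pi_true).real
rA = sqrtm(A)
F = np.trace(sqrtm(rA @ B @ rA)).real
print(round(F, 4))                            # close to 1 for small noise
```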
To demonstrate that various projectors can be achieved, we scan the VPPBS splitting ratio γ by varying the voltage applied to the liquid-crystal cell controlling $T_H$ while fixing $T_V = 0.458$. For each voltage step, we perform detector tomography and expect to reconstruct the operator $\Pi = \eta\,|\psi'\rangle\langle\psi'|$. However, both the efficiency η and the projector $|\psi'\rangle\langle\psi'|$ depend on γ and thus on the voltage. To distinguish between the two varying quantities, we normalize η out of Π, i.e. $\Pi_{\rm th} = |\psi'\rangle\langle\psi'|$.

FIG. 2. Experimental details. The experimental setup is shown in (a). The variable partially-polarizing beam splitter (VPPBS) is realized using the displaced Sagnac interferometer shown in (b). The phase between both paths in the interferometer is adjusted for both H ($\varphi_H$) and V ($\varphi_V$) polarizations independently using birefringent liquid crystals. In (c), we plot the VPPBS transmission probabilities $T_H$ (black line) and $T_V$ (grey line), measured when the input photon is H and V polarized, respectively, as a function of the voltage applied to the liquid crystal. As is common for such devices, the relation between voltage and the retardances $\varphi_H$ and $\varphi_V$ is not linear. BBO: β-barium borate, LP: long pass, (P)BS: (polarizing) beam splitter, (Q/H)WP: (quarter/half) wave plate, APD: avalanche photodiode.

Various matrix elements of the reconstructed $\Pi_{\rm exp}$ are shown in Fig. 3. Since the phase δ also varies with voltage, for clarity we plot $-|\langle H_aV_b|\Pi_{\rm exp}|V_aH_b\rangle|$ so that the magnitude of this element can be compared to theory when δ = 0. The elements not plotted are nearly zero (≲ 0.035), as expected. The bold lines are the expected values for the matrix elements of $\Pi_{\rm th}$, calculated using the measured transmission $T_H$ (shown in Fig. 2c) and fixing $T_V = 0.458$. We compute the fidelity $F = {\rm Tr}[(\sqrt{\Pi_{\rm exp}}\,\Pi_{\rm th}\,\sqrt{\Pi_{\rm exp}})^{1/2}]$ for each voltage step. Overall, we find an average of F = 0.95 with standard deviation 0.02, which suggests good agreement between experiment and theory. As can be seen in Fig. 3, our scheme enables us to control the matrix elements of $\Pi_{\rm exp}$. In particular, we can control the degree of entanglement of the projected state. To quantify this, we compute the concurrence C of the reconstructed matrices $\Pi_{\rm exp}$ and find that we can vary $C \in [0.13, 0.85]$. The lower bound of C is limited by the range over which we can vary the transmission probabilities $T_H$ and $T_V$, which in turn is limited by the interference visibility in our Sagnac interferometer (∼93%). This could be improved by using a different approach to implement the VPPBS, as discussed in Ref. [24]. The upper bound of C is limited by the visibility of the quantum interference at the 50:50 BS (∼90%). In the next section, we describe the use of our setup to perform a partially-entangling projector that is of foundational importance for quantum mechanics.
B. Hardy's test
The violation of Bell's inequalities is the most convincing evidence that quantum mechanics cannot be described by a local realist hidden-variable theory. Unfortunately, the derivation of the inequality is rather complicated and requires a number of involved logical steps before arriving at the final result [3,4]. A more straightforward manifestation of the incompatibility of quantum mechanics with local realism is Hardy's test [16,30]. The arguments, which we outline below, are based on intuitive notions of joint probabilities that can be understood by the layman [31]. Both Bell's and Hardy's tests require performing joint measurements on two entangled and space-like separated particles. Here we implement a "reversed" Hardy test by performing a partially-entangling projector on particles in separable states. That is, we flip the usual roles of measurement and state in Bell's and Hardy's tests, since entanglement is used as a resource in the measurement rather than in the state preparation. Our measurement cannot, even in principle, be space-like separated. As such, although we measure the same joint probabilities as in Hardy's test, we cannot exclude the existence of local realism. Instead, a "reversed" test such as ours can certify the entangling nature of the measurement, which is necessary for protocols such as entanglement swapping [32] or measurement-device-independent quantum key distribution [33].

TABLE I. Input state; number of coincidences in 420 s.

To her surprise, Alice in fact measures $P(\beta, -\beta^\perp) > 0$. By choosing γ = 0.645 (C = 0.764), $P(\beta, -\beta^\perp)$ is maximized while the other three joint probabilities still vanish, meaning a partially-entangling projector is optimal for Hardy's test. In Fig. 4, we plot the coincidence rate as a function of the delay between the photons in modes a and b. When the delay is zero, we implement the projector $\eta\,|\psi\rangle\langle\psi|$ (we set δ = 0 by tilting a wave plate about its axis). Due to experimental errors, we do not observe any vanishing coincidence rates (see Table I). There are two ways to deal with this. The first is to consider an inequality in which N is the coincidence rate of each measurement [34]. We find the left-hand side to be 420 ± 60 and thus satisfy the inequality to within seven standard deviations. However, this inequality is susceptible to systematic errors since $N(\beta, -\beta^\perp)$ grows faster than the sum of the three other terms as γ decreases while the input states are fixed. A second, more convincing approach is to ask: given the measurement statistics in Table I, what can Alice infer about $N(\alpha, -\alpha^\perp)$ had she not measured that quantity [18]? She finds that $N(\beta, -\alpha^\perp)/(N(\beta, -\alpha) + N(\beta, -\alpha^\perp)) = 0.822 \pm 0.03$, instead of the ideal probability of one, as discussed earlier.
IV. CONCLUSIONS
In summary, we proposed a straightforward way of projecting two polarized photons onto any state. Our scheme has an efficiency of at least 50%, which far exceeds that of any scheme based on a probabilistic CNOT gate (11%) [6]. We performed an experimental demonstration and reconstructed the operator describing our measurement using detector tomography. Finally, we flipped the usual roles of measurement and state in Hardy's test and verified the entangling nature of our measurement.
We anticipate that our scheme will find applications in quantum metrology and quantum information. In single-parameter estimation problems, entangling measurements cannot in general extract more information about the parameter than separable measurements [35]. However, in multi-parameter problems such as state estimation [14] or certain phase-estimation scenarios [36], our scheme could be used to implement the partially- or maximally-entangling projectors that optimize the amount of information extracted. Although limited to two qubits, such a measurement would be of foundational importance in quantum information as it saturates a fundamental limit on the amount of information that can be extracted from quantum systems. Finally, our scheme can also be used to prepare any two-photon polarization state, uniquely without modifying the photon source, unlike in Ref. [37]. | 5,056.8 | 2018-05-09T00:00:00.000 | [
"Physics"
] |
Paeonol Ameliorates Inflammation and Cartilage of Chondrocytes in Osteoarthritis by Upregulating SIRT1
Objective: To explore the possible role of paeonol in chondrocyte inflammation and cartilage protection in osteoarthritis (OA). Methods: Primary chondrocytes were isolated from rat stifle joints and were identified through toluidine blue staining and immunofluorescence staining of type II collagen. The chondrocytes were transfected with sh-SIRT1 or/and treated with paeonol (0, 20, 50, 100, 200, 1000 mg/L) before OA modeling induced by IL-1β. ELISA determined the expressions of TNF-α, IL-17 and IL-6, and the apoptotic rate was examined by flow cytometry. qRT-PCR and Western blot quantified the expressions of MMP-1, MMP-3, MMP-13, TIMP-1, cleaved caspase-3, Bax, Bcl-2, and the proteins related to the NF-κB pathway. Results: Increases in TNF-α, IL-17, IL-6, MMP-1, MMP-3 and MMP-13 and a decrease in TIMP-1 were found in IL-1β-stimulated chondrocytes. The apoptotic rate as well as the expressions of cleaved caspase-3 and Bax were up-regulated, and Bcl-2 expression was suppressed in response to IL-1β treatment. The NF-κB pathway was activated in IL-1β-stimulated chondrocytes. Paeonol enhanced SIRT1 expression to inactivate the NF-κB pathway, thus ameliorating the secretion of inflammatory cytokines, extracellular matrix degradation and chondrocyte apoptosis. Conclusion: Paeonol inhibits IL-1β-induced inflammation and extracellular matrix degradation in chondrocytes through up-regulating SIRT1 and suppressing the NF-κB pathway.
Introduction
Osteoarthritis (OA), a chronic disease which impacts the lives of millions of people worldwide, primarily induces disability, joint stiffness, and pain [1]. The pathogenesis of OA, featured by extracellular matrix (ECM) degradation and cell stress, is primarily induced by micro- and macro-injuries that activate maladaptive repair responses, such as pro-inflammatory pathways of innate immunity [2]. Chondrocytes synthesize and secrete components of the ECM infrastructure as well as various enzymes responsible for matrix degradation, aggrecan degradation and hydrolysis, which consequently facilitates the degradation and elimination of denatured and dysfunctional ECM proteins, thus altering ECM structure [3]. IL-1β is generally applied for OA modeling because it can elicit matrix metalloproteinases (MMPs) and nitric oxide (NO) in chondrocytes, among which MMP-3 and MMP-13 act as the two most important collagenases involved in the degradation of the cartilage matrix in OA [4]. Tissue inhibitors of metalloproteinases (TIMPs), including TIMP-1 and TIMP-2, are generated to neutralize the physiological activities of MMPs [5]. Existing therapies for OA include interventions that relieve the symptoms only, and the only definitive treatment for OA is joint arthroplasty, which is expensive and requires revision in 10 to 15 years [6]. Inhibition of chondrocyte inflammation and ECM degradation could therefore be a promising strategy to blunt OA progression.
Paeonol is the main active compound in Paeonia lactiflora Pallas, Cynanchum paniculatum, and Paeonia suffruticosa Andr [7]. Its protective role against oxidative stress, inflammation and hepatocyte apoptosis has been documented in LPS/d-GalN-induced acute liver failure (ALF) in mice, with both the NF-κB and MAPK signaling pathways involved [8]. More importantly, paeonol has been demonstrated to inhibit the release of TNF-α and IL-6 in chondrocytes after IL-1β treatment [7], whereas the precise mechanism has yet to be established. SIRT1 has been reported to be involved in the development of OA [9][10][11]. In addition, SIRT1 expression is decreased in degenerated cartilage, and SIRT1 inhibition in chondrocytes elicits hypertrophy and cartilage matrix degradation [12]. More importantly, paeonol upregulated the expression and nuclear accumulation of SIRT1 in high-glucose-induced glomerular mesangial cells (GMCs) [13]. So far, it remains to be seen how paeonol regulates OA development and whether it acts in OA through mediating SIRT1.
In the present study, we examined the effects of paeonol on inflammation and ECM degradation in IL-1β-stimulated chondrocytes from rat joints. Moreover, we analyzed whether the precise mechanism underlying the regulation of OA by paeonol is associated with SIRT1 and the NF-κB signaling pathway.
Materials And Methods
Isolation and culture of chondrocytes

Specific pathogen-free (SPF) Wistar rats (n = 5), weighing 180-200 g, were obtained from Hunan SJA Laboratory Animal Co., Ltd (Changsha, China). The rats were anesthetized with 50 mg/kg of pentobarbital sodium and then killed by cervical dislocation. The rats were immersed in 75% ethanol and their bilateral stifle joints were collected under sterile conditions. The articular cartilage was isolated and cut into pieces (1 mm × 1 mm × 1 mm), followed by washing with PBS. The pieces were then centrifuged at 800 r/min for 5 min prior to digestion with 0.25% pancreatin at 37℃ for 1 h. After the supernatant was removed, the cartilage was digested with 0.04% type II collagenase containing 5% FBS at 37℃ overnight in a water bath. The cartilage was then filtered through a 200-mesh screen before being centrifuged at 800 r/min for 5 min. After the supernatant was discarded, the cells were washed and then incubated in DMEM/F12 (Gibco, Grand Island, NY, USA) containing 10% FBS and 1% double antibody at 37℃ under 5% CO2. All experiments were in accordance with the guidance for the care and use of laboratory animals.
Identification of chondrocytes
When the cells covered over 80% of the slides, they were washed twice with PBS and fixed with 4% paraformaldehyde for 20 min at room temperature. Then toluidine blue staining and immunofluorescence staining of type II collagen were performed for identification of chondrocytes. Toluidine blue staining: the cells were stained with 1% toluidine blue for 30 min, dehydrated through a graded alcohol series and sealed with neutral balsam before observation under a microscope. Immunofluorescence staining: the cells were successively incubated with 3% H2O2 for 10 min and with goat serum at room temperature for 15 min. COL2A1 antibody (sc-52658, 1:100, Santa Cruz, CA, USA) was added for incubation at 4℃ overnight, and then FITC-labeled secondary antibody was incubated with the chondrocytes for 1 h. Excess secondary antibody was removed by PBS washes, after which fluorescence quencher was added.
The chondrocytes were visualized and photographed under a fluorescence microscope.
Treatment of chondrocytes
Paeonol powder (Sigma-Aldrich, Merck KGaA, Darmstadt, Germany) was dissolved in DMSO to prepare a 1.0 g/L stock solution. The stock solution was maintained at room temperature and diluted to the appropriate concentration before each experiment. Paeonol (0, 20, 50, 100, 200 and 1,000 mg/L) was used to treat the chondrocytes for 24 h before cytotoxicity was measured by MTT. Then PBS-dissolved IL-1β (10 ng/ml, Sigma-Aldrich, Merck KGaA, Darmstadt, Germany) was applied to induce the OA model in chondrocytes for 24 h.
MTT
The chondrocytes were seeded in 96-well plates (3,000 cells/well), and paeonol (0, 20, 50, 100, 200 and 1,000 mg/L) was added to the wells. Each group had three replicate wells. The chondrocytes were cultured in an incubator at 37℃ under 5% CO2 for 24 h, followed by incubation with MTT (5 mg/ml) for 4 h; 10 µl of DMSO was then added to dissolve the formazan product, and the OD value was read at 570 nm.
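A minimal sketch of the viability computation implied by this assay, assuming hypothetical OD570 readings (the actual values appear in Fig. 2A); viability is expressed as a percentage of the untreated control.

```python
import numpy as np

# Hypothetical OD570 readings (triplicate wells) for each paeonol dose.
od570 = {
    0:    [0.82, 0.85, 0.80],
    20:   [0.81, 0.84, 0.83],
    50:   [0.80, 0.82, 0.79],
    100:  [0.79, 0.83, 0.81],
    200:  [0.78, 0.80, 0.82],
    1000: [0.45, 0.48, 0.43],
}

control_mean = np.mean(od570[0])
for dose, values in od570.items():
    viability = 100.0 * np.mean(values) / control_mean  # % of untreated control
    print(f"paeonol {dose:>4} mg/L: viability = {viability:.1f}%")
```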
Flow cytometry
After the corresponding treatments, the concentration of chondrocytes in all groups was adjusted to 10^5 cells/ml. A 3-ml suspension of each sample was collected in a 10-ml centrifuge tube and centrifuged at 500 r/min for 5 min, after which the culture solution was removed. The samples were then washed with PBS and centrifuged at 500 r/min for 5 min. After that, the supernatant was discarded and the cells were re-suspended in 100 µl of binding buffer. Annexin V-FITC (5 µl) and PI (5 µl) were mixed and incubated with the cells in the dark for 15 min. The fluorescence of FITC and PI was detected by flow cytometry to analyze the apoptotic rate of chondrocytes. The detection for each group was performed in triplicate.
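The apoptotic rate follows from the quadrant counts of the Annexin V-FITC/PI dot plot. The sketch below uses illustrative event counts; the gating conventions are standard, but none of the numbers come from the study.

```python
# Quadrant counts from a hypothetical Annexin V-FITC / PI dot plot.
events = {
    "AnnexinV-/PI-": 8600,   # viable
    "AnnexinV+/PI-": 700,    # early apoptotic
    "AnnexinV+/PI+": 500,    # late apoptotic
    "AnnexinV-/PI+": 200,    # necrotic
}

total = sum(events.values())
# Apoptotic rate = early + late apoptotic events over all events.
apoptotic = events["AnnexinV+/PI-"] + events["AnnexinV+/PI+"]
print(f"apoptotic rate = {100.0 * apoptotic / total:.1f}%")
```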
ELISA
The expression of TNF-α, IL-17 and IL-6 was detected using ELISA kits (R&D Systems, MN, USA), and all procedures followed the manufacturer's protocols.
qRT-PCR

TRIZOL reagent (Invitrogen, Carlsbad, CA, USA) was applied to extract total RNA, and the RNA was reverse transcribed using a reverse transcription kit (TaKaRa, Tokyo, Japan) according to the instructions. Gene expression was examined using a LightCycler 480 (Roche, Indianapolis, IN, USA), and the reaction conditions followed the directions of the fluorescence quantitative PCR kit (SYBR Green Mix, Roche Diagnostics, Indianapolis, IN). The thermal cycle parameters were as follows: 95℃ for 5 s; then 95℃ for 5 s, 60℃ for 10 s, 72℃ for 10 s (45 cycles in total); and finally extension at 72℃ for 5 min. Each reaction was performed in triplicate. GAPDH was used for normalization. Data were analyzed using the 2^(−ΔΔCt) method, with ΔΔCt = (Ct_target gene − Ct_internal control)_experimental group − (Ct_target gene − Ct_internal control)_control group. The primers of all genes and their internal controls are shown in Table 1.

Table 1. Primer sequences of all genes.
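A worked example of the 2^(−ΔΔCt) calculation just described, with hypothetical Ct values and GAPDH as the internal control, as in the paper:

```python
# Hypothetical Ct values for one target gene; a lower Ct means more transcript.
ct = {
    "control":      {"MMP13": 28.4, "GAPDH": 17.1},
    "experimental": {"MMP13": 25.9, "GAPDH": 17.0},
}

d_ct_exp = ct["experimental"]["MMP13"] - ct["experimental"]["GAPDH"]
d_ct_ctl = ct["control"]["MMP13"] - ct["control"]["GAPDH"]
dd_ct = d_ct_exp - d_ct_ctl          # ddCt as defined in the text
fold_change = 2.0 ** (-dd_ct)        # relative expression vs. control
print(f"ddCt = {dd_ct:.2f}, fold change = {fold_change:.2f}")
```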
Statistical analysis
GraphPad Prism 7 was applied for statistical analysis, and all data are presented as mean ± standard deviation (SD). Differences between two groups were assessed with a t-test, and comparisons among multiple groups were made using one-way analysis of variance (ANOVA) with Dunnett's multiple comparisons test as the post hoc test. P values of less than 0.05 were regarded as statistically significant.
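For readers reproducing this analysis outside GraphPad, the same one-way ANOVA plus Dunnett's post hoc comparison can be sketched in Python (SciPy >= 1.11 provides stats.dunnett); the TNF-α values below are simulated, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated TNF-alpha levels (pg/ml): untreated control vs. three paeonol doses.
control = rng.normal(100, 8, size=6)
dose_50 = rng.normal(80, 8, size=6)
dose_100 = rng.normal(65, 8, size=6)
dose_200 = rng.normal(50, 8, size=6)

# Overall one-way ANOVA, then Dunnett's test of each dose against control.
f_stat, p_anova = stats.f_oneway(control, dose_50, dose_100, dose_200)
dunnett = stats.dunnett(dose_50, dose_100, dose_200, control=control)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4g}")
print("Dunnett p-values vs. control:", dunnett.pvalue)
```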
Identification of primary chondrocytes
The extracted cells were subjected to toluidine blue staining and immunofluorescence staining of type II collagen. Toluidine blue staining showed that the extracted cells stained blue-violet (Fig. 1A), implying the production of aggrecan in these cells. According to the immunofluorescence staining of type II collagen, the cytoplasm of the extracted cells was filled with green fluorescence representing type II collagen (Fig. 1B). These results indicated that the extracted cells were chondrocytes.
Effect of paeonol on cartilage and inflammation in IL-1β-stimulated chondrocytes
Paeonol (0, 20, 50, 100, 200, 1000 mg/L) was used to treat chondrocytes for 24 h before cell viability was detected by MTT assay. The cytotoxicity experiments disclosed that paeonol at concentrations up to 200 mg/L had no toxic effect on chondrocytes, whereas chondrocyte viability was significantly decreased when chondrocytes were treated with paeonol at 1,000 mg/L (Fig. 2A, P < 0.05).
After pre-treatment with paeonol (0, 20, 50, 100, 200 mg/L) for 24 h, the chondrocytes were stimulated with 10 ng/ml of IL-1β for 24 h. In the IL-1β group, the levels of TNF-α, IL-17 and IL-6 were notably elevated compared with the Control group (Fig. 2B, P < 0.05). Compared with the IL-1β group, the expression of the above inflammatory cytokines was inhibited by paeonol pre-treatment in a dose-dependent manner (Fig. 2B, P < 0.05). Flow cytometry revealed that the apoptotic rate in the IL-1β group was markedly higher than that in the Control group, while paeonol treatment decreased the apoptotic rate of chondrocytes in a dose-dependent manner (Fig. 2E, P < 0.05). In addition, the expression of cleaved caspase-3 and Bax in the IL-1β group was much higher than in the Control group, and Bcl-2 expression was downregulated in the IL-1β group compared with the Control group (Fig. 2F, P < 0.05). Paeonol acted as a potent inhibitor of cleaved caspase-3 and Bax, and upregulated Bcl-2 expression in a dose-dependent manner (Fig. 2F, P < 0.05).
qRT-PCR and Western blot revealed increases in MMP-1, MMP-3 and MMP-13 and a decrease in TIMP-1 in IL-1β-stimulated chondrocytes, changes that were reversed by paeonol pre-treatment in a dose-dependent manner.
The aforementioned results confirmed that IL-1β treatment activates inflammation in chondrocytes and promotes degradation of the extracellular matrix and chondrocyte apoptosis. More importantly, paeonol could attenuate IL-1β-triggered inflammation, extracellular matrix degradation and chondrocyte apoptosis in a dose-dependent manner.
Paeonol regulates inflammation in chondrocytes and protects cartilage through SIRT1
The expression of SIRT1 in chondrocytes treated with paeonol (0, 20, 50, 100, 200 mg/L) and IL-1β (10 ng/ml) was assessed by qRT-PCR and Western blot. Chondrocytes in the IL-1β group had lower SIRT1 expression than those in the Control group, and paeonol rescued SIRT1 expression in a dose-dependent manner (Fig. 3A-B, P < 0.05), suggesting an important role of SIRT1 in paeonol's regulation of chondrocyte inflammation and extracellular matrix degradation.
The chondrocytes were then transfected with sh-SIRT1 or sh-NC before paeonol (200 mg/L) and IL-1β treatment. qRT-PCR and Western blot showed markedly decreased SIRT1 expression in chondrocytes after sh-SIRT1 transfection compared with the sh-NC group (Fig. 3C-D, P < 0.05).
After transfection with sh-SIRT1, the expression of TNF-α, IL-17 and IL-6 was higher than in the sh-NC + Paeonol group (Fig. 3E, P < 0.05). In the sh-SIRT1 + Paeonol group, the levels of MMP-1, MMP-3 and MMP-13 in chondrocytes were increased, and TIMP-1 expression was decreased, compared with the sh-NC + Paeonol group (Fig. 3F-G, P < 0.05). Transfection with sh-SIRT1 increased the apoptotic rate of chondrocytes and the expression of cleaved caspase-3 and Bax, in addition to suppressing Bcl-2 expression, compared with the sh-NC + Paeonol group (Fig. 3I, P < 0.05). Taken together, transfection with sh-SIRT1 partially abolished the effects of paeonol on chondrocyte inflammation and extracellular matrix degradation, indicating that paeonol exerts its protective effects on chondrocytes through upregulating SIRT1.
Paeonol inhibits NF-κB signaling pathway
Compared with the Control group, the expression of p-IκBα and p-p65 was increased in the IL-1β group, and was downregulated by paeonol in a dose-dependent manner in comparison with the IL-1β group (Fig. 4A-B, P < 0.05).
Moreover, transfection with sh-SIRT1 significantly increased the expression of p-IκBα and p-p65 in chondrocytes compared with the sh-NC + Paeonol group (Fig. 4A-B, P < 0.05). These results indicate that IL-1β treatment activates the NF-κB signaling pathway in chondrocytes, and that paeonol inhibits this activation through upregulating SIRT1 in a dose-dependent manner.
Discussion
OA is a multifactorial disorder characterized by a low-grade, chronic inflammatory response, resulting in interactions between the immune system and factors including local tissue damage and metabolic dysfunction [14]. Therefore, inhibiting the release of inflammatory cytokines and blocking cellular signaling pathways is regarded as an attractive option for the management and treatment of OA. The evidence collected in the present study supports that paeonol alleviates chondrocyte inflammation and ECM degradation induced by IL-1β via enhancing SIRT1 expression. Our results indicated that the NF-κB signaling pathway is activated in inflammatory chondrocytes, and that this activation can be suppressed by paeonol through regulating SIRT1.
Paeonol is the main component isolated from the root bark of Paeonia suffruticosa, and has been reported to have pharmacological effects on inflammation and pain-related indications in diseases including OA [15]. However, little is known regarding the precise mechanism underlying the anti-inflammatory effect of paeonol on OA. Here, chondrocytes from the stifle joints of rats were isolated and stimulated with IL-1β to explore the potential effects and mechanism of paeonol treatment in OA. The MTT assay confirmed that paeonol at concentrations below 200 mg/L had little toxicity on chondrocytes. Additionally, the expression of TNF-α, IL-6 and IL-17 was markedly increased in IL-1β-treated chondrocytes. Inflammatory mediators such as IL-1β trigger the expression of inflammatory factors such as TNF-α, leading to enhanced secretion of IL-6 and IL-17 [16]. Moreover, paeonol treatment suppressed the expression of these inflammatory cytokines in a dose-dependent manner. MMP-3 can cleave multiple ECM components including aggrecan [17]. Both MMP-3 and MMP-13 are responsible for the digestion of type II collagen by causing the triple helix to unwind and inducing cleavage at the P4-P11' site [18]. In OA, increased expression of MMP-1, MMP-3 and MMP-13 and decreased expression of TIMP-1 have been found [19]. Consistently, the expression of MMP-1, MMP-3 and MMP-13 was facilitated, and the expression of TIMP-1 was inhibited, in IL-1β-stimulated chondrocytes. After paeonol treatment, MMP-1, MMP-3 and MMP-13 expression was attenuated and TIMP-1 expression was elevated. In addition, the apoptotic rate of chondrocytes was increased by IL-1β stimulation and ameliorated by paeonol treatment. Taken together, paeonol is able to protect chondrocytes against inflammation, ECM degradation and apoptosis.
SIRT1 has been shown to exert an anti-inflammatory effect and regulate the ECM in OA [20], and overexpression of SIRT1 in human chondrocytes leads to repression of MMP-3, MMP-8 and MMP-13 expression [21]. Herein, we found that SIRT1 expression was hampered in chondrocytes treated with IL-1β, while paeonol treatment dose-dependently increased SIRT1 expression. Silencing of SIRT1 enhanced chondrocyte inflammation, apoptosis and ECM degradation, and the positive effects of paeonol were attenuated by SIRT1 silencing. Therefore, we concluded that paeonol upregulates SIRT1 to act in OA development. A former study demonstrated that the NF-κB signaling pathway is activated in OA [22]. In monosodium urate (MSU)-induced arthritis (MIA), paeonol has been reported to reduce the expression of TNF-α, IL-1β and IL-6 by inhibiting NF-κB-mediated pro-inflammatory cytokine production [23]. The implication of the NF-κB signaling pathway in paeonol's attenuation of the inflammatory response and apoptosis in IL-1β-stimulated chondrocytes was therefore explored. In our study, the NF-κB signaling pathway was activated by IL-1β in chondrocytes, and paeonol treatment reduced the expression of proteins related to the NF-κB signaling pathway. However, the inhibition of the NF-κB signaling pathway by paeonol was neutralized by SIRT1 silencing. p65, an indicator of the NF-κB signaling pathway, resides in the cytoplasm; when activated by inflammatory cytokines such as IL-1β, p65 is phosphorylated and translocates into the nucleus, further increasing the expression of multiple inflammation-related genes such as MMPs and IL-6 [24,25]. Interestingly, paeonol has been demonstrated to inhibit the phosphorylation of p65, thus inactivating the NF-κB signaling pathway [26]. Accordingly, this study demonstrated that paeonol dose-dependently enhances SIRT1 expression to inactivate the NF-κB signaling pathway.
Our results confirm the potential of paeonol as a candidate OA drug by virtue of its ability to suppress chondrocyte inflammation and ECM degradation through upregulating SIRT1 and inactivating the NF-κB signaling pathway.
"Biology",
"Medicine"
] |
Absence of Desmin Results in Impaired Adaptive Response to Mechanical Overloading of Skeletal Muscle
Background: Desmin is a muscle-specific protein belonging to the intermediate filament family. Desmin mutations are linked to skeletal muscle defects, including inherited myopathies with severe clinical manifestations. The aim of this study was to examine the role of desmin in skeletal muscle remodeling and performance gain induced by muscle mechanical overloading which mimics resistance training. Methods: Plantaris muscles were overloaded by surgical ablation of gastrocnemius and soleus muscles. The functional response of plantaris muscle to mechanical overloading in desmin-deficient mice (DesKO, n = 32) was compared to that of control mice (n = 36) after 7-days or 1-month overloading. To elucidate the molecular mechanisms implicated in the observed partial adaptive response of DesKO muscle, we examined the expression levels of genes involved in muscle growth, myogenesis, inflammation and oxidative energetic metabolism. Moreover, ultrastructure and the proteolysis pathway were explored. Results: Contrary to control, absolute maximal force did not increase in DesKO muscle following 1-month mechanical overloading. Fatigue resistance was also less increased in DesKO as compared to control muscle. Despite impaired functional adaptive response of DesKO mice to mechanical overloading, muscle weight and the number of oxidative MHC2a-positive fibers per cross-section similarly increased in both genotypes after 1-month overloading. However, mechanical overloading-elicited remodeling failed to activate a normal myogenic program after 7-days overloading, resulting in proportionally reduced activation and differentiation of muscle stem cells. Ultrastructural analysis of the plantaris muscle after 1-month overloading revealed muscle fiber damage in DesKO, as indicated by the loss of sarcomere integrity and mitochondrial abnormalities. Moreover, the observed accumulation of autophagosomes and lysosomes in DesKO muscle fibers could indicate a blockage of autophagy. To address this issue, two main proteolysis pathways, the ubiquitin-proteasome system and autophagy, were explored in DesKO and control muscle. Our results suggested an alteration of proteolysis pathways in DesKO muscle in response to mechanical overloading. Conclusion: Taken together, our results show that mechanical overloading increases the negative impact of the lack of desmin on myofibril organization and mitochondria. Furthermore, our results suggest that under these conditions, the repairing activity of autophagy is disturbed. Consequently, force generation is not improved despite muscle growth, suggesting that desmin is required for a complete response to resistance training in skeletal muscle.
INTRODUCTION
Desmin belongs to the family of intermediate filaments and is specifically expressed in skeletal, smooth and cardiac muscle cells. In the absence of desmin, or due to mutations in the encoding gene, several defects have been described in all three muscle types, particularly in cardiac (Brodehl et al., 2018) and skeletal muscle (Goldfarb and Dalakas, 2009; van Spaendonck-Zwarts et al., 2010). In skeletal muscle cells, desmin forms filaments that interconnect organelles and link them to the cytoskeleton and the plasma membrane. Desmin filaments are linked to the costameres and Z-discs through interactions with synemin (Granger and Lazarides, 1980; Bellin et al., 2001), plectin (Konieczny et al., 2008) and nebulette, and indirectly to actin filaments (Hernandez et al., 2016), contributing to the maintenance of the structural and mechanical integrity of the contractile apparatus in muscle tissues.
Since their generation (Milner et al., 1996), desmin knock-out (DesKO) mice have been used to assess the role of this intermediate filament in skeletal muscle. Under resting conditions, desmin-deficient soleus muscle shows an increase in the number of slow/oxidative fibers, but also a decrease in force and fatigue resistance, as compared to control (Li et al., 1997). However, the role of desmin in the long-term adaptation process following extreme mechanical stimulation of muscle is not clear. It has been suggested that desmin is protective in the case of remodeling induced by endurance exercise and, on the contrary, deleterious when the muscle is submitted to eccentric exercise (Sam et al., 2000).
Herein, we compared the effects of 1-month OVL on absolute maximal force, specific maximal force, fatigue resistance, muscle growth, and fiber type transition of the plantaris muscle of adult desmin-deficient (DesKO) mice and of age- and sex-matched control mice. We collected evidence of a dichotomy between the OVL-induced gain of muscle mass and the effects on muscle function in DesKO muscles compared to controls. Our data suggest that downregulation of the myostatin pathway efficiently promotes muscle hypertrophy in DesKO muscles. However, OVL-elicited remodeling failed to activate a normal myogenic program, resulting in proportionally less muscle stem cell (MuSC) activation and differentiation. Furthermore, the absence of desmin prevented the upregulation of LC3, suggesting a link between desmin and the autophagic process that accompanies muscle remodeling following mechanical overloading.
Animals and Treatments
All procedures were performed in accordance with national and European legislation, in conformity with the Public Health Service Policy on Humane Care and Use of Laboratory Animals under license 75-1102. Thirty-two 2-month-old DesKO female mice were used in this study. Age-matched wild-type (n = 24) or Des+/− heterozygous (n = 12) female mice were used as controls. Mice were randomly divided into different control and experimental groups. All animal studies were approved by our institutional Ethics Committee (Charles Darwin, project number: 01362.02) and conducted according to the French and European laws, directives, and regulations on animal care (European Commission Directive 86/609/EEC). Our animal facility is fully licensed by the French competent authorities and has animal welfare insurance. For OVL, the mice were anesthetized with pentobarbital (50 mg/kg body weight, i.p.). The plantaris muscles of both legs were mechanically overloaded for 7 days or 1 month by the surgical removal of the soleus muscles and a major portion of the gastrocnemius muscles as described (Joanne et al., 2012; Ferry et al., 2015).
Muscle Force Measurements
Plantaris muscle function was evaluated by measuring in situ isometric force, as described (Vignaud et al., 2007; Hourdé et al., 2013a). Briefly, mice were anesthetized (pentobarbital sodium, 50 mg/kg, i.p.). During physiological experiments, supplemental doses were given as required to maintain deep anesthesia. The knee and foot were fixed with clamps and stainless-steel pins. The plantaris muscle was exposed (and the distal tendon of the gastrocnemius and soleus muscle complex was cut in non-overloaded muscles). The distal tendon of the plantaris muscle was attached to an isometric transducer (Harvard Apparatus) with a silk ligature. The sciatic nerves were proximally crushed and distally stimulated with a bipolar silver electrode using supramaximal square-wave pulses of 0.1 ms duration. Responses to tetanic stimulation (pulse frequency 75-143 Hz) were successively recorded. At least 1 min was allowed between contractions. Absolute maximal force was determined at optimal length (the length at which maximal tension was obtained during the tetanus). Force was normalized to the muscle mass (m) as an estimate of specific maximal force. Fatigue resistance was then determined after a 5-min rest period: the muscle was continuously stimulated at 50 Hz for 2 min (submaximal continuous tetanus), and the duration corresponding to a 50% decrease in force was noted. Body temperature was maintained at 37 °C using radiant heat. After the measurements, mice were euthanized with an overdose of pentobarbital.
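A minimal sketch of how the three reported readouts (absolute maximal force P0, specific force sP0, and the 50% fatigue time) can be extracted from a force trace; the trace, sampling rate and muscle mass below are all simulated assumptions, not recorded data.

```python
import numpy as np

# Simulated in situ recording: force sampled at 1 kHz during the 2-min
# continuous 50-Hz stimulation; an exponential decay stands in for fatigue.
fs = 1000.0                          # sampling rate (Hz), assumed
t = np.arange(0, 120, 1 / fs)        # 2 min of stimulation
force = 0.35 * np.exp(-t / 90.0)     # simulated fatiguing force decay (N)

p0 = force[0]                        # absolute maximal force at t = 0
muscle_mass_mg = 22.0                # plantaris wet weight (mg), assumed
sp0 = p0 / muscle_mass_mg            # specific force, force per muscle mass

# Fatigue resistance: time until force falls to 50% of its initial value.
idx = np.argmax(force <= 0.5 * p0)   # first sample at or below half of P0
print(f"P0 = {p0:.3f} N, sP0 = {sp0:.4f} N/mg, t50 = {t[idx]:.1f} s")
```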
Histology and Immunohistochemistry
Transverse 10-µm-thick frozen sections were prepared from the mid-belly region of plantaris muscles using a cryostat (Leica Microsystems, Nanterre, France). Some sections were processed for histological analysis (hematoxylin-eosin, Sirius red stainings) according to standard protocols. Other sections were processed for immunohistochemistry as described previously (Joanne et al., 2012; Hourdé et al., 2013b). Briefly, the sections were incubated with primary antibodies against heparan sulfate proteoglycan (Perlecan) (1:400, rat monoclonal, Millipore), myosin heavy chain (MHC)-2a (1:50, mouse monoclonal, clone SC-71, Developmental Studies Hybridoma Bank, University of Iowa) or MHC-2b (1:5, mouse monoclonal, clone BF-F3, Developmental Studies Hybridoma Bank, University of Iowa). After washing in PBS, sections were incubated for 1 h with secondary antibodies (Alexa Fluor, Life Technologies). After washing in PBS, slides were finally mounted using Mowiol containing 5 µg/ml Hoechst 33342 (Life Technologies). Images were captured using a motorized fluorescence microscope (Dmi8, Leica Microsystems). Morphometric analyses were made using the ImageJ software and a custom macro as described previously (Joanne et al., 2012; Hourdé et al., 2013a). The percentage of each fiber type and the smallest diameter (min-Feret) of all muscle fibers of the whole muscle section were measured.
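The min-Feret (minimal caliper) diameter measured by the authors' ImageJ macro can be approximated by scanning projection widths over many directions. The Python sketch below is a hedged stand-in for that macro, checked on a synthetic rectangle; it is not the original analysis code.

```python
import numpy as np

def min_feret_diameter(coords, n_angles=180):
    """Minimal Feret (caliper) diameter of a set of 2-D points, approximated
    by the smallest projection width over n_angles scan directions."""
    coords = np.asarray(coords, dtype=float)
    angles = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    projections = coords @ directions.T           # (n_points, n_angles)
    widths = projections.max(axis=0) - projections.min(axis=0)
    return widths.min()

# Sanity check on an axis-aligned 60 x 30 rectangle of points: the minimal
# Feret diameter should be ~30 (the short side).
ys, xs = np.mgrid[0:31, 0:61]
pts = np.column_stack([xs.ravel(), ys.ravel()])
print(min_feret_diameter(pts))   # ~30.0
```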
Electron Microscopy
Electron microscopy was carried out as described previously (Agbulut et al., 2001; Joanne et al., 2013). Briefly, the calf muscles of mice were fixed in 2% glutaraldehyde and 2% paraformaldehyde in 0.2 M phosphate buffer at pH 7.4 for 1 h at room temperature. After 1 h, the plantaris muscle was dissected and divided into three pieces by short-axis sections, then fixed overnight at 4 °C in the same fixative. After washing, specimens were post-fixed for 1 h with 1% osmium tetroxide solution, dehydrated in increasing concentrations of ethanol and finally in acetone, and embedded in epoxy resin. The resin was polymerized for 48 h at 60 °C. Ultrathin sections (70 nm) were cut with an ultramicrotome (Leica UC6, Leica Microsystems), picked up on copper rhodium-coated grids and stained for 2 min with Uranyl-Less solution (Delta Microscopies, France) and 2 min with 0.2% lead citrate before observation at 80 kV with an electron microscope (912 Omega, Zeiss) equipped with a digital camera (Veleta 2k×2k, Emsis, Germany).
Relative Quantification of Gene Expression by qPCR
Total RNA was extracted from the plantaris muscle using QIAzol lysis reagent, the TissueLyser II system, and the RNeasy mini kit (Qiagen France SAS) following the manufacturer's instructions. Extracted RNA was spectrophotometrically quantified using a NanoDrop 2000 (Thermo Fisher Scientific). From 500 ng of extracted RNA, first-strand cDNA was synthesized using the Transcriptor First Strand cDNA Synthesis Kit (Roche Diagnostics) with anchored-oligo(dT)18 primer, according to the manufacturer's instructions. Using a LightCycler 480 system (Roche Diagnostics), the reaction was carried out in duplicate for each sample in a 6-µl reaction volume containing 3 µl of SYBR Green Master Mix, 500 nM each of the forward and reverse primers, and 3 µl of diluted (1:25) cDNA. The thermal profile for SYBR Green qPCR was 95 °C for 8 min, followed by 40 cycles at 95 °C for 15 s, 60 °C for 15 s and 72 °C for 30 s. To exclude PCR products amplified from genomic DNA, primers were designed, when possible, to span one exon-exon junction. Primer sequences used in this study are available on request. The expression of hydroxymethylbilane synthase (Hmbs) and succinate dehydrogenase complex flavoprotein subunit A (Sdha) was used as reference transcripts. At least five animals were used for each experimental point.
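Normalizing a target transcript to two reference genes is commonly done against the geometric mean of the reference expressions (as in geNorm); with equal amplification efficiencies this reduces to averaging the reference Ct values. A small sketch with hypothetical Ct values, using the paper's two reference transcripts:

```python
import numpy as np

# Hypothetical Ct values for one sample; Hmbs and Sdha are the references.
ct_target = 24.6                     # e.g., a target such as Mstn
ct_refs = np.array([21.3, 22.1])     # Hmbs, Sdha

# Arithmetic mean of reference Ct values equals the geometric mean of the
# reference expression levels when amplification efficiencies are equal.
ct_ref = np.mean(ct_refs)
d_ct = ct_target - ct_ref
relative_expression = 2.0 ** (-d_ct)
print(f"dCt = {d_ct:.2f}, relative expression = {relative_expression:.3f}")
```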
Statistical Analysis
Groups were statistically compared with GraphPad Prism 7 using ordinary two-way analysis of variance. Multiple comparisons were performed to compare the means of Basal groups to OVL-treated groups (see Supplementary Table 1) and were corrected using Tukey's test. Gains between WT and KO groups were statistically compared using an unpaired two-tailed t-test. If normal distribution (verified using the Shapiro-Wilk test) and/or equal variance (verified using Bartlett's test) could not be assumed, groups were statistically compared using the Wilcoxon-Mann-Whitney test. A p < 0.05 was considered significant. Values are given as means ± SEM.
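The normality/equal-variance decision flow described here can be mirrored in Python; the sketch below uses simulated force-gain values and standard SciPy tests, and is an illustration rather than the authors' Prism workflow.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated force-gain values (%) for the Ctr + OVL and DesKO + OVL groups.
ctr_gain = rng.normal(35, 10, size=8)
ko_gain = rng.normal(5, 10, size=8)

# Check normality (Shapiro-Wilk) and equal variance (Bartlett), as in the paper.
normal = all(stats.shapiro(g).pvalue > 0.05 for g in (ctr_gain, ko_gain))
equal_var = stats.bartlett(ctr_gain, ko_gain).pvalue > 0.05

if normal and equal_var:
    test_name, result = "unpaired t-test", stats.ttest_ind(ctr_gain, ko_gain)
else:
    test_name, result = "Mann-Whitney U", stats.mannwhitneyu(ctr_gain, ko_gain)
print(f"{test_name}: p = {result.pvalue:.4g}")
```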
Reduced Gain of Muscle Performance in Response to Mechanical Overload (OVL) in Desmin Knock-Out Mice
To analyze the role of desmin during adaptation to resistance training, we examined the gains in muscle weight, muscle force generation capacity and fatigue resistance in response to overload (OVL) in DesKO and control mice. Plantaris muscles were overloaded by surgical ablation of the gastrocnemius and soleus muscles. One month following OVL, muscle weight was markedly increased in both genotypes (Figure 1A). The percentage of muscle weight gain compared to the unchallenged muscle was calculated and did not differ between the two groups (Figure 1B), suggesting that desmin depletion does not prevent muscle mass regulation following resistance training. It should be noted that body weight in DesKO mice in the basal state was lower compared to control mice (16.80 ± 0.60 g in DesKO vs. 21.24 ± 0.50 g in Ctr, p = 0.0001), and OVL did not modify this difference (18.40 ± 0.15 g for DesKO + OVL vs. 20.27 ± 0.27 g for Ctr + OVL, p = 0.0258). In situ force production in response to nerve stimulation was then analyzed. In contrast to control mice, OVL did not increase the absolute maximal force of the plantaris muscle in DesKO mice (Figures 1C,D). The specific maximal force, i.e., the absolute maximal force normalized by muscle weight, was decreased in response to OVL in both genotypes (Figure 1E) (p < 0.05); notably, this decrease was larger in DesKO mice than in control mice (Figure 1F) (p < 0.05). Fatigue resistance was also analyzed by continuously stimulating the plantaris muscle and measuring the time corresponding to a 50% decrease of the initial force. Our results showed that fatigue resistance increased in response to OVL only in control mice (Figures 1G,H) (p < 0.05). Taken together, our results indicate that desmin plays an important role in the gain of muscle performance, but not in the gain of muscle weight, in response to OVL.
Muscle Remodeling in Response to Mechanical Overload (OVL) in Desmin Knock-Out Mice
Since the changes in muscle force generation capacity and fatigue resistance can be related to the myosin heavy chain (MHC) composition of muscle fibers, we used immunohistochemistry to compare the MHC composition of DesKO and control plantaris muscles after 1 month of OVL (Figures 2A-D). The proportion of fibers expressing the two major MHC isoforms, MHC-2a (oxidative fibers) and MHC-2b (glycolytic fibers), was analyzed using a custom macro (ImageJ software) to count MHC-positive cells. The proportion of MHC-2a-expressing fibers increased and the proportion of MHC-2b-expressing fibers decreased in response to OVL in both genotypes (Figures 2E,F). Despite the fact that the change in MHC composition of the plantaris muscle was similar between DesKO and control mice, the increase in the percentage of MHC-2a fibers in response to OVL was more than 2.5-fold lower in DesKO than in control mice (+49% ± 9% in DesKO vs. +136% ± 15% in control, p = 0.002). However, it should be noted that the percentage of oxidative MHC-2a fibers in DesKO mice was already increased before OVL (Figure 2E) (p < 0.05), presumably as a consequence of the lack of desmin, as previously demonstrated (Agbulut et al., 1996). We also examined the oxidative energetic metabolism of the plantaris muscle after 1 month of OVL using succinate dehydrogenase (SDH) staining. Our results indicated higher SDH activity in response to OVL in both genotypes (Figure 2G) (p < 0.05). Taken together, our results indicate that the glycolytic-to-oxidative metabolism transition pattern in response to OVL did not markedly differ between DesKO and control mice.

FIGURE 1 | Maximal force of plantaris is not increased in response to OVL in DesKO mice. Muscle weight (A), maximal force (P0, C), specific force (sP0, E) and fatigue resistance (G) were evaluated in both genotypes (Ctr and DesKO) in basal condition or after 1 month of OVL. The gain or loss in these parameters was calculated compared to the corresponding basal condition for the Ctr + OVL and DesKO + OVL groups (B,D,F,H). Data are given as means ± SEM. DesKO, Desmin knock-out mice; Ctr, Control mice; OVL, mechanical overloading. ns, non-significant, *p < 0.05, **p < 0.01, ***p < 0.001.
Myostatin Pathway Mediates Muscle Mass Plasticity in Desmin Knock-Out Mice
OVL is known to induce muscle hypertrophy and hyperplasia. As several players can be involved in the control of muscle mass, we examined the mRNA levels of intracellular signaling molecules involved in hypertrophy. Semi-quantitative PCR analysis was performed on muscle samples after 7 days of OVL, during the early phase of muscle remodeling (Figures 3A-J). mRNA levels of myostatin (Mstn or Gdf8), an mTOR deactivator, decreased with OVL (Figure 3A), to a lower extent in DesKO mice (Figure 3B). In the same line, transcripts of follistatin (Figures 3C,D), an antagonist of myostatin, and of insulin-like growth factor 1 (Igf1) (Figures 3E,F) increased in both genotypes, but these increases were lower in DesKO compared to control muscles. In addition, two other markers, MuRF1 and atrogin, which promote protein catabolism in skeletal muscle atrophy and contribute to the decline of muscle mass and strength in sarcopenia, were examined. Our results demonstrated that OVL did not modify MuRF1 expression in control mice, but strongly reduced it in DesKO mice, suggesting that the presence of desmin reduces protein degradation upon OVL stimulation (Figures 3G,H). Regarding atrogin expression, no striking difference was observed after OVL between the two genotypes (Figures 3I,J). Taken together, our data suggest that conventional muscle mass regulation pathways are activated in DesKO mice and contribute to the plasticity of muscle mass in OVL-elicited remodeling.
To complete these results, we analyzed the number and size distribution of MHC-2a and MHC-2b muscle fibers on cross-sections of the plantaris muscle in basal condition and 1 month after OVL. As demonstrated in Figure 4, the number of MHC-2a fibers increased in both genotypes. Interestingly, in control mice the size distribution of MHC-2a fibers was not modified by OVL. On the other hand, in DesKO mice, the size distribution was slightly shifted toward higher values, indicating an asymmetrical increase in the number of MHC-2a fibers with large diameter (Figures 4A,B). Thus, the mean diameter of MHC-2a fibers was increased in response to OVL in DesKO mice but not in control mice (+31.91% ± 11.32% in DesKO vs. +2.93% ± 3.18% in control, p = 0.048) (Figure 4C). As expected, after 1 month of OVL, the number of MHC-2b fibers was decreased in both genotypes (Figures 4D,E). However, the decrease in the mean size of MHC-2b fibers in response to OVL appeared more important in control mice than in DesKO mice, although the difference did not reach statistical significance (−15.79% ± 7.09% in DesKO vs. −27.11% ± 5.87% in control, p = 0.264) (Figure 4F).
Mechanisms Responsible for the Reduced Gain of Performance in Desmin Knock-Out Mice
To determine the mechanisms involved in the deficit of muscle function observed in DesKO muscles challenged with OVL, myofiber number was quantified in plantaris muscles. Myofiber number was increased by OVL in control muscle, but not in DesKO (Figure 5A). A defect in myogenesis affecting DesKO muscles could lead to less myofiber formation following OVL and could participate in the observed deficit in muscle function. Thus, Pax7 transcripts were quantified (Figure 5B). Pax7 expression increased in response to OVL both in control and DesKO mice, with no statistically significant difference observed between the two genotypes (Figures 5B,C). To support this result, we also quantified Pax7-positive cells in DesKO and control mice after OVL (Figure 5D). Our results show no difference in the ratio of Pax7-positive cells between control and DesKO mice in response to OVL, suggesting that MuSCs are not depleted in the absence of desmin. However, we found that the induction of MyoD expression by OVL was repressed in DesKO muscle, suggesting that MuSC activation is repressed (Figures 5E,F). Consistent with this observation, myogenin, neonatal MHC and embryonic MHC were less induced by OVL in DesKO than in control muscles (Figures 5G-L). Unchanged Pax7 expression associated with decreased regeneration markers in response to OVL in DesKO compared to control muscles argues for a reduction in the myogenic program affecting DesKO MuSCs in response to OVL. In order to explore the consequences of this reduced myogenic program on the capacity of DesKO muscle to repair muscle damage due to mechanical overloading, we evaluated the extent of fibrosis and the inflammatory response in DesKO and control mice. Fibrosis was first assessed by Sirius red staining after 1 month of OVL (Supplementary Figure 1A); no difference was found between the two genotypes. Moreover, qPCR analyses were performed on muscle samples 7 days after OVL, during the early phase of muscle remodeling (Supplementary Figure 1B). The mRNA levels of the fibrosis and inflammation markers Il1b, Tgfβ1, Col3a1, Col1a1 and Timp1 increased strongly in response to OVL in a similar manner in both genotypes (p < 0.05). We also studied protein kinase PKA, since it contributes to muscle regeneration (Stewart et al., 2011). We found that the levels of the phosphorylated form of the PKA regulatory subunit IIα (PKA RIIα) protein increased in control mice in response to OVL but not in DesKO mice (Supplementary Figures 1C,D). Taken together, despite minor modifications observed in DesKO mice compared to control, the absence of desmin does not impair muscle fiber regeneration.
FIGURE 4 (caption fragment) | The gain or loss in these parameters was calculated compared to the corresponding basal condition for the Ctr + OVL and DesKO + OVL groups (C,F). Data are given as means ± SEM. For the size distribution, n = 4 for all conditions (n = 3 for Ctr + Basal group). DesKO, Desmin knock-out mice; Ctr, Control mice; OVL, mechanical overloading; MHC, myosin heavy chain. ns, non-significant, *p < 0.05.

To better understand the morphological perturbations affecting DesKO muscles, we also examined the ultrastructure of the plantaris muscle fibers using transmission electron microscopy (Figure 6). In comparison to control muscles, DesKO myofibers presented an increased number of mitochondria and irregularities in the organization of the myofibrils, with misalignment of Z-lines (Figures 6A,B). One month after OVL, control mice presented only minor modifications, i.e., an increase in the number and size of mitochondria (Figure 6C, see asterisk). In contrast, DesKO mice presented serious muscle damage, as indicated by the loss of sarcomere integrity, alignment and orientation, and abnormalities in the size, number and distribution of mitochondria (Figures 6D,E). Moreover, mitochondria appeared swollen and accumulated in the muscle fibers. As presented in Figure 6F, an accumulation of autophagosomes (white arrowheads) and lysosomes (white empty arrowheads) was spotted under the sarcolemma in DesKO mice after 1 month of OVL. These observations could
indicate a perturbation of autophagy. Indeed, an impairment of proteolysis mechanisms leading to the accumulation of non-functional proteins, such as proteins contributing to muscle contraction, could participate in the reduced gain of performance in DesKO mice. To address this possibility, we explored the two main proteolysis pathways, the ubiquitin-proteasome system and autophagy, in DesKO and control mice in response to OVL. Regarding the ubiquitin-proteasome system, we measured the chymotrypsin-like, trypsin-like and caspase-like activities of the proteasome 20S catalytic core using fluorogenic substrates (e.g., Suc-LLVY-AMC) in DesKO and control muscle homogenates after 1 month of OVL. Chymotrypsin-like and caspase-like activities showed no difference between DesKO and control mice (Figures 7B,C). However, the proteasome trypsin-like activity was increased in response to OVL in DesKO mice but not in control mice (Figure 7A) (p < 0.05). Concerning autophagy, we examined the LC3-II protein level by western blotting in DesKO and control plantaris muscle 1 month after OVL (Figure 7D). Our results show that both the mRNA and protein levels of LC3-II did not change in DesKO mice, while they increased in control mice in response to OVL (Figures 7D-H) (p < 0.05). It should be noted that the mRNA levels of LC3 were higher in DesKO mice at baseline compared to control. Together, these results underline an alteration of proteolysis pathways in DesKO mice in response to OVL.
DISCUSSION
Skeletal muscle responds to resistance training by activating adaptation mechanisms at the cellular level, which result in muscle fiber growth and regeneration, increased fatigue resistance, and gain of force (Joanne et al., 2012). In this study, we used desmin-deficient (DesKO) female mice to address the role of desmin in the response of skeletal muscle to mechanical overload (OVL), a well-studied experimental model which mimics resistance training. We found that, in response to OVL, the gain in performance is not fully achieved in the absence of desmin, despite notable muscle remodeling, and in relation with impaired proteolysis.
Muscle Remodeling Is Affected by Desmin Depletion in Response to OVL
One month after surgical ablation of the gastrocnemius and soleus muscles, the plantaris muscle of both DesKO and control mice responded with an important increase in weight (Figure 1), indicating resistance training-induced muscle growth (Joanne et al., 2012). Both genotypes also displayed a fiber type switch toward a more oxidative phenotype, which is consistent with increased fatigue resistance. In particular, MHC2b-positive glycolytic fibers were partially replaced by MHC2a-positive, more oxidative fibers. Also, mitochondrial activity, as detected by SDH staining, appeared increased twofold (Figure 2). Gene expression levels of the mTOR deactivator myostatin decreased, whereas transcripts of follistatin, an antagonist of myostatin, and of the positive regulator of muscle growth Igf1 increased in both genotypes, also suggesting that the hypertrophy process is activated (Figure 3). Igf1 is also implicated in muscle regeneration by promoting both proliferation and differentiation of MuSCs (Jang et al., 2011). However, desmin deficiency toned down both the OVL-elicited myostatin reduction and the stimulated follistatin expression (Figures 3A-D). Nevertheless, since muscle weight gain was similar in both genotypes, the regulation of muscle mass appears largely unaffected by desmin deficiency. Overloaded DesKO and control muscles showed increased gene expression of Pax7, embryonic MHC and neonatal MHC (Figure 5). These results suggest that hyperplasia and regeneration also occur in both genotypes. Interestingly, OVL did not increase myofiber number in DesKO muscles, although it did in control mice (Figure 5A). The number of Pax7-positive cells and the OVL-induced gain in Pax7 expression were identical in control and DesKO muscles; however, the OVL-induced upregulation of MyoD, myogenin, neonatal and embryonic MHC was dampened in DesKO muscles (Figure 5). Our data suggest that the myogenic program triggered by OVL stimulation is only partially supported in the absence of desmin. We also examined differences in the phosphorylation of PKA RIIα, which leads to an increase in protein synthesis, in favor of hypertrophy and muscle growth. Although the levels of phosphorylated PKA RIIα increased only in control muscle after OVL, these levels were already elevated in DesKO in the basal condition. A possible explanation may be related to another intermediate filament, synemin. Synemin is involved in the control of hypertrophy by modulating the subcellular localization of PKA. We have previously reported higher phosphorylated levels of PKA and increased hypertrophy in synemin-deficient skeletal muscle as compared to control mice and in response to OVL (Li et al., 2014). Formation of synemin filaments in muscle requires copolymerization with desmin; consequently, synemin appears unstable and delocalized in DesKO muscle fibers (Carlsson et al., 2000), providing one possible explanation for the higher levels of phosphorylated PKA RIIα in DesKO under basal conditions. Taken together, these results suggest that the process of muscle fiber remodeling was activated: muscle fiber type switch as well as hypertrophy and regeneration appear to occur in both genotypes as a response to OVL. These processes are not abolished by the lack of desmin; however, our data suggest that muscle remodeling is moderated in response to OVL.
Impaired Gain in Performance in the Absence of Desmin
Interestingly, here we show that, despite muscle hypertrophy, maximal force production is not improved in DesKO plantaris muscle in response to OVL. It is known that muscle hypertrophy does not necessarily increase maximal force production, as shown in the case of myostatin inhibition (Stantzou et al., 2017). Increased fibrosis and inflammation as compared to control could be a possible partial explanation for the fact that maximal force was not proportional to muscle weight in DesKO (Costamagna et al., 2015). However, although the gene expression of proteins involved in inflammation and fibrosis greatly increased after 7 days of OVL, this increase was similar for both genotypes, and no statistically significant increase in fibrosis was measured in situ on muscle sections, at least after 1 month of OVL (Supplementary Figure 1). Efficient muscle fiber contraction can be impaired by structural disorganization of sarcomeres, and in particular by misalignment, disintegration, and loss of myofibrils (Li et al., 1997). Electron microscopy analysis (Figure 6) showed that OVL did not destabilize the contractile apparatus in control muscle fibers; the main effect was an increased number of mitochondria, as expected from the switch toward a more oxidative metabolism. On the contrary, OVL had major structural consequences in DesKO muscle. Under basal conditions, DesKO muscle fibers displayed intermyofibrillar accumulation of mitochondria as well as misaligned Z-lines; however, the contractile apparatus was not disintegrated. After OVL, DesKO muscle fibers appeared damaged, with abnormally accumulated and often swollen mitochondria as well as misaligned, disintegrated and disoriented myofibrils, as shown by the coexistence of longitudinally and cross-sectioned myofibrils within the same muscle fiber (Figure 6E). A similar cellular phenotype has been previously described for DesKO muscle fibers in basal condition (Milner et al., 1996; Li et al., 1997), although only for slow-twitch muscles and in older mice (≥5 months old). Therefore, we conclude that in the absence of desmin, muscle fiber growth cannot compensate for the increased structural damage of the contractile apparatus and is not sufficient to improve force production.
It is known that accumulation of damaged mitochondria leads to increased reactive oxygen species generation, decreased ATP production, cellular dysfunction, and finally cell death (Bloemberg and Quadrilatero, 2019). Intense muscle effort under OVL conditions results in the production of damaged proteins and organelles, which need to be effectively cleared. We examined the two major proteolytic pathways, the ubiquitin-proteasome system and autophagy, in DesKO and control muscles. Proteasomal activity has been reported in OVL experiments (Baehr et al., 2014), and the role of autophagy in muscle adaptation to exercise has been extensively studied and established (Lira et al., 2013; Luo et al., 2013). In our experiments, in response to OVL, the proteasome activity appeared partially activated (trypsin-like activity), whereas the levels of LC3-II, the activated (lipidated) form of the autophagy marker LC3 protein, did not increase in DesKO muscle fibers. On the contrary, both LC3 gene expression and LC3-II protein levels increased in control muscle fibers after OVL. Interestingly, LC3 gene expression was already increased in DesKO muscle fibers under basal conditions. Moreover, electron microscopy revealed the presence of autophagosomes and autolysosomes in the cytoplasm of DesKO muscle fibers after OVL (Figure 6F). One possible explanation could be that, although the process is activated, the autophagy machinery (autophagosome production, fusion to lysosomes and clearance) in DesKO is already at maximum capacity in the basal state because of the high production of damaged cell material due to the lack of desmin, and becomes inefficient under OVL conditions. This hypothesis is supported by the accumulation of damaged mitochondria under OVL conditions (Figure 6). Interestingly, it has been previously shown that blockage of autophagy in muscle-specific Atg7-null mice resulted in impairment of force transmission and in accumulation of dysfunctional mitochondria (Masiero et al., 2009). We therefore propose that, after OVL and in the absence of desmin, the repairing activity of autophagy could be disturbed, and this may at least partially explain the accumulation of damaged mitochondria and dysfunctional contractile proteins which leads to compromised muscle integrity and reduced production of specific maximal force, as was shown in other cases (Wohlgemuth et al., 2010). Further studies are required to elucidate the role of autophagy in our model system by generating functional data of autophagy inhibition/promotion in vivo.

FIGURE 7 | Proteostasis pathways are disturbed in DesKO after OVL. The trypsin-like activity (A), chymotrypsin-like activity (B) and caspase-like activity (C) of the proteasome were measured in both genotypes (Ctr and DesKO) in basal condition or after 1 month of OVL. LC3-II protein (D-F) and LC3 gene expression (G,H) were quantified in both genotypes (Ctr and DesKO) in basal condition or after 1 month of OVL. Data are given as means ± SEM. DesKO, Desmin knock-out mice; Ctr, Control mice; OVL, mechanical overloading. ns, non-significant, *p < 0.05, **p < 0.01, ***p < 0.001.
CONCLUSION
In conclusion, here we show that during muscle remodeling in response to OVL, muscle growth and increase in force production can be dissociated in DesKO mice. It has been proposed that, during resistance exercise, force transmission at the myofibril level must be supported by the cytoskeleton, as suggested by increased expression of desmin (Parcell et al., 2009). We propose that mechanical OVL increases the negative impact of the lack of desmin on myofibril organization and mitochondria. Furthermore, our results suggest that under these conditions, the repairing activity of autophagy is impaired. Consequently, force generation is not improved despite muscle growth, suggesting that desmin is required for a complete response to resistance training in skeletal muscle.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
All animal studies were approved by our institutional Ethics Committee (Charles Darwin, project number: 01362.02) and conducted according to the French and European laws, directives, and regulations on animal care (European Commission Directive 86/609/EEC).
AUTHOR CONTRIBUTIONS
PJ, MB, OA, AF, and EK contributed to the data collection, the statistical analysis, the interpretation, and the manuscript writing. PJ, YH, MB, and JG-L carried out the histological, immunostaining and molecular analyses. AL, YH, M-TD, AP, and ZL carried out the western blot experiments and the proteasome activity measurement and analysis. GT and OA carried out the electron microscopy experiments. AF carried out the muscle force measurements. OA and AF designed and supervised the research. All authors read and approved the final manuscript.
FUNDING
This work was supported by funds from CNRS, Sorbonne Université, Université de Paris, and the "Association Française contre les Myopathies" (AFM) (contract numbers: 16605, 19353, and 20479). YH and M-TD were supported by fellowships from the "Association Française contre les Myopathies."
"Medicine",
"Biology"
] |
Iso-mukaadial acetate and ursolic acid acetate bind to Plasmodium falciparum heat shock protein 70: towards targeting the parasite protein folding pathway
Plasmodium falciparum is the most lethal malaria parasite. P. falciparum Hsp70 (PfHsp70) is an essential molecular chaperone (facilitating protein folding) and is deemed a prospective antimalarial drug target. The present study investigates the binding capabilities of select plant derivatives, iso-mukaadial acetate (IMA) and ursolic acid acetate (UAA), against P. falciparum using an in silico docking approach. The interaction between the ligands and PfHsp70 was evaluated using surface plasmon resonance (SPR) analysis. Molecular docking, binding free energy analysis and molecular dynamics simulations were conducted towards understanding the mechanisms by which the compounds bind to PfHsp70. The molecular docking results revealed ligand flexibilities, conformations and positions of key amino acid residues and protein-ligand interactions as crucial factors accounting for selective inhibition of Hsp70. The simulation results also suggest protein-ligand van der Waals forces as the driving force guiding the interaction of these compounds with PfHsp70. Both UAA and IMA bound to PfHsp70 within the micromolar range based on the SPR binding assay. Our findings pave the way for future rational design of new selective compounds targeting PfHsp70. Supplementary Information: The online version contains supplementary material available at 10.1186/s13065-024-01159-6.
Introduction
The continued emergence of drug-resistant malaria parasites emphasizes the need to identify new alternative molecules. Annually, Plasmodium spp. are responsible for approximately 400,000 deaths worldwide, with Plasmodium falciparum being the most lethal causative agent of malaria. This apicomplexan parasite undergoes a complex life cycle owing to its ability to traverse multiple hosts and its constant need to adapt to varying physiological conditions. It has therefore been hypothesized that, as part of its survival strategy, P. falciparum expresses a repertoire of heat shock proteins (Hsps) to facilitate its survival under severe physiological conditions, such as the temperature increase it encounters as it shuttles from the cold-blooded mosquito vector to the warm-blooded human host, the temperature increases that accompany malarial febrile episodes, and oxidative stress [1,2].
P. falciparum Hsps, such as PfHsp70, are ATP-dependent chaperones that are constitutively expressed to maintain cellular homeostasis under normal and stressful physiological conditions. These proteins play major roles such as the refolding of nascent or unfolded polypeptides, the translation and translocation of proteins, and the maintenance of cellular processes including protein assembly [3]. PfHsp70 is an essential cytosol-localized protein that cooperates with partner proteins or co-chaperones to facilitate protein folding and to support parasite development and pathogenesis. The protein is reportedly expressed throughout the parasite's erythrocytic stage, highlighting its significance for parasite development.
Additionally, it has been suggested as a possible contributor to antimalarial drug resistance, making it an ideal and prospective antimalarial drug target [4,5]. This justifies the pursuit of small-molecular-weight compounds against the protein. Zininga and co-workers (2017) reported two small inhibitors, namely polymyxin B (PMB) and epigallocatechin-3-gallate (EGCG), and postulated that these compounds inhibit the chaperone activity of PfHsp70 by interfering with the chaperone's ability to interact with its known functional partners [6].
This is thought to be due to the binding of the compounds to the N-terminal ATPase domain of the protein, which likely leads to competition for the binding site between protein substrates, adenosine triphosphate (ATP)/adenosine diphosphate (ADP), and the compounds. Furthermore, our previous studies have demonstrated that PfHsp70 is an efficient receptor for potential targeting by novel antimalarial compounds [7,8]. However, computational characterization and determination of the binding strength of chemo-ligands towards the identified protein are still lacking, yet these are crucial in the design of possible drug compounds [9].
Over the past decades, molecular docking has emerged as a powerful tool that provides insights into protein-ligand interactions by studying, predicting, and modeling the interactions of small molecules within the active site of a target receptor or protein at the atomic level [10]. In addition, molecular docking can be used to infer the ADMET properties of drug candidates, allowing prediction of a drug's pharmacokinetic behaviour and toxicity profile [11]. Specifically, in the current study, we conducted molecular docking, Prime/Molecular Mechanics-Generalized Born Surface Area (MM-GBSA) calculations, and molecular dynamics (MD) simulations to analyze the interaction mechanisms, dynamic behaviors, binding affinities, and binding modes of the protein-ligand complexes in a solvated physical environment.
Salomane et al. (2021) previously reported IMA and UAA as potential inhibitors of PfHsp70-1 chaperone activity [12]; however, the nature of this interaction is yet to be fully elucidated. Thus, the current study aims to conduct an in-depth analysis of the interaction between the identified compounds and PfHsp70 using computational and laboratory-based assays.
Molecular docking
The Hsp70 crystal structure was retrieved from the RCSB Protein Data Bank (PDB ID: 4J8F) [13]. The 4J8F structure was selected as the receptor based on the search and screening results. Hsp70 is a target protein that plays a central role in the cellular defence against toxic protein aggregation and in the maintenance of protein homeostasis, and has been proposed as a marker of malaria. Hsp70 has been shown to reduce pathologic protein aggregation in cellular models of Parkinson's and Huntington's disease and has proven useful in facilitating the proteolytic clearance of toxic, aggregation-prone proteins. Similarly, boosting Hsp70 might be advantageous in cancer therapy by accelerating the clearance of metastable, oncogenic mutant proteins. The 4J8F structure was prepared as the initial receptor for the docking studies with the Protein Preparation module implemented in Schrödinger Release 2019-2 [14]. This includes the addition of hydrogen atoms, removal of non-bonded heteroatoms and all crystal water molecules, assignment of partial charges and appropriate protonation states at pH = 7.0, and optimisation of the structure with the OPLS3e force field until the RMSD reached 0.3 Å. The missing loops and side chains of the 4J8F structure were added using the Prime module [15] in Schrödinger Release 2019-2 [16]. The three-dimensional structures of the iso-mukaadial acetate (IMA) and ursolic acid acetate (UAA) ligands were drawn in the Maestro 11.8 suite [16]. Both ligands were processed with the LigPrep tool [17] in Schrödinger Release 2019-2 to generate the most probable ionisation states, tautomers and enantiomers using Epik at pH = 7.0. Quantum mechanics/molecular mechanics (QM/MM) docking was performed using the QPLD (quantum-polarized ligand docking) procedure in Schrödinger to account for the receptor's polarization of the ligand charge. In addition, hydrogen atoms and OPLS 2005 force field partial charges were added and assigned to ensure the rationality of the charges. Charges were estimated using QM calculations, and the partial charges of each ligand complex were substituted accordingly. The 6-31G** basis set and the OPLS 2005 force field were used to perform single-point energy calculations and geometry optimization of the structures.
The QM region of all coordinates was free to adjust during the optimization.
The prepared protein and ligands were docked using the Glide module in the Schrödinger Release 2019-2 suite [15] with the standard-precision (SP) method to evaluate the binding pose of each compound in the Hsp70 binding sites. The scoring grids for docking were generated by enclosing the compounds in grid boxes of 28 × 28 × 28 and 34 × 34 × 34 points, centred at the x, y, z coordinates (26.220, 18.270, -19.00) and (32.250, 16.079, -28.584) for IMA and UAA, respectively, using the Receptor Grid Generation module. A grid box defined to cover the entire system with the same size and dimensions is required to ascertain the probable ligand-binding location on the protein [18,19]. However, the shape and properties of the receptor were represented on grids of different sizes and dimensions that progressively provided more accurate scoring of the ligand poses. Multiple scoring functions, including the docking score, Glide score, Glide emodel, Glide energy, and the Ecoul and Evdw terms, were used to choose the final best-docked structure. The Glide score is an extended and modified version of an empirically based scoring function [20]. Glide energy is the modified Glide emodel and van der Waals (vdW)-Coulomb interaction energy that combines the strain energy, Glide vdW and Coulombic terms, and the score of the ligand. The terms Ecoul and Evdw are the electrostatic and vdW interaction energies, respectively. Before applying this technique to the ligand-receptor complexes, we ran docking tests and generated several ligand conformations. During the docking step, we used Glide to create 100 initial poses and grouped them using a 1.5 Å RMSD threshold; after clustering, we retained ten representative ligand poses per ligand and scored and ranked them using Emodel. Here, the energies of the receptor, ligands, and complexes are estimated individually using the OPLS 2005 force field in a solvent environment, and the energy difference is then calculated. The top-ranking poses based on this energy difference were selected.
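To make the pose-selection step concrete, the following is a minimal sketch of the clustering logic described above (100 initial poses, a 1.5 Å RMSD threshold, and Emodel ranking). It is not Schrödinger code: the pose coordinates and Emodel scores are randomly generated stand-ins for Glide output, and the greedy clustering shown is just one simple way to realize the grouping.

```python
import numpy as np

def pose_rmsd(a: np.ndarray, b: np.ndarray) -> float:
    """Plain coordinate RMSD (Å) between two poses with matched atom order."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

def cluster_poses(poses, emodel, threshold=1.5):
    """Greedy clustering: each pose joins the first cluster whose seed pose
    lies within `threshold` Å; one representative (lowest Emodel, i.e. best)
    is kept per cluster, and representatives are ranked by Emodel."""
    clusters = []  # list of lists of pose indices
    for i, pose in enumerate(poses):
        for members in clusters:
            if pose_rmsd(pose, poses[members[0]]) < threshold:
                members.append(i)
                break
        else:
            clusters.append([i])
    reps = [min(members, key=lambda j: emodel[j]) for members in clusters]
    return sorted(reps, key=lambda j: emodel[j])

# Toy example: 100 slowly drifting 20-atom poses with random Emodel scores.
rng = np.random.default_rng(0)
poses = rng.normal(size=(100, 20, 3)).cumsum(axis=0) * 0.05
emodel = rng.normal(-60.0, 5.0, size=100)
best = cluster_poses(list(poses), emodel)[:10]  # ten representative poses
print(f"{len(best)} representatives, best Emodel = {emodel[best[0]]:.1f}")
```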
Molecular dynamics (MD) simulations
MD simulations were conducted for both protein complexes using the Desmond module [15,16] of Schrödinger Release 2019-2. Each system was first neutralised with sodium and chloride ions and then immersed in an orthorhombic box (10 × 10 × 10 Å buffer) of simple point charge water molecules, followed by energy minimisation with the OPLS3e force field and the Broyden-Fletcher-Goldfarb-Shanno algorithm. Subsequently, 100 ns simulations were performed in the isothermal-isobaric (NPT) ensemble at constant pressure (P = 1 atm) and temperature (T = 300 K) using the Martyna-Tobias-Klein barostat [21] and the Nosé-Hoover thermostat [22] with a 2.0 ps relaxation time. The SHAKE scheme was used to constrain all bonds in both the minimisation and MD simulation stages. Long-range electrostatic interactions were described by the particle mesh Ewald approach, and a 9 Å cut-off was used to treat the vdW forces. Finally, 1001 frames from the 100 ns MD trajectory were chosen to evaluate the simulation interactions. Trajectory data generated by Desmond were analysed with the simulation interaction diagram. In addition, the ligand-protein interactions, root mean square deviation (RMSD) and root mean square fluctuation (RMSF) were analysed to check the stability and residue fluctuations of the interacting complexes. Hydrogen bonds (H-bonds) were identified according to the following conditions: the acceptor-hydrogen-donor angle should be > 135° and the distance between the acceptor and donor heavy atoms should be < 3.5 Å.
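As a concrete illustration of the geometric H-bond criterion stated above, the sketch below tests a single donor-hydrogen-acceptor triplet; the coordinates in the example are hypothetical, and a production analysis would apply the same test to every candidate triplet in every trajectory frame.

```python
import numpy as np

def is_hbond(donor, hydrogen, acceptor,
             max_dist=3.5, min_angle=135.0) -> bool:
    """Geometric H-bond test: donor--acceptor heavy-atom distance < 3.5 Å
    and acceptor-hydrogen-donor angle > 135 degrees."""
    donor, hydrogen, acceptor = map(np.asarray, (donor, hydrogen, acceptor))
    if np.linalg.norm(acceptor - donor) >= max_dist:
        return False
    v1 = donor - hydrogen       # hydrogen -> donor
    v2 = acceptor - hydrogen    # hydrogen -> acceptor
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    return angle > min_angle

# Near-linear O-H...O geometry: accepted as a hydrogen bond.
print(is_hbond(donor=[0.0, 0.0, 0.0],
               hydrogen=[0.96, 0.0, 0.0],
               acceptor=[2.85, 0.1, 0.0]))  # True
```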
MM-GBSA calculation
Prime MM-GBSA, implemented in the Prime version 3 module [15], was used to calculate binding free energies (ΔG_bind) for the protein-ligand complexes by the following equations [7,23]:

ΔG_bind = ΔG_complex − (ΔG_protein + ΔG_ligand)

ΔG_bind = ΔE_vdW + ΔE_ele + ΔG_GB + ΔG_SA

Here, ΔG_complex, ΔG_protein and ΔG_ligand represent the free energies of the interacting complex, the protein and the ligand in the system, respectively. ΔG_bind contains vdW (ΔE_vdW) and electrostatic (ΔE_ele) interactions. ΔG_SA and ΔG_GB represent the non-polar and polar contributions to the solvation free energy. ΔG_SA was estimated from the solvent-accessible surface area (SASA) by the pairwise overlap approach with a probe radius of 1.4 Å, where ΔG_SA = 0.0072 × SASA. ΔG_GB was evaluated using the generalised Born (GB) model developed by Onufriev and co-workers. The entropy change TΔS associated with the ligand binding conformations was not considered in this study owing to its low prediction accuracy and high computational cost.
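The bookkeeping in these equations is simple enough to show directly. The sketch below reproduces the ΔG_bind arithmetic under the stated protocol (entropy omitted, ΔG_SA from the change in SASA); all numeric inputs are illustrative placeholders, not values from Table 2.

```python
# Minimal MM-GBSA bookkeeping following the equations above (kcal/mol).

def mmgbsa_dg_bind(g_complex: float, g_protein: float, g_ligand: float) -> float:
    """dG_bind = G(complex) - G(protein) - G(ligand)."""
    return g_complex - g_protein - g_ligand

def decompose(dE_vdw: float, dE_ele: float, dG_gb: float, dG_sa: float) -> float:
    """dG_bind as the sum of vdW, electrostatic, polar (GB) and non-polar (SA)
    terms; the entropy term -T*dS is omitted, as in the protocol above."""
    return dE_vdw + dE_ele + dG_gb + dG_sa

def dg_sa_from_sasa(delta_sasa: float) -> float:
    """Non-polar solvation from the buried surface area: dG_SA = 0.0072 * dSASA."""
    return 0.0072 * delta_sasa

print(mmgbsa_dg_bind(g_complex=-12450.0, g_protein=-11980.0, g_ligand=-430.0))  # -40.0
print(decompose(dE_vdw=-45.0, dE_ele=-10.0, dG_gb=20.0,
                dG_sa=dg_sa_from_sasa(-800.0)))  # about -40.8
```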
Surface plasmon resonance (SPR) analysis
The steady-state equilibrium binding kinetics of UAA/IMA against PfHsp70-1 were determined using a BioNavis SPR Navi 420 A ILVES multi-parametric surface plasmon resonance (MP-SPR) system (BioNavis, Finland) as previously described [18]. Degassed, filter-sterilised PBS was used as the running buffer. PfHsp70-1, immobilized onto a carboxymethyl dextran (CMD 3-D) gold sensor chip through amine coupling, served as the ligand. The amount of protein immobilized was just above 200 RU. IMA/UAA were injected at varying concentrations (0, 62.5, 125, 250, 1000, and 2000 nM) at a 50 µl/min flow rate. Lysozyme was also immobilized onto the same chip as a negative protein control. The interaction was allowed to proceed for 8 min at 25 °C to reach steady-state equilibrium, followed by 4 min of dissociation. The signals generated were analysed with Data Viewer (BioNavis, Finland). The signal generated by a channel lacking the protein ligand served as the baseline. To determine the equilibrium binding affinities, the resultant sensorgrams were analysed using TraceDrawer software version 1.8 (Ridgeview Instruments, Sweden).
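For readers unfamiliar with steady-state SPR analysis, the sketch below fits a one-site binding isotherm, R_eq = Rmax·C/(K_D + C), to equilibrium responses, which is the kind of fit TraceDrawer performs; the response values here are synthetic placeholders, not measured sensorgram data.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(conc_nM, rmax, kd_nM):
    """Steady-state one-site binding isotherm: R_eq = Rmax * C / (KD + C)."""
    return rmax * conc_nM / (kd_nM + conc_nM)

# Injected analyte concentrations (nM), matching the protocol above, and
# synthetic equilibrium responses (RU) standing in for sensorgram plateaus.
conc = np.array([62.5, 125.0, 250.0, 1000.0, 2000.0])
r_eq = np.array([9.8, 17.5, 28.0, 52.0, 62.0])

(rmax, kd), _ = curve_fit(one_site, conc, r_eq, p0=[70.0, 300.0])
print(f"Rmax = {rmax:.1f} RU, KD = {kd:.0f} nM")
```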
Binding pattern and affinity of IMA and UAA inhibitors
Molecular docking is an effective approach for identifying possible binding modes between proteins and ligands [27-30]. For molecular docking to be successful, the Hsp70 protein's active site must be identified. The accuracy of the docking protocol was therefore examined by re-docking the original ligand into the active site of the Hsp70 enzyme (self-docking) [18,19,31] before docking the test compounds (Fig. 1); this re-docking served as a benchmark for active site determination. Figure 1 shows the original ligand (red) and the re-docked ligand (blue) in almost the same position in the receptor (RMSD = 0.22 Å), which validated the docking protocol using the extra-precision (XP) Glide scoring function, in the presence of water molecules within 5 Å of the reference ligand. Moreover, the re-docked ligand is deeply embedded in the pocket of the Hsp70 receptor and formed hydrogen bonds with the amino acids Ser22, Thr36 and Asn37. The coordinates obtained from re-docking the original ligand against the receptor can be employed as reference coordinates for docking the IMA and UAA compounds; the native ligand bound in the spherical cluster is expected to provide precise coordinates based on the ligand reference coordinates of the crystal structure. Based on the initial coordinates generated through the re-docking procedure, and using flexible ligand conformations, IMA and UAA were docked into the Hsp70 receptor to analyse their bonding patterns and the amino acid residues with which they interact in the active site of the Hsp70 protein. Based on the docking results, the docking score is the most reliable measure for defining the binding affinities of the IMA and UAA complexes.
The docking results suggested that both compounds were held in the binding sites of the PfHsp70 protein by a combination of several hydrophobic, hydrogen-bond and salt bridge interactions with the receptor. The highest-binding compound to the PfHsp70 protein was IMA, with a docking score of -5.388 kcal/mol, compared with UAA (-4.329 kcal/mol), which agreed with the experimentally determined tendency (Table 1). The docking of the IMA and UAA ligands against the 1HSX receptor likewise showed good binding potential, as both ligands had low grid scores for this protein (Table 1).
However, the Glide energy of Hsp70 with the IMA compound was slightly lower than that with UAA. The protein-ligand interaction diagrams from the docking studies offered in-depth insights into the key amino acid residues contributing to the potency of the inhibitors (see Fig. 2).
These side chains, particularly those from residues in the exposed cavity, undergo conformational change when they engage with ligands at the substrate-binding site. For instance, a significant conformational change occurs in the side chains that collectively produce a distal subpocket near the exposed cavity. When Lys146 is turned out of the exposed cavity, the distal subpocket enlarges. In the crystal structure of Hsp70, the folded-in conformation of Lys146 decreases the exposed surface of the cavity, allowing a salt bridge interaction with the UAA inhibitor. Although Ser22, Thr36 and Asn37 did not directly interact with the IMA inhibitor, they serve as the exposed cavity's tail-cap, creating a more buried protein surface. According to the protein-ligand interaction diagram, the UAA compound formed a stable hydrogen bond with the Arg80 (1.94 Å) residue and a salt bridge interaction with the Lys146 (3.10 Å) residue (Fig. 2b), which could aid the smooth attachment of the UAA compound in the binding pocket of the PfHsp70 receptor. We observed hydrophobic interactions between the Leu73, Val74 and Val77 residues and the UAA compound. Moreover, the IMA compound, with the highest docking score, showed additional hydrogen bonding with the Thr35 (2.72 Å), Thr36 (2.03 Å), Asn37 (1.80 Å) and Gly220 (1.81 Å) residues, which could increase the chance of the compound possessing good activity (Fig. 2a). The IMA compound formed four hydrophobic interactions, with the Ile290, Val359, Met362 and Val390 residues. The protein-ligand interaction results suggest that the UAA-Hsp70 complex formed fewer hydrogen bonds than the IMA-Hsp70 complex. The molecular docking results for the 1HSX protein showed several types of interactions with amino acid residues in the receptor's active site (Fig. 2c and d). The IMA ligand formed two hydrogen bonds, from the Asn103 and Trp63 residues, in the active site of this pose. Meanwhile, the functional groups of the UAA compound formed two hydrogen bond interactions, from the Lys97 and His15 residues to the oxygen atoms of the ketone carbonyl and alcohol groups, respectively. For the UAA compound, the Lys96 residue is involved in a salt bridge interaction.
The binding free energy of IMA and UAA inhibitors predicted by MM/GBSA
Following the MD simulations, the MM-GBSA binding free energies of the two complexes were evaluated, and the results are presented in Table 2. Generally, a more negative binding free energy indicates a more active inhibitor [32]. As presented in Table 2, the calculated ΔG_bind of UAA (-31.19 kcal/mol) was slightly stronger than that of IMA (-30.26 kcal/mol). Thus, UAA showed greater inhibitory activity against PfHsp70 than the IMA compound. This difference was mostly caused by Coulomb interactions, such as the salt bridge between Lys146 (3.10 Å) and the Hsp70 protein; UAA is therefore a selective PfHsp70 inhibitor. The binding free energy components suggest that the vdW forces were the greatest contributors to the ΔG_bind of both ligands, followed by the electrostatic energy (ΔG_elec), which suggests that conjugated effects are important for the formation of the protein-ligand complexes. Moreover, the influence of conjugated effects on the UAA-Hsp70 complex was greater than on the IMA-PfHsp70 adduct, since the UAA-PfHsp70 complex has a more negative ΔG_vdW value than the IMA-Hsp70 complex. The polar solvation energy is unfavourable for ΔG_bind, since the ΔE_elec solvation values were positive, whereas the non-polar solvation energy is favourable, because the ΔE_vdW values are negative. However, the IMA ligand could bind more efficiently within the PfHsp70 pocket than the UAA compound according to the highest lipophilic energy values. The difference in the non-polar contributions between the IMA (-39.42 kcal/mol) and UAA (-50.82 kcal/mol) compounds was 11.40 kcal/mol. Even though the unfavourable polar energy contribution of UAA (25.50 kcal/mol) is higher than that of IMA (15.66 kcal/mol), it cannot offset the gain in ΔG_bind produced by the vdW interactions [33]. The PfHsp70 complexes show good binding affinity compared with the 1HSX protein. These results indicate that the IMA and UAA compounds have excellent potential to inhibit the PfHsp70 enzyme through strong binding at its active site. More specifically, the UAA ligand has a lower ΔG_bind than IMA, suggesting that, as a 1HSX inhibitor, UAA would have better potency than the IMA ligand. Overall, the results correlate well with the grid scores calculated by molecular docking. A ligand with a low ΔG_bind is expected to bind the amino acid residues in the receptor's active site that are responsible for the activity of the 1HSX protein.
MD simulation analysis
To gain deeper structural and energetic insights, the docked complexes of the original and re-docked ligands were subjected to MD simulations [18,31]. The main interactions arising from the docking were not fully maintained after the MD runs; we even noted new specific interactions accompanied by an optimal re-organization of the binding site residues. MD simulations are regarded as an efficient tool for investigating the dynamics and conformational flexibility changes occurring during protein-ligand binding using the RMSD and RMSF data; see Fig. 3.
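The RMSD and RMSF quantities used throughout this section are straightforward to compute from an aligned trajectory. A minimal numpy sketch follows; it assumes the frames have already been superposed on the reference (a step Desmond's analysis tools handle internally) and uses a synthetic trajectory in place of real output.

```python
import numpy as np

def rmsd_per_frame(traj: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """RMSD (Å) of each frame to a reference; traj has shape
    (n_frames, n_atoms, 3) and is assumed already superposed."""
    diff = traj - ref[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2).mean(axis=1))

def rmsf_per_atom(traj: np.ndarray) -> np.ndarray:
    """RMSF (Å) of each atom about its time-averaged position."""
    mean_pos = traj.mean(axis=0)
    return np.sqrt(((traj - mean_pos) ** 2).sum(axis=2).mean(axis=0))

# Synthetic 1001-frame trajectory (matching the frame count above) of 50 atoms.
rng = np.random.default_rng(1)
ref = rng.normal(size=(50, 3))
traj = ref + rng.normal(scale=0.5, size=(1001, 50, 3))
print(rmsd_per_frame(traj, ref).mean(), rmsf_per_atom(traj).max())
```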
Based on the RMSD results in Fig. 3a, both MD simulations attained equilibrium after 45 ns, and the systems showed good convergence; the RMSD values of both complexes were consistent and stable thereafter. The fluctuations of the protein amino acid residues, the stability changes of identical residues, and the influence of inherent local conformational flexibility on the receptors were explored by measuring the RMSF (Fig. 3b). The RMSF trends of the fluctuations and distributions of the IMA-PfHsp70 and UAA-PfHsp70 complexes suggest that the binding patterns of both inhibitors are almost identical. According to the RMSF plot, the amino acid residues of the IMA-PfHsp70 complex fluctuated more significantly than those of the UAA-Hsp70 complex, signifying that the IMA inhibitor could form strong interactions with PfHsp70. Key amino acid residues, including Asp32, Thr35, Thr36, Asn37, Lys94, Leu218, Gly220, Gly221 and Gly360, formed strong H-bond interactions with the IMA inhibitor. Analysis of the RMSF likewise identified the key amino acid residues (i.e., Asn53, Leu73, Lys146 and Asn556) involved in H-bond formation in the UAA-Hsp70 complex.
Clearly from Fig. 3c, the RMSD values of the IMA and UAA compounds fitted on the 1HSX protein Cα atoms fluctuated within the ranges of 1.5-2.4 Å and 1.7-2.3 Å, stabilized after 70 and 80 ns, respectively, and finally converged at approximately 2 Å. Compared with UAA, the RMSD of IMA tended to equilibrate after 22 ns, which indicated that UAA is more active than IMA in the 1HSX receptor. The analysis showed that the fluctuation of the RMSD values of the amino acid residues in both systems is less than 3 Å, indicating that the binding of the IMA and UAA compounds to the 1HSX protein was relatively stable during the MD simulations. Furthermore, the range of fluctuation in the RMSD value of UAA was relatively lower than that of IMA, demonstrating that UAA formed a relatively firm interaction in the active sites of 1HSX. Figure 3d shows the RMSF value of the 1HSX protein complexes as a function of residue number. The RMSF distributions and dynamic fluctuation patterns of the IMA and UAA complexes were comparable, indicating that the binding of these inhibitors to 1HSX was similar. Additionally, the majority of protein residues in each complex had RMSF values lower than 4 Å. Moreover, the RMSF fluctuations in the IMA complex were lower than those in the UAA complex, suggesting that it had less structural mobility than the UAA complex. These findings revealed that the binding affinities of IMA and UAA to the 1HSX protein were generally favourable.
The density of a protein structure may be characterized using the radius of gyration (R_g), which depends on how close the atomic masses are to the centre of mass of a given molecule. R_g analysis is thus one way to assess the compactness of complex structures, and its value can give a clear picture of whether a structure is unfolded or stably folded; a higher R_g value denotes a dynamic simulation in which the system expands. Figure 4 shows the changes in R_g within the four systems over the 100 ns MD simulations, computed for each protein with a ligand in its active site. The best-docked model's R_g values began at 3.2 Å in the MD simulations, and the structure steadily expanded and shrank within this limit; R_g remained almost the same throughout the process. Even though the R_g value increased from 3.2 to 3.7 Å during 26-59 ns and subsequently decreased to 3.65 Å, returning the system close to its initial state, it is clear that the receptor-ligand complex remained stable and firmly packed. Moreover, the R_g of UAA steadily decreased to stability before converging around 4.65 to 4.95 Å, demonstrating that UAA significantly affects the density of the protein structure. The results show that the mean R_g values of the PfHsp70 complexes are relatively identical and consistent with the mean R_g values of 1HSX. The optimum docking position closely matched the reference pattern based on the R_g values obtained from the MD simulations of the protein's compactness. Thus, these results identify the folded PfHsp70 structure as stable.
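For reference, the sketch below implements the mass-weighted R_g definition used here on a single hypothetical frame; trajectory-wide R_g curves such as those in Figure 4 would apply it frame by frame.

```python
import numpy as np

def radius_of_gyration(coords: np.ndarray, masses: np.ndarray) -> float:
    """Mass-weighted radius of gyration (Å):
    Rg = sqrt( sum_i m_i * |r_i - r_com|^2 / sum_i m_i )."""
    com = np.average(coords, axis=0, weights=masses)
    sq = ((coords - com) ** 2).sum(axis=1)
    return float(np.sqrt(np.average(sq, weights=masses)))

# Toy frame: 100 pseudo-atoms of equal mass.
rng = np.random.default_rng(2)
coords = rng.normal(scale=2.0, size=(100, 3))
print(f"Rg = {radius_of_gyration(coords, np.ones(100)):.2f} Å")
```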
The solvent-accessible surface area (SASA) parameter measures the extent to which water molecules can interact with the complex's surface, including the active site, during the simulation. The results showed no significant difference in the mean SASA values (Å²) between the 1HSX protein and the PfHsp70 complexes (Fig. 4). For each complex, the overall analysis of the SASA values revealed good stability, as evidenced by the low fluctuation of the SASA values.
Hydrogen bonds (H-bonds) are essential for ligand binding. Because hydrogen-bonding characteristics have such a significant impact on drug selectivity, metabolization, and absorption, they must be considered when developing new drugs. H-bond distance analysis was carried out on each complex to estimate the H-bond interactions. A timeline representation of the interactions and contacts (H-bonds, hydrophobic, ionic, and water bridges) is summarized in Fig. 5. The top panel shows the total number of specific interactions the PfHsp70 protein makes with the ligand over the 100 ns MD trajectory. The bottom panel shows which residues interact with the ligand in each trajectory frame. According to the scale to the right of the figure, certain residues make many specific contacts with the ligand, depicted by a deeper orange colour. Over the MD simulations, frames with one, two or three hydrogen bonds make up the bulk of the protein-ligand complexes, with four hydrogen bonds being the least common; the PfHsp70 protein complexed with the IMA and UAA compounds could form a maximum of 3-4 hydrogen bonds with the active site residues. Fluctuations in the number of hydrogen bonds are relatively small, indicating that the systems are in a stable state. The steady number of hydrogen bonds, mostly one to three, helps the ligand remain stable by preventing rapid changes in the hydrogen bonding, further proving that stable interactions formed between the compounds and the proteins in the equilibrium state. According to these findings, hydrogen bonds are essential to the conformational stability of all four complexes.
We observed that the PfHsp70 residues Thr35, Thr36, Asn37, Lys94, Gly220, Gly221 and Thr222 formed stable H-bonds with IMA (Fig. 6a), while the Gly361 residue interacted with IMA through a bridging water molecule, forming a strong H-bond.
Residue Lys146 contributed greatly to UAA binding, with an interaction fraction of up to 0.98 (Fig. 6b). H-bond interactions were also established between the UAA inhibitor and residues Ser54, Asn53 and Glu149, the last two through bridging water molecules.
Prediction of absorption, distribution, metabolism, and excretion (ADME) properties
Before conducting in vivo testing, it is crucial to examine the pharmacokinetic properties of any molecule being considered as a potential drug candidate, including absorption, distribution, metabolism, excretion, and toxicity (ADMET). The drug-likeness of compounds is predicted by analysing their ADME properties against Lipinski's rule of five. These essential parameters determine not only the similarity of a substance to a drug, but also its effectiveness within the body. ADME analysis provides an important picture for predicting the activity of a candidate as a drug before using molecular docking against the PfHsp70 enzyme. The predicted properties vital for the pharmacokinetic profile of drugs in living systems [34] are summarised in Table 3.
The Lipinski rule, also known as the rule of five, uses basic molecular descriptors to assess drug-likeness. According to this rule, most drug-like molecules possess a log P value of at most 5, a molecular weight of at most 500 Da, and no more than 10 hydrogen bond acceptors and 5 hydrogen bond donors; molecules that violate more than one of these criteria may face challenges with bioavailability. The studied IMA and UAA compounds have molecular weights < 500 Da. Low-molecular-weight drug molecules (< 500 Da) are more easily transported, diffused, and absorbed than heavy molecules, and molecular weight is an important aspect of therapeutic drug action. The numbers of hydrogen bond acceptors and donors in the tested compounds were found to be within the Lipinski limits. The log Kp values ranged from -3.093 to -2.896, suggesting good skin permeability for the compounds. Cell permeability (QPPCaco), which assesses permeability across biological membranes and is a key factor governing drug absorption, ranged from 259 to 311, whereas the pMDCK (cell permeability) values ranged between 146 and 178. Moreover, both compounds showed a good partition coefficient (QPlogPo/w) (2.723 to 7.029), which is critical to the distribution and absorption of drugs within the human body. Drugs are mostly taken in oral formulations, which must be absorbed by the intestine to exert their effects; the % human oral absorption for both inhibitors ranged between 88 and 100%. Their QPlogHERG (K+ channel blockage) values, in the range of -1.633 to -2.155, were above the concern threshold of -5; the water solubility (QPlogS) ranged from -3.270 to -8.244; and the skin permeability (log Kp) data, from -2.896 to -3.183, were all within the acceptable range. The QPlogBB parameter, which indicates the capability of a drug to pass through the blood-brain barrier, was also within the acceptable range. More importantly, the QPlogPo/w, QPlogHERG, QPPCaco, and human oral absorption results showed that the compounds have the advantages of high solubility, low cardiotoxicity, good membrane permeability, and good oral absorption. For the toxicity assessment, both compounds were negative for AMES mutagenicity and skin sensitization, indicating that they are not mutagenic and do not cause skin sensitization. Carcinogenicity was also predicted to be negative, indicating that the compounds do not cause mutations in the organism and offer some margin of safety. These results indicate that the compounds exhibit good characteristics in intestinal absorption, distribution volume, and toxicity, present high biological activities, and are therefore potentially interesting candidates for further studies. Overall, the results indicated that the IMA inhibitor met the criteria for good ADME properties as a drug. In particular, on the toxicity parameters each candidate shows good suitability as a drug because it is non-toxic and is therefore more likely to be developed into a therapeutic molecule. To further investigate the pharmacokinetic properties of the compounds, their therapeutic effects should be studied in a suitable in vivo model.
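A rule-of-five screen of the kind described above reduces to a few threshold checks. The sketch below counts violations from plain descriptor values; the per-ligand numbers are hypothetical placeholders, since in practice the descriptors come from a tool such as QikProp.

```python
def lipinski_violations(mol_weight, log_p, h_donors, h_acceptors) -> int:
    """Count rule-of-five violations: MW <= 500 Da, logP <= 5,
    <= 5 H-bond donors, <= 10 H-bond acceptors. More than one
    violation flags likely bioavailability problems."""
    rules = [mol_weight > 500, log_p > 5, h_donors > 5, h_acceptors > 10]
    return sum(rules)

# Hypothetical descriptor values for the two ligands (illustrative only).
for name, props in {"IMA": (322.4, 2.7, 0, 5),
                    "UAA": (498.7, 7.0, 0, 4)}.items():
    v = lipinski_violations(*props)
    print(f"{name}: {v} violation(s) -> {'pass' if v <= 1 else 'flag'}")
```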
SPR analysis: kinetics data for the interaction of PfHsp70 with either IMA or UAA
We further conducted SPR analysis to explore the direct binding of UAA and IMA (Fig. 7). The findings demonstrate that PfHsp70 binds either of the two compounds within the micromolar affinity range (Table 4), suggesting that PfHsp70 exhibits modest affinity for the two compounds. Assaying the two compounds against lysozyme produced flat SPR sensorgrams (Supplementary Figure S1), validating that neither compound interacts with lysozyme as a non-Hsp70 control. The data confirm that the interaction of PfHsp70-1 with the two compounds is specific, in agreement with the in silico findings. Furthermore, the findings suggest that PfHsp70 is capable of directly binding each of the two compounds. However, our current findings do not rule out the prospect of the compounds binding to targets other than PfHsp70 (Fig. 7).
Conclusion
In this study, molecular docking, MD and MM/GBSA simulations were used to investigate the binding and selectivity mechanisms of the IMA and UAA compounds towards the PfHsp70 receptor. The MM/GBSA analysis suggests that vdW interactions play a key role in the binding free energies of the IMA and UAA compounds in the PfHsp70 binding pocket. Moreover, the protein-ligand interaction diagrams reveal the role of hydrogen bonding and hydrophobic interactions in enhancing the binding affinity and stability of the inhibitors at the pocket site. In the current study, we established that both IMA and UAA inhibit PfHsp70 chaperone function. Furthermore, the predicted ADME properties, in the acceptable range for both inhibitors, suggest that they are drug-like candidates. The in silico studies offer insights into the structural features of the IMA and UAA inhibitors and their interaction with PfHsp70. Altogether, our findings add UAA and IMA to the growing list of known PfHsp70 inhibitors.
Fig. 1
Fig. 1 The re-docking result of the re-docked ligand (blue) overlapping with the original ligand (red) in the Hsp70 protein, and the 2D interaction diagram
Fig. 2
Fig. 2 The protein-ligand interactions diagram of (a and b) Hsp70 and (c and d) 1HSX receptor with (a,c) IMA and (b,d) UAA compounds
Fig. 3
Fig. 3 (a) RMSD against the dynamics simulation time and (b) RMSF against residue number of the Hsp70-ligand complexes. (c) RMSD against the dynamics simulation time and (d) RMSF against the residue number of the 1HSX complexes
Fig. 4
Fig. 4 The radius of gyration (above), and the solvent-accessible surface area (below) of (a and b) Hsp70 and (c and d) 1HSX receptor with (a, c) IMA and (b, d) UAA compounds
Table 1
Several scoring functions of IMA and UAA complexes. All values are in kcal/mol.
Table 2
Calculated individual energy components and ΔG bind predicted by Prime/MM-GBSA method (kcal/mol)
Table 3
The predicted principal descriptors and physiochemical descriptors for IMA and UAA compounds towards the PfHsp70 receptor. *indicates a violation of the 95% range
"Medicine",
"Chemistry",
"Biology"
] |
Midsummer Atmospheric Changes in Saturn’s Northern Hemisphere from the Hubble OPAL Program
Using the Hubble Space Telescope, Saturn was observed in 2018, 2019, and 2020, just after the northern hemisphere summer solstice. Analysis of multispectral imaging data reveals three years of cloud changes associated with a 70° N storm that began in 2018. Additionally, there is an increase in equatorial brightness and perhaps haze optical depth at 0° to 7° N. There are small midsummer changes at the north pole, with a thin blue feature near the polar hexagon’s outer edge disappearing between 2019 and 2020 and increasingly reddish polar haze. Zonal winds at most latitudes remain close to values obtained by the Cassini mission with a slight increase of winds in the equatorial zone. Yearly cloud changes, while noticeable, are small compared with the changes observed between the Voyager (northern spring) and Cassini (southern summer to northern spring) eras, but further observations will provide a longer baseline for comparison.
Introduction
The Hubble Outer Planet Atmospheres Legacy (OPAL) program is a yearly Directorʼs Discretionary Time observing campaign to image each of the outer planets. Although the campaign began in 2014, Saturn was not observed until 2018, as the NASA Cassini mission was still in operation and included frequent observations of that planet. The goal of OPAL is to provide time coverage of the planets' atmospheres that includes adequate cadence for studying atmospheric dynamics while also covering a long temporal baseline for trending changes in cloud structure, color, and zonal winds. Each planet is observed near its respective Earth opposition and includes enough Hubble orbits to obtain complete global coverage over two planetary rotations to allow the extraction of zonal wind fields.
Saturnʼs dynamical activity and the seasonal insolation cycle drive aerosol and cloud structure changes. The seasonal cycle, produced by the tilt of the rotation axis of the planet, its oblateness, the orbital eccentricity, and ring shadows, mainly affects the upper atmosphere (Fletcher et al. 2018). Previous studies of Saturnʼs cloud structure and colors find a strong seasonal influence on hemispheric asymmetry and latitudinal variation (West et al. 2009; Pérez-Hoyos et al. 2016; Sanz-Requena et al. 2018). Analysis of Hubble data from 1991 to 2004, using three different instruments and covering wavelengths from 231 to 2370 nm, showed dramatic color changes at the south pole (Karkoschka & Tomasko 2005). Equatorial changes were noted on timescales of months, largely due to the 1994 planet-encircling equatorial storm. Attempts to find aerosol model fits from center-to-limb scans and principal component analysis produced mixed results, depending on the spectral coverage used (Karkoschka & Tomasko 2005).
However, they deduced that most latitude variation is in the upper tropospheric aerosols (P ∼ 100-300 mbar). Prior to Cassiniʼs arrival, Pioneer and ground-based data indicated that the clouds in the north were thicker than those in the south, but that trend began to reverse in 2007-2008 (West et al. 2009).
Saturnʼs latitudinally banded structure (particularly in methane absorption bands) is thought to be tied to its zonal winds, as on Jupiter, though the correlation between band edges and wind jet locations is not as well defined; see Figure 1 (e.g., Karkoschka & Tomasko 2005; DelGenio et al. 2009; West et al. 2009). Unlike Jupiter, very large changes have been observed in Saturn's zonal wind structure, particularly at the equator (Sánchez-Lavega et al. 2003, 2004; Porco et al. 2005; DelGenio et al. 2009). Cassini measurements from continuum band images in 2004 indicated a slower equatorial jet than was observed by Voyager, but higher than that determined from Hubble observations (Sánchez-Lavega et al. 2000, 2003, 2004; Porco et al. 2005; García-Melendo et al. 2011). This may be partially due to altitude changes and corresponding thermal wind shear (Pérez-Hoyos & Sánchez-Lavega 2006) or to quasiperiodic oscillations in the winds (Flasar et al. 2005; Fouchet et al. 2008). Cassini data acquired later in the mission showed a small increase in equatorial wind velocity, but still below the Voyager-measured winds; the only features observed to move at the very high Voyager velocities were associated with a 2015 storm measured by Hubble (Sánchez-Lavega et al. 2016). Tracking of long-lived features on Hubble Space Telescope (HST) and ground-based images throughout 2018 showed that discrete features moved at the speeds of the Cassini zonal winds except at the equator, where discrete features moved faster but with variable results (Hueso et al. 2020). Thus, any oscillation or seasonally induced component of the winds is not yet well characterized.
Here we analyze data from the first three years of the Hubble OPAL programʼs observations of Saturn. Owing to the current view from Earth, we focus on changes occurring in the northern hemisphere, as Saturn moves away from the northern hemisphere summer solstice. Although the OPAL data set is limited in viewing angles, as a dedicated observing program, it provides more constant spectral coverage and temporal cadence than was available prior to Cassini. In Section 2, we describe the available OPAL data sets, including spectral and longitudinal coverage. Section 3 documents how the cloud colors have changed over time, results from principal component analyses, and comparative color and opacity indices. In Section 4, we discuss changes in the zonal wind profiles and solar insolation and how they may correspond with the observed color variations. Finally, we summarize our findings and future work as the OPAL program extends into the next Saturnian season.
Observations
Saturn was observed with the Hubble Wide Field Camera 3 (WFC3) UVIS channel on 2018 June 6, 2019 June 20, and 2020 July 4 over two rotations of the planet in each cycle. In 2019, some partial Hubble orbits were lost due to failures in the guide star lock, so only a single rotation was completed in all filters. The filter sets were chosen to span from ultraviolet to near-infrared wavelengths, including methane band filters. In 2019, and subsequent years, the narrowband F658N filter was exchanged for another methane band filter at 892 nm to provide more information on cloud altitude. Table 1 summarizes the coverage and filters used in each yearʼs observations.
Each yearʼs data are calibrated through the WFC3 pipeline and deposited in the Mikulski Archive for Space Telescopes. Each image is postprocessed to remove cosmic rays and fringing (in narrowband long-wavelength filters; see Wong 2011 and Wong et al. 2020) and navigated for the planet center using an iterative limb and ring fit (Simon et al. 2015). The images are converted from radiance to reflectance (Simon et al. 2015). The uncertainties on I/F are on the order of 1%-2% in most filters and 3%-5% in the FQ889N filter (Simon et al. 2018).
Latitudinal Color Variations
The 2018-2020 global maps show little large-scale longitudinal structure beyond the high-latitude storms (65°-70°N) observed in 2018; see Figure 1. However, latitudinal variation and banded structure are evident in each of the global, three-color composite maps. From date to date, some of the bands changed slightly in color, which is most easily seen by comparing latitudinal profiles; see Figure 2. To compute the profiles, scans were taken on individual Minnaert- and limb-corrected images, one per date and filter; images were chosen to avoid features like the 2018 polar storms. To avoid any potential cloud features or residual limb effects, pixels within ±0.5° of the central meridian longitude were averaged in 0.18° latitude bins (i.e., 500 points from the equator to the pole).
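A schematic version of this reduction, a Minnaert limb correction followed by central-meridian binning, is sketched below. The Minnaert exponent k, the pixel geometry, and the demo data are placeholders; the values used in the actual OPAL pipeline are filter dependent.

```python
import numpy as np

def minnaert_correct(i_over_f, mu0, mu, k=0.95):
    """Minnaert limb correction: observed reflectance is modeled as
    I/F = (I/F)_0 * mu0**k * mu**(k-1), so dividing by the angular
    factor recovers the limb-corrected (I/F)_0. k = 0.95 is an
    illustrative placeholder."""
    return i_over_f / (mu0 ** k * mu ** (k - 1.0))

def latitudinal_profile(i_f_corr, lat, lon, clon, n_bins=500):
    """Average pixels within +/-0.5 deg of the central meridian `clon`
    into uniform latitude bins (500 bins = 0.18 deg, as above)."""
    near_cm = np.abs(lon - clon) <= 0.5
    edges = np.linspace(0.0, 90.0, n_bins + 1)
    idx = np.clip(np.digitize(lat[near_cm], edges) - 1, 0, n_bins - 1)
    prof = np.full(n_bins, np.nan)
    for b in np.unique(idx):
        prof[b] = i_f_corr[near_cm][idx == b].mean()
    return 0.5 * (edges[:-1] + edges[1:]), prof

# Toy demo on synthetic pixels: the correction recovers a flat (I/F)_0 = 0.4.
rng = np.random.default_rng(3)
lat = rng.uniform(0, 90, 20000); lon = rng.uniform(0, 360, 20000)
mu0 = mu = np.cos(np.radians(lat)) * 0.9 + 0.05
i_f = 0.4 * mu0 ** 0.95 * mu ** (0.95 - 1.0)
lats, prof = latitudinal_profile(minnaert_correct(i_f, mu0, mu), lat, lon, clon=180.0)
```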
The most notable feature is increased I/F at the equator in all filters from 395 to 763 nm. Temporal changes are also seen over a narrower wavelength range (467-763 nm), particularly between 60° and 80°N, with smaller variations elsewhere. These wavelengths are most sensitive to changes in the tropospheric clouds and hazes. The UV and methane gas absorption band filters most sensitive to stratospheric opacity and aerosols show little I/F variation from year to year. Similarly, Karkoschka & Tomasko (2005) noted little change in I/F from 1995 to 2003 at the UV wavelengths sensitive to stratospheric haze. However, they did observe an increased I/F at southern polar latitudes at wavelengths from ∼600 to 1080 nm, and a decrease at 410 nm at most southern latitudes.
High-pass-filtered versions of the HST maps can also be used to investigate the detailed morphology of the cloud systems present on the planet in each year; see Figure 3. The 2018-2020 maps show similar morphologies and features without major changes except at the polar latitudes, which were significantly modified by the convective storms in 2018 and 2020. Some cloud systems visible in the maps are long lived and can be seen throughout the three years of OPAL observations, including subpolar vortexes at 60°-65°N.
Principal Component Analyses
To understand how Saturnʼs spectrum changes with latitude, a map was made combining longitude sections from the three years of observation and including all filters common to all sets. The color 3 yr map is shown in Figure 4, top. This 3 yr map allows comparison of both latitudinal differences and temporal variations using a common set of principal components. The coloration of individual bands changes from year to year, with 25° to 35°N becoming redder from 2018 to 2019 and changing back in 2020. The distinct narrow band at 50°N gets redder in 2019 and remains that way in 2020. Lastly, the high latitudes from 65°N to the pole vary the most, first appearing less red in 2019, with the entire region becoming quite red in 2020. This is primarily due to the decreased reflectance at bluer wavelengths (Figure 2) produced by the intense convective activity in 2018 and 2020 (Sánchez-Lavega et al. 2020, 2021).
To understand these changes further, a principal component analysis was first run on each yearʼs global maps separately, as well as across the combined 3 yr map; the contributions to the overall spectrum from the first six components are shown in Table 2. The combined 3 yr components were also mapped and are shown in Figure 4, bottom. For each year, and for the 3 yr map, PC1 contains the majority of the image variation. Even with coarse spectral coverage, PC1 corresponds to variations in tropospheric aerosol opacity and cloud-top height, in agreement with Karkoschka & Tomasko (2005), as its coefficients mirror Saturnʼs overall spectrum; see Figure 5, top left. PC2ʼs coefficients have an overall red spectral slope and contribute around 21% of the image variation; see Figure 5, top middle. PC2 is nearly identical in shape to that found by Karkoschka & Tomasko (2005), which they attributed to the optical depth of stratospheric aerosols.
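The PCA step itself is conceptually simple: each map pixel is treated as a sample and each filter as a feature. A hedged sketch using scikit-learn follows, with a random cube standing in for the calibrated filter maps and an arbitrary filter count.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical multispectral map cube: (n_filters, n_lat, n_lon) reflectances.
rng = np.random.default_rng(4)
n_filters, n_lat, n_lon = 8, 180, 360
cube = rng.random((n_filters, n_lat, n_lon))

# Each pixel is a sample; each filter is a feature.
X = cube.reshape(n_filters, -1).T            # (n_pixels, n_filters)
pca = PCA(n_components=6)
scores = pca.fit_transform(X)                # per-pixel PC amplitudes
pc_maps = scores.T.reshape(6, n_lat, n_lon)  # PC1..PC6 as maps

# Fraction of image variance carried by each component (cf. Table 2).
print(np.round(pca.explained_variance_ratio_, 3))
```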
PC3 accounts for about 1% of the image variation and has a slightly bowl-shaped spectral shape, indicating sensitivity at green to red wavelengths, and there is some slight difference in the shape for the 1 yr and 3 yr data sets; see Figure 5, top right. PC4 through PC6 are contributions from other minor variations in the spectral shape and contribute <1% to the image variance. Our PC3 is similar to the combined PC3 and PC4 in Karkoschka & Tomasko (2005), which they attributed to tropospheric aerosol particle size. However, their PC3 was spectrally flat, except for a feature near 410 nm. Karkoschka & Tomasko (2005) found that their PC1 increased most notably at the southern polar latitudes, with a slight decrease at midlatitudes, as Saturn moved from equinox to southern summer solstice. PC2 was highest at the south pole and showed little change over time, except possibly increasing slightly at most latitudes in 1998. Although its variations are small, PC3 was lower at the south pole in 1995 and increased at most southern hemisphere latitudes from 1995 to 2003, consistent with the overall I/F change they observed at 410 nm.
Latitudinal profiles of our retrieved principal component coefficients are shown for each year of observation in Figure 5, bottom panels. In PC1, the north pole is darker than the equator, similar to the prior results for the south pole (Karkoschka & Tomasko 2005). From 2018 to 2020, PC1 increased slightly at the equator and decreased from 60° to 70°N, with only slight changes at the north pole. PC2 is largest at the equator, opposite to the Karkoschka & Tomasko (2005) results, indicating more stratospheric haze at low latitudes, even accounting for the larger seasonal coverage in Karkoschka & Tomasko (2005). PC2 increases slightly at the equator over time, with low-magnitude variations seen at other latitudes. PC3 shows the least change, and only from 60° to 80°N. For PC1 and PC2, the spatial variation is much greater than the temporal variation over the 3 yr span observed, while for PC3, the magnitudes of the temporal and spatial variation are of the same order.
The PC changes noted in this analysis are much smaller in magnitude than those found from the 1995 to 2003 comparisons in Karkoschka & Tomasko (2005). In both our study and Karkoschka & Tomasko (2005), a single PC basis set was used throughout the full time period of each study, enabling temporal variation to be directly compared. The differences between our respective analyses indicate that the timescale for temporal variation is largely seasonal, because the three years of OPAL data cover less than a full Saturn season. Karkoschka & Tomasko (2005) found strong temporal variation in PC1 at polar latitudes <70°S from southern spring into summer, while we observe virtually no change at midsummer on the northern pole.
Color and Cloud Indices
Although the 2018-2020 PCs highlight that the majority of the variation in Saturnʼs maps is due to overall brightness or possibly spectral slope variations (whether due to stratospheric aerosol optical depth or not), this analysis is limited by the wavelengths available. It is difficult to disentangle how the color changes correspond with underlying cloud structure or particle properties without the further wavelength and viewing angle coverage needed to find unique solutions in radiative transfer analyses; HST's high subobserver latitude provides little information from center-to-limb scans at high planetary latitudes. Instead, we use filter ratio indices as a proxy to compare cloud opacity and height with color changes. We compute a color index, CI, as the ratio of F395N to F631N brightness. An increase in CI value indicates higher reflectivity at 395 nm relative to 631 nm and a bluer (or less red) appearance. For particles with similar sizes, CI variations indicate changes in the particles' imaginary refractive index; for example, UV-blue-absorbing particles will decrease this index. We also compute an atmospheric opacity index, AOI, as the ratio of FQ889N (a filter centered on a strong methane gas absorption band) to F275W. Both the UV and methane gas absorption band filters are sensitive to cloud opacity and altitude because of cloud particle scattering. Locations with higher/thicker clouds, such as the equator, tend to be brighter at 889 nm (because of the decrease in methane gas absorption) and darker at 275 nm (because of the decreasing brightness produced by Rayleigh scattering); see Figure 2. Therefore, these indices allow us to make quantitative comparisons of the cloud properties between different regions on a given date (spatial changes) and of the same region on different dates (temporal changes).
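The two indices are plain reflectance ratios, as the short sketch below makes explicit; the scan values in the example are placeholders rather than OPAL measurements.

```python
import numpy as np

def color_index(f395n: np.ndarray, f631n: np.ndarray) -> np.ndarray:
    """CI = I/F(F395N) / I/F(F631N); higher means bluer (less red)."""
    return f395n / f631n

def opacity_index(fq889n: np.ndarray, f275w: np.ndarray) -> np.ndarray:
    """AOI = I/F(FQ889N) / I/F(F275W); higher means higher/thicker clouds
    (brighter in the methane band, darker in the UV)."""
    return fq889n / f275w

# Illustrative equatorial scan values (placeholders, not OPAL data).
print(color_index(np.array([0.18]), np.array([0.84])),
      opacity_index(np.array([0.12]), np.array([0.35])))
```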
The CI and AOI were computed from the 2019 and 2020 brightness scans (the 2018 data are not included as the FQ889N filter was not used in those observations). Figure 6 plots these two indices for the three regions where we observe the largest variations in brightness/color: near the equator (left panel), the polar storm latitudes from 65°to 75°N (middle panel), and the polar region from 80°to 90°N (right panel). Within each panel, latitude labels show how CI and AOI vary with latitude in each region. First, it should be noted that the CI has similar values in the three regions sampled, from 0.18 to 0.24, meaning that despite appearing to differ in Figure 1, there are really only subtle differences in coloration. However, the AOI at low latitudes (0°-10°N) is more than three times the value at the polar region, confirming that the equator has thick and high hazes/clouds relative to the polar area. A greater slant path length at high latitudes also means that the AOI is sensitive to particles at deeper levels near the equator. For temporal variation, the CI at the equator increased 9% from 2019 (blue plus symbols) to 2020 (red diamond symbols), while the AOI is unchanged (3%) to within the uncertainties. This indicates that the tropospheric clouds and haze were slightly bluer in 2020, even though all wavelengths saw a brightness increase; see Figure 2. At 65°N, the CI and AOI increased by 10% and 4%, respectively, while at 75°N, the CI increased by 7% and AOI decreased by 8%. The change in CI from 2019 to 2020 was an increase (blueshift) at 75°N and a decrease (redshift) at 80°N. The full polar region shows a redward shift of ∼7%, with 90°N showing the largest AOI increase (16%). As noted above, the FQ889N I/F is low (0.02) in the polar regions, so a large shift in AOI is still only a small change in cloud altitude or opacity.
Discussion
From 2019 to 2020, the equator changed more in CI than in AOI, and this is also apparent in the individual filter profiles; see Figure 2. All filters from ∼400 to 760 nm show increased reflectance at the equator, while the UV and strong methane band filters sensitive to higher altitudes show little reflectance change. At the same time, both PC1 and PC2 have increased, indicating a potential change in the tropospheric and stratospheric haze optical depths (Karkoschka & Tomasko 2005; West et al. 2009). The choice of color assignment is arbitrary in the principal component analysis false-color map (Figure 4, bottom), but only the equator appears completely bright yellow: a combination of high PC1 amplitude at nonpolar latitudes (0°-60°N), indicative of thick tropospheric clouds/hazes, and high PC2 amplitude in the equatorial zone (0°-15°N), indicative of high stratospheric haze opacity. Changes from year to year are limited to within 5°-7° of the equator. Middle latitudes are dominated by PC1 (red in the false-color map and higher values in Figure 5), indicating less haze than at the equator.
Our results capture widespread changes following a major storm system in 2018, which comprised four sequential outbreaks between March and August from 67° to 74°N planetographic latitude (Sánchez-Lavega et al. 2020). The storm plumes themselves appear yellow/orange in the 2018 PC color map (Figure 4, bottom), suggesting local similarities to the equatorial aerosol structure, with high cloud opacity and high stratospheric haze opacity. But the zonal expansion of the storm activity produced global-scale changes to the aerosol structure. The latitude profile of PC1 (Figure 5) changed across the storm region; remnants of the storms have been tracked at ∼76°N (Sánchez-Lavega et al. 2021), and they could be responsible for the aerosol changes observed at these latitudes.
Close to the pole (80°N), tropospheric cloud opacity may have decreased slightly, as suggested by the 2019-2020 shift in CI (Figure 6, right). The brightness scans (Figure 2) show that most of this change is due to decreased reflectivity at blue/UV wavelengths. The false-color map shows that these latitudes have a different contribution from PC2 and PC3 than other regions, with a smaller contribution from PC1 relative to the equator (Figure 5). In the PCs, the temporal change is also subtle, with a very slight decrease in PC1 from 80° to 90°N. The enhanced reflectivity maps (Figure 3, top) show that a thin blue line at 80°N (near the polar hexagon boundary) disappeared between 2019 and 2020. Both PC1 and PC2 increased during this period, with the larger change in PC2.
Changes in optical depth or color at the northern polar latitudes and the equator may not have the same causes. To determine if any of the observed changes are due to atmospheric circulation changes, we also computed zonal winds using five F631N image pairs for each year with the two images separated by a planetary rotation. All of the measurements were obtained with a cloud correlation algorithm (Hueso et al. 2009) applied over cylindrical maps of the images and using different selections of correlation parameters on different areas of the images. Obvious cloud mismatches found by the correlation algorithm were removed manually (e.g., negative winds at the equator or wind points more than 150 m s −1 different than the mean wind profile in the global analysis of winds for each year). This resulted in 7098 wind measurements in 2018, 6382 in 2019, and 5720 wind measurements in 2020, which were used to compute the meridional profile of zonal winds each year; see Figure 7.
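A one-dimensional caricature of the correlation approach is sketched below: a latitude strip from the second map is shifted in longitude until its correlation with the first map peaks, and the resulting drift is converted to a wind speed. The time separation, pixel scale, and strips are toy values; the actual algorithm operates on two-dimensional cylindrical maps, as described above.

```python
import numpy as np

def zonal_wind_from_strips(strip1, strip2, dt_s, lat_deg,
                           deg_per_px, r_eq_km=60268.0):
    """Estimate zonal wind from the longitudinal pixel shift that maximizes
    the correlation between two brightness strips taken one rotation apart.
    Assumes longitude increases eastward with pixel index, so a positive
    drift maps to a positive (eastward) wind in m/s."""
    s1 = (strip1 - strip1.mean()) / strip1.std()
    s2 = (strip2 - strip2.mean()) / strip2.std()
    corr = np.correlate(s2, s1, mode="full")
    shift_px = np.argmax(corr) - (len(s1) - 1)
    drift_rad = np.radians(shift_px * deg_per_px)
    return drift_rad * r_eq_km * 1e3 * np.cos(np.radians(lat_deg)) / dt_s

# Toy test: a feature pattern displaced by 12 px between the two maps.
rng = np.random.default_rng(5)
base = rng.random(720)
wind = zonal_wind_from_strips(base, np.roll(base, 12), dt_s=38000.0,
                              lat_deg=5.0, deg_per_px=0.5)
print(f"{wind:.0f} m/s")
```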
Although there are slight differences in wind jet peak magnitudes from year to year, most are artifacts arising from the few measurable points where cloud tracers have poor contrast in many of the images, leading to large error bars. Additionally, some latitudes are prone to false detections; for example, the polar jet at 79°N, the location of Saturnʼs polar hexagon, is not well resolved, and the correlation finds many measurements linked to the large-scale structure of the hexagon rather than the small-scale, fast-moving cloud features that define the jet. False detections are also found poleward of 83°N due to the unfavorable geometry for the cloud correlation algorithm.
The three wind profiles are equivalent at most latitudes within their respective error bars.
The only latitudes with significant wind velocity changes are near the equator, which is also the location showing the biggest difference from Voyager to Cassini (Sánchez-Lavega et al. 2000, 2003, 2004; Porco et al. 2005; García-Melendo et al. 2011; Sánchez-Lavega et al. 2016; Hueso et al. 2020). The Hubble-measured values were highest in 2018 and are linked to particular bright features that are easy to correlate in the images; their velocity difference is above the uncertainties. The bright features visible in 2018 are similar to the bright equatorial clouds in 2015 described by Sánchez-Lavega et al. (2016) and move at comparable speeds. At all other latitudes in the equatorial zone, weaker wind jets are also solutions within the large error bars of ∼40 m s−1. All three data sets find equatorial winds that are higher than those measured during the Cassini (2004) era (García-Melendo et al. 2011). These higher winds are similar to those found in HST observations obtained in 2015, where faster motions were tied to discrete bright features that were interpreted as having deep roots one or two scale heights below the main level sampled in Cassini images (Sánchez-Lavega et al. 2016). The fastest points in our 2018-2020 data correspond to similar discrete bright features observed in 2018 (one of these can be seen in the 2018 data at 5°N latitude and 110°W longitude in Figure 3), while the 2019 and 2020 data show intermediate wind speeds not associated with particularly bright storms. This overall increase in zonal winds may be consistent with cloud contrasts in the equatorial zone lying about one scale height below the levels observed by Cassini (see Figure 8 in Sánchez-Lavega et al. 2016). Alternatively, they could be related to the 13-17 yr quasi-periodic oscillations (Fouchet et al. 2008), but further time coverage is needed to make any convincing correlation over time. While the wind field may not be directly responsible for color or opacity/altitude changes at the equator, the waves responsible for the oscillations, as well as the corresponding thermal changes, may also affect the tropospheric and stratospheric aerosols.
Although the 2018 to 2020 wind changes are largest near the equator, this is likely unrelated to the seasonal solar insolation pattern from the Cassini era to the present. Cassini arrived at Saturn just past the northern winter solstice; see Figure 7(B). Although the ring shadow can strongly affect the northern midlatitude insolation when the subsolar latitude is poleward of ∼20°S, it is negligible near the summer solstice, as shown in Barnet et al. (1992), Moses & Greathouse (2005), Sánchez-Lavega et al. (2020), and references therein. The Hubble observations began just after the 2017 northern summer solstice, and over the 2018 to 2020 time period, the equatorial insolation has changed little; see Figure 7(C). At this time, there appears to be no connection between insolation and equatorial cloud structure, as only very small changes are observed either by filter ratio indices or principal components.
Figure 6. Color and atmospheric opacity indices for 2019 and 2020 brightness scans. Left: near the equator, there is only a slight variation in AOI, but the CI increases from 2019 to 2020 at latitudes below 10°N. Middle: near the polar storm latitudes, CI increases, and there are slight changes in the AOI, but the scale is compressed relative to that of the equator. Right: at polar latitudes, the trend is reversed and the CI decreases.
However, sunlight at the polar regions evolves rapidly (Figure 7(C)), with a distinct change in the latitudinal insolation profile between 2019 and 2020. The abrupt polar latitude color/optical depth changes may be due to the rapidly changing sunlight levels affecting stratospheric and tropospheric aerosol properties. The earliest Hubble images of Saturn in 1994 showed blue high southern latitudes, while at Cassiniʼs 2004 arrival at Saturn, the north pole was blue in color, before gradually changing to yellow with the seasons (West et al. 2009). While only the northern latitudes are shown in this paper, the 2020 view of Saturn from Hubble included a small sliver of high southern latitudes, and they also appear blue at this time (https://hubblesite.org/contents/media/images/2020/43/4713-Image), indicating that this change is cyclical and affects both poles. Polar region dynamical changes are still possible but cannot be proven with the Hubble data at present.
Summary
We analyzed Hubble imaging data of Saturn from 2018 to 2020 at wavelengths from 225 to 889 nm. Using latitudinal I/F profiles at each wavelength, principal component analyses, and filter ratio indices, we have tracked short-term changes in reflectance and color after the northern summer solstice. The largest apparent changes occurred at equatorial and high northern latitudes.
The equator became brighter at wavelengths from ∼400 to 760 nm, which are most sensitive to tropospheric cloud opacity. Based on filter ratios, the cloud color is very slightly bluer in 2020, but there are no changes, within the uncertainties, in cloud height or opacity. The principal component analysis, however, does indicate that a slight increase in tropospheric and stratospheric haze optical depths may have occurred. As the two methods are potentially sensitive to different altitudes or particle sizes, further evolution of this region with season should reveal the cause of the equatorial brightening.
At high northern latitudes, the 60°-75°N region was perturbed by storms in 2018 and 2020. These storms caused apparent cloud color changes, as well as increased stratospheric haze opacity and altered aerosol particle sizes. Further north, the polar region has progressed to redder colors, with possible changes in cloud opacity. This region should continue to evolve as solar insolation rapidly changes over time.
Zonal wind profiles were also computed, and changes were compared with solar insolation variation. The zonal winds from HST images are similar to the results from Cassini at most latitudes, despite the large difference in solar insolation. Differences in polar latitude winds are possibly artifacts from the lack of spatial resolution and polar geometry, while differences in the equatorial zone are possibly real and may merit further study in the future. When connected with Voyager-era observations, these spacecraft data sets span approximately 1.5 Saturn years, albeit with gaps in coverage. Continued Hubble observations from the OPAL program will be crucial for understanding Saturnʼs seasonal evolution in the post-Cassini era.
This work used data acquired from the NASA/ESA Hubble Space Telescope, associated with the OPAL program (PI: Simon, GO15262, GO15502, GO15929), with support provided by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in | 6,423.2 | 2021-03-11T00:00:00.000 | [ "Environmental Science", "Physics" ] |
A Five-Level Framework for Research on Process Mining
Introduction
Process Mining is a novel technology that helps enterprises to better understand their business processes. Over the last 20 years, intensive research has been conducted into various process mining techniques. These techniques support the automatic discovery of business process models from event log data, the checking of conformance between specified and observed behavior, the identification of variants of a business process, non-compliant behavior, performance-relevant insights, and so forth.
Research on process mining has mostly focused on devising new or better algorithms (see van der Aalst 2016; Augusto et al. 2019a). There are a few exceptions, among others the following. van der Aalst et al. (2007) were the first to discuss process mining from the perspective of applications in industrial practice. Jans et al. (2014) applied process mining techniques to enrich audit evidence during a financial statement audit. vom Brocke and Mendling (2018) and vom Brocke et al. (2021) present various applications of process mining in hospitals, insurance, software usability analysis, and logistics.
In recent years, process mining has seen an increasing uptake in enterprises (Dumas et al. 2018) and has thus become an integral part of their daily business process management. Companies like Celonis, Fluxicon, Signavio, and Software AG are among the roughly 20 vendors that Gartner monitors. As Kerremans (2019) from Gartner states, enterprises adopt process mining tools in order to support business process improvement, auditing and compliance, process automation, digital transformation, and IT operations (in order of decreasing importance).
Some contributions have been made towards understanding how process mining has an impact in an enterprise setting. Much of this research focuses on methodology and application domains. For instance, van Eck et al. (2015) and Aguirre et al. (2017) describe methodologies for how process mining projects can be conducted, and Maruster and van Beest (2009) provide a methodology for how business processes can be redesigned with the help of process mining. Mans et al. (2013) discuss success factors for such process mining projects. Examples of domain-specific proposals in healthcare are Rebuge and Ferreira (2012) and Fernández-Llatas et al. (2015). Thiede et al. (2018) find applications for digital as well as for physical processes, which are investigated using data from single systems, across systems, and across boundaries. Process mining has even been identified as a strategy of inquiry for studying organizational change (Grisold et al. 2020).
What is largely missing so far is research on how enterprises adopt process mining technology, how they integrate it into their information systems landscape, and which kinds of effects emerge from this adoption. Effects are complex and unfold at different levels of the organization (Grisold et al. 2021). They are connected with organizational culture and governance structures, to name but a few. Leonardi and Treem (2020) have coined the term behavioral visibility, a term that nicely emphasizes what process mining affords. The "datafication" of private and professional lives creates digital traces in various systems, which can be analyzed by means of process mining techniques. In this way, process mining has the potential to afford behavioral visibility of various actions not only inside but also outside an organization. Obviously, many challenges arise from such large-scale behavioral visibility, including ethical ones. Therefore, more interdisciplinary research on the application of process mining from an enterprise perspective is needed.
In this editorial, we develop a framework for systematically discussing many of the associated concerns that emerge from adopting process mining in an enterprise setting. Our framework can be used to analyze the effects of process mining at different levels of investigation. In the following, we first provide a brief overview of process mining and its essential concepts. Then, we introduce our framework and discuss potentially relevant research perspectives for each of its five levels.
Techniques, Tasks and Parties Involved in Process Mining
Enterprise information systems automatically log data during daily process executions. Process mining is a family of techniques that extract process knowledge from this logged process data. These techniques integrate concepts and ideas from machine learning and data mining on the one hand and process modeling and process analysis on the other hand (van der Aalst 2016).
In essence, process mining techniques support process discovery, conformance checking, process variant analysis, and process performance analysis (Dumas et al. 2018). Process discovery is the act of discovering a process model from event log data. This process model represents the real, observed behavior. Conformance checking focuses on the relation between a process model and the observed behavior (Carmona et al. 2018). Conformance checking techniques identify and measure the discrepancies between model and log. Researchers mainly use conformance checking to assure the quality of the discovered process model, i.e., to determine to which extent this model accurately represents the logged behavior. In this context, the event log is taken as the reference against which conformance is checked. Practitioners are more often interested in identifying which cases violate the behavior prescribed by the model. This means that the process model is taken as the norm to check conformance against. Process variant analysis addresses the questions of which variants of the process exist and which characteristics they are correlated with. Corresponding techniques build, for instance, on clustering and the analysis of factors. Process performance analysis is concerned with the analysis of time, costs, quality, and flexibility of a business process based on event log data. In this way, measures can be identified to speed up the process, save costs, improve quality, and extend flexibility.
Technical research on process mining has primarily focused on process discovery and conformance checking. Different algorithms have been proposed for both tasks. For process discovery, the Inductive Miner (Leemans et al. 2014), the Evolutionary Tree Miner (Buijs et al. 2014), the Split Miner (Augusto et al. 2019b), and the ILP miner (van Zelst et al. 2018) are examples of recent techniques. For conformance checking, techniques can be divided into three types of approaches. Some techniques rely on checking whether the observed behavior is compliant with a set of rules (e.g., Maggi et al. 2011). These rules function as a norm to check against, similar to controlling functions in organizations. Other techniques are based on the replay of the logged behavior on the process model (e.g., Rozinat and van der Aalst 2008). Finally, techniques based on alignments build on aligning the process executions with the closest path in the process model, which provides a basis for calculating a notion of distance (e.g., De Leoni and van der Aalst 2013).
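As a concrete illustration of these two tasks, the following is a minimal sketch using the open-source pm4py library. It assumes pm4py's simplified 2.x interface (function names may differ between versions) and a placeholder event log file log.xes.

import pm4py

# Load an event log (placeholder file name).
log = pm4py.read_xes("log.xes")

# Process discovery with the Inductive Miner: returns an accepting
# Petri net together with its initial and final markings.
net, im, fm = pm4py.discover_petri_net_inductive(log)

# Alignment-based conformance checking: each trace is aligned with the
# closest path through the model, yielding a per-trace fitness value.
alignments = pm4py.conformance_diagnostics_alignments(log, net, im, fm)
print(alignments[0]["fitness"])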
When organizations apply process mining, they do so by using a software tool from one of the numerous vendors. A process mining tool offers a set of analysis techniques to process analysts in a user-friendly way. The selection of the tool should reflect the requirements of the users. Often, these process mining users are process analysts who have the required skill set. Not only are they familiar with the field of process mining, but they also have expertise in an application domain. An experienced process analyst is a person who understands the organization's challenges, gets the right people on board, and is then capable of translating the business needs into specific analysis questions. Regarding process mining, process analysts have to develop an understanding of which questions can be answered based on process event data. To this end, they interact with process participants, process stakeholders, and external partners. Process participants are those who work on individual tasks that collectively define overarching business processes. Their coordination and collaboration is logged by enterprise information systems, establishing the basis for applying process mining. Process stakeholders essentially include managers who have an interest in business processes operating well. They set the agenda for analyzing and improving business processes. Finally, system engineers provide expertise on which data enterprise information systems store and how event logs can be extracted.
A last, related party in the context of process mining is the group of external partners. These are parties that are not directly involved in the process mining project but are often considered in process analyses. The two most often analyzed business processes are order-to-cash and procure-to-pay. Both directly relate to external partners, namely customers and suppliers.
The described techniques, their corresponding analysis tasks, and the parties involved in process mining influence its success.
A Framework for Research on Process Mining
Process mining unfolds effects at different levels. For our framework, we take Hevner et al. (2004) as a starting point, who describe a technical, a people, and an organizational level of analysis. We refine this set to five levels, distinguishing an individual and a group level, and adding an ecosystem level (see Fig. 1).
At each level of the research framework, we identify specific phenomena of interest and key candidate theories to apply and further develop, and we pose a set of tangible research questions to be addressed as part of an agenda for future research. Please note that the separation of different levels is conceptual and, therefore, artificial. Even though effects span across these levels, the distinction of different levels can help to provide conceptual clarity.
Technical Level
Various concerns apply to researching process mining at the technical level. Much of the contributions at this level can be understood as pieces of engineering, and most of this engineering is focused on developing novel algorithms for different process mining tasks. These algorithms underpin the various process mining techniques. Research on process mining at the technical level can thus be framed as a specific category of algorithm engineering. Mendling et al. (2021) distinguish design and knowledge contributions in the context of algorithm engineering. Design contributions can be either design improvements or design exaptations. Design improvements present algorithms that perform better in at least one of the important performance dimensions, such as execution time or output accuracy. For instance, the Split Miner (Augusto et al. 2019b) was presented as a design improvement providing high and balanced fitness and precision. Design exaptations demonstrate the applicability of established algorithmic designs for newly described tasks. An example is the work by van der Aa et al. (2018), which presents a conformance checking technique that is able to use text descriptions as normative specifications.
Knowledge contributions can be either performance propositions, sensitivity propositions, or explanatory propositions. The survey and comparison of state-of-the-art algorithms by Augusto et al. (2019a) focuses on performance propositions. Sensitivity propositions can be investigated with internal, design-related variations and external conditions as factors. The research by Di Ciccio et al. (2013), which studies the effect of noise on declarative process discovery, belongs to this category. Finally, explanatory propositions bring to the foreground the mechanisms of how design characteristics affect performance. For example, the study by Augusto et al. (2021), which investigates log complexity measures as predictors for the accuracy of process discovery, is in this category.
Fig. 1 Process mining research framework. The framework distinguishes five levels and their foci: the technical level (the design of process mining technology, e.g., algorithm engineering), the individual level (the effects of process mining on people's perception and behavior, e.g., users), the group level (the effects of process mining on people's interaction and mode of work, e.g., teams), the organization level (the effect of process mining on operations and value creation in organizations, e.g., organizational success), and the ecosystem level (the effects of process mining on inter-organizational relations, e.g., value chains and networks).
Much of the research on process mining at the technical level emphasizes design contributions and provides some knowledge contribution as an evaluation of the design work. Mendling et al. (2021) stress that various validity concerns have to be considered for such evaluations of process mining design contributions: algorithm engineering in general is subject to threats that relate to ecological validity, implementation validity, justification validity, logical validity, internal validity, external validity, construct validity, and conclusion validity.
Individual Level
Different categories of users work with process mining tools and their implemented algorithms and analysis techniques. We have identified users such as process analysts, process participants, process stakeholders, and external partners (Grisold et al. 2021). They use these tools in order to accomplish goals that are associated with process-mining-related tasks. Often, these tasks are not isolated but embedded in BPM projects (Dumas et al. 2018) and BPM programs (vom Brocke et al. 2021). Some of the methodological specifics of these projects have been highlighted by van Eck et al. (2015), Aguirre et al. (2017), Maruster and van Beest (2009), and Mans et al. (2013), partially inspired by the CRISP-DM procedure (Martínez-Plumed et al. 2019). Ailenei et al. (2011) describe a set of 19 different analysis tasks, including discovering the distribution of cases over paths, checking exceptions from the normal path, resources involved in cases, longest waiting times, and the identification of business rules. All of them can be directly supported by analysis based on process mining.
The task perspective plays a role in understanding why users adopt and use technology such as process mining tools. Seminal work towards the technology acceptance model emphasizes that perceptions about usefulness and ease of use are central for usage (Davis 1989; Davis et al. 1989). On the one hand, this is a question of how clear, understandable, and easy to learn a technology is. On the other hand, different dimensions of usefulness, such as job performance, work productivity, and overall effectiveness, are equally important. Acceptance is indeed an issue for process mining (Grisold et al. 2021). According to the technology acceptance model, users are most likely to adopt process mining tools when they are easy to use and at the same time improve their effectiveness when working on process analysis tasks.
While the technology acceptance model explains when users are inclined to use a software tool, the task-technology fit model puts more emphasis on the actual task performance. Goodhue and Thompson (1995) stress that task characteristics and technology characteristics have to fit one another in order to provide a positive impact on performance. Applied to process mining, the fit model suggests that the analysis capabilities of a process mining tool should meet the demands of the tasks that a process analyst and other users are confronted with in the context of a BPM project. The tasks described by Ailenei et al. (2011) or the BPM use cases by van der Aalst (2013) could serve as a basis for assessing such a fit.
Several additional perspectives on technology use have been integrated into the most recent version of the unified theory of acceptance and use of technology by Venkatesh et al. (2003, 2016). In essence, this theory posits that behavioural intentions are influenced by performance and effort expectancies, as well as social influence. These intentions materialize into actual technology usage under consideration of additional facilitating conditions. For process mining, social influence is a particularly interesting construct that can potentially pull in different directions: from the bottom up, it can produce resistance against creating transparency, eventually hampering adoption and use; from the top down, social pressure can be imposed to make use of the analysis capabilities of process mining. Such forces represent higher-level contextual factors (Venkatesh et al. 2016) that, together with individual-level contextual factors, influence acceptance, use, and eventually outcomes.
Group Level
We have described several groups of actors that are involved with business processes and corresponding BPM projects, namely process participants, process owners, process managers, and process experts of multiple local teams. Notably, process participants and process managers are the largest and most diverse of these groups. A single business process can involve several departments and their corresponding managers and process participants, who might not even be in the same reporting line. This setting provides various challenges for any initiative to improve such business processes (Markus and Jacobson 2015).
Before any improvements can be achieved, a shared understanding of the business process by all of the involved persons has to be established. In their work on the principles of good BPM, vom Brocke et al. (2014) have formulated the principle of a joint understanding, meaning that BPM should not be the language of experts but create shared meaning. The BPM lifecycle addresses this point by stressing the need to discover and analyze the as-is process. Work on knowledge management in information systems research emphasizes this point, too. Nelson and Cooprider (1996) demonstrate that information-system-related activities require mutual trust and mutual influence, and that shared understanding and appreciation is key for translating mutual trust and influence into good performance. Process mining, in turn, might presumably help to increase both mutual trust and influence thanks to evidence-based insights into the process, as well as shared understanding by providing process representations that span the boundaries and the lines of visibility of the groups involved.
One of the relevant mechanisms for explaining the impact of process mining in this context is boundary objects. Star and Griesemer (1989) discuss cooperation without central control. They observe that boundary objects facilitate this cooperation thanks to three properties: interpretive flexibility, the needs of information and work processes, and dynamics of usage. Process mining tools can be analyzed using this lens, surfacing this facilitating role for the cooperation between, among others, process analysts, participants, and managers. The information needs of these groups differ, as do their interpretations of the representations generated by process mining tools, but they are not arbitrary. In this way, dynamic usage can converge towards standardized objects or systems (Star 2010), where boundary spanners-in-practice and boundary objects-in-use leverage cooperation (Levina and Vaast 2005).
Another relevant mechanism associated with process mining is behavioural visibility (Leonardi and Treem 2020). The digitalization of the workplace has provided the means for tracking and analyzing behavior. An important observation regarding this digitalization is that the effort for obtaining behavior-related information has drastically declined, as has the potential to analyze patterns (Leonardi and Treem 2020). Process mining tools leverage this behavioral visibility into work processes in organizations, revealing patterns, causes, and motives (Leonardi and Treem 2020) through corresponding analysis functionality. In this way, new affordances and constraints (Norman 1999) are introduced into the way in which BPM projects are conducted. The article by Eggers et al. (2021) in this special issue discusses the mechanisms by which behavioral visibility increases process awareness and eventually fosters process change.
We envision process mining in an enterprise setting to change the governance models for process management. Given the capacity to generate process knowledge quickly and continuously, based on real-time process data, process work will be less concerned with inquiring about processes and manually crafting process models. Process mining will lead to more ad hoc investigations into processes and more real-time, data-driven decision making. Instead of working on processes in large teams of process analysts, investigations into processes could be organized in cross-departmental meetings, e.g., held on a weekly basis and taking immediate action. Hence, process mining also stimulates research on the organization of process work.
Organization Level
Technical implementation, individual adoption, and actual use of process mining tools are prerequisites for any impact at the level of organizational performance. The mechanisms at the group level reveal how process mining can unfold its impact at the level of the larger organization. The information systems success model makes exactly this point by highlighting the impact of system quality, information quality, and service quality on individual use and usage satisfaction; these eventually translate into net benefits at the individual and at the organizational level (DeLone and McLean 1992, 2003; Petter et al. 2008).
The theory of effective use drills down into the mechanisms surrounding information quality. In essence, effective use builds on a chain of transparent interaction, representational fidelity, and informed action, which all contribute to efficient and effective performance (Burton-Jones and Grange 2013). Trieu et al. (2022) contextualize effective use in a business intelligence context and foreground business intelligence system quality, data integration, and an evidence-based management culture. For process mining, these constructs might serve as potential constraints on the affordances a process mining tool provides.
What is partially hidden behind the service quality construct in the success model is a capability perspective. BPM-related capabilities have often been described as dynamic capabilities, which are directed towards organizational problem solving (Niehaves et al. 2014). The BPM-related capability areas presented by Rosemann and vom Brocke (2015) are specifically relevant in this context. The Delphi study by Martin et al. (2021) in this special issue uses them as a framework for identifying challenges and opportunities arising from process mining. The experts in this study describe more opportunities related to strategic alignment, methods, and information technology, while more challenges are identified for governance, people, and culture. Also in this special issue, Eggers et al. (2021) emphasize that the benefits that process mining offers are contingent on governance and implementation approaches.
Process mining can also be understood as a specific big data analytics capability. The framework by Grover et al. (2018) offers insights into how such capabilities, along with an underlying infrastructure, unfold an impact in different value dimensions. They describe that different value creation mechanisms are key to the capability realization process, including organization performance, business process improvement, product and service innovation, and consumer experience as much as market enhancement (Grover et al. 2018). Finally, Grover et al. (2018) point to various other theoretical logics that can be useful for studying big data analytics, namely resources, alignment, real options, dynamics, and absorptive capacity. These might be equally relevant for process mining.
Digital Ecosystem Level
So far, process mining has largely been restricted to the boundaries of central organizations. Martin et al. (2021) identify opportunities and challenges for process mining, and several of these directly relate to the ecosystem in which a company operates. The opportunities described by the experts of their Delphi study relate to how process mining can facilitate value creation by fostering collaboration across organizational boundaries.
At this point, some research has been conducted on how process mining can be implemented at an inter-organizational level. Before organizational and strategic challenges can be addressed, various conceptual challenges have to be overcome for constructing an integrated, coherent data representation of the process across the involved organizations (Gerke et al. 2009; Dumas et al. 2018, Chapter 11). Opportunities arise from the increasing uptake of blockchain technology for business processes (Mendling et al. 2018; Pufahl et al. 2021). Specific technical solutions, such as the extraction of blockchain data for processes, have been devised (e.g., Klinkmüller et al. 2019; Mühlberger et al. 2019). Hobeck et al. (2021) demonstrate which kinds of insights can be derived with the help of their case study of Augur.
Grover emphasizes in his interview with Mendling and Jans (2021) in this special issue that ''the digital'' defines new challenges for researching business processes. New challenges also arise in this context. For instance, privacy is a concern once data is analyzed that relates to people who are not part of the organization in which the data is analyzed or where the generated insights are used (see Mannhardt et al. 2019). This is particularly relevant for mining data from the Internet of Things (Michael et al. 2019) and for applications in healthcare (Pika et al. 2020).
Future Research Directions
In this editorial, we have identified connections between process mining and many established concepts and theories in information systems. We described a five-level framework comprising a technical, individual, group, organization, and ecosystem level. The impact of process mining can be investigated at each of these levels and across them.
In our call for papers for this special issue, we raised several research questions (vom Brocke et al. 2020a, b):
• How is process mining used and adopted at the enterprise level?
• What is the potential of using various types of data in process mining?
• How does process mining complement other approaches and technologies?
• How do enterprises build suitable data sets?
• What are the implications for management of using process mining?
• Which governance structures do enterprises develop for process mining?
• How do enterprises calculate the business case of process mining?
• How does process mining change organizational culture?
• How does process mining change the required skill sets of tool users?
• How is process mining integrated into the IT landscape?
• How is process mining integrated with existing business process methodologies?
• How is process mining adopted in specific application domains, e.g., accounting, health, finance, HR, tax, etc.?
• How is process mining used to support digital transformation initiatives?
• What strategic implications for enterprises emerge from process mining usage?
• What is the business impact of adopting process mining?
• What is the overall business value of process mining?
• What is the transformative nature of process mining at the enterprise level?
The two research articles (Eggers et al. 2021; Martin et al. 2021) and the interview (Mendling and Jans 2021) published in this special issue answer some of these questions. Many of the questions, however, remain open.
The process mining research framework also shows that contributions from different disciplines are needed to further understand and develop the potential of process mining. At the technical level, for instance, computer science makes important contributions to algorithm engineering. Information systems research, in addition, has a great opportunity to cover the many socio-technical aspects related to process mining use at the individual, group, organization, and ecosystem levels.
Specifically, both behavioral and design-oriented contributions are needed (Hevner et al. 2004). Based on a better understanding of process mining use in an enterprise setting, prescriptive knowledge can be gained to support interventions in practice (vom Brocke et al. 2020a, b), e.g., by models and methods for value identification and value realization. | 6,342 | 2021-09-07T00:00:00.000 | [ "Computer Science" ] |
A Complexity Approach to Tree Algebras: the Bounded Case
In this paper, we initiate a study of the expressive power of tree algebras, and more generally infinitely sorted algebras, based on their asymptotic complexity. We provide a characterization of the expressiveness of tree algebras of bounded complexity. Tree algebras in many of their forms, such as clones, hyperclones, operads, etc., as well as other kinds of algebras, are infinitely sorted: the carrier is a multi-sorted set indexed by a parameter that can be interpreted as the number of variables or hole types. Finite such algebras, meaning those in which all sorts are finite, can be classified depending on the asymptotic size of the carrier sets as a function of the parameter, which we call the complexity of the algebra. This naturally defines the notions of algebras of bounded, linear, polynomial, exponential, or doubly exponential complexity. We initiate in this work a program of analysis of the complexity of infinitely sorted algebras. Our main result precisely characterizes the tree algebras of bounded complexity, based on the languages that they recognize, as Boolean closures of simple languages. Along the way, we prove that such algebras that are syntactic (minimal for a language) are exactly those in which, as soon as there are sufficiently many variables, the elements are invariant under permutation of the variables.
Introduction
Infinitely sorted algebras occur naturally in many contexts of language theory, graph theory, and logic. A typical example is the case of tree algebras (such as clones, hyperclones, operads): plugging a subtree into another one requires a mechanism for identifying the leaf/leaves in which the substitution has to be performed. Notions such as variables, hole types, or colors are used for that. Another example is the one of graphs (HR- and VR-algebras [?]) in which basic operations (a) glue graphs together using a set of colors (sometimes called ports) for identifying the glue-points, or (b) add all possible edges between vertices of fixed given colors.
In these examples, the algebras are naturally sliced into infinitely many sorts based on the number of variables/hole types/colors that are used simultaneously. However, a technical difficulty arises immediately when using such algebras. Even when all sorts are finite (what we call a finite algebra), these algebras are not really finite due to the infinite number of sorts. This forbids, for instance, entirely and explicitly describing the whole algebra in a finite way. And this is of course a problem for describing and using these algebras in an algorithm. Indeed, a concrete algorithm can only maintain a subset of the sorts. It is thus natural to measure a finite algebra A through its complexity map c_A, which to each parameter n associates the size of the corresponding sort. Finite algebras can naturally be classified using this map: simple classes are then algebras of bounded complexity if c_A is bounded, of polynomial complexity if c_A is bounded from above by some polynomial in n, etc.
Other interesting complexity classes can be defined using orbits. Indeed, in all of the mentioned examples, there is a natural operation that performs a renaming of the variables/hole types/colors. This renaming is parameterized by a bijection over the variables/hole types/colors, and this permutation acts on the corresponding sort. Said differently, in all examples, there is an action of the symmetric group over n elements, Sym(n), on A_n. It is thus natural to consider the orbit complexity map c°_A, which counts the orbits of each sort under this action, and to define accordingly what are finite algebras of bounded orbit complexity, polynomial orbit complexity, etc.
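In symbols, and writing the sorts with natural-number parameters as above, the two maps can be sketched as follows (a reconstruction from the surrounding definitions, not the paper's exact display):

$c_{\mathcal{A}}(n) = |A_n|, \qquad c^{\circ}_{\mathcal{A}}(n) = \big| A_n / \mathrm{Sym}(n) \big| \quad \text{(the number of orbits of } A_n \text{ under the } \mathrm{Sym}(n)\text{-action)},$

so that $\mathcal{A}$ is of bounded complexity when $c_{\mathcal{A}}$ is bounded, of polynomial complexity when $c_{\mathcal{A}}(n) \leq n^{O(1)}$, and similarly for the orbit variants.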
Related works
As mentioned above, there is a long history of understanding the expressive power of regular languages of words based on algebraic properties. The first work in this direction [10], characterizing star-free languages, initiated a long list of deep results. It was natural to extend this approach to trees. Here, the notion of algebra was less obvious, and several definitions have been used. Some algebras for trees are one-sorted, such as deterministic bottom-up automata (which can be seen as algebras). Some are two-sorted, such as forest algebras [7]. Some others, such as preclones [8], have infinitely many sorts. Characterizations of classes have been obtained using these approaches [5,9,6], but remain very limited due to difficulties inherent to the tree case. The study of algebras for infinite trees renewed the interest in these questions [1,2,3]. This line of work also highlights the difficulty of working with tree algebras, and the poor understanding we have so far of the mechanism of recognition for infinite objects.
Contributions of the paper
In this paper we establish some first results in this complexity analysis of infinitely sorted algebras, for the simplest complexity class, bounded complexity.
Our results are of two kinds: a characterization of the algebras of bounded complexity, and a characterization of the languages that they recognize, meaning we give a syntactic description of the properties that can be recognized by algebras in this class. More particularly, we prove:
A characterization of syntactic finite tree algebras of bounded complexity as those syntactic algebras in which, as soon as there are sufficiently many variables, the elements are invariant under permutation of variables. See Theorem 5.
A characterization of the languages recognized by finite tree algebras of bounded complexity as Boolean closures of simple languages. See Theorem 14.
The second result actually uses the first as a building block in its proof.
Structure of the paper
In Section 2, we recall some classical definitions and introduce our notions of algebras. In Section 3, we look at the permutations of variables in finite tree algebras and prove Theorem 5. In Section 4, we study in more depth the bounded complexity case for finite tree algebras and establish our main result, Theorem 14. Section 5 is our conclusion.
Definitions
We denote by N the set of all non-negative integers. Given n ∈ N, we write [n] = {0, 1, ..., n − 1}. The symmetric group (resp. alternating group) over [n] is denoted Sym(n) (resp. Alt(n)); the symmetric group of any set X is denoted Sym(X). We denote by A^c the complement of a set A.
We fix a finite ranked alphabet Σ; the arity of a symbol a ∈ Σ is denoted ar(a). A symbol is a constant if ar(a) = 0, and is unary if ar(a) = 1. For k ∈ N, we set Σ_k = {a ∈ Σ | ar(a) = k}. A* is the set of finite words over A, and A+ = A* \ {ε}.
Trees
In this section, we introduce notions and notations for trees.
We fix a countable set of variables. Given a finite set of variables X, a Σ, X-tree is, informally, a tree in which nodes are labelled by elements of Σ and leaves possibly also by variables. All variables have to appear at least once. Formally, a Σ, X-tree is a partial map t : N* → Σ ⊎ X such that dom(t) is non-empty and prefix-closed, and furthermore: (i) for all u ∈ dom(t) there exists n ∈ N such that {i | ui ∈ dom(t)} = [n], and either t(u) ∈ Σ_n (symbol node), or t(u) ∈ X and n = 0 (variable node); note that a variable node is always a leaf; (ii) all variables from X appear in t, i.e., for all x ∈ X, t(u) = x for some u ∈ dom(t); (iii) the root is not a variable, i.e., t(ε) ∉ X. Σ, ∅-trees are simply called Σ-trees. The elements of dom(t) are called nodes. The prefix relation over nodes is called the ancestor relation. The node ε is called the root of the tree. The tree t is finite if it has finitely many nodes. A branch of a tree t is a maximal set of nodes ordered under the ancestor relation. Let FiniteTrees(Σ, X) be the set of finite Σ, X-trees, for every finite set of variables X.
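To make the conditions above concrete, the following Python sketch checks them on a tree represented as a dictionary from node addresses (tuples of child indices, the root being the empty tuple) to labels. The representation, the arity map, and all names are illustrative assumptions, not part of the paper.

def is_tree(t, arity, variables):
    """Check the conditions defining a Sigma,X-tree."""
    if () not in t or t[()] in variables:
        return False                                  # non-empty; the root is not a variable
    seen = set()
    for u, label in t.items():
        if u != () and u[:-1] not in t:
            return False                              # the domain is prefix-closed
        children = {v[-1] for v in t if len(v) == len(u) + 1 and v[:-1] == u}
        if children != set(range(len(children))):
            return False                              # children are exactly 0, ..., n-1
        if label in variables:
            if children:
                return False                          # a variable node is a leaf
            seen.add(label)
        elif arity[label] != len(children):
            return False                              # symbol nodes match their arity
    return seen == set(variables)                     # every variable of X occurs in t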
Building trees
We introduce now some operations on trees; see Fig. 1.
a(x_0, . . ., x_{n−1}), for variables x_0, . . ., x_{n−1} and a ∈ Σ_n, denotes the Σ, {x_0, . . ., x_{n−1}}-tree consisting of a root labelled a with children 0, . . ., n−1 labelled with the variables x_0, . . ., x_{n−1} respectively.
s •_x t, for two trees s ∈ FiniteTrees(Σ, X), t ∈ FiniteTrees(Σ, Y) and a variable x ∈ X, is the Σ, (X \ {x}) ∪ Y-tree obtained from s by substituting t for every occurrence of the variable x.
σ(t), for a tree t ∈ FiniteTrees(Σ, X) and σ : X → Y a surjective map, is the Σ, Y-tree obtained from t by substituting the variable σ(x) for x, for all x ∈ X.
t[x_0 ← t_0, . . ., x_{n−1} ← t_{n−1}] denotes the tree of sort (X \ {x_0, . . ., x_{n−1}}) ∪ ⋃_i Y_i obtained from t by simultaneously substituting the tree t_i for the variable x_i for all i ∈ [n], where t is a tree of sort X, x_0, . . ., x_{n−1} ∈ X, and t_0, . . ., t_{n−1} are trees of sorts Y_0, . . ., Y_{n−1} respectively. Note that this operation is equivalent to a combination of the previous ones.
a(t_0, ..., t_{n−1}), for a ∈ Σ_n, denotes the tree with root a and children t_0, . . ., t_{n−1} at respective positions 0, . . ., n − 1. Again, this operation is equivalent to a combination of the previous ones.
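The two primitive operations are easy to realize on the dictionary representation used in the previous sketch; again, this is illustrative code under the same assumptions, not the paper's formalism.

def substitute(s, x, t):
    """s .x t: replace every occurrence of the variable x in s by (a copy of) t."""
    out = {}
    for u, label in s.items():
        if label == x:
            for v, lab in t.items():          # graft t at the former position of x
                out[u + v] = lab
        else:
            out[u] = label
    return out

def rename(s, sigma):
    """sigma(s): rename the variables of s according to the map sigma (a dict)."""
    return {u: sigma.get(label, label) for u, label in s.items()}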
▶ Lemma 1. All finite trees can be obtained from the a(x_0, . . ., x_{n−1})'s using the operation •.
Expressions denoting finite trees. For X a finite set of variables, an FT_Σ-expression of sort X (over the alphabet Σ) is an expression built inductively as follows: a(x_0, . . ., x_{n−1}) is an FT_Σ-expression of sort {x_0, . . ., x_{n−1}} for every symbol a ∈ Σ_n; S •_x T is an FT_Σ-expression of sort (X \ {x}) ∪ Y for all FT_Σ-expressions S of sort X, all FT_Σ-expressions T of sort Y, and all variables x ∈ X (substitution); σ(T) is an FT_Σ-expression of sort Y for all FT_Σ-expressions T of sort X and all surjective maps σ : X → Y (renaming). For an FT_Σ-expression T of sort X, [[T]] denotes its evaluation into a finite Σ, X-tree using the operations of substitution and renaming.
Contexts
We now define contexts, which are terms with a specific leaf called the hole. Since we work in a multi-sorted algebra, the hole itself has a sort: to a hole of sort X will be substituted a term of sort X. Formally, for a fixed finite set of variables Y, a context of sort X with hole of sort Y (or simply an FT_Σ-context) is defined inductively as an expression of sort X, using the extra construction □_Y (the hole of sort Y), which is a context of sort Y with hole of sort Y. This new construction may appear multiple times in a context but has to appear at least once.
For C a context of sort X with hole of sort Y, [[C]] : FiniteTrees(Σ, Y) → FiniteTrees(Σ, X) is the function which to a tree t of sort Y associates the tree of sort X obtained by evaluating the operations as above, interpreting □_Y as t.
Finite tree algebras
Our notion of tree algebra is the natural notion associated to finite trees equipped with the above operations. We give here a more formal definition, though the detail of the identities is more for reference. What matters is that it is defined such that the free algebra coincides with finite trees.
An FT_Σ-algebra A consists of an infinite collection of carrier sets A_X indexed by finite sets of variables X, together with operations mirroring those on trees: constants a(x_0, . . ., x_{n−1}) ∈ A_{{x_0, ..., x_{n−1}}} for a ∈ Σ_n, substitution maps •_x : A_X × A_Y → A_{(X\{x})∪Y}, and renaming maps σ : A_X → A_Y for surjective σ : X → Y, for all finite sets of variables X, Y and x ∈ X. These operations must satisfy the expected identities, i.e., the ones guaranteeing that several ways to describe the same tree yield the same evaluation in the algebra. In practice, we shall not explicitly use these identities, and simply write two elements of the algebra equal as soon as they obviously come from expressions denoting the same trees.
A morphism of FT_Σ-algebras from A to B is a family of maps α_X : A_X → B_X, for all finite sets of variables X, which preserves all operations, i.e., which commutes with the constants, the substitution maps, and the renaming maps. The FiniteTrees(Σ, X) sets equipped with the operations of substitution and renaming form an FT_Σ-algebra (it is the free FT_Σ-algebra generated by ∅). For A an FT_Σ-algebra, its associated evaluation morphism is the unique morphism from FiniteTrees(Σ) to A.
A congruence ∼ over an FT_Σ-algebra A is a family of equivalence relations over the A_X's (each denoted ∼) that is compatible with all the operations of the algebra. From such a congruence, one can define the quotient algebra A/∼ in the natural way.
Compact presentation, and complexity
Our definition of algebras does not so far match the one used in the introduction. In the introduction, algebras were considered as using natural numbers as sorts, while here, our sorts are indexed by finite sets. What these algebras would be should be pretty clear. The presentation used here is simpler to present and to use. It is an exercise to show the equivalence.
What matters is that, in what follows, an FT_Σ-algebra is of bounded complexity if there exists a bound K such that |A_X| ⩽ K for all finite sets of variables X.
Languages and syntactic algebras
A language of finite Σ-trees L is a set of Σ-trees. It is recognized by an FT_Σ-algebra A if there is a set P ⊆ A_∅ such that L = α^{−1}(P), in which α is the evaluation morphism of A.
The syntactic congruence ∼_L of a language L of finite Σ-trees is defined in the following way: s ∼_L t, for s, t finite Σ-trees, if, for all contexts C of sort ∅ with hole of sort ∅, [[C]](s) ∈ L if and only if [[C]](t) ∈ L.
▶ Example 2. The language of all finite trees in which the symbol a appears has for syntactic FT_Σ-algebra the algebra with sorts A_X = {0, 1}, for all finite sets of variables X, and whose evaluation morphism is such that α(t) = 1 if and only if t is a tree in which a appears.
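Spelling out the operations of this two-element algebra makes the bounded complexity visible; the following display is our own sketch of the evaluation rules, not reproduced from the paper:

$b(x_0, \dots, x_{n-1}) = \begin{cases} 1 & \text{if } b = a, \\ 0 & \text{otherwise,} \end{cases} \qquad s \cdot_x t = s \vee t, \qquad \sigma(s) = s.$

Substitution is disjunction (the symbol a appears in s •_x t if and only if it appears in s or in t, since x occurs in s), renaming does not change the symbols, and every sort has exactly two elements, so the algebra has bounded complexity.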
Tree automata
A tree automaton B = (Q, I, (δ_a)_{a∈Σ}) over Σ has a finite set Q of states, a set of accepting states I ⊆ Q, and a transition relation δ_a ⊆ Q × Q^{ar(a)} for every symbol a ∈ Σ. A run of B over a finite tree t is a mapping ρ : dom(t) → Q such that, for any node u ∈ dom(t) with t(u) = a ∈ Σ, (ρ(u), (ρ(u0), ..., ρ(u(ar(a) − 1)))) ∈ δ_a. A run is accepting if ρ(ε) ∈ I. A language L of finite trees is called regular if it is recognized by a tree automaton B, meaning the trees in L are exactly those for which there is an accepting run in B.
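Membership can be tested by computing, bottom-up, the set of states that can label each node in some run. The sketch below does this on the dictionary-based trees of the earlier snippets; the code and the encoding of delta[a] as a set of pairs (p, (q_0, ..., q_{ar(a)−1})) are illustrative assumptions.

def accepts(t, delta, accepting):
    """Test whether the tree t has an accepting run."""
    def possible(u):
        # States p such that some run of the subtree rooted at u maps u to p.
        kids, i = [], 0
        while u + (i,) in t:
            kids.append(possible(u + (i,)))
            i += 1
        return {p for (p, qs) in delta[t[u]]
                if len(qs) == len(kids)
                and all(q in states for q, states in zip(qs, kids))}
    return bool(possible(()) & set(accepting))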
Example 3 shows the translation from tree automata to tree algebras.
▶ Example 3 (Automaton algebra). Consider a regular language L of finite trees recognized by the tree automaton B = (Q, I, (δ_a)_{a∈Σ}). Consider some finite set of variables X. An X-run profile is a tuple τ ∈ Q × P(Q)^X. For a Σ, X-tree t, τ = (p, (U_x)_{x∈X}) is a run profile over t if there exists a run ρ of the automaton over t such that ρ(ε) = p and, for all variables x ∈ X, U_x is the set of states assumed by ρ at leaves labelled x. We define a tree algebra A that has as elements of sort X the sets of X-run profiles. The definition of the operations is natural, and is such that the image of a Σ, X-tree t under the evaluation morphism yields the set of run profiles over t. It naturally recognizes the language L.
Note that this definition yields an algebra of doubly exponential complexity (and hence, this is an upper bound for regular languages). Of course, in practice, one can restrict the algebra to the reachable elements, and this may dramatically reduce the complexity.
The converse translation is also true, yielding the following classical result (it is for instance proved for preclones in [8]).
▶ Proposition 4. A finite tree language is regular if and only if it is recognized by a finite FT_Σ-algebra.
Fundamental results on permutations in tree algebras
This section studies some fundamental phenomena concerning the effect of variable permutations on tree algebras. Its main objective is to prove Theorem 5, a characterization of syntactic FT_Σ-algebras of bounded complexity which turns out to be key in the subsequent developments. Beyond that, several intermediate results in this section may also be relevant in the analysis of algebras of unbounded complexity. In this section, given an FT_Σ-algebra A, φ^A_X : Sym(X) → Sym(A_X) (or simply φ_X when there is no ambiguity) denotes the group morphism σ ↦ σ^A, for all finite sets of variables X.
▶ Theorem 5. A finite syntactic FT_Σ-algebra A is of bounded complexity if and only if, for all sufficiently large finite sets of variables X, ker(φ_X) = Sym(X).
The meaning of ker(φ_X) = Sym(X) is that permuting the variables has no effect on A_X (i.e., σ^A = Id_{A_X} for every σ ∈ Sym(X)). We fix from now on the FT_Σ-algebra A.
Our first step is to define a relation ≡_A which we show to be a congruence equivalent to the syntactic one (Proposition 6). We set, for all a ∈ A_X, ⟨a⟩ : (A_∅)^X → A_∅ to be the map b ↦ a[x_0 ← b(x_0), . . ., x_{n−1} ← b(x_{n−1})], in which X = {x_0, ..., x_{n−1}}. Define now a ≡_A a′, for a, a′ ∈ A_X, to hold if ⟨a⟩ = ⟨a′⟩ (note that this does not depend on the numbering of the variables).
By a simple counting argument, we obtain the following corollary.
Using Propositions 6 and 8, we prove that, whenever X is large enough, ker(φ_X) may only be Sym(X) or {Id_X}.
▶ Proposition 9. Let A be a finite syntactic FT_Σ-algebra. There is an integer M such that, for all X of cardinality at least M, either ker(φ_X) = Sym(X) or ker(φ_X) = {Id_X}.
Proof. Let M = max(5, |A_∅| + 1). Let X be a finite set of variables such that |X| ≥ M. By Prop. 8, ker(φ_X) is either Sym(X), Alt(X), or {Id_X}. Assume, for the sake of contradiction, that ker(φ_X) = Alt(X). This implies that the image of φ_X has exactly 2 elements: permutations of signature +1 are sent to Id_{A_X}, and those of signature −1 are sent to another (distinct) element. Let us call τ this permutation of A_X.
Let t ∈ Sym(X) be a transposition; let us show that t^A = Id_{A_X}. According to Proposition 6, we only need to prove that ⟨t^A(a)⟩ = ⟨a⟩ for all a ∈ A_X. Since this holds for all a ∈ A_X, t^A = Id_{A_X}. This is a contradiction. ◀
According to Proposition 9, for X large enough, ker(φ_X) may only be Sym(X) or {Id_X}. The next result shows that one of these two cases in fact holds for all sufficiently large X.
▶ Proposition 10. Let A be a finite syntactic FT_Σ-algebra. There is an integer M such that either ker(φ_X) = Sym(X) for all X with |X| ≥ M, or ker(φ_X) = {Id_X} for all X with |X| ≥ M.
By simple counting, if ker(φ_X) = {Id_X}, then |Sym(A_X)| ≥ |Sym(X)| and hence |A_X| ≥ |X|. This yields the following corollary, which is one direction of Theorem 5.
▶ Corollary 11. Let A be a syntactic FT_Σ-algebra of bounded complexity. There is an integer M such that, for every X with |X| ≥ M, ker(φ_X) = Sym(X).
Heading toward the converse implication, we now show that when ker(φ_X) = Sym(X), ⟨a⟩ can only take very simple forms. Indeed, for σ that maps y to z and leaves all other variables unchanged, we obtain an identity on ⟨a⟩, in which d, d′ ∈ A_∅ and h ∈ (A_∅)^{X\{x,y,z}}, from which the claim (⋆⋆) follows. At this point, (⋆⋆) allows changing the value b(z) to another one, provided that its values before and after the change appear elsewhere in b, and (⋆) allows permuting all the b(x)'s. Hence, by iterative applications of (⋆) and (⋆⋆), ⟨a⟩(b) = ⟨a⟩(b′) provided that b and b′ have the same image. ◀
As a consequence of the above lemma (Lemma 12), assuming that ker(φ_X) = Sym(X) for all sufficiently large X, we can bound the number of possible elements in A_X. This yields Corollary 13 below, which is the second direction of Theorem 5.
▶ Corollary 13. A finite syntactic FT_Σ-algebra such that ker(φ_X) = Sym(X) for all sufficiently large sets of variables X has bounded complexity.
Proof. By Lemma 12, we know that, for a ∈ A_X, ⟨a⟩ must be chosen in a set of bounded size; since ≡_A is equivalent to the syntactic congruence, this gives an upper bound on the number of elements of A_X that does not depend on |X| for X sufficiently large. ◀
Finite tree algebras of bounded complexity
The main theorem of this section, Theorem 14, provides a characterization of the languages recognized by FT_Σ-algebras of bounded complexity as Boolean combinations of simple languages. We proceed as follows. We state Theorem 14 and establish its easier parts in Section 4.1. In Section 4.2, we establish Lemma 22, which essentially amounts to the result for trees with 'sufficiently many branches' and is the hardest part of the proof of Theorem 14.
Statement of the result
The main theorem of this section, Theorem 14, requires some preliminary definitions.
For a given finite tree t, we associate to it some data (see Figure 2 for an illustration). Let n be the depth of the first node labelled with a non-unary symbol; formally, n is the least natural number such that ar(t(0^n)) ≠ 1. The unary prefix of t, denoted upref(t), is the word t(ε)t(0)···t(0^{n−1}) ∈ Σ_1^*. The first non-unary symbol of t is t(0^n), which we denote by fnu(t). The set of symbols in t is symb(t) = {t(u) | u ∈ dom(t)}, and its set of post-branching symbols, denoted pbsymb(t), is, if it exists, the set of symbols occurring strictly below the first non-unary node.
▶ Theorem 14. For a language of finite trees, the following properties are equivalent:
1. being recognized by an FT_Σ-algebra of bounded complexity,
2. having its syntactic FT_Σ-algebra of bounded complexity,
3. being equal to a Boolean combination of languages of the following kinds:
a.The language of finite trees with unary prefix in a given regular language of words L ⊆ Σ * 1 .b.The language of finite trees with first non-unary symbol a for a fixed non-unary symbol a. c.The language of finite trees with post-branching symbols B, for B ⊆ Σ. d.A regular language K of bounded branching, meaning that there exists a natural number k such that all trees t ∈ K have at most k branches.Let us establish the easiest parts of this statement.To start with, it is a generic factgeneric meaning independent of the type of algebras under consideration-that the syntactic F T Σ -algebra of a language divides 1 any F T Σ -algebra that recognizes the same language.Hence, each sort of the syntactic F T Σ -algebra has a lesser size than the same sort in any other F T Σ -algebra recognizing the same language.Thus 1 implies 2.
We now prove the second easiest implication, 3 implies 1. For this, it is sufficient to prove that the languages of kinds 3a to 3d are recognized by FTΣ-algebras of bounded complexity, and that FTΣ-algebras of bounded complexity are closed under all the Boolean connectives. This is stated in Lemmas 15 to 17 below. All are straightforward.
Given a regular language of words L ⊆ Σ₁*, a non-unary symbol a, and a set B ⊆ Σ of symbols, let us denote by UPref(L), FNU(a) and PBSymb(B), respectively, the language of trees with unary prefix in L, the language of trees with first non-unary symbol a, and the language of trees t with pbsymb(t) = B. Lemma 15 shows that these languages are recognized by algebras of bounded complexity, i.e., it treats Cases 3a to 3c.
▶ Lemma 15. Given a regular language of words L, a non-unary symbol a ∈ Σ_{≠1} and a set of symbols B ⊆ Σ, the languages UPref(L), FNU(a) and PBSymb(B) are recognized by FTΣ-algebras of bounded complexity.
Proof. For space considerations, we only detail the case of UPref(L), which is arguably the hardest one. Let φ : Σ₁* → M be a monoid morphism that recognizes L, meaning there exists P ⊆ M such that φ⁻¹(P) = L. Define on FiniteTrees(Σ) the relation ∼ by s ∼ t if φ(upref(s)) = φ(upref(t)). This relation is easily seen to be a congruence, and UPref(L) is obviously recognized by the quotient algebra FiniteTrees(Σ)/∼. Because ∼ only has |M| equivalence classes in every sort, we have just described an FTΣ-algebra of bounded complexity recognizing UPref(L) (see the toy sketch below). ◀

Next we deal with the languages that have bounded branching, i.e., Case 3d. It is done with a modification of the automaton algebra of Example 3 so that it is also able to count the number of branches of a tree up to the bound k. The key observation is that a tree with k different variables must have at least k branches. This means that the sort A_X, where X is a finite set of variables such that |X| > k, can be collapsed to one element only. Note that this is tightly related to our assumption that every variable from X must occur in all Σ, X-trees. We do not give any further detail here.
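For illustration only (our toy example, with an assumed language L = (gg)* over Σ₁ = {g}), the congruence ∼ from the proof of Lemma 15 can be computed through the morphism φ; each sort then has at most |M| classes.

```python
# Morphism into M = Z/2Z: phi counts occurrences of 'g' modulo 2, and
# phi^{-1}(P) with P = {0} is exactly L = (gg)*.
def phi(word: str) -> int:
    return word.count("g") % 2

P = {0}

def equivalent(upref_s: str, upref_t: str) -> bool:
    """s ~ t iff the unary prefixes have the same image under phi."""
    return phi(upref_s) == phi(upref_t)

def in_UPref_L(upref_t: str) -> bool:
    """t belongs to UPref(L) iff phi(upref(t)) lands in P."""
    return phi(upref_t) in P

print(equivalent("gg", ""))   # True: both prefixes map to 0
print(in_UPref_L("ggg"))      # False: odd number of g's
```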
▶ Lemma 16. The regular languages of finite trees of bounded branching are recognized by FTΣ-algebras of bounded complexity.
Finally, using standard constructions, we can provide the last ingredient of the proof that 3 implies 1 in Theorem 14:

▶ Lemma 17. The languages recognized by FTΣ-algebras of bounded complexity are closed under Boolean operations.
Trees with many branches
In this section, we fill the remaining gap in the proof of Theorem 14, namely the implication from 2 to 3.
Given two finite trees s and t and an FTΣ-algebra A with evaluation morphism α, let s ≃_A t hold if α(s) = α(t). We omit the sub- and superscript A when it is clear from the context. Our goal is to prove Lemma 22, which states that if A is of bounded complexity, then for all trees s and t with sufficiently many branches, if upref(s) = upref(t), fnu(s) = fnu(t) and pbsymb(s) = pbsymb(t), then s ≃_A t.

Given a Σ, X ⊎ {γ₁, ..., γₙ}-tree t in which the γᵢ's only appear once, and given trees t₁, ..., tₙ, we denote by t(t₁, ..., tₙ) the tree t[γ₁ ← t₁, ..., γₙ ← tₙ]. We will use these distinguished variables γᵢ throughout this section.
The results we prove throughout this section are consequences of the properties of permutations in FTΣ-algebras of bounded complexity and, more particularly, consequences of Corollary 11. Our first objective is to show that if there are sufficiently many branches in a tree, it is possible to exchange any two subtrees which are not related by the ancestor relation, without changing the evaluation in the algebra (see Lemma 20). Lemma 18 is a first step towards this goal: it says that, in a syntactic algebra, whenever a tree has many branches, it is possible to exchange some of its subtrees. More precisely:

▶ Lemma 18. For every syntactic FTΣ-algebra A of bounded complexity, there exists an integer N₀ such that, for all finite trees t(t₁, t₂) such that t has at least N₀ branches, t(t₁, t₂) ≃_A t(t₂, t₁).

Proof. Let N₀ be the integer introduced in Corollary 11. Assume that t is a Σ, X ⊎ {γ₁, γ₂}-tree that has at least N₀ branches. Let s be a Σ, X ⊎ {γ₁, γ₂} ⊎ {x₁, ..., x_{N₀−2}}-tree and let s₁, ..., s_{N₀−2} be trees such that s[x₁ ← s₁, ..., x_{N₀−2} ← s_{N₀−2}] = t. As such, s has at least N₀ variables. Hence, by Corollary 11, t′ ≃_A σ_A(t′), where σ is the transposition of γ₁ and γ₂. We then conclude by a direct computation. ◀

Notice the similarity between this proof and the observations we made in the proof of Lemma 12: it is the exact same argument; we just used the fact that a tree with many branches can always be obtained from some tree with many variables. Taking this similarity further, we may apply the other arguments used in the proof of Lemma 12 to prove Lemma 19, and change the number of occurrences of plugged subtrees.
▶ Lemma 19. For every syntactic FTΣ-algebra A of bounded complexity, there exists an integer N₁ such that, for any finite tree t(t₁, t₂, t₂) such that t has at least N₁ branches, t(t₁, t₂, t₂) ≃_A t(t₁, t₁, t₂).

The next step is to establish Lemma 20, which is very similar to Lemma 18, except that it suffices that t(t₁, t₂) has many branches, rather than t itself, to be allowed to swap t₁ and t₂. It is obtained by repeated and careful applications of Lemma 18.
▶ Lemma 20. For all syntactic FTΣ-algebras A of bounded complexity, there is an integer N₃ such that, for any finite tree t(t₁, t₂) where t(t₁, t₂) has at least N₃ branches, t(t₁, t₂) ≃_A t(t₂, t₁).
As such, we may exchange two subtrees of a tree with many branches without changing evaluation in the algebra.We will use this result to prove that two trees with the same unary prefix, first non-unary symbol and set of post-branching symbols are not distinguished by the algebra (Lemma 22).
Using Lemmas 19 and 20, we first establish Lemma 21, which in some situations allows us to make a symbol 'appear' or 'disappear'.
▶ Lemma 21. For all syntactic FTΣ-algebras A of bounded complexity, there exists an integer N₄ with the following property: for all finite trees s(γ₁, γ₂), all finite trees t with at least N₄ branches, and all symbols c, d ∈ symb(t) with c constant, s(t, c) ≃_A s(t, d(c, ..., c)).

In combination with Lemmas 19 and 20, it means that under the same assumptions, s(t, c) ≃_A s(t, t′) for all finite trees t′ that use only symbols appearing in t. This building block, used iteratively, makes it possible to shuffle and change the number of occurrences of all symbols that appear below the first non-unary symbol, as soon as there are sufficiently many branches. This is our key Lemma 22.
▶ Lemma 22. Let A be a syntactic FTΣ-algebra of bounded complexity. There is an integer N such that, for all finite trees s and t, both of which have at least N branches, if upref(s) = upref(t), fnu(s) = fnu(t) and pbsymb(s) = pbsymb(t), then t ≃_A s.
Putting everything together, we establish the last implication in Theorem 14:

▶ Lemma 23. A language of finite trees L recognized by an FTΣ-algebra of bounded complexity can be written as a Boolean combination of languages of kinds 3a-3d in Theorem 14.
The idea behind this last proof is as follows. Given a regular word language K ⊆ Σ₁*, a non-unary symbol a, and a set of symbols B, let L_{K,a,B} be the set of trees t such that upref(t) ∈ K, fnu(t) = a, and pbsymb(t) = B. Such languages can be written as the intersection of UPref(K), FNU(a) and PBSymb(B), which are of the kinds 3a-3d in Theorem 14. Let N be the value from Lemma 22.
In a first step, we construct finitely many tuples (Kᵢ, aᵢ, Bᵢ) such that L and ⋃ᵢ L_{Kᵢ,aᵢ,Bᵢ} coincide over all trees with sufficiently many branches. For this, for all t ∈ L with at least N branches, consider the least language K_t ⊆ Σ₁* recognized by A_{x} such that upref(t) ∈ K_t (in which upref(t) is recognized by A_{x} when seen as a tree made of unary symbols and the variable x). The language K_t is regular and has the property that exchanging the unary prefix of t for any other word in K_t leaves the tree in L. We use as the (Kᵢ, aᵢ, Bᵢ)'s all the tuples (K_t, fnu(t), pbsymb(t)) for t ranging over the trees in L with at least N branches (note that since all the K_t's are recognized by A_{x}, there are finitely many of them). One can show using Lemma 22 that, as claimed, L and ⋃ᵢ L_{Kᵢ,aᵢ,Bᵢ} coincide over all trees with at least N branches. In a second step, one defines Lᵢ to be the set of trees in L_{Kᵢ,aᵢ,Bᵢ} that have at least N branches. Let also L′ be L restricted to trees with fewer than N branches. One gets L = (⋃ᵢ Lᵢ) ∪ L′, and this is by construction a Boolean combination of languages of the kinds 3a-3d.
This concludes our proof of Lemma 23, and hence Theorem 14.
Conclusion
In this paper, we initiated a complexity analysis of the expressiveness of infinitely sorted algebras. Our main result, Theorem 14, gives a descriptive characterization of the languages of finite trees recognized by algebras of bounded complexity. In this work, we made a design choice in the definition of tree algebras: we require that in a tree of sort X, every variable occurs at least once. Removing this assumption would change our bounded complexity characterization result, yielding only Boolean combinations of languages of the form "the root symbol is a". Another possible variant is to allow trees restricted to a single variable; in this case our results remain unchanged.
Extensions to infinite trees
We also obtained a similar characterization for algebras for infinite trees. We did not include it in this short abstract for space considerations (this was in fact our original question). In this case, the algebras have to include an extra iterating construct that allows one to build all infinite regular trees (i.e., unfoldings of finite graphs). By Rabin's lemma, regular languages of infinite trees are entirely characterized by the regular trees they contain, and as a consequence such algebras describe regular languages of infinite trees. Our result characterizes such algebras of bounded complexity along the same lines as Theorem 14. We show that over infinite trees, such algebras can express two extra things: (1) the existence of subtrees of unary shape that belong to a prescribed prefix-invariant regular language of infinite words, and (2) the existence of subtrees in which a set of letters C appears densely, i.e., every letter in C appears in every subtree.
Future work
The next simplest cases seem to be the algebras of polynomial complexity and of bounded orbit complexity. An example in this case is the language of Σ-trees such
¹ The quotient algebra FiniteTrees(Σ)/∼_L is called the syntactic algebra of L.
Figure 2. A finite tree t and its associated data. | 8,608.6 | 2021-01-01T00:00:00.000 | ["Mathematics", "Computer Science"] |
Understanding the Complexities of Student Learning Progress in Texas: A Study of COVID-19 and Rural vs. Non-Rural Districts
In this study, we investigate the impact of COVID-19 on academic achievement in Texas public schools. Demographic and Grade 5 STAAR test data were collected from 1155 public school districts for 2018–2019 and 2020–2021. Multiple regression was adopted to analyze the differences between rural and non-rural districts, as well as the impact of demographic characteristics on students’ achievement. The results reveal significant differences in demographic characteristics between the two academic years, with non-rural districts exhibiting a greater decline in academic achievement than rural districts. Additionally, the findings suggest that higher teacher salaries correlate with better academic performance across various subjects and that English learners require additional support to acquire content knowledge and skills. We further confirm that the COVID-19 pandemic has disrupted the academic learning experience of Texas students, with rural districts displaying more resilience than non-rural districts.
Introduction
Many governments chose to close schools for several weeks in the spring of 2020 due to the COVID-19 epidemic. In the United States, lockdown policies were initiated to prevent and slow the spread of the virus, and students started online schoolwork and lessons with the support of their teachers and parents [1]. However, the education system in the United States was unprepared for protracted closures. Although school closures were regarded as one of the most effective strategies for preventing the transmission of the virus [2], many educators and researchers are concerned about the impact of COVID-19-related school closures on student academic achievement and learning disparities. The detrimental impacts of physical school closures (e.g., summer vacation or natural catastrophe) on student academic performance are extensively established (e.g., [3,4]). Specifically, Hanushek and Woessmann [5] projected that COVID-19-related school closures had a negative impact on student attainment of 0.10 standard deviations. According to a systematic review [6], school closures during COVID-19 adversely affected student attainment, particularly among younger children and those from low-socioeconomic-status households. Ref. [7] anticipated that socioeconomic attainment inequalities would grow by up to 30%.
The shift from in-person instruction to online or hybrid learning led to problems for educational institutions, teachers, parents, and students. To begin with, schools lacked the structure to provide effective and quality instruction to children after the shutdown [8]. Teachers were not fully prepared to face the challenges associated with online learning, including limited technical support [9][10][11], the heavy workload of course content preparation [9,11], difficulty explaining formulas and teaching subjects involving numerical problems [10], and maintenance and supervision of the online classroom [9,10]. In the meantime, students had to overcome several obstacles in online learning, including limited access to the internet or laptops [9,11], lack of parental support and engagement with instructors [11,12], and mental health problems [9].
According to state data in Texas, in April 2020, 569 school districts declared closures due to coronavirus fears [13]. Although school districts were encouraged to continue educating all students, legislators from both political parties and school superintendents in Texas urged the state to cancel statewide testing out of concern that students would miss school days during an extended spring break [14]. In March 2020, the Texas governor waived the requirements for the annual academic assessment, the State of Texas Assessments of Academic Readiness (STAAR), for the school year 2019-2020. The Texas Education Agency (TEA) resumed the STAAR tests for all school systems and campuses in the school year 2020-2021. The preliminary STAAR data analysis concluded that COVID-19 contributed to learning loss and decreased academic performance as measured by STAAR across grade levels [15]. Specifically, according to the Texas Academic Performance Reports by TEA [16], 15% fewer students passed STAAR math, and 4% fewer passed STAAR reading.
Specifically for schools in rural areas, students' learning loss and the possible factors that impacted their achievement merit a close look. For example, compared to non-rural school districts, rural school districts typically have significantly more students identified as economically disadvantaged [17], limited instructional expenditure [18], a higher student mobility rate [17,19], and a high teacher turnover rate [20,21]. Given the geographic isolation of rural schools, limited resources, and lack of support, rural schools face significant challenges in providing effective and quality professional development to teachers [22,23]. With the largest number of students enrolled in rural public schools [17], Texas not only faces challenges similar to other rural areas, such as low expenditure and professional isolation, but also possesses some unique rural education characteristics [18]. For example, nationally, 3.5% of rural students were identified as English learners, while this figure in Texas is 8.2% [24]. Previous studies have found an achievement gap between rural and non-rural school students in reading [17,18,25] and science [26]. While school location is often used as an indicator in educational research and policy making, what impacts students' academic achievement is not the categorization of rural or non-rural, but the local demographic characteristics associated with the school districts [18].
In this study, we aimed to investigate the impact of COVID-19 on Texas school districts' demographic characteristics and fifth-grade students' learning progress in rural and non-rural areas. In the following section, we provide an overview of three key topics related to K-12 education: the impact of COVID-19 on education, the influence of geographic and demographic factors on academic performance in rural school districts, and the demographic diversity of rural school districts in Texas.
The Impact of COVID-19 on K-12 Education: Challenges and Issues
The COVID-19 pandemic has greatly impacted student learning, teacher instruction, and school support for students and educators [27,28]. To ensure the continuity of student education, K-12 schools transitioned to virtual learning during the pandemic [29]. However, prolonged lockdowns, the requirement for extended virtual learning, and subsequent waves and mutations of COVID-19 disrupted the traditional learning environment and were expected to persist into the following school year [29,30]. The significant shift to online instruction presented a multitude of challenges for teachers, students, and administrators [29,31]. Teachers had to adapt their instruction to suit the new learning environment, shifting the balance between review and new content toward less review and a greater focus on new material [12,29]. Moreover, researchers have noted that teachers had to reduce instructional time during the sudden shift to online learning, resulting in a decline in reading and math scores for the semester starting in March 2020 [28].
Moreover, during the COVID-19 pandemic, school administrators have faced many challenges. The transition to remote teaching has required educators to adapt to new technological tools and platforms to support virtual learning, while also ensuring that all students have equal access to digital devices and internet connectivity [32]. Administrators have also been charged with developing and executing comprehensive plans to ensure that schools remain safe and healthy for students, teachers, and staff. These plans have involved considerable time and resources, including guaranteeing sufficient personal protective equipment, scheduling regular COVID-19 testing, and implementing contact tracing protocols [33,34]. Additionally, administrators have had to address students' social and emotional needs, severely impacted by the pandemic, by providing counseling services and other support mechanisms [12,34]. Finally, the pandemic's financial pressures have mounted, putting administrators in the difficult position of making challenging decisions on budget cuts and staffing levels while still providing quality education to students [32].
The emergence of the COVID-19 pandemic has also brought to the forefront the challenging circumstances encountered by low-income households and rural areas concerning internet connectivity [35]. Despite the initial expectation that remote learning would be a smooth transition for students and families equipped with multiple electronic devices and high-speed internet, the reality has been quite the opposite for those who lack a reliable internet connection [36,37]. The repercussions of this digital divide are extensive and significantly impact students' access to education and their educational outcomes [36,37]. Empirical evidence suggests that students who lack access to dependable internet connectivity are more prone to academic setbacks, leading to long-term adverse effects such as reduced lifetime earnings and limited opportunities [38]. Consequently, guaranteeing that every student has access to reliable internet and devices has become crucial in education [38][39][40]. Reflecting on the broader challenges of the COVID-19 pandemic, Anderson [41] highlights the significant stress placed on educators as they navigated the shift to emergency remote teaching (ERT), often under less-than-ideal circumstances. This situation illustrates the global struggle within the educational sector to maintain continuity in learning during unprecedented disruptions.
Geographic and Demographic Factors Affecting Academic Performance in Rural School Districts
The academic performance of students in rural school districts is influenced by many different factors, including both where the districts are located and the characteristics of the people living there. Regarding geography, rural school districts are uniquely affected by their environment and the communities they serve [42]. The economy of rural areas is typically reliant on sectors experiencing declining job opportunities, thereby constricting the availability of educational resources for students. Further, a lower population density can aggravate the dearth of resources in rural regions, impeding students' academic growth [43].
Demographic elements also play a crucial part in students' academic progress. Disparities in socioeconomic status (SES) can considerably affect the achievement gap between students in rural and non-rural settings [44]. English language proficiency is likewise imperative to students' academic success, specifically in reading, math, and science [18,45,46]. Recent research has also established a strong correlation between the teacher turnover rate and the student mobility rate, on the one hand, and students' academic performance in diverse subjects, on the other [18,45,46].
The COVID-19 pandemic has highlighted the gap in internet access for low-income families and rural areas, especially in terms of remote learning [35]. Students who lack reliable internet and electronic devices face considerable educational disadvantages, potentially harming their future opportunities and earning potential [38,40]. Reports by rural teachers suggest that remote learners are often the most underprivileged regarding technology access, resulting in less effective pedagogy [47]. While online learning has the potential to enhance learning outcomes for many students, thoughtful consideration is necessary to prevent worsening social and economic inequalities [48]. Families with lower incomes and those in rural areas encounter major obstacles to accessing steady internet and digital tools, limiting their chances to engage in online education [32]. A recent study by Bacher-Hicks et al. [36] discovered pronounced disparities in the utilization of online learning materials between regions with varying income levels, internet access, and school types. Families with low SES may also be restricted in terms of study space, electronic devices, internet access, and books, all of which can negatively impact their children's online learning experience [49].
Demographic Diversity in Rural School Districts: A Case Study of Texas
Unlike community classification, district demographic characteristics account for a higher percentage of the variance in students' academic achievement [18,45,50]. Understanding the impact of demographic variables is therefore crucial when examining the academic achievement of rural and non-rural district students. Rural school districts often face similar challenges in improving students' academic performance due to their geographic location [51]. However, within rural areas, school districts exhibit significant diversity in demographic characteristics, resources, and student needs [52,53]. As a result, academic outcomes for students in rural communities vary significantly along demographic dimensions, including students' SES and racial/ethnic composition, region, distance from urban communities, and local economies [50].
Texas is an illustrative case of rural school district diversity, with nearly 700,000 students enrolled in rural districts [17]. The state's rural school districts exhibit significant demographic diversity, with Hispanic, African American, and Caucasian populations comprising the majority. Although Texas's rural school districts face common challenges, such as low expenditure per student, inequitable funding, high transportation costs, high mobility rates, and high poverty rates [17], there is considerable variation within Texas's rural districts. For example, Lindsay Independent School District (ISD) and Santa Maria ISD, both identified as rural school districts, have exhibited vastly different demographic characteristics based on their annual Texas Academic Performance Reports (TAPR). In 2020-2021, 9.8% of students in Lindsay ISD were identified as economically disadvantaged (ED), and no students were identified as English learners (ELs). The district's student mobility rate was 4.3%, and the teacher turnover rate was 9.6%. In addition, the average years of teaching experience was 15.1 years. In contrast, 98.5% of students in Santa Maria ISD were identified as ED, much higher than the state average of 60.2%. Additionally, 38.6% of students were identified as ELs, and the district's student mobility rate was 11.8%, with a teacher turnover rate of 10.8%. The average years of teaching experience was 8.2 years, less than the state average of 11.2 years. Therefore, it is important to consider the diversity of rural districts, the impact of demographic variables, the limitations of data sources, funding and resource allocation, and the impact of geographic location to gain a comprehensive understanding of the unique challenges faced by these communities and how these factors impact academic outcomes.
Study Purpose and Research Questions
Previous studies indicated a need to revisit the possible differences between rural and non-rural school districts regarding students' demographic characteristics and their academic gains before and after COVID-19. Two years after the onset of the pandemic, it is the right time to empirically assess the impact of COVID-19 on districts' demographics and students' academic gains, as well as how the changed demographics may have impacted student achievement at the district level. Therefore, in this study, we sought to address the following three research questions:

Research Question 1: What was the impact of COVID-19 on Texas school districts' demographic characteristics, including instructional hours, principal experience, teacher experience, teacher-to-student ratio, teacher full-time equivalence, teacher salary, teacher turnover rate, student mobility rate, percentage of students identified as English learners, and percentage of students identified as economically disadvantaged?
Research Question 2: What was the impact of district location (rural vs. non-rural) on Texas fifth-grade students' learning progress (difference between 2019 and 2021) as measured by high-stakes reading, math, and science tests?
Research Question 3: What were the impacts of district location (rural vs. non-rural) and demographic characteristics on Texas fifth-grade students' learning progress (difference between 2019 and 2021) as measured by high-stakes reading, math, and science tests?
Method

Research Design and Context
In accordance with the Texas Education Agency (TEA) guidelines outlined in 2020, a rural school district in Texas is defined as having either an enrollment of less than 300 students, or an enrollment exceeding 300 students but below the median district enrollment of the state together with an average enrollment growth rate of less than 20% over the past five years. In 2018-2019, TEA identified 466 rural school districts out of 1210 public school districts across the state of Texas.
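A hypothetical helper (ours; the official TEA rule may differ in detail) encoding this paraphrased definition reads as follows.

```python
def is_rural(enrollment: int, state_median_enrollment: float,
             five_year_growth_rate: float) -> bool:
    """Rural per the paraphrased TEA (2020) definition: enrollment < 300,
    or enrollment between 300 and the state median with an average
    five-year enrollment growth rate below 20%."""
    if enrollment < 300:
        return True
    return (enrollment < state_median_enrollment
            and five_year_growth_rate < 0.20)

# Example: a 450-student district, state median 800, 5% growth -> rural.
print(is_rural(450, 800, 0.05))  # True
```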
To investigate the relationship between rural/non-rural status and academic achievement, we collected rural and non-rural district-level data on STAAR reading, math, and science through the Texas Assessment Management System (TAMS). More precisely, the data acquisition targeted fifth-grade district-level data for both the 2018-2019 and 2020-2021 academic years. Our final analytical sample consisted of 1155 public school districts, 461 of which were categorized as rural. District-level demographic data for 2018-2019 and 2020-2021 were gathered from the Texas Academic Performance Reports (TAPR), which included a range of essential indicators: the percentage of instructional hours, principal and teacher experience, teacher-to-student ratio, average teacher salary, teacher turnover rate, student mobility rate, and the percentages of students identified as English learners and as economically disadvantaged.
Measurement
STAAR is a mandatory, state-administered standardized testing program aligned with the Texas Essential Knowledge and Skills (TEKS) curriculum standards. It assesses students' competencies and skills in key subject areas: reading and mathematics in grades 3 to 8, writing in grades 4 and 7, and science in grades 5 and 8. For eligible students whose primary language is Spanish, TEA provides the alternative STAAR Spanish to evaluate math, reading, and science academic performance in grades 3-5. STAAR uses performance-level descriptors to capture students' academic performance on both the STAAR and STAAR Spanish assessments, with four rating levels: Masters Grade Level, Meets Grade Level, Approaches Grade Level, and Did Not Meet Grade Level. This study focuses on the percentage of students who achieved at least Approaches Grade Level on the STAAR reading, math, and science tests; this category represents the basic level of academic proficiency, indicates whether a student passes the test, and includes students rated Approaches Grade Level, Meets Grade Level, and Masters Grade Level. According to the Texas Education Agency [54], "Approaches Grade Level" refers to students who demonstrate some ability to apply the knowledge and skills outlined by TEKS in a familiar context. Students classified at this performance level will likely make academic progress in the next grade with targeted academic intervention. Table 1 provides detailed information about the tests.
Data Analysis
RQ1 aimed to investigate whether there was a significant change in district-level demographic characteristics after the onset of the COVID-19 pandemic. To this end, we conducted ten paired t-tests to examine potential differences before and after COVID-19 with respect to teacher and student demographic characteristics at the district level. These characteristics included the percentage of instructional hours, principal experience, teacher experience, teacher-to-student ratio, teacher full-time equivalence, teacher salary, teacher turnover rate, student mobility rate, percentage of students identified as ELs, and percentage of students identified as ED.
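As a sketch of this step (ours; the file and column names below are hypothetical, not from the study), the ten paired t-tests can be run with scipy.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("district_demographics.csv")  # hypothetical file

variables = [
    "pct_instructional_hours", "principal_experience", "teacher_experience",
    "teacher_student_ratio", "teacher_fte", "teacher_salary",
    "teacher_turnover_rate", "student_mobility_rate",
    "pct_english_learners", "pct_econ_disadvantaged",
]

# One paired t-test per characteristic, pairing each district's 2019 and
# 2021 values.
for v in variables:
    t_stat, p = stats.ttest_rel(df[f"{v}_2019"], df[f"{v}_2021"])
    print(f"{v}: t = {t_stat:.2f}, p = {p:.3f}")
```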
The aim of RQ2 was to assess whether there was a significant difference in students' academic progress in reading, math, and science between rural and non-rural school districts after the COVID-19 pandemic. To address this research question, we conducted stepwise hierarchical multiple regression analyses for each of the three dependent variables, namely students' learning progress in the three subjects. Specifically, we calculated learning progress by subtracting the percentage of students who achieved Approaches Grade Level on the STAAR tests in 2019 from the percentage who achieved Approaches Grade Level in 2021. To address RQ2, we included the location of the school districts in Model 1 as the grouping variable.
Model 1: Progress = b₀ + b₁ · Rural + ε,
where b₁ is the coefficient of the school district being rural.
The aim of Research Question 3 was to investigate the additional impact of demographic characteristics on students' learning progress. To address this research question, we again conducted stepwise hierarchical multiple regression analyses for the same three dependent variables as in Research Question 2. We calculated the change in district-level demographic characteristics by subtracting the 2019 values from the 2021 values for each variable that showed a significant change in response to COVID-19. We repeated the hierarchical multiple regression analyses three times to determine whether the changes in demographic characteristics could predict students' learning progress in reading, math, and science above and beyond school district location. To address Research Question 3, the variables reflecting the differences in demographic characteristics, including the differences in principal experience, teacher experience, teacher-to-student ratio, teacher full-time equivalence, teacher salary, teacher turnover rate, student mobility rate, percentage of students identified as ELs, and percentage of students identified as ED, were added in Model 2, following the district location condition in Model 1.
Model 2: Progress = b₀ + b₁ · Rural + b₂ · ΔPrincipalExp + b₃ · ΔTeacherExp + b₄ · ΔTeacherStudentRatio + b₅ · ΔTeacherFTE + b₆ · ΔTeacherSalary + b₇ · ΔTurnoverRate + b₈ · ΔMobilityRate + b₉ · ΔPctED + b₁₀ · ΔPctEL + ε,
where b₁ is the coefficient of the district being rural, b₂ is the coefficient of district-level principal experience, b₃ of district-level teacher experience, b₄ of the district-level teacher-to-student ratio, b₅ of district-level teacher full-time equivalence, b₆ of district-level average teacher salary, b₇ of the district-level teacher turnover rate, b₈ of the district-level mobility rate, b₉ of the district-level percentage of students identified as ED, and b₁₀ of the district-level percentage of students identified as ELs.
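A minimal sketch of the two-step regression (ours; the file and column names are hypothetical) using statsmodels, where anova_lm supplies the F-test for the R² change between Model 1 and Model 2:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("district_progress.csv")  # hypothetical file
# e.g., progress_math = pct_approaches_2021 - pct_approaches_2019

m1 = smf.ols("progress_math ~ rural", data=df).fit()
m2 = smf.ols(
    "progress_math ~ rural + d_principal_exp + d_teacher_exp"
    " + d_teacher_student_ratio + d_teacher_fte + d_teacher_salary"
    " + d_turnover_rate + d_mobility_rate + d_pct_ed + d_pct_el",
    data=df,
).fit()

print(m1.rsquared, m2.rsquared)  # R² at each step
print(anova_lm(m1, m2))          # F-test for the change in R²
```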
Results
In the results section, descriptive analyses are first presented for rural and non-rural school district students' academic performance as measured by the STAAR reading, math, and science tests in 2019 and 2021 (Table 2).

RQ1: What was the impact of COVID-19 on Texas school districts' demographic characteristics, including instructional hours, principal experience, teacher experience, teacher-to-student ratio, teacher full-time equivalence, teacher salary, teacher turnover rate, student mobility rate, percentage of students identified as English learners, and percentage of students identified as economically disadvantaged?
The results of the paired-sample t-tests revealed statistically significant differences before and after COVID-19 in principal experience (p = 0.030), teacher experience (p < 0.001), teacher-to-student ratio (p < 0.001), teacher full-time equivalence (p < 0.001), average teacher salary (p < 0.001), teacher turnover rate (p < 0.001), student mobility rate (p < 0.001), percentage of students identified as ELs (p < 0.001), and percentage of students identified as ED (p = 0.004). No significant difference in the percentage of instructional hours was identified between 2019 (before COVID-19) and 2021 (after COVID-19) (p = 0.063; Table 3).

Research Question 2: What was the impact of district location (rural vs. non-rural) on Texas fifth-grade students' learning progress (difference between 2019 and 2021) as measured by high-stakes reading, math, and science tests?
To answer the second research question, stepwise hierarchical multiple regression analyses were conducted for each of the three outcomes: reading, math, and science. The results of Model 1 revealed a statistically significant impact of district location on Texas fifth-grade students' learning progress as measured by the Grade 5 STAAR reading, math, and science tests. Specifically, non-rural school districts showed a larger learning loss in reading after COVID-19 than rural school districts, by 2.41 percentage points of students achieving Approaches Grade Level (p = 0.002), when other variables were controlled. In addition, non-rural school districts showed a larger learning loss in math than rural school districts, by 5.77 percentage points (p < 0.001), when other variables were controlled. Finally, non-rural school districts showed a larger learning loss in science than rural school districts, by 4.82 percentage points (p < 0.001), when other variables were controlled. See Table 4 for full details on Model 1.
Research Question 3: What were the impacts of district location (rural vs. non-rural) and demographic characteristics on Texas fifth-grade students' learning progress (difference between 2019 and 2021) as measured by high-stakes reading, math, and science tests?
To address the third research question, hierarchical multiple regression analysis was conducted three times to determine whether the change in demographic characteristics (principal experience, teacher experience, teacher-to-student ratio, teacher full-time equivalence, teacher salary, teacher turnover rate, student mobility rate, percentage of students identified as English learners, and percentage of students identified as economically disadvantaged) improved the prediction of students' academic progress, indicated by the change in the percentage of students achieving Approaches Grade Level on the Grade 5 STAAR tests, over and above district location (rural vs. non-rural) in reading, math, and science, respectively. See Table 4 for full details on each regression model. Adding the change in demographic variables to predict students' learning progress in reading led to an increase in R² of 0.013, ΔF(9, 1126) = 1.692, p = 0.086. The full model combining the change in demographic characteristics and the rural location of a school district to predict fifth-grade students' learning progress in reading (Model 2) was statistically significant, R² = 0.022, F(10, 1126) = 2.497, p = 0.006; adjusted R² = 0.013. Non-rural school districts showed a larger learning loss in reading after COVID-19 than rural school districts, by 2.04 percentage points (p = 0.012), when other variables were controlled. In addition, the change in teacher turnover rate and the change in average teacher salary significantly predicted students' reading progress. Specifically, for each one-dollar increase in the change in average teacher salary, the expected learning progress in reading increased by 0.0003 percentage points (p = 0.023), holding the other variables constant. For each one-percentage-point increase in the change in teacher turnover rate, the expected learning progress in reading increased by 0.09 percentage points (p = 0.030), holding the other variables constant.
Adding the change in demographic variables to predict students' math learning progress led to an increase in R² of 0.024, ΔF(9, 1126) = 3.186, p < 0.001. The full model (Model 2) was statistically significant, R² = 0.055, F(10, 1126) = 6.535, p < 0.001; adjusted R² = 0.046. Non-rural school districts showed a larger learning loss in math after COVID-19 than rural school districts, by 5.07 percentage points (p < 0.001), when other variables were controlled. In addition, the change in teacher experience, the change in the percentage of students identified as ELs, and the change in average teacher salary significantly predicted students' math progress. Specifically, for each one-dollar increase in the change in average teacher salary, the expected learning progress in math increased by 0.0005 percentage points (p = 0.005), holding the other variables constant. For each one-year increase in the change in teacher experience, the expected learning progress in math decreased by 0.93 percentage points (p = 0.003), holding the other variables constant. For each one-percentage-point increase in the change in the percentage of students identified as ELs, the expected learning progress in math decreased by 0.65 percentage points (p = 0.004), holding the other variables constant.
Adding the change in demographic variables to predict students' learning progress in science led to an increase in R² of 0.023, ΔF(9, 1124) = 3.060, p = 0.001. The full model (Model 2) was statistically significant, R² = 0.044, F(10, 1124) = 5.223, p < 0.001; adjusted R² = 0.036. Non-rural school districts showed a larger learning loss in science after COVID-19 than rural school districts, by 4.42 percentage points (p < 0.001), when other variables were controlled. In addition, the changes in the teacher-to-student ratio, average teacher salary, percentage of students identified as ELs, and student mobility rate significantly predicted students' science progress. Specifically, for each one-student-per-teacher increase in the change in the teacher-to-student ratio, the expected learning progress in science decreased by 1.225 percentage points (p = 0.004), holding other variables constant. For each one-dollar increase in the change in average teacher salary, the expected learning progress in science increased by 0.0003 percentage points (p = 0.038), holding the other variables constant. For each one-percentage-point increase in the change in student mobility rate, the expected learning progress in science increased by 0.32 percentage points (p = 0.020), holding the other variables constant. Moreover, for each one-percentage-point increase in the change in the percentage of students identified as ELs, the expected learning progress in science decreased by 0.533 percentage points (p = 0.003), holding the other variables constant.
Impact of COVID-19 on Demographic Characteristics
This study is a data-driven analysis exploring the impact of COVID-19 on the academic performance of fifth-grade students in Texas rural and non-rural school districts, as measured by the STAAR reading, math, and science tests. The overall findings indicated significant differences in Texas school districts' demographic characteristics between 2019 and 2021. Among these variables, some changes are worth attention. Compared to 2019, there was a significant increase in teachers' average salary in 2021. Moreover, the student mobility and teacher turnover rates were significantly lower in 2021 than in 2019. While students' academic performance is often found to be associated with the teacher turnover rate [55] and the student mobility rate [56], the improvement in these variables might not have been strong enough to mitigate the impact of COVID-19 on students' academic performance.
Impact of COVID-19 on Students' Academic Performance
The hierarchical linear regression analysis results suggested that COVID-19 significantly and negatively impacted students' academic achievement across subjects, which is consistent with the report by TEA [16]. Both rural and non-rural districts declined in the percentage of students who achieved Approaches Grade Level on the STAAR reading, math, and science tests after COVID-19. Notably, COVID-19 had a more significant negative impact on non-rural districts than on rural school districts. Before COVID-19, Texas rural school districts already faced the challenges of poverty [17], insufficient access to professional development [23], and racial diversity [17]. Researchers have been making efforts to support rural districts' teachers and students. For example, a previous study found that the virtual delivery of professional development and mentoring were effective and practical solutions for rural teachers to access quality pedagogical support to further improve students' academic learning [57]. In light of the finding that non-rural districts experienced a greater learning loss than rural districts, a deeper analysis is warranted to understand the underlying factors contributing to this unexpected result. It is crucial to consider the unique characteristics and resources of rural districts in Texas, which may differ significantly from those in other regions or from typical portrayals of rural education.
Firstly, the methodological design of our study, which relies primarily on aggregated district-level data, may have influenced these findings. While this approach provides a broad overview, it potentially overlooks subtle intra-district variations that could better explain the resilience of rural areas. Moreover, during the pandemic, Texas rural districts might have had distinct advantages that mitigated the impacts of school closures. For example, the smaller school sizes and community cohesion typical of rural areas might have facilitated more effective communication and implementation of distance-learning strategies. The role of local education authorities and their support during the pandemic, including the provisioning of technological resources and training, could also have played a crucial role in these districts. Additionally, our findings raise questions about the adequacy of current definitions and classifications of 'rural' in educational research. The definition used in this study, as provided by the Texas Education Agency, might mask significant variability within rural districts, ranging from remote areas with severe resource limitations to districts closer to urban centers that might not face the same challenges. This variability could inadvertently lead to findings that suggest a homogeneity within rural districts that does not exist. Reflecting on how rural areas are defined, and ensuring these definitions accurately capture demographic and geographic realities, could lead to more precise and actionable insights.
These unexpected findings emphasize the importance of contextual and demographic factors in assessing the impact of educational disruptions like those caused by the COVID-19 pandemic. Future studies can consider these elements to provide a more comprehensive understanding of the dynamics at play. This approach will not only enhance the accuracy of research outcomes but also contribute to the development of targeted educational policies and practices that can better support vulnerable populations during crises. In comparison, non-rural districts that lack sufficient equipment, technological support, and resources due to lower socioeconomic conditions appear less prepared to handle the shifts in instructional delivery prompted by COVID-19.
In addition, the findings indicated a larger numerical decrease in math and science than in reading, which is also consistent with the TEA report of a larger decline in math and science than in reading at the state level [15]. A potential reason for this phenomenon is that effective math and science instruction is often embedded with hands-on experiments and in-person engagement. Texas school districts transitioned between in-person learning, virtual learning, and hybrid modes from spring 2020 to fall 2021 due to COVID-19; however, educational institutions, teachers, and families were not fully prepared to adapt the curriculum and learning materials to engage students during virtual learning. Another potential reason is that, outside the classroom, students still have greater opportunities to apply their reading skills, such as reading with parents [58]. By contrast, only some parents can or know how to help their children practice math skills, especially at higher grade levels [58], or have the skills and resources to conduct science experiments.
The findings of the study further indicated that adding a series of district-level demographic variables significantly improved the model prediction, which is consistent with Tang et al. [18], who found that it is not a district's geographic location per se, but the demographic characteristics associated with the district, that significantly impact students' academic performance. Specifically, we found that one district-level demographic characteristic, average teacher salary, consistently and significantly impacted students' academic performance across subjects (reading, math, and science). This finding is consistent with a previous study showing that higher teacher salaries are associated with a decreased academic gap among students of diverse backgrounds [59]. In addition, the percentage of students identified as ELs showed a significant impact on the STEM subjects (math and science), indicating that ELs need additional, quality support to acquire content knowledge and skills, as well as academic language.
Conclusions and Limitations
COVID-19 has significantly impacted the education of students across the nation. According to estimates, as soon as COVID-19 struck, up to three million students in the United States withdrew their enrollment [60], with students from low socioeconomic backgrounds perhaps being the hardest hit [7,61]. Students who return to school will likely be further behind and have a wider range of academic abilities [4]. Like many other states across the nation, Texas had to shift to remote learning during the pandemic. This abrupt transition disrupted students' academic learning experiences across Texas. The results of the current study indicate that COVID-19 significantly and negatively impacted both rural and non-rural fifth-grade students' academic performance across subjects. More importantly, non-rural school districts exhibited larger learning losses than rural districts. This finding may seem unexpected given that rural areas typically have fewer resources than their non-rural counterparts. Several factors unique to rural districts in Texas, such as smaller student-to-teacher ratios, strong community ties, or distinct administrative strategies, might have contributed to these unexpected outcomes.
The study has several limitations. First, our study only focuses on Texas fifth-grade students' academic performance; the impact of the pandemic on students' academic learning might not be the same at different grade levels or in different states. Future studies should consider investigating students' learning loss across grade levels and in other states, or even across the nation. Second, we analyzed district-level aggregated data given its public accessibility; inevitably, detailed and nuanced information at the individual level was not considered. Therefore, we suggest that statewide data collection and innovative student assessment systems be utilized to monitor learning progress and identify students' diverse needs. Third, the study utilized data collected in 2019 and 2021 on state-level standardized tests. However, the pandemic might have a long-term effect not only on students' academic performance but also on their learning motivation and behavior. Future studies should consider collecting longitudinal data to further monitor students' learning progress and provide students and teachers with timely support.
Table 2. Descriptive statistics of STAAR performance by school district location.
Table 3. t-test results from comparing school districts' demographic characteristics before and after COVID-19.
Table 4. Hierarchical multiple regression analysis predicting students' learning progress in reading, math, and science from demographic characteristics and locations.
Significance is reported at the following level: * p < 0.05. | 7,968.8 | 2024-05-01T00:00:00.000 | ["Education", "Sociology", "Economics"] |
Materials for Pharmaceutical Dosage Forms: Molecular Pharmaceutics and Controlled Release Drug Delivery Aspects
Controlled release delivery is available for many routes of administration and offers many advantages (as microparticles and nanoparticles) over immediate release delivery. These advantages include reduced dosing frequency, better therapeutic control, and fewer side effects; consequently, these dosage forms are well accepted by patients. Advances in polymer material science, particle engineering design, manufacture, and nanotechnology have led the way to the introduction of several marketed controlled release products, and several more are in pre-clinical and clinical development.
Introduction
Biodegradable and biocompatible materials for pharmaceutical dosage forms have enabled the advancement of pharmaceuticals by providing better therapy and disease state management for patients through controlled release drug delivery, particularly as microparticles and nanoparticles. Controlled release delivery is available for many routes of administration and offers many advantages over immediate release delivery. This review describes controlled drug delivery, the types/classes of biocompatible and biodegradable pharmaceutical polymers, the types of drugs encapsulated in pharmaceutical polymers, microparticle/nanoparticle controlled drug delivery, the particle engineering design technologies and manufacture of controlled release microparticles/nanoparticles, and currently approved controlled release drug products.
Controlled Drug Release Technology
In order to achieve efficient disease management, the concentration of drug released from polymeric matrices should remain within the therapeutic window, with minimal fluctuation in blood levels, over prolonged periods of time at the intended site of action [1][2][3]. Drug release can be controlled by diffusion, erosion, osmotically mediated events, or combinations of these mechanisms [4,5]. Typically, a triphasic release pattern is observed, consisting of: an initial burst [4], attributed primarily to drug precipitates at the particle surface, surface pores in the polymer, and the osmotic forces in highly water-soluble peptide formulations [6]; a lag period, which depends on the molecular weight and polymer end-capping [5]; and finally erosion-accelerated release [6].
Considering release rate control as a key parameter, a decrease in particle size (i.e., an increase in the specific surface area) results in a higher release rate [6]. Also, higher particle porosity, which creates a larger inner surface, can increase the influx of the release medium into the particles and thereby facilitate drug diffusion [7]. In addition, the specific properties of the polymer matrix (e.g., chain length, flexibility and swelling behavior, and potential interactions between polymer and drug) significantly influence the drug release rate [8,9]. Therefore, switching to a different molecular weight or an end-group-capped polymer, or using block copolymers, will alter the diffusion and drug release rate [10,11].
To achieve zero-order release kinetics, indicative of uniform release with respect to time and desired for most applications, a combination of fast- and slow-releasing particles or the use of copolymers are possible advanced approaches [12,13]. A one-time-only dose can be achieved by co-injecting a bolus of soluble drug as a loading dose together with zero-order-releasing microspheres as a maintenance dose.
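As a toy numerical illustration of the blending idea (ours, with made-up rate constants, not data from any formulation), summing a fast- and a slow-releasing first-order particle population flattens the cumulative release curve toward the zero-order ideal:

```python
import numpy as np

t = np.linspace(0, 30, 301)            # time in days
fast = 1 - np.exp(-0.50 * t)           # fraction released, fast population
slow = 1 - np.exp(-0.05 * t)           # fraction released, slow population
blend = 0.35 * fast + 0.65 * slow      # 35:65 mixture of the two
zero_order = np.clip(t / 30.0, 0, 1)   # ideal uniform release over 30 days

for day in (1, 5, 10, 20, 30):
    i = int(np.searchsorted(t, day))
    print(f"day {day:2d}: blend {blend[i]:.2f} vs zero-order {zero_order[i]:.2f}")
```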
Types and Classes of Biodegradable and Biocompatible Pharmaceutical Polymers
Biodegradability and biocompatibility are among the most important polymer properties for pharmaceutical applications. Biodegradation is generally described by two modes: (1) bulk erosion, in which water penetrates the polymeric matrix and attacks the chemical bonds by hydrolysis, shortening the polymer chain length, reducing the molecular weight, and allowing metabolism of the fragments; and (2) surface erosion, which occurs when the rate at which water molecules penetrate the matrix is slower than the rate of conversion of the polymer into water-soluble materials. Biocompatibility refers to a material not having toxic or injurious effects on biological systems. Non-biocompatible materials can cause irreversible tissue damage, such as permanent tissue destruction, necrosis, significant fibrosis, and dystrophic calcification.
However, it should be noted that good biocompatibility does not ensure good biodegradability. Poly(N-isopropyl acrylamide) (NIPAAM), used to formulate thermo-responsive hydrogels [14], is non-toxic and biocompatible but is not biodegradable by hydrolysis. It is therefore of critical importance to investigate both the biodegradability and the biocompatibility of synthesized copolymers [15,16].
Polyester-Based Synthetic Polymers
Drug delivery systems based on biodegradable aliphatic polyesters have advanced remarkably over the past few decades. Commonly used polymers (Figure 1 and Table 1) such as poly(ε-caprolactone) (PCL), poly(lactic acid) (PLA), and poly(lactic-co-glycolic acid) (PLGA) are FDA-approved and well known for their biodegradability, biocompatibility, and non-toxic properties, which make them suitable as matrices for controlled release drug delivery systems. PLGA has become one of the most studied copolymer biomaterials for drug encapsulation and is present in several commercially available pharmaceutical products. Due to its slow degradation and drug release rates, PLA homopolymer has not been broadly used over the past two decades. The PLGA copolymer degrades faster than PLA and can meet 2-6 week release criteria, while PLA delivers drugs over months [7].
The degradation rate of PLGA depends on its molecular weight and molecular weight distribution, the lactide/glycolide ratio, the polymer end group, micro/nanoparticle size, pH, and the temperature of the release medium. Generally, low molecular weight PLGA degrades faster, causing more rapid drug release and a higher initial burst [17]. The hydrophilicity of PLGA is defined by the lactide:glycolide ratio and affects the release rate in a micro/nanoparticulate formulation: when the lactide/glycolide ratio increases, the drug release rate decreases [18]. The carboxyl end groups of PLGA are mainly involved in interactions with the drug. The initial adsorption of a peptide to hydrophilic PLGA is due to an ionic interaction between the amino group of the peptide and the terminal carboxyl group of PLGA, resulting in an initial burst release [10,11]. In addition, the hydrophilic and acidic properties of free carboxyl groups induce faster water uptake and hydrolysis of ester bonds, generating more acidic groups in an autocatalytic cycle [19]. For particulate drug delivery systems, both the particle preparation procedure and polymer-drug interactions play important roles in polymer degradation. A higher stirring rate [20] or ultrasound treatment [6] during emulsification can reduce particle size (thereby increasing the surface area per unit volume), resulting in faster degradation upon exposure to the release medium.
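PLGA hydrolysis is often approximated in the literature as pseudo-first-order decay of the molecular weight. The sketch below encodes that textbook approximation; the rate constants are hypothetical, chosen only to illustrate the faster degradation of acid-terminated (free carboxyl) chains described above.

```python
import math

def plga_mw(t_days, mw0, k_per_day):
    """Pseudo-first-order hydrolysis: Mw(t) = Mw0 * exp(-k * t)."""
    return mw0 * math.exp(-k_per_day * t_days)

# Hypothetical rate constants: acid-terminated (free -COOH) PLGA hydrolyzes
# faster than the same-Mw ester end-capped PLGA, per the autocatalysis above.
for label, k in [("acid-terminated", 0.08), ("end-capped", 0.03)]:
    print(f"{label}: Mw after 28 days = {plga_mw(28, 40_000, k):,.0f} g/mol")
```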
In comparison to PLGA, poly(ε-caprolactone) (PCL) has a high permeability to small drug molecules and a slow degradation rate, which make it suitable for extended long-term delivery over a period of more than a year. While PLGA generates an acidic environment during degradation, which can lead to peptide/protein instability, the ability to avoid acidic conditions has become one of the major advantages of selecting PCL as a drug carrier [21].
Poly(ethylene glycol) (PEG), also known as poly(ethylene oxide) (PEO), is a widely exploited polymer valued for improving the physical and chemical stability of drugs and for its "stealth" properties. Abuchowski, Davis and co-workers first described a method for the covalent attachment of mPEG to proteins in 1977 [22], which has since been termed PEGylation. PEG is an amphiphilic polymer composed of repeating ethylene oxide subunits and can dissolve in organic solvents as well as in water. The properties of PEG of particular relevance to pharmaceutical applications are: (1) improved circulation time due to evasion of renal or cellular clearance mechanisms; (2) reduced antigenicity and escape from phagocytosis and proteolysis; (3) improved solubility and stability; and (4) reduced dosage frequency with reduced toxicity [10,23,24]. The degradation rate of PEG depends both on its molecular weight and on its concentration. The degradation mechanism is explained by the strong hydrophilicity of PEG and the hydrogen-bonding interaction between PEG and water [23].
Polyvinyl alcohol (PVA), a homopolymer with measurable surface activity, has some similarities with PEG in that it is comprised of a repeating hydrophilic monomer unit, as shown in Figure 1. PVA serves a variety of functions in controlled release delivery systems, including as a particle matrix [25], as a hydrogel [26], and as a surfactant in emulsion systems during formulation processes for micro/nanoparticles [27][28][29]. PVA can also be grafted with a polymeric substrate chain [30,31]. For example, in PVA-grafted PLGA polymer, the PVA backbone can be modified to create negatively or positively charged properties using sulfobutyl or amine moieties, and the resulting increase in the hydrophilicity of this polymer provides advantages when carrying sensitive biomolecules such as proteins, peptides, and DNA [30].
As shown in Figure 1 and Table 1, poly(N-vinylpyrrolidone) (PVP) has been extensively used in controlled release drug delivery due to its biocompatibility, chemical stability, and excellent aqueous solubility [24,32]. Moreover, a polymer matrix combined with PVP is known to reduce nonspecific protein adsorption [33]. Kollidon® SR is a commercially available compressible polymeric blend composed of polyvinyl acetate (PVAc) and povidone (PVP) that is often used in pharmaceutical dosage forms [34]. The amorphous nature of PVAc and its low glass transition temperature (Tg) of 28-31 °C impart unique characteristics to Kollidon® SR. By the gradual leaching of water-soluble PVP, the matrix creates channels for releasing drugs [35]. Due to their excellent solubility, the soluble grades of Kollidon® usually have no delaying effect on the dissolution of drugs and can be used as a hydrophilic component in dosage forms containing controlled-release excipients, such as cetyl alcohol, alginate, cellulose derivatives, polylactic acid, polyvinyl alcohol, ceresine wax, stearic acid, or methacrylate copolymers, to control the release of drugs, as well as serving as binders or sometimes as plasticizers [34].
Natural Origin Polymers Used as Pharmaceutical Excipients
Naturally derived polymers, with special focus on polysaccharides and proteins, have become attractive for biological applications of controlled release systems due to their similarities with the extracellular matrix in the human body and their favorable specific properties that can be exploited for "smart" systems, for example, stimuli-responsiveness. Polysaccharides are a class of biopolymers composed of one or two alternating monosaccharides; they differ in their monosaccharide units, chain length, types of linking units, and degree of branching [36]. Table 1 lists FDA-approved natural origin polymers and their routes of administration.
Starch, composed of amylose and amylopectin, is generally modified to change its physical properties by adding plasticizers, such as water and glycerol, which improve the flexibility of starch in ways favorable for pharmaceutical applications [37,38]. In addition, cross-linking techniques can lead to advanced drug delivery systems by compensating for the weaknesses of plasticized starch, which is sensitive to moisture and shows low tensile strength and Young's modulus [39]. Due to its high hydrophilicity, starch has bioadhesive properties [8] that are favorable for ophthalmologic drug delivery (e.g., timolol, flurbiprofen) [40].
Chitosan is a polyaminosaccharide prepared by the N-deacetylation of chitin. Chitosan is thermo-stable due to strong intramolecular hydrogen bonding between its hydroxyl and amino groups. As a weak polybase, chitosan shows reversible pH-sensitive behavior, due to the large quantity of amino groups on its chain, making it applicable in smart hydrogel delivery systems. Chitosan is soluble in water and in organic acids such as formic, tartaric, acetic, and citric acid at low pH (pH < 6.5) due to protonation of the amino groups [41]. For particulate drug delivery, cross-linking with glutaraldehyde is generally used [42].
Alginate, a marine-derived polysaccharide, is abundantly available in nature and is an attractive alternative for controlled release systems, as it is amenable to sterilization and storage [43]. Alginate is an anionic block copolymer consisting of β-D-mannuronic acid (M) and α-L-guluronic acid (G).
Natural-origin polymers (Table 1 excerpt, with routes of administration):
- Starch: oral, IV, IM, topical
- Hyaluronate: intra-articular, IM, intravitreal, topical
- Human albumin: IV, SC, oral
- Gelatin: IM, SC, IV, oral, topical
- Alginic acid: ophthalmic, oral
- Collagen: topical

Alginate forms a stimuli-responsive hydrogel in two different ways. One is via hydrogen bonding at pH levels below 2, which is based on the pKa values of the carboxylic acid groups in M (pKa 3.38) and G (pKa 3.65). The other is via ionic interactions with divalent metal ions. Since chelating agents such as EDTA or phosphate buffer can easily remove Ca2+ ions, Ca2+-responsive hydrogel systems can be designed [44].
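The pH-responsive gelation described above follows directly from the ionization state of the M and G carboxyl groups, which can be estimated with the Henderson-Hasselbalch relation; a minimal sketch using the pKa values quoted above:

```python
def fraction_ionized(pH, pKa):
    """Henderson-Hasselbalch: fraction of carboxyl groups in the -COO- form."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

# pKa values quoted above for mannuronic (M) and guluronic (G) residues
for pH in (1.5, 3.5, 7.4):
    fM = fraction_ionized(pH, 3.38)
    fG = fraction_ionized(pH, 3.65)
    print(f"pH {pH}: M {fM:.2f}, G {fG:.2f} ionized")
# Below pH ~2 nearly all carboxyls are protonated, enabling the
# hydrogen-bonded gel described above.
```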
Hyaluronic acid is a major carbohydrate component of the extracellular matrix found in synovial fluids and on cartilage surfaces [45]. Hyaluronic acid, an excellent lubricator and shock absorber, inhibits chondrocytic chondrolysis, thereby improving the lubrication of surfaces and reducing joint pain in osteoarthritis [46]. Hyaluronic acid has been widely studied for drug delivery, especially for transplantation, injection, and gene delivery, particularly as it is non-immunogenic [45]. To avoid rapid degradation and clearance when hyaluronic acid is used as a carrier, its matrix is cross-linked using glutaraldehyde [37], carbodiimide [37], or polyethylene glycol diglycidyl ether (PEGDG) [46]. Bovine serum albumin (BSA), a globular protein, is a naturally biodegradable, non-toxic, and non-antigenic biopolymer, making it suitable for controlled drug delivery. Typically, BSA particles are prepared under mild conditions by coacervation or a desolvation process [47,48] and cross-linked by glutaraldehyde; however, polyethyleneimine (PEI) has been suggested as an alternative to avoid the potential toxicity of glutaraldehyde [49].
Collagen is the major protein component of the extracellular matrix. Twenty-seven types of collagen have been identified to date, but collagen type I is the most investigated for pharmaceutical applications [37]. Several factors affect the degradability of collagen; for example, structural contraction caused by cell penetration, and digestion by collagenase, gelatinase, and other non-specific proteinases [50]. The versatile properties of collagen (e.g., high mechanical strength, good biocompatibility, low antigenicity, and water uptake properties) have made it one of the most useful biomaterials for tissue engineering in the form of collagen sponges [51] or collagen gels [52].
Gelatin is a denatured protein obtained by acid and alkaline processing of collagen [37]. Gelatin can be manufactured with a variety of isoelectric points; basic gelatin with an isoelectric point of 9.0 and acidic gelatin with an isoelectric point of 5.0 are used most often. If the biomolecule to be released is acidic, basic gelatin with an isoelectric point of 9.0 is preferable as a matrix, and vice versa. Both gelatins are insoluble in water. In hydrogels prepared through cross-linking, gelatin forms polyion complexes with proteins that facilitate the release of biologically active proteins [53].
Homopolymers vs. Diblock vs. Triblock Copolymers
To enhance the desirable properties of a polymer as a matrix for a controlled drug delivery system, efforts have been made to improve its hydrophilicity, biodegradation rate, and drug stability. The most commonly used hydrophilic block for polymeric drug delivery systems is poly(ethylene oxide)/poly(ethylene glycol), PEO/PEG. PEO is FDA-approved for parenteral administration due to its low toxicity and biocompatibility [10]. One of the primary advantages of attaching a PEO moiety is its effectiveness against protein adsorption to hydrophobic surfaces. For polymeric micelles, the length of the PEO blocks affects circulation time and uptake by phagocytes, with longer chains extending circulation time and reducing phagocytosis [54]. As a shell-forming material for polymeric micelles, PEO/PEG (Figure 1) imparts a "stealth character" to the micelle in the blood compartment, achieving longer circulation [55]. PEG grafted to the surfaces of nanospheres has been shown to reduce thrombogenicity and to increase dispersion stability in aqueous medium, due to steric repulsion effects of the tethered PEG strands [56]. PLGA-PEG-PLGA (ReGel) has been used as a controlled release formulation for two-week delivery of glucagon-like peptide-1 (GLP-1) in type 2 diabetic rats [16]. PEG-PLGA-PEG triblock copolymers with TGF-β1 have been formulated to accelerate diabetic wound healing [57]. A PVA-based branched graft polyester bearing PLGA blocks, the first generation of this family, designated PVA-graft-PLGA, shows lower burst effects and controlled release profiles that depend on the structure and molecular weight of the copolymer [58]. To obtain a negatively charged polymer, a second generation, branched poly[sulfobutyl-poly(vinyl alcohol)-g-(lactide-co-glycolide)] (SB-PVA-g-PLGA), was reported, in which sulfobutyl groups are covalently conjugated to the PVA backbone [59,60]. A third generation, amine-PVA-g-PLGA, which is positively charged, was developed by attaching various amino groups to the PVA backbone [61].
Poloxamers, also known by the trade name Pluronics, are nonionic triblock copolymers composed of hydrophilic poly(ethylene oxide) (PEO) and hydrophobic poly(propylene oxide) (PPO) blocks, designated PEO-PPO-PEO [62]. Due to their amphiphilic character, poloxamers exhibit surfactant properties coupled with the ability to self-assemble into micelles above the critical micelle concentration (CMC) in aqueous solutions. In addition, these copolymers have been shown to be potent biological response modifiers capable of overcoming drug resistance in cancer and enhancing drug transport across cellular barriers, such as the brain endothelium [63,64].
Therapeutic Agents Encapsulated in Polymeric Particles
Administration of a variety of drugs from different therapeutic classes encapsulated in polymeric particles (Figure 2), particularly through the parenteral route, has been extensively investigated to achieve complete absorption of drugs into the systemic circulation and to control drug release over a predetermined time span ranging from days to weeks to months. In chemotherapy, obtaining adequate drug levels at the tumor cell is the primary issue, because an inadequate tumor cell drug burden will lead to low cell apoptosis and to early development of drug resistance [65]. Chemotherapeutic agents (Figure 2) such as paclitaxel [66][67][68], docetaxel [69], vascular endothelial growth factor siRNA [70,71], 5-fluorouracil [72,73], doxorubicin [74,75], adriamycin [76], ganciclovir [77], celecoxib [78,79], bleomycin [80,81], and tamoxifen [29] have been successfully formulated in polymeric particulate delivery systems.
Localized antibiotic drug delivery decreases the bacterial load at the infection site while minimizing renal, hepatic, and systemic toxicities. Application of controlled drug release systems offers advantages in maintaining a highly site-specific drug concentration for an extended period while reducing systemic toxicity and drug resistance. Antibiotics (Figure 2) incorporated in controlled release systems include chlorhexidine [103], vancomycin [104], amphotericin B [105], gentamicin [106,107], and doxycycline [108][109][110].
Growth hormones and birth control hormones (Figure 2) have been a major focus of sustained release formulation. Encapsulation of growth hormone in biodegradable PLGA microspheres has been a typical technique to prolong the effect of the drug. Human growth hormone, a somatotropic hormone used to treat growth hormone deficiency (GHD), chronic renal insufficiency, Turner's syndrome, and cachexia secondary to AIDS, has been developed in such formulations to reduce the need for frequent administration by maintaining in vivo drug levels in the therapeutic range [28,111,112]. Octreotide, a synthetic anti-somatotropic agent for the treatment of acromegaly and endocrine tumors, has been formulated in PLGA microspheres and commercialized as the Sandostatin® LAR® depot (Novartis Pharma, Basel, Switzerland), administered on a monthly basis [113]. The use of polymers to deliver birth control hormones has evolved over the years. The first system, Norplant®, consisted of six levonorgestrel contraceptive implants for a five-year duration of use. By replacing the initial design of silastic capsules containing steroid crystals with a solid mixture of the steroid and a polymer (rods) covered by a release-regulating silastic membrane, it was possible to release through two rods the same amount of contraceptive steroid delivered by six capsules; this is the second-generation implant system, Jadelle® [114]. However, these products are silicone-based devices, which are non-biodegradable and carry considerable long-term toxicities; consequently, the devices need to be removed after depletion of the drug. To overcome this problem, PLGA microspheres containing levonorgestrel have been studied for implantation under the skin without special surgery [115][116][117][118].
Patient compliance rates are notoriously poor for antipsychotic medications due to the nature of the disease, troublesome side effects, and symptom recurrence. Undoubtedly, sustained and controlled release systems offer many advantages in the delivery of antipsychotics, reducing the frequency of dosing and enhancing drug bioavailability [119]. Haloperidol [120], risperidone [121,122], clozapine [123], and olanzapine [124] have been, and are being, studied for long-acting particulate formulations.
There are oral dosage formulations for which osmotic pumping is the major release mechanism. In such systems, osmotic pressure is used as the driving force to induce drug release in a predictable and uniform manner. The osmotic pump consists of a solid core containing drug, alone or with an osmotic agent, surrounded by a semi-permeable membrane with a delivery pore. When this device is placed in water, water is imbibed osmotically into the core, pushing a volume of saturated drug solution through the delivery orifice in a programmed manner [125,126]. Propranolol [127], nifedipine [128], allopurinol [129], ferulate [130], diclofenac [131], and pseudoephedrine [132] have been formulated as osmotic pump controlled release formulations.
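The near-constant pumping rate follows from a simple mass balance: water influx across the membrane carries saturated drug solution out through the orifice. The sketch below uses a simplified form of the classic osmotic pump rate expression, dM/dt = (A/h) * k * delta_pi * S; the lumped permeability and all parameter values are hypothetical illustrations, not data from the cited formulations.

```python
def osmotic_release_rate(area_cm2, thickness_cm, k, delta_pi_atm, solubility_mg_ml):
    """Simplified osmotic pump rate: dM/dt = (A/h) * k * delta_pi * S.

    The water influx (A/h)*k*delta_pi (mL/h) carries saturated drug
    solution (S mg/mL) out through the delivery orifice at the same rate.
    k is a lumped membrane permeability, mL*cm/(cm^2*h*atm).
    """
    water_influx_ml_h = (area_cm2 / thickness_cm) * k * delta_pi_atm
    return water_influx_ml_h * solubility_mg_ml  # mg/h

# All values hypothetical, for illustration only
rate = osmotic_release_rate(area_cm2=2.0, thickness_cm=0.02,
                            k=2.5e-4, delta_pi_atm=100.0,
                            solubility_mg_ml=0.3)
print(f"Predicted zero-order release rate: {rate:.2f} mg/h")
```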
Microparticles for Controlled Release Delivery
Due to the development of particulate drug delivery systems, current marketed formulations for delivering proteins and peptides have reduced administration frequency from once a month to once every three months. Microparticles are particles between 0.1 and 100 µm in size. Kang and Singh studied the effect of additives on the physicochemical characteristics and in vitro release of a model protein, bovine serum albumin (BSA) [133]. The addition of a hydrophobic tricaprin additive with low molecular weight PEG-100 resulted in further release of BSA from PLGA microspheres. The difference in the release profiles between control and additive-containing microspheres is closely related to their surface morphology.
Blanco and Alonso compared the effects of preparation method, w/o/w solvent extraction vs. o/o solvent evaporation, on particle size and encapsulation efficiency, along with the use of a stabilizer [134]. The size of microspheres prepared by the two methods depended on the intrinsic viscosity of the polymer solution: microspheres prepared by the w/o/w solvent extraction method increased in size as the intrinsic viscosity of the polymer solution increased, while the size of microspheres prepared by o/o solvent evaporation increased with low-viscosity polymer. Co-encapsulation of a stabilizer, poloxamer 188 or 331, induced lower loading efficiency and slower release of BSA. Without stabilizer, protein release was mainly influenced by the polymer erosion rate and the formation of water-filled channels.
The effect of protein molecular weight (MW) on release kinetics from polymeric microspheres was studied using the phase inversion technique. The mechanism of release from microspheres appeared to be dependent on protein MW at low loadings (0.5-1.6%), whereas it was independent of MW at high loadings (4.8-6.9%). At low loading, release of larger-MW proteins depended on diffusion through pores for the duration of the study, while smaller-MW proteins seemed to depend on diffusion through pores initially and on degradation at later times [135].
Tissue engineering in the context of controlled release drug delivery has been the subject of interesting recent research for drug delivery to bone tissue. Polymer microspheres as drug delivery carriers have been incorporated in 3D scaffolds for bone tissue controlled drug delivery [136][137][138]. Additionally, protein and small molecule therapeutics to promote bone growth have been incorporated in polymeric devices and in PLGA microspheres for controlled drug delivery to bone [139][140][141][142][143].
Nanoparticles for Controlled Release Delivery
The area of nanoparticle drug delivery has gained much attention in recent years for a variety of administration routes, including pulmonary nanomedicine delivery [144]. To improve the bioavailability of PLGA nanoparticles, Barichello et al. formulated surface-bound peptides using the nanoprecipitation (solvent displacement) method [145]. Insulin was preferentially surface-bound on the PLGA nanoparticles, and the amount of insulin encapsulated into the nanoparticles was related to the composition and pH of the buffer solution; the optimal pH was close to the isoelectric point of insulin.
Insulin-loaded PLGA nanoparticles have also been prepared by w/o/w and s/o/w encapsulation methods with a stabilizer, Pluronic F68. Compared with nanoparticles prepared by the s/o/w method, the insulin release rate was higher for batches prepared by the w/o/w method containing stabilizers. The presence of stabilizers also resulted in sustained release of insulin and, therefore, a prolonged reduction of blood glucose levels in diabetic rats [146].
Magnetically modulated nanoparticles are used for in vivo imaging and for delivering drugs to targeted sites, such as tumors. Non-targeted applications of magnetic nanospheres include their use as contrast agents (MRI) and as drug carriers that can be activated by a magnet applied outside the body [147]. In another study, this magnetic force was used to improve the efficiency of orally delivered protein therapeutics: when an external magnetic field was applied to the intestine, the transit of magnetic particles slowed, so the residence time of the orally delivered particles in the small intestine was extended and the absorption of protein increased [148].
Double-Emulsion Evaporation Methods
A considerable number of hydrophobic drugs are soluble in various water-immiscible organic solvents and poorly soluble in water. In the emulsion/solvent evaporation technique, both drug and biodegradable polymer are first dissolved in a solvent, most often methylene chloride. The resulting organic oil phase is emulsified in an aqueous phase to form an o/w emulsion, from which the volatile solvent can be removed by evaporation [7]. For drugs that do not show high solubility in methylene chloride, it can be replaced with butyl acetate, ethyl acetate, ethyl formate, or methyl ethyl ketone [7]; alternatively, a cosolvent may be added to methylene chloride. Hydrophilic peptides or proteins are either dispersed in an organic solution of the polymer or, preferably, processed as an aqueous solution in a water-in-oil (w/o) emulsion, resulting in a w/o or w/o/w emulsion system [149]. However, o/w or w/o/w methods tend to give low encapsulation efficiencies due to a flux of drug from the dispersed phase to the larger volume of the continuous phase during the manufacturing process [7]. In addition, proteins encapsulated into particles by w/o or w/o/w techniques are susceptible to denaturation, resulting in a loss of biological activity, aggregation, oxidation, and cleavage, especially at the aqueous phase-solvent interface [149]. In order to improve protein integrity, the use of stabilizers and surfactants during the primary emulsion phase is suggested.
Supercritical Fluid (SCF) Technology
Substances become supercritical fluids (SCFs) when brought above their critical point, where they exhibit the flow properties of a gas and the dissolving behavior of a liquid. Their solvent power is affected by density, temperature, and pressure. Many excellent reviews exist on this cutting-edge particle engineering design technique, which has found increasing utility in novel delivery systems for many routes of administration, particularly non-invasive pulmonary delivery via pharmaceutical inhalation aerosols [150][151][152][153].
There are two processes in which the drug and matrix polymer are either dissolved or melted in the SCF before forming particles: rapid expansion from supercritical solution (RESS) and particles from gas-saturated solutions (PGSS). In the RESS process, fine particles are formed using the supercritical fluid as a good solvent in two steps: (1) dissolving the solute in the supercritical fluid; and (2) formation of the solute as microparticles due to rapid supersaturation [154]. CO2 is an attractive solvent for a variety of chemical and industrial processes, since it is abundant, inexpensive, and non-toxic, and has a relatively accessible critical point, i.e., Tc = 304.2 K and Pc = 7.37 MPa [154][155][156]. In the PGSS process, the supercritical fluid or dense gas is used as a solute. Polar or high molecular weight substances, such as proteins, are difficult to dissolve in CO2, which has no polarity. However, the ability of CO2 to diffuse into organic compounds enables the formation of composite particles in the PGSS process. The organic compounds are mainly polymers, and CO2 lowers the melting point and decreases the viscosity of a compound as its concentration increases. As a result, the compounds are melted in the compressed gas, and the concentration of gas in the molten solute increases with pressure, forming a saturated solution. When this solution is rapidly depressurized through a nozzle, composite microcapsules form due to the release of gas from the condensed phase [154].
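A fluid is supercritical only when both its temperature and pressure exceed the critical values; a trivial check for CO2 using the critical point quoted above (the temperatures and pressures in the example calls are hypothetical operating points):

```python
CO2_TC_K = 304.2    # critical temperature of CO2 (from the text)
CO2_PC_MPA = 7.37   # critical pressure of CO2 (from the text)

def is_supercritical(temp_k, pressure_mpa, tc=CO2_TC_K, pc=CO2_PC_MPA):
    """A fluid is supercritical when both T > Tc and P > Pc."""
    return temp_k > tc and pressure_mpa > pc

print(is_supercritical(313.0, 10.0))  # True: a 40 degC, 10 MPa operating point
print(is_supercritical(298.0, 0.1))   # False: ambient conditions
```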
Supercritical Antisolvent Method
CO2 is the most common supercritical fluid used in pharmaceutical applications due to its relatively accessible critical point, abundance, and minimal toxicity [150,155]. In addition to RESS and PGSS, antisolvent methods utilize CO2 as an antisolvent for particle fabrication. Antisolvent methods have the advantage of exploiting the high miscibility of supercritical fluids with organic solvents that have high dissolving power for the compound [155]. The techniques include the gas/supercritical antisolvent (GAS/SAS), precipitation with a compressed antisolvent (PCA), aerosol solvent extraction system (ASES), and solution-enhanced dispersion by supercritical fluids (SEDS) processes. The principle of the supercritical antisolvent method (GAS/SAS) is a rapid decrease in the solubilization power of a solvent upon addition of a second fluid as antisolvent: the antisolvent expands the organic solution, reducing its solvent power and inducing supersaturation, which precipitates the solute. The precipitated particles are washed with the antisolvent to remove remaining solvent [154]. Particle size can be regulated by several factors, such as temperature, pressure, and composition [154]. In contrast to the one-way mass transfer of CO2 into the organic phase in the GAS process, a two-way mass transfer occurs in the PCA process: the organic solvent diffuses into the CO2, and the CO2 diffuses into the organic phase. In the ASES process, the drug and polymer are dissolved or dispersed in an organic solvent that is generally miscible with supercritical CO2; the solution is sprayed into supercritical CO2, which extracts the solvent, resulting in the formation of solid microparticles [157,158]. In the SEDS process, particle formation is attributed to the mass transfer of the supercritical fluid into the sprayed droplet and to the rate of solvent transfer into the supercritical phase. Notably, a high mass transfer leads to a smaller particle size distribution with less agglomeration [159].
Spray Drying Particle Engineering Design
Spray drying has been widely used in the efficient design and production of food and pharmaceutical particles, especially particles designed for use in pharmaceutical inhalation aerosols [151,160]. Spray drying [151] comprises four steps: (1) atomization of the feed solution into fine droplets in a spray; (2) spray-air contact involving intimate flow and mixing; (3) drying of the sprayed droplets at elevated temperatures; and (4) separation of the dried particles from the air [160]. In order to control the various particle characteristics, the operating parameters of the spray drying process, such as atomization pressure, feed rate, airflow, inlet temperature, outlet temperature, and the size of the nozzle orifice, must all be controlled [161]. Generally, a smaller nozzle orifice, faster atomization airflow, and a lower feed concentration generate a smaller particle size [162,163]. Particle morphology can be modified by changing the feed solvent type [164] or by optimizing the outlet drying temperature [165]. Adding Tween 20 and lactose to the feed solution yields particles with rougher surfaces [165].
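A common back-of-the-envelope estimate connects feed concentration to dried particle size: if each atomized droplet dries into a single dense particle, mass balance gives d_p = d_d * (C_feed / rho_p)^(1/3). The sketch below applies this estimate with hypothetical numbers; it is consistent with the trend noted above that lower feed concentrations yield smaller particles.

```python
def spray_dried_diameter_um(droplet_um, feed_conc_mg_ml, particle_density_mg_ml):
    """Mass balance, one droplet -> one dense particle:
    d_p = d_d * (C_feed / rho_p)**(1/3)."""
    return droplet_um * (feed_conc_mg_ml / particle_density_mg_ml) ** (1.0 / 3.0)

# Hypothetical inputs: 10 um droplets, 20 mg/mL feed, 1.4 g/cm^3 solid density
print(f"{spray_dried_diameter_um(10.0, 20.0, 1400.0):.2f} um")  # ~2.4 um
# Halving the feed concentration shrinks the particle by a factor of 2**(1/3).
```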
Spray-freeze drying is based on the atomization of an aqueous drug solution via a two-fluid or an ultrasonic nozzle into a spray chamber filled with a cryogenic liquid, i.e., liquid nitrogen, or a halocarbon refrigerant, e.g., chlorofluorocarbon or hydrofluorocarbon [166]. Once the liquid droplets contact the cryogenic medium, they solidify quickly due to the high heat-transfer rate. After the spraying process is completed, the collected contents are lyophilized, and the frozen solvent is removed by vacuum or atmospheric freeze-drying [167]. To obtain a smaller particle size, the mass flow ratio of atomized nitrogen to liquid feed, which has the most significant influence on particle size, should be increased [168]. Spray-freeze drying can be exploited to create small microparticles and nanoparticles [169,170].
Marketed Controlled Release Polymeric Pharmaceutical Products and Clinical Trials
Administration of a variety of drugs encapsulated in polymeric particles has been extensively investigated, leading to complete absorption of drugs into the systemic circulation and controlled drug release over a predetermined time span of days to weeks to months, resulting in increased patient compliance and maximal therapeutic effects. Lupron® Depot is a microsphere formulation of leuprolide, with durations of one, three, or four months, using PLA or PLGA, for the treatment of prostate cancer and endometriosis. Nutropin®, a commercial PLGA microsphere formulation of human growth hormone, is used for two-week or one-month durations. Octreotide, a synthetic anti-somatotropic agent for the treatment of acromegaly and endocrine tumors, is encapsulated in PLGA microspheres and commercialized as Sandostatin® LAR®, administered on a monthly basis. In addition, Trelstar® Depot (triptorelin), Suprecur MP® (buserelin), Somatuline LA® (lanreotide), Arestin® (minocycline), and Risperdal Consta® (risperidone) have been commercialized as parenteral microsphere formulation products for extended duration [171][172][173][174][175][176]. Micellar nanoparticles incorporating paclitaxel or cisplatin are in clinical trials [177]. There are also commercial oral dosage products for which osmotic pressure is the major driving force in the release mechanism, including Procardia XL® for nifedipine (Figure 2) and Glucotrol XL® for glipizide [47,48,178].
Conclusions
Biodegradable and biocompatible materials for pharmaceutical dosage forms have enabled the advancement of pharmaceuticals by providing better therapy and disease state management for patients through controlled release. Controlled release delivery is available for many routes of administration and offers many advantages over immediate release delivery. These advantages include reduced dosing frequency, better therapeutic control, and fewer side effects; consequently, these dosage forms are well accepted by patients. Advancements in polymer material science, particle engineering design, manufacture, and nanotechnology have led the way to the introduction of several marketed controlled release products containing polypeptide and protein drugs that retain their therapeutic activity over pharmaceutical timescales following encapsulation in biodegradable materials.
"Materials Science",
"Medicine"
] |
Preloaded D-methionine protects from steady state and impulse noise-induced hearing loss and induces long-term cochlear and endogenous antioxidant effects
Objective: Determine effective preloading timepoints for D-methionine (D-met) otoprotection from steady-state or impulse noise and the impact on cochlear and serum antioxidant measures.
Design: D-met was started 2.0, 2.5, 3.0, or 3.5 days before steady-state or impulse noise exposure, with saline controls. Auditory brainstem responses (ABRs) were measured from 2 to 20 kHz at baseline and 21 days post-noise. Samples were then collected for serum (SOD, CAT, GR, GPx) and cochlear (GSH, GSSG) antioxidant levels.
Study sample: Ten chinchillas per group.
Results: Preloading D-met significantly reduced ABR threshold shifts for both impulse and steady-state noise exposures, but with different optimal starting time points and with differences in antioxidant measures. For impulse noise exposure, the 2.0-, 2.5-, and 3.0-day preloading starts provided significant threshold shift protection at all frequencies. Compared to the saline controls, serum GR for the 3.0- and 3.5-day preloading groups was significantly increased at 21 days, with no significant increase in SOD, CAT, or GPx for any impulse preloading time point. Cochlear GSH, GSSG, and the GSH/GSSG ratio were not significantly different from saline controls at 21 days post noise exposure. For steady-state noise exposure, significant threshold shift protection occurred at all frequencies for the 3.5-, 3.0-, and 2.5-day preloading start times, but protection occurred at only 3 of the 6 test frequencies for the 2.0-day preloading start point. Compared to the saline controls, preloaded D-met steady-state noise groups demonstrated significantly higher serum SOD for the 2.5- to 3.5-day starting time points and higher GPx for the 2.5-day starting time, but no significant increase in GR or CAT for any preloading time point. Compared to saline controls, D-met significantly increased cochlear GSH concentrations in the 2.0- and 2.5-day steady-state noise-exposed groups, but no significant differences in GSSG or the GSH/GSSG ratio were noted for any steady-state noise-exposed group.
Conclusions: The optimal D-met preloading starting time window is earlier for steady-state noise (3.5-2.5 days) than for impulse noise (3.0-2.0 days). At 21 days post impulse noise, D-met increased serum GR for 2 preloading time points but not SOD, CAT, or GPx, and did not change cochlear GSH, GSSG, or the GSH/GSSG ratio. At 21 days post steady-state noise, D-met increased serum SOD and GPx at select preloading time points but not CAT or GR; however, D-met did increase cochlear GSH at select preloading time points but not GSSG or the GSH/GSSG ratio.
Introduction
Although physical hearing protectors have improved and many occupational noise exposures have been reduced, permanent noise-induced hearing loss (NIHL) still affects at least 10 million Americans [1,2]. Further, harmful levels of occupational noise exposure may affect close to 30 million Americans [3]. Approximately 37.5 million adults aged 18 and over report some trouble hearing [4,5]; consequently, NIHL would account for a large percentage of the overall incidence of hearing loss in this country. Internationally, excessive noise exposure is the major avoidable cause of permanent hearing loss worldwide [6,7]. Additionally, recreational activity with firearms, amplified music, motorcycles, and power tools can expose millions of people to sound capable of producing permanent hearing loss [3,8,9]. Even young children can suffer NIHL after exposure to sudden noise emitted by toy pistols and firecrackers [10,11].
In U.S. veterans, auditory disorder is the second most common service-connected injury, second only to musculoskeletal disability, and the number of new compensation recipients has grown from 250,435 in 2015 to 278,501 in 2019 [12].
NIHL generally first affects the high frequency hearing range with a characteristic "notch" at approximately 4 kHz. As the loss progresses, the patient may have difficulty in all listening environments, affecting both their social and work lives and sometimes impacting employability. The social and psychological impact on the US population can be difficult to quantify, but the financial impact to the U.S. government is undeniable. The U.S. Veterans Administration alone paid approximately $24 billion in hearing loss compensation from 1970-1990 [13] and $1.2 billion in 2012 [14], although the total costs to the US military are far more extensive, including costs for hearing aids, retraining, noise mitigation, medical care, transportation, hearing protection devices, and lost work time [15]. Noise-induced tinnitus is less well studied but is a frequently reported consequence of noise exposure [16]. The worldwide social and financial impact of military, industrial, and recreational NIHL is enormous. The World Health Organization estimates that the annual global cost of unaddressed hearing loss exceeds $750 billion (US) [7].
In multiple studies, D-methionine (D-met) administered before and/or after noise exposure provides excellent noise-induced hearing loss (NIHL) protection from steady-state and impulse noise exposures [17][18][19][20][21][22][23]. We first reported in 2013 that D-met preloading, administering D-met solely prior to noise exposure, could significantly reduce NIHL [23]. Preloading, an innovative approach to prophylactic dosing, would be particularly useful for special operations troops that are typically deployed for 72 hours but have severe pack weight and possibly fluid restrictions during deployment. Preloading could also be useful for anticipated noise exposures such as weapons training for military and police personnel, farmers, musicians or concert attendees, or for recreational hunters and other shooters.
However, optimal preloading D-met otoprotection is time-dependent, and more information is needed to optimize future clinical use. Claussen et al. 2013 [23] was a small-scale study utilizing only steady-state noise, but many anticipated noise exposures, such as weapons training, involve impulse noise. Further, we need to understand more about the mechanisms of preloading protection and whether or not long-term protective changes occur in the serum and/or cochlea.
Previous mechanistic studies cite possible correlations between D-met otoprotection and potential antioxidant enzyme biomarkers [20,24-26]. D-met is an established direct antioxidant free radical scavenger [18,20,22,26,27] and has influenced superoxide dismutase (SOD) and catalase (CAT) antioxidant biomarker levels in previous noise- and drug-induced otoprotective studies [20,26]. SOD and CAT are demonstrated biomarkers for schizophrenia [28] and 3-methylglutaconic aciduria [29]. Thus, endogenous SOD and CAT enzyme assays may provide feasible and accessible biomarker opportunities to elucidate D-met's optimal protective prophylactic dose and to understand how D-met treatment influences the overall antioxidant environment.
As a sulfhydryl reversible antioxidant [30], methionine may impact the glutathione pathway [24], particularly in the mitochondria [31]. Although it is possible for D-met to act as a cysteine sink to fuel the glutathione pathway, methionine in its D-isomer form is excreted without conversion to the L-isomer 60-70% of the time in humans [32,33], making D-met's direct antioxidant mechanisms the likely otoprotective pathway in humans. D-met has influenced glutathione pathway markers in previous otoprotective animal studies [25]. Further, glutathione reductase (GR) and glutathione peroxidase (GPx) are greatly influenced by NIHL [34] and are demonstrated biomarkers for GR blood deficiencies [35]. Thus, reduced-to-oxidized glutathione ratios, particularly in cochlear supernatant, and endogenous glutathione enzymes are justified potential biomarkers of optimal D-met otoprotection.
Combined, cochlear GSH/GSSG ratio and endogenous antioxidant assays may serve as biomarkers for optimal D-met otoprotection. They may also correlate endogenous antioxidant levels with cochlear oxidative environments and identify opportunities to develop realistically scalable and personalized otoprotective biomarker diagnostics.
The current study identified the earliest preloading time (the earliest possible time to initiate effective therapeutic prophylaxis) that successfully demonstrates otoprotection from steady-state or impulse noise exposures, as well as the optimal time period for administration. The study then investigated whether or not D-met's optimal and sub-optimal protection was also reflected in cochlear GSH/GSSG ratios and serum endogenous antioxidant enzyme levels 21 days after noise cessation.
The overall aims of the present study were twofold. The first aim was to determine the optimal time windows for D-met preloading administration to reduce noise-induced hearing threshold shift secondary to both steady-state and impulse noise exposures. The second aim was to expand our understanding of preloaded D-met's impact on oxidative state measured in both serum and cochleae at 21 days. These aims were achieved.
Subjects
Study subjects comprised one hundred ten (110) male Chinchilla lanigera (3-5 years of age; Ryerson, USA). Animals were housed in the SIU School of Medicine animal care facility and maintained on a normal diet prior to and throughout the study's duration. Male chinchillas were singly housed to avoid fighting. Temperature was maintained at 68 degrees Fahrenheit. Lighting was on a 12-hour on, 12-hour off cycle controlled by a timer. Animals had access to food and water ad libitum.
Ethics statement
This study was carried out in strict accordance with the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Laboratory Animal Care and Use Committee (LACUC) of Southern Illinois University School of Medicine (LACUC protocol # 93-14-025).
All animal protocols and procedures were reviewed and approved by SIU School of Medicine's Laboratory Animal Care and Use Committee prior to study performance.
Experimental design
Subject groups (10 animals/group; 11 groups) were assigned to one of the following five (5) preloaded treatment (treatment beginning prior to noise exposure) cohorts for either steady-state or impulse noise exposure: 2.0-day preloaded saline control, 3.5-day preloaded D-met, 3.0-day preloaded D-met, 2.5-day preloaded D-met, and 2.0-day preloaded D-met. All groups received a total of 5 intraperitoneal (ip) injections every 12 hours for 48 hours, with each dose comprising either one injection of saline (control) or one injection of 200 mg/kg D-met, with the first dose starting at the times indicated above (see the schedule sketch below). In addition, one no noise control group was sacrificed for serum enzyme and cochlear glutathione levels only, to serve as a normal comparison for those measures. Auditory brainstem responses (ABRs) were measured prior to preloading treatment and noise exposure to serve as a comparative baseline and establish normal hearing thresholds, based on the lowest intensity capable of eliciting a replicable, visually detectable response in both intensity series. ABRs were measured again 21 days post-noise exposure, and threshold shift was calculated as the difference between the 21-day and baseline hearing thresholds. Animals were humanely sacrificed following 21-day ABR testing by decapitation while still under anesthesia, and serum and cochlear tissues were collected for correlative enzymatic and antioxidant concentration analyses.
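For concreteness, the dosing calendar implied by this design can be written out directly; the sketch below (times in hours relative to noise onset) is only a restatement of the schedule above, not additional protocol detail.

```python
def preload_schedule(start_days_before_noise):
    """Times (hours relative to noise onset at t = 0) of the 5 i.p. doses
    given every 12 h over 48 h, as described in the design above."""
    first = -start_days_before_noise * 24.0
    return [first + 12.0 * i for i in range(5)]

for start in (3.5, 3.0, 2.5, 2.0):
    print(f"{start}-day start: {preload_schedule(start)}")
# A 2.0-day start gives doses at -48, -36, -24, -12, and 0 h, so the final
# dose coincides with noise onset; a 3.5-day start finishes 36 h beforehand.
```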
D-methionine
In a sterile hood environment, 99+% pharmaceutical-grade D-methionine (Acros Organics) was dissolved into 0.9% sterile saline at 30 mg/ml. The solution was then stored in the refrigerator until use.
Noise exposure
Steady-state and impulse noise exposures were generated by custom-developed digital noise exposure systems. Steady-state noise exposure comprised 105 dB SPL octave band noise centered at 4 kHz for a 6-hour exposure. Impulse noise exposure comprised 155 dB peak SPL impulse noise repeating 150 times over 75 seconds (simulating M-16 repeating rifle fire). Each noise exposure system comprises a data acquisition device (NI DAQUSB-6251), an audio power amplifier (Yamaha P2500S), an acoustic compression driver (JBL 2446J), a shock tube extension (3' length and 2" diameter), an exponential horn (JBL 2380), a sound measurement and calibration system, and a computer. LabVIEW software was used to generate digital noise, control analog signal input and output, calibrate the system, and monitor sound levels during exposure experiments. The system has been described by Qin et al 2014 [36].
Anesthesia
The injectable anesthesia solution contained 50 mg/kg ketamine and 5 mg/kg xylazine. Subjects were anesthetized via intramuscular injection prior to and during (as needed, in half-original-volume doses) ABR testing, and prior to humane sacrifice, which comprised decapitation while still under anesthesia. Throughout ABR testing, anesthetized animals were placed on a temperature-controlled thermostatic fluid pad, and body temperature was monitored via rectal temperature measurements. Food, but not water, was removed from the animal cage at least 2 hours prior to anesthesia.
Electrophysiology
ABR measurements were collected to assess bilateral auditory thresholds. Following assessment for middle ear disease, subcutaneous needle electrodes were placed into anesthetized subjects at the vertex (active electrode), nose (reference channel 1), mastoid (reference channel 2), and opposite hind leg (ground electrode). ABR thresholds were measured in response to tone bursts with 1 ms rise/fall and 0 ms plateau, gated by a Blackman envelope, and centered at 2, 4, 6, 8, 14, and 20 kHz frequencies presented at 30/s. Two intensity series ranging from 100 to 0 dB peak sound pressure level (SPL) at 512 sweeps per average were obtained for each animal in 10 dB decrements. The recording epoch was 15 ms following stimulus onset and responses were analog-filtered with a 30-3000 Hz bandpass.
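Threshold shift was computed per frequency as the 21-day threshold minus the baseline threshold, with left and right ears averaged (see Statistical analysis). A minimal sketch with hypothetical thresholds:

```python
FREQS_KHZ = [2, 4, 6, 8, 14, 20]

def threshold_shift(baseline_db, day21_db):
    """Shift = 21-day threshold minus baseline threshold, per frequency."""
    return {f: d21 - b for f, b, d21 in zip(FREQS_KHZ, baseline_db, day21_db)}

# Hypothetical ear-averaged thresholds (dB peak SPL) for one animal
baseline = [20, 15, 15, 20, 25, 30]
day21 = [45, 50, 40, 35, 40, 45]
print(threshold_shift(baseline, day21))  # e.g. {2: 25, 4: 35, ...}
```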
Tissue collection, preservation, and preparation
Blood samples were collected from heavily anesthetized subjects via cardiac puncture following the 21-day ABR assessment, stored in cryovials containing 0.1 mM EDTA, and immediately snap-frozen in liquid nitrogen. After blood collection, subjects were sacrificed by decapitation, the right and left cochleae were collected, and samples were immediately snap-frozen in liquid nitrogen.
Prior to enzyme analysis: (1) to chelate EDTA and clot blood, 140 μL of 1 M CaCl2 in sterile H2O (catalog no. C4901, Sigma, MO) was added to 500 μL of whole blood, and the tubes were incubated on ice for 30 minutes without disturbance. After the blood samples were coagulated using calcium chloride, they were separated via centrifugation to collect serum samples. (2) The cochleae were decapsulated without the vestibulum. Cochlear tissue from the right cochlea of each animal was homogenized with a battery-powered Bio-Vortexer homogenizer with rotating plastic pestles in 1.5 mL microtubes, one cochlea per microtube, in 0.1 M phosphate buffered saline, pH 7.4, with 0.1 mM EDTA, and spun down via centrifugation; the supernatant was then collected for the assay.
Enzyme assays
Serum enzyme assays were measured using an EPOCH spectrophotometer. Assays were performed in triplicate, and results were normalized to total protein values via a Coomassie Plus Assay Kit (Fisher Scientific 23236). Endogenous glutathione peroxidase (GPx), glutathione reductase (GR), and superoxide dismutase (SOD) serum concentrations were quantified using manufactured kits (GPx, Cayman Chemical 703102; GR, Cayman Chemical 703202; SOD, Fisher Scientific EIASODC) per the manufacturers' instructions. Serum catalase activity was measured by previously established H2O2 degradation protocols (Campbell et al. 2003). Briefly, 100% ethanol was added to samples at a 1:10 ratio, and the samples were placed in an ice bath for 30 minutes. Following the ice bath, Triton X-100 was added to the sample at a 1:10 ratio, and the resulting extract was diluted 1:25 in 0.05 M sodium phosphate buffer. Equal amounts of diluent and 0.066 M H2O2 (Sigma catalog number 323381) were then added to initiate the quantifiable reaction. The sample was measured at 240 nm 5 times per minute for one minute, and a molar extinction coefficient of 43.6 M-1 cm-1 was used to quantify catalase activity (1 unit = 1 mM H2O2 degraded/min/mg protein). A Gen5 Take3 low-volume accessory plate (BioTek Instruments, Inc.) was used for catalase measures only.
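The catalase calculation is a direct Beer-Lambert conversion of the absorbance slope at 240 nm. The sketch below assumes a 1 cm path length (the actual low-volume plate path may differ) and uses hypothetical readings:

```python
def catalase_activity(delta_a240_per_min, dilution_factor, protein_mg_ml,
                      epsilon_m_cm=43.6, path_cm=1.0):
    """Catalase units from the H2O2 degradation assay described above.

    Beer-Lambert: (dA/min) / (epsilon * l) = M H2O2 degraded per minute;
    x 1000 converts to mM; 1 unit = 1 mM H2O2 degraded/min/mg protein.
    """
    mM_per_min = delta_a240_per_min / (epsilon_m_cm * path_cm) * 1000.0
    return mM_per_min * dilution_factor / protein_mg_ml

# Hypothetical reading: A240 falls 0.12/min in the 1:25 diluted extract
print(f"{catalase_activity(0.12, 25, protein_mg_ml=2.0):.1f} U/mg protein")
```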
Cochlear supernatant samples were tested for glutathione:glutathione disulfide (GSH/GSSG) ratios using a Promega GloMax Multi+ Detection System luminometer and a GSH/GSSG-Glo assay kit (Promega V6612). Assays were performed in duplicate according to the manufacturer's instructions.
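The kit quantifies total glutathione and GSSG separately; since each GSSG molecule contains two glutathione equivalents, the reduced/oxidized ratio is derived as (total - 2 x GSSG)/GSSG, per the kit's technical documentation. A minimal sketch with hypothetical values:

```python
def gsh_gssg_ratio(total_glutathione_uM, gssg_uM):
    """Each GSSG carries two glutathione equivalents, so
    GSH = total - 2*GSSG and the ratio is GSH/GSSG."""
    gsh = total_glutathione_uM - 2.0 * gssg_uM
    return gsh / gssg_uM

# Hypothetical cochlear supernatant concentrations (uM)
print(f"GSH/GSSG = {gsh_gssg_ratio(12.0, 0.8):.1f}")  # 13.0
```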
Statistical analysis
Statistical analyses were performed separately for all outcomes: ABR shift, blood serum enzymes (CAT, SOD, GR, and GPx), and cochlear oxidative state (GSH, GSSG, GSH:GSSG). For the ABR shift analyses, differences in ABR threshold shift between the left and right ears were compared using paired t-tests at each frequency. No significant differences were found, and threshold shifts for the left and right ears were averaged to yield a single ABR shift for each animal. ABR threshold shift results are presented in the main text for simplicity; full results on baseline and 21-day post-exposure ABR shifts can be found in the supplement (S1-S4 Figs). ABR threshold shifts were analyzed using a three-way repeated measures ANOVA. Frequency (2, 4, 6, 8, 14, and 20 kHz) was included as a repeated within-subjects factor. Treatment group (saline and D-met preloading at 2, 2.5, 3, and 3.5 days) and noise exposure (impulse and steady-state) were included as between-subject factors. Post-hoc contrasts explored all pairwise differences between treatment groups at each level of frequency and noise exposure, with p-values adjusted for multiple comparisons using Tukey's method.
Each blood serum enzyme measure and cochlear oxidative state measure was analyzed using a two-way ANOVA. Treatment group (saline and D-met preloading at 2, 2.5, 3, and 3.5 days) and noise exposure (impulse and steady-state) were included as between-subject factors. Post-hoc contrasts explored all pairwise differences between all treatment groups and saline controls at each level of noise exposure with p-values adjusted for multiple comparisons using Tukey's method. Additional one-way ANOVAs were used to explore differences between the no noise control group and treatment groups under each noise exposure. Post-hoc contrasts explored the difference between each treatment group and the no noise controls with p-values adjusted to correct for multiple comparisons using Tukey's method.
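A minimal sketch of how one of these two-way ANOVA plus Tukey post-hoc analyses could be reproduced in Python with statsmodels; the data file and column names are hypothetical stand-ins, and the original analysis may have used different software:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical file with columns: sod, treatment, noise
df = pd.read_csv("serum_enzymes.csv")

# Two-way ANOVA with a treatment x noise-exposure interaction
model = ols("sod ~ C(treatment) * C(noise)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey-adjusted pairwise contrasts among treatment groups within one
# noise-exposure level
steady = df[df["noise"] == "steady_state"]
print(pairwise_tukeyhsd(steady["sod"], steady["treatment"]))
```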
ABR threshold shift
ABR threshold shifts varied significantly across treatment group (p < 0.001) but not across noise exposure (p = 0.20) or frequency (p = 0.09). A significant interaction existed between treatment group and noise exposure (p < 0.001). Other two-way interactions (noise exposure by frequency, p = 0.92; treatment group by frequency, p = 0.65) and the three-way interaction (p = 0.35) were not significant.
For impulse noise exposure, preloading with D-met at 2, 2.5, and 3 days (but not 3.5 days) resulted in significantly lower ABR threshold shifts than saline controls across all frequencies (Fig 1A). A general trend existed across all frequencies toward the lowest ABR threshold shifts occurring when preloading with D-met at 2.5 days, although this treatment was only significantly lower than preloading at 2 and 3 days at frequencies of 2 and 4 kHz. At frequencies of 8 and 20 kHz, preloading at 2.5 days was similar to preloading at 3 days.
For steady-state noise exposure, preloading with D-met at 2.5, 3, and 3.5 days resulted in significantly lower ABR threshold shifts than the saline controls across all frequencies (Fig 1B). Preloading at 2 days resulted in significantly lower ABR threshold shifts than saline controls only at frequencies of 4, 8, and 14 kHz. Although preloading at 2.5 and 3 days generally resulted in the lowest ABR shifts, these treatments were not statistically lower than each other or preloading at 2 or 3.5 days at any frequency considered.
Thus, the effective window of preloading start times extends closer to the noise exposure for impulse noise (effective down to 2.0 days before exposure) and further from it for steady-state noise (effective up to 3.5 days before exposure), although the effective time periods partially overlap.
Serum enzymes
CAT activity showed no significant difference across treatment group (p = 0.75) or noise exposure (p = 0.19), and no significant treatment by noise interaction was found. CAT activity was similar between all treatment groups, saline controls, and no noise controls under impulse and steady-state noise exposure (Fig 2A).
SOD activity varied significantly across treatment groups (p < 0.001) but not across noise exposure type (p = 0.89). A significant noise exposure by treatment group interaction (p < 0.001) existed. SOD activity was similar between treatment groups and saline controls under impulse noise exposure, and all treatment groups had higher SOD activity than the no noise controls (Fig 2B); no significant differences were found between treatment groups for impulse noise exposure, however. For steady-state noise exposure, preloading at 2.5, 3, and 3.5 days showed significantly higher activity than saline treatment (Fig 2B). All treatment groups had higher SOD activity than no noise controls (Fig 2B), and preloading at 2.5, 3, and 3.5 days showed significantly higher activity than preloading at 2 days.
GR activity varied significantly across treatment groups (p = 0.002) but not across noise exposure (p = 0.26). A significant noise exposure by treatment group interaction (p = 0.04) was found. For impulse noise exposure, preloading at 3 and 3.5 days yielded significantly higher GR activity than saline controls (Fig 2C). Saline controls had significantly lower GR activity than no noise controls, while all other treatments were similar to the no noise controls (Fig 2C). Preloading at 3 and 3.5 days was not significantly different from other preloading times. For steady-state noise exposure, GR activity was similar between saline controls, no noise controls, and all treatment groups (Fig 2C).
GPx activity varied significantly across treatment groups (p < 0.001) but not across noise exposure (p = 0.35). No significant noise exposure by treatment group interaction (p = 0.28) existed. Although GPx activity was similar across saline controls and all treatment groups under impulse noise exposure, only preloading at 2, 2.5, and 3 days yielded significantly higher GPx activity than the no noise controls (Fig 2D). Under steady-state noise exposure, only preloading at 2.5 days was significantly higher than the saline controls (Fig 2D). Preloading at 2.5, 3, and 3.5 days yielded significantly higher GPx activity than the no noise controls (Fig 2D). Preloading at 2.5 days yielded significantly higher GPx than preloading at 2 days but was similar to other preloading times.

[Fig 1 caption: Mean (± one standard deviation) ABR threshold shift (dB) at 2, 4, 6, 8, 14, and 20 kHz under impulse (A) and steady-state (B) noise exposure, for saline treatment through D-met preloading at 3.5 days. Significant differences from saline are denoted by * (p < 0.05) and ** (p < 0.01). Ten animals were tested per group, except the 3.5-day D-met impulse noise group (n = 9). https://doi.org/10.1371/journal.pone.0261049.g001]
Fig 2. Results for each D-met preloading treatment are shown for each noise exposure (ranging from saline treatment [dark gray] to preloading at 3.5 days [light gray]). No-noise controls are shown in black. Significant differences between D-met preloading times and no-noise controls for each noise exposure are denoted by '*' (p < 0.05) and '**' (p < 0.01). Significant differences between D-met preloading times and the saline treatment for each noise exposure are denoted by '†' (p < 0.05) and '††' (p < 0.01). Ten animals were tested in each group. https://doi.org/10.1371/journal.pone.0261049.g002
Cochlear oxidative state
GSH concentration varied significantly across noise exposure types (p = 0.04) but not across treatment groups (p = 0.34), and a significant noise by treatment interaction (p = 0.01) was found. For impulse noise exposure, all treatment groups were similar and did not differ from saline or no-noise controls (Fig 3A). For steady-state noise exposure, preloading at 2 and 2.5 days increased GSH above saline treatment (Fig 3A), although these two treatments produced GSH concentrations statistically similar to preloading at 3 and 3.5 days. No-noise controls were similar to all treatment groups (Fig 3A).
GSSG concentration varied across noise exposure types (p = 0.02) but not across treatment groups (p = 0.76), and there was a significant interaction between noise and treatment (p = 0.001). For impulse noise exposure, treatment groups were similar to saline controls and to each other, and only preloading at 3.5 days resulted in significantly higher GSSG concentration than no-noise controls (Fig 3B). For steady-state noise exposure, all treatment groups and saline controls were similar to each other, and preloading at 2, 2.5, and 3 days resulted in significantly higher GSSG concentration than no-noise controls (Fig 3B).
No effect of noise exposure type (p = 0.15) or treatment group (p = 0.29) on the GSH:GSSG ratio was found, but a significant interaction between noise type and treatment (p = 0.01) existed. For impulse noise exposure, no treatments differed from saline controls, but all treatments resulted in significantly lower GSH:GSSG ratios than no-noise controls (Fig 3C). For steady-state noise exposure, all treatment groups were similar to saline controls, and all treatment groups except preloading at 3.5 days resulted in significantly lower GSH:GSSG ratios than no-noise controls (Fig 3C).
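Each of the comparisons above comes from a two-way design with treatment group and noise exposure type as factors. As a purely illustrative sketch of that kind of analysis (not the authors' code; the data frame and column names are hypothetical), a treatment-by-noise ANOVA with interaction can be run in Python with statsmodels:

```python
# Illustrative two-way ANOVA with a treatment-by-noise interaction
# (hypothetical data; not the authors' analysis code).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "gsh":       [4.1, 3.8, 5.0, 4.6, 3.9, 4.4, 5.2, 4.8],   # outcome, e.g., GSH
    "treatment": ["saline", "saline", "d25", "d25"] * 2,      # preload group
    "noise":     ["impulse"] * 4 + ["steady"] * 4,            # exposure type
})

# Main effects of treatment and noise plus their interaction term.
model = ols("gsh ~ C(treatment) * C(noise)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # Type II ANOVA table
```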
Discussion
Prevention of NIHL could be a far more effective approach than hearing aids and disability payments in both social and economic terms [12,15]. Preloading an otoprotective agent may afford that opportunity, at least for anticipated noise exposures such as weapons training. While the military and recreational shooters are potentially large patient populations for preloading, it could be useful for any anticipated high-level noise exposure, such as use of jackhammers and other loud equipment, musical performances, concert attendance, and firework displays. Just as people plan ahead by carrying physical hearing protectors, a preloading pharmacologic agent may also be helpful, particularly if one can be developed that could be delivered orally and with a suitable risk/benefit ratio.
To reduce the burden of NIHL, a range of clinical dosing protocols and treatment approaches will be needed. While preloading could be very useful for anticipated noise exposures, just as hearing protection devices are, not all exposures are expected, and even when they are expected, not all patients may access protection in advance of the exposure. Consequently, rescue treatment, i.e., providing the pharmacologic agent after the noise exposure and probably after a change in hearing has been noticed, is another needed approach. We have previously published two studies demonstrating that D-met can first be administered 1 to 7 hours post noise and significantly reduce NIHL [18,19]. Further, three clinical trials using IV or intratympanic rescue dosing with other agents demonstrated some efficacy against NIHL following either military or firework exposure [37-39]. One challenge for rescue dosing is that the agent would need to be readily available after an unexpected exposure, whereas preloading can be planned in advance.

Fig 3. Results for each D-met preloading treatment are shown for each noise exposure (ranging from saline treatment [dark gray] to preloading at 3.5 days [light gray]). No-noise controls are shown in black. Significant differences between D-met preloading times and no-noise controls for each noise exposure are denoted by '*' (p < 0.05) and '**' (p < 0.01). Significant differences between D-met preloading times and the saline treatment for each noise exposure are denoted by '†' (p < 0.05) and '††' (p < 0.01). https://doi.org/10.1371/journal.pone.0261049.g003
Preloading could be useful for a variety of anticipated noise exposures. First, however, we need to know the effective timepoints for preloading and to understand some of its effects on both serum and cochlear oxidative levels.
We previously reported that, in small groups of animals (n = 5 per cell), D-met preloading significantly reduced steady-state noise-induced ABR threshold shift when started 3 days prior to noise exposure [23]. Using larger animal groups (n = 10 per cell) and more timepoints in the current study, however, we found that the effective window for D-met preloading against steady-state noise exposure actually extends from 2.5 to 3.5 days, with protection across all ABR frequencies, plus protection at 3 of 6 frequencies for the 2.0-day start time. The window of effective start times is thus wider than originally predicted, which could allow more flexibility in clinical use.
Because D-met preloading to reduce impulse noise-induced ABR threshold shift has not been previously investigated, we cannot compare the current impulse noise findings to previous ones. The effective window of preloading start times was earlier than for steady-state noise, although the effective time frames overlapped. For impulse noise exposure, preloading with D-met at 2, 2.5, and 3 days (but not 3.5 days) resulted in significantly lower ABR threshold shifts than saline controls across all frequencies. Thus, 2.5- and 3.0-day start times yielded optimal protection for both steady-state and impulse noise exposures; the 2.0-day start time was effective at all frequencies tested for impulse noise but only partially for steady-state noise; and the 3.5-day start time was effective for steady-state noise but not for impulse noise.
Thus, the effective windows of preloading timepoints for D-met protection appear to begin earlier for impulse noise than for steady-state noise exposures, whose window extends later; however, the effective time periods partially overlap for both types of noise exposure. Further, D-met's overall mitigation of threshold shift appears to be smaller for impulse noise exposure than for steady-state noise exposure.
Differences in damage, and in opportunities for protection, between impulse and steady-state noise exposure were expected. As reviewed by Henderson and Hamernik 1995 [40], steady-state noise exposures would be expected to cause overstimulation of the outer sensory hair cells, with high energy demands on mitochondrial transport chains. The more intense impulse noise exposures would be expected to induce not only metabolic damage with outer hair cell loss but also mechanical damage, including tears in the basilar membrane and reticular lamina, breakage of the stereocilia tips, and even "potassium poisoning" secondary to endolymphatic infiltration into supporting cells, quickly activating programmed cell death pathways [41,42]. Consequently, an antioxidant such as D-met may have a greater opportunity to mitigate the metabolic damage secondary to steady-state noise, which can deplete endogenous antioxidants without direct mechanical damage, as opposed to the intense damage of impulse noise, which can induce not only metabolic damage but also mechanical damage with higher variability and abrupt activation of programmed cell death. Thus, steady-state and impulse noise exposures may differ in antioxidant profile needs, opportunities for protection, and outcomes.
Further, D-met preloading resulted in increased serum and cochlear antioxidant levels even 21 days after noise cessation, and even longer after cessation of D-met administration, which occurred prior to noise onset. D-met's increase in GR and GPx activity is consistent with our previous results demonstrating that D-met can protect antioxidant enzymes [25]. The increase in GPx activity observed here was associated with lower ABR threshold shifts (S1 and S2 Tables). Methionine can enhance glutathione synthesis and concentrations, particularly in the mitochondria [22,43,44], and D-met's long-term influence on glutathione concentrations at 21 days post noise in this study provides another timepoint suggesting long-term protection, consistent with previous studies citing significant glutathione increases 2-4 months after glutathione supplementation [45].
D-met increased endogenous serum antioxidant levels just as it has in previous rescue studies [20,25,26]. The present study, however, employed a preloading experimental design and therefore further elucidates D-met's long-term protective characteristics. Furthermore, the increases in serum antioxidant levels, specifically GPx under impulse noise and SOD and GPx under steady-state noise, were associated with lower ABR threshold shifts in this study.
D-met also increased cochlear GSH concentrations in this preloading study design, just as it has in previous rescue studies [18,25], which may explain the reported inhibition of apoptosis, 4-HNE, connexin 26, and connexin 30 when it is administered before or after noise exposure [46]. We did not observe any association between GSH:GSSG ratios and ABR threshold shifts, but we did find a negative association with serum enzyme activity, specifically SOD and GPx (S1 and S2 Tables). More research is needed to elucidate the exact relationship between functional hearing thresholds and serum enzyme and cochlear oxidative state measures. It is possible that earlier measures of antioxidant status following D-met preloading would provide an even clearer correlation between protection from noise-induced threshold shift and D-met's antioxidant effects.
The current study did find changes in antioxidant levels 21 days post noise at the time of animal sacrifice, which is encouraging for longer-term protective effects. However, changes at 21 days cannot fully explain the timing of optimal and suboptimal preloading timepoints for either steady-state or impulse noise exposure. For example, the 3.5-day preloading timepoint did not provide significant protection from noise-induced threshold shift for impulse noise but did for steady-state noise. Further, apparent preloading "sweet spots" exist in the protection at the 2.5-day start time for impulse noise and the 2.5- and 3.0-day start times for steady-state noise. The lack of significant protection for impulse noise at 3.5 days, when significant protection was observed for steady-state noise at the same timing, may simply reflect the greater damage inflicted by impulse noise, as discussed above. But the "sweet spots" of timing may be more complex. They cannot be explained simply on the basis of D-met half-life, which has been reported as 0.9 hours in the rat [47] and 3.2 hours in humans [48], although Alagic et al 2011 [49] did demonstrate elevated cochlear D-met levels 24 hours after round window administration. If protection were solely dependent on D-met concentration at the time of noise exposure, the greatest protection would be expected for the 2.0-day preloading start time, which completed the 48 hours of dosing just before noise exposure; this was not the case.
Therefore the "sweet spots" are more likely secondary to D-met induced changes in antioxidant levels over time. Although the temporal antioxidant changes for the first few days following D-met administration have not been fully characterized in the literature, the information available does suggest that possibility. A single IV 50 mg/kg D-met increased cochlear reduced glutathione levels and the GSH/GSSG ratio at 4-8 hours and 4 hours respectively [50]. Samson et al 2008 [20], administered one dose of D-met prior to noise exposure and two after noise and reported the D-met to significantly ameliorate the noise induced changes in SOD levels on post-noise day 7 and CAT levels post noise days 3 and 7. Similarly, administering one pre-and two post-noise exposure D-met doses elevated CAT and SOD levels from 7-14 days peaking 7 days post-noise exposure [51]. Further studies are needed to elucidate the exact time course of preloading D-met induced changes in antioxidant status.
Conclusions
Preloaded D-met is optimal when started 2.0, 2.5, or 3.0 days prior to impulse noise exposure and 2.5, 3.0, or 3.5 days prior to steady-state noise exposure; the most effective time window thus appears to be somewhat earlier for impulse than for steady-state noise exposures, although the effective preloading timepoints overlap across noise exposure types. D-met also produces long-term cochlear and endogenous serum antioxidant increases up to 22.5 days post dose and 21 days post noise cessation, which may eventually be used to clinically tailor optimal D-met otoprotective doses at an individualized level. These results demonstrate preloaded D-met's protective effects against impulse noise exposure, extend the protective dosing windows for steady-state noise exposure, and identify potential long-term cochlear and serum antioxidant biomarkers of D-met protection. Future studies may further elucidate optimal D-met protection with additional antioxidant biomarkers, particularly at earlier timepoints.

S1 Table. Pearson correlation coefficients for ABR threshold shifts, serum enzyme activity, and cochlear oxidative state for all animals exposed to impulse noise. (DOCX)

S2 Table. Pearson correlation coefficients for ABR threshold shifts, serum enzyme activity, and cochlear oxidative state for all animals exposed to steady-state noise. (DOCX)
"Physics"
] |
COMP Report: CPQR technical quality control guidelines for use of positron emission tomography/computed tomography in radiation treatment planning
Abstract Positron emission tomography with x‐ray computed tomography (PET/CT) is increasingly being utilized for radiation treatment planning (RTP). Accurate delivery of RT therefore depends on quality PET/CT data. This study covers quality control (QC) procedures required for PET/CT for diagnostic imaging and incremental QC required for RTP. Based on a review of the literature, it compiles a list of recommended tests, performance frequencies, and tolerances, as well as references to documents detailing how to perform each test. The report was commissioned by the Canadian Organization of Medical Physicists as part of the Canadian Partnership for Quality Radiotherapy initiative.
INTRODUCTION
The Canadian Partnership for Quality Radiotherapy (CPQR) is an alliance among the three key national professional organizations involved in the delivery of radiation treatment in Canada: the Canadian Association of Radiation Oncology (CARO), the Canadian Organization of Medical Physicists (COMP), and the Canadian Association of Medical Radiation Technologists (CAMRT), together with financial and strategic backing from the Canadian Partnership Against Cancer (CPAC), which works with Canada's cancer community to reduce the burden of cancer on Canadians. The vision and mandate of the CPQR is to support the universal availability of high-quality radiation treatment for all Canadians. The technical quality control (TQC) documents are live and may be updated from time to time. To avoid duplication and the risk of conflicting information, other TQC documents are referred to, but not reproduced. Tests that are not covered in other TQC documents are detailed herein. A detailed description of the production, review, and impact of TQC guidelines was previously described in this journal. 3

In the RTP process, physiological information from positron emission tomography (PET) can be used to inform target delineation and identify metabolically active regions for possible dose escalation. Information from PET scans can also be used to help spare healthy tissue, further boosting the probability of a complication-free cure. PET-based RTP, however, is a relatively new application that is not yet commonly utilized and requires quality control (QC) measures that are incremental to those of routine diagnostic PET. This report reviews current QC guidelines for combined PET and x-ray CT for RTP to produce a consolidated list of QC tests for PET-based RTP. These incremental QC activities are relatively few and should not pose a major obstacle to extending the use of PET to RTP.
This report forms a review and summary of existing guidelines and recommendations relevant to QC of PET for RTP. It was assembled by a committee of subject matter experts as a guideline for practitioners looking to implement PET for RTP. The report was reviewed and vetted through the CPQR TQC process. This publication is an adaptation of the original TQC 4 and intends to:

1. Recommend QC procedures of PET/CT for RTP in the absence of prior works dedicated to this emerging application.
2. Make readers aware of CPQR TQC documents in general as a resource for establishing stringent QC in radiation treatment.
SYSTEM DESCRIPTION
Radiation therapy aims to accurately deposit a prescribed amount of radiation dose to target volumes while sparing surrounding disease-free tissues. To achieve this goal, the radiological properties of the patient anatomy must be accurately represented in the treatment planning system for dose-calculation purposes. This anatomical information, along with delineated target and avoidance structures, is routinely derived from CT-simulator images. A modern hybrid PET/CT system combines a PET sub-system, which generates 3D images of functional processes in the body, with a co-registered CT sub-system. The CT generates attenuation images for anatomical lesion localization while allowing for accurate photon attenuation correction of the PET images. These hybrid systems are often equipped with fully diagnostic CT scanners that can also serve as CT simulators for RTP. With the addition of a flat tabletop and isocenter lasers, these hybrid systems could well fulfil the requirements for CT simulation. Four methods of using PET/CT for RTP can be envisioned, in order of increasing technical complexity and treatment accuracy:

1. Side-by-side visualization of the diagnostic PET/CT data and a second CT-simulator image, whereby the radiation oncologist manually defines the treatment volumes on the CT-simulator image using the PET/CT for guidance, as accurate image registration between the different patient postures may be challenging. 5 This method relies on existing practices and does not leverage the full power of modern PET/CT for RTP and simulation. It is limited by low operator reproducibility and accuracy.

2. Software-based registration of the PET/CT study with the CT-simulator image to guide the definition of treatment volumes. 6 In theory, this approach overcomes the above limitations and provides optimal registration between the PET and the simulation CT images. Registration between images is usually achieved by affine registration between the two CTs and is aided by consistent patient positioning in the RT posture. While deformable, non-rigid image registration that can compensate for inconsistent patient positioning is an ongoing topic of research, routine clinical application is not yet widely feasible. Thus, inaccurate image registration limits the accuracy of target delineation and subsequent treatment planning.

3. Acquisition of the PET/CT data with the patient in the RTP configuration and use of these data for target volume delineation and planning without the need for an additional CT-simulator image. This method aims to fully exploit the information in PET/CT both for target delineation and for RTP dose calculations, but it requires that a flat tabletop and a wall-mounted laser alignment system be installed in the PET/CT imaging suite to accurately register the patient in the treatment planning system and at the RT treatment machine. To date, widespread adoption of PET/CT simulators has been limited by workflow constraints and lack of reimbursement. Nevertheless, this method is proposed as a feasible option due to the recent trend toward clinical utilization of PET/CT simulators and the decreased cost of fluorodeoxyglucose (FDG), and because the QC testing required for this method encompasses the requirements for methods 1 and 2 above. It should be appreciated, however, that incorporating the entire RT simulation process into the PET/CT image acquisition workflow, which often includes the design and application of immobilization devices and patient indexing, can result in prolonged PET/CT appointment times. This will undoubtedly reduce patient throughput on PET imaging systems and risks increased staff exposure. 7

4. A viable alternative to combining the RT simulation and PET/CT image acquisition processes is to first perform RTP on a CT simulator and then replicate patient positioning in the PET/CT, enabling accurate image registration through simple rigid transformations. Procedures should include steps for converting the PET/CT system to accommodate a flat tabletop and patient immobilization devices that can be rapidly, consistently, and safely deployed. Special consideration should be given to the potentially smaller bore sizes of PET/CT systems, which may limit patient positioning. This approach will facilitate accurate image registration with the treatment planning CT images. [8-10]

QA of PET/CT simulators is largely similar to the QA of CT simulators, with the addition of PET-dedicated QC tests. Since rigorous TQC guidelines for CT simulators have already been established by CPQR 11 and are actively being maintained, supplementary guidelines should be created for PET/CT with minimal duplication to avoid inconsistencies as these guidelines evolve over time. PET/CT devices are rarely dedicated to RT and therefore may reside in the diagnostic imaging department (e.g., nuclear medicine or radiology). Sharing of responsibilities between departments and close coordination are essential to ensure quality of the overall PET/CT-simulator process.
Performance testing
Performance tests should be referred to when selecting a system, when performing acceptance evaluation of newly installed equipment, and prior to the end of a manufacturer's warranty period. The National Electrical Manufacturers Association (NEMA) has developed standard NU-2-2012, 12 which has become the de facto standard for evaluating the performance of PET systems. The standard describes equipment and procedures for measuring system performance parameters including spatial resolution, scatter fraction, count losses, random events measurement, activity sensitivity, corrections accuracy, and image quality. The NEMA standard was updated to version NU-2-2018, 13 adding two new procedures: one to assess coincidence timing resolution on PET systems with time-of-flight capability, and one to assess co-registration accuracy of hybrid PET/CT systems; the latter is of particular interest when using PET for RTP. Task Group 126 of the American Association of Physicists in Medicine (AAPM) has put out a report entitled PET/CT Acceptance Testing and Quality Assurance that recommends an alternative set of simplified tests that may be performed with fewer specialized phantoms and less equipment. 14 Likewise, performance testing of CT equipment is detailed by the AAPM, 15 the International Electrotechnical Commission (IEC), 16 and other similar professional body recommendation documents. These tests are also summarized in an IAEA document. 17
Acceptance testing and commissioning
Newly acquired or substantially modified PET/CT systems should be tested to ensure performance complies with vendor- and tender-stated specifications. 18,19 Through active participation in the acceptance testing, users may also become familiar with the system. Commissioning follows acceptance testing with a comprehensive battery of performance tests to establish baseline performance metrics against which subsequent tests may be compared to ensure stable and acceptable performance of the system over its lifetime.
System upgrades and maintenance
Special consideration should be given to PET/CT system servicing and upgrades. Acceptance or preventive maintenance tests provided by the PET/CT manufacturer under an institutional service contract agreement should ensure that the PET/CT system is at optimal functionality. However, monthly tests should be performed after any hardware upgrade, and both monthly and annual QC should be performed after a PET/CT console software upgrade.
Routine QC
Routine QC is performed to ensure system stability from the time of commissioning and to proactively determine the need for service. Periodic (e.g., daily, monthly, quarterly) QC tests are typically defined by the manufacturer and may differ from general guidelines due to technology (e.g., solid-state vs. photomultiplier-tube-based detection) and feasibility considerations (e.g., automated QC). Routine QC guidelines have been established by multiple professional groups, with a consensus statement on Diagnostic Imaging Requirements put out by the Joint Commission on the Accreditation of Healthcare Organizations as an umbrella list of QA requirements. 20 The CPQR has established its own TQC guidelines as summary standards of test frequency and tolerances. 3 Recommendations from major international professional bodies (listed in Table 1) were included in this review. Recommended routine QC activities and frequencies are summarized in the Test Tables section, along with references in which greater detail on each QC test may be found. Tolerances from TQCs were used if available; otherwise the strictest values from the reviewed literature were adopted. We elected to recommend the strictest tolerances 28 to accommodate the use of PET/CT acquired explicitly for RTP (mode 3 above), which is the most demanding of quality. The list is intended to serve as a guideline and may not be optimal for all equipment types and all applications (e.g., modes 1, 2, and 4), in which case adaptations to the tests and/or relaxation of tolerances may be made in consultation with a knowledgeable medical physicist. For comprehensive instructions for performing PET/CT QA, the reader is referred to references 17 and 21. In order to comprehensively assess PET/CT for RTP performance, additional tests, as outlined in related CPQR TQC guidelines, must also be completed and documented, as applicable. Related TQC guidelines, available at https://www.cpqr.ca/programs/technical-quality-control/ (a subset have been published through the journal), include:

• Treatment planning systems
• Computed tomography simulators 22
• Data management systems

TQC guidelines are referred to throughout as a primary source to avoid conflicting instructions, as these live documents are updated.
The QA program should be overseen by trained medical physicist(s) with expertise in diagnostic imaging, nuclear medicine, and RT. Frequent QC tests may be delegated to trained technologists, but results should be reviewed by a physicist in a timely manner to identify equipment that does not meet operating specifications.
TEST TABLES
Tables 2 to 6 list the required PET/CT for RTP QC tests by frequency. These tables further indicate tests that are incremental to those typically required for diagnostic PET/CT, as derived from the CPQR TQC guidelines for CT simulators 11 (✓ = incremental test; tests performed more frequently than in diagnostic use are also indicated).
Notes on daily/weekly tests (Table 2)
PT-D1 -2 and PT-W1
As per the manufacturer's instructions, these tests are typically semi-automated and only require confirmation that the test has passed and that no visual artifacts are visible in the recorded sinograms. The tests measure the stability of the PET detectors.
On scanners with TOF, PT-D2 measures the capability of the system to estimate the difference in arrival times of the two annihilation photons.
The weekly test updates the detector gains to compensate for changes in the crystals' behavior over time. See references 17 and 23 for more details.

CT-D1 -6
Refer to the TQC for computed tomography simulators. 11

OT-D1
As per manufacturer instructions, follow the daily QC procedure using a long-lived radionuclide source (e.g., 137 Cs) to test accuracy and stability; the tolerance is ±5%, tested daily. 24

Table 2 notes: (a) Can be performed weekly if the system is found to be stable, but needed on days the system will be used for RTP. (b) Alternate to cover all peak kilovoltage (kVp) values used clinically.

TABLE 3 Monthly quality control tests

Notes on monthly tests (Table 3)

CT-M1 -3
Refer to the TQC for computed tomography simulators. 11

G-M1
Documentation relating to the daily QC checks, preventive maintenance, service calls, and subsequent checks must be complete and legible, with the operator identified. 19

Notes on quarterly tests (Table 4)

PT-Q1
This test measures the crystal efficiency and is used to correct for crystal non-uniformities that degrade the images. The scanner is also cross-calibrated with the dose calibrator to ensure that SUV calculations are accurate and the images are quantitative. 17 The test is performed using a cylindrical uniform phantom of known activity concentration (depending on the manufacturer's recommendations, this can be a pre-manufactured 68 Ge phantom or a fillable one with 18 F). The normalization data are acquired according to the manufacturer's instructions. A calibration factor relating the detected events to the known activity concentration is also calculated. The test passes when a reconstructed image using the newly established normalization and calibration factor parameters is visually uniform and the measured mean SUV in a large region inside the phantom is close to 1. Data should also be compared with previous measurements to detect large shifts in calibration, which could indicate procedural errors in the test. 25,26

PT-Q2
The cylindrical phantom from the PT-Q1 test is used to measure the response of the system to a homogeneous activity distribution. 17 An image of the phantom is reconstructed with all corrections enabled (i.e., dead time, attenuation, scatter, etc.) and using the parameters of the institution's standard clinical protocol. For each transaxial slice in the image, a grid of 10 mm × 10 mm squares is drawn. The maximum, minimum, and mean concentration c of each grid element k in each of the i image slices is recorded. The maximum value of non-uniformity across all images (NU i) should be reported as per equation 1.
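Equation 1 did not survive extraction here. The sketch below illustrates the PT-Q2 grid analysis in Python, assuming the common integral-uniformity definition NU = (c_max - c_min)/(c_max + c_min) over the grid-element means of each slice; the exact formula should be confirmed against reference 17, and the phantom image is a stand-in.

```python
# Illustrative PT-Q2 grid analysis (NU definition assumed; confirm vs. ref. 17).
import numpy as np

def slice_nonuniformity(slice_img: np.ndarray, grid_mm: float = 10.0,
                        pixel_mm: float = 2.0) -> float:
    """Non-uniformity of one transaxial slice from 10 mm x 10 mm grid means."""
    n = max(int(round(grid_mm / pixel_mm)), 1)            # pixels per grid edge
    ny = (slice_img.shape[0] // n) * n                    # crop to whole grid cells
    nx = (slice_img.shape[1] // n) * n
    grid = slice_img[:ny, :nx].reshape(ny // n, n, nx // n, n).mean(axis=(1, 3))
    c_max, c_min = grid.max(), grid.min()
    return (c_max - c_min) / (c_max + c_min)              # assumed NU formula

# Stand-in uniform-phantom volume: 20 slices of 128 x 128 pixels.
volume = 1.0 + 0.05 * np.random.rand(20, 128, 128)
nu_max = max(slice_nonuniformity(s) for s in volume)
print(f"maximum non-uniformity across slices: {nu_max:.3f}")
```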
TABLE 4 Quarterly quality control tests (quarterly, or after system maintenance)

PT-Q1
PET system normalization and calibration. Visual acceptance: the new calibration should be checked with a reconstructed image of the flood phantom applying all the corrections. The mean measured SUV in a 10 cm region at the center of the phantom should be 1.0 ± 0.1, and the calibration constant should change by <5% from the previous value. 25

The NU-2-2018 standard 13 has added a new procedure for PET/CT co-registration. It uses fiducial markers of sources such as 18 F or 22 Na combined with materials that are greater than 500 Hounsfield units in the CT scan. The locations of the centroids in the two images are compared to determine the co-registration error, CE, for each of the fiducial markers using equation 2.
As this standard is still recent, not all current PET/CT scanners follow this exact procedure. Regardless, patient-specific bed sag and misregistration may not be fully characterized by QC, and successful QC may not preclude the need for patient-specific compensation (e.g., using fiducial markers).
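Equation 2 is likewise not reproduced in this text. Assuming the co-registration error is the Euclidean distance between the PET and CT centroid coordinates of each fiducial (a natural reading of the NU-2-2018 procedure, though readers should confirm against the standard), a minimal sketch is:

```python
# Illustrative co-registration error per fiducial marker, assuming
# CE = Euclidean distance between the PET and CT centroids (mm).
import numpy as np

pet_centroids = np.array([[10.2, -31.8, 55.1],    # hypothetical (x, y, z) in mm
                          [-40.5, 12.3, 55.4]])
ct_centroids = np.array([[10.0, -32.0, 54.8],
                         [-40.0, 12.0, 55.0]])

ce = np.linalg.norm(pet_centroids - ct_centroids, axis=1)
for i, err in enumerate(ce, start=1):
    print(f"fiducial {i}: CE = {err:.2f} mm")
```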
OT-Q1
The linearity test measures the dose calibrator's response to radionuclides over the full range of activities that will be used in the department. A vial containing a high activity is measured several times as the activity decays to a low value. We recommend performing the test with a starting activity on the order of a few GBq and letting it decay until the activity is less than 1 MBq. The measured activity in the dose calibrator is compared to the activity predicted from the half-life of the radionuclide; the response is expected to follow the identity line in a plot of measured vs. predicted activity. Alternatively, specially designed attenuation sleeves (calibrated for the test isotope) can be used as a surrogate for activity decay.
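As a worked illustration of the decay-prediction comparison just described (not a prescribed procedure; all readings are hypothetical and 18 F is assumed as the test isotope), the predicted activity follows A(t) = A0 · 2^(-t/T_half):

```python
# Illustrative linearity check: measured vs. decay-predicted activity,
# assuming an 18F source (half-life 109.77 min); readings are hypothetical.
A0 = 4000.0                                # initial measured activity (MBq)
T_HALF = 109.77                            # 18F half-life (minutes)

readings = [                               # (elapsed minutes, measured MBq)
    (0, 4000.0), (220, 1002.0), (440, 249.0), (660, 63.5), (880, 15.4),
]

for t, measured in readings:
    predicted = A0 * 2.0 ** (-t / T_HALF)  # radioactive decay law
    deviation = 100.0 * (measured - predicted) / predicted
    print(f"t = {t:4d} min: measured {measured:8.1f} MBq, "
          f"predicted {predicted:8.1f} MBq, deviation {deviation:+.1f}%")
```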
Further details may be found in reference 24.

Notes on annual tests (Table 5)

PT-A1
This test ensures that the PET/CT scanner's mechanical and electrical components are operating as indicated by the manufacturer. Follow any manufacturer's recommendations and inspect the housing, bed motion, controls, connectors, and any accessories that are connected to the scanner. 17,21,27

PT-A2
The aim of this test is to measure the tomographic resolution in air and ensure that it is not degraded by the acquisition or reconstruction. The procedure involves scanning three point sources of 18 F.

PT-A3
This test determines the rate of detected true coincidences per unit of radioactivity concentration (e.g., kcps/MBq) for a standard line source configuration. Several scans of a line source with different aluminum sleeves that increase the thickness of absorbing material are used to extrapolate the value to one with no attenuating material present. The procedure is performed at the center of the FOV and at 10 cm from the central axis. The sensitivity is expected to be equal to or greater than that specified by the scanner manufacturer. 13,17,21

PT-A4
The purpose of this test is to generate images that simulate a real patient scan, with hot and cold lesions and with scatter from outside of the FOV. The quality of the image is assessed from the contrast and background variability, the accuracy of the attenuation and scatter corrections, and the accuracy of the radioactivity quantification. The procedure involves scanning the NEMA IEC body phantom, which includes six spheres of different sizes.
The two largest spheres are filled with water that does not contain radioactivity, while the other four are filled with a solution whose activity concentration is eight times that of the background (some manufacturers also suggest using a 4:1 ratio). A line source is placed inside a cylindrical plastic phantom to generate scatter from outside of the FOV. The images should be reconstructed as recommended by the manufacturer for a standard whole-body protocol.
The slice in which the contrast of the cold and hot spheres is highest is selected, and regions of interest (ROIs) are drawn around each of the spheres. The diameters of the ROIs should be as close to the inner diameters of the spheres as possible. Concentric ROIs of the same sizes (for both cold and hot spheres) are drawn on the same slice at 12 background regions (see the NEMA standard for ROI locations). The same ROIs are then copied to four neighboring slices (approximately ±1 and ±2 cm), giving a total of 60 background ROIs for each sphere size: 12 on each of the 5 slices. The average counts in each hot sphere, background region, and cold sphere, in combination with the known activity concentrations, are used to calculate the contrast and background variability.
An ROI with a diameter of 3.0 cm is drawn on the lung insert on each of the slices. If the scatter and attenuation corrections were perfect, the measured activity in the lung insert would be close to zero. Another 12 circular 3.0 cm diameter ROIs placed over the background region are used to calculate a percentage relative error for the lung insert on each slice: the ratio of the average counts in the lung ROI to the average of the 12 background ROIs in the corresponding slice, expressed as a percentage.
Lastly, accuracy in activity quantification is assessed by comparing the known background activity concentration at the time of the phantom filling procedure to the average radioactivity concentration measured from the image by averaging the twelve 3.7 cm diameter background ROIs.
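A compact sketch of the PT-A4 arithmetic may help; it uses the standard NEMA NU-2-style expressions for hot-sphere contrast, cold-sphere contrast, and background variability, with hypothetical ROI values (the exact expressions should be confirmed against the NEMA standard):

```python
# Illustrative PT-A4 arithmetic with NEMA NU-2-style expressions
# (hypothetical ROI means; confirm exact formulas against the standard).
import statistics

activity_ratio = 8.0          # hot-sphere-to-background concentration ratio
c_hot, c_cold = 3.10, 0.31    # mean counts in hot- and cold-sphere ROIs
bg_rois = [0.52, 0.50, 0.49, 0.53, 0.51, 0.50,
           0.48, 0.52, 0.51, 0.49, 0.50, 0.52]   # 12 background ROI means
c_bg = statistics.mean(bg_rois)

hot_contrast = 100.0 * (c_hot / c_bg - 1.0) / (activity_ratio - 1.0)
cold_contrast = 100.0 * (1.0 - c_cold / c_bg)
bg_variability = 100.0 * statistics.stdev(bg_rois) / c_bg

print(f"hot-sphere contrast:    {hot_contrast:.1f}%")
print(f"cold-sphere contrast:   {cold_contrast:.1f}%")
print(f"background variability: {bg_variability:.1f}%")
```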
Using a fused-image display, ensure accurate registration between the PET and CT images. Measure the width and height of the phantom shell on the CT image and ensure that they agree with the physical measurements within 2 pixel widths (or 2 mm) to ensure spatial integrity of both the PET and CT modalities.
PT-A5
This test measures the contribution of scatter, count losses, and random events to the image; all of these effects degrade image quality and quantification accuracy. A small scatter fraction (the ratio of scattered photons to the sum of true coincidences and scatter) is desired. The count rate performance provides information regarding the quantitative accuracy at low and high count rates. The noise equivalent count rate (NECR) is typically used to represent the count rate performance as a function of the activity concentration. The peak NECR and the corresponding activity concentration serve as a guide to optimize the activity injected into patients. The calculation assumes Poisson statistics and considers the contributions of true, scattered, and random events to the total coincidence rate.
The measurement involves a 70-cm-long line source that is placed inside, and off-center of, a plastic cylinder. The manufacturer's specifications for the initial radioactivity concentration within the line source should be followed. Acquisitions are taken at intervals of less than half of the half-life of the radioisotope (e.g., 18 F), with a higher frequency around the peak of the NECR curve. Each acquisition should have a duration of less than one quarter of the half-life of the radioisotope. The analysis may differ slightly between systems depending on the method of random correction.
Pixels that are more than 12 cm away from the center of each sinogram (i.e., one sinogram per acquisition) are set to zero. Then, the maximum pixel on each projection (row) of the sinogram is shifted to the center of the sinogram, and all the projections are added. A profile of total counts as a function of distance from the center of the sinogram is made. The sum of scatter and randoms, the total counts, and the unscattered counts can be determined from that profile. The scatter fraction is then calculated for each slice and each acquisition. The NECR for each acquisition j is calculated based on the trues, totals, and randoms of each slice i using equation 3:

$$\mathrm{NECR}_{i,j} = \frac{R_{t,i,j}^{2}}{R_{\mathrm{tot},i,j} + k\,R_{r,i,j}} \tag{3}$$

where R_t is the rate of true coincidences, R_tot is the total count rate, and R_r represents the randoms count rate. The value of k is set to 0 for processed delayed-window or singles event rates (noiseless randoms correction), or to 1 for unprocessed delayed-window (noisy) randoms correction.
The total system NECR for an acquisition j is the sum of NECR_{i,j} over all slices i.
The scatter fraction, peak NECR, and the radioactivity concentration to reach the peak NECR should meet the manufacturer's specifications.
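For illustration, the per-slice NECR arithmetic of equation 3 and the system sum can be written in a few lines of Python; the count rates below are hypothetical:

```python
# Per-slice NECR per equation 3, summed over slices for one acquisition.
# Hypothetical count rates in kcps; k = 0 (noiseless) or 1 (noisy randoms).
def necr(r_true: float, r_total: float, r_random: float, k: int = 0) -> float:
    return r_true ** 2 / (r_total + k * r_random)

slices = [                     # (R_t, R_tot, R_r) for each slice i
    (12.0, 30.0, 10.0),
    (14.0, 33.0, 11.0),
    (13.0, 31.0, 10.5),
]

system_necr = sum(necr(rt, rtot, rr, k=1) for rt, rtot, rr in slices)
print(f"total system NECR for this acquisition: {system_necr:.1f} kcps")
```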
PT-A6
This test determines the capability of the system to measure the difference in arrival times of two coincident events. Follow the manufacturer's recommendations to perform this test. A typical measurement uses a line source of 18 F in an aluminum tube positioned at the center of the scanner. The system records coincidences with their times of arrival and generates a histogram of the arrival-time differences. The timing resolution is calculated as the FWHM of this histogram and should not exceed the manufacturer's specifications. 13,17,21

CT-A1 -7
Refer to the TQC for computed tomography simulators. 11,28

G-A1 -3
Refer to the TQC for computed tomography simulators. 11,19

OT-A1
The dose calibrator geometry test determines whether the correct activity values are measured regardless of sample size and geometry. For this, all the different syringes and vials used to draw up injected doses are tested. For each volume, an initial activity value is measured; subsequent measurements are then made in which saline solution or water is added to the syringe/vial to increase the volume. In all cases, the activity is expected to be within 5% of the initial value. If variations >±5% exist, derive a calibration factor to be applied clinically, and ensure no change from baseline. Likewise, using a syringe, test the stability of the activity reading as the source is gradually withdrawn from the ionization chamber; ensure that activity readings are consistent across >5 cm of displacement and that the response is consistent with baseline.
OT-A2
Clinical computer monitor displays should be tested and calibrated at least annually using a dedicated light measurement device and according to its manufacturer's procedure. At a minimum, displays that have obvious discoloring or luminance non-uniformity >30%, or that deviate from DICOM luminance response accuracy by >10% and cannot be calibrated, should be replaced. 29

OT-A3
No regulatory guidelines or standards could be found for QC of medical weight scales, but vendor-provided instructions require testing using standard weights on the order of typical patient weights (e.g., 100 kg). Testing should be performed on an annual basis, after relocating the device, or after service. Errors should not exceed 0.1 kg for weights <100 kg or 0.2 kg for larger weights.
OT-A4
No guidelines or standards could be found for QC of height measurement devices. Accuracy should be tested annually, after relocating the device, or after service, using an independent measuring device such as a measuring tape.
Notes on patient-specific tests (Table 6)
At least three forms should be completed to ensure that the PET/CT procedure is performed optimally:

1. A screening form, completed by the booking clerk, containing information regarding patient medication, diabetes, claustrophobia, concerns about lying flat for the PET/CT scan, and, for females, whether they are pregnant or breastfeeding.

2. A questionnaire form, completed by the patient and presented on the day of the appointment. This form should include questions regarding the patient's clinical history (e.g., asthma, diabetes, smoking status) and information about any implants or other foreign objects within the patient's body. In addition, it should include a short checkbox questionnaire to confirm that the patient has fasted before the appointment (if required), is well hydrated, and has listed their current medications.
3. A PET/CT technologist's worksheet that includes patient information such as name, date of birth, and age. The technologist should record the patient's weight and height, glucose level, allergies, and the radioisotope to be administered, and should record the initial activity in the syringe and the residual activity after injection, with their respective times of measurement. Additionally, the volume of radiotracer and the site of injection should be recorded. The scan protocol, including the scan range (e.g., whole body vs. vertex to thighs), should be pre-established before the patient arrives at the facility and should be written on this worksheet.
These forms should be used to ensure that the tests from Table 6, described below, are correctly performed.
PS1
The
Ancillary equipment
PET images are typically reported as standardized uptake values (SUV), which have been shown to be accurate within ±10% across a wide range of scanner models given appropriate QA and method standardization. 31 Since SUV is calculated using patient weight, periodic QA should be carried out on patient weight scales in accordance with the vendor's recommendations. 10 Accuracy and precision on the order of ±1 kg correspond to a 1%-2% patient weight error and are on par with clinical sources of variability including patient clothing, bowel content, and hydration state, but ±0.2 kg is readily achievable with clinical devices and routine QC. Likewise, PET images are often scaled to standardized uptake based on lean body mass (SUL), which is computed based on patient height; height measurement apparatuses should be accurate to within 5 mm. Dose calibrators are used to measure patient-administered activities, which factor into the SUV calculation, and they serve as a reference for calibration of the PET system. Therefore, they must undergo routine QC to ensure consistency, and if multiple dose calibrators are in use, cross-calibration must also be ensured. Vendor recommendations should be followed, while professional society guidelines also lay out periodic QC including consistency, accuracy, linearity, and geometric and positioning sensitivity testing. 24 Synchronization of clocks between dose calibrators and PET imaging devices is required for accurate radionuclide decay correction and may be aided by automated device clock synchronization with a centralized time server. Regardless, daily QC of dose calibrator and PET times is recommended, with an emphasis when adjusting to daylight saving time.
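To make concrete why scale accuracy, dose calibrator accuracy, and clock synchronization all propagate into SUV, a worked sketch of the basic SUV arithmetic follows (SUV = tissue concentration divided by decay-corrected injected activity per gram of body weight, assuming 1 g/mL tissue density; all values are hypothetical):

```python
# Worked SUV arithmetic: weight, assayed activity, and clock accuracy
# all enter directly (hypothetical values; 1 g/mL tissue density assumed).
import math

T_HALF_S = 109.77 * 60.0       # 18F half-life in seconds
weight_g = 80.0 * 1000.0       # patient weight from the QC'd scale
a_syringe = 400.0              # assayed activity pre-injection (MBq)
a_residual = 8.0               # residual activity post-injection (MBq)
uptake_s = 3600.0              # injection-to-scan time (synchronized clocks)

# Net injected activity, decay-corrected to scan start.
injected = (a_syringe - a_residual) * math.exp(-math.log(2.0) * uptake_s / T_HALF_S)

c_tissue = 0.012               # image-derived lesion concentration (MBq/mL)
suv = c_tissue / (injected / weight_g)
print(f"SUV = {suv:.2f}")      # ~3.6 for these numbers
```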
Image transfer and compatibility
Widespread adoption of DICOM standards for image and RTP data transfer has aided compatibility between imaging, diagnostic visualization, and treatment planning systems. Target volumes can therefore be delineated on either diagnostic imaging or treatment planning workstations, depending on the preferred tools and workflow. Nevertheless, the commissioning of new systems should entail validation of proper data transfer, with specific emphasis on image orientation, pixel size, spatial positioning offsets, and image unit scaling (e.g., SUV). Validations should replicate the clinical workflow and can utilize phantom scans or a patient scan augmented with physical markers that are visible in the image. Marker locations, sizes, and separations can be measured in the images and validated against the empirical setup. Suitable markers include:

1. Radioactive point sources (e.g., 22 Na)
2. Radioactive dilution standards (e.g., sealed vials with known dilutions of FDG) for validating activity quantification
3. Thin metal wires that are visible on CT but do not introduce artifacts
Commissioning, acceptance testing, and routine QC testing for RTP systems are detailed in the AAPM TG-53 report 32 and in the IAEA Technical Document No. 1583. 18
Image reconstruction and processing
Image reconstruction and processing parameters can influence image characteristics including target-to-background uptake ratio, spatial resolution, and noise. These in turn may influence the perceived target size and intensity. For consistent volume delineation, image reconstruction and processing methodologies should be carefully derived, validated, and preserved. As patients may be imaged on different scanners during the course of their treatment, harmonizing image reconstruction and processing across the patient catchment region is ideal, especially for quantitative assessment of tumor response to treatments including RT. Changes to image reconstruction and processing methodology should be coordinated between the imaging and radiation therapy teams. Likewise, the use of institutionally standardized default image display parameters (e.g., color maps, window/level, and image fusion level) is recommended.
Deviations from these guidelines
This recommendation document details a rigorous set of tests and prescribed schedules for a general audience. Adaptations may be made to enhance feasibility and efficiency in local practice and specific scenarios; they should be made by subject matter experts familiar with the equipment, clinical application, and work environment. For example, fluctuations in scanner sensitivity can be monitored via the quarterly calibration factor measurements, which could make annual repetition of the commissioning tests optional. Likewise, centers acquiring dual-energy CT (DECT) on their systems should consider implementing DECT-specific quality control tests as recommended in recent articles. 33,34 Both articles describe a QC protocol using a DECT-specific phantom with a known quantity of iodine, which is quantified via DECT, and the difference is evaluated. Both articles recommend that the DECT test be performed weekly; however, Nute et al. 33 note that monthly testing may be acceptable if weekly testing is not feasible. In the study by Nute et al., the recommended tolerances for the DECT tests are both a fixed error of ±1 mg/mL and a percentage error of 10%. While the study by Green et al. 34 did not set specific tolerances, it did note that there should be low variability in terms of iodine density.
New technologies
With advancements in instrumentation and automated fault testing, some tests may become obsolete (e.g., annual x-ray generation tests); in-depth understanding of the technology and consultation with the vendor are recommended prior to relaxing QC activities. On the other hand, with the increasing application of automation, optimization, and artificial intelligence (AI) in every aspect of medical imaging and RTP, PET/CT for RT may be especially prone to harm from unexpected behaviors of these systems. As AI enters the clinic, the community will need to develop new quality assurance that targets these vulnerabilities to ensure that systems are safe, accurate, and predictable. The reader is referred to the work of Vandewinckele et al. 35 for an overview and recommendations for QA of AI in radiotherapy, including automated image segmentation, automated treatment planning, and synthetic CT.
Clinical practices
Often the quality of clinical data is limited not by the imaging equipment, but rather by clinical factors. Appendix A summarizes these factors and makes recommendations towards enhanced clinical quality.
CONCLUSION
Quality assurance of PET/CT for RTP requires effort additional to that for diagnostic PET/CT alone, as well as collaboration between team members. Nevertheless, the incremental effort is relatively small compared to the potential clinical benefit and is therefore a feasible undertaking.
Patient preparation
Specific patient preparation considerations apply depending on the PET tracer, the disease state of the patient, and the clinical task. Specific guidelines for tracers and indications are continuously being developed by professional bodies; the Society of Nuclear Medicine and Molecular Imaging (SNMMI) and the European Association of Nuclear Medicine (EANM) commonly publish joint guidelines that are freely available through their respective websites, including for FDG 30,37 and 68 Ga-PSMA. 38 For accurate SUV/SUL scaling, patient weight and height should be measured with high-quality devices. In addition, the activity administered to the patient must be accurately measured, including the residual activity in the syringe after injection as well as the time of injection (for radioactive decay correction).
CT Contrast Agent
CT contrast agents are commonly used for improved organ delineation in RTP. Concerns regarding suboptimal attenuation correction from contrast CT have been largely addressed, except for cases of high concentration (e.g., arterial phase). 30,37 Venous phase and delayed-enhancement CT-contrast imaging may produce small changes in SUV. 39 Nevertheless, with the added information of PET for RTP, the need for CT contrast may be reduced. At the expense of extra scan time and radiation exposure to the patient, two CT scans may also be obtained: one without and one with contrast.
Patient Positioning
Utilization of diagnostic PET for RT simulation is typically ill-advised due to differences in patient positioning between the imaging and therapy sessions. PET acquisition on a flat tabletop and with appropriate immobilization devices is preferable, as this enables better software-based registration between the PET and simulation CTs.
Ideally, RTP and simulation should be performed using hardware-registered PET and CT (i.e., hybrid systems) and with appropriate patient positioning by a qualified radiation therapist. The use of fiducial markers, a flat bed, patient immobilization devices, and dedicated laser alignment hardware should be integrated into the PET process for optimal registration with RT delivery devices. The PET/CT patient positioning should replicate that of RT as nearly as possible using identical apparatuses.
PET/CT registration
For accurate attenuation correction and SUV quantification, it is assumed that hardware registration between PET and CT is sufficient. Nevertheless, in the presence of patient motion this assumption may be violated (typically regionally). PET/CT registration QA should be performed for every case prior to patient removal from the PET imaging bed, as is common practice in diagnostic imaging. 37 Repeat imaging of body regions in which gross misregistration is apparent may be undertaken as required. Manual alignment may be appropriate, but adjustment of small misregistrations is not recommended, as it may introduce errors due to human factors.
PET and CT misregistration in the lung and liver regions is unavoidable due to the long imaging time of PET (2-3 min per bed position) versus that of CT. Normal breath-hold techniques during the CT acquisition are recommended, 37 but the use of 4D CT should be considered in cases where accurate target delineation in respiratory-motion-affected regions is vital.
Respiratory Motion
Gated (4D) PET is recommended to account for reciprocating organ and target motion in the lung, heart, diaphragm, and upper abdominal regions. 8,40,41 In conjunction with appropriate therapy delivery equipment, target tracking and/or dose rate modulation can be used to deliver more accurate and conformal dose distributions. Although new data-driven or device-less methods that estimate the respiratory waveform from the PET projection data are being introduced into clinical systems, gated PET typically relies on external respiratory triggering hardware (e.g., optical tracking or a pressure belt) to assign detected events to the corresponding phases of the respiratory cycle. Respiratory equipment at the delivery unit may differ and may not provide identical information regarding the magnitude of motion. Because gated PET on most scanners currently in use still relies on external respiratory equipment, specific QA is required to ensure adequate correlation between gating systems for optimal dose delivery.
With list-mode data acquisition being a standard feature of modern PET systems, PET reconstruction of both static (3D) and respiratory-gated (4D) images is possible from a single PET acquisition. To compensate for the lower count statistics per gate, however, it may be desirable to acquire motion-affected body regions with a longer time per bed stop, especially in the presence of small, low-intensity targets. Moving objects are blurred in static images, typically making lung lesions appear fainter and larger, but motion correction software is becoming increasingly available to reconstruct motion-frozen PET while preserving 100% of the data.
While other types of motion, such as cardiac contraction, gross patient motion, and organ creep are measurable, they are largely ignored in the context of RT.
Time to Therapy
Due to the dynamic nature of cancer, the time between diagnostic and/or pre-treatment imaging and the delivery of therapy may be a critical factor for accurate target delineation. Geiger et al. 42 and Everitt et al. 43 demonstrated that in non-small-cell lung cancer, over the course of even a few weeks, a significant number of patients were upstaged due to increases in tumor FDG avidity, tumor size, number of involved nodes, and metastatic state. These changes in staging altered the intent to treat from curative to palliative within several weeks and are consistent with previous findings in both lung and other cancers. 42 Hence, the clinical workflow should target RT delivery within 2 weeks of the PET/CT for RTP.
Protocols
Patient mispositioning, inaccurate communication, and operator error remain large sources of variability in RTP and can be mitigated using clear, predefined protocols. Protocols should be body site specific and should contain instructions regarding patient positioning, immobilization devices, setup instructions, image acquisition protocols and parameters, scan limits, use of contrast agents, and any additional special instructions. 28 Image acquisition parameters should be preconfigured as imaging protocols on the modality workstation to reduce errors due to human factors and improve workflow. Likewise, contrast and/or tracer injection systems should be preconfigured.
Nomenclature
Because PET/CT for RTP involves multidisciplinary interactions, it is especially important that effective communication be facilitated using standardized nomenclature such as that proposed in the AAPM TG-263 report. 44 Standardized nomenclature may also benefit multicenter clinical trials and the development of artificial intelligence-based applications. 45
Overall System Test
Integration of all the components in an RT workflow should be tested with a system-level validation test whenever changes are made to equipment, software, or workflow. System tests should use a validation phantom and a typical clinical workflow to test object alignment and orientation, image acquisition, image transfer, image processing, treatment planning, transfer of the plan to the therapy device, treatment delivery verification (including image guidance), and creation of documentation. Delivery of the desired radiation plan may be validated using dosimetry equipment but is beyond the scope of this document.
Roles and Responsibilities
Even when a PET/CT study is intended for RTP, best practice is for a physician trained in nuclear medicine and radiology to review the study in a timely manner to evaluate disease progression and to detect incidental findings.
Quality assurance is an institutional responsibility and therefore requires the collaboration of all care providers and support staff. Physicists are charged with ensuring optimal functioning of instrumentation and software but are rarely present during the immediate course of clinical care. Technologists are often the first to witness anomalies, whether in patient compliance, equipment failure, or inappropriate requisitions. It is vital that technologists are empowered to resolve errors when appropriate and to freely communicate concerns and observations within the circle of care. Imaging physicians and radiation oncologists routinely view images and other clinical data and are therefore well positioned to identify errors and artifacts; thus, they should be well trained to recognize these anomalies and to draw attention to them in a timely manner. The biomedical engineering team is charged with ensuring that maintenance is performed to the highest standard and in coordination with the manufacturer's guidelines. Finally, the management team is essential to emphasizing the value of quality and supporting it with adequate resources.
Reference 46 is a good resource that presents different image artifacts and discusses possible causes.
Comparative Studies
As with any comparative study, it is assumed that patient preparation, image acquisition, and image reconstruction parameters are well controlled. Nevertheless, previously published multicenter clinical trials have demonstrated that compliance with professional guidelines may be low and could introduce undesired variability into the study data. 47 Likewise, for studies with baseline and follow-up scans, it is pertinent to ensure that both scans are acquired under similar, pre-defined conditions. Special consideration must be given if images originate from two different PET/CT systems, as harmonization across devices, especially from different vendors/models, may not be achievable. Pre-study qualifying scans 48 and routine quality control over the course of a research study are pertinent to ensuring high-quality data. Much of the required data (e.g., image acquisition and reconstruction parameters, tracer uptake times, blood glucose level) may be available from the image DICOM header, but care should be taken to ensure that these data are not stripped during data anonymization, transfer, and conversion. Other data should be captured in case report forms (CRFs) and checked for quality. Rapid feedback and guidance of imaging sites by the core lab is essential to achieving and maintaining optimal data quality throughout the course of the study.
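As a sketch of the kind of post-anonymization check described above, the snippet below uses the pydicom library to verify that a handful of quantification-relevant header fields survived de-identification. The exact attribute list is study-specific; the one shown here, and the function name, are illustrative assumptions.

```python
import pydicom

# Attributes commonly needed for quantitative PET analysis; the exact list
# is protocol-specific and this one is only illustrative.
REQUIRED = [
    "SeriesDate",
    "AcquisitionTime",
    "PatientWeight",
    "ReconstructionMethod",
    "RadiopharmaceuticalInformationSequence",  # tracer, dose, injection time
]

def check_header(path):
    """Return True if all required fields survived anonymization."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)
    missing = [kw for kw in REQUIRED if getattr(ds, kw, None) in (None, "")]
    if missing:
        print(f"{path}: stripped or empty after anonymization: {missing}")
    return not missing
```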
Multicenter trials may especially benefit from the use of a standardized phantom, which facilitates qualitative and quantitative validation of image quality against known activity distributions and enables objective comparison between sites and equipment. Such initiatives have been well demonstrated by professional groups including the Society of Nuclear Medicine and Molecular Imaging (SNMMI) Clinical Trials Network, 49 the American College of Radiology Nuclear Medicine Accreditation Program, 50 EANM Research Ltd (EARL), 51 and the Ontario Clinical Oncology Group (OCOG), 52 and therefore phantom data may be readily available at active research and/or accredited sites. Traditionally, these phantoms focus on PET uptake quantification and image quality, but for RTP an emphasis should also be placed on aspects of target volume delineation, including location and size.
"Medicine",
"Physics"
] |
Efficient artificial intelligence-based assessment of the gastroesophageal valve with Hill classification through active learning
Standardized assessment of the gastroesophageal valve during endoscopy, attainable via the Hill classification, is important for clinical assessment and therapeutic decision making. The Hill classification is associated with the presence of hiatal hernia (HH), a common endoscopic finding connected to gastro-esophageal reflux disease. A novel, efficient medical artificial intelligence (AI) training pipeline using active learning (AL) was designed. We identified 21,970 gastroscopic images as training data and used our AL pipeline to train a model for predicting the Hill classification and detecting HH. Performance of the AL and traditionally trained models was evaluated on an external expert-annotated image collection. The AL model achieved an accuracy of 76%. A traditionally trained model with 125% more training data achieved 77% accuracy. Furthermore, the AL model achieved higher precision than the traditional one for rare classes, with 0.54 versus 0.39 (p < 0.05) for grade 3 and 0.72 versus 0.61 (p < 0.05) for grade 4. In detecting HH, the AL model achieved 94% accuracy, 0.72 precision and 0.74 recall. Our AL pipeline is more efficient than traditional methods for training AI in endoscopy.
normalization of endoscopic images 29 and decision tree models from bariatric data 30. Another approach utilized AI methods to diagnose HH through chest radiographs 31,32. To our knowledge, no AI-based method for predicting the Hill classification exists.
As AI is already implemented and used as a valuable tool in clinical practice 16, improving the efficiency of AI training is essential, as quality labeled data can only be obtained through expert annotations. However, medical experts usually have limited time and are often expected to annotate copious amounts of data at once. This approach can lead to tiredness, lack of motivation, and stress, all of which are factors that diminish the quality of annotations 33. To improve AI-training efficiency, we implemented a novel active learning (AL) 34 pipeline, which works in multiple steps that can be undertaken at the expert's own pace. Using AL, we trained an AI for predicting the Hill classification and, through its predictions, inferring the presence of HH. The proposed AL pipeline is shown to result in higher-performing models compared to traditional AI training.
The Hill classification
The Hill grade is a classification of the antireflux barrier that focuses on the gastroesophageal valve. Hill differentiated four grades, described next. A grade 1 flap valve has the ridge of tissue at the cardia preserved and closely approximated to the shaft of the retroflexed scope, extending 3-4 cm along the lesser curvature, which is considered a normal physiologic situation. A grade 2 flap valve has a less pronounced ridge at the cardia, which may open with respiration. A grade 3 flap valve describes a diminished ridge of the cardia along with failure to close around the endoscope. In a grade 4 flap valve, the muscular ridge at the cardia is absent; the esophago-gastric junction stays open and the endoscopist may easily view the esophageal lumen in retroflexion. The Hill classification has also been shown to be connected to HH, with Hill grades 3 and 4 associated with its presence and grades 1 and 2 with its absence. Examples of the different Hill classes are depicted in Fig. 1.
Train and test datasets
Data used for AI training consisted of retroflexion images captured during routine gastroscopy. In total, 71,877 examinations, performed in two hospitals from 2015 to 2021, were screened. Of these, 46,068 were excluded due to incomplete documentation, where no image data associated with the examination existed. A further 10,926 examinations were excluded because they lacked images in retroflexion. The final training data consisted of 21,970 images from 14,883 gastroscopies.
To prevent data leakage, the images were split into training and validation sets at the examination level, such that multiple images from the same examination were used exclusively for either training or validation. An 80-20% train-validation split resulted in 11,907 examinations with 17,137 images for training and 2,976 examinations with 4,833 images for validation. The trained AI models were evaluated on images from an external dataset of endoscopic images, HyperKvasir 40. This dataset contains multiple images of anatomical landmarks from the upper and lower gastrointestinal tract. Of these, the 764 images from the "retroflex-stomach" category were presented for annotation to an expert endoscopist with over 30 years of experience in endoscopy, specialized in the diagnosis and the different treatment modalities of gastroesophageal reflux disease. The expert evaluated whether the Hill classification was applicable for each image and, where it was, assessed the Hill grade. The final test dataset contained 710 images with their corresponding Hill grades. The distribution of Hill labels in the test data was 368 (51.8%) grade 1, 257 (36.2%) grade 2, 67 (9.4%) grade 3, and 18 (2.5%) grade 4. Generation of the training, validation, and test datasets is presented in Fig. 2.
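An examination-level split like the one described above can be implemented with scikit-learn's GroupShuffleSplit, using the examination ID as the grouping key so that no examination contributes images to both sets. The dataframe, column names, and toy data below are illustrative assumptions, not the authors' code.

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# One row per image; exam_id identifies the gastroscopy it came from (toy data)
df = pd.DataFrame({
    "image_path": [f"img_{i}.png" for i in range(10)],
    "exam_id": [i // 2 for i in range(10)],  # two images per examination
})

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, val_idx = next(splitter.split(df, groups=df["exam_id"]))
train_df, val_df = df.iloc[train_idx], df.iloc[val_idx]

# No examination contributes images to both sets, preventing leakage
assert set(train_df["exam_id"]).isdisjoint(set(val_df["exam_id"]))
```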
Proposed active learning pipeline
AL has found successful applications in the medical field [41][42][43][44] for many medical computer vision tasks such as image classification 45,46 and segmentation 47,48. It proves beneficial under class imbalance, a common issue in medical data, where examples of one or more classes are sparse in the dataset, making them difficult to find using random selection.
Active learning uses a large set of unlabeled data, usually called a pool, and an AI model for the task at hand. In traditional AI training, unlabeled data are annotated in random order. The goal of active learning is to optimize the selection of data for annotation by utilizing AI predictions: unlabeled data that can improve model training the most are selected for annotation. After annotating the selected data, a new, improved model is trained, and its predictions are then used for selecting further data points for annotation. In more detail, model predictions are initially obtained for all unlabeled data. These predictions are used with a selection method, called an acquisition function, which selects the most information-rich subset of the unlabeled data to be annotated 42. The expert annotates the selected data, and a new model is trained using all annotated data. Active learning is the repetition of these steps, where the AI trained at each step is used to predict on the remaining unlabeled data. The acquisition function is the heart of an AL pipeline 49, as it is responsible for selecting the data points to be included in the training of subsequent models. In traditional AI training, data points are selected at random, and therefore random selection serves as a baseline for evaluating AL.
This work proposes an AL pipeline based on a novel acquisition function that results in a diverse and representative set of annotated data. Data are selected for each label sequentially: the model's predictions for a given label are split into 10 bins of equal width, a non-empty bin is selected uniformly at random, and an image from the selected bin is then randomly chosen for annotation. Sampling uniformly over bins includes images ranging from easy to hard examples, resulting in a representative dataset.
In this work, the AL iterations continued until 10% (2400 images) of the total unlabeled data had been annotated: at each of the 8 AL iterations, 300 images (75 per label) were selected for annotation. To enable comparison with traditional training, the same process with 300 randomly selected images per iteration was used as a baseline (see the sketch below).
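A minimal sketch of the binned acquisition step described above might look as follows. The bin count and per-label quota come from the text; the function name, data layout, and tie-breaking details are hypothetical illustrations rather than the authors' released code.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_for_annotation(probs, per_label=75, n_bins=10):
    """Select unlabeled image indices for annotation, one label at a time.

    probs : (n_images, n_labels) array of model-predicted probabilities
            for the unlabeled pool.
    """
    selected = set()
    n_images, n_labels = probs.shape
    for label in range(n_labels):
        # Bin the pool by predicted probability for this label (equal-width bins)
        bin_ids = np.minimum((probs[:, label] * n_bins).astype(int), n_bins - 1)
        for _ in range(per_label):
            candidates = [i for i in range(n_images) if i not in selected]
            if not candidates:
                break
            occupied = np.unique(bin_ids[candidates])
            b = rng.choice(occupied)            # uniform over non-empty bins
            in_bin = [i for i in candidates if bin_ids[i] == b]
            selected.add(rng.choice(in_bin))    # uniform within the chosen bin
    return sorted(selected)
```

Each AL iteration would call such a function on the current model's pool predictions, send the selected images to the expert, and retrain on all labels gathered so far.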
The pipeline and the trained model are made publicly available, empowering physicians to train their own AI models without requiring programming skills: https://www.ukw.de/en/inexen/code-data-and-3d-models/.
Model architecture and training
All AI models trained followed the same architecture, using a ConvNeXt 50, pre-trained on ImageNet 51, as the backbone, followed by a fully connected layer with 4 output neurons, one for each Hill grade. Predictions were obtained by applying the softmax activation to the model output and selecting the label with the highest value. At each active learning step, a new pre-trained model was initialized and fine-tuned from scratch. The optimizer used was AdamW, and the loss function was the standard cross-entropy. The model saved at each AL step was the one achieving the lowest loss on the validation set over 50 epochs.
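In PyTorch, the described architecture and training setup might be sketched as below; the specific ConvNeXt variant (tiny), learning rate, and input size are not stated in the text and are assumptions here.

```python
import torch
import torch.nn as nn
from torchvision import models

# ConvNeXt backbone pre-trained on ImageNet; the tiny variant is an assumption
model = models.convnext_tiny(weights=models.ConvNeXt_Tiny_Weights.IMAGENET1K_V1)

# Replace the final classifier layer with 4 outputs, one per Hill grade
in_features = model.classifier[2].in_features
model.classifier[2] = nn.Linear(in_features, 4)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # lr is an assumption
criterion = nn.CrossEntropyLoss()  # standard cross-entropy, as in the text

logits = model(torch.randn(1, 3, 224, 224))   # dummy forward pass
pred = logits.softmax(dim=1).argmax(dim=1)    # predicted Hill grade index (0-3)
```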
To improve the interpretability of model predictions, the Grad-CAM (Gradient-weighted Class Activation Mapping) algorithm was used 52,53. The Grad-CAM algorithm generates a heatmap that highlights the image regions contributing the most to the model's classification decision.
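The core of Grad-CAM can be sketched in a few lines of PyTorch. This is a generic illustration of the algorithm, not the authors' implementation; the choice of target convolutional layer is left to the caller and the usage hint at the end is an assumption.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_class, conv_layer):
    """Minimal Grad-CAM: heatmap of the regions driving a class score."""
    acts, grads = [], []
    h1 = conv_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))

    logits = model(image)                 # image: (1, 3, H, W)
    model.zero_grad()
    logits[0, target_class].backward()    # gradients of the chosen class score
    h1.remove(); h2.remove()

    a, g = acts[0], grads[0]              # both (1, C, h, w)
    weights = g.mean(dim=(2, 3), keepdim=True)    # GAP of gradients per channel
    cam = F.relu((weights * a).sum(dim=1))        # weighted sum over channels
    return (cam / (cam.max() + 1e-8)).detach()    # normalized (1, h, w) heatmap

# e.g., for the ConvNeXt above: grad_cam(model, img, pred_class, model.features[-1])
```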
Evaluation
The annotation protocol used for evaluating the proposed AL pipeline against traditional AI training was as follows. Initially, the same 21,970 unlabeled images were identified as potential training and validation data. For traditional training, images were sequentially sampled from the available data and presented to the expert for annotation, as is standard in AI training. Models were trained with predetermined amounts of sequentially selected, annotated images, namely 2400 and 5400 images, to enable comparison with the AL model. For the AL, starting from the same original pool of 21,970 unlabeled images, 300 images were identified using model predictions at each step and presented to the expert for annotation. These images were used to train a new AI, whose predictions were used in the next unlabeled-image selection round. This process continued until a predefined limit of 2400 images, about 10% of the unlabeled dataset, had been annotated. In all cases, annotations were provided by an expert endoscopist with over 30 years of experience. The expert was presented with each image and asked to provide one classification label per image, indicating either the Hill grade or that the image was not sufficient for assessing the Hill grade, in which case it was excluded from training. Model performance was evaluated in terms of assessing the Hill classification. Each image of the test dataset was presented to the expert endoscopist for annotation, who again provided one label per image, indicating either the Hill grade or that the Hill classification was not applicable to the image. The label provided by the expert was the gold standard used to assess model performance. Images the expert deemed insufficient for assessing the Hill grade were excluded from model evaluation. Special attention was given to accuracy, precision, recall (sensitivity), and specificity for each grade individually. The AL pipeline was assessed by comparing the mean per-class accuracy, precision, recall (sensitivity), and specificity of models trained using the same number of images, selected either with our AL pipeline or with the traditional method. The distribution of the different grades in images selected by the two methods was also compared. Furthermore, the last AL model was compared with a model trained with the traditional method using 225% of the training data (2400 images for AL versus 5400 for traditional training). The ability to infer the presence of HH (Hill grades 1-2 vs. grades 3-4) was assessed with accuracy, precision, recall (sensitivity), and specificity. The 95% confidence intervals were obtained via bootstrapping, and statistical differences were investigated with the t test. To enable comparison with existing approaches to diagnosing HH, the values for specificity and sensitivity were also calculated.
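The bootstrapped confidence intervals mentioned above can be computed along these lines. The metric wrapper, resample count, and names are illustrative; only the percentile-bootstrap idea and the 95% level come from the text.

```python
import numpy as np
from sklearn.metrics import precision_score

def bootstrap_ci(y_true, y_pred, metric, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a classification metric."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        stats.append(metric(y_true[idx], y_pred[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# e.g., 95% CI for the precision of grade 3 (label index 2 in a 0-3 encoding)
grade3_precision = lambda t, p: precision_score(
    t, p, labels=[2], average="macro", zero_division=0)
```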
Results
The model trained with the AL pipeline, after the pre-defined termination condition of 2400 images was met, achieved an accuracy of 76% in assessing the Hill classification, compared to the 73% achieved by the traditionally trained model with the same number of images. In the per-class analysis, the AL model demonstrated greater or equal accuracy compared with its traditional counterpart for all classes. For the rare classes, the AL model demonstrated higher precision, with 0.54 (95% CI 0.42-0.66) versus 0.34 (95% CI 0.23-0.46) for grade 3 and 0.72 (95% CI 0.50-0.92) versus 0.56 (95% CI 0.31-0.80) for grade 4, while maintaining high performance in terms of recall (sensitivity), with 0.58 (95% CI 0.46-0.71) versus 0.56 (95% CI 0.40-0.71) for grade 3 and 0.52 (95% CI 0.32-0.72) versus 0.62 (95% CI 0.38-0.87) for grade 4. The AL model was more specific for the common grade 1, with 0.88 (95% CI 0.84-0.91) versus the 0.82 (95% CI 0.77-0.86) of the traditional model for the same grade.
A traditionally trained model for the same task, with 5400 images, that is, with 125% more training data than used for the AL model, achieved an accuracy of 77%. Even with this amount of data, the AL model was more precise, with 0.54 (95% CI 0.42-0.66) versus 0.39 (95% CI 0.27-0.51) for grade 3 and 0.72 (95% CI 0.50-0.92) versus 0.61 (95% CI 0.36-0.84) for grade 4. The extended amount of data allowed the traditional model to demonstrate a higher specificity of 0.90 (95% CI 0.86-0.93) for grade 1, a mere improvement of 0.02 over the 0.88 (95% CI 0.84-0.91) that the AL model achieved for the same grade. The complete per-class analysis is reported in Table 1. Furthermore, GradCAM results were collected for correct and erroneous assessments of the Hill grade. Examples of such assessments on images from the external test dataset, including the explainability heatmap and prediction confidences for each class, are depicted in Fig. 3. This performance difference can be attributed to the ability of the AL pipeline to select data from underrepresented classes for annotation. Of the 2400 images selected by the AL pipeline, 339 (14.1%) were grade 3 and 167 (7.0%) were grade 4. In traditional training, for the same number of total images, 237 (9.9%) were grade 3 and 60 (2.5%) were grade 4. Even after 5400 images, traditional training had collected only 440 (8.1%) images of grade 3 and 105 (1.9%) of grade 4. The distribution of all labels in the training dataset is depicted in Fig. 4. Model predictions together with Grad-CAM heatmaps on sequential video frames of flap-valve inspection during gastroscopy for the different Hill grades are shown in Supplementary Video 1.
Discussion
The potential benefits of AI in medicine are being thoroughly investigated. In endoscopy, multiple commercially available AI solutions exist and are implemented in clinical routine to support physicians during the examination. Developing effective medical AI requires, in most cases, expert-annotated data, which is a limiting factor as experts usually have limited time. Furthermore, a common pitfall for medical data is class imbalance, which describes the existence of data classes that are significantly less represented. Traditional AI training selects data for annotation randomly, resulting in a significantly lower chance for data from rare classes to be selected 35. Therefore, the need for efficient AI training methods that account for class imbalance and can be undertaken at the expert's pace is pressing. Several works have attempted to solve the above problems using active learning, where the idea is to stratify how images are selected for annotation, using the predictions of an existing, weak AI model. Most existing AL pipelines select data for which the model is most "uncertain", as these are more likely to result in erroneous predictions. Examples of selection methods include choosing the least-confident predictions 36, those with the highest Shannon entropy 37, and margin sampling 38. These methods tend to select "hard" examples, which may not be representative of the original data.
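The three uncertainty-based acquisition scores mentioned above are one-liners over the model's predicted class probabilities; this generic sketch assumes a (n_images, n_classes) probability array and is not tied to any particular pipeline.

```python
import numpy as np

def uncertainty_scores(probs, eps=1e-12):
    """probs: (n_images, n_classes) predicted probabilities per image."""
    least_confidence = 1.0 - probs.max(axis=1)            # low top-1 confidence
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)  # Shannon entropy
    top2 = np.sort(probs, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]                      # small margin = uncertain
    return least_confidence, entropy, margin

# Classic uncertainty-based AL would annotate the images with the highest
# entropy or least confidence, or the smallest margin between the top two classes.
```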
This work proposes a novel AL pipeline that accounts for rare classes, resulting in a diverse and representative collection of annotated data. The novelty of the proposed AL method lies in how images are selected for annotation. Instead of selecting randomly, or focusing on hard examples, the method uniformly selects data from the entire range of model predictions, enabling selection of easy to hard positive and negative examples. The efficiency of the data selection method is shown in the performance of the trained models, which were evaluated on expert-annotated images from an external dataset of endoscopic images. The AL-trained model achieved a 76% accuracy in assessing the Hill grade compared to the 71% of its traditionally trained counterpart. Traditional training, with 225% of the data, managed to improve accuracy to 77%. This demonstrates that traditional training requires a larger volume of data to achieve similar performance. This becomes more evident when considering the per-class analysis, specifically for rare classes. The AL and traditional models achieve comparable, high performance in terms of accuracy and specificity for grades 3 and 4. The major difference comes in precision, where for grades 3 and 4 the AL model achieves 0.54 and 0.72, whereas the traditional model achieves a mere 0.34 and 0.56. This vast difference can be attributed to the selection method, as the rare classes are better represented and make up a greater percentage of the training data for the AL model. The increased number of examples renders the AL model more effective in identifying these classes. When the Hill classification is used to infer the presence of HH, grades 3 and 4 are associated with HH whereas grades 1 and 2 are associated with its absence. The accuracy, precision, recall (sensitivity), and specificity were 94%, 0.72, 0.74, and 0.96 for the AL model and 93%, 0.81, 0.54, and 0.98 for the traditional model. For HH, the AL model presents a much higher recall, which emphasizes its ability to detect cases of HH that would be missed by the traditionally trained model. Overall, the AL-trained model demonstrated strong performance, improved compared to its traditional counterpart. Explainability of the AL model for correct and erroneous examples was investigated using the GradCAM method, which demonstrated that the model focuses on the correct parts of the image when predicting. Erroneous predictions were attributed to the model's erroneous assessment of the size of the flap valve.
The AI models trained in this work achieved a mean accuracy of 76% for the different Hill grades, which may be below that of expert physicians, yet model performance can improve by continuing model training. Furthermore, we believe that the model can support the decision process for younger physicians with limited experience and help automate the examination reporting process.
Despite the significance of the Hill classification, to our knowledge, the problem of automating its determination using AI has not been addressed so far. Regarding the detection of HH, comparison of our results with previous works involving gastroscopic images 29 or patient data from bariatric interventions 30 shows that our model achieves similar outcomes and a smaller gap between specificity and sensitivity (Table 2). Thus, the obtained model was able to infer the presence of HH robustly. Similarly, Santeramo et al. 32 utilized deep learning methods for chest radiographs to predict the existence of several abnormalities, including HH.
Our work has several limitations. One could argue that annotating in batches is not convenient for the expert, as the annotation process is interrupted by model training. Yet, this fact proves beneficial when attempting to integrate annotation processes into the medical routine, for example between examinations 39. A further limitation is that the model was trained and tested on static images instead of video frames, which could result in the model not performing the same during an examination. Yet, frame-by-frame application of the model to examination videos demonstrated robust performance, which makes us confident in the model's compatibility with real-time application during clinical routine. Another limitation of this study is that the test dataset contained fewer images for Hill grades 3 and 4 compared to grades 1 and 2. This imbalance may affect the accuracy of model evaluation, particularly in larger datasets. However, it is noteworthy that an analogous distribution of Hill grades is expected in the general population undergoing screening gastroscopies. Therefore, the analysis of model performance remains relevant and fitting for this clinical scenario. While the model obtained demonstrates promising results overall, there are areas for improvement, particularly in outcomes for Hill grades 3 and 4. Model performance in these critical scenarios can be enhanced by increasing the number of images used in model training. This, as was shown in this work, can be done efficiently by continuing the AL-based training. This work introduces a novel AL pipeline that enables efficient training of AI models, especially under class imbalance. The training process is iterative, and the annotation process can fit the time availability of the experts providing labels. Using the proposed pipeline, we trained an AI to predict the Hill classification from gastroscopic images and infer the presence of hiatal hernia. The model was evaluated on an external set of expert-annotated images and its performance was compared to that of traditionally trained AIs. Both the AL pipeline and the trained model are made publicly available.
Figure 1. Examples of endoscopic images of flap valve inspection captured during retroflexion in the stomach, depicting the four different Hill grades.
Figure 2. Visualization of training (left) and testing (right) data. The training data were obtained from a collection of examination image reports. Test data consisted of images from an external dataset of endoscopic images.
Figure 3. Visualization of Grad-CAM results and prediction probabilities for correctly and erroneously classified images for the four different Hill grades. The results correspond to images from the distinct test dataset 40. The Hill grade in green letters indicates the gold-standard label, and bars indicate model predictions.
The above selection generates a diverse set of annotated examples. Furthermore, uniformity in the selection enables more images from under-represented classes to be included in the model's training dataset. We applied the proposed pipeline to train an AI model that predicts the Hill classification in gastroscopy. The Hill classification assesses the status of the gastroesophageal valve by assigning a grade from 1 to 4. Grades 3 and 4 are rare in the screening population from which the training data come. 21,970 unlabeled images captured during gastroscopy were identified as training data. The stopping criterion for the AL was set to 2400 images, when 10% of the training data had been annotated. Additionally, we used traditional AI training as the baseline and compared the AL-trained model with traditionally trained models using 2400 and 5400 images. Out of 2400 images, the AL pipeline selected 339 (14.1%) grade 3 and 167 (7.0%) grade 4 data points. In the same total of 2400 images, traditional training yielded 237 (9.9%) grade 3 and merely 60 (2.5%) grade 4. Traditional training with 225% of the data, that is 5400 images, found 440 (8.1%) grade 3 images and 105 (1.9%) images of grade 4.
Figure 4. Distribution of the different labels in the annotated data based on our active learning with 2400 images (first row) and traditional training with 2400 (second row) and 5400 images (third row). Each label is represented with a different color. Images excluded from training (assessed as irrelevant or low quality by the expert) are shown in gray.
Table 1. Per-Hill-grade evaluation of models on the external test data. The active learning model (first column), trained with 2400 images, is compared to traditionally trained models with the same (column 2) and 225% of the (column 3) training data. CI: confidence interval.
Table 2. Overview of performance measures for detection of hiatal hernia by different models compared to our work.
"Medicine",
"Computer Science"
] |
Field-controlled ultrafast magnetization dynamics in two-dimensional nanoscale ferromagnetic antidot arrays
Ferromagnetic antidot arrays have emerged as a system of tremendous interest due to their interesting spin configuration and dynamics as well as their potential applications in magnetic storage, memory, logic, communications and sensing devices. Here, we report experimental and numerical investigation of ultrafast magnetization dynamics in a new type of antidot lattice in the form of triangular-shaped Ni80Fe20 antidots arranged in a hexagonal array. Time-resolved magneto-optical Kerr effect and micromagnetic simulations have been exploited to study the magnetization precession and spin-wave modes of the antidot lattice with varying lattice constant and in-plane orientation of the bias-magnetic field. A remarkable variation in the spin-wave modes with the orientation of in-plane bias magnetic field is found to be associated with the conversion of extended spin-wave modes to quantized ones and vice versa. The lattice constant also influences this variation in spin-wave spectra and spin-wave mode profiles. These observations are important for potential applications of the antidot lattices with triangular holes in future magnonic and spintronic devices.
Introduction
Recent advances in nanofabrication techniques have resulted in artificially patterned magnetic metamaterials, known as magnonic crystals (MCs), which have great potential for technological applications and fundamental research [1,2]. Investigation and tailoring of the magnetization dynamics in ferromagnetic nanodots [3,4], nanowires [5] and antidots [6][7][8] have fuelled considerable research on reconfigurable MCs, which act as media for standing and propagating spin waves (SWs) in the GHz frequency regime. Ferromagnetic antidot lattices (magnetic thin films with periodic non-magnetic inclusions or embedded holes) have emerged as one of the strongest candidates for reconfigurable, effective media for SW propagation due to their larger propagation velocity (steeper dispersion) than nanodot lattices. They find potential applications in magneto-photonic crystals [9], ultrahigh density data storage media [10], frequency-based magnetic nanoparticle detectors [11], waveguides for SWs [12,13], spin-wave filters [14], spin-logic [15] and reprogrammable magnonic devices [16]. The edges of the antidots lead to quantization of SW modes due to lateral confinement as well as the generation of a periodically modulated internal magnetic field due to the demagnetization effect. A number of parameters can be varied to tune the magnonic spectra and magnetization dynamics in ferromagnetic antidot lattices. Several studies have focused on engineering the coercive field, magnetoresistance and anisotropy properties, as well as domain formation and the magnetization reversal mechanism, with the change of shape, size and density of antidots [17,18]. Extensive research on the dynamics of standing and propagating SWs in antidot lattices has shown pattern-induced splitting [19], confinement, localization and propagation of SWs, depending upon the lattice and antidot geometry, base material, and strength and orientation of the bias field [6][7][8][19][20][21][22][23][24][25][26]. Intrinsic configurational magnetic anisotropy arising due to the internal field variation can be tuned effectively by varying the antidot lattice symmetry [21,24]. The shape of the antidots is found to control the SW mode structures as well as the anisotropy in the frequency spectra [25]. Quantized SW modes have been found to be transformed to propagating ones and vice versa in rhombic antidot lattices with the variation of the in-plane orientation of the bias-magnetic field [26]. A particular study showed the hysteresis and anisotropy properties of Ni80Fe20 (a permalloy, noted as Py hereafter) antidot lattices with hexagonal symmetry under the influence of the hole size, lattice packing fraction and scale factor via a micromagnetic numerical approach [27]. Bi-component or filled antidot lattices can tune the SW properties and magnonic band structure more efficiently due to the strong interelement exchange and dipolar coupling [28,29]. A remarkable difference in magnetic anisotropies and magnetization reversal mechanisms has been observed in systematically engineered square and binary antidot lattices [30].
Hexagonally arranged antidot lattices are interesting because they offer the highest packing density features among all Bravais and non-Bravais lattices. In contrast to other lattice symmetries, the hexagonal lattice structure exhibits six-fold anisotropy with an easy axis that alternates at every 60° and it does not obey the nearest-neighbour rule as the easy axes are oriented along the edges of the hexagonal unit cell [26,31,32]. All of those studies were performed on antidots of circular shape, and very rarely, antidot lattices with triangular-shaped holes, which may suffer from edge effects due to the sharp triangular edges of the holes, have been explored [25]. Unlike the circular or square-shaped antidots, the demagnetized regions around the sharp edges of the triangular-shaped antidots are asymmetric. These may lead to interesting properties of SW quantization in the regions between the antidots. Here, we have focused on the detailed and systematic investigation of the magnetization dynamics in two-dimensional Py antidot lattices where nanostructured triangular holes are arranged in a hexagonal lattice using an all-optical time-resolved magneto-optical Kerr microscope. We have investigated the variation in the nature of the extended and quantized SW modes in such systems by changing the strength and orientation of the in-plane bias-magnetic field and the lattice constant of the array. Micromagnetic simulations have also been performed to understand and interpret the experimental results, which helped to unravel the transformation of extended SW modes to quantized ones with the angular variation of the in-plane magnetic field and change in lattice constant. The opening and closing of channels for spin-wave extension and localization for a large number of bias field angles (in the full range of 0° to 90°) for this type of complex magnonic crystal have not been studied earlier. Also the sharp corners of the triangular holes of the high density antidot lattice and the complicated lattice structure create inhomogeneous internal magnetic fields due to the effective pinning centres for SWs created by the asymmetric demagnetized regions between the neighbouring triangular holes, which may give rise to some new and interesting physics of opening and closing of new channels for spin-wave extension and/or localization at particular edges of the triangular holes. Finally we have extensively studied the variation in the internal magnetic field profiles and demagnetizing regions for different angular orientation of the applied bias magnetic field and lattice constant for deeper understanding of the origin of the observed SW modes in such complex magnonic crystal, which was not done previously.
Results and Discussion
The antidot arrays were fabricated by a combination of electron-beam lithography, electron-beam evaporation and ion milling [20]. Figure 1a,b shows the scanning electron micrographs (SEMs) of the two hexagonal antidot arrays, S1 and S2. The edge length of the triangular holes is about 200 nm and the separation between the nearest edges for the two samples is about 200 nm and 500 nm, respectively (lattice constants (a) are 400 nm and 700 nm, respectively). About ±5% deviation in the edge length of antidots and lattice constant is observed. The SEM images show that the triangular antidots have rounded corners and suffer from small asymmetry in their shapes. The above deviations and asymmetry in the shape of the antidots have been included in the micromagnetic simulations, as described later in this article.
The lattice parameters a and b are varied while the angle γ is kept constant at 120° for the hexagonal lattice, as shown in Figure 1b. The unit cell is also marked in Figure 1b. The values of a and b, as obtained from the SEM images, are 400 and 360 nm for S1 and 700 and 560 nm for S2. The value of γ is obtained as 120 ± 2°. For convenience, the antidot arrays will be described only by the lattice constant a from here on.
The ultrafast magnetization dynamics was measured by using a home-built time-resolved magneto-optical Kerr effect microscope based upon a two-colour collinear pump-probe setup [33,34]. The second harmonic (λ = 400 nm, pulse width ≈100 fs) of a Ti:sapphire laser was used to pump the samples, while the time-delayed fundamental (λ = 800 nm, pulse width ≈80 fs) laser beam was used to probe the dynamics by measuring the polar Kerr rotation with an optical bridge detector. A variable amplitude magnetic field is applied to the sample, the direction of which was tilted slightly (10°) out of the plane of the sample to have a finite demagnetizing field along the direction of the pump pulse. The pump beam modulates this out of plane demagnetizing field to induce precession of magnetization in the sample. The in-plane component of the magnetic field is referred to as the bias magnetic field. The magnetization dynamics in the antidot arrays are measured as a function of strength (H) and angular orientation (φ) of the bias magnetic fields.
Variation of magnetization dynamics with the orientation of the applied bias magnetic field and lattice constant
Figure 1c shows representative time-resolved Kerr rotation data from the array with a = 700 nm under an in-plane bias field of 1.3 kOe at φ = 0°. The graph reveals three important temporal regimes, namely the ultrafast demagnetization (region I), fast relaxation (region II), and precessional motion superposed on a slow relaxation (region III). We have further performed precise measurements of the time-resolved Kerr rotation for about 3 ps from the zero delay with a higher temporal resolution of 25 fs (Figure 1c inset) and fitted the data with the three-temperature model using the analytical expression [35] given in Equation 1.
Here the time resolution of the laser profile is accounted for by a Gaussian function G(t), which is convoluted with the fit function containing two exponentials with time constants t_m and t_e, representing the demagnetization and the fast relaxation time, respectively. H(t) and δ(t) represent the Heaviside step function and the Dirac delta function, respectively. A1, A2, and A3 are constants. From the fit we have obtained the ultrafast demagnetization time as 204 ± 3 fs and the fast relaxation time as 1.0 ± 0.01 ps. This is followed by the slower relaxation process, which occurs within 400 ± 7 ps, while the precessional oscillation is superposed on the slower relaxation. Figure 1d shows the precessional dynamics after removing the negative delay and ultrafast demagnetization and subtracting a bi-exponential background. A fast Fourier transform (FFT) is performed over this background-subtracted oscillatory Kerr rotation data to obtain the power vs frequency plot.
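The analytical expression itself (Equation 1) did not survive extraction. Based on the quantities defined in the surrounding text (G(t), t_m, t_e, H(t), δ(t), and A1-A3), it is presumably the standard three-temperature-model fit function used in TR-MOKE studies; the reconstruction below is a best-effort sketch, with τ0 denoting the heat-diffusion time that appears in the standard form but is not named in the surviving text:

```latex
-\frac{\Delta M(t)}{M} = \left\{\left[\frac{A_1}{\left(t/\tau_0 + 1\right)^{1/2}}
  - \frac{A_2\, t_e - A_1\, t_m}{t_e - t_m}\, e^{-t/t_m}
  - \frac{t_e\left(A_1 - A_2\right)}{t_e - t_m}\, e^{-t/t_e}\right] H(t)
  + A_3\,\delta(t)\right\} \ast G(t)
```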
Figure 2a-f shows the representative background-subtracted experimental time-resolved Kerr rotation data for some specific orientations of the in-plane bias magnetic field for the two antidot arrays. The experimental FFT power spectra for S1 and S2 with φ varying from 0° to 90° (at an interval of 15°) are shown in Figure 2g and Figure 2i.
As φ and a-values are varied, we observe a distinct variation in the magnetization dynamics. For all φ and a-values, multimodal SW spectra are observed corresponding to the damped nonuniform oscillations [36].
The experimental data of the bias-field-angle dependence of SW spectra for S1 have been taken at H = 1.0 kOe, whereas for S2 the data have been taken at H = 1.3 kOe. However, since both field values are well above the saturation field for Py (the base material of the antidot lattice), the variation in bias field magnitude only changes the SW frequency values for the two samples, while the qualitative features of the angular dependence of the SW spectra and the corresponding SW mode profiles would not be affected by this. The experimental SW spectra for S1 show 3 modes for φ = 0°, 30°, 60° and 90°. The spectra for φ = 0° and 60° are qualitatively similar in nature (though the mode frequencies are not the same). Also, there is a qualitative (but not quantitative) agreement between the spectra for 30° and 90°. However, the SW spectra are remarkably different for φ = 15°, 45° and 75°, with a drastic increase in the number of modes at φ = 45° and 75°. These indicate a change in the collective nature of the magnetization dynamics with varying φ-values. When we consider S2 with the larger a-value, we get a significant difference in the nature of the SW spectra as opposed to S1. Here, instead of a large number of modes, only two modes (one dominant mode with a low-power shoulder) are observed for almost all the angles (excluding φ = 30°, where only one mode is observed).
Figure 2. (a-f) Experimental time-resolved Kerr rotation data for some specific orientations of the bias magnetic field for the two arrays: (a) S1 at 0°, (b) S1 at 45°, (c) S1 at 60°, (d) S2 at 0°, (e) S2 at 45°, and (f) S2 at 60°. (g) and (i) show the FFT power spectra of experimental time-resolved Kerr rotation data of S1 and S2 for different orientations of the in-plane bias field: (g) for S1 at H = 1.0 kOe.
The experimental SW spectra are well reproduced by micromagnetic simulations using the OOMMF software [37]. The simulated FFT power spectra for S1 and S2 with the variation of φ are shown in Figure 2h and Figure 2j. As opposed to the experimental technique, which is based on optical excitation of magnetization, the simulation is performed by applying a pulsed magnetic field, which reproduces the experimental conditions successfully. The details of the simulation can be found elsewhere [4]. Similar simulation methods have also previously been used to successfully reproduce and understand the magnetization dynamics and SW mode profiles in different types of magnonic systems [34,36,38]. We have studied arrays of 7 × 7 antidots and discretized the arrays into rectangular prisms of dimensions 4 × 4 × 20 nm³. The excitation over the whole array is uniform and we have extracted the data from the centre of the arrays. The lateral cell size is well below the exchange length of Py (≈5.2 nm). The shapes introducing the actual edge roughness of the triangular holes have been derived from SEM images, and the material parameters used for Py were gyromagnetic ratio γ′ = 17.6 MHz Oe⁻¹, anisotropy field Hk = 0, saturation magnetization Ms = 860 emu cm⁻³, and exchange stiffness constant A = 1.3 × 10⁻⁶ erg cm⁻¹. The material parameters were extracted by measuring the variation of the precessional frequency (f) with bias magnetic field H for a Py thin film and by fitting them using the Kittel formula given below. The exchange stiffness constant A is obtained from the literature [39]. A pulsed field with a peak value of 30 Oe, 10 ps rise/fall time and 20 ps pulse duration is applied perpendicular to the sample plane, while a damping coefficient α = 0.008 is used during dynamic simulations. The experimentally observed SW spectra (FFT of the time-resolved Kerr rotation data) match qualitatively with the simulated SW spectra, but due to some limitations in the micromagnetic simulations, we observe a slight quantitative disagreement between the experimental and simulated spectra [36]. As the triangular antidots have rounded corners, and hence suffer from small asymmetry in their shapes, the simulations have been performed by introducing the actual edge roughness of the triangular antidots. However, the precise edge roughness and deformation could not be reproduced by the finite-difference-method-based micromagnetic simulations used here. The simulations have also been performed on similar antidot arrays after applying a two-dimensional periodic boundary condition (2D-PBC). The simulations with and without application of 2D-PBC show almost identical results. The simulation results with the introduction of 2D-PBC are given in Supporting Information File 1. Figure 3a shows the bias magnetic field dependence of the SW frequencies fitted with the Kittel formula for a 20 nm-thick Py blanket film. Figure 3b shows the same extracted from the experimental and simulated FFT spectra for S2. The experimental data points corresponding to mode 1 are well fitted with the Kittel formula. However, the higher frequency mode (mode 2) does not follow the same formula. The Ms value obtained from the Kittel fit of mode 1 is 712 emu cm⁻³, while the other magnetic parameters remain the same as for the Py blanket film.
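The Kittel formula referenced above was likewise lost in extraction. For an in-plane magnetized thin film with negligible anisotropy (Hk = 0, as assumed for Py here), it presumably takes the standard form, with γ′ the gyromagnetic ratio quoted in the text:

```latex
f = \frac{\gamma'}{2\pi}\sqrt{H\left(H + 4\pi M_s\right)}
```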
The difference between the simulated and experimental SW mode frequencies may arise due to random demagnetized regions at the edges and rounded corners of the triangular antidots, which are hard to precisely incorporate in finite-difference-method-based micromagnetic simulations such as the OOMMF used here. Further, we could not fit any of the experimental modes of S1 with the Kittel formula.
Micromagnetic analysis of the collective magnetization dynamics in the arrays
We have further simulated the spatial profiles of the resonant modes using a home-built code [40], and the simulated power and phase maps for S1 are shown in Figure 4a and Figure 4b, respectively. For φ = 0°, mode 1 has an extended character through the horizontal channels between the neighbouring antidot rows in the Damon-Eshbach (DE) geometry (i.e., extended in a direction orthogonal to the applied bias magnetic field). On the contrary, mode 2 is a localized mode where the highest spin-precession amplitude is localized in the same horizontal channels with quantization number n = 3. Mode 3 is again a quantized mode with a higher quantization number (n = 5). When φ is rotated to 15°, mode 1 is an edge mode (EM) of the array and the highest spin-precession amplitude associated with this mode is found at the top vertex of each triangular hole. Interestingly, in the simulated profile, mode 2 for φ = 0° is split into two for φ = 15° (modes 2 and 3). These two modes are localized modes between diagonally situated next nearest neighbours, but the standing waves do not form exactly between the diagonally situated next nearest neighbours and become asymmetric due to asymmetry in the internal field profile. Mode 4 is the quantized mode with quantization number n = 5. When φ is further rotated to 30°, mode 1 is again an EM of the lattice with the highest spin-precession amplitude mainly concentrated, due to the demagnetizing regions, at the left-most vertex of each triangular antidot. Mode 2 is a localized mode and, as opposed to φ = 15°, here the standing wave is symmetric and forms exactly between the diagonally situated next nearest neighbours. Again, mode 3 is a quantized mode with n = 3. For φ = 45°, the lowest frequency mode is split into two modes (modes 1 and 2). For these two modes, overlap between localized modes generates a pseudo-extended mode through the channel marked by the dotted line shown in Figure 1a (the diagonally extended channel). The next mode is again split, into mode 3 and mode 4, and these two are localized modes along the same channel. Here, the highest frequency mode 5 is quantized with quantization number n = 5. Again, for φ = 60°, the spatial profiles of the SW spectra qualitatively match those for φ = 0°. Here, mode 1 is a fully extended mode similar to that for φ = 0°, but the channel of propagation is different and it flows through the diagonally extended channel. Mode 2 is localized in the same channel with n = 3, and mode 3 is a quantized mode with a higher quantization number (n = 7). Again, at φ = 75°, each mode is split into two. For the two lowest frequency modes (modes 1 and 2), the highest spin-precession amplitude is concentrated at the left-most vertex of the triangular antidots. These two are localized modes and seem to run parallel through the diagonally extended channel of the array. The next higher frequency modes are quantized modes with quantization numbers increasing from mode 3 to mode 6. For φ = 90°, due to the unavailability of continuous channels along the vertical direction, mode 1 is again an EM of the lattice with the highest spin-precession amplitude mainly concentrated, due to the demagnetizing regions, at the left-most vertex of each triangular antidot. Mode 2 is a localized mode and, as at φ = 30°, the standing wave is symmetric and forms exactly between the vertically situated next nearest neighbours. Mode 3 is again a quantized mode with a higher quantization number (n = 5).
In S2, we observed a remarkable variation in the SW mode profiles, as shown in Figure 5. In addition to horizontally and diagonally extended continuous channels, we also observed continuous channels in the vertical direction (which were unavailable in S1), and fully extended modes in the DE geometry are obtained through the horizontal, diagonal and vertical channels for φ = 0°, 60° and 90°, respectively. For the other angles, the lower frequency extended, pseudo-extended or edge modes (present in S1) are not present here, and we observed localized and/or quantized modes.
Magnetostatic field distribution of the arrays
We have further simulated the magnetostatic field distribution of the antidot arrays by using the LLG micromagnetic simulator [41]. Figure 6 shows the magnetization maps (domain plots) and the contour plots of the simulated magnetostatic field distributions for the two arrays at some specific orientations of the in-plane bias field. This reveals the demagnetized regions and internal-field distribution around the antidots for different values of φ. For different φ-values, the surface charges at the boundaries between the antidots and the magnetic layer lead to the formation of different types of domains through the demagnetization field. Due to the triangular shape of the antidots, the demagnetized regions, as well as the magnetostatic field distributions around the arrays, are not symmetrical. As φ is varied, the domain structure and the internal field lines change considerably. This leads to the variation in the SW mode structures as well as the mode frequencies. For S1 at φ = 0°, domains with magnetization reaching ±45° with respect to the y-axis (the direction of the applied bias magnetic field) are located within the regions between the base and the top apex of the vertically nearest neighbour triangular antidots. But due to the triangular shape of the holes and the small lattice constant, not all the ±45° domains are symmetrically situated. The domains with magnetization along the applied field are located within the central area of the unit cell. From the contour plot it is evident that the density of the internal field lines in the region between the base and top apex of vertically situated triangular holes (i.e., along the horizontal channel) is large. The hexagonal geometry gives the extended nature of the SW modes through the horizontal channel shown in Figure 1. For S1 at φ = 60°, the density of the internal field lines decreases along the horizontal channel but increases along the diagonal channel, as small domains with magnetization directed nearly along the y-axis are located at the left and right apex, and domains with magnetization directed nearly along the x-axis are located at the top apex of each triangle. But along the diagonally extended channel, the magnetization points along the direction of the applied field and the SW shows an extended nature through this channel. Again, for S1 at φ = 45°, the density of the internal field lines is lower compared to φ = 0° and 60° along the horizontal and diagonal channels, respectively. Domains with magnetization directed nearly along the x-direction are located along the horizontal channel (between the base and top apex of vertically situated nearest neighbours) and domains with magnetization directed nearly along the y-direction are located between horizontally situated nearest neighbours. But in this orientation of the bias field, the extended nature of the SW is suppressed due to the absence of a channel along which only one type of domain could be observed. For S1 at φ = 90°, the density of the internal field lines reduces significantly and the demagnetizing regions become asymmetric around the triangular holes. In most of the regions, the magnetization points along the direction of the applied field. Only very small domains with magnetization pointing nearly along the y-axis are located at the corners of the triangular holes. But in this orientation, the extended nature of the SWs is not observed due to the hexagonal geometry of the lattice with a small lattice constant.
The domain structure changes when we consider the array S2 with a larger lattice constant. For φ = 0°, the asymmetric nature of the domain structure found in S1 is not observed in S2; all the domains almost coalesce, and very small ±45° domains are located only at the corners of the triangular holes for S2. In most of the regions, the magnetization points along the direction of the applied magnetic field, and as the horizontal channels consist of only one type of domain, the power of the extended mode through this horizontal channel is considerably higher than that for S1. Similarly, in other orientations, the domains coalesce more and only one type of domain is observed in most of the regions except at the triangular corners. Hence, in the case of S2, we do not observe EMs as obtained in S1, and for all other orientations of the bias magnetic field we observe either quantized or extended modes with comparatively higher power than in S1.
Conclusion
In conclusion, we have investigated the effects of the orientation of the bias magnetic field and the lattice constant on the ultrafast magnetization dynamics and magnetostatic field distribution in a periodic array of triangular nanoholes forming a hexagonal antidot lattice in a thin Py film by using time-resolved Kerr microscopy. The experimental results reveal that the magnetization dynamics can be effectively tuned by the systematic variation of the orientation of the in-plane bias magnetic field and the lattice constant. Micromagnetic simulations successfully reproduced the experimental results, and a fully extended SW mode is found to transform to quantized ones and vice versa simply by changing the in-plane orientation of the bias field. For the antidot lattice S1 (lattice constant 400 nm), the channels for SW propagation are found to be open at φ = 0° and 60°. For φ = 45°, we observe a pseudo-extended nature of the SW modes along the diagonally extended channel, whereas for the other angles, due to the unavailability of continuous propagation channels, the power of the SWs is found to be concentrated at specific edges of the triangular holes. Interestingly, for S2 (lattice constant 700 nm), due to the increased inter-antidot separation, an additional SW propagation channel at φ = 90° is opened. For the other angles, the low-power edge modes present in S1 are absent here due to the increased lattice constant, and for those angles we mainly observe quantized and/or localized modes. The observed variation in the collective magnetization dynamics with the orientation of the in-plane bias field is attributed to the variation of the internal field distribution between the triangular-shaped antidots. The observed tunability of the magnetization dynamics and SW spectra with the variation of the in-plane bias field orientation and the lattice constant is anticipated to be important for nanoscale magnonic-crystal-based technology.
Experimental

Fabrication
Two-dimensional arrays of Py antidots with triangular holes arranged in a hexagonal lattice have been fabricated by a combination of electron-beam lithography, electron-beam evaporation and ion milling. The 20 nm thick Py film was deposited on a commercially available self-oxidized Si(100) substrate, and a 60 nm thick protective layer of Al2O3 was deposited on top of the Py film in an ultrahigh-vacuum chamber at a base pressure of 2 × 10⁻⁸ Torr. The Al2O3 capping layer protects the samples from external contamination, degradation with time, and direct irradiation by the laser light. A PMMA/MMA bilayer resist was used for electron-beam lithography to prepare the resist pattern on the Py thin film, followed by argon ion milling at a base pressure of 1 × 10⁻⁴ Torr with a beam current of 60 mA for 6 min to etch the Py film from everywhere except the unexposed resist pattern, creating the triangular antidots.
Measurement
A custom-built all-optical time-resolved magneto-optical Kerr effect (TRMOKE) microscope based on a two-colour collinear optical pump-probe geometry has been employed to measure the ultrafast magnetization dynamics of the antidot lattices. In this technique, the second harmonic (λ = 400 nm, fluence = 20 mJ/cm², pulse width ≈ 100 fs, spot size = 1 μm) of the fundamental laser beam from a mode-locked Ti:sapphire laser (Tsunami, Spectra Physics) is generated by a second-harmonic generator (SHG) to pump or excite the dynamics. The fundamental beam (λ = 800 nm, fluence = 5 mJ/cm², pulse width ≈ 80 fs, spot size = 800 nm), placed at the centre of the pump beam, is used to probe the dynamics by measuring the time-varying polar Kerr rotation from the sample. The magneto-optical Kerr rotation is measured by an optical bridge detector as a function of the time delay between the pump and probe beams. The pump and probe beams are spatially overlapped and focused together on the antidot lattice in a collinear fashion using a single microscope objective (N.A. = 0.65). The sample is scanned by an x-y-z piezoelectric scanning stage, which gives high stability to the sample in the presence of feedback loops. The pump beam was chopped at 2 kHz, and phase-sensitive detection of the Kerr rotation and reflectivity was performed using lock-in amplifiers and an optical bridge detector at room temperature. A variable magnetic field is applied at a small angle (10°) to the sample plane, and its in-plane component is defined as the bias magnetic field H. In the experiment, we effectively vary the azimuthal angle (φ) of H between 0° and 90° at intervals of 15° for the hexagonal antidot lattice by rotating the samples using a high-precision rotary stage while keeping the microscope objective and H fixed. The pump and probe beams are made incident on the same region of the array for each value of φ.
Supporting Information
Supporting Information File 1: Micromagnetic simulations of the antidot arrays by applying 2D-PBC.

| 6,921.6 | 2018-04-09T00:00:00.000 | ["Physics"] |
BRD4 as a Therapeutic Target in Pulmonary Diseases
Bromodomain and extra-terminal domain (BET) proteins are epigenetic modulators that regulate gene transcription by interacting with acetylated lysine residues of histone proteins. BET proteins have multiple roles in regulating key cellular functions such as cell proliferation, differentiation, inflammation, oxidative and redox balance, and immune responses. As a result, BET proteins have been found to be actively involved in a broad range of human lung diseases, including acute lung inflammation, asthma, pulmonary arterial hypertension, pulmonary fibrosis, and chronic obstructive pulmonary disease (COPD). Owing to the identification of specific small-molecule inhibitors of BET proteins, targeting BET in these lung diseases has become an area of increasing interest. Emerging evidence has demonstrated the beneficial effects of BET inhibitors in preclinical models of various human lung diseases. This is largely related to the ability of BET proteins to bind to promoters of genes that are critical for inflammation, differentiation, and beyond; by modulating these critical genes, BET proteins are integrated into the pathogenesis of disease progression. The intrinsic histone acetyltransferase activity of bromodomain-containing protein 4 (BRD4) is of particular interest, as it seems to act independently of its bromodomain-binding activity and has implications in some contexts. In this review, we provide a brief overview of the research on BET proteins, with a focus on BRD4, in several major human lung diseases, the underlying molecular mechanisms, and preclinical findings on targeting BET proteins with pharmacological inhibitors in different lung diseases.
Introduction
The bromodomain and extra-terminal domain (BET) family are known as epigenetic readers that regulate gene transcription. The family has four members in mammals, i.e., bromodomain-containing proteins 2, 3, and 4 (BRD2, BRD3, and BRD4) and the testis-specific BRDT. BET proteins have several conserved domains: all BET members share two tandem N-terminal bromodomains (BDs) and one C-terminal extra-terminal (ET) domain. The first and second BDs (BD1 and BD2) bind to acetylated lysine residues of nuclear proteins, through which they enhance the activity of the transcriptional machinery and thus gene transcription. Unlike BRD2 and BRD3, BRD4 and BRDT also have a C-terminal domain (CTD). The CTD recruits the positive transcription elongation factor b (pTEFb), a cyclin-dependent kinase controlling elongation by RNA polymerase II [1,2], to promote gene transcription. The ET domain may serve as another important transcriptional regulator through interaction with several cellular proteins, including glioma tumor suppressor candidate region gene 1 (GLTSCR1), Jumonji domain-containing 6 (JMJD6), and nuclear receptor binding SET domain protein 3 (NSD3) [3][4][5]. Among these BET family members, BRD4 is the most studied. BRD4 and BRD2 knockouts in mice are embryonically lethal [6,7]. BET proteins regulate gene transcription through binding to acetylated lysine residues of nuclear proteins (e.g., histones, transcription factors, enhancers, super-enhancers, and others) to facilitate RNA polymerase II-dependent transcription [8]. Two important transcription factors, NF-κB
Implications of BET Proteins in Pulmonary Diseases
Increasing evidence suggests the participation of BET proteins, particularly BRD4, in the development of pulmonary diseases (Table 1). The following sections will discuss the novel functions of BRD4 in pulmonary diseases based on findings from in vitro assays and preclinical models.
Acute Lung Inflammation
BET proteins are important for regulating inflammatory and immune responses. Of the first two BET inhibitors discovered, JQ1 and I-BET [60,61], I-BET was initially found to regulate macrophage-driven inflammation [61]. Several other BET inhibitors also demonstrate anti-inflammatory effects in lung inflammation caused by lipopolysaccharide (LPS). For example, Chen and colleagues reported that the BETi CPI-203 remarkably suppressed Th17 cytokine production (IL-17A, IL-22) by T cells from the lungs of cystic fibrosis patients [21]. The authors further showed that CPI-203 treatment inhibited Th17 chemokines and cytokines in human bronchial epithelial cells derived from cystic fibrosis and control donors. In addition, CPI-203 decreased the inflammatory response in mice with acute Pseudomonas aeruginosa infection. During acute inflammation, adhesion of leukocytes to activated endothelial cells is an early event. A previous study showed that the BETi JQ1 attenuated the production of adhesion molecules and pro-inflammatory cytokines (e.g., IL-6 and IL-8) from human umbilical vein endothelial cells (HUVECs), and leukocyte adhesion to activated endothelial cells (ECs) induced by TNF-α in vitro [22]. The attenuation of NF-κB and p38 MAPK pathway activation by JQ1 may account for these effects of BET inhibition. Consistently, in LPS-induced acute lung inflammation, JQ1 pretreatment decreased leukocyte infiltration into the lung and suppressed the expression of VCAM-1, ICAM-1, and myeloid-related protein 14 (MRP14) [22]. In another study, the BD2-selective BETi RVX-297 also reduced proinflammatory mediators (IL-6 and IL-17) in the spleen and serum in a mouse model of LPS-induced acute inflammation [23]. The work by Liu et al. [24] showed that, in a polyinosinic:polycytidylic acid (poly(I:C))-induced acute lung inflammation mouse model, two selective BRD4 inhibitors (ZL0420 and ZL0454) blocked neutrophil infiltration into the lungs and cytokine expression in the lungs more effectively than JQ1 or RVX-208. In addition, ZL0420 and ZL0454 also demonstrated strong potency in inhibiting toll-like receptor 3 (TLR3)-dependent innate immune gene expression in human small airway epithelial cells (hSAECs). These several lines of evidence support BET inhibition as an attractive strategy for suppressing acute lung inflammation.
Respiratory viruses are key contributors to acute lung inflammation. Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a strain of coronavirus that causes coronavirus disease 2019 (COVID-19) [62]. To search for potential drug targets, Gordon and colleagues identified 332 high-confidence protein-protein interactions between SARS-CoV-2 and human proteins. Interestingly, BRD2 and BRD4, but not other BET family members, were found on the list to interact with SARS-CoV-2 [25]. Further mechanistic studies by Vann et al. [63] showed that the SARS-CoV-2 E protein interacts with human BRD4 through two different mechanisms, i.e., BD1 and BD2 of BRD4 bind to the acetylated E protein at the K53 and K63 sites, and the ET domain of BRD4 interacts with an unacetylated motif (SFYVYSRVKNLN) of the E protein. Further, JQ1 and OTX015 treatment reduced SARS-CoV-2 infection in vitro. The data suggest that BRD4 contributes to SARS-CoV-2 infection. In addition, BRD2 and BRD3 regulate the expression of the viral entry receptor angiotensin-converting enzyme 2 (ACE2) to facilitate de novo viral infection by SARS-CoV-2, while BRD4 is the least effective in this task [26,27,64]. Of note, however, the BETis JQ1 and ABBV-744, when administered on the same day as SARS-CoV-2 infection in K18-hACE2 transgenic mice constitutively expressing the human ACE2 receptor, enhanced viral replication [27]. These data underscore the antiviral role of BET proteins post entry and raise concerns about BET protein inhibition in ongoing SARS-CoV-2 infections.
Acute respiratory distress syndrome (ARDS) is a serious pathological condition associated with severe pulmonary inflammation, hypoxia, and edema [65]. It is also one of the major complications in severe COVID-19 patients [66]. The effects of BRD4 targeting in ARDS have been reported recently. In one study [28], BRD4 siRNA lipoplexes were found to suppress the inflammatory infiltration of neutrophils and mast cells in the lungs of mice challenged with LPS. Blocking BRD4 also attenuated the cytokine storm and oxidative stress associated with ARDS. Mechanistically, knockdown of BRD4 attenuated the nuclear expression of NF-κB and STAT3, two critical downstream targets of TLR-4 signaling activated by LPS. The same group also reported the induction of BRD4 by LPS in the RAW264.7 macrophage cell line and the BEAS-2B bronchial epithelial cell line, as well as in an LPS-promoted tumor metastasis model [29]. However, whether BET/BRD4 inhibition will block LPS-promoted tumor metastasis in vivo remains to be tested.
Asthma
The T helper 2 (Th2) cell-mediated inflammatory response in the airways represents a key mechanism underlying allergic asthma [67]. Although much is known about the pathogenesis of allergic asthma, the mechanisms of Th2 cell differentiation remain largely unclear. A very recent study unveiled a novel role of BRD4 in Th2 cell differentiation from mouse primary naïve CD4+ T cells [68]. BRD4, in collaboration with Polycomb repressive complex 2 (PRC2), repressed the transcriptional expression of the Th2 negative regulators Foxp3 and the E3-ubiquitin ligase Fbxw7. This in turn promoted lineage-specific differentiation of Th2 cells from mouse primary naïve CD4+ T cells. Specifically, Foxp3 was found to repress the Th2-specific transcription factor Gata3, while Fbxw7 promoted ubiquitination-mediated protein degradation of Gata3. JQ1 treatment eliminated the repression of Foxp3 and Fbxw7 by the BRD4 BD2 domain. This effect leads to the activation of Gata3-regulated genes including IL-4, IL-5, and IL-13. This study provides evidence that BRD4 is involved in allergic asthma by modulating the Th2-mediated inflammatory response, which warrants further investigation. IL-9-producing helper T cells (Th9) are defined by IL-9 expression and are a hallmark of allergic lung inflammation. A comprehensive study by Xiao and colleagues uncovered a novel role of BRD4 in Th9 cell induction and IL-9 production, with implications for the treatment of airway inflammation by BRD4 inhibition [30]. The authors first showed that OX40 (CD134) robustly induced CD4+ T cell differentiation into Th9 cells, accompanied by the assembly of super-enhancers on the Il9 locus, using H3K27Ac as an indicator of active enhancers. They identified the enrichment of BRD4 at the Il9 super-enhancer region (H3K27Ac) in OX40-induced Th9 cells. Further, they found that JQ1 dramatically inhibited OX40 co-stimulation (with the Th9 cell medium containing TGF-β + IL-4)-induced Th9 cell conversion from naïve CD4+ T cells (reduced from 70% to 11% with JQ1). Consistent effects were found using siRNA targeting BRD4. Further, several mouse models were used to evaluate the effects of BRD4 inhibition on airway inflammation. These include the OX40L-tg spontaneous airway inflammation mouse model, an aerosolized OVA-challenged mouse model (3 weeks after immunization with OVA in alum adjuvant), and an adoptive transfer model in Rag-1-deficient mice with the transfer of OT-II cells with selective BRD4 knockdown. BRD4 inhibition with JQ1 or BRD4 knockdown markedly reduced airway inflammation, as shown by the suppressed proliferation of mucin-producing cells in the airways and inflammatory cell infiltration into the lungs, as well as suppressed IL-9 levels in the bronchoalveolar lavage (BAL), compared to the corresponding controls. These data together demonstrate the crucial role of BRD4 in mediating airway inflammation via IL-9 secretion and imply a novel strategy of targeting BRD4 in allergic airway inflammation.
Expression of the proinflammatory cytokine interleukin-17F (IL-17F) is also increased in the airways of asthmatics and correlates with asthma severity [69]. IL-17F induces the production of CXCL8 (IL-8), and the latter potentially contributes to neutrophilic infiltration into the airway. BRD4 has been reported to mediate IL-17F-induced CXCL8 production and CDK9 phosphorylation in human ASMCs [70], suggesting that BRD4 may be involved in neutrophilic infiltration by modulating IL-17F/CXCL8 signaling.
Airway smooth muscle cell (ASMC) hyperproliferation and secretion of proinflammatory mediators contribute to airway remodeling and inflammation in asthma [71][72][73]. The response of ASMCs from healthy subjects and asthmatics to the BETis JQ1 and I-BET762 in the presence of fetal calf serum (FCS) and TGF-β1 stimulation has been reported previously [31]. The study found that the proliferation and the secretion of IL-6 and CXCL8 of ASMCs from severe and non-severe asthmatics were more resistant to treatment with JQ1 and I-BET762. While c-Myc knockdown significantly inhibited the proliferation of ASMCs derived from both healthy and asthmatic donors, JQ1 treatment had no effect on the mRNA levels of c-Myc, a reported target of BRD4 [74,75]. Our previous work also supports c-Myc-independent effects of JQ1 in the induction of cancer cell apoptosis [76], which seem to be cell-type dependent. Instead, JQ1 was found to increase p21 and p27 and potentially induce cell cycle arrest. In addition, JQ1 also attenuated the binding of BRD4 to the CXCL8 promoter. CXCL8 is a chemokine that contributes to steroid-resistant neutrophilic airway inflammation [77]. Clifford and colleagues found that ASMCs from asthmatic patients secrete higher levels of CXCL8 compared to those from healthy donor controls [32]. They further found that ASMCs from asthmatic donors demonstrate increased histone H3 lysine acetylation (H3K18Ac) and increased binding of p300. BRD4 and BRD3 were found to bind to the CXCL8 promoter, which can be inhibited by different BET inhibitors including PFI-1, I-BET, and JQ1. I-BET also disrupted the binding of BRD4 and RNA polymerase II to the CXCL8 promoter, without affecting the binding of transcription factors including NF-κB and C/EBPβ [78]. Conversely, in human bronchial epithelial cells, JQ1 impaired the binding of NF-κB p65 to the CXCL8 promoter [49]. In viral infection, the vaccinia virus protein F14 was recently shown to selectively inhibit a subset of TNF-α-induced NF-κB signaling, i.e., suppressing BRD4 recruitment to the promoters of CCL2 and CXCL10, whereas the binding to the promoters of NFKBIA and CXCL8 was not affected. Interestingly, JQ1 treatment blocked the induced binding of BRD4 to the promoters of CCL2 and CXCL10 but not to the NFKBIA and CXCL8 promoters [79]. These findings suggest bromodomain-independent recruitment of BRD4 to the promoters of certain genes, although the detailed mechanism(s) of this interaction remain(s) elusive.
The role of BRD4 in airway remodeling, another key feature of severe asthma, has gained much attention recently [80]. Two highly selective BRD4 inhibitors, ZL0420 and ZL0454, were compared to the non-selective pan-BET inhibitors JQ1 and RVX208 (also called apabetalone) with respect to their ability to reduce TLR3-induced chronic airway remodeling, using a mouse model that mimics recurrent virus-induced asthma exacerbations [33]. The authors showed that BRD4 targeting by siRNA or by the specific BRD4 inhibitors (ZL0420 and ZL0454) dramatically suppressed TLR3-mediated mesenchymal transition in hSAECs, as indicated by the suppressed induction of genes including SNAI1, ZEB1, IL6, VIM, FN1, MMP9, and COL1A by poly(I:C) over 15 days. Both ZL0420 and ZL0454 inhibited the poly(I:C)-induced acetylation at histone H3 lysine residue 122 (H3K122Ac) in hSAECs. This is consistent with the reported intrinsic histone acetyltransferase (HAT) activity of BRD4 [81]. The HAT activity of BRD4 towards H3K122Ac can be attenuated by the BETi I-BET [81], suggesting dependence on its binding to BDs. Further, the study examined the in vivo efficacy of BRD4 inhibitors against innate inflammation-driven airway remodeling. The authors found that BRD4 inhibitors blocked TLR3 agonist-induced epithelial-to-mesenchymal transition (EMT) in the lungs of mice receiving poly(I:C). Treated mice also demonstrated improved lung function and pathological changes, as well as reduced collagen deposition in subepithelial and interstitial spaces. This study provides evidence that BRD4 is a novel target for inflammation-induced airway remodeling. Whether BRD4 inhibition could reverse airway remodeling induced by poly(I:C) deserves future exploration and may further inform the role of BRD4 targeting in inflammation-driven airway remodeling.
Recently, Lu and colleagues also reported the implication of BRD4 in fine particulate matter 2.5 (PM2.5)-induced airway hyperresponsiveness (AHR) [34]. In the nose-only PM2.5 exposure model, PM2.5 induced AHR, lung inflammation, and elevated expression of BRD4. Such effects were attenuated by the BRD4 inhibitor ZL0420. In vitro experiments showed that PM2.5 induced hASMC contraction and migration, as well as elevated expression of BRD4, vimentin, MMP2, and MMP9. Interestingly, these effects were reversed by ZL0420 and BRD4 siRNA in hASMCs. The data suggest that BRD4 contributes to PM2.5-induced AHR and may represent a therapeutic target for the treatment of inflammatory airway diseases.
Exacerbations of asthma are modulated by acute viral infection. Tian et al. [35] showed that RSV induces BRD4 to complex with NF-κB/RelA. By doing so, BRD4 modulates the assembly of CDK9 and RNA polymerase II on the promoters of IRF1, IRF7, and RIG-I, promoting their transcriptional elongation. In vivo, BRD4 inhibition with JQ1 blocked poly(I:C)- and RSV-induced neutrophilia, mucosal chemokine production, and airway obstruction. In cat dander-induced asthma, Tian and colleagues [36] further showed that BRD4 was also induced to complex with NF-κB/RelA in primary hSAECs. This binding activates BRD4's atypical HAT activity, leading to inflammatory and profibrotic gene expression, which can be blocked by the BRD4 inhibitor ZL0454. ZL0454 also blocks epithelial-mesenchymal transition, myofibroblast expansion, IgE sensitization, and fibrotic changes in the airways of naïve mice associated with cat dander exposure. These findings together support the idea that BRD4 may serve as a potential target in exacerbated asthma, regardless of the cause.
In summary, different layers of evidence support a critical role of BRD4 in allergic asthma and airway remodeling. BRD4 acts on multiple cell types to promote asthma by modulating the secretion of proinflammatory and profibrotic mediators, cell differentiation, EMT, and the balance of proliferation and apoptosis of ASMCs. The emerging evidence strongly supports testing BET/BRD4 inhibitors in asthmatics in clinical trials.
Pulmonary Arterial Hypertension (PAH)
PAH is a vascular disease that primarily affects the distal pulmonary arteries. It is characterized by increased inflammation, vasoconstriction, and hyperproliferation of smooth muscle cells with suppressed apoptosis within the arterial wall. It is a progressive disease that leads to increased pulmonary vascular resistance, right ventricular failure, and death. Previously, Courboulin and colleagues compared 337 microRNAs (miRNAs) between PAH and control lungs and reported six upregulated miRNAs, including miR-21, and a single downregulated miRNA, namely miR-204, in PAH [82]. Later, Meloche and colleagues reported elevated BRD4 expression in the distal pulmonary arteries and pulmonary arterial smooth muscle cells (PASMCs) of PAH patients. Further, this enhanced expression was shown to be miR-204-dependent [37]. In addition, these investigators showed that JQ1 treatment reversed established PAH in the Sugen 5416/hypoxia rat model [37]. They found that BRD4 targeting by JQ1 or siRNA suppressed pro-survival signals (e.g., NFATc2, BCL-2, and survivin) and increased cell cycle arrest via p21 upregulation in PASMCs, which may account for the observed effects of JQ1 in vivo. The miR-204-BRD4 axis has also been noted in head and neck squamous cell carcinoma, in which miR-204 acts as a tumor suppressor by enhancing p27 mRNA stability through targeting BRD4 [83]. Another pan-BET inhibitor, I-BET151, was also shown to ameliorate right ventricular hypertrophy and pulmonary hypertension in rats induced by chronic hypoxia and pulmonary inflammation [38]. In addition, the BD2-selective BETi RVX208 also demonstrated benefits for PAH in the Sugen/hypoxia and monocrotaline (MCT)+shunt PAH rat models [39]. RVX208 treatment reversed vascular remodeling and improved lung hemodynamics. RVX-208 also reduced the pressure load on the right ventricle in a rat PAH model induced by pulmonary artery banding. In vitro data showed that RVX-208 restored the altered phenotypes of proliferation, apoptosis resistance, and inflammation of both PASMCs and PAECs derived from PAH patients. The authors further identified two downstream targets of BRD4, FoxM1 and PLK1, implying modulation of the DNA damage response as a contributor to PAH [39]. Consistently, JQ1 was reported to attenuate the inflammation (mRNA of IL-6 and CXCL8) and proliferation of human pulmonary microvascular endothelial cells (HPMECs) from healthy subjects. These observations are related to the reduced NF-κB/p65 recruitment to the native IL6 and CXCL8 promoters and the cell cycle arrest at G0/G1 induced by JQ1 [40]. A recent pilot clinical trial in PAH patients also demonstrated the benefits of BET targeting: RVX-208 treatment for 16 weeks decreased pulmonary arterial resistance and increased cardiac output, stroke volume, and compliance in all six PAH patients [84]. Future larger, placebo-controlled trials are warranted to assess the efficacy of BETis in PAH patients.
Together, these findings support further exploration of BRD4 as a novel therapeutic target in PAH.
Pulmonary Fibrosis
Largely due to the discovery of specific BET inhibitors [60,61], studies were enabled to explore the targeting of BET proteins in pulmonary fibrosis. Two papers from the same research group emerged a few years later that investigated the role of BET proteins in pulmonary fibrosis and represent a milestone for targeting BET/BRD4 in this context. In the first study, Tang and colleagues [41] showed that the response of human lung fibroblasts (HLFs) from healthy donors to TGF-β1 and PDGF-BB was mediated by BET proteins. Specifically, TGF-β1 induces the acetylation of lysine 5 of H4 (H4K5ac, a reported BRD4 binding site [85]) and the binding of BRD4 to the promoters of IL-6, α-smooth muscle actin (α-SMA), and plasminogen activator inhibitor-1 (PAI-1). Accordingly, pretreatment of HLFs with JQ1 or I-BET blocked the TGF-β1-induced transcription of these genes. Further, the TGF-β1- and PDGF-BB-mediated cell phenotype switching of HLFs, including proliferation, migration, and extracellular matrix production, was also abolished by JQ1 and I-BET pretreatment. Interestingly, knockdown of BRD2 and BRD4, but not BRD3, significantly blocked the induction of α-SMA by TGF-β1 in HLFs, indicating differential effects of BET proteins in mediating TGF-β1-induced myofibroblast differentiation. Of note, JQ1 treatment did not affect Smad2/Smad3 nuclear translocation, suggesting a null effect on Smad3 phosphorylation. In the bleomycin model of pulmonary fibrosis, JQ1 treatment reduced total BAL fluid cells and collagen deposition, as indicated by the hydroxyproline content in the lungs of injured mice. In the second paper [42], the authors compared the response of HLFs from idiopathic pulmonary fibrosis (IPF) lungs and tumor-free lungs (control) of tumor patients. They found that IPF HLFs are more responsive, or activated, in response to PDGF-BB treatment, as determined by cellular proliferation, migration, and IL-6 release, and that JQ1 treatment inhibited these responses. Consistent with predictions from the in vitro evidence, JQ1 dose-dependently suppressed inflammatory infiltration in the lungs of bleomycin-challenged mice (21 days). In accord, pulmonary fibrosis, as indicated by collagen I expression, was also largely inhibited by JQ1. Recently, Bernau and colleagues [43] further showed the efficacy of selective inhibition of the BRD4 BD1 domain in reducing myofibroblast differentiation and reversing established pulmonary fibrosis in mice using the BRD4 BD1-selective inhibitor ZL0591.
CG223, a novel quinolinone-based BETi, was also reported to attenuate bleomycin-induced pulmonary fibrosis in mice [44]. When given daily in the inflammatory phase of the bleomycin model (3 to 12 days after bleomycin instillation), CG223 reduced the number of lymphocytes and neutrophils in the BAL, collagen deposition in the lung, and the Ashcroft score of mice that received bleomycin. Specifically, bleomycin induced the enrichment of BRD4 in fibrotic lesions of the lungs of mice and the binding of BRD4 to the promoters of profibrotic genes including thrombospondin 1 (Thbs1), integrin β3 (Itgb3), and smooth muscle alpha (α)-2 actin (Acta2). In vitro experiments showed that CG223 dose-dependently inhibited the TGF-β1-induced expression of these genes in primary lung fibroblasts isolated from untreated C57BL/6 mice. Because the Thbs1 and Itgb3 genes are involved in the entry into the TGF-β1 autocrine/paracrine loop [86][87][88], this study implies a role of BRD4 in triggering this response. In addition to the BRD4-mediated downstream gene expression of TGF-β1 mentioned above, this study thus suggests another layer of interaction between BRD4 and TGF-β signaling.
Reactive oxygen species (ROS) are important mediators of TGF-β-induced myofibroblast differentiation, and inhibition of ROS production attenuates lung injury in bleomycin-challenged mice [89,90]. A gene array analysis [45] showed increased NOX4 gene expression and decreased SOD2 expression in systemic sclerosis (SSc) and IPF lung fibroblasts compared to non-fibrotic controls. In the same study, treatment with JQ1 was found to inhibit the TGF-β1-induced NOX4 increase, SOD2 decrease, and Nrf2 inactivation. In accord, the production of ROS and the myofibroblast differentiation induced by TGF-β1 were also significantly blocked by JQ1 in HLFs. Further, the authors found that BRD3 and BRD4, but not BRD2, bind to the NOX4 promoter and mediate TGF-β1-induced NOX4 expression. In addition, the BRD4/NOX4 axis has been implicated in age-related pulmonary fibrosis [46]: inhibition of BET with OTX015 resolved established pulmonary fibrosis in 18-month-old mice that had received bleomycin 21 days earlier, and lung function and histology were largely restored by OTX015 treatment for 21 days after the bleomycin challenge. In a brief report [47], JQ1 treatment was found, via an RNA-sequencing analysis, to upregulate genes enriched in glutathione metabolism and to downregulate fibrosis-related genes in primary HLFs from an IPF patient. These studies suggest that BET proteins are also involved in the regulation of the oxidative/reductive balance to promote myofibroblast differentiation.
Besides IPF, radiation-induced pulmonary fibrosis (RIPF) may likewise involve BET protein regulation. RIPF has been reported to occur in about 16% of Hodgkin's lymphoma patients [91] and in 70-80% of lung cancer patients who received high-dose radiation [92]. In a study evaluating the effect of JQ1 in RIPF [48], the authors found that JQ1 protected normal lung tissue after irradiation, attenuating RIPF, lung inflammation, and collagen deposition in a rat model of pulmonary damage induced by 20 Gy radiation. Proteins including BRD4 and c-Myc in the lungs of the irradiated animals were reduced by JQ1, as were collagen, TGF-β1, p-NF-κB p65, and p-Smad2/3. JQ1 also repressed the radiation-induced myofibroblast differentiation of normal HLFs. These results provide evidence supporting the targeting of BET/BRD4 as a candidate effective strategy for the treatment of different forms of pulmonary fibrosis.
Chronic Obstructive Pulmonary Disease (COPD)
COPD is the third leading cause of death worldwide and appears to be on an increasing trajectory [93]. Currently, there is no approved drug that can cure COPD. The pathogenesis of COPD is characterized by chronic inflammation and oxidative stress. A previous study [49] showed that the BET inhibitors JQ1 and PFI-1 dramatically reduced IL-6 and CXCL8 expression in human epithelial cells stimulated by IL-1β plus H2O2. JQ1 also inhibited the recruitment of p65 and BRD4 to the promoters of IL-6 and CXCL8. BRD4, but not BRD2, was found to mediate IL-6 and CXCL8 release from human primary epithelial cells. The results suggest a role for BRD4 in inflammation- and oxidative-stress-related proinflammatory cytokine production. Consistently, JQ1 treatment was also reported to inhibit pro-inflammatory gene expression in alveolar macrophages from COPD patients [50]. Alveolar macrophages extracted from transplanted COPD lungs were stimulated with LPS in the presence or absence of JQ1: while LPS induced a proinflammatory M1 macrophage phenotype, JQ1 treatment reversed these changes. There was no difference in the expression of BET proteins (BRD2, 3, 4, and BRDT) between alveolar macrophages from COPD patients and those from control subjects. In another study [51], JQ1 treatment ameliorated the oxidative stress of primary ASMCs, monocytic cells, and the THP-1 cell line. JQ1 activates the Nrf2-dependent transcription and expression of antioxidants including heme oxygenase-1 (HO-1), NADPH quinone oxidoreductase 1 (NQO1), and the glutamate-cysteine ligase catalytic subunit (GCLC). It also blocked H2O2-induced intracellular ROS production. All BET proteins (BRD2, 3, and 4) were found to interact with the Nrf2 protein, and BRD2 and BRD4 were also found to bind to the promoters of the NQO1 and HO-1 genes. Interestingly, neither the interaction between BET proteins and Nrf2 nor the binding of BET proteins to the promoters of the antioxidant genes (NQO1 and HO-1) was affected by JQ1 treatment, suggesting BD-independent activity of these BET proteins. These results indicate that BET proteins can selectively modulate Nrf2-specific gene expression and may thus contribute to oxidative-stress-related diseases such as COPD.
Small airway fibrosis occurs in COPD patients and contributes to obstructed air flow. The response of ASMCs derived from COPD patients to a profibrotic stimulus such as TGF-β1 differs from that of healthy controls. Work by Zakarya and colleagues showed higher mRNA expression of COL15A1 and TNC (tenascin C) in ASMCs from COPD patients compared to non-COPD controls [52]. Accordingly, elevated collagen 15α1 and TNC staining was found in the small airways of COPD lungs in contrast to those from non-COPD controls. TGF-β1 treatment induced a more pronounced increase in COL15A1 and TNC mRNA levels in ASMCs from COPD smokers versus non-COPD smokers, which was blocked by JQ1 treatment. JQ1 treatment was also found to abolish the TGF-β1-induced H4 acetylation at the promoters of COL15A1 and TNC only in COPD ASMCs. By contrast, H3 acetylation at the promoters of these two genes was minimally induced by TGF-β1, suggesting differential regulation. The findings of this study provide novel insight into the implication of BET proteins in TGF-β1-induced specific gene expression and histone H4 acetylation. Whether these effects are specific to BRD4 remains to be tested, since only the pan-BET inhibitor JQ1 was used. Again, these findings in human ASMCs warrant further validation in the in vivo setting of COPD.
In a cigarette-smoke- and LPS-induced COPD model in mice [53], JQ1 treatment reversed the histopathological changes and the induced cytokine profile. The mean linear intercept, destructive index, and inflammatory score of mice with induced COPD were dose-dependently improved by JQ1. JQ1 treatment also reduced the expression of MMP2, MMP9, IL-1β, IL-17, IL-6, and TNF-α, which were enhanced in COPD mice. In addition, the oxidative stress in COPD mice, as indicated by increased MDA levels and decreased SOD, HO-1, and T-AOC, was also significantly ameliorated by JQ1. These observed effects of JQ1 are associated with suppressed nuclear NF-κB p65, p65 acetylation (Lys310), and p65/DNA binding activity. This study demonstrates the efficacy of BETi in reversing established preclinical COPD. During viral exacerbation, researchers recently noted elevated mRNA expression of BRD4 in the blood and sputum of COPD patients compared to those in a stable state [54]. Consistently, elevated BRD4 expression was found in the lungs of mice subjected to influenza infection and cigarette smoke exposure. This model also demonstrated inflammatory cell infiltration and induction of IL-6 and chemokines in the lungs. Concurrent JQ1 treatment dramatically suppressed these alterations in the mice. Further, the authors showed that BRD4 siRNA significantly inhibited the protein and mRNA levels of IL-6 and CXCL8 induced by cigarette smoke exposure and influenza virus infection in the bronchial epithelial cell line BEAS-2B. These two studies support the concept that BRD4 may be a target in cigarette smoke- and infection-exacerbated COPD.
MiRNAs are known to be involved in the regulation of inflammation in different contexts. In COPD, several non-coding RNAs, including circular RNAs (circRNAs), long non-coding RNAs (lncRNAs), and miRNAs, have been reported to regulate airway epithelial cell apoptosis and inflammation induced by cigarette smoke. For example, both the circRNA ankyrin repeat domain 1 (circANKRD11) [55] and the circRNA oxysterol binding protein like 2 (circOSBPL2) [56] were found to be increased in the lungs of smokers with or without COPD, and downregulation of either mitigates cigarette smoke-induced apoptosis, inflammation, and oxidative stress in human bronchial epithelial cells. Mechanistic links to BRD4 were found for both circANKRD11 and circOSBPL2, with miR-145-5p and miR-193a-5p as the respective intermediate targets. In addition, miR-218 [57] and miR-29b [58] are also associated with cigarette smoke extract (CSE)-induced apoptosis and inflammation in human bronchial epithelial cell lines. Both miRNAs were downregulated in COPD lungs, which correlated with lung function and inflammation. In contrast, BRD4 was found to be increased in COPD patients and induced by CSE, and knockdown of BRD4 with siRNA blunted the CSE-induced inflammatory cytokine expression and secretion (IL-6, CXCL8, and TNF-α). In addition, the long non-coding MIR155 host gene (MIR155HG) was found to inversely correlate with miR-218-5p in COPD lungs and CSE-treated HPMECs [59]. Further, the lncRNA MIR155HG was found to downregulate miR-128-5p expression, through which BRD4 was targeted indirectly; the MIR155HG/miR-128-5p/BRD4 axis was thus implicated in the apoptosis and inflammation of HPMECs. Despite these findings, the regulation of BRD4 by non-coding RNAs in the pathogenesis of COPD remains largely unclear and an open research avenue. A schematic view of BRD4 in the pathogenesis of COPD is shown in Figure 1. BRD4 contributes to the development of COPD by regulating multiple cellular processes in various cell types, and targeting BRD4 may represent a novel, effective strategy for the treatment of COPD.

Figure 1: BET proteins, particularly BRD4, play important roles in these cell types to regulate key cellular processes including inflammation, oxidative stress, profibrotic changes, and apoptosis. These deregulated processes and related genes together contribute to the development of COPD. CSE, cigarette smoke extract; CXCL1, C-X-C motif chemokine ligand 1; GCLC, glutamate-cysteine ligase catalytic subunit; HAM, human alveolar macrophage; HASMC, human airway smooth muscle cell; HBEC, human bronchial epithelial cell; HO1, heme oxygenase-1; HPMEC, human pulmonary microvascular endothelial cell; NQO1, NAD(P)H quinone dehydrogenase 1; and TNC, tenascin C.
Conclusions and Future Perspectives
In summary, BRD4 is involved in the development of several major lung diseases. This is related to its role as a scaffold protein that enhances transcriptional activity. However, the molecular mechanisms underlying BET/BRD4 in different lung diseases remain largely unclear. For instance, the gene specificity of BRD4 binding, the coordinated or redundant roles of BET proteins, and the preferred downstream targets of BET/BRD4 in different lung diseases remain unclear. In addition to the reported binding proteins (e.g., the Mediator [94] and pTEFb complexes [1]), identification of novel BRD4 partners, for example through interactome analysis, may uncover novel functions of BRD4 in disease progression. Beyond its role as an epigenetic reader, BRD4 may also regulate gene transcription through its atypical histone acetyltransferase activity and kinase activity [95], adding another layer to the control of gene transcription. Further, BRD4 also integrates into the pathogenesis of lung diseases by modulating innate and adaptive immune responses. Future single-cell RNA sequencing analyses may help identify subgroups of patients who would benefit from BET/BRD4 targeting in different settings. While BETis have been actively tested in human diseases including cancer and metabolic diseases, clinical trials testing BET/BRD4 inhibitors in lung diseases are supported by the currently available evidence and could potentially identify safe, new and more effective interventions.

| 7,856.8 | 2023-08-25T00:00:00.000 | ["Biology"] |
Design, Implementation and Simulation of Non-Intrusive Sensor for On-Line Condition Monitoring of MV Electrical Components
Non-intrusive measurement technology is of great interest to electrical utilities in order to avoid interruptions in the normal operation of the supply network during diagnostic measurements and inspections. Inductively coupled electromagnetic sensing provides the possibility of non-intrusive measurements for online condition monitoring of the electrical components in a Medium Voltage (MV) distribution network. This is accomplished by monitoring Partial Discharge (PD) activity, one of the successful methods to assess the working condition of MV components, which however often requires specialized equipment for carrying out the measurements. In this paper, the Rogowski coil sensor is presented as a robust solution for non-intrusive measurements of PD signals. A high-frequency prototype of the Rogowski coil is designed in the laboratory. A step-by-step approach to constructing the sensor system is presented, and the performance of its components (coil head, damping component, integrator and data acquisition system) is evaluated using practical and simulated environments. The Alternative Transient Program-Electromagnetic Transient Program (ATP-EMTP) is used to analyze the designed model of the Rogowski coil. Real and simulated models of the coil are used to investigate the behavior of the Rogowski coil sensor at different stages of its development, from a transducer coil to a complete measuring device. Both models are compared to evaluate their accuracy for PD applications. Due to the simple design, flexible hardware, and low cost of the Rogowski coil, it can be considered an efficient current measuring device for integrated monitoring applications where a large number of sensors are required to develop an automated online condition monitoring system for a distribution network.
Introduction
The major driving forces behind modernizing the current power grid include increasing requirements for the reliability, efficiency and safety of the grid, along with optimization of capital assets while minimizing operation and maintenance costs. Among the greatest threats to the reliability of a power distribution network are various types of failures due to environmental and operational stresses [1]. Insulation degradation is one of the most frequent causes of failure in critical and expensive power components such as motors, generators, transformers, switchgear, and power lines. One of the methods to predict incoming insulation faults is to perform on-line condition monitoring of the network components. Partial Discharge (PD) diagnostics is a well-known technique for insulation condition monitoring. PD is the process of localized dielectric breakdown of a small portion (cavities, voids, cracks or inclusions) of a solid or liquid electrical insulation part which is under high voltage stress during operation [2]. The electrical stress due to the applied voltage causes discharges within the defective portion of the insulation. Different phenomena appear during discharges and give rise to respective detection indicators, such as electromagnetic radiation, sound or noise, thermal radiation, gas pressure, chemical formation, and electromagnetic impulses [3]-[6]. The PD measurement sensor technology is based on the type of energy exchange which takes place during the above-mentioned discharge phenomena. In this paper, electromagnetic impulses are considered for assessment of the PD activity.
Due to the rapid displacement of charges during a discharge event, voltage and current transients appear in the form of electromagnetic waves. These signals travel away from the site of origin along the power lines and can be measured by resistive, capacitive or inductive methods [7]. A variety of sensors are available for the measurement of electromagnetic signals. However, considering practical aspects such as low cost, high bandwidth, good sensitivity, favourable saturation characteristics, high linearity, and a wide operating temperature range, the Rogowski coil has been regarded as a favourite tool for high-frequency current-sensing purposes [7]. Its flexible design provides the possibility of installing it in a variety of physical locations, especially in tight spaces that may be inaccessible to typical iron-core current transformers. For MV overhead covered conductor lines, it can be installed directly around the CC line. For an MV cable network and cable accessories (joints and terminations), switchgear, and transformers, the coil can easily be installed around the earth straps to detect the fault current. Moreover, it can also be used for normal current measurements at the substation. Researchers and engineers have been using the Rogowski coil for high-amplitude sinusoidal currents and transients as well as low-amplitude, high-frequency signals in power systems and power electronics applications [8]-[11]. Nowadays it is also being used for the detection and localization of insulation and short circuit faults, and its application to relay protection is impressive as well [12]. Achieving simplicity of design, good accuracy over a wide range of amplitudes and frequencies of the measured signal, and low cost are the main challenges in the development of a sensor.
In this paper, the main design and construction features of the Rogowski coil are explored. The Rogowski coil sensor is treated as a composite of four essential and sequential components: the Rogowski coil head, the damping component, the integrator and the Data Acquisition System (DAS). The step-by-step implementation of the Rogowski coil is described, and the performance of each stage is evaluated for low-amplitude, high-frequency transient current signals. The operation of the Rogowski coil is simulated in the ATP-EMTP simulation software environment, which provides an in-depth analysis, verification of the coil prototype and the possibility of developing further PD diagnostic techniques for distribution lines.
Components of Rogowski Coil Sensor
A measuring sensor performs a complete measuring function, from initial detection to final indication of the measured quantity, as described in Figure 1. In the Rogowski coil sensor, the initial detection is done by a current-sensing coil which acts as an interface between the primary and the measured current. Intermediate signal processing consists of signal conditioning based on the operational features of the coil, its physical and electrical characteristics, and the purpose (requirements) of the measurements and diagnostics; this is accomplished by the damping component and the integrator. The final indication is displayed or recorded with the help of a suitable Data Acquisition System (DAS), and this recorded data can be transferred to personal computers for further investigation. Figure 1 represents the sequential construction and operation of the Rogowski coil measuring sensor, while its implementation is depicted in Figure 2. The detailed description and implementation of each component is explained further in this section.
Coil Head
The Rogowski coil's main sensing part is composed of a toroidally wound coil with $n$ turns on an air-core (dielectric) former of constant cross-sectional area. The air core is split in one location to allow assembly of the coil head unit. As shown in Figure 3, the wire is wound such that the winding starts from the first end, progresses towards the other end, and returns through the centre of the coil back to the first end, so that both terminals are at the same end of the coil. The number of turns, together with the core geometry, determines the sensitivity (mutual inductance) of the coil. The physical parameters are selected based on the application requirements of the coil; the space available for installation, the bandwidth, and the sensitivity of the coil are the main concerns when initiating the design.
Damping Component
Every piece of wire has resistance and inductance, and every pair of wires presents some capacitance in an electrical application [13]. The response of the Rogowski coil to the primary current signal can be well explained by developing an electrical model of this electromagnetic device. A lumped-parameter model is used in this work. The energy exchange between the coil's inductance and capacitance introduces oscillations in the output voltage and current, whose rate is determined by the resonant frequency of the coil. Such oscillations can be damped by properly terminating the coil with a suitable resistance, which is normally connected across the output terminals of the coil.
Integrator
The output voltage of the Rogowski coil, induced by the variable current passing through the primary conductor, is proportional to the derivative of the primary current; the main operating principle is thus described by Faraday's law. Due to the very nature of the electrical circuit of the Rogowski coil, oscillations are introduced by its second-order equivalent circuit characteristics. These factors affect the output of the Rogowski coil sensor (with reference to the primary signal), and the reliability of the measurement depends on how accurately the primary signal is captured. In order to recreate the original waveform of the measured signal, the first step is to remove the sensor's oscillations using the damping component (described above). An integration operation is then needed to process the differentiated output so that the output waveform (measurement result) matches the measured primary current waveform.
The Rogowski coil is an air-core induction sensor, and a relatively low number of turns is required to guarantee high-frequency operation. Therefore the current transfer ratio (output current to input current) is very low, and even minor resistive losses can significantly reduce the amplitude of the measured signal at the sensor's output. A specific calibration of the sensor's transfer ratio may thus be required to achieve the accurate amplitude of the measured signal.
Data Acquisition System
The output of the Rogowski coil sensor is an analogue signal. For more detailed analysis and recording of the measured transient patterns, the signal is passed on to a DAS and digitized. The DAS incorporates, in its first stage, an Analog-to-Digital Converter (ADC) with a very high sampling rate in order to capture the extremely short transients of the PDs occurring on the power line. The Nyquist criterion states that the sampling frequency of a system needs to be at least twice the highest frequency to be captured. The bandwidth needed to capture the PD traces reaches several tens of MHz; thus the sampling rate required from the DAS is in the tens of megasamples per second (MS/s). The ADC would need to have at least 8-bit resolution (preferably higher); therefore, having multiple sensors in a substation implies a very high data transfer bandwidth.
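To make the bandwidth argument concrete, the short Python sketch below estimates the raw aggregate data rate for a hypothetical multi-sensor installation. The resonant frequency is the value measured for the coil in this work, while the sensor count and ADC resolution are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope data-rate estimate for a PD acquisition front end.
f_res_hz = 37.6e6          # resonant frequency of the coil head (measured)
f_sample = 2 * f_res_hz    # Nyquist criterion: sample at >= 2x the bandwidth
bits_per_sample = 8        # minimum ADC resolution discussed in the text
n_sensors = 10             # hypothetical number of sensors in a substation

bytes_per_second = f_sample * (bits_per_sample / 8) * n_sensors
print(f"Sampling rate per channel: {f_sample/1e6:.1f} MS/s")
print(f"Raw aggregate data rate:   {bytes_per_second/1e6:.0f} MB/s")
# Roughly 75 MS/s per channel and ~750 MB/s for ten 8-bit channels, which
# is why the second DAS stage (event-triggered windowing) is needed.
```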
The second stage of the DAS performs initial processing of the results to reduce the amount of data to be processed. The PD transients have specific waveforms, and recognition/detection of such waveforms can be used to trigger the further storage and processing operations. As a result of such event filtering, the stored data are limited to a window of some thousands of samples, or even fewer, around each detected PD occurrence. The data bandwidth is thereby reduced for easy data storage or transmission to data centers for further analysis; a minimal sketch of this event filtering is given below. Suitable equipment for the DAS includes, for example, Digital Storage Oscilloscopes (DSOs) for laboratory measurements, while on-chip data-logger systems can be used for on-site applications.
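As a minimal illustration of the event-filtering stage, the following sketch (assuming a digitized trace held in a NumPy array, with an arbitrary amplitude threshold as the trigger criterion; window lengths are illustrative) extracts a fixed window of samples around each detected PD-like transient:

```python
import numpy as np

def extract_pd_windows(trace, threshold, pre=256, post=768, holdoff=1024):
    """Return fixed-length windows around samples whose absolute value
    exceeds `threshold`. `holdoff` suppresses re-triggering on the
    oscillatory tail of the same pulse. All parameters are illustrative."""
    windows = []
    i = pre
    while i < len(trace) - post:
        if abs(trace[i]) > threshold:
            windows.append(trace[i - pre : i + post].copy())
            i += holdoff          # skip ahead so one PD event = one window
        else:
            i += 1
    return windows

# Example: a noisy trace with two injected spikes.
rng = np.random.default_rng(0)
trace = 0.01 * rng.standard_normal(100_000)
trace[20_000] = 1.0
trace[60_000] = -0.8
events = extract_pd_windows(trace, threshold=0.5)
print(f"{len(events)} PD-like events captured, {events[0].size} samples each")
```

In a real system the trigger would typically be a matched-waveform detector rather than a bare threshold, but the data-reduction principle is the same.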
Physical and Electrical Model of the Coil Head
When the coil is placed around a conductor carrying the alternating or transient current $i_p(t)$ to be measured (see Figure 2), the voltage $V_{rc}(t)$ induced within the coil is expressed as

$$V_{rc}(t) = M_c \frac{\mathrm{d}i_p(t)}{\mathrm{d}t},$$

where $M_c$ is the mutual inductance, or sensitivity, of the coil, expressed in V/A at a specific frequency. The mutual inductance depends on the number of turns of the coil, the cross-sectional area of the core, and the diameter of the toroid (which determines the radial distance of the coil winding from the current-carrying conductor placed at the centre of the coil). The physical parameters of the Rogowski coil head are given in Table 1. The values of the electrical parameters depend on the physical design of the coil. Table 2 presents the measured electrical parameters of the Rogowski coil head; the details of the parameter measurement methodology have been described by the authors in [14]. The lumped-parameter (RLC) equivalent electrical model is shown in Figure 4.
During high-frequency measurements, the interaction of the sensed signal with the RLC parameters results in significant energy exchange between the coil's self-inductance $L_c$ and self-capacitance $C_c$, and causes oscillations with a resonant frequency $\omega_c$ which can be calculated as

$$\omega_c = \frac{1}{\sqrt{L_c C_c}}.$$

To verify the practical response of the Rogowski coil, a laboratory test was made. A PD pulse is injected from a PD calibrator into a simple test circuit as shown in Figure 5. The primary current pulse in the test line is measured by a commercial High Frequency Current Transformer (HFCT) and is shown in red in Figure 6(a). This current pulse will be used as the reference primary signal further in this paper. The captured output voltage of the Rogowski coil is shown as a time-domain plot in blue in Figure 6(a). This oscillating signal is the measured signal $V_o(t)$ which needs to be processed during the subsequent stages of the sensor development in order to recover the primary signal $i_p(t)$. The Fast Fourier Transform (FFT) of the captured response, shown in Figure 6(b), represents the frequency response of the coil's output voltage. The resonant frequency of the measured response is determined as 37.6 MHz. The resonant frequency reflects the values of the LC parameters of an induction sensor; therefore, an exact match of the calculated and practically measured resonant frequencies validates the accuracy of the measured electrical parameters.
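A small sketch of this cross-check, with placeholder L and C values standing in for the measured parameters of Table 2 (which are not reproduced here), might look as follows:

```python
import numpy as np

# Placeholder lumped parameters; substitute the measured values from Table 2.
L_c = 1.2e-6   # self-inductance [H] (assumed for illustration)
C_c = 15e-12   # self-capacitance [F] (assumed for illustration)

f_calc = 1.0 / (2 * np.pi * np.sqrt(L_c * C_c))
print(f"Calculated resonant frequency: {f_calc/1e6:.1f} MHz")

# Locate the resonance in a captured coil response v_o sampled at f_s.
f_s = 2.5e9                       # DSO sampling rate used in this work
t = np.arange(4096) / f_s
v_o = np.exp(-t * 5e6) * np.sin(2 * np.pi * f_calc * t)  # synthetic ringing
spectrum = np.abs(np.fft.rfft(v_o))
freqs = np.fft.rfftfreq(v_o.size, d=1.0 / f_s)
print(f"FFT peak at: {freqs[spectrum.argmax()]/1e6:.1f} MHz")
```

With these assumed values the calculated resonance lands near 37.5 MHz, close to the 37.6 MHz measured here; the agreement between the analytical value and the FFT peak is exactly the consistency check described in the text.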
Coil Head Along with Damping Component
In the previous stage, the output voltage of the coil was captured directly by a measuring device with input resistance $R_m$. With this measuring setup, the Rogowski coil operates in an undamped mode. In order to damp the oscillations, a suitable value of terminating resistance is required, which should be a specific ratio of the characteristic impedance of the Rogowski coil, as shown in Figure 7. Based on the terminal loading, the Rogowski coil can operate in overdamped, underdamped, and critically damped modes, as shown in Figure 8. The suitable operating mode is the one whose output waveform best reproduces the waveform of the primary signal.
The response $V_o(t)$ of the coil can be divided into two parts: 1) $V_{rc}(t)$, the forced response due to the primary signal, and 2) the natural response of the coil circuit, whose decay is governed by the damping coefficient $\xi$. The natural response reflects the coil's circuit properties and hides the information of the original measured signal. As Equation (4) makes visible, the damping coefficient can be efficiently used to control the effect of oscillations in the output voltage of the Rogowski coil. Considering Equation (2), the classical 2nd-order behaviour of the coil's electrical model can be represented by the characteristic equation

$$s^2 + 2\xi\omega_c s + \omega_c^2 = 0.$$

Comparing with the transfer function, the damping coefficient for the coil model without a damping component can be expressed as

$$\xi = \frac{R_c}{2}\sqrt{\frac{C_c}{L_c}},$$

whereas the damping coefficient with terminating resistance $R_t$ can be written as

$$\xi_t = \frac{1}{2}\left(R_c\sqrt{\frac{C_c}{L_c}} + \frac{1}{R_t}\sqrt{\frac{L_c}{C_c}}\right).$$

For different values of the terminating resistance, the location of the poles provides a quantitative view of the presence of oscillations within the output of the Rogowski coil, as shown in Figure 9. The distance of the poles from the real axis quantifies the presence of oscillations: for $R_t > Z_c$ the poles are away from the real axis, indicating stronger oscillations. It has been identified in [15] that a terminating resistance chosen as a specific fraction of the characteristic impedance $Z_c = \sqrt{L_c/C_c}$ yields critical damping: with such a damping component value, the poles still lie on the real axis while their magnitude has its highest value. The poles can have even higher magnitude values, but in that case they also acquire an imaginary part. The removal of the oscillations from the output is beneficial for the sensor application, as removal of the natural oscillatory response reduces Equation (4) to the forced term, $V_o(t) \approx V_{rc}(t)$. Figure 10 shows the transformation of the signal from the output of stage 1 to the output of stage 2. The compensation of the effect of the time differentiation and the mutual inductance can be carried out by integration and calibration.
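To illustrate how the terminating resistance moves the poles, the sketch below evaluates the damping coefficient and the pole locations of the reconstructed second-order model for a sweep of R_t. The RLC values are placeholders for the measured parameters, and the model follows the formulas given above, so this is an illustration of the trend rather than a reproduction of Figure 9:

```python
import numpy as np

# Placeholder lumped parameters (substitute measured values from Table 2).
R_c, L_c, C_c = 5.0, 1.2e-6, 15e-12   # assumed for illustration
Z_c = np.sqrt(L_c / C_c)              # characteristic impedance of the coil
w_c = 1.0 / np.sqrt(L_c * C_c)        # resonant angular frequency

print(f"Z_c = {Z_c:.0f} ohm")
for R_t in [0.25 * Z_c, 0.5 * Z_c, Z_c, 4 * Z_c]:
    # Damping coefficient of the terminated coil (reconstructed model).
    xi = 0.5 * (R_c * np.sqrt(C_c / L_c) + np.sqrt(L_c / C_c) / R_t)
    # Poles of s^2 + 2*xi*w_c*s + w_c^2 = 0.
    poles = np.roots([1.0, 2.0 * xi * w_c, w_c**2])
    oscillatory = np.iscomplexobj(poles) and bool(poles.imag.any())
    mode = "oscillatory" if oscillatory else "non-oscillatory"
    print(f"R_t = {R_t/Z_c:4.2f}*Z_c -> xi = {xi:5.2f} ({mode})")
```

Running this shows the expected trend: large R_t gives a small damping coefficient and complex poles (ringing), while reducing R_t towards a fraction of Z_c pushes the poles onto the real axis.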
Coil Head Along with Damping Component and Integrator
Integration of the output of the Rogowski coil can be performed by one of two common means: (1) an electrical or electronic integrator, or (2) numerical integration in software after the coil output voltage is digitized [16]. In order to avoid a complex circuit and expensive components, the numerical integration is proposed to be done by the built-in numerical functions of the DSO. The sampling frequency is selected based on the resonant frequency of the Rogowski coil (considering the Nyquist criterion). The digital integration can be expressed as $i(N) = \frac{1}{f_s}\sum_{n=1}^{N} V(n)$, where $i$ is the current obtained by digital integration of a voltage signal $V$, $f_s$ is the sampling frequency, and $N$ is the order number of the sample. The digital integration recreates the current up to a scale factor: the amplitude of the induced voltage is reduced by the mutual inductance $M_c$, and resistive or stray losses can further reduce the amplitude of the measured pulse. Calibration of the measured signal is done by comparing the measured signal $i_m(t)$ of the Rogowski coil with the reference signal. The calibration factor $K_{cal}$ can be calculated as $K_{cal} = i_p(t)/i_m(t)$, where $K_{cal}$ depends on $M_c$ and the stray losses $K_{st}$. Thus $i_{pm}(t)$ can be obtained as $i_{pm}(t) = K_{cal}\, i_m(t)$. The overall measurement scenario of the primary current and the final outcome are shown in Figure 11 and Figure 12.
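A minimal sketch of the integration and calibration stages described above; the mutual inductance $M_c$ and the pulse shape used here are illustrative placeholders, not measured values.

```python
import numpy as np

def reconstruct_current(v_o, f_s, k_cal):
    """Numerically integrate the damped coil voltage and calibrate.

    v_o   : sampled coil output voltage V(n) (damped-mode signal)
    f_s   : sampling frequency in Hz
    k_cal : calibration factor (absorbs M_c and stray losses K_st)
    """
    i_m = np.cumsum(v_o) / f_s       # i(N) = (1/f_s) * sum_{n<=N} V(n)
    return k_cal * i_m               # i_pm(t) = K_cal * i_m(t)

# Illustrative use with an assumed mutual inductance M_c:
f_s = 2.5e9
M_c = 1.0e-7                               # H, hypothetical
t = np.arange(0, 1e-6, 1/f_s)
i_p = np.exp(-t/1e-7) - np.exp(-t/5e-9)    # toy primary pulse shape
v_o = -M_c * np.gradient(i_p, 1/f_s)       # ideal coil output, -M di/dt
i_pm = reconstruct_current(v_o, f_s, k_cal=-1.0/M_c)
print("peak reconstruction error:", np.max(np.abs(i_pm - i_p)))
```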
Data Capturing Using Suitable Data Acquisition System
PD transients are high-frequency signals; therefore, a high sampling rate is required to capture them reliably without losing information. During the laboratory measurements, a sampling rate of 2.5 GS/s is used with the help of the DSO. However, it is important to take the sampling rate into account because it affects the economic parameters in terms of the cost of high-frequency processors and the large memory required for data storage. For real applications, such a high sampling rate (2.5 GS/s) is not necessary. A suitable range of required sampling rates can be estimated from the resonant frequency of the measuring sensor. The resonant frequency of the Rogowski coil used in this work is 37.6 MHz. Therefore, considering the Nyquist criterion, a sampling frequency of 76 MHz or more is suitable for capturing the PD data.
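The sampling-rate estimate is a one-line Nyquist computation (2 × 37.6 MHz = 75.2 MHz, quoted as about 76 MHz in the text):

```python
def min_sampling_rate(f_resonant, margin=2.0):
    """Nyquist-based lower bound on the DSO sampling rate."""
    return margin * f_resonant

print(min_sampling_rate(37.6e6) / 1e6, "MHz")  # -> 75.2 MHz
```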
Simulation of Rogowski Coil Sensor Using ATP-EMTP
The Rogowski coil is simulated using the ATP-EMTP power system transients simulation software. In the literature, the pulse used in different simulation packages is mostly based on waveforms characterized by a double-exponential mathematical model [13], where $A$ is the peak value of the pulse, $\alpha_{t1}$ is the rise time, and $\alpha_{t2}$ is half of the fall time. In this work, the data of the practically captured PD current pulse is imported into ATP to be measured by the simulated model of the Rogowski coil. Both the mathematical and the practical current pulses are shown in Figure 13(a). It can be clearly seen that the practical current pulse contains fast variations during the rising and falling slopes of the pulse. Such variations, or distortions, in the pulse significantly affect the output of the Rogowski coil. To avoid any loss of the high-frequency components present in the practical pulse, the simulated model of the Rogowski coil is assigned to measure the same practical primary current. This ensures a correct assessment of the identified parameters of the coil in creating an accurate model. The measured parameters (Table 2) of the coil are used for the simulation of the model. The current-carrying line used in the experimental setup is shown as the test line in the simulation. Two types of ATP models are shown in Figure 13(b) and Figure 13(c). Block B1 shown in the schematic of Figure 13(b) represents the PD current pulse source. The coil senses the current through the test circuit. Block B2 represents the time derivative of the primary current. Block B3 shows the RLC equivalent of the Rogowski coil. In the first model, the sensor is electrically connected with the primary line. The arrangement works well as long as the coil circuit does not disturb the primary current. However, if any reflections occur within the coil circuit, they may be conducted towards the primary (test) line. This phenomenon may affect the characteristics of the PD signal. In real practice, the Rogowski coil has no electrical connection with the primary line. Therefore, an improved model is developed in Figure 13(c) in order to isolate the sensor from the line. The Transient Analysis of Control Systems (TACS) unit, shown as block B2, senses the current of the primary line and eliminates the possibility of conducting any reflections from the secondary side towards the primary side. In this work, the first model has been considered for further analysis and development. The schematics shown above represent the simulation of the Rogowski coil head; the further stages are implemented in Figure 14.
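Where the captured pulse is not available, a synthetic test pulse can stand in. The sketch below uses one common double-exponential parameterization; the exact functional form and constants of [13] are assumptions here.

```python
import numpy as np

def pd_pulse(t, A, a1, a2):
    """Double-exponential PD test pulse, i(t) = A*(exp(-t/a2) - exp(-t/a1)).

    A  : peak-scaling amplitude
    a1 : time constant controlling the rise
    a2 : time constant controlling the fall
    (One common parameterization; the exact form used in [13] is assumed.)
    """
    return A * (np.exp(-t / a2) - np.exp(-t / a1))

t = np.linspace(0, 500e-9, 2000)
i = pd_pulse(t, A=1.0, a1=5e-9, a2=100e-9)
print(f"peak {i.max():.3f} at t = {t[np.argmax(i)]*1e9:.1f} ns")
```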
Conclusions
For successful on-line monitoring of distribution lines using PD monitoring, the tasks to be carried out include PD detection, localization, and measurement, which respectively provide information about the presence of PD activity, where the deterioration is happening, and the extent of the damage that has been done. Such information determines the execution of the required tasks (repair, replacement, or standby). This paper presents the design of a high-frequency Rogowski coil for PD diagnostics. The Rogowski coil is designed in four cascaded stages. The first stage is the design of the Rogowski coil head, which is used to sense the primary current as an induction sensor. In this form, the Rogowski coil can be used to detect the presence and polarity of PD signals, which is sufficient to detect and localize PD faults within electrical components [17]-[19]. The PD measurement task requires quantification of the discharge phenomenon, which needs information on the wave shape of the actual PD signals. The area of the PD transient waveform is used to calculate the amount of charge released during a PD event, and hence the extent of the PD defect can be estimated. For this purpose, the implementation of the second, third, and fourth stages is necessary. The Rogowski coil head along with the damping component is analogue, while integration, calibration, and data recording are done by the digital component (DSO). Nowadays many DSOs have a numerical integration function as a built-in feature, which makes the integration possible without any additional complex circuits and cost. Similarly, the calibration constant can be applied numerically. Due to limitations of cost, weight, size, and its broad scope of measurement applications, the DSO is generally used for laboratory or certain on-site applications. However, in applications where a large number of coil sensors are integrated into the network, the digital part of the Rogowski coil sensor can be developed using on-chip components and analog-to-digital converters with the required sampling rate, together with programmable signal-processor units and data-logging and storage functions.
For creating more complex monitoring systems, the ATP-EMTP simulation software has been shown to provide good results in transient analysis. The design input is simplified by ATP-EMTP, as it provides a graphical, mouse-driven preprocessor in which the user can construct a wide range of power system circuits. A variety of line models (taking the high-frequency effects into account) can be developed by entering the geometrical design parameters, which provides an opportunity to further develop and/or analyze diagnostic techniques for PDs travelling over the power lines.
Figure 1. Essential elements of a PD measuring sensor.
$R_c$, $L_c$, and $C_c$ are the structure-based inherent self-resistance, self-inductance, and self-capacitance of the Rogowski coil. During the current-measuring operation, the sensed signal passes through the RLC circuit of the coil, which gives rise to the oscillatory behavior described above.
Figure 2. Stages of construction of the Rogowski coil measuring sensor.
Figure 3. Physical model of the Rogowski coil.
Figure 4. Electrical model of the Rogowski coil head.
Figure 5. Experimental setup for measurement of PD current pulse. (a) Circuit model; and (b) Laboratory implementation.
Figure 6. Measured response of the Rogowski coil: (a) reference primary current pulse and coil output voltage in the time domain; (b) FFT of the coil output voltage, showing the oscillatory response due to the sensor's operation.
Figure 7. Electrical model of the Rogowski coil.
Figure 8. Modes of Rogowski coil operation based on the value of damping.
Figure 9. Effect of damping on the location of poles for different values of damping resistance.
Figure 11. Complete electrical equivalent circuit of the Rogowski coil measuring sensor.
Figure 12. Signal wave-shapes at different stages of sensor design. (a) Primary current pulse $i_p(t)$ to be measured; (b) Signal sensed by coil head; (c) Damped signal; (d) Integrated signal; and (e) Calibrated signal as the final measured primary signal $i_{pm}(t)$.
Figure 13. (a) Comparison of mathematical and practical current pulses; (b) ATP simulation model of the Rogowski coil head using the RLC equivalent circuit.
Table 1. Physical parameters of Rogowski coil head.
Table 2. Electrical parameters of Rogowski coil head.
"Engineering",
"Physics"
] |
Sphingomyelin Depletion from Plasma Membranes of Human Airway Epithelial Cells Completely Abrogates the Deleterious Actions of S. aureus Alpha-Toxin
Interaction of Staphylococcus aureus alpha-toxin (hemolysin A, Hla) with eukaryotic cell membranes is mediated by proteinaceous receptors and certain lipid domains in host cell plasma membranes. Hla is secreted as a 33 kDa monomer that forms heptameric transmembrane pores whose action compromises the maintenance of cell shape and epithelial tightness. It is not exactly known whether certain membrane lipid domains of host cells facilitate adhesion of Hla monomers, oligomerization, or pore formation. We used sphingomyelinase (hemolysin B, Hlb) expressed by some strains of staphylococci to pre-treat airway epithelial model cells in order to specifically decrease the sphingomyelin (SM) abundance in their plasma membranes. Such a pre-incubation exclusively removed SM from the plasma membrane lipid fraction. It abrogated the formation of heptamers and prevented the formation of functional transmembrane pores. Hla exposure of rHlb pre-treated cells did not result in increases in [Ca2+]i, did not induce any microscopically visible changes in cell shape or formation of paracellular gaps, and did not induce the otherwise typical hypo-phosphorylation of the actin depolymerizing factor cofilin. Removal of sphingomyelin from the plasma membranes of human airway epithelial cells thus completely abrogates the deleterious actions of Staphylococcus aureus alpha-toxin.
Introduction
Airway epithelia form major barriers between inhaled air and the internal space of the body [1]. In vivo, respiratory epithelia are covered by a thick mucus layer. Inhaled microorganisms and other particles stick to that mucus layer and are removed from the airways by the ciliary activity in the periciliary liquid (mucociliary clearance) [2]. Therefore, it is unlikely that inhaled bacteria like the human commensal and opportunistic pathogen Staphylococcus aureus (S. aureus) readily come into direct contact with the apical surfaces of epithelial cells. However, when the mucociliary clearance is attenuated (as in bedridden or immune deprived patients, or patients with virus infections or cystic fibrosis) bacteria may reach critical densities in the mucus layer and start to secrete soluble virulence factors. Virulence factors play a central role in the pathogenicity of S. aureus [3]. Secreted soluble virulence factors like alpha-toxin (hemolysin A, Hla) may diffuse through the mucus layer and reach the apical surfaces of the epithelial cells [4]. The assumption that Hla may play a role in the onset of S. aureus lung infection is supported by the findings of pneumonia patients having generated antibodies against Hla [5,6] and by animals being protected from developing S. aureus-mediated pneumonia when vaccinated against Hla [7].
Alpha-toxin is a pore-forming bacterial toxin [8]. It is lytic to red blood cells [9] and toxic to a wide range of mammalian cells [10]. Hla pores increase the membrane permeability for ATP [11,12] and cations like calcium, potassium, and sodium [13][14][15][16][17]. Cation-entry depolarizes the membrane potential in airway epithelial cells and enhances phosphorylation of p38 MAP kinase [17]. Additionally, Hla induces alterations in cell shape by remodeling the actin cytoskeleton and disrupts cell-matrix adhesions in human airway epithelial cells [18,19]. Actin remodeling seems to be mediated by hypo-phosphorylation and activation of cofilin [19], an actin depolymerizing factor [20].
Hla is secreted by the bacteria as a water-soluble 33 kDa monomer that binds to plasma membranes (PM) of eukaryotic host cells. Seven Hla monomers form a non-lytic heptameric pre-pore [21]. Subsequently, all seven subunits simultaneously unfold their pre-stem domains, which are then inserted into the lipid bilayer and form a cylindrical transmembrane pore [4,22,23].
SM and PC form clusters with cholesterol in so-called lipid rafts [36]. It is conceivable that such microdomains act as concentration platforms for membrane-associated proteins, and hence may mediate quick oligomerization of pore forming toxins [35,37,38]. It has actually been shown that pore formation of S. aureus Hla is highly effective in biological membranes, which have a high proportion of SM [39]. However, it was not clear whether the lipid composition affects the binding of the monomers, the assembly of membrane-bound monomers to heptamers, or the final step of pore formation, namely the coordinated unfolding of the stem loops of each of the assembled monomers to form the transmembrane portion of the pore.
To answer these questions, we used the recombinant form of another toxin of S. aureus, hemolysin B (beta-hemolysin, Hlb), which has enzymatic activity and functions as a neutral sphingomyelinase [40,41] cleaving SM to phosphorylcholine and ceramide [42,43]. Immortalized human airway epithelial cells (S9, 16HBE14o-), as well as freshly isolated human nasal epithelium, were pre-treated with recombinant Hlb to deplete sphingomyelin from the PM and subsequently exposed to recombinant Hla. Cells were tested for the well-known effects of Hla on cell signaling like calcium influx [17], changes in cell morphology, and cell layer integrity [18], or actin remodeling induced by hypo-phosphorylation of cofilin at Ser3 [19].
Pre-Treatment of Airway Cells with rHlb Allows rHla Monomer Binding to the PM, but Prevents Formation of Heptamers
To investigate whether SM is necessary for binding of Hla monomers to the host cell plasma membrane or for the assembly of heptameric transmembrane pores in these membranes, we pre-treated confluent cell layers (16HBE14o- and S9) for 1 h with 5000 ng/mL rHlb (sphingomyelinase), followed by a 0-4 h incubation with 2000 ng/mL rHla. Semi-quantitative Western blot analysis of whole cell protein extracts showed no changes in the abundance of Hla monomers, whether cells had been pre-treated with rHlb or not (Figure 1A,B,D,E). This indicates that binding of rHla monomers to airway cell plasma membranes was evidently not affected by the removal of SM from the plasma membranes in the two cell types (c.f. Figure S1). Experiments using freshly prepared human nasal tissue showed similar results, confirming the above conclusion (Figure 1G,H). In contrast, the abundance of rHla heptamers was lower in cell or tissue samples that were pre-treated with sphingomyelinase (rHlb) (Figure 1C,F,I).
In 16HBE14o- as well as S9 cells that had not been pre-treated with rHlb, heptamer assembly started immediately after the addition of rHla at 0 h and increased steadily with the duration of exposure up to 4 h (Figure 1C,F). However, when cells had been pre-treated with rHlb, heptamer abundance was significantly lower and did not show any increase over the time of exposure (Figure 1C,F). Similar results were obtained when freshly prepared human nasal tissue was used in the experiments (Figure 1H,I). These observations indicate that the presence of SM in the plasma membranes of human airway epithelial cells is essential for S. aureus Hla to form multimeric complexes.
Figure 1. Pre-incubation of cells with sphingomyelinase (rHlb) prevented formation of rHla heptamers (rHla7), but not plasma membrane binding of rHla monomers in airway epithelial cells and nasal tissue. Confluent layers of immortalized airway epithelial cells (16HBE14o- (A-C) and S9 (D-F)) were treated with 2000 ng/mL rHla after pre-treatment of cells in the presence or absence of 5000 ng/mL rHlb (sphingomyelinase) for 0-4 h. Cells treated with rHla showed binding of Hla monomers (33 kDa, rHla) and Hla heptamers (231 kDa, rHla7). The rHla monomer abundances were independent of the incubation time with rHla and independent of the pre-treatment regime with sphingomyelinase (A,B,D,E). Formation of Hla heptamers, however, was significantly reduced in 16HBE14o- or S9 cells which had been pre-treated with sphingomyelinase (rHlb) compared with control cells without sphingomyelinase pre-treatment (A,C,F). Experiments using freshly prepared human nasal tissue showed similar results (G-I).
Representative example Western blot signals of Hla heptamers (rHla7), Hla monomers (rHla), and β-actin are shown (A,D,G). Recombinant Hla (approximately 40 ng/lane) was used to indicate the position of Hla monomers (pos con) and, in some cases, of heptamers that form spontaneously when aqueous solutions of rHla are left at room temperature for 10 min. The positions of molecular mass standards (in kDa) are indicated. Mean values ± S.D. of densitometry signals of Western blot analyses, normalized to the densities of the respective β-actin bands (n = 5, each), were assembled in histograms. Individual means were tested for significant differences using Student's t-test or Welch's t-test (w): * p < 0.05, ** p < 0.01, or *** p < 0.001.
Effects of Sphingomyelinase Pre-Treatment of Airway Epithelial Cells on rHla-Mediated Changes in [Ca2+]i
As previously shown in human airway epithelial cells, treatment with rHla induced elevations in the cytosolic calcium concentration ([Ca2+]i) [15,17]. As observed previously, [Ca2+]i started to increase with a lag phase of approximately 5-10 min after the addition of rHla and reached levels significantly different (p < 0.05) from the controls at 20-22 min recording time. These results were confirmed in this study. Treatments of 16HBE14o- or S9 cells (Figure 2B) with 5000 ng/mL rHlb (sphingomyelinase) and subsequent exposure to 2000 ng/mL rHla (traces rHlb + rHla), however, did not result in any significant increases in [Ca2+]i. These traces were not significantly different from those obtained using cells that had been pre-treated with PBS (instead of rHlb) and treated with PBS instead of rHla during the experiment (Figure 2, traces PBS + PBS). Treatments of 16HBE14o- or S9 cells with 5000 ng/mL rHlb alone (traces rHlb + PBS) did not induce any changes in [Ca2+]i when compared to untreated control cells.
These results indicate that pre-treatment of airway epithelial model cells with sphingomyelinase (rHlb) prevented rHla-mediated increases in [Ca2+]i. Because acute addition of sphingomyelinase (rHlb) to airway epithelial cells did not elicit any sustained changes in [Ca2+]i (Figure S2A), and rHlb-pre-treated cells showed strong calcium influx upon addition of calcium ionophores (Figure S2B), it can be concluded that the suppression of rHla-mediated calcium signaling in rHlb-pre-treated airway epithelial cells is not a consequence of indirect effects (such as rHlb-mediated emptying of calcium stores before the addition of rHla, or ceramide-mediated internalization of rHla-containing plasma membrane).
Effects of Sphingomyelinase Pre-Treatment of Airway Epithelial Cells on rHla-Mediated Formation of Paracellular Gaps
Previous investigations had demonstrated alterations in cell shape, loss of cell-cell contacts, and the formation of paracellular gaps in initially confluent cell layers of airway epithelial model cells upon exposure to rHla [18], with the reaction of 16HBE14o- cells being much more pronounced than that of S9 cells. Thus, we tested in this study whether these effects of rHla could be moderated or abrogated by pre-treatment of cells with sphingomyelinase (rHlb). As shown in the still pictures taken from the time-lapse movies in Figure 3 (third row, each), confluent cell layers of 16HBE14o- as well as S9 cells that had been pre-treated with 5000 ng/mL rHlb did not develop microscopically visible gaps or other rHla-typical cellular changes upon treating the cells with 2000 ng/mL rHla (added at 0 h). Cell growth, division, and shape were comparable with control cells treated with PBS in both cell lines (Figure 3, first row, each), but clearly different from those cell cultures that had not been pre-treated with sphingomyelinase before being exposed to rHla (Figure 3, second row, each).
Effects of Pre-Treatment of Airway Epithelial Cells with Sphingomyelinase (rHlb) on rHla-Mediated Hypo-Phosphorylation of Cofilin
Earlier studies have shown that rHla-treatment of airway epithelial model cells resulted in hypo-phosphorylation of the actin depolymerizing factor cofilin [19], which is likely the most important step in the chain of events leading to rHla-mediated changes in cell shape and paracellular gap formation. In this study, we tested the impact of rHla on pSer3-phosphorylation of cofilin with or without pre-treatment of cells with rHlb. As reported previously [19], and again shown in Figure 4, treatment of human airway epithelial cells (16HBE14o- and S9 cells) with 2000 ng/mL rHla significantly decreased the levels of pSer3-cofilin in 16HBE14o- cells as well as in S9 cells (Figure 4B,D, dots). After adding rHla to the 16HBE14o- cell culture, the level of pSer3-cofilin declined, reaching a minimum after 2 h that was maintained for the remaining experimental time (Figure 4B). In S9 cells, the decrease of cofilin phosphorylation was transient and started to recover between 2 and 4 h of rHla-exposure (Figure 4D). These results are in accordance with those reported previously [19]. When cells had been pre-incubated with 5000 ng/mL S. aureus rHlb, no decline in cofilin phosphorylation was observed in either cell type (Figure 4B,D, diamonds). Total cofilin abundance (normalized to β-actin), although somewhat variable in cells, was not significantly affected by any of these treatments.
Figure 3. rHla-induced changes in cell shape and formation of paracellular gaps in confluent layers of airway epithelial model cells (16HBE14o- or S9 cells) were inhibited by pre-incubation of cells with S. aureus beta-toxin (rHlb), a sphingomyelinase. Treatments of cells with rHlb alone showed no cellular changes (data not shown). Phase contrast images were taken at different time points after adding PBS (vehicle control, PBS), 2000 ng/mL rHla, or rHla in the continued presence of 5000 ng/mL rHlb, respectively. Images were taken at the indicated times from time-lapse movies (Biostation IM, Nikon) monitoring the cells over 24 h.
Figure 4. Mean values ± S.D. of n experiments on different cell/tissue preparations. Individual means were tested for significant differences using Student's t-test or Welch's t-test (w): * p < 0.05, ** p < 0.01.
When freshly prepared human nasal tissue was exposed to 2000 ng/mL rHla for 2 h, a significant decline in pSer3-phosphorylation of cofilin was observed compared with untreated control tissue (Figure 4F). However, when nasal tissue was pre-incubated with 5000 ng/mL rHlb, the rHla-mediated hypo-phosphorylation of cofilin was absent (Figure 4F). This indicates that the results obtained using 16HBE14o- or S9 airway model cells are of physiological relevance.
Discussion
Besides sphingomyelin (SM), the most common and important phospholipids in the plasma membrane of eukaryotic cells are phosphatidylcholine (PC), phosphatidylserine (PS), and phosphatidylethanolamine (PEA) [44]. The choline-containing lipids, SM and PC, have been implicated in the formation of lipid rafts [38,45,46]. SM has been discussed as an important factor mediating the deleterious effects of S. aureus alpha-toxin (Hla) [21,47] on host cells [35,39]. However, it was not clear which of the sequential steps of forming functional transmembrane pores in the plasma membranes of host cells (monomer binding, monomer heptamerization and pre-pore assembly, or unfolding of the stem loops of each of the monomers to form the functional transmembrane pore) is facilitated by sphingomyelin. We used S. aureus β-hemolysin (Hlb), another secreted virulence factor of S. aureus that functions as a neutral sphingomyelinase [42,43] (Figure S1), as a tool to deplete the pool of sphingomyelin in the outer leaflet of the plasma membranes of airway epithelial model cells.
To investigate whether SM is necessary for binding Hla monomers to the plasma membrane or formation of heptameric pores in the plasma membrane, immortalized human airway epithelial cells (16HBE14o-, S9), as well as freshly isolated human nasal tissue, were pre-treated with Hlb and subsequently exposed to rHla. As shown in Figure 1, the removal of SM (rHlb + rHla) had no effect on the abundance of Hla monomers in the protein extracts from cells when compared with the samples obtained from cells that had not been pre-treated with Hlb ( Figure 1B,E,H). This indicates that Hla monomer binding was not affected by the presence or absence of SM. Western blot signals associated with Hla heptamers, however, showed a completely different picture ( Figure 1C,F,I). Because heptamer formation by membrane-bound alpha-toxin monomers is a very rapid process [48], the abundance of Hla heptamers in rHla-treated airway epithelial model cells steadily increased over the time of Hla exposure in those cells that were not pre-treated with Hlb ( Figure 1C,F; rHla). In cells that had been pre-treated with sphingomyelinase (rHlb), however, the abundance of Hla heptamers was significantly lower or entirely absent and did not change over time during the incubation period ( Figure 1C,F; rHlb + rHla). The latter finding was confirmed when freshly isolated human airway tissue was used ( Figure 1I), indicating that the lack of Hla heptamer formation is a general feature of SM-depleted airway epithelial cells. These results indicate that the presence of SM in plasma membranes of airway epithelial cells is dispensable for the attachment of toxin monomers to the cell surface, but essential for the heptamerization process and the formation of the pre-pore. Without the formation of a pre-pore there should be no unfolding of the stem loops of the seven subunits and no formation of a functional transmembrane pore. Thus, our expectation was that none of the usual Hla-mediated cell physiological changes would occur upon Hla exposure of our eukaryotic model cells if these had been pre-treated with sphingomyelinase.
In the absence of SM in airway epithelial cell membranes, rHla heptamerization was almost completely suppressed. The residual multimer formation may be due to incomplete SM degradation during the pre-incubation of cells with rHlb. Alternatively, choline-containing lipids like PC may be able to replace SM to a certain extent in mediating Hla heptamer formation [35]. Another potential explanation would be that Hla monomers assemble spontaneously and at a low rate without any assistance from lipids. This conclusion is supported by the observation that rHla monomers maintained for 10 min in aqueous solution at room temperature (Figure 1, pos con) sometimes show spontaneous heptamerization without the need for any additional reagents.
While these options remain to be tested, we focused on the question whether the removal of SM from host cell membranes and almost complete suppression of heptamer formation may be able to suppress the cell physiological effects that are normally seen in Hla-treated cells upon pore-formation [15,18,19]. We chose to measure the time course of changes in [Ca 2+ ] i , the formation of paracellular gaps, as well as the hypo-phosphorylation of cofilin upon addition of rHla to airway epithelial model cells that had or had not been pre-treated with rHlb.
Exposure of Indo-1-loaded 16HBE14o-cells or S9 cells to rHla resulted in a slow increase in Ca 2+ -mediated dye fluorescence, whose onset was delayed for 3 to 8 min, probably due to the time required for generating Hla pore-mediated calcium influx that exceeded the capacity of endogenous calcium extrusion mechanisms in these cells (Figure 2). These results match those that have been reported previously [15,18]. Pre-treatment of cells with sphingomyelinase (rHlb), however, completely abolished this cellular response to rHla-exposure (Figure 2, traces rHlb + rHla), indicating that sphingomyelin depletion prevents Hla from forming functional transmembrane pores. The few residual heptamers that seemed to form in rHlb-treated cells ( Figure 1C,F) may allow some influx of calcium ions into the cytosol of these cells that is, however, not large enough to out-perform the endogenous calcium extrusion mechanisms.
Monitoring cell shape changes and paracellular gap formation in confluent cultures of 16HBE14o- as well as S9 cells during exposure to rHla using time-lapse microscopy, we could confirm that treatment of cells with rHla induced loosening of the cells from each other and from the culture dish (Figure 3, traces rHla). These effects were much more pronounced in 16HBE14o- cells than in S9 cells, a finding that confirms the results of previous studies [18,19]. However, when cells had been pre-treated with sphingomyelinase (rHlb), these effects were completely suppressed, and the cultures maintained their confluent appearance over the entire experimental period despite the continued presence of rHla (Figure 3, traces rHlb + rHla).
The rHla-mediated changes in cell shape and paracellular gap formation are associated with, and most likely caused by, the disruption of the original architecture of the actin cytoskeleton in rHla-treated airway epithelial model cells [19]. We monitored the level of pSer3-phosphorylation of the actin depolymerizing factor cofilin, which has been shown to be downregulated upon rHla exposure of cells [19]. As shown in Figure 4 (traces with dots), we could confirm the previous findings that cofilin hypo-phosphorylation occurred under rHla-treatment in airway epithelial model cells, with a sustained effect in 16HBE14o- and a transient effect in S9 cells. However, when cells had been pre-treated with rHlb before the start of the rHla-experiment, there was no indication of any change in pSer3-phosphorylation of cofilin over the experimental period of 4 h (Figure 4, traces with diamonds). Parallel experiments using freshly isolated human nasal tissue gave similar results (Figure 4F). This indicates that pre-treatment of airway epithelial cells with sphingomyelinase (rHlb) prevents the activation of signaling pathways and any of the detrimental cell physiological effects usually induced by rHla exposure.
An interesting question was whether the attenuation of Hla-mediated cell damage by sphingomyelinase pre-treatment is a phenomenon limited to airway epithelial cells or a more general effect in eukaryotic cells. To test this, we used sheep erythrocyte agar plates and pre-treated the red blood cells in the agar matrix with sphingomyelinase (rHlb) before exposing them to rHla. Exposing cells to sphingomyelinase (rHlb) results in some changes in cell integrity (Figure S3B,C), known as "incomplete hemolysis" [9]. As shown in Figure S3C, these cells did not show the typical complete hemolysis that occurs in erythrocytes exposed to rHla (Figure S3D). We concluded that pre-treatment of sheep erythrocytes with sphingomyelinase (S. aureus rHlb) suppresses rHla-mediated hemolysis. This indicates that the removal of sphingomyelin from the outer leaflet of the plasma membrane may generally protect eukaryotic cells from the deleterious actions of S. aureus alpha-toxin. Such a conclusion is consistent with the recent observation that cells lacking sphingomyelin synthase 1 (SGMS1) are resistant against Hla virulence [25].
Expression and Purification of Recombinant Staphylococcus aureus Hla and Hlb
Recombinant alpha-toxin (rHla) and recombinant beta-toxin (rHlb) were prepared and purified as described previously [49]. The purity of rHla and rHlb was assessed by SDS-PAGE. The concentration of rHla routinely used was 2000 ng/mL (60 nmol/L), for reasons discussed previously [18]. Exposure of airway epithelial cells to such a concentration of rHla for 2 h results in cell death of less than 15% of all cells [17]. Lower concentrations of rHla may induce pore formation as well, but the physiological responses are much less pronounced and can hardly be detected. As we have previously shown, treatment of cells with 200 ng/mL rHla does not induce MAP kinase activation [49] or intracellular calcium accumulation [18].
Freshly Prepared Human Airway Tissue
Primary human airway tissue was isolated from the ethmoid sinus uncinate process in chronic rhinosinusitis patients with macroscopically normal mucosa undergoing surgery for the removal of nasal polyps. The sheet of epithelial and underlying connective tissue was lifted from the bone, rinsed several times in cell culture medium (see above), and transferred to the lab. Tissue was sliced in a vertical direction (0.5 mm thickness) to maintain the original tissue structure. The experiments were approved by the ethics committee of the university hospitals in Greifswald (BB95/10; September 1, 2010) and Münster (2017-120-f-S). Informed written consent was obtained from all donors.
Sample Preparation for Western Blotting
Immortalized human airway epithelial cells (16HBE14o- or S9) were incubated in the presence or absence of 5000 ng/mL recombinant S. aureus Hlb (sphingomyelinase) for 1 h. Afterwards, cells were treated with 2000 ng/mL recombinant Hla for 0, 1, 2, or 4 h or with phosphate buffered saline (PBS) (control). Primary human airway tissue was treated with 5000 ng/mL rHlb for 1 h and subsequently with 2000 ng/mL rHla for 2 h or with PBS (control). After incubation of cells or tissue in the presence or absence of rHla, the culture medium was carefully aspirated and the material was washed using 5 mL PBS. To each 10 cm plate of cultured cells, 400 µL lysis buffer (100 mmol/L KCl, 20 mmol/L NaCl, 2 mmol/L MgCl2, 0.96 mmol/L NaH2PO4, 0.84 mmol/L CaCl2, 1 mmol/L EGTA, 0.5% (v/v) Tween 20, 25 mmol/L HEPES (free acid), pH 7.2, containing 10 mmol/L each of aprotinin, leupeptin, and pepstatin, as well as 100 mmol/L PMSF and 0.33 mmol/L ortho-vanadate) was added. Cells were scraped off the cell culture plate using a cell scraper. The suspension of cytosolic extract and particulate matter was transferred to a 1.5 mL Eppendorf reaction tube and immediately placed on ice. The samples were homogenized on ice using a T8-Ultraturrax (IKA Labortechnik, Staufen, Germany) for 30 s each, combined with an equal volume of SDS sample buffer, mixed, and frozen at −80 °C [19].
Semi-Quantitative Western Blotting
Proteins were separated by SDS/PAGE (10% or 13% gels) in a minigel apparatus (BioRad, Munich, Germany) and transferred to a nitrocellulose membrane (HP40, Roth, Karlsruhe, Germany) [19]. Western blotting for the quantification of total or phosphorylated proteins was performed using (phospho-)specific antibodies, HRP-linked secondary antibodies (1:6000), and enhanced luminescence reagents (Biozym, Oldendorf, Germany). Signals were recorded using a Fusion FX7-SL gel imager (Vilbert Lourmat, Eberhardzell, Germany). Band signal intensities were assessed by densitometry using Phoretix 1D (Nonlinear Dynamics, Newcastle upon Tyne, UK). To correct for potential minor differences in exposure time, the mean density of all bands on each gel image was used to normalize the densities of the individual bands of the same gel. Signal intensities of the phosphorylated forms of proteins were normalized to the signals obtained using antibodies against the respective core proteins or to the β-actin band densities. Relative band densities were used to calculate means and standard deviations across experiments.
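A minimal sketch of this two-step normalization; the densitometry values below are hypothetical placeholders.

```python
import numpy as np

def normalize_gel(band, actin, gel_mean):
    """Semi-quantitative Western blot normalization: each band is first
    scaled by the mean density of all bands on its gel image (exposure
    correction), then expressed relative to the beta-actin signal of
    the same lane."""
    b = np.asarray(band, float) / gel_mean
    a = np.asarray(actin, float) / gel_mean
    return b / a

# Hypothetical densitometry values (arbitrary units) from one gel:
band  = [120.0, 95.0, 60.0, 30.0]     # e.g. rHla heptamer signals
actin = [200.0, 190.0, 210.0, 205.0]  # loading controls
gel_mean = np.mean(band + actin)      # mean of all bands on the image
print(normalize_gel(band, actin, gel_mean))
```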
Intracellular Calcium Concentrations
Changes in intracellular calcium concentration ([Ca2+]i) were monitored in airway epithelial cells using the calcium-sensitive indicator dye indo-1, as described previously [15,18]. The cell suspension was split into equal aliquots after the dye loading procedure. Cells in one portion were treated with 5000 ng/mL rHlb, and cells in the other portion with phosphate buffered saline (PBS), during the 30 min recovery period. Subsequently, cells were washed 3 times (2 min at 600× g, each) and finally resuspended in 1.5 mL HEPES-buffered saline. Portions of 300 µL cell suspension were transferred to each well of a 96-well plate (black flat-bottomed microplate, Biozym, Oldendorf, Germany) and treated with 2000 ng/mL rHla or PBS as control after a pre-run of 12 min. Calcium concentrations in the samples were determined using the Infinite M200Pro microplate reader (Tecan, Crailsheim, Germany) equipped with the software package I-control V1.11 (Tecan, Grödig, Austria, 2014) at a constant temperature (37 °C). The excitation wavelength was set to 338 nm with a slit width of 9 nm; the emission wavelength was set to 400 nm with a slit width of 20 nm. Fluorescence data were recorded at intervals of 12 s. All fluorescence intensity values during the individual measurements were normalized to the average fluorescence intensities during the initial 6 min measuring period and expressed in % of these pre-run intensities.
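The pre-run normalization can be summarized in a few lines; the trace below is synthetic and only illustrates the bookkeeping (30 samples at 12 s intervals define the 6 min baseline).

```python
import numpy as np

def normalize_fluorescence(trace, interval_s=12.0, prerun_s=360.0):
    """Express an indo-1 fluorescence trace in % of its own pre-run.

    trace    : fluorescence intensities recorded every `interval_s`
    prerun_s : initial period (6 min) whose mean defines 100 %
    """
    trace = np.asarray(trace, dtype=float)
    n_pre = int(prerun_s / interval_s)     # 30 samples at 12 s
    baseline = trace[:n_pre].mean()
    return 100.0 * trace / baseline

# Toy trace: flat baseline, then a slow rHla-like rise after ~10 min.
t = np.arange(0, 40 * 60, 12.0)
trace = 1000.0 + np.where(t > 600, 0.5 * (t - 600), 0.0)
print(normalize_fluorescence(trace)[:3], "...")
```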
Time Lapse Microscopy
Airway epithelial cells (16HBE14o-, S9) were cultured in 35 mm µ-cell culture plates (Ibidi, Planegg, Germany) in medium as described above until they reached confluence. The medium was renewed one day before the plate was transferred to the time-lapse microscope (Biostation II, Nikon Instruments, Düsseldorf, Germany). The microscope chamber was thermostatically controlled at 37 °C and gassed with 5% CO2 in air during the experiment. Images of cells were taken every 3 min over 24 h and combined into time-lapse movies of 30 s duration.
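The implied playback rate follows directly from these acquisition settings:

```python
# Frames captured every 3 min over 24 h, replayed as a 30 s movie:
frames = 24 * 60 // 3    # 480 images
fps = frames / 30.0      # -> 16 frames per second playback
print(frames, "frames at", fps, "fps")
```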
Data Presentation and Statistics
Data are presented as means and S.D. of n experiments on different cell/tissue preparations. Significant differences in the series of means were detected by ANOVA. Individual means were tested for significant differences from the appropriate controls using Student's t-test (used if variances were equal) or Welch's t-test (w, used if variances were not equal). Significant differences of means were assumed at p < 0.05.
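A sketch of the test-selection logic follows; note that the criterion used here to decide variance equality (Levene's test) is an assumption, since the text does not state how equality of variances was assessed.

```python
import numpy as np
from scipy import stats

def compare_means(a, b, alpha=0.05):
    """Student's t-test when variances look equal (Levene), else Welch's."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    equal_var = stats.levene(a, b).pvalue >= alpha
    t, p = stats.ttest_ind(a, b, equal_var=equal_var)
    label = "Student" if equal_var else "Welch (w)"
    return label, t, p

rng = np.random.default_rng(1)
ctrl = rng.normal(1.0, 0.1, 5)   # e.g. n = 5 control preparations
rhla = rng.normal(0.6, 0.3, 5)   # e.g. n = 5 rHla-treated preparations
print(compare_means(ctrl, rhla))
```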
"Medicine",
"Biology"
] |
Privacy-Aware Data Forensics of VRUs Using Machine Learning and Big Data Analytics
The present spread of big data has enabled the realization of AI and machine learning. With the rise of big data and machine learning, the idea of improving the accuracy and efficacy of AI applications is also gaining prominence. Machine learning solutions provide improved safeguards in hazardous traffic circumstances in the context of traffic applications. The existing architectures face various challenges, among which data privacy is the foremost challenge for vulnerable road users (VRUs). A key reason for failure in traffic control for pedestrians is flawed handling of user privacy. The user data are at risk and are prone to several privacy and security gaps. If an invader succeeds in infiltrating the setup, the exposed data can be malevolently influenced, contrived, and misrepresented for illegitimate purposes. In this study, an architecture is proposed based on machine learning to analyze and process big data efficiently in a secure environment. The proposed model considers the privacy of users during big data processing. The proposed architecture is a layered framework with a parallel and distributed module using machine learning on big data to achieve secure big data analytics. The proposed architecture designs a distinct unit for privacy management using a machine learning classifier. A stream processing unit is also integrated with the architecture to process the information. The proposed system is evaluated using real-time datasets from various sources and experimentally tested with reliable datasets, which discloses the effectiveness of the proposed architecture. The data ingestion results are also highlighted along with training and validation results.
Introduction
In today's technological globe, data are mounting rapidly, and humans rely heavily on data. Given the pace at which data grow, it is becoming impracticable to store the data on any single server. Today the planet holds an enormous quantity of data that continues to grow exponentially at very high speed and is insecure [1]. Moreover, the entire globe has gone online with the invention of the web, and every single action we take leaves a digital trace that is prone to vulnerability [2]. With the rise of big data and machine learning, the notion of improving the accuracy and efficacy of AI projects is also gaining importance and is widely recognized [3]. Among the factors driving this evolution of data are advances in technology, social media, and the Internet of Things (IoT). IoT is one of the latest concepts of the current age and is widely applicable in traffic controlling and monitoring applications. The future of this globe is a secure IoT that will transform today's objects into intelligent and smart objects [4]. Smart systems include IoT devices, such as sensors and actuators, processing, input connectivity, and people. Sensors and actuators act as the backbone of any emerging system. The interactions among all these components create new types of smart applications and services. With the rise of IoT devices, the idea of edge computing is also gaining prominence and is broadly recognized. Machine learning solutions provide improved safety in hazardous traffic circumstances in the context of traffic applications [5][6][7].
As several new-fangled and ground-breaking technologies promise benefits through enhanced optimization of community traffic systems, "smart" traffic system development selects the best of these techniques and services to resolve traffic's most imperative challenges [8][9][10]. Hence, the smart traffic trend is on the rise. Many aspects of urban life, from transportation management to building blueprints and community safety, are considered ripe for reinvention. Besides, some cutting-edge and important technologies such as cloud computing, robotics, artificial intelligence, big data, and particularly machine learning seem progressively more within reach [3,11]. The overall big data analytics process goes through several stages to serve this purpose [12].
These stages include identification of the problem, designing the data requirements, preprocessing, data loading, performing processing and analytics, and data visualization. Firstly, all the problems that are to be addressed using big data analytics need to be identified accurately. Then, all the data requirements are designed to provide a logical solution to be executed in the later stages. Big data are usually chaotic, messy, incoherent, incomplete, and inconsistent [13]. Therefore, proper preprocessing is required before processing the big data. The next phase is data loading, which precedes the data processing and analytics. The smart traffic environment is based on IoT devices and objects generating gigantic data (big data), which require efficient aggregation, processing, and analysis to achieve optimal results for decision-making [14,15]. An efficient, exhaustive analysis of such data is not possible through traditional data analytics techniques. Although some big data analytics methods beyond the traditional ones have been proposed in the past, there is no all-inclusive, common, and effective solution to aggregate and process the big data produced in an IoT-based smart traffic environment [16][17][18]. The existing solutions are based on the traditional or classical Hadoop framework. Moreover, the data ingestion or data loading performance of big data files into Hadoop is overlooked in the existing solutions, although it is one of the major factors affecting the overall processing [19,20]. Big data analytics involves smart management of the data to provide real-time monitoring of the VRU population, which has drastically expanded all over the world. Solutions using IoT big data have been proposed for VRU information management along with traffic management. This research prefers the customization of the YARN parallel and distributed framework. However, to realize reliable smart traffic, many challenges must be addressed, among which privacy is one of the most critical.
Literature Review
A malevolent hit on the services of users can be extremely costly in the context of the trustworthiness of edge computing [21,22]. Hence, this article presents a secure architecture for data supervision to deal with data security challenges in smart traffic applications. The work related to the proposed architecture, concerning data analytics and machine learning for smart traffic data management, is very significant. The key problems highlighted in the literature are the use of a traditional MapReduce cluster, inadequate data loading, an intangible structure, and reliance on a single specific dataset [10,23]. A scheme was discussed in detail in the context of V2X connections [24]. The big data analytics approaches are also taken into consideration, including tiers that are accountable for the various steps and activities of the data analytics. Though it is a complete four-tier architecture, with tiers from data collection to data analysis and usage, it causes processing delay [25,26], and a classical map-reduce framework is used that slows down the performance. Moreover, in this architecture the aggregation of results is the focus, while the efficiency of data loading before analysis is overlooked.
An approach has been proposed for reducing the conflict between VRUs and automated vehicles [27]. This proposal focuses only on automated vehicles and does not support big data processing in general. The key issue is the data ingestion performance of this model: it takes a long time to insert the big data into the system for processing. Some researchers proposed a model based on data analysis that promotes the notion of smart traffic and utilizes big data processing, but overlooks the data loading efficiency. A framework has been presented to overcome the VRUs' issues, but these researchers also overlook the data loading and ingestion into a distributed environment. There are further models proposed to deal with the same problem of big data analysis in the smart environment [14,28], but these solutions rely on the conventional cluster resource management scheme and suffer from insufficient data loading to the Hadoop server. Moreover, an architecture has been proposed to investigate the data in a transport environment in a more accessible and efficient way [29]. However, it causes an additional delay in processing, the scheme is only tested on transportation datasets, and the data loading efficiency to the Hadoop server is again overlooked. The additional delay affects the overall performance of the big data analytics.
In contrast, a scheme has been proposed using a parallel processing approach.
Though a YARN-based solution is offered, the data loading efficiency is still overlooked in this architecture. The standard practice of traditional data analytics techniques is to analyze limited data only, which opens the door to errors and biases in the big data scenario. Another challenge that needs to be addressed is inefficient data loading into the traditional cluster management framework, e.g., Hadoop. The traditional data loading challenges are that it is time-consuming, requires more storage, involves difficult commands, and offers no append and no partial ingestion. Similarly, the challenges of Hadoop processing based on traditional cluster management include scheduling issues, inefficient load balancing, scalability issues, NameNode availability, and the unification of responsibilities. The objective of this research is to propose a framework based on edge intelligence to process enormous data efficiently and overcome the data loading and processing issues. IoT gathers data and directs the driver to follow the free lanes. A specific proposal is designed to realize the map-reduce paradigm integrated with Apache Spark for real-time comprehension and processing of big data. Spark offers fast computation and allows reusability. Effective data ingestion into the distributed storage mechanism is, however, missing from the loading and storage process.
The current work has deficiencies in big data storage and processing for IoT-enabled intelligent transportation. Furthermore, model parallelism is also missing for effective extrapolation and decision-making. The proposed research therefore puts forward a framework to overcome the existing challenges. Trust and privacy in smart traffic applications, particularly considering the VRUs, is a subjective experience, which makes recognizing attacks complex. Insecure VRUs in smart traffic applications could cause a breakdown in the transportation monitoring and controlling services. Therefore, to enhance security, we first need to evaluate the level of insecurity in an application.
This study proposes a secure architecture based on machine learning in the smart traffic domain that evaluates the privacy level of the VRUs.
Proposed Architecture
The proposed machine-learning-based architecture connects the smart community departments (e.g., the traffic monitoring and control department). The data sources comprise traffic monitoring and controlling big data. The workflow of the proposed parallel and distributed scheme is depicted in Figure 1. Data gathering is done by the respective units and collected from various traffic control sources (e.g., sensors and cameras). To devise an effective parallel and distributed architecture, the data must be scrutinized before computation. The data are generated by different devices such as environmental sensors, security monitoring sensors, traffic cameras, and transportation monitoring sensors. The data are collected by the various departments, such as the traffic controlling authorities; this process is known as secure data collection. The data are classified using the machine learning approach and then given to the proposed parallel and distributed architecture for processing by the proposed modules. The degree of parallelism is balanced using a fixed block size for the chunks. The default block size of the loading utility is time-consuming and yields less parallelism; the default size is therefore optimized and modified to improve the data loading efficiency. This data collection is a part of a distributed system. It involves overall data management, which includes aggregation, collection, and storage. The data are also preprocessed before being injected into the proposed scheme, to remove noise and anomalies and speed up the processing activities. Afterward, the data are divided into different chunks for parallel processing at the edge level. The distributed storage mechanism is also taken into consideration to assist the parallel processing. The YARN parallel and distributed platform for big data analytics is preferred because the cluster management is dealt with separately by the resource manager, which is a part of YARN. Predetermined algorithms are applied while processing the data in the cluster. The processed results are sent for decision-making to the concerned smart society service providers and are finally forwarded to the users. Following filtration, the Hadoop processing unit is used to process the data, which are stored in the distributed storage mechanism. Lastly, the analyzed data are used for community planning. The data are collected from the departments, and the decisions are sent back to the community development departments. The objective is to realize a smart traffic scheme that performs the processing while keeping the data private. The said community departments are the data sources for the proposed system and a mediator between the system and the user. Architecturally, the anticipated solution consists of three modules: data security, organization, and processing, as shown in Figure 2.
Data Security Layer.
The proposed structural design has a security layer that keeps the VRUs' data secure from attacks. It is part of the smart traffic architecture and provides resilience against attacks. The security layer's components are the supplier manager (SM), the user manager (UM), and the supervisor. The SM and UM attend to the supplier and the user, respectively, while the supervisor applies the machine learning algorithm. A CNN deep learning technique is integrated to classify data as secure or insecure. The SM is accountable for maintaining the profile of every supplier, and the UM is accountable for maintaining the profiles of users. The supervisor is trained using the classifier. The security level is predicted using five classes: highly secure (HS), fine secure (FS), moderately secure (MS), highly insecure (HIS), and partly insecure (PIS). Equation (1) is used to calculate the various levels of security. The purpose of the different security levels is to give a particular score to the candidate user for the future. The main purpose of the multiclass classification is to build a watch list of future risks: it helps identify intruders with low scores, who can then be analyzed further in future investigations.
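The network architecture of the supervisor is not specified in the text; the following Keras sketch is only a plausible shape for the five-class classifier. The input shape reuses the 365 x 925 training matrices mentioned later in the paper (interpreting them as single-channel inputs is an assumption), and every hyperparameter is illustrative.

```python
# Hypothetical sketch of the supervisor's five-class security classifier.
from tensorflow import keras
from tensorflow.keras import layers

CLASSES = ["HS", "FS", "MS", "PIS", "HIS"]   # the five security levels

model = keras.Sequential([
    layers.Input(shape=(365, 925, 1)),        # assumed 365 x 925 input matrices
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# X: interaction matrices, y: integer class labels in [0, 5)
# model.fit(X, y, epochs=200, validation_split=0.2)
```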
Big Data Organization.
The big data organization system involves overall data management, including aggregation, collection, and storage. The data are distributed across various nodes for computation to offload the central server or cloud. Intelligent applications are supported by acquiring data via the Internet from various local devices. Several devices, including sensors, cameras, and object-mounted devices, record information about the environment in different domains.
These data are later analyzed to obtain insights and produce intelligent decisions. This is the first layer; it is accountable for assembling the data from the different community departments that are used to manage the smart community development services. A practical community does not only hold large data but also includes versatile and wide-ranging processing areas. The smart community implementation depends on all forms of data processing due to their heterogeneous nature. Data collection transforms signals measured in practical circumstances and converts the outcomes into digital form for processing; this analog-to-digital conversion is carried out by a dedicated system. The smart traffic centers pull the data from various sensors in the community to gather real-time data. The data organization layer further performs data aggregation, where the data are grouped based on the identification of the connected devices. This aggregation is implemented because the data are very large and must be assembled for efficient processing; aggregation improves modularity and processing.
Big Data Processing.
This unit is the main processing part. It first preprocesses the raw data, handling irrational data combinations, missing values, and out-of-range values before processing; if the data are not inspected for such problems, decision-making may be misled. A transformation step is also performed to scale the data to a specific range. The data are then taken up by the parallel processing unit, which is the backbone of the proposed architecture. The proposed architecture builds on the MapReduce parallel computing model, which was introduced to realize big data analytics. This programming paradigm is composed of Map and Reduce functions. It is a useful model that handles huge datasets and processes them in parallel, executes processes in a distributed manner, and offers high availability. The underlying system also manages machine failures, performance issues, and efficient communication. Task distribution in the cluster is carried out using the YARN distributed cluster management framework.
YARN is equipped with dynamic programming for task distribution and cluster management.
Previous platforms, such as the plain MapReduce paradigm, are responsible only for processing. YARN is preferred because cluster management is handled separately by its resource manager. The fair scheduling algorithm is integrated with YARN. Besides, interleaving is possible between the map and reduce phases, so the reduce phase may begin before the map phase finishes.
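To make the Map and Reduce functions described above concrete, the following self-contained toy sketch mimics the paradigm in plain Python; the records and keys are hypothetical, and in Hadoop the grouping (shuffle) step is performed by the framework rather than by user code.

```python
# Toy illustration of the MapReduce paradigm: count records per road segment.
from collections import defaultdict
from functools import reduce

records = [
    {"segment": "A1", "speed": 42},
    {"segment": "A1", "speed": 57},
    {"segment": "B7", "speed": 33},
]

# Map phase: emit (key, value) pairs from each input record.
mapped = [(r["segment"], 1) for r in records]

# Shuffle phase: group values by key (handled by the framework in Hadoop).
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce phase: aggregate the values for each key.
counts = {key: reduce(lambda a, b: a + b, values)
          for key, values in groups.items()}
print(counts)  # {'A1': 2, 'B7': 1}
```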
Results and Discussion
The proposed scheme is implemented using the Hadoop version 3.0 parallel and distributed platform, equipped with the Apache Spark module. Reliable datasets are utilized. The pyspark library is used with Python 3.8. Similarly, the resilient agent evaluation is carried out in a detailed setting with a machine learning classification module; the machine learning library is likewise implemented in Python 3.8. A comparative analysis of the proposed design against current proposals is provided. The experimental results and comparison disclose the effectiveness of the proposed design, and this section discusses those results. Results are produced using various reliable datasets to assess the proposed parallel and distributed architecture with its premeditated algorithms. We performed noise and anomaly removal on the data on top of the proposed architecture; anomalies are removed using min-max normalization and the Kalman algorithm. Data ingestion is achieved using a map-only algorithm. The traditional YARN cluster management framework is customized with improved capacity and fair scheduling algorithms, and we applied a dynamic algorithm to set the YARN framework's parameters at run time. Processing is performed using MapReduce algorithms, which we also optimized for edge computing so they can be utilized at every edge; thus, notable efficiency is achieved in processing time. The proposed architecture is implemented using the Hadoop parallel and distributed framework along with the optimized algorithms. These datasets are preferred because they have been used in the literature. We deliberately executed almost the same queries to compare the processing time and throughput of the proposed edge-enabled IoT architecture using customized MapReduce and YARN for parallel processing.
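As an illustration of the normalization step mentioned above, the following sketch applies scikit-learn's MinMaxScaler; the file and column names are placeholders, since the paper does not list the features.

```python
# Illustrative preprocessing step: min-max normalization to [0, 1].
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("traffic_readings.csv")             # illustrative file
features = ["speed", "vehicle_count", "occupancy"]   # assumed feature set

scaler = MinMaxScaler()                              # scales each column to [0, 1]
df[features] = scaler.fit_transform(df[features])
```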
Data Security Results.
The results and experiments for the security layer include the required training on the dataset with an ML classifier. The model is trained using secure and insecure interactions with the proposed architecture, and the assessment of the security layer is performed in a specific setting. Initially, the model was trained on 365 * 925 matrices. The training process of the Naïve Bayes classifier is shown in Figure 3.
The proposed resilient agent evaluation is also performed in a specific environment where the proposed model is trained. To assess the proficiency of the proposed model, the confusion matrix is exploited, as depicted in Table 1. To measure the effectiveness of the classifier, the confusion matrix is computed for two classes (secure and insecure), as shown in Table 2: a value is considered secure if it is greater than 0.5 and insecure otherwise. The performance measures are applied to the ML technique utilized for the resilient agent. The accuracy of the technique is expressed as percentages in Table 2, where the specific percentage of each confusion matrix entry is also highlighted. Figure 4 confirms the enhanced accuracy of validation and training.
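For concreteness, the secure/insecure decision at the 0.5 threshold and the associated confusion matrix could be computed as follows; the scores and labels are synthetic stand-ins, not the paper's data.

```python
# Illustrative evaluation of the secure/insecure decision at the 0.5 cut-off.
import numpy as np
from sklearn.metrics import confusion_matrix, accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 1])            # 1 = secure, 0 = insecure
scores = np.array([0.9, 0.3, 0.6, 0.4, 0.2, 0.8])
y_pred = (scores > 0.5).astype(int)               # the paper's 0.5 threshold

print(confusion_matrix(y_true, y_pred))           # rows: true, cols: predicted
print(f"accuracy: {accuracy_score(y_true, y_pred):.0%}")
```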
Training and Validation Results.
The enhanced accuracy in training and validation is a result of the enlarged number of epochs (200 epochs). Likewise, Figure 5 shows the proposed model's validation and training loss, indicating minimal loss; the reduction in loss is also a result of the enlarged number of epochs. Data loading was measured using the specific utility: it has been experimentally shown that loading the dataset into Hadoop takes almost no time when the dataset is small. The overall system efficiency, including all the data loading parameter modifications, is shown in Figure 6. Similarly, Figure 7 shows the threshold for all the data loading parameter modifications under the proposed system; the threshold is the alarm value that marks the point where the existing and proposed schemes begin to differ. The proposed scheme is manual in the context of data ingestion and automated in the context of classification and processing.
Conclusion
A smart traffic application is characterized by the extensive expansion of IoT-connected devices, particularly with the rise of big data and machine learning. Machine learning solutions provide good results in terms of efficiency and model accuracy. However, it is challenging to protect user privacy in smart traffic management and surveillance, because these applications produce an enormous amount of big data that must be processed and analyzed efficiently. In this study, a machine-learning-based architecture is proposed to process big data efficiently in a secure environment that respects user privacy. The proposed architecture is a layered framework with a parallel and distributed module that applies machine learning to big data to achieve secure big data analytics. A dedicated privacy layer is proposed that classifies dishonest entities using machine learning.
The proposed system is realized using real-time datasets from various sources and experimentally tested with reliable datasets, which disclose the effectiveness of the proposed architecture. The data ingestion results are highlighted along with the training and validation results. The proposed design optimizes an existing parallel and distributed framework to achieve efficient processing, whereas current proposals lack efficient parallel data ingestion and an efficient mechanism for reducing communication overhead. Therefore, this paper explores the security challenges using machine learning and proposes a separate secure and resilient module to overcome users' privacy issues. The proposed architecture is equipped with a resilient agent using an ML classifier, and a stream processing unit is also integrated with the architecture to process the information produced by edge devices.
Data Availability
The data used to support the findings of the study are included within the article.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 5,314.4 | 2021-11-28T00:00:00.000 | [
"Computer Science",
"Engineering",
"Law"
] |
Labor Adjustment Costs Across Sectors and Regions
This paper estimates the mobility costs of workers across sectors and regions in a large sample of developing countries. The paper develops a new methodology that uses cross-sectional data only. This is motivated by the fact that panel data typically are not available for most developing countries. The results suggest that, on average, sector mobility costs are higher than regional mobility costs. The costs of moving across sectors and regions are higher than the costs of moving across only sectors or only regions. In poorer countries, workers face higher mobility costs. The paper provides evidence suggesting that mobility costs, particularly across sectors, are partially driven by information asymmetries, and that access to the Internet can mitigate these costs.
Introduction
We estimate labor mobility costs across sectors and regions for a large number of developing countries. Given that labor surveys in developing countries rarely provide longitudinal data, the methodology we developed only requires cross-sectional data. We rely on the assumption, borrowed from the urban literature, that workers' intertemporal utility net of mobility cost is equalized across sectors and regions. We start by estimating for each worker in our sample their hypothetical wage in other sectors and regions given their observed characteristics. We then compare their level of intertemporal utility in each location or sector. Differences in intertemporal utility capture mobility costs, which are identified using the time horizon faced by each worker in their intertemporal maximization, i.e., the number of years until retirement. After correcting for self-selection of workers into regions and sectors, we find that the sector mobility costs are larger than regional mobility costs and represent about 1.4 times the average annual wage. The cost of moving across sectors and regions simultaneously is larger and represents almost 1.8 times the average annual real wage. We also find that workers in poorer countries face higher mobility costs. In addition, we provide evidence suggesting that information costs partly explain labor mobility costs, which can be mitigated by better access to the Internet.
Estimating labor mobility costs across sectors and regions is important for several reasons.
First, moving sector and location is often a joint decision in less developed countries. As economies get richer, the share of jobs in agriculture shrinks and workers migrate to urban areas to find job opportunities in other sectors. Bustos et al. (2016a) provide evidence that labor-saving technological change in agriculture can foster industrialization and labor reallocation towards manufacturing. These gains are also extended to other sectors and regions by capital reallocation through bank branch networks (Bustos et al., 2016b). However, if the workforce is not sufficiently mobile across sectors and regions, the gains from labor-saving technological change are unlikely to be realized to their fullest extent, which may slow the speed of structural transformation. Gollin et al. (2014) suggest that even after controlling for sector differences in hours worked and human capital per worker, there is still a large productivity gap in agriculture across developing countries associated with the misallocation of factors across sectors. Their findings are supported by Hsieh and Klenow (2009), who suggest that resource misallocation also exists in manufacturing. Second, for the gains from trade to materialize, workers and capital must be able to move freely within countries into sectors in which the country has a comparative advantage. Significant gains from trade integration accrue from the reallocation of labor from declining sectors into booming sectors. While capital can move relatively freely within a country, the same cannot be said about workers' mobility. Workers are endowed with sector-specific skills that are not easily transferable to another sector, and new skills are costly to acquire. Furthermore, if changing sector also involves moving to another location, workers incur an additional moving cost. This cost includes the cost of physically moving to another location (finding a new house), as well as the loss of the social environment people develop over time, which has a strong geographic component. If workers are stuck in some sector or region because of high mobility costs, gains from trade (or other productivity shocks) are likely to be small or even negative for workers.
Third, understanding the forces behind each of these costs (e.g., information costs, retraining costs, moving costs) can further help policy makers pinpoint the strategies needed to promote more labor mobility and, ultimately, a more efficient and equal wage distribution. Whether the lack of observed relocation of workers is driven by the difficulty of moving across sectors or across regions is highly relevant for policy makers who have to decide whether to allocate resources to the reduction of sector or regional mobility costs. Most of the existing literature has typically focused on one of these two adjustment costs separately. Our aim is to simultaneously identify the costs incurred by workers when they move to another region and when they become employed in a different sector. Ignoring the various dimensions of mobility costs can lead to an overestimation of the true cost of moving to another sector and/or to another location.
We face several challenges to estimate region and sector mobility costs in developing countries.
First, while our methodology does not require longitudinal data, it does require cross-sectional data that are comparable across countries. The World Bank has recently put together a series of labor and household surveys which are harmonized along some key dimensions, such as sector, location, and other workers' characteristics. The richness of the International Income Distribution Data set (I2D2) allows us to account for workers' heterogeneity, which is an important component of our analysis. The second challenge has to do with the bias that workers' self-selection into regions or sectors may introduce in our analysis (Roy, 1951). If more skilled workers select into one sector or region, the comparison of wages across sectors and regions does not provide information on mobility costs, but on different returns to skills. To provide meaningful comparisons of predicted wages, we estimate mobility costs by controlling for differences in observed heterogeneity, such as differences in age, education, gender, and occupation across sectors and regions. In addition, we deal with the self-selection of workers into regions and sectors based on unobservable characteristics by applying a correction method suggested by Dahl (2002).
The third challenge we face is that workers base their migration decision on real wages, not nominal wages. Information on the cost of living at the regional level in developing countries is unfortunately not available. We bypass this issue by using the average wage in each region to proxy for the local cost of living and compute real wages. This has the additional advantage, as will become clearer in the empirical methodology section, that our estimates of adjustment costs can be interpreted in terms of average real wages. (Footnotes: Based on a quasi-experiment using detailed information on rural road construction in India, Asher and Novosad (2016) suggest that poor rural transportation infrastructure is a major constraint on the sectoral allocation of labor in low-income countries. See Montenegro and Hirn (2009) for more details on the I2D2 database.)
The fourth challenge has to do with the fact that the methodology we propose relies on cross-sectional data only, while the question at stake is fundamentally dynamic. To account for the fact that younger workers are more "footloose" than older ones, we use the difference between observed age and (expected) retirement age as a measure of workers' time horizon. This introduces worker-level variation that will prove to be important in the estimation of the mobility cost and allows us to introduce dynamic considerations when estimating mobility costs.
The literature has recently produced various estimates of mobility costs (Hollweg et al., 2014). Kennan and Walker (2011) develop a model of individual migration, where expected income is the main force influencing migration. They test their model using detailed US data on individual workers. They find that interstate migration is strongly influenced by the prospect of higher income in other states, and estimate an elasticity of the migration decision with respect to wages of 0.5.
One important difference with our paper is that they do not consider sector mobility costs and exclusively focus on region mobility costs.
Using the same kind of theoretical tools but in a context of trade shocks, Artuç et al. (2010) propose a structural estimation of the reallocation cost of workers across sectors. Using panel data where workers' movements can be observed over time, they estimate the structural parameters of their model on US data and find an average moving cost of about 13 times the average worker's annual wage. In their model, workers are homogeneous, which may explain the large moving cost they obtain. Dix-Carneiro (2014) develops a model where workers' heterogeneity is taken into account. Using panel data for Brazilian workers, he estimates an average moving cost of about 2 times the average annual worker's wage. Taking into account heterogeneity across workers appears to greatly affect the magnitude of the moving cost. Artuç et al. (2015) estimate sector mobility costs in a large number of countries by adapting the methodology in Artuç et al. (2010) to repeated cross-sectional data on sectoral employment in manufacturing in each country. They find sector mobility costs that are on average 3 times annual wages. One important difference between all these papers and what we do is that we simultaneously allow for regional and sector mobility costs, whereas the previous papers have exclusively focused on only one of these two components. We find that simultaneously accounting for both matters. Moreover, we explore some potential channels that may explain labor mobility costs, including costs associated with information asymmetries, and investigate how access to the Internet can reduce mobility costs.
The rest of the paper is organized as follows. In section 2 we present the methodology to estimate sectoral and regional adjustment costs using cross-sectional data. Section 3 describes the I2D2 database and provides some descriptive statistics regarding wage dispersion across region, sectors and age groups. Section 4 presents the estimates of regional and sector mobility costs, as well as a description of their correlation with variables such as income per capita, wage inequality and the geographic and sectoral concentration of employment. Section 5 explores the extent to which the estimates of mobility costs are driven by information costs and analyzes how access to the Internet is associated with reduction in mobility costs. Section 6 concludes.
Methodology
Consider a worker of type l (a type being given by characteristics such as age, gender, education, occupation, etc.) working and living in sector-region k (we consider jointly the sector in which workers are employed and the place where they live; in the empirical analysis we distinguish between industry and region, but making this distinction right now would uselessly flood the text with subscripts and indices). Her utility U_{l,k} is given by equation (1), where w_{l,k} is the log of the real wage received by the worker and γ_l represents worker characteristics (orthogonal to wages) that are common across the different regions/industries. We assume that (i) workers are rational and maximize their intertemporal utility, and (ii) what we observe in the data is an equilibrium. Workers of type l will decide to move from sector-region k to sector-region k' until their intertemporal utility V_{l,k} is equal to the intertemporal utility in sector-region k' (V_{l,k'}) net of mobility costs (C_{k,k'}). Workers of different ages maximize over different time horizons: we assume they maximize over (T_l − t_0) years, where T_l is the retirement age and t_0 their current age.
This is an important feature of our model, since our empirical strategy relies strongly on workers' time horizon until retirement (hence on their age) for identification. This implies that for all V_{l,k'} > V_{l,k} we should observe equation (2) in equilibrium, where β is the intertemporal discount factor. Substituting (1) into (2) and solving for the difference in gross intertemporal utilities yields equation (3),
where C_{k,k'} is the cost (in utility terms) of moving from sector-region k to sector-region k', and ∆w_{l,k,k'} = w_{l,k'} − w_{l,k} is the difference between the hypothetical real wage of worker l in sector-region k' and her current (observed) real wage w_{l,k}. We use the equilibrium condition (3) to estimate the moving costs. With β lower than one, we obtain equation (4), and solving equation (2) for the wage difference yields equation (5). Following Artuç et al. (2010) we assume that mobility costs are symmetric and identical across sector-regions (i.e., C_{k,k'} = C). The parameter β is not observed, and we follow the literature by using a discount factor equal to 0.95, testing the robustness of the results with values in the [0.9; 0.99] range. This discount factor corresponds to the usual annual discount factor, which implies that the mobility costs C we estimate can be interpreted as a share of annual wages. To control for any other unobserved heterogeneity in sector-regions k and k', we add fixed effects to obtain our estimating equation (5). The sets of dummies α_k and α_{k'} can be thought of as capturing local amenities in each sector-region, or any other factor that may explain average differences in real wages across sector-regions. This is somewhat analogous to the idiosyncratic shocks ξ_j that workers receive in industry j in Artuç et al. (2010), or to the unexplained part of the utility flow viewed as preference shocks, or shocks to the cost of moving, in Kennan and Walker (2011). Retrieving the moving cost C_{k,k'} from equation (5) can be done via simple OLS. Because equation (2) is an equilibrium condition if and only if V_{l,k'} > V_{l,k}, which from (3) necessarily implies w_{l,k'} > w_{l,k}, we impose this condition in the estimation.
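The displayed equations (1)-(5) are referenced in the text but not reproduced above. The LaTeX block below sketches the algebra they plausibly contain, reconstructed from the surrounding definitions; it should be read as a sketch under those assumptions rather than as the authors' exact formulas.

```latex
% Plausible reconstruction of equations (1)-(5) from the surrounding text.
\begin{align}
U_{l,k} &= w_{l,k} + \gamma_l \tag{1}\\
\sum_{t=0}^{T_l - t_0}\beta^t \, U_{l,k'} - C_{k,k'}
  &= \sum_{t=0}^{T_l - t_0}\beta^t \, U_{l,k} \tag{2}\\
C_{k,k'} &= \sum_{t=0}^{T_l - t_0}\beta^t \,\Delta w_{l,k,k'},
  \qquad \Delta w_{l,k,k'} \equiv w_{l,k'} - w_{l,k} \tag{3}\\
C_{k,k'} &= \frac{1-\beta^{\,T_l - t_0 + 1}}{1-\beta}\,\Delta w_{l,k,k'} \tag{4}\\
\Delta w_{l,k,k'} &= \frac{1-\beta}{1-\beta^{\,T_l - t_0 + 1}}\, C
  + \alpha_k + \alpha_{k'} + \varepsilon_{l,k,k'} \tag{5}
\end{align}
```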
We are interested in estimating three mobility costs separately. First, we want to estimate the cost for workers of moving across sectors while remaining in the same region; we use the subscript j to denote this mobility cost (C_{j,j'}). Second, we want to estimate the cost for workers of moving across regions within a country while remaining employed in the same sector, and use the subscript r to denote this mobility cost (C_{r,r'}). Finally, we are interested in estimating the cost for workers of changing sector and moving to another region; we use the subscript jr to label this cost (C_{jr,j'r'}). Note that these three costs are orthogonal to each other, and that the "general" mobility cost (C_{k,k'}) is simply their union. Equation (6a) illustrates this strategy. We also want to compare these mobility costs with estimates in which we ignore either the sector or the region dimension. To do so, we estimate equations (6b) and (6c), where we consider the cost of moving across sectors (C2_{j,j'}) ignoring whether workers move to another location, and the cost of moving across regions (C2_{r,r'}) ignoring that workers may also be moving across sectors.
To estimate equations (6a)-(6c) we need an estimate of the left-hand side ∆w_{l,k,k'}. While w_{l,k} is observed, w_{l,k'} is not and has to be estimated. To properly estimate wages we need to account for both observable and unobservable characteristics. The I2D2 provides us with many individual characteristics we can directly use to predict wages. Unobserved characteristics are an issue if they are a determinant of wages, for instance via the self-selection of workers into specific sector-regions. We tackle this issue by estimating the probability of a worker being observed in a specific sector-region, using the methodology proposed by Dahl (2002). We then use this predicted probability as an additional determinant of wages. The next section describes the method in detail.
Expected wages and selection bias
Expected wages are a key variable in our analysis. We assume that workers are heterogeneous based on observable characteristics, including age, skills, gender, and occupation. To correct for self-selection based on other, unobserved characteristics when estimating the expected wage ŵ_{l,k}, we adapt the methodology proposed by Dahl (2002), also used by Bourguignon et al. (2007) and Bertoli et al. (2013). A selection bias may arise because workers can choose among several industry-regions to work in based on unobservable characteristics that may be correlated with their expected wages across different industry-regions. Thus, differences in wages across industry-regions might partly capture differences in workers' ability unrelated to age, gender, schooling, or occupation. Note that in our cross-sectional data set we do not have information on the share of workers who migrate across sector-regions to estimate the selection probability of migration, as in Dahl (2002). Instead, we adapt a Roy model of occupational choice where workers choose from many alternative jobs across sectors and regions, taking into consideration the relative concentration of jobs across sectors and regions. We proceed in two steps. First, we estimate Dahl's correction function based on the probabilities for worker l to be in a different sector-region k, using a multinomial logit model following equation (7),
where x_l is a matrix made of the location quotient Z_{l,k} and a set of individual observable characteristics: age, age^2, occupation dummies, skill dummies, and a gender dummy. We estimate this equation using a multinomial logit and construct the predicted probability of observing worker l across the different sector-regions k_n.
The location quotient Z_{l,k} is defined as Z_{l,k} = (#workers_{l,k} / #workers_k) / (#workers_l / #workers), where #workers_{l,k} is the number of workers of age l in sector-region k, #workers_k is the total number of workers in sector-region k, #workers_l is the number of workers of age l in the country, and #workers is the total number of workers in the country.
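The first stage could be implemented along the following lines; this is a hedged sketch in which the file and column names are placeholders for the I2D2 variables, not the authors' code.

```python
# Hypothetical sketch of the first-stage selection model: compute the
# location quotient and fit a multinomial logit over sector-regions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("i2d2_country_year.csv")   # worker-level rows (assumed file)

# Location quotient Z_{l,k}: concentration of age group l in sector-region k.
n = len(df)
n_lk = df.groupby(["age_group", "sector_region"]).size()
n_k = df.groupby("sector_region").size()
n_l = df.groupby("age_group").size()
df["Z"] = [
    (n_lk[(l, k)] / n_k[k]) / (n_l[l] / n)
    for l, k in zip(df["age_group"], df["sector_region"])
]

X = pd.get_dummies(
    df[["Z", "age", "age_sq", "occup", "skill", "gender"]],
    columns=["occup", "skill", "gender"],
)
# Multinomial logit over the K sector-region alternatives.
mlogit = LogisticRegression(multi_class="multinomial", max_iter=1000)
mlogit.fit(X, df["sector_region"])
probs = mlogit.predict_proba(X)   # predicted P(worker l observed in k_n)
```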
The predicted probability for each sector-region, k̂, and its quadratic term k̂^2 are then used as the correction function in the second stage to estimate expected wages ŵ_{l,k}. We do this by adding k̂ and k̂^2 as additional explanatory variables in the following Mincer equation: w_{l,k} = α + β_1 age_{l,k} + β_2 age^2_{l,k} + β_3 occup_{l,k} + β_4 skill_{l,k} + β_5 gender_{l,k} + β_6 age_{l,k}*K + β_7 age^2_{l,k}*K + β_8 occup_{l,k}*K + β_9 skill_{l,k}*K + ..., where w_{l,k} refers to the real wage of worker l in sector-region k, α is a constant, age is the age of the worker, age^2 is age squared, occup is a dummy variable identifying managerial occupations, skill is a dummy variable differentiating workers by education attainment (high school or above), K is a set of fixed effects by sector-region based on the observed location of worker l, k̂ is the estimated probability of the worker being in sector-region k, and ε is the error term. (Our specification leads to similar results as using the selmlog routine developed by Bourguignon et al. (2007); yet, instead of running a separate second stage for each industry-region, we capture the differences in the returns to workers' observable characteristics (e.g., age, occupation, skills, and gender) through interaction terms using weights at the national level.) To capture the heterogeneity
of the marginal returns to workers' characteristics (e.g., experience, skills, occupation, and gender) across sector-regions, we interact age, age^2, occup, skill, and gender with the sector-region dummies K.
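A minimal sketch of this second-stage regression, assuming the first-stage probabilities are stored in a column khat (all names are illustrative):

```python
# Hypothetical sketch of the second-stage Mincer regression with the Dahl
# correction terms (khat, khat^2) and sector-region interactions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("i2d2_with_probs.csv")   # assumed to hold khat from stage 1
df["khat_sq"] = df["khat"] ** 2
df["age_sq"] = df["age"] ** 2

# C(sector_region) adds the fixed effects K; the '*' expansion adds the
# interactions of worker characteristics with those dummies.
formula = (
    "log_real_wage ~ (age + age_sq + occup + skill + gender)"
    " * C(sector_region) + khat + khat_sq"
)
fit = smf.ols(formula, data=df).fit(cov_type="HC1")
df["w_hat"] = fit.fittedvalues    # expected wages used to build the wage gaps
```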
Data: The I2D2
We use the International Income Distribution Database (I2D2) developed at the World Bank. The I2D2 is a globally harmonized household survey database covering 120 countries. Data are collected from more than 1,000 surveys and harmonized so they can be used for quantitative analysis. For the purpose of this study, we selected surveys for which some key information on workers' gender, age, education, occupation, industry affiliation, and location of residence (administrative region and rural/urban area) is non-missing. We also selected individuals working as paid employees between the ages of 15 and 65. There are 10 occupations in the database: senior officials, professionals, technicians, clerks, service and market sales workers, skilled agricultural workers, craft workers, machine operators, elementary occupation workers, and military. We define two broad categories of occupation: a managerial-type occupation (senior officials, professionals, and technicians) and a non-managerial occupation comprising the remaining occupations. Similarly, we define 17 age categories by grouping individuals into three-year age intervals. For instance, individuals between 15 and 17 (included) are given the age of 16; individuals between 18 and 20 (included) are given the age of 19, and so on. The reason for this is that we need to reduce the dimensionality of worker characteristics for the selection equation to perform appropriately.
Sectors are defined in 10 categories: agriculture, mining, manufacturing, public utilities, construction, commerce, transport and communications, financial and business services, public administration, and other services. Because some sectors have a very small size in some surveys, we aggregate them into 7 categories by merging agriculture with mining, public utilities with construction, and public administration with other services. Unlike sectors, the definition of regions as a geographical unit is not harmonized across surveys. The countries in our final data set have on average 6 regions. Our final data set is a collection of 234 worker-level surveys covering 47 (almost exclusively) developing countries over the period 1981-2012. Table 1 provides the full list of surveys.
Our identification strategy relies on the age of workers. If older workers are less likely to move, i.e., they face higher mobility costs, then we should observe larger wage differences across sectors and/or regions for them than for younger workers. This implies a positive correlation between the age of workers and the variance of wages within an age group. To investigate this, we estimate the following equation: ln(σ(wage_kct)) = α ln(age_kct) + γ_ct + ε_kct, where ln(σ(wage_kct)) and ln(age_kct) are respectively the log of the standard deviation of the wage distribution and the log of the age of workers of age l in country c in year t, γ_ct are country×year fixed effects, and ε_kct is the error term. A positive correlation between wage dispersion and age would imply a positive α. Figure 1 plots the conditional correlation between ln(σ(wage_kct)) and ln(age_kct). In most surveys, the dispersion of wages across sectors is greater than the dispersion of wages across regions; this is the case for 80% of the surveys in our data and suggests that sector mobility costs may be larger than mobility costs across regions.
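The dispersion regression can be illustrated as follows; the column names are placeholders for the I2D2 variables, and the country×year fixed effects are implemented with plain categorical dummies.

```python
# Illustrative check of the age / wage-dispersion correlation described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("i2d2_workers.csv")
cell = (
    df.groupby(["country", "year", "age_group"])["wage"]
      .std()
      .rename("sigma_wage")
      .reset_index()
)
cell["ln_sigma"] = np.log(cell["sigma_wage"])
cell["ln_age"] = np.log(cell["age_group"])

# Country x year fixed effects via interacted dummies; a positive ln_age
# coefficient is consistent with higher mobility costs for older workers.
fit = smf.ols("ln_sigma ~ ln_age + C(country):C(year)", data=cell).fit()
print(fit.params["ln_age"])
```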
Baseline results: With correction
The estimation of equations (7) and (8) yields the expected wages used to compute the mobility costs. Columns (1) and (2) report summary statistics for the sector-only and region-only mobility costs, which average about 1.4 and 0.9 times the average annual real wage, respectively. Finally, column (3) reports summary statistics for the mobility cost incurred by workers when they change sector and move to another location. On average, this cost is 1.8 times the average annual real wage, which is larger than the sector-only and region-only mobility costs. Note that the number of estimates that are statistically significant is much larger than for the other mobility costs. Using only the significant estimates produces a larger average sector-region mobility cost.
Yet, mobility costs are heterogeneous across countries. Sector mobility costs range from about one times the average annual wage in countries such as Honduras or Argentina to more than 10 times the average annual wage in Cameroon, Ethiopia, or South Africa. Regional mobility costs range from around 30% of the average annual wage in places like Uganda and Malawi to more than 5 times in Timor-Leste or Ethiopia. The cost incurred by individuals moving across both sectors and regions ranges from about one times the average annual wage (Argentina, the Republic of Yemen, Chile) to more than 10 times in Ethiopia or South Africa. In the next subsection we investigate some of the determinants of mobility costs.
Explaining the mobility costs
The historical patterns of structural transformation in high-income countries suggest that as economies develop, workers move from agricultural activities in rural areas towards urban jobs in industry and services. This has been well documented in the literature (Clark, 1940; Rostow, 1959; Lewis, 1954). Yet, depending on the pattern of technological progress and frictions in the labor market, workers may be misallocated across sectors and regions, slowing down structural transformation.
To describe the mobility costs and their association with development, we estimate equation (10), in which the estimated mobility cost Ĉ_{c,t} of moving between industries/regions k and k' in country c at time t is regressed on a country characteristic, with ε_{c,t} the error term and γ_c and γ_t country and year fixed effects, respectively (we use them alternatively in the empirical analysis). Because the left-hand-side variable of equation (10) is estimated with error, we weight the observations by the inverse of the standard error of each mobility cost (Lewis and Linzer, 2005). We estimate equation (10) for each mobility cost, using a single country characteristic at a time. The country characteristics we use are the log of GDP per capita, the Gini coefficient, and the shares of agriculture, manufacturing, and services in employment. We expect GDP per capita to be negatively correlated with mobility costs, both across countries and within countries over time, as the financial constraints that restrict mobility are likely to loosen as countries get richer. We also expect the Gini index to be positively correlated with mobility costs. Without claiming any causal link between mobility costs and income inequality, we argue that high mobility costs can lead to divergent wage trajectories both across industries and across regions. We then use the employment shares of the three large sectors (agriculture, industry, and services) with the aim of capturing the extent of structural change across and within countries.
Industry or service activities are likely to be more mobile than agriculture, which is strongly tied to land. Similarly, service activities are also likely to be more mobile than industry, where capital is less mobile in the short run than in the long run. We then use three measures of the concentration of jobs within industries, regions, and industry×regions. We define an index aiming to capture such specialization, where L is total employment and L_k is employment in sector/region k. The index is a Herfindahl index based on the share of each sector, region, or sector×region k in overall employment. It ranges from zero to 1/K. We expect the specialization index to be positively correlated with the mobility cost.
Everything else equal, the more concentrated jobs are in a particular sector and/or region, the more difficult it is to move to another sector/region. Finally, we look at whether large countries face higher mobility costs, using the average internal distance within a country as a proxy for its size.
We expect the regional cost to be positively correlated with internal distance. Because there is no time variation in this variable we only use it with time fixed effects.
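As a concrete reading of the specialization measure defined above, the sketch below computes a standard Herfindahl index over employment shares; note that the paper's exact normalization (and hence its stated range) may differ from this textbook form.

```python
# Standard Herfindahl concentration index over employment shares.
from collections import Counter

def herfindahl(assignments):
    """assignments: iterable of each worker's sector/region/sector-region."""
    counts = Counter(assignments)
    total = sum(counts.values())
    return sum((n / total) ** 2 for n in counts.values())

workers = ["agri", "agri", "manuf", "services", "agri"]
print(herfindahl(workers))   # 0.44: employment fairly concentrated in agriculture
```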
Results are presented in Table 3. Each coefficient (with its associated standard error) comes from a separate regression. In columns (1) and (4) the dependent variable is the sector mobility cost (Ĉ_{j,j'}), and we control for country or year fixed effects, respectively. The same logic applies to columns (2) and (5) for the regional mobility cost (Ĉ_{r,r'}), and to columns (3) and (6) for the sector×region mobility cost (Ĉ_{jr,j'r'}). The results indicate a significant difference in mobility costs between richer and poorer countries. The coefficients on GDP per capita are significant in columns (1)-(3): as countries get richer, the sector mobility cost and the sector×region mobility cost get smaller. On average, a 1% increase in GDP per capita lowers the sector mobility cost by 0.3% and the sector-region mobility cost by 0.6% when controlling for country fixed effects. Note that variation in GDP per capita is not correlated with the regional mobility costs. The second country characteristic we use is the Gini index. The results indicate that more unequal countries, or countries that become more unequal over time, have higher sector and sector×region mobility costs (columns 1 and 3, and 4 and 6). This finding supports the view that wage convergence is difficult when mobility costs are larger.
Finally, we look at whether sector and/or regional mobility costs are correlated with the structure of the economy. We alternatively regress each mobility cost on the employment share of the agricultural, manufacturing, and service sectors. The results indicate that countries with a larger agricultural sector also have higher mobility costs (columns 1-3). The results in columns (4)-(6) suggest that, as the share of the agricultural sector declined in most countries over time, so did, on average, the mobility costs. Symmetrically, we obtain opposite results when considering the share of the manufacturing sector in the economy: more industrialized countries face lower sector mobility costs. Results are similar when considering the share of the service sector, but with a much smaller magnitude than for the manufacturing sector. Although we do not claim any causal link, our results suggest that larger mobility costs are correlated with the level and speed of structural change.
One constraint of our analysis is that the definition of a region is not identical across countries, and we had to restrict ourselves to a limited number of regions for computational reasons, which means that large countries may have too few regions compared with smaller countries. We re-estimated the regression using only small countries (we dropped countries for which the average internal distance is above 1,000 km). The intuition is simple and merely assumes that greater distances between regions increase the cost of moving. Larger countries may also have developed denser road and railway networks, which would ease regional mobility. To capture these two effects, we use data on internal distance from CEPII and data on the length (in km) of the railway network from the World Development Indicators. The results presented in Table 3 show that internal distance is positively correlated with the regional mobility costs, but the results are less precisely estimated when using only statistically significant estimates of regional mobility costs. The size of the railway network (which we interpret here as a proxy for transport infrastructure) is negatively correlated with regional mobility costs. Yet, when controlling for both variables (distance and railway network), neither of them is statistically significant.
Information costs and Internet access
For policy makers it is important to know not only whether sector or regional mobility costs are larger, but also what is driving these costs and what policies are available to reduce them. We explore this question by putting forward a potential explanation associated with information costs and analyzing how access to the Internet may mitigate them. Other determinants (e.g., retraining costs and social networks) have also been explored in the literature. For example, Dix-Carneiro (2014) suggests a moving subsidy as a better policy than retraining to compensate for the fact that sector-specific experience is not perfectly transferable across sectors. A potential driver of mobility costs is the lack of information associated with search costs. Workers have to learn about job opportunities in other sectors and regions, while employers need to learn about the availability of workers, including those in other sectors and regions. An increase in Internet labor market connections should improve the efficiency with which workers are matched to jobs (Autor, 2001). Yet, the empirical literature evaluating the effects of Internet access on job search is not conclusive. Kroft and Pope (2014) find no effect of Craigslist, a major website advertising jobs and items for sale at virtually no cost to the user, on the unemployment rate in the United States. Kuhn and Skuterud (2004) also find that Internet job search was ineffective in reducing unemployment durations, but in a more recent paper Kuhn and Mansour (2014) suggest that unemployment duration is about 25% shorter for workers who search for jobs online.
However, most of these findings are based on the United States and do not focus on the effect of Internet access on labor mobility costs.
We explore the importance of this channel by looking at how sector and regional mobility costs differ for workers with and without Internet access. We use a similar correction procedure, following Dahl (2002), to take into account the self-selection of workers into sectors and regions. To capture the "access to Internet" effect on mobility costs, we construct a variable that measures the relative concentration of access to the Internet at the industry-region level across different age groups: int_k = (share of workers of age l in industry-region k with access to the Internet) / (share of workers of age l in country y with access to the Internet). We then re-estimate eq. (6c) adding the variable int_k in levels and its interaction with each mobility cost (across sectors, regions, and both); more specifically, we estimate eq. (11). Because int_k is a continuous variable, we compare the differences in the coefficients (with and without the interaction with int_k) assuming different values for int_k. First, we compare having no access to the Internet in the industry-region (int_k = 0) with having access equal to the national average; we call this comparison "average versus non-access to Internet." We then compare relatively low access to the Internet, defined as one standard deviation below the national average in a given industry-region, with relatively high access, defined as one standard deviation above the national average. Table 4 shows the results for these comparisons. 12 Our results suggest that the mobility costs, particularly across industries and industry-regions, tend to be smaller for workers with relatively more access to the Internet (Table 4). This effect is not robust for mobility costs across regions when using only statistically significant estimates; also, there are fewer statistically significant estimates for mobility costs across regions. 13 For example, on average, sector mobility costs in Brazil would fall from 1.1 to 0.97 times the average annual wage if access to the Internet for a worker of age l in region-industry k increased from low- to high-access status. A similar pattern is observed for Costa Rica, Honduras, Peru, Paraguay, and Uruguay. The cost of moving to another industry-region in Brazil falls from 1.3 to 1.02 times the average annual wage if workers move from low- to high-access status. This pattern is followed by the other countries in Figure 5 (Chile, Peru, Paraguay, Uruguay, and the República Bolivariana de Venezuela), except Honduras. Other exceptions, where the Internet effect seems to be negative, are Chile for sectors and Honduras for regions; but both countries also have significant coefficients suggesting positive effects of the Internet on industry or industry-region mobility costs.
The numbers of significant coefficients are smaller for regions, which may suggest that other factors (e.g. infrastructure, differences on amenities, or social network) might be more important as a driver of mobility costs across regions. These results also suggest that lack of access to information may play an important role in determining mobility costs across sectors. In addition to facilitate access to information on jobs' opportunities in other sectors (and regions) access to the Internet can also reduce the costs of acquiring skills to perform in other sectors (e.g. online courses platform).
Concluding remarks
This paper estimates the mobility costs of workers across sectors and regions in a very large sample of developing countries. Our results suggest that, on average, sector mobility costs are larger than regional mobility costs: the average sector mobility cost is about 1.4 times the average real wage, larger than the regional mobility cost (0.9 times the average real wage). The cost of moving both sector and region (1.8 times the average real wage) is larger than the cost of moving only sector or only region, but smaller than the sum of these two costs. Our results also suggest that increasing access to the Internet can reduce mobility costs across sectors and sector-regions. Thus, facilitating access to information might be an important policy for governments aiming to reduce mobility costs.
12 Table 1 provides the country and year of surveys included in the sample and estimations at the country-year level.
13 The number of surveys available is significantly smaller for this section due to the lack of data on Internet access. The analysis covers 67 surveys for 17 countries, most of them from the Latin America and the Caribbean region. Robust standard errors, t-statistics in parentheses. Significance levels: c p<0.1, b p<0.05, a p<0.01. Each cell corresponds to a single regression. Column (1) reports the regression of the industry mobility cost on, alternatively: log of GDP per capita (N=235), log Gini (N=235), the share of employment in agriculture/manufacturing/services (N=213), and the log of the specialization index for j, r, or jr (N=235). We also include year dummies. Columns (2) and (3) replicate column (1) using, respectively, the regional mobility cost or the industry×region mobility cost as the dependent variable. Columns (4)-(6) replicate columns (1)-(3), replacing the year dummies by country dummies. Note: Estimations refer to the difference between the respective mobility cost (Ĉ2_{j,j'}, Ĉ2_{r,r'}, and Ĉ2_{jr,j'r'}) coefficients (in logs) at different levels of access to the Internet. Because the variable used as a proxy for access to the Internet is continuous, we assume different values for it in order to compare mobility costs with and without Internet access. First we compare average access (int_k = mean(int_k)) versus no access to the Internet (int_k = 0); results are shown in columns (1)-(8). Then we compare a scenario with low access (int_k = mean(int_k) − 1 sd) with high access to the Internet (int_k = mean(int_k) + 1 sd); results are shown in columns (9)-(16). Robust standard errors are used to keep estimates significant at 95%. This table reports the average, median, and standard deviation of 67 country-year level estimations carried out at the worker level. We estimate the effect of access to the Internet on mobility costs separately for each country-year survey for which data are available, based on equation (11). Results at the individual country level are available upon request.
"Economics"
] |
ASPICov: An automated pipeline for identification of SARS-Cov2 nucleotidic variants
ASPICov was developed to provide biologists with a rapid, reliable, and complete analysis of NGS SARS-Cov2 samples. This broad-application tool can process samples from either a capture or an amplicon strategy, and from Illumina or Ion Torrent technology. To ensure FAIR data analysis, this Nextflow pipeline follows nf-core guidelines and uses Singularity containers. The pipeline is implemented and available at https://gitlab.com/vtilloy/aspicov.
Introduction
Whole-genome sequencing (WGS) is used for clinical surveillance of SARS-Cov2 in order to detect emerging variants, especially variants of interest (VOI) and variants of concern (VOC), to facilitate epidemiological studies, and to anticipate possible therapeutic/vaccine escape.
Two main library preparation methods are used for sequencing, depending on the context and sample origin: shotgun metagenomics and target enrichment. Various approaches are undertaken, such as transcriptome sequencing or combinations of strategies (hybrid capture enrichment, etc.), depending on goals and context [1,2]. The shotgun metagenomics method is used to capture SARS-Cov2 sequences by hybridization from a highly concentrated sample. The target enrichment or amplicon strategy is often chosen to amplify and detect SARS-Cov2 at low concentrations, such as in wastewater and some particular samples (stools, blood, samples from late infection stages, etc.). It is also important to consider the NGS sequencing platform, as different platforms do not provide the same sets of data and/or are optimized for a particular library strategy kit.
In order to cover a large range of sequencing technologies and to handle all parameters of our analysis, we developed ASPICov, a pipeline able to identify whole-genome variations at the nucleotide or amino-acid level in samples against a reference sequence. It is a multistep Nextflow [3] pipeline that processes raw read sequences into usable information such as quality reports, VCF files, consensus sequences, and plots (variants and coverage).
Implementation
The ASPICov workflow was created as a Nextflow pipeline following some of the nf-core standard requirements to set up a portable pipeline. The code wrapping the many tools used in ASPICov (see below) is written in bash and Python. The tools themselves have been integrated into ready-to-use Singularity containers [4]. Singularity definition files (used to build images) as well as binary images are all available for download (see below). ASPICov comes with a test data set.
In such a way, users can validate the correct execution of ASPICov on their computing infrastructure after cloning the pipeline from its public Gitlab repository.
Pipeline steps and tools used
The succession of genomic tools used (Fig 1), combined with an optimized computing configuration, is key to the robustness of the pipeline.
To facilitate the use of ASPICov and make it highly reproducible, all tools are automatically installed via pre-built Singularity images available from the National Oceanographic Data Center operated by Ifremer in France, a member of the Research Data Alliance (https://rd-alliance.org) (ftp://ftp.ifremer.fr/ifremer/dataref/bioinfo/sebimer/tools/AspiCov/). These images are built from recipes available as part of the ASPICov source code (https://gitlab.com/vtilloy/aspicov/-/tree/master/containers).
Input options
ASPICov is designed to be used on Linux distributions and launched with a single command, either within a cluster job scheduler or locally. The project name, technology, method, path to the data, references, Trimmomatic adapter, and bedpe file information are provided by users in a custom configuration file (supported by profiles), according to standard Nextflow principles. To use this workflow on a computing cluster, it is necessary to provide an institute configuration file (using -c <institute_config_file>) in order to enable Singularity and set up the appropriate execution settings for the environment.
Output files
ASPICov generates different results organized in seven folders (Fig 1). Figures, filtered VCFs, specific variant highlights, and consensus files are particularly helpful for biological interpretation.
Availability
ASPICov is a free and open-source pipeline, available and updated on a public Gitlab repository (https://gitlab.com/vtilloy/aspicov). It is provided with a quick start guide and complete documentation describing all options available to fine-tune data processing.
Dataset used to design and to validate the pipeline
The Wuhan strain (NC_045512) [5] was used as the whole-genome reference during pipeline validation. ASPICov has been optimized using a dataset from a single sample (Basa strain, isolated from a patient with mild Covid disease at Limoges hospital) taken at different culture stages (P3, P4 and P7 passages), serially diluted (10^−1, 10^−2, 10^−3, 10^−4, 10^−5, 10^−6 and 10^−7) and processed using Thermofisher and/or Swift Ampliseq protocols on Illumina (S1 Table). We thus determined a threshold corresponding to background noise: nucleotide variants were considered low quality if the Phred score is below 200, the depth below 100, or the allelic frequency below 0.02. Mutations not retained by the filters are still available in VCF files, tagged 'filter', whereas selected mutations are tagged 'filter-pass'.
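The filtering rule can be expressed compactly; the sketch below re-implements it over a plain-text VCF, assuming the depth and allele frequency are exposed through the common DP and AF INFO keys (an assumption, since the actual keys used by ASPICov are not specified here).

```python
# Illustrative re-implementation of the low-quality filter described above
# (Phred < 200, depth < 100, or allele frequency < 0.02).
def classify_variant(qual, depth, allele_freq):
    """Return the FILTER tag these thresholds would assign."""
    if qual < 200 or depth < 100 or allele_freq < 0.02:
        return "filter"
    return "filter-pass"

with open("sample.vcf") as vcf:
    for line in vcf:
        if line.startswith("#"):
            continue
        fields = line.rstrip("\n").split("\t")
        qual = float(fields[5])   # VCF column 6 is QUAL
        info = dict(kv.split("=") for kv in fields[7].split(";") if "=" in kv)
        af = float(info["AF"].split(",")[0])   # first allele if multiallelic
        tag = classify_variant(qual, float(info["DP"]), af)
        print(fields[0], fields[1], tag)
```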
ASPICov validation
We screened the ENA and SRA public databases to get a dataset of SARS-Cov2 reads coming from different labs using different strategies and sequencing technologies. Our aim was to
validate ASPICov on a wide range of data. All VOCs and VOIs were found using the ASPICov workflow, demonstrating its efficiency and accuracy (S2 Table).
ASPICov potential applications
Thanks to the filter optimization, we were able to finely observe and intersect changes for a single sample at different culture passages.
We were also able to evaluate the repeatability of sequencing methods by sequencing the same library on two runs with the same sequencing technology (Ion Torrent), and by comparing two strategies (ThermoFisher and Swift amplicon designs).
Conclusions
The ASPICov pipeline is dedicated to the fine detection and identification of SARS-CoV-2 mutations across a broad range of conditions (various samples, different sequencing approaches), with concrete applications in the diagnostic and wastewater domains. To ensure FAIR data analysis, the workflow is built as a Nextflow pipeline, follows nf-core guidelines and uses Singularity containers to wrap tool environments. Its efficiency and accuracy have been demonstrated.
Through its detection of VOIs/VOCs and its support for IonTorrent data, ASPICov is complementary to other pipelines such as viralrecon [6] and the Farkas pipeline [7]. Its design is different, providing an alternative and contributing to the diversity of tools for whole-genome Covid analysis.
ASPICov is regularly updated on Gitlab to track special variants according to WHO publications.
Several new features are currently under development, such as a global HTML report, phylogenetic analysis, integration of the ONT and MGI sequencing technologies, highlighting of genotype percentages, PANGO lineage determination and Nextclade/Gisaid data comparison.
"Medicine",
"Computer Science",
"Environmental Science"
] |
MedDialog: A Large-scale Medical Dialogue Dataset
Medical dialogue systems are promising in assisting in telemedicine to increase access to healthcare services, improve the quality of patient care, and reduce medical costs. To facilitate the research and development of medical dialogue systems, we build large-scale medical dialogue datasets – MedDialog – which contain 1) a Chinese dataset with 3.4 million conversations between patients and doctors, 11.3 million utterances, 660.2 million tokens, covering 172 specialties of diseases, and 2) an English dataset with 0.26 million conversations, 0.51 million utterances, 44.53 million tokens, covering 96 specialties of diseases. To the best of our knowledge, MedDialog is the largest medical dialogue dataset to date. We pretrain several dialogue generation models on the Chinese MedDialog dataset, including Transformer, GPT, and BERT-GPT, and compare their performance. It is shown that models trained on MedDialog are able to generate clinically correct and human-like medical dialogues. We also study the transferability of models trained on MedDialog to low-resource medical dialogue generation tasks. It is shown that via transfer learning, which fine-tunes the models pretrained on MedDialog, the performance on medical dialogue generation tasks with small datasets can be greatly improved, as shown in human evaluation and automatic evaluation. The datasets and code are available at https://github.com/UCSD-AI4H/Medical-Dialogue-System
Introduction
Telemedicine refers to the practice of delivering patient care remotely, where doctors provide medical consultations to patients using HIPAA compliant video-conferencing tools. As an important complement to traditional face-to-face medicine practiced physically in hospitals and clinics, telemedicine has a number of advantages. First, it increases access to care. For people living in medically under-served communities (e.g., rural areas) that are in shortage of clinicians, telemedicine enables them to receive faster and cheaper care compared with traveling over a long distance to visit a clinician. Second, it reduces healthcare costs. In a study 1 by Jefferson Health, it is shown that diverting patients from emergency departments with telemedicine can save more than $1,500 per visit. Third, telemedicine can improve the quality of care. The study in (Pande and Morris, 2015) shows that telemedicine patients score lower for depression, anxiety, and stress, and have 38% fewer hospital admissions. Other advantages include improving patient engagement and satisfaction, improving provider satisfaction, etc. Please refer to (Wootton et al., 2017) for a more comprehensive review.
While telemedicine is promising, it has several limitations. First, it puts an additional burden on physicians. In addition to practicing face-to-face medicine, which already makes physicians very busy, physicians need to provide remote telemedicine consultations, which further increases the risk of physician burnout. Second, different from in-hospital patients, whose medical conditions can be easily tracked by clinicians, remote patients are difficult to track and monitor. To address such problems, there has been increasing research interest in developing artificial intelligence (AI) methods to assist in telemedicine. In particular, medical dialogue systems are being developed to serve as "virtual doctors". These "virtual doctors" are designed to interact with patients via natural dialogues, asking about the medical conditions and history of patients and providing clinical advice. They can also proactively reach out to patients to ask about the progression of patients' conditions and provide timely interventions.
To build medical dialogue systems, a large collection of conversations between patients and doctors is needed as training data. Due to data privacy concerns, such data is difficult to obtain. The existing medical dialogue datasets (Xu et al., 2019; Yang et al., 2020) are limited in size or biased towards certain diseases, and therefore cannot adequately serve the purpose of training medical dialogue systems that can achieve doctor-level intelligence and cover many specialities in medicine.
To address the limitations of existing datasets, we build large-scale medical dialogue datasets - MedDialog - that contain 1) a Chinese dataset with 3.4 million conversations between patients and doctors, 11.3 million utterances, 660.2 million tokens, covering 172 specialties of diseases, and 2) an English dataset with 0.26 million conversations, 0.51 million utterances, 44.53 million tokens, covering 96 specialties of diseases. Both datasets cover almost all specialities in medicine, ranging from internal medicine to family medicine, and cover a wide spectrum of diseases, including cancer, pneumonia, etc. To the best of our knowledge, they are the largest Chinese and English medical dialogue datasets to date. The data is open to the public. Each consultation starts with a description of medical conditions and history, followed by the conversation between doctor and patient. In certain consultations, doctors make diagnosis conclusions and give suggestions on treatment. The conversations have multiple turns.
On the Chinese MedDialog (MedDialog-CN) dataset, we train several dialogue generation models for the interested community to benchmark against. Generating a response given the conversation history can be formulated as a sequence-to-sequence (seq2seq) learning problem, where we use the Transformer (Vaswani et al., 2017) architecture to perform this task. Transformer consists of an encoder which embeds the conversation history and a decoder which generates the response. Both the encoder and decoder use self-attention to capture long-range dependency between tokens. In addition to training the Transformer on MedDialog-CN from scratch, we can pretrain the encoder and decoder on corpora much larger than MedDialog-CN, then finetune them on MedDialog-CN. BERT-GPT (Wu et al., 2019; Lewis et al., 2019) is a pretrained model where the encoder is pretrained using BERT (Devlin et al., 2018) and the decoder is pretrained using GPT (Radford et al.). Besides the seq2seq formulation, dialogue generation can be formulated as a language modeling problem which generates the next token in the response conditioned on the concatenation of the already generated tokens in the response and the conversation history. GPT (Radford et al.; Zhang et al., 2019) is a pretrained language model based on the Transformer decoder. BERT-GPT and GPT are finetuned on MedDialog-CN. We perform evaluation of these models using automatic metrics including perplexity, BLEU (Papineni et al., 2002a), Dist (Li et al., 2015), etc. The generated responses are clinically informative, accurate, and human-like.
We utilize the models trained on the large-scale MedDialog-CN dataset to improve performance on low-resource dialogue generation tasks where the dataset size is small. The study is performed on COVID-19 dialogue generation with the CovidDialog (Yang et al., 2020) dataset, which contains 1,088 dialogues and 9,494 utterances. The small size of this dataset incurs a high risk of overfitting if large neural models are trained directly on it. To alleviate this risk, we take the weights of dialogue generation models pretrained on MedDialog-CN and finetune them on CovidDialog. Human evaluation and automatic evaluation show that pretraining on MedDialog-CN can greatly improve the performance on CovidDialog and generate clinically meaningful consultations about COVID-19.
The major contributions of this paper are:
• We build large-scale medical dialog datasets – MedDialog – which contain 1) a Chinese dataset with 3.4 million conversations between patients and doctors, 11.3 million utterances, 660.2 million tokens, covering 172 specialties of diseases, and 2) an English dataset with 0.26 million conversations, 0.51 million utterances, 44.53 million tokens, covering 96 specialties of diseases. To the best of our knowledge, they are the largest of their kinds to date.
• We pretrain several dialogue generation models on the Chinese MedDialog dataset, including Transformer, BERT-GPT, and GPT, and compare their performance using automatic metrics.
• Through human evaluation and automatic evaluation, we show that the pretrained models on MedDialog-CN can significantly improve performance on medical dialogue generation tasks where the dataset size is small, via transfer learning.
The rest of this paper is organized as follows. Sections 2 and 3 present the datasets and dialogue generation models (DGMs). Section 4 gives experimental results of developing DGMs on Chinese MedDialog and studies the transferability of DGMs trained on MedDialog-CN to other low-resource medical dialogue generation tasks. Section 5 reviews related works and Section 6 concludes the paper.
Related Works
There have been several works investigating medical dialogue generation. Wei et al. built a task-oriented dialogue system for automatic diagnosis. The system detects the user intent and slots with values from utterances, tracks dialogue states, and generates responses. Xu et al. (Xu et al., 2019) developed a medical dialogue system for automatic medical diagnosis that converses with patients to collect additional symptoms beyond their self-reports and automatically makes a diagnosis. This system incorporates a medical knowledge graph into the topic transition in dialogue management. Xia et al. (Xia et al.) developed a reinforcement learning (RL) based dialogue system for automatic diagnosis. They proposed a policy gradient framework based on the generative adversarial network to optimize the RL model.
Datasets
Our MedDialog consists of a Chinese dataset and an English dataset, collected from different sources.
The Chinese MedDialog dataset
The Chinese MedDialog (MedDialog-CN) dataset contains 3.4 million Chinese dialogues (consultations) between patients and doctors. The total number of utterances is 11.3 million. Each consultation starts with a narration of the patient's medical conditions and history, including the present disease, its duration, allergies, medications, past diseases, etc. It is followed by the multi-turn conversation between patient and doctor. In the conversation, there are cases where multiple consecutive utterances come from the same person (either doctor or patient), posted at different time points. In such cases, we combine the consecutive utterances from the same person into a single utterance. Optionally, at the end of the consultation, the doctor makes a diagnosis and gives suggestions on treatment. Table 1 shows statistics of the Chinese dataset. Figure 1 shows an exemplar consultation. The data is crawled from an online consultation website, haodf.com 2, which provides consultation services to patients. The dialogues cover 29 broad categories of specialties, including internal medicine, pediatrics, dentistry, etc., and 172 fine-grained specialties, including cardiology, neurology, gastroenterology, urology, etc. The consultations were conducted from 2010 to 2020.
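A minimal sketch of this merging step, assuming each utterance is represented as a (speaker, text) pair; the representation is illustrative, not the authors' actual preprocessing code:

```python
# Combine consecutive utterances from the same speaker into one utterance,
# as described above for the MedDialog-CN preprocessing.
def merge_consecutive(utterances):
    """utterances: list of (speaker, text) tuples in chronological order."""
    merged = []
    for speaker, text in utterances:
        if merged and merged[-1][0] == speaker:
            merged[-1] = (speaker, merged[-1][1] + " " + text)  # extend previous turn
        else:
            merged.append((speaker, text))
    return merged

dialogue = [("patient", "My baby has a rash."),
            ("patient", "It started two days ago."),
            ("doctor", "Any fever?")]
print(merge_consecutive(dialogue))
# [('patient', 'My baby has a rash. It started two days ago.'), ('doctor', 'Any fever?')]
```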
The English MedDialog dataset
The English MedDialog (MedDialog-EN) dataset contains 0.26 million English consultations between patients and doctors. The total number of utterances is 0.51 million. Each consultation consists of two parts: (1) description of patient's medical conditions; (2) conversation between patient and doctor. The data is crawled from iclinic.com 3 and healthcaremagic.com 4 , which are two online platforms of healthcare services, including symptom self-checker, video consultation, online chat with doctors, etc. The consultations cover 51 categories of communities including diabetes, elderly problems, pain management, etc. and 96 specialties including andrology, cardiology, nephrology, pharmacology, etc. The consultations were conducted from 2008 to 2020. Table 2 shows statistics of the English dataset.
Advantages of our datasets
To the best of our knowledge, MedDialog-CN and MedDialog-EN are the largest Chinese and English medical dialogue datasets, respectively. They have the following advantages.
• Large number of conversations and utterances. MedDialog-CN has about 3.4 million conversations and 11.3 million utterances. MedDialog-EN has about 0.3 million conversations and 0.5 million utterances.
• Broad coverage of medical specialities, which greatly minimizes population biases in these two datasets.
Table 3 shows a comparison of our datasets with several other medical dialogue datasets. The numbers of dialogues and diseases in our datasets are both much larger than those in the other datasets.
Methods
We train several dialogue generation models on the Chinese MedDialog dataset for the interested research community to benchmark against. During training, given a dialogue containing a sequence of alternating utterances between patient and doctor, we process it into a set of pairs {(s_i, t_i)}, where the target t_i is a response from the doctor and the source s_i is the concatenation of all utterances (from both patient and doctor) before t_i. A dialogue generation model takes s as input and generates t. This problem can be formulated either as a sequence-to-sequence learning problem, where the goal is to generate t conditioned on s via an encoder-decoder model, or as a language modeling problem, which generates the i-th token t_i in t conditioned on the concatenation of the conversation history s and the already generated sequence t_1, ..., t_{i-1} before t_i via a language model.
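The following sketch illustrates this pair construction under the assumption that a dialogue is a list of (speaker, text) turns; it is a minimal illustration, not the authors' released code:

```python
# Build (source, target) training pairs: each doctor turn becomes a target,
# with all preceding turns concatenated as the source.
def build_pairs(dialogue):
    """dialogue: list of (speaker, text) tuples in chronological order."""
    pairs = []
    for i, (speaker, text) in enumerate(dialogue):
        if speaker == "doctor" and i > 0:
            source = " ".join(t for _, t in dialogue[:i])  # full history before turn i
            pairs.append((source, text))
    return pairs

dialogue = [("patient", "I have a cough."),
            ("doctor", "How long has it lasted?"),
            ("patient", "About a week."),
            ("doctor", "Please get a chest X-ray.")]
for s, t in build_pairs(dialogue):
    print(repr(s), "->", repr(t))
```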
Dialogue Generation as Sequence-to-Sequence Modeling
The problem of response generation can be formulated as a sequence-to-sequence (seq2seq) learning (Sutskever et al., 2014) problem: given the conversation history s, generate the response t. We use the Transformer (Vaswani et al., 2017) architecture for seq2seq modeling. Transformer consists of an encoder which embeds the input sequence into a latent space and a decoder which takes the embedding of the input sequence as input and generates the output sequence. Different from LSTM-based seq2seq models (Sutskever et al., 2014), which learn representations of a sequence of tokens in a recurrent manner and therefore suffer computational inefficiency due to their sequential nature, Transformer uses self-attention to capture the long-range dependency among tokens by calculating the similarity between each pair of tokens in the sequence. Self-attention avoids sequential computation and greatly facilitates parallel computation. A building block in Transformer contains the following modules: a self-attention sub-layer, a token-wise feed-forward sub-layer, residual connections (He et al., 2016) between sub-layers, and layer normalization (Ba et al., 2016). Both the encoder and decoder are composed of a stack of such building blocks. The encoder generates an encoding for each token in the input sequence. These encodings are fed into the decoder to generate the output sequence. To generate the token at position i, the decoder encodes the generated tokens from 1 to i - 1 (like an encoder), calculates an attentional representation by performing attention between the encodings of input tokens and the encodings of output tokens 1, ..., i - 1, then feeds the attentional representation into a softmax layer to generate token i. Transformer learns the weights in the encoder and decoder by maximizing the conditional likelihood of responses conditioned on conversation histories.
Dialogue Generation as Language Modeling
Besides the sequence-to-sequence formulation, response generation can be formulated as a language modeling problem as well. Given the conversation history s, a language model defines the following probability on the sequence of tokens t = t_1, ..., t_n in the response:

p(t|s) = \prod_{i=1}^{n} p(t_i | s, t_1, ..., t_{i-1}),    (1)

where s, t_1, ..., t_{i-1} denotes the concatenation of s and t_1, ..., t_{i-1}. GPT (Radford et al.) is a pretrained language model which uses the Transformer decoder to model the conditional probability p(t_i | s, t_1, ..., t_{i-1}) in Eq. (1): it first encodes the tokens in s, t_1, ..., t_{i-1}, then predicts t_i based on the encodings. GPT learns the weights of the decoder by maximizing the likelihood (defined based on Eq. (1)) on the responses in the training data.
Pretraining
Before training Transformer and GPT on the MedDialog-CN dataset, we can first pretrain them on general-domain text datasets which are much larger than MedDialog-CN, to obtain a good initialization of the weight parameters. BERT-GPT (Wu et al., 2019; Lewis et al., 2019) is a pretraining approach for Transformer, which uses BERT (Devlin et al., 2018) to pretrain the Transformer encoder and uses GPT to pretrain the Transformer decoder. Given a sequence of tokens, BERT randomly masks out some of them. The masked sequence is fed into the Transformer encoder, which aims to recover the masked tokens. The weights in the encoder are learned by maximizing the accuracy of recovery.
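As a toy illustration of this masking objective (the 15% masking rate below is BERT's standard choice and is assumed here, not stated in the text):

```python
import random

# Randomly replace a fraction of tokens with a [MASK] symbol; the encoder is
# then trained to recover the original tokens at the masked positions.
def mask_tokens(tokens, mask_rate=0.15, mask_symbol="[MASK]"):
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:
            masked.append(mask_symbol)
            targets[i] = tok          # positions the model must reconstruct
        else:
            masked.append(tok)
    return masked, targets

random.seed(0)
print(mask_tokens("the patient reports a mild fever".split()))
```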
In BERT-GPT, the BERT encoder generates a representation of the input sequence, which is then fed into the GPT decoder to generate the response.
Experimental Settings
We split the Chinese MedDialog dataset into a training set, a validation set, and a test set with a ratio of 0.8:0.1:0.1. The split was based on dialogues, not on source-target pairs. The split statistics are summarized in Table 4. The models were built at the Chinese character level. The validation set was used for hyperparameter tuning. The training procedure was stopped when the validation loss stopped decreasing. For Transformer, the implementation by HuggingFace 5 was used, where the hyperparameters followed the default settings in the original Transformer (Vaswani et al., 2017). In BERT-GPT, the BERT encoder and GPT decoder are Transformers with 12 layers. The hidden state size is 768. The optimization of weight parameters was performed using stochastic gradient descent, with a learning rate of 1e-4. The maximum length of input sequences was truncated to 400 and that of output sequences was truncated to 100. For GPT, the DialoGPT-small (Zhang et al., 2019) architecture was used, with 10 layers. We set the embedding size to 768 and the context size to 300. In layer normalization, the epsilon hyperparameter was set to 1e-5. In multi-head self-attention, we set the number of heads to 12. The weight parameters were learned with Adam (Kingma and Ba, 2014). The initial learning rate was set to 1.5e-4 and the batch size was set to 32. The learning rate scheduler was set to Noam, with 2000 warm-up steps. Top-k random sampling (Fan et al., 2018) with k = 50 was used for decoding in all methods. We evaluated the trained models using automatic metrics including perplexity, NIST-n (Doddington, 2002), BLEU (Papineni et al., 2002a), METEOR, Entropy, and Dist (Li et al., 2015). Perplexity measures the language quality of the generated responses: the lower, the better. NIST, BLEU, and METEOR measure the similarity between the generated responses and the groundtruth via n-gram matching: the higher, the better. Entropy and Dist measure the lexical diversity of generated responses: the higher, the better. BERT-GPT is pretrained on a Chinese corpus collected from the Large Scale Chinese Corpus for NLP 6. The corpus includes Chinese Wikipedia containing 104 million documents, News containing 2.5 million news articles from 63,000 sources, Community QA containing 4.1 million documents belonging to 28 thousand topics, and Baike QA containing 1.5 million question-answering pairs from 493 domains. The total size of these datasets is 15.4 GB. GPT is pretrained on Chinese Chatbot Corpus 7 containing 14 million dialogues and 500k-Chinese-Dialog 8 containing 500K Chinese dialogues. Table 5 shows the performance on the MedDialog-CN test set. From this table, we make the following observations. First, BERT-GPT achieves lower perplexity than Transformer. This is because BERT-GPT is pretrained on a large collection of corpora before being finetuned on MedDialog-CN. Pretraining enables the model to better capture the linguistic structure among words, which yields lower perplexity. Second, on machine translation metrics including NIST-4, BLEU-2, BLEU-4, and METEOR, BERT-GPT performs worse than Transformer. This indicates that Transformer is able to generate responses that have more overlap with the groundtruth. However, it is worth noting that the studies in (Liu et al., 2016) show that machine translation metrics are not reliable evaluation metrics for dialogue generation. Given the same conversation history, many responses are valid. A response should not be deemed bad simply because it has little overlap with the response given by a doctor.
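A minimal sketch of the top-k random sampling decoder used here (Fan et al., 2018), assuming a next-token probability vector is available at each decoding step; this illustrates the sampling rule only, not the authors' implementation:

```python
import numpy as np

# Top-k sampling: keep the k most probable next tokens, renormalize their
# probabilities, and sample among them (k = 50 in the experiments above).
def top_k_sample(probs, k=50, rng=np.random.default_rng(0)):
    """probs: 1-D array of next-token probabilities over the vocabulary."""
    top = np.argsort(probs)[-k:]          # indices of the k largest probabilities
    p = probs[top] / probs[top].sum()     # renormalize over the shortlist
    return int(rng.choice(top, p=p))

vocab_probs = np.array([0.02, 0.50, 0.30, 0.10, 0.08])
print(top_k_sample(vocab_probs, k=3))     # samples among tokens 1, 2, 3
```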
Third, on diversity metrics, BERT-GPT and Transformer are on par, which indicates that they have similar capability in generating diverse responses. Fourth, compared with BERT-GPT, GPT has worse perplexity, better machine translation scores, and comparable diversity scores. Figure 2 shows an example of generated responses on the MedDialog-CN test set. The response generated by BERT-GPT is clinically informative and accurate. It prescribes Ebastine and gives detailed instructions for taking this medication. Ebastine is a medication for treating eczema. The patient mentioned that his/her baby has eczema, so this prescription is clinically meaningful. The language quality of the response is also good: it is syntactically and semantically correct and smooth. The response generated by GPT is also good, but less specific. It believes the baby has a skin allergy issue, but does not pinpoint the exact issue as BERT-GPT does. The response generated by Transformer is less clinically informative. It does not give medical suggestions, but it asks for further information, which is also a valid response. Figure 3 shows another example. The response generated by BERT-GPT is clinically accurate and concise, and the language quality is good. The response generated by GPT is self-conflicting. It says "if there is no abnormality at the throat, you can take a laryngoscope test; if abnormal, you should take a laryngoscope test", which is semantically inconsistent. The response generated by Transformer prescribes two repetitive laryngoscope tests, which is not clinically sensible.
Transfer to Other Datasets
In this section, we study how to use the models pretrained on MedDialog-CN to improve the performance on low-resource dialogue generation tasks where the dataset size is small. The target task is generating medical dialogues related to COVID-19 on the small-sized CovidDialog-Chinese (Yang et al., 2020) dataset. We finetune the MedDialog-pretrained models on CovidDialog-Chinese, and use the finetuned models to generate COVID-19-related dialogues.
Data
We use a Chinese dialogue dataset about COVID-19, CovidDialog-Chinese (Yang et al., 2020), for the experiments. This dataset has 1,088 patient-doctor dialogues about COVID-19, with 9,494 utterances and 406,550 tokens (Chinese characters) in total. Duplicated and incomplete dialogues were removed. The dialogues are multi-turn. The average number of utterances in a dialogue is 8.7. The utterances are reasonably long: the average number of tokens in an utterance is 42.8. Table 6 shows the statistics of this dataset.
[Figures 2 and 3 (referenced above): example conversation histories with responses generated by Transformer, GPT, and BERT-GPT, e.g. for a baby with eczema (BERT-GPT prescribes Ebastine) and for a patient with a hoarse voice (BERT-GPT suggests a laryngoscope test at a local hospital).]
Experimental settings
We split the CovidDialog-Chinese dataset into a training set, a validation set, and a test set with a ratio of 0.8:0.1:0.1. The split is based on dialogues. The split statistics are summarized in Table 7. Most hyperparameter settings follow those in Section 4.1, except that in optimization the batch size was set to 8. We evaluate the trained models using automatic metrics including perplexity, NIST-4 (Doddington, 2002), BLEU-2, 4 (Papineni et al., 2002a), METEOR (Lavie and Agarwal, 2007), Entropy-4 (Zhang et al., 2018), and Dist-1, 2 (Li et al., 2015). We also perform human evaluation. We randomly select 100 dialogue examples and ask 5 undergraduate and graduate students to rate the generated responses in terms of informativeness, relevance, and human-likeness. Informativeness is about whether a response contains sufficient medical information such as explanations of diseases and suggestions for treatment. Relevance is about whether the content of a response matches that of the conversation history. Human-likeness is about whether a response sounds like a human. The ratings are from 1 to 5; the higher, the better. The ratings from different annotators are averaged as the final results. First, on Transformer, pretraining on MedDialog-CN improves results on all metrics. This demonstrates that pretraining on MedDialog-CN can improve performance on low-resource medical dialogue generation tasks. Second, on GPT, pretraining on MedDialog-CN improves 5 of the 8 metrics; on BERT-GPT, pretraining on MedDialog-CN improves half of the metrics. The reason that the improvement on GPT and BERT-GPT is not as significant as that on Transformer is probably that these two models are already pretrained on other corpora, so the value of pretraining on MedDialog-CN is diminished. However, it is still useful to pretrain on MedDialog-CN to adapt these two models to the medical dialogue domain. The human evaluation likewise demonstrates the effectiveness of pretraining. We perform significance tests between different methods based on the double-sided Student's t-test. The results are shown in Table 10. As can be seen, in most cases the p-value is less than 0.015, demonstrating high statistical significance. For Transformer, GPT, and BERT-GPT, using pretraining (PT) on MedDialog-CN achieves significantly better performance than not using pretraining (No-PT). Figure 4 shows an example of generating a doctor's response given the utterance of a patient. As can be seen, models pretrained on MedDialog-CN perform better than their unpretrained counterparts. For example, the response generated by GPT without pretraining on MedDialog-CN is not understandable by a human; with pretraining on MedDialog-CN, it generates a much better response which gives medical advice. Figure 5 shows another example. Similarly, without MedDialog pretraining, the response generated by GPT is not readable; with pretraining, the generated response is smooth and clinically informative.
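A minimal sketch of such a two-sided significance test, assuming per-example metric scores for the pretrained and non-pretrained variants are available (the arrays below are placeholder data, not the paper's results):

```python
from scipy import stats

# Two-sided Student's t-test comparing per-example scores of a model trained
# with (PT) and without (No-PT) MedDialog-CN pretraining. Placeholder data.
scores_pt    = [3.8, 4.1, 3.9, 4.3, 4.0, 3.7, 4.2, 3.9]
scores_no_pt = [3.1, 3.4, 3.0, 3.6, 3.2, 3.3, 3.5, 3.1]

t_stat, p_value = stats.ttest_ind(scores_pt, scores_no_pt)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.015 would indicate significance
```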
Conclusions and Future Works
To facilitate the research and development of medical dialogue systems that can potentially assist in telemedicine, we build large-scale medical dialogue datasets - MedDialog - which contain 1) a Chinese dataset with 3.4 million conversations between patients and doctors, 11.3 million utterances, 660.2 million tokens, covering 172 specialties of diseases, and 2) an English dataset with 0.26 million conversations, 0.51 million utterances, 44.53 million tokens, covering 96 specialties of diseases. To the best of our knowledge, they are the largest of their kind. We pretrain Transformer, GPT, and BERT-GPT on MedDialog-CN. The results show that the dialogues generated by these pretrained models are clinically meaningful and human-like. We use transfer learning to apply these pretrained models to low-resource dialogue generation. On a COVID-19 dialogue generation task where the dataset is small, human evaluation and automatic evaluation show that models pretrained on MedDialog-CN can effectively improve the quality of generated responses.
For future work, we will annotate medical entities in our datasets. Such annotations can facilitate the development of goal-oriented medical dialog systems.
"Medicine",
"Computer Science"
] |
Almost as helpful as good theory: some conceptual possibilities for the online classroom
Interest and activity in the use of C&IT in higher education is growing, and while there is effort to understand the complexity of the transition to virtual space, aspects of development, particularly clarity about the nature of the learning community, may be only lightly theorized. Based on an ongoing action research study involving postgraduate students studying in the UK and USA, this paper identifies some theoretical roots and derives from them six conceptual areas that seem to the authors to have relevance and significance for behaviour online. An exploration of these forms the basis for a two-dimensional model which can account for what happens when groups come together to learn in cyberspace. In depicting this model, there is acknowledgement of the existence of third and fourth dimensions at work. However, the explanatory power of taking these extra dimensions into account is beyond the scope of the analysis thus far.
Introduction
Current interest in the use of Communications and Information Technology (C&IT) in higher education and other settings is at a high level (for example, Framework V: Towards a User-Friendly Information Society; the ESRC Programme PACCIT) and, while universities - in response to the Dearing Report - begin to exercise effort in restructuring course delivery methods to take advantage of the technology, there is a danger that the issue of the changes in the social dynamics that are a direct consequence of the transition from actual space to virtual space will be overlooked. Some work has been undertaken in this area (notably Rheingold, 1991; Cook, 1995; McConnell, Hardy and Hodgson, 1996) but there is a need for additional research if we are to identify a clearer sense of how effective online learning might manifest itself. We agree with Jones (1998), however, that the technology presents considerable challenges in establishing and maintaining communities: 'The learning process may bring people together insofar as such learning is often collaborative, but it is equally as often frustrating and off-putting' (Jones, 1998: 8). This paper will explore an approach to postgraduate education which depends on online collaborative construction of knowledge, drawing on students' past experience and new understanding gained on the course. There are other models of practice employing computer-mediated communication (CMC) in postgraduate teaching, but it has been decided to limit the scope of this paper to action science groups (Argyris, Putnam and Smith, 1986) because the nature of the work undertaken by these small groups is intense and personal, providing fertile ground for the creation of the learning community which is the focus of the research. Action Science has been described as 'the science of interpersonal action' (Argyris, 1993) and is an approach to personal and organizational development. Action science employs a number of methods by which participants examine their work-based defensive routines and look to change their underlying theories-in-use to produce more positive and effective ones. Participants of all six cohorts under consideration here wrote case studies which were then interrogated online by their small group and the two facilitators. A case study in this context is a written record of a remembered conversation about which the case writer feels dissatisfied for some reason. It was expected that learning would occur at two levels. Firstly, each case writer has the opportunity to gain insight into his or her own defensive routines as played out in the original dialogue. Secondly, all group members learn the specific questioning skills required of an action scientist, enhance their ability to spot underlying inconsistencies in a conversation and gain practice in rewriting statements in a new format which is more likely to lead to mutual learning and win-win outcomes.
It is felt that each group within the six cohorts that represent our data set has demonstrated some vivid characteristics. We consider that this rich environment is a source of real insights into the challenges offered by CMC in creating learning communities, and this has enabled us to develop a model that goes some way to describing the characteristics of these communities.
There have been almost fifty years of research and theorizing about the way in which groups of various sizes function in a variety of face-to-face settings, including some in higher education classrooms and lecture theatres. Research into CMC has a much shorter history. An examination of the literature persuades us that it is easy to make incorrect assumptions about the characteristics of the online classroom. CMC is often seen simply as another process which can be incorporated into existing thinking and practice, rather than one requiring a shift in conceptualization: about teaching and learning; about groups; and about the effect of technology on their structure and function. As Fernback noted, 'We know already that many of the assumptions we hold about the negotiation and formation of social relationships, and particularly about community, do not seem to apply in the complex realm of CMC' (Fernback, 1999: 205). Not surprisingly, the rhetoric and some of the practice of teaching and learning see CMC as a potential pedagogy for higher education as the capacity of the medium to deliver course materials and to generate interactivity between lecturers and students and among the student body becomes apparent. While the move towards the virtual campus cannot be ignored, there is the need to identify some of its characteristics and the theories underlying the practice. The purpose of this paper, therefore, is to explore some of the theory that contributes towards an effective understanding of what takes place when groups of students meet and learn online. In common with standard practice, students' names have been changed. Their contributions to discussion, however, are uncorrected.
Some theoretical possibilities
While not believing that activity in cyberspace has direct analogies with face-to-face experiences, our thinking is shaped by attention to two main theoretical sources: group dynamics - both from a sociological (Miller, 1993) and a psychoanalytical perspective (Bion, 1961) - and situated learning, particularly its focus on notions of communities of practice, socially constructed knowledge and authentic activity (McLellan, 1996). In the limited space we have available for this paper, we will go no further than claiming a relationship between the components of situated learning identified by McLellan, onto which we have mapped our online experience of working with action science.
Table I: Relationship between key components of situated learning and online experiences of action science. [Of the table's cells, only one entry survives extraction: 'Making appropriate use of the available technology.']
Our thinking, dominated at the time by consideration of group dynamics, led us to reflect on six conceptual areas which seem to be powerful ingredients in the online classroom (each is taken up in turn below):
• social organization
• orientation towards learning
• orientation towards task/tutor
• group work modality
• emotional climate
• group response to challenge
Social organization
While communication online can be task-based, there is an expectation that other, non-task but socially essential communication will contribute to the growth and sustenance of the community. Sociolinguists call this phatic communication (Stubbs, 1983), and it is seen as essential to maintain effective social interaction. It has been argued (Feenberg, 1989) that CMC is poor at fulfilling these phatic functions. He writes: 'All such phatic signs are bypassed in computer conferencing. Even standard codes for opening and closing conversations are discarded' (Feenberg, 1989: 22). This is particularly problematic in online communication because there is only the single, textual cue available for inspection, in contrast with the multiple cues that exist in face-to-face communication. Evidence from previous studies in CMC (Davis, 1997; Davis and Holt, 1998) suggests that this form of interchange may not arise spontaneously, and its absence can contribute towards social isolation and withdrawal. There are models that suggest that remediation of this is possible using a number of straightforward strategies, such as sending private email messages, directing comments at individuals and modelling responsiveness (Harasim, Hiltz, Teles and Turoff, 1995; Midoro, 1999). The research is key both to identify and to refine appropriate facilitation strategies to maximize effective learner participation and interdependence. Positive outcomes of well-timed and well-crafted tutor interventions include independence and community. However, whilst independence can be a rewarding outcome of a learning experience, it can also work against effective community membership.
Orientation towards learning
Orientation towards learning has two related continua. Collaborative, interdependent learning, which is that discovered in a learning community, can be understood 'as a distributed, ongoing social process, where evidence that learning is occurring or has occurred must be found in understanding the ways in which people collaboratively do learning and do recognize learning as having occurred' (Jordan, 1996: 42). Collaborative learning occurs when participants mutually engage 'in a coordinated effort to solve [a] problem together' (Roschelle and Behrend, 1995: 70). Whipple describes the benefits of collaborative learning: 'collaboration results in a level of knowledge within the group that is greater than the sum of the knowledge of the individual participants. Collaborative activities lead to emergent knowledge, which is the result of interaction between (not summation of) the understandings of those who contribute to its formation' (Whipple, 1987: 5).
In contrast, Hiltz, describing the software they use, writes: 'This assignment was carried out using the "activity branch" software. In a response branch, each student must answer the question before being able to read the answers of others' (Hiltz, 1994: 59).
This kind of structure may work counter to attempts at building a learning community. While the focus is on ensuring that the individual learner thinks, an interactive building of ideas is absent, and it becomes unlikely that collaborative learning can take place or a learning community emerge.
Orientation towards task/tutor
A classic depiction (Bennis and Shepard, 1956) of group life is that it can be divided into two main phases, dependence-power relations and interdependence-personal relations, and it is this that determines the orientation towards the task and the tutor. In dependence-power relations, students engage in flight from the task through avoidance and reliance on social engagement to occupy time. Counterdependency - a metaphorical fight with authority - can also emerge. The shift into interdependence-personal relations leads to conditions where there is considerably less dependence on external authority - either in the shape of the task or the tutor - and the development of a shared sense of responsibility for group performance. In online communities, this can create conditions in which members feel addicted to the process. In sharp contrast to this condition is the total separation of self from others: the condition of anomie, 'when members of a superficially well-organised society feel disconnected and isolated' (Reber, 1995: 39).
Group work modality
Group life can also be characterized from a more psychological perspective, originally modelled by Wilfred Bion (Bion, 1961). Fundamental to Bion's thinking about groups is that membership of them is part of the human condition. As he wrote: 'no individual, however isolated in time and space, can be regarded as outside a group, or lacking in active manifestations of group psychology' (Bion, 1961: 132).
Bion proposed that groups work at two levels. Work groups function effectively, engage with the task and with one another and attend to the needs of the group. According to Bion, however, whilst a group is operating in 'work' mode it is also capable of being subverted at any one time by one of the three basic assumptions - dependence, flight/fight or pairing.
A group moves into basic assumption dependency whenever it is reliant on a leader and believes that the leader will control, make decisions and rein in any passions that are too threatening to the safety of the group. A group in conflict or under pressure will often move into denial, manifest as flight (running away from a difficult issue to talk about 'safer' topics) or fight (usually a verbal struggle). A group is considered to be in basic assumption pairing when two members of the group are heavily involved in a discussion and the remainder of the group is silent but attentive. It is likely that a series of pairs will emerge, each dominating the discussion for a while. Basic assumption groups are thought to be mutually exclusive: for example, a group in basic assumption pairing cannot demonstrate flight/fight or dependency-type behaviour. It is possible for a group to move readily from one basic assumption to another.
Emotional climate
All of the above conditions represent a challenge to the group, and this challenge contributes towards an emotional reaction, either shared or individual. Among these, we have identified indifference (real or otherwise), frustration, off-task fascination, and anxiety. The first three are counterproductive in respect of the success of the group and/or its task completion. The latter can be productive or its opposite: anxiety that is too high invariably leads to ineffectiveness or, in the worst case, withdrawal; anxiety that is too low is insufficient to drive the motor of learning.
Group response to challenge
Groups respond to challenge at different times in different ways. Our experience has led us to identify four responses: groups that become hostile to the task reveal passive resistance or aggression, often by showing little interest in the activity; others deny that there is a problem when attempts are made to establish dialogue about the events unfolding. Others become fascinated with membership of the group and are seduced by the social aspects of their communication. Successful groups, however, engage in risk-taking: challenging other members, indeed challenging themselves, to push the margins of what is possible.
Discussion
In an attempt to gain some insight into how these conceptual areas might inter-relate, we chose a 2 x 2 matrix (see Figure 1) to characterize four archetypal groups.
Fragmented by technologies (1.1)
A group which is low on both learning and group dynamics may have very little activity; it will not be concerned about the group processes, nor will it be effective in its learning objectives. Members will be isolated from one another and their approach to learning, where it exists, is individual. Socially, group members are isolated and their basic assumption is flight - from the task and any discussion about the task. This leads to public indifference (despite email messages that indicate private frustration and anger) and a group strategy of passive resistance or aggression. Whilst successful groups showed themselves willing to build upon each other's ideas and create new levels of understanding, other groups never gained any momentum:
Hilary: I am not really enjoying our group interaction. It is very slow and uninvolved and the communication levels are very low. I am finding it hard to find questions to ask. I don't know why.
A willingness to avoid the task and discussion about the task is well summed up by one case-writer who commented on his own case as follows: Jack: i [sic] have reviewed your input, and appreciated your interest, the questions that were asked will help me focus on the situation.
Since this particular casewriter only made two interventions into his own case (the average for one of the cohorts was twenty-six) this represents 50 per cent of his output and clearly the experience has made little positive impact on him and the rest of the group.
Summer Holiday (1.9)
If a group is high on group dynamics but low on learning dynamics then group members may be having a lot of fun whilst achieving little learning. Here, members are displaced from normal life and they demonstrate self-interest and individuality. Work is avoided and the complex notion of basic assumption pairing is acted out. In this, the group waits for a magical event to emerge from possible pairing of other participants. Accordingly, they can be high on social interaction - invariably manifested through social 'conversation' at the expense of work. Indeed, the social is the dominant theme in this type of group and this, of course, can be very satisfying for the members and is very seductive. The following example demonstrates a group being hampered by notions of the need to be inclusive whilst at the same time struggling with the process of making decisions online.
Laura: since nobody is taking the initiative but everybody seems to share the view (at least this is what I make out of it) I would like to see how many of you could make it for an on line session, real time sometime this satarday [sic] or Sunday afternoon.This is the time that I can make it if you think that some other time is more convinient [sic] please suggest it.
This comment came in week four of seven, when the group had already been discussing meeting synchronously (at the same time as each other) since the first week. It seems likely that the social element was so important for this group that the thought of meeting up at a time that didn't suit all of them was unthinkable. The group continued to resist moves by various members to experiment with synchronous communication until, in week six, the following intervention from one of the tutors coincided with the most innovative member being more proactive about her desires.
Jack: Would 7pm on tonight and on Wednesday night suit everyone?
Kate (Facilitator): I think trying to get everyone may be a mistake. All you need is the casewriter and one or two others. More is obviously great, but not essential. I sense waiting for everyone to agree could mean that yet again you fail to meet.
Jack: that's true, Kate ... I had forgotten that that time would not be convenient for all ... so who will meet me on line ... I'll be here at 7.00pm.
And she was. Those that met up with her clearly enjoyed 'being' together, but continued to use the time in a largely social way. Although the group had managed to overcome one of its difficulties (making decisions), it was unable to work against the by now well-established norm of social activity dominating the work space.
I'm ok, you're ok (9.1)
If a group is high on learning dynamics but low on group dynamics then members will show little concern for each other personally and will tend to work independently rather than interdependently. One group whose group strategy we have characterized as denial had the following conversation:
Megan: I am aware that there are a number of things I have been thinking but not saying and I wonder if this is true for others also.
Rod: M, I too feel that perhaps we aren't as active as we could be.But I am OK with it.
Here Rod refused to take up the gauntlet, preferring to work in her own way which Megan later described as 'bullying'.
Such groups are capable of acting co-operatively rather than collaboratively. In the latter, understanding and insight grow from the social construction of knowledge. In the former, it is more competitive, and individual understanding and insight is the desired outcome, possibly at the expense of others' learning. Inevitably, groups who find themselves in this situation demonstrate counter-dependent behaviour with frequent (although invariably unsuccessful) appeals to authority to deal with the problematic group dynamics. Equally inevitably, tutors are held responsible for their failure to make the groups work more effectively, and members deny their collective and individual responsibilities for the difficulties the group is experiencing.
Below is an extract from a group who struggled and looked to the tutors to make the interaction more productive.
Sue: MikeD, Kate, correct me if I'm wrong but I thought that in order for a case to be completed, it was necessary to provide the 'interrogated' with a TIU.This didn't happen in my case.
Kelly: yes, S. I am also interested in getting an answer to this question, you will remember that I raised a similar concern to Kate and MikeD last time when my case was discussed, but I did not get any satisfactory response.As for MikeD, he did not even bother to comment on the issue.With Kate it was better because, even if she did not answer my question, she at least asked me questions in relation to the issue.
Here we have two group members whose primary concern is that they gain from the experience without necessarily giving to the rest of the group. Both are requesting a theory-in-use from members of the group; neither was very forthcoming in giving them to others. This demonstrates a group attempting to learn individually in an environment set up for collaborative learning. It is not possible to do action science alone. If it were possible they would have done it! As has been mentioned earlier, one of the elements of the group whose orientation towards learning is collaborative is that the knowledge is constructed together by the sharing of thoughts, feelings and knowledge. This building of ideas results in higher-order insights than might be gained individually. One participant commented on how the group seemed to be building individual lines of enquiry rather than working together, which, whilst it was true, was a comment which itself was part of the following scaffold:
Kelly: I find Karen [facilitator] has made an intervention that has set me thinking ...
Jackie: Just to tag on to K's response...
Megan: Good job K you've hit a big problem on the head... maybe one solution is ...
Our model, as it stands, assigns to each of the six conceptual areas four potential states to correspond to the creation of four ideal group types. This, however, is a misrepresentation of the complexity of the model we have created, as much as anything else for neatness of exposition. What we are aware of is a third dimension, not accommodated by the 2 by 2 matrix, which can indicate possible alternative alignments of the various characteristics of behaviour and their interrelationship. This we have designated depth (the fact that a conceptual area can be manifest in a number of ways) and see it as the third dimension of the model, which as yet remains in its infancy. The danger here, however, is that we fall into the trap identified by Aarseth: 'the race is on to conquer and colonise these [learning technologies] for our existing paradigms and theories, often in the form of "the theoretical perspective of <fill in your favourite theory/theoretician here> is clearly really a prediction/description of <fill in your favourite digital medium here>." This method is being used with permutational efficiency throughout the fields of digital technology and critical theory, two unlikely tango partners indeed. But the combinatorial process shows no sign of exhaustion yet' (Aarseth, 1999: 31). At least we are aware of this risk, and we will remind ourselves of it from time to time.
Conclusion
So, what, if anything, can we conclude? The model depicted in Figure 1 was the product of inspiration and intuition based on our iterative analysis of data collected over a three-year period, and it feels as if it has some explanatory power. We plan to re-examine our data in an attempt to confirm the accuracy of the conclusions we have drawn. We then want to examine other data from other online courses that are similar in nature, to see if the model, as it currently stands, is robust. Then we might be able to tackle the third dimension, depth, and the, as yet unmentioned, fourth dimension of time.
"Education",
"Computer Science"
] |
Apparent ghosts and spurious degrees of freedom in non-local theories
Recent work has shown that non-local modifications of the Einstein equations can have interesting cosmological consequences and can provide a dynamical origin for dark energy, consistent with existing data. At first sight these theories are plagued by ghosts. We show that these apparent ghost-like instabilities do not describe actual propagating degrees of freedom, and there is no issue of ghost-induced quantum vacuum decay.
In two recent papers [32,33] we have proposed a non-local approach that allows us to introduce a mass term in the Einstein equations, in such a way that the invariance under diffeomorphisms is not spoiled, and we do not need to introduce an external reference metric (contrary to what happens in the conventional local approach to massive gravity [34][35][36][37][38][39][40]). In particular, in [33] a classical model was proposed, based on the non-local equation

G_{\mu\nu} - \frac{d-1}{2d}\, m^2 \left( g_{\mu\nu}\, \Box^{-1}_{\rm ret} R \right)^{\rm T} = 8\pi G\, T_{\mu\nu} \,.  (1.1)

Here d is the number of spatial dimensions (the factor (d-1)/2d is a convenient normalization of the parameter m^2), \Box = g^{\mu\nu}\nabla_\mu\nabla_\nu is the d'Alembertian operator with respect to the metric g_{\mu\nu} and, quite crucially, \Box^{-1}_{\rm ret} is its inverse computed with the retarded Green's function. The superscript T denotes the extraction of the transverse part of the tensor, which exploits the fact that, in a generic Riemannian manifold, any symmetric tensor S_{\mu\nu} can be decomposed as S_{\mu\nu} = S^{\rm T}_{\mu\nu} + \frac{1}{2}(\nabla_\mu S_\nu + \nabla_\nu S_\mu), with \nabla^\mu S^{\rm T}_{\mu\nu} = 0 [41,42]. The extraction of the transverse part of a tensor is itself a non-local operation, which involves further \Box^{-1} operators. For instance in flat space, where \nabla_\mu \to \partial_\mu, it is easy to show that

S^{\rm T}_{\mu\nu} = S_{\mu\nu} - \frac{1}{\Box}\left(\partial_\mu\partial^\rho S_{\rho\nu} + \partial_\nu\partial^\rho S_{\rho\mu}\right) + \frac{1}{\Box^2}\,\partial_\mu\partial_\nu\partial^\rho\partial^\sigma S_{\rho\sigma} \,.  (1.2)

Again, in eq. (1.1) all \Box^{-1} factors coming from the extraction of the transverse part are defined with the retarded Green's function, so that eq. (1.1) satisfies causality. Furthermore, since the left-hand side of eq. (1.1) is transverse by construction, the energy-momentum tensor is automatically conserved, \nabla^\mu T_{\mu\nu} = 0. Both causality and energy-momentum conservation were lost in the original degravitation proposal [7], and in this sense eq. (1.1) can be seen as a refinement of the original idea. However, the explicit appearance of retarded Green's functions in the equations of motion has important consequences for the conceptual meaning of an equation such as (1.1), as we will discuss below. As shown in [33,43], eq. (1.1) has very interesting cosmological properties, and in particular it generates a dynamical dark energy. Since during radiation dominance (RD) the Ricci scalar R vanishes, the term \Box^{-1}R starts to grow only during matter dominance (MD), thereby providing in a natural way a delayed onset of the accelerated expansion (similarly to what happens in the model proposed in [13]). Furthermore, this model is highly predictive since it only introduces a single parameter m, which replaces the cosmological constant in \LambdaCDM. In contrast, models based on quintessence, f(R) gravity, or the non-local model of [13], in which a term R f(\Box^{-1}R) is added to the Einstein action, all introduce at least one arbitrary function, which is typically tuned so as to get the desired cosmological behavior. In our case, we can fix the value of m so as to reproduce the observed value \Omega_{\rm DE} \simeq 0.68. This gives m \simeq 0.67 H_0, and leaves us with no free parameter. We then get a pure prediction for the EOS parameter of dark energy. Quite remarkably, writing w_{\rm DE}(a) = w_0 + (1-a) w_a, in [33] we found w_0 \simeq -1.04 and w_a \simeq -0.02, consistent with the Planck data, and on the phantom side.
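As a consistency check of the flat-space expression (1.2) as reconstructed above from the stated transversality requirement, one can verify \partial^\mu S^{\rm T}_{\mu\nu} = 0 directly; the following one-line computation is a sketch:

\partial^{\mu} S^{\rm T}_{\mu\nu} = \partial^{\mu}S_{\mu\nu} - \partial^{\rho}S_{\rho\nu} - \frac{1}{\Box}\,\partial_{\nu}\partial^{\mu}\partial^{\rho}S_{\rho\mu} + \frac{1}{\Box}\,\partial_{\nu}\partial^{\rho}\partial^{\sigma}S_{\rho\sigma} = 0 \,,

where the four terms cancel pairwise after relabeling dummy indices.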
These cosmological features make eq. (1.1) a potentially very attractive dark energy model. However, the presence of the $\Box^{-1}$ operator raises a number of potential problems of theoretical consistency, and the purpose of this paper is to investigate them in some detail. The crucial problem can already be seen by linearizing eq. (1.1) over flat space. Writing $g_{\mu\nu} = \eta_{\mu\nu} + \kappa h_{\mu\nu}$, where $\kappa = (32\pi G)^{1/2}$ and $\eta_{\mu\nu} = {\rm diag}(-,+,\ldots,+)$, the equation of motion of this theory takes the form
$$\mathcal{E}^{\mu\nu,\rho\sigma}h_{\rho\sigma} - \frac{d-1}{d}\,m^2\,P^{\mu\nu}_{\rm ret}\,P^{\rho\sigma}_{\rm ret}\,h_{\rho\sigma} = -\frac{\kappa}{2}\,T^{\mu\nu}\,, \qquad (1.3)$$
where $\mathcal{E}^{\mu\nu,\rho\sigma}$ is the Lichnerowicz operator, while
$$P^{\mu\nu} = \eta^{\mu\nu} - \frac{\partial^\mu\partial^\nu}{\Box}\,, \qquad (1.4)$$
and $\Box^{-1}_{\rm ret}$ is the retarded inverse of the flat-space d'Alembertian. Apparently, the corresponding quadratic Lagrangian is
$$\mathcal{L} = \frac{1}{2}\,h_{\mu\nu}\,\mathcal{E}^{\mu\nu,\rho\sigma}\,h_{\rho\sigma} - \frac{d-1}{2d}\,m^2\left(P^{\mu\nu}h_{\mu\nu}\right)\left(P^{\rho\sigma}h_{\rho\sigma}\right) + \frac{\kappa}{2}\,h_{\mu\nu}T^{\mu\nu}\,. \qquad (1.5)$$
Adding the usual gauge fixing term of linearized massless gravity, $\mathcal{L}_{\rm gf} = -(\partial_\nu \bar h^{\mu\nu})(\partial^\rho \bar h_{\rho\mu})$, and inverting the resulting quadratic form, we get the propagator (1.6), which consists of the usual propagator of a massless graviton (for generic d), a term proportional to m^2, plus terms proportional to $k^\mu k^\nu$, $k^\rho k^\sigma$ and $k^\mu k^\nu k^\rho k^\sigma$ that give zero when contracted with a conserved energy-momentum tensor. The term proportional to m^2 gives an extra contribution to the saturated propagator $\tilde T_{\mu\nu}(-k)\,D^{\mu\nu\rho\sigma}(k)\,\tilde T_{\rho\sigma}(k)$, proportional to
$$-\frac{m^2}{k^2\,(k^2 - m^2)}\,\tilde T(-k)\,\tilde T(k) = \left[\frac{1}{k^2} - \frac{1}{k^2 - m^2}\right]\tilde T(-k)\,\tilde T(k)\,, \qquad (1.7)$$
where $\tilde T = \eta_{\mu\nu}\tilde T^{\mu\nu}$. This term apparently describes the exchange of a healthy massless scalar plus a ghostlike massive scalar. In general, a ghost has two quite distinct effects: at the classical level, it can give rise to runaway solutions. In our cosmological context, rather than a problem this can actually be a virtue, because a phase of accelerated expansion is, in a sense, an instability of the classical evolution. Indeed, ghosts have been suggested as models of phantom dark energy [44,45]. The real trouble is that, at the quantum level, a ghost corresponds to a particle with negative energy: it induces a decay of the vacuum, through processes in which the vacuum decays into ghosts plus normal particles, and renders the theory inconsistent.
The main purpose of this paper is to discuss and clarify some subtle conceptual issues related to this apparent ghost-like degree of freedom, and to show that, in fact, in this theory there is no propagating ghost-like degree of freedom. The paper is organized as follows. In sect. 2 we show that the status of a non-local equation such as eq. (1.1) is that of an effective classical equation, derived from some classical or quantum averaging in a more fundamental theory. In sect. 3 we show that similar apparent ghosts appear even in massless GR, when one decomposes the metric perturbation into a transverse-traceless part $h^{\rm TT}_{\mu\nu}$ and a trace part $\eta_{\mu\nu}s$. This is due to the fact that the relation between $\{h^{\rm TT}_{\mu\nu}, s\}$ and the original metric perturbation $h_{\mu\nu}$ is non-local. We show that (contrary to some statements in the literature) the apparent ghost field s is not neutralized by the helicity-0 component of $h^{\rm TT}_{\mu\nu}$. Rather, what saves the vacuum stability of GR, in these variables, is that s (as well as the helicity-0, ±1 components of $h^{\rm TT}_{\mu\nu}$) is a non-propagating field and cannot be put on the external lines, nor in loops. Besides having an intrinsic conceptual interest, this analysis will also show that the same considerations extend straightforwardly to the non-local modification of GR that we are studying. Finally, in sect. 4 we will work out the explicit relation between the fake ghost suggested by eq. (1.7) and the spurious degrees of freedom that are known to emerge when a non-local theory is written in local form by introducing auxiliary fields. Sect. 5 contains our conclusions.
2 Non-local QFT or classical effective equations?
A crucial point of eq. (1.1), or of its linearization (1.3), is that they contain explicitly a retarded propagator. This retarded prescription is forced by causality, which we do not want to give up. We are used, of course, to the appearance of retarded propagators in the solutions of classical equations. Here, however, the retarded propagator already appears in the equation itself, and not only in its solution. Is it possible to obtain such an equation from a variational principle? The answer, quite crucially, is no. As already observed by various authors [13,18,32], a retarded inverse d'Alembertian cannot be obtained from the variation of a non-local action. Consider for illustration a non-local term in an action of the form $\int dx\,\phi\,\Box^{-1}\phi$, where φ is some scalar field and $\Box^{-1}$ is defined with respect to some Green's function G(x;x'). Taking the variation with respect to φ(x) we get
$$\frac{\delta}{\delta\phi(x)}\int dx'\,dx''\,\phi(x')\,G(x';x'')\,\phi(x'') = \int dx'\left[G(x;x') + G(x';x)\right]\phi(x')\,. \qquad (2.1)$$
We see that the variation of the action automatically symmetrizes the Green's function. It is therefore impossible to obtain in this way a retarded Green's function in the equations of motion, since $G_{\rm ret}(x;x')$ is not symmetric under $x \leftrightarrow x'$; rather, $G_{\rm ret}(x';x) = G_{\rm adv}(x;x')$. The same happens if we take the variation of the Lagrangian (1.5). Writing explicitly the convolution with the Green's function, as we did in eq. (2.1), we find that it is not possible to get the term $P^{\mu\nu}_{\rm ret}P^{\rho\sigma}_{\rm ret}$ in eq. (1.3). If in the action the term $\Box^{-1}$ that appears in $P^{\mu\nu}$ is defined with a symmetric Green's function, so that G(x;x') = G(x';x), we find the same Green's function in the equation of motion. If, in contrast, we use $h_{\mu\nu}P^{\mu\nu}_{\rm ret}P^{\rho\sigma}_{\rm ret}h_{\rho\sigma}$ in the action, in the equations of motion we get $(P^{\mu\nu}_{\rm ret}P^{\rho\sigma}_{\rm ret} + P^{\mu\nu}_{\rm adv}P^{\rho\sigma}_{\rm adv})h_{\rho\sigma}$. Of course, one can take the point of view that the classical theory is defined by its equations of motion, while the action is simply a convenient "device" that, through a set of well-defined rules, allows us to compactly summarize the equations of motion. We can then take the formal variation of the action and, at the end, replace by hand all factors $\Box^{-1}$ by $\Box^{-1}_{\rm ret}$ in the equations of motion. This is indeed the procedure used in [13,46], in the context of non-local gravity theories with a Lagrangian of the form $R\,f(\Box^{-1}R)$. As long as we see the Lagrangian as a "device" that, through a well-defined procedure, gives a classical equation of motion, this prescription is certainly legitimate. However, any connection between these classical causal equations of motion and the quantum field theory described by such a Lagrangian is now lost. In particular, the terms in eq. (1.6) or in eq. (1.7) that apparently describe the exchange of a healthy massless scalar plus a ghostlike massive scalar are just the propagators that, in order to reproduce eq. (1.3) after the variation, must be set equal to retarded propagators. Taking them as Feynman propagators in a QFT gives a quantum theory that has nothing to do with our initial classical equation (1.3), and that has dynamical degrees of freedom that, with respect to our original problem, are spurious.
Thus, eq. (1.1) is not the classical equation of motion of a non-local quantum field theory. To understand its conceptual meaning, we observe that non-local equations involving the retarded propagator appear in various situations in physics, but are never fundamental. They rather typically emerge after performing some form of averaging, either purely classical or at the quantum level. In particular, non-local field equations govern the effective dynamics of the vacuum expectation values of quantum fields, which include the quantum corrections to the effective action. The standard path integral approach provides the dynamics for the in-out matrix element of a quantum field, e.g. $\langle 0_{\rm out}|\hat\phi|0_{\rm in}\rangle$ or, in a semiclassical approach to gravity, $\langle 0_{\rm out}|\hat g_{\mu\nu}|0_{\rm in}\rangle$. The classical equations for these quantities are, however, determined by the Feynman propagator, so they are not causal, since they contain both the retarded and the advanced Green's function. This is not surprising, since the in-out matrix elements are not directly measurable quantities, but only provide intermediate steps in QFT computations. Furthermore, even if $\hat\phi$ is a hermitian operator, its in-out matrix elements are complex. In particular, this makes it impossible to interpret $\langle 0_{\rm out}|\hat g_{\mu\nu}|0_{\rm in}\rangle$ as an effective metric. In contrast, the in-in matrix elements are real, and satisfy non-local but causal equations [47,48], involving only retarded propagators (which can be computed using the Schwinger-Keldysh formalism).
Similar non-local but causal equations can also emerge from a purely classical averaging procedure, when one separates the dynamics of a system into a long-wavelength and a short-wavelength part. One can then obtain an effective non-local but causal equation for the long-wavelength modes by integrating out the short-wavelength modes, see e.g. [49] for a recent example in the context of cosmological perturbation theory. Another purely classical example comes from the standard post-Newtonian/post-Minkowskian formalisms for GW production [50,51]. In linearized theory the gravitational wave (GW) amplitude $h_{\mu\nu}$ is determined by $\Box\bar h_{\mu\nu} = -16\pi G\,T_{\mu\nu}$, where $\bar h_{\mu\nu} = h_{\mu\nu} - \frac{1}{2}h\,\eta_{\mu\nu}$. In such a radiation problem this equation is solved with the retarded Green's function, $\bar h_{\mu\nu} = -16\pi G\,\Box^{-1}_{\rm ret}T_{\mu\nu}$. When the non-linearities of GR are included, the GWs generated at some perturbative order become themselves sources for the GW generation at the next order. In the far-wave zone, this iteration gives rise to effective equations for $\bar h_{\mu\nu}$ involving $\Box^{-1}_{\rm ret}$. In summary, non-local equations involving $\Box^{-1}_{\rm ret}$ are not the classical equations of motion of a non-local QFT (a point already made e.g. in [13,18,24,25,52]). Even if we can find a classical Lagrangian whose variation reproduces them (once supplemented with the $\Box^{-1} \to \Box^{-1}_{\rm ret}$ prescription after having performed the variation), the quantum field theory described by this Lagrangian has a priori nothing to do with the problem at hand. Issues of quantum consistency (such as the possibility of a vacuum decay amplitude induced by ghosts) can only be addressed in the fundamental theory that, upon classical or quantum smoothing, produces these non-local (but causal) classical equations.
So, there is no sense, and no domain of validity, in which the Lagrangian (1.5) can be used to define a QFT associated to our theory. To investigate whether the classical equation (1.1) derives from a QFT with a stable quantum vacuum, we should identify the fundamental theory and the smoothing procedure that give rise to it, and only in this framework can we pose the question.
3 Vacuum stability in massless and massive gravity
3.1 A fake ghost in massless GR
It is instructive to see more generally how spurious degrees of freedom, and in particular spurious ghosts, can appear when one uses non-local variables. A simple and quite revealing example is provided by GR itself. We have already discussed this example in app. B of [53], but it is useful to re-examine and expand it in this context. We consider GR linearized over flat space. The quadratic Einstein-Hilbert action is
$$S^{(2)}_{\rm EH} = \int d^{d+1}x\left[-\frac{1}{2}\partial_\lambda h_{\mu\nu}\partial^\lambda h^{\mu\nu} + \partial_\mu h_{\nu\lambda}\partial^\nu h^{\mu\lambda} - \partial_\mu h^{\mu\nu}\partial_\nu h + \frac{1}{2}\partial_\lambda h\,\partial^\lambda h\right]\,. \qquad (3.1)$$
We decompose the metric as
$$h_{\mu\nu} = h^{\rm TT}_{\mu\nu} + \partial_\mu\varepsilon_\nu + \partial_\nu\varepsilon_\mu + \frac{1}{d}\,\eta_{\mu\nu}\,s\,, \qquad (3.2)$$
where $h^{\rm TT}_{\mu\nu}$ is transverse and traceless; both $h^{\rm TT}_{\mu\nu}$ and the scalar s are gauge invariant. Plugging eq. (3.2) into eq. (3.1) we find that $\varepsilon_\mu$ cancels (as is obvious from the fact that eq. (3.1) is invariant under linearized diffeomorphisms and $\varepsilon_\mu$ is a pure gauge mode), and
$$S^{(2)}_{\rm EH} = \int d^{d+1}x\left[-\frac{1}{2}\partial_\lambda h^{\rm TT}_{\mu\nu}\partial^\lambda h^{{\rm TT}\,\mu\nu} + \frac{d-1}{2d}\,\partial_\lambda s\,\partial^\lambda s\right]\,. \qquad (3.3)$$
Performing the same decomposition in the energy-momentum tensor, the interaction term can be written as
$$S_{\rm int} = \frac{\kappa}{2}\int d^{d+1}x\;h_{\mu\nu}T^{\mu\nu} = \frac{\kappa}{2}\int d^{d+1}x\left(h^{\rm TT}_{\mu\nu}T^{\mu\nu} + \frac{1}{d}\,s\,T\right)\,, \qquad (3.4)$$
where $T = \eta_{\mu\nu}T^{\mu\nu}$, so the equations of motion derived from $S^{(2)}_{\rm EH} + S_{\rm int}$ are
$$\Box h^{\rm TT}_{\mu\nu} = -\frac{\kappa}{2}\,T^{\rm TT}_{\mu\nu} \qquad (3.5)$$
and
$$\Box s = \frac{\kappa}{2(d-1)}\,T\,. \qquad (3.6)$$
This result can be surprising, because it seems to suggest that in ordinary massless GR we have many more propagating degrees of freedom than expected: the components of the transverse-traceless tensor $h^{\rm TT}_{\mu\nu}$ (i.e. 5 components in d = 3) plus the scalar s. Note that these degrees of freedom are gauge invariant, so they cannot be gauged away. Furthermore, from eq. (3.3) the scalar s seems a ghost! Of course these conclusions are wrong, and for any d linearized GR is a ghost-free theory; in particular, in d = 3 it only has two radiative degrees of freedom, corresponding to the ±2 helicities of the graviton (and, in generic d, (d+1)(d-2)/2 radiative degrees of freedom, corresponding to the fact that the little group is SO(d-1)). We know very well that the remaining degrees of freedom of GR are physical (i.e. gauge-invariant) but non-radiative. How is this consistent with the fact that s, as well as all the extra components of $h^{\rm TT}_{\mu\nu}$, satisfy a Klein-Gordon rather than a Poisson equation?
The answer, as discussed in [53], is related to the fact that $h^{\rm TT}_{\mu\nu}$ and s are non-local functions of the original metric perturbation $h_{\mu\nu}$. In particular, inverting eq. (3.2) one finds
$$s = P^{\mu\nu}h_{\mu\nu} = \left(\eta^{\mu\nu} - \frac{\partial^\mu\partial^\nu}{\Box}\right)h_{\mu\nu}\,. \qquad (3.7)$$
The fact that s, as a function of $h_{\mu\nu}$, is non-local in time means that the initial data assigned for $h_{\mu\nu}$ on a given time slice are not sufficient to evolve s, so a naive counting of degrees of freedom goes wrong. A simple but instructive example of what exactly goes wrong is provided by a scalar field φ that satisfies a Poisson equation $\nabla^2\phi = \rho$. If we define a new field $\tilde\phi$ from $\tilde\phi = \Box^{-1}\phi$, the original Poisson equation can be rewritten as
$$\Box\tilde\phi = \nabla^{-2}\rho\,, \qquad (3.8)$$
so now $\tilde\phi$ looks like a propagating degree of freedom. However, for ρ = 0 our original equation $\nabla^2\phi = \rho$ only has the solution φ = 0. If we want to rewrite it in terms of $\tilde\phi$ without introducing spurious degrees of freedom, we must therefore supplement eq. (3.8) with the condition that, when ρ = 0, $\tilde\phi = 0$. In other words, the homogeneous plane-wave solutions of eq. (3.8),
$$\tilde\phi_{\rm hom}(x) = \int d^dk\left[a_k\,e^{ikx} + a^*_k\,e^{-ikx}\right]\,, \qquad (3.9)$$
must be discarded: $\tilde\phi$ is fixed uniquely by the original equation, and $a_k$, $a^*_k$ cannot be considered as free parameters that, upon quantization, give rise to the creation and annihilation operators of the quantum theory.
Exactly the same situation takes place in GR, for the field s and for the extra components of $h^{\rm TT}_{\mu\nu}$. For instance, writing s in terms of the variables entering the (3+1) decomposition (and specializing to d = 3), one finds that s is given by a $\Box^{-1}$ operator acting on a combination of the potentials Φ and Ψ, eq. (3.10), where Φ and Ψ are the scalar Bardeen variables defined in flat space (see [53]). Since Φ and Ψ are non-radiative and satisfy Poisson equations, s is non-radiative too, and it is just the $\Box^{-1}$ factor in eq. (3.10) that (much as the $\Box^{-1}$ in the definition of $\tilde\phi$) potentially introduces a fake propagating degree of freedom. In order to eliminate such a spurious degree of freedom, we must supplement eq. (3.6) with the condition that s = 0 when T = 0, i.e. we must again discard the homogeneous solution of eq. (3.6) (and similarly for eq. (3.5)). This implies that, at the quantum level, there are no creation and annihilation operators associated to s. Therefore s cannot appear on the external legs of a Feynman diagram, and there is no Feynman propagator associated to it, so it cannot circulate in loops. One might also observe that the contribution to the propagator of s is canceled by an equal and opposite contribution due to the helicity-0 component of $h^{\rm TT}_{\mu\nu}$, see app. A1 of [38]. However, this only shows that, in the classical matter-matter interaction described by tree-level diagrams such as that in Fig. 1, these contributions cancel and we remain, as expected, with the contribution from the exchange of the helicity ±2 modes. This cancellation has nothing to do with the vacuum stability of GR. Consider in fact the graphs shown in Fig. 2. If s were a dynamical ghost field that could be put on external lines, these graphs would describe a vacuum decay process. Such a process is kinematically allowed because the ghost s carries a negative energy that compensates the positive energies of the other final particles. There is no corresponding vacuum decay graph in which we replace s by the helicity-0 component of $h^{\rm TT}_{\mu\nu}$, since the latter is not a ghost, and the process is then no longer kinematically allowed. In any case, these processes have different final states, so the positive probability for, e.g., the decay vac → ssφφ shown in Fig. 2 (where φ is any normal matter field, or a graviton) cannot be canceled by anything.
It is interesting, and somewhat subtle, to understand the same point in terms of the imaginary part of vacuum-to-vacuum diagrams. The vacuum-to-vacuum diagrams corresponding to the processes of Fig. 2 are shown in Fig. 3. Here, whenever we have a dashed line corresponding to s, we indeed have a corresponding graph where this line is replaced by the propagation of the helicity-0 component of $h^{\rm TT}_{\mu\nu}$, and one might believe that these graphs cancel. In fact this is not true, due to a subtlety in the iε prescriptions of the propagators. For a normal particle the usual scalar propagator is $-i/(k^2 + m^2 - i\epsilon)$ (with our (-,+,+,+) signature). For a propagating ghost the correct prescription is instead $+i/(k^2 - m^2 + i\epsilon)$. As discussed in [54], this +iε choice propagates negative energies forward in time, but preserves the unitarity of the theory and the optical theorem. With a -iε choice, in contrast, ghosts carry positive energy but negative norm, and the probabilistic interpretation of QFT is lost. This latter choice is therefore unacceptable. In our case m = 0, and the sum of the contributions to each internal line of the "healthy" helicity-0 component of $h^{\rm TT}_{\mu\nu}$ and of the ghost s is
$$\frac{-i}{k^2 - i\epsilon} + \frac{i}{k^2 + i\epsilon}\,. \qquad (3.11)$$
We see that, because of the different iε prescriptions, these two terms do not cancel. Indeed, the ghost contribution to the diagrams of Fig. 3 generates an imaginary part that corresponds to the modulus squared of the corresponding diagrams in Fig. 2, as required by unitarity. In contrast, the contribution to the diagrams of Fig. 3 from the helicity-0 component of $h^{\rm TT}_{\mu\nu}$ has no imaginary part, again in agreement with unitarity, since the processes corresponding to Fig. 2, with s replaced by the helicity-0 component of $h^{\rm TT}_{\mu\nu}$, are not kinematically allowed.
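Explicitly, keeping ε finite, the combination (3.11) evaluates to (a short check we add for clarity)
$$\frac{-i}{k^2 - i\epsilon} + \frac{i}{k^2 + i\epsilon} = \frac{2\epsilon}{(k^2)^2 + \epsilon^2} \;\to\; 2\pi\,\delta(k^2) \qquad (\epsilon \to 0)\,,$$
a purely on-shell remainder: the would-be cancellation fails precisely on the mass shell of the exchanged massless quanta, which is what feeds the imaginary part required by unitarity.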
To sum up, what saves vacuum stability in GR is not a cancelation between the contribution of the ghost s and that of the helicity-0 component of $h^{\rm TT}_{\mu\nu}$. If one treats them as propagating degrees of freedom there is no such cancelation, and one reaches the (wrong) conclusion that in GR the vacuum is unstable. Rather, vacuum stability is preserved by the fact that the field s, as well as the extra components of $h^{\rm TT}_{\mu\nu}$, are non-radiative. There are no annihilation or creation operators associated to them, and we are not allowed to put these fields on external lines or in loops.
In other words, the theory defined by eq. (3.3) is not equivalent to that defined by the quadratic Einstein-Hilbert action (3.1), because the non-local transformation between $h_{\mu\nu}$ and $\{h^{\rm TT}_{\mu\nu}, s\}$ introduces spurious propagating modes. We can still describe GR using the formulation in terms of $\{h^{\rm TT}_{\mu\nu}, s\}$, but in this case we must impose on eq. (3.6) the boundary condition that s = 0 when T = 0 (and similarly for the extra components of $h^{\rm TT}_{\mu\nu}$ in eq. (3.5)), in order to eliminate these spurious modes.
[Figure 3, panels (a) and (b): The vacuum-to-vacuum diagrams corresponding to the processes shown in Fig. 2.]
3.2 The apparent ghost in the non-local massive theory
It is now straightforward to make contact between linearized GR and its non-local massive deformation given by eq. (1.3). Integrating by parts the operator $P^{\mu\nu}$ and using eqs. (3.3) and (3.7), the Lagrangian (1.5) can be written as
$$\mathcal{L} = -\frac{1}{2}\partial_\lambda h^{\rm TT}_{\mu\nu}\partial^\lambda h^{{\rm TT}\,\mu\nu} + \frac{d-1}{2d}\left(\partial_\lambda s\,\partial^\lambda s - m^2 s^2\right) + \frac{\kappa}{2}\left(h^{\rm TT}_{\mu\nu}T^{\mu\nu} + \frac{1}{d}\,s\,T\right)\,. \qquad (3.12)$$
Thus, the non-local term in eq. (1.5) is simply a mass term for the field s. However, in the original equation of motion (1.3) that this action is supposed to reproduce, the non-local term was defined with the retarded Green's function. Thus, in order not to introduce spurious propagating degrees of freedom, we must simply continue to impose the condition that s is a non-radiative field, just as we did in GR. In other words, the equation of motion (3.6) is now replaced by
$$\left(\Box + m^2\right)s = \frac{\kappa}{2(d-1)}\,T\,, \qquad (3.13)$$
and, just as in eq. (3.9), we must refrain from interpreting the coefficients of the plane waves $e^{\pm ikx}$ with $k^2 = m^2$ as free parameters that, upon quantization, give rise to creation and annihilation operators. Thus, again, there are no creation and annihilation operators associated to s, which therefore cannot appear on the external lines of graphs such as those in Fig. 2, nor on the internal lines of graphs such as those in Fig. 3, and there is no vacuum decay. Observe that, since the pole of s is now massive while that of the helicity-0 mode of $h^{\rm TT}_{\mu\nu}$ remains massless, there is no cancelation among them in the tree graphs that describe the classical matter-matter interaction, which is therefore modified at cosmological distances, compared to GR. This is just as we want, since our aim is to modify classical gravity in the IR. In contrast, the lack of cancelation between s and the helicity-0 mode has nothing to do with unitarity and vacuum decay. As discussed above, this cancellation does not take place even in the m = 0 case. Graphs such as those in Fig. 2 could not be canceled by anything, and the reason why the vacuum decay amplitude in GR is zero is that these graphs simply do not exist, because we cannot put s on external lines. It is instructive to compare the situation with the usual local theory of linearized massive gravity. With a generic local mass term, the quadratic Lagrangian reads
$$\mathcal{L} = \frac{1}{2}h_{\mu\nu}\mathcal{E}^{\mu\nu,\rho\sigma}h_{\rho\sigma} - \frac{m^2}{2}\left(b_1\,h_{\mu\nu}h^{\mu\nu} + b_2\,h^2\right)\,. \qquad (3.14)$$
Using again the decomposition (3.2), the action now depends also on $\varepsilon_\mu$, since the invariance under linearized diffeomorphisms is broken. Writing $\varepsilon_\mu = \varepsilon^{\rm T}_\mu + \partial_\mu\alpha$, the scalar sector now depends both on s and α, with $h^{\rm scalar}_{\mu\nu} = \partial_\mu\partial_\nu\alpha + \frac{1}{d}\,\eta_{\mu\nu}\,s$. In particular, the mass term in eq. (3.14) produces a term proportional to $(b_1 + b_2)(\Box\alpha)^2$. For $b_1$, $b_2$ generic, this higher-derivative term gives rise to a ghost, and the Fierz-Pauli tuning $b_1 + b_2 = 0$ is designed precisely to get rid of it. Indeed, this longitudinal mode of the metric, of the form $\partial_\mu\partial_\nu\alpha$, is nothing but the mode that is isolated using the Stückelberg formalism, and the dRGT theory [34,35] is constructed just to ensure that its equations of motion remain of second order, even at the non-linear level. The situation is quite different from that in eq. (3.12), where no higher-derivative term is generated, and we have just added a mass term to an already non-radiative field.
4 Spurious degrees of freedom from auxiliary fields
An alternative way of studying the degrees of freedom of a non-local theory is to transform it into a local theory by introducing auxiliary fields, see e.g. [55,56]. As has been recognized in various recent papers [18,24,57,58], such a "localization" procedure introduces, however, spurious solutions, and in particular spurious ghosts. This is in fact an equivalent way of understanding that the apparent ghosts of these theories do not necessarily correspond to propagating degrees of freedom. An example that has been much studied is the non-local model originally proposed by Deser and Woodard [13], which is based on the action
$$S = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\;R\left[1 + f(\Box^{-1}R)\right]\,, \qquad (4.1)$$
for some function f. This can be formally rewritten in local form by introducing two fields ξ(x) and φ(x) and writing
$$S = \frac{1}{16\pi G}\int d^4x\,\sqrt{-g}\left[R\left(1 + f(\phi)\right) + \xi\left(\Box\phi - R\right)\right]\,. \qquad (4.2)$$
Thus, ξ is a Lagrange multiplier that enforces the equation $\Box\phi = R$, so that formally $\phi = \Box^{-1}R$. The kinetic term $\xi\Box\phi = -\partial_\mu\xi\,\partial^\mu\phi$ (up to a total derivative) can be diagonalized by writing $\xi = \varphi_1 + \varphi_2$, $\phi = \varphi_1 - \varphi_2$, and then
$$-\partial_\mu\xi\,\partial^\mu\phi = -\partial_\mu\varphi_1\,\partial^\mu\varphi_1 + \partial_\mu\varphi_2\,\partial^\mu\varphi_2\,, \qquad (4.3)$$
and we see that one of the two auxiliary fields ($\varphi_2$, given our signature) is a ghost. However, this apparent ghost is a spurious degree of freedom, as is immediately understood by observing that the above formal manipulations hold even when the function f is equal to a constant $f_0$ [57] (or, in fact, even when f = 0). In this case the original action (4.1) is obviously the same as GR with a rescaled Newton constant, and certainly has no ghost (and, in fact, it has no ghost also for a broad class of functions $f(\Box^{-1}R)$ [24]). Once again, the point is that eq. (4.3) is equivalent to eq. (4.1) only if we discard the homogeneous solution of $\Box\phi = R$, and therefore there are no annihilation and creation operators associated to φ (nor to ξ). A similar example has been given, for a non-local model based on the term $R_{\mu\nu}\Box^{-1}G^{\mu\nu}$, in [18], where it was also clearly recognized that the auxiliary ghost field that results from the localization procedure never exists as a propagating degree of freedom, and does not appear on the external lines of the Feynman graphs.
Exactly the same happens in our model. To define the model we must specify what $\Box^{-1}$ actually means. In general, an equation such as $\Box U = -R$ is solved by
$$U(x) = U_{\rm hom}(x) - \int d^4x'\,\sqrt{-g(x')}\;G(x;x')\,R(x')\,, \qquad (4.4)$$
where $U_{\rm hom}(x)$ is any solution of $\Box U_{\rm hom} = 0$ and G(x;x') is any Green's function of the $\Box$ operator. To define our model we must specify what definition of $\Box^{-1}$ we use, i.e. we must specify the Green's function and the solution of the homogeneous equation. In our case we use the retarded Green's function, but we must still complete the definition of $\Box^{-1}$ by specifying $U_{\rm hom}(x)$. A possible choice is $U_{\rm hom}(x) = 0$. Then, in eq. (1.1),
$$\left(\Box^{-1}_{\rm ret}R\right)(x) = \int d^4x'\,\sqrt{-g(x')}\;G_{\rm ret}(x;x')\,R(x')\,, \qquad (4.5)$$
and similarly for the $\Box^{-1}$ factors involved in the extraction of the transverse part. Consider now what happens if we rewrite the theory in local form, introducing $U = -\Box^{-1}R$ and $S_{\mu\nu} = -U g_{\mu\nu}$. Formally, eq. (1.1) can be written as
$$G_{\mu\nu} - m^2\,\frac{d-1}{2d}\,S^{\rm T}_{\mu\nu} = 8\pi G\,T_{\mu\nu}\,, \qquad \Box U = -R\,, \qquad (4.8)$$
where $S_{\mu\nu} = S^{\rm T}_{\mu\nu} + \frac{1}{2}(\nabla_\mu S_\nu + \nabla_\nu S_\mu)$. To make contact with eq. (1.3) we linearize over flat space and use eq. (1.2). Then eq. (4.8) can be rewritten as the coupled system
$$\mathcal{E}^{\mu\nu,\rho\sigma}h_{\rho\sigma} - \frac{d-1}{d}\,m^2\,P^{\mu\nu}\,U = -\frac{\kappa}{2}\,T^{\mu\nu}\,, \qquad (4.9)$$
$$\Box U = \Box h - \partial_\mu\partial_\nu h^{\mu\nu}\,. \qquad (4.10)$$
Such a local form of the equations can be convenient, particularly for numerical studies, because it transforms the original integro-differential equations into a set of coupled differential equations. However, exactly as in the example discussed above, it introduces spurious solutions. The choice of the homogeneous solution, which in the original non-local formulation amounts to a definition of the theory, is now translated into a choice of initial conditions for the field U(x). There is one, and only one, choice that gives back our original model. For instance, if the original non-local theory is defined through eq. (4.5), we must choose the initial conditions on U in eq. (4.10) such that the solution of the associated homogeneous equation vanishes. In any case, whatever the choices made in the definition of $\Box^{-1}$, the corresponding homogeneous solution of eq. (4.10) is fixed, and does not represent a free field that we can take as an extra degree of freedom of the theory. In flat space this homogeneous solution is a superposition of plane waves of the form (3.9), and the coefficients $a_k$, $a^*_k$ are fixed by the definition of $\Box^{-1}$ (e.g. at the value $a_k = a^*_k = 0$ if we use the definition (4.5)); at the quantum level it makes no sense to promote them to annihilation and creation operators. There is no quantum degree of freedom associated to them.
Comparing eq. (4.9) with eq. (1.3), we see that at the linearized level $U = P^{\rho\sigma}_{\rm ret}h_{\rho\sigma}$. Therefore, at the linearized level, U is the same as the variable s given in eq. (3.7). The fact that the homogeneous solution for U does not represent a free degree of freedom then means that the same holds for s. We therefore reach the same conclusion as in the previous section: the homogeneous solutions for s do not describe propagating degrees of freedom, and at the quantum level there are no creation and annihilation operators associated to them.
5 Conclusions
Non-local modifications of GR have potentially very interesting cosmological consequences. At the conceptual level, however, they raise some issues of principle which must be understood before using them confidently for comparison with cosmological observations. In particular, these equations feature the retarded inverse of the d'Alembertian. The retarded prescription ensures causality, but at the same time the fact that $\Box^{-1}_{\rm ret}$ appears not only in the solutions of such equations, but already in the equations themselves, tells us that such equations cannot be fundamental. Rather, they are effective classical equations.
Such non-local effective equations can emerge in a purely classical context. Typical examples are obtained when integrating out the short-wavelength modes to obtain an effective theory for the long-wavelength modes. Another example is given by the formalism for gravitational-wave production beyond leading order. In both cases one basically re-injects a retarded solution, obtained at lowest order, into the equation governing the next-order corrections. Another way to obtain non-local equations is by performing a quantum averaging, in particular when working with the in-in expectation values of the quantum fields, and deriving these equations from an effective action that takes into account the radiative corrections. In particular, in semiclassical quantum gravity we can write such effective non-local (but causal) equations for an effective metric $\langle 0_{\rm in}|\hat g_{\mu\nu}|0_{\rm in}\rangle$.
We have seen (in agreement with various recent works, e.g. [18,24,57,58]) that, if one is not careful, it is quite easy to introduce spurious degrees of freedom in these models, which are furthermore ghost-like. Basically, this originates from the fact that the kernel of the $\Box$ operator is non-trivial: the equation $f = \Box^{-1}(0)$ does not imply that f = 0, but only that f satisfies $\Box f = 0$. The non-local equations that we are considering only involve the retarded solutions of equations of the form $\Box f = j$, for some source j, i.e.
$$f(x) = \int dx'\,G_{\rm ret}(x;x')\,j(x')\,. \qquad (5.1)$$
However, any action principle that (with some more or less formal manipulations, as discussed in sect. 2) reproduces the equation $\Box f = j$ will automatically carry along the most general solution of this equation, of the form
$$f(x) = f_{\rm hom}(x) + \int dx'\,G(x;x')\,j(x')\,, \qquad (5.2)$$
where $\Box f_{\rm hom} = 0$ and G(x;x') is a generic Green's function. In order to recover the solutions that actually pertain to our initial non-local theory, we must impose the appropriate boundary conditions, which amount to choosing $G(x;x') = G_{\rm ret}(x;x')$ and fixing once and for all the homogeneous solution. In particular, one should be careful not to use the corresponding Lagrangian at the quantum level, and one should not include the corresponding fields on the external lines or in loops. The corresponding particles, some of which are unavoidably ghost-like, do not correspond to propagating degrees of freedom in our original problem, and the quantization of these spurious solutions does not make sense. We have seen in particular how the above considerations apply to the model defined by eq. (1.1). We have found that the apparent ghost signaled by the second term in eq. (1.7) is actually a non-radiative degree of freedom, and we have also seen that in the m → 0 limit it goes smoothly into a non-radiative degree of freedom of GR. Finally, we have shown how the same conclusion emerges from the point of view of the spurious degrees of freedom induced by the localization procedure. The conclusion is that the apparent ghost of eq. (1.7) is not an indication of any problem of consistency of the theory at the quantum level, and eq. (1.1), taken as an effective classical equation, defines a consistent classical theory that can be safely used for cosmological purposes.
"Physics"
] |
Broadband and narrowband laser-based terahertz source and its application for resonant and non-resonant excitation of antiferromagnetic modes in NiO
A versatile table-top high-intensity source of terahertz radiation, enabling the generation of pulses with both broadband and narrowband spectra and a frequency tunable up to 3 THz, is presented. The terahertz radiation pulses are generated by optical rectification of femtosecond pulses from a Cr:forsterite laser setup in the nonlinear organic crystal OH1. Electric field strengths close to 20 MV/cm for broadband and above 2 MV/cm for narrowband terahertz pulses were achieved. Experiments on the excitation of spin-subsystem oscillations of antiferromagnetic NiO were carried out. Selective excitation of the 0.42 THz mode was observed for the first time at room temperature using narrowband terahertz pulses tuned close to the mode frequency. © 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Introduction
In recent years, sources of single-cycle and multi-cycle pulses of terahertz (THz) radiation have been increasingly used for both fundamental and applied research. Numerous new experiments in THz science have become possible due to the recent availability of THz sources with high pulse energies and high electric and magnetic field strengths [1][2][3][4]. High-intensity terahertz pulses are generated by various methods, in particular frequency down-conversion, optical rectification of femtosecond laser pulses [1,2,[5][6][7], and radiative phenomena of ultra-short, highly charged electron beams in modern linear accelerators [3,8].
Single-cycle THz pulses are generated in table-top setups by optical rectification of femtosecond laser pulses in nonlinear organic [1,9] and inorganic [10,11] crystals, in gas plasma driven by a two-colour laser [12], and in large-scale accelerator facilities using relativistic charged-particle beams [3]. The advantages of optical rectification in a highly nonlinear crystal are a conversion efficiency of a few percent, shot-to-shot radiation stability, and high beam quality, which allows tight focusing and the highest fields [1,2]; however, the method has several limitations. In particular, the radiation spectral bandwidth extends only up to 10 THz, and the pulse energy is limited by the size of the nonlinear crystal and by the pump energy density at which its destruction occurs. Compared with optical rectification, the gas-plasma THz generation scheme has a very wide spectral bandwidth (up to 50 THz) [13], as well as the theoretical possibility of achieving a field strength of up to 1 GV/cm [5]. Systems that generate THz radiation using charged-particle beams make it possible to realize high pulse energies at high repetition rates (up to 100 kHz) [3]. A significant disadvantage of these sources is the inability to generate both broadband and narrowband radiation in a single setup. In addition, it is difficult to implement synchronization with probe laser radiation for the pump-probe schemes used in THz spectroscopy.
Multi-cycle THz pulses are generated in laser-based sources by mixing two delayed, linearly chirped pulses in a nonlinear medium [14]. THz pulses with close to microjoule energy, tunable in the frequency range of 0.3-1.3 THz, were obtained in a LiNbO3 crystal by tilted-pulse-front pumping [15]. The generation of THz pulses with a tunable central frequency in the range from 0.3 to 0.8 THz in an HMQ-TMS organic crystal, by collinearly phase-matched optical rectification of temporally shaped 800 nm pulses, was reported in [16]. Optical rectification of spatially shaped femtosecond laser pulses in a lithium niobate crystal was used to generate THz pulses tunable in the frequency range 0.3-1.2 THz [17]. Extreme narrowband (quasi-monochromatic), multi-cycle THz generation (1% bandwidth, 0.361 THz) with high pulse energy (0.6 mJ), using a large-aperture periodically poled lithium niobate crystal, was demonstrated in [4].
Experiments covering linear and nonlinear THz spectroscopy, resonant pumping, and control of material properties require the capability to control the number of electric-field oscillations from one to several, corresponding to broadband and narrowband spectra, with the maximum possible field strength of up to several tens of MV/cm. Despite numerous studies in the field of intense terahertz pulse generation, there is currently no versatile THz source with high conversion efficiency that provides both broadband and narrowband THz generation with a tunable central frequency.
In this paper we present a versatile, table-top, laser-based, easy-to-implement THz source that generates both broadband and narrowband pulses with a tunable center frequency and high energy. In addition, the resonant and non-resonant effects of the THz radiation pulses of our source on the spin subsystem of a typical antiferromagnet, NiO, at room temperature are demonstrated.
Versatile broadband and narrowband laser-based THz source
The versatile, spectrally tunable source of THz radiation pulses has been developed on a unique chromium-forsterite laser system [18] based on a chirped-pulse amplification scheme; it consists of a seed oscillator, a stretcher, amplifying stages, and a temporal compressor (Fig. 1(a)). The laser system delivers femtosecond optical pulses at a wavelength of 1240 nm with an energy of up to 40 mJ, a duration of 100 fs (FWHM, Fig. 1(b)), and a repetition rate of 10 Hz. Broadband THz pulses are produced by optical rectification of femtosecond laser pulses in an organic crystal [1,9]. To generate THz radiation pulses with a specific (narrowband) spectrum, it is necessary to form optical pulses with a given temporal shape (Figs. 1(c) and 1(d)) [14,19]. We used two alternative devices, included in the optical scheme of the femtosecond laser system, to implement this mode of operation: the first based on an acousto-optic dispersion delay line (AODDL) [20,21], the second based on a Mach-Zehnder interferometer (MZI) [22] (Fig. 1(a)).
The advantage of placing the AODDL before the amplifier is the possibility of recovering the diffraction losses in the acousto-optic crystal; in fact, the subsequent amplifier stages operate in saturation mode and are not sensitive to the input seed energy. Moreover, the AODDL allows real-time control of the frequency and spectral width of the generated terahertz pulses. However, the amplification of a multi-peaked laser pulse requires careful adjustment of the amplifier gains, since, instead of a single pulse stretched in time, a sequence of shorter pulses is used, which carries a potential risk of exceeding the optical damage threshold of the laser components. Unlike the AODDL-based device, the Mach-Zehnder interferometer introduces a pulse energy loss of at least 50%, which cannot be compensated since this pulse-forming system is placed after the amplifiers. The advantage of the interferometer scheme is a larger adjustment range for the duration of the laser pulse (up to the duration of the chirped pulse). This makes it possible to obtain a narrower spectral line of THz radiation, whose center frequency [22] can be varied by changing the beat frequency via the delay of the pulse replica.
In recent works [1,9] it was shown that optical rectification of femtosecond laser pulses with a wavelength of 1240 nm allows the generation of linearly polarized THz pulses in various nonlinear organic crystals (DAST, DSTMS, OH1) with high pulse energy (from tens to hundreds of µJ) and conversion efficiency (a few percent). A typical waveform of a single-cycle THz pulse generated in an OH1 crystal, and the corresponding spectrum calculated by Fourier transformation, are shown in Figs. 2(a) and 2(c). In order to obtain the multi-cycle (narrowband) THz pulse shown in Figs. 2(b) and 2(d), the laser beam is directed to the above-mentioned pulse-forming scheme of the versatile THz radiation source. The waveforms were measured by electro-optical sampling in a 100 µm thick GaP electro-optical crystal with (110) orientation (on a 2 mm thick GaP substrate with (100) orientation).
Excitation of spin subsystem oscillations of an antiferromagnetic NiO
To demonstrate the benefit of using a versatile THz source, we have carried out experiments on the excitation of spin-subsystem oscillations of antiferromagnetic NiO. To excite antiferromagnetic modes in NiO, high-intensity THz pulses at a frequency close to the resonance were used. The spin dynamics driven by the magnetic field of the THz pulse has been investigated via the magneto-optical Faraday effect using infrared femtosecond laser pulses.
The scheme of the experiments is shown in Fig. 3. The main part of the laser system radiation (95%) at the fundamental wavelength of 1240 nm was directed to a nonlinear organic OH1 crystal (Rainbow Photonics). The OH1 crystal was 6 mm in diameter and 425±10 µm thick, with an antireflection coating at the pump laser wavelength. After the OH1 crystal, a low-pass filter (LPF8.8-47, Tydex) with a cut-off frequency of 10 THz was used to reject the residual laser pump radiation. The filter attenuated the laser radiation by more than a factor of 10^8. A high conversion efficiency (∼3%) was achieved at a fluence of 6 mJ/cm² of the transform-limited pump laser pulse. The maximum THz pulse energy was obtained at a pump energy density of 15 mJ/cm². To achieve the maximum strength of the electric and magnetic fields, the THz beam was expanded 6 times with a telescope consisting of two off-axis parabolic mirrors with reflected focal lengths of 25.4 mm and 152.4 mm. The THz radiation was focused on the sample by an off-axis parabolic mirror with a reflected focal length of 50.8 mm.
A probe pulse with a duration of 100 fs at a wavelength of 1240 nm passed through a hole in the parabolic mirror and was focused on the NiO sample's surface by a positive lens with a focal length of 100 mm into a 20 µm spot (at the 1/e² level). The energy of the probe pulse was 1 µJ. The radiation then passed through a half-wave plate and a Wollaston prism, and was recorded by balanced photodiodes (Thorlabs PDB210C/M). A delay line installed in the pump beam of the OH1 crystal was used to set the temporal overlap between the probe and THz pulses. The experimental scheme for THz generation and for the study of the spin dynamics in the NiO crystal was placed in a housing with dried air to reduce the absorption of THz radiation by water vapor (the absolute humidity at a temperature of 23 °C was 0.41 g/m³). The THz polarization was parallel to the probe pulse polarization. Table 1 shows the parameters of the THz radiation pulses used in the experiment, where f is the frequency, Δf the spectral line width, r_{e^-2} the focusing spot radius at the 1/e² level, W the energy, E the electric field strength, and B the magnetic field induction. The last row of Table 1 gives the parameters of the single-cycle (broadband) THz pulse (Figs. 2(a) and 2(c)). The temporal and spectral characteristics of the THz pulses were measured using electro-optical sampling. The THz beam size in the focal plane was estimated using the knife-edge method, and a calibrated Golay cell (GC-1D, Tydex) was used to measure the THz pulse energy. The electric field strength E of the THz pulses was estimated from the measured energy, duration, and spot size, assuming a Gaussian pulse profile [23][24][25]. The corresponding magnetic field induction was calculated from the well-known expression B = E/c, where c is the speed of light.
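The order of magnitude of these field values can be checked from the measured energy, duration, and spot size. The sketch below is our own rough estimate, assuming Gaussian temporal and spatial profiles; the effective pulse duration used here is an illustrative assumption, not a parameter quoted above:

```python
import math

EPS0, C = 8.854e-12, 3.0e8  # vacuum permittivity (F/m), speed of light (m/s)

def peak_fields(W, r, tau):
    """Rough peak E (V/m) and B (T) for pulse energy W (J), 1/e^2 spot
    radius r (m) and effective duration tau (s), Gaussian profiles assumed."""
    I = 2 * W / (math.pi * r**2 * tau)   # peak intensity, W/m^2
    E = math.sqrt(2 * I / (EPS0 * C))
    return E, E / C                      # B = E / c

# broadband pulse: ~300 uJ in a ~0.2 mm spot, sub-ps duration (assumed values)
E, B = peak_fields(W=300e-6, r=0.2e-3, tau=0.5e-12)
print(f"E ~ {E/1e8:.1f} MV/cm, B ~ {B:.1f} T")  # 1 MV/cm = 1e8 V/m
```

With these assumed inputs the estimate lands in the tens of MV/cm and several tesla, consistent with the values reported above and in Table 1.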
In the experiments, we used a double-side polished monocrystalline NiO (111) sample, 45 µm thick and 5 mm in diameter. Nickel oxide (NiO) crystallizes in a cubic structure and is antiferromagnetic below the Néel temperature T_N ≈ 523 K [26]. The magnetic field of the THz pulse was applied parallel to the (111) surface of the NiO sample and coupled directly to the degrees of freedom of the spin system of the antiferromagnet. In this case, a Zeeman torque is exerted on the magnetic dipole associated with each spin, at a frequency that can be tuned into resonance with a collective mode of the NiO magnons [27][28][29][30].
The time dependences of the rotation of the probe-pulse polarization plane in the NiO sample due to the magneto-optical Faraday effect, under the action of broadband and narrowband THz pulses of different frequencies, are shown in Fig. 4. We observe a rising oscillation amplitude and a subsequent exponential decay with a time constant of the order of 40 ps under the action of either the broadband (Fig. 4(a)) or the narrowband (at a frequency of 1 THz, Fig. 4(b)) THz pulses. In both cases, a quasi-constant period of 1 ps can be observed, which corresponds to the 1 THz antiferromagnetic resonant mode in NiO. However, the oscillations have a rather sharp front and reach their maximum amplitude about 4 ps after exposure to the broadband THz pulse. In the case of the narrowband pulse at 1 THz, the oscillations develop more smoothly and reach their maximum amplitude about 8 ps after the THz pulse. Although the ratio of the magnetic fields of the broadband and narrowband THz pulses is 16, the ratio of the oscillation amplitudes is only 2, indicating that a more effective excitation is reached with the narrowband resonant stimulus. The Faraday signals induced by narrowband THz pulses at frequencies of 0.5 THz and 2 THz (Figs. 4(c) and 4(d)) are 10 times smaller than the maximum amplitude obtained with the broadband THz pulse, and it is very difficult to find any periodicity in the observed signal.
In [31] it was shown that at low temperature (below 273 K) five antiferromagnetic resonant modes can be observed in NiO: three low-frequency modes in the range up to 0.5 THz (0.028, 0.198, 0.42 THz) and two high-frequency modes in the range from 1 to 2 THz (1.14, 1.29 THz). At room temperature, the splitting of the high-frequency modes degrades, and the two modes merge into one at a frequency of ∼1.07 THz (mode softening occurs). The mode with a frequency of ∼0.4 THz depends weakly on temperature and hardly changes in frequency, unlike the low-frequency pair at 0.028 and 0.198 THz, which merges into a mode at a frequency of ∼0.2 THz. Thus, at room temperature, according to [31], we can observe three antiferromagnetic modes out of five, in the vicinity of the frequencies 0.2, 0.4, and 1 THz.
In addition, the experimental detection of a nonlinear response of the NiO spin system, namely a weak signal (about 1% of the maximum amplitude) in the magnon oscillation spectrum at twice the frequency of the main antiferromagnetic resonant mode (∼1 THz), was reported in [28,29] for room-temperature excitation by broadband and narrowband THz pulses tuned to the resonant mode frequency, respectively.
The amplitude spectra obtained by Fourier transformation of the time-dependent magneto-optical signals of Fig. 4 are shown in Fig. 5. Under the action of a broadband THz pulse with a peak magnetic field amplitude of 6.5 T (Fig. 5(a)), we clearly see two magnon modes at frequencies of 0.25 and 1 THz (indicated by arrows), which have been observed previously in [27,28]. In this exposure mode, we could not resolve the feature associated with higher-order nonlinear effects, such as a second-harmonic component of the ∼1 THz antiferromagnetic mode, because of the low signal-to-noise ratio [28], despite the high THz field (the last row of Table 1). However, when using a narrowband THz pulse with a frequency of ∼1 THz and a peak magnetic field amplitude of 0.4 T, the second-harmonic spectral feature can be traced (Fig. 5(b)), as in [29]. The second-harmonic signature is visible even though the pulse repetition rate of our THz source is 10^4 times lower than in [29], and the signal-to-noise ratio is 10^2 times worse. The obtained result agrees well with the theoretical model of the excitation of nonlinear spin oscillations in antiferromagnetic NiO under the action of a picosecond THz pulse with a frequency of 1 THz and a magnetic field amplitude of 0.4 T [32]. Figure 5(c) shows two low-frequency modes, at 0.27 and 0.42 THz, in the spectrum of spin oscillations induced by a narrowband THz pulse with a frequency of 0.5 THz (peak magnetic field amplitude ∼0.1 T). Note that the antiferromagnetic resonant mode at 0.42 THz is observed for the first time in experiments on the excitation of spin oscillations by THz pulses at room temperature. Previously, a mode at a frequency of ∼0.5 THz was observed experimentally only by Brillouin spectroscopy [31] and by time-domain terahertz spectroscopy [33].
The spectrum of magnon oscillations when the NiO sample is excited by a narrowband THz pulse with a frequency of 2 THz (magnetic field amplitude 0.8 T) is shown in Fig. 5(d). As can be seen from the figure, no lines are detected in the spectrum in the range from 0.5 to 2.2 THz; the signal is white noise. This indicates that a THz pulse tuned away from the main antiferromagnetic resonances does not excite the spin subsystem of antiferromagnetic NiO. Consequently, the interaction of the THz pulse with the NiO spin system has a strictly resonant character.
In addition, we can see two local maxima at frequencies of 0.86 and 1.14 THz, located symmetrically relative to the resonant frequency of 1 THz (see Fig. 5(a)). In [28], the presence of local maxima at 0.77 and 1.23 THz was interpreted as a process of mixing of the difference and sum frequencies of the in-plane and out-of-plane magnon oscillations at 0.23 and 1 THz. In our experiment, we assume that the appearance of the local maxima is not related to a propagation effect (Δf = 0.14 THz, not 0.25 THz), but may be due to a splitting of the 1 THz resonant mode in the magnetic field of the THz pulse, by analogy with [30,34].
Conclusion
We have presented a versatile table-top high-intensity source of THz radiation, enabling the generation of pulses with broadband and narrowband spectra and a center frequency tunable in the range of 0.1-3 THz. The THz generation, based on optical rectification in the organic crystal OH1, provides up to 300 µJ in the broadband mode and up to 7 µJ in the narrowband mode, with high conversion efficiency. The high field strengths, reaching several tens of MV/cm (for a single cycle) and several MV/cm (for several cycles), open new scientific avenues for the selective and nonlinear excitation of low-energy modes in condensed matter.
An experimental demonstration of the capabilities of this versatile THz source was carried out by exciting the spin subsystem of a typical antiferromagnet, NiO, with broadband and narrowband THz pulses at room temperature. Two modes, at 0.25 and 1 THz, were recorded under the action of a broadband THz pulse. A mode at 0.42 THz was observed for the first time, owing to selective excitation by narrowband THz pulses tuned close to this resonance. Our observations agree well with theoretical predictions for the antiferromagnetic resonant modes in NiO. It was shown that the interaction of THz pulses with the NiO spin subsystem has a strictly resonant character.
Funding
Ministry of Education and Science of the Russian Federation.
"Physics"
] |
Termination of Triangular Integer Loops is Decidable
We consider the problem whether termination of affine integer loops is decidable. Since Tiwari conjectured decidability in 2004, only special cases have been solved. We complement this work by proving decidability for the case that the update matrix is triangular.
Introduction
We consider affine integer loops of the form

    while ϕ do x ← A x + a    (1)

Here, A ∈ Z^{d×d} for some dimension d ≥ 1, x is a column vector of pairwise different variables x_1, ..., x_d, a ∈ Z^d, and ϕ is a conjunction of inequalities of the form α > 0, where α ∈ A[x] is an affine expression with rational coefficients³ over x (i.e., A[x] = {c^T x + c_0 | c ∈ Q^d, c_0 ∈ Q}). So ϕ has the form B x + b > 0, where 0 is the vector containing k zeros, B ∈ Q^{k×d}, and b ∈ Q^k for some k ∈ N.
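Since the loop class (1) is fully specified by (A, a, B, b), such loops are easy to represent and simulate programmatically. The following sketch is our own illustration (not part of the paper); all names are ours, and simulating up to a step bound of course proves nothing about termination in general:

```python
import numpy as np

class AffineLoop:
    """Affine integer loop (1): while B x + b > 0 (componentwise) do x <- A x + a."""
    def __init__(self, A, a, B, b):
        self.A = np.array(A, dtype=object)  # object dtype keeps exact integers
        self.a = np.array(a, dtype=object)
        self.B = np.array(B, dtype=object)
        self.b = np.array(b, dtype=object)

    def guard_holds(self, x):
        # the guard is a conjunction of strict inequalities, B x + b > 0
        return all(v > 0 for v in self.B @ x + self.b)

    def run(self, c, max_steps=10**5):
        """Iterate f(x) = A x + a from c; return the number of iterations until
        the guard fails, or None if it still holds after max_steps."""
        x = np.array(c, dtype=object)
        for n in range(max_steps):
            if not self.guard_holds(x):
                return n
            x = self.A @ x + self.a
        return None
```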
Def. 1 formalizes the intuitive notion of termination for such loops.
Definition 1 (Termination). Let f : Z^d → Z^d with f(x) = A x + a. If ∃c ∈ Z^d. ∀n ∈ N. ϕ[x/f^n(c)], then (1) is non-terminating and c is a witness for non-termination. Otherwise, (1) terminates.
Here, f^n denotes the n-fold application of f, i.e., we have f^0(c) = c and f^{n+1}(c) = f(f^n(c)). We call f the update of (1). Moreover, for any entity s, s[x/t] denotes the entity that results from s by replacing each variable x_i by the corresponding entry t_i of t.

⋆ Funded by DFG grant 389792660 as part of TRR 248 and by DFG grant GI 274/6.
³ Note that multiplying with the least common multiple of all denominators yields an equivalent constraint with integer coefficients, i.e., allowing rational instead of integer coefficients does not extend the considered class of loops.
Example 2. Consider a loop with the guard y + z > 0, in which the update of all variables is executed simultaneously. Such a program belongs to our class of affine loops, because it can be written equivalently in the form (1).
While termination of affine loops is known to be decidable if the variables range over the real [14] or the rational numbers [4], the integer case is a well-known open problem [2,3,4,13,14].⁴ However, certain special cases have been solved: Braverman [4] showed that termination of linear loops is decidable (i.e., loops of the form (1) where a is 0 and ϕ is of the form B x > 0). Bozga et al. [3] showed decidability for the case that the update matrix A in (1) has the finite monoid property, i.e., if there is an n > 0 such that A^n is diagonalizable and all eigenvalues of A^n are in {0, 1}. Ouaknine et al. [13] proved decidability for the case d ≤ 4 and for the case that A is diagonalizable.
Ben-Amram et al. [2] showed undecidability of termination for certain extensions of affine integer loops, e.g., for loops where the body is of the form if x > 0 then x ← A x else x ← A ′ x where A, A ′ ∈ Z d×d and x ∈ x.
In this paper, we present another substantial step towards the solution of the open problem whether termination of affine integer loops is decidable. We show that termination is decidable for triangular loops (1), where A is a triangular matrix (i.e., all entries of A below or above the main diagonal are zero). Clearly, the order of the variables is irrelevant, i.e., our results also cover the case that A can be transformed into a triangular matrix by reordering A, x, and a accordingly.⁵ So essentially, triangularity means that the program variables x_1, ..., x_d can be ordered such that in each loop iteration, the new value of x_i only depends on the previous values of x_1, ..., x_{i-1}, x_i. Hence, this excludes programs with "cyclic dependencies" of variables (e.g., where the new values of x and y both depend on the old values of both x and y). While triangular loops are a very restricted subclass of general integer programs, integer programs often contain such loops. Hence, tools for termination analysis of such programs (e.g., [5,6,7,8,10,11,12]) could benefit from integrating our decision procedure and applying it whenever a sub-program is an affine triangular loop.
Note that triangularity and diagonalizability of matrices do not imply each other. As we consider loops with arbitrary dimension, this means that the class of loops considered in this paper is not covered by [3,13]. Since we consider affine instead of linear loops, it is also orthogonal to [4].
To see the difference between our and previous results, note that a triangular matrix A, where c_1, ..., c_k are the distinct entries on the diagonal, is diagonalizable iff (A - c_1 I) ... (A - c_k I) is the zero matrix.⁶ Here, I is the identity matrix. So an easy example for a triangular loop where the update matrix is not diagonalizable is the following well-known program (see, e.g., [2]):

    while x > 0 do x ← x + y; y ← y - 1

It terminates as y eventually becomes negative and then x decreases in each iteration. In matrix notation, the loop body is
$$\begin{pmatrix}x\\ y\end{pmatrix} \leftarrow \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix} + \begin{pmatrix}0\\ -1\end{pmatrix}\,,$$
i.e., the update matrix is triangular. Thus, this program is in our class of programs where we show that termination is decidable. However, the only entry on the diagonal of the update matrix A is c = 1, and $A - cI = \begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}$ is not the zero matrix. So A (and in fact each A^n where n ∈ N) is not diagonalizable. Hence, extensions of this example to a dimension greater than 4 where the loop is still triangular are not covered by any of the previous results.⁷ Our proof that termination is decidable for triangular loops proceeds in three steps. We first prove that termination of triangular loops is decidable iff termination of non-negative triangular loops (nnt-loops) is decidable, cf. Sect. 2. A loop is non-negative if the diagonal of A does not contain negative entries. Second, we show how to compute closed forms for nnt-loops, i.e., vectors q of d expressions over the variables x and n such that q[n/c] = f^c(x) for all c ≥ 0, see Sect. 3. Here, triangularity of the matrix A allows us to treat the variables step by step. So for any 1 ≤ i ≤ d, we already know the closed forms for x_1, ..., x_{i-1} when computing the closed form for x_i. The idea of computing closed forms for the repeated updates of loops was inspired by our previous work on inferring lower bounds on the runtime of integer programs [9]. But in contrast to [9], here the computation of the closed form always succeeds due to the restricted shape of the programs. Finally, we explain how to decide termination of nnt-loops by reasoning about their closed forms in Sect. 4. While our technique does not yield witnesses for non-termination, we show that it yields witnesses for eventual non-termination, i.e., vectors c such that f^n(c) witnesses non-termination for some n ∈ N. All missing proofs can be found in Appendix C.
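As a quick sanity check of this example (our own illustration, not a termination proof), the loop can be simulated directly; every run is finite because y eventually becomes negative, after which x strictly decreases:

```python
def run(x, y, max_steps=10**6):
    """Simulate: while x > 0 do (x, y) <- (x + y, y - 1); return the step count."""
    n = 0
    while x > 0 and n < max_steps:
        x, y, n = x + y, y - 1, n + 1
    return n

print([run(x0, y0) for (x0, y0) in [(1, 5), (10, 0), (3, 100)]])  # all finite
```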
From Triangular to Non-Negative Triangular Loops
To transform triangular loops into nnt-loops, we define how to chain loops. Intuitively, chaining yields a new loop where a single iteration is equivalent to two iterations of the original loop. Then we show that chaining a triangular loop always yields an nnt-loop and that chaining is equivalent w.r.t. termination.

⁶ The reason is that in this case, (x - c_1) ... (x - c_k) is the minimal polynomial of A, and diagonalizability is equivalent to the fact that the minimal polynomial is a product of distinct linear factors.
⁷ For instance, consider: while x > 0 do x ← x + y + z_1 + z_2 + z_3; y ← y - 1

Definition 3 (Chaining). Chaining the loop (1) yields:

    while ϕ ∧ ϕ[x/A x + a] do x ← A (A x + a) + a

which simplifies to the following nnt-loop:

    while ϕ ∧ ϕ[x/A x + a] do x ← A² x + A a + a    (2)

The following lemma is needed to prove that (2) is an nnt-loop if (1) is triangular (see Appendix C.1 for the straightforward proof of Lemma 5).
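Def. 3 is easy to implement: the chained loop has update matrix A², offset A a + a, and guard ϕ ∧ ϕ[x/A x + a]. The following numpy sketch (our illustration) checks, on the example loop from the introduction, that one chained step equals two original steps, and that the diagonal of A² is non-negative, as Lemma 5 below states:

```python
import numpy as np

def chain(A, a):
    """Update of the chained loop (2): x <- A^2 x + A a + a, cf. Def. 3."""
    A = np.array(A, dtype=object)
    a = np.array(a, dtype=object)
    return A @ A, A @ a + a

A = np.array([[1, 1], [0, 1]], dtype=object)
a = np.array([0, -1], dtype=object)
A2, a2 = chain(A, a)
x = np.array([7, 3], dtype=object)
two_steps = A @ (A @ x + a) + a          # f(f(x))
assert (A2 @ x + a2 == two_steps).all()  # one chained step = two original steps
assert all(A2[i, i] >= 0 for i in range(2))  # non-negative diagonal (Lemma 5)
```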
Lemma 5 (Squares of Triangular Matrices). For every triangular matrix A, A² is a triangular matrix whose diagonal entries are non-negative.
Thus, (1) does not terminate iff (2) does not terminate, as f²(x) = A²·x + A·a + a is the update of (2).
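To make Definition 3 concrete, here is a minimal Python sketch (numpy-based; the function name and the example encoding are ours) that computes the update of the chained loop from the update of the original loop:

```python
import numpy as np

def chain(A, a):
    """Chaining (Definition 3): one iteration of the chained loop performs
    two iterations of x <- A x + a, so its update is x <- A^2 x + (A a + a).
    The guard becomes phi(x) and phi(A x + a)."""
    A, a = np.asarray(A), np.asarray(a)
    return A @ A, A @ a + a

# The loop  "while x > 0 do x <- x + y; y <- y - 1"  in matrix notation:
A = [[1, 1], [0, 1]]
a = [0, -1]
A2, a2 = chain(A, a)
print(A2)  # [[1 2] [0 1]]: triangular with non-negative diagonal (Lemma 5)
print(a2)  # [-1 -2]: two steps give x <- x + 2y - 1 and y <- y - 2
```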
Theorem 8 (Reducing Termination to nnt-Loops). Termination of triangular loops is decidable iff termination of nnt-loops is decidable.
Thus, from now on we restrict our attention to nnt-loops.
Computing Closed Forms
The next step towards our decidability proof is to show that f^n(x) is equivalent to a vector of poly-exponential expressions for each nnt-loop, i.e., the closed form of each nnt-loop can be represented by such expressions. Here, equivalence means that two expressions evaluate to the same result for all variable assignments.
Poly-exponential expressions are sums of arithmetic terms where it is always clear which addend determines the asymptotic growth of the whole expression when increasing a designated variable n. This is crucial for our decidability proof in Sect. 4. Let N_{≥1} = {b ∈ N | b ≥ 1} (and Q_{>0}, N_{>1}, etc. are defined analogously). Moreover, A[x] is again the set of all affine expressions over x.
Definition 9 (Poly-Exponential Expressions). Let C be the set of all finite conjunctions over the literals n = c and n ≠ c, where n is a designated variable and c ∈ N. Moreover, for each formula ψ over n, let ⟦ψ⟧ be the characteristic function of ψ, i.e., ⟦ψ⟧(c) = 1 if ψ[n/c] is valid and ⟦ψ⟧(c) = 0, otherwise. The set of all poly-exponential expressions over x is

    PE[x] = { Σ_{j=1}^{ℓ} ⟦ψ_j⟧ · α_j · n^{a_j} · b_j^n | ℓ, a_j ∈ N, b_j ∈ N_{≥1}, ψ_j ∈ C, α_j ∈ A[x] }.

As n ranges over N, we use ⟦n > c⟧ as syntactic sugar for Π_{i=0}^{c} ⟦n ≠ i⟧. So an example for a poly-exponential expression is ⟦n > 2⟧ · (2·x + 3·y − 1) · n³ · 3^n + ⟦n = 2⟧ · (x − y). Moreover, note that if ψ contains a positive literal (i.e., a literal of the form "n = c" for some number c ∈ N), then ⟦ψ⟧ is equivalent to either 0 or ⟦n = c⟧.
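To illustrate Definition 9, the following small Python sketch (the encoding of addends as tuples is our own) evaluates a poly-exponential expression at a given n, using the example expression above with ⟦n > 2⟧ desugared:

```python
from fractions import Fraction

def lit_holds(lit, n):
    """A literal is ('=', c) or ('!=', c) over the designated variable n."""
    op, c = lit
    return n == c if op == '=' else n != c

def eval_pe(addends, n, env):
    """Evaluate a sum of [psi] * alpha(x) * n^a * b^n; each addend is
    (psi, alpha, a, b) with psi a conjunction (list) of literals."""
    total = Fraction(0)
    for psi, alpha, a, b in addends:
        if all(lit_holds(l, n) for l in psi):   # characteristic function
            total += Fraction(alpha(env)) * n**a * b**n
    return total

# [n > 2]*(2x + 3y - 1)*n^3*3^n + [n = 2]*(x - y):
pe = [([('!=', 0), ('!=', 1), ('!=', 2)],
       lambda e: 2 * e['x'] + 3 * e['y'] - 1, 3, 3),
      ([('=', 2)], lambda e: e['x'] - e['y'], 0, 1)]
print(eval_pe(pe, 4, {'x': 1, 'y': 1}))  # (2+3-1) * 4^3 * 3^4 = 20736
```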
The crux of the proof that poly-exponential expressions can represent closed forms is to show that certain sums over products of exponential and poly-exponential expressions can be represented by poly-exponential expressions, cf. Lemma 12. To construct these expressions, we use a variant of [1, Lemma 3.5]. As usual, Q[x] is the set of all polynomials over x with rational coefficients.
Lemma 10 (Expressing Polynomials by Differences [1]). If q ∈ Q[n] and c ∈ Q, then there is an r ∈ Q[n] such that q = r − c · r[n/n − 1] for all n ∈ N.
So Lemma 10 expresses a polynomial q via the difference of another polynomial r at the positions n and n − 1, where the additional factor c can be chosen freely. A detailed proof of Lemma 10 can be found in Appendix C.2. It is by induction on the degree of q, and its structure resembles the structure of the algorithm Alg. 1 to compute r. Using the Binomial Theorem, one can verify that q − s + c · s[n/n − 1] has a smaller degree than q, which is crucial for the proof of Lemma 10 and for the termination of Alg. 1.
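A small sympy-based sketch of this computation (our rendering of the algorithm; it assumes q has rational coefficients, as in Lemma 10) repeatedly peels off the leading term of the residual, exactly as in the inductive proof in Appendix C.2:

```python
import sympy as sp

n = sp.symbols('n')

def difference_poly(q, c):
    """Given q in Q[n] and c in Q, return r in Q[n] with
    q = r - c * r[n/n-1] for all n (Lemma 10)."""
    q, c, r = sp.expand(q), sp.Rational(c), sp.Integer(0)
    while q != 0:
        d = sp.degree(q, n) if q.has(n) else 0
        cd = q.coeff(n, d)                      # leading coefficient of q
        # choose s so that s - c*s[n/n-1] has leading term cd*n^d:
        s = cd * n**(d + 1) / (d + 1) if c == 1 else cd * n**d / (1 - c)
        r += s
        q = sp.expand(q - (s - c * s.subs(n, n - 1)))  # degree decreases
    return sp.simplify(r)

print(difference_poly(1, 4))  # -1/3, matching Ex. 13 (q = 1, c = 4)
```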
Proof. Let p = Σ_{j=1}^{ℓ} ⟦ψ_j⟧ · α_j · n^{a_j} · b_j^n. Since PE[x] is closed under addition, it suffices to show that we can compute an equivalent poly-exponential expression for any expression of the form (4). We first regard the case m = 0. Here, the expression (4) can be simplified as in (5) and (6). From now on, let m ≥ 1. If ψ contains a positive literal n = c, then we obtain (7). The step marked with (†) holds as we have ⟦n > i − 1⟧ = 1 for all i ∈ {1, …, n}, and the step marked with (††) holds since i ≠ c + 1 implies ⟦ψ⟧(i − 1) = 0. If ψ does not contain a positive literal, then let c be the maximal constant that occurs in ψ, or −1 if ψ is empty. We obtain (8). Again, the step marked with (†) holds since we have ⟦n > i − 1⟧ = 1 for all i ∈ {1, …, n}, and the last step holds as i ≥ c + 2 implies ⟦ψ⟧(i − 1) = 1. Similar to the case where ψ contains a positive literal, we can compute a poly-exponential expression that is equivalent to the first addend, cf. (9). For the second addend, we proceed as in (10) and (11). It remains to show that the addend ⟦n > c + 1⟧ · (α/b) · r · b^n is equivalent to a poly-exponential expression.
The proof of Lemma 12 gives rise to a corresponding algorithm.
Algorithm 2:

    for j = 1, …, ℓ:
        if m = 0 then compute q_j as in (5) and (6)
        else if p_j = ⟦… ∧ n = c ∧ …⟧ · … then compute q_j as in (7)
        else:
            split p_j into two sums p_{j,1} and p_{j,2} as in (8)
            compute q_{j,1} from p_{j,1} as in (9)
            compute q_{j,2} from p_{j,2} as in (10) and (11), using Alg. 1
            q_j ← q_{j,1} + q_{j,2}
    return Σ_{j=1}^{ℓ} q_j

Example 13. We compute an equivalent poly-exponential expression for (12), where w is a variable. (It will later on be needed to compute a closed form for Ex. 4, see Ex. 18.) According to Alg. 2 and (3), we decompose (12) into addends p_1, p_2, p_3 and search for q_1, q_2, q_3 ∈ PE[w] that are equivalent to p_1, p_2, p_3, i.e., such that q_1 + q_2 + q_3 is equivalent to (12). We only show how to compute q_2; see Appendix A for the computation of q_1 = ⟦n = 0⟧ · (1/2) · w · 4^n and q_3 = 2/3 − (2/3) · 4^n. Analogously to (8), we split p_2 into two sums. The next step is to rearrange the first sum as in (9); in our example, it directly simplifies to 0. Finally, by applying the steps from (10) we obtain q_2, where the step marked with (†) holds by Lemma 10 with q = 1 and c = 4. Thus, we have r = −1/3, cf. Ex. 11.

Recall that our goal is to compute closed forms for loops. As a first step, instead of the n-fold update function h(n, x) = f^n(x) of (1), where f is the update of (1), we consider a recursive update function for a single variable x ∈ x:

    g(0, x) = x   and   g(n, x) = m · g(n − 1, x) + p[n/n − 1]   for all n > 0,

where m ∈ N and p ∈ PE[x]. Using Lemma 12, it is easy to show that g can be represented by a poly-exponential expression.
To see why (13) is sufficient, note that (13) implies

    g(n, x) = m^n · x + Σ_{i=1}^{n} m^{n−i} · p[n/i − 1],

so both addends are equivalent to poly-exponential expressions.
Example 15. We show how to compute the closed forms for the variables w and x from Ex. 4. We first consider the assignment w ← 2, i.e., we want to compute a q_w ∈ PE[w, x, y, z] with q_w[n/0] = w and q_w = (m_w · q_w + p_w)[n/n − 1] for n > 0, where m_w = 0 and p_w = 2. According to (13) and (14), we obtain q_w = ⟦n = 0⟧ · w + ⟦n ≠ 0⟧ · 2.

The restriction to triangular matrices now allows us to generalize Lemma 14 to vectors of variables. The reason is that due to triangularity, the update of each program variable x_i only depends on the previous values of x_1, …, x_i. So when regarding x_i, we can assume that we already know the closed forms for x_1, …, x_{i−1}. This allows us to find closed forms for one variable after the other by applying Lemma 14 repeatedly. In other words, it allows us to find a vector q of poly-exponential expressions that satisfies

    q[n/0] = x   and   q = A · q[n/n − 1] + a   for all n > 0.
The last step holds as A is lower triangular. By Lemma 14, we can compute a q_1 ∈ PE[x] that satisfies q_1[n/0] = x_1 and q_1 = A_{1,1} · q_1[n/n − 1] + p_1 for all n > 0. As A_{j,1} · q_1 + p_j ∈ PE[x] for each 2 ≤ j ≤ d, the claim follows from the induction hypothesis.
Together, Lemmas 14 and 16 and their proofs give rise to the following algorithm to compute a solution for (16) and (17). It computes a closed form q_1 for x_1 as in the proof of Lemma 14, constructs the argument p for the recursive call based on A, q_1, and the current value of p as in the proof of Lemma 16, and then determines the closed form for x_{2,…,d} recursively.
So the closed form of Ex. 4 for the values of the variables after n iterations is obtained.
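As an illustration of this variable-by-variable scheme, the following sympy sketch computes the closed form of the two-variable loop from the introduction (x ← x + y; y ← y − 1); since the update matrix is triangular, y is solved first and its closed form is substituted into the recurrence for x, as in Lemma 16 (the use of symbolic summation is an implementation shortcut, not the paper's Alg. 3):

```python
import sympy as sp

n, i = sp.symbols('n i', integer=True, nonnegative=True)
x0, y0 = sp.symbols('x0 y0')

# y <- y - 1 does not mention x, so its closed form is immediate:
y_closed = y0 - n
# x <- x + y, i.e., x(n) = x(n-1) + y(n-1) = x0 + sum_{i=0}^{n-1} y(i):
x_closed = sp.simplify(x0 + sp.summation(y_closed.subs(n, i), (i, 0, n - 1)))
print(x_closed)  # x0 + n*y0 - n*(n - 1)/2
# Sanity check after 3 iterations: x0+y0, then x0+2*y0-1, then x0+3*y0-3:
assert sp.expand(x_closed.subs(n, 3) - (x0 + 3 * y0 - 3)) == 0
```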
Deciding Non-Termination of nnt-Loops
Our proof uses the notion of eventual non-termination [4,13]. Here, the idea is to disregard the condition of the loop during a finite prefix of the program run.
Definition 19 (Eventual Non-Termination). A vector c ∈ Z^d witnesses eventual non-termination of (1) iff there is an n_0 ∈ N such that ϕ(f^n(c)) holds for all n ≥ n_0. If there is such a witness, then (1) is eventually non-terminating.
Clearly, (1) is non-terminating iff (1) is eventually non-terminating [13] (see Appendix B for a proof). Now Thm. 17 gives rise to an alternative characterization of eventual non-termination in terms of the closed form q instead of f^n(c).
Corollary 20 (Expressing Non-Termination with PE). If q is the closed form of (1), then c ∈ Z^d witnesses eventual non-termination iff

    there is an n_0 ∈ N such that ϕ[x/q[x/c]] holds for all n > n_0.    (18)

Proof. Immediate, as q is equivalent to f^n(x). See Appendix C.3 for details.
So to prove that termination of nnt-loops is decidable, we will use Cor. 20 to show that the existence of a witness for eventual non-termination is decidable. To do so, we first eliminate the factors ⟦ψ⟧ from the closed form q. Assume that q has at least one factor ⟦ψ⟧ where ψ is non-empty (otherwise, all factors ⟦ψ⟧ are equivalent to 1), and let c be the maximal constant that occurs in such a factor. Then all addends ⟦ψ⟧ · α · n^a · b^n where ψ contains a positive literal become 0, and all other addends become α · n^a · b^n if n > c. Thus, as we can assume n_0 > c in (18) without loss of generality, all factors ⟦ψ⟧ can be eliminated when checking eventual non-termination.
Corollary 21 (Removing ⟦ψ⟧ from PEs). Let q be the closed form of an nnt-loop (1), and let q_norm result from q by removing all addends ⟦ψ⟧ · α · n^a · b^n where ψ contains a positive literal and by replacing all addends ⟦ψ⟧ · α · n^a · b^n where ψ does not contain a positive literal by α · n^a · b^n. Then c ∈ Z^d is a witness for eventual non-termination iff

    there is an n_0 ∈ N such that ϕ[x/q_norm[x/c]] holds for all n > n_0.    (19)

Proof. See above for the proof idea and Appendix C.4 for a detailed proof.
By removing the factors ψ from the closed form q of an nnt-loop, we obtain normalized poly-exponential expressions.
Recall that the loop condition ϕ is a conjunction of inequalities of the form α > 0 where α ∈ A[x],
and we need to decide if there is an instantiation of these inequalities that is valid "for large enough n". To do so, we order the coefficients α_j of the addends α_j · n^{a_j} · b_j^n of normalized poly-exponential expressions according to the addend's asymptotic growth when increasing n. Lemma 24 shows that α_2 · n^{a_2} · b_2^n grows faster than α_1 · n^{a_1} · b_1^n iff b_2 > b_1, or both b_2 = b_1 and a_2 > a_1.
Here, >_lex is the lexicographic order, i.e., (b_2, a_2) >_lex (b_1, a_1) iff b_2 > b_1, or b_2 = b_1 and a_2 > a_1.

Proof. By considering the cases b_2 > b_1 and b_2 = b_1 separately, the claim can easily be deduced from the definition of O. See Appendix C.5 for details.
Definition 25 (Ordering Coefficients). Marked coefficients are of the form α^{(b,a)} where α ∈ A[x], b ∈ N_{≥1}, and a ∈ N. The marked coefficients of p = Σ_{j=1}^{ℓ} α_j · n^{a_j} · b_j^n ∈ NPE[x], where α_j ≠ 0 for all 1 ≤ j ≤ ℓ, are coeffs(p) = {α_j^{(b_j, a_j)} | 1 ≤ j ≤ ℓ}; for p = 0 we set coeffs(0) = {0^{(1,0)}}. We define α_2^{(b_2, a_2)} ≻ α_1^{(b_1, a_1)} iff (b_2, a_2) >_lex (b_1, a_1).

Example 26. In Ex. 23 we saw that the loop from Ex. 2 is non-terminating iff there are w, x, y, z ∈ Z and n_0 ∈ N such that p_1^ϕ > 0 ∧ p_2^ϕ > 0 holds for all n > n_0. We then compute the marked coefficients coeffs(p_1^ϕ) and coeffs(p_2^ϕ).

Now it is easy to see that the asymptotic growth of a normalized poly-exponential expression is solely determined by its ≻-maximal addend.
Proof. Clear, as c · n^a · b^n is the asymptotically dominating addend of p. See Appendix C.6 for a detailed proof.
Note that Cor. 27 would be incorrect for the case c = 0 if we replaced O(c · n^a · b^n) by O(n^a · b^n). Building upon Cor. 27, we now show that, for large n, the sign of a normalized poly-exponential expression is solely determined by its ≻-maximal coefficient. Here, we define unmark(α^{(b,a)}) = α.

Lemma 28 (Sign of NPEs). Let p ∈ NPE. Then lim_{n→∞} p ∈ Q iff p ∈ Q, and otherwise lim_{n→∞} p ∈ {∞, −∞}. Moreover, we have sign(lim_{n→∞} p) = sign(unmark(max_≻(coeffs(p)))).
Proof. If p ∉ Q, then the limit of each addend of p is in {−∞, ∞} by definition of NPE. As the asymptotically dominating addend determines lim_{n→∞} p, and unmark(max_≻(coeffs(p))) determines the sign of the asymptotically dominating addend, the claim follows. See Appendix C.7 for a detailed proof.
Lemma 29 shows the connection between the limit of a normalized poly-exponential expression p and the question whether p is positive for large enough n. The latter corresponds to the existence of a witness for eventual non-termination by Cor. 21, as ϕ[x/q_norm] is a conjunction of inequalities p > 0 where p ∈ NPE[x].
Proof. By case analysis over lim_{n→∞} p. See Appendix C.8 for details.

Now we show that Cor. 21 allows us to decide eventual non-termination by examining the coefficients of normalized poly-exponential expressions. As these coefficients are in A[x], the required reasoning is decidable.
Lemma 30 (Deciding Positivity for Large n). Let ϕ[x/q_norm] = ⋀_{i=1}^{k} p_i > 0 with p_i ∈ NPE[x]. Then it is decidable whether

    there are c ∈ Z^d and n_0 ∈ N such that ⋀_{i=1}^{k} p_i[x/c] > 0 holds for all n > n_0.    (20)

Proof. For any p_i with 1 ≤ i ≤ k and any c ∈ Z^d, we have p_i[x/c] ∈ NPE. Hence, by the considerations above, (20) is valid iff (22) is valid. By multiplying each (in-)equality in (22) with the least common multiple of all denominators, one obtains a first-order formula over the theory of linear integer arithmetic. It is well known that validity of such formulas is decidable.
Note that (22) is valid iff ⋀_{i=1}^{k} max_coeff_pos(p_i) is satisfiable. So to implement our decision procedure, one can use integer programming or SMT solvers to check satisfiability of ⋀_{i=1}^{k} max_coeff_pos(p_i). Lemma 30 allows us to prove our main theorem.
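For instance, one can discharge such a satisfiability check with an SMT solver. The sketch below uses the z3 Python API on a hypothetical conjunct whose marked coefficients are a_2 ≻ a_1 with a_2 = x + y and a_1 = x − 1 (illustrative values, not taken from the paper's running example); max_coeff_pos is encoded, as described above, as "some coefficient is positive and all ≻-larger ones vanish":

```python
from z3 import Ints, Solver, Or, And, sat

x, y = Ints('x y')
a2, a1 = x + y, x - 1   # hypothetical coefficients; a2 ranks above a1 in the ordering
max_coeff_pos = Or(a2 > 0, And(a2 == 0, a1 > 0))

s = Solver()
s.add(max_coeff_pos)    # conjoin one such formula per inequality of phi
if s.check() == sat:
    print("eventually non-terminating; witness:", s.model())
```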
Theorem 31. Termination of triangular loops is decidable.
Proof. By Thm. 8, termination of triangular loops is decidable iff termination of nnt-loops is decidable. For an nnt-loop (1) we obtain a q_norm ∈ NPE[x]^d (see Thm. 17 and Cor. 21) such that (1) is non-terminating iff (20) holds, where ϕ is a conjunction of inequalities of the form α > 0 with α ∈ A[x]. Hence, ϕ[x/q_norm] is a conjunction of inequalities p_i > 0 with p_i ∈ NPE[x]. Thus, by Lemma 30, validity of (20) is decidable.
The following algorithm summarizes our decision procedure.
Ex. 33 shows that our technique does not yield witnesses for non-termination, but it only proves the existence of a witness for eventual non-termination. While such a witness can be transformed into a witness for non-termination by applying the loop several times, it is unclear how often the loop needs to be applied.
Example 33. Consider the following non-terminating loop:

    while x > 0 do x ← x + y; y ← 1

The closed form of x is q = ⟦n = 0⟧ · x + ⟦n ≠ 0⟧ · (x + y + n − 1). Replacing x with q_norm in x > 0 yields x + y + n − 1 > 0. The maximal marked coefficient of x + y + n − 1 is 1^{(1,1)}, which is always positive, so every c ∈ Z² witnesses eventual non-termination, even though not every such c witnesses non-termination. However, the final formula constructed by Alg. 4 precisely describes all witnesses for eventual non-termination (see Appendix C.9 for the proof).
Lemma 34 (Witnessing Eventual Non-Termination). Let (1) be a triangular loop, let q_norm be the normalized closed form of (2), and let ϕ[x/q_norm] = ⋀_{i=1}^{k} p_i > 0. Then c ∈ Z^d witnesses eventual non-termination of (1) iff the substitution [x/c] is a model for ⋀_{i=1}^{k} max_coeff_pos(p_i).
Conclusion
We presented a decision procedure for termination of affine integer loops with triangular update matrices. In this way, we contribute to the ongoing challenge of proving the 15-year-old conjecture by Tiwari [14] that termination of affine integer loops is decidable. After linear loops [4], loops with at most 4 variables [13], and loops with diagonalizable update matrices [3,13], triangular loops are the fourth important special case where decidability could be proven.
The key idea of our decision procedure is to compute closed forms for the values of the program variables after a symbolic number of iterations n. While these closed forms are rather complex, it turns out that reasoning about first-order formulas over the theory of linear integer arithmetic suffices to analyze their behavior for large n. This allows us to reduce (non-)termination of triangular loops to integer programming. In future work, we plan to investigate generalizations of our approach to other classes of integer loops.
C.1 Proof of Lemma 5
Proof. Let A be a lower triangular matrix of dimension d (the proof for upper triangular matrices is analogous). We have (A²)_{i,j} = Σ_{k=1}^{d} A_{i,k} · A_{k,j}, where A_{i,j} is the entry in A's i-th row and j-th column.
If i < j, then Σ_{k=1}^{d} A_{i,k} · A_{k,j} = 0 as, for each addend, either A_{i,k} = 0 (if i < k) or A_{k,j} = 0 (if k ≤ i < j), which proves that A² is triangular.
If i = j, then Σ_{k=1}^{d} A_{i,k} · A_{k,i} ≥ 0 as, for each addend, either A_{i,k} = 0 (if i < k) or A_{k,i} = 0 (if k < i), and the remaining addend A_{i,i} · A_{i,i} = A_{i,i}² is non-negative, which proves that all diagonal entries of A² are non-negative.
C.2 Proof of Lemma 10
Proof. We use induction on the degree d of q. In the induction base, let d = 0, i.e., q = c_0 ∈ Q. If c = 1, then we fix r = c_0 · n and we get r − c · r[n/n − 1] = c_0 · n − c_0 · (n − 1) = c_0 = q. If c ≠ 1, then we fix r = c_0 / (1 − c) and we get r − c · r[n/n − 1] = (1 − c) · c_0 / (1 − c) = c_0 = q.

For the induction step, let d > 0, i.e., q = Σ_{i=0}^{d} c_i · n^i with c_d ≠ 0. If c = 1, let s = (c_d / (d + 1)) · n^{d+1}, and otherwise let s = (c_d / (1 − c)) · n^d. In both cases, t = q − s + c · s[n/n − 1] is a polynomial whose degree is at most d − 1. By the induction hypothesis, there exists some r' ∈ Q[n] such that t = r' − c · r'[n/n − 1] for all n ∈ N. Let r = s + r'. Then we get r − c · r[n/n − 1] = (s − c · s[n/n − 1]) + (r' − c · r'[n/n − 1]) = (q − t) + t = q.

C.3 Proof of Cor. 20

Proof. We have: c witnesses eventual non-termination (as q is the closed form of (1) and thus q = f^n(x), cf. Thm. 17)
⟺ (18) holds.
C.4 Proof of Cor. 21

Proof. By Cor. 20, c is a witness for eventual non-termination iff (18) holds. Let c be the maximal constant that occurs in a sub-expression ⟦ψ⟧ in q. Then …
C.6 Proof of Cor. 27
Proof. If p = 0, then c = 0 by Def. 25 and hence O(p) = O(c · n^a · b^n) = O(0). Otherwise, p has the form c · n^a · b^n + Σ_{j=1}^{ℓ} c_j · n^{a_j} · b_j^n with c ≠ 0 and ℓ ≥ 0. We have c_j^{(b_j, a_j)} ∈ coeffs(p) and hence (b, a) >_lex (b_j, a_j) for all 1 ≤ j ≤ ℓ. Thus, Lemma 24 implies O(n^{a_j} · b_j^n) ⊆ O(n^a · b^n), and hence we get O(p) = O(c · n^a · b^n + Σ_{j=1}^{ℓ} c_j · n^{a_j} · b_j^n) = O(n^a · b^n) = O(c · n^a · b^n).
C.7 Proof of Lemma 28

Let p = Σ_{j=1}^{ℓ} c_j · n^{a_j} · b_j^n with ℓ ≥ 1 and c_j ≠ 0 for all 1 ≤ j ≤ ℓ, since p ≠ 0. We use induction on ℓ.
C.9 Proof of Lemma 34
Proof. We have: …

| 7,005.6 | 2019-05-21T00:00:00.000 | ["Mathematics"] |
A prediction model on rockburst intensity grade based on variable weight and matter-element extension
Rockburst is a common dynamic disaster in deep underground engineering. To accurately predict the rockburst intensity grade, this study proposes a novel rockburst prediction model based on variable weight and matter-element extension theory. In the proposed model, variable weight theory is used to optimize the weights of the prediction indexes. Matter-element extension theory and the grade variable method are used to calculate the grade variable interval corresponding to the classification standard of rockburst intensity grades. The rockburst intensity grade of an Engineering Rock Mass is then predicted by comparing its rockburst intensity grade variable with this interval. Finally, the model is tested by predicting the rockburst intensity grades of several projects worldwide. The prediction results are compared with the actual rockburst intensity grades and with the prediction results of other models. The results indicate that, after applying variable weight theory and the grade variable method, the correct rate of the prediction results of the matter-element extension model is improved, and the safety of the prediction results is also enhanced. This study provides a new way to predict rockburst in underground engineering.
Introduction
Rockburst is a sudden, uncontrollable and destructive release of the elastic deformation energy stored in rock masses during deep mining and underground space development, which is manifested as the fracture and instantaneous ejection of the surrounding rock mass [1][2][3]. Strong rockbursts may cause significant losses to underground engineering, such as the failure of supporting mechanisms and equipment, casualties and construction delays [4,5]. Therefore, it is significant to predict and give warning of rockbursts in underground engineering [6,7]. The mechanism of rockburst has been studied from different perspectives, yielding scientific rockburst criteria and prediction methods [8]. Xie H P [9] and Cai M F [10,11] obtained rockburst criteria based on the energy principle by studying the internal relationship between energy accumulation, dissipation and release, rock strength and overall failure in the process of rock deformation and failure. Arteiro A [12] summarized the critical energy value of rockburst through triaxial tests and used it as a criterion for rockburst prediction. Jiang Q [13] et al. proposed the local energy release rate (LERR) as a new energy index to simulate rockburst conditions, contributing to the understanding of the rockburst mechanism from the perspective of energy. He M C [14,15], Kuksenko V S [16] et al. obtained rockburst prediction criteria by analyzing the characteristics of acoustic emission signals during rock failure. Feng X T [17], Ma T H [18], Liu X H [19], Chen Z H [20] et al. used microseismic monitoring technology to explore the relationship between the spatiotemporal evolution of microseismic signals and rockburst, and obtained criteria for rockburst prediction and early warning based on the features of microseismic events. Sharan S K [21] proposed a finite element model for predicting the potential occurrence of rockburst in underground caverns, in which the Mohr-Coulomb failure criterion [22] and the Hoek-Brown failure criterion [23] were used as rockburst criteria. These studies on rockburst criteria enrich the understanding of the rockburst mechanism. However, due to the complex rock mass structure and geological environment in underground engineering, rockburst prediction by a single criterion is usually less accurate.
To improve the correct rate of prediction, the rockburst information reflected by various rockburst criteria has been synthesized using relevant mathematical theories, and many rockburst prediction and early warning models have been established. For example, Zhou J [24,25] used Fisher discriminant theory to construct a Fisher discriminant analysis model for rockburst classification prediction, which can accurately predict rockburst in deep tunnels and some coal mines. Guo L [26] used neural network theory to construct a BP neural network model for rockburst prediction; a large amount of rockburst data was required for model training, otherwise the correct rate of prediction and the scope of application were limited. Liu Z J [27] used fuzzy probability theory to establish a new fuzzy probability model for predicting the occurrence and intensity of rockburst; fuzzy weights were introduced in this model, and the limitations of the traditional fuzzy probability model in practical engineering were overcome to a certain extent. Gao W [28] constructed a rockburst prediction model based on an ant colony clustering algorithm; engineering analogy was performed to realize the automatic classification of rockburst, while the prediction accuracy still needed to be further improved. Pei Q T [29] considered the complex relationship between rockburst and its influencing factors as a grey system, optimized the grey whitening weight function, and established an improved grey evaluation model for rockburst prediction; the model largely overcame the problems of multiple intersections and abnormality of grey types, and improved the prediction accuracy. Based on Projection Pursuit (PP), Particle Swarm Optimization (PSO) and the Logistic Curve Function (LCF), a particle swarm projection pursuit model for rockburst prediction was constructed by Xu F [30], Zhou Xuanchi [31] et al.; particle swarm optimization was used to optimize the parameters of the projection index function and the LCF, guaranteeing the accuracy of the model parameters and the prediction accuracy. However, with an increasing number of prediction indexes, the optimization of the model parameters becomes more difficult. Zhou J [32][33][34] used a large number of rockburst data to compare the learning ability of 11 supervised learning algorithms, including linear discriminant analysis (LDA), partial least-squares discriminant analysis (PLSDA), quadratic discriminant analysis (QDA), multilayer perceptron neural network (MLPNN), support vector machine (SVM), random forest (RF) and gradient-boosting machine (GBM), and also analyzed the limitations of these algorithms. In addition, the commonly used models for rockburst prediction include the evidence theory model [35], the set pair analysis theory model [36], the efficacy coefficient method model [37] and the normal cloud model [38]. Although these models have been applied in engineering, there are still some defects, such as the difficult fusion of conflicting indicators in the evidence theory model, the failure of the set pair analysis model to reflect the randomness of forecasting, the difficulty of obtaining satisfactory and unsatisfactory values in the efficacy coefficient method, and the gap between forecasting results and actual ones caused by the overly ideal distribution of indicators in the normal cloud model. Therefore, it is necessary to improve the existing rockburst prediction theories and models.
Compared with the extension prediction models in previous studies, this paper proposes a new rockburst intensity prediction model for underground engineering based on variable weight theory and matter-element extension theory. In the proposed model, variable weight theory is introduced to optimize the constant weights of the predictors so as to improve the rationality of the weight determination. The grade variable method and the matter-element extension method are combined to establish the grade variable interval for rockburst intensity prediction, corresponding to the classification standard of rockburst intensity grades. The rockburst intensity grade is then determined according to this grade variable interval. Because the grade variable comprehensively reflects the extension correlation information of each rockburst intensity grade, the correct rate of rockburst intensity prediction can be improved.
The matter-element extension theory
In matter-element extension theory, the change of objects indicates their exploitability, the possibility of change is called extensibility, and the extensibility of objects is described by matter-element extension. If an object S has a feature y whose eigenvalue is v, then the ordered triple R = (S, y, v) is used to describe the basic element of the object, abbreviated as the matter-element. If an object S has n features y_1, y_2, …, y_n with corresponding eigenvalues v_1, v_2, …, v_n, the matter-element describing the object S is denoted as

    R = [S, y_1, v_1; y_2, v_2; …; y_n, v_n]   [39].
The matter-element classical domain of the object S is R_j, and the matter-element joint domain of the object S is R_0.
where S is the object, v_i is the eigenvalue, S_j is the object corresponding to feature grade j, j ∈ {1, …, m}, m is the number of feature grades, y_i is feature i of the object, i ∈ {1, …, n}, n is the number of features, V_{ji} is the range of eigenvalues of S_j with respect to y_i, S_0 is the object corresponding to the overall feature grades, and V_{0i} is the range of eigenvalues of S_0 with respect to y_i.
According to the extension set and extension distance [40], extending "distance" to "extension distance", the real variable function in classical mathematics is extended to a correlation function. The extension distance of feature y_i of the object S with respect to feature grade j can be expressed as

    ρ(v_i, V_{ji}) = |v_i − (a_{ji} + b_{ji}) / 2| − (b_{ji} − a_{ji}) / 2,   V_{ji} = ⟨a_{ji}, b_{ji}⟩,

where v_i is the value of feature i of the object, V_{ji} is the value range of S_j with respect to feature y_i, ρ(v_i, V_{ji}) is the extension distance of feature y_i with respect to the classical domain of feature grade j, and ρ(v_i, V_{0i}) is the extension distance of feature y_i with respect to the joint domain, defined analogously with V_{0i} = ⟨a_{0i}, b_{0i}⟩.
If v_i ∈ V_{0i}, the elementary correlation function is obtained for calculating the feature correlation [41]:

    S_j(v_i) = −ρ(v_i, V_{ji}) / (b_{ji} − a_{ji})                              if v_i ∈ V_{ji},
    S_j(v_i) = ρ(v_i, V_{ji}) / (ρ(v_i, V_{0i}) − ρ(v_i, V_{ji}))   otherwise,
where S_j(v_i) is the feature correlation of feature y_i of the object S with respect to feature grade j. Combining with the feature weight vector W, the comprehensive correlation degree of the object S with respect to feature grade j is calculated as follows [42]:

    S_j(V_j) = Σ_{i=1}^{n} w_i · S_j(v_i),
where w_i is the weight coefficient of feature y_i, S_j(V_j) is the comprehensive correlation degree between the object S and feature grade j, and S_j(v_i) is the single-index correlation of feature y_i of the object S with respect to feature grade j.
Variable weight theory
Since the weight value is always fixed in the process of constant weighting, it can only reflect the relative importance of each feature of an object, ignoring the influence of eigenvalue changes on the feature weights [43]. In this regard, Zhu Y Z and Li H X improved the variable weight theory [44]. According to variable weight theory, constructing a variable weight vector can optimize the constant weights of an object and yield the feature variable weights.
According to the axiomatic system, the definition of variable weight vectors [45] is as follows. A vector W(X) = (W_1(X), …, W_m(X)) is called a penalty (or excitation) variable weight vector if it satisfies: (a) normalization, Σ_{j=1}^{m} W_j(X) = 1; (b) continuity, each W_j(X) is continuous in every variable x_i; and (c) monotonicity, each W_j(X) is monotonically decreasing (penalty) or increasing (excitation) in x_j.

The definition of the feature variable weight vector [45] is as follows. A mapping P: X ↦ (P_1(X), …, P_m(X)) is called an m-dimensional feature variable weight vector if P satisfies the following three conditions. Condition 1: x_i ≥ x_j implies P_i(X) ≤ P_j(X), or x_i ≥ x_j implies P_i(X) ≥ P_j(X). Condition 2: P_j(X) is continuous in each variable x_j. Condition 3: for any constant weight vector W_0 = {w_01, …, w_0m}, the weight vector given by Eq (7) satisfies conditions (a), (b) and (c) above. P is then called a penalty (or excitation) feature variable weight vector.
Essentially, the feature variable weight vector P(X) is the gradient vector of an m-dimensional real function B(X) (also known as the balance function), i.e.,

    P_j(X) = ∂B(X) / ∂x_j.
According to Eqs (7) and (8), the variable weight vector W(X) of the object X can be obtained, where Eq (7) has the form

    W_j(X) = w_{0j} · P_j(X) / Σ_{i=1}^{m} w_{0i} · P_i(X).
Modeling solutions
Due to the complex environment around rock masses in underground engineering, many factors can result in rockburst. Firstly, based on an analysis of the existing achievements, a scientific predictive index system for rockburst intensity grading was constructed, and the grading standard of the rockburst intensity predictive indexes was established. Secondly, the extreme value method was used to standardize the grading standards and the indexes of the Engineering Rock Masses; the prediction indexes were thus made dimensionless, and the normalized grading standards for the rockburst prediction indexes were obtained. Thirdly, each grade in the grading standard was regarded as an Ideal Rock Mass; the constant weights of the predictive indexes were calculated using the entropy method to synthesize the index information of the Ideal Rock Masses and the Engineering Rock Masses, and the differences between the predictive indexes were fully considered by using variable weight theory to calculate the index variable weights. Finally, based on matter-element extension theory, the comprehensive correlation degrees between the normalized grading standard and the Ideal Rock Masses and Engineering Rock Masses were calculated, and their rockburst grading variables were computed. The prediction grading interval of rockburst intensity was constructed using the rockburst grading variables of the Ideal Rock Masses. The rockburst intensity grade of an Engineering Rock Mass can then be obtained by judging the grading interval into which its grading variable value falls.
Predictive index system and grading standard
According to a large number of test results and practical experience, the factors causing rockburst can be divided into internal and external factors. Internal factors mainly include compressive strength, shear strength, brittleness, hardness and Poisson's ratio. External factors mainly include blasting disturbance, tunneling depth, and the size and shape of the underground space. External factors can damage the integrity and spatial structure of the rock mass, thus changing the stress distribution of the surrounding rock; rockburst is caused by both internal and external factors. Based on the existing research [47][48][49][50], the rockburst intensity predictive index system was established based on the ratio σ_θ/σ_c of tangential stress to uniaxial compressive strength, the ratio σ_c/σ_t of uniaxial compressive strength to tensile strength, the rock brittleness index I_s and the rock elastic energy index W_et. The classification standard of the rockburst intensity grading features is shown in Table 1, and the corresponding prediction index grading standard is shown in Table 2.
To concisely calculate the variable interval corresponding to the rockburst intensity grades, the grading criteria of rockburst intensity are divided into five kinds of Ideal Rock Masses. The predictive indexes of rockburst intensity and their normalized values for these five Ideal Rock Masses are shown in Table 3, and the normalization method for the rockburst intensity prediction indexes is described in Section 3.3.
Weighting of prediction indicators
Firstly, the Ideal Rock Masses and the Engineering Rock Masses were subjected to dimensionless treatment by the extremum method, so that the dimension of each predictive index was consistent. Then the entropy method was used to determine the constant weight of each predictive index. Finally, the variable weight vector was constructed based on the variable weight theory to optimize the constant weight, and the variable weight of predictive index was obtained. Weighting steps of the rockburst intensity predictive index are as follows: [50]
Excerpt from Table 1 (classification of rockburst intensity grades):

II (slight): Loose wall of the surrounding rock with stripping rock and a slight crackling sound. Protective measures, routine safety monitoring and management are required.

III (medium): Rock clumps frequently peel off from the chamber or roadway wall with sharp ejection sounds and occasional ejection. Serious floor heave, which can easily cause personnel injury and mechanical damage. Anti-ejection facilities should be considered in design and construction, and real-time monitoring should be adopted.

IV (severe): Large rocks peel off from the chamber or roadway wall with continuous sharp ejection sounds and ejection phenomena. The surrounding rock deforms sharply and a large number of blasting pits appear, which pose a great threat to human and mechanical safety. Corresponding protective measures must be taken to enhance safety.
1. The Ideal Rock Masses and the Engineering Rock Masses are made dimensionless by the extremum method, so that the dimensions of the predictive indexes are consistent:

    v_ij = (x_ij − m_j) / (M_j − m_j),

where x_ij is predictive index j of rock mass X_i, M_j is the maximum value of predictive index j, m_j is the minimum value of predictive index j, and v_ij is the dimensionless value of x_ij.

2. The normalized predictive index values v_ij of the rock masses are processed and the information entropy e_j of each predictive index is calculated:
    r_ij = v_ij / Σ_{i=1}^{N} v_ij,
    e_j = −(1 / ln N) · Σ_{i=1}^{N} r_ij · ln r_ij   (with r_ij · ln r_ij taken as 0 when r_ij = 0),
where v_ij is the normalized predictive index of rock mass X_i, e_j is the information entropy of predictive index j, and r_ij is the proportion of the normalized predictive index value of rock mass X_i.

3. The constant weight of each predictive index is calculated:

    w_j = (1 − e_j) / Σ_{j=1}^{m} (1 − e_j),
where w_j is the weight of predictive index j, e_j is its information entropy, and m is the number of predictive indexes. The constant weight vector W_0 = {w_01, …, w_0m} can thus be obtained.
4. According to variable weight theory, the construction and selection of the state variable weight vector is the core of variable weighting. Common feature variable weight vectors include the sum type, product type, exponential type and mixed type. Since the exponential feature variable weight vector can set appropriate parameters for different required balancing strengths, it is more flexible [52]. The exponential feature variable weight vector P_j(X) = (P_j(x_1), …, P_j(x_m)) is therefore used; the calculation equation of P_j(x_i) is as follows.
where x_ij is the predictive index of rock mass X_i and β indicates the degree to which the variable weight vector adjusts the weights toward equilibrium. β is selected according to the actual situation, with β > 0.
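The following numpy sketch summarizes steps 1-4 under stated assumptions: the inputs are already min-max normalized, and the exponential state variable weight is taken as P_j(x) = exp(β · (x̄ − x_j)), a penalty-type form assumed for illustration, since the paper's exact exponential equation is not reproduced above:

```python
import numpy as np

def entropy_constant_weights(V):
    """Steps 2-3: entropy-method constant weights. V is an (N x m) matrix of
    normalized index values (rows: rock masses, columns: indexes)."""
    N, m = V.shape
    R = V / V.sum(axis=0)                      # proportions r_ij
    logR = np.zeros_like(R)
    logR[R > 0] = np.log(R[R > 0])             # convention: 0 * ln 0 = 0
    e = -(R * logR).sum(axis=0) / np.log(N)    # information entropy e_j
    return (1 - e) / (1 - e).sum()             # constant weights w_j

def variable_weights(w0, x, beta=1.0):
    """Step 4 (assumed exponential penalty form), then normalization as in
    W_j(X) = w0_j * P_j(X) / sum_i w0_i * P_i(X)."""
    P = np.exp(beta * (x.mean() - x))          # below-average indexes weigh more
    w = w0 * P
    return w / w.sum()

V = np.array([[0.9, 0.2, 0.5, 0.1],
              [0.4, 0.7, 0.3, 0.8],
              [0.1, 0.5, 0.9, 0.6]])
w0 = entropy_constant_weights(V)
print(w0, variable_weights(w0, V[0], beta=1.0))
```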
Prediction of the rockburst intensity grade
According to matter-element extension theory and the rockburst intensity predictive index standard, the classical domain of the rockburst intensity predictive indexes can be expressed as R_j, and the nodal (joint) domain as R_0. The classical domain R_j expresses the range of the normalized index values within each feature grade of the rock mass evaluation indexes, and the nodal domain R_0 expresses the whole range of the normalized values of the rock mass classification indexes.
According to the grading standard of rockburst intensity, if a predictive index of an Engineering Rock Mass is greater than the maximum index value, the maximum is used as the index of the Engineering Rock Mass; if it is less than the minimum index value, the minimum is used. Then the rockburst intensity predictive indexes of each Engineering Rock Mass are normalized according to Eqs (9) and (10), and the normalized predictive index values are obtained. Finally, by Eqs (4), (5) and (6), the comprehensive correlation degrees between each Ideal Rock Mass, each Engineering Rock Mass and the rockburst intensity prediction grading standard are calculated.
The grading variable values of the rockburst intensity grade of the Ideal Rock Masses and Engineering Rock Masses are calculated based on the grade variable method [53]. The equation for calculating the grading variable k is

    P_j(V_j) = (S_j(V_j) − min_j S_j(V_j)) / (max_j S_j(V_j) − min_j S_j(V_j)),
    k = Σ_{j=1}^{m} j · P_j(V_j) / Σ_{j=1}^{m} P_j(V_j),

where S_j(V_j) is the comprehensive correlation degree between S and feature grade j, P_j(V_j) is the normalized value of the comprehensive correlation, and k is the feature grade variable of S. Finally, the grading interval of the grading variable is constructed using the grading variables of the Ideal Rock Masses. The rockburst intensity grade of an Engineering Rock Mass is determined by identifying the grading interval into which the grading variable of the Engineering Rock Mass falls.
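Putting the pieces together, here is a compact Python sketch of the prediction pipeline for a single index (the extension distance and elementary correlation follow the standard matter-element extension forms given above, cf. Eqs (4)-(6); the grade intervals, the sample value and the weight are hypothetical):

```python
import numpy as np

def ext_distance(v, interval):
    """rho(v, <a, b>) = |v - (a + b)/2| - (b - a)/2."""
    a, b = interval
    return abs(v - (a + b) / 2) - (b - a) / 2

def correlation(v, Vj, V0):
    """Elementary correlation K_j(v) w.r.t. grade interval Vj and joint
    domain V0, in the standard matter-element extension form."""
    a, b = Vj
    d_j, d_0 = ext_distance(v, Vj), ext_distance(v, V0)
    if a <= v <= b:
        return -d_j / (b - a)
    return d_j / (d_0 - d_j)

def grade_variable(S):
    """Grade variable k: normalize the comprehensive correlations, then
    take the correlation-weighted mean grade."""
    S = np.asarray(S, dtype=float)
    P = (S - S.min()) / (S.max() - S.min())
    grades = np.arange(1, len(S) + 1)
    return (grades * P).sum() / P.sum()

Vj = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]  # grade intervals
V0, v, w = (0.0, 1.0), 0.62, 1.0            # joint domain, index value, weight
S = [w * correlation(v, interval, V0) for interval in Vj]
print(S, grade_variable(S))                 # grade variable near 3 for v = 0.62
```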
Cases selection
To test the accuracy and safety of the proposed model, several classical rockburst cases of underground engineering at home and abroad [54] are selected for prediction and analysis. The measured values and normalized values of the rockburst prediction indexes in each case are shown in Table 4.
The rockburst cases in Table 4 occurred in different countries and regions, so there are large differences in the engineering geological conditions. These differences are reflected in the distribution of the prediction indexes (Fig 1). As shown in Fig 1, there is no regularity between the predictive indexes of the different cases used in the model verification, and there are large differences between the predictive indexes within the same case. It is impossible to directly determine the rockburst intensity grade of each case based on a single index. Therefore, these cases can be used to test the correctness of the prediction results of the model.
Calculation and analysis of the weights of predictive indexes
In the exponential variable weight vector, β is the penalty grade, which reflects the degree to which the variable weighting adjusts the prediction index weights toward equilibrium. The bigger the value, the more the optimized result is biased toward equilibrium among the indexes. β = 1 is selected in this paper.
According to extreme-value entropy weighting and variable weight theory, the constant weight of each rockburst prediction index and the variable weight values for each engineering case are calculated using Eqs (7), (9)-(13) and (19). The results are shown in Table 5. The constant weight of a prediction index synthesizes the prediction index information in Table 5 and reflects the relative importance of each prediction index. In Fig 2, the relative importance of the prediction indexes is shown as: Index 1 is more important than Index 2, Index 2 is more important than Index 3, and Index 3 is more important than Index 4. Considering the different index values of the rock masses, the variable weight of each prediction index is obtained by adjusting the constant weight; it reflects the influence of the different prediction index values on the index weights. As shown in Fig 2, the relative importance of the prediction indexes of different Engineering Rock Masses changes after introducing variable weights. There is no regularity in these changes, indicating the broad representativeness of the rockburst cases selected in this paper.
Prediction results and analysis
According to Eqs (4)-(6) and (15)-(18), the comprehensive correlation degrees between each Ideal Rock Mass and each prediction grading standard, and the corresponding rockburst intensity grading variables, are calculated as shown in Table 6. The grading variable intervals constructed for the rockburst intensity grades are shown in Table 7.

As shown in Table 6, the rockburst intensity prediction index values of the Ideal Rock Masses come from the classification criteria. In this case, the comprehensive correlation degree of an Ideal Rock Mass corresponds one-to-one to its rockburst intensity grade. Therefore, according to the comprehensive correlation degrees of the Ideal Rock Masses in Table 6, the calculated feature grade variables of rockburst intensity also correspond one-to-one to the rockburst intensity grades; that is, the correspondence in Table 7 is correct. According to Eqs (4)-(6) and (15)-(18), the comprehensive correlations between the Engineering Rock Masses and the standard grades of rockburst intensity, and the corresponding feature grade variables, are calculated. Table 8 shows the final prediction results.

Model 1 in Table 8 refers to the cloud model based on index distance and uncertainty measurement [50], Model 2 to the normal cloud model [38], Model 3 to the set pair theory model [36], and Model 4 to the efficacy coefficient method model [37]. Method 1 refers to the matter-element extension model based on variable weight theory with the feature grade variables of rockburst intensity as the discrimination criterion. Method 2 refers to the matter-element extension model that uses variable weights with the comprehensive correlation degree as the discrimination criterion. Method 3 refers to the matter-element extension model that uses constant weights with the comprehensive correlation degree as the discrimination criterion.
The prediction results in Table 8 show that the grades predicted by Method 1 for Projects 2, 11 and 13 are lower than the actual grades, and the misjudgment rate is about 23%; for the other projects, the prediction results are completely consistent with the actual situation. Method 2 gives prediction grades lower than the actual grades for Projects 2, 4, 10 and 13, and a dangerous misjudgment (a prediction grade higher than the actual grade) for Project 1, with a misjudgment rate of about 38%. Method 3 gives a prediction grade lower than the actual grade for Project 13 and dangerous misjudgments for Projects 1, 9, 10 and 12, with a misjudgment rate of about 38%. Thus, after applying variable weight theory and the grade variable method in the matter-element extension model, the misjudgment rate of the model is reduced by about 15%, and the occurrence of dangerous misjudgments is reduced.
The prediction results in Table 9 show that Model 1 has dangerous misjudgments for Projects 1 and 8. All grades predicted by Model 2 are consistent with the actual grades. The grades predicted by Model 3 for Projects 10 and 13 are one grade lower than the actual grades, and the grade predicted by Model 4 for Project 2 is lower than the actual grade. Therefore, the prediction results of Method 1 are safer than those of Model 1. Compared with Models 2, 3 and 4, the correct rate of Method 1 still needs to be improved. Nevertheless, Method 1 has a relatively high correct rate, and its predicted grades are not greater than the actual grades. Hence, Method 1 is of great significance for guiding the safe construction of underground engineering.
Conclusions
1. Taking the ratio of tangential stress to uniaxial compressive strength of rock, the ratio of uniaxial compressive strength to tensile strength of rock, the brittleness index and the elastic energy index of rock as the prediction index system, the dimensions of the prediction indexes are unified by the extreme value method. The objective constant weights of the prediction indexes are calculated by the entropy weight method, which reflects the relative importance of the prediction indexes. On this basis, variable weight theory is introduced to fully consider the influence of the differences between the rockburst prediction indexes on the index weighting, and a penalty-type feature variable weight function is selected to calculate the variable weights of the prediction indexes, so as to improve the rationality of the weighting.
2. Based on the grading standards of the rockburst intensity indexes, five Ideal Rock Masses are constructed. Matter-element extension theory and the grade variable method are used to calculate the feature grade variables of rockburst intensity for each Ideal Rock Mass. Then the value range of the rockburst intensity grade variable corresponding to the grading standard is constructed. Based on this interval, the rockburst grade can be predicted from the feature grade variable of rockburst intensity. This method overcomes the deficiency of the matter-element extension model in predicting the rockburst grade by the maximum comprehensive correlation degree.
3. The model is also used to predict the rockburst intensity of several typical underground engineering cases at home and abroad. The results show that in a few projects (about 23%) the grade predicted by the model is lower than the actual grade, while in most projects (about 77%) the predicted grade is consistent with the actual grade, and there is no serious risk misjudgment or large difference between the predicted and actual grades. Compared with the traditional matter-element extension model, the method proposed in this paper, which distinguishes the rockburst intensity grade by the feature grade variable of rockburst intensity, improves the correct rate of the prediction results by about 15%, and the safety of the prediction results is also higher.
4. Compared with the normal cloud model, the set pair theory model and the efficacy coefficient method, the correct rate of the prediction results of this model needs to be further improved. Meanwhile, although the engineering cases collected in this paper are representative, their number is limited, and the reliability of the model needs to be further tested in engineering practice. In addition, in the process of calculating the variable weights of the prediction indexes, the selection of the variable weight function is significantly important to the weight calculation. A more reasonable variable weight function should be constructed according to the characteristics of the rockburst indexes.

| 6,498.8 | 2019-06-26T00:00:00.000 | ["Physics"] |
Deconstruction of Risk Prediction of Ischemic Cardiovascular and Cerebrovascular Diseases Based on Deep Learning
Over the years, with the widespread use of computer technology and the dramatic increase in electronic medical data, data-driven approaches to medical data analysis have emerged. However, the analysis of medical data remains challenging due to the mixed nature of the data, the incompleteness of many records, and the high level of noise. This paper proposes an improved neural network, DBN-LSTM, that combines a deep belief network (DBN) with a long short-term memory (LSTM) network. The subset of feature attributes processed by CFS-EGA is used for training, and the number of hidden layers of the upper DBN is optimized during the training of DBN-LSTM. At the same time, the validation set is used to determine the hyperparameters of the LSTM. DNN, CNN, and LSTM networks are constructed for comparative analysis with DBN-LSTM, and the classification method is used to compare the averages of the final results of the two experiments. The results show that the prediction accuracy of DBN-LSTM for cardiovascular and cerebrovascular diseases reaches 95.61%, which is higher than that of the three traditional neural networks.
Introduction
At present, doctors mainly judge whether patients suffer from cardiovascular and cerebrovascular diseases (CVD) based on experience and clinical test reports. The etiology of CVD is complex and its predictability is poor. It is generally difficult for nonprofessionals to judge whether they may suffer from such diseases. People who are concerned about their physical condition often monitor it based on routine physical examination indicators such as blood pressure and blood lipids, while ignoring factors that may lead to CVD such as family medical history and pathological changes in other organs of the body. Therefore, in the inspection and prevention of cardiovascular disease, the most important thing is to use advanced medical technology to screen the relevant indicators. Using medical artificial intelligence technology to analyze the occurrence and development mechanism of diseases is a hot and difficult point in current medical research. The accumulation of massive clinical data provides opportunities for disease prediction and disease classification research. Disease prediction can help with the early diagnosis of patients, recommend effective treatment in the early stage of disease development, relieve patients' pain, and reduce the economic burden. It is of great significance to conduct in-depth mining and analysis of the clinical diagnosis data of patients, identify disease subtypes based on patient prognosis information, and conduct subtype population difference analysis to improve the ability and level of individualized diagnosis and treatment of patients.
Chronic kidney disease is an important risk factor for cerebrovascular disease. Kelly et al.'s study found that chronic kidney disease was associated with severe stroke severity, poor prognosis, and a high burden of asymptomatic cerebrovascular disease and vascular cognitive impairment [1]. Zhang et al. studied a risk prediction model for CeVD mortality in which all accessible clinical measures were screened as potential predictors [2]. Zeng et al. studied the usefulness of predicting 10-year CVD risk in an Inner Mongolian population using the China-PAR equation [3]. Tenori et al. used multivariate statistics and a random forest classifier to create a model for predicting death within 2 years after a cardiovascular event; the prognostic risk model predicted death with a sensitivity, specificity, and predictive accuracy of 78.5% [4]. Early mortality and associated risk factors in adult maintenance hemodialysis (MHD) patients were retrospectively analyzed by Chen et al., whose study used multifactorial logistic regression on the training dataset to analyze risk factors for premature death within 120 days after hemodialysis and to develop a prediction model [5]. Lee et al. developed a deep learning signature using PET in order to objectively assess stroke patients with cognitive decline [6]. Research on deep learning for cardiovascular disease prediction has progressed slowly due to difficulties in feature extraction and other reasons.
Cuadrado-Godia et al.'s study found a significant increase in the use of NN, ML, and DL in image processing for correctly grading the severity of cSVD [7]. Nanni et al. constructed different sets of support vector machines for CNNs using clinical image datasets, experimenting with different learning rates, augmentation techniques (e.g., warping), and topologies [8]. Wang et al.'s study used actual data from hospitals to construct deep learning models for multi-classification studies of infectious diseases; data normalization and densification of sparse data by autoencoders were used to improve model training [9]. Systems medicine aims to improve our understanding, prevention, and treatment of complex diseases. Wang et al. found that deep learning can automatically extract the relevant features needed for a given task from high-dimensional heterogeneous data, with applications in prediction, prevention, and precision medicine [10]. As disease diagnosis, epidemic response, and prevention are important, Dan et al. designed deep CNN models and analyzed the results in detail [11]. Stroke is a cerebrovascular disease that seriously endangers people's life and health. Zhang et al.'s research found that deep neural networks with massive data learning capability provide powerful tools for lesion detection; this research contributes to intelligent assisted diagnosis and prevention of ischaemic stroke [12]. The above studies demonstrate the feasibility of neural networks for predicting similar diseases in medicine, but the accuracy of neural network prediction is largely unspecified in these studies.
Through the survey of related work, it can be found that although many scholars have conducted research on the treatment and prognosis of cardiovascular disease and achieved many staged results, few scholars have improved the systematic methods of medical data processing. This paper combines a DBN and an LSTM to establish an improved neural network model, DBN-LSTM. The test set is then used for evaluation, and the classification method is used to compare the prediction accuracy with DNN, CNN, and LSTM. The improved DBN-LSTM neural network model is trained, and the model parameters are determined. At the same time, the hidden layer of the neural network is designed with 10 neurons and 1 layer, the learning rate is set to 0.005, and the activation function is the sigmoid. The novelty of this paper is that the deep belief network and the long short-term memory network are connected by the idea of "concatenation": the deep belief network is used as the upper layer to take in the training data, and the LSTM is used as the lower layer to output the results.
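As a minimal sketch of this serial architecture (PyTorch; the dense layer widths are illustrative, and the sigmoid dense stack stands in for a DBN whose RBM-wise pretraining is omitted), the upper network encodes each time step's features and the lower LSTM produces the risk output:

```python
import torch
import torch.nn as nn

class DBNLSTM(nn.Module):
    """Serial DBN-LSTM sketch: sigmoid dense layers approximate the (pre-
    trained) DBN; an LSTM with 10 hidden units and 1 layer follows, using
    the hyperparameters stated above (lr = 0.005, sigmoid activation)."""
    def __init__(self, n_features, dbn_sizes=(64, 32), hidden=10):
        super().__init__()
        layers, prev = [], n_features
        for size in dbn_sizes:                 # "upper" DBN-like stack
            layers += [nn.Linear(prev, size), nn.Sigmoid()]
            prev = size
        self.dbn = nn.Sequential(*layers)
        self.lstm = nn.LSTM(prev, hidden, num_layers=1, batch_first=True)
        self.out = nn.Linear(hidden, 1)        # CVD risk probability

    def forward(self, x):                      # x: (batch, seq_len, features)
        b, t, f = x.shape
        h = self.dbn(x.reshape(b * t, f)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(h)             # last hidden state
        return torch.sigmoid(self.out(h_n[-1]))

model = DBNLSTM(n_features=12)
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
print(model(torch.randn(2, 5, 12)).shape)     # torch.Size([2, 1])
```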
Deep Learning Risk Prediction Method for CVD
Current Status of Cardiovascular and Cerebrovascular Disease Prediction Research
The World Health Organization has conducted a survey of cardiovascular patients in China and found that the number of cardiovascular disease patients in China is among the highest in the world. According to the report, about 300 million people in China suffer from cardiovascular disease (CVD); roughly every 10 seconds one person dies from the disease, and the mortality rate is increasing year by year. The cost of late-stage CVD treatment is high, and the total cost of medical visits and the average cost of each medical visit have risen sharply in recent years, as shown in Figure 1.
As shown in Figure 1, in view of the current status of CVD in China, there is an urgent need to carry out research work to delay the progression of the disease and reduce its incidence. As a chronic disease, CVD has a slow onset process, a long incubation period, and many modifiable pathogenic factors, leaving enough time for the early prevention and treatment of CVD [13]. Based on current advanced Internet technology and medical informatization, the risk factors of CVD can be comprehensively managed through the personalized health management strategy of "Internet + medical treatment". Targeted interventions in lifestyle, diet, exercise, and psychology, together with clinical decisions, can effectively reduce the incidence and mortality of CVD [14,15]. The closed-loop, continuous health management model is shown in Figure 2.
As shown in Figure 2, in the process of health management, it is very important to predict the risk of CVD. The main functions are as follows: (1) By combining quantitative and qualitative analyses of the risk level of each factor and the probability of future incidence, triage management of the assessed subjects can be realized, with re-evaluation to improve the results; relying on this method, the allocation of scarce medical resources can be optimized and resources can be used more reasonably [16]. (2) Predicting and quantifying the size of an individual's disease risk helps promote the subject's self-management awareness and mobilizes their enthusiasm to fully participate. To fundamentally alleviate the current predicament of "seeking a doctor only after illness" in the treatment of chronic diseases, and to change passive treatment into active health management, is of great significance for alleviating CVD events [17,18]. The application fields of deep learning are shown in Figure 3.
As shown in Figure 3, with the development of deep learning, the method has achieved continuous success in data mining, computer vision, and natural language processing, and it has also become the preferred method for tasks whose features are difficult to extract [19,20]. With the continuous increase of electronic medical record data, deep learning methods show better performance than traditional methods in early diagnosis and risk prediction [21,22].
Graph Convolutional Neural Network

Graph convolutional neural network methods are divided into two categories: spectral-domain-based methods and spatial-domain-based methods. Spectral-domain-based methods define graph convolution by introducing filters from a graph signal processing perspective, where the graph convolution operation is interpreted as removing noise from graph signals. Spatial-domain-based methods represent graph convolution as aggregating feature information from neighbors. The input of the graph convolutional neural network (GCN) model mainly consists of the feature matrix X and the adjacency matrix A. There may be various relationships between the N nodes in the graph, and this edge information can be represented by an N × N-dimensional adjacency matrix A.
$$\tilde{A} = A + I \tag{1}$$
In the formula, I represents the identity matrix, and H represents the feature matrix of each layer; if the layer is the input layer, H is X. σ is the nonlinear activation function. $\tilde{D}$ is the degree matrix corresponding to $\tilde{A}$, as shown in the following formula:
$$\tilde{D}_{ii} = \sum_{j} \tilde{A}_{ij}$$
The overall forward propagation formula is
$$H^{(l+1)} = \sigma\!\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right) \tag{2}$$
Afterward, the loss function can be calculated according to the labeled nodes, such as the cross-entropy loss function, to optimize the training of the model. Since GCN models can be trained even with only a few labeled nodes and still achieve good results, GCN models are often considered semisupervised classification models [23,24].
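To make the propagation rule concrete, here is a minimal NumPy sketch of one GCN layer implementing formula (2); the toy graph, feature dimensions, and random weights are illustrative placeholders, not values from the paper.

```python
import numpy as np

def gcn_layer(A, H, W, activation=np.tanh):
    """One propagation step: H' = sigma(D~^{-1/2} A~ D~^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                  # A~ = A + I (self-loops)
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))   # D~^{-1/2} as a vector
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return activation(A_norm @ H @ W)

# toy graph: 4 nodes in a chain, 3 input features, 2 output features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0],
              [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
X = np.random.rand(4, 3)
W = np.random.rand(3, 2)
print(gcn_layer(A, X, W))
```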
First, a relatively complete heterogeneous network is constructed, which contains molecular network data of multiple species and protein information of multiple species [25]. Among them, if the vector similarity of two protein nodes is relatively large, then the two protein nodes are topologically similar in the heterogeneous network or have similar protein sequences.
ProSNet first samples a large number of heterogeneous path instances according to the heterogeneous biological network (HBN) in order to find low-dimensional vectors for each node. In the construction process, the algorithm matches the nearest nodes and finds the optimal path in the matching. Then, the optimal low-dimensional vectors are found based on the property that nodes that appear together in path instances have similar vector representations.
The framework utilizes $p_M$ and $q_M$ to model different heterogeneous paths and weights node vectors of different dimensions according to the heterogeneous path M. $\Pr^{+}(P_{e_l \to e_L} \mid M)$ represents the distribution of path instances in the heterogeneous biological network (HBN) following heterogeneous path M, and $\Pr^{-}(P_{e_l \to e_L} \mid M)$ is the noise distribution, set to a simple form for convenience. Among them, $D \in \{0, 1\}$ are binary classification labels. Since the optimization goal is to fit $\Pr(P_{e_l \to e_L} \mid M)$ to $\Pr^{+}(P_{e_l \to e_L} \mid M)$, one just maximizes the corresponding expectation.
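The expectation above is of the standard noise-contrastive (negative-sampling) form used by heterogeneous-network embedding methods. The following Python sketch shows that generic objective under this assumption; the vectors and sampling are illustrative placeholders, not ProSNet's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def negative_sampling_loss(u, v_pos, v_negs):
    """Generic NCE-style loss for one path instance: the positive pair
    (drawn from Pr+) is pushed together, sampled noise pairs (from Pr-)
    are pushed apart; minimizing it maximizes the expectation above."""
    loss = -np.log(sigmoid(u @ v_pos))                       # D = 1: true instance
    loss -= sum(np.log(sigmoid(-(u @ v))) for v in v_negs)   # D = 0: noise
    return loss

# toy 16-dimensional node embeddings
rng = np.random.default_rng(0)
u, v_pos = rng.normal(size=16), rng.normal(size=16)
v_negs = [rng.normal(size=16) for _ in range(5)]
print(negative_sampling_loss(u, v_pos, v_negs))
```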
Basic Principles of Bayesian Algorithm.
The Bayesian algorithm is widely used in the field of mathematical statistics; it mainly predicts the probability of future events based on known events. In practical applications, this algorithm mainly analyzes the probability of events that have occurred and determines the possibility of future events. The algorithm has a high proportion of applications in the field of probabilistic classification. The analysis is mainly based on the corresponding joint probability.
The disadvantage of this algorithm is that the structure is very complex and the possibility of overfitting is high. The structure of the Bayesian classification algorithm is shown in Figure 4.
As shown in Figure 4, the naive Bayesian classification algorithm belongs to a class of efficient classification algorithms and has certain applications in many fields. Among them, $X = \{X_1, X_2, \ldots, X_n\}$ is the feature vector. According to Bayes' theorem, there is the following formula:
$$P(C_i \mid X) = \frac{P(X \mid C_i)\,P(C_i)}{P(X)}$$
Among them, $P(X \mid C_i)$ is the conditional probability of the feature vector X under category $C_i$, and $P(C_i)$ is the prior probability of category $C_i$. The naive Bayes decision criterion is: if $P(C_i \mid X) > P(C_j \mid X)$ for any $j \neq i$, the category of the attribute set X is judged to be $C_i$. Considering that $P(X)$ does not depend on the category during the analysis, the decision can be described by the following expression:
$$C(X) = \arg\max_{C_i} P(X \mid C_i)\,P(C_i)$$
Since the naive Bayes classifier assumes that the attributes are independent of each other, the class-conditional probability factorizes as
$$P(X \mid C_i) = \prod_{k=1}^{n} P(X_k \mid C_i)$$
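As an illustration of the decision rule above, here is a minimal categorical naive Bayes classifier in Python with Laplace smoothing; it is a generic sketch, not the paper's implementation.

```python
import numpy as np

class CategoricalNB:
    """Minimal naive Bayes: P(Ci | X) ∝ P(Ci) Π_k P(x_k | Ci)."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.prior = {c: np.mean(y == c) for c in self.classes}
        self.values = [np.unique(X[:, k]) for k in range(X.shape[1])]
        # Laplace-smoothed per-attribute conditionals P(x_k = v | C_i)
        self.cond = {
            c: [{v: (np.sum(X[y == c, k] == v) + 1) /
                    (np.sum(y == c) + len(self.values[k]))
                 for v in self.values[k]}
                for k in range(X.shape[1])]
            for c in self.classes
        }
        return self

    def predict(self, x):
        # log-space product of prior and per-attribute conditionals
        score = lambda c: np.log(self.prior[c]) + sum(
            np.log(self.cond[c][k].get(v, 1e-12)) for k, v in enumerate(x))
        return max(self.classes, key=score)

X = np.array([[0, 1], [0, 0], [1, 1], [1, 0]])
y = np.array([0, 0, 1, 1])
print(CategoricalNB().fit(X, y).predict([1, 1]))  # -> 1
```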
An improved neural network model, DBN-LSTM, is established, which is evaluated using the test set; the classification method is used to compare and analyze its prediction accuracy against DNN, CNN, and LSTM, and the regression method is then used to compare the performance of DBN-LSTM and LSTM separately. The DBN-LSTM for predicting CVD is composed of a DBN and an LSTM in series. Using DBN-LSTM, the complex features of the multiple factors in the dataset can be extracted for the prediction of CVD. The structure diagram of DBN-LSTM is shown in Figure 5.
As shown in Figure 5, the upper layer of DBN-LSTM is composed of the DBN. The DBN has a strong ability to learn features and can capture the intrinsic correlations of the feature attribute data well. Its structure is composed of multiple RBMs. RBMs are energy-based models, with the energy function defined as
$$E(v, h) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i w_{ij} h_j$$
The DBN is good at learning to extract the deep-level features of the training data. The joint distribution between its visible vector v and the hidden layer h is modeled as
$$P(v, h) = \frac{1}{Z}\, e^{-E(v, h)}$$
Among them, i and j index the visible and hidden units distributed in the network, Z is the normalizing partition function, and P represents the distribution obeyed by the nodes.
For the entire DBN with L hidden layers, the joint distribution generalizes to
$$P(v, h^1, \ldots, h^L) = \left(\prod_{k=0}^{L-2} P(h^k \mid h^{k+1})\right) P(h^{L-1}, h^L), \qquad h^0 = v$$
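As a minimal numerical illustration of the energy and (unnormalized) joint distribution above, assuming small binary vectors; the partition function Z is intractable in general and is therefore omitted:

```python
import numpy as np

def rbm_energy(v, h, W, a, b):
    """E(v, h) = -a·v - b·h - v·W·h, the standard RBM energy above."""
    return -(a @ v) - (b @ h) - (v @ W @ h)

rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=6).astype(float)   # visible units
h = rng.integers(0, 2, size=3).astype(float)   # hidden units
W, a, b = rng.normal(size=(6, 3)), rng.normal(size=6), rng.normal(size=3)

E = rbm_energy(v, h, W, a, b)
p_unnorm = np.exp(-E)   # proportional to P(v, h); dividing by Z would normalize
print(E, p_unnorm)
```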
After each parameter update, the RBM backpropagates the values of the trained hidden layer h to the next RBM, until the entire DBN is fine-tuned, and the deviations of the visible and hidden units of the DBN are calculated accordingly. The output feature vector of the DBN, i.e., the new feature vector extracted after denoising, then enters the LSTM layer as its input. Therefore, the dependencies learned by the LSTM are strongly influenced by the new features output by the DBN.
As mentioned above, the LSTM network has good computing performance. Combined with the fast learning speed of the DBN and its fast iterative updates, DBN-LSTM is well suited to the computation of the medical data in this paper.
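A compact PyTorch sketch of the series architecture described above; the DBN part is represented here by its unrolled feed-forward encoder (in the paper it would first be pretrained layer by layer as RBMs), and all layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DBNLSTM(nn.Module):
    """Sketch of the series DBN -> LSTM model; sizes are placeholders."""
    def __init__(self, n_features=9, dbn_sizes=(64, 32), lstm_hidden=32, n_classes=2):
        super().__init__()
        layers, prev = [], n_features
        for size in dbn_sizes:                      # stacked dense "RBM" layers
            layers += [nn.Linear(prev, size), nn.Sigmoid()]
            prev = size
        self.dbn = nn.Sequential(*layers)
        self.lstm = nn.LSTM(prev, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, n_classes)

    def forward(self, x):                           # x: (batch, seq_len, n_features)
        b, t, f = x.shape
        z = self.dbn(x.reshape(b * t, f)).reshape(b, t, -1)  # per-step features
        out, _ = self.lstm(z)
        return self.head(out[:, -1])                # classify from last time step

model = DBNLSTM()
logits = model(torch.randn(8, 5, 9))  # e.g., 8 patients, 5 visits, 9 attributes
```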
Prediction Steps of CVD.
The related content studied in this paper can be understood as an application of data mining technology to medical data. The process can be summarized into the following five steps: determine the main factors of the data, collect and preprocess the data, establish the model, compare the models, and test and analyze them. The steps of cardiovascular and cerebrovascular disease prediction are shown in Figure 6.
As shown in Figure 6, the main steps for the prediction of CVD are as follows.
Determine the Main Factors of the Data.
In the first stage, it is necessary to actively consult the data and to conduct in-depth exchanges and discussions with experts in related fields in a timely manner. The relevant standards of this data mining are formulated by analyzing the theme, and the data closely related to cardiovascular and cerebrovascular diseases are screened out.
Collecting Data and Preprocessing.
This process includes algorithm selection and algorithm improvement. The main purpose is to reduce the data, reduce the dimensionality, and extract features from data with too many influencing factors and too high a spatial dimension. The extracted feature attribute dataset can then be suitable for training the neural network.
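The paper later states that an improved genetic algorithm performs the attribute selection; as a generic stand-in for this reduction step, the sketch below uses a scaler plus PCA from scikit-learn to produce a 9-attribute feature set (all shapes are placeholders):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X_raw = np.random.rand(200, 40)   # placeholder: 200 records, 40 raw attributes
reducer = make_pipeline(StandardScaler(), PCA(n_components=9))
X_feat = reducer.fit_transform(X_raw)   # 9 feature attributes for NN training
print(X_feat.shape)                     # (200, 9)
```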
Establish a Neural Network Model.
This process is to establish the improved neural network model, set the model parameters, and use the feature attribute dataset to train the model. The contrasting baseline models are built and trained at the same time.
Comparative Test Analysis.
This process is to test the prediction accuracy of the improved neural network model and the traditional neural network models for CVD and to compare the performance of the networks to draw conclusions.
Cardiovascular and Cerebrovascular Feature Extraction and Data Preprocessing.
The cardiovascular and cerebrovascular datasets are mainly derived from patient medical records, and their main features are multifactorial and high-dimensional. There are many redundant and low-correlation attribute features in such data. In practical applications, if preprocessing is not performed, the space complexity of the algorithm is high and it is easy to overfit. This adversely affects the performance of the classifier model and also reduces the classification accuracy. Therefore, in the course of processing, it is necessary to appropriately reduce the dimensionality of the original CVD data and perform certain screening to determine a subset of characteristic attributes with strong correlation. On this basis, the corresponding diagnostic model is determined to provide support for subsequent processing.
Data sources include CT, electronic medical records, and magnetic resonance images. The data of patients with CVD after preprocessing are shown in Table 1. As shown in Table 1, according to practical experience, the original datasets come in many kinds, and their forms vary in complex ways, which obviously affects feature selection and recognition results. Therefore, in the actual processing, in order to effectively meet the accuracy and precision requirements of the classifier, the original cardiovascular and cerebrovascular disease information should be preprocessed first. The corresponding processes mainly include discretization, data integration, normalization, and smoothing.
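A minimal scikit-learn/NumPy sketch of the listed preprocessing steps (normalization, discretization, smoothing) on hypothetical columns; the column meanings and bin counts are illustrative assumptions:

```python
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer, MinMaxScaler

raw = np.random.rand(100, 3) * [200.0, 120.0, 10.0]  # hypothetical vitals columns
norm = MinMaxScaler().fit_transform(raw)             # normalization to [0, 1]
disc = KBinsDiscretizer(n_bins=4, encode="ordinal",
                        strategy="quantile").fit_transform(raw)  # discretization
kernel = np.ones(5) / 5
smooth = np.convolve(raw[:, 0], kernel, mode="same")  # moving-average smoothing
```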
Determine Hyperparameters.
The improved DBN-LSTM algorithm is trained, and the performance changes of the algorithm under different parameters are recorded. The optimal hyperparameter levels are 1, 2, 2, 2, 2, at which the neural network achieves its best performance. With the optimized hyperparameter settings, an accuracy of 0.9299 is achieved on the test set. The effects of the individual design hyperparameters on prediction performance are then analyzed, and the effects of the five hyperparameters at four levels are shown in Figure 7.
As shown in Figure 7, the effect of each design hyperparameter can be isolated through the orthogonal combinations of design hyperparameters across the tests. For example, level 3 of design hyperparameter c has an average precision of 0.9172 over the 3rd, 8th, 9th, and 14th main tests. Here, a range analysis is introduced to determine the sensitivity of the design hyperparameters.
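The range analysis can be expressed in a few lines: group the runs by the level each hyperparameter took, average the accuracy per level, and take the maximum minus the minimum as the sensitivity. The sketch below uses placeholder accuracies and a balanced level assignment, not the paper's actual orthogonal table:

```python
import numpy as np

rng = np.random.default_rng(0)
acc = rng.uniform(0.88, 0.93, size=16)        # placeholder run accuracies
# balanced assignment: each of 4 levels appears 4 times per hyperparameter
levels = np.column_stack(
    [np.roll(np.repeat(np.arange(1, 5), 4), p) for p in range(5)])

def range_analysis(acc, levels, p):
    """Mean accuracy per level of hyperparameter p; range = max - min."""
    means = [acc[levels[:, p] == lv].mean() for lv in range(1, 5)]
    return means, max(means) - min(means)

for p in range(5):
    _, sens = range_analysis(acc, levels, p)
    print(f"hyperparameter {p}: sensitivity range = {sens:.4f}")
```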
Analysis from the Perspective of Classification.
The hyperparameter sensitivity represents the difference between the maximum average precision and the minimum average precision. In order to ensure the generality of the experimental results, two experiments are used. The first experiment adopts forward accumulation from the No. 1 feature attribute to the No. 9 attribute, and the second experiment adopts the reverse accumulation method; the cumulative distribution function of the root mean square error of DBN-LSTM and LSTM under the two experiments is then analyzed. The advantage of this is that the stability and prediction accuracy of the neural networks under the influence of different data attributes can be observed; at the same time, it also reflects the real impact of each feature subset in the preprocessed dataset on the prediction accuracy, down to a specific attribute. The results are shown in Tables 2 and 3.
As shown in Tables 2 and 3, among these design parameters, the learning rate is the most important factor. Therefore, combining the analysis of the impact of the dataset and of the individual hyperparameters on network performance, the training accuracy of DNN, CNN, LSTM, and DBN-LSTM over the two experiments is shown in Figure 8.
As shown in Figure 8, as the number of training iterations increases, the training accuracy increases and the loss of the neural network decreases; the LSTM model shows higher loss and lower accuracy at the beginning of training. This is due to the high sum of errors caused by interference between different samples in the training set, and its accuracy is low at first due to the small number of training iterations. This phenomenon also occurs with DBN-LSTM, but to a significantly lesser extent than with LSTM. This is because the DBN learns the characteristics of the feature attribute data, reducing the training complexity of the LSTM hidden layer, and both networks converge clearly by the end of training. The CNN fluctuated during the training process but stabilized after 40 iterations, and the accuracy on the training data improved significantly.
The prediction accuracy of the LSTM network exceeds that of the DNN and CNN networks, with an average accuracy of 92.98%. However, the performance of DBN-LSTM is significantly better than the DNN, CNN, and LSTM networks in both experiments, with an accuracy of 95.61%. DBN-LSTM requires about 5-6 times more training time than the DNN and CNN models, as can be expected, because the computational cost of the LSTM memory cells in the DBN-LSTM network and the cost of DBN pretraining are both high; even so, it remains preferable to the plain LSTM network.
Analysis from the Perspective of Regression.
From the perspective of regression, the performance and stability of DBN-LSTM and LSTM are analyzed. In order to further compare the performance of the two networks, this paper adopts a multiattribute accumulation test method. Specifically, over multiple trainings, the subattributes contained in the input dataset are incremented during each training: the first dataset contains 1 subattribute, the second 2, the third 3, and so on, until all 9 features of the preprocessed data are included, so the test is performed 9 times. The predicted performances of the first and second experiments are shown in Figure 9.
As shown in Figure 9(a), in the process of stacking the forward feature attributes, the prediction errors of the two networks fluctuate strongly over the first five attributes. From the stacking of the 6th attribute onward, the RMS value of the error increases steadily, and the prediction stability of DBN-LSTM and LSTM improves significantly; the root mean square error reaches its maximum after the eighth attribute is superimposed. After the ninth feature attribute is superimposed, the root mean square error decreases significantly, which indicates that the preceding feature attributes interfere to some extent with the No. 9 feature attribute. Over the experiment as a whole, LSTM has a smaller prediction error than DBN-LSTM for a small fraction of the time, but most of the time DBN-LSTM has the smaller prediction error.
As shown in Figure 9(b), in the process of stacking the reverse feature attributes, the error of the DBN-LSTM and LSTM networks basically increases steadily. When stacking reaches the 5th reverse attribute, which is the No. 4 feature attribute, the prediction errors of DBN-LSTM and LSTM fluctuate within a certain range. When the ninth reverse attribute, which is the No. 1 feature attribute, is superimposed, the prediction accuracy of both DBN-LSTM and LSTM decreases to varying degrees, but that of DBN-LSTM decreases more slowly. At the same time, it can be seen that with both the forward and reverse stacking methods, the training accuracy of the networks decreases each time the ninth feature attribute is input, which shows that the data preprocessing steps can be further optimized.
The cumulative distribution function (CDF) of the RMSE performance is shown in Figure 10. As shown in Figure 10, the DBN-LSTM network has a significant performance advantage most of the time, while being similar to the LSTM network some of the time. The performance difference between the two networks is also partly related to the initialization values of the networks. In the figure, in 95% of the time periods, the RMSE of DBN-LSTM is less than 14.4, and the RMSE of LSTM is less than 17.2. On this dataset, this further verifies the superiority of DBN-LSTM from the perspective of regression. Compared with LSTM, DBN-LSTM has better predictive ability and better stability.
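The CDF curves in Figure 10 are empirical distribution functions of the per-test RMSE values; a minimal sketch of how such a curve and its 95th-percentile bound can be computed (the error series here is a random placeholder):

```python
import numpy as np

def empirical_cdf(values):
    """Sorted values vs. cumulative fraction: the curve plotted in Figure 10."""
    x = np.sort(np.asarray(values))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

rmse_series = np.random.default_rng(1).uniform(5, 18, size=50)  # placeholder
x, y = empirical_cdf(rmse_series)
p95_bound = x[np.searchsorted(y, 0.95)]  # RMSE not exceeded 95% of the time
print(p95_bound)
```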
Conclusions
The work of this paper mainly focuses on the prediction of CVD. First of all, for the prediction of CVD using deep learning methods, this paper reviews the rapid development of neural networks in recent years and their outstanding achievements in predicting diseases in the medical field. After comparing various forecasting methods, an efficient forecasting method is adopted. On the basis of the genetic algorithm, an improved genetic algorithm is used to select and optimize the characteristic attributes of the dataset samples, and the prediction accuracy of the neural network is increased by using this dataset. The improved neural network DBN-LSTM is formed by combining DBN and LSTM, and the structure and composition of the DBN-LSTM neural network are described in detail. Second, in the DBN-LSTM training process, a comparative experiment on the effect of different numbers of hidden layers on prediction performance is first carried out for the upper-layer DBN, the number of hidden layers of the DBN part is determined, and the hyperparameters of the LSTM are determined in combination with the validation set. Then, the DNN, CNN, and LSTM networks are constructed, and the classification method is used to compare and analyze the averages of the final results of the two experiments. Finally, the regression method is used to compare the prediction ability and stability of the DBN-LSTM and LSTM networks on different numbers of feature attributes, using a multiattribute accumulation test method that accumulates feature attributes from two different directions. The results show that DBN-LSTM has better performance and a more stable network than LSTM. However, this paper also has certain shortcomings; for example, the experiments focus on comparing the performance of the algorithms and do not actually apply the algorithm to a clinical case. Therefore, this research will be continued in follow-up work.
Data Availability
The datasets generated and/or analyzed during the current study are not publicly available due to sensitivity and a data use agreement.
Conflicts of Interest
The authors declare that there are no conflicts of interest. | 6,379.8 | 2022-09-30T00:00:00.000 | [
"Computer Science"
] |
How Families’ Use of Digital Technology Can Be a Tool for Reducing Loneliness and Improving Food Intake among Older Adults
: The purpose of this study was to explore how a technical solution implemented among older adults and connected with an app supervised by an app administrator can reduce loneliness, prevent malnutrition, and inspire social eating and networking. In October 2020, a survey was distributed to 3500 administrators of the one-button computer communication tool Komp. Komp consists of a screen placed with older adults and an app used by the administrator of the tool. The survey addresses aspects that can provide new insights into how older adults can use digital solutions as a link to family and external networks. The study results show that due to COVID-19, 65% of respondents said they used Komp more frequently than before, but only 5% of current use was associated with eating meals together. However, 54% of the app administrators indicated that this could be a good future activity. Furthermore, 88% thought Komp could contribute to more socializing through shared meals. This study elicited almost 1650 constructive comments on experience, use, and recommendations. The study results show that digital solutions can be a link between older adults and their families and external networks. Such tools can address needs connected to loneliness, social isolation, and food intake.
Introduction
The number of older adults is increasing worldwide. In Norway, the population of older adults over the age of 80 years is expected to triple by 2060, and the 90-plus age group is expected to increase fivefold relative to 2020 [1]. Norway's population census shows that in 2019, 53% of Norwegian older adults lived in a single-person household. Among women aged 80 years or older, two out of three live alone, whereas the same applies to only one in three men. Although there are many older adults who live alone, there are also more than before who live with a partner. One reason for this may be that the life expectancy of men is increasing, and as they tend to stay in partnerships, the share of older people living in partnerships has increased [2]. However, as we age, loneliness and poor health occur more frequently, and loneliness, as well as social isolation, are major contributors to malnutrition [3]. Implementation of digital technology in health care has been suggested as one avenue towards healthy ageing for older adults [4]. In this article, we investigate whether a novel digital device can contribute to a reduction in loneliness and social isolation and, consequently, to an improvement in older adults' food intake.
Various health conditions such as living alone, physical inactivity, lack of a social network, and frailty are associated with old age [5]. Among these, reduced mobility, both in oneself and one's contemporaries, can lead to less social interaction and networking and more time spent alone [6,7]. Loneliness and social isolation are described as two different conditions, and both represent significant risks in terms of health issues among older adults [8,9]. Loneliness is described as the subjective feeling of being alone in combination with a wish for more social contact than is attainable [10]. Social isolation is described as an objective absence of, or very limited, physical contact with others such as family and friends [10]. For older adults, the feeling of being alone can be affected by both loneliness and social isolation [11,12]. Loneliness has been measured as stable among older adults up to 80 years of age, but it increases rapidly for those over 80. In old age (80 plus), approximately 40-50% self-report that they often feel lonely [13]. For older adults under the age of 80, between 5 and 10% feel lonely [14]. Social isolation is characterized by a lack or inadequacy of social contact with other people, as reflected by a limited social network and a lack of meeting places. Older people are more likely to be isolated compared to younger generations. The social networks of older adults are reduced in scope and ultimately consist mainly of family and close friends [15].
The incidence of perceived loneliness and food-related problems increases with age due to factors such as loss of a partner, loss of social networks, and age-related health issues [16]. For example, the SOLINUT study showed that 50% of older adults had a dietary intake that was not sufficient to meet their daily nutritional needs, and approximately 30% never shared a meal with family or friends, revealing their degree of social isolation [17]. Digital technology has been framed as a possible solution to increase interaction among older adults [18]. Digital technology for older adults can be a tool for maintaining a higher degree of functional independence throughout old age. This, in turn, can contribute to a better quality of life and influence healthy ageing [4]. Technological solutions can act as a positive link between older adults with different needs and the people in close relationships with them [18]. To reap the benefits of technology for ageing and longevity, technologies that are inclusive and benefit everyone need to be designed [19]. A newly developed digital communication tool, Komp, uses warm technology to reduce loneliness among older adults [20]. Although studies have been carried out with a focus on technology and an aim of reducing social isolation and loneliness [8], there is a lack of studies focusing on food intake and the use of technology.
The recent COVID-19 pandemic is an example of a situation in which increased social isolation was particularly noticeable among older adults. In their article about COVID-19, ref. [21] discussed the consequences of isolating older people and the possible resultant public health challenges. Social isolation in the COVID-19 situation meant that many older adults did not have the same opportunities for a nutritionally complete diet or to be as physically active as they were before the pandemic [21]. Thus, as the COVID-19 pandemic particularly reduced older adults' mobility and social interaction, most older adults became more isolated [22]. The start of the COVID-19 pandemic in 2020 also led to the rapid adoption of the use of new and existing digital technologies in society. Especially for older adults and their families, this became important because of the lack of physical contact [23]. Ref. [21] recommended that digital solutions be highlighted as a relevant future measure for older adults. However, little is known about the needs of older people regarding technology. In our study, we investigated whether a digital communication device recently launched in Norway could be instrumental in improving older adults' quality of life during the pandemic shutdown.
Research has revealed that older adults living at home may be unrecognized and unsupported until they receive home care [17]. Because they are left unattended, it can often be difficult for them to maintain good health, which can, in turn, interfere with maintaining good nutritional status [24]. Ref. [25] found a link between eating alone and malnutrition. The study showed that older adults who ate alone ate fewer meals and fewer fruits and vegetables and had a lower intake of protein in their diet [25]. Food, meals, and the context of the meal are essential for older adults' eating behavior and represent important elements of everyday life [26]. Nutrition is essential for healthy and active ageing [27]. A meal becomes more meaningful when shared: food tastes better, one tends to eat more, and meals also become more regular [16]. As the shutdown and isolation during the pandemic reduced older adults' possibilities for social interactions, more knowledge is needed with respect to whether using a digital communication tool could, to some extent, substitute for physical contact, including contact at meals.
The purpose of this study was to explore how a technical solution implemented among older persons and connected with an app supervised by family/caregivers could inspire networking and social eating via communication technology. We hypothesize that a reduction in loneliness, social isolation, and risk of malnutrition can be achieved.
Materials and Methods
In this study, the use of an innovative tool for communication between older adults and family/caregivers was investigated through a survey administered to the administrators of the tool. We first describe the tool, its purpose, and function, followed by descriptions of the research design, participants, and questionnaire.
The Communication Tool, Komp
The tool is a one-button computer developed for older adults who cannot manage modern technology (Figure 1). Connected to the computer is the Komp app administrated by the family/caregivers, which connects smartphones or tablets to the Komp screen (Figure 2). One person functions as administrator of the app. All app users, such as family members or caregivers, must be invited in order to access the Komp Family app, which functions as a private social network. Through the app, one can send photos and messages and make two-way video calls with Komp. The original intention of Komp was to provide a simple tool for older adults to be in touch with family and/or caregivers. Komp was developed by No Isolation (www.noisolation.com, accessed on 20 November 2022), a Norwegian company founded in 2015 with a focus on user empathy and the goal of reducing loneliness and social isolation through the use of warm technology [20]. Technology is often devised without full consideration of the target user group. Warm technology is an approach to technology design from a person-centered perspective [28]. Komp has a built-in 4G subscription and is available in most countries in Europe. During the COVID-19 pandemic, sales of Komp devices in Norway increased from 650 in 2019 to more than 4000 devices by the end of 2020. The number of app users in Norway in November 2019 was 19,000 persons.
Research Design-Procedure
A survey was developed to collect information from Komp administrators about the possibility of using the tool to participate in activities together. These activities are an extension of Komp's purpose as only a communication device.
The survey was administered by Nofima, and the web application software EyeQuestion (Logic8, Elst, Gelderland, the Netherlands, version 5) for Sensory and Consumer Research was used to collect responses. A link to the questionnaire was distributed to app administrators via No Isolation's database. Prior to answering the survey, the respondents had to sign a consent form electronically. The survey was distributed to all Komp administrators (approximately 3500) in November 2020. The response forms were sent to Nofima, and only researchers associated with the project had access. No personal data were collected, as IP addresses were not linked to the questionnaire. Respondents had the option to use an external link to send their email address and take part in a lottery to win a gift card worth EUR 100. The email addresses collected in this way were deleted immediately after the lottery was finished. Participants could also choose which grocery store they wanted the gift card from. A total of five gift cards were distributed.
Respondents
A total of 748 app administrators answered the survey. They provided free-text answers connected to the questionnaire, with a total of 1652 free-text answers related to social meals, physical activities, networking, technology, and COVID-19.
Questionnaire
The measures were developed specifically to achieve the aims of the study. Due to limitations of the data collection method (smartphone), knowledge of the respondents (app administrators), and topics to be investigated, questions were developed in a group process involving researchers, technologists, and No Isolation. The questionnaire consisted of four parts (see Appendix A for the complete questionnaire). Part 1 measured the age of the app administrators and demographic characteristics of the older adult Komp users. In part 2, the Komp users' social and daily activities were registered. Part 3 registered how COVID-19 influenced Komp usage. Part 4 investigated what the administrators thought about new areas for use of Komp related to food and physical activity (not reported here). Some of the questions were open for free-text answers (see Table 1).
Data Analysis
The survey data were analyzed using SPSS (ver. 27.0.1.0 IBM Corp., Armonk, NY, USA). Descriptive statistics, frequencies, and crosstabs were used to describe the data.
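For readers reproducing the descriptive analysis outside SPSS, an equivalent frequency/crosstab computation in Python looks like the sketch below; the column names and values are illustrative, not the actual questionnaire variables:

```python
import pandas as pd

# hypothetical coded survey responses
df = pd.DataFrame({
    "age_group":   ["<80", "80-85", "86+", "86+", "86+"],
    "lives_alone": ["no", "yes", "yes", "yes", "no"],
})
print(df["age_group"].value_counts(normalize=True))     # frequency table
print(pd.crosstab(df["age_group"], df["lives_alone"]))  # crosstab of two items
```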
Results
In this section, we describe the age distribution of the app administrators who answered the questionnaire, the demographic characteristics of Komp users, and how current and possible uses of the Komp communication tool were viewed by app users. A selection of free-text responses to various questions is presented to provide in-depth understanding of the topics. In total, the app users provided more than 1650 lines of free-text responses through the survey, indicating current and future uses of Komp (Table 1).
Characteristics of Komp Users
More than 60% of the Komp users were 86 years or older, whereas 15% were 80 years or younger. Women constituted a majority (80%) of Komp users. Almost two-thirds (64%) lived in their own homes, whereas 19% lived in institutions and 14% lived in senior apartments. Among Komp users, 32% were still married/cohabiting, whereas widow(er)s constituted 58% of users. A proportion of 42% of the Komp users lived in cities, whereas 58% lived in less densely populated areas. For details, see Supplementary Materials Table S1.
Loneliness and Network
More than 80% of app administrators indicated that reducing loneliness by using Komp was important. "Increasing the network with Komp is an excellent way to reduce loneliness. Imagine looking forward to meeting someone every day. It can fill life with great content and increase the quality of life". Social meeting arenas and networks were identified as important, and 77% of Komp users reported having regular contact with family and friends. However, only 19% reported regularly participating in activities outside the home, 21% had a small or no network, and 41% were mostly alone. "It would have been very nice if more people could connect together, both from organized services and relatives/friends". See Supplementary Materials Table S2 for details.
Social Meals and Food Intake
Less than one-third of Komp users did their own food shopping, whereas 23% had their meals delivered, and 46% had others who shopped for them (Table S2). "She has her food bought, and cooks it herself. Sounds like a good idea. We will definitely try this".
Only 5% of users used Komp for meals, but more than 76% of the app administrators considered using Komp for shared meals a good idea (Table 2). "My seniors and I would have really appreciated it if I could have arranged a dinner and agreed to eat together over Komp :)". A proportion of 39% of app administrators indicated that social meals through the use of Komp could be important for increasing appetite, and 27% thought it would help Komp users to eat more. "I hadn't thought of this, but it was a clever idea". For 24% of app administrators, eating together with the Komp user was not an option. "The elderly must feel comfortable with the technology to enjoy a shared meal. Our user is a bit sceptic".
The Impact of COVID-19 on the Use of Komp
The COVID-19 situation, with lockdowns and isolation requirements for older adults, led to increased use of Komp. The study results show that 65% of app administrators reported increased use of Komp. "During a shutdown, this is worth its weight in gold" (Table 3). Of those who reported new uses (n = 261), free-text analysis of the answers showed that 68 app administrators (9% of 748 app administrators) answered that COVID-19 was the direct reason for buying Komp for the older adult. "We bought Komp because our elderly people lost a lot of contact with their families when the corona outbreak occurred. It has helped a lot to bring some joy into their everyday life, both with conversations and pictures :)".
Technology and Communication
In our study, we found that for 67% of older adults, Komp was the only digital tool they used. A proportion of 30% of users were not active Internet users, and only 8% were active Internet users (Table S2). "Grandma has great challenges with using technology. That's why we chose Komp :)". "The elderly miss the chat from friends, associations and neighbors etc. The daily chat is so important. And with talk and joy comes the zest for life". This comment illustrates how important it is to be able to communicate with other people.
Discussion
In this paper, we investigated whether a simple digital communication device could be a useful tool for reducing loneliness and, consequently, improving food intake among older adults. The results address different needs connected to loneliness, social isolation, and food intake. Our findings indicate that digital solutions such as Komp can be a link between older adults and their families and external networks.
Loneliness and Network
Research shows that loneliness and social isolation are described as two different aspects [29]. In our study, these two aspects partly come together in that, for example, older adults in institutions might not be lonely, but they may be isolated from their family networks. Loneliness and social isolation among older adults are acknowledged and can be difficult to cope with in their lives [30]. Our findings show that although most app users said that the older adult had regular contact with family and friends, they thought that the use of Komp increased the opportunities for contact. Administrators suggested many ideas that could help the older adult to both build a network and to feel less alone in their own home. Older adults who are unable to visit each other physically can be together digitally, or they can participate in events via a digital device, such as a screen, in their own homes. Events suggested by app administrators included singing sessions or that the older adult could follow activities organized by, for example, non-governmental organizations. Such activities can also facilitate one-way communication with older adults who just want to listen, or have something social to look at, but who are no longer able to engage themselves.
Social Meals and Food Intake
In our study, eating together was not something that app users had previously considered. However, based on their positive feedback, they thought that social meals via Komp could be a factor that reduces loneliness and increases food intake among Komp users. A lack of appetite was pointed out as a problem for many Komp users. The literature shows that although malnutrition increases with age, it is often a challenge for relatives or friends to recognize the symptoms of malnutrition in older adults [31,32]. Buying food, eating, and preparing food were identified as challenges for many Komp users. One possible solution could be that Komp users and app administrators not only eat together but also prepare food together by communicating through a screen, e.g., Komp. Although the majority of app administrators thought this was a good idea, many thought it could be challenging for some of the older adults; they indicated that it was not suitable for everyone. Some of these insights were connected to where the older adults lived or where they ate their meals. Older adults living together may not have the same need to share meals with other people. Living and eating alone is associated with a risk of malnutrition, as reported in several studies [17,[33][34][35]]. However, app users pointed out that some older adults do not like to share meals with others and would rather eat alone because they may have challenges related to eating. With increased age, many older adults experience oral problems such as reduced saliva production, difficulty swallowing, and dental problems [36], which can also affect their social contact with other people and increase their desire to eat alone [37].
The Impact of COVID-19 on the Use of Komp
During the COVID-19 pandemic, loneliness and social isolation coincided for many Komp users. As the pandemic particularly reduced older adults' mobility and social interaction, most older adults became more isolated [38]. Quality-of-life aspects such as being close to older adults were suddenly lost at the beginning of the pandemic for those who were isolated either in their own homes or in care facilities [39]. Not being able to visit their older relatives was decisive for many families when it came to purchasing a technological device. For older adults, access to others through Komp in many ways replaced physical contact with family and others in their network, as they were able to send pictures and follow each other's everyday lives.
Technology and Communication
Challenges related to the use of technology concern design and usability, as well as how older adults can interact with family and friends [40]. Several studies show that older adults are less likely to adopt new technology than younger adults [41][42][43]. The digital gap will gradually narrow as those who are in working life today, and already part of digital everyday life, grow older. This will also apply to, for example, gender differences and social conditions [44]. In our study, only one-third of the Komp users had digital devices other than the Komp screen, and only a few of them used the Internet in their everyday life. Although several respondents mentioned the importance of older adults becoming more familiar with digital interaction, most people manage to handle an analogue screen. The responses from app users showed that social contact, such as communicating with family, was essential for those who did not have relatives nearby.
Implications for Future Use
Many of the ideas and needs for future use of the device that were suggested by the app administrators may increase the complexity and reduce the user-friendliness of the communication device. Demographic development shows that people worldwide are living longer, and the oldest age group in particular is increasing. Although the new generation of older adults may have good digital knowledge, there will probably be new applications and solutions that will be unknown to the oldest population group over the age of 80. They will therefore have their own challenges related to new devices [45]. In the future, older adults and their families/caregivers could be directly involved in researching digital solutions as part of a user-friendly approach. The findings of this study may also be important for identifying weak spots and intervention points with respect to health services and caregivers in their daily care of and communication with older adults. Future research should address how technology could be used as a tool to combat mental health problems among older adults and how digital communication could be used to uphold social networks for older adults.
Limitations and Considerations
One consideration for the applicability of the results may be that the app administrators comprise an affluent segment of the population, as they had to purchase the device. Thus, the gains from using this computer device to reduce loneliness and improve food intake among older adults would first benefit older adults whose families already possess more resources than others. However, the findings show the merit of digital technology as a tool to improve conditions for older adults; thus, implementation of such devices in social welfare planning at a societal level should be addressed. Another possible limitation is that the results are based on self-reported data and may therefore be biased. However, the survey was used to explore whether the use of digital technology could be a tool for reducing loneliness and improving food intake among older adults by collecting insights from families and caregivers. Therefore, the app administrators' views and subjective answers to some of the questions provide important information for future use and development of the technology.
Conclusions
This study shows that app administrators felt the need for better communication with older adults. They suggested that the Komp technology, in addition to facilitating social contact with family, should also be used to facilitate participation of older adults in various networks and social meeting places that help to prevent loneliness and malnutrition. A clear majority of app administrators were in the 40-60 age group, which most likely indicates that they were children of Komp users. However, the presence of younger administrators also implies that the tool is already functioning as a bridging tool between generations. The potential for the use of the digital communication tool lies in the context of a wider interaction within extended families, as well as with external networks.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/jal3010004/s1, Table S1: Demographic characteristics of Komp users (N = 748) and distribution in the population; Table S2: Distribution of Komp user networks and regular participation in activities by age group (percentages shown are within age groups; whole sample N = 748).
Funding: This work was supported by RFF Oslo Qualification support (project number 2720) and the Norwegian Fund for Research Fees for Agricultural Products (FFL) through the project "FoodForFuture" (project number 314318).
Institutional Review Board Statement:
This study was submitted to the Norwegian Agency for Shared Services in Education and Research, Sikt (formerly NSD), for evaluation of compliance with current legal requirements and research ethics principles (300101). The study was designed and executed in accordance with the guidelines laid out in the Declaration of Helsinki (revised 2008).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The data presented in this study are available on request from the corresponding author, subject to restrictions (e.g., privacy or ethical restrictions). The data are not publicly available because they are the property of the project owner, No Isolation, and cannot be shared without permission.
Conflicts of Interest:
The authors are not aware of any biases that might be perceived as affecting the objectivity of this review.
Appendix A
Questionnaire (Questions marked with * are not included in the paper.) We want answers from you who are app users and operate a Komp for an older adult called "elderly". We want to know how you experience the use of Komp. Among completed forms, we draw 5 gift vouchers worth NOK 1000.
(1) The place of residence of the older user of KOMP?
Living at home / Nursing home / Senior apartment / Other | 6,648.6 | 2023-01-25T00:00:00.000 | [
"Computer Science",
"Environmental Science"
] |
Potential of Ceria-Zirconia-Based Materials in Carbon Soot Oxidation for Gasoline Particulate Filters
: ZrO 2 and Ce 0.8 Zr 0.2 O 2 mixed oxides were prepared and tested in the oxidation of carbon soot at different oxygen partial pressures and degrees of catalyst/soot contact to investigate their activity under typical gasoline direct injection (GDI) operating conditions. Under reductive atmospheres, generation of oxygen vacancies occurs in Ce 0.8 Zr 0.2 O 2 , while no reduction is observed on ZrO 2 . Both materials can oxidize carbon under high oxygen partial pressures; however, at low oxygen partial pressures, the presence of carbon can contribute to the reduction of the catalyst and the formation of oxygen vacancies, which can then be used for soot oxidation, increasing the overall performance. This mechanism is more efficient in Ce 0.8 Zr 0.2 O 2 than ZrO 2 , and depends heavily on the interaction and the degree of contact between soot and catalyst. Thus, the ability to form oxygen vacancies at lower temperatures is particularly helpful to oxidize soot at low oxygen partial pressures, and with higher CO 2 selectivity, under conditions typically found in GDI engine exhaust gases.
Introduction
The gasoline engine market based on direct injection (DI) technology is growing rapidly, especially in relation to its better efficiency and lower CO 2 emissions [1]. In addition, for 2020 the European Union has imposed a limit on CO 2 emissions for cars of 95 g/km; GDI engines are able to increase fuel economy and consequently reduce these emissions [2]. On the other hand, these engines produce a higher amount of soot than conventional gasoline engines. In particular, attention is focused not only on particulate emissions (PM) but also on the particulate number (PN), and consequently it is necessary to implement new emission control strategies [3]. As required for diesel engines, it is possible to introduce a particulate filter to reduce emissions of soot: the so-called gasoline particulate filter (GPF) [4][5][6]. However, due to the different operating conditions of a diesel engine and a GDI engine [3], it is not possible to simply transfer the technologies optimized and developed in the last twenty years for diesel particulate filters (DPF); it is necessary to investigate in detail the mechanisms of soot accumulation and oxidation under typical GDI operating conditions [7]. Exactly as happens for DPFs, the accumulation phase must necessarily be followed by a regeneration phase, in order to oxidize the accumulated soot and prevent an increased pressure drop [8]. Since particulate oxidizes at around 600 °C, in order to save fuel and to avoid filter damage, a catalyst impregnated on the filter, which lowers the combustion temperature, can be used. To ensure regeneration, however, issues arising from the different atmosphere of the exhaust gas of a GDI and a diesel engine must be taken into consideration [3]. In the diesel engine, in fact, regeneration is favored by the high oxygen content and by the presence of nitrogen oxides, NO x , which participate in the oxidation reaction. Consequently, the catalyst formulations developed for DPFs cannot be directly transferred to GDI engines.
Two types of mixing conditions have been employed to put carbon soot and catalysts into contact to simulate different degrees of interaction. For loose contact, soot and catalyst have been mixed in a vial for two minutes, which is representative of the weak degree of interaction that is found in real systems. Tight contact has been obtained by grinding the mixture in an agate mortar for ten minutes. This is less representative of real conditions, but results are generally more reproducible, and a much stronger interaction is observed [42]. A third type of mixing has only been employed in the case of ceria-zirconia, and it is realized by milling powders for 8 h in a high-energy mill to obtain a supertight contact [40]. This is not representative of the real application, but it is a good reference for a high degree of redox interaction that can be obtained between carbon and catalyst, and is useful in understanding the mechanism of reaction and the intrinsic activity of reducible materials. This contact is characterized by a different morphology, where a layer of soot forms a thin shell covering ceria-zirconia crystallite cores, promoting interactions at the nanoscale at the ceria-zirconia/carbon interface [40,41]. The different contact modes are represented in Scheme 1.
Scheme 1. Different kinds of contact obtained by mixing soot (black) and catalyst (yellow) using a spatula, an agate mortar and a high-energy mill.
The composition and BET surface area of the investigated materials are reported in Table 1. Ceria-zirconia and zirconia show a surface area in the range of 60-80 m 2 /g. When soot is mixed with the catalyst in the loose and tight contact modes, the surface area is not affected, while the surface area of ceria-zirconia decreases by effect of milling from 79 to 29 m 2 /g. This is typically observed upon the milling of high surface area powders, and is probably due to the adhesion of fine crystallites on the surface of larger particles [40,43]. X-ray diffraction (XRD) profiles are shown in Figure 1A,B. ZrO 2 shows the simultaneous presence of tetragonal (space group P42/nmc; main reflection at 30.2°) and monoclinic ZrO 2 (space group P121/c1; main reflection at 28.2°), and milling with soot does not modify the XRD profile. Ceria-zirconia (CZ) catalysts (Figure 1B) exhibit reflections characteristic of a pure fluorite phase. The soot/CZ mixtures in loose and tight contact do not evidence any difference with respect to parent CZ. The XRD profiles do not show significant modification of the ceria-zirconia structure, and all samples are indexed in a fluorite cubic cell. The CZST sample shows a sharpening of XRD signals due to an increase of the particle size. In addition, a small signal appears at ca. 30.5°, due to the main reflection of the (011) plane of ZrO 2 , originating from abrasion during milling. Rietveld analysis also reveals a small decrease of the lattice cell parameter due to insertion of ZrO 2 into CZ, with a slight change in stoichiometry of the solid solution from Ce 0.8 Zr 0.2 O 2 to Ce 0.78 Zr 0.22 O 2 , in agreement with previous observations [40]. No signals of reaction between the oxide and carbon are revealed by XRD.
In order to characterize the reduction behavior of the materials and understand the role of carbon in the removal of lattice oxygen, we have further investigated the redox and oxygen storage properties of the pure catalysts and catalyst/soot mixtures under different contact conditions by H2-TPR and OSC measurements; the results are summarized in Figures 2 and 3, respectively. For zirconia, the temperature-programmed reduction (TPR) feature of a typical non-reducible support was found, while CZ shows one broad hydrogen uptake peak centered at around 558 °C, due to the reduction of surface and bulk Ce4+ cations [44]. In carbon-containing samples, the H2 consumption peak shifts to progressively lower temperatures as the contact becomes more intimate. The reduction peak for bare CZ centered at 558 °C progressively shifts to 530, 506 and 420 °C for loose, tight and supertight contact, respectively, with an overall reduction degree of 45%, similar for all the investigated samples. No evidence for reduction is found in Zr samples. In bare CZ, the reduction peak is correlated to the consumption of H2 with the formation of H2O. With CZ/soot mixtures, the outlet gas composition has been followed by MS/FTIR, indicating the production of H2O (see inset of Figure 2A), with no formation of CH4, thus excluding the reduction of carbon by hydrogen. Figure 2C,D shows the CO formation profiles in TPR, as followed by Fourier-transform infrared spectroscopy (FTIR). Pure CZ shows the development of a very low amount of CO at around 600 °C, likely related to the desorption of residual carbonate species adsorbed on the surface. When soot/catalyst mixtures are investigated, the amount of CO formed during the reaction increases with increasing contact with soot. No CO formation is observed in Zr samples, regardless of the contact conditions. Overall, these results indicate that deposited carbon promotes the reduction of ceria-zirconia as a function of the degree of interaction with the surface; the more robust the carbon/ceria contact, the lower the reduction temperature. This is particularly evident in CZST, where the carbon, distributed as a thin envelope on the surface of the catalyst, acts as a booster for the reduction reaction, with a reduction profile centered 140 °C lower than that of bare CZ. One possible mechanism is that carbon at the interface forms C-O* adsorbed species that act as a driving force for the reduction of the material. This adsorbed oxygen then reacts more easily with hydrogen, forming H2O. In other words, carbon, when finely dispersed over ceria-zirconia, can act as a catalyst for the ceria-zirconia reduction reaction. Traces of CO are also formed by direct oxidation of carbon with lattice oxygen; again, these are more evident in CZST, where the carbon is finely dispersed over ceria-zirconia.
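The reduction degree quoted above follows from the TPR stoichiometry 2 CeO2 + H2 -> Ce2O3 + H2O (one mole of H2 reduces two Ce4+ cations). As a minimal sketch, assuming the nominal Ce0.8Zr0.2O2 composition; the H2 uptake value is hypothetical, chosen to reproduce the reported 45%:

# Reduction degree of Ce(4+) -> Ce(3+) from integrated H2-TPR uptake.
# Stoichiometry: 2 CeO2 + H2 -> Ce2O3 + H2O, i.e. 1 mol H2 per 2 mol Ce.
# The H2 uptake below is a hypothetical illustrative value, not a
# measured figure from this work.

M = {"Ce": 140.12, "Zr": 91.22, "O": 16.00}

def reduction_degree(h2_uptake_umol_per_g, x_ce=0.8, x_zr=0.2):
    """Fraction of Ce4+ reduced, from H2 uptake (umol/g of Ce(x)Zr(1-x)O2)."""
    molar_mass = x_ce * M["Ce"] + x_zr * M["Zr"] + 2 * M["O"]   # g/mol
    n_ce = x_ce / molar_mass * 1e6                               # umol Ce per g
    return 2 * h2_uptake_umol_per_g / n_ce                       # 1 H2 reduces 2 Ce

print(f"{reduction_degree(1109):.0%}")  # ~45%, matching the reported degree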
Similar results are confirmed by oxygen storage capacity analysis carried out on pure zirconia and ceria-zirconia, and on soot/catalyst mixtures at different temperatures, as seen in Figure 3. The data were collected according to the method described in the experimental part; zirconia, as expected, does not show any measurable oxygen storage capacity, while the OSC of ceria-zirconia increases from 1796 µg O2/g at 300 °C to 2922 and 8762 µg O2/g at 400 °C and 500 °C, respectively. For bare CZ and CZL, the removal of oxygen is similar in the temperature range investigated, while for tight and supertight contact a higher OSC has been found, confirming the enhanced "reducibility" of the material in the presence of soot.
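For orientation, the reported OSC values can be converted into moles of removable oxygen and an implied fraction of reduced cerium. The sketch below assumes the nominal Ce0.8Zr0.2O2 stoichiometry and that the weight loss corresponds entirely to lattice oxygen removed as water; the Ce3+ fractions are therefore estimates, not reported quantities.

# Convert the measured OSC (ug O2 per g of catalyst) into moles of
# removable oxygen and the implied fraction of reduced Ce.
# OSC figures are the reported values for CZ; the Ce content assumes
# the nominal Ce0.8Zr0.2O2 stoichiometry (0.8 / 162.34 g/mol).

M_O2 = 32.0    # g/mol
N_CE = 4928.0  # umol Ce per g of Ce0.8Zr0.2O2

for temp_c, osc_ug in [(300, 1796), (400, 2922), (500, 8762)]:
    n_o2 = osc_ug / M_O2       # umol O2 removed per g
    n_o = 2 * n_o2             # umol O atoms removed per g
    ce3_frac = 2 * n_o / N_CE  # each O vacancy leaves two Ce3+
    print(f"{temp_c} C: {n_o2:6.1f} umol O2/g -> ~{ce3_frac:.0%} Ce reduced")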
Carbon particles can act as an "activator" of the oxygen, moving it towards the surface and making it more available for the hydrogen present in the gas phase, as depicted in Scheme 2. The higher the contact between carbon and the catalyst, the higher the amount of oxygen that can be transferred at a lower temperature.
Scheme 2. Mechanism of reaction for oxygen transfer over soot/CZ mixtures.
Catalytic Activity
The catalytic activity has been investigated by means of thermogravimetric experiments and temperature-programmed oxidation under different oxygen concentrations, with the objective of understanding whether a reducible CZ and a non-reducible Zr material can be used under oxygen conditions simulating those of gasoline exhaust gases, in which the amount of oxygen is typically low. Representative weight loss profiles obtained for bare soot and for CZ in loose contact under different atmospheres (air, 1% O2/N2 and N2) are shown in Figure 4. The non-catalytic reaction (Figure 4A) is strongly dependent on the amount of oxygen in the gas phase. Under an inert atmosphere a negligible weight loss is observed, while in the presence of O2 a complete oxidation is reached, with a T50 of 705 °C (1% O2/N2) and 630 °C (air). When soot is mixed with CZ in loose contact mode (Figure 4B), a remarkable increase in the activity is achieved, independently of the O2 concentration. The oxidation of carbon in CZL in the absence of oxygen is not quantitative, and is associated with the oxidation of the soot by the surface/bulk oxygen of ceria-zirconia. The weight loss (2.7%) is due to the formation of CO/CO2 evolving from the system, and it also accounts for the lattice oxygen of CZ used for oxidation during the reaction. The weight loss in 1% O2/N2 and in air shows very similar profiles, but when the lower amount of oxygen is used, a shift of the oxidation curve to higher temperature is observed; in particular, T50 shifts from 515 °C for the reaction under air to 545 °C for the reaction in 1% O2/N2. Figure 5 shows the T50 for all the investigated catalysts as a function of the reaction atmosphere. Under an inert atmosphere, the weight loss is negligible in bare soot and in zirconia/soot mixtures regardless of the contact type; this is consistent with the lack of lattice oxygen available for the promotion of redox reactions in zirconia, and with the lack of volatile matter in the soot composition that could be released during thermal treatment. No oxidation is therefore observed in these samples under an inert atmosphere. With the introduction of ceria in CZ, a higher activity can be observed already under an inert atmosphere, with a T50 for the loose contact of 775 °C, which progressively decreases to 546 and 420 °C for tight and supertight contact, respectively. This is consistent with the fact that ceria-zirconia-based systems can use the surface/bulk oxygen to oxidize soot even in the absence of oxygen in the gas phase, through the oxidation of carbon at the carbon/CZ interface with the formation of an oxygen vacancy [24,45].
The oxidation reaction proceeds to consume the available oxygen; the activity is closely related to the amount of reducible oxygen/OSC, and is progressively inhibited due to the lack of reoxidation of the material when the inert gas phase is used [46,47]. The introduction of oxygen facilitates oxidation in all formulations. If we focus on CZ, we can observe that the catalytic activity in 1% O2/N2 differs only slightly from that in air (∆T50 close to 25/35 °C), while for Zr materials the T50 in 1% O2/N2 is 70/80 °C higher than that for oxidation in air. In the case of zirconia, the change of oxygen concentration heavily affects T50, as carbon reacts directly with O from the gas phase, while in CZ materials the dependence on the amount of oxygen in the gas phase is less significant, being mediated by the oxygen available from the CZ surface. Therefore, even when the oxygen concentration is very low, ceria-based materials are able to release their surface/bulk oxygen to oxidize the soot, with the formation of vacancies that are refilled through oxygen of the gas phase, promoting oxidation. It is therefore the oxygen from ceria that sustains the catalytic activity, through a mechanism that exploits both the available surface oxygen and the bulk oxygen [47].
The catalytic activity for the combustion of soot was also determined from the peak-top temperature (Tm) during temperature-programmed oxidation (TPO) of catalyst/soot mixtures under 10% O2/N2 or 1% O2/N2, mainly to verify the gas composition after combustion and evaluate the selectivity to CO2. Figure 6 compares the CO and CO2 profiles of all samples.
The results are in agreement with those obtained from TGA, with ceria-zirconia displaying a lower oxidation temperature than zirconia for any contact condition, and regardless of the oxygen content. The selectivity to CO2 at 1% O2 is always higher than 97%, compared to a value of 60-80% obtained with Zr. The data are summarized in Table 2; it can be observed that the selectivity to CO2 in ZrL at 1% oxygen is comparable to the selectivity of bare soot. The selectivity in ceria-zirconia systems is only negligibly affected by the different oxygen concentrations and contact conditions, while in bare soot and Zr-based samples the CO/CO2 ratio is strongly influenced by the atmosphere and by the carbon-catalyst contact. In particular, zirconia at low oxygen concentration in loose contact conditions has negligible catalytic activity and selectivity (Tm = 680 °C, Sel = 63%), which increase under tight conditions, as well as at 10% oxygen, to reach 97% with a Tm of 535 °C. The specific reaction rate for soot oxidation was also calculated, to evaluate in more detail the ability of ceria-based materials to assist the transfer of lattice oxygen to carbon in oxygen-poor atmospheres. The values at different temperatures are reported in Figure 7. While no oxidation is observed in the uncatalyzed reaction at 500 °C (S@500 in Figure 7), for zirconia, both in tight and loose contact, the specific reaction rate in the 400-500 °C temperature range is quite low, and does not exceed 300 µgsoot/(gsoot·s·m2). For CZ, higher reaction rate values are observed regardless of the contact type, and the rates increase as the contact becomes more intimate (CZST > CZT > CZL).
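The CO2 selectivities discussed here are presumably integral values over the TPO run. A minimal sketch of how such a figure can be extracted from CO/CO2 outlet traces recorded at constant flow (all traces below are synthetic, not measured data):

import numpy as np

def co2_selectivity(co_ppm, co2_ppm):
    """Integral CO2 selectivity over a TPO run: S = CO2 / (CO + CO2).
    Inputs are arrays of outlet concentrations sampled at a constant
    flow rate, so trapezoidal integration over time is proportional
    to the total moles evolved."""
    co = np.trapz(co_ppm)
    co2 = np.trapz(co2_ppm)
    return co2 / (co + co2)

# Hypothetical traces: a CZ-like run (almost pure CO2) vs. a Zr-like run.
t = np.linspace(0, 1, 200)
peak = np.exp(-((t - 0.5) / 0.1) ** 2)
print(f"CZ-like: {co2_selectivity(0.02 * peak, 0.98 * peak):.0%}")  # ~98%
print(f"Zr-like: {co2_selectivity(0.35 * peak, 0.65 * peak):.0%}")  # ~65%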
In an oxygen-poor atmosphere, ceria-zirconia has a specific reaction rate from four (CZL) to seven (CZT) times higher than the corresponding zirconia-based systems, confirming the importance of the redox behavior for oxidation under low oxygen pressure. We have previously reported that soot oxidation over ceria-based catalysts is driven by two different coexisting phenomena, one influenced by the amount of surface active oxygen, which predominates in an oxygen-rich atmosphere, and another related to the OSC (bulk oxygen) of the material, which prevails when gas-phase oxygen is absent [47]. In particular, (i) oxygen withdrawn from ceria-zirconia oxidizes carbon at the soot-catalyst interface; and (ii) the resulting vacancy or other surface defects can be the center for the activation of gas-phase oxygen and the formation of active oxygen species that contribute to oxidizing soot along a parallel route. In the absence of gas-phase oxygen, route (i) is the only available way to oxidize carbon, while in the presence of oxygen the two mechanisms may coexist, in proportions that depend on the oxygen partial pressure and the degree of carbon-catalyst contact. The concurrence of the two mechanisms could therefore be important in GPF applications, where the oxygen concentration is lower than in DPF.
Overall, the results obtained in this study highlight the characteristics of materials for use in the catalytic oxidation of particulate matter under oxygen-poor conditions such as those found in GDI engines. It is shown that the activation and transfer of lattice/surface oxygen are fundamental in order to develop catalytic materials suitable for GPF. Ceria-zirconia compositions outperform ZrO2 in both activity and selectivity under these conditions; several known strategies could then be pursued to increase the transfer of oxygen in these materials, such as engineering the shape and size of the ceria particles (nanocubes, nanorods, etc.) or tailoring the composition of the materials by lattice/surface doping with rare earth, noble or transition metal elements.
Catalyst Preparation
The ceria-zirconia (Ce0.8Zr0.2O2, CZ80) sample was prepared by the co-precipitation of an acidic solution of cerium and zirconium nitrate (Treibacher Industrie AG, Althofen, Austria) with NH4OH in the presence of H2O2. The precipitate was dried overnight at 100 °C and calcined in air at 500 °C for 3 h. Zirconia was synthesized by the calcination of zirconium hydroxides (Mel Chemicals, Manchester, UK) at 500 °C for 3 h. Conventional catalyst/soot mixtures were obtained in loose (L) and tight (T) contact modes by mixing the appropriate amount of catalyst (CZ80 or ZrO2) with soot (Printex U by Degussa, Essen, Germany; surface area of 100 m2/g), respectively, in a vial for 2 min or in an agate mortar for 10 min. A catalyst/soot weight ratio of 20:1 was adopted in this study. For CZ80, improved contact (supertight, ST) was achieved in a high-energy Spex mill equipped with a zirconia jar and balls [40]. A reference sample, to obtain the non-catalytic combustion profile, was prepared by diluting soot with powder quartz (Carlo Erba Reagents S.r.l., Milan, Italy) in a weight ratio of 1:20.
Catalyst Characterization
The surface area of the materials was measured by means of a Tristar 3000 gas adsorption analyzer (Micromeritics, Norcross, GA, USA), according to the BET method, by nitrogen adsorption at 77 K. The structural features of the catalysts were investigated by XRD. Diffractograms were recorded on a Philips X'Pert diffractometer (Ni-filtered Cu-Kα radiation) in the range 20°-145°, using a step size of 0.02° and a counting time of 40 s per angular abscissa (PANalytical B.V., Almelo, The Netherlands). Phase identification was performed by means of the Philips X'Pert HighScore software (Version 1.0b, PANalytical B.V., Almelo, The Netherlands).
In TPR experiments, the catalysts (50 mg) were heated at a constant rate (10 °C/min) in a U-shaped quartz reactor from room temperature to 900 °C under a flowing hydrogen/nitrogen mixture (35 mL/min, 4.5% H2 in N2). The hydrogen consumption was monitored using a thermal conductivity detector (TCD). For ZrO2 and Ce0.8Zr0.2O2, a pretreatment under air at 500 °C for 1 h was carried out. The outlet gas composition was also followed by an online quadrupole mass spectrometer (Omnistar, Balzers Instruments, Balzers, Liechtenstein). In addition, H2-TPR experiments under 1.8% H2 in N2 (total flow 500 mL/min) in a tubular quartz reactor, followed by FTIR gas analyzers (MultiGas 2030, MKS Instruments, Inc., Andover, MA, USA), have been performed.
The OSC of the samples was investigated by carrying out TGA experiments (Q500, TA Instruments, New Castle, DE, USA) in a 4.5% H2 in N2 mixture flow (100 mL/min). Each sample was treated in a N2 atmosphere for 1 h at 150 °C, followed by heating at a constant rate (10 °C/min) up to a fixed temperature (300, 400 and 500 °C); finally, the N2/H2 mixture was introduced while keeping that temperature for 60 min. The observed weight loss is due to oxygen removal by H2 to form water, and it can be associated with the total oxygen storage capacity at that temperature.
Catalytic Activity
The soot oxidation activity was measured by running thermogravimetric analysis from room temperature to 800 °C (heating rate 10 °C/min) under different atmospheres (N2, 1% O2/N2 and air) [48,49]. Before the catalytic tests, the samples (soot + catalyst, ca. 20 mg) were subjected to a 1 h pre-treatment at 150 °C under an inert atmosphere (60 mL/min), in order to eliminate the adsorbed water. The activity was measured by means of T50, the temperature at which 50% of the weight loss is observed. Catalytic activity measurements have also been carried out by TPO experiments, heating 20 mg of sample at a constant rate (10 °C/min) under O2 gas flow (1% or 10% O2 (v/v), balance N2; total flow 0.5 L/min). The catalyst temperature was measured by a chromel-alumel thermocouple located on the catalyst bed. The outlet gas composition (i.e., CO and CO2) was measured by FTIR gas analyzers (MultiGas 2030, MKS Instruments, Inc., Andover, MA, USA). The reproducibility of the results was verified by running several TGA/TPO experiments on similar samples, and the results, in terms of T50/Tp, were always within ±3 °C.
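As an illustration of how T50 can be extracted from a TGA trace, the sketch below interpolates the soot conversion curve; the weight profile is a synthetic sigmoid (20 mg catalyst + 1 mg soot), not measured data.

import numpy as np

def t50(temperature_c, weight_mg):
    """Temperature at which 50% of the total soot weight loss has occurred,
    obtained by linear interpolation of the TGA curve (the catalyst weight
    is constant, so only the soot fraction burns off)."""
    loss = weight_mg[0] - weight_mg              # cumulative weight loss
    conv = loss / loss[-1]                       # 0 -> 1 soot conversion
    return np.interp(0.5, conv, temperature_c)   # conv must be increasing

# Hypothetical smooth oxidation curve centered near 545 C (CZL, 1% O2/N2):
T = np.linspace(300, 800, 501)
w = 21.0 - 1.0 / (1.0 + np.exp(-(T - 545) / 20.0))
print(f"T50 = {t50(T, w):.0f} C")  # ~545 C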
Reaction rate measurements were performed by TGA isothermal experiments after the pretreatment of ca. 20 mg of a catalyst/soot mixture under a nitrogen atmosphere for 1 h at 150 °C. The temperature was then increased at a constant rate (10 °C/min) up to the reaction temperature (300, 400 or 500 °C), followed by switching to air. The reaction was followed for 1 h. A specific reaction rate was determined according to Van Setten et al. [50], normalized to the soot initially present in the reactor and to the catalyst surface area (µgsoot/(gsoot,initial·s·m2)) [51]. The reaction rate was calculated at 3% conversion. Reaction rates under an inert atmosphere were also measured.
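A minimal sketch of the normalization described above (the Van Setten-style specific rate); all numerical inputs are hypothetical, with the 20:1 catalyst/soot ratio taken from the preparation section.

# Specific soot oxidation rate, normalized per Van Setten et al. [50]:
# rate = (soot mass consumed per second) / (initial soot mass x catalyst
# surface area), evaluated at 3% conversion. All numbers below are
# illustrative inputs, not the measured values from this work.

def specific_rate(dm_dt_ug_s, m_soot_initial_g, sa_m2_per_g, m_cat_g):
    """Rate in ugsoot / (gsoot,initial * s * m2 of catalyst surface)."""
    total_area = sa_m2_per_g * m_cat_g  # m2 of catalyst in the bed
    return dm_dt_ug_s / (m_soot_initial_g * total_area)

# e.g. 20 mg mixture at 20:1 catalyst:soot -> ~0.95 mg soot, ~19 mg catalyst
rate = specific_rate(dm_dt_ug_s=0.50,        # slope of TGA trace at 3% conv.
                     m_soot_initial_g=0.95e-3,
                     sa_m2_per_g=79.0,        # fresh CZ surface area
                     m_cat_g=19.0e-3)
print(f"{rate:.0f} ugsoot/(gsoot*s*m2)")     # ~351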
Conclusions
Ceria-based materials have been widely studied as catalysts in diesel particulate filters, but their role and applicability in gasoline particulate filters, under oxygen-poor conditions, is still at an initial stage, and this preliminary work investigates their potential in soot oxidation at low oxygen partial pressure. The comparison of the behavior of ceria-zirconia and zirconia in soot oxidation shows that the former is less influenced by the variation of the oxygen atmosphere, and is capable of carbon oxidation at a lower temperature with almost full selectivity to CO2; in contrast, the activity of ZrO2 is highly influenced by the oxygen atmosphere, and lower CO2 selectivities are obtained. Lattice and surface oxygen in ceria-zirconia can assist oxidation at low oxygen partial pressure. The mechanism works through the interface between carbon and catalyst, and is therefore highly dependent on the way carbon and catalyst are put into contact. The presence of soot over ceria-zirconia particles acts as a driving force for the reduction of the material, enhancing the oxygen removal rate and enabling the use of lattice oxygen in the oxidation reaction as a substitute for gas-phase oxygen.

Funding: This research was funded in part by the Interreg V Italy-Austria project COAT4CATA, project number ITAT1019.
Conflicts of Interest: The authors declare no conflict of interest.
"Materials Science",
"Environmental Science",
"Chemistry"
] |