Design Methodology of a Passive Vibration Isolation System for an Optical System With Sensitive Line-of-Sight
The performance of an optical system with a sensitive line-of-sight (LOS) is influenced by rotational vibration. In view of this, a design methodology is proposed for a passive vibration isolation system in an optical system with a sensitive LOS. Rotational vibration is attributed to two sources: vibration transmitted from the mounting base and vibration generated by modal coupling. Therefore, the elimination of the rotational vibration caused by coupling becomes an important part of the design of the isolation system. Additionally, the decoupling conditions of the system can be obtained. When the system is totally decoupled, the vibration on each degree of freedom (DOF) can be analyzed independently. Therefore, the stiffness and damping coefficient on each DOF can be obtained by limiting the vibration transmissibility in accordance with actual requirements. The design of a vibration isolation system must be restricted by the size and shape of the payload and the installation space, and the layout constraints are thus also discussed.
Introduction
In most optical systems, vibration causes the undesired motion of its components and leads to performance loss [1][2][3][4][5][6][7]. It can downgrade the precision of sensitive optical telescopes, cause the misalignment of laser communication devices, and induce blurriness in the images of airborne cameras [8][9][10][11][12][13]. Thus, the suppression and isolation of vibration are essential concerns for high performance optical systems.
Vibration can be classified into two categories: translational vibration and rotational vibration [14,15]. Some optical systems, such as astronomical telescopes, laser communication devices, and long-focal-length airborne cameras, have a very sensitive line-of-sight (LOS) and require very high pointing accuracies [16][17][18][19][20][21][22]. For example, the National Aeronautics and Space Administration (NASA) Hubble Space Telescope (HST) requires a telescope pointing accuracy of 0.01 arcsec [9], while the pointing accuracy of the deep space optical communication (DSOC) antenna is a few milliradians [23]. Figure 1(a) shows the image shift of a long-focal-length airborne camera caused by translational vibration; the image shift is given by δ = dʹ − d = D(f/H), where D is the amplitude of the translational vibration, f is the focal length of the camera, H is the flight altitude of the aircraft, and d and dʹ are the distances between the image of the target on the CCD and the center of the CCD before and after the vibration occurs, respectively. Figure 1(b) shows the image shift of the same airborne camera caused by rotational vibration; here the image shift is given by δʹ = f·sin(α), where α is the amplitude of the rotational vibration. When H = 10 km, f = 880 mm, D = 10 mm, and α = 30 arcsec, the image shifts are δ = 0.88 μm and δ′ = 128 μm, respectively. For this type of optical system, the performance loss caused by rotational vibration is far greater than that caused by translational vibration. Therefore, the primary task in designing the vibration isolation system of an optical system with a sensitive LOS is to eliminate the rotational vibration.

In fact, for optical systems without internal vibration sources, the rotational vibration comes from two sources: vibration transmitted from the mounting base and vibration generated by modal coupling, as shown in Fig. 2. Generally, the rotational vibration transmitted from the mounting base can be predicted by analyzing the working environment and can be suppressed or isolated by a well-designed vibration isolation system. However, in most instances, the rotational vibration generated by modal coupling is unexpected and thus difficult to predict and eliminate. Therefore, designing an uncoupled vibration isolation system is very important for an optical system with a sensitive LOS.

According to the application purpose, vibration isolation systems can be divided into two categories: those used as the support of a vibration source to prevent the transmission of the generated vibration [24], and those used as the support of a sensitive payload to protect it from vibration [25]. Furthermore, depending on the means employed, vibration isolation systems can also be classified into four types: passive, active, active-passive hybrid, and semiactive [26]. Davis et al. [27] designed a passive viscous damping strut (D-strut) with a very low fundamental frequency to isolate disturbances from a reaction wheel assembly (RWA). Cobb et al. [28] designed a vibration isolation and suppression system (VISS) to isolate a precision payload from spacecraft-borne disturbances using passive isolation in combination with voice coil actuators. This study mainly focuses on passive vibration isolation for preventing the transmission of vibration to optical payloads.
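The arithmetic in this example is straightforward to reproduce. A minimal sketch (Python), using only the values quoted above:

```python
# Image shift of a long-focal-length airborne camera (values from the text):
#   translational shift  delta  = D * f / H
#   rotational shift     delta' = f * sin(alpha)
import math

H = 10e3        # flight altitude [m]
f = 880e-3      # focal length [m]
D = 10e-3       # translational vibration amplitude [m]
alpha = 30 / 3600 * math.pi / 180   # 30 arcsec in radians

delta_trans = D * f / H             # -> 8.8e-7 m  (0.88 um)
delta_rot = f * math.sin(alpha)     # -> 1.28e-4 m (128 um)
print(f"translational image shift: {delta_trans * 1e6:.2f} um")
print(f"rotational image shift:    {delta_rot * 1e6:.0f} um")
```

The two-orders-of-magnitude gap between δ and δʹ is what motivates targeting the rotational DOFs first.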
To obtain an effective, low-cost, and reliable solution, the vibration isolation should be considered from the beginning of the conceptual design of the sensitive optical system. O'Toole [29] discussed the design of a passive isolation system for a high-altitude and long-range oblique reconnaissance camera, and introduced some key factors for passive isolation designs, including the selection and layout of isolators. Hyde [30] proposed a conceptual design methodology for the vibration isolation, including the conceptual design process, performance target allocation, and design tradeoffs. However, none of them provided detailed theoretical analysis and derivation in all degrees of freedom (DOFs).
This study proposes a design methodology of a passive vibration isolation system for an optical system with a sensitive LOS. The remaining parts of this study are organized as follows. The dynamic model of the passive vibration isolation system is established in Section 2. The modal coupling characteristics are analyzed in Section 3. Section 4 introduces the method of selecting a suitable vibration transmissibility. In Section 5, the layout constraints are discussed. Concluding remarks are summarized in Section 6.
Dynamic model
An optical payload supported by n isolators is shown in Fig. 3, where m is the mass of the payload and O is the center of mass and the origin of a Cartesian coordinate system. In this case, the coordinate axes are selected to coincide with the principal inertial axes of the payload. The simplified model of the vibration isolation system of the optical payload is shown in Fig. 4, where k_xi, k_yi, and k_zi (i = 1, 2, ..., n) are the stiffnesses of Isolator i along the X-, Y-, and Z-axes, respectively, and c_xi, c_yi, and c_zi are the corresponding damping coefficients of Isolator i along the X-, Y-, and Z-axes. Based on Newton's second law of motion, the dynamic equations of the vibration isolation system can be expressed as in (1), where I_xx, I_yy, and I_zz are the moments of inertia of the payload with respect to the X-, Y-, and Z-axes, respectively; F_x, F_y, and F_z are the excitation forces acting on the center of mass along the X-, Y-, and Z-axes, respectively; and M_x, M_y, and M_z are the excitation moments acting on the center of mass with respect to the X-, Y-, and Z-axes, respectively. Moreover, the parameters of the stiffness and damping matrices can be derived as in (2), where l_xi, l_yi, and l_zi are the coordinates of Isolator i on the X-, Y-, and Z-axes, respectively. Equation (1) can then be written in matrix form as

Mẍ(t) + Cẋ(t) + Kx(t) = F(t),   (3)

where M, C, and K represent the mass, damping, and stiffness matrices of (1), respectively, and F represents the excitation load vector of (1).
Suppose that the initial position and the initial velocities of the vibration isolation system are zero. After applying the Laplace transformation, (3) can be written as

[Ms² + Cs + K]X(s) = F(s),   (4)

where [Ms² + Cs + K] is the impedance matrix of the system, denoted by Z(s).
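To illustrate how the impedance and admittance matrices are used, the following sketch evaluates X(jω) = Z(jω)⁻¹F(jω) for an illustrative 2-DOF diagonal system; all numerical values are assumed for demonstration and are not taken from the paper (the paper's system is 6-DOF with matrices given by its Eqs. (1)-(2)):

```python
# Frequency response via the impedance matrix Z(s) = M s^2 + C s + K,
# evaluated on the imaginary axis s = j*omega.
import numpy as np

M = np.diag([50.0, 2.5])      # mass / inertia (assumed values)
C = np.diag([400.0, 30.0])    # damping
K = np.diag([2.0e5, 1.2e4])   # stiffness

def admittance(omega):
    s = 1j * omega
    Z = M * s**2 + C * s + K  # impedance matrix (diagonal here)
    return np.linalg.inv(Z)   # admittance matrix Z^-1

F = np.array([10.0, 1.0])     # excitation load vector
X = admittance(2 * np.pi * 15.0) @ F   # steady-state response at 15 Hz
print(np.abs(X))
```

With diagonal M, C, and K, the admittance is diagonal as well, which is exactly the uncoupled situation analyzed in the next section.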
Decoupling condition analysis
From (1), it can be inferred that the impedance matrix is nonsingular and therefore invertible. Correspondingly, from (4) we obtain

X(s) = [Ms² + Cs + K]⁻¹F(s),   (5)

where [Ms² + Cs + K]⁻¹ is the admittance matrix of the system. If we only consider the translational DOFs of the system, equations (6) can be obtained from (1). From (6), it can be seen that the translational DOFs of the system are naturally uncoupled among themselves. Thus, we only need to focus on the coupling between the translational and rotational DOFs, and on the coupling among the rotational DOFs.
If the vibration isolation system is totally uncoupled, the admittance matrix of the system must be a diagonal matrix. Correspondingly, the impedance matrix must also be a diagonal matrix.
Based on the above analysis, it can be derived that the vibration isolation system is a completely uncoupled system when conditions (7) to (10) are satisfied. The admittance matrix of the completely uncoupled system can then be obtained as (11). When (7) and (8) hold true and (9) and (10) do not, the impedance matrix of the system can be written as (12). From (12), it can be observed that only the rotational DOFs are coupled, while the translational and rotational DOFs are uncoupled.
When (7) and (8) do not hold true and (9) and (10) hold true, the impedance matrix of the system can be written as (13). From (13), it can be observed that the rotational DOFs are uncoupled, but the translational and rotational DOFs are mutually coupled.
When (7) and (9) hold true and (8) and (10) do not, the impedance matrix of the system can be written as (14). From (14), it can be observed that all DOFs of the stiffness matrix are uncoupled, but the translational and rotational DOFs of the damping matrix are mutually coupled.
When (8) and (10) hold true and (7) and (9) do not, the impedance matrix of the system can be written as (15). From (15), it can be observed that all DOFs of the damping matrix are uncoupled, but the translational and rotational DOFs of the stiffness matrix are mutually coupled. Based on the above analyses, it can be concluded that (7) to (10) formulate the criteria for determining whether the system is coupled, uncoupled, or partially coupled.
Vibration transmissibility
As mentioned above, our research mainly focuses on preventing external vibration from being transmitted to the optical payloads. The excitation load vector F can then be written as (16), where u_i is the excitation owing to the motion of the mounting base of Isolator i, and u_xi, u_yi, and u_zi are the components of u_i along the X-, Y-, and Z-axes, respectively, as shown in Fig. 5. From Fig. 5, these components can be expressed as u_xi = u_i·cos(α_i), u_yi = u_i·cos(β_i), and u_zi = u_i·cos(γ_i). Let r_xi, r_yi, and r_zi represent cos(α_i), cos(β_i), and cos(γ_i), respectively. From (16) and (17), the excitation load vector F can be expressed as (18), which can then be written as (19). After applying the Laplace transformation, (19) can be expressed as (20). From (5) and (20), the transfer function matrix can be obtained as (21). From (11), (18), (19), and (21), the transfer function matrix of the totally uncoupled system is a diagonal matrix whose entries correspond to the six DOFs, where U_x(ω), U_y(ω), U_z(ω), U_φx(ω), U_φy(ω), and U_φz(ω) are the corresponding excitation components on each DOF.

Normally, we are only concerned with the amplitude of the transfer function, which is also known as the transmissibility of the system. Transmissibility is one of the most important indicators of a vibration isolation system, since it determines its performance. Therefore, it is a primary factor to be considered when designing the system. Figure 6 shows a 1-DOF passive vibration isolation system. The transmissibility of this system can be expressed as

T(g, ζ) = √[(1 + (2ζg)²)/((1 − g²)² + (2ζg)²)],   (27)

where ζ is the damping ratio of the system and g is the frequency ratio. The damping ratio, the frequency ratio, and the natural frequency are given by (28): ζ = c/(2m_{1-D}ω_n), g = ω/ω_n, and ω_n = √(k/m_{1-D}), where m_{1-D}, k, and ω_n are the mass, stiffness, and natural frequency of the 1-DOF passive vibration isolation system, respectively. The curves of the transmissibility of the 1-DOF passive vibration isolation system at different damping ratios are shown in Fig. 7. From Fig. 7, it can be observed that the shape of the curve varies with the damping ratio. When g ≤ √2, instead of suppressing the external vibration, the system amplifies it.
Moreover, the amplitude of this amplification decreases with increasing damping ratio. When g > √2, the isolation capability of the system decreases with increasing damping ratio. That is to say, the damping ratio and the natural frequency of the system determine its isolation performance. Thus, if we set a requirement on the transmissibility, the range of admissible damping ratios and natural frequencies will be limited. Figure 8 shows the distribution of the transmissibility with respect to the damping and frequency ratios. The blue part of the surface indicates the region where the transmissibility is lower than 10%. In other words, to make the transmissibility less than 10%, the values of the damping and frequency ratios must lie within the range covered by the blue part. Furthermore, this range can be obtained from (27) by requiring

T(g, ζ) ≤ R_r(ω),   (29)

where R_r(ω) is the required transmissibility. Generally speaking, the vibration isolation requirement applies over a frequency range, e.g., the transmissibility of the isolation system must be lower than 10% at frequencies of 10 Hz and above. Moreover, in the effective isolation range (g > √2) of the passive isolation system, the transmissibility decreases with the frequency of the external vibration. Therefore, we only need to consider the transmissibility at the lower limit of the frequency range. Equation (29) can then be written as (30) by setting g = ω_L/ω_n, where ω_L represents the lower limit of the frequency range.
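Under the standard 1-DOF base-excitation model consistent with Fig. 7 (amplification for g ≤ √2, isolation beyond), the admissible region of Fig. 8 can be explored numerically. A short sketch, with the damping-ratio values chosen arbitrarily for illustration:

```python
# 1-DOF transmissibility T(g, zeta), Eq. (27):
#   T = sqrt( (1 + (2*zeta*g)^2) / ((1 - g^2)^2 + (2*zeta*g)^2) )
import numpy as np

def transmissibility(g, zeta):
    num = 1 + (2 * zeta * g) ** 2
    den = (1 - g ** 2) ** 2 + (2 * zeta * g) ** 2
    return np.sqrt(num / den)

# Region where T <= 10% (the "blue part" of Fig. 8):
g = np.linspace(0.1, 20, 2000)
for zeta in (0.05, 0.2, 0.5):
    ok = g[transmissibility(g, zeta) <= 0.10]
    print(f"zeta = {zeta}: T <= 10% for g >= {ok[0]:.2f}" if ok.size else
          f"zeta = {zeta}: T never reaches 10% below g = 20")
```

The printed thresholds illustrate the tradeoff stated above: heavier damping tames the resonance peak but pushes the 10% isolation point to higher frequency ratios.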
The range of the damping ratio and the natural frequency of the required isolation system can be obtained from (30). Based on the characteristics of the isolators, an appropriate pair of damping ratio and natural frequency can be selected. Because the mass of the system is generally known, the stiffness and damping coefficient of the system then follow from (28) as k = mω_ns² and c = 2mζ_sω_ns, where ω_ns and ζ_s are the selected natural frequency and damping ratio, respectively. For an uncoupled 6-DOF isolation system, the stiffness and damping coefficient on each DOF can be obtained from the transmissibility limits of each DOF, where ω_n_x, ω_n_y, ω_n_z, ω_n_φx, ω_n_φy, and ω_n_φz are the selected natural frequencies of the individual DOFs and ζ_x, ζ_y, ζ_z, ζ_φx, ζ_φy, and ζ_φz are the corresponding selected damping ratios. The designer of the vibration isolation system can choose to limit the transmissibility of all or only some DOFs, based on the application requirements. The constraint equations of the design are then obtained.
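Once a natural frequency and damping ratio have been selected, inverting ω_n = √(k/m) and ζ = c/(2mω_n) gives the required isolator parameters. A minimal sketch with assumed payload values (for a rotational DOF, the mass m would be replaced by the corresponding moment of inertia):

```python
# Back out stiffness and damping from the selected natural frequency and
# damping ratio, inverting omega_n = sqrt(k/m) and zeta = c/(2*m*omega_n).
import math

m = 120.0                       # payload mass [kg] (assumed)
f_n_sel, zeta_sel = 4.0, 0.15   # selected natural frequency [Hz] and damping ratio

omega_ns = 2 * math.pi * f_n_sel
k = m * omega_ns ** 2            # stiffness [N/m]
c = 2 * m * zeta_sel * omega_ns  # damping coefficient [N*s/m]
print(f"k = {k:.3e} N/m, c = {c:.1f} N*s/m")
```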
Layout constraints
In addition to the above considerations, the number and layout of the isolators of a vibration isolation system must be restricted by the size and shape of the payload and the installation space. For example, a triangularly shaped payload is usually supported by three isolators at its three corners, as shown in Fig. 9(a). For a payload with an asymmetrical appearance, the isolators are usually arranged symmetrically, as shown in Fig. 9(b). Similarly, a circularly shaped payload is usually supported by three or more evenly arranged isolators, as shown in Fig. 9(c).
The examples described above are only general cases, and designers can make appropriate changes based on the actual situation. Figure 10(a) shows a payload supported by two isolators. From Fig. 10(a), the distance between Isolators 1 and 2 is adjustable, and the admissible range of distances can be expressed as follows, where O is the center of mass of the payload; m_1 is the mass of the payload; L is the installation distance between Isolators 1 and 2; l_1 and l_2 are the distances from Isolators 1 and 2 to the center of mass, respectively; and L_min and L_max are the minimum and maximum installation distances between Isolators 1 and 2, respectively.

Fig. 9 Several different types of isolator layouts: (a) triangularly shaped payload supported by three isolators, (b) rectangular payload supported by four symmetrically arranged isolators, and (c) circularly shaped payload supported by three evenly arranged isolators.

This system can be treated as a 2-DOF passive vibration isolation system, as shown in Fig. 10(b). If we consider this system to be uncoupled, equations (39) can be obtained. From (39), it can be seen that the transmissibility of the rotational vibration is influenced by the distance between the isolators: the larger the distance, the better the isolation of the rotational vibration. Therefore, it is important to make full use of the installation space when designing a vibration isolation system.
Conclusions
The performance of an optical system with a sensitive LOS is heavily influenced by the rotational vibration. Thus, the primary task of designing the vibration isolation systems for these optical systems is to reduce as much as possible or eliminate the rotational vibration. The design methodology of a passive vibration isolation system for an optical system with the sensitive LOS is proposed herein. The main steps of the design process are as follows: (1) The design of a vibration isolation system must be restricted by the size and shape of the payload and the installation space. The number and layout of the isolators of a vibration isolation system should be firstly determined based on these restrictions. It is important to make full use of the installation space. This step is referred to as "layout constraints".
(2) To reduce the effect of the rotational vibration, the rotational vibration caused by the coupling is reduced or eliminated first. Equations (7) to (10) represent the decoupling conditions of the system. The coupling vibration is eliminated when the parameters of the isolators fulfill (7) to (10). This step is referred to as "decoupling constraints".
(3) Transmissibility is one of the most important indicators of the vibration isolation system since it determines the performance of the system. When the system is totally uncoupled, the transmissibility on each DOF is only determined by the stiffness and damping coefficients of the system on the corresponding DOF. Thus, the stiffness and damping coefficient on each DOF of the system can be obtained by limiting the transmissibility on each DOF based on the actual design requirements. This step is known as "transmissibility constraints".
In most cases, a unique solution cannot be obtained even if all three types of constraints are met. However, this offers designers the freedom needed to accommodate other, unexpected constraints.
"Engineering",
"Physics"
] |
Self-phase modulation cancellation in a high-power ultrafast thin-disk laser oscillator
Ultrafast high-power lasers are employed in a wide variety of applications in science and industry. Thin-disk oscillators can offer compelling performance for these applications. However, because of the high intracavity peak power, a large amount of self-phase modulation (SPM) is picked up in the intracavity air environment. Consequently, the highest-performance oscillators have been operated in a vacuum environment. Here, we introduce a new concept to overcome this hurdle. We cancel the SPM picked up in air by introducing an intracavity phase-mismatched second-harmonic-generation crystal. The resulting cascaded χ⁽²⁾ processes provide a large SPM with a sign opposite to the one originating from the air. This enables laser operation in air at 210 W average output power with 780 fs, 19 μJ pulses, the highest output power of any semiconductor saturable absorber mirror (SESAM) modelocked laser operated in air to date, to the best of our knowledge. This result paves the way to a novel approach for nonlinearity management in high-power lasers.
Ultrafast laser technologies are a crucial tool for a wide variety of applications ranging from science, such as time-resolved studies and XUV generation, to industry, for instance in high-precision material processing. During the last decade, high-power sources based on Yb-doped gain materials, shaped in the thin-disk [1], fiber [2], and slab geometry [3], have undergone an impressive development, leading to ultrafast amplifier systems exceeding the kW-level average-power milestone. Using thin-disk laser (TDL) technology, oscillators delivering multi-100-W average power and tens-of-μJ pulse energy at MHz repetition rates have been demonstrated [4][5][6]. This approach enables the use of a table-top and comparatively cost-effective TDL oscillator as an ultrafast high-power laser source. Hence, TDL oscillators, due to their excellent beam quality and low-noise properties [7], are a highly attractive alternative to multi-stage amplifier systems composed of a low-power oscillator, pulse stretcher, amplification stages, and pulse compressor [2,8]. In fact, TDL oscillators are being used for extra- and intra-cavity XUV generation and high-power frequency conversion to the mid-IR, and are potential sources for high-power THz generation [9].
A significant challenge in these TDL oscillators is the high intracavity peak power, which can exceed 100 MW. At such peak powers, the phase accumulated because of the nonlinear refractive index of the intracavity air represents a major contribution to the overall self-phase modulation (SPM). Since the modelocking process relies on soliton pulse formation, which requires a balance between group-delay dispersion (GDD) and SPM [10], this very large amount of SPM ultimately hinders pulse formation. Different methods have been developed so far to overcome this challenge. One is to compensate this large SPM with a corresponding amount of GDD obtained through dispersive mirrors. This creates a tradeoff between the amount of GDD in the cavity and the output pulse energy of the laser ("Standard TDL" in Fig. 1). However, dispersive mirrors have substantially worse thermal behavior compared to Bragg mirrors, making it very challenging to add a large number of them in a high-power oscillator [8,11]. A different approach consists of operating the oscillator in vacuum or helium environment so that the air contribution to the SPM is almost removed ("Vacuum/He TDL" in Fig. 1) [8]. This approach led to the record results in average power and pulse energy. However, the advantages in performance offered by operation of the TDL in vacuum are offset by the significantly increased cost and complexity of such a system. For many scientific and industrial applications, a simpler solution would be required.
Here, we present a new and much simpler technique to cancel the intracavity SPM picked up in air by exploiting cascaded quadratic nonlinearities (CQN) [12]. In CQN, a second-harmonic-generation (SHG) crystal yields an effective nonlinear refractive index that is tunable in magnitude and sign. CQN have been successfully employed for modelocking of lasers in both the positive and negative dispersion regimes [13][14][15][16][17], for pulse compression [18,19], for nonlinear-mirror-type modelocking schemes in TDLs [20,21], and in regenerative amplifiers [22]. Here, we introduce a CQN crystal inside the laser cavity in a phase-mismatched, low-loss configuration. This allows us to cancel up to 80% of the total SPM of the air. We balance the remaining SPM through just five dispersive mirrors, enabling soliton pulse formation. We obtain 210-W average power at 780-fs pulse duration, 10.96-MHz repetition rate, and 19.2-μJ pulse energy using −16,800 fs² of GDD ("This result" in Fig. 1). This result represents the highest output power of any semiconductor saturable absorber mirror (SESAM) modelocked oscillator operated in air. In the previous record of 145 W [23], −346,500 fs² of round-trip GDD were used. Kerr-lens modelocking (KLM) can require a lower amount of negative GDD for pulse formation (Fig. 1). However, SESAM modelocking is highly advantageous in terms of robustness, since pulse formation is decoupled from cavity stability. Our oscillator also delivers more pulse energy than any KLM oscillator to date, where the record is 14 μJ [6], and more than four times the average power of previous lasers involving CQN [20].
In our laser experiment we use a 100-μm-thick, 10-at.%-doped Yb:YAG disk contacted on diamond (TRUMPF), mounted in a 36-pass head, and pumped at 940 nm with a 4.4-mm-diameter pump spot. We designed a cavity including three reflections on the disk gain medium. Thus, we could use an output coupler (OC) with a comparatively large transmission of T_OC = 40% and hence limit the intracavity power. A large OC rate is beneficial in two ways: it reduces the amount of SPM picked up in the intracavity air, thus mitigating the requirement for negative GDD, and it decreases the stress on the intracavity components. The folded multi-pass cavity arrangement leads to a lower repetition rate and thus higher pulse energy while keeping a compact footprint (Fig. 2). We introduced a thin-film polarizer (TFP) in the cavity to fix the polarization of the laser.
In order to modelock the oscillator, we used an in-house-grown SESAM as an end mirror, at a position where the beam radius is ≈850 μm. The SESAM consists of a distributed AlAs/GaAs Bragg reflector grown at 580 °C and three InGaAs quantum wells as absorber grown at 280 °C in an antiresonant configuration [24,25]. We measured our SESAM to have a saturation fluence F_sat = 50 μJ/cm², a modulation depth ΔR = 2.7%, and nonsaturable losses ΔR_ns = 0.35% [26]. The SESAM was contacted by TRUMPF on a polished copper heatsink (cold radius of curvature >500 m [25]).
We use only five Gires-Tournois interferometer (GTI)-type dispersive mirrors, yielding a total GDD of D = −16,800 fs² per round trip. Achieving 210-W output power with 780-fs pulses without CQN would require ≈5 times more negative GDD. Thus, the use of CQN critically helps the balance between SPM and GDD. CQN offer a large effective nonlinear refractive index contribution n_2,CQN, which depends on the second-order nonlinear coefficient d_eff and the phase mismatch Δk = k_SH − 2k_FW, where SH stands for second harmonic and FW for fundamental wave. This n_2,CQN can be tuned in sign and magnitude via Δk [12]. In this laser experiment, we exploit a negative n_2,CQN from a SHG crystal in order to pick up a negative nonlinear phase shift, which counteracts the positive one picked up in air. A potential drawback of this technique is the loss caused by the SH generated in the cascading processes, since the SH light is not resonant in the laser cavity. The SHG efficiency scales with the peak intensity; hence, it represents an inverse saturable loss. On the other hand, if such losses are small compared to the modulation depth ΔR of the SESAM, this property can stabilize the modelocking process [16,27]. In order to minimize the second-harmonic losses, we operate the crystal near the SHG minima, which correspond to ΔkL ≈ 2πn_min, where L is the length of the crystal and n_min is an integer. Experimentally, we monitor the SHG losses by measuring the power of a cavity green leakage ("Photodiode" in Fig. 2) and adjust the crystal's tilt angle θ through a piezo-controlled mount. In this way, we can operate the crystal in the SHG minima.
To quantify the losses and the phase shift introduced by the CQN device, let us consider a pulse with peak intensity I_pk propagating through the SHG crystal. We call the phase shift introduced at the peak of the pulse B_CQN,sp and the efficiency of the SHG process η_CQN,sp, given by Eqs. (1a) and (1b), where we define a group-velocity mismatch parameter δ = 1/v_g,SH − 1/v_g,FW, ξ = 2ω_FW·d_eff²/(ε₀c³n_FW²n_SH), and τ_p is the full-width-at-half-maximum (FWHM) duration of the pulse, assuming a sech² shape. These equations assume the cascading regime, where the phase mismatch is large and the transfer of energy from the fundamental to the second harmonic is small. The phase shift presented in Eq. (1a) has a well-known expression in the literature [28]. We obtain Eq. (1b) in the supplementary material assuming a short crystal fulfilling τ_p > 2δL, together with a large enough Δk, and operation in a SHG minimum (i.e., ΔkL = 2πn_min). In this short-crystal regime, the phase mismatch Δk(λ) is close to 2πn_min across the whole pulse spectrum, allowing for very low SHG losses for the intracavity pulse. Hence, the ratio between the nonlinear phase shift [Eq. (1a)] and the nonlinear losses [Eq. (1b)] is lower than in the long-crystal limit (τ_p ≪ δL) [13]. Additionally, short crystals are beneficial in high-power applications in order to minimize thermal lensing. The free parameters in the design of the CQN device are the crystal length L, the intensity on the crystal I_pk, adjustable through the laser spot size on the SHG crystal, and the phase mismatch Δk. The goal is to obtain a large amount of negative phase shift with as little SHG loss as possible, i.e., to maximize B_CQN,sp/η_CQN,sp ∼ Δk/L. Thus, our formulas suggest using short crystals operated at large phase-mismatch angles. We employed an AR-coated type-I LBO crystal (Cristal Laser) with a length L = 5 mm, in a position where the 1/e² beam radius is ≈850 μm. In this way, the peak intensity on the crystal stays below 5 GW/cm².

Fig. 1. Overview of the GDD used in TDLs with respect to their output pulse energy. Our result, due to the use of cascaded χ⁽²⁾ nonlinearities, overcomes the tradeoff in GDD versus pulse energy typical of standard TDLs, lying in a region previously accessible only through expensive vacuum systems. For the non-labeled results, the average output power is below 100 W. All references can be found in Supplement 1.
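The quoted intensity limit can be checked directly: for a Gaussian beam, the peak intensity is I_pk = 2P_pk/(πw²). A minimal sketch using the 1/e² radius given above and the intracavity peak power of ≈54 MW quoted later in the text:

```python
# Peak intensity on the LBO crystal for a Gaussian beam, I_pk = 2*P_pk/(pi*w^2).
import math

P_pk = 54e6      # intracavity peak power [W] (quoted later in the text)
w = 850e-6       # 1/e^2 beam radius on the crystal [m]

I_pk = 2 * P_pk / (math.pi * w ** 2)        # [W/m^2]
print(f"I_pk = {I_pk / 1e13:.2f} GW/cm^2")  # 1 GW/cm^2 = 1e13 W/m^2
# -> about 4.8 GW/cm^2, consistent with the stated "below 5 GW/cm^2"
```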
We next consider the balance of the different sources contributing to the cavity SPM. The total phase shift B_CQN,rt and losses η_CQN,rt per round trip due to the SHG crystal are obtained by multiplying the single-pass values [Eqs. (1a) and (1b)] by (1 + R_OC), where R_OC = 60% is the reflectivity of the OC. A convenient way to express the phase shift is to introduce the SPM coefficient γ = B/P_pk,IC, where P_pk,IC is the intracavity peak power immediately before the OC. Regarding the air, we integrate the peak intensity over a cavity round trip to obtain the total SPM, denoted B_air,rt (Supplement 1), and we obtain γ_air ≈ 10.6 mrad/MW. In Fig. 3, we plot the expected losses η_CQN,rt and the SPM coefficients γ_CQN for the CQN device, according to our analytical model (green) and a numerical simulation (blue). We use d_eff = 0.83 pm/V for LBO [29]. We obtained the numerical solution by directly solving the pulsed coupled-wave equations for the laser parameters at the maximum output power (τ_p = 780 fs, P_pk,IC = 54 MW). For the intrinsic nonlinear refractive index of the LBO, we use 2 × 10⁻¹⁶ cm²/W [30]. The analytical model accurately predicts the SHG losses in the minima and the phase shift. The positive contribution to the phase shift from the crystal's intrinsic n₂ leads to a slightly less negative SPM coefficient γ_CQN in the numerical model compared to the analytical solution, since this term is not included in the latter. The other sources of SPM, e.g., the disk, contribute only a few percent and have therefore been neglected.
Femtosecond SESAM-modelocked lasers rely on soliton pulse formation. In this regime of SESAM modelocking, the pulse duration and the intracavity pulse energy E_IC = E_out/T_OC depend mostly on the GDD-versus-SPM balance and only marginally on the parameters of the saturable absorber [10]. Their relation is governed by the so-called soliton formula, τ_p ≈ 1.762|D|/(γ_avg·E_IC), where γ_avg = (3/4)γ takes into account the effective phase shift for a pulse with a Gaussian spatial profile compared to the phase shift at the peak of the pulse [16,31]. By tuning the phase mismatch Δk, we can adjust the net SPM coefficient γ [Fig. 3(b)]. Thanks to the straightforward tunability of Δk by adapting the crystal's tilt during live laser operation, we obtain the shortest pulse duration for several values of the output power (cf. Fig. 4 and Table 1). In contrast, a standard TDL, having a fixed amount of GDD and SPM, operates only over a fixed power range and has the shortest pulses only at the maximum output power. In Fig. 4 we present the laser output power versus pump power for three phase-matching configurations. The blue and red curves are obtained by operating the SHG crystal in the fourth (ΔkL ≈ 8π) and third (ΔkL ≈ 6π) SHG minimum, respectively. The curve in yellow is obtained by starting from the third SHG minimum and gradually decreasing Δk as the pump power is increased, in order to reduce the net SPM coefficient γ. In this way, we keep the pulse duration equal to the minimum achievable for our laser, but at increased output power. At the maximum output power (210 W, 780 fs), we measured a SHG efficiency ≈1.8 times the one we had in the third SHG minimum. This suggests a shift in ΔkL from the third SHG minimum of ≈ −0.2π, i.e., ΔkL ≈ 5.8π. For this value of ΔkL, we have n_2,CQN ≈ −2.1 × 10⁻¹⁵ cm²/W [28].
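A rough consistency check of the quoted operating point, applying the soliton formula exactly as written above (prefactor conventions for the soliton formula vary in the literature, so the resulting γ should be read as an order-of-magnitude estimate):

```python
# Soliton-formula check at the maximum-power operating point quoted in the text.
T_OC = 0.40           # output-coupler transmission
E_out = 19.2e-6       # output pulse energy [J]
tau_p = 780e-15       # FWHM pulse duration [s]
D = 16_800e-30        # |round-trip GDD| [s^2]  (16,800 fs^2)

E_IC = E_out / T_OC                     # intracavity pulse energy -> 48 uJ
P_pk = 0.88 * E_IC / tau_p              # sech^2 peak power -> ~54 MW, as quoted
gamma_avg = 1.762 * D / (tau_p * E_IC)  # from the soliton formula as quoted
gamma = 4 / 3 * gamma_avg               # peak-of-pulse SPM coefficient
print(f"E_IC = {E_IC * 1e6:.0f} uJ, P_pk = {P_pk / 1e6:.0f} MW")
print(f"net gamma ~ {gamma * 1e9:.1f} mrad/MW")  # compare gamma_air ~ 10.6 mrad/MW
```

The recovered E_IC and P_pk reproduce the quoted intracavity values; the net γ coming out far below γ_air illustrates the degree of SPM cancellation discussed in Table 1.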
Next, in Table 1, we quantify the SPM cancellation effect occurring in the laser for several operating points. Except for the point at 210-W output power, we experimentally optimized the crystal's phase mismatch in order to operate in the SHG minima, i.e., ΔkL ≈ 2πn_min. For the point at 210-W output power, we slightly detuned the phase mismatch from the third SHG minimum, as described above. The soliton formula together with the measured laser characteristics yields a prediction for the total round-trip SPM coefficient, denoted γ_soliton. Two contributing terms to this SPM coefficient are the intracavity air, γ_air, and the CQN crystal, γ_CQN, which we calculate according to γ_air = B_air,rt/P_pk,IC and γ_CQN = B_CQN,rt/P_pk,IC, respectively. We expect γ_soliton = γ_air + γ_CQN. In Table 1, we compare γ_soliton − γ_air to γ_CQN, to show that the laser characteristics are in good agreement with this equation. The last column of the table presents the percentage of the SPM picked up in air canceled by the CQN device. It ranges from ≈30% to ≈80%, showing the great flexibility of this technique.
In Fig. 5 we present the laser diagnostics at the maximum output power, which show stable single-pulse modelocked operation. We ensure single-pulse operation by scanning the autocorrelator delay up to 60 ps and acquiring a sampling-oscilloscope trace with a 45-GHz photodiode [Fig. 5(f)]. We obtain diffraction-limited beam quality (M² < 1.05) in all configurations. In the presented laser, the output power was limited by the pump intensity on the disk, already close to the safety limit of 5 kW/cm², and by the fluence on the SESAM, which was already operated slightly into the rollover.
In conclusion, we demonstrated a novel concept to cancel the SPM picked up in air in the context of high-power ultrafast oscillators. This allowed us to obtain laser performance in line with best-in-class TDLs using, instead of a complex vacuum system, an inexpensive and easy-to-set-up nonlinear crystal. Beyond SESAM-modelocked TDLs, this technique can be applied to high-power KLM oscillators. Additionally, we prove here that self-defocusing nonlinearities can be used at unprecedented power levels of up to 500 W of intracavity power, hence offering a new toolset for high-average-power lasers.
"Physics",
"Engineering"
] |
A Software Reliability Model for OSS Including Various Fault Data Based on Proportional Hazard-Rate Model
The software reliability model is a stochastic model to measure software reliability quantitatively. The Hazard-Rate Model is a well-known, typical software reliability model. We propose Hazard-Rate Models Considering Fault Severity Levels (CFSL) for Open Source Software (OSS). The purpose of this research is to adapt the Hazard-Rate Model considering CFSL to a baseline hazard function and two kinds of fault data in a Bug Tracking System (BTS), i.e., we use covariate vectors in the Cox proportional Hazard-Rate Model. Also, we show numerical examples by evaluating the performance of our proposed model. As a result, we compare the performance of our model with the Hazard-Rate Model CFSL.
Introduction
Open Source Software (OSS) is used by many organizations in various situations because of its low cost, standardization, and quick delivery. However, the quality of OSS is not ensured, because OSS is developed by many volunteers around the world in a unique development style, and this development style has no organized testing phase. The faults latent in OSS are usually fixed by using the database of a Bug Tracking System (BTS), which contains various information related to faults. The reliability assessment of OSS is necessary and important both for future demand and for the current problems of OSS. The software reliability model is a mathematical model to measure software reliability in statistical and stochastic approaches. To date, many models, not only for proprietary software but also for OSS, have been proposed by a lot of researchers [1]-[6]. The Hazard-Rate Model is well known as a typical software reliability model [7] [8] [9] [10]. We proposed a Hazard-Rate Model Considering Fault Severity Levels (CFSL) for OSS in the past [11]. Mostly, hazard-rate models measure software reliability using only the data of the times of occurrence of software failures in the testing or operation phase. However, we can obtain various information related to software faults aside from the failure-occurrence times. In previous research, a hazard-rate model was proposed that includes the data of the failure identification work and the execution time in CPU, which are called environment data in that paper. Related models using such data have also been proposed in the past.
Bug Tracking System
BTS is a database in which OSS users can report information about faults in OSS. The BTS contains various information, e.g., the recorded time of a fault, the time at which the fault was fixed, the nickname of the fault assignee, and so on. We show the list of fault data in BTS in Table 1.
Hazard-Rate Model
First, we show the stochastic quantities related to the number of software faults and the times of occurrence of software failures in the testing or operating phase, as shown in Figure 1.
The distribution function of X_k (k = 1, 2, ...), representing the time interval between the (k − 1)st and the k-th detected fault, is defined as F_k(x) = Pr{X_k ≤ x}, where Pr{A} represents the occurrence probability of event A. Therefore, the probability density function of X_k is given by f_k(x) = dF_k(x)/dx.
Table 1. Fault data recorded in BTS.

| Item | Description |
|---|---|
| Changed | The modified date and time. |
| Product | The name of the product included in OSS. |
| Component | The name of the component included in OSS. |
| Version | The version number of OSS. |
| Reporter | The nickname of the fault reporter. |
| Assignee | The nickname of the fault assignee. |
| Severity | The level of the fault. |
| Status | The fixing status of the fault. |
| Resolution | The status of resolution of the fault. |
| Hardware | The name of the hardware under fault occurrence. |
| OS | The name of the operating system under fault occurrence. |
| Summary | The brief contents of the fault. |

Also, the software reliability can be defined as the probability that a software failure does not occur during the time interval (0, x]. The software reliability is given by R_k(x) = Pr{X_k > x} = 1 − F_k(x). From Equations (1)-(3), the hazard rate is given by z_k(x) = f_k(x)/R_k(x), where the hazard rate means the software failure rate at time x given that a software failure has not occurred during the time interval (0, x]. A Hazard-Rate Model is a software reliability model representing the software failure-occurrence phenomenon by the hazard rate. Moreover, we discuss three Hazard-Rate Models as follows.
Jelinski-Moranda Model
Jelinski-Moranda (J-M) model is one of the Hazard-Rate Models. J-M model has the following assumptions: 1) The software failure rate during a failure interval is constant and is proportional to the number of faults remaining in the software; 2) The number of remaining faults in the software decreases by one each time a software failure occurs; 3) Any fault that remains in the software has the same probability of causing a software failure at any time.
From the above assumptions, the software hazard rate in Equation (4) for the k-th failure can be derived as z_k(x) = φ(N − k + 1) (k = 1, 2, ..., N), where each parameter is defined as follows: N: the number of latent software faults before testing; φ: the hazard rate per inherent fault.
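Since the J-M hazard rate is constant between failures, each X_k is exponentially distributed and the mean time between failures is MTBF_k = 1/[φ(N − k + 1)]. A minimal sketch with assumed parameter values (not estimated from real data):

```python
# MTBF sequence of the Jelinski-Moranda model: MTBF_k = 1 / (phi * (N - k + 1)).
N, phi = 30, 0.02   # assumed: latent faults and per-fault hazard rate

mtbf = [1.0 / (phi * (N - k + 1)) for k in range(1, N + 1)]
for k in (1, 10, 20, 30):
    print(f"k = {k:2d}: MTBF = {mtbf[k - 1]:8.2f}")
# The MTBF grows as faults are detected and removed.
```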
Moranda Model
Moranda model has the following assumptions: The software failure rate per software fault is constant and is decreasing geometrically as a fault is discovered.
From the above assumptions, the software hazard rate in Equation (4) for the k-th failure can be derived as z_k(x) = D·c^(k−1) (k = 1, 2, ...), where each parameter is defined as follows: D: the initial hazard rate for the first software failure; c: the decrease coefficient of the hazard rate.
Xie Model
Xie model has the following assumptions: The software failure rate per software fault is constant and is decreasing exponentially with the number of faults remaining in the software.
From the above assumptions, the software hazard rate in Equation (4) for the k-th failure can be derived, where each parameter is defined as follows: N: the number of latent software faults before testing.
Mean Time between Failures (MTBF)
The three Hazard-Rate Models above share the following assumption: any fault that remains in the software has the same probability of causing a software failure at any time. In contrast, we assume that the fault data can be divided into different types in terms of fault severity levels.
Hazard-Rate Model Considering Fault Severity Levels (CFSL)
Cox Proportional Hazard-Rate Model
Cox PHM is a model representing the hazard rate by using a baseline hazard function, which is a function of time, and a covariate vector. In this section, we discuss Cox PHM. It is assumed that two kinds of vectors are defined: α_k, the covariate vector including q kinds of data, and β, the coefficient vector for α_k. Cox PHM is then defined by using these two vectors as z_k(x) = z_0(x)·exp(β^T·α_k), where z_0(x) in (14) is called the baseline hazard function and is a function of x_k.
Proposed Model
In this paper, we apply the exponential Hazard-Rate Model to the baseline hazard function. Thus, the proposed model can be regarded as a parametric model. Moreover, the distribution function and the density function of X_k are derived as Equations (16) and (17), respectively. For this reason, the parameters in the proposed model can be estimated by maximum likelihood estimation (MLE).
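A minimal sketch of such an estimation, assuming a constant (exponential) baseline hazard z_0(x) = λ and a single covariate, with synthetic placeholder data rather than the Apache data used in the paper; the AIC used in the numerical example below is computed from the same likelihood:

```python
# MLE for a proportional hazard-rate model with an exponential baseline:
#   z_k(x) = lam * exp(beta * a_k),  so  X_k ~ Exponential(rate = lam * exp(beta * a_k)).
import numpy as np
from scipy.optimize import minimize

x = np.array([12.0, 7.5, 30.0, 4.2, 18.0, 25.0])  # inter-failure times (synthetic)
a = np.array([0.8, 1.0, 0.2, 1.5, 0.4, 0.3])      # one covariate per fault (synthetic)

def neg_log_lik(params):
    log_lam, beta = params                        # log(lam) keeps the rate positive
    rate = np.exp(log_lam + beta * a)             # hazard for each X_k
    return -np.sum(np.log(rate) - rate * x)       # exponential log-likelihood

res = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
aic = 2 * len(res.x) + 2 * res.fun                # AIC = 2k - 2 ln(L_hat)
print(res.x, aic)
```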
Numerical Example
We use the fault big data of the Apache HTTP Server to estimate the MTBF as an evaluation of the performance of our proposed model compared to the Hazard-Rate Model CFSL [14]. The assignee data are converted into numerical form as frequencies of occurrence. Specifically, our proposed model is divided into three cases as follows: PHM1: only the assignee data are included in α_k; PHM2: only the MTBC is included in α_k; PHM3: both the assignee data and the MTBC are included in α_k.
The parameters in the proposed models are estimated by MLE (maximum likelihood estimation). The estimated values of the parameters in the three models are shown in Table 2.
From Table 2, there is no multicollinearity in PHM3. As a criterion to measure the goodness-of-fit of our proposed model, we use the AIC (Akaike's Information Criterion), based on the maximum likelihood estimates of the model parameters. Figures 2-5 show the estimated MTBF for each model, and Table 3 shows the comparison results. In other words, the PHM makes it possible to predict the MTBF of OSS more accurately.
Conclusions
OSS is popular and in demand in a lot of organizations in various situations. However, OSS is developed by many volunteers around the world without an explicit testing phase. Therefore, the reliability of OSS is not ensured. For this reason, it is necessary to measure software reliability quantitatively. There are various fault data in the BTS of OSS, and these data sets are useful to find the characteristics of OSS. Moreover, we can assess software reliability more accurately by using not only the data of the times of occurrence of software failures in the testing or operation phase but also the other various fault data in BTS.
In BTS, there are many kinds of fault data aside from the ones we used in this paper. Therefore, we will discuss the proposal of other software reliability models with other kinds of fault data in BTS in future research. Also, we would like to suggest new measurements for OSS reliability that include the characteristics of OSS.
"Computer Science"
] |
Pontryagin Maximum Principle for Distributed-Order Fractional Systems
We consider distributed-order non-local fractional optimal control problems with controls taking values on a closed set and prove a strong necessary optimality condition of Pontryagin type. The possibility that admissible controls are subject to pointwise constraints is new and requires more sophisticated techniques to include a maximality condition. We start by proving results on continuity of solutions due to needle-like control perturbations. Then, we derive a differentiability result on the state solutions with respect to the perturbed trajectories. We end by stating and proving the Pontryagin maximum principle for distributed-order fractional optimal control problems, illustrating its applicability with an example.
Introduction
The idea of considering fractional-order systems of distributed order goes back to Caputo and the study of anomalous diffusion in viscoelasticity [1]. Interest in the new operator slowly increased, in particular with the works of Chechkin et al. [2], who applied distributed-order fractional derivatives to study retarding sub-diffusion and accelerating super-diffusion; Naber [3], who studied distributed-order fractional subdiffusion processes with different decay rates; Kochubei [4], who applied distributed-order operators to the study of ultraslow diffusion; and Mainardi et al. [5], who applied distributed-order fractional derivatives to study Gaussian diffusion. The subject is under active research today, partially explained by its relation to physical processes lacking temporal scaling [6] and to complex non-linear systems [7]. Indeed, the distributed-order definition of the operator allows considering a superposition of orders and accounting for physical phenomena such as memory effects in composite materials and multi-scale effects. A typical example that illustrates the capabilities of this class of operators is the mechanical behavior of viscoelastic materials having spatially varying properties. The literature on experimental applications of fractional-order systems of distributed order is now vast, and we refer the interested reader to the review paper of Reference [8]. For numerical aspects of fractional initial value problems of distributed order, we refer to Reference [9].
The calculus of variations is a field of mathematical analysis that uses variations, which are small perturbations of functions, to find maxima and minima of functionals. The Euler-Lagrange equation is the main tool for solving such optimization problems, and such equations have been developed in the context of fractional calculus to better describe non-conservative systems in mechanics [10]. Necessary optimality conditions of Euler-Lagrange type for distributed-order problems of the calculus of variations were first introduced and developed in Reference [11]. The results were then further generalized by the present authors in Reference [12], with the proof of several analytical results and a weak maximum principle of Pontryagin type for distributed-order fractional optimal control problems. Here, we extend and improve the theory of optimal control for distributed-order fractional operators initiated in Reference [12] by proving a strong version of the Pontryagin maximum principle, which allows the values of the controls to be constrained to a closed set. The main novelty consists in extending the optimality condition proved in Reference [12] to a maximality condition, which yields the strong version of the Pontryagin maximum principle. For this purpose, and in contrast with References [11,12], we apply needle-like variations to the control perturbations.
The paper is organized as follows. In Section 2, we recall some necessary results of the distributed-order fractional calculus. Our contribution is given in Section 3: we formulate the distributed-order fractional optimal control problem under investigation, and we prove the continuity of solutions (Lemmas 3 and 4), a result on the differentiability of the perturbed trajectories (Lemma 5) and, finally, the Pontryagin maximum principle (Theorem 1). We then give an illustrative example of application of the obtained necessary optimality conditions in Section 4. We end with Section 5, indicating some conclusions, the main achievements and novelty of the work, as well as some future research directions.
Preliminaries
In this section, we recall necessary results and fix notations. We assume the reader to be familiar with the standard Riemann-Liouville and Caputo fractional calculi [13,14].
Let α be a real number in [0, 1]. In the sequel, we use the notation L^α, where I^α_{a+} and I^α_{b−} represent, respectively, the left and right Riemann-Liouville integrals of order α. We also use the notation AC^α([a, b], R^n) to represent the set of absolutely continuous functions that can be represented in terms of I^α_{a+} for some function f ∈ L^α. Let ψ be a non-negative continuous function defined on [0, 1]; this function ψ will act as a distribution of the order of differentiation.
Definition 1 (See Reference [15]). The left- and right-sided Riemann-Liouville distributed-order fractional derivatives of a function x ∈ L^α are defined, respectively, by

D^{ψ(·)}_{a+} x(t) = ∫₀¹ ψ(α) (D^α_{a+} x)(t) dα,   D^{ψ(·)}_{b−} x(t) = ∫₀¹ ψ(α) (D^α_{b−} x)(t) dα,

where D^α_{a+} and D^α_{b−} are, respectively, the left- and right-sided Riemann-Liouville fractional derivatives of order α.

Definition 2 (See Reference [15]). The left- and right-sided Caputo distributed-order fractional derivatives of a function x ∈ AC^α are defined, respectively, by

^C D^{ψ(·)}_{a+} x(t) = ∫₀¹ ψ(α) (^C D^α_{a+} x)(t) dα,   ^C D^{ψ(·)}_{b−} x(t) = ∫₀¹ ψ(α) (^C D^α_{b−} x)(t) dα,

where ^C D^α_{a+} and ^C D^α_{b−} are, respectively, the left- and right-sided Caputo fractional derivatives of order α.
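These definitions can be evaluated numerically by quadrature over the order variable. As a check, for x(t) = t with a = 0, the Caputo derivative of order α ∈ (0, 1) is t^(1−α)/Γ(2 − α), so the distributed-order derivative is its ψ-weighted average. A minimal sketch with an assumed weight ψ(α) = 2α:

```python
# Distributed-order Caputo derivative of x(t) = t on [0, b], a = 0:
#   D^psi x(t) = integral_0^1 psi(alpha) * t^(1-alpha) / Gamma(2-alpha) d(alpha)
import math
from scipy.integrate import quad

psi = lambda alpha: 2 * alpha   # assumed non-negative weight on [0, 1]
t = 1.5

val, _ = quad(lambda al: psi(al) * t ** (1 - al) / math.gamma(2 - al), 0, 1)
print(f"D^psi x({t}) = {val:.4f}   for x(t) = t")
```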
As noted in Reference [11], there is a relation between the Riemann-Liouville and the Caputo distributed-order fractional derivatives. Along the text, we also use a notation involving I^{1−α}_{b−}, where I^{1−α}_{b−} represents the right Riemann-Liouville fractional integral of order 1 − α. The following results will be useful for our purposes; specifically, integration by parts will be used in the proof of the Pontryagin maximum principle (Theorem 1).

Lemma 1 (Integration by parts formula [11]). Let x ∈ L^α and y ∈ AC^α.
We will also use a generalized Grönwall inequality in Section 3.1.

Lemma 2 (Generalized Grönwall inequality [16]). Let α be a positive real number and let a(·), b(·), and u(·) be non-negative continuous functions on [0, T] with b(·) monotonic increasing on [0, T).
Main Results
In this work, we look for an essentially bounded control u ∈ L^∞([a, b], R^m) and the corresponding state trajectory x ∈ AC^α([a, b], R^n) solving the distributed-order non-local fractional optimal control problem (1), where Ω is a closed subset of R^m. The data functions L and f are subject to the following assumptions:
• The function f is continuous in all its three arguments.
• The function f is continuously differentiable with respect to the state variable x and, in particular, locally Lipschitz-continuous; that is, for every compact B ⊂ R^n and for all x, y ∈ B there is K > 0 such that ‖f(t, x, u) − f(t, y, u)‖ ≤ K‖x − y‖.
• With respect to the control u, there exists M > 0 such that ‖f(t, x, u)‖ ≤ M.
The cost integrand L satisfies the same assumptions as f.
Sensitivity Analysis
Now, our concern is to establish continuity and differentiability results on the state solutions for any control perturbation (Lemmas 3-5), which are then used in Section 3.2 to prove a necessary optimality condition for the optimal control problem (1). With this purpose, let us denote by L[F(·)] the set of all Lebesgue points in [a, b) of the essentially bounded functions t → f(t, x(t), u(t)) and t → L(t, x(t), u(t)). Thus, let (τ, v) ∈ L[F(·)] × Ω and, for every θ ∈ [0, b − τ), let us consider the needle-like variation u_θ ∈ L^∞([a, b], R^m) associated to the optimal control u*, given by u_θ(t) = v for t ∈ [τ, τ + θ) and u_θ(t) = u*(t) otherwise, for almost every t ∈ [a, b].
Lemma 3 (Continuity of solutions).
For any (τ, v) ∈ L[F(·)] × Ω, denote by x_θ the state trajectory corresponding to the needle-like variation u_θ, that is, the state solution of the perturbed system. Then, x_θ converges uniformly to the optimal state trajectory x* as θ tends to zero.
Proof. By the definition of the distributed-order operator and using the mean value theorem for integrals, there exists an ᾱ such that, by the left inverse property, we obtain an integral representation of x_θ − x*. With the help of the triangular inequality, and from the Lipschitz property of f and its boundedness with respect to the control, an estimate for ‖x_θ(t) − x*(t)‖ follows. Applying the fractional Grönwall inequality (Lemma 2) then yields the bound, where E_ᾱ,1 is the Mittag-Leffler function of parameter ᾱ.
Hence, by taking the limit as θ tends to zero, we obtain the desired result. The next result is a corollary of Lemma 3.
Lemma 4.
There exists a constant K₂ ≥ 0 such that the estimate below holds. Proof. Using similar arguments of Lipschitz-continuity of f and its boundedness with respect to the control u, and as a consequence of Lemma 3, we conclude by applying again the fractional Grönwall inequality (Lemma 2).

Our aim is now to prove that z_θ converges uniformly to zero on [τ, b] as θ → 0. The integral representation of z_θ is given by (4) for every t ∈ [τ, b]. Let us investigate the first two terms of the right-hand side of (4). By the boundedness of f with respect to u, and using the classical Taylor formula with integral remainder, we deduce from Lemma 4 an estimate for the second term of (4); then, referring to Lemma A.3 in Reference [17], we end the proof by application of the fractional Grönwall inequality of Lemma 2.
Pontryagin's Maximum Principle of Distributed-Order
It follows the main result of our work: a distributed-order Pontryagin maximum principle for the fractional optimal control problem (1).

Theorem 1 (Pontryagin Maximum Principle for (1)). If (x*(·), u*(·)) is an optimal pair for (1), then there exists λ ∈ L^α, called the adjoint function variable, such that the following conditions hold for all t in the interval [a, b]:
• the maximality condition (5);
• the transversality condition (7);
where the Hamiltonian H is defined by H(t, x, u, λ) = L(t, x, u) + λ·f(t, x, u).

Proof. First of all, note that the regularity of the function f with respect to the state variable (recall that f is continuously differentiable with respect to x) is exactly as in our previous paper [12]. For this reason, the adjoint system (6) and its transversality condition (7) remain exactly the same as the ones proved in Reference [12]. Therefore, we only need to prove the maximality condition (5), which is new due to the lower regularity of f with respect to the control functions and the fact that the controls now take values in the closed set Ω. We start by using integration by parts (Lemma 1) for functions λ ∈ L^α and η ∈ AC^α on [τ, b], obtaining (8), where λ is the adjoint variable given in Reference [12] by (9). Substituting (9) and the variational differential system given in (3) into (8), we obtain (10). Next, recall that, from the definition of the distributed-order fractional integral and the mean value theorem, there exists an ᾱ such that (11) holds, where m = ∫₀¹ ψ(α) dα. Moreover, by the fundamental theorem of calculus and the duality of the Riemann-Liouville integral operator, and using the boundary condition from system (3), an expression for η(b) follows. Finally, substituting this expression into (10), we get (12). However, with respect to the cost functional J, the limit

lim_{θ→0⁺} [J(x_θ, u_θ) − J(x*, u*)]/θ ≥ 0

holds because, by assumption, (x*, u*) is an optimal pair. Considering the fact that τ is a Lebesgue point of t ↦ L(t, x*(t), u*(t)), the Lebesgue differentiation property applies. Moreover, with respect to the third limit in (14), we can apply the Lipschitz property of L to the quotient [L(s, x_θ(s), u_θ(s)) − L(s, x*(s), u_θ(s))]/θ. Therefore, because (x_θ − x*)/θ converges uniformly, we conclude that this quotient is uniformly bounded. Furthermore, by the continuity Lemma 3, we have x_θ − x* → 0 as θ → 0. Thus, we can express the residue term only as a function of θ, that is,

L(s, x_θ(s), u_θ(s)) = L(s, x*(s), u_θ(s)) + (x_θ(s) − x*(s)) · ∂L(s, x*(s), u_θ(s))/∂x + o(θ),

and a corresponding expression follows for the second limit. Hence, thanks to the Lebesgue bounded convergence theorem, and using inequality (13) together with (12), we obtain

H(τ, x*(τ), u*(τ), λ(τ)) ≥ H(τ, x*(τ), v, λ(τ)),

where H = L(t, x, u) + λ · f(t, x, u). Because τ is an arbitrary Lebesgue point of the control u* and v is an arbitrary element of the set Ω, the relation holds at all Lebesgue points, which ends the proof.
An Illustrative Example
As an example of application of our main result, let us consider the following distributed-order fractional optimal control problem (16), where the distribution function of the order of differentiation is given by a prescribed weight ψ(α). Let u* be an optimal control of problem (16). Theorem 1 gives us a necessary optimality condition that u* must satisfy. The Hamiltonian function associated with this problem follows from the running cost and the dynamics. From the maximality condition (5), we know that u*(t) maximizes a.e. in [1, 5] the mapping w → (1 − 3w)x*(t) + wx*(t)λ(t).
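From this mapping one can read off the likely ingredients of the problem. The following display is an inferred reconstruction: the running cost and dynamics are assumptions consistent with the maximality mapping above, not quoted from the original statement:

\[
L(t, x, u) = (1 - 3u)\, x, \qquad f(t, x, u) = u\, x,
\qquad
H(t, x, u, \lambda) = (1 - 3u)\, x + \lambda\, u\, x = x + u\, x\, (\lambda - 3).
\]

Maximizing H over u then amounts to maximizing the affine mapping w ↦ x*(t) + w·x*(t)(λ(t) − 3), exactly as stated above.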
Due to the positivity of the initial condition (x_a > 0) and the linearity of the distributed-order derivative, we have that x*(t) > 0 for all t ∈ [1, 5]. Thus, the mapping to be maximized can be reduced to an affine function of the control, and u* has a bang-bang form. Now, it remains to determine the switching structure of the control through investigation of the adjoint boundary value problem given by (6) and (7). Note that, because problem (16) does not have a terminal phase constraint, the fractional transversality condition (7) simplifies to λ(5) = 0. Moreover, since λ(·) is a continuous function, there is a ξ > 0 such that u*(t) = 0 for all t ∈ [5 − ξ, 5]. With this, we have that D^{ψ(·)}_{5−} λ(t) = 1, and λ is then obtained by backward integration. Noting that, for c := 5 − (3mΓ(ᾱ + 1))^{1/ᾱ} ∈ [7/2, 5[, we get λ(c) = 3, we conclude that the optimal control switches precisely at t = c.
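The switching structure can be made explicit. Assuming, for illustration, that the admissible control set is Ω = [0, 1] (the control bounds are not recoverable from the statement above and are an assumption), the affine form of the Hamiltonian in u together with x*(t) > 0 gives

\[
u^*(t) =
\begin{cases}
1, & \lambda(t) > 3, \\
0, & \lambda(t) < 3,
\end{cases}
\]

so that, with λ(5) = 0, λ continuous, and λ(c) = 3 at c = 5 − (3mΓ(ᾱ + 1))^{1/ᾱ}, a single switch from the upper to the lower control value at t = c is consistent with the computations above.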
Conclusions
Recent applications and experimental data-analysis studies have shown the importance of systems with "diffusing diffusivity" in anomalous diffusion, modeled with fractional and standard Brownian motions and distributed-order operators [18][19][20]. The theory of the calculus of variations for distributed-order fractional systems was initiated in 2018 by Almeida and Morgado [11], and it was extended by the authors in 2020 to the more general framework of optimal control [12]. There, we established a weak Pontryagin Maximum Principle (PMP), under certain smoothness assumptions on the space of admissible functions, where the controls are not subject to any pointwise constraint [12]. The objective of the present article was to state and prove a strong version of the PMP for distributed-order fractional systems, valid for general non-linear dynamics and L^∞ controls and, in contrast with References [11,12], without assuming that the controls take values in the whole Euclidean space. Our statement is as general as possible, and it encompasses the distributed-order calculus of variations of [11] and the weak PMP of [12] as particular cases. Moreover, in the analysis of the strong version of the PMP, we emphasized the use of needle-like variations as control perturbations, dealing with controls taking values in a closed set and belonging to a much larger class of admissible functions than in References [11,12]. Our approach began by proving results on the continuity of solutions under needle-like variations, followed by a differentiability result on the state solutions with respect to the perturbed trajectories. The statement and the proof of the Pontryagin Maximum Principle are rigorously given. Finally, the new necessary optimality conditions were illustrated by a simple example for which an analytical solution could be found. To deal with real optimal control problems of Nature, which are impossible to solve analytically, it is important to develop numerical methods based on the fractional distributed-order Pontryagin maximum principle obtained here. This will be the subject of future research.
"Mathematics"
] |
The power of the dead in Neolithic landscapes: an agricultural-celestial metaphor in the funerary tradition of the Funnel Beaker Culture in the Sandomierz Upland
ABSTRACT - FBC earthen long barrows were roughly oriented along the East-West axis, with deviations not exceeding the frame of the solar arc. The Sandomierz Group monuments also display this general pattern. The paper brings together archaeoastronomy, landscape archaeology and symbolic archaeology.

KEY WORDS - long barrows; Sandomierz Upland; calendar dates; archaeoastronomy

DOI: 10.4312/dp.43.22
Introduction
One of the essential features of the Funnel Beaker Culture (FBC) was the development of long monumental barrows with timber structures. Having elongated, almost triangular or asymmetrical trapezoidal forms with wider 'entrances' or 'facades' and narrower ends, they were often located in abandoned agricultural fields or settlements near wet or boggy areas (Midgley 2008.11-12; Woźny 1996.94-96; Adamczak 2013.184-186). Their elongated shapes seem to evoke the concept of axiality, suggesting they were carefully positioned in the landscape. Archaeoastronomy may elicit the meaning of orienting long barrows in one direction or another. In this paper, I report on an ongoing study of orientations of long barrows in the Sandomierz Upland, in central-southeastern Poland.
The Sandomierz Upland (region 342.36 in Kondracki's 1994 taxonomy) is a north-eastern extension of the Holy Cross Mountains. It is a well-defined and rather flat area, rarely exceeding 300m a.s.l., and bounded by river valleys: to the east by the upper Vistula, to the north by the Kamienna, and to the west by the Świślina. To the south, the upland is bounded by the Wygiełzowskie Range and is drained by two Vistula tributaries, the Opatówka and the Koprzywianka (Kondracki 1994). The whole upland is covered with a thick layer of loess deposits accumulated during the Vistulian (Weichselian) glaciation. The eastern and northern borders of the upland have steep escarpment edges that abruptly descend to the bottoms of river valleys. The rest of the area is broken by numerous deep and narrow stream and river valleys. The major Vistula tributary, the Opatówka, runs NW-SE through the region, but various small streams are oriented SW-NE, giving the local topography considerable variation (Fig. 1). To the south and southwest, clusters of the Holy Cross Mountains rise to around 600m a.s.l., generating prominent landscape features.
Within the Stryczowice sub-region, large FBC settlements were located at the edges of river valleys, and some small and less permanent sites occupied the loess uplands. In many cases, they were located where a Lengyel-Polgár settlement had existed earlier (Burchard et al. 1991.100). Large settlements were established along small tributaries of the Kamionka River, in terrain exposed to the Sun, clustering around Stryczowice and Broniszowice, where the densest network of meandering stream valleys and the richest rolling landscape features in the region occur (Bąbel 1975.536). Apart from settlements, the outstanding characteristic of the FBC Eastern and South-Eastern Groups was the construction of earthen long barrows. While in Pomerania, Greater Poland, Kuyavia and Lower Silesia irregular boulders were used to enclose the mound, in the South-Eastern Group large blocks were replaced by smaller limestone stones or a timber palisade. Although lacking massive stones, the earthen long barrows erected in the Sandomierz Upland should also be interpreted as monumental tombs. Within the whole area, these earthen long barrows were usually situated near the highest elevations, including watershed divides (Bąbel 1975.535-536). The area is drained towards the north-east by the Kamionka River, which flows into the Kamienna, and then into the Vistula (see Fig. 1). Recently, Marek Florek (2011) aptly described all the mounds and barrows known in the region.
Long barrows and their spatial perception
To begin with, I will suggest that long barrows were often deliberately built in places chosen so that they would create skylines when viewed from neighbouring settlements. Viewed from settlements, these barrows were indeed silhouetted, but no less important was the constitution of a visual network between barrows across the entire sub-region. I have discussed the visual appreciation of these monuments in detail elsewhere (Iwaniszewski 2006), but my aim here is to emphasise the reproduction of their privileged locations with respect to visual and astronomical involvements. My research indicates that the long barrows located in elevated positions have greater visual control of the overall environment than the settlements situated on the slopes descending to the riverine valleys. Furthermore, all the long barrows investigated have visual predominance in almost all directions, while settlement areas tend to have more restricted visual relationships. This contrast in visibility patterns suggests that the barrows' associated symbolism and meaning were probably extended over specific areas, linking the monuments with exterior referents. The erection of long barrows at higher elevations may also be interpreted as the placement of the dead in closer proximity to the heavenly sphere. For these reasons, the location of the monuments and settlements in the landscape and their visual properties should be seen as reflecting patterns between two spatially, but also socially and ritually, separated parts of the same society. It seems evident that their visibility and intervisibility patterns and their overall spatial interplay were produced as a result of field clearing and deforestation.
The incorporation of these monuments into the landscape can be understood in the broader context of visual experiences. It seems evident they were unlikely to be visually perceived unless we admit that the surrounding forest was cleared (see Tilley 2010.47-49). Fortunately, there is much evidence for the burning of the natural vegetation from the Bronocice region in the south (Kruk et al. 1996.55-69; Kruk, Milisauskas 1999). Although the oldest traces of deforestation in Bronocice may derive from pre-FBC periods, the intensification of the slash-and-burn technique resulted in a substantial thinning of the forest cover during Bronocice II-III, between 3560 and 3100 BC (Kruk, Milisauskas 1999.120; Kruk et al. 1996.26, 68-69; Milisauskas et al. 2012.81). The changes in the woodland cover in the Stryczowice sub-region have not yet been studied. The only, and indirect, evidence comes from Gawroniec Hill at Ćmielów (mid-4th millennium BC), located 15.5km east-northeast of the Stryczowice long barrows, where, in an FBC context, archaeologists found mollusc shells of particular snail species (Krysiak 1952). The presence of these snails, which is indicative of the ecological conditions of the habitats where they were found, suggests that in the Kamionka and upper Kamienna areas some of the upland slopes and hilltops could already have been partially deforested due to the expansion of croplands and pastures (Barga-Więcławska, Jedynak 2014).
To sum up, these anthropogenic changes in the natural environment could have improved visibility conditions across the entire region. I do not want to say that the whole area was already totally and permanently deforested; what I am suggesting is that the erection of the monuments near elevated prominences implies they would have been visible from similarly high and already deforested locations. So it is possible that grasslands mixed with agricultural plots and pockets of woodland dominated the upland loess landscape (see also Nowak 2009.449-450). Observe the tree, water, wagon, and field motifs on a late Funnel Beaker vessel from Bronocice phase II, dated to about 3637-3373 cal BC (Milisauskas, Kruk 1982; Bakker et al. 1999.785-786), which seem to represent parcelled fields separated by trees (woodland?) (Fig. 2).
Diverse FBC groups inhabited the Stryczowice sub-region for a long time, not only significantly transforming the local landscape (through deforestation produced by slash-and-burn and scratch-plough agriculture and intensive grazing), but also dramatically changing their experience of the surrounding world, permitting them to weave visual relationships into a wider landscape experience. Naturally, it is hard to evaluate the overall extent of forest clearance, but examples from Bronocice suggest that an important part of the region, especially in the valleys, should have remained forested. Be that as it may, there can be little doubt that the growing importance of long-distance panoramas of landscapes required changes in traditional ways of perceiving the cosmos, both in terms of everyday habits and shared worldviews (see more in Tilley 2010.42-51). I am aware that the analysis of the visibility of the long barrows shows a tendency to emphasise them as primarily visual constructs, and to stress visual perceptions of the landscape at the expense of other forms of landscape perception. This position appears to privilege knowledge gained through sight, which might be due to our western mode of seeing the landscape, and not one shared by FBC peoples (see Ingold 2000.243-287; Cummings, Whittle 2004.8-9). Therefore, to avoid easy over-interpretation, I add to sight the notion of the feeling of the weather-world (Ingold 2010; 2011.126-135), which introduces a more multi-sensory experience of the surrounding world.
Having established that Middle Neolithic cereal agriculture and the maintenance of domesticated livestock could have caused more permanent deforestation of the region, I am now in a position to find out whether the monuments were oriented to the Sun's positions on the distant horizon. I am assuming that the elevated locations where the monuments were built allowed the horizon to be seen. As is known, archaeoastronomy maintains that the orientations of structures have some meaning in relation to astronomical objects or events observed in the sky. This is not to say that structures were set up in the landscape merely to represent patterns in the heavens. Rather, it implies that their constructors laid them out to utilise the celestial realm as a medium of social discourse (e.g., to reinforce the elite's right to rule, or to legitimise rituals performed on specific dates).
Within this context, we must attend more closely to the orientation patterns of long barrows. In general, all FBC long barrows, regardless of geographical location, appear to follow a more or less regular pattern of placing monuments on an E-W axis, with deviations to NE-SW and SE-NW (Iwaniszewski 2015). In Kuyavia, the overall majority of axes are situated within the angle of the annual movement of the Sun along the horizon, or the solar arc of the region (Iwaniszewski 1995). I assume that the axis of a long barrow conveys a strong sense of a directed sightline determining movements toward and away from targeted landmarks and solar events. Therefore, astronomical sightlines may act as a means by which meanings and values projected onto distant landmarks are evoked at the monuments, with the monuments themselves associated with them (see below). In one way or another, this pattern is indicative of a particular symbolic significance accorded to the E-W axis and to astronomical phenomena that occurred along the horizon. If this last argument is valid, then FBC groups sought suitable places that afforded a sufficiently wide, but not necessarily panoramic, view. The sites of the earthen long barrows appear to have met those conditions and enabled potential observers to see a horizon position of the Sun. Now, if observations made from these spots served not only to record recurrent positions of the Sun but also to schedule particular activities (such as planting or harvesting), observers had to return to these places at regular intervals. The monuments' physical presence in the landscape could have been used to assess and reassess the importance of the location from which astronomical observation was possible. The ritual practices attested in the wider area of the long barrows indicate that these places were visited regularly. The orientation patterns of the monuments suggest they could have acted as kinds of calendrical indicators for recurrent astronomical events, as well as for specific activities shared by whole communities.
Archaeoastronomical arguments are essential for this interpretation. All sites in the Stryczowice sub-region have been visited and examined (see Fig. 3). Since there may be some ambiguity as to the direction in which a given alignment might have been used, declinations were obtained in both directions. On the eastern horizon at Stryczowice, the spread of declinations is between -16° and -22°, corresponding to solar dates between February 3/November 7 and January 5/December 3. Westward orientations of both long barrows yield declinations between 15° and 21°, corresponding to solar dates between May 1/August 14 and May 26/July 22. At Broniszowice, the axis of the long barrow extended eastward points to a declination close to -14°, corresponding to solar dates of February 11/October 31. Its westward orientation corresponds to the days of April 29/August 14. The comparable solar dates obtained at Kunów are February 19/October 23 for the barrow's eastern alignment and April 26/August 18 for the western one.
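For readers who wish to check such figures, the conversion from a measured azimuth to a declination follows from standard spherical astronomy. The sketch below is illustrative only: the latitude is an assumed round value for the Sandomierz Upland (about 50.9°N), not a surveyed datum, and the function name is my own.

```python
import math

def declination_deg(azimuth_deg, altitude_deg, latitude_deg=50.9):
    """Declination of a point on the horizon seen along a given azimuth.

    Standard relation: sin(dec) = sin(lat)*sin(alt) + cos(lat)*cos(alt)*cos(az),
    with azimuth measured from north through east and altitude the angular
    height of the horizon at that azimuth.
    """
    az, alt, lat = map(math.radians, (azimuth_deg, altitude_deg, latitude_deg))
    sin_dec = math.sin(lat) * math.sin(alt) + math.cos(lat) * math.cos(alt) * math.cos(az)
    return math.degrees(math.asin(sin_dec))

# Example: an axis pointing somewhat south of due east, with a nearly flat horizon
print(round(declination_deg(115.0, 0.5), 1))  # about -15 degrees, comparable to Stryczowice
```

Note that the horizon altitude matters: even a degree of skyline elevation shifts the resulting declination noticeably, which is why in situ measurements are preferred over map-based estimates.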
It is important to observe that the orientation patterns of the FBC long barrows are not linked with the turning points in the yearly solar cycle, namely the solstices and equinoxes. However, it is noteworthy that all orientations are located within the solar arc of the particular sites, that is, within the angle of the annual displacement of the Sun along the horizon, suggesting their constructors oriented them intentionally. It may imply that Funnel Beaker sky watchers were interested in specific dates marked by the rising and setting positions of the Sun rather than in astronomical events such as solstices or equinoxes. The meaning of the dates mentioned above becomes evident when compared to the region's seasonal changes. So now I will attempt to find associations between the solar times determined by the axes of the long barrows and the annual distribution of climatic and ecological variables.
Fig. 3. Alignments of earthen long barrows in the Sandomierz Upland. Dates computed for an arbitrarily chosen epoch of 3500 BC. Due to the lack of in situ measurements, the dates from Malice Kościelne should be taken as approximate (with errors possibly as large as ±6 days).
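The paired calendar dates quoted above arise because every declination inside the solar arc is reached twice a year. A minimal sketch of the conversion is given below; the obliquity of the ecliptic for 3500 BC is taken as roughly 24.0° (an assumed approximate value, depending on the adopted astronomical model), and the mapping from solar longitude to calendar date ignores orbital eccentricity, so it is indicative only.

```python
import math

OBLIQUITY_3500_BC = 24.0  # assumed approximate obliquity of the ecliptic, degrees

def solar_longitudes(dec_deg, eps_deg=OBLIQUITY_3500_BC):
    """Two ecliptic longitudes (degrees from the spring equinox) at which the
    Sun reaches a given declination: sin(dec) = sin(eps) * sin(longitude)."""
    s = math.sin(math.radians(dec_deg)) / math.sin(math.radians(eps_deg))
    lam = math.degrees(math.asin(s))
    return lam % 360.0, (180.0 - lam) % 360.0

def approx_days_from_equinox(longitude_deg):
    # Crude uniform-Sun approximation: 360 degrees of longitude ~ 365.25 days
    return longitude_deg * 365.25 / 360.0

for lam in solar_longitudes(-16.0):
    print(round(approx_days_from_equinox(lam)))  # days after the spring equinox
```

For a declination of -16° this yields dates in early February and early November, in line with the February 3/November 7 pair given for Stryczowice.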
In the present climate, the warmest month is July (which is also the wettest), and the coldest is January. However, the local climate may be further affected by the climatic conditions of the adjacent Holy Cross Mountains. After winter, rainfall steadily increases from March to July. Soil moisture depends on rain, and any lack of rain in March-April and July may substantially reduce yields. The turn of April and May coincides with the onset of the warmer part of the year. Late frosts can occur as late as mid-May (Kowalski 1997.17). The ripening of cereals occurs between May and July, a time when rain is also required; a lack of rain in summer likewise reduces yields. In total, the duration of the growing period is about 200-210 days (see Fig. 4). It should be remembered that present-day climatic phenomena differ from those in the past. During the Atlantic climatic period (7460-3830 BC), the temperature in central Europe was about 2° higher than modern levels (e.g., Harmata 1995.33), so it is probable that late frosts were less frequent.
Overall, winds from the WSW and West bring most of the precipitation and increased cloud cover (Kowalski 1997). The wind rose for Sandomierz shows that the prevailing winds in the region are from the West, WSW, and WNW (see Fig. 5).
To sum up, the eastward and westward solar calendar dates tend to cluster roughly around the so-called mid-quarter days. As is known (e.g., McCluskey 1989), this term refers to dates falling midway between the dates of the solstices and equinoxes, i.e. at the beginning of February, at the turn of April and May, in mid-August and in early November. Some scholars associate this quartering of the year with the traditional beginning of the four climatic seasons (Nilsson 1920.76; McCluskey 1989). However, in my opinion, it may well refer to the much earlier division of the year into cold and warm halves, with starting dates around the beginning of November and of May, respectively (Liungman 1938.445, 450-451). This concept finds additional support in the history of the Indo-European languages. The earliest division of the year seems to have been based on the separation of two distinct seasons (either as 'wet' and 'dry' or as 'cold' and 'hot' seasons, Buck 1971.1011-1016). Nilsson (1920.45-85), who discusses the history of ancient European time-reckoning, also observes that the names of the seasons are borrowed from the names of the climatic phases.
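As a quick plausibility check, the solar declination at the mid-quarter points can be computed directly: midway between an equinox and a solstice the Sun's ecliptic longitude is 45° from the equinox, and with the assumed obliquity of about 24.0° (an approximation, as above) the corresponding declination is about ±16.7°, which falls squarely within the measured ranges of -16° to -22° and 15° to 21° reported earlier.

```python
import math

eps = math.radians(24.0)   # assumed obliquity of the ecliptic c. 3500 BC
lam = math.radians(45.0)   # solar longitude midway between equinox and solstice
dec = math.degrees(math.asin(math.sin(eps) * math.sin(lam)))
print(round(dec, 1))       # ~16.7 degrees: the 'mid-quarter' declination
```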
The turn of April and May, as well as the turn of October and November, fall approximately at the midpoints between the equinoxes and solstices. Thus, the orientations of the Stryczowice, Broniszowice, and Kunów long barrows seem to be linked with turning points in the annual seasonal cycle (Fig. 6). The third turning point (at the turn of January and February) may possibly be related to the period of maximum cold, so its significance for the farming calendar is nil, since it falls within a resting time in agricultural activities. The fourth turning point, mid-August, may denote the end of harvest time. The clustering around these dates may suggest they were the intended targets. Of course, these associations are speculative, and the proposed link between the seasonal cycle and the third and fourth dates remains to be explored further.
I shall now recapitulate what archaeoastronomy adds to the overall image of the long barrows' significance. The shape of earthen long barrows shows clear evidence of axiality. These monuments denote meaningful directions, formulated in terms of alignments towards landscape and horizon astronomical targets at seasonally significant times. The meaning of the dates revealed by the long barrow orientations is inferred from their correlation with important seasonal changes (such as rainfall, temperature, and vegetative cycles) and agricultural activities (planting, harvesting). In my opinion, the changing positions of the Sun along the horizon provided the temporal and spatial frame for activities performed at the monuments. It informed the potential participants of rituals about the context within which they acted. It shows that solar positions along the horizon cannot be interpreted as being observed by a detached Sun watcher interested in fixing the Sun's positions at the turning points of the annual solar cycle. Rather, solar positions should be viewed as being associated with other natural cyclical phenomena. If this is true, then we are dealing with new types of interpretation. The Sun might be considered as capable of communicating with humans, i.e. as being able to signal the change of the seasons and the need to start or end specific agricultural tasks. To sum up, the locations of long barrows in high places, together with their solar alignments associated with seasonal changes, seem to show new ways in which FBC peoples approached (and understood) their environs.
This idea leads me to the following proposal. The constructors of the long barrows who observed the turning points in the annual agricultural cycle also discovered the recurrent character of particular weather patterns. Now, although the long barrows were constructed and erected by living inhabitants, they were built to house the dead, the mythical or real ancestors of the nearby villagers. To predict, and thereby to control, recurrent solar and weather (or astrometeorological) phenomena occurring at specific dates would have been the symbol and prerogative of those ancestors rather than of their living descendants. It might be that it was the ancestors buried under the long barrows who utilised the Sun to signal a change in the weather to humans. It might also be that the distant horizon features where the Sun was observed at meaningful dates provide indications about those ancestors' abode. In the following, I will explore these possibilities.
Both proposals are in many ways speculative, but at least they fit the body of data we have at present.
The dead and the fertility of the soil: Neolithic beginnings of an agricultural metaphor in burial practices
As a particular kind of artefact, earthen long barrows have received numerous interpretations. However, these interpretations cannot be limited to socio-economic or ritual-mortuary issues. I suggest the spatial layout of long barrows not only permitted a new visual perception of the landscape, but also involved a different perception of the sky. Perceptions of the celestial vault within dense forest and in cleared areas should have produced different experiences of the world, giving rise to new cosmological beliefs (Tilley 2010.50-51). Therefore, archaeoastronomy, together with walking around the landscape, may add new evidence which allows the formation of new and more nuanced interpretations.
As has been suggested many times (Childe 1949; Fleming 1973; Kośko 1976; Hodder 1984; 1992; Midgley 1985; Sherratt 1990), the silhouettes and outlines of long barrows were regarded as imitations (in shape and arrangement) of the monumental trapezoidal timber longhouses built by previous LBK (Linearbandkeramik) and Lengyel-Polgár groups. The arrangement of a series of long barrows at Stryczowice, Broniszowice, Garbacz-Skała, and Kunów suggests that a similar metaphor may have been used by Neolithic societies in this region. These 'villages for the dead', situated at the highest locations, could have been easily seen from similar places in the upland, but remained invisible from the bottoms of the river valleys or the lower slopes descending into those valleys. It seems that the long barrow locations were visually interconnected and involved in the same web of cosmological concepts.
Fig. 6. Alignments of long barrow 2 at Stryczowice (photo by author).
From this brief description, it is clear that their intervisibility and spatial interplay must be regarded as reflecting a new understanding of the landscape. In my opinion, these elements provide us with a potential guide to interpretation: uplands were associated with funerary monuments and dead ancestors, while lower slopes, lower elevations, and lowland river valleys were associated with living communities. Furthermore, I have also observed (Iwaniszewski 2006) that the Funnel Beakers situated the barrows in the context of 'outdoor' activities performed outside settlements rather than within the sphere of daily household activities. Regarding the spatial pattern, long barrows were located in fields, grasslands, and forests, i.e. within areas of the routine, daily economic activities of Middle Neolithic societies. It is possible that long barrow locations were considered desirable within a landscape in which primary 'outdoor' economic activities took place. Still, the association of essential economic activities such as gathering, farming or pasturing with enduring funeral monuments appears to be systematic regarding both spatial location and conception. Not only were the barrows linked with the dead, with an ancestral presence in the landscape, and with the possible reassessment of rights over particular plots of land, but they also represented a formalised and repeated pattern of social activities that could have been symbolically controlled by those ancestors. This ongoing transformation of the landscape could have provided both identities for the nearby population and symbols of authority imbued with religious meaning. Barrett makes a closely related point, observing that "as people move through their lands, not only do they learn about relationships between place and their ancestors but also learn about themselves and their particular rights and responsibilities in this land-based scheme of existence" (Barrett 1999.193). My argument is that clearing the farmland of trees not only permitted the view that land ownership was ancestrally determined, but also the view that the land derived its potency (or fertility) from powerful ancestors. In other words, what I am proposing here is the starting point of a process of associating a fertility cult with a cult of the dead that is still observed in Central European folk culture (e.g., Pisarzak 1978).
Although the long barrows of the Stryczowice sub-region differ in many aspects (construction materials, location near water sources), they nevertheless match the general system of FBC earthen long barrows. Their orientation, shape, size, and internal structure evince a formalised practice within well-defined spaces and follow a definite conceptual system.
In light of the above, it seems significant that the constructors of the long barrows intentionally linked their orientations with turning points in the annual seasonal and farming cycles (and not with turning points in the solar year, such as the solstices or equinoxes). Consequently, to predict or control those turning points by means of a dedicated ancestral monument would have been a symbol of the ancestors' power over the farming practices of their descendants.
Malice Kościelne
To test the hypothesis mentioned above, I examined two earthen long barrows at Malice Kościelne. Since I have not visited the site, my study relies on data provided by Barbara Bargieł and Marek Florek (2006a). The site occupies the southern bank of the Opatówka River, on the upper part of a slope falling towards the N-NE, about 34-35m above the present valley bottom. The site consists of the remains of two long barrows and a nearby FBC settlement (about 200m away). My calculations, based on the maps and plans published by the authors and on Google Earth imagery, show that the barrows are roughly oriented towards the distant (over 18km) hills of the Iwanickie Range, which has summits above 300m a.s.l. Declinations corrected for mean refraction vary between -13°06' and -9°28' (the walls of Barrow 1 and the northern wall of Barrow 2), and correspond to sunset dates between February 12/October 30 and February 22/October 19 (Fig. 3). The Iwanickie Range may be used as a visual marker of the Koprzywianka River catchment area, which delimits the southern extension of the Sandomierz Upland. The Koprzywianka rises in the Jeleniowskie Range (below the Szczytniak summit, clearly visible from the Stryczowice region) and flows into the Vistula at the city of Sandomierz.
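The phrase "corrected for mean refraction" refers to the standard adjustment for the bending of light near the horizon. A minimal sketch of the correction is shown below, using the conventional mean value of about 34 arcminutes at zero altitude (the exact refraction depends on temperature and pressure, so this is an approximation):

```python
MEAN_HORIZON_REFRACTION_DEG = 34.0 / 60.0  # ~34 arcmin at the horizon, a conventional mean

def true_altitude_deg(apparent_altitude_deg):
    """Refraction lifts objects near the horizon; subtract the mean value
    to recover the true (geometric) altitude used in declination formulas."""
    return apparent_altitude_deg - MEAN_HORIZON_REFRACTION_DEG
```

The corrected altitude is then fed into the azimuth-to-declination relation sketched earlier.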
Assuming that some alignments might have been used in both directions, the eastern declinations were also computed. On the nearby upland there are a few small elevations, but they do not seem to be very impressive, so I cannot be sure of the intended direction. In general, these reverse alignments point to the distant and invisible valley of the Vistula, or to the place of the rising sun during the second half of April and of August.
It is observed that the southern side of Barrow 2 is significantly skewed and displays alignments around the equinoxes. These are the only alignments that could have an astronomical (i.e. equinoctial) meaning.
The local topography shows that the longitudinal axes of both tombs are skewed from the mean axis of the slope on which they were built. The site is unusual in that the direct view to the South is entirely blocked by nearby elevations (azimuths between 216° and 183°), although this does not affect the vistas to the South-West and West. It seems, therefore, that through their alignments the tombs were deliberately placed to show visual links with a distant foresight where the Sun sets on specific days. Interestingly, this southwestern horizon cannot be observed from the nearby settlement area.
It is immediately evident that the Malice barrows are not aligned in the same way as those from the Stryczowice region (see Fig. 3). At Malice Kościelne, both long barrows are oriented along the SW-NE axis, while the Stryczowice ones prefer the SE-NW axis. Following Christopher Tilley (1984.122), this may be interpreted as signifying "the opposition or relations of non-identity" between the two areas. Therefore, regarding the spatial pattern, it can be suggested that the Stryczowice and Malice Kościelne sites represent different social or political entities. However, in both cases the long barrow orientations have clear connections with similar local climatic-meteorological cycles. Accepting that the alignments were used in both directions, we find they are connected with almost the same turning points of the agricultural year. Therefore, although distinctly oriented, the Malice Kościelne long barrows appear to show the same concern with solar dates as the ones in the Stryczowice sub-region.
In other words, in all the measured and analysed long barrows, the dates indicated by the sunrise/sunset positions are close to the turning points of a seasonal calendar and may easily be associated with ceremonial activities related to a fertility cult. The watershed of a given cycle (starting in October/November or in April/May) may be interpreted as signalling a liminal situation, when one cycle ends and the other begins. The dead were naturally sent to the afterworld at different times, but privileged communication with them might have been achieved during these particular periods of the year. In this context, the above dates may also refer to liminal moments when the 'opening' and 'closing' of the natural world was observed, enabling the transition from the cold to the warm, or from the warm to the cold, half of the year. These dates would also have enabled open, direct and more efficient communication with the dead. In this way, the permanent link of the long barrows with solar dates could have emphasised an idealised space/time structure shared by the members of a community and a particular association between the ancestors and the potential of the cultivated fields.
The realms of the land of the dead
The wider ends of the excavated long barrows at Broniszowice, Stryczowice and Malice Kościelne, and of the unexcavated long barrow at Kunów, are on the eastern side. They are all situated near the highest parts of the mounds. Although I consider the direction from the broader base towards the narrower end of a barrow to be the proper one (Iwaniszewski 1995.35), in the present paper both directions were assessed. The eastern horizon as observed from Stryczowice and Garbacz-Skała is connected to the distant and separate valley of the Opatówka River, a tributary of the Vistula. At Kunów and Broniszowice, however, direct views towards more distant horizon features are obstructed by nearby hills. Nevertheless, in all four cases the long barrow alignments relate to distant horizon features located in the northwestern quadrant. I observed that the narrower and lower ends of the long barrows tend to point to higher parts of distant horizons, while their higher and wider 'façades' seem to point to much lower and nearer skylines. As the heads of the deceased were also oriented westward, it is possible that they were visually connected, through the orientation of a long barrow, to the distant and higher northwestern skyline. Furthermore, Neolithic settlements here tend to be concentrated in the lower parts of the lands separated by adjacent smaller rivers or streams. They are located to the east and southeast of the Garbacz, Stryczowice, and Broniszowice long barrows. It seems, therefore, that in the entire micro-region long barrows were built with their entrances oriented downhill, towards the nearby river valleys.
At Malice Kościelne, the FBC settlement is located to the NW, also on a slope descending to a river. The barrows are roughly oriented towards a distant natural foresight associated with the catchment area of a remote Vistula tributary. The eastward alignments point to the nearby Opatówka valley. Like all earthen long barrows built by the Funnel Beakers, these monuments also display formalised and standardised ritual and spatial behaviour. The space in front of the long barrows seems to have had a special ritual significance (Midgley 1985.64; Bąbel 2006) as a place where people could have occasionally gathered. The creation of a ritual space for a local community so as to receive the rays of the rising Sun obviously contrasts with the position of the dead, who are oriented towards the direction where the Sun sets and from where winds, precipitation, storms, and thunder come. As stated above, the locations of long barrows invariably maintain a spatial separation from settlements. In this way, the dead gradually became more distanced from the living community, turning into ancestors or mythical figures relegated to an indefinite abode. In my opinion, the dead, ancestors or mythological subjects might finally have been transformed into inhabitants of the world which they shared with the Sun and significant meteorological events. Although the dead were buried in the long barrows, their 'real' abodes became the heavens containing the Sun and water. Thus the placement of those funerary monuments at elevated locations could have emphasised their proximity to the celestial sphere.
Having established that these long barrows were preferentially oriented towards higher features found on distant horizons, the reasons for choosing such targets must be defined. In my opinion, the lack of easily observed waterways on the nearby eastern horizon contrasts with the presence of a remote river valley on the northwestern horizon. The Stryczowice long barrows align with distant landmarks of the valley of the Świślina River. Not only is this a distant horizon, but it also relates to a distinct waterway. The same is found at Malice, where the distant landmark (the Iwanickie Range) is related to the Koprzywianka River. One reason for this pattern might be that the long barrow orientations relate the monuments to remote 'upper' cosmic waters rather than to the actual lower watercourses below. If this is true, then the monuments could have been metaphorically viewed as vehicles joining the eastern, neighbouring world of the living with the northwestern, western and southwestern distant world of the dead. Since the non-local horizon targets involve the presence of remote 'upper' waters, it is probable that a symbolic association between the dead and agricultural lands was produced through the mediatory roles of solar events and water falling from the skies.
South-eastern Funnel Beaker Culture and Tripolye cultural affinities
The Tripolye (Trypillia) culture occupied the territory between the Dnieper and the Carpathians, next to the eastern limits of the south-eastern FBC group (namely, the Lublin and Volhynia regions). This very rich painted-ware culture developed during the 5th, 4th and 3rd millennia BC. The wide-ranging relationships that existed between Trypolyean communities and their western neighbours, evident in shared cultural traits (e.g., Balcer 1981; Kośko 1981; Jastrzębski 1985), resulted in the reception and adaptation of a great variety of Trypolyean cultural patterns by FBC groups. These ranged from technological and stylistic traits in pottery manufacturing to cosmological ideas associated with concepts of the afterlife and the ancestors (Kośko 1981.123-162).
The growing number of Trypolyean finds within the Sandomierz area has led some researchers to propose (e.g., Kośko et al. 1999.288) that these FBC communities were 'trypolyezed' ('eneolithised'). Others suggest that contacts went in both directions (Videiko 1999.43-44). Still others (Nowak 2014.193-194) point to the growing self-awareness of a borderland society, which led to a very "peculiar, unique and autonomous space in relation to neighbouring areas". To sum up, cultural exchange between FBC and TC groups occurred between 3640 and 2880 BC, during the Late Middle Neolithic period, when intensive agricultural activity resulted in the clearing of forest from uplands and slopes (Kruk, Milisauskas 1999.312-316; Videiko 1999). It was during this time that FBC mortuary rituals were affected or influenced by their south-eastern neighbours (Burchard et al. 1991.99). Also, certain motifs found in the ornamentation of FBC pottery appear to have been adapted from the Trypolyean culture (Videiko 1999.66).
The upper world levels and motifs on the painted pottery of Tripolye Culture
So far, I have pointed out that the long barrow orientations suggest that the ancestors' abode was localised in the upper regions of the universe. Within the Stryczowice sub-region, the northwestern part of the skyline was linked to the setting Sun and to the moment the hot season arrived, especially the coming of spring storms, thunder and rain, which typify the weather during the spring. The onset of the hot season may be identified through the alignments of the long barrows, possibly conceptualised as 'houses of the dead'. I suggest that this relationship was reassessed through a series of ceremonies arranged to ensure good crops and the well-being of the ancestors who kept the fields fertile. The alignment came alive again shortly after the harvest. Ceremonies involving pouring grain from the recent harvest onto the monument, to ensure a supply for the fields in the coming year, could have been performed.
The FBC groups at Malice Kościelne evince a similar spatial-meteorological-astronomical symbolism. In February, they could have celebrated the transition from the dry to the wet season, and when the Sun set over the distant horizon foresight in early November, they could have observed the arrival of the cold season and a significant decline in precipitation. Both calendar dates are liminal, since they indicate the transition from one season to another. In contrast with the Stryczowice region, where the calendar dates relate to the wet and hot season, at Malice Kościelne the solar calendar dates made manifest through the alignments are associated with the dry, cold season.
The influence of the TC noticeable in materials from the large FBC settlements in the Sandomierz sub-region of the south-eastern FBC group allows me to ask whether any ideological or worldview patterns were also imported from the Tripolye culture into the Sandomierz Upland.
My argument in this section will be that the location of the long barrows in the Stryczowice sub-region reflects a worldview which can be explained by specific depictions on Tripolye pottery. As is known, Boris A. Rybakov interpreted the motifs painted on Trypolyean cooking pottery in terms of a fertility cult. Furthermore, he concluded that these motifs represented the tripartite structure of the world (Rybakov 1965.37). Thus the upper band of wavy lines represented the 'upper' heavenly waters. Just below it, another band, filled with spirals, solar and lunar symbols, and vertical streamer-type lines recalling falling rain, rendered the 'upper' atmospheric-meteorological waters. The bottom band, filled with vegetal, animal or human figures, represented the level inhabited by people (see Fig. 7). It seems that the sky with its rain resources, together with the alternation of seasons and the life cycle, constituted central themes in the worldview of the Tripolyeans. Curiously enough, there is no place for the dead in Rybakov's interpretation (compare Kośko 1981.159).
Today, many of these ascriptions of meaning may appear ambiguous. Scholars who examine these depictions usually focus on the ascription of meanings, an approach based on the interpretation of motifs as visual symbols or signs. For example, spiral ornaments are interpreted as visual images of serpents, celestial dragons, dragon-serpents and serpentine-like goddesses, or as solar signs (Palaguta 1999; 2009; Tsvek 1993.81-87).
The importance of water symbolism in funeral practices cannot be ruled out, since fragments of similar motifs have been found on pottery excavated at Broniszowice, Stryczowice, and Malice Kościelne. This fact enables us to explore other research possibilities.
Various scholars (e.g., Bradley 2000.60-63; Woźny 1996.50-55, Tab. 1, Map 1, 103-106; Adamczak 2013.183-186) have already noticed a correlation between the locations of long barrows and ritual deposits placed in water (springs, lakes, bogs, rivers). Watery associations between FBC settlements and long barrows have long been reported in other FBC groups. For example, in Kuyavia, the Eastern Funnel Beakers often located their settlements and long barrows near water sources, deposited ritual items in water, and used damp soils containing snails and mussels to cover the dead in earthen long barrows (Woźny 1996.53-57, Tab. 1; Adamczak 2013.183-188). Similar votive bog deposits of the FBC have recently been suggested for the area of the South-Eastern Group (Libera, Zakościelna 2006). In the Sandomierz Upland, long barrows were located on raised loess hills, which are relatively dry areas. Unfortunately, the distribution of long barrows in relation to the locations of streams or springs has not yet been studied. Now, according to Jerzy Bąbel (1979), Jacek Woźny (1996.55), and Kamil Adamczak (2013.184), the vessels used as ritual deposits seem to have shared a specific type of decoration. The most typical decorative motifs include wavy horizontal lines, or zigzags placed just below the rim, presumably representing water symbolism. Below them, motifs consisting of a central stalk with short paired lines placed at intervals and directed downwards, possibly representing trees or plants, sometimes occur, suggesting a kind of composition containing water and vegetative components (Woźny 1996.103). According to Bąbel (1979) and Aleksander Kośko (1981.159-161), Funnel Beaker communities depicted celestial waters (in the form of horizontal lines), clouds (garlands) and falling waters (vertical rows of cuts). Note that, when connected with agriculture, such motifs may refer to the warmer half of the year, between March and July, when most rainfall occurs (Kowalski 1997.17-18).
Assuming that the sky, as conceptualised by the Funnel Beakers, was permeated with water and depicted as such on their pottery, one may ask whether the realm of the land of the dead was also related to water symbolism. My interpretation links the dead with the celestial waters. It associates both the long barrows and the dead with the time when the warmer part of the year begins. The Earth is 'opened', so vegetation sprouts and the dead or the ancestors approach human settlements for food. Does this mean that it is the dead who pour water onto the fields? According to Rybakov (1965), the lower level of the sky was where the celestial bodies pass across the heavens and the rain waters were stored. It follows that the decorated pottery describes the relationship between the upper sky, the lower sky, and the land; thus it represents two celestial layers rather than one. My interpretation links the dead with the waters above. Both proposals are in many ways speculative, but at least they fit the body of data we have at present.
There is little evidence showing that the Sun was imagined as an animate being with agentive power. The status of the ancestors seems to be different. Examples from the Northern Funnel Beaker Group indicate they could use ships to reach the heavens (Adamczak 2013.184), but no such associations are known in the study area. Therefore, I assume that the ancestors buried under the long barrows were considered animate beings who used the Sun to signal changes in the weather to humans.
Conclusions
The landscape, archaeoastronomical and symbolic analysis of the long barrow locations in the Stryczowice micro-region reveals interesting patterns related to the worldview of its Neolithic inhabitants. Combined with the designs displayed on some FBC vessels, these examples reveal patterns in the worldview of FBC societies. Although the FBC vessels seem to imitate the ornamentation of Tripolye pottery, which depicts a three-level structure of the world, some substantial differences between the two images of the universe existed. According to the present interpretation, the fertility of the fields was interwoven with an ancestor cult. Different elements in the sky were merged: the Sun, spring rains, rain-bringing winds, cold, dry winds, remote mountain landmarks, and the ancestors. While the first two elements were possibly of Trypolyean origin, the development of long barrows in the region resulted in the placement of the dead in the western quadrant of the sky, where they became associated with the waters.
It can be argued that the dead were either incorporated into existing long barrows or were provided with monuments of their own to denote that the land, with its productive-vegetative cycles, was owned or at least controlled by them. The dead were perhaps seen as having more permanence in the landscape than the living communities and therefore as having a longer-term claim to the land. On the other hand, the people were concerned with their own identity and with defining themselves as separate from both the natural world and the physical location of the dead. The dead were approached and incorporated into systems of beliefs through the agricultural cycle itself. The structural organisational metaphor used to understand the world was based on the correlation made between astronomical and meteorological cycles.
This paper shows that archaeoastronomy can offer valuable insights into the study of the past. However, it also indicates that not all Neolithic monuments display alignments with turning points in the annual cycle. While such alignments may easily be dismissed by scholars whose research interest is limited to seeking astronomically meaningful orientations, researchers with a keen interest in cultural phenomena may find this evidence strongly indicative of the intentions of the long barrow builders and users. This type of research shows more affinities with the practice of archaeology than with archaeoastronomy itself (Bostwick 2006).
Archaeological research at Stryczowice was supported by the Polish Committee for Scientific Research (grant KBN Nr. T-186/267/P-DOT/00) and the State Archaeological Museum in Warsaw. The author thanks Barbara Matraszek, director of the Stryczowice archaeological project, for her generous invitation to conduct archaeoastronomical investigations and landscape observations, and is grateful to Barbara Matraszek, Sławomir Sałaciński, Jerzy T. Bąbel and Bogdan Balcer for their helpful suggestions and comments on earlier drafts of this paper. This paper derives from the project 'Starry sky - an animated sky', initiated during my sabbatical leave in 2014. The author is also grateful to two anonymous reviewers for their helpful and valuable comments.
"Physics"
] |
NETWORKING INFORMATION TECHNOLOGIES AS A METHOD OF INNOVATIVE CHANGE IN ACADEMIC LIBRARIES
HOVORUKHA V. B., Department "Computational Mathematics and Mathematical Cybernetics", Oles Honchar Dnipro National University (Dnipro, Ukraine), e-mail<EMAIL_ADDRESS>ORCID 0000-0002-0936-9272. SEMENOVA Larysa A., Scientific and Technical Library, Dnipro National University of Railway Transport named after Academician V. Lazaryan (Dnipro, Ukraine), e-mail<EMAIL_ADDRESS>SEMENOVA Liudmyla A., Scientific and Technical Library, Dnipro National University of Railway Transport named after Academician V. Lazaryan (Dnipro, Ukraine), e-mail<EMAIL_ADDRESS>
Introduction
The international movement for openness of information and knowledge, the rapid growth in the number of digital media, and the creation of digital libraries require the upgrading of the educational infrastructure. The changing stereotypes of modern library users are helping to transform libraries from simple book collections into media centers. These centers create conditions for education and science, as well as spaces for collaboration, communication and leisure. This has a positive effect on the socialization and self-realization of the young people studying in an educational institution. Those who wish to obtain an education increasingly prefer a continuous form of study, namely distance courses. Distance learning courses are based on networking information technology.
These changes will best meet new user needs. It is impossible to imagine a modern university library without the use of the latest media technologies. The authors aim to understand the processes taking place at the present stage of development of educational and library-communication activities, and to explore the impact of digital network technologies on the modernization of university academic institutions.
Today, there are many publications by domestic and foreign scholars on various aspects of the use of remote technologies and the provision of mobile services to users. Namely, T. Markova, I. Glazkova and E. Zaborova (2016) investigated the problem of the quality of online distance learning, while N. Yevsyukova and S. Fedyaj (2018) examined the issue of improving the skills of library workers through the mass open online courses "Prometheus". L. F. Bandylko (2018) identified tendencies and directions of innovative activity of Ukrainian libraries at the present stage, and V. Zagumenna and T. Granchak (2017) provided an analysis of the distance education of librarians in the virtual environment. M. L. Smirnowa (2017) explored the motivational potential of new media and e-learning programs, and L. W. Afanasjewa with M. L. Smyrnowa (2019) examined the use of innovative multimedia technologies in foreign language classes by technical students. V. M. Kukharenko's work (2018) is devoted to the obstacles to the implementation of distance learning. Areas of application of modern multimedia and interactive technologies in education are covered in the work of O. Ye. Konovalenko and V. O. Brusentsev (2017), and the issues of the use of modern innovative intellectual technologies in the distance learning of engineers in the article by G. A. Samigulina and Z. I. Samigulina (2017). The implementation of networked information technologies in the practical activities of libraries, with the aim of expanding and improving the forms and methods of information services for remote users, has not been sufficiently researched in scientific publications. This problem was the focus of our research.
Methods
The current research uses analytical and descriptive methods, with a focus on analyzing the practical experience of the Scientific and Technical Library of the Dnipro National University of Railway Transport named after Academician V. Lazaryan (STL DNURT) in the field of application of networking information technologies.
The availability of electronic media breaks down existing state borders. There are no longer barriers to knowledge transfer, because remote users now have access to the scientific and cultural heritage (Brünger-Weilandt, 2014). European libraries have started to create and develop digital online libraries to meet the needs of modern users. For example, the Deutsche Digitale Bibliothek (DDB, German Digital Library) is the largest online library in the country. The DDB portal opens access to the country's cultural treasury and information sources for all interested parties. With the help of network technologies, remote users can read scientific articles online, watch films, and get acquainted with museum collections. It is very important that DDB users can rely on the digitized sources of information, as their quality is guaranteed by German cultural and scientific institutions. More than 2000 cultural and scientific institutions, libraries and archives have participated in this project. The digitized material remains, in good resolution, on the website of the institution that processed it. The number of objects in the German digital library is constantly growing: as of the end of 2018, the portal contained records of more than 24 million objects, according to Wikipedia (Deutsche Digitale Bibliothek, (n.d.)). In the long term, this library should bring all German cultural and scientific institutions online and integrate them into the European project "Europeana".
Ukrainian libraries have already begun to join this global trend, as participation in such projects contributes to the development and popularity of a library and enhances its image. Moreover, the facts show that such projects have great resonance. Thus, in the first few days after the launch of the DDB portal, FIZ Karlsruhe recorded 3.6 million hits and more than 25.6 million downloads (Deutsche Digitale Bibliothek - Kultur und Wissen online, 2012, December 20).
Networking information technologies have spread across different spheres of public life, because their use for the professional development of employees satisfies both the workers of production or business companies themselves and their employers. Modern multimedia and interactive technologies are used for modeling and creating production situations, for "field" research, for psychological exercises, and for creating video presentations and videos advertising production, etc.
The integration of knowledge and communication and the international movement for open access to knowledge are changing the functioning and development of academic libraries in the direction of providing high-quality information services to author-scientists and supporting the philosophy of open access to knowledge. Shifting the vector of priority to the needs of user-scientists, providing new digital services to users, and participating in the publishing of scientific periodicals turn university libraries into partners of scientist-educators in the production, preservation and dissemination of knowledge (Kolesnykova, 2017).
Good mastery of the latest technologies, increased user access to world information resources, the production of innovative information products and services, and the digitization of the most valuable part of library funds are the basic directions of development of academic libraries; this requires library specialists to keep pace with the era of networks, electronic documents and virtual reality (Bandilko, 2018).
Media literacy and the use of modern digital technologies make it possible for anyone to achieve their own educational goals, which is why more and more people are opting for distance learning. Distance learning is the main area of application of network technologies. The digital world is transforming education today like no other social phenomenon. Learning is increasingly becoming virtual, as distance learning has certain advantages over traditional forms of learning: it is an effective, flexible, mobile and consumer-friendly training system. By means of distance courses, it is possible to receive an education at different levels or to improve one's professional qualifications regardless of place of residence or physical disability, and without interrupting one's professional activity. Networking technologies give course participants access to webinars and to the discussion of various issues on forums or social networks. The process of teaching and certifying students' knowledge is carried out with the help of mobile information technologies.
University libraries should provide information and library support for distance learning systems. The requirements for the technical equipment of library facilities and for the professional competence of library specialists are constantly increasing, and the application of the latest multimedia technologies requires innovative changes in academic libraries. Functional and constantly updated websites and repositories and an easy-to-use electronic catalog should provide remote users with access to digital textbooks, tutorials, self-study materials, scientific conference materials, as well as educational video films, video presentations, video lectures and virtual exhibitions. Academic librarians must have a good command of IT culture and constantly improve their level of information culture in order to help users work with the electronic catalog, use international databases (DB), evaluate sources of information, publishers, web addresses and the value of Internet resources, and provide guidance in the use of bibliographic, thematic and factual references and in different types of citation.
Results and Discussion
Distance learning courses and participation in Clarivate Analytics and BrightTALK webinars help prepare professionals for innovative change, and they then actively put their knowledge and skills into practice. For example, in 2014 the employees of the Scientific and Technical Library of DNURT named after Academician V. Lazaryan took the opportunity for distance learning in the NTU "KPI" course "Curator of Content 3" (Network Information Analyst) and received certificates upon completion. The training of STL employees and their mastery of new technologies allowed the creation of a new department of library and information technologies within the library, with a sector of information analytics and a sector for support of the automated library system and software. The acquired knowledge and skills of these specialists helped to expand the sphere of virtual services and made it possible to better fulfill the information needs of the university community.
Today, librarians carry out scientometric and bibliometric studies, prepare analytical reports for management, and support the database "University Science Publication Profile". They have also mastered a new type of activity: editorial and publishing work. This allowed the successful publication of the open access electronic journals "Science and the Progress of Transport" (http://stp.diit.edu.ua/) and "Anthropological Measurements of Philosophical Research" (http://ampr.diit.edu.ua/) (Fig. 1).
Figure 1. E-journals of STL DNURT published under the "Library Publishing" project
STL DNURT organizes practical seminars and webinars for the scientific community and creates and places useful and accessible video lessons on the library's website. These lessons help scientists navigate the virtual space easily. All this has made it possible to expand the scope of online library services in the direction of providing users with access to digital scientific content.
With the help of the Open Conference Systems platform, which has been successfully mastered by the specialists of the institution, the Scientific and Technical Library engages in publishing and promoting the results of the scientific activities of the University's scientists through sites of open conferences (Fig. 2).
Figure 2. Sites of open scientific conferences supervised by the DNURT Library
In 2014, the site of the first scientific conference, "Anthropological Measurements of Philosophical Research", was created (conference access: http://confampr.diit.edu.ua/). In 2016, for the multi-format scientific and practical conference on library and information affairs, held in the format of open conferences with live broadcast via a YouTube channel, the site of the second scientific conference, "The University Library at a New Stage of Social Communications Development", was created; its materials can be viewed at http://conflib.diit.edu.ua/ (Pominova, 2016). All this was done through the use of the latest technologies.
Librarians have mastered all the processes involved in creating electronic copies of print editions (FineReader, Photoshop, ScanTailor, Adobe Acrobat) in order to join the workflow, now spread around the world, of digitizing the most valuable library stock. To date, the collection "Abstracts of Dissertations" (http://eadnurt.diit.edu.ua/jspui/handle/123456789/39) has been created and placed on the library's website; it contains 542 records, of which approximately 350 contain scanned documents. The collection "Railway Ukrainika" on the "Rare and Valuable Editions" page (http://ecat.diit.edu.ua/zu/index.html) gives access to documents published between 1877 and 1940 for which copyright has expired (Matveyeva, Yunakovska, 2017).
This account of the experience of the Scientific and Technical Library of DNURT named after Academician V. Lazaryan makes it possible to see that the introduction of mobile technologies activates innovative changes in the activity of the library and makes it necessary to transform information and library activity toward the broader satisfaction of the needs of remote users.
Conclusions
Processes of globalization and informatization, the accessibility of electronic media, the computerization of all spheres of public life, and the development and improvement of information technologies are leading to innovative changes in the modern activity of educational and library institutions. Today, online libraries are gaining in popularity, and distance learning is becoming a promising form of educational service. The use of online information services makes the process of knowledge acquisition more flexible, mobile and convenient for those wishing to acquire knowledge.
The analysis of the practical activity of the Scientific and Technical Library of DNURT named after Academician V. Lazaryan confirms the thesis that the introduction of digital network technologies into the activity of the institution modernizes the library and contributes to its popularity among users. STL DNURT has changed the vector of its activity and expanded its list of services in the field of remote user services. Librarians have mastered new activities (editorial and publishing work, the digitization of valuable sources from library funds, the creation of websites for open conferences, scientometric and bibliometric analytics, etc.). Library specialists are able to organize and successfully conduct practical video seminars, conferences and webinars for the scientific community and to help university scientists navigate the scientific virtual space. The Scientific and Technical Library has the potential to continue to actively master these technologies through the development and support of open networking courses, distance training for employees, and the exchange of experience between educational and library institutions through various multimedia-based activities.
The introduction of networked information technology has a positive impact on the innovative development of academic libraries, as it facilitates the transformation of the institution into a modern media center with a wide range of comfortable mobile services for users.
The results of this research provide a better understanding of current trends in the development of academic libraries and encourage further research on this topic in the search for new means of improving the quality of user services. | 3,115 | 2019-12-16T00:00:00.000 | [
"Computer Science",
"Materials Science"
] |
HBX-6, Standardized Cornus officinalis and Psoralea corylifolia L. Extracts, Suppresses Benign Prostate Hyperplasia by Attenuating E2F1 Activation
Background: The aim of this study was to simplify and identify the contents of the herbal formula HBX-5 and to evaluate the therapeutic effects of the resulting formula, HBX-6, in a mouse model of benign prostatic hyperplasia (BPH). Based on in vitro results, we selected candidate herbs, reconstituted an experimental agent, and investigated its effects in testosterone-induced BPH animals. Cell viability was determined by MTT assay in RWPE-1 and WPMY-1 cells. The expression of the androgen receptor (AR) was measured in dihydrotestosterone-stimulated RWPE-1 and WPMY-1 cells. BPH was induced in mice by subcutaneous injection of testosterone propionate for four weeks. Animals were divided into six groups: Group 1, control mice; Group 2, mice with BPH; Group 3, mice with BPH treated with finasteride; Group 4, mice with BPH treated with 200 mg/kg HBX-5; Group 5, mice with BPH treated with 100 mg/kg HBX-6; and Group 6, mice with BPH treated with 200 mg/kg HBX-6. Changes in prostate weight were measured after treatment, and the thickness of the epithelium was evaluated. The expression levels of proteins associated with prostatic cell proliferation and of cell cycle-related proteins were determined. Based on previous reports and the in vitro results, we selected Cornus officinalis and Psoralea corylifolia from among the HBX-5 components, reconstituted the experimental agent, and named it HBX-6. The new herbal formula, HBX-6, suppressed the pathological alterations of BPH and showed a marked reduction in proliferation-related protein expression compared with untreated mice with BPH. Our results indicate that HBX-6 has a better therapeutic effect in the BPH murine model than HBX-5 and finasteride, suggesting a role for HBX-6 as a new BPH remedial agent.
Introduction
Benign prostatic hyperplasia (BPH) is one of the most frequently reported male health disorders and has a considerable impact on men older than 50 years worldwide. The cumulative prevalence of BPH has been shown to be about 50% in men aged 41-50 years, increasing by roughly 10% per decade and reaching 80% in men older than 80 years; most men older than 80 years are likely to experience the pathological symptoms of prostatic hyperplasia [1]. BPH is defined as a nonmalignant overgrowth condition of the prostate, which is implicated in lower urinary tract symptoms (LUTS) and bladder outlet obstruction (BOO) [2,3]. Although there is no complete agreement on the etiology of BPH, many researchers have reported that several risk factors, such as ageing, excessive dihydrotestosterone (DHT) levels, and hormonal alterations, may be involved in the development of the disease [4,5]. One major issue in BPH research concerns the interaction between hormonal disturbance and cellular proliferation [6]. Based on histological diagnosis, BPH is characterized by the unregulated proliferation of connective tissue, smooth muscle, and glandular epithelial cells [7]. During BPH development and progression, cellular proliferation leads to prostate enlargement and the augmentation of stromal smooth muscle tone [8]. BPH is best treated by two major categories of drugs: α1-adrenergic receptor blockers and 5α-reductase inhibitors. Alpha-1 blockers bind and block their cognate receptors and relax the prostatic smooth muscle, relieving BOO [6]. 5α-reductase inhibitors, also called DHT blockers, have primarily been used in the treatment of BPH; these agents prevent the conversion of testosterone to DHT, leading to prostate volume shrinkage and mitigation of urinary tract symptoms. While these agents are effective for symptomatic improvement, a significant limitation is their adverse effects, such as reproductive dysfunction, gynecomastia, and subsequent progression to prostate cancer [9]. Hence, there is a definite need to develop substitutes for these drugs with fewer side effects. As part of these efforts, herbal medicine-based drug development has been proposed.
HBX-5 is a standardized herbal medicine-based formula suggested for the treatment of BPH and is formulated from nine medicinal herbs. Our previous findings showed the antiproliferative effects of HBX-5 in a testosterone-treated rat model and suggested that HBX-5 could be further explored as a potential herbal medicine for the treatment of BPH [10]. Although our previous investigation indicated the therapeutic potential of HBX-5 against BPH development, the medicine preparation process was limited by the complexity of the HBX-5 composition, which suggested the need to simplify its contents.
Here, we established a DHT-stimulated prostate cell model to evaluate the inhibitory effect of the individual component herbs of HBX-5 on androgen receptor (AR) expression. Based on the in vitro results, we selected Cornus officinalis Sieb. et Zucc. and Psoralea corylifolia L. and reconstituted the new herbal formula, HBX-6. After the formulation of HBX-6, we identified its representative chromatograms. Based on the HPLC analysis and previous studies, we evaluated the antiproliferative effect of HBX-6 in testosterone-treated mice. Oral administration of HBX-6 suppressed the prostate enlargement and pathological changes induced by testosterone injection through the inhibition of proliferation-related protein expression. This molecular mechanism is associated with the inhibition of the E2F1-Rb pathway and a reduction in cyclin D1 expression. Overall, our study presents the possibility of treating BPH through the antiproliferative effect of the new combined formula, HBX-6.
Cell Culture and Sample Treatment
The normal human prostatic epithelial cell line RWPE-1 and the normal human prostatic stromal cell line WPMY-1 were acquired from the American Type Culture Collection (Manassas, VA, USA). RWPE-1 cells were cultured in Keratinocyte Serum-Free Medium supplemented with 0.05 mg/mL bovine pituitary extract, 5 ng/mL human recombinant epidermal growth factor, and an antibiotic-antimycotic cocktail (Gibco, Grand Island, NY, USA). WPMY-1 cells were cultured in Dulbecco's Modified Eagle's Medium supplemented with 1% penicillin/streptomycin and 10% FBS (Gibco). After 24 h of incubation, the cells were serum-starved prior to some of the experiments, as indicated. The cells were then treated with 10 nM DHT for 24-72 h, with or without various concentrations of components from HBX-5 (0.25-1000 µg/mL).
Cell Viability Assays
Cells were treated with the herbal components (0.25-1000 µg/mL) and incubated overnight, followed by the addition of MTT solution (5 mg/mL) for 2 h. After aspirating the supernatant, the formazan product was dissolved in DMSO, and the extent of cytotoxicity was measured at 570 nm using a BioTek Epoch microplate spectrophotometer (Winooski, VT, USA).
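The text does not spell out the viability computation itself; the snippet below is a minimal sketch of how percent viability is typically derived from such 570 nm absorbance readings. The blank-correction scheme and all numeric values are illustrative assumptions, not data from this study.

```python
import numpy as np

def percent_viability(a_treated, a_control, a_blank=0.0):
    """Percent viability from MTT absorbance readings at 570 nm.

    a_treated: absorbances of wells treated with a herbal component
    a_control: absorbances of untreated control wells
    a_blank:   background absorbance (medium + MTT, no cells) -- an assumption
    """
    a_treated = np.asarray(a_treated, dtype=float)
    control_mean = np.mean(a_control) - a_blank
    return (a_treated - a_blank) / control_mean * 100.0

# Hypothetical triplicate readings for one component at one concentration
print(percent_viability([0.82, 0.79, 0.85], [0.91, 0.88, 0.93], a_blank=0.05).round(1))
# -> [89.9 86.4 93.4]
```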
Preparation of HBX-6
Cornus officinalis Sieb. et Zucc. and Psoralea corylifolia L. were obtained from Hwapyung D&F Co., Ltd. (Seoul, Korea). The HBX-6 composition was as follows (values indicate the proportion of each ingredient per 1000 g): Cornus officinalis Sieb. et Zucc. (650 g) and Psoralea corylifolia L. (350 g). The two crude drugs were decocted gently in a 10-fold volume of 30% ethanol for 3 h and filtered, and the filtrate was spray-dried to yield an extract powder amounting to approximately 8.33% of the original preparation by weight.
Calibration Curves, Limits of Detection, and Limits of Quantification
A 70% methanol stock solution containing the six reference components was prepared and diluted to appropriate concentrations for the construction of calibration curves. Six concentrations of the mixed standard solution were injected in triplicate, and their regression parameters were calculated from the equation Y = AX + B (Table 1). The stock solution was further diluted to a series of concentrations with 70% methanol to obtain the limits of detection (LOD) and limits of quantification (LOQ). The LOD and LOQ under the indicated chromatographic conditions were determined at signal-to-noise (S/N) ratios of 3 and 10, respectively.
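As an illustration of this workflow, the sketch below fits Y = AX + B to a hypothetical six-level calibration series and then screens a serial dilution for the lowest concentrations reaching S/N ratios of 3 and 10. All concentrations, peak areas, and the noise estimate are invented for illustration; they are not the values behind Table 1.

```python
import numpy as np

# Hypothetical calibration data: concentration (ug/mL) vs. mean peak area
conc = np.array([1.0, 5.0, 10.0, 25.0, 50.0, 100.0])  # six levels, injected in triplicate
area = np.array([12.1, 60.8, 121.5, 303.2, 609.9, 1218.4])

# Least-squares fit of Y = A*X + B and the correlation coefficient r
A, B = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]
print(f"Y = {A:.3f}X + {B:+.3f}, r = {r:.4f}")

# LOD/LOQ from a serial dilution: the lowest levels whose
# signal-to-noise ratio reaches 3 (LOD) and 10 (LOQ)
noise = 1.8                                           # baseline noise, hypothetical
dil_conc = np.array([0.05, 0.1, 0.25, 0.5, 1.0])
dil_signal = np.array([0.9, 2.1, 5.6, 11.8, 24.0])
snr = dil_signal / noise
print(f"LOD ~ {dil_conc[snr >= 3].min()} ug/mL, LOQ ~ {dil_conc[snr >= 10].min()} ug/mL")
```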
Animal Treatments
A total of 60 male ICR mice (25 ± 2 g) were acquired from Daehan BioLink (Eumsung, Korea). All experimental protocols were approved by the IACUC of Sangji University prior to the initiation of the study (#2017-02). BPH was induced in the mice by intramuscular injections of testosterone propionate for 30 days, as reported previously [11]. Briefly, mice were divided into six groups: control; BPH-induced mice (BPH); BPH-induced mice treated with finasteride (5 mg/kg/day, p.o.); BPH-induced mice treated with HBX-5 at 200 mg/kg/day, p.o. (HBX-5 200 mg/kg); and BPH-induced mice treated with HBX-6 at 100 or 200 mg/kg/day, p.o. (HBX-6 100 mg/kg and HBX-6 200 mg/kg, respectively). Twenty-four hours after the final injection, the body weights of all animals were measured, and the animals were sacrificed. The entire prostate was resected and weighed, and the PW/BW ratio was computed as described below.
PW/BW ratio = (prostate weight of each mouse in the experimental group / body weight of each mouse in the experimental group) × 1000
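A direct transcription of this definition into code, with hypothetical weights, might look as follows:

```python
def pw_bw_index(prostate_weight_g, body_weight_g):
    """PW/BW ratio as defined above: (prostate weight / body weight) x 1000."""
    return prostate_weight_g / body_weight_g * 1000.0

# Hypothetical example: a 0.25 g prostate in a 38 g mouse
print(round(pw_bw_index(0.25, 38.0), 2))  # -> 6.58
```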
Histological Analysis
The prostate tissues of each group were fixed in 4% formalin and embedded in paraffin, and the tissues were sectioned and stained with hematoxylin and eosin (H&E) for histological examination. Images were acquired using a Leica microscope (Leica DFC 295, Wetzlar, Germany).
Western Blot Analysis
The prostate tissues were lysed, and proteins were extracted using lysis buffer. After extraction, proteins from each experimental group were separated on 8-12% polyacrylamide gels and transferred onto PVDF membranes. After blocking, the membranes were incubated with the primary antibodies. Following primary antibody incubation, the membranes were washed thrice with TBST and then incubated for 2 h at 25 °C with a horseradish peroxidase-conjugated secondary antibody (1:2500). After incubation, the membranes were washed thrice with TBST, and the immunoreactive bands were detected using ECL solution (Ab signal, Seoul, Republic of Korea) and captured on X-ray film (Agfa, Belgium).
Statistical Analyses
Experimental data are presented as the mean ± standard deviation (SD) of three independent experiments. To identify statistically significant differences, one-way ANOVA followed by Dunnett's post hoc test was used. Values of p < 0.05 were considered statistically significant; analyses were performed using Prism 8 (GraphPad Software, San Diego, CA, USA).
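A minimal sketch of this pipeline with SciPy is given below (scipy.stats.dunnett requires SciPy ≥ 1.11). The group data are simulated rather than the study's measurements, and comparing each group against the BPH group mirrors the significance markers used in the figures.

```python
import numpy as np
from scipy import stats  # Dunnett's test requires SciPy >= 1.11

rng = np.random.default_rng(0)
# Simulated prostate weights (g) for three of the six groups
control = rng.normal(0.15, 0.02, size=10)
bph = rng.normal(0.30, 0.03, size=10)
hbx6_200 = rng.normal(0.22, 0.03, size=10)

# One-way ANOVA across the groups
f_stat, p_anova = stats.f_oneway(control, bph, hbx6_200)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

# Dunnett's post hoc test with BPH as the reference group,
# matching the "versus BPH" comparisons reported in the figures
res = stats.dunnett(control, hbx6_200, control=bph)
print("Dunnett p-values (control, HBX-6 200 mg/kg) vs. BPH:", res.pvalue)
```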
Effect of Treatment with HBX-5 Components on Viability of RWPE-1 and WPMY-1 Cells
To evaluate the effect of HBX-5 components on the viability of prostate cells, we performed the MTT assay in the normal prostate epithelial cell line RWPE-1 and the stromal cell line WPMY-1, both of which were used to establish the in vitro BPH model. As shown in Figures 1 and 2
Effect of Treatment with HBX-5 Components on AR Expression in RWPE-1 and WPMY-1 Cells
Based on the cell viability analysis by MTT assay, we investigated the effects of the respective components of HBX-5 on AR expression in DHT-stimulated RWPE-1 and WPMY-1 cells. As shown in Figure 3, the expression of AR was significantly upregulated in response to DHT versus the untreated control group. In RWPE-1 cells, treatment with Aconitum carmichaelii Debeaux, Cornus officinalis Sieb. et Zucc., Psoralea corylifolia L., and Trigonella foenum-graecum L. significantly reduced AR protein expression levels. Treatment with Cornus officinalis Sieb. et Zucc., Psoralea corylifolia L., Trigonella foenum-graecum L., and Foeniculum vulgare Miller downregulated the expression of AR in WPMY-1 cells. Based on these results, we selected Cornus officinalis Sieb. et Zucc. and Psoralea corylifolia L. for the reconstitution of the new experimental formula, HBX-6.
Chemical Profiling Analysis of the Newly Combined Herbal Formula, HBX-6
Representative spectrometry data of the mixed standards and the herbal extracts are shown in Figure 4. The persistently high contents of psoralen and isopsoralen were obtained by acid hydrolysis of psoralenoside and isopsoralenoside, respectively. The six standard compounds were used to establish calibration curves constructed by plotting the mean peak area versus the standard concentration. As shown in Table 1, all standard curves show suitable linear regression (r ≥ 0.9980) over the tested range, and the limits of detection (LOD) and limits of quantification (LOQ) were then determined for each compound. The investigated compounds were identified by comparing their retention times and UV spectra with those of standards injected under the same conditions. The contents of the four investigated compounds in Cornus officinalis Sieb. et Zucc., Psoralea corylifolia L. and HBX-6 are summarized in Table 2.
Figure 3. Effect of HBX-5 components on AR expression. RWPE-1 and WPMY-1 cells were treated with 10 nM DHT for 72 h and 24 h, respectively, with or without each herb of HBX-5: 1, Aconitum carmichaelii 250 µg/mL; 2, Cornus officinalis Sieb. et Zucc. 250 µg/mL; 3, Cistanche deserticola Y. C. Ma 250 µg/mL; 4, Psoralea corylifolia L. 250 µg/mL; 5, Dendrobium loddigesii Rolfe. 250 µg/mL; 6, Morinda officinalis How 250 µg/mL; 7, Cuscuta chinensis Lam. 250 µg/mL; 8, Trigonella foenum-graecum L. 250 µg/mL; 9, Foeniculum vulgare Miller 2 µg/mL. β-actin was used as a loading control. Protein band densities were obtained by densitometric analysis and are presented as mean ± SD (### p < 0.001 versus untreated control group; * p < 0.05, ** p < 0.01, *** p < 0.001 versus DHT-stimulated group).
Effect of HBX-6 Treatment on Prostate Weight in BPH-Induced Mouse Model
As shown in Figure 5, we investigated whether the newly combined and simplified herbal formula, HBX-6, inhibited prostate enlargement in mice with BPH. Initially, we evaluated the macroscopic parameters of BPH and observed that the BPH group showed significant prostatic enlargement and congestion compared with all other groups (Figure 5A). After sacrificing the mice, we resected the prostate from the surrounding tissues, measured its weight, and calculated the relative prostate weight (PW) ratio and the PW/BW index. The BPH group showed a significant increase in prostate weight, relative prostate weight ratio, and PW/BW index. In comparison with the BPH group, treatment with finasteride, HBX-5 (200 mg/kg), HBX-6 (100 mg/kg), and HBX-6 (200 mg/kg) significantly suppressed prostate overgrowth by 15%, 17%, 21%, and 20%, respectively (Figure 5B-D).
Figure 5. Effect of HBX-6 treatment on prostate weight in the BPH-induced mouse model. Values are presented as mean ± SD (n = 10); ### p < 0.001 versus control group; * p < 0.05, ** p < 0.01, *** p < 0.001 versus the BPH group; ANOVA and Dunnett's post hoc test were used to assess significance between experimental groups.
Effect of HBX-6 on Histological Changes in Prostate of BPH-Induced Mice
In order to investigate whether the inhibition of prostatic enlargement correlated with cellular hypertrophy, we performed histological analysis and quantified the thickness of the prostatic epithelial tissue (TETP). As seen in Figure 6, mice with BPH showed a typical pattern of hyperplasia and hypertrophy, with proliferating epithelial cells bulging into the luminal area and the appearance of multilayered epithelial regions. However, mice treated with finasteride, HBX-5 (200 mg/kg), HBX-6 (100 mg/kg), and HBX-6 (200 mg/kg) showed amelioration of the testosterone-altered prostate structures, although certain minor structural features of induced BPH remained. The TETP values in mice with BPH showed a 2.27-fold increase over the control group. Notably, the TETP values of finasteride-, HBX-5 (200 mg/kg)-, HBX-6 (100 mg/kg)-, and HBX-6 (200 mg/kg)-treated mice were significantly lower than those of BPH-induced mice, by 28.58%, 33.46%, 34.15%, and 43.42%, respectively.
Figure 6. Effect of HBX-6 on histological changes in the prostate of BPH-induced mice. TETP is represented as the mean ± SD (n = 5 for all experimental groups) and was quantified at three sections per slide; ### p < 0.001 versus control group; *** p < 0.001 versus BPH group. ANOVA and Dunnett's post hoc test were used to assess significance between experimental groups.
Effect of HBX-6 on Prostate Cell Proliferation in BPH-Induced Mice
PCNA is expressed in proliferating cells throughout the S-phase of the cell cycle and shows increased expression in human BPH tissues [12]. Previous studies have also reported that PSA is a representative biomarker for prostate cancer but can also be used for the diagnosis of BPH, providing crucial information about cell proliferation and overgrowth [13]. In corroboration with these reports, we found that mice with BPH showed enhanced expression of PCNA and PSA. However, the finasteride-, HBX-5 (200 mg/kg)-, HBX-6 (100 mg/kg)-, and HBX-6 (200 mg/kg)-treated groups showed decreased expression of these prostate cell proliferation-related proteins. HBX-5 and HBX-6, in particular, restored the protein expression of PSA to levels similar to the control group (Figure 7).
Figure 7. Effect of HBX-6 on proliferation-related protein expression in prostate tissues of mice with BPH. The levels of PCNA and PSA were analyzed by Western blot. Values are presented as mean ± SD (n = 10); ### p < 0.001 versus control group; *** p < 0.001 versus BPH group. ANOVA and Dunnett's post hoc test were used to assess significance between experimental groups.
Effect of HBX-6 on Cell Cycle-Related Proteins in BPH-Induced Mice
E2F1 and cyclin D1 are important players in the Cdk-Rb signaling pathway and are mainly involved in the transition of the cell cycle from G1 to S [14]. As shown in Figure 8, the BPH-induced group showed higher expression of E2F1, Rb, and cyclin D1 than the control mice, and the increased expression of these cell cycle-associated proteins was rescued by treatment with finasteride, HBX-5 (200 mg/kg), HBX-6 (100 mg/kg), or HBX-6 (200 mg/kg). We observed that the inhibitory effect of HBX-6 was markedly better than that of HBX-5. These results suggest that HBX-6 plays a critical role in the alleviation of BPH through the inhibition of cell cycle-associated proteins and the related cellular signaling (Figure 8).
Figure 8. Effect of HBX-6 on cell cycle-related protein expression in prostate tissues of mice with BPH. The levels of E2F1, p-Rb, and cyclin D1 were analyzed by Western blot. Values are presented as mean ± SD (n = 10); ### p < 0.001 versus control group; *** p < 0.001 versus BPH group. ANOVA and Dunnett's post hoc test were used to assess significance between experimental groups.
Discussion
As they age, men commonly develop BPH: about 50% of men in their 50s and 80% of men in their 80s show symptoms of BPH. Usually, BPH is diagnosed based on the clinical characteristics of benign prostatic enlargement and/or LUTS and erectile dysfunction [7]. Prostate enlargement usually accompanies pathological symptoms and complications; therefore, prostate volume has been a focus of studies seeking to understand the etiology, clinical pathology, and treatment of BPH [15]. In relation to this, the search for effective medical treatments for BPH has been carried out over the last 25 years. Although invasive surgical intervention is generally considered the gold standard of treatment for men with prostatic problems, a shift from transurethral resection of the prostate to medication therapy management has been observed over time [16].
Therapy with the 5α-reductase inhibitor finasteride is a commonly used option for both LUTS and BPH. A number of studies have shown that 5α-reductase inhibitors have better safety and efficacy than α1-adrenergic receptor blockers [17]. Despite their long clinical success, 5α-reductase inhibitors present problems in clinical use: treatment may lead to increased cardiovascular risk, breast cancer, and progression to high-grade prostate cancer. In particular, recent data based on post-marketing surveillance of 5α-reductase inhibitors showed adverse effects persisting beyond drug discontinuation [18].
Herbal agents have been the subject of many studies in an effort to overcome the unintended consequences of chemically synthesized drugs in various diseases. As BPH is a chronic disease, the use of herbal medicines with efficacy against chronic conditions is deemed appropriate. Based on empirical experience, we developed a new formula composed of nine medicinal herbs and subsequently assessed its safety and efficacy in BPH-induced mice and cell lines. However, due to the drawbacks associated with the manufacturing process, we simplified and standardized the contents of HBX-5.
In this study, DHT-stimulated normal prostate epithelial RWPE-1 cells and normal prostate stromal WPMY-1 cells were used to assess the inhibitory effect of the nine individual herbs on AR expression. Physiologically, androgens and the AR are necessary to develop and preserve the male phenotype and male genitals. During the progression of BPH, the AR is required for the proliferation of epithelial and stromal cells, thereby leading to prostate enlargement with obstructive uropathy [19]. Based on the evaluation of AR expression in prostate cells and empirical reports from oriental medicine, we selected Cornus officinalis and Psoralea corylifolia and then reconstituted and standardized a new experimental formula, HBX-6.
The fruit of Cornus officinalis Sieb. et Zucc. is one of the primary herbs used in traditional Chinese medicine (TCM) [20]. A previous study reported that Cornus officinalis has a therapeutic effect on sexual dysfunction, and cornuside, a compound isolated from Cornus officinalis, has vascular relaxation activity, suggesting an enhancement of sexual function [21]. Psoralea corylifolia L. has traditionally been used in Korean medicine for the treatment of impotence, oliguria, and premature ejaculation [22]. A recent study detailed the therapeutic effect of Psoralea corylifolia L. on spermatogenesis in rat testes [23].
Although an in vivo BPH model is necessary to evaluate the efficacy of a medication, spontaneous BPH with increasing age is unusual in animals other than humans. A testosterone-induced BPH mouse model is therefore a good substitute that mimics the morphological changes of human prostate hyperplasia [24,25].
In this study, BPH-induced mice notably manifested pathological alterations in the prostate and surrounding tissues. Prostatic enlargement and congestion in the BPH-induced mice were clearly distinct from those of the mice in the control and treatment groups (Figure 5). Abnormal cellular proliferation in the prostate tissue is considered one of the most important events in the progression of BPH [26]. Using histological analysis, we observed an increased thickness of the epithelium due to pathologically activated epithelial cell proliferation in the prostate tissue of BPH-induced mice. In hematoxylin and eosin (H&E)-stained prostate tissue sections, the BPH-induced mice displayed a reduced glandular luminal area and typical hypertrophic patterns compared with the control group, whereas treatment with finasteride, HBX-5 (200 mg/kg), HBX-6 (100 mg/kg), and HBX-6 (200 mg/kg) reduced the severity of the histological features of BPH (Figure 6).
Patients with higher PSA levels suffer from progressive BPH, and PSA levels accurately reflect prostate size and can be used as a reliable parameter with foreseeable efficacy as a marker in the clinical treatment of BPH [27]. Along with PSA, PCNA, which is expressed in proliferating cell nuclei, is implicated in the progression of BPH; PCNA is an auxiliary factor that is intertwined with the cyclin D1-CDK complex [12]. In the present study, we noted that HBX-6 (100 mg/kg) and HBX-6 (200 mg/kg) reduced the expression of PCNA and PSA in testosterone-induced BPH mice (Figure 7).
There has been increasing interest in exploring the underlying causes and mechanisms intricately related to the development and progression of BPH. The E2F family of transcription factors is an important regulator of cell proliferation and apoptosis. The E2F family consists of eight members, classified as activators (E2F1, E2F2, E2F3) or repressors (E2F4, E2F5, E2F6, E2F7, E2F8) based on their ability to either activate or repress transcription [28,29]. The E2F family mainly regulates the G1/S transition of the cell cycle by regulating the transcription of cell cycle-related genes, such as cyclin A, cyclin E, and PCNA [30]. During the G1/S transition, the cyclin D-cdk4/6 complex phosphorylates Rb and allows its disassembly from E2F, which then activates the transcription of S-phase genes [14]. Among the E2F family members, overexpression of the E2F activators induces entry into S-phase, activates DNA synthesis, and facilitates the transcriptional activation of E2F target genes. E2F activators can also negate growth arrest signals initiated by Cdk inhibitors [30,31]. Recently, Liang et al. reported that overexpression of E2F1 promotes cell invasion and migration in prostate cancer cells [32].
Meanwhile, in accordance with the OECD Guidelines for the Testing of Chemicals, Section 4 (Health Effects), we performed a repeated-dose 90-day oral toxicity study in rats. Animals were given HBX-6 daily at doses of 1000, 2000, and 4000 mg/kg for 90 days, and all survived to the scheduled termination. The dose of 4000 mg/kg was determined to be the no-observed-effect level (NOEL) in male and female rats. Evidence from this study supports the potential of HBX-6 as a natural remedy for BPH.
An epidemiological study suggested that the development of BPH predisposes an individual to developing prostate cancer in his lifetime [33]. Similarly, previous studies have reported that the risk of prostate cancer is especially high in Asian males with BPH [34]. Although there are many different hypotheses about the relationship between prostate cancer and BPH, these two prostate diseases can be linked by common cellular and molecular mechanisms. Among them, the imbalance between cellular proliferation and apoptosis, and the associated molecular pathways, have received attention in the management of prostate diseases. In this study, we observed that BPH-induced mice show upregulation of E2F1, Rb, and cyclin D1. These results are consistent with previously reported evidence suggesting a connection between cyclin D1 and E2F1 expression in BPH. Interestingly, we showed that HBX-6 treatment significantly reduced the expression levels of E2F1, Rb, and cyclin D1 in comparison with BPH-induced mice (Figure 8).
Conclusions
The most significant finding of this study is that the simplified herbal agent HBX-6 has strong therapeutic potential for the suppression of BPH via inhibition of the E2F1 pathway in a testosterone-induced BPH mouse model. In addition, our results showed that the inhibitory effect of HBX-6 on prostatic cell proliferation is superior to that of the positive control agents, finasteride and HBX-5 (200 mg/kg). Taken together, these findings suggest the possibility of using HBX-6 as a therapeutic agent for BPH treatment.
Conflicts of Interest:
The authors declare no conflict of interest. | 7,809.2 | 2019-05-01T00:00:00.000 | [
"Biology",
"Medicine"
] |
Structural integrity, meltability, and variability of thermal properties in the mixed-linker zeolitic imidazolate framework ZIF-62
Metal–organic framework (MOF) glasses have emerged as a new class of melt-quenched glasses; however, so far, all MOF glass production has remained at lab scale, while future applications will require large-scale, commercial production of the parent crystalline MOFs. Yet, control of synthetic parameters, such as uniform temperature and mixing, can be challenging, particularly when scaling up production of a mixed-linker MOF or a zeolitic imidazolate framework (ZIF). Here, we examine the effect of heterogeneous linker distribution on the thermal properties and melting behavior of ZIF-62. X-ray diffraction (XRD), Raman, and 1H nuclear magnetic resonance spectroscopies revealed little discernible structural difference between samples of ZIF-62 synthesized in our lab and by a commercial supplier. Differential scanning calorimetry and variable temperature/isothermal XRD, however, revealed the samples to have significantly different thermal behavior. Formation of ZIF-zni was identified, which contributed to a dramatic rise in the melting point by around 100 K and also led to the alteration of the macroscopic properties of the final glass. Parameters that might lead to the formation of unexpected phases, such as an uneven distribution of linkers, were identified, and characterization methods for the detection of unwanted phases are provided. Finally, the need for adequate consideration of linker distribution is stressed when characterizing mixed-linker ZIFs.
INTRODUCTION
Metal-organic frameworks (MOFs) are porous, crystalline, and "tunable" materials composed of organic linkers coordinated to inorganic metal centers; 1,2 component selection results in an almost infinite number of possible framework structures having a wide range of physical and chemical properties. These properties enable implementation in a variety of different applications such as gas storage, gas separation, and catalysis. 3,4 MOFs are typically synthesized in the form of microcrystalline powders, although this is problematic as specialized and/or high-stress applications require MOFs formed in robust, bulk geometries. 5,6 To overcome this challenge, melt-quenching of these hybrid framework materials has been proposed, leading to bulk glasses. 7 Accordingly, MOF glasses have emerged as a new class of melt-quenched glasses with unique and potentially advantageous properties stemming from their tunability and structural chemistry, which can be exploited in crystalline and glassy states alike. 8 This ability to form bulk, shapeable materials with enhanced processability and durability, without loss of chemical selectivity, greatly broadens the applicability of MOFs in many fields.
Nevertheless, the most challenging issue remains unsolved, since bringing MOF glasses into real-world applications requires scaling up the synthesis procedures. 17 In this, several parameters are crucial to the structural integrity and phase purity of the final product, such as reaction time and temperature. 18,19 The presence of structural defects or tiny amounts of impurities may change the thermal behavior; this can induce a considerable increase in melting temperature or reduce the melting window of the material. 13 This is a concern not only for commercial materials but also when analyzing the behavior of newly synthesized materials, where phase purity plays a significant role in dictating thermal behavior. Currently, the MOF glass field tries to identify new meltable MOFs/ZIFs from among a huge number of available crystalline MOFs; thus, small impurities may play a large role by either broadening the melting window or, worse, by falsely identifying melting compositions as non-glass-forming. 20 In mixed-linker ZIFs, the effects of linker ratio on the thermal behavior of the corresponding ZIF have been investigated before, and it was shown that increasing the structural disorder through the inclusion of multiple ligands, causing static atomic displacement or distortion in orientation, resulted in a lower melting temperature. 13,21 Statistical models indicate that in mixed-linker ZIFs, different Zn2+–linker coordination spheres have corresponding probabilities. These different zinc environments are both kinetically and thermodynamically driven: the respective steric hindrances of individual linkers and the preferred coordination of each linker to metal sites result in some metal center-linker interactions being more common. For instance, ZIF-62 crystals form with a propensity for Zn to be coordinated by Im over bIm linkers, i.e., with higher-than-average Im coordination. 21 This suggests that homogeneous linker distribution (e.g., through controlling synthesis parameters such as time and temperature) is a crucial factor in tailoring the physical properties of mixed-linker ZIFs/MOFs. In this study, we investigate the structural and thermal properties of two variants of ZIF-62, one produced in the lab and the other commercially, with different degrees of linker homogeneity. The structures of these samples are studied using x-ray diffraction (XRD), proton nuclear magnetic resonance (1H NMR), and Raman spectroscopy. Differential scanning calorimetry (DSC) and variable temperature/isothermal XRD (VT-XRD) measurements are used to identify the differences in thermal properties, the origin of unexpected phases, and the changes in the melting behavior of the materials. Important parameters affecting the synthesis of a mixed-linker material are addressed, and guidelines to control these issues are discussed.
RESULTS AND DISCUSSION
We compare the structures of the two differently manufactured ZIF-62 samples, obtained via lab synthesis and from a commercial supplier, respectively (denoted as ZIF-62-synthesized and ZIF-62-commercial). Structural characterization was performed using XRD, 1H NMR, and Raman spectroscopy, as shown in Figs. 2 and S1. Figure 2(a) compares the XRD patterns of ZIF-62-commercial, ZIF-62-synthesized, and ZIF-62-calculated using crystallographic data from the literature. 13 The XRD patterns of both ZIF-62-synthesized and ZIF-62-commercial display good agreement with the calculated one, with only slight changes in the intensity of some reflections. Figure S2 illustrates the differences in XRD patterns between ZIF-62-synthesized, ZIF-62-commercial, and ZIF-62-calculated. Similarly, the Raman spectra in Fig. S1 are consistent with the previously reported literature and reveal the same features for both samples, indicating identical chemical bonding environments. 15 In mixed-linker ZIFs or MOFs such as ZIF-62, acid-digested 1H NMR spectroscopy provides useful information about the integrity of the linkers as well as the linker stoichiometry present in the framework. 13,22 As presented in Fig. 2(b), the 1H NMR spectra of the ZIF-62-commercial and ZIF-62-synthesized samples are well-matched, confirming that the linkers are intact. Further analysis of the linker composition showed that the linker ratio, defined as bIm/(bIm + Im), deviates slightly from the canonical ZIF-62 linker ratio of 0.125 for both the ZIF-62-synthesized (0.156) and ZIF-62-commercial (0.135) samples. Accordingly, the linker compositions can be written as Zn(Im)1.69(bIm)0.31 and Zn(Im)1.73(bIm)0.27 for ZIF-62-synthesized and ZIF-62-commercial, respectively, and compared to the canonical composition, Zn(Im)1.75(bIm)0.25. This suggests that both ZIF-62-synthesized and ZIF-62-commercial possess similar chemical structures with slightly more bIm in the framework, as has been reported extensively in the literature. 23 Frentzel-Beyme et al. 13 demonstrated that the melting temperature of ZIF-62 can be tuned (a ∼70 °C difference) by varying the Im:bIm ratio in Zn(Im)2−x(bIm)x, where x = 0.02-0.35. Motivated by this, we investigated the thermal properties of ZIF-62-synthesized and ZIF-62-commercial, starting with the ubiquitous technique of differential scanning calorimetry paired with thermo-gravimetric analysis (DSC-TGA). Figure 3 shows the calorimetric behavior of the ZIF-62-commercial and ZIF-62-synthesized samples. According to Fig. 3(a), both samples show no mass loss upon heating to 600 °C, implying that there is no thermal decomposition prior to this temperature. On the other hand, the DSC scans in Fig. 3(b) illustrate significant differences in temperature-driven enthalpic behavior. Although the scan for ZIF-62-synthesized shows the expected thermal response upon heating (an endotherm at ∼400 °C, characteristic of ZIF-62 melting), that of ZIF-62-commercial contains a variety of complex features related to various phase transitions, different from any thermal behavior that has been reported for ZIF-62 so far. 15,17,18,24 As noted above, this behavior could be attributed to differences in the linker ratio, originating from higher amounts of bIm in ZIF-62-synthesized compared to ZIF-62-commercial. However, ZIF-62 with higher-than-ideal bIm content has been investigated before and showed almost identical enthalpic responses in DSC traces. 13,25
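The compositions quoted above follow directly from the 1H NMR linker ratio. A minimal sketch of the conversion is shown below; the integral values and the proton counts assumed per integrated signal (four aromatic protons for bIm, three for Im) are illustrative assumptions rather than details taken from this work.

```python
def linker_ratio(i_bim, i_im, h_bim=4, h_im=3):
    """bIm/(bIm + Im) molar ratio from acid-digested 1H NMR integrals.

    i_bim, i_im: integrated signal areas for each linker (hypothetical below)
    h_bim, h_im: protons per integrated signal -- assumed values
    """
    n_bim = i_bim / h_bim  # benzimidazolate amount, up to a common constant
    n_im = i_im / h_im     # imidazolate amount, up to the same constant
    return n_bim / (n_bim + n_im)

ratio = linker_ratio(i_bim=1.00, i_im=4.05)
x = 2 * ratio  # composition Zn(Im)_{2-x}(bIm)_x
print(f"bIm/(bIm+Im) = {ratio:.3f} -> Zn(Im){2 - x:.2f}(bIm){x:.2f}")
# -> bIm/(bIm+Im) = 0.156 -> Zn(Im)1.69(bIm)0.31
```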
Considering the linker ratio deviation between ZIF-62-synthesized and ZIF-62-commercial, the expected melting temperature difference would be less than 10 °C. 13 To uncover the unusual phase transitions found in our DSC measurements, we performed VT-XRD on both ZIF-62-synthesized and ZIF-62-commercial samples. The results are presented in Figs. 3(c) and 3(d), respectively. We focused on 2-theta values in the range of 8°-25°, since most of the crystalline features occur in this range. VT-XRD on ZIF-62-synthesized [Fig. 3(c)] shows the expected loss of crystallinity upon melting starting at ∼400 °C, as evidenced by the disappearance of sharp Bragg diffraction peaks and the appearance of broad amorphous scattering, in good agreement with the DSC data.
In contrast, ZIF-62-commercial [Fig. 3(d)] first goes through a partial amorphization step around 300 °C (seen as a decrease in diffraction peaks, particularly at higher 2-theta values, in the XRD and as an endotherm in the DSC), which is followed by the appearance and growth of new crystalline peaks starting at 400 °C (detectable in the DSC trace as an exothermic peak at almost the same temperature) and finally by amorphization of the newly emerged crystalline phase (in agreement with the broad endotherm starting at 500 °C in the DSC). Rather than full amorphization above 300 °C, the diminishing diffraction peaks (and retention of the ZIF-62 peaks) may indicate that another phase, which overlaps closely with the ZIF-62 diffraction peaks, is amorphizing in this temperature range. We note that small differences between the phase transition temperatures measured by DSC and VT-XRD may arise from the different atmospheres used: DSC was performed under nitrogen and VT-XRD under argon. To confirm the unexpected DSC scan of ZIF-62-commercial, the experiment was repeated, with consistent results (Fig. S3).
Diffraction patterns and corresponding sample micrographs representative of each selected temperature in the DSC and VT-XRD of ZIF-62-commercial are illustrated in Fig. 4(a). As the temperature rises, the crystalline ZIF-62-commercial partially amorphizes and, subsequently, a new crystalline phase, matching the XRD pattern of ZIF-zni, emerges. ZIF-zni, a dense zinc imidazolate framework, is a thermodynamically stable ZIF polymorph of ZIF-4, Zn(Im)2, with zni topology26 (ZIF-4 has cag topology and, like ZIF-62, crystallizes in the Pbca space group16,27). To confirm the formation of ZIF-zni, several isothermal XRD measurements were performed (360 °C for 14 h, 430 °C for 7 h, and 500 °C for 5 h), and the morphologies of the resulting samples were studied using scanning electron microscopy (SEM). We note that different temperatures and times were chosen to allow for the separation and identification of morphological differences between phases as follows: both ZIF-62 and zni phases (360 °C for 14 h), only the zni phase (430 °C for 7 h), and fully amorphous (500 °C for 5 h). After isothermal XRD at 360 °C for 14 h, both ZIF-62 and ZIF-zni crystal phases exist in the final XRD pattern, which can also be seen in the corresponding SEM images, as small rounded ZIF-62 crystals appear along with the rod-shaped zni phase.28 When we examined the sample after the isothermal experiment at 430 °C for 7 h, we found that only the crystalline ZIF-zni phase remains, confirmed by the presence of only rod-shaped crystals (along with the substantial amorphous phase) in the SEM images. Finally, ZIF-zni becomes fully amorphous upon the isothermal run at 500 °C for 5 h; no crystals can be detected in its SEM images.
Previous reports showed that ZIF-zni can be formed via two routes: direct synthesis from its chemical precursors and recrystallization from amorphized isomeric Zn(Im)2 when enough thermal energy is provided to the system.29,30 Comparing the enthalpic behavior of the ZIF-62-commercial sample with other ZIF structures, almost identical phase transformations to those observed for ZIF-1, ZIF-3, and ZIF-4 are detected in the ZIF-62-commercial sample.29 These isomer crystals share the identical Zn(Im)2 formula but have very different crystal structures.9 We note that N,N-dimethylformamide (DMF) was used as the solvent for the synthesis of the isomer crystals, except for ZIF-3, for which a mixture of DMF and N-methylpyrrolidone (NMP) was used. Calculated XRD patterns of these isomer crystals are shown in Fig. S5. It can be seen that the ZIF-4 XRD pattern is the closest to that of ZIF-62, which is explained by the fact that both crystallize in the Pbca space group with cag topology.16 According to our DSC, VT-XRD, and calculated XRD patterns of the Zn(Im)2 isomer crystals, the in situ formation of ZIF-zni during heating can be attributed to the presence of Im-rich, ZIF-4-like regions in the ZIF-62-commercial sample. This is likely due to inhomogeneous coordination of the Im and bIm linkers to Zn2+ during the up-scaled commercial synthesis. To estimate the amount of the ZIF-4-like phase, the enthalpy of crystallization of the zni phase was calculated from the crystallization peak in the DSC scan (Fig. S6), yielding a value of 11.67 J g⁻¹. The enthalpy of zni formation from pristine ZIF-4 has been reported at 50 J g⁻¹.31 Based on the corresponding enthalpy of crystallization for zni in ZIF-62-commercial, an equivalent of ∼23.3% of the ZIF-4-like phase is nominally present in the ZIF-62-commercial sample.
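For transparency, the phase-fraction estimate above can be restated as the simple enthalpy ratio it implies (a worked step using only the two values quoted in the text, and assuming the measured crystallization enthalpy scales linearly with the amount of recrystallizing ZIF-4-like material):

$$ w_{\mathrm{ZIF\text{-}4\text{-}like}} \approx \frac{\Delta H_{\mathrm{cryst,\ measured}}}{\Delta H_{\mathrm{ZIF\text{-}4 \to zni}}} = \frac{11.67\ \mathrm{J\,g^{-1}}}{50\ \mathrm{J\,g^{-1}}} \approx 0.233, $$

i.e., a nominal ZIF-4-like content of ∼23.3% by mass.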
It has been shown previously that extending the synthesis time and increasing the synthesis temperature resulted in zni formation during the synthesis of ZIF-4.28 To examine this effect in the commercial sample, SEM analysis was performed on the as-received ZIF-62-commercial and ZIF-62-synthesized samples, as presented in Fig. 5. As can be seen from Fig. 5(a), there is no zni phase (rod-shaped crystals) in the ZIF-62-synthesized sample, whereas in the ZIF-62-commercial case, as seen in Fig. 5(b), a small number of rod-shaped zni crystals can be detected, which proves the formation of the zni phase during ZIF-62-commercial synthesis. The corresponding melt-quenched glasses of the ZIF-62-commercial and ZIF-62-synthesized samples are illustrated in Fig. 5(c).
Quantification of the zni phase in ZIF-62-commercial using Rietveld refinement is presented in Fig. 6. Results revealed 1.8% of the zni phase in the ZIF-62-commercial sample (see the section titled "Materials and methods" for further details).
Accordingly, our results demonstrate that ZIF-zni and ZIF-4-like pockets were formed during ZIF-62-commercial synthesis. Moreover, as ZIF-62-commercial was heated, additional ZIF-zni formed via recrystallization of the amorphized ZIF-4-like pockets. In many cases, the presence of a small impurity, unwanted chemicals/phases, or phase separation can significantly influence the macroscopic properties of glasses. As illustrated in Fig. 5(c), the melt-quenched glasses formed from ZIF-62-synthesized and ZIF-62-commercial are different: while the glass of the ZIF-62-synthesized sample is transparent (consistent with the literature), the glass of ZIF-62-commercial is completely opaque because of the presence of zni crystals. These macroscopic differences would clearly hamper the use of such glasses in optical applications.
We hypothesize, based on the evidence presented, that the ZIF-62-commercial sample undergoes incongruent melting. This may be a result of an inhomogeneous linker distribution during synthesis, which manifests in dispersed ZIF-4-like regions in ZIF-62. In collapsible framework structures such as ZIF-62, allowing the synthesis reaction to reach its maximum entropy (complete mixing of the Im and bIm linkers) is of great importance. However, providing too much energy helps the reaction find the enthalpic minimum and form the thermodynamically stable state (ZIF-zni). To show the complex behavior of ZIF-62-commercial, a pseudo-phase diagram is illustrated in Fig. 7(a). Although many ZIFs, including those in this study, are metastable, precluding a proper equilibrium phase diagram, we believe that, because of their deep energy wells, such a pseudo-phase diagram is still a useful tool for understanding the melting behavior of these complex systems.
According to the pseudo-phase diagram in Fig. 7(a), we can hypothesize which linker ratios result in incongruent melting. Even though the enthalpic behavior of the system can be controlled, inhomogeneities might occur because the reaction is also affected by kinetics. At any overall ratio of Im and bIm, Im-rich regions and bIm-rich regions can form, and incongruent melting may occur. Without reaching complete mixing, it is possible for the material to be composed of two or more compositional points on the pseudo-phase diagram (at constant T). The melting behavior of ZIF-62-synthesized shows that it is clearly possible to avoid the pink "liquid + ZIF-zni" region entirely, as was investigated by Frentzel-Beyme et al.13 (their experimental data points are included in the phase diagram). However, this is not achieved merely by satisfying the proper overall linker ratio (Im:bIm): from the NMR of ZIF-62-commercial (x = 0.27), we would expect the material not to go through incongruent melting. Instead, linker heterogeneity creates at least two different local Zn2+-linker environments and constrains ZIF-62-commercial to follow two different paths upon heating, as illustrated by the two white composition points in the phase diagram. Figure 7(b) illustrates the ZIF-62-commercial linker distribution during synthesis, showing the early clustering of Im linkers. In Fig. 7(c), continued synthesis produces both canonical and Im-rich ZIF-62 phases, along with small amounts of ZIF-zni crystals. When such ZIF-62 polycrystals are heated, more zni crystals form from the Im-rich pockets, and the final product is a melted, amorphous ZIF-62 phase containing zni crystals, as shown in Fig. 7(d).
Moreover, in order for MOFs/ZIFs to perform as expected in gas storage, gas separation, and catalysis applications, the parent material must be sufficiently phase-pure to guarantee optimal performance.32,33 Specifically, in a glass derived from ZIF-62, the presence of even a small amount of ZIF-zni can significantly influence the gas separation and catalytic performance, since adsorption and diffusion of certain molecules will no longer be possible in the dense zni regions, whereas they would be in the absence of such unfavorable phases.13,17 We note that both ZIF-62 and ZIF-zni crystal habits have been described previously in detail, and the morphological differences between them enabled us to detect the presence of zni in ZIF-62. Similarly, the amorphization of ZIF-62 upon heating allowed clear discernment of ZIF-zni peaks in the XRD patterns at higher temperatures, corroborating the formation of ZIF-zni during commercial synthesis. However, in other systems, there may be different crystal phases with the same crystal habits, hindering easy phase identification while contributing to unexpected behavior and performance in applications. Furthermore, when identifying new meltable MOFs/ZIFs, the inhomogeneity of the linker distribution and/or the presence of impurities or other crystalline phases can change the thermal properties substantially, impeding the accurate evaluation of thermal stability and melting windows. Given that thermal behavior is a deciding factor in application-driven MOF research and is arguably the most important factor in the discovery and characterization of novel amorphous MOFs, we believe that more effort should be spent on characterizing the linker distribution in mixed-linker ZIFs. Without proper consideration of linker heterogeneity, thermal characterization is only applicable to that individually synthesized MOF and should not be generalized to all MOFs of that composition and topology. Although linker distribution analysis adds an extra step to investigations, it also brings attention to the fact that we now have an additional method to tune the physical properties of MOFs.
We can tentatively summarize the characterization steps needed to confirm the homogeneity of a mixed-linker MOF/ZIF material after synthesis, with a view to its subsequent transformation into a glassy state. Typical structural characterization must be combined with in-depth thermal analysis. XRD, SEM, Raman/FTIR, and NMR spectroscopy can provide useful information about overall structural and linker integrity; at first glance, this information may confirm that the material has the intended structure. However, the most important step is the evaluation of the thermal behavior: it probes the dynamics of the system, revealing inhomogeneities and phases that behave differently. Only by combining DSC with VT-XRD (and SEM) did the different phase changes of the inhomogeneous regions become apparent. The information presented in this work provides a roadmap to identify synthesis differences, which may occur in mixed-linker MOFs/ZIFs.
CONCLUSION
In summary, we investigated the structural heterogeneity and thermal properties of meltable variants of ZIF-62. Our results showed that, in such mixed-linker MOFs/ZIFs, an uneven distribution of linkers might cause the formation of polymorphs, which can result in significant changes in thermal properties. This can cause a dramatic increase in melting temperature and/or change the macroscopic properties, which is of importance for accurate characterization and for the further processing of materials such as glasses. Thermal characterization methods such as DSC and VT-XRD are of great importance in testing the integrity and characteristics of a mixed-linker MOF/ZIF product. From a practical standpoint, the results presented here can provide a guideline for characterizing the success of scaling-up or large-scale production of ZIFs/MOFs. Yet, the striking differences in thermal behavior also stress the necessity of determining the linker distribution in mixed-linker ZIFs and highlight that linker heterogeneity is an additional route to tune MOF physical properties.
Materials
ZIF-62-synthesized was prepared using the same procedure reported previously15 and compared to a commercial ZIF-62 material as received from ACSYNAM. Both materials were heated at 170 °C under vacuum overnight prior to use.
X-ray diffraction
VT-XRD and isothermal XRD experiments were conducted using a Rigaku SmartLab diffractometer (Cu Kα x-ray source with a wavelength of 1.54059 Å) with a HyPix-3000 detector (horizontal configuration) in 1D scanning mode. The voltage and current of the x-ray tube were set to 40 kV and 50 mA, respectively. For both experiments, the general Bragg-Brentano geometry was used with a 10 mm length-limiting slit in the incident section and a 2.5° Soller slit with a Kβ filter and an anti-scattering slit in the receiving part. A powder sample (∼40 mg) was placed in a corundum holder and installed on an HTK1200N (Anton Paar) heating stage. The vacuum stage was connected to the heating stage, and all the connections were sealed. The sample compartment was flushed twice by filling the chamber with argon gas and subsequently pulling vacuum. A turbo-molecular pump (TMP) was used to evacuate the sample compartment. After the final evacuation step, a continuous argon flow of 50 ml min⁻¹ was maintained during the whole experiment. For VT-XRD, a temperature control loop was set using the "constant up down measurement" mode. The target temperature and ramp rate were set to 600 °C and 5 °C min⁻¹, respectively. Diffraction patterns were collected in the 2θ range of 8°-25° with a step size of 0.03° and at a speed of 50° min⁻¹. These settings resulted in a diffraction pattern every 6 °C. Isothermal XRD experiments were conducted using the "constant up down measurement temperature loop" mode. Target temperatures and holding times were set to 360 °C, 430 °C, and 500 °C and 14 h, 7 h, and 5 h, respectively. Diffraction patterns were collected in the 2θ range of 8°-25° with a step size of 0.03° and at a rate of 10° min⁻¹, providing a diffraction pattern every 6 min. The XRD data presented in Fig. 2(a) were collected using a Rigaku MiniFlex diffractometer in the 2θ range of 5°-40° with a step size of 0.01°. Rietveld refinement was performed using the GSAS-II software.34 Instrumental parameters were extracted using LaB6 as the reference.
Differential scanning calorimetry coupled with thermo-gravimetric analysis (DSC-TGA)

DSC-TGA analyses were performed using a Netzsch STA 449 F1 instrument. Approximately 15 mg of each sample was placed in a platinum crucible and gently pressed by hand to ensure good contact between the crucible and the powder sample. All measurements were performed under a 20 ml min⁻¹ nitrogen flow. First, the sample was heated to 120 °C at a ramp of 20 °C min⁻¹ and equilibrated for 4 h to remove any volatiles. Subsequently, it was heated to 600 °C at a ramp rate of 10 °C min⁻¹.
Scanning electron microscopy (SEM)
The morphology of the samples after the isothermal XRD runs, as well as of the as-synthesized and as-received commercial samples, was analyzed using a JSM-7001F electron microscope (JEOL Ltd., Japan). Approximately 10 mg of each sample was placed on carbon tape pasted on an aluminum cell. Samples were coated with a thin layer of carbon prior to measurement. The working distance and voltage were set to 15 mm and 15 kV, respectively.
Raman spectroscopy
Raman spectra of the powder samples were collected using a Renishaw inVia Raman microscope at 20× magnification with an excitation wavelength of 785 nm. Samples were placed on a glass slide and flattened. Spectra were collected in the wavenumber range of 500-1600 cm⁻¹ with 50% laser power, an acquisition time of 10 s, and one accumulation.
Nuclear magnetic resonance spectroscopy

1H NMR spectra were measured on a Bruker 300 MHz spectrometer. Approximately 6 mg of each sample was digested in a mixture of DCl (20%)/D2O (0.1 ml) and DMSO-d6 (0.6 ml). Data analysis was performed in the TopSpin software.
Glass samples
Approximately 100 mg of ZIF-62-commercial and ZIF-62-synthesized were pressed into pellets (1 cm diameter) at 0.7 tons for 1 min. The prepared pellets were transferred into a tube furnace (Carbolite), and the furnace was flushed with nitrogen gas for half an hour before heating to 450 °C at 5 °C min⁻¹ and holding for 10 min. After heating, the pellets were left to cool to room temperature at the natural cooling rate of the tube furnace. Both heating and cooling steps were performed under a constant nitrogen flow.
SUPPLEMENTARY MATERIAL
See the supplementary material for further structural and thermal characterizations. The authors declare that they have no competing financial interests or other interests that might be perceived to influence the results and discussion reported in this paper.
"Materials Science",
"Chemistry"
] |
Local dose rate effects in implantable cardioverter–defibrillators with flattening filter free and flattened photon radiation
Purpose: In the beam penumbra of stereotactic body radiotherapy volumes, dose rate effects in implantable cardioverter-defibrillators (ICDs) may be the predominant cause of failures in the absence of neutron-generating photon energies. We investigate such dose rate effects in ICDs and provide evidence for the safe use of lung tumor stereotactic radioablation with flattening filter free (FFF) and flattened 6 megavolt (MV) beams in ICD-bearing patients.
Methods: Sixty-two ICDs were subjected to scatter radiation at 1.0, 2.5, and 7.0 cm distance from a 5 × 5 cm² radiation field delivering 100 Gy. Radiation was applied with 6 MV FFF beams (constant dose rate of 1400 cGy/min) and flattened (FLAT) 6 MV beams (430 cGy/min). Local dose rates (LDR) at the position of all ICDs were measured. All ICDs were monitored continuously.
Results: With 6 MV FFF beams, ICD errors occurred at distances of 1.0 cm (LDR 46.8 cGy/min; maximum ICD dose 3.4 Gy) and 2.5 cm (LDR 15.6 cGy/min; 1.1 Gy). With 6 MV FLAT beams, ICD errors occurred only at 1 cm distance (LDR 16.8 cGy/min; 3.9 Gy). No errors occurred at an LDR below 7 cGy/min, translating to a safe distance of 2.5 cm (1.5 Gy) for flattened and 7 cm (0.4 Gy) for 6 MV FFF beams.
Conclusion: An LDR in ICDs larger than 7 cGy/min may cause ICD malfunction. At identical LDR, differences between 6 MV FFF and 6 MV FLAT beams do not yield different rates of malfunction. The dominant reason for ICD failures could be the LDR and not the total dose to the ICD. For most stereotactic treatments, it is recommended to generate a planning risk volume around the ICD in which LDR larger than 7 cGy/min are avoided.
Introduction
Considering stereotactic body radiotherapy (SBRT) and its effects on implantable cardioverter-defibrillators (ICDs), evidence is limited to case reports and mechanistic studies. Especially since the clinical introduction of FFF radiotherapy, high dose rates at the isocenter of up to 1500 cGy/min for 6 MV FFF and 2500 cGy/min for 10 MV FFF photon beams can be achieved. Although FFF radiation is known [2] to generate less scattered radiation, especially for low-modulated SBRT (e.g., SBRT in the lung), it may result in undesirably high local dose rates (LDR) within a cardiac implantable electronic device (CIED) in realistic clinical scenarios. The AAPM TG-203 report [3], the most recent comprehensive review on this topic, states in this context that currently no evidence exists on the safe application of FFF radiotherapy for patients with CIEDs and suggests increased monitoring of CIEDs pre- and post-treatment, which increases the workload for healthcare providers. A recent review of 13 lung cancer SBRT cases and a phantom study with explanted ICDs showed that, even though no CIED failures were reported in the historical SBRT cohort, in vitro data suggest inappropriate sensing (IS) starting at a dose rate (DR) of 1200 cGy/min (200 mGy/s) when CIEDs were placed within the radiation field. Nevertheless, in the same phantom study, no radiation-induced effects were observed with 6 MV photon beams when a total of 10 ICDs were placed up to 3 cm away from the planning target volume (PTV) in clinical lung cancer SBRT scenarios [4]. In that study, specific DR effects in ICDs were not investigated.
It has been demonstrated that ICDs placed directly within the primary radiation beam malfunction [1, 5-8]. In the absence of other known causes of radiation-induced CIED failures, such as neutron generation at photon energies larger than 6 MeV, it is hypothesized that DR-related effects exist which can influence ICD circuitry and thus lead to malfunctions. These effects may affect ICDs that are in the direct vicinity of, but not inside, a clinical target volume. In the beam penumbra, LDR are still high but decrease rapidly with increasing distance. At present, there is no clear evidence available regarding which LDR can be safely applied and, accordingly, at which distance from the radiation field an ICD may be located when radiotherapy for a nearby tumor volume is of imminent need. The aim of this study was therefore to describe ICD effects at specific LDR of radiation emitted from a medical linear accelerator with and without a flattening filter. ICDs were placed at predetermined distances from the primary radiation beam, with known LDR within each ICD, for flattened (FLAT) and FFF beams at 6 MV. This will lead to a more precise description of the LDR at which ICD errors occur and of the LDR that may be safely applied at the position of ICDs in patients undergoing lung SBRT. A safe distance between the ICD and the radiation field margin will be provided for flattened and FFF 6 MV photon beams.
Cardiac implantable electronic devices
A total of 62 explanted ICDs (Medtronic Inc., Minneapolis, and St. Jude Medical/Abbott, Saint Paul, MN, USA) were used. All devices were interrogated prior to the experiments and relevant data were registered. The time between ICD implantation and radiation exposure was 4.3 ± 2.1 years. All ICDs were fully functional and had sufficient battery capacity. No ICD had previously been subjected to therapeutic radiation. Detection parameters for ventricular tachyarrhythmias and pacing parameters were not reprogrammed; however, shock delivery was deactivated for safety reasons [1]. All devices were monitored continuously during the experiments with wireless programmers (Medtronic, St. Jude Medical/Abbott). All abnormalities observed in real-time monitoring were recorded. In addition, interrogation of each ICD took place immediately before and after each experiment.
Experimental design and setup
Radiation experiments were carried out at a medical linear accelerator (LINAC; VersaHD, Elekta AB, Stockholm, Sweden). ICDs were placed on a 30 × 30 cm² solid water slab phantom (RW3, PTW, Freiburg, Germany). The entire setup was covered by 2 cm of bolus material (Superflab, Eckert & Ziegler, Berlin, Germany) to ensure secondary electron equilibrium. The source-surface distance (SSD) was 98 cm, and the depth of the isocenter was located at the level of the ICD's upper side. ICDs were located at predefined positions at 1.0, 2.5, and 7.0 cm distance from the nominal field edge (50% dose level) in the penumbra of the radiation field (Fig. 1), with the ICD's inner side at the nominal distance. This setup ensured that circuitry and battery were exposed to radiation to a similar extent. For each experiment, exactly two devices were placed at opposite positions of the radiation field in the crossline direction (left-right, A-B) to ensure comparable multileaf collimator (MLC) transmission (< 0.6%) [9] and primary beam scattered radiation. In this setup, scattered radiation from opposing ICDs was considered negligible. Between the two ICDs, bolus material was placed with a size that covered the radiation field and the respective gap between each ICD and the field, so that all ICDs were in direct contact with the bolus material.
All dose deliveries were performed with a 5 × 5 cm² radiation field. This field size was chosen since, for most SBRT, mean equivalent square field sizes typically do not exceed 5 cm. ICDs were subjected to scattered and MLC-transmitted radiation from 100 Gy isocenter dose. Radiation was delivered either with 6 MV beams with the flattening filter present at a constant dose rate of 490 MU/min (equal to 430 cGy/min at the isocenter) or with 6 MV FFF beams at a dose rate of 1450 MU/min (equal to 1400 cGy/min at the isocenter).
ICDs were divided into five experimental groups with varying distances to the radiation field and irradiated with different radiation beams as described in Table 1.
If an ICD from group 1, 3, or 4 showed a failure, the device was interrogated after 8 weeks. After assertion of normal parameters, dose delivery was then repeated in these devices with the radiation beam settings of group 5 (which did not induce any ICD errors in 16 devices and exhibited the same LDR as group 2), thereby determining whether radiation-induced effects were of permanent or transient nature.
Scattered radiation dose measurements
To determine the LDR at different distances from the radiation field, the experimental setup was reproduced with an ionization chamber at the relevant positions. For the purpose of these measurements, the ICDs were replaced with bolus material. LDR were measured with a 0.3 cm³ rigid stem ionization chamber (type 30016, PTW, Freiburg, Germany) connected to a Unidos webline electrometer (PTW, Germany), with correction for ambient temperature and pressure. Point dose measurements were performed at the following positions in the crossline direction: the isocenter as well as 1.0, 2.0, 2.5, 4.0, 5.0, 7.0, 8.0, and 10 cm distance from the radiation field. The measurement setup was reproduced in the treatment planning system (TPS) Monaco (Version 5.51.10, Elekta AB). Dose calculation was Monte Carlo based with a spatial resolution of 2 mm and 1% statistical uncertainty. The reported ICD doses equate to the doses at the proximal end of the ICD and are therefore maximum doses.
Data analysis
Any inappropriate sensing (IS), ranging from a single event to continuous malfunction leading to inadequate defibrillation therapy, is reported as a failure. Given the nature of CIED events in radiotherapy and the goal of avoiding any radiation-induced ICD malfunction in patients, all ICD locations with LDR that resulted in ICD failure were deemed potentially harmful. An LDR that resulted in stable ICD function was concluded to be safe for radiotherapy. As a consequence of this dichotomous nature of our results, no data analysis beyond mere data presentation (e.g., risk analysis) was considered targeted and adequate.
Measurement details on local dose rates
Experimental groups 1 (FLAT, 1 cm distance to the beam) and 4 (FFF, 2.5 cm) exhibited nearly the same LDR (16.8 vs. 15.6 cGy/min), as did groups 2 (FLAT, 2.5 cm) and 5 (FFF, 7 cm; 6.6 vs. 6.0 cGy/min; Table 2). Removal of the flattening filter (FFF) resulted, at the same location (e.g., 2.5 cm distance from the beam edge), in a higher LDR (15.6 cGy/min) in comparison to FLAT beams (6.6 cGy/min). Of note, comparable LDR were associated with different total ICD doses (Table 2). Therefore, these experiments may serve as a comparison of whether the total dose or the LDR causes malfunctions. LDR as a function of the distance from the radiation beam for 5 × 5 cm² 6 MV FLAT and FFF beams at isocenter depth are shown in Fig. 2.
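To make the relation between distance and LDR easy to reproduce, the following short Python sketch interpolates the measured (distance, LDR) pairs quoted above on a logarithmic scale and estimates where the LDR drops below a given threshold. The log-linear fall-off model is our illustrative assumption, not part of the study; only the four measured value pairs are taken from the text.

    import math

    # Measured local dose rates (cGy/min) at distances (cm) from the
    # field edge for the 5 x 5 cm^2 6 MV beams, as quoted in the text.
    measured = {
        "6 MV FLAT": [(1.0, 16.8), (2.5, 6.6)],
        "6 MV FFF": [(2.5, 15.6), (7.0, 6.0)],
    }

    def distance_for_ldr(points, ldr_limit):
        # Assume LDR ~ exp(a + b*d) between the two measured points
        # (an illustrative log-linear model of the penumbra fall-off).
        (d1, r1), (d2, r2) = points
        b = (math.log(r2) - math.log(r1)) / (d2 - d1)
        a = math.log(r1) - b * d1
        return (math.log(ldr_limit) - a) / b

    for beam, points in measured.items():
        d = distance_for_ldr(points, 7.0)
        print(f"{beam}: LDR falls below 7 cGy/min at ~{d:.1f} cm")

Under this simple model, the 7 cGy/min level is reached at roughly 2.4 cm (FLAT) and 6.3 cm (FFF) from the field edge, consistent with the safe distances of 2.5 and 7 cm derived in this work.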
ICD failures at described local dose rates
Results are summarized in Table 2. The following is a description of the observed ICD failures, with local ICD doses and accumulated radiation doses at the isocenter at the time of malfunction; the time of first malfunction after the start of the beam is provided as well. All erroneous ICDs (n = 8) had a time between implantation and radiation exposure of 4.1 ± 2.8 years and thus did not differ from the total collective regarding their age. None of the three ICDs that had failed in group 4 showed malfunctions when they were exposed after 8 weeks to the group 5 experimental setting following the initial group 4 dose delivery. Group 5: No incidents were observed for the 16 ICDs, which were placed 7 cm away from the FFF beam.
Discussion
To our knowledge, this is the first investigation that tries to identify the cause of ICD malfunctions in the close vicinity of direct, non-neutron-generating radiation beams, i.e., when an ICD is located near but not inside the primary radiation field. ICDs exhibited transient errors when the LDR exceeded 6.6 cGy/min. Only ICDs that were exposed to a much higher LDR of 46.8 cGy/min presented persistent errors after 8 weeks. On the other hand, the total radiation dose that an ICD sustained did not, by itself, imply failures. Specifically, errors occurred when the LDR was 15.6 cGy/min and the total ICD dose was 1.1 Gy, while no errors were noted when the LDR was 6.6 cGy/min with an ICD dose of 1.5 Gy. Previous studies have placed ICDs either inside direct radiation or at a distance where the LDR was too small to be considered a relevant factor (overview in [3]). From the presented results, and considering the LDR measurements at the described locations, it was concluded that, for previously non-irradiated ICDs, LDR below 7 cGy/min do not cause errors in non-neutron-generating photon beams. At an LDR around 6 cGy/min, no malfunctions were observed in 32 ICDs from two different manufacturers. Malfunctions for both FFF and FLAT beams occurred at more than 15 cGy/min. In 2 of 8 ICDs, these malfunctions were persistent. In 6 of 8 ICDs, radiation-induced errors appeared to be only temporary, since these devices showed no further malfunctions when they were exposed after 8 weeks to subsequent radiation with dose rates around 6 cGy/min. Furthermore, no discrimination between FLAT and FFF beams was possible with respect to the quality and quantity of ICD malfunctions. However, since this comparison was conducted at points with similar mean dose rates but different distances from the radiation field, no statement on varying instantaneous dose rates [10] can be drawn from these experiments. At an LDR of 46.8 cGy/min, repeated and persistent ICD malfunctions occurred. Even when reducing the LDR to 6 cGy/min and investigating these malfunctioning ICDs again after 8 weeks, 2 of these 3 particular ICDs showed persistent IS, which indicates permanent defects in some but not all devices. For typical SBRT scenarios (reasonably small targets and thus radiation fields), it is therefore safe to keep 7.0 cm distance between the ICD and the target volume for 6 MV FFF beam deliveries (even at a dose delivery with a constant 1400 cGy/min isocenter dose rate) or 2.5 cm distance of the ICD to a 6 MV FLAT beam. Larger field sizes lead to more scattered radiation, which should be reflected in the corresponding margin assignment.
The fact that ICD failures in groups 1 and 4, as well as in groups 2 and 5, were similar in frequency and severity, combined with the corresponding total ICD doses and LDR presented in Table 2, indicates that the dominant reason for ICD failures could be the LDR and not the total dose to the ICD. As visible in our subgroups, the behavior (number of defects) is more similar in groups with similar LDR than in groups with similar total scatter dose. In addition, ICD errors occurred in their respective LDR groups at different cumulative radiation doses. Of note, manufacturers refrain from giving specific safe cumulative radiation doses for cardiac pacemakers and ICDs because it is currently unclear whether such a cumulative dose effect exists [11]. On the other hand, all available guidelines actually express such a dose recommendation (the most used is 2 Gy) [3]. In this context, we demonstrate that such a dose threshold depends on the LDR at the position of the device (for ICDs). We noted, for an LDR of 46.8 cGy/min, failures at 34, 68, and 238 cGy cumulative ICD dose. For 16.8 cGy/min, we detected failures at 12 and 17 cGy cumulative ICD dose, and at a comparable LDR of 15.6 cGy/min, we saw failures at 7, 8, and 22 cGy. Permanent failures, which were more severe and would have resulted in ICD replacement, were only noticed in ICDs from the 46.8 cGy/min LDR group, even though these two specific ICDs failed at the lower cumulative doses of 34 and 68 cGy.
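To illustrate the grouping argument numerically, the following minimal Python snippet re-tabulates only the failure events quoted in this paragraph (no data beyond the values given above are assumed):

    # (LDR in cGy/min, cumulative ICD dose in cGy at failure), from the text.
    failures = [
        (46.8, 34), (46.8, 68), (46.8, 238),
        (16.8, 12), (16.8, 17),
        (15.6, 7), (15.6, 8), (15.6, 22),
    ]

    by_ldr = {}
    for ldr, dose in failures:
        by_ldr.setdefault(ldr, []).append(dose)

    for ldr, doses in sorted(by_ldr.items(), reverse=True):
        spread = max(doses) - min(doses)
        print(f"LDR {ldr:4.1f} cGy/min: failures at {doses} cGy (spread {spread} cGy)")

    # Failure doses vary widely within each LDR group, while no failures at all
    # occurred at LDR <= 6.6 cGy/min, regardless of the accumulated dose.

The wide spread of failure doses within each LDR group, contrasted with the complete absence of failures below 6.6 cGy/min, supports the interpretation that the LDR rather than the cumulative dose governs the malfunctions.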
With the increasing use of flattening filter free radiation techniques for SBRT of lung tumors, two factors shift into focus with regard to radiation-induced ICD effects that were not entirely investigated in the past: high dose rates and large target volume doses. While the TG-203 report [3] states that no robust evidence exists to support recommendations for SBRT cases in CIED patients, our recently published study [1] suggested that high dose rates may be applied in close distance to ICDs. Due to the variable dose rates typical of VMAT, only few IMRT segments in that specific SBRT treatment were applied with dose rates as high as 1500 cGy/min. Therefore, no conclusions regarding a threshold LDR were drawn from those results. Still, the data were suggestive of the notion that SBRT may be feasible even for target volumes close to an ICD, because a high target volume dose of 150 Gy was applied to the isocenter with 6 MV FFF-VMAT without any ICD error.
Another recent study [4] subjected ICDs to either 6 MV or 10 MV FFF-IMRT and placed the ICDs either partially inside the direct radiation beam or 3 cm away from it. Irradiation was conducted either with a 28 Gy single fraction or with 4 × 12 Gy. Here, only the 10 MV plans resulted in ICD errors and no incidents were observed with the 6 MV plans, corroborating that even small increments in neutron production already cause ICD upsets. In this study, an additional small number of ICDs was placed directly within the radiation field and remained unaffected when the dose rate was below 1200 MU/min (1200 cGy/min) for a short period.
Mouton et al. proposed a threshold dose rate of 20 cGy/min after investigating 96 cardiac pacemakers at various LDR between 0.05 and 8 Gy/min [12]. In that investigation, all pacemakers were located on the beam axis and irradiated with 18 MV photons, which results in high neutron doses. Therefore, no conclusions regarding a possible safe LDR at the position of the CIED for non-neutron-generating beam energies could be drawn from this particular study, since neutrons remain a major cause of severe electrical upsets in CIEDs.
In view of the available data, it has become clear that CIED errors occur stochastically with increasing frequency when neutron-producing photon energies are applied [1, 13-15] or when CIEDs are placed within the radiation field (overview in [3]). In the latter case, no radiation ICD dose threshold can be assumed safe, because ICD errors may occur even at the lowest cumulative ICD doses when devices are placed within 6 MV beams [6].
While a local ICD dose rate of less than 1200 cGy/min may be applied for a short period [4], our data suggest in comparison that, with increasing time and radiation dose, much lower dose rates already result in ICD errors. Hurkmans et al. have suggested for FFF beams that, at the location of a CIED outside of the target volume, the LDR would be lower than 100 cGy/min and therefore dose-rate effects are rare [16]. The data from Aslian et al. serve well to explain several case reports of high cumulative absorbed radiation doses to CIEDs [17,18]. The authors show that even high dose rates of 1200 MU/min can be withstood by an ICD for a short time. On the other hand, it is possible that ICDs can withstand radiation doses that are accumulated in small increments over a longer time period [19]. Our data provide evidence for the notion that a much lower LDR of 15.6 cGy/min can result in ICD errors if sustained by the ICD over a longer period within one single treatment fraction, as may occur in SBRT cases. We show that ICD errors develop within the penumbra of flattened and unflattened beams and that dose-rate effects are relevant. Therefore, our data substantiate the consideration that dose-rate effects are rare (but exist) in clinical practice [16], because they depend on the ICD position relative to the target volume.
We distributed an uneven number of ICDs among the investigated groups. After noticing ICD failures at 1 cm distance with the flattened 6 MV beam (group 1), no further ICDs were included in this group, because the emphasis of this investigation is on SBRT cases, which are typically executed with FFF beams, resulting in short treatment times and thereby facilitating breath-hold techniques. Furthermore, the 1 cm FFF group 3 was equipped with 10 ICDs, as was the 2.5 cm FFF group 4. Here, the emphasis lay on generating a robust signal for discriminating between safe and unsafe LDR. Finally, 16 ICDs from two manufacturers were included in each of groups 2 and 5, corroborating our finding that LDR of around 6 cGy/min do not result in ICD errors. Still, we investigated a total of 62 ICDs, which is a relatively small number when looking at stochastic events. This limitation is determined by the limited availability of functioning ICDs. It might be challenging to add more measurement locations (in cm steps) when trying to further elucidate a true threshold LDR for ICDs, but this could be undertaken after a robust power analysis using our data and when focusing on a single beam quality (flattening filter free only).
A total dose of 100 Gy at the beam isocenter is higher than any typical cumulative dose concept in lung SBRT and can therefore serve as an upper limit. An explicit consideration of potential dose fractionation effects was beyond the scope of this work, but since no ICD showed any error during maximum changes (beam-on, beam-off), a generalization to fractionated RT is considered feasible. Furthermore, any interfractional ICD recovery process will lead to fewer failures.
We included ICDs from two different manufacturers in our setting and distributed 1-chamber, 2-chamber, and 3-chamber devices equally among all groups, but we are aware that differences in ICD architecture exist between companies. Therefore, our results cannot be generalized to all available devices. However, they provide a rationale for further discussion of safe and potentially deleterious ICD locations in SBRT planning scenarios. With the increasing use of SBRT with flattened and unflattened photon beams and high local fractional target doses, our data can help to understand deleterious dose rate effects in ICDs and provide information on how to avoid them.
Conclusion
Dose rate effects play a role in radiation-induced ICD failures besides neutron radiation and total dose to the ICD. An LDR in ICDs between 6.6 and 15.6 cGy/min may cause ICD malfunction. At identical LDR, differences between 6 MV FFF and 6 MV FLAT beams were not found to yield different rates of malfunction. It is recommended to generate a planning risk volume (PRV) around the ICD in which LDR larger than 7 cGy/min are avoided. Depending on the effective field size and the dose rates used, it is sufficient for most stereotactic treatments with 6 MV FFF beams to add a 7 cm isotropic ICD-PRV margin.
"Medicine",
"Physics",
"Engineering"
] |
Prospects for Industrial and Post-Industrial Tourism in Kuzbass Coal Mining Cluster
This article explores the potential of Kuzbass for industrial and post-industrial tourism, drawing on successful foreign experience and several Russian cases. We identified common international trends in using industrial heritage and outlined the challenges that Russian companies are facing on the way towards a post-industrial paradigm. Western industrial museums present their local industry in the global context, while showing its impact on local community and culture. They make wide use of modern technology, establish links with science and business, support local arts, and involve authentic members of the industry. Russia, however, can boast very few examples of industrial tourism (e.g. the Krasnaya Gorka Museum in Kemerovo), which is largely seen as having a purely academic appeal. Yet Kuzbass, with its long history of coal mining and other industries, has very extensive prospects in this type of tourism. To prove it, we performed a SWOT analysis of a prospective tour into reclaimed lands, which showed far more strengths and opportunities for integrating local community, business, and industrial heritage than weaknesses and threats.
Introduction
The western world has long moved on from the industrial to the post-industrial era. As a result, industrial complexes have acquired a status of post-industrial heritage. We can see how once great factories, hi-tech equipment, breakthrough transport solutions, industrial living quarters, anthropogenic landscapes, and other objects of this now-extinct way of life have turned into large museum complexes, succeeding industrial enterprises as the backbone of local communities. Many of such sites in Europe are now on the UNESCO list.
However, Russia still largely remains in the industrial chronotope. The idea that a factory or a mine can be an attractive tourist site is still new to most Russian travel agents and holiday-makers. Local entrepreneurs hardly see old industrial sites as part of tourism or a potential source of income. Many Russian tourists associate industrial museums with library dust and academic boredom.
Another phenomenon common for Russia is that some regions are trying to change their profile and move away from their industrial past. For example, in the last decade, Kuzbass -a coal-mining region -has made numerous attempts to rebrand itself as an ecologically pristine ski resort and erase the collective memories of miners' strikes, cave-ins, or black snow. However, Kuzbass coal is not only about methane explosions, bankrupt mines, or angry coal miners blocking the Trans-Siberian Railway. Kuzbass coal is also associated with the one-of-its-kind Autonomous Industrial Colony "Kuzbass", ambitious projects of the pre-revolutionary Kopikuz Joint-Stock Company, heroic labour on the home front during World War II, mining folklore and community life, successful experience of mine reclamation, and other aspects of life.
Even if Kuzbass is destined to become an elite ski resort, its industrial history will always be there. One cannot escape from the past, but one can learn to respect it and draw lessons. All these stories have to be preserved for future generations. And although the current conditions are not really fit for industrial tourism, sooner or later it will acquire the popularity it deserves and possibly become a significant source of income for the whole region or many of its company towns.
Yet, Russia is slowly heading towards a post-industrial paradigm, sustainable development, green technology, and other global trends. In fact, some regions (e.g., Ural) are beginning to connect their future with industrial tourism. Therefore, we found it relevant to analyse the foreign experience of industrial and post-industrial tourism and see how it could be adjusted and adapted to the Russian context, particularly in Kuzbass, where it definitely has a future.
Materials and methods
Our study started with the hypothesis that Kuzbass - an industrially developed region with coal mining at the core of its economy - has an enormous potential for industrial and post-industrial tourism. Prior to suggesting a number of ideas about how this type of tourism could be integrated into the local economy and community life, we studied the foreign experience, its key features and trends. We also analysed the situation in Russia and identified the main obstacles - cultural, social, and economic - on the way towards industrial tourism.
To collect information, we mainly relied on the Internet, particularly the web-sites of industrial and post-industrial museums and tourist organizations, both in Russia and abroad. We also analysed a number of industrial tours offered by local travel agents and educational centres. E-library publications were another valuable source of information on the topic.
In addition to observation of the local context, our methods of study included interpretation, hypothetico-deductive processing, and comparative analysis of historic, cultural, social, economic, and ecological data collected from the above sources. Finally, we developed a prospective tour into reclaimed lands in Kuzbass and performed a SWOT analysis to identify its strengths, weaknesses, opportunities, and threats.
Results and Discussion
3.1 Industrial and post-industrial tourism in Russia and abroad
Definition
Broadly speaking, industrial tourism means both industrial heritage sites and operating plants. It offers tourists new experience connected with production and technology, as well as with the epoch the site belongs to [1].
To be considered an object of industrial tourism, the site has to be open to the public on a regular basis [2] and be the primary purpose of visit for the so-called industrial tourist [3]. However, the term "industrial tourism" is somewhat ambiguous: whether it stands for operating enterprises or industrial heritage sites depends on the country. For example, France and the USA use it to refer to active enterprises, while Germany and Poland apply it mainly to museums. It seems sensible to use the approach adopted in Britain and refer to regular commercial tourist visits to operating enterprises as industrial tourism and visits to museums and industrial heritage sites as post-industrial tourism [4].
The term "industrial" covers all activities including primary industry, mineral production and processing, state sector and private business, profit and non-profit organizations, services, etc. [5]. Any site of tourist attraction consists of three components: people (tourists, personnel, etc.), core (the site itself or an attractive event), and information. When it comes to operating enterprises, it is compulsory that tourist activity remains secondary [5,6].
Post-industrial tourism is often regarded as part of cultural tourism as tourists get acquainted with sites or events that have a certain level of high or popular culture, thus broadening their knowledge about the world [7].
Foreign experience
Great Britain, a country with a rich industrial history, leads the world in organized post-industrial tourism. Eight out of its 28 UNESCO sites are connected with industrial heritage, mostly associated with the Victorian era and the leading role of the British Empire in world trade and economy.
In the USA, where, just like in Russia, mining remains a way of life for several states (Pennsylvania, West Virginia and others), it is treated as a precious phenomenon of community culture with its own folklore, songs, art, attitudes, and traditions. The American National Mining Association covers 55 mining museums and offers visits to operating mining sites.
The European Union, with its depleted natural resources, has an on-going project called "European Route of Industrial Heritage." Having originated in Great Britain, Germany, and the Netherlands, it now unites more than 850 industrial heritage sites in 44 countries [8].
We analysed the web-sites of 30 industrial and post-industrial tourist destinations abroad (industrial museums and other tourist organizations) and identified a large number of common features and trends. Most of them are listed below.
-Local industry in the global context. Industrial museums tend to present their local industry as part of the world history, stressing its effect on their local community and showing its place in the global chain of industrial and historic events.
-Added focus on local community. Industrial museums feature not only industrial history but also that of their local community and its way of life. They reconstruct authentic houses, build replicas of home and shop interiors, conserve or sometimes construct whole authentic-looking streets.
-Modern technology. To create an interactive environment, museums combine authentic working equipment with modern technologies (e.g. 3D displays or interactive kiosks).
-Making use of nature. Museums take advantage of the surrounding landscape to organize view points and hiking routes, as well as use prominent landscape features on their branded materials.
-Supporting local crafts. Industrial museums support local artisan production and local industries by providing space for craft fairs, art exhibitions, and workshops. In return, they may acquire unique souvenirs.
-Unique personality. Museums strive to express their unique personality through animal mascots, photo objects, interesting characters working as guides, etc.
-Internet-friendly. Virtual space is just as important as the physical environment. Museums make use of the Internet sites, social networks, YouTube videos, bar-codes, and hash tags, as well as organize creative selfie zones and online marketing campaigns.
-Authentic guides. Museums invite professional industry workers, e.g. retired miners, to work as guides. They can easily win the trust of the audience and personalize the information with true stories and professional jargon.
-Unique concept. Some unique feature or an artefact -а once-largest installation, the first-of-its-kind apparatus, or a never-before experience -can become the core of the museum's concept.
-Connecting with modern arts. Museums actively integrate modern arts related to the main concept by holding contests, organizing exhibitions and other events.
-Eco-friendly. Tour organizers try to emphasize the industry's eco-vector by showing that once hazardous production is getting less so, or that the industry is doing its best to become more environmentally friendly.
-Full infrastructure. Museums boast all necessary facilities and services, including public catering.
-Integrated character. Objects of industrial tourism tend to gather in clusters so one route might include visits to operating enterprises, closed heritage sites, related industries, museums of local lore, as well as indoor and open-air places of interest.
-Targeting families. Families are becoming a primary target audience for industrial museums so they organize special diversified programs for all ages.
-Open to partnerships. Industrial and post-industrial tourist organizations seek partners among all sorts of social groups -schools, golden age clubs, labour unions, etc.
-Linking with science. Museums contribute to the development of science by organizing field practice for students, providing venues for scientific conferences, welcoming schoolchildren on career days, employing academics to conduct historical research, etc.
-Seeking business sponsorship. Historical heritage sites seek financial support from businesses and trusts and can adjust their activities to meet their needs.
Russian experience
Although Russia is an industrial country, industrial heritage sites are more likely to be seen here as an obstacle to achieving greater production as they are often located on the premises of operating plants. As a result, there are few closed industrial enterprises with an intact core that would go back to the era of the Industrial Revolution. Quite common is a picture of ruined structures located in operating workshops or deep in the territory of a large industrial complex.
If an industrial enterprise with a long history has a museum, it is highly unlikely to be open to the public other than students or academics. Having to welcome regular tour groups is largely regarded as a nuisance because of insurance issues and lack of regulations on industrial tourism. Operating enterprises see no immediate profit in commercial industrial tourism. Neither do they look at it as an opportunity to promote the company or raise its spirit. Financial aspects remain more important than improving the image of the industry.
However, some regions are already drawing road maps of their economic development based on the prospects of industrial tourism, e.g. the Ural region with its Nizhny Tagil Plant of Steel Structures, the Steel Route at the Magnitogorsk Iron and Steel Plant, and the Chelyabinsk Pipe Rolling Plant; Saint-Petersburg with its old factories and docks; Moscow with its confectionary factories; small old towns of the Golden Ring Route with their traditional crafts, etc. [9].
Food industry, of all sectors, was the first to take advantage of industrial tourism as it could produce an immediate profit and marketing buzz. On the other hand, Russia has a gas and oil industry that is rich enough to care about its image. For example, the Rosneft Oil Company sponsors a route "The Black Gold of Russia" in Tyumen. The Lukoil Oil Company funded a hi-tech interactive museum of oil industry in the Komi Republic.
Kuzbass can boast only one industry-related museum that shares most of the characteristics of successful foreign tourist destinations listed above -the Krasnaya Gorka (Red Hill) Museum in Kemerovo. In fact, it is a cluster of several industrial heritage sites, including some unique examples of Dutch architecture. Located right across the river Tom', opposite the coking plant and the power station, it offers a breath-taking view of the industrial landscape.
Below is a list of features that it shares with foreign industrial museums. In particular, the Krasnaya Gorka:
-tries to preserve the local coal-mining culture;
-preserves and promotes the heritage of unique historical experiments, i.e. the Industrial Autonomous Colony "Kuzbass" that united enthusiastic socialists from all over the world for the sake of Soviet heavy industry in the 1920s;
-has accounts in all major social networks and actively uses the Internet as a marketing tool;
-hosts various cultural events, thus becoming an anchor point for the local community;
-improves the status of the once shabby coal-mining district by the sheer fact of its existence;
-conducts serious scientific research into the history of the Industrial Autonomous Colony "Kuzbass" and publishes The Krasnaya Gorka Almanac to promote local lore studies;
-exhibits in its yard some authentic mining equipment, including a huge power-shovel and a one-rail elevator;
-makes use of modern technology (e.g. interactive kiosks, a large transparent screen for presentations, a 3D excursion to the power station);
-welcomes various city events and offers its premises for meetings, workshops, and other activities;
-supports the local arts-and-crafts community by ordering souvenirs of various kinds;
-organizes photo zones, on-line contests, and other events, encouraging its visitors to enlarge its online presence;
-has a mascot called "Gamazyulya," a mythological patron of East-European coal miners, which features in souvenirs and sometimes acts as a tour guide, etc.
Current situation with industrial tourism
Siberian recreational resources are competitive advantages of this region. According to The Strategy for Developing Tourism in Kemerovo Region up to 2025, tourism is to become one of the most important growth points for such depressed regions as Altai, Buryatia, Tuva, Khakassia, and Trans-Baikal, and for such industrially developed areas as Irkutsk, Kemerovo, and Novosibirsk. And although the road map does not mention industrial tourism as such, industrial and post-industrial tourism is extremely flexible and can become part of other directions supported by the Strategy, e.g. cultural or historical tourism, business or adventure, as well as ecotourism.
Today, Kuzbass boasts 132 officially registered manufacturing enterprises [10]; yet, its industrial and post-industrial tourism is still in its infancy. A brief analysis of tour offers on the Internet proved that local commercial industrial tourism barely exists at all. Strange as it may sound, Kuzbass - with its long mining history - does not have a single mine open to the public. Even stranger is the fact that the so-called Seven Wonders of Kuzbass include no industrial sites connected with coal mining or coal processing. However, on their official web-sites, the largest local mining companies (Chernigovets, Kedrovsky, Barzassky, and Bachatsky) often publish reports about organized visits to their facilities. Yet, none of these visits falls under the definition of industrial tourism.
The local coal mines and refineries welcome mining students, high-school students, miners' children (on Coal Miner's and Metallurgist's Days), honoured guests (celebrities, politicians, athletes, etc.), journalists, bloggers and the like. A group of interested individuals (e.g. visiting Kuzbass on business) can ask for a tour of the Koksokhim coking plant or one of the local open-pit mines, but they would have to approach the enterprises directly. For example, some years ago, Koksokhim organized an exclusive excursion for the descendants of the foreign engineers who worked in the Autonomous Industrial Colony "Kuzbass" in 1921-1927. The visit was arranged by the local council and the Krasnaya Gorka. Naturally, such visits are not commercial. Koksokhim has some monuments of historical significance constructed by the Kopikuz Company in the pre-revolutionary era. For example, its reinforced-concrete coal preparation building, with its elegant arches and curved piles, is one of a kind in Siberia, and some of its river-stone constructions are the oldest masonry structures in the region. Some historical equipment and furnaces could also be of public interest, let alone the impressive coke quenching process. However, coal processing is potentially dangerous, and the plant itself is a site of strategic significance. Therefore, regular public access would be associated with potential risks and red tape, problems that would outweigh any commercial or publicity benefits. Coal mines are driven by the same concerns when they refrain from regular group visits.
A brief analysis of local tour offers revealed only one product that belongs to industrial tourism in the full sense of this term. The Meridian travel agency offers a tour called "Industrial Kuzbass" around the oldest coal-mining sites in Kuzbass, namely the Bachatsky open mine and the Bachatsky smelter (1816) in the towns of Belovo and Gurievsk. This one-day tour, including transfer and lunch, seems to be the only commercial tour offer related to coal mining.
All-Russian trends in industrial tourism
Among other sectors, the food industry remains the source of the most popular tour offers in Kuzbass. The abovementioned Meridian travel agency sells several tours of confectioners, farms, and breweries. Using the flexibility of industrial tourism, the agency combines visits to food enterprises with excursions to local lore museums and historical sites in several Siberian towns. Such one-day trips are advertised as family events. In this way, local industrial tourism follows an all-Russian trend: food companies make good use of organized tours since these bring them immediate profits. Naturally, such marketing buzz is effective for small and medium-sized food producers rather than mining and heavy industry enterprises.
Another all-Russian trend that we can observe in the local industrial tourism is its connection with educational tourism. The Kemerovo Regional Centre for Children and Youth Tourism offers a number of industrial tours for school children, e.g. to the Kemerovo radio station and TV centre, meteorological and hydrometeorological stations, fire units, the airport, the backstage of the Drama Theatre, and other places of interest.
Finally, the third trend relates to culture and service sectors and is fuelled by local enthusiasts. For instance, members of the "Inside and Outside" social project organize cheap excursions to the hi-tech office of a local Internet provider, flower shops, a printing firm, an embroidery workshop, a brewery, the oldest department store in Kemerovo, the Puppet Theatre, etc. Unfortunately, such excursions are irregular and the initial enthusiasm seems to fade away as these scarce visits fail to bring the instant profit the hosts might have expected.
Obstacles to post-industrial tourism
The situation with post-industrial tourism is becoming critical in the context of the ongoing deterioration of old Soviet and pre-revolutionary industrial buildings. The museumification of potential industrial heritage is a good option when it includes the whole production complex. Naturally, it is easier to plan the conservation of an old workshop than to restore a partially ruined building [11].
In the early 2000s, the Krasnaya Gorka considered turning one of the old river-side shafts into a museum and asked several local coal-mining companies to assess the possibility of restoring at least a few meters of the currently closed mine throats. The miners were quite categorical: no coal enterprise would risk its people's lives to restore a possibly gassy mine with rotten beams, unreliable planning documentation, and coal fire hazards. Eventually, the museum modelled some elements of underground mining on the ground floor of its main building. Currently, the river-side shafts are sealed, but the blocked throats can be seen during an excursion to Gorelaya Gora (Smoking Hill), a natural site where coal was discovered in 1721. Unfortunately, the route is seasonal and too challenging for tourists with special needs.
As for local specialized museums, they attract experts, but not the general public. For example, Kemerovo has a unique museum of coal at the Institute of Coal and Coal Chemistry (Siberian Branch, Russian Academy of Sciences). However, the museum has a clear scientific and career-building orientation. There is no information about the tickets or working hours on its web-site, which clearly indicates its non-profit character [12].
Modern mass tourism borders on entertainment. However, all local attempts to marry post-industrial tourism with entertainment have remained on paper due to their uneconomic character. Some years ago, Kemerovo had an ambitious idea to restore the cable road that used to transport coal from the steep right bank of the Tom' river to the coking plant and the power station on its flat left bank. Its implementation could have given the necessary "zest" to the concept of post-industrial tourism and possibly been more profitable than many other tourism development projects. However, it would have required unprecedented funding. Another example is the utopian project "City of the Sun" initiated by a "group of enthusiastic citizens" [13]. Their ambitious project involved converting some old houses designed by the Dutch architect J. van Loghem into an art space. What they did not take into account was the isolation of the houses from the city centre, as well as enormous costs and poor infrastructure, let alone the unwillingness of the city council to undertake a commercially unfeasible project.
Prospects for local industrial and post-industrial tourism
Having analysed some of the existing projects in Russia and abroad, we can identify the following prospects for local industrial and post-industrial tourism in Kuzbass.
Firstly, 3D excursions could offer a good alternative to visiting potentially dangerous or sensitive sites. The Krasnaya Gorka has recently introduced a virtual tour of the Kemerovo Power Station. The project was funded by the Siberian Generating Company. Just as spectacular, so it seems, might be a 3D underground tour around a coal mine or a virtual flight over an open pit.
Secondly, the future of local industrial tourism may lie in the improvement of the existing tourist routes. For example, the Krasnaya Gorka organizes seasonal two-hour trips to the foot of Smoking Hill, where the first local coal deposit was discovered back in 1721.
Tourists can see an impressive stone jetty, an industrial landscape, the remains of the cable road, and the closed entries to the mines. Unfortunately, the route is accessible only from August to October due to seasonal shifts of the river level and the poor quality of the path. However, a dam and some basic renovation could solve the problem. The embankment line between the bridge and the jetty could be made tourist-friendly and equipped with magnifying viewers to observe the coking plant and the power station across the river.
Thirdly, industrial tourism could be used to preserve the local mining culture and lifestyle. It would be a good idea to purchase one of J. van Loghem's houses and turn it into an exhibition of the mining community life back in the 1920s.
Fourthly, the focus of industrial tourism could be shifted onto a coal-mining landscape, which is often associated with ecology. The industrial landscape was formed at the industrial stage of social development as a result of scientific and technological achievements and engineering solutions. It reflects the whole complex of processes and phenomena that can be called industrial culture [14]. One can be awed by colossal mine dumps from a special viewing platform at the Kedrovskaya open mine or from the highway, as the area of the mine is closed to the public. However, the industrial landscape is not only open mines and on-going excavations but also reclaimed sites. Local depleted open mines are planted with pines and buckthorn bushes, setting a good example of sustainable mining development. Since land reclamation in Kuzbass goes back quite a long way, some of the reclaimed areas have turned into real forests called "posadki" ("plantings"). Most of these new forests are easily accessible by car on old industrial roads, so people go there to gather mushrooms and berries or have picnics. For example, it would seem quite a good idea to create a tourist product based on the reclaimed land around the closed Vladimirskaya mine (the former Volkov mine). It is a 50-minute bus ride from the city centre, and some of the reclaimed areas are over 60 years old. Historically, they formed a unique landscape of old open pits and dumps covered with pine trees and buckthorn bushes that gradually merged with authentic black taiga. The border between the real Siberian taiga and the "artificial" pine forest is practically invisible.
Finally, industrial tourism could be mixed with almost any other type of tourism. For instance, its cultural and educational version could tell a story of local industry and tourist culture in the USSR, the oeuvre of Kuzbass poets and bards, local taiga flora and fauna, etc. Married with sports and fitness tourism, it could feature Nordic walking, skiing, snowmobiling, orienteering, yoga, "tree-hugging", and other activities. Combined with entertainment or adventure tourism, the route could be modified as a quest. Also, industrial tourism can be adjusted to ecotourism and involve collecting garbage, studying land reclamation techniques, and other environmental efforts. A picnic with coal miner's food or picking mushrooms and berries can easily shift the paradigm into the sphere of gastrotourism.
In view of the above, we designed a prospective tour into reclaimed lands and performed its SWOT analysis (Table 1).
Table 1. SWOT analysis of a tour into reclaimed lands.
Strengths:
- unique landscape: a mixture of planted pine forest and authentic taiga;
- possibility of skiing in winter (integration into fitness tourism);
- close location: an hour's drive from the city centre;
- easy access by public transport;
- drive through historical places: the road runs through the Miners Avenue, the most actively developing historical part of the city;
- flexible transport: various well-trodden paths and old roads are available for both SUVs and bicycles;
- possibility of hunting: several hunting huts have been popular among tourists since the 1970s;
- mining culture: nearby are some mining buildings, a chapel with bells made from gas cylinders, a small river with a bridge, etc.
Weaknesses:
- seasonal character: the route can be challenged by mud, tall grass, gnats, or ticks from March to August and in November;
- no access for disabled people: the route is not accessible for tourists with special needs.
Opportunities:
- trigger for developing old mining districts;
- restoring the good name of the industry, if successful;
- flexible timing: from several hours to a weekend with an overnight stay at a camping site;
- easy diversification according to the type of tourism, purpose, and target audience;
- flexibility and easy integration into some other excursion;
- benefits for local residents: former miners, foresters, and local farmers can work as guides or provide other services;
- trigger for developing camping: local forestry managers can create good camping sites;
- possibility of funding from mining enterprises.
Threats:
- old industrial remnants: a possible presence of unmarked shaft entrances, pipes sticking out of the ground, etc.;
- lack of enthusiasm from Kemerovo residents (the main target audience), who can visit these places for free by bus or car and are not ready to perceive their customary landscape as something unique or bearing historical and cultural significance;
- potentially dangerous wildlife, e.g. bears, rutting elks, hives of wild bees, etc.
As we can see, the strengths and opportunities of our prospective tour far outweigh its weaknesses and threats. Most importantly, such tours can promote partnerships between former and current coal miners, local residents, travel agents and other businesses, environmentalists, historians, volunteers, and other members of our regional community. They can present Kuzbass as a coal-mining region that cares about its people and environment, strives to preserve its local history and culture, takes pride in its core industry, promotes sustainable development, and is in touch with modern global trends in industrial and post-industrial tourism.
Summary
Although Russian industrial and post-industrial tourism is still in its infancy, regions such as Kuzbass are making the first steps in this direction to preserve their industrial heritage, a vital part of their local culture, for future generations. A good example is the Krasnaya Gorka Museum in Kemerovo, which shares most of its features with successful foreign museums of this kind. Despite current challenges and obstacles of an economic, social, and cultural nature, industrial and post-industrial tourism is bound to acquire a wider tourist appeal in Kuzbass. Its prospects include virtual tours of potentially dangerous or sensitive industrial sites, seasonal routes to old remnants of industrial heritage, reconstructions of historical architecture, exhibitions of mining community life, trips to reclaimed lands, and products integrated with education, sports, entertainment, and other types of tourism.
"Economics"
] |
Influence of post-annealing in sulfur atmosphere on thermally evaporated β-In2S3 films
Abstract Indium sulfide (In2S3) is an n-type semiconductor with wide bandgap energy (2.2–2.7 eV) and is currently used as a buffer/window layer in thin film solar cells as an alternative to toxic CdS. In the present study, In2S3 thin films were deposited on soda lime glass substrates using the thermal evaporation technique at different substrate temperatures, Ts = 200 °C–350 °C. Further, all the as-deposited films were annealed in sulfur ambient at 250 °C for 1 h. The structural, compositional and optical properties of the annealed In2S3 films were analyzed using GIXD, EDS and a Photon RT spectrophotometer, respectively. All the annealed films exhibited a polycrystalline nature with improved crystallinity and high optical transmittance in the visible region. Moreover, the as-deposited films were sulfur deficient, whereas in the annealed layers the S/In ratio increased due to sulfur annealing. Therefore, annealing of In2S3 films in a sulfur atmosphere enhanced the quality of the films. Among all the as-deposited and annealed films, the layers grown at Ts = 300 °C followed by annealing at 250 °C showed better structural and optical properties than the other films.
Introduction
Nowadays, researchers are showing great interest in environmentally friendly materials to develop low-cost solar cells. In2S3 is a potential candidate for use as a window/buffer layer in thin film solar cell applications due to its n-type conductivity, wide band gap energy (2.1–3.1 eV), good photoconductive response and lower toxicity compared to CdS [1][2][3]. Generally, the properties of thin films can be enhanced by annealing, chemical treatment, and doping or alloying processes. The main purpose of annealing is to improve the quality of thin films by enhancing the nucleation process, which consequently increases the grain size and reduces defects. In In2S3 films deposited at higher temperatures, sulfur deficiency is a common phenomenon that occurs due to re-evaporation of sulfur from the film surface, owing to its high volatility and vapour pressure. To overcome this problem, sulfur annealing is an apt approach to maintain the stoichiometric composition of In2S3 films. According to the literature, despite numerous reports on the effect of annealing on In2S3 films, very few reports address the effect of sulfur annealing on In2S3 films [4,5]. Therefore, in the present study, thermally evaporated In2S3 films were post-annealed in a sulfur atmosphere, and the structural, compositional and optical properties of the layers were investigated using various techniques.
Experimental details
In2S3 thin films were deposited onto soda-lime glass substrates using the thermal evaporation technique (Hind Hi Vac BC 300 box coater) at different substrate temperatures, Ts = 200–350 °C. In2S3 powder (Sigma Aldrich, 99.999%) was used as the source material and kept at a distance of 14 cm from the glass substrates. The powder was evaporated at a rate of 15 Å/s. The grown In2S3 films were ~500 nm thick, as measured using a quartz crystal thickness monitor. Further, the as-deposited layers were annealed under a sulfur atmosphere (2 × 10⁻² mbar) in a two-zone tubular vacuum furnace at 250 °C for 60 min. The as-deposited as well as annealed In2S3 films were characterized using various techniques. The structural details of the films were investigated using an Ultima IV X-ray diffractometer with a Cu Kα radiation source (λ = 1.5406 Å) in grazing incidence X-ray diffraction (GIXD) geometry at a grazing angle of 1°. The layer composition was analyzed by energy dispersive spectroscopy (EDS) with the aid of an INCA energy analyzer attachment (Oxford Instruments). The optical transmission and reflection spectra of the films were measured using a Photon RT spectrophotometer (Essent Optics).
Results and discussion
3.1. Structural analysis
Fig. 1(a) and (b) show the GIXD patterns of as-deposited and annealed In2S3 films. In Fig. 1(a), the films deposited at lower substrate temperatures (Ts = 200 °C and 250 °C) show an amorphous nature: the thermal energy supplied to the substrate was insufficient, so the ad-atoms could not move freely on the substrate surface, which caused the growth of many nuclei without surface diffusion [6]. With a further increase of Ts to 300 °C and 350 °C, the films exhibited a polycrystalline nature with poor crystallinity and contained mixed cubic and tetragonal β-In2S3 phases. The films changed from amorphous to polycrystalline with increasing Ts because the higher ad-atom mobility stimulated the growth of crystallites in different orientations. The coexistence of cubic and tetragonal β-In2S3 phases is a common phenomenon in thin film growth irrespective of the deposition technique, as reported in several works [7][8][9]. All as-deposited films required a post-annealing treatment to improve their crystalline quality and other physical properties; therefore, all as-deposited layers were annealed at 250 °C under sulfur ambient for 60 min. The annealed layers showed better crystallinity than the as-deposited films and contained only the cubic phase of β-In2S3 (JCPDS 65-0459) (see Fig. 1(b)). Further, the size of the coherent scattering region (L) and the lattice deformation (ε) of the films were calculated using the appropriate formulae reported in the literature [10], and the obtained values are listed in Table 1. From Table 1, it is observed that the L value increased in the annealed films, which might be due to the coalescence of smaller nuclei or neighboring smaller crystallites. The crystallinity of the films improved after annealing because sufficient thermal energy was available for recrystallization and grain growth with reduced defects.
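The formulae for L and ε are cited from [10] rather than reproduced here; a minimal sketch, assuming the commonly used Scherrer relation L = Kλ/(β cos θ) and the strain estimate ε = β/(4 tan θ), where β is the peak FWHM in radians and 2θ the peak position, could look as follows (the peak values are illustrative, not data from this paper):

```python
import math

WAVELENGTH = 1.5406e-10  # Cu K-alpha wavelength in meters
K = 0.9                  # Scherrer shape factor (assumed)

def size_and_strain(two_theta_deg, fwhm_deg):
    """Estimate coherent scattering region L and lattice deformation
    from a single GIXD peak via the Scherrer and tangent formulae."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    L = K * WAVELENGTH / (beta * math.cos(theta))  # meters
    strain = beta / (4.0 * math.tan(theta))        # dimensionless
    return L * 1e9, strain                         # L in nm

# Illustrative peak position and width (placeholders only)
L_nm, strain = size_and_strain(two_theta_deg=27.4, fwhm_deg=0.45)
print(f"L = {L_nm:.1f} nm, strain = {strain:.4f}")
```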
3.2. Composition analysis
Fig. 2 shows the variation of the S/In ratio for as-deposited and annealed In2S3 films. The S/In ratio decreased from 1.21 to 1.01 with increasing substrate temperature (200 °C–350 °C) in the as-grown films owing to re-evaporation of sulfur, caused by its high volatility and vapour pressure, whereas for the annealed layers the S/In ratio increased and varied in the range 1.53–1.66. The films deposited at Ts = 300 °C showed, after sulfur annealing, an S/In ratio of 1.53, which is close to the bulk value (1.49) of In2S3 powder; the remaining annealed films contained S/In ratios greater than 1.60. The increase of sulfur content in the annealed films is due to the rapid reaction of sulfur vapours with indium, which led to the formation of the stoichiometric β-In2S3 phase. A similar variation of S/In values was reported by Bouabid et al. for flash-evaporated In2S3 films annealed at 300 °C under sulfur atmosphere [4]. Moreover, the change in S/In ratio can affect the optical band gap and other properties of the In2S3 layers.
3.3. Optical analysis
Fig. 3 shows the transmission spectra of as-deposited and annealed In2S3 films. After annealing, all the films showed high transmittance (>60%) in the visible region compared to the as-deposited layers, and the interference fringes in the transmittance spectra indicate good homogeneity and uniformity of the films. The optical transmittance improved for the annealed films because of their improved crystallinity.
The optical band gap energy of the layers was evaluated from the Tauc relation for direct allowed transitions [6], (αhν)² = A(hν − Eg), where α is the absorption coefficient, hν is the photon energy, Eg is the band gap energy, and A is a constant related to the effective mass of the material.
The evaluated band gap energy of the as-deposited layers initially decreased from 2.42 eV to 1.72 eV up to Ts = 300 °C and then increased to 2.64 eV due to changes in composition and structural defects of the layers, whereas for the annealed layers the band gap energy was higher than that of the as-deposited layers and varied in the range 2.48–2.73 eV. The rise in band gap energy upon annealing was due to better stoichiometry, improved crystalline quality and fewer defects compared to the as-grown layers [11]. Fig. 4 shows the variation of the band gap energy values of the layers with the substrate temperature.
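As a worked illustration of this procedure, the sketch below extracts Eg by fitting the linear portion of a (αhν)² versus hν Tauc plot and extrapolating to (αhν)² = 0; the synthetic spectrum and the fitting window are placeholders, not measured values from the paper:

```python
import numpy as np

def tauc_band_gap(hv_eV, alpha, fit_window):
    """Estimate the direct band gap Eg from a Tauc plot.

    hv_eV: photon energies (eV); alpha: absorption coefficients (1/cm);
    fit_window: (lo, hi) photon-energy range of the linear Tauc region.
    """
    y = (alpha * hv_eV) ** 2                                 # (alpha*h*nu)^2, direct transitions
    lo, hi = fit_window
    mask = (hv_eV >= lo) & (hv_eV <= hi)
    slope, intercept = np.polyfit(hv_eV[mask], y[mask], 1)   # linear fit of the Tauc edge
    return -intercept / slope                                # x-intercept approximates Eg

# Placeholder spectrum: absorption edge near 2.5 eV (illustrative only)
hv = np.linspace(1.5, 3.5, 200)
alpha = 1e4 * np.sqrt(np.clip(hv - 2.5, 0.0, None))
print(f"Eg ~ {tauc_band_gap(hv, alpha, (2.6, 3.2)):.2f} eV")
```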
Conclusions
In2S3 thin films were deposited at different substrate temperatures using the thermal evaporation technique. Further, all the films were annealed in a sulfur atmosphere at 250 °C for 1 h. Annealing improved the crystallinity, the S/In stoichiometry, and the optical transmittance of the layers; among all the films, those grown at Ts = 300 °C followed by sulfur annealing showed the best structural and optical properties.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
"Physics"
] |
Image captioning model using attention and object features to mimic human image understanding
Image captioning spans the fields of computer vision and natural language processing. It generalizes object detection, where the description is reduced to a single word. Recently, most research on image captioning has focused on deep learning techniques, especially Encoder-Decoder models with Convolutional Neural Network (CNN) feature extraction. However, few works have tried using object detection features to increase the quality of the generated captions. This paper presents an attention-based, Encoder-Decoder deep architecture that makes use of convolutional features extracted from a CNN model pre-trained on ImageNet (Xception), together with object features extracted from the YOLOv4 model, pre-trained on MS COCO. This paper also introduces a new positional encoding scheme for object features, the "importance factor". Our model was tested on the MS COCO and Flickr30k datasets, and its performance is compared to that of similar works. Our new feature extraction scheme raises the CIDEr score by 15.04%. The code is available at: https://github.com/abdelhadie-almalla/image_captioning
Related works
In [9], Yin and Ordonez suggested a sequence-to-sequence model in which an LSTM network encodes a series of objects and their positions as an input sequence and an LSTM language model decodes this representation to generate captions. Their model uses the YOLO [8] object detection model to extract object layouts from images (object categories and locations) and increase the accuracy of captions. They also present a variation that uses the VGG [10] image classification model pre-trained on ImageNet [11] to extract visual features. The encoder at each time step takes as input a pair of object category (encoded as a one-hot vector), and the location configuration vector that contains the left-most position, top-most position, width and height of the bounding box corresponding to the object, all normalized. The model is trained with back-propagation, but the error is not propagated to the object detection model. They showed that their model increased in accuracy when combined with CNN and YOLO modules. They did not use all available data from the object features produced by YOLO, such as object dimensions and confidence.
In [12], Vo-Ho et al. developed an image captioning system that extracts object features from YOLO9000 [8] and Faster R-CNN [13]. Each type of features is processed through an attention module to produce local features that represent the part the model is currently focusing on. The two local feature sets are combined and fed into an LSTM model to generate the probabilities of the words in the vocabulary set at each time step. A beam search strategy is used to process the results in order to choose the best candidate caption. They used the ResNet [14] CNN to extract the features from images. Given an image as input, they first extract a list of tags using YOLO9000, then break each tag into words and eliminate redundant ones so that the list contains only unique words. Each word i, including the "null" token, is represented by a one-hot vector of the size of the vocabulary set. After that, they embed each word into a d-dimensional space using the word embedding method. They used LSTM units for language generation. They only keep the top 20 tags with the highest probabilities.
In [15], Lanzendörfer et al. proposed a model for Visual Question Answering (VQA) based on iBOWIMG. The model extracts features from Inception V3 [16] as well as object features extracted from the YOLO [8] object detection model, and uses the attention mechanism. The outputs of YOLO are encoded as vectors of size 80 × 1 in order to give more informative features to the iBOWIMG model, with each column containing the number of detected objects of the given type. Three of these object vectors are produced for detection confidence thresholds of 25%, 50% and 75% and then concatenated with the image features and question features.
In [17], Herdade et al. proposed a spatial attention-based encoder-decoder model that explicitly integrates information about the spatial relationship between detected objects. They employed an object detector to extract appearance and geometry features from all detected objects in the image, then the Object Relation Transformer to generate caption text. They used Faster R-CNN [13] with ResNet-101 [14] as the base CNN for object detection and feature extraction. A Region Proposal Network (RPN) generates bounding boxes for object proposals using intermediate feature maps from the ResNet-101 as inputs. Overlapping bounding boxes with an intersection-over-union (IoU) exceeding a threshold of 0.7 are discarded, using non-maximum suppression. All bounding boxes where the class prediction probability is below a threshold of 0.2 are also discarded. Then, for each object bounding box, they perform mean-pooling over the spatial dimension to build a 2048-dimensional feature vector. These feature vectors are then input to the Transformer model.
In [18], Wang et al. studied end-to-end image captioning with highly interpretable representations obtained from explicit object detection. They performed a detailed review of the effectiveness of a number of object detection-based cues for image captioning. They discovered that frequency counts, object size, and location are all useful and complement the accuracy of the captions produced. They also discovered that certain object categories have a greater effect than others on image captioning.
The work of Sharif et al. [19] suggested to leverage the linguistic relations between objects in an image to boost image captioning quality. They leverage "word embeddings" to capture word semantics and capsulize the semantic relatedness of objects. The proposed model uses linguistically-aware relationship embeddings to capture the spatial and semantic proximity of object pairs. It also uses NASNet to capture the image's global semantics. As a result, true semantic relations that are not apparent in an image's visual content can be learned, allowing the decoder to focus on the most important object relations and visual features, resulting in more semantically-meaningful captions.
Variš et al. [20] investigated the possibility of textual and visual modalities sharing a common embedding space. They presented an approach that takes advantage of object detection labels' textual nature as well as the possible expressiveness of visual object representations built from them. They investigated whether grounding the representations in the captioning system's word embedding space, rather than grounding words or sentences in their associated images, could improve the captioning system's efficiency. Their proposed grounding approaches ensure that the predicted object features and the term embedding space are mutually grounded.
Alkalouti and Masre [21] proposed a model to automate video captioning based on an Encoder-Decoder architecture. They first select the most important frames from the video and remove redundant ones. They used the YOLO model to detect objects in video frames and an LSTM model for language generation.
In [22], Ke et al. investigated the feature extraction performance of 16 popular CNNs on a dataset of chest X-ray images. They did not find a relationship between performance on ImageNet and performance on the medical image dataset. However, they found that for medical tasks the choice of CNN architecture family influences performance more than the concrete model within the family. They also observed that ImageNet pre-training yields a statistically significant boost in performance across architectures, with a higher boost for smaller architectures.
In [23], Xu et al. proposed a novel Anchor-Captioner method. They started by identifying the significant tokens that should be given more attention and using them as anchors. The relevant texts for each chosen anchor were then grouped to create the associated anchor-centered graph (ACG). Finally, they implemented multi-view caption generation based on various ACGs in order to improve the content diversity of generated captions.
In [24], Chen et al. suggested Verb-specific Semantic Roles (VSR) as a new Controllable Image Captioning (CIC) control signal. VSR is made up of a verb and some semantic roles that reflect a specific activity and the roles of the entities involved in it. They trained a Grounded Semantic Role Labeling (GSRL) model to locate and ground all entities associated with each role given a VSR. Then, to learn human-like descriptive semantic structures, they suggested a Semantic Structure Planner (SSP). Lastly, they used a role-shift captioning model to generate the captions.
In [25], Cornia et al. presented a unique framework for image captioning, which allows both grounding and controllability to generate diverse descriptions. They produced the relevant caption using a recurrent architecture that explicitly predicts textual chunks based on regions and adheres to the control's limitations, given a control signal in the form of a series or a collection of image regions. Experiments are carried out using Flickr30k Entities and COCO Entities, a more advanced version of COCO that includes semi-automated grounding annotations. Their findings showed that the method produces state-of-the-art outcomes in terms of caption quality and diversity for controllable image captioning.
Unlike previous works, our approach takes advantage of all object features available. The experiments section shows the effect of this scheme.
Research methodology
The experimental method involves extracting object features from the YOLO model and introducing them along with CNN convolutional features to a simple deep learning model that uses the widespread Encoder-Decoder architecture with the attention mechanism. "Results and discussion" section compares the difference in results before and after adding the object features. Although previous research encoded object features as a vector, we add object features in a simple concatenation manner and achieved a good improvement. We also test the impact of sorting the object tags extracted from YOLO according to a metric that we propose here.
Datasets used
We test our method on two datasets commonly used for image captioning: MS COCO and Flickr30k. Table 1 contains a brief comparison between them. They are both collected from the Flickr photo sharing website and consist of real-life images, annotated by humans (five annotations per image).
It is worth noting that MS COCO does not publish the labels of the testing set.
Evaluation metrics
We use a set of evaluation metrics that are widely used in the image captioning field. BLEU [26] metrics are commonly used in automated text evaluation and quantify the correspondence between a machine translation output and a human translation; in the case of image captioning, the machine translation output corresponds to the automatically produced caption, and the human translation corresponds to the human description of the image. METEOR [27] is computed using the harmonic mean of unigram precision P and recall R, with recall weighted higher than precision: F_mean = 10PR / (R + 9P). ROUGE-L [28] uses a Longest Common Subsequence (LCS) score to assess the adequacy and fluency of the produced text, while CIDEr [29] focuses on grammaticality and saliency. SPICE [30] evaluates the semantics of the produced text by creating a "scene graph" for both the original and generated captions, and then only matches the terms if their lemmatized WordNet representations are identical. BLEU, METEOR, and ROUGE have low correlations with human quality tests, while SPICE and CIDEr have a better correlation but are more difficult to optimize.
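As an illustration of how the n-gram metrics behave, BLEU-1 through BLEU-4 for a single caption can be sketched with NLTK (shown only as an example; the paper itself uses the MS COCO evaluation tool, and the captions below are made up):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Two reference captions and one candidate, pre-tokenized
references = [["a", "man", "riding", "a", "horse", "on", "a", "beach"],
              ["a", "person", "rides", "a", "horse", "near", "the", "sea"]]
candidate = ["a", "man", "rides", "a", "horse", "on", "the", "beach"]

smooth = SmoothingFunction().method1  # avoids zero scores for short captions
for n in range(1, 5):
    weights = tuple([1.0 / n] * n)    # uniform weights over 1..n-grams
    score = sentence_bleu(references, candidate,
                          weights=weights, smoothing_function=smooth)
    print(f"BLEU-{n}: {score:.3f}")
```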
Model
Our model uses an attention-based Encoder-Decoder architecture. It has two methods of feature extraction for image captioning: an image classification CNN (Xception [31]) and an object detection model (YOLOv4 [7]). The outputs of these models are combined by concatenation to produce a feature matrix that carries more information to the language decoder, enabling it to predict more accurate descriptions. Unlike other works that embed object features before combining them with CNN features, we use the raw object layout information directly. Language generation is done using an attention module (Bahdanau attention [32]), a GRU [3] and two fully connected layers. Our model is simple, fast to train and evaluate, and generates captions using attention. We believe that if humans can benefit from object features (such as the class of an object, its position, and size) to better understand an image, a computer model can benefit from this information as well. A scene containing a group of people standing close together, for example, may suggest a meeting, whereas sparse crowds can indicate a public location. Figure 1 depicts our model.
Image encoding
A. Pre-trained image classification CNN
In this work, we use the Xception CNN pretrained on ImageNet [11] to extract spatial features.
Xception [31] (Extreme version of Inception) is inspired by Inception V3 [16], but instead of Inception modules, it has 71 layers with a modified depth-wise separable convolution. It outperforms Inception V3 thanks to better model parameter usage.
We extract features from the last layer before the fully connected layer, following recent works in image captioning. This allows the overall model to gain insight about the objects in the image and the relationships between them instead of just focusing on the image class.
In a previous work, different feature extraction CNN models were compared for image captioning applications. The results showed that Xception was among the most robust at extracting features, and for this reason it was chosen as the feature extraction module here.
B. Object detection model
Our method uses the YOLOv4 [7] model because of its speed and good accuracy, which make it suitable for big data and real-time applications. The extracted features are a list of object features, with every object feature containing the X coordinate, Y coordinate, width, height, confidence rate (from 0 to 1 inclusive), class number and a novel optional "importance factor".
Following human intuition, foreground objects are normally larger and more important when describing an image, and background objects are normally smaller and less important. Furthermore, it makes sense to rely on more accurate pieces of information rather than less accurate ones. Hence, our importance factor balances the importance of large foreground objects and of objects with high confidence rates: for a single object, it gives a higher score to large foreground objects over small background ones, and a higher score to objects detected with high confidence over objects detected with low confidence.
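The authors' exact formula is elided in this copy; a plausible sketch, assuming the factor is simply the product of the detection confidence and the normalized box area (an assumption for illustration, not the paper's published definition), is:

```python
def importance_factor(obj):
    """Hypothetical importance factor: confidence times normalized box area.

    obj = (x, y, w, h, confidence, class_id), with w and h normalized to [0, 1].
    Large, confidently detected objects score highest, matching the intuition
    described in the text; the exact published formula may differ.
    """
    x, y, w, h, confidence, class_id = obj
    return confidence * (w * h)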
After extracting object features, the importance factor is calculated for each object and concatenated to its tag. Then, all objects in the list are sorted according to this importance factor using the quick sort algorithm. Unlike previous works, our method makes use of all of the image's object information. Because of the size restriction in the output of the CNN, we use up to 292 objects, each with seven attributes (including the importance factor), which is usually enough to represent important objects in an image.
The list of features is flattened into a 1D array, of length less than 2048. It is then padded with zeros to length 2048 to be compatible with the output of the CNN module. The output of this stage is an array (1 × 2048).
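Putting the steps of this subsection together, a minimal NumPy sketch (reusing the hypothetical importance_factor above; function and constant names are illustrative) of sorting, flattening, and zero-padding the object features to a 2048-length vector might look like:

```python
import numpy as np

MAX_LEN = 2048  # must match the width of the CNN feature rows

def encode_objects(objects):
    """objects: list of (x, y, w, h, confidence, class_id) tuples from YOLOv4."""
    objects = objects[:292]                        # cap at 292 objects (292 * 7 = 2044 <= 2048)
    rows = [list(o) + [importance_factor(o)] for o in objects]  # 7 attributes per object
    rows.sort(key=lambda r: r[-1], reverse=True)   # most important objects first
    flat = np.asarray(rows, dtype=np.float32).reshape(-1)       # flatten to 1D
    out = np.zeros(MAX_LEN, dtype=np.float32)
    out[:flat.size] = flat                         # zero-pad to length 2048
    return out[np.newaxis, :]                      # shape (1, 2048)
```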
As for calculating the confidence score, YOLO divides an image into a grid. B bounding boxes and confidence scores for these boxes are predicted in each of these grid cells. The confidence score indicates how confident the model is that the box includes an object, as well as how accurate the model believes the predicted box to be. The object detection algorithm is evaluated using the Intersection over Union (IoU) between the predicted box and the ground truth, which measures how similar the predicted box is to the ground truth by calculating their overlap. A cell's confidence score should be zero if no object exists there. The confidence score is calculated as Confidence = Pr(Object) × IoU(pred, truth).
C. Concatenation and embedding
In order to take advantage of the image classification features and the object detection features, we add this concatenation step, where we attach the output of the YOLOv4 subsystem as the last row of the output of stage 1. The output of this stage is of shape (101 × 2048).
The embedding is done using one fully connected layer of length 256. This stage ensures a consistent size of the features and maps the feature space to a smaller space appropriate for the language decoder.
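A minimal TensorFlow sketch of this concatenation and embedding step, assuming the Xception grid features have been reshaped to (100, 2048) per image so that appending the object row yields the 101 × 2048 matrix described above (names are illustrative):

```python
import tensorflow as tf

embed = tf.keras.layers.Dense(256, activation="relu")  # the 256-unit embedding layer

def encode_image(cnn_features, object_vector):
    """cnn_features: (batch, 100, 2048) Xception grid features.
    object_vector: (batch, 1, 2048) padded YOLOv4 object features."""
    combined = tf.concat([cnn_features, object_vector], axis=1)  # (batch, 101, 2048)
    return embed(combined)                                       # (batch, 101, 256)
```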
D. Attention
Our method uses the Bahdanau soft attention system [32]. This deterministic attention mechanism makes the model as a whole smooth and differentiable.
The term "attention" refers to a strategy that simulates cognitive attention. The effect highlights the most important parts of the input data while fading the rest. The concept is that the network should dedicate greater computer resources to that small but critical portion of the data. Which component of the data is more relevant than others is determined by the context and is learned by gradient descent using training data. Natural language processing and computer vision use attention in a number of machine learning tasks.
The attention mechanism was created to increase the performance of the encoder-decoder architecture for machine translation, and as image captioning can be viewed as a specific case of machine translation, attention proved useful when analyzing images as well. The attention mechanism was intended to allow the decoder to use the most relevant parts of the input sequence in a flexible manner by combining all of the encoded input vectors into a weighted combination, with the most relevant vectors receiving the highest weights.
Attention follows the human intuition of focusing on different parts of an image when describing it. Using object detection features also follows the intuition that knowing about object classes and positions help to grasp more about the image than mere convolutional features. When attention is employed to both feature types, the system will focus on different features of both object classes and positions in the same image.
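A minimal sketch of Bahdanau (additive) attention in TensorFlow, in the form commonly used for image captioning; the layer sizes are assumptions consistent with the 256-dimensional encoder output above, not the authors' exact configuration:

```python
import tensorflow as tf

class BahdanauAttention(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)  # projects encoder features
        self.W2 = tf.keras.layers.Dense(units)  # projects decoder hidden state
        self.V = tf.keras.layers.Dense(1)       # scores each feature row

    def call(self, features, hidden):
        # features: (batch, 101, 256) encoder output; hidden: (batch, units) GRU state
        hidden_with_time = tf.expand_dims(hidden, 1)               # (batch, 1, units)
        score = self.V(tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time)))
        attention_weights = tf.nn.softmax(score, axis=1)           # over the 101 rows
        context_vector = tf.reduce_sum(attention_weights * features, axis=1)
        return context_vector, attention_weights
```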
Language decoder
For decoding, a GRU [3] is used to exploit its speed and low memory usage. It produces a caption by generating one word at every time step, conditioned on a context vector, the previous hidden state, and the previously generated words. The model is trained using the backpropagation algorithm deterministically. The GRU is followed by two fully connected layers. The first one is of length 512, and the second one is of the size of the vocabulary to produce output text.
The training process for the decoder is as follows (a minimal sketch of one training step is given after this list):
1. The features are extracted and passed through the encoder.
2. The decoder receives the encoder output, the hidden state (initialized to 0), and the decoder input (the start token).
3. The decoder returns the predictions as well as the decoder hidden state.
4. The decoder hidden state is then passed back into the model, and the loss is calculated using the predictions.
5. To determine the next decoder input, "teacher forcing" is employed, a technique that passes the target word as the next input to the decoder.
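A sketch of one such training step with teacher forcing, assuming encoder, decoder, tokenizer, and optimizer objects along the lines described in this section (all names are illustrative, not the authors' code):

```python
import tensorflow as tf

loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=True, reduction="none")

def loss_function(real, pred):
    mask = tf.cast(tf.math.not_equal(real, 0), pred.dtype)  # ignore padding tokens
    return tf.reduce_mean(loss_object(real, pred) * mask)

@tf.function
def train_step(img_tensor, target, encoder, decoder, tokenizer, optimizer):
    loss = 0.0
    hidden = tf.zeros((target.shape[0], decoder.units))          # step 2: zero-initialized state
    dec_input = tf.expand_dims(
        [tokenizer.word_index["<start>"]] * target.shape[0], 1)  # step 2: start token
    with tf.GradientTape() as tape:
        features = encoder(img_tensor)                           # step 1
        for i in range(1, target.shape[1]):
            predictions, hidden, _ = decoder(dec_input, features, hidden)  # step 3
            loss += loss_function(target[:, i], predictions)               # step 4
            dec_input = tf.expand_dims(target[:, i], 1)                    # step 5: teacher forcing
    variables = encoder.trainable_variables + decoder.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss / int(target.shape[1])
```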
Pre-processing
This section presents the pre-processing algorithm that was performed on the data (a tokenization and padding sketch follows this list):
1. Shuffle the dataset at random into image-caption pairs. This helps the training process converge quickly and prevents bias, so the model cannot learn the order of the training examples.
2. Read and decode the images.
3. Resize the images to the CNN requirements: whatever the original size of the image, it is resized to 299 × 299 as required by the Xception CNN model.
4. Tokenize the text. Tokenization breaks the raw text into words separated by punctuation, special characters, or white space. The separators are discarded.
5. Count the tokens, sort them by frequency, and choose the top 15,000 most common words as the system's vocabulary. This avoids over-fitting by eliminating terms that are not likely to be useful.
6. Generate word-to-index and index-to-word structures. They are then used to translate token sequences into word identifier sequences.
7. Padding. As sentences differ in length, the inputs must have the same size, which is where padding is necessary. Here, identifier sequences are padded at the end with null tokens to ensure that they are all of the same length.
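Steps 4 through 7 map directly onto the Keras text utilities; a minimal sketch, assuming a list of raw caption strings named train_captions (an illustrative name):

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Steps 4-6: tokenize, keep the 15,000 most frequent words,
# and build the word-to-index / index-to-word mappings.
tokenizer = Tokenizer(num_words=15000, oov_token="<unk>")
tokenizer.fit_on_texts(train_captions)
word_to_index = tokenizer.word_index
index_to_word = {i: w for w, i in word_to_index.items()}

# Translate captions into identifier sequences.
sequences = tokenizer.texts_to_sequences(train_captions)

# Step 7: pad at the end with the null token (index 0) to a common length.
caption_vectors = pad_sequences(sequences, padding="post")
```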
Results and discussion
Our code is written in the Python programming language using the TensorFlow library. The CNN implementation and trained model were imported from the Keras library, and a YOLOv4 model pre-trained on MS COCO was imported from the yolov4 library. This work uses the MS COCO evaluation tool to calculate scores.
Tests are conducted on two widely used datasets for image caption generation: MS COCO and Flickr30k, which contain 123,000 and 31,000 images, respectively, with five reference captions per image. For MS COCO, 5,000 images are reserved for validation and 5,000 for testing, following Karpathy's split [33]. For the Flickr30k dataset, 29,000 images are used for training, 1,000 for validation, and 1,000 for testing. The model was trained for 20 epochs with Sparse Categorical Cross Entropy as the loss function and the Adam optimizer. Table 2 presents the results of the proposed model on the MS COCO Karpathy split and compares them to the results of the baseline model with features only from Xception. The evaluation scores increase markedly after adding object features to the model, especially the CIDEr score, which increased by 15.04%. This reflects improved correlation with human judgment when using full object features, and boosted grammatical integrity and saliency. The importance factor increases the BLEU metrics and decreases METEOR slightly, while the other metric values stay the same. Unlike the findings of Herdade et al. [17], our artificial positional encoding scheme did not decrease the CIDEr score; they tested multiple artificial positional encoding schemes and compared them to their geometric attention mechanism.
To show the effectiveness of our method, we compare our increase in results (with the importance factor) to the increase in results of Yin and Ordonez [9] on the MS COCO Karpathy split in Table 3. They also measured the effects of incorporating object features on image captioning results; their object feature extraction method extracts object layouts from the YOLO9000 model and encodes them through an LSTM. Their baseline model has higher accuracy than ours, which may explain the difference between our scores and theirs. They did not report BLEU-1, BLEU-2, BLEU-3 or SPICE scores. Table 3 shows that our results are broadly comparable to those of Yin and Ordonez [9]. We report all eight standard evaluation scores. The introduction of this type of feature extraction improves all evaluation scores over our baseline model. The increase in the SPICE score (5.88%) reflects increased semantic correlation when using object features, an expected consequence of feeding object tags into the model; SPICE is one of the metrics that are harder to optimize. The score difference between our model and theirs may be related to the feature combination and encoding method: they encode each feature type in a vector and then add the two vectors, while our model concatenates the two feature sets directly.
We also compare our work with that of Sharif et al. [19], who tried to benefit from linguistic relations between objects in an image; Table 4 presents a comparison between our model and theirs on the Flickr30k dataset [34]. Table 4 shows that our method also yields an improvement on Flickr30k, with the biggest gain in the METEOR score. Sharif et al. benefited from linguistic information in addition to object detection features. Figure 3 displays a comparison between the baseline model and the model enhanced with object features on the MS COCO Karpathy split [33] validation and testing sets. We see a clear increase in the results on all evaluation metrics on both sets, which indicates low generalization error and supports our hypothesis that enhancing the vision model with object detection features improves accuracy.
To qualitatively compare the textual outputs of the approach, Fig. 4 presents a comparison between the results with and without object features. The difference is remarkable: adding the object features makes the sentences more grammatically salient, with fewer object errors. In (a), for example, a skier was identified instead of just the skiing boots. In (b), the model without object features had mixed up people and snowboards. In (c), the two cows were correctly identified after adding object features. In (d), the model without object features falsely identified a man in the picture. In (e), the model could not identify the third bear without object features. In (f), object features helped to identify a group of people instead of only two women.
Conclusions
In this paper, we presented an attention-based Encoder-Decoder image captioning model that uses two methods of feature extraction, an image classification CNN (Xception) and an object detection module (YOLOv4), and proved the effectiveness of this scheme. We introduced the importance factor, which prioritizes large foreground objects over small background ones and favors objects with high confidence over those with low confidence, and demonstrated its effect on increasing scores. We showed how our method improved the scores relative to previous works, especially on the CIDEr metric, which increased by 15.04%, reflecting improved grammatical saliency. Unlike previous works, our work benefits from all object detection features extracted from YOLO and shows the effect of sorting the extracted object tags. This can be further improved by better methods for combining object detection features.
"Computer Science"
] |
Functional CD1d and/or NKT cell invariant chain transcript in horse, pig, African elephant and guinea pig, but not in ruminants
CD1d-restricted invariant natural killer T cells (NKT cells) have been well characterized in humans and mice, but it is unknown whether they are present in other species. Here we describe the invariant TCR α chain and the full length CD1d transcript of pig and horse. Molecular modeling predicts that porcine (po) invariant TCR α chain/poCD1d/α-GalCer and equine (eq) invariant TCR α chain/eqCD1d/α-GalCer form complexes that are highly homologous to the human complex. Since a prerequisite for the presence of NKT cells is the expression of CD1d protein, we performed searches for CD1D genes and CD1d transcripts in multiple species. Previously, cattle and guinea pig have been suggested to lack CD1D genes. The CD1D genes of European taurine cattle (Bos taurus) are known to be pseudogenes because of disrupting mutations in the start codon and in the donor splice site of the first intron. Here we show that the same mutations are found in six other ruminants: African buffalo, sheep, bushbuck, bongo, N'Dama cattle, and roe deer. In contrast, intact CD1d transcripts were found in guinea pig, African elephant, horse, rabbit, and pig. Despite the discovery of a highly homologous NKT/CD1d system in pig and horse, our data suggest that functional CD1D and CD1d-restricted NKT cells are not universally present in mammals.
Introduction
CD1d proteins are expressed on the surface of a variety of antigen presenting cells and non-hematopoietic cells, and present cellular self-lipids and exogenous lipids with an α-anomerically linked sugar to T cells with a highly conserved, invariant TCR, NKT cells. CD1d−/− mice have no detectable mature NKT cells (Chen et al., 1997; Gapin et al., 2001), showing that functional CD1D genes are a prerequisite for their development. NKT cells have been implicated in oral tolerance, autoimmunity, dendritic cell maturation, tumor surveillance, and anti-microbial immunity. Natural exogenous ligands for NKT cells presented by CD1d have been identified, such as GSL-I from Sphingomonas species (Kinjo et al., 2005) and BbGL-II from Borrelia burgdorferi (Kinjo et al., 2006). The entire population of NKT cells can be activated strongly by the synthetic ligand α-galactosylceramide (α-GalCer) (Kawano et al., 1997), which is considered a universal super agonist for NKT cells. It has been suggested that the CD1d/NKT system evolved to cope with pathogens that produce antigens with α-glycosidic linkages (Kinjo et al., 2005), but there is only limited supportive data available.
Sphingomonas species contain antigens that are presented by CD1d to NKT cells. Sphingomonas, a genus that does not include highly pathogenic bacteria, belongs to the class of α-proteobacteria. This class contains peptidoglycan- and LPS-negative bacteria, including pathogenic tick-borne genera: Rickettsia, Anaplasma, and Ehrlichia, all causing morbidity and mortality in livestock. Unfortunately, none of these bacteria has been studied closely enough to determine whether they contain antigens for NKT cells. Ehrlichia ruminantium (formerly named Cowdria ruminantium (Dumler et al., 2001)) causes heartwater (cowdriosis), Anaplasma bovis (formerly named Ehrlichia bovis) causes bovine ehrlichiosis, and A. marginale and A. centrale cause bovine anaplasmosis; these diseases are major problems in the livestock industry in sub-Saharan Africa. Some indigenous African breeds of cattle are more resistant to heartwater and anaplasmosis than other breeds, but this can be explained by higher resistance to the vector (ticks of the genus Amblyomma). All breeds of cattle will develop clinical disease once they get infected. Since the aforementioned bacterial pathogens do not carry the signature danger molecules LPS and peptidoglycan, recognition by the innate immune system other than TLRs, like the CD1d/NKT system, may be of crucial importance in the early defense against these pathogens.
It has been suggested that the group 1 CD1 proteins (CD1a, CD1b, CD1c) are not universally present in all species, whereas group 2 CD1 proteins (CD1d) are. CD1D genes have indeed been found in most mammalian species studied, including primates like humans and chimpanzees (Pan troglodytes), African green monkeys (Chlorocebus sabaceus) and rhesus macaques (Macaca mulatta) (Saito et al., 2005), mice (Mus musculus) (Bradbury et al., 1988), rats (Rattus norvegicus) (Ichimiya et al., 1994), cottontail rabbits (Sylvilagus floridanus) (Calabi et al., 1989), sheep (Ovis aries) (Rhind et al., 1999), and pigs (Sus scrofa) (Eguchi-Ogawa et al., 2007). However, not all of these genes have been shown to lead to functional transcripts or proteins yet. CD1 genes have also been discovered in chickens (Gallus gallus) (Maruoka et al., 2005; Miller et al., 2005; Salomonsen et al., 2005), but chicken CD1 genes could not be classified according to the existing isoforms and are therefore named CD1.1 and CD1.2. Two species have until now been suggested to have no functional CD1D genes. Before the availability of its genome, the CD1 gene family of the guinea pig (Cavia porcellus) had been well characterized, but a CD1D gene was not identified (Dascher et al., 1999). In cattle, two CD1D genes have been identified, named CD1D1 and CD1D2, but these are in fact pseudogenes (Van Rhijn et al., 2006). The two CD1D pseudogenes that were identified both contain a mutated start codon and an unspliceable intron. In this paper we describe CD1D pseudogenes in N'Dama cattle and five other ruminants, including sheep, which had previously been assumed to have functional CD1D genes. Functional CD1d transcripts were identified in guinea pig, African elephant, horse, rabbit, and pig.
NKT cells can be distinguished by their highly conserved invariant TCR. The NKT cell population can be visualized by flow cytometric analysis using fluorescently labeled CD1d tetramers loaded with α-GalCer that interact with the NKT cell TCR. Human and murine CD1d tetramers are known to stain human and murine NKT cells, also in a species cross-reactive manner, so it is possible that these tetramers also recognize NKT cells in other species. However, the lack of species cross-reactive staining does not prove absence of NKT cells. Alternatively, evidence for the existence of NKT cells in a species might come from TCR α chain sequences. Recent data on the molecular interactions between α-GalCer-loaded CD1d and the invariant TCR (Kjer-Nielsen et al., 2006; Scott-Browne et al., 2007) have provided clear insights into these interactions and allow detailed predictions on whether CD1d and TCR protein sequence homologs that are found in other species, like dog and horse as described in this paper, are likely to be true functional homologs.
Our data provide supportive evidence that functional CD1d transcripts and/or NKT cells are present in several mammalian species, but not in ruminants. This shows that the CD1d/NKT system is not universally present, as was previously thought. The CD1d/NKT system may be lacking in ruminants altogether, providing a possible explanation for their high sensitivity to Rickettsia, Anaplasma, and Ehrlichia.
Searches in genomes and EST databases
BLAST searches were performed in selected genomes (www.ensembl.org) and EST databases (www.ncbi.nlm.nih.gov/BLAST) with the nucleotide sequence of the α1 and α2 domains of human CD1D (NM_001766) and with the human TRAV10 segment (also called Vα24) used by NKT cells (AE000659). The results of the CD1D searches were included in a phylogenetic tree together with the α1 and α2 domains of the known CD1 isoforms to assess with which CD1 isoforms they group. The obtained potential CD1D genes were also checked for the presence of a leader peptide, α1, α2, and α3 domains, and a transmembrane region. The nucleotide sequence of the α1 domain of the published sheep CD1d cDNA (AJ006722) was used to perform a BLAST search in the NCBI sheep EST database (www.ncbi.nlm.nih.gov/BLAST). The predicted amino acid sequences of the hits obtained from BLAST searches with the human TRAV10 segment in the genomes of selected species were all aligned and evaluated as described in Section 3.
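For readers who wish to reproduce this kind of search locally, the following is a minimal Python sketch that drives the standalone NCBI BLAST+ blastn tool and keeps hits above an identity threshold. The file names, database name, E-value, and the 70% identity cutoff are illustrative assumptions, not values from this study.

import subprocess

# Hypothetical inputs: a FASTA file with the human CD1D alpha1/alpha2 query
# and a preformatted nucleotide BLAST database of ESTs (both are assumptions).
QUERY = "human_CD1D_a1a2.fa"
DB = "sheep_est_db"

# Run blastn with tabular output (-outfmt 6 columns start with:
# qseqid sseqid pident length ... evalue bitscore).
result = subprocess.run(
    ["blastn", "-query", QUERY, "-db", DB, "-outfmt", "6", "-evalue", "1e-5"],
    capture_output=True, text=True, check=True,
)

# Keep hits with, e.g., >70% nucleotide identity (illustrative cutoff).
hits = []
for line in result.stdout.splitlines():
    fields = line.split("\t")
    sseqid, pident = fields[1], float(fields[2])
    if pident > 70.0:
        hits.append((sseqid, pident))

for sseqid, pident in sorted(hits, key=lambda h: -h[1]):
    print(f"{sseqid}\t{pident:.1f}% identity")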
Sequence analysis and homology modeling
Homology models of pig (Sus scrofa) and horse (Equus caballus) CD1d, as well as the α chains of their invariant NKT cell TCRs, were built using the Swiss-Model server (Schwede et al., 2003), using both the human CD1d and Vα24 TCR crystal structures as templates. The obtained CD1d and TCR models were superimposed onto their corresponding human counterparts in the CD1d/α-GalCer/Vα24 TCR crystal structure (PDB code 2PO6). No reorientation of the TCR was necessary to accommodate the TCR CDR loops, due to their similar orientation in both models. The CD1d surface residues in all three CD1d orthologs are mostly conserved, except for a glycine residue instead of the human tryptophan (W153), which is responsible for tilting the galactose of α-GalCer when bound to human CD1d (Koch et al., 2005) in comparison to mouse CD1d. Therefore, we manually modeled this galactose in the orientation that it adopts when bound to mouse CD1d, as mCD1d also has this conserved glycine residue (Zajonc et al., 2005). The models were visualized using PyMOL (pymol.sourceforge.net).
The Translate Nucleic Acid Sequence Tool (http://biotools.umassmed.edu) was used for translation into amino acids. Alignments were performed and trees generated with ClustalW and Phylip. SignalP, available at http://www.cbs.dtu.dk/services/SignalP/, was used to predict leading fragments and cleavage sites.
Invariant α chain analysis
The human TRAV10 V segment (also called Vα24) that is used by the human NKT invariant TCR was used to identify TCR α chain V segments in the genomes of cat, dog, horse, pig, cattle, guinea pig, African elephant, rabbit, and sheep. All resulting Vα segment sequences were translated and aligned with the human TRAV10 segment. We considered all V segments with higher sequence homology to TRAV10 than to any other human V segment as candidate V segments for the NKT invariant chain in other species. Because the CDR1 region is encoded by the V segment and is known to interact with α-GalCer (Kjer-Nielsen et al., 2006; Scott-Browne et al., 2007), we only included V segments in which at least two residues, including the proline that was indicated as crucial in all studies, were identical to the human TRAV10 CDR1 region (VSPFSN). According to these criteria we identified one candidate V segment in cat, dog, horse, pig, guinea pig, African elephant, rabbit, and sheep, and three in cattle (Table 1a).
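The CDR1-based inclusion rule described above is simple enough to express directly in code. The following is a minimal Python sketch of that filter, assuming the candidate CDR1 sequences have already been extracted and aligned to the six-residue human CDR1 (VSPFSN); the example sequences are invented for illustration.

HUMAN_CDR1 = "VSPFSN"    # human TRAV10 CDR1
CRUCIAL_POS = 2          # 0-based index of the crucial proline (P) in VSPFSN

def is_candidate(cdr1: str) -> bool:
    """Keep a V segment if >=2 CDR1 residues match human TRAV10,
    including the crucial proline."""
    if len(cdr1) != len(HUMAN_CDR1):
        return False
    matches = sum(a == b for a, b in zip(cdr1, HUMAN_CDR1))
    return matches >= 2 and cdr1[CRUCIAL_POS] == HUMAN_CDR1[CRUCIAL_POS]

# Invented example CDR1 sequences (not data from this study):
for name, cdr1 in [("speciesA", "VSPYSN"), ("speciesB", "IAPFTD"), ("speciesC", "ASGFKN")]:
    print(name, "candidate" if is_candidate(cdr1) else "rejected")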
Using a forward primer before or at the CDR1 region of the candidate V segments and a reverse primer in the constant segment, we amplified partial TCR α chains covering the CDR1, CDR2, CDR3, and part of the constant domain. For this purpose, PBMC-derived cDNA was available from cat, dog, horse, pig, guinea pig, African elephant, rabbit, sheep, and cattle. CDR3 sequences that were highly homologous to the human and murine NKT CDR3α were obtained from horse (two out of four sequences) and pig (one out of 11 sequences). Six out of eight sequences obtained from cat had a two-amino-acid deletion in the CDR3 compared to the human and murine sequences. From cattle, one out of 15 sequences showed high homology to the human CDR3, but it had one extra amino acid. None of the 15 sheep sequences, eight guinea pig sequences, eight rabbit sequences, and one African elephant sequence showed homology to the human invariant CDR3α (Table 1b). We were not able to derive TCR α chain sequences from dog. To predict whether the obtained CDR1α and CDR3α loops would be able to interact with a CD1d/α-GalCer complex, we generated models using the Swiss-Model server (Schwede et al., 2003) and compared these to the available human data. The horse and pig invariant TCR α chain/CD1d/α-GalCer models suggest that these α chain sequences are fully functional invariant NKT cell TCR sequences, capable of binding α-GalCer when presented by a species-matched CD1d molecule (Fig. 1). Even though otherwise highly conserved, the differences in CDR3α length of the obtained bovine and feline sequences make it difficult to predict whether the residues that normally interact with α-GalCer do so in these species, and therefore we cannot conclude that these sequences represent the bovine or feline NKT invariant chain.
CD1D pseudogenes in ruminants
PCR products were generated using genomic DNA from N'Dama cattle (Bos taurus), African buffalo (Syncerus caffer), sheep (Ovis aries), roe deer (Capreolus capreolus), bushbuck (Tragelaphus scriptus), and bongo (Tragelaphus eurycerus), using heterologous CD1D primers. Subsequent cloning of PCR products and sequencing of at least four independent bacterial colonies of each species resulted in CD1D sequences available at GenBank under accession numbers EU247610-EU247617 and FJ028651-FJ028652. In cases of small differences between sequences derived from one species, the sequence closest to the consensus was submitted to GenBank. Alignment of the newly derived ruminant sequences with previously published CD1D sequences of humans and cattle (Fig. 2a) revealed that all newly derived ruminant CD1D sequences have the same disrupting mutations as the bovine CD1D genes. The start codon is mutated, and the donor splice site of the first intron (the intron after the leading fragment) is mutated, rendering it an unspliceable intron. Interestingly, the mutated donor splice site of that intron forms ATG in all ruminant CD1D genes and might function as an alternative start codon. This ATG is in the right reading frame and in most cases does not lead to any premature stop codons. However, the protein that would be synthesized is not predicted to contain a leading fragment by the SignalP program and thus cannot be expressed at the cell surface (signal peptide probability: 0.001; signal anchor probability: 0.000). In N'Dama cattle and bongo we found one gene homologous to bovine CD1D1 and another gene homologous to bovine CD1D2. The obtained African buffalo and bushbuck sequences are homologous to bovine CD1D1. The roe deer and sheep sequences could not be classified as CD1D1 or CD1D2 (Fig. 2b).
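The two disrupting features described here, a mutated start codon and a mutated (GT) donor splice site, are straightforward to screen for computationally. Below is a minimal Python sketch of such a screen; the coordinate of the donor site and the toy sequence are illustrative assumptions, and a real analysis would work from an annotated gene model.

def check_cd1d_features(genomic_seq: str, donor_site_pos: int) -> dict:
    """Screen a CD1D genomic sequence for the two disrupting mutations
    described in the text. donor_site_pos is the 0-based position where
    the first intron's GT donor dinucleotide is expected (an assumption
    that would come from an annotated gene model)."""
    seq = genomic_seq.upper()
    return {
        "start_codon_intact": seq.startswith("ATG"),
        "donor_site_intact": seq[donor_site_pos:donor_site_pos + 2] == "GT",
        # The mutated donor site forming ATG could act as an alternative start:
        "alt_start_at_donor": seq[donor_site_pos:donor_site_pos + 3] == "ATG",
    }

# Invented toy sequence: mutated start (ACG) and donor site mutated to ATG.
toy = "ACG" + "GCC" * 20 + "ATG" + "GCC" * 10
print(check_cd1d_features(toy, donor_site_pos=63))
# -> {'start_codon_intact': False, 'donor_site_intact': False, 'alt_start_at_donor': True}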
The published sheep CD1d mRNA sequence with accession number AJ006722 (Rhind et al., 1999) does not show disruptive mutations, while the sheep CD1D pseudogene we describe here does. Comparison of exons 1-3 of these two sequences revealed that they were >98% identical at the nucleotide level, suggesting that AJ006722 may be a transcript of the gene we report here. To obtain additional data on the status of the sheep CD1D gene, we investigated CD1D transcripts in the sheep EST database. A BLAST search with the nucleotide sequence of exon 2, encoding the α1 domain of the AJ006722 sequence, resulted in five hits that were >98% identical at the nucleotide level, suggesting that they were transcripts of the same gene. Three of these hits (EE803429, DY491833, and DY491595) contained a mutated start codon and an unspliceable intron between the leading fragment and the α1 domain. The other two hits did not contain any sequence upstream of the α1 domain. From this we conclude that the EST database contains no functional CD1D transcripts corresponding to the AJ006722 sequence, but does contain transcripts of the pseudogene we describe in this paper. The only sheep CD1 proteins that have been demonstrated at the protein level are CD1b and CD1e, isolated by immunoprecipitation with an antibody that recognizes multiple ruminant CD1 molecules (Rhind et al., 1999).
Table 1. Sequences of V segments homologous to TRAV10 and CDR3 of T cells using these V segments. (a) Amino acid sequences of TRAV10-homologous V segments in several species, identified by searching the available genomic data. In green: CDR1. (b) TCR α chain sequences using the TRAV10 homologs were derived from PBMC from multiple species. The CDR3 of these TCR α chains that are highly homologous to the CDR3 of the human and mouse NKT TCR α chain are aligned (top panel). The human and mouse sequences that are included were derived from the literature. CDR3 that were not homologous to the CDR3 of the human and mouse NKT TCR α chain, but were used by TRAV10-homologous V segments, are shown for comparison (lower panel). Green: CDR1; grey: the first two amino acids of the FGXG motif, forming the end of the CDR3.
Fig. 1. NKT cell receptor α chain binding to CD1d-bound α-GalCer. Residues of CDR1α (green) and CDR3α (cyan) that directly interact through hydrogen bonding with α-GalCer are represented as sticks, colored by atom (oxygen in red, nitrogen in blue). The α-GalCer ligand is shown as yellow sticks, while the CD1d α1 helix is shown in grey. The α2 helix of CD1d was removed for clarity. Hydrogen bonds are depicted as blue dashed lines. Only one residue in the porcine and equine CDR1α sequence (Asn30) differs from the human counterpart (Ser30), but the model suggests that it can still hydrogen bond with the α-GalCer ligand. Several other TCR residues that are involved in binding to CD1d residues are also conserved or similar but not shown. See the sequence alignment of CD1d (Fig. 3) and NKT TCR (Table 1) for detailed sequence conservation.
CD1D genes and CD1d transcripts in non-ruminant species
CD1D sequences were identified in the genomes of dog, cat, pig, guinea pig, horse, African elephant, rabbit, nine-banded armadillo, small Madagascar hedgehog, European shrew, and northern tree shrew (Table 2). A full-length CD1D sequence without any of the characteristics of pseudogenes could be found in pig, horse, and nine-banded armadillo. The CD1D sequences of the other mammals were incomplete because of gaps in the genomic sequences; however, the available parts of the sequences did not show any of the characteristics of pseudogenes. In order to obtain the full-length coding sequences of the incomplete genes, and proof that the CD1D genes are transcribed and properly spliced in vivo, we successfully cloned full-length CD1d transcripts from guinea pig, rabbit, horse, and African elephant PBMC (accession numbers FJ028653-FJ028656). Alignment of these sequences with the human and murine CD1d sequences (Fig. 3) shows that the residues on the surface of CD1d that interact with the NKT TCR are highly conserved. In contrast to all other CD1d sequences, the African elephant CD1d sequence has a truncated cytoplasmic tail and lacks a YXXZ motif. The YXXZ motif in the tail sequence of murine and human CD1d is needed for interaction with AP-2 and thus trafficking to the late endosome (Chiu et al., 1999; Rodionov et al., 1999).
Discussion
In this paper we show that the NKT/CD1d system is present in horse and pig. Equine and porcine NKT invariant α chains and CD1d transcripts were sequenced, and their models suggest that they are likely to function like their human and murine counterparts. In addition, we sequenced full-length CD1d cDNA of African elephant, guinea pig, and rabbit, and we show that (partial) functional CD1D genes are present in the genomes of dog, cat, African elephant, nine-banded armadillo, Madagascar hedgehog, European shrew, and northern tree shrew, suggesting that these species may also have a functional NKT/CD1d system. However, in the six ruminant species we studied here, all CD1D genes we identified were nonfunctional, which strongly suggests that ruminants may not have NKT cells. Human and murine CD1d-restricted NKT cells can be detected using α-GalCer-loaded CD1d tetramers. Even though human and murine CD1d tetramers cross-react between these two species, lack of detection of NKT cells in ruminants using murine or human CD1d tetramers is not conclusive, because these reagents may not cross-react with ruminants. Using the same molecular approach that led to the identification of the invariant NKT α chain in pig and horse, we were unable to identify invariant NKT α chain homologs in guinea pig, cat, rabbit, African elephant, cattle, and sheep. So, even though these species do express V segments homologous to TRAV10 (Vα24), we have not found these V segments in combination with the canonical NKT CDR3α. It is possible that we did not obtain invariant NKT α chain sequences because the TRAV10-homologous V segment is often used by other, non-NKT cells in these species, our sample size is not big enough, and/or NKT cells are underrepresented in PBMC. Therefore, based on TCR α chain sequences alone, we cannot conclude that NKT cells are absent in these species. However, the combination of the fact that ruminants lack functional CD1D genes and the observed absence of an invariant α chain sequence among 26 different ruminant (bovine and ovine) TRAV10 homolog-containing TCR α chain sequences points strongly to an absence of NKT cells in ruminants.
Table 2 notes: (a) The genomic sequence contained gaps; the available part does not contain any of the characteristics of a pseudogene. (b) Full-length transcripts of this gene that are predicted to translate into a functional protein have been described in this paper. (c) The gene is complete and did not contain any of the characteristics of a pseudogene, but it is unknown whether the gene is transcribed and translated in vivo.
CD1d presents lipids with an α-glycosidic linkage to NKT cells and may therefore be an important molecule for stimulating the immune system in response to α-proteobacteria that contain these compounds. Ruminants are very sensitive to infection with these pathogens. Previously we have shown that European cattle lack functional genes for CD1D. Because we found CD1D pseudogenes and no functional CD1D genes in African N'Dama cattle (Bos taurus), three other species of the subfamily Bovinae (bongo, bushbuck, and African buffalo), a member of the family Bovidae that does not belong to the subfamily Bovinae (sheep), and a ruminant that is a member of the family Cervidae (roe deer), we conclude that CD1d proteins are probably absent in all ruminants, though we are aware that we have not formally proven this. In the absence of a fully finished and assembled genome, it is difficult to prove that a functional CD1D gene is absent in a certain species. Southern blotting detects hybridizing sequences, but does not discriminate between functional genes and pseudogenes or between comigrating restriction fragments. Especially the latter poses problems, because the homology of CD1 genes can be exceptionally high, and such genes will be cut in an identical way by the restriction enzymes used, leading to an underestimation of the real number of CD1 genes. This probably explains why the guinea pig was previously suspected of having no CD1D gene (Dascher et al., 1999) based on Southern blot data, while we report a guinea pig CD1D gene and transcript here.
Fig. 3. Comparison of CD1d sequences. The human and murine CD1d sequences were aligned with the newly derived guinea pig, rabbit, horse, and African elephant sequences (accession numbers FJ028653-FJ028656). Residues in the human CD1d sequence known to interact with the human NKT TCR CDR3α are in yellow/underlined, and those interacting with the human CDR2 are in red/underlined. The YXXZ motif in the tail sequence is shown in green/bold/italics.
Our data on the presence of a CD1D pseudogene in sheep, carrying a mutated start codon and an unspliceable intron, seem to contradict published data on sheep CD1d. A full-length cDNA sequence of sheep CD1d has been published and is predicted to translate into a normal CD1d protein (Rhind et al., 1999). However, this cDNA sequence was assembled in silico from partial PCR products; a full-length cDNA sequence in which the first intron was properly spliced out has never been obtained (Rhind, personal communication). In the sheep EST database we could only find CD1D pseudogene transcripts, and no functional transcript analogous to the published one. Together, this suggests that the published sheep CD1d cDNA may derive from transcripts of a sheep CD1D pseudogene and is consistent with the possibility that sheep do not have functional CD1d.
The artiodactyl pig (S. scrofa) is the closest relative of the ruminants that we studied, and it has a functional CD1D gene and no CD1D pseudogene. This dates the loss of a functional CD1D gene by point mutations, and thus the emergence of a CD1D pseudogene, to approximately 65 million years ago, when the ancestors of Suidae and Ruminantia diverged (Kumar and Hedges, 1998). This is consistent with the fact that we have found only CD1D pseudogenes in all ruminants studied and argues against the presence of a functional sheep CD1D gene.
To emphasize the special status of group 2 CD1 molecules as compared to group 1 CD1 molecules, it has often been stated that CD1d molecules and NKT cells are universally present in mammals, while this is not the case for group 1 CD1 molecules. The lack of functional CD1D genes in a considerable group of animals, as we show here, suggests that there is no reason for a special status for CD1d proteins based on universal distribution among mammals. In addition to having different expression patterns and being slightly separated based on sequence homology, group 2 CD1 molecules (CD1d) are thought to differ fundamentally from group 1 CD1 molecules (CD1a, CD1b, and CD1c) in that they stimulate an invariant T cell population. However, in addition to being able to activate NKT cells with an invariant TCR, CD1d has been shown to also stimulate other, non-invariant T cells (Baron et al., 2002; Huber et al., 2003; Jahng et al., 2004; Van Rhijn et al., 2004). Whether CD1d is the only member of the CD1 family of proteins that can stimulate an invariant T cell population remains open: it is possible that invariant group 1 CD1-restricted T cell populations will be discovered in the future, and if so, this would question whether group 1 and group 2 CD1 proteins really perform fundamentally different functions.
Feature extraction and identification of leak acoustic signal in water supply pipelines using correlation analysis and Lyapunov exponent
The leakage of water supply pipelines is a common problem worldwide. Timely discovery and treatment of leaks can prevent drinking water pollution, save water resources, and avoid road collapse accidents. Therefore, studying pipeline leak detection methods is of great practical significance. In this experiment, piezoelectric acceleration sensors were placed at different locations on a leaking pipe to acquire the leakage signals. Based on the generation mechanism of leak acoustic signals, the unpredictability characteristics of the leak signal are investigated. The autocorrelation function is used to describe the unpredictability of the leak signal, because it can reveal the coherent structure of a time series, while the Lyapunov exponent can describe its complexity. The autocorrelation function sequence is used as the feature extraction object, and the Lyapunov exponent of this sequence is used to quantify the signal characteristics. With this method, leakage can be effectively identified.
Introduction
Water is a precious resource, and the distribution of water resources is very uneven. China's per capita water resources are a quarter of the world average. Even so, the average loss rate caused by leakage in China's water supply networks is more than 30 %, which is 2 to 3 times that of developed countries [1]. Leakage in a water supply pipe network not only wastes a lot of water but can also cause a wider range of water pollution [2]. In order to minimize the loss, it is very important to judge whether a water supply pipe is leaking and to accurately locate the leakage point.
Non-acoustic leak detection methods usually require a lot of time, for example the uninterrupted flow detection method [3], the minimum night flow method [4], and pipeline pressure signal analysis [5]. These methods are only suitable for certain specific pipes, or their detection results are strongly affected by the environment.
Acoustic-signal-based methods developed as follows. Initially, people judged a leak by the sound coming from the pipe, which relies too heavily on the operator's experience. In 1993, Wan proposed adaptive filtering to denoise the collected acoustic signals and used spectral analysis to determine whether there was a leak in the pipeline [6]. Tang Xiujia applied the neural network method to pipeline leakage identification [7], using the signal amplitude, average value, and other statistics as characteristic quantities to identify whether the pipeline leaked; this method cannot eliminate interference from other fixed sound sources. Ai et al. combined Linear Predictive Coding Cepstrum Coefficients (LPCC) with Hidden Markov Models (HMM) to improve leakage identification [8]. Toshitaka and Akira proposed identifying leakage with a support vector machine (SVM), using the power spectral density of the leakage signal and an AR-model-based destruction factor as the characteristics of the leak acoustic signal [9].
Problem description
When there is a crack or small hole in a water supply pipe, the water sprays outward and interacts with the pipe to produce vibration. The vibration propagates far away through the waveguide formed by the pipe and the fluid inside it. Acceleration sensors are placed on both sides of the leak point to obtain the vibration signals [2], as shown in Fig. 1.
Pipeline leakage was simulated in the laboratory; the actual experimental device is shown in Fig. 2.
The signals collected by the sensors can be simplified as:
x1(t) = s(t) + n1(t),  x2(t) = a·s(t − τ0) + n2(t),  (1)
where x1(t) and x2(t) represent the signals collected by the two sensors, s(t) is the leakage signal, n1(t) and n2(t) represent noise, τ0 is the difference in delay time of the source signal reaching the two sensors, and a is the attenuation factor. The location of the leakage point can be calculated by combining the length L of the pipeline between the sensors and the propagation velocity v of the leakage signal [2]:
d1 = (L − v·τ0)/2,  (2)
where d1 is the distance from the leak point to the sensor that receives the signal first. From the above analysis it can be seen that the leak location method is based on the delay time. However, when a pipeline is inspected in a real environment, a variety of non-leakage sounds can also cause vibration.
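As a concrete illustration of this signal model, the following Python sketch synthesizes the two sensor signals of Eq. (1) from a common source with a known delay and attenuation. The sampling rate, delay, and noise level are arbitrary illustrative values, and the synthetic pair x1, x2 is reused in the cross-correlation localization sketch later in this paper.

import numpy as np

rng = np.random.default_rng(0)

fs = 10_000            # sampling rate in Hz (illustrative)
n = 20_000             # number of samples
tau0 = 0.0096          # true delay difference in seconds (illustrative)
a = 0.7                # attenuation factor

s = rng.standard_normal(n)          # broadband stand-in for the leak source s(t)
shift = int(round(tau0 * fs))       # delay expressed in samples

x1 = s + 0.1 * rng.standard_normal(n)
x2 = a * np.roll(s, shift) + 0.1 * rng.standard_normal(n)  # x2(t) = a*s(t - tau0) + n2(t)
x2[:shift] = 0.1 * rng.standard_normal(shift)              # discard wrapped samples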
Analysis and description of the turbulent characteristics of the leakage signal
The fluid near the leak point becomes turbulent, generating turbulence noise; when water is ejected outside the pipe, a low-pressure zone appears near the leak. Gas nuclei in the water form vacuoles (cavitation bubbles) in the low-pressure area, and this cavitation effect also produces acoustic signals [10]. It can be seen that turbulence noise and cavitation noise are the root causes of the leak acoustic signal, and the characteristics of the leak acoustic signal are closely related to the mechanisms of turbulence and vacuole motion at the leak.
Fluid turbulence is not a completely random movement; there is order within the disorder. Turbulence alternates intermittently between an "active period" and a "quiet period," and the "active period" is repeatable. Due to the uncertainty in time and space of the generation and rupture of cavitation bubbles, however, the "active period" of the turbulence is no longer reproducible. Therefore, the leak sound has an active period: the acoustic signal generated by turbulence and vacuoles at a certain moment has a certain survival time, and the leak signals observed within the same active period are self-similar. However, the similarity and correlation between the acoustic signals of different active periods are weak. The autocorrelation functions of the observed signals were analyzed. Within an "active period," the autocorrelation function has a larger value, indicating self-similarity. Due to the influence of cavitation, the correlation between different active periods is weak; therefore, at delay times beyond the correlation length, the signal autocorrelation function shows irregular behavior. Fig. 3 shows the time-domain waveforms and autocorrelation results of the signal detected by the accelerometer. From these results it can be seen that: 1) in Fig. 3(b), the autocorrelation function shows a decaying trend, and the duration of this decay corresponds to the duration of the "active period" of the leak signal; 2) in Fig. 3(c), as the delay time τ increases, R(τ) oscillates around R(τ) = 0, but the oscillation is irregular, which indicates that the correlation between the leak signals in one active period and those in other active periods is weak. Fig. 4 shows the time-domain waveform of a ground-knocking signal and its autocorrelation function; the knocking point is 2 m from the pipe, and the signal is still collected by the acceleration sensor on the pipe. In Fig. 4(c), R(τ) oscillates around R(τ) = 0, but the oscillation has a certain regularity, with an obvious oscillation onset and decay process. According to the principle of autocorrelation, this signal is "reproducible," which is quite different from leak signals (as shown in Fig. 3).
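The normalized autocorrelation sequence used in this analysis can be computed as follows; this is a minimal NumPy sketch (biased estimator, normalized so that R(0) = 1), not the authors' exact implementation.

import numpy as np

def autocorr(x: np.ndarray, max_lag: int) -> np.ndarray:
    """Biased, normalized autocorrelation R(tau) for tau = 0..max_lag-1."""
    x = x - x.mean()
    full = np.correlate(x, x, mode="full")      # lags -(n-1)..(n-1)
    r = full[full.size // 2:]                   # keep non-negative lags
    return r[:max_lag] / r[0]                   # normalize so R(0) = 1

# Example on the synthetic sensor signal from the earlier sketch:
# R = autocorr(x1, max_lag=2000)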
From this analysis it can be seen that some features of leak and non-leak acoustic signals differ considerably, and the autocorrelation function can reflect this difference. Therefore, leak detection can be achieved by using part of the autocorrelation function as the feature extraction object.
The same result was found in several other experiments with non-leak sources, such as a running machine and a passing car.
Application of the Lyapunov exponent
Because chaotic systems are sensitive to initial values and robust against noise interference, the application of chaos theory to signal detection is an important current development trend.
The maximum Lyapunov exponent can be used to determine whether a system is chaotic [11]. In most cases it suffices to calculate only the maximum Lyapunov exponent: if it is greater than zero, the system is chaotic; otherwise, it is not.
In this paper, the autocorrelation function and the maximum Lyapunov exponent are used together to identify leakage. The specific procedure is: 1) the autocorrelation function of the signal collected by the sensor is calculated first; 2) the function values beyond the signal's correlation length are used as the calculation sequence for the maximum Lyapunov exponent, which is then used to decide whether a leak has occurred.
When calculating the Lyapunov exponent, phase space reconstruction is performed first [11]. In the reconstruction, the embedding dimension and delay time directly affect the quality of the reconstructed phase space, and hence the speed and precision of the Lyapunov exponent calculation.
In this paper, the improved mutual information method [12] is used to calculate the delay time, and a tree-based implementation of the false nearest neighbors method [13] is used to calculate the embedding dimension. The maximum Lyapunov exponent obtained in each case is shown in Table 1. From these results it can be found that the maximum Lyapunov exponent is positive and relatively large when a leak exists. When noise from a non-chaotic system is present (such as machine sound), the maximum Lyapunov exponent is negative. However, there is also turbulent flow at the valve corner of the pipeline; there the maximum Lyapunov exponent is positive, but its value is small.
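The processing chain described above, autocorrelation tail as the feature sequence followed by the maximum Lyapunov exponent, can be sketched with the open-source nolds package, which provides a Rosenstein-style estimator. The embedding dimension, lag, and correlation-length cutoff below are illustrative assumptions rather than this paper's tuned values, which come from the mutual information and false-nearest-neighbors methods.

import numpy as np
import nolds  # pip install nolds

def leak_feature(signal: np.ndarray, corr_len: int, max_lag: int) -> float:
    """Autocorrelation tail -> maximum Lyapunov exponent (Rosenstein method)."""
    r = autocorr(signal, max_lag)        # from the earlier autocorrelation sketch
    tail = r[corr_len:]                  # values beyond the correlation length
    # Illustrative embedding parameters (assumptions, not the paper's values):
    return nolds.lyap_r(tail, emb_dim=6, lag=4)

# A positive, relatively large value would indicate a leak:
# lam = leak_feature(x1, corr_len=200, max_lag=2000)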
The cross-correlation of the signals obtained by the two sensors then yields the corresponding delay time Δτ; the result is shown in Fig. 8, with Δτ = 0.009596. Substituting this delay time into Eq. (2) gives the leak location, and the calculated result is basically in line with the actual situation.
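A minimal sketch of this localization step is given below, reusing the synthetic sensor pair from the signal-model sketch above; the pipe length L and wave speed v are illustrative values, not measurements from this experiment. For long records, an FFT-based correlation would be preferable to the direct method used here.

import numpy as np

def locate_leak(x1: np.ndarray, x2: np.ndarray, fs: float, L: float, v: float) -> float:
    """Estimate the delay via the cross-correlation peak and apply Eq. (2)."""
    a = x1 - x1.mean()
    b = x2 - x2.mean()
    xcorr = np.correlate(b, a, mode="full")   # lags -(n-1)..(n-1)
    lag = np.argmax(xcorr) - (len(a) - 1)     # peak lag in samples (x2 relative to x1)
    dtau = lag / fs                           # delay in seconds
    return (L - v * dtau) / 2                 # Eq. (2): distance to the leading sensor

# Illustrative usage with assumed pipe length and wave speed:
# d1 = locate_leak(x1, x2, fs=10_000, L=20.0, v=1_200.0)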
Conclusions
Based on the generation mechanism of the leak sound signal, the interaction between turbulence and cavitation at the leak is analyzed, explaining the "non-repeatable" character of the leak signal's "active period." The signal autocorrelation function is used to describe this characteristic of the leak signal; in particular, the autocorrelation values beyond the signal's correlation length reflect the "non-repeatability." These values are used as the feature extraction object, and the leak is identified with the maximum Lyapunov exponent.
In future work, we will study the Lyapunov exponent while varying individual parameters (such as the vibration frequency and water pressure) and compare the method with other detection approaches.
Detection of Pseudomonas aeruginosa in Clinical Samples Using PCR Targeting ETA and gyrB Genes
Pseudomonas aeruginosa has a variety of virulence factors that contribute to its pathogenicity. Therefore, rapid detection with high accuracy and specificity is very important in the control of this pathogenic bacterium. To evaluate the accuracy and specificity of a Polymerase Chain Reaction (PCR) assay, the ETA and gyrB genes were targeted to detect pathogenic strains of P. aeruginosa. Seventy swab samples were taken from patients with infected wounds and burns in two hospitals in the cities of Erbil and Koya in Iraq. The isolates were identified using traditional phenotypic methods, and DNA was extracted from the positive samples for PCR with species-specific primers targeting ETA, the gene encoding exotoxin A, and the gyrB gene. The results of this study indicate that 100% of the P. aeruginosa isolates harbored the gyrB gene, whereas 74% harbored the ETA gene. The specificity of PCR for detection of P. aeruginosa based on both genes was 100%, since no amplified product was obtained using DNA extracted from other bacterial species. Given the importance of rapid detection of this bacterium and the problems associated with biochemical methods, PCR targeting multiple virulence genes is suggested for identifying pathogenic strains of P. aeruginosa isolated from clinical infections, which should speed the diagnosis needed for antimicrobial therapy.
Introduction:
Pseudomonas aeruginosa is an environmentally ubiquitous Gram-negative bacterium and a leading nosocomial pathogen causing various infectious syndromes (1). Clinically, this microorganism plays a critical role in the survival rates of affected patients; hence it is important to detect it quickly and accurately. In general, detection of P. aeruginosa is done by standard traditional culture methods such as morphologic and biochemical tests (2). However, these tests are lengthy and unreliable, and because of the seriousness of the infection, a rapid and sensitive technique for early detection of pathogenic P. aeruginosa is needed. DNA-based techniques, such as the polymerase chain reaction (PCR), have been shown to provide the accuracy, specificity, and reliability required for the identification of pathogenic bacteria (3). Many such PCR methods have been applied to the identification of P. aeruginosa (4,5).
A variety of virulence factors contribute to the pathogenicity of P. aeruginosa and have been targeted by PCR for its detection, such as the oprL and exoS genes (6), the gyrB gene (2,7), the ecfX gene (8), quorum sensing genes (9), and genes encoding phospholipase (plcH), rhamnolipid AB (rhlAB), alkaline protease (aprA), and elastase (lasB) (10), besides exotoxin A (the ETA gene) (11,12). The ETA gene is species-specific and conserved in P. aeruginosa and is not present in other species of the genus Pseudomonas. The gene encoding exotoxin A has been sequenced and characterized and is known to contribute to P. aeruginosa pathogenesis, since strains deleted for this gene are less damaging than parental strains (13). The main problem with PCR detection methods for P. aeruginosa is that they target only one gene, which is inadequate for comprehensive and reliable diagnosis (7,14). Because P. aeruginosa strains demonstrate high genotypic diversity (15), and many studies have confirmed the absence of one or more virulence genes in some strains (16), this study aimed to detect pathogenic strains of P. aeruginosa by targeting the ETA gene and another gene, gyrB, which encodes the subunit B protein of DNA gyrase (7,8).
Materials and Methods:
Bacterial isolation: Between June and October 2015, 70 swab samples were taken from infected wounds and burns of patients attending the hospitals in the Erbil and Koya cities in the Kurdistan region of Iraq. Brain Heart Infusion broth was used to enrich the bacteria, which were then cultured on MacConkey agar plates and incubated at 37°C overnight to observe colony morphology. The observed colonies were inoculated on the selective medium (cetrimide agar) and processed further with biochemical tests according to MacFaddin (17), including growth at 42°C on trypticase soy agar, indole production, Voges-Proskauer (VP), methyl red, urease activity, oxidase, citrate utilization, and catalase tests.
DNA Extraction:
Genomic DNA was prepared according to Oliveira et al. (18) as follows: from a single colony, 10 ml cultures were grown in broth medium for 12 hours and then centrifuged for 5 min at 6000 rpm to pellet the cells. The pellets were resuspended in TE buffer (pH 8) with 30 mg/ml lysozyme and incubated for 2 hours at 37°C. After that, TE buffer (pH 8) containing proteinase K (1 mg/ml) was added for 1 hour to denature cellular proteins, and 10 μl of 20% sodium dodecyl sulfate (SDS) was added and incubated for 1 h at 37°C. An equal volume of phenol/chloroform/isoamyl alcohol (24/24/1) was added, the mixture was placed on a shaker for 30 minutes and then centrifuged for 5 min at 6000 rpm. The supernatant was transferred into a clean microtube, and ammonium acetate was added at 10% of the volume together with an equal volume of cold isopropanol to precipitate the genomic DNA. The precipitated DNA was transferred into another microtube and washed with 200 μl of 70% ethanol. Finally, the washed DNA was dissolved in TE buffer and stored at -20°C until use.
Application of Polymerase Chain Reaction:
Two sets of primers were used for PCR. The first set consisted of the forward primer 5'-GACAACGCCCTCAGCATCACCAGC-3' and the reverse primer 5'-CGCTGGCCCATTCGCTCCAGCGCT-3', targeting the exotoxin A (ETA) gene and amplifying a 222 bp fragment. The second set consisted of the forward primer 5'-AAGTACGAAGGCGGTCTGAA-3' and the reverse primer 5'-GTTGTTGGTGAAGCAGAGCA-3', targeting the gyrB gene and amplifying a 367 bp fragment; both primer sets are specific for P. aeruginosa. The 25 μl PCR reaction mixture contained 2 U of Taq DNA polymerase, 2.5 μl of 10x PCR buffer (10 mM Tris-HCl (pH 8.5), 30 mM KCl, 1.5 mM MgCl2), 0.4 mM of each dNTP, 50 ng of template DNA, and 10 pmol (0.5 μM) of each primer. The amplification program was run as follows: one cycle at 95°C for 2 min; 30 cycles of 92°C for 60 s, annealing at 59°C for the ETA gene or 55°C for the gyrB gene, and 72°C for 1 min; and finally one cycle at 72°C for 8 min. The amplified products were run on a 1.2% agarose gel for 90 min at 75 V, stained with ethidium bromide, and visualized with a UV transilluminator.
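As an in silico sanity check on primer pairs like these, one can search a reference sequence for the forward primer and the reverse complement of the reverse primer and report the predicted amplicon length. The following Python sketch illustrates the idea on a placeholder template; it assumes exact primer matches and a single binding site, which real genomes do not guarantee.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    return seq.translate(COMPLEMENT)[::-1]

def predicted_amplicon(template: str, fwd: str, rev: str):
    """Length of the product delimited by fwd and the reverse complement of rev,
    assuming exact, unique matches on the plus strand; None if not found."""
    start = template.find(fwd)
    rev_site = template.find(reverse_complement(rev))
    if start == -1 or rev_site == -1 or rev_site < start:
        return None
    return rev_site + len(rev) - start

# Placeholder template; a real check would load the P. aeruginosa reference genome.
fwd = "AAGTACGAAGGCGGTCTGAA"            # gyrB forward primer
rev = "GTTGTTGGTGAAGCAGAGCA"            # gyrB reverse primer
template = fwd + "N" * 327 + reverse_complement(rev)
print(predicted_amplicon(template, fwd, rev))   # -> 367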
Results and Discussion:
Identification of P. aeruginosa using phenotypic methods.
Out of 70 samples collected from patients with infected wounds and burns, 24 (34%) of the isolates were identified as P. aeruginosa using phenotypic methods. This reflects the wide distribution of P. aeruginosa in hospital environments such as air and distribution systems (19). Moreover, P. aeruginosa has acquired many antibiotic resistance genes and is therefore dominant compared with other kinds of bacteria. As this bacterium is introduced into areas where membranes and skin are disrupted by direct tissue damage, it is prevalent in wound and burn trauma. The phenotypic results included the formation of large, pale, translucent, mucoid colonies on MacConkey agar plates and a greenish-blue color on nutrient agar (due to production of the pyocyanin and fluorescein pigments). The identification of the isolates was confirmed by the API 20E system and biochemical assay results (20,21). Despite the successful use of traditional methods for identification of P. aeruginosa, they are often lengthy and still require validation.
Because Pseudomonas species are phenotypically very unstable, detection of this bacterium at the molecular level is very important, especially for its pathogenic strains.
PCR Analysis
Recently, it has become difficult to cure infections caused by P. aeruginosa due to its acquisition of many new antibiotic resistance genes that allow it to survive, spread easily, and convert to chronic colonization (22). Early diagnosis and control of this pathogen have therefore become vital for positive patient outcomes. Many researchers have attempted to develop DNA-based techniques, especially PCR, for the detection of P. aeruginosa because of their ease and accuracy; however, a definitive methodology is still lacking. Two pairs of primers specific to P. aeruginosa genes were used in this study. The benefit of using more than one target gene for identifying one organism is to provide more confirmation and confidence in the identity of the organism and to reduce the potential for false-negative results caused by sequence variation at the primer binding sites (23). The first pair targeted the exotoxin A (ETA) gene, as in many previous studies (24,25). Amplified products of the predicted size of 222 bp were obtained using the DNA extracted from 74% of the P. aeruginosa isolates (Fig. 1). These results are in accordance with those of Naiem (26), who detected this gene in 66% of strains collected from human clinical infection samples at an Al-Diwanyia (Iraq) hospital. The ratio was 73% in another study conducted on samples collected in Baghdad (27) and 75% in a study conducted on samples collected from Kirkuk Hospital (28). Exotoxin A is produced by the majority of pathogenic P. aeruginosa isolates and can inhibit host protein biosynthesis by blocking polypeptide chain elongation factor 2 (29). The absence of the ETA gene from some strains is due to mutation of that gene. Applying PCR to another virulence gene (gyrB) yielded an amplified fragment of 367 bp in 100% of the isolates (Fig. 2), in agreement with other studies (25,30), indicating that this gene lies in a conserved region of the genome of this bacterium. The differences in the percentages of virulence genes are due to several factors, including the nature of the location and the types of prevalent strains (31). Since the specificity of PCR primers is a very important criterion for the detection of any bacterium, it was investigated for the detection of P. aeruginosa based on the two genes used in this study; the specificity was 100%, in agreement with other studies (32), since no amplified product was obtained using DNA extracted from various other bacterial species, including Salmonella typhimurium, Staphylococcus aureus, Shigella dysenteriae, and Escherichia coli. The present study is the first of its kind in Erbil/Iraq to detect the presence and distribution of the two virulence genes ETA (toxA) and gyrB across the genomes of P. aeruginosa isolates by PCR. Moreover, this technique proved to be a reliable assay, specific for the identification of this bacterium in a short time and at low cost.
Conclusion:
The results indicate that 100% of the P. aeruginosa isolates from burn and wound infections harbored the gyrB gene, whereas 74% of these isolates had the ETA gene. Considering the importance of rapid and early detection of pathogenic strains and the time and resources required for biochemical methods, PCR targeting multiple virulence genes (ETA and gyrB) is suggested for the identification of pathogenic strains of P. aeruginosa. This assay can be used for screening of infections to guide effective antimicrobial therapy.
Integrated analysis of fine-needle-aspiration cystic fluid proteome, cancer cell secretome, and public transcriptome datasets for papillary thyroid cancer biomarker discovery
Thyroid ultrasound and ultrasound-guided fine-needle aspiration (USG/FNA) biopsy are currently used for diagnosing papillary thyroid carcinoma (PTC), but their detection limit could be improved by combining other biomarkers. To discover novel PTC biomarkers, we herein applied a GeLC-MS/MS strategy to analyze the proteome profiles of serum-abundant-protein-depleted FNA cystic fluid from benign and PTC patients, as well as two PTC cell line secretomes. From them, we identified 346, 488, and 2105 proteins, respectively. Comparative analysis revealed that 191 proteins were detected in the PTC but not the benign cystic fluid samples, and thus may represent potential PTC biomarkers. Among these proteins, 101 were detected in the PTC cell line secretomes, and seven of them (NPC2, CTSC, AGRN, GPNMB, DPP4, ERAP2, and SH3BGRL3) were reported in public PTC transcriptome datasets as having elevated mRNA expression in PTC. Immunoblot analysis confirmed the elevated expression levels of five proteins (NPC2, CTSC, GPNMB, DPP4, and ERAP2) in PTC versus benign cystic fluids. Immunohistochemical studies of nearly 100 pairs of PTC tissues and their adjacent non-tumor counterparts further showed that AGRN (n = 98), CTSC (n = 99), ERAP2 (n = 98), and GPNMB (n = 100) were significantly (p < 0.05) overexpressed in PTC, and higher expression levels of AGRN and CTSC were also significantly associated with metastasis and poor prognosis of PTC patients. Collectively, our results indicate that an integrated analysis of FNA cystic fluid proteome, cancer cell secretome, and tissue transcriptome datasets represents a useful strategy for efficiently discovering novel PTC biomarker candidates.
INTRODUCTION
Thyroid cancer is the most prevalent malignant endocrine carcinoma in the world. In America, the incidence rate of thyroid cancer increased by an average of 4.6% per year between 2004 and 2013, and the disease is now ranked as the third most common cancer in women [1]. Among all of the histological types of thyroid cancer,
papillary thyroid carcinoma (PTC) accounts for the majority (80-85%) of cases. Most PTC patients have a good prognosis, with a 10-year survival rate > 90%; however, a small portion of PTCs are aggressive and may develop distant metastases that are associated with higher mortality [2]. Currently, preoperative ultrasound-guided fine-needle aspiration (USG/FNA) followed by cytopathologic diagnosis is the standard procedure for examining thyroid nodules and determining therapeutic modalities [3]. However, 10-20% of all cases yield indeterminate biopsy results on preoperative USG/FNA diagnosis. These patients normally undergo thyroidectomy, but this is often unnecessary, as post-surgery immunohistological analysis has shown that 80% of the suspicious cases are benign [4,5]. Thus, if we hope to reduce unnecessary thyroidectomies and prevent deterioration in PTC, we urgently need reliable biomarkers that can distinguish between benign and malignant nodules in patients with indeterminate lesions.
Cystic fluid and tissue/cell specimens obtained from thyroid nodules during the FNA procedure represent ideal resources for discovering and verifying PTC biomarkers. Various types of potential PTC biomarkers have been explored in FNA cystic fluid and tissue/cell specimens, including DNA mutations/rearrangements and proteins. Several gene mutations have been identified in thyroid cancer. For example, somatic RET/PTC gene rearrangements and the BRAF V600E point mutation are frequently found in PTC, and clinicians have used them to decide whether or not to undertake thyroidectomy [2,6]. However, FNA cystic fluid samples typically contain very few cancer cells, limiting their usefulness for cytologic evaluation or genetic marker screening, and creating ambiguity in the diagnosis of thyroid cancer [7].
To address this issue, several groups have applied proteomic approaches to discover potential PTC biomarkers from FNA cystic fluid samples, PTC cell secretomes, or tissue specimens. For example, two studies identified secreted proteins (secretome analysis; 154 and 83 proteins, respectively) from PTC cell lines using LC-MS/MS analysis [8,9]. Two other studies investigated the differential protein profiles between benign and malignant cystic fluids using two-dimensional gel electrophoresis/MALDI-TOF-MS or iTRAQ-based LC-MS/MS, and further verified two and three proteins, respectively, by Western blotting, ELISA, and/or immunohistochemical analysis of PTC cystic fluids and tissue samples [10,11]. More recently, Martínez-Aguilar et al. used MS-based quantitative expression analysis of over 1600 proteins in normal thyroid tissues versus PTC to identify ~180 proteins that are deregulated in PTC tumors [12]. Although these proteomic studies have discovered numerous proteins as potential PTC biomarkers, challenges remain in how to efficiently prioritize and select targets for further validation in a large PTC sample cohort.
In this study, we used an integrated omics approach to identify potential PTC biomarkers. We applied a GeLC-MS/MS strategy to comprehensively analyze the proteome profiles of abundant-protein-depleted FNA cystic fluids (i.e., those depleted of the top 14 most abundant proteins) from benign and PTC patients, as well as secretomes from the PTC cell lines BHP 7-13 and CGTH W3. Using a criterion of at least two peptide hits for confident protein identification, we identified 346 and 488 proteins expressed in the FNA cystic fluids of benign and PTC patients, respectively, and 2105 proteins in the secretomes of the two PTC cell lines. Integrated analysis of these three datasets revealed that 101 proteins found in the PTC cystic fluid but not the corresponding benign samples were also detected in the PTC cell line secretomes. We then combined two publicly available mRNA microarray datasets representing PTC tissues and used them to identify the seven strongest candidates: NPC2, CTSC, AGRN, GPNMB, DPP4, ERAP2, and SH3BGRL3. Finally, we used Western blot analysis and/or immunohistochemistry to confirm the upregulation of selected candidates in PTC specimens and evaluated their associations with the clinicopathological characteristics of PTC patients.
The strategy we used to improve the identification of novel PTC biomarkers is delineated in Figure 1. We used USG/FNA to collect thyroid cystic fluid samples from 12 patients: five cases of PTC and seven cases of benign disease (Supplementary Table 1). The individual cystic fluid samples from seven benign and three PTC specimens were pooled into two groups containing equal amounts of protein. These samples were subjected to top-14-high-abundance-protein depletion followed by GeLC-MS/MS-based proteomic profiling. The protein staining patterns of the abundant-protein-depleted fractions are shown in Figure 2A and were quite different from those of the un-depleted (crude) and abundant-protein fractions (Supplementary Figure 1). Our results demonstrated that the highly abundant proteins accounted for >90% of the total protein mass in the thyroid cystic fluid, as previously seen in human serum [13,14]. The gel lane of each sample was sliced into 60 fractions, and the proteins were subjected to in-gel tryptic digestion and identified by LC-MS/MS analysis using an LTQ-Orbitrap. Supplementary Table 2A lists the MS-identified proteins in the abundant-protein fractions of benign cystic fluid, as captured by a Human 14 MARS column. To examine whether the combined use of these technology platforms (i.e., abundant protein depletion and GeLC-MS/MS) can significantly improve the detection rate of low-abundance proteins in thyroid cystic fluid samples, we used the same GeLC-MS/MS approach to analyze, respectively, the undepleted and abundant-protein-depleted cystic fluid samples pooled from three PTC specimens. This analysis identified 241 proteins in the undepleted sample (Supplementary Table 2B), far fewer than the 488 proteins identified in the depleted sample (Supplementary Table 3B). Regarding the total identified spectra, a high proportion (45%, 24689/54411) of the spectra in the undepleted sample derived from abundant plasma proteins, compared with a significantly lower proportion (19%, 22474/115905) in the depleted sample (Supplementary Table 2C). After removing the abundant plasma protein identities from these two datasets, 194 and 456 proteins remained in the undepleted and depleted samples, respectively; importantly, we found that 284 proteins could only be identified in the depleted sample. Collectively, these findings confirmed that abundant plasma protein depletion significantly increases the detection rate of low-abundance proteins in thyroid cystic fluid.
From the benign and PTC abundant-protein-depleted samples (60 μg each), the GeLC-MS/MS approach identified 346 and 488 proteins, respectively, with multiple (≥2) peptide hits and a false discovery rate (FDR) of 0.58-1.23% (Table 1 and Supplementary Table 3A-3B). Recent studies have reported galectin-3 (LGALS3) as a reliable immunohistochemical and serum biomarker for PTC detection [15,16]. Thus, it is notable that we identified galectin-3 in our FNA cystic fluid dataset, and our label-free spectral counting approach found that its expression level was much higher (~7.8-fold) in our PTC samples than in our benign cystic fluid samples (Supplementary Table 4).
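Label-free comparison by spectral counting, as used here for galectin-3, reduces to a simple ratio of normalized counts. The sketch below shows one common formulation (counts normalized to the total spectra per sample, with a small pseudocount so that proteins absent from one sample do not divide by zero); the numbers are invented for illustration and are not this study's data.

def spectral_count_fold_change(counts_a: dict, counts_b: dict, pseudo: float = 0.5) -> dict:
    """Fold change (sample A over sample B) of per-protein spectral counts,
    normalized by each sample's total spectral count."""
    total_a = sum(counts_a.values())
    total_b = sum(counts_b.values())
    proteins = set(counts_a) | set(counts_b)
    return {
        p: ((counts_a.get(p, 0) + pseudo) / total_a) /
           ((counts_b.get(p, 0) + pseudo) / total_b)
        for p in proteins
    }

# Invented toy counts (PTC vs benign):
ptc = {"LGALS3": 39, "ALB": 900}
benign = {"LGALS3": 5, "ALB": 950}
fc = spectral_count_fold_change(ptc, benign)
print(f"LGALS3 fold change: {fc['LGALS3']:.1f}")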
GeLC-MS/MS-based secretomic analysis of two PTC cell lines
Serum-free conditioned media of two PTC cell lines (CGTH W3 and BHP 7-13) were concentrated and desalted, and 50 μg of protein from each sample was resolved by 8-14% gradient SDS-PAGE (Figure 1). The protein staining patterns of the conditioned media from CGTH W3 and BHP 7-13 cells are shown in Figure 2B. As a quality control, we used Western blotting to examine the distribution of α-tubulin, an abundant cytoplasmic protein, between total cell extracts and the conditioned media. We clearly detected α-tubulin in the total cell extracts but obtained little or no such signal from the conditioned media (Figure 2C), confirming that the secreted/shed proteins had been specifically collected from the cultured PTC cell lines. The gel lane of each sample was sliced into 40 fractions, and the proteins were identified by the above-described GeLC-MS/MS approach. Starting from 50 μg of protein from the conditioned media, we identified 1921 and 830 proteins in the CGTH W3 and BHP 7-13 cell lines, respectively, with an FDR of 0.12-0.21% (Table 1 and Supplementary Table 3C-D). Our analysis thus yielded a total of 2105 proteins that served as our dataset for PTC biomarker selection (Supplementary Table 5A).
The identified proteins were further analyzed using bioinformatics programs designed to predict protein secretion pathways. Among the 537 non-redundant proteins identified in our thyroid cystic fluid samples, 246 were predicted to be classical secreted proteins, as assessed by the SignalP 4.0 program (SignalP-TM probability ≥0.50, SignalP-noTM ≥0.45) based on the presence of a signal peptide in target proteins with or without transmembrane (TM) sequences [29]. The SecretomeP 2.0 program predicted that 131 proteins would be released through the non-classical pathway (SignalP score < 0.50 or 0.45 and SecretomeP score ≥0.50) [30]. Among them, only five proteins were determined to be integral membrane proteins, as assessed by TMHMM [31]. Among the 2105 proteins of the PTC cell secretome dataset, 426 were predicted to be classical secreted proteins, 723 were predicted to be non-classical secreted proteins, and 81 were predicted to be integral membrane proteins (Table 2). Taken together, our results indicated that 71.1% (382 of 537) of the cystic fluid proteins and 58.4% (1230 of 2105) of the conditioned-media proteins of cultured PTC cells could be secreted/released via different mechanisms. Notably, the cystic fluid proteins of the up-regulated group described above included a higher percentage (84.9%, 73 of 86) of secretory proteins than those of the down-regulated group (Tables 4 and 6). These results indicate that most PTC-related proteins are predicted to be secretory, and thus PTC biomarkers could be exploited in cystic fluid. In addition, we used the Gene Ontology (GO) annotation tool in the DAVID Bioinformatics Resources (v6.8) to perform functional annotation of the 277 proteins with two-fold up-regulation or solely detected in PTC cystic fluids, covering biological process, molecular function, and cellular component categories. Regarding biological process, the top three categories were platelet degranulation, complement activation (classical pathway), and complement activation (Supplementary Figure 2A). The major molecular functions were serine-type endopeptidase activity, structural molecule activity, and heparin binding (Supplementary Figure 2B). The protein localization was mainly extracellular, including extracellular exosome, extracellular space, and extracellular region (Supplementary Figure 2C).
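The classification rules described in this section can be summarized as a small decision function. Below is a minimal Python sketch of that logic, operating on pre-computed SignalP, SecretomeP, and TMHMM outputs; the argument names and the order of the checks are assumptions, since the actual tools emit their own report formats that would need parsing.

def classify_secretion(signalp_tm: float, signalp_notm: float,
                       secretomep: float, tm_helices: int) -> str:
    """Classify a protein from SignalP/SecretomeP/TMHMM scores, following the
    thresholds used in the text (SignalP-TM >= 0.50 or SignalP-noTM >= 0.45;
    SecretomeP >= 0.50 for non-classical secretion)."""
    if tm_helices >= 1:
        return "integral membrane"
    if signalp_tm >= 0.50 or signalp_notm >= 0.45:
        return "classical secretion"
    if secretomep >= 0.50:
        return "non-classical secretion"
    return "intracellular/other"

# Invented example scores:
print(classify_secretion(0.62, 0.30, 0.10, 0))  # classical secretion
print(classify_secretion(0.10, 0.12, 0.71, 0))  # non-classical secretion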
Selecting novel PTC cystic fluid biomarker candidates through a combined analysis of the PTC cystic fluid proteome, cell secretome, and tissue transcriptome
To search for PTC-specific biomarkers, the 191 proteins (Figure 3A) uniquely detected in PTC cystic fluid samples were compared with the 2105 proteins (Figure 3B) identified in the secretomes of the BHP 7-13 and CGTH W3 cell lines. We also mined two public cDNA microarray datasets (the E-GEOD-3678 dataset from EBI-ArrayExpress and the GDS1665 dataset from Gene Expression Omnibus) for a total of 4681 genes whose tissue mRNA expression levels were reported to be significantly elevated in PTC versus adjacent normal tissues (Figure 3C and Supplementary Table 7) [32]. Our combined analysis of the three datasets (Figure 3D) identified seven highly relevant potential PTC candidate biomarkers: epididymal secretory protein E1 (NPC2), dipeptidyl peptidase 1 (CTSC), agrin (AGRN), transmembrane glycoprotein NMB (GPNMB), dipeptidyl peptidase 4 (DPP4), endoplasmic reticulum aminopeptidase 2 (ERAP2), and SH3 domain-binding glutamic acid-rich-like protein 3 (SH3BGRL3) (Table 3). All seven are predicted to be secreted via the classical or non-classical secretion pathways, and four of them (CTSC, AGRN, DPP4, and SH3BGRL3) are known exosomal proteins [33]. Of these, DPP4 was previously detected in the human serum proteome dataset [34] (Table 3) and was verified as a potential PTC biomarker using tissue specimens [35]. In addition, ERAP2 was reported to be highly expressed in PTC tissues with cervical lymph node metastasis [36], but the other five candidates had not previously been reported.
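The candidate-selection step is essentially a three-way set intersection, which the following Python sketch makes explicit; the toy identifier lists are placeholders for the full protein/gene tables in the supplementary material.

# Placeholder identifier sets (the real inputs are the 191 PTC-only cystic
# fluid proteins, the 2105 secretome proteins, and the 4681 up-regulated genes).
ptc_only_cystic = {"NPC2", "CTSC", "AGRN", "GPNMB", "DPP4", "ERAP2", "SH3BGRL3", "SFTPB"}
cell_secretome = {"NPC2", "CTSC", "AGRN", "GPNMB", "DPP4", "ERAP2", "SH3BGRL3", "TUBA1A"}
up_in_ptc_mrna = {"NPC2", "CTSC", "AGRN", "GPNMB", "DPP4", "ERAP2", "SH3BGRL3", "LGALS3"}

candidates = ptc_only_cystic & cell_secretome & up_in_ptc_mrna
print(sorted(candidates))
# -> ['AGRN', 'CTSC', 'DPP4', 'ERAP2', 'GPNMB', 'NPC2', 'SH3BGRL3']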
Verifying potential PTC biomarkers in the conditioned media of PTC cell lines and thyroid cystic fluids
We used Western blot analysis to verify the seven selected biomarkers in the conditioned media of the CGTH W3 and BHP 7-13 cell lines, as well as in cell extracts. Galectin-3, which was previously reported as a promising PTC biomarker [37], was used as a positive control. As shown in Figure 4, all of the tested proteins except SH3BGRL3 could be detected in the conditioned media of CGTH W3 and/or BHP 7-13 cells, and their relative expression levels fitted well with the total spectral counts detected in the PTC cell secretome (Supplementary Table 5B). Similar experiments were performed to evaluate the levels of these potential biomarkers in twelve abundant-protein-depleted cystic fluid samples, including the ten samples used for the initial discovery experiment and two subsequently collected PTC cystic fluid samples. As shown in Figure 5A, five candidates (GPNMB, DPP4, ERAP2, CTSC, and NPC2) were successfully confirmed. Their levels were generally higher in PTC cystic fluid samples than in benign cystic fluid samples when equal amounts of total protein were analyzed (Figure 5B). We also examined SFTPB as an additional candidate, as it was detected solely in our PTC cystic fluid dataset with a very high spectral count (Supplementary Table 4) and was previously reported to be upregulated in PTC tissues [38]. Consistent with these observations, we found that SFTPB was drastically increased in two of the PTC cystic fluid samples (Figure 5A). Taken together, these results indicate that the proteins selected based on our integrated analysis of the FNA cystic fluid proteome, PTC cell secretome, and tissue transcriptome represent strong potential biomarkers for detecting PTC from FNA cystic fluid.

[Table footnotes: a. The numbers represent identified proteins for which at least two unique peptides with 95% peptide probability and 95% protein probability were identified using the Scaffold software. b. The false discovery rate (FDR) was calculated as the ratio of the spectra assigned to a random database over those assigned to a normal database.]

[Figure 1 caption: Strategy for identifying potential PTC biomarkers. Schematic representation of the experimental design used in this study. We first applied a GeLC-MS/MS approach to comprehensively analyze the proteome profiles of thyroid cystic fluid samples from PTC and benign lesions, as well as the secretome profiles of two PTC cell lines. Meanwhile, we searched public-domain transcriptome datasets for genes whose transcriptional levels were up-regulated in PTC tissues. We then integrated the three datasets to identify candidate genes/proteins that were selectively detected in the PTC cystic fluid samples, highly up-regulated in PTC tissues, and secreted/released from PTC cells. Finally, we verified the candidate proteins in FNA cystic fluid and PTC tissue samples using immunoblotting and immunohistochemistry.]
Overexpression of AGRN, CTSC, ERAP2, and GPNMB in PTC tissues
To further examine the expression levels of the seven prioritized targets in PTC tissues, we surveyed antibodies suitable for immunohistochemical (IHC) analysis. We obtained suitable antibodies against AGRN, CTSC, ERAP2, and GPNMB and used them to stain tissue specimens from 114, 117, 115, and 115 patients, respectively (Supplementary Table 8). Among them, 98, 99, 98, and 100 specimens, respectively, contained PTC tumor and adjacent non-tumor tissues. Representative paired-tissue staining patterns for the four target proteins are shown in Figure 6A. Analysis of the staining scores revealed that strong staining (2+ and 3+) for AGRN, CTSC, ERAP2, and GPNMB was detected in a substantial proportion of tumor specimens (Figure 6B). Statistical analysis showed that the expression levels (IHC staining scores) of all four proteins were significantly higher in the tumor parts than in the adjacent non-tumor parts (Figure 6C). It is noteworthy that little or no expression of GPNMB was detected in adjacent non-tumor tissues.
Associations of AGRN, CTSC, ERAP2, and GPNMB expression with clinicopathological characteristics
We next used the median IHC staining score value as the cutoff for each protein, and explored the relevance of the observed protein expression levels to different clinicopathological characteristics using these IHC-stained specimens (114 for AGRN, 117 for CTSC, 115 for ERAP2, and 115 for GPNMB) (Table 4). Higher expression levels of AGRN and CTSC were found to be significantly correlated with lymph node metastasis, distant metastasis at diagnosis, tumor multicentricity, TNM stage, and disease-specific mortality. Notably, high CTSC expression was also significantly associated with an age greater than 45 years and with locoregional recurrence. In contrast, the expression levels of ERAP2 and GPNMB did not show any significant correlation with the tested manifestations.
Association of AGRN and CTSC expression with disease-specific survival (DSS)
To assess the correlation between the expression of the selected proteins and patient survival, we used Kaplan-Meier plots to estimate the DSS rates of PTC patients. We found that the 5-year DSS rates for patients stratified by low or high protein expression were respectively 95.5% vs. 80.9% for AGRN (p = 0.015) and 96.4% vs. 78.3% for CTSC (p = 0.0004), whereas there was no significant expression-related difference in DSS for ERAP2 or GPNMB (Figure 7). These results indicate that the tissue expression levels of AGRN and CTSC appear to be correlated with the DSS of PTC patients.
DISCUSSION
The integrated analysis of multiple omic datasets has shown promise for the identification of potential biomarkers and/or therapeutic targets for different cancer types [39,40]. Here, we set out to discover novel PTC biomarkers that are overexpressed in PTC tumors and can be secreted/released by PTC cells, and thus may be detected at elevated protein levels in PTC cystic fluid samples. Through an integrated analysis of in-house-generated FNA cystic fluid proteome and cell line secretome datasets, as well as public-domain tissue transcriptome datasets, we identified seven proteins that fit the above criteria for an ideal PTC biomarker (Figure 3). Further immunoassays confirmed that the levels of four or five of the seven target proteins appeared to be elevated in PTC cystic fluids or tumor tissues (Figures 5 and 6). Higher tumor tissue expression of two proteins, AGRN and CTSC, was found to be significantly associated with lymph node metastasis, distant metastasis at diagnosis, and poor prognosis of PTC patients (Table 4). Our findings demonstrate that an integrated analysis of multiple omic datasets can be used to identify PTC biomarker candidates that may be of high clinical utility.

Several groups have previously applied proteomic approaches to discover potential PTC biomarkers [8,10]. The study performed by Martínez-Aguilar et al. is of particular interest. The authors used SWATH-MS (sequential windowed acquisition of all theoretical fragment ion mass spectra) and MRM-HR (high-resolution multiple reaction monitoring) to analyze the proteomes of frozen thyroid tissues that included normal, follicular adenoma, follicular thyroid carcinoma, and PTC samples. They identified 1512 proteins in PTC tissues; of these, ~180 proteins were deregulated in PTC tumors compared to normal tissues [12]. When we compared this PTC tissue proteome dataset (1512 proteins) with our present cystic fluid proteome dataset (537 proteins), we found 303 proteins commonly detected in both datasets (Supplementary Table 9). Among these 303 proteins, 101 (33.3%, 101/303) showed a similar up- or down-regulation trend in both datasets: 73 were significantly up-regulated (≧2-fold) or solely detected in PTC, and 28 were significantly down-regulated (≧2-fold) in PTC or solely detected in benign lesions. Notably, five of the seven candidates identified in our present integrated analysis of multiple omic datasets (NPC2, CTSC, GPNMB, ERAP2, and SH3BGRL3) were among the proteins that Martínez-Aguilar et al. identified as being up-regulated (>1.5- to 4-fold increase) in PTC versus normal tissues; however, AGRN was down-regulated and DPP4 was not detected in their PTC tissue specimens (Supplementary Table 9). The different approaches and study materials used in our current study and in the work of Martínez-Aguilar et al. may account for the discrepancies between the two datasets. Consistent with the SWATH-MS data reported by Martínez-Aguilar et al., our IHC analysis of ~100 tissue specimens also revealed that CTSC, GPNMB, and ERAP2 are overexpressed in PTC (Figure 6). Indeed, these three proteins have been consistently found to be overexpressed in PTC samples analyzed by different technologies in distinct areas of the world, including transcriptome analysis of specimens from Finland [32], SWATH-MS analysis of specimens from Australia [12], and our IHC analysis of specimens from Taiwan (this study).
Our integrated analysis added further evidence that these proteins could be good PTC biomarker candidates by showing that they could be secreted/released by PTC cell lines and detected at elevated levels in FNA cystic fluids (Figures 4 and 5).
Moreover, we also compared our findings with a quantitative cystic fluid proteome dataset analyzed by Dinets et al. (Supplementary Table 10). Four of the seven candidates we identified (NPC2, DPP4, SH3BGRL3, and GPNMB) were also detected in Dinets's dataset, with up-regulation of NPC2.

[Figure 4 caption fragment: LGALS3, which was previously reported as a potential PTC biomarker, was also included in this analysis. Two cytosolic proteins, α-tubulin and actin, were detected as controls. (B) The Coomassie Blue-stained protein profile was used as a loading control.]
Although several potential PTC biomarkers were identified in the present study, our approach has several limitations. First, only a single proteomics analysis of a single pooled cystic fluid sample per clinical group, together with secretome data from only two cancer cell lines, was used for the discovery proteomics experiments. This may fail to capture tumor heterogeneity and thus miss other potential biomarkers that reflect it. Second, the poor correlation between mRNA and protein expression levels may complicate the selection of secreted protein candidates deserving further study. Third, the number of cystic fluid samples used for verification was small. Further studies using larger numbers of samples are needed to prove the clinical utility of these candidate biomarkers.
Agrin (AGRN) is a multifunctional heparan sulfate proteoglycan of the extracellular matrix. It is localized in the basement membrane of the vessels and ducts, and may critically regulate blood-brain barrier conformation and/or synaptogenesis at neuromuscular junctions [41]. Significant overexpression of AGRN was observed during neoangiogenesis in liver cirrhosis and hepatocellular carcinoma, supporting the notion that AGRN stimulates tumor vascularization [42]. AGRN can be detected in ascitic fluid from patients with ovarian cancer, and is secreted by small cell lung cancer cells [43,44]. An immunoscreening of the extracellular proteome of colorectal cancer cells identified AGRN as an antigen that may be recognized by autoantibodies that exist in sera from colorectal cancer patients [45]. These observations together with our data suggest that AGRN, a proteoglycan that can be secreted by cancer cells via the exosomal pathway, may be a promising PTC marker candidate in cystic fluid.
Cathepsin C (dipeptidyl peptidase I, CTSC) belongs to the papain family of proteinases and participates in the catalytic activation of lysosomal cysteine hydrolase and leukocyte-derived serine proteases [46,47]. CTSC appears to regulate the degradation of extracellular matrix components that is associated with the metastasis of oral and ovarian cancer cells [48][49][50]. These observations are consistent with our findings that higher expression of CTSC is correlated with a higher percentage of distant metastasis at diagnosis, locoregional recurrence, and poor prognosis of PTC patients (Table 4 and Figure 7).
Glycoprotein nonmetastatic melanoma protein B (GPNMB) is a type 1 transmembrane glycoprotein that contains heparan sulfate proteoglycan-, lysosome-, and integrin-binding motifs; it is highly expressed in bone, where it modulates osteoblast maturation and matrix mineralization [51,52]. Overexpression of GPNMB has been correlated with tumor formation and metastasis in melanomas, gliomas, hepatocellular carcinoma, and breast cancer [53][54][55][56]. The antibody-drug conjugate glembatumumab vedotin, in which a fully human monoclonal antibody against GPNMB is linked to the potent cytotoxin monomethyl auristatin E, has been approved by the Food and Drug Administration for phase II clinical trials in stage III or IV melanoma and GPNMB-expressing metastatic triple-negative breast cancer [57,58]. Although our data do not suggest that GPNMB is a prognostic indicator for PTC, its levels were dramatically elevated in PTC cells and cystic fluid samples (Figures 4 and 5), suggesting that GPNMB is a strong potential candidate for the targeted therapy and/or diagnosis of PTC.
The endoplasmic reticulum aminopeptidases (ERAPs), which include ERAP1 and ERAP2 (also called LRAP), play central roles in trimming longer precursors in the endoplasmic reticulum to generate antigenic peptides that are presented on major histocompatibility complex class I (MHC I) molecules [59]. Animal model studies have shown that altered levels of ERAP1 and ERAP2 can facilitate tumor immune evasion [60,61]. A recent study investigated the expression of both aminopeptidases in a variety of solid tumors and their normal counterpart tissues, and found that tumor tissues retained, lost, or acquired expression of either or both aminopeptidases relative to their normal counterparts, depending on the tumor histotype [62]. Of the thyroid specimens examined in that study, only four exhibited high-level expression of both aminopeptidases in tumor cells but none in their normal counterpart tissues. In contrast, our IHC analysis of ~100 PTC tissue specimens demonstrated that ERAP2 is overexpressed in PTC (Figure 6). Future studies are warranted to clarify the role of ERAP2 in PTC carcinogenesis and its potential as a PTC biomarker.
In conclusion, our data showed significant overexpression of AGRN, CTSC, ERAP2, and GPNMB in PTC tissues, and the tissue expression levels of AGRN and CTSC were significantly associated with metastasis and poor prognosis of PTC patients. We therefore consider that the integration of multiple omics profiling datasets, from the FNA cystic fluid proteome and cancer cell secretome to the tissue transcriptome, can be a useful approach for discovering novel PTC biomarker candidates when clinical samples are too scarce for large-scale, comprehensive analysis. Further studies, such as the use of workable ELISAs to verify these candidates in cystic fluids from a large cohort of patients, are warranted to evaluate their utility in clinical settings.
Patient characteristics and clinical specimens
The collection and preparation of thyroid cyst fluids were performed as previously described [63]. Briefly, a real-time ultrasonographic machine with a 10-MHz transducer (ALOKA, Tokyo, Japan) was used to detect thyroid nodules and guide fine-needle aspiration with 22- or 25-gauge needles (Becton Dickinson, Singapore).
[Figure 5 caption fragment: … SDS-PAGE, transferred to PVDF membranes, and probed with specific antibodies against the indicated target proteins. SFTPB, which was previously reported as a potential PTC biomarker, was also included in this analysis. (B) The Coomassie Blue-stained protein profile was used as a loading control.]
After the fluid was smeared onto slides, air-dried, and stained by the Romanowsky-based Liu method [64], fine-needle aspiration cytology (FNAC) was performed and the cytological result was interpreted by a pathologist. Informed consent was obtained from all patients before sample collection. Based on cytological and pathological examination, we collected 12 FNA cystic fluid samples from Chang Gung Memorial Hospital (Linkou, Taiwan): seven from patients with benign lesions (2 males and 5 females, mean (±SD) age 41.14 ± 13.16 years, age range 21~55) and five from PTC patients (2 males and 3 females, mean age 47 ± 16.46 years, age range 24~70). The cytology tests for the PTC patients were positive for cancer. The features of the FNA cystic fluid samples were assessed by histological analysis. Formalin-fixed and paraffin-embedded samples of thyroid tissues were stained with hematoxylin and eosin (H&E). The guidelines in "Pathology and Genetics of Tumours of Endocrine Organs" (edited by the World Health Organization, 2004) were used to classify the histopathologic features of the tumor specimens, and clinical staging was based on the definitions of the American Joint Committee on Cancer, 2002 [65,66]. The characteristics of all study subjects are summarized in Supplementary Table 1. This study was approved by the Institutional Review Board of Chang Gung Memorial Hospital (IRB number 99-3565B).
Depletion of high-abundance proteins from FNA cystic fluid samples
The typical volumes of cyst fluid used for depletion of the top 14 abundant proteins were 4.28 μl (protein concentration: 233.64 mg/ml) and 8.82 μl (protein concentration: 113.38 mg/ml), pooled respectively from 3 malignant and 7 benign cystic fluid samples; both volumes thus correspond to approximately 1 mg of total protein. The pooled samples were subjected to depletion of 14 highly abundant proteins using Agilent Human 14 Multiple Affinity Removal System (MARS) columns (4.6 × 100 mm; Agilent, Palo Alto, CA, USA), which harbor antibodies raised against human albumin, IgG, antitrypsin, IgA, transferrin, haptoglobin, fibrinogen, alpha2-macroglobulin, alpha1-acid glycoprotein, IgM, apolipoprotein AI, apolipoprotein AII, complement C3, and transthyretin. Briefly, the FNA cystic fluid sample was prepared at 1 mg/40 µL in ddH2O and diluted fourfold (to 1 mg/160 µL) with 120 µL of buffer A of the MARS column system. The diluted sample was processed using the suggested column run cycle coupled with an AKTApurifier 10 fast protein liquid chromatography (FPLC) system (GE Healthcare Life Sciences, Piscataway, NJ, USA), including sample loading, flow-through collection (depleted fraction), washing, elution of the bound proteins with buffer B of the MARS column system, and re-equilibration of the column for the next run. The depleted and bound fractions were desalted and concentrated with Amicon Ultra-15 Centrifugal Filter Devices (MW cutoff, 3000 Da; Millipore, Billerica, MA, USA). The proteins were then resuspended in ddH2O, quantified using a Pierce BCA Protein Assay Kit (Thermo Scientific, Hudson, NH, USA), and stored at -80 °C for further study.
Cell culture and collection of conditioned media and cell extracts
The PTC cell lines CGTH W3 and BHP 7-13 were provided by Dr. Jen-Der Lin (Chang Gung Memorial Hospital at Taoyuan, Taiwan) and cultured in RPMI 1640 supplemented with 10% fetal bovine serum and 100 units/mL of penicillin/streptomycin (Invitrogen, Carlsbad, CA, USA) in a humidified 5% CO2 atmosphere at 37 °C. Cells were grown to approximately 80% confluence in 15-mm culture dishes, washed twice with phosphate-buffered saline (PBS) and once with serum-free medium, and then incubated in serum-free medium at 37 °C for 24 h. The conditioned media were harvested and centrifuged at 1000 rpm for 10 minutes to remove suspended cells. A proteinase inhibitor cocktail was added to the supernatants (final concentrations: 1 mM phenylmethylsulfonyl fluoride [PMSF], 1 mM benzamidine, 0.5 μg/ml leupeptin), which were then concentrated and desalted in Amicon Ultra-15 tubes. The cells that had adhered to the dishes were washed twice with PBS and lysed in homogenization buffer (10 mM Tris-HCl, 1 mM EDTA, 1 mM EGTA, 50 mM NaCl, 50 mM NaF, 20 mM Na4P2O7, 1 mM Na3VO4, 1 mM PMSF, 1 mM benzamidine, 0.5 μg/ml leupeptin, and 1% Triton X-100, pH 7.4) on ice for 15 minutes. The cell lysate was collected, sonicated on ice, and centrifuged at 11,000 rpm for 20 minutes at 4 °C. The supernatant was harvested as the cell extract. The BCA protein assay reagent (Thermo Scientific Pierce, Rockford, IL, USA) was used to determine the protein concentrations of the cell extracts and conditioned media, which were then stored at -80 °C for further use.
One-dimensional SDS-PAGE and in-gel tryptic digestion
The abundant-protein-depleted thyroid FNA samples (60 μg each) or the conditioned media of the PTC cell lines (50 μg each) were separated by 8-14% large-gradient SDS-PAGE (gel dimensions: 0.15 × 12.5 × 14 cm) and stained with Coomassie Brilliant Blue. The gel-separated proteins were processed for MS analysis using in-gel tryptic digestion, as previously described [39]. Briefly, each gel lane was cut into 60 (for FNA cystic fluid samples) or 40 (for conditioned media of PTC cells) pieces and destained three times (…).
Reverse phase liquid chromatography-tandem mass spectrometry
For LC-MS/MS analysis, peptide samples were reconstituted in HPLC buffer A (0.1% formic acid), loaded across a reversed-phase trapping column (Zorbax 300SB-C18, 0.3 × 5 mm; Agilent Technologies, Wilmington, DE, USA) at a flow rate of 0.2 μl/min in buffer A, and separated on a 10-cm analytical C18 column (inner diameter, 75 μm; New Objective, Woburn, MA, USA) with a 15-μm tip (New Objective). The peptides were eluted using a linear gradient of 0-10% HPLC buffer B (99.9% acetonitrile containing 0.1% formic acid) for 3 min, 10-30% buffer B for 35 min, 30-35% buffer B for 4 min, 35-50% buffer B for 1 min, 50-95% buffer B for 1 min, and 95% buffer B for 8 min, all at a flow rate of 0.25 μl/min. The LC device was coupled on-line with a two-dimensional linear ion-trap mass spectrometer (LTQ-Orbitrap; Thermo Fisher, San Jose, CA, USA) operated with the Xcalibur 2.0.7 software (Thermo Fisher). The MS full scan was set to the 350-2000 m/z range, and intact peptides were detected at a resolution of 30,000. The ion signal of (Si(CH3)2O)6H+ at m/z 445.120025 was used as a lock mass for internal calibration. The data-dependent acquisition mode alternated between one full MS scan and six MS/MS scans for the six most abundant precursor ions in the MS survey scan. The m/z values selected for MS/MS were dynamically excluded for 40 s. The electrospray ionization voltage was 1.8 kV. The MS and MS/MS spectra were both obtained using one microscan, with maximum fill times of 1000 and 100 ms for MS and MS/MS, respectively. Automatic gain control was used to prevent overfilling of the ion trap; 5 × 10^3 ions were accumulated in the ion trap for the generation of MS/MS spectra.
MS data analysis and label-free spectral quantification
The RAW files of the spectra obtained from the LTQ-Orbitrap were searched against the 20,367 Homo sapiens entries in the SwissProt-human_56.0 database, with trypsin assumed as the digestive enzyme. The MASCOT Daemon algorithm (version 2.2.03; Matrix Science, London, UK) was used for data processing, and one missed cleavage was allowed. The MS tolerance for the monoisotopic peptide window was set to 10 ppm, and the MS/MS tolerance was set to 0.5 Da. Carbamidomethylation of cysteines (+57 Da) and oxidation of methionine residues (+16 Da) were set as variable modifications. The charge states of the peptides were set to +2 and +3. All DAT files produced by MASCOT Daemon were combined using the Scaffold software (version 2.06.00; Proteome Software Inc., Portland, OR, USA) to evaluate the MS/MS-based peptide and protein identifications. The probability threshold for protein identification was set at ≧95%, and the peptide probability was set at ≧95%. Confident protein identification required the assignment of at least two identified unique peptides. The false discovery rate (FDR) was calculated as the ratio of the spectra assigned to a decoy (random) database to those assigned to the normal database. The decoy database was generated by Mascot with the same size (i.e., number of amino acids) and the same number of proteins as the original normal database [67]. The GeLC-MS/MS label-free spectral counts were used to determine protein ratios and compare protein expression levels, using a previously described algorithm [68,69]. Briefly, to quantify protein expression from the Scaffold spectral report, the spectra of each protein were normalized by all spectra detected, and the normalized values from the malignant and benign parts were expressed as a ratio. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the PRIDE [70] partner repository with the dataset identifier PXD007532.
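To illustrate the label-free quantification step just described, the following is a minimal sketch that normalizes each protein's spectral count by the total spectra of its sample and forms a malignant/benign ratio. The dictionaries, the counts, and the pseudocount used to keep one-sample-only proteins finite are illustrative assumptions, not values or details from the study.

```python
# A minimal sketch of normalized spectral-count ratios; the pseudocount
# is an assumption added here so proteins detected in only one sample
# still yield a finite ratio.
def normalized_ratios(malignant_counts, benign_counts, pseudo=0.5):
    """Return {protein: malignant/benign normalized spectral-count ratio}."""
    total_m = sum(malignant_counts.values())
    total_b = sum(benign_counts.values())
    proteins = set(malignant_counts) | set(benign_counts)
    ratios = {}
    for p in proteins:
        norm_m = (malignant_counts.get(p, 0) + pseudo) / total_m  # normalize
        norm_b = (benign_counts.get(p, 0) + pseudo) / total_b     # per sample
        ratios[p] = norm_m / norm_b
    return ratios

# Hypothetical counts: a protein with a >=2-fold normalized ratio would be
# flagged as up-regulated in the malignant pool.
ratios = normalized_ratios({"NPC2": 24, "CTSC": 12}, {"NPC2": 5, "CTSC": 6})
up_regulated = sorted(p for p, r in ratios.items() if r >= 2.0)
```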
Bioinformatic analysis
The proteins identified from the conditioned media of PTC cell lines and FNA cystic fluid samples were analyzed using the SignalP 4.0 and SecretomeP 2.0 servers to predict the secretory pathways used by proteins with or without signal peptides, respectively [29,30]. The TMHMM 2.0 program was used to predict transmembrane proteins that may have secretory potential [31]. The DAVID Bioinformatics Resources v6.8 [71,72] was used for functional annotation of the proteins selected from the discovery experiments, covering biological process, molecular function, and cellular component.
Meta-analysis of two PTC tissue mRNA microarray datasets
Two public thyroid cancer tissue microarray datasets, E-GEOD-3678 and GDS1665, were obtained from ArrayExpress of the European Bioinformatics Institute (EBI) and the Gene Expression Omnibus (GEO) of the National Center for Biotechnology Information (NCBI) [32]. The tissue samples used for gene expression profiling were obtained from seven and nine independent PTC patients for the E-GEOD-3678 and GDS1665 datasets, respectively. The two-sample t-test was used to determine genes whose expression levels were significantly different between PTC and paired normal tissues (p-value ≤ 0.05). The mean intensity of each gene probe was measured for the healthy and cancerous groups, the tumor/normal (T/N) ratio was calculated, the ratios were ranked, and the top 5% up- and down-regulated genes were selected. To unify the ID names with those used in the FNA cystic fluid proteome and cell line secretome datasets, the selected gene probe IDs were converted to Swiss-Prot IDs, and a comparison was made to select the candidates that most consistently showed high-level expression in cancerous tissues and FNA cystic fluid.
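As a concrete illustration of this meta-analysis step, the sketch below runs a per-gene two-sample t-test, computes T/N mean-intensity ratios, and keeps the top 5% of significantly up-regulated genes. The matrices and gene IDs are hypothetical stand-ins, not the actual E-GEOD-3678 or GDS1665 data.

```python
import numpy as np
from scipy import stats

def top_upregulated(tumor, normal, gene_ids, alpha=0.05, top_frac=0.05):
    """tumor, normal: (n_genes, n_samples) intensity matrices."""
    # Per-gene two-sample t-test between tumor and paired normal tissues.
    _, pvals = stats.ttest_ind(tumor, normal, axis=1)
    ratios = tumor.mean(axis=1) / normal.mean(axis=1)  # T/N ratio per gene
    significant = pvals <= alpha
    # Rank by T/N ratio (descending) and keep significant genes only.
    order = np.argsort(ratios)[::-1]
    keep = [i for i in order if significant[i]]
    n_top = max(1, int(top_frac * len(gene_ids)))      # top 5% cutoff
    return [gene_ids[i] for i in keep[:n_top]]
```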
Immunohistochemical analysis
Formalin-fixed and paraffin-embedded tissue specimens from 121 PTC patients were obtained from Chang Gung Memorial Hospital and sliced into 4-μm-thick sections for immunohistochemical (IHC) staining, which was performed using an automatic IHC staining system according to the manufacturer's instructions (Bond; Vision BioSystems, USA). The antibodies used for IHC staining included those against AGRN (1:50 dilution, sc-25528, Santa Cruz Biotechnology), CTSC (1:40 dilution, sc-13986, Santa Cruz Biotechnology), ERAP2 (1:30 dilution, AF3830, R&D Systems), and GPNMB (1:50 dilution, BAF2550, R&D Systems). The IHC staining intensity and percentage in each section were evaluated by an experienced pathologist. Intensity scores of 0, 1, 2, and 3 indicated negative, weak, moderate, and strong staining, respectively, and the percentage score ranged from 0 (0%) to 100 (100%). The two scores were multiplied to obtain the IHC staining score (0 to 300).

[Figure 7 caption fragment: Patients were stratified by low or high expression of AGRN, CTSC, ERAP2, or GPNMB and then analyzed for their disease-specific survival using Kaplan-Meier plots. The log-rank test p-value is denoted in each plot. Two patients who died of causes other than thyroid cancer were excluded from this analysis; thus, the numbers of patients used for this analysis were 112, 115, 113, and 113 for AGRN, CTSC, ERAP2, and GPNMB, respectively.]
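The scoring arithmetic above is simple enough to state as a worked sketch; the example intensities and percentages below are invented for illustration, as is the median-cutoff stratification used for the survival analysis.

```python
import statistics

# IHC staining score = intensity (0-3) x percentage of stained cells
# (0-100), giving 0-300; the cohort median serves as the low/high cutoff.
def ihc_score(intensity: int, percent_stained: int) -> int:
    assert 0 <= intensity <= 3 and 0 <= percent_stained <= 100
    return intensity * percent_stained   # e.g., moderate (2) at 60% -> 120

scores = [ihc_score(i, p) for i, p in [(0, 0), (1, 40), (2, 60), (3, 90)]]
cutoff = statistics.median(scores)
groups = ["high" if s > cutoff else "low" for s in scores]
```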
"Biology",
"Medicine"
] |
Pre-processing for noise detection in gene expression classification data
Due to the imprecise nature of biological experiments, biological data are often characterized by the presence of redundant and noisy examples. This may be due to errors that occurred during data collection, such as contamination of laboratory samples. This is the case for gene expression data, where the equipment and tools currently used frequently produce noisy measurements. Machine Learning algorithms have been successfully used in gene expression data analysis. Although many Machine Learning algorithms can deal with noise, detecting and removing noisy instances from the training data set can help the induction of the target hypothesis. This paper evaluates the use of distance-based pre-processing techniques for noise detection in gene expression data classification problems. This evaluation analyzes the effectiveness of the techniques investigated in removing noisy data, measured by the accuracy obtained by different Machine Learning classifiers over the pre-processed data.
Introduction
Due to the imprecise nature of biological experiments, biological data are often characterized by the presence of redundant and noisy examples. This kind of data may originate, for example, from errors during data collection, such as contamination of laboratory samples. Gene expression data are examples of biological data that suffer from this problem. Although many Machine Learning (ML) algorithms can deal with noise, detecting and removing noisy instances from the training data set can help the induction of the target hypothesis.
Noise can be defined as an example apparently inconsistent with the remaining examples in a data set. The presence of noise in a data set can decrease the predictive performance of Machine Learning (ML) algorithms, by increasing the model complexity and the time necessary for its induction. Data sets with noisy instances are common in real world problems, where the data collection process can produce noisy data.
Data are usually collected from measurements related to a given domain. This process may result in several problems, such as measurement errors and incomplete, corrupted, wrong, or distorted examples. Therefore, noise detection is a critical issue, especially in domains demanding security and reliability. The presence of noise can lead to situations that degrade system performance or the security and trustworthiness of the involved information. A wide variety of noise detection applications can be found in different domains, such as fraud detection, loan application processing, intrusion detection, analysis of network performance and bottlenecks, detection of novelties in images, pharmaceutical research, and others 17 .
Different types of noise can be found in data sets, especially in those representing real problems (see Figure 1). To illustrate these different types, the instances of a given data set can be divided into groups. Mislabeled cases are instances incorrectly classified in the data set generation process; these cases are noisy instances. Redundant data are instances that form clusters in the data set and can be represented by others; at least one of these patterns should be maintained so that the representativeness of the cluster is conserved. Outliers are instances too distinct from the other examples of the data set; they can be either noisy or very particular cases, and their influence on the hypothesis induction should be minimized. Gene expression data are, in general, represented by complex, high-dimensional data sets, which are susceptible to noise. In fact, biological and real-world data sets, of which gene expression data sets are a part, contain a large number of noisy cases.
When using gene expression data sets, some aspects may influence the performance achieved by ML algorithms. Due to the imprecise nature of biological experiments, redundant and noisy examples can be found at a high rate. Noisy patterns can corrupt the generated classifier and should therefore be removed 21 . Redundant and similar examples can be eliminated without harming the concept induction and may even improve it.
In order to deal with noisy data, several approaches and algorithms for noise detection can be found in the literature. This paper focuses on the investigation of distance-based noise detection techniques, adopted in a pre-processing phase. This phase aims to identify possible noisy examples and remove them. In this work, three ML algorithms are trained with the original data sets and with different sets of pre-processed data produced by the application of noise detection techniques. By evaluating the difference in performance among classifiers generated over original (without pre-processing) and pre-processed data, the effectiveness of distance-based techniques in recognizing noisy cases can be estimated.
There are other works 18,24 that look for noise in gene expression data sets but, unlike this work, the experiments reported in those papers eliminate only genes. In the experiments performed here, we use noise detection techniques mainly to detect mislabeled tissues.
Details of the noise detection techniques used are presented in Section 2. The methodology employed in the experiments, the data sets used, and the ML algorithms adopted are described in Section 3. The results obtained are presented and discussed in Section 4. Finally, Section 5 presents the main conclusions of this work.
Noise Detection
Different pre-processing techniques have been proposed in the literature for noise detection and removal. Statistical models were the earliest approaches used in this task, and some of them were applicable only to one-dimensional data sets 17 . In these approaches, noise detection is handled by techniques based on data distribution models 3 . The main problem of these methods is the assumption that the data distribution is known in advance, which is not true for most real-world problems.
Clustering techniques 8,16 are also applied to noise detection tasks. In this approach, small groups of data, dispersed among the existing examples, are regarded as possible noise. A third approach employs ML classification algorithms, which are used to detect and remove noisy examples 34,19 . The work presented here follows a fourth approach, in which noise detection problems are investigated by distance-based techniques 20,30,5,32 . These techniques are named distance-based because they use the distance between an example and its nearest neighbors.
Distance-based techniques are simple to implement and do not make assumptions about the data distribution. However, they require a large amount of memory and computational time, resulting in a complexity directly proportional to the data dimensionality and the number of examples 17 . The most popular distance-based technique in the literature is the k-nearest neighbor (k-NN) algorithm, which is the simplest algorithm belonging to the class of instance-based supervised ML techniques 25 . Distance-based techniques use similarity measures to calculate the distance between instances from a data set and use this information to identify possible noisy data. One of the main questions regarding distance-based techniques concerns the similarity measure used to calculate these distances.
For high-dimensional data sets, the commonly used Euclidean metric is not adequate 1 , since the data are commonly sparse. The HVDM (Heterogeneous Value Difference Metric) is shown in 36 to be suitable for high-dimensional data and was therefore used in this paper. This metric is based on the distribution of the attributes in a data set with respect to their output values, and not only on punctual values, as is the case for the Euclidean distance and similar metrics. Equation 1 presents the HVDM metric:

HVDM(x, z) = \sqrt{\sum_{a=1}^{m} d_a(x_a, z_a)^2},    (1)

where the per-attribute distance d_a is given by

d_a(x_a, z_a) = VDM_a(x_a, z_a) if attribute a is nominal, and d_a(x_a, z_a) = |x_a - z_a| / (4\sigma_a) if attribute a is numeric.    (2)

VDM_a(x_a, z_a) is the VDM (Value Difference Metric) distance 29 , adequate for nominal attributes, and \sigma_a is the standard deviation of attribute a in the data set. Since the data sets employed in this paper do not present nominal attributes, the nominal branch of Equation 2 is not used in this work.
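As a concrete reference, a minimal sketch of the numeric branch of HVDM follows; since the data sets used here have no nominal attributes, the VDM branch is omitted. The vector layout and variable names are illustrative.

```python
import numpy as np

# Numeric-attribute HVDM: per-attribute differences are normalized by
# four standard deviations before the Euclidean-style aggregation.
def hvdm_numeric(x, z, sigma):
    """x, z: 1-D attribute vectors; sigma: per-attribute std deviations."""
    d = np.abs(x - z) / (4.0 * sigma)
    return float(np.sqrt(np.sum(d ** 2)))
```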
The k-nearest neighbor (k-NN) algorithm was used for finding the neighbors of a given instance. This algorithm classifies an instance according to the class of the majority of its k nearest neighbors. The value of the k parameter, which represents the number of nearest neighbors of the instance, influences the performance of the k-NN algorithm. Typically, it is an odd and small integer, such as 1, 3 or 5.
The techniques evaluated in this paper are the noise detection filters Edited Nearest Neighbor (ENN), Repeated ENN (RENN) and AllkNN, all based on the k-NN algorithm.
In order to explain the techniques evaluated, let T be the original training set and S be a subset of T, obtained by the application of any of the distance-based techniques evaluated. Now, suppose that T has n instances x_1, ..., x_n. Each instance x of T (and also of S) has k nearest neighbors.
The ENN algorithm was proposed in 37 . Initially, S = T, and an instance is considered noise and removed from the data set if its class differs from the class of the majority of its k nearest neighbors. This procedure removes mislabeled data and borderline cases. In the RENN technique, the ENN algorithm is repeatedly applied to the data set until all of its instances have the majority of their neighbors with the same class. Finally, the AllkNN algorithm was proposed by Tomek 31 and is also an extension of the ENN algorithm. It proceeds as follows: for i = 1, . . . , k, mark as incorrect (possible noise) any instance incorrectly classified by its i nearest neighbors. After all instances in the data set have been analyzed, the flagged instances are removed.
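The following is a minimal sketch of the ENN filter just described, built on the numeric HVDM distance sketched earlier; it is an illustrative implementation, not the code of 35 used in the experiments.

```python
import numpy as np

# ENN: drop an instance when the majority class of its k nearest
# neighbors (under numeric HVDM) disagrees with its own label.
def enn_filter(X, y, k=3):
    """X: (n, m) attribute matrix; y: label array. Returns a keep mask."""
    sigma = X.std(axis=0) + 1e-12            # guard against zero variance
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        d = np.sqrt(((np.abs(X - X[i]) / (4.0 * sigma)) ** 2).sum(axis=1))
        d[i] = np.inf                        # an instance is not its own neighbor
        neighbors = np.argsort(d)[:k]
        labels, counts = np.unique(y[neighbors], return_counts=True)
        keep[i] = labels[np.argmax(counts)] == y[i]
    return keep

# RENN simply reapplies enn_filter to the retained instances until no
# further instance is removed; AllkNN repeats the check for i = 1..k.
```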
Despite the large number of existing techniques used in noise detection problems, recent studies can also be found that use hybrid systems, as well as ensembles of classifiers, to improve system performance and reduce deficiencies of the applied algorithms. Hybridization is used to overcome the deficiencies of one particular classification algorithm, exploiting the advantages of multiple approaches while overcoming their weaknesses 17 .
Experiments
The experiments performed employed the 10-fold cross validation methodology 25 . All selected data sets were presented to the noise detection techniques investigated. Next, their pre-processed versions, resulting from the application of each noise detection technique, were presented to the three ML algorithms employed. The original version of each data set used in the experiments was also presented directly to the ML algorithms, aiming to compare the performance obtained by ML algorithms with the original data sets and with their pre-processed versions. The error rate obtained by the ML algorithms was calculated by the average of the individual errors obtained for each test partition. Each noise detection technique was applied 10 times, one for each training partition of the data set produced by the 10-fold cross validation methodology.
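A minimal sketch of this evaluation protocol is given below: the noise filter is applied only to each training partition, and the classifier is evaluated on the untouched test partition. DecisionTreeClassifier is used here as a stand-in for C4.5, and enn_filter refers to the sketch above; both choices are illustrative assumptions, not the tools actually used in the experiments.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.tree import DecisionTreeClassifier

# 10-fold cross validation with per-fold noise filtering, mirroring the
# protocol described above (filter the training folds, test untouched).
def cv_error(X, y, k_neighbors=3, n_splits=10):
    errors = []
    for train, test in StratifiedKFold(n_splits=n_splits).split(X, y):
        keep = enn_filter(X[train], y[train], k=k_neighbors)
        clf = DecisionTreeClassifier().fit(X[train][keep], y[train][keep])
        errors.append(1.0 - clf.score(X[test], y[test]))
    return float(np.mean(errors))
```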
The experiments were run on a 3.0 GHz Intel Pentium 4 dual-processor PC with 1.0 GB of RAM. For the noise detection techniques evaluated, the code provided by 35 was used. The values of the k parameter, which defines the number of nearest neighbors, were set to 1, 3, or 9, following a geometric progression that includes the number three, the default value of the mentioned code.
The ML algorithms investigated were C4.5, used for the induction of decision trees; RIPPER, which produces a set of rules from a data set; and Support Vector Machines (SVMs), which look for representative examples to improve the generalization of the decision border.
The C4.5 algorithm 27 uses a greedy approach to progressively grow a decision tree whose leaf nodes represent classes. C4.5 deals with noisy data by using a pruning procedure. In this procedure, branches of the trained tree that present, according to some criterion, low expressive power are pruned. This procedure aims to simplify the built tree and to reduce its classification error rate.
The RIPPER algorithm (Repeated Incremental Pruning to Produce Error Reduction) 6 is a rule induction algorithm proposed to obtain low classification error rates even in the presence of noise and high-dimensional data. Rule induction algorithms are more flexible than decision tree algorithms, like C4.5, since new rules can be added or modified as new data are included 17 .
SVMs are learning algorithms based on statistical learning theory, through the principle of Structural Risk Minimization (SRM) 33 . SVMs accomplish a non-linear data analysis in a high-dimensional space where a maximum-margin hyperplane can be built, allowing the separation of the positive and negative classes. They present high generalization ability, are robust to high-dimensional data, and have been successfully applied to the solution of several classification problems 28,9 .

In the experiments reported in this paper, we used data sets obtained from gene expression analysis, particularly tissue classification. Gene expression analysis problems are, in general, represented by complex and high-dimensional data sets, which are very susceptible to noise. Table 1 shows the format of the gene expression data sets used in the experiments. Each data set can be represented by a table where each row has the identification of a particular tissue, the expression levels of different genes for this tissue, and the label associated with the tissue.
The main features of the gene expression data sets used in the experiments are described in Table 2. This table presents, for each data set, its total number of instances, number of attributes (data dimensionality), and existing classes.
Most of the data sets used in the experiments reported in this paper are related to the problem of cancer tissue classification. The development of efficient data analysis tools to support experts may allow better and earlier diagnosis of cancer, leading to more effective patient treatment and increased survival rates. Several research groups are currently working with gene expression analysis of tumor tissues.
The ExpGen data set 4 contains expression level measurements of 2467 genes obtained from 79 different laboratory experiments for gene functional classification. This application consists of categorizing a gene into a class that represents its function in the cellular environment. From these experiments, the data set retains only the 207 genes that could be categorized into five classes during the laboratory experiments.
The Golub data set 15 has gene expression levels from patients with acute leukemia. The gene expression data were obtained from 72 microarray images, measuring the expression levels of 6817 human genes. The disease was categorized into two different types, Acute Lymphoid Leukemia (ALL) and Acute Myeloid Leukemia (AML). The same pre-processing used in 11 was applied to the Golub data set to simplify its data.
The Leukemia data set is known in the literature as St. Jude Leukemia 38 . It is composed of six different types of pediatric acute lymphoid leukemia, plus another group with examples that could not be categorized as one of the previous six types. The original data set has 12558 genes, so a pre-processed version found at http://sdmc.lit.org.sg/GEDatasets and described in 38 was used, reducing the number of genes to 271.
The Lung data set has examples related to lung cancer, where, for each patient, the label can be normal tissue or one of three different types of lung cancer. The three types of lung cancer analyzed are adenocarcinomas (ADs), squamous cell carcinomas (SQs), and carcinoids (COID). This data set has 197 instances, with 1000 attributes each, and was presented in 26 .
The last data set analyzed, the Colon data set, is described in Alon et al. 2 and includes patients with and without colon cancer. The data set presents gene expression data obtained from 62 microarray images, which measure the expression levels of 6500 human genes. Pre-processing techniques reduced the number of input attributes to 2000.
For the SVM training, the SVMTorch II 7 software was employed. For C4.5, training was carried out with the software provided by Quinlan 27 , and for the RIPPER algorithm, the Weka simulator from Waikato University 13 was adopted. The parameter values for the three algorithms were the default values suggested in the tools employed, and they were kept the same for all experiments. Scripts in the Perl programming language were also developed to convert the data sets to the different formats demanded by Wilson's 35 code, SVMTorch II, the Weka simulator, and the C4.5 software.
To evaluate the results obtained in the experiments, Friedman's statistical test 14 and Dunn's multiple comparisons post-hoc test 12 were employed, according to the methodology described in 10 . Friedman's test was adopted since it is recommended for the comparison of different ML algorithms applied to multiple data sets and has the advantage of not assuming that the measurements follow a normal distribution.
The null hypothesis states that all analyzed algorithms are equivalent, in which case their respective mean ranks are the same. If the null hypothesis is rejected, and the analyzed algorithms are therefore statistically different, a post-hoc test may be applied to detect which of the algorithms differ. Dunn's statistical post-hoc test was applied, since it is recommended for situations where all analyzed algorithms are compared to a control algorithm, the strategy employed in the experiments performed in this paper.
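For reference, a minimal sketch of this test procedure is shown below using SciPy's Friedman implementation; the error matrix is invented purely for illustration (rows are data sets, columns are the original data and three filter variants).

```python
from scipy import stats

# Hypothetical mean error rates: rows = data sets,
# columns = original, ENN, RENN, AllkNN.
errors = [
    [0.20, 0.15, 0.16, 0.14],
    [0.30, 0.28, 0.27, 0.29],
    [0.12, 0.10, 0.11, 0.10],
    [0.25, 0.20, 0.21, 0.19],
]
columns = list(zip(*errors))                 # one sample per algorithm
stat, p = stats.friedmanchisquare(*columns)
if p < 0.05:
    # Null hypothesis rejected: algorithms differ; follow up with a
    # post-hoc test (e.g., Dunn's) against the control (original data).
    print(f"Algorithms differ (chi2 = {stat:.2f}, p = {p:.3f})")
```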
Experimental Results
In the pre-processing, the number of removed instances differed for each data set analyzed. However, it was between 20 and 30% of the total, except for the Colon data set (original and simplified versions), which presented reductions between 30 and 40%.
The time spent in the pre-processing phase was measured to show how the application of the noise detection techniques investigated can affect the overall processing time. It is important to mention that the pre-processing phase is applied only once to each data set analyzed, generating a pre-processed data set that can be used several times with different ML algorithms. The time consumed was always less than one minute. Another observation is related to data set complexity: more time was spent in the pre-processing of more complex data sets.
In order to measure the effectiveness of noise detection techniques employed, the performance of the three ML algorithms concerning accuracy, complexity and processing time necessary to build the induced hypothesis were evaluated with the original and the pre-processed data. For all experiments, the statistical tests were applied with 95% of confidence level.
For SVMs, in general, the error rates of the classifiers generated after the application of the noise detection techniques were, for all evaluated k values, the same as those obtained for the original data sets; for the Colon data set, this held only for some values of k. The pre-processed Leukemia and ExpGen data sets had only some similar results, but none better than those obtained for the original data sets, while the Golub data set presented the worst results in all cases. The obtained results can be seen in Table 3, where the best results are highlighted in bold and error rates similar to the best ones for each data set are shown in italics. Standard deviation rates are reported in parentheses.
The analysis of the C4.5 classification error rates, which can be seen in Table 4, shows that the pre-processed data sets Leukemia, Lung, and Golub presented similar or better results than those obtained for the original data sets. The ExpGen data set presented only a few error rates similar to those obtained for the original data set. The pre-processed Colon data set provided only worse results.
According to Table 5, the RIPPER algorithm presented similar error performance for the original and pre-processed data using the Leukemia, ExpGen, and Colon data sets. For the last two data sets, some results were improved by the pre-processing. The remaining pre-processed data sets, Lung and Golub, presented more improvements in ML accuracy after the pre-processing phase. For these two data sets, error rates were lower after pre-processing for the majority of the experiments carried out.
In the complexity analysis of the SVMs, the number of Support Vectors (SVs), the data points that determine the decision border induced by SVMs, was considered. A smaller number of SVs indicates a lower complexity of the induced model.
For the C4.5 algorithm, complexity was determined by the mean size of the induced decision tree. Reduced decision trees are easier to analyze and so improve the comprehensibility of the model.
The complexity for the RIPPER algorithm was measured by the number of rules produced during the training phase. The smaller the number of rules produced, the lower the complexity of the generated model.
For all three ML algorithms investigated, the complexity was reduced when the pre-processed data sets were used, as presented in Tables 6, 7, and 8 for the SVM, C4.5, and RIPPER algorithms, respectively. In these tables, the best results are highlighted in bold and complexities similar to the best ones, for each data set, are shown in italics. Standard deviation rates are reported in parentheses.
According to Tables 6, 7 and 8, most of the complexities were reduced after pre-processing, except for the Golub data set and the RIPPER algorithm, in which not all complexities were reduced.
For the SVMs, the smaller the pre-processed data set produced by the noise detection techniques, the lower the number of SVs obtained and, consequently, the complexity of the model. For the C4.5 algorithm, the model complexity decreased to a lower bound beyond which further reductions in the pre-processed data set did not reduce the complexity.
For the RIPPER algorithm, the final models were also simplified, but with less reduction in the complexity. The complexity obtained using the original data for the Golub data set was maintained for its pre-processed versions.
The time taken by the SVM, C4.5, and RIPPER algorithms to induce hypotheses using the pre-processed data sets was always reduced compared to that obtained with the original data sets, taking at most 1 second. For SVMs, the processing time was only slightly reduced in comparison to the time obtained for the original data sets.
The analysis of the results presented in this paper shows that the three noise detection techniques evaluated presented similar results in terms of the amount of noise removed (data set reduction), time taken, and effect on the ML algorithms' performance. A possible explanation is that they are all noise filtering techniques based on the k-NN algorithm. Moreover, they are related: AllkNN is an ENN extension, while RENN is the ENN algorithm applied multiple times. For the gene expression data sets analyzed in this paper, the differences among these algorithms may not result in significant differences in the ML algorithms' performance. Most of the experiments presented satisfactory results, with lower error rates and better performance compared to those obtained in the analysis of the original data sets, which demonstrates that the noise detection techniques improved the performance of the ML algorithms evaluated. The C4.5 and RIPPER algorithms benefited from the application of noise detection techniques for most of the data sets investigated, with reduced complexity of the induced models. For the SVMs, the new results were slightly better, with lower complexity.
Furthermore, the gain in comprehensibility and the reduction in training time are additional advantages, since the complexities for all data sets were reduced after pre-processing (the noise detection and removal phase).
Therefore, the application of noise detection techniques in a pre-processing phase has the advantage of reducing the complexity of the classifiers induced by ML algorithms, as well as reducing the time spent in classifier training, producing, in most experiments, classification error results better than or similar to those obtained for the original data sets. This indicates that the distance-based noise detection techniques kept the most expressive patterns of the data sets and allowed the ML algorithms to induce simpler classifiers, as shown by the reduced complexity and lower classification error rates obtained.
Conclusions
This paper investigated the application of distance-based noise detection techniques to different gene expression classification problems. We did not find in the literature a single approach or algorithm able to detect noise without reducing classification accuracy that was tested on several data sets. We were also unable to find noise detection experiments on gene expression data sets aimed at detecting tissues that are probably noise. The closest works we found in gene expression analysis are 18,24 . However, these works detect and eliminate only genes, not tissues. The data sets employed here are related to both gene classification and tissue classification.
In the experiments performed here, three ML algorithms were trained over the original and pre-processed data sets. They were employed to evaluate the power of these techniques in maintaining the most informative patterns. The results observed indicate that the noise detection techniques employed were effective in the noise detection process. These experiments showed that the incorporation of noise detection and elimination resulted in simplifications of the ML classifiers and in reductions in their classification error rates, especially for the C4.5 and RIPPER algorithms. Another advantage for these two algorithms was an increase in comprehensibility.
We are now investigating new distance-based techniques for noise detection and developing ensembles of noise detection techniques, aiming to further improve the gains obtained by the identification and removal of noisy data. Preliminary results, presented in Libralon 23 , suggest that ensembles of distance-based techniques can be a good alternative for noise detection in gene expression data sets.
"Computer Science"
] |
Coulomb Potential Modulation of Atoms by Strong Light Field: Electrostatic Tunneling Ionization and Isolated Attosecond Light Pulse
An attosecond research upsurge has been rising since the establishment of a novel light source: the single isolated attosecond laser pulse in the extreme ultraviolet/X-ray range resulting from strong-field high-order harmonic generation (HHG). In this chapter, based on the electrostatic tunneling ionization arising from the Coulomb potential modulation of atoms by a strong light field, we scrutinize the intrinsic phase of high-order harmonics and analyze qualitatively the salient dependence of the two mainstream single isolated attosecond pulse generation techniques, polarization gating (PG) and amplitude gating (AG), on the carrier-envelope phase (CEP) of the femtosecond driving laser. The conclusion is that the optimized CEP corresponding to the highest intensity contrast between the main and sideband attosecond pulses is π/2 for polarization gating and 0 for amplitude gating. Further, an experimental implementation is described in detail to exemplify the tricks for optimizing the phase-matching process of HHG in the interaction of a high-intensity femtosecond laser field with a noble gas target. The effects of the relative position between the focus of the Gaussian-shaped driving femtosecond laser field and the gas target on the HHG phase matching were studied, and it was found that the gas target position expected for optimum phase matching always lies behind the focal point of the driving field.
Introduction
With the invention in general, and the realization by Maiman in 1960 in particular, of the laser, the field of optics soon entered the new era of nonlinear optics. In this regime, the optical properties of materials are no longer independent of the intensity of light, as was believed for hundreds of years before, but rather change with light intensity, giving rise to a wealth of new phenomena, effects, and applications. Today, nonlinear optics has entered our everyday life in many ways and has also been the basis for numerous new developments in spectroscopy and laser technology. Indeed, from the moment of the birth of nonlinear optics, laser physics and nonlinear optics have been intimately related to each other. Within "traditional" nonlinear optics, the absolute changes of the optical properties are tiny if one follows them in time on the timescale of a cycle of light. This simple fact is the basis of many concepts and approximations of "traditional" nonlinear optics. Over the years, however, lasers have improved in many ways, especially in terms of the accessible peak intensities and the minimum pulse duration available. These days, over 50 years after the invention of the laser, the shortest optical pulses generated are about one and a half cycles of light in duration. This comes close to the ultimate limit of a single optical cycle [1].
Thanks to the technique of chirped-pulse amplification (CPA) and the availability of CEP-stabilized few-cycle lasers [2], amplified laser pulses with focused peak intensities in the range of 10²² W/cm² are available in some laboratories. As a result, today's light intensities can lead to substantial or even extreme changes on the timescale of light, and optics research is now shifting from the perturbative to the extreme (or nonperturbative) nonlinear optical regime [3,4]. As for the latter, one remarkable achievement is the establishment of a novel light source: the single isolated attosecond (1 attosecond [as] = 10⁻¹⁸ s) laser pulse in the extreme ultraviolet or even X-ray spectrum, based on strong-field HHG [5][6][7]. Since the fundamental processes of chemistry, biology, and materials science are triggered or mediated by the motion of electrons inside or between atoms, and since the atomic-scale motion of electrons typically unfolds within tens to thousands of attoseconds, breakthroughs in light source science are now opening the door to watching and controlling these hitherto inaccessible microscopic dynamics, opening up a new horizon of science by observing, controlling, and manipulating nature in a new dimension. Consequently, an attosecond research upsurge is rising from physics, chemistry, and materials science to information processing and other fields [8][9][10]. It is worth mentioning that transnational or transregional attosecond science research infrastructures are emerging, such as the Extreme Light Infrastructure Attosecond Light Pulse Source (ELI-ALPS) in Szeged, Hungary. Its main objective is to establish a unique attosecond laser facility which provides developers and users with light sources within the THz to X-ray frequency range in the form of ultrashort pulses at high repetition rate.
Until now, HHG based on the interaction between a high-intensity few-cycle femtosecond laser and noble gases has been regarded as the main process for generating attosecond laser pulses. The physical process is, in essence, frequency up-conversion of the optical field mediated by atomic tunneling ionization. This means that, no matter which single isolated attosecond pulse generation technique is used, such as amplitude gating [11] or polarization gating [12], phase matching is the key issue: it affects the macroscopic response of the multi-atom gas system and eventually determines the conversion efficiency. According to the three-step scenario of HHG proposed by Corkum [13], phase matching in the HHG process is microscopically influenced by multiple factors: the fundamental driving laser field, the gas system used, the generated high-order harmonic field, and the plasma formed by atomic ionization. It is very difficult, and arguably impossible, to carry out a complete and accurate analysis of HHG phase matching. So far, most theoretical analyses address only one or two of the factors mentioned above, while experimental demonstrations are concerned only with the high-order harmonic yield, omitting, intentionally or not, the important details of phase-matching optimization [14,15]. Consequently, the experimental details needed to optimize the generation of HHG attosecond laser pulses have still not been made clear.
In this chapter, based on the three-step scenario of HHG, we scrutinize the intrinsic phase of the high-order harmonics arising from atomic tunneling ionization induced by a strong laser field and analyze qualitatively the salient dependence of the two fundamental single attosecond pulse generation techniques, polarization gating and amplitude gating, on the CEP of the femtosecond driving laser. Finally, an experimental implementation is described in detail to show how to realize optimum HHG phase matching during the interaction of a strong optical field with a noble gas target.
Strong field tunneling ionization and HHG
The light intensities necessary to rapidly ionize an atom are on the order of I ≈ 10¹⁴–10¹⁶ W/cm², which is well within the nonrelativistic regime. Thus, we can ignore the laser magnetic field for the moment and focus on the laser electric field [3]. Figure 1 visualizes this kind of ionization by a high-intensity few-cycle femtosecond laser pulse for an electron bound in the Coulomb potential of a nucleus (assumed to be much more massive, or fixed in space). The Gaussian, linearly polarized laser pulse of t_FWHM = 5 fs is characterized by its electric field E(t) = E₀(t)cos(ω₀t) with carrier photon energy ℏω₀ = 1.5 eV, together with the two-dimensional scheme of the resulting electric potential experienced by an electron initially bound in an atom at three characteristic points in time. The large "tilt" along the electric field vector axis in the center of the pulse can lead to tunneling of the electron out of its binding potential through the potential barrier. If the barrier height is lowered below the binding energy, above-barrier ionization can occur. For circularly polarized light, the "tilt" stays constant, but its axis rotates in time [1].
Whether the concept of electrostatic tunneling can be used for oscillating light fields depends on the ratio between the light field period and the time the electron spends within the barrier, the electron tunneling time. If the tunneling time is shorter than the period of light, the laser electric field can indeed be viewed as a static field along its polarization direction that parametrically changes its instantaneous value, which is the case for HHG-based single attosecond pulse generation. Within the "static-field approximation," the tunneling ionization rate Γᵢ(t) depends exponentially on the instantaneous barrier width l(t), because the electron wave function decays exponentially in the barrier according to ψ(x) ∝ exp(−|k_x|x), as in Figure 2. The probability of tunneling through the barrier is ∝ |ψ(l)|². One can therefore expect the tunneling rate Γᵢ(t) to follow an exponential dependence on the instantaneous barrier width, and the ionization characteristics consequently have a strong dependence on the CEP of the exciting laser pulses.
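As a rough numerical illustration of this exponential sensitivity, the sketch below evaluates the standard quasi-static tunneling-rate scaling Γ(t) ∝ exp(−C/|E(t)|), which follows from the barrier width shrinking as the instantaneous field grows, for a few-cycle pulse. The pulse parameters and the hydrogen-like constant C are illustrative assumptions, not values from this chapter.

```python
import numpy as np

# Quasi-static picture: the barrier width scales as l(t) ~ Ip/|E(t)|, so the
# tunneling rate behaves as Gamma(t) ~ exp(-C/|E(t)|) and fires in sub-cycle
# bursts near the field crests. Atomic units; all parameters illustrative.
Ip = 0.5                                     # hydrogen-like binding energy
C = 2 * (2 * Ip)**1.5 / 3                    # standard quasi-static exponent
w0 = 0.057                                   # 800 nm carrier frequency
t = np.linspace(-300, 300, 6001)             # about +/- 7 fs in atomic units
env = np.exp(-2 * np.log(2) * (t / 207)**2)  # ~5 fs FWHM Gaussian envelope
E = 0.08 * env * np.cos(w0 * t)              # peak field ~0.08 a.u.

rate = np.exp(-C / (np.abs(E) + 1e-12))      # instantaneous tunneling rate
print(f"crest/half-crest rate ratio: {np.exp(-C/0.08)/np.exp(-C/0.04):.1e}")
print(f"time fraction with rate > 1% of peak: {(rate > 0.01*rate.max()).mean():.2f}")
```

Halving the instantaneous field suppresses the rate by more than three orders of magnitude, which is why ionization is confined to brief sub-cycle bursts whose timing shifts with the CEP.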
To get the exact description of the HHG spectrum, one can in principle solve the time-dependent Schrödinger equation numerically for the ionization and compute the atomic dipole moment from the known instantaneous wave function Ψ(r,t) via the expectation value ⟨Ψ(r,t)|−er|Ψ(r,t)⟩. Multiplying by the atomic density delivers the macroscopic optical polarization P, which is one of the main parameters in the Maxwell equations. Neglecting propagation effects, the radiated electric field is proportional to the second temporal derivative of P. The square modulus of its Fourier transform delivers the intensity spectrum of the high-order harmonics. The most intuitive way to discuss semiclassically the extreme nonlinear interaction that leads to attosecond pulses is through the so-called three-step scenario introduced by Corkum [13], as in Figure 3. The first step (①) is the ionization of the atom via tunneling through the distorted Coulomb potential. The electron wave packet is then released into the continuum, where it is accelerated by the external field (②). The third step (③) is the possible recombination with the nucleus and the emission of a high-energy photon.
As for the second step, the motion of the electron wave packet can be described using Newton's second law of classical mechanics. Again, the magnetic field of the laser and the Coulomb potential binding the electron are ignored, and only the electric field of the laser is considered. We also neglect the spatial dependence of the electric field, since the wavelength of the laser field is much larger than the distance the electron travels within the external field. We assume that the laser field the atom is exposed to is E(t) = E₀cos(ω_L t). The electron tunnels through the potential wall and is released into the continuum with zero velocity at position zero at the birth time tᵢ. One can then use Eq. (3) to calculate the return time t_r of the electron for different tunneling times tᵢ. Since the process is periodic in time with a periodicity of π/ω_L, we use 0 ≤ tᵢ ≤ 0.5T = π/ω_L. Using Eq. (3), one easily finds that only those electrons freed at 0 ≤ tᵢ ≤ 0.25T have the chance to return to the nucleus, as illustrated in Figure 4. Intuitively, the return time t_r of an electron with birth time tᵢ is the abscissa of the intersection point formed by the laser field curve and its tangent line passing through the point at t = tᵢ. Two important quantities for analyzing HHG are the free time τ = t_r − tᵢ that the electron spends in the continuum and the return energy W_kin.
Here U_p = e²E₀²/(4m_eω_L²) is the wiggling electron's ponderomotive energy, i.e., the period-averaged kinetic energy in the external laser field E(t) = E₀cos(ω_L t). Electrons with different release times tᵢ are bound to have different free times τ but might have equal return kinetic energies, as shown in Figures 5 and 6. Taking the characteristic time t_m = 17T/360 as the boundary, which corresponds to the maximum return kinetic energy of 3.17 U_p, electrons freed earlier are called long-trajectory electrons, while the later ones are called short-trajectory electrons. The intrinsic phase of the high-order harmonics produced by the latter carries a linear chirp, since the free time τ, which determines the phase approximately linearly, itself varies nearly linearly for the short trajectories.
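The classical second step is easy to reproduce numerically. The sketch below releases electrons at rest at different birth times tᵢ in the field E(t) = E₀cos(ω_L t), locates their first return to the nucleus, and recovers the well-known 3.17 U_p cutoff near tᵢ ≈ 17T/360; the units and field strength are arbitrary illustrative choices, not values from this chapter.

```python
import numpy as np

# Classical second step of the three-step model: an electron born at rest
# at time ti in E(t) = E0*cos(w*t) follows x(t) analytically; we locate its
# first return to the nucleus and the return kinetic energy in units of Up.
# Units are arbitrary (e = m_e = E0 = 1, w = 1).
w = 1.0
T = 2 * np.pi / w                      # optical period
Up = 1.0 / (4 * w**2)                  # ponderomotive energy for E0 = 1

def first_return(ti):
    """First time after ti at which the trajectory crosses x = 0 again."""
    t = np.linspace(ti + 1e-6, ti + 2 * T, 40000)
    x = (np.cos(w*t) - np.cos(w*ti)) / w**2 + np.sin(w*ti) * (t - ti) / w
    idx = np.where(np.diff(np.sign(x)) != 0)[0]
    return t[idx[0]] if idx.size else None

births = np.linspace(1e-3, 0.25 * T, 300)   # only 0 <= ti <= 0.25T return
energies = []
for ti in births:
    tr = first_return(ti)
    v = -(np.sin(w*tr) - np.sin(w*ti)) / w if tr is not None else 0.0
    energies.append(0.5 * v**2 / Up)         # return kinetic energy / Up

i = int(np.argmax(energies))
print(f"max return energy = {energies[i]:.2f} Up at ti = {births[i]/T:.3f} T")
# -> ~3.17 Up near ti ~ 0.05 T (i.e., 17T/360), the long/short boundary.
```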
CEP dependence of single attosecond pulse generation
Single isolated attosecond pulses originating from the HHG process have been demonstrated experimentally by a variety of gating techniques, which include spectral selection of half-cycle cutoffs as in amplitude gating (AG) [11,16,17] and ionization gating (IG) [18-21], temporal gating techniques such as polarization gating (PG), including double optical gating [12,22-27], and spatiotemporal gating with the attosecond lighthouse effect [28,29]. Based on the intuitive description of HHG as the three-step scenario, PG and AG are the two most fundamental techniques: the former exploits the dependence of the HHG yield on the driving-field ellipticity, while the latter exploits the dependence of HHG on the driving-field intensity.
PG involves the synthesis of a laser field tailored specifically with time-dependent ellipticity. The left and right circularly polarized Gaussian laser fields used are combined with a time delay T_d between the two pulses, from which one obtains the synthesized field and its ellipticity. The field in Eq. (8) can be decomposed into two linearly polarized fields, the driving field and the gating field for HHG, respectively. Taking τ_ω = 2T = 5 fs, T_d = 5 fs, and the critical field ellipticity ξ_c = 0.2 [30], one obtains a gating zone with notable HHG effect of T_G = 1.5 fs. As shown in Figure 7, the two HHG channels show different evolution as the CEP changes from 0 to π/2, with channel ① suppressed and channel ② enhanced. The intensity contrast between the resulting harmonics reaches its maximum at CEP = π/2, meaning that a single isolated attosecond pulse is generated from one driving laser pulse. Consequently, the optimized phase setting for the driving field is CEP = π/2.
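A minimal numerical sketch of the polarization gate is shown below: two counter-rotating circularly polarized Gaussian envelopes delayed by T_d are combined, the time-dependent ellipticity of the synthesized field is computed, and the width of the region where it stays below ξ_c = 0.2 is measured, reproducing the T_G ≈ 1.5 fs gate quoted above. The Gaussian-envelope form of the ellipticity is an assumption of the sketch.

```python
import numpy as np

# Two counter-rotating circularly polarized Gaussian pulses delayed by Td:
# near the pulse overlap the synthesized field is almost linearly polarized,
# and HHG survives only where the ellipticity stays below xi_c.
tau, Td, xi_c = 5.0, 5.0, 0.2            # fs, fs, critical ellipticity

t = np.linspace(-10, 10, 4001)           # fs
E1 = np.exp(-2*np.log(2) * (t - Td/2)**2 / tau**2)   # envelope of pulse 1
E2 = np.exp(-2*np.log(2) * (t + Td/2)**2 / tau**2)   # envelope of pulse 2

xi = np.abs(E1 - E2) / (E1 + E2)         # time-dependent ellipticity
gate = t[xi < xi_c]                      # contiguous window around t = 0
print(f"gate width T_G = {gate[-1] - gate[0]:.2f} fs")   # ~1.5 fs
```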
As for AG, the essence is the spectral selection of half-cycle HHG cutoffs, so a linearly polarized driving field with CEP = 0 can guarantee that the cutoff spectrum comes from only one channel and is thus a supercontinuum supporting a single isolated attosecond pulse, as shown in Figure 8. The optimized phase setting for AG is therefore CEP = 0, which has been proven by the experimental results in Ref. [16].
Phase matching for high-energy attosecond pulse
The device used for optimizing HHG phase matching is schematically shown in Figure 9. The CEP-stabilized few-cycle femtosecond laser (labeled as 1) is used as the driving optical field for HHG, with the inert gas (labeled as 5) in the nickel tube (labeled as 4) as the target source. The metal filter (labeled as 8), e.g., aluminum or zirconium, is used to filter the spectrum of the residual driving laser from the generated HHG. An acrylic cover (labeled as 12) is specifically adopted for easier monitoring of the driving laser condition when the system is in vacuum. The procedure for optimizing high-order harmonic phase matching comprises three steps, described in order below.
The first step is done in air. A Ti:sapphire oscillator (Rainbow, Femtolasers GmbH) produces sub-10-fs seed pulses with 2.4 nJ energy at 76 MHz. The oscillator is pumped by a continuous-wave laser (Coherent Verdi) at 532 nm and 3.10 W. The oscillator output pulses are then stretched before being coupled into the amplifier. The output of the nine-pass amplifier with a three-mirror configuration is more than 1.1 mJ, with a duration of hundreds of picoseconds and a 3 kHz repetition rate. After amplification, the beam is sent into a grating compressor consisting of two high-efficiency transmission gratings and a vertical retro-reflector to obtain 25 fs laser pulses of 0.95 mJ. The beam from this laser system is focused into a 1.2 m long fused silica hollow-core fiber filled with neon gas (250 μm inner diameter and 750 μm outer diameter) [31]. The broad optical spectrum, ranging from 420 nm to around 950 nm, obtained by self-phase modulation in the hollow-core fiber and subsequently compressed by a set of finely designed chirped mirrors, gives a laser pulse width of 4.6 fs, which is about 1.7 optical cycles at the center wavelength of 803.5 nm, as shown in Figure 10. With a focal spot size of 50 μm, the peak intensity reaches above 1.5 × 10¹⁴ W/cm², which is enough to initiate HHG in the inert gas. The high-intensity femtosecond laser pulse produces an air-breakdown plasma near its focus, blue and purple in color, as shown in Figure 11. In a dark environment, the longitudinal midpoint of the plasma can be roughly regarded as the focus of the driving pulse. It is worth mentioning that, before sending the driving pulse into the vacuum chamber, both the nickel tube and the metal filter plates must be kept out of the propagation path of the driving pulse.
Subsequent experimental steps use the focus point determined above as a benchmark. First, keep the driving femtosecond laser at its experimental settings and roughly place the nickel tube 3–5 mm after the focus point (yet outside the driving laser path for now, to avoid damage). The nickel tube is a hollow cylinder with an outer diameter of 2.5 mm and an inner diameter of 2.0 mm, the top of which is fully sealed to contain the inert gas. Second, lower the power of the driving laser and then slowly move the nickel tube into the laser path so that the laser beam spot is located exactly at the transverse center of the gas tube. At this point the laser power should be too low to cause any ablation of the nickel tube. Finally, gradually restore the driving laser power to its experimental value to drill the nickel tube. To reduce the impact of the surrounding air flow on the pointing stability of the driving laser beam, it is necessary to put the acrylic cover back in place to obtain good tube drilling. It takes at least 30 minutes to finish the drilling process, and the quality can be checked through the pinhole far-field diffraction image, as shown in Figure 12.
This part illustrates the details of achieving the optimum phase matching for the most efficient harmonic conversion output. A zirconium filter of 0.15 μm thickness is used; its spectral transmittance characteristics are shown in Figure 13. When the HHG chamber reaches the required degree of vacuum, 2 × 10⁻⁴ Pa or lower, send in the driving femtosecond laser and open the control valve of the neon gas pipeline to generate high-order harmonics. The high-order harmonic signal is collected by an X-ray CCD. For given driving laser settings and nickel tube position, the maximum high-order harmonic yield can be found by finely tuning the backing pressure of the incoming neon gas. By changing the nickel tube position, one can obtain the dependence of the high-order harmonic yield on the relative position between the driving laser focus and the target source.
The results are shown in Figure 14, in which the horizontal coordinate of 0 indicates the focus point of the driving laser traveling along the positive direction. The optimum phase matching is reached when the nickel tube gas target is placed about 4 mm after the driving laser focus, as indicated by the maximum high-order harmonic yield there. The harmonic yield shows significant asymmetry and is extremely low for target positions before the driving laser focus, which differs from the result of reference [9]. The beam profiles of the high-order harmonics detected by the CCD and of the driving field at the gas target are shown in Figures 15 and 16, respectively. They show that, under the condition of optimum harmonic phase matching with the gas target positioned about 4 mm after the driving laser focus, the generated harmonics have an intensity spatial distribution similar to that of the driving laser field, providing experimental evidence for the commonly used assumption that the single isolated attosecond pulse generated by a Gaussian femtosecond laser is also a Gaussian beam. But in the case of worse phase matching, as in Figure 15(b), a Gaussian driving laser does not necessarily generate high-order harmonics with a similar profile. Using a grazing-incidence flat-field spectrometer, the HHG spectrum was analyzed, as shown in Figure 17. The spectrum in the cutoff region is a continuum, indicating that only one channel contributed to the HHG, as in Figure 8(b).
It is worth mentioning that other nickel tubes with different inner diameters were also tried. The finding that the optimum HHG phase matching lies after the driving laser focus held for all these experimental situations, although the gas target position for the best harmonic phase matching, the harmonic yield, and the backing pressure of the gas target varied to some extent. So we can safely say that the conclusion is universal for the Gaussian-type driving laser-based HHG process.
Conclusions
The establishment of the novel attosecond light source gave rise to an attosecond research upsurge from physics, chemistry, and materials science to information processing, among which the high-energy single isolated attosecond pulse produced by the interaction between a strong optical field and noble gases has been attracting much attention. Phase matching between the fundamental driving laser field and the resulting high-order harmonics is the key issue in such frequency conversion. In this chapter we scrutinized the intrinsic phase of the high-order harmonics arising from atomic tunneling ionization induced by a strong laser field and analyzed qualitatively the salient dependence of polarization gating and amplitude gating on the CEP of the femtosecond driving laser. The conclusion is that the optimized CEP of the driving femtosecond laser for generating a single attosecond pulse is π/2 for polarization gating and 0 for amplitude gating. Meanwhile, an experimental implementation was presented to exemplify the details of optimum HHG phase matching in the interaction of a Gaussian-shaped high-intensity few-cycle femtosecond laser with an inert gas target. We studied the dependence of the harmonic phase matching on the relative position between the gas target source and the focus of the driving laser field and found that the optimum gas target position for HHG phase matching always lies behind the focus of the driving field.
"Physics"
] |
Lexical Ambiguity in Arabic Information Retrieval: The Case of Six Web-Based Search Engines
In recent years, both research and industry have shown an increasing interest in developing reliable information retrieval (IR) systems that can effectively address the growing demands of users worldwide. In spite of the relative success of IR systems in addressing the needs of users and even adapting to their environments, many problems remain unresolved. One main problem is lexical ambiguity, which has negative impacts on the performance and reliability of IR systems. To date, lexical ambiguity has been one of the most frequently reported problems in Arabic IR systems, despite the development of different word sense disambiguation (WSD) techniques. This is largely attributed to the limitations of such techniques in addressing linguistic peculiarities. Hence, this study addresses these limitations by exploring the reasons for lexical ambiguity in Arabic IR applications as one step towards reliable and practical solutions. For this purpose, the performances of six search engines (Google, Bing, Baidu, Yahoo, Yandex, and Ask) are evaluated. Results indicate that lexical ambiguities in Arabic IR applications are mainly due to the unique morphological and orthographic system of the Arabic language, in addition to its diglossia and the multiple colloquial dialects, between which mutual intelligibility is sometimes not achieved. For better disambiguation and IR performance in Arabic, this study proposes that clustering models based on supervised machine learning should be trained to address the morphological diversity of Arabic and its unique orthographic system. Search engines should also be adapted to the geographic location of users in order to address the issue of vernacular dialects of Arabic, and should be trained to automatically identify the different dialects. Finally, search engines should consider all varieties of Arabic and be able to interpret queries regardless of the particular variety adopted by the user.
Introduction
In recent years, research and industry have witnessed an increasing interest in developing reliable information retrieval (IR) systems that can effectively address the growing demands of users all over the world (Qi, Wang, & Shen, 2017; Zhang, 2016). In spite of the relative success of IR systems in addressing the needs of users and even adapting to their environments, many problems remain unresolved. One main problem is lexical ambiguity, which has negative impacts on the performance and reliability of IR systems. It is even argued that lexical ambiguity is the most challenging problem for IR systems. This is because, in almost all languages, thousands of words have multiple connotations or meanings which need to be well considered in NLP applications. In English, for instance, over 80% of common English words have more than one dictionary entry, with some words having many different definitions (Rodd, 2018). Hence, IR systems need to be trained to learn and process such words in order to achieve reliability and consistency. The WSD process is essential given that a great number of words have identical forms but different meanings when used in different contexts. This is technically known as polysemy. The problem with this linguistic feature, however, is that the perceived meaning of a word can vary greatly from one context to another (Ruhl, 1989). Readers and listeners, however, can quickly make use of contextual cues to select the most likely meaning when polysemous words are used within sentences and structures. Humans have the ability to reinterpret a sentence in the light of subsequent information. Evidence from brain imaging studies reveals the network of temporal and frontal brain regions that are known to be important for representing and processing ambiguous words (Rodd, 2018). It is even argued that listeners and readers rarely notice the ambiguities that pervade our everyday language (Altmann, 1998).
While it is usually easy for humans to identify the intended meaning of words with multiple meanings, it is still challenging for NLP and IR systems to determine the correct sense of such lexemes. When a word has different senses, it is difficult for the machine to determine the intended sense in a sentence (Saqib, Ahmad, Syed, Naeem, & Alotaibi, 2019; Trivedi, Sharma, & Deulkar, 2014). The word depression in a query, for instance, is challenging for IR systems, which cannot easily decide whether it refers to illness, weather, or economics. Thus, it is the task of WSD techniques to remove ambiguities, determine the correct sense of such words, and automatically assign the correct sense to a word with multiple meanings in a particular context (Dixit, Dutta, & Singh, 2015). The success of a given IR system depends on its ability to disambiguate, determine the correct sense, and finally retrieve only relevant documents in response to the user query.
Despite the development of different WSD techniques, evaluations of such techniques suggest that these have inherent limitations; therefore, lexical ambiguity remains the most serious problem for NLP and IR systems in Arabic. This is attributed mainly to linguistic peculiarities which are not usually considered in standard IR systems which are largely based on European languages. However, Arabic is a Semitic language, very different from European languages in terms of phonetics, morphology, syntax and semantics (Altaher, 2017;Khan & Alshara, 2019;Shaalan, Siddiqui, Alkhatib, & Abdel-Monem, 2018). Hence the challenge faced by researchers and developers of NLP applications for Arabic text and speech (Farghaly & Shaalan, 2009). It follows that IR systems should be adapted to take into consideration the unique linguistic features of Arabic.
In light of this argument, this study is undertaken in order to better understand the reasons for lexical ambiguity in the IR applications of Arabic; based on this understanding, reliable and practical solutions to the problem can then be developed. The remainder of this article is organized as follows. Section 2 surveys the main linguistic and WSD approaches for addressing the problem of lexical ambiguity in IR. Section 3 describes the methods and procedures of the study. The results of the study are reported in Section 4. Section 5 concludes this paper.
Literature Review
The literature suggests that the issue of lexical ambiguity has been extensively discussed in different linguistic disciplines including semantics, psycholinguistics, and discourse studies. Various semantic theories, including cognitive semantics, have been generated in order to explain the nature of lexical ambiguity and to capture as many generalizations as possible about the ambiguous and contextually-dependent nature of word meaning (Chierchia & McConnell-Ginet, 1993;Deane, 1988;Löbner, 2002;Lyons, 1975;Stallard, 1987;Tuggy, 1993). Issues of ambiguity, vagueness, polysemy, and homonymy have been the focus of lexical ambiguity studies. There is general consensus that lexical ambiguity comes from the meaning of the words, not the structure. The multiple senses of a word thus lead to more than one interpretation. Different reasons have been suggested. These include shifts in application, specialization in a social milieu, figurative language, homonyms reinterpreted, and foreign influence (Leech, 1981;Lyons, 1995). Semantic studies thus have been concerned with proposing approaches that help to determine the correct sense in ambiguous sentences. Semantic relatedness/interconnections, cognitive topology and lexical networks remain among the most popular semantic approaches to lexical ambiguity (Brugman & Lakoff, 1988).
In psycholinguistics, studies have generally focused on the mental lexicon, brain activity and responses to lexical ambiguity, and the perception strategies governing the interaction between linguistic structures and performance (Durkin & Manning, 1989). Traditionally, the psycholinguistic approaches to lexical ambiguity were based, one way or another, on Chomsky's concept of linguistic competence. Studies in this tradition were concerned with the human ability to detect and resolve ambiguity and with what an individual must know in order to comprehend and speak his or her language (Shultz & Pilon, 1973). In this regard, different experiments were carried out to investigate the universality of the problem; in other words, researchers sought to answer the question of whether the issue of lexical ambiguity should be considered analogous across languages (Kess & Hoppe, 1978). This was aligned with Chomsky's concept of Universal Grammar. Under this traditional approach, lexical ambiguity was usually seen as a disadvantage, as it could result in confusion and misunderstanding. Studies in this tradition stressed that linguistic ambiguity is problematic because of its negative impact on precise language processing (Kess & Hoppe, 1981). Recent studies in psycholinguistics, however, argue that ambiguity is no longer a problem; it is something that can be taken advantage of, because easy words can be repeatedly used, albeit in different contexts (Finn, 2012).
Interestingly, in both semantics and psycholinguistics, discourse-based approaches have been used in the investigation of lexical ambiguity. In semantics, discourse is suggested as a mechanism for the resolution of lexical ambiguity. The focus is no longer on semantic relatedness. Likewise, the integration of discourse was tested and proved effective in helping individuals with aphasia and brain damage to resolve lexical ambiguity (Mason & Just, 2007;Tompkins, Baumgaertner, Lehman, & Fassbinder, 2000).
With the development of computational theory and NLP studies, the issue of lexical ambiguity has once again been the focus of many researchers. Different techniques have been developed in recent years to address the problem of lexical ambiguity and improve the performance of IR systems. Work on lexical ambiguity has traditionally focused on developing WSD techniques. The assumption has been that there is a close relationship between WSD and the IR. Therefore, correct disambiguation of words can lead to improvements in the effectiveness of retrieval systems (Sanderson, 1994;Zhong & Ng, 2012). Determining the correct sense or meaning of a given word increases the potential of IR systems to suggest relevant documents for a given user query.
According to the literature, there are three main WSD approaches: dictionary-based, ontology-based, and knowledge-based. The dictionary-based approach is usually considered to be the traditional WSD method and it is based on the development of corpus-based studies that use electronic corpora to resolve ambiguity issues. In this approach, a word's meanings are compared to those of the surrounding text where all the senses of a word that need to be disambiguated are retrieved from the dictionary (Agirre & Edmonds, 2007;Chen, 2000;Pal & Saha, 2015;Zhekova, 2014). One of the earlier attempts to implement this approach was Lesk's (1986) use of Oxford's Advanced Learner's Dictionary of Current English to resolve the issue of word senses (Indurkhya & Damerau, 2010). Similarly, Guthrie et al. used the Longman Dictionary of Contemporary English in 1991 to remove ambiguities and identify the correct sense of polysemous entries through the use of subject codes (Pal & Saha, 2015). The underlying principle in this approach is that there is a set of complete entries for each polysemous expression, from which anomalous alternatives are subsequently eliminated and only relevant senses are retained. Despite the continued research on dictionary-based approaches and techniques, lexical ambiguity remains pervasive so that many doubts have been raised about the reliability of these methods (Agirre & Edmonds, 2007). One major problem with this approach is that it is based on what can be described as 'static knowledge' as it makes no use of any specific knowledge manipulation mechanisms apart from the simple ability to match valences of structurally-related words (Boguraev & Pustejovsky, 1990).
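The dictionary-overlap idea can be illustrated with a toy version of Lesk-style matching applied to the depression example discussed earlier: the sense whose gloss shares the most words with the query context wins. The three-sense mini-dictionary below is purely illustrative and is not drawn from any of the cited dictionaries.

```python
# Toy version of the dictionary-overlap (Lesk-style) idea: choose the sense
# whose gloss shares the most words with the query context. The three-sense
# mini-dictionary for "depression" is purely illustrative.
glosses = {
    "depression (illness)":   "mental disorder marked by persistent sadness",
    "depression (weather)":   "region of low atmospheric pressure with clouds",
    "depression (economics)": "long severe downturn in economic activity",
}

def disambiguate(context: str) -> str:
    ctx = set(context.lower().split())
    return max(glosses, key=lambda s: len(ctx & set(glosses[s].split())))

print(disambiguate("treatment options for depression and mental disorder"))
# -> "depression (illness)": its gloss overlaps on "mental" and "disorder"
```

Real implementations count overlaps against full dictionary entries and surrounding glosses, but the failure mode is the same one noted in the literature: the method only matches surface words, a form of static knowledge.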
With knowledge-based techniques, the main assumption is that disambiguation systems need sources of knowledge to determine the proper meaning of a lexeme that has multiple senses (Otegi, Arregi, Ansa, & Agirre, 2015; Sheng, Fan, Thomas, & Ng, 2001). Hence, these approaches are similar to dictionary-based ones in that both rely on sources of knowledge for disambiguation purposes. However, dictionary-based techniques are limited to the use of dictionaries, whereas knowledge-based techniques exploit different sources, such as specialized corpora, WordNet, and semantic systems. It is through these sources of knowledge that WSD systems are able to disambiguate words by defining their contexts. In other words, corpora, WordNet, and other sources of knowledge are used as the contexts for disambiguating lexemes with multiple senses. One major problem with knowledge-based approaches, however, is that they rely only on words to disambiguate target words. Chaplot and Salakhutdinov (2018) explain that the sense of a word depends not just on the words in the context but also on their senses. Since the senses of the words in the context are also unknown, they need to be optimized jointly.
In order to overcome the limitations of both dictionary-based and knowledge-based approaches, ontology-based techniques have been developed. Ontologies are the most widely used techniques in IR systems. In the ontology-based approach, words with multiple senses are disambiguated through the design of an ontology of semantic concepts. The function of this ontology is to enable IR systems to resolve lexical ambiguity problems by drawing inferences from the concept network of the ontology (Hadzic, Chang, & Wongthongtham, 2009; Ławrynowicz, 2017; Mena & Illarramendi, 2001). The underlying principle of ontology-based techniques has been that searches in IR should be based on meaning and inference rather than on literal strings. IR systems and search engines should be equipped with mechanisms enabling them to understand the relationship between search items and concepts. However, in spite of their advantages in terms of enriching semantic inference and expressiveness, making inferences, and understanding the relationships between search items, deep levels of conceptual modeling remain challenging in many cases.
Method
In order to explore the reasons for lexical ambiguity in Arabic IR, the performances of six search engines were evaluated: Google, Bing, Baidu, Yahoo, Yandex, and Ask. These are among the most popular search engines worldwide; the selection criteria included popularity and the ability to address user queries in Arabic. The queries listed in Table 1 were then submitted in order to explore the lexical ambiguities encountered by the selected search engines.
Results and Discussion
The behaviour of the selected search engines was examined by exploring the results retrieved for the test queries. The investigation led to the conclusion that lexical ambiguity is the main reason that irrelevant items were retrieved in response to the selected queries. Overall, the sources of lexical ambiguity can be grouped under three main categories: the unique morphological and orthographic system, the diglossia feature, and the multiple colloquial dialects. These represent real challenges for IR systems and have negative impacts on their performance, as explained below.
Results indicate that thousands of irrelevant documents were generated due to unique morphological features that are not taken into account by the search engines. The Arabic language has a unique morphological system which can lead to an incorrect meaning being assigned to a particular word. This can be explained as follows. In order to determine the sense or meaning of a word, the three-letter root must be identified, followed by the identification of the syntactic context (Akesson, 2010; Ryding, 2005; Soudi, van den Bosch, & Neumann, 2007). However, in some cases the meaning can still be ambiguous and will need to be disambiguated (Glanville, 2018; Habash, 2010; Ryding, 2014). That is, it is sometimes difficult to relate the meaning of a given word to its three-letter root. The word مسكين (poor) as in له يا مسكين ولد من (What a poor guy), for instance, has no connection to the three-letter form سكن (literally translated as being constant or inhabited). This is partly due to the inevitable evolution of Arabic, just as of any other language. Hence, it is very often difficult for IR systems based on Arabic dictionaries and glossaries to determine the sense or meaning of a given word. Additionally, Arabic is a synthetic language that is based on a case system. This case system is not usually used by Arabic speakers, in spite of its importance in determining the correct meaning or sense of a word, as shown in Table 2. Generally, Internet users are not familiar with the use of cases in their searches. Furthermore, the vast majority of Arabic texts are not written using the case system. This poses real challenges for search engines and IR systems attempting to retrieve only relevant documents or items in response to users' queries in Arabic.
Another reason for lexical ambiguity in Arabic is its diglossia, which involves two varieties: Modern Standard Arabic (MSA), considered the H (High) variety, and Colloquial Arabic (CA), classified as the L (Low) variety. In the Arab countries, MSA is the official language and the formal language of education in schools. It is also used in the press and in TV news bulletins. Educated Arab speakers are usually able to produce and understand MSA, while uneducated people usually have difficulties in producing and even understanding this variety of Arabic (Albirini, 2016; Ferguson, 1996; Owens, 2013). There are great similarities between MSA and Classical Arabic (the language of the Quran and classical literature), especially in terms of morphology, grammar, and structure. However, although MSA follows the basic syntax and morphology of Classical Arabic, the vocabulary is widely different (Ibrahim, 2009; Simpson, 2019). Colloquial Arabic, in turn, refers to the regional vernacular dialects. It is the language used in everyday speech (AlSuwaiyan, 2018). It is an umbrella term that covers various Arabic dialects, including Egyptian Colloquial Arabic, Lebanese Colloquial Arabic, and Moroccan Colloquial Arabic. The morphological, lexical, and grammatical features of CA are very different from those of MSA (Bassiouney, 2009). Many words in MSA are used differently in CA, making it difficult for IR systems and search engines to determine the correct sense. It was also observed that the significant variation among the vernacular dialects of CA represents a real challenge to the performance of IR systems. Although these vernacular dialects of Arabic were not written and for centuries had been used only for oral communication, they are now widely used in writing, especially with the development of communication technologies, the proliferation of social media platforms, and the increasing interaction between people (Bassiouney, 2009; Harrat, Meftouh, & Smaili, 2019; Khedher et al., 2015).
The results of this study align with those reported in the literature in that the reasons for lexical ambiguity are not the same for all natural languages. This suggests that the linguistic peculiarities of a particular language should be considered by IR engineers if they are to provide workable and reliable solutions for the problem of lexical ambiguity (Dini L. & V., 1999;Kraaij, 2004;Mustafa & Suleman, 2015). Furthermore, all variations of Arabic must be taken into account during the development of IR systems. The colloquial Arabic dialects have long been ignored in NLP and IR applications, with the current search engines still catering mostly to MSA (Azmi & Aljafari, 2015;Obeid, Salameh, Bouamor, & Habash, 2019). IR systems are generally trained to deal with Standard Arabic which is in many ways different from the Arabic colloquial dialects. Thus, it is imperative that IR systems and search engines integrate these colloquial dialects to address the day-to-day needs of users all over the world. CA is the primary language of communication and younger generations are more adept at communicating in CA (Azmi & Aljafari, 2015;Bassiouney, 2009).
For better disambiguation and IR system performance in terms of Arabic, this study proposes that clustering models based on supervised machine learning theory should be trained to address the morphological diversity of the Arabic language and its unique orthographic system. Search engines should also be adapted to the geographic location of the users in order to address the issue of Arabic vernacular dialects. They should also be trained to automatically identify the various dialects, which will lead to the improvement in the IR performance as it reduces the possibility of having words with multiple meanings (Obeid et al., 2019;Sadat, Kazemi, & Farzindar, 2014).
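A minimal sketch of the kind of supervised dialect identifier proposed here is given below, using character n-grams, which are robust to Arabic's concatenative morphology and optional diacritics, together with a linear SVM. The tiny inline data set and the MSA/Egyptian/Levantine labels are illustrative assumptions; a real system would be trained on a dialect corpus such as MADAR.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy labeled examples; a real system would train on a dialect corpus.
texts = ["ماذا تريد أن تفعل", "كيف حالك اليوم",       # MSA
         "عايز ايه دلوقتي", "ازيك عامل ايه",           # Egyptian
         "شو بدك تعمل هلق", "كيفك شو الأخبار"]         # Levantine
labels = ["MSA", "MSA", "EGY", "EGY", "LEV", "LEV"]

model = make_pipeline(
    # character n-grams sidestep tokenization issues caused by Arabic's
    # concatenative morphology and optional diacritics
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(texts, labels)
print(model.predict(["شو بدك تروح"]))   # expected: ['LEV']
```

A predicted dialect label could then be used to route the query to a dialect-aware index or to a geographic-specific ranking model, in line with the adaptation proposed above.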
Conclusion
In this article, we explored the reasons for lexical ambiguity in Arabic IR systems as a first step towards proposing reliable and workable WSD solutions. It was revealed that linguistic peculiarities have important implications for IR engineering and performance. In Arabic, these have an impact on the reliability of IR systems and search engines. The selected search engines show serious limitations in considering the linguistic peculiarities of Arabic, which constitute the main reasons for linguistic ambiguity in Arabic IR. These can mainly be attributed to the unique morphological system of Arabic, its diglossia, and the numerous colloquial dialects. WSD techniques need to consider these linguistic peculiarities for better IR system performance. This paper was limited to considering the use of only the Arabic alphabet in search engines. Future work can focus on lexical ambiguity in the emerging Arabic chat alphabets, usually referred to as Franco-Arabic or Arabizi.
Acknowledgments
We take this opportunity to thank Prince Sattam Bin Abdulaziz University in Saudi Arabia, alongside its Deanship of Scientific Research, for all the technical support it has unstintingly provided for the fulfillment of the current research project.
"Computer Science",
"Linguistics"
] |
Coupled hard–soft spinel ferrite-based core–shell nanoarchitectures: magnetic properties and heating abilities
Bi-magnetic core–shell spinel ferrite-based nanoparticles with different CoFe2O4 core size, chemical nature of the shell (MnFe2O4 and spinel iron oxide), and shell thickness were prepared using an efficient solvothermal approach to exploit the magnetic coupling between a hard and a soft ferrimagnetic phase for magnetic heat induction. The magnetic behavior, together with morphology, stoichiometry, cation distribution, and spin canting, were investigated to identify the key parameters affecting the heat release. General trends in the heating abilities, as a function of the core size, the nature and the thickness of the shell, were hypothesized based on this systematic fundamental study and confirmed by experiments conducted on the water-based ferrofluids.
SAMPLES CHARACTERIZATION
Table 1S. Particle diameters determined by different techniques: DTEM, SD_DTEM, and σ_DTEM are the number-weighted median diameter, standard deviation, and distribution width obtained from TEM image analysis; DTEM_V is the volume-weighted particle diameter calculated from the number-weighted data; DXRD_R is the particle diameter obtained from the Rietveld analysis of the XRD patterns; DXRD_SPA is the particle diameter obtained from the single-peak analysis of the XRD patterns with the Scherrer equation, as described in reference [1]; DMAG_O is the particle diameter obtained from the analysis of the magnetization isotherms in generalized Langevin scaling using Octave, while DMAG_M is obtained with the MINORIM software.

LOW-TEMPERATURE MÖSSBAUER SPECTROSCOPY

The in-field measurements were done with the external magnetic field perpendicular to the γ-beam and are useful for obtaining information about the cation distribution and the canting phenomena in the spinel structure.
Indeed, the angle θ between the magnetic moment (μ⃗) and the applied magnetic field has been estimated thanks to the following cosine relation:

cos θ = (B_hf² − B_eff² − B_app²)/(2·B_eff·B_app) (Eq. 1S)

where B_hf is the hyperfine field (the effective field at 0 T), B_eff is the total effective magnetic field at the nucleus (measured at 6 T), B_app is the external applied magnetic field, and α is the angle between B_eff and B_app. The angle θ corresponds to the canting angle of the magnetic moment for the octahedral sites, whereas for the tetrahedral ones the canting angle is equal to π − θ. This is because of the relative arrangement of the hyperfine and applied field vectors, which are parallel for tetrahedral sites and antiparallel for octahedral sites [2].

Figure 3S. Low-temperature ⁵⁷Fe Mössbauer spectra with no external magnetic field (upper side) and in the presence of an external magnetic field (lower side) for the samples CoB (left), CoB@Mn (middle), and CoB@Fe (right). Octahedral sites are represented in blue, tetrahedral in red. For CoB@Fe, the external field increases up to 6 T.

Table 2S. ⁵⁷Fe Mössbauer parameters of the samples obtained from the spectra recorded at low temperature (4 K) and with a 6 T applied magnetic field: values of the isomer shift (δ), effective field at 0 T (B_eff^0T = B_hf) and at 6 T (B_eff^6T), relative area (A), canting angles (α), and chemical formula calculated from the site occupancy corrected by ICP-OES data.

Without the external magnetic field, the spectra show the overlap of two sextets associated with the octahedral and tetrahedral sites of the spinel structure. The in-field measurements allowed us to split these two subspectra and to calculate the occupancy of the two sublattices. For instance, the relative areas of the two subspectra allowed the cation distribution of sample CoB to be determined [3-9]. Similar results were found for the samples CoA and CoC, whose inversion degrees are equal to 0.65 and 0.74, respectively. For the sample CoA@Fe, it was found from the relative areas that 37% of the Fe cations are located in tetrahedral positions and 63% in octahedral positions. Comparison of these data with those of the core allowed us to estimate the cation distribution of the shell. Taking into account the iron fraction in the core calculated from the ICP-OES measurements (0.10) and the site occupancy of the core, we estimated the amount of Fe in the shell, which corresponds to 38% in tetrahedral and 62% in octahedral sites. Consequently, the Fe(Oh)/Fe(Td) ratio is 1.65. This result is in agreement with the theoretical maghemite ratio, whose value is 1.67, while for stoichiometric magnetite it is 2 [10]. The same behaviour is revealed in the samples CoB@Fe and CoC@Fe2, whose Fe(Oh)/Fe(Td) ratios are equal to 1.71 and 1.74, respectively. The increase of the Fe(Oh)/Fe(Td) ratio with the NP size is in line with the RT Mössbauer data, which suggested a lower degree of oxidation for the larger sample, CoC@Fe2 [1]. Using the same procedure, the Fe content in the CoB@Mn shell was estimated, and the inversion degree was found to be 0.46. Consequently, the formula of the shell can be written as (Mn0.43Fe0.55)[Mn0.46Fe1.52]O4. Similar behaviour was observed for sample CoC@Mn, with an inversion degree of 0.44. This result is in good agreement with the theoretical value of the inversion degree for nanosized manganese ferrite [11]. The chemical formulas of the different samples with site occupancies are reported in Table 2S. Using Eq. 1S, it is also possible to calculate the canting angles. For sample CoA, the values for the tetrahedral and octahedral sites are 19° and 0°, respectively.
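A small sketch of the canting-angle evaluation via the cosine relation (Eq. 1S) is given below; the field values are illustrative placeholders, not the measured ones from Table 2S.

```python
import numpy as np

# Canting angles from the in-field Mossbauer fields via Eq. 1S; the field
# values below are illustrative, not the measured ones from Table 2S.
def canting_angle(Bhf, Beff, Bapp, site):
    cos_t = (Bhf**2 - Beff**2 - Bapp**2) / (2 * Beff * Bapp)
    theta = np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))
    # canting angle = theta for octahedral sites, 180 - theta for tetrahedral
    return theta if site == "Oh" else 180.0 - theta

print(canting_angle(52.0, 46.0, 6.0, "Oh"))  # collinear Oh moment -> 0 deg
print(canting_angle(52.0, 58.0, 6.0, "Td"))  # collinear Td moment -> 0 deg
```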
Within the experimental error, we can consider that the magnetic moments of both sublattices are not canted: the angles are calculated from the cosine relation (Eq. 1S), so small changes in cos θ lead to significant changes in the angle values. The same result was found for samples CoB and CoC, whose canting-angle values for the tetrahedral and octahedral sites are equal to 19°/10° and 25°/0°, respectively (Table 2S). Spin canting was likewise not revealed in the core-shell samples. The sample CoB@Fe was measured at different magnetic fields (from 1 to 6 T) to gather information on the spin saturation process (Figure 3S, right). Even with an external magnetic field of 1 T, the splitting of the two sextets was observed. At 0 T the octahedral sites have a larger hyperfine field than the tetrahedral ones, while at 6 T an inversion occurs (Figure 3S). This is due to the antiparallel direction of the octahedral hyperfine field with respect to the externally applied field. The increase of the hyperfine field in the tetrahedral sites (or decrease in the octahedral sites) when a 1 T field is applied is equal to 0.3 T. When a 2 T field is applied, B_hf changes by 1.5 T for the Td sites and 1.7 T for the Oh sites, while it changes by about 1 T for each subsequent increase of the external magnetic field up to 6 T. The different behaviour observed below and above 2 T is probably caused by the unsaturated magnetic moment, which requires a field of such strength to be saturated, as revealed by the field dependence of the magnetization at low temperature (Figure 3).

Figure 5S. ZFC (full circles) and FC (empty circles) curves, normalized to the magnetization at Tmax of the ZFC curve, of the core-shell samples and their respective cores, recorded at a low external magnetic field (10 mT).

The ZFC and FC curves (Figure 6S) show a bifurcation at a specific temperature (Tdiff), with a maximum on the ZFC curve (Tmax); these are proportional to the blocking temperature of the largest particles and to its mean value, respectively. Tb is the blocking temperature calculated from the first derivative of the difference curve (MFC − MZFC) as the temperature at which 50% of the nanoparticles are in the superparamagnetic state [12,13]. Both Tmax and Tb increase with the size of the cobalt ferrite, as predicted by the Stoner-Wohlfarth model [14], and are in good agreement with previously reported values for cobalt ferrite of similar size [15-18]. The saturation of the low-temperature part of the FC curve extends to higher temperatures for larger particles; this is a typical signature of enhanced inter-particle interactions due to the increase of the mean magnetic moment per particle and the modification of the effective magnetic anisotropy [19]. The ZFC curves (Figure 5S) of the core-shell samples show a dominant maximum (Tmax), associated with the majority particle population, also confirmed by a single energy-barrier distribution (−d(MFC − MZFC)/dT) [20] centred at a specific temperature (Tb). Both the Tmax and Tb values increase in the core-shell samples compared to the cores, due to the increased magnetic volume of the particles. The difference Tdiff − Tmax is generally lower in the core-shell samples than in the cores, suggesting a decrease in the dispersity of the energy-barrier distribution, because the homogeneous growth of the shells around the seeds induces a narrower size distribution.
DC MAGNETOMETRY
Nevertheless, the increase of interparticle interactions in the core-shell systems can also affect Tdiff, as evidenced by the flatness of the FC curves. In particular, the spinel-iron-oxide-coated core-shells show a more pronounced FC saturation, typical of strongly interacting systems (e.g., superspin glasses) [21-23]. The magnetization isotherms of the cobalt ferrite samples show no hysteretic behaviour at 300 K (Figure 6S), typical of particles in the superparamagnetic state. A large hysteresis is instead present at 10 K for the cobalt ferrite samples. The coercive field increases with the particle size, in agreement with the Stoner-Wohlfarth model [14]. For a real system of superparamagnetic NPs with a size distribution, the magnetization M of the NPs in the magnetic field H can be written as a weighted sum of Langevin functions:

M(H) = ∫ μ·L(μH/k_BT)·f(μ)·dμ + χ_lin·H (Eq. 2S)

where f(μ) corresponds to the unimodal log-normal distribution of the magnetic moments μ, expressed as

f(μ) = [1/(√(2π)·σ·μ)]·exp[−ln²(μ/μ₀)/(2σ²)] (Eq. 3S)

where σ is the distribution width, and μ₀ and μ_m are the median and mean magnetic moments, respectively (Table 3S). The second term in Eq. 2S corresponds to an additional linear contribution to the magnetization, which can originate from diamagnetic or paramagnetic components of the sample. The parameters of f(μ) were obtained from the refinement of the magnetization isotherm measured above T_B in the Matlab/Octave software. The median magnetic size d_mag of the particles was calculated from μ₀ using the expression:
d_mag = [6·a³·(μ₀/μ_uc)/π]^(1/3) (Eq. 5S)
where a and μ_uc are the lattice parameter and the magnetic moment of the unit cell of the spinel phase (calculated assuming the site occupancy estimated from the LT ⁵⁷Fe Mössbauer spectroscopy), respectively (Table 3S). For comparison, the magnetic moments (μ) and the magnetic diameters (DMAG) were also calculated with the MINORIM software, which uses a non-regularized method [24]. The parameters used for the calculation of the magnetic diameters are reported in Table 3S. In Eq. 6S and Eq. 7S, the numerical coefficient is equal to 0.5 and 0.64 for uniaxial (K2_uni) and cubic (K2_cub) anisotropy, respectively.
K₃ = 25·k_B·T_b/V (Eq. 8S)

Eq. 6S and Eq. 7S depend on the saturation magnetization and on the coercive field or anisotropy field, respectively, and give an approximate value of the anisotropy constant assuming collinear orientation with respect to the magnetic field. Eq. 8S derives from the energy-barrier equation and depends on the blocking temperature (Tb), estimated from the −d(MFC−MZFC)/dT curves, and on the particle volume. The anisotropy constant K₃ was calculated using the <DMAG>, <DXRD>, and <DTEM> values. The results are reported in Table 4S.
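As a numerical illustration of the Eq. 8S estimate, the sketch below computes K₃ = 25·k_B·T_b/V for a spherical particle; the blocking temperature and diameter values are illustrative placeholders, not values from Table 4S.

```python
import numpy as np

# Minimal sketch of the Eq. 8S estimate K3 = 25*kB*Tb/V for a spherical
# particle; the Tb and diameter values are illustrative, not from Table 4S.
kB = 1.380649e-23                      # Boltzmann constant (J/K)

def K3(Tb_kelvin, d_nm):
    V = np.pi / 6 * (d_nm * 1e-9)**3   # particle volume (m^3)
    return 25 * kB * Tb_kelvin / V     # anisotropy constant (J/m^3)

print(f"K3 = {K3(Tb_kelvin=180, d_nm=7.7):.2e} J/m^3")   # ~2.6e5 J/m^3
```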
DETAILS ON MAGNETIC FLUID HYPERTHERMIA

Theoretical Calculation of SAR
The equations for the calculation of the SAR are as follows. P is the lost power, expressed as

P = μ₀·π·χ''·f·H² (Eq. 9S)

χ'' is the out-of-phase component of the susceptibility, expressed as

χ'' = χ₀·ωτ/(1 + (ωτ)²) (Eq. 10S)

where χ₀ is the actual susceptibility, while τ is the effective relaxation time, expressed in Eq. 11S and Eq. 14S, respectively.
χᵢ is the susceptibility calculation parameter, and ξ is the Langevin parameter, expressed in Eq. 12S and Eq. 13S, respectively.
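Combining Eq. 9S and Eq. 10S, the sketch below estimates the dissipated power and the SAR within this linear-response picture; every parameter value (field, frequency, susceptibility, relaxation time, and the concentration used for normalization) is an illustrative assumption, not a measured quantity.

```python
import numpy as np

# Linear-response estimate combining Eq. 9S and Eq. 10S; every parameter
# value below is an illustrative guess, not a measured one.
mu0 = 4e-7 * np.pi
H, f = 15e3, 183e3              # field amplitude (A/m), frequency (Hz)
tau, chi0 = 2e-8, 1e-3          # relaxation time (s), actual susceptibility

wt = 2 * np.pi * f * tau
chi2 = chi0 * wt / (1 + wt**2)              # out-of-phase susceptibility
P = mu0 * np.pi * chi2 * f * H**2           # dissipated power (W per m^3)
c = 3.4                                     # magnetic material (kg per m^3)
print(f"P = {P:.0f} W/m^3, SAR = {P/c:.0f} W/kg")
```

The strong dependence on ωτ is the reason why the effective relaxation time, and hence the particle volume and anisotropy discussed above, controls the heat release.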
AC MAGNETOMETRY ON POWDERED SAMPLES
AC magnetometry was used to measure the temperature dependence of the in-phase (χ') and out-of-phase (χ'') components of the magnetic susceptibility at different frequencies (0.1-1000 Hz) for the core (Figure 9S) and core-shell (Figure 10S) samples; the Néel relaxation times estimated with the Vogel-Fulcher equation (Eq. 17S) are reported in Table 5S. The Néel relaxation times of the core-shell samples are slower than those of the respective cores, due to the increased particle volume, which dominates over the decrease of the effective anisotropy.
τ = τ₀·exp[E_b/(k_B(T − T₀))] (Eq. 17S)
where τ₀ is the characteristic relaxation time, E_b the energy barrier against magnetization reversal, T the absolute temperature, and T₀ the temperature accounting for the strength of the magnetic interactions.
As an example, the calculation for the CoA sample is reported. The Vogel-Fulcher equation (Eq. 17S) has been rewritten in logarithmic form (Eq. 18S):

ln τ = ln τ₀ + E_b/[k_B(T − T₀)] (Eq. 18S)
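A sketch of this logarithmic analysis is given below: for a trial T₀, ln τ is linear in 1/(Tmax − T₀), so τ₀ and E_b follow from a linear fit. The (f, Tmax) pairs are synthetic placeholders generated to lie on a Vogel-Fulcher law, not the measured CoA data.

```python
import numpy as np

# Vogel-Fulcher analysis in logarithmic form (Eq. 18S): for a trial T0,
# ln(tau) is linear in 1/(Tmax - T0). Synthetic (f, Tmax) pairs stand in
# for the measured AC susceptibility maxima.
kB = 1.380649e-23
freqs = np.array([0.1, 1.0, 10.0, 100.0, 1000.0])          # Hz
Tmax  = np.array([134.2, 135.9, 138.1, 141.0, 145.0])      # K (synthetic)
tau = 1 / (2 * np.pi * freqs)      # relaxation time probed at each frequency

T0 = 120.0                         # trial interaction temperature (K)
x = 1 / (Tmax - T0)
slope, intercept = np.polyfit(x, np.log(tau), 1)
print(f"tau0 = {np.exp(intercept):.1e} s, Eb/kB = {slope:.0f} K")
# -> roughly tau0 ~ 1e-9 s and Eb/kB ~ 300 K for these synthetic points
```

In practice, T₀ is varied until the points fall best on a straight line, and the resulting τ₀ is checked against physically reasonable attempt times.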
INTERCALATION PROCESS
The hydrophobic nanoparticles were made hydrophilic by an intercalation process with CTAB, as described in the main article. The concentration of the colloidal dispersion was 3.4 mg·mL⁻¹. The presence of the CTAB molecules was verified by FT-IR, as shown as an example in Figure 12S. To the best of our knowledge, the role of dipolar interactions in the heat release of bi-magnetic spinel-ferrite-based core-shell nanoparticles has not been studied in the literature, due to the intrinsically complex nature of the system [29-45]. Indeed, to investigate the role of dipolar interactions it would be necessary to have no changes in the composition, size, morphology, etc. of the primary nanoparticles, but only differences in the aggregates/agglomerates in terms of size (number of primary NPs) and shape (random or controlled clustering, such as chain-like alignment). In our work, the samples differ in several features (core size, shell thickness, and chemical composition), and therefore it is not possible to draw conclusions about the specific role of dipolar interactions in the heat release. The studies present in the literature on dipolar interactions and their role in heat dissipation are mainly devoted to single-phase nanoparticles, but their role is still debated, with contradictory results showing either improvement [30,34-36,41] or deterioration [30-33,45] of the heating abilities, which are consequently hard to predict. Indeed, the shape and size of the aggregate influence the role of the dipolar interactions, which are commonly beneficial when ordered clusters (e.g., chains) are formed, but detrimental if the NPs are randomly oriented. Some authors define the dipolar coupling constant λ as:
λ = μ₀·μ²/(4π·d³·k_B·T) (Eq. 20S)
where μ is the magnetic moment and d the mean diameter. For λ > 2, the system is considered strongly interacting and aggregation may happen; when λ < 2, the interparticle interactions are negligible [37]. For our samples (in the form of powder), λ is always < 2, so dipolar interactions should be negligible. Nevertheless, as we report in Figure 14S, it is evident from the FC curves that dipolar interactions are present both in the powder and in the dispersions. However, in the selected concentration range (0.7-3.4 mg/mL) no significant effect can be detected, as can be seen after rescaling the curves to relative values: both the ZFC and FC curves almost overlap, underlining that no changes occur in the strength of the dipolar interactions upon increasing the concentration. To clarify the behaviour of the dipolar interactions with concentration, a cobalt ferrite sample (DXRD = 7.7 nm, a replica of sample CoC) was measured, in which the effects on the SAR of both the concentration (3.4 mg/mL and 7.8 mg/mL) and the coating (cetyl trimethylammonium bromide (CTAB) and polyethylene glycol trialkoxysilane (PEG-TMS)) were studied (Figure 15S).
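As a numerical check of the λ < 2 criterion used above, the sketch below evaluates Eq. 20S for an illustrative particle moment and diameter (not our measured values).

```python
import numpy as np

# Dipolar coupling constant (Eq. 20S): the dipole-dipole energy of two
# touching particles over the thermal energy. Moment and diameter values
# are illustrative placeholders.
mu0, kB, T = 4e-7 * np.pi, 1.380649e-23, 300.0
mu = 20000 * 9.274e-24      # particle moment: 2e4 Bohr magnetons (assumed)
d  = 8e-9                   # mean particle diameter (m), assumed

lam = mu0 * mu**2 / (4 * np.pi * d**3 * kB * T)
print(f"lambda = {lam:.2f}")   # ~1.6 < 2 -> nominally weakly interacting
```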
"Materials Science"
] |
Identification of Mobile Phones Using the Built-In Magnetometers Stimulated by Motion Patterns
We investigate the identification of mobile phones through their built-in magnetometers. These electronic components have started to be widely deployed in mass market phones in recent years, and they can be exploited to uniquely identify mobile phones due to their physical differences, which appear in the digital output they generate. This is similar to approaches reported in the literature for other components of the mobile phone, including the digital camera, the microphones or the RF transmission components. In this paper, the identification is performed through an inexpensive device made up of a platform that rotates the mobile phone under test and a fixed magnet positioned on the edge of the rotating platform. When the mobile phone passes in front of the fixed magnet, the built-in magnetometer is stimulated, and its digital output is recorded and analyzed. For each mobile phone, the experiment is repeated over six different days to ensure consistency in the results. A total of 10 phones of different brands and models or of the same model were used in our experiment. The digital output from the magnetometers is synchronized and correlated, and statistical features are extracted to generate a fingerprint of the built-in magnetometer and, consequently, of the mobile phone. An SVM machine learning algorithm is used to classify the mobile phones on the basis of the extracted statistical features. Our results show that inter-model classification (i.e., classification of different models and brands) is possible with great accuracy, but intra-model classification (i.e., phones with different serial numbers and the same model) is more challenging, the resulting accuracy being just slightly above random choice.
Introduction
The identification of mobile phones through their built-in components has been extensively investigated by researchers for different electronic components: the internal digital camera [1], the RF transmission components for various communication standards (e.g., GSM, WiFi) as described in [2,3], the microphones [4,5] and the accelerometers [6,7]. The identification is performed by exploiting tiny physical differences, which characterize electronic components due to the manufacturing process or the use of different materials. These differences can be observed in the digital output generated by the electronic components when they are stimulated by a similar or identical input (e.g., a motion pattern stimulus to an accelerometer). Through a statistical analysis of the digital output, it is possible to extract a fingerprint that uniquely identifies the component and, consequently, the mobile phone. In the case of the microphones, for example, the phones record a common audio input (e.g., a test tone). Because of the nominal values of the electronic components and the different designs employed by the various manufacturers, the microphones of the different mobile phones introduce a different convolution distortion of the input audio signal (i.e., frequency response), which becomes part of the recorded audio. In general, the authors of the above papers use the Mel-Frequency Cepstrum Coefficients (MFCC) to define the features used for fingerprinting, as commonly employed to fingerprint human speakers. Most of the papers use SVMs to classify mobile phones on the basis of the audio recordings.
Mobile identification based on Micro-Electro-Mechanical Systems (MEMS) sensor fingerprinting and, in particular, on accelerometers has been presented mainly in [6,17,18], where the authors describe the experimental identification of mobile phones using their built-in accelerometers and gyroscopes. Data are collected while the phones are subjected to repeatable movements performed by a high-precision robotic arm, so that a considerable dataset is obtained from which several statistical features are extracted. Then, using an SVM classifier, phones of the same brand and model are identified with an accuracy higher than 90% for some combinations of features. Usually, the authors use variance, skewness, kurtosis and entropy-related (e.g., Shannon entropy, log entropy, threshold entropy) features for classification. Results show that, if properly stimulated, built-in accelerometers and gyroscopes can be used to extract fingerprints that allow for a very precise intra-model identification, thus confirming the applicability to anti-counterfeiting and other scenarios.
To our knowledge, no authors have attempted to identify and classify mobile phones on the basis of their built-in magnetometers when subjected to a motion pattern.
The objective of this paper is to evaluate a technique for mobile phone identification based on the built-in magnetometers of the mobile phone, which are now present in most recent models of mobile phones. The technique is based on the stimulation of the magnetometers using a rotating platform with a fixed magnet. A mobile phone is installed on a cost-effective rotating platform spinning at a constant speed. Every time the mobile phone passes in front of the magnet, the magnetic field stimulates the magnetometer of the mobile phone. The digital output of the magnetometer is collected by the mobile phone itself and processed through appropriate statistical tools. In particular, statistical features like variance, skewness and kurtosis are extracted and used as fingerprints. An SVM learning algorithm is used to classify the different mobile phones on the basis of the extracted statistical features. SVMs are used here for their superior performance compared to other machine learning algorithms, like K-nearest neighbour (KNN) and naive Bayes. This difference in performance among the machine learning algorithms is reported in the Results section of this paper. For each mobile phone, the experiment is repeated across six different days to ensure consistency in the results. A total of 10 phones from different brands and models or of the same model were used in the experiment. Our experimental evidence shows that inter-model (i.e., different models and brands) classification is possible with great accuracy, but intra-model (i.e., phones with different serial numbers and the same model) classification is far more challenging, the resulting accuracy being just slightly better than random guessing.
The remainder of the paper is organized as follows: Section 2 provides the overall methodology for the fingerprinting data collection, analysis and comparison. Section 3 shows the results of our tests, while in Section 4 we wrap up, make final comments and point to future work.
Methodology for Data Acquisition and Processing
The overall methodology flow used in the paper for the collection of data, processing and analysis is shown in Figure 1. Each step is described in the following paragraphs.
The initial step is the setup of the test bed where the rotating platform for the definition of the motion pattern is configured. The test bed is illustrated in Figure 2, where a mobile phone is installed on a cost-effective rotating platform and a magnetic element (an iron cube) is positioned at one extreme of the test bed. The rotating platform rotates the mobile phone with a specific motion pattern. The built-in magnetometer is stimulated by the magnetic element every time the phone passes over it. The magnetic perturbation is collected and analyzed using an Android application installed on the mobile phone. In this experiment, we have used the AndroSensor application, but any other application that is able to record the digital output from the magnetometer can be used.
The application was configured to record the magnetometer digital output with a sampling time of 0.05 s. The motion pattern used in our experiment was as follows: +120 rpm then −120 rpm for 4 s, +150 rpm then −150 rpm for 3 s, +180 rpm then −180 rpm for 2 s. Before the start of the motion pattern, each mobile phone was kept in a fixed position in front of the magnet for 60 s. Each mobile phone was subject to this motion pattern. A total of 10 mobile phones were used in the experiments. Table 1 shows the brands and models of the phones used in the experiment. We note that three phones were of the same brand and model (i.e., HTC One X), while the other phones were of different brands and models.
In each measurement campaign, each mobile phone is subject to 25 repetitions of the motion pattern. This experimental campaign was executed on six different days (sometimes a week apart), so as to ensure that the fingerprints are stable over time. As a consequence, we have a total of 25 × 6 = 150 motion patterns (henceforth called responses in the rest of this paper), which can be used for classification.
After collection, the data must be synchronized and normalized. This is an important step, since unsynchronized/unnormalized data can introduce a severe bias in the classification. Since the data collected by the magnetometers are particularly noisy (see Figure 3), the synchronization is done using the related accelerometer data, which are also collected by the AndroSensor application at the same rate (see Figure 4). The synchronization is performed using the variance trajectory technique, which is based on the calculation of the variance over a sliding window of samples that moves along the response. The variance increases substantially when the sliding window meets a sharp rise or fall of the response; this rise of the variance identifies the beginning and the end of the response. This process is applied to all 150 responses gathered in the collection phase. The application of the variance trajectory was inspired by its use in RF fingerprinting to detect the start and end of wireless communication bursts [11]. After synchronization, the data are normalized by applying the Root Mean Square (RMS) to each single response for each individual mobile phone; a minimal sketch of this step is given below.
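The following sketch illustrates the variance-trajectory trimming and RMS normalization; the window length and the threshold factor are assumptions for illustration, not the values used in our processing chain.

```python
import numpy as np

def variance_trajectory(signal, window=20):
    """Variance computed over a sliding window moving along the response."""
    return np.array([signal[i:i + window].var()
                     for i in range(len(signal) - window + 1)])

def trim_response(signal, window=20, factor=5.0):
    """Keep the samples between the first and last sharp rise of the
    variance trajectory (factor * median variance is an assumed threshold)."""
    vt = variance_trajectory(signal, window)
    active = np.where(vt > factor * np.median(vt))[0]
    if active.size == 0:
        return signal
    return signal[active[0]:active[-1] + window]

def rms_normalize(signal):
    """Normalize a single response by its Root Mean Square value."""
    return signal / np.sqrt(np.mean(signal**2))

# Usage on one synthetic response (noise with an active burst in the middle)
rng = np.random.default_rng(0)
resp = np.concatenate([0.05 * rng.standard_normal(200),
                       rng.standard_normal(400),
                       0.05 * rng.standard_normal(200)])
trimmed = rms_normalize(trim_response(resp))
print(len(resp), "->", len(trimmed), "samples")
```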
To ensure that the fingerprints are stable over time, the classification through machine learning tools (described later on) is performed on the combination of the 150 collected responses. In other words, the representative set of each phone for classification is made up of 150 responses. The next step is to extract the statistical features from the 150 responses, which can be seen as time series with specific characteristics of variance or entropy. We follow an approach similar to those proposed in the literature for different built-in components (e.g., RF and accelerometers), where variance, skewness, kurtosis and entropy are calculated for each response; a sketch of this extraction step is given below. Table 2 shows the set of statistical features used in our classification problem. Now, since the resulting set of features is large, it is important to identify the subset of features that is expected to provide the best identification and verification accuracy. The process to achieve this goal is called feature selection. Various approaches to feature selection have been proposed in the literature (see, e.g., [19]). In this paper, we combine the Sequential Feature Selection (SFS) algorithm with a brute force approach. SFS starts with a single feature or a small set of features and incrementally adds one new feature at a time, measuring the resulting value of a given metric. If the metric improves, the feature is added; otherwise, another feature is checked for inclusion. The process continues until no further improvement of the metric is detected.
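A minimal sketch of the feature extraction step follows; the exact feature list of Table 2 is not reproduced here, and the histogram-based entropy shown is just one plausible variant of the entropy-related features.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def shannon_entropy(x, bins=32):
    """Histogram-based Shannon entropy of a response (one assumed variant
    of the entropy features in Table 2)."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def extract_features(response):
    """Statistics in the time domain and on the FFT amplitude spectrum."""
    spectrum = np.abs(np.fft.rfft(response))
    feats = []
    for x in (response, spectrum):
        feats += [x.var(), x.std(), skew(x), kurtosis(x), shannon_entropy(x)]
    return np.array(feats)

# Feature matrix: one row of features per collected response
responses = [np.random.default_rng(i).standard_normal(500) for i in range(150)]
X = np.vstack([extract_features(r) for r in responses])
print(X.shape)   # (150, 10)
```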
In this paper, a metric based on the overall accuracy of the confusion matrix was used for the SFS algorithm. Moreover, in order to avoid local maxima, a brute force search was also performed to select one or a few combinations of 4 features among all possible combinations (sets of 4 features out of 18, which results in C(18, 4) = 3060 sets of features to check). In the brute force approach, all possible combinations of 4 features were evaluated. Then, the best combination of 4 features was selected to seed the SFS algorithm, which computed the remaining features to add; a minimal sketch of this selection procedure is given below.
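The combination of the brute force seeding and SFS can be sketched as follows, assuming scikit-learn is available; the use of cross_val_score as the accuracy metric is a simplification of the per-fold procedure described in this paper.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def cv_accuracy(X, y, cols):
    """Cross-validated accuracy on the selected feature columns."""
    clf = SVC(kernel="rbf")               # C and gamma are tuned separately
    return cross_val_score(clf, X[:, list(cols)], y, cv=3).mean()

def brute_force_seed(X, y, k=4):
    """Evaluate all C(n, k) feature subsets and return the best one."""
    n = X.shape[1]
    return max(combinations(range(n), k), key=lambda c: cv_accuracy(X, y, c))

def sfs(X, y, seed):
    """Sequential Feature Selection: greedily add features while the
    cross-validated accuracy keeps improving."""
    selected, best = list(seed), cv_accuracy(X, y, seed)
    improved = True
    while improved:
        improved = False
        for f in set(range(X.shape[1])) - set(selected):
            acc = cv_accuracy(X, y, selected + [f])
            if acc > best:
                selected, best, improved = selected + [f], acc, True
                break
    return selected, best

# Usage: seed = brute_force_seed(X, y); features, acc = sfs(X, y, seed)
```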
Once the best set of features is selected, the parameters of the machine learning algorithm at hand must be optimized. The execution of SFS is already based on optimal values reported in the literature for the application of SVM to fingerprinting. Yet, since it is the first time that classification of mobile phones based on magnetometers is attempted, the optimization of the parameters is performed specifically for the collected set of responses. As described in Section 3, a 3-fold approach was used for classification based on machine learning tools, and this process was repeated 50 times. For each repetition and each fold, feature selection and optimization of parameters is performed on the training set only, and classification accuracy is computed only on the test set. The histograms of the recurrence of the selected features, as well as the optimal values of parameters are provided in Section 3.
The final step is the classification itself, which is done through SVMs, widely adopted in fingerprinting (see [5,6,20]). A comparison with other standard classifiers (KNN, naive Bayes and random forests) is also carried out and reported.
In standard machine learning classification settings, classification performance is measured as follows. A given class is taken as a reference class (usually called the "positive" class), then the following quantities are computed:
• Tp is the number of true positive matches, where the machine learning algorithm has correctly identified a sample (e.g., a collected magnetometer response in our context) as belonging to the positive class;
• Tn is the number of true negative matches, where the machine learning algorithm has correctly identified a sample as not belonging to the positive class;
• Fp is the number of false positive matches, where the machine learning algorithm has mistakenly identified a sample as belonging to the positive class;
• Fn is the number of false negative matches, where the machine learning algorithm has mistakenly identified a sample as not belonging to the positive class.
One of the standard adopted metrics is the accuracy, which is defined as:

Accuracy = (Tp + Tn) / (Tp + Tn + Fp + Fn)

where Tp is the number of true positives and Tn is the number of true negatives resulting from the application of the SVM machine learning algorithm to the problem of verifying that the collected fingerprints are representative of the same magnetometer evaluated in the training phase (i.e., for verification). The Receiver Operating Characteristic (ROC) curve is generated by plotting the Tp rate vs. the Fp rate in a binary classifier system as its discrimination threshold is varied.
The Equal Error Rate (EER) corresponds to the point on the ROC curve where the false positive rate equals the false negative rate. In this paper, the value of the EER is calculated with respect to the X axis. This metric is frequently used as a summary statistic to compare the performance of various classification systems. In general, the lower the EER, the better the classification performance.
Finally, the confusion matrix is also used to show the results of the identification process. In the confusion matrix, each column of the matrix represents a predicted class, while each row represents the actual class. As in our experiments we used 10 phones, the confusion matrix has a dimension of 10 × 10. In the confusion matrix, the correct guesses (i.e., true positive or negative) are located in the diagonal of the table, so it is easy to inspect the table for errors, as they will be represented by values outside the diagonal. The overall accuracy can be defined as the sum of the elements on the diagonal over the total sum (which in our case, equals 1500, i.e., 150 responses for 10 phones).
Features and Parameters Optimization
In this section, we describe how the features are constructed and how parameter optimization is performed.
The features used in this paper are based on similar works cited in the Introduction: entropy-based features, variance, standard deviation, skewness and kurtosis. These features are applied both in the time domain and the frequency domain after a Fast Fourier Transform (FFT) is applied. Each feature is identified by the associated number as shown in Table 2.
The features can refer to each of the three axes of the magnetometers. Their selection has indeed been applied to all three axes.
As described in Section 2, the SFS algorithm is used in combination with a brute force approach. The metric used for the evaluation of the performance of the SFS algorithm is the overall accuracy of the confusion matrix derived from the application of a multiclass SVM. SVM is traditionally a binary classifier, so it must be combined with a multiclass approach to provide multiclass classification (as in our case, where we need to classify 10 mobile phones). In this paper, we use the one-vs.-one approach, where for each binary learner, one class is positive, another is negative and the remaining classes are ignored. This approach exhausts all C(K, 2) = K(K − 1)/2 combinations of class pair assignments. One-vs.-one is much less sensitive to the problems of imbalanced datasets than alternative approaches like one-vs.-all, but on the other hand, it is more computationally expensive (one-vs.-all trains K classifiers only) [21]. Because we have a limited set of responses for each phone (i.e., 150) and the computational performance is not an issue, we selected the one-vs.-one approach; a minimal sketch of this setup is given below. The ratio of the diagonal elements of the confusion matrix to the sum of all of the elements of the confusion matrix is the overall accuracy, which is the metric used in the SFS algorithm. Note that the overall accuracy includes both intra-model and inter-model accuracy because in our experiment, the set of phones includes both different models and the three phones of the same model (the three HTC phones).
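The one-vs.-one multiclass SVM and the overall-accuracy computation can be sketched as follows with scikit-learn; the data and the C and γ values here are placeholders, not our measured responses or tuned parameters.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

# SVC uses a one-vs.-one decomposition internally for multiclass problems,
# training K(K-1)/2 binary learners (45 for our 10 phones).
clf = SVC(kernel="rbf", C=2.0**5, gamma=2.0**2,
          decision_function_shape="ovo")   # C, gamma values illustrative

# Hypothetical stand-in data: rows of statistical features, labels in 0..9
rng = np.random.default_rng(0)
X_train = rng.standard_normal((300, 8)); y_train = rng.integers(0, 10, 300)
X_test = rng.standard_normal((150, 8));  y_test = rng.integers(0, 10, 150)

clf.fit(X_train, y_train)
cm = confusion_matrix(y_test, clf.predict(X_test))   # 10 x 10 matrix
overall_accuracy = np.trace(cm) / cm.sum()           # diagonal over total sum
print(overall_accuracy)
```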
Classification performance is evaluated using three-fold cross-validation. Each collection of statistical fingerprints (one for each mobile phone) is divided into three blocks of 50 fingerprints each. Two blocks from each device are used for training, and one block is held out for classification. The training and classification process is repeated three times until each of the three blocks has been held out and classified. Thus, each block of statistical fingerprints is used once for classification and twice for training. Final cross-validation performance statistics are calculated by averaging the results over all folds.
As described in Section 2, the optimization process was repeated for each of the three folds on the training set only. Finally, the overall process was repeated 50 times. While this can be a time-consuming process, it mitigates the risk of high variance in the results and provides a good evaluation of the relevance of the statistical features.
A bar chart showing the (average) fraction of times each feature gets selected for each fold is shown in Figure 5. Each fold is represented by a different color. This bar chart shows a predominance of the entropy features, skewness and kurtosis in the time domain (Features 1, 2 and 5, 6), but the features in the frequency domain (amplitude) are also somewhat relevant (Features 13 to 18). Further parameters have to be tuned depending on which machine learning algorithm is adopted. As described in [21], the SVM algorithm must be optimized with respect to the C parameter (the so-called box constraint parameter), which allows the SVM user to control the weight of the classification errors during training, and the kernel function, which is used to define the shape of the computed hyperplane. Various kernel functions are available in the literature, including linear, polynomial and Radial Basis Function (RBF). In this paper, we use SVM with the RBF kernel because this combination has demonstrated its effectiveness for fingerprinting classification in [20] and other references.
We recall that the definition of the RBF kernel is the following:

K(x, x′) = exp(−γ ‖x − x′‖²)

where the scaling factor γ is the second parameter to be tuned together with the box constraint parameter C. Both C and γ are positive real values. Various techniques can be used to optimize these values. In this paper, we adopt a grid approach with a set of exhaustive exponential values to base two, from 2⁰ to 2⁶ for the scaling factor γ and from 2⁰ to 2¹¹ for the box constraint parameter C. These ranges of values were based on a previous optimization process, which showed that values outside these ranges provided a low classification accuracy. See Figure 6, which shows an example of the previous optimization process with an extended range of values. The optimal values in this example are highlighted with the black circle mark in the figure. A minimal sketch of this grid search is given below.
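A minimal sketch of the base-2 grid search, assuming scikit-learn; the synthetic data stands in for the per-fold training sets.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

param_grid = {
    "C":     2.0 ** np.arange(0, 12),   # 2^0 ... 2^11
    "gamma": 2.0 ** np.arange(0, 7),    # 2^0 ... 2^6
}

# Stand-in training fold (the real per-fold training sets replace this)
rng = np.random.default_rng(0)
X_train = rng.standard_normal((90, 6))
y_train = rng.integers(0, 3, 90)

# Exhaustive base-2 grid, scored by 3-fold accuracy on the training set only
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3, scoring="accuracy")
search.fit(X_train, y_train)
print(search.best_params_)
```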
This process was repeated for all 50 repetitions and the three folds. The final result of the SVM parameter optimization effort is shown in Figure 7 for parameter γ and in Figure 8 for parameter C. Again, the three different colors represent the three different folds. On the basis of the selected features, and the identified optimal values for C and γ, we can run the classification on the test set (held out folder) and analyze the results we obtain. This is reported in the next section.
Classification Results
The confusion matrices obtained in SVM classification, after training and tuning parameters as described in the previous section, are shown in Tables 3-5, which correspond to responses along the three axes of the magnetometers.
From Table 3, we conclude that classification accuracy is quite high for mobile phones of different brands and models, but it is much lower (almost down to random choice) for mobile phones of the same model and different serial numbers.
For the sake of completeness, we have carried out a comparison of different machine learning classification algorithms to check whether SVM indeed has superior performance. Table 6 reports the comparative performance when operating on responses along the X axis; similar comparative results hold for the Y and Z axes. The other algorithms were also optimized on the basis of their specific parameters (e.g., the number of neighbors for KNN, prior parameters for naive Bayes, the number of decision splits for classification trees).
From Table 6, we can clearly see that SVM offers a superior classification performance as compared to standard baselines for this particular classification problem.
The results presented so far were based on the digital output gathered from the magnetometer in the X direction. We now evaluate and compare the classification performances obtained with the digital output taken from the magnetometer along the Y axis (Table 4) and the Z axis (Table 5).
From the different confusion matrices, one can see that the accuracy pattern is similar for all three axes (high inter-model accuracy, but low intra-model accuracy). The overall accuracy for classification based on the Z axis is 81.46% (the ratio of the sum of the diagonal values of the confusion matrix in Table 5 to the sum of all its values), while the overall accuracy based on the Y axis is 76.02%, and the overall accuracy for the X axis is 70.61%.
The results from the confusion matrices can also be confirmed by performing binary classification separating two phones of different models (inter-model classification) and two phones of the same model (intra-model classification). The resulting ROCs are depicted in Figure 9 for different models (Sony Xperia X vs. Samsung Galaxy S7) and in Figure 10 for two HTC One X mobile phones (HTC One X 2 vs. HTC One X 3). The figures illustrate the ROCs for all three different axes of the magnetometers, averaging the results across the 50 repetitions. We complement the previous results by reporting the average inter-model and intra-model identification accuracy for all three axes of the magnetometers in Table 7. The inter-model accuracy is calculated as the average classification accuracy when including only one HTC mobile phone (i.e., phone identifiers from 1 to 8). The intra-model accuracy is computed when operating only with the three HTC mobile phones (i.e., phone identifiers from 8 to 10).

Table 7. Average overall accuracy for inter-model and intra-model classification.
Addition of Gaussian Noise
In a practical application of mobile phone identification based on the fingerprints of the built-in magnetometers, it is quite possible that the distance between the mobile phone and the magnetic element stimulating the magnetometer varies. Changes in distance and orientation will definitely impact the Signal-to-Noise Ratio (SNR). Different distances and different values of SNR can be simulated by adding Additive White Gaussian Noise (AWGN) to the collected magnetometer responses; a minimal sketch is given below. Figure 11 shows the ROCs for binary classification between the Sony Xperia X and the Samsung Galaxy S7 for decreasing values of SNR. The associated value of the EER is shown in the caption. As expected, a low value of SNR results in almost random-choice identification (e.g., the green curve), because the machine learning algorithm is not able to leverage very noisy signals.

Figure 11. ROC achieved by SVM in binary classification between Sony Xperia X and Samsung Galaxy S7 using the X axis for decreasing values of SNR. Again, these curves are obtained after averaging over 50 repetitions.
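The sketch below adds AWGN at a prescribed SNR; the sinusoidal stand-in signal is hypothetical, not a recorded magnetometer response.

```python
import numpy as np

def add_awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise so the result has the requested SNR (dB),
    simulating a larger distance between phone and magnetic element."""
    rng = rng or np.random.default_rng()
    signal_power = np.mean(signal**2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)

# Degrade one stand-in response at several SNR levels and verify the SNR
response = np.sin(np.linspace(0, 20 * np.pi, 1000))
for snr in (20, 10, 0):
    noisy = add_awgn(response, snr, np.random.default_rng(0))
    measured = 10 * np.log10(np.mean(response**2) /
                             np.mean((noisy - response)**2))
    print(f"requested {snr} dB -> measured {measured:.1f} dB")
```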
Combination of Features from Different Magnetometers to Improve Accuracy
As a final step, we have attempted to combine the responses along all three axes to improve the identification accuracy. Our experimental findings show that, despite the longer overall processing time, significant improvements in the overall accuracy can be achieved. The best set of features from all three axes has been combined into a single matrix fed to the SVM algorithm. The resulting confusion matrix is shown in Table 8. Since the set of features is larger (18 features × 3 axes = 54 features), the brute force approach was not used in the optimization phase. Instead, the best set of features from each axis was used as the seed for the SFS algorithm to obtain the best set of features in each fold and in each iteration. The optimization of the box constraint and scaling factor parameters was implemented as in the single-axis case.
The resulting overall accuracy is 85.08%, with an inter-model accuracy of 98.07% and an intra-model accuracy of 54.15%. These figures are higher than those obtained when considering each axis in isolation. Specifically, there is a significant improvement (almost 4%) in inter-model accuracy, as compared to the best single-axis result (magnetometer in the Z direction; see Table 7), and a slight improvement in intra-model accuracy.
Conclusions
In this paper, we have described a potential approach for the practical identification of mobile phones using their built-in magnetometers stimulated through a motion pattern and a magnetic element. The motion pattern is implemented with a simple rotating platform, which moves the mobile phone over the magnetic element. The experiment has been carried out over six different days to prove the stability of the fingerprints. The testbed where the measurements have been performed was not ideal for our purpose: the magnetic fields generated by the electric motor of the platform and the nearby motors used to generate the motion pattern are not shielded, and other ferromagnetic objects are also present in the laboratory where the data have been gathered. The SVM machine learning algorithm has been used in this experiment for mobile phone classification, and its superior performance in comparison to other machine learning classifiers has been shown. The responses taken along all three axes of the built-in magnetometers have been used and compared. The Z axis provides a slightly better accuracy compared to the other two axes. In the final classification experiment, all three axes have been used for classification, yielding significantly improved performance. The resulting classification accuracy is quite high for inter-model classification (it reaches 98.07% when using all three axes), but is relatively poor for intra-model classification (as low as 54.15% when using all three axes). Different reasons can be put forward to explain the low intra-model classification accuracy: the noisy environment, the small number of samples in the response, but also the location of the sensor in the mobile phone (i.e., in different models, the magnetometer might be placed in different positions, so that the fingerprint of the overall smartphone-sensor system is more distinctive). The number of samples could be increased by lowering the speed of the motion pattern, but this has the obvious drawback that the experiment would take longer. The number of samples could also be increased by raising the frequency at which samples are collected from the magnetometers, but older phones have a limit on the collection frequency (which is the one used in this paper). The authors will investigate alternative approaches to stimulate the built-in magnetometers of mobile phones with the goal of generating improved fingerprints for both inter-model and intra-model classification.
"Computer Science",
"Physics"
] |
Influence of indium and selenium co-doping on structural and thermoelectric properties of Bi2Te3 alloys
The melt-grown, indium and selenium co-doped Bi2Te3 single-crystal system is studied with the purpose of improving and analyzing the thermoelectric performance in the low and near-room-temperature range (10-400 K). The influence of the co-dopants on the crystalline perfection, symmetry, dislocations, and single-crystal quality is investigated using high-resolution X-ray diffraction. The surface morphological features show the existence of small-angle grain boundaries, white patches, and tilt boundaries. Degenerate semiconducting behavior is seen in all the samples over the entire temperature range. The existence of small polarons is experimentally inferred from the temperature-dependent electrical resistivity. Measurement of the Seebeck coefficient confirms a p- to n-type transition in the crystals doped with indium and selenium. The total thermal conductivity at 11 K decreased by a factor of 3.4 in (Bi0.98In0.02)2Te2.7Se0.3 as compared to the pristine sample. Therefore, this novel indium and selenium co-doped Bi2Te3 single-crystal combination is a viable candidate for low and near-room-temperature thermoelectric applications.
Introduction
About 70% of the energy produced in the world is dissipated into the environment, mainly as waste heat. Thermoelectricity is one of the cost-effective and sustainable ways to convert this waste heat into usable electrical energy [1]. Thermoelectric generators (TEG), which are silent solid-state devices with no moving parts, can collect the waste heat from automobile exhaust, industry (steel plants), etc. and utilize it for other applications. The efficiency of thermoelectric materials is experimentally measured with the help of the dimensionless figure of merit ZT = S²T/(Kρ), where S is the Seebeck coefficient, K is the thermal conductivity, ρ is the electrical resistivity, and T is the temperature [2].
It is very challenging to increase the electrical conductivity and simultaneously decrease the thermal conductivity of a material to enhance the ZT values [3]. Currently, one of the most promising materials is the narrow-bandgap semiconductor single-crystal bismuth telluride, which possesses unique thermoelectric properties with low thermal conductivity at temperatures around 300 K or lower [4]. Bismuth telluride solidifies congruently near its melting point (576 °C); as a result, good-quality single crystals can be grown by the melt growth technique. The intrinsic point defects formed during crystal growth act as negatively charged antisite defects such as Bi_Te and Bi_Se [5]. As a result, anion vacancies such as V_Te and V_Se, carrying positive charges, are formed. However, the role of these point defects in governing the thermoelectric properties of the doped bismuth chalcogenide compounds is not well understood. Parameters like electrical resistivity, thermal conductivity, and Seebeck coefficient need to be decoupled from each other. With increasing quantum confinement, the electron energy band structure tends to be narrower, which results in greater values of the electrical conductivity, Seebeck coefficient, and effective mass. The variation in the degenerate valleys maximizes the entropy per carrier within the electronic bands. Additionally, resonant levels also play a crucial role in optimizing the values of the thermoelectric parameters. The insertion of dopants into the parent compound matrix results in a distortion of the density of states near the Fermi level. As a result, there is an enhancement in the effective mass of the charge carriers without deterioration of the carrier concentration [6-8].
Bismuth chalcogenides are promising materials for thermoelectric applications due to their high power factor (S²/ρ); however, their high thermal conductivity is a point of concern. Single doping has proved useful in improving the thermoelectric performance in recent years [9-12]. Prompted by these investigations and in continuation of our ongoing research on co-doped bismuth chalcogenide single crystals [13-15], we report here the synthesis and growth of In and Se co-doped Bi2Te3 single crystals.
Adam et al. [16] used the traditional melting process for preparing polycrystalline bulk samples of the Bi2(Se1−xTex)3 system with x = 0.0-0.9; Te atoms were successfully and affordably substituted for Se atoms to produce Bi2Se3/Bi2Te3. Numerous studies have been conducted by Ibrahim et al. on the physical characteristics of Cu(II)-Schiff base complexes and metal-Schiff base complexes, including their electric and optical characteristics. Bi2Se3 bulk alloys [17] were created using a mono-temperature melting procedure and used as source materials to form thin films on non-conductive, ultra-clean glass substrates. Shokr et al. [18] adopted an annealing process to generate polycrystalline solid solutions of (Bi1−xSbx)2Se3 (x = 0, 0.025, 0.050, 0.075, and 0.100). Sb2−xBixTe3 prepared by the melting process shows a peak power factor of about 24.7 μW/cm K² and a highest figure of merit of 1.14 [19]. Polycrystalline samples of (Bi0.95Sb0.05)2Se3 were created by Ibrahim et al. by the melting method at 1273 K [20].
Indium contributes the maximum number of resonant levels to the electronic bands of BiTe [21-24]. Se can achieve a high level of substitution at the Te site owing to their comparable chemical and physical properties. This serves as the motivation for doping with both In and Se. In our previous work, we achieved a reduction of the thermal conductivity due to co-doping of Sn and In in polycrystalline Bi2Se3 [25]. When bismuth telluride-based compounds are used for low-temperature power generation, intrinsic excitation becomes the major limitation. To widen the band gap, selenium alloying can be adopted, which is a practical solution to suppress bipolar conduction [26]. Point defects, antisite defects, and vacancies generated by different doping and alloying reduce the thermal conductivity of Bi2Te3 by shifting the phonon density of states. Bi2Te3 and its alloys are the most important class of thermoelectric materials because they have the highest known efficiency at low temperatures (< 300 K), whereas silicon-based alloys are usually used in high-temperature thermoelectric applications (> 600 K). The market for temperature control is dominated by Bi2Te3 solid-state devices. Peltier cooling is becoming increasingly appealing as the demand to eliminate greenhouse-gas refrigerants grows, especially in small systems where the efficiencies are comparable to those of conventional refrigerant-based coolers. With the improved accuracy in assessing band structures, the interest in topological insulators of bismuth and its alloys has revealed new insights into the complicated electronic structure. Despite being the most popular and well-known functional material, silicon's thermoelectric efficiency is unfortunately inferior to that of Bi2Te3. Silicon-germanium alloys are not commonly used due to their high price and low ZT values [27,28].
To the best of our knowledge, doping of Bi2Se3 and Bi2Te3 has been thoroughly explored, but simultaneous doping and composites of these compounds have received the least attention. We previously reported the low-temperature thermoelectric characteristics of the polycrystalline sample series (Bi1−xSnx)2Te2.7Se0.3 and (Bi1−xInx)2Se2.7Te0.3. In continuation of the previous study, indium and selenium co-doped bismuth telluride single crystals are grown by the melt growth technique. To investigate the structural features, high-resolution X-ray diffraction and powder X-ray diffraction techniques were employed. Thermoelectric properties including electrical resistivity, thermal conductivity, and Seebeck coefficient have been explored in the temperature range of 10-400 K. In the current study, it was observed that the thermal conductivity of the system decreases drastically due to the co-doping. Hence, the indium and selenium co-doped Bi2Te3 single-crystal system may be a good candidate for low and near room-temperature thermoelectric applications.

The stoichiometric ratio of the precursor metallic powders of indium (99.9%), bismuth (99.99%), tellurium (99.99%), and selenium (99.995%) was subjected to intense grinding using a pestle and agate mortar for 2 h. The ground powder mixture was introduced into a quartz ampoule of 200 mm length and 14 mm diameter. Vacuum sealing was carried out at 10⁻³ Torr in an argon atmosphere. The ampoule containing the sample was tied to the motor of the crystal puller. The sample was heated and cooled in eight segments in the range of 30-850 °C, utilizing a programmable furnace attached to the crystal puller. The pulling of the ampoule was carried out at the rate of 2 mm/hour at the transition temperature of 600 °C. The temperature profile chart and the schematic diagram of the melt growth process are shown in our previously reported articles [12,14]. The grown ingot of bismuth telluride is shown in Fig. 1. The structural studies were performed using a powder X-ray diffractometer (Rigaku Ultima IV, Cu Kα radiation source, wavelength 1.54 Å, resolution 0.02, PDXL software, semiconductor detector) on finely powdered (Bi1−xInx)2Te2.7Se0.3 single-crystal samples between 20° and 80° at a scan rate of 2°/min. High-resolution X-ray diffraction (HRXRD) investigation was done on a 2 × 2 × 1 mm³ cleaved crystal surface using a Bruker D8 diffractometer (Cu Kα source, double-crystal monochromator, Göbel mirror, 1D speed detector, resolution 1.5 × 10⁻⁴, Diffrac Suite software, semiconductor detector) to obtain information about the crystalline perfection, symmetry, dislocations, single-crystal quality, and the influence of the dopants on the inner plane structure of the single crystals. The surface morphological features of the single crystals were analyzed by field emission scanning electron microscopy (FESEM) using a Carl Zeiss Sigma instrument (3 detectors, 20-120 μm range with a step-up voltage capable of 30 kV) at the particle scale of 1 μm with 35 k× magnification. The chemical composition of the single crystals was examined by energy-dispersive X-ray analysis (EDS) utilizing an EVO MA18 with Oxford EDS (X-act). The carrier concentration and mobility at ambient temperature were evaluated using the Van der Pauw method (Keithley 6220 meter with an input current of 50 mA and a magnetic field of 6000 Gauss).
All the crystals were cut into a rectangular parallelepiped shape with a size of about 1.5 × 1.0 × 6.0 mm³ for the transport measurements. The melt-grown single-crystal ingot of dimension 2 × 5 mm² (shown in Fig. 1) was used for the measurements. The crystal plane corresponding to the larger surface area was identified as the 'ab' plane (perpendicular to the 'c' axis). The thermoelectric properties such as electrical resistivity, thermal conductivity, and Seebeck coefficient are measured along the ab plane. Due to the corresponding variation in these three parameters, the thermoelectric figure of merit is focused on the ab plane only rather than on other planes such as ac and bc. Hence, this plane has been used to investigate the thermoelectric properties of all the samples of (Bi1−xInx)2Te2.7Se0.3 [29]. Electrical resistivity, thermal conductivity, and Seebeck coefficient of (Bi1−xInx)2Se2.7Te0.3 were measured utilizing a physical property measurement system (PPMS, Quantum Design) in the temperature range 10-400 K.
3 Results and discussion

3.1 Powder X-ray diffraction

Figure 2 shows the XRD patterns of pristine Bi2Te3 and co-doped (Bi1−xInx)2Te2.7Se0.3 samples. The (015) peak at about 28° is found to be the highest in all the samples. The doping of selenium into the matrix of bismuth telluride follows Vegard's law [30,31]. The peak broadening observed at 27.34° is due to the difference in ionic radii between the host lattice and the dopants. Figure 3 shows that, with the exception of (Bi0.96In0.04)2Te2.7Se0.3, there is a modest shift of the above-mentioned peak towards higher angles. The peak shift is mostly brought on by grain reflation, and the considerable strain from the dopants results in a lattice irregularity. The substitution of indium ions at the bismuth ion sites has reached saturation. The bifurcation effect observed at angles of 28°, 40°, 55°, and 65° corresponds to the sharing of lattice points on the face of the existing crystal [32,33]. The XRD peak patterns were fitted using the EXPO 2014 software (Fig. 4) [34]. Hence, there is an uneven lattice strain in the compound (Bi0.96In0.04)2Te2.7Se0.3, which causes the backward shift of the XRD peak patterns in this sample. The crystallite size has been estimated using the Williamson-Hall formula (Eq. 1):
β cos θ = kλ/D + 4ε sin θ  (1)

where β is the full width at half maximum intensity, k is the shape factor, λ is the X-ray wavelength, ε is the strain, and D is the crystallite size. As a function of the co-doping of indium and selenium, there is a steady decrease in the crystallite size (Table 1). The characteristic reliability factors such as the profile factor (Rp), weighted profile factor (Rwp), expected profile factor (Rep), and goodness of fit (χ²) are presented in Table 1. The Rp, Rwp, and Rep values are found to be rather high due to the simultaneous occupation of Bi at the Te(1) and Te(2) sites [35,36]. The refined XRD patterns show that the crystal has a hexagonal structure with R-3m space group symmetry. The chemical phases of all the samples were identified using the Profex 3.14 software, and the results are given in Table 2.
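Returning to the Williamson-Hall estimate above: it reduces to a linear fit of β cos θ against 4 sin θ, with the intercept giving the crystallite size and the slope the strain. The sketch below uses hypothetical peak positions and widths, not our measured pattern, and assumes the usual shape factor k ≈ 0.9.

```python
import numpy as np

# Hypothetical peak data from a powder XRD pattern:
two_theta = np.array([28.0, 38.2, 45.1, 57.3, 66.0])   # peak positions, deg
beta_deg = np.array([0.20, 0.24, 0.27, 0.32, 0.36])    # FWHM, deg

theta = np.radians(two_theta / 2.0)
beta = np.radians(beta_deg)        # FWHM must be in radians
lam = 1.54e-10                     # Cu K-alpha wavelength, m
k = 0.9                            # shape factor (assumed)

# Williamson-Hall: beta*cos(theta) = k*lam/D + 4*eps*sin(theta)
x = 4.0 * np.sin(theta)
y = beta * np.cos(theta)
slope, intercept = np.polyfit(x, y, 1)

D = k * lam / intercept            # crystallite size, m
eps = slope                        # lattice strain
print(f"D = {D * 1e9:.1f} nm, strain = {eps:.2e}")
```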
The table shows that supplementary phases Bi-Se, In-Te, and In-Se were found in all the doped samples. Although these binary phases amount to 10-15%, the influence of resonant levels on the thermoelectric response of Bi2Te3 warrants further exploration. Indium on the bismuth site and selenium on the tellurium site can modify the density of states (DOS) through the influence of the resonant levels via interactions between the dopants and the host. Hence, there is a need to discuss the conductivity and the thermoelectric figure of merit of these samples [37].
3.2 High-resolution X-ray diffraction (HRXRD)

Figure 5 shows the inner plane θ-2θ plot in the 2θ range of 5°-90°, scanned at the rate of 2°/min. The (006) plane is found to be the highest oriented phase in the Bi2Te3 and Bi2Te2.7Se0.3 single crystals, whereas the (0 0 15) plane is the highest oriented one in the (Bi1−xInx)2Te2.7Se0.3 (x = 0.02 and 0.04) samples. This difference may be due to the variation in the recrystallization and the anisotropic nature of the crystal [38]. The sharpness of the peaks indicates a reasonable degree of crystallinity, with few grain boundaries and growth along the 'c' direction [39].
The θ-2θ scan for the inner (015) plane in the range 20°-80°, measured at the scan rate of 2°/min, is shown in Fig. 6. The figure shows that the crystal growth has taken place in the (015) plane. This is in good agreement with the powder XRD of the outer (015) plane and confirms the growth along the 'c' direction [40]. The particle size and the lattice strain of the single crystals are given in Table 3, and they agree with the powder XRD data. The lattice strain for (Bi0.98In0.02)2Te2.7Se0.3 is the highest because, due to the substitution of selenium and indium in the Bi2Te3 matrix, insufficient surface tellurium atoms are available to complete the fifth and sixth levels of the quintuple layer stacking [41].
An azimuthal scan (φ scan) was performed in the range −180° to 180° at the rate of 2°/min for the (015) plane, which showed three peaks corresponding to the three-fold symmetry, indicating the in-plane orientation; the variation in the intensity of the peaks may be attributed to disorder in the in-plane direction in some regions of the crystals (Fig. 7). The interval between two consecutive peaks of all the samples is found to be 120°, with some random variation in shifting. There are insufficient tellurium atoms present on the surface of the crystal to complete each quintuple layer stacking and maintain a fully terminated surface state. As a result, many stacking faults are introduced in the quintuple layers during the melt growth of the indium- and selenium-doped bismuth telluride. Consequently, there exist selenium and tellurium vacancies and the intercalation of Bi-In atoms in the van der Waals gap, which creates local strain fields in the crystal [42]. Figure 8 represents the rocking curve scan for the (0 0 15) plane in the 2θ range 18°-24° at the rate of 2°/min. The rocking curve provides information about the perfection of the single crystals. There is a gradual shift of the rocking curve to the higher 2θ side as the doping concentration increases. The unexpected backward 2θ shift of the rocking curve of (Bi0.96In0.04)2Te2.7Se0.3 is mainly due to the small tilt angles formed by the indium super-stoichiometric behavior with the bismuth ion [43]. The rocking curves of all the samples match well with the simulated curves of the Lorentzian non-linear fitting. As evidence, the R² and FWHM (β) values are presented in Table 3, which shows the good single-crystalline nature of the samples. The sample doped with In (x = 0.02) has a small secondary peak in the rocking curve, which indicates the presence of a low-angle grain boundary due to misorientation in the crystal domains and mosaic orientation [44].
The dislocation densities calculated using Eqs. 2 and 3 are given in Table 3 [45,46]:

ρ_screw = ω² / (9 b_s²)  (2)

ρ_edge = ω² / (9 b_e²)  (3)

where ω is the full width at half maximum intensity of the rocking curve, b_s is the screw dislocation constant, which comes out to be 15.0 nm, and b_e is the edge dislocation constant, which comes out to be 2.1 nm. However, some white patches are seen on the surface of the compound doped with indium (x = 0.04) due to the sublimation of tellurium (Fig. 9d) [47]. It is assumed that the ordered low-angle grain boundaries on the surface of (Bi1−xInx)2Te2.7Se0.3 significantly affect the carrier conduction. Moreover, the selenium doping also forms minor interfaces during crystal growth, which have the potential to create a localized high-strain field that can act as a carrier transport barrier.
3.3 Field emission scanning electron microscopy (FESEM) and energy-dispersive X-ray analysis (EDS)

Figure 9 shows the surface morphological features at the interface of the grown single crystals. From the nanoscale to the macroscale, surface morphology and topography are crucial characteristics of materials. These characteristics result from the chemical makeup, structure, and manufacturing processes of the materials. Materials can be identified by particular elements of their surface morphology, which have an impact on final surface characteristics including porosity, flatness, and volatilization of the samples. The utilization of chalcogenides in various thermoelectric applications therefore requires knowledge of their surface morphology and topography. The EDS image of Bi2Te3 shown in Fig. 10A (a) indicates only the presence of bismuth and tellurium, whereas Fig. 10A (b) depicts the presence of the element selenium along with bismuth and Te. Figures 10A (c) and (d) confirm the presence of indium. The expected and observed atomic % of the elements are presented in Table 4. There is a slight variation between the expected and observed atomic percentages due to the volatilization of selenium. The data acquired at the interface specify that the observed Bi atomic percentage is homogeneous, whereas the Te atomic percentage varies significantly. Hence, there could be a significant change in the thermoelectric performance of the prepared compounds. The EDS mapping provided in Fig. 10B shows the homogeneity of the grown single crystals [48].

Electrical resistivity

Figure 11 depicts the temperature-dependent electrical resistivity of the indium and selenium co-doped Bi2Te3 single crystals in the range 10-400 K. Overall, the electrical resistivity increases with increasing temperature in all the samples, indicating a degenerate semiconducting behavior, so that these materials act more like a metal than a semiconductor. Due to the presence of the electron from one donor atom close to the next atom, there is an overlapping of the different wave functions, providing communication between the fifth electrons of all the donor atoms. Hence, all the samples exhibit low electrical resistivity at 10 K [43]. Under this state of degeneracy, the quantum states of the energy band are occupied by the fifth electrons of the donor atoms, and the overlap of the conduction band with the donor band gives rise to a composite band [49]. A hump-like feature is observed near 150 K in all the samples because of the interaction of the dopants indium and selenium with bismuth and tellurium, respectively, which leads to the formation of percolation passages [50,51]. The curves exhibit a metal-semiconductor electrical resistivity transition at 300 K, due to the overlapping of the half-filled 6p and fully filled 6s bands in the Bi2Se3 compound, whereas the electrical resistivity decreases with the rise of temperature beyond 300 K, showing semiconducting behavior, because the thermally excited carriers exceed the ionized impurity carriers. As a result, the mobility of the carriers is reduced with the rise in temperature, which leads to the decrease of the electrical resistivity.
The defects are controlled by the positively charged vacancies; the Te and Se vacancies act as electron donors and give rise to n-type electrical conduction. The holes between grains reduce the direct connection of the grains and hinder the electrical conduction that occurs through direct contact between the grains. Thus, an increase in the electrical resistivity with temperature has been found for all the compounds, while beyond 300 K the resistivity decreases, showing semiconducting behavior, because the thermally excited carriers exceed the ionized impurity carriers [52-55]. From the trend of increasing electrical resistivity with doping, it is realized that doping of these single crystals can induce both charge scattering and lattice vibrations [24]. The compound doped with indium (x = 0.02) shows the highest electrical resistivity due to the suppression of the bond between bismuth and tellurium by a small quantity of indium, which leads to the formation of the antisite defects Bi(In)-Te and Bi(In)-Se. A similar behavior has also been observed in our previous work [56].
Seebeck Coefficient
Temperature-dependent Seebeck coefficients of the pristine and doped samples are shown in Fig. 12. The Seebeck coefficient is found to be positive for Bi2Te3, indicating that holes dominate the carrier concentration, while the conduction occurs by both electrons and holes [59]. On the other hand, (Bi0.98In0.02)2Te2.7Se0.3 has shown n-type semiconducting behavior, which could be attributed to the partial substitution of tellurium by selenium, which decreases the p-type charge conduction monotonically and increases the n-type charge conduction [21,60]. The change in carrier type with Te content could be attributed to the creation of dangling bonds at the grain boundaries due to the Te vacancies that act as an electron source. Similar transitions from n-type to p-type and vice versa have been observed in our previous work [20]. Due to (i) defects controlled by the positively charged Se vacancies serving as electron donors and (ii) Bi antisite defects, the 0.02-doped Bi2Te3 combination exhibits n-type electrical conduction. The electronegativity and atom-size discrepancies between Bi and the anion (Se, Te) diminish with the partial substitution of Te by Se, which enhances the antisite defects and boosts the electron density [56]. Due to the random distribution of low-angle grains, there is a variation of orientations in various parts of the granular samples. Hence, the carriers have to pass through several potential barriers, leading to simultaneous energy filtering and carrier localization. This leads to a variation in the trend of the temperature dependence of the Seebeck coefficient as a function of composition [58]. The density of states, effective mass, and Hall mobility values are presented in Table 5 [59]. It is observed that there is a slight difference in the Seebeck coefficient values among the samples. Negative Fermi-energy values indicate that the levels are within the bulk valence band due to the existence of the Dirac point on the surface of the single crystals [61]. The low-angle grain boundaries play a critical role in the presence of different mean-free paths of carriers and phonons due to the combined effect of dislocation scattering and carrier scattering. As a result, an increase in the electrical resistivity and a reduction in the lattice thermal conductivity have been observed [62,63].

Thermal conductivity

Figure 13 shows the total thermal conductivity (K) measured in the temperature range 10-400 K. All the samples exhibit the characteristic phonon peak at low temperatures (near 11 K) due to the combined effect of electron-phonon and electron-impurity scattering. The exponential drop in thermal conductivity beyond 11 K is due to phonon-phonon scattering, especially the Umklapp process [64,65]. The highest (28.0 W/mK) and the lowest (7.0 W/mK) values of K have been observed in the pristine sample and in the single crystal doped with indium (x = 0.02), respectively. To understand the mechanisms of thermal conduction, the electronic thermal conductivity and the lattice thermal conductivity (Figs. 14 and 15) have been calculated using the Wiedemann-Franz formula:

K_e = L0 T / ρ,  K_l = K − K_e

where T is the temperature and L0 is the Lorenz number for a degenerate semiconductor, calculated using the relation L0 = 1.5 + exp(−|S|/116), with L0 in 10⁻⁸ W Ω K⁻² and S in μV/K (Fig. 16) [45]. We observe that K_l exhibits a reduction with the doped indium content in the single crystals. The added dopants distort the lattice of Bi2Se3, which generates local vibrations that scatter the high-frequency phonons. The reduction in the lattice thermal conductivity is achieved by the dislocations and distortions produced by the indium and selenium doping, which encourage the scattering of mid- and long-wavelength phonons. Hence, there is a systematic reduction in the lattice thermal conductivity at 11 K [66-68]. In addition, the selenium doping also forms minor interfaces during the crystal growth, which helps reduce the lattice thermal conductivity. In this study, we have achieved a 3.4 times reduction in total thermal conductivity in the (Bi0.98In0.02)2Te2.7Se0.3 sample as compared to that of the pristine sample at 11 K, though there is a small cross-over between Bi2Te3 and Bi2Te2.7Se0.3. The significant decrease in thermal conductivity results from the multiple textures, lattice distortions, and dislocations that the indium and selenium dopants induce on phonon transport. These effects distinctly cause the scattering of mid- and long-wavelength phonons across a broad wavelength spectrum, lowering the thermal conductivity. To put it briefly, the multiscale hierarchical textures and lattice defects created by doping effectively restrict the movement of phonons of various frequencies, reducing the contribution of the lattice thermal conductivity to the total thermal conductivity.
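For illustration, the decomposition described above can be sketched as follows; the input values are placeholders, not the measured data, and the unit convention (L0 in 10⁻⁸ W Ω K⁻², S in μV/K) follows the single-parabolic-band relation quoted in the text.

```python
import numpy as np

def lorenz_number(S_uV_per_K):
    """L0 = 1.5 + exp(-|S|/116), with S in uV/K and L0 in 1e-8 W*Ohm/K^2."""
    return (1.5 + np.exp(-np.abs(S_uV_per_K) / 116.0)) * 1e-8

def lattice_thermal_conductivity(K_total, rho, S_uV_per_K, T):
    """K_l = K - K_e with K_e = L0 * T / rho (Wiedemann-Franz)."""
    K_e = lorenz_number(S_uV_per_K) * T / rho
    return K_total - K_e

# Illustrative values (not the measured data): K = 7.0 W/mK,
# rho = 1e-5 Ohm*m, S = -150 uV/K at T = 300 K
print(lattice_thermal_conductivity(7.0, 1e-5, -150.0, 300.0))
```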
The presence of volume/lattice defects is highly correlated with the thermal conductivity: volume defects such as cracks and line defects scatter the mid- and long-wavelength phonons, whereas atomic defects such as lattice defects or vacancies scatter shorter-wavelength phonons. Hence, in the region around 50 K where the thermal conductivity of the two alloys changes, the reduced lattice thermal conductivity is caused by lattice disorder acting as a scattering center for low/mid-energy phonons. Estimates of the dislocation density and strain from the structural investigations also showed that indium and selenium doping increase the dislocation density and introduce distortions in the Bi2Te3 lattice. The x = 0.02 and x = 0.04 dopants exhibit highly stressed lattices and large dislocation densities, as validated by the HRXRD study. This might result in a significant decrease in the lattice thermal conductivity [69]. Our previously reported polycrystalline (Bi1-xInx)2Te2.7Se0.3 samples have shown lower thermal conductivity values as compared to the single crystals because polycrystalline materials have a large number of grain boundaries that scatter both phonons and electrons [66]. The temperature-dependent power factor (PF = S²/ρ) and thermoelectric figure of merit (ZT = S²T/(κρ)) have been calculated for the grown single crystals from 10 to 400 K. The highest power factor, 3074 μW/mK², has been found in Bi2Te3, which is 5.12 times greater than that of our previously reported polycrystalline Bi2Te3 samples (Fig. 17) [48]. The maximum ZT value of about 0.26 is noticed in the Bi2Te3 single crystal at 350 K (Fig. 18). A comparison of the electrical resistivity, thermal conductivity, Seebeck coefficient, and ZT values of some reported thermoelectric systems is shown in Table 6 [26,70-72]. The thermoelectric characteristics of melt-grown indium- and selenium-doped bismuth telluride are the focus of the current investigation. The crystal structure of Bi2Te3-based alloys tends to be more ordered, resulting in the greatest value of the band gap, as demonstrated by Tang et al. The lattice thermal conductivity is found to decrease drastically because of the induced alloying dispersion. Doped bismuth telluride grown via the Bridgman method shows a monotonically increasing electrical resistivity with temperature (Yashmita et al.); however, the p-type specimen shows a considerably faster rise in resistivity with temperature than the n-type. The present compounds exhibit similar behavior. Kim et al. generated Sb- and Se-co-doped Bi2Te3 thermoelectric nanocomposites by combining the atom-by-atom assembly of multi-walled carbon nanotubes and Al2O3 nano-powders. In contrast, co-doping bismuth telluride is the major focus of this study. The band gap of Bi2Te2Se alloys may be effectively widened by alloying, which reduces the bipolar effect at elevated temperatures. The multiscale phonon scattering decreases the lattice thermal conductivity, and bipolar conduction also decreases the thermal conductivity, but the presence of antisite defects in our co-doped samples makes a significant contribution to the decrease in the thermal conductivity in the temperature range of 40-350 K [26,70-72].
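A minimal sketch of the power factor and figure of merit definitions used above, PF = S²/ρ and ZT = S²T/(κρ); the input values are assumed round numbers, not the data points of Figs. 17 and 18.

```python
def power_factor(S, rho):
    """PF = S^2 / rho, with S in V/K and rho in ohm*m -> PF in W/(m*K^2)."""
    return S**2 / rho

def figure_of_merit(S, rho, kappa, T):
    """ZT = S^2 * T / (kappa * rho)."""
    return power_factor(S, rho) * T / kappa

S = 200e-6      # Seebeck coefficient, V/K (assumed)
rho = 1.3e-5    # electrical resistivity, ohm*m (assumed)
kappa = 1.6     # total thermal conductivity, W/(m*K) (assumed)
T = 350.0       # temperature, K

print(f"PF = {power_factor(S, rho) * 1e6:.0f} uW/(m*K^2)")
print(f"ZT = {figure_of_merit(S, rho, kappa, T):.2f}")
```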
Figure 19 exhibits the quality factor (B) of the (Bi1-xInx)2Te2.7Se0.3 single crystals at 350 K. The intrinsic thermoelectric properties can be characterized by the thermoelectric quality factor B = (k_B/e)² · [8πe(2m_e·k_B·T)^(3/2)/(3h³)] · μ_w·T/κ_l, where μ_w is the weighted mobility. The highest B value of about 15 × 10⁻⁴ V⁻¹ is obtained for the (Bi0.96In0.04)2Te2.7Se0.3 crystals at 350 K, due to their highest weighted mobility and lowest lattice thermal conductivity.
Conclusion
The thermoelectric performance of (Bi1-xInx)2Te2.7Se0.3 single crystals grown by the melt growth technique has been studied in the temperature range 10-400 K. Structural investigations such as powder XRD and HRXRD show that the single crystals have a hexagonal structure with the R-3m space group.
Data availability
The thermoelectric data generated for this study are available from the corresponding author [A N Prabhu] on request. | 7,193.2 | 2023-05-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Effect of Cooling Methods on the Strength of Silico-ferrite of Calcium and Aluminum of Iron Ore Sinter during the Cooling Process
In the iron making process, a high mechanical strength is favorable for iron ore sinters in the blast furnace, and the bonding phase is regarded as one of the key components that determines the quality of the iron ore sinter, in which the silico-ferrite of calcium and aluminum (SFCA) is one of the typical phases. In this study, synthesized samples with different SFCA mass fractions were prepared to study the effect of different cooling methods on the strengths of the SFCA samples. The results showed that the strength of a sample could be improved by increasing the SFCA content during a temperature change. Further, the test results for the compressive strength suggested that the SFCA had a positive effect on the strength of the iron ore sinter during cooling, with slow cooling being significantly effective at preventing the generation of thermal stress. Moreover, the Biot number was introduced to normalize all of the cooling methods. The results showed that higher mechanical strengths for iron ore sinters will be obtained with higher SFCA content and lower Biot numbers, which will guide the evaluation of mechanical strength of iron ore sinter after the cooling process in industry.
Introduction
The metallurgical performance of the iron ore sinter, especially its mechanical strength, plays a significant role in the blast furnace during the iron making process. In general, the mineral structure of the iron ore sinter mainly consists of the core ores and the bonding phase. The core ores are bonded by the bonding phase during the sintering process; because the strength of the core ores is relatively high and will not be the limiting factor for the mechanical strength of the iron ore sinter, the strength of the bonding phase determines, to a great extent, the quality of the iron ore sinter [1,2]. The mineral composition of the bonding phase mainly consists of calcium silicate and calcium ferrite, with the latter having better resistance to fracture and having been proven to be crucial for the strength of the bonding phase [3]. Most of the calcium ferrite in the iron ore sinter is considered to be a solid solution of calcium ferrite (CaFe2O4, CF) with dissolved SiO2 and Al2O3, and was called the silico-ferrite of calcium and aluminum (Ca5Si2(FeAl)18O36, SFCA) by Hancart et al. [4]. Moreover, it has been shown to positively influence metallurgical performance, including the strength of the iron ore sinter in the blast furnace [5].
The SFCA has been extensively studied as a key bonding phase because of its important role in determining the quality of the iron ore sinter, with most studies focusing on the mechanism of its formation. Some researchers [6-8] found that the crystallization mechanism of SFCA was more complex than that reported in previous papers, based on the results of in situ X-ray diffraction (XRD). Others [9,10] reported that the structure and stability of the SFCA were closely related to the content of Al2O3 and Fe3O4. All of these studies involved the formation process of the SFCA, which occurs above 1000 °C. In addition, the strength of the iron ore sinter or bonding phase has also been studied. Zhang et al. [11] measured the strength of the sinter body of iron ores using a micro-sintering method and discussed the relationship between the chemical composition and the strength of the sinter body; the results showed that the compressive strengths of iron ores decreased with increasing contents of SiO2 and Al2O3. Liu et al. [12] found that an Australian iron ore concentrate exhibited high bonding phase strength; the tumbler index of the sinter first increased and then decreased with increasing Ore-A ratio. Wei et al. [13] studied the effect of the immersion depth of an ultrasonic probe (IDUP) on the properties of CF; the results showed that the compressive strength of CF increased from 52.5 MPa to 87.3 MPa when the IDUP increased from 10 mm to 30 mm. Tang et al. [14] investigated the influence of basicity and temperature on the bonding phase strength of iron ore sinters; the results showed that the bonding strength exceeded 4000 N, which is equal to 80 MPa for a sample with a diameter of 8 mm, for temperatures in the range of 1280-1300 °C and basicities of 2.0, 2.4, and 3.4-4.0. However, iron ore sinters are always cooled down from above 750 °C to below 150 °C on the cooling machine in a sintering plant and then delivered to the iron making process. Thus, consideration should be given to the effect of the cooling process on the quality of the iron ore sinter, and especially on the strength of the bonding phase, which has scarcely been studied previously. Therefore, in this study, a novel evaluation criterion was created to correlate the cooling method and the strength of the bonding phase during the cooling process of the iron ore sinter. The temperature gradient may cause the generation of thermal stress during the cooling process of the iron ore sinter, and a stress concentration will lead to cracking along the bonding phase, which will affect the mechanical performance of the iron ore sinter in the blast furnace. Therefore, it is necessary to study the effect of the cooling method on the strength of the main bonding phase, namely the SFCA, in the iron ore sinter. A series of different cooling methods was applied to the synthesized samples, whose main component is the SFCA, a key constituent of the iron ore sinter: cooling in air, in forced air, and in a vertical tank (a new cooling process that has been applied in some steel plants in China) was set according to the real industrial cooling process, while cooling in water and in the furnace represented extreme conditions. The "strength" obtained in this paper is therefore fundamentally a reference for the tumbler strength and shatter strength among the metallurgical properties of the iron ore sinter after the cooling process.
Further, the Biot number was introduced to normalize all of the cooling methods considered in the study. This means that an iron ore sinter after the cooling process, whose Biot number can be calculated, will not perform well in mechanical strength if the "strength" of the SFCA samples is relatively low under the cooling method with the same Biot number in this paper.
In this study, sintered samples with different SFCA contents were synthesized. Next, the thermal expansion coefficients of the SFCA samples, which govern the generation of thermal stress in the iron ore sinter when it is heated or cooled, were obtained, and the changing rate of the linear expansion coefficient with temperature (CRT) was defined to study the effect of the SFCA content on the strength of the samples during temperature changes. Further, to study the effect of different cooling methods on the strength of the SFCA, a series of cooling methods was investigated, followed by compressive strength tests. Finally, the relationship between the SFCA content, the Biot number, and the strength of the samples was established to guide the evaluation of the mechanical strength of iron ore sinter after the cooling process in industry.
Synthesis and Verification of Calcium Ferrite
All of the analytical reagents (chemical purity > 99.5%) used in this study, including calcium carbonate (CaCO3), ferric oxide (Fe2O3), silicon dioxide (SiO2) and aluminum oxide (Al2O3), were purchased from an e-commerce supplier. After being fully dried in a drying oven for 6 h at 80 °C and sifted through a 200 mesh sieve (75 μm), sufficient quantities of CaCO3 and Fe2O3 were weighed to obtain a mole ratio of 1:1 using an electronic scale. Then, they were stirred together clockwise with a glass rod for 30 min and thoroughly blended in an agate crucible for 2 h. Next, moderate deionized water was added to the raw materials and acted as an adhesive. A cylindrical mold was filled with 10 g of this powder and compression molding proceeded with a briquetting machine at a holding pressure of 5 MPa for 1 min. The cake-shaped samples (approximately 9 mm in height and 22 mm in diameter) were fully dried in a drying oven for 12 h at 80 °C and prepared for sintering. Groups of five samples were placed in a 100 cm² corundum crucible, as shown in Figure 1a, and sintered in a high-temperature box furnace in air according to the sintering schedule. As shown in Figure 1b, the temperature in the furnace rose from ambient temperature to 1200 °C at a rate of 5 °C/min, during which the temperature remained constant for 30 min periods at 200 °C, 400 °C and 800 °C. Then the samples were slowly cooled down to ambient temperature in the furnace after being sintered for 120 min at 1200 °C. Thus, the CF samples were synthesized for the subsequent research.
As an acknowledged method to obtain the chemical composition of a specific material, XRD was employed to verify the composition of the CF that was synthesized as previously described. One CF sample was crushed completely in an electromagnetic crusher. Then, the powder was sifted through a 325 mesh sieve (45 μm) and analyzed with an Ultima IV X-ray diffractometer made by the Rigaku Corporation of Japan (Tokyo, Japan). Cu was used as the target material, and the diffractometer was operated at 40 kV and 40 mA, with a scanning speed of 20°/min. A comparison of the XRD patterns for the synthesized and standard CF, depicted in Figure 2, shows that their diffraction intensities, half peak widths, and peak positions coincide completely, which indicates that the CF was successfully synthesized as previously described and could be used as the raw material in the SFCA synthesis.
Synthesis of Silico-Ferrite of Calcium and Aluminum
Similar to the procedure for the CF synthesis, the SFCA samples were synthesized through weighing, blending, molding, and sintering. The composition ratios of the raw materials for the SFCA samples, which are listed from S1 to S5 in Table 1, were based on the work of Xue [15]. The proportions of Fe2O3 and Al2O3 changed with changes in the n(Al2O3)/n(Fe2O3) value, the molar ratio of Al2O3 to Fe2O3, while the mass ratio of CF and SiO2 remained constant. The CF samples obtained as described in Section 2.1.1 were crushed and sifted completely through a 200 mesh sieve before being mixed into the raw materials for the SFCA samples, as were the Fe2O3, Al2O3, and SiO2. Note that the sintering schedule used here was the same as that used in the CF synthesis.
Semi-Quantitative Analysis of X-ray Diffraction
To obtain the mass fractions of the pure SFCA phase in samples S1-S5, the semi-quantitative K-value method based on XRD analysis, proposed by Frank H. Chung in 1974 [16], was introduced. In this study, corundum (α-Al2O3) was selected as the calibration compound and mixed well with Fe2O3 at a mass ratio of 1:1 for the XRD analysis. The ratio of the strongest peak intensity of Fe2O3 to that of α-Al2O3 was defined as the K-value. Then, the crushed and sifted samples (S1-S5) were mixed with α-Al2O3 at equivalent known quantities. The SFCA content calculation follows the K-value relations, where K is the K-value mentioned above; I represents the strongest peak intensity of the corresponding compound; and ω and ω' represent the mass ratios of the corresponding compound in S1-S5 and in the mixtures, respectively. Thus, the mass ratio of the pure SFCA in samples S1-S5 could be obtained.
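A minimal Python sketch of the standard Chung K-value (reference intensity ratio) bookkeeping, assuming the 1:1 corundum mixing described above; the peak intensities below are hypothetical placeholders, and the paper's exact equations may differ in detail.

```python
K = 2.24  # ratio of strongest Fe2O3 peak to strongest alpha-Al2O3 peak (1:1 mix)

def phase_fraction(I_phase, I_corundum, w_corundum, K_value):
    """Mass fraction of the calibrated phase in the *mixture*:
    w'(phase) = I(phase) * w'(Al2O3) / (K * I(Al2O3))."""
    return I_phase * w_corundum / (K_value * I_corundum)

# A sample S_i is mixed 1:1 with corundum, so w'(Al2O3) = 0.5 in the mixture.
I_fe2o3, I_al2o3 = 120.0, 900.0   # hypothetical strongest-peak intensities
w_fe2o3_mix = phase_fraction(I_fe2o3, I_al2o3, 0.5, K)
w_fe2o3_sample = 2.0 * w_fe2o3_mix  # undo the 1:1 dilution
print(f"residual Fe2O3 in sample: {100 * w_fe2o3_sample:.1f} wt%")
```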
Test of Linear Expansion Coefficient
The linear expansion coefficients (LECs) of samples S1-S5 from 30 to 400 °C were measured with the NETZSCH DIL 402SE made by the NETZSCH Group of Germany (Selb, Germany). The samples S1-S5 were prepared as described in Section 2.1.2 in a strip-shaped mold for the test, as shown in Figure 3. After one of the samples was fixed in the chamber, the following parameters were input to the machine: the sample number, the sample length, the temperature range, and the heating rate. The rest of the parameters remained at their default values. Then the thermal expansion instrument was started to obtain the LEC of each sample from 30 to 400 °C.
Test of Cooling and Compressive Strength
Just like the cooling process for an iron ore sinter in a steel plant, samples S1-S5 were cooled down to 800 °C in the furnace after being held at 1200 °C for 120 min. After that, five different cooling methods, classified as (I) cooling in water, (II) cooling in forced air, (III) cooling in air, (IV) cooling in the furnace, and (V) vertical tank cooling, were used to cool the samples down from 800 °C to ambient temperature, as listed in Table 2.
(I) Cooling in water: a fast cooling method for an extreme case was tested first. After 800 °C, samples S1-S5 were poured into a barrel with 5 L of water at ambient temperature. They were removed after 3 min and dried in an oven for 6 h.
(II) Cooling in forced air: after 800 °C, samples S1-S5 were moved into a vertical tank and cooled with forced air at 30 m³/h at 1 atm and 20 °C using a homemade cooling device, which consisted of a blower, flowmeter, and vertical tank, as shown in Figure 4. To minimize the heat loss, the door of the tank was shut and sealed immediately after the samples were placed on a porous grate in the tank. Here, a thermocouple was used for the real-time monitoring of the surface temperatures of the samples, and the samples were removed after being cooled down to ambient temperature.
(III) Cooling in air: after 800 °C, samples S1-S5 were removed from the furnace with a corundum crucible and exposed to the atmosphere for slow cooling until their surface temperatures dropped to ambient temperature.
(IV) Cooling in the furnace: after 800 °C, samples S1-S5 were kept in the furnace in air until their surface temperatures dropped to ambient temperature.
(V) Vertical tank cooling: as shown in Figure 4, a homemade vertical tank cooling device was designed to simulate a semi-industrial experiment with a vertical tank cooling process for the iron ore sinter. The surface temperatures of samples S1-S5 first dropped to 600 °C in the sealed vertical tank. Then, they were processed with forced air at 10 m³/h at 1 atm and 20 °C. The blower was stopped when the samples cooled down to 150 °C, after which they were removed from the tank and cooled down to ambient temperature in the atmosphere.
All of the samples were subjected to a compressive strength test using an electro-hydraulic servo pressure testing machine after being processed with the five cooling methods previously described. Some of the parameters needed to be set before the test: the sectional area of each sample was set as 380 mm², the loading rate was set as 100 N/s, and the rest of the parameters remained at their default values. Next, one of the samples was fixed at the center of the fixture, and then the machine was started. After the sample was completely crushed, the machine stopped automatically and the test of the next sample continued. Note that the cooling and compressive strength tests were repeated twice to ensure their accuracy.
Semi-Quantitative Analysis of X-ray Diffraction
The XRD pattern of the mixture consisting of Fe2O3 and α-Al2O3 with a mass ratio of 1:1 is shown in Figure 5. The ratio of the strongest peak intensity of Fe2O3 to that of α-Al2O3 indicates that the K-value mentioned in Section 2.2 is 2.24.
The XRD patterns of samples S1-S5, mixed with equivalent known quantities of α-Al2O3, are shown in Figure 6. The relative peak intensities of Fe2O3 and α-Al2O3 were measured with the software MDI Jade version 6.1 developed by Materials Data, Inc. (Livermore, CA, USA) [17]. The mass ratios of the SFCA in samples S1-S5, averaging two repeated test results, are listed in Table 3, and the relationship between the n(Al2O3)/n(Fe2O3) value and the mass ratios is shown in Figure 7. As can be seen, there was a maximum of 91.23% in S2 and a minimum of 86.70% in S3, with increases occurring from S1 to S2 and from S3 to S4, and a stable trend for samples S4 and S5.
Figure 6. XRD patterns of samples S1-S5 mixed with equivalent known quantities of α-Al2O3.
Figure 7. Relationship between the mass fraction of the SFCA and the corresponding n(Al2O3)/n(Fe2O3) value of samples S1-S5.
Scanning Electron Microscope and Energy Dispersive Spectrometer Analyses
The SFCA morphologies obtained using a scanning electron microscope (SEM) with back-scattered electron imaging (BSE) for samples S1-S5 are shown in Figure 8. To obtain the phase compositions of the samples, the SEM morphologies with the corresponding EDS analysis results for the line and map scanning of sample S2 are shown in Figures 9 and 10. The element analysis results for points a, b, and c in Figure 9 are listed in Table 4. The enrichment of the element Ca around the SiO2 contributed to the formation of CaSiO3. The dark gray Al2O3 particles are larger than those of the SiO2, and the gaps between phases are obvious. Further, the grayish white or bright white phases are CF, mixed with a small amount of the incompletely reacted Fe2O3. Generally, SFCA is a kind of solid solution cooled down from high-temperature liquid CF with dissolved Al2O3 and SiO2, and it accounts for most of the entire sample.
Table 4. EDS analysis results for points a, b, and c in Figure 9.
Effect of SFCA Content on Strength of Samples during Temperature Change
The LECs of samples S1-S5 from 30 to 400 °C are shown in Figure 11. The variation trends of curves S1 and S2 are similar, with the LEC increasing significantly from 30 °C to 200 °C and then stabilizing at 9.0 × 10⁻⁶ °C⁻¹ until the end. Curve S3 rises continuously to 400 °C and finally reaches a maximum of 10.0 × 10⁻⁶ °C⁻¹. Curve S4 rises significantly from 30 °C to 150 °C and then declines slightly to 225 °C, followed by another rise to a maximum of 9.1 × 10⁻⁶ °C⁻¹. Curve S5 ascends to a maximum of 9.25 × 10⁻⁶ °C⁻¹ at 250 °C and then drops until the end.
Figure 11. Linear expansion coefficients of samples S1-S5 from 30 to 400 °C.
In order to study the relationship between the SFCA content and the LEC for samples S1-S5, the changing rate of the linear expansion coefficient with temperature (CRT) was defined as the ratio of the absolute value of the difference between the LEC at each temperature above 30 °C and the LEC at 30 °C to the LEC at 30 °C:
CRT(t) = |α_t − α_30| / α_30
As shown in Figure 12, the CRT curves represent the growing trend of the LECs of samples S1-S5 from 30 to 400 °C, with curve S3 inclined the most and S2 the least, especially after 200 °C. To study the relationship between the SFCA contents and the CRT curves of samples S1-S5, an obvious negative correlation between the content of SFCA and the average of the CRT was obtained, as shown in Figure 13. The following equations are introduced to clarify the significance of the CRT here:
α = (l_t − l_0) / (l_0 (t − t_0)), ε = (l_t − l_0) / l_0
where α is the LEC (1/°C); l_t and l_0 are the lengths of the samples at temperature t and at t_0 = 30 °C (m), respectively; and ε is the strain of the samples during the change of the temperature. Therefore, the relationship between the LEC and the strain can be expressed as
ε = α (t − t_0)
As shown in the equation, during a change of temperature, the changing rate of the strain, which is the main factor leading to crack initiation, remains constant when the LEC remains constant. Therefore, the more stable the value of the CRT is, the lower the probability of crack generation, which indicates that the mechanical strength of the samples increases with the SFCA content during temperature changes.
Figure 12. CRT values of samples S1-S5 from 30 to 400 °C.
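The CRT defined above is straightforward to evaluate numerically. The sketch below uses invented LEC values shaped roughly like the curves in Figure 11, not the measured data.

```python
import numpy as np

# CRT(t) = |alpha(t) - alpha(30)| / alpha(30), per the definition above.
temps = np.array([30, 100, 200, 300, 400], dtype=float)   # deg C
alpha = np.array([6.0, 7.5, 8.8, 9.0, 9.0]) * 1e-6        # LEC, 1/deg C (assumed)

crt = np.abs(alpha - alpha[0]) / alpha[0]
for t, c in zip(temps, crt):
    print(f"CRT({t:.0f} C) = {c:.2f}")

# A flatter CRT curve means a more nearly constant strain rate
# d(epsilon)/dT = alpha during heating or cooling, i.e. less crack initiation.
```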
Effect of Cooling Method on Strength of Samples during Cooling Process
The relationships between the cooling methods and the compressive strengths of samples S1-S5 are plotted with error bars in Figure 14, which shows that S2 has the highest and S5 the lowest compressive strength among samples S1-S5 for all of the cooling methods. Meanwhile, for samples S1-S3, the compressive strength after (IV) cooling in the furnace is the highest, while that after (I) cooling in water is the lowest, which indicates that the compressive strength after slow cooling is higher than that after fast cooling. Further, the relationships between the compressive strengths of samples S1-S5 and the SFCA contents for all of the cooling methods are depicted in Figure 15. As shown in the results, the strengths of samples S1-S5 are basically positively correlated with the SFCA contents, indicating that the SFCA plays a positive role in improving the strength in a specific cooling process. In order to investigate the impact of the vertical tank cooling and the other four cooling methods on the quality of the samples, the average compressive strengths with the different cooling methods were obtained, as shown in Figure 16.
Figure 16. Comparison of average compressive strengths of samples S1-S5 after vertical tank cooling and the other four cooling methods.
Scanning Electron Microscopy Analysis
The SEM morphologies of sample S2 magnified 50, 200, and 1000 times after using four different cooling methods are shown in Figure 17. As shown in the SEM morphologies, (I) cooling in water caused the largest number of cracks, followed by (II) cooling in forced air and (III) cooling in air, whereas no cracks formed in the sample subjected to (IV) cooling in the furnace. In fact, the different thermal stresses generated in the interior of the sample varied with the cooling rates of the different cooling fluids, leading to different degrees of cracking, which is presumed to be the main factor affecting the compressive strengths of samples S1-S5 under the different cooling methods.
Effect of SFCA Content and Cooling Method on Strength of Iron Ore Sinter during Cooling Process
The cooling process of an iron ore sinter can be regarded as a gas-solid convection heat transfer process from the outside, and the Biot number is a dimensionless number that represents the ratio of the internal thermal conduction resistance to the surface heat transfer resistance of an object. It can therefore unify the different geometric dimensions of the SFCA samples or iron ore sinters and the different cooling methods, by which a relationship can be created between the compressive strength obtained in this research and the mechanical strength in the industrial process. Therefore, in order to study the effect of the SFCA content and cooling method on the strength of the iron ore sinter during the cooling process, the Biot number, which reflects the distribution of the temperature field inside an object under non-steady-state heat conduction conditions, was introduced to normalize the five cooling methods considered in the study as
Bi = α_cooling · d_0 / (2 λ_sinter) (8)
where Bi is the Biot number; α_cooling is the convective heat transfer coefficient of cooling methods (I)-(V) (W/(m²·K)); d_0 is the diameter of samples S1-S5 of iron ore sinter and is 0.022 (m); and λ_sinter is the coefficient of thermal conductivity for the iron ore sinter and is 1.91 (W/(m·K)) [18]. The effect of the SFCA content and Biot number on the compressive strength of the iron ore sinter during the cooling process is shown in Figure 18. As shown in the results, a high (low) strength basically corresponds to a high (low) value of ω(SFCA) or 1/Bi. In particular, the highest strength of 196.56 MPa was obtained with the highest ω(SFCA) (91.23%) and 1/Bi (14.18), which indicates that higher mechanical strengths for iron ore sinters will be obtained with higher SFCA contents and lower Biot numbers during cooling; this will guide the evaluation of the mechanical strength of iron ore sinter after the cooling process in industry.
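Equation (8) is simple to apply. In the sketch below, d_0 and λ_sinter are taken from the text, while the convective heat transfer coefficients for methods (I)-(V) are illustrative guesses rather than the values used in the study.

```python
# Biot-number normalization per Eq. (8): Bi = h * d0 / (2 * lambda_sinter).
d0 = 0.022            # sample diameter, m (from the text)
lam_sinter = 1.91     # thermal conductivity of sinter, W/(m*K) [18]

h_cooling = {         # hypothetical h values in W/(m^2*K)
    "(I) water":         1000.0,
    "(II) forced air":    100.0,
    "(III) air":           20.0,
    "(IV) furnace":         5.0,
    "(V) vertical tank":   60.0,
}

for method, h in h_cooling.items():
    bi = h * d0 / (2.0 * lam_sinter)
    print(f"{method:>18}: Bi = {bi:.3f}, 1/Bi = {1.0 / bi:.2f}")
```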
Conclusions
In this paper, the effect of the cooling method on the strength of SFCA, a desirable bonding phase with good metallurgical properties in sinter, was represented by correlation analyses of the SFCA contents with the thermal expansion coefficients and compressive strengths under different cooling methods. All of the measurements in the study, especially the tests of thermal expansion, the cooling process, and compressive strength, were arranged to study the effect of cooling methods on the strength of the SFCA of iron ore sinter during the cooling process, which is meaningful for the industrial cooling process of iron ore sinter. The synthesized SFCA was regarded as the important phase in the iron ore sinter, the compressive strength represented the mechanical strength of the iron ore sinter, and the Biot number correlated the cooling methods used in the study with the cooling process of iron ore sinter in industry. However, the results in this paper merely provide a reference for the industrial process, as they were limited by the equipment and parameters available in the laboratory. The conclusions can be summarized as follows:
• An obvious negative correlation between the SFCA content and the CRT average suggests that the SFCA in the bonding phase significantly influences the mechanical strength of the iron ore sinter.
• For a specific cooling method, the compressive strength of the SFCA samples increased with the SFCA content, suggesting that the SFCA phase has a positive effect on the mechanical strength of the iron ore sinter during the cooling process.
• The results of the compressive strength tests suggested that slow cooling can prevent the generation of the thermal stress that leads to a deterioration in the strength of the iron ore sinter.
• The mechanical strength of an iron ore sinter could be improved by decreasing the Biot number and increasing the SFCA content, which will guide the evaluation of the mechanical strength of iron ore sinter after the cooling process in industry. | 10,010 | 2019-04-02T00:00:00.000 | [
"Materials Science",
"Engineering"
] |
Eternal Independent Sets in Graphs
The use of mobile guards to protect a graph has received much attention in the literature of late in the form of eternal dominating sets, eternal vertex covers and other models of graph protection. In this paper, eternal independent sets are introduced. These are independent sets such that the following can be iterated forever: a vertex in the independent set can be replaced with a neighboring vertex and the resulting set is independent.
Graph Protection
Let G = (V, E) denote a finite, undirected graph with vertex set V and edge set E. The problem of protecting a graph with mobile guards has been studied in a number of recent papers. We shall begin with a review of some of these models before introducing the eternal independent set problem, which can be viewed in the same light.
A dominating set of a graph G = (V, E) is a set D ⊆ V such that each vertex in V − D is adjacent to a vertex in D. The minimum cardinality amongst all dominating sets of G is the domination number γ(G).
Let {D i }, D i ⊆ V , i ≥ 1, be a collection of sets of vertices of the same cardinality, with one guard located on each vertex of D i . Each protection problem can be modeled as a two-player game between a defender and an attacker : the defender chooses D 1 as well as each D i , i > 1, while the attacker chooses the locations of the attacks r 1 , r 2 , . . . (which are sometimes called requests). Each attack is dealt with by the defender by choosing the next D i in response to the attack r i , subject to some constraints that depend on the particular game. The defender wins the game if they can successfully defend any sequence of attacks, subject to the constraints of the game described below; the attacker wins otherwise. We note that the sequence of attacks may be infinite in length.
We say that a vertex (edge) is protected if there is a guard on the vertex or on an adjacent (incident) vertex. A vertex v is occupied if there is a guard on v, otherwise v is unoccupied. An attack is defended if a guard moves to the attacked vertex (across one edge, i.e., in one "step").
Eternal Protection Problems
For the eternal domination problem, each D_i, i ≥ 1, is required to be a dominating set, r_i ∈ V (assume without loss of generality r_i ∉ D_i), and D_{i+1} is obtained from D_i by moving one guard to r_i from an adjacent vertex v ∈ D_i. If the defender can win the game with the sets {D_i}, then each D_i is an eternal dominating set. The size of a smallest eternal dominating set of G is the eternal domination number γ^∞(G). This problem was first studied by Burger et al. in [1] and will be referred to as the one-guard moves model.
For the m-eternal dominating set problem, each D_i, i ≥ 1, is required to be a dominating set, r_i ∈ V (assume without loss of generality r_i ∉ D_i), and D_{i+1} is obtained from D_i by allowing each guard to move to a neighboring vertex (if it so chooses). That is, each guard in D_i may move to an adjacent vertex, as long as one guard moves to r_i. Thus it is required that r_i ∈ D_{i+1}. The size of a smallest m-eternal dominating set (defined similarly to an eternal dominating set) of G is the m-eternal domination number γ^∞_m(G). This "all guards move" version of the problem was introduced by Goddard, Hedetniemi and Hedetniemi [3]. The m in m-eternal denotes that multiple guards may move in response to an attack.
In the eviction model, each configuration D_i, i ≥ 1, of guards is required to be a dominating set. An attack occurs at a vertex r_i ∈ D_i such that there exists at least one v ∈ N(r_i) with v ∉ D_i. The next guard configuration D_{i+1} is obtained from D_i by moving the guard from r_i to a vertex v ∈ N(r_i), v ∉ D_i (i.e., this is the "one-guard moves" model). The size of a smallest eternal dominating set in the eviction model for G is denoted e^∞(G). That is, attacks occur at vertices with guards and we must move that guard to an unoccupied neighboring vertex. This problem was introduced in [6].
A vertex cover of G is a set C ⊆ V such that each edge of G is incident with a vertex in C. The minimum cardinality of a vertex cover of G is the vertex cover number τ (G) of G. An independent set of G is a set I ⊆ V such that no two vertices in I are adjacent. The maximum cardinality amongst all independent sets is the independence number α(G). It is well known that α(G) + τ (G) = n for all graphs G of order n (see e.g. [2, p. 241]).
The clique covering number θ(G) is the minimum number k of sets in a partition V = V_1 ∪ · · · ∪ V_k of V such that each G[V_i] is complete. Hence, as is well known, θ(G) equals the chromatic number χ(Ḡ) of the complement Ḡ of G. Thus for every graph G, α(G) ≤ θ(G).
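The identity θ(G) = χ(Ḡ) suggests a direct, if brute-force, way to compute the clique covering number on small graphs; the sketch below is one such implementation (not from the paper) and is only practical for a handful of vertices.

```python
from itertools import product
import networkx as nx

def chromatic_number(G):
    """Smallest k admitting a proper k-coloring, found by exhaustive search."""
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for coloring in product(range(k), repeat=len(nodes)):
            col = dict(zip(nodes, coloring))
            if all(col[u] != col[v] for u, v in G.edges):
                return k

def clique_cover_number(G):
    """theta(G) = chi of the complement of G."""
    return chromatic_number(nx.complement(G))

print(clique_cover_number(nx.cycle_graph(5)))  # theta(C5) = 3
```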
A matching in G is a set of edges, no two of which have a common end-vertex. The matching number m(G) is the maximum cardinality of a matching of G. It is also well known that τ(G) ≥ m(G) for all graphs, and that equality holds for bipartite graphs.
For the m-eternal vertex cover problem, each D_i, i ≥ 1, is required to be a vertex cover, r_i ∈ E, and D_{i+1} is obtained from D_i by moving one or more guards to neighboring vertices; i.e., each guard in D_i may move to an adjacent vertex provided that one guard moves across edge r_i (we assume without loss of generality that one end-vertex of r_i is not in D_i, otherwise the two guards on the end-vertices of r_i simply interchange positions). If the defender can win the game with the sets {D_i}, then each D_i is an eternal vertex cover. The size of a smallest eternal vertex cover of G is the eternal covering number τ^∞_m(G). This problem was introduced in [7].
A survey on eternal protection problems can be found in [8].
Eternal Independent Sets
For the eternal independent set problem, each D_i, i ≥ 1, is required to be an independent set, r_i ∈ D_i, and D_{i+1} is obtained from D_i by moving the guard on r_i to an adjacent vertex. (We say the vertex r_i is attacked.) If the defender can win the game with the sets {D_i}, then each D_i is an eternal independent set. The size of a largest eternal independent set of G is the eternal independence number α^∞(G). This will sometimes be referred to as the one-guard moves model.
For the m-eternal independent set problem, each D_i, i ≥ 1, is required to be an independent set, r_i ∈ D_i, and D_{i+1} is obtained from D_i by moving the guard on r_i to an adjacent vertex while the remaining guards in D_i may also move to neighboring vertices (so long as r_i ∉ D_{i+1}). If the defender can win the game with the sets {D_i}, then each D_i is an m-eternal independent set. The size of a largest m-eternal independent set of G is the m-eternal independence number α^∞_m(G). This will sometimes be referred to as the all-guards move model.
We shall sometimes say that if a guard at u moves to v, the vertices u and v are switched. A total-switch of an independent set D into an independent set Z is a simultaneous replacement of all vertices in D in which each vertex v_i ∈ D is replaced by a neighbor z_i such that |D| = |Z|. Note that D ∩ Z = ∅, since D is an independent set. For the total-eternal independent set problem, each D_i, i ≥ 1, is required to be an independent set, r_i ∈ D_i, and D_{i+1} is obtained from D_i by a total-switch. If the defender can win the game with the sets {D_i}, then each D_i is a total-eternal independent set. The largest cardinality of a total-eternal independent set is the total-eternal independence number of G, denoted α^∞_t(G).
Observe that for the total-eternal independent set problem, the actual sequence of attacks does not matter, since all the guards must move upon each attack.
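The switching moves are easy to check mechanically on small graphs such as the C_5 example used throughout the paper. The following sketch implements the one-guard-moves response and performs one attack on C_5; the function names are ours, not the paper's.

```python
import networkx as nx

def is_independent(G, S):
    """True if no two vertices of S are adjacent in G."""
    return all(not G.has_edge(u, v) for u in S for v in S if u < v)

def switch(G, S, attacked):
    """One-guard-moves response: move the guard on `attacked` to some
    neighbor so that the resulting set is still independent, if possible."""
    rest = S - {attacked}
    for w in G.neighbors(attacked):
        new_set = rest | {w}
        if w not in S and is_independent(G, new_set):
            return new_set
    return None  # no legal defending move exists

C5 = nx.cycle_graph(5)   # vertices 0..4
D = {0, 2}               # initial independent set, as in Figure 1
D = switch(C5, D, 2)     # attack the guard on vertex 2 -> e.g. {0, 3}
print(D, is_independent(C5, D))
```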
These eternal independent set problems are analogous to the eviction model of eternal domination. Related concepts for independent sets have been considered in [4,5,9], but the exact parameters defined here have not been studied prior to this, as far as we know.
Examples
We give a few small examples to illustrate the various definitions. Observe that α^∞(C_5) = 2. We alert the reader to the fact that C_5 is an example that will be used several more times throughout the paper and is illustrated in Figure 1. In Figure 1, a guard on a shaded vertex can move to an unshaded neighbor (the left guard must move clockwise, the right guard must move counterclockwise from this initial configuration) and the resulting guard configuration induces an independent set (and is isomorphic to the initial configuration). Also α^∞(K_{n,n}) = 1 and α^∞_m(K_{n,n}) = n = α^∞_t(K_{n,n}). The corona of a graph G, denoted cor(G), is the graph obtained from G by adding a pendant vertex to every vertex of G. More generally, it is easy to see that α^∞(cor(G)) = α(G), and the simple proof is omitted.
follows from the second part of the proof of Theorem 4.2, below.
Clearly the first inequality is true, since we can perform a total-switch along the edges of a free matching. In a total-switch request for D, we replace D by Z, where each v_i moves to u_i. If another request is made (now on Z), we switch back to D. Hence D is total-eternal and α^∞_t(G) ≥ |D| = m_f(G). Conversely, let D be a maximum total-eternal independent set. A total-switch sends D to Z such that Z is independent, |D| = |Z|, and every vertex v_i in D moves to a neighbor z_i in Z. Since |Z| = |D|, it follows that every v_i moves to a distinct neighbor z_i in Z. Then v_1 moves to a neighbor z_1, v_2 to a neighbor z_2, and so on, so Z = {z_1, . . . , z_k} is an independent set. Now a total-switch of D sends it to Z via the same edges e_i = (v_i, z_i), and a total-switch on Z sends it back to D; hence D is also a total-eternal independent set, and α^∞_t(G) ≥ |D| = α^∞(G). In any infinite sequence of switchings imposed on D, we always keep moving v_i to u_i and u_i to v_i. So all these requests leave us with an independent set T with none/some/all vertices in D and none/some/all vertices in Z such that |D| = |T| = |Z|; hence D is an eternal independent set. Hence α^∞(G) ≥ |D| = m_i(G).
There exist graphs for which equality in the chain given in Theorem 2.1 does not necessarily hold. Consider C_5, where 2 = α^∞(C_5) > m_i(C_5) = 1. There are also graphs for which α^∞_m(G) < m(G), such as a K_3 with a pendant vertex attached to one of its vertices or a K_5 with a pendant vertex attached to one of its vertices.
It seems that a graph with a large matching and a low chromatic number should force a large free matching and hence a large total-eternal independence number. We detail this relationship in the next proposition.
Proposition. If G has matching number m and chromatic number at most k, then m_f(G) ≥ 2m/(k(k − 1)).
Proof. Let M be a maximum matching of cardinality m. The subgraph of G induced by M, denoted G*, has χ(G*) = t ≤ k. Let A_1, . . . , A_t be the color classes of G*. Now the m edges of M are divided among the t(t − 1)/2 pairs (A_i, A_j). Hence, by averaging, for some pair (i, j), the pair (A_i, A_j) contains at least m/(t(t − 1)/2) ≥ m/(k(k − 1)/2) = 2m/(k(k − 1)) edges from M, and these edges form a free matching.
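The averaging argument in this proof is constructive. The sketch below extracts such a pair of color classes from a maximum matching and a (greedy, hence possibly non-optimal) proper coloring; the helper names are ours and the coloring is applied to all of G rather than to G*, which restricts to a proper coloring of G*.

```python
from itertools import combinations
import networkx as nx

def free_matching_lower_bound(G):
    """Return the matching edges lying between the pair of color classes
    that carries the most edges of a maximum matching M of G."""
    M = nx.max_weight_matching(G, maxcardinality=True)
    coloring = nx.greedy_color(G)  # proper coloring; may use more than chi(G) colors
    best = []
    for a, b in combinations(set(coloring.values()), 2):
        edges = [(u, v) for u, v in M
                 if {coloring[u], coloring[v]} == {a, b}]
        if len(edges) > len(best):
            best = edges
    return best

G = nx.cycle_graph(6)
fm = free_matching_lower_bound(G)
print(len(fm), fm)
```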
Clique Coverings
Observe that α^∞(G) ≤ θ(G) for all graphs G, since α(G) ≤ θ(G) for all G (no clique in a clique cover can contain more than one vertex from any independent set).
Proposition 3.1 Let G be a connected triangle-free graph with θ(G) ≥ 2 and no isolated vertices. Then α^∞(G) < θ(G).
Proof. Suppose to the contrary that α^∞(G) = θ(G). Let C = C_1, C_2, . . . , C_θ be a minimum clique cover. Note that |C_i| ≤ 2 since G is triangle-free. Then an eternal independent set D must contain exactly one vertex from each C_i. If we request a vertex v ∈ C_i, v ∈ D, that vertex must switch to another vertex in C_i (since every C_j, j ≠ i, contains another vertex in D). Thus each C_i must be a K_2. Let u, v be two vertices of minimum distance in D such that the cliques from C in which they are contained are connected by an edge. Clearly 2 ≤ dist(u, v) ≤ 3. If dist(u, v) = 2, then a request to one of them (which one depends on their locations) will switch one of them so that u and v are adjacent. If dist(u, v) = 3, then consecutive requests to both u and v will cause two switches resulting in u and v being adjacent.
Proposition 3.1 is sharp for infinitely many graphs. Let G consist of n paths, each with three vertices, having a common end vertex w, i.e., a star K_{1,n} in which each edge is subdivided once. G is K_3-free with θ(G) = n + 1 and α∞(G) = n (because of the induced matching; see Theorem 4.1 below). We leave open the problem of characterizing the triangle-free graphs for which α∞(G) = θ(G) − 1.
As another example, for a triangle-free graph G on n vertices, cor(G) (which is again triangle-free) has the following properties: θ(cor(G)) = n > α∞(cor(G)) ≥ c√(n log n). The left-hand side comes from Proposition 3.1, while the right-hand side comes from the Ramsey number R(K_3, K_n) and the fact that α∞(cor(G)) = α(G): it is well known that a triangle-free graph on n vertices has an independent set of cardinality at least c√(n log n), and this is sharp.
We can ask: for which connected graphs is α∞(G) = α(G) = θ(G)? It seems difficult to describe these graphs structurally, but some observations are in order. If θ(G) = 1, then α∞(G) = α(G) = θ(G). Now let θ(G) > 1 and let C = {C_1, C_2, . . . , C_k} be a minimum clique covering. Supposing α(G) = θ(G), we get that there is an independent set consisting of one vertex from each C_i. In order for α∞(G) = α(G), clearly each C_i must contain at least two vertices, and no two C_i's that are both K_2's can be joined by an edge. This leads us to the following.
If m_i(G) = θ(G), then α∞(G) ≥ m_i(G) = θ(G). On the other hand, if there were more than θ(G) guards, then by the pigeonhole principle there must simultaneously exist two guards within the same clique from some minimum clique-covering. But two such guards cannot be on independent vertices. Thus α∞(G) = θ(G).
For the other direction, let us assume α ∞ (G) = θ(G). From the observation above, we may assume that θ(G) > 1.
Using the notation from above, each clique C_i from clique cover C can contain at most one edge from any matching. Further, each C_i is a clique with at least two vertices, because if any C_i were a K_1, then we could easily force a switch that destroys independence. Any eternal independent set D of cardinality θ(G) contains exactly one vertex from each clique of clique cover C, since α∞(G) = α(G) = θ(G). Denote the vertices in D as D = {v_1, v_2, . . . , v_k}, k = θ(G), and let v_i ∈ C_i. Obviously D is an independent set. If C_i is a K_j with j > 1, let {u_i^1, u_i^2, . . .} be the other vertices in the clique C_i along with v_i. For simplicity in what follows, we shall omit the superscript on a u_i^k vertex when it is clear from the context and refer to these as u_i-type vertices.
We construct a modified graph G′ as follows. If any u_i-type vertex is adjacent to any v_j, j ≠ i, delete that u_i vertex (since if v_i were attacked first in G, the guard could not switch to that u_i vertex without destroying independence). Then, if any K_2's in the resulting graph have a u_i-type vertex adjacent to any u_j-type vertices, j ≠ i, delete all such u_j vertices (since such vertices cannot be switched to without destroying independence). In the resulting graph G′, what must remain are K_2 components and other cliques with more than two vertices (any two such cliques with more than two vertices may be connected via a limited number of edges). If there are any K_1 components in G′, then D is not an eternal independent set, since we could force a switch in G that destroys independence. The K_2 components can be removed and placed into the induced matching, M, that we are building. So only cliques with more than two vertices remain in the reduced graph G′. Observe that neither (u_i, v_j) nor (u_j, v_i) is an edge for any distinct cliques C_i, C_j in the clique cover C when restricted to G′.
Let D′ ⊆ D be the vertices of D that are in G′. Let D′ = {v_1, v_2, . . . , v_t} and let D* = D \ D′. Considering G, start with guards on the vertices of D, attack all the vertices of D*, and then attack each of the vertices in D′, with v_i ∈ D′ switching to a vertex u_i^a, for some a. The set of u_i^a vertices is an independent set. Either the edges switched across form an induced matching, or some u_i^a is adjacent to some v_b. But there are no such adjacencies in the graph G′. Hence we can add these edges switched across to the K_2 components above to form an induced matching of G.
It seems interesting to find graph classes for which α∞_m(G) = θ(G); C_4 and P_4 are two examples where equality holds, but which have α∞_m(G) > m_i(G). We also record the general lower bound α∞(G) ≥ m_i(G).

Proof. If there is a maximum induced matching with t edges, then a vertex can be switched along each of these edges eternally; therefore there exists an eternal independent set with t vertices.

Bipartite Graphs

For bipartite graphs the matching bound is in fact tight: α∞(G) = m_i(G) (Theorem 4.1). For the reverse inequality, let A and B be the two sides of the bipartition and suppose there exists an eternal independent set D with k vertices. We can request a sequence of attacks such that eventually all the guarded vertices are in A: if some vertex v ∈ D is not in A (i.e., v ∉ A), then v ∈ B, so we can attack v. The guard on v cannot stay in B and must move to A, so we can repeat requesting until all guarded vertices are in A, say a_1, . . . , a_k, with |D| = k.
Let b_i be the vertex in B such that, if a_i is attacked next, the guard on a_i switches to b_i. Then b_i cannot be adjacent to any a_k, k ≠ i, otherwise independence would be destroyed, contradicting that the set of vertices was an eternal independent set.
So consider the set of edges e_i = (a_i, b_i), which is a matching. There are no edges among the a_i, these being all in A, and no edges among the b_i, these being all in B. If there were an edge between e_i and e_j (j ≠ i), it would be either (a_i, b_j) or (a_j, b_i), which is impossible. As all the b_i are independent from all the a_j (j ≠ i), this is an induced matching, so m_i(G) ≥ k = α∞(G).
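The closing check of this proof, that the switch edges form an induced matching, is mechanical to verify; here is a small sketch (our own helper, for illustration only):

```python
def is_induced_matching(adj, edges):
    """Check that `edges` is an induced matching: no two edges share an
    endpoint, and no edge of the graph joins endpoints of distinct edges."""
    ends = [v for e in edges for v in e]
    if len(ends) != len(set(ends)):          # edges must be vertex-disjoint
        return False
    ends = set(ends)
    for (u, v) in edges:
        # u and v may touch no matching endpoint other than each other
        if (adj[u] & ends) - {v} or (adj[v] & ends) - {u}:
            return False
    return True

# C5 with vertices 0..4: any single edge is induced, two edges never are
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(is_induced_matching(adj, [(0, 1)]))          # True
print(is_induced_matching(adj, [(0, 1), (2, 3)]))  # False: edge 1-2 joins them
```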
This property does not hold for all non-bipartite graphs: C5 is an example of a graph with α∞(C5) = 2 and m_i(C5) = 1. Furthermore, observe that any tree T with θ(T) = α(T) > 2 has θ(T) > m_i(T). This is because, in order for θ(T) = m_i(T), the edges used in some minimum clique covering of T would all have to form an induced matching. But two edges that are joined by an edge cannot be in the same induced matching.
A linear-time algorithm for finding a maximum induced matching in a tree is given in [10,11]. Thus, using Theorem 4.1, one can give an algorithm that computes the cardinality of a maximum eternal independent set in a tree in linear time. We give here an alternative linear-time algorithm which is simpler and directly finds a maximum eternal independent set in a tree.
A stem in a tree is a vertex adjacent to a leaf and the height of a tree with specified root vertex r is the maximum distance from r to any leaf.
If the height of a tree T with at least two vertices is one, then the maximum eternal independent set is of size 1. Otherwise, suppose the height of the tree T is more than one. In this case, we find a root vertex r of T that is not a stem, which necessarily exists as T is not a K_{1,m} for any m ≥ 1. The root may be a leaf.
We shall build a set D that will eventually contain the vertices of a maximum eternal independent set. Pick a stem v_1 of maximum distance from r. Let w be the parent of v_1 (possibly w = r). Let v_1, . . . , v_k be all the stems that are children of w. Place each v_i in D. Remove all children and grandchildren of w from T, letting the resulting tree be T′. Proceed recursively on T′, terminating when the tree T′ has height at most one. If T′ has height at most one, then no more vertices will be added to D. (A transcription of this procedure into code is sketched below.)
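The sketch below is our own transcription of the procedure (illustrative naming; `adj` is an adjacency-set map and the input is assumed to be a tree):

```python
from collections import deque

def max_eternal_independent_set_tree(adj):
    """Greedy construction of a maximum eternal independent set of a tree,
    following the algorithm in the text. adj: vertex -> set of neighbours."""
    adj = {v: set(nb) for v, nb in adj.items()}
    D = set()
    while len(adj) >= 2:
        leaves = {v for v, nb in adj.items() if len(nb) == 1}
        stems = {v for v in adj if adj[v] & leaves}
        # root at a non-stem vertex when one exists
        root = next((v for v in adj if v not in stems), next(iter(adj)))
        parent, depth = {root: None}, {root: 0}
        queue = deque([root])
        while queue:                      # BFS for parents and depths
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v], depth[v] = u, depth[u] + 1
                    queue.append(v)
        if max(depth.values()) <= 1:      # remaining tree has height <= 1
            if not D:                     # a height-one tree contributes 1
                D.add(root)
            break
        v1 = max(stems, key=depth.get)    # deepest stem
        w = parent[v1]
        children_w = adj[w] - ({parent[w]} if parent[w] is not None else set())
        D |= children_w & stems           # all stems among w's children
        doomed = set(children_w)          # children and grandchildren of w
        for c in children_w:
            doomed |= adj[c] - {w}
        for v in doomed:
            for u in adj.pop(v):
                if u in adj:
                    adj[u].discard(v)
    return D

# Example: the path P5 has m_i(P5) = 2, and indeed |D| = 2.
p5 = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
print(max_eternal_independent_set_tree(p5))   # e.g. {2, 4}
```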
We now prove the algorithm finds a maximum eternal independent set.
Proof. When the height of T is one, α ∞ (T ) = m i (T ) = 1.
Now assume the height of T is h > 1. When h = 2, D consists of the children of r; in this case |D| = m_i(G). Let us suppose h > 2. Consider the tree T′ as described in the algorithm. Clearly a maximum eternal independent set of T − T′ consists of the k vertices v_1, . . . , v_k (none of which are leaves), since a guard on v_i can move to one of its children. Note that w is a leaf in T′. Then the eternal independent set D′ found by the algorithm in T′ is a largest eternal independent set not containing w. Therefore, D′ ∪ {v_1, . . . , v_k} is a maximum eternal independent set of T, since a guard on v_i can move to its child when attacked.
In other words, D consists of vertices labeled v i at any time in the algorithm. These vertices form an independent set; therefore no vertex ever labeled w can be part of this same independent set. No vertex labeled w can be subsequently labeled as v i , as w becomes a leaf in the tree T . If we think of the edges that guards move across in this scheme as a matching, then D consists of one endvertex from each edge in this matching. Each neighbor of a w vertex is an endvertex of an edge in this matching.
Furthermore, the root, r, cannot be part of this independent set unless it is labeled as v_1 at some point in the algorithm; otherwise a guard on r would have to move to one of its children when r is attacked, but this child was once a w vertex (and thus is adjacent to some v_i vertex that may have a guard on it).

We now show that, for a bipartite graph G with parts A and B, α∞_m(G) ≥ m(G). Proof. Recall that in the m-eternal independent set problem, we may move as many guards as needed (including the possibility of a total-switch), as long as we move the guard from the attacked vertex.
Let M = {e_i = (v_i, u_i) : i = 1, . . . , k} be a matching of maximum cardinality, so k = m(G). Since we can do total-switches from the endvertices of M in A to the endvertices of M in B and back, the endvertices of M in A form an m-eternal independent set; hence α∞_m(G) ≥ |M| = m(G).

6. Describe some graph classes for which α∞_m(G) = α(G). Well-covered graphs (i.e., graphs in which all maximal independent sets have the same cardinality) have this property, since there exists a perfect matching between the vertices in the symmetric difference of any two maximal independent sets, cf. [4]. When a vertex is attacked, there exists a maximal independent set containing a neighbor of the attacked vertex (since each vertex belongs to some maximal independent set), and there exists a perfect matching that can be switched across between the vertices in the symmetric difference of these two maximal independent sets.
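Well-coveredness is straightforward to test by brute force on small graphs. The sketch below (our illustration only; exponential time, so suitable just for toy examples) enumerates maximal independent sets and checks whether they all have the same cardinality:

```python
from itertools import combinations

def maximal_independent_sets(adj):
    """All maximal independent sets of a small graph, by brute force."""
    vs = list(adj)
    ind = [set(c) for r in range(len(vs) + 1)
           for c in combinations(vs, r)
           if all(v not in adj[u] for u, v in combinations(c, 2))]
    return [s for s in ind if not any(s < t for t in ind)]

def is_well_covered(adj):
    return len({len(s) for s in maximal_independent_sets(adj)}) == 1

c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
p3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(is_well_covered(c4))   # True: every maximal independent set has size 2
print(is_well_covered(p3))   # False: {1} and {0, 2} are both maximal
```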
Cayley graphs are another class of graphs that may have this property.
"Mathematics",
"Computer Science"
] |
A bound on massive higher spin particles
According to common lore, massive elementary higher spin particles lead to inconsistencies when coupled to gravity. However, this scenario was not completely ruled out by previous arguments. In this paper, we show that in a theory where the low energy dynamics of the gravitons are governed by the Einstein-Hilbert action, any finite number of massive elementary particles with spin more than two cannot interact with gravitons, even classically, in a way that preserves causality. This is achieved in flat spacetime by studying eikonal scattering of higher spin particles in more than three spacetime dimensions. Our argument is insensitive to the physics above the effective cut-off scale and closes certain loopholes in previous arguments. Furthermore, it applies to higher spin particles even if they do not contribute to tree-level graviton scattering as a consequence of being charged under a global symmetry such as ℤ2. We derive analogous bounds in anti-de Sitter space-time from analyticity properties of correlators of the dual CFT in the Regge limit. We also argue that an infinite tower of fine-tuned higher spin particles can still be consistent with causality. However, they necessarily affect the dynamics of gravitons at an energy scale comparable to the mass of the lightest higher spin particle. Finally, we apply the bound in de Sitter to impose restrictions on the structure of three-point functions in the squeezed limit of the scalar curvature perturbation produced during inflation.
In one of his seminal papers [1], Weinberg showed that general properties of the S-matrix allow for the presence of the graviton. Not only that: the soft theorem dictates that at low energies gravitons must interact universally with all particles, which is the manifestation of the equivalence principle in QFT. This remarkable fact has many far-reaching consequences for theories with higher spin particles.
Even in the early days of quantum field theory (QFT), it was known that there are restrictions on particles with spin J > 2 in flat spacetime. For example, Lorentz invariance of the S-matrix requires that massless particles interacting with gravity in flat spacetime cannot have spin more than two [1][2][3]. Moreover, folklore has it that any finite number of massive elementary higher spin particles, however fine-tuned, cannot interact with gravity in a consistent way. There is ample evidence suggestive of a strict bound on massive higher spin particles, at least in flat spacetime in dimensions D ≥ 4, from tree-level unitarity and asymptotic causality [4][5][6][7][8][9]; however, to our knowledge there is no concrete argument which completely rules out a finite number of massive particles with spin J > 2.
Most notably, it was argued in [9] that in a theory with a finite number of massive particles with spin J > 2, unless each higher spin particle is charged under a global symmetry such as Z_2, they will contribute to eikonal scattering of particles, even those with low spin (J ≤ 2), in a way that violates asymptotic causality in flat spacetime. The same statement is true even in anti-de Sitter (AdS) spacetime, where the global symmetries of higher spin particles are required by the chaos growth bound of the dual CFT [10]. In addition, there is no known string compactification which leads to particles with spin J > 2 and masses M ≪ M_s in flat spacetime, where M_s is the string scale. Of course, it is well known that higher spin particles do exist in AdS, but they always come in an infinite tower, and these theories become strongly interacting at low energies [11,12]. All of these observations indicate that there are universal bounds on theories with higher spin massive particles. In this paper, we will prove such a bound from causality. We will show that any finite number of massive elementary particles with spin J > 2, however fine-tuned, cannot interact with gravitons in flat or AdS spacetimes (in D ≥ 4 dimensions) in a way that is consistent with the QFT equivalence principle and preserves causality. In particular, we will demonstrate that the three-point interaction J-J-graviton must vanish for J > 2. However, this is one interaction that no particle can avoid due to the equivalence principle, implying that elementary particles with spin J > 2 cannot exist.
For massless higher spin particles, the inconsistencies are even more apparent. The tension between Lorentz invariance of the S-matrix and the existence of massless particles with spin J > 2 was already visible in [1]. Subsequently, the same tension was shown to exist for massless fermions with spin J > 3/2 [13,14]. A concrete manifestation of this tension is an elegant theorem due to Weinberg and Witten which states that any massless particle with spin J > 1 cannot possess a Lorentz covariant and gauge invariant energy-momentum tensor [2]. Of course, this theorem does not prohibit the existence of gravitons; rather, it implies that the graviton must be fundamental. More recently, a generalization of the Weinberg-Witten theorem has been presented by Porrati, which states that massless particles with spin J > 2 cannot be minimally coupled to the graviton in flat spacetime [3]. Both of these theorems are completely consistent with various other observations made about interactions of massless higher spin particles in flat spacetime (see [16][17][18][19][20][21] and references therein). Furthermore, the generalized Weinberg-Witten theorem and the QFT equivalence principle are sufficient to completely rule out massless particles with spin J > 2 in flat spacetime [2,3]. The basic argument is rather simple. The Weinberg-Witten theorem and its generalization by Porrati only allow non-minimal coupling between massless particles with spin J > 2 and the graviton, whereas it is well known that particles with low spin can couple minimally to the graviton. Therefore, the QFT equivalence principle requires that massless higher spin particles, if they exist, must couple minimally to the graviton at low energies, which directly contradicts the Weinberg-Witten/Porrati theorem.
Any well behaved Lorentzian QFT must also be unitary and causal. Lorentz invariance alone was sufficient to rule out massless higher spin particles in flat spacetime, whereas massive elementary particles with spin J > 2 do not lead to any apparent contradiction with Lorentz invariance in flat spacetime. However, any such particle, if present, must interact with gravitons. The argument presented in [9] implies that a finite number of higher spin particles cannot be exchanged in any tree-level scattering. However, this restriction is not sufficient to rule out massive higher spin particles; rather, it implies that each massive higher spin particle must be charged under Z_2 or some other global symmetry. On the other hand, the equivalence principle requires the coupling between a single graviton and two spin-J particles to be non-vanishing. By considering an eikonal scattering experiment between scalars and elementary higher spin particles with spin J and mass m in the regime |s| ≫ |t|, m, where s and t are the Mandelstam variables, we will show that any such coupling between the higher spin particle and the graviton in flat spacetime leads to a violation of asymptotic causality. This is accomplished by extending the argument of [9] to the scattering of higher spin particles, which requires the phase shift to be non-negative for all choices of polarization of the external particles.
A similar high energy scattering experiment can be designed in AdS to rule out elementary massive higher spin particles. However, we will take a holographic route, which has several advantages. We consider a class of large-N CFTs in d ≥ 3 dimensions with a sparse spectrum. The sparse spectrum condition, to be more precise, implies that the lightest single trace primary operator with spin J > 2 has dimension ∆_gap ≫ 1. It was first conjectured in [23] that this class of CFTs admits a universal holographic dual description with a low energy description in terms of Einstein gravity coupled to matter fields. The conjecture was based on the observation that there is a one-to-one correspondence between scalar effective field theories in AdS and perturbative solutions of CFT crossing equations in the 1/N expansion. The scalar version of this conjecture was further substantiated in [24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41] by using the conformal bootstrap. More recently, the conjecture has been completely proven at the linearized level even for spinning operators, including the stress tensor [42][43][44][45][46][47]. In the second half of the paper, we will exploit this connection to constrain massive higher spin particles in AdS by studying large-N CFTs with a sparse spectrum. To this end, we introduce a new non-local operator capturing the contributions to the Regge limit of the OPE of local operators. This operator is expressed as an integral of a local operator over a ball times a null ray. It is obtained by generalizing the Regge OPE introduced in [47] to non-integer spins, resulting in an operator that is more naturally suited for parametrizing the contribution of Regge trajectories, which require analytic continuation in both spin and scaling dimension.
On the holographic CFT side, we will ask the dual question: is it possible to add an extra higher spin single trace primary operator with J > 2 and scaling dimension ∆ ≪ ∆_gap and still get a consistent CFT? A version of this question has already been answered by a theorem in CFT that rules out any finite number of higher spin conserved currents [48][49][50][51], which is the analog of the Weinberg-Witten theorem in AdS. However, ruling out massive higher spin particles in AdS requires a generalization of this theorem for non-conserved single trace primary operators of holographic CFTs. The chaos (growth) bound of Maldacena, Shenker, and Stanford [10] partially achieves this by not allowing any finite number of higher spin single trace primary operators to contribute as exchange operators in CFT four-point functions in the Regge limit. However, this restriction does not rule out the existence of such operators; rather, it prohibits these higher spin operators from appearing in the operator product expansion (OPE) of certain operators. On the other hand, causality (the chaos sign bound) imposes stronger constraints on non-conserved single trace primary operators. In particular, by using the holographic null energy condition (HNEC) [45,47] applied to correlators with external higher spin operators, we will show that massive higher spin fields in AdS (in D ≥ 4 dimensions) lead to causality violation in the dual CFT. This implies that any finite number of massive elementary particles with spin J > 2 in AdS cannot be embedded in a well behaved UV theory in which the dynamics of gravitons at low energies is described by the Einstein-Hilbert action.
One advantage of the holographic approach is that it also provides a possible solution to the causality problem. From the dual CFT side, we will argue that in a theory where the dynamics of gravitons is described by the Einstein-Hilbert action at energy scales E ≪ Λ (Λ can be the string scale M_s), a single elementary particle with spin J > 2 and mass m ≪ Λ violates causality unless the particle is accompanied by an infinite tower of (finely tuned) higher spin elementary particles with mass ∼ m. Furthermore, causality also requires that these new higher spin particles (or at least an infinite subset of them) must be able to decay into two gravitons and hence modify the dynamics of gravitons at energy scales E ∼ m. So, one can have a causal theory without altering the low energy behavior of gravity only if all the higher spin particles are heavier than the cut-off scale Λ.
Figure 1. Spectrum of elementary particles with spin J > 2 in a theory where the dynamics of gravitons is described by the Einstein-Hilbert action at energy scales E ≪ Λ. The cut-off scale Λ can be the string scale, and hence there can be an infinite tower of higher spin particles above Λ. Figure (a) represents a scenario that also contains a finite number of higher spin particles below the cut-off and hence violates causality. Causality can only be restored if these particles are accompanied by an infinite tower of higher spin particles with comparable masses, which is shown in figure (b). This necessarily brings down the cut-off scale to Λ_new = m, where m is the mass of the lightest higher spin particle.

Causality of CFT four-point functions in the lightcone limit also places nontrivial constraints on higher spin primary operators. In particular, it generalizes the Maldacena-Zhiboedov theorem of d = 3 [48] to higher dimensions by ruling out a finite number of higher spin conserved currents [50]. The advantage of the lightcone limit is that the constraints are valid for all CFTs, both holographic and non-holographic. However, the argument of [50] is not applicable when higher spin conserved currents do not contribute to generic CFT four-point functions as exchange operators. We will present an argument in the lightcone limit that closes this loophole by ruling out higher spin conserved currents even when none of the operators are charged under it.4 For holographic CFTs, this completely rules out a finite number of massless higher spin particles in AdS in D ≥ 4 dimensions.
The bound on higher spin particles has a natural application in inflation. If higher spin particles are present during inflation, they produce distinct signatures on the late time three-point function of the scalar curvature perturbation in the squeezed limit [52]. The bounds on higher spin particles in flat space and in AdS were obtained by studying local high energy scattering, which is insensitive to the spacetime curvature. This strongly suggests that the same bound should hold even in de Sitter space.5 Our bound, when applied in de Sitter, immediately implies that contributions of higher spins to the three-point function of the scalar curvature perturbation in the squeezed limit must be Boltzmann suppressed, ∼ e^{−2πΛ/H}, where H is the Hubble scale. Therefore, if the higher spin contributions are detected in future experiments, then the scale of new physics must be Λ ∼ H. This necessarily requires the presence of not one but an infinite tower of higher spin particles with spins J > 2 and masses comparable to the Hubble scale. Any such detection can be interpreted as evidence in favor of string theory with the string scale comparable to the Hubble scale.

4 We should note that we have not ruled out an unlikely scenario in which the OPE coefficients conspire in a non-trivial way to cancel the causality violating contributions. Three-point functions of conserved currents are heavily constrained by conformal invariance and hence this scenario is rather improbable.

5 This argument parallels the argument made by Cordova, Maldacena, and Turiaci in [53]. The same point of view was also adopted in our previous paper [47].

The rest of the paper is organized as follows. In section 2, we present an S-matrix based argument to show that massive elementary particles with spin J > 2 cannot interact with gravitons in a way that preserves asymptotic causality. We derive the same bounds in AdS from analyticity properties of correlators of the dual CFT in section 3. In section 4, we argue that the only way one can restore causality is by adding an infinite tower of massive higher spin particles. In addition, we also discuss why stringy states in classical string theory are consistent with causality. Finally, in section 5, we apply our bound in de Sitter to constrain the squeezed limit three-point functions of scalar curvature perturbations produced during inflation.
Higher spin fields in flat spacetime
In this section, we explicitly show that interactions of higher spin particles with gravity lead to causality violation. Eikonal scattering has been used in the literature [9, 54–58] to impose constraints on interactions of particles with spin. When the center of mass energy is large and the transferred momentum is small, the scattering amplitude is captured by the eikonal approximation. Focusing on a specific exchange particle for now, the scattering amplitude is given by a sum of ladder diagrams. These diagrams can be resummed (see figure 2) and, as a result, introduce a phase shift in the scattering amplitude [59]. This phase shift produces a Shapiro time delay [60] that particles experience [9]. Asymptotic causality in flat spacetime requires the time delay, and hence the phase shift, to be non-negative [9,61]. Moreover, positivity of the phase shift imposes restrictions on the tree-level exchange diagrams, which are the building blocks of ladder diagrams, constraining three-point couplings between particles. This method has been utilized to constrain three-point interactions between gravitons, massive spin-2 particles, and massless higher spin particles [9,54,55]. Here we apply the eikonal scattering method to external massive and massless elementary particles with spin J > 2.
We will briefly review eikonal scattering in order to explicitly relate the phase shift to the three-point interactions between elementary particles.

Figure 3. Eikonal scattering of particles. In this highly boosted kinematics, particles are moving almost in the null directions such that the center of mass energy is large.

We will take two of the external particles to be massive or massless higher spin particles (J > 2) and the other two particles to be scalars. The setup is shown in figure 3, where particles 1 and 3 are the higher spin particles, whereas particles 2 and 4 are scalars. We will then use on-shell methods to write down the general three-point interaction between higher spin elementary particles and gravitons [62]. This allows us to derive the most general form of the amplitude in the eikonal limit. Positivity of the phase shift for all choices of polarization tensors of the external particles constrains the coefficients of three-point vertices. In particular, for both massive and massless particles with spin J > 2 in spacetime dimensions D ≥ 4, we find that the three-point interaction J-J-graviton must be zero. However, this is one interaction that no particle can avoid due to the equivalence principle, implying that elementary particles with spin J > 2 cannot exist.
Eikonal scattering
Let us consider 2 → 2 scattering of particles in spacetime dimensions D ≥ 4, as shown in figure 3. Coordinates are written in R^{1,D−1} with the lightcone metric ds² = −du dv + dx_⊥². Denoting the momentum of particle i by p_i, with i labeling particles 1 through 4, the Mandelstam variables are given by s = −(p_1 + p_2)² and t = −(p_1 − p_3)² = −q², where q is the momentum of the exchanged particle, which in the eikonal limit has components only in the directions transverse to the propagation of the external particles. The tree-level amplitude consists of products of three-point amplitudes,

M_tree(s, q) = Σ_I C_{13I}(q) C_{24I}(q) / (q² + m_I²) ,   (2.3)

where the sum is over all of the states of the exchanged particles with mass m_I. In the above expression, C_{13I} and C_{24I} are on-shell three-point amplitudes, which are generally functions of the transferred momentum q, as well as of the polarization tensors and the center of mass variables. In highly boosted kinematics, particles are moving almost in the null directions u and v, with momenta P_u and P_v respectively. The center of mass energy s is large with respect to other dimensionful quantities such as the particle masses; in particular, we have s ≫ |t| = q². The total scattering amplitude is given by the sum of all ladder diagrams in the t-channel, which exponentiates when it is expressed in terms of the impact parameter b, which has components only along the transverse plane:

δ(s, b) = (1/2s) ∫ d^{D−2}q/(2π)^{D−2} e^{i q·b} M_tree(s, q) .

Before we proceed, let us comment more on the exponentiation, since it plays a central role in the positivity argument. We can interpret the phase shift as the Shapiro time delay only when it exponentiates in the eikonal limit. However, it is known that the eikonal exponentiation fails for the exchange of particles with spin J < 2 [63][64][65]. It is also not obvious that the tree-level amplitude must exponentiate in the eikonal limit for the exchange of particles with spin J ≥ 2. A physical argument was presented in [9] which suggests that for higher spin exchanges it is possible to get a final amplitude that is the exponential of the tree-level exchange diagram. First, let us think of particle 2 as the source of a shockwave and particle 1 as a probe particle traveling in that background. At tree level, the amplitude is given by 1 + iδ, where we ensure that δ ≪ 1 by staying in a weakly coupled regime. Let us then send N such shockwaves, so that we can treat them as individual shocks; the final amplitude, in the limit δ → 0, N → ∞ with Nδ fixed, is approximately given by (1 + iδ)^N ≈ e^{iNδ}. This approximation is valid only if we can view the N scattering processes as independent events. Moreover, we want to be in the weakly coupled regime. Both of these conditions can only be satisfied if δ grows with s, which is true for the exchange of particles with spin J ≥ 2 [9]. Therefore, for higher spin exchanges, we can interpret δ (or rather N times δ) as the Shapiro time delay of particle 1.
There is one more caveat. The exponentiation also depends on the assumption that δ is the same for each of the N processes; in other words, the polarization of particle 3 is the complex conjugate of that of particle 1. In general, particle 3 can have any polarization; however, we can fix the polarization of particle 3 by replacing particle 1 with a coherent state of particles with a fixed polarization. Since we are in the weakly coupled regime, we can make the mean occupation number large without making δ large. This allows us to fix the polarization of particle 3 to be the complex conjugate of that of particle 1 because of Bose enhancement (see [9] for a detailed discussion).
Let us end this discussion by noting that the N-shock interpretation of the eikonal process is also consistent with classical gravity calculations. For example, the Shapiro time delay as obtained in GR from shockwave geometries is the same as the time delay obtained from the sum of all ladder diagrams for graviton exchanges -which indicates that these are the only important diagrams in the eikonal limit. Thus, it is reasonable to expect that the exponentiation of the tree-level diagram correctly captures the eikonal process.
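As a quick numerical sanity check of this limit (illustrative values of δ and N, not tied to any physical process), one can confirm that (1 + iδ)^N approaches e^{iNδ} as δ → 0 with Nδ held fixed:

```python
import numpy as np

# N independent weak shocks, each contributing a tiny phase delta,
# with the total phase N*delta held fixed (illustrative values)
total_phase = 2.0
for delta in [1e-2, 1e-3, 1e-4]:
    N = int(round(total_phase / delta))
    exact = (1 + 1j * delta) ** N        # product of N single-shock factors
    eikonal = np.exp(1j * N * delta)     # exponentiated approximation
    print(delta, abs(exact - eikonal))   # error ~ N*delta^2/2, vanishing
```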
Positivity. When δ(s, b) grows with s, we can trust the eikonal exponentiation, which allows us to relate the phase shift to a time delay. In particular, for a particle moving in the u direction with momentum P_u > 0, the phase shift δ(s, b) is related to the time delay of the particle by δ(s, b) = P_u ∆v. (2.6) Asymptotic causality in flat space requires that particles do not experience a time advance even when they are interacting [61]. Therefore ∆v ≥ 0, implying that the phase shift must be non-negative as well. So far our discussion is very general, and it is applicable even when multiple exchanges contribute to the tree-level scattering amplitude. From now on, let us restrict to the special case of massless exchanges.9 Using the tree-level amplitude (2.3), we can write

δ(s, b) = (1/2s) ∫ d^{D−2}q/(2π)^{D−2} e^{i q·b} Σ_I C_{13I}(q) C_{24I}(q) / q² ,   (2.7)

which must be non-negative. Note that ∂²_b annihilates 1/|b|^{D−4}, which is why we can consider the exchange particle to be on-shell.10

9 For non-zero m_I, the q integral yields a Bessel-K function instead.

10 The same can be seen from the choice of the integration contour, as described in more detail in [9]. By rotating the contour of integration in q, we cross the pole at q² = 0, and hence it is sufficient to consider only three-point functions on-shell.
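The 1/|b|^{D−4} falloff quoted below (2.7) can be made explicit in an example. The following sympy sketch (our illustration) performs the transverse Fourier transform of the massless propagator 1/q² for D = 5, where there are three transverse dimensions:

```python
import sympy as sp

q, b = sp.symbols('q b', positive=True)

# In D = 5 there are three transverse dimensions.  After the angular
# integrals, the Fourier transform of the massless propagator 1/q^2 is
#   int d^3q/(2pi)^3 e^{i q.b} / q^2 = (1/(2 pi^2 b)) int_0^oo sin(q b)/q dq
radial = sp.integrate(sp.sin(q * b) / q, (q, 0, sp.oo))    # = pi/2
profile = sp.simplify(radial / (2 * sp.pi**2 * b))
print(profile)   # 1/(4*pi*b), i.e. delta(s, b) ~ s / b^{D-4} for D = 5
```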
Higher spin-graviton couplings
There are Lagrangian formulations of massive higher spin fields in flat spacetime, as well as in AdS [66][67][68]. However, in this section, we present a more general approach that does not require knowledge of the Lagrangian. We write down all possible local three-point interactions between two higher spin elementary particles with spin J and a graviton. This three-point interaction is of importance for several reasons. First, this is one interaction that no particle can avoid because of the equivalence principle; therefore the vanishing of this three-point interaction is sufficient to rule out the existence of such higher spin particles. Moreover, as we will discuss later, this three-point interaction is sufficient to compute the full eikonal scattering amplitude between a scalar and a higher spin particle.
We start with the massive case and consider the massless case later on. Here we use the same method used in [54,62] for deriving the most general J-J-2 interaction. The momenta of the higher spin particles are p_1, p_3 and the graviton has momentum q (see figure 4). Conservation and the on-shell conditions imply p_1 + p_3 + q = 0, p_1² = p_3² = −m², and q² = 0, where m is the mass of the higher spin particle. It is sufficient for us to consider polarization tensors which are made out of null and transverse polarization vectors z_1, z_3, z, satisfying z_i · z_i = 0 and z_i · p_i = 0. Transverse symmetric polarization tensors can be constructed from null and transverse polarization vectors by substituting z_i^{μ_1} z_i^{μ_2} · · · z_i^{μ_s} → E_i^{μ_1 μ_2 ··· μ_s} − traces. In addition, we need to impose gauge invariance for the graviton. This means that each on-shell vertex should be invariant under z → z + αq, where α is an arbitrary number. Using (2.8) and (2.9), we can write down all vertices in terms of only five independent building blocks. In order to list all possible vertices for the interaction J-J-2, we must symmetrize the on-shell amplitudes under 1 ↔ 3. We can then construct the most general form of the on-shell three-point amplitude from these building blocks. In particular, for J ≥ 2, we can write three distinct sets of vertices. The first set contains J + 1 independent structures, all of which are proportional to (z · p_3)² (2.11). The second set contains J independent structures which are proportional to (z · p_3) (2.12). Finally, the third set consists of J − 1 independent structures which do not contain (z · p_3) (2.13). In total, there are 3J independent structures that contribute to the on-shell three-point amplitude of two higher spin particles with mass m and spin J and a single graviton. Therefore, the most general form of the three-point amplitude for J ≥ 1 is given by

C_{JJ2} = Σ_{n=1}^{3J} a_n A_n .   (2.14)

Note that 3J is also the number of independent structures in the three-point functions on the CFT side after imposing permutation symmetry between operators 1, 3 and taking conservation of the stress tensor into account.
Eikonal kinematics
We now study the eikonal scattering of higher spin particles: 1, 2 → 3, 4, where 1 and 3 label the massive higher spin particles with mass m and spin J, and 2, 4 label scalars of mass m_s (see figure 3). Let us specify the details of the momenta and polarization tensors. In the eikonal limit, the momenta of the particles are parametrized as in (2.15).

12 Here the propagators of the gravitons are canonically normalized to 1. Therefore we need explicit G_N dependence in (2.14), since it couples to the graviton.

13 Our convention is p^µ = (p^u, p^v, p).
where P_u, P̄_u, P_v, P̄_v > 0 and p_1^µ − p_3^µ ≡ q is the transferred momentum of the exchanged particle, which is spacelike. The eikonal limit is defined as P_u, P_v ≫ |q|, m_i. In this limit P_u ≈ P̄_u, P_v ≈ P̄_v, and the Mandelstam variable s is given by s = −(p_1 + p_2)² ≈ P_u P_v. Moreover, for our setup we have m_1 = m_3 = m and m_2 = m_4 = m_s.
Massless particles have only transverse polarizations, but massive higher spin particles can have both transverse and longitudinal polarizations. General polarization tensors can be constructed using transverse polarization vectors ε_{T,λ}(p_i), which satisfy ε_{T,λ}(p_i) · p_i = 0, and a longitudinal vector ε_L(p_i), where the vectors e^µ_λ ≡ (0, 0, e_λ) form a complete orthonormal basis in the transverse directions x_⊥. The longitudinal vectors do not satisfy (2.9) because ε_L · ε_L ≠ 0. However, they still form a basis for constructing symmetric traceless polarization tensors which are orthogonal to the corresponding momentum.
The polarization tensors constructed from (2.16) are further distinguished by their spin under the SO(D − 2) rotation group which preserves the longitudinal polarization ε_L of each particle. We denote this basis of polarization tensors as E_j^{µ_1 µ_2 ··· µ_J}(p_i), where j labels the spin under SO(D − 2). These tensors are basically organized by the number of transverse polarization vectors they contain. The most general polarization tensor for a particle with spin J can now be decomposed in this basis with arbitrary complex coefficients r_j. However, in order to show that the higher spin particles cannot interact with gravity in a consistent way, we need only consider the subspace spanned by E_J, E_{J−1}, and E_{J−2} (2.18), where, after contractions with other tensors, we perform the following substitution: e^{i_1}_{λ_1} e^{i_2}_{λ_2} · · · e^{i_j}_{λ_j} → e^{i_1 ··· i_j}, in which e^{i_1 ··· i_j} is a transverse symmetric traceless tensor.14 One can easily continue this construction to generate the remaining polarization tensors; one should add more longitudinal polarization vectors and subtract traces in order to make them traceless.

14 In other words, whenever we see a combination of transverse polarization vectors ε^{µ_1}_{T,λ_1} ε^{µ_2}_{T,λ_2} · · · ε^{µ_S}_{T,λ_S}, we will replace it by either ε^{µ_1}_{T,+} ε^{µ_2}_{T,+} · · · ε^{µ_S}_{T,+} ± ε^{µ_1}_{T,−} ε^{µ_2}_{T,−} · · · ε^{µ_S}_{T,−}, where e^µ_+ ≡ (0, 0, 1, i, 0) and e^µ_− ≡ (0, 0, 1, −i, 0). For us, it is sufficient to restrict to this set of polarization tensors.
Bounds on coefficients
We now have all the tools we need to utilize the positivity condition (2.7) in the eikonal scattering of a massive higher spin particle and a scalar. The expression (2.7) requires knowledge of the contributions of all the particles that can be exchanged. However, as we explain next, in the eikonal limit the leading contribution is always due to the graviton exchange. Let us explain this by discussing all possible exchanges:

• Graviton exchange: since gravitons couple to all particles, the scattering amplitude in the eikonal limit will always receive contributions from graviton exchanges. In particular, in the eikonal limit, the contribution of graviton exchange to the phase shift goes as δ(s, b) ∼ s.
• Exchange of particles with spin J < 2: these exchanges are always subleading in the eikonal limit and hence can be ignored.15

• Exchange of higher spin particles J > 2: in the eikonal limit, the exchange of a particle with spin J can produce a phase shift δ(s, b) ∼ s^{J−1}. However, it was shown in [9] that a phase shift that grows faster than s leads to additional causality violation. Therefore, if higher spin particles are present, their interactions must be tuned in such a way that they cannot be exchanged in eikonal scattering. This happens naturally when each higher spin particle is individually charged under a global symmetry such as Z_2. We should note that it is possible to have a scenario in which an infinite tower of higher spin particles can be exchanged without violating causality. However, we will restrict to the case where only a finite number of higher spin particles are present. At this point, let us also note that in AdS, the exchange of a finite number of higher spin particles is ruled out by the chaos growth bound of the dual CFT.
• Exchange of massive spin-2 particles: massive spin-2 particles can be present in nature. However, the exchange of these particles, as explained in [9], cannot fix the causality violation caused by the graviton exchange. Therefore, without any loss of generality, we can assume that the scalar particles do not interact with any massive spin-2 particle. For now, this will allow us to ignore massive spin-2 exchanges. Let us note that it is not obvious that the argument of [9] about massive spin-2 exchanges necessarily holds for scattering of higher spin particles. So, at the end of this section, we will present an interference based argument to explain why even an infinite tower of massive spin-2 exchanges cannot restore causality.
In summary, in the eikonal limit, it is sufficient to consider only the graviton exchange. In fact, we can safely assume that the scalar interacts with everything, even with itself, only via gravity. Let us also note that we are studying eikonal scattering of higher spin particles with scalars only for simplicity. The calculations, as well as the rest of the arguments, are almost identical even if we replace the scalar by a graviton. In the graviton case, the argument of [9] about massive spin-2 exchanges holds; this implies that the presence of massive spin-2 particles will not change our final bounds.

15 We have mentioned before that the eikonal exponentiation fails for the exchange of particles with spin J < 2. However, we can still ignore them because the exchange of lower spin particles cannot compete with the graviton exchange in the eikonal limit.
We now use (2.7) to calculate the phase shift, where C_{13I} is given by equation (2.14). For scalar-scalar-graviton there is only one vertex, proportional to (z · p)². Consequently, the sum in (2.7) is over the polarizations of the exchanged graviton. In the eikonal limit, this sum receives a large contribution from only one specific intermediate state, corresponding to the polarization tensor of the exchanged graviton appearing in C_{13I} of the form z_v z_v and the polarization tensor appearing in C_{I24} of the form z_u z_u.16 As discussed earlier, if δ(s, b) grows with s, causality requires δ(s, b) ≥ 0, a condition which must hold independently of the polarization tensors we choose for our external particles. In particular, in the basis E, δ(s, b) can be written as

δ(s, b) = E† K(b) E ,   (2.23)

where K is a Hermitian matrix encoding the eikonal amplitude in terms of the structures written in (2.14).17 Causality then requires K to be a positive semi-definite matrix for any b. We sketch the argument for constraining three-point interactions here and leave the details to appendices A and B. First, let us discuss D > 4.18 We start with the general expressions for on-shell three-point amplitudes. The polarization tensors for both particles 1 and 3 are chosen to be in the subspace spanned by E_J, E_{J−1} and E_{J−2}:

E = r_J E_J + r_{J−1} E_{J−1} + r_{J−2} E_{J−2} ,   (2.24)

where r_J, r_{J−1} and r_{J−2} are real numbers. Using eikonal scattering, we organize the phase shift in the small-b limit in terms of the highest negative powers of the impact parameter b. We start by setting r_{J−2} = 0. We then demand K(b) to have non-negative eigenvalues order by order in 1/b for transverse polarization e_⊕ (or e_⊗) and for all directions of the impact parameter b.19 This imposes the constraints (2.25), where a_i is defined in (2.14). In other words, we find that all vertices with more than two derivatives must vanish. Moreover, the coefficients a_1, a_{J+2}, a_{2J+2} are related, and the interaction C_{JJ2} can be reduced to the vertex (2.26). When J = 2, no further constraints can be obtained using any other choice of polarization tensors. On the other hand, for J > 2 we can use the polarization tensor E_{J−2} (which always exists for J ≥ 2), yielding the additional constraint (2.27), implying that C_{JJ2} = 0. Therefore, there is no consistent way of coupling higher spin elementary particles with gravity in flat spacetime in D > 4 dimensions.20

16 In the eikonal limit, the sum over the polarization of the graviton, in general, is given by Σ_I ε_I^{µν}(q) (ε_I^{ρσ}(q))* ∼ ½ (η^{µρ} η^{νσ} + η^{νρ} η^{µσ}). (2.22)

17 This assumes the polarization tensors are properly normalized, i.e. E†_i E_i = 1; otherwise (2.23) should be divided by E†_1 E_3.

18 D = 4 is more subtle for various reasons and we will discuss it separately.

19 The transverse polarizations e_⊗, e_⊕ are given explicitly in appendix A.
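Positive semi-definiteness of K is equivalent to non-negativity of its eigenvalues and is trivial to test numerically. In the sketch below (our illustration; the matrices are stand-ins, not the actual K(b) computed in appendix B), a single negative eigenvalue signals a choice of polarization with δ < 0:

```python
import numpy as np

def is_positive_semidefinite(K, tol=1e-12):
    """delta = E^dagger K E >= 0 for all E  <=>  all eigenvalues of K >= 0."""
    K = np.asarray(K)
    assert np.allclose(K, K.conj().T), "K must be Hermitian"
    return np.linalg.eigvalsh(K).min() >= -tol

# stand-in 3x3 Hermitian matrices in the basis (E_J, E_{J-1}, E_{J-2})
K_good = np.array([[2.0, 0.3j, 0.0],
                   [-0.3j, 1.0, 0.0],
                   [0.0, 0.0, 0.5]])
K_bad = K_good.copy()
K_bad[2, 2] = -0.5                        # one negative direction suffices
print(is_positive_semidefinite(K_good))  # True
print(is_positive_semidefinite(K_bad))   # False: some polarization has delta < 0
```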
D = 4
The D = 4 case is special for several reasons. First of all, the 3J structures of the on-shell three-point amplitude of two higher spin particles with mass m and spin J and a single graviton are not independent in D = 4. These structures are built out of 5 vectors; however, in D = 4, any 5 vectors are necessarily linearly dependent. In particular, one can show that there is a linear relation (2.28) among the building blocks of on-shell three-point amplitudes. This relation implies that the structures in the set (2.13) in D = 4 are not independent, since they can be written as structures from the sets (2.11) and (2.12). Therefore, for spin J in D = 4, there are 2J + 1 independent structures, which is in agreement with the number of independent structures in the CFT three-point function of the stress tensor and two spin-J non-conserved primary operators. The D = 4 case is special for one more reason: there are parity odd structures for any spin J. In order to list all possible parity odd vertices for the interaction J-J-2, we introduce a building block B (2.29), involving the Levi-Civita tensor, that does not preserve parity. The parity odd on-shell three-point amplitude can be constructed using this building block.
In particular, we can write two distinct sets of vertices with B. The first set contains J independent structures (2.30).

20 There are parity odd structures in D = 5 for massive particles of any spin. As we show in appendix C, these interactions also violate causality for J > 2 as well as J ≤ 2.
The second set contains J − 1 independent structures (2.31). In D = 4, there is another parity odd structure which is not related to the above structures and hence should be considered independent (2.32).21 Therefore, the most general form of the three-point amplitude for J ≥ 1 is given by the combination (2.33). We can again use the polarization tensors (2.18) to derive constraints. However, for D = 4 the setup of this section is not adequate to completely rule out particles with J > 2: in D = 4, the transverse space is only two-dimensional and therefore does not provide enough freedom to derive optimal bounds. In particular, we find that a specific non-minimal coupling is consistent with the positivity of the phase shift. We eliminate this remaining non-minimal coupling by considering interference between the graviton and the higher spin particle.
In D = 4, the use of the polarization tensors (2.18) leads to the following bounds: the parity odd couplings vanish, and a_2, · · · , a_{2J+1} are fixed by a_1 (see (B.15)). The same set of bounds can also be obtained by using a simple null polarization vector (2.34), where the transverse and longitudinal vectors are defined in (2.16) and the vector x̂ is given by x̂ = (0, 0, 1, 0). The phase shift in D = 4 is given by (2.35), where L is the IR regulator. Introduction of the IR regulator is necessary because of the presence of IR divergences in D = 4. Using the polarization (2.34), we obtain (2.36), where cos θ = b̂ · x̂. The coefficients f_n and f̃_n are linear combinations of the parity even and parity odd coupling constants, respectively. Requiring the phase shift to be positive order by order in 1/b in the limit b ≪ 1/m imposes the condition f_n = f̃_n = 0. This implies that all the parity odd couplings must vanish and all the parity even couplings are completely fixed once we specify a_1 (the full set of constraints for spin J is shown in (B.15)). Therefore, positivity of the phase shift (2.36) is consistent with a specific non-minimal coupling of higher spin particles in D = 4. In order to rule out this specific interaction, we now consider interference between the graviton and the higher spin particle.

21 We would like to thank J. Bonifacio for pointing this out.
Bound from interference. We now consider eikonal scattering of gravitons and massive higher spin particles: 1, 2 → 3, 4. In this setup, 1 and 3 are linear combinations of the massive higher spin particle X and the graviton: αh + βX and α′h + β′X respectively, where α, α′, β, β′ are arbitrary real coefficients, while 2 and 4 are a fixed combination of X and the graviton: h + X. We will treat 2 as the source and 1 as the probe (see figure 5). This setup is very similar to the setup of [55].
Positivity of the phase shift can now be expressed as positive semi-definiteness of the 2 × 2 matrix with entries δ_hh, δ_hX, δ_Xh, δ_XX, where δ_Xh represents the phase shift when particle 1 is a higher spin particle of mass m and spin J and particle 3 is a graviton.22 The above condition can also be restated as an interference bound,

|δ_Xh|² ≤ δ_hh δ_XX ,

where we have used the fact that δ_Xh = δ*_hX. In the eikonal limit, the dominant contribution to both δ_hh and δ_XX comes from the graviton exchange, and hence δ_hh, δ_XX ∼ s, where s is the Mandelstam variable. Therefore, asymptotic causality requires that δ_Xh should not grow faster than s.

22 A similar notation is used for the other elements of the phase-shift matrix.
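The content of this bound can be seen in a toy scaling model (our own parametrization, not the actual amplitudes): take δ_hh = δ_XX ∝ s and let an X-exchange contribution give δ_Xh ∝ s^{J−1}; for J > 2 the inequality |δ_Xh|² ≤ δ_hh δ_XX must fail at large s:

```python
import numpy as np

# toy scaling model: delta_hh = delta_XX = c1*s, delta_Xh = c2*s**(J-1)
c1, c2, J = 1.0, 1e-10, 3          # illustrative coefficients only
for s in [1e4, 1e8, 1e12]:
    d_hh, d_Xh = c1 * s, c2 * s ** (J - 1)
    ok = abs(d_Xh) ** 2 <= d_hh * d_hh
    # equivalently: the 2x2 phase-shift matrix stays positive semi-definite
    M = np.array([[d_hh, d_Xh], [np.conj(d_Xh), d_hh]])
    print(s, ok, np.linalg.eigvalsh(M).min() >= 0)   # fails at large s
```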
Let us now compute δ_Xh for a specific configuration. The momenta of the particles are again given by (2.15) with the appropriate masses. Moreover, we use null polarization vectors for the various particles built from x̂ = (0, 0, 1, 0) and ŷ = (0, 0, 0, 1). In the eikonal limit, the dominant contribution to δ_Xh comes from X-exchange. In particular, after imposing the constraints (B.15), we find that the resulting phase shift, which depends on the direction of the impact parameter through cos θ = b̂ · x̂, violates causality for J > 2, implying that the remaining coupling must vanish. Therefore, there is no consistent way of coupling higher spin elementary particles to gravity even in four-dimensional flat spacetime.
Comments
Comparison with other arguments. As mentioned in the introduction, there are qualitative arguments in the literature in D = 4 suggesting that elementary massive higher spin particles cannot exist. The idea, originally advocated by Weinberg, is to require physical theories of elementary particles to have a well behaved high energy limit, or equivalently to demand a smooth limit for the amplitude as m_X → 0 [5,6]. However, for minimal coupling of spin J > 2 particles, the amplitude grows with powers of s/m²_X as m_X → 0 [4]. Therefore, given a fixed and finite cutoff scale Λ and a mass m_X, the amplitude can become O(1) for m_X ≪ √s ≪ Λ. For instance, it was shown in [7], by considering only the minimal coupling of spin 5/2 to gravity, that tree-level unitarity breaks down at the energy √s ∼ √(m_X M_pl) ≪ M_pl. Moreover, the breakdown scale for a particle of spin J was conjectured to be even lower, ∼ (m_X^{2J−2} M_pl)^{1/(2J−1)} [69]. This was shown to be true for massive spin J = 2 particles [70]. The existence of this scale implies that this particle cannot exist if tree-level unitarity is required to persist for scales up to M_pl. This seems natural if we require the theory of higher spin fields to be renormalizable. However, from an effective field theory point of view, the smooth m_X → 0 requirement determines only the range of masses and cut-off scales over which the low energy tree-level amplitude is a good description of this massive higher spin scattering experiment. Note that even within the tree-level unitarity arguments, one still needs to consider all possible non-minimal couplings as well as all contact interactions in order to ensure that they do not conspire to change the singular behavior of the amplitude in the m_X → 0 limit. In fact, [7,8] demonstrate examples in which adding non-minimal couplings can change the high energy singular behavior of the amplitude for the longitudinal parts of the polarizations.
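For a sense of the scales involved, the sketch below evaluates the two quoted breakdown estimates for an illustrative mass (the numerical inputs are our own examples, not values taken from [7, 69, 70]):

```python
# tree-level unitarity breakdown scales quoted in the text (GeV)
M_pl = 1.2e19                  # Planck scale, approximate

def scale_spin_5_2(m_X):
    """sqrt(s) ~ sqrt(m_X * M_pl) for minimally coupled spin 5/2."""
    return (m_X * M_pl) ** 0.5

def scale_spin_J(m_X, J):
    """conjectured scale ~ (m_X^(2J-2) * M_pl)^(1/(2J-1)) for spin J."""
    return (m_X ** (2 * J - 2) * M_pl) ** (1.0 / (2 * J - 1))

m_X = 1e3                      # a 1 TeV higher spin particle (illustrative)
print(scale_spin_5_2(m_X))     # ~ 1e11 GeV, far below M_pl
print(scale_spin_J(m_X, 3))    # spin 3: lower still, ~ 1e6 GeV
```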
By contrast, the causality arguments used here require only that the cut-off be parametrically larger than the mass of the higher spin particle, Λ ≫ m_X. Then, given an impact parameter b ≪ m_X^{−1}, the desired bounds are obtained even if the amplitude or phase shift satisfies M(s, t), δ(s, b) ≪ 1 (unlike the violation of tree-level unitarity, which requires the amplitude to be O(1)), since even the slightest time advance is forbidden by causality. Moreover, in the eikonal experiment, the two incoming particles do not overlap, and hence contributions from the other channel and contact diagrams can be ignored [9].
An interference argument for D > 4. A generalization of the interference argument of D = 4 to higher dimensions also suggests that there is tension between massive higher spin particles and asymptotic causality. In fact, it might be possible to derive the bounds of this section by demanding that the phase shift δ_Xh does not grow faster than s; however, we have not checked this explicitly. This argument has one immediate advantage. For a particle with spin J, δ_Xh ∼ s^{J−1}, and therefore it is obvious that even an infinite tower of massive spin-2 exchanges cannot restore causality. The only way causality can be restored is if we add an infinite tower of massive higher spin particles. We should note that this argument relies on the additional assumption that the eikonal approximation is valid for spin-J exchange with J > 2. The N-shocks argument of [9] is also applicable here, which strongly suggests that the eikonal exponentiation holds even for J > 2; however, a rigorous proof is still absent.
Massless case. Higher spin massless particles are already ruled out by the Weinberg-Witten theorem. Nonetheless, we can rederive this fact using the eikonal scattering setup. If the higher spin particles are massless, then gauge invariance requires that each vertex be invariant under the shift z_i → z_i + α_i p_i, where the α_i are arbitrary real numbers. In this case, only three structures are allowed for J ≥ 2. This is again, as we will see in the next section, in agreement with the three structures appearing in the CFT three-point function once we impose conservation constraints for all three operators. The general form of the three-point amplitude for J ≥ 2 is now a linear combination of these three structures. For massless particles, E_J is the only polarization tensor. As before, by requiring asymptotic causality, we find that the coupling must vanish for J > 2.
Parity violating interactions of massive spin-2 in D = 4. The argument presented in this section can also be applied to J = 2 in D ≥ 4. Of course, our argument does not rule out massive spin-2 particles; rather, it restricts the coupling between two massive spin-2 particles and a graviton to be the minimal one (2.26), which agrees with [55]. However, for D = 4, our argument does rule out parity violating interactions between massive spin-2 particles and the graviton. Moreover, the same conclusion about parity violating interactions holds even for massive spin-1.
Restoration of causality. Let us now discuss the possible ways of bypassing the arguments presented in this section. Our arguments utilized the eikonal limit m, |q| ≪ √s ≪ Λ, where Λ is the UV cut-off of the theory. Hence, our argument breaks down if the mass of the higher spin particle is m ∼ Λ.
There is another interesting possibility. One can have a massive higher spin particle with mass m ≪ Λ and causality is restored by adding one or more additional particles. The contribution to the phase shift from a tree level exchange of a particle of mass M ≫ m, 1/b is exponentially suppressed, ∼ e^{−bM}. Hence, these additional contributions can be significant only if the masses of these particles are not much larger than m. In addition, exchange of these additional particles can only restore causality if they have spin J > 2. However, exchange of any finite number of such particles will lead to additional causality violation. Hence, the only possible way causality can be restored is by adding an infinite tower of fine-tuned higher spin particles with masses comparable to m. Furthermore, causality for the scattering J+graviton → J+graviton also requires that an infinite subset of these new higher spin particles must be able to decay into two gravitons, which implies that this infinite tower does affect the dynamics of gravitons at energies ∼ m (footnote 23). We will discuss this in more detail in section 4.
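To see the suppression quantitatively (a rough estimate using only the scales already introduced), take the impact parameter at the natural value b ∼ 1/m; then a tree level exchange of a particle of mass M contributes

\[
\delta_{\rm extra}(s,b)\;\propto\; e^{-bM}\;\sim\; e^{-M/m},
\]

which is negligible for M ≫ m but unsuppressed for M ∼ m, so only additional states with masses comparable to m can compete with the causality violating contribution.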
Composite higher spin particles. The argument of this section applies to elementary massive higher spin particles. However, whether a particle is elementary or not must be understood from the perspective of effective field theory. Hence, the argument of this section also applies to composite higher spin particles as long as they look elementary enough at a certain energy scale. In particular, if the mass of a composite particle is m but it effectively behaves like an elementary particle up to some energy scale Λ which is parametrically higher than m, then the argument of this section is still applicable. More generally, the argument of this section rules out any composite higher spin particle which is isolated enough that it does not decay to other particles after interacting with high energy gravitons with q ≫ m.
Validity of the causality condition. Let us end this section by mentioning a possible caveat of our argument. In this section, we have shown that the presence of massive higher spin particles is inconsistent with asymptotic causality, which requires that particles do not experience a time advance even when they interact with each other. It is believed that any Lorentzian QFT must obey this requirement. However, there is no rigorous S-matrix based argument showing that positivity of the time delay is a necessary requirement of any UV complete theory. A physical argument was presented in [9] which relates positivity of the phase shift to unitarity, but it would be nice to have a more direct derivation. In the next section, we present a CFT-based derivation of the same bounds in anti-de Sitter spacetime which allows us to circumvent this technical loophole.

Footnote 23: Note that we ignored loops of the higher spin tower. From the scattering J+graviton → J+graviton, it is clear that an infinite tower of higher spin particles with mass M ≫ m cannot restore causality even if we consider loops.
Higher spin fields in AdS_D
Let us now consider large-N CFTs in dimensions d ≥ 3 with a sparse spectrum. CFTs in this class are special because at low energies they exhibit universal, gravity-like behavior. This duality allows us to pose a question in the CFT in d dimensions which is dual to the question about higher spin fields in AdS in D = d + 1 dimensions. Is it possible to have additional higher spin single trace primary operators X_J with J > 2 and scaling dimension Δ ≪ Δ_gap in a holographic CFT? In general, any such operator X_J will appear as an exchange operator in a four-point function of low spin operators. In the Regge limit σ → 0, the contribution to the four-point function from the X_J-exchange goes as ∼ 1/σ^{J−1}, which violates the chaos growth bound of [10] for J > 2, and hence all CFT three-point functions ⟨X_J OO⟩ must vanish for any low spin operator O. On the gravity side, this rules out all bulk couplings of the form OOX_J in AdS, where X_J is a higher spin bulk field (massive or massless) and O is any other bulk field with or without spin. For example, this immediately implies that in a theory of quantum gravity where the dynamics of gravitons at low energies is described by Einstein gravity, decay of a higher spin particle into two gravitons is not allowed.
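Written out schematically (normalizations suppressed; c_{OOX_J} denotes the ⟨OOX_J⟩ OPE coefficient), the Regge growth statement is

\[
\mathcal{G}_{X_J}(\sigma)\;\sim\;\frac{c_{OOX_J}^2}{\sigma^{\,J-1}}
\qquad\text{as }\sigma\to 0,
\]

which for J > 2 grows faster than the maximal rate 1/σ allowed by the chaos bound (the rate saturated by stress-tensor exchange), forcing c_{OOX_J} = 0.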
The above condition is not sufficient to completely rule out the existence of higher spin operators. In particular, we can still have higher spin operators without violating the chaos growth bound if the higher spin operator X_J does not appear in the OPE of any two identical single trace primary operators. For example, if each higher spin operator has a Z_2 symmetry, it will be prohibited from appearing in the OPE of identical operators. However, a priori we can still have a non-vanishing ⟨X_J X_J O⟩. In fact, the Ward identity dictates that the three-point function ⟨X_J X_J T⟩ must be non-zero, where T is the CFT stress tensor. In this section, we will utilize the holographic null energy condition to show that ⟨X_J X_J T⟩ must vanish for CFTs (in d ≥ 3) with large N and a sparse spectrum, or else causality (the chaos sign bound) will be violated. The Ward identity then requires that the two-point function ⟨X_J X_J⟩ must vanish as well. However, the two-point function ⟨X_J X_J⟩ is a measure of the norm of a state created by acting with X_J on the vacuum and therefore must be strictly positive in a unitary CFT. Vanishing of the norm necessarily requires that the operator X_J itself is zero.
In the gravity language, this forbids the bulk interaction X_J–X_J–graviton, which directly contradicts the equivalence principle. Therefore, a finite number of higher spin elementary particles, massless or massive, cannot interact with gravity in a consistent way even in AdS spacetime (in D ≥ 4).
Causality and conformal Regge theory
We start with a general discussion about the Regge limit in generic CFTs and then review the holographic null energy condition (HNEC) in holographic CFTs, which we will use to rule out higher spin single trace primary operators. The HNEC was derived in [45,47]; here, however, we provide a more general discussion of it. The advantage of the new approach is that it can be applied to more general CFTs. This makes the subsection more technical, so casual readers can safely skip it.
As discussed in [24,26,47], the relevant kinematic regime of the CFT four-point function for accessing the physics deep inside the bulk interior is the Regge limit. In terms of the familiar cross-ratios, in our conventions this limit corresponds to analytically continuing z̄ around the singularity at z̄ = 1, followed by taking the limit z, z̄ → 0 with z/z̄ held fixed. Unlike the more familiar Euclidean OPE limit, the contributions to the correlation function in this limit are not easily organized in terms of local CFT operators. In fact, contributions of individual local operators become increasingly singular with increasing spin. Using conformal Regge theory [71], these contributions may be resummed into finite contributions by rewriting the sum over spins as a contour integral using the Sommerfeld-Watson transform. This formalism relied on the fact that the coefficients in the conformal block expansion are well defined analytic functions of J away from integer values, which was later justified in [41]. This allows one to rewrite the sum over spins in the conformal block expansion as a deformed contour integral over J, reorganizing the contributions into a sum over Regge trajectories. We will not discuss the derivation here, as the details are well reviewed in [43-45, 71]. We will instead derive an expression for the contribution of a Regge trajectory directly to the OPE of two local operators in terms of a non-local operator E_{Δ,J} described below.
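Schematically (suppressing signature factors, normalizations, and the precise contour orientation, all of which are spelled out in [71]), the Sommerfeld-Watson step trades the sum over spins for a contour integral,

\[
\sum_{J\ge 0} a_J\,\rho^{\,J}
\;\longrightarrow\;
\oint_{\mathcal C}\frac{dJ}{2\pi i}\,\frac{\pi}{\sin \pi J}\,a(J)\,(-\rho)^{J},
\]

where a(J) is the analytic continuation of the coefficients a_J away from the integers [41]; deforming the contour C then reorganizes the expansion into contributions of Regge trajectories, the poles of a(J) in the complex J-plane.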
We will first derive an expression for the contribution to the OPE of scalar operators ψψ by an operator of spin J and scaling dimension Δ. To this end, we will utilize the methods introduced in [62] to encode primary symmetric traceless tensor operators into polynomials of degree J by contracting them with null polarization vectors z^μ, as in (3.1). It was shown in [62] that the tensor may be recovered from this polynomial by using the Thomas/Todorov operator. We are, however, interested in the case where the spin J is not necessarily an integer. Therefore we will employ the procedure introduced in [72] to generalize this expression to continuous spin by dropping the requirement that O(x; z) be a polynomial in z. With this definition, the expression for the contribution to the OPE by a continuous spin operator is given by a simple generalization of the expression appearing in [45]. We will then use the shadow representation [73-75] for the OPE in Lorentzian signature [76,77]:
where we take the points x_1 and x_2 to be time-like separated, the integration over x_3 is performed over the intersection of the causal future of x_1 and the causal past of x_2, and N is a normalization constant. The integrals over z and z̄ replace the contraction over tensor indices that would appear for integer J, using the inner product for the Lorentzian principal series introduced in [72]. These are manifestly conformal integrals and the integration can be performed using the methods described in [74].
In order to obtain the contribution to the Regge limit we will set x_1 = −x_2 = (u, v, 0) and analytically continue the points to space-like separations, resulting in an integration over a complexified Lorentzian diamond. We will then take the Regge limit by sending v → 0 and u → ∞ with uv held fixed. The resulting expression is an integral over a complexified ball times a null ray along the u direction, where C^{ψψO}_{Δ,J} is the OPE coefficient, C_{O_{Δ,J}} is the normalization of ⟨OO⟩, and we use (u, v, x_⊥) to express coordinates. This operator captures the contribution to the OPE of ψψ in the Regge limit. Therefore, analytically continued conformal blocks can be computed by inserting E_{Δ,J} inside a three-point function. For example, in the case of external scalars the result involves G_{Δ,J}(z, z̄), obtained from the conformal block by taking z̄ around 1 while holding z fixed. In (3.4) this analytic continuation corresponds to the choice of contour in performing the ũ integral. The integrand encounters singularities in ũ as the points become null separated from x_3 or x_4. Different analytic continuations of the conformal block can be obtained by choosing appropriate contours; the choice of contour in the ũ plane was discussed in [47] in greater detail. By an identical Sommerfeld-Watson transform and contour deformation argument as in [71], the expression for the Regge OPE can now be used to capture the contribution of Regge trajectories, where the coefficient a(ν) encodes the dynamical information about the spectrum of the CFT for the Regge trajectory parametrized by J(ν).
The operator E_{Δ,J} can be contrasted with the light-ray operator L[O] introduced in [72]. Although both correspond to non-local contributions to the OPE in the Regge limit, they do not compute the same quantity. As mentioned above, E_{Δ,J} computes the analytic continuation of the conformal block, whereas L[O] computes the analytic continuation of the conformal partial wave, which is the sum of the block and its shadow, the latter proportional to G_{1−J,1−Δ}(z, z̄). However, because of the symmetry of the coefficient a(ν) under ν → −ν, using either operator in the Regge limit will yield the same results after integration.
Holographic CFT: holographic null energy condition. As described in more detail in [43-46, 71, 78], the leading Regge trajectory in a holographic theory with a large Δ_gap can be parametrized as in (3.7). Using this expression for the trajectory, we find that at leading order in Δ_gap the coefficient a(ν) has single poles corresponding to the stress-tensor exchange as well as an infinite set of double-trace operators. As shown in [45,47], in the class of states in which we are interested, the dominant contribution to this OPE is given by the stress tensor and the double-trace operators do not contribute. This contribution is captured by the holographic null energy operator, which is a generalization of the averaged null energy operator [45] and a special case of the operator E_{Δ,J} described above with Δ = d and J = 2 (footnote 25: we use coordinates x = (u, v, x_⊥) for points x ∈ R^{1,d−1} in CFT_d). In particular, in the limit r → 0, this operator is equivalent to the averaged null energy operator. Causality in CFT implies that the four-point function obeys certain analyticity properties [50, 79-81]. For generic CFTs in d ≥ 3, these analyticity conditions dictate that the averaged null energy operator must be non-negative [81]. However, for holographic CFTs, causality leads to stronger constraints. In particular, causality of CFT four-point functions in the Regge limit implies that the expectation value of the holographic null energy operator is positive in a subspace of the total Hilbert space of holographic CFTs [45,47], as stated in (3.10), where 0 < ρ < 1. The class of states |Ψ⟩ is created by inserting an arbitrary smeared operator O near the origin,

|Ψ⟩ = ∫ dy_1 d^{d−2}y ε·O(−iδ, y_1, y)|0⟩ ,   ⟨Ψ| = ∫ dy_1 d^{d−2}y ⟨0| ε*·O(iδ, y_1, y) ,   (3.11)

with δ > 0. The state |Ψ⟩ is equivalent to the Hofman-Maldacena state of the original conformal collider [82], which was created by acting with local operators, smeared with Gaussian wave-packets, on the CFT vacuum. The HNEC is practically a conformal collider experiment for holographic CFTs (in d ≥ 3) in which the CFT is prepared in an excited state |Ψ⟩ by inserting an operator O near the origin and an instrument measures E(ρ) far away from the excitation, as shown in figure 6. Then, causality implies that the measured value E(ρ) must be non-negative for large-N CFTs with a sparse spectrum. Next, creating the state |Ψ⟩ by inserting the higher spin operator X_J, we show that the inequality (3.10) leads to surprising equalities among various OPE coefficients that appear in ⟨X_J X_J T⟩.
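For orientation, the statement that will be used repeatedly below can be summarized schematically (the precise definition of E(ρ) and of the smeared states is given in (3.10)-(3.11) and in [45,47]) as

\[
\langle \Psi|\,\mathcal{E}(\rho)\,|\Psi\rangle \;\ge\; 0, \qquad 0<\rho<1,
\]

for the class of states described above; as discussed later in this section, the lightcone limit ρ → 0 reduces E(ρ) to the averaged null energy operator, whose positivity holds in any interacting CFT [81].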
D > 4
We will use the HNEC to derive bounds on higher spin single trace primary operators in d ≥ 4 (or AdS_D with D ≥ 5). We will explicitly show that spin 3 and 4 operators are completely ruled out and then argue that the same must be true even for J > 4. The case of D = 4 is more subtle and will be discussed separately.
Spin-3 operators
Let us start with an operator X_J with J = 3 which does not violate the chaos growth bound because it has a Z_2 or some other symmetry which sets ⟨OOX_{J=3}⟩ = 0 for all O. Consequently, this operator does not contribute as an exchange operator in any four-point function in the Regge limit, and the leading contribution to the Regge four-point function still comes from the exchange of spin-2 single trace (stress tensor) and double trace operators. Therefore, the HNEC is still valid and we can use it with states created by smeared X_{J=3} to derive constraints on ⟨X_{J=3} X_{J=3} T⟩.
The CFT three-point function ⟨X_{J=3} X_{J=3} T⟩ is completely fixed by conformal symmetry up to a finite number of OPE coefficients (see appendix D). After imposing permutation symmetry and the conservation equation, the three-point function ⟨X_{J=3} X_{J=3} T⟩ has 9 independent OPE coefficients. We now compute the expectation value of the holographic null energy operator E(ρ) in states created by smeared X_{J=3} (3.13), where ε_μ is a null polarization vector (3.14) with ξ = ±1 and ε_⊥^2 = 0. Following the procedure outlined in [47], we can compute E(ρ) in the state (3.13). The result takes the form of an expansion whose coefficients I^{(n)}_ξ(λ^2) are polynomials in λ^2 (3.16) which in general have terms up to order λ^6. Given our choice of polarization, different powers of λ^2 correspond to independent spinning structures and to the decomposition of the SO(d−1,1) representations into representations of SO(d−2). Therefore positivity of E(ρ) implies that the coefficient of each power of λ^2 must individually satisfy positivity, for ξ = +1 as well as ξ = −1. Now, applying the HNEC order by order in the limit ρ → 1, the inequalities lead to 9 equalities among the 9 OPE coefficients. We find that the 9 OPE coefficients cannot be consistently chosen to satisfy these equalities. Hence, causality implies that ⟨X_{J=3} X_{J=3} T⟩ = 0. Moreover, the Ward identity relates C_{X_3}, the coefficient of the two-point function ⟨X_{J=3} X_{J=3}⟩ (see eq. (D.2)), to a particular linear combination of the OPE coefficients C_{i,j,k}, and hence the two-point function ⟨X_{J=3} X_{J=3}⟩ must vanish as well. This implies that we cannot have individual spin-3 single trace primary operators in the spectrum. The details of the calculation are rather long and not very illuminating, so we relegate them to appendix E.
Spin-4 operators
We can perform a similar analysis with a spin-4 operator, which leads to the same conclusion; however, the details are a little different. The three-point function ⟨X_{J=4} X_{J=4} T⟩, after imposing permutation symmetry and the conservation equation, has 12 independent OPE coefficients (see appendix F). But the HNEC leads to stronger constraints as we increase the spin of X, and these 12 OPE coefficients cannot be consistently chosen to satisfy all the positivity constraints. In fact, as we will show, it is easier to rule out spin-4 operators using the HNEC than spin-3 operators.
We again perform a conformal collider experiment for holographic CFTs (in d ≥ 3) in which the CFT is prepared in an excited state

|Ψ⟩ = ∫ dy_1 d^{d−2}y ε_{μ1} ε_{μ2} ε_{μ3} ε_{μ4} X^{μ1 μ2 μ3 μ4}(−iδ, y_1, y)|0⟩ ,   (3.18)

where ε_μ is the null polarization vector (3.14). The expectation value of the holographic null energy operator E(ρ) in states created by smeared X_{J=4} can be computed using the methods of [47]; the result again takes the form of an expansion whose coefficients Ĩ^{(n)}_ξ(λ^2) are polynomials in λ^2 (3.16), with terms up to λ^8 in general. Causality implies that different powers of λ^2 must satisfy positivity individually, for ξ = +1 as well as ξ = −1. We find that the 12 OPE coefficients cannot be consistently chosen to satisfy all the positivity constraints, implying (see appendix F) that ⟨X_{J=4} X_{J=4} T⟩ = 0 (3.20). Consequently, the Ward identity dictates that the two-point function of X_{J=4} must vanish as well. This rules out single trace spin-4 operators with scaling dimensions below Δ_gap in the spectrum of a holographic CFT. As shown in appendix F, spin-4 operators are ruled out even without considering E_{ξ=−1}(ρ). This is because, as we increase the spin of X, the number of constraint equations increases faster than the number of independent OPE coefficients. This is also apparent from the fact that for spin-3 we had to go to order 1/(1−ρ)^{d−2} to derive all constraints, whereas for spin-4 the full set of constraints was obtained at the order 1/(1−ρ)^{d−1}.
Spin J > 4
For operators with spin J ≥ 5, the argument is exactly the same. In fact, it is easier to rule them out because the HNEC leads to stronger constraints at higher spins. For example, for J = 1 there are 3 independent OPE coefficients, but the HNEC yields 2 linear relations among them. Consequently, the three-point function ⟨X_{J=1} X_{J=1} T⟩ is fixed up to one coefficient. The same is true for J = 2: there are 6 independent OPE coefficients and 5 constraints from the HNEC. Furthermore, in both of these cases, the constraint equations ensure that the expectation value of the holographic null energy operator behaves exactly like that of scalars: E(ρ) ∼ 1/(1−ρ)^{d−3} for d ≥ 4. In fact, this is true for all low spin operators of holographic CFTs.
The HNEC barely rules out operators with J = 3: there are 9 independent OPE coefficients, and using the positivity conditions all the way up to order 1/(1−ρ)^{d−2}, for ξ = ±1, we showed that the OPE coefficients cannot be consistently chosen to satisfy all the positivity constraints. The HNEC rules out J = 4 operators quite comfortably: we only needed to consider positivity conditions up to order 1/(1−ρ)^{d−1}, and only for ξ = +1, to rule them out. The same pattern persists even for operators with spins J ≥ 5, so we will not repeat our argument for each spin. Instead, we present a general discussion about the structure of E(ρ) at each order in the limit ρ → 1 for general Δ and J (in d ≥ 4 dimensions). This enables us to count the number of constraint equations at each order. A simple counting immediately suggests that a non-vanishing ⟨X_J X_J T⟩ cannot be consistent with the HNEC even for spins higher than 4. By studying various examples with specific values of J, Δ and d, we have explicitly checked that our simple counting argument is indeed true.
The three-point function ⟨X_J X_J T⟩ has 5 + 6(J − 1) OPE coefficients to begin with; however, not all of them are independent. Permutation symmetry implies that only 4J OPE coefficients can be independent. In addition, conservation of the stress-tensor operator T imposes J additional constraints among the remaining 4J OPE coefficients. Therefore, the three-point function ⟨X_J X_J T⟩ is fixed by conformal invariance up to 3J truly independent OPE coefficients (footnote 27: the number of independent OPE coefficients is different in d = 3). Furthermore, the Ward identity leads to a relation between these OPE coefficients and the coefficient of the two-point function, C_{X_J}.
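As a quick arithmetic check of this counting against the numbers used earlier in this section:

\[
\underbrace{5+6(J-1)}_{\text{all structures}}
\;\xrightarrow{\ \text{permutation symmetry}\ }\; 4J
\;\xrightarrow{\ \text{conservation of }T\ }\; 3J,
\qquad
3J\big|_{J=3}=9,\quad 3J\big|_{J=4}=12,
\]

in agreement with the 9 and 12 independent OPE coefficients quoted in the spin-3 and spin-4 analyses above.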
We again perform a conformal collider experiment for holographic CFTs (in d ≥ 4) in which the CFT is prepared in an excited state created by smeared X_J. In the limit ρ → 1, the leading contribution to E(ρ) involves only a single structure, with an overall factor that depends on a specific linear combination of OPE coefficients. Just like before, the structure changes sign for different powers of λ^2, and hence at first order the HNEC produces only one constraint. It is clear from [45,47] that the coefficient of the term E(ρ) ∼ 1/(1−ρ)^{d−3} is fixed by the Ward identity and hence is automatically positive. On the other hand, the HNEC in general can lead to constraints up to the 2J-th order, i.e. the order E(ρ) ∼ 1/(1−ρ)^{d−2}. But for J > 3, one gets 3J independent constraints from the HNEC even before the 2J-th order.
It is easier to rule out operators with higher and higher spins, and a simple counting clearly shows why this is not at all surprising. First, let us assume that the HNEC rules out any operator with some particular spin J = J_* > 2. That means that for spin J_* the HNEC generates 3J_* independent relations among the OPE coefficients. If we increase the spin by one, J = J_* + 1, we get 3 more independent OPE coefficients. However, the (2J_* + 1)-th and (2J_* + 2)-th orders in E(ρ) produce new constraints, and at each new order there can be J_* + 1 new equalities. Moreover, the λ^2 polynomial at each order now has a λ^{2(J_*+1)} term with its own positivity condition; this means that there can be 2J_* additional equalities from the first 2J_* orders. Therefore, for spin J_* + 1, there are 3 new OPE coefficients, whereas there can be 2(2J_* + 1) new constraints among them. Of course, this is not exactly true because some of the 2(2J_* + 1) constraints are not independent. However, for J_* ≥ 4, the number of new constraints 2(2J_* + 1) ≫ 3, and hence this simple counting suggests that the HNEC must rule out operators with spin J ≥ 5.
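Concretely, for the first new case J_* = 4 (ruling out spin 5), the mismatch is already large:

\[
\#(\text{new independent OPE coefficients}) = 3,
\qquad
\#(\text{potential new constraints}) = 2(2J_*+1)\big|_{J_*=4} = 18 \;\gg\; 3 .
\]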
Let us now demonstrate that this simple counting argument is indeed correct. First, consider J = 1. This is the simplest possible case, which was studied in [47]. For J = 1, there are 3 independent OPE coefficients, and the number of constraints (equalities) from the HNEC at each order is given by {1, 1}. After imposing these constraints, the expectation value of the holographic null energy operator goes as ∼ 1/(1−ρ)^{d−3}. Similarly, for J = 2 the number of constraints from the HNEC at each order is given by {1, 1, 2, 1}, and the total number of constraints is still less than the number of independent OPE coefficients [47].
For J = 3, the sequence is {1, 1, 2, 2, 2, 1} (see appendix E), and hence spin-3 operators were completely ruled out at the order 1/(1−ρ)^{d−2}. If we increase the spin by one, we find that the number of constraints from the HNEC at each order is {1, 1, 2, 2, 3, 2, 1, 0} (see appendix F). The zero at the end indicates that spin-4 operators were already ruled out at the order 1/(1−ρ)^{d−1}. Our simple counting suggests that the number of zeroes should increase as we go to higher spins. Explicit computation agrees with this expectation. In particular, for J = 5, there are 15 independent OPE coefficients and the number of constraints at each order is {1, 1, 3, 3, 5, 2, 0, 0, 0, 0}; therefore the spin-5 operators are ruled out already at the order 1/(1−ρ)^{d+2}. Similarly, for J = 6, there are 18 independent OPE coefficients, and explicit calculation shows that the number of constraints at each order is {1, 1, 3, 3, 5, 5, 0, 0, 0, 0, 0, 0}. Therefore, spin-6 operators can be ruled out even at the order 1/(1−ρ)^{d+4}. All of these results imply that the presence of any single trace primary operator with spin J > 2 is not compatible with causality.
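The pattern above can be verified mechanically. The short script below is a sketch of ours, not part of the original computation: it checks that each constraint sequence quoted above sums to the 3J independent OPE coefficients and locates the order in 1/(1−ρ) at which each spin is ruled out, using the exponent assignment that reproduces the orders quoted in the text (d−2 for J = 3, d−1 for J = 4, d+4 for J = 6).

# Sketch (ours): consistency checks on the HNEC constraint counting quoted above.
# For spin J there are 2J orders in the (1-rho) expansion; the n-th order is taken
# to scale as 1/(1-rho)^(d + 2J - 2 - n), which reproduces the orders quoted in the
# text for J = 3, 4, 6.

constraints = {
    3: [1, 1, 2, 2, 2, 1],
    4: [1, 1, 2, 2, 3, 2, 1, 0],
    5: [1, 1, 3, 3, 5, 2, 0, 0, 0, 0],
    6: [1, 1, 3, 3, 5, 5, 0, 0, 0, 0, 0, 0],
}

for J, seq in constraints.items():
    assert len(seq) == 2 * J          # one entry per order, up to the 2J-th order
    assert sum(seq) == 3 * J          # constraints saturate all 3J OPE coefficients
    last = max(n for n, c in enumerate(seq, start=1) if c > 0)
    offset = 2 * J - 2 - last         # exponent relative to d at the last new constraint
    print(f"J={J}: ruled out at order 1/(1-rho)^(d{offset:+d})")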
AdS_4/CFT_3
Similar to the D = 4 case on the gravity side, CFTs in d = 3 are special. Of course, large-N CFTs with a sparse spectrum in (2 + 1) dimensions are still holographic, and the HNEC once again implies that higher spin single trace operators with Δ ≪ Δ_gap are ruled out. However, there are several aspects of the d = 3 CFTs which are different from the higher dimensional case.
First of all, in CFT_3 the three-point functions ⟨X_J X_J T⟩ have both parity even and parity odd structures for any J. Furthermore, the number of independent parity even structures in d = 3 is different from the higher dimensional case. The general three-point function (D.4) implies that, after imposing permutation symmetry and the conservation equation, similar to the higher dimensional case ⟨X_J X_J T⟩_+ should contain 3J independent structures. However, for d = 3, not all of these structures are independent. In particular, this overcounting should be corrected by setting the OPE coefficients C_{1,1,k} = 0 for k ≥ 1 in (D.4) [62]. Therefore, in d = 3, the parity even part ⟨X_J X_J T⟩_+ has 2J + 1 independent OPE coefficients, whereas the parity odd part ⟨X_J X_J T⟩_− has 2J independent OPE coefficients. Note that this is exactly what is expected from interactions of gravitons with higher spin fields in 4d gravity. There is another aspect of d = 3 which is different from the higher dimensional case: the choice of polarization (3.14) in d = 3 implies that ε_⊥ = 0, and hence the λ-trick does not work. However, the full set of bounds can be obtained by considering the full polarization tensor for X_J. This can be achieved by using the projection operator of [62], which makes the analysis more complicated; however, the final conclusion remains unchanged.
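As an arithmetic cross-check of these d = 3 countings (using the range of labels allowed by (D.5), which gives J − 1 coefficients of the form C_{1,1,k} with k ≥ 1):

\[
3J - (J-1) \;=\; 2J+1,
\qquad
(2J+1)\big|_{J=3} = 7, \quad (2J)\big|_{J=3} = 6,
\]

matching the 7 parity even and 6 parity odd independent OPE coefficients used in the spin-3 analysis below.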
Since we expect that the HNEC imposes stronger constraints as we increase the spin, it is sufficient to rule out only X_{J=3}. The steps are exactly the same, but the details are a little different. After imposing permutation symmetry and the conservation equation, the three-point function ⟨X_{J=3} X_{J=3} T⟩ has 7 parity even and 6 parity odd independent OPE coefficients. We again compute the expectation value of the holographic null energy operator E(ρ) in states created by smeared X_{J=3}, where ε_{μ1 μ2 μ3} is the traceless symmetric polarization tensor. Using the techniques developed in [47], we now compute the expectation value of the holographic null energy operator E(ρ) in this state, which can be schematically expressed as an expansion whose coefficients j_n(ε_{μ1 μ2 μ3}, C_{i,j,k}) are specific functions of the polarization tensors and the OPE coefficients. The dots in that expression represent terms that vanish in the limit ρ → 1. The ln(1 − ρ) term is unique to the 3d case and is a manifestation of soft graviton effects in the IR. Applying the HNEC order by order in the limit ρ → 1, we again find that it can be satisfied for all polarizations if and only if all the OPE coefficients vanish. Consequently, the Ward identity implies that we cannot have individual spin-3 operators in the spectrum. Moreover, a simple counting again suggests that the same is true even for J > 3. In d = 3, as we increase the spin by one, the number of parity even OPE coefficients increases by 2. However, now there are two more orders perturbatively in (1 − ρ) that generate new relations among the OPE coefficients. Each new order produces at least one new constraint, suggesting that if the HNEC rules out parity even operators with some particular spin J, it will also rule out all parity even operators with spin J + 1. In addition, it is straightforward to extend this argument to include parity odd structures; however, we will not do so in this paper.
Maldacena-Zhiboedov theorem and massless higher spin fields
In this section we argued that in holographic CFTs, any higher spin single trace non-conserved primary operator violates causality. On the gravity side, this rules out any higher spin massive field with mass below the cut-off scale (for example the string scale). But what about massless higher spin fields? In asymptotically flat spacetime, this question has already been answered by the Weinberg-Witten/Porrati theorem [2,3]. The same statement can be proven in AdS by using the argument of this section, applied to a conserved higher spin current X_J ≡ J. Conservation of J leads to additional relations among the OPE coefficients C_{i,j,k} in ⟨J J T⟩. Even before we impose these additional conservation relations, the HNEC implies C_{i,j,k} = 0 for J > 2, which is obviously consistent with these new relations from conservation. Hence, our argument is valid even for higher spin conserved currents J.
Causality of CFT four-point functions in the lightcone limit also rules out a finite number of conserved higher spin currents in any CFT [50]. This is a partial generalization of the Maldacena-Zhiboedov theorem [48] from d = 3 to higher dimensions. The argument used in [50] to rule out higher spin conserved currents is not applicable here, since J does not contribute to generic CFT four-point functions as an exchange operator. However, we can repeat the argument of [50] for a mixed correlator ⟨OOOO⟩ in the lightcone limit, where O ≡ T + J. For this mixed correlator, J does contribute as an exchange operator in the lightcone limit. In particular, we can write a schematic decomposition (3.25) in which each diagram represents a spinning conformal block and the dots represent contributions suppressed in the lightcone limit. The argument of [50], now applied to the correlator ⟨OOOO⟩, implies that this correlator is causal if and only if the last term in (3.25) is identically zero. The J-exchange conformal blocks, for J > 2, grow faster in the lightcone limit than allowed by causality. This necessarily requires that the three-point function ⟨J J T⟩ must vanish, which is sufficient to rule out J for J > 2. This generalizes the argument of [50], ruling out higher spin conserved currents even when none of the external operators is charged under it. We should note that technically it might be plausible for the OPE coefficients to conspire in a non-trivial way such that a conserved current J cannot contribute as an exchange operator (for all polarizations of the external operators) but still has a non-vanishing ⟨J J T⟩. However, it is very unlikely that such a cancellation is possible, since the three-point function ⟨J J T⟩ can only have three independent OPE coefficients. This unlikely scenario can be ruled out by explicit calculations.

The above argument is applicable only because J is conserved. However, one might expect that a similar argument in the Regge limit should rule out even non-conserved X_J for holographic CFTs. This is probably true, but the argument is more subtle in the Regge limit because an infinite tower of double trace operators also contributes to the correlator ⟨OOOO⟩. Hence, one needs to smear all four operators appropriately, in a way similar to [42,45], such that the double trace contributions are projected out. One might then use causality/chaos bounds to rule out the three-point function ⟨X_J X_J T⟩. However, it is possible that the smearing procedure sets contributions from certain spinning structures in ⟨X_J X_J T⟩ to zero as well. In that case, this argument will not be sufficient. A proof along this line requires the computation of a completely smeared spinning Regge correlator, which is technically challenging even in the holographic limit.
Comments
Small deviation from the holographic conditions. Large-N CFTs with a sparse spectrum are indeed special because at low energies they exhibit gravity-like behavior. This immediately poses a question about the assumptions of large-N and sparse spectrum: how rigid are these conditions? In other words, do we still get a consistent CFT if we allow small deviations away from these conditions?
In this section, we answered a version of this question for the sparseness condition. The sparseness condition requires that any single trace primary operator with spin J > 2 must necessarily have dimension Δ ≥ Δ_gap ≫ 1. This condition ensures that the dual gravity theory has a low energy description given by Einstein gravity. However, we can imagine a small deviation from this condition by allowing a finite number of additional higher spin single trace primary operators X_J with J > 2 and scaling dimension Δ ≪ Δ_gap. As we have shown in this section, these new operators violate the HNEC, implying that the resulting CFTs are acausal.
Minkowski vs. AdS. It is rather apparent that the technical details of the flat spacetime argument and the AdS argument are very similar. For example, the number of independent structures for a particular spin is the same in both cases. In flat spacetime as well as in AdS, we start with inequalities which can be interpreted as some kind of time delay. In addition, these inequalities, when applied order by order, lead to equalities among various structures. These equalities eventually rule out higher spin particles. However, the AdS argument has one conceptual advantage, namely that it does not require any additional assumption about the exponentiation of the leading contribution. The CFT-based argument relies on the HNEC. The derivation of the HNEC utilized the causality of a CFT correlator which was designed to probe high energy scattering deep into the AdS bulk. It is therefore not a coincidence that the technical details of the AdS and the flat space arguments are so similar. Since local high energy scattering is insensitive to the spacetime curvature, it is not very surprising that the bounds in flat space and in AdS are identical. This also suggests that the same bound should hold even in de Sitter.
Higher spin operators in generic CFTs. The argument of this section does not rule out higher spin non-conserved operators in non-holographic CFTs. However, the HNEC in certain limits can be utilized to constrain interactions of higher spin operators even in generic CFTs. In particular, the limit ρ → 0 in (3.10) corresponds to the lightcone limit, and in this limit the HNEC becomes the averaged null energy condition (ANEC). The proof of the ANEC [81,83] implies that in the limit ρ → 0, the inequality E(ρ) ≥ 0 must be true for any interacting CFT in d ≥ 3. Moreover, in this limit the HNEC is equivalent to the conformal collider setup of [82], which is known to yield optimal bounds. Therefore, the same computation performed in the limit ρ → 0 can be used to derive non-trivial but weaker constraints on the three-point functions ⟨X_J X_J T⟩ which are true for any interacting CFT in d ≥ 3. These constraints, even though easy to obtain from our calculations of E(ρ), are rather long and complicated and we will not transcribe them here.
Other applications of the Regge OPE. In this note we specialized E_{Δ,J} to the case Δ = d and J = 2 to arrive at the HNEC operator, in order to make use of the universality of the stress-tensor Regge trajectory in holographic theories. However, E_{Δ,J} more generally describes the contribution of any operator to the Regge OPE of identical scalar operators. It would be interesting to find the actual spectrum of these operators contributing to the Regge limit of the OPE in specific theories. It would also be worthwhile to try to understand the subleading contributions to the Regge OPE in holographic theories. Although these contributions are not universal, we expect that causality will impose constraints on these contributions as well.
We have explored the Regge limit of the OPE of two identical scalars. Generalization to other representations is straightforward, as it only requires knowledge of the CFT three-point functions, whose functional form is fixed by symmetry. Positivity of these generalized Regge OPE operators will likely lead to new constraints, since they allow access to more general representations. Furthermore, decomposition of the additional Lorentz indices under the little group will result in more constraint equations which need to be satisfied to preserve causality.
Make CFT causal again
In the previous section, we considered large-N CFTs in d ≥ 3 dimensions with the property that the lightest single trace operator with spin J > 2 has dimension Δ ≡ Δ_gap ≫ 1. These holographic conditions are equivalent to the statement that on the gravity side the low energy behavior is governed by Einstein gravity. Moreover, Δ_gap corresponds to the scale of new physics Λ in the effective action in AdS (for example, it can be the string scale M_s). In any sensible theory of quantum gravity it is expected that the Einstein-Hilbert action should receive higher derivative corrections which are suppressed by the scale Λ. On the CFT side, this translates into the fact that there is an infinite tower of higher spin operators with dimensions above Δ_gap. All of these higher spin operators must appear as exchange operators in CFT four-point functions in order to restore causality at high energies [42]. Furthermore, in this paper we showed that the sparseness condition is very rigid: we are not allowed to add an additional higher spin operator X_J with spin J > 2 and Δ ≪ Δ_gap if causality is to be preserved. Let us consider adding an additional higher spin primary single trace operator X_J with dimension Δ = Δ_0 ≪ Δ_gap (or, on the gravity side, a higher spin particle with mass M_0 ≪ Λ) and ask: is it possible to restore causality by adding one or more primary operators (or new particles) that cancel the causality violating contributions? In this section, we answer this question from the CFT side.
The bound obtained in the previous section from the HNEC is expected to be exact strictly in the limit Δ_gap → ∞. However, it is easy to see that the same conclusion holds even when Δ_gap is large but finite, as long as Δ_0 ≪ Δ_gap. In this case, one might expect that the OPE coefficients are no longer exactly zero but receive corrections C_{i,j,k}/C_{X_J} ∼ 1/Δ_gap^a, where a is some positive number (footnote 31). However, this is inconsistent with the Ward identity, which requires that at least some of the C_{i,j,k}/C_{X_J} ∼ O(1). Therefore, even for large but finite Δ_gap, the operator X_J is ruled out as long as Δ_0 ≪ Δ_gap. In addition, this also implies that if we want to add X_J, it will not be possible to save causality by changing the spectrum above Δ_gap. Let us add extra operators at dimensions ∼ Δ'_gap ≤ Δ_gap in order to restore causality. Note that if Δ'_gap ≫ Δ_0, then contributions of these extra operators are expected to be suppressed by Δ'_gap, and hence we can again make the above argument. Therefore, contributions of these extra operators can be significant enough to restore causality if and only if Δ'_gap ∼ Δ_0.
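Stated as a formula (schematic; the order-one coefficients α_{i,j,k} and κ are placeholders for the precise Ward identity relation, and C_{X_J}, C_{i,j,k} are defined in footnote 31 and appendix D), the tension invoked above is

\[
\text{causality:}\quad \frac{C_{i,j,k}}{C_{X_J}} \sim \frac{1}{\Delta_{\rm gap}^{\,a}} \xrightarrow[\ \Delta_{\rm gap}\gg 1\ ]{} 0,
\qquad
\text{Ward identity:}\quad \sum_{i,j,k}\alpha_{i,j,k}\,C_{i,j,k} \;=\; \kappa\, C_{X_J} \;\neq\; 0,
\]

which cannot both hold for a unitary operator X_J (with C_{X_J} > 0) of dimension Δ_0 ≪ Δ_gap.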
The above argument also implies that perturbative 1/N effects are not sufficient to save causality either. Any such correction must be suppressed by positive powers of 1/N and hence inconsistent with the Ward identity. This is also clear from the gravity side, both in flat space and in AdS. Causality requires that the tree level higher spin-higher spin-graviton amplitude must vanish. One might expect that loop effects can generate a non-vanishing amplitude without violating causality, however, these effects must be 1/N suppressed. Hence, this scenario is in tension with the universality of gravitational interactions dictated by the equivalence principle.
The behavior of four-point functions in the Regge limit makes it obvious that these extra operators at Δ'_gap must have spin J ≥ 2 so that they can contribute significantly in the Regge limit to restore causality. Furthermore, causality imposes strong restrictions on which higher spin operators can be added at Δ'_gap. The simplest possibility is to add a finite or infinite set of higher spin operators at Δ'_gap which do not contribute as exchange operators in any four-point function. However, this scenario makes the causality problem even worse: the causality of the Regge four-point functions still leads to the HNEC, and one can rule out even an infinite set of such operators by applying the HNEC to individual higher spin operators. The only other possibility is to add a set of higher spin operators at Δ'_gap which do contribute as exchange operators in the four-point function ⟨X_J X_J ψψ⟩, where ψ is a heavy scalar operator. In this case, the HNEC is no longer applicable and hence the argument of the previous section breaks down. However, a finite number of higher spin primaries (J > 2) that contribute as exchange operators violate the chaos/causality bound [10,42], and consequently this scenario necessarily requires an infinite tower of higher spin operators (footnote 32). Therefore, the only way causality can be restored is to add an infinite tower of finely tuned higher spin primaries with Δ ∼ Δ'_gap ∼ Δ_0. In other words, the addition of a single higher spin operator with Δ = Δ_0 necessarily brings down the gap to Δ_0.
Let us note that the above argument did not require that this new tower of operators contribute to the TT OPE. For this reason, one might hope that it is possible to fine-tune the higher spin operators such that causality is restored and the gap is still at Δ_gap when considering states created by the stress tensor. However, this scenario is also not allowed, as we explain next. In this case, one can still prove the HNEC starting from the Regge OPE of TT when both operators are smeared appropriately (see [47]). One can then repeat the argument of the previous section to rule out X_J, as well as the entire tower of operators at Δ'_gap. Therefore, the only way the tower at Δ'_gap ∼ Δ_0 can lead to a causal CFT is if these operators also contribute to the TT OPE. In particular, an infinite subset of all higher spin operators must appear in the OPE of the stress tensor (and of all low spin operators).

Let us end this section by summarizing in the gravity language. At energy scales E ≪ Λ, the dynamics of gravitons is completely determined by the Einstein-Hilbert action. If we wish to add even one higher spin elementary particle (J > 2) with mass M_0 ≪ Λ, the only way for the theory to remain causal is if we also add an infinite tower of higher spin particles with masses ∼ M_0. Causality also requires that an infinite subset of these new higher spin particles should be able to decay into two gravitons. As a result, the dynamics of gravitons can now be approximated by the Einstein-Hilbert action only at energy scales E ≪ M_0, and hence M_0 is the new cut-off even if we only consider external states created by gravitons.

Footnote 31: C_{X_J} is the coefficient of the two-point function of X_J, and the C_{i,j,k} are the OPE coefficients of ⟨X_J X_J T⟩ (see appendix D).

Footnote 32: Note that the chaos bound does not directly rule out spin-2 exchange operators. Therefore, one might expect that the causality problem may be resolved by adding a finite number of spin-2 non-conserved single trace primaries. However, it was shown in [9] that non-conserved spin-2 primaries, when they contribute as exchange operators, lead to additional causality violation, and hence we will not consider this scenario.
Stringy operators above the gap
We concluded from both gravity and CFT arguments that finitely many higher spin fields with scaling dimensions Δ ≪ Δ_gap are inconsistent with causality, even as external operators. We can ask how this result may be modified if we take the external operator X to be a heavy state above the gap, analogous to stringy states in classical string theory.
Let us consider the expectation value of the generalized HNEC operator (3.6) in the Hofman-Maldacena states created by a heavy single-trace higher spin operator with spin l. Following [44], we parametrize the leading Regge trajectory in the form used there. The external operator has scaling dimension Δ_X ≥ Δ_gap. Consequently, we cannot take the Δ_gap → ∞ limit as before. Instead, we must take Δ_gap to be large but finite and keep track of terms that may grow in this limit. In the Regge limit u → ∞, with 1 − ρ ≫ log(u)/Δ_gap^2, we expect the leading trajectory to be nearly flat and the integration over the spectral density (3.6) to be approximated by the stress-tensor contribution at ν = −i d/2, up to 1/Δ_gap^2 corrections. This limit is similar to the discussion in section 5.5 of [43] for bounds on the real part of the phase shift for scattering in AdS. See also the discussion of the imaginary part of the phase shift for AdS scattering in [43,44,46].
Therefore, the operator with a positive expectation value is given by an expansion (footnote 33) in which the dots denote terms that are subleading in Δ_gap and the t^{(i)} consist of certain combinations of OPE coefficients and polarization tensors. The OPE coefficients t^{(i)} are analytic continuations of the original OPE coefficients. We have already seen that if the OPE coefficients do not grow with Δ_gap, the existence of the operator X is inconsistent with causality. One way in which causality may be restored is to impose the gap dependence (4.4) on the OPE coefficients between the heavy operators and the exchanged operator (footnote 34). The dependence of the OPE coefficients on Δ_gap is chosen in (4.4) such that higher negative powers of 1 − ρ are multiplied by higher powers of 1/Δ_gap and consequently become more suppressed in the regime of validity of stress-tensor exchange. This means that we would not recover the previous constraints by sending ρ → 1 and, as a result, there is no inconsistency with the Ward identity or causality for higher spin operators above the gap.
Based on our CFT arguments, (4.4) is not fixed to be the unique choice which restores causality. However, this behaviour is very similar to how the scattering amplitude in classical string theory is consistent with causality. The high energy limit of scattering amplitudes in string theory has been explored in [84-88]. In addition, generating functions of three-point and four-point amplitudes for strings on the leading Regge trajectory with arbitrary spin are constructed in [89,90]. Here we focus on a high energy limit of two-to-two scattering between closed higher spin strings and tachyons in bosonic string theory. Using the results of [89,90], the string amplitude is given by the compact expression (4.5), where the Mandelstam variables satisfy s + t + u = (4/α′)(l − 4) for closed strings. Here, (POL) represents the tensor structures and polynomials of the different momenta. The poles of the Gamma functions in the numerator of (4.5) correspond to the exchange of infinitely many higher spin particles with even spins and the mass relation m(J)^2 = (2/α′)(J − 2). In the Regge limit, s → ∞ with t held fixed, the amplitude simplifies to (4.6).

Footnote 33: The second line follows from the fact that at large Δ_gap the saddle point is dominated by the stress tensor. Here we have assumed that the OPE coefficients do not scale exponentially with increasing Δ_gap and hence will not affect the saddle point.

Footnote 34: In fact, in the case of stress-tensor exchange, the Ward identities force at least one combination of OPE coefficients to grow with Δ_X ∼ Δ_gap.
Note that the Mandelstam variable s plays the same role as u in the CFT analogue. Therefore, to make gravity the dominant force we can either take α′ → 0, which corresponds to Δ_gap → ∞ in the CFT, or take t → 0, which in CFT language is the lightcone limit ρ → 0. In both cases, the polarization part (POL) reduces to the form in (4.7), where the powers of s are dictated by consistency with the gravity result in the limits mentioned above. Note that the tensor structure in (4.7) is independent of the momenta and does not change sign even if we perform the eikonal experiment in this limit. Thus, in the limit in which gravity is dominant, the possible causality violating structures also vanish and there is no problem with causality. This happens naturally in string theory, since there is only one scale, α′, controlling the coefficients in the tensor structures, the interactions between particles, and their masses. As a result, vertices or tensor structures which have higher powers of the momentum q (analogous to powers of 1/(1−ρ) in the CFT) must be accompanied by higher powers of √α′ (analogous to powers of 1/Δ_gap) on dimensional grounds. See also [9,91] for interesting details of the eikonal experiment in string theory.
Cosmological implications
The bound on higher spin particles has a natural application to inflation. The epoch of inflation is a quasi de Sitter expansion of the universe immediately after the big bang. The primordial cosmological fluctuations produced during inflation naturally explain the observed temperature fluctuations of the cosmic microwave background (CMB) and the large-scale structure of the universe. If higher spin particles were present during inflation, they would affect the behavior of the primordial cosmological fluctuations. In particular, higher spin particles would produce distinct signatures in the three-point function of scalar perturbations in the squeezed limit. Hence, the bound on higher spin particles imposes rather strong constraints on these three-point functions.
Consider one or more higher spin particles during inflation. The approximate de Sitter symmetry during inflation dictates that the mass of any such particle, even before we impose our causality constraints, must satisfy the Higuchi bound [92,93], where H is the Hubble rate during inflation. Particles with masses that violate the Higuchi bound correspond to non-unitary representations in de Sitter space, so the Higuchi bound is analogous to the unitarity bound in CFT. The bounds on higher spin particles obtained in this paper are valid in flat and AdS spacetime. We will not attempt to derive similar bounds directly in de Sitter. Instead, we will adopt the point of view of [9,53] and assume that the same bounds hold even in de Sitter spacetime. This is indeed a reasonable assumption, since these bounds were obtained by studying local high energy scattering, which is insensitive to the spacetime curvature. Therefore, in de Sitter spacetime in Einstein gravity, any additional elementary particle with spin J > 2 cannot have a mass m ≪ Λ, where Λ is the scale of new physics in the original effective action. In any sensible low energy theory we must have H ≪ Λ, and hence the causality bound is stronger than the Higuchi bound. Furthermore, the causality bound also implies that all elementary higher spin particles must belong to the principal series of unitary representations of the de Sitter isometry group.

Figure 7. The squeezed limit of three-point functions.
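For reference, the Higuchi bound quoted above takes the standard form for a massive spin-J field in dS_4 (see [92,93] for general dimensions and conventions; the boundary values correspond to partially massless points):

\[
m^2 \;\ge\; J(J-1)\,H^2 ,
\]

so that, for example, a massive spin-3 field must satisfy m^2 ≥ 6H^2. Since any sensible low energy theory has H ≪ Λ, the causality requirement m ≳ Λ discussed above is parametrically stronger.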
Inflation naturally predicts that the scalar curvature perturbation ζ produced during inflation is nearly scale invariant and Gaussian. The momentum space three-point function of the scalar curvature perturbation, ⟨ζ(k_1)ζ(k_2)ζ(k_3)⟩, is a good measure of the deviation from exact Gaussianity. Higher spin particles affect the three-point function of scalar perturbations in a unique way. In an inflating universe, massive higher spin particles can be spontaneously created. It was shown in [52] that the spontaneous creation of higher spin particles produces characteristic signatures in the late time three-point function of scalar fluctuations. In particular, in the squeezed limit k_1, k_2 ≫ k_3 (see figure 7), the late time scalar three-point function admits an expansion in the spin of the new particles present during inflation (5.2).
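Schematically, and only to indicate the qualitative structure relevant for the discussion below (the precise expression, with all coefficients, is eq. (5.2) and [52]; the form written here is our paraphrase, and the factors shown are assumptions of the sketch), the contribution of a spin-J particle of mass m_J in the squeezed limit looks like

\[
\langle \zeta_{\vec k_1}\zeta_{\vec k_2}\zeta_{\vec k_3}\rangle
\;\supset\;
(\text{slow-roll suppression})\times\sum_{J} I_J\!\big(m_J, k_3/k_1\big)\,P_J(\cos\theta)\,
\langle\zeta\zeta\rangle_{k_1}\langle\zeta\zeta\rangle_{k_3},
\]

with P_J a Legendre polynomial in the angle θ shown in figure 7 and I_J carrying the characteristic dependence on k_3/k_1 set by the mass m_J.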
Now, if an I_J with J > 2 is detected in future experiments, then the scale of new physics must be Λ ∼ H. This necessarily requires the presence of not one but an infinite tower of higher spin particles with spins J > 2 and masses comparable to the Hubble scale. This scenario is very similar to string theory. Any detection of I_J with J > 2 can therefore be interpreted as evidence in favor of string theory with the string scale comparable to the Hubble scale and a very weak coupling, which explains the smallness of H/M_pl.
It is obvious from (5.2) that the effects of higher spin particles are always suppressed by the slow-roll parameter and hence not observable in the near future. The derivation of (5.2) relied heavily on the approximate conformal invariance of the inflationary background. This approximate conformal invariance is also responsible for the slow-roll suppression. However, if we allow for a large breaking of conformal invariance, the signatures of massive higher spin particles can be large enough to be detected by future experiments. In particular, using the framework of the effective field theory of inflation, it was shown in [95] that there are interesting scenarios in which higher spin particles contribute significantly to the scalar non-Gaussianity. Furthermore, it was shown in [95] that higher spin particles can also produce detectable as well as distinctive signatures in the scalar-scalar-graviton three-point function in the squeezed limit. Experimental exploration of this form of non-Gaussianity through the measurement of the ⟨BTT⟩ correlator of CMB anisotropies can actually become a reality in the near future [95]. In fact, in the most optimistic scenario, the proposed CMB Stage IV experiments [96] will be sensitive enough to detect massive higher spin particles, providing indirect evidence in favor of a theory which is very similar to low scale string theory.
A Transverse polarizations
We construct the transverse polarization tensors used in section 2 explicitly. These polarization tensors have components only in the transverse directions x and y, so they can be used in D ≥ 4. We first fix our definitions and then consider the following basis vectors:
B Phase shift computations
A lemma. In order to get the bounds in the transverse plane, we can use a trick that will be used many times in this appendix. After plugging in the polarization tensors for the particles, we always find an equation involving a quantity I, and we would like to show that the sign of I alternates as we choose different directions for b in the transverse plane. Let us first consider the case where the two spins are J and J + K. We take x_+, x_− to be two arbitrary directions in the transverse plane, and the direction of the impact parameter b is picked in the plane spanned by x_+, x_−. Using e = e_⊕, we find an expression in which θ is the angle between the vector b and the x-axis. This implies that rotating b with respect to the x-axis changes the sign of I for K ≠ 0.
If K = 0, both e_⊕ and e_⊗ yield the same sign for I, and we need to use polarizations having components in other transverse directions; therefore the following argument cannot be applied to D = 4. For D ≥ 5, we can separate another transverse coordinate z from x_+, x_−, and after taking derivatives we place the impact parameter b in the x, y, z plane. These coordinates are enough for getting the bounds, and we do not have to consider other transverse directions for D ≥ 6. Again, plugging in e = e_⊕, we find an expression in which θ is the angle between ẑ and b. For any integer value of J and D, the hypergeometric function in (B.3) is a polynomial in its variable, changing sign for both even and odd J.
For the three-point vertex with T, we send e^{μ1} e^{μ2} ··· e^{μJ} → e^{μ1 μ2 ··· μJ}. We also need to impose e_3^{μ1 μ2 ··· μJ} = (e_1^{μ1 μ2 ··· μJ})† to have positivity. With this choice of polarization, only A_1, ···, A_{J+1} contribute to the phase shift, and we write down the contribution of each vertex to the phase shift. Let us define δ̃(s, b) = π^{D/2−2} (···). In the small impact parameter limit, the term with the most negative power of b dominates over the other terms. As explained in the lemma above, choosing different directions for b for D ≥ 5 changes the sign of each of these terms; the argument can therefore be applied successively. Note that for a_1 there is no derivative, and hence rotating the direction of b does not change the sign of this term. Choosing e to be either e_⊗ or e_⊕, we find for A_1 a manifestly positive contribution δ̃_T. In this case all the remaining vertices contribute to the phase shift, and the contribution of each vertex, after taking b small and using the trick discussed in the lemma, yields the corresponding constraints.
At order 1/b^{D−2}, A_1 contributes and we obtain the corresponding constraint. Off-diagonal components of E_J and E_{J−1}. In order to impose constraints on A_{J+2}, A_{J+3}, ···, A_{2J+1}, we use E^{(1)} = E_J and E^{(3)} = E_{J−1}. Subsequently, we find the contribution due to each of the remaining vertices, implying that a_{J+1+i} = 0. Using the diagonal elements in E_{J−1} we find a further relation; the contribution from A_1, however, is given by (B.13). Therefore, we find a_{J+2} = J a_1 and a_{2J+2} = (J(J−1)/2) a_1. This proves (2.26).
Diagonal elements of E_{J−2}. To constrain a_1 we used the diagonal elements in E_{J−2} for both particles. Computing C_{JJ2} after imposing all the other constraints, we find for J ≥ 4 an expression for δ̃, and hence a_1 = 0 by the trick used in the lemma above. Equation (B.14) is valid for J ≥ 4; for J = 3, we used the interference between E^{(1)} = E_0 and E^{(3)} = E_3 to set a_1 = 0.
C Parity violating interactions in D = 5
Massive higher spin particles can interact with gravity in a parity-violating way only in D = 4 and 5. We already discussed the case of D = 4; let us now discuss the parity odd interactions in D = 5. Unlike in D = 4, only massive particles are allowed to couple to gravity in a way that does not preserve parity. In order to list all possible parity odd vertices for the interaction J − J − 2, we introduce a parity odd building block. The most general parity odd on-shell three-point amplitude can then be constructed using this building block. In particular, we can write two distinct sets of vertices: the first set contains J independent structures, while the second set contains J − 1 independent structures. The most general form of the parity violating three-point amplitude is given by a linear combination of these structures. Bounds on parity violating interactions can be obtained by using a simple null polarization vector, where the transverse and longitudinal vectors are defined in (2.16). The vectors x̂ and ŷ are given by x̂ = (0, 0, 1, 0, 0) and ŷ = (0, 0, 0, 1, 0). Positivity of the phase shift for this polarization leads to ā_n = 0, n = 1, ···, 2J − 1 (C.6) for any spin J. Note that this bound holds even for J = 1 and 2.
D Correlators of higher spin operators in CFT
Let us first define the building blocks.
Two-point function.
Here ∆ is the dimension of the operator X_J and C_{X_J} is a positive constant; ε_1 and ε_2 are null polarization vectors contracted with the indices of X_J via (ε^µ ε^ν ···) X_{µν···} ≡ ε.X. In the three-point function (D.4), C_{n23,n13,n12} are OPE coefficients and h ≡ ∆ + J. In the above expression all of the polarization vectors are null; however, polarizations ε^µ ε^ν ··· can be converted into an arbitrary polarization tensor ε^{µν···} by using projection operators from [62]. The sum in (D.4) is over all triplets of non-negative integers {n23, n13, n12} satisfying J − n12 − n13 ≥ 0, J − n12 − n23 ≥ 0, 2 − n13 − n23 ≥ 0. (D.5) To begin with, there are 5 + 6(J − 1) OPE coefficients C_{n23,n13,n12}; however, not all of them are independent. The three-point function (D.4) must be symmetric with respect to the exchange (x_1, ε_1) ↔ (x_2, ε_2), which implies that only 4J OPE coefficients can be independent in general. Moreover, conservation of the stress-tensor operator T imposes additional restrictions on the remaining OPE coefficients.
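To make the counting above concrete, here is a small brute-force enumeration (our own check, not from the paper) verifying that the number of admissible triplets in (D.5) is 5 + 6(J − 1):

```python
# Brute-force check (our own, not from the paper) of the counting above:
# the number of non-negative integer triplets {n23, n13, n12} obeying
# J - n12 - n13 >= 0, J - n12 - n23 >= 0 and 2 - n13 - n23 >= 0
# should equal 5 + 6(J - 1).

def count_triplets(J):
    return sum(
        1
        for n23 in range(3)          # 2 - n13 - n23 >= 0 forces n23 <= 2
        for n13 in range(3)
        for n12 in range(J + 1)
        if J - n12 - n13 >= 0 and J - n12 - n23 >= 0 and 2 - n13 - n23 >= 0
    )

for J in range(1, 8):
    assert count_triplets(J) == 5 + 6 * (J - 1)
print("counting 5 + 6(J - 1) confirmed for J = 1..7")
```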
Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited.
"Physics"
] |
Integrability of solutions of the Skorokhod Embedding Problem for Diffusions
Suppose $X$ is a time-homogeneous diffusion on an interval $I^X \subseteq \mathbb R$ and let $\mu$ be a probability measure on $I^X$. Then $\tau$ is a solution of the Skorokhod embedding problem (SEP) for $\mu$ in $X$ if $\tau$ is a stopping time and $X_\tau \sim \mu$. There are well-known conditions which determine whether there exists a solution of the SEP for $\mu$ in $X$. We give necessary and sufficient conditions for there to exist an integrable solution. Further, if there exists a solution of the SEP then there exists a minimal solution. We show that every minimal solution of the SEP has the same first moment. When $X$ is Brownian motion, every integrable embedding of $\mu$ is minimal. However, for a general diffusion there may be integrable embeddings which are not minimal.
Introduction
Let X be a regular, time-homogeneous diffusion on an interval I X ⊆ R, with X 0 = x ∈ int(I X ), and let µ be a probability measure on I X . Then τ is a solution of the Skorokhod embedding problem (Skorokhod [19]) for µ in X if τ is a stopping time and X τ ∼ µ. We call such a stopping time an embedding (of µ in X).
For a general Markov process Rost [18] gives necessary and sufficient conditions which determine whether a solution to the Skorokhod embedding problem (SEP) exists for a given target law. The conditions are expressed in terms of the potential. When applied to Brownian motion (where we include the case of Brownian motion on an interval subset of R, provided the process is absorbed at finite endpoints) these conditions lead to a characterisation of the set of measures which can be embedded in Brownian motion. Then, in the case of a regular, one-dimensional, time-homogeneous diffusion with absorbing endpoints, necessary and sufficient conditions for the existence of a solution to the SEP can be derived via a change of scale. Let s be the scale function of X; then Y = s(X) is a local martingale, and in particular a time-change of Brownian motion. Further, let I = s(I X ) be the state space of Y . Then the set of measures for which a solution of the SEP exists depends on both I and the relationship between the starting value of Y and the mean of the image under s of the target law, see Theorem 3 below.
Apart from the existence result above, most of the literature on the SEP has concentrated on the case where X is Brownian motion in one dimension. Exceptions include Rost [18] as mentioned above, Bertoin and LeJan [4], who consider embeddings in any time-homogeneous process with a well-defined local time, Grandits and Falkner [8] (drifting Brownian motion), Hambly et al [9] (Bessel process of dimension 3), and Pedersen and Peskir [14] and Cox and Hobson [6] (these last two consider embeddings in general time-homogeneous diffusions).
In the Brownian setting many solutions of the SEP have been described; see Obloj [12] or Hobson [10] for a survey. Given there are many solutions, it is possible to look for criteria which characterise 'small' or 'good' solutions. In both the Brownian case and more generally, there is a natural class of good solutions of the SEP, namely the minimal embeddings (Monroe [11]). An embedding τ is minimal if whenever σ ≤ τ is another embedding (of µ in X) then σ = τ almost surely.
Another criterion for a good solution might be that it is integrable, or as small as possible in the sense of expectation. In this article we are interested in the integrability or otherwise of solutions of the SEP, and in the relationship between integrability and minimality in the case where X is a one-dimensional time-homogeneous diffusion.
Consider the case where X is Brownian motion null at zero and write W for X. By the results of Rost [18] there exists a solution of the SEP for µ in W on R for any measure µ on R. If we require integrability of the embedding then the story is also well-known: Theorem 1 (Monroe [11]). There exists an integrable solution of the SEP for µ in W if and only if µ is centred and in L². Further, in the case of centred square-integrable target measures, τ is minimal for µ if and only if τ is an embedding of µ and E[τ] < ∞.
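As a quick illustration of the first-moment identity behind Theorem 1, the following Monte Carlo sketch (our own, with made-up parameters) embeds a centred two-point law via the exit time of an interval and compares E[τ] with ∫x² µ(dx):

```python
# Monte Carlo sketch (illustrative parameters, not from the paper): embed the
# centred two-point law mu = p*delta_a + (1 - p)*delta_b, p*a + (1 - p)*b = 0,
# in Brownian motion via the exit time of (a, b), and compare E[tau] with
# int x^2 mu(dx) = -a*b.
import numpy as np

rng = np.random.default_rng(0)
a, b = -1.0, 2.0                  # target atoms; centring gives p = b/(b - a)
dt, n_paths = 1e-3, 1000

taus = []
for _ in range(n_paths):
    w, t = 0.0, 0.0
    while a < w < b:
        w += np.sqrt(dt) * rng.standard_normal()
        t += dt
    taus.append(t)

print("E[tau]          ~", np.mean(taus))   # close to 2.0 up to MC error
print("int x^2 mu(dx)  =", -a * b)          # exactly 2.0
```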
Our goal in this paper is to consider the case where X is a regular timehomogeneous diffusion on an interval I X with absorbing endpoints. Let x ∈ int(I X ) denote the initial value of X, let m X denote the speed measure, and s X the scale function. Let µ be a probability measure on I X .
Our main result is as follows: Theorem 2. There exists an integrable solution of the SEP for µ in X if and only if E_X(x; µ) < ∞, where E_X(x; µ) is defined in (11) below. Further, when E_X(x; µ) < ∞, τ is minimal for µ if and only if τ is an embedding and E[τ] = E_X(x; µ).
In the Brownian case there is a dichotomy: for any embedding, either E[τ] = ∫x² µ(dx) or E[τ] = ∞, and so if the target law is square integrable then minimality of an embedding is equivalent to integrability. This is not true in general for diffusions: we can have integrable embeddings which are not minimal. The converse is also true: both in the Brownian case and more generally we can have minimal embeddings which are not integrable. This will be the case if E_X(x; µ) = ∞.
We close the introduction by considering a quartet of illuminating and motivating examples.
Example 1. Let Z = (Z_t)_{t≥0} be Brownian motion on R_+ absorbed at zero, with Z_0 = z > 0. Then there exists an embedding of µ if and only if ∫x µ(dx) ≤ z. Moreover, there exists an integrable embedding of µ in Z if and only if ∫x µ(dx) = z and ∫x² µ(dx) < ∞, and then an embedding τ is minimal if and only if E[τ] < ∞, if and only if E[τ] = ∫(x − z)² µ(dx). Note that Z is a supermartingale, so the necessity of ∫x µ(dx) ≤ z is clear.
In particular, suppose V solves V_t = v + aW_t + bt with b > 0 and W_0 = 0, and set β = 2b/a². Then there exists an embedding of µ if and only if ∫e^{−β(u−v)} µ(du) ≤ 1.
(Upward drifting Brownian motion is transient to +∞, and so there will be an embedding of µ provided µ does not place too much mass at values far below v.) Moreover, there exists an integrable embedding of µ if and only if ∫e^{−β(u−v)} µ(du) ≤ 1 and ∫u⁺ µ(du) < ∞. If there exists an integrable embedding, then an embedding τ is minimal if and only if E[τ] equals the common first moment of minimal embeddings.
Example 3. Let P = (P_t)_{t≥0} be a Bessel process of dimension 3 started at P_0 = p > 0.
Note that a Bessel process is transient to infinity, and so for there to exist an embedding of µ, µ cannot place too much mass near zero. For an integrable embedding, in addition, µ cannot have too much mass far from zero, as the process takes a long time to get there. Note also that Y = P^{−1} is a diffusion in natural scale, and that Y is the classical Johnson-Helms example of a local martingale which is not a martingale.
The results extend to the case p = 0. Then any µ on R + can be embedded in P . There exists an integrable embedding if and only if µ is square integrable.
Example 4. There is a construction of a stopping time τ such that τ is an embedding of µ and τ is integrable, but τ is not minimal.
Preliminaries, notation and the switch to natural scale
Let X be a time-homogeneous diffusion with state space I X , started at x ∈ int(I X ), and suppose that if X can reach an endpoint of I X , then such an endpoint is absorbing. Suppose that X is regular, ie for all x ′ ∈ int(I X ) and x ′′ ∈ I X , P x ′ (H x ′′ < ∞) > 0. Then, see Rogers and Williams [16] or Borodin and Salminen [5], X has a scale function s and Y = s(X) is a diffusion in natural scale on the interval I = s(I X ). Denote the endpoints of I by {ℓ, r} and suppose y = s(x) lies in (ℓ, r). Then we have −∞ ≤ ℓ < y < r ≤ ∞.
For a diffusion process Z let H^Z_z = inf{s ≥ 0 : Z_s = z}. Where the process Z involved is clear, the superscript may be dropped.
We have that (Y t∧H Y ℓ,r ) t≥0 is a continuous local martingale. In particular, we can write Y t = W Γt for some Brownian motion W started at y and a strictly increasing time-change Γ. We have already seen from Example 3 that Y may easily be a strict local martingale.
Let µ be a law on I X and define ν = µ • s −1 so that for a Borel subset of I, ν(A) = µ(s −1 (A)). Then τ is an embedding of µ in X if and only if τ is an embedding of ν in Y . Moreover, the integrability of τ is also unaffected by a change of scale, and thus we lose no generality in assuming that our diffusion is in natural scale. Minimality is another property which is preserved under a change of scale.
Henceforth, therefore, we assume we are given a local martingale diffusion Y on I with Y_0 = y ∈ int(I) and target measure ν on I. Provided ν ∈ L¹, write ν̄ for the mean of ν, with a similar convention for other measures. It follows from our assumption on X that if Y can reach an endpoint ℓ or r of I in finite time then that endpoint is absorbing. The diffusion Y in natural scale is characterised by its speed measure, which we denote by m. Recall that if Y solves the SDE dY_t = η(Y_t)dB_t for a continuous diffusion coefficient η, then m(dy) = dy/η(y)².
(i) Suppose I is a finite interval. Then ν can be embedded in Y if and only if y = ∫x ν(dx). The idea behind the proof is to write Y as a time-change of Brownian motion, Y_t = W_{Γ_t}. Then, since Y is absorbed at the endpoints, we must have that Γ_t ≤ H^W_{ℓ,r} for each t.
In the first case of the theorem Y is a bounded martingale and E[Y_τ] = y for any τ. In the second case Y is a local martingale bounded below, and hence a supermartingale, for which E[Y_τ] ≤ y. In the third case Y is a submartingale. Since I has a finite endpoint, Y is transient. Further, Y is a supermartingale. Let τ be an embedding of ν where ν̄ ≤ y, and let σ ≤ τ be another embedding. Then, from the supermartingale property, and since Y_σ and Y_τ are equal in law, the expectations agree. But Y is a time change of Brownian motion, Y_t = W_{Γ_t}, for some strictly increasing time-change Γ. Brownian motion has no intervals of constancy, and hence nor does Y. It follows that σ = τ almost surely and hence τ is minimal.
We close this section with a discussion of the Brownian case, including a partial proof of Theorem 1, followed by a discussion of the local martingale diffusion case.
For W a Brownian motion null at 0, W²_{t∧τ} − (t ∧ τ) is a martingale, and (2) follows, so that µ is centred and in L². Conversely, if µ is centred and in L² then there are several classical constructions which realise an integrable embedding, including those of Skorokhod [19] and Root [17]. See Obloj [12] or Hobson [10] for a discussion.
The final statement of Theorem 1 is deeper, and follows from Theorem 5 of Monroe [11]. One of the main goals of this work is to extend the work of Monroe to general diffusions. Note that the arguments above yield that in the Brownian case if τ is an embedding of µ and E[τ] < ∞ then E[τ] = ∫x² µ(dx), so that if µ is centred and in L² then every integrable embedding is minimal.
Consider now the case of a general diffusion Y in natural scale. Suppose Y_0 = y = 0 and that ν is centred. Then, to determine whether there might exist an integrable embedding, we expect to replace the condition ∫x² µ(dx) < ∞ of the Brownian case with some other integral test depending on the speed measure m of Y and the target measure ν. Indeed we find this is the case, with x² replaced by a convex function q defined in (4) in the next section.
But what if ν is not centred? In the Brownian case there is no hope that the target law can be embedded in integrable time, not least because E[H W x ] = ∞ for each non-zero x, but what if Y is some other diffusion?
Suppose the state space I of Y is unbounded above. Suppose Y_0 = y and ν ∈ L¹ with ν̄ = ∫x ν(dx) < y. (In this discussion we exclude the degenerate case where ν is a point mass at ℓ.) One candidate way to embed ν is to first wait until H^Y_{ν̄} = inf{t : Y_t = ν̄} and then to embed ν in Y started at ν̄, ie to set τ as in (3), where Θ is the shift operator Θ_t(ω(·)) = ω(t + ·) and τ_{ν̄,ν} is some embedding of ν in Y started at ν̄. Note that since I is unbounded above and Y is a time-change of Brownian motion, it follows that H^Y_{ν̄} is finite almost surely. The embedding in (3) will be integrable if both H^Y_{ν̄} and τ_{ν̄,ν} are integrable, and we can decide whether it is possible to choose τ_{ν̄,ν} integrable using the integral test of the centred case. Our results show that although embeddings of ν need not be of the form given in (3), nonetheless there exist integrable embeddings if and only if both E[H^Y_{ν̄}] < ∞ and there is an integrable embedding τ_{ν̄,ν} of ν in Y started at ν̄. In that case every minimal embedding has the same first moment.
Every minimal embedding has the same first moment
Let Y be a regular diffusion in natural scale on I ⊆ R. Suppose Y_0 = y. Let m denote the speed measure of Y, define q_u via (4) q_u(x) = 2∫_u^x (x − w) m(dw), and let q = q_y. Then q(Y_t) − t is a local martingale, null at zero.
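To see that this q is the natural generalisation of the Brownian case, the following symbolic check (assuming our reconstruction of (4) above) verifies that for Brownian motion, where m(dw) = dw, q_y reduces to the familiar (x − y)²:

```python
# Symbolic sanity check of our reconstruction of (4): with the Brownian
# speed measure m(dw) = dw, q_y(x) = 2 * int_y^x (x - w) dw should reduce
# to (x - y)^2, recovering the classical martingale (W_t - y)^2 - t.
import sympy as sp

x, y, w = sp.symbols("x y w", real=True)
q = 2 * sp.integrate(x - w, (w, y, x))      # m(dw) = dw for Brownian motion
print(sp.simplify(q - (x - y) ** 2))        # 0
```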
In the case of a diffusion in natural scale, the main result of this paper is Theorem 4, and our goal is to prove it. In this section we suppose that ν ∈ L¹ and −∞ ≤ ℓ < y < r ≤ ∞.
3.1. The centred case with support in a sub-interval. Suppose ν is a measure with mean y and support in a subset [L, R] ⊂ (ℓ, r) of I, where L < y < R.
In general, from Fatou's Lemma we know that for any embedding χ of ν a corresponding lower bound on E[χ] holds. Then if χ ≤ τ and both χ and τ are embeddings of ν, we must have χ = τ almost surely; hence τ is minimal. See also Proposition 4 in [1].
Suppose that σ is an embedding of ν. Our goal is to show that there exists an embedding σ̃ of ν such that σ̃ ≤ σ ∧ H_{L,R}; then σ̃ is minimal. Following a definition of Root [17], we define a barrier to be a closed subset B of the time-space domain such that if (t, x) ∈ B then (s, x) ∈ B for all s ≥ t. Let B be the space of all barriers, and given L, R with ℓ ≤ L < y < R ≤ r, let B_{L,R} be the set of all barriers B with (0, L) and (0, R) in B. Let ρ be the standard Euclidean metric on R². We map G into a bounded rectangle F = [0, 1] × [−1, 1] by (t, x) → (t/(1 + t), x/(1 + |x|)), and let r be the induced metric on G.
Lemma 2. Suppose ν has mean y and support in [L, R], and suppose that σ is an embedding of ν. Then there exists a barrier B such that Y_{τ_B∧σ} has law ν and τ_B ≤ H_{L,R}.
Proof. First suppose ν puts mass on a finite subset of points in [L, R]. In this case it is easy to prove the result by adapting the proof in Monroe [11], which is based on topological arguments; we choose instead to give a more probabilistic proof. Let ν be a measure on n + 2 points, and label the points y_0 < y_1 < ··· < y_n < y_{n+1}.
and note that η_b is a probability measure on the same points as ν with mean y. It follows that C_{≤,ν} has a minimal element, b say, and that the associated barrier B_ν satisfies τ_{B_ν} ∧ σ ≤ H_{y_0,y_{n+1}}; the result follows. Now consider the general case of a measure ν on [L, R] with mean y. Define an approximating finite set C_n, and let σ_n = inf{t ≥ σ : Y_t ∈ C_n} and ν_n = L(Y_{σ_n}). Then σ_n is a stopping time and ν_n has mean y and finite support. By the study of the previous case there is a barrier B_n such that Y_{τ_{B_n}∧σ_n} has law ν_n and τ_{B_n} ≤ H_{L,R}. We want to show that, down a subsequence, (B_n)_{n≥1} converges to a barrier B, τ_{B_n} converges almost surely to τ_B ≤ H_{L,R}, and Y_{σ∧τ_B} ∼ ν.
By the compactness of B_{[L,R]}, (B_n)_{n≥1} has a convergent subsequence; let B be the limit. Moving to the subsequence, we may assume that B_n → B, and we write τ_n as shorthand for τ_{B_n}.
Note that E[H_{L,R}] is finite; choose T > 2E[H_{L,R}]/ε, and n_0 such that the relevant suprema are small. Clearly P(F(y, s)) ≤ ε/2 for all (y, s). Hence, by the strong Markov property, and down a further subsequence if necessary, τ_n → τ_B almost surely, and thus Y_{σ∧τ_B} ∼ ν. For a diffusion Y with state space I, speed measure m and initial value Y_0 = y, and for a law ν on [L, R] with mean y, we have that E_Y(y; ν) = ∫q_y(x) ν(dx). Clearly E_Y(y; ν) < ∞ under the present conditions on ν.
Proof. By the first case of Theorem 3 there exists an embedding σ of ν in Y, and then by Lemma 2 there exists a minimal embedding σ̃ = σ ∧ τ_B with E[σ̃] = E_Y(y; ν). If σ is minimal then σ = σ̃ and E[σ] = E_Y(y; ν). Conversely, by the arguments at the end of Lemma 1, the same lower bound holds for any embedding. We construct a sequence of measures (ν_n)_{n≥n_0} with supports in bounded intervals [L_n, R_n] ⊂ (ℓ, r) and such that (ν_n)_{n≥n_0} converges to ν. Hence, given σ and ν_n, there is a barrier B_n with associated stopping time σ̃_n = τ_{B_n} ∧ σ such that Y_{σ̃_n} has law ν_n. For our specific choice of approximating sequence of measures we argue that the sequence of stopping times τ_{B_n} is monotonic increasing with limit τ_∞. Finally we show that σ ∧ τ_∞ is minimal and embeds ν.
Recall that our current hypothesis is that ν is a measure on I such that ν ∈ L¹ and Y_0 = y = ν̄.
For a measure η ∈ L¹ with mean c and support in [ℓ, r], define the potential U_η. Then U_η ∈ V_c, and there is a one-to-one correspondence between elements of V_c and probability measures on [ℓ, r] with mean c; moreover, for a pair of probability measures η_i with support in [ℓ, r], their potentials can be compared. Given ν, define potentials U_n and let ν_n be the probability measure with potential U_n. Then there exist {a_n, b_n} such that [a_n, b_n] ⊂ (ℓ, r), ν_n(A) = ν(A) for all measurable subsets A ⊂ (a_n, b_n), and ν_n([ℓ, a_n)) = 0 = ν_n((b_n, r]). Then ν_n has atoms at a_n and b_n and mean ν̄. Further, (a_n)_{n≥n_0} and (b_n)_{n≥n_0} are monotonic sequences and the family (ν_n)_{n≥n_0} is increasing in convex order.
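For intuition, here is a minimal numerical sketch of potentials of discrete measures; the displayed formula for U_η was lost in extraction, so we assume the usual convention U_η(x) = −∫|x − z| η(dz), under which measures with the same mean are ordered pointwise exactly when they are ordered in convex order:

```python
# Minimal numerical sketch, assuming the usual convention
# U_eta(x) = -int |x - z| eta(dz) for the potential (the displayed formula
# was lost in extraction). For two measures with the same mean, pointwise
# ordering of potentials corresponds to convex order.
import numpy as np

def potential(atoms, weights, x):
    atoms, weights = np.asarray(atoms), np.asarray(weights)
    return -np.sum(weights * np.abs(x - atoms))

nu_point = ([0.0], [1.0])                   # point mass at the common mean
nu_spread = ([-1.0, 1.0], [0.5, 0.5])       # same mean, larger in convex order

for x in np.linspace(-3, 3, 13):
    assert potential(*nu_point, x) >= potential(*nu_spread, x)
print("U_{delta_0} >= U_nu pointwise, as expected for convex order")
```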
Theorem 5. Suppose ν ∈ L¹ and Y_0 = y = ν̄. Let σ be an embedding of ν. Then there exists a barrier B such that τ_B ∧ σ also has law ν.
Proof. For each n, fix ν_n as above. From our study of the bounded case we know there is a barrier B_n, which we may assume contains {(t, x) : x ≤ a_n or x ≥ b_n}, such that Y_{τ_{B_n}∧σ} has law ν_n. We now show that if p > n then B_p ⊂ B_n.
Suppose A ⊂ [a_n, b_n]. If A ⊂ A_{n,p} and Y_{σ∧τ_{B_n∪B_p}} ∈ A, then Y_{σ∧τ_{B_n}} ∈ A. Thus for every set A ⊂ [a_n, b_n], ν_n(A) = P(Y_{σ∧τ_{B_n}} ∈ A) ≥ P(Y_{σ∧τ_{B_n∪B_p}} ∈ A). Hence there must be equality throughout, and B_n ∪ B_p ∈ B_n. Now fix a sequence (B_n)_{n≥1} with B_n ∈ B_n. Let B̄_n be the closure of ∪_{i=n}^∞ B_i. We aim to show that B̄_n ∈ B_n. For k > n let B^k_n = ∪_{i=n}^k B_i. By the arguments of the previous paragraphs, B^k_n ∈ B_n. Since the set of barriers is compact, B^k_n converges to B̄_n as k ↑ ∞ and τ_{B^k_n} ↓ τ_{B̄_n} (note that τ_{B^k_n} ≤ T_{a_n,b_n} < ∞). Hence, since paths of Y are continuous, B̄_n ∈ B_n. It follows that for p > n, B̄_p ⊂ B̄_n, and without loss of generality we shall assume that B_p ⊂ B_n.
Define B_∞ = ∩B̄_n and set τ_∞ = τ_{B_∞}. Then τ_{B_n} ↑ τ_∞. Also τ_{B_n} ∧ σ ↑ τ_∞ ∧ σ and L(Y_{τ_∞∧σ}) = lim L(Y_{τ_{B_n}∧σ}) = lim ν_n = ν. It only remains to prove the expression for the first moment.
3.3. The uncentred case. Without loss of generality we may assume that the mean of ν satisfies ν̄ < y. Then for there to be an embedding of ν we must have that I is unbounded above.
Again we construct a sequence of measures (ν n ) n≥n0 with supports in bounded intervals [L n , R n ] ⊂ (ℓ, r) and such that (ν n ) n≥n0 converges to ν.
Proof. It only remains to cover the case where Y_0 = y ≠ ν̄; we may assume y > ν̄. For each n, fix ν_n as above. From our study of the bounded, centred case we know there is a barrier B_n, which we may assume contains {(t, x) : x ≤ a_n or x ≥ b_n ≡ n}, such that Y_{τ_{B_n}∧σ} has law ν_n. Moreover, exactly as in the proof of Theorem 5, and with similar notation, it follows that if p > n then B_p ⊂ B_n, that τ_{B_n} ↑ τ_∞, and that τ_∞ ∧ σ embeds ν. Finally, observe that q is convex, and so lim_n q(n)/n exists in (0, ∞]. Further, y = ∫x ν_n(dx) = ∫_{u_n}^{v_n} max{F_ν^{−1}(u), ℓ + 1/n} du + n(1 + u_n − v_n), and hence lim_n n(1 + u_n − v_n) exists and is equal to y − ν̄. Then the conclusion follows as before.
Proof of Theorem 4 in the case ν ∈ L¹. If E_Y(y; ν) = ∞ then, since any embedding σ has E[σ] ≥ E[σ ∧ τ_∞] = E_Y(y; ν), there are no integrable embeddings. Conversely, if E_Y(y; ν) < ∞, then by Theorem 5 or Theorem 6 there exists an embedding σ̃ with E[σ̃] = E_Y(y; ν).
Example 5. The following example shows that unlike in the Brownian case, in general integrability alone is not sufficient for minimality.
Suppose the diffusion Y solves the given SDE. Then Ĥ embeds ν and E[Ĥ] < ∞, but Ĥ is not minimal, since Ĥ > H^Y_1 ∧ H^Y_{−1}, which is also an embedding of ν.
Example 6. This example gives another circumstance in which integrability is not sufficient to guarantee minimality.
Let Y be a time-homogeneous martingale diffusion on I = [ℓ, r] with −∞ < ℓ < y < r < ∞. Suppose ℓ and r are exit boundaries and that E[H Y ℓ,r ] < ∞. We take ℓ and r to be absorbing boundaries. (A simple example is obtained by taking Brownian motion started at y and absorbed at ℓ and r.) Let ν = (r − y)/(r − ℓ)δ ℓ + (y − ℓ)/(r − ℓ)δ r . Then for c > 0, H Y ℓ,r + c is an integrable embedding which is not minimal.
However, examples of this type are degenerate and may easily be excluded by restricting the class of embeddings to those satisfying σ ≤ H Y ℓ,r . Example 7. Now we give an example which shows that minimality alone is not sufficient for integrability.
Let Y be geometric Brownian motion, so that Y solves dY_t = Y_t dW_t, with initial value Y_0 = 1. It is easy to compute the hitting probabilities of levels a ∈ (0, 1]. Let ν = δ_0. Then τ = ∞ is the minimal stopping time that embeds ν in Y; obviously τ is not integrable. More generally, let ν be any probability measure on (0, 1) with ∫log y ν(dy) = −∞, and let Z be a random variable such that L(Z) ∼ ν. Let the filtration F = (F_t)_{t≥0} be such that Z is F_0-measurable, and let W be an F-Brownian motion which is independent of Z.
Let τ = inf{u ≥ 0 : Y_u = Z}. Then τ is an embedding of ν. Note that τ is a stopping time with respect to F, but not with respect to the smaller filtration generated by Y alone. Moreover, q_1(x) = 2(x − 1 − log x), and hence lim_{x→∞} q_1(x)/x = 2. Therefore, for any law ν on (0, 1) with ∫log y ν(dy) = −∞, a minimal embedding cannot be integrable. We give another example of a minimal non-integrable embedding which does not require independent randomisation in the section on the Azéma-Yor stopping time.
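The claim about q_1 can be checked symbolically; the following sketch (our computation, using the speed measure m(dw) = dw/w² implied by dY = Y dW) recovers q_1(x) = 2(x − 1 − log x) and the limit q_1(x)/x → 2:

```python
# Symbolic check (our computation): for dY = Y dW the speed measure is
# m(dw) = dw / w^2, so q_1(x) = 2 * int_1^x (x - w) / w^2 dw
#                             = 2 * (x - 1 - log x),  and q_1(x)/x -> 2.
import sympy as sp

x, w = sp.symbols("x w", positive=True)
q1 = 2 * sp.integrate((x - w) / w**2, (w, 1, x))
print(sp.simplify(q1))                      # 2*x - 2*log(x) - 2
print(sp.limit(q1 / x, x, sp.oo))           # 2
```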
Another feature of this example is that Y is a martingale, and yet it is easy to construct examples with ν̄ < y for which there is an integrable embedding. Hence integrability and minimality of τ is not sufficient for uniform integrability of (Y_{t∧τ})_{t≥0}.
Alternative characterisations of E
In the comments before Theorem 4 we argued that in the non-centred case a natural family of embeddings consists of those which first wait for the process to hit ν̄ and then embed ν in Y started at ν̄. For a stopping rule τ as given in (3) we have, from the analysis of the centred case, the expression (7) for E[τ]. Now we want to show that the right-hand side of (7) is equivalent to the expression given in (5). More generally, for v ∈ [ν̄, y] we could imagine waiting for the process to hit v and then using a minimal embedding time to embed ν in Y started at v; then we find (8). We want to show that the right-hand side of (8) does not depend on v.
If this expression is finite for any (and then all) v, we have the following.
Proof. For any u and v, we have the identity
which does not depend on v.
Non-integrable target laws.
We have seen that if ν ∈ L¹ then there exists an integrable embedding of ν if E_y[H_{ν̄}] and ∫q_{ν̄}(x) ν(dx) are both finite. In this short section we argue that if Y_0 = y ∈ (ℓ, r) and ν ∉ L¹ then there does not exist an integrable embedding of ν.
Note first that q = q_y is non-negative and convex, and hence q(x) ≥ α|x − y| − β for some pair of finite positive constants α, β. Let T_n be a localising sequence for the local martingale {q(Y_{t∧σ}) − (t ∧ σ)}_{t≥0}. Then the claim follows by an argument similar to that in the proof of Lemma 1.
Diffusions started at entrance points.
In the proofs of the main results we assumed that Y started at an interior point in (ℓ, r). Now we consider what happens if we start at a boundary point. The motivating example is a Bessel process in dimension 3 started at zero.
After a change of scale we may assume that we are working with a diffusion in natural scale. Then, if the boundary point is finite and an entrance point, it must also be an exit point (for terminology, see Borodin and Salminen [5, Section II.6]). We have assumed exit boundary points to be absorbing. It follows that an entrance point must be infinite; without loss of generality we assume that Y starts at +∞ and that I = (ℓ, ∞), where we may have ℓ = −∞.
So suppose that ∞ is an entrance-not-exit point. In particular, E_∞[H_z] < ∞ for some z ∈ (ℓ, ∞), or equivalently ∫^∞ z m(dz) < ∞. We show that the results of previous sections pass over to this case with a small modification. We suppose the initial sigma-algebra F_0 is sufficiently rich as to include an independent, uniformly distributed random variable.
Remark 1.
Note that E_Y(∞; ν) can be rewritten in a form from which it follows that if ℓ = −∞ and ∫_{−∞}^0 |x| ν(dx) = ∞ then E_Y(∞; ν) = ∞. However, if ν has support in [L, ∞] for L > ℓ, or if ∫_ℓ m(dz) and ∫_ℓ^0 |x| ν(dx) are finite (the latter is always true if ℓ > −∞), then it is possible to have ν ∉ L¹ and still have E_Y(∞; ν) < ∞ and the existence of integrable embeddings. For example, this occurs when Y solves dY_t = Y_t² dB_t subject to Y_0 = ∞ and ν is a suitable measure. This last expression has a clear interpretation as the sum of E_∞[H_{ν̄}] and the expected time to embed the law ν in Y started at ν̄ using a minimal embedding. It follows that if ν ∈ L¹ and there exists an integrable embedding of ν started at ν̄, then the stopping time 'run until Y hits the mean, and then use a minimal embedding to embed ν in Y started from the mean' is a minimal and integrable embedding.
Proof of Theorem 7. Suppose first that E_Y(∞; ν) is finite. By assumption, F_0 is sufficiently rich as to include a uniform random variable. (Note that if ν includes an atom at ∞, independent randomisation of this form will always be necessary to construct an embedding.) Then there exists a random variable Z with law ν, and setting σ = inf{u ≥ 0 : Y_u ≤ Z} we have Y_σ ∼ ν. If ν ∈ L¹ then we do not need independent randomisation: in this case both E_∞[H_{ν̄}] and ∫q_{ν̄}(y) ν(dy) are finite (since E_Y(∞; ν) is), and there exists a minimal and integrable embedding τ_{ν̄,ν} of ν in Y started at ν̄. Now suppose there is an integrable embedding. Then there exists an integrable minimal embedding, σ say. The remaining parts of the theorem will follow if we can show that E[σ] = E_Y(∞; ν). So, suppose σ is integrable and minimal. Since ∞ is an entrance boundary, there exists N such that E_∞[H_n] < ∞ for n ≥ N. For n ≥ N let σ̂_n = max{σ, H_n} and let ν̂_n = L(Y_{σ̂_n}). Write σ̂_n = H_n + σ̃_n, where σ̃_n = (σ − H_n)⁺, and let ν̃_n = L(Y^n_{σ̃_n}), where the superscript reflects the fact that Y starts at n.
First we argue that for each n ≥ N, σ̃_n is minimal for ν̃_n in Y started at n. Suppose ρ̃_n ≤ σ̃_n also embeds ν̃_n in Y started from n. If ρ is defined by ρ = σ on {σ < H_n} and ρ = H_n + ρ̃_n on {σ ≥ H_n}, then ρ ≤ σ and Y_ρ ∼ Y_σ. By minimality of σ we conclude that ρ = σ, and hence ρ̃_n = σ̃_n.
Recovering results for general diffusions
Let X = (X_t)_{t≥0} be a time-homogeneous one-dimensional diffusion with state space I_X, and suppose X solves dX_t = a(X_t)dW_t + b(X_t)dt subject to X_0 = x. Then, provided b/a² and 1/a² are locally integrable, X has scale function s = s_X and speed measure m_X given by the standard formulas s′(x) = exp(−∫^x (2b(u)/a(u)²) du) and m_X(dx) = dx/(a(x)² s′(x)).
Moreover, q_y(z) = 2∫_y^z (z − w) m(dw). For definiteness suppose s(x) ≥ ν̄, and denote by r the upper limit of I and by r_X the upper limit of I_X; then r = ∞. In general, therefore, for x ∈ int(I_X) set E_X(x; µ) = ∞ if ∫_{I_X} |s(z)| µ(dz) = ∞, and otherwise define E_X(x; µ) by (11), where I is the indicator function. As is the case for diffusions in natural scale, there is a second representation of E_X, namely (12), in terms of the expected value of the first hitting time of the weighted mean of the target law together with the expected value of an embedding in a process started at the weighted mean. Note that in this expression q is defined for the transformed process in natural scale.
Proof of Theorem 2. τ is minimal for µ in X started at x if and only if τ is minimal for ν in Y started at y = s(x). Furthermore, τ is an integrable embedding of µ if and only if τ is an integrable embedding of ν.
where E X is defined in either (11) or (12).
Remark 2. Drifting Brownian motion was the subject of Grandits and Falkner [8], and the conclusion of the previous example is contained in their Proposition 2.2.
Note that in the case of drifting Brownian motion the scale function is explicit. Hence, for an embedding τ of µ, the result E[τ] = E_X(x; µ) = (∫z µ(dz) − x)/b is not unexpected, and can be proved directly by other means.
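The Wald-type identity just mentioned can be illustrated numerically; the sketch below (illustrative parameters only) embeds a point mass by a first hitting time and compares the mean stopping time with (∫z µ(dz) − x)/b:

```python
# Monte Carlo sketch (illustrative parameters): for dX = a dW + b dt with
# b > 0, optional stopping applied to the martingale X_t - x - b*t gives
# E[tau] = (mean(mu) - x) / b. Here mu = delta_{z0}, embedded by the first
# hitting time of the level z0.
import numpy as np

rng = np.random.default_rng(1)
x0, a, b, z0 = 0.0, 1.0, 0.5, 1.0
dt, n_paths = 1e-3, 1000

taus = []
for _ in range(n_paths):
    xt, t = x0, 0.0
    while xt < z0:
        xt += a * np.sqrt(dt) * rng.standard_normal() + b * dt
        t += dt
    taus.append(t)

print("E[tau] ~", np.mean(taus))            # close to (z0 - x0)/b = 2.0
```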
Minimality and Integrability of the Azéma-Yor embedding
Azéma and Yor [3,2] (see also Rogers and Williams [16,Theorem VI.51.6] and Revuz and Yor [15,Theorem VI.5.4]), give an explicit construction of a solution of the SEP for Brownian motion. The original paper [3] assumes the target law is centred and square integrable, but the L 2 condition is replaced with a uniform integrability condition in [2], see also [15]. Azéma and Yor [3] also indicate how the results can be extended to diffusions, provided that the process is recurrent and provided that once the process has been transformed into natural scale, the mean of the target law is equal to the initial value of the diffusion.
The Azéma-Yor stopping time for a centred target law ν in Brownian motion W null at zero is (13) τ^W_{AY,ν} = inf{u : W_u ≤ β_ν(J^W_u)}, where J^W is the maximum process J^W_u = sup_{s≤u} W_s, and β_ν is the left-continuous inverse barycentre function, ie β_ν = b_ν^{−1}, where for a centred distribution η, b_η(x) = E_{Z∼η}[Z | Z ≥ x]. The Azéma-Yor embedding has become one of the canonical solutions of the SEP because it does not involve independent randomisation and because it is possible to give an explicit form for the stopping time. Further, amongst uniformly integrable (or equivalently minimal) solutions of the SEP for Brownian motion, the Azéma-Yor solution has the property that it maximises the law of the stopped maximum, ie for all increasing functions H, E[H(J^W_τ)] is maximised over minimal embeddings τ of ν in W by τ^W_{AY,ν}. In the case where ν ∈ L¹ but ν is not centred, Pedersen and Peskir [14] make the simple observation that we can embed ν by first running the Brownian motion until it hits ν̄ and then embedding ν in Brownian motion started at ν̄ using the classical centred Azéma-Yor embedding; this gives the stopping time τ_{PP,ν}. However, if the Brownian motion is null at zero and ν̄ < 0, then the embedding τ_{PP,ν} no longer maximises the law of the stopped maximum. Instead, Cox and Hobson [6] introduce an alternative modification of the Azéma-Yor stopping time which does maximise the law of the stopped maximum, and it is this embedding which we will study here. In fact the expected value of any embedding of the form H^Y_{ν̄} + τ_{ν̄,ν} ∘ Θ_{H^Y_{ν̄}} can be found very easily, and our aim here is to analyse an embedding which is not of this form.
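To make the rule (13) concrete, here is a simulation sketch (our own construction, for a made-up discrete centred target law); for such a law the barycentre function is a step function and its left-continuous inverse can be tabulated directly:

```python
# Simulation sketch (our own construction, made-up target law) of the
# Azema-Yor rule tau = inf{u : W_u <= beta_nu(J_u)} for a discrete centred
# law nu. The barycentre b_nu(x) = E[Z | Z >= x] is constant on each
# interval (a_{i-1}, a_i]; above the support we use the convention b(x) = x.
import numpy as np

atoms = np.array([-2.0, 0.0, 1.0])
weights = np.array([0.25, 0.25, 0.50])          # centred: sum(atoms*weights)=0

# v[i] = value of b on (a_{i-1}, a_i] = mean of the atoms >= a_i
v = np.array([np.dot(atoms[i:], weights[i:]) / weights[i:].sum()
              for i in range(len(atoms))])

def beta(z):
    """Left-continuous inverse of the barycentre function."""
    if z <= v[0]:
        return -np.inf
    if z > atoms[-1]:
        return z                                 # b(x) = x above the support
    i = np.searchsorted(v, z, side="left")       # smallest i with v[i] >= z
    return atoms[i - 1]

rng = np.random.default_rng(2)
dt, n_paths = 1e-3, 1000
stopped = []
for _ in range(n_paths):
    w = j = 0.0
    while w > beta(j):
        w += np.sqrt(dt) * rng.standard_normal()
        j = max(j, w)
    stopped.append(w)

hits = np.array(stopped)                         # empirical law of W_tau
for atom, p in zip(atoms, weights):
    print(f"atom {atom:+.0f}: target {p:.2f}, "
          f"empirical {np.mean(np.abs(hits - atom) < 0.15):.2f}")
```

Up to discretisation error, the empirical stopped law reproduces the target weights, and paths stopped at the top atom are exactly those whose maximum first exceeds the upper edge of the support.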
(Here the arg inf in (14) may not be uniquely defined, but we can make the choice of β_ν unique by adding a left-continuity requirement.) The Cox-Hobson extension of the Azéma-Yor embedding is then given by (15). Note that if ν̄ ≥ w, then for z ∈ [w, ν̄] we have β_ν(z) = −∞; in this case the Cox-Hobson and Pedersen-Peskir embeddings are identical. However, if ν̄ < w then the Cox-Hobson and Pedersen-Peskir embeddings are distinct.
To ease the exposition we assume that ν has a density ρ. (The general case can be recovered by approximation, or by taking careful consideration of atoms.) Then b = β_ν^{−1} solves (16) (b(y) − y)ν((y, ∞)) = D_ν(y); b is differentiable, and ν((y, ∞))b′(y) = (b(y) − y)ρ(y). Then, writing τ for τ^W_{CH,ν} and L(ν) for the lower limit of the support of ν, and using excursion-theoretic arguments, one shows that τ^W_{CH,ν} is an embedding of ν. Cox and Hobson [7] prove that the embedding in (15) is minimal; a by-product of the subsequent arguments in this section is a proof of minimality by different means. Note that this is only relevant in the case I = R, else every embedding is minimal.
Let Y be a regular diffusion in natural scale. Then by the Dambis-Dubins-Schwarz theorem Y can be written as a time-change of Brownian motion: Y t = W [Y ]t for some Brownian motion (on a filtration and probability space constructed from the original space supporting Y ). Then if we set Q = [Y ] −1 we have W t = Y Qt . Conversely, let W be Brownian motion and let (L W t (z)) t≥0,z∈R be its family of local times. Given a measure m on I (with a strictly positive density with respect to Lebesgue measure), set A s = I m(dz)L W s (z). Then A is strictly increasing and continuous (at least until W hits an endpoint of I) and we can define an inverse Γ = A −1 . Finally set Y t = W Γt ; then Y is a diffusion in natural scale with speed measure m.
It follows that if τ is a solution of the SEP for ν in W then Q τ is a solution of the SEP for ν in Y . Similarly, if σ is the solution of the SEP in Y , then Γ σ is a solution of the SEP in W . Hence there is a one-to-one correspondence between solutions of the SEP for ν in W and solutions for ν in Y .
Recall that we are supposing that ν ∈ L¹. (Note that if ν ∉ L¹ then it is not possible to define D_ν(·), and the Azéma-Yor solution is not defined.) Suppose also that w > ν̄, which is the interesting case in which the Pedersen-Peskir and Cox-Hobson embeddings are distinct. By analogy with (15), define τ^Y_{CH,ν} using β_ν as defined in (14). Then τ = τ^Y_{CH,ν} inherits the embedding property from τ^W_{CH,ν} and is a solution of the SEP for ν in Y. Now consider the question of minimality. It is clear that τ^W_{CH,ν} is minimal for ν in W if and only if τ is minimal for ν in Y. Since ν̄ ≠ w, τ^W_{CH,ν} is not integrable, but τ may be integrable. Further, if τ is integrable for ν in Y started at w and if E_Y(w; ν) < ∞, then τ is minimal if and only if E[τ] = E_Y(w; ν). In particular, if we choose the diffusion Y so that its speed measure satisfies m(R) < ∞, then necessarily E_Y(w; ν) < ∞ (recall ν ∈ L¹). The minimality of τ for ν in Y, and hence the minimality of τ^W_{CH,ν}, will follow if we can show E[τ] = E_Y(w; ν). We compute E[τ] (recall w > ν̄) using excursion theory for the first line (see also Pedersen and Peskir [13, Theorem 4.1]), the identity (J^Y_τ ≥ z) = (Y_τ ≥ β(z)) for the second line, b′(y) = ρ(y)(b(y) − y)/ν((y, ∞)) almost everywhere for the third, and the fact that b(y) ≥ w for the final line.
These statements are consistent with the case c = 0 of absorbing Brownian motion. Then ν can be embedded in integrable time if and only if ν̄ = 1 and ν ∈ L², or equivalently φ = 1 + 1/θ and φ > 2.
7.2. An example of Pedersen and Peskir. Pedersen and Peskir [14] give the expected time for a Bessel process to fall below a constant multiple of the value of its maximum, ie they find E[τ_{PAY}], where τ_{PAY} = inf{u > 0 : P_u ≤ λJ^P_u} and λ < 1. They find the answer by solving a differential equation subject to boundary conditions and a minimality principle. We can recover their result directly using our methods.
"Mathematics"
] |
Transcription Factor: A Powerful Tool to Regulate Biosynthesis of Active Ingredients in Salvia miltiorrhiza
Salvia miltiorrhiza Bunge is a common Chinese herbal medicine, and its major active ingredients are phenolic acids and tanshinones, which are widely used to treat vascular diseases. However, wild S. miltiorrhiza possesses low levels of these important pharmaceutical agents; thus, improving their levels is an active area of research. Transcription factors, which promote or inhibit the expression of multiple genes involved in one or more biosynthetic pathways, are powerful tools for controlling gene expression in biosynthesis. Several families of transcription factors have been reported to participate in regulating phenolic acid and tanshinone biosynthesis and to influence their accumulation. This review summarizes the current status of this field, with a focus on the transcription factors identified in recent years and their functions in the biosynthetic regulation of phenolic acids and tanshinones. Finally, the application of transcription-factor-mediated regulation of active ingredients in S. miltiorrhiza is discussed, and new insights for future research are explored.
INTRODUCTION
Salvia miltiorrhiza Bunge has a small genome, which makes it a model medicinal plant for study. The main active ingredients of S. miltiorrhiza can be divided into two groups: water-soluble phenolic acids and liposoluble diterpenoid tanshinones. Phenolic acids, such as rosmarinic acid and the salvianolic acids, are antibacterial, anti-oxidative, and antiviral agents (Wenping et al., 2011), while tanshinones, such as tanshinone I, tanshinone IIA, dihydrotanshinone I, tanshinone IIB, and cryptotanshinone, exhibit antitumor, antioxidant, and anti-inflammatory activities.
Not surprisingly, initial investigations of phenolic acids and tanshinones mainly focused on establishing their biosynthetic pathways. The biosynthetic pathways of phenolic acids and tanshinones in S. miltiorrhiza have been studied by overexpressing or inhibiting key enzyme genes (Gao et al., 2009; Kai et al., 2011; Ma et al., 2013). However, this approach has limited efficiency compared to transcriptional regulation. Transcription factors (TFs) in plants regulate biological processes by activating or inhibiting one or multiple pathways (Gao et al., 2014). To date, more than 1,300 TFs have been detected in S. miltiorrhiza (Wenping et al., 2011; Luo et al., 2014), including WRKYs, bHLHs, MYBs, AP2/ERFs, and so on. However, the regulatory mechanisms of active-ingredient biosynthesis in S. miltiorrhiza are still poorly understood.
Here we review the biosynthetic pathways of phenolic acids and tanshinones in S. miltiorrhiza, with particular focus on the TFs that regulate the pathways, and highlight effective research approaches for improving the active ingredients of medical plants.
BIOSYNTHETIC PATHWAYS OF PHENOLIC ACIDS AND TANSHINONES
The simplified biosynthetic pathways of phenolic acids and tanshinones are shown in Figure 1A.
All terpenoids are synthesized by the sequential assembly of five-carbon building blocks (C5H8) called isoprene units; four isoprene units constitute a diterpene (Yang et al., 2016). Isopentenyl diphosphate (IPP) and its isomer dimethylallyl diphosphate (DMAPP) are the two precursors of all terpenoids and are synthesized via two independent pathways: the methylerythritol phosphate (MEP) pathway in the plastids and the mevalonate (MVA) pathway in the cytosol. It has been proposed that tanshinones are chiefly synthesized via the MEP pathway rather than the MVA pathway. Geranyl diphosphate synthase (GPPS), farnesyl diphosphate synthase (FPPS), and geranylgeranyl diphosphate synthase (GGPPS) then successively catalyze DMAPP and IPP to form geranylgeranyl diphosphate (GGPP), the universal precursor of all diterpenoids (Dong et al., 2011). The skeleton miltiradiene in tanshinone biosynthesis is formed by SmCPS1, SmCPS2, and SmKSL1. In the downstream pathway, P450s participate in tanshinone biosynthesis: Guo et al. (2013) found that the P450 monooxygenase CYP76AH1 transforms miltiradiene into ferruginol. However, the genes responsible for the post-modification steps of the biosynthetic pathway require further investigation.
TFs REGULATING BIOSYNTHESIS OF PHENOLIC ACID AND TANSHINONE
In plants, the regulation and accumulation of secondary metabolites is usually controlled by a complex network containing TFs (Yang et al., 2012), and TFs act as switches in this regulatory network. The action of TFs has three traits: (1) TFs act alone or in a combinatorial fashion with other TFs to modulate the expression of target genes (Pinson et al., 2009; Goossens et al., 2017); (2) TFs can positively or negatively regulate biosynthetic pathways (Table 1); (3) one TF can regulate the expression of multiple genes participating in one or more biosynthetic pathways (Goossens et al., 2017; Hassani et al., 2020; Table 1).
Currently, several TFs which can regulate phenolic acid and tanshinone biosynthesis have been characterized, and a transcriptional regulation network of ingredients in S. miltiorrhiza is shown in Figure 1.
bHLH Family
The bHLH family is the second largest class of plant TFs (Feller et al., 2011; Goossens et al., 2017), defined by a specific DNA-binding domain. bHLH proteins harbor two functionally distinct regions within a stretch of about 60 amino acids: the basic region at the N-terminus, which binds the E-box DNA motif (CANNTG), and the HLH motif, which often forms homodimers or heterodimers with other bHLH proteins (Feller et al., 2011; Shen et al., 2016; Xing et al., 2018b). MYC TFs, belonging to the bHLH family, possess a JAZ interaction domain (JID) in the N-terminal region, which differentiates MYCs from other bHLH proteins (Kazan and Manners, 2013). The bHLH family plays an important part in regulating the biosynthesis of secondary metabolites, such as the flavonoid pathway in Arabidopsis thaliana (Outchkourov et al., 2014), the iridoid pathway in Catharanthus roseus (Van Moerkercke et al., 2016), and the anthocyanin pathway in Chrysanthemum morifolium (Xiang et al., 2015).
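As a toy illustration of the binding sites mentioned above, the following snippet scans a promoter sequence for E-box (CANNTG) and G-box (CACGTG) motifs; the sequence is invented for demonstration only:

```python
# Toy illustration (hypothetical promoter sequence, invented for the
# example): locating the E-box (CANNTG) and G-box (CACGTG) motifs that
# bHLH TFs such as SmMYC2a or SmbHLH10 are reported to bind.
import re

promoter = "ATCGCACGTGATTTCAGTTGACCTTCAATTGGCA"   # made-up sequence

eboxes = [m.start() for m in re.finditer(r"CA..TG", promoter)]
gboxes = [m.start() for m in re.finditer(r"CACGTG", promoter)]
print("E-box (CANNTG) positions:", eboxes)       # [4, 14, 25]
print("G-box (CACGTG) positions:", gboxes)       # [4]
```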
Eight bHLH TFs have been reported to participate in the regulation of active-ingredient biosynthesis in S. miltiorrhiza, namely, SmMYC2, SmMYC2a, SmMYC2b, SmbHLH51, SmbHLH10, SmbHLH148, SmbHLH3, and SmbHLH37. MYC2 is a core TF in plants and is responsive to jasmonates. Zhou et al. (2016) discovered that the overexpression of SmMYC2 could significantly increase the yields of phenolic acids by simultaneously upregulating the phenylpropanoid and tyrosine biosynthesis pathways. SmMYC2a regulates the phenolic acid biosynthetic pathway by binding to an E-box motif within the promoters of SmCYP98A14 and SmHCT6, while SmMYC2b binds only to an E-box motif within the promoter of SmCYP98A14. Zhang et al. (2020) overexpressed bHLH3 in S. miltiorrhiza, and the contents of caffeic acid (CA), salvianolic acid B (Sal B), and rosmarinic acid (RA) were decreased by 50, 62, and 50%, respectively, compared with the control; in addition, the four tanshinone ingredients cryptotanshinone (CT), tanshinone I (T-I), tanshinone IIA (T-IIA), and dihydrotanshinone I (DT-I) decreased to 3, 14.48, 9, and 38% of the control, respectively. Interestingly, SmbHLH37, another bHLH TF of subfamily R like SmbHLH3, negatively regulates the biosynthesis of phenolic acids through a dual effect: repressive binding to the promoters of biosynthetic genes, and a negative feedback loop on jasmonic acid accumulation. Along with suppressing key enzyme genes of the biosynthetic pathway, SmbHLH37 antagonizes the transcriptional activator SmMYC2 and can interact with SmJAZs. In addition, SmbHLH51 positively regulates phenolic acids by upregulating many enzyme genes in the biosynthetic pathways. SmbHLH10 can directly bind to the G-box within the promoters of pathway genes, activate their expression, and thereby up-regulate tanshinone biosynthesis (Xing et al., 2018b). Xing et al. (2018a) found that SmbHLH148 induced the accumulation of phenolic acids and tanshinones by activating virtually the whole biosynthetic pathway of both.
MYB Family
The MYB family is one of the largest TF families in plants; its DNA-binding domain consists of up to three imperfect repeats (R1, R2, and R3). MYB proteins are classified into four groups based on the number of adjacent repeats: 1R (R1/2, R3-MYB), 2R (R2R3-MYB), 3R (R1R2R3-MYB), and 4R (harboring four R1/R2-like repeats). The MYB family is known to participate in the regulation of primary metabolism, secondary metabolism, and plant development (Dubos et al., 2010).
It has been suggested that subgroup 4 of the MYB family has a negative effect on the accumulation of phenylpropanoid metabolites and acts as a set of transcriptional repressors of the phenylpropanoid pathway by suppressing the transcription of key enzymes. Zhang et al. (2020) found that SmMYB39, a MYB TF in subgroup 4, acts as a repressor of the rosmarinic acid pathway: the transcripts and enzyme activities of C4H and TAT, two key enzyme genes, were all down-regulated by SmMYB39. Deng et al. (2020) found that SmMYB2 activates expression and promotes salvianolic acid accumulation by binding to the MBS1/MBS2/MRE elements within the promoter of CYP98A14. MYBs belonging to subgroup 20, such as SmMYB9b and SmMYB98b, act as direct activators of tanshinone biosynthesis (Xing et al., 2018a), while SmMYB98 can promote both tanshinone and phenolic acid accumulation (Hao et al., 2020). Ding et al. (2017) found that SmMYB36, a novel member of the R2R3-MYB family, alone or in SmMYB36-bHLH complexes, could up-regulate tanshinone biosynthesis but inhibit phenylpropanoid biosynthesis in S. miltiorrhiza hairy roots. Moreover, SmMYB36 can not only influence secondary metabolism but also regulate primary metabolism, and may be a potential tool to alter metabolic flux. Overexpression or suppression of SmMYB111 can up-regulate or down-regulate, respectively, the production of Sal B, and it has been speculated that SmTTG1-SmMYB111-SmbHLH51, a ternary transcription complex, may act as a positive regulator of the phenolic acid pathway. SmMYB1 promotes phenolic acid biosynthesis by activating the expression of CYP98A14; interestingly, the interaction between SmMYB1 and SmMYC2 additively activates the CYP98A14 promoter (Zhou et al., 2021).
AP2/ERF Family
Four AP2/ERFs in S. miltiorrhiza have been studied as regulators of tanshinone and phenolic acid biosynthesis. Sun et al. (2019) found that the overexpression of SmERF115 reduced the yield of tanshinones but increased the yield of phenolic acids; it is speculated that SmERF115 controls the biosynthesis of phenolic acids mainly through regulating the expression of SmRAS1. In contrast, SmERF1L1 inhibits the biosynthesis of phenolic acids but promotes the biosynthesis of tanshinones, suggesting that a balance may exist between phenolic acid and tanshinone biosynthesis in S. miltiorrhiza. In addition, SmERF128 and SmERF6 can also positively regulate diterpenoid tanshinone biosynthesis in S. miltiorrhiza: SmERF128 activates the expression of SmCPS1, SmKSL1, and SmCYP76AH1, while SmERF6 recognizes only the GCC-box of SmCPS1 and SmKSL1 (Bai et al., 2018; Zhang et al., 2019).
Other Families
Moreover, three GRAS TFs, two WRKY TFs, one AREB, one LBD, and one JAZ TF have also been identified to regulate active ingredients in S. miltiorrhiza.
GRAS TFs possess a conserved C-terminal region comprising five subdomains: LRI, VHIID, LRII, PFYRE, and SAW (Pysh et al., 1999; Hofmann, 2016). SmGRAS1, SmGRAS2, and SmGRAS3, all GRAS TFs, are reported to act as positive regulators of tanshinone biosynthesis in S. miltiorrhiza. Interestingly, SmGRAS2 may regulate tanshinone biosynthesis by interacting with SmGRAS1, while SmGRAS1 and SmGRAS3 directly regulate tanshinone biosynthesis by activating SmKSL1 (Li et al., 2019). The WRKY family is a large TF family present in flowering plants; its members can regulate secondary metabolite biosynthesis (Yu et al., 2018) and interact with the W-box (TTGACC/T) within the promoters of genes (Phukan et al., 2016). SmWRKY1 plays a role in the regulation of tanshinone biosynthesis and acts as a positive regulator by activating SmDXR in the MEP pathway, while SmWRKY2 positively regulates tanshinones by activating SmCPS in the downstream pathway (Deng et al., 2019). The LBD proteins consist of approximately 100 amino acids with an N-terminal lateral organ boundaries (LOB) domain. Transgenic plants overexpressing SmLBD50 show inhibited synthesis of total phenolic acids in S. miltiorrhiza. It has been speculated that LBD TFs may act downstream in the JA signaling pathway and serve as downstream genes of the bHLH and MYB TFs, which play important parts in the biosynthesis of secondary metabolites in S. miltiorrhiza.
The JAZ TF family can repress JA-dependent responses (Pauwels and Goossens, 2011), and Pei et al. (2018) found that SmJAZ8, which acts as a core repressor regulating JA-induced phenolic acid and tanshinone biosynthesis in S. miltiorrhiza hairy roots, might directly interact with SmMYC2a and suppress its activity. SmAREB1 is a special TF: a transcriptional activation assay showed that it has no activity on its own, but the SmSnRK2.6 protein interacts with the SmAREB1 protein and activates its transcriptional activity to positively regulate phenolic acid biosynthesis.
CONCLUSION AND FUTURE PERSPECTIVE
S. miltiorrhiza can be used for the prevention of vascular diseases, especially atherosclerosis and cardiac diseases such as myocardial infarction, myocardial ischemia/reperfusion injury, cardiac fibrosis, cardiac hypertrophy, and arrhythmia (Li Z.M. et al., 2018). Phenolic acids and tanshinones are the major active ingredients of S. miltiorrhiza. A large number of enzyme-coding genes in the phenolic acid and tanshinone biosynthetic pathways have been over-expressed or down-regulated to enhance the production of these compounds. Recently, more attention has been focused on TFs, which can activate or inhibit the multiple genes involved in one or more biosynthetic pathways. In this review we have discussed the potential and the current limitations of the use of TFs for improving the production yield of secondary metabolites.
To date, many TFs are hypothesized to regulate tanshinones and phenolic acids. Key TF candidates are screened through their responses to exogenous inducers, their tissue-specific expression patterns, and their homology with TFs studied in other plants (Yu et al., 2018; Zhang et al., 2018). However, only a few TFs have been experimentally proven to participate in biosynthetic regulation. We hope more experimental evidence will be offered so that more reliable and efficient TFs can be found, and we propose that more experiments should be performed to verify the function of TFs. Moreover, although there is a large body of research on the biosynthesis of phenolic acids and tanshinones, for some reactions it is still unclear which specific enzyme is responsible, and this impedes the study of the mechanisms by which TFs act. Jia et al. (2017) found that SmAREB1 directed greater metabolic flux to the phenolic acid branch pathway by interacting with SmSnRK2.6, a protein kinase; however, most upstream factors of TFs in S. miltiorrhiza remain elusive. Protein kinases are common regulators of TFs. In addition, exogenous plant hormones, biotic stresses, and abiotic stresses can influence the expression of TFs, but little is known about the specific mechanisms. Deeper study of these questions would make it cheaper and more convenient to regulate TFs, and hence make the regulation of plant secondary metabolite biosynthesis easier.
Some TFs display a dual action and can regulate two pathways simultaneously. For example, many TFs have been found to bind sites in the promoter regions of both flavonoid and artemisinin genes in Artemisia annua. Phenolic acids and tanshinones are two valuable pharmaceutical secondary metabolites in S. miltiorrhiza, and SmMYC2a/b and SmMYB98 have been found to positively regulate the biosynthetic pathways of phenolic acids and tanshinones simultaneously. Therefore, parallel transcriptional regulation of phenolic acid and tanshinone biosynthesis deserves further study.
Once the biosynthetic regulation of active ingredients by TFs in S. miltiorrhiza has been clearly understood, its clinical application will become more efficient. Furthermore, the knowledge obtained during studies with this model medicinal plant can then be extended to other complex medicinal plants, thus laying a foundation for the clinical application of medicinal plants. | 3,449.6 | 2021-02-24T00:00:00.000 | [
"Medicine",
"Biology",
"Chemistry"
] |
Welding of thin stainless-steel sheets using a QCW green laser source
Bipolar plates are structured thin metal sheets and are, next to the membrane electrode assembly (MEA), one of the main components of polymer electrolyte membrane fuel cells. One of the production steps of such bipolar plates is the joining process of its two halves. Laser welding is a suitable method for such an application since it is fast, non-contact, automatable, and scalable. Particularly important aspects of the weld seam are the weld seam width and depth. In this paper, welding of stainless-steel material analogous to materials used in bipolar plates is examined. For this purpose, a newly developed quasi continuous wave (QCW) green laser source with higher beam quality is employed to assess the effect of the wavelength and the spot diameter on the welding of stainless-steel material. By using various focusing lenses, differently sized beam diameters below 20 µm are achieved, and their influence on the final welding result, specifically concerning the seam width, is analyzed. With welding speeds starting at 500 mm/s, reduced weld seam widths (≤ 100 µm) are realized, particularly with a focusing lens of 200 mm focal distance. The suitability of such a process for thin channels of under 75 µm width is examined.
With the climate change crisis and the inevitable exhaustion of fossil fuel reserves, the world is in search of sustainable solutions for the near future, in which the energy supply is environmentally friendly, reliable, and affordable. Ambitious goals of gradually limiting greenhouse gases (GHG) to achieve a 55% reduction by 2030 compared to 1990 and net-zero by 2050 are part of the European Union's Fit for 55 package. This can only be accomplished by applying policies and instruments across various sectors: energy, transport, and land use, for example. The use of alternative drives in the steadily growing transport sector (a sector responsible for 27% of total EU GHG emissions in 2017) can significantly contribute to the success of such goals 1. Alternative methods such as the fuel cell are to be used to convert renewable resources such as hydrogen directly and efficiently; hydrogen can be produced in the event of an energy surplus and stored in tanks. In this regard, fuel cell technology is frequently seen as the core technology of the energy transition in its numerous fields of application 2. Fuel cell technology significantly boosts the appeal of environmentally friendly mobility, thanks to comparably quick refueling times and significantly higher ranges compared to battery electric vehicles (BEV). The fuel cell offers a solution to the mobility dilemma of the future, one that may demonstrate an excellent environmental balance due to the use of largely recyclable components 3. However, large-scale deployment and widespread market penetration of this technology are unlikely given the high production costs and poor production capabilities of a cell's primary components with the existing manufacturing techniques.
The bipolar plate is the heart of the hydrogen fuel cell and one of its main components next to the membrane electrode assembly (MEA) 4. Various approaches exist for manufacturing bipolar plates, with the metallic bipolar plate offering the greatest advantages in terms of weight, volume, and serial production. This type of bipolar plate consists of two thin stainless-steel sheets, into which gas flow profiles are embossed and which need to be hermetically sealed. The geometry of the gas flow channels in the flow field impacts the performance of the cell 5. Compared to other joining processes, laser beam welding enables high manufacturing precision and locally concentrated thermal input into the material at high processing speeds. However, the trend in the design of flow channels for bipolar plates shows an ever-growing tendency towards narrower channels (down to 0.1 mm channel width) 6. These dimensions are achievable with stamping processes but are challenging for the joining processes. With a high beam quality single mode fiber laser emitting in continuous mode at 1064 nm, welds with high aspect ratios can be realized. The weld seam width remains an issue, however, and is usually ≥ 0.1 mm, allowing almost no positioning tolerance 7.
The aim of this paper is to investigate the possibility of producing weld seams with a width of < 75 µm by means of laser beam welding. A newly developed QCW fiber laser with a wavelength of 532 nm and a high beam quality is used. Lasers available on the market that emit in the green range with powers suitable for material welding have so far been disk lasers with significantly lower beam qualities (M² ≈ 25). By halving the wavelength from 1064 to 532 nm, a theoretical reduction of the focus diameter by 50% is expected. The influence of the focus diameter on the energy input, the weld width and the weld shape is analyzed. For this purpose, process windows in which defect-free overlap welding is possible at welding speeds of 500 mm/s and above are identified. The system used is then evaluated in terms of its suitability for welding in narrow channels with a width of less than 75 µm.
Laser beam welding
Laser beam welding belongs to the group of fusion welding processes. Here, a laser beam is used as the energy carrier, which is guided from the laser beam source onto the workpiece with the aid of flexible glass fibers and mirrors. A focusing lens is used to shape the laser beam according to the requirements of the manufacturing process. To position the laser beam on the workpiece and to enable path-shaped processing, a relative movement between the laser beam and the workpiece is required. This relative movement can be achieved by steering the laser beam by means of mirrors integrated in the beam path (in a galvanometer scanning head, for example) or by moving the workpiece by means of an external device 8. Lasers are generally used in two different operating modes: continuous wave mode and pulsed wave mode. In continuous wave (cw) mode, the laser is operated continuously. In pulsed wave mode, the laser beam is interrupted at regular intervals to generate pulses 9.
Laser beam welding can be divided into two categories based on the outcome: heat conduction welding (HCW) and deep penetration welding (DPW) (see Fig. 1). The difference between these two lies in the energy input and the seam geometry. In heat conduction welding, the laser beam melts the surface of the material. The material is not heated above its vaporization temperature, and the energy is only introduced into the material via thermal conduction. The weld geometry is lenticular with an aspect ratio of A ≈ 1 (ratio of weld penetration depth to weld width). During deep penetration welding, the vaporization temperature of the material is reached at the focal point of the laser beam. This causes a vapor capillary to form in the molten pool. In this vapor capillary, multiple reflections of the laser beam at the capillary walls occur, so that the radiation of the laser penetrates deeper into the workpiece. The resulting weld geometry is slender with a high aspect ratio (A ≥ 10) 10.
Influence of the wavelength
For laser material processing to be efficient, the energy from the laser radiation must be introduced into the material and converted into heat. The conversion of laser energy into process heat is referred to as energy coupling. With increasing energy coupling, the heat input into the material increases. Since the thermal energy introduced additionally affects the temperature-dependent optical material properties, the induced phase transitions, and the associated geometrical properties, this is referred to as interaction between the laser beam and the workpiece. The ratio of the coupled power to the power P incident on the workpiece is called the absorptance A, a measure of the available power P_abs, defined according to Eq. (1) 10:

A = P_abs / P (1)

A: absorptance (-); P_abs: absorbed power (W); P: incident power (W). The remaining power is either reflected at the material surface or transmitted through the material.
The reflectance R, the absorptance A and the transmittance T are related by Eq. (2), where each can take a value between 0 and 1 10:

R + A + T = 1 (2)

In the case of welding metal foils with a thickness ≫ the used wavelength, the transmittance T is negligible, and the relation reduces to the reflectance and the absorptance adding up to one.
The absorptance depends, among other things, on the material. Its value for stainless steel (1.4301 and 1.4404) increases at shorter wavelengths. Figure 2 gives an overview of the absorptance of different metals in relation to the wavelength of the laser beam source used, at perpendicular beam incidence and room temperature. The coefficient for stainless steel is about 37% for an infrared laser beam source with a wavelength of λ ≈ 1070 nm and about 45% in the green wavelength range at λ ≈ 535 nm. The use of a laser beam source in the green wavelength range is therefore well suited for material processing of steel.
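As a rough, added illustration (not part of the original study), the absorptance values quoted above can be translated into absorbed power via Eq. (1); the 500 W incident power is an assumed example chosen to match the maximum output power of the laser used later in this work:

```python
# Sketch: absorbed power P_abs = A * P (Eq. 1) for stainless steel at the
# two wavelengths discussed above. The absorptance values are the
# approximate figures quoted in the text.

def absorbed_power(p_incident_w: float, absorptance: float) -> float:
    """Absorbed power in watts, P_abs = A * P."""
    return absorptance * p_incident_w

P = 500.0                          # W, incident power (assumed example)
p_ir = absorbed_power(P, 0.37)     # ~185 W at lambda ~ 1070 nm (infrared)
p_green = absorbed_power(P, 0.45)  # ~225 W at lambda ~ 535 nm (green)
print(f"green/IR absorbed power ratio: {p_green / p_ir:.2f}")  # ~1.22
```

Under these assumptions, the green wavelength deposits roughly 22% more power into the steel for the same incident power.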
Spot diameter
In the field of laser beam welding, the beam quality and the focusability of the laser beam source are of considerable importance. A higher beam quality allows the focus diameter to fall below a critical minimal focus diameter d_0 and consequently leads to a higher welding depth under otherwise unchanged conditions. To enable a comparison of laser beam sources of the same power, the beam density, referred to as brilliance in laser technology, is used. The brilliance depends on the output power P_L of the laser as well as on the beam quality M² and the wavelength λ and can be calculated using Eq. (3) 12,13:

B = P_L / (π ω_0² · π θ_0²) = P_L / (M⁴ · λ²) (3)

B: brilliance [W/(mm²·sr)]; P_L: output power (W); ω_0: focal radius (mm); θ_0: far field divergence angle (mrad); M²: beam quality (-); λ: wavelength (mm).
Decreasing brilliance causes an increase in the minimum achievable focal radius ω_0, which must be considered when selecting the laser beam source. The beam quality M² describes the deviation of the real laser beam from a theoretically ideal laser beam with a Gaussian beam distribution 13. The propagation of a Gaussian beam is determined by the radius of the beam waist ω_0 and by the wavelength λ. At a large distance from the beam waist, the beam radius increases linearly at the far field divergence angle θ_0. Since ω_0 can be arbitrarily adjusted with different lenses, the beam parameter product (BPP) is specified to characterize the laser beam. The beam parameter product is deduced from the beam quality M² and the wavelength λ and is expressed as follows in Eq. (4) 12:

BPP = ω_0 · θ_0 = M² · λ / π (4)

BPP: beam parameter product (mm·mrad); M²: beam quality (-); λ: wavelength (mm); ω_0: beam waist radius (mm); θ_0: far field divergence angle (mrad).
With the help of the beam parameter product, the theoretical focal diameter can be calculated according to Eq. (5) 12:

d_0 = 2 ω_0 = 2 · f · BPP / ω_R (5)

d_0: beam waist diameter (mm); ω_0: beam waist radius (mm); BPP: beam parameter product (mm·mrad); f: focal length of the focusing lens (mm); ω_R: raw beam radius (mm).
In addition to the beam parameter product, the characteristics influencing the spot diameter are the focal length of the focusing lens f and the raw beam radius ω_R available before the focusing lens. A reduction of the focus diameter can therefore only be achieved by reducing the focal length or by increasing the raw beam diameter 13. Furthermore, reducing the used wavelength from 1070 to 535 nm leads to smaller foci.

Figure 2. Absorptance of copper, aluminum, and stainless steel as a function of wavelength at room temperature.
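The chain from Eq. (4) to Eq. (5) can be illustrated with a short calculation. The following sketch (added here for illustration) uses the beam data given later in this paper: M² = 1.2, λ = 532 nm and a collimated raw beam radius of ω_R = 5 mm:

```python
import math

# Sketch: BPP = M^2 * lambda / pi (Eq. 4) and d_0 = 2 * f * BPP / w_R (Eq. 5).
M2 = 1.2                 # beam quality of the laser used in this work
wavelength_mm = 532e-6   # 532 nm expressed in mm
w_R = 5.0                # collimated raw beam radius in mm

bpp = M2 * wavelength_mm / math.pi   # ~2.0e-4 mm*rad, i.e. ~0.2 mm*mrad
for f in (50, 100, 200):             # focal lengths of the three lenses in mm
    d0_um = 2 * f * bpp / w_R * 1e3  # focus diameter in micrometers
    print(f"f{f}: d_0 = {d0_um:.2f} um")
# Prints approximately 4.06, 8.13 and 16.26 um, matching (up to rounding)
# the theoretical focus diameters of the f50, f100 and f200 lenses quoted below.
```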
Humping
In high-speed welding, weld defects occur above a certain welding speed. While wave crests and troughs can initially be observed on the weld surface (pre-humping), drops (humps) form in the weld pool as the welding speed further increases. This effect is referred to as humping. Ai et al. 14,15 explain the humping effect with wave formation in the molten pool. During the welding process, the molten material is pushed backwards from the rear wall of the keyhole in gradually increasing waves against the feed direction. As can be seen in Fig. 3, the surface tension in the weld pool produces a convex surface. The narrowest point, the valley between the convex surface and the highest wave, solidifies at a higher cooling rate. This prevents backflow, and periodic humps are formed 15.
The main factors for the formation of humps are the inclination angle of the keyhole, a long and narrow melt pool, and colliding liquid flows in the melt pool 15,16. The tilt angle of the keyhole increases as the welding speed increases, causing the vapor plume to further melt the metal at the back of the keyhole and lengthening the melt pool. A narrow weld pool causes a high backward vapor pressure, which increases the flow rate at which the melt is forced into the back of the molten pool. As a result, the height of the convex weld ejections increases inversely proportionally to the width of the molten pool. The last factor to be mentioned is the collision of the backward-flowing molten pool with the backflowing, solidifying melt. The high momentum of the melt causes the formation of waves in the melt pool. Rapid solidification in the wave troughs then prevents backflow and generates humps 15. The limiting velocity above which humping occurs can be influenced by various process variables. Neumann et al. cite as such the material used, the focal length of the focusing lens, the angle at which the laser beam impinges on the workpiece, the shielding gas, and the laser power 14. Brodsky et al. show that preheating the material, for example using a ring laser, allows an increase of the maximum speed by 55% without a humping effect. It is also shown that the humping distance, i.e., the distance between two humps, increases with increasing line energy 17.
The line energy is calculated according to Eq. (6) 10:

E = P / v (6)

E: energy per unit length (J/mm); P: power (W); v: welding speed (mm/s).
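For a feel of the magnitudes involved, a minimal added example evaluates Eq. (6) for two parameter combinations that appear later in this paper:

```python
# Sketch: line energy E = P / v (Eq. 6).
def line_energy(power_w: float, speed_mm_s: float) -> float:
    """Energy per unit length in J/mm."""
    return power_w / speed_mm_s

print(line_energy(300, 1000))  # 0.30 J/mm (f100, P = 300 W, v = 1000 mm/s)
print(line_energy(500, 1800))  # ~0.28 J/mm (f200, P = 500 W, v = 1800 mm/s)
```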
Laser beam source
In this paper, experiments are carried out with a single mode fiber laser beam source emitting in the visible green wavelength range (532 nm). Laser power and welding speed are varied, and three different focusing lenses are used. Table 1 shows the technical data of the laser beam source. The laser (model GLPN-500-R from IPG Photonics, Massachusetts, USA) has a maximum average output power of about 500 W and emits in the visible green range with a wavelength of 532 nm. The Gaussian output beam has a beam quality of M² = 1.2, achieving a beam parameter product (BPP) of 0.2 mm·mrad. The system does not operate continuously as in continuous wave (CW) operation; instead, continuous wave operation is approximated by successive pulses with a pulse repetition rate of up to 210 MHz and a short pulse duration of 1.2 ns. This mode is called quasi-CW mode. The laser can be operated in two modes: automatic current control mode (ACC mode, diode current is the control variable) and automatic power control mode (APC mode, output power is the control variable). A power measurement shows significant fluctuations in the output power. With the APC mode, a more constant output power can be generated in the low power range, but with this control mode the measured power does not reach the set power fast enough to be used for higher welding speeds. In ACC mode, there is a deviation of less than 8% between measured and target power in the range of 250-500 W; significant power losses are, however, measured in the power range below 250 W. In this mode, the laser does not emit any radiation if the power is set to 200 W or below.
Setup and material configuration
This beam source decouples the laser beam via a separate decoupling unit in the form of a freely propagating laser beam with a (full) opening angle of 10 mrad, for which an individual, shield-free beam guidance must be realized. An existing experimental table with a finely adjustable z-axis and a perforated grid plate as a working plate serves as the basic framework for the setup. A specially developed software interface allows the welding parameters to be set and all machinery to be remotely controlled from a shielded control cabinet. Starting from the beam source, the laser beam is guided via a short fiber optic cable to a water-cooled decoupling unit, where it is decoupled. The output coupler unit is integrated into the beam path of the optical setup and aligned. The laser beam is focused on the workpiece surface, which is located on a high-speed axis with an integrated lifting table. The setup is shown schematically in simplified form in Fig. 4.
The divergent beam emerging from the decoupling unit must be collimated. For this purpose, a collimating lens made of fused silica with a focal length of f = 1000 mm is inserted into the beam path at the corresponding propagation length, so that a collimated beam radius of ω_R = 5 mm is achieved. Subsequently, the collimated beam hits a deflection mirror, which is inclined by 45° and deflects it vertically downwards. For the laser beam welding process, focusing of the laser beam is necessary to melt the material of the workpiece. Focusing is done by means of a focusing lens made of fused silica, which is exchanged between three variants to achieve different beam properties, as shown in Table 2. The exact designations of the lenses from the supplier Thorlabs GmbH, Bergkirchen, Germany are LA4148-A-ML for the f50 lens, LA4380-A-ML for the f100 lens and LA4102-A-ML for the f200 lens.
The workpiece is mounted on an electromagnetic Linax Lxs high-speed axis from Jenny Science, Rain, Switzerland. This can be used to weld on the workpiece surface at a welding speed of up to v = 2000 mm/s. Since the clamping jaws of the high-speed axis provide lateral clamping, an attachment is designed and manufactured that allows the specimen to be clamped in such a way that a weld seam can be welded from above. The attachment consists of a plastic-printed T-piece and an aluminum plate onto which the sheet is pressed with two clamping plates. By screwing the clamping plates to the aluminum plate, thermal distortion of the sheets during the welding process is counteracted. An overview of the chemical composition of the three stainless steels considered in this investigation is shown in Table 3.
The resulting weld seam width using this laser beam source and setup is closely examined. The measuring method is shown in Fig. 5. The weld seam width is measured on microscopic images of the surface of the welded samples using a Keyence VHX 6000 (Japan). This step is completed for all five welded lines before a destructive grinding, polishing and etching preparation is used to determine the weld seam width in the cross-section.
Defining the process parameter window
Within the scope of this paper, bead-on-plate welds on stainless steel 1.4310 are carried out to evaluate the weld formation at small focal diameters (< 20 μm). Since the theoretically smallest focus diameter can be generated with the f50 focusing lens, an investigation of the welding parameters in the lower speed range up to v = 500 mm/s and powers up to P = 300 W is carried out with this lens. For this purpose, welds are performed on plates of stainless steel 1.4310 with a thickness of d = 500 µm. The aim is to analyze how the width of the weld seams changes and whether a weld penetration depth of 0.2 mm is achieved. This welding depth is relevant for the welding of two 0.1 mm thick foils in the production of bipolar plates. Subsequently, high welding speeds up to v = 2000 mm/s and powers up to P = 500 W are investigated with the f100 and f200 focusing lenses. An in-situ analysis is not possible in this case, so the welds are all examined ex-situ using microscopic images and metallographic preparation of the samples. Each power level is first welded at v = 2000 mm/s, which is then reduced by 200 mm/s per parameter set in the following tests. The welding speed is reduced down to its lowest value of v = 600 mm/s, or until the weld overshoots (penetration depth ≥ sample thickness), corresponding to a weld penetration depth of ≥ 0.5 mm. For a better overview, the parameter variations applied in these trials are listed in Table 4. The trials with the f50 lens are carried out in APC mode, since they lie in the lower power range. For each parameter combination, five parallel welds are welded 1 mm apart on a 20 mm × 50 mm sample. The length of a weld seam is 40 mm.

Table 3. Chemical composition of the materials (wt%) 18,19.

With the help of the above-mentioned trials, the process window is defined. Using data collected from cross-sections, the following findings can be stated:
• With the f50 lens (d_0 = 4.06 µm), the goal of a penetration depth of ≥ 0.2 mm is reached with a maximum welding speed of 300 mm/s and a 300 W output power. A maximum welding speed of 200 mm/s with an output power of 200 W also achieves the required goal. With an output power of 100 W, the criterion is not met. Because the welding depth with this lens is significantly lower than expected, no higher welding speeds are examined in this configuration (f50 and APC mode).
• With the f100 lens (d_0 = 8.12 µm), the parameter window is larger: with the lowest output power of 200 W and a welding speed of 600 mm/s, a weld seam depth of ≥ 0.2 mm is reached. With the maximum set output power of 500 W, this criterion is met at the maximum welding speed of 1800 mm/s.
• With the f200 lens (d_0 = 16.24 µm), the highest welding speed amongst all the trials (2000 mm/s) and an output power of 300 W lead to the required welding depth.
A further insight into the kind of weld seams created with this laser can be acquired from longitudinal sections. Three parameter sets are chosen from the previous trials. The microscopic images of the weld seam surface in the beginning range and mid-range and the corresponding longitudinal sections are presented in Fig. 6. The three parameter sets are:

a) f100, P = 300 W and v = 1000 mm/s
b) f100, P = 400 W and v = 1000 mm/s
c) f200, P = 500 W and v = 1800 mm/s

The welding direction is in this case from right to left. As is to be expected at these higher welding speeds, the weld seams show clear signs of the humping phenomenon, which usually occurs at welding speeds ≥ 500 mm/s. The samples are chemically etched, so the weld seam depth is visible as darker shading. Notably, the welding depth shows an almost linear increase over a significant starting length of the weld seam (about 8 mm), with the laser beam not coupling in until a few millimeters after the start of the radiation. This can be explained by the step response of the laser itself. In ACC mode, this step response has a duration of 5.5 ms 20. Heussen investigated the response of the laser and the power output in detail in 20. At a welding speed of 1000 mm/s for sets a and b, the laser beam travels 5.5 mm (or almost 14% of the total length) over the stainless-steel workpiece before reaching the set output power. Measured at the left side in the image, after the depth has stabilized, the value is 0.25 mm and 0.33 mm for a and b and 0.194 mm for c. In the case of parameter set c, the distance travelled is even higher at 9.9 mm, just under 25% of the total length of the weld seam. The corresponding weld seam widths on the top surface are ~0.1 mm, ~0.115 mm and ~0.09 mm. Another important finding in this case is the appearance of a significant decrease in the weld seam depth with parameter sets a and b, right before the maximum value is reached. This effect is heightened by the humping formation as well.

Table 4. Overview of the parameter variations with each focusing lens.
Evaluating the weld seam width and depth
The weld seam width and depth are major characteristics and are usually the criteria used to evaluate the welding process itself. For bipolar plate applications, they are even more crucial. A fluctuation in the penetration depth can lead to gaps between the half-plates, so that the joined plate cannot fulfill the criterion of being hermetically sealed. Maximizing the active area available within a bipolar plate is advantageous. One technique to achieve this is to scale down the width of the gas flow channels and maximize their number. For this purpose, thinner weld seams are strived for. The laser beam source used in this paper has a high beam quality (one of the highest available on the market today) and a high brilliance, making it a good tool for such an analysis. In this section, an overview of the results reached with the previously mentioned trials is presented. Microscopic imaging and cross sections of the welded seams are used for this evaluation. In Fig. 7, the diagrams show how the weld seam depth changes with the welding speed at different output powers using the three different focusing lenses. An overall trend is that with increasing welding speed, the weld penetration depth decreases. This is to be expected and well known from previous literature. With the f50, only three data points fulfill the requirement of a welding depth ≥ 0.2 mm: at output powers of 200 W and 300 W and welding speeds of 200 mm/s and 300 mm/s. These welding speeds are comparatively low and not suitable for bipolar plate welding. Promising results with the highest reproducibility, as indicated by the restricted standard deviation, are achieved with the f100 lens. Contrary to what might be expected with the larger spot diameter of this lens (about 8 µm) compared to the previous lens (about 4 µm), the values for the welding depth are higher at a higher welding speed. Most of the data points are above the 200 µm line. With the f200 lens, the results show a large fluctuation, particularly in the 500 W curve. A counter-intuitive finding from evaluating the results is that with increasing spot diameter, and thus an increasing surface area with a wider energy distribution, the welding depth increases. This is particularly noted for the 300 W and 400 W powers. A similar trend is observed in the work of 21, where it is shown that below a 200 µm beam waist, the focusability and penetration depth are mainly influenced by the BPP and the divergence angle.
The results are plotted in the graphs in Figs. 8 and 9. Each data point represents the averaged measurement from five weld seams, and the standard deviation shows the degree of fluctuation. A first observation is that welding speeds starting at 800 mm/s have, for the most part, no significant influence on the achieved welding width; rather, the weld seam width is mainly affected by the output power of the laser. This holds mainly for the weld seam width on the surface of the samples for both f100 and f200, with the only exception at higher welding speeds (1600 mm/s) with f100, where the 300 W curve rises above the 400 W curve. In the cross-section, the margin of error is too large to reach a definitive conclusion from the obtained results in this range. The weld seam width does not appear to be distinctly higher or lower at a particular power. The deviations between the results on the surface and in the cross-section can be attributed mainly to human error and to welding defects caused by humping effects appearing on the surface. With the f100, the weld seam width stays below 150 µm for almost all the measured data. With the f200, almost all data points remain below the 125 µm line, with only three data points having a standard deviation that crosses it [500 W at 800 mm/s and 1000 mm/s (cross-section) and 500 W at 1000 mm/s (surface)]. Overall, it can be deduced that thinner weld seams are achieved with the f200. With the f200 lens, the smallest divergence angle is achieved. The BPP is unchanged for all the used lenses, since the same laser is used. A possible explanation for the observations in this work is that with the f200, due to the small divergence angle, the laser radiation is reflected further downwards in the keyhole, causing a narrower and deeper weld seam, whereas with the f100 and f50, the laser radiation, due to the larger divergence angles, is absorbed and reflected at a lower penetration depth, causing the welds to be wider and shallower, confirming the results from 21.
In a further evaluative step, certain trials are closely examined. The results here show a large fluctuation, particularly at the higher power of 500 W. This can be attributed to factors such as process instabilities and human error. While the investigation aims at targeting the same distance in the metallographic preparation of the samples, this is not a guaranteed outcome. The measurements are also carried out using microscopic images, which can lead to some faulty readings; in particular, the occurrence of humping and the definition of the limits of the weld seam cause the readings to differ slightly from sample to sample. Here, the criterion for the weld seam depth is to be on average above 0.2 mm and below 0.3 mm. At the same time, the weld seam width must be at a minimum for both the surface and the cross-section. In this case, the parameter sets listed in Table 5 deliver promising results. This shows that with the f200, reaching weld seam widths lower than 75 µm (0.075 mm) is possible.
Humping analysis
Humping is a phenomenon which occurs when welding at higher welding speeds, usually above 500 mm/s. The molten pool is pushed upwards due to dynamic flow characteristics around the keyhole. This is recognizable by the appearance of so-called humps and destabilizes the weld penetration depth, which negatively impacts the joining of two bipolar plate halves. Many strategies are being developed to further understand the formation of these humps and to try to push the welding speed limit upwards. In the scope of this paper, no such strategies are applied. Finally, with regard to defects, the humping defect is observed in most of the trials. The reason for this is the higher welding speeds used (starting at 600 mm/s), which lie beyond the typical boundary for the appearance of this phenomenon.

Table 5. The minimal weld seam widths achieved with the f100 and f200 lenses.
Figure 3. Formation of humping in a molten pool during welding 15.
Figure 4. Schematic illustration of the welding setup.
Figure 5. The measuring method for the welding width on the surface (a) and for the depth and width in the cross section (b).
Figure 7. Weld seam depth plotted against the welding speed with the different lenses.
Figure 8. Weld seam width in the cross-section (left) and on the surface (right) plotted against the welding speed with different set powers for f100.
Figure 9. Weld seam width in the cross-section (left) and on the surface (right) plotted against the welding speed with different set powers for f200.
Table 1. Specifications of the used laser beam source.
Table 2. Lenses used with the calculated focus diameters and Rayleigh lengths based on the theoretical values from the laser datasheet. | 7,359.8 | 2024-02-19T00:00:00.000 | [
"Engineering",
"Materials Science"
] |
Feature Selection on Sentinel-2 Multispectral Imagery for Mapping a Landscape Infested by Parthenium Weed
In the recent past, the volume of spatial datasets has significantly increased. This is attributed to, among other factors, the higher temporal resolutions of the sensors on recently launched satellites. The increased data, combined with the computation and possible derivation of a large number of indices, may lead to high multi-collinearity and redundant features that compromise the performance of classifiers. Using dimension reduction algorithms, a subset of these features can be selected, hence increasing their predictive potential. In this regard, an investigation into the application of feature selection techniques on multi-temporal multispectral datasets such as Sentinel-2 is valuable in vegetation mapping. In this study, ten feature selection methods belonging to five groups (similarity-based, statistical-based, sparse learning based, information theoretical based, and wrapper methods) were compared based on f-score and data size for mapping a landscape infested by the Parthenium weed (Parthenium hysterophorus). Overall, results showed that ReliefF (a similarity-based approach) was the best performing feature selection method, as demonstrated by the high f-score values for Parthenium weed and the small size of the optimal feature subsets selected. Although svm-b (a wrapper method) yielded the highest accuracies, the size of its optimal subset of selected features was quite large. Results also showed that data size affects the performance of feature selection algorithms, except for statistically-based methods such as Gini-index and F-score, and svm-b. The findings of this study provide guidance on the application of feature selection methods for the accurate mapping of invasive plant species in general, and Parthenium weed in particular, using new multispectral imagery with high temporal resolution.
Introduction
The dimension space of variables given as input to a classifier can be reduced without an important loss of information, while decreasing its processing time and improving the quality of its output [1]. To date, studies on dimension reduction in remote sensing have mostly focused on hyperspectral datasets [2][3][4] and high spatial resolution multispectral imagery using Object-Based Image Analysis (OBIA) [5][6][7]. Multispectral images have generally received comparatively less attention, likely because their limited number of bands does not require dimension reduction. However, with the launch of high temporal resolution sensors such as Sentinel-2, the amount of image data that can be acquired within a short period has considerably increased [8]. This is due to the sensor's improved spectral resolution (13 bands) and five day revisit time [9].
Generally, high-dimensional remotely sensed datasets contain irrelevant information and highly redundant features. Such dimensionality deteriorates the quantitative (e.g., leaf area index and biomass) and qualitative (e.g., land-cover) performance of statistical algorithms by overfitting data [10]. High dimensional data are often associated with the Hughes effect, or the curse of dimensionality, a phenomenon that occurs when the number of features in a dataset is greater than the number of samples [11,12]. The Hughes effect degrades the performance of algorithms originally designed for low-dimensional data. Whereas high dimensionality can lead to poor generalization of learning algorithms during the classification process [12], it can also embed features that are crucial for classification enhancement. Hence, when using dimension reduction algorithms, a subset of those features can be selected from the high dimensional data, increasing their predictive potential [13].
There are two main components of dimension reduction strategies: feature extraction or construction, and feature selection or feature ranking. Feature extraction (e.g., Principal Component Analysis (PCA)) constructs a new, low dimensional feature space using linear or non-linear combinations of the original high-dimensional feature space [14], while feature selection (e.g., Fisher Score and Information Gain) extracts subsets from existing features [10]. Although feature extraction methods produce higher classification accuracies, the interpretation of the generated results is often challenging [2]. Feature selection methods, by contrast, do not change the original information of the features, thus giving models better interpretability and readability; they have been applied successfully in fields such as text mining and genetic analysis [14]. Hence, in this study, they were preferred over feature extraction methods.
Traditional feature selection techniques are typically grouped into three approaches, namely filter, embedded and wrapper methods [15]. In earth observation related studies, feature selection algorithms have generally been compared based on this grouping [5,16]. However, with the advent of big data, this grouping can be regarded as very broad, necessitating the development of new feature selection algorithms. For instance, within filter feature selection methods, some evaluate the importance of features based on their ability to preserve data similarity (e.g., Fisher Score, ReliefF), while others use a heuristic filter criterion (e.g., Mutual Information Maximization). Therefore, it is crucial to re-evaluate the comparison of feature selection algorithms from a data-specific perspective.
Li, et al. [17] reclassified traditional feature selection algorithms for generic data into five groups: similarity-based feature selection, information theoretical-based feature selection, statistical-based feature selection, sparse learning-based feature selection, and wrappers. They devised an open-source feature selection repository named scikit-feature that provides 40 feature selection algorithms (including unsupervised feature selection approaches). Some of these algorithms, such as Joint Mutual Information and decision tree forward, are relatively new in earth observation applications [18].
Over the last few decades, the number of vegetation indices has significantly increased. For instance, Henrich, et al. [19] gathered 250 vegetation indices derivable from Sentinel-2. Computing vegetation indices from higher temporal resolution imagery like Sentinel-2 leads to data with greatly increased multi-collinearity, a higher number of derived variables, and increased dimensionality. Hence, accurate and efficient feature selection techniques are becoming increasingly valuable in vegetation mapping when dealing with new generation multispectral imagers such as Sentinel-2 [8]. To the best of our knowledge, no earth observation related study has undertaken an empirical evaluation of the feature selection methods provided in the scikit-feature repository for generic data.
In this study, feature selection algorithms based on Li, et al. [17]'s classification were compared for mapping a landscape infested by Parthenium weed using Sentinel-2. The Parthenium weed is an alien invasive herb of tropical American origin that has infested over thirty countries. It has been identified as one of the seven most devastating and hazardous weeds worldwide [20]. A number of studies [21,22] have reported its adverse impacts on ecosystem functioning, biodiversity, agricultural productivity and human health. A detailed comparison of feature selection algorithms on Sentinel-2 spectral bands combined with vegetation indices, with respect to Li, et al. [17]'s classification, would therefore: (a) improve mapping accuracy, valuable for designing mitigation approaches; (b) shed light on the most valuable feature selection group; and (c) identify the most suitable feature selection method for accurately mapping a Parthenium weed infested landscape. Unlike previous studies that evaluated feature selection methods on the basis of overall classification accuracy [5,23,24], this study investigated their performance on the mapping accuracy of a specific landscape phenomenon (Parthenium weed), as high overall classification accuracy does not always mean a reliable accuracy for a specific class [25]. In this study, we sought to provide a detailed comparison of feature selection algorithms on higher temporal resolution satellite images with high data volume. Specifically, we looked at (a) their performance on Parthenium weed using specific class-related accuracies as an evaluation criterion, and (b) the impact of data size on their accuracy.
Study Area
This study was conducted within the Mtubatuba municipality on the north-east coast of the KwaZulu-Natal province, South Africa (Figure 1). The study area covers 129 km² and is characterized by heavy Parthenium infestation. The area is predominantly underlain by basalt, sand and mudstone geological formations [26]. Annual average rainfall ranges from 600 mm to 1250 mm, and temperatures vary around 21 °C. Summers are generally warm to hot, while winters are cool to mild [27]. The sampling area is characterized by a mosaic of several land use/land cover types that include commercial agriculture (e.g., forestry plantations and sugarcane farming), subsistence farming (beans, bananas, potatoes, and cattle), mining, and high and low density residential areas [27].
Reference Data
Using a high resolution (50 cm) color orthophotograph [28] of the study area, conspicuous patches of Parthenium weed infestation were randomly selected, stratified equally across different land cover/use types. The selected sites were then surveyed using a differentially corrected Trimble GeoXT handheld GPS receiver with about 50 cm accuracy. The ground truth campaign was conducted during summer, between the 12th of January and the 2nd of February 2017. At each Parthenium weed site, quadrats of at least 10 m × 10 m were demarcated. The quadrats were located mostly in the middle of a large patch (greater than 10 m × 10 m) of Parthenium in order to cater for any possible mismatches with the Sentinel-2 pixels [29]. In total, 90 quadrats were randomly selected across different land-cover types to account for variability in the different ecological conditions of the study area. GPS points of surrounding land cover types such as forest, grassland, built-up and water bodies were also collected. Supplementary X-Y coordinates of these land-cover types were also created from the color orthophotograph to increase the number of training samples. The aforementioned land-cover classes were the most predominant in the study area and were therefore used to evaluate the discriminatory power of the different models developed in this study for mapping Parthenium weed infested areas. In total, 447 reference points for mapping Parthenium weed and its surrounding land cover classes were obtained. To determine the optimal feature selection methods and to test the effect of data sizes, these ground reference data were randomly split into training and test sets in three different ratios: 1:3, 1:1 and 3:1, as shown in Table 1 [30]. The random split was undertaken using the function "train_test_split" of the Sklearn python library. "Random-state" and "stratify" parameters were included in the function to allow reproducibility and to obtain the same proportions of class labels as in the input dataset, respectively. The data design also allowed evaluating the investigated feature selection methods with respect to Hughes effects.
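The split described above can be reproduced with a few lines of scikit-learn code. The sketch below is a hedged reconstruction: the feature matrix, label vector and seed value are placeholders, since the text only states the function name and the use of the "random_state" and "stratify" parameters:

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((447, 85))         # placeholder: 447 reference points, 85 features
y = rng.integers(0, 5, size=447)  # placeholder: 5 land-cover classes

# Training:test ratios of 1:3, 1:1 and 3:1 correspond to test fractions
# of 0.75, 0.50 and 0.25 of the reference data.
splits = {}
for test_fraction in (0.75, 0.50, 0.25):
    splits[test_fraction] = train_test_split(
        X, y,
        test_size=test_fraction,
        random_state=42,  # assumed seed; the text only says a fixed state is used
        stratify=y,       # keep the class proportions of the input dataset
    )
```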
Acquisition of Multi-Temporal Sentinel-2 Images and Pre-Processing
Three Level 1C Sentinel-2A images were acquired on 19 January 2017 under cloudless conditions. The Semi-Automatic Classification Plugin [31] within the QGIS software (version 2.14.11) was used to correct the acquired images for atmospheric effects. The Semi-Automatic Classification Plugin uses Dark Object Subtraction to convert Top Of Atmosphere (TOA) reflectance to Bottom Of Atmosphere (BOA) reflectance. Bands with 60 m resolution (band 1, band 9 and band 10) were omitted in this study. Moreover, bands with 20 m resolution were resampled to 10 m using ArcMap (version 10.3) to allow layer stacking with the 10 m bands.
Feature Selection Methods
In this section, the five groups of feature selection methods are briefly discussed. Two representative methods were randomly chosen from each group for the comparison.
(A) Similarity-Based Feature Selection Methods
Similarity-based feature selection methods evaluate the importance of features by determining their ability to preserve data similarity using some performance criterion. The two algorithms selected from this group were Trace ratio and ReliefF. Trace ratio [32] maximizes data similarity for samples of the same class, or those that are close to each other, while minimizing data similarity for samples of different classes, or those that are far away from each other. More important features have a larger score. ReliefF [33] assigns a weight to each feature of a dataset, and features whose values are above a predefined threshold are then selected. The rationale behind ReliefF is to select instances randomly and, based on nearest neighbors, estimate the quality of features according to how well their values distinguish among instances of the same and different classes near each other. The larger the weight value of a feature, the higher its relevance [34].
(B) Statistical-Based Feature Selection Methods
Statistical-based feature selection methods rely on statistical measures to estimate the relevance of features. Some examples of statistical-based feature selection methods include the Gini index and the F-score. The Gini index [35] is a statistical measure that quantitatively evaluates the ability of a feature to separate instances from different classes [14]. It was earlier used in decision trees for splitting attributes. The rationale behind the Gini index is as follows: suppose S is a set of s samples with m different classes (C_i, i = 1, ..., m). According to the differences of classes, S can be divided into m subsets (S_i, i = 1, ..., m). Given that S_i is the sample set belonging to class C_i and s_i is the number of samples in S_i, the Gini index of the set S can be computed according to the equation below [36]:

Gini(S) = 1 − Σ_{i=1}^{m} P_i²

where P_i denotes the probability of any sample belonging to C_i, estimated by s_i/s.
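A compact sketch of this computation (an added illustration under the notation above, not code from the study) is:

```python
import numpy as np

def gini_index(labels: np.ndarray) -> float:
    """Gini index of a sample set S: Gini(S) = 1 - sum_i (s_i / s)^2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()          # P_i estimated by s_i / s
    return 1.0 - float(np.sum(p ** 2))

# A pure set scores 0; a perfectly mixed two-class set scores 0.5.
print(gini_index(np.array([1, 1, 1, 1])))  # 0.0
print(gini_index(np.array([0, 0, 1, 1])))  # 0.5
```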
The F-score [37] is calculated as follows: given a feature f_i, let n_j, μ, μ_j and σ_j represent the number of instances from class j, the mean feature value, the mean feature value on class j, and the standard deviation of the feature value on class j, respectively. The F-score of the feature f_i can then be determined as:

F(f_i) = [Σ_j n_j (μ_j − μ)²] / [Σ_j n_j σ_j²]

(C) Sparse Learning Based Feature Selection Methods

Sparse learning based methods are a group of embedded approaches. They aim at reducing the fitting errors along with some sparse regularization terms, which force feature coefficients to be small or exactly equal to zero; the corresponding features are then discarded [14]. Feature selection algorithms belonging to this group have been recognized to produce good performance and interpretability. In this study, sparse learning based methods with the following sparse regularization terms were implemented: the 1-norm regularizer (LS-121) [38] and the 2,1-norm regularizer (LL-121) [39].
(D) Information Theoretical Based Methods
Information theoretical-based methods apply some heuristic filter criteria in order to estimate the relevance of features. Some feature selection algorithms that belong to this family include Joint Mutual Information (JMI) and Mutual Information Maximization (MIM) or Information Gain. The JMI seeks to incorporate new unselected features that are complementary to existing features given the class labels in the feature selection process [14] while the MIM measures the importance of a feature by its correlation with the class labels. MIM assumes that features with strong correlations would achieve a good classification performance [14].
(E) Wrapper
Wrapper methods use a predefined learning algorithm, which acts like a black box to assess the importance measures of selected features based on their predictive performance. Two steps are involved in selecting features. First, a subset of features is searched and then selected features are evaluated repeatedly until the highest learning performance is reached. Features regarded as being relevant are the ones that yield the highest learning performance [14]. These two steps are implemented using the forward or backward selection strategies. In forward selection strategy, the search of relevant features starts with an empty set of features, then features are progressively added into larger subsets, whereas in backward elimination, it starts with the full set of features and then progressively eliminates the least relevant ones [40].
However, the implementation of wrapper methods is limited in practice for high-dimensional datasets due to the large size of the search space. Examples of wrapper methods used in this study include decision tree forward (dt-f) [41] and support vector machine backward (svm-b) [41].
Vegetation Indices Computation
In total, 75 vegetation indices (VI) derived from Sentinel-2 wavebands were computed using the online indices database (IDB) developed by Henrich, et al. [19]. The IDB provides over 261 parametric and non-parametric indices that can be used for over 99 sensors and allows the viewing of all available VI for specific sensors and applications [19]. The VI in this study were selected because of their usefulness for vegetation mapping and in order to increase the dimensionality of the Sentinel-2 data.
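As an added illustration of how such indices are derived from the band data (the text does not single out any specific index), the widely used NDVI can be computed from the 10 m bands B8 (near-infrared) and B4 (red):

```python
import numpy as np

# Sketch: NDVI = (NIR - Red) / (NIR + Red) from Sentinel-2 bands B8 and B4.
# The small arrays stand in for atmospherically corrected reflectance rasters.
b4 = np.array([[0.05, 0.08], [0.06, 0.07]])  # red (B4) reflectance
b8 = np.array([[0.40, 0.35], [0.42, 0.30]])  # near-infrared (B8) reflectance
ndvi = (b8 - b4) / (b8 + b4)
print(ndvi)  # values near 1 indicate dense green vegetation
```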
Classification Algorithm: Random Forest (RF)
The RF classifier was used to infer models from the different feature subsets selected by the investigated feature selection methods. Random forest (RF) is a combination of decision tree classifiers where each classifier casts a single vote for the most frequent class to classify an input vector [42]. RF grows trees from random subsets drawn from the input dataset using methods such as bagging or bootstrap aggregation. The split of the input dataset is typically performed using attribute selection measures (Information Gain, Gini-Index). Attribute selection measures are useful in maximizing dissimilarity between classes and therefore determine the best split selection in creating subsets [43]. When training a RF model, the user defines the number of features considered at each node in order to generate a tree, and the number of trees to be grown. The classification of a new dataset is done by passing each case of the dataset down each of the grown trees; the forest then chooses the class having the most votes of the trees for that case [44]. More details on RF can be found in Breiman [45]. RF was chosen in this study as it can efficiently handle large and highly dimensional datasets [46,47].
Model Assessment
To assess the classification accuracy of models on test datasets, estimated classes from the different models developed in this study were cross-tabulated against the ground-sampled classes for corresponding pixels in a confusion matrix. The performance of the developed models was assessed on test datasets using performance measures such as the user's accuracy (UA), producer's accuracy (PA) and f-score of the Parthenium weed class. Supplementary information, including the PA and UA of other classes and the Kappa coefficient, was added. The UA refers to the probability that a pixel labeled as a certain class on the map represents that class on the ground. The PA refers to the probability that pixels belonging to a ground-sampled class are classified into the correct class. The f-score is the harmonic mean of UA and PA and is typically used to assess per-class accuracy, as it represents a true outcome for specific classes [23,25]. The Kappa coefficient represents the extent to which classes on the ground are correct representations of classes on the map. Their formulae are as follows:

UA = TP / (TP + FP)
PA = TP / (TP + FN)
f-score = 2 × (UA × PA) / (UA + PA)
Kappa coefficient = (P_o − P_e) / (1 − P_e)

where TP (true positive) represents the number of correctly labeled positive samples; FP (false positive) represents the number of negative samples incorrectly labeled as positive; FN (false negative) represents the number of positive samples incorrectly labeled as negative; P_o is the relative observed agreement among classes; and P_e is the hypothetical probability of chance agreement.
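These per-class measures can be computed directly from a confusion matrix. The following sketch (an added illustration; the rows-as-ground-truth convention is an assumption) mirrors the formulae above:

```python
import numpy as np

def per_class_metrics(C: np.ndarray, k: int):
    """PA, UA and f-score of class k from confusion matrix C (rows = truth)."""
    tp = C[k, k]
    fn = C[k, :].sum() - tp      # class-k pixels labeled as something else
    fp = C[:, k].sum() - tp      # other pixels labeled as class k
    pa = tp / (tp + fn)          # producer's accuracy
    ua = tp / (tp + fp)          # user's accuracy
    f = 2 * ua * pa / (ua + pa)  # f-score: harmonic mean of UA and PA
    return pa, ua, f

def kappa(C: np.ndarray) -> float:
    """Kappa coefficient from a confusion matrix."""
    n = C.sum()
    p_o = np.trace(C) / n                       # relative observed agreement
    p_e = (C.sum(0) * C.sum(1)).sum() / n ** 2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

C = np.array([[50, 5], [4, 41]])  # hypothetical 2-class confusion matrix
print(per_class_metrics(C, 0), kappa(C))
```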
Software and Feature Selection
All investigated feature selection methods were applied to the three aforementioned training datasets using the Scikit-feature library, a package of the Python (version 3.6) programming language. This library was developed by Li, et al. [14] and provides more than 40 feature selection algorithms. To obtain the f-score accuracy of the selected features for each feature selection method, we first created a range of numbers from 1 to 85 (the total number of variables) instead of specifying the number of selected features as prescribed by Li, et al. [14]. Each number in the range corresponded to the size of a selected feature subset. Then, for each dataset, a RF model was trained and evaluated on the test dataset using each selected feature subset through a loop iteration (a condensed illustrative sketch of this loop is given below). RF was run in python (version 3.6) using the Sklearn library [48]. As RF was only used in this study for evaluating the performance of the optimal variables selected by the different feature selection methods, its hyperparameters were kept at their defaults (e.g., number of trees in the forest equal to 10; criterion set to "gini"). Default hyperparameters often yield excellent results [49]. Additionally, according to Du, et al. [50], larger numbers of trees do not influence classification results. This procedure was repeated ten times by reshuffling the samples of the training and test sets, and the mean f-score was computed for each selected feature subset. This was to ensure the reliability of the results of the investigated feature selection methods. The feature subset with the highest mean f-score was considered the most optimal. A code that automates the whole procedure, including deriving the VI, was written in Python (version 3.6).

Results

Figure 2 shows that the size of the feature subset and of the training and test sets determines the f-score accuracy of Parthenium weed using RF. In general, the f-score increases with an increase in the size of the feature subset until it reaches a plateau around 10 features, showing the insensitivity of RF in the face of noisy or redundant variables. However, it is noticeable that the best f-score accuracies of some feature selection methods, such as Gini-index (a statistical-based method) and ReliefF, and LL-121, were found at smaller feature subsets. With respect to the size of the training set, the f-score of Parthenium weed increased as the ratio between training and test sets got larger. As a rule of thumb, this shows that when the ratio between training and test datasets is large (for example 3:1), a learning curve with a higher f-score is produced. Nevertheless, some feature selection methods such as svm-b, Gini-index and F-score seemed to yield similar f-score accuracies regardless of the size of the dataset.
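A condensed sketch of the subset-evaluation loop referred to above is given below. It is an added illustration, not the study's actual code: the "ranking" array is assumed to hold feature indices ordered from most to least relevant by one feature selection method, the seed handling is an assumption, and RF hyperparameters are left at the sklearn defaults, as in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

def mean_f1_per_subset(X, y, ranking, target_class, n_repeats=10):
    """Mean f-score of the target class for each feature-subset size."""
    scores = np.zeros(len(ranking))
    for rep in range(n_repeats):  # reshuffle train/test ten times
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.25, random_state=rep, stratify=y)
        for k in range(1, len(ranking) + 1):  # subset sizes 1..85
            cols = ranking[:k]
            rf = RandomForestClassifier().fit(X_tr[:, cols], y_tr)
            pred = rf.predict(X_te[:, cols])
            scores[k - 1] += f1_score(y_te, pred,
                                      labels=[target_class], average="macro")
    return scores / n_repeats  # subset with the highest mean f-score is optimal
```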
1st Training Set
As per Table 2, all the investigated feature selection methods yielded similar f-scores of Parthenium weed for the optimal feature subset. F-score accuracies varied from 71% to 72%. Svm-b produced the highest f-score accuracy (72.5%). However, in terms of the size of the optimal feature subset, ReliefF and Gini-index were the best at reducing the dimensionality of the full dataset, with feature subsets of 6 and 13 features, respectively. The F-score method was the weakest from this point of view. For this dataset, the ReliefF method can be recommended, as its f-score is among the highest and the size of its optimal feature subset among the smallest. The computational time and the accuracies of the other classes were also low and high, respectively, for the ReliefF method (Table 3).

Table 2. F-score, PA and UA of Parthenium weed using optimal feature subsets yielded by investigated feature selection methods for the first training set.
2nd Training Set
As shown in Table 4, apart from LS_121 and F-score, all feature selection methods could reduce the number of features with a higher f-score accuracy than the full dataset. As for the first dataset, svm-b was the best performing feature selection method; its PA, UA and f-score of Parthenium weed and Kappa coefficient (Table 5) were the highest. ReliefF selected the smallest number (4) of optimal features and was among the top performing feature selection methods after svm-b in terms of f-score and PA of Parthenium weed. It was followed by Gini-index and LL_121 with respect to f-score and size of feature subsets. Once more, the F-score method performed poorly because of the large number of selected features and no improvement of the f-score accuracy of the Parthenium weed.

Table 4. F-score, PA and UA of Parthenium weed using optimal feature subsets yielded by investigated feature selection methods for the second training set.

Table 5. Classification accuracies of other classes using optimal feature subsets yielded by investigated feature selection methods for the second training set.

3rd Training Set

Table 6 shows that LL_121 and ReliefF, respectively, were among the feature selection methods that selected a small subset of features with a high f-score of Parthenium weed, with PA and UA accuracies more than 3% above those of the full dataset. ReliefF, for example, yielded an f-score of 77.2%, a PA of 80% and a UA of 75% with only 7 optimal features, whereas the full dataset yielded an f-score of 72.6%, a PA of 75.2% and a UA of 71.4% without any feature selection method applied. Svm-b outperformed all the feature selection methods with the highest PA (82.3%) and f-score of Parthenium weed (78.1%) and Kappa coefficient (0.83) (Table 7). However, the number of optimal features selected was quite large (33). As for the previous datasets, the performance of the F-score method was the worst, with the lowest PA (78.5%), UA (73.6%) and f-score (75.6%) of Parthenium weed and the largest feature subset (82).

Table 6. F-score, PA and UA of Parthenium weed using optimal feature subsets yielded by investigated feature selection methods for the third training set.

Figure 3 illustrates the spatial distribution of Parthenium weed and surrounding land-cover using the full dataset and optimal features from ReliefF.
Discussion
This study sought to compare ten feature selection algorithms, with two algorithms drawn from each of the following feature selection method groups: similarity-based, statistical-based, sparse-learning-based, information-theoretical-based and wrapper methods. These feature selection algorithms were applied to Sentinel-2 spectral bands combined with 75 vegetation indices for mapping a landscape infested by Parthenium weed. The comparison was based on the f-score of Parthenium weed using random forest (RF). We also tested the effect of training and test set sizes on the performance of the investigated feature selection algorithms.
Comparison of Feature Selection Methods
The results showed that feature selection algorithms could reduce the dimensionality of Sentinel-2 spectral bands combined with vegetation indices. The algorithms could increase the classification accuracy of Parthenium weed using the random forest classifier by up to 4%, depending on the adopted feature selection method and the size of the dataset. Previous studies found similar results [34,51]. For example, Colkesen and Kavzoglu [34], who applied filter-based feature selection algorithms and three machine learning techniques to a WorldView-2 image to determine the most effective object features in object-based image analysis, achieved a significant improvement (about 4%) by applying feature selection methods. Overall, ReliefF was the best performing feature selection method because it could bring the number of features down from 85 to 6, 4 and 7 on the first, second and third datasets respectively. Its f-score, PA and UA accuracies for Parthenium weed were also among the highest (Table 2, for example). According to Vergara and Estévez [52], the purpose of feature selection is to determine the smallest feature subset that can produce the minimum classification error. This finding concurs with studies that compared ReliefF with other feature selection methods. For example, Kira and Rendell [53] found that the subsets of features selected by ReliefF tend to be small compared with those of other feature selection methods, as only statistically relevant features are retained during the selection process. Studies that compared ReliefF with other feature selection methods reported that the mapping accuracy with features selected by ReliefF was similar to that of the best feature selection methods [34,54]. However, our findings contrast with those of [34], who found that ReliefF selected the highest number of features in comparison with the Chi-square and information gain algorithms, with slightly lower classification accuracies than information gain using random forest, support vector machines and neural networks. We suggest that repeated classifications on reshuffled training and test data should be investigated to confirm their findings. In terms of f-score, PA and UA of Parthenium weed, svm-b, which belongs to the wrapper group, outperformed all the other feature selection methods on the three datasets (Tables 2, 4 and 6). Although computationally expensive, wrappers have been noted by several authors to outperform filter methods [55][56][57]. In this study, however, svm-b was found to be less computationally intensive than some of the filter methods (Tables 2, 4 and 6). The F-score method did not perform well for mapping Parthenium weed because of its low accuracies and the large subsets of features it selected. To the best of our knowledge, its use in earth observation studies is very limited, and further investigations are therefore necessary. Concerning the investigated feature selection groups, not a single group performed well on all the datasets. This supports the recommendation that there is no universal 'best' method for all learning tasks [58].
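To make the evaluation loop concrete, the following minimal Python sketch reproduces its structure on synthetic stand-in data: a filter method ranks the 85 features, a random forest is trained on the k best-ranked ones, and the f-score of the target class is reported. Mutual information stands in for ReliefF (which is not part of scikit-learn), and the data, class labels and subset sizes are illustrative placeholders, not the study's dataset.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in: 85 features (10 bands + 75 indices), 3 land-cover
    # classes; class 1 plays the role of Parthenium weed.
    X, y = make_classification(n_samples=600, n_features=85, n_informative=10,
                               n_classes=3, n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    def target_fscore(k):
        """F-score of class 1 using the k best-ranked features."""
        sel = SelectKBest(mutual_info_classif, k=k).fit(X_tr, y_tr)
        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        rf.fit(sel.transform(X_tr), y_tr)
        pred = rf.predict(sel.transform(X_te))
        return f1_score(y_te, pred, labels=[1], average=None)[0]

    for k in (4, 6, 13, 33, 85):   # candidate subset sizes, incl. the full set
        print(f"k = {k:2d}: f-score = {target_fscore(k):.3f}")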
Impact of Training Sizes on Feature Selection Performance
The results show that the performance of a feature selection algorithm depends on the ratio between the training and test datasets. Smaller differences in the f-score accuracy of Parthenium weed between the optimal features and the full dataset were obtained when the ratio between training and test sets was 1:3 or 1:1 (Tables 2 and 4). Consistent with Jain and Zongker [59], a small sample size and a large number of features impair the performance of feature selection methods due to the curse of dimensionality. All the investigated feature selection algorithms positively influenced the f-score accuracies when the ratio between training and test sets was large (approximately 3:1, i.e. 70% training and 30% test) (Table 6). This concurs with [51], who demonstrated that the larger the training size, the better the classification accuracy; they also highlighted the necessity of selecting an appropriate feature selection method for improving classification accuracies. In this study, we found that some feature selection algorithms, such as Gini-index and F-score, which belong to the statistical-based feature selection methods, and svm-b did not seem to be affected by the curse of dimensionality (Figure 2). We did not come across studies showing this finding, hence further investigations should be carried out for corroboration.
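A sketch of the ratio experiment is given below; the three splits mirror the 1:3, 1:1 and approximately 3:1 (70/30) configurations, again on synthetic stand-in data with mutual information replacing the actual selectors, so only the structure of the experiment, not its numbers, carries over.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=600, n_features=85, n_informative=10,
                               n_classes=3, n_clusters_per_class=1, random_state=0)

    for train_frac, ratio in ((0.25, "1:3"), (0.50, "1:1"), (0.70, "~3:1 (70/30)")):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, train_size=train_frac, stratify=y, random_state=0)
        sel = SelectKBest(mutual_info_classif, k=7).fit(X_tr, y_tr)
        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        rf.fit(sel.transform(X_tr), y_tr)
        f = f1_score(y_te, rf.predict(sel.transform(X_te)),
                     labels=[1], average=None)[0]
        print(f"train:test {ratio:>14s}  f-score = {f:.3f}")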
Implications of Findings in Parthenium Weed Management
On invaded landscapes, Parthenium weed expands more rapidly than native plants [60]. Spectral bands alone are not enough to achieve reliable mapping accuracies [61]. By increasing data dimensionality through the combination of Sentinel-2 image bands, vegetation indices and other variable types, and by applying an appropriate feature selection approach, higher Parthenium mapping accuracy can be achieved. This study provides guidance on how newly developed feature selection methods, based on the classification of Li et al. [17], should be used to reduce the dimensionality of high-temporal-resolution imagery such as Sentinel-2 when mapping Parthenium weed. An accurate spatial distribution of Parthenium weed would enhance decision-making for appropriate mitigation measures.
Conclusions
The following conclusions can be drawn from the findings: (1) Wrapper methods such as svm-b yield higher accuracies in classifying Parthenium weed using the random forest classifier; (2) ReliefF was the best performing feature selection method in terms of f-score and the size of the optimal feature subset; (3) To achieve better performance with feature selection methods, a ratio of 3:1 between the training and test set sizes turned out to be better than ratios of 1:1 and 1:3; (4) Gini-index, F-score and svm-b were only slightly affected by the curse of dimensionality; (5) No feature selection method group performed best on all the datasets.
The findings of this study are critical for reducing the computational complexity of processing large volumes of Sentinel-2 image data. With the advent of Sentinel-2A and 2B, an increased volume of data is available, necessitating feature selection. This offers possibilities to derive useful information from these data and hence accurate classification maps of Parthenium weed. Further research should compare other feature selection methods with different classifiers. Moreover, a combination of feature selection methods such as ReliefF and svm-b should be considered, as they respectively select a small number of features and yield a high f-score accuracy.
Author Contributions: Z.K. was responsible for the conceptualization, methodological development, analysis and write-up. O.M., J.O. and K.P. were responsible for conceptualization, methodological development, reviewing and editing the paper. In addition, O.M. was responsible for acquiring funding.
Funding: This study was supported by the UKZN funded Big data for Science society (BDSS) programme and the DST/NRF funded SARChI chair in land use planning and management (Grant Number: 84157). | 7,676.2 | 2019-08-13T00:00:00.000 | [
"Environmental Science",
"Computer Science"
] |
Improving the viewing angle properties of microcavity OLEDs by using dispersive gratings
The changes of emission peak wavelength and angular intensity with viewing angle have been issues for the use of microcavity OLEDs. We will investigate distributed Bragg reflectors (DBRs) constructed from materials with large index dispersion for reducing the viewing angle dependence. A DBR stack mirror, aiming at a symmetric structure and a small number of grating periods for practical fabrication, is studied to achieve a chirp-featured grating for OLEDs with a blue emission peak at 450nm. For maximizing the compensation of the viewing angle dependence, the effects of the dispersive index, grating structure, thickness of each layer of the grating, grating period and chirp will be comprehensively investigated. The contributions of TE and TM modes to the angular emission power, which have not previously been expressed in detail, will be analyzed for the grating optimization. In studying the light emission of OLEDs, we will investigate the Purcell effect, which is important but has not been properly considered. Our results show that with a proper design of the DBR, not only can a wider viewing angle be achieved but also the color purity of OLEDs can be improved. ©2007 Optical Society of America
OCIS codes: (230.3670) Light-emitting diodes; (250.3680) Light-emitting polymer; (050.2770) Gratings.
References and links
1. A. Dodabalapur, L. J. Rothberg, and T. M. Miller, "Color variation with electroluminescent organic semiconductors in multimode resonant cavities," Appl. Phys. Lett. 65, 2308-2310 (1994).
2. D. G. Lidzey, D. D. C. Bradley, S. J. Martin, and M. A. Pate, "Pixelated multicolor microcavity displays," IEEE J. Sel. Topics Quantum Electron. 4, 113-118 (1998).
3. D. G. Lidzey, M. A. Pate, D. M. Whittaker, D. D. C. Bradley, M. S. Weaver, T. A. Fisher, and M. S. Skolnick, "Control of photoluminescence emission from a conjugated polymer using an optimised microcavity structure," Chem. Phys. Lett. 263, 655-660 (1996).
4. F. S. Juang, L. H. Laih, C. J. Lin, and Y. J. Hsu, "Angular dependence of the sharply directed emission in organic light emitting diodes with a microcavity structure," Jpn. J. Appl. Phys. 41, 2787-2789 (2002).
5. K. Neyts, P. D. Visschere, D. K. Fork, and G. B. Anderson, "Semitransparent metal or distributed Bragg reflector for wide-viewing-angle organic light-emitting-diode microcavities," J. Opt. Soc. Am. B 17, 114-119 (2000).
6. N. Tessler, S. Burns, H. Becker, and R. H. Friend, "Suppressed angular color dispersion in planar microcavities," Appl. Phys. Lett. 70, 556-558 (1997).
7. L. Hou, Q. Hou, Y. Peng, and Y. Cao, "All-organic flexible polymer microcavity light-emitting diodes using 3M reflective multilayer polymer mirrors," Appl. Phys. Lett. 87, 243504 (2005).
8. C. He, Y. Tang, X. Zhao, H. Xu, D. Lin, H. Luo, and Z. Zhou, "Optical dispersion properties of tetragonal relaxor ferroelectric single crystals 0.65Pb(Mg1/3Nb2/3)O3-0.35PbTiO3," Opt. Mater. 29, 1055-1057 (2007).
9. J. L. H. Chau, Y. M. Lin, A. K. Li, W. F. Su, K. S. Chang, S. L. C. Hsu, and T. L. Li, "Transparent high refractive index nanocomposite thin films," Mater. Lett. 61, 2908-2910 (2007).
10. L. H. Smith, J. A. E. Wasey, and W. L. Barnes, "Light outcoupling efficiency of top-emitting organic light-emitting diodes," Appl. Phys. Lett. 84, 2986-2988 (2004).
11. C. L. Lin, T. Y. Cho, C. H. Chang, and C. C. Wu, "Enhancing light outcoupling of organic light-emitting devices by locating emitters around the second antinode of the reflective metal electrode," Appl. Phys. Lett. 88, 081114 (2006).
12. E. M. Purcell, "Spontaneous emission probabilities at radio frequencies," Phys. Rev. 69, 681-684 (1946).
13. W. C. H. Choy and E. H. Li, "The applications of interdiffused quantum well in normally-on electroabsorptive Fabry-Perot reflection modulator," IEEE J. Quantum Electron. 33, 382-393 (1997).
14. O. H. Crawford, "Radiation from oscillating dipoles embedded in a layered system," J. Chem. Phys. 89, 6017-6027 (1989).
15. V. Bulovic, V. B. Khalfin, G. Gu, P. E. Burrows, D. Z. Garbuzov, and S. R. Forrest, "Weak microcavity effects in organic light-emitting devices," Phys. Rev. B 58, 3730-3740 (1998).
16. X. W. Chen, W. C. H. Choy, and S. He, "Efficient and rigorous modeling of light emission in planar multilayer organic light-emitting diodes," IEEE J. Display Technol. 3, 110-117 (2007); K. Neyts, "Simulation of light emission from thin-film microcavities," J. Opt. Soc. Am. A 15, 962-970 (1998); W. Lukosz, "Theory of optical-environment-dependent spontaneous emission rates for emitters in thin layers," Phys. Rev. B 22, 3030-3038 (1980).
17. P. A. Hobson, J. A. E. Wasey, I. Sage, and W. L. Barnes, "The role of surface plasmons in organic light emitting diodes," IEEE J. Sel. Top. Quantum Electron. 8, 378-386 (2002).
18. R. H. Jordan, L. J. Rothberg, A. Dodabalapur, and R. E. Slusher, "Efficiency enhancement of microcavity organic light emitting diodes," Appl. Phys. Lett. 69, 1997-1999 (1996).
19. Y. Kijima, N. Asai, and S. Tamura, "A blue organic light emitting diode," Jpn. J. Appl. Phys. 38, 5274-5277 (1999).
20. B. Deveaud, ed., The Physics of Semiconductor Microcavities: From Fundamentals to Nanoscale Devices (Wiley-VCH, 2007), Ch. 12, p. 245.
Introduction
Microcavity structures are useful for modifying the light emission of organic light-emitting devices (OLEDs) [1][2][3]. However, typical microcavity structures result in a large angular dependence of the emission color [4], which causes problems for display applications. To reduce the color shift with viewing angle (V_ang) in microcavity devices, an extra scattering layer has been used on top of a microcavity OLED, in which a light control film is used for spatial color filtration of the emission [3]. However, the filtration causes light blocking and absorption and thus reduces the device efficiency. Meanwhile, structures with a metal mirror on one side of the OLED and a semi-transparent silver mirror on the other side have been reported [5]. Since silver has absorption loss in the visible region and is easily oxidized and diffuses into the organic layers, these effects may degrade the performance of OLEDs.
The distributed Bragg reflector (DBR) has been actively studied for improving OLED performance. Since the materials used are typically transparent dielectrics, the DBR has the advantage of low absorption loss. It has been found that poly(p-phenylenevinylene) (PPV) has a dispersive refractive index which can reduce the V_ang issue [6,7]. However, the wavelength (λ) region of PPV which contributes to the large index dispersion also has intrinsic absorption loss. Besides, PPV is often used in the OLED structure itself; due to carrier transporting and balancing issues, there is a tradeoff in increasing the thickness of PPV for minimizing the color shift. Recently, some materials with large index dispersion have been reported [8,9] which can be used for a dispersive DBR. However, most of the attention has been paid to the high refractive index (n) for improving the light extraction [9] rather than for improving the V_ang behavior. In order to optimize the grating structures, the distribution of the TE and TM modes in the angular emission power will be expressed for optimizing the TE and TM grating reflectance, which has not been described in detail before. In studying the light emission of OLEDs, we will tackle the Purcell effect, which is important particularly for fluorescent organic materials and has not been properly considered recently [10,11].
In this paper, index-dispersive DBR mirrors will be introduced to reduce the color shift with V_ang. The DBR structure will be designed to achieve chirp for increasing the effective cavity length and further minimizing the color shift. The effects of the material parameters of n magnitude and n dispersion, as well as the structural parameters of the thickness of each layer, the layer arrangement and the number of periods, will be studied for reducing the V_ang dependence.
Theoretical formulation
In a microcavity OLED, the resonance wavelength λ_r of the cavity modes can be described as

m λ_r = 2 Σ_i n_i(λ) L_i cos θ_i,

where m is the mode number, L_i is the thickness of the i-th layer of the microcavity OLED structure, n_i is the refractive index and θ_i is the angle of the ray in the i-th layer. Generally, λ_r blue shifts with increasing V_ang. In order to reduce the change of λ_r with V_ang, one can counter-increase the cavity thickness with V_ang and increase n_i(λ) with decreasing λ. Here, both the cavity thickness and n_i(λ) will be used to minimize the change of λ_r with V_ang. Typically, SiO2, Si3N4 and TiO2 are used to form a DBR. However, their n shows no obvious change with λ in the visible region. In order to improve the V_ang issue, a dielectric material with large n magnitude and n dispersion and a wide energy gap should be used, like the recently reported 0.65Pb(Mg1/3Nb2/3)O3-0.35PbTiO3 (PMNT35%) [8] with an energy gap of 4.03eV. From the experimental results, the Sellmeier dispersion equations of n perpendicular (o) and parallel (e) to the uniaxial c-axis at room temperature are given in [8]. The light emission of multilayered OLED structures is rigorously modeled through a classical electromagnetic approach, with an emitting layer sandwiched between two stacks of films, taking into account the Purcell effect [12], which is strengthened by the Fabry-Perot structure [13] of OLEDs. The nonradiative losses due to the metal electrode and the other materials used in the structure, as well as the effects of the thick glass substrate, have been fully considered here, although they have been ignored by others [5,14,15]. The vertical electric dipole (VED) and horizontal electric dipole (HED) are located in the recombination zone in the emission layer. The two stacks of films can be considered as two effective interfaces characterized by their total reflection and transmission coefficients. The total radiation powers F_V and F_H for the VED and HED, respectively, normalized by the radiation power of the dipole in an infinite medium ε_e, can be obtained [16]. Similarly, the normalized powers U_V and U_H for the VED and HED, respectively, transmitted to the outermost region (air) can also be obtained. For a randomly oriented dipole with equal probability for all directions in space, we have

F = (1/3) F_V + (2/3) F_H,  U = (1/3) U_V^TM + (2/3) (U_H^TE + U_H^TM),

where the superscripts TE and TM denote the TE and TM modes, respectively (the VED couples only to TM modes). The radiative decay rate of excitons Γ_r^m is modified to F × Γ_r^0 [16], where Γ_r^0 is the radiative decay rate in the infinite medium. Considering that the non-radiative decay rate Γ_nr is a constant, the internal quantum efficiency η_int^cav in the microcavity becomes

η_int^cav = F η_int^0 / (1 − η_int^0 + F η_int^0),

where η_int^0 is the internal quantum efficiency (QE) of the bulk emitting material. Due to this change of the internal QE in the microcavity, the photon outcoupling QE should be modified accordingly; η(λ) denotes the modified outcoupling QE taking the Purcell effect into account. The angular power density P(α) in the outermost region (air) can then be expressed as an integral over the considered λ range (λ1, λ2) of the outcoupled power weighted by the intrinsic emission spectrum P_o(λ) of the emitting material. The external QE of the OLED (η_ext) equals η × η_int^0. Meanwhile, from the expressions of the angular power density and of U, one can determine the TE and TM mode contributions to the output intensity, which is important for optimizing the DBR structures and will be discussed in the next section.
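As a numerical illustration of the resonance condition above, the short Python sketch below evaluates m λ_r = 2 Σ n_i L_i cos θ_i for a hypothetical, dispersion-free layer stack; the layer indices, thicknesses and mode number are illustrative assumptions, not the device parameters of the Results section.

    import numpy as np

    # Hypothetical, dispersion-free stack (illustrative values only).
    n = np.array([1.6, 1.6, 1.6, 2.06])     # organic layers + ITO
    L = np.array([15.0, 35.0, 25.0, 15.0])  # thicknesses in nm
    m = 1                                   # assumed mode number

    def resonance_wavelength(alpha_deg):
        """lambda_r from m*lambda_r = 2*sum(n_i*L_i*cos(theta_i))."""
        s = np.sin(np.radians(alpha_deg))       # sin(alpha) in air
        cos_theta = np.sqrt(1.0 - (s / n)**2)   # Snell: n_i*sin(theta_i) = sin(alpha)
        return 2.0 * np.sum(n * L * cos_theta) / m

    for a in (0, 10, 20, 30):
        print(f"V_ang = {a:2d} deg -> lambda_r = {resonance_wavelength(a):.1f} nm")
    # lambda_r decreases monotonically with angle: this is the blue shift
    # that the dispersive DBR is designed to compensate.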
Results and discussion
The structural effects of layer thickness, layer structure and number of periods, and the material effects of large n value and n dispersion of the DBR, will be investigated here to reduce the V_ang dependence of the emission λ. The DBR will also be designed to achieve a suitable chirp, which can further diminish the V_ang dependence. Besides PMNT [11], an index-dispersive polymer of TiO2-doped polymer epoxy resin [9] will also be used for constructing the DBR, since polymer LEDs (PLEDs) have gained intensive interest due to their simple fabrication process. The refractive indices of the organic materials, ITO, SiO2 and Si3N4 are assumed to be 1.6, 2.06+0.005i, 1.5 and 2.0, respectively. The complex permittivities of Ag and Al are taken from [17]. In the discussion, unless specified otherwise, the peak λ shift obtained by changing V_ang from 0° to 30° (Δλ) is studied. It should be noted that the model has been verified and shows very good agreement with the experimental results [18], although the details are not shown here.
In optimizing the DBR structure, one needs to determine the significance of the TE and TM reflectance of the DBR for the electroluminescent (EL) spectrum. For this reason, the contributions of TE and TM modes to the angular power distribution have been investigated. The device structure is Al(40nm) /Alq(15nm) /BCP(10nm) /NPD(25nm) /m-MDATA(25nm) /ITO(15nm) /[PMNT(70nm) /SiO2(20nm)]×2.5 periods (see inset of Fig. 1). At V_ang = 0, the power contributed by the TE modes is the same as that of the TM modes, because the reflectance spectrum of TE is the same as that of TM at zero degrees. Therefore, the TE and TM spectra coincide with each other, as shown in Fig. 1. However, when V_ang increases, the power contributed by the TM modes to the peak of the total spectrum reduces (due to the decrease of the TM reflectance) as compared with that of the TE modes (see Fig. 1). The same features occur in other microcavity OLED structures. As a result, the TE reflectance spectrum of the DBR structures will be discussed hereafter for improving the V_ang behavior.
Structure effects
A blue OLED [19] of Al(40nm) /Alq(15nm) /BCP(10nm) /NPD(25nm) /m-MDATA(25nm) /ITO(15nm) /[Si3N4(57nm) /SiO2(75nm)]×2.5 periods is investigated here. The quarter-λ thicknesses of Si3N4 and SiO2 are designed to make the emission λ peak at 450nm. As shown in Table 1 (column A), the peak λ blue shifts by 23.2nm to 427nm. To reduce the V_ang dependence, Si3N4 is replaced by index-dispersive PMNT. Meanwhile, in order to take full advantage of the dispersive feature and large magnitude of the n of PMNT, the thickness ratio of PMNT to SiO2 is increased. Our results show that when the thickness of SiO2 is reduced, the stop band of the DBR reflectance spectrum narrows, which improves the color purity due to the narrowing of the emission spectrum of NPD by the DBR, as shown in Fig. 3(b), where the EL of the non-cavity OLED is obviously broader than that of the OLED with the DBR grating. However, when the thickness of SiO2 is further reduced, most of the light will be filtered out and η_ext will decrease, i.e. there is a tradeoff between color purity and η_ext. Concerning PMNT, when its thickness increases, Δλ reduces. However, when PMNT is too thick, the DBR reflectance spectrum red shifts too much, making the DBR mismatch the emission spectrum of NPD, and thus η_ext reduces. With this understanding of the thickness effects, the period structure of PMNT(70nm) and SiO2(20nm) is finalized. As shown in Table 1 (column C), Δλ reduces to 18.8nm and the spectrum shows no significant distortion, while for the Si3N4(57nm) /SiO2(75nm) DBR the photon outcoupling QE at 30° is distorted significantly (see Fig. 2). This is because the peak outcoupling QE (without taking into account the emission spectrum of NPD), as shown in the inset of Fig. 2, is blue shifted to 406nm, which is too far from the emission peak of NPD. The DBR layer arrangement of high-index (H) PMNT and low-index (L) SiO2 can be, starting from ITO, (a) HLHL⋯HL, (b) HLHL⋯HLH, (c) LHLH⋯LH or (d) LHLH⋯LHL. Since the substrate is glass, it is reasonable to consider that the SiO2 layers and the glass substrate have the same n; therefore, structure (a) is equivalent to (b) and structure (c) to (d). Our results show that structure (a) is better than (c): taking a two-period DBR as an example, structure (a) gives Δλ = 20nm while that of (c) is 27.5nm. Here, 2.5 periods of PMNT(70nm) /SiO2(20nm) will be used as the DBR, as similar periods of a TiO2/SiO2 DBR have been experimentally used for improving the efficiency of OLEDs [18]. In case a smaller Δλ is needed, the number of DBR unit periods and the thickness of PMNT can be increased. For instance, by increasing the thickness of PMNT from 70nm to 145nm, Δλ can be reduced from 18.8nm to 15nm. Meanwhile, a DBR with asymmetric periods can be used to further reduce Δλ [20].
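The stop-band behavior discussed above can be checked with a standard transfer-matrix calculation. The sketch below computes the TE power reflectance of the 2.5-period PMNT(70nm)/SiO2(20nm) stack under simplifying assumptions: constant indices (n_PMNT ≈ 2.6, n_SiO2 = 1.5, ITO as the incidence medium, glass as the exit medium) instead of the measured Sellmeier dispersion, so the numbers are indicative only.

    import numpy as np

    def tmm_reflectance_TE(n_layers, d_layers, n_in, n_out, lam_nm, alpha_deg):
        """TE power reflectance of a layer stack at free-space wavelength lam_nm,
        for light incident from medium n_in at angle alpha_deg (in that medium)."""
        k0 = 2 * np.pi / lam_nm
        kx = k0 * n_in * np.sin(np.radians(alpha_deg))  # conserved tangential k
        def kz(n):
            return np.sqrt((k0 * n)**2 - kx**2 + 0j)
        M = np.eye(2, dtype=complex)
        for n, d in zip(n_layers, d_layers):            # characteristic matrices
            q, phi = kz(n), kz(n) * d
            M = M @ np.array([[np.cos(phi), 1j * np.sin(phi) / q],
                              [1j * q * np.sin(phi), np.cos(phi)]])
        qi, qo = kz(n_in), kz(n_out)
        r = ((M[0, 0] * qi - M[1, 1] * qo) + (M[0, 1] * qi * qo - M[1, 0])) \
            / ((M[0, 0] * qi + M[1, 1] * qo) + (M[0, 1] * qi * qo + M[1, 0]))
        return abs(r)**2

    # 2.5 periods, starting from ITO: H L H L H (H = PMNT, L = SiO2).
    stack_n = [2.6, 1.5, 2.6, 1.5, 2.6]
    stack_d = [70.0, 20.0, 70.0, 20.0, 70.0]   # nm
    for lam in (430.0, 450.0, 470.0):
        R = tmm_reflectance_TE(stack_n, stack_d, 2.06, 1.5, lam, 0.0)
        print(f"lambda = {lam:.0f} nm: R_TE = {R:.3f}")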
Material effects
By comparing the typical DBR structure (column A of Table 1) and the optimized structure (column C), it can be observed that Δλ is considerably reduced, particularly when V_ang < 20°. This is contributed not just by the structural design stated previously, but also by the material effects and the chirp. When the n magnitude increases from Si3N4 (n ≅ 2) to PMNT (n ≅ 2.6), Δλ is further reduced, as can be observed from Table 1 by comparing columns (B) and (C).
Chirp effects
Chirp can be introduced into the OLEDs by designing the DBR such that the emission peak λ of 450nm lies on the short-λ side of the DBR reflectance spectrum [see Fig. 3(a)]. In this case, the effective optical path length of the microcavity OLED increases, as shown in Fig. 3(b). However, at the same time, the emission intensity decreases to about 1/5 of that of the case of PMNT = 62nm [see Fig. 3(b)]. Taking into account the tradeoff between Δλ and emission intensity, the PMNT thickness is set at 70nm. Moreover, when V_ang increases, the reflectance spectrum blue shifts; the increase of the effective optical path length therefore diminishes, degrading the compensation. By introducing the high n value and n dispersion of PMNT, the blue shift of the reflectance spectrum reduces, which enhances the chirp for minimizing Δλ. Figure 4 shows the angular intensity distribution of the microcavity OLEDs. When V_ang < 20°, the change of the intensity of the PMNT grating is slightly larger than that of the conventional DBR. The change becomes larger when the angle increases to 30° but improves when the angle further increases, becoming better than that of the conventional DBR when the angle is > 45°. As a consequence, by changing the DBR structure from the typical Si3N4(57nm) /SiO2(75nm) DBR to the PMNT(70nm) /SiO2(20nm) DBR, there is generally no significant degradation in the angular intensity, and eventually the change recovers and is even better than that of the conventional DBR when the angle is > 45°.
While the QE and lifetime of PLEDs are continuously improving, it would be interesting to study microcavity PLEDs using a polymer DBR. Recently, a TiO2-nanoparticle-doped polymer with large n magnitude and dispersion has been reported [9]. Based on the knowledge gained in optimizing the PMNT grating, a 2.5-period TiO2-doped polymer (75nm) /PEDOT:PSS (20nm) DBR has been studied. Since the TiO2 nanocomposite film dissolves in tetrahydrofuran while PEDOT:PSS dissolves in water, it is possible to develop the polymer DBR. From column D of Table 1, Δλ is further reduced to 16.3nm, which makes the polymer structure attractive for DBR applications.
Conclusions
The reduction of Δλ by using an index-dispersive DBR grating has been investigated. The expressions of the TE and TM mode contributions to the angular intensity have been detailed for optimizing the DBR structure. The light emission and QE of OLEDs have been studied with the Purcell effect taken into account. In optimizing the DBR structures, material and structural parameters as well as chirp have been studied. Due to the dispersive feature and large magnitude of the PMNT refractive index, Δλ is reduced by ~50% at small viewing angles as compared with the typical DBR structure reported, without a significant degradation in angular intensity. Our results also show that polymers with a large dispersive index have potential for making microcavity PLEDs with less V_ang dependence. | 4,806.2 | 2007-10-01T00:00:00.000 | [
"Engineering",
"Materials Science",
"Physics"
] |
Smart thermal management with near-field thermal radiation
When two objects at different temperatures are separated by a vacuum gap they can exchange heat by radiation only. At large separation distances (far-field regime) the amount of transferred heat flux is limited by Stefan-Boltzmann's law (blackbody limit). In contrast, at subwavelength distances (near-field regime) this limit can be exceeded by orders of magnitude thanks to the contributions of evanescent waves. This article reviews the recent progress on the passive and active control of near-field radiative heat exchange in two- and many-body systems.
INTRODUCTION
The control of electron flow in solids is at the origin of modern electronics, which has revolutionized our daily life. The diode and the transistor, introduced by Braun [1] and Bardeen [2] respectively, are undoubtedly the cornerstones of modern information technologies. Such devices allow for rectifying, switching, modulating and even amplifying electric currents. Astonishingly, until very recently no thermal analogues of these building blocks had been devised to exert a similar control on heat flux. An important step forward in this direction was nevertheless made by Baowen Li and co-workers [3,4] and by Chang et al. [5] in the early 2000s, when they proposed phononic counterparts of the diode and the transistor [6]. These pioneering works have paved the way to a technology, also called "thermotronics" in analogy to traditional electronics, where electric currents and voltage biases are replaced by heat currents and temperature biases to control heat conduction through a network of solid elements. A recent review [7] summarizes the latest developments carried out to control the heat flux carried by conduction at both the macroscale and the microscale using artificial structures.
However, heat transport mediated by phonons in solid networks suffers from some weaknesses of a fundamental nature which intrinsically limit the performance of this technology. One of these limitations is linked to the speed of acoustic phonons itself (the speed of sound), which bounds the operational speed of these devices. Another intrinsic limitation of phononic devices is the presence of local Kapitza resistances, which come from the mismatch of the vibrational modes supported by the different solid elements in the network. This resistance can drastically reduce the heat transported across the system. To overcome these limitations, concepts for a purely photonic technology have been proposed as an alternative way to handle heat transfer at the nanoscale. In the present work, we review recent developments carried out in this direction. After briefly introducing the theoretical framework commonly used to describe radiative heat transfer in the near-field regime between two or several solid bodies, we describe the main physical mechanisms and related device concepts which allow for passive and active control of radiative heat transfer at the nanoscale. Finally, we conclude this review by suggesting future research directions for advanced thermal management with thermal photons.
SOME BASICS ON THE NEAR-FIELD HEAT TRANSFER
The radiative heat transfer between distant objects, in the far field, is bounded by the blackbody limit given by the Stefan-Boltzmann law [8]. The transport of heat in this situation is mediated by the propagating modes of the electromagnetic radiation emitted by the objects. When the separation distance is smaller than the thermal wavelength λ_T defined by Wien's displacement law, which is about 10 µm at room temperature, near-field effects become relevant due to the contribution of the evanescent modes of the electromagnetic field confined close to the surface of the objects. By bringing the objects to separations d < λ_T, the blackbody limit can notably be overcome owing to this near-field contribution from evanescent waves [9][10][11][12][13].

Figure 1: Sketch of radiative heat exchanges between two solids of volumes V1 and V2 held at temperatures T1 and T2 and separated by a vacuum gap of thickness d. At large separation distances, heat exchanges are mediated by propagating photons (wavy arrows). At subwavelength distances (d < λ_T, λ_T being the thermal wavelength), the heat transfer is enhanced by the contribution of evanescent waves localized on the surface of the bodies.

Hence, the radiative heat flux exchanged in the near field between two silica samples separated by a distance d = 100nm around the ambient temperature with a temperature difference ∆T = 50K is φ ≈ 20000 W·m⁻², while the blackbody limit is φ_BB = σT³∆T ≈ 75 W·m⁻² and the solar flux used for conventional photovoltaics is about φ_S = 1000 W·m⁻², σ being the Stefan-Boltzmann constant.
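The quoted orders of magnitude can be reproduced with the short estimate below (Python), using the linearized blackbody flux σT³∆T as written above.

    sigma = 5.670e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
    T, dT = 300.0, 50.0       # ambient temperature and temperature difference, K
    phi_BB = sigma * T**3 * dT
    print(f"phi_BB ~ {phi_BB:.0f} W/m^2")   # ~77 W/m^2, i.e. the ~75 W/m^2 quoted,
                                            # vs phi ~ 2e4 W/m^2 in the near field
                                            # and phi_S ~ 1e3 W/m^2 for the sun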
The near-field radiative heat exchanges in a given configuration of several solid objects in a thermal non-equilibrium situation are commonly calculated in the framework of fluctuational electrodynamics. To illustrate the basic principles of this approach, let us consider the simple example of two objects with volumes V1 and V2 held at temperatures T1 and T2, separated by a vacuum gap of thickness d, as sketched in Fig. 1. The thermal motion of charges within each of these objects induces fluctuating current densities j_i(r, ω) (i = 1, 2), which themselves induce fluctuating electric and magnetic fields E_i and H_i fulfilling the stochastic Maxwell equations [14]

∇ × E_i = iωµ0 H_i,  ∇ × H_i = −iωε0 ε_i(r, ω) E_i + j_i,

where ε_i(r, ω) denotes the local dielectric tensor of object i (here assumed to be non-magnetic) at point r; ε0 and µ0 are the permittivity and permeability of vacuum. The linearity of these equations allows us to relate E_i and H_i to the source currents j_i(r, ω) as follows [14]:

E_i(r, ω) = iωµ0 ∫ dr′ G^EE(r, r′, ω) j_i(r′, ω),  H_i(r, ω) = ∫ dr′ G^HE(r, r′, ω) j_i(r′, ω),

where G^EE and G^HE denote the linear electric and magnetic response tensors, also called the dyadic Green functions of the system. From these expressions one can determine the mean Poynting vector, which can be readily expressed in terms of both the Green tensor components and the correlation functions of the fluctuating currents. Assuming that the objects are in local thermal equilibrium, then according to the fluctuation-dissipation theorem these correlations are related to the local temperature by [15]

⟨j_α(r, ω) j_β*(r′, ω′)⟩ = (2ωε0/π) Im ε_i(r, ω) ħω [n(ω, T_i) + 1/2] δ_αβ δ(r − r′) δ(ω − ω′),

where ħ is the Planck constant, n(ω, T_i) = [exp(ħω/k_B T_i) − 1]⁻¹ is the Bose-Einstein distribution function at temperature T_i and k_B is the Boltzmann constant. It follows that the Poynting vector can be expressed in terms of all the local temperatures inside the system. The spectral radiative power P_1↔2(ω) exchanged between the two objects can be obtained by integrating the flux expressed by the Poynting vector over the surfaces A_i = ∂V_i of the two bodies. Note that this expression is only valid as long as the source currents in both objects are uncorrelated [14]. Finally, the net power exchanged between the two bodies is obtained by summation over all frequencies, i.e. P_net = ∫₀^∞ (dω/2π) P_1↔2(ω). In many-body systems this approach can be generalized to take into account all multiscattering processes [16][17][18][19][20]. Formally, the net power received by each object can be written in a Landauer-like form as

P_i = Σ_{k≠i} ∫₀^∞ (dω/2π) ħω [n(ω, T_k) − n(ω, T_i)] T_ik(ω),

where the transmission functions T_ik are related to the coupling efficiency of the modes at frequency ω between body i and body k. Explicit expressions for many-particle systems within the dipole model and for multilayer systems can be found in Ref. [16] and Ref. [18], respectively, and general expressions derived within the scattering-matrix approach can be found in Ref. [17]. For more details on the many-body theory and an extensive list of works on this topic we refer to the review [20]. Generally, the transmission functions depend on the geometric configuration, in particular on the distance between objects i and k, as well as on the optical material properties ε_i,µν and ε_k,µν. This opens up the possibility to tune the heat transfer by changing the configuration of the involved objects. More interestingly, if the material properties depend significantly on temperature or on external electric or magnetic fields, the heat flux can be actively controlled by changing these quantities.
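The Landauer-like expression lends itself to direct numerical evaluation. The Python sketch below integrates ħω[n(ω, T2) − n(ω, T1)] T_12(ω) over frequency for a toy Lorentzian transmission function meant to mimic a surface-mode resonance; the resonance frequency, linewidth and amplitude are illustrative assumptions, and the transverse-wavevector integral is lumped into the transmission amplitude rather than carried out explicitly, so the result is in arbitrary area units.

    import numpy as np

    hbar = 1.054571817e-34   # J s
    kB = 1.380649e-23        # J/K

    def bose(w, T):
        """Bose-Einstein occupation n(w, T)."""
        return 1.0 / np.expm1(hbar * w / (kB * T))

    def net_power(T1, T2, w0=1.8e14, gamma=5e12, Tamp=1.0):
        """Net power gained by body 1, with a toy Lorentzian transmission."""
        w = np.linspace(1e12, 1e15, 200001)
        T12 = Tamp * gamma**2 / ((w - w0)**2 + gamma**2)
        integrand = hbar * w * (bose(w, T2) - bose(w, T1)) * T12
        # trapezoidal integration, then the 1/(2*pi) of the Landauer form
        return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(w)) / (2 * np.pi)

    print(net_power(300.0, 350.0))   # positive: heat flows from hot body 2 to body 1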
RECTIFICATION
The diode is one of the fundamental building blocks used to control electron currents in electronic systems. In an electronic diode (Fig. 2-a), the current can flow mainly in one direction when a bias voltage is applied across its two terminals. This corresponds to a strong asymmetry due to a nonlinearity in the electrical conductance. As in electronics, one of the basic devices used to impose directionality on radiative heat flows is the thermal diode. When a temperature bias T1 − T2 is applied between two separated solids at temperatures T1 and T2, the magnitude of the heat flux they exchange by radiation generally does not depend on the sign of this bias. However, in the presence of temperature-dependent material properties of the receiver or emitter, an asymmetry can appear between the heat flux P_f in the forward-biased situation (T1 − T2 > 0) and the heat flux P_r in the reverse scenario (T1 − T2 < 0), such that P_f ≠ P_r. Hence, radiative thermal rectification can be achieved under these conditions. Notice that here P_f and P_r are assumed positive, since they represent the heat flowing from the hottest to the coldest terminal in the two temperature-biased scenarios.

Figure 2: (b) Reproduced with permission from [21]. (c) Schematic of a microfabricated VO2-based (phase-change material) radiative diode and the measured near-field heat flux vs. the temperature bias ∆T in the forward (∆T > 0) and reverse (∆T < 0) scenarios. Reproduced with permission from [39]. (d) Radiative thermal diode driven by nonreciprocal surface waves: the surface waves induce an asymmetry in the heat transfer between two magneto-optical nanoparticles placed close to a magneto-optical substrate in the presence of a magnetic field B. This asymmetry is quantified by the rectification coefficient η = (P1 − P2)/P1, where P1 is the power received by particle 1 when T2 > T1 (backward scenario) while P2 is the power received by particle 2 in the opposite situation (forward scenario). Reproduced with permission from [47].
As happens with their electronic counterparts, radiative thermal diodes act as good radiative thermal conductors for a given sign of the temperature bias, while they behave as insulators in the opposite situation. A first thermal radiative rectifier (Fig. 2-b) was introduced by Otey et al. in 2010 [21] using two different polytypes of SiC having different temperature-dependent optical properties. In this case, the transmission function T_12 depends implicitly on the temperatures of the two solids through the temperature dependence of their reflection coefficients r_1 and r_2. Thus, in the forward scenario with a low temperature T and a high temperature T + ∆T we formally have a transmission function of the form

T_12^f(ω) = T_12(ω; r_1(T + ∆T), r_2(T)),

while in the reverse scenario this function reads

T_12^r(ω) = T_12(ω; r_1(T), r_2(T + ∆T)).

The heat transport asymmetry in the device can then be evaluated with a (normalized) rectification coefficient such as

R = |P_f − P_r| / max(P_f, P_r).

When the interacting solids have weakly temperature-dependent optical properties, R is relatively small, provided that the temperature bias is small as well. Hence, a rectification coefficient of up to ≈ 29% has been reported [21] in the near-field regime between two planar slabs of 3C-SiC and 6H-SiC with ∆T = 300 K, between slabs covered by an optimized coating [22], and between slabs made of doped semiconductors with different doping levels and different thicknesses [23]. On the other hand, rectification coefficients as high as 90% have been reported between two solids when the temperature bias becomes large [24][25][26].
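The rectification coefficient can be illustrated with a toy gray-body model in which the emissivity of one body steps at a critical temperature, a crude stand-in for the temperature-dependent reflection coefficients discussed above; the product emissivity form and all numbers below are illustrative assumptions, not a fluctuational-electrodynamics calculation.

    sigma = 5.670e-8   # W m^-2 K^-4

    def eps1(T, Tc=340.0):
        """Toy emissivity of the phase-change body: insulating vs metallic phase."""
        return 0.8 if T < Tc else 0.2

    def flux(T1, T2):
        """Toy gray-body exchange; a product emissivity model, not an exact one."""
        return sigma * eps1(T1) * 0.9 * abs(T1**4 - T2**4)

    T_lo, T_hi = 320.0, 360.0
    P_f = flux(T_hi, T_lo)   # forward: the phase-change body is the hot one
    P_r = flux(T_lo, T_hi)   # reverse: the phase-change body is the cold one
    R = abs(P_f - P_r) / max(P_f, P_r)
    print(f"P_f = {P_f:.0f} W/m^2, P_r = {P_r:.0f} W/m^2, R = {R:.2f}")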
In 2013, phase-change materials were proposed [27][28][29] to improve the asymmetry of the radiative transport, leading to configurations with large rectification coefficients at relatively small temperature bias. These materials undergo a sudden and drastic change in their optical properties around their critical temperature. Among them, metal-insulator transition (MIT) materials have attracted significant attention for designing radiative heat rectifiers [27][28][29][30][31][32][33][34]. A widely used MIT material is vanadium dioxide (VO2), which undergoes its phase transition at T_c ≈ 340 K [35,36]. Thanks to this transition, rectification coefficients higher than 70% have been predicted and demonstrated in the far-field regime [27,37,38] with a temperature bias ∆T < 50 K, and values around 90% have been observed in the near-field regime [29,[39][40][41]. Furthermore, materials undergoing a normal-metal-superconductor transition [42][43][44][45][46] have also been considered to design radiative thermal rectifiers operating at cryogenic temperatures with similarly good performance.
Non-reciprocal materials have also been considered to break the symmetry of the heat transport between two bodies. Rectification factors close to 90% have recently been predicted [47] in systems made of magneto-optical (MO) particles placed above an MO substrate (Fig. 2-d) and exchanging heat via surface waves.
Finally, a concept of a many-body rectifier, working by embedding a passive intermediate body interacting with the two terminals, has recently been introduced [48]. Unlike the classical thermal rectification discussed above, which requires a noticeable temperature dependence of the optical properties of the materials, here the asymmetry in the heat transport results only from many-body interactions. Hence such rectifiers can rectify the heat flux over a broad temperature range.
MODULATION AND SWITCHING
Controlling the magnitude and the direction of the heat flux exchanged between solids at the nanoscale is of prime importance in many technological applications, and considerable effort has been made in recent years to develop new strategies to this end. Below we discuss the recent developments carried out in this direction.
Figure 3: (a) Nano-electromechanical thermal switch. The NEMS consists of a suspended bridge (thermal emitter) which is brought closer to a solid through the application of an actuation potential Va. Their separation distance and the heat flux they exchange can be controlled with Va. Reproduced with permission from [50]. (b) Active control of the heat flux exchanged between two solids: the thermal conductance between the emitter and the receiver is changed by adjusting the distance separating them from a third body using a piezoelectric actuator. Reproduced with permission from [52]. (c) Heat flux exchanged in the near-field regime between two twisted gratings. Reproduced with permission from [53]. (d) Giant thermal magnetoresistance in plasmonic structures: the thermal magnetoresistance of magneto-optical nanoparticle chains changes drastically with the strength of an external magnetic field B orthogonal to the chain. Reproduced with permission from [19]. (e) Anisotropic magnetoresistance: the thermal conductance between two magneto-optical particles changes with the orientation of an applied magnetic field. Reproduced with permission from [58].

The most natural way to control the magnitude of the flux exchanged between two solids is to change their separation distance d. In the near-field regime the transmission coefficient scales as T_12 ∝ 1/d^n, where n = 6 for two nanoparticles, n = 3 for a spherical object in the vicinity of a slab, and n = 2 for two slabs, for instance. It follows that a displacement of one decade from a given position modifies the heat flux by orders of magnitude. This property can be exploited by mechanically changing the separation distance between two objects to modulate or switch the near-field radiative heat flux. Furthermore, micro/nano-electromechanical systems (MEMS/NEMS) have been developed [49][50][51] in recent years which allow for high-precision control of the separation distance between two solids in the subwavelength regime, down to distances of a few tens of nanometers, by tuning the electrostatic interaction between the solids using an actuation potential (Fig. 3-a). MEMS technology can be used for active thermal management at the nanoscale or to harvest on demand the near-field energy confined at the surface of hot objects using tunable near-field thermophotovoltaic converters [50]. Besides this control of near-field heat exchanges through a change of the separation distance between the solids, multiscattering effects induced by the presence of a third body have been proposed [52] to tune the near-field heat exchanges between two solids (Fig. 3-b). By bringing a third body (even a non-emitting one) close to the emitter and receiver, the heat flux exchanged between these bodies can be either amplified or inhibited thanks to many-body interactions.
Another way to mechanically control the near-field heat exchanges between two solids is to change their relative orientation while keeping their separation distance constant. This can have a strong impact on the coupling efficiency of the evanescent modes supported by each solid and therefore on the heat flux they exchange. Such a change can simply be achieved using textured solids in relative rotation. For instance, the radiative heat flux between two uniaxial slabs with in-plane optical axes can be tuned by the relative rotation of one slab [53], as shown for two grating structures in Fig. 3-c. When the optical axes are aligned, the heat flux is maximal, and when the optical axes are perpendicular to each other the heat flux has a minimum. This effect can of course be exploited for any two anisotropic media, and it has already been demonstrated [54] for two natural hyperbolic materials like hexagonal boron nitride (hBN). It must be noted that the operating speed of this mechanical control is intrinsically limited by the thermalization time of its components; for nanostructures interacting in the near field, this time is typically on the order of a few milliseconds. Moreover, it is worth pointing out that mechanically controlled actuation may be difficult to implement in certain situations, and moving parts in any device are usually not desirable because of wear and tear. Besides the mechanical control of the separation distance and relative orientation, strain-controlled switches have recently been proposed to tune the flux. In these systems an intermediate layer of material, whose permittivity is controlled with mechanical strain, drives the radiative heat flux between a source and a drain at fixed separation distances [55].
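The leverage provided by the 1/d^n scaling quoted above is easy to quantify; the short sketch below evaluates the flux contrast produced by a tenfold change of gap for the three canonical geometries.

    # Flux contrast from a tenfold gap change, for T_12 ~ 1/d^n.
    def contrast(d_near_nm, d_far_nm, n):
        return (d_far_nm / d_near_nm)**n

    for n, geom in ((2, "slab-slab"), (3, "sphere-slab"), (6, "dipole-dipole")):
        print(f"{geom:13s} 50 nm -> 500 nm: flux ratio = {contrast(50, 500, n):.0e}")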
As shown in the previous section, MO materials can also be used to actively control the near-field heat exchanges between two solids using an external magnetic field. This possibility was first suggested by Moncada-Villa et al. [56,57], who showed that a change of the magnitude of the magnetic field can significantly modify both the nature and the coupling of the evanescent modes. More recently, new thermomagnetic effects in MO systems [19,58] have opened the way to a new strategy for controlling near-field heat exchanges. The first effect is a giant magnetoresistance [19] which enables a significant increase of the thermal resistance along MO nanoparticle networks (Fig. 3-d) with increasing magnitude of an external magnetic field. This giant resistance results from a strong spectral shift of the localized surface waves supported by the particles under the action of a magnetic field. Recent works have combined MO materials and dielectrics in hyperbolic multilayer structures [58,59], because on the one hand the formation of hyperbolic bands can increase the near-field radiative heat flux in such systems [60][61][62], and on the other hand the application of a magnetic field enables a significant active modulation of the heat flux. However, it seems that the effective-medium calculations in [57] predict an increase of the near-field heat flux for extremely large magnetic fields, whereas the exact calculations in [59] predict a heat flux reduction for moderately large magnetic fields.
An alternative to such magneto-optical control is the electrical actuation of the optical properties of materials. Among all materials, graphene-based materials [63,64] have been shown to be good candidates for this purpose. By changing the Fermi level of a graphene sheet deposited on a solid using external gating, the scattering properties of this solid can be actively modulated [51,[65][66][67][68][69][70][71][72][73][74][75][76][77][78]. This electrical actuation of the optical properties of graphene-based materials has been exploited to efficiently tune and even amplify the near-field heat exchanges between two solids (see Fig. 4). The ferroelectric state of some materials can also be tuned to control the radiative heat exchanges [79]: the active change of their spontaneous polarization can be used to shift the resonance frequency of the surface phonon-polaritons which some of these materials support, and consequently to control radiative heat transfer via varying external electric fields. Recently, three-body systems made of graphene-based materials coupled with ferroelectrics have demonstrated a strong potential to modulate the near-field heat flux at kHz frequencies [80]. Finally, switching and modulation of the heat flux have been highlighted using metal-oxide-semiconductor (MOS) structures: analogously to the MOS capacitor in electronics, the accumulation and depletion of charge carriers in an ultrathin plasmonic film can be used to control the coupling of surface waves [81].
HEAT SPLITTING AND FOCUSING
The directional control of the radiative heat flux exchanged in the near-field regime in a set of solids can be achieved using several of the aforementioned mechanisms to break the symmetry. Hence, heat flux splitting can be realized inside a set of pellets covered by graphene flakes by electrically tuning the Fermi levels of the graphene, as sketched in Fig. 5-a [82]. Such control allows one to promote certain near-field interactions by tuning the graphene plasmons supported by the flakes.
The direction of the heat flux can also be modified in magneto-optical systems using an external magnetic field. Indeed, as illustrated in Fig. 5-b [83], in a four-terminal junction forming a square with C4 symmetry, when a temperature difference ∆T = T_L − T_R is applied between particles L and R, a radiative thermal Hall effect [84] transfers heat transversally to the primary gradient, thus bending the overall flux. This effect results from the fact that the transmission coefficients T_ij and T_ji are not equal in nonreciprocal systems.
Thermal routers [85] based on magnetic Weyl semimetals have recently been introduced, exploiting the unique properties of optical gyrotropy. In these systems (Fig. 5-c), which consist of three spheres made of magnetic Weyl semimetals, the direction of the heat flux can be controlled by moving the Weyl nodes in the material using an external (magnetic or electric) field. It has also been shown that an anomalous photon thermal Hall effect can be realized in Weyl semimetals [86].
Recently, the concept of multitip scanning thermal microscopy (SThM) has been proposed [87] to locally focus and amplify the heat flux in regions much smaller than the diffraction limit, and even smaller than the spot heated by a single tip. As illustrated in Fig. 5-d, the full width at half maximum (FWHM) of the spatial distribution of the heat flux on the surface of the substrate can be significantly reduced in comparison with that of a single tip. For specific geometric configurations, the heat flux can even locally back-propagate towards the emitting system, which acts in this case as a heat pump.

Figure 5: (a) Graphene-based heat flux splitter. The thermal powers P12 and P13 exchanged in the near-field regime between three identical pellets arranged in a symmetric geometric configuration can be controlled by tuning the Fermi levels of the graphene flakes deposited on their surfaces. Reproduced with permission from [82]. (b) Spectral power and heat flux lines produced by the radiative Hall effect in a four-terminal junction made of MO particles forming a square. The junction is exposed to an external magnetic field B in the direction orthogonal to the particle plane while TL = 310 K and TR = TT = TB = 300 K. The mapping shows the Poynting vector field around the particles and illustrates the symmetry breaking induced by the magnetic field. Reproduced with permission from [83]. (c) Radiative thermal router consisting of three spheres of the same radius made of magnetic Weyl semimetals forming an isosceles triangle in the x-y plane. By tuning the Weyl node separation 2b1 of the first sphere located at the apex of this triangle, the thermal conductances G1→2 and G1→3 can be controlled in an asymmetric way. Reproduced with permission from [85]. (d) Heat flux focusing with a multi-tip SThM platform with three tips. The tip temperatures and their locations are individually controlled, so that the thermal energy they radiate can be focused and even amplified in spots that are much smaller than those obtained with a single thermal source. Reproduced with permission from [87].
ACTIVE INSULATION, COOLING AND REFRIGERATION
While numerous research works have been devoted to the development of nanophotonic structures to control far-field heat exchange and enable new applications in the field of radiative cooling, little attention has been paid so far to radiative cooling at subwavelength scales. During the last years some progress has been made in this direction and new mechanisms have been proposed to actively cool down solids through near-field heat exchange. The first advance in the development of solid-state photonic cooling operating in the near field was made in 2015 [88]. The basic idea of this cooling mechanism consists in the use of a photodiode, as illustrated in Fig. 6-a, which is brought close to the solid to be cooled down. By applying an external bias voltage to the photodiode, photons are emitted with a non-vanishing chemical potential, which follows from a spectral shift in the Bose-Einstein distribution function. Consequently, the apparent temperature of the photodiode can be made artificially smaller than its real temperature, so that heat can flow in the direction opposite to the temperature gradient (Fig. 6-b). Work is being performed on the photodiode, so no fundamental law is violated. Moreover, the magnitude of the heat exchange in the near field leads to a thermodynamic efficiency for such a solid-state cooling device which is close to the Carnot limit. A proof-of-principle of this cooling mechanism has been demonstrated recently [89].

Figure 6: (a) Photonic refrigerator working in the near-field regime. By applying a bias voltage to a photodiode its apparent temperature can be reduced, so that heat can be extracted from a hot solid by radiation. (b) Heat flux exchanged in the photonic refrigerator with respect to the bias voltage. Reproduced with permission from [89]. (c) Cooling by radiative heat shuttling. By adiabatically modulating the temperature or the chemical potential of a solid, an extra flux is superimposed on the steady-state flux. Here we show the time-averaged flux J̄ between a VO2 slab and a sample of SiO2 when the temperature of the VO2 slab is TL(t) = T0 + δT sin(Ωt) with an amplitude δT = 30 K, whereas the temperature of the other body is fixed at TR = T0. For a temperature modulation around the critical temperature of VO2, the average flux can extract heat from the body with stationary temperature. Reproduced with permission from [90]. (d) Thermal photonic refrigerator operating between a cold solid at Tc and a hot solid at Th. Two modes at frequencies ω1 and ω2 are coupled through a time-modulation of the refractive index at a time scale faster than the thermal relaxation process. (e) Net cooling power and work input as a function of the ratio of the frequencies of the two modes, for Th = 300 K and Tc = 290 K. (f) Coefficient of performance (COP) of the refrigerator normalized to the Carnot limit. Reproduced with permission from [93].
The active modulation of physical properties or intensive quantities has also been proposed to cool down solids through near-field interactions in two- and many-body systems. Latella et al. [90] considered the radiative heat exchange between two bodies where the temperature of at least one body is adiabatically (slowly) modulated through interactions with external thermostats. Due to the nonlinear dependence of the radiative heat exchange on temperature, the time-averaged heat flux can proceed against the average temperature bias, even though instantaneously heat always flows from the hotter to the colder body. When the modulation is performed in such a way that the number of modes participating in the transfer decreases with temperature, a radiative shuttling effect [91] can dynamically pump heat from a body with stationary temperature in spite of a vanishing average thermal bias (Fig. 6-c). This situation can occur, for instance, in systems made of phase-change materials whose optical properties change drastically across the phase transition. Similar heat-pumping mechanisms driven by combined modulations of positions and temperatures have recently been highlighted [92] in many-body systems.
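The role of the nonlinearity can be seen in the minimal sketch below, which averages a gray Stefan-Boltzmann flux (a crude stand-in for the full near-field flux of the VO2/SiO2 pair) over one modulation period: the time-averaged flux into the body at fixed temperature is nonzero even though the average bias vanishes. The full shuttling effect of [90] additionally relies on the temperature dependence of the number of contributing modes, which this toy law does not capture.

    import numpy as np

    sigma = 5.670e-8
    T0, dT = 340.0, 30.0                     # mean temperature and modulation depth, K
    t = np.linspace(0.0, 2 * np.pi, 4001)    # one modulation period (phase variable)
    TL = T0 + dT * np.sin(t)                 # modulated temperature
    J = sigma * (TL**4 - T0**4)              # instantaneous flux towards the body at T0
    print("flux at the mean bias:", 0.0)
    print("time-averaged <J>    :", J.mean(), "W/m^2")   # > 0 despite zero mean bias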
Photonic refrigeration can also be observed in systems whose refractive index undergoes a temporal modulation [93]. In these systems (Fig. 6-d), two resonant modes, such as cavity modes inside the solid to be cooled down, are coupled and driven by a time modulation of the refractive index. When this modulation is turned on, a fraction of the thermally generated photons from the mode of lowest energy are up-converted to the second mode and emitted into the surrounding environment. These photons carry away a net cooling power (Fig. 6-e), and the resulting coefficient of performance is shown in Fig. 6-f.
LOGICAL CIRCUITS
Besides the control of heat fluxes, thermal information processing at the nanoscale remains a challenging problem today. Some building blocks have been introduced during the past years in this regard, with the aim of establishing thermal analogues of conventional electronic building blocks, driven by thermal photons rather than by electrons. Among these devices, multistable systems have been proposed to store radiative energy [94] and to release it into the environment upon request. For example, systems composed of phase-change materials have several equilibrium (stationary) temperatures and behave like thermal memories. As shown in Fig. 7-a for the particular case of a bistable system [95], which consists of two slabs at temperatures T_1 and T_2 that mutually interact and are coupled to two thermal reservoirs, two stable equilibrium temperatures can exist. These states "0" and "1" correspond to the temperature pairs (T_1, T_2) for which the heat flux received by each slab vanishes (Fig. 7-b). Such states can be maintained for arbitrarily long times (Fig. 7-c), provided that the temperatures of the reservoirs are kept constant and no external perturbation modifies the net flux on each slab. By heating or cooling the slab made of the phase-change material, the thermal state of the system switches from one state to the other. This switching has been used to design self-induced thermal oscillators [96] by exploiting the hysteretic behavior of the phase-change material around its critical temperature (Fig. 7-d,e). Another building block is the transistor. In electronics this device is a key element which allows for switching but, above all, for amplifying an electric current flowing through a solid using a simple external bias voltage. This building block is at the origin of modern electronics, which has revolutionized our daily life. In 2014, a radiative thermal analogue of a transistor was introduced [97]. Like its electronic counterpart, the radiative transistor is a three-terminal system (Fig. 8-a) composed of a hot body (the source), a cold body (the drain) and an intermediate slab made of a phase-change material (the gate). By operating at temperatures close to the critical temperature where the phase transition of these materials takes place, the heat flux received by the drain can be switched (Fig. 8-b), modulated and even amplified (Fig. 8-c) with a weak variation of the gate temperature. This behavior is closely related to the strong change in the optical properties of the phase-change material around its critical temperature. In this temperature range, the thermal resistance $R = (\partial \phi_D / \partial T_G)^{-1}$, defined from the variation of the flux $\phi_D$ received by the drain with respect to the gate temperature $T_G$, is negative [98]. Under these conditions, the amplification factor of the transistor $A = |\partial \phi_D / \partial \phi_G|$ can be higher than unity [46,97]. By using a single radiative transistor or combining several of them, logic gates have been designed [99,100] to perform Boolean treatment of information with heat exchanged in the near-field regime. In Fig. 8-d we show an example of an AND-like gate made with a double-gate transistor, where the gates are made of silica and the drain is made of a phase-change material (VO_2). In this system, the temperatures T_G1 and T_G2 of the gates set the two inputs of the logic gate, and the temperature T_D of the drain stands for the logic gate output.
By introducing a threshold value for T_D beyond which the output state of the gate switches from state "0" to state "1", we see that the system behaves as a digital AND gate (Fig. 8-d,e). The overall operating time of the logic gate corresponds to the time required to switch from one state to the other. This time is directly related to the thermalization of each element through radiative interactions, and in nanostructured systems it is of the order of a few milliseconds (Fig. 8-f).
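As a purely illustrative sketch of the thresholding logic described above (the response function and every temperature value below are hypothetical placeholders, not taken from Refs. [99,100]), a drain that crosses its switching threshold only when both gates are hot reproduces the AND truth table:

```python
# Toy double-gate radiative AND gate: the drain temperature rises past the
# threshold only when both gates are in the hot ("1") state. All numbers are
# hypothetical stand-ins for the radiatively coupled steady state.
T_COLD, T_HOT, T_THRESHOLD = 330.0, 350.0, 345.0

def drain_temperature(T_g1, T_g2):
    hot_inputs = (T_g1 >= T_HOT) + (T_g2 >= T_HOT)
    return 332.0 + {0: 0.0, 1: 6.0, 2: 16.0}[hot_inputs]  # one gate is not enough

for T_g1 in (T_COLD, T_HOT):
    for T_g2 in (T_COLD, T_HOT):
        out = int(drain_temperature(T_g1, T_g2) >= T_THRESHOLD)
        print(f"inputs ({T_g1 >= T_HOT:d}, {T_g2 >= T_HOT:d}) -> output {out}")
```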
OUTLOOK
The spatio-temporal control of near-field radiative heat exchanges in complex solid architectures has opened the way to a new generation of devices for both passive and active thermal management at the nanoscale. This new degree of freedom enables the development of wireless sensors working with heat as a primary source of energy rather than with electricity. In such devices, heat coming from various sources (machines, electric devices, ...) can be captured, stored in thermal blocks (thermal capacitors or thermal memories) and used to launch sequences of logical operations in order to control heat flux propagation (direction, magnitude), trigger specific actions (opto-thermomechanical coupling with MEMS/NEMS, initiation of chemical reactions, ...) or even process information with heat. In this perspective the operating speed of this technology could be a limiting factor. Indeed, in circuits involving interacting nanostructures the typical timescale to process one single operation is of the order of milliseconds or even more, due to the thermal inertia of the building blocks. For information processing this speed is obviously not competitive with current electronic devices, but it is more than enough for active thermal management and thermal sensing. For example, existing near-field probes like those developed in the last decades [101][102][103][104][105] can be further advanced to measure some of the theoretically proposed modulation effects locally, whereas new multi-tip or many-body setups like that in Ref. [52] are necessary to realize some of the thermotronic building blocks such as the transistor. Nevertheless, important progress could be made by considering 2D materials or solids far from equilibrium, where the heat carriers have different temperatures. In this last case the operating speed of thermal circuits could be reduced to a few microseconds or even picoseconds, the typical relaxation time of electrons in solids. But this ultrafast physics of heat exchange remains today a challenging problem from both a fundamental and a practical point of view.
"Physics"
] |
A new approach to vector scattering: the 3s boundary source method
This paper describes a novel Boundary Source Method (BSM) applied to the vector calculation of electromagnetic fields scattered from a surface defined by the interface between homogeneous, isotropic media. In this approach, the reflected and transmitted fields are represented as an expansion of the electric fields generated by a basis of orthogonal electric and magnetic dipole sources that are tangential to, and evenly distributed over, the surface of interest. The dipole moments required to generate these fields are then calculated according to the extinction theorem of Ewald and Oseen applied at control points situated on either side of the boundary. It is shown that the sources are essentially vector-equivalent Huygens' wavelets applied at discrete points at the boundary, and special attention is given to their placement and the corresponding placement of control points according to the Nyquist sampling criteria. The central result of this paper is that the extinction theorem should be applied at control points situated at a distance d = 3s (where s is the separation of the sources), and consequently we refer to the method as 3sBSM. The method is applied to reflection at a plane dielectric surface and a dielectric sphere, and good agreement is demonstrated in comparison with the Fresnel equations and the Mie series expansion respectively (even at resonance). We conclude that 3sBSM provides an accurate solution to electromagnetic scattering from a bandlimited surface and efficiently avoids the singular surface integrals and special basis functions proposed by others. Published by The Optical Society under the terms of the Creative Commons Attribution 4.0 License. Further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
Introduction
Traditionally, the theory of electromagnetic waves has been applied most frequently in the field of electrical engineering, where it is central to the understanding of antennas and the design of radar imaging systems. With the growth of low-cost civil radar applications, wide-band wireless communications and the continued miniaturization of information technology, rigorous modelling of electromagnetic scattering in the built environment is now of increasing interest. At optical frequencies, the application of rigorous scattering models is also gaining importance. To realize the considerable potential of photonic-bandgap and integrated fiber-optic devices, our knowledge of electromagnetic scattering gained at radio frequencies must be transferred (through suitable computational tools) to the significantly higher frequencies and shorter wavelengths of the optical spectrum.
In many optical applications, the characteristic dimensions of the structures of interest (e.g. lens apertures) are often several orders of magnitude greater than the wavelength. In these cases, the design and function of optical systems are best understood using the simplified models of physical optics [1]. From the early days of optics, ray tracing has been used to describe the propagation of light reflected or refracted by slowly varying surfaces, and it lies at the heart of the modern computational design packages that are routinely used to design complex lens systems [2]. More recently, Monte Carlo methods have been used to model random scattering from rough surfaces and turbid media and are central to the photorealistic rendering of computer-generated images [3]. These are powerful computational tools; however, they typically neglect diffraction effects and consequently are not suited (without significant modification) to the study of coherent scattering phenomena.
Huygens' principle provides us with a useful understanding of diffraction phenomena by explaining the propagation of light from a boundary surface as the sum of appropriately phased wavelets [4]. With the introduction of the obliquity factor, Fresnel showed that Huygens' principle follows directly from the scalar wave equation when expressed in integral form [4]. Under the assumption that multiple scattering is negligible, the Huygens-Fresnel principle allows us to represent coherent scattering as a simple convolution operation and the optical output of measuring instruments to be written as linear filtering operations applied to a "foil" representation of the surface form [5,6]. In this way, we have a useful and conceptually simple means to characterize the performance of optical surface profiling instruments, including coherence scanning interferometers [7] and focus variation microscopes [8], when in normal use. It is noted that in this context normal use implies application to surfaces that are slowly varying on the scale of a wavelength (i.e. within the regime of physical optics) and with slope angles such that polarization effects and multiple scattering can be neglected. In practice, however, the effects of multiple scattering (e.g. from edges and v-grooves) are known to confound optical profilometers [9]. Making use of a-priori knowledge and inverse scattering models, multiple scattering can sometimes reveal features (e.g. undercuts) that would otherwise be invisible [10]. Furthermore, there is growing evidence that condition monitoring of machine tools and additive manufacturing processes is possible using straightforward measurements of the scattering distribution and artificial intelligence (AI) algorithms [11].
The motivation of the work described in this paper was to create a computational tool that provides a rigorous solution to 3D scattering from an arbitrary interface between homogeneous media, for use in inverse scattering problems and in the creation of synthetic training sets for AI condition monitoring applications. The method that we describe properly accounts for polarization and the resonant behavior that is known to be problematic in coherent scattering. Using modest desktop computing it can be applied directly to irregular objects with a characteristic dimension of up to a few tens of wavelengths, open surfaces of equivalent area, or arbitrarily small objects (with appropriate scaling). Moreover, the method is conceptually simple and provides further insight into electromagnetic scattering from sub-wavelength surface features.
Background
In general terms the propagation of electromagnetic waves through inhomogeneous media is governed by Maxwell's equations, and these can be solved numerically according to well-defined illumination conditions. For the case of 3D media, Finite Element Methods (FEM) provide an efficient solution for monochromatic scattering behavior [12], while the related Finite Difference Time-Domain (FDTD) analysis is typically applied to model wide-band or partially coherent illumination [13]. For the case of inhomogeneous and near-planar surface structures, such as those forming semiconductor devices, Rigorous Coupled Wave Analysis (RCWA) provides an elegant solution that is partially expressed in the (spatial) frequency domain. RCWA is a particularly efficient way to analyse the performance of periodic structures such as blazed and coated diffraction gratings [14].
Electromagnetic scattering at the surface defined by the interface between homogeneous media allows further simplification. In this case, the field at any point in a given space can be calculated from the field components on any surface that bounds that space, according to Kirchhoff's diffraction integral [15]. This approach can be formulated in several different ways. The Stratton-Chu formulation defines the scattered field in terms of all 6 components of the electric and magnetic fields on the boundary surface [16]. It is noted, however, that Maxwell's equations relate these components, and formulations due to Kottler [17] and later Franz [18] require only the 4 tangential components of these fields. In this way, if the illuminating field is known and appropriate boundary conditions are applied, the surface field components can be calculated.
In computational terms the process can be treated as a Boundary Element Method (BEM) in which elemental areas act as finite sources and radiate into the volume of interest [19]. This approach presents some significant practical problems, however. First, the boundary elements that define typical surfaces have different shapes and areas and have varying radiation patterns. Second, the calculation of these radiation patterns is non-trivial as it requires integration of high-order singular functions. The Method of Moments (MoM) provides one solution to this problem [20] but requires significant computational overhead. The Method of Auxiliary Sources (MAS) [21] obviates the need for integration by expanding the fields in terms of a finite number of sources located on either side of the boundary. MAS avoids the singularities, but its accuracy is found to be sensitive to source position (this is discussed further in Section 3.3). Finally, we note that the Kirchhoff diffraction integral applies only to closed surfaces and many of the methods stated cannot be applied to open boundaries. With appropriate illumination conditions, however, these restrictions can be overcome (this is discussed further in Section 4.1).
The following section describes a new formulation of a Boundary Source Method (BSM) based on the Franz formulae of surface scattering. The method is similar in concept to the MAS [21]; however, the sources are now applied at the boundary and the extinction theorem (effectively an alternative boundary condition) is applied at a finite distance above and below the boundary. For the case of spherical particles and planar surfaces, we show that this method provides solutions that are almost identical to the exact solutions provided by Mie scattering and the Fresnel equations respectively. Furthermore, we show that the method properly accounts for resonances.
Theory
In the following section we first discuss the electric field generated by the Franz surface integrals and relate these equations to an expansion of the electric field in terms of basis functions representing electric and magnetic dipoles. Calculation of the scattered electric field that is observed when a given object surface is illuminated by a known, monochromatic electric field is then discussed by applying appropriate field constraints. The relationship between this approach and the Huygens-Fresnel principle is then explored and the proper positioning of sources and constraints is discussed. Finally, the electromagnetic scattering problem is formulated as a straightforward matrix inversion.
Scattered field as a dipole expansion
Let us consider scattering from an object medium A within an ambient medium B, as shown in Fig. 1. Let A and B be linear, isotropic and homogeneous media characterized by electric permittivities ε_A and ε_B, and magnetic permeabilities µ_A and µ_B, respectively. Defining n̂ as the unit outward surface normal, according to the Franz formulation [18] a monochromatic electric field of angular frequency ω propagating in medium A is given by Eq. (1), where ∇× denotes the curl of the vector field with respect to the space variable r, and $G_A(r', r) = \dfrac{\exp(-i 2\pi k_A |r - r'|)}{4\pi |r - r'|}$ and $k_A = 1/\lambda_A$ are the Green's function and wave number in medium A. It is also noted that the first term in Eq. (1) can be identified as the combined field due to magnetic dipoles that are tangential to the surface and have a strength (per unit area) proportional to the amplitude of the tangential component of the electric field. Similarly, the second term can be identified as electric dipoles that are also tangential to the surface and have a strength (per unit area) proportional to the amplitude of the tangential component of the magnetic field. Noting that the ambient field, E_B(r), is the sum of the incident field, E_r(r), and the field scattered from the surface, we can write the field in medium B as Eq. (2), where the Green's function and the wavenumber are appropriate to medium B and the terms can be interpreted as magnetic and electric dipole sources as discussed above. It is also noted that it is possible to write the magnetic fields in each medium, H_A(r) and H_B(r), as similar superposition integrals [18]; however, these fields are completely defined by the electric fields E_A(r) and E_B(r) and for simplicity are omitted here. Finally, we can simplify these formulae since the tangential components of the electric field and magnetic field are continuous across the boundary [22] (Eqs. (3) and (4)). For computational purposes, it is convenient to combine the dipoles in each elemental area into discrete sources that are tangential to the surface and have the magnetic and electric dipole moments of Eqs. (5) and (6), where N is the number of source points. In this way, Eqs. (5) and (6) define a multipole expansion of the electric fields in terms of 4N complex variables that define the phase and amplitude of each of the 4 electric and magnetic dipoles applied at each source point.
For a given incident field we can calculate the dipole moments required to satisfy appropriate boundary conditions. In the following, we make use of the extinction theorem of Ewald and Oseen [23], which can be written as the constraint of Eq. (7) in the ambient medium B and the constraint of Eq. (8) in the object medium A. Here it is important to note the reversal of the Green's functions, such that the exterior Green's function is used for the interior constraint and vice-versa. In order to solve Eqs. (7) and (8) we need to impose at least 4N constraints on the components of the field at the appropriate interior and exterior points, as will be discussed in detail in Section 3.3. Before doing so, however, it is instructive to consider the electric field due to co-located orthogonal magnetic and electric dipoles as follows.
Vector Huygens' wavelets
Let us consider the field in medium A due to a magnetic dipole with moment m = m_o k and an electric dipole of moment p = p_o j co-located at the origin, where i, j, k are the unit vectors of the coordinate system. If we apply the constraint of Eq. (7) at a position r = −Di on the negative x-axis, we obtain the required ratio of the dipole moments. It is noted that by applying this constraint we have constructed a dipole source pair that does not radiate in the −i direction. The radiation pattern, E_H(r), of this source pair is illustrated in Fig. 2. In this figure the red, blue and black arrows show the direction of the electric field, magnetic field and Poynting vector respectively, while the shading corresponds to the magnitude of the Poynting vector. The radiation pattern is strongest in the i direction and weakest (it tends to zero as distance increases) in the −i direction. It is interesting to note that if A_0(r) is the amplitude at a distance r in the positive i direction, further analysis shows that the amplitude distribution of the field is given by A(r, θ) = A_0(r)(1 + cos θ)/2, where cos θ is the direction cosine measured from the x-axis. The term (1 + cos θ)/2 can be recognized as the well-known "obliquity factor" of the Huygens-Fresnel principle that results from a similar scalar analysis [24]. For these reasons we refer to such a forward-propagating dipole source pair as a vector Huygens' wavelet, and we note that a similar "hypothetical vector Huygens' secondary point source" is described by Marathay [25].
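The cardioid pattern of such a balanced pair can be checked numerically from the standard far-field expressions for electric and magnetic dipoles. The sketch below uses normalized units and sign conventions chosen so that the pair radiates forward along +i (an assumption for illustration, not the paper's exact normalization):

```python
import numpy as np

# Far field of a co-located electric dipole p = j and magnetic dipole m = k,
# balanced so that the pair forms a forward-propagating Huygens wavelet.
def huygens_far_field(r_hat):
    p = np.array([0.0, 1.0, 0.0])                 # electric dipole along j
    m = np.array([0.0, 0.0, 1.0])                 # magnetic dipole along k
    E_p = np.cross(np.cross(r_hat, p), r_hat)     # electric-dipole term
    E_m = -np.cross(r_hat, m)                     # magnetic-dipole term
    return E_p + E_m

forward = np.linalg.norm(huygens_far_field(np.array([1.0, 0.0, 0.0])))
for deg in (0, 60, 90, 120, 180):
    th = np.radians(deg)
    r_hat = np.array([np.cos(th), np.sin(th), 0.0])
    amp = np.linalg.norm(huygens_far_field(r_hat)) / forward
    # Matches the obliquity factor (1 + cos(theta)) / 2 quoted in the text.
    print(f"theta = {deg:3d} deg: {amp:.3f}  vs  {(1 + np.cos(th)) / 2:.3f}")
```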
From this analysis, the extinction condition of Eq. (7) requires that the tangential dipole source pairs correspond to vector Huygens' sources that only radiate into medium A according to the interior Green's function G_A(r). The extinction condition of Eq. (8) requires that the same tangential dipole source pairs exactly cancel the illumination field within medium A according to the exterior Green's function G_B(r). We are now in a position to consider the placement of the control points where these conditions are imposed.
Optimised control point location
It can be concluded from the previous discussion that it is possible to generate electromagnetic fields internal and external to a closed body using tangential electric and magnetic dipole sources. Furthermore, for a given illumination field it is possible to calculate the dipole moments such that they satisfy the extinction theorem, and thereby the transmitted and scattered field components can be calculated. It is well known, however, that dipole sources are mathematically intractable, as they display a third-order singularity that can only be removed through volume integration [26]. The discrete dipole sources proposed in the previous section result in a rapidly changing field that is characterized by non-physical (infinitely high) spatial frequencies at the boundary. Propagation is known to be equivalent to linear filtering or smoothing, however, and after a short distance the high spatial frequency content is attenuated. By carefully considering this filtering, and by applying the constraints of Eqs. (7) and (8) at control points that are appropriate to the source spacing, the method of solution can be greatly simplified relative to conventional BEM.
As described in Section 3.1 we propose to generate internal and external electromagnetic fields using a basis of tangential electric and magnetic dipole sources that are placed at N source points on the surface of interest. In total, the basis is defined by 4N complex variables and in order to find these variables we require at least 4N constraints. In this work we constrain the tangential components of the electric field at 2N control points situated a small distance, d, above and below the boundary as shown in Fig. 3(a). The control points effectively sample the field generated by the sources on surfaces above and below the boundary and it is important that the field is properly sampled according to Nyquist-Shannon sampling theory [24].
To illustrate the process, let us consider a forward-propagating plane wave generated by equal-amplitude vector Huygens' sources placed on a regular grid of spacing, s, as shown in Fig. 3(b). In this case, the vector Huygens' source (as presented in Section 3.2) can be regarded as the convolution kernel that effectively smooths the granularity in the field imposed by the grid. We place a similar grid of control points in a parallel plane separated from the source plane, and we seek the separation distance, d, required to assure proper sampling. Figure 4(a) shows the spectrum of the spatial frequencies observed in the E_x component of the electric field due to a vector Huygens' source at a wavelength λ = 1 [A.U.], at a distance d = s = λ/100. The plots shown in Fig. 4(b) are sections through the distribution in the k_x and k_y directions, and the green lines show the maximum spatial frequency according to the Nyquist limit defined by the control point spacing s (k_x, k_y < 1/(2s)). Figures 4(c) and 4(d) are similar to Figs. 4(a) and 4(b) but for a distance d = 3s = 3λ/100. At a distance d = 3s, the spatial bandwidth of the E_x component of the vector Huygens' wavelet falls predominantly within the bounds of the Nyquist limit, and the effects of aliasing are therefore removed. Although this conclusion has been drawn for a specific component and wavelength, we note here that the spectra for the other components are similar and a similar "rule of thumb" applies for all components for λ/1000 < s < λ/10. Finally, we note that if we increase the distance such that d >> 3s, then several local sources will have a similar influence on a given control point and the system behaves in a similar manner to the underdetermined case: there are effectively more sources than constraints, no unique solution exists, and more complex and less efficient matrix procedures (such as the pseudo-inverse) must be exploited. In the following section, the matrix solution of Eqs. (7) and (8) is considered in more detail.
Matrix formulation
To apply the theory of Section 3.1 it is necessary to write Eqs. (7) and (8) at control points just below and just above the boundary surface. Accordingly, Eqs. (7) and (8) can be written as the matrix equation WS = C, where S and C are vectors that represent the sources and constraints, and W is a matrix that defines the influence of each source on each constraint. In this way the top-row sub-matrices in W relate the sources to the û_i component of the electric field at the control points in medium A using the Green's function for medium B. In a similar manner, the sub-matrices in the second row relate the sources to the v̂_i component of the electric field at the control points in medium A using the Green's function for medium B. The third and fourth rows relate the û_j and v̂_j components of the electric field at the control points in medium B using the Green's function for medium A, respectively. We note that if medium A is a perfect conductor then the field expansion requires only tangential electric dipoles, and their strength can be found by cancelling the illumination field at every internal point. Finally, we note that W is a square matrix with either 16N² elements (for the case of a dielectric/dielectric interface) or 4N² elements (for the case of a dielectric/perfect conductor interface). If d = 3s and s < λ_min/2 (where λ_min is the smaller of λ_A and λ_B), we have found that this matrix is well conditioned and can always be inverted efficiently to find the source coefficients, such that S = W⁻¹C, as we show in the following examples.
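A compact sketch of this assembly-and-solve procedure is given below. It is a simplified illustration rather than the authors' implementation: the dipole fields keep only the radiative 1/r term of the full expressions (a real implementation must add the near-field terms), and all routine and variable names are our own.

```python
import numpy as np

def dipole_field(kind, axis, src, obs, k):
    # Electric field of a unit dipole at `src`, observed at `obs`, using the
    # paper's Green's function exp(-i 2 pi k r) / (4 pi r) with k = 1/lambda.
    # Radiative term only: the 1/r^2 and 1/r^3 near-field terms are omitted.
    r_vec = obs - src
    r = np.linalg.norm(r_vec)
    r_hat = r_vec / r
    g = np.exp(-2j * np.pi * k * r) / (4.0 * np.pi * r)
    kappa = 2.0 * np.pi * k
    if kind == "e":                                   # electric dipole
        return kappa**2 * np.cross(np.cross(r_hat, axis), r_hat) * g
    return kappa**2 * np.cross(r_hat, axis) * g       # magnetic dipole

def solve_3sbsm(pts, n_hat, u, v, k_A, k_B, E_inc, s):
    """pts: N source points; n_hat, u, v: normal and tangent unit vectors;
    E_inc(r): incident field; s: source spacing. Returns 4N dipole moments."""
    N = len(pts)
    d = 3.0 * s                                       # the "3s" offset
    c_in, c_out = pts - d * n_hat, pts + d * n_hat    # control points
    W = np.zeros((4 * N, 4 * N), dtype=complex)
    C = np.zeros(4 * N, dtype=complex)
    kinds = [("e", 0), ("e", 1), ("m", 0), ("m", 1)]  # tangential dipole basis
    for i in range(N):
        for j in range(N):
            for a, (kind, t) in enumerate(kinds):
                axis = (u, v)[t][j]
                # Interior control points use the exterior wavenumber k_B ...
                f_in = dipole_field(kind, axis, pts[j], c_in[i], k_B)
                # ... exterior control points use the interior wavenumber k_A.
                f_out = dipole_field(kind, axis, pts[j], c_out[i], k_A)
                W[2 * i, 4 * j + a] = u[i] @ f_in
                W[2 * i + 1, 4 * j + a] = v[i] @ f_in
                W[2 * N + 2 * i, 4 * j + a] = u[i] @ f_out
                W[2 * N + 2 * i + 1, 4 * j + a] = v[i] @ f_out
        # Extinction: the sources cancel the incident field inside the object,
        # while the field written with the interior Green's function vanishes
        # outside (zero right-hand side for the exterior rows).
        C[2 * i] = -u[i] @ E_inc(c_in[i])
        C[2 * i + 1] = -v[i] @ E_inc(c_in[i])
    return np.linalg.solve(W, C)                      # S = W^-1 C
```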
Application of 3sBSM
In this section, we apply the 3sBSM to calculate the scattering from planar and spherical boundaries, comparing the results with the well-known analytic formulae of Fresnel and Mie.
Scattering from a plane dielectric/dielectric interface
The aim of this section is to demonstrate that the 3sBSM can replicate the well-known Fresnel equations that define the amplitude reflection and transmission coefficients at a plane surface between two dielectrics. The Fresnel equations are strictly only valid, however, for the case of infinite plane waves at an infinite plane (i.e. an open boundary), and the comparison presents a problem, as the theory developed in Section 3 is derived from the closed boundary integrals of Franz. However, we note here that the 3sBSM is applicable to open boundaries in the sense that, firstly, each of the dipole basis functions obeys Maxwell's equations and, secondly, regularly spaced tangential electric and magnetic dipoles form a complete basis (note this is not the case when the Stratton-Chu formulae are applied to an open boundary [27]). We can therefore apply the method with confidence if i) the expression used to describe the illumination, E_r(r), obeys Maxwell's equations and ii) a sufficient surface area is covered by the basis such that the contribution due to the omitted dipoles can be assumed negligible.
In this example, we consider reflection from an air/glass interface over a surface area of 20λ_B × 20λ_B, with illumination by a Gaussian beam with a waist radius of 2λ_B (@1/e) at the surface, according to the rigorous 3D vector Gaussian beam formulation [28]. The refractive indices of the upper medium B and lower medium A are $n_B = \sqrt{\varepsilon_B \mu_B/(\varepsilon_0 \mu_0)} = 1$ and $n_A = \sqrt{\varepsilon_A \mu_A/(\varepsilon_0 \mu_0)} = 1.5$, where ε_0 and µ_0 are the free-space electric permittivity and magnetic permeability. The angle of incidence is nominally the Brewster (polarising) angle (approx. 57 degrees). The surface is sampled such that s = λ_B/5. Figure 5(a) shows extinction in the plane of incidence. The E_x, E_y and E_z components of the sum of the incident field, E_r(r), and the field radiated from the surface dipoles at k_B = 1/λ_B are shown in the top row, showing complete extinction in the lower medium A and reflection of the E_y component in accordance with incidence at the Brewster angle. The lower row shows the E_x, E_y and E_z components of the electric field radiated from the surface dipoles at k_A = 1/λ_A, illustrating extinction in the upper medium B and the transmitted field in the lower medium A. Figures 5(b) and 5(c) show a comparison of the reflection and transmission coefficients for different plane-wave components using the 3sBSM and the Fresnel formulae. It is noted here that, due to diffraction, the incident Gaussian beam can be decomposed into plane-polarized, plane-wave components covering a range of incident angles from about 49 to 66 degrees. The reflected and transmitted fields can be similarly decomposed. The phase and amplitude of the reflection coefficients in Fig. 5(b) and the transmission coefficients in Fig. 5(c) are in good agreement with those predicted by the Fresnel equations.
In Fig. 5(a) the surface is highlighted by the yellow pixels, while the ±3s zone between the control points is highlighted by the red pixels. We note here that the field within the ±3s zone is not actually an accurate representation of a physical electric field, but is merely one of an infinite number of electric fields that satisfy the extinction theorem in the remaining domain, as will be discussed further in Section 5.
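For reference, the Fresnel coefficients against which the 3sBSM output is compared can be evaluated directly; a minimal sketch for the air/glass case above (variable names are our own):

```python
import numpy as np

# Fresnel amplitude reflection coefficients for n_B = 1 (air) -> n_A = 1.5
# (glass), over the range of incidence angles spanned by the Gaussian beam.
n_B, n_A = 1.0, 1.5
theta_i = np.radians(np.array([49.0, 53.0, 57.0, 61.0, 66.0]))
theta_t = np.arcsin(n_B * np.sin(theta_i) / n_A)          # Snell's law

r_s = (n_B * np.cos(theta_i) - n_A * np.cos(theta_t)) / \
      (n_B * np.cos(theta_i) + n_A * np.cos(theta_t))     # perpendicular pol.
r_p = (n_A * np.cos(theta_i) - n_B * np.cos(theta_t)) / \
      (n_A * np.cos(theta_i) + n_B * np.cos(theta_t))     # parallel pol.

print("Brewster angle:", np.degrees(np.arctan2(n_A, n_B)))  # ~56.3 degrees
for th, rs, rp in zip(np.degrees(theta_i), r_s, r_p):
    print(f"{th:4.0f} deg: r_s = {rs:+.3f}, r_p = {rp:+.3f}")  # r_p ~ 0 at Brewster
```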
Scattering from a dielectric sphere
The work in this section provides a comparison between the 3sBSM and the Mie formulae for the case of scattering of plane waves from dielectric spheres. In principle this closed boundary is an easier problem to solve than the open boundary of the previous section; however, closed surfaces resonate at certain frequencies, which can lead to inaccuracy [29].
The matrix equations of Section 3.4 have been solved for the case of a dielectric sphere of refractive index $n_A = \sqrt{\varepsilon_A \mu_A/(\varepsilon_0 \mu_0)} = 1.5$ in a vacuum ($n_B = \sqrt{\varepsilon_B \mu_B/(\varepsilon_0 \mu_0)} = 1$) with radius r = λ_B and s = λ_B/10. The incident light is assumed to be a plane wave which propagates in the k direction (i.e. along the z-axis). Figure 6(a) demonstrates the extinction of the sum of the scattered field and incident field inside the sphere and the extinction of the transmitted field outside the sphere, while the surface position and ±3s zones are highlighted by yellow and red pixels as before. In Fig. 6(b) the comparison with the Mie series at a radius r = 1.4λ_B is presented.
Although very good agreement with Mie scattering is observed, it is well known that resonant cavities present additional complexity and are difficult to model properly [29,30]. When resonances occur, the scattering cross section C_scat of spherical particles increases sharply [31] such that

$C_{scat} = \dfrac{2\pi}{k^2}\sum_{n=1}^{\infty}(2n+1)\left(|a_n|^2 + |b_n|^2\right),$

where a_n, b_n are the coefficients in the expansion of the scattered fields in vector spherical harmonics (for details, please see [31], Chapter 4.4). The scattering cross section C_scat for a dielectric sphere with refractive index n_A = 2, n_B = 1 and radius r = λ_B is shown in Fig. 7(a) as a function of the wavelength [A.U.] of the incident plane wave. Resonant peaks are identified A-D. The 3sBSM has been used to model the same system, and a comparison of the scattered fields with the Mie predictions is shown in the lower half of this figure. The incident light is assumed to be a plane wave which propagates in the k direction (i.e. along the z-axis). The plots in Fig. 7 show excellent agreement between the predictions of the 3sBSM and those of the Mie series. We also note that the scattered fields calculated for a very small sphere with radius r = λ_B/100, s = r/10 = λ_B/1000, n_B = 1, n_A = 1.5 also coincide nicely with the Mie series. The reasons for this are discussed further in the following section.
Discussion and conclusions
In this paper we have presented a straightforward method to compute the electric fields that are scattered and transmitted by the surface at the interface between homogeneous media. The electric fields are expressed as an expansion of sources in the form of electric and magnetic dipole pairs that are i) tangential to the surface and ii) spaced at approximately even intervals. The relative phase and amplitude of the dipole sources are found by applying the extinction theorem to the tangential components of the electric field at suitable control points in each medium. We have found that the control points are optimally positioned at a perpendicular distance of 3s on either side of the interface, where s is the nominal source spacing, resulting in a robust and efficient solution based on regular matrix inversion; for this reason we refer to the method as 3sBSM.
The choice of sample spacing, s, effectively defines the resolution of the technique. It might be expected from the results presented that the 3sBSM provides an exact calculation for any smooth surface generated by a point cloud of this resolution (e.g. generated using cubic-spline interpolation). It should be remembered, however, that it is the electric field components at the control points that are constrained, and these have propagated a minimum distance, 3s, from the sources. As mentioned in Section 4.1, this electric field could in principle be generated by an infinite number of sources positioned anywhere in the region between the surfaces defined by the inner and outer control points. It is more reasonable, therefore, to expect the method to correctly represent a smooth surface that lies somewhere between these bounds, or equivalently a surface form that is defined by a bandlimited function and is adequately sampled (by the Nyquist criteria) at a resolution of 3s. We also note here that more advanced sampling strategies, such as those described in the context of MAS, might also be possible [32,33]. Finally, it is worth stating that in order to sample the propagating electric fields correctly, the maximum sample spacing is half of the wavelength in the medium of greatest refractive index. There is no apparent restriction on the minimum sample spacing.
With these restrictions the 3sBSM provides a straightforward and easy-to-implement method to solve electromagnetic scattering problems. The results in this paper show good agreement between the 3sBSM and the analytical results obtained from the Mie series expansion and the Fresnel formulae. Furthermore, the method appears to provide accurate results for resonant cases where problems have been reported elsewhere. We believe this is because i) the basis functions used to generate the electric fields at the control points are complete and ii) the fields at the control points are correctly sampled according to the Nyquist criteria. With these two conditions we believe that the 3sBSM provides a single, unique solution to the electric fields defined by the extinction theorem.

Fig. 6. Scattering from a sphere with radius r = λ_B, n_B = 1, n_A = 1.5, at discretization s = λ_B/10; (a) extinction of the sum of the scattered field and incident field inside the sphere and the extinction of the transmitted field outside the sphere, (b) comparison of the scattered fields with the Mie series. Fields are plotted in the xz plane, outside the sphere at radius a = 1.4λ_B, as a function of the angle θ between the radius vector of the point and the x-axis (such that in forward scatter θ = π/2).
Disclosures
The authors declare no conflicts of interest.
"Physics",
"Engineering"
] |
DETECTION OF BUILT-UP AREAS USING POLARIMETRIC SYNTHETIC APERTURE RADAR DATA AND HYPERSPECTRAL IMAGE
Polarimetric synthetic aperture radar (POLSAR) data are advantageous for extracting information about objects and structures by using wave scattering and polarization properties. Hyperspectral remote sensing exploits the fact that all materials reflect, absorb, and emit electromagnetic energy at specific wavelengths, in distinctive patterns related to their molecular composition. As a result of their fine spectral resolution, hyperspectral image (HSI) sensors provide a significant amount of information about the physical and chemical composition of the materials occupying the pixel surface. In target detection applications, the main objective is to search the pixels of an HSI data cube for the presence of a specific material (target). In this research, a hierarchical constrained energy minimization (hCEM) method using 5 different adjusting parameters has been used for target detection from hyperspectral data. Furthermore, to detect the built-up areas from POLSAR data, building objects are discriminated from the surrounding natural media present in the scene using the Freeman polarimetric target decomposition (PTD) and the correlation coefficient between co-pol and cross-pol channels. The target detection method has also been implemented in different polarization bases in order to use more of the available information. Finally, a majority voting method has been used to fuse the target maps. A polarimetric C-band SAR image acquired by Radarsat-2 over the San Francisco Bay area was used for the evaluation of the proposed method.
INTRODUCTION
In this paper, we consider how a combination of polarimetric synthetic aperture radar (PolSAR) data and hyperspectral images can be used to enhance the detection of targets (built-up areas). Hyperspectral imaging (HSI) sensors collect data that can be represented by a three-dimensional data cube. For each pixel within a hyperspectral image, a continuous spectrum is sampled and can be used to identify materials by their reflectance. One shortcoming of HSI is that it provides no surface penetration. To overcome these limitations and enhance HSI system performance, we fuse HSI data with PolSAR sensor data. In counter camouflage, concealment, and deception applications, HSI data can be used to identify ground cover and surface material, and PolSAR data can determine if any threat objects are under concealment. Because PolSAR and HSI sensors exploit different phenomenology, their detection capabilities complement each other. PolSAR penetrates foliage and detects targets under the tree canopy, but has significant clutter returns from trees. HSI, on the other hand, is capable of subpixel detection and material identification. Both SAR and HSI systems may suffer substantial false-alarm and missed-detection rates because of their respective background clutter, but we expect that combining SAR and HSI data will greatly enhance detection and identification performance.
Polarimetric SAR Target Detection
The strategic advantage of SAR in target detection is the possibility of monitoring under foliage. In PolSAR images, the main feature of a target is a relatively large backscattering signal, which is usually brighter in comparison with the clutter. Generally, statistical tests on the intensity of the clutter or polarimetric target decompositions (PTD) based on physical concepts (Yamaguchi et al. 2005) have been applied to separate the targets from the background. Several detectors have been proposed in recent years. Some of them exploit the different polarimetric channels as independent measurements of the same scene (Rey 2002). Another class of polarimetric detectors adds physical rationale exploiting knowledge regarding the scattering. The idea behind these methodologies is that the differences between clutter and targets can be magnified if some specific aspects of the polarimetric return are observed. In this second category, there are algorithms whose detection role is based on a rationale linked to the physical behavior of the clutter (Nunziata, Migliaccio et al. 2012). The built-up areas can be estimated by many methods (Moriyama et al. 2005; Zhang et al. 2008; Guillaso et al. 2003; Guillaso et al. 2005). In this paper, PTD and the polarimetric correlation coefficient are used to quickly estimate built-up areas, as shown in Fig. 1. The structure of a building is like a dihedral corner reflector, so these areas have a strong double-bounce scattering component. The double-bounce scattering component can be estimated by using the PTD.
The measured coherency matrix can be represented as the sum of several scattering components using PTD (Moriyama et al. 2005). For example, for PolSAR data over built-up areas the coherency matrix is decomposed into three scattering mechanisms corresponding to odd-bounce, even-bounce, and volume scattering, where one parameter is the ratio of HH to VV backscatter in odd-bounce scattering, a similar coefficient applies to even-bounce scattering, and two further parameters are the ratios of the HH and HV backscatter to the VV backscatter in volume scattering, respectively. However, the ground-trunk interaction in forests is also like a dihedral corner reflector in C and L bands. To detect the refined built-up areas, the different scattering characteristics of natural distributed areas and built-up areas are used. The main point of difference is the polarimetric correlation coefficient, because the reflection symmetry condition does not hold for built-up areas. The correlation coefficient between co- and cross-polarized channels is defined by [6]

$Cor = \dfrac{|\langle S_{HH} \, S_{HV}^{*} \rangle|}{\sqrt{\langle |S_{HH}|^{2} \rangle \, \langle |S_{HV}|^{2} \rangle}} \quad (3)$
If the correlation coefficient Cor is close to one in a test area, this area can be regarded as a built-up area. As shown in Fig. 1, we used different polarization states (different ellipticities and orientation angles) to calculate several correlation coefficients. Polarization refers to the alignment and regularity of the electric field component of the electromagnetic wave. The path of the end point of the electric field vector traces out an ellipse in its general form, as shown in Fig. 2. The size of the ellipse is proportional to the amplitude of the wave. The shape can be characterized by two geometrical polarization parameters: the ellipticity τ, varying from −45° to +45°, and the orientation angle φ, varying from 0° to 180°. The electric field of a monochromatic plane wave propagating in the z-direction can be represented by a two-component vector in any polarization basis. This can be expressed in terms of a complex polarization vector. A rough built-up area can be estimated first by setting a threshold value on the double-bounce scattering component; a refined built-up area can then be obtained by setting a threshold value on the correlation coefficient within the rough built-up area. In this paper, the threshold values are taken as the mean values of the double-bounce scattering component and of the correlation coefficient, respectively.
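A quick synthetic check of this criterion is sketched below (the data model and all numbers are hypothetical illustrations): for a reflection-symmetric natural area the co-pol/cross-pol correlation is close to zero, while correlated channels, as produced by oriented built-up structures, push it toward one.

```python
import numpy as np

def copol_xpol_correlation(S_hh, S_hv):
    # |<S_hh S_hv*>| / sqrt(<|S_hh|^2><|S_hv|^2>), estimated over a test area.
    num = np.abs(np.mean(S_hh * np.conj(S_hv)))
    den = np.sqrt(np.mean(np.abs(S_hh) ** 2) * np.mean(np.abs(S_hv) ** 2))
    return num / den

rng = np.random.default_rng(0)
n = 100_000
gauss = lambda: rng.standard_normal(n) + 1j * rng.standard_normal(n)

hh_nat, hv_nat = gauss(), gauss()        # independent channels (natural area)
hh_urb = gauss()
hv_urb = 0.8 * hh_urb + 0.2 * gauss()    # correlated channels (built-up area)

print("natural area :", round(copol_xpol_correlation(hh_nat, hv_nat), 3))  # ~0
print("built-up area:", round(copol_xpol_correlation(hh_urb, hv_urb), 3))  # ~1
```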
Detection methods overview
Recently, target detection has attracted considerable interest in many hyperspectral remote sensing applications, such as agriculture, forestry, geology, and defence. In fact, the aim of target detection is to identify targets: rare pixels with known spectral signatures. Over the last two decades, several detection algorithms have been developed using statistical, physical, or heuristic approaches (Manolakis and Shaw 2002; Manolakis, Marden et al. 2003; Nasrabadi 2014). Most algorithms are based on second-order statistics to construct the detector, such as the matched filter (MF) (Manolakis, Marden et al. 2003), constrained energy minimization (CEM) (Farrand and Harsanyi 1997) and the adaptive coherence estimator (ACE) (Kraut, Scharf et al. 2001; Manolakis, Marden et al. 2003). In 2015, Zou and Shi, instead of refining the target spectrum directly as in (Wang et al. 2014; Fan et al. 2011), built a new hierarchical CEM (hCEM) to suppress the variational background spectra while preserving the targets. Here, we use the hCEM method with the purpose of improving the performance of the traditional CEM detector. Since the classical CEM detector, in some special cases, cannot completely bring out the targets and suppress the background in one round of filtering, we filter the data several times to solve this problem. In the hCEM method, the CEM detectors of different layers are linked in series. After each layer's detection, some background spectra are suppressed by a nonlinear function based on the output of the detector. Then, the transformed spectra are forwarded to the next layer's detector, until the CEM detector's output converges to a constant. Suppressing the undesired background helps the CEM detector concentrate on the hard-to-detect targets. In this way, the performance of the detector is gradually enhanced layer by layer.
Brief Introduction to CEM
Consider a hyperspectral image with N spectral vectors and L bands: $\{x_1, x_2, \ldots, x_N\}$, $x_i \in \mathbb{R}^L$. All spectra of the hyperspectral image can be arranged in an L×N matrix $X = [x_1, x_2, \ldots, x_N]$. The aim of the CEM algorithm is to design an optimal finite impulse response (FIR) filter, specified by the vector $w$. The average output energy over all the pixel vectors can be represented as

$E = \frac{1}{N}\sum_{i=1}^{N} \left(w^T x_i\right)^2 = w^T R w, \quad (5)$

where $R = \frac{1}{N} X X^T$ is the sample correlation matrix and $y_i = w^T x_i$ is the output of the detector. CEM designs an FIR filter which minimizes the total output energy, subject to the constraint that the filter's response to d is a constant (e.g., $w^T d = 1$), as follows:

$\min_{w} \; w^T R w \quad \text{subject to} \quad d^T w = 1, \quad (6)$

where d is a prechosen target spectrum and can be obtained by averaging different target spectral vectors of a certain material in one hyperspectral image. The solution of the aforementioned optimization problem is given in (Farrand and Harsanyi, 1997), which is

$w = \frac{R^{-1} d}{d^T R^{-1} d}. \quad (7)$

Usually, the target pixels will produce large output values, while the background pixels will produce small ones. Finally, each element of y is compared with a fixed threshold. If the output value is higher than the threshold, we decide that a target is present in the corresponding pixel; otherwise, we decide that the target is absent.
hCEM
In the hCEM method, we perform a transformation on the spectra that benefits target detection. The traditional CEM detector is a single-layer detector, while the hCEM detector consists of several layers of traditional CEM detectors linked in series. After each layer of detection, the background spectra are suppressed (their magnitude is reduced while their direction in the spectral space is kept) based on the current layer's output score. The CEM detector is constructed from the correlation matrix R, while the hCEM detector is constructed from the corresponding revised correlation matrix. Since the revised correlation matrix contains more information about the hard-to-detect spectra, the hCEM can concentrate better on those hard-to-detect pixels (Zou and Shi, 2015). Now, consider the kth layer. The CEM output of this layer can be represented as

$y^{(k)} = \left(w^{(k)}\right)^T X^{(k)}, \quad (8)$

where $X^{(k)}$ and $R^{(k)}$ represent the spectral matrix and the correlation matrix of the kth layer, respectively. Then, each spectral vector is transformed by multiplying it by a nonnegative number based on its output score, as follows:

$x_i^{(k+1)} = f\!\left(y_i^{(k)}\right) x_i^{(k)}, \quad (9)$

where the nonlinear function f is imposed on the spectral vector $x_i^{(k)}$. We consider this function as a "soft-threshold" operation: hold the spectra whose output scores are large, while suppressing the spectra whose output scores are small. In this way, the undesired background spectra are gradually suppressed after each layer's detection, while the target spectra remain unchanged. In this paper, the nonlinear suppression function of Eq. (10) is parameterized by a positive constant λ that adjusts its shape; Fig. 3 shows the shape of this function under different choices of λ. Finally, the target spectra and the transformed background spectra are used to construct the new CEM detector in the (k + 1)th layer. The aforementioned steps are repeated until the output converges to a constant. In this paper, we calculate the difference between the average output energy of the current layer and that of the previous layer,

$\delta^{(k)} = \left| E^{(k)} - E^{(k-1)} \right|. \quad (11)$

If $\delta^{(k)} < \varepsilon$ (where ε is a small positive number), the iteration is stopped.
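The layer loop can be summarized in a few lines of code. This is a minimal sketch, not the authors' implementation: the exact form of the suppression function f is an assumption here (we use 1 − exp(−λy²) for illustration), and the small diagonal loading added before the matrix solve is our own numerical safeguard.

```python
import numpy as np

def cem(X, d):
    # Constrained energy minimization: w = R^-1 d / (d^T R^-1 d), y = w^T X.
    L, N = X.shape
    R = X @ X.T / N                                    # sample correlation matrix
    Rinv_d = np.linalg.solve(R + 1e-8 * np.eye(L), d)  # diagonal loading (ours)
    w = Rinv_d / (d @ Rinv_d)
    return w @ X                                       # one score per pixel

def hcem(X, d, lam=200.0, eps=1e-6, max_layers=100):
    X_k = X.astype(float).copy()
    E_prev = np.inf
    for _ in range(max_layers):
        y = cem(X_k, d)
        X_k = X_k * (1.0 - np.exp(-lam * y**2))  # suppress low-score spectra
        E = np.mean(y**2)                        # average output energy
        if abs(E - E_prev) < eps:                # stopping rule, Eq. (11)
            break
        E_prev = E
    return y
```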
Study area
The Radarsat-2 C-band fully polarimetric image and the Hyperion hyperspectral image of San Francisco, in northern California, USA, are used for the building detection. The nominal slant-range resolution of the POLSAR data is 11.1 m at near range to 10.5 m at far range. The spatial resolution of the hyperspectral image is 30 m × 30 m. The study area includes mostly urban areas and forest areas. Google Earth and hyperspectral true-color images of the study area are shown in Fig. 4.
Built-Up Areas Detection
First, to detect the built-up areas from the POLSAR data, according to Fig. 1, we used the double-bounce component of the Freeman decomposition. The result is shown in Fig. 5. After further applying the correlation coefficient in different polarization states, in 5-degree steps of the ellipticity (τ) and orientation (φ) angles, the refined built-up areas can be obtained; some of them are shown in Fig. 6. Built-up areas were also detected in the hyperspectral image by the hCEM method using 5 adjusting parameters, and two of the results are given in Fig. 7.
Decision Fusion
Different information sources can have different degrees of reliability, i.e., one data set might be more reliable than others in a specific analysis, since the characteristics of sensors or data sets are not necessarily all the same. If each data set is taken as a separate information source, the classification can be considered an example of multisource data classification, which has conceptually two different approaches. One category is the data fusion approach, shown in Fig. 8(a), in which the feature vectors of the data sources (or sensors) are given to a central decision procedure which makes the final decision. The second category, shown in Fig. 8(b), is the decision fusion approach, in which a final class decision is made by summarizing only the class decisions of each data set. In this paper we used the second approach, with a majority voting procedure. As shown in Fig. 9, the final target maps were generated by majority voting on the hyperspectral and POLSAR target detection maps separately. If a pixel is labeled as target in both maps, then this pixel is determined to be a target in the final built-up map, as shown in Fig. 10.
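The fusion rule and the evaluation metric reported in Table 1 reduce to a few lines (a sketch with our own function names; the inputs are the binary detection maps):

```python
import numpy as np

def fuse_target_maps(map_polsar, map_hsi):
    # A pixel is built-up in the final map only if both sensors agree.
    return np.logical_and(map_polsar, map_hsi)

def correctness(detected, ground_truth):
    # Correctness = TP / (TP + FP), the metric reported in Table 1.
    tp = np.sum(detected & ground_truth)
    fp = np.sum(detected & ~ground_truth)
    return tp / (tp + fp)
```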
CONCLUSION
When the resolution is low, for example in the case of spaceborne SAR, vegetation such as trees or grass and buildings within the same resolution cell should be considered in a realistic scattering scenario. In the first step we retained the odd- and even-bounce scattering components related to buildings and removed the remaining scattering components. The built-up areas were estimated by using the Freeman PTD and the correlation coefficients in several polarization states. In the second step, we used a hyperspectral target detection algorithm, the hCEM algorithm, which suppresses undesired background spectra and holds the target spectra through a layer-by-layer filtering procedure, so that in each layer we constructed a better detector than in previous layers. Experimental results on real POLSAR and hyperspectral images suggest that our procedures produce reliable results for both data types.
Fig. 1. The flow chart of the detection of built-up areas. The volume scattering represents the remaining component besides odd-bounce and even-bounce scattering in built-up areas.
Fig. 3. Shape of the nonlinear suppression function with different choices of λ.
Fig. 4. Google Earth (a) and hyperspectral true-color (b) images of the study area.
Fig. 5. Built-up areas estimated by using the double-bounce scattering component.
Fig. 8. Fusion procedures: (a) data fusion, (b) decision fusion.
Fig. 10. Final target map.

Finally, we calculated the Correctness parameter (TP/(TP+FP)) for each output, as shown in Table 1.
"Environmental Science",
"Mathematics"
] |
Design of clustered MEN based on effective use of multi-energy
This paper puts forward a new concept, the "clustered MEN", based on multi-energy complementarity, which abandons the notion that microgrids must be connected to the main grid. Firstly, this paper expounds the basic concept of the multi-energy microgrid and how to improve its reliability and flexibility. Next, it focuses on the analysis of its implementation methods, including topology design using the concept of the "energy hub", and the enumeration of the objective function and constraints. Finally, this paper explains how to cluster the microgrids using energy routers, and describes the layered structure of the clustered microgrids and its characteristics. This paper provides ideas for the concept of multi-energy systems and smart cities.
Introduction
In recent years, with the development of distributed generation (DG) and support from governments, microgrids are emerging [1,2]. The Multi-Energy Network (MEN) can achieve coordinated optimization and control of distributed energy devices, and ultimately realize the complementarity of various energy sources such as electricity, heat and gas, thereby improving energy efficiency [3]. However, due to problems with DG power quality and reliability, there are many challenges in the grid connection and local accommodation of distributed power [4,5].
At present, cluster research on microgrids is mostly focused on electricity microgrids, such as DC microgrids [6,7]. However, there is relatively little cluster research on multi-energy systems. In most integrated microgrids that contain multiple energy sources, the energy is consumed locally or fed into the large grid [8,9]. This paper puts forward a new concept, the "clustered MEN", which weakens the view that a microgrid must be integrated into the large grid. Through reasonable topology design and multi-energy complementarity, a microgrid can improve the utilization of DG and even run independently. The microgrids are then clustered by the use of energy routers to further amplify the complementary role of each microgrid, so that the clustered MEN is more robust.
As for topology design, this paper establishes a comprehensive goal considering economy, environment and reliability, based on regional energy optimal allocation principles [10], and uses the concept of the "energy hub" to complete the construction of a microgrid from scratch. In this paper we will show the validity and future potential of the clustered MEN.
Multi-Energy Microgrid
The multi-energy microgrid established in this paper mainly includes the following types of energy:

a. Generation with fossil energy, such as a thermal plant or gas engine.
b. Generation with sustainable energy, such as PV or wind generation.
c. Battery energy storage, such as lead batteries.

The microgrid configuration above may be referred to as a smart area, which includes a smart grid, a heat supply network and a transportation network. In order to make up for the shortcomings of DG, such as power quality and reliability, the microgrid in this paper adopts the following solutions.

2.1. HEMS

A HEMS (Home Energy Management System) includes the home network (LED lighting, television, washer, air conditioner, etc.), home gateway, heat pump, fuel cell, and sustainable energy sources (such as PV and wind generation).
Among these, the consumption of PV is the most important problem, so the user's behavior needs to be adjusted. For example, the washing machine and electric cooker should be used when PV output is sufficient, and the water heater should heat in advance using PV generation. An optimized electricity plan will increase the proportion of self-consumed PV, reduce electricity costs, and improve the economy for the user.
Smart meter units
A Smart House consists of smart appliances fitted with smart cards, which act as the communication interface between the Smart Meter and the appliances. A number of such Smart Houses are connected to a Town Server. This Town Server is able to control the power provided by the service provider and the power generated by renewable sources. Smart Houses are controlled by the Town Server through the Smart Meter.
Energy storage system
The battery can realize the coupling and decoupling of different energy networks. It can store energy and release it when needed, which enables the different energy networks to be interconnected through the battery. In addition, the battery also realizes the decoupling of the energy networks: it can release stored energy at an appropriate time to achieve temporal decoupling, and stored energy can be transferred between different energy grids for transportation, achieving spatial decoupling.
Electric vehicle (EV)
As a new generation of transport, the plug-in hybrid electric vehicle (PHEV) and the battery electric vehicle (BEV) have received widespread attention and development due to their ability to store and replenish energy. Wind-EV bidirectional complementation will reduce abandoned wind energy, contribute to clean-energy charging, and improve wind energy utilization. The EV can be used as social infrastructure: for example, the EV supplies energy to the HEMS when there is an energy shortage in the home, and, conversely, the HEMS supplies energy to the EV when there is an energy surplus. In this way, the reliability and flexibility of the microgrid are greatly improved.
Topology designing and objective function
This paper uses the key concept of the "Energy Hub" to complete the construction of a microgrid from scratch, covering both the choice of investments and the power flows between the different energy networks.
Energy hub
The Swiss Federal Institute of Technology developed the concept of an energy hub (EH) capable of coupling various energy carriers such as electricity, heat, and gas [11]. The introduction of energy hubs stems from the mathematical requirement to couple input and output energy. When the system is running, users and managers pay more attention to input and output energy, so the conversion process can be approximated as a black box; a local energy network can thus be abstracted as shown in Figure 1.
The energy hub abstractly represents the conversion process, where P_m represents the m-th type of input energy of the local energy network, and L_n represents the n-th type of output energy obtained by the user side [12]. The mapping covers the various conversion processes in the hub, such as energy conversion, transmission, and storage. Therefore, an energy hub can be summarized by the following three parts: d. energy transmission equipment, which only transmits energy without converting it between forms, such as power transmission lines, gas network pipelines, and heat pipe networks; e. energy conversion equipment, which achieves the conversion and coupling of different kinds of energy, such as gas turbines converting natural gas into electrical energy and wind turbines converting wind energy into electrical energy; f. energy storage equipment, which can be divided into electricity storage devices and heat storage devices; the EV mentioned before is also storage equipment to some degree.
In an EH, the coupling between the output energy L and the input energy P can be formulated as L = f(P) (1). For an EH with M types of input energy and N types of output energy, Eq. (1) can be rewritten in matrix form as [L_1, L_2, ..., L_N]^T = C [P_1, P_2, ..., P_M]^T, C = (c_mn) (2), where c_mn denotes the coupling factor between input m and output n. The coupling factor is a combination of the dispatch and efficiency factors: the efficiency is determined by the characteristics of the energy converters, and the dispatch factor represents the operating status of the EH. The parameters and costs of the candidate energy converters are shown in Table 1. We then establish the branch energy flow model and energy flow equations for the EH [8]; whether a device is selected, and whether a device or branch is connected, can then be represented by a 0-1 matrix.
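As an illustration of Eq. (2), the following minimal Python sketch computes the output energy flows L = CP for a small hypothetical hub; the converter efficiencies and input flows below are illustrative values, not the parameters of Table 1.

```python
import numpy as np

# Minimal sketch of the energy-hub coupling L = C @ P for an EH with
# M = 2 input carriers (electricity, gas) and N = 2 output carriers
# (electricity, heat). Coupling factors combine dispatch and efficiency.
# Example devices: a transformer (efficiency 0.98) and a CHP unit that
# converts gas into electricity (0.35) and heat (0.45); values are
# purely illustrative.
C = np.array([[0.98, 0.35],   # electricity output row
              [0.00, 0.45]])  # heat output row

P = np.array([100.0, 200.0])  # input energy flows (kW): electricity, gas
L = C @ P                     # output energy flows (kW)
print(L)                      # [168.  90.]
```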
Objective function and constraints
3.2.1. Objective function. The objective of EH planning is to minimize the overall cost C = C_1 + C_2, where C_1 is the economic cost and C_2 is the environmental cost. The cost terms involve the power factor of the i-th energy converter, the pollutant index k, the total number of pollutant types K, and the unitary environmental value of each pollutant, including the amount of the fine.
3.2.2. Constraints. a) Equality constraints: the internal energy hub meets the constraints of the hub itself, as in Eq. (1). b) Inequality constraints: the output of each distributed power supply and each conversion device must lie within its permitted range.
Clustered by energy router
The optimization of energy flow is inseparable from the real-time transmission of information within a microgrid and between microgrids, so an energy router is needed. The energy router is the core equipment of the energy internet, handling energy, information, customization, and the functions required for system operation. The energy switch enables the secure connection of energy subnets and energy routers. While performing detection and management of the entire network, the energy switch also submits relevant information (such as energy production, consumption, and storage methods) to the energy routers, ensuring the efficient use of resources and equipment and achieving a balance between supply and demand. It can work in three different modes: grid-connected, interconnection, and island.
In a separate HEMS, excess electricity from PV generation is curtailed; but if microgrids are clustered, excess electricity can be transferred and used in neighboring houses. With the help of energy routers, different microgrids can thus be clustered to realize optimized transfers among regions.
The structure of the clustered expandable microgrids is shown in Figure 2. The cluster is built up as follows: a. construction of an appropriately scaled distribution grid (the first cluster); b. expansion of clusters according to the increase of regional demand (the second cluster); c. interconnection of clusters by energy routers (tie-lines and inverter control). For instance, as shown in Figure 3, the output of PV is larger in sunny areas and the output of wind generation is larger in rainy areas, while energy demand in office districts is concentrated and the daytime load is higher than the nighttime load. Because DG output and user demand differ between microgrids, we can conclude that: a. energy management over larger areas is more effective than in a single house; b. excess electricity stored in batteries in sunny areas can be transferred to houses that require electricity, so it is not necessary to install a battery in every individual house; if one battery is installed for every few houses, the installation cost decreases;
c. demand in residential areas is larger in the morning and at night, while demand in commercial areas is larger in the daytime; by transferring electricity between areas, electricity can be used effectively.
Conclusions
The large power grid has many advantages, such as long transmission distance, large capacity, and low loss, but at the same time it has problems such as difficulty of scheduling, resonance, and system security. Therefore, this paper focuses on the concept of the "clustered MEN", which abandons the notion that microgrids must be connected to the big grid. This paper then describes the implementation of the clustered MEN at the device level, the objective level, and the interconnection layering planning level: a. the HEMS, Smart Meter, energy storage system, and EV can all optimize the multi-energy flow; b. the "Energy Hub" is used to complete the construction of a microgrid from scratch; c. microgrids are clustered by means of energy routers to further amplify their complementary role.
This paper is just a sketch of the clustered MEN. The hub model needs to be improved, and the reliability index is not yet taken into account in the objective function; we will continue this study. | 2,595.6 | 2018-09-01T00:00:00.000 | [
"Computer Science"
] |
Intelligent Healthcare System Using Mathematical Model and Simulated Annealing to Hide Patients Data in the Low-Frequency Amplitude of ECG Signals
Healthcare has been an important medical topic in recent years. In this study, we propose an intelligent healthcare system that uses an inequality-constrained optimization model with the signal-to-noise ratio (SNR) and wavelet-domain low-frequency amplitude adjustment techniques to hide patients’ confidential data in their electrocardiogram (ECG) signals. The extraction of the hidden patient information also utilizes the low-frequency amplitude adjustment. The detailed steps of establishing the system are as follows. To integrate confidential patient data into ECG signals, we first propose a nonlinear model to optimize the quality of ECG signals with the embedded patients’ confidential data, including patient name, patient birthdate, date of medical treatment, and medical history. Then, we apply Simulated Annealing (SA) to solve the nonlinear model so that the ECG signals with embedded patients’ confidential data have a good SNR, a good root mean square error (RMSE), and high similarity. In other words, the distortion of the PQRST complexes and the ECG shape caused by the embedded patients’ confidential data is very small, and thus the quality of the embedded ECG signals meets the requirements of physiological diagnostics. At the terminals, one can receive the ECG signals with the embedded patients’ confidential data, and the embedded patients’ confidential data can be extracted without the original ECG signals. The experimental results confirm that our method maintains a high quality for each ECG signal with embedded patient confidential data. Moreover, the embedded confidential data show good robustness against common attacks.
Introduction
In recent years, the technology of Internet of Things (IoT), in which sensors are connected to the network to transmit important sensor information, has gradually matured [1][2][3][4][5][6][7]. Moreover, supporting medical treatment by IoT is even more important [8,9]. The electrocardiogram (ECG) represents the electrical activity of the human heart so that the ECG can be used as a reference for the analysis of cardiac pathology and the diagnosis of the cardiovascular system. Therefore, the ECG is a significant piece of bio-information which needs to be protected and transmitted in the hospital network, and it is necessary to apply the information hiding technology of ECG to protect patients' rights and information.
Research on protecting ECG information through watermarking or masking techniques remains an important topic. Kong et al. [10] and Engin et al. [11] proposed simple data masking methods for ECG signals, but the methods are not blind. Zheng and Qian [12,13] proposed a method to hide wavelet-domain ECG data in complex non-QRS frames to ensure the recovery of practically undistorted ECG signals. Kaur et al. [14] used a blind masking method for the secure transmission of ECG signals in wireless networks. Ibaida et al. [15] improved the LSB (Least Significant Bit) watermarking technique and applied this upgraded approach to embed health information into ECG signals. Ibaida et al. [16] announced a watermarking approach to hide patient biomedical information in ECG signals and ensure the integrity of the patient-ECG connection, which is convenient for a wearable health monitoring platform; nonetheless, it is hard to select the embedding site.
The authors in [17,18] apply a discrete wavelet transform (DWT) with seven-level decomposition to transform the ECG signal and combine a synchronization code with a watermark embedded in the low-frequency sub-band of level 7 to get a better signal-to-noise ratio (SNR) and bit error rate (BER). However, the quality of all watermarked ECG signals degrades when the embedding strength is increased. Furthermore, Guo and Zhou [18] proposed a model with blind detection by single-channel electromyography. Dey et al. [19] designed watermarks from reversible binary bits, embedded the watermarks into the PPG signal, and extracted them based on an error prediction algorithm. Dey et al. [20] embedded a binary watermarked image into the ECG signal to obtain a novel session-based blind watermarking method. However, both methods [19] and [20] have the drawback of not being blind. Ayman and Ibrahim [21] established a wavelet-based information-hiding technique that combines encryption and scrambling to protect patients' confidential data. To protect patient rights and information, transform-domain single-coefficient quantization is applied in ECG digital watermark encryption technology [22,23]; with this practice, the distortion of the PQRST complexes and the amplitude of the ECG signal is very small. Jero et al. [24,25] utilized the curvelet transform to determine coefficients for storing diagnostic information; the novelty in their method is the use of the curvelet transform, suitable selection of the watermark location, and a threshold judgement concept. In [26], an original time-frequency watermarking scheme is realized with a lead-independent, beat-to-beat adaptive data container design; the authors tested six wavelets, six encoding bit depth values, and two watermark content types to establish the conditions under which the watermarked ECG meets International Electrotechnical Commission (IEC) performance requirements. Sanivarapu et al. [27] announced a wavelet-based watermarking procedure in which the patient information is hidden in the ECG as a QR image: they first converted the 1-D ECG signal into a 2-D ECG image using the Pan-Tompkins algorithm, used a wavelet transform to decompose the 2-D ECG image, and then decomposed the wavelet detail coefficients and the QR image using QR decomposition to incorporate the data. The concepts proposed in [28] are single-sample quantification, ECG watermarking, and threshold-based compression, which reduce the data size while ensuring patient data confidentiality and authenticity.
Since patients' confidential data, including patient name, patient birthdate, date of medical treatment, and medical history, fall under the right to personal privacy, the hospital and related personnel must respect and protect patients' confidential data during network transmission or telemedicine to prevent the data from being leaked, stolen, or even misappropriated. Accordingly, we design a useful information-hiding technique to embed patients' confidential data into ECG signals in this study. In other words, we propose a new technique to hide patients' confidential data in ECG signals. Since the ECG has high accuracy requirements, especially for the PQRST waves [29], we formulate the signal-to-noise ratio (SNR) and the low-frequency amplitude embedding rule as a performance index and a constraint so as to obtain an optimization model for embedding sensitive patients' confidential data, including patient name, patient birthdate, date of medical treatment, and medical history attached with the patient bed, into ECGs. The optimization model is solved by the simulated annealing (SA) algorithm, and the results are applied to incorporate the confidential patient data while achieving a better signal-to-noise ratio (SNR), root mean square error (RMSE), and similarity. Consequently, the distortion of the PQRST complexes and the ECG amplitude is very low, so that the embedded ECG can meet the requirements of physiological diagnostics. Through network transmission, the ECG with embedded confidential data can be received at the other end, and the confidential data can be extracted without the original ECG. In addition, the embedded confidential data show good robustness against common attacks.
The rest of this study is as follows. Section 2 gives a sketch of some preliminary work, including discrete wavelet transform (DWT) and simulated annealing (SA). Section 3 presents the proposed method. Section 4 displays the experimental results. Finally, conclusions are shown in Section 5.
Preliminaries
In this section, we recall the knowledge we need to use in the later section: discrete wavelet transformation (DWT) and simulated annealing (SA).
Discrete Wavelet Transformation (DWT)
The discrete wavelet transformation using scaling and shifting parameters is defined by the scaling family ϕ_{j,k}(t) = 2^{-j/2} ϕ(2^{-j} t - k) and the wavelet family ψ_{j,k}(t) = 2^{-j/2} ψ(2^{-j} t - k). Moreover, V_j = span{ϕ_{j,k} : k ∈ Z} and W_j = span{ψ_{j,k} : k ∈ Z} satisfy V_{j-1} = V_j ⊕ W_j, so that, in a multi-resolution analysis of L²(R), the subspaces ..., W_1, W_0, W_{-1}, ... represent the orthogonal differences of the V_j above. The orthogonality relations give the existence of sequences h = {h_k}_{k∈Z} and g = {g_k}_{k∈Z} which conform to ϕ(t) = √2 Σ_k h_k ϕ(2t - k) and ψ(t) = √2 Σ_k g_k ϕ(2t - k), where h and g are the low-pass and high-pass filters, respectively. In the following work, the host digital signal S(n), n ∈ N, represents the sampling of the original signal S(t) at the n-th sampling time, and the orthogonal Haar wavelet basis is utilized to realize the DWT of S(n) by a filter bank [30,31].
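As a concrete illustration, the following minimal Python sketch performs the 5-level Haar DWT of one segment with the PyWavelets filter bank; the segment values are placeholders, not data from this study.

```python
import numpy as np
import pywt  # PyWavelets

# Minimal sketch: decompose one signal segment with the orthogonal Haar
# wavelet via a filter bank, as described above.
n = 4096
S = np.random.randn(n)  # stand-in for one sampled segment S(n)

# 5-level Haar DWT: returns [cA5, cD5, cD4, cD3, cD2, cD1], i.e. the
# lowest-frequency sub-band followed by the detail (high-frequency) bands.
coeffs = pywt.wavedec(S, 'haar', level=5)
low_freq = coeffs[0]   # lowest-frequency sub-band: 4096 / 2**5 = 128 coeffs
print(len(low_freq))   # 128
```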
Simulated Annealing (SA)
Simulated annealing (SA) is an artificial-intelligence algorithm that uses randomness to approximate the global optimum of a given function. The SA algorithm is derived from the principle of solid annealing: the solid is heated to a sufficient temperature and then slowly cooled. During heating, the internal particles of the solid become disordered as the temperature rises and the internal energy increases; during cooling, the particles gradually tend to order, reach equilibrium at each temperature, and finally reach the ground state at room temperature, where the internal energy is reduced to a minimum. According to the Metropolis criterion, the probability that a particle tends to equilibrium at temperature T is e^{-ΔE/(kT)}, where E is the internal energy at temperature T, ΔE is its change, and k is the Boltzmann constant. Using solid annealing to simulate a combinatorial optimization problem, the internal energy E is mapped to the objective function value f and the temperature T to a control parameter t; this yields the simulated annealing algorithm for combinatorial optimization: starting from an initial solution i and an initial value of the control parameter t, the iteration "generate a new solution, calculate the difference of the objective function, accept or discard" is repeated for the current solution while t gradually decays, and the current solution at the end of the algorithm is the obtained approximate optimal solution. The annealing process is controlled by the cooling schedule, including the initial value of the control parameter t and its decay factor Δt, the number of iterations at each value of t, and the stopping condition [32]. The SA algorithm is easy to implement due to its simple concept and calculation.
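The following minimal Python sketch implements the generic SA loop described above (Metropolis acceptance and geometric cooling); the objective, neighborhood, and schedule parameters are illustrative placeholders, not those of the proposed model.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, tf=1e-3, r=0.95, d=50):
    """Generic SA minimizer following the cooling schedule described above:
    t0/tf are the initial/final temperatures, r is the decay factor of the
    control parameter, and d is the iteration count at each temperature."""
    x, fx = x0, f(x0)
    t = t0
    while t > tf:
        for _ in range(d):
            y = neighbor(x)          # generate a new solution
            fy = f(y)
            delta = fy - fx          # difference of the objective ("energy")
            # Metropolis criterion: always accept improvements; accept a
            # worse solution with probability exp(-delta / t).
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x, fx = y, fy
        t *= r                       # cooling: t gradually decays
    return x, fx

# Hypothetical usage: minimize a one-dimensional quadratic.
best_x, best_f = simulated_annealing(
    f=lambda x: (x - 3.0) ** 2,
    x0=0.0,
    neighbor=lambda x: x + random.uniform(-0.5, 0.5),
)
```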
Proposed Method
The waves of the electrocardiogram (ECG) were named by the Dutch physiologist W. Einthoven (the inventor of the ECG), who classified one cardiac cycle into the P, Q, R, S, and T complex waves shown in the ECG pattern at the top of Figure 1. Because ECG diagnosis depends on the PQRST waves [19], we must avoid distorting the shape of these waveforms when we add patients' confidential data, including patient name, patient birthdate, date of medical treatment, and medical history, into ECG signals. Accordingly, we propose an optimization model to maximize the quality of the embedded ECG signals under low-frequency amplitude modification. Following the flowchart in Figure 1, we give the details of embedding patients' confidential data in the following subsections.
Perform DWT and Binary Bits
Let S denote an ECG signal, and cut S into several segments. Without loss of generality, each segment has the same length of n sample points. We then perform a DWT decomposition on each segment S = {s_1, s_2, ..., s_n} to get the high-frequency sub-band S_{j,H} and the low-frequency sub-band S_{j,L} at each level j = 1, 2, 3, ..., as shown in Figure 2. At the same time, the patients' confidential data, including patient name, patient birthdate, date of medical treatment, and medical history, are converted to binary bits B = {b_i | b_i = 1 or 0}, since binary bits can easily be hidden in the lowest-frequency sub-band of the DWT.
Proposed Patient Confidential Data Hiding Technique
Since the energy and quality of a signal are concentrated in the low frequencies, the high frequencies are susceptible to noise interference and can easily be removed or filtered without affecting the quality of the signal. Therefore, the binary bits are usually embedded in the low frequencies to prevent them from being removed, filtered out, or interfered with by noise. In order to define a better threshold for embedding and extracting the binary bits in the low frequencies, the original signal of length L is first segmented into I frames, and then a k-level DWT is performed on each frame. Therefore, the total number of lowest-frequency coefficients in each frame is n = L/(I · 2^k), and the mean of these coefficients is computed. Two thresholds are then defined from this mean and the embedding strength ε > 0. Finally, a binary bit of value 0 or 1 is embedded into the low frequencies by the following rules,
where ĉ_i is the embedded DWT low-frequency coefficient corresponding to the original DWT coefficient c_i.
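Since the exact threshold equations did not survive extraction in this copy, the following Python sketch only illustrates one plausible mean-referenced form of the embedding rule; the thresholds mean ± ε and the sign-preserving update are assumptions, not the paper's verbatim rule.

```python
import numpy as np

def embed_bits(c, bits, eps):
    """Hedged sketch of low-frequency amplitude embedding.

    ASSUMPTION: the coefficient magnitude is pushed above mean + eps to
    encode bit 1 and below mean - eps to encode bit 0, preserving each
    coefficient's sign; the paper's exact thresholds may differ.
    """
    c = np.asarray(c, dtype=float)
    mean = np.mean(np.abs(c))           # mean of lowest-frequency magnitudes
    t1, t0 = mean + eps, mean - eps     # the two thresholds
    c_hat = c.copy()
    for i, b in enumerate(bits):
        target = t1 if b == 1 else t0
        c_hat[i] = np.sign(c[i]) * target if c[i] != 0 else target
    return c_hat
```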
Enhance Performance by the Proposed Optimization Model
Generally, the shape of an ECG signal is distorted when embedding patient confidential data. In order to lessen the distortion of the ECG shape, we maximize the SNR, defined by SNR = 10 log_10( Σ_i s_i² / Σ_i (s_i − ŝ_i)² ), where {s_i} are the original ECG signal sample points and {ŝ_i} are the unknown embedded (or modified) ECG signal sample points. Since we realize the DWT using orthogonal wavelet bases, the SNR can be rewritten in terms of the DWT coefficients. From the perspective of SNR maximization, we evaluate the unknown values of {ĉ_i | 1 ≤ i ≤ n} using the following optimization model.
Solving the Proposed Optimization Model by SIMULATED Annealing (SA)
Since Equations (13a)-(13d) form an optimization model, we apply the Simulated Annealing (SA) solver to approximately find the optimal solutions of {ĉ_i | 1 ≤ i ≤ n}. At each point in time, SA stochastically selects a solution in the neighborhood of the current solution, measures its quality, and then decides whether to move to it or to stay with the current solution, according to one of two probabilities chosen on the basis of whether the new solution is better or worse than the current one. During the search for the optimal solution, the temperature gradually drops from an initial positive number to zero and affects both probabilities: at each step, the probability of moving toward a better new solution is kept at 1, while the probability of moving to a worse new solution is progressively reduced toward zero. In this section, SA is applied to approximate the optimal solution of the proposed model. Step 1. Given the initial values of the parameters, including the initial solution C_0 = {|c_1|, |c_2|, ..., |c_n|}, the initial temperature T_0, the final temperature T_f, the cooling rate r, and the iteration number D for each temperature.
Step 2. The probability of going from the current solution to a new (neighboring) solution is given by an acceptance probability function P(ΔE, T) that depends on ΔE and T. In case ΔE ≤ 0, the probability function P(ΔE, T) equals 1, and the current solution S is replaced by the new solution S'. In case ΔE > 0, the current solution S is replaced by the new solution S' when the probability function P(ΔE, T) = e^(−ΔE/T) is bigger than a random threshold.
Step 3. When Step 2 is complete, the temperature T is lowered by a cooling rate r to a new temperature T = rT.
Step 4. Check whether the temperature T has reached the final temperature T_f, in which case the simulated annealing terminates.
After using SA to integrate the patient's confidential data into the ECG signal S, an embedded ECG signal Ŝ is obtained in each case.
Perform Inverse Discrete Wavelet Transform (IDWT)
After the solution of the optimization model is obtained by SA, the patients' confidential data are embedded as binary bits into the lowest-frequency DWT coefficients of the ECGs. Next, we perform the inverse discrete wavelet transform (IDWT), as shown in Figure 3, to obtain the embedded ECGs and transfer them to end devices over the network. The end devices that receive the embedded ECGs use the extraction method in the next section to extract the binary bits and obtain the patients' confidential data.
Extraction Method
When extracting the patients' confidential data, we segment the test signal into several frames and then implement the DWT on each frame in the same manner as in the embedding process. Suppose the n consecutive absolute values ĉ*_i, i = 1, ..., n, are the optimal embedded coefficients. We extract the binary bits B by using the following rules. Figure 4 shows the architecture of the proposed intelligent confidential data communication system. First, we obtain the patients' electrocardiogram (ECG) signals from the ECG sensor module or the websites in [33,34]. Next, following Sections 3.1-3.5, we embed the patients' confidential data, including patient name, patient birthdate, date of medical treatment, and medical history attached with the patient bed, into the ECG signals in the wavelet domain by the proposed hiding method. Then, the embedded ECGs are transmitted to the related end devices through internet connections. At the end devices, the optimal embedded patients' confidential data are extracted after receiving the embedded ECGs and performing the DWT on them.
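The following Python sketch illustrates a blind extraction rule consistent with the assumed embedding sketch given earlier; the mean-magnitude comparison is an assumption, since the paper's exact rules are not reproduced in this copy.

```python
import numpy as np

def extract_bits(c_star, n_bits):
    """Hedged sketch of blind extraction.

    ASSUMPTION: each bit is recovered by comparing a received coefficient
    magnitude against the mean magnitude of the frame (above mean -> 1,
    below mean -> 0), mirroring the assumed embedding rule. No original
    ECG is needed, so the extraction is blind.
    """
    mags = np.abs(np.asarray(c_star, dtype=float))
    mean = mags.mean()
    return [1 if mags[i] >= mean else 0 for i in range(n_bits)]
```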
Experiments and Discussion
In experiments, we use the ECG data obtained from the website in [33,34] to simulate the proposed method for each ECG signal with a length of 4096 samples represented by 16-bit. Since the 5-level Haar DWT is performed on the ECG signal, the lowest-frequency sub-band in level 5 has 128 coefficients. The embedding strength ε is set to 2000 and 4000 for n = 2 and n = 4, respectively. Experimental results and discussion are listed in the following.
Without loss of generality, the superiority of the proposed method is judged by the signal-to-noise ratio (SNR) and the similarity, where the SNR is formulated as SNR = 10 log_10( Σ_i s_i² / Σ_i (s_i − ŝ_i)² ) and s_i and ŝ_i denote the original and embedded ECG signal sample points, respectively. The higher the SNR and the similarity, the smaller the distortion of the embedded signal; that is to say, the higher the SNR and the similarity, the better the quality of the embedded signal.
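The following Python sketch computes the SNR as defined above, together with a similarity measure; the normalized cross-correlation form of the similarity is an assumption, as the paper's formula did not survive extraction in this copy.

```python
import numpy as np

def snr_db(s, s_hat):
    """SNR = 10 log10( sum s_i^2 / sum (s_i - s_hat_i)^2 ), in dB."""
    s, s_hat = np.asarray(s, float), np.asarray(s_hat, float)
    return 10.0 * np.log10(np.sum(s ** 2) / np.sum((s - s_hat) ** 2))

def similarity(s, s_hat):
    """ASSUMED definition: normalized cross-correlation, so 1.0 means an
    identical waveform shape; the paper's exact formula may differ."""
    s, s_hat = np.asarray(s, float), np.asarray(s_hat, float)
    return np.sum(s * s_hat) / np.sqrt(np.sum(s ** 2) * np.sum(s_hat ** 2))
```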
Our proposed method embeds patients' confidential data into their ECG signals with minimal distortion and thus keeps each hidden ECG signal at good quality. For example, Figure 5a shows the original ECG signal without patients' confidential data, and Figure 5b shows the hidden ECG signal with patients' confidential data embedded using the proposed technique under a 5-level DWT decomposition. In Figure 5c,d, the blue curve represents the original ECG signals and the green curve represents the optimal embedded ECG signals in the case of embedding strength ε = 4000 and n = 4; Figure 5e shows that they are almost indistinguishable visually. Moreover, as shown in Table 1, the main drawback of the methods in [22,23,28] is that the SNR, which represents the quality of each embedded ECG signal, decreases significantly as the embedding strength Q increases. Since SA is applied to optimize the quality of each embedded ECG signal, our method avoids this large quality loss with increasing embedding strength ε; in other words, it maintains a good SNR (i.e., good quality) with sufficient hiding capacity for each embedded ECG signal regardless of the increase in ε. After the embedding, common attacks including re-sampling, low-pass filtering, and noise interference are used to test the robustness of the embedded patients' confidential data. The test methods and average BER results are explained as follows.
(1) Low-pass filtering: a low-pass filter is a kind of signal processing that gives easy passage to low-frequency signals and blocks high-frequency signals. From the BER information in Table 2, one can find that the robustness of the proposed method against the low-pass filter attack with cutoff frequencies of 3000 Hz and 6000 Hz is significantly higher than that of the methods in [9,22,23,28].
(2) Noise interference: Gaussian noise is a common noise for audio and other signals. Table 3 lists the experimental results of adding Gaussian noise with various noise intensities to the embedded signal. At the common noise intensities of -40 dB and -30 dB, the robustness of the proposed method is significantly higher than that of the methods in [9,22,23,28].
(3) Re-sampling: resampling converts a signal from a given sample rate to a different sample rate. Upsampling or interpolation increases the sampling rate, and downsampling or decimation decreases it; both can be accomplished using integer-valued interpolation or decimation factors. In the proposed method, the embedded signal is first decimated from 44,100 Hz to 22,050 Hz and then interpolated back to the original 44,100 Hz. This step is repeated two more times, from 44,100 Hz to 11,025 Hz and to 8000 Hz, then back to 44,100 Hz. The BER under the resampling attack is shown in Table 4; the BER information shows that the proposed embedding method leads to lower BER and better robustness.
We performed all of the experiments above using patients' electrocardiogram (ECG) data from the websites in [33,34]. With the exception of a few patients with damaged ECGs, most of the ECGs were successfully tested by our proposed method.
Conclusions
Based on the proposed optimization model and the SA algorithm, patients' confidential data are embedded into ECG signals using the DWT lowest-frequency amplitude embedding method. After testing the ECG dataset with the proposed embedding method, the difference between the embedded ECG signal and the original ECG signal is so small as to be almost negligible, making the embedded signals suitable for physiological diagnosis. Furthermore, the proposed method removes the disadvantage that the quality of each embedded ECG signal greatly decreases as the embedding strength ε increases. On the terminal device, the user can receive the embedded ECG through the Internet and then accurately extract the embedded patient's confidential data by performing the DWT on the received embedded ECGs. | 6,098.4 | 2022-10-30T00:00:00.000 | [
"Computer Science"
] |
Progressive and Corrective Feedback for Latent Fingerprint Enhancement Using Boosted Spectral Filtering and Spectral Autoencoder
The objective of this research is to design an efficient algorithm that can successfully enhance a targeted latent fingerprint from various complex backgrounds under an uncontrolled environment. Most algorithms in literature exploited dictionary learning schemes and deep learning architectures to capture latent fingerprints from complicated backgrounds and noise. However, an algorithm learned from other high-quality fingerprint images may not solve all possible cases within a given unseen image. We propose a new feedback framework to distinguish latent fingerprints from complex backgrounds and gradually improve friction-ridge quality using the information provided inside the given unseen image. We combine two efficient mechanisms. The first mechanism enhances high-quality areas in priority and feeds the enhanced areas back to improve the quality of latent fingerprints in the nearby area. The second mechanism is to verify that the first mechanism works correctly by detecting anomalously enhanced fingerprint patterns. The second mechanism employs a spectral autoencoder that learns from good fingerprint spectra in the frequency domain. The anomalous fingerprint area is sent back to the first mechanism for further improving the enhanced result. We benchmark the proposed algorithm against available state-of-the-art algorithms using two fingerprint matching systems (one commercial off-the-shelf and one open-source) on two public latent fingerprint databases. The experimental results show that the proposed algorithm outperforms most state-of-the-art algorithms in the literature.
I. INTRODUCTION
A latent fingerprint must contain several crucial features such as singular points, friction ridges, and minutiae in order to identify a prime suspect [1]. Even though several latent fingerprints are carefully collected from a crime scene and extensively developed in an advanced laboratory, these latent fingerprints may fail to identify or verify any suspect due to four uncontrolled conditions. First, we cannot control the quality and quantity of latent fingerprints unintentionally left at a crime scene. Hence, latent fingerprints are always incomplete, low-quality, and partial. Second,
we cannot control the surface and background where fingerprints are deposited, so we collect latent fingerprint images with uncontrolled background patterns, and the friction ridge quality depends on the smoothness of the surface. Third, we cannot control dust, grease, and other substances that can contaminate latent fingerprints; therefore, these latent fingerprint images are always noisy. Finally, the fingerprint can overlap with other fingerprints left by multiple touches, and we need to separate these overlapped fingerprints before sending each of them for suspect identification. These four uncontrolled conditions make it very difficult to identify the prime suspect from the available latent fingerprints. Hence, every genuine feature existing in the latent fingerprint is significant. The goal of this research is to restore and preserve any critical features that are hidden in latent fingerprints.
Most fingerprint enhancement methods in the handbook of fingerprint recognition [2] were designed explicitly for rolled and slapped fingerprints. These fingerprints are obtained from various live-scan fingerprint sensors under a controlled environment, so the fingerprint quality is usually quite good, with no interfering background. This is not the case for latent fingerprints, which are always low-quality and partial and usually contain complex backgrounds and noise. Hence, most methods in [2] fail to restore and preserve the parameters essential for enhancement. For example, the classic Gabor filtering method proposed by Hong et al. [3] suffers from a failure to estimate the orientation and frequency parameters correctly, so we cannot rely on its enhanced results.
Since 2008, several researchers have shifted their research to solving the latent fingerprint enhancement problem [4], [5]. Several latent fingerprint enhancement algorithms have emerged from machine learning tools. The most popular approaches are based on dictionary learning [6]-[13] and deep learning [14]-[22]. The key to these learning approaches is to learn from a large set of high-quality fingerprint images and to use the learned models to enhance the friction ridges of low-quality latent fingerprint images. Lately, these learning approaches have succeeded in enhancing latent fingerprints and have significantly improved the identification rate.
A. RESEARCH MOTIVATION
Most of the previously mentioned algorithms in the literature perform latent fingerprint enhancement using learning models in the spatial domain [6]-[12], [14]-[22]. In contrast, working in the frequency domain provides some advantages. Firstly, the Fourier transform decomposes a fingerprint image into a spectral magnitude image and a spectral phase image. We can extract and enhance friction ridges more easily in the spectral magnitude image because the friction ridge spectra peak and pack together in the frequency domain. Secondly, we can eliminate unwanted spectra that are unrelated to friction ridges in this spectral magnitude image. Thirdly, the spectral phase image is directly related to minutiae locations in the latent fingerprint image; we leave this spectral phase image untouched to preserve the locations of genuine minutiae in the original image, and hence we only manipulate the spectral magnitude image. Our previous work proposed dictionary learning in the frequency domain, called the spectral dictionary [13]. The enhanced results of this approach are promising. Hence, our motivation is to explore the possibility of a more sophisticated learning model in the frequency domain to solve the latent fingerprint enhancement problem.
Performing fingerprint enhancement in the frequency domain gives us another significant advantage: we can design a progressive feedback mechanism. The adaptive boosted spectral filtering (ABSF) technique [24] has succeeded in enhancing rolled and slapped fingerprints because its concept is built on this mechanism. The ABSF algorithm first enhances the high-quality friction ridges in the frequency domain; these locally enhanced results are then fed back to the original image to improve the low-quality friction ridges nearby in the spatial domain. The overlapped block-based Fourier transform allows us to diffuse good spectra into bad spectra, resulting in progressive feedback enhancement. Even though the ABSF can significantly improve very low-quality but clear friction ridges, it has a problem with noisy friction ridges on complex backgrounds. If the initial block contains friction ridges with noise on a complicated background, the enhanced result may be contaminated with enhanced noise or background instead. Moreover, the feedback operation propagates the enhancement error to the nearby blocks, resulting in enhancement failure for the entire image. Hence, the original ABSF algorithm is not suitable for directly enhancing latent fingerprints.
We aim to bring latent fingerprint enhancement to another level. To achieve this goal, we need to combine two concepts: the progressive feedback mechanism of ABSF and an efficient learning model in the frequency domain. We did preliminary work with the progressive feedback mechanism on latent fingerprint enhancement in [26]. This approach showed some impressive results by enhancing corrupted friction ridges in complex backgrounds. However, the algorithm requires human assistance for the manual selection of the initial block location. In addition, it is sensitive to error propagation: if it fails to enhance the genuine friction ridges, the enhancement error propagates to the nearby blocks, resulting in a domino effect. In another preliminary work [27], we tried to combine the two concepts by employing an autoencoder to predict a corresponding matched filter within the progressive feedback mechanism. The results still left room for improvement. In this paper, we provide a novel framework for latent fingerprint enhancement.
B. RESEARCH CONTRIBUTION
There are three key contributions of this paper as follows.
1) We introduce a novel framework for latent fingerprint enhancement, which fully exploits a progressive feedback mechanism incorporated with a new learning model for anomalous fingerprint pattern detection. The seamless integration of the two independent mechanisms brings latent fingerprint enhancement to another level.
2) We develop the progressive feedback mechanism for fully automatic latent fingerprint enhancement. We introduce an automatic initial block localization method that can indicate multiple initial locations. This method helps reduce the risk of starting at a wrong location and causing enhancement error propagation.
3) We propose a new learning model based on a stacked autoencoder, called the spectral autoencoder, in the frequency domain. We train this spectral autoencoder on a large set of enhanced spectral patches of high-quality fingerprints. The spectral autoencoder can detect anomalous fingerprint patterns in the outputs of the progressive feedback mechanism; the error locations are detected and sent back for re-enhancement in the next iteration. This learning model provides a superior scheme for error detection and error correction in the proposed framework.
II. RELATED WORK
Most latent fingerprint enhancement algorithms have exploited the conventional Gabor filtering concept. These algorithms have difficulty finding reliable parameters such as ridge orientation and frequency for Gabor filters due to corrupted friction ridges in latent fingerprint images.
Karimi-Ashtiani and Kuo [4] first addressed the latent fingerprint enhancement problem in 2008. They used Gabor filters to enhance latent fingerprints, but their parameter estimation method is not suitable for latent fingerprint problems. In 2011, Yoon et al. [5] proposed a robust orientation field estimation for latent fingerprint enhancement using the short-time Fourier transform (STFT) and the randomized random sample consensus (R-RANSAC) algorithm. Since then, the proposed methods have shifted to machine learning tools to solve this problem. We can categorize the algorithms in the literature into three main approaches: dictionary learning, deep learning, and progressive feedback. Table 1 shows selected algorithms that specifically address the latent fingerprint enhancement problem. Note that we list only the algorithms for which enhanced results were available for our benchmark comparison.
A. DICTIONARY LEARNING APPROACH
The concept of the dictionary learning approach is to estimate reliable orientation and frequency parameters for Gabor filters; the dictionary can learn good orientation and frequency parameters from high-quality fingerprints. In 2013, Feng et al. [6] first proposed a global dictionary of orientation field estimation, called GlobalDict, to retrieve local ridge orientations on latent fingerprints. The estimated local orientation is then applied to the Gabor filter with the frequency fixed at 1/9 cycles/pixel and the standard deviation fixed at 4. Yang et al. [7] improved the performance of the previous work [6] using local dictionaries, called LocalDict: they presented a location-dependent dictionary relative to the finger pose, using different dictionary sets for different fingerprint positions, and then used the estimated local orientation with a fixed frequency and standard deviation as the Gabor filter parameters. Cao et al. [8] introduced two dictionary sets covering coarse and fine friction ridge structures, called RidgeDict, for ridge frequency and orientation estimation; they applied both estimated parameters to Gabor filters with the standard deviation fixed at 4. Liu et al. [9] presented a dictionary characterized by Gabor functions with varying orientations, frequencies, and phases, and used sparse representation with the multi-scale Gabor dictionaries to reconstruct a fingerprint patch. In addition, Chen et al. [10] improved multi-scale dictionaries for orientation estimation by covering larger fingerprint areas, Liu et al. [11] developed multi-scale dictionaries with iterative orientation estimation, and Xu et al. [12] combined Gabor dictionaries with minutiae dictionaries. In contrast, Chaidee et al. [13] designed the dictionary to learn from Gabor spectral responses via Gabor filter banks and sparse representation in the frequency domain; instead of using the Gabor filter, the dictionary, called SpectralDict, predicted the shape of the filter in the frequency domain for enhancing latent fingerprints.
B. DEEP LEARNING APPROACH
Since 2017, the deep learning approach has gained attention for solving the latent fingerprint enhancement problem. The deep learning approach aims to transform a latent fingerprint image into an enhanced fingerprint image directly. Tang et al. [14] proposed a unified network architecture named FingerNet, which combines latent fingerprint segmentation, orientation estimation, enhancement, and minutiae extraction. This architecture contains two deep convolutional neural networks (CNNs), one for orientation estimation and segmentation and another for minutiae extraction; the estimated orientation parameters, together with a fixed frequency, then shape the Gabor filters for latent fingerprint enhancement. Svoboda et al. [15] and Li et al. [16] independently proposed deep autoencoders that provide an end-to-end solution for latent fingerprint enhancement. Qian et al. [17] proposed a densely connected UNet (DenseUNet) to produce a high-quality fingerprint patch, not the whole image; the network then iteratively enhances the whole image. Recently, Liu and Qian [18] introduced a Deep Nested UNet architecture, called DN-UNets, for latent fingerprint segmentation and enhancement. This network combines nested UNets with dense skip connections, transforming a whole latent fingerprint image into an enhanced image directly. On the other hand, some researchers have exploited the potential of generative adversarial networks (GANs) for latent fingerprint enhancement. Dabouei et al. [19] showed that a conditional GAN could reconstruct partial latent fingerprints. Liu et al. [20] introduced a cooperative orientation generative adversarial network (COOGAN) to transform latent fingerprint images using a shared representation of ridge enhancement and orientation features. Xu et al. [21] presented a GAN-based data augmentation in their network structure; the synthesized data help the network translate a latent fingerprint into an enhanced fingerprint effectively. Huang et al. [22] proposed a progressive GAN for learning the enhanced result and the orientation field, training both the generator and the discriminator progressively, growing from low resolution.
C. PROGRESSIVE FEEDBACK APPROACH
The progressive feedback approach was first introduced by Sutthiwichaiporn et al. [23]. The ABSF technique [24] fully exploited this approach for enhancing rolled and slapped fingerprints. Deerada et al. [25] applied this approach to improve the quality of latent fingerprints for reference point detection, and this method showed the potential of latent fingerprint enhancement. Srisutheenon et al. [26] then first applied this concept to latent fingerprint problems. They designed a simple and effective algorithm to shape a local matched filter: the filtering process first enhances the highest-quality block, and the enhanced block is then inserted back into the input image to improve the quality of the fingerprint spectra of the neighboring blocks. The drawback of this method is that the highest-quality block had to be selected manually with human assistance. Horapong et al. [27] combined the progressive feedback method [26] with a learning model in a two-stage design: the first stage iteratively applies a matched filter in the high-quality fingerprint region, and the second stage uses an autoencoder to predict filters in the low-quality region. These preliminary works led us to design a new framework with better performance.
III. PROPOSED METHOD
We introduce a new framework for solving the latent fingerprint enhancement problem. The new framework exploits a feedback mechanism for handling this complicated problem and consists of three main processes, A, B, and C, as shown in Fig. 1. The first process, A, finds the best locations for starting the subsequent enhancement sequence; the best locations should contain the clearest friction ridges in the input latent fingerprint image. The second process, B, adopts the progressive feedback mechanism of ABSF [24] and pushes it to another level. The goal of process B is to enhance the high-quality fingerprint blocks first and feed the enhanced fingerprint blocks back to improve the low-quality fingerprint areas nearby; this process gradually improves the quality of the latent fingerprint block-by-block until the entire fingerprint segment is enhanced. The third process, C, detects anomalous fingerprint patterns: it examines the enhanced results and pinpoints the abnormal locations in the enhanced image. Once anomalous blocks are detected, this process feeds the positions of the incorrectly enhanced blocks back to process B for re-enhancement, providing corrective feedback for better enhancement results. Each process is explained as follows.
A. PROCESS A: INITIAL BLOCK LOCALIZATION
Similar to previous works [24], the proposed algorithm needs to start at the best-quality genuine fingerprint location of an input latent fingerprint image, so that the following process can enhance and propagate the genuine spectrum to nearby areas and improve the overall quality of the enhanced output. However, if we start at a corrupted fingerprint location, the enhanced output may be irrelevant to the targeted fingerprint; moreover, the output may be contaminated by enhanced background and noise instead. In this work, instead of starting at only one block as in [24], [26], we propose an algorithm that can start at multiple blocks, which mitigates the problem of starting at a wrong location. Much of the success of the proposed algorithm depends on this process. The initial block localization process is composed of nine steps, shown in Fig. 2. We explain each step in detail as follows.
1) THE 1 ST STEP: BLOCK PARTITIONING
We partition an input image into non-overlapped blocks b(m, n) with a block size of 16 × 16 pixels, where b(m, n) is the block at row m and column n of this partitioning. We are interested only in the area of the manual segment of the input latent fingerprint image. Assume that b(m, n) ∈ BOI_1, where BOI_1 is the set of blocks of interest covering the manual segment of the input latent fingerprint image.
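A minimal NumPy sketch of this partitioning step is given below; how the paper handles image borders that do not fill a whole 16 × 16 block is not stated, so this sketch simply drops them.

```python
import numpy as np

def partition_blocks(img, size=16):
    """Partition a grayscale image into non-overlapped size x size blocks
    b(m, n). Trailing rows/columns that do not fill a whole block are
    dropped here (an assumption; the paper does not specify borders)."""
    h, w = img.shape
    rows, cols = h // size, w // size
    img = img[:rows * size, :cols * size]
    # Result shape: (rows, cols, size, size); blocks[m, n] is b(m, n).
    return img.reshape(rows, size, cols, size).swapaxes(1, 2)
```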
2) THE 2 ND STEP: SMOOTH INTENSITY REJECTION
The second step aims to remove blocks with smooth intensity from BOI_1. In general, blocks with high-quality fingerprints should have a wide range of intensity values (grayscale 0-255). We count the number of occurring intensity values for each block in BOI_1 and reject any block whose intensity-occurrence count is less than one-third of the maximum intensity-occurrence count in BOI_1. The residual blocks form BOI_2.
3) THE 3 RD STEP: INTENSITY OUTLIER REJECTION
The third step aims to remove blocks with too-dark or too-bright intensities from BOI_2. Blocks with too-dark intensity tend to be smudges, and blocks with too-bright intensity usually contain no fingerprint. Assume that the block b(m, n) ∈ BOI_2 and µ_b(m,n) is the average intensity of the pixels within this block. We calculate µ_b(m,n) for every b(m, n) in BOI_2 and measure the mean (µ_BOI2), the standard deviation (σ_BOI2), and the skewness (ϑ_BOI2) of all µ_b(m,n). The three rejection conditions are defined in terms of these statistics as follows.
The residual blocks after these rejections form BOI_3.
4) THE 4 TH STEP: VERY WEAK FINGERPRINT SPECTRUM REJECTION
The previous steps were carried out in the spatial domain; in the fourth step, we analyze fingerprints in the frequency domain. We use a 64 × 64 window that covers a 16 × 16 block of BOI_3 at its center and crop such a window from the input image at the corresponding location of each block in BOI_3. A Tukey window with a cosine fraction of 0.72 is multiplied with this 64 × 64 window to reduce blocking artifacts. Then we transform each window using a fast Fourier transform (FFT) to obtain a 64 × 64 spectral patch. In the frequency domain, fingerprint spectra are located in a ring-shaped band with an approximate radius of 5 to 13 frequency points for a 500-dpi fingerprint image resolution. We search for the highest spectral magnitude in this ring-shaped band; the maximum peak of the spectrum represents the potential fingerprint signal strength of the block. We then sort the maximum spectral magnitudes of all blocks in BOI_3 and set a rejection threshold at r percent (we use r = 25 percent in this experiment). Lastly, we reject the blocks whose highest spectral magnitude is lower than the rejection threshold. At this point, the blocks with solid fingerprint signals remain in BOI_4.
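The following NumPy sketch illustrates the ring-band peak search of this step; the Tukey tapering is assumed to have been applied upstream, and the window contents are placeholders.

```python
import numpy as np

def fingerprint_band_peak(window, r_min=5, r_max=13):
    """Sketch of the 4th step: FFT a 64x64 window (Tukey-tapered upstream)
    and return the highest spectral magnitude inside the ring-shaped
    fingerprint band of radius 5..13 frequency points (500-dpi images)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(window)))
    h, w = spec.shape
    cy, cx = h // 2, w // 2                      # zero frequency at center
    y, x = np.ogrid[:h, :w]
    radius = np.hypot(y - cy, x - cx)
    ring = (radius >= r_min) & (radius <= r_max)
    return spec[ring].max()
```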
5) THE 5TH STEP: OUT-OF-FINGERPRINT-BAND SPECTRUM REJECTION
This step continues in the frequency domain. If the maximum spectral magnitude outside the ring-shaped band is higher than the maximum spectral magnitude inside it, we reject the block, because spectral signals from the background or noise are then stronger than the fingerprint signal. The blocks that pass this rejection are in BOI5.
6) THE 6TH STEP: WEAK FINGERPRINT SPECTRUM REJECTION
This step is similar to the 4th step, except that we set the r threshold to 50 percent in this experiment. Assume that the blocks that pass this 50-percent rejection are in BOI6. If the total number of rejected blocks (BOI5 − BOI6) is greater than 60 percent of the total number of blocks previously in BOI5, we leave BOI6 untouched; otherwise, we return all blocks of BOI5 instead (BOI6 = BOI5 with no rejection). Moreover, we reject blocks whose 1st highest peak magnitude is less than or equal to 10 percent of the peak magnitude at zero frequency. All blocks that pass this 6th step are in BOI6.
7) THE 7TH STEP: OVERLAPPED FINGERPRINT SPECTRUM REJECTION
In this step we try to eliminate overlapped fingerprints. For each block in BOI6, we find the 2nd highest spectral peak in the fingerprint band and compare it to the 1st highest spectral peak. We calculate the dual-peak magnitude ratio R, which is the 2nd peak magnitude divided by the 1st peak magnitude, and the difference angle θ between the 1st peak and the 2nd peak. Blocks that tend to contain overlapped fingerprints must be rejected; a block is rejected if one of the following conditions is true:
- if (R > 0.5) and (θ > 30°), we reject this block,
- if (R > 0.75) and (θ > 20°), we reject this block,
- if (R > 0.8) and (θ > 10°), we reject this block.
All remaining blocks that pass these conditions are in BOI7.
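A minimal sketch of this dual-peak test is given below; the small suppression neighbourhood used to isolate the second peak is an assumption of the sketch.

```python
import numpy as np

def overlap_reject(mag_shifted, r_min=5, r_max=13):
    """Step 7 sketch: dual-peak ratio R and angular separation for one block.

    `mag_shifted` is the fft-shifted 64x64 magnitude spectrum of a block.
    Returns True when the block looks like an overlap of two fingerprints,
    using the (R, theta) thresholds quoted in the text. The 5x5 suppression
    neighbourhood around the first peak is an assumption of this sketch.
    """
    h, w = mag_shifted.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)
    band = np.where((radius >= r_min) & (radius <= r_max), mag_shifted, 0.0)

    y1, x1 = np.unravel_index(np.argmax(band), band.shape)
    p1 = band[y1, x1]
    rest = band.copy()
    rest[max(0, y1 - 2):y1 + 3, max(0, x1 - 2):x1 + 3] = 0.0       # suppress the 1st peak
    y1c, x1c = 2 * cy - y1, 2 * cx - x1                            # its conjugate-symmetric twin
    rest[max(0, y1c - 2):y1c + 3, max(0, x1c - 2):x1c + 3] = 0.0
    y2, x2 = np.unravel_index(np.argmax(rest), rest.shape)
    p2 = rest[y2, x2]

    R = p2 / p1 if p1 > 0 else 0.0
    ang = lambda y, x: np.degrees(np.arctan2(y - cy, x - cx)) % 180.0
    dtheta = abs(ang(y1, x1) - ang(y2, x2))
    dtheta = min(dtheta, 180.0 - dtheta)                           # fold to [0, 90] degrees

    return (R > 0.5 and dtheta > 30) or (R > 0.75 and dtheta > 20) or (R > 0.8 and dtheta > 10)
```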
8) THE 8TH STEP: HIGH CONTRAST EDGE SPECTRUM REJECTION
Sometimes strong spectral peaks inside the fingerprint band are not related to a fingerprint pattern; instead, high-contrast edges in the background cause these high peaks. In the frequency domain, fingerprint spectral peaks sit at the fingerprint's fundamental frequency and its harmonics. By contrast, the spectral shape of a high-contrast edge is a ''sinc'' function, whose main-lobe peak is at zero frequency and whose first and second sidelobe peaks of sinc(x) are located at x = 3π/2 and x = 5π/2, respectively. In the 64 × 64 spectral patch, the locations of the first and second sidelobe peaks depend on the transition width of the high-contrast edge. From our experiments, the first sidelobe peak of a high-contrast edge usually falls inside the fingerprint band. We can therefore distinguish high-contrast edge spectra from fingerprint spectra by detecting the second sidelobe peak: if we detect a second sidelobe peak in the same direction as the first one, we suspect that the spectra come from a high-contrast edge. To implement this step, we measure the frequency distance l (in frequency points) from the center (zero frequency) to the highest peak in the fingerprint band. If this peak is the first sidelobe peak of the sinc function, the second sidelobe peak should be at (5π/2)·l/(3π/2) ≈ 1.67l. We search for the second sidelobe peak at 1.67l in the same direction as the first sidelobe peak, within a range of ±1°. If the second sidelobe peak exists, we reject the block. The residual blocks are in BOI8.
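A rough sketch of this second-sidelobe test follows; the peak-existence criterion used below is a stand-in heuristic, not our exact rule, and the ±1° search is collapsed to a small neighbourhood.

```python
import numpy as np

def looks_like_edge_spectrum(mag_shifted, r_min=5, r_max=13):
    """Step 8 sketch: second-sidelobe test for high-contrast edge spectra.

    `mag_shifted` is the fft-shifted 64x64 magnitude spectrum. The candidate
    second sidelobe is probed at about 1.67 times the radius of the strongest
    in-band peak, in the same direction; the "peak exists" criterion (three
    times the global median) is an assumption of this sketch.
    """
    h, w = mag_shifted.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)
    band = np.where((radius >= r_min) & (radius <= r_max), mag_shifted, 0.0)

    y1, x1 = np.unravel_index(np.argmax(band), band.shape)
    l = np.hypot(y1 - cy, x1 - cx)                 # radius of the strongest in-band peak
    theta = np.arctan2(y1 - cy, x1 - cx)

    y2 = int(round(cy + 1.67 * l * np.sin(theta))) # expected second-sidelobe location
    x2 = int(round(cx + 1.67 * l * np.cos(theta)))
    if not (0 <= y2 < h and 0 <= x2 < w):
        return False
    neighbourhood = mag_shifted[max(0, y2 - 1):y2 + 2, max(0, x2 - 1):x2 + 2]
    return bool(neighbourhood.max() > 3.0 * np.median(mag_shifted))
```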
9) THE 9TH STEP: INITIAL BLOCKS CALCULATION
This step is the final step of process A. We cluster the residual blocks in BOI8 by grouping connected blocks using 8-neighbor block connectivity. Then we calculate the centroid of each cluster. If the centroid falls on a block b(m, n) inside the cluster, this block becomes one of the initial blocks for the following process B. If the centroid falls outside the cluster, we use the block nearest to the centroid as the initial block for the following process. Fig. 3 demonstrates examples of the residual blocks in BOI8 and the initial block locations from process A.
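A minimal sketch of this clustering and seed selection, assuming BOI8 is represented as a boolean block grid, is:

```python
import numpy as np
from scipy import ndimage

def initial_blocks(boi8_mask):
    """Step 9 sketch: cluster the surviving blocks and pick one seed per cluster.

    `boi8_mask` is a 2-D boolean array over the block grid (True = block in BOI_8).
    Returns a list of (row, col) block indices used as initial blocks.
    """
    structure = np.ones((3, 3), dtype=int)            # 8-neighbor connectivity
    labels, n_clusters = ndimage.label(boi8_mask, structure=structure)
    seeds = []
    for lab in range(1, n_clusters + 1):
        rows, cols = np.nonzero(labels == lab)
        cy, cx = rows.mean(), cols.mean()             # cluster centroid
        ry, rx = int(round(cy)), int(round(cx))
        if labels[ry, rx] == lab:                     # centroid falls inside the cluster
            seeds.append((ry, rx))
        else:                                         # otherwise take the nearest member block
            d2 = (rows - cy) ** 2 + (cols - cx) ** 2
            k = int(np.argmin(d2))
            seeds.append((int(rows[k]), int(cols[k])))
    return seeds
```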
B. PROCESS B: PROGRESSIVE FEEDBACK LATENT FINGERPRINT ENHANCEMENT
Process B is designed to enhance the targeted latent fingerprint block-by-block, starting from the initial blocks provided by process A. The concept of this process is similar to the ABSF technique [24]. However, the ABSF technique is designed to handle rolled and slap fingerprints, which contain little noise and no complex background. Therefore, we need to modify the ABSF technique to handle latent fingerprints with background noise. Process B can be divided into three sub-processes: total variation decomposition, initial block enhancement, and iterative block enhancement and feedback. Each sub-process is explained in detail as follows.
1) TOTAL VARIATION DECOMPOSITION
The total variation (TV) decomposition is required to reduce high-contrast edges in the input latent fingerprint image, since such edges create substantial interference within the fingerprint band in the frequency domain. We use TV minimization [28] to decompose the image into cartoon and texture components, with an anisotropic TV regularization (L1-norm). The cartoon-component image f_c(x, y) is obtained from the input latent fingerprint image f(x, y) by solving the TV minimization problem in (1), where λ is a regularization parameter set to 0.45 based on our empirical results. TV_ani(f_c(x, y)) is the 2-D anisotropic total variation of the image, that is, the sum over all pixel locations (x, y) of the L1-norms of the first-order forward finite differences along the horizontal direction (D_x) and the vertical direction (D_y). In this work, the maximum number of iterations for solving the TV minimization is set to 20. Lastly, the texture-component image is obtained by subtracting the cartoon component from the input image, f_t(x, y) = f(x, y) − f_c(x, y).
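A compact sketch of this decomposition is given below; since the anisotropic L1 solver is not reproduced here, the sketch substitutes scikit-image's Chambolle (isotropic) TV denoiser, so its weight only loosely plays the role of λ = 0.45 and the 20-iteration cap is not enforced.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def cartoon_texture_split(f, weight=0.45):
    """Cartoon/texture split sketch.

    The text solves an anisotropic (L1) TV minimisation with lambda = 0.45
    and at most 20 iterations; as a stand-in, this sketch uses Chambolle's
    (isotropic) TV denoiser, so the weight is only loosely comparable.
    """
    f = np.asarray(f, dtype=np.float64) / 255.0
    f_cartoon = denoise_tv_chambolle(f, weight=weight)   # cartoon component f_c
    f_texture = f - f_cartoon                             # texture component f_t = f - f_c
    return f_cartoon, f_texture
```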
2) INITIAL BLOCK ENHANCEMENT
Given the initial blocks from process A, we need to enhance these blocks first, because the following sub-processes rely on the correctness of these initial block enhancements. Firstly, we divide the texture-component image f_t(x, y) into 16 × 16 non-overlapping blocks; this partitioning is the same as in the first step of process A. Assume that b(m, n) is one of the initial blocks provided by process A. With the initial block at the center, we crop a 128 × 128 window f_w_b(m,n)(x, y) from f_t(x, y), as in (4). This window size of 128 × 128 is empirically appropriate for fingerprint spectral analysis in the frequency domain. We also apply a Gaussian window g_σ(x, y) with a standard deviation of 16 pixels (σ = 16); the Gaussian windowing prevents discontinuity of the signal at the window boundary. Then we take the FFT of this window to obtain the 128 × 128 spectral patch coefficients at each frequency point (u, v) of the signal covering the initial block b(m, n), where F{·} denotes the FFT operator. Next, we build a matched filter from this 128 × 128 spectral patch. Because process A selects the initial block from the high-quality blocks of the input latent fingerprint image, we can exploit the high-quality spectra of this latent fingerprint. Assume that the highest spectral magnitude within the fingerprint band, whose radius frequencies for a 128 × 128 spectral patch are between 10 and 22 frequency points, has been found. Then we select the absolute spectral magnitudes greater than or equal to half of this highest magnitude within the fingerprint band as the matched filter, as shown in (7).
In the design, we smooth the matched filter to increase its bandwidth by convolving it with a Gaussian smoothing filter with a standard deviation of 2.75, as in [26], [27]; the magnitude of the resulting matched filter is given by (8). Then we multiply the magnitude of the spectral patch by the magnitude of the matched filter and raise the result to the power of 1.25, which gives the boosted spectral magnitude. Finally, we take a 128 × 128 inverse fast Fourier transform of the boosted spectral magnitude combined with the original phase of its corresponding spectral patch, as shown in (10).
In (10), the phase is the original phase of the spectral patch, F^{-1}{·} is the inverse FFT operator, and f̂_w_b(m,n)(x, y) is the enhanced window with b(m, n) at its center. We then crop the enhanced block (16 × 16 pixels) from the center of the enhanced window (128 × 128 pixels). The enhanced block is placed on two images at the location of the original block b(m, n): one is the enhanced image f_e(x, y), and the other is the feedback image f̂_t(x, y). The enhanced image has the same size as the original latent image and is initialized with all zero-valued pixels; we insert the enhanced block into the enhanced image as in (11). The feedback image is obtained by modifying the texture-component image from the TV decomposition: the intensity of the enhanced block is scaled to the range [−1, 1], and this scaled block is inserted into the texture-component image as in (12). The feedback image f̂_t(x, y) is the input image for the next sub-process. We call it the feedback image because it contains the initial enhanced block, which can improve the fingerprint quality of the neighboring blocks in the next sub-process.
For the other initial blocks, we perform the same initial block enhancement sub-process using (4) through (12) in parallel. The other initial enhanced blocks are likewise placed back into both the enhanced image by (11) and the feedback image by (12) simultaneously. Fig. 4(b) demonstrates this action.
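For illustration, a compact sketch of the initial-block enhancement under the stated parameters (σ = 16 taper, 10-22 frequency-point band, half-peak selection, σ = 2.75 smoothing, exponent 1.25) is given below; treating the matched filter as a smoothed binary mask and the boundary handling are our own interpretation rather than the exact implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_initial_block(f_t, row, col, block=16, win=128, r_min=10, r_max=22):
    """Initial-block enhancement sketch in the spirit of (4)-(10).

    f_t      : texture-component image (float array), assumed padded so the
               128x128 crop stays inside it.
    row, col : block indices of the initial block b(m, n).
    """
    cy, cx = row * block + block // 2, col * block + block // 2
    half = win // 2
    patch = f_t[cy - half:cy + half, cx - half:cx + half]

    yy, xx = np.mgrid[0:win, 0:win] - half
    taper = np.exp(-(xx ** 2 + yy ** 2) / (2 * 16.0 ** 2))          # Gaussian window, sigma = 16
    spec = np.fft.fftshift(np.fft.fft2(patch * taper))
    mag, phase = np.abs(spec), np.angle(spec)

    radius = np.hypot(yy, xx)
    band = (radius >= r_min) & (radius <= r_max)
    peak = mag[band].max()
    matched = gaussian_filter(((mag >= 0.5 * peak) & band).astype(float), sigma=2.75)

    boosted = (mag * matched) ** 1.25                                # spectral boosting
    enhanced = np.real(np.fft.ifft2(np.fft.ifftshift(boosted * np.exp(1j * phase))))
    b0 = half - block // 2
    return enhanced[b0:b0 + block, b0:b0 + block]                    # central 16x16 enhanced block
```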
3) ITERATIVE BLOCKS ENHANCEMENT AND FEEDBACK
In this sub-process, we iteratively enhance the feedback image f̂_t(x, y) block-by-block, starting from the blocks surrounding the initial blocks from the previous sub-process. Our concept is to enhance the blocks with a genuine fingerprint spectrum first. Once we put these enhanced blocks back into the feedback image, the enhanced fingerprint spectra can improve the weak fingerprint spectrum of the nearby blocks in the next iteration. This sequence is one of the crucial concepts of the proposed algorithm, as shown in Fig. 4(c)-(g).
The enhancement sequence in this sub-process differs from the previous sub-process in two aspects. First, we use a 64 × 64 FFT window instead of the 128 × 128 FFT window; the smaller spectral patches are more suitable for enhancing weak latent fingerprints in noisy areas. Second, the matched filter in (8) is applied only if the peak of the fingerprint spectrum is strong enough; otherwise, an ideal bandpass filter is used to preserve the original spectra instead. The magnitude of the ideal bandpass filter is defined in (13). Note that the fingerprint spectra range from 5 to 13 frequency points for the 64 × 64 FFT window.
We use the same measure as in our preliminary works [26], [27]. The spectral peak ratio (SPR) is defined in terms of p_1 and p_2, the first and second highest peaks of the spectral magnitude in the fingerprint band. From our experiments, we found that the first peak usually represents the potential fingerprint spectrum, and the second peak represents other spectra such as sharp edges, noise, or background. In the best case, if p_2 is equal to zero, the SPR is equal to one; in the worst case, if p_1 is equal to p_2, the SPR is equal to 0.5. Hence the SPR ranges from 0.5 to 1. Based on the SPR value, we divide the strength of the fingerprint signal of each block into three classes, strong, moderate, and weak, and we divide this sub-process into three tiers based on the three threshold levels shown in Table 2. In the first tier we aim to enhance all blocks with an intense fingerprint spectrum. The sequence of block enhancement begins with the nearest neighbors of the initial enhanced blocks. We calculate the nearest neighbors using the Euclidean distance transform in [29] and arrange the enhancement priority of the neighboring blocks in ascending order of distance [29]. Hence the nearest neighboring blocks, attached to the four sides of an initial enhanced block, are enhanced first using (4) through (10), except that the feedback image f̂_t(x, y) is used instead of the texture-component image f_t(x, y) in (4), and the 64 × 64 FFT window is used instead of the 128 × 128 FFT window in (4). We calculate the SPR of each block and compare it with a threshold η = 0.67. If the SPR is greater than or equal to the threshold (SPR ≥ 0.67), we enhance the block using the matched filter (8); otherwise, we apply the ideal bandpass filter in (13) to keep the fingerprint spectra for the next tier. We enhance all four blocks in parallel and then insert all four enhanced blocks into the feedback image simultaneously, as in (12). However, only the enhanced blocks that passed the matched filter are placed into the enhanced image (11). Next, the four corner blocks of the initial enhanced block are selected for enhancement, as shown in Fig. 4. The sequence repeats until all blocks in the given manual fingerprint segment are enhanced; then the first tier is finished.
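A minimal sketch of the SPR computation follows. The formula SPR = p_1/(p_1 + p_2) is used because it reproduces the quoted boundary cases (SPR = 1 when p_2 = 0, SPR = 0.5 when p_1 = p_2); it is a working assumption, not necessarily the exact definition, and the suppression window around the first peak is likewise an assumption.

```python
import numpy as np

def spectral_peak_ratio(mag_shifted, r_min=5, r_max=13):
    """SPR sketch for a 64x64 fft-shifted magnitude spectrum."""
    h, w = mag_shifted.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - cy, xx - cx)
    band = np.where((radius >= r_min) & (radius <= r_max), mag_shifted, 0.0)

    y1, x1 = np.unravel_index(np.argmax(band), band.shape)
    p1 = band[y1, x1]
    rest = band.copy()
    rest[max(0, y1 - 1):y1 + 2, max(0, x1 - 1):x1 + 2] = 0.0   # mask the 1st peak
    p2 = rest.max()
    return p1 / (p1 + p2) if (p1 + p2) > 0 else 0.5

# Tier thresholds used in the text: eta = 0.67 (strong), 0.6 (moderate), 0.5 (weak).
```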
For the second tier, we repeat this sub-process for the moderate-spectrum blocks using (4) through (12), changing the threshold η to 0.6; this sub-process is otherwise similar to the first tier. Finally, we repeat the sub-process in the third tier with the threshold changed to η = 0.5. At the third tier, we output all enhanced blocks to the enhanced image (11). The enhanced image is the final output of process B.
C. PROCESS C: ANOMALOUS FINGERPRINT PATTERN DETECTION
The enhanced fingerprint image from process B may have some defects, because strong spectra from noise and background may overcome the weak latent fingerprint spectra. We need to detect anomalous fingerprint patterns in the enhanced image and correct these errors. Process C detects the locations of anomalous blocks in the enhanced image from process B. These anomalous block positions are then sent back to process B as corrective feedback for our fine-tuning enhancement. In this process, we employ a hierarchical autoencoder to measure the quality of the spectral magnitudes of fingerprint patterns. We split this process into four parts: (1) fingerprint spectral autoencoder architectures, (2) training of fingerprint spectral autoencoders, (3) data preparation for spectral autoencoder training, and (4) anomalous fingerprint pattern detection. We explain each part as follows.
1) FINGERPRINT SPECTRAL AUTOENCODER ARCHITECTURES
We design two networks to capture fingerprint spectrum patterns in the frequency domain. The first network, called the locally spectral autoencoder, aims to learn good fingerprint spectral shapes from a 64 × 64 spectral patch, where the spectral patch is the magnitude of the FFT coefficients. Hence this network can estimate the local fingerprint spectrum shape within a 64 × 64 FFT window. Note that the original block size for enhancement is 16 × 16 pixels in the spatial domain; the block is extended to a 64 × 64 FFT window in the frequency domain. The second network, called the regionally spectral autoencoder, is hierarchical: it combines nine locally spectral autoencoders for its inputs and outputs. This network learns from 3 × 3 fingerprint spectral patches; the nine neighboring spectral patches contain the regional ridge-flow patterns of a fingerprint. Hence this network can estimate and predict regional fingerprint spectrum patterns in the frequency domain. Note that the 3 × 3 spectral patches are extracted from 3 × 3 enhanced blocks in the spatial domain; similarly, each enhanced block of 16 × 16 pixels is extended to a 64 × 64 FFT window.
The locally spectral autoencoder architecture comprises three fully connected hidden layers, as shown in Fig. 5(a). The first and third hidden layers contain 1,024 nodes, and the innermost layer has a code size of 512. The inputs of this autoencoder are the spectral magnitudes of the 64 × 64 spectral patch. Because the 2-D FFT of a real image is conjugate symmetric, we select only the top-half spectral magnitudes from the 4,096 2-D FFT coefficients as the input vector: we combine the 2,048 coefficients from the top half of the spectral patch with an additional 33 coefficients from the next half horizontal line of the spectral patch. In other words, only the spectral magnitudes of 2,081 out of 4,096 2-D FFT coefficients are chosen as input nodes. This input size reduction decreases the computational complexity and training time of the autoencoder. We rearrange the 2,081 spectral magnitudes into a 1-D vector of length 2,081. The first network layer reduces the input size from 2,081 to 1,024, and the second layer compresses the 1,024 values into a code of size 512. The decoder reverses the compression process and reconstructs the 1-D vector of 2,081 spectral magnitudes, so the reconstructed spectral patch is estimated at the output of this autoencoder.
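As an illustration of the layer sizes only, a minimal sketch of this architecture is given below; the original networks are sparse autoencoders trained in MATLAB with a positive saturating linear encoder, so the PyTorch framework, the ReLU/linear activations and the absence of the sparsity penalty here are assumptions.

```python
import torch
import torch.nn as nn

class LocallySpectralAE(nn.Module):
    """Sketch of the locally spectral autoencoder: 2,081 -> 1,024 -> 512 -> 1,024 -> 2,081."""
    def __init__(self, in_dim=2081, h1=1024, code=512):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, h1), nn.ReLU(),
            nn.Linear(h1, code), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code, h1), nn.ReLU(),
            nn.Linear(h1, in_dim),            # linear reconstruction of the 2,081 magnitudes
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# The regionally spectral autoencoder then maps the 9 x 512 = 4,608 concatenated
# local codes through a single 1,024-dimensional hidden code and back.
```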
The regionally spectral autoencoder architecture has only one hidden layer with a code size of 1,024, as shown in Fig. 5(b). The input nodes come from 3 × 3 fingerprint spectral patches, encoded to 9 × 512 = 4,608 values using the nine encoders of the locally spectral autoencoders. We rearrange the nine 512-length codes obtained from each encoder into an input vector following the order shown in Fig. 5(b); each encoder output is concatenated into a 1-D vector of length 4,608. The decoder reverses the encoder process to reconstruct the 3 × 3 fingerprint spectral patches in the same manner.
2) TRAINING OF FINGERPRINT SPECTRAL AUTOENCODERS
The locally spectral encoder-decoder learning uses input-output pairs of two corresponding 64 × 64 spectral patches, as shown in Fig. 5(a). We obtain the spectral patch for input training from the magnitude of the FFT coefficients of an original high-quality fingerprint block of 64 × 64 pixels. We obtain the corresponding modified spectral patch for output training from a spectral patch modification process. In this process, the enhanced block whose location corresponds to the input patch is extracted from an enhanced fingerprint image. We apply VeriFinger 10.0 [30] to enhance the high-quality fingerprint images, crop the corresponding enhanced fingerprint block (64 × 64 pixels) from this enhanced image, and perform the 2-D FFT to obtain the enhanced spectral patch. The enhanced spectral patch is converted into a modified spectral patch in three steps; Fig. 6 shows an example of this conversion. Firstly, we eliminate the center-frequency magnitude (zero frequency) by placing zeroes at the four center points of the 64 × 64 enhanced spectral patch, as shown in Fig. 6(b). Secondly, the foreground spectra are segmented using the Chan-Vese segmentation algorithm [31]; the initial boundary is a square at the boundary of the spectral patch, and we set the maximum number of iterations to 300. As a result of this active contour segmentation, we obtain several spectral objects, as shown in Fig. 6(c). Thirdly, we keep only the strong fingerprint spectral object closest to the center (zero frequency) and reject undesired spectra (harmonic spectra) that are far away from the center. To achieve this, we calculate the Euclidean distance between the center and each object's peak and keep the spectral object with the minimum distance to the center, as shown in Fig. 6(d). Finally, the modified spectral patch comprises the original spectral magnitude and the retained spectral object magnitude, divided by two. Fig. 6 reveals the step-by-step results of the spectral patch modification process.
FIGURE 6. Step-by-step results of the spectral patch modification process for locally spectral autoencoder training.
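A minimal sketch of this spectral-patch modification follows; the 300-iteration budget and square initialisation described above are replaced by scikit-image defaults, and the normalisation before segmentation is our assumption.

```python
import numpy as np
from scipy import ndimage
from skimage.segmentation import chan_vese

def modify_spectral_patch(enhanced_block_64):
    """Zero the DC area, segment the spectral objects with Chan-Vese, keep the
    object whose peak is closest to the centre, and average with the original."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(enhanced_block_64)))
    spec[31:33, 31:33] = 0.0                              # zero the four centre (DC) points

    mask = chan_vese(spec / (spec.max() + 1e-12))         # active-contour segmentation (defaults)
    labels, n = ndimage.label(mask)

    best, best_d = None, np.inf
    for lab in range(1, n + 1):
        obj = np.where(labels == lab, spec, 0.0)
        py, px = np.unravel_index(np.argmax(obj), obj.shape)
        d = np.hypot(py - 32, px - 32)                    # distance of the object's peak to DC
        if d < best_d:
            best, best_d = obj, d

    return 0.5 * (spec + best) if best is not None else spec
```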
We use two levels of sparse autoencoder with 1,024 and 512 hidden neurons for the locally spectral autoencoder. We employ the same method as the stacked denoising autoencoder architecture [32], and we train the network on pairs of spectral patches in a supervised manner. The loss function is a mean squared error function with two regularization terms. We implement the training in the MATLAB toolbox [33], setting the coefficient of the l2-norm regularization to 0.01 and the sparsity regularization to 4. We choose a scaled conjugate gradient algorithm [34] to update the weights and bias values during training. The encoder transfer function is a positive saturating linear function, and the decoder transfer function is linear.
The regionally spectral encoder and decoder learn from encoded vectors extracted from 3 × 3 spectral patches, as shown in Fig. 5(b). The input vector is a concatenation of the 512-length encoded vectors from the nine locally spectral autoencoders. The resulting 4,608-length vector is used as both the input and the output for training the network. We use another sparse autoencoder to learn the essential features in an unsupervised manner, with 1,024 hidden neurons in this layer. The other training parameters are the same as for the locally spectral autoencoder.
3) DATA PREPARATION FOR SPECTRAL AUTOENCODER TRAINING
We extract the high-quality fingerprint blocks from the NIST-SD4 database [35]. This database contains 4,000 rolled fingerprints stored as 8-bit grayscale images; each image is 512 × 512 pixels at 500 dpi, stored in a modified lossless JPEG format. Most images in this database have a friction-ridge structure clear enough for training the proposed networks. We use the ''mindtct'' function in the NBIS software package [36] to collect high-quality fingerprint blocks. We randomly choose the blocks whose averaged quality value over 2-by-2 local cells [37] is greater than 3.75, except around the core and delta points, since these are rare data. In addition, we extract the core and delta points of the NIST-SD4 images using the VeriFinger 10.0 extractor [30]. Fig. 7(a) shows example pairs of 3 × 3 spectral patches from corresponding original/enhanced pairs, and Fig. 7(b) shows example pairs of individual spectral patches from fingerprint/enhancement pairs in our training dataset.
Our work creates 51,350 pairs of 3 × 3 spectral patches and 241,107 pairs of individual spectral patches for network training. In particular, the training spectral patches for the friction-ridge area are taken from several directions distributed around the core point; we collect these spectral patches from eight sectors around the core point. Moreover, we apply an MCAR [38] setting to the regionally spectral autoencoder dataset during training, aiming to improve the ability of the regionally spectral autoencoder to deal with missing spectral patches; the number of missing spectral patches is randomly selected from 1 up to 4 during training. Finally, we split the entire dataset into training, validation, and testing sets in the ratio 70:15:15. Fig. 8 visualizes a set of bases learned by our network model and reveals how the bases attempt to capture the fingerprint spectral peaks in various patterns. The advantage of the proposed autoencoder is that these spectral patches are sparse in the frequency domain and can represent patterns of friction ridges in the spatial domain.
4) ANOMALOUS FINGERPRINT PATTERN DETECTION
We apply the regionally spectral autoencoder for anomalous fingerprint pattern detection. The enhanced image from process B is the input of this process C. We analyze the enhanced image block-by-block of 16 × 16 pixels inside the segmented area. Each targeted block and its eight neighboring blocks form a group of 3 × 3 enhanced blocks, as shown in Fig. 9. With each block at the center, the block size is extended to a window size of 64 × 64 pixels to cover the nearby enhanced area. Each spectral patch can be obtained by calculating the magnitudes of the FFT coefficients of each window. Finally, we obtain 3 × 3 overlapped spectral patches from each targeted block, as shown in Fig. 9. These 3 × 3 spectral patches are an input of the regionally spectral autoencoder.
As shown in Fig. 9, we generate nine groups of 3 × 3 spectral patches for the targeted block and its eight neighboring blocks. The spectral patch of the targeted block is located at a different position in each group, resulting in a different prediction result. Nine groups of 3×3 spectral patches are fed into the regionally spectral autoencoder. Here we obtain nine output spectral patches of the same targeted block from nine output groups. Then we average these nine output spectral patches of the same targeted block to predict the spectral patch of the enhanced block, as shown in Fig. 9.
We use the Jensen-Shannon divergence [39] to measure the difference between the enhanced spectral patch and the predicted spectral patch of the same targeted block. The Jensen-Shannon divergence is a symmetrized version of the Kullback-Leibler divergence, D_JS(m,n)(S_E, S_P) = 0.5 D_K(m,n)(S_E ∥ S_M) + 0.5 D_K(m,n)(S_P ∥ S_M), where S_E and S_P are the enhanced spectral patch and the predicted spectral patch of the same targeted block b(m, n), respectively, and S_M = 0.5(S_E + S_P) is the arithmetic mean of the two spectral patches. D_K(m,n)(S_E ∥ S_P) is the Kullback-Leibler divergence between the two spectral patches S_E and S_P of the targeted block b(m, n), given by the sum of S_E(i, j) log(S_E(i, j)/S_P(i, j)) over all frequency points (i, j) in both spectral patches. We convert the divergence between the enhanced and predicted spectral patches into a similarity score SS_FSP(m, n) at the targeted block position (m, n) in the manual fingerprint segment (BOI1). We calculate the minimum divergence over all blocks inside the manual fingerprint segment, so that the maximum similarity score is one. Note that the higher the similarity score, the lower the divergence between the enhanced and predicted spectral patches. With this similarity score, we can distinguish between normally and abnormally enhanced blocks: enhanced blocks whose similarity score is less than a threshold level are classified as abnormally enhanced blocks, and the others are classified as normally enhanced blocks. We set a variable threshold η_FSP(k) depending on the number of refinement iterations k, where k varies in the range 1 ≤ k ≤ 5; the threshold level is gradually decreased (increasing the risk of false negatives) with each ongoing refinement iteration, in terms of the mean value µ_SS and the standard deviation σ_SS of all similarity scores in the fingerprint segment. After each refinement iteration is complete, the positions of the abnormally enhanced blocks are sent back to process B instead of the initial block locations from process A. We assign all positions of normally enhanced blocks as new initial blocks, with no further enhancement for these blocks. The input image for the next refinement iteration is composed of the enhanced blocks at the locations of normally enhanced blocks and the texture-component image at the locations of abnormally enhanced blocks. We then repeat process B and process C until k reaches the assigned number of refinement iterations. The final output is the enhanced image.
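A minimal sketch of the divergence computation for one targeted block follows; normalising both patches to probability distributions is our assumption, and the conversion into SS_FSP and the iteration-dependent threshold η_FSP(k) are described only qualitatively above, so they are not reproduced here.

```python
import numpy as np

def js_divergence(spec_enhanced, spec_predicted, eps=1e-12):
    """Jensen-Shannon divergence between two spectral patches of a block."""
    p = spec_enhanced / (spec_enhanced.sum() + eps)
    q = spec_predicted / (spec_predicted.sum() + eps)
    m = 0.5 * (p + q)

    def kl(a, b):
        nz = a > 0
        return float(np.sum(a[nz] * np.log(a[nz] / (b[nz] + eps))))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)   # symmetrised Kullback-Leibler divergence
```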
IV. EXPERIMENTAL RESULTS
We perform benchmark tests with two automatic fingerprint identification systems (AFIS): a commercial off-the-shelf (COTS) system, VeriFinger 10.0, and an open-source minutiae-based system built from the NBIS MINDTCT extractor and the MCC SDK 2.0 matcher. We experiment on two public latent fingerprint databases, NIST-SD27 [42] and IITD-MOLF DB4 [43]. Even though the NIST-SD27 dataset has been withdrawn by NIST, it remains a fundamental benchmark choice because of its rich published enhancement results. Latent fingerprint images in this dataset were collected from real solved cases, and the dataset contains latent fingerprints of different qualities on various complex backgrounds. On the other hand, the IITD-MOLF DB4 dataset is currently available but has limited published enhancement results; latent fingerprints in this dataset usually lie on a clear background. With these two databases, we can evaluate the performance of latent fingerprint enhancement algorithms in two different environments.
A. BENCHMARK TESTS ON NIST-SD27 DATASET
The NIST-SD27 database contains 258 latent fingerprint images from actual crime scenes together with their corresponding ten-print image pairs. Latent fingerprints in this database are classified into three quality classes: good (88 images), bad (85 images), and ugly (85 images). For a more realistic scenario, we include the NIST-SD14 database [44] as an additional background database to extend the fingerprint gallery in our latent fingerprint identification experiments. We use only the 27,000 images from file cards (with prefix f-), which form the first subset of NIST-SD14. As a result, a total of 27,258 fingerprints are available for identification testing on the NIST-SD27 database.
1) BENCHMARK TEST WITH COTS VERIFINGER 10.0
We use the Cumulative Matching Characteristic (CMC) curve, which reports identification rate versus ranking, as the performance metric for our benchmark tests. We start the performance testing by probing the enhanced images obtained from each enhancement algorithm against the COTS VeriFinger 10.0 system; the CMC curve is calculated from the ranking output of this COTS matcher. Fig. 10 shows the CMC curves from the COTS VeriFinger 10.0, and Table 3 reports the exact identification rates for rank-1, -5, -10, -20, and -30.
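For completeness, a minimal sketch of how a CMC curve is computed from matcher rankings is given below (the variable names are illustrative only).

```python
import numpy as np

def cmc_curve(mate_ranks, max_rank=30):
    """CMC sketch: identification rate at ranks 1..max_rank.

    `mate_ranks` holds, for each probe latent, the rank of its true mate in
    the gallery as returned by the matcher (1 = best match).
    """
    mate_ranks = np.asarray(mate_ranks)
    ranks = np.arange(1, max_rank + 1)
    rates = np.array([(mate_ranks <= r).mean() for r in ranks])
    return ranks, rates

# Example: ranks, rates = cmc_curve([1, 3, 1, 12, 2]); rates[0] is the rank-1 rate.
```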
The proposed algorithm outperforms most state-of-the-art algorithms except in the good-quality case. It achieves the best accuracy for rank-1 and rank-30. However, it is inferior to our preliminary work, the Semi-Prgs algorithm [26], for rank-5, rank-10, and rank-20. The reason is that the Semi-Prgs algorithm relies on human assistance to choose the initial block location, whereas the proposed algorithm performs automatic initial block localization; starting at the right location is thus crucial for the proposed algorithm to achieve high accuracy. FIGURE 10. Comparison of CMC curves of eight latent fingerprint enhancement algorithms using COTS VeriFinger 10.0 for both minutiae extractor and matcher. All 258 latent fingerprints from the NIST-SD27 database are probed. The database background comprises the corresponding 258 rolled fingerprints from the NIST-SD27 database and the 27,000 fingerprints from the NIST-SD14 database.
2) BENCHMARK TEST WITH MCC 2.0
In this experiment, we focus on minutiae detection and matching using open-source algorithms. The enhanced latent fingerprints are sent to the MINDTCT minutiae extractor from NBIS SDK 5.0.0 [36] to create minutiae templates, and the template matching is performed by the MCC SDK 2.0 [41]. Fig. 11 shows the CMC curves from this open-source matching system, and Table 4 reports the identification rates from rank-1 to rank-30. Our proposed method still outperforms most algorithms in the overall case. However, for ranks greater than 20, the DN-Unets [18] achieves the best accuracy.
From the experiments with two AFISs, we found that different AFISs provide different identification results; changing the AFIS alters the performance ranking of the latent enhancement algorithms. For the good-quality case, our proposed method is inferior to a few published algorithms under the COTS matcher but is the best under the minutiae-based matcher, and the opposite happens for the bad-quality case. Overall, the proposed method provides latent enhanced results that are robust across different AFISs.
B. BENCHMARK TESTS ON IITD-MOLF DATASET
The IITD-MOLF DB4 database [43] contains 4,400 latent fingerprint images from all ten fingers of 100 persons. Each finger has at least one and up to five images, depending on the recording session. We treat each latent fingerprint image as an individual query fingerprint, so we probe 4,400 unique queries in this test. For the corresponding ten-print image pairs, we refer to the IITD-MOLF DB3_A database [42], which has 4,000 live-scan slap fingerprint images captured by the CrossMatch L-Scan Patrol sensor for the same 1,000 fingers. Combined with the 27,000 fingerprints from NIST-SD14, we have a gallery of 31,000 fingerprints for the identification tests.
In this benchmark test, we obtained enhanced results only from three published algorithms: SpectralDict [13], FingerNet [14], and 2-Stage-Prgs [27]. Unfortunately, the state-of-the-art authors [18] could not provide us with enhanced results on this database, so we cannot include results from [18] in this evaluation. Because [13] and [27] are our previous works, we can reproduce their enhanced results from our source codes; for [14], we use the released code [45] to enhance the latent fingerprint images. Note that the enhanced results from [13], [27], and the proposed algorithm use the manual segments from [27], while the enhanced results from [14] use their automatic segments. We do not have enhanced results from [26] on this dataset because it requires manually selected initial block locations for 4,400 images.
1) BENCHMARK TEST WITH COTS VERIFINGER 10.0
Using COTS VeriFinger 10.0, we plot the CMC curves for the IITD-MOLF DB4 database in Fig. 12(a), and the identification rates versus ranking are reported in Table 5. Our proposed method and our preliminary work [27] outperform the deep learning FingerNet [14] by approximately 5% in identification accuracy, a significant margin.
2) BENCHMARK TEST WITH MCC 2.0
Using the MINDTCT minutiae extraction [36] and the MCC minutiae matcher [41], we obtain the CMC curves for the IITD-MOLF DB4 database in Fig. 12 (b). The identification rates versus ranking are shown in Table 6.
The identification results are comparable. The FingerNet slightly outperforms our proposed method for rank-1 and rank-5. Nevertheless, our proposed method gains better performance for the rank-10, rank-20, and rank-30.
C. ABLATION STUDY ON THE REFINEMENT ITERATION
We perform an ablation study of the refinement iteration in process C, the anomalous fingerprint pattern detection. In this process, we set the maximum number k of refinement iterations to five. Without feedback, i.e., with no iteration (k = 0), we activate only process A and process B without process C; this case is called ''feedforward.'' Fig. 13 demonstrates the CMC curves obtained by varying the number of feedbacks or refinement iterations, k = 0, 1, 2, ..., 5. As in the previous experiments, we run the same benchmark tests with two AFISs and two latent fingerprint databases. The experimental results show that the refinement iteration of process C always improves the performance of the proposed algorithm. However, increasing the number of refinement iterations does not guarantee better performance. For example, as shown in Fig. 13(a), using COTS VeriFinger with the NIST-SD27 database, the best performance is obtained at k = 2, and the result for k = 5 is inferior to k = 2 or 3. For the other cases, as shown in Fig. 13(b), (c), and (d), the proposed method tends to gain better performance as k increases. Hence, we use k = 5 as our best-performance setting for benchmarking against the other algorithms, as shown in Figs. 10, 11, and 12. Note that our preliminary work, the Semi-Prgs method [26], is similar to process B without process A; the difference is that in [26] the initial block location for starting process B must be chosen manually. Fig. 14 demonstrates why increasing the number of refinement iterations may result in better or worse enhanced results. Fig. 14(a) shows some successful cases: the proposed algorithm can remedy some mistakes from the feedforward pass or previous feedback iterations, although the later iterations cannot further improve the enhanced results. On the other hand, Fig. 14(b) demonstrates unsuccessful cases. The proposed algorithm is confused by overlapped latent fingerprints in the B146 image from NIST-SD27. Another failure is caused by the large missing area of friction ridges in the 23_L_6_1 image from IITD-MOLF; the enhancement goes wrong and cannot recover from the corrective feedback. These examples illustrate why more iterations may not yield the best result.
D. VISUAL INSPECTION AND COMPARISON
To understand the strengths and weaknesses of the proposed algorithm, we visually inspect and compare the enhanced results. We select enhanced results from the top-5 algorithms in Table 3 and the top-4 algorithms in Table 5 for visual comparison. Fig. 15 shows two latent fingerprint examples from the NIST-SD27 database. The B180 image in Fig. 15(a) contains a bad-quality latent fingerprint deposited on a knife blade. VeriFinger 10.0 identifies three enhanced results at the first rank and the other two within the third rank, while MCC 2.0 provides more ranking differentiation; our proposed algorithm achieves the best MCC ranking for this image. Most algorithms suffer from the combination of the latent fingerprint with the strong edge of the blade. Both FingerNet [14] and DN-UNets [18] reconstruct some fake ridges due to their over-segmentation around the actual latent fingerprint boundary. Our proposed result can enhance the unclear ridges in the upper-left zone, which our preliminary work, Semi-Prgs [26], fails to enhance.
The U270 image in Fig. 15(b) is an ugly-quality example from the NIST-SD27 database and contains unclear ridges around the core point. Rankings from VeriFinger 10.0 are the same for all algorithms, but rankings from MCC 2.0 differ. The proposed algorithm preserves the ridges around the core point area, whereas most algorithms cannot correctly reconstruct ridges around singular point areas. Fig. 16 shows two other latent fingerprint examples from the IITD-MOLF DB4 database. The 33_R_4_2 image in Fig. 16(a) shows that the proposed algorithm can reconstruct ridges in the central area while the other algorithms fail. The reason is that the proposed algorithm can diffuse the high-quality ridge spectrum into the low-quality area, resulting in better-enhanced results.
Another showcase from the IITD-MOLF DB4 database, the 24_R_8_4 image in Fig. 16(b), is a right-loop type. This image contains a weak fingerprint pattern over the entire segment and very unclear ridges in the bottom zone. The proposed algorithm yields the rank-1 identification result for both AFIS systems. Our 2-Stage-Prgs algorithm [27] has a problem around the core point, resulting in a slightly shifted location of the detected core point. The FingerNet result [14] excludes the bottom zone from its segmentation and fails to produce an identification rank with the MINDTCT and MCC AFIS. Note that the full enhanced results for both databases are available upon request.
E. EXECUTION TIME
We implement our work using the MATLAB 2018a toolbox and Microsoft Visual C# 2017, running on an Intel Core i7 CPU at 2.2 GHz with 8 GB RAM and an NVIDIA GeForce GTX 1060 6 GB GPU. Table 7 reports the execution times of each process of the proposed method for the two benchmark databases; the NIST-SD27 image size is 800 × 768 pixels and the IITD-MOLF DB4 image size is 320 × 448 pixels. Note that later iterations are faster than earlier ones because most blocks have already been enhanced, so only the remaining anomalous blocks are re-enhanced. We train the two components of the sparse autoencoder separately for the locally spectral autoencoder; the training takes 36.7 hours, and retraining the full stacked network takes an additional 36.8 hours. The regionally spectral autoencoder requires 16.2 hours for training and 12.6 hours for retraining with the missing data.
Our proposed algorithm's execution time may be long compared with deep learning approaches, which require less than a second [18] or only a few seconds [14] for enhancement. However, this is not a critical issue; the more critical issue is how correctly the algorithm can enhance the hidden features in the input latent fingerprint image. The goal of latent fingerprint enhancement is to increase the AFIS hit rate and identify the prime suspect.
V. CONCLUSION AND FUTURE RESEARCH
We combine two powerful mechanisms to solve the latent fingerprint enhancement problem. The first mechanism uses boosted spectral filtering to enhance high-quality friction ridges first; the enhanced friction ridges are then inserted back as feedback to improve the low-quality ridges nearby. This mechanism provides progressive feedback that exploits intra-image correlation. The second mechanism employs machine learning for corrective feedback: it uses locally and regionally spectral autoencoders for anomalous fingerprint pattern detection and explores inter-image correlation. The combination of the two mechanisms gives us a novel framework for latent fingerprint enhancement.
There are some drawbacks to the proposed framework. First, the proposed framework is quite complicated. Second, the proposed autoencoder cannot detect incorrectly enhanced results of global fingerprint patterns, as shown in Fig. 14 (b). Third, the manual segmentation of latent fingerprints is necessary for the input of the proposed framework.
Most deep learning approaches provide automatic latent fingerprint segmentation and enhancement in the same package, whereas the proposed algorithm requires manual latent fingerprint segmentation. Nevertheless, we think that human-guided segmentation is still required in practical forensic routine when multiple latent prints appear in one image; latent fingerprint examiners need to focus on the targeted fingerprint or the best-quality latent fingerprint in the given image. A fully automatic system would require several automatic processes, including segmentation, quality assessment, enhancement, and targeted fingerprint selection. This remains an open problem that has not yet been thoroughly answered.
Unfortunately, we cannot compare our enhanced results with GAN approaches due to the lack of published results. We have requested GAN enhanced results from some authors in the literature but have so far received none, so the open question is whether GAN results can surpass ours. In our opinion, the progressive feedback of our proposed framework is responsible for most of the improvement in latent fingerprint enhancement, whereas the existing GAN approaches cannot exploit and boost weak friction ridges from already-enhanced ridges (they have no intra-image feedback mechanism). We leave this task for future research and shall provide our enhanced results from both public databases upon request for future comparison.
For future research, there is plenty of room for improvement. The spectral autoencoder with fully connected layers is simple, but it may not be efficient. We shall extend our learning model by exploring deep CNN architectures in the frequency domain for globally anomalous fingerprint pattern detection. Combining progressive feedback with a deep learning model such as a CNN will bring us new hope for a better latent fingerprint enhancement scheme.
| 14,358.4 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Numerical Verification and Comparison of Error of Asymptotic Expansion Solution of the Duffing Equation
A numerical order verification technique is applied to demonstrate that the asymptotic expansions of solutions of the Duffing equation obtained by the Lindstedt-Poincaré (LP) method and the modified Lindstedt-Poincaré (MLP) method, respectively, are uniformly valid for small parameter values. A numerical comparison of the errors shows that the MLP method remains valid for large parameter values whereas the LP method does not.
INTRODUCTION
The Duffing equation has been used to model a number of mechanical and electrical systems [1]. The differential equation that describes this oscillator has a cubic nonlinearity, and it is named after the studies of G. Duffing in the 1930s. Traditional perturbation methods, such as the Lindstedt-Poincaré (LP) method, the multiple scale method and the harmonic balance method, are powerful tools for obtaining approximate solutions of the Duffing equation as well as other nonlinear equations. Asymptotic expansion solutions obtained by perturbation methods are formally power series in the small parameter ε and are valid only for small values of ε [2]. A modified Lindstedt-Poincaré (MLP) method [3,4] was proposed to obtain asymptotic expansion solutions of the Duffing equation that work not only for small but also for large values of ε. The essential idea of the MLP method is to transform the parameter ε into a new parameter α, defined such that the value of α is always kept small regardless of the magnitude of the original parameter ε. When an asymptotic expansion is formally constructed, it is important to verify that it accurately approximates the exact solution and that the error in the expansion behaves asymptotically as expected. Typically, asymptotic solutions for a few specific values of ε are chosen to show that the error between the asymptotic solution and the exact (numerical) solution is relatively small. However, so few comparisons are sometimes insufficient to demonstrate that the asymptotic expansion is uniformly valid, which means that the numerical error of the truncated asymptotic expansion is of the same order of magnitude, with respect to the expansion parameter, as the neglected terms; although the quantitative error may be small, it may not become small at the expected rate [2,5]. Therefore, one needs to further verify that the solution is indeed asymptotically accurate to the order to which it is constructed. In this paper, a numerical order verification technique, first proposed by Bosley [5], is applied to demonstrate that the asymptotic expansions of solutions of the Duffing equation are uniformly valid up to the third order for small values of the parameter ε. The order of the asymptotic expansion solutions of the free vibration of the Duffing equation has been verified in Ref. [6], but we note that the reversion method adopted there leads to expansions containing the secular term εt sin t, which are effective only for small values of εt. In this paper, the use of the LP method and the MLP method overcomes this defect. Furthermore, instead of evaluating the asymptotic solution at one fixed point t = t_0 as in Refs. [5-9], or at finitely many fixed points as in Ref. [10], the maximum absolute error on an interval [0, T] is introduced in this paper to give a more comprehensive evaluation of the error between the asymptotic and numerical solutions; when estimating numerical errors due to time evolution, it is more appropriate to use the maximum error over the time domain in engineering applications. Finally, a numerical comparison of the error of the LP method with that of the MLP method shows that the MLP method also works for large values of ε whereas the LP method does not.
ASYMPTOTIC EXPANSIONS OF SOLUTIONS
Consider the harmonically excited vibrations of the Duffing equation, Eqs. (1) and (2), where Ω is the forcing frequency and p is the forcing amplitude. A time transformation leads to Eq. (3). Following the procedure of the classical LP method [2], the first four terms of the approximate solution of the fundamental resonance can be worked out as the expansion (4) with (5)-(9). Similarly, following the procedure of the MLP method [4], the first four terms of the approximate solution of the fundamental resonance of Eq. (3) can be worked out as the expansion (10) with (11)-(16).
NUMERICAL ORDER VERIFICATION OF ASYMPTOTIC EXPANSIONS
We first give a brief introduction to Bosley's technique [5]. Assume that the asymptotic expansion solution of a nonlinear equation is given by the truncated series (17). The absolute error between the asymptotic solution and the exact solution then behaves as E_N ≈ K ε^(N+1), Eq. (18), where K is a constant. Taking the logarithm of both sides of Eq. (18) shows that, for a fixed time t = t_0 and for small values of ε, the values of log E_N plotted against log ε for different values of ε should lie nearly on a line, and the linear equation interpolating these points by a linear least-squares fit should have slope N + 1. The error of Eq. (18) is evaluated at a fixed point t = t_0 in Refs. [5-9]. We think this is partial, because the errors differ at different points: the error may be small at one point but large at others. In this paper, the maximum absolute error over an interval [0, T], Eq. (19), is used instead, which gives a comprehensive estimation of the difference between the exact solution and the asymptotic solution over the domain of interest. Eq. (19) can be numerically approximated by Eq. (20), where the t_i are fixed points in the interval [0, T] and m is a sufficiently large integer. In the following examples, the values of the parameters in Eqs. (1), (2), (19) and (20) are held fixed. To verify the order of the asymptotic expansion (4) obtained by the LP method, we first find the numerical solutions of Eqs. (1) and (2) with ε starting from 0.001 and ending at 0.03 with a step size of 0.001. Next, we evaluate the asymptotic expansion (4) at the same values of ε and t_i as the numerical solutions, for N = 0, 1, 2 and 3, respectively. In Fig. 1 we plot the values of the error at these 30 points, where the exact solution in Eq. (18) is replaced by the numerical solution. For N = 0, 1, 2 and 3, the least-squares fit of these data determines the slopes 1.00252, 1.96014, 2.95284 and 3.94723, respectively, whose relative errors are less than 2% compared with the theoretical slopes N + 1 = 1, 2, 3 and 4. We can therefore conclude that the asymptotic solution of the Duffing equation obtained by the LP method is indeed uniformly valid for small parameter values. In this paper, the computer algebra system Mathematica is used to carry out the relevant calculations and plots.
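A minimal sketch of this order-verification procedure is given below; since Eqs. (1)-(3) and the parameter values are not restated here, the sketch assumes the standard forced form u'' + u + εu³ = p cos(Ωt) with illustrative values of p, Ω, T, m and initial conditions, and takes the truncated LP or MLP expansion as a user-supplied function.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not the paper's values).
p, Omega, T, m = 0.2, 1.0, 10.0, 200

def duffing_rhs(t, y, eps):
    u, v = y
    return [v, -u - eps * u**3 + p * np.cos(Omega * t)]

def max_abs_error(eps, asymptotic, y0=(1.0, 0.0)):
    """E_N(eps) = max over m sample points in [0, T] of |u_numerical - u_asymptotic|."""
    t = np.linspace(0.0, T, m)
    sol = solve_ivp(duffing_rhs, (0.0, T), y0, t_eval=t, args=(eps,),
                    rtol=1e-10, atol=1e-12)
    return np.max(np.abs(sol.y[0] - asymptotic(t, eps)))

def verify_order(asymptotic, eps_values):
    """Least-squares slope of log E_N versus log eps; a uniformly valid
    expansion truncated at order N should give a slope close to N + 1."""
    errors = [max_abs_error(e, asymptotic) for e in eps_values]
    slope, _ = np.polyfit(np.log(eps_values), np.log(errors), 1)
    return slope

# Usage: pass the truncated expansion from (4) with (5)-(9) or (10) with (11)-(16)
# as asymptotic(t, eps), with eps_values = np.arange(0.001, 0.0301, 0.001).
```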
Similarly, the verification of the order of the asymptotic expansion (10) with (11)-(16) obtained by the MLP method is shown in Fig. 2, where α starts from 0.005 (corresponding to ε = 0.0006) and ends at 0.1 (ε = 0.0139) with a step size of 0.0025. For N = 0, 1, 2 and 3, the least-squares fit of these data determines the slopes 1.0001, 2.05442, 2.99075 and 4.00381, respectively, whose relative errors are less than 2.7% compared with the theoretical slopes N + 1 = 1, 2, 3 and 4. Thus we can conclude that the asymptotic solution of the Duffing equation obtained by the MLP method is indeed uniformly valid for small parameter values.
NUMERICAL COMPARISON OF THE MLP METHOD WITH THE LP METHOD
We now show a numerical comparison of the MLP method with the LP method. For simplicity, we take only the third-order approximation as an example.
In Fig. 3 we plot E_3 versus ε for the asymptotic expansion (4) with (5)-(9) obtained by the LP method, where ε starts from 0.5 and ends at 2.5 with a step size of 0.05. For these values of ε, the error of the LP method is unacceptable, because the maximum absolute error E_3 is larger than 21 and increases rapidly as ε increases. So, for large parameter values of ε, the MLP method is valid whereas the LP method is invalid.
CONCLUSIONS
The asymptotic expansions of solutions of the Duffing equation obtained by the LP method and the MLP method are uniformly valid for small parameter values of ε. For large parameter values of ε, the MLP method is valid whereas the LP method is invalid.
Fig. 1 Order verification of the asymptotic expansion (4) obtained by the LP method
| 1,941.8 | 2008-04-01T00:00:00.000 | [
"Mathematics"
] |
Reverse Carleson measures in Hardy spaces
We give a necessary and sufficient condition for a measure μ in the closed unit disk to be a reverse Carleson measure for Hardy spaces. This extends a previous result of Lefèvre, Li, Queffélec and Rodríguez-Piazza (Rev Mat Iberoam 28(1):57-76, 2012). We also provide a simple example showing that the analogue for the Paley-Wiener space does not hold. As it turns out, the analogue never holds in any model space.
Introduction
For 1 ≤ p < ∞ let H^p be the Hardy space on the unit disk D, equipped with its usual norm. A finite positive Borel measure μ on D is a Carleson measure when the embedding H^p ⊂ L^p(μ) in (1.1) is bounded, and by Carleson's theorem this happens exactly when condition (1.2) holds, namely μ(S_I) ≤ C|I| for every arc I ⊂ ∂D, where S_I = {z ∈ D: 1 − |I| ≤ |z| ≤ 1, z/|z| ∈ I} is the usual Carleson window. This theorem has been extended to several other spaces, like Bergman, Fock, model spaces etc., and we refer the reader to the huge bibliography on this topic for further information.
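In one standard normalization (recalled from the classical theory rather than quoted verbatim), the norm and the two conditions labelled (1.1) and (1.2) can be written as

\[
\|f\|_{p}^{p}=\sup_{0<r<1}\int_{0}^{2\pi}\bigl|f(re^{i\theta})\bigr|^{p}\,\frac{d\theta}{2\pi},\qquad
\int_{\overline{\mathbb D}}|f|^{p}\,d\mu\le C\,\|f\|_{p}^{p}\ \ (1.1),\qquad
\mu(S_{I})\le C\,|I|\ \text{ for all arcs } I\subset\partial\mathbb D\ \ (1.2).
\]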
Note that H^p contains a dense set of continuous functions for which the embedding (1.1) still makes sense when the measure has a part supported on the boundary. Then (1.2) implies that the restriction of the measure μ to the boundary has to be absolutely continuous with respect to Lebesgue measure, with bounded Radon-Nikodym derivative. It is thus possible to consider, more generally, positive, finite Borel measures supported on the closed unit disk. Here, we are interested in reverse Carleson inequalities $\|f\|_p \le C\|f\|_{L^p(\overline{\mathbb D},\mu)}$ for $f \in C(\overline{\mathbb D}) \cap H^p$, 1 < p < ∞. In [9] Lefèvre et al. proved that, when μ is already a Carleson measure, these hold if and only if there exists C > 0 such that μ(S_I) ≥ C|I| for all arcs I ⊂ ∂D.
We will show that this result can be deduced from a well-known balayage argument which does not require the Carleson condition. It will be clear from this argument that we have a reproducing kernel thesis for the reverse embedding: if the embedding holds on the reproducing kernels, then it actually holds for every function.
It turns out that the interesting part of the measure has to be supported on the boundary, while the part supported in the disk can be dropped.
Finally, we provide a simple example showing that the analogous reproducing kernel thesis for the reverse embedding in the Paley-Wiener space does not hold. We will actually show that the reproducing kernel thesis for the reverse embedding never holds in any model space. In a previous version of this work, the construction valid in the Paley-Wiener space was generalized to the situation of so-called one-component inner functions. We are very grateful to Anton Baranov who suggested the shorter proof presented below and which gives the result in general model spaces.
We shall use the following standard notation: f ≲ g means that there is a constant C, independent of the relevant variables, such that f ≤ Cg, and f ≍ g means that f ≲ g and g ≲ f.
Our main result reads as follows.
Theorem 2.1 Let 1 < p < ∞ and let μ ∈ M_+(D). Then the following assertions (1)-(4) are equivalent. The key implication of the above result is of course (2) ⇒ (3), which is based on a balayage argument.
Observe that in this theorem we do not require absolute continuity of the restriction μ|_∂D. Still, if we want to extend (1) to the entire H^p-space, then, in order that $\int_{\overline{\mathbb D}}|f|^p\,d\mu$ makes sense for every function in H^p, we need to impose absolute continuity on μ|_∂D. Note that the integral $\int_{\overline{\mathbb D}}|f|^p\,d\mu$ can be infinite for certain f ∈ H^p when the Radon-Nikodym derivative of μ|_∂D is not bounded.
(3) ⇒ (4). Take h > 0 so that |I|/h is a large integer N and consider the modified Carleson window S_{I,h}. Split I into N subarcs I_k such that |I_k| = h (and hence S_{I_k,h} = S_{I_k}). Then, by [11, Theorem 2.18], we obtain the corresponding estimate, and we deduce that the Lebesgue measure on ∂D, denoted by m, is absolutely continuous with respect to the restriction of μ to ∂D and that the corresponding Radon-Nikodym derivative is bounded below by a constant multiple of C; in particular an explicit choice of this constant is possible. Observe that when p = 2, |K_λ(z)|^2 is nothing but the Poisson kernel, for which the arguments below are very transparent; let us however do the argument for general p. By hypothesis, integrating over S_{I,h} with respect to the area measure dA on D, we obtain an estimate which, combined with the previous one, gives the required bound. Indeed, if z ∉ I, then there are δ, h_0 > 0 such that for every 0 < h < h_0 and for every λ ∈ S_{I,h} we have |1 − λz| ≥ δ > 0, and the result follows from the corresponding estimate. Suppose now that z = e^{iθ_0} ∈ I. Let h ≤ |I|; then, setting λ = (1 − t)e^{iθ} for λ ∈ S_{I,h}, and since 0 ≤ t ≤ h ≤ |I| and z ∈ I, the set {e^{iθ}: |θ − θ_0| ≤ t, e^{iθ} ∈ I} contains an interval of length at least t/2, which gives the required lower bound. On the other hand, integrating in polar coordinates gives the complementary estimate. Hence ϕ_h converges pointwise to a function comparable to χ_I, and ϕ_h is uniformly bounded in h. Now, from (2.1) and by dominated convergence, we finally deduce the claimed inequality.
Failure in the Paley-Wiener space
Let us see that, on the other hand, the μ-norm of the normalized reproducing kernels is uniformly bounded from below. If λ is such that |Im λ| > 1, then |sin(π(x_n − λ))| ≍ e^{π|Im λ|}, and hence the terms |Im λ|/|x_n − λ|^2 sum to a quantity comparable to 1.
It is thus enough to consider points $\lambda \in \mathbb{C}$ with $|\operatorname{Im}\lambda| \le 1$. Let $x_{n_0}$ be the point of $S$ closest to $\lambda$; then there is $\delta > 0$, independent of $\lambda$, such that […]. It is interesting to point out that $\mu$ is a Carleson measure for $PW_\pi$, since $S$ is contained in a strip and separated.
Failure in general model spaces
The previous construction can be generalized to model spaces in the disk. The model space associated to an inner function $\Theta$ is $K_\Theta = H^2 \ominus \Theta H^2$, and the reproducing kernel corresponding to $\lambda \in \mathbb{D}$ is given by $k_\lambda(z) = \dfrac{1 - \overline{\Theta(\lambda)}\,\Theta(z)}{1 - \bar{\lambda}z}$. If $\Theta$ is a finite Blaschke product of degree strictly bigger than one, picking for instance $\mu = \delta_0$ we immediately get the reverse inequality on reproducing kernels [see (3.1)]. Clearly $\mu$ is Carleson, and since the degree of $\Theta$ is not one, we can combine two linearly independent functions of $K_\Theta$ vanishing at 0.
In the general case we need to construct a measure supported on $\mathbb{T}$.
Theorem 3.1 Let $\Theta$ be an inner function which is not a finite Blaschke product. Then there exists a measure $\mu$ on $\mathbb{T}$ such that $K_\Theta \subset L^2(\mu)$ and $\mu$ satisfies the reverse estimate on reproducing kernels $k_\lambda$,
$$\|k_\lambda\|_{L^2(\mu)} \ge C\,\|k_\lambda\|_2, \qquad \lambda \in \mathbb{D}, \tag{3.1}$$
but the reverse Carleson embedding for the space $K_\Theta$ does not hold.
Proof Let us first assume that $\Theta$ vanishes at $z_0 = 0$, and write $\Theta = z\Theta_0$. Denote by $\mu$ the Clark measure for $\Theta_0$, that is, $\mu$ is defined by
$$\frac{1 - |\Theta_0(z)|^2}{|1 - \Theta_0(z)|^2} = \int_{\mathbb{T}} \frac{1 - |z|^2}{|\zeta - z|^2}\,d\mu(\zeta), \qquad z \in \mathbb{D}.$$
Clark introduced these measures in [6]. Observe first that $K_\Theta = \mathbb{C} \oplus zK_{\Theta_0}$, which implies that $K_\Theta \subset L^2(\mu)$. Continuous functions are dense in $K_\Theta$ (see [3] or [5, p. 187]) and functions in $K_\Theta$ are $\mu$-measurable (see [10]). In particular, if we had the reverse embedding on continuous functions then we would have it on the whole space $K_\Theta$. It is thus sufficient to find a non-zero function $f \in K_\Theta$ with zero $L^2(\mu)$-norm. To this end, pick $f(z) = 1 - \Theta_0(z)$, which belongs to $K_\Theta$. Clearly $f = 0$ $\mu$-a.e., so that $\int |f|^2\,d\mu = 0$. Let us show that (3.1) is satisfied. Any reproducing kernel $k_\lambda$ has a representation as the (orthogonal) sum $k_\lambda = 1 + \bar{\lambda}\,z\,k_\lambda^{\Theta_0}$. In particular, $\|k_\lambda\|_2^2 = 1 + |\lambda|^2\,\|k_\lambda^{\Theta_0}\|_2^2$. Also, since $\mu$ is a Clark measure for $K_{\Theta_0}$, we have $\|g\|_{L^2(\mu)} = \|g\|_2$ for every $g \in K_{\Theta_0}$. Thus, we clearly have $\|k_\lambda\|_{L^2(\mu)} \ge |\lambda|\,\|k_\lambda^{\Theta_0}\|_2 - \mu(\mathbb{T})^{1/2}$. Assume that there exists a sequence $\lambda_n$ such that $\|k_{\lambda_n}\|_2 \le 2(1 + \mu(\mathbb{T})^{1/2})$ and (3.1) does not hold for $\lambda_n$ with any positive $C$. Since the norms of the kernels are supposed bounded on $\lambda_n$, this implies that $\|k_{\lambda_n}\|_{L^2(\mu)} \to 0$. Passing if necessary to a subsequence we may assume that $\lambda_n \to \lambda_0$, and it follows from the fact that the norms $\|k_{\lambda_n}\|_2$ are uniformly bounded that, even in the case when $\lambda_0 \in \mathbb{T}$, the kernel $k_{\lambda_0}$ is still correctly defined and belongs to $K_\Theta$. Then, by the Fatou lemma, we have that $\|k_{\lambda_0}\|_{L^2(\mu)} = 0$. Hence $1 - \overline{\Theta(\lambda_0)}\,\Theta(z) = 0$ $\mu$-a.e. But $\Theta_0(z) = 1$ $\mu$-a.e. Thus $1 - \overline{\Theta(\lambda_0)}\,z = 0$ for $\mu$-a.e. $z$, which is impossible if the support of $\mu$ contains at least two points (but it does, since $\Theta_0$ is not a single Blaschke factor). Let us now discuss the situation when $\Theta$ does not vanish at 0. For $a = \Theta(0)$, the Frostman shift $\Theta_a = \dfrac{\Theta - a}{1 - \bar{a}\Theta}$ has a zero at 0. By the above discussion, there is a measure $\mu_a$ such that $K_{\Theta_a} \subset L^2(\mu_a)$, the reverse estimate holds on the kernels,
$$\|k_\lambda^{\Theta_a}\|_{L^2(\mu_a)} \ge C\,\|k_\lambda^{\Theta_a}\|_2, \qquad \lambda \in \mathbb{D}, \tag{3.2}$$
and there is a non-zero function $f_0 \in K_{\Theta_a}$ with $\|f_0\|_{L^2(\mu_a)} = 0$. Recall that the Crofoot transform $U_a : K_\Theta \longrightarrow K_{\Theta_a}$, defined by
$$U_a f = \frac{\sqrt{1 - |a|^2}}{1 - \bar{a}\Theta}\,f,$$
is isometric onto $K_{\Theta_a}$. An easy computation gives $k_\lambda^{\Theta_a} = U_a(c_{a,\lambda}\,k_\lambda)$, $\lambda \in \mathbb{D}$, where $c_{a,\lambda} = \dfrac{\sqrt{1 - |a|^2}}{1 - a\,\overline{\Theta(\lambda)}}$.
Using (3.2) and the isometry property of the Crofoot transform, we get $C\,\|k_\lambda\|_2 = C\,\|U_a k_\lambda\|_2 \le \|U_a k_\lambda\|_{L^2(\mu_a)} = \|k_\lambda\|_{L^2(\mu)}$, and so $\mu$ satisfies (3.1). We also have the Carleson measure condition for this measure: for every $f \in K_\Theta$, […]. Finally, since there is $0 \ne f_0 \in K_{\Theta_a}$ with $\|f_0\|_{L^2(\mu_a)} = 0$, take the unique $0 \ne g_0 \in K_\Theta$ with $U_a g_0 = f_0$; then $\|g_0\|_{L^2(\mu)} = \|U_a g_0\|_{L^2(\mu_a)} = \|f_0\|_{L^2(\mu_a)} = 0$.
Note that the above proof actually works for finite Blaschke products of degree at least 3.
"Mathematics"
] |
Phenotypic Variation across Chromosomal Hybrid Zones of the Common Shrew (Sorex araneus) Indicates Reduced Gene Flow
Sorex araneus, the Common shrew, is a species with more than 70 karyotypic races, many of which form parapatric hybrid zones, making it a model for studying chromosomal speciation. Hybrids between races have reduced fitness, but microsatellite markers have demonstrated considerable gene flow between them, calling into question whether the chromosomal barriers actually do contribute to genetic divergence. We studied phenotypic clines across two hybrid zones with especially complex heterozygotes. Hybrids between the Novosibirsk and Tomsk races produce chains of nine and three chromosomes at meiosis, and hybrids between the Moscow and Seliger races produce chains of eleven. Our goal was to determine whether phenotypes show evidence of reduced gene flow at hybrid zones. We used maximum likelihood to fit tanh cline models to geometric shape data and found that phenotypic clines in skulls and mandibles across these zones had centers and widths similar to those of the chromosomal clines. The amount of phenotypic differentiation across the zones is greater than expected if it were dissipating due to unrestricted gene flow given the amount of time since contact, but it is less than expected to have accumulated from drift during allopatric separation in glacial refugia. Only if heritability is very low, Ne very high, and the time spent in allopatry very short will the differences we observe be large enough to match the expectation of drift. Our results therefore suggest that phenotypic differentiation has been lost through gene flow since post-glacial secondary contact, but not as quickly as would be expected if there were free gene flow across the hybrid zones. The chromosomal tension zones are confirmed to be partial barriers that prevent differentiated races from becoming phenotypically homogeneous.
Introduction
The Common shrew, Sorex araneus, provides an unparalleled opportunity to study the process of speciation. Robertsonian karyotypic variation, in which chromosomal arms are rearranged at the centromeres, subdivides this species into more than 70 karyotypic races [1,2]. Members of a race by definition share the same combination of acrocentric and metacentric chromosomes in a geographically contiguous part of the parent species' range [3].
Where parapatric karyotypic races make contact, Robertsonian incompatibilities cause them to form tension hybrid zones. Sorex araneus possesses eighteen fundamental arms, excluding the sex chromosomes, which are labeled alphabetically from largest to smallest (a-u); arms g-r vary in how they are combined into chromosomes [4,5]. Hybrid incompatibilities arise when homologous fundamental arms are combined into different metacentric chromosomes in the two parent races. The mismatches cause heterozygote chromosomes to align in chains and rings of varying complexity during the first division of meiosis ( Figure 1). In simple heterozygotes, two acrocentric arms from one race align with a single metacentric from the other (a chain of three, CIII); in complex heterozygotes, several metacentrics align in longer chains or rings. For complex heterozygotes, problems in pairing at prophase and difficulty in segregation at anaphase result in reduced hybrid reproductive fitness. The role of meiotic drive in fixing metacentrics within races and effects of metacentric incompatibilities between races have been studied in detail at several hybrid zones [5][6][7][8][9][10][11][12][13]. Simple heterozygotes cause only marginal reductions in fitness, but hybrids with large rings or long chains can be substantially unfit [7,[14][15][16][17]. The lowered fitness maintains the sharp hybrid zones by preventing the metacentrics from introgressing between races.
Hybrid unfitness can, in principle, lead to speciation. In the classical chromosomal model of speciation, karyotypic incompatibility leading to reduced fertility of hybrids was considered the first stage in reproductive isolation [18,19]. Modern versions of the chromosome speciation model suggest that the reduction in gene flow is concentrated in the genes located on the rearranged chromosomes because of cross-over suppression [20][21][22], an effect heightened close to the chromosomal breakpoint [23][24][25]. Especially when combined with reinforcement, the reduction in gene flow can result in genetic and phenotypic differentiation, reproductive isolation, and ultimately speciation [22,26].
Intriguingly, however, substantial gene flow occurs across S. araneus hybrid zones despite reduced hybrid fitness. Genetic differentiation between races is weak, as indicated by allozyme markers [27,28], mitochondrial DNA [8,9], and microsatellite markers [29], even at zones where chains of nine (CIX) or eleven (CXI) chromosomes are formed in heterozygotes. Indeed, F-statistics on multilocus microsatellite markers suggest that genetic differences between populations within races are higher than the differences between races [29]. Some studies have even found that fertility may be relatively high for some types of complex heterozygotes and that metapopulation dynamics can overwhelm the weak barrier effects of reduced hybrid fitness [15,16,30]. The lack of genetic differentiation between races contrasts with the high level of differentiation between sister species [31,32]. The persistence of gene flow has led some researchers to conclude that Robertsonian rearrangements do not promote speciation in S. araneus, but are instead merely remnants of past allopatric differentiation being lost following secondary contact [28,29,33].
In this paper we look at clines of morphological variation across two well-studied hybrid zones to determine whether phenotypic differentiation between hybridizing races is sharply delineated in similar fashion to the metacentrics or weakly defined like the genetic markers. We measured clines in skull and mandible shape using geometric morphometrics across the Moscow-Seliger (M-S) hybrid zone in European Russia [10,34] and the Novosibirsk-Tomsk (N-T) zone in Siberia [11,35,36]. Heterozygote offspring from the two zones produce especially long meiotic chains: eleven chromosomes at M-S, and nine and three chromosomes at N-T (Figure 1). We chose to look at skull and mandible shape because these trait complexes are multivariate, polygenic (including autosomal genes), and easily measured. In mice, for example, there are as many as 50 mandible QTLs scattered over all 19 autosomes [37][38][39][40]. Clines in these phenotypic traits should therefore indicate gross differentiation in the autosomal genome. The disadvantage of skeletal traits as measures of genetic differentiation is that an unknown component of their variance is environmental, though studies in other mammalian taxa suggest that overall heritability in these multivariate phenotypic systems is likely to fall between 0.3 and 0.5 [41,42]. Ambiguities of interpretation related to heritability are discussed below. This study is the first to systematically examine phenotypic clines across S. araneus hybrid zones. Previous morphometric studies have looked at morphometric variation in pure and hybrid race individuals at selected hybrid zones [43][44][45][46][47], others have looked at phenotypic variation across large segments of the species' range [48][49][50], and a few have looked at morphometric differences between hybridizing sister species of the S. araneus group [51][52][53]. While phenotypic clines have never been studied directly in S. araneus, these previous studies suggest that phenotypic differentiation between races is often small but statistically significant, that differentiation at the population level is often greater than differentiation between races, as with genetic markers, and that phenotypic differentiation between groups of races can be as large as between sister species.
Our specific aims are to determine: (1) whether phenotypic clines exist across these karyotypic hybrid zones; (2) whether the centers and widths of the phenotypic clines coincide with the centers and widths of the chromosomal clines; (3) whether phenotypic differentiation is greater than differentiation in genetic markers; and (4) whether phenotypic differentiation is greater than expected if there were substantial gene flow.
Samples
Shrew skulls were photographed for morphometric analysis. The specimens represent two Sorex araneus hybrid zones that were the subject of previous chromosomal studies: (1) the Moscow-Seliger (M-S) zone in European Russia, where individuals were allocated to 18 sublocalities derived from five parallel trap lines crossing the hybrid zone (Figure 2A); and (2) the Novosibirsk-Tomsk (N-T) hybrid zone in Siberia, where individuals were collected from 20 sublocalities scattered on both sides of the metacentric hybrid zone center (Figure 2B) and two additional pure-race localities farther removed from the hybrid zone (see Table S1 for locality details for both hybrid zones). The chromosomal clines at these two hybrid zones were recently published by some of us [10,11,34]; the methods used to extract chromosomes and to estimate clines are described in full there. Briefly, however, karyotypes for each individual were determined using G-banding procedures [54]. Karyotypic clines were fit for each distinct metacentric chromosome using a maximum likelihood fit in two geographic dimensions, and the chromosomal cline centers and widths were estimated from those [55]. Trapping, handling and euthanasia of animals followed protocols approved by the Animal Care and Use Committees of the A.N. Severtsov Institute of Ecology and Evolution and the Institute of Cytology and Genetics of the Russian Academy of Sciences. No additional permits are required for research on non-listed species in Russia. Specimens were deposited in the research collections of the Department of Zoology and Ecology, Kemerovo State University, Kemerovo, Russia. Skulls and mandibles were photographed at KSU by two of us (VBI, SSO).
Shape and size
A total of 282 specimens were used for morphometric analysis. These include 152 individuals from the M-S Zone (M = 50; S = 73; Hybrid = 29) and 130 from the N-T zone (N = 61; T = 42; Hybrid = 27). These specimens were grouped by sublocality for cline fitting. Fewer sublocalities were used in this study than in the chromosomal analyses [10,11] because broken specimens and small samples made some of those sublocalities untenable for morphometric analysis. The sublocalities used in this study are described above, shown in Figure 2, and described in Table S1.
Each cranium was digitally photographed in ventral view and each mandible in both medial and lateral views. Two-dimensional Cartesian coordinates were recorded for biologically homologous landmarks on each structure (Figure 3, Table S2). Note that all of the landmarks on the lateral mandible are equivalent to landmarks on the medial mandible, except for landmark 17 (mental foramen). Those two datasets are therefore expected to be highly correlated. Shape and size data were collected with tpsDig by FJ Rohlf. The landmarks and associated data are available from Indiana University ScholarWorks (http://hdl.handle.net/2022/15279).
Landmarks were aligned using generalized Procrustes analysis (GPA) to remove differences in size, translation and rotation. The aligned shapes were projected orthogonally into tangent space for further analysis [56]. The mean was subtracted from the GPA-superimposed landmarks to center their coordinate system, the covariance matrix of the residuals was calculated, and the residuals were projected onto the eigenvectors of the covariance matrix to obtain principal component scores that could be used as shape variables for subsequent analysis [57]. Note that all of the shape variance found in the original landmark data is preserved in the complete set of principal component scores.
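The following Python sketch illustrates the GPA-plus-PCA pipeline just described. It is not the authors' code (they collected data with tpsDig and analyzed them in Mathematica), the array names are hypothetical, the explicit tangent-space projection is omitted, and this simple alignment does not exclude reflections.

```python
# Minimal GPA + PCA sketch for (specimens x landmarks x 2) landmark data.
import numpy as np

def center_and_scale(shapes):
    """Translate each configuration to the origin and scale to unit centroid size."""
    centered = shapes - shapes.mean(axis=1, keepdims=True)
    size = np.sqrt((centered ** 2).sum(axis=(1, 2), keepdims=True))
    return centered / size

def align_to(ref, shape):
    """Orthogonal Procrustes rotation of `shape` onto `ref` (reflections not excluded)."""
    u, _, vt = np.linalg.svd(shape.T @ ref)
    return shape @ (u @ vt)

def gpa(shapes, n_iter=10):
    """Generalized Procrustes analysis: iteratively rotate all shapes to the mean."""
    aligned = center_and_scale(shapes)
    mean = aligned[0]
    for _ in range(n_iter):
        aligned = np.array([align_to(mean, s) for s in aligned])
        mean = aligned.mean(axis=0)
        mean /= np.sqrt((mean ** 2).sum())   # keep the consensus at unit size
    return aligned, mean

def shape_pca(aligned):
    """PCA of Procrustes residuals; the full set of PC scores keeps all shape variance."""
    flat = aligned.reshape(len(aligned), -1)
    resid = flat - flat.mean(axis=0)
    _, _, vt = np.linalg.svd(resid, full_matrices=False)
    return resid @ vt.T

rng = np.random.default_rng(0)
landmarks = rng.normal(size=(30, 17, 2)) * 0.05 + rng.normal(size=(17, 2))
aligned, mean_shape = gpa(landmarks)
scores = shape_pca(aligned)
print(scores.shape)
```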
Differences in shape between individuals or populations were measured as Procrustes distance, D, which can be calculated either as the square root of the sum of squared differences between corresponding landmarks after they have been superimposed, or as the square root of the sum of squared differences between PC scores across all principal components. D is used in several of our analyses, notably the estimation of phenotypic clines (see below). Thin-plate spline deformation grids were used to illustrate differences in shape [57]. Size of each element was measured as centroid size, which is the square root of the sum of squared distances between each landmark and the object's centroid [58]. Analyses were performed with the Morphometrics for Mathematica 9.0 add-in (http://hdl.handle.net/2022/14613) [59].
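A minimal sketch of the two measures just defined, assuming matched landmark configurations after GPA; the function names are illustrative, not from the paper.

```python
# Procrustes distance D and centroid size, as defined in the text above.
import numpy as np

def procrustes_distance(a, b):
    """D = sqrt of summed squared differences between superimposed landmarks."""
    return np.sqrt(((a - b) ** 2).sum())

def procrustes_distance_pcs(pcs_a, pcs_b):
    """Equivalent D computed from the complete set of PC scores."""
    return np.sqrt(((pcs_a - pcs_b) ** 2).sum())

def centroid_size(landmarks):
    """Square root of summed squared distances of landmarks to their centroid."""
    centroid = landmarks.mean(axis=0)
    return np.sqrt(((landmarks - centroid) ** 2).sum())
```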
Tests for significant difference in shape
We measured phenotypic differentiation among several combinations of samples, as explained below. MANOVA was used to test the statistical significance of differences in mean shape between samples, notably among the pure race and hybrid groups at each of the two hybrid zones.
Q_ST and differences expected from drift
Q_ST, the analogue of F_ST for population differentiation in quantitative phenotypic traits, was used to measure differentiation between pure race samples on either side of both hybrid zones:
$$Q_{ST} = \frac{\sigma_{GB}^2}{\sigma_{GB}^2 + 2\,\sigma_{GW}^2},$$
where $\sigma_{GW}^2$ is the additive genetic variance within samples and $\sigma_{GB}^2$ is the genetic variance among samples [60][61][62]. An unbiased estimate of Q_ST and its standard error were obtained using the jack-knife procedure of Weir and Cockerham [63]. Since the additive genetic variance of our populations was unknown, we substituted the phenotypic variances, using $0.5\,\sigma_{PW}^2$ for $\sigma_{GW}^2$ and $\sigma_{PB}^2$ for $\sigma_{GB}^2$, which assumes that heritability (h²) within samples is 0.5 and that environmental variance among samples is randomly distributed or absent [41]. If this heritability is an overestimate, then our Q_ST values will have been underestimated [64]. The ambiguities of interpretation that arise from uncertainty about heritability are discussed below. Analyses were performed with Mathematica 9.0.
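A minimal sketch of this estimator under the stated substitution (h² = 0.5), with illustrative variable names; the jack-knife standard errors of Weir and Cockerham are omitted for brevity.

```python
# Q_ST from phenotypic variances, substituting h2 * within-group variance for the
# additive genetic variance, as described in the text above.
import numpy as np

def qst(groups, h2=0.5):
    """Q_ST = s2_GB / (s2_GB + 2 * s2_GW), with s2_GW ~ h2 * mean within-group
    phenotypic variance and s2_GB ~ variance among group means."""
    s2_gw = h2 * np.mean([np.var(g, ddof=1) for g in groups])
    s2_gb = np.var([np.mean(g) for g in groups], ddof=1)
    return s2_gb / (s2_gb + 2.0 * s2_gw)

rng = np.random.default_rng(1)
race_a = rng.normal(0.00, 1.0, 60)   # trait values on one side of the zone
race_b = rng.normal(0.25, 1.0, 60)   # slightly diverged mean on the other side
print(round(qst([race_a, race_b]), 3))
```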
Estimation of selection on the phenotypes
The proportion of phenotypic to neutral genetic differentiation was measured with the ratio Q_ST/F_ST, where F_ST is the analogous parameter for differentiation in alleles or frequencies of genetic markers [60,62]. If F_ST is estimated from neutral markers, then the ratio provides a measure of whether phenotypic differentiation is greater than expected due to drift [61]: if Q_ST/F_ST > 1, then the phenotypes may be (or have been) under differentiating selection in the two populations; if Q_ST/F_ST equals 1, then the phenotypes may have differentiated by drift; and if Q_ST/F_ST < 1, then the phenotypes may be under stabilizing selection. F_ST values for these two hybrid zones were published by Horn et al. [29] based on 16 microsatellite loci. These estimates were based on the same populations as our morphometric results. They were supplemented by another published F_ST estimate for the Moscow-Seliger zone at a different locality (11 autosomal microsatellite loci [65]).
Drift versus selection was also assessed by comparing the observed variance between populations to the amount expected from random drift:
$$\sigma_w^2(t) = \frac{h^2\,\sigma^2\,t}{N_e},$$
where $\sigma_w^2(t)$ is the expected variance of phenotypic change due to drift at time t, h² is the heritability, σ² is the phenotypic variance of the trait, t is the number of generations elapsed, and N_e is the effective population size [66,67]. As with Q_ST, heritability was assumed to be 0.5; if the true heritability is higher, then the amount of differentiation due to drift will be greater, whereas if it is lower (which it probably is) the amount of differentiation due to drift will be smaller. N_e was unknown in our populations, so we used two estimates: a conservatively small one derived from field censuses of local populations of S. araneus, where N_e = 70 [68], and a conservatively large one estimated from diversity in molecular markers, where N_e = 70,000 [69]. The true value undoubtedly lies between these two extreme estimates, probably closer to the small one, so the two estimates we derive from Equation 2 should confidently bracket the plausible range of phenotypic differentiation due to drift. The range of interpretation arising from uncertainty about heritability and population size is discussed below. Analyses were performed with Mathematica 9.0.
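As a check on Equation 2 as reconstructed above, the short sketch below expresses the expected drift divergence in standard deviates, i.e., √(h²t/N_e), for the paper's two N_e values; it reproduces the 0.3 and 8.5 standard-deviate figures quoted in the Results.

```python
# Expected between-population divergence from drift, in phenotypic SD units.
import math

def drift_sd(h2, t, ne):
    """sqrt(h2 * sigma^2 * t / Ne) divided by sigma = sqrt(h2 * t / Ne)."""
    return math.sqrt(h2 * t / ne)

t = 10_000  # generations of allopatry (~ duration of MIS2 isolation)
for ne in (70, 70_000):
    print(f"Ne={ne:>6}: {drift_sd(0.5, t, ne):.2f} standard deviates")
# -> roughly 8.5 for Ne = 70 and 0.3 for Ne = 70,000, matching the values in the text.
```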
Hybrid zone widths
Hybrid zone widths were estimated by fitting a tanh cline model to the phenotypic data at the M-S and N-T hybrid zones. The tanh model [70][71][72] is equivalent to the logistic regression model of Gay et al. [73] when the two tails of the cline have the same slope. We used the tanh model because the logistic model can only handle values between 0 and 1, whereas our data, which have 0 and 1 centered on the means of the two opposing populations, have data points that are smaller than 0 and larger than 1. The phenotypic mean of each sublocality (see Figure 2) was calculated and a one dimensional tanh model was fit to those means.
Tanh models are normally fit to chromosomal data taken from a transect across a hybrid zone between end-point populations on either side where the proportion of a particular metacentric is 1 at the end where it is fixed in the population and grades to 0 on the other side where the metacentric is absent. We standardized our phenotypic data to vary in the same way by calculating the mean of pure race localities on either side of the zone and standardizing the phenotypic variations so that those means had values of 0 and 1.
To do this, we first reduced the multivariate shape data to a single descriptive variable that describes the phenotypic differences between the two races. First a line in multivariate shape space that passes through the mean shapes of the two pure race samples was calculated. The means of each locality were projected onto that line, providing a summary of phenotypic variation with respect to the two pure races for all of the localities (Figure 3d). Sublocalities with fewer than four individuals were not used. This method of reducing dimensionality is an extension of a simpler approach used by Gay et al. [73], who used the first principal component (PC1) of variation as a univariate summary of multivariate phenotypic variation. Using PC1 to measure a cline has the disadvantage that the first principal component summarizes most of the variance in the dataset, but it does not necessarily describe the differences between the two groups (e.g., it may describe sex-related variation, variation in size or age, or simply variance in random outliers). Our approach uses the major axis that best distinguishes the two races, which may or may not be parallel to PC1. The geographic axis of the tanh fitting was calculated as the distance of each sublocality to its nearest neighbor point on the metacentric cline center (in kilometers). Distances to localities on the Novosibirsk and Seliger sides of the two zones were arbitrarily given negative numbers for purposes of plotting data and fitting cline curves. We then standardized the shape variable so that the mean of the pure race localities on the 'negative' side of the cline (Novosibirsk and Seliger races) was 0 and the mean on the positive side (Tomsk and Moscow races) was 1. To do this we subtracted the Novosibirsk (or Seliger) mean from the variable and divided by the phenotypic distances (D) between the two pure race locality means.
The width (w) and center (c) of the hybrid zone were then estimated for each morphological element using maximum likelihood to fit a tanh curve:
$$y = \frac{1}{2}\left(1 + \tanh\!\left(\frac{2(x - c)}{w}\right)\right),$$
where y is the standardized phenotypic distance expected under the model, x is the position of the sampling locality as described above, c is the center of the phenotypic cline being estimated, and w is the width of the phenotypic cline [70][71][72]. Standard errors for the cline parameters were estimated with bootstrapping [74]. For each of 1000 iterations, each sublocality was resampled with replacement and its mean recalculated after Procrustes superimposition. Those means were re-superimposed, the morphological transect between the races was re-estimated, and the cline was refit to generate a distribution for w and c. Standard errors were calculated as one standard deviation of the resampled parameters on either side of the median (i.e., the 15.9th and 84.1st percentiles). The distributions of these parameters are skewed, so the positive and negative standard errors are not equal. These standard errors take into account uncertainties associated with the sampling at each sublocality, with the Procrustes superimpositions, with the estimation of the race means, with the shape transect between the race means, and with the fitting of the tanh cline. Standard errors cannot be estimated from the likelihood function itself because of the Procrustes superimposition, which is an iterative best-fit algorithm that differs for every resampling. Analyses were performed with Mathematica 9.0.
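The sketch below fits the reconstructed tanh cline by least squares (equivalent to maximum likelihood under Gaussian errors, a simplification of the paper's fit) and bootstraps the parameters; the resampling here is over localities only, not the full Procrustes re-superimposition the authors describe, and the data are synthetic.

```python
# Fit y = (1 + tanh(2(x - c)/w)) / 2 to standardized locality means.
import numpy as np
from scipy.optimize import curve_fit

def tanh_cline(x, c, w):
    return 0.5 * (1.0 + np.tanh(2.0 * (x - c) / w))

rng = np.random.default_rng(3)
x = np.linspace(-15, 15, 12)                      # km from metacentric center
y = tanh_cline(x, 0.5, 4.0) + rng.normal(scale=0.05, size=x.size)

(c_hat, w_hat), _ = curve_fit(tanh_cline, x, y, p0=(0.0, 5.0))

boot = []
for _ in range(1000):                             # bootstrap over localities
    idx = rng.integers(0, x.size, x.size)
    try:
        p, _ = curve_fit(tanh_cline, x[idx], y[idx], p0=(c_hat, w_hat))
        boot.append(p)
    except RuntimeError:                          # occasional non-convergence
        continue
boot = np.array(boot)
print(f"center = {c_hat:.2f} km, width = {w_hat:.2f} km")
print("bootstrap SDs:", boot.std(axis=0).round(2))
```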
Estimations of the minimum age of the hybrid zones
When two allopatric populations first come into secondary contact, a sharp, narrow cline forms. If there is gene flow between the populations, then the cline grows wider as the phenotypes, genotypes, or karyotypes introgress into the adjacent populations [75]. The minimum time since secondary contact can be estimated from the time it would have taken the cline to grow to its present width if there were no barrier to gene flow [75], where t is time in generations, w is the present width of the cline, and l is the root-mean-square gene flow distance. Generation time in shrews is approximately 1 year. Gene flow is measured as the average dispersal distance of individual shrews per generation, which we estimated at 1 km following Polyakov et al. [11] (see discussion below). The time since secondary contact will, of course, have been much longer than predicted by this equation if barriers to gene flow prevent dissipation of the cline. Analyses were performed with Mathematica 9.0.
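The displayed equation (the paper's Equation 4) was lost in extraction. As a hedged sketch, neutral-diffusion cline theory gives t proportional to (w/l)²; the version below uses Barton and Gale's approximation w ≈ 2.51·l·√t, which reproduces the order of magnitude of the 0.8 to 158 year estimates quoted in the Results, though the constant in the paper's own equation may differ.

```python
# Hedged sketch: minimum cline age under free gene flow, using the common
# neutral-diffusion form w ~ 2.51 * l * sqrt(t); the paper's exact constant
# was lost in extraction and may differ slightly.
def min_generations(width_km, rms_dispersal_km=1.0, c=2.51):
    return (width_km / (c * rms_dispersal_km)) ** 2

for w in (2.5, 36.0):   # narrowest and widest phenotypic cline widths (km)
    print(f"w = {w:>4} km -> t >= {min_generations(w):.0f} generations")
# -> about 1 and 206 generations (~years, at one generation per year).
```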
Differentiation in shape and size between hybridizing races
Shape differentiation between pure races was approximately the same at the two hybrid zones (Table 1). Q_ST values ranged from 0.012 to 0.040, and the differences in mean shape between the races were statistically significant for all three data sets at both hybrid zones when tested with MANOVA.
Size differentiation of the elements was high at the N-T zone, but nearly absent at the M-S zone. Q_ST for size ranged from 0.084 to 0.225 at the N-T zone and from 0.000 to 0.013 at the M-S zone. The large size difference between the N-T races is consistent with a previous size-based morphometric analysis that found that Novosibirsk shrews were significantly smaller than Tomsk shrews [45].
The hybrid phenotypes were not directly intermediate in shape between those of the two parent races at either hybrid zone. Differences in the mean shapes of the pure race samples and hybrids are shown in Figure 4. In this figure, the morphological distance of the hybrid samples to the parents is represented by the lengths of the sides of the triangles; if the hybrid phenotype were precisely intermediate between the parent races, then the hybrids would lie on the line connecting the parents. The finding that the hybrids are not intermediate, which can result from overdominance, epistasis, or other non-additive effects, is common in multivariate polygenic traits [38]. Note that individuals with pure race karyotypes could be of hybrid origin if an F1 hybrid backcrossed with an individual of one of the parent races. Genetic recombination in the hybrid parent could cause the apparently pure race offspring to more closely resemble the hybrid phenotype. If backcrossing occurred asymmetrically between the parent races, it could contribute to the hybrid phenotypes being more similar to one parent race than the other.
Phenotypic clines
The phenotypic clines were substantially wider at the N-T zone (6.8 to 36.0 km) than at the M-S zone (2.5 to 4.2 km) (Figure 5). The widths of the phenotypic clines parallel those of the metacentrics at these two zones. At the N-T zone, the clines of the metacentrics forming the long chains (CIX) are 8.5 km wide (CI = 6.2 to 12.8 km) and those of the metacentrics forming the short chains (CIII) are 52.8 km wide (CI = 22.6 to 199.2 km) [11]; at the M-S zone, the metacentrics forming the long chain (CXI) have a cline width of 3.3 km (CI = 2.7 to 4.5 km) [10,34]. The wide clines in mandible shape at N-T were more linear than sigmoidal. The geographic centers of the phenotypic clines were very close to the centers of the metacentric clines, all within 0.3 km except for mandible shape at N-T, which was offset by 3 to 4 km (Figure 5).
No clines were found in any of the size data sets except at the N-T zone where the medial mandible had a cline 1.7 km wide centered 0.58 km toward the Tomsk side of the metacentric zone center and the lateral mandible had a cline that was 0.48 km wide and centered 1.58 km toward the Tomsk side of the zone. These sharp clines in mandible size are consistent with the distinctly larger size of the Tomsk race individuals [45].
Expectation of differentiation at the hybrid zones due to drift
We found that the amount of differentiation between the hybridizing races is smaller than expected from drift, though this interpretation depends on assumptions about heritability, effective population size, and the interval of time that drift operated. Those assumptions are relaxed below. We estimated the amount of differentiation expected from drift during the pre-contact period of allopatry and compared it to the observed differences at the two hybrid zones. If gene flow is substantially blocked between the races, we expect them to differ at least by the amount of drift that would have accumulated during their period of allopatry, not to mention subsequently. However, the observed differences across the two hybrid zones (0.18 to 0.32 standard deviates) were smaller than expected from drift (0.3 to 8.5 standard deviates) (Table 2). This finding seems to suggest that phenotypic differentiation between the races has been lost through gene flow, but it is based on assumptions about heritability, effective population size (N_e), and the amount of time available for drift to accumulate (Equation 2). We discuss each of these in turn here and consider alternative scenarios. The rate of drift depends on the heritability of the traits. Our estimates arbitrarily use a heritability of h² = 0.5. This is probably the highest plausible estimate based on work on heritability of skull and mandible shape in other species [41,42]. If heritability is lower, then the amount of divergence expected from drift will also be proportionally lower. If heritability were at the small end of the plausible range (e.g., h² = 0.25), then the expectation from drift would fall to 4.25 and 0.15 standard deviates for small and large N_e, respectively, instead of 8.5 and 0.3. If this is the case, then the observed differentiation is still less than expected from drift under small N_e, but is greater than drift under large N_e.
Drift also depends on population size. The concept of "population" in small-bodied animals like these shrews is complex because of metapopulation dynamics; the "populations" that would differentiate by drift could be viewed as the two local populations on either side of the hybrid zone, or they could be viewed as the two race metapopulations as units. We estimated drift based on both views. Because we are considering time scales that are several thousands of years long, probably 8,000 to 10,000 shrew generations, the metapopulation view almost certainly needs to be taken into account. Local population densities of S. araneus have been estimated in field studies at 5 to 98 animals per ha, not all of which are reproductive; individual home range sizes range from 0.04 to 0.28 ha [68]. We used N_e = 70 as our local population estimate. Long-term effective population sizes of entire races have been estimated at 68,000 to 74,000 based on mtDNA haplotypes [69]. We used N_e = 70,000 as our upper-end metapopulation estimate. The relevant value for our study most likely lies somewhere in between. The observed phenotypic differences at the M-S zone are smaller than expected from both the large and small estimates of effective population size; those at the N-T zone are smaller than expected from drift under small N_e, but similar in magnitude to expectations based on large N_e.
The effect of drift also depends on the interval of time over which it operated. Our estimate uses a 10,000 year interval, which is the approximate duration of Marine Oxygen Isotope Stage 2 (MIS2, 14-29 kya [76]), the period during which the Last Glacial Maximum occurred. The races are hypothesized to have lived allopatrically during this period, if not longer (see discussion below). If we make the conservative assumption that the races were isolated only during the most climatically intense part of MIS2 and that differentiation ceased with population expansion 10 kya, then the duration of allopatry could be as little as 10,000 years. If the time spent in allopatry was longer, then the expectation of differentiation due to drift will be larger. The entire period since the last interglacial includes not only MIS2 but also MIS3 and MIS4, a period of 57,000 years (14-71 kya [76]). Thus, the races could easily have been allopatrically separated for more than five times the duration of our estimates, but probably not less.
Relaxing our assumptions suggests that only if heritability is very low, N e very high, and the time spent in allopatry very short, will the phenotypic differences we observe be large enough to match the expectation of drift. Otherwise, our results suggest that phenotypic differentiation has been lost through gene flow because it is less than expected from drift.
Estimates of time since secondary contact under a model of free gene flow
The estimates of time since secondary contact based on phenotypic differentiation are much shorter than is realistic, indicating that barriers to gene flow must exist. If there were free gene flow across the hybrid zones, then secondary contact would have occurred no more than 0.8 to 158 years ago based on the widths of the phenotypic clines (Table 2). These clines have almost certainly been in place longer than that. The N-T hybrid zone has been studied for more than 25 years [77], and the two races themselves have been known from locations near the zone for almost 40 years [78,79]. The M-S zone has been known in its current position for more than a decade [34]. Indeed, even the time taken to produce this paper is many times longer than the lower estimate of 0.8 years since secondary contact. In fact, it is probable that the races have been in contact along hybrid zones for 10,000 years since the beginning of the Holocene, and probably for 6,000 years since the end of the Holocene climatic optimum. S. araneus is known from fossil cave faunas in the Altai Mountains during MIS2 [80], which document that shrews were living near the current position of the Novosibirsk-Tomsk hybrid zone during the last glacial maximum. The only way that the observed tiny levels of phenotypic differentiation can be maintained is with the protection of lowered gene flow. Our estimates of time since secondary contact are based on a dispersal distance of 1 km per generation derived from individual home range size [68,81]. If the dispersal rate were higher, then the estimated time since secondary contact would become even shorter. Nevertheless, for the estimated time since secondary contact to be even 8,000 years, the dispersal rate would have to be less than 1 cm per year. We therefore conclude that if gene flow were completely free across these hybrid zones, then phenotypic differentiation would long since have been lost, given the current widths of the phenotypic clines and the observed shape differences between the hybridizing races.

Figure 5 (caption). Horizontal axes show the distance in km from the center of the metacentric hybrid zone (Novosibirsk and Seliger distances shown as negatives) and vertical axes show the shape transects between the pure race samples (standardized with Novosibirsk and Seliger means equal to 0 and Tomsk and Moscow means equal to 1). The center of the metacentric zone is highlighted with a dashed red line. The ML estimate of the phenotypic cline is shown in black, with its center marked by a vertical grey line and its width indicated by light grey shading. Data points are labeled using the locality numbering system in Figure 2 and Table S1.
Estimates of selection based on Q ST /F ST ratios
Published levels of genetic differentiation (F_ST) were generally smaller than or similar in magnitude to phenotypic levels of shape differentiation (Q_ST). At M-S the shape traits had Q_ST values that ranged from 0.012 (lateral mandible) to 0.021 (skulls), and at N-T they ranged from 0.012 (lateral mandible) to 0.040 (skulls) (Table 1). Q_ST to F_ST ratios near 1 indicate that both phenotypic and neutral genetic markers have differentiated by the same process, which is normally interpreted as drift because of the neutrality of the genetic markers [61]. If the phenotype is under diversifying selection then the ratio will be greater than 1, but if the phenotype is under stabilizing selection in the two races then the ratio will be less than 1 [61]. Our results therefore suggest the possibility that skull shape has undergone diversifying selection. Invoking diversifying selection to explain differences that were identified above as smaller than expected from drift alone is, however, a contradiction. Mandible shape, with its lower ratios, is consistent with having either evolved by drift or having been subject to stabilizing selection on both sides of the hybrid zones.
Discussion
Collectively, our results suggest that the M-S and N-T hybrid zones are acting as partial but incomplete barriers to gene flow. Phenotypic differentiation is statistically significant, the clines in phenotype are similar in location and width to the chromosomal clines, and the differentiation is too great to persist in the face of completely unimpeded gene flow. Nevertheless, the amount of differentiation is less than the amount that would have accumulated by drift (or diversifying selection) during the inferred period of allopatry prior to secondary contact, suggesting that at least some differentiation has been lost subsequently. We will discuss each component of this logic here.
The close match between the positions and widths of the phenotypic and chromosomal clines suggests that the occurrence of chromosomal heterozygotes helps to maintain the phenotypic clines. Either the karyotypic incompatibilities are maintaining the cline directly, as a barrier to gene flow in their own right, or indirectly, if gene flow is impeded by genetic incompatibilities instead of chromosomal ones. If there were no inhibition of gene flow whatsoever, then the phenotypic clines would be wider or non-existent given the inferred age of the hybrid contact zones. However, the phenotypic clines at the N-T and M-S zones are comparatively narrow. Chromosomal clines in S. araneus are often tens of kilometers wide [5]. The hybrid zone between the Hermitage and Oxford races in Britain, for example, which involves a CV complex heterozygote but has an acrocentric peak at its center that favors the production of simple rather than complex heterozygotes, is on average 25 km wide and at points is up to 40 km wide [5,55]. The Drnholec-Ulm zone in the Czech Republic, which involves only simple heterozygotes (with maximally two CIII meiotic configurations), has chromosomal clines approximately 40 km wide [5,82]. Clines in other mammal species are also often tens of kilometers wide [75]. The 2.5 to 6.8 km wide phenotypic zones at the M-S and N-T zones are thus comparatively narrow, which provides the basis for our conclusion that they are similar in width to the metacentric clines. The 24.5 to 34.0 km wide clines in mandibular shape at N-T are more typical of phenotypic clines. For all our traits, the phenotypic clines encompassed the chromosomal cline centers, despite their comparative narrowness, suggesting both that the two types of clines are linked and that phenotypic differentiation is not significantly affected by introgression, which would offset the phenotypic centers in one direction or the other (though this may be the case with mandibular shape at the N-T zone). The lack of substantial phenotypic differences between the hybridizing races suggests that at least some differentiation has been lost to gene flow since secondary contact. However, the narrow phenotypic clines and the incompatibility between the phenotypic and independent estimates of time since secondary contact suggest that the hybrid zones are indeed barriers to gene flow, even if they are not completely impermeable. Interestingly, Q_ST/F_ST ratios for skull shape hint that there may have been selection for phenotypic differentiation between the hybridizing races, though whether that selection is acting at present or occurred many millennia ago in glacial refugia is unknown. That evidence is weak, however. If the alternative estimates for F_ST are correct, diversifying selection is unsupported by our data.
The overall picture that emerges from our results is one in which long periods of allopatric separation in glacial refugia were accompanied by phenotypic differentiation due to drift, or perhaps selection. After secondary contact in the post-glacial period, probably at least 10,000 years ago based on the widespread distribution of S. araneus fossils during glacial times [80,83,84], partial gene flow allowed some of this differentiation to be lost. That it was partial is indicated by the sharp clines and the incomplete loss of the differences, which have had more than enough time to dissipate completely if gene flow were unimpeded. The Robertsonian incompatibilities between the hybridizing races seem like the most plausible cause of the reduction in gene flow across the tension zones.
Several cycles of glacial differentiation and interglacial introgression have probably occurred over the longer history of these hybridizing races. While the origin of metacentric races has been hypothesized to be associated with post-glacial expansions since the last glacial maximum (LGM) at 21 kya [85,86], the fossil record of S. araneus s.l. extends back 2.5 to 3.0 million years [84]. Paleophylogeographic evidence indicates that some hybridizing races had last common ancestors that predate the last interglacial 125 kya [87], making it possible that the complex system of karyotypic variation has originated iteratively through the last five, ten, or even twenty glacial-interglacial cycles that have occurred during the species' history. If the intraspecific phylogeographic history of S. araneus is hundreds of thousands or millions of years deep, then the species would have passed through many allopatric phases split into glacial refugia. The process of speciation in S. araneus may be an iterative one, with "two steps forward" during long allopatric glacial cycles and "one step back" due to introgression during interglacial periods. The Robertsonian variation may not completely prevent gene flow, but the phenotypic data suggest that it inhibits enough of it that not all the differentiation is erased. This possibility deserves further investigation with a combination of paleontological, phenotypic, genetic, and phylogenetic analyses.
Conclusions
Our phenotypic data show that there is significant differentiation across the hybrid zones, that the phenotypic clines are centered on the same places and have similar widths as chromosomal clines, and that the amount of differentiation across the zones is greater than expected if it were dissipating due to gene flow. While some results suggest that differences may have arisen by weak diversifying selection, the preponderance of evidence suggests that they arose through drift.
Chromosome rearrangements are thus confirmed to be a factor that helps maintain phenotypic differentiation in the Common shrew, despite the gene flow that appears to exist at the hybrid zones. Our data refute the idea that allopatrically evolved differences are being rapidly lost through gene flow across the hybrid zones since secondary contact (e.g., [28]), though there is little doubt that some phenotypic differentiation has been erased by slow gene flow. The sharp phenotypic clines we found in several of the skull and mandible shape datasets would be blurred in only a few years if gene flow were unrestricted. In order for these clines to have been maintained for the thousands of years that they have probably existed, gene flow would have to be near zero at the loci that affect skull and mandible shape, which are expected to be spread across the entire set of autosomes. In Common shrews, chromosomal rearrangements are more than incidental in regard to phenotypes.
Supporting Information
Table S1 Detailed information about the localities sampled for this study.
"Biology"
] |
Exome Sequencing of Cell-Free DNA from Metastatic Cancer Patients Identifies Clinically Actionable Mutations Distinct from Primary Disease
The identification of the molecular drivers of cancer by sequencing is the backbone of precision medicine and the basis of personalized therapy; however, biopsies of primary tumors provide only a snapshot of the evolution of the disease and may miss potential therapeutic targets, especially in the metastatic setting. A liquid biopsy, in the form of cell-free DNA (cfDNA) sequencing, has the potential to capture the inter- and intra-tumoral heterogeneity present in metastatic disease, and, through serial blood draws, track the evolution of the tumor genome. In order to determine the clinical utility of cfDNA sequencing we performed whole-exome sequencing on cfDNA and tumor DNA from two patients with metastatic disease; only minor modifications to our sequencing and analysis pipelines were required for sequencing and mutation calling of cfDNA. The first patient had metastatic sarcoma and 47 of 48 mutations present in the primary tumor were also found in the cell-free DNA. The second patient had metastatic breast cancer and sequencing identified an ESR1 mutation in the cfDNA and metastatic site, but not in the primary tumor. This likely explains tumor progression on Anastrozole. Significant heterogeneity between the primary and metastatic tumors, with cfDNA reflecting the metastases, suggested separation from the primary lesion early in tumor evolution. This is best illustrated by an activating PIK3CA mutation (H1047R) which was clonal in the primary tumor, but completely absent from either the metastasis or cfDNA. Here we show that cfDNA sequencing supplies clinically actionable information with minimal risks compared to metastatic biopsies. This study demonstrates the utility of whole-exome sequencing of cell-free DNA from patients with metastatic disease. cfDNA sequencing identified an ESR1 mutation, potentially explaining a patient’s resistance to aromatase inhibition, and gave insight into how metastatic lesions differ from the primary tumor.
Introduction
In 2014 there were over 500,000 cancer-related deaths in the United States; 90% of these deaths were from metastatic disease. [1,2] While cancer is characterized by clonal progression, metastatic lesions and recurrent disease can differ substantially from the primary tumor, harboring unique mutations of clinical significance. [3] Identifying these differences as they emerge requires serial sampling of the tumor genome, [4] often from multiple metastatic sites, which may have limited feasibility due to technical challenges or financial burden. Sequencing from blood plasma, however, has the potential to identify these changes without the invasiveness associated with solid tumor biopsies. [5][6][7] Following the detection of mutant forms of KRAS and NRAS in the plasma of cancer patients, researchers have pursued cfDNA as a form of "liquid biopsy" of an individual's cancer, using it to identify oncogenic alterations in a variety of malignancies. [8][9][10][11][12][13][14] Changes in circulating tumor DNA (ctDNA) over the course of treatment can be measured easily through serial sampling due to the minimally invasive nature of blood draws. [15][16][17][18][19] Previous studies have focused on quantifying ctDNA levels to measure disease burden, [15,19,20] searched for the emergence of resistance mutations to specific therapies, [18,[21][22][23] tracked tumor evolution, [18] and assessed prognosis [12,24,25] and recurrence risk. [16] The detection of ctDNA requires especially sensitive methods due to its dilution by the DNA from non-cancerous cells, with variant allele percentages as low as 0.01% in early disease. [12,26,27] The study of tumors of varying types and stages has found that while ctDNA levels vary significantly between samples, metastatic disease correlates with higher levels of cfDNA in the plasma and a higher fraction of ctDNA. [6,28] The relative abundance of cfDNA and ctDNA makes it well-suited for whole-exome sequencing, [18] which, unlike panels focusing on hotspot or patient-specific mutations, has the potential to identify novel mutations, giving it unique value in the study of therapeutic resistance and tumor evolution. Whole-exome sequencing from plasma has demonstrated high levels of concordance between mutations in the tumor tissue and cfDNA in metastatic disease; however, this has previously only been shown in samples with exceptionally high ctDNA levels (33-65% of cfDNA of tumor origin), greatly limiting its clinical utility. [18] In this study, we investigated the feasibility of whole-exome sequencing from the plasma of two patients with metastatic disease. We found that with only minor alterations to our experimental and analytical methods we could accurately recapitulate the tumor genome from plasma, identify the same clinically relevant mutations identified by sequencing tumor biopsies, and gain novel information about the evolution of the disease. These methods were sensitive in a sample with an average ctDNA variant allele percentage of 3.7%, indicating that approximately 7.4% of cfDNA was of tumor origin (ctDNA), a level sufficiently low to identify ctDNA for a substantial portion of metastatic patients. [6,12,16] We conclude that cfDNA sequencing of patients with metastatic cancer lends valuable insight to the study and treatment of the disease.
Patient #1
A 52-year-old female was diagnosed with primary intimal sarcoma of the pulmonary artery that was unresectable at presentation. The patient was initially treated with radiation followed by chemotherapy (Fig 1), and at this time her tumor was screened for oncogenic mutations using a multiplexed mass spectroscopy-based assay that revealed the presence of PIK3CA R88Q and Q546R in the primary tumor. [29] As a result, she entered a phase I clinical trial of a PI3 kinase inhibitor and had a partial response that lasted 12 months. Twenty months after diagnosis the primary tumor DNA was screened again using a targeted panel on an Ion Torrent PGM. This confirmed the PIK3CA mutations but also revealed KRAS G12R. A blood draw was taken at this time, isolating 1 ml of buffy coat and 25 ml of plasma (Table 1). At the time of blood collection the patient had numerous lesions in the lungs, pulmonary artery, and liver (Table 1). Due to the high concentration of cfDNA in the plasma (63 ng/ml), whole-exome sequencing was conducted. Based on the KRAS mutation, the patient was then enrolled in a phase Ib clinical trial combining MEK and PI3 kinase inhibitors. The treatment was stopped after eight months due to complications resulting from treatment, and the patient died 30 months after the initial diagnosis.
Whole-exome sequencing of the primary formalin-fixed paraffin-embedded (FFPE) tumor revealed 48 somatic, exonic mutations (Fig 2A, Table 2). We conducted whole-exome sequencing of the cfDNA (524X average depth) and, with a threshold of 1.5% variant allele percentage, we identified 47 of the 48 somatic mutations present in the primary tumor. At those 48 sites the mean sequencing depth in the cfDNA was 561X (181-1,197X). The average variant allele percentage across these 47 mutations was 3.7%, indicating that approximately 7.4% of the plasma DNA was of tumor origin. Importantly, we identified from plasma the activating KRAS G12R mutation and both activating mutations in PIK3CA (R88Q and Q546R). Controlling for sequencing depth, number of cfDNA mutant reads, or variant allele percentage in the primary tissue did not significantly improve the correlation.
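A minimal sketch of the detection and tumor-fraction logic described above, using the 1.5% threshold from the text and the approximation that the tumor fraction is about twice the mean VAF at heterozygous diploid sites (as in the 3.7% VAF to ~7.4% tumor DNA figure); the data tuples are illustrative.

```python
# Flag cfDNA sites as detected at >= 1.5% variant allele fraction and estimate
# the fraction of plasma DNA of tumor origin from the detected sites.
def detected_in_plasma(alt_reads, depth, min_vaf=0.015):
    return depth > 0 and (alt_reads / depth) >= min_vaf

def tumor_fraction(vafs):
    """~2x mean VAF, assuming heterozygous mutations at diploid sites."""
    return 2.0 * sum(vafs) / len(vafs)

sites = [(21, 561), (18, 480), (25, 610)]        # (mutant reads, depth) examples
vafs = [a / d for a, d in sites if detected_in_plasma(a, d)]
print(f"detected: {len(vafs)}/{len(sites)}, tumor fraction ~ {tumor_fraction(vafs):.1%}")
```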
Fifteen additional mutations were identified in the cfDNA. Among these, 11 were not present in the primary tumor and four were present in the primary tumor (Fig 2B), but at allele frequencies below our 10% threshold for calling them in the primary tumor. These mutations were chosen for validation by sequencing on the Ion Torrent PGM, where six of them were confirmed, eight failed to validate, and one did not sequence (Fig 2B). The validation rate of 43% highlights the necessity of using orthologous sequencing methods to confirm the presence of low frequency mutations in cfDNA. cfDNA variant allele percentage correlated poorly with the primary tumor (Fig 3).
Patient #2
A 41-year-old female was diagnosed with ER+ HER2+ breast cancer, which had spread to the lymph nodes. The patient underwent neoadjuvant chemotherapy (TAC) followed by a bilateral mastectomy and oophorectomy (Fig 4A). Following surgery, the patient underwent radiation therapy and was treated with Trastuzumab for one year and Anastrozole for 33 months, until the discovery of a 4 cm liver lesion and bone metastases at the 11th thoracic vertebra (T11). Additional chemotherapy and Herceptin were administered, but the treatment was stopped following identification of liver metastases. At this time we collected a blood draw approximately 30 minutes before a liver biopsy was taken and obtained an archived FFPE sample of the primary tumor. The blood draw yielded 15 ml of plasma at an average cfDNA concentration of 98 ng/ml (Table 1). Following the first plasma sample the patient underwent treatment with the anti-HER2 drug TDM1 but, following an initial partial response, died 62 months after initial diagnosis. Whole-exome sequencing of the primary tumor and liver metastasis revealed a total of 48 nonsynonymous somatic mutations (Fig 4B, Table 2). Sequencing of cfDNA to an average depth of 309X identified 38 of these mutations with an average variant allele percentage of 14%, indicating that approximately 28% of cfDNA was of tumor origin. cfDNA VAP correlated well with the VAP in the liver metastasis (Fig 5A and 5B), but correlated poorly with the primary tumor (data not shown). Additional deep sequencing confirmed that an activating PIK3CA (H1047R) mutation was present only in the primary tumor, not in the liver metastasis or cfDNA, indicating that either the mutation emerged after metastasis or was not present in the subpopulation that seeded the metastasis. Seventeen additional somatic nonsynonymous mutations were called from the plasma sample. Closer examination revealed that eight of these (47%) were unique to the plasma, potentially originating from metastatic sites not sampled (Fig 5C). Two of those mutations were selected for validation via Ion Torrent PGM, and both of them successfully validated (Fig 5C).
By sequencing cfDNA from plasma we are able to get a snapshot of the tumor, likely from multiple metastatic sites. Here, the high correlation between the liver metastasis and cfDNA indicates that considerable information about the current tumor genome could be gained without the need for a biopsy. A mutation in ESR1 (D538G), which has been shown to impart resistance to estrogen deprivation therapy, was found in both biopsies of the metastases and in the cfDNA. [30,31] This mutation was not present in the initial exome sequence of the primary tumor, and its absence was confirmed by subsequent validation sequencing of ESR1 to a depth of 4,272X (Fig 6A). It is likely that the resistance of the tumor to the aromatase inhibitor Anastrozole can be explained by the mutant ESR1. This mutation was confirmed in a CLIA laboratory, and anti-estrogen receptor treatments were considered between cfDNA sequencing and the patient's death. A total of 15 mutations were selected for validation on the Ion Torrent PGM, 13 of which were validated (Figs 4B and 5C). A second plasma sample was taken during response to TDM1 treatment (as determined by CT scan), and eight mutations present in the pre-treatment cfDNA sample were quantified in the during-treatment sample (Fig 6B). The pre-treatment cfDNA sample had a mean variant allele percentage of 13% across these eight sites, while the during-treatment sample had a mean variant allele percentage of only 0.04% at the four sites containing mutant reads and no detectable mutant reads at the other four mutations tested.
Discussion
In this study, we have demonstrated that whole-exome sequencing of cfDNA from patients with metastatic cancer can accurately identify clinically actionable mutations, and requires only minimal alterations to well-established sequencing protocols. We were able to sequence and gain valuable data from a plasma sample with a mean variant allele percentage of 3.7%, much lower than the values demonstrated in previous studies and well below the levels seen in a substantial portion of metastatic cancer patients. [12,15,16,18,19] Adoption of this approach has the potential to greatly expand the utility of sequencing compared with the biopsy-dependent approaches that are currently the standard of care. Mutations present in the cfDNA correlated tightly with mutations present in a synchronous metastasis sample, indicating that sequencing cfDNA can generate a more accurate picture of a patient's metastatic tumor genome than relying on a biopsy of the primary tumor. The cfDNA tightly correlates with tumor tissue taken at the time of plasma acquisition and can therefore be used to take "snapshots" of the cancer genome. Additionally, mutations unique to cfDNA were found in both patients, potentially representing lesions not sampled by biopsy. Validation via orthogonal sequencing methods confirmed that these mutations were not from normal tissue or the result of sequencing errors and were likely from sites not present in the biopsy. The inability to sample all metastatic sites within a cancer patient is a severe limitation of current sequencing techniques, and may be resolved with minimal modifications to standard sequencing procedures using cfDNA.
The two patients in this study had high levels of cfDNA in their plasma (Table 1), which allowed us to use over 100ng of cfDNA to construct our sequencing libraries. However, for many patients a concentration of 10ng of cfDNA per ml of plasma is more typical, meaning that multiple blood draws would be required to obtain sufficient material for sequencing. Realizing this, we adopted the methods outlined in the Capp-Seq paper from the Diehn lab [19], which allow libraries to be made more efficiently and require less initial input DNA. Using these methods we successfully produced complex libraries from less than 40ng of cfDNA and successfully sequenced ~25% of the input DNA molecules (as opposed to the ~1% efficiency achieved in our study). This improvement has allowed us to sequence sufficient cfDNA for nearly all our subjects. Another advantage of sequencing cfDNA is the ability to sequence serially collected, minimally invasive plasma samples, allowing for near real-time monitoring of the tumor genome during treatment. The identification of emerging mutations may allow therapies to be started or stopped as soon as the tumor environment renders this advantageous. In the case of patient #2, it is possible that serial cfDNA sequencing would have identified the emergence of the ESR1 mutation, and treatment may then have been adjusted from estrogen deprivation therapy (Anastrozole) to one targeting the estrogen receptor itself (e.g. Fulvestrant): this shift, and potentially others, may have delayed the progression of disease. In addition to looking for known resistance mechanisms, the nature of whole-exome sequencing allows for the identification of novel recurrent resistance mechanisms in a cohort of patients undergoing the same treatment, which may not be included in a targeted panel. Notably, during the response of patient #2 to TDM1 there was a dramatic reduction in the level of ctDNA, rendering it nearly undetectable by our sequencing approach. Monitoring via exome sequencing during such periods would require extremely high sequencing depth, which would be prohibitively expensive at current sequencing costs.
A substantial focus has been placed on the sequencing of primary tumors, and massive sequencing projects (e.g. TCGA) have revealed a considerable amount of information about driver mutations in a variety of cancers. However, metastatic tumors, which are responsible for most patient deaths, are comparatively understudied. By sequencing primary tumors along with serially collected plasma samples it is possible to monitor metastatic progression at a genomic level. In patient #2 we observed an activating PIK3CA mutation in the primary tumor that was not seen in either the liver metastasis or the cfDNA. It is likely either that the PIK3CA mutation became clonal after the metastatic process or that the mutation was not present in the metastatic clone; regardless, treatment with a PI3K inhibitor may have been effective in shrinking the primary lesion, but would have been ineffective against any of the distant metastases. In contrast, sequencing of patient #1 showed that the cfDNA contained nearly all of the mutations identified in the primary tumor. While we were unable to obtain a sample of the metastasis, the low number of mutations unique to the cfDNA makes it reasonable to infer that there were relatively few differences between the metastasis and the primary tumor. Sequencing cfDNA from a larger cohort of patients may help us understand how metastatic progression varies in different tumor types and may identify therapeutically relevant patterns. The clinical utility of this method will depend largely on the systematic assignment of targeted therapies to identified cfDNA mutations.
Notably, services for cfDNA sequencing are becoming commercially available, but they are based on panels and therefore have limited utility in a research setting. We demonstrate here that there is significant value in whole-exome sequencing of cfDNA.
Patient enrollment
Written consent was obtained from two patients with metastatic cancer for enrollment in this study. The study and consent procedures were approved by the Oregon Health & Science University Institutional Review Board and were in accordance with federal and institutional guidelines. Up to 40mls of blood was collected in EDTA tubes. Plasma was isolated as described previously [16] and stored at -80°C until cfDNA was extracted using the QIAamp Circulating Nucleic Acid kit (Qiagen). Buffy coat was isolated from the same blood sample and DNA was extracted using the DNA Blood Mini kit (Qiagen). As part of the aforementioned study and consent procedure, FFPE tissue from the patients' primary tumors was acquired from archived pathology samples. Patient #1's sample was acquired from the University of Washington Pathology Department in Seattle, WA (http://www.pathology.washington.edu/clinical/dermpath/contactinfo). Patient #2's sample was acquired from Compass Oncology in Vancouver, Washington (http://compassoncology.com). FFPE tissue was extracted using the DNA FFPE Tissue kit (Qiagen). The same patient's liver metastasis was taken from a frozen core biopsy and extracted with the DNeasy Blood & Tissue kit (Qiagen).
Whole-exome sequencing
A minimum of 100ng of cfDNA and 0.3-2μg of DNA from buffy coat and tumor tissue were used to create sequencing libraries. Agilent SureSelect XT reagents and protocol were used to prepare sequencing libraries. DNA from buffy coat and tumor tissue was sonicated to an average size of 150bp using a Covaris E220. Plasma DNA samples were not sonicated, as plasma DNA is already highly fragmented. Hybrid capture was conducted using Agilent SureSelectXT Human All Exon V4+UTRs. 100bp paired-end sequencing was conducted on an Illumina HiSeq 2000. An entire lane was dedicated to sequencing plasma DNA samples and all other libraries were sequenced two to a lane. To maximize sequencing depth and avoid PCR duplicates, the plasma sample from the patient with metastatic sarcoma was made into three separate libraries, each sequenced on one full lane, giving an average sequencing depth of 1,034X. Only a single library was needed to achieve sufficient coverage of cfDNA for patient #2.
Bioinformatic analysis
In order to detect mutations we aligned HiSeq paired-end reads to the hg19 human reference genome using bwa. [32] We used bwa aln to find the coordinates of the input reads and then used bwa mem to generate alignments in SAM format. We converted the SAM format to BAM (binary) format using Samtools import. After sorting and indexing the reads in the BAM file, we used Picard Tools [33] MarkDuplicates to remove duplicate reads generated during the PCR amplification stage: removal is done by finding all reads that have identical 5' coordinates and keeping only the read pair with the highest base quality sums. After duplicate removal we realigned reads around SNVs and indels using the GATK Software Library. [34,35] The three libraries of the sarcoma patient were combined after PCR duplicate removal: local positions to target for realignment were called using RealignerTargetCreator and the reads were realigned using IndelRealigner. Finally, quality scores were recalibrated using GATK BaseRecalibrator and PrintReads, which binned reads based on the original quality score, the dinucleotide, and the position within the read. Sequencing statistics are summarized in Table 2 and were generated using Samtools flagstat, GATK DepthOfCoverage, and Bedtools pairToBed.
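The duplicate-removal step above is performed by Picard MarkDuplicates; purely to illustrate the criterion described (identical 5' coordinates, keep the read pair with the highest sum of base qualities), a simplified stand-alone sketch might look like the following. The record layout is an assumption for illustration, not the real BAM/Picard interface.

```python
def remove_pcr_duplicates(read_pairs):
    """Keep, for each group of read pairs sharing identical 5' coordinates,
    only the pair with the highest sum of base quality scores.

    Each read pair is assumed to be a dict with:
      'coords'   -- tuple of the 5' mapping coordinates of both mates
      'qual_sum' -- sum of base quality scores over both reads
    """
    best = {}
    for pair in read_pairs:
        key = pair['coords']
        if key not in best or pair['qual_sum'] > best[key]['qual_sum']:
            best[key] = pair
    return list(best.values())

# Example: two pairs share 5' coordinates; only the higher-quality one is kept
pairs = [
    {'coords': (('chr1', 100), ('chr1', 250)), 'qual_sum': 2900},
    {'coords': (('chr1', 100), ('chr1', 250)), 'qual_sum': 3100},
    {'coords': (('chr2', 500), ('chr2', 640)), 'qual_sum': 2800},
]
print(len(remove_pcr_duplicates(pairs)))  # 2
```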
To call mutations we compared the tumor samples with the normal samples using muTect v1.1.4, with the buffy coat as the matched normal. [36] Variants were considered somatic mutations if: (a) they were not present in the dbSNP database (except if the variant was also in the COSMIC database, e.g. KRAS and PIK3CA mutations), (b) there was 30x sequencing depth at that site in the tumor/plasma sample and 10x sequencing depth in the matched normal sample, (c) the variant allele percentage was at least 10% for the tumor samples and 1.5% for the plasma samples, and (d) there were at least two reads containing the variant allele. Mutations in cfDNA were then further filtered out if the matched normal had >1 read supporting the mutation or the mutation was only present on one strand of the cfDNA. The impact of variants was checked using Mutation Assessor v2 (www.mutationassessor.org).
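As a compact restatement of filters (a)-(d) and the additional cfDNA-specific filters, the sketch below encodes the thresholds given in the text; the per-variant record structure is hypothetical, since the actual filtering was applied to muTect output.

```python
def passes_somatic_filters(v, is_plasma):
    """Apply the somatic-mutation criteria described in the text to a
    hypothetical per-variant record `v` (a dict with the fields used below)."""
    # (a) not a known germline SNP, unless it is also a known somatic hotspot
    if v['in_dbsnp'] and not v['in_cosmic']:
        return False
    # (b) minimum depth: 30x in the tumor/plasma sample, 10x in the matched normal
    if v['tumor_depth'] < 30 or v['normal_depth'] < 10:
        return False
    # (c) minimum variant allele fraction: 10% for tumor, 1.5% for plasma
    if v['vaf'] < (0.015 if is_plasma else 0.10):
        return False
    # (d) at least two reads containing the variant allele
    if v['alt_reads'] < 2:
        return False
    # cfDNA-specific filters: <=1 supporting read in the matched normal
    # and support required on both strands
    if is_plasma and (v['normal_alt_reads'] > 1 or not v['both_strands']):
        return False
    return True
```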
Mutation validation
Primers were designed to cover a selection of mutations identified in each patient and then used to PCR amplify buffy coat, plasma, and tumor DNA samples from both patients. For each sample, amplicons were pooled in equimolar amounts and 10-100 ng were used for library creation using the Ion Xpress Plus Fragment Library Kit. Sequencing templates were generated using emulsion PCR on the Ion OneTouch 2 using the Ion PGM Template OT2 200 kit. Up to six barcoded samples were multiplexed on Ion 316 v2 chips. Sequencing was performed on a Personal Genome Machine (PGM) sequencer (Ion Torrent) using the Ion PGM 200 v2 sequencing kit. Torrent Suite software version 4.0.2 was employed to align reads to hg19. Reads were visualized using IGV v 2.2.32 (Broad Institute) and variant allele frequencies were determined for sites previously identified via Illumina sequencing.
Author Contributions
Conceived and designed the experiments: PTS TMB NJW JEK TMK. Performed the experiments: TMB NJW KJC TAM JWG. Analyzed the data: MP CLC. Contributed reagents/materials/analysis tools: CLC. Wrote the paper: TMB PTS JWG CLC. | 4,967.2 | 2015-08-28T00:00:00.000 | [
"Biology",
"Medicine"
] |
EFFECT OF THE APPLICATION OF CIRCULARITY REQUIREMENTS AS GUIDED QUESTIONS ON THE CREATIVITY AND THE CIRCULARITY OF THE DESIGN OUTCOMES
Implementation of the circular economy is beginning to become a reality and the early stages of product design are crucial for a proper adoption of the circular model. However, several studies claim that when designers have many restrictions concerning sustainability imposed upon them, the creativity of the design outcome decreases. Therefore, it is necessary to help designers to think in a more circular manner and at the same time to increase creativity. In this research, we study whether creativity and circularity of the design outcomes change depending on how circular requirements are applied. For this purpose, an experiment has been carried out with 20 teams of three or four Engineering Design students. The teams were asked to propose a novel idea for a product design problem taking circular economy into account. All the teams had requirements concerning circularity expressed as selection criteria. They applied a modified “6-3-5 Method”, in which four people sketched and wrote three ideas during a period of time and then exchanged the ideas with the next person. Half of them received some circular economy requirements in the form of explicit guided questions during the creative generation of ideas, while the other half did not. The results indicate that explicitly introducing guided questions leads to no significant differences in creativity or circularity. However, using guided questions about circular requirements does lead to more dispersed circularity and creativity results. The practical implications of this are interesting, since circularity requirements do not decrease creativity when applied as explicit questions during the generation of ideas.
INTRODUCTION
Sustainability is the satisfaction of the needs of human society without compromising the resources given by the ecosystems (Morelli, 2011) and maintaining this over a very long time (Heijungs et al., 2010). Human activity poses a threat to the future because nature's resources are finite, and we need to face a number of sustainability challenges.
According to d'Orville (2019), achieving long-term sustainability requires coming up with new solutions, and creativity is therefore closely linked to sustainability. This author claims that creativity is necessary to attain the Sustainable Development Goals (SDGs) adopted by the United Nations. This idea is defended in other studies. For example, Lozano (2014) indicates that creativity, and creative thinking in particular, is crucial to question reductionist mental models and build more sustainable societies. Mitchell and Walinga (2017) maintain that sustainability requires creative ways of thinking and new ideas. More recent studies also suggest that firms should make the effort to promote creative thinking initiatives (Awan et al., 2019).
Taking into account that circular economy and sustainability are closely related terms, although with some differences (Geissdoerfer et al., 2017), this work focuses on how to generate more creative and circular products during idea generation. Following the linear model of extract-produce-use-discard has created fundamental challenges (Rockström et al., 2009), many of which can be addressed during product design (Buchanan, 2001). The circular economy, as opposed to the linear economy, is a "system that is restorative or regenerative by intention and design. It replaces the 'end-of-life' concept with restoration, shifts towards the use of renewable energy, eliminates the use of toxic chemicals, which impairs reuse, and aims for the elimination of waste through the superior design of materials, products, systems and, within this, business models" (The Ellen MacArthur Foundation, 2012: 7).
The circular economy can be introduced during the earlier phases of design as a list of design requirements. However, previous findings show that considering environment-related requirements may lead to less creative results. For instance, when young designers are expected to implement very strict design requirements, which may give way to an understanding of "what is" instead of "what can be", the creativity of their results decreases (Cucuzzella, 2016). It also decreases when the requirements are so open that it is too challenging to reimagine a different future (Cucuzzella, 2016). Interestingly, using the terms "requirements" and "shall" for a concept generation task leads the designer to focus on satisfying these explicit requirements and inhibits creativity (Mohanani et al., 2014). The same study reveals that if a list of ideas is provided during the concept generation task without explicitly using the word "requirement", creativity increases. Another study highlights that when designers use detailed environmental information, the solutions generated are more conservative and less creative. This makes it necessary for future methods and tools to deliver relevant information while avoiding this fixation effect (Collado-Ruiz and Ostad-Ahmad-Ghorabi, 2010). The three studies agree that, when strict requirements exist, designers' creativity decreases. So, a tension arises between creativity and satisfying requirements.
It is therefore necessary to analyse how to encourage circularity without compromising creativity in order to design products that are both more circular and more creative. There is a gap in knowledge about how to introduce the circular economy approach and its requirements into product design without compromising the creativity of the products.
The aim of this work is to gain knowledge about how applying circular requirements in the form of guided questions (GQ) influences creativity and circularity. Accordingly, the following research question is posed: Does applying a creative method introducing circular requirements as guided questions improve creativity and circularity?
To address this objective, this paper considers circularity requirements implicitly and compares the results of applying the 6-3-5 Method to generate ideas with the results when some circularity requirements are used in the form of guided questions during the generation of ideas.
LITERATURE REVIEW
Circularity
In the circular economy approach, resources are retained as much as possible by preserving and recirculating them instead of discarding them (Milios, 2018). The different actions that can be performed to introduce circularity are classified in three main groups, according to Bocken et al. (2016):
• Slowing loops: designing long-life goods and product-life extension services to extend and/or intensify a product's life. This results in a slowdown of the flow of resources.
• Narrowing loops: using as few resources as possible per product.
• Closing loops: recirculating the materials again through recycling or reuse, among others, after the use phase.
Nowadays, this change of model is starting to become a reality (European Commission, 2015). However, although awareness is more widespread, it is necessary to implement strategies to reduce the use of resources by humans (Bocken et al., 2016). Companies also have to develop business models that fit circular consumption, thereby acting as an exchange agent while also allowing and facilitating the highest possible number of uses for their products (Selvefors et al., 2019). For firms it is also important to manage the pressure from stakeholders to increase the implementation of sustainability or circularity (Awan et al., 2017).
In Andrews' words, "designers have to change their design thinking and practice and lead the development of the Circular Economy by creating products and services that match all inherent criteria of this model" (Andrews, 2015). Design thinking is understood as the collaborative process by which the designer's sensibilities and methods are employed to match people's needs with what is technically feasible and a viable business strategy (Brown, 2009). Design thinking skills can help designers to solve complex problems and to be able to adapt to changes (Razzouk and Shute, 2012). This can be achieved by understanding the user's needs and the social and economic context (Melles et al., 2011).
Therefore, industrial designers play a key role in circular design (Lofthouse, 2004). For designers, this implies the challenge of creating robust products that, in turn, follow circularity principles and remain with users for as long as possible without jeopardising the product's functionality, while also considering consumer behaviour and attitudes and the social context (Lofthouse and Prendeville, 2018). The circular economy context must be considered right from the start of the design process (Bakker et al., 2019) by correctly using resources to optimise the product in terms of both its function and the resources employed to make it. Following this, Moreno et al. (2017) define a taxonomy of strategies that guide product designers to introduce circular economy into product design from the earlier stages of the design process. In their taxonomy, they focus on five main categories: resource conservation, life cycles, whole system design, the customer and development. Another example of an approach to circular design is the "Circular Design Guide" developed by IDEO (2017). This consists of a set of online tools that give designers a guide on how to design in a more circular way, with several design methods adapted to the principles of circular economy.
The Ellen MacArthur Foundation (2013), Van der Berg and Bakker (2015) and Mulder et al. (2014), among many others, have established strategies and guidelines to introduce circularity into product design. These design guidelines can guide designers to achieve more circular solutions. It is essential that design strategies and business models are related in order to encourage the transition from a linear to a circular economy (Camacho-Otero, 2015; Moreno et al., 2016). For instance, Bocken et al. (2016) propose a series of strategies for slow and closed loops:
- Design strategies to slow loops:
Designing long-life products
• Design for attachment and trust
• Design for reliability and durability
Design for product-life extension
• Design for ease of maintenance and repair
• Design for upgradability and adaptability
• Design for standardisation and compatibility
• Design for dis- and reassembly
- Design strategies to close loops:
• Design for a technological cycle
• Design for a biological cycle
• Design for dis- and reassembly
Some metrics also exist to assess circularity in products and evaluate their improvement potential. However, no standard method is currently available (The Ellen MacArthur Foundation, 2015; European Environment Agency, 2016). In order to apply these metrics, it is often necessary to know product details that are not defined in the conceptual phase: materials, weights, etc. Moreover, these metrics do not usually cover all the aspects contemplated under the circular economy umbrella. The metrics or methods available partially evaluate circularity, but there are deficiencies in the assessment of circularity in concepts (Ruiz-Pastor et al., 2019). Circularity should thus be incorporated into product design. Consequently, it is necessary to adapt present methods to measure circularity in order to assess it globally and coherently (Mesa et al., 2018).
Creativity in Engineering Design
The design process is basically a problem-solving process. The initial phase of this process is the conceptual design stage (Pahl and Beitz, 1996), where the most important decisions are made (Cross, 1999). This stage starts from the problem to be solved, which is translated into requirements and design specifications that the product to be designed must achieve. The designer has to solve the problem in a creative way.
Creativity is an innate human characteristic and a very important factor when facing new design engineering challenges (Amabile, 1996). This is why an individual's creativity has been well studied in the field of psychology (Guildford, 1968; Torrance, 1969; López-Martínez and Navarro-Lozano, 2008). Nevertheless, in the field of Design Engineering, creative results also depend on the creative process (Csikszentmihalyi, 1998). In a real product design situation, the solution-seeking phase is the one in which a designer's creativity is witnessed to a greater extent. Many design methods exist to help synthesise solutions. Numerous studies have been published about creative problem-solving methods, as seen in the collections of methods by Jones (1970), VanGundy (1988) and Higgins (1994), and in the extensive literature in many different journals.
One of the most widely used methods is brainstorming and its variants. Brainstorming methods, in which stimulation is achieved by using the stimuli generated in the group, are extensively used in industry (López-Mesa, 2003). The "6-3-5 Method", also known as a brainwriting method (Rhorbach, 1969), is an intuitive idea-generation method that falls within the progressive methods, in which ideas are generated by repeating the same set of steps a number of times in discrete progressive steps (Shah, 2000). It is a creative method for generating many ideas in a short time that is a variation of brainstorming and complements the individual work obtained by this technique.
Another kind of method to help introduce new elements into a product is to introduce guided questions (GQ) during the design process. During the design process, a question is a statement that requests the design actions that designers need to answer (Eris, 2004), and it can act as a guide to lead designers to find a solution for the problem under consideration. Questions are used both implicitly and explicitly to help designers move away from their usual problem-solving routine (Cardoso et al., 2016).
A product's creativity is generally defined as the combination of its novelty and usefulness for most of the metrics that evaluate it (Chulvi et al., 2012). In other words, creativity in a product comes about when a stakeholder uses his or her capacity to produce novel and valid solutions for design purposes (Sarkar and Chakrabarti, 2008). This definition encompasses the three above-cited concepts that refer to creativity in design: the stakeholder, the solution-generating process, and the novelty and validity of the solutions that are generated.
Creativity in circular design
However, apart from the validity referring to the solution's usefulness, creativity may also involve seeking new characteristics that are to be included in the product design, such as circularity. In this sense, Charter (2018) states that designing for the Circular Economy requires thinking about how to enable product circularity in the early creative stages of design. Jawahir and Bradley (2016) claim that value creation through circularity requires, among other things, the use of visionary thinking, which combines creativity with an established technical basis to create implementable solutions to "real-world" problems.
These new characteristics or demands of the client or society can, in turn, be interpreted as a restriction by designers and, as such, as mentioned in the introduction section, could be an obstacle to creativity (Cucuzzella, 2016; Mohanani et al., 2014; Collado-Ruiz and Ostad-Ahmad-Ghorabi, 2010). Nevertheless, circularity needs to be introduced into product design. This means that we face the problem of how to guide designers to introduce circularity into product design in a creative, innovative fashion.
METHODOLOGY
This section describes the research methodology. A practical experiment was carried out in order to validate the research question with empirical data.
Design Experiment
The experiment was performed with 72 year-3 Industrial Design students, 35 males and 37 females. First, all the participants attended a preparation session about circular economy. In this session, circular economy and design strategies to obtain circular designs were explained using examples. The following week a two-hour workshop was carried out to analyse four items of school furniture in terms of circular economy requirements. In this workshop the participants were distributed across four different sessions due to space and organisation constraints. The workshop was carried out in 20 work groups with three or four members in each one. These two sessions served as a preparation for the empirical study. The setting and the materials were the same in the four sessions: a room with tables and seats to allow participants to work in groups.
A week later the same teams that had worked together were asked to generate a novel proposal for an item of school furniture. They were provided with the description of a design problem, in this case, a new piece of school furniture that should:
- be novel
- respond to some educational trends in which furnishings play a key role
- take circular economy into account
Before starting the generation of ideas, each team was provided with written instructions telling them to apply a creative method (the modified 6-3-5 Method) with the circularity requirements expressed as selection criteria; for half of the teams, the instructions additionally presented the circular economy requirements as explicit guided questions to be used during the generation of ideas.
At the end of the experiment, the participants were asked to individually complete a questionnaire to assess their satisfaction with the method followed to generate ideas. It asked them how much they liked the method, whether they thought it was easy, and whether they believed it had helped them to obtain more novel and circular ideas.
Creativity assessment
The creativity (C) of the proposals obtained was assessed according to the method proposed by López-Forniés et al. (2017), which was developed specifically to assess concepts and rates novelty (N), usefulness (U) and technical feasibility (F) on a common scale. All three of these aspects were assessed by following the criteria set out in Table 1. The final score was then calculated for each concept by combining the three values (eq. 1).
The creativity score ranged between 1 (more creativity) and 0.001 (less creativity).
Scale | Explanation | Rate
Much novelty | The concept will be new and cannot be compared | 1
Much usefulness | The concept solves the problem |
Much feasibility | The concept is easy to achieve without any technical changes |
Average novelty | The concept exists but with considerable differences |
Average usefulness | The concept only solves part of the problem |
Average feasibility | Some investment is required to implement the concept |
Little novelty | The concept already exists but for other applications |
Little usefulness | The concept solves part of the problem under specific circumstances |
Little feasibility | The changes are relevant and considerable investment is required |
No novelty | The concept already exists for the same application |
No usefulness | The problem has already been solved in a simpler way |
No feasibility | The changes required are difficult to achieve and very high investment is needed |
Table 1. Assessing creativity (López-Forniés et al., 2017)
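Equation 1 itself is not legible in the extracted text, and only the rate of 1 for the highest level survives in Table 1 above. The sketch below therefore makes two explicit assumptions: that the three ratings are combined multiplicatively (consistent with the stated score range of 0.001 to 1), and that the intermediate rates are the 0.7 and 0.3 weights that the circularity assessment later reports reusing from this metric, with 0.1 assumed for the lowest level so that the minimum product matches 0.001.

```python
# Assumed rating levels: 1 is stated in Table 1; 0.7 and 0.3 are the weights the
# circularity assessment reports adopting from this metric; 0.1 for the "no"
# level is an assumption chosen so that the minimum score matches 0.001.
RATES = {'much': 1.0, 'average': 0.7, 'little': 0.3, 'no': 0.1}

def creativity_score(novelty, usefulness, feasibility):
    """Assumed form of eq. 1: multiplicative combination of the three ratings."""
    return RATES[novelty] * RATES[usefulness] * RATES[feasibility]

print(creativity_score('much', 'much', 'much'))  # 1.0 (maximum)
print(creativity_score('no', 'no', 'no'))        # ~0.001 (minimum, as stated)
```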
Circularity assessment
The circularity of each proposal was assessed in terms of the number of aspects included in the proposal that enhanced the product's circularity. To obtain a score, all the characteristics present in the proposals were classified according to whether they followed the design guidelines that focus on slow, narrow or closed loops (Bocken et al., 2016; Mesa et al., 2018).
This was achieved by applying a weighted sum of all the characteristics of each proposal that referred to slow loops, narrow loops or closed loops. Since there is no semi-quantitative standard method to measure circularity in concepts, the weighting values of the method used for the creativity assessment were adopted. The values of the ratios were thus established following the same weighting used to assess creativity in the metric proposed by López-Forniés et al. (2017). Consequently, ratios of 0.3, 0.7 and 1 were considered, depending on the type of action. According to The Ellen MacArthur Foundation (2013), the closer the product remains to the user, i.e. the more closed the loop is, the more favourable the action being performed in this loop will be for circularity. Therefore, the characteristics that favoured slow loops were assessed by multiplying their number by 1, those favouring narrow loops by 0.7, and those favouring closed loops by 0.3. The characteristics set out in the solution proposals that did not refer to circularity were not taken into account. This meant that the higher the obtained score (eq. 2), the more circular the proposal. Table 2 shows the values used to score the circularity of the ideas. Hence:
C = Is × 1 + In × 0.7 + Ic × 0.3 (eq. 2)
where Is, In and Ic are the numbers of characteristics favouring slow, narrow and closed loops, respectively.
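Equation 2 maps directly onto a few lines of code. The sketch below simply counts a proposal's circularity-related characteristics by loop type and applies the weights stated above (and summarised in Table 2 below); the function name and input format are illustrative only.

```python
LOOP_WEIGHTS = {'slow': 1.0, 'narrow': 0.7, 'closed': 0.3}

def circularity_score(characteristics):
    """Eq. 2: C = Is*1 + In*0.7 + Ic*0.3, where Is, In and Ic count the
    characteristics favouring slow, narrow and closed loops. Characteristics
    not related to circularity contribute 0."""
    return sum(LOOP_WEIGHTS.get(label, 0.0) for label in characteristics)

# Example: two slow-loop ideas, one narrow-loop idea, one closed-loop idea,
# and one non-circularity idea -> 2*1 + 0.7 + 0.3 + 0 = 3.0
print(circularity_score(['slow', 'slow', 'narrow', 'closed', 'other']))  # 3.0
```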
Type of action in the loop | Rate
Ideas for slow loops | 1
Ideas for narrow loops | 0.7
Ideas for closed loops | 0.3
Other ideas that did not show slow, narrow or closed loops | 0
Table 2. Rating of the types of ideas for an accounting of the circularity of conceptual designs
Some of the characteristics obtained that did not refer to circularity included making the best of space, fun furniture, promoting creativity or being comfortable for children, etc. Fig. 6 shows one of the design outcomes with the characteristics used to evaluate the circularity.
When the methodology was modified by directly introducing GQ, the design results were more dispersed for circularity and creativity, possibly as a result of how designers' different personality profiles react. This would agree with Eris (2004), who says that introducing questions during the design process has an influence on the design outcomes.
Previous findings showed that very strict requirements decrease creativity. In this study, we contribute new findings showing that the use of circular requirements as guided questions during the creative method increases circularity and creativity for some designers and decreases them for others.
CONCLUSION
The empirical data obtained showed that introducing GQ explicitly caused no difference in the creativity of the results compared with introducing the requirements implicitly in the problem description. That is, design requirements affect creativity in a similar way regardless of how they are introduced, i.e. implicitly as selection criteria or as explicit guided questions while solving the problem.
As regards how this affects the circularity of the proposed solutions, initially no significant difference was found in the results obtained by the two groups. Moreover, there are no significant differences between the numbers of circularity aspects. Differences did appear, however, in the dispersion of the results, insofar as the participants with implicit requirements presented circularity results similar to one another, while wider dispersion was noted for those who were given GQ. In other words, one part of the study population obtained better results, while the other part achieved poorer results. This indicates that using requirements in the form of explicit GQ might have different effects on the designers who participated in our experiment.
When analysing this difference in terms of perception, a slight preference for using GQ was shown, because GQ were also perceived to help the designers generate more circular ideas. Although a greater number of participants stated that they liked the method without GQ more, there was also a higher percentage of participants who stated that they did not like that method, while the percentage of those people who used GQ and did not like the method was practically zero. Therefore, the preference for using GQ appears to arise because GQ avoid discomfort, not because they provide comfort.
These findings have both practical and educational implications. Introducing circular requirements explicitly during the "6-3-5 Method" would affect the spread of the circularity of the design outcomes. At a practical level, this finding can help in the management of design teams in companies in order to generate more creative and circular results. This can also have effects at an educational level. So, this study is a starting point from which to delve deeper into the interaction between the creative method, the questions about circular requirements and the designers.
The fact that the results do not show any differences regarding creativity supports the idea that there is still a need for research on how to foster creativity when designers are required to design more circular products. As dispersion was much wider when explicitly using GQ, the notion that using this methodology affects designers differently is reinforced, but exactly in what way was not studied. Verifying this would be very important in order to optimise the design results for each individual by allowing the optimum methodology to be selected according to the designer's personality profile. It would be very interesting to distinguish those methodologies affected by the designer's personality or thinking style from those that are not. Future research lines in the design methodologies field thus indicate that it is worth studying the human-method interaction in order to optimise the design process by selecting optimum methodologies for each type of person. The advantage of having conducted a practical experiment is that the study has used real data but, in order to make up for the limited results, it would be a good idea to enlarge the number of participants in future experiments, to obtain more extensive and diverse data and to verify the results obtained in this work.
| 5,390 | 2020-10-20T00:00:00.000 | [
"Environmental Science",
"Engineering"
] |
The Use of Pulsed Field Gel Electrophoresis in Listeria monocytogenes Sub-Typing – Harmonization at the European Union Level
Pulsed Field Gel Electrophoresis (PFGE) has been widely applied to characterize numerous bacteria. PFGE is a form of RFLP typing in which the bacterial genome is digested with rare-cutting enzymes. These restriction enzymes cut genomic DNA infrequently and thus generate a smaller number of DNA fragments (10-20 bands). These fragments, of a wide range of sizes from 20 kb to 10,000 kb (Herschleb et al., 2007), are separated using specialized electrophoresis techniques. Differences in the restriction profiles are used to carry out genetic comparisons among isolates. Computer-based analysis is simplified, enabling rapid and easy comparison of strains. Currently, PFGE is often considered the "gold standard" of molecular typing methods for bacterial foodborne pathogens such as Salmonella, E. coli, Campylobacter, Yersinia, Vibrio and Listeria.
Introduction
Agarose gel electrophoresis is commonly used for the separation of DNA molecules in molecular biology research and in bacterial characterization in particular. It separates DNA fragments by size. It is widely used to detect PCR amplification products or to determine DNA restriction profiles. Consequently, it is used in most bacterial characterization methods.
The food-borne disease caused by Listeria monocytogenes (L. monocytogenes) is one of the main public health concerns in Europe (Allerberger & Wagner, 2010;EFSA, 2010;Goulet et al., 2008). Outbreaks and related clusters have to be detected as quickly as possible in order to improve the surveillance and control of this pathogen. Among the molecular methods used for sub-typing L. monocytogenes, PFGE has been widely applied to characterize food and human isolates over the last ten years (Brosch et al., 1996). Due to its high discriminating power and epidemiological relevance, this method has become the "gold standard" for L. monocytogenes sub-typing (Graves & Swaminathan, 2001).
One way to accelerate the recognition of clusters common to food and human isolates is to ensure that a significant number of isolates is sub-typed by the laboratories involved in its surveillance. A standardized protocol was developed by the Centers for Disease Control and Prevention (CDC) in Atlanta, USA (PulseNet) and has been largely used at the international level (Graves & Swaminathan, 2001). Several surveillance networks currently work throughout the world using this protocol (Pagotto et al., 2006). These networks have proven their efficiency for the early detection and better understanding of L. monocytogenes outbreaks (CDC, 2010; CDC, 2011; Gilmour et al., 2010).
In Europe, in the frame of the PulseNet Europe project, two PFGE sub-typing inter-laboratory trials were carried out in 2003 (Brisabois et al., 2007; Martin et al., 2006). The resulting PFGE data demonstrated that PFGE profiles can be compared and exchanged between laboratories. However, PulseNet Europe has not been active since November 2006 due to a lack of funding. Moreover, in the PulseNet Europe sub-typing inter-laboratory trials, only the quality and interpretability of the profiles were assessed (Martin et al., 2006). Profile interpretation was not evaluated and remains difficult to standardize, in particular when dealing with a wide range of profiles including large bands, double peaks and uncertain bands.
PulseNet USA has developed standard operating procedures (SOP) for computer-assisted PFGE profile analysis using BioNumerics software (Applied Maths, Sint-Martens-Latem, Belgium) (Gerner-Smidt et al., 1998). The SOP has evolved toward an automated interpretation process. However, some steps still require the user to make critical decisions during the analysis, in particular for (1) abnormal band assignment and (2) closely related profile interpretation. This crucial step is a major drawback of PFGE and its improvement remains a challenge for PFGE standardization.
In 2006, the ANSES Maisons-Alfort Laboratory for Food Safety was designated European Union Reference Laboratory (EURL) for L. monocytogenes. It coordinates a network of 29 National Reference Laboratories (NRLs) representing 27 Member States as well as Norway. Most of them are in charge, amongst other tasks, of typing food, environmental and veterinary L. monocytogenes strains isolated at the national level.
One of the EURL objectives was to harmonize the PFGE protocols used by the European food NRLs. This article first describes the principle of PFGE applied to L. monocytogenes and the relationship between a PFGE profile and the bacteria's genetic make-up. It then explains the EURL SOP for interpreting PFGE profiles, based on the PulseNet USA SOP. Finally, it focuses on the work undertaken by the EURL to stimulate NRLs to perform PFGE with a standardized protocol including an SOP for profile interpretation.
Principles of PFGE -Relatedness between PFGE profiles and genetic reality
The PFGE method starts with the extraction of the bacterial chromosomes without damaging the DNA, by means of a very gentle extraction procedure. The chromosomes are then restricted using a rare-cutting enzyme. For L. monocytogenes, the enzymes are ApaI or AscI (Carriere et al., 1991). The restriction enzymes AscI and ApaI generate, respectively, between 6 to 12 and 14 to 17 fragments in the range of separation of the PFGE. The combinations of the profiles generated by the two enzymes are used to characterize the strains. A third profile generated by SmaI can be added to reinforce the analysis (Carrière et al., 1991). The restricted DNA fragments are commonly separated in a PFGE CHEF (contour-clamped homogeneous electric field) system (Chu et al., 1986). For L. monocytogenes, the range of separation is between 33 and 1135 kb. The migration parameters applied depend on the bacterial species. For L. monocytogenes the established parameters are a pulse angle of 120° and a linear switch-time ramp of 4 to 40 s. These migration parameters have been standardized in the PulseNet USA protocol. The differences in the restriction profiles enable genetic comparisons among L. monocytogenes strains. Profiles are specific to each strain and are used as characterization data to identify them (Graves & Swaminathan, 2001). However, the restriction profiles are merely an image of the genome structure and must be interpreted as such (Tenover et al., 1995).
Relationship between PFGE profiles and genetic reality
The PFGE profiles are composed of DNA fragments separated along the PFGE migration range. A band actually consists of a huge number of copies of the same DNA fragment flanked by two restriction sites. However, it often happens that one band is composed of several fragments of the same size but coming from different parts of the bacterial chromosome (Singer et al., 2003). This explains why band intensity can vary along the profile depending on the number of superposed fragments in the same band.
The numbers and positions of the bands on the gel determine which bands are different or identical between different strain profiles. The first interpretation procedure defined by Tenover et al. (1995) showed that the interpretation of the number of band differences between a pair of isolates is based on the minimum number of genetic mutational events that would result in the observed number of band differences. For example, two isolates that differ by two to three bands would be considered as closely related since a single genetic event can explain this difference. More recently, researchers of the USA CDC proposed that the "Tenover" criteria were not generally applicable for investigation of all foodborne outbreaks. Genetic transfer, superposition of bands and other artifacts which might affect the relatedness of the profiles and the interpretation must be taken into account when interpreting profiles. According to the new criteria adopted for L. monocytogenes one band of difference is considered to be significant for distinguishing between two profiles (Barrett et al., 2006). However, in practice, in spite of these criteria, the interpretation requires many subjective decisions. This subjectivity increases the variability of the profiles and, consequently, affects the way in which results are interpreted (Gerner-Smidt et al. 1998).
The burden imposed by PFGE implies the application of a highly standardized protocol for the performance, interpretation and exchange of PFGE profiles between centers in Europe.
PFGE protocol
The EURL PFGE protocol developed by EURL and standardized between NRLs is similar to the newly updated PulseNet USA (PN USA) PFGE standardized protocol (Halpin et al., 2009) with minor modifications. In the PN USA extraction protocol, the cell density per plug is lower than in the EURL protocol (0.9-1.0 OD 610 PN USA against 1.6-1.8 OD 600 EURL) and consequently in proportion to the cell density, proteinase K, lysozyme, Sodium Dodecyl Sulfate and other lysis buffer reagents are used at lower concentration. The lysozyme is prepared in a TE buffer (PN USA) instead of sterile water (EURL). Lysozyme incubation is undertaken at 56°C (PN USA) instead of 37°C (EURL). The amount of restriction enzyme is higher in the PN USA protocol than in the EURL one, for AscI 0.125 U/µL (PN USA) instead of 0.100 U/µL (EURL), and for ApaI 0.250 U/µL (PN USA) instead of 0.100 U/µL (EURL).
The reproducibility of this protocol between European NRLs has already been assessed twice, on the occasion of two inter-laboratory proficiency testing trials (PT trials) in 2009 and 2010. The results obtained were satisfactory. At this time, 14 NRLs, representing 14 Member States, have been assessed as competent by the EURL for L. monocytogenes PFGE sub-typing.
Standard operating procedure for PFGE profile interpretation
This method is based on the interpretation method developed by Barrett et al. (2006) and the PulseNet USA PFGE profile interpretation SOP. Following the recommendation of Barrett et al. (2006), a single band of difference is taken as sufficient to distinguish two PFGE profiles. It includes a lower limit for band interpretation at 33 kbp, established according to the EURL's own experience and the conclusions drawn from the PT trial organized on Salmonella by Peters et al. (2006), and a new profile identification strategy based on database library organization (explained in detail below). This new strategy aims to reduce any artificial diversity generated by the operator's interpretation of the profile. Prior to any analysis of PFGE profiles, the quality of the gel should be checked. This involves two steps: an assessment of the overall quality of the gel and an interpretation of the PFGE profiles. These two steps are developed below.
Visual interpretation
The gel should not contain background or debris which impedes interpretation of the image. Nevertheless, if only part of the image is degraded, the intact part of the gel can be analysed normally. PFGE profiles must be fully visible in order to be analysed. Gels with small spots can be interpreted if the image is first processed using image processing software to remove the spots from the image. Gels must enable good contrast and should not contain any fuzzy fields that could impede the analysis. The gel should not exhibit any grossly incomplete restriction bands (figure 1). The expected number of bands must not exceed the range given in Table 1. By far the most frequent problem with PFGE profiles is the appearance or disappearance of bands due to incomplete restriction of the DNA, as shown by Martin et al. (2006). This problem is most likely related to incomplete DNA restriction, which is often due to poor DNA quality. If this occurs, contamination of the reagents, buffers or purified water used during the extraction step is the primary suspect.
Enzyme | Number of bands expected
AscI | 6 - 12
ApaI | 14 - 17
Table 1. Number of bands expected for a PFGE profile of L. monocytogenes (Carrière et al., 1991)
It is sometimes difficult to detect incomplete restriction bands. They are detected when, in the upper part of the profile, the bands do not follow a descending order of intensity with respect to their molecular weight (figure 2). However, an exception to this rule does not mean that the profile must be systematically rejected. Some incomplete restriction bands can be tolerated in a PFGE profile and criteria have been established for validating a profile carrying slightly incomplete restriction bands (figure 1 right).
Protocol for validation of doubtful bands in a PFGE profile
The PFGE profile should be analysed in two parts: first the upper part of the profile, which contains the most intense bands, composed of long DNA fragments (between 200 and 1000 kb), and then the lower part of the gel, which has smaller fragments (between 33 and 200 kb). In the upper part there is a low probability of bands overlapping, since the bands observed in this area have a high molecular weight and are few in number.
In the lower part of the gel, band overlapping is more likely because there are more fragments and they have low molecular weights. The validation protocol is only applied to the upper part of the profile. A 200 kb separation limit was decided upon for separating the upper and lower parts of the profile. This limit was defined empirically according to the EURL database (1500 AscI and ApaI L. monocytogenes PFGE profiles).
The validation protocol is based on an assessment of incomplete restriction bands relative to the average intensity of the profile's bands. Applied to the upper part of the profile, suspect bands (figure 3 left, grey arrows) may be accepted if their intensity is less than 30% of the average intensity of the profile's bands.
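A minimal sketch of that acceptance rule, assuming band intensities are available as numbers on a common scale (the 30% threshold is from the text; the function name and example values are illustrative):

```python
def tolerate_suspect_band(suspect_intensity, profile_band_intensities,
                          threshold=0.30):
    """Return True if a suspect (incomplete restriction) band in the upper part
    of a profile can be tolerated, i.e. its intensity is below 30% of the
    average band intensity of the profile."""
    mean_intensity = sum(profile_band_intensities) / len(profile_band_intensities)
    return suspect_intensity < threshold * mean_intensity

# Example: a faint suspect band against a profile of four strong bands
print(tolerate_suspect_band(12.0, [60.0, 55.0, 48.0, 40.0]))  # True
```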
Reference system used
The Salmonella Braenderup (S. Braenderup) H9812 reference system was established by the USA CDC for PFGE of L. monocytogenes (figure 5 left) (Hunter et al., 2005). The former reference system, L. monocytogenes H2446 (figure 5 right), is still being used for extraction control by the EURL, but only digested with AscI (see Table 2). In both cases the reference profiles have to be visible and conducive to interpretation (figure 5), i.e., it must be possible to position all their bands precisely and the intensity of the peaks should not be at the background level. The S. Braenderup H9812 XbaI digestion product must frame the analyzed profiles to allow an efficient normalization process and must be run every six lanes. The L. monocytogenes H2446 AscI digestion product is loaded at the extreme left and extreme right of the gel. Moreover, the full set of controls must be applied to validate the reference systems, as shown in Table 2.
Migration distortion analysis
Migration in the gel should not be distorted excessively in comparison to the standard reference system associated with the experiment in the normalization software (here BioNumerics v6.5, Applied Maths, Sint-Martens-Latem, Belgium). For this purpose, the "Distortion bar" option of the BioNumerics software can be used.
Interpretation of profile saturated intensity area
If a profile contains saturation zones, it cannot be interpreted. To detect this type of anomaly, the densitometric curve of the profile's bands simply needs to be displayed via the densitometric curve calculation feature (see the BioNumerics user manual). Saturated peaks are shown with their tips truncated (figure 7). No saturation can be accepted in a molecular PFGE profile.
Fig. 7. PFGE profile with saturated bands (red circles).
Interpretation of molecular profiles
Because every signal is related to the presence of DNA in the gel, molecular PFGE profiles must be interpreted objectively, as shown below, with a band on every signal (the three examples in figure 8).
Profile analysis protocol
The analysis begins with the marking of the bands found on the PFGE profiles, followed by the method developed by the EURL to help operators take band assignment decisions. As mentioned before, the profile identification strategy is based on the use of library identification. This method uses the whole database as a reference to assign profiles within a group of profiles that are at least 90% similar (also called a library unit) and then allows the assignment of a pulsotype number. A library is composed of several library units organized as follows. The percentage of similarity between two profiles within a library unit is calculated using the Dice coefficient, which depends on the number of bands that are common to both profiles. The determination of the bands common to both profiles depends on two parameters, tolerance and optimization, both set at 1% as recommended by PulseNet Europe (Martin et al., 2006). Profiles are grouped together according to UPGMA (unweighted pair group method using arithmetic averages). This method allows profiles to be grouped according to their percentage of similarity. A library pools the profiles obtained with the same restriction enzyme according to the same PFGE protocol. For L. monocytogenes, two libraries were created, for ApaI and AscI profiles.
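The Dice comparison and the 90% library-unit grouping described above can be sketched as follows. The band matching here uses a simple relative size tolerance as a stand-in for the 1% tolerance and optimization settings applied in BioNumerics, so this is an illustration of the idea rather than of the software's exact algorithm; the function names and the "mean similarity to the unit" assignment rule are assumptions.

```python
def matched_bands(profile_a, profile_b, tolerance=0.01):
    """Count bands common to two profiles, treating band sizes (in kb) as
    matching when they differ by less than `tolerance` (relative difference).
    Simplified stand-in for the 1% tolerance used in BioNumerics."""
    used, common = set(), 0
    for a in profile_a:
        for i, b in enumerate(profile_b):
            if i not in used and abs(a - b) <= tolerance * max(a, b):
                used.add(i)
                common += 1
                break
    return common

def dice_similarity(profile_a, profile_b, tolerance=0.01):
    """Dice coefficient: 2 * (shared bands) / (bands in A + bands in B)."""
    common = matched_bands(profile_a, profile_b, tolerance)
    return 2.0 * common / (len(profile_a) + len(profile_b))

def assign_to_library_unit(new_profile, library_units, threshold=0.90):
    """Assign a new profile to the library unit it resembles most, provided the
    mean similarity to that unit's profiles reaches the 90% threshold;
    otherwise return None (the profile would start a new unit)."""
    best_unit, best_score = None, 0.0
    for unit_id, profiles in library_units.items():
        score = sum(dice_similarity(new_profile, p) for p in profiles) / len(profiles)
        if score > best_score:
            best_unit, best_score = unit_id, score
    return best_unit if best_score >= threshold else None
```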
The profile interpretation step starts following the assignment of the bands on the profile. The purpose of this step is to minimize the diversity within a library unit by reducing the artificial diversity generated by the operator's interpretation of the profile. The first step of the interpretation is the comparison of the new profile against all library units. The new profile will be included in the library unit with the nearest average profile. At this stage the operator has to respect the following library unit definition: (1) verify the homogeneity of the new profile with the library unit content, (2) change the new profile to match its assigned library unit as much as possible, (3) perform profile modifications within the library unit limit (90% similarity between library unit components), and (4) check that a band is always placed on a true signal. The example detailed below shows how this method is applied.
In the case of library unit no. 17 (figure 9), all the profiles have a strong signal in their central part marked by three bands (yellow rectangle). However, in some cases the shape of the signal does not enable three bands to be positioned with certainty. These bands are called suspect signal bands (figure 10). Thus there are two distinct profile categories in this library unit: those which allow the easy positioning of these three bands (figure 11) and the other profiles carrying suspect signals. It is in this situation that analysis by profile library comes into play. In this example, all the suspect profiles will be marked with the same number of bands as the clearly marked profiles, but only if the suspect profiles allow the positioning of three bands on their signal (see the question marks in figure 10).
Fig. 9. ApaI library unit no. 17 presented in its entirety as a PFGE profile comparison file.
If a profile has two clearly distinct bands in the central position, and not three as for the other profiles of this library, these bands must be marked as they are and thus depart from the library type profile. It will then be necessary to check that this profile clearly remains in the library and that it meets the library unit definition. In this example we focused on one part of the profile; however, this method must be applied to every suspect signal.
Finally, once the new profile has been included in the library unit, it remains to be checked that there is no spanning between unit 17 and another unit in the database (i.e., a profile moving from one library unit to another). This verification can be made by marking the library unit on the global database dendrograms. This parallel organization of the database dendrograms and library units allows the monitoring of database organization and integrity. The introduction of new profiles into the global database dendrograms can, over time, change the UPGMA organization. These changes must be followed and checked on a regular basis (every three months at the EURL) to keep the library unit organization consistent with the global database dendrograms. An automated script will be developed in collaboration with the software supplier to help the operator in this task.
Strengthening NRLs' capacity for standardized sub-typing of Listeria monocytogenes
The EURL PFGE methods were dispatched to all NRLs. The laboratory has been accredited by the French Accreditation Committee (COFRAC) for the PFGE methods since 2008 (accreditation no. 1-2246, Section Laboratories, www.cofrac.fr). Annual workshops, including typing sessions, organized by the EURL encourage NRLs to perform PFGE, and annual training sessions have been organized by the EURL since 2008. Moreover, the EURL organized PT trials in 2009 and 2010 to evaluate the ability of NRLs to perform conventional serotyping, molecular serotyping and PFGE. These PT trials will be renewed on a regular basis; the next one has already been planned for 2012.
Conclusion
The PFGE profile interpretation SOP is vital for the administration of a PFGE profile database. The published SOP describes the process used by the curator to treat PFGE profiles, and it can be followed by NRLs for the organization of their own local databases. This SOP addresses a problem inherent to PFGE profile databasing: the introduction, during profile interpretation, of an artificial diversity due to the operator in charge of the analysis. PT trials on PFGE profile interpretation based on this SOP will be organized by the EURL. The implementation of the SOP is part of the effort made by the EURL to strengthen PFGE typing at the European level. Once NRLs are trained and evaluated on the SOP, it will be possible to share not only comparable PFGE profiles but also normalized and marked PFGE profiles. A natural outcome of the project is the implementation of a European PFGE database shared and filled in by the NRLs. The EURL database for L. monocytogenes food isolates (EURL Lm DB) was established in 2011 by the EURL and is currently available to all NRLs. It enables the gathering of exhaustive typing and epidemiological information on L. monocytogenes strains circulating throughout the food chain across Europe. Most will agree that gel electrophoresis is one of the basic pillars of molecular biology. This term covers a myriad of gel-based separation approaches that rely mainly on fractionating biomolecules under an electrophoretic current, chiefly according to molecular weight. In this book, the authors present simplified fundamentals of gel-based separation together with exemplary applications of this versatile technique. We have tried to keep the contents of the book crisp and comprehensive, and we hope that it will deliver benefits and valuable information to the readers.
"Biology",
"Medicine"
] |
Acceleration Techniques for Analysis of Microstrip Structures
This paper discusses acceleration techniques for the analysis of microstrip structures. Accurate calculation of the parameters of such structures with numerical techniques requires the solution of dense matrix equations involving thousands of unknowns, which takes a long time. In this paper we present three techniques for accelerating such computations: a parallel algorithm implemented on a computer cluster, a sparse bound-matrix technique, and a graphics processing unit in conjunction with CUDA technology. The execution time and speed-up of the proposed techniques are evaluated by comparing different numbers of processors and unknowns. The results indicate that all the presented techniques can significantly reduce computation time. DOI: http://dx.doi.org/10.5755/j01.eee.20.5.7109
I. INTRODUCTION
Microstrip devices are widely used in modern microwave systems [1]-[5]. Microstrip transmission lines, coupled lines, and multiconductor lines (Fig. 1) are used as basic elements in the design of devices such as filters [1], couplers [2], antennas [3], and delay lines [4], [5]. Although microstrip lines have been known and used for more than 50 years, much attention must still be paid to their analysis when new microstrip devices are designed. Microstrip structures are most accurately analysed by numerical techniques such as the finite difference method (FDM) [6], the finite element method (FEM) [7], the method of moments (MoM) [8], the finite difference time domain (FDTD) method [9], as well as hybrid methods [10] and simulators [11].
The main drawback of numerical methods is their significant demand for computer resources, chief among them computation time, which in some cases can reach tens of hours [12]. Advances in computer technology offer different ways to speed up the calculation of electromagnetic problems. For example, Cui et al. in [13] and Jobava et al. in [14] applied MoM on PC clusters to calculate scattering by large 3D objects and current distributions, respectively. Ergul and Gurel in [15] also used a computer cluster to solve scattering problems. Angeli et al. in [16] demonstrated an implementation of FDM on a 64-processor cluster. Yu et al. in [17] and Geterud et al. in [12] realized the FDTD method on computer clusters. There are also examples of using a graphics processing unit (GPU) instead of a CPU to solve electromagnetic problems: Potratz et al. in [18] used FEM in conjunction with a GPU to calculate the scattering parameters of waveguide structures, and Livesey et al. in [19] applied a GPU and CUDA technology to accelerate FDTD calculations. Motuk et al. in [20] presented an implementation of FDM on a multiprocessor architecture on an FPGA device. A review of the open publications [12]-[20] reveals that such hardware devices and other computation-accelerating techniques have not yet been used to analyse microstrip structures, and we attempt to do so in this paper.
The paper is organized as follows. In Section II, the parallel calculation of the general parameters of microstrip structures using a computer cluster is presented. The sparse bound-matrix technique for accelerating the calculation of microstrip structure parameters is briefly described in Section III. The general principle of organizing calculations using a GPU and CUDA technology, and its application to the analysis of microstrip structures, is presented in Section IV. Conclusions are given in Section V. In our previous work [6], we proposed a parallel algorithm for the analysis of coupled microstrip structures, i.e. the calculation of the dependency of the electrical parameters of these structures on their design parameters.
The main electrical parameters of microstrip structures, the effective permittivity ε_eff,i^(c,π) and the characteristic impedance Z0,i^(c,π) for the c- and π-normal waves, can be found from the corresponding per-unit-length capacitances:

ε_eff,i^(c,π) = C_i^(c,π) / C_i^((a)c,π), (1)

Z0,i^(c,π) = 1 / (c0 · √(C_i^(c,π) · C_i^((a)c,π))), (2)

where c0 is the speed of light in vacuum, C_i^(c,π) is the per-unit-length capacitance of the i-th microstrip for the c- or π-wave, and C_i^((a)c,π) is the capacitance of the same microstrip when the substrate dielectric constant is changed to εr = 1.
According to (1) and (2), the electrical parameters of coupled or multiconductor microstrip lines for the c- and π-normal waves are calculated twice: first with the dielectric substrate, and then with the substrate substituted by air (εr = 1).
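As a hedged illustration of equations (1) and (2), the following sketch computes the parameters of one mode from the two per-unit-length capacitances; the capacitance values shown are placeholders, since in practice they come from the FDM solver.

```python
# Sketch of equations (1) and (2) for one normal wave: eps_eff and Z0 of
# microstrip i from its per-unit-length capacitance with the substrate (C)
# and with the substrate replaced by air (Ca).
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def line_parameters(C, Ca):
    eps_eff = C / Ca                        # equation (1)
    Z0 = 1.0 / (C0 * math.sqrt(C * Ca))     # equation (2)
    return eps_eff, Z0

eps_eff, Z0 = line_parameters(C=1.2e-10, Ca=4.0e-11)  # illustrative F/m values
print(f"eps_eff = {eps_eff:.2f}, Z0 = {Z0:.1f} ohm")  # ~3.00, ~48.1 ohm
```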
It follows that the analysis of coupled microstrip structures can be arranged in a parallel fashion by combining 5 computers in a cluster (Fig. 2). Cyclic calculations are necessary in the analysis of microstrip structures when the influence of the design parameters of these structures on their electrical parameters is investigated. In this case the master-node (Fig. 2) sends the range of possible variations of the design parameters and the variation steps to the slave-nodes. The slave-nodes, operating in a given cycle, calculate the per-unit-length capacitances: the "c-substrate" slave-node calculates C_i^c, the "π-substrate" slave-node C_i^π, the "c-air" slave-node C_i^((a)c), and the "π-air" slave-node C_i^((a)π). After the slave-nodes finish their calculations, they send the results to the master-node, which sorts the received data and calculates the effective permittivity and impedance according to (1) and (2).
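A minimal sketch of this master/slave scheme is given below, reusing line_parameters() from the previous sketch; compute_capacitance() is a hypothetical placeholder for the FDM solver, and a real cluster implementation would use message passing between machines rather than local processes.

```python
# Minimal sketch of the master/slave scheme of Fig. 2. The four capacitance
# jobs for one design point run in parallel "slave" processes; the "master"
# combines the results via equations (1) and (2).
from concurrent.futures import ProcessPoolExecutor

def compute_capacitance(kind, params):
    # Placeholder: a real slave-node would run the FDM solver here.
    dummy = {"c-substrate": 1.2e-10, "pi-substrate": 1.0e-10,
             "c-air": 4.0e-11, "pi-air": 3.5e-11}
    return dummy[kind]

def analyse_design_point(params):
    kinds = ["c-substrate", "pi-substrate", "c-air", "pi-air"]
    with ProcessPoolExecutor(max_workers=4) as pool:  # four slave-nodes
        futures = {k: pool.submit(compute_capacitance, k, params) for k in kinds}
        caps = {k: f.result() for k, f in futures.items()}
    # Master-node: equations (1) and (2), via line_parameters() defined above.
    return (line_parameters(caps["c-substrate"], caps["c-air"]),
            line_parameters(caps["pi-substrate"], caps["pi-air"]))

if __name__ == "__main__":
    print(analyse_design_point(params={"W/h": 0.5, "S/h": 0.5}))
```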
It should be noted that any numerical method and the quasi-TEM approach could be used as the analysis method in the proposed parallel algorithm. We implemented the algorithm in [6] using FDM and solving the problem with an iterative technique. Exploring the performance of the proposed parallel analysis algorithm, we calculated in [6] the electrical parameters of a multiconductor microstrip line. An analysis area of 100 × 500 unknowns was used. It was found that the parallel algorithm execution time on the 5-computer cluster was 3.4 times shorter than the execution time on a single computer, i.e. the increase in computing performance exceeds 250 percent.
III. BOUND SPARSE MATRICES
The solution of partial differential equations in the finite difference method leads to large systems of algebraic equations, which are computed with two main techniques: iterative methods and coupled matrices.
The calculation time of the iterative technique depends on the desired accuracy and the problem area and can be very long. To speed up the calculation, a coupled-matrices technique can be successfully used. According to this technique, the finite difference solution is found by composing and solving a system of linear equations.
According to the finite difference method, the problem area is divided into a square mesh of nodes. The value of each node depends on the mean of the nodes closest to it:

φ(i,j) = [φ(i+1,j) + φ(i−1,j) + φ(i,j+1) + φ(i,j−1)] / 4, (3)

where φ is the electric field potential and i, j are indexes indicating the position of the potential in 2D. Remote nodes therefore have no influence on the calculated potential φ(i,j), and the analysis of all nodes in the problem area reduces to solving the equation

[A][X] = [B], (4)

where [A] is a coefficient matrix with many zero elements, [X] is a vector of unknown node values, and [B] is a vector of known node values. The unknown node vector [X] can be calculated, for example, as

[X] = [A]^(−1)[B], (5)

where [A]^(−1) is the inverse of the coefficient matrix. By solving (5), the unknown potential vector can be obtained and recomposed into the potential distribution matrix of the problem area. The potential distribution can be further analysed to find the electrical parameters of the device, e.g. the electric charge density, the capacitance per-unit-length, and so on.
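The following sketch illustrates equations (3)-(5) by assembling the 5-point stencil for an n × n grid of interior nodes (with the known boundary potentials folded into [B]) and solving the system with a dense direct solver; it is an illustration of the method, not the authors' code.

```python
# Assemble the 5-point Laplace stencil of equation (3) as the linear system
# [A][X] = [B] of equation (4), then solve it directly.
import numpy as np

def assemble_laplace(n, boundary=0.0):
    N = n * n
    A = np.zeros((N, N))
    B = np.zeros(N)

    def idx(i, j):
        return i * n + j

    for i in range(n):
        for j in range(n):
            k = idx(i, j)
            A[k, k] = 4.0
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < n and 0 <= nj < n:
                    A[k, idx(ni, nj)] = -1.0
                else:
                    B[k] += boundary   # known boundary potential folded into [B]
    return A, B

A, B = assemble_laplace(n=50, boundary=1.0)
X = np.linalg.solve(A, B)   # solves (4); equivalent to [X] = [A]^-1 [B] in (5)
phi = X.reshape(50, 50)     # potential distribution matrix of the problem area
```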
Since the calculated potential depends only on the neighbouring potentials, the coefficient matrix [A] consists mostly of zero elements, which take a significant amount of memory: each "double precision" value occupies 8 bytes. It is possible to reduce the memory occupied by the coefficient matrix and to speed up the calculations by using sparse matrices, in which only the non-zero elements are stored in memory. The unknown node vector [X] can also be found through various elimination methods (Gaussian, Gauss-Jordan, etc.).
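Continuing the sketch above, the same system can be stored and solved in sparse form; only the roughly 5N non-zero coefficients are kept, which illustrates the memory argument made here.

```python
# Sparse version of the same system (reuses A and B from the previous sketch):
# only the non-zero coefficients are stored, and a sparse direct solver is used.
import scipy.sparse as sp
import scipy.sparse.linalg as spla

A_sparse = sp.csr_matrix(A)            # keep only the non-zero coefficients
X_sparse = spla.spsolve(A_sparse, B)

# Memory: dense 2500 x 2500 doubles = 50 MB; the sparse form stores only
# ~12,300 non-zeros for the 50 x 50 grid.
print(A.nbytes, A_sparse.data.nbytes + A_sparse.indices.nbytes)
```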
In order to evaluate the speed-up of the FDM calculations, coupled microstrip lines are analysed. Their constructive parameters are as follows: substrate dielectric constant εr = 6.0, normalized microstrip widths W1/h = W2/h = 0.5, and normalized space between the microstrips S/h = 0.5.
The calculation speeds of the electrical parameters and potential distribution obtained by the different techniques are presented in Fig. 3. Two electrical parameters were calculated in the process: the characteristic impedance Z0 and the effective permittivity εeff (Table I). The investigation area was chosen square, with one side varied from 52 to 122.
IV. GPGPU & CUDA TECHNOLOGY
The analysis of microstrip devices can be performed using general-purpose computing on graphics processing units (GPGPU). These processors can have 512 or even more processor cores (so-called general-purpose streaming multiprocessors), i.e. about 100 times more cores than a usual general-purpose CPU. A further advantage is that a GPGPU is not an additional specialized computing device: GPGPUs are embedded in all desktop and laptop computers manufactured in the last few years. They are also extremely fast and efficient at performing operations on real numbers with a high degree of data parallelism. In this way, computing performance increases many times compared with a general-purpose CPU, and it is becoming increasingly prevalent to develop and investigate techniques that exploit these computing capabilities.
There are two competing GPGPU programming platforms. The proprietary CUDA technology developed by the NVIDIA Company [21] is integrated only in GPGPUs produced by that company; however, given an NVIDIA-manufactured GPGPU video card, developing programs in CUDA is free. It should also be noted that CUDA appeared a little earlier than the second platform, OpenCL [22], so at this time it is more developed, and scientific and engineering solutions are often designed specifically for CUDA. On the other hand, OpenCL is now becoming the more pervasive technology.
The OpenCL programming technology was created a little later and supports not only GPGPUs but also general-purpose CPUs and the special accelerators used in mobile phones and embedded systems. This technology is completely free, so it can be integrated into any microprocessor or accelerator by any company or scientist who wishes to build applications. The main problem of this technology is that there are not yet many mathematical function libraries created or adapted for OpenCL. Therefore, in order to perform vector and matrix operations, one either has to implement the desired functions oneself or settle for a lower calculation speed compared with CUDA.
In solving electromagnetic problems, iterative calculations are mostly applied because iterations reduce the space occupied by variables in main memory. But iterative calculations limit the accuracy of the results, because an accuracy level must be specified for the iteration. Direct linear solvers, on the other hand, immediately give the correct result; their downside is that they use a significantly greater amount of main memory, which until recently was simply unavailable. Iterative calculations are also more complicated to split into smaller tasks for distribution over parallel computing systems than the direct solution of a system of linear equations. Admittedly, solving linear equations may also involve iterative calculations, but in this case the system of linear equations is decomposed by special methods into blocks, which facilitates distributing the linear calculations over parallel systems.
In order to evaluate the speed-up of the FDM calculations, the coupled microstrip lines are analysed; their design parameters are the same as those described in Section III.
To solve the linear equation system, two libraries, CULA and ViennaCL [23], are used, and the Gaussian elimination technique is used for execution time comparison. These libraries are designed to solve linear equation systems using dense and sparse matrices, but for sparse matrices the created coefficient matrix must be converted into a sparse matrix storage format. The CULA library is optimized for and works only with CUDA technology, while ViennaCL can also operate with OpenCL.
The curves in Fig. 4 show the execution times of the implemented algorithms for different numbers of unknowns (problem areas). Comparing the execution time of the CULA library curve with the authors' implementation of the Gaussian elimination technique, the curves differ by 120 times when 2500 unknowns were found, and by more than 1000 times when 14400 unknowns were calculated. Comparing the curves corresponding to the ViennaCL library and the Gaussian elimination technique, the execution times practically do not differ for low numbers of unknowns (up to 6400), and differ only 1.24 times at 14400 unknowns. Such a negligible difference between calculations using the ViennaCL library and the Gaussian elimination technique can be explained by the fact that the larger set of features and hardware support in ViennaCL typically comes at the cost of lower performance compared with CUDA-based implementations. This is also partly because CUDA is tailored to the architecture of NVIDIA products, while OpenCL represents in some sense a reasonable compromise between different many-core architectures. Another reason is the different focus of ViennaCL: solvers for sparse rather than dense linear algebra.
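CULA and ViennaCL are C/C++ libraries; as a rough, hedged stand-in, the following sketch times the same dense solve on the CPU (NumPy/LAPACK) and on the GPU (CuPy, which dispatches to cuSOLVER). The matrix is a synthetic, well-conditioned test case, and absolute timings will of course differ from those reported in Fig. 4.

```python
# Qualitative CPU-vs-GPU comparison of a dense linear solve. N may need to be
# reduced to fit the available GPU memory (N = 14400 needs roughly 1.7 GB).
import time
import numpy as np
import cupy as cp

N = 14400
A = np.random.rand(N, N) + N * np.eye(N)   # diagonally dominant test matrix
b = np.random.rand(N)

t0 = time.perf_counter()
x_cpu = np.linalg.solve(A, b)
t_cpu = time.perf_counter() - t0

A_gpu, b_gpu = cp.asarray(A), cp.asarray(b)
t0 = time.perf_counter()
x_gpu = cp.linalg.solve(A_gpu, b_gpu)
cp.cuda.Stream.null.synchronize()          # wait for the GPU work to finish
t_gpu = time.perf_counter() - t0

print(f"CPU: {t_cpu:.2f} s, GPU: {t_gpu:.2f} s")
```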
The calculated electrical parameters of the coupled microstrip lines analysed with GPGPU and CUDA technology are presented in Table II.
V. CONCLUSIONS
Accurate calculation of the parameters of microstrip structures with numerical techniques requires the solution of dense matrix equations involving thousands of unknowns, which takes a long time. We presented three techniques for accelerating such computations: a parallel algorithm implemented on a computer cluster, a sparse bound-matrix technique, and a graphics processing unit (GPU) in conjunction with CUDA technology. The execution time and speed-up of the proposed techniques were evaluated by comparing different numbers of processors and unknowns. The results indicate that all the presented techniques can significantly reduce computation time: the reduction of the parallel algorithm execution time is inversely proportional to the number of computers in the cluster; the sparse bound-matrix technique is capable of reducing the computation time by hundreds of times compared with the iterative technique; and GPUs reduce the computation time by thousands of times compared with conventional mathematical techniques.
Fig. 2.
Fig. 2. Organization of a computer cluster for the analysis of coupled microstrip structures.
Fig. 3.
Fig. 3. Execution times of the implemented algorithms, where A is the algorithm using the coupled sparse matrices technique, B the iterative algorithm, and C the algorithm using the coupled dense matrices technique.
Figure 3 shows the execution times of the different implemented algorithms with different numbers of unknowns. The comparison was done with code implemented in Matlab, and as curve A shows, the sparse matrix implementation vastly reduces the calculation time.
TABLE II.
ELECTRICAL PARAMETERS OF COUPLED MICROSTRIP LINES* CALCULATED BY GPGPU & CUDA TECHNOLOGY. *Note: Design parameters are the same as in Table I.
"Computer Science"
] |
Dectin-1 signaling inhibits osteoclastogenesis via IL-33-induced inhibition of NFATc1
Abnormal osteoclast activation contributes to osteolytic bone diseases (OBDs). It was reported that curdlan, an agonist of dectin-1, inhibits osteoclastogenesis. However, the underlying mechanisms are not fully elucidated. In this study, we found that curdlan potently inhibited RANKL-induced osteoclast differentiation and the resultant bone resorption. Curdlan inhibited the expression of nuclear factor of activated T-cells, cytoplasmic 1 (NFATc1), the key transcriptional factor for osteoclastogenesis. Notably, dectin-1 activation increased the expression of MafB, an inhibitor of NFATc1, and of IL-33 in osteoclast precursors. Mechanistic studies revealed that IL-33 enhanced the expression of MafB in osteoclast precursors and inhibited the differentiation of osteoclast precursors into mature osteoclasts. Furthermore, blocking ST2, the IL-33 receptor, partially abrogated curdlan-induced inhibition of NFATc1 expression and osteoclast differentiation. Thus, our study has provided new insights into the mechanisms of dectin-1-induced inhibition of osteoclastogenesis and may provide new targets for the therapy of OBDs.
INTRODUCTION
Osteolytic bone diseases (OBDs) are a common complication in rheumatoid arthritis [1], osteoporosis [1] and Paget's disease [2], as well as in malignancies, such as multiple myeloma (MM) [3]. OBDs can adversely affect the quality of life and survival of patients due to severe bone pain, pathological fractures and hypercalcemia [1,4]. Bisphosphonates are widely used in the treatment of OBDs [5][6][7]. New therapeutic reagents have been reported to treat OBDs [8]. However, current therapies rarely halt the progression of OBDs. OBDs are caused mainly by abnormal osteoclast activation and osteoblast inhibition [9,10]. Therefore, further investigation of new strategies to inhibit the formation and function of osteoclasts will be important for the therapy of OBDs.
NFATc1 drives the expression of osteoclast genes such as cathepsin K (Ctsk) and TRAP, which are responsible for osteoclast-induced bone resorption. MafB, IRF8 and BCL6 are inhibitors of NFATc1 expression.
In this study, we showed that dectin-1 potently inhibited the differentiation and bone resorption of osteoclasts induced by RANKL plus M-CSF. Dectin-1 activation by curdlan in osteoclast precursors increased MafB expression and decreased NFATc1 expression, suggesting that dectin-1 inhibits NFATc1 through the stimulation of MafB. Interestingly, dectin-1 increased IL-33 expression in osteoclast precursors. Mechanistic studies revealed that IL-33 also increased MafB expression and decreased NFATc1 expression in osteoclast precursors and inhibited the differentiation of osteoclast precursors into mature osteoclasts. Furthermore, blocking ST2 (the IL-33 receptor) partially abrogated curdlan-induced inhibition of NFATc1 expression and osteoclast differentiation. Thus, our study has provided new insights into the mechanisms of dectin-1-induced inhibition of osteoclastogenesis and may provide new targets for the therapy of OBDs. Figure 1. BMCs were cultured with M-CSF for 2 days and with M-CSF plus RANKL for 3 days. (A) Curdlan at the indicated dosages was added at day 2; cultures without addition of curdlan were used as controls. Cultures were stained for TRAP+ cells, and TRAP+ cells with more than three nuclei were counted as osteoclasts (OCs).
Dectin-1 activation inhibits osteoclastogenesis in vitro
To examine the effects of dectin-1 signaling on osteoclastogenesis, we cultured bone marrow cells (BMCs) with RANKL plus M-CSF in the presence or absence of the selective dectin-1 agonist curdlan. Curdlan treatment inhibited RANKL-induced osteoclast formation, decreasing the number and size of TRAP+ multinucleated (> 3 nuclei) osteoclasts in a dose-dependent manner (Figure 1A-1D). Concomitant with the inhibition of osteoclast formation, the mRNA expression of Dcstamp (osteoclast fusion) and Ctsk (bone resorption) and the protein levels of the bone resorption-related marker TRAP5b in the culture supernatants were also decreased by curdlan treatment (Figure 1E, 1F).
To explore the function of dectin-1 in curdlan-induced inhibition of osteoclast formation, we generated osteoclasts from dectin-1 knockout (dectin-1−/−) mice with or without the addition of curdlan. As shown in Figure 1G, curdlan treatment failed to inhibit dectin-1−/− osteoclast formation as compared to the untreated controls.
To assess the effects of curdlan on osteoclast bone resorption, we performed a resorption pit formation assay. As compared to untreated controls, curdlan treatment remarkably diminished RANKL-induced osteoclast bone resorption (Figure 4F, 4G). Together, these results demonstrated that dectin-1 activation in osteoclast precursors inhibits osteoclast differentiation and bone resorptive function.
Dectin-1 signaling inhibits NFATc1 in osteoclast precursors
To explore the molecular mechanisms of dectin-1-induced inhibition of osteoclast differentiation, we performed gene expression profiling (GEP) analyses in osteoclast precursors with (Cur-pre-OC) or without (pre-OC) curdlan treatment. We found that Cur-pre-OCs expressed lower levels of Nfatc1, the master transcription factor for osteoclast differentiation [12], than pre-OCs (Figure 2A), whereas the transcription factors Mafb, Bcl6, Irf7, Irf8 and Irf9 were upregulated in Cur-pre-OCs compared to pre-OCs (Figure 2A). Interestingly, Cur-pre-OCs expressed higher levels of Clec7a (the gene for dectin-1) as compared to pre-OCs (Figure 2A). The decrease of NFATc1 in Cur-pre-OCs compared to pre-OCs was confirmed by quantitative real-time PCR (qPCR) (Figure 2B) and Western blot analysis (Figure 2C). The up-regulation of Mafb, Bcl6, Irf7 and Irf8 in Cur-pre-OCs compared to pre-OCs was confirmed by qPCR (Figure 2D).
IL-33 inhibits osteoclastogenesis in vitro
Dectin-1 signaling stimulates the production of some inflammatory cytokines [19,20], which may be involved in osteoclast differentiation. To address this issue, the microarray data were examined. Cur-pre-OCs expressed higher levels of Tnf, Il1b and Il33 as compared to pre-OCs (Figure 3A). qPCR and ELISA further confirmed the increased expression of IL-33 in Cur-pre-OCs compared to pre-OCs (Figure 3B, 3C). These results demonstrated that dectin-1 activation increased IL-33 expression in osteoclast precursors.
TNF-α and IL-1β were shown to promote, not inhibit, osteoclast differentiation [21,22]. We next examined the role of IL-33 in dectin-1-induced inhibition of osteoclast differentiation. Osteoclasts were generated in vitro in the presence of M-CSF plus RANKL with or without the addition of curdlan or IL-33. IL-33 potently inhibited the development of osteoclasts (Figure 4A), decreasing the cell number and size of osteoclasts as compared with the untreated control (Figure 4B, 4C), and IL-33 induced inhibition of osteoclastogenesis comparable to that of curdlan (Figure 4A-4C).
Blocking ST2 partially abrogates curdlan-induced inhibition of osteoclastogenesis
To examine the role of IL-33 in dectin-1-induced inhibition of osteoclastogenesis, a ST2 (the IL-33 receptor) blocking antibody (αST2) was used during osteoclast culture. The addition of αST2, compared to control IgG, increased the generation of osteoclasts in curdlan-treated cultures (Figure 5A), as demonstrated by significantly higher osteoclast cell numbers and sizes in the cultures treated with curdlan plus αST2 compared to curdlan alone (Figure 5B, 5C). However, lower osteoclast cell numbers and sizes were obtained in the cultures treated with curdlan plus αST2 compared to untreated controls (Figure 5A-5C), indicating that blocking ST2 partially abrogated curdlan-induced inhibition of osteoclast differentiation. Although cells treated with curdlan plus αST2 expressed lower levels of Nfatc1 and Ctsk than untreated cells (Figure 5D, 5E), they expressed higher levels of Nfatc1 and Ctsk than cells treated with curdlan alone (Figure 5D, 5E). Furthermore, cells treated with curdlan plus αST2 showed slightly increased expression of TRAP5b as compared to curdlan-treated cells (Figure 5F). Collectively, these results demonstrated the important role of IL-33 in mediating dectin-1-induced inhibition of osteoclastogenesis.
DISCUSSION
Abnormal osteoclast activation is a major cause of osteolytic bone diseases (OBDs); therefore, targeting osteoclasts may have important clinical significance in the therapy of OBDs [3,6,7,23]. In this study, we found that dectin-1 activation with curdlan inhibited RANKL-induced osteoclast differentiation by reducing osteoclast cell number and size in vitro. Dectin-1 activation also decreased the expression of Dcstamp, the key regulator of osteoclast precursor differentiation and fusion, and of TRAP and cathepsin K, which are essential for osteoclastic bone resorption. In addition, in functional tests, we found that dectin-1 activation inhibited the bone resorption of RANKL-induced osteoclasts. These results are consistent with previous observations that intravenous injection of Candida albicans enhanced new bone formation in mice [24] and that dectin-1 activation in osteoclast precursors inhibited RANKL-induced osteoclast differentiation in vitro [18]. Furthermore, curdlan treatment failed to suppress dectin-1−/− osteoclast differentiation. Thus our data demonstrated that dectin-1 activation inhibits osteoclast differentiation and function.
We and others found that dectin-1 activation inhibited NFATc1 expression in osteoclast precursors [18]. However, the dectin-1 downstream signals responsible for NFATc1 inhibition were not fully defined. In this study, we found that dectin-1 activation enhanced the expression of the transcription factors MafB, Bcl6, IRF7, IRF8 and IRF9. MafB and Bcl6 are known inhibitors of NFATc1 and osteoclast differentiation [12]. The IRF family member IRF8 was also reported to inhibit NFATc1 and osteoclast differentiation [12]. These data suggested that dectin-1 signaling may inhibit NFATc1 expression through upregulation of MafB, Bcl6 and IRF8. In contrast, Yamasaki et al. reported that dectin-1 inhibited NFATc1 expression through the inhibition of Syk/c-Fos downstream signaling [18]. Thus, our data reveal new insights into dectin-1-induced inhibition of NFATc1 expression and osteoclast differentiation.
It was reported that dectin-1 stimulates macrophages to produce some pro-inflammatory cytokines, such as TNF-α, IL-6 and IL-1β [19,20]. However, these factors are related to the stimulation of osteoclast differentiation [21,22,25,26]. In this study, we identified IL-33 as a new cytokine that was upregulated by dectin-1 and was related to the inhibition of osteoclast differentiation. Functional tests showed that the addition of IL-33 inhibited the differentiation of osteoclast precursors into mature osteoclasts and reduced their bone resorptive activity. This result is consistent with previous observations that IL-33 inhibits osteoclastogenesis [27,28]. In addition, blocking IL-33/ST2 using a ST2 blocking antibody partially abrogated dectin-1-induced inhibition of osteoclastogenesis. Mechanistic studies revealed that IL-33 increased MafB, IRF7, IRF8 and IRF9 and decreased NFATc1 expression in osteoclast precursors, and blocking ST2 increased NFATc1 expression in dectin-1-activated osteoclast precursors. Thus, we identify IL-33 as an important mediator of dectin-1-induced inhibition of osteoclastogenesis.
Notably, the concentrations of IL-33 in the supernatants of curdlan-treated osteoclast precursors were much lower than those we used to efficiently inhibit osteoclast differentiation in vitro. The reasons for this discrepancy are unclear. IL-33 is a nuclear cytokine that is released via cell necrosis. Full-length IL-33 can be cleaved by a wide range of proteases, such as caspase-1, elastase, chymase and tryptase, leading to the production of different IL-33 variants [29][30][31][32]. Full-length IL-33 and its cleaved variants may all exhibit some bioactivity, but with different intensities [29][30][31][32]. Therefore, we suggest, first, that the bioactivity of the natural IL-33 produced by curdlan-treated osteoclast precursors might be much higher than that of the commercial synthetic IL-33 we purchased. Second, after release, most of the IL-33 might be captured by adjacent cells, with only a small amount released into the supernatant. Third, the IL-33 ELISA kit may detect only some of the IL-33 variants.
In summary, our study demonstrates that dectin-1 activation potently inhibits osteoclast differentiation and bone resorption function. Dectin-1 activation increases MafB and decreases NFATc1 expression. Dectin-1 activation increases the expression of IL-33, which is an important mediator for dectin-1-induced inhibition of osteoclast differentiation and bone resorptive function. Our study has provided new insights into the mechanisms of dectin-1-induced inhibition of osteoclastogenesis and may provide new targets for the therapy of OBDs.
Mice
Balb/c mice were purchased from the Jackson Laboratory. Mice were bred and maintained in pathogen-free facilities at the First Hospital Animal Center of Jilin University. Mice aged 6-8 weeks were used for experiments. All animal experimental procedures were reviewed and approved by the Animal Ethical Committee of the First Hospital of Jilin University.
In blocking experiments, BMCs were cultured with M-CSF (10 ng/mL) for 2 days. At days 2 and 4, cells were cultured with M-CSF/RANKL with or without the addition of curdlan, in the presence of a ST2 neutralization antibody (αST2) (5 μg/mL) or control IgG (5 μg/mL). At day 5, cells were processed for TRAP staining, or cells and culture supernatants were collected for gene expression analysis by qPCR or ELISA.
Tartrate-resistant acid phosphatase (TRAP) staining
BMCs were cultured with M-CSF for 2 days and with M-CSF plus RANKL for 3 days. In some cultures, cells were treated with curdlan (10 μg/mL) or IL-33 (50 ng/mL) at day 2 and day 4. In ST2 blocking experiments, cultures were treated with curdlan in the presence of αST2 (5 μg/mL) or a control IgG (5 μg/mL) at day 2 and day 4. At day 5, the culture medium was removed and cells were fixed and stained with the Acid Phosphatase, Leukocyte (TRAP) Kit (Sigma) according to the manufacturer's instructions. TRAP+ cells with more than three nuclei were considered osteoclasts. Osteoclast circumference was calculated as π × (mean diameter), i.e. 3.14 × (mean diameter).
Real-time polymerase chain reaction
qPCR was performed as previously described [34]. Total RNA was extracted from cells by using an RNeasy Mini kit (Qiagen) according to the manufacturer's instructions. Primer sets used for these analyses are: Il33,
Enzyme-linked immunosorbent assay (ELISA) and western blot analyses
IL-33 and TRAP5b ELISA kits were purchased from R&D Systems and Elabscience Biotechnology Co., Ltd, respectively. ELISA assays were performed according to the manufacturer's instructions.
Western blot assay was performed as previously described [34]. Anti-mouse NFATc1 and β-actin antibodies were purchased from Cell Signaling Technology (CST).
Gene-expression profiling
BMCs were cultured with M-CSF for 2 days. At day 2, the culture medium was removed and replaced with fresh medium containing M-CSF/RANKL (10 ng/mL) with or without the addition of curdlan (10 μg/mL) or IL-33 (50 ng/mL). At day 4, cells were collected and stored in Trizol reagent (Invitrogen) at −80 °C. Samples were sent to OneArray (http://www.OneArray.com.cn/, Beijing, China) for transcription profiling via genome-wide microarrays, and the subsequent data analysis was also performed by OneArray.
"Biology",
"Medicine"
] |
MeSH indexing based on automatically generated summaries
Background: MEDLINE citations are manually indexed at the U.S. National Library of Medicine (NLM) using as reference the Medical Subject Headings (MeSH) controlled vocabulary. For this task, the human indexers read the full text of the article. Due to the growth of MEDLINE, the NLM Indexing Initiative explores indexing methodologies that can support the task of the indexers. Medical Text Indexer (MTI) is a tool developed by the NLM Indexing Initiative to provide MeSH indexing recommendations to indexers. Currently, the input to MTI is MEDLINE citations, title and abstract only. Previous work has shown that using full text as input to MTI increases recall, but decreases precision sharply. We propose using summaries generated automatically from the full text as the input to MTI for the task of suggesting MeSH headings to indexers. Summaries distill the most salient information from the full text, which might increase the coverage of automatic indexing approaches based on MEDLINE. We hypothesize that if the results were good enough, manual indexers could possibly use automatic summaries instead of the full texts, along with the recommendations of MTI, to speed up the process while maintaining high quality of indexing results. Results: We have generated summaries of different lengths using two different summarizers, and evaluated the MTI indexing on the summaries using different algorithms: MTI, individual MTI components, and machine learning. The results are compared to those of full text articles and MEDLINE citations. Our results show that automatically generated summaries achieve similar recall but higher precision compared to full text articles. Compared to MEDLINE citations, summaries achieve higher recall but lower precision. Conclusions: Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. The combination of MEDLINE citations and automatically generated summaries could improve the recommendations suggested by MTI. On the other hand, indexing performance might be dependent on the MeSH heading being indexed. Summarization techniques could thus be considered as a feature selection algorithm that might have to be tuned individually for each MeSH heading.
Background
MEDLINE® citations are manually indexed using the Medical Subject Headings (MeSH)® controlled vocabulary. This indexing is performed by a relatively small group of highly qualified indexing contractors and staff at the US National Library of Medicine (NLM). MeSH indexing consists of reviewing the full text of each article, rather than an abstract or summary, and assigning descriptors that represent the central concepts that are discussed.
Indexers assign descriptors from the MeSH vocabulary of 26,581 main headings (2012), which are often referred to as MeSH Headings (MHs). Main heading descriptors may be further qualified by selections from a collection of 83 topical Subheadings (SHs). In addition there are 203,658 Supplementary Concepts (formerly Supplementary Chemicals) which are available for inclusion in MEDLINE records.
Since 1990, there has been a steady and sizeable increase in the number of articles indexed for MEDLINE, because of both an increase in the number of indexed journals and, to a lesser extent, an increase in the number of in-scope articles in journals that are already being indexed. The NLM expects to index over one million articles annually within a few years [1].
In the face of a growing workload and dwindling resources, NLM has undertaken the Indexing Initiative to explore indexing methodologies that can help ensure that MEDLINE and other NLM document collections maintain their quality and currency and thereby contribute to NLM's mission of maintaining quality access to the biomedical literature.
The NLM Indexing Initiative has developed the Medical Text Indexer (MTI) [2][3][4], which is a support tool for assisting indexers as they add MeSH indexing to MEDLINE. Given a MEDLINE citation with only the title and abstract, MTI will deliver a ranked list of MHs, as shown in Figure 1. This includes not only MHs but also related SHs. MTI and its current relation to MeSH indexing are described in more detail in the Methods section.
Even though indexers have access to the full text during indexing time, MTI has to rely solely on title and abstract since full text is not yet available for automatic processing. Most of the research in MEDLINE indexing with MeSH has been performed on MEDLINE titles and abstracts. We would like to explore the possibility of extending MTI to full text or other more suitable representations to understand the problems of dealing with larger representations, both in efficiency and performance. In previous work, full text has been used with the MTI tool [5]. Despite the decrease in precision, indexing based on full text provides a potential increase in recall.
In this work, we propose exploring the use of automatically generated summaries from full text articles as an intermediary step to identifying the salient pieces of information for indexing using several algorithms; i.e. MTI, individual MTI components and machine learning. To this end, we have considered summaries of different lengths generated automatically from the full text as surrogates for full text articles in automatic indexing. Summaries provide more information than title and abstract, which might improve the coverage provided by the automatic indexing approaches at the expense of some loss in precision. In addition, as the summaries contain salient information from the full text article, it may reduce the number of false positives that automatic indexing systems like MTI currently generate based on MEDLINE citations. As soon as more full text articles are available for automatic processing, they might be considered within the MTI system. This article is organized as follows. First, related work in indexing and automatic summarization is presented. Then, MTI is described, along with the two systems used for generating the automatic summaries. We later present the evaluation setup and discuss the results of several experiments. We finally draw conclusions and outline future work.
Related work
In this section, we present some previous work in biomedical text indexing and automatic summarization. We also present some related work on the use of automatic summaries as an intermediate step in text categorization and indexing.
Biomedical text indexing
In addition to the NLM Indexing Initiative developments, MeSH indexing has received attention from other research groups. We find that most of the methods fit either into pattern matching methods which are based on a reference terminology (like Unified Medical Language System (UMLS)® or MeSH) and machine learning approaches which learn a model from examples of previously indexed citations.
Among the pattern matching methods we find the MetaMap component of MTI and an information retrieval approach by Ruch [6]; in his system the categories are the documents and the query is the text to be indexed. Pattern matching considers only the inner structure of the terms but not the terms with which they co-occur. This means that if a document is related to a MeSH heading whose terms do not appear in the text being indexed, the heading will not be suggested. Machine learning based on previously indexed citations might help to overcome this problem.
A growing body of work approaches retrieval of MEDLINE citations as a classification task. For example, MScanner classifies all MEDLINE citations as relevant or not relevant to a set of positive examples submitted by a user [7], and Kastrin et al. [8] determine the likelihood of MEDLINE citations' topical relevance to genetics research. This large body of related work provides valuable insights with respect to the classification of MEDLINE citations and feature selection methods.
Machine learning methods tend to be ineffective with a large number of categories, and MeSH contains more than 26,000. Small-scale studies with machine learning approaches exist [9,10], but the presence of a large number of categories has forced machine learning approaches to be combined with information retrieval methods designed to reduce the search space. For instance, PRC (PubMed Related Citations) [11] and a k-NN approach by Trieschnigg et al. [12] look for similar citations in MEDLINE and predict MeSH headings by a voting mechanism on the top-scoring citations.
In previous work, full text has been used within the context of MeSH indexing using the MTI tool [5]. This research shows that there is a potential contribution from the full text which is usually not available from the title and abstract. However, in most of the previous work, including work in the NLM Indexing Initiative project, indexing is performed on titles and abstracts. This is because, due to license restrictions, the full text of the articles is not available; even if some of these articles might become available from open-access journals, the indexing is performed before they are available. We would like to evaluate the performance of the current indexing tools so that they are ready when full text becomes commonly available for indexing.
Summarization of biomedical text
Text summarization is the process of generating a brief summary of one or several documents by selection or generalization of what is important in the source [13]. Extractive summarization systems identify salient sentences from the original documents to build the summaries by using a number of techniques. In the biomedical domain, the most popular approaches include statistical techniques and graph-based methods (see [14] for an extensive review of biomedical summarization).
Statistical approaches are based on simple heuristics such as the position of the sentences in the document [15], the frequency of terms [16,17], the presence of certain cue words [17] or the word overlap between sentences and the document title and headings [17]. Graph-based methods represent the text as a graph, where the nodes correspond to words or sentences, and the edges represent various types of syntactic and semantic relations among them. Different clustering methods are then applied to identify salient nodes within the graph and to extract the sentences for the summary [18,19].
Biomedical terminology is highly specialized and presents some peculiarities, such as lexical ambiguity and the frequent use of acronyms and abbreviations, that make automatic summarization different from that in other domains [20]. To capture the meaning of the text and work at the semantic level, most approaches use domain-specific knowledge sources, such as the UMLS or MeSH [21][22][23]. Moreover, biomedical articles usually follow the IMRaD structure (Introduction, Methods, Results and Discussion), which allows summarization systems to exploit the documents' structure to produce higher quality summaries.
Examples of recent biomedical summarization approaches are described next. Reeve et al. [21] use UMLS concepts to represent the text and discover strong thematic chains of UMLS semantic types, and apply this to single-document summarization. BioSquash [24] is a question-oriented multi-document summarizer for biomedical texts. It constructs a graph that contains concepts of three types: ontological concepts, named entities, and noun phrases. Fiszman et al. [25] propose an abstractive approach that relies on the semantic predications provided by SemRep [26] to interpret biomedical text and on a transformation step using lexical and semantic information from the UMLS to produce abstracts from biomedical scientific articles. Yoo et al. [22] describe an approach to multi-document summarization that uses MeSH descriptors and a graph-based method for clustering articles into topical groups and producing a multi-document summary of each group.
Finally, it is worth mentioning that, considering their intended application, the automatic summaries may be an end in themselves (i.e., they aim to substitute the original documents) or a means to improve the performance of other NLP tasks. Automatic summaries, for instance, have been shown to improve categorization of biomedical literature when used as substitutes for the articles' abstracts [27]. The next section explores this issue in detail.
Using automatic summaries for text indexing and categorization
Automatic summarization has been shown to be of use as an intermediate step in other Natural Language Processing tasks, especially text categorization, when the automatic summaries are used as substitutes for the original documents.
Shen et al. [28], for instance, improve accuracy of a web page classifier by using summarization techniques. Since web pages typically present noisy content, automatic summaries may help to extract relevant information and to avoid bias for the classification algorithm.
Similarly, Kolcz et al. [29] use automatic summarization as a feature selection function that allows the size of the documents within a categorization system to be reduced. In this context, the authors tested a number of simple summarization strategies and concluded that automatic summarization may be of help when categorizing short newswire stories.
In Lloret et al. [30], the use of text summarization in the classification of user-generated product reviews is investigated. In particular, the authors study whether it is possible to improve the rating-inference task (i.e., the task of identifying the author's evaluation of an entity with respect to an ordinal-scale based on the author's textual evaluation of the entity) by using summaries of different lengths instead of the original full-text user reviews.
In the biomedical domain, however, the use of automatic summaries in text categorization has been less exploited, and only a few preliminary works have been published [27].
Methods
In this section, we first present the Medical Text Indexer developed as part of the NLM Indexing Initiative. Then, we describe the summarization methods used to generate the automatic summaries.
The medical text indexer
The Medical Text Indexer (MTI) [2][3][4] is a support tool for assisting indexers as they add MeSH indexing to MEDLINE. Figure 1 shows a diagram of the MTI system. MTI has two main components: MetaMap [31] and the PubMed® Related Citations (PRC) algorithm [11]. MetaMap indexing (MMI) analyzes citations and annotates them with UMLS concepts. The mapping from UMLS to MeSH follows the Restrict-to-MeSH [32] approach which is based primarily on the semantic relationships among UMLS concepts. The PRC algorithm is a modified k-Nearest Neighbors (k-NN) algorithm which relies on document similarity to assign MeSH headings (MHs). PRC attempts to increase the recall of MTI by proposing indexing candidates for MHs which are not explicitly present in the title and abstract of the citation but which are used in similar contexts.
In a process called Clustering and Ranking, the output of MMI and PRC are merged by linear combination of their indexing confidence. The ranked lists of MeSH headings produced by all of the methods described so far must be clustered into a single, final list of recommended indexing terms. The task here is to provide a weighting of the confidence or strength of belief in the assignment, and rank the suggested headings appropriately.
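A toy sketch of this linear-combination merge is shown below; the weights and score scales are illustrative assumptions, not NLM's actual values.

```python
# Hedged sketch of the Clustering-and-Ranking idea: merge MetaMap (MMI) and
# PRC confidence scores for candidate MeSH headings by linear combination.

def merge_recommendations(mmi_scores, prc_scores, w_mmi=0.6, w_prc=0.4):
    headings = set(mmi_scores) | set(prc_scores)
    merged = {h: w_mmi * mmi_scores.get(h, 0.0) + w_prc * prc_scores.get(h, 0.0)
              for h in headings}
    return sorted(merged.items(), key=lambda kv: kv[1], reverse=True)

mmi = {"Humans": 0.9, "Neoplasms": 0.7}
prc = {"Neoplasms": 0.8, "Cohort Studies": 0.5}
print(merge_recommendations(mmi, prc))
# [('Neoplasms', 0.74), ('Humans', 0.54), ('Cohort Studies', 0.2)]
```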
Once all of the recommendations are ranked and selected, a Post-Processing step validates the recommendations based on the targeted end-user. The purpose of this step is to comply with the indexing policy at the NLM and to incorporate indexer feedback. This step applies a set of rules triggered either by recommended headings (e.g. if the Pregnancy heading is recommended, add the Female heading) or by terms from the text (e.g. if the term cohort appears in the text, add the heading Cohort Studies). In addition, commonly occurring MHs called Check Tags (CTs) are added based on triggers from the text, recommended headings, and a machine learning algorithm for the most frequently occurring Check Tags [33,34]. Check Tags are a special class of MeSH Headings considered routinely for every article, which cover species, sex, human age groups, historical periods and pregnancy [35]. Finally, MTI performs subheading attachment [36] to individual headings and for the text in general.
Indexers can use MTI suggestions for the citations they are indexing. MTI usage has grown steadily, to the point where indexers request MTI results almost 2,500 times a day, representing about 50% of indexing throughput [37]. In addition, users can access the MTI why tool to examine the evidence for the MTI suggestions in the MEDLINE citation they are indexing, providing a better understanding of the proposed indexing terms. Currently, there is a set of 23 journals for which MTI is used as the first-line indexer. This means that the suggestions by MTI for these journals are considered as good as those provided by a human indexer and subject to the normal manual review process. MTI is also available as a web service [38] and requires UTS (UMLS Terminology Services) credentials.
Summarization methods
Two summarizers are implemented and used for the experiments: the first is based on semantic graphs and the second is based on concept frequencies. Each summarizer is described below.
Graph-based summarization
We use the graph-based summarization method presented in Plaza et al. [23], which we briefly explain here for completeness (see [23] for additional details). The method consists of the following four main steps: • The first step, concept identification, is to map the document to concepts from the UMLS Metathesaurus and semantic types in the UMLS Semantic Network. We first run the MetaMap program over the text in the body section of the document. MetaMap returns the list of candidate mappings, along with their scores. To accurately select the correct mapping when MetaMap is unable to return a single best-scoring mapping for a phrase because of ambiguity in the text, we use the AEC (Automatic Extracted Corpus) [39] disambiguation algorithm to decide. This algorithm was shown to behave better than other WSD methods in the context of a text summarization task (see [40]). UMLS concepts belonging to very general semantic types are discarded, since they have been found to be excessively broad and do not contribute to summarization. • The second step, document representation, is to construct a graph-based representation of the document. To do this, we first extend the disambiguated UMLS concepts with their complete hierarchy of hypernyms (is-a relations). Then, we merge the hierarchies of all the concepts in the same sentence to construct a sentence graph. The two upper levels of these hierarchies are removed, since they represent concepts with excessively broad meanings. Next, all the sentence graphs are merged into a single document graph. This graph is extended with two further relations (other related from the Metathesaurus and associated with from the Semantic Network) to obtain a more complete representation of the document. Finally, each edge is assigned a weight in [0, 1]. The weight of an edge representing an is-a relation between two vertices, vi and vj (where vi is a parent of vj), is calculated as the ratio of the depth of vi to the depth of vj from the root of their hierarchy. The weight of an edge representing any other relation (i.e., associated with and other related) between pairs of leaf vertices is always 1. • The third step, topic recognition, consists of clustering the UMLS concepts in the document graph using a degree-based clustering method similar to PageRank [41]. The aim is to construct sets of concepts strongly related in meaning, based on the assumption that each of these clusters represents a different topic in the document. We first compute the salience or prestige of each vertex in the graph as the sum of the weights of the edges linked to it. Next, the nodes are ranked according to their salience. The n vertices with the highest salience are labeled as hub vertices. The clustering algorithm then groups the hub vertices into hub vertex sets (HVS). These can be interpreted as sets of strongly connected concepts and will represent the centroids of the final clusters. The remaining vertices (i.e., those not included in the HVS) are iteratively assigned to the cluster to which they are most connected. The output of this step is, therefore, a number of clusters of UMLS concepts, each cluster represented by the set of most highly connected concepts within it (the so-called HVS). • The last step, sentence selection, consists of computing the similarity between each sentence graph and each cluster, and selecting the sentences for the summary based on these similarities.
To compute sentence-to-cluster similarity, we use a non-democratic voting mechanism [22], so that each vertex of a sentence assigns one vote to a cluster if the vertex belongs to its HVS, half a vote if the vertex belongs to the cluster but not to its HVS, and no votes otherwise. The similarity between the sentence graph and the cluster is computed as the sum of the votes assigned by all the vertices in the sentence graph to the cluster. Finally, a single score for each sentence is calculated as the sum of its similarity to each cluster adjusted to the cluster's size (Equation 1). The N sentences with the highest scores are then selected for the summary.
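The following sketch illustrates this voting mechanism; the exact form of the size adjustment in Equation 1 is defined in [23], so the division by cluster size used here is an assumption.

```python
# Sketch of the non-democratic voting mechanism. Each cluster is a set of
# concepts with a distinguished HVS subset; per-cluster similarity is the sum
# of the sentence's votes, adjusted here (assumption) by cluster size.

def sentence_score(sentence_concepts, clusters):
    score = 0.0
    for hvs, members in clusters:          # members includes the HVS concepts
        votes = sum(1.0 if c in hvs else 0.5 if c in members else 0.0
                    for c in sentence_concepts)
        score += votes / len(members)      # size adjustment (assumption)
    return score

clusters = [({"C01", "C02"}, {"C01", "C02", "C03", "C04"}),
            ({"C10"}, {"C10", "C11"})]
print(sentence_score({"C01", "C03", "C11"}, clusters))  # 0.625
```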
Concept frequency-based summarization
The second summarization method is a statistical summarizer which is mainly based on the frequency of the UMLS concepts in the document, but also considers other well-accepted heuristics for sentence selection, such as the similarity of the sentences with the title and abstract sections and their position in the document. The method consists of five steps: • The first step, concept identification, is to map the document to concepts from the UMLS Metathesaurus and semantic types in the UMLS Semantic Network. MetaMap is run over the text in the body, abstract and title sections. As with the graph-based summarizer, ambiguity is resolved using the AEC algorithm. Again, concepts belonging to very general semantic types are discarded. • Term frequency representation: Following Luhn's theory [16], we assume that the more times a word (or concept) appears in a document, the more relevant the sentences that contain this word become. In this way, if {C1, C2, ..., Cn} is the set of n Metathesaurus concepts that appear in the body of a document d, the CF-score of each sentence is computed from the frequencies of the concepts it contains. • Similarity with the title and abstract: We next compute the similarity between each sentence in the body of the document and the title and abstract, respectively. The title given to a document by its author is intended to represent the most significant information in the document, and thus it is frequently used to quantify the relevance of a sentence. Similarly, the abstract is expected to summarize the important content of the document. We compute these similarities as the proportion of UMLS concepts in common between the sentence and the title/abstract, as shown in Equations 2 and 3. • Sentence position: The position of the sentences in the document has traditionally been considered an important factor in finding the sentences that are most related to the topic of the document [15]. In some types of documents, such as news items, sentences close to the beginning of the document are expected to deal with the main theme of the document, and therefore more weight is assigned to them. However, Plaza et al. [23] showed that this is not true for biomedical scientific papers. In contrast, it was found that a more appropriate criterion is one that attaches greater importance to sentences belonging to the central sections of the article. For that reason, in this work we calculate a Position(Sj) score according to Equation 4,

Position(Sj) = σ·Intro(Sj) + ρ·MRD(Sj) + θ·Concl(Sj), (4)

where the functions Intro(Sj), MRD(Sj), and Concl(Sj) are equal to 1 if the sentence Sj belongs to the Background section, to the Methods, Results and discussion section, or to the Conclusions section, respectively, and 0 otherwise.
The values of σ, ρ, and θ vary between 0 and 1 and need to be determined empirically (see section Evaluation method).
• The last step, sentence selection, consists of extracting the most important sentences for the summary.
Having computed the four different weights for each sentence (its CF-score, its similarity with the title and abstract sections, and its positional score), the final score Score(S_j) for a sentence is calculated according to Equation 5. Finally, the N sentences with the highest scores are extracted for the summary, where N depends on the desired compression rate.
α, β, γ, and δ can be assigned different weights between 0 and 1, depending on how much importance we would like to give to each attribute.
Their optimal values need to be empirically determined (see section Evaluation method).
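Equations 2-5 are not reproduced above, so the following Python sketch only illustrates the shape of the scoring described in this section. In particular, the mapping of α, β, γ, and δ to the four attributes is our assumption, and the default values are simply the empirically selected weights reported later in the Evaluation method section.

def concept_overlap(sentence_concepts, reference_concepts):
    # Similarity as the proportion of UMLS concepts the sentence shares with
    # the title or abstract (cf. Equations 2 and 3).
    sentence = set(sentence_concepts)
    if not sentence:
        return 0.0
    return len(sentence & set(reference_concepts)) / len(sentence)

def final_score(cf, sim_title, sim_abstract, position,
                alpha=0.5, beta=0.1, gamma=0.2, delta=0.2):
    # Weighted combination of the four sentence attributes (cf. Equation 5).
    # All inputs are assumed to be normalized to [0, 1]; which Greek letter
    # weights which attribute is an assumption of this sketch.
    return alpha * cf + beta * sim_title + gamma * sim_abstract + delta * position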
Evaluation method
This section presents the evaluation methodology, including the test collection, the summarization parametrization, and the evaluation of the indexing process.
Evaluation data set
We use a collection of 1,413 biomedical scientific articles randomly selected from the PMC Open Access Subset [42]. This subset contains more than 436,000 articles from a range of biomedical journals; they are in XML format, which allows us to easily identify the title, abstract, and the different sections. Moreover, the full texts of the articles in the PMC Open Access Subset are available for research purposes, so that we can run our summarizers and the MTI program over them. When collecting the articles, we made sure that they contain separate title, abstract, and body sections, and that they are assigned MeSH descriptors. It is also worth noting that the average length of the articles' body is 178 sentences; the shortest article is 16 sentences long, while the longest is 835 sentences.
Summaries parametrization
We generated automatic summaries using the two summarizers explained in the previous sections, at different compression rates (i.e., 15%, 30%, and 50%). The text in tables and figures was not taken into account when building the summaries.
To assign values to the parameters of the summarizers, the different combinations that arise from varying each parameter in [0, 1] at intervals of 0.1 were tested on a set of 150 biomedical articles different from those used in the experimentation. The combination of weights that produced the best summaries according to ROUGE metrics [43] was finally selected (i.e., α = 0.5, β = 0.1, γ = 0.2, δ = 0.2, σ = 0.2, ρ = 0.7, and θ = 0.1).
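A sketch of this tuning loop, assuming a summarize(article, weights) function and a rouge_score(peer, model) function in the spirit of the metric described next (both hypothetical names standing in for the actual tools):

import itertools

def tune_weights(dev_articles, summarize, rouge_score, n_params=4, step=0.1):
    # Exhaustive search over weight combinations in [0, 1] at 0.1 intervals,
    # keeping the combination with the best average ROUGE on the dev set.
    # (The paper tunes seven parameters in total; set n_params accordingly.)
    grid = [round(i * step, 1) for i in range(int(1 / step) + 1)]
    best_weights, best_rouge = None, float("-inf")
    for weights in itertools.product(grid, repeat=n_params):
        scores = [rouge_score(summarize(article, weights), article["abstract"])
                  for article in dev_articles]
        average = sum(scores) / len(scores)
        if average > best_rouge:
            best_weights, best_rouge = weights, average
    return best_weights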
ROUGE is a commonly used evaluation method for summarization which uses the proportion of n-grams shared between a peer summary and one or more reference summaries to compute a value within [0, 1]. Higher ROUGE values are preferred, since they indicate a greater content overlap between the peer and the model. We use version 1.2 of ROUGE, with the ROUGE-2 and ROUGE-SU4 metrics. ROUGE-2 counts the number of bigrams shared by the peer and reference summaries and computes a recall-related measure; ROUGE-SU4 similarly measures the overlap of skip-bigrams. As model summaries, we use the articles' abstracts. Even though using more than one reference summary would give more accurate results, previous experiments have shown that, when the evaluation collection is large enough, using a single reference summary produces reliable results [44].
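For reference, a simplified sketch of the bigram-recall idea behind ROUGE-2 (the official toolkit additionally uses clipped counts, stemming options, and other refinements omitted here):

def ngrams(tokens, n=2):
    # All contiguous n-grams of a token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def rouge_n_recall(peer_tokens, model_tokens, n=2):
    # Fraction of the reference (model) summary's n-grams that also appear
    # in the peer summary; clipped counting is omitted for brevity.
    peer_grams = set(ngrams(peer_tokens, n))
    model_grams = ngrams(model_tokens, n)
    if not model_grams:
        return 0.0
    return sum(1 for g in model_grams if g in peer_grams) / len(model_grams)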
Indexing evaluation
The evaluation of the indexing process is carried out by comparing the MeSH heading recommendations produced by the different indexing methods (i.e., MTI, individual MTI components, and machine learning) on the different types of documents (i.e., full-text articles, titles and abstracts, and automatic summaries of different lengths) against the actual indexing of the 1,413 articles in the evaluation collection by the MEDLINE indexers, using text categorization measures: precision (P), recall (R), and F-measure (F1). See Additional file 1: Evaluation benchmark.
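A minimal sketch of these measures (hypothetical data layout; the source does not state whether its macro average is taken per article or per heading, so this sketch averages per article):

def prf(tp, fp, fn):
    # Precision, recall, and F1 from true-positive/false-positive/false-negative counts.
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def evaluate(predicted, gold):
    # `predicted` and `gold` map each article ID to a set of MeSH headings.
    tp = fp = fn = 0
    per_doc = []
    for doc_id, gold_set in gold.items():
        pred_set = predicted.get(doc_id, set())
        d_tp = len(pred_set & gold_set)
        d_fp = len(pred_set - gold_set)
        d_fn = len(gold_set - pred_set)
        tp, fp, fn = tp + d_tp, fp + d_fp, fn + d_fn
        per_doc.append(prf(d_tp, d_fp, d_fn))
    micro = prf(tp, fp, fn)                                      # pooled counts
    macro = tuple(sum(x) / len(per_doc) for x in zip(*per_doc))  # per-article mean
    return micro, macro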
Results and discussion
The following sections present and discuss the results of the experimental evaluation. Even though the evaluation is performed by comparing against previously indexed citations, as described in the previous section, inter-annotator agreement between human indexers is not available. Previous work by Funk and Reid [45] compared the consistency of indexing using doubly annotated MEDLINE citations, showing that several MeSH branches have higher consistency, with the Check Tags being the most consistent. In addition to the overall results, we show results per MeSH heading branch. Table 1 shows the performance of MTI indexing on the different types of documents (i.e., full-text articles, MEDLINE citations (titles and abstracts), and automatic summaries of different lengths). The micro and macro average measures in this table show that, in both cases, the summaries perform better than full text. The best F1 is obtained when the MEDLINE citations are used to discover indexing terms, while the worst F1 is reported for the full-text articles, the difference being more than 12 percentage points in F1. MEDLINE citations show the highest precision, while full text has the highest recall. The poor performance of MTI on the full text of the articles is mainly due to a very low precision (0.375 versus 0.596 for MEDLINE citations), while its recall is only slightly better than that of the MEDLINE citations. The high recall of the full text is expected, since it contains more detail than the summaries or MEDLINE citations.
Overall results
Regarding the use of automatic summaries, we observe that the graph-based method (Gr-sum) produces a better F1 than the concept frequency-based summarizer (CF-sum). Graph-based summaries are more precise, while recall is higher for the frequency-based summaries. The reason seems to be that, on average, frequency-based summaries are longer than graph-based ones, since the frequency-based summarizer tends to select the longest sentences. Among the summaries, those at the 15% compression rate present the lowest recall but the highest precision, thus achieving a higher F1 in terms of micro average; in terms of macro average, the F1 is only slightly higher.
As expected, as the summary length increases, recall improves but precision worsens, and this is true for both types of automatic summaries. The best F1 is obtained by the shorter summaries, because when the summary length grows, the improvement in recall is not enough to compensate for the loss of precision: increasing the length of the summaries means adding non-central or secondary content, so the probability of MTI recommending incorrect MeSH headings is greater.
The automatic summaries produced by the graph-based method at a 15% compression rate attain indexing results close to those of the MEDLINE citations, the difference in F1 being approximately 3 percentage points. Recall is higher for the automatic summaries than for the MEDLINE citations, but precision is lower in the former than in the latter. It must be taken into account, however, that the summaries are generated automatically, so some important content is expected to be missing, which adversely affects precision.
We also find that the difference between micro and macro average is large in terms of precision for full text. This means that there are very frequent terms with low precision but high recall. Table 2 shows the top terms ranked by the number of positive index entries. In both cases, full text shows much higher recall than MEDLINE citations, but with much lower precision.
MTI components results
MTI components are combined and tuned using MEDLINE, since it is the target source of documents, which gives it an advantage over summaries and full text. This includes the set of additional rules added either to comply with indexing policies or to address indexer feedback. We have performed several experiments using the individual components of MTI: MMI and PRC. MMI implements a dictionary-based lookup of UMLS concepts mapped to MeSH; results are shown in Table 3. The F1 results of MMI and PRC are lower than the MTI results, which is due to the combination of complementary methods performed by MTI and to the ad hoc filtering rules in the final step of MTI. MMI shows higher recall than PRC, but both lower precision and lower recall than MTI. PRC shows higher precision than the other approaches but much lower recall, complementing the MeSH headings suggested by MMI.
Except for PRC, the indexing methods show the same behavior: MEDLINE citations perform better than full text and the summaries, and the automatically built summaries perform better than full text.
Term ranking per document results
The indexing algorithms deliver the MeSH terms in decreasing order of relevance, which means that we can also evaluate the ranking produced by each algorithm. Ranking results are available in Table 4 and in an additional file. Average results for the ranking of MeSH terms per document were obtained using the trec_eval evaluation tool. We report MAP (mean average precision), precision at 0 recall, and precision@5. See Additional file 2: Evaluation of MeSH term ranking per document.
MTI and MMI already deliver ranked results. In the case of PRC, the frequency of the MeSH headings in the top 10 retrieved citations is used. Again, except for PRC, the results obtained with MEDLINE citations seem to be better than those obtained with the full text and the summaries, and summaries seem to perform better than full text.
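For reference, a minimal sketch of the ranking measures reported in Table 4 (a simplified re-implementation of what trec_eval computes, not the tool itself):

def average_precision(ranked, relevant):
    # Average precision of one ranked list of MeSH headings against the gold set.
    hits, accumulated = 0, 0.0
    for rank, term in enumerate(ranked, start=1):
        if term in relevant:
            hits += 1
            accumulated += hits / rank
    return accumulated / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    # `runs` is a list of (ranked_list, relevant_set) pairs, one per document.
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

def precision_at_k(ranked, relevant, k=5):
    # Fraction of the top-k ranked headings that are in the gold set.
    return sum(1 for term in ranked[:k] if term in relevant) / k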
Machine learning results
Summarization has been used as a feature selection algorithm in other categorization tasks, e.g., categorizing web pages [46]. We can thus consider the automatically built summaries as a method to perform feature selection on the full-text articles. In this setup, MEDLINE abstracts are the human-produced summaries of the articles.
We have compared the results of these three representations using MTI, MMI, PRC, and two machine learning algorithms: SVM with a linear kernel and AdaBoostM1, both from the WEKA package [47]. Precision, recall, and F1 are averaged over 10-fold cross-validation. Since the number of available MeSH headings is quite large (over 26k), we restricted the experiments to the most frequent headings. Table 5 shows the average performance of the learning algorithms. Overall, it seems that, for both SVM and AdaBoost, full text performs better than summaries and MEDLINE citations.
This performance might be due to the capability of the full text to provide disambiguation features that other methods, like MMI, are not using, similar to the increased performance of PRC on full text. In contrast to other work, summaries do not offer better performance than full text here; further tuning of the summarization parameters might improve summary performance [48]. Of the learning algorithms, SVM seems to perform better than AdaBoost on most of the considered MeSH headings.
Globally, results for SVM and AdaBoost are better than those of MMI and PRC. This has already been observed in previous work with learning algorithms and very frequent MeSH headings. On the other hand, it has been shown [48] that less frequent MeSH headings perform more poorly than with other approaches, due to the scarcity of training data for those headings.
Results by MeSH branch
MeSH terms are organized in a tree structure, whose top nodes define broad topics within the medical domain. Each branch is identified by a letter, and Table 6 contains the list of top-level branch codes from the 2012 MeSH. A MeSH heading can be assigned to more than one branch, so in the analysis its contribution is added to all the branches it belongs to; for example, Cohort Studies appears under both the E (Analytical, Diagnostic and Therapeutic Techniques and Equipment) and N (Health Care) branches. We have used this MeSH structure to group the results by tree branch, according to the MeSH headings in each branch. The idea is that, for instance, the indexing of terms in branch C (Diseases) will differ from the indexing of terms in branch G (Phenomena and Processes). See Additional file 4: Average results per MeSH 2012 top-level branch code. Comparing the two summary types across MeSH branches, we observe, as above, that graph-based summaries achieve higher precision but lower recall than the frequency-based summaries. The largest differences between the two types of summaries occur in the B, M, N, and Z branches.
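Before turning to the individual branches, here is a minimal sketch of the per-branch aggregation just described (hypothetical data layout):

def results_by_branch(per_heading_counts, heading_branches):
    # `per_heading_counts` maps a MeSH heading to (tp, fp, fn) counts;
    # `heading_branches` maps a heading to its top-level branch letters,
    # e.g. 'Cohort Studies' -> {'E', 'N'}. A heading assigned to several
    # branches contributes its counts to all of them.
    branch_counts = {}
    for heading, (tp, fp, fn) in per_heading_counts.items():
        for branch in heading_branches.get(heading, ()):
            totals = branch_counts.setdefault(branch, [0, 0, 0])
            totals[0] += tp
            totals[1] += fp
            totals[2] += fn
    return branch_counts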
In the case of the B (Organisms) and M (Named Groups) branches, terms like Humans, Mice, and Animals are the most frequent terms in the results of each method, similar to what is observed for full-text articles. These terms belong to a special category called Check Tags (CTs) [49]. Recall that CTs are a special class of MeSH headings considered routinely for every article, covering species, sex, human age groups, historical periods, and pregnancy. The indexing of the most common CTs is derived from machine learning methods [33]. Summaries and full text seem to follow a different term distribution from the one expected by the trained methods, resulting in higher recall with lower precision. In the case of the N (Health Care) branch, terms like Cohort Studies are predicted by forced rules. These rules are encoded into MTI to comply with the indexing policy at the NLM and are intended to improve the quality of indexing based on indexer feedback. For instance, a term like 'cohort' causes the citation to be indexed with the MeSH heading Cohort Studies, and such terms seem to be more frequent in frequency-based summaries.
In the case of the Z (Geographicals) branch, the difference is larger, but it shrinks as the size of the summary increases. The Z branch presents the highest recall but the lowest precision for full text; the summaries do not exhibit this behavior. Examples of high recall but low precision in full text are United States, as in '(1 g/l glucose: Gibco Laboratories, Grand Island, NY, USA)' (PMID 20473639), and Germany, as in 'Rapid DNA ligation kit was from Roche (Mannheim, Germany)' (PMID 19609521). In these cases, the country was mentioned only in passing in the full text; neither the MEDLINE citation nor the summaries contain mentions of it. If we compare the summaries to MEDLINE citations, the trend is higher recall but lower precision. Only the M (Named Groups) branch shows a slight advantage in favor of MEDLINE citations; this branch contains a limited number of MeSH headings, some of which overlap with the Check Tags for which we have trained learning algorithms.
Comparing the recall of the summaries and the full text, we find that, as expected, in most cases the full text has higher recall. However, we have identified two MeSH branches for which the summaries achieve higher recall than full text: A (Anatomy) and D (Chemicals and Drugs). Terms in these branches are identified using the Related Citations component, which predicts a MeSH heading if there is enough evidence in similar documents; in this case, the summaries seem to be more similar to previously indexed citations.
Conclusions
This paper explores the use of different types of automatic summaries for the task of obtaining MeSH descriptors for biomedical articles. To this end, we compare the results obtained by different indexing algorithms (i.e., MTI, individual MTI components, and different machine learning techniques) when applied to (1) summaries of different lengths generated with two different summarization methods, (2) full-text articles, and (3) MEDLINE citations.
Our results show that automatic summaries produce better indexing than full text articles. Summaries produce similar recall to full text but much better precision, which seems to indicate that automatic summaries can efficiently capture the most important contents within the original articles. Compared to MEDLINE abstracts, they allow for higher recall but lower precision. With respect to the different types of summaries, the best results are obtained by a graph-based method with a compression rate of 15%.
There are several reasons for the lower precision of summaries and full text compared to MEDLINE citations. In many cases, it is the use of specific techniques that were tuned for MEDLINE citations; this tuning yields higher recall on summaries and full text because the rules are triggered more often. We have evaluated indexing without the forced rules and machine learning algorithms: without these rules, both precision and recall dropped. A revision of the forced rules for summaries and full text might improve indexing performance.
Furthermore, it must be noted that summarization algorithms are tuned based on ROUGE. Tuning of the summarization algorithms based on MeSH indexing could also provide better performance.
Even with full text, the indexing recall is still low in some cases. We have looked into frequent example terms, and one of the reasons for low recall is that in some cases the terms are not explicitly mentioned in the citations, or appear under a different term, e.g., a synonym not covered by MeSH or the UMLS. The PRC and machine learning algorithms try to address this problem.
In previous work, machine learning has been evaluated on some of the MeSH headings and on MEDLINE with mixed results [33,50]. We have contributed by comparing the performance of machine learning algorithms with different document representations on frequent MeSH headings. In our experiments, full text outperforms both summaries and MEDLINE citations. On the other hand, indexing performance might depend on the MeSH heading being indexed [48]. Summarization techniques could thus be considered a feature selection algorithm [51] that might have to be tuned individually for each MeSH heading. | 9,515.8 | 2013-06-26T00:00:00.000 | [
"Computer Science"
] |
Inflationary Phenomenology of Einstein Gauss-Bonnet Gravity Compatible with GW170817
In this work we shall study Einstein Gauss-Bonnet theories and investigate when their gravitational wave speed can be equal to the speed of light, which is unity in natural units, thus becoming compatible with the striking event GW170817. We demonstrate how this is possible and show that if the scalar coupling to the Gauss-Bonnet invariant is constrained to satisfy a differential equation, the gravitational wave speed becomes equal to one. Accordingly, we investigate the inflationary phenomenology of the resulting restricted Einstein Gauss-Bonnet model, assuming that the slow-roll conditions hold true. As we demonstrate, compatibility with the observational data from the Planck 2018 collaboration can be achieved even for a power-law potential. We restrict ourselves to the study of the power-law potential for reasons of analyticity; more realistic potentials can be used, but in that case the calculations are not easy to perform analytically. We also point out that a string-corrected extension of the Einstein Gauss-Bonnet model we studied, containing terms of the form $\sim \xi(\phi) G^{ab}\partial_a\phi \partial_b \phi$, can also provide a theory with gravitational wave speed $c_T^2=1$ in natural units, if the function $\xi(\phi)$ is appropriately constrained; however, in the absence of the Gauss-Bonnet term $\sim \xi(\phi) \mathcal{G}$ the gravitational wave speed can never be $c_T^2=1$. Finally, we discuss which extensions of the above models can provide interesting cosmologies, since any combination of $f(R,X,\phi)$ gravities with the above string-corrected Einstein Gauss-Bonnet models can yield $c_T^2=1$, with $X=\frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi$.
I. INTRODUCTION
Cosmology and astrophysics are at present in an era of great reordering, since the observational data offer incredible new insights into the field. Recently, the observation of the neutron star merger GW170817 [1] validated the fact that gravitational waves and electromagnetic waves have the same propagation speed. This observation significantly narrowed down the space of viable gravitational theories, since every theory that predicts a gravitational wave speed $c_T^2$ different from one, in natural units, is no longer considered a viable description of nature. In particular, most of the Horndeski theories of gravity and also all of the string-corrected Gauss-Bonnet theories of gravity are no longer considered viable modified gravity theories; see Ref. [3] for a complete list of the theories ruled out by [1].
To this end, in this paper we shall consider the possibility of reviving one class of string-inspired theories of gravity [2], and particularly the Einstein Gauss-Bonnet theories of gravity. These theories can yield a viable inflationary era and can also successfully describe the late-time acceleration era; for reviews see [4][5][6][7][8][9][10], and also Refs. [11][12][13][14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29] for an important stream of papers in the field. Our approach to reviving the Einstein Gauss-Bonnet theories will be straightforward and focused on the propagation speed of gravitational waves, for which we shall determine when it is equal to unity in natural units. As we will show, the imposed constraint $c_T^2 = 1$ restricts the functional form of the coupling of the scalar field to the Gauss-Bonnet invariant. After finding the restricted form of the Einstein Gauss-Bonnet coupling, we shall consider the inflationary phenomenology of the resulting theory. Since the general case is difficult to tackle analytically, we shall assume that the slow-roll conditions hold true, and we shall examine the implications of this condition for the slow-roll indices and the potential. Then, using an appropriately chosen scalar potential, we shall examine the phenomenological viability of the theory and confront it with the latest Planck 2018 observational data [30]. A similar work in the context of $f(R)$ gravity, in which differences in the propagation phase of modified gravity models were studied, was performed in [31].
This paper is organized as follows: in section II we review the essential features of Einstein Gauss-Bonnet theories of gravity and specify the reason which renders the theory invalid in view of the GW170817 event. In section III we impose the slow-roll condition on the theory, and by using the slow-roll indices we find the implications of the slow-roll condition for the potential and the rest of the physical quantities of the theory. Accordingly, we confront the theory with the observational data. At the end of section III, we consider several alternative forms of string-corrected theories, and we indicate how these can become compatible with GW170817. Finally, the conclusions follow at the end of the paper.
Before we proceed, in this paper we shall assume that the background metric is a flat Friedmann-Robertson-Walker (FRW) spacetime, with line element $ds^2 = -dt^2 + a^2(t)\sum_i (dx^i)^2$, with $a(t)$ being the scale factor.
II. COMPATIBILITY OF EINSTEIN GAUSS-BONNET THEORY OF GRAVITY WITH GW170817
Let us first consider the simplest Einstein Gauss-Bonnet theory of gravity, in which case the gravitational action in vacuum is given by Eq. (2), where $X = \frac{1}{2}\partial_\mu\phi\partial^\mu\phi$ and $\mathcal{G} = R_{abcd}R^{abcd} - 4R_{ab}R^{ab} + R^2$ is the Gauss-Bonnet invariant. The function $f(R, X, \phi)$ appearing in the action (2) is chosen as in Eq. (3), where $\kappa^2 = 1/M_p^2$ and $M_p$ is the four-dimensional Planck mass, while $V(\phi)$ is the potential of the canonical scalar field. Essentially, the scalar theory is a canonical scalar field theory with scalar potential $V(\phi)$. The gravitational action (2) is the simplest case of a string-corrected gravitational theory, but in a later section we shall consider variant forms of this action that include higher-derivative terms. The primordial perturbations of this type of theory have been thoroughly investigated in [32][33][34], whose notation we adopt for convenience. For the FRW background metric, the equations of motion of the theory follow by varying the action with respect to the metric and the scalar field, and for the function $f(R, X, \phi)$ chosen as in Eq. (3) the gravitational equations of motion simplify accordingly. We also introduce at this point the $Q_i$ functions that will be relevant in the sections to follow. For the above theory, the general expression for the propagation speed of the scalar perturbations is written in terms of these functions, where $F = \partial f/\partial R$. In addition, as shown in Refs. [32][33][34], the gravitational wave propagation speed is given by Eq. (12). At this point, the source of the non-viability of the model (2) is apparent: the gravitational wave speed (12) differs from unity whenever the function $Q_f$ is non-zero, while if $Q_f$ is zero, the gravitational wave speed is equal to one. We therefore impose the condition $Q_f = 0$, which imposes the condition (13) on the Gauss-Bonnet scalar coupling $\xi(\phi)$. Thus, if the coupling $\xi(\phi)$ satisfies the differential equation (14), the gravitational wave speed becomes equal to one, that is, $c_T^2 = 1$. The differential equation (14) can be solved analytically with respect to $\dot\xi$, and the solution is given in Eq. (15), where we used the definition of the e-foldings number and assumed that the integration constants are $\sim \mathcal{O}(1)$ in reduced Planck units, for simplicity. It turns out that the explicit form of $\dot\xi$ is the only quantity needed for the calculation of the slow-roll indices and of the observational indices of inflation, so the explicit form of $\xi(\phi)$ itself is not needed for our purposes. Also, by combining equations (14) and (15), we obtain the expression (17) for $\ddot\xi$, which is also very relevant for the calculations to follow.
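Since the equations referenced above are not reproduced here, the following is a schematic LaTeX reconstruction of the logical chain, using standard conventions for Einstein Gauss-Bonnet tensor perturbations [32-34]; the proportionality constants are deliberately left unspecified, as they are convention dependent and should be checked against Eqs. (12)-(17):

% Schematic reconstruction; normalizations omitted:
c_T^2 - 1 \;\propto\; \frac{Q_f}{Q_t}, \qquad
Q_f \;\propto\; \ddot{\xi} - H\dot{\xi},
\qquad\text{so}\qquad
Q_f = 0 \;\Longleftrightarrow\; \ddot{\xi} = H\dot{\xi}.
% Integrating once, with the e-foldings number defined by dN = H\,dt and an
% O(1) integration constant c_1 in reduced Planck units:
\dot{\xi}(N) = c_1\, e^{\int H\, dt} = c_1\, e^{N}.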
In conclusion, the main result of this section is Eq. (15) in conjunction with (17); when these are satisfied, the gravitational wave speed of the Einstein Gauss-Bonnet theory at hand is $c_T^2 = 1$ in reduced Planck units. In the following section we shall study in detail the phenomenological implications of the above conditions for the Einstein Gauss-Bonnet theory at hand, when the slow-roll condition $\dot H \ll H^2$ is assumed to hold true.
III. INFLATIONARY PHENOMENOLOGY OF VIABLE SLOW-ROLL EINSTEIN GAUSS-BONNET THEORY OF GRAVITY
In this section we shall investigate the inflationary phenomenology of the Einstein Gauss-Bonnet gravity model, with the scalar coupling to the Gauss-Bonnet invariant satisfying Eqs. (15) and (17). Obviously, if the Einstein Gauss-Bonnet theory satisfies Eqs. (15) and (17), it has gravitational wave speed $c_T^2 = 1$ and is thus compatible with GW170817; in this section we shall demonstrate that the GW170817-compatible Einstein Gauss-Bonnet model is also a viable inflationary model in the slow-roll approximation. The analytic calculation of the slow-roll indices and the corresponding observational indices for inflation is quite difficult in the general case, so we shall assume hereafter that the slow-roll approximation holds true, quantified by the relations in Eq. (18), beginning with $\dot H \ll H^2$. The slow-roll assumption also affects the slow-roll indices, and in effect it relates the terms involving the scalar potential, its derivatives, and the other functions appearing in the equations of motion. Taking the slow-roll conditions (18) into account simplifies the gravitational equations of motion, the last two of which become Eqs. (19) and (20). By using Eq. (17) and substituting $\ddot\xi = H\dot\xi$ in Eq. (19), the latter is greatly simplified, since its last two terms cancel, and it reduces to Eq. (21). Thus, hereafter Eq. (21) will yield the derivative of the Hubble rate, $\dot H$. What is now needed in order to proceed is to express $\dot\phi$ and the Hubble rate $H$ as functions of the scalar field $\phi$. Then, by using the relation (22) defining the e-foldings number, we can express all the above quantities as functions of the e-foldings number and eventually confront the theory with the observational data. Note that $\phi_k$ in Eq. (22) is the initial value of the scalar field, taken exactly at the horizon crossing, and $\phi_f$ is the value of the scalar field when inflation ends. In order to find the implications of the slow-roll conditions for the slow-roll indices, we must find the analytic functional form of the slow-roll indices in terms of the scalar field. The slow-roll indices for the theory at hand are given in Eq. (23) [32][33][34], where $E$ stands for the expression (24) and $Q_t = \frac{2}{\kappa^2} + \frac{1}{2}Q_b$. From the slow-roll condition (18) it is easily obtained that $\epsilon_2 \simeq 0$, so we disregard this index hereafter. Let us find the explicit form of the slow-roll indices $\epsilon_1$ and $\epsilon_4$, and investigate the implications of the conditions $\epsilon_1 \ll 1$ and $\epsilon_4 \ll 1$. For the slow-roll index $\epsilon_1$, substituting $\dot H$ from Eq. (21) yields its explicit form in terms of the scalar field. Also, the function $E$ of Eq. (24), appearing in the slow-roll index $\epsilon_4$ in Eq. (23), can be evaluated explicitly for the theory at hand, and from the resulting $\epsilon_4$ we can easily understand when the slow-roll dynamics holds true. Indeed, under the assumptions (29), the slow-roll index $\epsilon_4$ becomes approximately $\epsilon_4 \sim -\frac{\dot H}{2H^2}$, which is small in view of Eq. (18). Thus the approximations (29) are valid, and we shall assume that they complement the slow-roll conditions (18). In view of the condition (29) and the slow-roll condition (18), the equation of motion (7) becomes Eq. (30), and Eq. (20) becomes Eq. (31), which determines $\dot\phi$. Eqs. (21), (30), and (31), in conjunction with Eqs. (15) and (17), are our starting point, since we then have $\dot H$, $\dot\phi$, and the Hubble rate $H$ expressed as functions of the scalar field $\phi$, which can eventually be re-expressed as functions of the e-foldings number, along with $\dot\xi$ and $\ddot\xi$ as functions of the e-foldings number. Let us calculate in detail the slow-roll indices for an appropriately chosen potential.
In general, the potential can be arbitrarily chosen; however, we choose a simple form in order to obtain analytic expressions for the slow-roll indices and the corresponding observational indices. We examined several combinations of exponential and power-law potentials that can yield analytic results, but the only potentials that provide a viable phenomenology are the power-law potentials. So assume that the scalar potential has the power-law form $V(\phi) = V_0 \phi^n$, where $V_0$ is an arbitrary parameter of dimension sec$^{-4+n}$. In the following we use Eqs. (21), (30), and (31), in conjunction with Eqs. (15) and (17). Combining the above, the slow-roll index $\epsilon_1$ takes the form of Eq. (33), while the slow-roll index $\epsilon_4$ takes the form of Eq. (34). These need to be expressed in terms of the e-foldings number $N$ defined in Eq. (22); to this end we need to determine the final value of the scalar field when inflation ends, namely $\phi_f$. Also, the slow-roll indices and the corresponding observational indices must be evaluated at the initial value of the scalar field, $\phi_k$, so we must solve Eq. (22) with respect to $\phi_k$ after performing the integration. By equating $|\epsilon_1| = 1$ we first obtain the value of the scalar field at the end of inflation; using this and performing the integration in Eq. (22), upon inverting $N(\phi_k)$, we obtain the function $\phi_k = \phi_k(N)$, given in Eq. (36). Now we can proceed to calculate the slow-roll indices and the corresponding observational indices of inflation. The spectral index of the primordial scalar curvature perturbations, as a function of the slow-roll indices, is given by the standard expression of Refs. [32][33][34], which holds true when the slow-roll indices take small values. In addition, the tensor-to-scalar ratio $r$ is given by the corresponding expression of Refs. [32][33][34], where we took into account that the gravitational wave speed is $c_T^2 = 1$ for the model at hand. Let us now confront the theory with the observational data, and specifically with the latest Planck 2018 data, which constrain the spectral index $n_s$ and the tensor-to-scalar ratio $r$ as follows: $n_s = 0.9649 \pm 0.0042$, $r < 0.064$.
By evaluating the slow-roll index $\epsilon_1$ of Eq. (33) and $\epsilon_4$ of Eq. (34) at $\phi = \phi_k$, with $\phi_k$ defined as a function of the e-foldings number in Eq. (36), the resulting expressions for the observational indices are too lengthy to present here; instead, we quote the values of the free parameters for which compatibility with the observational data can be achieved. We work for convenience in reduced Planck units, and the result of our analysis is that compatibility with the observational data can be achieved when $c_1$ takes small values of the order $c_1 \sim \mathcal{O}(10^{-30})$ in reduced Planck units, $V_0 \sim \mathcal{O}(10)$, and $n < 0$. For example, when $c_1 = 10^{-29.216}$ and $V_0 = 10$ in reduced Planck units, with $n = -0.0894$, the spectral index $n_s$ and the tensor-to-scalar ratio take values which are both compatible with the observational data. Thus we demonstrated that the Einstein Gauss-Bonnet theory with $c_T^2 = 1$ and a power-law potential can be compatible with the observational data when the slow-roll assumption is made. We note that the power-law potential was chosen for demonstration purposes only, since we wanted to obtain analytic results. Of course, more realistic potentials can be used in order to obtain more stringent results; our purpose, however, was solely to demonstrate that the viable Einstein Gauss-Bonnet theory that evades the GW170817 constraint on the gravitational wave speed can also provide a viable phenomenology. A more detailed and thorough analysis, with more realistic scalar potentials, may require numerical analysis, so it is beyond the scope of this paper.
A. Other String-corrected Theories of Gravity and Compatibility with GW170817
In this section we briefly mention several generalizations and extensions of the Einstein Gauss-Bonnet theory discussed in the previous sections that can also potentially provide a viable inflationary phenomenology and, at the same time, have gravitational wave speed equal to one if some constraints are imposed. First, a simple extension of the Einstein Gauss-Bonnet action (2) is obtained by letting $f(R, X, \phi)$ be an arbitrary function of its arguments. This class of theories includes $f(R)$ gravity with a non-canonical scalar field, simple k-essence models, non-minimally coupled scalar theories of gravity, and so on. All these theories yield the same gravitational wave speed as that of Eq. (12), with $Q_f$ the same as defined in Eq. (10). So, in principle, a large class of modified Gauss-Bonnet theories may be included; the recently studied ghost-free Gauss-Bonnet gravities of Ref. [35] also belong to this class of models. More interestingly, let us consider a string-inspired correction of this action, namely the action (42), which contains a term $\sim -\frac{1}{2}\xi(\phi)c_2 G^{ab}\partial_a\phi\partial_b\phi$, where $G^{ab} = R^{ab} - \frac{1}{2}g^{ab}R$ is the Einstein tensor. In this case, the gravitational wave speed is given by Eq. (43), but the function $Q_f$ is now given by Eq. (44). So, if we demand that the scalar coupling function $\xi(\phi)$ satisfy the differential equation (45), the gravitational wave speed (43) becomes $c_T^2 = 1$. The gravitational theory with the action (42) is the most general action with string corrections of Gauss-Bonnet type that can be compatible with the GW170817 results, if the coupling $\xi(\phi)$ is restricted to satisfy the differential equation (45). Notice that theories containing only the term $\sim -\frac{1}{2}\xi(\phi)c_2 G^{ab}\partial_a\phi\partial_b\phi$ can never be compatible with GW170817, since in that case we can never obtain $c_T^2 = 1$, irrespective of the choice of the scalar coupling function $\xi(\phi)$. In principle, the inflationary phenomenology of theoretical models like that of Eq. (42) can be studied in the slow-roll approximation; however, this is a much more complicated scenario in comparison to the model (2), so we refrain from going into details. Nevertheless, the resulting equations of motion can be used as a reconstruction method, by specifying the Hubble rate and the function $\xi(\phi)$, which must satisfy Eq. (45). Then the potential that can realize such a cosmological evolution can be found, although the calculation of the slow-roll indices could be quite complicated.
Before closing, we need to discuss an important issue related to the gravitational wave speed at cosmic times later than the inflationary era. An analysis similar to the one performed in this paper was carried out in Ref. [36], where it was also found that the coupling constant must take very small values, of the order $10^{-15}$ in reduced Planck units, in order to achieve compatibility with the observational constraints. Thus there seems to be some sort of universal behavior in the two approaches. In our case, the gravitational wave speed of Eq. (12) has this particular form only when tensor perturbations of a flat FRW background are considered. Indeed, the perturbed metric is given in Eq. (46) [33], where $a\,d\eta = dt$ defines the conformal time and the tensor perturbation is quantified mainly by $C_{\mu\nu}$, while the metric $g_{\mu\nu}$ denotes the FRW background metric. For the Einstein Gauss-Bonnet case, the differential equation governing the evolution of the tensor gravitational perturbations is Eq. (47) [33], where $c_T$ is defined in Eq. (12) and $\Delta$ is the Laplacian for the FRW metric. Obviously, Eq. (47) governs the evolution of tensor perturbations both before and after horizon crossing. It is thus vital to note that, since $Q_f = 0$ whenever the coupling function $\xi(\phi)$ satisfies Eqs. (13) or (14), the speed of the gravitational wave perturbations will always be equal to the speed of light, before and after horizon crossing, and thus even during the matter and radiation domination eras and even at late times. However, these are primordial gravitational waves, and this was exactly the focus of this work: to impose the condition $c_T^2 = 1$ on primordial gravitational waves and examine the inflationary phenomenology of the model. The difference between our approach and Ref. [36] is that here the coupling function $\xi(\phi)$ is severely constrained to satisfy a differential equation that gives $c_T^2 = 1$ for the propagation speed of the primordial tensor perturbations.
IV. CONCLUSIONS
In this paper we studied Einstein Gauss-Bonnet models and investigated when these models can be viable in view of the striking GW170817 results, which indicated that the gravitational wave speed is $c_T^2 = 1$ in natural units. Einstein Gauss-Bonnet models generically predict $c_T^2 \neq 1$, so we investigated in detail when they can have $c_T^2 = 1$. As we demonstrated, this can be achieved when the scalar coupling to the Gauss-Bonnet invariant is constrained to satisfy a differential equation; in this case, the gravitational wave speed of the Einstein Gauss-Bonnet theory at hand becomes equal to one. Accordingly, we assumed that the slow-roll conditions hold true and investigated the inflationary phenomenology of the model for a specific class of power-law scalar potentials. As we demonstrated, it is possible to achieve compatibility with the observational data, although the results are model dependent. It is possible that a better choice of scalar potential may yield a refined inflationary phenomenology and at the same time provide a successful description of the late-time era; an example of this sort is the quintessential inflation models [37][38][39][40][41][42], although in that case the study cannot easily be performed analytically. We also indicated which generalized string-corrected Gauss-Bonnet theories can also yield $c_T^2 = 1$ and thus become compatible with the GW170817 results. As we demonstrated, theories that also contain terms of the form $\sim \xi(\phi)G^{ab}\partial_a\phi\partial_b\phi$ can become compatible with GW170817 only in the presence of an Einstein Gauss-Bonnet coupling $\sim \xi(\phi)\mathcal{G}$; in the absence of the latter, these theories can never yield $c_T^2 = 1$. Another important point worth noting is that any combination of $f(R, \phi, X)$ gravity with the string-corrected terms $\sim \xi(\phi)\mathcal{G}$ and $\sim \xi(\phi)G^{ab}\partial_a\phi\partial_b\phi$ may also provide a theory with gravitational wave speed equal to unity; this includes $f(R)$ gravity, non-minimally coupled scalar theories, and k-essence theories. Also, the slow-roll condition may be replaced by the constant-roll condition, in which case non-Gaussianities may arise in the primordial curvature perturbations; specifically, a non-zero bispectrum would be obtained in the equilateral momentum approximation. We aim to report on this last issue in a future work. | 5,262.6 | 2019-08-20T00:00:00.000 | [
"Physics"
] |
The Pierre Auger Observatory: latest results and future perspectives
The Pierre Auger Observatory is the largest ultrahigh-energy cosmic ray observatory in the world. The huge amount of high-quality data collected since 2004 has led to great improvements in our knowledge of ultra-energetic cosmic rays. The suppression of the cosmic-ray flux at the highest energies has been clearly established, and the extra-galactic origin of these particles has been confirmed. On the other hand, measurements of the depth of shower maximum indicate a puzzling trend in the mass composition of cosmic rays at energies from around the ankle up to the highest energies. The recently started upgrade of the Observatory, dubbed AugerPrime, will improve the identification of the mass of the primaries, allowing us to disentangle models of the origin and propagation of cosmic rays.
Hadronic Shower
How to study UHECRs?
• Detection of the fluorescence light emitted by de-excitation of atmospheric N2 after interactions with the secondary particles of the shower. The amount of fluorescence light is proportional to the energy that the shower dissipates in the atmosphere, giving a calorimetric measurement of the primary particle energy; Xmax depends on the primary particle mass.
• Measurement of the particle density at ground level (e, γ, μ). The distribution of particles at ground level depends on the energy and mass of the primary particle.
Lateral Distribution
Longitudinal development
Mass Composition
(Figures: average and standard deviation of Xmax as a function of energy.)
Mass composition is not the same at all energies
Large proton fraction at the energy of the ankle; the mean mass increases at the highest energies.
Method:
The sky model is taken as the sum of an isotropic fraction plus the anisotropic component from selected sources. The model predictions are compared to the data using the maximum likelihood ratio method.
Vertical events and 1118 inclined ones at E > 20 EeV.
Correlation of UHECRs with the brightest AGNs of the Swift-BAT catalog under the assumption that all the selected sources contribute equally to the UHECR flux.
Auger Collaboration @ICRC 2017
Arrival directions of the UHECRs. Departures from isotropy:
• ~3σ C.L. in the region of Centaurus A (19 observed events vs. 6.0 expected on average from an isotropic flux)
• ~2.7σ C.L. excess found in the directions of the active galaxies from Fermi-LAT
• ~4σ C.L. for starburst galaxies, in the direction of Cen A and at the South Galactic pole (NGC 4945, NGC 1068, NGC 253 and M83)
Auger Collaboration @ICRC 2017
UHE photons are tracers of the Greisen-Zatsepin-Kuzmin (GZK) process. If these predicted GZK photons were observed, it would indicate that the GZK process is the reason for the observed suppression in the energy spectrum of UHE cosmic rays.
Search for photons, E > 1 EeV
Photon vs. hadron showers — photon showers have:
• a higher value of Xmax
• a lower average number of muons
• a steeper LDF and consequently a smaller footprint on the ground
Photon candidates above 10 EeV (SD); 3 photon candidates between 1 and 2 EeV (hybrid).
The current upper limits impose tight constraints on top-down scenarios proposed to explain the origin of UHE cosmic rays. Down-going neutrinos (all flavors) that develop deep in the atmosphere, generating inclined showers and triggering the Auger surface detector, can be identified provided their zenith angles exceed 60 degrees.
Tau neutrinos entering the Earth with a zenith angle close to 90 degrees can interact and produce a tau lepton that decays in the atmosphere inducing an "upward-going" shower that triggers the surface detector.
Neutrinos of ~10^18 eV are expected from interactions of UHECRs in the sources or during propagation through the Universe.
No neutrino events observed
Different models of cosmogenic neutrinos that attempt to explain the origin of cosmic rays are excluded at the 90% C.L., particularly those that assume proton primaries. For a flux dN/dE = k E^-2, the limit is k ~ 6.4 × 10^-9 GeV cm^-2 s^-1 sr^-1.
Hadronic physics: an excess of muons is observed in UHECR air showers compared to the predictions of hadronic interaction models [PRL 117, 192001 (2016)]. In that analysis, R_E is an energy rescaling parameter that allows for a possible shift in the FD energy calibration, and R_had is a multiplicative rescaling of the hadronic component of the shower; the measured longitudinal profile is compared with its matching simulated showers, using QGSJet-II-04 for proton and iron primaries, together with the observed and simulated ground signals for the same event (a schematic sketch of this two-parameter fit is given below).

AugerPrime: enhancement of the capability of the Surface Detector to identify the mass of the primary particle on a shower-by-shower basis (Auger Collaboration @ICRC 2017). A thin scintillation detector, mounted above the larger WCD, provides a robust and well-understood scheme for particle detection that is sufficiently complementary to the water-Cherenkov technique and permits a good measurement of the density of muons.
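Returning to the (R_E, R_had) fit above, here is a minimal sketch of how such a two-parameter rescaling could be fitted. It is our own schematic: the split of the simulated ground signal into electromagnetic and hadronic parts and the simple chi-square are assumptions — see PRL 117, 192001 (2016) for the actual likelihood analysis.

import numpy as np
from scipy.optimize import minimize

def rescaled_signal(R_E, R_had, s_em, s_had):
    # Model ground signal: the electromagnetic part scales with the energy
    # rescaling R_E; the hadronic part is additionally rescaled by R_had.
    # (Coupling the hadronic part linearly to R_E is an assumption here.)
    return R_E * s_em + R_had * R_E * s_had

def fit_rescaling(s_obs, sigma, s_em, s_had):
    # Least-squares fit of (R_E, R_had) to the observed station signals.
    def chi2(params):
        R_E, R_had = params
        residuals = (s_obs - rescaled_signal(R_E, R_had, s_em, s_had)) / sigma
        return np.sum(residuals ** 2)
    result = minimize(chi2, x0=[1.0, 1.0], method="Nelder-Mead")
    return result.x  # best-fit (R_E, R_had)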
AugerPrime — the project of the upgrade. Extension of the dynamic range of the WCD: the dynamic range will be enhanced by a factor of 32 with an additional small (1-inch) PMT inserted in the WCD.
Upgrade of WCDs
New electronics of the SD: it will increase the data quality thanks to better timing accuracy and faster ADC sampling. The number of collected events will be doubled in comparison to the statistics collected up to now by the existing Pierre Auger Observatory, with the advantage that every future event will carry mass information, allowing us to better address some of the most pressing questions in UHECR physics.
Summary
Spectrum → strong flux suppression.
Mass composition → light at the ankle, mixed at UHE.
Photon and neutrino searches → constraints on proton-dominated sources.
Sources → compatible with the maximum-rigidity scenario | 1,279.6 | 2018-08-01T00:00:00.000 | [
"Physics"
] |
Global research trends and hotspots of artificial intelligence research in spinal cord neural injury and restoration—a bibliometrics and visualization analysis
Background: Artificial intelligence (AI) technology has made breakthroughs in spinal cord neural injury and restoration in recent years and has a positive impact on clinical treatment. This study explores the progress and hotspots of AI research in spinal cord neural injury and restoration, analyzes the shortcomings of research in this area, and proposes potential solutions. Methods: We used CiteSpace 6.1.R6 and VOSviewer 1.6.19 to analyze Web of Science (WOS) articles on AI research in spinal cord neural injury and restoration. Results: A total of 1,502 articles were screened, with the United States dominating; Kadone, Hideki (13 articles, University of Tsukuba, Japan) was the author with the highest number of publications; ARCH PHYS MED REHAB (IF = 4.3) was the most cited journal, and topics included molecular biology, immunology, neurology, and sports, among other related areas. Conclusion: We pinpointed three research hotspots for AI research in spinal cord neural injury and restoration: (1) intelligent robots and limb exoskeletons to assist rehabilitation training; (2) brain-computer interfaces; and (3) neuromodulation and noninvasive electrical stimulation. In addition, several new hotspots were discussed: (1) image segmentation models based on convolutional neural networks; (2) the use of AI to fabricate polymeric biomaterials that provide the microenvironment required for neural stem cell-derived neural network tissues; and (3) AI survival prediction tools and transcription factor regulatory networks in the field of genetics. Although AI research in spinal cord neural injury and restoration has many benefits, the technology has several limitations (data and ethical issues). The data-gathering problem should be addressed in future research, since building valid AI models requires large samples of high-quality clinical data. At the same time, research on genomics and other mechanisms in this field remains weak. In the future, machine learning techniques, such as AI survival prediction tools and transcription factor regulatory networks, could be utilized for studies of the up-regulation of regeneration-related genes and the production of structural proteins for axonal growth.
Introduction
Spinal cord neural injury is a neurological injury due to direct or indirect factors, characterized by motor and perceptual dysfunction, abnormal muscle tone, and various other pathological responses in the corresponding injured segment (1, 2). Currently, the treatments applied in medicine usually fail to meet expectations, and research focuses mainly on the use of drugs, cellular therapies, and tissue engineering.
Artificial intelligence (AI) is a generic term that implies the use of computers to model intelligent behavior with minimal human intervention; it is described as the science and engineering of building intelligent machines. There are two main branches of AI in medicine: virtual and physical. The virtual branch consists of informatics methods ranging from deep learning information management to the control of health management systems, including electronic health records and active guidance of physicians in treatment decisions. The physical branch is represented by robots used to help patients or surgeons. Artificial intelligence has recently emerged as a tool to analyze and manipulate information on nerve regeneration and recovery. AI can rate the extent of neural plasticity and the efficacy of neural stem cells, and studies of neural injury and restoration can in turn offer valuable data resources for AI (3). Meanwhile, AI can help translate nerve signaling and control machine exoskeletons (4). In addition, artificial intelligence can discern which genes and signaling pathways are critical for nerve recovery (5).
Therefore, AI systems and research on spinal cord neural injury and restoration can mutually reinforce each other and drive medical innovation. We used popular bibliometric software (CiteSpace and VOSviewer) to visualize and analyze the development history and research hotspots of AI research in spinal cord neural injury and restoration, to analyze the shortcomings of research in this area, and to propose potential solutions.
2 Data and methods
Data collection
After the preliminary data retrieval, two researchers (T Gy and Y Bin) screened all manuscripts separately to ensure they were relevant to the theme of this study (see Figure 1). The final results were exported as a 'plain text file', with 'Full Record and Cited References' selected as the record content, and stored in download_*.txt format.
Parameter settings and critical observations
Parameterization of VOSviewer 1.6.19: inter-country publication analysis (with a minimum of 24 papers per country) and keyword clustering analysis were performed using the VOSviewer software.
Parameterization of CiteSpace 6.1.R6: the time parameter was set from January 2004 to March 2024, with 1 year per time slice and Top N = 50; network cropping used Pathfinder, with 'Pruning sliced networks' and 'Pruning the merged networks' enabled, and the other settings were kept at their defaults. Keywords, literature, and journals were selected for co-occurrence and co-citation analyses.
(1) Conducting co-citation analyses of papers to define the main research directions and hotspots. The co-citation graph is constructed as follows: papers are taken as nodes, citation frequency determines node size, papers with a co-citation relationship are linked, and cluster analysis is performed. (2) Creating a keyword co-occurrence graph to analyze burst words. (3) Creating the co-citation analysis graph of 'hot' journals and studying their distribution across disciplines. (4) Creating a dual-map overlay of journals showing, among other things, citation trajectories and the drift of focus in the field.
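As an illustration of step (2), here is a minimal Python sketch of how a keyword co-occurrence network of the kind VOSviewer and CiteSpace build could be assembled from the exported records (hypothetical field names; the actual tools implement far more, e.g., Pathfinder pruning and burst detection):

import itertools
from collections import Counter

def cooccurrence_edges(records):
    # Each record is a dict with a 'keywords' list parsed from the exported
    # WoS plain-text files. Edge weight = number of records in which the two
    # keywords co-occur.
    edges = Counter()
    for record in records:
        for a, b in itertools.combinations(sorted(set(record["keywords"])), 2):
            edges[(a, b)] += 1
    return edges

def node_sizes(records):
    # Node size proportional to keyword frequency (the analogue of citation
    # frequency in the document co-citation graph described in step (1)).
    return Counter(k for record in records for k in record["keywords"])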
Trend analysis of global publication output
Based on the selection procedure, 1,502 papers on AI research in spinal cord neural injury and restoration were collected from the WoS database. Only 12 articles were published in this field in 2004, and no relevant literature was published before 2004. On the whole, the number of published papers is on the rise, indicating that the attempts and explorations made by scholars in AI research on spinal cord neural injury and restoration are gradually increasing.
Country/region analysis and author analysis
The VOSviewer 1.6.19 result indicates that 20 nations have at least 19 publications on the research topic (see Figure 3). As can be seen in Figure 3, there is growing global enthusiasm for AI research in spinal cord neural injury and restoration, with the highest numbers of papers published in Asia and America. As a whole, however, the connections between countries are relatively fragmented, indicating that international cooperation still needs to be strengthened.
The analysis results of the authors and institutions are shown in Table 1.Among them, Northwestern Univ and Univ Zurich have higher centrality, indicating that they have close connections with other institutions and frequently cooperate in conducting research and publishing articles.
Analysis of journal co-citation bursts
NEUROREPORT has paid the most attention to research on AI-assisted repair in spinal cord neural injury and restoration, and has followed the related hotspots for the longest period.
Journal biplot overlay analysis
Based on CiteSpace's research base data, Journal Citation Reports (JCR) 2011 data were analyzed using the Blondel algorithm for a journal dual-map overlay analysis of the literature in this area (see Figure 5), with the citing journals on the left and the cited journals on the right. The citing journals concentrated on MOLECULAR, BIOLOGY, IMMUNOLOGY and on NEUROLOGY, SPORTS, OPHTHALMOLOGY. The most significant direction of the cited journals was SPORTS, REHABILITATION. NEUROLOGY, SPORTS, OPHTHALMOLOGY and MOLECULAR, BIOLOGY, GENETICS are the most strongly associated fields and are the 'hot' areas for AI-assisted repair in spinal cord neural injury and restoration (z = 5.211, f = 219).
Keyword clustering analysis and burst analysis
Keyword clustering analysis was performed using the CiteSpace 6.1.R6 software (see Figure 6A), yielding nine main clusters: #0 muscle synergy, #1 spinal cord injury, #3 assistive technology, #4 central pattern generator, #6 rehabilitation training, #7 functional electrical stimulation, #10 hybrid assistive limbs, #12 brain-computer interface, and #14 neural networks. Three main research directions for AI research in spinal cord neural injury and restoration were identified: (1) research on assistive exoskeletons and motor rehabilitation; (2) brain-computer interfaces; and (3) neuromodulation and functional electrical stimulation. (Figure caption: the 20 most cited journals for AI research in spinal cord neural injury and restoration. Burst journals are those heavily cited during a specific period; the chart lists the 20 burst journals identified in this research area from papers published from 2004 to 2024. The red box indicates the year in which the burst began, 'year' is the earliest year of appearance, 'Strength' is the number of references, and 'Begin' and 'End' mark the beginning and end of the burst.)
Clusters #0, #1, #2, #7, #8: technology, exoskeletons, gait, treadmill training, and actuation. These clusters all focus on machine exoskeletons that assist patients with rehabilitation exercises; the main cited articles are Evans et al. (9) and Sanchez et al. (10). In recent years, robotic motion exoskeletons have provided standing and walking opportunities for people with spinal cord injury and considerable solutions for gait assistance and rehabilitation. The field focuses on actuation, structure, and interface connectivity components.
Clusters #3, #4, #5, #14: rehabilitation robot-assisted gait training. The principal cited articles for these clusters are Banala et al. (11) and Fang et al. (12). Gait training is critical for promoting the neuromuscular plasticity necessary to improve functional walking ability. Robot-assisted gait training was developed for spinal cord injury patients using active leg exoskeletons and force-field controllers, which effectively apply force at the subject's ankle through actuators on the hip and knee joints for rehabilitation.
Clusters #9, #10: brain-computer interfaces and noninvasive brain stimulation. The primary cited article in this cluster is Collinger et al. (13). Upper limb paralysis or amputation results in the loss of the ability to grasp, manipulate, and carry objects, functions that are critical for activities of daily living. Brain-computer interfaces can provide a solution for restoring many of these lost functions. In that study, two 96-channel intracortical microelectrodes were implanted in a patient's motor cortex, showing that quadriplegic patients can use this brain-computer interface to rapidly achieve neural control of a high-performance prosthesis.
Cluster #6: neuromodulation. The primary cited article in this cluster is Angeli et al. (14), which demonstrated that neuromodulation of spinal circuits by epidural stimulation enables completely paralyzed patients to regain relatively fine voluntary control over paralyzed muscles. Neuromodulation of excitatory subthreshold motor states in the lumbosacral spinal cord network is the key to restoring conscious movement in individuals diagnosed with complete leg paralysis. This represents a novel intervention strategy that significantly impacts the recovery of voluntary movement in completely paralyzed individuals, even years after injury.
Cluster analysis of co-cited literature on research hotspots in the last 5 years
We obtained the nine most significant clusters in the literature co-citation network (Figure 7B with Table 4).
Cluster #1, #10: brain-computer interface technologies. The principal cited article in this cluster is Ajiboye et al. (15), which restored limb movement in a patient with chronic quadriplegia through coordinated electrical stimulation of the surrounding muscles and nerves (functional electrical stimulation); the patient's cortical signals were used to direct limb movement through an implanted functional electrical stimulation component and an intracortical brain-computer interface. This was the first co-implanted functional electrical stimulation + intracortical brain-computer interface neuroprosthesis and represents a significant advance in the clinical feasibility of neuroprostheses.
Cluster #11, #12: overview of neuromodulation and electrical stimulation. The primary cited article in this cluster is Gill et al. (16), in which spinal sensory-motor networks that are functionally disconnected from the brain as a result of spinal cord injury are facilitated by epidural electrical stimulation to encourage the return of robust, coordinated motor activity in paralyzed patients. Dynamic task training in the presence of epidural electrical stimulation is referred to as multimodal rehabilitation in that study. The article is the first report of such multimodal rehabilitation in patients with sensory and motor loss of the lower extremities due to spinal cord injury.
Hot spot analysis of co-cited literature
The cited literature of all nodes was ranked according to the number of co-citations, and the 10 articles with the highest number of co-citations are shown in Table 5. The main hotspots are AI exoskeletons and robot-assisted gait training (Table 6).
Summary and interpretation of visual analysis results
A total of 1,502 articles were screened, among which the United States dominated; Kadone, Hideki (13 articles, University of Tsukuba, Japan) was the author with the highest number of publications; ARCH PHYS MED REHAB (IF = 4.3) was the most cited journal, and topics included molecular biology, immunology, neurology, and sports, among other related areas.
Keyword clustering analysis reveals two main research directions for AI research in spinal cord neural injury and restoration: (1) research on physically biased robot-assisted rehabilitation exercises in AI and (2) research on virtual branches of AI, such as deep learning algorithm-assisted brain-computer interfaces and functional electrical stimulation. The results of the keyword burst analyses show that deep learning and artificial intelligence have been the hottest topics in the past 5 years (Figure 8).
We performed a co-citation clustering analysis of the included articles to explore the hot directions further and obtained 15 clusters. Further analysis showed that the use of artificial intelligence in spinal cord neural injury and restoration focuses on AI-controlled electrical stimulation of the spinal cord, neuroprostheses (brain and spinal cord), and information processing. The ultimate goal is to enable patients with paralysis and limb injuries to recover limb function faster through artificially intelligent therapies such as robotic exoskeletons, neuromodulation, and brain-computer interfaces.
Next, we performed a co-citation cluster analysis of the literature over the last 5 years. The top research topics in the past 5 years were as follows.
(1) Robotic motion exoskeletons for assisted motor rehabilitation: research combines topological neural networks and supervised learning (8, 28) to improve the safety, tolerance, and walking functional efficacy of robotic exoskeletons and to satisfy the needs of clinical patients for more efficient and high-quality treatments.
(2) Brain-computer interfaces: the cluster analysis of literature co-citations and of co-citations over the last 5 years shows that brain-computer interfaces with deep learning algorithms are one of the continuing hotspots in this field. Brain-computer interface devices are designed to restore lost function and can be used to form electronic "neural bypasses" that circumvent damaged pathways in the nervous system (29, 30). Artificial intelligence techniques applied to brain-computer interfaces can enable disabled and mobility-impaired people to control machines or other devices. Through implanted intracortical brain-computer interfaces, the patient's cortical signals can be used to direct limb movements (31). For example, Collinger (13) implanted two 96-channel intracortical microelectrodes in a patient's motor cortex and showed that quadriplegic patients could use this brain-computer interface to rapidly achieve neural control of a high-performance prosthesis. In addition, Ajiboye (15) restored limb movement in a paralyzed patient through an implanted functional electrical stimulation component and an intracortical brain-computer interface. The authors concluded that neuroelectrical stimulation and intracortical brain-computer interface techniques could be combined to restore the neurophysiologic and motor status of SCI patients more effectively. In the future, researchers could apply machine learning algorithms to decode neuronal activity and control the activation of nerves and muscles in SCI patients with a customized, high-resolution neuromuscular electrical stimulation system, empowering patients with the critical ability to manipulate and release objects.
(3) Neuromodulation and noninvasive electrical stimulation: the cluster analysis of literature co-citations and of co-citations in the last 5 years shows that neuromodulation and noninvasive electrical stimulation are continuing hotspots in this field. Neuroelectrical stimulation is a noninvasive stimulation strategy (32) that transforms neuronal networks from dormant to functional, thereby gradually restoring control over paralyzed muscles (33, 34). In this regard, "numerical models" enhanced by deep learning algorithms are the basis for theoretical simulations of neurostimulation techniques and provide technical guidance for clinical applications. Alexandre Boutet (35) constructed a machine-learning model from patients' fMRI patterns that predicts optimal versus non-optimal settings and clinically optimizes DBS a priori (88% accuracy). Future neuroelectrical stimulation research could incorporate deep learning algorithms, such as convolutional neural networks, and use various strategies to neuromodulate the physiological state of the nerves and restore motor function in paralyzed patients. In addition, finding more targeted neuroelectrical stimulation techniques by performing a series of spatially selective stimulations may be one of the future directions.
In summary, our results are relatively reliable based on the bibliometric results and the authors' understanding.
Experts' discussion on new research hotspots
In recent years, breakthroughs have been made in AI research in spinal cord neural injury and restoration, positively impacting clinical care. First, artificial intelligence is widely used in neural imaging. For example, image segmentation models based on convolutional neural networks can make excellent contributions to imaging parameters, disease classification, and diagnosis in spinal cord neural injury patients before and after surgery (36, 37). Second, AI can track and analyze in real time all neural components of various nervous systems, i.e., neural structure, neurodynamics, neuroplasticity, and neural memory (38).
In addition, AI has many applications in repairing spinal cord nerve injuries using biomaterial technology (39). Transplantation of stem cells to the site of injury is a promising approach, but it faces many challenges and is highly dependent on the microenvironment provided by the lesion site and the delivery material (7). Using AI to fabricate polymeric biomaterials can provide the microenvironment required for the organization of neural stem cell-derived neural networks, facilitating neural remodeling and repair (40). For example, Li (39) designed a 3D bioactive scaffold and demonstrated that neural network tissues derived from neural stem cells modified by pro-myosin receptor kinase C had strong viability within the scaffold. In addition, Yuan (41) used artificial intelligence to design a DNA hydrogel with extremely high permeability for repairing a 2-mm spinal cord gap in rats, promoting the proliferation and differentiation of endogenous stem cells to form a nascent neural network. The authors concluded that neural network tissue formed by transplantation in 3D bioactive scaffolds may represent a valuable therapy for studying and treating SCI. Still, this technology has not yet been studied on a large scale, and future development should focus on this direction.
Research in the field of genetics: genomic data have high complexity and dimensionality due to differences in genetic structure and the diversity of functional genes. It is difficult to reveal the sequence patterns and biological mechanisms of genomes using classical analysis methods. At the same time, AI can mine critical biological information from massive multidimensional data, so it is widely used in genome analysis for various diseases (42-44). For example, artificial intelligence can also discern which genes and signaling pathways are critical for nerve recovery. However, in the field of AI-assisted repair of spinal cord neural injury, the study of genomics and other mechanisms remains weak. In the future, various machine learning techniques, such as AI survival prediction tools and transcription factor regulatory networks, can be used to study regeneration-related gene up-regulation and the production of structural proteins for axon growth.
Limitations of the study
Only the WOS core database was searched in this study; no other English-language databases were searched, because only WOS data can be analyzed for journal and literature co-citation analysis (a core bibliometric technique). There is no doubt that WOS, as an authoritative mainstream database, contains comprehensive and reliable data. Second, due to length limitations, this paper does not fully present the details of the specific research methodology in the selected literature but only provides an overview of the ideas in the literature.
Outlook
The following research themes are crucial for future AI research in spinal cord neural injury and restoration.
Conclusion
This bibliometric study reveals dynamic trends in publication patterns and research hotspots for AI-assisted repair and restoration of spinal cord neural injuries across the globe. In addition, it identifies potential partners and institutions, major research hotspots, and upcoming research directions in the field, thereby providing valuable guidance for future studies in this area. Finally, the results of this study will be a valuable resource for clinical practitioners, researchers, industrial collaborators, and other interested stakeholders.
value has been emphasized by many researchers in the academic community (see Figure 2).
FIGURE 1 Flow chart of the literature search strategy and selection in this study.
FIGURE 2 Publication trends for AI research in spinal cord neural injury and restoration. The curve represents a continuous increase in publications from 2004 to 2024.
FIGURE 3 National analysis map for AI research in spinal cord neural injury and restoration. (A) Country analysis graph. (B) Institution analysis graph. Each node represents a country (or institution), and its size represents the number of publications; the thickness of the lines represents the intensity of cooperation between countries (or institutions); the thicker the lines, the higher the intensity of collaboration. Different colors represent different times.
FIGURE 6 Keyword network diagram for AI research in spinal cord neural injury and restoration. (A) Keyword clustering network graph. Each node represents a keyword; the colors represent the year the clusters began to appear (notes: #0 muscle synergies, #1 spinal cord injury, #3 assistive technology, #4 central pattern generator, #6 rehabilitation training, #7 functional electrical stimulation, #10 hybrid assistive limb, #12 brain-computer interface, #14 neural networks). (B) Keyword burst analysis chart. The red box represents the year the burst started, "year" is the earliest year of occurrence, and "strength" is the number of citations.
FIGURE 5 Dual-map overlay of citing/cited journals for AI research in spinal cord neural injury and restoration. Each publication is added to two interrelated but different global science maps, with the citing publication on the left and the cited publication on the right. Each point on this map is an article in a corresponding journal. The curves are citation links representing citation paths. The ellipses represent the citation frequency of the clusters. The data for the five curves in the figure are (z = 5.177, f = 2,178), (z = 5.211, f = 2,191), (z = 2.274, f = 1,048), (z = 3.708, f = 1,606), and (z = 3.317, f = 1,454). f: frequency of citations from the left (citing) journals to the right (cited) journals; z: normalization of the value of f. z and f represent the closeness and importance of the linkage.
FIGURE 7 Co-citation clustering of literature on AI research in spinal cord neural injury and restoration. (A) Literature co-citation clustering diagram for AI in spinal cord neural injury and restoration. (B) Literature co-citation clustering diagram for research hotspots in the last 5 years (2020-2024). The clusters' colors represent the year of the first co-citation relationship; the nodes represent the cited publications, and their size represents the number of times they are cited.
(1) Optimizing data quality and scale: training AI models requires larger, high-quality data pools, and biomedical explorations also require innovative experimental means to collect relevant data sets. (2) Conducting large-scale clinical trials: research on AI in spinal cord neural injury and restoration lacks substantial, high-quality clinical trials; therefore, high-quality multicenter, randomized controlled clinical trials should be conducted in the future for in-depth research. (3) Application feasibility of ChatGPT: ChatGPT has recently become a hot topic of discussion, and diagnosing diseases and providing therapeutic advice are promising research areas for it. Nonetheless, users who lack specialized knowledge may not be able to recognize the authenticity of its output, so people should use ChatGPT cautiously, e.g., only for an initial understanding of a disease.
TABLE 1 Information table of included literature.
TABLE 2 Top 10 researchers with the most publications on AI research in spinal cord neural injury and restoration.
TABLE 3 Top 10 most cited journals for AI research in spinal cord neural injury and restoration.
TABLE 4 The nine most representative literature co-citation clusters for AI research in spinal cord neurological injury and repair. The silhouettes are the average contour values of the clusters (Table 3 and this table). Generally, clusters with silhouette scores > 0.5 were accepted, and clusters with silhouette scores > 0.7 had good clustering performance. The size represents the number of items in each cluster, and the labels name the clusters using the LLR algorithm.
TABLE 5 The four most representative literature co-citation clusters in research hotspots in the last 5 years (2019-2023).
"Medicine",
"Computer Science"
] |
Estimation of centroid positions with a matched-filter algorithm: relevance for aberrometry of the eye
Most Shack-Hartmann-based aberrometers use infrared light for the comfort of the patients. A large amount of the light that is scattered from the retinal layers is recorded by the detector as background, from which it is not trivial to estimate the centroid of the Shack-Hartmann spot. For a centroiding algorithm, background light can lead to a systematic bias of the centroid positions towards the centre of the software window. We implement a matched filter algorithm for the estimation of the centroid positions of the Shack-Hartmann spots recorded by our aberrometer. We briefly present the performance of our algorithm and recall the well-known robustness of the matched filter algorithm to background light. Using data collected on 5 human eyes, we parameterise a simple and fast centroiding algorithm and reduce the difference between the two algorithms down to a mean residual wavefront of 0.02 µm rms.
Introduction
The Shack-Hartmann wavefront sensor has a large number of ophthalmic applications, some of which have a great impact on the future life of the patients. Naturally, its performance has been questioned by many authors, usually for the problem of reconstructing the wavefront map from the measured centroid positions [1][2][3][4][5]. However, the measurement of the centroid positions is the core of the Shack-Hartmann wavefront sensor, and corresponds to the largest reduction of data in the measurement process [6,7].
Aberrometers are usually designed with a large number of CCD pixels per single lenslet, in order to cope with the extended nature of the Shack-Hartmann spots and provide adequate dynamic range. Shack-Hartmann spots are processed independently using a "software window", which typically corresponds to the aperture of one lenslet (between 10 × 10 and 20 × 20 pixels). As a result, a large number of noisy pixels do not carry any significant information about the measured wavefront and are responsible for a lack of precision in the estimation of the centroid positions.
The noise that corrupts the CCD data recorded by a Shack-Hartmann wavefront sensor is classically described by combined Poisson and Gaussian statistics, in order to model the fundamental randomness of the detection and the processing of photoelectrons [8]. For an open-loop aberrometer, both the precision and the accuracy of the estimated centroid positions are of primary interest. Precision is usually improved by an adequate reduction of the CCD data, so that noisy CCD pixels are partially suppressed. Methods to suppress irrelevant pixels mainly consist of applying a rectangular or Gaussian weighting function [9][10][11][12][13][14][15] and/or thresholding the data [16][17][18]. These methods can bias the estimated centroid positions if significant information is thrown away [19][20][21][22][23].
Matched filter algorithms have been introduced for solar adaptive optics [24], an application of the Shack-Hartmann wavefront sensor for which no point source is available. They allow tracking of the spatial features of an extended object, which is imaged by each lenslet of the Shack-Hartmann. For the estimation of the centroid position of a Gaussian Shack-Hartmann spot, a matched filter algorithm also has the advantage of being more linear than simple centroiding [23,25].
Near infrared light is commonly used for aberrometry in the eye [26], at the cost of an increased amount of scattered background in the recorded CCD data. The major consequence of the background light on a centroiding algorithm is to bias the estimated centroid position towards the centre of the software window, simply because of its uniform distribution across the detector plane. We show in the next section that this feature is of particular relevance for aberrometry of the eye. As an alternative, we stress the benefit of estimating the centroid position of the Shack-Hartmann spot with a matched filter algorithm. The linearity of the matched filter algorithm is insensitive to the amount of background light, when the cross correlation is computed with Fourier transforms [27].
Description of the custom-built aberrometer
We present in this section some numerical simulations of the performance of matched filter and centroiding algorithms. We parameterise our simulations to realistically model the measurement process of a custom-built aberrometer, which we present in Fig. 1. The 0.2 mm pitch of a lenslet corresponds to 18.5 pixels of the CCD, and the data are processed using 15 × 15 software windows. The aberrometer uses a very narrow probing beam of full width at half maximum (FWHM) 0.5 mm in the pupil of the eye, in order to consistently obtain Gaussian Shack-Hartmann spots of FWHM w ≈ 3.5 pixels. We typically use a 15 µW probing beam to obtain spots with a mean peak a ≈ 400 D.U., at a 100 Hz frame rate. The detector has a 40 e− rms readout noise and a gain of 30 e−/D.U. The use of a scanning mirror, which is conjugated with the pupil of the eye, drastically reduces the speckled aspect of the spots due to scattering [28]. For a mean signal a ≈ 400 D.U., we experimentally evaluated the precision of the centroid positions estimated with a matched filter algorithm as a standard deviation of 0.006 pixels. To do so, we measured a sequence of 1000 wavefronts using an artificial eye (an 18 mm lens with an opaque screen in the back focal plane) in a "double-pass" configuration. This random error corresponds to a 2.5 nanometre rms error on the estimated wavefronts. We summarise the main parameters of our custom-built aberrometer in Table 1.
Parameterisation of the simulations
We model the 15 × 15 noise-free CCD image recorded by our custom-built aberrometer as a Gaussian profile with an additional homogeneous background. The FWHM of the simulated Shack-Hartmann spot is w = 3.5 pixels, the peak signal is a = 400 (10 bit) Digital Units (D.U.), and the background is b = 50 D.U. These values are typical for our aberrometer, operating at 780 nm. The centroid position of the spot is parameterised by the 2-dimensional vector ρ (in pixels, with the centre of the software window taken as origin). Only shifts smaller than 0.5 pixels are considered, which corresponds to accurately positioned software windows ("second pass centroiding"). We model the noise of each CCD pixel independently, as combined Poisson and Gaussian statistics, parameterised by the gain and the readout noise of the camera (see Table 2).
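As a rough illustration of this parameterisation (not code from the paper), the following Python sketch generates one noisy software window using these values; the function and parameter names are ours, and numpy is assumed.

```python
import numpy as np

def simulate_spot(rho=(0.0, 0.0), size=15, fwhm=3.5, peak=400.0, background=50.0,
                  gain=30.0, readout_rms=40.0, rng=None):
    """Simulate one noisy Shack-Hartmann software window, in D.U.

    rho        : true centroid position in pixels, window centre as origin
    gain       : e-/D.U. conversion; readout_rms is in electrons (rms)
    """
    rng = np.random.default_rng() if rng is None else rng
    c = (size - 1) / 2.0
    y, x = np.mgrid[0:size, 0:size]
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    noise_free = background + peak * np.exp(
        -((x - c - rho[0]) ** 2 + (y - c - rho[1]) ** 2) / (2.0 * sigma ** 2))
    electrons = noise_free * gain
    # Combined Poisson (photon) and Gaussian (readout) noise, converted back to D.U.
    noisy = rng.poisson(electrons) + rng.normal(0.0, readout_rms, electrons.shape)
    return noisy / gain
```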
Matched filter algorithm
The matched filter algorithm estimates the shift that maximises the scalar product of a reference image (Gaussian spot, of FWHM 3.5 pixels) with the recorded data [8]. The scalar product of the two images can be seen as a cross correlation and thus be computed using the Fourier transform, according to the correlation theorem [23,27,29]. The linearity of the algorithm can be understood with the Shannon sampling theorem and the concept of space-bandwidth product [30]. The algorithm that we implemented is described in [27]. The accuracy of the estimated shift of the spot depends on the interpolation of the cross correlation. We do this interpolation by padding the cross spectrum of the two functions with zeroes, so that the estimated cross correlation has a size of 45 × 45 pixels.
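The Python sketch below illustrates the idea of a Fourier-domain cross correlation whose spectrum is zero-padded for interpolation, as described above. It is a simplified stand-in for the implementation of [27]: the padding factor of 3 reproduces the 45 × 45 correlation grid, but the peak refinement and windowing details of the real algorithm may differ.

```python
import numpy as np

def matched_filter_centroid(data, fwhm=3.5, upsample=3):
    """Estimate the spot shift (pixels, window centre as origin) by cross-correlating
    the data with a Gaussian reference and interpolating via a zero-padded spectrum."""
    n = data.shape[0]
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    ref = np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))

    # Cross spectrum (correlation theorem), padded with zeros for interpolation.
    cross = np.fft.fft2(data) * np.conj(np.fft.fft2(ref))
    m = n * upsample                      # 45 x 45 for a 15 x 15 window
    half = n // 2
    padded = np.zeros((m, m), dtype=complex)
    padded[m // 2 - half:m // 2 + half + 1,
           m // 2 - half:m // 2 + half + 1] = np.fft.fftshift(cross)
    corr = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.ifftshift(padded))))

    # The correlation peak location gives the shift, in units of 1/upsample pixels.
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return ((peak[1] - m // 2) / upsample, (peak[0] - m // 2) / upsample)
```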
The amount of shift that can be estimated without noticeable bias depends on the FWHM of both images. The accuracy of the algorithm is better than 0.001 pixels in the absence of noise for spots of FWHM w = 3.5 pixels shifted up to ρ_x = 4 pixels, independent of the amount of homogeneous background.
Centroiding algorithm
The matched filter is compared in this paper with a centroiding algorithm that uses a rectangular window of width R (in pixels) and a normalised threshold 0 ≤ t ≤ 1. The algorithm first computes the minimum b and the maximum a of the 15 × 15 local data, and then sets to zero the data that are below the threshold value (2a/3 − b) × t + b. A rectangular windowing is then applied, and the centre of mass is computed as an estimate of the centroid position. The algorithm is shown in Fig. 2. For t = 0, there is no effective thresholding of the data. For t = 1, the threshold level is 2a/3.
Fig. 2. Parameterisation of a centroiding algorithm, with a normalised threshold t and a rectangular window of size R. The gray area corresponds to the data set to zero before centroiding.
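A minimal Python sketch of the centroiding algorithm described above (threshold t, rectangular window R); in this sketch the window is centred on the software window rather than on a first-pass centroid, which is an assumption of the sketch, not of the paper.

```python
import numpy as np

def centroid_estimate(data, R=9, t=0.0):
    """Thresholded, windowed centre-of-mass estimate (window centre as origin).

    data : square software window (e.g. 15 x 15), in D.U.
    R    : width of the rectangular window, in pixels
    t    : normalised threshold, 0 <= t <= 1
    """
    n = data.shape[0]
    c = (n - 1) // 2
    a, b = data.max(), data.min()
    # Set to zero everything below the threshold level (2a/3 - b) * t + b.
    work = np.where(data >= (2.0 * a / 3.0 - b) * t + b, data, 0.0)
    # Apply the R x R rectangular window, centred on the software window.
    half = R // 2
    mask = np.zeros_like(work)
    mask[c - half:c + half + 1, c - half:c + half + 1] = 1.0
    work = work * mask
    total = work.sum()
    if total <= 0.0:
        return 0.0, 0.0
    y, x = np.mgrid[0:n, 0:n]
    return ((x * work).sum() / total - c, (y * work).sum() / total - c)
```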
Effect of background light on the centroid estimates
The main effect of background light on a centroiding algorithm is to introduce non-linearity in the estimated centroid positions. As soon as the "true" centroid position (that we define with the noise-free simulation) moves away from the centre of the software window, the centroiding algorithm leads to a biased estimation of the centroid position towards the centre of the window. Without background light, a centroiding algorithm has a given range of linearity, which corresponds to the domain of true centroid positions for which there is no truncation of the Shack-Hartmann spots by the weighting function.
We illustrate this effect with numerical simulations. An ensemble of 1000 noise realizations is simulated for each noise-free Gaussian spot, which is parameterised by a variable centroid position ρ = [ρ_x, 0] and the numerical values of Table 2. Fig. 3 shows the Mean Square Error [8] (MSE, in pixels squared) in the estimated x-position of the centroid (ρ̂_x) as a function of the noise-free centroid (ρ_x), using two unthresholded (t = 0) centroiding algorithms (R = 5 and R = 15) and the matched filter algorithm. With background light, the MSE of the centroiding algorithms grows with the true shift ρ_x. For the R = 15 algorithm, this effect is due to the contribution of the uniform background light, the centroid of which is in the middle of the processed window. As a result, the estimated position of the centroid is biased towards zero. Without background light, the R = 15 centroiding algorithm remains linear (blue solid graph), because there is no significant truncation of the Gaussian spot over the full [0–0.5] pixel range of shifts.
The R = 5 centroiding algorithm is not linear either with or without background light (red dotted and solid graphs, respectively), because there is a significant truncation of the Shack-Hartmann spot by the 5 × 5 rectangular window. For ρ_x = 0.5, the error is 0.17 pixels without background and 0.27 pixels with background. With background light, the error arises from a combined effect of the truncation of the Shack-Hartmann spot and the background. In the zero-shift case (ρ_x = 0), the low MSE of the R = 5 centroiding algorithm (MSE ≈ 2 × 10⁻⁵ pixels squared, both with and without background) should be carefully interpreted. This is a typical feature of a biased estimator, which can perform better than the theoretical lower bound of the variance [23]. This so-called Cramér-Rao lower bound [8] has been investigated for the estimation of the centroid position of a point source [31,32], which is of particular relevance for the Shack-Hartmann wavefront sensor.
The matched filter remains linear over the whole [0-0.5] pixels range of shifts, even with the background light. The MSE of the matched filter is higher with the background light, because it is subject to the combined Poisson and Gaussian noise. For a non-biased estimator, having a larger error (variance) when the contrast of the image decreases is "natural". Figure 3 thus demonstrates that great caution is required in using a centroiding algorithm in practice, even when "smart" centroiding (recursive variable threshold, variable width centroiding) is used. Taking the matched filter as a reference, we discuss in Section 3 the performance of the centroiding algorithm, for data recorded on 5 human eyes. The comparative study of Section 3 confirms the large non-linearity of the unthresholded centroiding algorithm in the presence of background light. We also quantify the effect of the normalised threshold t on the centroid positions estimated by the centroiding algorithm.
Methodology
We present in this section a comparative study of the matched filter and the centroiding algorithms, using experimental data obtained with our custom-built aberrometer. We measure 5 young subjects during a 1 second trial with no occurrence of blinks, and we compute the difference ∆ρ = ρ_cent − ρ_mf between the centroid positions estimated by the matched filter (ρ_mf) and the centroiding algorithm (ρ_cent). The centroiding algorithm uses a threshold t and a rectangular window of size R, which is positioned on the integer value of the centroid position ρ_mf. We present in Table 3 the mean values of the peak a and the background b of the data, which are estimated for each subject by spatio-temporal averaging of the minimum and maximum values of the processed local data. The values presented in Table 3 are close to the values we used in the simulations of Section 2 (a = 400 D.U. and b = 50 D.U.). We record more background light on subject 1 than on the other subjects, and we attribute this result to the low pigmentation of his eyes. Figure 4 shows that, for subject 2, the centroid positions ρ_cent ("·") are systematically biased towards the centre of the software window, for R = 9 and no thresholding (t = 0). This effect is also apparent in Fig. 5, which shows that the norm of ∆ρ is proportional to the norm of the centroid positions ρ_mf. The larger departures from a straight line obtained for the R = 15 centroiding algorithm (right graph of Fig. 5) are due to the contribution of a larger number of noisy pixels. Without any thresholding applied, the centroiding algorithm is barely sensitive to a sub-pixel shift of the Shack-Hartmann spot, for any size R of the software window; this effect is significant for both subjects 1 and 2. The threshold level t thus has to be set sufficiently high to completely eliminate the background of the local data. Figure 7 shows the partially thresholded CCD data obtained with subject 2, for t = 0.1 (left) and t = 0.2 (middle). In both images, the pixels that are set to zero are not symmetrically distributed around the core of the spot. This leads to a large bias in the centroid estimates. For both subjects 1 and 2, the error is close to a minimum value for t = 0.8, independent of the size of the centroiding window R. For subject 2, σ is relatively insensitive to the value of the threshold in the range 0.4 < t < 0.8. We attribute the difference between the results of subjects 1 and 2 to the high amount of background light recorded on subject 1. Given the results of Fig. 6, we will consider in the following the effect of thresholding the data for all subjects, with t = 0.8.
Effect of thresholding
Thresholding reduces the residual error of the centroiding algorithm, from σ ≈ 0.3 pixels (t = 0) down to σ ≈ 0.13 pixels (t = 0.8). The residual error does not fall below 0.13 pixels. We attribute this residual error to the truncation of the spot, which leads to bias in the centroid estimates. The truncation is illustrated in Fig. 7 (right graph, obtained with a threshold level t = 0.6). Regardless of t, the residual error is well above the 0.006 pixel precision of our aberrometer, which we experimentally measured using an artificial eye. Figure 8 shows for the 5 subjects the mean rms error of the tip/tilt-removed residual wavefront, for t = 0 and t = 0.8. This residual rms is computed using a modal reconstruction of Zernike coefficients (up to the tenth radial order). A t = 0.8 threshold consistently decreases the difference between the matched filter and the centroiding algorithm down to a mean error of 0.02 µm rms, for the 3 window sizes. Without thresholding, we found a mean rms value of 0.045 µm for R = 5 and R = 9, and 0.062 µm for R = 15. For t = 0.6, the threshold is close to optimal, but there is still a σ ≈ 0.13 pixel residual error due to the truncation of the spot.
Conclusion
The extended nature of Shack-Hartmann spots and the amount of background light obtained in human eyes justify the choice of the matched filter algorithm for aberrometry. Its close relationship to the least-squares estimator makes it also suitable for dealing efficiently with a larger number of pixels subject to Gaussian readout noise [8].
However, we have shown that the difference between the (tip/tilt-removed) estimated aberrations is of the order of 0.02 µm rms when an appropriate thresholding of the data is applied before centroiding (t = 0.8), independently of the size of the rectangular window R. This residual error is not significant for most ophthalmic applications of the Shack-Hartmann wavefront sensor, as it corresponds to λ/25 for a 0.5 µm wavelength. Using MATLAB 7.4.0, we found our implementation of the matched filter algorithm 6 times slower than the centroiding algorithm, for the processing of 15 × 15 pixel images. For an adaptive optics system, the modest gain in accuracy obtained with the matched filter algorithm might therefore come at the cost of a reduced bandwidth, unless appropriate parallel processing of the data is implemented (using field-programmable gate arrays, for instance).
Without thresholding, the centroiding algorithm leads to centroid positions that are systematically biased towards the centre of the software window. With our custom-built aberrometer, we estimated the corresponding (tip/tilt-removed) error to be between 0.045 and 0.062 µm rms.
This research was funded by Science Foundation Ireland under Grant No 07/IN.1/I906.
"Computer Science",
"Physics"
] |
ReLoki: A Light-Weight Relative Localization System Based on UWB Antenna Arrays
Ultra Wide-Band (UWB) sensing has gained popularity in relative localization applications. Many localization solutions rely on using Time of Flight (ToF) sensing based on a beacon–tag system, which requires four or more beacons in the environment for 3D localization. A lesser researched option is using Angle of Arrival (AoA) readings obtained from UWB antenna pairs to perform relative localization. In this paper, we present a UWB platform called ReLoki that can be used for ranging and AoA-based relative localization in 3D. To enable AoA, ReLoki utilizes the geometry of antenna arrays. In this paper, we present a system design for localization estimates using a Regular Tetrahedral Array (RTA), Regular Orthogonal Array (ROA), and Uniform Square Array (USA). The use of a multi-antenna array enables fully onboard infrastructure-free relative localization between participating ReLoki modules. We also present studies demonstrating sub-50cm localization errors in indoor experiments, achieving performance close to current ToF-based systems, while offering the advantage of not relying on static infrastructure.
Introduction
Infrastructure-free ad hoc relative localization methods are needed in many applications that keep track of objects, robots, and even people.When neither a Global Positioning System (GPS), nor any infrastructure/landmarks are available for reliable global positioning, the agents must rely on relative localization to support missions like search-and-rescue or environmental monitoring, as shown in Figure 1.These systems can be further enhanced with ad hoc infrastructure setups, such as a localization beacon on a moving base station, which can be dynamically deployed in strategic locations.This paper proposes and validates a novel system using only Ultra-Wide Band (UWB) antenna arrays to relatively localize tags-to-tags or tags-to-mobile-beacons in a 3D environment.
We are focused on a light-weight, real-time distributed sensing solution for 3D relative localization that can seamlessly integrate (plug-and-play) into any existing multi-agent platform. Acquiring distance estimates based on Time of Flight (ToF) differences in UWB sensors is a commonly used method today [1][2][3][4]. A lesser explored area of interest for UWB systems is using multiple antenna arrays to leverage their Phase Difference of Arrival (PDoA) [5]. In this space, we propose ReLoki as a combined PDoA + ToF distributed relative localization system that is capable of estimating the transmitter's location when two modules are communicating. For PDoA sensing, ReLoki uses an on-board four-element antenna array. In this paper, we study the sensing performance of three specific antenna arrays: the Regular Orthogonal Array (ROA) and the Regular Tetrahedral Array (RTA), which are used for fully on-board 3D relative localization, and the Uniform Square Array (USA), which is targeted for ad hoc beacon systems. In [6], the authors propose a theoretical methodology for localization with phase difference measurements [7][8][9][10]. We use a modified geometric approach for Angle of Arrival (AoA) estimation with the antenna arrays and show its performance characteristics in this paper.
(Figure 1 caption) The RX agent senses the relative positions q_{i,j} of the TX agents with respect to its body frame whenever a message is received; on the right, ReLoki acts as a mobile beacon. All beacons are capable of localizing a transmitting agent in 3D, and adding more beacons will improve the estimates.
Related Works
A common method of obtaining range measurements using UWB systems is the tag-anchor model [2,3,11]. Anchor nodes are set up at predefined locations, and the Time Differences of Arrival (TDoAs) are used to estimate the relative positions of the tags. Another implementation, called concurrent AoA estimation, uses a single RX antenna as a tag and a multi-TX antenna array on anchors situated at known locations in a room [12,13], with small delays between transmissions used for localization. These implementations are well suited for both indoor and outdoor deployments. By increasing the deployment density of tags and anchors, the localization accuracy can reach under 25 cm. Additionally, since each tag only requires one UWB RF subsystem, the tags are light-weight and power-efficient, making them well suited to battery-operated systems. However, these systems need simultaneous observations from at least four beacons for full 3D localization. This limits localization to areas specifically set up with anchors. Ad hoc, infrastructure-free deployments are not possible with this architecture.
To remove the need for infrastructure, we can observe a multitude of sensor fusion methods [3,[14][15][16][17], where the ranging measurements obtained from a single UWB TX/RX system are combined with some form of Simultaneous Localization and Mapping (SLAM) using an Inertial Measurement Unit (IMU) and/or visual data.This sensor fusion augments the UWB range-only measurements, enabling estimation of relative locations of neighbors without any ambiguity.The sensor fusion also allows for pose estimation, which provides full state information.While these systems generally have very good localization performance, they are far from plug-and-play and generally come with very complex computation requirements.Additionally, these systems require multiple readings from different points in the environment to estimate position, which contributes to an initial windup time before these sensors obtain lower errors in their positional estimates.
Multi-UWB range measurement systems have been developed to estimate the complete relative 2D positions (with pose) [18] and 3D positions [19][20][21] of robots.These systems use UWB antennas placed at pre-defined locations of the robot, i.e., the body of the robot is part of the antenna array structure.The difference in the ToF measurements on these different antennas corresponds to the AOA of the source.These systems work without any infrastructure and require only on-board processing for relative localization.They can additionally provide pose information with some assumptions.In these systems, the errors reduce with increasing separation between antennas.Hence, one of the main disadvantages is the large and bulky sub-frames needed to support the large separation between the antennas.For small sizes, the bearing estimation based on these ToF measurements has much higher errors than corresponding PDoA-based systems, as shown in [5].Additionally, the 3D localization comes with ambiguity of translational position in one axis, as these systems use planar antenna arrays.
For full 3D relative localization, we can fuse ranging and bearing information to obtain a combined position estimate. PDoA estimation is an active area of research. The idea is to use phase differences measured by antennas separated by a distance of roughly half the carrier wavelength to estimate the AoA of the signal. Many of the works using narrow-band radios [22,23] consider planar antenna arrays with many antenna pairs to obtain very low bearing angle errors. However, these arrays come with ambiguity in localization in at least one bearing estimate. Narrow-band RF localization also suffers from multi-path effects, leading to reduced localization performance in environments with many reflecting surfaces. Additionally, one of our requirements is a low-weight hardware system that can be integrated onto existing robotic platforms. Current UWB localization systems perform very accurate phase detection for their high-precision range estimation, and they come in a very small package. Hence, in the case of our proposed ReLoki system, a UWB-based PDoA bearing angle estimation is coupled with a ToF range measurement. Looking at other UWB-based PDoA estimation methods, an azimuth-only AoA version is considered in [5], using an antenna array consisting of two antenna elements. However, this system exhibits azimuth ambiguity, meaning there is a one-to-two mapping of the measured phase difference in the range [−π, π] to the actual azimuth angle in the same range. This results in two possible mirrored azimuth estimates for a single phase difference value. The system in [24] is most similar to ours and is an extension of that in [5], which uses one more orthogonal pair (a total of three antennas) to estimate both azimuth and elevation angles, but the same issue of azimuth ambiguity still exists. A major distinguishing factor of our work is exploring 3D antenna arrays for PDoA in UWB systems to eliminate this ambiguity observed in planar antenna arrays. Our previous work explored full 3D relative localization without azimuth ambiguity, showing preliminary results for the RTA antenna [25]. In this work, we extend our preliminary results with improved experimental validation of the RTA antenna, present algorithms and validation for mobile ad hoc beacons using the USA antenna, and introduce a novel antenna design for ReLoki's antenna array. When dealing with the antenna separation for the PDoA measurements, existing work on wrapped PDoA measurements for antennas with spacing greater than half the carrier wavelength [6,26] has shown theoretical performance gains over simple PDoA-based estimation without wrapping (for antenna spacing less than half the carrier wavelength), but this comes at the cost of a computationally expensive non-convex search strategy to address phase ambiguity resolution. As a first proof of concept, we instead implement a simpler approach with antenna spacing less than half the carrier wavelength, to facilitate low hardware weight and faster estimation.
Contributions
The main contributions of the paper are as follows.
• First implementation of full 3D relative bearing estimation without ambiguity using 3D UWB antenna arrays. The current state of the art for UWB PDoA estimation using planar antenna arrays [5,24] performs bearing estimation with azimuth ambiguity.
Multi-Agent Relative Localization via UWB
Consider a UWB-based transceiver pair (i, j), with i transmitting and j receiving data, and each transceiver consisting of four antennas. The 3D positions of these transceivers are defined as p_i ∈ R³ and p_j ∈ R³. Note that these global positions will never be available to the agents. The goal of this system is to use this UWB system to obtain an estimate q̂_{i,j} of the true relative position q_{i,j} ≜ {}^{B_j}_{O}R (p_i − p_j) ∈ R³ of the transmitter with respect to the body frame of the receiver, where {}^{B_j}_{O}R denotes the rotation from the global frame to the body frame of the transceiver performing the relative localization sensing.
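For concreteness, the quantity being estimated is simply the rotated position difference; a short Python/numpy sketch in our own notation (not code from the paper):

```python
import numpy as np

def relative_position(p_i, p_j, R_world_to_body_j):
    """True relative position q_{i,j} of transmitter i in receiver j's body frame.

    R_world_to_body_j is the 3x3 rotation from the global frame to j's body frame.
    """
    return np.asarray(R_world_to_body_j) @ (np.asarray(p_i) - np.asarray(p_j))
```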
Problem 1 (3D relative position estimation). Using the UWB sensing system mentioned above, find a co-designed Two-Way Ranging (TWR) + Phase Difference of Arrival (PDoA) pinging scheme and an online estimation algorithm that allows the receiving UWB system j to sense the relative position q_{i,j} of the transmitter i whenever i initiates a ranging request. The combined algorithm and hardware should minimize the expected relative localization error E[‖q̂_{i,j} − q_{i,j}‖].
ReLoki Proof of Concept
ReLoki is our envisioned solution to the relative localization problem, capable of determining the distance and direction of an incoming transmission.Each ReLoki node consists of two parts: (1) an antenna array with specific geometry that is used to transmit/receive (TX/RX) data, and (2) a base platform consisting of four UWB Integrated Circuits (ICs) capable of computing the phase of arrival and time of arrival of an incoming signal, connected to a processing subsystem that performs all the computations.Full 3D localization requires at least four antennas, as seen in [6].It should be noted that any four-element antenna array can be used with this system.In this paper, we will focus on the RTA, ROA, and USA arrays for their regular structures.An illustration of these antenna arrays is shown in Figure 2.
Relative Localization Using Four-Antenna Array
Localization of a source requires both ranging and direction/bearing information.Obtaining range measurements is straightforward.The ToF of a radio signal between two nodes is calculated using a well-established asynchronous Two-Way Ranging (TWR) protocol [27] between the nodes.This is known to provide centimeter-level accuracy [28,29].
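As an illustration of ToF-based ranging, the sketch below computes a distance from the four timestamps of an asymmetric double-sided TWR exchange, a common variant of this protocol family; the exact message exchange used by ReLoki follows [27] and may differ in detail.

```python
def twr_distance(t_round1, t_reply1, t_round2, t_reply2, c=299_792_458.0):
    """Asymmetric double-sided two-way ranging (generic formula, illustrative).

    t_round1 : initiator interval from poll TX to response RX, in seconds
    t_reply1 : responder interval from poll RX to response TX
    t_round2 : responder interval from response TX to final RX
    t_reply2 : initiator interval from response RX to final TX
    """
    tof = (t_round1 * t_round2 - t_reply1 * t_reply2) / (
        t_round1 + t_round2 + t_reply1 + t_reply2)
    return tof * c   # range in metres
```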
To determine the direction to the source, we take advantage of the geometry of the specific receiving antenna array and transform the Phase Difference of Arrivals (PDoA) between different antenna pairs to the bearing angles.This can be performed each time a sensor i initiates what we call a Relative Position Ping (RPP).
For PDoA measurements, it is ideal to set the distance between all antenna pairs to half the wavelength of the carrier, λ_c/2, to prevent wrapping (the estimated phase difference going beyond π or −π and wrapping to the opposite sign). Additionally, the carrier frequency should be in sync between all UWB transceivers on the receiver. Assuming we have the phase of arrival Φ^n_{i,j}, n ∈ {1, 2, 3, 4}, for all four antennas in the array, we can compute the phase difference ∆Φ^{n,o}_{i,j} for all (4 choose 2) = 6 antenna pairs as in (2). Ideally, the calculated phase difference will be zero when the source is perpendicular to the antenna pair. Our experiments showed that this is not the case in the hardware, and all antenna pairs exhibited a phase difference bias. We use a bias cancellation term Φ̄^{n,o}_j to compensate for this. The bias compensation can be determined through a calibration experiment for each antenna pair. In this experiment, we first measure the phase of arrival without any bias compensation for all possible bearing angles of the transmitter relative to the receiving antenna array. These measurements are then compared with the true angles expected based on the antenna geometry. The bias compensation Φ̄^{n,o}_j is found by minimizing the least-squared error between the measured and true angles. Subsequently, the angle of incidence α^{n,o}_{i,j} for all six antenna pairs is computed using (3). We show an illustration of the angle of incidence for all three antenna arrays in Figure 3.
Notably, a fraction of 1/0.95 is used. In the actual design of the antenna array, the spacing between antennas is set to 0.95 λ_c/2. This spacing is chosen to ensure that the calculated phase differences remain within the [−π, π] range, mitigating errors caused by noise in the phase calculation algorithm, as mentioned in [5].
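Since (2) and (3) are referenced but not written out above, the following Python sketch shows one standard way to go from phases of arrival to per-pair incidence angles under the 0.95 λ_c/2 spacing assumption: the phase difference is bias-compensated, wrapped to [−π, π], and divided by 0.95π before taking the arcsine. Function and variable names are ours.

```python
import numpy as np
from itertools import combinations

def incidence_angles(phi, bias, spacing_fraction=0.95):
    """Per-pair incidence angles (radians) from phases of arrival.

    phi  : dict {antenna index 0..3: phase of arrival}
    bias : dict {(n, o): calibrated phase-difference bias for that pair}
    With spacing spacing_fraction * lambda_c / 2, the expected phase
    difference is spacing_fraction * pi * sin(alpha).
    """
    alpha = {}
    for n, o in combinations(sorted(phi), 2):
        dphi = phi[n] - phi[o] - bias[(n, o)]
        # Wrap the compensated difference back into [-pi, pi].
        dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi
        s = np.clip(dphi / (spacing_fraction * np.pi), -1.0, 1.0)
        alpha[(n, o)] = np.arcsin(s)
    return alpha
```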
Using the six incidence angle measurements α^{n,o}_{i,j} and the specific geometry of the antenna array, we can obtain the unit bearing vector υ̂_{i,j} = [υ̂^x_{i,j}, υ̂^y_{i,j}, υ̂^z_{i,j}]. Below, we show the specific geometric transformations for each antenna array.
RTA Transformation
The RTA antenna has its elements at the vertices of a regular tetrahedron. Since all six antenna pair separations are within half the carrier wavelength λ_c/2, there is no phase wrapping in any antenna pair. The transformation for the RTA array is given by (4), where υ̂^{x−}_{i,j}, υ̂^{y−}_{i,j}, and υ̂^{z−}_{i,j} are the noisy estimates of the x, y, and z components of the bearing vector, respectively.
In ideal, noise-free conditions, all six equations should converge to a unique solution. However, with noisy measurements, the system becomes over-constrained. For UWB modules, the noise is directly correlated with |α^{n,o}_{i,j}| [5]: when the magnitude is low, the incidence angle estimate has minimal error, but estimates above 70° exhibit higher errors. Therefore, for a given set of phase differences, we exclude from (4) the row corresponding to any antenna pair with a very high phase difference |∆Φ^{n,o}_{i,j}| > ∆Φ̄^{n,o}_{i,j}. We determined through experimentation that a threshold of ∆Φ̄^{n,o}_{i,j} = 165° works well. We then use the pseudo-inverse of the left-hand matrix, based on the remaining pairs with valid phase differences, to approximately solve (4) and obtain the noisy direction estimate.
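A hedged Python sketch of this least-squares step: the rows of the system are the unit baseline directions of the tetrahedron edges, rows with near-saturated phase differences are dropped, and the pseudo-inverse gives the raw bearing. The specific vertex coordinates (and hence the array orientation) are an assumption of this sketch, not taken from the ReLoki hardware.

```python
import numpy as np

def rta_bearing(alpha, dphi, dphi_max_deg=165.0):
    """Least-squares unit bearing from the six RTA incidence angles.

    alpha, dphi : dicts keyed by antenna pair (n, o) with n, o in 0..3, radians
    """
    # One possible regular-tetrahedron placement (assumed, for illustration only).
    verts = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], float)
    rows, rhs = [], []
    for (n, o), a in alpha.items():
        if abs(np.degrees(dphi[(n, o)])) > dphi_max_deg:
            continue  # drop pairs with near-saturated phase differences
        baseline = verts[n] - verts[o]
        rows.append(baseline / np.linalg.norm(baseline))
        rhs.append(np.sin(a))           # each row satisfies  b_hat . v = sin(alpha)
    v = np.linalg.pinv(np.array(rows)) @ np.array(rhs)   # noisy estimate
    return v / np.linalg.norm(v)                          # normalise to unit length
```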
ROA Transformation
In the orthogonal array, the antennas are arranged so that pairs of antennas lie along the cardinal axes. Consequently, only the pairs that are in line with a cardinal axis have an antenna separation of λ_c/2, making the phase difference estimated at these pairs proportional to the actual angle of incidence. The other pairs exhibit phase wrapping and are therefore discarded. Hence, the transformation for the ROA antenna is given by (5). It is important to note that, in the case of the ROA array, the transformation simplifies to a direct assignment of the sine of the incidence angle, as the antenna pairs are aligned with the cardinal axes.
USA Transformation
In both the RTA and ROA arrays, the antenna pairs were arranged so that only three antennas were in the same plane, with at least one antenna positioned outside this plane.This arrangement enabled unambiguous 3D localization.In contrast, the square array has all antenna pairs lying on a single plane, resulting in a ± ambiguity along the axis perpendicular to this plane.In this particular setup, all antennas are on the YZ plane, causing a ± ambiguity for the X-axis coordinates.Consequently, we will limit the X-axis localization readings to only positive values.
With all four antennas in the same plane, the antenna array includes redundant pairs along the X- and Z-axes. This configuration allows us to average the readings from both pairs, providing a more accurate estimate of the angle of incidence and thereby reducing bearing errors. Hence, the transformation for the USA array is given by (6). In the USA antenna array, we use a weighted average to combine the angles of incidence from redundant antenna pairs. The weight for the azimuth pairs is defined by γ_azm and that for the elevation pairs by γ_ele. During experiments, we observed that for angles above 60°, one antenna pair exhibited saturation in the measured angle of incidence, while the other pair showed a complementary saturation for angles below −60°. This effect is illustrated in Figure 4. The weighted filter assigns less weight to the saturated measurements, making the values at higher angles of incidence more accurate.
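The weighted averaging of redundant pairs could look like the sketch below; the down-weighting rule for saturated readings is purely illustrative and stands in for the γ_azm / γ_ele weights, whose actual values are not given here.

```python
import numpy as np

def combine_redundant_pairs(alpha_pair_a, alpha_pair_b, saturation_deg=60.0,
                            saturated_weight=0.2):
    """Weighted average of the incidence angles from two parallel (redundant) pairs
    of the square array; readings beyond the saturation angle get a smaller weight.
    The weighting scheme is an illustrative assumption, not the paper's exact filter."""
    def weight(a):
        return saturated_weight if abs(np.degrees(a)) > saturation_deg else 1.0
    wa, wb = weight(alpha_pair_a), weight(alpha_pair_b)
    return (wa * alpha_pair_a + wb * alpha_pair_b) / (wa + wb)
```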
The full AoA estimation relative localization process is formalized in Algorithm 1. With the range measurement r̂_{i,j} obtained from TWR and the direction υ̂_{i,j} obtained from Algorithm 1, we obtain the estimated relative position as q̂_{i,j} = [ r̂_{i,j} υ̂^x_{i,j}, r̂_{i,j} υ̂^y_{i,j}, r̂_{i,j} υ̂^z_{i,j} ].
Algorithm 1: Angle-of-Arrival (AoA) Estimation.
Input: phase of arrival at each antenna Φ^n_{i,j}. Output: unit bearing vector of the neighbor υ̂_{i,j}.
1: Calculate the phase differences ∆Φ^{n,o}_{i,j} with bias compensation using (2).
2: Calculate the incidence angle α^{n,o}_{i,j} at each antenna pair using (3).
3: Obtain the raw estimate of the unit bearing vector υ̂^−_{i,j} using (4), (5), or (6), depending on the antenna array used.
4: Normalize the unit vector υ̂_{i,j} to length 1 using (7).
5: return υ̂_{i,j}
Messaging Protocol
Here, we discuss the specific messaging protocol ReLoki uses for simultaneous communication and localization. Only one antenna, A_1, is used when ReLoki acts as the transmitter. All transactions begin with a request to initialize an RPP, called Message Init. The subsequent data transfer consists of three phases: Message Transfer, TWR Ranging, and AoA Blink, as shown in Figure 5. In the Message Transfer phase, the message to be transmitted is formatted into an 802.15.4A packet and sent to the receiver. In a typical 802.15.4A transmission, the maximum payload is 127 bytes of data. Hence, we split the original message into packets of size 120 bytes and transmit the data. The extra 7 bytes can be used for packet header data and the Cyclic Redundancy Check (CRC). The second phase is the TWR Ranging phase, implemented using the TWR protocol mentioned in Section 3.1, which estimates the distance to the source r̂_{i,j}. The receiver uses only its A_1 antenna for the Message Transfer and TWR Ranging phases mentioned above. The third phase is the AoA Blink phase, used to estimate the bearing of the transmitter from the receiver υ̂_{i,j}; here all four antennas are used on the receiver side. The RPP is completed upon successful reception of the messages from all three phases at the receiver. The transaction is terminated when a failure is detected at any phase of the protocol.
ReLoki Hardware Platform
Our proof-of-concept ReLoki system consists of a set of four custom antennas interfaced with a custom controller board, which in turn consists of the UWB RF system and an onboard processor.We discuss the details of the hardware design in this section.
Antenna Design
All antennas interfacing with the control board in the ReLoki system have a PCB patch antenna with a circular geometry and an associated rectangular ground plane.The circular signal patch has a radius of 11 mm and the rectangular patch's dimension is 22 mm × 7.5 mm.The separation between signal and ground patches is 1.5 mm.The design of the antenna is shown in Figure 6.The antenna is designed for best performance in the UWB channels 1, 2, and 3, as evidenced by the return loss in Figure 6b.We chose the channels with the lowest center frequency to obtain the highest possible separation between antennas in the array.This allows for higher tolerance to manufacturing errors.Hence, all testing for our system was conducted with channel 1.The individual antennas are subsequently attached to a 3D-printed base structure to create the specific four-antenna array structure.This manufacturing technique keeps the weight of the antenna to a minimum with very little interference to the received signal.Any antenna array can be manufactured with a combination of the PCB patch antenna and the 3D-printed substructure.
Controller Design
Each antenna in the array needs to interface with an RF subsystem to estimate the ToF and the phase of arrival. The DW1000 is one of the most popular UWB RF Integrated Circuits (ICs) capable of both ToF and PDoA estimation. Each ReLoki controller uses four DW1000 ICs, all interfaced with an LPC55S69 microcontroller for data processing. When computing the PDoA between antenna pairs, the error in the phase difference estimates increases with increasing skew of the carrier wave generated by each DW1000 IC. We use a single external clock source, the ATX-13-38400 at 38.4 MHz, to ensure that the carriers generated by all four DW1000 ICs are in sync. The clock is distributed to all DW1000 ICs through a clock buffer, the PL133, and all clock traces on the PCB are length matched to within 0.1 mm. Both the distribution buffer and the length matching allow for a low clock skew between DW1000 ICs; these design parameters translate to a maximum clock skew of 34 ps. To allow the connection of any four-element antenna array, the controller utilizes U.FL connectors. All computations for setting up data transfer and performing relative localization estimation are implemented on this controller platform. We show a block diagram of the system and the PCB in Figure 7. The total weight of one ReLoki module is only 65 g, including the PCB, antennas, and the 3D-printed antenna holders.
When all four antennas on the controller receive data from a single source, the DW1000 ICs compute the time series of the Complex Impulse Response (CIR) of the received signal and store it in the accumulator memory [5]. The phase of arrival Φ^n_{i,j} of the first path is calculated as

Φ^n_{i,j} = atan2( Q_n(t^{n,fp}_{i,j}), I_n(t^{n,fp}_{i,j}) ),

where t^{n,fp}_{i,j} denotes the time index in the accumulator memory of the DW1000 which it has determined to be the first path of the received signal, and I_n(·) and Q_n(·) denote the real and imaginary components of the received signal in the accumulator memory. Additionally, a correction factor, called the Synchronous Frame Detection (SFD) angle β^n_j, is applied to the calculated phase. This phase correction is an artifact of the first-path detection in the DW1000 module [5].
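A hedged sketch of this phase extraction is shown below; it assumes the CIR has already been read out of the accumulator as a complex array, and the sign of the SFD correction is an assumption, since the text only states that a correction is applied.

```python
# Sketch of the first-path phase extraction described above. Assumes
# `cir` is a complex-valued accumulator (CIR) array read from a DW1000
# and `fp_index` is the first-path index the chip reported; names and
# the sign of the SFD correction are illustrative, not the vendor API.
import numpy as np

def phase_of_arrival(cir: np.ndarray, fp_index: int, sfd_angle: float) -> float:
    """Return the first-path phase in radians, wrapped to (-pi, pi]."""
    sample = cir[fp_index]                      # complex CIR tap at the first path
    phi = np.arctan2(sample.imag, sample.real)  # atan2(Q, I)
    phi -= sfd_angle                            # SFD correction (sign assumed)
    return float(np.angle(np.exp(1j * phi)))    # wrap to (-pi, pi]

# Example with synthetic data: one tap at 30 degrees, no correction.
cir = np.zeros(1016, dtype=complex)             # DW1000 accumulator length (~1016 taps)
cir[100] = np.exp(1j * np.deg2rad(30.0))
print(np.rad2deg(phase_of_arrival(cir, 100, 0.0)))  # ~30.0
```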
When a host connected to ReLoki i wants to send a message to its neighbors, it sends the message to the ReLoki through its host interface. The transmitter ReLoki initiates the RPP transaction if the airway is clear. Upon completion of the RPP, the receiving ReLoki has both the received message and the estimated relative position of the transmitter. It then combines both into a single message for the host and notifies the connected host of the received message. The host can then read these data and use them as needed, making ReLoki a plug-and-play localization solution for any existing system.
Remark 1. Inter-module clock synchronization:
In some TDoA-based UWB positioning solutions, complex timing synchronization between beacons is required to properly compute the time difference. In the ReLoki system, the TWR ranging protocol does not require any timing synchronization between modules. Additionally, the phase difference is calculated between antenna pairs on the same module. Consequently, we only need clock synchronization between DW1000 ICs on the same module. Thus, for both TWR and AoA estimations, ReLoki does not mandate clock synchronization between two ReLoki modules.
Multi-Agent and Multi-Beacon Support
Only one sender-receiver pair can perform UWB-based position sensing at any given time. To localize multiple modules, a form of time multiplexing must be implemented. The default state of all ReLoki modules is to wait for Message Init. Any valid 802.15.4 packet will trigger the receive state of the UWB module, which can be used as a way for all modules to gauge the air traffic within their communication range. Thus, for any ReLoki wanting to transmit a message, the transmission is delayed for a random time if that ReLoki is already participating as a receiving agent. The transmission happens only if the airway is clear at the end of this timeout. This is a specific implementation of carrier sensing based on the UWB states supported by the DW1000, which we leverage for Carrier Sense Multiple Access (CSMA). Additionally, ID-based filtering is implemented in the handshake message so that data can be sent to a specific receiver. The implementation of both CSMA and ID-based filtering ensures that only one RX-TX pair is active at any given time.
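The following sketch illustrates this carrier-sensing behavior; the callables standing in for the UWB receive-state checks are placeholders, not an actual DW1000 driver API.

```python
# Illustrative sketch of the CSMA behavior described above: defer for a
# random backoff while busy as a receiver, then transmit only if the
# airway is still clear. Channel-state checks are placeholders for the
# UWB receive-state signals.
import random
import time

def try_transmit(is_receiving, channel_clear, send,
                 max_backoff_s: float = 0.05, attempts: int = 5) -> bool:
    """Attempt one transmission using random backoff; True on success."""
    for _ in range(attempts):
        if is_receiving():
            # Busy as a receiver: wait a random time before re-checking.
            time.sleep(random.uniform(0.0, max_backoff_s))
            continue
        if channel_clear():
            send()
            return True
        time.sleep(random.uniform(0.0, max_backoff_s))
    return False  # caller may retry the whole RPP later
```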
Performance Analysis and Experiments
We test the performance of ReLoki on two fronts. First, we experimentally measure the errors in estimation and generate a covariance map. Second, to demonstrate the real-world performance of the system, we set up experiments where the ReLoki system is mounted on physical robots and remotely controlled.
Covariance Maps
The covariance map for ReLoki is the map of the expected error at different regions of the sensing domain. A lower value in this covariance map translates to a lower localization error. To determine the covariance map, we set up an experiment where one ReLoki is mounted on a pan-tilt setup, as shown in Figure 8, which allows us to test all possible orientations the transmitter can have with respect to the receiver. We take 50 readings for multiple pan-tilt-range combinations. The pan range for the RTA and ROA antennas is [−180°, 180°], and for the USA antenna it is [−90°, 90°] due to the X-axis ambiguity mentioned in Section 3.1. The elevation range for all antenna arrays is [−90°, 90°] in steps of 15°, and the range varies from 1.5 m to 7.5 m in steps of 1 m.

The covariance map is obtained using two factors. First, we obtain the average error from the ground truth data, Cov_e(q^m_{i,j}) := E[(q̂^m_{i,j} − q^m_{i,j})²], over all 50 readings taken at the relative pan-tilt-range pose q^m_{i,j}, where q̂^m_{i,j} denotes the estimate. We obtain the bearing ground truth from the pan and tilt mechanism, which moves the ReLoki system to a specific relative bearing with respect to the source. The range ground truth is obtained by manually measuring the distance to the source in the experimental run. We also obtain the covariance (spread) of these 50 readings, Cov_σ(q^m_{i,j}), at the set relative position. The final error measure is a very conservative estimate of the actual covariance, computed as Cov(q^m_{i,j}) = det(diag(Cov_e(q^m_{i,j})) + Cov_σ(q^m_{i,j})).
A lower value here means a lower cost in (1).
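A sketch of this metric, assuming 50 readings of [azimuth, elevation, range] per pan-tilt-range cell, is given below; the array shapes and noise values are illustrative.

```python
# Sketch of the covariance-map metric described above, mirroring
# det(diag(Cov_e) + Cov_sigma). `readings` holds 50 estimated poses at
# one pan-tilt-range cell; `truth` is the ground-truth pose.
import numpy as np

def covariance_metric(readings: np.ndarray, truth: np.ndarray) -> float:
    """readings: (50, 3) estimates [azimuth, elevation, range]; truth: (3,)."""
    mean_sq_err = np.mean((readings - truth) ** 2, axis=0)  # Cov_e per axis
    spread = np.cov(readings, rowvar=False)                 # Cov_sigma, (3, 3)
    return float(np.linalg.det(np.diag(mean_sq_err) + spread))

rng = np.random.default_rng(0)
truth = np.array([30.0, 15.0, 3.5])                  # deg, deg, m
readings = truth + rng.normal(0, [5.0, 5.0, 0.25], size=(50, 3))
print(covariance_metric(readings, truth))            # lower is better
```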
RTA Antenna
We show the measured covariance map for the RTA antenna in Figure 9. Looking at the covariance maps, we can see that there are very high errors of more than 100° at elevation angles of −90°, −75°, and −60°. These errors are attributed to the ReLoki controller electronics being in the path of the incoming signal at these angles, causing skewed estimates. We also see higher errors at an elevation angle of 90°, with errors of up to 80°, which may be due to the characteristics of the chosen antenna providing very little illumination from the source at this relative angle. Additionally, at elevation angles from −15° to −45° and close to azimuths of −30° and +60°, there are higher errors of up to 45° deviation from the actual values. We attribute this to the ground plane of the protruding antenna A4 interfering with the phase calculation in antennas A1 and A2. In other regions, the errors in azimuth and elevation angles are within 15° and will provide good localization performance. The range measurements during the TWR ranging phase reach an accuracy of around 25 cm on average up to a maximum range of 20 m.
ROA Antenna
We also show the covariance maps of the ROA array in Figure 9. Similar to the errors observed in the RTA array, we notice high errors at elevation angles below −60° due to obstructions caused by the electronics. Likewise, we observe similarly high errors at angles above 60°, which may be attributed to very poor RF illumination of the receiving antenna array from the source. Outside the previously mentioned elevation angles, we observe excellent performance for bearing localization, with errors within 15°. We also see very similar ranging performance between the ROA and RTA arrays, as both use the same setup and algorithm for ranging.
We additionally show the comparison of the RTA array to the ROA array in Figure 9. Here, we can observe a significant reduction in errors at higher elevation angles for the RTA antenna, as noted by the green regions in the figure. At lower elevation angles, the errors are low in both implementations, with very small differences except for the few conditions mentioned above. We can conclude that in applications where the elevation angles between agents are restricted to lower values, the orthogonal array provides a slightly better estimate in the operational regions of such applications. However, for a full 3D localization system, the RTA array provides better overall performance than the orthogonal array over the entire operational region.
USA Antenna
We show the covariance maps for the USA antenna array in Figure 10. For this antenna array, we notice an overall reduction in error compared to the RTA and ROA antenna arrays; the redundant antenna pair in the USA antenna provides a significant reduction in localization errors. When the azimuth and elevation angles are between −45° and 45°, the localization errors are within 10°. As the bearing angles approach higher values, the errors grow to over 100°. This is consistent with the overall AoA performance of the individual antenna pairs. Hence, the USA antenna is better suited for beacon-tag applications, with the advantage that a single beacon can fully localize a tag in its operational domain. The range measurements and their errors were very similar to those of the ROA and RTA antenna pairs.
Maximum Operating Frequency
We have observed that each RPP between agents takes up to 46 ms per transmission, based on the clock output observed between transfers. This timing includes the handshake, one data packet, TWR, and finally the AoA transactions, which is the smallest transaction possible. Hence, we can estimate at most 20 transactions between agents per second, a figure that decreases with the size of the data being transmitted. So, for N_s sensors, the maximum update frequency for each sensor is 20/(N_s − 1).
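As a quick sanity check of this scaling rule, using only the figures quoted above:

```python
# Worked check of the update-rate bound above: ~20 RPP transactions per
# second shared among transmitters, so with N_s modules each one can
# transmit at about 20 / (N_s - 1) Hz.
MAX_TRANSACTIONS_PER_S = 20  # from the ~46 ms per RPP observed above

def per_sensor_rate_hz(n_sensors: int) -> float:
    return MAX_TRANSACTIONS_PER_S / (n_sensors - 1)

for n in (2, 3, 5, 11):
    print(n, per_sensor_rate_hz(n))
# n = 3 gives 10 Hz, matching the two-transmitting-agent experiment below.
```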
Robot Localization
To show the expected real-world performance, we conduct localization experiments with the ReLoki system in two experimental scenarios. In the first scenario, ReLoki is mounted on TurtleBots; these experiments showcase infrastructure-free 3D localization. In the second scenario, the ReLoki module is handheld by a human operator, which demonstrates the use of ReLoki as a mobile beacon. The system performs the associated relative localization task, and the localization performance is compared to the ground truth data.
RTA Antenna
In this experiment, we set up a three-robot system: two agents (Agents 1 and 2) are in motion, while the third (Agent 3) is static. Agent 1 executes a rectangular loop and Agent 2 executes a straight-line trajectory, as shown in Figure 11. We record the relative positions of the two agents in motion as measured by Agent 3. Note that the estimating agent can also be moving, but we leave it fixed to obtain a better visualization of the system in action. We also show the filtered (using a low-pass filter) localization output of the experiment along with the ground truth data captured from an overhead OptiTrack system in Figure 11. Here, we see very good congruence of the estimated trajectory with the ground truth, with a maximum localization error of 50 cm and an average localization error of 23 cm. This localization error is slightly higher than the expected localization error of range-only, trilateration-based UWB systems [30], but here we use only one localization pair, which makes this system very decentralized. The localization performance is on par with the AoA-based 3D localization shown in [24], with the added benefit of full 3D localization when using the RTA array. The localization errors could be further reduced by investigating the effects of a filter tuned to the specific motion type of the mobile platform, which will be one of the areas of investigation going forward.
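The paper does not specify the filter structure; the sketch below assumes a first-order exponential low-pass, a common minimal choice for this kind of smoothing.

```python
# Minimal sketch of the low-pass filtering applied to the raw position
# estimates; a first-order exponential filter is assumed here, since the
# text does not specify the filter. alpha trades smoothness against lag
# and would be tuned to the platform's motion.
import numpy as np

def low_pass(raw: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """raw: (T, 3) position estimates; returns the smoothed trajectory."""
    out = np.empty_like(raw)
    out[0] = raw[0]
    for t in range(1, len(raw)):
        out[t] = alpha * raw[t] + (1.0 - alpha) * out[t - 1]
    return out
```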
For a system with two agents transmitting with the CSMA scheme active, the maximum achievable sampling rate for each agent is 10 samples per second. In the experimental results showcased above, localization happened as often as possible. To study the effects of scaling the system to a greater number of agents, we can artificially throttle the sampling rate. Table 1 shows the localization statistics with varying sampling rates. Here, we can clearly see the increase in localization error with decreasing sampling rate. Thus, depending on the localization performance required for the application, the ReLoki system will be limited in the number of simultaneously transmitting agents.

In the mobile-beacon experiment, we use one tag hand-carried by a human operator and two beacons. We set up the ReLoki with the human operator as the transmitter and the ones on the beacons as receivers. The human operator moves the tag in an hourglass pattern and transmits localization packets every 100 ms to both beacons. All received data, along with the sensed relative localization data from the beacons, are piped to a ground station. The received localization data are transformed into the coordinate frame of the first beacon and combined into a single estimate of the tag relative to this beacon. Finally, the localization data are filtered using independent low-pass filters for each beacon.
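A sketch of the fusion step is given below; the known beacon poses and the equal-weight averaging are assumptions, since the text does not detail the combination rule (the 8 m beacon separation is taken from Figure 12).

```python
# Sketch of the two-beacon fusion step described above: each beacon's
# estimate is transformed into beacon 1's frame and the two are combined.
# The beacon poses (R, t) and simple averaging are assumptions.
import numpy as np

def to_beacon1_frame(p_local: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Transform a tag position from a beacon's frame into beacon 1's frame."""
    return R @ p_local + t

# Beacon 2 sits 8 m along x from beacon 1 with the same orientation.
R2, t2 = np.eye(3), np.array([8.0, 0.0, 0.0])

est_b1 = np.array([3.9, 1.1, 0.2])            # tag as seen by beacon 1
est_b2_local = np.array([-4.2, 1.0, 0.3])     # tag as seen by beacon 2
est_b2 = to_beacon1_frame(est_b2_local, R2, t2)

fused = 0.5 * (est_b1 + est_b2)               # naive equal-weight combination
print(fused)
```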
Looking at the results, we see that with only one beacon, the maximum localization error is 70 cm. The highest localization error in this case occurs when the tag is closer to the beacon and at a higher azimuth angle. Introducing the localization data from the second beacon reduces the localization errors to less than 45 cm, with a mean localization error of 16 cm. This performance is close to that of trilateration-based localization with three beacons [30], with the added benefit of using a smaller number of beacons.
Remark 2. Multi-path effects:
The PDoA estimation method used by the DW1000 modules is resilient to multi-path effects as long as the LoS path is not obstructed. The UWB modules are capable of path separation when reflections are present, as shown in [5]. With the first-path detection running on the DW1000 modules, when the LoS path is unobstructed, the phase difference measured is guaranteed to be the phase difference of the LoS signal. However, there will be a significant increase in error when the LoS path is attenuated. In this case, the phase difference detected will be that of the first arriving reflected signal and will contribute to increased errors depending on the specific environment.
Conclusions
In this paper, we propose a novel UWB-based relative localization system called ReLoki that leverages angle-of-arrival information from a four-antenna array in tandem with traditional ranging measurements to estimate the 3D relative position of any other participating ReLoki module. The paper discusses the system design and implementation for the RTA, ROA, and USA antennas and shows that the RTA and ROA antenna arrays are better suited for full 3D relative localization systems, while the USA array excels as an ad hoc mobile beacon. Both the RTA and ROA arrays have higher localization errors at higher elevation angles, but at lower elevation angles between −60° and 60°, we achieve a localization error of less than 50 cm. To combat the higher elevation errors, we need more research into antenna characteristics (e.g., effects of omnidirectionality and polarization) and other algorithmic improvements (e.g., per-antenna non-linear mapping or better antenna filtering). We also show that the USA antenna array performs better than the RTA and ROA arrays by leveraging the redundant antenna pairs to obtain better bearing estimates, achieving a maximum localization error of 45 cm, which is on par with the errors expected from trilateration, yet with a smaller number of beacons.
One of the main applications of the ReLoki platform is relative localization in indoor aerial systems such as Lighter-than-Air (LTA) agents, where the low weight of the platform is an advantage. One of the main open issues is the NLoS performance of the bearing estimation in multi-agent systems and its characterization in a given environment. Our future focus will be on further studies of NLoS performance and on non-linear estimation algorithms for better localization performance.
Figure 1. Illustration of the relative localization problem. On the left, we show ReLoki attached to an existing motion platform and capable of relative localization based on fully onboard sensing. Here, the RX agent senses the relative positions q_{i,j} of the TX agents w.r.t. its body frame whenever a message is received from j. On the right, we show the scenario where ReLoki acts as a mobile beacon. All beacons are capable of localizing a transmitting agent in 3D, and adding more beacons improves the estimates.
Figure 2. Illustration of the four-antenna configurations that can be used with ReLoki. Here, we show the ROA, where the antennas are placed orthogonally w.r.t. the central antenna; the RTA, where the antennas are placed at the vertices of a regular tetrahedron; and the USA, where the antennas are placed as a square on the same plane.
Figure 3. Illustration of the angle of incidence for the RTA, ROA, and USA antennas. The measured angle of incidence is used for bearing estimates based on the specific geometry of the antenna array.
Figure 4. Angle of incidence measured for the redundant pairs. Here, the measured value is the average of 20 readings. The plot shows the saturation of the measured angle of incidence above 60° in one pair and below −60° in the other.

3.1.4. Normalization. Due to the inherent noise in the UWB antenna pairs, the computed unit vectors may not have unit magnitude, and hence we normalize the vector as û = u/‖u‖.
Figure 5. Timing diagram showing the different phases of transmission. The Message Transfer phase is shown in red, the TWR Ranging phase in blue, and the AoA Blink phase in green.
Figure 6. Single-antenna design for ReLoki. (a) Finished PCB antenna along with the copper plane, showing the circular patch antenna and the ground plane. (b) Return loss for the designed antenna, showing less than −10 dB return loss over almost all of the UWB band for channels 1, 2, and 3. (c) Center frequency and bandwidth of the UWB bands supported by the proposed antenna and the DW1000.
Figure 7. ReLoki controller design. (a) ReLoki hardware block diagram showing the components. Here, host i initiates a communication request; ReLoki connects to host i and transmits the data. The information is transferred to the receiving ReLoki, where it is combined with the estimated localization data. Finally, the data are sent to the receiving host j. (b) ReLoki PCB design showcasing the different components mentioned in the block diagram.
Figure 8. ReLoki experimental setup for covariance measurement. On the left, we show the pan and tilt mechanism; on the right, the test setup at the 1.5 m range from the source.
Figure 9. Covariance maps for the RTA and ROA antennas. In the top left is the RTA array and in the top right the ROA array; a darker color means lower error. On the bottom, we show the comparison of the RTA antenna array to the ROA antenna array. Here, green boxes represent lower errors for the RTA, red boxes represent lower errors for the ROA, and yellow represents comparable performance (combined azimuth and elevation difference within 10°) between both.
Figure 10. Covariance map for the USA antenna. On the top, we show the covariance maps, with darker colors showing lower localization errors and lighter colors showing higher localization errors. On the bottom, we show the average of measured vs. actual values of azimuth and elevation for 50 readings at a given pan-tilt pair.
Figure 11. Localization experiment with the RTA antenna on ReLoki. On the left is a composite of overlaid frames from the video captured during the experiment; Agent 1 executes a rectangular motion and Agent 2 a straight-line motion. On the right is the output from ReLoki as seen by Agent 3, together with the captured OptiTrack data. We show both the raw estimation data, in a lighter color, and the low-pass-filtered data in a darker color.
Figure 12. ReLoki beacon test. On the top, we show the experimental setup: two beacons are placed 8 m apart, and the human operator moves the tag in an hourglass pattern. On the bottom, we show plots of the localization data along with the captured MoCap data: localization with only one beacon active on the right and with both beacons active on the left. The unused beacon is marked with an "X". We show the localization errors for both cases.
Table 1. Effect of sampling rate on localization performance for the RTA antenna. Here, we show the maximum localization error, average localization error, and the standard deviation of the localization estimates for both robots in the experiment. Additionally, we show the maximum number of agents supported at the specified sampling rate.
"Engineering",
"Computer Science"
] |
A Novel Dye-Sensitized Solar Cell Structure Based on Metal Photoanode without FTO/ITO
Traditional dye-sensitized solar cells (DSSC) use FTO/ITO electrodes containing expensive rare elements, which also make it difficult to meet flexibility requirements. A new type of flexible DSSC structure with all-metal electrodes and no rare elements is proposed in this paper. Firstly, a light-receiving layer was prepared on the outside of the metal photoanode, with small holes to allow the continuous oxidation-reduction reaction in the electrolyte. Secondly, the processing technology of the porous titanium dioxide (TiO2) film was analyzed; by testing the J–V characteristics, it was found that performance is better when the heating rate is slow. Finally, the effects of different electrode material combinations were compared through experiments. Our results imply that in the case of all stainless-steel electrodes, the open-circuit voltage can reach 0.73 V, and in the case of a titanium photoanode, the photoelectric conversion efficiency can reach 3.86%.
Introduction
In recent years, metal and metal compound electrodes have been extensively studied in DSSC. However, due to their opacity, most of them are limited to the use of a single metal electrode [1–6]. Electrode cost is a problem in the research of photoanodes and counter electrodes (CE) [7,8]. The typical metal mesh electrode technology has not yet been industrialized; its insufficient light-receiving layer area and the complex process of the TiO2 film are both limiting factors [9–11].
The photoanode in a DSSC should adsorb dyes [12] and transmit and collect photogenerated electrons. A good photoanode should have a large specific surface area and a nanoporous structure, which can adsorb dyes to the maximum, transmit photogenerated electrons quickly, and have sufficient porosity for the electrolyte to penetrate [13–15]. Because TiO2 is nontoxic, cheap, and has a strong ability to adsorb dyes [16,17] and transfer electrons [18], it is widely used in the fabrication of photoanodes [19–25]. The introduction of special nanoparticles into a TiO2 matrix can improve the performance of DSSCs, such as adding nanodiamonds (NDs) and CdTe nanoparticles [26,27], but the scarcity of these materials increases production costs. The importance of the CE lies in its high catalytic activity [28], such as the catalytic reduction of I3− [29]. Using Pt to reduce the oxidation-reduction potential of the I3−/I− ion pair helps to increase the open-circuit voltage [30–33]. Compared with a Pt-based CE, an optimized NDs layer achieves 78.52% equivalent performance [34]. The traditional sandwich DSSC uses expensive FTO electrodes, which reduce cost performance; at the same time, due to their rigidity, the cell loses its flexibility. As for the back-illuminated structure [35], light needs to pass through the CE and electrolyte before reaching the photoanode, and its efficiency is naturally limited [36]. A metal mesh used as the CE can replace the transparent conductive film, but the incident light loss caused by the mesh cannot be ignored [37].
In this paper, cheap metal materials (stainless steel and titanium) are used to replace expensive rare-element electrodes to produce photoanode and CE substrates. The photoanode is prepared on the outside of the stainless steel/titanium substrate to enhance light absorption. The electrolyte passes through the photoanode with the light-receiving layer through specially designed small holes on the substrate. The experimental results show that the all-metal electrode flexible DSSC has almost the same open-circuit voltage as the traditional DSSC with FTO. The schematic diagram and physical drawing are shown in Figure 1.
Figure 1. (a) Schematic diagram: a light-receiving layer is prepared on the outside of a metal electrode with small holes to form the photoanode; fiber paper serves as an insulator and an electrolyte container; a platinum black film is prepared on a metal electrode to form the counter electrode; the battery is packaged with PET and injected with electrolyte. (b) Photoelectric conversion diagram. (c) Physical drawing.
Preparation of Photoanode
First, holes were punched in the stainless-steel foil according to a certain density and size; the foil was then cleaned and placed on a glass plate to dry. Scotch tape was glued to the dry foil, leaving a 5 × 5 mm² square surface that included four holes. TiO2 slurry was carefully scraped onto the foil with a glass rod. The foil sample was sent into a muffle furnace for heating and taken out after natural cooling. The TiO2 was dyed with 0.4 mmol/L N719 solution dissolved in ethanol. After 24 h, the foil with TiO2 was taken out, cleaned with AE, and air-dried. The flowchart is shown in Figure 2. The preparation steps for the titanium photoanode and the non-drilled FTO photoanode are the same as above.
Preparation of Counter Electrode
The clean stainless-steel foil was placed and fixed on a screen-printing table with a 200-mesh screen, and then coated with chloroplatinic acid slurry by scraping. After a few minutes, the foil with the chloroplatinic acid film was placed in a heating boat and put into a tubular annealing furnace. The temperature was increased to 400 °C and held for 30 min. The CE could then be used after natural cooling. The preparation of the FTO CE follows the same steps.

Figure 2. Photoanode preparation process.
Photosensitizer and Electrolyte Preparation
(1) Photosensitizer preparation: each 100 mL of photosensitizer contained 47.5 mg of dye N719 dissolved in AE. After proportioning, the solution was poured into a beaker sealed with plastic wrap and equipped with a magnetic rotor, and stirred with a magnetic stirrer for 30 min.
(2) Electrolyte preparation: each 100 mL of electrolyte contained 1.523 g I2, 8.031 g LiI, 0.708 g GuSCN, 6.761 g TBP, and 7.563 g PMII. All of them were dissolved in acetonitrile, and the solution was then poured into a clean beaker and stirred for 20 min with a magnetic stirrer [10,38].
Device Assembly
First, the PET was cut into 3 × 3 cm² pieces, cleaned with AE, and placed on a glass plate. A CE was placed on the PET in the lower left corner with the platinum side facing up. Then, the CE was covered completely with a piece of 1.8 × 1.8 cm² dust-free paper. Finally, the photoanode and another piece of PET were placed on top one after the other. The battery was sealed with a hot-melt glue gun. A syringe was used to inject the electrolyte through the small hole reserved in the PET, and the hole was sealed at the end.
Characterization
The morphology of TiO2 on the stainless steel and titanium sheets was observed with an upright metallographic microscope (BX51M, Olympus, Tokyo, Japan). The two-dimensional and three-dimensional morphology of the TiO2 surface was obtained with an atomic force microscope (DSM14049BF-1, Bruker, Billerica, MA, USA). The 100× and 500× magnified images of the TiO2 films were obtained by scanning electron microscopy (SEM; TESCAN, Brno, Czech Republic). The J–V curve of the DSSC was obtained through a combination of an OAI solar simulator (LCSS150, Zolix, Beijing, China) and a Keithley 2400 digital source meter (Oriel 94023A, Newport). The external quantum efficiency of the DSSC was obtained with a quantum efficiency test system (PEC-S20, Gifu, Japan). The UV-VIS spectrum of the DSSC was obtained with a UV-VIS spectrometer (UV-24500, Avantes, Apeldoorn, The Netherlands).
Results and Discussion
First, the experiments confirmed that the design goal of an all-metal electrode without FTO/ITO can be realized via this new structure, which has an external light-receiving layer and small holes in the photoanode. Moreover, the use of PET and metal foils also ensures the flexibility of the DSSC. The measurement data of the samples show that, in order to obtain sufficient output, the combination of electrode materials and the preparation process of the TiO2 layer must be handled carefully.
Design of Holes on Photoanode
The small holes reserved in the photoanode are an important structure to ensure the penetration of the electrolyte and the continued progress of the redox reaction. When the aperture is too large, the area coated with TiO2 on the photoanode will obviously decrease, resulting in a loss of light-receiving layer. Figure 3 shows the dimensions of the two verified hole types.
During the scraping of TiO2, Scotch tape was used to surround a 5 mm × 5 mm area covering four small holes on the electrode. For the first type, shown in Figure 3a, the area of the light-receiving layer is 17.93 mm², accounting for 72% of the whole 25 mm² area; for the other type, shown in Figure 3b, it is 22.99 mm², accounting for 92% of the entire area. The area ratio of these two types is 0.78:1.

The stainless-steel photoanodes and titanium photoanodes with these different apertures were each assembled into DSSCs. By comparing the J–V characteristic curves of the four samples, it is found that the hole size has no influence on the open-circuit voltage. On the other hand, whether for titanium or stainless steel, the current density of the battery with φ1.5 mm holes is about 81% of that with φ0.8 mm holes. Compared with the light-receiving area ratio of 0.78:1, it is obvious that the two ratios are very close. Consequently, we can judge that the diffusion of ions in the electrolyte does not require a large pore size. Therefore, the density and size of these holes should be limited as much as possible to obtain a larger light-receiving layer area.
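The quoted areas follow directly from the geometry, as the quick check below shows (four circular holes of the stated diameters inside the 5 × 5 mm masked region):

```python
# Verify the light-receiving areas quoted above: a 5 x 5 mm masked region
# minus four circular holes of diameter 1.5 mm or 0.8 mm.
import math

TOTAL_MM2 = 5.0 * 5.0  # masked square, mm^2

def receiving_area(hole_diameter_mm: float, n_holes: int = 4) -> float:
    holes = n_holes * math.pi * (hole_diameter_mm / 2) ** 2
    return TOTAL_MM2 - holes

a_large = receiving_area(1.5)   # ~17.93 mm^2 (72% of 25 mm^2)
a_small = receiving_area(0.8)   # ~22.99 mm^2 (92% of 25 mm^2)
print(a_large, a_small, a_large / a_small)  # ratio ~0.78
```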
Influence of Heating Rate
A muffle furnace with a controllable temperature curve was used, and two groups of electrodes with different heating rates were set up for the experiments. After scraping on the TiO2 paste, the stainless-steel electrode of group A was raised to 500 °C from room temperature within 45 min, while the temperature of group B was raised within 30 min. After natural cooling, the electrode was taken out and the TiO2 was examined under the metallographic microscope. Figure 4a,c shows the patterns of the group A electrode at 100× and 500× magnification, respectively, and Figure 4b,d shows those of group B. It can be intuitively observed that when the temperature is raised faster, a large number of cracks and defects appear on the surface of the group B electrode, and the intact film area accounts for only about 60%. By comparison with group A, it can be observed that when the heating rate is slow, the surface of the TiO2 is relatively flat and its texture is consistent with that of the stainless steel.

Two sets of electrodes were used to assemble DSSCs. The solar energy test system was used under AM1.5 standard sunlight. It can be seen from Figure 5 that when the heating rate of the prepared electrode is 16 °C/min (group B), the short-circuit current density is only half of that at 11 °C/min (group A). From the morphology of the electrode surface, it can be inferred that the large-area damage caused by the fast heating rate to the film cannot be ignored, as it degrades the photoelectric conversion ability.
Comparison between Metal Photoanode Battery and FTO Electrode Batteries
Four groups of control experiments were set up, as shown in Table 1. The experimental procedure was the same as that described in Section 2; only the different electrodes were assembled into the four groups of batteries. It is worth noting that the photoanode light-receiving layer of the stainless-steel substrate faced the outside of the battery; on the contrary, that of the FTO substrate faced the inside.
The J–V characteristics of the four groups are shown in Figure 6a, and the performance indexes are shown in Table 2. The results imply that the photoanode substrate is the main factor determining the photoelectric conversion ability of the battery. The difference in short-circuit current density is the most obvious: the short-circuit current densities of groups A (SS) and B (SF) were 3.8 mA/cm² and 5.2 mA/cm², respectively, while groups C (FS) and D (FF), which used FTO as the photoanode substrate, reached 17.0 mA/cm² and 18.2 mA/cm², respectively. The difference is nearly four-fold. The reasons for this large difference are likely diverse. First, the most likely problem is the interface: the interface between the metal photoanode and the TiO2 may not be well combined, and the work functions may not match very well. Secondly, the quality of the TiO2 film may not be good enough. Thirdly, there may be leakage. Some pretreatment of the metal substrate surface may further improve the conversion efficiency. At the same time, it can be found that changing the substrate of the CE has almost no influence, though it is undeniable that FTO still has a slight advantage in current characteristics as a CE.
Another key indicator of the battery, the open-circuit voltage, can also be seen in Figure 6a and Table 2. The open-circuit voltages of the four groups A, B, C, and D are 0.73 V, 0.73 V, 0.76 V, and 0.76 V, respectively; they are very similar.
In addition, we further studied the reasons for the large differences in the short-circuit current density of the four groups. Figure 7a,c shows the morphology of TiO2 sintered on FTO, and Figure 7b,d shows that on the stainless-steel electrode, under a scanning electron microscope. Firstly, it can be seen from Figure 7b that, using the same method as for FTO, the porous titanium dioxide layer was successfully prepared on the stainless-steel electrode; however, comparing Figure 7a,b, the porous titanium dioxide layer prepared on FTO covers the surface more densely. In Figure 7c, the TiO2 layer uniformly covers the FTO surface and the porous structure is relatively uniform, though there are still some irregularities, which may be caused by the uneven particle size of the P25-TiO2. In Figure 7d, the irregularity of the titanium dioxide layer on the stainless steel is clear and the cavities are deeper.

Figure 8 shows the AFM characterization of the TiO2 layer on FTO and stainless steel, respectively. It can be observed that the porous TiO2 on FTO shows more detail and is dense, while the TiO2 on stainless steel is smoother except for some burrs. The average roughness Ra of the film on FTO is 0.0182 μm, while that of the film on stainless steel is 0.0403 μm; the height variation of the stainless-steel surface is larger. It can also be seen from Figure 8d that the TiO2 layer on the stainless-steel surface has a large wave shape. Combined with the metallographic microscope images, this implies that the fluctuation of the film surface is caused by the texture of the metal surface, which is also the reason for its large maximum roughness.
Influence of Different Metal Electrodes
Apart from stainless steel, titanium foil is the most suitable metal electrode material for DSSC. The difference in the J–V curves between the titanium-photoanode and stainless-steel-photoanode dye-sensitized batteries is shown in Figure 6b. In these batteries, stainless-steel electrodes prepared with platinum black are used as the CE.
It can be seen that the titanium electrode adopting this new structure has remarkable photoelectric conversion ability. In the case of the titanium photoanode, the open-circuit voltage decreases slightly, but the short-circuit current density and fill factor are significantly improved. The short-circuit current density can reach 8.87 mA/cm², which is 2.3 times that of stainless steel.
Conclusions
In order to eliminate the reliance on FTO/ITO in DSSC production, a new DSSC structure with all-metal electrodes, together with its electrode preparation process, is studied in this paper. First, our research shows that if the electrolyte has sufficient fluidity, the density and size of the holes should be restricted as much as possible to obtain a larger light-receiving layer area. Second, the heating rate for preparing the TiO2 film should not be too fast: the short-circuit current density at a heating rate of 11 °C/min is twice that at a heating rate of 16 °C/min. Third, the metal electrodes have little influence on the open-circuit voltage of the DSSC; in the case of all stainless-steel electrodes, the open-circuit voltage can reach 0.73 V, almost the same as that of a traditional battery. Moreover, the photoanode substrate material and the morphology of the TiO2 film are the key factors for the short-circuit current density. When titanium is used as the photoanode substrate, with no need for a complicated process or any additional treatment, the photoelectric conversion efficiency can reach 3.86%, which is 4.65 times that of stainless steel. At the same time, the short-circuit current density can reach 8.87 mA/cm², and the fill factor can reach nearly 57%. In summary, it can be concluded that an all-metal electrode structure with an external light-receiving layer and small holes in the photoanode can be used for DSSCs without any FTO/ITO.
Study of Silicon Nitride Inner Spacer Formation in the Process of Gate-All-Around Nano-Transistors
Stacked SiGe/Si structures are widely used as the basic units of gate-all-around nanowire transistors (GAA NWTs), a promising candidate beyond fin field-effect transistor (FinFET) technologies in the near future. These structures face several challenges brought by the shrinking of device dimensions. The preparation of inner spacers is one of the most critical processes for GAA nano-scale transistors. This study focuses on two key processes: conformal deposition of the inner spacer film and its accurate etching. The results show that low-pressure chemical vapor deposition (LPCVD) silicon nitride has a good film-filling effect, and a precise, controllable silicon nitride inner spacer structure is prepared by using an inductively coupled plasma (ICP) tool and a new gas mixture of CH2F2/CH4/O2/Ar. The silicon nitride inner spacer etch has a high etch selectivity ratio, exceeding 100:1 to Si and more than 30:1 to SiO2. High anisotropy, with an excellent vertical/lateral etch ratio exceeding 80:1, is successfully demonstrated. This also provides a solution to the key process challenges of nano-transistors beyond the 5 nm node.
Introduction
In order to overcome challenges such as the short-channel effect brought about by scaling down metal-oxide-semiconductor field-effect transistors (MOSFETs), many new devices, e.g., fin field-effect transistors (FinFETs), have been proposed. In comparison with the conventional spacer process, the inner spacer brings more process challenges. As shown in Figure 2a, the requirement of conventional spacer etching is that the spacer material on the top and bottom of the gate must be etched completely, leaving the spacer material on the sidewall. Therefore, this process does not require a high etch selectivity or anisotropy. However, much higher anisotropy and etch selectivity are required to meet the requirements of inner spacer formation without causing any device failures [12,17], as shown in Figure 2b,b'.
SiNx with a Si:N atomic ratio of 3:4 is commonly used as the spacer material [18,19]. For the etching of SiN, CF4/O2/N2, CF4/CH4, SF6/CH4, and NF3/CH4 mixtures were used in early conventional plasma etching, but the etch selectivity of SiN to SiO2 and Si is not high even when the polymer is produced properly [20]. Using a neutral beam reaction system, SiN etch selectivities of 18.6 to SiO2 and 6.2 to Si can be achieved [21]. Later, B. E. E. Kastenmeier et al. found that the use of a microwave remote plasma can significantly improve the etch selectivity of SiN to Si and SiO2, with the ratio reaching 70:1 [22]. However, because of its partially isotropic etching characteristics, remote plasma is more suitable for SiN sacrificial layer removal than for SiN spacer etching.

Figure 1. Process flow of nanowires with inner spacers: (a) source/drain fin recess for opening the active area; (b) SiGe cavity etching for defining the growth position and size of the inner spacer; (c) inner spacer film deposition; (d) controlled etching of the spacer film and formation of the inner spacer; (e) source and drain epitaxial growth; (f) dielectric deposition and planarization; (g) dummy gate removal and silicon nanowire formation; (h) filling and planarization of high-K metal gates; (i) interlayer dielectric deposition; (j) metal contact plug and current direction when the device is on.
Figure 2. Spacer morphology and inner spacer process challenges: (a) conventional spacer; (b) inner spacer; (b') the process challenges to be overcome in moving from the conventional spacer to the inner spacer.

In recent years, quasi-atomic layer etching (QALE) of SiN has emerged [23], mainly using a two-step alternating method. First, the surface is modified by hydrogen ion implantation or plasma treatment, and then this surface is etched with an F-based process gas. These two processes are performed alternately to achieve quantitative etching. This method achieves both a high etch selectivity and anisotropy, but quasi-atomic layer etching equipment is complex and its productivity is low, and no public report shows that it has been used in the GAA nanowire inner spacer module [24–27].
In this work, a novel gas mixture of CH2F2/CH4/O2/Ar was used for etching the SiN inner spacer of GAA transistors in a conventional inductively coupled plasma (ICP) tool. This method avoids specially designed hardware and offers higher process efficiency than solutions such as ALE. Moreover, the conformal deposition and selective anisotropic etching processes of the inner spacer were systematically studied. Firstly, the influence of the film deposition process on the filling effect of the inner spacer is discussed by comparing the plasma-enhanced chemical vapor deposition (PECVD) and low-pressure chemical vapor deposition (LPCVD) methods. Then, the effects of the main etching process parameters (CH4 flow, O2 flow, and pressure) on the etch selectivity, anisotropy, and etch morphology are investigated. Finally, a high-resolution scanning electron microscope (HRSEM) (Hitachi Inc., Tokyo, Japan), a high-resolution transmission electron microscope (HRTEM) (Thermo Fisher Scientific Inc., Waltham, MA, USA), and an energy dispersive spectrometer (EDS) (Thermo Fisher Scientific Inc., Waltham, MA, USA) were used to analyze the microscopic details of the filling and etching of the inner spacer.
Materials and Methods
All experiments in this work were performed on 8-inch (100) silicon wafers. The experimental process and method are shown in Figure 3.
Step 1: Three cycles of SiGe/Si multilayers were grown using the reduced-pressure chemical vapor deposition (RPCVD) technique [28,29], and an oxide-nitride-oxide (ONO) hard mask was then grown on the top silicon by plasma-enhanced chemical vapor deposition (PECVD). In order to examine the film filling and etching performance of the inner spacer in detail, Si0.72Ge0.28 stacks with different thicknesses were designed.
Step 2: 3 µm equally spaced line arrays were patterned, and the whole structure, including the hard mask and the Si0.72Ge0.28/Si stack, was vertically etched down to the substrate silicon by plasma etching. Finally, oxygen plasma was used to remove the photoresist [15].
Step 3: In the ICP etching tool, the Si0.72Ge0.28 layers were selectively etched with CF4/O2/He gas without any bias power, to obtain a lateral depth of 50-70 nm [15].
Step 4: For the cavities formed in Step 3, PECVD (AMAT Producer 200 mm, Applied Materials Inc., Santa Clara, CA, USA) and low-pressure chemical vapor deposition (LPCVD) (AMAT Centura 200, Applied Materials Inc., Santa Clara, CA, USA) equipment was used to grow 40 nm of SiN in the filling experiments. The growth temperature was 400 °C for PECVD and 750 °C for LPCVD. The growth temperatures in these steps were kept below 800 °C in order to avoid interdiffusion at the Si/SiGe interfaces [30].
Step 5: Finally, the prepared samples were etched in an ICP tool (TCP 9400DFM (Lam Research Inc., Fremont, CA, USA)) using a gas mixture of CH 2 F 2 /O 2 /CH 4 /Ar and a chuck temperature of 80 °C. The research focuses on the effects of the etching process parameters on the etch selectivity, anisotropy (vertical/lateral etch ratio) and etch morphology.
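The selectivity and vertical/lateral etch ratio referenced in Step 5 are derived from dimensional measurements on cross-sectional SEM images: selectivity compares the SiN removal with the loss of a reference material (Si or SiO 2 ), while anisotropy compares the vertical with the lateral SiN removal. The short Python sketch below illustrates that bookkeeping; every numerical value in it is a hypothetical placeholder for illustration, not a measurement from this study.

def selectivity(sin_removed, reference_removed):
    # Etch selectivity: SiN removed relative to the reference material (Si or SiO2).
    return sin_removed / reference_removed

def anisotropy(vertical_removed, lateral_removed):
    # Vertical/lateral etch ratio; larger values indicate more anisotropic etching.
    return vertical_removed / lateral_removed

# Hypothetical cross-sectional SEM measurements in nm (placeholders only).
sin_vertical = 40.0   # SiN cleared from horizontal surfaces
sin_lateral = 0.5     # lateral SiN recess inside the cavity
si_loss = 0.4         # Si loss at the exposed nanosheet ends
sio2_loss = 1.2       # hard-mask oxide loss

print("SiN:Si selectivity:", round(selectivity(sin_vertical, si_loss), 1))
print("SiN:SiO2 selectivity:", round(selectivity(sin_vertical, sio2_loss), 1))
print("Vertical/lateral ratio:", round(anisotropy(sin_vertical, sin_lateral), 1))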
Effect of Thin Film Process on Gap Filling
Inner spacers require the thin film to grow uniformly onto the sidewalls of the cavity; therefore, good gap filling ability is required for the film growth. Atomic layer deposition (ALD) high-K materials (such as HfO 2 and ZrO 2 ) have very good filling properties, but they increase the parasitic capacitance and are detrimental to device performance, so they are not suitable choices. Moreover, the fabrication of GAA Si-Ge based nanowire devices using a FinFET process flow usually requires special nanowire/sheet selective etching and surface processing, including interfacial layer removal, diameter reduction and rounding, in the advanced replacement metal gate (RMG) module [31,32]. These processes may bring great fabrication challenges for conventional low-K spacer materials. Therefore, highly etch-resistant and dense SiN (K value ~7) is still the best choice for the spacer material [12]. Meanwhile, for structures with lateral openings, high-density plasma chemical vapor deposition (HDPCVD), which has good filling performance in vertical hole structures, becomes theoretically ineffective [33], and the damage caused by the high-density plasma is unavoidable. In this study, the filling effects of two conventional SiN thin film deposition techniques, PECVD and LPCVD, were compared. The results are shown in Figure 4: the filling effect of LPCVD silicon nitride is significantly better than that of PECVD. The HRSEM micrographs reveal obvious voids in the PECVD-grown layers, and the ratio of voids to the original cavity decreases with SiGe layer thickness (10 nm > 20 nm > 30 nm). Meanwhile, the silicon nitride grown by LPCVD did not show any voids in the SiGe cavities with depths of 10 nm, 20 nm or 30 nm, indicating better conformal growth with LPCVD. More importantly, LPCVD SiN has better corrosion resistance than PECVD SiN, which facilitates subsequent process integration. These favorable properties of LPCVD stem from its lower chamber pressure and higher temperature, which result in a slower growth rate, better conformal coverage and higher density [34]. More detailed results on the LPCVD silicon nitride will be discussed in the TEM and EDS characterizations in Section 3.5.
Effect of CH 4 Flow on Inner Spacer Etching
More investigations were carried out to find the impact of the CH 4 gas flow on the etching profile while all other parameters were kept constant. The CH 4 flow rate was varied under the following conditions: 80 mTorr/source RF 250 W/bias RF 35 W/x sccm CH 4 /25 sccm CH 2 F 2 /20 sccm O 2 /50 sccm Ar. The results are shown in Figure 5a. When there is no CH 4 gas in the reaction chamber, the silicon nitride on the sidewall is etched completely, while the top hard mask is totally consumed and the Si/SiGe stack is seriously damaged (Figure 5b), because both the etch selectivity and the anisotropy are poor in the absence of CH 4 . Once CH 4 is introduced into the chamber, the anisotropy of the etching is significantly improved. This is because the C-based polymer produced by CH 4 passivates the sidewalls and increases the vertical/lateral etch ratio; the CH 4 reaction system has the highest C/F ratio compared with other CH x F y mixed gases. The vertical/lateral etch ratio increases as the thickness of the SiGe layer decreases, because the aspect ratio dependent etch rate (ARDE) effect leads to a lower lateral etch rate for small-sized trenches under the same conditions [35].
At the same time, increasing the CH 4 flow rate improves the etch selectivity of silicon nitride to Si and SiO 2 . The mechanism is that the C and H in CH 4 combine with the N in silicon nitride to form volatile HCN, which promotes the silicon nitride etching reaction. In addition, the selectivity of silicon nitride to Si is higher than that to SiO 2 , because the C in CH 4 combines with the F in CH 2 F 2 and the O in SiO 2 to form volatile COF 2 , whereas Si is not affected in a similar way. This trend reaches a peak when the CH 4 flow is 20 sccm, where the profile is relatively well controlled (Figure 5d). On further increasing the CH 4 flow, the contribution to the anisotropy becomes small and the sidewall passivation reaches saturation. The increased CH 4 flow greatly reduces the proportion of the F-based source gas CH 2 F 2 , so the silicon nitride etch rate decreases, since the Si in silicon nitride must combine with more F atoms to generate volatile SiF 4 . As the etch selectivity decreases, the hard mask material on top of the structure is significantly consumed and the sidewall roughness becomes worse (Figure 5e).
Effect of O 2 Flow on Inner Spacer Etching
In these experiments, the O 2 flow was changed while the other process parameters were kept as follows: 80 mTorr/source RF 250 W/bias RF 35 W/20 sccm CH 4 /25 sccm CH 2 F 2 /x sccm O 2 /50 sccm Ar. The results are shown in Figure 6a. When there is no oxygen, polymer deposition occurs on the surface of the structure (Figure 6b). When the O 2 flow is increased to 10 sccm, the deposition is reduced, but it is still visible on the sidewalls (Figure 6c). The polymer produced in the reaction is too thick for the sidewalls to be completely etched. The mechanism can be explained as follows: in the absence of O, CH 2 F 2 and CH 4 easily form CH x F y polymers. Introducing O 2 generates volatile CO, which reduces polymer formation while allowing other species such as F to be released for etching the silicon nitride [35]. When the O 2 flow reaches 20 sccm, an equilibrium point is obtained; the etch selectivity and anisotropy are improved and, as a result, the etch profile becomes better (Figure 6d). As the O 2 flow continues to increase, the etching becomes isotropic (Figure 6e shows typical isotropic SiN etching). The reason for this outcome is that the excess O 2 consumes the C in the reaction gas to form volatile CO, which leads to a serious shortage of the CH x F y needed to protect the sidewalls.
Effect of Pressure on Inner Spacer Etching
It is well known that the process pressure is an important parameter for plasma etching, because it greatly affects the mean free path and energy of the ions. In this study, the influence of pressure on the etch profile was investigated with the parameters: x mTorr/source RF 250 W/bias RF 35 W/20 sccm CH 4 /25 sccm CH 2 F 2 /20 sccm O 2 /50 sccm Ar. It can be seen from Figure 7a that the etch selectivity increases with increasing pressure, especially above 50 mTorr. This can be explained by the reduction in ion bombardment energy as the pressure increases [35]. The anisotropy is slightly reduced, but the change is relatively small, which also helps to completely etch the silicon nitride outside the sidewall. From Figure 7b-e, it can be seen that the amount of silicon nitride remaining on the sidewall gradually decreases until Figure 7e, which shows no obvious residue (more detailed characterization is given in Section 3.5), while the remaining hard mask becomes thicker and thicker, indicating increasing selectivity. It should be noted that, due to the limitation of the vacuum gauge of the etcher tool (TCP 9400DFM), the pressure could only be tested up to 80 mTorr; whether a higher pressure gives better results remains to be studied in the future. In order to provide a larger insight into our results, a comparison was made with previously published references, as shown in Table 1. The etch selectivity obtained in this study has some advantages over conventional etching results. Compared with remote plasma and QALE, the selectivity to SiO 2 is lower, but the method has obvious advantages in etching anisotropy, which is crucial for controlling the accuracy of the final thickness of the inner spacer.
Material Quality and Interface Analysis
In order to more accurately characterize the relatively optimal processes in this study, TEM and EDS characterizations were performed on samples prepared with the relatively optimal conditions of LPCVD filling and inner spacer etching. The outcome is shown in Figure 8. The figure shows that the silicon nitride film fills the Si/SiGe stack cavity in the sidewall very well, and only a small gap is found in the high-resolution TEM image. These small gaps do not affect device integration and performance; on the contrary, they may improve device performance by further reducing the parasitic capacitance [36].
The EDS mapping results show the distribution of the silicon nitride film. The Si, Ge and O distributions are basically consistent with the expected design; there is no Ge diffusion during the LPCVD process. The C signal mainly comes from TEM sample preparation, which uses a C-containing carrier, and is therefore only for reference. It can be seen from Figure 8b that after the inner spacer is etched, except for the silicon nitride in the Si/SiGe stack cavity, the silicon nitride in all other positions has been etched completely. In particular, the end of the Si nanosheet is free of N (there are no silicon nitride residues), and only a thin layer of SiO 2 is formed. This silicon oxide can subsequently be removed during the growth of the epitaxial source and drain.
Conclusions
For the inner spacer of GAA nanostructures, LPCVD silicon nitride has a significantly better cavity filling effect than PECVD. A conventional ICP etching tool with the optimized CH 2 F 2 /O 2 /CH 4 /Ar gas mixture can control the silicon nitride inner spacer etching very well. The etch selectivity of silicon nitride to Si is more than 100:1, and that to SiO 2 is more than 30:1. The vertical/lateral etch ratio is related to the thickness of the SiGe layer: the thinner the SiGe, the higher the ratio. For the nanostructure with a SiGe thickness of 10 nm, the vertical/lateral etch ratio reaches 80:1. The high-resolution TEM and EDS mapping results show that the SiN on the end face of the nanosheet is totally etched while the SiN in the cavity remains relatively intact. The method proposed in this study has the advantages of simple hardware equipment, high etch selectivity and an excellent vertical/lateral etch ratio. | 6,658.4 | 2020-04-01T00:00:00.000 | [
"Physics"
] |
Slope Stability Analysis of the Gbeni Earth Dam (GB3) in Rutile-Sierra Leone
Earth slope stability analysis is important in the design and construction of earth dams under different loading conditions. Several factors, such as changes in water level or rapid drawdown in the reservoir, the end-of-construction state, and steady-state seepage, may result in instability of earth dams, and all possible combinations must be considered. In this study, three scenarios were evaluated for the Gbeni Earth Dam (GB3) at Sierra Rutile Mining Company in Sierra Leone. It is a zoned earth-filled embankment dam with upstream and downstream slopes of 1:3 and 1:2, respectively, and a horizontal sand blanket downstream. The upstream slope was subjected to both rapid drawdown in the reservoir of the earth dam and the end-of-construction condition, when there is no water present in the reservoir, whilst the downstream slope was assessed for the steady seepage and end-of-construction conditions. The critical slip surface of both slopes was evaluated using a circular failure surface. The objective of this study was to assess the stability of these slopes under the above conditions using traditional analysis according to the theory of limit states and the safety factor in the GEO 5 software and its subprograms. The water levels on the upstream and downstream banks, the geotechnical properties of the soil materials and the boundary conditions of the dam were used as input variables, with safety factors as the desired output. The study found that the factor of safety against sliding of the upstream slope drops marginally within a short period after the start of rapid drawdown of water in the reservoir of the dam. Also, the downstream slope was found to be more stable under steady seepage. This accounts for the uncertainties involved in the strength of the materials, the pore pressures in the impervious clay core material, and long-term loading conditions.
Introduction
Sierra Rutile Mining Company is one of the oldest mining companies in Sierra Leone that relies on dams for various mining processes. The company has well over thirty dams, ranging from earth-filled to tailings dams, that have existed for many years. Gbeni Earth Dam is the third water storage dam within the Gbeni area, one of the company's mining environments. The dam is 300 m long, 100 m wide and 18 m high, with 1:3 upstream and 1:2 downstream slopes. A 25 m wide and 100 m long horizontal sand drainage blanket was also incorporated at the downstream grade in order to handle seepage of water within the dam. The GB3 earth dam was designed to handle an estimated volume of 5,386,000 m³ of water.
In order to investigate the stability of an earth dam, it is necessary to run a thorough evaluation of its slopes under different loading conditions. Craig (2004) maintained that assessing the stability of earth slopes is key to defining their behavior under all possible loading conditions [3].
For slope stability analysis, the verification of structural safety based on the factor of safety is historically the oldest and most widely used approach. According to Stark & Jafari (2017), its principal advantage is its simplicity and rationality [11]. When performing the analysis using this method, neither the loads nor the soil parameters are reduced by any design coefficients. The factor of safety represents the ratio of the total available shear strength of the soil to the shear stress required to maintain equilibrium along a potential surface of sliding. This factor is a relative measure of how stable a slope is under various loading combinations; however, it does not precisely indicate the actual margin of safety. It follows that a relatively large factor of safety implies relatively low shear stress levels in the embankment or foundation and, hence, relatively small deformations. Das (2007) suggests that the minimum factors of safety for use in the design of slope stabilization should follow rationally from an assessment of several factors, including the extent of planned monitoring of pore pressures and the assumptions and uncertainties involved in the strength of the construction material [4].
According to Terzaghi & Peck (1948), there are different slope stability assessment methods. In this study, the effective stress state of the soil properties was used, and the most critical slip circle centre according to the Bishop, Fellenius/Petterson, Spencer, Janbu, and Morgenstern-Price methods was determined [12]. With these methods in the GEO 5 computer program, the potential slip surface and safety factors of the GB3 earth dam were determined for all probable loading combinations.
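As a point of reference for the methods listed above, the ordinary (Fellenius) method of slices evaluates the factor of safety of a circular slip surface as the ratio of resisting to driving forces along the surface, FS = Σ(c·l + W·cosα·tanφ) / Σ(W·sinα) when pore pressure is neglected. The minimal Python sketch below illustrates that calculation; it is not the GEO 5 implementation, and the slice data and soil parameters are hypothetical placeholders only.

import math

def fellenius_fs(slices, c, phi_deg):
    # Ordinary (Fellenius) method of slices for a circular slip surface.
    # slices: list of (weight kN/m, base length m, base inclination deg) per slice.
    # c: cohesion (kPa), phi_deg: friction angle (deg). Pore pressure is neglected here.
    tan_phi = math.tan(math.radians(phi_deg))
    resisting = 0.0
    driving = 0.0
    for weight, length, alpha_deg in slices:
        alpha = math.radians(alpha_deg)
        resisting += c * length + weight * math.cos(alpha) * tan_phi
        driving += weight * math.sin(alpha)
    return resisting / driving

# Hypothetical slices (placeholders only).
slices = [(120.0, 2.1, -5.0), (260.0, 2.0, 8.0), (310.0, 2.0, 20.0), (180.0, 2.3, 34.0)]
print("FS =", round(fellenius_fs(slices, c=20.0, phi_deg=28.0), 2))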
Case Analysis and Loading Conditions
This study was performed using traditional analysis according to the theory of limit states and the safety factor in the GEO 5 software and its subprograms, assessing the stability of both the upstream and downstream slopes by subjecting each slope to the appropriate loading conditions. Consideration was given to all loading conditions which may result in instability of the GB3 water storage earth dam.
Rapid Drawdown Condition in the Reservoir
Slope stability analysis during rapid drawdown in the reservoir is an important consideration in the design of earth dams. Fluctuations in the reservoir water level may cause stability issues for the upstream grade, mainly as a result of the removal of the supporting water. Groundwater specified within the slope body, using any one of the five analysis options, influences the analysis in two different ways: when calculating the weight of the soil mass and when defining the shear forces. When the reservoir is rapidly emptied and drawn down, the stabilizing effect of the water on the upstream face is lost, but the pore-water pressures within the embankment may remain high. As a result, the stability of the upstream face of the dam can be much reduced. According to Bishop & Bjerrum (1960) and Craig (2004), rapid drawdown is an important condition controlling the design of the upstream slope in embankment dams [1][2][3]. In particular, slides due to rapid drawdown can lead to reduced reservoir capacity and dam failure. In their book on earth and earth-rock dams, Sherard et al. (1963) describe several upstream slope failures attributed to rapid drawdown conditions [10]. In this study, the 1:3 upstream slope of the GB3 Earth Dam was subjected to this condition and analyzed in the GEO 5 software and its subprograms.
End of Construction Condition
For earth dams, the critical condition to be analyzed is at the completion of embankment construction but before filling with water. In this case, there is no water table present in the reservoir or in the body of the dam. For this loading case, Khanna et al. (2014) maintained that excess pore pressures may be induced in impervious zones of the embankment or foundation [8]. As a result, the stability of both the 1:3 upstream and the 1:2 downstream slopes was analyzed for this loading condition.
Steady State Seepage Condition
When an earth dam is constructed, especially when the reservoir is full of water, there is some steady seepage within and into the embankment. To assess the effect of this condition and the uncertainties involved in material strengths, pore pressures in impervious material such as the incorporated core zone, and long-term loading, the stability of the 1:2 downstream slope of the Gbeni Earth Dam was evaluated for this loading condition. The material properties of the various zones of the dam are presented in Table 1 below.
Results and Discussion
In the slope stability analyses of the GB3 Earth Dam, the effective stress state soil properties were used, and the most critical slip circle centre according to five analysis methods was considered. The factor of safety against failure of both the upstream and downstream slopes of the earth dam for three discrete loading conditions was evaluated according to the Bishop, Fellenius/Petterson, Spencer, Janbu, and Morgenstern-Price methods. This section, therefore, presents a discussion of the results obtained from the GEO 5 computer program and its subprograms.
Stability of Upstream Slope - End of Construction and Rapid Drawdown Conditions
The upstream slope of the Gbeni Earth Dam was analyzed for two loading conditions: the end-of-construction (dry) condition, when there is no water table present in the reservoir and the embankment dam body, and the rapid drawdown condition in the reservoir of the earth dam. Typical failure surfaces for the stability of this slope under both conditions and the safety factors obtained during the analysis are presented below.
Stability of Downstream Slope - End of Construction and Steady Seepage Conditions
The downstream slope of the GB3 Earth Dam was also analyzed for two loading conditions: the end-of-construction (dry) condition, when there is no water table present in the reservoir and in the embankment dam body, and the steady-state seepage condition, when the reservoir is full of water. Typical failure surfaces for the stability of the downstream slope under both conditions and the safety factors obtained during the analysis are presented below. The factors of safety for all methods under the three discrete loading conditions are greater than the minimum acceptable value of 1.5 put forward by Coduto (1999) [2]. The factor of safety against sliding of the upstream slope was found to decrease immediately when the dam was subjected to a rapid drawdown of water in the reservoir. This was a result of the dissipation of excess pore pressure with time, which leads to an increase in effective stresses in the soil and hence an increase in its shear strength. Loading due to unbalanced seepage forces may have rendered the upstream slope unstable, since the reservoir level is reduced during the rapid drawdown of water. Of the five analysis methods, Fellenius was found to give the minimum safety factor for all loading conditions of both the upstream and downstream slopes. The soils at the GB3 Dam are not homogeneous, contrary to the assumption put forward by Fellenius (1936) [7], and the method does not take the water tables and pore water pressures at the dam into account. As a result, the location of the most critical slip circle centre in the earth embankment dam determined using the Fellenius method may not be accurate enough. However, this determination provides a benefit in reducing the number of computational trials. The difference between the various analysis methods is less than 6%, in line with Duncan (1996) [5]. In fact, the safety factors obtained from the Janbu and Morgenstern-Price analysis methods agree fairly well for all loading conditions.
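As a simple check of the spread discussed above, the snippet below takes a set of safety factors from the five methods for one loading case and reports the minimum value against the 1.5 criterion, as well as the maximum relative difference between methods. The numbers used here are illustrative placeholders, not the GEO 5 outputs of this study.

# Illustrative check of the method-to-method spread and the 1.5 minimum criterion.
fs_by_method = {
    "Bishop": 1.86, "Fellenius/Petterson": 1.79, "Spencer": 1.85,
    "Janbu": 1.83, "Morgenstern-Price": 1.84,
}
fs_min = min(fs_by_method.values())
fs_max = max(fs_by_method.values())
spread_pct = 100.0 * (fs_max - fs_min) / fs_min
print(f"Minimum FS: {fs_min:.2f} (acceptable: {'yes' if fs_min > 1.5 else 'no'})")
print(f"Spread between methods: {spread_pct:.1f}%")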
Conclusion
Based on the slope stability analysis results, the following conclusions can be deduced when considering the three examined construction and/or loading conditions: The GB3 Earth Dam is stable under all probable loading conditions.
The overall minimum stability factor for the end-of-construction loading case of the upstream slope of the earth dam, before filling with water, was 2.28. This implies that the upstream slope is stable against sliding failure under this condition.
The overall minimum stability factor for the rapid drawdown loading case of the upstream slope of the earth dam was 1.79. This implies that the upstream slope is stable against sliding failure under this condition.
The overall minimum stability factor for the end-of-construction loading case of the downstream slope of the earth dam, before filling with water, was 1.83. This implies that the downstream slope is stable against sliding failure under this condition.
The overall minimum stability factor for the steady-state seepage loading case of the downstream slope of the earth dam, with the reservoir full, was 2.02. This implies that the downstream slope is stable against sliding failure under this condition.
The factor of safety against sliding of the upstream slope was found to decrease immediately when the dam was subjected to rapid drawdown of water in the reservoir. This was a result of the dissipation of excess pore pressure with time, which leads to an increase in effective stresses in the soil and hence an increase in its shear strength. Loading due to unbalanced seepage forces may have contributed to this reduction, since the reservoir level is reduced during the rapid drawdown.
The difference between the various analysis methods is less than 6%, in line with Duncan (1996) [5]. In fact, the safety factors obtained from the Janbu and Morgenstern-Price analysis methods agree fairly well for all loading conditions. Though stable, the upstream slope may be exposed to collapse in the long term if subjected to constant evacuation of water from the reservoir. | 2,867.2 | 2020-12-16T00:00:00.000 | [
"Engineering",
"Environmental Science"
] |
A Combined Computational and Experimental Approach to Studying Tropomyosin Kinase Receptor B Binders for Potential Treatment of Neurodegenerative Diseases
Tropomyosin kinase receptor B (TrkB) has been explored as a therapeutic target for neurological and psychiatric disorders. However, the development of TrkB agonists was hindered by our poor understanding of the TrkB agonist binding location and affinity (both of which affect the regulation of disorder types). This motivated us to develop a combined computational and experimental approach to study TrkB binders. First, we developed a docking method to simulate the binding affinity of TrkB and binders identified by our magnetic drug screening platform from Gotu kola extracts. The Fred Docking scores from the docking computation showed strong agreement with the experimental results. Subsequently, using this screening platform, we identified a list of compounds from the NIH clinical collection library and applied the same docking studies. From the Fred Docking scores, we selected two compounds for TrkB activation tests. Interestingly, the ability of the compounds to increase dendritic arborization in hippocampal neurons matched well with the computational results. Finally, we performed a detailed binding analysis of the top candidates and compared them with the best-characterized TrkB agonist, 7,8-dihydroxyflavone. The screening platform directly identifies TrkB binders, and the computational approach allows for the quick selection of top candidates with potential biological activities based on the docking scores.
Introduction
Alzheimer's disease (AD) has become one of the most challenging chronic age-associated diseases of the 21st century [1]. Despite years of intense research and numerous on-going trials, new therapies capable of delaying the onset, slowing progression, or improving the cognitive effects of AD are still needed [2,3]. Over 400 small molecular compounds developed in preclinical studies focusing on β-amyloid (Aβ) plaques or neurofibrillary tangles failed to progress through clinical trials [2]. Professionals in the field strongly suggest that alternative targets and approaches be explored to identify new potential drugs for the prevention and/or treatment of AD [4]. Numerous studies have shown the link between brain-derived neurotrophic factor/tropomyosin kinase receptor B (BDNF/TrkB) pathway activation and the improvement of neurological disorders, such as AD or major depressive disorder [5][6][7][8]. Accumulating evidence demonstrates that the expression of BDNF and its receptor TrkB decreases in AD, and similar reductions exacerbate hippocampal dysfunction in animal models of AD [9]. Decreased levels of BDNF have also been reported in the serum and brain of AD patients [10][11][12]. Tau overexpression or hyperphosphorylation down-regulates BDNF expression in primary neurons and AD animal models [13][14][15]. Additionally, BDNF has been found to have protective effects on Aβ-induced neurotoxicity in vitro and in vivo [16], and BDNF administration directly into the rat brain has been shown to increase learning and memory in cognitively impaired animals [17]. Therefore, BDNF/TrkB signaling may serve as a valid target for ameliorating neurological and psychiatric disorders, including AD [8,18]. Targeting the BDNF/TrkB signaling pathway for the development of therapeutics for AD will potentially enhance our understanding of the disease [19]. Unfortunately, the natural ligand, BDNF, cannot penetrate the blood brain barrier (BBB), leading to poor bioavailability in the brain [20]. Alternatively, small molecule activators have been explored as potential drug candidates targeting the BDNF/TrkB pathway [21][22][23], such as 7,8-dihydroxyflavone (7,8-DHF) [21,24,25], 7,8-DHF derivatives [26], and mimetics of the TrkB binding domain (loop II) of BDNF (e.g., LM22A compounds) [27,28]. For example, a derivative of 7,8-DHF (R13; 4-Oxo-2-phenyl-4H-chromene-7,8-diyl bis (methylcarbamate)) is currently under consideration as a potential drug for AD [26]. These studies suggest that small molecule agonists may have the potential to serve as BDNF alternatives to activate the TrkB pathway. Recently, it has also been shown that several antidepressants (such as fluoxetine or imipramine) work through directly binding to TrkB and promoting BDNF signaling, further stressing the importance of pursuing TrkB as a valid target to treat various neurological disorders [29]. For example, TrkB activation in parvalbumin interneurons was required to promote reversal learning in spatial and fear memory by antidepressants [30]. However, the mechanism of small molecular TrkB activators has not been systematically studied in terms of binding location, the correlation of binding affinity with activities, and the structural diversity of small activators.
It has been a challenge to directly identify binders for transmembrane receptors because the receptors require boundary lipids for proper function. We recently developed a novel magnetic screening nanoplatform (MSN) based on cell-membrane-coated magnetic nanoclusters with immobilized receptors within the membrane [31,32]. The immobilized transmembrane receptors can be used to directly identify binding compounds through specific ligand-receptor interactions. This method works as a screening funnel to quickly narrow down a large number of molecules to binding compounds. The coated magnetic nanoclusters enable rapid binder isolation [32]. MSN can directly identify binders from a library mixture, and pure compounds are not necessary. In addition, effective approaches using transmembrane protein receptors as screening targets for direct compound identification are very limited [33][34][35] because transmembrane receptors require boundary lipids to function properly. MSN technology uses receptors within the cell membrane as functional receptors. The ability of MSN technology to screen compound mixtures and the use of transmembrane receptors as screening targets are two distinct benefits. The design of MSN technology was based on the central hypothesis that direct drug-receptor binding is essential to therapeutic functions. The screening experiments typically lead to a list of receptor binders. However, determining the binding affinity and biological activities and differentiating between activators and inhibitors remain challenging. Taking TrkB as an example, several studies have shown that multiple binding sites are available for this receptor. It is highly challenging to experimentally differentiate the binding location of a binder and highly labor- and cost-intensive to evaluate every single compound.
In this paper, we report a combined computational and experimental approach to studying TrkB binders for the potential treatment of neurodegenerative disease. Experimentally, we screened two different mixtures (Gotu Kola plant extract and the NIH clinical collection library [36]) using MSN screening technology with TrkB as the screening target. The resulting TrkB binders were evaluated with Fred Docking software 2.2.0 [37], employing Chemgauss as its empirical-based scoring function, to elucidate the binding sites and binding affinity. Subsequent experiments were conducted to test the selected compounds. An overview of the experimental and computational process is illustrated in Figure 1. Our study suggested strong synergy between the docking results and compound activities evaluated by dendritic arborization in isolated Aβ-overexpressing hippocampal neurons. A strong association between TrkB activation and neuron development was identified. Furthermore, our findings demonstrate the feasibility of using docking studies to systematically explore all potential binding sites, allowing one to effectively narrow down the top candidate compounds. Our combined approach will not only greatly benefit drug discovery processes using TrkB transmembrane proteins as targets but also allow for the evaluation and validation of any previously reported TrkB binders.
Magnetic Drug Screening Nanoplatform
Our recently developed MSN based on cell-membrane-coated magnetic nanoclusters with immobilized TrkB allowed for the direct identification of TrkB binders from mixtures. This screening platform is able to identify and extract binding compounds through specific ligand-receptor interactions, which quickly narrows down a large number of molecules to binding compounds, while the magnetic iron oxide nanoclusters inside enable rapid binder isolation (Figure 1a). We demonstrated the proof-of-concept [31] and feasibility [32] of this MSN technology to identify binders from small-molecule libraries and plant extracts. Using this MSN screening platform with TrkB as the screening target, we have effectively identified a list of TrkB binders from Gotu Kola plant extract [32]. To evaluate whether the TrkB binders have the anticipated biological activity of activating TrkB, we performed preliminary testing on one of the TrkB binders, 4-O-caffeoylquinic acid (4-O-CQA), using a dendritic arborization assay. The assay was performed using hippocampal neurons isolated from 5xFAD mice and cultured in vitro [32]. Isolated 5xFAD hippocampal neurons treated with 4-O-CQA at 1 µM for several days showed significantly increased arborization in these Aβ-expressing neurons [32].
However, it is a highly labor-intensive and costly process to evaluate the biological activity of every identified TrkB binder. In addition, the screening process only leads to binders, but no quantitative affinity information and no receptor binding sites can be obtained. Therefore, we explored a docking computation approach to evaluate the binding affinity of the TrkB binders and to elucidate the binding location within the receptor.
Docking of TrkB Binding Compounds from Gotu Kola Plant Extract
To visualize and identify the binding mechanism of the Gotu Kola plant compounds, we selected TrkB-D5 as the receptor for docking these ligands. TrkB-D5 is considered a favorable target for neurological and psychiatric disorder agonists, which bind to this TrkB domain to mimic the binding of brain-derived neurotrophic factor (BDNF) [38][39][40][41][42]. We leveraged the high-resolution X-ray crystallographic data of TrkB-D5 in the NT-4/5-TrkB complex (PDB ID: 1HCF) [43]. Derived from human sources and accessible through protein databanks [44], this complex comprises four chains designated as A, B, X, and Y. In this configuration, chains A and B form a neurotrophin 4 homodimer, while chains X and Y correspond to two monomers of TrkB-D5. Figure 2a depicts the structure of the TrkB-D5:NT-4/5 complex formed by chain X and chain A. Our research primarily concentrates on chain X of PDB ID 1HCF to simulate the docking process between the TrkB-D5 domain and selected agonist candidates. The protocol for optimizing this structure is detailed in the Docking Computation section.
Using the OEDocking Graphical User Interface [37], we identified five potential binding regions for the TrkB agonists, as depicted in Figure 2b. In our study, we utilized the FRED docking score with the Chemgauss4 scoring function [37], which enhances molecular docking accuracy by refining the assessment of hydrogen bond directionality and metal chelator interactions, to rank the most favorable binding pockets for our TrkB binders (Table 1). The default parameters in the FRED 2.2.0 software were employed in this study. In particular, while Chemgauss4 is used for the optimization phase, Chemgauss3 is utilized for the exhaustive search phase. Furthermore, the FRED software defines the "negative image" to eliminate poses that either clash with the protein or extend too far from the binding site. Specifically, the negative image describes the shape of the active site and is stored as a potential grid surrounding it. This image highlights areas where ligand atoms can make many contacts with the active site atoms without clashing and indicates likely positions for ligand atoms during optimal binding. According to the Fred Docking scores in Table 1, binding site 1 (BP1) is the most favorable region for the investigated agonists, but depending on the compound, the other four binding sites are also involved to some extent. Table S2 presents a detailed analysis of the binding site regions (BP1 to BP5) within the TrkB-D5 protein, highlighting the key amino acids responsible for hydrogen bonding and hydrophobic interactions. Each binding site region is defined by a unique set of amino acids that contribute to the overall stability and function of the protein-ligand complex. Interestingly, our experimentally tested bioactive compound, 4-O-caffeoylquinic acid, showed high binding affinity to all five binding sites, with the strongest binding at site 1. Although dicaffeoylquinic acid and castillicetin bind slightly more strongly to BP1 than 4-O-caffeoylquinic acid, their involvement with the other binding sites is significantly lower. The trend is very similar to the Fred Docking scores of the known TrkB agonist, 7,8-dihydroxyflavone, for which BP1 has high binding affinity, with the involvement of all four other binding sites. The synergy between the docking studies and the experimental observation suggests that our docking computation provides an effective tool to predict the activity of TrkB binders.
Table 1. Fred Docking scores (in kcal/mol) for all five different binding sites for compounds from Gotu Kola extracts. BP stands for binding pocket. The notation "-" denotes that the software failed to generate the pose because the docking score is out of the acceptable range. Here, the known TrkB agonist, 7,8-dihydroxyflavone, was included for comparison.
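Given per-pocket Fred Docking scores such as those summarized in Table 1, the candidate ranking used in this work amounts to taking the most negative (most favorable) score for each compound and noting which pocket produced it. The Python sketch below shows that bookkeeping. The BP1 values repeat numbers quoted in the text (including for the NIH-library binders discussed in the next subsection); the scores listed for the other pockets are invented placeholders, not results from this study.

# Rank TrkB binder candidates by their most favorable Fred Docking score (kcal/mol);
# more negative values indicate stronger predicted binding.
scores = {
    "nadolol":                 {"BP1": -8.13, "BP2": -5.9, "BP3": -5.1},
    "dicaffeoylquinic acid":   {"BP1": -7.63, "BP2": -5.4},
    "4-O-caffeoylquinic acid": {"BP1": -7.02, "BP2": -6.1, "BP3": -5.8},
    "7,8-dihydroxyflavone":    {"BP1": -6.27, "BP2": -5.2, "BP4": -4.9},
    "valproic acid":           {"BP1": -4.83},
}

def best_pocket(pocket_scores):
    # Return the (pocket, score) pair with the lowest (most favorable) score.
    return min(pocket_scores.items(), key=lambda kv: kv[1])

for name, pockets in sorted(scores.items(), key=lambda kv: min(kv[1].values())):
    pocket, score = best_pocket(pockets)
    print(f"{name:25s} best pocket {pocket} at {score:.2f} kcal/mol")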
Screening of the NIH Clinical Collection Library
The plant extracts serve as a great resource for bioactive compounds; however, because of the large number of compounds and their diverse structures, the properties of certain compounds are not readily available. Therefore, we further applied our MSN technology to screen the NIH clinical collection library. The NIH clinical collection library contains 708 compounds used in human clinical trials that have known therapeutic functions, well understood molecular mechanisms of action, and available structural and physicochemical properties. Following a similar procedure to that used to screen the Gotu Kola plant extracts and the same compound analysis process, we identified 17 compounds as TrkB-specific binders. The binder identification process, based on ultra-performance liquid chromatography (UPLC) chromatograms of binders eluted from MSN with TrkB and MSN without TrkB, is shown in Figure S1. Peaks that appeared only in the UPLC chromatograms of MSN with functional TrkB, but not in the control MSN from TrkB-null cells, were analyzed by mass spectrometry as binders. Mass spectrometry was performed in both negative and positive ionization modes, but the positive mode exhibited much better signals. Therefore, the positive ionization mode was used to analyze the TrkB binders. The compound matching process was achieved by comparing the detected compound masses with the NIH compound data (Figure S2). Subsequently, we applied the same docking studies described previously to the identified compounds.
The identified TrkB binders, along with the Fred Docking scores (in kcal/mol) for all five different binding sites, are shown in Table 2. Similar to the binding behaviors of the TrkB binders from the Gotu Kola plant extracts, BP1 is the favorable binding site for all the binders from the NIH library. In addition, the binding affinities for the stronger binders were also very close, such as dicaffeoylquinic acid (−7.63 kcal/mol) and 4-O-caffeoylquinic acid (−7.02 kcal/mol) for binders from plant extracts and nadolol (−8.13 kcal/mol) and clofazimine (−7.36 kcal/mol) for binders from the NIH library. Nadolol is currently an FDA-approved β-blocker with known therapeutic functions; well-understood molecular mechanisms of action; and available solubility, safety, and pharmacokinetic properties. Most importantly, β-blockers have been shown to lower the risks of AD [45]. In fact, a phase I trial is in progress to study the cognitive impacts of the combination of clenbuterol and nadolol on mild dementia due to AD or other diseases [46]. In contrast, valproic acid, which has been commonly used in the treatment of epilepsy and bipolar disorder, was shown to be related to TrkB [47] and is considered a partial TrkB agonist. The ability of MSN with functional TrkB to identify valproic acid also indicated the effectiveness of MSN for TrkB binder identification. Therefore, those two compounds were selected as top candidates for TrkB activation associated with neuron elongation using primary hippocampal neurons isolated from embryonic C57BL6 mice. First, the isolated primary neurons were treated with BDNF, the physiological ligand in the human body. Primary neurons exposed to 25 ng/mL of BDNF were observed to have increased arborization, which is directly linked to increased neuroplasticity. The arborization was attenuated in the presence of the TrkB selective inhibitor, ANA-12, as shown in Figure 3. Similarly, we tested valproic acid and nadolol using the same dendritic arborization assay. Both molecules are FDA-approved drugs with well-established bioavailability, toxicity, and BBB penetration data. Specifically, valproic acid is readily soluble in water (50 mg/mL) and penetrates the BBB [48]. Nadolol is slightly soluble in water (46.4 µg/mL) and has been previously shown to cross the BBB [49]. Valproic acid at 500 µM and nadolol at 10 nM concentration increased dendritic arborization compared to the control. The increase in arborization elicited by nadolol was as effective as that observed for BDNF, the positive control. The increase in dendritic arborization caused by valproic acid treatment was less effective than nadolol treatment but was significantly higher compared to the negative control, which is in agreement with the Fred Docking scores reported in Table 2.
Most importantly, the dendritic arborization of the primary neurons exposed to either nadolol or valproic acid was attenuated by co-treatment with the selective TrkB inhibitor ANA-12. ANA-12 blocks the neurotrophic actions of BDNF without compromising neuron survival, and it has been shown in animal models to penetrate the BBB and block the cognitive effects associated with TrkB activation. TrkB agonists can activate the TrkB/BDNF pathway, which has been linked to cognitive effects in the literature. However, in vitro primary neuron activation does not guarantee cognitive effects, because the BBB penetration and brain bioavailability of compounds are critical. Nevertheless, given that 7,8-DHF-treated 5xFAD mice demonstrated cognitive improvement [50] and overexpression of BDNF [51], we believe that compounds activating the TrkB/BDNF pathway with high brain bioavailability may have potential cognitive effects.
Detailed Analysis of TrkB and Top Candidate Interactions
TrkB signaling has been explored as a therapeutic target for neurological and psychiatric disorders [29,38,52]. However, the development of TrkB agonists has not made notable progress, especially with small-molecule agonists. In addition, it is known that the binding location of TrkB agonists affects the regulation of the types of disorders [42]. Therefore, it is important to elucidate the interactions between TrkB binders and TrkB. Here, we investigate the interaction details of the top candidates based on computed binding affinities: nadolol (∆∆G = −8.13 kcal/mol), valproic acid (∆∆G = −4.83 kcal/mol), and dicaffeoylquinic acid (∆∆G = −7.63 kcal/mol). These binding energies were calculated using FRED docking scores based on the Chemgauss4 scoring function. The Chemgauss4 scoring function employs Gaussian smoothed potentials to evaluate the complementarity of ligand poses within the active site, focusing on interactions such as shape complementarity, hydrogen bonding (both with the protein and implicit solvent), and metal-chelator interactions. Consequently, our top candidate, nadolol, was selected based on its strong shape complementarity with the binding site. Additionally, nadolol forms the most hydrogen bonds with crucial active site residues, enhancing its binding stability. These components contribute to its higher docking scores. As a positive reference, we also provide the binding pocket analysis of 7,8-dihydroxyflavone (7,8-DHF), a small molecule that acts as a selective agonist of the TrkB receptor, specifically binding to its extracellular domain D5 [53][54][55]. Our computed binding affinity of 7,8-DHF is ∆∆G = −6.27 kcal/mol (Tables 1 and 2), which is comparable to our top compounds, nadolol and dicaffeoylquinic acid.
Figure 4 illustrates the 3D structures of the TrkB-D5 domain in complex with four different ligands: 7,8-DHF, nadolol, valproic acid, and dicaffeoylquinic acid, highlighting the spatial arrangement and interactions between the protein and the ligand in the binding pocket BP1. In each subfigure, the TrkB-D5 structure is rendered as a grey surface representation to depict the overall shape and binding pocket. The small molecule is shown as a ball-and-stick model with carbon atoms in cyan, oxygen atoms in red, and hydrogen atoms in white. To reveal the binding mechanism between the potential agonist and the TrkB-D5 receptor, we first analyzed the interactions between TrkB-D5 (chain X) and NT-4/5 (chain A) in the complex of PDB ID 1HCF. Figure 5 depicts the hydrogen bonds as well as the residues involved in hydrophobic contacts. This analysis was generated by LigPlot+ [56] and re-rendered using Marvin [57]. In the contact map, the hydrogen bonds are represented by green dashed lines, indicating the distance between donor and acceptor atoms. These bonds are expected to be crucial for the stability and specificity of the interaction between TrkB-D5 and NT-4/5. Key residues forming the hydrogen bonds in TrkB-D5 include His353, Gly383, Asp298, and His335. The hydrophobic contacts are depicted by curved lines around the involved residues. These interactions contribute to the binding affinity and stabilization of the complex. Significant hydrophobic contacts include residues such as Met379, Gly380, Pro382, Thr296, His343, Phe291, and Val336 in the TrkB-D5 domain, interacting with various residues in NT-4/5. It is reasonable to expect that agonists will involve similar residues of the TrkB-D5 receptor in forming hydrogen bonds and hydrophobic contacts. Similar to the interaction analysis in the 1HCF complex, we also reveal the details of the hydrogen bonds and hydrophobic contacts of the selected ligands in Figure 6.
Table 3 highlights the specific interactions between those compounds and the TrkB-D5 domain, including details on the hydrogen bonds, distances, and angles for each interaction. 7,8-DHF forms several hydrogen bonds with residues Gly344, Phe305, and His335. Note that His335 is also involved in the hydrogen bonds of the TrkB-D5 and NT-4/5 complex. The binding energy of 7,8-DHF, −6.27 kcal/mol, indicates moderate binding affinity. This relatively higher (less favorable) binding energy is attributed to the fewer and less optimal hydrogen bond interactions compared to the other ligands. The most effective agonist among the compounds investigated is nadolol, with a binding energy calculated at −8.13 kcal/mol. This small molecule exhibits extensive hydrogen bonding with key residues Asp298, Thr296, Gly344, His343, Thr306, and Phe305. These hydrogen bonds are responsible for the lower binding energy between nadolol and the TrkB-D5 receptor. Nadolol and 7,8-DHF share two common hydrogen bonds involving residues Gly344 and Phe305. Although the Phe305 hydrogen bond in nadolol (2.64 Å, 57.2°) appears to be weaker than the corresponding bond in 7,8-DHF (2.85 Å, 110.6°), the higher number of hydrogen bonds in nadolol contributes to its stronger binding affinity. Specifically, nadolol features strong hydrogen bonds with His343 (2.73 Å, 118.4°) and Asp298 (2.14 Å, 96.9°), which significantly enhance its binding stability and affinity. Valproic acid forms a single hydrogen bond with His335 of TrkB-D5, resulting in a binding affinity of −4.83 kcal/mol, which is not very promising. Notably, both 7,8-DHF and dicaffeoylquinic acid also form hydrogen bonds with His335, but these bonds are weaker in terms of distance and angle (see Table 3). Additionally, His335 of TrkB-D5 forms a hydrogen bond with Glu13 of NT-4/5 in their complex (see Figure 4). This suggests that His335 plays an important role in the binding mechanism of TrkB-D5 with its ligands. However, it may not be crucial for strengthening the binding affinity, as evidenced by 7,8-DHF, which has a weak hydrogen bond with His335 yet still exhibits a low binding energy.
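For readers less familiar with the geometric quantities in Table 3, the sketch below shows how a hydrogen-bond distance and angle are computed from atomic coordinates; the coordinates are invented for illustration and do not come from the docked poses.

```python
import numpy as np

# Toy illustration of hydrogen-bond geometry: the donor...acceptor distance
# and the donor-hydrogen-acceptor angle from 3D coordinates. The coordinates
# below are invented, not taken from the reported structures.
donor    = np.array([0.00, 0.00, 0.00])   # e.g., a ligand hydroxyl oxygen
hydrogen = np.array([0.95, 0.00, 0.00])   # proton bonded to the donor
acceptor = np.array([2.20, 1.10, 0.00])   # e.g., a carboxylate oxygen

distance = np.linalg.norm(acceptor - donor)
v1, v2 = donor - hydrogen, acceptor - hydrogen
cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
angle = np.degrees(np.arccos(cos_a))
print(f"D...A distance = {distance:.2f} A, D-H-A angle = {angle:.1f} deg")
# D...A distance = 2.46 A, D-H-A angle = 138.6 deg
```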
Table 3. Hydrogen bond analysis between the top candidates and the TrkB-D5 domain. The first atom in the hydrogen bond representation is from the considered ligand. The second component, specified with the residue ID it is located in, is the acceptor (or donor) from TrkB-D5.
Hydrogen Bond | Distance (Å) | Angle (°)

Dicaffeoylquinic acid is the second strongest binder after nadolol, with an estimated binding energy of −7.63 kcal/mol, attributed to seven hydrogen bonds. Both dicaffeoylquinic acid and nadolol form hydrogen bonds with the same residues of TrkB-D5, including Asp298, Gly344, Phe305, and His343. While dicaffeoylquinic acid forms a weaker hydrogen bond with Asp298 compared to nadolol, its hydrogen bond with Gly344 is much stronger. However, the binding energy of dicaffeoylquinic acid (−7.63 kcal/mol) is less competitive than that of nadolol (−8.13 kcal/mol). Therefore, it can be concluded that forming a strong hydrogen bond with Asp298 is crucial for effective binding to TrkB. In support of this, Asp298 plays an important role in the hydrogen bond interactions between TrkB-D5 and NT-4/5 (Figure 4). All of the key amino acids responsible for the hydrogen bonding and hydrophobic interactions of the binding site regions (BP1 to BP5) within the TrkB-D5 protein are shown in Table S2. Each binding site region is defined by a unique set of amino acids that contribute to the overall stability and function of the protein-ligand complex.
To further confirm the stability of these top compounds, we carried out molecular dynamics (MD) simulations using the Desmond package from Schrödinger v2024-1. These simulations were performed to understand the stability and conformational changes of the protein-ligand complexes over a 10 ns period. The TIP3P model was used for the solvent, and the OPLS4 force field was applied for the simulations. Data were recorded at intervals of 10 ps. After 10 ns, the RMSD values for both the protein and the ligands were mostly within the acceptable limit of 2.5 Å, indicating stable complexes throughout the simulation period, as shown in Figure S3.
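The RMSD stability criterion can be expressed compactly. The following is a minimal sketch assuming the aligned trajectory has been exported as a NumPy array; the file name is hypothetical and the sketch is not part of the actual Desmond workflow.

```python
import numpy as np

# Minimal RMSD stability check, assuming the 10 ns trajectory has been
# aligned to its first frame and saved as an array of shape
# (n_frames, n_atoms, 3); "trajectory.npy" is a hypothetical file name.
traj = np.load("trajectory.npy")
ref = traj[0]
rmsd = np.sqrt(((traj - ref) ** 2).sum(axis=2).mean(axis=1))  # per frame, in A
print(f"max RMSD over the run: {rmsd.max():.2f} A")
print("stable (<= 2.5 A for all frames):", bool((rmsd <= 2.5).all()))
```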
Discussion
Our MSN screening nanoplatform is based on specific ligand-receptor interactions, so the outcome of a screening experiment is a set of receptor binders whose biological functions are not guaranteed. Therefore, biological activity tests are normally performed to confirm the identified compounds. In addition, for compound screening, the specificity of the interaction with the target is critical. To ensure the specificity of TrkB binders, we applied two control strategies: (1) a control cell line without the receptor of interest and (2) blocking the receptor with a known blocker. For example, for TrkB binder identification, cells overexpressing TrkB and cells without TrkB (control) were used. In addition, the well-known inhibitor ANA-12 was used to block TrkB. For the identified TrkB binders, a dendritic arborization assay was used to evaluate their biological activities. Dendritic arborization is a functionally relevant in vitro endpoint, as it reflects the potential to modulate synaptic plasticity, which serves as an indicator of TrkB activation [58].
To develop the docking methods for evaluating the binding location and binding affinity of the identified compounds, we selected TrkB-D5 as the receptor for docking. TrkB-D5 is considered a favorable target for neurological and psychiatric disorder agonists, which bind to this TrkB domain to mimic the binding of brain-derived neurotrophic factor (BDNF) [38][39][40][41][59]. Subsequently, we used the experimentally tested compounds identified by MSN from Gotu kola plant extracts to validate the docking methods. Our experimentally tested bioactive compound, 4-O-caffeoylquinic acid, showed high binding affinity to all five binding sites, with much higher binding at site 1. Although dicaffeoylquinic acid and castillicetin have slightly stronger binding to BP1 than 4-O-caffeoylquinic acid, their involvement of the other binding sites is significantly lower than that of 4-O-caffeoylquinic acid. This trend is very similar to the Fred Docking scores of the known TrkB agonist, 7,8-dihydroxyflavone, where BP1 has high binding affinity, with the involvement of all four other binding sites. The synergy between the docking studies and the experimental observations suggests that our docking computation provides an effective tool to predict the activity of TrkB binders.
Subsequently, we applied a similar experimental screening process to the NIH clinical collection library, and the docking methods to the identified binders. The Fred Docking scores (in kcal/mol) of the compounds identified from the MSN screening of the NIH library showed binding behaviors very similar to those of the TrkB binders from the Gotu kola plant extracts, where BP1 is the favorable binding site and only some compounds are involved in all of the binding sites. In addition, the binding affinities of the stronger binders were also very close, such as dicaffeoylquinic acid (−7.63 kcal/mol) and 4-O-caffeoylquinic acid (−7.02 kcal/mol) for binders from plant extracts, and nadolol (−8.13 kcal/mol) and clofazimine (−7.36 kcal/mol) for binders from the NIH library. These results suggest that the MSN screening process is highly reliable and applicable to different screening sources. In addition, the docking methods can be effectively used to predict binding affinities and binding sites.
To further validate the docking methods, the biological activities of two candidates (nadolol, the strongest binder, and valproic acid, a medium-affinity binder) were experimentally tested. The increase in dendritic arborization of primary neurons treated with nadolol was similar to that of BDNF, whereas treatment with valproic acid was less effective. These results suggest a correlation between the Fred Docking scores and the biological activities of the binders. In addition, the dendritic arborization of the primary neurons exposed to both nadolol and valproic acid was attenuated by co-treatment with the selective TrkB inhibitor ANA-12. This observation indicates the direct involvement of TrkB activation and further demonstrates the synergy of the docking methods and the MSN screening for the identification of potential TrkB agonists.
Detailed binding studies also suggest that the strength and number of hydrogen bonds significantly influence the binding affinity of ligands to the TrkB-D5 domain. Specifically, the presence of strong hydrogen bonds with key residues such as Asp298 and His343 is crucial for achieving low binding energies and thus high binding affinity. Additionally, while His335 plays a role in the binding mechanism, it may not be critical for enhancing binding strength, as evidenced by the moderate affinity of 7,8-DHF and the weak binding of valproic acid. Overall, these findings highlight the importance of specific hydrogen bond interactions in determining the binding efficacy of potential TrkB-D5 agonists.
Magnetic Screening Nanoplatform (MSN)
MSN using TrkB as the screening target was prepared using our previously reported protocols [32]. Briefly, cell membrane fragments were first isolated from SH-SY5Y cells overexpressing TrkB and from TrkB-null cells treated with hypotonic buffers. Subsequently, 3 mL of cell membrane fragment Bis-Tris buffer (20 mM, pH 7.2) solution from 10^7 cells was mixed with 1.0 mL of sterilized 1.0 mg/mL polyacrylic acid-coated iron oxide nanoclusters. The mixture was vortexed briefly and incubated on ice for 30 min. Then, the mixture was tip-sonicated for 120 s (27% amplitude, 5 s on, 5 s off) using a Branson Digital Sonifier with a one-eighth-inch microtip. After characterization, MSN with TrkB was stored at 4 °C until further experiments were conducted.
Screening of NIH Library
We previously established the screening conditions for hot-water plant extracts and small-molecule DMSO mixtures, as well as the compound elution protocols, using MSN with TrkB [32]. Here, we used the NIH clinical collection library as the screening source; it contains 708 compounds used in human clinical trials with known therapeutic functions, biological mechanisms of action, and available structural and physicochemical properties. To perform the fishing experiment, a mixture of compounds from the library was prepared by combining 1 µL of each compound (10 mM). The mixture was then diluted 100-fold with ammonium acetate buffer (50 mM, pH 7.4), followed by the addition of 250 µL of MSN. The mixture was incubated for 20 min at 37 °C to facilitate ligand binding. Then, the MSNs with bound compounds were magnetically separated from the mixture and washed three times with ammonium acetate buffer (250 µL, 50 mM, pH 7.4). Finally, the MSN-bound compounds were eluted with 250 µL of methanol/ammonium acetate buffer (1/9, 5/5, and 9/1 v/v) to release compounds of different polarities. The elution profiles were analyzed using a Waters Xevo G2-XS quadrupole time-of-flight mass spectrometer with an I-Class ultra-performance liquid chromatography system.
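For orientation, the per-compound concentrations implied by this protocol can be estimated with simple arithmetic; the sketch below assumes the pooled volume is just the sum of the 708 one-microliter aliquots and that the 100-fold dilution is exact.

```python
# Back-of-envelope check of the compound concentrations implied by the
# protocol above (assumes the pool volume equals the sum of the 708
# one-microliter aliquots and the 100-fold dilution is exact).
n_compounds = 708
stock_mM = 10.0
pool_uM = stock_mM * 1000 / n_compounds  # each compound in the pooled mixture
assay_nM = pool_uM * 1000 / 100          # after the 100-fold buffer dilution
print(f"~{pool_uM:.1f} uM per compound in the pool, ~{assay_nM:.0f} nM in the assay")
# ~14.1 uM per compound in the pool, ~141 nM in the assay
```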
Docking Computation of TrkB and Its Agonists
In this study, we used Fred [37], a computational docking tool from OpenEye, to perform all docking calculations. We first employed the flipper utility from the OpenEye OMEGA module [60] to systematically identify and assign configurations to stereocenters with unspecified stereochemistry within our compounds. This process was initiated by providing an input file containing SMILES representations of compounds with indeterminate stereochemistry. The flipper tool was then used to generate isomers for these compounds, each of which was labeled with a distinct identifier to ensure its uniqueness. This process allows for the enumeration of five distinct isomers per stereocenter. Then, we determined the predominant tautomeric and protonation states for each isomer at physiological pH (7.4), a crucial step for accurately representing the chemical structures under biological conditions. We utilized the tautomers application from the OpenEye QUACPAC module [61] for this calculation, where the input is the SMILES file containing all stereoisomeric forms from the previous step and the output is the most stable tautomeric and protonation state for each isomer.
To prepare the initial ligand poses for the docking simulations, we utilized the oeomega tool [60] from the OpenEye software suite 2024.1.0. This process transforms the two-dimensional SMILES representations of the ligands into three-dimensional structures ready for docking; the output is saved in SDF format. For this ligand pose generation, the creation of conformers is permitted without strict stereochemical constraints, which is relevant when the stereochemistry is undefined or variable. To ensure the accuracy of our docking simulations, it is crucial to begin with a fully prepared receptor structure. This preparation entails adding any missing protons or residues that may be absent from the initial protein data file. Completing the receptor's structure provides a more reliable foundation for subsequent simulations and analyses. For this purpose, the profix tool from the JACKAL software [62] was employed. Specifically, we selected the "-fix 0" option, designed to address missing atoms, including protons, within the receptor without attempting to reconstruct missing residues. This focused approach ensures that our receptor models are proton-complete, a crucial factor for accurate hydrogen bond network predictions.
To determine the potential binding sites of TrkB, we leveraged the capabilities of the OEDocking Graphical User Interface (GUI) [37] developed by OpenEye. OEDocking, renowned for its intuitive design and advanced computational algorithms, allows us to efficiently identify sites on the protein surface that are likely to accommodate ligand molecules. To begin, we loaded the three-dimensional structure of our target protein into the OEDocking GUI. The software then performs a comprehensive analysis of the protein's surface, employing a combination of geometric and chemical heuristics to detect cavities that could serve as feasible binding sites based on molecular cavity detection algorithms. Once the potential sites are identified, the OEDocking GUI provides an option to save them for further analysis. We utilized this feature to preserve the identified sites in a designated file format, which facilitates subsequent steps in our workflow, such as docking simulations. Lastly, we employed Fred to dock the ligands into the chosen binding sites, using the initial poses generated in the preceding steps. The most favorable binding site regions were then ranked according to the Fred Docking scores.
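The overall preparation and docking workflow described in this section can be summarized as a sequence of command-line calls. The sketch below is a hedged outline: the tool names follow the text and "-fix 0" is quoted from it, but the remaining flags and all file names are assumptions based on common conventions and should be checked against the documentation of the versions actually used.

```python
import subprocess

# Hedged sketch of the preparation and docking pipeline described above.
# "-fix 0" is quoted from the text; the other flags and all file names are
# assumptions following common -in/-out conventions, not verified options.
steps = [
    ["flipper", "-in", "ligands.smi", "-out", "stereoisomers.smi"],
    ["tautomers", "-in", "stereoisomers.smi", "-out", "tautomers.smi"],
    ["oeomega", "classic", "-in", "tautomers.smi", "-out", "ligands_3d.sdf"],
    ["profix", "-fix", "0", "trkb_d5.pdb"],
    ["fred", "-receptor", "trkb_d5.oedu", "-dbase", "ligands_3d.sdf"],
]
for cmd in steps:
    subprocess.run(cmd, check=True)  # stop the pipeline on any failure
```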
Structures of Docked Compounds
In the context of the docking simulations, our investigation targets the binding interactions of eight specific compounds derived from the Gotu kola plant with the TrkB receptor [32]. Additionally, we examine the binding mechanisms of the top 20 compounds identified through experimental methods from the NIH repository. The SMILES representations of these compounds are listed in Table S1.
Biological Activity Evaluation of Top Compounds
The binding interactions of the TrkB receptor with the eight specific compounds derived from the Gotu kola plant in our previous studies [32] showed strong agreement between the binding scores and the experimental activities. Therefore, using the binding scores as guidelines, we selected two additional compounds with distinct binding affinities from the NIH clinical collection library as top candidates for experimental verification, namely nadolol and valproic acid. We performed an initial evaluation of these two compounds for their effects on dendritic arborization using primary hippocampal neurons isolated from embryonic C57BL6 mice, following the previously described method [62,63]. Briefly, hippocampal neurons were isolated from embryos on gestational day 18 and plated at a density of 130,000 cells in a 60 mm dish containing three glass coverslips. After 3 h, the coverslips were flipped into 60 mm dishes containing mouse neural stem cell-derived glial cells. Co-culture continued for 14 days, at which point the cells were treated with BDNF at 25 ng/mL, nadolol at 10 nM, or valproic acid at 500 µM. One week later, the cells were fixed in 4% paraformaldehyde and stained with anti-MAP2B. Immunostained neurons were imaged with a Zeiss ApoTome2 microscope, and blinded Sholl analyses were performed to assess dendritic complexity using the Fiji platform. The Sholl analysis counts the intersections of neurites with concentric circles around the cell body, drawn at 10 µm intervals. Here, BDNF treatment was used as a positive control. Additionally, the experiments were performed with co-treatment with the selective TrkB inhibitor ANA-12, which blocks the neurotrophic actions of BDNF without compromising neuron survival. ANA-12 co-treatment allowed us to confirm the involvement of TrkB activation in neurite development.
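The intersection counting at the core of the Sholl analysis can be illustrated with a simplified sketch; the segment data below are invented, and the straight-chord crossing test is a simplification of what the Fiji Sholl plugin does.

```python
import numpy as np

# Illustrative Sholl measurement: count how many concentric circles, drawn
# every 10 um around the soma, each traced neurite segment crosses.
# Segments are (x1, y1, x2, y2) in um relative to the soma; the data are
# made up, and the straight-chord test is a simplification.
def sholl_counts(segments, max_radius=200, step=10):
    radii = np.arange(step, max_radius + step, step)
    counts = np.zeros(len(radii), dtype=int)
    for x1, y1, x2, y2 in segments:
        r1, r2 = np.hypot(x1, y1), np.hypot(x2, y2)
        lo, hi = min(r1, r2), max(r1, r2)
        counts += (radii > lo) & (radii <= hi)  # circles this segment crosses
    return radii, counts

radii, counts = sholl_counts([(0, 0, 35, 5), (35, 5, 80, 40), (35, 5, 50, -30)])
print(dict(zip(radii.tolist(), counts.tolist())))
```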
Conclusions
In conclusion, we reported a combined computational and experimental approach to study TrkB binders for potential neurological disorder treatment. The Fred Docking scores (in kcal/mol) of experimentally identified TrkB binders from the docking simulations displayed strong agreement with the top candidates. The experimentally tested top candidates, such as 4-O-caffeoylquinic acid (4-O-QCA) and nadolol, effectively increased the dendritic arborization of primary hippocampal neurons isolated from AD-mouse models. From the docking simulations, these top candidates showed high affinity for the most favorable binding site (BP1), with the involvement of the other four binding sites. This observation is also true for the best-characterized, thoroughly in vivo-tested small-molecule TrkB agonist, 7,8-dihydroxyflavone. These studies not only confirmed the effectiveness of our magnetic drug screening platform in identifying biologically active compounds from mixtures but also showed the feasibility of using docking simulations to predict the biological activities of compounds. In particular, our detailed interaction analysis demonstrated that specific hydrogen bonds, especially those involving key residues such as Asp298 and His343, are crucial for strong binding affinities. This combined experimental and computational approach is poised to explore the BDNF/TrkB pathway as a viable target for drug development, and this study will lead to a list of TrkB binders with potential biological activities. Our future studies will focus on in vivo efficacy studies in animal models. Depending on the structures of the compounds, further structural alteration or formulation of lead compounds may be needed for BBB penetration and enhanced brain bioavailability before advancing to the next phase of drug development. Furthermore, once compounds stimulating the BDNF/TrkB pathway are validated, mouse models of other neurodegenerative diseases can be used to further evaluate these compounds. Finally, the developed drug-screening assay will be transferable to other transmembrane targets related to neurodegenerative diseases. Our combined approach will not only greatly benefit drug discovery processes using TrkB transmembrane proteins as targets but also allow for the evaluation and validation of previously reported TrkB binders.
Figure 1. A schematic illustration of (a) the binder identification process using TrkB as the screening target, and (b) the docking computational studies and resulting information.
Figure 4. Binding pocket representations for (a) 7,8-dihydroxyflavone; (b) nadolol; (c) valproic acid; and (d) dicaffeoylquinic acid. The protein surface is shown in transparent gray, while the molecules are displayed in space-filling models with carbon in cyan, oxygen in red, nitrogen in blue, and hydrogen in white.
Table 2. Fred Docking scores (in kcal/mol) for all five different binding sites for compounds from the NIH library. BP stands for binding site. The notation "-" denotes that the software failed to generate a pose because the docking score was out of the acceptable range. Here, the known TrkB agonist, 7,8-dihydroxyflavone, is included as a comparison.
| 10,757.6 | 2024-08-23T00:00:00.000 | ["Medicine", "Computer Science", "Chemistry"] |
The Mechanism of Action and Clinical Efficacy of Low-Dose Long-Term Macrolide Therapy in Chronic Rhinosinusitis
Various chronic inflammatory airway diseases can be treated with low-dose, long-term (LDLT) macrolide therapy. LDLT macrolides can be one of the therapeutic options for chronic rhinosinusitis (CRS) due to their immunomodulatory and anti-inflammatory actions. Various immunomodulatory mechanisms of LDLT macrolide treatment have been reported, in addition to their antimicrobial properties. Several mechanisms have already been identified in CRS, including the reduction of cytokines such as interleukin (IL)-8, IL-6, IL-1β, tumor necrosis factor-α, and transforming growth factor-β; the inhibition of neutrophil recruitment; decreased mucus secretion; and increased mucociliary transport. Although some evidence of effectiveness for CRS has been published, the efficacy of this therapy has been inconsistent across clinical studies. LDLT macrolides are generally believed to act on the non-type 2 inflammatory endotype of CRS; however, their effectiveness in CRS is still controversial. Here, we review the immunological mechanisms related to CRS in LDLT macrolide therapy and the treatment effects according to the clinical situation of CRS.
Introduction
Low-dose long-term (LDLT) macrolide therapy is a type of treatment in which the dosage is lower than that used to treat an acute bacterial infection and the duration is longer than that normally used. The regimen was first reported for the treatment of patients with diffuse panbronchiolitis with LDLT erythromycin in Japan in 1984 [1]. Since then, it has been widely used for chronic airway diseases such as chronic obstructive pulmonary disease (COPD), asthma, diffuse panbronchiolitis, bronchiectasis, cystic fibrosis, and idiopathic pulmonary fibrosis [2]. LDLT macrolide therapy has been found to enhance lung function and reduce the frequency and severity of exacerbations in people with these conditions [3]. It is thought that the immunomodulatory and anti-inflammatory potency of macrolides, through various mechanisms, can effectively control these diseases, as can their antimicrobial properties [4]. In addition, macrolide antibiotics have been reported to have therapeutic potential through immunomodulation in a variety of different diseases, such as rheumatoid arthritis, coronary artery disease, non-small cell lung cancer, periodontitis, and blepharitis [5,6].
Chronic rhinosinusitis (CRS) is one of the chronic inflammatory diseases of the upper respiratory tract. CRS has a similar pathophysiology to the above-mentioned lower airway inflammatory diseases, particularly asthma [7,8]. In an early study in 1970, macrolide therapy was able to reduce corticosteroid doses in patients with asthma [9]. A systematic review of the effects of long-term macrolide treatment on asthma found that the treatment reduced exacerbations and symptoms but did not significantly increase lung function [10]. In a multicenter randomized controlled trial (RCT) conducted in patients [...], macrolide therapy has also been found to affect mucociliary clearance and epithelial barrier function [26]. These effects may play a role in the pathophysiology of CRS (Figure 1).
Reducing Proinflammatory Cytokines
The major immune regulatory effect of macrolides is to reduce the production of proinflammatory cytokines in various inflammatory cells. Macrolides decrease the production of IL-6 and tumor necrosis factor (TNF)-α [27,28]. Azithromycin inhibits the inflammasome and reduces IL-1β secretion in monocytes and macrophages [29,30]. These inhibitory effects are regulated by the alteration of cellular signaling pathways, such as mitogen-activated protein kinase (MAPK), extracellular signal-regulated kinase 1/2 (ERK1/2), and nuclear factor (NF)-κB [4].
In CRSsNP patients, the nasal mucosa was cultured with clarithromycin, and the secretion of IL-5, IL-8, and granulocyte-macrophage colony-stimulating factor (GM-CSF) was decreased [31]. In addition, transforming growth factor (TGF)-β and NF-κB were decreased when nasal mucosal tissues were treated with clarithromycin [32]. However, the results were inconsistent in human samples treated with 250 mg of clarithromycin for three months.
After treatment with clarithromycin for eight weeks in CRSwNP patients, levels of IL-6, IL-8, and IL-1β in the nasal secretions were reduced [33]. Another study published by the same author showed decreased eosinophilic inflammatory markers, including regulated on activation, normal T cell expressed and secreted (RANTES), and eosinophilic cationic protein (ECP), after eight weeks of clarithromycin treatment [34]. In an in vitro study, erythromycin suppressed the production of eotaxin and RANTES in a lung fibroblast cell line (human fetal lung fibroblasts 1) [35]. Postoperative clarithromycin treatment significantly reduced ECP levels in nasal secretion at 12 and 24 weeks, but not in the control group [36]. However, conflicting results were found, with no difference in ECP level of nasal secretions between LDLT erythromycin and the placebo group [37].
Inhibition of Neutrophil Recruitment
IL-8, also known as C-X-C motif chemokine ligand 8 (CXCL8), is a key chemokine for neutrophil recruitment. Erythromycin can inhibit the production of IL-8 by neutrophils and eosinophils [38,39]. Previously, Suzuki et al. reported that the administration of roxithromycin to patients with CRS reduced neutrophil counts and IL-8 levels in nasal secretions [40]. This effect was also confirmed by an RCT with 64 CRSsNP patients [41].
Reduced production of IL-8 and IL-1β can block the extravascular transmigration of neutrophils through the inhibition of transcription factors such as NF-κB and activator protein-1 (AP-1) [42]. In healthy subjects and COPD patients, short-term administration of azithromycin reduced IL-8 and soluble vascular cell adhesion molecule (VCAM)-1 and modulated neutrophil function [43,44]. Furthermore, azithromycin suppressed the proliferation and cytokine production of CD4+ T cells, especially IL-17 secretion, via the mammalian target of rapamycin (mTOR) pathway [45].
Mucus Secretion and Mucociliary Clearance
Macrolides can reduce the expression of MUC5AC in airway epithelial cells [26]. Clarithromycin and erythromycin effectively inhibited the expression of MUC5AC in human nasal epithelial cells from CRSwNP patients [46]. Azithromycin also significantly reduced MUC5AC expression in human nasal epithelial cells [47]. In rats stimulated with intratracheal lipopolysaccharide (LPS), roxithromycin treatment significantly reduced Muc5ac expression and NF-κB nuclear translocation in the bronchial epithelium [48]. Azithromycin and clarithromycin showed the same effect in ovalbumin (OVA)-sensitized and LPS-instilled rats [46,47]. In human bronchial epithelial cells, clarithromycin inhibited the expression of MUC5AC and IL-13-induced goblet cell hyperplasia [49,50]. Similar to other inhibitory mechanisms, clarithromycin had an impact on NF-κB inactivation. In a CRS mouse model, the level of IL-10 was increased, and Muc5ac expression was inhibited by erythromycin treatment [51].
In patients with acute purulent rhinitis, clarithromycin treatment for two weeks reduced secretion volume and increased mucociliary transportability [52]. Clarithromycin had the same effect in CRS patients treated for four weeks, significantly reducing mucus viscosity and nasal clearance time [53,54]. In a three-month RCT in patients with CRS, the saccharin transit time was significantly improved in the roxithromycin group compared to the placebo group [41]. The improvement in mucociliary clearance, as measured by saccharin transit time, persisted after 12 months of follow-up [55].
Epithelial Barrier Function
Asgrimsson et al. reported that azithromycin, but not erythromycin, induced expression of tight junction proteins, including claudin-1, claudin-4, occludin, and junctional adhesion molecule-A, and increased epithelial integrity in human bronchial epithelial cells [56]. During Pseudomonas aeruginosa infection in vitro, pretreatment with azithromycin prevented epithelial barrier dysfunction and enhanced recovery [57]. The potential protective effects of macrolides on human respiratory epithelium were investigated in vitro [58,59]. Macrolides, such as roxithromycin, clarithromycin, and azithromycin, reduced the production of reactive oxygen species generated by activated neutrophils [58]. These agents were able to attenuate the injurious effects of bioactive phospholipids and neutrophil-induced epithelial damage [59]. Lastly, roxithromycin treatment increased ciliary movement and mucociliary transport velocity in the rabbit trachea [60].
Inhibition of Biofilm Formation
Biofilms are structured communities of microorganisms encased in a protective matrix that confers resistance to host immune responses and antimicrobial agents; they are also important in CRS pathophysiology [61]. Bacterial biofilms induced by Staphylococcus aureus or Pseudomonas aeruginosa contribute to the severity and refractoriness of CRS [62]. Korkmaz et al. reported the biofilm eradication effect of eight weeks of clarithromycin treatment in an RCT in CRSwNP patients [63]. Compared to the mometasone furoate nasal spray group (1 of 11), more patients (6 of 12) in the clarithromycin treatment group showed biofilm disappearance. Previous in vitro studies found that macrolides can inhibit the production of bacterial proteins and reduce biofilm formation by Pseudomonas aeruginosa [64,65]. Recently, antibiotic (ciprofloxacin and azithromycin)-eluting sinus stents have been experimentally demonstrated to inhibit Pseudomonas aeruginosa-induced biofilms [66]. The authors demonstrated that the prolonged release of ciprofloxacin and azithromycin over 28 days reduced biofilm formation and eliminated existing biofilms.
Effects on Tissue Fibrosis
Several in vitro studies have shown that macrolides inhibit fibroblasts in nasal polyps. When nasal polyp-derived fibroblasts were treated with roxithromycin and then stimulated with lipopolysaccharide, fibroblast proliferation was inhibited [67]. This suppression phenomenon was actually observed in the fibroblasts of CRSwNP patients treated with roxithromycin for one month [68]. In addition, roxithromycin inhibited the production of nitric oxide [69], IL-6 and RANTES [69], matrix metalloproteinase (MMP)-2, and MMP-9 [70] in TNF-α-stimulated nasal polyp fibroblasts. Another in vitro study with nasal polyp fibroblasts demonstrated that erythromycin and roxithromycin treatment reduced TGF-β-induced α-smooth muscle actin (a myofibroblast marker), collagen production, nicotinamide adenine dinucleotide phosphate oxidase 4, and reactive oxygen species production [71]. Collectively, these findings indicate an inhibitory effect of macrolide treatment on fibroblast-induced nasal polyp formation and may explain the mechanism of polyp size reduction in patients with CRSwNP.
Comparison of Clinical Efficacy in CRS
Although there are few direct head-to-head studies for each clinical situation, the RCTs of macrolides are summarized here. Most studies compared macrolides with placebo (Table 1), and some compared them with a conventional CRS treatment, intranasal corticosteroid spray (Table 2). Herein, the results of the clinical trials are comprehensively reviewed according to the presence or absence of nasal polyps, the type of inflammation, the total IgE level, and the presence or absence of allergy.
CRSwNP vs. CRSsNP
After long-term clarithromycin treatment (8 to 12 weeks) in 20 CRSwNP patients, 40% of patients had a reduction in nasal polyp size and a significant decrease in IL-8 levels in lavage fluid, while 60% remained unchanged [81]. Preoperative treatment with 500 mg of clarithromycin for eight weeks reduced polyp recurrence at 6 and 12 months postoperatively [77]. Computed tomography (CT) findings and the Sinonasal Outcome Test (SNOT)-20 improved in CRSwNP patients treated for eight weeks with mometasone furoate monotherapy or with LDLT clarithromycin combination therapy, but there was no statistically significant difference between the two groups [63]. In 52 CRSwNP patients treated with LDLT clarithromycin for 12 weeks, there were significant reductions in the SNOT-20 and the Lund-Kennedy endoscopy score [82]. In addition, the 54% (28 of 52) of patients who improved on the SNOT-20 had lower total IgE levels than the others.
In CRSsNP patients, treatment with 150 mg roxithromycin for three months showed significant improvement in sinonasal symptoms (SNOT-20), nasal endoscopy findings, and mucociliary transit time [41]. When comparing the effects of mometasone furoate and LDLT clarithromycin at three months, there was no significant difference in visual analog scales of symptoms or endoscopic findings between the two groups [80]. Eight weeks of erythromycin treatment also showed clinical improvement in CRSsNP patients [83].
However, in a mixed cohort of CRSwNP (52.0%) and CRSsNP patients that excluded severe polyposis, outcomes with LDLT azithromycin did not differ between the treatment and placebo groups [78]. Treatment with LDLT azithromycin for three months after functional endoscopic sinus surgery (FESS) in CRS patients improved the SNOT-22 compared to conventional treatment [76], whereas erythromycin treatment after FESS was ineffective in a cohort of CRSwNP (55.2%) and CRSsNP patients [37]. There was no additional effect of three months of clarithromycin combined with budesonide aqueous nasal spray in patients with CRS (56.8% with nasal polyps) [84]. Haruna et al. retrospectively analyzed patients who received LDLT macrolide treatment for 8-20 weeks; the clinical effect was good in CRSsNP patients, whereas in CRSwNP patients the effect increased after polypectomy [85]. In a biomarker study for predicting the response to macrolide treatment in postoperative CRS patients, nasal tissue IgG4 level and overall symptom score were identified as predictive factors for refractoriness [86]. However, there was no difference in the refractory rate between the LDLT clarithromycin treatment group (18 of 74) and the fluticasone propionate spray group (17 of 75).
Type 2 vs. Non-Type 2
Treatment with macrolides (clarithromycin or roxithromycin) for 2-3 months improved clinical symptoms in CRS patients, and the degree of clinical improvement was inversely correlated with eosinophil counts in the peripheral blood, the nasal smear, and the sinus mucosa [87]. However, the number of neutrophils, mast cells, and mononuclear cells did not correlate with symptomatic improvement, and the number of interferon-γ and IL-4-positive cells also did not correlate.
Zeng et al. compared the efficacy of fluticasone propionate nasal spray versus LDLT clarithromycin for postoperative treatment in CRS of different phenotypes. The study found that both medications were effective in reducing symptoms, but there were no significant differences between eosinophilic (>10% eosinophils/total infiltrating cells) and non-eosinophilic CRSwNP groups [79]. Asians, who are generally known to have more non-type 2 CRS, showed better treatment effects of LDLT macrolides than non-Asians in a meta-analysis [88].
A recent study showed that long-term treatment with clarithromycin was effective in CRSwNP patients without tissue eosinophilia (>10 eosinophils/high-power field) [74]. When oral steroids alone were compared with oral steroids plus clarithromycin for 12 weeks in CRSwNP patients who underwent FESS, symptom scores and endoscopy scores improved significantly in the add-on treatment group [74]. In a case-control study of LDLT clarithromycin after surgery, responders (19 of 28, 67.9%) had lower blood eosinophil counts (0.16 ± 0.11 versus 0.39 ± 0.36 × 10^9/L) and less tissue eosinophilia (>10 eosinophils/high-power field: 17.6% versus 62.5%) compared to non-responders [89]. According to these studies, CRS patients with type 2 inflammation have a lower response to LDLT macrolide therapy.
Aspirin or nonsteroidal anti-inflammatory drug (NSAID)-exacerbated respiratory disease (AERD/NERD) is characterized by asthma, CRSwNP, and intolerance to aspirin or NSAIDs [90]. In AERD patients with eosinophilic nasal polyps (>40% eosinophils), LDLT azithromycin treatment significantly reduced symptoms (visual analog scale and SNOT-22) and the need for surgery (74% versus 14%) compared to placebo [75]. In addition, another study showed that azithromycin significantly improved disease clearance in AERD patients compared to placebo [73]. In patients with refractory CRS who had failed surgical and medical treatment, azithromycin treatment not only alleviated symptoms but also significantly reduced the amount of Staphylococcus aureus [72,73]. These recent studies have demonstrated that LDLT macrolide treatment can also be effective in CRS patients with eosinophilic inflammation.
Normal vs. High Total IgE
Previous studies reported that only CRS patients with normal serum IgE levels (<200 µg/L [41] or ≤250 U/mL [87]) benefited from LDLT macrolide treatment. However, the relationship between total serum IgE levels and the effects of LDLT macrolide treatment is still controversial. In studies showing that LDLT macrolide treatment was effective, the total serum IgE in the patient group was 188.63 ± 57.25 IU/mL [77] and 165.0 ± 195.2 µg/L [37]. Maniakas et al. reported that total serum IgE was higher in the azithromycin success group than in the azithromycin failure group [91]. In addition, atopy status did not affect the clinical effect of clarithromycin in CRSsNP patients [80].
Recently, of 100 CRS patients who were administered LDLT roxithromycin, 29 were determined to be responders [92]. Among the clinical parameters examined, including nasal secretion and serum IgE, IL-5, blood eosinophil/neutrophil counts, allergy, asthma, and nasal polyps, total IgE in nasal secretions was the only predictor of response in multivariate models (odds ratio 4.76, 95% confidence interval 1.29-17.58). The authors suggest that local total IgE is a reliable biomarker instead of serum total IgE.
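To make the reported statistic concrete, the sketch below shows how an odds ratio and its 95% confidence interval are derived from a 2x2 table. The counts are invented so that roughly 29 of 100 patients respond; the published odds ratio of 4.76 came from a multivariate model, which this univariate illustration does not reproduce.

```python
import math

# Odds ratio and 95% CI from a 2x2 table (hypothetical counts; the published
# OR of 4.76 came from a multivariate model, not reproduced here).
a, b = 20, 22  # high local IgE: responders / non-responders (hypothetical)
c, d = 9, 49   # low local IgE: responders / non-responders (hypothetical)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
# OR = 4.95, 95% CI 1.95-12.59
```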
Allergic vs. Non-Allergic Patients
Yamada et al. evaluated the effectiveness of LDLT clarithromycin in patients with non-allergic CRSwNP and found a significant reduction in nasal polyp size and IL-8 secretion in 40% (8 of 20) of patients [81]. CRS patients with and without allergies may respond differently to treatment with LDLT macrolides. In CRSwNP patients whose allergy status was confirmed by skin prick test, ECP levels in nasal secretions decreased in allergic patients, and IL-6 levels decreased only in allergic patients, after eight weeks of clarithromycin treatment [34]. However, other studies found that allergic status had no impact on the clinical efficacy of LDLT macrolides [85,93].
Other Considerations
There are several factors to consider when prescribing LDLT macrolides, including the type of medication, dose, duration, and timing of treatment. Patient characteristics such as age, underlying disease, and comorbidities should also be considered, and clinicians should be aware of the adverse effects of long-term treatment. Although there has been no well-designed study directly comparing the treatment effects of different macrolides in CRS, one recent study compared two drugs: after four weeks of treatment, azithromycin was more effective than clarithromycin for the complete resolution of symptoms and for CT scores [94]. In a systematic review and meta-analysis of clarithromycin in CRS compared with intranasal corticosteroid spray, there was no significant difference in effectiveness [95]. However, combined treatment with clarithromycin and intranasal corticosteroid spray markedly improved clinical symptoms, endoscopic findings, and Lund-Mackay CT scores.
Duration of Treatment
LDLT macrolide treatment is known to be more effective with longer treatment periods. After treatment with 8 to 12 weeks of clarithromycin in CRS patients, symptoms and endoscopic findings improved in 71.1% of participants, and the clinical effect was correlated with the duration of treatment [96]. In a meta-analysis, the effects were more favorable in patients taking LDLT macrolides for 24 weeks than for 8 or 12 weeks [97]. Treatment with clarithromycin for 24 weeks after FESS resulted in better CT scores than treatment for 12 weeks [36]. In patients with CRSsNP, clarithromycin showed clinical effects after four weeks and reached its maximum effect at 12 weeks [80]. Nakamura et al. compared the clinical efficacy of LDLT clarithromycin in postoperative CRS patients: in the 6-month treatment group, the rate of patients who became asymptomatic by 12 months after surgery was higher than in the 3-month treatment group [98]. Taken together, the longer the treatment period, the better the clinical outcome of LDLT macrolides.
Pediatric CRS Patients
Some evidence for LDLT macrolide treatment has also been reported in pediatric patients with CRS [99]. A retrospective review of six patients (mean age: 7 ± 3.4 years) treated with either roxithromycin or clarithromycin found that macrolide add-on therapy improved nasal symptoms and reduced thick mucus secretions [100]. After administration of clarithromycin at a half dose (5-8 mg/kg) for eight weeks to 54 children with CRS, 63.0% were cured and 31.5% were improved [101].
Therapeutic effects of LDLT macrolides, such as improving lung function and reducing exacerbations, have been demonstrated in chronic inflammatory diseases of the lower respiratory tract, such as severe asthma and cystic fibrosis in children [99]. Unfortunately, no randomized, placebo-controlled clinical trials of LDLT treatment in children have been conducted.
Adverse Effects of LDLT Macrolides
During LDLT macrolide treatment for 8 to 12 weeks, there is no strong evidence of the development of drug-resistant bacterial strains [16]. However, LDLT azithromycin use over 12 to 24 months in pediatric patients with bronchiectasis resulted in an increased presence of macrolide-resistant organisms [102]. Macrolides also carry a risk of prolongation of the QT interval and consequent torsades de pointes arrhythmia [103,104]. On the other hand, the incidence of torsades de pointes with erythromycin is very rare (four cases out of 34,000 patients treated) [105]. In a national cohort that included 66,331 CRS patients, the risk of mortality and cardiovascular events was not significantly increased in patients who had been prescribed macrolides, particularly clarithromycin, compared to penicillin [106]. Clarithromycin treatment was thought to increase the risk of stroke and myocardial infarction, but a nationwide cohort study showed no association with overall mortality or long-term cardiovascular death [107,108]. Nonetheless, caution is required if the patient is at risk of a cardiac event prior to the initiation of LDLT macrolide treatment [16].
Conclusions
Because CRS is a highly heterogeneous disease entity, the clinical efficacy of LDLT macrolide therapy is variable. Several RCTs have demonstrated that LDLT macrolides can improve symptoms and quality of life in patients with CRS, particularly those with CRSsNP, normal total IgE levels, and corticosteroid resistance [109]. The immunomodulatory and anti-inflammatory properties of macrolides contribute not only to the reduction of neutrophilic inflammation but also to decreased eosinophilic inflammation, improved mucus clearance, and mucosal stabilization. In addition, macrolides can increase the effectiveness of treatment by removing bacterial biofilms and by preventing or reducing polyp formation through the inhibition of tissue fibrosis. These various mechanisms may have an impact on CRS treatment across numerous clinical conditions. The efficacy of LDLT macrolide therapy may be influenced by the endotype and phenotype of CRS.
Further research is needed to fully understand the mechanisms underlying the therapeutic effect of macrolides in CRS and to identify the most appropriate patients for this treatment approach, including non-antibiotic macrolides [2,110]. Currently, non-antibiotic macrolides such as EM900, an erythromycin derivative, are being developed and researched and may be spotlighted as an important treatment modality for CRS in the future [111]. Nonetheless, current evidence suggests that low-dose, long-term macrolide therapy is a promising option for the management of CRS. LDLT macrolide treatment may be the main treatment for certain subtypes of CRS and may be used as an additional treatment with corticosteroids for other types of CRS. | 5,233.2 | 2023-05-30T00:00:00.000 | [
"Medicine",
"Biology"
] |
Catalan words avoiding pairs of length three patterns
Catalan words are particular growth-restricted words counted by the eponymous integer sequence. In this article we consider Catalan words avoiding a pair of patterns of length 3, pursuing the recent initiating work of the first and last authors and of S. Kirgizov where (among other things) the enumeration of Catalan words avoiding a single pattern of length 3 is completed. More precisely, we explore systematically the structural properties of the sets of words under consideration and give enumerating results by means of recursive decomposition, constructive bijections or bivariate generating functions with respect to the length and descent number. Some of the obtained enumerating sequences are known, and thus the corresponding results establish new combinatorial interpretations for them.
Introduction and notation
Catalan words are particular growth-restricted words and they represent still another combinatorial class counted by the Catalan numbers, see for instance [12, exercise 6.19.u, p. 222]. This paper contributes to a recent line of research on classical pattern avoidance on words subject to some growth restrictions (for instance, ascent sequences [2,6], inversion sequences [5,10,14], restricted growth functions [4,9]) by investigating connections between sequences on the On-line Encyclopedia of Integer Sequences [11] and Catalan words avoiding two patterns of length 3.
Throughout this paper we consider words over the set of non-negative integers, and we denote such words either by sequences (for instance w_1 w_2 ... w_n) or by italicized boldface letters (for instance w and u). The word w = w_1 w_2 ... w_n is called a Catalan word if w_1 = 0 and 0 ≤ w_i ≤ w_{i−1} + 1 for i = 2, 3, ..., n.
Catalan words are in bijection with perhaps the most celebrated combinatorial class having the same enumerating sequence: Dyck paths. Indeed, collecting, in a Dyck path of length 2n, the ordinates of the starting points of the up steps, we obtain a Catalan word of length n, and this construction is a bijection. See Figure 1, where this bijection is depicted on an example. We denote by C_n the set of length-n Catalan words, and c_n = |C_n| is the nth Catalan number \frac{1}{n+1}\binom{2n}{n}. A pattern is a word with the property that if i occurs in it, then so does j, for any j with 0 ≤ j < i. A pattern π = π_1 π_2 ... π_k is said to be contained in the word w = w_1 w_2 ... w_n, k ≤ n, if there is a sub-word w_{i_1} w_{i_2} ... w_{i_k} of w order-isomorphic to π_1 π_2 ... π_k. If w does not contain π, we say that w avoids π; see for instance Kitaev's seminal book [7] on this topic.
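As a concrete illustration (a minimal sketch, not part of the original article), the growth restriction above can be used directly to enumerate Catalan words and check the Catalan-number count:

```python
# Enumerate Catalan words of length n from the growth restriction
# w_1 = 0 and 0 <= w_i <= w_{i-1} + 1.
from math import comb

def catalan_words(n):
    words = [(0,)] if n >= 1 else [()]
    for _ in range(n - 1):
        words = [w + (v,) for w in words for v in range(w[-1] + 2)]
    return words

for n in range(1, 9):
    assert len(catalan_words(n)) == comb(2 * n, n) // (n + 1)
```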
For a pattern π, we denote by C_n(π) the set of length-n Catalan words avoiding π; c_n(π) = |C_n(π)| is the cardinality of C_n(π), and C(π) = ∪_{n≥0} C_n(π). For example, C_n(101) is the set of length-n Catalan words avoiding 101, that is, the set of words w in C_n such that there are no i, j and k, 1 ≤ i < j < k ≤ n, with w_i = w_k > w_j. So, C_4(101) = {0000, 0001, 0010, 0011, 0012, 0100, 0110, 0111, 0112, 0120, 0121, 0122, 0123}. Likewise, if π is the set of patterns {α, β, ...}, then C_n(π) and C_n(α, β, ...) both denote the set of length-n Catalan words avoiding each pattern in π; and c_n(π) = c_n(α, β, ...) and C(π) = C(α, β, ...) have similar meanings as above. A descent in a word w = w_1 w_2 ... w_n is a position i, 1 ≤ i ≤ n − 1, with w_i > w_{i+1}. The (ordinary) generating function of a set of pattern-avoiding Catalan words C(π) is the formal power series C_π(x) = Σ_{n≥0} c_n(π) x^n = Σ_{w∈C(π)} x^{|w|}, where |w| is the length of the word w. In our generating function approach for counting classes of pattern-avoiding Catalan words we consider the descent number as an additional statistic, obtaining 'for free' the bivariate generating function C_π(x, y) = Σ_{w∈C(π)} x^{|w|} y^{des(w)}, where des(w) is the number of descents of w. With these notations, the coefficient of x^n y^k in C_π(x, y) is the number of Catalan words of length n avoiding π and having k descents; for a set S of Catalan words, S(x) and S(x, y) have similar meanings. For a word w = w_1 w_2 ... w_n and an integer a, we denote by (w + a) the word obtained from w by increasing each of its entries by a, that is, the word (w_1 + a)(w_2 + a)···(w_n + a). In our constructions we often make use of two particular families of Catalan words: those avoiding 10 (i.e., with no descents), which we call weakly increasing (or w.i. for short) Catalan words; and those avoiding 00 (and thus necessarily avoiding 10), which we call strictly increasing (or s.i. for short) Catalan words. It is easy to see that for each length n ≥ 1 there are 2^{n−1} w.i. Catalan words and one s.i. Catalan word.
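The containment test itself is easy to make executable; the sketch below (reusing catalan_words from the previous snippet) checks sub-words for order-isomorphism with the pattern and recovers the thirteen words of C_4(101) listed above:

```python
from itertools import combinations

def contains(w, p):
    # w contains p if some sub-word of w is order-isomorphic to p
    for idx in combinations(range(len(w)), len(p)):
        sub = [w[i] for i in idx]
        if all((sub[a] < sub[b]) == (p[a] < p[b]) and
               (sub[a] == sub[b]) == (p[a] == p[b])
               for a in range(len(p)) for b in range(a + 1, len(p))):
            return True
    return False

C4_101 = [w for w in catalan_words(4) if not contains(w, (1, 0, 1))]
assert len(C4_101) == 13
```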
The remainder of this paper is structured as follows. In the next section we characterize pattern-avoiding ascent sequences which are Catalan words, establishing ties with some similar enumerative results for ascent sequences in [2]. In Section 3 we consider classes of Catalan words avoiding both a length-two and a length-three pattern. In the next sections we discuss Catalan words avoiding two patterns of length three, in increasing order of their complexity: obvious cases (Section 4), cases counted via recurrences (Section 5) and cases counted via generating functions (Section 6); these results are summarized in Table 2. We conclude with some remarks and further research directions.
Catalan words vs. ascent sequences
An ascent in a word w = w_1 w_2 ... w_n is a position i, 1 ≤ i ≤ n − 1, with w_i < w_{i+1}, and asc(w) denotes the number of ascents in w. Closely related to Catalan words are ascent sequences, introduced in [3] and defined as follows: the word w = w_1 w_2 ... w_n is called an ascent sequence if w_1 = 0 and 0 ≤ w_i ≤ asc(w_1 w_2 ... w_{i−1}) + 1 for i = 2, 3, ..., n. We denote by A_n the set of length-n ascent sequences, and A = ∪_{n≥0} A_n. Similarly as for Catalan words, if π is a pattern, then A_n(π) is the set of length-n ascent sequences avoiding π, and A(π) = ∪_{n≥0} A_n(π). Clearly, C_n = A_n for n ≤ 3, and C_n ⊂ A_n for n ≥ 4, this inclusion being strict: for instance 0102 ∈ A_4 \ C_4. It turns out that, for particular patterns π, A_n(π) collapses to C_n(π) for any n; this behaviour, in which the pattern 0102 plays a critical role, is discussed below.
Proposition 1. If w ∈ A \ C, then w contains the pattern 0102.
Proof: If w = w 1 w 2 . . . w n is an ascent sequence which is not a Catalan word, then there is an i such that w i ≥ w i−1 + 2, and let k be the smallest such i. It follows that w i ≤ w i−1 + 1 for any i, 2 ≤ i ≤ k − 1, or equivalently w 1 w 2 . . . w k−1 is a Catalan word. Thus, if w i > 0, 2 ≤ i ≤ k − 1, then each symbol less than w i occurs in the prefix w 1 w 2 . . . w i−1 . We distinguish two cases: (i) w k−1 is not the maximal symbol of the prefix w 1 w 2 . . . w k−1 , and (ii) otherwise.
(i) In this case there exist i and j, 1 ≤ i < j < k − 1, such that w j = w k−1 + 1 and w i = w k−1 . It follows that w i w j w k−1 w k is an occurrence of 0102.
(ii) In this case the prefix w 1 w 2 . . . w k−1 has a descent (otherwise, since w is an ascent sequence, the maximal possible value for w k is w k−1 + 1), and let j be such a descent, that is w j > w j+1 , j + 1 < k − 1.
As noticed above, the symbol w j+1 already occurs in w 1 w 2 . . . w j−1 , say in position i. Thus, w i w j w j+1 w k is an occurrence of 0102.
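Proposition 1 can be cross-checked by brute force for small lengths (a sketch reusing the helpers above; the ascent-sequence generator follows the definition verbatim):

```python
def ascent_sequences(n):
    seqs = [(0,)]
    for _ in range(n - 1):
        nxt = []
        for w in seqs:
            asc = sum(w[i] < w[i + 1] for i in range(len(w) - 1))
            nxt.extend(w + (v,) for v in range(asc + 2))
        seqs = nxt
    return seqs

# Every ascent sequence that is not a Catalan word contains 0102.
for n in range(1, 8):
    cat = set(catalan_words(n))
    for w in ascent_sequences(n):
        assert w in cat or contains(w, (0, 1, 0, 2))
```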
Since the only length-three patterns contained in 0102 are 001, 010, 012 and 102, we have the following consequence of Proposition 1. Corollary 1. If π ∈ {001, 010, 012, 102}, then A_n(π) = C_n(π) for all n.
Pattern avoidance in ascent sequences was initiated in [6], and in [2] ascent sequences avoiding a pair of patterns of length three are considered and exact enumerations for several such pairs are given. In light of Corollary 1, if a pattern of the avoided pair is one of the four specified in this corollary, then the resulting ascent sequences are Catalan words as well. The pairs of avoided patterns for which ascent sequences and Catalan words coincide, and for which the enumeration has already been considered in [2], are highlighted in the summarizing Table 2. In order to keep the present article self-contained we fully consider these cases, our proofs being alternative to those in [2].
Avoiding a length two and a length three pattern
There are three patterns of length two, namely 00, 01 and 10, and we have: Proposition 3. The number of Catalan words avoiding a pattern of length two and a pattern π of length three is given by: c_n(00, π) = 0 if π = 012 and n ≥ 3, and 1 elsewhere; c_n(01, π) = 0 if π = 000 and n ≥ 3, and 1 elsewhere; c_n(10, π) = F_n if π = 000, n if π ∈ {001, 011, 012}, and 2^{n−1} elsewhere, where F_n is the nth Fibonacci number defined by F_0 = F_1 = 1 and F_n = F_{n−1} + F_{n−2}, n ≥ 2.
Finally, a Catalan word avoids 10 if and only if it avoids 010. It follows that c n (10, π) = c n (010, π), which falls in the case of avoidance of two length 3 patterns and the corresponding proofs are given in the next section, see also Table 2.
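The counts in Proposition 3 for the pattern 10 can be cross-checked by brute force for small n (a sketch reusing the helpers above; a Catalan word avoids 10 exactly when it is w.i.):

```python
def fib(n):  # F_0 = F_1 = 1, F_n = F_{n-1} + F_{n-2}
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(2, 9):
    wi = [w for w in catalan_words(n) if not contains(w, (1, 0))]
    assert sum(not contains(w, (0, 0, 0)) for w in wi) == fib(n)
    for p in [(0, 0, 1), (0, 1, 1), (0, 1, 2)]:
        assert sum(not contains(w, p) for w in wi) == n
    assert sum(not contains(w, (1, 0, 0)) for w in wi) == 2 ** (n - 1)
```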
Superfluous patterns
If the pattern τ contains the pattern σ, then clearly C_n(σ, τ) = C_n(σ); but this phenomenon can occur even when σ and τ are not related by containment, and in this case, following [2], we say that τ is a superfluous pattern for σ. For example, any word in C_n(012) is a binary word, and thus any pattern with at least three different symbols is superfluous for 012. All pairs of superfluous patterns of length three are listed in Table 1. It is worth mentioning that superfluousness is a transitive relation: for instance 201 is superfluous for 021, which in turn is superfluous for 011. So a pattern can be superfluous for several other ones; for instance τ = 201 is superfluous for each of the patterns 001, 010, 011, 012, 021, 101, 120. It is also easy to see that if τ is superfluous for σ, then τ is lexicographically larger than σ.
Ultimately constant sequences
It can happen that the number of Catalan words avoiding a pair of length-3 patterns is constant for sufficiently long words. The only two such cases are given below: c_n(000, 011) = 3 for n ≥ 3, and c_n(000, 012) = 0 for n ≥ 5. Proof: If n ≥ 3, then C_n(000, 011) = {0x, x0, x(n − 1)}, where x is the word 01···(n − 2), and the first point follows. If a Catalan word avoids 012, then it is a binary word. In addition, if its length is larger than 4, it necessarily contains three identical entries, and so C_n(000, 012) = ∅ for n ≥ 5. Considering the initial values of c_n(000, 012), the second point follows.
Proof: If π = {011, 100} and w ∈ C n (π), n ≥ 2, then either − w = 0 k u, 0 ≤ k ≤ n − 1, with u a s.i. Catalan word (of length at least one), or with u a s.i. Catalan word (of length at least two). In the first case there are n possibilities for w and n − 2 possibilities in the second case, and the result holds. If π = {011, 120} and w ∈ C n (π), n ≥ 2, then either and as previously the result holds.
Proof: If a Catalan word avoids 101, then it is unimodal (that is, it can be written, not necessarily in a unique way, as uv with u a weakly increasing and v a weakly decreasing word). In addition, if the word avoids 000, then its maximal value occurs at most twice, and when it occurs twice this happens in consecutive positions. We denote by D_n the subset of words in C_n(π) where the maximal entry occurs once and by E_n that where it occurs twice, with d_n = |D_n|, e_n = |E_n|, and c_n(π) = d_n + e_n. Any word w = w_1···mm···w_{n−1} ∈ E_{n−1} with its maximal value m occurring twice can be extended into a word in D_n by one of the transformations w → w_1···m(m + 1)m···w_{n−1} and w → w_1···mm(m + 1)···w_{n−1}, and any word w = w_1···m···w_{n−1} ∈ D_{n−1} with its maximal value m occurring once can be extended into a word in D_n. Conversely, any word in D_n, n ≥ 2, can uniquely be obtained from a word in D_{n−1} or in E_{n−1} by reversing one of the transformations above, so d_n = 2·e_{n−1} + d_{n−1}. Reasoning in a similar way we have e_n = 2·e_{n−2} + d_{n−2}. Thus, for n ≥ 3, e_n = d_{n−1}, and finally c_n(π) = d_n + d_{n−1}; with the initial conditions c_1(π) = 1 and c_2(π) = 2, the result follows.
The number of words in each of the first two cases is c_{n−1}(π). The number of length-k w.i. Catalan words is 2^{k−1}, so the number of words in the last case is Σ_{k=1}^{n−2} 2^{k−1} = 2^{n−2} − 1. Thus, c_n(π) = 2c_{n−1}(π) + 2^{n−2} − 1, and after calculation the result holds.
In the first case the number of words w is 2 n−1 and in the second case the number of words w is 2 n−2 − 1.
In last case the number of words w is Combining these cases and considering the initial values of c n (π) the result holds.
If π = {101, 110} and w ∈ C n (π), n ≥ 3, then either The number of words w in each of the first two cases is c n−1 (π). For the last case, for each k there is exactly one word w, so their number is (n − 2) in this case. So c n (π) = 2c n−1 (π) + n − 2 and after calculation the result holds.
Proof: If w ∈ C_n(π), n ≥ 4, then either − w = 0u0 with u a length (n − 2) binary word, or − w = 0(u + 1) with u a length (n − 1) w.i. Catalan word, or − w = 0u0(v + 1) with u a binary word and v a w.i. Catalan word. The number of words in each of the first two cases is 2^{n−2} and the number of words in the last case is (n − 2)2^{n−3}, and so c_n(π) = 2^{n−1} + (n − 2)2^{n−3}, which gives the desired result.
If a Catalan word avoids both 102 and 110, then it has at most one descent. In the second part of the proof of the next proposition we need the following technical lemma, which gives the number of Catalan words in C_n(102, 110) with one descent and avoiding the pattern 00 before the descent (note that in this case avoiding 00 is equivalent to avoiding equal consecutive entries). The set of these words is empty for n ≤ 2, it is the single-word set {010} for n = 3, and {0100, 0101, 0120, 0121} for n = 4. Lemma 1. Let D_n be the set of words in C_n(102, 110) having one descent and avoiding 00 before the descent. Then |D_n| = n(n − 1)(n − 2)/6.
Proof: A word belongs to D n , n ≥ 3, if and only if it can be written as and after calculation the statement holds.
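Lemma 1 is also easy to confirm computationally for small n (a sketch reusing the helpers above; by the remark before the lemma, 'avoiding 00 before the descent' amounts to forbidding equal consecutive entries up to the descent position):

```python
def descent_positions(w):
    return [i for i in range(len(w) - 1) if w[i] > w[i + 1]]

for n in range(3, 9):
    D = [w for w in catalan_words(n)
         if not contains(w, (1, 0, 2)) and not contains(w, (1, 1, 0))
         and len(descent_positions(w)) == 1
         and all(w[i] != w[i + 1] for i in range(descent_positions(w)[0]))]
    assert len(D) == n * (n - 1) * (n - 2) // 6
```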
Proof: If π = {021, 102} and w ∈ C n (π), n ≥ 3, then either − w = 0u with u a length (n − 1) binary word, or − w = 0 · · · 0(u + 1)0 · · · 0 with u a length k, 2 ≤ k ≤ n − 1, w.i. Catalan word other than 00 · · · 0 and w beginning by at least one 0. The number of words in the first case is 2 n−1 and the number of those in the second case is and combining the two cases the result holds.
If π = {102, 110} and w ∈ C n (π), then either − w is a w.i. Catalan word, the number of such words is and v is a word belonging to D n−k . Indeed, the first case corresponds to words with no descents, the second one to those with a descent and no occurrences of 00 before the descent, and the third one to those with both descent and occurrences of 00 before the descent (the rightmost such occurrence is in positions k and k + 1). By Lemma 1, the number of words in the third case is Finally, combining the three previous cases the desired result holds.
Sequences involving binomial coefficients
In this part we use the notation ab···cd ↑_i e···f to denote that the entry d is in position i in the word ab···cde···f. Proposition 12. If π = {001, 210}, then c_n(π) = \binom{n}{3} + n for n ≥ 3 (A000125 in [11]).
Proof: If w ∈ C_n(π), then it has at most one descent. If w has no descents, then it has the form w = 01···(m − 1)m···m, and there are n such words. If w has one descent, then it has the form w = 01···(m − 1)m···mk···k with 0 ≤ k < m < n − 1. It follows that there is a bijection between the family of 3-element subsets of {0, 1, ..., n − 1} and the words in C_n(π) with one descent. Combining the two cases we have c_n(π) = \binom{n}{3} + n. Proof: In any of the six cases for π, the set C_n(π) is in bijection with the family S of subsets of {2, ..., n} with at most two elements. We give below explicit definitions for such bijections, where the empty set is mapped to 0···0 ∈ C_n(π) by each of them.
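A quick brute-force check of Proposition 12 (reusing the helpers above):

```python
from math import comb

for n in range(3, 9):
    cnt = sum(1 for w in catalan_words(n)
              if not contains(w, (0, 0, 1)) and not contains(w, (2, 1, 0)))
    assert cnt == comb(n, 3) + n
```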
Sequences involving Fibonacci(-like) numbers
As in Proposition 3, we consider the sequence of Fibonacci numbers (F n ) n≥0 defined as F 0 = 1, F 1 = 1 and F n = F n−1 + F n−2 for n ≥ 2.
Proof: For π = {000, 001} the proof is up to a certain point similar to that of Proposition 7. A word belonging to C n (π) is unimodal and its maximal entry occurs once or twice in consecutive positions. Let D n denote the subset of words in C n (π) where the maximal entry occurs once and E n denote that where it occurs twice. If w ∈ C n (π) has its maximal entry m, then the insertion of (m + 1) after the leftmost occurrence of m in w produces a word in D n+1 , and the insertion of (m + 1)(m + 1) produces a word in E n+2 . It is easy to see that these transformations induce a bijection between C n (π) and D n+1 , and between C n (π) and E n+2 , and thus between C n−2 (π) ∪ C n−1 (π) and C n (π). It follows that c n (π) satisfies a Fibonacci-like recurrence, and by considering the initial values for c n (π) the result holds.
For π = {000, 010}, a word w ∈ C_n(π) is characterized by: w is w.i. and w does not have three consecutive equal entries. So w can be represented by a binary word b_1 b_2 ... b_{n−1} with no two consecutive 1s. This representation is a bijection between C_n(π) and the set of binary words of length (n − 1) without two consecutive 1s, whose cardinality is the Fibonacci number F_n; see for instance [13].
Proof: If w ∈ C_n(π), n ≥ 2, then either − w = 0u with u ∈ C_{n−1}(π), or − w = 0(u + 1) with u ∈ C_{n−1}(π), or − w = 0(u + 1)0 with u ∈ C_{n−2}(π). In each of the first two cases the number of words w is c_{n−1}(π), and it is c_{n−2}(π) in the last case. So c_n(π) satisfies the Pell recurrence c_n(π) = 2c_{n−1}(π) + c_{n−2}(π), and considering its initial values the statement holds.
Counting via generating function
Here we give bivariate generating functions C_π(x, y), where the coefficient of x^n y^k is the number of Catalan words of length n having k descents and avoiding π, for each of the remaining pairs π of patterns of length 3. Plugging y = 1 into C_π(x, y) we obtain C_π(x) = C_π(x, 1), where the coefficient of x^n is the number of Catalan words of length n avoiding π. None of the obtained enumerating sequences is yet recorded in [11], except those for π = {100, 120} and π = {110, 120} (see Corollary 3), and presumably that for π = {100, 210} (see Corollary 15). In almost all of the proofs of the next propositions the desired generating function is the solution of a functional equation satisfied by it.
Proposition 19. If π = {100, 120}, then Proof: A word w ∈ C(π) is in one of the following cases: − w is a w.i. Catalan word, − w = u(m − 1)(v + m) where u is a w.i. Catalan word other than 00···0, m is the largest (last) entry of u and v ∈ C(π). The generating function for the words of the first form is (1 − x)/(1 − 2x) and the generating function for the words of the second form is Combining these cases we deduce the functional equation below, whose solution gives the desired result: Proposition 20. If π = {110, 120}, then Proof: A word w ∈ C(π) is in one of the following cases: − w is a w.i. Catalan word, − w = u(m + 1)v where u is a non-empty w.i. Catalan word, m is the largest (last) entry of u and v is a word of the form mm···m(x + m + 1) with at least one m in its prefix and x ∈ C(π). The generating function for the words of the first form is (1 − x)/(1 − 2x). For the second form, the generating function for the words u is (1 − x)/(1 − 2x) − 1 = x/(1 − 2x), and the generating function for the words mm···m(x + m + 1) is (x/(1 − x)) · C_π(x, y). Thus, the generating function for the words of the second form is Combining these cases we deduce the functional equation The functional equations in the proofs of Propositions 19 and 20 are different, but the resulting bivariate generating functions are the same. Instantiating y by 1 in C_π(x, y) of these propositions we have the next corollary.
and c n (π) is the sequence A034943 in [11].
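The functional-equation step can be mechanized; the sketch below (an illustration, with the second-form contribution of Proposition 20 assembled as y · (x/(1 − 2x)) · x · (x/(1 − x)) · C_π(x, y), following the factors given in the proof) solves for C_π(x, y) with sympy and expands the series at y = 1:

```python
import sympy as sp

x, y, C = sp.symbols('x y C')
eq = sp.Eq(C, (1 - x)/(1 - 2*x)
              + y * (x/(1 - 2*x)) * x * (x/(1 - x)) * C)
Cxy = sp.solve(eq, C)[0]
series = sp.series(Cxy.subs(y, 1), x, 0, 7).removeO()
print([series.coeff(x, n) for n in range(7)])
# [1, 1, 2, 5, 12, 28, 65] -- A034943, up to offset
```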
Proposition 21. If π = {021, 110}, then Proof: A non-empty word w ∈ C(π) is in one of the following cases: − w = 0u where u ∈ C(π); the generating function for these words is x · C_π(x, y), − w = 0(u + 1) where u is a non-empty w.i. Catalan word; the generating function for these words is x · x/(1 − 2x), − w = 01u where u is a non-empty w.i. Catalan word; the generating function for these words is where u is a s.i. Catalan word of length at least three and w ending by at least one 0; the generating function for these words is y · x^4/(1 − x)^2. Combining these cases and considering the empty word, which contributes 1 to C_π(x, y), we deduce the functional equation Corollary 4. If π = {021, 110}, then Proposition 22. If π = {110, 201}, then .
Proof: A non-empty word w ∈ C(π) is in one of the following cases: − w = 0u where u ∈ C(π); the generating function for these words is x · C_π(x, y), − w = 0(u + 1) where u is a non-empty word in C(π); the generating function for these words is x · (C_π(x, y) − 1), − w = u0···0 where u is a s.i. Catalan word of length at least 2 and w ending by at least one 0; the generating function for these words is y · x^3/(1 − x)^2, − w = 010···0(u + 1) where u is a non-empty word in C(π) and w beginning by 010; the generating function for these words is y · x^3 · (1/(1 − x)) · (C_π(x, y) − 1). Combining these cases and adding 1 corresponding to the empty word we deduce the functional equation Proposition 23. If π = {102, 201}, then Proof: A non-empty word w ∈ C(π) is in one of the following cases: − w = 0u where u ∈ C(π); the generating function for these words is x · C_π(x, y), − w = 0(u + 1) where u is a non-empty word in C(π); the generating function for these words is x · (C_π(x, y) − 1), − w = 0(v + 1)0···0 where v is a non-empty word in C(π) and w ending by at least one 0; the generating function for these words is y · x^2 · (1/(1 − x)) · (C_π(x, y) − 1), − w = 01···1u where u is a binary word beginning by a 0 and different from 0···0, or equivalently, u a word in C(012) other than 0···0; the generating function for these words is . Combining these cases and adding 1 corresponding to the empty word we deduce the functional equation Corollary 6. If π = {102, 201}, then Proposition 24. If π = {100, 110}, then .
Proof: A non-empty word in C(π) has one of the following forms: − 0u where u ∈ C(π); the generating function for these words is x · C_π(x, y), − 0(u + 1) where u is a non-empty word in C(π); the generating function for these words is x · (C_π(x, y) − 1), − u(m + 1)(m + 2)v where u and v are non-empty s.i. Catalan words, m is the largest entry of u and the length of v is less than or equal to that of u; the generating function for these words is where u is a non-empty s.i. Catalan word, m is the largest entry of u and v ∈ C(π); the generating function for these words is y · x^3 · (1/(1 − x^2)) · C_π(x, y). Combining these cases and adding 1 corresponding to the empty word we deduce the functional equation Proposition 25. If π = {000, 110}, then .
Proof: A non-empty word in C(π) has one of the following forms: − 0(u + 1) where u ∈ C(π); the generating function for these words is x · C_π(x, y), − uu(v + m + 1) where u is a non-empty s.i. Catalan word, m is the largest (last) entry of u and v ∈ C(π); the generating function for these words is y · (x^2/(1 − x^2)) · C_π(x, y), − u(m + 1)v where u and m are as above, and v is a non-empty s.i. Catalan word of length less than that of u; the generating function for these words is y · x^3 · (1 + x) · (1/(1 − x^2)^2). Combining these cases and adding 1 corresponding to the empty word we deduce the functional equation Proposition 26. If π = {000, 102} or π = {000, 201}, then Proof: If π = {000, 102}, then a non-empty word in C(π) has one of the following forms: − 0(u + 1) where u ∈ C(π); the generating function for these words is x · C_π(x, y), − 00(u + 1) where u ∈ C(π); the generating function for these words is x^2 · C_π(x, y), − 0(u + 1)0 where u is a non-empty word in C(π); the generating function for these words is y · x^2 · (C_π(x, y) − 1), − 01(u + 2)01 where u ∈ C(π); the generating function for these words is y · x^4 · C_π(x, y).
Proposition 27. If π = {000, 120}, then Proof: A non-empty word in C(π) has one of the following forms: − 0(u + 1) where u ∈ C(π); the generating function for these words is x · C_π(x, y), − 00(u + 1) where u ∈ C(π); the generating function for these words is x^2 · C_π(x, y), − 0101(u + 2) where u ∈ C(π); the generating function for these words is y · x^4 · C_π(x, y). Apart from these general cases, there are two other fixed-length ones: − 010; the corresponding generating function is y · x^3, − 0110; the corresponding generating function is y · x^4. Combining these cases and adding 1 corresponding to the empty word we deduce the functional equation C_π(x, y) = 1 + x · C_π(x, y) + x^2 · C_π(x, y) + y · x^4 · C_π(x, y) + y · x^3 + y · x^4. Corollary 10. If π = {000, 120}, then Proposition 28. If π = {201, 210}, then Proof: A non-empty word w in C(π) has one of the following forms: − 0u where u ∈ C(π); the generating function for these words is x · C_π(x, y), − 0(u + 1) where u is a non-empty word in C(π); the generating function for these words is x · (C_π(x, y) − 1), − 01···1u where u is a non-empty word in C(π) and 01 is a prefix of w; the generating function for these words is y · (x^2/(1 − x)) · (C_π(x, y) − 1), − 0(u + 1)0···0 where u is a w.i. Catalan word other than 0···0 and w ending by a 0; the generating function for these words is y · . Combining these cases and adding 1 corresponding to the empty word we deduce the functional equation C_π(x, y) = 1 + x · C_π(x, y) + x · (C_π(x, y) − 1) + y · Corollary 11. If π = {201, 210}, then Proposition 29. If π = {102, 210}, then Proof: A non-empty word w in C(π) has one of the following forms: − 0u where u ∈ C(π); the generating function for these words is x · C_π(x, y), − 0(u + 1) where u is a non-empty word in C(π); the generating function for these words is x · (C_π(x, y) − 1), − 01···1u where u is a non-empty word in C(012), and w begins by 01; the generating function for these words is y · (for the generating function of C_012(x, y)), − 0(u + 1)v where u is a w.i. Catalan word of length at least 2 different from 0···0 and v is a non-empty word in C(010, 012) (see Proposition 5); the generating function for these words is . Combining these cases and adding 1 corresponding to the empty word we deduce the functional equation Corollary 12. If π = {102, 210}, then Proposition 30. If π = {100, 102}, then Proof: A non-empty word in C(π) has one of the following forms: − 0u where u ∈ C(π); the generating function for these words is x · C_π(x, y), − 0(u + 1) where u is a non-empty word in C(π); the generating function for these words is x · (C_π(x, y) − 1), − 0(u + 1)0 where u is as above; the generating function for these words is y · x^2 · (C_π(x, y) − 1), − 011···1(u + 2)01 where u is as above; the generating function for these words is y · (x^4/(1 − x)) · (C_π(x, y) − 1), − uv where u and v are binary words of length at least 2 of the form 011···1; the generating function for these words is y · x^4/(1 − x)^2.
Combining these cases and adding 1 corresponding to the empty word we deduce the functional equation C π (x, y) = 1+x·C π (x, y)+x·(C π (x, y)−1)+y·x 2 ·(C π (x, y)−1)+y· Corollary 13. If π = {100, 102}, then In the proof of the next proposition we need the following lemma where the generating functions for some particular subsets of C(000, 210) are given.
Lemma 2. The bivariate generating function corresponding to 1. the set A of words uu with u a non-empty s.i. Catalan word is A(x, y) = x^2 + y · x^2 · (x^2/(1 − x^2)) and A(x) = x^2/(1 − x^2); 2. the set B of words uv with u and v non-empty s.i. Catalan words and the length of v less than or equal to that of u is B(x, y) = y · (x^2/(1 − x^2)) · (1/(1 − x)); 3. the set D of words uv with u and v non-empty s.i. Catalan words and the length of v less than that of u is D(x, y) = y · (x^2/(1 − x^2)) · (x/(1 − x)). Proof: 1. For any even n there is exactly one word of this form, so the corresponding univariate generating function is x^2/(1 − x^2); and only words of length larger than two have one descent. 2. The transformation (uu, x) → uxu, where uu ∈ A and x is a s.i. Catalan word, defines a bijection between pairs of such words and B, and thus B(x, y) = y · A(x) · 1/(1 − x). 3. Similarly as in point 2.
Proof: A non-empty word in C(000, 210) has one of the following forms: − u with u ∈ D, where D is as in Lemma 2; the generating function for these words is D(x, y) = y · (x^2/(1 − x^2)) · (x/(1 − x)), − u(m + 1)(m + 1)(x + m + 2)v with u and v non-empty s.i. Catalan words, the length of v less than or equal to that of u, m the largest entry of u, and x ∈ C(000, 010); the generating function for these words is (see Lemma 2 and Proposition 14), − 0(u + 1) where u ∈ C(π); the generating function for these words is x · C_π(x, y), − uu(v + m + 1) where u is a non-empty s.i. Catalan word, m the largest entry of u and v ∈ C(π); the generating function for these words is A(x, y) · C_π(x, y). Combining these cases and adding 1 corresponding to the empty word we deduce the functional equation Corollary 14. If π = {000, 210}, then In the proof of the next proposition we need the following lemma, where the generating functions for two subsets of C(100, 210) are given.
1. The generating function corresponding to the set E of words uv, with u a w.i. Catalan word, v a non-empty s.i. Catalan word and the largest entry of v equal to that of u minus 1, is 2. The generating function corresponding to the set F of words uv, with u a w.i. Catalan word, v a non-empty s.i. Catalan word and the largest entry of v less than that of u minus 1, is Proof: 1. If E_n is the set of words of length n in E, then E_n = ∅ for 0 ≤ n ≤ 2, E_3 = {010} and E_4 = {0010, 0110}. With u and v as above, for any n ≥ 3, the transformation uv → uav, with a the maximal entry of u, transforms a word in E_n into one in E_{n+1} where the maximal entry occurs at least twice, and uv → u(a + 1)v(b + 1), with a and b the maximal entries of u and of v respectively, transforms a word in E_n into one in E_{n+2} where the maximal entry occurs once. Any word in E_n, n ≥ 5, except 0···010, can be obtained in a unique way from either a word in E_{n−1} or in E_{n−2} by one of these transformations. This yields the recurrence |E_n| = 1 + |E_{n−1}| + |E_{n−2}| for n ≥ 5, and the desired generating function is precisely that of the sequence (|E_n|)_{n≥0}. 2. Any pair of words (w, x), with w = uv ∈ E (u and v as above) and x a non-empty w.i. Catalan word, can be transformed into the word uxv ∈ F, and (w, x) → uxv is a bijection, so the generating function for F is that for E multiplied by x/(1 − 2x).
Proof: First we consider only the words in C(π) having at least one descent, and we denote by G(x, y) the corresponding generating function; clearly C_π(x, y) = (1 − x)/(1 − 2x) + G(x, y). A word in C(π) with at least one descent has one of the following forms: − u(α + s + 1)(v + s + t + 1) where u and v are both w.i. Catalan words, α belongs to the set E defined in Lemma 3, s is the largest symbol of u (for convenience −1 if u is empty) and t that of α; the generating function for these words is y · , − u(α + s + 1) where u and s are as above, and α belongs to the set F defined in Lemma 3; the generating function for these words is y · ((1 − x)/(1 − 2x)) · (x^3/((x − 1)(x^2 + x − 1))) · (x/(1 − 2x)), − u(α + s + 1)(v + s + t + 1) where u and s are as above, α belongs to E, v is a word in C(π) with at least one descent, and t is the largest symbol of α; the generating function for these words is y · ((1 − x)/(1 − 2x)) · (x^3/((x − 1)(x^2 + x − 1))) · G(x, y). It follows that G(x, y) satisfies a functional equation that is linear in G(x, y).
Finally, solving it and adding the generating function for the Catalan words with no descents (that is, w.i. Catalan words) the statement holds.
Final remarks
Catalan words are in bijection with Dyck paths (see Figure 1) and thus pattern avoiding Catalan words correspond to restricted Dyck paths. For instance, a Catalan word avoiding 012 corresponds to a Dyck path of height at most two. In this context, it can be of interest to investigate how our results on pattern avoiding Catalan words translate to corresponding restricted Dyck paths. Even if in this article we restrict ourselves to the avoidance of two patterns of length 3, some classes considered here can be trivially extended to larger length patterns, for instance C(102, 201) = C(01012, 01201). In this light, it can be of interest to explore Catalan words avoiding patterns of length 4 or more, triples of patterns or generalized patterns.
"Mathematics"
] |
Melamine sponge skeleton loaded organic conductors for mechanical sensors with high sensitivity and high resolution
In recent years, with the development of flexible electronics, flexible sensors have attracted wide attention and have been applied in intelligent robots, brain-computer interfaces, and wearable electronic devices. In this paper, we propose a low-cost and high-efficiency method for preparing sensor components. The sensor component, tetrathiafulvalene-tetracyanoquinodimethane/melamine sponge (TTMS), uses a melamine sponge as a flexible substrate, and the sponge is metallized with tetrathiafulvalene-tetracyanoquinodimethane (TTF-TCNQ), an organic conducting molecule, to construct a conductive pathway. A physical loading approach ensures low cost and efficient manufacturing. TTMS can withstand 8000 compression cycles, exhibiting good mechanical stability, and 1000 cycles of cyclic voltammetry scanning proved that it also has good electrical stability. TTMS can distinguish pressure changes of 100 Pa and responds quickly to pressure application and release. TTMS units can be assembled into a sensor array that distinguishes the position and intensity of applied pressure. The excellent performance of the sensor is therefore expected to promote the commercial application of piezoresistive sensors.
Introduction
Nowadays, flexible electronic materials are developing very rapidly, such as flexible display materials [1], flexible electromagnetic shielding materials [2], flexible batteries [3,4], and flexible energy storage materials [5,6]. They are becoming more and more important and common in our lives. We focused on flexible pressure sensors in flexible wearable electronics, whose applications include electronic skins [7], brain-computer interfaces [8,9], and medical monitoring equipment [10][11][12][13][14][15]. According to the working principle of the sensor, pressure sensors can be divided into piezoresistive [16][17][18][19][20][21], capacitive [22,23], and piezoelectric [24][25][26][27][28][29]. Capacitive pressure sensors have high sensitivity and are mostly used in electronic skin and brain-computer interfaces, but their high sensitivity also makes them vulnerable to external interference [30]. Piezoelectric sensors are suitable for dynamic measurements and can respond quickly to pressure changes, but not for static tests [31]. The piezoresistive sensor has a simple structure, simple preparation, and little influence from external temperature and humidity, and has potential for production and application at scale [32]. Flexible piezoresistive sensors have been prepared from a variety of materials, including metal nanowires [33,34], carbon-based conductive materials (carbon nanotubes, graphene, etc.) [35][36][37], conductive networks constructed by the synergistic interaction of multiple metal oxide heterojunctions in a flexible substrate [38], and self-supporting conductive systems constructed from conductive fiber structures [39]. For example, Mu et al. [35] proposed an e-skin preparation scheme anchoring carbon nanotube (CNT)/graphene oxide (GO) hybrid 3D conductive networks on porous polydimethylsiloxane (PDMS) layers. The sensor showed excellent sensitivity (gauge factor of 2.26 under a pressure loading of 1 kPa) and a highly reproducible response within 5000 cycles of tension, bending, and shear. It not only detected wrist pulses and distinguished between different surface roughnesses, but also responded significantly to the slightest tickle of a feather (~ 20 mg) and could also be used to monitor human respiration in real time. Cheng et al. [40] fabricated a highly sensitive MXene-based piezoresistive sensor inspired by bioinspired micro-spinous microstructures, which can effectively increase the sensitivity of the pressure sensor and the limit of detectable fine pressure. The obtained piezoresistive sensor showed high sensitivity (151.4 kPa −1 ), relatively short response time (< 130 ms), a subtle pressure detection limit of 4.4 Pa, and excellent cycle stability over 10,000 cycles. The sensor showed great performance in real-time remote monitoring and quantitative detection of pressure distribution. However, structures based on heterojunctions and self-supporting systems are relatively complex and have higher requirements for experimental conditions [41]. Some materials also have a high cost. The preparation process often requires high-temperature annealing, water bath reaction, etc. Metal nanoparticles face the problem of reduced electrical conductivity caused by oxidation [42]. Therefore, it is very important to develop a simple method to prepare a stable sensor.
Organic conductive materials may be ideal sensing materials [43][44][45][46]. In 1954, Akamatu et al. discovered the conductive charge transfer complex salts of perylene bromide complexes [47]. Since then, a variety of organic conductive materials have been developed, including a variety of small molecules, charge transfer complexes, oligomers, and conductive polymers [48]. Their conduction mechanism is mainly due to the interaction between adjacent molecules and the intrinsic electronic structure of the extended molecule. They are expected to find practical application because of their flexibility, easy modification, flexible processing, and high universality. Tetrathiafulvalene-tetracyanoquinodimethane (TTF-TCNQ) is a highly conductive charge transfer complex (CTC) with metal-like conductivity over a wide temperature range (350-59 K) [49][50][51]. It was first reported in 1973 by Bloch et al. [52]. The chemical formulas are shown in Fig. S1. Density functional theory (DFT) studies have shown that TTF is the electron donor and TCNQ is the electron acceptor [53]. The main skeleton of TTF contains S atoms, which contribute to strong intermolecular S···S interactions. These S···S interactions, together with π-π and C-H···π interactions, affect the stacking structure of TTF molecules in the crystal and further affect the electrical conductivity of the material [48]. The high conductivity of the TTF-TCNQ CTC is attributed to a "herringbone"-type crystal structure formed by the flat TTF and TCNQ molecules, in which orbitals on adjacent molecules overlap to form continuous one-dimensional bands [50].
Herein, we propose an efficient technical method to fabricate pressure sensors using the organic conducting molecule TTF-TCNQ. Polyvinyl butyral (PVB) was used as an adhesive, and TTF-TCNQ CTC was physically combined with the skeletons of a melamine sponge (MS) to form the pressure-sensitive sensing element TTMS (Fig. 1). The TTMS had a high resolution of 100 Pa, a rapid response capability of 260 ms, and an ultra-high sensitivity of 90.7 kPa −1 in the range of 20 kPa to 104 kPa. These excellent electrical properties resulted from a dramatic increase in the density of conducting molecules in the compressed state of the sensor. Moreover, a pressure sensor could be obtained by arranging multiple TTMS units in an array, which could reflect the strength and position of the applied force; its accuracy was closely related to the array size, which could be adjusted freely. Ecoflex was used to replace the three-dimensionally printed polylactic acid (PLA) substrate, and the fully flexible sensor is expected to be applied to intelligent robot behavior sensing and other fields.
Preparation of TTF-TCNQ
TTF and TCNQ are dispersed in an alcohol solution; then, the liquid is transferred to a mortar and ground to a dry powder. The grinding process can be repeated to fully grow organic molecular conductor crystals. For a more detailed preparation process, see reference [54].
Preparation of TTMS
The sponges were cut into 0.7 cm × 0.7 cm × 0.5 cm blocks using a mold. A 0.5 wt.% PVB/ethanol solution was prepared by dissolving 0.5 g PVB in 99.5 g ethanol. Then, 1.8 g TTF-TCNQ was added to 30 g of the 0.5 wt.% PVB/ethanol solution. The mixture was ultrasonically dispersed for 30 s and then stirred rapidly for 5 min. The sponge was immersed in the TTF-TCNQ/PVB/ethanol solution and kept in a vacuum at 1000 Pa for 3 min. Finally, TTMS was obtained after natural drying [55].
Preparation of sensor
The sensor was 3D printed by using polylactic acid (PLA). The sensor slot was designed to be 0.7 cm × 0.7 cm × 0.2 cm. The first and third layers of the sensor were copper tape perpendicular to each other, which was used to construct a conductive network that uniquely identified the location of the signal source. TTMS were then placed into each sensing unit. TTMS could be connected to copper tape more closely through conductive silver paste. Ecoflex was applied to fabricate a fully flexible sensor with a conductive channel width of 0.5 cm. The Ecoflex substrate differs from the PLA substrate in that the Ecoflex was divided into upper and lower parts.
Characterization
X-ray powder diffraction data were collected using an XRD (D/max 2500, Rigaku, Japan) with Cu Kα radiation (λ = 1.54178 Å). Micromorphological images were recorded using a field emission scanning electron microscope (FE-SEM, LEO-1530, Zeiss, Germany) with an EDX attachment module. An X-ray photoelectron spectrometer (ESCALAB 250Xi, Thermo Fisher, America) equipped with an Al Kα radiation source (1487.6 eV) and a hemispherical analyzer with a pass energy of 30.00 eV was employed to obtain surface element information. The thermogravimetric analysis (TGA) and derivative thermogravimetry (DTG) analysis were performed using a thermogravimetric analyzer (STA 449 F3, Jupiter, Germany). Fourier transform infrared (FT-IR) spectra were recorded on a VERTEX 70 V spectrometer.
Mechanical and electrical test
The real-time resistance and current of the sensors under various deformations were obtained by a combined instrument consisting of a computer-controlled electrochemical workstation (CHI 660E, CH Instrument, China) and a universal mechanical tester (Zwicki-Z1.0, ZwickRoell GmbH & Co. KG, Germany) with a double silver electrode system. The relative current change was defined as [56,57] ΔI/I_0 = (I − I_0)/I_0 × 100%, where I_0 is the initial current and I the current under deformation. The conductivity calculation formula was σ = 1/ρ = L/(R·A), with R = U/I, where σ (S/m) is the conductivity, ρ (Ω·m) is the resistivity, L (m) is the width of the sensor, A (m^2) is the cross-sectional area of the sensor, and U (V) and I (A) are the applied voltage and the corresponding current obtained through the electrochemical workstation, respectively.
And the formula for calculating GF was [58] GF = (ΔI/I_0)/ε, where ε is the applied strain. And the formula for calculating sensitivity was [59] k = δ(ΔI/I_0)/δP, where k (kPa −1 ) is the sensitivity and P (kPa) is the pressure.
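For concreteness, here is a minimal sketch evaluating these four quantities exactly as defined above (the numbers are hypothetical placeholders, not measured values from this work):

```python
U = 1.0                    # applied voltage (V)
I0, I = 2.3e-6, 1.5e-4     # current before / under compression (A)
L = 7e-3                   # sample width (m)
A = 7e-3 * 5e-3            # cross-sectional area (m^2)

dI_rel = (I - I0) / I0     # relative current change
R = U / I                  # resistance under load (Ohm)
sigma = L / (R * A)        # conductivity (S/m)
strain = 0.30              # applied strain
dP = 10.0                  # pressure step (kPa)
GF = dI_rel / strain       # gauge factor
k = dI_rel / dP            # sensitivity (kPa^-1)
print(dI_rel, sigma, GF, k)
```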
Results
We used Fourier transform infrared spectroscopy (FT-IR) to analyze the differences in chemical bond information between MS and TTMS (Fig. 2a). The enhancement of the peak at 900-1300 cm −1 was due to the joint action of C-S, C=S, and the benzene ring, and the newly formed peak at 2200 cm −1 was due to the introduction of C≡N in TCNQ. The increase of the peaks at 2850 cm −1 and 2920 cm −1 was due to the increase of C-H, while the newly formed peak at 3070 cm −1 was caused by C=C vibration. Raman spectra (Fig. 2b) showed that the characteristic principal vibration modes at 980 cm −1 , 1202 cm −1 (C=CH bending), and 1603 cm −1 (C=C ring stretching) confirmed the presence of the TTF and TCNQ phases [60]. The composition differences between MS and TTMS were analyzed by thermogravimetric analysis (TGA) and derivative thermogravimetry (DTG) in the N 2 atmosphere (Figs. 2c and S1), which showed that TTMS prepared using a 6% TTF-TCNQ concentration contained approximately 45% TTF-TCNQ by weight. TTF-TCNQ began to decompose at about 185 ℃ and gradually completed decomposition at 400 ℃. The curve decline after 400 ℃ was due to the decomposition of MS at high temperature, until the decomposition was completed at 700 ℃. At 900 ℃, the residual matter after TTMS decomposition was about 3.8%, which may be formed under high temperature. The blue curve (6% compressed) in the figure reflected the material after 10000 compression tests. It could be seen that the physical composite mode using PVB as the adhesive was very stable, and the content of TTF-TCNQ was the same as that without compression. The XRD curves clearly showed that TTF-TCNQ had been attached to MS in TTMS (Fig. 2d). We marked multiple crystal planes of TTF-TCNQ, which corresponded well to the literature [61]. Then, the C, N, O and S elements in TTMS were analyzed by X-ray photoelectron spectroscopy (XPS) (Figs. 2e, f and S1b-d). There were two binding energy peaks of N 1s [62], at 399.1 eV and 397.5 eV, which corresponded to N≡C and N≡C − in TCNQ, respectively. The binding energy peak of S 2p was complicated, including the 2p satellite peak, 2p 1/2 and 2p 3/2 [63][64][65][66][67]. The S element came from TTF, and its electron energy was affected by TCNQ. The peak at 168.8 eV was not classified into the above three electron states but is commonly found in metal sulfates. In both metal sulfates and TTF-TCNQ, the S element acts as an electron donor, which explains the origin of the peak at 168.8 eV. In the binding energy peak of the C element, C 1s, 288.7 eV corresponded to C-S in TTF, 286.1 eV corresponded to C≡N in TCNQ and C-O-C in PVB, and 284.8 eV corresponded to carbon in the benzene ring and C-C. The C 1s peak at 283.3 eV is commonly seen in carbon- and metal-binding materials, so it may be formed by the influence of the C element in TTF or TCNQ. This unique peak position in the C and S elements also supports the charge transfer mentioned above. All of the O element came from PVB; the binding energy peaks of the O element correspond to C-O-C, C-O, and C=O. We observed the microstructure of TTMS with a scanning electron microscope (Fig. 2g): TTF-TCNQ CTC was loaded on the MS skeleton, with more accumulation of TTF-TCNQ at the skeleton joints (red arrow). Then, we scanned the distribution of S elements on the surface through mapping (Fig. 2h, i), and the distribution of S elements was completely consistent with that of TTF-TCNQ.
(Fig. 2: b Raman spectra. c TGA curves. d XRD curves. e, f XPS spectra. g, h, i SEM images and mapping of the S element.)
Next, we studied the mechanical and electrical properties of the sensing element. We measured the modulus of TTMS prepared with three different TTF-TCNQ usage amounts (Fig. 3a). Increasing the TTF-TCNQ usage increased the modulus of TTMS, but also increased its conductivity; we therefore settled on a usage of 6% TTF-TCNQ. The stress-strain curves also reflect the influence of usage on the mechanical properties of TTMS (Fig. 3b), and the stress of this material changed significantly with increasing strain. Subsequently, we tested the current response of TTMS under different pressure states (Fig. 3c); optical photographs of 0-70 kPa compression are shown in Fig. S2. The higher the pressure, the higher the current and hence the lower the resistance. Therefore, a piezoresistive sensor could be constructed based on this sensing mechanism. We further tested the electrical stability of TTMS (Fig. 3d), which maintained stable electrical signals over 1000 cycles of cyclic voltammetry (CV). In addition, TTMS kept a stable rate of current change even after 10000 cycles of compression (Fig. 3e). We also give detail plots of the current change rate in windows at the beginning, middle and end of the compression test (Fig. S1e); the periods were the 5th-9th, 3998th-4002nd and 7903rd-7908th cycles, respectively. After 8000 compression cycles, the current change rate decreased by about 5%.
Then, we measured several main evaluation indexes of the sensor, including response time, resolution, gauge factor (GF) and sensitivity. TTMS could respond quickly to stress application and release (Fig. 3f), with response times of 260 ms and 160 ms, respectively. There was no obvious limit on the left end of the stress-holding platform (red arrow); therefore, we took the time from the stress being applied to reaching 90% of the maximum stress as the response time of TTMS to stress application. This problem did not exist in the stress release stage. The sensor had a very high GF value (Fig. 3g) and sensitivity (Fig. 3h). In 65-70 kPa, GF was 131704.6. The sensitivity was 8.5 kPa −1 in 5-15 kPa, 37.0 kPa −1 in 15-20 kPa, and 90.2 kPa −1 in 20-105 kPa. This excellent performance came from the state change of TTF-TCNQ. In the beginning, TTF-TCNQ existed in the form of aggregates, which were bonded to the sponge skeleton by PVB, and the contact of aggregates formed a conductive path following the sponge skeleton. When TTMS was compressed, the density of TTF-TCNQ increased: there was then not only a conductive path in the horizontal plane, but also a more complex conductive path in the vertical direction, because the pressure promoted more TTF-TCNQ contacts. Therefore, the electrical conductivity of TTMS was greatly increased under compression, further affecting the GF value and sensitivity. (Fig. 4: the sensor response to different pressures and different shapes. a 30% compression. b 40% compression. c 50% compression. d Box. e Rectangle. f Circle.) In this paper, the rate of change of current rather than the rate of change of resistance was chosen to calculate the GF value and sensitivity. The conductivity of TTMS ranged from 2.3 × 10 −4 S/m initially to 1.79 S/m at 70% compression (Fig. S1f); this value increased by about four orders of magnitude, and the significant increase in conductivity was due to the increase of conductive paths under compression. We tested the current under pressures from 0.1 kPa to 0.5 kPa, and TTMS was able to resolve a minimum stress difference of 0.1 kPa (Fig. 3i). The higher the resolution of the sensor, the more accurate the transmitted signal [68].
Finally, several TTMS sensor elements were used to construct a 5 × 5 array sensor (Fig. S3a). The sensor substrate was 3D printed, and the conductive channels were pre-designed on the substrate and filled with conductive copper tape. To achieve complete flexibility of the sensor, we prepared the outer surface of the sensor with Ecoflex, again using conductive copper tape to construct the conductive paths (Fig. S3b-d).
We tested the response of the sensor to different normal forces (Fig. 4a-c). The results showed that the sensor could clearly distinguish different normal forces, and the approximate range of the normal force could be obtained from the heat-map value. Then, we also tested the response of the sensor under pressures of different shapes (Fig. 4d-f): box, rectangle and circle. The electrical signal data were presented in the form of a heat map. The sensor could reflect the position of the applied pressure for all three shapes. In addition, the correspondence between a given heat-map value and pressure could be determined through the relationship between the pressure measured in the previous experiment and the current change rate; the heat-map values are calculated in the same way as the current change rate. At the same time, the sensor did not accurately represent the original shape of the pressure. Such defects resulted from the low precision caused by the small size of the sensor array: due to the small number of pixels in the sensor, part of the information was lost in the feedback. This defect could be remedied by adjusting the size of the sensor. The fast and efficient preparation method and simple sensor assembly method presented in this paper can greatly reduce the difficulty of scaling up the sensor.
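A minimal sketch of how such a heat map can be produced from the array readout (the 5 × 5 shape follows the text; the baseline current and the pressed region are hypothetical):

```python
import numpy as np

I0 = np.full((5, 5), 2.3e-6)   # baseline currents of the 25 cells (A)
I = I0.copy()
I[1:3, 1:3] = 1.5e-4           # a pressed square ("box") region
heat = (I - I0) / I0           # same definition as the current change rate
print(np.round(heat, 1))
```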
Conclusion
Due to the high conductivity, stability, and flexibility of organic molecular conductors, we constructed a conductive system using TTF-TCNQ and applied it to mechanical sensing with a flexible substrate. The electrical and mechanical stability of the sensor TTMS benefited from the double guarantee of TTF-TCNQ and MS stability. The high conductivity of 1.79 S/m at maximum compression derived from the diversification of conductive paths, which also gave the sensor a high sensitivity of 90.2 kPa −1 . The response times of TTMS were 260 ms and 160 ms when the force was applied and released, respectively, which ensured that the sensor could respond quickly to stress changes. At the same time, TTMS's high resolution of 100 Pa greatly improved the detection range of the sensor. In general, the performance of the flexible pressure sensor prepared with TTF-TCNQ was excellent, and it has application potential in artificial intelligence, soft robots, and other fields. The simple and efficient preparation method also increases the possibility of the sensor being widely used.
Author contribution Wu Yufeng was responsible for all the experimental and paper writing parts. Wu Jianbo and Lin Yan provided the characterization test equipment. He Xian helped to complete the characterization of SEM and XRD. Liu Junchen and Pan Xiaolong helped to revise the writing of the paper. Lei Ming and Bi Ke provided experimental ideas, experimental direction, and financial support.
Data availability
Data sharing is not applicable to this article.
Conflict of interest
The authors declare no competing interests.
"Materials Science"
] |
Phylogenetic analysis of the distribution of deadly amatoxins among the little brown mushrooms of the genus Galerina
Some but not all of the species of ’little brown mushrooms’ in the genus Galerina contain deadly amatoxins at concentrations equaling those in the death cap, Amanita phalloides. However, Galerina’s ~300 species are notoriously difficult to identify by morphology, and the identity of toxin-containing specimens has not been verified with DNA barcode sequencing. This left open the question of which Galerina species contain toxins and which do not. We selected specimens for toxin analysis using a preliminary phylogeny of the fungal DNA barcode region, the ribosomal internal transcribed spacer (ITS) region. Using liquid chromatography/mass spectrometry, we analyzed amatoxins from 70 samples of Galerina and close relatives, collected in western British Columbia, Canada. To put the presence of toxins into a phylogenetic context, we included the 70 samples in maximum likelihood analyses of 438 taxa, using ITS, RNA polymerase II second largest subunit gene (RPB2), and nuclear large subunit ribosomal RNA (LSU) gene sequences. We sequenced barcode DNA from types where possible to aid with applications of names. We detected amatoxins only in the 24 samples of the G. marginata s.l. complex in the Naucoriopsis clade. We delimited 56 putative Galerina species using Automatic Barcode Gap Detection software. Phylogenetic analysis showed moderate to strong support for Galerina infrageneric clades Naucoriopsis, Galerina, Tubariopsis, and Sideroides. Mycenopsis appeared paraphyletic and included Gymnopilus. Amatoxins were not detected in 46 samples from Galerina clades outside of Naucoriopsis or from outgroups. Our data show significant quantities of toxin in all mushrooms tested from the G. marginata s.l. complex. DNA barcoding revealed consistent accuracy in morphology-based identification of specimens to G. marginata s.l. complex. Prompt and careful morphological identification of ingested G. marginata s.l. has the potential to improve patient outcomes by leading to fast and appropriate treatment.
Introduction
Galerina, a genus of small, yellow-orange or yellow-brown mushrooms, includes species that have been implicated in dozens of poisoning cases worldwide [1]. However, information about exactly which of the >300 species in the genus [2] pose a poisoning risk is incomplete and confusing. This is partly because Galerina species are difficult to identify using morphological characters alone, and partly because toxin analysis has usually involved destructive sampling, leaving no voucher material to confirm identification. DNA barcoding has not previously been applied to link identifications of specimens with toxin analysis, and toxins have not been assayed from diverse Galerina species. Here, we connect vouchered Galerina specimens to DNA barcode sequences and to amatoxin presence and absence in the context of the most complete molecular phylogeny of the genus to date.
Although individual Galerina mushrooms are small, the amatoxins can have dramatic consequences if ingested. Given the amatoxin LD50 (the amount of substance required to kill 50% of the test population) of 0.1 mg/kg body weight, 10 fruiting bodies of one of the toxic species would be sufficient to poison a child weighing 20 kg [1]. Serious illness has resulted in people of various ages when Galerina mushrooms have been confused with edible or hallucinogenic mushrooms and eaten in quantity. By the time serious symptoms appear, 2-4 days after eating the mushrooms, the toxins have inflicted serious damage on the liver and other internal organs. A family in Japan, including a six-year-old boy, ate soup containing what were probably Galerina fasciculata, possibly mistaken for wild enoki mushrooms [3]. The older family members experienced nausea and diarrhea and then recovered, but the boy's condition became progressively worse. Some 36 hours after eating the soup, the boy was admitted to the hospital; 72 hours after the meal, his liver failed. Following treatment, he slowly recovered, to be discharged after 15 days [3]. A 32-year-old Swedish woman sautéed and ate Galerina marginata, mistaking them for honey mushrooms (Armillaria species). She was admitted to the hospital 17 h later with vomiting and diarrhea, and with blood enzyme levels indicating liver damage [4]. She recovered after nine days in the hospital. Two days after their cafeteria erred by serving a locally sourced 'mushroom dish' that likely contained Galerina sulciceps, a group of 13 coworkers in China, aged 19-56, required 10 days of hospitalization to recover from liver and kidney damage [5]. Although details are unavailable, in 2011 three Galerina poisoning cases, including one fatality, were reported in North America [6]. There is no known antidote for amatoxin ingestion, but case studies show that supportive therapy, such as replacing electrolytes and keeping the patient hydrated, saves lives [7,8]. Better knowledge of the taxonomic distribution of amatoxin production may allow for better documentation of the geographic range and abundance of toxic species. If ingested mushrooms can be identified as amatoxin-containing species earlier, appropriate treatment can be initiated earlier, likely improving outcomes.
Deadly amatoxins in Galerina mushrooms have been documented since the mid-20th century. In 1954, two patients consumed what was later identified as Galerina venenata and presented with symptoms mirroring poisoning by the death cap, Amanita phalloides [9]. Prompted by these poisoning cases, Tyler and Smith [10] used paper chromatography to show that G. venenata contains α- and β-amanitin, two of the amatoxins, the toxic peptides identified from the genus Amanita.
To discuss the relationships of the toxin producers among the large number of Galerina species, infrageneric clades become relevant. A series of authors have subdivided the genus into subgenera and sections, e.g. Gulden and Hallgrímsson [11] and Smith and Singer [12]. The infrageneric taxa applied by different authors are only partially congruent with one another or with molecular phylogenies [13]. For clarity of communication, Gulden et al. [13] designated four infrageneric clades in their molecular phylogenies as informal groups "Naucoriopsis," "Galerina," "Tubariopsis," and "Mycenopsis," pointing out that the names "largely reflect already recognized morphology-based subgenera or sections within Galerina." Our results are largely congruent with these earlier studies and so we recognize Gulden et al. [13]'s four provisional clades as subgenera. We also apply "Sideroides" as a subgenus, based on an infrageneric taxon first used in Smith and Singer's monograph [12].
Previous phylogenetic and toxin studies placed known Galerina toxin-producers in subgenus Naucoriopsis [1,13]. Within Naucoriopsis, amatoxins have been reported in the G. marginata s.l. species complex [1]. Five other species that are also reported to contain amatoxins are likely to be members of Naucoriopsis, although without verification by DNA barcoding. Muraoka et al. [14] and Muraoka and Shinozawa [15] purified amatoxins from cultures of G. fasciculata and G. helvoliceps. Besl [16] extracted amatoxins from cultures of G. beinrothii; from dried mushrooms of G. badipes; and from both cultures and dried mushrooms of the G. marginata species complex. Besl et al. [16] also reported negative results; toxins were not detected in four specimens selected from among the ~200 Galerina species from outside Naucoriopsis.
Of the toxin producers associated with specimen vouchers, the culture Galerina 'marginata' CBS 339.88 is the best studied. The Joint Genome Institute sequenced its complete genome. Luo et al. [17] characterized its genes responsible for α-amanitin synthesis and used hybridization to indicate that the same genes are present in G. venenata CBS 924.72 and G. badipes CBS 268.50. Surprisingly, G. badipes reportedly produced γ-amanitin but not the more common α- or β-amanitin [16].
The number of Galerina species that produce toxins is unclear. Until recently, most Galerina species have been described and delimited based on micro- and macromorphological differences. Smith and Singer's [12] monograph on the genus distinguished 199 species of Galerina. However, Gulden et al. [18] showed that nuclear ribosomal internal transcribed spacer (ITS) sequence variation did not support the monophyly of species from vouchers labeled G. marginata, G. autumnalis, G. unicolor, G. oregonensis, and G. venenata. Gulden et al. synonymized all of these under G. marginata. The study left unclear whether other species should be included in G. marginata. The possibility remained that cryptic species may be contained in a group that we refer to as 'G. marginata s.l.'.
Galerina appears polyphyletic in molecular phylogenies that draw on ITS and large ribosomal subunit (LSU) data [13,18]. Gymnopilus appears nested within Galerina's subgenus Mycenopsis with a Bayesian posterior probability of 1.00. Other Galerina species were intermingled with Phaeocollybia, Hebeloma and other genera, mostly without strong Bayesian support [13]. Some of the apparent Galerina polyphyly may simply have reflected a lack of data: when Matheny et al. [19] used more data, 4508 aligned sites from a combination of ribosomal and RPB2 (encoding the RNA polymerase II second largest subunit B150) gene sequences, their phylogenies no longer showed Phaeocollybia and Hebeloma intermingled with Galerina. Matheny et al. transferred Galerina clavus, which was clearly not a Galerina, to a new genus, Romagnesiella [19]. These results suggested, encouragingly, that including RPB2 with ribosomal gene data might clarify the infrageneric structure of Galerina, putting the toxic species in a larger phylogenetic context.
Our goal was to resolve relationships and clarify the phylogenetic distribution of amatoxins among Galerina species. To more closely characterize poisonous species, we aimed to analyze DNA and toxins of vouchered Galerina collections from the UBC Herbarium in the Beaty Biodiversity Museum (https://herbweb.botany.ubc.ca/herbarium/search.php?Database=fungi). Many of these are recently accessioned collections made by regional mycologists, especially Oldriska Ceska and Paul Kroeger. Discovering which Galerina species contain amatoxins is technically straightforward because a small amount of fungal tissue suffices for both toxin analysis and DNA barcoding. Two studies [20,21] have demonstrated that amatoxins are readily detected and quantified via liquid chromatography-mass spectrometry from as little as 8 mg of dried Amanita, even in herbarium specimens that were 17 years old. Using preliminary ITS phylogenies to represent the diversity of clades in Galerina, we selected specimens for toxin analysis and for sequencing of partial LSU and RPB2 regions. To help guide applications of names, we borrowed specimens including types determined by A.H. Smith and sequenced their ITS1 regions. We quantified α-amanitin concentrations from a diverse sample of 62 DNA-barcoded UBC Galerina specimens and eight species from closely related genera. Integrating toxin data in a broad phylogenetic framework gives us new power to predict toxicity from morphology and to speed identifications of specimens involved in possible poisoning cases.
Taxon sampling, DNA amplification and phylogenetic analysis
For this study, we re-analyzed ITS sequences of Galerina specimens from UBC determined previously by Bazzicalupo et al. [22]. For each collection, DNA extraction, PCR amplification, and ITS sequencing had been replicated [22]. We analyzed the ITS sequences of 147 Galerina collections from which we recovered the same sequence in each of two independent extractions (S1 Table). For RPB2 and LSU amplifications, we extracted additional DNA from specimens selected to represent the diversity of lineages as estimated from preliminary analyses of the ITS data. We extracted DNA from 5-20 mg of gill tissue following instructions in the Qiagen DNEasy Plant Mini Kit for PCR amplification with Illustra PuReTaq Ready-To-Go PCR beads (GE Healthcare: Mississauga, ON, Canada). We used primers LR0R and LR5 [23] for LSU gene amplifications. For RPB2, we initially used primers bRPB2-6F and bRPB2-7.1R [24]. The PCR cycles began with an initial denaturation at 95˚C for 5 min, followed by 30 cycles of 95˚C denaturation for 30 sec, 55˚C annealing for 30 sec, and 72˚C elongation for 30 sec, increasing the elongation time by 4 sec per cycle, and concluding with a final elongation at 72˚C for 7 min. For RPB2 samples that gave only weak bands or no bands at all, we re-amplified the product in nested PCR reactions using bRPB2-7R [24] and a re-designed internal forward primer berniF 5' ATG GTG TGC CCT GCG GAA AC. For forward and reverse Sanger sequencing, we used BigDye Terminator v3.1 (Thermo Fisher Scientific: MA, USA) following the manufacturer's instructions. The UBC Sequencing and Bioinformatics Consortium performed the electrophoresis.
The 368 ITS sequences analyzed included 161 sequences from UBC specimens of Galerina, Hebeloma and Gymnopilus, genera representing the family Hymenogastraceae. To help associate names with clades, we sequenced the ITS1 regions from 14 Galerina specimens from MICH examined by A.H. Smith, including types where possible. Also to help associate names with clades, we used sequences from Gulden et al. [13,18]. We used a series of BLAST searches to select additional GenBank sequences to represent the known diversity in the genus, and we included ITS sequences of Psilocybe in addition to Galerina, Hebeloma and Gymnopilus in the analysis. We selected 154 sequences from the 5' end of the LSU, 28 of them determined for this study, and 78 RPB2 sequences, 24 from this study, to represent Galerina and the closely related families Hymenogastraceae, Strophariaceae, Crepidotaceae, Inocybaceae, Tubariaceae, Bolbitiaceae and Cortinariaceae. For voucher information and GenBank accession numbers, see S1 Table. We used the MAFFT online server with the L-INS-i setting [25] to obtain initial alignments for each locus, then refined the alignments manually using Mesquite 3.5 [26]. For the RPB2 dataset, we excluded introns from the final alignment. Using jModelTest 2 [27] implemented on the CIPRES portal [28], we selected the best nucleotide substitution models (by AICc): GTR+I+G for the ITS and LSU datasets; TIM1+I+G for RPB2 codon position 1; and TVM+I+G for RPB2 codon positions 2 and 3. For analyses, we approximated the best models using GTR+I+G throughout. For each individual alignment and the concatenated alignment, we used RAxML v.8.2.10 [29] on the CIPRES portal to infer maximum likelihood trees from 200 'thorough' searches. We used 500 bootstrap replicates to assess branch support. Conflicts in the topologies from individual loci generally involved weakly to moderately supported nodes (<70% bootstrap) (S2-S4 Figs), so we concatenated the alignments in Mesquite.
For subsequent analyses of concatenated LSU, RPB2 and ITS data, the ITS regions of the more distant outgroups were too variable to align, and so we included only species of Galerina, Gymnopilus, Psilocybe and Hebeloma. We included sequence data from each specimen analyzed for toxins and from each specimen represented by data from the LSU or RPB2 sequence regions. We included a representative of each unique ITS haplotype. To increase geographical sampling, we included a representative of each country of origin from among sequences with the same haplotype. The resulting dataset included 337 taxa and 4401 aligned positions and is available through DRYAD: https://doi.org/10.5061/dryad.r7sqv9s9z. We partitioned the input alignments by locus, and for RPB2, by codon position. We again used RAxML for 200 likelihood searches and 500 bootstrap replicates.
Amatoxin detection
We analyzed amatoxin concentrations from 70 specimens: 62 Galerina, four Gymnopilus, three Hebeloma, and one specimen of Flammula alnicola. For 36 of the Galerina specimens, we analyzed two ~5 mg replicate tissue samples. We analyzed only one ~5 mg sample each from 26 specimens that were too small to allow replicated sampling. We tested four tissue disruption methods to compare and maximize amatoxin extraction efficiency: (1) no tissue grinding, (2) grinding with a plastic pestle, (3) grinding with a wooden stir stick and (4) vortexing the tissue with a glass bead. Tissue grinding with a wooden stir stick was most efficient, and we used it for all subsequent samples. After grinding, we added 50% methanol to each tube at a ratio of 40 μL/mg starting tissue.
After 24 hours, we centrifuged samples at 13,300 rpm for 10 minutes in an accuSpin Micro 17 centrifuge (Thermo Fisher Scientific: MA, USA) and transferred the supernatant to a new 1.5 mL tube. To remove ≥50% of the 50% methanol solution, we spun samples for 30-60 minutes in a Savant SPD111V SpeedVac (Thermo Fisher Scientific: MA, USA) and then added sterile water to reconstitute the solution to a final volume of 200 μL. We centrifuged samples again at 13,300 rpm for 10 minutes. Finally, we loaded 110 μL of the supernatant into individual 1.5 mL glass autosampler vials with 0.15 mL glass inserts. As a positive control, we included one vial containing 110 μL of 0.2 μg/μL α-amanitin standard (SIGMA A2263) dissolved in water. The injection volume for high-performance liquid chromatography/mass spectrometry (HPLC/MS) analysis was 100 μL.
We performed chromatographic separation using a Proto 300 C18 column (RS-2546-W185, Higgins Analytical: CA, USA) attached to an Agilent 1200 series HPLC, multi-wavelength detector, and Agilent 6120 Quadrupole MS (Agilent Technologies: CA, USA), with detection at 220, 280, 295 and 310 nm [30]. Elution solution A was 20 mM ammonium acetate, pH 5, and solution B was 100% acetonitrile. The flow rate was 1 mL/min, with a gradient of 100% solution A to 100% solution B over 20 minutes. A column re-equilibration period of 10 minutes at 100% solution A was included at the end of each run.
We first determined the presence or absence of α-amanitin via HPLC and UV absorbance and confirmed the results by MS. The α-amanitin standard showed an absorption peak at 310 nm at an 8.5-minute retention time, coupled with strong MS signals for an ion with a mass/charge (m/z) ratio of 919. We first checked the chromatograms for each Galerina sample for 310 nm peaks at 8.5 minutes, and we scanned extracted ion chromatogram MS data for compounds with a mass/charge ratio of 919 at 8.5 minutes. Where UV absorbance, retention time, and MS showed evidence of α-amanitin, samples were recorded as positive. Samples were recorded as positive for β-amanitin based on a peak with the 8.0-minute retention time that is expected under the chromatography conditions used [30]. Samples that did not produce a distinct peak at 310 nm at 8.0 or 8.5 minutes and that lacked compounds with the expected mass/charge ratio were considered toxin-negative.
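The decision rule above can be summarized in a short script. The sketch below is illustrative only: the data structures, peak-finding helper, and retention-time tolerance are hypothetical stand-ins for the vendor software actually used, but the logic (a 310 nm absorbance peak near the expected retention time plus an m/z 919 ion for α-amanitin, or an 8.0-minute peak for β-amanitin) follows the criteria described above.

```python
# Hypothetical sketch of the toxin-calling rule described in the text.
# Retention times and m/z values follow the paper; data structures are invented.

ALPHA_RT, BETA_RT = 8.5, 8.0   # expected retention times (minutes)
ALPHA_MZ = 919                 # expected mass/charge ratio of alpha-amanitin
RT_TOL = 0.2                   # assumed retention-time tolerance (minutes)

def has_peak(peak_times, rt, tol=RT_TOL):
    """True if any detected peak falls within tol minutes of rt."""
    return any(abs(p - rt) <= tol for p in peak_times)

def call_sample(uv310_peaks, ms_ions):
    """uv310_peaks: retention times of 310 nm absorbance peaks (minutes).
    ms_ions: list of (retention_time, mz) from extracted ion chromatograms.
    Returns presence calls for alpha- and beta-amanitin."""
    alpha = has_peak(uv310_peaks, ALPHA_RT) and any(
        abs(rt - ALPHA_RT) <= RT_TOL and mz == ALPHA_MZ for rt, mz in ms_ions)
    beta = has_peak(uv310_peaks, BETA_RT)  # called on retention time alone
    return {"alpha_amanitin": alpha, "beta_amanitin": beta}

print(call_sample([8.5, 12.1], [(8.5, 919)]))  # a toxin-positive example
```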
Species delimitation
To delimit putative Galerina species, we used the online version of Automatic Barcode Gap Discovery (ABGD) [31] under the assumption that within-species sequence variation is usually lower than the variation between species. We included the 314 ITS sequences from Galerina, Psilocybe, and Gymnopilus samples that were at least 500 bp long, repeating the analysis with and without a Kimura 2-parameter correction for multiple substitutions.
The ABGD software gives a range of broader or narrower estimates of species boundaries. To choose among alternative estimates, we assumed that characters of sister species evolve to show reciprocal monophyly [32], that conspecific isolates would in many cases form well-supported clades but would lack well-supported subclades [33], and that closely related species might differ in ecology [18,34]. We did not apply a correction for multiple hits in the final analysis because preliminary results showed that a Kimura correction increased both the number of single-sequence species and the number of paraphyletic species (with no evidence of reciprocal monophyly). Our final ABGD analysis produced seven alternative estimates of possible species boundaries, based on a set of priors for the maximum percent within-species divergence that ranged from 0.001 to 0.0215. These priors bracket the range of reasonable levels of within-species divergence. The prior of 0.001 gave 71 putative species, many represented by only a single sequence and nested within another species. The prior of 0.0215 put all collections in one species in spite of many supported subclades. A prior of 0.0028 with recursive partitions resulted in 63 putative Galerina species, six of them nested among G. marginata s.l. No arbitrary prior is likely to be perfect, and in some cases the 63-group partition lumped well-supported sister taxa with consistent identifications or created paraphyletic putative species. Of the alternatives, the partition giving 63 Galerina species had the advantages of producing a high proportion of putative species that formed clades with moderate to high bootstrap support of 70% or more, and relatively few paraphyletic species, while dividing the G. marginata s.l. clade into species consistent with patterns of sequence variation in the ITS regions.
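As a rough illustration of the barcode-gap idea behind ABGD, the sketch below groups sequences whose pairwise uncorrected p-distance falls below a chosen within-species prior. This is a simplification, not the recursive ABGD algorithm itself, and the toy 'barcodes' are invented.

```python
# Minimal sketch of barcode-gap partitioning: greedily group sequences whose
# pairwise uncorrected p-distance is below a within-species prior.
# This illustrates the idea behind ABGD; it is not the published algorithm.

def p_distance(a, b):
    """Uncorrected proportion of differing sites between equal-length sequences."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def partition(seqs, prior=0.0028):
    clusters = []
    for name, seq in seqs.items():
        home = None
        for cluster in clusters:
            if any(p_distance(seq, seqs[m]) <= prior for m in cluster):
                home = cluster
                break
        if home is None:
            clusters.append([name])
        else:
            home.append(name)
    return clusters

toy = {  # invented 20-bp 'barcodes' for illustration only
    "sampleA": "ACGTACGTACGTACGTACGT",
    "sampleB": "ACGTACGTACGTACGTACGA",  # one difference from sampleA
    "sampleC": "TTGTACGTTCGTACGAACGT",  # several differences
}
print(partition(toy, prior=0.06))  # [['sampleA', 'sampleB'], ['sampleC']]
```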
For additional support for species delimitation, we examined alignments for patterns of polymorphisms among ITS sequences from closely related putative species [34] in the G. marginata s.l. complex. Where collection localities of delimited species were near one another, as for many of the B.C. collections, interbreeding between close relatives with different ITS sequence variants would be expected to lead to double peaks in ITS sequences that represent heterozygosity. We examined chromatograms, correcting sequences to note double peaks in areas of otherwise clean sequence, with special attention to sites that were polymorphic across species. We considered that fixed sequence differences between sympatric populations of 10 or more specimens pointed to reproductive isolation.
Amatoxins in the Galerina marginata species complex
We examined the distribution of amatoxins across the Galerina phylogeny (Figs 1 and 2). Of the 62 Galerina samples assayed, all 24 amatoxin-positive samples belonged to G. marginata s.l. in Naucoriopsis (Figs 1 and 2). We detected amatoxins in dried herbarium samples collected from 2004 to 2013 (S2 Table). Quantification was more difficult in samples from some of the herbarium specimens than others due to high background noise in the chromatograms. When amatoxin was detected, its concentration showed no obvious correlation with sample age (S2 Table).
The 24 samples that were positive for α-amanitin fell into two delimited species within G. marginata s.l.: G. venenata and G. castaneipes. Average amatoxin concentrations in G. venenata were significantly higher than the toxin concentration in G. castaneipes at P < 0.05 (t-value = 2.56; p-value = 0.018; Cohen's d = 1.1). The average toxin concentration from the nine G. venenata samples was 1.58 mg/g dry weight or (assuming that 88% of fresh samples was water, p. 75 in Walton [35]) ~189 μg/g estimated wet weight (S2 Table). Based on expected HPLC retention times, all nine G. venenata samples also contained β-amanitin. The average toxin concentration from 14 G. castaneipes samples was 0.99 mg/g dry weight or (assuming 88% of fresh weight is water) ~117 μg/g estimated wet weight. A peak with the expected retention time for amatoxin appeared to be present but could not be quantified in one of the 15 samples of G. castaneipes, and for two additional G. castaneipes samples, toxin concentrations were too low to quantify in at least one of the replicated measurements. Nine of the 14 G. castaneipes samples contained β-amanitin. In two samples, the presence of a β-amanitin peak was ambiguous. Four samples of G. castaneipes showed no trace of β-amanitin.
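The species comparison can be reproduced with a standard two-sample test. In the sketch below, the concentration arrays are hypothetical placeholders (the real values are in S2 Table); only the test, the effect-size calculation, and the dry-to-wet conversion mirror the analysis reported above.

```python
# Two-sample comparison of amatoxin concentrations between two species.
# The concentration values below are placeholders, not the published data.
import numpy as np
from scipy import stats

venenata = np.array([1.4, 1.7, 1.5, 1.6, 1.8, 1.5, 1.6, 1.7, 1.4])  # mg/g dry wt
castaneipes = np.array([0.9, 1.1, 1.0, 0.8, 1.2, 0.9, 1.0, 1.1,
                        0.9, 1.0, 0.8, 1.1, 1.0, 0.9])

t, p = stats.ttest_ind(venenata, castaneipes, equal_var=True)

# Cohen's d with a pooled standard deviation
n1, n2 = len(venenata), len(castaneipes)
pooled_sd = np.sqrt(((n1 - 1) * venenata.var(ddof=1) +
                     (n2 - 1) * castaneipes.var(ddof=1)) / (n1 + n2 - 2))
d = (venenata.mean() - castaneipes.mean()) / pooled_sd

# Dry-to-wet conversion assuming 88% water content, as in the text:
# mg/g dry weight -> ug/g dry weight -> x0.12 -> ug/g estimated wet weight
wet = venenata.mean() * 1000 * 0.12
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}, ~{wet:.0f} ug/g wet weight")
```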
Amatoxins were not found in any of the genera closely related to Galerina; amatoxins were not detected in the four Gymnopilus spp., the three samples of Hebeloma, or the sample of Flammula alnicola (Figs 1 and 2). We did not detect α- or β-amanitin in Galerina badipes F27620, which represents the sister clade to the G. marginata complex within Naucoriopsis (Fig 2 and S1 Fig). No amatoxins were detected among 37 Galerina samples representing the diversity of sections outside of Naucoriopsis (Fig 3).
Molecular and morphological identification of toxic Galerina
Herbarium specimens were accurately identified to Galerina and its infrageneric groups (S3 Table), based on morphological identifications later confirmed by DNA barcoding. Importantly, collections of the toxin-containing G. marginata s.l. were usually correctly identified to this clade, and all those tested had been recognized as members of Naucoriopsis. This is encouraging evidence that toxic galerinas can be distinguished from other mushrooms in cases of accidental ingestion and possible poisoning, albeit with some level of expertise and with the use of microscopic characters.
Phylogenies show that many putative Galerina species recognized by ABGD are monophyletic, many with >70% bootstrap support (Fig 2, S1-S3 Figs; S1 and S3 Tables). However, within each infrageneric group, the application of names to species-level clades is inconsistent (S1 Fig). The inconsistency of species-level identifications, even by specialists in the genus, points to the lack of congruence between morphological characters and genetically defined species.
Of the toxin-containing species, Galerina castaneipes (Figs 3a, 4a and 4b), as delimited by ABGD, appears monophyletic in all analyses (S1-S4 Figs). It includes the type specimen G. castaneipes AH Smith 55523, collected on rotting oak wood in Grants Pass, Oregon. Although conifer wood is more common in the region, all of the other 20 collections of G. castaneipes identified by sequencing come from the southeastern tip of Vancouver Island, British Columbia, and, where wood type was recorded, from rotting hardwood, Quercus garryana or Arbutus menziesii.
Galerina venenata contains A.H. Smith's 1953 type specimen of that species and is common among North American and European collections (Figs 2, 3b, 4c and 4d, S1 Fig). A.H. Smith's 1958 type of G. cinnamomea var. cinnamomea falls within the same clade. The G. venenata clade appears monophyletic in the RPB2 tree (S3 Fig) but not in the ITS or concatenated trees with better taxon sampling (Fig 2 and S1 Fig). Collection localities of the UBC specimens of G. venenata and G. castaneipes overlapped, suggesting that parental mycelia of the two species would have had opportunities to interbreed. However, the alignment of the ITS regions shows three sites with fixed differences between the two species and little evidence of continuing genetic exchange in the form of shared ITS polymorphisms (S5 Fig). Three sequences from collections identified as species from outside G. marginata s.l. appeared in the G. venenata clade. Of these, UBC F27894 and UBC F22840 were initially identified as G. badipes, and UBC F24580 was identified as G. jaapii. On reexamination, all three specimens had predominantly 4-spored basidia, characteristic of G. venenata, rather than the 2-spored basidia characteristic of G. badipes and G. jaapii. The specimen UBC F24580 had a few pleurocystidia; this character and the shape of its cystidia led to its re-identification as G. venenata.
We label one clade "G. marginata" in the absence of another name that would apply to the group. No type specimen of G. marginata is available to clarify the application of the name. The clade receives 87% bootstrap support, but to be monophyletic it would have to include specimen G. marginata UWODD6MO221929, designated by ABGD as a different species (S1A Fig). Specimens identified as G. marginata appear in four of the putative species of G. marginata s.l. In both G. marginata s.l. and subgenus Galerina, the number of monophyletic putative species is greater than the number of species names applied to collections, and the application of species identifications appears almost to be random within delimited species (S1 Fig). In subgenus Galerina, four names are applied to collections, but eight putative species are delimited by ABGD (S3 Table). Other than the G. alpestris clade, each delimited species includes specimens with two or more different herbarium identifications. The clade we label as 'G. vittiformis' includes a paratype of Smith's G. vittiformis var. bispora and specimens from N. America, Norway and Greenland. It is unclear whether this clade would also include the European type of G. vittiformis. Four clades labeled here as 'Galerina aff. vittiformis sp. 2-5' received over 90% bootstrap support each.
Pattern of confused application of names to species in non-toxic clades
Application of species names is similarly problematical in the other subgenera.
Galerina infrageneric clades
Several Galerina infrageneric clades, variously considered as subgenera, sections, or stirpes in previous publications (see S3 Table and Gulden et al. [13]), receive strong support from the concatenated data. The divergence order of taxa at the base of Naucoriopsis is unsupported, but a core clade in Naucoriopsis that includes G. jaapii and G. castaneipes receives 79% bootstrap support (S1 Fig).
Toxin-producing Galerina species are in sect. Naucoriopsis
All known producers of amatoxins in Galerina fall into subgenus Naucoriopsis, and most are in Galerina marginata s.l. This includes the 24 specimens that we identified by sequence data as G. castaneipes and G. venenata, all containing detectable amatoxin quantities. Accurate quantification from the dried specimens was difficult in some cases due to unidentifiable background peaks in chromatograms, possibly attributable to products of tissue breakdown before drying was complete. The estimated concentrations of amatoxin in fresh samples, 189 μg/g in G. venenata and 116 μg/g in G. castaneipes, are comparable to the 78-244 μg/g fresh weight that Enjalbert et al. [1] reported from 27 samples of specimens in the G. marginata complex. They are also comparable to amatoxin concentrations ranging from 172-367 μg/g fresh weight in Amanita phalloides [1].
Also in G. marginata s.l., in Naucoriopsis, and reported as toxin-positive [36], Galerina sulciceps is a tropical species found in greenhouses. Toxin tests and DNA sequence barcodes are not yet available for the same collection of G. sulciceps. The ABGD delimitation shows that the sequence from a single collection of the species is distinctive enough to be delimited, along with G. physospora, in a species separate from G. marginata, G. castaneipes and G. venenata. Because G. physospora is close to, if not synonymous with, G. sulciceps, it seems likely to also contain amatoxins, as does G. patagonica, also in the G. marginata s.l. species complex, based on similar reasoning.
Three other species reported in the literature as toxin-positive, Galerina beinrothii [16], G. helvoliceps and G. fasciculata [14,15], could not be included in our molecular analyses due to the lack of DNA sequence data. Galerina beinrothii [37] and G. fasciculata [38] were originally described as close to G. marginata. Smith and Singer [12] similarly placed G. helvoliceps near G. marginata. These results further support our conclusion that the amatoxin-producing Galerina species are found within the G. marginata s.l. species complex in subgenus Naucoriopsis.
While we detected α-amanitin in all samples tested from G. marginata s.l., β-amanitin was consistently present in the nine G. venenata samples but was undetectable in four of 15 G. castaneipes (S2 Table). Tyler and Smith [10] detected β-amanitin in North American samples in the initial discovery of amatoxins in Galerina. Besl et al. [16] detected β-amanitin in all samples assayed that contained α-amanitin. However, Luo et al. [17] did not detect β-amanitin or a gene encoding it in the published genome of G. marginata CBS 339.88 [39], which, based on its ITS sequence (GenBank MH862132.1), falls in G. venenata. The β-amanitin toxin appears to be genetically encoded in Amanita [40,41]. Sgambelluri et al. [30] speculated that some toxin-producing fungi contain an enzyme such as a deaminase that could convert the asparagine in α-amanitin to the aspartic acid in β-amanitin. Walton [35] (p. 75) suggested that low levels of β-amanitin peaks may also be an artifactual deamination product of α-amanitin breakdown, but that the levels of β-amanitin reported by Enjalbert et al. [1] are much too high to be explained by this phenomenon.
Toxin status in G. badipes (sect. Naucoriopsis) is uncertain. Galerina badipes is the only Galerina species outside of G. marginata s.l. that is reported to contain amatoxins, but we did not detect α- or β-amanitin in our sample of G. badipes. Besl et al. [16] detected γ-amanitin, a post-translational variant of α-amanitin [35]. Post-translational conversion of α-amanitin to γ-amanitin could explain why neither α- nor β-amanitin has been detected in G. badipes mushrooms, even though Luo et al. [17] detected the genes necessary for α-amanitin synthesis in a mycelial culture of the species. Further, RNA blotting showed a much weaker α-amanitin signal from G. badipes compared with G. marginata [17]. We note, however, that Luo et al. did not test for amatoxin presence using HPLC/MS. A possible explanation that is consistent with our results and those of Besl et al. [16] is that in G. badipes, α-amanitin may be present but below detection limits. We believe that the UBC F27620 collection of G. badipes is correctly identified because its sequence matches others from G. badipes in Gulden et al.'s [13] study. Toxins in vouchers of G. badipes from across its geographical range should be analyzed. Given the confusing results, G. badipes has to be presumed to be toxic when implicated in accidental ingestions.
We did not test other members of sect. Naucoriopsis such as G. jaapii, which may be restricted to Europe, or other species such as Galerina triscopa that appear to be related to sect. Naucoriopsis, although with bootstrap support <50%. While this study adds to the evidence that amatoxins evolved once in the common ancestor of the G. marginata species complex, further analysis of additional early diverging Naucoriopsis species could point to an earlier origin or to a more complex pattern of toxin gain and loss.
Potential pharmaceuticals from amatoxins and associated genes from Galerina species
Although best known as toxins, amatoxins and other cycloamanides may also have uses in medical therapies. Amatoxins conjugated with anti-tumor antibodies show potential for treating cancer [35,42-44]. Cyclic peptides with other biological activities may find other uses as pharmaceutical products. Some have desirable pharmaceutical properties such as stability and rapid absorption into the bloodstream [45].
Amatoxins are expensive because they are purified from the mycorrhizal Amanita phalloides mushrooms [45]. Unlike the as-yet-uncultured A. phalloides, saprotrophic Galerina species, including members of Naucoriopsis, grow at least slowly in culture, yielding 0.5-1 mg amatoxin/g dry weight [17]. Isolating a wider range of Galerina species in pure culture may lead to the discovery of strains that grow faster and produce more amatoxin. Genetic engineering may expand the range of useful cycloamanides produced from Galerina species' genes. Sgambelluri et al. [45] expressed POPB, encoding the enzyme prolyl oligopeptidase B, important in post-translational processing of amatoxins [17], from G. marginata in Saccharomyces cerevisiae to catalyze the cyclization of 100 different straight-chain peptide substrates, ranging from 7-16 residues, into cycloamanide configurations. The POPB genes from other Galerina species may further expand the range of potentially therapeutic cycloamanides.
Morphological and ecological characteristics to recognize toxin-producing Galerina in poisoning cases
Mushroom poisoning by amatoxins is difficult to diagnose because it takes two to four days after ingestion before serious symptoms appear. Basidiospores and cystidia can survive cooking or ingestion and should be sought in stomach contents or the remains of a meal containing the mushroom if poisoning is a possibility. Individual mushrooms may be atypical of their genus or species; different species often grow in close proximity, and a patient may have eaten a mixture of different mushroom species. Despite these caveats, a combination of habitat, mushroom size and habit, and microscopic characters allows for recognition of Galerina and of the toxic species in sect. Naucoriopsis [46] (Table 1, Fig 4).
Evolutionary relationships of clades within Galerina
The RPB2 data contributed here improve the resolution of infrageneric relationships within Galerina. In contrast to phylogenies in Gulden et al. [13], our trees from concatenated data support Galerina sections Naucoriopsis and Galerina as sister clades, consistent with their shared microscopic features [11]. Also consistent with morphology, trees that include the new RPB2 sequences remove various other genera (Phaeocollybia, Agrocybe, Alnicola, Hebeloma, Flammula) from the nested positions within Galerina that they take in the LSU gene trees in S4 Fig and in Gulden et al. [13].
On the other hand, this study, like Gulden et al. [13], shows Gymnopilus spp. evolving from within Galerina subgenus Mycenopsis. Gulden et al.'s analysis supported this relationship with a posterior probability of 1.0 from LSU data. With a smaller sample of Gymnopilus and Galerina but with RPB2 as well as rDNA data, Matheny et al. [19] showed the same nested relationship. Gymnopilus and Galerina share spore characters including shape, ornamentation, the presence of a plage and a dextrinoid reaction, and their cystidia may be similar in form, providing support for a recent shared ancestry [13,47]. Still problematical, and in need of analysis from more loci, is the unsupported sister relationship between a clade of Psilocybe species and five Galerina species that form subgenus Sideroides.
[Table 1: characters for recognizing G. marginata s.l. versus other Galerina species; only fragments are recoverable here (full table at https://doi.org/10.1371/journal.pone.0246575.t001). Cystidia, other species: various; can be similar to G. marginata s.l., in others with a more or less inflated tip, or 'tibiiform', bone-shaped with a thin, well delimited neck between an expanded base and tip; in some species only at gill edges, not on gill faces. Spores, other species: various, some as in G. marginata s.l.; others differ in shape or ornamentation, or are completely smooth or lack a dextrinoid reaction. Habitat, G. marginata s.l.: on rotting wood, turf, grass or moss. Habitat, other species: often in moss, some on rotten wood and herbs.]
Conclusion
This study combines a multi-locus sequence phylogeny with HPLC/MS toxin analysis data.
The identifications of herbarium specimens to species correlated poorly with genetic species in this study, as in previous analyses [13,18], possibly because keys based on morphology fail to capture the amount of within- and among-species morphological variation. In spite of this, at a higher taxonomic level, specimens are reliably identified as members of Naucoriopsis, the clade of species that produce toxins. Prompt morphological identification should enable recognition of likely amatoxin-containing mushrooms, speeding diagnosis and treatment for patients who have ingested these deadly toxic mushrooms.
Fig 1. All 24 toxin-positive mushroom specimens are in subgenus Naucoriopsis of Galerina. We assayed for toxins in 70 collections representing 17 species of Galerina and 8 species in related genera. Each fraction is the number of samples positive for α-amanitin over the total number of specimens tested. Clade colors correspond to Galerina subgenera or to species of Gymnopilus and Psilocybe that appear nested in Galerina (S3 Table). https://doi.org/10.1371/journal.pone.0246575.g001
Fig 3. Toxin containing specimens in Galerina subgenus Naucoriopsis are shown in the top row; in the lower row are examples of species in the non-toxin producing subgenera. Each species name is followed by the specimen's UBC voucher accession number; the Mushroom Observer photograph accession number; and, in italics, the name of the subgenus that includes the species. (a, b) Specimens producing positive tests for amatoxins. (a) G. castaneipes F28078 MO119849, Naucoriopsis. White arrow points to the inrolled cap margin in a young mushroom. (b) G. venenata F26281 MO153552, Naucoriopsis. Black arrows point to membranous rings around the stems. (c) G. nana F25541 MO102538, Naucoriopsis (affiliation is uncertain). (d) G. atkinsoniana F28226 MO137762, Galerina. (e) G. dimorphocystis F25868 MO129940, Tubariopsis. (f) G. subcerina F25303 MO84732, Mycenopsis. (d, e) Specimens not tested, but ITS sequences match specimens without detectable toxins. (f) Specimen tested, no toxins detected. Scale bar (f) is 1 cm. Scales are not available for the other images, but estimating from the mosses and cone, caps on mushrooms (a, b) are up to ~3 cm wide. Caps on mushrooms (c-f) are ~1 cm or less wide. https://doi.org/10.1371/journal.pone.0246575.g003
Fig 4. Microscopic characters of the toxic Galerina marginata complex include brown, minutely roughened spores with a plage and bottle shaped cystidia. Although not specific to toxic Galerina species, these characters in any ingested mushrooms justify medical action to mitigate possible poisoning by amatoxins. (a-d) Basidiospores. (a, b) G. castaneipes F26244. (c-e) G. venenata, (c) F26281, (d) F18374, (e) cystidium of F26281. The alphanumeric codes are each specimen's UBC voucher accession number. Arrows designate the plage, the smooth area on the adaxial side of the spore just above the apiculus (arrowheads). Scale bars, 10 μm. Spores are all to the same scale. https://doi.org/10.1371/journal.pone.0246575.g004

Galerina sect. Galerina appears as the sister group to Naucoriopsis, with 95% bootstrap support from RPB2 (S3 Fig) and 76% support from the concatenated dataset (S1 Fig). Section Tubariopsis appears as sister to the clade comprising Naucoriopsis and Galerina, although with <50% bootstrap support (S1 Fig). Gymnopilus species are consistently nested within Galerina subgenus Mycenopsis in each individual gene tree (S2-S4 Figs) and the concatenated tree (Fig 2 and S1 Fig). A subset of species of Mycenopsis share a most recent common ancestor with Gymnopilus with 88% bootstrap support, and the clade including all Mycenopsis and Gymnopilus species receives 66% bootstrap support (S1 Fig). The clade of five Galerina species from Sideroides receives 98% support from concatenated data, but it is distantly related to the other Galerina species and instead appears, without strong support, as sister to Psilocybe (Fig 2, S1 Fig). The phylogeny of RPB2 sequences (S3 Fig) shows greater resolution and overall higher support levels for relationships among Galerina species compared with the phylogenies from the LSU (S4 Fig). With very low support values, the LSU phylogeny shows Galerina as highly paraphyletic, intermingled with other genera including Agrocybe, Hebeloma, Psilocybe and Cortinarius.
S1 Fig. Phylogeny showing Galerina collections tested for amatoxins with species delimitations and country of provenance. In this maximum likelihood tree, numbers at nodes represent bootstrap support >50% from concatenated ITS, LSU and RPB2 data. Support values are omitted from some deeply nested clades due to graphic constraints. Light grey boxes show monophyletic, delimited Galerina species. Darker grey boxes show delimited but paraphyletic species. A species/clade name is given in each box. Sequence names from original identifications are followed by a voucher identifier. Where applicable, the number of collections from the same country with the same sequence is given in parentheses. +TOX in magenta, α-amanitin is present; -TOX in green, no amanitins were detected. Vertical lines designate subgenera as follows: black, G. marginata s.l.; solid purple, Naucoriopsis; dashed purple, possible Naucoriopsis; green, Galerina; blue, Tubariopsis; gold, Mycenopsis; red, Sideroides. Orange designates Gymnopilus spp. nested within Galerina. (DOCX)
S2 Fig. Phylogeny of ITS sequences. In this maximum likelihood tree of 368 sequences, thickened branches represent bootstrap support >50% from ITS data. Branch thickening is omitted from some deeply nested clades due to graphic constraints. Light grey boxes show monophyletic delimited Galerina species. Darker grey boxes show delimited but paraphyletic species. Sequences that are not boxed were less than 500 bp in length and not included in ABGD species delimitation. A species/clade name is given in each box. Sequence names from original identifications are followed by a voucher identifier and preceded by a number to help locate the same voucher in the RPB2 and LSU gene trees. Vertical lines designate subgenera as follows: black, G. marginata s.l.; solid purple, Naucoriopsis; dashed purple, possible Naucoriopsis; green, Galerina; blue, Tubariopsis; gold, Mycenopsis; red, Sideroides. Orange designates Gymnopilus spp. nested within Galerina.
S3 Fig. In this maximum likelihood tree with 78 taxa, numbers at nodes represent bootstrap support >70% from RPB2 data. Support values are omitted from some deeply nested clades due to graphic constraints. Light grey boxes show monophyletic, delimited Galerina species. A species/clade name is given in each box. Sequence names from original identifications are followed by a voucher identifier and preceded by a number to help locate the same voucher in the ITS and LSU gene trees. Vertical lines designate subgenera as follows: solid purple, Naucoriopsis; dashed purple, possible Naucoriopsis; green, Galerina; blue, Tubariopsis.
"Biology",
"Environmental Science"
] |
Differentiation Between Organic and Non-Organic Apples Using Diffraction Grating and Image Processing—A Cost-Effective Approach
As the expectation for a higher quality of life increases, consumers have higher demands for quality food. Food authentication is the technical means of ensuring food is what it says it is. A popular approach to food authentication is based on spectroscopy, which has been widely used for identifying and quantifying the chemical components of an object. This approach is non-destructive and effective but expensive. This paper presents a computer vision-based sensor system for food authentication, i.e., differentiating organic from non-organic apples. The sensor system consists of low-cost hardware and pattern recognition software. We use a flashlight to illuminate apples and capture their images through a diffraction grating. These diffraction images are then converted into a data matrix for classification by pattern recognition algorithms, including k-nearest neighbors (k-NN), support vector machine (SVM) and three partial least squares discriminant analysis (PLS-DA)-based methods. We carry out experiments on a reasonable collection of apple samples and employ appropriate pre-processing, achieving a highest classification accuracy of 94%. Our studies conclude that this sensor system has the potential to provide a viable solution for empowering consumers in food authentication.
Introduction
Food authentication is a process to analyze the composition of food to ensure the food is "what it says on the tin". It is increasingly needed in many areas of science, agriculture and business for various reasons, including the growing demand for high-quality food products. Classical chemistry analyzes materials through chemical reactions that mostly occur in the liquid phase, which is generally expensive, time-consuming and requires professional laboratory techniques for food authentication. This approach is therefore unsuitable for routine authentication in consumer markets and unable to effectively control food fraud, such as organic food mislabeling and mixing.
In the last decade, there has been a growing trend towards fast and non-destructive approaches for food authentication. A popular approach is to differentiate one food type from another by using spectroscopic techniques such as near-infrared (NIR), Fourier-transform infrared (FTIR) and nuclear magnetic resonance (NMR) with the aid of chemometrics (chemical pattern recognition) [1][2][3]. Spectroscopy studies the interaction between material and electromagnetic radiation as a function of light intensity over wavelength or frequency. The specific chemical compositions or physical properties of the material can then be revealed by means of chemometric analysis. This approach has been investigated in many food
Sensor System
The proposed sensor system aims to acquire image data from certain objects, i.e., organic and non-organic apples, by coupling low-cost measurements with computer vision techniques. Using a simple flashlight to illuminate an apple, a diffraction image is generated and captured by a diffraction grating sheet and camera, respectively. Then we apply a series of computer vision techniques, including image pre-processing, segmentation and rainbow generation to convert the diffraction image into a sample vector for analysis. The overall sensor system is shown in Figure 1.
Diffraction Grating and Image Acquisition
Diffraction gratings are an essential optical component in many fields such as spectroscopy, laser techniques and optical communication. When polychromatic light reflects from or passes through a diffraction grating, it disperses into several rays travelling in different directions, and each ray contains a unique wavelength or color. If we use a flashlight to illuminate an apple, spectra are generated on both sides of the apple, and these are severely dispersive (see Figure 2a). A diffraction grating can equidistantly split a spectrum by wavelength and produce a rainbow color spectrum (also called a rainbow, shown in Figure 2b) if a wide-spectrum light source is used. As different chemical elements (or compounds) have unique spectra which differ in the composition of their spectral lines, it is possible to detect the presence of a specific element by analyzing the intensity of a spectral line. In this paper, we choose a 60 × 40 mm diffraction grating sheet (less than 3 US dollars) for the experiments. According to Figure 2c, rainbow images of an apple are generated via the diffraction grating sheet, and their color spectra are symmetrically distributed.
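As background not stated explicitly above, the angular dispersion of a grating follows the standard grating equation, where d is the groove spacing, m the diffraction order and λ the wavelength:

d·sin(θm) = m·λ

Each wavelength thus exits the grating at its own angle θm, which is why the grating spreads the reflected light into a spatially resolved 'rainbow', and why the first-order spectra (m = ±1) appear symmetrically on either side of the apple.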
To reduce the influence of ambient light and produce images of high quality, the experiment was conducted in a dark environment. No apple had surface contamination or damage, and no surface preparation was carried out prior to image acquisition. We place a flashlight 20 cm away from the apple and set the diffraction grating sheet right behind the flashlight, which ensures that the light source is effectively focused on the apple surface. After generating rainbow images, a smartphone is used to photograph the whole experimental environment.
The Proposed Framework
To capture a single rainbow image, the background needs to be removed. Image segmentation is the process of detecting objects or interesting areas in an input image, and it plays a key role in object recognition. We propose a framework which can efficiently extract one rainbow image, as shown in Figure 3. We use pre-segmentation techniques to pre-process the image, including grayscale processing and mathematical morphology. Then we apply a median filter to denoise the image. By using the Otsu method [21], a single rainbow image is extracted from the original image and converted into color histogram vectors in RGB color space. Figure 4 shows the original image and the images processed by the above procedures.
Image Denoising
Digital images are generally affected by noise from the imaging instrument and the external environment during digitization and transmission. In this paper, we use the median filter for image denoising. The main idea of median filtering is to replace the value of a point in a digital image or digital sequence with the median of neighboring points. As a result, the surrounding pixel values are close to their real values and isolated noise points are eliminated. Median filtering is especially useful for denoising because it can efficiently preserve edges [22]. After image denoising, to retain the maximum color information, we used the hue, saturation and value (HSV) color model, which is close to human visualization, for preliminary processing of the rainbow image.
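As an illustrative sketch of this denoising step, the snippet below applies a median filter and then converts to HSV; the 5 × 5 kernel size and the file names are assumptions, not the authors' exact settings.

```python
# Sketch of the denoising step: median filtering, then HSV conversion.
# Kernel size (5x5) and the input path are assumptions for illustration.
import cv2

img = cv2.imread("rainbow_raw.png")              # BGR image from the camera
denoised = cv2.medianBlur(img, 5)                # replace pixels with 5x5 neighborhood median
hsv = cv2.cvtColor(denoised, cv2.COLOR_BGR2HSV)  # HSV is closer to human vision
cv2.imwrite("rainbow_hsv.png", hsv)
```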
The Otsu Method
The Otsu method [21] is an automatic image segmentation algorithm which finds a threshold that minimizes the weighted within-class variance. In this paper, we use the Otsu method to distinguish a rainbow image from the background due to its simplicity and effectiveness. It first converts input images to grayscale and counts the number of gray levels K (0 < K < 255). Pixels are then divided into a background class C1 and an object class C2 by a threshold T, where C1 and C2 cover the intervals [1, T] and [T + 1, K], respectively. Defining the total number of pixels as L and the probability of appearance of grayscale level i as μi, the algorithm is summarized as follows:
Calculate the probability of appearance of C1: ω1(T) = Σ_{i=1..T} μi
Calculate the probability of appearance of C2: ω2(T) = Σ_{i=T+1..K} μi
Calculate the average grey level of C1: m1(T) = (1/ω1(T)) Σ_{i=1..T} i·μi
Calculate the average grey level of C2: m2(T) = (1/ω2(T)) Σ_{i=T+1..K} i·μi
Calculate the sum of the average grey levels: m = ω1(T)·m1(T) + ω2(T)·m2(T)
Calculate the interclass variance: g(T) = ω1(T)·(m1(T) − m)² + ω2(T)·(m2(T) − m)²
The optimal threshold T can be obtained when g(T) achieves the maximum value. Figure 5 shows two apples from different classes (organic vs. non-organic) and their rainbow images. Basically, the two apples cannot be visually identified by their physical appearance. If we compare the extracted rainbow images, it is still difficult to tell the difference between the organic and non-organic apples merely based on the radiance or the rainbow size, because specific distinctions may be caused by instrumental and experimental artifacts. In order to differentiate organic apples from non-organic ones precisely, we convert each rainbow image into a sample vector for further analysis.
Feature Vector Representation
Here, we convert the obtained rainbow images to feature vectors in red, green, and blue (RGB) color space in order to represent color information. The RGB color space generally uses superimpositions of three primary colors to form a diversity of colors. Specifically, it can be represented as a unit cube in a three-dimensional Cartesian coordinate system, where colors are linear combinations of the three primary colors at different ratios [23]. This can be represented as F = W1·R + W2·G + W3·B, (8) where F is a certain color, and W1, W2, and W3 are the ratios of red, green, and blue color luminance, respectively. In this work, we set the values of W1, W2, and W3 to 0.2, 0.7, and 0.1, respectively, by trial and error. As each rainbow image (100 by 100 pixels) is comprised of RGB color channels, the three image matrices can be transformed into a data matrix according to Eq. (8). We calculate the mean of each row and obtain a 1-by-100 feature vector. The framework of converting rainbow images into feature vectors and the obtained raw data are shown in Figures 6 and 7a, respectively.
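A minimal sketch of this conversion, with the weights from the paper and an assumed (height, width, channel) array layout:

```python
import numpy as np

def rainbow_to_feature(img_rgb):
    """img_rgb: (100, 100, 3) array of RGB values; returns a 1-by-100 feature vector."""
    w = np.array([0.2, 0.7, 0.1])              # W1, W2, W3 from the paper
    f = img_rgb.astype(float) @ w              # weighted sum of the R, G, B channels -> (100, 100)
    return f.mean(axis=1)                      # mean of each row -> 100-element vector
```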
The Nonlinear Problem
From Figure 7a, samples from different apple species present an intuitive distinction within variables ranging from 40 to 60, which shows the three species can be visually identified via the above procedures. However, if we aim to classify these samples as organic or non-organic, this figure can barely provide enough distinction, and the classification decision requires further data analysis. To obtain an overview of the distinctions of species and types, the raw apple data were subjected to principal component analysis (PCA), as shown in Figure 8. Different species of apple are well separated, while organic and non-organic samples are nonlinearly distributed. We attempt to discover high-performance classification rules for differentiating organic apples from non-organic ones by using a pattern recognition framework.
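A brief sketch of the PCA projection behind Figure 8; scikit-learn is assumed, and the random matrix stands in for the real apple data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 100))                 # placeholder for the raw apple feature matrix
scores = PCA(n_components=2).fit_transform(X)   # 2-D coordinates used for the species/type scatter plot
```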
Pattern Recognition Framework
The pattern recognition framework for the classification of organic and non-organic apple data consists mainly of pre-processing, modelling, validation, and classification procedures, as shown in Figure 9. It firstly applies pre-processing techniques to reduce noise effects and unwanted variations in the raw data which are caused by instrumental and experimental artifacts. Suitable pre-processing can decrease the error rate and complexity of classification models. Detailed information about pre-processing the apple data is provided in Section 4. The modelling procedure then uses classification algorithms to reveal the relationship between training data and their corresponding classes. To achieve the highest classification accuracy possible, the parameter(s) of a classifier need to be optimized in the validation procedure. Finally, the optimal model is selected to classify the testing data. Figure 9. A pattern recognition framework for classifying organic and non-organic apple data.
Classifiers
Five classification algorithms, namely k-NN, SVM, PLS-DA, KPLS-DA, and the LW-PLS classifier (LW-PLSC), were applied to classify the raw and pre-processed apple data. Among these algorithms, k-NN and PLS-DA are baseline methods commonly used in pattern recognition and chemometrics which yield simple and effective models and are computationally efficient. SVM, KPLS-DA, and LW-PLSC are comparably complex in modelling and generally provide better prediction performance under nonlinear conditions. These algorithms are summarized in the following sections.
k-Nearest Neighbors (k-NN)
The k-NN classification algorithm is a popular method which classifies a query depending on the classes of its neighboring samples. If most of the k closest samples belong to a certain class, the query is assigned to this class. Specifically, k-NN directly assigns the query to the class of its nearest neighbor when k equals 1. The k-NN method is theoretically simple but degrades in performance under high-dimensional conditions if the metric is based on Euclidean distance [24].
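A minimal usage sketch (scikit-learn assumed; the data are placeholders):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X_train = np.random.rand(40, 100)               # placeholder training vectors
y_train = np.random.randint(0, 2, 40)           # placeholder organic / non-organic labels
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean").fit(X_train, y_train)
pred = knn.predict(np.random.rand(3, 100))      # each query takes the majority class of its 5 nearest samples
```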
Support Vector Machine (SVM)
The SVM classifier aims to find an optimal hyperplane which correctly separates the samples of the different classes while maximizing the shortest distances from the hyperplane to the nearest samples for each class [25]. It can be extended to nonlinear classification by mapping the input data into feature space via kernel functions.
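An analogous sketch for the SVM; note that scikit-learn does not ship the Pearson VII (PUK) kernel used later in this paper, so an RBF kernel stands in here purely for illustration:

```python
import numpy as np
from sklearn.svm import SVC

X = np.random.rand(40, 100)                # placeholder data
y = np.random.randint(0, 2, 40)
clf = SVC(C=4.0, kernel="rbf").fit(X, y)   # kernel maps inputs into feature space; the margin to the nearest samples is maximized
```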
Partial Least Squares Discriminant Analysis (PLS-DA)
PLS regression is a standard method for processing chemical data which assumes the investigated system is driven by a set of underlying latent variables (LVs, also called latent vectors, score vectors, or components). It extracts LVs by projecting both X and Y onto a subspace such that the pairwise covariance between the LVs of X and Y is maximized. To ensure the mutual orthogonality of the LVs, this procedure is carried out iteratively using a deflation scheme which subtracts from X and Y the information explained by their rank-one approximations based on the score vectors. PLS-DA adapts PLS regression for classification by transforming categorical responses into numerical responses using dummy matrix coding [26].
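The dummy-coding idea can be sketched as follows, assuming scikit-learn's PLSRegression; the authors' implementation may differ:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pls_da_fit_predict(X_train, y_train, X_test, n_components=2):
    classes = np.unique(y_train)
    Y = (y_train[:, None] == classes[None, :]).astype(float)  # dummy matrix coding of the classes
    pls = PLSRegression(n_components=n_components).fit(X_train, Y)
    Y_hat = pls.predict(X_test)
    return classes[np.argmax(Y_hat, axis=1)]                  # assign the class with the largest response
```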
Kernel Partial Least Squares Discriminant Analysis (KPLS-DA)
KPLS-DA maps the original data X into a Hilbert feature space F (ϕ: R^d → F) and then constructs a PLS-DA model for classification. The mapping procedure is performed implicitly by a kernel function which calculates the similarity between two sample vectors. In this paper, we use the Gaussian kernel due to its efficiency, which in its standard form is k(x_i, x_j) = exp(−‖x_i − x_j‖^2/(2σ^2)), where σ is the Gaussian width of the kernel function.
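The kernel-matrix computation can be sketched as follows (NumPy; the normalization follows the standard form given above, as the paper's exact convention is not recoverable from the text):

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    # pairwise squared Euclidean distances between rows of X and rows of Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))    # K[i, j] = k(x_i, z_j)
```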
Locally Weighted Partial Least Squares Classifier (LW-PLSC)
LW-PLSC is a JIT method proposed in our recent work for modelling nonlinear data, which extends LW-PLS to classification. For a given query, it respectively enlarges and lessens the influence of neighboring and remote samples on a PLS-DA model; as a result, the global nonlinearity is lessened. Two parameters of LW-PLS, the localization parameter ϕ and the number of LVs, control the sample weights and the model complexity, respectively.
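The locally weighted idea can be sketched as follows; the exact weighting function of LW-PLSC is not given in this text, so an exponential function of the query-sample distances, scaled by the localization parameter ϕ, is assumed here:

```python
import numpy as np

def local_weights(X_train, query, phi):
    d = np.linalg.norm(X_train - query, axis=1)    # distance of each training sample to the query
    return np.exp(-d / (phi * (d.std() + 1e-12)))  # larger phi flattens the weights (a more global model)
```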
Performance Evaluation
We firstly partition the apple data into training and testing sets by using DUPLEX splitting [27] with a ratio of 2:1. DUPLEX splitting maintains the same diversity in both sets, so that the data in each set follow the statistical distribution of the overall data [28]. Then, leave-one-out cross validation is implemented to obtain the optimal parameter(s) of each algorithm and the validation accuracy. With the selected optimal parameter(s), each algorithm constructs a classification model on the training set and uses it to predict samples from the testing set. The classification results are finally evaluated by overall accuracy and per-class accuracy.
Data Pre-Processing
As the image data collected by our sensor system are a new type of data, to the best of our knowledge, there is no report in the literature on how to pre-process such data. We adopt typical pre-processing techniques used in spectroscopy and signal processing to improve the classification performance as well as the simplicity of the models, including smoothing, baseline correction, and normalization. The basic requirement of pre-processing is that it should decrease or maintain the model complexity without significant loss of useful information. After checking these techniques and their combinations, only Savitzky-Golay smoothing [29] (fitted by a polynomial of degree two and a 33-point moving window) clearly improved the validation performance of the five algorithms. Figure 7b shows the effect of smoothing; the validation accuracy of the smoothed data drastically exceeds that of the raw data, as shown in Table 1.
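The winning pre-processing step can be reproduced with SciPy's Savitzky-Golay filter using the stated 33-point window and degree-2 polynomial (the data below are placeholders):

```python
import numpy as np
from scipy.signal import savgol_filter

X_raw = np.random.rand(30, 100)   # placeholder for the 1-by-100 feature vectors, one row per sample
X_smoothed = savgol_filter(X_raw, window_length=33, polyorder=2, axis=1)   # smooth each sample vector
```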
Parameter Optimization and Classification Performance
The optimal parameters of the five algorithms are set by leave-one-out cross validation on the training set. The number of nearest neighbors in k-NN is chosen from 1 to 49 with an interval of 2, while the number of LVs in the PLS-based methods does not exceed 10 to avoid overfitting. We set the value of the penalty parameter in SVM from 1 to 8 and select the Pearson VII kernel function (PUK). A grid search approach is used in KPLS-DA (LV × σ) and LW-PLSC (LV × ϕ). The σ in KPLS-DA is adjusted from 10^-3 to 10^3 on a logarithmic scale, while the ϕ in LW-PLSC varies from 0.1 to 25.
Here, we demonstrate the grid search for the optimal number of LVs and ϕ for LW-PLSC, which is depicted in Figure 10a as a mesh plot. LW-PLSC obtains the peak validation accuracy when the number of LVs and ϕ equal 2 and 15, respectively. We further graphically present the three PLS-based algorithms with varying LVs in Figure 10b by fixing the other parameters (σ in KPLS-DA and ϕ in LW-PLSC) to their optimal values. LW-PLSC drastically outperforms PLS-DA for each LV and achieves the highest accuracy of 94% by selecting only 2 LVs. This demonstrates that locally weighted modelling has low complexity and is efficient in capturing the data nonlinearity. KPLS-DA can also provide a result comparable to LW-PLSC but uses 7 more LVs. The validation results of the five algorithms and their corresponding optimal parameters are provided in Table 1. Both k-NN and SVM obtain around 90% validation accuracy.
The overall classification accuracy and per-class accuracy on the testing set are shown in Table 1. LW-PLSC achieves the highest overall accuracy of 94% among the five algorithms, while SVM and KPLS-DA present the same performance at 92%. PLS-DA yields the least precise result due to the nonlinear distribution of the data. Looking at the classification accuracy of each class, SVM achieves the best result of 92.7% for identifying the non-organic class, whereas KPLS-DA achieves the best result of 100% for identifying the organic class. LW-PLSC obtains comparably balanced accuracies for every class, so it achieves the best overall accuracy of 94%.
Conclusions
This paper presents a new sensor system, which consists of low-cost hardware and pattern recognition software, for food authentication, i.e., differentiating organic apples from non-organic ones. The sensor system can effectively acquire rainbow images from food samples, which are converted into non-standard spectral data. To overcome the instrumental and experimental artifacts in such non-standard spectral data and to address the inherent nonlinearity problem in such data, appropriate and effective pre-processing and locally weighted modelling are adopted. Experiments show that the proposed sensor system achieved the highest classification accuracy of 94% for distinguishing between organic and non-organic apples, demonstrating the potential of the new sensor system as a rapid, non-destructive, and low-cost solution for food authentication. In our future work, we will optimize the hardware components and test the sensor system on a variety of foods.
Author Contributions: J.N. conceived and performed the experiment; S.W. and J.N. wrote the paper; S.W. and J.N. analyzed the data; W.H. suggested the framework of the sensor system and revised the manuscript; G.G. gave some advice to complete the research work; and L.Y. provided assistance in sampling.
Acknowledgments: This work is supported by Fujian science and technology department project (No.
| 6,253.8 | 2018-05-23T00:00:00.000 | [
"Agricultural and Food Sciences",
"Computer Science"
] |
Aczel–Alsina T-norm based group decision-making technique for the evaluation of electric cars using generalized orthopair fuzzy aggregation information with unknown weights
Data management and finding precise outcomes from large amounts of information are among the biggest challenges for scientists. The technique of multi-attribute group decision-making (MAGDM) is a valuable tool for investigating fuzzy data precisely. The key objective of this paper is to redefine the q-rung orthopair (q-RO) fuzzy set (FS) (q-ROFS) in interval-valued terms and to propose new aggregation operators (AOs) based on the Aczel-Alsina (AA) t-norm (TN) and t-conorm (TCN) operations. The AA operational laws are a generalized form of existing TNs and TCNs and give more reliable results because their parametric values can vary. The interval-valued concept enlarges the space of the membership degree (MD) and non-membership degree (NMD) for decision-makers, and taking the qth power yields the interval-valued q-ROFS (IV-q-ROFS) structure. The IV-q-ROFS can handle uncertainty and vagueness in data better than the interval-valued intuitionistic FS (IV-IFS) and the interval-valued Pythagorean FS (PyFS) (IV-PyFS), and provides accurate results. The notion of power AOs (PAOs) exploits the relationships between the aggregated values and their weight vectors and reduces the chances of uncertainty in the aggregated results. By taking advantage of PAOs, this article is devoted to introducing the interval-valued q-ROF Aczel-Alsina power-weighted averaging (IV-q-ROFAAPWA) and interval-valued q-ROF Aczel-Alsina power-weighted geometric (IV-q-ROFAAPWG) operators. The fundamental axioms of AOs, namely idempotency, boundedness, and monotonicity, are also discussed. To illustrate the importance of the suggested AOs, the real-life problem of electric car selection is solved by applying the MAGDM method using the proposed IV-q-ROFAAPWA and IV-q-ROFAAPWG operators. A comparison of the proposed AOs with currently available AOs is also part of the article. Finally, solid conclusions are drawn.
Introduction
Decision-making (DM) problems have always been a hot area in the history of mathematics. In this scenario, many mathematicians have presented different theories. Crisp set theory is an indispensable concept in DM sciences. In crisp set theory, there are only two possible ways to present uncertain information: "Yes" or "No." There is no space for information in which human opinions are involved; this is the main drawback of crisp set theory. The fuzzy set (FS) was introduced in Ref. [1] to eliminate this limitation. An FS can describe human opinion in terms of an MD within the range [0, 1]. Over time, the intuitionistic FS (IFS) was introduced in Ref. [2] by highlighting the deficiencies in FS theory, extending the range of FS by adding the idea of an NMD. The range of an IFS is [0, 1], but the concept of IFS cannot deal with high-range data: for example, the sum of MD 0.6 and NMD 0.7 is 1.3, which violates the IFS constraint. To overcome this type of problem, the PyFS was discussed in Ref. [3]. In a PyFS, the sum of the squares of the MD and NMD always lies within the range [0, 1], and it is a generalized format of the IFS. However, the structures of IFS and PyFS are unable to represent assessment information like (0.9, 0.6) or (0.7, 0.8). To reduce this issue, the q-ROFS was introduced in Ref. [4] by taking the qth power of the MD and NMD. The structure of q-ROFS is considered superior to IFS and PyFS because when we take q equal to 1 and q equal to 2, the q-ROFS turns into the IFS and PyFS, respectively.
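For reference, the membership constraints separating the three models can be written compactly in their standard forms (μ denotes the MD and ν the NMD):

```latex
\begin{aligned}
\text{IFS:}\quad    & 0 \le \mu + \nu \le 1,\\
\text{PyFS:}\quad   & 0 \le \mu^{2} + \nu^{2} \le 1,\\
\text{q-ROFS:}\quad & 0 \le \mu^{q} + \nu^{q} \le 1,\quad q \ge 1.
\end{aligned}
```

Under these constraints, the pair (0.9, 0.6) is inadmissible for IFS and PyFS but admissible for a q-ROFS with q ≥ 3, since 0.9^3 + 0.6^3 = 0.945 ≤ 1.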
The MAGDM algorithm is one of the best procedures to aggregate information and find suitable alternatives. In the literature, many MAGDM algorithms have been defined in various fuzzy environments. For example, an interval-valued IFS for MAGDM problems was defined in Ref. [5], the technique of MAGDM in the IFS framework was proposed in Ref. [6], and the solution of the MAGDM problem using a PyFS environment based on Einstein AOs was given in Ref. [7]. The authors of Ref. [8] provided the thought of an interval-valued PyFS in the soft fuzzy environment for finding the solution of MAGDM issues, and the authors of Ref. [9] provided the solution of MAGDM issues using the linguistic PyFS environment. The concept of decision-making science using the linguistic PyFS was derived in Ref. [10], and a MAGDM methodology using the IV-q-ROFS framework for Maclaurin symmetric mean AOs was discussed in Ref. [11]. The authors of Ref. [12] presented the impression of the MAGDM problem through interval-valued neutrosophic (IVN) information, and the application of MAGDM based on interval-valued hesitant Einstein-prioritized AOs was discussed in Ref. [13]. The authors of Ref. [14] proposed Choquet integral AOs for MAGDM problems, and a practical example related to the MAGDM method based on the t-spherical fuzzy (TSF) set (TSFS) structure is provided in Ref. [15]. The authors of Ref. [16] diagnosed AOs by applying the theory of TSFS for group decision-making.
AOs are the primary tool for the assessment of uncertain and fuzzy information. For MAGDM problems, many AOs based on TNs and TCNs have been defined by researchers, such as the MAGDM method based on prioritized AA picture fuzzy AOs defined in Ref. [17] and the solution of MAGDM based on Maclaurin symmetric AOs introduced in Ref. [18]. The notion of q-ROF Aczel-Alsina AOs (q-ROFAAAOs) was proposed in Ref. [19], and the idea of interval-valued IF variable hybrid weighted AOs was proposed in Ref. [20]. The authors of Ref. [21] proposed power Maclaurin symmetric AOs based on the PyFS framework. Complex TSF PAOs to solve the DM problem were offered in Ref. [22], while the idea of picture fuzzy (PF) Maclaurin symmetric AOs was defined in Ref. [23]. The concept of t-spherical fuzzy (TSF) PAOs was proposed in Ref. [24]. Ullah et al. gave the DM problem solution using interval-valued TSF AOs [25]. The thought of the generalized orthopair FS was given in Ref. [26], and Ref. [27] provided the solution for constructing the Fangcang shelter hospital by applying spherical FS theory. The idea of digitalizing transport systems using the concept of the spherical FS was offered in Ref. [28]. The idea of the complex q-ROFS using AA operations was also discussed in Ref. [28], and Ref. [29] presented the solution of urban transport planning using the concept of decision-making sciences; a decision-making technique for finding suitable path selection for public transportation is also discussed in Ref. [29].
The thought of triangular norms for fuzzy metric spaces was presented in Ref. [30]. Mathematicians have defined many TNs and TCNs to solve MAGDM issues; for example, the AA norm concept was first introduced in Ref. [31] in 1982. The noticeable feature of the AA TN and TCN is the significant priority of the changeability of the parameters. IF soft AOs based on the Einstein TN and TCN were defined in Ref. [32], IF hybrid AOs were proposed in Ref. [33], Einstein geometric AOs based on IFS were given in Ref. [34], and AOs depending on the Archimedean TCN and TN for TSF were defined in Ref. [35]. Entropy-based Hamacher AOs in the IFS framework were proposed in Ref. [36], along with the Dombi TN and TCN [37], the Einstein TN and TCN [14], and the Hamacher TN and TCN [38]. In recent years, the features and related aspects of TNs have been extensively studied [32]. Hamacher AOs for PyFS are defined in Ref. [39]. PAOs for IFS under the Frank TN and TCN were defined in Ref. [40]. The concept of AA operations for the hesitant q-ROFS was defined in Ref. [41], and Ref. [42] discussed the AA laws for the complex spherical FS. The concept of Bonferroni AOs for data aggregation was discussed in Ref. [43], and Ref. [44] proposed the methodology of the parsimonious best-worst technique for the assessment of travel mode.
In the literature, many researchers have solved car selection problems through the MAGDM technique using multiple fuzzy frameworks; for example, the selection of commercial electric vehicles through fuzzy logic was presented in Ref. [45], and an application for finding a suitable electric car using the spherical fuzzy environment was proposed in Ref. [46]. The authors of Ref. [47] developed the methodology for choosing an electric vehicle with high social acceptance, and the authors of Ref. [48] suggested the fuzzy analytic hierarchy method for car selection. Ref. [49] presented a TOPSIS method for finding a suitable charging station for electric cars, and the idea of solar electric car selection using the fuzzy COPRAS model was given in Ref. [50]. The authors of Ref. [51] provided a fuzzy logic-based electric car speed control system, and the authors of Ref. [52] introduced electric vehicles' environmental and economic impacts. Ref. [53] highlighted the importance of electric cars using fuzzy logic approaches, and suitable place selection for charging stations utilizing DM sciences was presented in Ref. [54].
The constructed AOs in IV-q-ROFS environments are practical tools for dealing with ambiguous and fuzzy information and for solving MAGDM problems. IV-q-ROFS is a generalized framework compared to the other interval-valued IFS and interval-valued PyFS frameworks: for q = 1 and q = 2, the structure of IV-q-ROFS turns into the interval-valued IFS and PyFS, respectively. So, it is seen that the structures of interval-valued IFS and interval-valued PyFS are special cases of IV-q-ROFS. Hence, the motivation of the proposed approach is as follows: 1. Develop the notion of IV-q-ROFS and a few operations, then demonstrate their properties. 2. Propose the extended IV-q-ROFAAPWA and IV-q-ROFAAPWG operators and provide their fundamental axioms. The main advantages of the suggested approach are discussed as follows: the proposed work is based on interval-valued q-rung orthopair fuzzy values (IV-q-ROFVs), giving decision-makers more reliability and freedom for better data aggregation. The structure of IV-q-ROFS is the superior format of the IV-IFS and IV-PyFS and can deal with fuzzy information where these sets fail. Also, in the presented approach, we use the concept of PAOs, which provides more accuracy in the aggregated results by establishing the relation between the weight vectors of the attributes. One more significant factor of the developed technique is the AA operations, because AA operations give more accuracy in the aggregated outcomes and provide more diversity and preciseness in the results due to the changeability of the parametric value. Considering the above-discussed features, we construct the IV-q-ROFAAPWA and IV-q-ROFAAPWG operators and offer a MAGDM algorithm for solving real-life problems. We also compare with existing methods to check the applicability of the developed approach.
This article is structured as follows: Section 2 discusses some basic definitions that help in understanding the article. In Section 3, we propose the IV-q-ROFAAPWA and IV-q-ROFAAPWG operators using q-ROFS information. The MAGDM procedure is discussed in Section 4. A numerical example is offered in Section 5. A sensitivity analysis by changing the parameters is provided in Section 6. The comparison with existing AOs is given in Section 7. Section 8 discusses the results and significance of the developed approach. Finally, the conclusion is given in Section 9.
Preliminaries
This segment presents some basics of the q-ROFS that are important for understanding the developed results.
q-rung orthopair fuzzy set
By using the independent parameter q, Ref. [4] generalized the concept of IFSs by developing the framework of q-ROFSs. It is a more effective tool than IFSs because the variety of the MD and NMD for fuzzy and unclear data to solve real-life problems is vast. Definition 1 [4]. For a universal set (US) U, a q-ROFS can be denoted as Q = {(ϰ, ᵯ(ϰ), ᵰ(ϰ)) : ϰ ∈ U}, where ᵯ(ϰ), ᵰ(ϰ) ∈ [0, 1] are the MD and NMD with 0 ≤ ᵯ^q(ϰ) + ᵰ^q(ϰ) ≤ 1 for q ≥ 1. The hesitancy degree P(ϰ) of a q-ROFV (ᵯ, ᵰ), ϰ ∈ U, is P(ϰ) = (1 − ᵯ^q(ϰ) − ᵰ^q(ϰ))^(1/q). Definition 2 [4]. For a universal set U, an IV-q-ROFS can be defined as Q = {(ϰ, [ᵯ^l(ϰ), ᵯ^u(ϰ)], [ᵰ^l(ϰ), ᵰ^u(ϰ)]) : ϰ ∈ U}, with 0 ≤ (ᵯ^u(ϰ))^q + (ᵰ^u(ϰ))^q ≤ 1. The pair of hesitancy degrees [P^l(ϰ), P^u(ϰ)], ϰ ∈ U, of an IV-q-ROFV is given by P^l(ϰ) = (1 − (ᵯ^u(ϰ))^q − (ᵰ^u(ϰ))^q)^(1/q) and P^u(ϰ) = (1 − (ᵯ^l(ϰ))^q − (ᵰ^l(ϰ))^q)^(1/q). Definition 3. For IV-q-ROFVs c_i (i = 1, 2, …, n), the score value (SV) and the accuracy value (AV) are defined in their standard forms as S(c) = ((ᵯ^l)^q + (ᵯ^u)^q − (ᵰ^l)^q − (ᵰ^u)^q)/2 and A(c) = ((ᵯ^l)^q + (ᵯ^u)^q + (ᵰ^l)^q + (ᵰ^u)^q)/2. For two IV-q-ROFVs c_1 and c_2, we write c_1 > c_2, where ">" represents "more suitable", if S(c_1) > S(c_2), or if S(c_1) = S(c_2) and A(c_1) > A(c_2).
Definition 4 [55]. For two IV-q-ROFVs a and b, the distance between a and b is explained below in Equation (1).
Definition 5. The PAO was proposed by Yager [44], and it is explained in Equation (2).
Equation (3) defines the support values of the PAOs, where Sup(β_i, β_j) states the support between β_i and β_j, and it must satisfy the following conditions. To make the linked values emphasize and support one another, the connection between the aggregated values and their weight vector (WV) in the PAOs depends on their support.
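For orientation, Yager's power average, on which the PAOs used here are built, has the following standard form (Eqs. (2)-(3) correspond to this structure):

```latex
\mathrm{PA}(\beta_1,\dots,\beta_n)
  = \frac{\sum_{i=1}^{n}\bigl(1 + T(\beta_i)\bigr)\,\beta_i}
         {\sum_{i=1}^{n}\bigl(1 + T(\beta_i)\bigr)},
\qquad
T(\beta_i) = \sum_{\substack{j=1\\ j\neq i}}^{n} \mathrm{Sup}(\beta_i,\beta_j).
```

A common, assumed choice is Sup(a, b) = 1 − d(a, b) for a normalized distance d, so that closer arguments support each other more strongly.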
Aczel-Alsina operational laws for IV-q-ROFVs
This section develops the Aczel-Alsina [24] operational laws for IV-q-ROFVs and further discusses some essentially connected hypotheses. Definition 10 [57]. For two IV-q-ROFVs, let z and Ⱨ denote the AATN and AATCN, respectively. The intersection and union of IV-q-ROFVs, marked by Q and P respectively, are obtained by applying the AATN to the MD intervals and the AATCN to the NMD intervals for the intersection, and vice versa for the union.
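For completeness, the standard Aczel-Alsina t-norm and t-conorm with parameter λ > 0 (the paper's Ή plays this role) are:

```latex
T_{AA}(a,b) = e^{-\left((-\ln a)^{\lambda} + (-\ln b)^{\lambda}\right)^{1/\lambda}},
\qquad
S_{AA}(a,b) = 1 - e^{-\left((-\ln(1-a))^{\lambda} + (-\ln(1-b))^{\lambda}\right)^{1/\lambda}}.
```

For λ = 1 they reduce to the algebraic product and probabilistic sum, and varying λ lets the operators interpolate between aggregation behaviors.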
IV-q-ROF power aggregation operators
Built on the operational laws for IV-q-ROFVs connected to the Aczel-Alsina operations provided in Section 3, this section discusses the IV-q-ROFAAPWA and IV-q-ROFAAPWG aggregation operators.
Definition 12. For a collection of IV-q-ROFVs β_i (i = 1, 2, …, n), the mapping IV-q-ROFAAPWA: γ^n → γ is discussed in Equation (8), where γ is the collection of all IV-q-ROFVs and A(β_i) = ∑_{j=1, j≠i}^{n} Sup(β_i, β_j); IV-q-ROFPA is then called the IV-q-ROFAAP aggregation operator. Theorem. The aggregated outcome of IV-q-ROFVs β_i (i = 1, 2, 3, …, n) obtained through Definition 8 is also an IV-q-ROFV, and the AO can be shown in Equation (9), where ξ_i (i = 1, 2, …, n) is the set of associated weights such that ξ_i > 0 and ∑_{i=1}^{n} ξ_i = 1. Proof: To show that the theorem is valid for n = 2, we apply a mathematical induction technique involving the AA operations explained in Definition 8.
Through applying the above, we verify the case n = 2; hence, the statement is accurate for n = 2. Assuming the theorem is accurate for n = k, we consider n = k + 1 and obtain the required form. Thus, the theorem is valid for n = k + 1 and, by induction, for all positive integers n.
Now, by using Definition 1, we verify that the theorem is valid for n = 2. Assuming this theorem is accurate for n = k, we consider n = k + 1 and obtain the required form.
Hence, this theorem is valid for n = k + 1 and, by induction, for all positive integers n; the resulting membership values always lie within [0, 1].
Multi-attribute group decision-making algorithm based on the q-ROFSs
Herein, using the suggested method in an IV-q-ROFS environment, we construct a MAGDM methodology. Suppose ɉ = {ɉ_1, ɉ_2, …, ɉ_ᶆ} is the set of ᶆ attributes and ȁ = {ȁ_1, ȁ_2, …, ȁ_ȵ} is the set of ȵ alternatives for selection. Also, let ϖ_i be the WV of the attributes allotted by the decision-makers, which always fulfils ∑_{i=1}^{n} ϖ_i = 1. The WV of decision-maker D_p is represented by γ_p with 0 ≤ γ_p ≤ 1. By applying the IV-q-ROFS data, a decision matrix Ų = (Ŝ_k)_{ɱ×ȵ} of IV-q-ROF information is formed, in which decision-maker D_p evaluates each alternative e_j with respect to the attributes x_j.
The assessed values of e_j always satisfy the IV-q-ROF constraints. Lastly, the IV-q-ROF decision matrix Ų = (Ŝ_k)_{ɱ×ȵ} is designed by using the q-ROFS concept.
Usually, there are two types of attributes: benefit type and cost type. Let U_i be a benefit value and U_i^c be a cost value of the decision matrix. There is no need for adjustment when the attributes (benefit and cost) are of the same nature. If benefit and cost attributes are mixed, the cost attributes must be modified.
We suggest a method utilizing the IV-q-ROFAAPWA and IV-q-ROFAAPWG operators to elucidate the MAGDM problem, applying the IV-q-ROFS concept to select the optimal choice. The steps of this algorithm are listed as follows: Step 1. Utilize the formula below to calculate the support values, where a, b = 1, 2, …, p; j = 1, 2, …, m; i = 1, 2, …, n, which fulfil all the conditions of the support function discussed in Eqs. (1)-(3); here, Sup(β^a_ij, β^b_ij) reflects the distance between the two IV-q-ROFVs β^a_ij and β^b_ij, as explained in Definition 3. Step 3. Compute the WV ξ^k_i linked with the IV-q-ROFVs β^b_ij.
Step 4. Using our newly created IV-q-ROFAAPWA and IV-q-ROFAAPWG operators, as shown below, calculate all the values of Ґ_i.
Step 5. Utilize the following formula to obtain the value of A(β_ij).
Step 6. Determine the weight vector ξ^k_i related to the IV-q-ROFVs β^k_ij.
Step 7. Using the suggested AOs discussed above, completely aggregate the data for each attribute.
Step 8. Rank the alternatives using the SV formula from Liu et al. [58].
Step 9. Finally, order the alternatives to illustrate the first-rate choice.
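As a hedged skeleton of Steps 1-9: the concrete distance, aggregation, and score formulas are given by the paper's equations and are passed in as functions here, and Sup(a, b) = 1 − d(a, b) is a common, assumed choice rather than a confirmed one.

```python
import numpy as np

def power_weights(values, distance):
    """Steps 1-3: support values from pairwise distances, then normalized power weights."""
    n = len(values)
    sup = np.array([[0.0 if i == j else 1.0 - distance(values[i], values[j])
                     for j in range(n)] for i in range(n)])
    T = sup.sum(axis=1)                        # total support received by each value
    return (1.0 + T) / (1.0 + T).sum()         # weights used by the power aggregation

def rank_alternatives(matrix, distance, aggregate, score):
    """Steps 4-9: aggregate each alternative's IV-q-ROFVs and rank by score value."""
    scores = []
    for row in matrix:                         # one row of IV-q-ROFVs per alternative
        w = power_weights(row, distance)
        scores.append(score(aggregate(row, w)))
    return np.argsort(scores)[::-1]            # indices of alternatives, best first
```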
A numerical example
The importance of electric cars cannot be overstated as natural fuel resources run out. One of the essential qualities of an electric vehicle is that its engine has a much better ability to convert the stored electric energy into driving power (kinetic energy) with minimal energy wastage. On the other hand, conventional internal combustion engine vehicles waste more fossil fuel energy in the form of heat. The emerging electric car technology helps eliminate fossil fuel dependency and provides low-cost traveling. Electric cars are friendly to the environment; for example, they do not produce nitrogen oxides, volatile organic compounds, or noise pollution like conventional vehicles. The rapid growth of electric cars is revolutionizing the automobile industry, and soon all traditional cars will give way to the new electric car technology. In this challenging situation, it is a big problem for an ordinary consumer to select the best electric car. Many well-reputed car production companies are working in the market, such as Tesla in the United States, Nissan in Japan, BMW in Germany, Volkswagen in Germany, Hyundai in South Korea, etc. All these companies claim to provide well-updated technological advancements in their vehicles. So, in this confusing situation, by using our suggested IV-q-ROFAAPWA and IV-q-ROFAAPWG operators, we can solve the MAGDM problem.
Step 4. Order all the calculated values by their SVs. When applying the IV-q-ROFAAPWA operator, it is found that Ų5 is the best selection among all the companies, but Ų4 is the best choice when using the IV-q-ROFAAPWG operator. The ranking arrangement is displayed in Table 6.
Sensitivity analysis of parameters
The sensitivity analysis of the parameters utilized in the suggested AOs is covered in this section. This analysis demonstrates how the two parameters q and Ή affect the produced AOs. We use graphical representations to convey our observations and the variable effects of the parameters on the ranking order.
Effect of Ή
From the suggested real-life example, it is seen that a change in Ή by the decision-makers causes a difference in the ranking order. For this, we take Ή = 1, 3, 5, 7, 11, …, n and, in the whole procedure, q = 3. Table 7 shows the changes in the IV-q-ROFAAPWA operator's ranking order, and Table 8 shows what happens when substituting Ή = 1, 3, 5, 7, 11, …, n for the IV-q-ROFAAPWG operator.
Table 7 presents the change in the aggregation findings upon changing the parametric value of Ή in the proposed IV-q-ROFAAPWA operator. It is noticed that when we take Ή = 1, 3, 5, 7, 9, significant changes occur in the results; on the other hand, when we place Ή = 11 and all larger odd values, no changes appear in the aggregated results. We also noticed that no answer is obtained if Ή is assumed to be an even number. A graphical representation of Table 7 can be seen in Fig. 2.
Table 8 shows the possible changes that occur by changing the parametric value during data aggregation when applying the IV-q-ROFAAPWG operator. It is significant that when we take Ή = 1, 3, 5, 7, 9, the ranking sequence of the aggregated information varies with the variation of the parametric value of Ή. On the other side, we observed that when we take Ή = 11 and all further odd values, the ranking order remains the same, which means that Ή = 11 can be considered a stability point for the IV-q-ROFAAPWG operator.
Table 6
Ranking of SVs. We also found that no answer is obtained when we assume Ή is an even number. A graphical representation of Table 8 is given in Fig. 3. The horizontal axis shows the range of score values, and the vertical axis shows the range of alternatives. The above figure (Fig. 2) shows the graphical interpretation of the information discussed in Table 7: we quickly observe that when we take Ή = 1, Ų5 is the best alternative, while for Ή = 3, 5, Ų1 is the best option. However, when we take Ή = 7, 9, 11, the finest result is Ų5.
Fig. 3 shows the variational effects of the parameter Ή on the proposed IV-q-ROFAAPWG operator. The horizontal axis shows the range of score values, and the vertical axis shows the range of alternatives. The information from the above diagram is as follows: when we take the parameter Ή = 1, 3, the best-ranking alternative is Ų1, while when we take Ή = 5, the finest optimum result is Ų5. Likewise, when we place the parameter Ή = 7, 9, 11, the best result is Ų5 when utilizing the developed IV-q-ROFAAPWG operator.
Effect of q
The operators IV-q-ROFAAPWA and IV-q-ROFAAPWG are observed for q = 3 and the IV-q-ROF information. It can be shown that
Table 7
Ranking of the sequence of IV-q-ROFAAPWA by changing the parameter Ή.
Fig. 2. The geometrical representation of the IV-q-ROFAAPWA operator's score value with variation in Ή.
Table 8
Ranking of the sequence of IV-q-ROFAAPWG by changing the parameter Ή.
changing the value of q has no impact on the arrangement of the same options. Therefore, for the IV-q-ROFAAPWA and IV-q-ROFAAPWG operators, we can modify the value of q. Tables 9 and 10 show the varying properties of q for the IV-q-ROFAAPWA and IV-q-ROFAAPWG operators. Table 9 shows the effect of the parameter q on our suggested method. It is noticed that when we vary the value of q in the IV-q-ROFAAPWA operator, there is no effect on the ranking results; the ranking sequence remains the same for the q-ROFS information. Also, we found that there is no effect of even or odd values of q on our developed operator. The geometrical depiction of Table 9 is provided in Fig. 4.
Fig. 4 represents the information presented in Table 9. The horizontal axis shows the range of score values, and the vertical axis shows the range of alternatives. It is noted from the above diagram that there is no effect on the ranking of alternatives when varying the parameter q in the IV-q-ROFAAPWA operator. For all values of q = 3, 4, 5, …, n, the ranking sequence remains the same, namely Ų5 > Ų3 > Ų2 > Ų1 > Ų4, and Ų5 is the best option among all the possibilities. By adjusting the value of q in the developed IV-q-ROFAAPWG operator, we observe that the ranking sequence remains unchanged. However, as the parameter q is increased, the score values rapidly decline, while the ranking order remains the same.
Fig. 5 illustrates the information provided in Table 10. The horizontal axis shows the range of score values, and the vertical axis shows the range of alternatives. It is noted from the above figure that there is no effect on the ranking sequence when varying the parameter q in the proposed IV-q-ROFAAPWG operator. For all values of q = 3, 4, 5, …, n, the ranking order remains the same, and Ų5 is the finest alternative of all the options.
Fig. 3. The geometrical representation of the IV-q-ROFAAPWG operator's score value with variation in Ή.
Table 9
The ranking sequence of SVs by varying q in IV-q-ROFAAPWA.
Table 10
The ranking sequence of SVs by varying q in IV-q-ROFAAPWG.
Fig. 4. The geometrical representation of the IV-q-ROFAAPWA operator's score value with variation in q.
Fig. 5. The geometrical representation of the IV-q-ROFAAPWG operator's score value with variation in q.
For convenience, the pictorial representation of Table 11 is presented in Fig. 6. A comparison of our suggested approach with the existing AOs presented in Refs. [59-61] was carried out. Some AOs are not able to deal with interval-valued q-ROF data, such as the frameworks discussed in Refs. [38,62-64].
The diagram represents the graphical view of Table 11. The horizontal axis shows the range of score values, and the vertical axis shows the range of alternatives. We compare our diagnosed IV-q-ROFAAPWA and IV-q-ROFAAPWG operators with other existing AOs, for example, the AOs presented in Refs. [59-62]. It is noticed from the graphical view that Ų1 is the finest option when using the AOs developed in Ref. [62], Ų1 is the best option when applying the AOs proposed in Ref. [60], and Ų5 is the best alternative when using the AO operator presented in Ref. [59].
Results and discussions
This section discusses the novelty of the IV-q-ROFS, AATN, AATCN, and PAO concepts. It also shows how the proposed operators give decision-makers more freedom to aggregate data in a precise way, where many fuzzy environments fail to deal with vague information.
The topic of decision-making sciences is always trending and interesting for mathematicians. In this regard, many mathematicians and researchers have proposed fuzzy frameworks for precise data aggregation. The thought of the interval-valued FS gives a more generalized form than many simple FSs because it can discuss the lower and upper bounds of the MD, whereas an FS only discusses the MD itself. Following the same scenario, the idea of IV-q-ROFS can deal with information given in the form of IV-FS, IV-IFS, and IV-PyFS, and it is considered the superior shape of the existing structures. Using the idea of IV-q-ROFS, the AA operational rules, and a combination of PAOs, the novel approach of the IV-q-ROFAAPWA and IV-q-ROFAAPWG operators is proposed. We also investigate some necessary axioms of AOs, such as idempotency, monotonicity, and boundedness. To show the worth of the developed technique, we provide an algorithm based on the MAGDM methodology and solve a real-life car selection problem.
The car selection problem is one of the trending issues nowadays. A good car is considered a reflection of your personality and represents your life status. So, in the age of technology, it is challenging to select the most appropriate car within a given budget. In this regard, many mathematicians have presented different thoughts and techniques for selection procedures. Our diagnosed theory also applies to the previously existing structures such as IV-FS, IV-IFS, and IV-PyFS. In the developed approach, the qth power gives a large amount of freedom to decision-makers for data aggregation. The consequences of the proposed AOs are discussed as follows:
Table 11. Comparative study.
Consequences of proposed methodology
Some interesting facts about diagnosis theory are given as follows:
It is noticed that when we place the value of the NMD at zero and q = 1, the proposed AOs reduce to the interval-valued FS (IVFS) structure. Also, when we take q = 1, the defined system turns into the interval-valued IFS (IVIFS) environment. However, by taking q = 2, the diagnosed theory is converted into the shape of the interval-valued PyFS (IVPyFS). Hence, it is concluded that our suggested work covers the already existing fuzzy frameworks, such as IVFS, IVIFS, and IVPyFS, and is more generalized.
3. Developed a MAGDM algorithm based on the proposed AOs and solved numerical examples. 4. Gave a sensitivity analysis by changing the values of the parameters and discussed the flexibility and superiority of the AOs.
Fig. 6. The geometrical representation of the comparison with existing AOs.
Table 5. SV of aggregated outcomes. | 6,500.2 | 2024-03-01T00:00:00.000 | [
"Engineering",
"Computer Science"
] |
ARNS: Adaptive Relay-Node Selection Method for Message Broadcasting in the Internet of Vehicles
The proper utilization of road information can improve the performance of relay-node selection methods. However, the existing schemes are only applicable to a specific road structure, and this limits their application in real-world scenarios, where more than one road structure usually exists in the Region of Interest (RoI), and even in the communication range of a single sender. In this paper, we propose an adaptive relay-node selection (ARNS) method based on the exponential partition to implement message broadcasting in complex scenarios. First, we improved a relay-node selection method in curved road scenarios through the re-definition of the optimal position considering the distribution of the obstacles. Then, we proposed a criterion for classifying road structures based on their broadcast characteristics. Finally, ARNS is designed to adaptively apply the appropriate relay-node selection method based on the exponential partition in realistic scenarios. Simulation results on a real-world map show that the end-to-end broadcast delay of ARNS is reduced by at least 13.8% compared to the beacon-based relay-node selection method, and at least 14.0% compared to the trinary partitioned black-burst-based broadcast protocol (3P3B)-based relay-node selection method. The broadcast coverage is increased by 3.6-7% in curved road scenarios with obstacles, benefitting from the consideration of the distribution of obstacles. Moreover, ARNS achieves a higher and more stable packet delivery ratio (PDR) than existing methods, profiting from the adaptive selection mechanism.
Introduction
The Internet of Vehicles (IoV) can play an important role in reducing traffic pressure and improving driving safety. Relay-node selection is the basis of IoV and has attracted significant attention from researchers in recent years. By appropriately selecting relay-nodes to forward messages, we can expand the coverage of messages with high time efficiency. Such methods aim to select a relay-node quickly and cover more range in one hop.
Based on the difference in obtaining information of neighbor nodes, relay-node selection methods can be classified into beacon-based relay-node selection methods (called beacon-based methods) and black-burst-based relay-node selection methods (called black-burst-based methods). The main contributions of this paper are as follows:
• According to the specific distribution of obstacles in the real world, the optimal position-selection is redefined, and a curved road relay-node selection method suitable for actual situations is proposed.
• A criterion of classifying road structures is proposed to judge the road structure in complex scenarios.
• Based on the above work, an adaptive relay-node selection method is designed to suit two real-world situations: the differences of the road structures in the communication ranges of different senders, and multiple road structures in the communication range of one sender.
The rest of the paper is organized as follows: Section 2 briefly introduces related work on relay-node selection methods. The problems of message broadcasting in RoI, which include complex road structures and the impact of obstacles, are analyzed in Section 3. An adaptive relay-node selection method based on the exponential partition is presented in Section 4. Section 5 demonstrates the performance of ARNS compared to other methods, and finally, we draw conclusions in Section 6.
Related Work
Several methods have been proposed for relay-node selection in IoV, as discussed in the following. Greedy perimeter stateless routing (GPSR) [19] obtains the locations of neighbor nodes through periodically flooded beacons and selects the relay-node in each hop using a greedy algorithm. When the greedy algorithm fails, the relay-node is selected with the right-hand rule. The advantage of GPSR is that it can be applied to all road structures. However, the information update of neighbor nodes in GPSR is not real-time, which limits its performance. What is more, GPSR mainly considers end-to-end message propagation and does not fully consider message broadcasting. In order to improve the performance of message broadcasting, a real-time adaptive dissemination system (RTAD) is proposed in [20]; it defines two metrics (informed vehicles and messages received) and selects the most suitable beacon-based method for different RoIs based on the simulation results of the two metrics. Its advantage is that message broadcasting in urban scenarios is achieved with better overall performance. However, it still lacks real-time information, the same problem as GPSR, and it is only suitable for urban scenarios.
Urban multi-hop broadcast protocol (UMB) [21] is a black-burst-based relay-node selection method which solves the problem of lacking real-time information in the beacon-based methods. It aims to maximize message progress by selecting the farthest vehicle as the relay-node. The sender broadcasts a Request-To-Broadcast (RTB) packet in its communication range. Upon the reception of the RTB, nodes, i.e., vehicles, broadcast a channel jamming signal, i.e., a black-burst, for a duration that is proportional to the node's distance from the sender. Then, the farthest node transmits the longest black-burst and performs forwarding. The disadvantage of UMB is that it has a relatively high communication delay, since it spends the longest black-burst to select the farthest node to perform forwarding.

Binary-partition-assisted broadcast protocol (BPAB) [22] is a binary partitioning broadcast method based on the black-burst, and it solves the problem of UMB. It deploys a binary partitioning scheme and a novel contention mechanism. The binary partitioning scheme iteratively divides the range, which is the communication range in the first iteration and the selected segment in the other iterations, into multiple segments, and the farthest segment which contains nodes is selected with the aid of the black-bursts. Then, through a novel contention mechanism, a node is randomly selected as the relay-node in the farthest segment. Compared with the previous methods, BPAB achieves a lower and more stable delay, but it only works on straight roads or junctions.

Trinary partitioned black-burst-based broadcast protocol (3P3B) [23] is a trinary partitioning broadcast method. Improving on BPAB, 3P3B uses a trinary partitioning method instead of binary partitioning and introduces mini-DIFS in the channel access period before the start of relay-node selection to reduce the channel access delay. With these improvements, it achieves a lower delay than BPAB, but it only considers relay-node selection in straight road scenarios.

Exponent-based partitioning broadcast protocol (EPBP) [24] is an exponential partitioning broadcast method. Improving on 3P3B, it divides the communication range of the sender into N_part segments for N_iter iterations. The width of a segment increases exponentially with its distance from the relay-node's optimal position. Then, a non-empty segment closest to the optimal position is selected as the final segment. Finally, a node in the final segment is randomly selected as the relay-node through an exponential back-off method. The delay of the partitioning process is called the partition delay, and the delay of the exponential back-off process is called the contention delay. Due to the exponential partition, EPBP has a lower and more stable delay than 3P3B. However, EPBP is still only suitable for straight road scenarios. In order to solve this problem, a complete EPBP-based curved road relay-node selection method is proposed in [25]. It implements relay-node selection in curved road scenarios through three modes: the normal selection, the reverse selection, and the double-direction selection. When a vacant appears in the normal selection, it enters the double-direction selection. At this time, the reverse selection and the normal selection are performed simultaneously, with the farthest point from the sender in a vacant as the end point of the reverse selection. Through the three modes, it achieves a high broadcast coverage.
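To illustrate the exponential partition idea (not EPBP's exact parameters), segment widths can grow geometrically with distance from the optimal position, so the region near it is probed at finer resolution:

```python
def exponential_segments(opt_pos, range_len, n_part, growth=2.0):
    """Return segment bounds starting at the optimal position; widths grow geometrically.

    growth and n_part are illustrative values, not taken from EPBP.
    """
    widths = [growth ** k for k in range(n_part)]
    scale = range_len / sum(widths)           # normalize so the segments tile the whole range
    bounds, x = [], opt_pos
    for w in widths:
        bounds.append((x, x + w * scale))     # the first segment is the closest to the optimal position
        x += w * scale
    return bounds
```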
However, it has a disadvantage in that it does not consider the influences of obstacles. Thus, an EPBP-based junction relay-node selection method is proposed in [26]. Improving on EPBP, it implements relay-node selection in junction scenarios with obstacles through two phases: the junction phase and the branch phase. It selects the node close to the center of the junction as the relay-node in the junction phase, and selects the furthest node on each branch as the relay-node in the branch phase. Compared to BPAB, it achieves a lower delay. However, it does not consider the situation where the branches are not a straight road.
Though these black-burst-based methods [21-26], including our EPBP-based work [24-26], show better performance compared to the beacon-based methods [19,20], they are only suitable for a certain road structure, e.g., the methods in [24,27] are only for straight roads, that in [26] is only for junctions, and that in [25] is only for curved roads. However, in the real world, varied road structures may exist in the RoI, and multiple road structures may exist in the communication range of one sender. Moreover, the distribution of the obstacles can affect the relay-node selection. Therefore, in this paper, we have designed ARNS by fully considering the above situations to achieve better robustness. In the next section, we will describe the scenarios and state the problems.
Scenario Description and Problem Statement
In real-world IoV, the selection of relay-nodes needs to consider the high mobility of vehicles, the diversity of road structures, and the existence of obstacles in the RoI to achieve higher coverage with lower delay. EPBP and its derived methods can well solve the real-time problem caused by the high mobility of vehicles, but they fail to completely solve the problem of broadcasting in an RoI with various road structures and obstacles.
For example, Figure 1 shows an area where various road structures and obstacles exist, and it is assumed to be an RoI of the message generated by Node S0 at Point H. The road structure on the west of the road section HI (HI indicates the road section connecting Point H and Point I) is a curved road with a junction J1, surrounded randomly by green woods. Additionally, the road structures on the east of HI are straight roads with junctions, and there are buildings around these junctions. Woods and buildings are obstacles that can prevent the dissemination of messages. The message is expected to cover the RoI, so the ends of the roads at the RoI boundary are the termination positions of the broadcast.
A process of message broadcasting is illustrated in Figure 1. Node S0 is the original sender, and a message broadcast by S0 is expected to cover the region shown in the map, i.e., the RoI of the message. Obviously, the road structures in the communication ranges of Nodes S1, S2, and S4 are the straight road, the junction, and the curved road, respectively, so the corresponding relay-node selection methods, i.e., the method in [24] for straight road scenarios, that in [26] for junction scenarios, and that in [25] for curved road scenarios, are adopted according to the road structure. However, one problem needs to be solved: how to distinguish road structures. Moreover, the road section in the communication range of one sender may consist of two or more road structures, not the one typical road structure discussed in the existing works. This scenario is exemplified in Figure 1 by the road section in the communication range of Node S3: the range covers a junction and three curved road sections, which is neither the typical junction with several straight branches nor the typical curve including only the curved road section. Thus, in order to realize node-selection in real-world scenarios, the first problem should be resolved as follows.
Problem 1: how to classify the road structure?
The broadcast message is expected to cover the whole RoI at the cost of as little time as possible. Thus, in one hop, the node at the farthest position from the sender in the direction of message broadcasting is the most favored relay-node. This farthest position is defined as the optimal position [24]. In the real world, obstacles affect the location of the optimal position. The line-of-sight condition in straight road scenarios is good because no obstacle affects the communication range of the sender; thus, existing relay-node selection methods [21-24] use the point farthest from the sender as the optimal position in straight road scenarios. In junction scenarios, obstacles such as buildings generally exist near junctions, and the existing relay-node selection methods [21,22,26] applicable to junction scenarios select a node close to the center of the junction as the relay-node of the first hop, achieving the maximum coverage of all branches with the second hop to complete message broadcasting. In curved road scenarios, the general relay-node selection methods [13,14] consider that obstacles are generally around road corners, so the corner of the curved road is marked as the optimal position to eliminate the impact of obstacles on message broadcasting. However, in specific scenarios, the effect of obstacles on the location of the optimal position needs to be analyzed case by case. As shown in Figure 1, the road section BF is out of the sight of Point A due to the blocking by Obstacle O1, so the sender at Point A can only use corner Point B as the optimal position to realize the relay-node selection in this curved road scenario. However, the road section EG has a good line-of-sight condition because there are no blocks, so the sender at Point E can directly select the farthest Point G in its coverage area as the optimal position. Therefore, by considering the specific distribution of obstacles within the communication range, we can select the proper optimal position to achieve the maximum one-hop coverage and reduce the delay of the relay-node selection. Thus, the second problem to be resolved is described as follows.
Problem 2: how to determine the optimal position? As shown in Figure 1, two road sections are not covered by the broadcast: road section 1, indicated by the blue solid line, is within the communication range of Node S4 but not covered by its signal because of the obstruction of Obstacle O1; road section 2, indicated by the black solid line, is outside the communication ranges of Nodes S3 and S4. As we aim to achieve full coverage of the RoI, the optimal position must be located so that the broadcast message can cover these road sections, i.e., road sections 1 and 2.
It should be noted that we only consider relay-node selection in vehicle-to-vehicle (V2V) communication, and that nodes can obtain not only their own positions using GPS but also local information about roads and obstacles using GIS.
To solve the problems of relay-node selection in the scenarios described above, in the next section, we propose an adaptive relay-node selection method that adaptively selects a relay-node selection method suitable for the current scenario according to the road structures and obstacles within the communication range of the sender.
Method Design
In this section, we propose ARNS to solve the problems described in Section 3; before that, however, we need to improve the EPBP-based methods to make them suitable for real-world scenarios. The content of this section is therefore organized as follows: we first propose an EPBP-based relay-node selection method suitable for curved road scenarios with obstacles, and then develop a criterion for classifying road structures. Moreover, we improve the EPBP-based junction relay-node selection method [26] to handle multiple road structures within the communication range of the sender. Finally, an adaptive relay-node selection method is proposed based on the above works. The goal of this method is to achieve full coverage of the RoI with the lowest delay.
EPBP-Based Relay-Node Selection Method Suitable for Curved Road Scenarios with Obstacles
Based on the analysis in Section 3, we first define Optimal Position and Vacant to facilitate the description of the relay-node selection method in curved road scenarios with obstacles.
Definition 1. Optimal Position P_opt ∈ {Node1} ∪ {Node2} is the point that is closest to the terminal point of the curved road in the direction of message broadcasting, where {Node1} is the set of intersections of the sender's communication boundary with the curved roads that are not blocked by obstacles, and {Node2} is the set of intersections of the curved road with the tangents from the sender to the profiles of the obstacles.
Definition 2. Vacant is a segment of the curved road that is not covered by the communication ranges of the sender and the relay-node because of the high curving rate of the curved road or blocking by obstacles.
Taking Figure 1 as an example, road sections 1 and 2 are both vacants: road section 1 is not covered by the signal of Node S4 due to the obstruction of Obstacle O1, and road section 2 is not within the communication ranges of Nodes S3 and S4.
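To make Definition 1 concrete, the sketch below, in Python, computes the optimal position on a sampled road polyline; the segment-based obstacle model, the sampling step, and all function names are illustrative assumptions made here, not the paper's implementation.

```python
import math

def _cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    # Proper (strict) intersection test between two segments.
    d1, d2 = _cross(q1, q2, p1), _cross(q1, q2, p2)
    d3, d4 = _cross(p1, p2, q1), _cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def line_of_sight(sender, point, obstacles):
    # True when no obstacle segment blocks the sender-to-point segment.
    return all(not segments_intersect(sender, point, a, b) for a, b in obstacles)

def optimal_position(sender, road, obstacles, comm_radius, step=1.0):
    # Walk the road polyline in the broadcast direction and keep the last
    # sampled point that is inside the communication range and visible from
    # the sender; this is the farthest such point along the road, i.e., the
    # optimal position of Definition 1.
    best = None
    for a, b in zip(road, road[1:]):
        seg_len = math.dist(a, b)
        n = max(1, int(seg_len / step))
        for i in range(1, n + 1):
            t = i / n
            p = (a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1]))
            if math.dist(sender, p) <= comm_radius and line_of_sight(sender, p, obstacles):
                best = p
    return best

# An L-shaped bend with a wall near the corner: the returned point stops
# where the wall cuts off visibility instead of at the range boundary.
road = [(0.0, 0.0), (150.0, 0.0), (150.0, 150.0)]
wall = [((100.0, 2.0), (100.0, 200.0))]
print(optimal_position((0.0, 0.0), road, wall, comm_radius=200.0))
```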
Next, we improve the reverse selection [25] to solve the problem of the vacant-caused reduction of broadcast coverage. When a sender finds that there is a vacant between itself and the sender of the previous hop, it enters the reverse selection. At this point, it serves as the initial sender of the reverse selection and broadcasts an RTB packet to start the normal selection and the reverse selection simultaneously. The reverse selection chooses the corner nearest to the initial sender in the reverse direction as the optimal position, and the endpoint of the vacant closest to the previous sender as the termination of the reverse selection. In the reverse direction, only the reverse selection continues, until it completely covers the vacant. To distinguish the three states of relay-node selection (only the normal selection, only the reverse selection, and the concurrence of both), we add a mode flag to the RTB packet. Moreover, we assign black-bursts with different frequencies so that nodes in different states do not interfere with each other. Based on the above definitions and descriptions, we propose an EPBP-based relay-node selection method suitable for curved road scenarios with obstacles; the pseudocode is given in Algorithm 1.
Algorithm 1. EPBP-based relay-node selection in curved road scenarios with obstacles (sender S, previous-hop sender S_pre, candidate node set N).

Phase 1. Vacant Detection Phase:
  if there is an area between S and S_pre that is blocked by obstacles or out of the communication ranges of S and S_pre
    determine the area as a vacant.
Phase 2. RTB Packet Broadcast Phase:
  if there is a vacant between S and S_pre
    set the mode flag of the RTB packet to 3 (start the normal selection and the reverse selection simultaneously);
    determine the optimal position P_opt_norm in the message propagation direction according to Definition 1;
    choose the nearest corner as the optimal position P_opt_rev in the reverse direction;
    determine the endpoint of the vacant closest to S_pre as the termination of the reverse selection P_rev_end;
    add P_opt_norm, P_opt_rev, and P_rev_end to the RTB packet.
  else if S is on the road section of the reverse selection between the vacant and S_pre
    set the mode flag of the RTB packet to 2 (start the reverse selection);
    choose the next corner as the optimal position P_opt_rev in the reverse direction;
    update P_opt_rev in the RTB packet.
  else
    set the mode flag of the RTB packet to 1 (start the normal selection);
    determine the optimal position P_opt_norm in the message propagation direction according to Definition 1;
    update P_opt_norm in the RTB packet.
  Broadcast the RTB packet.
Phase 3. Relay-Node Selection Phase:
  if the mode flag of the RTB packet is 3
    start EPBP with P_opt_norm as the optimal position; a node n_norm ∈ N that is not blocked by obstacles is selected as the relay-node in the message propagation direction;
    simultaneously, start EPBP with P_opt_rev as the optimal position; a node n_rev ∈ N that is not blocked by obstacles is selected as the relay-node in the reverse direction.
  else if the mode flag of the RTB packet is 2
    start EPBP with P_opt_rev as the optimal position; a node n_rev ∈ N that is not blocked by obstacles is selected as the relay-node.
  else
    start EPBP with P_opt_norm as the optimal position; a node n_norm ∈ N that is not blocked by obstacles is selected as the relay-node.
  Relay-node selection finished.

We now take Nodes S3 and S4 in Figure 1 as an example to illustrate the proposed method, assuming that Node S3 acts as a sender to start message broadcasting. According to Definition 1, Node S3 determines Point E as P_opt_norm. A circle is then drawn with Point E as the center and the distance between Node S3 and Point E as the radius; EPBP is performed on this circle as shown in Figure 1, and Node S4 is selected as the relay-node. After Node S4 receives the message from Node S3, as a new sender it determines that road sections 1 and 2 are both vacants according to Definition 2. It then starts both the normal selection and the reverse selection: as the initial sender of the reverse selection, S4 chooses Point A as the terminal point of the reverse selection, chooses the corner (Point B) as the optimal position for the reverse selection, and selects Point G as the optimal position for the normal selection according to Definition 1. An RTB packet is then broadcast by Node S4 to inform the nodes within its communication range that both the reverse selection and the normal selection have started. Finally, Node S7 is selected as the relay-node in the reverse selection and Node S8 as the relay-node in the normal selection. After that, Node S7 as a sender performs only the reverse selection, and Node S8 as a sender performs only the normal selection.
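The RTB broadcast phase of Algorithm 1 reduces to choosing a mode flag and the positions carried in the packet. The following sketch restates that branch logic in Python; the packet fields and the function signature are illustrative assumptions, not the protocol's actual frame format.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float]

@dataclass
class RTBPacket:
    mode: int                          # 1 = normal, 2 = reverse, 3 = both
    p_opt_norm: Optional[Point] = None
    p_opt_rev: Optional[Point] = None
    p_rev_end: Optional[Point] = None

def build_rtb(vacant_between: bool, in_reverse_selection: bool,
              p_opt_norm: Point, nearest_corner: Point,
              vacant_endpoint: Point) -> RTBPacket:
    if vacant_between:
        # Mode 3: start the normal and the reverse selection simultaneously.
        return RTBPacket(3, p_opt_norm, nearest_corner, vacant_endpoint)
    if in_reverse_selection:
        # Mode 2: keep covering the vacant in the reverse direction only.
        return RTBPacket(2, p_opt_rev=nearest_corner)
    # Mode 1: plain one-directional broadcasting toward P_opt_norm.
    return RTBPacket(1, p_opt_norm=p_opt_norm)
```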
Criterion of Classifying Road Structures
In this subsection, we define a criterion to classify the three typical road structures (junction, straight road, and curved road). In previous works [25,26], broadcasting in junction scenarios is completed through two-hop relay-node selection, and the message propagation direction is multidirectional. In curved road scenarios, both normal and reverse relay-node selection are used for broadcasting, and the message propagation is bidirectional. To achieve full coverage of the RoI, road structures are judged in the priority order of junction, curved road, and straight road. It is widely accepted that the criterion for judging whether a road structure is a straight road is whether the line-of-sight condition holds. To facilitate the definitions of the curved road and the straight road, we define a curving rate as follows.
Definition 4. The curving rate β is expressed as β = l/R, where l is the length of the road within the communication range of the sender in the message propagation direction, and R is the communication radius.
Based on the definition of curving rate, we give the definitions of curved road and straight road.
Definition 5. A curved road scenario is a scenario in which β > β_ε when no junction exists in the communication range of the sender in the message propagation direction, where β_ε is a threshold.
Definition 6. A straight road scenario is a scenario in which β ≤ β_ε when no junction exists in the communication range of the sender in the message propagation direction.
We set the value of the threshold β_ε based on whether obstacles on the roadside affect the line-of-sight propagation. When obstacles on the roadside affect the line-of-sight propagation of the message, the road has at least one corner. In this circumstance, the road length within the communication range must exceed the communication radius by more than twice the road width w, that is, l_ε > R + 2w, which gives β_ε = l_ε/R = (R + 2w)/R.
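A compact sketch of the resulting criterion follows, using the reconstructed β = l/R and the threshold derived above; the function and parameter names are illustrative.

```python
def curving_rate(road_length_in_range: float, comm_radius: float) -> float:
    # Definition 4: beta = l / R
    return road_length_in_range / comm_radius

def classify_road(has_junction: bool, road_length_in_range: float,
                  comm_radius: float, road_width: float) -> str:
    # Judgment priority from the text: junction, then curved road, then straight.
    if has_junction:
        return "junction"
    beta = curving_rate(road_length_in_range, comm_radius)
    beta_eps = (comm_radius + 2 * road_width) / comm_radius  # from l_eps > R + 2w
    return "curved" if beta > beta_eps else "straight"

# With R = 200 m and w = 7 m, beta_eps = 1.07: a 250 m stretch of road packed
# into the 200 m communication range is classified as curved.
print(classify_road(False, 250.0, 200.0, 7.0))  # -> curved
```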
Adaptive Relay-Node Selection Method
In this subsection, we design an adaptive relay-node selection method based on the criterion of road structures, combining the relay-node selection method for curved road scenarios with obstacles proposed in Section 4.1 with an improved EPBP-based junction relay-node selection method described in this subsection.
The termination condition of message broadcasting is to achieve complete coverage of RoI. That is, all ends of the roads at the RoI boundary are covered by the broadcasting message. Moreover, in order to avoid the multiple coverage of a message on one road section, the termination condition in junction scenarios is that the message covers the RoI boundary, or that the branch has been covered by the same message.
The EPBP-based junction relay-node selection method [26] includes a junction phase and a branch phase and is suitable for urban scenarios where each branch of a junction is a straight road. Two types of nodes are selected successively as relay-nodes: in the junction phase, the node closest to the center point of the junction, and in the branch phase, the nodes closest to the farthest points in the branches. However, in the real world, a branch of a junction, e.g., Junction J1 in Figure 1, may not be a straight road.
Therefore, we improve the EPBP-based junction relay-node selection method as follows. In the branch phase, the sender of the branch phase, i.e., the relay-node at the center of the junction, first uses GIS information and the criterion of road structures to determine the road structure of each branch. Then, according to the judgment result, a method suitable for the structure of each branch is selected to complete the relay-node selection in the branch phase.
The flow diagram of the improved method is shown in Figure 2. The improved method realizes adaptive relay-node selection in the branch phase and, compared with the original method [26], is more robust in real-world scenarios.
The adaptive relay-node selection mechanism is shown in the flowchart in Figure 3. First, ARNS determines whether the broadcast completely covers the RoI. If full coverage has not been achieved, the criterion of road structures is applied to judge the road structure within the current communication scenario. If the scenario is judged to be a junction, the improved EPBP-based junction relay-node selection method is adopted; if it is judged to be a curved road, the method proposed in Section 4.1 is used; if it is judged to be a straight road, the intersection of the sender's communication boundary and the road in the message propagation direction is directly adopted as the optimal position, and straight-road relay-node selection is implemented through EPBP. Moreover, to ensure the security of message transmission, a caching optimization method [41] is used for each vehicle.
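The sketch below condenses the Figure 3 flow into a dispatch loop; the four injected callables stand in for the scenario-specific routines and the RoI coverage test, so everything here is an assumed interface rather than the tool's actual one.

```python
def arns_broadcast(sender, roi, classify, select_junction,
                   select_curved, select_straight, fully_covered):
    # Repeat hop by hop until the RoI is fully covered or no relay is found.
    while not fully_covered(roi):
        structure = classify(sender)           # criterion of Section 4.2
        if structure == "junction":
            sender = select_junction(sender)   # improved EPBP junction method
        elif structure == "curved":
            sender = select_curved(sender)     # method of Section 4.1
        else:
            sender = select_straight(sender)   # boundary/road intersection as P_opt
        if sender is None:                     # no relay-node selected this hop
            break
```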
Results and Analysis
To prove its effectiveness, simulations were conducted on the real-world map shown in Figure 1, which is part of the urban map of Zhangjiajie City, Hunan Province, China. To reflect the real-time advantage of the black-burst, ARNS was compared with a beacon-based method that uses RTAD [20] to select relay-nodes in urban scenarios and with the GPSR method [19] on the curved road, combined with the adaptive mechanism proposed in this paper. Additionally, a black-burst-based method that substitutes 3P3B for EPBP in ARNS (called the 3P3B-based method) was used for comparison to verify ARNS's improvement. These results and analysis are presented in Section 5.2.
In addition, to demonstrate the advantages of considering obstacles in curved road scenarios, we compared ARNS with the complete relay-node selection method [25], an EPBP-based relay-node selection method that performs well in curved road scenarios but does not consider obstacles. These results are discussed in Section 5.3.
Introduction of Evaluation
We simulated the above approaches in a VANET using MATLAB with the Monte Carlo method [42]. Since we focus on relay selection at the link level, the simulation environment only includes the 802.11p MAC layer. The major simulation parameters of the VANET are given in Table 1 and are identical to those used in [20,23,25]. In each simulation, Node S0 was used as the original sender, and the intersections of each road with the RoI boundary were used as the terminal points of the broadcast on that road. Since the roads in Figure 1 have different widths, for ease of expression we classify them by the number of lanes n_lane in both directions (n_lane = 2, 4, 6); the vehicle density λ in this paper is defined as the vehicle density on a single lane.
In order to assess the performance of ARNS under a wide range of vehicle densities, we set the minimum interval between vehicles to 4 m and the minimum number of vehicles within communication range to two. Thus, with the communication range set to 200 m, the lowest vehicle density was 0.01 vehicles/m and the highest was 0.25 vehicles/m. The vehicles were located randomly following a Poisson distribution with rate λ·n_lane. The maximum speed v_max of the vehicles complies with the rule relating speed to the safe inter-vehicle distance [43,44]; note that the inter-vehicle distance is defined as the distance between the heads of adjacent vehicles. Each vehicle chose a random speed following a uniform distribution on [v_max/2, v_max] at the beginning of the simulation and kept that speed throughout; lane changes and overtaking were not modeled. From the simulation results shown in Figure 4, a single simulation duration, i.e., the end-to-end delay, is less than 6.2 ms, over which the maximum movement distance of a node is 0.21 m at a vehicle speed of 120 km/h. Thus, the above assumptions about vehicle motion are reasonable. The experimental environment was simulated in MATLAB, the same as [25], because [45] concluded that vehicle movement has little influence on relay-node selection.
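The vehicle layout described above can be reproduced with a few lines of Python; the exponential inter-vehicle gaps floored at 4 m are a simplification of the Poisson placement with a minimum interval, and all names here are illustrative.

```python
import random

def place_vehicles(road_length, lam, min_gap=4.0, v_max=120 / 3.6, seed=None):
    # Drop vehicles on a 1-D lane: exponential gaps (Poisson process with
    # density lam, in vehicles/m), floored at the minimum head-to-head spacing.
    rng = random.Random(seed)
    positions, x = [], 0.0
    while True:
        x += max(min_gap, rng.expovariate(lam))
        if x > road_length:
            break
        positions.append(x)
    # Each vehicle keeps a random speed drawn uniformly from [v_max/2, v_max].
    speeds = [rng.uniform(v_max / 2, v_max) for _ in positions]
    return positions, speeds

positions, speeds = place_vehicles(2000.0, lam=0.05, seed=1)
print(len(positions), "vehicles on a 2 km lane")
```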
End-to-end delay and packet delivery ratio (PDR) are metrics widely used to evaluate the efficiency and reliability of message broadcasting in IoV [21][22][23][24][25][26][27]. In addition, a metric called maximum hops was proposed to evaluate the reliability of the end-to-end delay, and the metrics of broadcast coverage, partition delay, and contention delay were used to measure the improvement gained by considering obstacles. In this section, we compare all method schemes in terms of six metrics: end-to-end delay, partition delay, contention delay, PDR, maximum hops, and broadcast coverage. The metrics are defined below.
End-to-end delay T_end is the total delay from the instant when Node S0 starts broadcasting to the instant when the RoI is completely covered; it is the sum of the one-hop delays. In the black-burst-based methods, the partition delay T_part and the contention delay T_cont dominate the one-hop delay. Thus, in the results of Section 5.3, T_part and T_cont are used to demonstrate the improvement of ARNS in curved road scenarios. T_part is expressed as the average partition delay per hop, and T_cont is expressed in the same way.
PDR is expressed as a ratio of the number of successful broadcasting messages to the total number of simulations. Successful broadcasting means that no packet loss occurs during the entire broadcasting process.
Maximum hops N_maxhops is the maximum number of hops over which a message is broadcast from Node S0 to the terminations of the RoI.
Broadcast coverage γ_cov is the ratio of the length of road covered by the broadcast to the length of the entire road.
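Assuming each simulation run is summarized by its per-hop delays, a success flag, its per-termination hop counts, and the covered road length, the four headline metrics reduce to the one-liners below; the argument names are illustrative.

```python
def end_to_end_delay(hop_delays):
    # T_end: the sum of the one-hop delays until the RoI is covered.
    return sum(hop_delays)

def packet_delivery_ratio(successes):
    # PDR: successful broadcasts over the total number of simulations.
    return sum(successes) / len(successes)

def maximum_hops(hops_per_termination):
    # N_maxhops: the largest hop count over all RoI terminations.
    return max(hops_per_termination)

def broadcast_coverage(covered_length, total_length):
    # gamma_cov: covered road length over the entire road length.
    return covered_length / total_length

print(end_to_end_delay([1.9e-3, 2.1e-3, 2.0e-3]))  # e.g., 6.0 ms in total
```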
Evaluations of ARNS
In this subsection, we compare ARNS with RTAD and the 3P3B-based method in the same environment and show the advantages of ARNS in three aspects: end-to-end delay, maximum hops, and PDR. Figure 4 shows the end-to-end delay obtained by each method under varying vehicle density. RTAD has the largest delay, as it needs more hops to complete message broadcasting. In contrast, ARNS has the lowest delay, as it costs the fewest hops by adaptively selecting relay-nodes. Furthermore, as vehicle density increases, the end-to-end delay first decreases and then increases. The decrease occurs because message broadcasting can be completed with fewer hops at higher vehicle densities; the increase is due to the larger contention delay caused by more nodes taking part in the contention process.
In Figure 5, the maximum hops of the three methods are depicted to indicate the reliability of the end-to-end delay shown in Figure 4. RTAD requires the most hops, as it selects corners as the optimal positions in curved road scenarios. In contrast, ARNS requires the fewest hops, since it improves the location of the optimal position. Moreover, as vehicle density increases, the maximum hops of ARNS decline in a stable trend, while the maximum hops of the beacon-based method have already saturated.
Figure 6 presents the PDR of the three methods. It can be clearly seen that PDR declines as vehicle density rises. The PDR of ARNS is better than that of both the 3P3B-based method and the beacon-based method, and it is also more stable than the other two. The reasons are as follows. First, in the beacon-based method, nodes in the routing table may travel out of the communication range during the beacon interval, resulting in the loss of message packets; in this case the packet is re-transmitted, and if it is still missing after the maximum number of re-transmissions, the broadcast is considered a failure. The relay-node selection of ARNS, by contrast, is performed in real time, so ARNS is more stable than the beacon-based method. Second, compared with the 3P3B-based method, the partition phase of ARNS selects a smaller segment, so fewer nodes participate in the random contention phase, which yields a gain in PDR. Therefore, the PDR of ARNS is the most stable among the three methods.
Evaluations of ARNS in the Scenario with Obstacles
In this subsection, we simulated ARNS and the complete relay-node selection method on the curved road [25], which does not consider obstacles, to show the advantages of considering obstacles in three aspects: broadcast coverage, partition delay, and contention delay. The simulation results for partition delay and contention delay indicate that the proposed ARNS significantly reduces the delay of relay-node selection.
As shown in Figure 7, the broadcast coverage of the curved road method decreases as vehicle density increases. This is because, at low vehicle density, the curved road method selects relay-nodes along the curved road to achieve broadcast coverage. As vehicle density increases, the curved road method selects relay-nodes along the curved road less often and across the curved road more often (in Figure 1, for example, Node S3 selects Node S4 as a relay-node). Thus, the broadcast coverage of the curved road method gradually decreases.
Choosing different optimal positions in the same scenario leads to different partition delays and contention delays. Thus, as shown in Figures 8 and 9, ARNS has clear advantages in both partition delay and contention delay, and these advantages become more apparent as vehicle density increases. At a high density of 0.25 vehicles/m, the partition delay of ARNS was reduced by 16.4% compared with the complete method, while the contention delay was reduced by 52.2%. These results are reflected in the end-to-end delay shown in Figure 10: compared with the complete method, ARNS reduces the end-to-end delay on a curved road by up to 16.3%.
Conclusions
In this paper, we proposed the ARNS method for relay-node selection in complex road scenarios. To the best of our knowledge, this is the first adaptive relay-node selection mechanism that considers the road structure within the communication range of the sender at each hop. ARNS adopts the most favorable relay-node selection method according to the road structure, and the effect of obstacles is also considered. Simulations demonstrated that ARNS is superior to the methods based on 3P3B [23] and RTAD [20] in terms of end-to-end delay and PDR, and superior to the complete method [25] in terms of broadcast coverage and one-hop delay. In a real-world road scenario, ARNS reduced the end-to-end delay by at least 13.8% compared with the beacon-based method and increased the broadcast coverage by 3.6–7% compared with the complete method.
In the future, we plan to extend our work to relay-node selection on 3D road structures, such as overpasses and parking lots, and to utilize AI [46][47][48][49][50][51] to optimize the method [52,53] in complex 3D scenarios.
"Computer Science",
"Engineering"
] |
Automatic Hybrid Attack Graph (AHAG) Generation for Complex Engineering Systems †
Complex Engineering Systems are subject to cyber-attacks due to inherited vulnerabilities in the underlying entities constituting them. System resiliency is determined by the system's ability to return to a normal state under attacks. In order to analyze resiliency under various attacks compromising the system, a new concept of the Hybrid Attack Graph (HAG) is introduced. A HAG is a graph that captures the evolution of both logical and real values of system parameters under attack and recovery actions. The HAG is generated automatically and visualized using Java-based tools. The results are illustrated through a communication network example.
Introduction
As a result of the rapid advancement of complex engineering systems such as infrastructure, communications, energy systems, industrial automation, artificial intelligence, and cyber-physical systems, new research directions in modeling, monitoring, diagnosis, optimization, and control have emerged in recent years [1]. For instance, a control chart scheme for production processes was developed in [2] for monitoring the mean time between two events under neutrosophic statistics, using the belief estimator for the neutrosophic gamma distribution. The diagnosis and correction of many production problems, which often cause huge losses to the production unit, can be substantially improved with effective control chart techniques.
In [3], the advantages of using a Proportional-Integral (PI) controller for pH control in the raceway reactor throughout the whole day, as opposed to traditional On/Off control, were demonstrated. The paper also presented an event-based control architecture for Proportional-Integral-Derivative (PID) controllers. The objective is to tune a classical time-driven PI controller for pH control in the raceway reactor and then add event-based capabilities while keeping the initial PI control design. Event-based systems allow a trade-off between control performance and control effort, which suits the microalgae process in raceway reactors. The performed tests were oriented toward establishing a trade-off between control effort and control performance and present an alternative to traditional control.
The applicability of Distributed Model Predictive Control (DiMPC) was investigated in [4] to deal with the constraints in the steam/water loop of a steam power plant. A comparison was conducted between Decentralized Model Predictive Control (DeMPC), Centralized Model Predictive Control (CMPC), and DiMPC, and the results showed the effectiveness of DiMPC. In [5], an Optimal Nonlinear Adaptive Control (ONAC) strategy was designed to achieve optimal parameter tuning of Nonlinear Adaptive Control (NAC) for a Voltage Source Converter (VSC) operating in both rectifier mode and inverter mode, where optimal and robust control can be achieved under different operation scenarios.
A novel pole-zero cancelation method was proposed in [6] for Multi-Input Multi-Output (MIMO) temperature control in heating process systems. In the proposed method, the temperature differences and the transient response of each point can be controlled by considering the dead time and the coupling effect of the MIMO system. In [7], the Slow-Mode-Based Control (SMBC) method, combined with decoupling and dead-time compensation, was applied to a MIMO temperature control system. The temperature differences and the transient response of all points can be controlled and improved by making the output of the fast modes follow that of the slow mode. The results were then compared with conventional PI control and gradient temperature control methods.
The Deep Deterministic Policy Gradient (DDPG) technique for optimum boost control of a Variable Geometry Turbocharger (VGT)-equipped engine was implemented in [8]. The proposed DDPG algorithm was compared with a fine-tuned PID controller to validate its optimality. The results showed that the DDPG-based controller can achieve good transient control performance from scratch by autonomously learning from interaction with the environment, without relying on model supervision or complete environment models.
A framework was proposed in [9] to use the structural information in each Possible Conflict (PC) for fault diagnosis of complex industrial systems by designing a different kind of executable model. The authors proposed building grey-box models based on a state-space neural network architecture derived from the structural information in the PC, which links measurements with equations and, consequently, with parameters related to faulty behavior. The state-space Neural Networks (ssNN) were used to track system behavior; once a fault detection was confirmed, the structural information in the models and the consistency-based diagnosis paradigm were used to perform fault isolation.
Total decomposition of nonstationary variables for distributed monitoring of nonstationary industrial processes was handled in [10], in which different variable blocks were separated, with both overlapping and nonoverlapping relationships considered, to capture different nonstationary characteristics. A two-level monitoring strategy was designed that can supervise both the local cointegration relationships and the interrelationships among different nonstationary blocks, with enhanced interpretation of the nonstationary process.
A novel diagnosis framework was proposed in [11] that considers deep feature learning and cross-domain feature distribution alignment simultaneously for industrial applications. Extending Marginal Distribution Adaptation (MDA) to Joint Distribution Adaptation (JDA), the proposed framework can exploit the discrimination structures associated with the labeled data in the source domain to adapt the conditional distribution of unlabeled target data and thus guarantee more accurate distribution matching. In [12], over 220 technical research programs were reviewed, with particular attention to recent developments in fault diagnosis approaches and their applications during the last decade. Knowledge-based fault diagnosis, hybrid fault diagnosis, and active fault diagnosis were reviewed comprehensively, and the distinctive advantages and various constraints of these diagnosis methods were commented on. A recent survey [1] also summarized papers on monitoring and diagnosis for complex engineering systems.
The implementation of diagnostic and prognostic architectures can aid the implementation of advanced control algorithms in a resilient control system to recognize sensor degradation, as well as failures of industrial process equipment associated with the control algorithms [13]. A resilient control system is one that maintains state awareness and an accepted level of operational normalcy in response to disturbances, including threats of an unexpected and malicious nature [14]. As a result, it is hard to form true expectations about the consequences when a fault or attack occurs affecting or compromising the system. Hence, evaluating system resilience in terms of stability, performance, and recovery time is crucial and valuable for cost management and design tradeoffs. Traditional attack graphs can generate various attack scenarios compromising the system in terms of violations of a security property. However, they are only concerned with tracking the logical changes in the system parameters under attacks, as captured by the pre- and post-conditions. Therefore, the novelty of this work lies in introducing and automatically generating a Hybrid Attack Graph (HAG) using our new Automatic Hybrid Attack Graph (AHAG) Java-based tool, which combines logical and real values of system parameters. In fact, a HAG can provide additional information about the expected changes that the different attack scenarios constituting the graph could inflict on the system state. In such cases, attack scenarios can be compared to find the worst attack scenario that would compromise a system, by determining the associated real values (i.e., the resilience levels as determined in [15]). The results are illustrated through an example with two communication networks. The network models and security properties (written using the Architecture Analysis & Design Language (AADL) [16] and the AADL Annex Assume Guarantee REasoning Environment (AGREE) plug-in [17], which relies on the JKind model-checker tool [18]) are fed to the AHAG tool, which generates all possible attack scenarios of the system model and visualizes the graphs using the Unity software [19].
This paper significantly extends the conference version [20] by introducing the concept of the Hybrid Attack Graph (HAG) and a newly developed tool for its automatic generation (AHAG). The tool is demonstrated on a communication network example. The remainder of this paper is organized as follows: Section 1.1 reviews the related work; Section 2 describes the model-based attack graph implementation through an illustrative communication network example; Section 3 explains the Level-of-Resilience assessment; Section 4 presents the Hybrid Attack Graph; Section 5 introduces our AHAG tool for automatically generating the Hybrid Attack Graph and shows the experimental results; Section 6 summarizes and presents future directions.
Related Work
Existing attack graph generation tools can be summarized as follows [21]. One study implemented a tool consisting of three main pieces: a model builder (taking as input information about network topology, configuration, and a library of attack rules), an attack graph generator (SPIN), and a Graphical User Interface (GUI) for graphical presentation. Other tools include Topological Vulnerability Analysis (TVA), Network Security Planning Architecture (NETSPA), and Multi-host Multi-stage Vulnerability Analysis (MULVAL) [22]; these tools can explore all potential methods an attacker can use to corrupt an enterprise network by determining the configuration information of the hosts and the network. The Cauldron tool implemented in [23] automatically mapped all paths of vulnerability by correlating, aggregating, normalizing, and fusing data from various assets. The Network Attack Graph GENeration (Naggen) tool was developed in [24] to generate attack graphs. Other research [25] illustrated how logical attack graph complexity can rise sharply as a network becomes denser and larger; the authors utilized the MULVAL tool but changed its engine so that the trace of evaluation was recorded and sent to a graph builder.
In [26], the New Symbolic Model Checker (NuSMV) was used to develop attack graph counterexamples. The value iteration method was also implemented to determine the reliability of the attack graph, which allowed designers to identify which nominal set of security defenses would assure system safety. A model-checking-based Automated Attack Graph Generator and Visualizer (A2G2V) was proposed in [27]. The proposed A2G2V algorithm used existing model-checking tools, an architecture description tool, and C code to generate an attack graph that enumerates the set of all possible sequences in which atomic-level vulnerabilities can be exploited to compromise system security. The A2G2V tool required building three main functions: a counterexample parsing function, a cyclic testing function, and a Lustre model editing function.
In [28], a heuristic algorithm was used to automatically determine optimal security hardening through cost-benefit analysis. In [29], multiple algorithms, such as Vulnerability Node Matching, Attack Graph Optimization, Maximum Loss Flow, Path Seeking, and Multi-Objective Augmented Road Sorting, were used to investigate vulnerabilities, estimate the global path, and determine the optimal attack path. The surveys conducted in [30,31] illustrated the state-of-the-art technologies in attack graph construction for computer networks and their potential development challenges; they also analyzed alerts from intrusion detection systems and how well the approaches scale to larger networks.
A Hybrid Attack Graph (HAG) was modeled in [32], using linguistics and type extensions of the traditional attack graph, given the necessary inputs of asset, topology, and fact declarations in the typed grammar. The HAG was illustrated through an example of an attacker trying to compromise an automotive car by forcing it to drive toward a wall. The tool automates the task of creating HAGs by compiling the specified inputs and making the appropriate connections. A Hybrid Attack Dependency Graph (HADG) was presented in [33], which allowed discretization into intervals of the reachable and related ranges of the system's state variables and their evolution over the execution of attacks with duration. The HAG generation software of [33] was used in [34] to model a cyber-physical system attack on a smart grid, in which an attacker has to obtain access to a Supervisory Control and Data Acquisition (SCADA) system to cause a transformer to overheat. Thus, the transformer temperature was the continuous value in question and was discretized into intervals. The HAG was built by matching exploit patterns to a particular system state.
The concept of adding priorities to HAGs was introduced in [35]: if multiple exploit preconditions were met in a given state, only those with the highest priority value would be expanded into new states of the attack graph. Lower-priority exploits would then only be considered if all exploits with higher priorities failed to meet their preconditions. By applying exploit priorities as a heuristic measure, attack graph states can be explored in a more strategic manner.
A novel Hybrid Attack Model (HAM) was introduced in [36] that combines the Probabilistic Learning Attacker, Dynamic Defender (PLADD) game model and a Markov Chain model to simulate the planning and execution stages of a bad-data injection attack on a power grid. The hybrid model is shown to be capable of modeling long time-to-completion actions in the preparation stage and short time-to-completion actions in the execution stage. Table 1 summarizes the main characteristics and limitations of the existing studies on HAG generation as compared with our Automatic Hybrid Attack Graph (AHAG) tool.
Table 1. Main characteristics and limitations of existing HAG generation studies compared with our AHAG tool.

Hybrid Attack Dependency Graph (HADG) [33]
Main characteristics:
• Replaces the state transition graph with a dependency graph.
• A hybrid attack takes a range of real values as preconditions and outputs a range of values as postconditions.
• Provides the capability of modeling continuous state variables and their evolution over the execution of attacks with duration.
Limitations:
• The process for generating HADGs must be articulated and formalized, and its performance characterized, to better handle continuous variables.
Networked Systems Examples
Figure 1a,b shows two communication networks with identical clients and services (Email, File Transfer Protocol (FTP), and Video) but different topologies [15]. In both networks, the routers R1, R2, and R3 run the Routing Information Protocol (RIP) [37], in which, as a recovery action, traffic is rerouted under faults through a redundant path (if one exists) based on hop count [37]. In addition, R1 is linked to three Local Area Networks (LANs), LAN1, LAN2, and LAN3, each with 10 clients. R2 is linked to LAN4, which also has 10 clients. Router R3 has three more links, which connect it, respectively, to an Email server through the Internet, an FTP server, and a Video workstation. The second communication network, CN2, has the same clients and services as CN1 but a different topology.
Formal System Description for Networked Examples
The generation of an attack graph requires an overall formal description of the system model and the security property being investigated, encoded in the Architecture Analysis and Design Language (AADL) and checked using JKind. Here, the formal descriptions of CN1 and CN2, respectively, are given as follows [20].
4. Set of Connection Links L ⊆ (R × R) ∪ (R × N) ∪ (R × S); labeled l_ij ≡ a link is placed between component i and component j (static parameters).
5. System Connectivity C = L; Boolean c_ij = 1 if there is a connection between component i and component j (dynamic variables).
6. System Stability T; Boolean t = 1 if the system is stable (dynamic variable).
7. System Performance P ⊆ S; Boolean f_k = 1 if the FTP service is provided to LAN k, Boolean e_k = 1 if the Email service is provided on LAN k, and Boolean v_k = 1 if the Video service is provided on LAN k (dynamic variables).
8. System Recovery Action R; variable r ∈ {p, a, d}: in case of normal operation r = p, in case of a recovery action r = a, and in case no action can be taken r = d (dynamic variables).
9. Number of faulted links that occur sequentially N; variable n ∈ {0, 1, 2}: in case of no fault n = 0, in case of a first fault n = 1, and in case of a second fault n = 2 (dynamic variables).

• Pre(a_13) ≡ (c_13 = 1)
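For illustration, the dynamic variables above can be bundled into a single state record, as sketched below in Python; the container and the precondition helper are assumptions made here for readability, not part of the AADL encoding.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class NetworkState:
    c: Dict[Tuple[int, int], bool]                     # c_ij: link i-j is up
    t: bool = True                                     # system stability
    f: Dict[int, bool] = field(default_factory=dict)   # f_k: FTP on LAN k
    e: Dict[int, bool] = field(default_factory=dict)   # e_k: Email on LAN k
    v: Dict[int, bool] = field(default_factory=dict)   # v_k: Video on LAN k
    r: str = "p"                                       # p = normal, a = recovery, d = no action
    n: int = 0                                         # sequential link faults (0..2)

def attack_precondition(state: NetworkState, i: int, j: int) -> bool:
    # Pre(a_ij) ≡ (c_ij = 1): a link can only be attacked while it is up.
    return state.c.get((i, j), False)

s = NetworkState(c={(1, 3): True, (2, 3): True})
print(attack_precondition(s, 1, 3))  # -> True
```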
Attack Scenarios Implementation
In this section, we present the attack graphs that result from running the JKind model checker on the encoded AADL descriptions of CN1 and CN2, respectively, against the security property ϕ [20]. JKind is an infinite-state model checker for checking safety properties of synchronous systems [38], which are expressed in Lustre, a formally defined, declarative, synchronous dataflow programming language for reactive systems [39]. Verification is based on k-induction and property-directed reachability using a back-end Satisfiability Modulo Theories (SMT) solver. A verified property is determined to be true for all runs of the system. A property violation is reported with an explicit Counter-Example (CE), which is interpreted here as an attack scenario (a sequence of attack and recovery actions resulting in system disruption).
In our work, the CN1 and CN2 descriptive models included entities and their interfaces and connections, encoded using AADL within the open-source integrated development environment Osate2. The AADL models were embedded with the AGREE Annex plug-in [17], which is used to specify the component models and system-level security properties. AGREE also translates the AADL+Annex models and properties into the Lustre language, which JKind can verify against a security property of concern, delivering the result as a CE if one exists [40].
Figures 2 and 3 show the attack graphs for CN1 and CN2, respectively [20], capturing the state evolution of the two networks' dynamic variables, given in the earlier formal description, under attack instances. These graphs are visualized using the Unity tool, which supports two-dimensional (2D) and three-dimensional (3D) graphics, drag-and-drop functionality, and scripting in C# [41].
From the obtained graphs, it can be seen that both networks have six attack scenarios, each resulting in a loss of network stability, as determined by the unbounded traffic loss over time [42]. Each attack scenario is a sequence of faults and recovery actions occurring sequentially, e.g.:
S1: a13 → a23|13
S2: a13 → a12|13
Level-of-Resilience Assessment
Each path in the graph is a single attack scenario and has an associated Level-of-Resilience (LoR) [20]. The following definition can be used to identify the worst-case Level-of-Resilience of a system given its attack graph: a system is least resilient to the attack scenario in the graph that causes the highest loss of stability, the highest loss of performance, or the highest recovery time.
Definition 1 ([20]). Given a system M and an attack graph A_G comprising a set of attack scenarios S ≡ ∪S_i, i ∈ {1, ..., z}, where z is the number of attack scenarios, we say that LoR(M, S_i) is the worst if:
The next definition compares the LoR of several systems against an attack scenario: a system is the most resilient to an attack scenario if this attack causes the smallest loss of stability, the smallest loss of performance, or the smallest recovery time.
Definition 2 ([20]). Given a set of systems M ≡ ∪M_j, j ∈ {1, . . ., y}, where y is the number of systems, and an attack scenario S_i ∈ S, we say that LoR(M_j, S_i) > LoR(M − M_j, S_i) if M_j suffers the smallest loss of stability, the smallest loss of performance, or the smallest recovery time under S_i.
Hybrid Attack Graph (HAG)
The states of an Attack graph reflect the logical evolution of system parameters (e.g., system stability and services are either true or false) during attacks until reaching the final states where the security property ϕ is violated; these are hence the worst states. As multiple attacks can reach the same final state, it is essential to identify the worst attack scenario. A Hybrid Attack Graph (HAG) associates real values with all attack scenarios terminating in these final states. These real values correspond to the Level-of-Resilience parameters determined from the system's dynamical response. The detailed computations of LoSR, eventual LoPR, and RT for networks CN1 and CN2 under the A_G scenarios are given in [43] and [15]. We formally define HAG as follows.

Definition 3. A Hybrid Attack Graph (HAG) of a system model M is a data structure representing the union of all attack paths comprising A_G, annotated with the associated Levels-of-Resilience.
Algorithm 1 compares the Levels-of-Resilience associated with the attacks comprising an Attack graph and flags the worst-case scenario.
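To make the comparison concrete, the following minimal Python sketch ranks attack scenarios by their resilience parameters in the spirit of Algorithm 1. The AttackScenario structure, its field names, and the lexicographic tie-breaking order are our illustrative assumptions, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class AttackScenario:
    # One attack path with its Level-of-Resilience parameters (assumed fields).
    name: str     # e.g., "S4"
    los_r: float  # Level-of-Stability-Reduction
    lop_r: float  # Level-of-Performance-Reduction
    rt: float     # Recovery-Time

def worst_case(scenarios):
    # Per Definition 1, the worst scenario maximizes loss of stability, loss of
    # performance, or recovery time; here the three are combined
    # lexicographically (an assumed tie-breaking order).
    return max(scenarios, key=lambda s: (s.los_r, s.lop_r, s.rt))

graph = [AttackScenario("S1", 0.2, 0.1, 5.0), AttackScenario("S4", 0.9, 0.7, 12.0)]
print("Worst-case scenario:", worst_case(graph).name)  # -> S4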
Automatic Hybrid Attack Graph (AHAG) Tool
In Section 2, the attack scenarios were generated by repeatedly running AGREE and updating the security property in every run to exclude the previously generated attack scenarios. Here, the automatic generation of attack scenarios is presented.
The Automatic Hybrid Attack Graph (AHAG) tool was developed in NetBeans, an Integrated Development Environment (IDE) for Java [44]. NetBeans allows applications to be developed from a set of modular software components called modules. In addition to Java development, NetBeans has extensions for other languages such as PHP, C, C++, and HTML5. Maven, a build automation tool used primarily for Java projects, was selected as the project type for the AHAG tool in NetBeans [45].
When running the AGREE-based JKind model checker on Lustre models within Osate2, it can produce only one counterexample at a time; moreover, if executed repeatedly, it may return the same counterexample. The AHAG tool shown in Figure 4 ensures that a new counterexample is produced (if one exists) each time JKind is called, and all possible attack scenarios are then automatically visualized using the Unity software.
The AHAG tool takes only the first Lustre model (a translation of the system model and the security property from AGREE) as input. It then generates all possible combinations of potential attack scenarios (i.e., CEs) as (.lus) files. Next, AHAG communicates with the JKind model checker through Command Prompt (CMD) commands to iteratively check the system model and each potential CE against the security property within the Lustre files. In doing so, AHAG writes the corresponding results to separate Excel files indicating whether a potential CE is truly an attack scenario. Once all Excel files are generated, AHAG converts them to (.csv) files to be used later, along with the LoR values, in the Unity visualizer.

The AHAG Algorithm 2 is as follows. The user inserts the first Lustre model as input and defines the attack instances/actions constituting the attack paths in a one-dimensional array. In addition, the user chooses the maximum expected length n of an attack scenario. AHAG in turn generates all potential combinations of attack scenarios A^n, where A is the number of attack instances/actions. Each new potential attack scenario is stored as a variable CE_1 within the generated (.lus) files. Next, JKind is called iteratively to check these files against the security property ϕ. If CE_1 violates the security property, then it is a true attack scenario belonging to the Attack graph, and the result is given as an Excel sheet (.xlsx); otherwise, AHAG rejects CE_1. Afterward, since it is easier for the Unity visualizer to read (.csv) files, AHAG converts the xlsx files to csv files and feeds them to Unity.

The generated attack scenarios were visualized using the Unity visualizer tool, and the Levels-of-Resilience values, including the Level-of-Stability-Reduction (LoSR), the Level-of-Performance-Reduction (LoPR) in the networks' applications (given by FTPR, EmailR, and VideoR), and the Recovery-Time (RT), were annotated to the attack scenarios terminating at the final states using Algorithm 1. This generates the HAG for CN1 and CN2, as shown in Figures 5 and 6, respectively. It can be seen that attack scenario S4 was the worst attack scenario (highlighted in red). The detailed computations of the Levels-of-Resilience values for networks CN1 and CN2 are given in [43] and [15]. By comparing the generated HAGs to the traditional Attack graphs of Figures 2 and 3, respectively, it is clear how a HAG can aid in differentiating attack scenarios that terminate in the same state yet have distinguishable resilience levels. Thus, network designers can have a better overview of the attack scenario to which the network is most vulnerable.
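The enumerate-and-check loop of Algorithm 2 can be pictured with a short Python sketch. This is only a schematic rendering: the action names, the file layout, the bare jkind invocation, the string used to detect a violation in the output, and the write_model() helper (left as a comment) are all assumptions; the actual AHAG tool is a Java/Maven application driving JKind through CMD.

import itertools
import pathlib
import subprocess

ACTIONS = ["a12", "a13", "a23"]  # attack instances/actions (placeholder names)
N = 2                            # maximum attack-scenario length chosen by the user

def candidate_scenarios(actions, n):
    # Enumerate all A^n potential attack scenarios (Cartesian power).
    return itertools.product(actions, repeat=n)

def violates_property(lustre_file):
    # Run JKind on a generated .lus file; the exact CLI and output format
    # are assumptions and may differ from the real tool's interface.
    result = subprocess.run(["jkind", str(lustre_file)], capture_output=True, text=True)
    return "INVALID" in result.stdout  # property violated => true attack scenario

for i, scenario in enumerate(candidate_scenarios(ACTIONS, N)):
    lus = pathlib.Path(f"candidate_{i}.lus")
    # In the real tool, the scenario is injected as variable CE_1 into a copy of
    # the first Lustre model; write_model(lus, scenario) is a hypothetical helper.
    if lus.exists() and violates_property(lus):
        print("attack scenario kept for the Attack graph:", scenario)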
Conclusions
In this paper, we presented the new concept of a Hybrid Attack Graph (HAG), which captures both the logical changes in the system parameters under attacks (as determined by the pre- and post-conditions) and the real values of the Levels-of-Resilience parameters associated with the attacks constituting the graph. The nearest C-language-based tool, the Automated Attack Graph Generator and Visualizer (A2G2V) proposed in [27], interacts with the JKind model checker to generate the attack paths one at a time and with the Graphviz graph-visualization tool for visual display of the attack graph. Similar to our Automatic Hybrid Attack Graph (AHAG) tool, A2G2V requires a one-time modeling effort to obtain the system description for components, connectivity, services, and their vulnerabilities. However, AHAG integrates the Levels-of-Resilience values obtained from the system's dynamical response with the Attack graph. This automatically generates a HAG, which can aid system designers in identifying the worst-case LoR of a system design and its corresponding attack scenario from the graph, ensuring appropriate placement of defenses and countermeasures in the system. The results were illustrated through a communication-network example.
7. System Performance P_S; Boolean f_k = 1 if the FTP service is provided on LAN k, Boolean e_k = 1 if the Email service is provided on LAN k, and Boolean v_k = 1 if the Video service is provided on LAN k (dynamic variables).
8. System Recovery Action R; variable r ∈ {p, a, d}: r = p in normal operation, r = a when a recovery action is taken, and r = d when no action can be taken (dynamic variables).
9. Number of sequentially faulted links N; variable n ∈ {0, 1, 2}: n = 0 for no fault, n = 1 for the first fault, and n = 2 for the second fault (dynamic variables).
10. Attack Instance AI ⊆ A × R × R, A × R × N, A × R × S; labeled a_ij^m ≡ attack a on the link between components i and j, where m ∈ L is the sequence of previously faulted link(s), if any.
(Algorithm 2 excerpt: if New_Lustre.csv contains CE_1 = False, delete New_Lustre.csv and return to the loop; otherwise, generate the violating attack scenarios.)
Table 1. Main characteristics and limitations of the existing studies in Hybrid Attack Graph (HAG) generation.
"Engineering",
"Computer Science"
] |
Molecular and Functional Diversity of Distinct Subpopulations of the Stressed Insulin-Secreting Cell's Vesiculome
Beta cell failure and apoptosis following islet inflammation have been associated with autoimmune type 1 diabetes pathogenesis. As conveyors of biologically active material, extracellular vesicles (EV) act as mediators in communication with immune effectors fostering the idea that EV from inflamed beta cells may contribute to autoimmunity. Evidence accumulates that beta exosomes promote diabetogenic responses, but relative contributions of larger vesicles as well as variations in the composition of the beta cell's vesiculome due to environmental changes have not been explored yet. Here, we made side-by-side comparisons of the phenotype and function of apoptotic bodies (AB), microvesicles (MV) and small EV (sEV) isolated from an equal amount of MIN6 beta cells exposed to inflammatory, hypoxic or genotoxic stressors. Under normal conditions, large vesicles represent 93% of the volume, but only 2% of the number of the vesicles. Our data reveal a consistently higher release of AB and sEV and to a lesser extent of MV, exclusively under inflammatory conditions commensurate with a 4-fold increase in the total volume of the vesiculome and enhanced export of immune-stimulatory material including the autoantigen insulin, microRNA, and cytokines. Whilst inflammation does not change the concentration of insulin inside the EV, specific Toll-like receptor-binding microRNA sequences preferentially partition into sEV. Exposure to inflammatory stress engenders drastic increases in the expression of monocyte chemoattractant protein 1 in all EV and of interleukin-27 solely in AB suggesting selective sorting toward EV subspecies. Functional in vitro assays in mouse dendritic cells and macrophages reveal further differences in the aptitude of EV to modulate expression of cytokines and maturation markers. These findings highlight the different quantitative and qualitative imprints of environmental changes in subpopulations of beta EV that may contribute to the spread of inflammation and sustained immune cell recruitment at the inception of the (auto-) immune response.
INTRODUCTION
Type 1 diabetes (T1D) is an autoimmune disease caused by the destruction of the insulin-producing beta cells in the pancreas, leading to chronic hyperglycaemia and serious long-term complications such as cardiovascular disease, neuropathy, nephropathy and blindness [reviewed in (1)]. More than 30 million people suffer from T1D worldwide (www.idf.org). T1D and its sequelae reduce the life expectancy of patients by more than eleven years (2). Pathogenesis of T1D is characterized by inflammatory events in the beta cell microenvironment causing innate immune activation, followed by progressive infiltration of the islets of Langerhans in the endocrine pancreas by auto-reactive cytotoxic T-lymphocytes. Disease etiology has only partially been elucidated, but results from a complex interplay between genetic and environmental factors collectively engendering functional defects in the immune system and the beta cell itself. Environmental changes in toxins, pathogens, nutrients, in particular glucose overload, and low physical activity have been suggested to be responsible for the 3.4% annual increase in disease incidence (3). Due to its demanding secretory function, the beta cell is extremely sensitive to stress. Insulin accounts for up to half of the cell's protein content (4), and rapid changes can exceed the endoplasmic reticulum's (ER) folding capacities, leading to the accumulation of misfolded proteins, e.g., potential neoantigens, within the lumen of the ER. By interaction with built-in sensors, these misfolded proteins trigger the unfolded protein response (UPR), a signaling pathway that aims to restore homeostasis by enhancing the cell's folding capacity and translational attenuation. However, chronic stress can cause the UPR to initiate apoptosis. Beta cell stress and apoptosis have been associated with T1D pathogenesis (5,6); yet, how stressed beta cells trigger innate immune responses at disease initiation has not been fully elucidated.
Extracellular vesicles (EV) are membrane-bound vesicles released by healthy and diseased cells. Three major types of EV can be distinguished based on their origin and biogenesis pathway: apoptotic bodies (AB), microvesicles (MV) and exosomes [reviewed in (7)]. AB are large, 1,000-5,000 nm vesicles released by cells undergoing apoptosis (8). In contrast to other EV, AB may contain cellular organelles and their constituents, including elements from the nucleus, mitochondria, the Golgi apparatus, the ER and the cytoskeleton. MV are formed by outward budding and scission of the plasma membrane. Typically, the size of MV ranges from 100-1,000 nm. In line with their pathway of formation, MV contain mainly cytosolic and plasma membrane-associated proteins such as tetraspanins. Exosomes result from the inward budding of the membrane of the endosome, leading to the formation of 30-120 nm intraluminal vesicles that can be released upon fusion of the endosome with the plasma membrane. Throughout its maturation, the endosome is an important site of bidirectional translocation of substances between the cytoplasm and the endosome. In consequence, the packing of specific cargo molecules into EV and their release are intimately linked to the state of the releasing cells. As conveyors of biologically active material from their cell of origin to neighboring or distant recipient cells, EV act as mediators in cell-to-cell communication, fostering the idea that they may constitute the missing link between beta cell stress and immune activation [reviewed in (9)].
While beta AB have been successfully used to induce tolerance in diabetes-prone non obese diabetic (NOD) mice (10), evidence accumulates that exosomes derived from pancreatic beta cells contribute to T1D development. Strikingly, all known beta autoantigens are directly or indirectly linked to secretory pathways and localize to secretory granules, synaptic vesicles, the ER and the trans-Golgi network (TGN) (11). Several studies showed that human and mouse beta EV contain major auto-antigens of type 1 diabetes such as glutamic acid decarboxylase 65 (GAD65), glucose-transporter 2 (GLUT-2), islet-associated antigen 2 (IA-2), zinc transporter 8 (Znt8), and insulin (12)(13)(14)(15). Interconnections between the TGN secretory pathway of autoantigens and the endosomal compartment where exosome biogenesis occurs have convincingly been demonstrated by immunofluorescent studies co-localizing the auto-antigen GAD65 with the TGN protein 38, but also with the endosomal ras-related protein in brain 11 (Rab11) and the exosomal markers flotillin-1 (FLOT1) and CD81 in vesicular structures at the peripheral membrane (12). Exosomes from healthy beta cells efficiently trigger antigen-presenting cell (APC) activation and T-cell proliferation in vitro and accelerate islet infiltration by immune cells in non-obese diabetic-resistant mice in vivo (13). In human T1D patients, healthy beta EV mediate B- and T-cell activation (16). It has further been hypothesized that aberrant sorting in stressed beta cells could fuel the release of misfolded immunogenic proteins and danger-associated molecular patterns (DAMP) inside EV. With the aim to explore roles of beta EV in T1D pathogenesis, attempts are made to recreate the beta cell environment by adding a mild cocktail of the proinflammatory cytokines TNFα, IFNγ, and IL-1β present in the pancreas at disease initiation (12,17,18). EV ferry short non-coding microRNA (miRNA) that have the aptitude to repress translation of target genes in recipient cells (19), a well-documented mechanism termed RNA interference (RNAi) (20). MiRNA in exosomes derived from beta cells under inflammatory conditions contribute to the spread of beta cell apoptosis (17). However, the biological relevance of miRNA transfer has been questioned by estimates of 1,000 copies required per recipient cell to allow for effective target gene regulation (21). More recently, six specific GU-rich miRNA sequences have been identified (let-7b/c, miR-21, miR-7a, miR-29a/b) that may stimulate immune signaling by binding to the Toll-like receptor-7 (TLR-7), independently of RNAi. Packed into EV, these miRNA sequences act as DAMP exacerbating inflammation in cancer, neurological and autoimmune settings (22)(23)(24)(25)(26)(27)(28). Beta EV-mediated T- and B-cell activation in NOD mice was impaired in NOD.MyD88−/− mice, suggesting a role for TLR signaling in EV-mediated immune responses (13,29).
To date, the molecular and functional diversity of EV in the beta cell's secretome has not been thoroughly explored. The majority of beta EV studies focuses on small exosome-like vesicles and AB, and no studies on contributions of beta MV have been published to our knowledge. Because subtypes of beta EV potentially exert detrimental or protective effects in the immune balance, side-by-side comparisons are mandatory to evaluate their role in T1D pathogenesis. We herein sought to investigate changes in the relative composition of the vesiculome as well as the partition of the candidate autoantigen insulin and immunostimulatory miRNA sequences inside AB, MV and exosome subpopulations derived from equal amounts of healthy and stressed beta cells, and their impact on innate immune responses. As current isolation methods do not allow distinguishing between exosomes of endosomal origin and small MV, the latter will be called small EV (sEV) throughout this study.
Mice
NOD/ShiLtJ mice were obtained from Charles River Laboratories (L'Arbresle, France), bred and housed in a pathogen-free environment at ONIRIS' Rodent Facility (Agreement #44266). Six- to ten-week-old female mice were used in the study. All animal procedures were approved by the Pays de la Loire regional committee on ethics of animal experiments (APAFIS#9871). All possible efforts were made to minimize animal suffering.
Bone marrow-derived dendritic cells (bmDC). Bone marrow progenitor cells were isolated from femurs and tibias of NOD/ShiLtJ female mice and cultured in complete RPMI medium (Eurobio), i.e., supplemented with 1% heat-denatured syngeneic mouse serum along with 1 mM sodium pyruvate, 100 IU/mL penicillin, 100 µg/mL streptomycin, 2 mM L-glutamine, nonessential amino acids and 20 µM beta-mercaptoethanol. Medium was supplemented with 20 ng/mL GM-CSF (PeproTech, Neuilly-sur-Seine, France) and 5 ng/mL IL4 (BioLegend, London, UK), and 2 x 10^6 cells were cultured in 10 mL of medium per 100 mm Petri dish for 10 days. On days 4 and 9, an additional 10 and 5 mL of complete culture medium were added, respectively. On day 7, 10 mL of culture medium were refreshed. On day 10, flow cytometry routinely revealed > 90% CD11c+ purity of the bmDC cultures.
Antibodies & Reagents
Protein concentration was determined by a Bradford protein assay using Coomassie plus assay reagent (Fisher Scientific). Optical densities were read on Fluostar (BMG LABTECH, Champigny sur Marne, France) and Nanodrop2000 (Fisher Scientific) spectrophotometers following the supplier's recommendations.
Markers of Hypoxia
For detection of the endogenous marker of hypoxia HIF-1α, MIN6 cells were cultured as indicated for 30 h in normoxia or hypoxia. As soon as the hypoxia chamber was opened, cells were washed and lysed with 1 mL of lysis buffer [50 mM Tris (pH 7.4), 300 mM NaCl, 10% (w/v) glycerol, 3 mM EDTA, 1 mM MgCl2, 20 mM β-glycerophosphate, 25 mM NaF, 1% Triton X-100, 25 µg/mL Leupeptin, 25 µg/mL Pepstatin and 3 µg/mL Aprotinin] for 30 min on a rotating platform at 4 °C. The cells were centrifuged at 2,000 × g for 5 min and supernatants were stored at −80 °C. Proteins were quantified using the Bradford assay and HIF-1α was measured by ELISA following the supplier's instructions (R&D Systems). Optical density at 450 nm was measured using the FLUOstar Optima Microplate Reader (BMG Labtech, Champigny sur Marne, France).
Caspase Assay
Apoptosis was assayed by fluorescent caspase-3/7 substrate cleavage staining. Briefly, 3 × 10⁵ cells/cm² MIN6 cells were cultured in eight-chamber Labteks with coverslips. After overnight culture, cells were switched to OptiMEM 1% FCS production medium and exposed to cytokines, UV irradiation, hypoxia or left untreated. Eighteen hours later, cells were treated with 2 mM caspase-3/7 detection reagent (Fisher Scientific) for 30 min at 37 °C and counterstained with 1 µg/mL Hoechst 33342 (Sigma). Cells were fixed with 4% PFA, washed with PBS and overlaid with Mowiol (Sigma) before analysis by fluorescence confocal imaging on an LSM780 confocal microscope (Zeiss, Oberkochen, Germany). Tiles of nine images per well were acquired and processed for semi-automatic quantitative analysis of caspase-positive cells using an in-house macro and Fiji software. Total cell count was set equal to the number of Hoechst-positive regions.
Separation of Beta-EV Subpopulations
EV were collected from MIN6 supernatants using a method combining differential centrifugation, ultrafiltration and size-exclusion chromatography steps. Briefly, 90 mL of 30-h supernatants from MIN6 cells were centrifuged immediately after harvest at 300 × g for 10 min, then at 2,000 × g for 20 min (AB) and at 16,600 × g for 20 min (MV). The pellets containing AB and MV were washed with PBS or RPMI and centrifuged again before use. The 16,600 × g supernatants were filtered at 0.2 µm and concentrated on an AMICON MWCO-100 kDa cellulose ultrafiltration unit (Dutscher, Issy-les-Moulineaux, France). Approximately 100 µL of concentrate was recovered and passed through a size-exclusion chromatography column (IZON, Lyon, France). sEV were collected following the supplier's recommendations in flow-through fractions four and eight for qEV single and qEV original, respectively. EV were stored at 4 °C for 1-3 days or at −80 °C for up to 1 year. All assays of biological activity were carried out using fresh EV. For transcriptomic analyses, 90 mL of 30-h supernatants from MIN6 cells were centrifuged immediately after harvest at 300 × g for 10 min, followed by centrifugation at 16,500 × g for 20 min to collect large EV (LEV) comprising both AB and MV.
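For orientation, the centrifugation workflow above can be restated as a declarative summary. The Python snippet below merely tabulates the speeds and times given in the text; the step labels are ours.

# Speeds and times copied from the protocol above; labels are ours.
EV_ISOLATION_STEPS = [
    {"step": "clear cells and debris", "speed_xg": 300, "minutes": 10, "keep": "supernatant"},
    {"step": "pellet apoptotic bodies (AB)", "speed_xg": 2000, "minutes": 20, "keep": "pellet"},
    {"step": "pellet microvesicles (MV)", "speed_xg": 16600, "minutes": 20, "keep": "pellet"},
    {"step": "0.2 um filtration + MWCO-100 kDa concentration", "keep": "concentrate"},
    {"step": "size-exclusion chromatography (qEV column)", "keep": "sEV fractions"},
]
for s in EV_ISOLATION_STEPS:
    print(s)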
Tunable Resistive Pulse Sensing (TRPS)
The size and concentration of EV were analyzed by the TRPS technique using a qNANO instrument and NP2000 (AB), NP800 (MV) and NP100 (sEV) nanopores (IZON, Lyon, France). All samples were diluted in PBS with 0.03% Tween-20. After instrument calibration using 110 nm, 710 nm or 2,000 nm calibration beads (Izon), all samples were recorded at no fewer than two different pressures. Respective total particle volumes V = (4/3)πr³ × (number of particles) were calculated from the measured mean particle diameter, assuming spherical shape.
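As a quick sanity check of why a few large vesicles can dominate the total volume, the sketch below applies V = (4/3)πr³ × N. The diameters echo the mode sizes reported in the Results, while the particle counts are purely illustrative, not measured values.

import math

def total_vesicle_volume(diameter_nm, n_particles):
    # Total volume in nm^3, assuming spherical particles of the given mean diameter.
    r = diameter_nm / 2
    return (4 / 3) * math.pi * r**3 * n_particles

# Illustrative counts only: a handful of large AB rival the volume of many sEV.
print(total_vesicle_volume(1548, 1e4))  # AB, ~1.9e13 nm^3
print(total_vesicle_volume(76, 1e8))    # sEV, ~2.3e13 nm^3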
Cryo-Electron Microscopy
MV and sEV were applied onto glow-discharged perforated grids (C-flat™), prepared using an EM-GP (Leica, Germany) at room temperature in a humidity-saturated atmosphere. EV samples were mixed with 10 nm diameter gold nanoparticles at a concentration of 80 nM (33), and 4 µL of the mixture were deposited on the grids. Excess sample was removed by blotting for 0.8 to 1.2 s before snap-freezing of samples in liquid ethane and storage in liquid nitrogen until observation. The grids were mounted in a single-axis cryo-holder (model 626, Gatan, USA) and the data were collected on a Tecnai G2 T20 Sphera electron microscope (FEI Company, The Netherlands) equipped with a CCD camera (US4000, Gatan) at 200 kV. Images were taken at a nominal magnification of ×29,000 under low-electron-dose conditions. For cryo-electron tomography, single-axis tilt series, typically in the angular range ±60°, were acquired under low electron doses (~0.3 e−/Å²) using the camera in binning mode 2 and at nominal magnifications of ×25,000 and ×29,000, corresponding to calibrated pixel sizes of 0.95 and 0.79 nm at the specimen level, respectively. Tomograms were reconstructed using the graphical user interface eTomo from the IMOD software package (34,35).
Fluorescence Imaging of EV
AB and MV were separated from MIN6 cell supernatants and stained with the MemBright (MB) dyes MB-Cy3 and MB-Cy5 (200 nM), kindly provided by M. Collot. The mixture was incubated for 30 min at room temperature with gentle rotation. EV were then centrifuged at 2,000 × g (AB) or 16,600 × g (MV) for 20 min and washed in PBS. Samples were transferred into Labtek wells and overlaid with Mowiol (Sigma) before acquisition of images in super-resolution Airyscan mode on a Zeiss LSM780 instrument.
Quantitative RT-PCR Analysis
Total RNA including miRNA was extracted from MIN6 cells or from EV derived from an equal amount of cells using the miRVana kit (Fisher Scientific) or TriReagent (Sigma), respectively. During the initial lysis step, all samples were spiked with 10¹⁰ copies of a synthetic analog of ath-miR-159 (Eurogentec, Angers, France). Following reverse transcription using RT stem-loop primers, extravesicular cDNA was pre-amplified for all miRNA except the spike ath-miR-159 by 10-14 cycles of PCR using Taqman probe reagent (Solis BioDyne, Tartu, Estonia) and Taqman assays (Fisher Scientific), followed by 40 cycles of PCR on an ABI7300 instrument (Fisher Scientific). For each target, standard curves were generated using serial sample dilutions. Relative quantities (in arbitrary units) of miRNA in samples were inferred by the relative standard curve method and normalized with respect to the spike and to untreated controls.
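The spike-and-control normalization described above reduces to a chain of ratios. The sketch below shows that computation with assumed variable names; it is not the authors' analysis script.

def normalized_quantity(target_q, spike_q, target_q_ctl, spike_q_ctl):
    # Relative miRNA quantity: normalize to the ath-miR-159 spike, then express
    # as fold-change over the untreated control (relative standard curve method).
    return (target_q / spike_q) / (target_q_ctl / spike_q_ctl)

# Example with arbitrary units read off the standard curves:
print(normalized_quantity(120.0, 10.0, 30.0, 12.0))  # -> 4.8-fold vs. control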
Statistical Analysis
Statistical tests were performed using either Prism GraphPad software (Comparex, Issy-les-Moulineaux, France) or R 3.6.0 (36) with RStudio (37) and the lsmeans (38) and lme4 (39) packages, using the tests indicated in the figure legends. A 95% confidence level was considered significant. For the linear mixed models, the fixed parameters were the type of EV or the treatment of the producing cells. No interaction test was deemed necessary, as the analysis was performed for all treatments of a single type of EV or vice versa. The random parameter was the individual experiment. Post-hoc analysis was performed with Tukey's range test for pairwise comparisons on calculated least-square means.
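For readers working in Python rather than the authors' R/lme4 and lsmeans workflow, an analogous (not identical) analysis can be sketched with statsmodels: a linear mixed model with the experiment as a random intercept, followed by pairwise comparisons. The file and column names are assumptions.

import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# df is assumed to hold columns: value, treatment (CTL/CK/UV/HX), experiment.
df = pd.read_csv("ev_measurements.csv")

# Random intercept per independent experiment, fixed effect of treatment.
model = smf.mixedlm("value ~ treatment", df, groups=df["experiment"]).fit()
print(model.summary())

# Tukey's range test on treatment groups (here applied to raw values rather
# than to least-square means as in the original R workflow).
print(pairwise_tukeyhsd(df["value"], df["treatment"]))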
RESULTS
Primary islet inflammatory events have been associated with beta cell stress and failure at the origin of T1D pathogenesis (40,41). With the aim to study the impact of cellular stress on the beta vesicular secretome, murine MIN6 beta cells were either left untreated (CTL) or exposed to a cocktail of mild doses of proinflammatory cytokines (CK) encountered at disease initiation. To discriminate between inflammation-specific and general responses to cellular stress, hypoxic (1% O₂, HX) and genotoxic (ultraviolet irradiation, UV) stress situations were introduced (Figure 1A). Cells grown under hypoxia were assessed for the expression of endogenous and exogenous markers of hypoxia. Added to culture, pimonidazole hydrochloride forms adducts with thiol groups of proteins in cells at low oxygen tension (pO₂ < 10 mmHg). Immunofluorescent microscopy analysis revealed the presence of pimonidazole adducts in hypoxic cells (Figure 1B). The hypoxia-inducible factor 1 (HIF-1) is a transcriptional regulator of the cellular response to low oxygen levels. Under hypoxic conditions, the subunit HIF-1α associates with the subunit HIF-1β and binds to the hypoxia response element (HRE) of target genes, initiating their expression (42,43). ELISA analysis showed a 3-fold increase in HIF-1α expression, from 9 (7-12) pg/mL [median (range)] in cells grown under normoxic conditions to 29 pg/mL in cells grown under hypoxic conditions (Figure 1C; p = 0.0143).
Exposure to experimental stress induced ER stress, as revealed by enhanced expression of the phosphorylated form of the subunit alpha of the eukaryotic translation initiation factor 2 (p-eIF-2α) and the transcription factor C/EBP homologous protein (CHOP), two effectors of the unfolded protein response (UPR) to ER stress (Figure 1D, Supplementary Figure 1). While p-eIF-2α participates in translational attenuation with the aim to restore protein homeostasis in the ER at an early stage of the UPR, CHOP is activated belatedly after prolonged stress and controls cell fate by regulating expression of genes involved in apoptosis. After 30 h of culture, cytokine- and HX-treated cells expressed CHOP, in contrast to UV-irradiated cells. This difference might be explained by altered and presumably delayed kinetics of activation of the UPR following random DNA damage by UV light, which must pass through transcriptional and translational steps before changes appear in the proteome.
All treatment conditions engendered apoptosis in MIN6 cells, as shown by the significant increases in the percentage of effector caspase-3/7-positive cells (Figures 1E,F). The quantitative analysis of fluorescent caspase-substrate cleavage on confocal microscopy images (total > 8,000 nuclei counted for each situation) revealed a low percentage of 1.5 (0-5)% [median (range)] of caspase-3/7-positive cells in untreated controls. Following exposure to stress, this percentage shifted to 20 (10-34)% for CK-, 5 (3-14)% for UV-, and 5 (1-11)% for HX-treated cells. Live cell imaging was performed to monitor the kinetics of apoptosis in individual cells (Supplementary Figure 2). In response to cytokines, caspase-3/7 activity appeared after 5 h of treatment and steadily increased. Nearly all cytokine-treated cells became apoptotic by the end of the 30-h incubation period, in contrast to untreated controls. Collectively, these results demonstrate that stress in our experimental conditions rapidly induces critical executioners of the cellular stress response in MIN6 beta cells, culminating in apoptosis.
To assess downstream effects of cellular stress on the beta cell's secretome, AB, MV and sEV subpopulations were enriched from 30-h conditioned MIN6 culture supernatants following a protocol combining differential centrifugation (44), (ultra-) filtration and size-exclusion chromatography (45) steps (outlined in Figure 2A). Western blot revealed the presence of vesicular markers, i.e., the membrane proteins CD81, CD63, CD9 and flotillin, and the cytosolic protein β-actin, in all EV subpopulations (Figure 2B, Supplementary Figure 3). The ER protein calnexin was present in AB and MV but absent in sEV, in line with enrichment in vesicles of endosomal origin in the latter.
Staining with the lipid probe MemBright, recently developed by Collot and colleagues (46,47), showed a heterogeneous population of round-shaped vesicles in the AB and MV fractions (Figure 2C). Cryo-electron microscopy images of MV and sEV clearly ascertained the presence of a lipid bilayer membrane surrounding the vesicles (Figure 2D, Supplementary Figure 4). AB exceed the upper size limit of cryo-electron tomography (1 µm) and have therefore not been analyzed using this technique. TRPS analysis of the EV showed a mode size [median (range)] of 1548 (1368-1790) nm, 510 (455-563) nm, and 76 (61-120) nm for AB, MV, and sEV, respectively. None of the treatments had a significant effect on the mode or mean size of the vesicles (Figure 2E and Table 1). In healthy beta cells, large EV (AB and MV) represent 93% of the volume and 98% of the protein content of the vesicles altogether, but less than 2% of the number of particles (Figure 2F). In pro-inflammatory conditions, the secretion of AB, MV and sEV was significantly enhanced, as shown by the 5.5-, 2.1- and 4.5-fold increases in the number of particles recovered per million cells, respectively. Commensurate with the rise in the number of vesicles, the volume occupied by the CK-EV (all subtypes) increased 4.0-fold, against 2.8-fold for UV and 1.7-fold for HX EV (Figure 2G, Table 1). In line with the particle size, the number of particles per microgram of protein is much higher in MV (> 25 times) and sEV (> 10⁴ times) than in AB. As expected, this ratio remained constant in conditions of stress for MV and sEV, but curiously increased in AB derived under conditions of stress (CK vs CTL; p = 0.0084). In the absence of noticeable changes in the particle volumes measured by TRPS, this increase hints at changes in the vesicles' protein content. Cytoplasmic vacuolation and inclusion of organelles and DNA fragments during the process of apoptosis possibly reduced the proportion of proteins in AB from CK-treated cells in comparison to AB from untreated cells.
Earlier studies provided evidence that human and mouse beta EV contain major auto-antigens of type 1 diabetes such as GAD65, islet-associated antigen 2, Znt8, and insulin (12)(13)(14)(15). Out of these, insulin is the most prominent islet autoantigen and is highly abundant in beta cells; it was therefore used here to ease monitoring of autoantigen partition.
To explore how autoantigens partition into beta EV in normal and pathological conditions, we quantified the amount of total insulin, comprising pro-insulin and mature insulin, in EV subpopulations by ELISA (Figure 3A). Data obtained on EV from healthy cells showed that a majority of vesicle-associated insulin was exported inside large vesicles (Figure 3B). Following exposure to stress, insulin export was markedly enhanced in AB and sEV derived from CK-treated cells, whereas no significant changes were observed in EV derived from cells exposed to hypoxia or UV irradiation (Figure 3D and Supplementary Figure 5, Supplementary Table 1). This increase in insulin export relied on enhanced EV export, as the concentration of insulin inside the EV subtypes did not change following treatment.
MiRNA may act as adjuvants in immune activation and recently six miRNA with the potential to bind directly to the TLR-7 receptor of innate immunity have been described in MV and sEV (23, 24, 26-28, 48, 49). Here we wanted to investigate how TLR-binding miRNA are sorted into EV and whether their expression in these vesicles changes in situations of stress.
With the aim to compare TLR-binding miRNA expression in an equal amount of cells and of large and small EV derived thereof, a synthetic ath-miR-159a was spiked into all samples prior to RNA extraction. After RT-qPCR amplification, relative quantities in samples were normalized with respect to this exogenous control as well as to cells or EV derived from untreated control cells (Figure 4). The results obtained show a significant, up to 3-fold drop in the expression of TLR-binding miRNA in cells in situations of pro-inflammatory stress, in parallel to an up to 13- and 48-fold increase in LEV and sEV, respectively. Although less pronounced, similar trends were observed in cells and LEV obtained under genotoxic and hypoxic conditions. No changes in TLR-binding miRNA expression were observed in sEV under genotoxic and hypoxic conditions. This enhanced TLR-binding miRNA release in vesicles could be explained either by an increase in the release of EV or by a higher concentration of these miRNA inside the vesicles. To answer this question, the RT-qPCR data were further normalized to the number of particles present in the sample as determined by TRPS. The results presented in Supplementary Figure 6 revealed an evened-out expression of TLR-binding miRNA in LEV in all situations. In contrast, 4- to 5-fold higher quantities of all TLR-binding miRNA except for miR-29b were detected in sEV secreted under proinflammatory conditions. Taken together, these data suggest that beta cell stress, and in particular the exposure of beta cells to cytokines, favors export of immune-stimulatory miRNA into EV and enrichment in sEV.
Cytokines are well-known soluble mediators in cell-to-cell communication; however, evidence exists that an important fraction of biologically active cytokines is released from tissues in association with EV, either bound to the surface or encapsulated in the lumen of the vesicles (50). Owing to the cytokine treatments performed in our EV production workflow, we performed a mouse 13-plex CBA to assess the expression of interferons (IFNβ, IFNγ), interleukins (IL-1α, IL-1β, IL-6, IL-10, IL-12p70, IL-17A, IL-23, IL-27), granulocyte-macrophage colony-stimulating factor (GM-CSF), TNFα, and monocyte chemoattractant protein-1 [MCP-1; also called chemokine (C-C motif) ligand 2 (CCL2)] in the different subpopulations of beta EV. Prior to analysis, the EV were incubated in 0.5% Triton X-100 for 30 min and sonicated for 1 min to assess internal as well as surface-associated cytokines. Six cytokines were detected in subpopulations of MIN6 beta EV (Figure 5 and Supplementary Table 2). Seven cytokines were below detection levels in all vesicles. None of the EV secreted by untreated controls exhibited the exogenous cytokines TNFα, IFNγ, or IL-1β (Supplementary Table 2).

(Table note: All values are listed as median (range) in fg/10⁶ cells from n = 9-21 independent experiments. Nb part: number of particles. Kruskal-Wallis test compared to EV from untreated controls. **P < 0.01, ****P < 0.0001.)

Cytokine profiles of EV derived from MIN6 cells exposed to hypoxia or UV irradiation were similar to profiles from untreated cells (data not shown). APC such as dendritic cells and macrophages have been identified as recipient cells for beta EV uptake in vitro and in vivo (12,13,51,52). For side-by-side comparisons of the potential of EV subtypes to modulate APC function, primary NOD bmDC were exposed to MIN6-derived beta EV for 18 h, followed by flow cytometry analysis of the expression of MHC-II and the co-stimulatory CD86 and CD40 molecules (Figure 6). AB derived from cytokine-treated beta cells induced a modest albeit significant up-regulation of CD40 and MHC II expression. CD86 expression remained unchanged despite a wider variation of expression. None of the MV and sEV modulated the expression of costimulatory molecules in bmDC (Figure 6A and data not shown).
The concentration of cytokines in supernatants of murine bmDC and RAW264.7 macrophages exposed to EV in culture was assessed by CBA and ELISA (Figure 7). In bmDC, no significant influence of the EV treatments was observed on the concentrations of IL-1α, IL-1β, IL-6, IL-10, IL-12p70, or IL-23, whose levels remained low in culture supernatants, close to detection thresholds (Figure 7A and data not shown). Among cytokines expressed in beta EV derived under inflammatory situations, MCP-1 was detected in bmDC cultures with AB and MV derived from MIN6 cells cultured under inflammatory conditions (Figure 7B). MCP-1 concentrations measured in culture supernatants were 2-4 times lower than the concentrations calculated for the AB input into these cultures, supporting the idea of an essentially passive carry-over of MCP-1. In contrast, IL-27, which was highly expressed in CK-AB, was below detection thresholds in bmDC culture supernatants (data not shown), suggesting differential kinetics of MCP-1 and IL-27 uptake, recycling, or activity in bmDC. Alternatively, sustained expression of MCP-1 in culture supernatants could be explained by de novo cytokine production by bmDC. All AB (except UV-AB) and sEV derived under inflammatory conditions led to increased levels of TNFα in bmDC culture supernatants, superior to the TNFα amounts provided by these vesicles. Though CK-MV expressed 2.5-fold higher levels of TNFα than CK-sEV, no difference in the concentration of TNFα was observed in bmDC supernatants in the presence of CK-MV in comparison to CTL-MV (Figure 7C). Co-incubation of murine RAW264.7 macrophages with AB, MV, and sEV led to increased TNFα supernatant concentrations for EV obtained from cytokine-treated MIN6 cells, the amounts of which cannot solely be explained by passive carry-over of EV-associated TNFα. For EV from beta cells in hypoxia, AB were also able to significantly enhance TNFα secretion (Figure 7D; p = 0.0122). For all EV from UV- and HX-stressed beta cells, a tendency toward TNFα induction was visible. As inflammatory stress was the most potent inducer of EV release in our hands (Figure 2), the more pronounced immune effects might be caused by a higher ratio of EV to target immune cells or by a minimal concentration of EV necessary for immune activation. Taken together, our results reveal modest direct or indirect activation of dendritic cells and macrophages by beta EV.

(Figure legend: Evolution of sorting of insulin toward EV following exposure to stress. Data from n = 7-11 independent experiments are shown with median and range and compared using the Kruskal-Wallis test; *P < 0.05, ***P < 0.001.)
DISCUSSION
Living cells release heterogeneous populations of EV that constitute a means for surrounding and distant tissue crosstalk. Beta sEV have been shown to drive innate and adaptive prodiabetogenic immune responses, but the functional diversity of the beta secretome as a whole, and the impact of beta cell stress on the beta EV repertoire have not been explored yet.
We observe that experimental exposure of MIN6 beta cells to either inflammatory cytokines, low oxygen tension or UV irradiation rapidly induces ER stress and subsequently apoptosis. While p-eIF-2α, indicative of translational attenuation, is observed in all situations of stress, the apoptosis-mediating transcription factor CHOP is barely detected in UV-treated MIN6 cells. This is in line with earlier observations that DNA damage through UV irradiation alone is insufficient to induce CHOP expression (53). Although low doses of cytokines were used here in comparison to similar beta cell studies (17,54,55), the percentage of caspase-3/7-positive beta cells was 4-fold higher in CK-treated cells than in cells following UV irradiation or cultured under hypoxia, illustrating the particular propensity of beta cells to undergo apoptosis in response to inflammatory stressors (56).
Stress and inflammation have been repeatedly reported to enhance EV secretion, including from beta cells (12,57). Here, quantitative side-by-side comparisons of EV subtypes isolated from an equal amount of beta cells reveal a consistently higher release of EV exclusively under inflammatory conditions.

(Figure legend: Following exposure to stress, MIN6 cells were cultured for 30 h, followed by isolation of large EV (comprising AB and MV) and sEV. All samples were spiked with an exogenous control prior to total RNA extraction and processed for quantitative RT-PCR. After amplification, relative quantities were normalized with respect to the spike and untreated controls. (B) Quantitative RT-PCR analysis of miRNA expression in a fixed number of cells and EV derived thereof. Individual replicates from 4 to 6 independent EV isolations are represented as fold-changes compared to mean expression values measured in untreated controls. Tukey's range test *P < 0.05, **P < 0.01 and ***P < 0.001.)

Indeed, in response to stress, cancer cells secrete EV that have been shown to contribute to the survival of surrounding cancer cells and to drug resistance (58). In contrast, sEV from cytokine-treated beta cells induce apoptosis in naive beta cells (17), suggesting an EV-mediated spread of inflammatory cellular constituents. Following cytokine exposure, it has been shown that chaperones of the UPR promoting DAMP signaling, namely calreticulin, Gp96, ORP150 and the heat-shock protein HSP-90α, are packed into beta sEV (12,54). Beta EV have also attracted interest for their aptitude to transport self-antigens (12)(13)(14)(15). In the present study, the partition of the highly abundant insulin protein was monitored in EV subpopulations derived under normal and pathological conditions. In our hands, 1.5% of untreated MIN6 cells continuously undergo apoptosis in culture. These apoptotic cells release AB containing 91% of the particulate secretome's insulin content, in line with the role of AB in the disposal of cellular material through efferocytosis (59)(60)(61)(62). Exposure to inflammatory triggers up-regulates not solely the number of vesicles released, but also the absolute amount of insulin exported, with significantly higher levels of insulin measured in association with EV produced by cytokine-treated cells. Our results converge with those obtained for TLR-binding miRNA expression inside large and small EV. Inflammatory cytokines, and to a lesser extent hypoxia and UV irradiation, promote TLR-binding miRNA efflux from the cell. Interestingly, this increase relies on the enhanced secretion of LEV with an unchanged miRNA content, in contrast to enhanced secretion paralleled by a rise in the relative quantities of these miRNA sequences packed inside individual sEV. Taken together, our data provide strong evidence that sorting of immunogenic material inside subpopulations of EV is not a random process and is profoundly altered in inflammatory settings.
MIN6 beta EV in the steady state contain low levels of MCP-1 and IL-23 and undetectable levels of a panel of eleven other cytokines involved in pathways of inflammation. Experimental exposure of MIN6 cells to inflammatory cytokines engenders drastic changes in the expression of the same cytokines in large AB and MV, but also of de novo produced MCP-1 (in AB, MV, and to a lower extent in sEV) and IL-27 (AB) cytokines. Passive exogenous cytokine carry-over is obviously a concern in the interpretation of immune functions of EV. However, it has to be stressed that beta cells facing immune insults in situ, most likely discard cytokines of beta or immune cell origin in an analogous manner that is to say inside large rather than small vesicles according to our data.
Several studies, including genome-wide association studies (63)(64)(65), demonstrate pathogenic roles of the cytokine IL-27 in T1D. Transgenic NOD IL-27 receptor knockout mice are resistant to disease and blockade of IL-27 delays T1D onset in NOD mice (66,67). MCP-1 is a chemokine involved in immune cell recruitment. Exported in exosomes, MCP-1 has been shown to contribute to inflammation in nephropathies (68,69). In the context of T1D, chemotaxis assays showed that subnanomolar amounts of MCP-1 produced by beta cells are sufficient to attract monocytes (70). It has been shown earlier, that mouse and human islet cells constitutively express MCP-1 and produce high levels of MCP-1 peaking at 6 h of incubation in response to proinflammatory cytokines (71, 72). In islet transplantation, MCP-1 is inversely correlated to islet graft function (70,73) and attempts to block MCP-1 signaling successfully improve graft survival (74). Furthermore, T-lymphocyte exosomes induce MCP-1 expression and apoptosis in beta cells (75) illustrating the importance of MCP-1 in beta cell inflammation and failure. MCP-1 stimulation on its own results in aberrant sorting of immune regulatory miRNA into extracellular vesicles (76) in line with observations made on TLR-binding miRNA in our study. It is thus conceivable that molecular mediators of inflammation as chemokines and immunostimulatory miRNA establish and mutually maintain inflammation.
Added to cultures of bmDC derived from diabetes-prone NOD mice, EV from cytokine-treated beta cells moderately up-regulate the surface expression of MHC class II and co-stimulatory CD40 molecules. In these experiments, an EV donor-to-recipient ratio of 1:200 was used, which would be equivalent to 5 DC in an average-sized islet containing 1,000 beta cells, a plausible proportion in the inflamed pancreas at disease initiation. EV from CK-treated MIN6 cells exert the strongest immune effects, which could be due to cumulative effects of cargo quantity (autoantigens, proinflammatory miRNA, endogenous cytokines), increased EV release, and cytokine carry-over. At least two facts argue against cytokine carry-over as solely responsible for the observed immune effects. First, AB derived under hypoxia, devoid of EV-associated cytokines, also significantly induced TNFα secretion in RAW264.7 macrophages. Second, IL-27, highly expressed in CK-AB, was not detected in bmDC culture supernatants. Lastly, we showed earlier that TNFα secretion in RAW264.7 macrophages induced by sEV derived from untreated MIN6 cells is positively correlated with the amount of particles in culture (26,51).
The relevance of these quantitative and qualitative differences of subsets of apoptotic beta EV has to be weighed with regard to the interplay of these vesicles with cellular effectors of immunity in vivo. Obviously, enhanced EV release in situations of stress engenders higher EV-to-immune-cell ratios. This fact should be considered in EV biological activity assays, which are frequently based on treatments with a constant number of particles. AB are known to express "find me" and "eat me" signals leading to rapid elimination by patrolling phagocytes (77). Conceivably, AB from stressed beta cells constitute a critical source of chemoattractants, beta self-antigens and danger signals that could interfere with the otherwise immune-silent elimination of the dead by efferocytosis. In contrast, nanosized vesicles such as sEV and small MV have half-lives of minutes to hours in vivo (78). They are in the ideal size range for transport in interstitial fluids and have been shown to efficiently diffuse to secondary lymphoid organs such as the spleen and draining lymph nodes (79)(80)(81). Thereby, MV and sEV from inflamed pancreatic islets could have implications in immune regulation by aberrant autoantigen and immune-stimulatory miRNA expression at nearby as well as distant sites. Taken together, our findings highlight the profound impact of inflammation in comparison to other stressors on the beta EV repertoire. Centered on stress, the induction of markers of activation and mediators of inflammation (with the exception of IL-10) by beta EV is analyzed in the present work. Further investigations on primary mouse and human islet and immune cell cultures in vitro and in pre-diabetic NOD mice in vivo are required to dissect the mechanisms of potential protective vs. pathological roles of EV subspecies from healthy and stressed beta cells in T1D development.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by Pays de la Loire regional committee on ethics of animal experiments (APAFIS#9871).
AUTHOR CONTRIBUTIONS
KG, LB, DJ, J-MB, GM, and SB designed the experiments. KG, LB, ML, and SB produced and characterized the EV and performed functional assays. DJ carried out immunohistochemical analyses. RF and LD did the confocal microscopy analyses. AD carried out cryo-tomography analyses. All authors contributed to data analysis. GM performed statistical analysis with RStudio. KG, SB, GM, and J-MB wrote the paper. All authors have read and approved the manuscript.

(Supplementary figure legends: Live-cell imaging on a Zeiss (Oberkochen, Germany) instrument in a 5% CO2, 20% O2, 37 °C atmosphere; apoptosis was assayed by fluorescent caspase-3/7 substrate cleavage staining of MIN6 cells (3 × 10⁵ cells/cm²) cultured in eight-chamber Labteks, switched to OptiMEM 1% FCS production medium, exposed to cytokines or left untreated in the presence of 2 mM caspase-3/7 detection reagent (Fisher Scientific), with images captured every 15 min for 30 h. Quantitative RT-PCR of miRNA: samples spiked with an exogenous control prior to total RNA extraction; after amplification, relative quantities were normalized with respect to the spike, the number of particles determined by TRPS analysis, and untreated controls; individual replicates from 4 to 6 independent EV isolations are represented as fold-changes compared to untreated controls; Tukey's range test *P < 0.05, **P < 0.01, ***P < 0.001.)
"Medicine",
"Biology"
] |
Niobium oxyhydroxide as a bioactive agent and reinforcement to a high-viscosity bulk-fill resin composite
Abstract Objective This in vitro study incorporated niobium oxyhydroxide fillers into an experimental high-viscosity bulk-fill resin composite to improve its mechanical performance and provide it with a bioactive potential. Methodology Niobium oxyhydroxide fillers (0.5%) were synthesized and characterized by scanning electron microscopy, which demonstrated a homogeneous morphology favorable for reinforcement. Fillers were weighed, gradually added to the experimental resin composite, and homogenized for one minute, forming three groups: BF (experimental high-viscosity bulk-fill resin composite; control), BF0.5 (experimental high-viscosity bulk-fill resin composite modified with 0.5% niobium oxyhydroxide fillers), and BFC (commercial bulk-fill resin composite Beautifil Bulk U, Shofu; positive control). In total, 10 specimens/group (8 × 2 × 2 mm) underwent flexural strength (FS) tests in a universal testing machine (Instron; 500 N). Resin composites were also assessed for Knoop hardness (KH), depth of cure (DoC), degree of conversion (DC), elastic modulus (E), and degree of color change (ΔE). The bioactive potential of the developed resin composite was evaluated after immersing the specimens in a simulated body fluid solution in vitro and assessing them using a Fourier-transform infrared spectroscope with an attenuated total reflectance accessory. FS, DC, KH, and ΔE were analyzed by one-way ANOVA followed by Tukey's test (p < 0.05). For DoC, ANOVA demonstrated no significant difference between groups (α = 0.05). Conclusions The high-viscosity bulk-fill resin composite with 0.5% niobium oxyhydroxide fillers showed promising outcomes, with the fillers acting as reinforcement agents, and displayed a bioactive potential, although less predictable than that of the commercial resin composite with Giomer technology.
Introduction
The development of bioactive resin composites has gained significant attention over the last decades for controlling tissue loss from dental caries and reducing the risk of secondary caries, one of the primary causes of restoration replacement.1,2 Bioactivity refers to the potential of a material to induce apatite mineral nucleation, improving the maintenance of the tooth/material interface3 and potentially increasing the clinical longevity of restorations. However, the primary challenge has been developing materials that remineralize adequately by releasing therapeutic ions or anticaries agents while retaining satisfactory mechanical properties.4 Bulk-fill resin composites represent a promising restorative technique for posterior teeth5 and are popular because larger increments of material (4 to 5 mm) can be inserted than with the incremental method used for conventional resin composites (up to 2 mm), reducing volume shrinkage.6 Their use as a restorative material involves a simplified technique that decreases clinical time, minimizing occasional operator errors and potentially improving patients' quality of life.5 Despite the advantages associated with bulk-fill resin composites,5 applying high-viscosity bulk-fill resins to high stress-bearing areas frequently exposed to masticatory forces is still controversial because of their inferior mechanical properties compared to conventional nanohybrid resin composites.7,8 A restorative material with inferior flexural properties is less able to resist the initiation and propagation of cracks, which may lead to fractures within the body and margins of the restoration,9,10 making it more prone to forming gaps and developing caries adjacent to restorations (secondary caries).9 Therefore, a high-viscosity bulk-fill resin composite with intrinsic bioactive ability may improve the mechanical properties and affect the long-term performance of restorative materials.
Adding different nanostructures to resin composites has improved their mechanical properties.11,12,13 Overall, the type or concentration of these structures affects mechanical features,14 such as degree of conversion (DC) and microhardness (KH), considering that the refractive index of nanostructures may decrease light energy availability within the polymer.15 Studies have demonstrated that high-viscosity bulk-fill resin composites present lower DC and depth of cure (DoC) than conventional ones.16,17 Thus, incorporating a nanostructure that increases the mechanical strength of high-viscosity bulk-fill resin composites without impairing DC and DoC would be of interest for improving clinical performance with the bulk-fill technique and expanding clinical applications.
Niobium oxides present remarkable physicochemical properties with high mechanical stability in many hostile environments.18 Despite the limitations of using niobium oxide as a reinforcement for dental materials, several studies have demonstrated its bioactive potential in dental composites, owing to its ability to grow hydroxyapatite crystals in contact with human saliva.19,20 Furthermore, using niobium oxide as a filler for dental materials may reduce costs and availability limitations during material development, since Brazil has the largest niobium reserves in the world.18 Conversely, the lack of chemical interactions on the surface of oxides may cause weak bonding within organic matrices, with unstable chemical bonds and no interlocking strength. To overcome this, oxide surface functionalization enables nanostructures to remain chemically stable, preventing agglomeration by increasing nanostructure dispersion throughout the resin composite matrix.21 Niobium oxyhydroxide (or niobic acid) offers high chemical stability and numerous active sites, and its functionalization and incorporation into resin composites may improve material performance, working as both reinforcement and bioactive filler. Moreover, niobium oxyhydroxide presents high catalytic activity and may form electron pairs and increase material polymerization when exposed to light irradiation.22 Therefore, this study evaluated the influence of incorporating niobium oxyhydroxide fillers into an experimental high-viscosity bulk-fill resin composite, analyzing Knoop hardness (KH), depth of cure (DoC), degree of conversion (DC), elastic modulus (E), and degree of color change (∆E). It also assessed the bioactive potential of the experimental high-viscosity bulk-fill resin composite customized with niobium oxyhydroxide fillers using a Fourier-transform infrared (FTIR) spectroscope with an attenuated total reflectance (ATR) accessory.
Synthesis of niobium oxyhydroxide fillers and scanning electron microscopy (SEM)
Silanized niobium oxyhydroxide was obtained in two steps: i) synthesis and ii) silanization. First, the acid was synthesized by preparing a 0.26 mol L⁻¹ solution of the precursor salt (niobium ammonium oxalate) and then gradually adding a 1 mol L⁻¹ sodium hydroxide solution under constant agitation at 65 °C. Addition of the alkaline solution was stopped when the mixture reached pH 7, and the mixture was kept in a 70 °C oven for 72 hours. The precipitate was washed several times with distilled water. Finally, the solid was dried in a 70 °C oven for 24 hours, and its granulometry was standardized with a 200 mesh sieve.
Second, the niobium nanoparticles were silanized with the (3-mercaptopropyl)trimethoxysilane (MPTMS) silylating agent by a silanization reaction based on the methodology of Queiroga, et al.23 (2019) applied to bentonites. The acid was previously dried in a 100 °C oven for 24 hours. The solid was then dispersed in xylene, 10 mL of silane was added, and the mixture was kept under constant mechanical agitation for 48 hours at 100 °C under a nitrogen atmosphere. Next, the solid was washed with xylene and then ethanol and dried in a 70 °C oven for 24 hours. A Tescan microscope, MIRA 3 model, with magnifications of 5,000, 25,000, and 100,000× provided the scanning electron microscopy (SEM) analysis of niobium oxyhydroxide, as shown in Figure 2.
Degree of color change (∆E)
The ∆E test (n=7) assessed color changes at different time points using a CIELab-based colorimeter (Vita Easyshade V; Vita Zahnfabrik).
The spectrophotometer was calibrated before the measurements according to the manufacturer's instructions. An initial measurement (P0) was taken 24 hours after specimen production; a second one (P1), seven days after P0; and a third measurement (P2), after artificial aging consisting of 24-hour water storage at 60 °C.26 All specimens were dry-stored at 37 °C without light between P0 and P1. In total, three consecutive measurements were made in the center of each specimen until value uniformity was obtained.
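The exact colour-difference formula applied by the device is not stated in this extract; the sketch below assumes the classic CIE76 ΔE*ab metric computed from CIELAB readings, with hypothetical L*, a*, b* values, purely to illustrate how a ΔE value between two time points could be reproduced.

```python
import math

def delta_e_cie76(lab_ref, lab_sample):
    """CIE76 colour difference between two CIELAB triplets (L*, a*, b*)."""
    dL = lab_sample[0] - lab_ref[0]
    da = lab_sample[1] - lab_ref[1]
    db = lab_sample[2] - lab_ref[2]
    return math.sqrt(dL ** 2 + da ** 2 + db ** 2)

# Hypothetical readings for one specimen at baseline (P0) and after aging (P2)
p0 = (72.4, 1.8, 18.6)
p2 = (70.1, 2.3, 21.0)
print(f"Delta E (P0 -> P2) = {delta_e_cie76(p0, p2):.2f}")
```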
Bioactivity analysis
The glass slide used to prepare the samples for this test was 1 mm thick, and the disc-shaped samples (4 × 4 mm) were prepared by placing the material in a stainless steel mold covered with a polyester strip over the glass slide, creating the 5-mm light tip-to-material polymerization distance.27 The Valo light tip was then carefully centered on the sample, which was light-cured with an LED curing device, measured as previously described (1,000 mW/cm² for 40 seconds; VALO; Ultradent, Utah, USA).20,28 The solution was prepared with the following
Statistical analysis
The SigmaPlot software, version 12.0 (Systat Software, San Jose, CA, USA), was used to statistically analyze the data. The Shapiro-Wilk test verified normal distribution and equality of variances for all variables. One-way ANOVA followed by Tukey's test (p<0.05) analyzed FS, DC, and KH. According to the Shapiro-Wilk test, E data were not normally distributed (p<0.05). For DoC, ANOVA demonstrated no significant difference between groups (p>0.05). Bioactivity data were collected qualitatively and therefore did not undergo statistical analysis.
The Q-Q Plot with the simulated envelope verified the assumption of normality of ∆E value residuals.
The parametric analysis results were compared with the non-parametric findings provided by the ANOVA-type statistic (ATS).
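As a rough illustration of this statistical pipeline (normality check, one-way ANOVA, and Tukey's post hoc comparison), the sketch below uses SciPy and statsmodels on hypothetical flexural strength values; the group names follow the paper, but the numbers and the choice of libraries are assumptions, not the authors' SigmaPlot workflow.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical flexural strength values (MPa) for the three groups (n = 10 each)
fs = {
    "BF":    np.array([112, 108, 119, 104, 115, 110, 117, 109, 113, 111], dtype=float),
    "BF0.5": np.array([118, 121, 109, 116, 120, 114, 119, 112, 117, 115], dtype=float),
    "BFC":   np.array([ 92,  96,  88,  94,  90,  97,  91,  95,  89,  93], dtype=float),
}

# Shapiro-Wilk normality check per group, as described in the methods
for name, values in fs.items():
    w, p = stats.shapiro(values)
    print(f"Shapiro-Wilk {name}: W={w:.3f}, p={p:.3f}")

# One-way ANOVA across the three groups
f_stat, p_anova = stats.f_oneway(*fs.values())
print(f"One-way ANOVA: F={f_stat:.2f}, p={p_anova:.4f}")

# Tukey's post hoc test if the ANOVA is significant at alpha = 0.05
if p_anova < 0.05:
    data = np.concatenate(list(fs.values()))
    labels = np.repeat(list(fs.keys()), [len(v) for v in fs.values()])
    print(pairwise_tukeyhsd(data, labels, alpha=0.05))
```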
SEM, FTIR, and mechanical properties
SEM images (Figure 2) showed irregular particle clusters without a second phase. The FTIR spectra presented in Figure 3 showed the characteristic bands of niobium oxyhydroxide. These bands initially appear at 3408 and 3142 cm⁻¹, corresponding to the O-H stretching of the Nb-OH bond on the surface and in the bulk, respectively.29 Meanwhile, the region from 796 to 594 cm⁻¹ corresponds to the vibrations of the Nb-O-Nb bonds. According to Oliveira, et al.30 (2015), these bands are crucial for identifying the bonds involving niobium and oxygen. Additionally, the spectra showed bands at 1695 cm⁻¹ associated with surface-adsorbed water, as well as regions at 1402 and 1269 cm⁻¹, which are related to impurities in the niobium precursor salt.31 Table 1 demonstrates that DC was similar between BF0.5 and BFC (p>0.05), with no statistically significant difference. This indicates that the monomer conversion of the bulk-fill resin composite with 0.5% niobium oxyhydroxide fillers (BF0.5) was similar to that of the commercial resin composite (BFC). However, the experimental resin composite used as control (BF) showed a significantly higher DC (p<0.05) than the other two groups. Although the DC of BF0.5 was lower than that of BF, the values were similar to BFC, and the DoC of the three groups was also comparable (p>0.05) (Table 2). The control (BF) and experimental (BF0.5) groups showed higher FS than the commercial resin (BFC) (p<0.001) (Table 1). As for KH values, group BF0.5 was superior to BF and BFC on both top and bottom surfaces (p=0.001) (Table 2). ∆E at the initial time showed similar behavior in all three groups. Delta variability increased at the final time and was more evident in group 3 (BFC). However, the groups showed no statistical differences (p=0.92) (Figure 5).
Bioactivity
Figure 6 represents the spectra of samples for all tested materials: (a) BF (control), (b) BF0.5 (resin composite doped with 0.5% niobium), and (c) BFC (commercial resin composite with Giomer technology) at all evaluated times (T0, T1, T14, and T21). FTIR analysis showed free phosphate (PO4)
Values in the same column with different superscript lowercase letters significantly differ from each other (p<0.05).
Discussion
This study used 0.5% niobium oxyhydroxide fillers to synthesize the experimental resin composite, since the fraction of bioactive fillers in the material should remain minimal in order to promote remineralization by ion release without affecting mechanical properties.32 For the functionalized niobium oxyhydroxide material (HNb-MPTMS), the FTIR spectrum profile was very similar. However, a distinct band was observed in the 2953 cm⁻¹ region, serving as an indicator of the presence of silane. This observation is attributed to the vibrations of the C-H bonds in the organic compound.33 Further evidence supporting the anchoring of MPTMS in the niobium structure was the absorption band at 1124 cm⁻¹, corresponding to Si-C bond vibrations,34 and at 1012 cm⁻¹, associated with siloxanes (Si-O).23 Based on the micrographs (Figure 2), a homogeneous morphology with clustered particles was observed.
Silanization was once again confirmed by the presence of silicon and sulfur in the EDS data. This technology, known as Giomer, combines the advantages of glass ionomers (anticariogenic and self-adhesive properties) while addressing their poor esthetics and possible dehydration issues through a pre-reacted glass-ionomer filler surface incorporated into resin composites, offering esthetics and high bond strength.12 Although studies have found inferior physicomechanical properties for hybrid resin composites compared to conventional and high-viscosity bulk-fill nanocomposites,11,13,36 this study found a similar DC between groups BF0.5 and BFC (p>0.05) (Table 1). A correlation between filler characteristics, such as size and distribution, and flexural properties should be considered, indicating that Giomer resin composites have higher filler contents16 but increased bioactivity potential even after a long period. Another aspect to consider is that DC may depend strongly on the quality of the three-dimensional polymeric network formed after polymerization and on the variation in filler quantity. The relationship between DC and other mechanical properties is not always straightforward. DC decreases proportionally with increasing filler content, probably due to light scattering at resin-filler interfaces.37 Moreover, incorporating fillers may increase the viscosity of some dental materials, possibly explaining their lower DC.38 However, the values in group BF0.5 neither affected their mechanical properties nor differed statistically from the commercial resin (group BFC). DoC was similar in all groups (p>0.05), potentially due to the similar clinical translucency of the tested resin composites (BF and BFC) enabling effective polymerization inside the polymer (Table 2). Although filler incorporation can alter refractive index and light dispersion, the catalytic activity of niobium in BF0.5 might have contributed to this effective polymerization rate.22 Although the flexural strength of BF0.5 did not differ significantly from the other groups, it reached the minimum of 80 MPa required for polymer-based restorative materials according to ISO 4049:2019.
Flexural strength was the primary indicator of the physicomechanical properties of the bulk-fill resin composite in this study because it involves tensile, compressive, and shear stresses, making it a satisfactory parameter for providing meaningful insights into the fracture strength of a material. Thus, testing under the most challenging mechanical conditions may reduce the chances of accepting a material that fails prematurely due to inadequate strength.39 Moreover, the survival rate and wear resistance of Giomer resin composites met ADA guidelines for tooth-colored restorative materials in posterior teeth even after four years.36,40 Therefore, comparing a low concentration of niobium fillers in a high-viscosity bulk-fill resin composite with Giomer technology might be a satisfactory parameter to measure its performance in this study.
The color stability of a restorative material determines restoration success and patients' acceptance. Optical properties usually improve when nanoparticles are incorporated into restorative materials because nanoparticles are smaller than the wavelength of visible light, demonstrating better light transmittance.41 The data in this study suggest that all tested materials had similar color stability even after aging (including the control group), helping to extend the lifespan of restorative materials. However, many clinicians still suspect that the mechanical properties of bulk-fill materials might be unsuitable for clinical use in posterior teeth,16 preferring to avoid the 4-5 mm increments recommended by manufacturers. Conversely, providing a bioactive potential to bulk-fill resin composites can be promising and attractive because these materials are used in deep cavities with larger increments than conventional composites. Further studies evaluating the use of increments up to 5 mm should be performed to find the most effective thickness for high-viscosity bulk-fill resin composites with niobium nanoparticles.
Conclusion
The high-viscosity bulk-fill resin composite with 0.5% niobium oxyhydroxide fillers presented promising outcomes as a reinforcement agent, showing good bioactive potential despite its lower predictability than the commercial resin composite with Giomer technology.
reagents: sodium chloride (NaCl), potassium chloride (KCl), di-potassium hydrogen phosphate trihydrate (K2HPO4·3H2O), magnesium chloride hexahydrate (MgCl2·6H2O), calcium chloride (CaCl2), sodium sulfate (Na2SO4), Tris-hydroxymethyl aminomethane (Tris) buffer, and 1 M hydrochloric acid. The reagents were dissolved in purified water and kept under magnetic stirring. The pH was verified and adjusted to 7.4. Overall, two specimens from each group (BF, BF0.5, and BFC) (n=3) were immersed in 5 mL of the described solution and maintained in airtight containers in a 37 °C oven for the determined times: T0, initial time; T1, the first hour after immersing the specimens in the solution; T14, 14 days after specimen immersion; and T21, 21 days after specimen immersion. The resin composites were subjected to Fourier-transform infrared spectroscopy (FTIR; Shimadzu Corporation, Model IR Prestige 21, Kyoto, Japan) analysis at the initial time (zero), after one hour, and after 21 days of immersion in the solution.
by the surface area of the resin composites in all groups at the peaks of ~560 cm⁻¹ within T1 (BF) and T21 (BF0.5 and BFC). After 1 h of immersion in the solution, P-O bands suggested a rapid deposition of calcium phosphate at the surface of the samples (Figure 6a, arrow).
Figure 6b (arrow) showed a deposition of PO4⁻ on the surface of the doped resin composite after 21 days of immersion. During this period, peaks of ~560 and ~600 cm⁻¹ appeared, typical of apatite formation. These bands may have peak replacement due to the presence of other C-O bands also found in the control group (BF). The FTIR spectra at 21 days in Figure 6c showed
Figure 5 - Chart of individual values for the degree of color change (ΔE).
Figure 4. Over the years, hybrid resin composites have been developed as bioactive restorative materials to expand the clinical indications of bulk-fill resin composites. They interfered with caries development adjacent to restorations due to their therapeutic function of releasing ions when in contact with the oral environment.35
Fourier-transform infrared spectroscopy analysis assessed bioactivity by comparing different release times. Bioactive materials have a biological effect or are biologically active, bonding tissues and materials42 according to their potential to induce specific attachment to the dentin substrate.43 This study used simulated body fluid (SBF) to mimic biomimetic mineralization, providing temperature, ion concentration, and pH conditions similar to human blood plasma. SBF offers an adequately supersaturated environment around the substrate and facilitates bone-like apatite deposition.44 PO4⁻ deposition occurred on the surface of samples containing niobium. Characteristic apatite formations appeared after 21 days, although an EDS analysis could complement these results. The greatest peak intensity was observed for BFC at 21 days.
| 4,183.2 | 2024-02-26T00:00:00.000 | [
"Materials Science"
] |
Progressive energy efficient least edge computation (P-ELEC) routing protocol in wireless sensor networks
The energy of nodes in Wireless Sensor Networks (WSNs) is usually limited and has to be consumed economically in order to prolong the lifetime of the network. Imbalanced energy use depletes sensor node energy quickly and leads to sensor voids, which in turn cause the routing hole problem. The routing hole problem ultimately degrades network performance. To solve the routing hole problem, an Energy Efficient Least Edge Computation routing protocol (ELEC) was proposed in the literature. Simulation results show that ELEC achieves nearly double the network lifetime through equal energy consumption in various parts of the network as compared to other existing routing techniques such as GRACE, LEACH, and AODV-EHA. This work presents a progressive Energy Efficient Least Edge Computation (P-ELEC) routing protocol, in which the cluster head percentage is incremented periodically. Incrementing the number of cluster heads minimizes the workload of each cluster head and, in turn, enhances the lifetime of the network. The simulation results show the prolongation of network lifetime under various scenarios and modes of operation.
Introduction
Due to their vast range of applications, Wireless Sensor Networks (WSNs) have received notable attention in recent research and development (Alemdar and Ersoy, 2010; Abdulkarem et al., 2020). The energy of the nodes is usually limited and has to be consumed economically in order to prolong the lifetime of the network; otherwise sensor voids arise (Fu et al., 2020). The sensor void caused by energy depletion is a major issue in WSNs. A sensor node that is unable to disseminate packets is known as a void or hole, and such voids lead to the routing hole problem in WSNs (Mohemed et al., 2017; Saranya et al., 2018). Hence, the efficient use of energy among sensors is one of the fundamental research themes. Cluster-based routing is an energy-efficient routing technique in WSNs (Deepa and Latha, 2019; Thangaramya et al., 2019). Many multi-hop cluster-based routing protocols exist in the literature (Zen and Ur-Rahman, 2017; Behera et al., 2019; Maitra et al., 2019), but energy-unaware path selection causes the routing hole problem shown in Fig. 1 (Biswas et al., 2019). To balance energy consumption, many routing techniques have been proposed (Khari, 2018; Yue and He, 2018; Bhushan and Sahoo, 2019).
To minimize energy consumption and enhance network lifetime through load balancing among sensor nodes, a grid-based routing technique is proposed in Kareem and Jameel (2018). The evaluation shows that this technique improves network stability, energy efficiency, and load balancing across the entire network compared to the CFDASC algorithm. The Distributed Unequal Clustering Algorithm (DUCA) is proposed in Gowda and Subramanya (2019). DUCA makes clusters near the base station smaller because the workload of those CHs is higher. The simulation results show that the DUCA algorithm improves the lifetime by balancing the energy load among the sensor nodes compared to the LEACH protocol.
Cluster-based routing leads to non-uniform energy consumption among sensor nodes and cluster heads. A CH near the base station has higher energy consumption due to its larger load. The many-to-one data routing pattern results in faster depletion of the energy resources of CHs near the sink; this is referred to as the routing hole problem. These CHs become unable to forward packets toward the sink, because neighbouring CHs deplete their energy faster and die out under the uneven workload. As a result of this problem, the network is partitioned, and the WSN cannot accomplish its designated critical function, as shown in Fig. 1. Once the hole is formed, continued operation of the remaining network is useless because data can no longer be transferred to the base station. To balance energy consumption, it is crucial to balance the workload among the sensor nodes and CHs. To reduce the routing hole problem, the energy consumption of sensor nodes should be balanced, which demands a balanced workload among the different parts of the network; therefore, the density of CHs in different parts of the network should be uniform. To deal with imbalanced energy consumption and to overcome the routing hole problem, a novel Energy Efficient Least Edge Computation routing protocol (ELEC) was proposed by the authors in Sama et al. (2019). The simulation results show the enhanced performance of ELEC compared to other existing routing techniques such as GRACE, LEACH, and AODV-EHA. This paper presents Progressive-ELEC (P-ELEC), a further evolution of the ELEC routing protocol. In P-ELEC, the cluster head percentage is incremented periodically, which minimizes the workload of each cluster head and, in turn, enhances the lifetime of the network. The proposed routing strategy shows that if the increment of the CH percentage is uniform in different parts of the network, it is possible to balance the energy consumption in the network and prolong the network lifetime.
The simulation results show the prolongation of network lifetime under various scenarios and modes of operation. The organization of the remainder of the paper is shown in Fig. 2.
Cluster based routing protocols
The focus of the proposed research is the routing hole problem; therefore, literature on the analysis of the routing hole problem, energy-efficient utilization, and lifetime enhancement strategies is briefly presented in this section (Rahman et al., 2013; Zen and Ur-Rahman, 2017; Wang et al., 2019). Research has shown that communication is the major cause of energy exhaustion. To minimize communication energy consumption, the clustering technique ensures that only cluster heads (CHs) forward the aggregated data to the sink. The issue in cluster-based routing is that a CH has to handle the load as head and also forward packets to the next CH. To balance the workload of CHs, a modified Mutual Exclusive Distributive Clustering (MEDC) protocol is presented in Chugh and Panda (2018); the proposed work balances the workload of cluster heads and enhances the network lifespan. The Aware Cluster Based Multi-hop (MEACBM) routing protocol presented by Toor and Jain (2019) elects the sensor node with the highest energy as cluster head. After the distribution of sensor nodes and the selection of clusters, the whole network is split into zones, and inside each zone a mobile sensor node is deployed, which behaves as a Mobile Data Collector (MDC) for gathering data from CHs. Simulation results show enhanced network performance in terms of network lifetime, throughput, security, and the number of critical nodes. However, the proposed MEACBM routing protocol increases overhead due to the mobility of sensor nodes.
To best fit a specific application, it is very important to select the most relevant routing protocol. A more suitable, valid, and consistent clustering technique is proposed that is adaptable and improves the lifespan of the network compared to the existing LEACH routing protocol (Jain and Thakur, 2019). Wang et al. (2019) proposed a compressive sensing-based clustering technique to minimize power exhaustion and mitigate the hole problem. The technique rotates the roles of the Cluster Head (CH) and the Backup Cluster Head (BCH), and furthermore presents an Energy-Efficient Compressive Sensing-based clustering Routing (EECSR) protocol. Extensive simulation experiments show improved energy utilization and enhanced network lifetime in WSNs. Many routing strategies mitigate the routing hole problem at additional cost or lead to other problems. The routing hole problem has been minimized by various perceptive routing strategies without taking even energy utilization in the network into account. Efficient energy consumption has been accomplished by load sharing, energy-efficient deployment techniques, and power-balancing routing protocols, but energy-aware path selection still needs to be taken into account in routing protocols.
Considering the issues in the literature, the Energy Efficient Least Edge Computation (ELEC) routing protocol was proposed, which uses energy-aware path selection for intra-cluster multi-hop routing in wireless sensor networks. The results show that ELEC achieves nearly double the network lifetime through equal energy depletion in different parts of the network.
Progressive-energy efficient least edge computation (P-ELEC) routing protocol
In our previous work, the Energy Efficient Least Edge Computation (ELEC) routing protocol for WSNs was proposed to reduce the routing hole problem. ELEC is a reactive routing algorithm that creates a local route table whenever an event occurs. The sensor nodes close to the event detect it and transmit the data to the CH via single-hop or multi-hop clustering, depending on the distance. If the sensor nodes are distant from the CH, they send the data through multi-hop clustering; otherwise, the data are sent directly to the CH through a single hop. After data collection, the CH forwards the data to the BS via multi-hop. The source CH then selects the next-hop CH with the minimum values of edge count, energy level, and link weight.
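The paper states only that the source CH picks the next-hop CH with minimum edge count, energy level, and link weight, without giving the exact combination rule. The sketch below is therefore an assumption: it scores each candidate with a weighted sum of the three metrics and picks the minimum; the candidate data, field names, and weights are hypothetical and serve only to illustrate the selection step.

```python
from dataclasses import dataclass

@dataclass
class CandidateCH:
    node_id: int
    edge_count: int     # edges (hops) remaining to the sink
    energy_cost: float  # estimated energy expenditure of using this link
    link_weight: float  # link cost metric (e.g. distance- or quality-based)

def select_next_hop(candidates, w_edge=1.0, w_energy=1.0, w_link=1.0):
    """Pick the neighbouring CH that minimizes a combined score.

    The weighted-sum scoring is an illustrative assumption; ELEC itself is
    only described as choosing the minimum edge count, energy level, and
    link weight.
    """
    def score(ch):
        return (w_edge * ch.edge_count
                + w_energy * ch.energy_cost
                + w_link * ch.link_weight)
    return min(candidates, key=score)

# Hypothetical neighbouring cluster heads of a source CH
neighbours = [
    CandidateCH(node_id=4, edge_count=3, energy_cost=0.8, link_weight=2.1),
    CandidateCH(node_id=7, edge_count=2, energy_cost=1.1, link_weight=1.7),
    CandidateCH(node_id=9, edge_count=2, energy_cost=0.6, link_weight=1.9),
]
print("Next-hop CH:", select_next_hop(neighbours).node_id)
```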
This work proposes a further evolution of ELEC, in which the cluster head percentage is incremented periodically. Incrementing the number of cluster heads minimizes the workload of each cluster head and, in turn, enhances the lifetime of the network. The simulations present results obtained under a variety of scenarios and modes of operation.
ELEC routing network model
Homogeneous sensor nodes are deployed randomly in the wireless sensor network area. The area is divided into clusters, and each cluster is controlled by one cluster head (CH). The sensor nodes send the sensed data to the cluster head via single-hop or multi-hop, depending on their distance from the CH. If the CH is far away, the sensor nodes send the packets via multi-hop; otherwise, they forward the sensed data directly to the CH via single-hop. After collecting and aggregating the received packets, the CH forwards the data to the sink via multi-hop. The proposed protocol follows the same routing algorithm as ELEC, which considers the edge count, energy level, and link cost for next-hop neighbour selection. Route processing in the ELEC algorithm is illustrated in Fig. 3.
Theorem
The proposed theorem shows that increasing the CH percentage enhances the performance of the network. To reduce the routing hole problem, the energy consumption of sensor nodes should be balanced, and for balanced energy consumption it is necessary to balance the workload among the different parts of the network. Therefore, the density of CHs in different parts of the network should be the same. The theorem proves that if the CH percentage is incremented equally in different parts of the network, it is possible to balance the energy consumption in the network and prolong the network lifetime.
Theorem to prove P-ELEC
Suppose the network lifetimes of the ith and (i+1)th clusters are equal, where the (i+1)th cluster is the outer cluster. The number of sensors in each cluster can be found from the corresponding equation. As the outer cluster only generates and forwards its own packets, its total energy consumption is obtained by substituting this value into Eq. 3. The ith cluster generates and sends its own packets and also receives and forwards the packets of the outer cluster, so its total energy consumption includes both contributions (Eq. 5), where K is the bit rate, e1 is the sending energy, e2 is the receiving energy, ECl is the total energy of the outer cluster, rCl is the radius of the outer cluster, Cr is the communication range, and ri is the radius of the ith cluster. Substituting the values of Ei and Ei+1 into Eq. 2, letting Z be the total number of CHs in the network, and applying the equal ratio theorem gives the lifetime of the network (Eq. 7), where M is the total number of sensor nodes in the network, E is the total energy of the network, and each sensor node starts with the same initial energy. The expression in Eq. 7 shows that the density (Den) proportion of CHs in the ith and (i+1)th clusters is identical to the density proportion of CHs in the (i-1)th and ith clusters, so the network lifespans of two adjoining clusters will be equal. This shows that optimum energy consumption is achievable.
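The explicit equations (Eqs. 1-7) did not survive extraction in this copy, so the following is only a hedged reconstruction of the energy-balance argument using the variables defined above (K, e1, e2); the exact forms in the original paper may differ.

```latex
% Assumed forms of the per-cluster energy balance (not verbatim from the paper)
\begin{align*}
  E_{i+1} &\approx K\,N_{i+1}\,e_{1}
     && \text{outer cluster: transmits only its own packets}\\
  E_{i}   &\approx K\,N_{i}\,e_{1} + K\,N_{i+1}\,(e_{1}+e_{2})
     && \text{$i$th cluster: own packets plus relayed traffic}\\
  T_{i}   &= \frac{Z_{i}\,E_{\mathrm{CH}}}{E_{i}}
     && \text{lifetime of cluster $i$ with $Z_{i}$ CHs of initial energy $E_{\mathrm{CH}}$}
\end{align*}
% Setting $T_i = T_{i+1}$ fixes the ratio $Z_i / Z_{i+1}$: the CH density must
% grow with the relayed load for adjoining clusters to deplete at the same time.
```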
Results and discussion
In this study, an effort is made to evaluate the performance of the incremental cluster head routing protocol by answering the following questions: To what extent does incrementing the cluster head percentage avoid the routing hole problem in a wireless sensor network? To what extent does incrementing the cluster head percentage affect the lifetime of the network?
Performance metrics
To assess the performance of the proposed routing protocol, network lifetime is used as the performance metric. The quality of a routing strategy depends on the working life of the network, i.e., its lifetime. The simulation results of the proposed strategy are shown in Figs. 4-15.
Network lifetime
The lifetime of a WSN is one of its most important issues. To improve the lifetime of a network, an energy-efficient routing protocol strategy is needed. Depending on the network application, the definition of network lifetime appears in different forms throughout existing research: a) the time for which the sensor network remains in an operational state; b) the time until a fixed number of nodes deplete their energy (Filipe et al., 2004); c) the time when an area of interest is no longer sensed by any node (Karl and Willig, 2007).
The proposed work considers three definitions of network lifetime. a) 1st CH node failure: the lifetime of the network is the time until the first CH fails or runs out of energy. b) 10% node failure: the lifetime is the time until 10% of the total CHs deplete their energy. c) Last packet received: the time at which the last packet is received at the base station from any CH.
Network lifetime (1st CH node failure)
Here the lifetime of the network is evaluated according to the first definition of lifetime. With the increase in CH percentage, i.e., 10%, 20%, 30%, 40%, and 50%, for power multipliers 1, 5, 8, and 10, the lifetime is maximized, as shown in Figs. 4-15.
The results in Fig. 4 show that the network lifetime depends on the number of CH's in a network.
It can be seen that for 0% CHs, the network lifetime for the different CH range multipliers (the communication range of the CH) is up to 300 seconds. At 10 percent CHs, for multipliers 1 to 8 there is no noticeable change in the lifetime, but a perceptible change to 500 seconds can be observed at multipliers 9 and 10. At 50 percent CHs, there is still no enhancement in the network lifetime for multipliers 1, 2, and 3, while it reaches 400 to 1100 seconds for multipliers 4 to 8, respectively, and a noticeable improvement to 1400 seconds can be observed at multipliers 9 and 10. It can be observed that the lifetime is the same at 40% and 50%, so there is no need to evaluate 60%; and since no difference is seen between range multipliers 9 and 10, range multiplier 9 can be considered optimal. The results in Fig. 5 show that the network lifetime depends on the number of CHs in the network. For 0% CHs, the network lifetime for the different CH range multipliers is up to 300 seconds. At 40 percent CHs, for multipliers 1 to 5 there is no noticeable change in the lifetime, but a perceptible change to 800 seconds can be seen at multipliers 5 to 10. At 50 percent CHs, there is still no enhancement in the network lifetime for multipliers 1, 2, and 3, while it is 500 to 800 seconds for multipliers 4 to 10. In Fig. 6, the power multiplier for all cluster heads is taken as 8. The network lifetime at 0% CHs is the same for range multipliers 1 to 10, i.e., 300 seconds. At 10 percent CHs, for multipliers 1 to 5 there is no noticeable improvement in the network lifetime, but a small change to 500 seconds can be noticed at multipliers 9 and 10. At 50 percent CHs, there is still no enhancement in the network lifetime for multipliers 1 and 2, while it is 500 to 1200 seconds for multipliers 3 to 8. A better enhancement, 1300 seconds, is seen at CH range multipliers 9 and 10; since both are the same, CH range multiplier 9 can be considered optimal. To evaluate the network lifetime here, the energy of all CHs is considered with power multiplier 10, as shown in Fig. 7. The lifetime of the network at 0% CHs is the same for CH range multipliers 1 to 10, i.e., 300 seconds. At 10 percent CHs, a small improvement in lifetime to 500 seconds can be observed. At 50 percent CHs, there is still no enhancement in the network lifetime for multipliers 1, 2, and 3, while it is 500 to 1200 seconds for multipliers 4, 5, and 6, respectively, and a noticeable improvement to 1500 seconds can be observed at multipliers 7 to 10.
Network lifetime (10% CH node failure)
The network performance is assessed according to the second definition of lifetime, i.e., 10% CH failure. The impact of the incremental CH percentage on the network lifetime with CH range multipliers from 1 to 10, for power multipliers 1, 5, 8, and 10, is shown in Figs. 8, 9, 10, and 11, respectively. The results in Fig. 8 show that the network lifetime depends on the number of CHs in the network. It can be seen that for 0% CHs, the network lifetime for the different CH range multipliers (the communication range of the CH) is up to 500 seconds. At 10 percent CHs, for range multipliers 1 to 4 there is no noticeable change in the lifetime; a small increase can be noticed at multipliers 5 to 8, and a perceptible change to 800 seconds can be observed at multipliers 9 and 10. At 50 percent CHs, there is still no enhancement in the network lifetime for multipliers 1 and 2, while it is 700 to 3000 seconds for multipliers 3 to 8, respectively, and a noticeable improvement to 4000 seconds can be observed at multipliers 9 and 10. It can be observed that the lifetime is the same at 40% and 50%, so there is no need to evaluate 60%; and since no difference is seen between range multipliers 9 and 10, range multiplier 9 can be considered optimal.
The lifetime at 0% CHs is the same for all power multipliers, i.e., 500 seconds. It can be observed that the lifetime increases with the increase in CH percentage, i.e., 10%, 20%, 30%, 40%, and 50%, for power multipliers 1, 5, 8, and 10. Fig. 9 shows the impact of the incremental CH percentage on the network lifetime with CH range multipliers from 1 to 10 for power multiplier 5. At 0% CHs, the network lifetime for the different CH range multipliers is up to 500 seconds. At 40 percent CHs, for multipliers 1 to 4 there is no noticeable change in the lifetime, but a perceptible change to 1000 seconds can be seen at multipliers 5 to 10. At 50 percent CHs, there is still no enhancement in the network lifetime for multipliers 1, 2, and 3, while the increment is 1000 to 2000 seconds for multipliers 4 to 10.
Fig. 9: Network lifetime (10% CH failure) PM=5
Fig. 10 shows the impact of the incremental CH percentage on the network lifetime for power multiplier 8 with CH range multipliers from 1 to 10. At 0% CHs, the network lifetime for the different CH range multipliers is up to 500 seconds. At 40 percent CHs, for multipliers 1 to 3 there is no noticeable change in the lifetime, but a perceptible increase to 3000 seconds can be seen at multipliers 4 to 10. At 50 percent CHs, there is still no enhancement in the network lifetime for multipliers 1, 2, and 3, while the increment is 1000 to 3000 seconds for multipliers 4 to 10. Fig. 11 shows the impact of the incremental CH percentage on the network lifetime for power multiplier 10 with CH range multipliers from 1 to 10. At 0% CHs, the network lifetime for the different CH range multipliers is up to 500 seconds. At 50 percent CHs, there is still no enhancement in the network lifetime for multipliers 1 and 2, while the increment is 2000 to 4000 seconds for multipliers 4 to 10.
Network lifetime (Last packet received)
The network performance is assessed according to the third definition of lifetime, i.e., the last packet received. The impact of the incremental CH percentage on the network lifetime with CH range multipliers from 1 to 10, for power multipliers 1, 5, 8, and 10, is shown in Figs. 12, 13, 14, and 15, respectively. The lifetime at 0% CHs is the same for all power multipliers, i.e., 500 seconds. It can be observed that the lifetime increases with the increase in CH percentage, i.e., 10%, 20%, 30%, 40%, and 50%, for power multipliers 1, 5, 8, and 10.
Conclusion
Many researchers are making efforts to explore sensor networks. The network lifespan depends on the energy level of the nodes. Imbalanced energy use depletes sensor node energy quickly and leads to sensor voids, which in turn cause the routing hole problem. The routing hole problem ultimately degrades network performance. To solve the routing hole problem, the Energy Efficient Least Edge Computation routing protocol (ELEC) was proposed in the literature. Simulation results show that ELEC achieves nearly double the network lifetime through equal energy consumption in various parts of the network compared to other existing routing techniques such as GRACE, LEACH, and AODV-EHA. This paper presents the progressive Energy Efficient Least Edge Computation (P-ELEC) routing protocol, in which the cluster head percentage is incremented periodically.
Fig. 15: Network lifetime (last packet received) PM=10
The simulation results show the prolongation of network lifetime under various scenarios and modes of operation. In all of the above-mentioned results, at 0% CHs there is no enhancement in lifetime for any power multiplier, which shows that cluster-based routing has a high impact on the prolongation of network lifetime. Incrementing the CH percentage minimizes the workload of each cluster head and, in turn, enhances the lifetime of the network.
The dense deployment of CHs in the proposed work results in an enhanced network lifetime. However, a larger number of CHs leads to redundant data transmission to the sink. To avoid redundant data transmission due to the dense deployment of CHs, a sleep-and-wake strategy can be implemented in the future.
Conflict of interest
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Energy-efficient routing protocols for solving energy hole | 5,355 | 2021-01-01T00:00:00.000 | [
"Computer Science",
"Engineering",
"Environmental Science"
] |
The use of high-frequency ultrasound imaging and biofluorescence for in vivo evaluation of gene therapy vectors
Background Non-invasive imaging of the biodistribution of novel therapeutics, including gene therapy vectors, in animal models is essential. Methods This study assessed the utility of high-frequency ultrasound (HF-US) combined with biofluorescence imaging (BFI) to determine the longitudinal impact of a Herpesvirus saimiri amplicon on human colorectal cancer xenograft growth. Results HF-US imaging of xenografts yielded accurate and informative xenograft volumes in a longitudinal study. The volumes correlated better with final ex vivo volume than mechanical callipers (R² = 0.7993, p = 0.0002 vs. R² = 0.7867, p = 0.0014). HF-US showed that the amplicon caused lobe formation. BFI demonstrated retention and expression of the amplicon in the xenografts, and quantitation of the fluorescence levels also correlated with tumour volumes. Conclusions The use of multi-modal imaging provided useful and enhanced insights into the behaviour of gene therapy vectors in vivo in real time. These relatively inexpensive technologies are easy to incorporate into pre-clinical studies.
Background
The use of non-invasive and accurate methods to determine tumour volume, as well as biodistribution and transduction imaging of novel therapeutics, is essential in experimental models in vivo. In particular, for gene therapy studies, knowledge of maintenance, expression and efficacy of the vector is a fundamental part of the testing process [1]. However, this is rarely achieved during the in vivo study of a novel gene therapy strategy, as often only longitudinal calliper measurements of xenograft growth or final histology after treatment are carried out. The spread or loss of a vector is rarely detected during the course of the experiment and for cancer treatment, not all therapies will result in a reduction in tumour volume.
Therefore it is important to be able to examine the impact of a gene therapy vector during the in vivo testing phase using different assessment criteria, whilst being mindful of adhering to the principles of reduction, refinement and replacement in animal experiments.
Ultrasound is a non-invasive method that has been utilised recently for tumour growth studies in vivo and is used in the clinic for staging colorectal cancer among others [2,3]. High-frequency ultrasound (HF-US) machines are available for small animal imaging. They are relatively easy to use and give high resolution greyscale images of mouse anatomy [4]. They also give functional information on the vascular structure of xenografts through the use of contrast agents and are relatively inexpensive and portable compared to MRI machines [5]. Mechanical callipers, however, are still utilised extensively for therapeutic agent testing, especially in gene therapy applications on xenografts [6]. These are very cheap, non-invasive and allow multiple repeated measurements with no anaesthetic required. However, mechanical callipers assume that the growth of xenografts is always ellipsoid and can only measure growth above the skin surface of the animal. In addition, calliper measurements are also affected by skin thickness, subcutaneous fat layer thickness and compressibility of the tumour [7]. From our experience of xenograft growth in gene therapy and other therapeutic studies, we know that this ellipsoid growth pattern is rarely observed, especially as the tumour volume becomes large (above approximately 300 mm³).
A gene encoding a fluorescent or luminescent protein is often incorporated into gene therapy vectors in order to enumerate transduction efficiencies in vitro [8,9]. Moreover, these markers are also very useful for in vivo studies. Optical imaging chambers can be used to image the biodistribution of a vector when administered and can give an indication of the transduction efficiency in the target cells [10]. Optical imaging systems also allow the maintenance of a vector to be determined throughout the course of treatment, as well as examining the genetic stability of the vector over time. The first paper to prove that optical imaging could be used to measure tumour growth used bioluminescence of tumour cells in rat brain and was compared to MRI scans for tumour volume [11]. Imaging of stably-transfected cell lines containing red or green fluorescent protein (RFP or GFP) has been used to measure tumour and metastatic growth [12,13]. Recent work has also shown that fluorescent intensity correlates better with tumour volume than fluorescent area [14].
In the study described herein, we aimed to determine whether the use of HF-US measurements were more accurate than mechanical callipers in assessing xenograft volumes of tumour cells which were infected before injection with an experimental gene therapy vector. The use of HF-US to provide anatomical information on tumour growth and BFI to monitor expression of a gene therapy vector in longitudinal studies, were also analysed. The vector we used was a Herpesvirus saimiri (HVS) amplicon which contains the minimal elements for episomal maintenance without infectious capabilities [9,15]. This gamma-2 Herpesvirus amplicon can incorporate large amounts of heterologous DNA using a HVS-BAC (bacterial artificial chromosome) system and infects a broad range of human cells. The amplicon was previously stably transfected into the SW480 colorectal cancer cell line and contains a constitutively active GFP gene [16]. The presence of the GFP gene enabled monitoring of its persistence during xenograft growth in this study.
Tumour model
The colorectal cancer cell line HCT116 was stably transfected with an episomally-maintained Herpesvirus saimiri amplicon incorporating the GFP gene under the control of the Cytomegalovirus (CMV) promoter. These cells were grown in Dulbecco's Modified Eagle Medium (DMEM, Invitrogen) supplemented with 10% (v/v) foetal calf serum and 4 µl/ml Hygromycin B (Sigma, Poole, U.K.) in 5% CO₂ at 37 °C until there were enough cells for xenograft set-up (approximately 3-4 weeks from infection). Parental cell lines were grown in DMEM with serum but no Hygromycin B. Two days before injection, the amplicon-transfected cells were transferred to medium without any Hygromycin B.
1 × 10⁶ cells each of the parental and amplicon-containing lines were collected in 100 µl of serum-free DMEM and injected subcutaneously into the right flank of 8-10 week old female CD1 nude mice to form xenografts. Six mice per group were used. All experiments were performed following local ethical approval and in accordance with the Home Office Animals (Scientific Procedures) Act 1986.
Tumour volume measurement with mechanical callipers
Tumours were measured with mechanical callipers three times per week once the tumour became palpable (approximately 7-10 days following injection). Tumour volume was calculated as follows, unless otherwise stated [17]:

Tumour volume = 1/2 × (greatest longitudinal diameter) × (greatest transverse diameter)²

After 40 days a final calliper measurement was taken, and the xenografts were excised and weighed. If tumours exceeded the maximum permitted size of 17 mm diameter, the mice were sacrificed earlier. Mechanical calliper measurements were then taken in three dimensions ex vivo and the tumour volume was calculated, unless otherwise stated.

Anatomical imaging and tumour volume measurement using HF-US

Once per week, mice were anaesthetised using 3% (v/v) isoflurane and xenografts were imaged using a Vevo 770 high-frequency ultrasound machine (FUJIFILM VisualSonics, Inc, Toronto, Canada) equipped with a 40 MHz transducer. The focal depth of the transducer was placed at the mid-point of the centre of the tumour whilst scanning. A 3D scan of the tumour was then performed using the minimum step size possible for the length of the tumour, and regions of interest were drawn around the xenograft at approximately every 5 frames by an operator with extensive experience of HF-US and analysis [4]. A tumour volume was then calculated using the Vevo 770 version 3 software by creating a 3D reconstruction of the xenografts.
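As a small illustration of the calliper-based volume formulas referenced in this section, the sketch below implements the stated in vivo formula (0.5 × L × W²), the alternative π/6 × (L × W)^(3/2) formula mentioned with Table 1, and, as an assumption only, a three-dimensional ellipsoid form for the ex vivo measurement, since the exact ex vivo formula is not recoverable from this extract; the example dimensions are hypothetical.

```python
import math

def volume_half_lw2(length_mm, width_mm):
    """In vivo calliper volume as stated in the methods: 0.5 * L * W^2."""
    return 0.5 * length_mm * width_mm ** 2

def volume_pi6_lw32(length_mm, width_mm):
    """Alternative in vivo formula discussed with Table 1: pi/6 * (L * W)^(3/2)."""
    return math.pi / 6.0 * (length_mm * width_mm) ** 1.5

def volume_ellipsoid(length_mm, width_mm, height_mm):
    """Assumed ex vivo three-dimension ellipsoid formula: pi/6 * L * W * H.
    (The exact ex vivo formula used in the paper is not given in this extract.)"""
    return math.pi / 6.0 * length_mm * width_mm * height_mm

# Hypothetical calliper measurements in millimetres
L, W, H = 12.0, 9.0, 8.0
print(f"0.5*L*W^2        : {volume_half_lw2(L, W):.0f} mm^3")
print(f"pi/6*(L*W)^(3/2) : {volume_pi6_lw32(L, W):.0f} mm^3")
print(f"pi/6*L*W*H       : {volume_ellipsoid(L, W, H):.0f} mm^3")
```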
Measurement of biofluorescence
Before sacrifice at day 40, xenografts were imaged in an IVIS Spectrum (PerkinElmer, Inc, Massachusetts, USA). Standard settings for GFP were used (excitation 500 nm and emission detected at 540 nm) in epi-illumination at high intensity. Binning was set at 8, the field of view was 13.1 cm, and the f-stop was 2. Regions of interest of the same size were drawn around each xenograft and the total radiant efficiency ([photons/s]/[µW/cm²]) was calculated within them using Living Image version 4.2 software (PerkinElmer, Inc, Massachusetts, USA).
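Living Image computes the total radiant efficiency inside each ROI from the calibrated image; the snippet below is only a schematic stand-in for that quantification step (not the vendor's algorithm), summing a hypothetical calibrated image over a fixed-size rectangular ROI.

```python
import numpy as np

def total_radiant_efficiency(image, roi_mask):
    """Sum a radiant-efficiency image ([photons/s]/[uW/cm^2]) over an ROI mask.

    The per-pixel calibration is assumed to have been done by the acquisition
    software; this sketch only illustrates the ROI summation.
    """
    return float(np.sum(image[roi_mask]))

# Hypothetical 64x64 calibrated image with a brighter xenograft region
rng = np.random.default_rng(0)
img = rng.normal(1e6, 1e5, size=(64, 64))
img[20:40, 25:45] += 5e7  # simulated GFP signal from the xenograft

# Fixed-size rectangular ROI, identical for every xenograft as in the methods
roi = np.zeros_like(img, dtype=bool)
roi[18:42, 23:47] = True

print(f"Total radiant efficiency in ROI: {total_radiant_efficiency(img, roi):.3e}")
```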
Histology and morphology of xenografts
Once the xenografts were excised, photographs were taken of the intact tumours. The tumours were then cut in half and fixed in 4% (w/v) paraformaldehyde in PBS overnight. After processing and embedding in wax, sections were dewaxed, rehydrated and stained with haematoxylin and eosin. Sections were assessed by an experienced histopathologist.
Statistical analysis
Analysis of the tumour volumes and vector expression obtained by these methods used Pearson correlations. Positive correlations produced a positive R² value and were considered significant if p < 0.05. Agreement between the methods was then further analysed by Bland-Altman plots, in which the central line (mean of differences, or bias) and 2 standard deviation (SD) limits of agreement were generated. The bias was considered significant if 0 was not included within these standard deviation limits. These calculations were carried out using GraphPad Prism version 5 (GraphPad Software, Inc, La Jolla, California, USA).
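The same comparisons can be sketched programmatically: Pearson R² between two paired volume series and a Bland-Altman bias with 2 SD limits of agreement. The data below are hypothetical and the use of SciPy is an assumption, since the study used GraphPad Prism.

```python
import numpy as np
from scipy import stats

# Hypothetical paired tumour volumes (mm^3) from two measurement methods
hf_us   = np.array([120, 210, 340, 420, 505, 615, 700, 820, 930, 1010], dtype=float)
ex_vivo = np.array([130, 190, 360, 400, 520, 600, 730, 800, 950, 1000], dtype=float)

# Pearson correlation and R^2, as used for the method comparisons
r, p_value = stats.pearsonr(hf_us, ex_vivo)
print(f"R^2 = {r**2:.4f}, p = {p_value:.4g}")

# Bland-Altman agreement: bias (mean difference) and 2 SD limits of agreement
diff = hf_us - ex_vivo
bias = diff.mean()
sd = diff.std(ddof=1)
low, high = bias - 2 * sd, bias + 2 * sd
print(f"bias = {bias:.1f} mm^3, limits of agreement = [{low:.1f}, {high:.1f}] mm^3")
```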
Results
Comparison of tumour growth curves generated using mechanical callipers or HF-US

HF-US was used to determine the tumour volume during the growth of xenografts derived from the parental cell line and the amplicon-infected cell line, and the results were compared to the volumes calculated from mechanical calliper measurements. The tumour volumes generated by the two methods are shown in Figure 1. The amplicon-infected xenograft tumours grew more slowly than those from the parental cells, and this was detected by both measurement methods. HF-US generated smaller calculated tumour volumes than mechanical callipers. At day 28, for example, calliper-assessed xenograft tumour volumes were calculated to be more than twice the volumes generated using HF-US imaging. This difference was even greater for the amplicon-infected xenografts, which were 3.3 times larger when measured using mechanical callipers compared to HF-US.
Comparison of tumour volume measurement methods to the volume calculated using ex vivo calliper measurements

HF-US measurements correlated more closely than mechanical callipers (denoted as in vivo callipers on the graphs) with the final ex vivo calliper measurement taken at the end of the period of xenograft growth, which is our most accurate measurement (Figure 2a and b). Thus the tumour growth curves in Figure 1 are an over-estimation if mechanical callipers are used rather than HF-US measurements. Alternative formulae for tumour volume calculation for both in vivo and ex vivo calliper measurements were examined and compared to HF-US (Table 1) [17]. As before, HF-US measurements correlated more closely with either ex vivo volume formula than with any in vivo volume formula, and no difference in correlation was found between the two ex vivo volume formulae and HF-US volumes. Using the formula π/6 × (L × W)^(3/2) for in vivo calliper volumes gave a higher correlation to both HF-US volumes and tumour mass than the other two equations.
Comparison of tumour volume measurement methods to final tumour mass
After sacrifice, the resulting xenograft tumours were excised and weighed. Using Pearson correlation coefficients and linear regression analysis, final in vivo calliper measurements had a lower correlation coefficient to tumour mass than HF-US. The tumour volumes calculated from ex vivo calliper measurements of the excised xenograft had the highest correlation coefficient to tumour mass (Figure 3 a, b and c and Table 1). Bland-Altman graphs show a smaller 95% confidence interval between HF-US volumes and the ex vivo calliper measurement compared to the confidence interval between final in vivo calliper and the ex vivo calliper measurements (Figure 4a and b). This demonstrates a smaller difference between HF-US and the ex vivo calliper measurement methods than between in vivo and ex vivo callipers.
HF-US imaging and BFI of tumour anatomy and gene therapy vector expression
In addition to HF-US, the use of BFI allowed the persistence and expression of the amplicon to be tracked in vivo. The HF-US images and photographs show that the amplicon-containing xenografts grew in distinct lobes, unlike the parental cell xenografts. These distinct lobes were visible on the HF-US images even from day 8, in comparison to the parental cell xenografts, allowing very early detection of anatomical differences between the two groups in vivo which could not be elucidated from calliper measurements alone. The detailed greyscale anatomical images obtained using HF-US showed both lighter and darker areas (derived from areas that are more or less echogenic to ultrasound) (Figure 5). The relatively lighter areas within the xenograft were not adipose tissue but corresponded to denser tumour tissue; from histology we observed that the darker areas were necrotic tissue, and when the tumours were cut open, a liquid interior core was found (Figure 6a). Amplicon infection of the cells caused the formation of syncytia (fused cells) during xenograft growth, which was not evident in the parental cell xenografts, as shown in Figure 6b. The presence of lobes seen by HF-US can also be discerned in the fluorescent image taken by the IVIS Spectrum instrument (Figure 6c).
Correlation of total radiant efficiency (fluorescence) and tumour volume measurements
The level of fluorescence was determined for the amplicon-infected xenografts using an IVIS Spectrum and the Living Image software and plotted alongside the ex vivo calliper volume (Figure 7a). These measurements show a similar pattern for the amplicon cell line in terms of fluorescence emission and calliper-derived tumour volume. In vivo calliper measurements on the final day of growth were less significantly correlated with fluorescence measurements than calliper measurements of the ex vivo xenografts (in vivo callipers: R² = 0.8882, 95% CI = 0.3568-0.9963, p = 0.0164; ex vivo callipers: R² = 0.9417, 95% CI = 0.5518-0.9938, p = 0.0050) (Figure 7b and c). HF-US volume measurements had a better correlation coefficient with fluorescence measurements than the ex vivo calliper measurements (R² = 0.8895, 95% CI = 0.5606-0.9939, p = 0.0048) (Figure 7d). However, it must be noted that these results are based on small numbers in each group, as only the amplicon-infected cells contained GFP and not the parental cells.
Discussion
Multimodal imaging in gene therapy applications is a useful tool to shed light on the behaviour of vectors during in vivo testing. In this study, the use of HF-US imaging identified anatomical differences during growth between the parental cell line and the vector-transfected cell line in a xenograft model, even from day 8 after implantation. HF-US can measure tumour volume more accurately than traditional mechanical callipers, as demonstrated in this paper and by others [2,18]. The use of different ellipsoid volume formulae to generate tumour volumes from calliper measurements made small differences in accuracy, where the highest correlation to mass was found using π/6 × (L × W)^(3/2) rather than the more commonly used 0.5 × L × W², as described previously (although based on only one paper [17]). Correlation to volume determined by water displacement would be the gold standard and would be a useful addition to this study. HF-US volume generation and mechanical calliper measurements by multiple operators would also be valuable for determining variability, as these measurements are subject to operator bias. Jensen and colleagues compared volumes determined by microCT, 18F-FDG-microPET and external callipers to an ex vivo reference volume calculated by weight and density [19]. They demonstrated that microCT was more accurate and reproducible between observers than either external callipers or 18F-FDG-microPET. They also showed that 18F-FDG-microPET was not so useful for determining tumour size, although there was some correlation (R² = 0.75). This was similar to our findings with biofluorescence imaging. As with our study, this functional tumour imaging modality is useful for metabolic imaging and should give an indication of the effect of a gene therapy vector on tumour viability. In the current study, HF-US accurately showed the slower tumour growth of the vector-transfected cell line compared to the parental cell line, as predicted from in vitro cell growth curves [16]. However, lobe formation was unexpected. We are currently investigating whether this is due to the GFP gene or other components of the vector backbone. We also demonstrated the utility of the different greyscale textures in monitoring different patterns of growth. The discrimination of areas of necrosis and high vascularity (using contrast agents) was also possible. This should allow real-time monitoring of agents that currently have little apparent effect on tumour volume but may have useful effects such as anti-angiogenesis or induction of cell senescence. HF-US would be of particular use for very small xenografts, or for orthotopic models and tumours in transgenic mice such as the Apc Min/+ mouse, where callipers cannot access the tumour. Indeed, gene therapy vectors are also used in non-cancer applications such as diabetes or organ regeneration, where callipers cannot be used to measure disease progress or regression. In these cases, HF-US would be invaluable in monitoring progress longitudinally without sacrifice of mice. In addition to HF-US images, the use of biofluorescence allowed monitoring of tumour growth patterns and correlated well with final tumour volumes (although it must be noted this was based on small numbers with a wide variation). This technique is a simple and very quick method of visualising the tumour and much less expensive than 18F-FDG-microPET, for example. Biofluorescence is also applicable to patients.
It is currently being trialled in surgery on human tumours to define tumour margins for resection [20]. The monitoring of these two cell lines grown as xenografts showed that the presence and expression of the vector was maintained within the tumour over the duration of the experiment. This information is of great value for gene therapy applications as silencing of the vector can occur, which may not be evident from growth curves or even from immunohistochemistry on ex vivo tumour sections for vector proteins. Linkage of the therapeutic gene of interest to a fluorescent marker gene via an IRES (internal ribosomal entry site) sequence or as a fusion protein would yield valuable information on the efficacy of expression during the time course of an in vivo experiment. It may also be used to reduce costs by eliminating animals in which the introduction of a vector by injection has not been successful.
Conclusions
In conclusion, we believe that multi-modal imaging provides useful and enhanced insights into the behaviour of gene therapy vectors in vivo. Adding imaging to gene therapy protocols would be straightforward, especially in the case of relatively inexpensive ultrasound and biofluorescence imaging. Multi-modal imaging can give important real-time information on the behaviour of gene therapy vectors, beyond what traditional calliper measurements and final histological examination provide.
"Engineering",
"Medicine"
] |
AdS/CFT Correspondence with a 3D Black Hole Simulator
One of the key applications of AdS/CFT correspondence is the duality it dictates between the entanglement entropy of Anti-de Sitter (AdS) black holes and lower-dimensional conformal field theories (CFTs). Here we employ a square lattice of fermions with inhomogeneous tunneling couplings that simulate the effect rotationally symmetric 3D black holes have on Dirac fields. When applied to 3D BTZ black holes we identify the parametric regime where the theoretically predicted 2D CFT faithfully describes the black hole entanglement entropy. With the help of the universal simulator we further demonstrate that a large family of 3D black holes exhibit the same ground state entanglement entropy behavior as the BTZ black hole. The simplicity of our simulator enables direct numerical investigation of a wide variety of 3D black holes and the possibility to experimentally realize it with optical lattice technology.
I. INTRODUCTION
Spacetime geometry changes dramatically across the horizon of a black hole. Classical particles or even light that fall across the horizon can never escape, purely due to the structure of spacetime. Surprisingly, quantum correlations can be built across the black hole horizon, a phenomenon that leads to Hawking radiation [1,2]. Conceptually, this mechanism is equivalent to quantum tunneling across a potential barrier [3,4]. This phenomenon is not only confined to astronomical objects but can also be met in condensed matter or synthetic quantum systems. Recently, signatures of Hawking radiation have been identified in diverse systems, such as Bose-Einstein condensates [5], quantum Hall effect [6], Weyl fermions [7], critical Floquet systems [8], magnons [9] or chiral interfaces [10].
Here we present a quantum simulator of massless Dirac fermions in the gravitational background of black hole horizons. Our simulator is in three spacetime dimensions, though our approach can be effortlessly extended to higher dimensions. It has been shown that the radiation of black holes due to fluctuating gravity in the semiclassical limit is equivalent to the radiation of scalar or fermionic particles in the black hole background [35,36]. Hence, our black hole simulator can numerically and analytically probe static and dynamic properties of semiclassical quantum gravity that might be otherwise inaccessible.
The simulator consists of a two-dimensional square lattice of fermions. By choosing the tunneling couplings of the lattice appropriately, the system can be effectively described by Dirac fermions embedded in any black hole geometry [37]. To test the validity of the simulator we employ the equivalence to the Unruh effect [38] and show that the temperature of the black hole radiation is accurately described by the Hawking temperature for a wide range of black hole profiles. Subsequently, we investigate the entanglement entropy of 3D black holes. We identify the parametric regime where the BTZ entanglement entropy numerically obtained from our 3D black hole simulator is in agreement with the theoretically predicted value of the corresponding 2D CFT that lives on the boundary of the AdS spacetime [17][18][19][39][40][41][42].
Our work holds significance at both fundamental and practical levels. From a fundamental perspective, the proposed simulator offers a valuable tool to investigate quantum correlations of black holes, establishing a "black hole laboratory" for exploring unresolved questions in gauge/gravity dualities. Our work also provides further supporting evidence for the conjecture that CFT2 also describes various non-BTZ black hole profiles near the horizon, addressing the open problem of universality [43][44][45].
In practical terms, our proposed simulator stands out as both simple and powerful, and it lends itself to various generalizations. Additionally, it is based on a free theory, in contrast to corresponding conformal field theories that often involve interactions and thermal effects. This distinction introduces complexities when calculating entanglement entropy in higher dimensions. Consequently, our approach lays the foundation for investigating spatial correlations in interacting theories using a free theory in higher dimensions. Furthermore, our quantum simulator comprises a free fermion lattice that can be realized with many quantum technologies, such as cold atoms or Josephson junctions. This presents an exciting opportunity to simulate black hole physics in a laboratory setting [46,47].
II. THE MODEL
We now construct a universal simulator of a 3D Dirac fermion in an arbitrary black hole geometry. This simulator consists of a square lattice of fermions with position-dependent tunneling couplings. For simplicity, we employ a rotationally symmetric gravitational field with line element ds² = F(r)dτ² − F(r)⁻¹dr² − r²dθ² (1), where F(r) is a function only of the radial coordinate r. Dirac fermions with mass m in the geometric background (1) satisfy (i∇̸ − m)Ψ = 0 (2), where ∇̸ = γ^a e^μ_a D_μ and D_μ is the spinor covariant derivative. The dreibeins e^μ_a are defined by g^{μν} = e^μ_a e^ν_b η^{ab}, with η^{ab} = diag(1, −1, −1). The gamma matrices γ^a satisfy the Clifford algebra {γ^a, γ^b} = η^{ab}. Due to the rotational symmetry of the space (1), the spinor Ψ can be decomposed into modes of definite angular momentum [48]. The corresponding parameter κ is a positive (non-zero) integer, corresponding to the angular momentum eigenvalues [49]. In the massless limit (m → 0) and in the low-energy regime (κ small), the region with large r is described by ∇̸_{τ,r} ψ(τ, r) ≈ 0 (3). This derivation can be generalized to higher dimensions. We now encode the 3D Dirac equation (2) with a black hole background in a simulator consisting of a square lattice of fermions. We employ a generalization of the procedure used in [35,50,51] for 2D black holes to the case of radially symmetric 3D black holes. To avoid the coordinate singularity at the black hole horizon, we perform a change of variable dt = dτ + F(r)⁻¹dr and work in the ingoing Eddington-Finkelstein coordinates, ds² = F(r)dt² − 2dtdr. We consider now the Dirac equation written in these coordinates. As the Dirac spinor in (3) is massless, it can be written as ψ(t, r) = (ϕ(t, r), −ϕ(t, r))ᵀ/√2, i.e. the two components depend on each other, so they do not need to be encoded independently in our lattice. As a result, (3) simplifies to a single-component equation (4). The representation of (4) on a square lattice is obtained by discretizing the spatial position with a lattice constant a (we fix a = 1) and approximating the spatial derivatives with central differences. This is followed by substituting ϕ by the lattice fermion operators f̂_j, where {f̂_i, f̂_j†} = δ_ij, and using the Heisenberg equation of motion i∂_t f̂_j = [f̂_j, H], see Appendix A. In the low-energy limit where ϕ is smooth, and for slowly varying functions F(r), the resulting lattice system can be described by free fermions on a two-dimensional square lattice with nearest-neighbour hopping over ⟨i,j⟩ set by the couplings F_i, where F_i is the value of F(r) at the polar distance r of vertex i of the square lattice, as given in Eq. (A6). The black hole geometry dictates that F(r) in (1) turns from positive to negative as r moves from outside to inside the black hole.
The horizon is positioned at r_h, where F(r_h) = 0. Due to the lattice nature of the simulator (A6), we can choose the couplings F_i so that they never become exactly zero anywhere on the circle with radius r_h. Nevertheless, the transition from positive to negative values of F_i faithfully encodes the black hole spacetime geometry. As we demonstrate in the following, this simulator can faithfully describe the properties of Dirac fermions near a black hole horizon, taken to be at a large radius, where (3) is valid.
Here we will begin with the BTZ black hole profile. In the presence of a negative cosmological constant Λ = −1/l², the most prominent solution to Einstein's equations is the three-dimensional locally AdS3 BTZ black hole [52,53]. The metric of the BTZ black hole with mass M is given by Eq. (1) with F_BTZ = (r² − r_h²)/l². The horizon of the BTZ black hole is at position r_h = 2l√(2GM) and its Hawking temperature is given by T_H = √(2GM)/(lπ). Next we will illustrate the numerical determination of the Hawking temperature for the BTZ black hole. Various other profiles will be considered in the last section.
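As a rough illustration of the simulator construction, the Python sketch below builds a single-particle hopping matrix on an L × L square lattice whose nearest-neighbour couplings follow the BTZ lapse F_BTZ(r) = (r² − r_h²)/l² evaluated at the polar distance of each site. The bond-averaged coupling and the overall i/2 prefactor are assumptions made for illustration; the paper's exact Hamiltonian is given in its Appendix A, Eq. (A6), which is not reproduced here.

```python
import numpy as np

def btz_lapse(r, r_h, l):
    """BTZ lapse function F(r) = (r^2 - r_h^2) / l^2."""
    return (r**2 - r_h**2) / l**2

def hopping_matrix(L, r_h, l):
    """Single-particle hopping matrix with position-dependent couplings F_i.

    The bond-averaged coupling and the i/2 prefactor are assumptions;
    the text only states that nearest-neighbour hoppings are set by F(r).
    """
    c = (L - 1) / 2.0                       # place the black hole centre mid-lattice
    idx = lambda x, y: x * L + y
    H = np.zeros((L * L, L * L), dtype=complex)
    for x in range(L):
        for y in range(L):
            Fi = btz_lapse(np.hypot(x - c, y - c), r_h, l)
            for dx, dy in [(1, 0), (0, 1)]:  # nearest neighbours (right, up)
                if x + dx < L and y + dy < L:
                    Fj = btz_lapse(np.hypot(x + dx - c, y + dy - c), r_h, l)
                    t = 0.5j * (Fi + Fj) / 2.0     # assumed bond coupling
                    H[idx(x, y), idx(x + dx, y + dy)] = t
                    H[idx(x + dx, y + dy), idx(x, y)] = np.conj(t)
    return H

H = hopping_matrix(L=21, r_h=6.0, l=3.0)
print(np.linalg.eigvalsh(H)[:5])             # lowest single-particle energies
```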
III. HAWKING TEMPERATURE
We demonstrate now that our simulator can faithfully reproduce the theoretically predicted Hawking temperature of black holes, see Appendix B. To determine the Hawking temperature from our black hole simulator we use the equivalence to the Unruh effect [54]. Black hole metrics are approximately equal to the Rindler metric close to the horizon, which has a linear profile F_R(r) = η(r − r_h). A stationary observer close to the black hole horizon can be equivalently described by a locally accelerating frame of reference moving through flat Minkowski spacetime. Therefore, they will experience the Unruh effect with a temperature given by the Hawking temperature, T_H. To simulate this effect we first encode the Hamiltonian H_M that describes Dirac fermions in the local Minkowski frame, with many-body ground state |0_M⟩. We achieve that by taking a flat profile F_i = F in our simulator of Eq. (A6). Then we simulate the local Rindler Hamiltonian, which after diagonalization is given by H_R = Σ_p E_p c_p† c_p, with eigenmodes {c_p}. Finally, the Rindler observer measures the mode occupation ⟨0_M| c_p† c_p |0_M⟩ = f_FD(E_p, T_H), where f_FD(E_p, T_H) = (e^{E_p/T_H} + 1)⁻¹ is the Fermi-Dirac distribution at the Hawking temperature T_H and the E_p are the single-particle energies of the Rindler Hamiltonian H_R, see Appendix C. In the simulation, we take modes p close to the ground state, where the continuum limit holds, and determine the Fermi-Dirac distribution, as shown in Fig. 1(a), from which we extract T_H. Repeating this process for various BTZ profiles, we find that the simulator reproduces the theoretically predicted Hawking temperatures with remarkable accuracy, with an error of 0.46%, as shown in Fig. 1(b). We find that the accuracy increases with lattice size.
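The temperature extraction described above reduces to fitting a Fermi-Dirac distribution to the measured occupations of the low-energy Rindler modes. A minimal sketch, with synthetic occupations standing in for the simulator output:

```python
import numpy as np
from scipy.optimize import curve_fit

def fermi_dirac(E, T):
    """Fermi-Dirac occupation at temperature T (k_B = 1)."""
    return 1.0 / (np.exp(E / T) + 1.0)

# Synthetic low-energy mode occupations; in the simulator these would be the
# measured <0_M| c_p^dagger c_p |0_M> for the Rindler eigenmodes.
E = np.linspace(0.01, 0.5, 25)
T_true = 0.08
n = fermi_dirac(E, T_true) + 0.005 * np.random.default_rng(0).normal(size=E.size)

(T_fit,), _ = curve_fit(fermi_dirac, E, n, p0=[0.1])
print(f"fitted Hawking temperature: {T_fit:.4f} (true {T_true})")
```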
The source of the resulting thermality is the fact that the Minkowski ground state exists on both sides of the horizon, whilst the local Rindler modes only have support outside of the horizon. Hence, projecting |0_M⟩ onto {c_p} effectively traces out the region inside the black hole, resulting in a thermal state. In the following, we will employ this 3D black hole simulator to investigate the entanglement entropy across the event horizon and compare it to the predicted Bekenstein-Hawking entropy. Through the AdS/CFT correspondence the Bekenstein-Hawking entropy can also be understood as the thermal entropy of the boundary CFT.
IV. ADS/CFT CORRESPONDENCE
We will first summarise how the AdS3/CFT2 correspondence can theoretically determine the entanglement entropy of a 3D BTZ black hole from the corresponding 2D boundary CFT. Then we will employ our black hole simulator to calculate the entanglement entropy across the horizon and identify the parametric regime where it agrees with the CFT prediction [21,22].
In the holographic context, the Ryu-Takayanagi formula suggests that the entanglement entropy of a region A with length ξ on the boundary CFT2 is given by the area of the minimal surface γ in the AdS3 spacetime that is attached to the two endpoints of region A [21,22], as shown in Fig. 2(a). In the presence of a black hole in AdS3 spacetime, this holographic duality yields the entanglement entropy S_CFT(β, ξ) = (c/3) ln[(β/(πϵ)) sinh(πξ/β)] (6), where ϵ denotes the UV cutoff, c is the central charge and β is the inverse temperature [21][22][23]. Note that (6) has the same expression as the entanglement entropy of a thermal 2D CFT [29]. We now specialize to the case of a BTZ black hole with a large mass, M. In this semiclassical limit, the minimal path, γ, is shown in Fig. 2(a). This path gives the bipartition of both the CFT, into A that wraps around the whole space and its trivial complement B, as well as of the black hole, where A is the outside of the black hole and B is the inside. As the CFT bipartition is trivial, including the whole boundary, the entropy, S_CFT, is purely thermal. On the other hand, the black hole entropy, S_BTZ, probes the quantum correlations of its pure ground state across the horizon. We now fix the boundary temperature to the BTZ Hawking temperature, β = T_H⁻¹, and use the Brown-Henneaux holographic formula, c = 3l/(2G) [39], that relates the bulk properties of the black hole to the central charge of the boundary. Taking the length scale of the boundary to be ξ ∼ 2πl, we find that the thermal 2D CFT entanglement entropy is given by S_CFT(T_H⁻¹, 2πl) [40,55,56]. We will now numerically determine the entanglement entropy of the BTZ black hole, S_AdS3BH,simulator, from the black hole simulator. To that end we construct the correlation matrix C with elements given by the two-point correlation functions C_ij = ⟨Φ| f_i† f_j |Φ⟩, where |Φ⟩ is the many-body ground state of the Hamiltonian (A6) and i, j run through subsystem B. Then the entanglement entropy between B and A is given by S = −Σ_k [ζ_k ln ζ_k + (1 − ζ_k) ln(1 − ζ_k)] (7), where the ζ_k are the positive eigenvalues of C [57,58]. The leading term in the resulting entanglement entropy of the black hole is expected to satisfy area-law behavior. For D = 3 the "area" law takes the form S(r_h) = k · 2πr_h (8), where 2πr_h is the perimeter of the horizon. The constant k = 1/(4G_eff) can be expressed in terms of an effective Newton constant G_eff when S(r_h) is interpreted as the Bekenstein-Hawking entropy [12,[59][60][61][62][63]. We navigate around debates concerning both the species problem and the regularization problem of entanglement entropy by subsuming both issues within the definition of G_eff [18,20]. Notably, we can manipulate G_eff by modifying the regularization, or more precisely, the lattice spacing in the model. As demonstrated in Appendix D, G_eff is directly proportional to the lattice spacing. In Fig. 2(b), we show that the entanglement entropy obtained from our simulator follows an area law as in Eq. (8), with an effective Newton constant G_eff ∼ 0.7 when L = 101. In Fig. 2(c), we see that the entanglement entropy of the 3D BTZ black hole simulator determined numerically from (7) and the entanglement entropy of the corresponding boundary CFT2, dictated by (6), align remarkably well in the semiclassical limit r_h ≫ l, where (A6) is valid. This agreement is shown quantitatively in Fig.
2(c), either by increasing the radius r_h for fixed curvature l or by decreasing the curvature l for fixed r_h. Note that the Hamiltonian (A6) describes massless free fermions, and thus it is critical. Indeed, for a fixed value of the radius, we find that the entanglement entropy of flat spacetime, encoded in the simulator by uniform couplings F_i = F, scales logarithmically with system size, as shown in Fig. 2(d). On the other hand, the entanglement entropy across the horizon of a black hole saturates with system size to a finite, non-zero value, as shown in Fig. 2(d). This ensures that the black hole entropy retains its "area"-law behavior, unlike the flat case, which depends on the system size.
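For a free-fermion ground state, the entropy in Eq. (7) follows directly from the eigenvalues of the correlation matrix restricted to the region of interest. A minimal sketch of that step (a uniform toy chain is used here instead of the black hole couplings):

```python
import numpy as np

def ground_state_correlations(H):
    """Correlation matrix of the many-body ground state of a quadratic Hamiltonian H,
    obtained by filling all negative-energy single-particle orbitals."""
    eps, U = np.linalg.eigh(H)
    occ = U[:, eps < 0]                # occupied single-particle orbitals
    return occ @ occ.conj().T          # projector onto the occupied subspace

def entanglement_entropy(C, region):
    """S = -sum_k [z_k ln z_k + (1 - z_k) ln(1 - z_k)] from the eigenvalues z_k of
    the correlation matrix restricted to `region`."""
    z = np.linalg.eigvalsh(C[np.ix_(region, region)]).real
    z = np.clip(z, 1e-12, 1 - 1e-12)
    return float(-np.sum(z * np.log(z) + (1 - z) * np.log(1 - z)))

# Toy example: uniform-hopping chain, bipartitioned in the middle.
N = 40
H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
C = ground_state_correlations(H)
print(entanglement_entropy(C, list(range(N // 2))))
```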
V. ENTANGLEMENT ENTROPY OF VARIOUS BLACK HOLES
We now consider several lapse functions, each corresponding to a different black hole profile, see Fig. 3(a). While they all have different Hawking temperatures, they produce the same area-law behavior in their entanglement entropy as the BTZ black hole, see Fig. 3(b). Obtaining the same entropy for different black holes is known as the problem of universality [43][44][45]. Such behavior is expected for BTZ black holes with different temperatures. Indeed, in the semiclassical limit, the entanglement entropy (6) is given by S_CFT ∝ cl/β for β ≪ l. The AdS/CFT correspondence relates r_h ∝ l²/β and, via the Brown-Henneaux formula, c is proportional to l/G. Thus the entropy is independent of any particular characteristics of the black hole, such as its Hawking temperature.
Such an argument cannot be directly generalized when non-BTZ black holes are considered. Nevertheless, our simulator (A6) can explain this universal behavior for all black hole profiles parameterised by overall constants, in the following way. Overall factors in the lapse function F(r) of the black hole geometry become an overall factor in the simulator Hamiltonian (A6). Since the two-point correlation matrix is invariant under such overall factors, the entanglement entropy stays the same for different black hole profiles even if they have different Hawking temperatures. Hence, any black hole profile whose nonlinear terms around the horizon are negligible compared to the lattice spacing, e.g. the ones considered in Fig. 3(a), can be described by the same thermal CFT as the BTZ black hole.
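The overall-factor argument can be checked numerically in a few lines: rescaling a quadratic Hamiltonian by a positive constant rescales its spectrum but leaves the occupied subspace, and hence the ground-state correlation matrix and the entanglement entropy, unchanged. A toy check on a 1D chain, using the same free-fermion entropy formula sketched above:

```python
import numpy as np

def entropy_from_H(H, region):
    """Ground-state entanglement entropy of `region` for a quadratic Hamiltonian H."""
    eps, U = np.linalg.eigh(H)
    occ = U[:, eps < 0]
    C = occ @ occ.conj().T                       # ground-state correlation matrix
    z = np.clip(np.linalg.eigvalsh(C[np.ix_(region, region)]).real, 1e-12, 1 - 1e-12)
    return float(-np.sum(z * np.log(z) + (1 - z) * np.log(1 - z)))

N = 30
H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # toy hopping chain
region = list(range(N // 2))
# Multiplying H by any positive constant leaves the entropy unchanged.
print(entropy_from_H(H, region), entropy_from_H(3.7 * H, region))
```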
VI. CONCLUSIONS
We have shown that our simulator is able to probe the quantum correlation properties of black holes. We observed that a whole set of 3D black holes have the same entanglement entropy as the one predicted by the CFT2 dual to the BTZ black hole. Our results are in line with the interpretation of the Bekenstein-Hawking entropy as topological entanglement entropy [64]. Indeed, (6) indicates that the universal term comes from the additive part, S_top = (c/3) ln[sinh(πξ/β)], which does not involve the UV cutoff. This topological term describes the thermal entropy of the black hole in the semiclassical limit.
Our universal black hole simulator is given in terms of free fermions, which are analytically tractable, making it amenable to theoretical investigations, while it can be readily realized in the laboratory [46,47]. Moreover, it can be directly applied also to higher dimensions, thus offering a simple and versatile medium to probe more complex questions, such as investigating the effect of black hole geometry on interacting fermions.

The most celebrated quantum property of black holes is that quantum fluctuations escape their gravitational attraction. These fluctuations are witnessed outside the black hole as thermal radiation with temperature T_H that depends on the geometrical characteristics of the black hole, as Hawking famously predicted in 1974 [2]. We now demonstrate that the fermionic lattice (A6) accurately describes 3D Dirac fermions in a black hole geometry by determining the temperature of the escaped radiation.
Consider for concreteness the BTZ black hole with profile F = (r² − r_h²)/l², where l is related to the cosmological constant. The Hawking temperature is given by T_H = √(2GM)/(lπ), where G is the Newton constant and M is the mass of the black hole, related to the event horizon by MG = r_h²/(8l²). Thus for given r_h and l we can obtain the mass of the black hole. To investigate the Hawking radiation with our lattice model, we initially prepare a wave packet |ψ(0)⟩ inside the black hole and monitor its quenched evolution as it escapes through the horizon. In particular, we initialize a single-particle state |ψ(0)⟩ = Σ_{n} λ_{n} c†_{n} |0⟩ in an equal superposition on the sites {n} in the inner region of the black hole horizon. Subsequently, we let the system evolve in time and we measure the probability density of the particle that is emitted outside the black hole across the horizon at a given time t. Most of the population remains trapped inside the black hole [65] until eventually some escapes, via quantum tunneling [66], through the horizon and moves to infinity.
The component of the wave packet outside the black hole corresponds to Hawking radiation, characterised by the populations P(E) = |⟨E|ψ(t)⟩|² of the modes |E⟩ with energy E that are the eigenstates of the Hamiltonian in the outer region. It is then expected that the Hawking radiation takes the thermal form P(E) ∝ e^{−E/T_H}, where T_H denotes the Hawking temperature. We numerically evolve the wave packet |ψ(t)⟩ and calculate the corresponding Hawking temperature from the slope of a semi-log plot, as shown in Fig. 4(a). We find that the numerical Hawking temperature averaged over early times has an error of 3%. At last, in Fig. 4(b) we consider different l values over a range of horizons and find good agreement between the numerical and theoretical values of the Hawking temperature.
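A sketch of the wave-packet protocol, using the same assumed bond-averaged coupling as the lattice sketch above (all parameters are illustrative, and the extracted temperature should not be read as a result of the paper): prepare an equal superposition just inside the horizon, evolve it, project onto the eigenmodes of the outer region, and read a temperature off the slope of ln P(E).

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import linregress

# --- assumed single-particle Hamiltonian (same bond-averaged form as the earlier sketch) ---
L, r_h, l = 31, 8.0, 4.0
c = (L - 1) / 2.0
F = lambda r: (r**2 - r_h**2) / l**2
idx = lambda x, y: x * L + y
H = np.zeros((L * L, L * L), dtype=complex)
for x in range(L):
    for y in range(L):
        for dx, dy in [(1, 0), (0, 1)]:
            if x + dx < L and y + dy < L:
                t = 0.5j * (F(np.hypot(x - c, y - c)) + F(np.hypot(x + dx - c, y + dy - c))) / 2
                H[idx(x, y), idx(x + dx, y + dy)] = t
                H[idx(x + dx, y + dy), idx(x, y)] = np.conj(t)

# --- wave packet initialized just inside the horizon ---
inside = [idx(int(c + r_h) - 1, int(c)), idx(int(c), int(c + r_h) - 1)]
psi = np.zeros(L * L, dtype=complex)
psi[inside] = 1 / np.sqrt(len(inside))

psi_t = expm(-1j * H * 8.0) @ psi            # quenched evolution up to t = 8

# --- eigenmodes of the outer region and thermal fit ln P(E) ~ -E / T_H ---
outside = [idx(x, y) for x in range(L) for y in range(L) if np.hypot(x - c, y - c) > r_h]
eps_out, U_out = np.linalg.eigh(H[np.ix_(outside, outside)])
P = np.abs(U_out.conj().T @ psi_t[outside]) ** 2

pos = (eps_out > 0) & (P > 1e-12)            # fit the positive-energy tail only
slope = linregress(eps_out[pos], np.log(P[pos])).slope
print(f"estimated T_H ~ {-1 / slope:.3f}")
```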
Appendix C: Unruh temperature of 3D BTZ black hole

In (2 + 1)D, the Schwarzschild-like metric is given by ds² = f(r)dt² − f(r)⁻¹dr² − r²dθ², where f(r) is some function such that f(r_h) = 0 and changes sign as we move across r_h, with r_h the location of the event horizon. Close to the horizon, to first order we have f(r) ≈ f′(r_h)(r − r_h) ≡ k(r − r_h). Therefore, the metric close to the horizon is given by ds² ≈ k(r − r_h)dt² − dr²/[k(r − r_h)] − r²dθ². Let us now define the new coordinate R² = k(r − r_h). Using this coordinate, the metric is ds² = R²dt² − (4/k²)dR² − r²dθ². This looks very close to the Rindler metric describing a uniformly accelerating observer moving through a Minkowski spacetime. Let us make the final coordinate transformation ρ = 2R/k, which gives us ds² = (k²/4)ρ²dt² − dρ² − r²dθ². If we simplify this metric further by assuming that ρ² ≪ r_h, then we arrive at the metric ds² = α²ρ²dt² − dρ² − r_h²dθ², where we have defined α = k/2 = f′(r_h)/2. We are interested in the Dirac field on this background and the Hawking radiation generated by it. In order to derive this, we note that this metric looks like the metric for the spacetime M = R² × S¹, where R² is a flat (1+1)D spacetime endowed with the Rindler metric with acceleration α, and S¹ is a circle of radius r_h. Therefore, we expect the Dirac field to exhibit the Unruh effect here, and the angular portion to play no role for large r_h. In this coordinate system the massless Dirac equation reads ∇̸ψ = 0, where ∇̸ splits into a (1 + 1)D part, ∇̸_2D, which is simply the Dirac operator with the (1 + 1)D Rindler metric substituted in, and an angular part suppressed by 1/r_h. As the system is rotationally symmetric, we take the ansatz solution ψ(t, ρ, θ) = e^{−imθ} ϕ(t, ρ), where ϕ is a two-component spinor field. This yields an equation of motion in which the angular term is suppressed for large r_h. For large r_h and small angular momentum m we arrive at ∇̸_2D ϕ(t, ρ) ≈ 0, so the non-trivial dynamics of the field is governed by the Dirac equation on a Rindler metric. Using the chiral gamma matrix representation γ⁰ = iσ_x and γ¹ = σ_y, where the σ_i are the Pauli matrices, the (unnormalized) positive-energy solutions, Eq. (C10), are built from the two-component eigenvectors u_± of σ_z, with σ_z u_± = ±u_± [54]. The negative-energy solutions are simply given by the complex conjugates. These solutions are only valid for ρ > 0, as they exist only in a single Rindler wedge. As the Unruh effect requires us to measure the ground state of the Minkowski spacetime from the perspective of the Rindler observer, we must also have possession of the Minkowski modes. The metric near the horizon can be written in the Minkowski form ds² = dT² − dX² − r_h²dθ², where the relationship between the coordinates is given by T = ρ sinh(αt) and X = ρ cosh(αt). The unnormalized positive-energy solutions on this metric (using the same gamma matrix representation), Eq. (C12), involve the same spinor u_k as defined in Eq. (C10) and a normalisation constant N. The negative-energy solutions are obtained from the complex conjugate. Note that these solutions are valid for all X, so they extend to the other side of the Rindler wedge. Let a_{p,n} and b_{p,n} be the particle and anti-particle modes of the Rindler observer associated with the solutions of Eq. (C10), and let A_{p,n} and B_{p,n} be the analogous modes for the Minkowski observer. The Minkowski observer defines their vacuum state (or ground state) as the state |0_M⟩ such that A_{p,n}|0_M⟩ = B_{p,n}|0_M⟩ = 0 for all p and n. On the other hand, this state will not be the vacuum for the Rindler modes, which is the source of the Unruh effect. Noting that our quantum field can be expressed with respect to either the Rindler modes of Eq. (C10) or the Minkowski modes of Eq.
(C12), this induces a Bogoliubov transformation of their corresponding mode operators, allowing us to relate the Rindler and Minkowski mode operators linearly through the standard inner product for spinors on the spatial hypersurface induced by the metric of Eq. (C6). Note that in order to perform this inner product between Minkowski and Rindler modes, one must express both in the same coordinate system. Using the calculations of Ref. [54], the mode occupation of the Rindler modes in the Minkowski vacuum is given by ⟨0_M| a†_{p,n} a_{p,n} |0_M⟩ = (e^{E_p/T} + 1)⁻¹, where T = α/(2π) = f′(r_h)/(4π). The previous calculation is exact in the Rindler frame; however, note that the Rindler frame exists only close to the horizon. The Dirac modes of the black hole frame will extend far from the horizon; however, we note that these modes reproduce the Hawking/Unruh effect well. In order to simulate this numerically on the lattice, we require two ingredients: the Minkowski vacuum |0_M⟩ and the modes a_{p,n} which diagonalize the Hamiltonian in the Schwarzschild frame. The vacuum |0_M⟩ is obtained easily as the many-body ground state of a homogeneous 2D lattice Hamiltonian. We then generate the Hamiltonian of the Schwarzschild frame and diagonalize it numerically to find its modes a_{p,n}. Then, one can calculate ⟨0_M| a†_p a_p |0_M⟩ with possession of the correlation matrix of the model. This effect will only work for low energies, as we are approximating a continuum effect with the lattice.
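The two numerical ingredients listed above can be combined in a few lines: build the Minkowski ground-state correlation matrix, diagonalize the Rindler/Schwarzschild-frame Hamiltonian restricted to the outside, and sandwich the correlation matrix between its eigenmodes. The 1D toy Hamiltonians and the linear Rindler-like profile below are illustrative stand-ins for the 2D lattice of the paper:

```python
import numpy as np

def correlation_matrix(H):
    """Ground-state correlation matrix (projector onto occupied orbitals)."""
    eps, U = np.linalg.eigh(H)
    occ = U[:, eps < 0]
    return occ @ occ.conj().T

N, r_h = 200, 0
x = np.arange(N) - N // 2

def hopping(profile):
    """1D toy Hamiltonian with bond-averaged, position-dependent couplings."""
    t = 0.5j * (profile[:-1] + profile[1:]) / 2
    H = np.zeros((N, N), dtype=complex)
    H[np.arange(N - 1), np.arange(1, N)] = t
    H[np.arange(1, N), np.arange(N - 1)] = np.conj(t)
    return H

C_minkowski = correlation_matrix(hopping(np.ones(N)))       # flat profile -> |0_M>

outside = np.where(x > r_h)[0]                               # support outside the horizon
H_rindler = hopping(0.05 * np.maximum(x, 0.0))[np.ix_(outside, outside)]
E_p, V = np.linalg.eigh(H_rindler)                           # Rindler-frame eigenmodes

# Occupation of each Rindler mode in the Minkowski vacuum, <0_M| a_p^dagger a_p |0_M>:
n_p = np.real(np.diag(V.conj().T @ C_minkowski[np.ix_(outside, outside)] @ V))
print(n_p[np.argsort(np.abs(E_p))][:5])                      # low-energy modes ~ Fermi-Dirac
```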
Note that the Minkowski Hamiltonian exists throughout the lattice, whereas the Schwarzschild Hamiltonian only has support outside of the horizon. This fact is the source of the thermality: probing the Minkowski modes with modes that exist only outside the horizon effectively performs the partial trace over the interior, tr_in(|0_M⟩⟨0_M|) = e^{−βH_ent}, where H_ent is the entanglement Hamiltonian. The fact that the modes in the Schwarzschild frame produce a thermal spectrum implies the interesting observation that the entanglement Hamiltonian must be approximately equal to the Schwarzschild-frame Hamiltonian, which was discussed in Ref. [54].

Appendix D: Entanglement entropy lattice regularisation

As we are probing the quantum properties of the Dirac field, the lattice regularisation influences the resulting entropy, S(r_h). To that end, we consider the system to be of linear size L and discretize space with lattice spacing a = L/N, where N is the number of lattice points within L. If we fix r_h and L and increase N, then we obtain that G_eff ∝ a, i.e. it goes to zero as N increases. If we fix N and L, i.e. fix the lattice spacing, then we obtain a fixed value for the gravitational constant G_eff. Subsequently, we change the radius r_h to recover the area-law dependence of the entanglement entropy, S_A ∝ r_h. In the main text, we choose a = 1 and N = 101, which results in G_eff ≈ 0.7. In Fig. 5, we consider other lattice spacing values and show that S diverges and G_eff decreases with decreasing lattice spacing.
FIG. 1. Hawking temperature, T_H, determined from the 3D black hole simulator for a BTZ black hole with profile F_BTZ(r) = (r² − r_h²)/l² (orange squares). (a) Data points are numerically measured for l = 5 in a BTZ black hole profile. The solid line indicates the corresponding fit to a Fermi-Dirac distribution, f_FD(E, T_H), from which T_H is extracted. (b) The measured temperature T_H for a large range of the parameter l is in agreement with the theoretical Hawking temperature within 0.46% error. Here we used system size L = 81 and horizon radius r_h = 20.
FIG. 2. (a) The holographic AdS3/CFT2 duality is illustrated for a black hole with a path that bipartitions the boundary into A, which covers the whole CFT, and its trivial complement B, and wraps around the black hole horizon, separating it into regions A and B. (b) Red circles indicate the entanglement entropy of a flat spacetime, whereas the rest of the colours correspond to BTZ black holes with different curvatures l. While T_H changes as l varies, all of them have the same entanglement entropy across the horizon. The slope gives an effective Newton constant for the BTZ black hole of G_eff = 0.7. (c) The CFT entropy for various curvatures l (corresponding to different central charges) and the BTZ entanglement entropy. The holographic correspondence holds accurately in the large temperature limit r_h ≫ l. (d) The entanglement entropy of flat spacetime (red points) and the BTZ black hole (black diamonds) as a function of system size, L, with fixed radius r_h = 20. The flat spacetime entanglement (value shifted by −44) scales with ln L (red solid line), which indicates a violation of the area law. The black hole entanglement entropy saturates to a finite value (black line). The linear size in (b), (c) and (d) is L = 101.
FIG. 3. (a) Different lapse functions with the horizon at r_h = 20 indicated by the vertical black line. (b) All entanglement entropies are perfectly aligned regardless of their Hawking temperature (α = l = 5, r_h = 25). The average slope of the different lapse functions gives G_eff ∼ 0.7 with a standard deviation of 0.008 for L = 101. The black solid line shows the average slope.
FIG. 4. Hawking temperature of the 3D BTZ black hole for L = 61 and error analysis. (a) Hawking temperature as a function of time. The inset shows a semi-log plot of the energy modes located outside of the horizon; the slope gives the Hawking temperature. The margin of error in the inset is given in the bracket. (b) Hawking temperature calculated for different horizon radii r_h and cosmological constants l⁻². Only the weighted mean over t = 0.2 to t = 20 is shown.

(a) A single Dirac particle is initialized in a superposition of four points at t = 0 (blue crosses) just behind the black hole horizon with radius r_h = 10 on a lattice with linear size L = 61, η = 10 and α = 10⁻³. The dispersion of the particle density is depicted at t = 1. (b) The particle population that escaped the black hole appears as Hawking radiation at t = 8. (c) Hawking radiation has a thermal distribution. The slope on the semi-log plot yields a Hawking temperature T_H which is in good agreement with the theoretically predicted value, for r_h = 15, η = 10 and α = 10⁻³. (d) Time-averaged Hawking temperatures over t = {0.2, 8.0} are depicted for a range of parameters η. The error bars indicate the standard deviation around the mean. Good agreement with the expected Hawking temperature T_H = η/(4π) is obtained, apart from large values of η, due to the finite lattice spacing, and small η, due to finite size effects.
"Physics"
] |
Bacterial Community Succession in Pine-Wood Decomposition
Though bacteria and fungi are common inhabitants of decaying wood, little is known about the relationship between bacterial and fungal community dynamics during natural wood decay. Based on previous studies involving inoculated wood blocks, strong fungal selection on bacteria abundance and community composition was expected to occur during natural wood decay. Here, we focused on bacterial and fungal community compositions in pine wood samples collected from dead trees in different stages of decomposition. We showed that bacterial communities undergo less drastic changes than fungal communities during wood decay. Furthermore, we found that bacterial community assembly was a stochastic process at initial stage of wood decay and became more deterministic in later stages, likely due to environmental factors. Moreover, composition of bacterial communities did not respond to the changes in the major fungal species present in the wood but rather to the stage of decay reflected by the wood density. We concluded that the shifts in the bacterial communities were a result of the changes in wood properties during decomposition and largely independent of the composition of the wood-decaying fungal communities.
INTRODUCTION
Degradation of plant remains is important in carbon and nutrient cycling in soil (Chambers et al., 2000; Brown, 2002; Weedon et al., 2009; van Geffen et al., 2010). Fungi are the major decomposers of wood debris, which is an important fraction of the organic matter in forest ecosystems (Owens et al., 1994). Therefore, a lot of attention has been given to community dynamics and succession of fungi during wood decay (Boddy and Watkinson, 1995; Boddy, 2001; Rajala et al., 2011; Fackler and Schwanninger, 2012). However, fungi are not the only microbial inhabitants of decaying wood. Several studies have addressed the occurrence of bacteria in decaying wood and indicated that interactions between fungi and bacteria may be important for the decay processes (de Boer and van der Wal, 2008; Valášková et al., 2009).
Wood-decay fungi can be classified according to the type of decay that they cause, namely white- or brown-rot. White-rot fungi are able to degrade lignin in order to gain access to other polysaccharides within woody material, whereas brown-rot fungi are specialized in the degradation of cellulose and hemicellulose without previous lignin removal (Owens et al., 1994). The degradation of wood polymers by fungi is a complex process involving enzymes, mediators, and acidic conditions. Reactive oxygen species that are generated by fungal peroxidases and phenol oxidases have an important role in the degradation of lignin and cellulose. Altogether, these processes create a harsh environment for bacterial colonization.
The drastic drop of pH upon wood block colonization by the white-rot fungus Hypholoma fasciculare had a deleterious and selective effect on wood-inhabiting bacteria (de Boer et al., 2010). In contrast, bacterial abundance was high in natural wood samples under decay by H. fasciculare despite high acidity and high activity of enzymes producing reactive oxygen species (Valášková et al., 2009). Therefore, those bacteria must have been adapted to acidic conditions and oxidative stress (de Boer and van der Wal, 2008). Moreover, in the stages characterized by declining fungal activity (late stage of decomposition), bacteria may become more abundant by being specialized in the degradation of derivatives of lignin decomposition, mainly aromatic compounds.
Whereas bacteria may have a negative effect on the fungal community during wood degradation by competing for sugars released by the fungal extracellular enzymes, synergistic effects may also occur (de Boer and van der Wal, 2008). For example, bacteria can provide fungi with limiting nutrients such as iron and nitrogen via nitrogen fixation (Brunner and Kimmins, 2003; de Boer and van der Wal, 2008; Hoppe et al., 2014), or growth factors like vitamins, in exchange for part of the easily accessible and degradable carbon sources released by fungal enzymes. It has been shown in some cases that bacterial-fungal consortia are able to degrade wood blocks more effectively than fungi alone (Murray and Woodward, 2003). However, the bacterial and fungal community assembly rules during wood decay are not yet well-studied.
Theories on microbial community assembly have been proposed, including the "neutral theory" and the "niche theory" (Dumbrell et al., 2010). The neutral theory predicts that microbial community assembly is a stochastic random process, as many species are functionally equivalent in their ability to exploit niches. Thus, their abundances will follow a zero-sum multinomial (ZSM) distribution (Hubbell, 2001; McGill et al., 2006). The niche theory predicts that the microbial community is shaped by abiotic and biotic factors, suggesting that species have unique properties to exploit unique, available niches. Their species abundances will follow pre-emption, broken stick, log-normal, and Zipf-Mandelbrot models (Macarthur, 1957; McGill et al., 2007). Furthermore, these theories suggest that a microbial community driven by environmental parameters would present a deterministic process of assembly. Here, complex interactions, both positive and negative, between bacterial and fungal communities in decaying wood were expected. To gain insight into the importance of bacteria for fungal wood decay, we focused our study on bacterial community composition in relation to changes in fungal community composition during successive stages of natural decay of pine wood. We hypothesized that both the stage of wood decay and changes in the fungal community composition during progressive fungal succession would have an impact on the bacterial community structure. Since it is expected that one or a few fungi are dominant in a decaying unit (van der Wal et al., 2015), we expected a strong selection of bacteria mediated by the dominant fungal species (Folman et al., 2008).
In the current study, we determined the composition and assembly of both bacterial and fungal communities during the stages of natural decay of pine wood. Our primary aim was to determine if dominant wood-decaying fungi drive the dynamics and composition of decaying wood-inhabiting bacterial communities.
MATERIALS AND METHODS
Site Description, Wood Sampling, and Wood Characteristics

Wood samples were collected in autumn 2013 in a mixed forest located near Wolfheze village, the Netherlands (51°59′39″N; 5°47′39″E). Twenty samples of pine wood (Pinus sylvestris) were collected from fallen or standing dead tree trunks with different wood densities. Wood density was furthermore used as a proxy of wood decay stage. For all samples, the bark was removed and slices of wood were surface sterilized under UV light for 30 min. Sawdust was produced by drilling with a sterile drill. Highly decayed wood samples, which could not be drilled, were fragmented using sterile forceps and scalpel. Gravimetric water content was determined after drying for 4 days at 60 °C. Dried wood was ground with liquid nitrogen in a mortar, and carbon and nitrogen content were measured on a Flash EA1112 CN analyzer (Interscience, Breda, the Netherlands). Water extracts of sawdust were prepared by shaking 0.3 g of fresh sawdust with 6 ml milli-Q water at 300 rpm for 1 h. Water extracts were used to determine pH. Ergosterol content was determined as an indication of fungal biomass with alkaline extraction and HPLC analysis (Bååth, 2001). The wood density was established using the water-displacement method (Olesen, 1971), allowing measurement of irregularly shaped samples.
DNA Extraction and PCR Amplification
Wood samples were taken using a sterile wood drill. Four wood dust samples were collected per wood block for separate DNA isolation per replicate. Wood dust was ground in liquid nitrogen. Total genomic DNA was extracted from 150 mg wet weight wood dust using MoBio PowerSoil TM DNA Isolation Kit (MoBio Laboratories, Inc.) according to the manufacturer's instructions.
The bacterial 16S rRNA V4 gene region was amplified using forward primer 515f [5′-CCATCTCATCCCTGCGTGTCTCCGACTCAG (MID, 10 bases) GTGTGCCAGCMGCCGCGGTAA-3′], constituted of the Roche 454 adaptor, a 10-bp Roche MID barcode and the bacterial primer 515f, and reverse primer 806r (5′-CCTATCCCCTGTGTGCCTTGGCAGTCTCAGGGACTACVSGGGTATCTAAT-3′), containing the 454 Life Sciences primer B and the bacterial primer 806r. The internal transcribed spacer region (ITS2) of the fungal ribosomal operon was amplified using primer ITS9f [5′-CCATCTCATCCCTGCGTGTCTCCGACTCAG (MID, 8 bases) GAACGCAGCRAAIIGYGA-3′] and ITS4r (5′-CCTATCCCCTGTGTGCCTTGGCAGTCTCAGCTTCCTCCGCTTATTGATATGC-3′). PCR reactions contained in 25 µl: 1 µl template DNA, 5 µM of each barcoded forward and reverse primer, 2 mM dNTPs, 1 U FastStart Expand Taq DNA polymerase (Roche), and 2.5 µl 10X PCR buffer. The PCR reactions were performed under the following conditions: initial denaturation for 5 min at 95 °C, followed by 30 cycles of denaturation for 30 s at 95 °C, annealing at 53 °C for bacterial 16S rRNA and 58 °C for the ITS region for 30 s, and extension at 72 °C for 1 min. The final extension was extended to 10 min at 72 °C. PCR products were purified using the QIAquick PCR Purification kit (Qiagen). One library for 16S rRNA and one for the ITS region were generated by pooling 80 purified PCR products in equal quantities. Both 16S rRNA and ITS samples were subjected to pyrosequencing on a 454 Life Sciences Genome Sequencer FLX (Roche) machine by Macrogen, Inc. (Seoul, South Korea).
Pyrosequencing Data Processing
The 16S rRNA data analysis was conducted in Mothur following the standard procedure (Schloss et al., 2009). Briefly, sequences were filtered by removing reads that did not have a perfect match to the degenerate primers or barcodes, as well as reads that had ambiguous bases or more than six homopolymers. Denoising was done using PyroNoise. Sequences were aligned against the Silva reference alignment (Quast et al., 2013). Potential chimeras were identified using the UCHIME program (Edgar et al., 2011) and removed. Filtered sequences were binned into operational taxonomic units (OTUs) based on a 97% dissimilarity cutoff from the distance matrix. One representative sequence of each OTU was assigned through hierarchical taxonomic annotation using the RDP classifier (Wang et al., 2007). For OTU-based analysis, samples were rarefied to 1,080 sequences per sample.
The ITS region sequences were filtered by removing reads that did not have a perfect match to the primers or barcodes, or that had ambiguous bases or more than eight homopolymers. Denoising was accomplished using PyroNoise. The minimal accepted sequence length was set to 200 bp. Sequences were assigned to taxonomy using the UNITE v.6.0 database (Kõljalg et al., 2013). The assignment was based on the k-Nearest Neighbor algorithm using a cutoff of 80%. ITS samples were rarefied to 2,000 sequences per sample. The sequencing data are available under accession number PRJEB10643 at the European Nucleotide Archive (ENA).
Statistical Analysis
Correlations between bacterial and fungal community richness and diversity in relation to wood density (a proxy of wood decay stage) were calculated using linear regression models. The rarefied OTU table (bacterial community composition) and wood characteristics (pH, moisture, density, nitrogen, carbon, C/N, and ergosterol) were used for multivariate regression tree (MRT) analysis using the 'mvpart' (multivariate partitioning) package (De'ath, 2007) in R (statistical programming environment); the distance matrix was based on Bray-Curtis dissimilarity, built with the function "gdist".
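The Bray-Curtis distance matrix underlying the MRT analysis can be reproduced with standard tools; the sketch below uses SciPy rather than the R function "gdist" used in the study, on a toy OTU table.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Toy rarefied OTU table: rows are samples, columns are OTUs (illustrative counts only).
otu = np.array([[10, 0, 5, 3],
                [ 8, 2, 4, 4],
                [ 0, 9, 1, 7]], dtype=float)

# Bray-Curtis dissimilarity: sum |x_i - y_i| / sum (x_i + y_i) for each pair of samples.
bc = squareform(pdist(otu, metric="braycurtis"))
print(np.round(bc, 3))
```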
Significant differences in the relative abundance (average value of three replicates) of specific bacterial genera (Supplementary Table S2) and orders (Supplementary Table S3) between categories (early versus middle, middle versus late and early versus late) were determined using Metastats (White et al., 2009). Permutations (1,000 bootstraps) were applied for estimating the null distribution of the t statistic. For the comparison at the genus taxonomic level, false discovery rate (FDR) control for correcting for multiple comparisons was performed in Metastats. Differences between decay stages were analyzed by one-way ANOVA. Post hoc Spjøtvoll-Stoline tests (HSD Tukey for unequal replication numbers) were performed to determine significant differences.
For network analysis, OTUs were classified at the genus level (both bacterial and fungal). Only genera with five or more representatives across samples were included in the analysis. The matrices of all pairwise Pearson correlations between bacterial and fungal genera in the different categories (early, middle, and late) and between all samples (average value of three replicates) across the three wood sample categories were calculated using R. FDR correction for multiple comparisons was performed according to Hochberg and Benjamini's method. Only genera with a correlation estimate <−0.80 or >0.80 and a P-value < 0.05 were included in the network analysis. Microbial networks were visualized using Cytoscape (Smoot et al., 2011).
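The correlation-network filtering described here (pairwise Pearson correlations, Benjamini-Hochberg FDR correction, and an |r| > 0.80 cutoff) can be expressed compactly. The sketch below uses Python instead of the R workflow of the study, with a random toy abundance table; with such data the edge list will usually be empty, which is expected.

```python
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

def benjamini_hochberg(pvals, alpha=0.05):
    """Boolean mask of p-values rejected at FDR level alpha (Benjamini-Hochberg)."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, p.size + 1) / p.size
    below = p[order] <= thresh
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(p.size, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(1)
genera = [f"genus_{i}" for i in range(12)]
abund = rng.poisson(5, size=(8, len(genera))).astype(float)   # samples x genera (toy data)

pairs, r_vals, p_vals = [], [], []
for i, j in combinations(range(len(genera)), 2):
    r, p = pearsonr(abund[:, i], abund[:, j])
    pairs.append((genera[i], genera[j])); r_vals.append(r); p_vals.append(p)

# Keep only FDR-significant pairs with |r| > 0.80 as network edges.
reject = benjamini_hochberg(p_vals, alpha=0.05)
edges = [(a, b, r) for (a, b), r, keep in zip(pairs, r_vals, reject) if keep and abs(r) > 0.80]
print(edges)
```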
To test whether neutral or niche-based mechanisms best explained the assembly of the microbial community, we examined the species rank abundance distribution for each wood sample category defined by the MRT analysis. Niche-based theory assumes that the rank abundance distribution will fit the pre-emption, broken stick, log-normal and Zipf-Mandelbrot models (Motomura, 1932; McGill et al., 2007). On the other hand, the neutral theory predicts that the rank abundance distribution will be consistent with the ZSM model (Hubbell, 2001). The species rank abundances of each sample were fitted to the broken stick, pre-emption, log-normal, Zipf, and Zipf-Mandelbrot rank abundance models using the command "radfit" from the R package vegan (Oksanen et al., 2013), and to the ZSM model using TeTame (Jabot et al., 2008). First, we generated the Akaike Information Criterion (AIC), which is a measure of the relative quality of a statistical model, providing a means for model selection. These values were calculated based on the equation AIC = −2 × log-likelihood + 2 × npar, where npar represents the number of parameters in the fitted model (Feinstein and Blackwood, 2012). In order to compare the models, we calculated the Akaike weights (W_i), which represent the weight of evidence in favor of one model being the actual best model in comparison to the other tested models (Burnham and Anderson, 2002). The Akaike weight was calculated as W_i = exp(−ΔAIC_i/2) / Σ_j exp(−ΔAIC_j/2), where ΔAIC_i is the difference between the AIC of model i and the lowest AIC among the candidate models.
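In code, the model comparison reduces to converting each model's AIC into a relative weight. A small sketch with illustrative AIC values (not the study's numbers):

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights W_i = exp(-dAIC_i/2) / sum_j exp(-dAIC_j/2), with dAIC_i = AIC_i - min(AIC)."""
    aic = np.asarray(aic, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-delta / 2.0)
    return w / w.sum()

# Illustrative AIC values for the six rank-abundance models (not the study's numbers).
models = ["broken stick", "pre-emption", "log-normal", "Zipf", "Zipf-Mandelbrot", "ZSM"]
aic = [412.3, 398.7, 390.2, 395.5, 391.0, 405.8]
for m, w in zip(models, akaike_weights(aic)):
    print(f"{m:16s} W = {w:.3f}")
```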
Diversity and Taxonomic Richness of Bacterial and Fungal Communities in Different Stages of Pine Wood Decomposition
Pine (Pinus sylvestris) wood samples were collected based on visual examination of the degree of decomposition. In total, 20 wood samples were collected. Classification of wood samples into the categories early, middle, and late wood decay stages was based on wood density decrease indicating progressive decay. These stages were reflected in shifts in bacterial community structures (for more details see below).
All wood samples were acidic, with a significant drop in pH from the early to the middle and late stages (Table 1). Wood density was negatively correlated with water content (R² = −0.65, P < 0.001). No significant correlations were observed for wood density versus C/N ratio or ergosterol content (R² = 0.18, P = 0.08 and R² = 0.13, P = 0.13, respectively). The details of the physical-chemical characteristics of each sample are presented in Supplementary Table S1.
From the wood samples, a total of 523,587 high quality bacterial and 1,304,635 high quality fungal sequences were obtained. The number of reads per sample varied between 300 (sample Pi03) and 19,553 for bacterial samples, and between 2,111 and 39,245 for fungal samples. Sample Pi03 was excluded from the analysis due to amplification bias and low yield of sequence reads for both the 16S rRNA and ITS regions. For the same reason, sample Pi02 was excluded from bacterial based analysis. Both excluded samples belonged to the early stage of decay (dead tree trunk, high wood density, low water content, no discoloration).
In order to retain a higher number of reads per sample, three out of four replicates per sample (the replicate with the lowest read number was discarded) were analyzed. To perform the analysis, sequences were rarefied to 1,080 reads per sample (in total 72,360 reads) for bacterial samples and to 2,000 reads per sample (in total 144,000 reads) for fungal samples.
Cluster analysis at a 97% cutoff identified 1,065 bacterial OTUs across samples, with 208 OTUs comprising single sequences. The number of OTUs per sample varied between 35 for sample Pi01, with the highest wood density (0.5 g/cm³), and 237 for sample Pi17, in the late stage of decomposition with a wood density of 0.19 g/cm³.
Regression analyses were performed for alpha diversity measurements of bacterial and fungal OTUs versus wood characteristics (Figure 1). For the bacterial communities, species richness, expressed as the ChaoI estimator or the Shannon index, correlated positively and significantly (P < 0.001 and P = 0.0019, respectively) with the progress of wood decay (Figure 1). In both cases the decay stage accounted for approximately half of the variability of the samples, with R² = 0.41 for the ChaoI estimator and R² = 0.67 for the Shannon diversity index versus wood density, respectively. There was no significant correlation between alpha diversity measurements and wood pH, C/N ratio or ergosterol-based fungal biomass (data not shown). For the fungal ITS region, OTUs were defined based on counts of unique sequences. Contrary to the bacterial communities, neither fungal sequence richness (ChaoI estimator) nor diversity (Shannon index) was correlated with wood density loss (Figures 1C,D, respectively).
The taxonomic affiliations of the bacterial reads are given in Figure 2A. Next to the dominant phylum Proteobacteria, especially Alpha- and Gammaproteobacteria, a high proportion of reads was classified as Acidobacteria, ranging from 11 up to 57% (average 22%). The majority of acidobacterial sequences were affiliated with subdivisions 1 and 3; however, subdivisions 2, 4, 6, 7, 14, and 16 were also identified. Fungal community compositions consisted almost entirely of Basidio- and Ascomycota, with 95-100% of sequences assigned to those two phyla. Each of the samples had its own fungal community signature. As expected, communities were mostly dominated by one fungal phylotype (one sequence type; Figure 2B).
Bacterial Community in Relation to Wood Characteristics and Fungal Communities
Multivariate regression tree analyses classified the wood samples based on wood density (a characteristic that reflects wood decay) into three clusters, namely early, middle, and late wood decay stages. Samples with wood density above 0.40 g/cm³ were classified as early stage (samples Pi01, Pi04, and Pi05). Samples with wood density between 0.40 and 0.30 g/cm³ were classified as middle stage (samples Pi06, Pi07, Pi08, Pi09, Pi10, Pi11, and Pi12), and samples with wood density below 0.30 g/cm³ as late wood decay stage (samples Pi13, Pi14, Pi15, Pi16, Pi17, Pi18, Pi19, and Pi20; Figure 2A).
The average values for each of the wood properties in the three decay stages are given in Table 1. The pH dropped significantly (P < 0.01) from the early (4.90 ± 0.12) to the middle (3.96 ± 0.08) and late stages (4.16 ± 0.14). The relative wood moisture (%) increased significantly at the late stage: 33.62 ± 1.27 in early, 33.34 ± 1.24 in middle and 138.99 ± 1.24 in late stages (P < 0.010). The C/N ratio was significantly lower (P < 0.05) at the late wood decay stage (267.30 ± 125.34) as compared to the early (1009.25 ± 267.19) and middle (743.02 ± 38.06) stages. Ergosterol content increased significantly at the middle and late stages (P < 0.03).
Table 1 footnote: Mean values and standard error of parameters for each of the wood decay stages (early n = 3; middle n = 7; late n = 8). Differences between decay stages were analyzed by one-way ANOVA. Post hoc Spjøtvoll-Stoline tests (HSD Tukey for unequal replication numbers) were performed to determine significant differences. *ANOVA and post hoc test performed after log-transformation. #Moisture [%] = (weight of water/oven dry weight of wood) × 100. Values with the same letters (a, b, c) were not significantly different (P < 0.05).

In total, 18 bacterial genera differed significantly (q-value ≤ 0.05) in their abundance between the early and middle stages of wood decay, 32 between the middle and late stages, and 28 between the early and late stages (Supplementary Table S2). To simplify the output, the analysis was also performed at the order taxonomic level (Figure 3 and Supplementary Table S3). The early stage was overrepresented by sequences classified as Gammaproteobacteria, especially those affiliated with the order Xanthomonadales (31% in early and 2% in middle and late stages, respectively) and Pseudomonadales (18% in early and 1% in middle and late stages, respectively). In the middle stage, the orders affiliated with Alphaproteobacteria, Rhodospirillales (24%), and with Betaproteobacteria, Burkholderiales (8%), were overrepresented. In order to test how the relationships between microorganisms (bacterial and fungal taxa) shape the bacterial community structure, we performed network inference analysis. Co-occurrence analyses were performed for each of the decay stages separately (Figure 4). The highest number of correlations was observed in the middle stage, followed by the late and early decay stages (100, 71, and 26 nodes, respectively). The same order was followed by the average degree of node connectivity (5.76, 4.03, and 4.00). Overall we could discern increased complexity of the relations in the middle stage of decay. Interestingly, in the early and middle stages only single bacteria-fungi co-occurrence correlations were observed. Taking into account all bacteria and fungi samples for a general co-occurrence analysis (Supplementary Figure S1), the location of the genera in the network was evaluated based on the degree of centrality.
Degree of centrality identifies key nodes (in this case, genera) in the network with the highest number of correlations to other taxa. The majority of the genera identified as key nodes were affiliated with Acidobacteria, Actinobacteria, and Proteobacteria (Supplementary Table S4). Overall, bacterial and fungal communities shared low connectivity: out of 121 bacterial genera entering the network, only 14 showed direct connections with fungal genera (Supplementary Table S5). The majority of direct bacterial-fungal connections were observed for Acidobacteria, especially those belonging to subdivision 1.
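As a rough illustration of the network step described above, the sketch below builds a co-occurrence graph from a genus abundance table and ranks genera by degree centrality. The Spearman correlation threshold, the P-value cut-off and the `abundance` table are assumptions for the example; they are not the filtering criteria actually used in this study.

```python
# Minimal co-occurrence network sketch (illustrative thresholds, not the study's settings).
import itertools
import networkx as nx
import pandas as pd
from scipy.stats import spearmanr

def cooccurrence_network(abundance: pd.DataFrame, rho_min: float = 0.6, p_max: float = 0.05) -> nx.Graph:
    """Build a graph whose edges are significant pairwise correlations between genera."""
    graph = nx.Graph()
    graph.add_nodes_from(abundance.columns)
    for a, b in itertools.combinations(abundance.columns, 2):
        rho, p = spearmanr(abundance[a], abundance[b])
        if abs(rho) >= rho_min and p <= p_max:
            graph.add_edge(a, b, weight=rho)
    return graph

# Key nodes are the genera with the highest degree of centrality:
# centrality = nx.degree_centrality(graph)
# key_nodes = sorted(centrality, key=centrality.get, reverse=True)[:10]
```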
Neutral or Niche-Based Assembly Predictions
The species abundance distributions of all samples were fitted to six theoretical models to test whether neutral or niche-based mechanisms best explained microbial community assembly patterns in the wood samples from the three stages of decomposition (Supplementary Tables S5-S11). The choice of the best model was made by comparing the AIC weights, as indicated in Figure 5. The comparison of the different rank abundance distribution models based on AIC weight values indicated that, at the early stage, the data fitted both neutral and niche-based models. This result points to stochastic mechanisms in the assembly of the community at the early stage of wood decomposition; in other words, the composition of the microbial community at the early stage is based on aleatory mechanisms, without a selection of specific groups based on niche availability. However, in the middle and late stages, niche-based mechanisms best explained microbial community assembly, which indicates that species are selected based on their ability to inhabit and exploit new niches available during the decomposition of the wood. This is an indication that environmental parameters, such as the chemical properties of the wood, are driving the community composition by selecting specific microbial groups.
FIGURE 2 | Bacterial (A) and fungal (B) community compositions in decaying pine wood presented as relative abundance of taxa at the phylum level for bacteria (class level for Proteobacteria) and at the order level for fungi. Bar plots show the average abundance of three replicates per sample. Wood samples were organized according to the MRT analysis based on OTUs identified in data rarefied to 1,080 reads per replicate for bacterial 16S rRNA and to 2,000 for the fungal ITS region per sample, and on wood characteristics. Taxonomic groups of low abundance (<5% of total) are grouped and presented under the name "other."
FIGURE 5 | Akaike Information Criterion (AIC) weight values for the six rank abundance distribution models used in this work. The AIC weight varies from 0 to 1, with the highest value indicating the best-fit model. A color scale is used for better visualization, where green indicates the best model.
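For reference, the AIC weights plotted in Figure 5 follow the standard Akaike-weight formula; the sketch below shows the computation on placeholder AIC values (the model names and numbers are hypothetical, not values from this study).

```python
# Akaike weights from a set of candidate-model AIC values (placeholder numbers).
import numpy as np

def aic_weights(aic_values):
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()            # dAIC relative to the best model
    w = np.exp(-0.5 * delta)
    return w / w.sum()                 # normalised so the weights sum to 1

aics = {"neutral": 412.3, "lognormal": 405.1, "Zipf": 420.8}   # hypothetical values
weights = dict(zip(aics, aic_weights(list(aics.values())).round(3)))
print(weights, "best fit:", max(weights, key=weights.get))
```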
DISCUSSION
The relationship between bacterial and fungal community composition during wood decay is not yet well studied. In our study, we focused on microbial community development during the decomposition of wood of the tree species Pinus sylvestris, using wood density as a proxy for wood decay stage. However, it must be noted that the development of bacterial and fungal communities may vary between different wood samples as a result of differences in the initial colonization.
The shift in bacterial communities in response to progressive decomposition was reflected by a positive correlation of bacterial richness and diversity with the stage of wood decay. Interestingly, bacterial community diversity as well as richness, though strongly correlated with the stage of decay, was not linearly correlated with the C/N ratio. Decaying wood is considered to have a high C/N ratio and may thus be a difficult substrate to degrade under N limitation. Furthermore, all analyzed samples, regardless of the stage of decay, showed high C/N ratios, suggesting that nitrogen was a limiting factor for degradation. However, an increase in total N content with progressive decomposition was observed in the samples. The increase in N content may be related to microbial activities, as has been shown for some fungi and bacteria. For instance, H. fasciculare was recently shown to be able to translocate N into decomposing wood from the soil under the colonized wood (Philpott et al., 2014). Similarly, wood-inhabiting N-fixing bacteria were suggested to support fungi in fulfilling their N requirements (Cowling and Merrill, 1966; Hoppe et al., 2014). The identification of potential N-fixing bacteria in our study is discussed below.
The bacterial communities were dominated by Acidobacteria, Alpha-, Beta-, and Gammaproteobacteria, Firmicutes, and Actinobacteria. The dominance of these phyla in decaying wood has been reported before (e.g., Zhang et al., 2008; Valášková et al., 2009). Here, the early stage was dominated by two groups of bacteria, namely Xanthomonadales and Pseudomonadales. These are known to be fast-growing and metabolically highly versatile bacterial groups. Members of the first group include phytopathogens that cause a wide variety of serious plant diseases. Pseudomonadales are affiliated with plant-pathogenic bacteria as well, but are also described as a group of endophytic and plant-beneficial bacteria. Bacteria belonging to these two groups were also previously reported as being associated with the pinewood nematode, Bursaphelenchus xylophilus, the causal agent of Pine Wilt Disease (Proença et al., 2010; Tian et al., 2011; Vicente et al., 2011), and with wood-feeding beetle larvae (Prionoplus reticularis; Reid et al., 2011). Xanthomonadaceae were previously shown to be dominant among isolates obtained from beech wood (Folman et al., 2008; Valášková et al., 2009) and from wood sawdust inoculated with Phanerochaete chrysosporium (Hervé et al., 2014). Pseudomonas and Luteibacter, two of the most abundant genera, were identified as dominant in freshly cut pine wood chips (Noll et al., 2010).
Interestingly, we observed high relative abundances of Acidobacteria, especially subdivisions 1 and 3. The predominance of Acidobacteria under low-pH conditions (such as decaying wood) is well documented for members of subdivision 1 (Sait et al., 2006; Jones et al., 2009), and isolates of this subdivision have previously been obtained from wood (Valášková et al., 2009; Yamada et al., 2014). Supplementation of growth media with plant polymers has been suggested as a method to increase the cultivability of Acidobacteria subdivisions 1 and 3 (Eichorst et al., 2011). To date, only two recognized and described acidobacterial species have been obtained from wood: Acidicapsa ligni, isolated from wood samples in an advanced stage of decay and colonized by H. fasciculare (Valášková et al., 2009), and Granulicella cerasi, isolated from the bark of a living cherry tree (Yamada et al., 2014). There are no reports on the potential function of Acidobacteria in decomposition processes nor on potential interactions with fungi. It may only be speculated, based on their described characteristics (Vorob'ev et al., 2009; Yamada et al., 2014), that these bacteria are well adapted to wood decay conditions. They may interact with fungi, as both isolates are able to grow on trehalose, a disaccharide used for carbon storage by fungi. Additionally, A. ligni (no information is available for G. cerasi) is able to grow on oxalate, a substrate that would be highly available as an exudate of wood-decomposing fungi.
The number of bacterial groups known to be associated with fungal activity and to remove toxic wood compounds, thus exerting an expected positive effect on fungal communities, increased in the middle (e.g., Burkholderia, Phenylobacterium, Acidisoma) or in the middle and late decay stages (Methylovirgula). Bacteria belonging to the Burkholderiales have been shown to be able to degrade cellulose, to have oxalotrophic activities, and to degrade aromatic compounds (Sahin, 2003; Bugg et al., 2011; Eichorst and Kuske, 2012; Štursová et al., 2012). Phenylobacterium may also be involved in the removal of toxic aromatic compounds from wood (Lingens et al., 1985). Another interesting group of microorganisms abundant in the middle and late stages is the genus Methylovirgula. These bacteria use methanol produced during wood decomposition as a carbon source; additionally, they can utilize oxalate and are capable of atmospheric nitrogen fixation (Vorob'ev et al., 2009). As mentioned before, due to the low N availability in decaying wood, it was hypothesized that fungi may be supported by N-fixing bacteria (Cowling and Merrill, 1966). Although we identified potential N-fixing candidates (Rhizobiales, Methylovirgula) as more abundant in the middle and late stages than in the early stage, these two groups were not dominant in the microbial communities.
Although the fungal community compositions were highly variable between wood samples (Figure 2B), the bacterial communities were much less variable (Figure 2A). Moreover, we were able to group the samples according to bacterial community composition and wood decay stage (wood density loss) into three decay stages. It was surprising that the bacterial communities were not connected with the fungal species present in the wood. The co-occurrence analysis showed low dependence (low connectivity) between bacteria and fungi in the early and middle stages of decay. The analysis indicated that bacteria responded more to the changes in wood physicochemical properties resulting from fungal-driven decomposition than to the cause of those changes, the fungal species themselves.
In order to test the role of changes in the physicochemical parameters of the wood on the community composition, we fitted species rank abundances to assembly models. In the early stage of wood decay, our results showed that both the bacterial and the fungal genus distributions indicate the relevance of stochastic processes (Hubbell, 2001; McGill et al., 2006). In some systems, both deterministic and stochastic processes are responsible for constructing ecological communities (Chave, 2004). At the middle stage, all samples fitted niche-based models. At the late stage, all bacterial samples fitted niche-based models, while for fungi there was also a contribution of stochastic processes. These results point to microbial community selection via niche filtering, mainly at the middle stage of wood decomposition. The selected groups of bacteria of very low abundance (some of which were specific to the decay stage) may require specific nutritional conditions, although a close dependence on or interaction with fungi cannot be excluded. This hypothesis is supported by the species abundance distributions in the middle and late decay stages, for which niche-based mechanisms better explained microbial community assembly. Niche theory predicts that changes in species composition are not random and are related to changes in environmental variables. In our study, both bacterial and fungal communities could be fitted to niche theory models at the middle stage of wood decomposition. Although the stage of decomposition had an effect on shaping the overall microbial community, identifying the major cause among environmental factors is more complex, as multiple factors covary and only a limited number of parameters was measured in this study. This may therefore be a good avenue for future research. In the case of fungal communities, competitive interactions between fungi may have a strong and deterministic impact on the community composition. The successional type of fungal community shift during wood degradation is well documented (Rajala et al., 2011; Fackler and Schwanninger, 2012).
In our study, we focused on the bacterial communities in the process of wood decay. We showed that (1) bacterial communities underwent less drastic changes compared with the fungal communities; (2) community assembly in the early stage was a stochastic process, whereas over the progression of wood decay the community was determined by environmental factors; and (3) bacterial community composition responded to the stage of decay (reflected by the wood density) more than fungal community composition did. Thus, we conclude that the changing conditions and wood properties resulting from fungal activity were more important determinants of bacterial community composition than the taxonomic identity of those fungi. This finding is also supported by the low number of co-occurrence correlations between bacterial and fungal community members.
AUTHOR CONTRIBUTIONS
AK and TS designed the study, collected all data and conducted the analyses. AK wrote the manuscript with assistance of EK. LM and EK contributed to the analyses and to the revision of the final manuscript. JV and EK provided conceptual input and contributed to the critical revision of the manuscript. All authors approved the final version of the manuscript.
ACKNOWLEDGMENTS
The authors would like to thank Márcio F. A. Leite for statistical support, Wietse de Boer for critical and valuable comments on the manuscript, Noriko A. Cassman for English editing of the manuscript, and Marlies van den Berg and Iris Chardon for technical assistance. This work was financially supported by the Dutch Ministry of Economic Affairs, Agriculture and Innovation and the BE-Basic organization (www.be-basic.org). Publication 6033 of the Netherlands Institute of Ecology (NIOO-KNAW).
FUNDING
This work is supported by the BE-Basic organization.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fmicb.2016.00231
"Materials Science",
"Environmental Science"
] |
Development of laser welding of high strength aluminium alloy 2024-T4 with controlled thermal cycle
An innovative process design, based on laser beam welding, is being developed to avoid thermal degradation during autogenous fusion welding of the high strength AA 2024-T4 alloy. A series of instrumented laser welds in 2 mm thick AA 2024-T4 sheets were made under different processing conditions, resulting in different thermal profiles and cooling rates. The welds were examined under SEM, TEM and LOM, and subjected to micro-hardness examination. This allowed the influence of cooling rate, peak temperature and thermal cycle on the growth of precipitates, and the related degradation in the weld and heat affected area (evident as softening), to be understood. Although laser beam welding allows a significant reduction of heat input and higher cooling rates compared with other high heat input welding processes, this was found insufficient to completely suppress coarsening of precipitates in the HAZ. To establish the required range of thermal cycles, additional dilatometry tests were carried out on the same base material to determine the time-temperature relationship of precipitate formation. The results were used to design a novel laser welding process with enhanced cooling, such as with a copper backing bar and cryogenic cooling.
Introduction
Aluminium (Al) is commonly preferred owing to its superior features, such as strength-to-weight ratio, high formability, high durability and corrosion resistance, combined with remarkable cost-efficiency, manufacturability and abundance (Miller et al., 2000; Fridlyander et al., 2002; Sakurai, 2008; Hong and Shin, 2017). There is a growing body of interest that recognises the importance of aluminium processing and its strengthening procedures. The standard properties of aluminium can be enhanced through heat treatments such as age-hardening (Oguz, 1990; Jordan, 2016; Andersen et al., 2018). On the other hand, aluminium alloys are very sensitive to their processing conditions (Nascente et al., 2002), even though these hardening processes provide advantageous properties by generating secondary phases, i.e. precipitates, in the matrix, which create internal stresses by inhibiting the dislocation motion required for plastic deformation. Such obstacles to deformation substantially increase the hardness and strength (Ambriz and Jaramillo, 2014).
In line with results from the literature on aluminium welding (Sánchez-Amaya et al., 2012; Alshaer, Li and Mistry, 2017; Niu et al., 2017; Ahn et al., 2018), previous studies have generally focused on problems occurring in the fusion zone (FZ), and research to date has not determined how the coarsening of precipitates can be inhibited in the over-aged zone, i.e. the heat-affected zone (HAZ). As mentioned above, these secondary phases are susceptible to the heat that plays a primary role during welding (Zhang et al., 2015). In addition to other problems such as hot cracking and porosity, softening related to detrimental precipitate evolution is inevitable upon welding of aluminium alloys. Generally, welding of age-hardenable aluminium alloys is avoided because of the sensitivity of these secondary phases to the heat of welding, which results in coarsening of the precipitates in the HAZ (Zervaki and Haidemenopoulos, 2007; Zhang et al., 2016). Arora et al. (2010) also give essential information about the peak temperature interval of the HAZ of welded 2219 aluminium alloy, which usually lies between 400 °C and 150 °C. Mishra and Sidhar (2017) identified that 1 s to 2 s at the higher of these temperatures is enough to coarsen the precipitates in 2xxx aluminium alloys. Considering this information and the TTP diagrams of 2xxx aluminium alloys, the critical average cooling rate should be higher than 250 °C/s over the temperature interval from 400 °C to 150 °C to prevent harmful precipitate coarsening in the HAZ.
This makes the aluminium welding process challenging. These problems, which occur because of heat retention, can nevertheless be reduced by generating a novel design that considers the process characteristics and dynamics. This novel design should aim to mitigate HAZ softening.
Experimental Setup
To understand what thermal cycle is needed to mitigate/minimise HAZ softening, a range of experiments was performed, altering the parameters to vary the welding thermal cycles. The effect of the thermal cycle on the microstructural evolution and the related mechanical properties was assessed, supported by light optical microscopy (LOM) and scanning electron microscopy (SEM) analyses.
Welding Equipment
A diode-pumped continuous-wave (1.070 μm) IPG YLR ytterbium fibre laser with a maximum power of 8 kW was used in these experiments, with the setup shown in Figure 1. To achieve a 0.6 mm beam diameter, a 24.3 cm focal distance was set between the baseplate and the lens. The angle between the laser beam and the base plate was set to 5° to protect the laser equipment against back-reflection. To assess the effect of passive cooling during laser welding, a copper backing bar was added to the jig as a heat sink, shown schematically in Figure 2, with the aim of increasing the cooling rate by passive cooling. Thermocouple measurements recorded the change in the thermal cycle. Average cooling rates in the HAZ were calculated using Equation 1 between the temperatures of 400 ℃ and 150 ℃, the range in which harmful precipitate evolution, and hence HAZ softening, occurs. From these measurements, the active cooling rate required to mitigate HAZ softening was decided.
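Equation 1 itself is not reproduced here; the sketch below assumes it is the average cooling rate between 400 ℃ and 150 ℃, i.e. 250 ℃ divided by the time the thermocouple trace spends between those temperatures on the cooling side. The trace passed to the function is synthetic, not measured data.

```python
# Average HAZ cooling rate between 400 C and 150 C from a (synthetic) thermocouple trace.
import numpy as np

def average_cooling_rate(time_s, temp_c, t_high=400.0, t_low=150.0):
    time_s, temp_c = np.asarray(time_s, float), np.asarray(temp_c, float)
    cooling = slice(int(np.argmax(temp_c)), len(temp_c))      # points after the peak temperature
    # interpolate the times at which the cooling curve crosses 400 C and 150 C
    t_at_high = np.interp(-t_high, -temp_c[cooling], time_s[cooling])
    t_at_low = np.interp(-t_low, -temp_c[cooling], time_s[cooling])
    return (t_high - t_low) / (t_at_low - t_at_high)           # degC per second

# e.g. a trace that falls from 400 C to 150 C in about 1.3 s gives roughly 192 C/s.
```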
Welded Materials and Parameters
During the experiments, 2 mm thick AA 2024 alloy sheets in the T4 condition were used; the composition of the AA 2024-T4 alloy is given in Table 1. The parameters used in the experiments are listed in Table 2. Heat input rates were calculated using Equation 2.
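Equation 2 is not reproduced in the text; a common form is heat input per unit length equal to laser power divided by travel speed, which is assumed in the sketch below. The power/speed pairs are illustrative only and are not taken from Table 2, although they reproduce the 48 kJ/m and 150 kJ/m levels discussed later.

```python
# Heat input per unit length, assuming Equation 2 is power / travel speed (no efficiency factor).
def heat_input_kj_per_m(power_w: float, travel_speed_m_per_min: float) -> float:
    speed_m_per_s = travel_speed_m_per_min / 60.0
    return power_w / speed_m_per_s / 1000.0          # J/m -> kJ/m

print(heat_input_kj_per_m(4000, 5.0))   # illustrative: 4 kW at 5 m/min -> 48 kJ/m
print(heat_input_kj_per_m(5000, 2.0))   # illustrative: 5 kW at 2 m/min -> 150 kJ/m
```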
Macro-and Microstructural Analysis
After the laser welding experiments, samples were cut and mounted by casting the pieces into resin. When the cold mounting was completed, samples were ground using silicon carbide abrasive papers from coarse (P240) to fine (P1200) and finer (P2500) grades. After grinding, samples were polished on chemo-textile cloths using 6 µm and 9 µm diamond abrasive suspensions, and at the final stage with a 0.05 µm OP-S colloidal silica suspension. Finally, samples were etched with Keller's reagent. Afterwards, macro- and micro-images were examined using LOM and SEM to reveal and compare the precipitate coarsening at the various heat input rates. An Oxford Instruments SEM was used to characterise the selected areas of the specimens and to resolve the precipitate coarsening in the microstructural analysis of this study.
Hardness Testing
To investigate the change in the mechanical properties, hardness mapping was conducted using a Zwick/Roell ZHV hardness machine in the facilities of Cranfield University. The indentation load was 100 g and the dwell time was 15 s. Hardness mapping was performed 0.05 mm below the top surface of the weldment, with a distance of 0.5 mm between two indentations. The first indentation was made at the centre of the fusion zone, and 39 indentations were made during the mapping of the weld profile.
Finite Elements Modelling (FEM)
An FEM model was developed in the ABAQUS software to simulate the laser welding process. The domain was meshed as seen in Figure 3. Figure 4 shows the region affected by the heat of welding, which was meshed densely, whereas unaffected areas were meshed coarsely. The model contains 3D volume elements with eight nodes (C3D8T) for the coupled temperature-displacement numerical analysis of the heat distribution, and 8800 elements were generated on the domain.
Transient heat transfer analysis was conducted to obtain the thermal cycle and to calculate the cooling rate. A moving Gaussian heat source was applied through a DFLUX subroutine programmed in FORTRAN (Tsirkas, Papanikos and Kermanidis, 2003) and was used together with the generated model when loading the conditions. The user-defined surface heat flux managed by the DFLUX subroutine was used to simulate the laser welding process, and the power and travel speed variables were changed within this subroutine.
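The DFLUX subroutine itself (FORTRAN) is not reproduced; the Python sketch below only illustrates the kind of moving Gaussian surface flux such a subroutine typically evaluates. The absorption efficiency and the distribution constant are assumptions; the 0.3 mm beam radius follows from the 0.6 mm beam diameter stated earlier.

```python
# Illustrative moving Gaussian surface heat flux q(x, y, t) in W/m^2 (not the actual DFLUX code).
import numpy as np

def gaussian_flux(x, y, t, power_w=4000.0, speed_m_s=0.083, eta=0.35, r0=0.3e-3):
    """Gaussian source of radius r0 travelling along x at speed_m_s; eta is an assumed absorptivity."""
    r2 = (x - speed_m_s * t) ** 2 + y ** 2
    return (2.0 * eta * power_w / (np.pi * r0 ** 2)) * np.exp(-2.0 * r2 / r0 ** 2)

# Peak flux at the beam centre: 2*eta*P/(pi*r0^2), of the order of 1e10 W/m^2 for these values.
```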
Material properties were taken from the studies of Zhao et al. (2012). At the model creation stage, the Stefan-Boltzmann constant was set to 5.67 × 10⁻⁸ W/(m²·K⁴) and the absolute zero temperature to −273.15 °C, so that the results are obtained in degrees Celsius.
Experimental data were used to validate this FEM model. The increased cooling rate provided by active cooling was then simulated, as seen in Figure 5. The simulated cooling rate was enough to inhibit HAZ softening and provided a remarkable clue for developing a novel design that includes an active cooling method.
The thermal profile and cooling rate from the FEM were presented and compared with the results obtained from the laser-only welding simulations. A boundary condition was implemented in this numerical model.
The surface of the plate, except the region where the laser was applied, was fixed at 5 ℃. This boundary condition represents a cryogenically cooled surface: it was assumed that the temperature at the top surface of the substrate drops to the pre-defined lower temperature (5 ℃) through the use of a cryogenic gas.
The boundary conditions were initialised during the welding step to prevent the substrate from pre-cooling, since a pre-cooled substrate would not melt.
Methodology
A systematic literature review was conducted of studies regarding how age-hardenable aluminium alloys lose their advantageous properties during welding, and what cooling rate is required to mitigate HAZ softening in welded structures. It was found from Arora et al. (2010) and Mishra and Sidhar (2017) that processing temperatures between 400 °C and 150 °C are critical, and that exposure to these temperatures for more than 1 s or 2 s causes detrimental precipitate coarsening; therefore, the average cooling rate must be more than 250 °C/s. A series of welds was produced with varying thermal cycles, and their effect on the resulting microstructure and properties was assessed. A comparison between the data from the literature and the experiments was made. The outcome was used to derive the range of cooling rates for mitigating HAZ softening that is offered by standard laser processing with beneficial instrumentation to reduce the side effects of the welding heat. FEM in the ABAQUS software was used to simulate the laser-only welding process and to establish whether the experimentally obtained cooling rate was enough to inhibit HAZ softening. At the last stage, it was checked which cooling rates are feasible with the standard laser welding process and with the enhanced-cooling variants.
Thermal Analysis
The heat input rate changes the cooling rate and the peak temperature exposure. Figure 6, obtained at a distance of 1 mm from the centre of the FZ, shows that using the higher heat input rate (150 kJ/m) during laser processing gives a lower cooling rate, while the lower heat input (48 kJ/m) provides a higher cooling rate. This means that the temperature drops in a shorter time in the lower heat input laser welding process; hence, narrower weld profiles with a small HAZ can be obtained.
Fig 6. Thermal cycle of lower and higher heat input rates.
As the cooling rate between the processing temperatures of 400 °C and 150 °C is critical, the cooling rates achievable in the HAZ of standard laser welding and of the copper-cooled variant were measured and are illustrated in Figure 7. As seen in Figure 7, with laser-only welding the higher heat input rate gives an average cooling rate in the HAZ of 55 ℃/s, whereas the lower heat input rate provides 192 ℃/s. Using a copper bar as a heat sink, however, increases the cooling rate in the heat-affected zone remarkably: the higher heat input rate with the copper bar gives an average cooling rate of 119 ℃/s, and the lower heat input rate with the copper bar gives 444 ℃/s. Although using the heat sink makes the weld profile narrower, it is not enough to prevent HAZ occurrence, so an external cooling device is needed to eliminate the heat-affected zone in the welded structures.
Macro-and Microstructural Analysis
The SEM images of the parent metal and of the HAZ of the laser-welded sample can be found in Figure 8. Some dark, dark-bright and bright phases of various sizes can be seen, and a large difference in the size of the precipitates is evident. Figure 8(a) shows that the parent metal contained many small, uniformly distributed precipitates of approximately 3-4 µm; these should be S' and S" particles. The presence of fine and evenly distributed metastable precipitates promotes higher strength for several reasons.
Firstly, they are coherent with the matrix, and the interface between the fine precipitates and the other phases is sufficient to limit dislocation movement in each slip plane through a decreased interfacial free energy. Secondly, the fine precipitates are distributed uniformly in the matrix, so the structure is more homogeneous; no precipitate-depleted zones can be seen, which would otherwise reduce the strength of the material through an increased inter-particle spacing.
The coarsened constituents illustrated in Figure 8(b) grew from coherent particles owing to the heat exposure during the laser welding process. Laser welding with 150 kJ/m gives a cooling rate of 55 ℃/s, which is low enough for these fine precipitates to coarsen; they became 15-20 µm in size and elongated. This means that the number of precipitates decreased, owing to the ease of migration of atoms under excessive heat exposure and the coagulation of more solute atoms to form large precipitates, which are equilibrium S phases in this case. This creates precipitate-depleted areas and decreases the mechanical properties. The coarsening and elongation of these precipitates reduce the coherency, and they become incoherent with the aluminium matrix (α-phase).
It can be clearly understood from Figure 8 that the HAZ arises because of the coarsening of these main strengthening constituents (precipitates).
Achieving excellent mechanical properties by controlling the cooling rate is the primary purpose of this study. For this reason, the parameters were varied to achieve a higher cooling rate that provides adequate properties in the HAZ. Applying a heat input rate of 48 kJ/m reduces the duration of heat exposure, so that fewer solute atoms collide and coagulate. The distribution of precipitates was also affected by lowering the heat input rate.
The distribution of precipitates is relatively uniform after the application of a heat input of 48 kJ/m, whereas the higher heat input rate (150 kJ/m) creates more precipitate-depleted areas. Figure 8(c) shows that the precipitate size (10-12 µm) is smaller than in the alloy laser welded with 150 kJ/m. This means that using a lower heat input lowers the thermal degradation, but a 192 ℃/s average cooling rate is still not enough to suppress the harmful precipitate transition.
Changes in the weld characteristics in terms of appearance can be identified in Table 3. Lowering the heat input makes the weld zone smaller; the underfill decreased and the weld assumed an hourglass shape, owing to the reduced heat input and the more stable melt during the laser welding process. Furthermore, it is readily recognised that using a copper bar increases the cooling rate, and the HAZ of the additionally cooled laser-welded structures became narrower. However, solidification shrinkage and macro-sized porosity occurred because of the higher cooling rate.
Hardness Measurements
The difference between having the copper backing bar in the jig and having no copper bar can be understood well by examining the mechanical properties. Figure 9 shows that using the copper bar with the lower heat input rates improves the material properties and makes the HAZ narrower, and that the FZ cooled faster when the copper backing bar was used. The centre of the FZ showed 109.1 HV with the copper backing bar, whereas without the copper bar the hardness in this area was 108.4 HV.
Notably, significant differences can be recognised in the area consisting of columnar grains. As mentioned before, columnar grains form because of the slower cooling rate in this area; using a copper bar increases the cooling rate, the grains become much smaller, and the hardness values become higher. The hardness of the areas not cooled by the copper backing bar was 102.4 HV and 101.3 HV, whereas the hardness of the regions cooled by the copper backing bar was 107.2 HV and 104.4 HV. The real difference can be seen in the solution-strengthened zone. The hardness values in this zone without copper cooling were 140.5 HV at 1 mm, 139.6 HV at 1.5 mm and 137.9 HV at 2 mm, whereas with copper cooling they were 151.2 HV at 1 mm, 149.7 HV at 1.5 mm and 146.7 HV at 2 mm. In this area, the precipitates were dissolved in the matrix because of the high temperatures experienced, shown as the orange line in Figure 6; this temperature is not high enough to melt the zone, but it is enough to dissolve the precipitates. The high thermal gradient in this zone then causes re-precipitation at the grain boundaries. As can be understood from the graph in Figure 9, the HAZ is located next to the solution-strengthened zone. The hardness in the HAZ (3 mm) of the laser-welded structure not cooled by a copper backing bar was around 130.7 HV, whereas the hardness in the HAZ of the copper-cooled structure was around 133.2 HV. Increasing the cooling rate from 192 to 444 ℃/s by using the copper heat sink therefore provides approximately a 3 HV increase in the HAZ. The reason for this improvement is that the copper backing bar provides more rapid cooling, which has a positive effect on the evolution and distribution of precipitates in the aluminium matrix: more rapid cooling reduces the number of solute atoms migrating to collide and coagulate, as can be seen in Figure 8(c). The finer and relatively uniformly distributed precipitates in the aluminium matrix, provided by the increased cooling rate from the copper backing bar, improve the weld profile in terms of micro-hardness. It is clearly recognised that using a copper backing bar increased the hardness in the HAZ while decreasing the width of this area; however, copper heat-sink passive cooling alone is not enough to mitigate the HAZ. Figure 9 also shows that the width of the HAZ of the laser-welded structure not cooled by the copper backing bar was approximately 4 mm, whereas the width of the HAZ of the copper-cooled structure was approximately 3 mm. Using the copper backing bar thus increases the cooling rate from 192 ℃/s to 444 ℃/s in this case and improves the weld profile by decreasing the HAZ width.
FEM Analysis
If the data established above are examined, it should be noted that using the heat sink is not enough to prevent HAZ occurrence, although it does make the weld profile narrower. However, as the highest hardness is located at a distance of 1 mm from the centre of the FZ, the average cooling rate in this zone should be sufficient to create an adequate weld profile. Therefore, an external cooling device is needed to mitigate the HAZ in the welded structures.
FEM was used to simulate active cooling and to demonstrate what results active cooling can give. Realistically, a cooling rate of 1679 ℃/s can be achieved with the backing bar, but a sufficient and homogeneous cooling rate cannot be obtained at locations distant from the FZ. This means that some areas suffer from a lower cooling rate, which results in HAZ softening caused by precipitate coarsening. For this reason, FEM is a handy tool to simulate active cooling and the resulting thermal cycle, which can be found in Figure 10. The peak temperatures are decreased in the cooled structure and the temperature drops very rapidly, whereas the laser-welded, non-cooled sample experienced 900 ℃. As shown in Figure 10, 0.35 s is enough to cool the laser-welded sample from the peak temperature (700 ℃) to room temperature. Figure 11 shows that decreasing the surface temperature from room temperature to 5 ℃ during laser welding increases the cooling rate and mitigates the HAZ through an average cooling rate above 2000 ℃/s, which is above the experimentally measured average cooling rate of 1679 ℃/s provided by passive cooling with the copper heat sink. This means that lowering the surface temperature through active cooling provides a higher cooling rate and prevents HAZ softening. This type of external trailing cooling device helps to suppress HAZ occurrence in high strength aluminium alloys after the laser welding process, and reduces the need for post-weld heat treatment (PWHT), which would otherwise be required, with additional processing and extra cost, to restore the pre-weld, as-received properties. To summarise, it is worth bearing in mind that it is not possible to achieve a sufficient cooling rate across the HAZ of laser-only processed structures; therefore, an innovative design is needed for the laser welding of high strength aluminium alloys that yields improved features in the HAZ. The active cooling analysed by FEM is likely to be the right candidate for the prevention of HAZ softening.
Conclusion
In this work, an initial correlation between welding conditions, thermal cycle and the resulting microstructural response has been studied.
Higher heat input results in a lower cooling rate.
The heat released from the weld zone causes precipitates to coarsen in the HAZ, and this coarsening of precipitates decreases the mechanical properties. The thermal cycle and cooling rate of welding therefore have a critical influence on the mechanical properties through this detrimental precipitate evolution.
Passive cooling using a copper bar increases the cooling rate. Experiments showed that in the HAZ the copper bar increased the cooling rate from 192 ℃/s to 444 ℃/s, but this improvement is not enough to prevent HAZ occurrence.
The solid solution strengthened area of the laser-welded sample shows a higher hardness because this zone experienced a higher cooling rate (1679 ℃/s), which suppressed the precipitate coarsening.
FEM modelling has shown that using an additional external active cooling device will be an excellent solution to provide rapid cooling, with a cooling rate above 2000 ℃/s, for the prevention of softening in the initial zones. It can also increase the productivity of this process by improving the process characteristics with a novel design.
"Materials Science"
] |
Application of error level analysis in image spam classification using deep learning model
Image spam is a type of spam that contains text information inserted in an image file. Traditional classification systems based on feature engineering require manual extraction of certain quantitative and qualitative image features for classification. However, these systems are often not robust to adversarial attacks. In contrast, classification pipelines that use convolutional neural network (CNN) models automatically extract features from images. This approach has been shown to achieve high accuracies even on challenge datasets that are designed to defeat the purpose of classification. We propose a method for improving the performance of CNN models for image spam classification. Our method uses the concept of error level analysis (ELA) as a pre-processing step. ELA is a technique for detecting image tampering by analyzing the error levels of the image pixels. We show that ELA can be used to improve the accuracy of CNN models for image spam classification, even on challenge datasets. Our results demonstrate that the application of ELA as a pre-processing technique in our proposed model can significantly improve the results of the classification tasks on image spam datasets.
Introduction
Spam mails are becoming more of a security threat [1] than an annoyance. The classification of spam mail is a major challenge, as adversaries are constantly updating the tools and techniques for the creation and dissemination of spam mails. Recent advancements in the area of machine learning [2,3] have proven to be useful for spam classification tasks.
Image spam is a form of spam with text information embedded inside a base image file. The text information is embedded into the image file to evade the common practice of using text filtering in the spam classification task. With advanced text embedding techniques [4], extracting the text using OCR is proving to be difficult. Alternatively, instead of using normal text filtering techniques, image spam filtering is performed using features that are extracted from the input spam images. The accuracy, complexity and performance of such a filter depend mainly on the types of image features, the extraction method used and the classification algorithm adopted.
One commonly used approach to image spam classification requires manual feature engineering of different types of features and the selection of the most appropriate and useful ones. A larger number of features usually results in better accuracy at the cost of computation, and therefore many approaches focus on selecting useful features to improve the classification accuracy and reduce the overall computation. However, as adversaries use various image processing techniques to make image spam look more like non-spam, manual feature engineering is becoming less accurate for the classification task.
Image spam is usually created by editing or changing part of a base image to include certain textual information, which results in different compression levels within the image. These differences in the compression level of the image can be highlighted by applying error level analysis [5] to the input image. Therefore, in this research we propose to apply error level analysis to the input image as a pre-processing technique, creating more distinct image features, which improves the result of the spam classification task. One of the drawbacks of our approach is that error level analysis works only on lossy compression formats such as JPEG images; other image formats such as PNG are not supported. Moreover, to overcome the challenges of manual feature engineering and the selection of useful features, we propose to use Deep Learning techniques [6][7][8][9][10]. Using this approach, the features are automatically extracted from the input image, and therefore the accuracy and the complexity are improved. However, one of the issues in using a Deep Learning technique is the requirement for large datasets for training and high compute to obtain a fine-tuned model.
To sum up, our main contributions in this paper are as follows:
• Application of Error Level Analysis as a pre-processing technique to the input images
• Use of a Deep Learning approach for automatic feature extraction
• Fine-tuning of the deep learning model to improve the classification task
The rest of the sections of this work are organized as follows: Section 2 presents the related works. Section 3 contains the dataset description. Section 4 presents the proposed methodologies. Section 5 includes experimental results and discussion. Finally, the conclusion is placed in Section 6.
Image spam detection
Optical Character Recognition (OCR) is widely used to extract textual information from a given image. This approach is also used in image spam detection by first extracting the textual information from the given image and then applying different text classification techniques [11][12][13][14] for the classification purpose. Textual features such as header, body, bag-of-words (BoW), structure, hyperlinks, attachments, term frequency, etc. [15] are commonly used features in the detection of spam mails. Advanced features such as the rank score [16], which is generated using the linkage information, textual information and metadata of the image, help to improve the classification accuracy by increasing the relevance of the input image.
Instead of using textual features, many authors have used image features taken directly from the input spam images for the classification task. However, the resulting performance and detection accuracy depend on the type and number of image features used. Different authors manually generate image features based on the properties and metadata of the image file [17], global features including colour and gradient histograms of the image file [18][19][20][21][22][23][24], some form of low-level image features [25][26][27][28][29], or image texture features related to the run-length matrix, auto-regressive model, co-occurrence matrix, wavelet transform, histogram and gradient [30][31][32]. Other work uses image features based on Speeded Up Robust Features (SURF) [33] and n-gram features from the Base64 format of the image file [34]. Different machine learning techniques, such as the KNN classifier [35] and SVM [36][37], are applied to improve the classification task.
Improvements in classification accuracy have been observed using various forms of CNN [38], transfer learning based on pre-trained deep learning models [39], and new pre-processing techniques such as illumination normalization [40]. These models show very high accuracy even on improved [36] and challenge [37] datasets, which are specially handcrafted by superimposing spam images on non-spam images.
Our main contribution in this paper is that we apply error level analysis to the input spam images to further enhance the image features and thereby improve the accuracy and performance of the classification tasks, and we fine-tune the model to improve the classification accuracy.
Error Level Analysis (ELA)
Some image formats, such as JPEG, are lossy in nature and use transform compression such as the Discrete Cosine Transform to retain the low frequency components; when an image is saved or resaved, some error is introduced. However, the amount of error introduced by each resave is not linear, and when an image is modified, the 8x8 cells containing the modifications are no longer at the same error level as the rest of the unmodified image.
Error Level Analysis (ELA) [5] is a technique to identify portions of an image with varying levels of compression. This is achieved by resaving the image at a known error rate and then computing the difference between the resaved image and the original image, as given below.
Error_level = (Px − Py)    (1)
where Error_level is the difference between the original pixel value and the compressed pixel value, Px is the original pixel value in the image, and Py is the compressed pixel value in the image. Many works in the field of image forensics use ELA for the identification of tampered areas in input images. The works of [41][42][43][44] are based on this approach of using ELA in images as well as video for forgery detection.
Image spam created by embedding spam text in ordinary JPEG images usually introduces a different level of compression in the embedded text portion. Using ELA, the extracted image features can be enhanced, improving the accuracy and performance of the detection task. The limitation of applying ELA is that only lossy-compression images, such as the JPEG format, are supported, and the result may be affected by the compression level used to generate the ELA. When a lower compression level is used, the ELA image may not be able to detect any areas of manipulation; however, if the compression level is too high, the ELA image may identify false positives. An example of an Error Level Analysis image is shown in Fig 1.
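A minimal sketch of the ELA pre-processing step described above is given below, using Pillow: the JPEG is resaved at a known quality (90% or 95% in this work) and the pixel-wise difference with the original is taken and rescaled for visibility. The file path and the brightness rescaling are illustrative assumptions, not the exact implementation used.

```python
# ELA sketch: resave the JPEG at a known quality and take the pixel-wise difference.
import io
from PIL import Image, ImageChops, ImageEnhance

def ela_image(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)       # resave at a known error rate
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)   # |Px - Py| per pixel, cf. Equation (1)
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    return ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)  # rescale for visibility

# ela = ela_image("sample.jpg", quality=90).resize((224, 224))   # hypothetical file name
```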
Material and methods
This section discusses the datasets used in performing the different experiments, along with the various deep learning models used.
Image spam datasets used
The details of the datasets used in the experiment are shown in Table 1.
Convolutional neural network model
A convolutional neural network (CNN) is a type of artificial neural network that is specifically designed for processing data that has a grid-like structure, such as images. CNNs are able to learn to identify patterns in images, and they are often used for tasks such as image classification, object detection, and image segmentation. The basic idea behind a CNN is to use a series of convolutional layers to extract features from an image. A convolutional layer is a type of neural network layer that applies a filter to an image, which helps to identify specific features in the image. The filters are learned during the training process, and they are typically based on the features that are known to be important for the task at hand.
Once the features have been extracted, they are passed to a series of fully connected layers, which are responsible for classifying the image. The fully connected layers learn to map the extracted features to the labels of the different classes of images.
Pre-trained convolutional neural network models, which are trained on a large dataset, can be used to save time and effort through the transfer learning technique. In our experiment, we chose the pre-trained CNN model BiT-M R50x1 [45], a high-performing pre-trained model, as our base model. The convolutional block in BiT-M R50x1 consists of a sequence of convolutional layers followed by a shortcut connection. The shortcut connection simply adds the input to the output of the convolutional layers. This allows the convolutional block to learn residual connections, which helps to prevent the vanishing gradient problem. These blocks are of different dimensions, and pooling is applied to these blocks to further reduce the dimension.
The Big Transfer (BiT) models are a powerful set of pre-trained image models that can be used to achieve excellent performance on a variety of tasks, even with few labeled samples. The models are based on the ResNet-50 architecture and are pre-trained on a large supervised dataset. They are then efficiently tuned for specific target tasks using a technique called transfer learning.
One of the key innovations of the BiT models is the use of group normalization together with weight standardization of the convolutional kernels. These normalization techniques help to improve the performance of the models by making them more robust to changes in the input data. As a result, the BiT models are able to achieve state-of-the-art performance on a variety of tasks, including image classification, object detection, and image segmentation.
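The following sketch shows how a frozen BiT-M R50x1 backbone could be loaded as a feature extractor via TensorFlow Hub. The hub handle and the loading route are assumptions for illustration; the paper does not state the exact loading procedure.

```python
# Loading a frozen BiT-M R50x1 backbone as a 2048-d feature extractor (assumed hub handle).
import tensorflow as tf
import tensorflow_hub as hub

BIT_HANDLE = "https://tfhub.dev/google/bit/m-r50x1/1"    # assumed handle for BiT-M R50x1

backbone = hub.KerasLayer(BIT_HANDLE, trainable=False)   # freeze convolution/identity blocks

inputs = tf.keras.Input(shape=(224, 224, 3))             # ELA images normalised to [0, 1]
features = backbone(inputs)                              # 2048-dimensional feature vector
feature_extractor = tf.keras.Model(inputs, features)
```

The classification head described in the next subsection would then be attached to these features.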
Proposed classification model
There are three main components in the proposed model.
1. ELA image generator. The ELA image generator is part of the pre-processing module, where the pre-processing of the datasets is performed. The main pre-processing steps carried out are:
• Resizing the input images to 224×224 dimensions,
• Normalizing the image data to values between 0 and 1,
• Generation of the ELA image by taking the pixel difference between the input image and the compressed input image.
2. Base CNN model. The output of the ELA image generator is fed into the base CNN model (BiT-M R50x1, see Section 3.2.1) to extract features.
3. Binary classification module. Finally, the binary classification module consists of two dense layers as the last layers of the model. The first dense layer is a 1×1×2n ReDense layer [46], where n is the dimension of the flattened vector, and uses a Rectified Linear Unit as its activation function; the second dense layer is the output classification layer, which uses a sigmoid activation. The use of these two dense layers improves the detection performance (a minimal sketch is given below).
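A minimal Keras sketch of the binary classification module described above follows: a dense ReDense-style layer of width 2n with ReLU applied to the flattened 2048-dimensional feature vector (so n = 2048), followed by a single sigmoid output unit. The optimiser and loss are assumptions; they are not specified at this point in the text.

```python
# Binary classification head: 2n-unit ReLU dense layer followed by a sigmoid output.
import tensorflow as tf

n = 2048                                                        # length of the flattened feature vector
features = tf.keras.Input(shape=(n,), name="ela_features")      # output of the frozen backbone

x = tf.keras.layers.Dense(2 * n, activation="relu")(features)   # 1x1x2n ReDense-style layer
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)     # spam vs. non-spam probability

head = tf.keras.Model(features, outputs)
head.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])  # assumed settings
```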
The parameters of the proposed model are shown in Table 2. Fine-tuning is done through different values of the network hyper-parameters; the values provided in Table 3 result in the highest accuracies.
The experimental frameworks
The following frameworks were used in performing all the experiments:
• Python 3.6 with OpenCV [47].
Performance measures
The performance measures are calculated using different evaluation indicators; some of the measures are given below, where FP is the number of misclassified legitimate emails, FN the number of misclassified spam, TP the number of correctly classified spam, and TN the number of correctly classified legitimate emails. The confusion matrix can be used to describe the performance of the classification algorithm and is given in Table 4.
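For reference, the standard definitions of the accuracy, precision, recall and F1-score reported later (Table 6) are:
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 × Precision × Recall / (Precision + Recall)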
Results
Using the proposed deep learning model along with the pre-processing, we performed experiments on the image spam datasets mentioned earlier, namely "Improved" [36], Challenge-A [37], Challenge-B [37], Dredze [17], and ISH [18], and the results of the experiments were then validated using the validation sets. Figs 6-10 present the ROC curves and the validation loss of the different experiments on the datasets.
From the above, we can see that the proposed model with ELA pre-processing achieved a higher accuracy than with the non-ELA images on all the datasets. We also note that there is an insignificant difference in accuracy between the two compression ratios used in generating the ELA images, i.e., 90% and 95%.
The computational speed of the proposed CNN model, along with the speed of execution, is shown in Table 5.
The confusion matrices for the various experiments on the different datasets, with compression ratios of 90% and 95% and with non-ELA images, are shown in Fig 11. The model performance measures, namely the accuracy, precision, recall and F1-score, for the various experiments on the different datasets are shown in Table 6.
Our proposed CNN model also performed very well on the other publicly available datasets given in Table 7 and achieved a substantial improvement in accuracy compared with the other approaches.
Conclusion
Image spam detection is a binary classification problem that uses models to extract and train on features for image classification. Models that rely on manual feature extraction are not suitable when presented with specially handcrafted image spam datasets. However, models that automatically extract image features from input images, such as those based on convolutional neural networks (CNNs), perform extremely well even on challenge datasets. The performance of such models can be improved by fine-tuning the hyper-parameters of the model and pre-processing the input images.
In our experiment, we used a deep learning model based on CNN and transfer learning to extract image features from input images that had been pre-processed using error level analysis (ELA). Our proposed approach achieved extremely high levels of accuracy not only on standard image spam datasets, but also on improved and challenge datasets. With the use of ELA, we were able to increase the efficiency and reduce the computational costs of the training process.
In our future work, we hope to apply the proposed approach to other image datasets using automatic parameter tuning methods and perform various statistical analysis on the performance.
3.2.1 BiT-M R50x1
The BiT-M R50x1 model is a high-performance CNN model based on ResNet, shown in Fig 2. The model takes as input a 224×224 colour image and generates a 2048-dimensional feature vector as output, with a multi-class classifier as its head. The model consists of convolution blocks, shown in Fig 3, and identity blocks, as given in Fig 4.
Fig 2. BiT-M R50x1 model architecture, image by author. https://doi.org/10.1371/journal.pone.0291037.g002
The next module is the base CNN model, which has been stripped of its classification head. The output of the ELA image generator is fed into this CNN model to extract the features. This is achieved by first freezing the convolution blocks along with the identity and pooling blocks of the base CNN model. The input images of dimension 224×224×3 are then transformed into a 7×7×2048 tensor through multiple stages consisting of the various blocks shown in Fig 2, and finally flattened to output a 2048-dimensional feature vector.
"Computer Science"
] |
Geometric Perturbation Theory and Travelling Waves profiles analysis in a Darcy–Forchheimer fluid model
The intention of the presented analysis is to develop existence, uniqueness and asymptotic analyses of solutions to a magnetohydrodynamic (MHD) flow saturating a porous medium. The influence of the porous medium is introduced through Darcy-Forchheimer conditions. Firstly, the existence and uniqueness topics are developed making use of a weak formulation. Once solutions are shown to exist and to be regular, the problem is converted into the Travelling Waves (TW) domain to study the asymptotic behaviour, supported by the Geometric Perturbation Theory (GPT). Based on this, analytical expressions are constructed for the velocity profile of the mentioned Darcy-Forchheimer flow. Afterwards, the approximated solutions based on the GPT approach are shown to be sufficiently accurate for a range of travelling wave speeds in the interval [2.5, 2.8].
Introduction
The flow of materials saturating porous media is quite prevalent in geophysical and physiological processes. These processes include, in particular, the gall bladder with stones, tumor growth dynamics, porous catalysis, nuclear reactor cooling, soil pollution, enhanced oil recovery, fuel cells, combustion technology, water movement in reservoirs, fermentation, grain storage, etc. However, much attention in the previous literature on this topic has focused on describing the porous medium contribution by the classical Darcy expression. It is obvious that Darcy's law does not account for the boundary and inertia effects in the flow. In addition, the porous media in most modern processes are characterized by higher velocities, where the corresponding Reynolds number is greater than one (see [1][2][3][4][5][6][7][8][9][10] and studies therein).
There exists a wide literature dealing with the modelling of Darcy-Forchheimer flows and their resolution by numerical means. In [24], the authors study a fluid flow across a bank of circular cylinders with Darcy-Forchheimer drag; to this end, a numerical technique is developed to account for the irregular domain studied. In [25], the authors analyze the flow induced by a rotating disk with heat in a three-dimensional nanoliquid flow of Darcy-Forchheimer type; the set of equations is first converted into a system of ODEs, and a shooting method is employed to support the construction of a numerical scheme. Note that these cited references have been selected as a representative set of examples in the field. Unlike the cited references, the intention of the analysis in the coming sections is to explore solutions to a Darcy-Forchheimer model within the Travelling Waves (TW) scope. A travelling wave is a kind of wave that advances in a particular direction while retaining a fixed shape; moreover, a travelling wave is associated with a constant velocity throughout its course of propagation. Such waves are observed in many areas of science: for instance in combustion, where they may occur as a result of a chemical reaction [11]; in mathematical biology, to model the impulses that are apparent in a neural network [12]; in conservation laws associated with problems in fluid dynamics, where shock profiles are characterised as travelling waves [13]; further, the structures in solid mechanics are typically modelled as standing waves [14].
Along the presented analysis, the relevant nonlinear flow equations are given by the following set (refer to [20] for a modern derivation of typical fluid models, such as the Darcy and Forchheimer ones, under the scope of the Theory of Porous Media):

(1.1)  V = (u(y, t), 0, 0),  div V = 0,

where P is the pressure field, ν = μ/ρ is the kinematic viscosity, F the nonuniform inertia coefficient in the porous medium, ρ the density, μ the dynamic viscosity, Φ the porosity, K the permeability of the medium, σ represents the charge distribution and B_0 the intensity of an applied magnetic field. Firstly, differentiating (1.2) with respect to x and integrating, we obtain the constant K_1 = −(1/ρ) ∂P/∂x, so that the pressure gradient in (1.2) can be replaced by K_1. Along the following sections, we study existence and uniqueness of solutions. Afterwards, we make use of the TW structure of solutions and the Geometric Perturbation Theory to build solutions in the proximity of the critical points.
It shall be noted that the theory of parabolic operators and non-linear reaction theory have been widely developed in [21,22] and [23], where general existence and uniqueness principles are presented. Along the presented analysis, the intention is to provide insights on existence and uniqueness of weak solutions under the generalized initial condition u_0(y) ∈ L^∞(ℝ) ∩ L^1(ℝ).
Preliminaries
The following is a definition of a weak solution to (1.2):
Existence and Uniqueness of Solutions
Based on the weak solution defined, consider r = |y| ≫ r_0 (where r_0 is a finite arbitrary spatial value) and the domain B_r, a ball of radius r. Admit equation (3.1), defined in B_r × [0, T], such that along the border ∂B_r a semi-compactly supported function is prescribed to control any possible unexpected global behaviour of solutions under the non-linear terms involved, with the initial condition u(y, 0) = u_0(y) ∈ L^∞(ℝ) ∩ L^1(ℝ). Based on this problem with borders, the following existence theorem is shown. Proof: The idea is to study solution bounds based on a weak formulation under potentially irregular initial data in the functional space L^∞(ℝ) ∩ L^1(ℝ). To this end, a cut-off function, depending on a parameter τ ∈ ℝ^+, is defined to control the variations along the borders of B_r, inspired by an idea in [15].
Multiplying (3.1) by the cut-off function and integrating in B_r × [τ, T], we obtain
The integral for the diffusive term reads
Then equation (3.2) follows.
Under the positive parabolic condition on the involved Gaussian operator, and for some large r ≫ r_0 > 1, it is possible to make use of known results obtained for a generalized non-linear diffusion in regions with positive solutions. The spatial diffusion term satisfies ∂²u/∂y² ≪ ∂²u^m/∂y² for m > 1 and arbitrarily large u, so that the following results from [15] hold. For m = 2, we get the corresponding bound. Next, consider a test function of the form given below, where g(s) > 0 for 0 < s < t, g(t) = 0, and the exponent is chosen for convergence of (3.4). For an exponent larger than 5/2 and r → ∞, the right-hand integrals above tend to zero. Then, for any r, the right-hand side term is finite with a bounding constant A. Considering (3.2) and K_1 ≫ 1, together with the introduced positivity condition u > 0, and noting that the involved supporting functions are positive, bounded and finite in τ < s < t < T, it is possible to conclude the theorem postulations on the boundedness of solutions in B_r × [0, T] for any pair of finite r, T, which can be considered sufficiently large. ◻ The next intention is to show the uniqueness of solutions. To this end, consider a maximal solution, defined with ε > 0 arbitrarily small, and let the minimal solution be defined upon evolution of the initial condition u(y, 0) = u_0(y).
Both the maximal and the minimal solutions satisfy the same set of equations. Considering the associated weak formulation for every test function φ ∈ C^∞(ℝ), the following expression holds after subtraction. Admit a test function in which K_2 and the remaining parameters are constants; differentiating it with respect to t and y, one obtains an expression where L_1 = max_ℝ {û + u} is bounded, as shown in Theorem 1. Since K_2 is constant, a suitable value can be chosen accordingly, so that uniqueness follows. We introduce the new variables:
so that the following system holds. To determine the critical points, set X' = 0 and Y' = 0; solving, (X_1, 0) and (X_2, 0) are the system critical points. The intention now is to make use of the Geometric Perturbation Theory to characterize the obtained critical points and to determine the orbits close to such critical conditions.
Geometric Perturbation Theory (GPT)
A singular geometric perturbation approach is employed in this section to show the asymptotic behaviour of a manifold defined to simplify the assessment of a TW analytical profile. For this purpose, define firstly the manifold M_0 under the flow (4.2) and with critical points (X_1, 0), (X_2, 0). Admit the following perturbed manifold M_ε close to M_0 in the critical point (X_1, 0), where ε represents a perturbation close to the equilibrium (X_1, 0) and B is a suitable constant obtained after root factorization. Now, let X̄_2 = X − X_2. The intention is to use the Fenichel invariant manifold theorem [12], as formulated in [13] and [14], to determine the hyperbolic condition of M_ε. For this purpose, it is required to show that M_0 is a normally hyperbolic manifold, i.e. that the eigenvalues of M_0 in the linearized frame close to the critical point, and transversal to the tangent space, have non-zero real part. This is shown based on the equivalent flow associated to M_0; the associated eigenvalues are both real. Analogously, a perturbation close to the equilibrium (X_2, 0) is considered, with A a suitable constant obtained after root factorization. Now, let X̄_1 = X − X_1; then the Fenichel invariant manifold theorem applies in the same manner as for the critical point (X_1, 0). Note the equivalent flow associated to M_0: the associated eigenvalues are again both real. Hence M_0 is a hyperbolic manifold. In the same manner, the next intention is to show that the manifold M_ε is locally invariant under the flow (4.2), so that the manifold M_0 can be represented as an asymptotic approach to M_ε. For this purpose, consider the perturbation functions, in the proximity of the critical point (X_2, 0), determined by flows that are measurable a.e. in ℝ, where the bound A‖X̄_1‖ is finite given the boundedness properties of solutions. The distance between the manifolds keeps the normal hyperbolic condition for ε ∈ (0, ∞) and for a sufficiently small perturbation close to the critical point (X_2, 0). Once we have shown that the manifold M_0 remains invariant with regards to the perturbed manifolds under the flow (4.2), the TW profiles can be obtained operating in the linearized perturbed manifolds close to M_0.
Travelling Waves Profiles
Based on the normal hyperbolic condition in the manifold M_0 under the flow (4.2), asymptotic TW profiles can be obtained. For this purpose, consider firstly (4.2), so that the following expression provides the family of trajectories in the phase plane (X, Y). The intention is to determine a trajectory in the phase plane close to the equilibrium (X_1, 0). This is shown based on a comparison with subsolutions for a sufficiently small a and supersolutions for a sufficiently large a, together with a topological argument and the continuity of H. Admit a → 0; then it is possible to find a suitable value of K_1 such that dY_1/dX_1 > 0, while for a ≫ 0 it is possible to conclude a condition of the form dY_1/dX_1 < 0 for suitable values of the involved constants. Given the continuity of H, it is hence possible to conclude the existence of a critical trajectory close to the critical point (X_1, 0). Solving (4.3) by separation of variables, we obtain the decaying profile close to the critical point (X_1, 0). Note that a growing TW is obtained by replacing the negative exponent by its symmetric positive value. The same process is repeated to determine a trajectory in the phase plane close to the equilibrium (X_2, 0). Admit a → 0; then it is possible to find a suitable value of K_1 such that dY_2/dX_2 > 0. Similarly, for a ≫ 0, dY_2/dX_2 < 0 for suitable values of the involved constants. Given the continuity of H, it is hence possible to conclude the existence of a critical trajectory close to the critical point (X_2, 0). Solving (4.4) by separation of variables, we obtain the corresponding profile close to the critical point (X_2, 0).
Note that a growing TW is obtained by replacing the negative exponent by its symmetric positive value; taking K_1 F = 1, we get:
Numerical Assessments
The intention along this section is to develop a numerical simulation to determine a suitable TW-speed for which the approximated TW solution obtained in the previous section fits the actual solution. The numerical approach has been performed considering the following points (a Python sketch of an analogous setup is given after this list): • The numerical simulation is executed with the Matlab function bvp4c. This function is based on an implicit Runge-Kutta approach with interpolant extensions [19]. The bvp4c collocation method requires the specification of pseudo-boundary conditions. In this case, the left boundary is considered positive (for instance f(−∞) = 1) and the right boundary coincides with the stationary conditions X_1 and X_2. To simplify the numerical representations, the solutions are translated into the zero state by the standard vertical translation. • The interval for integration is assumed as (−100, 100). It has been considered sufficiently large to avoid any impact of the boundary condition over the integration domain of interest.
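Because the paper's exact Darcy-Forchheimer profile ODE is not fully recoverable from this extract, the sketch below reproduces only the numerical setup described in the bullets, using SciPy's solve_bvp (a collocation solver comparable to Matlab's bvp4c) on an assumed Fisher-type travelling-wave equation f'' + a f' + f(1 − f) = 0, with the stated pseudo-boundary conditions and integration interval.

```python
import numpy as np
from scipy.integrate import solve_bvp

a = 2.65  # TW-speed inside the assessed interval [2.5, 2.8]

def rhs(y, F):
    # F[0] = f, F[1] = f'; illustrative Fisher-type TW profile equation.
    f, fp = F
    return np.vstack([fp, -a * fp - f * (1.0 - f)])

def bc(Fl, Fr):
    # Pseudo-boundary conditions: f(-L) = 1 on the left, f(+L) = 0 on the
    # right (stationary state), mimicking f(-inf) = 1 and the critical point.
    return np.array([Fl[0] - 1.0, Fr[0]])

y = np.linspace(-100.0, 100.0, 2001)  # integration interval (-100, 100)
F0 = np.vstack([0.5 * (1.0 - np.tanh(y / 10.0)),   # front-like initial guess
                -0.05 / np.cosh(y / 10.0) ** 2])
sol = solve_bvp(rhs, bc, y, F0, max_nodes=100000)
print(sol.status)  # 0 means the collocation converged; the front is a family
                   # of translates, so convergence depends on the initial guess
```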
The results are compiled in Figs. 1, 2, 3, 4. It is possible to check that there exists a TW-speed a for which both the approximated and the simulated solutions evolve closely. This speed has been sharply assessed to lie in the interval [2.5, 2.8] (see Fig. 2). If the TW-speed is increased, the two solutions diverge, leading to a less accurate approximation. Note that the particular values considered for the involved parameters provide the described interval in the TW-speed; if any other combination is considered, then the assessed speed interval will be modified. Although this is an important limitation, it is relevant to highlight that the numerical exercise has the intention of providing a validating proof for the involved analytical assessments. Hence, it is possible to conclude that the analytical paths are suitable and reproducible via a numerical scheme.
Conclusions
Along the presented analysis, existence and uniqueness results have been provided for a Darcy-Forchheimer flow. Solutions have been explored in the TW domain and approximated solutions have been obtained making use of the GPT approach. Finally, the analytical conceptions followed have been validated with the use of a numerical simulation. Note that GPT solutions have been shown to evolve close to the actual solution for a range of TW-speeds a ∈ [2.5, 2.8]. The numerical assessments have been done for particular values of the involved parameters, but they permit the validation of the analytical solutions so that these can be used for other parametric values.
Author Contributions JL and SR conceived the study and the overall manuscript design. JL and SR carried out the analytical studies. SR has performed the specific analytical assessment with the revision of | 3,599.4 | 2022-02-28T00:00:00.000 | [
"Mathematics",
"Environmental Science",
"Physics",
"Engineering"
] |
Magnetic field and power consumption constraints for compact spherical tokamak power plants
We describe a workflow that is able to approximate basic machine parameters (including but not limited to plasma current I_p, major radius R_0, toroidal magnetic field B_0, fusion power P_fusion) based on the choice of maximum field B_TF,max on the in-board leg of the toroidal field (TF) coil of a spherical tokamak (ST) fusion power plant (FPP), its aspect ratio A, the distance between the in-board TF leg and the plasma edge, and a limited set of plasma physics parameters typical for an ST FPP. Together with an estimation of the electrical power exported as a function of the fusion gain Q, this allows the mapping of the parameter space where such power plants can be compact while still being commercially viable.
In any magnetic confinement device, the magnets play a crucial role, with the Toroidal Field (TF) magnets providing the dominant plasma confinement field in tokamaks and stellarators. For practical tokamak TF magnets, Nb3Sn is used for ITER [11], KSTAR [12] and planned for EU-DEMO [13], while for CFETR both a Nb3Sn-NbTi graded hybrid coil [14] and a REBCO coil [15] are being considered. A REBCO-Nb3Sn-NbTi graded coil has been proposed for the EU-DEMO central solenoid [16].
Here we consider the implications of the peak field at the in-board leg of the TF coil for the major radius and on-axis magnetic field of ST FPPs, and the resulting power gain Q, here estimated based on ITER-98(y,2) scaling. The necessary recycled power to support the plasma in steady state, together with estimates of the power required for the balance of plant on the fusion reactor site, are also considered in order to select a parameter range that could provide commercially relevant electricity output to the grid. Rather than aiming for accurate machine design points, the purpose is to elucidate trends and identify regions of interest in the design parameter space, which can then be further refined in system design codes, such as PROCESS [10,17], MIRA [18] or BLUEPRINT [19].
Implications of the centre column radial build for the plasma major radius
The outermost edge of the inboard leg of the TF magnet, R_TF,in, is determined by the allowed peak field B_TF,max on the magnet conductor and the total current in all TF coils (sometimes called the 'rod current') I_TF, via Ampere's law: R_TF,in = μ_0 I_TF / (2π B_TF,max). It is possible to add to this radius of the TF conductor stack an estimate of a realistic total radial build thickness t_inboard, which includes the thermal insulation, the vacuum barrier, the neutron shield, cooling (or heat extraction) channels, plasma facing components and their supporting structure, and a vacuum gap to the last closed flux surface to accommodate plasma position control errors and the scrape-off layer. Special inboard divertor or limiter flux expansion provisions have not been considered here but could be accommodated by raising the thickness of the total radial build, perhaps considered as additional inboard gap, since they would not contribute significantly to the neutron shielding of the centre column. This then sets the inboard radius of the plasma, R_inboard = R_TF,in + t_inboard, so a prescription of plasma aspect ratio A then sets the major radius as R_0 = R_inboard A/(A − 1), and the vacuum field at the geometric centre of the plasma as B_0 = B_TF,max R_TF,in/R_0, ignoring ripple due to the finite number of TF coils [14].
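As a worked example of this radial-build bookkeeping, the following is a minimal sketch that chains Ampere's law for the rod current with the aspect-ratio prescription; the sample inputs are illustrative assumptions, not design values from the paper.

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability (T*m/A)

def machine_geometry(b_tf_max, i_tf_ma, t_inboard, aspect_ratio):
    """Radial build -> (R_TF,in, R_inboard, R_0, B_0); current in MA, lengths in m."""
    r_tf_in = MU0 * i_tf_ma * 1e6 / (2.0 * math.pi * b_tf_max)  # Ampere's law
    r_inboard = r_tf_in + t_inboard
    r0 = r_inboard * aspect_ratio / (aspect_ratio - 1.0)        # R_inb = R0(1 - 1/A)
    b0 = b_tf_max * r_tf_in / r0                                # vacuum 1/R falloff
    return r_tf_in, r_inboard, r0, b0

# Illustrative inputs: 20 T peak field, 100 MA rod current, 0.85 m build, A = 1.8
print(machine_geometry(b_tf_max=20.0, i_tf_ma=100.0, t_inboard=0.85, aspect_ratio=1.8))
```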
The aspect ratio of an ST FPP should clearly be chosen in the ST range, i.e. A < 2 [7], although an optimal power plant configuration is likely to have A > 1.4 [20]. If A ≲ 1.8, it might be possible that no in-board blanket is required to achieve a Tritium Breeding Ratio (TBR) of > 1. This is due to the relatively small solid angle presented to the neutrons by the centre column in a low-aspect-ratio tokamak, compared to that presented by its outer wall, as discussed more fully in [9]. Existing and proposed ST FPP machine designs have aspect ratios in the 1.5-1.8 range [1][2][3][4][5][21]. Here we choose A = 1.8, although our analysis is valid for any chosen value of A, apart from increasing the inboard shield thickness to ∼1.1 m to achieve both adequate shielding and tritium breeding for A > 1.9.
In the following analysis, the total thickness of the mid-plane radial build, going from the outer radius of the TF central conductors R_TF,in to the inboard edge of the plasma, t_inboard, is chosen as 0.85 m. This is based on estimations of: 200 mm vacuum barrier, scaling down from ITER's double wall thickness of ∼300 mm [22,23], allowing for a more favourable aspect ratio of this cylindrical part of the vessel and the option for a higher-strength structural material; 550 mm of neutron shielding, including structural material and cooling provisions; and a nominal 50 mm plasma gap, defined as the distance between the first wall and the inner plasma boundary, as defined by the last closed flux surface (the separatrix in a divertor configuration, otherwise the first flux surface intercepted by a physical structure). The most critical of these is the thickness of neutron shielding, which for optimised elemental combinations provides one decade of attenuation of the neutron flux per 130-150 mm of shield thickness [24][25][26][27][28][29]. Accordingly, the material selection and desired lifetime of the shielded components allow some variation of this part of the radial build. The suggested 550 mm of neutron shielding (with e.g. Zr(BH4)4, W or WC), together with the roughly one decade of neutron flux attenuation accorded by the 200 mm of vacuum vessel wall [22,30], provides about five decades of neutron flux attenuation and is chosen to allow a few full power years of FPP operation before significant degradation of the superconductor or its potting compound, and also to limit the neutronic heating load on the cryogenic system to acceptable values [31][32][33][34]. This is important because the cryogenic system is generally found to represent a significant demand on the 'Balance of Plant' (BoP) recycled power of the reactor [35][36][37][38].
Projections for plasma current & power gain
At this point, given a choice of peak B_TF, an in-board radial build thickness R_TF,in + t_inboard and aspect ratio A, the plasma parameters R_0, minor radius a and on-axis magnetic field B_0 are essentially determined.
Estimating an approximate plasma current I_p is also possible by noting that the maximum plasma elongation κ that is reliably controllable is in the vicinity of 1.6-2.0 times the natural elongation κ_nat in the neutrally stable situation of a pure vertical field [39,40]. A significant bootstrap current fraction is typically intended in ST FPP concepts [40][41][42], making the plasma current profile somewhat broad or even hollow, and hence the internal inductance l_i quite small. The value of the bootstrap current fraction coefficient, C_BS, used or implied in those studies is consistent with values of l_i well below 0.6 [43], but a more conservative value of 0.6 is used to estimate the natural elongation in this work. References [44][45][46] closely agree with the relationship from [47]. Thus, a reasonable estimate follows for the elongation κ_95 of the poloidal flux surface containing 95% of the poloidal flux change from the minor axis of the plasma to the last closed surface, with good feedback control of the vertical position. Neglecting the small effects of triangularity and toroidicity, and using the first-order elliptical integral of the ellipsoidal poloidal circumference of the plasma, this leads to estimates for the plasma volume V_plasma ≈ 2π² R_0 a² κ_95 (in m³) and the toroidal surface area A_plasma ≈ 4π² R_0 a √((1 + κ_95²)/2) (in m²). Given the estimate of plasma elongation, the plasma current I_p (in MA) can also be estimated from [7,48], with triangularity δ, safety factor q_95, a in m and B_0 in T. For reliable operation, higher values (> 5) of q_95 are more appropriate for an ST FPP [7,49], compared to values of ∼3 preferred for A ∼ 3 tokamaks. Operating data for MAST [50,51] suggest a q_95 in the range 5-10.
Moving towards a scaling to determine the fusion power, a conservative approach to energy confinement scaling is adopted by noting that START, NSTX and MAST data fit reasonably well to the canonical ITER-98(y,2) scaling [52][53][54]. It is possible to estimate the line-average electron density n_e (in units of 10^20 /m³) required for that scaling as a fraction f_Greenwald of the Greenwald density n_G. For the range of indexed parabolae used as the profile class here, namely n_e(ρ) = n_e,0 (1 − ρ²)^α_n, with ρ = r/a and the density profile peaking factor 0.1 ≤ α_n ≤ 2, the central electron density n_e,0 can be approximated as a function of the line-averaged electron density n_e, to within less than 1%, by a closed-form fit. Hence n_e,0,20 follows, where n_e,0,20 is the central density in units of 10^20 m^-3, I_p is in MA and a in m. It is possible to express the stored kinetic energy in the plasma, W, in two ways, using the central ion temperature T_i,0 as the solved variable in our analysis. The first expression is based on the confinement time, here taken from ITER-98(y,2) [55,56], although other scalings can be applied [6,[57][58][59]], with W in MJ, I_p in MA, B_0 in T, n_e the average electron density now in 10^19 m^-3 and M the atomic mass of the plasma fuel in AMU. For a 50-50 D-T mixture, M = 2.5. The transported power P_transp can be expressed from W and the confinement time. The fusion power P_fusion (in MW) is itself a function of T_i,0 and depends on the D-T reactivity (i.e. the D-T reaction cross-section σ averaged over a Maxwellian distribution of the centre-of-mass velocity v of the colliding ion fuel species, which are assumed to have the same temperature), for which a fit can be found in [60], and is given by a volume integration in which the term in Z_i and Z_eff accounts for fuel dilution by a notional single impurity of charge Z_i, creating an average charge of Z_eff. This modifies the expression that would apply for a pure D-T plasma with n_D = n_T = n_e/2. The term in α_n and α_<σv> accounts for the integration of the product of the indexed parabolae representing profile peaking. In this equation E_fusion = 17.6 × 10^6 e (J), with e being the elementary charge. The peaking factor α_<σv> can be approximately expressed as a function of the temperature profile peaking factor α_T using a fit to the numerical integration of the <σv> profile for an appropriate range of α_T describing the temperature profile T = T_0 (1 − ρ²)^α_T. The coefficients in (20) were chosen to suit significantly peaked reactivity profiles, i.e. those with T_i,0 < 65 keV or with α_T > 1.5, which are typical of the solutions of interest.
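The closed-form fit for n_e,0 as a function of the line-averaged density is not reproduced in this extract, but the exact ratio for the stated profile class can be computed directly by quadrature; the following sketch does so for a few peaking factors.

```python
import numpy as np
from scipy.integrate import quad

def central_to_line_average(alpha_n):
    """n_e,0 / <n_e> for n(rho) = n0 (1 - rho^2)^alpha_n along a midplane chord."""
    avg, _ = quad(lambda rho: (1.0 - rho**2) ** alpha_n, 0.0, 1.0)
    return 1.0 / avg

for a_n in (0.1, 0.5, 1.0, 2.0):
    # e.g. alpha_n = 1 gives the familiar factor 1.5 (chord average = 2/3 of peak)
    print(f"alpha_n = {a_n}: n_e0/<n_e> = {central_to_line_average(a_n):.4f}")
```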
The second expression for the stored kinetic energy, ignoring the non-thermal α particle population and assuming that T_i = T_e (in eV), is given by a sum over species, where the subscript (e,i) implies summing over electrons and ions and the factor involving Z_i and Z_eff accounts for the reduction in total ion density due to the lumped impurity species Z_i.
Based on preset values for the plasma and engineering parameters, including R_TF,in and Q, we can now numerically calculate the value of T_i,0 which satisfies the simultaneous equations (14) and (22) and is in a realistic reactor range (i.e. below 65 keV). This then provides a value for P_fusion with (19) and for the normalised ratio (in %) of plasma pressure and magnetic pressure, β_N [61], with the peak pressure taken as 2 n_e,0 T_i,0, T_i,0 in eV. Typically the value of β_N is found to be lower than 2.8 for low-n kink modes with no conducting wall close to the plasma [62,63], and in standard-aspect-ratio tokamaks it is generally less than 4 for ballooning modes where there is such a wall [64,65], although both limit values depend significantly on the plasma current and pressure profiles. Experimentally, β_N values of up to ∼6 (or 10-13·l_i) have been achieved in NSTX and in MAST [50,51,[66][67][68]]. Using β_N, it is possible to calculate the bootstrap current fraction f_BS, with q* = 2.5 a B_0 (1 + κ²)/(A I_p) and C_BS calculated from the fit provided in [43]. The required current drive power P_CD then follows, where γ_CD is the efficiency of the current drive, which can be up to 0.36 MA/m² MW [69,70].
The average neutron wall loading Γ_n in MW/m² can be estimated from Γ_n = 0.8 P_fusion / (1.1 A_plasma), where the factor 0.8 is the neutron fraction of the fusion power and the factor 1.1 allows for the plasma gap. Given a safe level of neutron wall loading or a desired first wall lifetime in MW·yr/m² (e.g. related to neutron damage, activation or reduced tritium breeding capability), designers can use these values to limit solutions to cases respecting that limit or that lifetime in full power years.
A similar calculation can be made to estimate the total fast neutron flux (E > 0.1 MeV) on the central TF conductor stack, and thus its lifetime, based on an upper limit of ∼3 × 10^22 n/m² [71,72]. Assuming that the fast flux is 6-8 times the 14 MeV flux determined by the fusion power (as can be inferred from the analyses in [30,73]) over ∼1.1 times the plasma surface area, and assuming an order of magnitude of attenuation per 0.13 m of inboard shield thickness t_shield [26,28,29], this results in a lifetime (in seconds) of t_life,SC = 3 × 10^22 · 1.1 A_plasma / [10^(−t_shield/0.13) · 6 P_fusion / (17.6 × 10^6 e)] (28), with e the elementary charge, P_fusion here in W, and noting that t_shield (in m) is approximately 0.3 m less than t_inboard as defined in section 2. As for the first wall, this criterion could be used to select solutions meeting a chosen TF central conductor lifetime. Finally, it is possible to implement plasma physics checks to confirm that returned solutions are self-consistent, namely that: 1. the required current drive power P_CD is less than P_aux = P_fusion/Q; 2. the confinement time from the scaling in (14) does not exceed the confinement time from purely ohmic heating, taken as τ_E,OH = 0.075 n_e,20 q* a R_0², close to the upper bound of the data in Figure 28 of [74] and broadly consistent with the upper bound of other OH experiments including JET [75]; 3. the asserted radiated power f_radiated-core P_fusion (0.2 + 1/Q) is higher than the sum of the bremsstrahlung [76] and synchrotron losses [77], thus leaving a margin for balancing the total core loss power by controlled additional radiative losses from impurity line radiation; 4. the transported power P_transp is greater than the H-mode threshold power scaling [56].
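A minimal numerical rendering of eq. (28) as reconstructed above; the sample fusion power and plasma surface area are illustrative assumptions, not the paper's design points.

```python
E_CHARGE = 1.602176634e-19  # elementary charge e (C); 17.6e6*e is E_fusion in J

def tf_conductor_life_fpy(p_fusion_mw, a_plasma_m2, t_shield_m,
                          fluence_limit=3e22, fast_per_14mev=6.0):
    """TF central-conductor lifetime from eq. (28), in full power years."""
    neutron_rate = p_fusion_mw * 1e6 / (17.6e6 * E_CHARGE)       # 14 MeV n/s
    wall_flux = fast_per_14mev * neutron_rate / (1.1 * a_plasma_m2)
    conductor_flux = wall_flux * 10.0 ** (-(t_shield_m / 0.13))  # shield attenuation
    return fluence_limit / conductor_flux / (365.25 * 24 * 3600)

# Illustrative inputs: 800 MW fusion power, 500 m^2 plasma surface, 0.55 m shield
print(tf_conductor_life_fpy(p_fusion_mw=800.0, a_plasma_m2=500.0, t_shield_m=0.55))
# -> a few full power years, consistent with the trend described in the text
```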
Many of the approximations used in this heuristic analysis could be refined to allow for more realistic design optimisations, plasma physics and control assumptions, but the resulting description of the fusion plasma is considered to be sufficiently accurate for the trends that this work endeavours to demonstrate.
Dependence of the electrical power exported on Q
An important consideration is the value of Q necessary for an ST FPP to be commercially viable, which depends on the fraction of electricity exported to the grid.For the calculation below we assume superconducting magnets for the machine, reducing the electrical demand, both for the magnet power supplies and the cryoplant.
The electrical power demand of the auxiliary heating, P_aux,el (plasma heating & current drive power, ignoring transmission losses), can be expressed as P_aux,el = P_fusion / (Q η_aux,wall), where η_aux,wall is the wall-plug efficiency of the current drive and external heating systems, which can be ∼40% [38]. This η_aux,wall is defined as the power absorbed by the plasma with respect to the power drawn from the electrical supply. Different heating systems have different efficiencies, e.g. approximately 30-45% for LHCD and ECRH [78], 40-50% for ICRH [78,79] (with possible scope up to 69% [80]), but as low as 20-30% for Neutral Beam Injectors (NBI) [78].
Here we adopt a plausible target of 40%.
If we assume that the first wall and divertor heating power could be removed at a high temperature similar to that of the blanket and shield heating, the total electrical power P_total,el, with a fusion power fraction of 80% from neutrons and 20% from alpha particles, can be approximated as P_total,el = η_generator P_thermal (30) = η_generator (P_thermal,blanket + P_thermal,divertor + P_aux,th) (31). The thermal-to-electric conversion efficiency η_generator is typically 35% to 37.5% [38,84], though alternative approaches might reach 45-50% [85]. M_EMF is the neutron Energy Multiplication Factor, which ranges between 0.9-1.35 [86], specifically 1.2-1.4 for FPP-relevant blanket configurations [87,88] and ∼1.4 for steel in the divertor structure [89]. Here we assume an average value for M_EMF. There are many analyses of the electrical power P_BoP,el required for the Balance of Plant (BoP) of a fusion power plant in the literature, each with a different subset of plant systems and losses considered. Here BoP is taken to mean all the recycled power required for reactor operation, including the primary heat extraction system coolant circulators but excluding the plasma auxiliary heating and current drive systems, which here are treated separately. Example values of the fraction of generated electricity required for the BoP, f_BoP, vary widely, broadly reducing as Q rises, and include 6% for STARFIRE [87], ∼30% for ARC [85], ∼64% for DEMO [38], 40-50% for CFETR [90] and an equivalent value of 46% for ITER [35]. STARFIRE, ITER and DEMO are the most comprehensive of these examples, but none of them address all the plant demands collectively considered by all of them. We consider a value of f_BoP = 35% of the gross electrical generated power a reasonable target for an optimised, commercial FPP.
The fraction of exported electricity f_export relevant for a commercial FPP would be f_export = (P_total,el − P_BoP,el − P_aux,el) / P_total,el (33). We can now substitute (29) and (32) in (33) to yield eq. (34), plotted in Figure 1.
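The resulting trend of eq. (34) can be sketched as follows, normalising all powers to P_fusion and using the representative values quoted above (η_aux,wall = 40%, η_generator = 36%, f_BoP = 35%); the value M_EMF = 1.3 is an assumption, and the full bookkeeping of eqs. (29)-(34) is richer than this sketch.

```python
def f_export(q, eta_wall=0.40, eta_gen=0.36, m_emf=1.3, f_bop=0.35):
    """Fraction of gross electricity exported, all powers normalised to P_fusion."""
    p_aux = 1.0 / q                          # auxiliary heating power, P_fusion/Q
    p_thermal = 0.8 * m_emf + 0.2 + p_aux    # neutrons*M_EMF + alphas + absorbed aux
    p_total_el = eta_gen * p_thermal         # gross electrical output
    p_aux_el = p_aux / eta_wall              # wall-plug demand of heating & CD
    return 1.0 - f_bop - p_aux_el / p_total_el

for q in (10, 20, 30, 40):
    print(f"Q = {q}: f_export ~ {f_export(q):.2f}")
# The export fraction rises steeply with Q, which is why Q = 10 machines are
# unlikely to be commercially attractive despite being physically feasible.
```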
ST FPP reactor parameters
A plot can now be made for the various machine and plasma parameters that are an immediate consequence of the choice of permissible peak field B_TF,max, in-board radial build thickness R_TF,in + t_inboard and Q, as well as other plasma parameters appropriate for an ST FPP. The values selected in this study are listed in Table 1 and the code is available online [94]. Multiple reactor configurations are possible, but only those that satisfy the plasma physics checks (see section 3) and the neutron wall loading limit, and that produce net electrical output, are retained. It should be noted that no specific superconductor characteristics are used to obtain these results.
It may be necessary to select a larger R_TF,in, e.g. depending on the choice of superconductor for the TF coil or to accommodate structural support. Even with state-of-the-art high-field capability, the achievable current density of Nb3Sn [95] is outperformed by REBCO [96], which has been demonstrated with a 45.5 T magnet insert [97]. For example, the highest achievable engineering current density at 4.2 K and 20 T for an HTS REBCO tape can be as high as 3620 MA/m² [96], whereas a well-optimised Nb3Sn strand would only achieve 274 MA/m² [95]. Note that the practical 'cable' engineering current density in the winding pack (allowing for copper or aluminium stabiliser, coolant channels, insulant and any internal support structure) tends to be around 10% of the empirical limit of the strand or tape, based on existing cable designs such as the ITER TF and Central Solenoid Cable-In-Conduit Conductor [11], HTS CORC 6-around-1 [98] and the HTS VIPER cable [99]. This factor of ∼10% can be greatly improved if a non-insulating, reduced-stabiliser winding is envisaged (as used in the 45.5 T insert in [97]), although that would introduce problems of both restricted current ramping speed and quench protection. All superconductors exhibit reducing critical current for increasing magnetic field strength, so grading the current density to be highest in the lowest-field region would permit the winding pack to be thinner with the same superconducting margin as a constant-current-density coil would have in its peak field region. For a TF central conductor stack, this could provide more space for an inboard solenoid winding if the mechanical stresses in the winding pack and the support structures were acceptable, or simply permit a smaller R_TF,in to be realised if required.
Fig. 2a-d show contour plots of the exported electric power for valid reactor results for Q values of 10, 20, 30 and 40, respectively, as a function of R_TF,in (and thus R_0 for the chosen A (1.8) and t_inboard) and B_TF,max. While Q = 10 machines could be compact designs at higher peak magnetic fields (i.e. exceeding 15 T), exporting several hundred MW of electrical power, the derivation in section 4 shows that these are unlikely to be attractive commercially. The range of valid reactor configurations at each Q is determined by three limitations: at low B_TF,max fields and low R by the condition P_CD < P_aux; at low fields and high R by the plasma temperature exceeding 65 keV; and at high fields and low R by the neutron wall loading limit Γ_n,max.
For the remainder of the analysis we focus on the Q = 20 solutions, which allow for reasonably compact designs with R_0 ≤ 4 m for our chosen aspect ratio, while having a similar magnitude of exported electric power. In all cases, re-mountable joints would be desired to facilitate the replacement of the inboard leg of the TF magnets. The alternative is to increase the thickness of the neutron shielding to improve the lifetime, at the expense of machine compactness. Predictions of alternative confinement time scalings can be represented by varying the H factor on the ITER-98(y,2) scaling; for example, the equivalent from the ST scaling in Ref. [6] would be H = 2.27. In Fig. 4 we have considered a range of solutions for Q = 20, with the same engineering and plasma physics parameters as in Table 1, but varying the H factor between 1.0 and 2.5 to reflect the effect of different confinement scalings. It is clear that for increasing H-factors, the machine size decreases significantly for the same net electric power.
Summary
In summary, we have proposed a methodology to infer basic machine and plasma parameters for an ST FPP, based on the peak field on the TF conductor in the centre column and a limited number of physics and engineering assumptions and approximations, endeavouring to respect the established dimensionless operational space of ST experiments [50,51,67]. The important impact of the recycled power required for the Balance of Plant has been estimated as part of this assessment. This estimate is intended to cover the electrical demand of all the auxiliary systems required to keep the power plant site operational, as well as the primary heat transfer system circulators, considered as additions to the necessary plasma heating & current drive systems. This workflow can, in principle, guide tokamak designers in terms of the realistic parameter space for a power plant more likely to be economically viable, which can subsequently be explored with system design codes. Inevitably the design parameter space is a trade-off between compactness (diameter, plasma volume) and component lifetime (neutron wall loading). In the cases considered, a sufficient thickness of neutron shielding has been assumed to assure that the TF central conductor stack, including its insulation if present, will serve for several full power years at the specified average neutron wall loading [32][33][34]. For Q = 20 designs, the peak field on the in-board TF conductor needs to be at least 15 T, in order that the net electricity exported to the grid can plausibly achieve a commercially relevant fraction of the gross generated electrical power, while also resulting in ST FPP reactors (A < 2) that are significantly more compact in diameter, height and plasma volume than the current EU-DEMO design (A = 3, B_TF,max = 12 T [100,101]).
Figure 1: Fraction of exported electricity as a function of Q for different fractions f_BoP of P_total,el, based on eq. (34).
Figure 2: Exported electrical power (in MW) as a function of the outer radius of the in-board TF leg R_TF,in and B_TF,max, for reactor designs meeting the criteria in Table 1 and for Q values of (a) 10; (b) 20; (c) 30 and (d) 40.
Figure 3: Contour plots of selected reactor parameters as a function of the outer radius of the in-board TF leg R_TF,in and B_TF,max, for reactor designs meeting the criteria in Table 1 and for Q = 20. The parameters are (a) B_0 (T); (b) I_p (MA); (c) β_N; (d) Γ_n (MW/m²); (e) TF central conductor life (full power years, fpy); (f) f_BS; (g) T_0 (keV) and (h) n_e,0 (10^20 m^-3).
Figure 4: Exported electrical power (in MW) as a function of the outer radius of the in-board TF leg R_TF,in and B_TF,max, for reactor designs with Q = 20 and meeting the criteria in Table 1, with H factor values of (a) 1.0; (b) 1.5; (c) 2.0 and (d) 2.5.
Table 1: Physics & reactor parameters chosen for the analysis [100].
For the chosen κ_nat, the plasma volume would exceed that of EU-DEMO if R_0 > 5.0 m [100]. Fig. 3a-f show other relevant reactor parameters such as B_0, I_p, β_N, Γ_n, f_BS and T_0. The most compact ST FPP designs in this range require magnetic fields on the conductor above 20 T, but the neutron wall loading is close to the Γ_n limit, implying shorter in-vessel reactor component lifetimes. An example is given as Variant 1 in Table 2. Variant 2 in the same table can be considered a compromise: more than double the plasma volume, but reduced neutron wall loading and a TF central conductor life approaching 2 full power years (fpy). A more conservative reactor design (Variant 3) would limit B_TF,max to ∼12 T, as in ITER and EU-DEMO, but, even at the smallest radius possible, it will have a consequently larger plasma volume, approaching that of EU-DEMO (2214 m³, [ | 6,200.6 | 2022-03-01T00:00:00.000 | [
"Physics"
] |
Hypothetical Dark Matter/Axion Rockets: And the Neutrinos without SUSY Problem
We will attempt to discuss the presence of Dark Matter (DM) and candidates for Axions from non-SUSY processes, in ways we think would tie into their discovery via the Large Hadron Collider (LHC). Once vetted and confirmed by LHC measurements, we can eventually talk about the feasibility of a DM/Axion ramjet. We reference material brought up by the author in IDM 2008 [1], material discussed by Hooper [2] which was initially in Identification of Dark Matter (IDM) 2008 but is an arXiv entry now [3], and the Meissner and Nicholai article on neutrino physics without SUSY [4]; and if one assumes commonality of DM with variants of neutrino physics [5], we can also discuss a solution to the conundrums brought up by Wired Magazine [6] as to a DM ram jet, which is further elaborated upon in [7] [8]. We conclude with a discussion of the fact that Corda [9], as well as S. Nojiri and S. D. Odintsov [10], brought up alternatives in Dark Matter which are also an alternative to the possible formation of (sterile) neutrinos without the use of SUSY [4]. The author brings this up since the ram jet model as presented hinges upon dark matter as a propulsion protocol to get around the problems brought up in [6].
What Can Be Said about Axions?
The author estimates that at near-light speeds, the available axion power would be about 3 watts/cm² times βγ², where β = v/c is the velocity relative to light and γ² is the square of the relativistic mass-increase factor [8]. At a velocity of 99.9% c, the available power from axions would be about 1500 watts/cm², enough power for a modest energy-efficient space drive; and the faster we go, the more such power becomes available. Note though that this is a long way from saying that we have a viable interstellar vehicle candidate. We are saying that in principle a Photon rocket MAY be improved upon, and that this DM/Axion [11] destruction via intense E & B fields is an avenue toward making a more powerful Photon rocket. We are leaving as a full-blown R&D project the feasibility of obtaining Axions/DM as part of phase transitions [12] [13] in the first place, which will be the last part of our article.
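A quick numerical check of this estimate (a sketch only; the 3 W/cm² baseline is the figure quoted above):

```python
def axion_power_w_per_cm2(beta, p0=3.0):
    """Available axion power ~ p0 * beta * gamma^2, with p0 = 3 W/cm^2."""
    gamma_sq = 1.0 / (1.0 - beta**2)  # square of the relativistic factor
    return p0 * beta * gamma_sq

print(axion_power_w_per_cm2(0.999))  # ~1500 W/cm^2, matching the text
```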
And Axions [14] are one component of the DM candidates we will bring up.
To do this, we will review the content the author presented in IDM 2008, with a goal toward isolating neutrino/Axion/DM candidates [14], as well as the available masses of DM and of neutrino candidates. The key point we will raise is this: we wish to improve what is known as a Photon rocket, i.e., propulsion without conventional propellant.
We will speak of the different DM and axion candidates, select the candidate we think is most pertinent to propulsion, and then discuss the basic physics of the photon rocket, with an explanation of how to upgrade it. The axion rocket concept is the modus operandi we will follow: a large magnetic field in a chamber can convert axions into photonic components. Due to constraints upon the mass we can carry, we favor a concept similar to an axion ram jet for interstellar travel; i.e., we wish to find a ram jet that has greater thrust than the photon rocket, and better than the axion rocket itself. Using propulsion candidates similar to DM may provide a way to gain further improvements to the axion rocket, which in itself, as a ram jet, is an improvement over the simple photonic rocket.
Reviewing the Highlights of the IDM 2008 Talk: Abandoned DM
Candidates, Leading to Present Candidates for DM?
We will in this section give reasons as to why the particular candidate for DM invoked in this paper was used. Doing so requires that we discuss why some of the baryonic and non-baryonic candidates for DM have recently been largely abandoned.
We will briefly define some of the known candidates for dark matter which have been abandoned. The first of them is the MACHO [15]. Briefly put, MACHOs are DM candidates with masses up to one tenth of a solar mass; i.e., the MACHO concept relies heavily upon relatively inert matter: galactic baryonic dark matter seems most likely to be in the form of compact objects and could be in one of two mass windows, either in the brown dwarf regime or in the mass range corresponding to supermassive black holes.
Consider also sterile neutrinos, as discussed by Murayama [15] [18]. The idea is that a sterile neutrino with mass in the keV range might answer some DM parameter problems which have dogged astrophysicists: adding 3 right-handed (sterile) neutrinos to the Standard Model (SM) can solve several "beyond the Standard Model" problems within one consistent framework. Great, except that they are pretty much ruled out via the fact that their mass is so small, and by the Tremaine-Gunn argument [19] to the effect that "For neutrinos to dominate the halo of dwarf galaxies, one would need to pack them so much that the Pauli exclusion principle would be violated." Traditional neutrino masses, if Warm Dark Matter [20] existed, would interfere with structure formation in the early universe; i.e., neutrinos moving at nearly the speed of light would interfere with structure formation, often erasing it at small scales. This is notwithstanding what the author views as the mistake made by Ruchayskiy's overly optimistic view of the sterile neutrino as a DM candidate [21].
So what about CHAMPs? M. Taoso, G. Bertone and A. Masiero (2008) [22] give a list of requirements for non-baryonic DM, a ten-point test for a non-baryonic DM candidate to match up to. We will reproduce it here: "An extraordinarily rich zoo of non-baryonic dark matter candidates has been proposed over the last three decades. Here we present a ten-point test that a new particle has to pass in order to be considered a viable DM candidate." CHAMPs, specifically massive, charged DM candidates, are largely ruled out due to their excessive mass, and also due to energy levels between 1 and 1000 TeV. We urge the readers to read M. Taoso, G. Bertone and A. Masiero's (2008) paper [22]. The main objection is that CHAMPs have many similarities to heavy hydrogen, and we would see traces of them in the ocean; suffice to say no such traces have been detected. Assuming, to the contrary, that CHAMPs may still be a viable candidate, we would be up against the datum that the proposed particles would weigh at least 100,000 times the mass of the proton, too heavy to be created by the world's most powerful particle accelerator, the Large Hadron Collider. Furthermore, CHAMPs would be too massive to produce experimentally detectable light, or electromagnetic radiation, even in a magnetic field in space.
Still, as the author reported in IDM 2008, this leads to a DM candidate with a reported mass value of between 300 and over 400 GeV, which fits within the strictures that Murayama (2006) [15] gave in his Les Houches lecture on extensions of the standard model and DM physics.
Meissner & Nicolai: Extending the Standard Model
With a classically conformal Lagrangian, with the usual Higgs doublet and one extra weak scalar field, this leads to a statement about the existence of the so-called Majoron candidate for an axion, without invoking SUSY [4]. The expression a(x) yields a pseudo-Goldstone particle associated with "spontaneous breaking of a new global (modified lepton number) symmetry", and this a(x) shares properties with the axion. This is partly due to conformal symmetry eliminating certain contributions to the conformal Lagrangian. So we get masses for particles like neutrinos, heavier than the SUSY neutrino candidate, but having the same "branching ratio" (a(x) is massless). In our treatment of this problem, we assume that Meissner and Nicholai [4] are almost right, i.e., that the axion is 10^-9 the rest mass of an electron in GeV value, but that what they calculated is close enough to still be of value and merit review. Meissner and Nicholai (2008) [4] worked with a classically conformal Lagrangian model. We begin with an interaction Lagrangian with the usual Higgs doublet Φ and an extra weak singlet complex scalar field φ(x). We minimize the effective potential, varying the values of the effective coupling constants. Then we work with H = Φ†Φ, and deal with the case where φ carries lepton charge. Then, letting h.c. denote the Hermitian conjugate, we make the following identifications in the Lagrangian: Q_i and L_i are the left-chiral quark and lepton doublets, U_i and D_i are the right-chiral up- and down-like quarks, while E_i are right-chiral electron-like leptons. This is a mixture of symmetry arguments and numerical minimization of the parameter space, using [4].
This will lead to a conformal symmetry reduction of the classical conformal Lagrangian, because terms of power zero through power two do not arise in the conformal Lagrangian, although they would normally be expected in this type of Lagrangian. In addition, the usual Higgs mass term would not be needed, since it would break the conformal symmetry of Equation (2) above, i.e., of this Lagrangian. Equation (3) is important in the resultant current calculations and was an aid to us in obtaining the DM bound given in Equation (5) below.
Parameter Space Treatment in Order to Isolate a DM Candidate
Meissner and Nicholai [4] eventually obtained the following averaged-out parameter space values. The value m_φ′ = 477 GeV so obtained is the very upper limit of the DM candidate mass range considered here.
First Principles of an Axion/DM Ramjet
We should state specifically that we are thinking of converting axion/DM "particles" into photons after intersecting them with a magnetic field; i.e., we are improving upon the specifications for a Photon rocket drive. Let us first review a few basics of the photon rocket, then turn to how to convert axions/DM to photons.
Currently proposed photon rocket designs include the Nuclear Photonic Rocket and the Antimatter Photonic Rocket (first proposed by Eugen Sanger in the 1950s) [23]. In a Nuclear Photonic Rocket, a nuclear fission reactor is used to directly heat tungsten coils or graphite blocks to white heat at the focus of a parabolic reflector. While using a laser to produce the light beam would provide much better collimation, this is offset by the reduction in efficiency incurred by powering a laser rather than using black-body radiation directly (a nuclear fission reactor will generally output at least 5 to 10 times more energy as heat than the electricity it could generate). Now, we can talk about a photon rocket in terms of the destruction of DM/Axions via intense E & M fields. Note that in doing this we are paying attention to the Wired (2008) [6] article, quoted here: "And then there's the issue of fuel. It would take at least the current energy output of the entire world to send a probe to the nearest star, according to Brice N. Cassenti [6], an associate professor with the Department of Engineering and Science at Rensselaer Polytechnic Institute. That's a generous figure: More likely, Cassenti says, it would be as much as 100 times that." So, we can only talk about perhaps a ram jet engineering construction, i.e., scooping up Axions/DM from the interstellar void and using that as a fuel source. So how do we get around this? It so happens that the mass values ascertained above in the author's IDM 2008 meeting presentation, of perhaps up to several hundred GeV, are the only way possible to get sufficiently high frequency output.
As can be inferred from P. Sikivie (1983) [24], every axion which is converted to a photon with the same total energy and going in the same direction produces a momentum kick set by the axion rest mass m. If we make a swap between axions and DM, or use a mass of several hundred GeV as a starting point due to the calculations referenced above, and avoid the absurdity mentioned in the Wired (2008) article, i.e., not think of carrying fuel to the stars but use the ram jet modus operandi, we can possibly think of obtaining a working, upgraded Photon rocket which would improve our chances of greatly improved space propulsion.
For a DM rocket, if the DM has some of the axion properties, this would likely mean working with velocities nearer to 0.1 c, not the near-c values spoken of above. The main point of first confirming a viable candidate for the DM particle, after we try to confirm some of the DM physics, would be in getting a realistic propulsion candidate, so we could get perhaps a ship of the order of magnitude of at least an aircraft carrier in bulk to travel at one tenth the speed of light. Furthermore, before we do this, it would be helpful if we confirmed a prediction given by Meissner and Nicholai (2008), namely the coupling of the Meissner axion candidate to photons, of the order given in [4]. Either this energy release, if handled appropriately, and/or a DM candidate merely improving the axion ram jet energy release figure given in the estimates above, would begin to yield a more practical candidate for improving the efficiency of a photon rocket and/or a ramjet based on axion conversion, via strong magnetic fields, into a powerful light source beam.
Conclusions
Our short article has focused upon several themes. We avoid the absurdity of the idea of carrying an energy supply of the magnitude of the Earth's entire energy output with the spacecraft for a journey to the stars. However, the real engineering problems lie ahead in a radical upgrade of the Photon rocket ship.
i.e., photon rockets provide the maximum exhaust velocity (c) and the minimum exhaust mass (zero); they represent the theoretical maximum in specific impulse, but provide very low thrust, given by 2PR/c, where P is the emission power in watts and R is the efficiency of the reflection/collimation apparatus.
i.e., how does one get a sufficiently good R for the reflection/collimation apparatus to work optimally, especially if the push per axion goes down? The crucial momentum kick is obtained from speeding the axion mass-energy packets up toward light speed, and as their incoming velocities approach c the kick gets smaller, reducing the effectiveness of the drive at high velocities.
Lots of engineering headaches abound here, but an even bigger one is an axion that has a mass about 1/400,000,000 of an electron mass, with about half a trillion of them in each cubic centimeter of space in the vicinity of the Earth, more per cc near the galactic center, but only 200,000 per cc in intergalactic space. Figuring that the DM candidates are 10^5 times larger in GeV mass value than electrons, which are in turn about 10^9 times larger than axion candidates, we then obtain almost 10^14 times the axion momentum kick if we look at the V = c contribution. However, as one approaches the speed of light, any purported adaptation of Equation (4) would be very, very problematic. We expect at most 0.1 c values in velocity, which would make the DM-to-photon conversion for an upgraded Photon rocket non-relativistic, but still a huge improvement over simpler versions of the Photon rocket, even, say, the Eugen Sanger [23] version written about above.
Furthermore, this is viable and useful to consider, especially when the axion may be
Further Considerations. Very Important to Keep in Mind. Future Research Directions?
It is important to keep in mind [9] [10] that further investigations of Dark Matter candidates are eminently feasible. Note that this paper makes use of the "sterile neutrino" paradigm of DM, which may be effectively challenged by extended theories of gravity, as emphasized in both [9] [10]. If sterile neutrinos [26] do not hold, then the work done in [9] [10] must be upheld and investigated, in order to find candidates for the ram jet as written up in [7], perhaps independently of the sterile neutrino hypothesis, if [26] is indeed falsified.
The ten-point test for a particle to be considered a viable DM candidate reads: (I) Does it match the appropriate relic density? (II) Is it cold? (III) Is it neutral? (IV) Is it consistent with BBN? (V) Does it leave stellar evolution unchanged? (VI) Is it compatible with constraints on self-interactions? (VII) Is it consistent with direct DM searches? (VIII) Is it compatible with gamma-ray constraints? (IX) Is it compatible with other astrophysical bounds? (X) Can it be probed experimentally? Doing some CKM-matrix-style decomposition, we find that the Lagrangian represented in Equation (1), with the SM terms as given by Equation (2), will admit two global U(1) symmetries, so that both the standard baryon symmetry and the modified lepton number symmetry scheme for DM lead to a mass range for WIMP candidates below 200 GeV, with a DM WIMP mass greater than 200 GeV leading to a quark and gluon decomposition and decay of the WIMPs. So their scheme as presented in IDM 2008 favored masses within an order of magnitude of 100 GeV. So, if we eliminate ultra-massive DM candidates this way, and get masses within the range of, say, 100 to 400 or so GeV, with favored values likely within sight of 100 GeV, we can now talk about what the behavior of an Axion (Majoron) DM ramjet would be, and why it likely obeys the estimate of about 1500 watts/cm² given above. First, in IDM 2008, the author predicted a mass range for DM of up to about 400 GeV per particle, and received vetting of this prediction from Dan Hooper [3], who specified a preferred 100-200 GeV range for DM candidates, for reasons stated in the manuscript. If Axions are indeed roughly equivalent to the DM candidates, this mass range in itself adds credibility toward implementation of Equation (4), lending credence to the author's estimation of a power value, for DM conversion as we approach V = c (the speed of light), of 10^14 × 3 watts/cm² times βγ².
If CHAMPs are negatively charged, they might have bound to iron and other elements to create supermassive varieties that could be detected by their weight. These elements might also absorb and emit telltale X-rays that could be observed by telescopes. This leaves a long shot for CHAMPs: not completely ruled out, but still very difficult to observe. So we have chosen other candidates to consider, and we deal with cold dark matter as our DM/Axion candidate basis for starting this inquiry. To do so, we reference what the author presented in IDM 2008 about DM, a takeoff on Meissner and Nicholai's (2008) [4] work on non-standard neutrino physics. | 4,415.6 | 2016-08-23T00:00:00.000 | [
"Physics"
] |
Perfect stimulated Raman adiabatic passage with imperfect finite-time pulses
We present a well-tailored sequence of two Gaussian-pulsed drives that achieves perfect population transfer in STImulated Raman Adiabatic Passage (STIRAP). We give a theoretical analysis of the optimal truncation and relative placement of the Stokes and pump pulses. Further, we obtain the power and the duration of the protocol for a given pulse width. Importantly, the duration of the protocol required to attain a desired value of fidelity depends only logarithmically on the infidelity. Subject to optimal truncation of the drives and with reference to the point of fastest transfer, we obtain a new adiabaticity criterion, which is remarkably simple and effective.
I. INTRODUCTION
The STIRAP (Stimulated Raman Adiabatic Passage) protocol got its first validation in an experiment [1,2] where partially overlapping Stokes and pump laser beams were employed to transfer the population from a lower energy state to a higher vibrational state without populating the intermediate level in a three-level system consisting of molecular vibrational states. This was done with a non-trivial pulse arrangement (usually referred to as the counter-intuitive sequence), where the Stokes pulse precedes the pump pulse. This selective and precise adiabatic transfer of population has been a subject of much interest from a theoretical as well as experimental perspective [3,4]. Due to its intrinsic robustness against practical imperfections, STIRAP has been widely adopted in various experimental systems [5,6]. Most importantly, the success of the protocol (even from the theoretical point of view) relies on the fulfilment of an adiabaticity criterion [7][8][9][10][11][12]. As per the quantum adiabatic theorem [13,14], a system which is initialised in an eigenstate follows the corresponding eigenstate of the instantaneous Hamiltonian. However, a widely accepted quantitative criterion for adiabaticity is still lacking [15]. An interesting approach based on a local adiabaticity criterion is discussed in [16], where the Hamiltonian generating the adiabatic evolution is designed in such a way that it fulfils the local adiabaticity condition at infinitesimal time steps; this is further used to obtain the adiabatic-evolution version of Grover's search algorithm.
Here we present an adiabaticity criterion which is demonstrably sufficient for achieving perfect population transfer. Our criterion is markedly different from the existing ones and is surprisingly effective despite its simplicity. The key concept of our analysis is the most sensitive point of the dynamics, in the middle of the sequence, where the rate of evolution of the quantum state is highest. Also, a high-fidelity STIRAP requires the pulse sequence to be implemented in an optimal time, which involves optimal truncation of the drives, as well as optimal width and relative placement of the drives in the pulse sequence. Here we show how these issues can be solved. Another important aspect is the power of the pulses, which we obtain optimally with the help of our newly introduced adiabaticity criterion. We analyse the situation in detail and arrive at analytical expressions that lead to a perfectly tailored STIRAP.
The protocol studied here is experimentally implementable in any three-level system. The set of parameters presented here can be directly used in a circuit QED based experimental setup with a multi-level Josephson-junction artificial atom [17]. There are also ways to suppress the non-adiabatic excitations by employing shortcuts to adiabaticity [6]. In superadiabatic (sa) STIRAP in three-level systems, an additional counterdiabatic pulse is needed, realizing a direct coupling between the initial and the target states. A circuit QED based setup implementing the saSTIRAP protocol in a three-level system has been demonstrated in Ref. [18] and its robustness against various experimental imperfections has been analysed in Ref. [19].
The Hamiltonian governing STIRAP for a three-level system in the computational basis $\{|0\rangle, |1\rangle, |2\rangle\}$, in the dispersive regime and under the rotating wave approximation, is given in [17]. The time-varying amplitudes of the driving fields are chosen as Gaussians with equal standard deviation $\sigma$, separated in time by an amount $t_s$: $\Omega_{01}(t) = \Omega_{01}^{0}\, e^{-t^2/2\sigma^2}$ and $\Omega_{12}(t) = \Omega_{12}^{0}\, e^{-(t-t_s)^2/2\sigma^2}$. An adiabatic evolution is ideally infinitely slow and would require the system to be in an eigenstate of the instantaneous Hamiltonian at all times. At the two-photon resonance condition (i.e., $\delta_{01} = -\delta_{12}$), a convenient choice of eigenvector is the dark state $|D\rangle = \cos\Theta\,|0\rangle - \sin\Theta\,|2\rangle$, which has no dependence on the intermediate level $|1\rangle$. Here the mixing angle $\Theta$ is defined by $\Theta = \tan^{-1}[\Omega_{01}(t)/\Omega_{12}(t)]$.
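As a minimal numerical sketch of these definitions (not the authors' code; ħ = 1 and both detunings set to zero are assumptions of this illustration), the snippet below builds the Gaussian drives, the three-level RWA Hamiltonian at two-photon resonance, and the dark state; one can verify numerically that H(t)|D(t)⟩ ≈ 0 at all times.

```python
# Illustrative sketch, assuming hbar = 1 and delta_01 = -delta_12 = 0.
import numpy as np

def drives(t, sigma, t_s, O01_0, O12_0):
    """Gaussian pump (0<->1) and Stokes (1<->2) amplitudes, offset by t_s."""
    O01 = O01_0 * np.exp(-t**2 / (2 * sigma**2))
    O12 = O12_0 * np.exp(-(t - t_s)**2 / (2 * sigma**2))
    return O01, O12

def hamiltonian(t, sigma, t_s, O01_0, O12_0):
    """Three-level RWA Hamiltonian at two-photon resonance."""
    O01, O12 = drives(t, sigma, t_s, O01_0, O12_0)
    return 0.5 * np.array([[0.0, O01, 0.0],
                           [O01, 0.0, O12],
                           [0.0, O12, 0.0]])

def dark_state(t, sigma, t_s, O01_0, O12_0):
    """|D> = cos(Theta)|0> - sin(Theta)|2>, Theta = atan(O01/O12)."""
    O01, O12 = drives(t, sigma, t_s, O01_0, O12_0)
    Theta = np.arctan2(O01, O12)
    return np.array([np.cos(Theta), 0.0, -np.sin(Theta)])

# Check the zero-eigenvalue property H|D> = 0 at a sample time:
t0, sigma, t_s = 0.0, 35e-9, -1.5 * 35e-9
H = hamiltonian(t0, sigma, t_s, 1.0, 1.0)
D = dark_state(t0, sigma, t_s, 1.0, 1.0)
print(np.allclose(H @ D, 0.0))   # True
```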
II. OPTIMAL PULSE DURATION
Adiabatic driving in principle demands an infinitely long operation time for a complete transfer of population. Ideally, as required by STIRAP, the Gaussian pulses are of infinite extent. However, to cope with experimental limitations on pulse generation and to minimize losses due to decoherence, one has to truncate the Gaussians $\Omega_{01}(t)$ and $\Omega_{12}(t)$ optimally. There is therefore a tradeoff between the affordable loss in transfer fidelity and the total pulse time. Revisiting the mixing angle, while assuming $\Omega_{01}^{0} = \Omega_{12}^{0}$, we write $\tan\Theta(t) = e^{r(t_s - 2t)/(2\sigma)}$, where we introduce the parameter $r = t_s/\sigma$. We truncate this STIRAP pulse sequence (consisting of the drives $\Omega_{01}(t)$ and $\Omega_{12}(t)$) from the left at time $t = t_i = -n_t\sigma + t_s = -(n_t - r)\sigma$, which we call the initial time point, and from the right at $t = t_f = n_t\sigma$, which we call the final time point, where $n_t$ is a real number ($n_t \in \mathbb{R}$). The total pulse duration is therefore $T = t_f - t_i = (2n_t - r)\sigma$. We fix the values of $r$ and $\sigma$ and present the corresponding dynamics of $\Theta(t)$ versus total pulse duration in Fig. 1, where different curves correspond to different values of $n_t$ (blue circles, red triangles, black diamonds, and orange squares correspond to $n_t = 0, 1, 2, 4$ respectively, as also specified at the right end of each curve in part (b)). The width of the Gaussian may be fixed to any arbitrary value (here $\sigma = 35$ ns) as this does not affect the variation of $\Theta(t)$ within a given total time $T$. Ideally, during the STIRAP drive, the mixing angle $\Theta(t)$ is expected to vary from 0 to $\pi/2$, while in reality a finite-time sequence effectively varies $\Theta(t)$ from $\Theta_i$ (close to 0) to $\Theta_f = \pi/2 - \Theta_i$. A closer look at Fig. 1 immediately shows that too small a choice of $n_t$ may result in a large $\Theta_i$: e.g., in Fig. 1(b) the blue curve marked with circles has $n_t = 0$ and $\Theta_i > \pi/6$, while the orange curve marked with squares corresponds to $n_t = 4$ and $\Theta_i \approx 0$. Thus, for a given $r$, lower values of $n_t$ result in poor transfer fidelity. To make matters worse, real situations by default assume $\Theta_i \approx 0$, so a too-small value of $n_t$ is liable to create errors that can be difficult to trace. An elaborated picture of the ideal situation is presented in Fig. 1(b), where the curves corresponding to $n_t = 1, 4$ have $\Theta_i = 12.6°$ and $0.64°$ respectively, with total pulse durations of 105 ns and 315 ns respectively. Clearly, in this case $n_t = 4$ has a much more desirable outcome than $n_t = 1$, despite the higher time cost. Another important factor in the time management of the STIRAP implementation is the relative separation between the two pulses ($r = t_s/\sigma$). Comparing the curves corresponding to $n_t = 1$ in Fig. 1(a) and (b), we find that for $r = -2$, $\Theta_i \approx 1°$ and the total pulse duration is 140 ns. Thus an optimal combination of $n_t$ and $r$ provides an efficient STIRAP without compromising much with respect to the time cost. A quite thorough picture can be obtained from the contour plot in Fig. 2, wherein the mixing angle corresponding to the final state ($\Theta_f = \pi/2 - \Theta_i$) is plotted for different combinations of $n_t$ and $r$. The final value of the mixing angle ranges from $\Theta_f = 45°$ (corresponding to $r = 0$, $n_t = 0$ and thus no time evolution) to $\Theta_f = 90°$, which corresponds to a complete transfer of population from $|0\rangle \to |2\rangle$.
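The truncation bookkeeping above is easy to reproduce. The short sketch below evaluates Θ at the truncation points and the total duration T = (2n_t − r)σ, using the expression for tan Θ(t) given earlier; with σ = 35 ns and r = −1 (values inferred from the worked numbers in the text, so treat them as assumptions) it reproduces Θ_i ≈ 12.6° and 0.64° for n_t = 1 and 4.

```python
import numpy as np

def truncated_sequence(n_t, r, sigma):
    """Initial/final mixing angle (deg) and total duration for a truncation n_t."""
    t_s = r * sigma
    t_i, t_f = -(n_t - r) * sigma, n_t * sigma
    theta = lambda t: np.degrees(np.arctan(np.exp(r * (t_s - 2*t) / (2*sigma))))
    return theta(t_i), theta(t_f), t_f - t_i

for n_t in (0, 1, 2, 4):
    th_i, th_f, T = truncated_sequence(n_t, r=-1, sigma=35e-9)
    print(f"n_t={n_t}: Theta_i={th_i:5.2f} deg, Theta_f={th_f:5.2f} deg, T={T*1e9:4.0f} ns")
```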
Next, we quantify the threshold for a desired fidelity of the STIRAP protocol, starting from Eq. (2). As stated earlier, under the STIRAP-driven dynamics our three-level system remains in the dark state $|D\rangle$ at all times. We parametrize the initial dark state, in close proximity to the ground state $|0\rangle$, by a small parameter $\epsilon$ (Eq. (4)), such that the ideal case is recovered when $\epsilon \to 0$. Since the system stays in the dark state throughout the evolution under the STIRAP Hamiltonian, the final state follows from the same parametrization (Eq. (5)). For both the initial and final states, the parameter $\epsilon$ is a measure of infidelity. Indeed, from the fidelity expression one easily arrives at Eq. (6), and thus at the total pulse duration, Eq. (7), which, as expected, is directly proportional to the width of the Gaussians. A larger value of $|r|$ corresponds to faster truncation (smaller $n_t$) and is overall advantageous in terms of the total pulse duration. For small enough $\epsilon$ ($\epsilon^2 \ll 1$), Eq. (7) leads to $\epsilon \propto e^{rT/(2\sigma)}$. Thus, the infidelity decreases exponentially with total time. Fig. 3(a) contains plots of $n_t$ versus $-r$, where different curves correspond to different values of $\epsilon$. It is interesting to note that the STIRAP sequence may exist even for negative values of $n_t$ when the relative separation between the two Gaussians is large. The total transfer time remains positive, as expected, which is clearly seen in Fig. 3(b) showing the variation of $T/\sigma$ with $-r$ at the corresponding values of $\epsilon$. Consider the vertical green line at $r = -1.5$ in Fig. 3(b) and the values of $T/\sigma$ where it intersects the curves plotted at different values of $\epsilon$. The smaller $\epsilon$ is, the higher the fidelity, which requires larger values of $T/\sigma$ for a fixed value of $r$. For instance, assuming $\sigma = 40$ ns, total transfer times of 184.2 ns, 122.8 ns, 79.9 ns, 61.3 ns, and 14.6 ns are required to obtain final values of the mixing angle $\Theta_f$ of $89.9°$, $89.2°$, $85.9°$, $81.9°$, and $48.6°$ respectively. Another interesting situation arises when $\Omega_{01}^{0} \neq \Omega_{12}^{0}$, which influences the left and right truncation limits while leaving the total transfer time unchanged. This situation is discussed in detail in Appendix B.
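A back-of-the-envelope consequence of ε ∝ e^{rT/(2σ)} is the logarithmic time cost quoted in the abstract. The sketch below inverts the relation for T under the simplifying assumption of a unit prefactor (the exact prefactor comes from the paper's Eq. (7), which is not reproduced in this excerpt), so the printed durations are indicative rather than the paper's values.

```python
import numpy as np

def required_duration(epsilon, r, sigma):
    """Total pulse time from epsilon ~ exp(r T / (2 sigma)); r must be < 0."""
    assert r < 0 and 0 < epsilon < 1
    return 2 * sigma * np.log(epsilon) / r

sigma, r = 40e-9, -1.5
for eps in (0.001, 0.01, 0.05, 0.1):
    T = required_duration(eps, r, sigma)
    print(f"eps={eps:5.3f}: T = {T*1e9:6.1f} ns (T/sigma = {T/sigma:.2f})")
# Halving the infidelity costs only ~2*sigma*ln(2)/|r| extra time.
```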
III. ADIABATICITY CRITERIA
Next, we evaluate the optimal value of the pulse amplitude corresponding to the optimal transfer time calculated in the last section. The total pulse area may then be compared with the total energy required to achieve the selective population transfer. The adiabaticity criterion for a STIRAP implementation requires that at any arbitrary time $t$ the effective area $A(t)$ be much greater than the time rate of change of the mixing angle; upon integration, this gives rise to the global adiabaticity condition discussed in [17]. For deeper insight into the protocol, let us look at the time dependence of these quantities. Plots of the time-varying amplitudes of the driving fields ($\Omega_{01}(t)$ and $\Omega_{12}(t)$), the effective area $A(t)$, and the rate of change of the mixing angle $\dot{\Theta}(t)$ are shown in Fig. 4, where $\sigma = 35$ ns, $\Omega_{01}^{0}/(2\pi) = 44$ MHz, and $\Omega_{12}^{0}/(2\pi) = 37$ MHz. It is easy to notice that the rate of change of the mixing angle is maximum in the middle of the sequence, where $\Omega_{01}(t) = \Omega_{12}(t)$.
Ideally (for $\epsilon \to 0$), at $t = t_I$ the mixing angle is $\Theta = \pi/4$ and the populations are $p_0 = p_2 = 0.5$. The STIRAP sequence is also fastest and most prone to errors at this point, such that a non-zero $p_1$ occurs close to it. Thus $t = t_I$ plays an important role. It is also clearly seen in Fig. 4 that the variation of the mixing angle with time attains its maximum value at $t = t_I$, when $\Omega_{01}(t) = \Omega_{12}(t)$.
Thus, in most cases, especially those corresponding to poor STIRAP performance, the time $t = t_I$ is where the terms on the left and right hand sides of the inequality in Eq. (8) are closest in value. In Fig. 4, at $t = t_I$, a vertical blue line intersects the various curves, such that $A(t_I)/(2\pi)$, $\Omega_{01}(t_I)/(2\pi) = \Omega_{12}(t_I)/(2\pi)$, $\dot{\Theta}(t_I)$, and the time axis are labelled by points A, O, Q, and P respectively. An intuitive argument based on this observation leads to a non-trivial relation that has to be obeyed for better STIRAP performance, given by Eq. (9), or alternatively by Eq. (10), since $\Omega_{01}(t) = \Omega_{12}(t)$ at $t = t_I$. The time $t_I$, obtained by setting $\Omega_{01}(t_I) = \Omega_{12}(t_I)$, is $t_I = t_s/2 + (\sigma/r)\ln\alpha$, where $\alpha = \Omega_{01}^{0}/\Omega_{12}^{0}$. Expressions for the drive amplitude, the effective area, and $\dot{\Theta}$ at $t = t_I$ follow (Eqs. (11)-(13)). Note that even for $\alpha \neq 1$, $\dot{\Theta}(t)$ attains its maximum value at $t = t_I$, where $\Omega_{01}(t) = \Omega_{12}(t)$: irrespective of the asymmetry introduced by different pulse amplitudes ($\Omega_{01}^{0}$ and $\Omega_{12}^{0}$), the rate of population transfer reaches its maximum at $t = t_I$. From Eqs. (9)-(13), the condition for a better STIRAP result is given by Eq. (14); for $\Omega_{01}^{0}/(2\pi) = \Omega_{12}^{0}/(2\pi) = \Omega$ and $\alpha = 1$, the inequality reduces to Eq. (15). This is the adiabaticity criterion for STIRAP population transfer, obtained by assuming the system to be in the dark state at all times (see Eq. (4)). The STIRAP population transfer calculated using Eqs. (6) and (15) (for $\alpha = 1$) will be labelled in the following as parameter 'Set 1'. The dependence of the right-hand side of Eq. (15) on $-r$ is plotted in Fig. 5(a) as a continuous black curve with markers. The corresponding population transfer obtained from Set 1 (for $\epsilon = 0.05$) is shown in Fig. 5(b) with black markers. It is noteworthy that the plot of $p_2$ versus $-r$ (see Fig. 5(b)) is independent of the value of $\sigma$ and depends in fact on $\epsilon$.
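The claim that Θ̇ peaks where the two drives cross, even for α ≠ 1, is easy to verify numerically. The sketch below (parameter values loosely follow Fig. 4 and are otherwise assumptions) checks the analytic peak position t_I = t_s/2 + (σ/r) ln α and the peak rate |r|/(2σ), both of which follow from differentiating tan Θ = α e^{r(t_s−2t)/(2σ)}.

```python
import numpy as np

sigma, r, alpha = 35e-9, -1.5, 44/37   # alpha = Omega01_0 / Omega12_0
t_s = r * sigma
t = np.linspace(-6*sigma, 6*sigma, 200001)
O01 = alpha * np.exp(-t**2 / (2*sigma**2))
O12 = np.exp(-(t - t_s)**2 / (2*sigma**2))
theta_dot = np.gradient(np.arctan2(O01, O12), t)   # dTheta/dt, numerically

t_peak = t[np.argmax(theta_dot)]
t_I = t_s/2 + (sigma/r) * np.log(alpha)
print(f"numerical peak: {t_peak*1e9:.2f} ns, analytic t_I: {t_I*1e9:.2f} ns")
print(f"peak rate: {theta_dot.max():.3e} 1/s, |r|/(2 sigma): {abs(r)/(2*sigma):.3e} 1/s")
```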
On the other hand, the total pulse duration $T$ is directly proportional to $\sigma$, and the drive amplitude $\Omega$ is inversely proportional to $\sigma$. Furthermore, a larger value of $\epsilon$ leads to less efficient population transfer in significantly shorter time. The results of Eq. (15) are compared with the global adiabaticity criterion, $\sigma\Omega \gg \sqrt{\pi}/4$ [17]. In Fig. 5(a), the continuous blue line corresponds to $\sigma\Omega = \sqrt{\pi}/4$, which means that for the global adiabaticity criterion to be satisfied, the product $\sigma\Omega$ must lie significantly above the blue line. Comparing the continuous blue curve and the black curve with markers in Fig. 5(a), we find that it is possible to have an effective population transfer even when the global adiabaticity criterion is clearly violated in the region $|r| < 0.5$. For instance, a STIRAP evolution with $r = -0.4$, $\sigma = 5$ ns, $\Omega = 70$ MHz, and an optimally tailored time of 115 ns (where $n_i = n_f = 11.3$ with $\epsilon = 0.01$), starting from the ground state, yields a final-state population $p_2 \approx 0.99$ that goes beyond the global adiabaticity condition stated earlier, with $T\Omega = 8.02$, which is $< 10$ and hence violates the adiabaticity condition reported in [20,21]. Also, for larger values of $|r|$, Fig. 5(a) shows a large disparity between the two adiabaticity criteria. Further, on close observation of the $p_2$ population profile in Fig. 5(b), we find that the population transfer is not very efficient for $|r| < 1$, and that the above example of a perfect transfer at $r = -0.4$ is a mere coincidence. These imperfections originate in the assumption of dark-state dynamics, which is not valid for small values of $|r|$ due to spurious excitations. They can be compensated by higher drive power. Based on these phenomenological considerations, we can design an optimal set of parameters for $|r| < 1$. We call this the $r$-conditioned set of parameters and label it as Set 2, given by Eqs. (16) and (17), where $\alpha = 1$. The right-hand side of the inequality in Eq. (17) is likewise plotted in Fig. 5(a). We also simulated STIRAP where the product $\sigma\Omega$ is close to but less than the respective right-hand sides of Eq. (17), and find that a good enough population transfer can still be achieved in certain situations, especially for large values of $|r|$. Thus, we conclude that the adiabaticity condition in Eq. (17) is sufficient but not necessary for a perfect population transfer.
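The r = −0.4 example can be re-run end to end by integrating the Schrödinger equation directly. The sketch below assumes the standard three-level RWA Hamiltonian at two-photon resonance with Ω taken as the angular amplitude 2π × 70 MHz for both drives (the text's normalization convention may differ, so the printed p₂ is illustrative rather than a reproduction of the quoted 0.99).

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, r, Omega = 5e-9, -0.4, 2*np.pi*70e6   # conventions assumed, see above
t_s, n_t = r * sigma, 11.3
t_i, t_f = -(n_t - r)*sigma, n_t*sigma       # total time (2 n_t - r) sigma = 115 ns

def rhs(t, psi):
    O01 = Omega * np.exp(-t**2 / (2*sigma**2))
    O12 = Omega * np.exp(-(t - t_s)**2 / (2*sigma**2))
    H = 0.5 * np.array([[0, O01, 0], [O01, 0, O12], [0, O12, 0]])
    return -1j * (H @ psi)                   # i d|psi>/dt = H|psi>, hbar = 1

psi0 = np.array([1, 0, 0], dtype=complex)    # start in |0>
sol = solve_ivp(rhs, (t_i, t_f), psi0, rtol=1e-9, atol=1e-11)
p0, p1, p2 = np.abs(sol.y[:, -1])**2
print(f"final populations: p0={p0:.4f}, p1={p1:.4f}, p2={p2:.4f}")
```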
IV. A PERFECT STIRAP PROTOCOL
A demonstration of the improvement achieved by employing the conditions in Eqs. (16) and (17) is shown in Fig. 6, where the population $p_2$ of the second excited state ($|2\rangle$) at the final time $t = t_f$ is plotted as a function of $\sigma$ and $-r$, with $\alpha = 1$ and $\epsilon = 0.05$. Fig. 6(a) shows $p_2$ with fixed $n_t = 3$ and $\Omega = 45$ MHz. Thus, any arbitrary point on the $p_2$-map in Fig. 6(a) satisfies $\sigma\Omega > \sqrt{\pi}/4$.
The simulation of a perfectly tailored STIRAP utilizing the conditions in Eqs. (16) and (17) is shown in Fig. 6(b). The resultant population profile demonstrates a well-tailored STIRAP protocol for the desired population transfer, with clear improvement relative to the results shown in Fig. 6(a). For a practical implementation, the total time cost and pulse-power evaluation are also important. The corresponding maps of the total pulse duration and maximum pulse amplitude in the same ranges of $\sigma$ and $-r$ are shown in the Appendix in Fig. 7. The wide range of resultant $T$ and $\Omega$ values provides flexibility to the protocol. Larger values of $\epsilon$ further lead to significant reductions in the time cost. In turn, a choice of slightly larger $\sigma$ can significantly reduce the amplitude $\Omega$. Alternatively, when evaluating experimental feasibility, various parameters can be constrained and a perfect population transfer can be designed with the help of these interwoven parametric equations and graphs.
For a desired value of $\epsilon$, and with the help of Fig. 3, Eqs. (6) or (7), and Eq. (15), one can easily obtain an experimentally feasible set of parameters $r$, $\sigma$, $n_t$, and $\Omega$ that leads to a perfect STIRAP. It is noteworthy that the efficacy of this perfect STIRAP protocol does not rigidly rely on the calculated parameters. In fact, parameters such as $n_t$ and $\Omega$ can be considered as the respective lower bounds to achieve population transfer with infidelity $\epsilon^2$. Larger values of these parameters will only make the transfer more efficient. This makes the protocol robust against experimental imperfections.
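The recipe in this paragraph can be phrased as a small helper. In the sketch below, T and n_t follow from the exponential infidelity scaling (unit prefactor assumed), while the amplitude floor uses a placeholder bound σΩ ≥ c/|r| standing in for the paper's Eq. (15), whose exact form is not reproduced in this excerpt; the function and its constant c are illustrative, not the authors'.

```python
import numpy as np

def stirap_parameters(epsilon, r, sigma, c=1.0):
    """Hypothetical parameter picker: duration, truncation, amplitude floor."""
    T = 2 * sigma * np.log(epsilon) / r      # from epsilon ~ exp(r T / (2 sigma))
    n_t = (T / sigma + r) / 2                # invert T = (2 n_t - r) sigma
    Omega_min = c / (abs(r) * sigma)         # placeholder for the Eq. (15) bound
    return {"T_ns": T * 1e9, "n_t": n_t, "Omega_min_Hz": Omega_min}

print(stirap_parameters(epsilon=0.01, r=-1.5, sigma=40e-9))
```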
V. DISCUSSION AND CONCLUSIONS
We presented a well-tailored STIRAP protocol that leads to a perfect population transfer $|0\rangle \to -|2\rangle$, along with flexibility in the choice of parameters. For a given $\epsilon$, a combined choice of the parameters $n_t$ and $r$ already determines the final population to be transferred. Furthermore, the choice of $\sigma$ determines the total pulse duration $T$, and the corresponding calculated value of the amplitude $\Omega$ sets the pulse power. A trade-off between the $\sigma$ and $\Omega$ values can be settled by evaluating experimental feasibility. We also discussed the more general situation, where the Gaussian drives can have unequal maximum amplitudes controlled by the parameter $\alpha = \Omega_{01}^{0}/\Omega_{12}^{0}$. The results for $\alpha = 1$ are discussed in the main text, while a detailed analysis is presented in the appendices. The analysis presented here relies on a simple set of calculations and observations, yet the end results are non-trivial. In conclusion, our calculations for the STIRAP drives lead to a perfect population transfer within experimentally feasible scenarios and without the help of any additional shortcuts to adiabaticity. | 4,876.2 | 2022-04-11T00:00:00.000 | [
"Mathematics"
] |
Disparate Effects of Mesenchymal Stem Cells in Experimental Autoimmune Encephalomyelitis and Cuprizone-Induced Demyelination
Mesenchymal stem cells (MSCs) are pleiotropic cells with potential therapeutic benefits for a wide range of diseases. Because of their immunomodulatory properties they have been utilized to treat autoimmune diseases such as multiple sclerosis (MS), which is characterized by demyelination. The microenvironment surrounding MSCs is thought to affect their differentiation and phenotype, which could in turn affect their efficacy. We thus sought to dissect the potential for differential impact of MSCs on central nervous system (CNS) disease in T cell mediated and non-T cell mediated settings using the MOG35-55 experimental autoimmune encephalomyelitis (EAE) and cuprizone-mediated demyelination models, respectively. As the pathogeneses of MS and EAE are thought to be mediated by IFNγ-producing (TH1) and IL-17A-producing (TH17) effector CD4+ T cells, we investigated the effect of MSCs on the development of these two key pathogenic cell groups. Although MSCs suppressed the activation and effector function of TH17 cells, they did not affect TH1 activation, but enhanced TH1 effector function and ultimately produced no effect on EAE. In the non-T cell mediated cuprizone model of demyelination, MSC administration had a positive effect, with an overall increase in myelin abundance in the brain of MSC-treated mice compared to controls. These results highlight the potential variability of MSCs as a biologic therapeutic tool in the treatment of autoimmune disease and the need for further investigation into the multifaceted functions of MSCs in diverse microenvironments and the mechanisms behind the diversity.
Introduction
Mesenchymal stem cells (MSCs) have potential therapeutic applications for a wide range of diseases as they offer many of the same benefits as embryonic stem cells without the logistical limitations. MSCs are a heterogeneous and multipotent population of stem cells with diverse functions that include protective and trophic effects such as inhibition of apoptosis and fibrosis, promotion of angiogenesis, progenitor cell maintenance, chemo-attraction, repair, and both inhibition and enhancement of immunity, reviewed recently in [1].
Although MSCs have been shown to exert inhibitory immune-modulatory properties, additional studies have shown opposite effects. For example, MSCs were immunogenic in a model of graft-versus-host disease (GvHD) and induced a cytotoxic memory T cell response [16]. In vitro demonstrations of suppression have also not been recapitulated in some in vivo settings, as MSCs lacked significant effect on experimental autoimmune neuritis [17]. Furthermore, we have recently shown a differential effect of MSCs on different effector subsets of CD8+ T cells [18]. While MSCs suppressed Tc17 development, they enhanced IFNγ-producing CD8+ T cell function and exacerbated CD8+ T cell-mediated MOG37-50 EAE. In our studies, MSCs enhanced early IL-2 production, which promoted Tc1 responses yet antagonized acquisition of the Tc17 program [18].
A growing literature in MS has focused on the roles of oligodendrocytes (OLs) and neuroprotection in disease and therapy, independent of immune suppression [19]. A limitation of the standard EAE models is that it is difficult to separate the effects of therapies on immune suppression, which then leads to a decrease in immune-mediated demyelination, from direct toxic effects on neurons and/or OLs [2]. During demyelination, myelin-producing OLs undergo apoptosis and myelin loss [19,20]. In response, oligodendrocyte progenitor cells (OPCs) proliferate and migrate to demyelinated areas to facilitate remyelination, but this remyelination process is typically incomplete or defective [19]. To assess the neuro-protective capacity of MSCs in a non-T cell mediated setting, models of chemically-induced demyelination, such as cuprizone and lysolecithin, have been employed. These models have the advantage of inducing demyelination via toxicity to OLs, without substantive involvement of the lymphocytic immune system and with predictable location and timing. Cuprizone is a copper chelator which produces reproducible demyelination of several brain regions including the corpus callosum and hippocampus [19,21,22]. Treated mice exhibit rapid and robust OL loss and demyelination followed by a period of remyelination. Although the effect of MSCs on inflammatory immune cells in neuro-degenerative disease is under investigation, little work has addressed the potential for MSCs to prevent demyelination in vivo by providing trophic support.
Despite conflicting reports of their effects on EAE, and a dearth of knowledge on how they impact non-T cell mediated demyelination, MSCs are currently being evaluated in human clinical trials for efficacy in MS [3,4,18,23,24]. In order to more comprehensively address the effects of MSCs on neuro-autoimmune disease, we evaluated their action on neurological pathologies that were separated into models of either classical MOG-induced EAE or chemically induced demyelination. Our results indicate differential therapeutic efficacy within these two avenues and support the importance of dissecting the specific mechanisms that govern MSC responses in order to maximize their future therapeutic use.
Mice
Female C57BL/6 mice were purchased from the National Cancer Institute (NCI). NOD-SCID mice were purchased from Jackson Laboratories and bred in-house; females were used for all experiments.
Ethics
All studies were approved by the Johns Hopkins University School of Medicine Animal Care and Use Committee. Mice were monitored daily by both laboratory personnel and veterinary staff and sacrificed as appropriate at signs of distress. For all experiments, animals were euthanized by isoflurane inhalation (Forane, Baxter Healthcare Corp., Deerfield, IL, www.baxter.com), followed by rapid cervical dislocation. For anesthesia in preparation for perfusions, animals were administered 5 mg/ml Nembutal Sodium solution (pentobarbital sodium injection, USP, Oak Pharmaceuticals, Lake Forest, IL, www.akorn.com). Protocols were followed to assure ethical treatment of animals, and veterinary care was provided for the mice.
After 24-48 hr, the medium was discarded, cells were washed with PBS, and fresh MMM without additional factors was given to the cells. Conditioned medium from these MSCs was harvested after 24 hours and 0.22 μm filtered.
MOG35-55 EAE Induction, Behavioral Analysis and Ex Vivo T Cell Analysis
To induce EAE, 100 μg pure MOG in complete Freund's adjuvant (CFA) (8 mg/ml M. tb) (Thermo Scientific, Waltham, MA, www.thermoscientific.com and Difco Laboratories, Detroit, MI, www.bd.com, respectively) was injected subcutaneously in the abdomen on day 0. Pertussis toxin (250 ng) was administered intraperitoneally (i.p.) on days 0 and 2. For mice treated with MSCs, 5 × 10^6 MSCs in phosphate-buffered saline vehicle (PBS) were injected i.p. on days 3 and 8. Control mice received PBS vehicle. Mice were monitored daily by a blinded observer for behavioral EAE symptoms and were scored on a point system as previously reported [25]. Mice never progressed beyond EAE score 4. Suffering was alleviated by adding mouse food pellets and hydration packs directly inside cages once mice reached at least EAE score 2.5. For ex vivo T cell analysis, mice were killed 14 days after EAE immunization, perfused with cold Hanks buffered saline solution, and whole brains were harvested. Brains were dissociated, filtered, and inflammatory infiltrates were isolated on Percoll gradients as previously reported [24]. Inflammatory cells were immediately restimulated with cell stimulation cocktail (as described) for 4 hours, stained, and analyzed by flow cytometry.
T Cell Intracellular Cytokine Analysis
T cells from polarization assays were re-stimulated with 2 μl/ml Cell Stimulation Cocktail (eBioscience), which contains PMA/ionomycin/brefeldin-A/monensin, for 5 hours at 37°C. Cells were then stained with cell surface and intracellular antibodies using the Foxp3 staining buffer set (eBioscience). Antibodies used for cell analyses were conjugated to FITC, PE, PerCP, and APC, and are as follows: CD4-PerCP and IFNγ-FITC (both from BD Biosciences, San Jose, CA, www.bdbiosciences.com), and IL-17A-APC (eBioscience). Cells were analyzed on a BD FACSCalibur.
Western Blot Analysis
Mice were anesthetized with isoflurane (Abbott Laboratories, Chicago, IL, www.abbott.com) and perfused through the left ventricle using chilled 1X Hank's buffered saline solution [HBSS (Cellgro, Corning, NY, www.cellgro.com)]. Brains were removed and dissected under a Motic SMZ-168 microdissection microscope (Motic, Richmond, BC, Canada, www.motic.com). The corpus callosum from one hemisphere and the hippocampus from the other were removed and immediately frozen on dry ice. The tissue was later thawed and homogenized on ice using a handheld homogenizer in RIPA buffer (Boston Bioproducts, Ashland, MA, www.bostonbioproducts.com) with protease and phosphatase inhibitors (both from Sigma-Aldrich, St. Louis, MO, www.sigmaaldrich.com). Protein concentration was determined by performing a bicinchoninic acid (BCA) assay with a Pierce BCA Protein Assay Kit (Thermo Scientific). Protein was separated by running 25 μg (corpus callosum) or 40 μg (hippocampus) on NuPAGE 12% Bis-Tris acrylamide gels (Life Technologies, Carlsbad, CA, www.lifetechnologies.com) in NuPAGE MOPS SDS running buffer (Life Technologies). Separated protein was then transferred to Bio-Rad Midi nitrocellulose membranes (Bio-Rad, Hercules, CA, www.bio-rad.com) using a Bio-Rad Trans-Blot Turbo transfer system (Bio-Rad). Membranes were washed and blocked in 5% non-fat milk in 1X Tris-buffered saline with 0.1% Tween-20 (TBS-T) for 1 hour at room temperature. Blocked membranes were then probed with primary antibody for myelin basic protein (MBP), SMI-99 (Covance, Princeton, NJ, www.covance.com), or for actin, AC-74 (mouse monoclonal anti-β-actin, Sigma-Aldrich), in blocking solution overnight at 4°C. Membranes were subsequently washed in TBS-T and then probed with secondary goat anti-mouse IRDye 680RD or 800CW (Licor, Lincoln, NE, www.licor.com) in blocking solution for 1 hour at room temperature. Membranes were then washed and imaged using a Licor Odyssey (Licor). Quantitative measures were obtained using Image Studio 2.0 software (Licor).
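The densitometry step reduces to normalizing the combined MBP band intensities to the β-actin loading control, lane by lane. A minimal sketch of that arithmetic is below; the band intensities are placeholders standing in for Image Studio output, not data from the study.

```python
import numpy as np

# Placeholder band intensities per lane (arbitrary units from densitometry)
mbp_large = np.array([1200.0, 980.0, 1510.0])   # 21.5 kD isoform
mbp_small = np.array([2300.0, 2100.0, 2650.0])  # 18.5/17.5 kD isoforms combined
actin = np.array([5000.0, 4800.0, 5100.0])      # beta-actin loading control

relative_myelin = (mbp_large + mbp_small) / actin
print("MBP/actin per lane:", np.round(relative_myelin, 3))
```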
Black Gold Staining
Mice were anesthetized with sodium pentobarbital (Akorn Pharmaceuticals, Lake Forest, IL, www.akorn.com) and perfused through the left ventricle using chilled 1X HBSS followed by 4% w/v paraformaldehyde (Sigma-Aldrich). Brains were removed and post-fixed in 4% w/v paraformaldehyde overnight at 4°C. Afterwards, brains were moved to 30% w/v sucrose (Sigma-Aldrich) and kept at 4°C for approximately 48 hours. They were then frozen in isopentane (Sigma-Aldrich) and kept at -80°C until sectioning. Sections of 20 μm were obtained on a cryostat and transferred to Superfrost Plus microscope slides (Fisher Scientific, Waltham, MA, www.fishersci.com). Sections were stained using the Black Gold II Myelin Staining Kit (EMD Millipore, Billerica, MA, www.emdmillipore.com) according to the manufacturer's instructions and imaged on an Olympus BX41 microscope (Olympus, Waltham, MA, www.olympuslifescience.com).
In vitro OPC Flow Cytometric Analysis
After 7-9 days of OPC culture in MSC-CM, OPCs were harvested from plates by micropipette and stained for flow cytometry. For MBP staining, OPCs were stained with the myelin basic protein monoclonal antibody SMI-99 (Covance), followed by AlexaFluor 488-conjugated F(ab')2 goat anti-mouse IgG (H+L) secondary antibody staining (Life Technologies). For evaluation of apoptosis, cells were stained with Annexin V-APC using the Annexin V Apoptosis Detection Kit (eBioscience). Cells were analyzed on a BD FACSCalibur.
Ex vivo Oligodendrocyte Flow Cytometry
Female C57BL/6 mice were cuprizone treated and administered MSCs or PBS as previously described. After 14 days of treatment, mice were anesthetized with isoflurane and perfused through the left ventricle using chilled 1X Hank's buffered saline solution. Corpora callosa were dissected from brains under a Motic SMZ-168 microdissection microscope and were then manually minced and processed into a single cell suspension as described [27] for flow cytometry. For oligodendrocyte identification, cells were stained with anti-galactocerebroside, clone mGalC, Alexa Fluor 488 conjugate (EMD Millipore). For evaluation of death, cells were stained with Annexin V-APC (BD Biosciences) and 7-AAD (BD Pharmingen) as suggested by the manufacturers. Cells were analyzed on a BD FACSCalibur.
Statistics
All statistics were conducted using GraphPad Prism (GraphPad, San Diego, CA).
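For readers reproducing the comparisons outside Prism, the two tests named in the figure legends map directly onto scipy; the sketch below uses made-up numbers purely to show the calls (Mann-Whitney U for the ordinal EAE clinical scores, Student's t-test for the continuous flow cytometry and densitometry readouts).

```python
from scipy import stats

# Hypothetical data, for illustration only
eae_pbs = [2.5, 3.0, 2.0, 3.5, 2.5]          # clinical scores, vehicle group
eae_msc = [2.0, 3.0, 2.5, 3.0, 2.5]          # clinical scores, MSC group
u_stat, p_u = stats.mannwhitneyu(eae_pbs, eae_msc, alternative="two-sided")

mbp_ctrl = [0.80, 0.90, 0.70, 0.85]          # MBP/actin ratios, controls
mbp_msc = [1.10, 1.00, 1.20, 0.95]           # MBP/actin ratios, MSC-treated
t_stat, p_t = stats.ttest_ind(mbp_ctrl, mbp_msc)

print(f"Mann-Whitney U: p = {p_u:.3f}; Student's t-test: p = {p_t:.3f}")
```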
Mesenchymal stem cells do not impact the course of CD4+ T cell-mediated EAE
We assessed the potential of MSCs to improve the disease course in the classical MOG35-55, CD4+ T cell-initiated EAE. Mice were immunized and then administered two doses of MSCs during the priming phase of disease. With this regimen, the MSCs had no impact on either the duration or severity of EAE disease (Fig 1).
Mesenchymal stem cells differentially affect the development of effector CD4+ T cell subsets
We next conducted in vitro studies to investigate the effects of MSCs on specific T cell subsets involved in EAE. Spleen-derived, naïve CD62L+ CD4+ T cells were activated with α-CD3/α-CD28 and either kept activated but un-polarized or polarized towards TH1 cells with the cytokines IL-2 and IL-12. We also polarized T cells towards two different EAE-associated TH17 cell populations with the cytokines IL-6 and TGF-β1 ± the cytokines IL-23 and IL-1β [6,28,29]. Comparable to our in vitro studies, we found that mice undergoing EAE exhibited a significantly higher frequency of TH1 cells in the brain with MSC treatment, though the TH17 cell frequency was unaffected (Fig 3C).
MSCs ameliorate cuprizone-mediated demyelination in the corpus callosum
Within the EAE model, much of the pathology arises secondary to the immune response, and thus we next sought to assess the specific effects of MSCs on the abundance of myelin, independently of their effects on the lymphocytic compartment of the immune system. To this end, we tested the impact of MSC therapy on cuprizone-mediated demyelination, which causes a less-inflammatory, chemically-induced demyelination of the corpus callosum and hippocampus. C57BL/6 mice were kept on cuprizone feed continuously and MSCs were injected on a weekly basis. After four weeks, mice were euthanized and their brains dissected and analyzed for myelin content via two different strategies (Fig 4A-4C). We analyzed focal effects by using the myelin stain Black Gold and quantified total myelin content by Western blot analysis of the affected regions of the brain. To eliminate the possibility of B and T cell-mediated immune effects on demyelination, we extended our cuprizone studies to NOD-SCID mice. In our experiments, while C57BL/6 mice exhibited significant demyelination after 4 weeks on cuprizone, there was a delayed effect in NOD-SCID mice, which did not reveal substantial demyelination until after 8 weeks. In addition, while C57BL/6 mice exhibited demyelination in both corpus callosum and hippocampus, the demyelination in NOD-SCID mice was primarily localized to the hippocampus.
As shown by the myelin stain Black Gold, mice that received MSCs had higher quantities of myelin (Fig 4B). Quantitatively, MSC administration produced a significant albeit modest effect on myelin abundance, as shown by the Western analysis (Fig 4C and 4D). A similar trend was observed in NOD-SCID mice, although the effect fell slightly short of statistical significance (Fig 4D).
MSCs decrease oligodendrocyte death
We next assessed whether MSCs might have direct effects on myelin-producing cells in vitro that might at least partially contribute to the observed effect of MSC treatment. Murine Mesencult Media (MMM) or IFNγ-containing MMM (for MSC immune stimulation) was given to MSCs for 24-48 hr. These media were then removed from the MSCs, and the cells were washed with PBS before the addition of fresh MMM, which was then harvested after 24 hr, filtered, and used as MSC-CM for OPC culture. OPCs were isolated, expanded for 3 days, and then exposed to conditioned media from un-stimulated MSCs or MSCs stimulated with the pro-inflammatory cytokine interferon-gamma (IFNγ), as MSC function has been shown to be influenced by inflammatory environments [14]. Effects of MSC-CM on differentiation-induced cell death of MBP-expressing OLs were quantified by FACS analysis of Annexin positivity on MBP+ OLs. Conditioned medium harvested from un-stimulated MSCs inhibited apoptosis of MBP+ OLs (Fig 5A). This inhibitory effect was enhanced when conditioned medium was harvested from MSCs that had been pre-treated with the inflammatory cytokine IFNγ. Thus MSC-mediated inhibition of OL apoptosis may contribute to the increased amount of myelin measured in cuprizone-mediated demyelination.
To address the in vivo relevance of MSC effects on oligodendrocyte survival, we administered MSCs to mice undergoing cuprizone-mediated demyelination and evaluated the survival of galactocerebroside-positive (GalC+) oligodendrocytes from the corpus callosum. MSC treatment resulted in a modest positive effect on both the total number of oligodendrocytes and on cell survival (Fig 5B).
Discussion
Multiple pre-clinical therapies have been proposed for the amelioration of autoimmune disease, including bi-specific antibody and antibody multimer treatment, oral tolerance induction, and peptide immunotherapy [30][31][32][33][34][35][36][37][38][39][40][41][42]. These approaches have the advantage of adding specificity to the target tissue to be protected from immune attack, but may be limited in practicality by many variables, including inability to reach the target tissue for protection, non-cell-specific expression of target tissue antigens, location of antigen expression, and the potential inability of tolerogenic cells to suppress activated, memory auto-antigenic T cells. In addition, in the context of neuro-autoimmune disease such as MS, demyelination may occur independently of inflammation, in which case these therapies would be of little use [19]. Hence there is a need for therapies that suppress inflammation and promote regeneration of tissue damaged as a result of autoimmune disease.
MSCs possess a number of distinctive characteristics that make them very appealing as a therapeutic modality. Their plasticity and diverse functions allow them to contribute to the repair and healing of multiple types of diseases [2][3][4][8]. This varied nature, however, is double-edged, in that MSCs may have the potential to acquire undesired properties influenced by the environment in which they are produced or to which they migrate. Given their differential effects, the present studies sought to evaluate their impact on two distinct elements that characterize the disease process of MS, namely the autoimmune response and demyelination.
Pre-clinical investigations of the impact of MSCs on the widely used immune-based rodent MS model, EAE, have yielded mixed results [3,5,18,23,24]. There are a number of important differences in the model systems tested, which may at least partially account for some of the disparate outcomes. The types of immune responses, along with the state of the immune system at the time of delivery (e.g., in an activated or suppressed state) and the subtype of predominant T cell response, could all contribute to differences, and further identifying the mechanisms of action of these cells will be critical to greater understanding.
To more extensively determine the effects of MSCs on the inflammatory and non-inflammatory events of neuro-degenerative disease, we evaluated the impact of MSCs both on the immune-generated model, MOG35-55 EAE, which relies on the induction of an autoreactive T cell response, and on the cuprizone model of myelin destruction, which produces a chemically induced demyelination. Surprisingly, we found that MSCs differentially affected each disease process, producing no detectable impact on disease progression in immune-mediated EAE, but exerting a positive effect on myelin abundance in cuprizone-mediated demyelination.
In previous studies, we reported that MSCs exacerbated MOG37-50 EAE, which is predominantly mediated by pro-inflammatory Tc17 cells and Tc1 cells [18,45]. Interestingly, we found that MSCs differentially affected the development of effector CD8+ T cells depending on the specific subtype. While MSCs potently prevented Tc17 development, they enhanced IFNγ production and cytotoxicity of activated, non-polarized CD8+ T cells. These cellular effects relied on the early T cell production of IL-2, which is enhanced by MSC co-culture. Considering the differential effect of MSCs on effector CD8+ T cell development, we evaluated the effect of MSCs on discrete effector subsets of CD4+ T cells. Consistent with our previous studies on CD8+ T cells, MSCs increased IFNγ production in activated, non-polarized CD4+ T cells, with a more noticeable increase in the presence of IL-12, while reducing the frequency of TH17 cells, as has been extensively observed in other studies [11][12][13]. MSCs also increased the frequency of TH1 cells in the brains of MOG35-55 EAE mice. These effects on the generation of CD4+ T cell effector subsets known to cause disease may provide one possible explanation for the failure of the MSCs to modulate MOG35-55 EAE pathogenesis in this setting, and have relevance to MS, in which it has been postulated that heterogeneity of the disease may be partly determined by the relative propensity for TH1 versus TH17 immune deviation [43].
Ultimately, demyelination and subsequent axonal pathology lead to disability in MS [2,19,44], and many current therapies target neuroprotection and remyelination. To assess the protective effects of MSCs while minimizing confounding immune effects, we employed the cuprizone model of demyelination and OL toxicity [26,28,29]. While the EAE model has many appealing features that make it relevant to the study of MS, one limitation is the intertwined nature of the immune response and demyelination. As demyelination is a direct result of the immune response, therapeutic interventions that suppress the immune response generally suppress the demyelination. However, progressive demyelination, with subsequent axon injury and loss, may occur somewhat independently of inflammation or persist after immune resolution [46,47]. The cuprizone model allows an evaluation of the potential for MSCs to directly impact de/re-myelination in a less immune-dependent setting, independent of B and T cells. Our results revealed that mice fed cuprizone and treated with MSCs had higher quantities of myelin than did cuprizone-fed untreated mice. This result is consistent with two previous reports demonstrating positive effects of MSCs on demyelination. One group reported that grafting of MSCs led to an increase in OPC migration to the lesion, remyelination, and axon conduction velocity; this group hypothesized that the secretion of trophic factors plays a significant role in the beneficial effect [46]. A second report utilized adipose-derived MSCs, which were injected i.v., and also found a beneficial effect of the MSCs on the quantity of myelin [48]. One contrasting study showed no significant effect of MSCs on demyelination [49], although there were numerous differences in experimental design, including a substantially shorter time frame after administration of MSCs (3.5-7 d) prior to analysis, which may have contributed to these differences.
The isolation of individual variables in multifactorial models is complex, and while cuprizone is generally considered to be non-immune mediated, some studies have suggested that there is a contributing immune element [50,51]. As many of the immune effects of MSCs have been ascribed to their actions on T cells, we assessed whether MSCs would produce a similar effect in immune-deficient mice. As NOD-SCID mice lack B and T cells, this model provides a useful system for determining whether the observed effects of MSCs on myelin abundance might be due to secondary effects on lymphocytes. The results of our studies showed a similar trend towards a beneficial effect in the immunocompromised mice, further supporting the hypothesis of a direct role of MSCs on OPCs and/or OLs.
One possible explanation for our finding of beneficial effects on myelin abundance is that MSCs directly impact the survival of OPCs and/or OLs during apoptosis, possibly through their trophic effects. We thus evaluated a culture system in which conditioned medium from MSCs was added to differentiating OPCs. Our results revealed that factors secreted by MSCs led to a significant survival advantage for the MBP+ OLs. In addition, MSCs exerted a positive effect on oligodendrocyte survival in vivo, as demonstrated in our studies of MSC administration to mice undergoing cuprizone-mediated demyelination. These results are consistent with previous studies in which MSC production of factors such as HGF positively impacted the generation of OPCs and neurons from cultured neurospheres [5]. Additional studies also showed that MSCs produced anti-apoptotic and growth factors that together exerted a protective effect on neural axons [52,53]. Thus MSCs may simultaneously modulate different aspects of pathogenic processes, which is of particular significance in MS/EAE, since there can be both an immune-mediated and a direct demyelinating component [2,44].
Taken together, these results indicate the potential for MSCs to act on the CNS directly and produce a beneficial effect on myelination in a toxic environment.
While MSCs clearly have potent therapeutic capabilities, a greater understanding of their plasticity within different settings will be necessary to optimize their clinical use. Mechanistic analyses of their various functions may help to reveal the specific nature of the separate functions that lead to these outcomes. Ultimately, identifying the diverse and individual capabilities of MSCs will help to harness their potential.
MSCs were added to the polarizing cells, and T cells were analyzed for proliferation and canonical cytokine production 3 days post-activation. MSCs exerted differential effects on CD4+ T cells, depending on the T cell effector subset. MSCs dramatically inhibited TH17 cell proliferation (Fig 2A). Interestingly, MSCs still significantly suppressed TH17 proliferation when the TH17-enhancing cytokines IL-23 and IL-1β were added. In contrast, activated and TH1 cells proliferated to a similar extent regardless of MSC presence, though MSCs tended to increase the percentage of undivided cells (Fig 2B). MSCs strongly suppressed differentiation into TH17 cells, even with the exogenous addition of the IL-17-lineage stabilizing and expansion cytokines IL-23 and IL-1β (Fig 3A). Activated, un-polarized CD4+ T cells do not produce significant quantities of IL-17A but do produce low levels of IFNγ. During MSC co-culture, these cells increased IFNγ production, albeit modestly (Fig 3B). The pro-inflammatory cytokine IL-12 potently induces IFNγ expression and drives TH1 development. Even after IL-12 administration, MSCs significantly enhanced the frequency of IFNγ-producing TH1 cells and the level of IFNγ production.
Fig 1. MSCs do not affect the MOG35-55 EAE disease course. C57BL/6 mice were immunized with MOG35-55 in complete Freund's adjuvant. At days 3 and 8 post-induction (black arrows), mice were administered either murine MSCs or phosphate-buffered saline vehicle. Mice were scored daily in a blinded fashion. Data shown are a combination of two independent experiments. Statistical analysis was done with the Mann-Whitney U-test. doi:10.1371/journal.pone.0139008.g001
Fig 2. MSCs differentially affect the proliferation of effector CD4+ T cells in vitro. Naïve CD62L+ CD4+ T cells were cultured in the absence or presence of MSCs at a T cell:MSC ratio of 4:1. CD4+ T cells were activated with plate-bound α-CD3 and soluble α-CD28 ± polarizing cytokines and neutralizing antibodies. After 72 hours, T cells were harvested, re-stimulated with cell stimulation cocktail for 5 hours, stained, and analyzed by flow cytometry. Shown are representative histograms of CFSE dilution, gated on CD4+ T cells, of undivided incomplete TH17 cells (without IL-23/IL-1β) and complete TH17 cells (with IL-23/IL-1β) (A), and bar graphs of the undivided incomplete and complete TH17 populations, activated CD4+ T cells, and TH1 cells (B), in the absence and presence of MSCs. Compiled bar graph data are from three separate experiments (n = 5/treatment group). Significance was measured by Student's t-test, with *p<0.05 and **p<0.01. doi:10.1371/journal.pone.0139008.g002
Fig 4. MSCs modestly suppress demyelination in cuprizone-treated mice. (A) C57BL/6 (C57) or NOD-SCID (N-S) mice were fed a cuprizone (Cupz, 0.2% w/w)-containing rodent diet continuously for either 4 (C57) or 8 (N-S) weeks, concomitant with weekly i.p. injections of either MSCs or PBS vehicle, typically initiated one week prior to cuprizone feed. Mice were then euthanized for downstream brain myelination analyses. MSC administration resulted in higher quantities of myelin, as evidenced by Black Gold staining (C57 rostral corpus callosum (CC) shown in B) and by Western blot probing of MBP large (21.5 kD) and small (18.5 and 17.5 kD) isoforms in C57 CC and N-S hippocampus (Hp) (C). Quantitative myelin measurements of combined large and small MBP isoforms from either C57 or N-S mice were generated from Western blot densitometries (D). Each graph shows the combined results from two individual experiments (n = 15-17, C57; n = 7-8, N-S). Significance was measured by Student's t-test, with *p<0.05. doi:10.1371/journal.pone.0139008.g004
Fig 5. MSCs decrease oligodendrocyte death. (A) Oligodendrocyte progenitor cells (OPCs) were isolated from neonatal rat brains and cultured in PDGF-supplemented Sato medium. For preparation of MSC-conditioned media (MSC-CM), MSCs were plated in Mouse Mesencult medium (MMM) alone or supplemented with interferon-gamma (IFNγ). After 24-48 hr, media were discarded, MSCs were washed with PBS, and subsequently received fresh MMM without additional factors. Conditioned media from these MSCs were harvested after 24 hours and filtered. After 3-4 days of culture in PDGF-supplemented Sato medium, OPCs then received a 1:1 ratio of Sato medium:MMM (-MSC-CM, conditioned medium), 1:1 of Sato:unstimulated MSC-CM (+MSC-CM), or 1:1 of Sato:IFNγ-stimulated MSC-CM, for a period of 7-9 days. Cells were then harvested and MBP/Annexin V-stained for downstream flow cytometric analysis. Cells were gated on MBP+ cells for Annexin V analysis. Shown are a representative histogram and a bar graph consisting of data from three independent experiments. (B) To evaluate the effect of MSC action on oligodendrocyte death in vivo, female C57BL/6 mice were cuprizone-fed and treated with MSCs or phosphate-buffered saline vehicle as previously described (n = 5/treatment group). After 14 days of treatment, corpora callosa were harvested from brains and manually minced. Single cell suspensions were prepared and stained for the oligodendrocyte marker galactocerebroside (GalC) and for Annexin V and 7-AAD to evaluate cell survival (Annexin V-neg/7-AAD-neg cells). Cells were analyzed by flow cytometry. Shown are compiled bar graph data from two independent experiments. Significance was measured by Student's t-test, with *p<0.05. doi:10.1371/journal.pone.0139008.g005 | 6,356 | 2015-09-25T00:00:00.000 | [
"Biology",
"Medicine"
] |
Analysis of the radiated electric field strength from in-house G.fast2 data-carrying wire-line telecommunication network
Correspondence: Josip Milanovic, RF Spectrum Monitoring Department, Croatian regulatory authority for network industries, R. F. Mihanovica 9, Zagreb, Croatia. Email:<EMAIL_ADDRESS>Abstract: G.fast profile 212a technology is an attractive choice for an operator offering a broadband service, as it operates over the existing copper telecommunications infrastructure (cables) already installed in user premises. Unfortunately, such telecommunications infrastructure is not designed to transmit data at the high frequencies used by G.fast technology, resulting in radiation during signal transmission. This radiation can have a direct impact on the performance and reliability of radio services operating in the same frequency range. In order to limit such radio interference, the International Telecommunication Union has proposed radiation limits for wired telecommunications networks. This paper provides a comparison of the ITU-T K.60 Recommendation with measurements of the electric field radiated from the telecommunications network when the G.fast profile 212a signal is transmitted through different types of telecommunications cables. The aim of this comparison is to assess whether the radiation from the telecommunications network in this study meets the radiation limits defined in the ITU-T K.60 Recommendation and, therefore, whether this radiation can be a source of interference to radio services operating in the same frequency range. In addition, this paper provides an analysis of the impact of cable construction on the total radiated field from the in-house part of the telecommunications network.
INTRODUCTION
User demands for higher data rates are constantly increasing, forcing telecommunications operators to invest in the development of telecommunications infrastructure. Although fibre-to-the-home (FTTH) architecture is emerging as the best long-term solution for providing gigabit connectivity, difficulties in the installation process, legal constraints, and the cost of fibre deployment postpone the massive use of fibre. In order to provide ultra-fast broadband access in places where the use of FTTH is not a cost-effective solution, telecommunications operators need to effectively exploit and reuse the existing copper telecommunications infrastructure [1,2]. A technology attracting increasing interest from operators looking to provide fibre-like throughput over the existing copper telecommunications network is G.fast profile 212a (G.fast2). This technology uses a frequency bandwidth of up to 212 MHz, offering an aggregated data rate of up to 2 Gbps in local loops shorter than 200 m [3][4][5]. Moving the fibre closer to users' premises and reusing the existing copper telecommunications infrastructure enables ultra-fast data transmission without "last-mile" installation challenges, which allows the implementation of G.fast2 technology in an economically and technically feasible way [6,7].
Although G.fast2 uses an improved precoding scheme, its ultra-fast data rate is generally achieved by using a higher frequency bandwidth compared to the bandwidth used by other DSL technologies (e.g., ADSL and VDSL). Unfortunately, increasing throughput by increasing frequency bandwidth has a number of disadvantages. It is known that a copper wire is not a perfect transmission medium and that part of the signal energy is radiated into the air when the signal passes through the wire [8][9][10]. The electromagnetic field radiated from the telecommunications network is a potential source of interference to radio services operating in the same frequency range, preventing the radio service from operating as planned [11]. This is especially important in critical services, where such radiation can directly affect the safety and security of human life. In order to limit unwanted emissions from telecommunications networks, various radiation limits have been proposed. The radiation limits most commonly used for the assessment of radio interference caused by radiation from wire-line telecommunications networks are defined in the ITU-T K.60 Recommendation.
Although customer premises equipment (CPE) has been identified as the main source of interference in the in-house part of the telecommunications network, the cables used for data transmission can also contribute negatively to the overall interference situation [12]. This paper presents the results of measuring the electric field (E-field) radiated from the in-house part of the telecommunications network during transmission of the G.fast2 signal, together with a comparison of the measured values against the ITU-T K.60 radiation limit values. The measurements were performed using different types of telecommunications cables for data transmission, in order to assess the effect of cable construction on the total field radiated from the in-house part of the telecommunications network.
The paper is organized as follows: Section 2 provides information on the network topology as well as the technical specifications of the tested cables. Section 3 presents the ITU-T K.60 Recommendation that defines the radiation limits. Section 4 provides information on the measurement methodology and procedure, while the measurement results, along with the E-field radiation rating, are presented in Section 5. Concluding remarks are given in Section 6.
BROADBAND TELECOMMUNICATION INFRASTRUCTURE
Increased demand for ultra-fast broadband brings fibre deeper into the distribution network. While fibre is still considered the most future-proof access technology for high-speed broadband services, for most internet providers the use of fibre can be both expensive and time-consuming. The introduction of fibre is particularly problematic in the final meters of the telecommunications network leading to and within users' premises (the "last-mile" part of the network). This is the main reason why internet operators, in order to provide an ultra-fast broadband connection in an economically acceptable way, are forced to combine the existing copper telecommunications infrastructure with optical fibres [13][14][15][16].
Depending on the point where the fibre terminates, telecommunications topologies vary. The topologies mainly used for G.fast2 signal transmission are fibre to the distribution point (FTTdp), fibre to the building (FTTB), and fibre to the door (FTTD) [17,18]. In the FTTdp topology, fibre cables lead to a distribution point (dp) located at a distance of up to 200 m from the user's premises, which is then connected to the existing copper infrastructure that connects each individual user [19]. In the FTTB topology, the fibre reaches the boundary of the building (e.g., the basement), while in the FTTD scenario the fibre reaches the boundary of the living space, such as a connection point outside the wall of the user's premises. The FTTx topologies mainly used for G.fast2 technology are shown in Figure 1.
According to [8], the wire-line telecommunications network includes telecommunications cables, their in-house cable extensions, and the telecommunications terminal equipment that is crucial to ensure efficient operation between the internet service provider (ISP) and the CPE. The critical part of the telecommunications network, in terms of interference, is the in-house part. Interference problems within the in-house part of the network often result from radiation from the CPE and from the telecommunication cables used for data transmission. In order to ensure that electrical equipment does not interfere with other services and equipment, CPE must be designed in accordance with the relevant electromagnetic compatibility (EMC) requirements [20,21]. The EMC principle for telecommunications cables is that the radiation from communication signals should be kept inside the cables. Since most telecommunications cables already installed in users' premises are not designed for data transmission at the frequencies used by G.fast2 technology, radiation from the cables occurs. This radiation can increase overall network radiation above the defined radiation limits and is usually the result of insufficient shielding, inadequate cable construction, improper cable installation, and/or inadequate cable maintenance.
The most common technique used to reduce cable radiation is twisting the wires in the cable (twisted cables). A twisted cable consists of two insulated wires twisted around each other with a twist length of less than λ_min/4, where λ_min represents the minimum wavelength of the signal in the cable. The reduction in radiation in a pair of twisted wires results from the fact that the two wires carry signals of equal magnitude and opposite sign, resulting in mutual cancellation of the field generated by the cable (Figure 2(b)). If the geometrical symmetry of the wire pair with respect to earth is imperfect, the differential signal will generate a common-mode excitation of the wire pair, resulting in increased cable radiation (Figure 2(c)) [22,23].
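As a rough back-of-the-envelope check of this twist-length criterion (not part of the measurement campaign), the short sketch below evaluates λ_min/4 for the highest G.fast2 frequency considered in this paper; the velocity factor of the cable is an assumed value.

```python
# Rough estimate of the twist-length criterion lambda_min/4 for a cable
# carrying G.fast2 signals up to ~212 MHz (illustrative values only).
C0 = 299_792_458.0        # speed of light in vacuum, m/s
F_MAX = 212e6             # highest G.fast2 frequency considered in this paper, Hz
VELOCITY_FACTOR = 0.65    # assumed propagation velocity factor of the cable

lambda_min = VELOCITY_FACTOR * C0 / F_MAX   # shortest wavelength in the cable, ~0.92 m
print(f"lambda_min/4 = {lambda_min / 4.0:.2f} m")   # ~0.23 m maximum twist length
```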
To describe the ability of a cable to reduce unwanted radiation, the term balance is used. Cable balance, in terms of voltage, is defined as [24]

b = U_com / U_diff,

where U_com represents the common-mode voltage and U_diff the differential-mode voltage. On the decibel scale, cable balance is often described as the longitudinal conversion loss (LCL), defined as [25]

LCL = −20 · log10(U_com / U_diff).

To additionally reduce the radiation of copper cables, as well as to reduce external radio impact, the twisted copper wires in modern xDSL cables are covered with aluminium tape (shielded cables). Although unshielded cables have poorer technical characteristics than shielded cables, unshielded cables are often used as part of the wire-line telecommunications network due to their lower cost. Taking this into account, and the fact that the in-house installation is usually made of unshielded cables, interference in the reception of radio signals operating in the same frequency band as G.fast2 technology can be expected. In order to analyse the level of radiation from the in-house part of the telecommunications network when different types of cables are used, in this study the radiation was observed when a TK33U cable, a TK59U cable, a UTP cat.5E cable and an S/FTP cat.7A cable were used. The cables were selected to reflect the main design features and specifications of the telecommunications cables mainly used for xDSL signal distribution. The technical specifications of the cables used in this paper are given in Table 1. The mark of the cable core construction consists of three groups of number symbols mutually connected by the "×" sign, marking: first group, the number of basic elements in the cable; second group, the way of stranding of the basic elements; third group, the cable conductor diameter.
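The following minimal sketch turns these two definitions into code; the function names are made up for illustration, and the voltages are hypothetical rather than measured values.

```python
import math

def cable_balance(u_com: float, u_diff: float) -> float:
    """Cable balance b = U_com / U_diff (linear ratio of the two mode voltages)."""
    return u_com / u_diff

def lcl_db(u_com: float, u_diff: float) -> float:
    """Longitudinal conversion loss in dB: LCL = -20 * log10(U_com / U_diff)."""
    return -20.0 * math.log10(u_com / u_diff)

# A well-balanced pair converts very little differential signal into common mode,
# so its LCL is high; a poorly balanced pair has a low LCL and radiates more.
print(lcl_db(u_com=1e-3, u_diff=1.0))   # 60 dB
print(lcl_db(u_com=1e-1, u_diff=1.0))   # 20 dB
```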
SPECTRAL COMPATIBILITY AND RADIATION LIMITS FOR WIRE-LINE TELECOMMUNICATION NETWORKS
In recent decades, the use of radio frequency spectrum has increased dramatically. The RF spectrum enables significant advances in technology from the mobile network to high-speed wireless Internet. Without its application our modern life would not be possible. Since the radio frequency spectrum is a limited natural resource, it must be managed in a professional, objective and efficient manner. Achieving efficient radio spectrum management implies interference-free operation of radio communication services.
Unfortunately, data transmission over a wired telecommunications network causes electromagnetic radiation and can potentially interfere with radio services operating nearby [26]. To achieve ultra-fast throughput, modern broadband wire-line technologies use a wider, higher-frequency bandwidth [27]. However, higher frequencies also mean higher signal attenuation (shorter loop length), higher power consumption and an increased number of radio services that could be affected by cable radiation. Radio services which could be affected by the network radiation when transmitting the G.fast2 signal are: amateur services, aeronautical services, broadcasting services, government services, radio navigation services, maritime services, distress and safety services, etc. The radiated electromagnetic field can cause intolerable errors in signal reception or, if the radiation is too high, it can cause a loss of communication. This scenario has to be avoided on a priority basis, especially if the radiation affects services that have a direct impact on human life, such as security services, safety services and social-welfare services.
Telecommunications authorities have proposed various radiation restrictions to prevent unwanted emission from the telecommunications network as well as to protect radio services operating in the same frequency range as broadband wire-line services. The most commonly used recommendation defining radiation restrictions for wire-line telecommunications networks is the ITU-T K.60 Recommendation proposed by the International Telecommunication Union (ITU). Although the ITU-T K.60 Recommendation sets radiation limits for the frequency range from 9 kHz to 3 GHz, in this paper, due to the nominal frequency range of the antenna used, only the limits from 30 to 212 MHz are considered. This frequency range also corresponds to the frequency range used by G.fast2 technology when compatibility with VDSL2 profile 30a technology must be achieved. According to the ITU-T K.60 Recommendation, the radiation limit for the previously defined frequency range is 40 dBμV/m at a distance of 3 m from the wire-line telecommunications network [8].
MEASUREMENT SETUP
G.fast2 technology enables a Gigabit connection by transmitting high-frequency signals over the existing copper telecommunications network. To assess the radiation level from the in-house part of the telecommunications network, the E-field radiation level is measured with the G.fast2 modem and the telecommunication cable installed in the anechoic RF chamber, while the digital subscriber line access multiplexer (DSLAM) is installed outside the chamber. All electronic equipment used in this paper complies with the EMC directives and standards (Council Directives 2015/863/EU and 2014/30/EU); that is, the emissions from the electrical devices are below the defined radiation limits.
Although the electronic equipment used in this study was manufactured in accordance with the EMC directive, this does not mean that the overall radiation from the telecommunications network in the user's premises meets the radiation limits defined by the international authorities. The reason for this is that the telecommunications network, in addition to the electronic equipment, also consists of existing copper telecommunications cables that are usually not designed to transmit data at the frequencies used by G.fast2 technology, resulting in increased radiation when the signal passes through them [28]. Therefore, in order to analyse the influence of the cable construction on the measured level of E-field radiation, the measurement is performed for different types of telecommunication cables. The individual cable length is 6 m, and the technical specifications of the cables used in this study are given in Section 2. An anechoic chamber is an ideal location to test radiation from a telecommunications network since it is a radiation-free environment where only radiation from the object of interest (the G.fast2 modem and the copper telecommunication cable in this case) is present. To reduce reflection inside the chamber, pyramidal RF absorbers were placed on the walls inside the chamber. The internal dimensions of the anechoic chamber in which the measurements were performed were 7.62 × 5.18 × 5.49 m (length × width × height).
The measurements of the radiated E-field level were performed using an R&S ESMD receiver and a Schwarzbeck VULB 9160 linearly polarized logarithmic broadband antenna designed for a nominal frequency range from 30 to 1000 MHz. The antenna was placed on a tripod 1.7 m above the ground level and 3 m from the telecommunications cable, as defined in [8]. The measurement setup is shown in Figure 3.
The R&S ESMD receiver was set to measure the frequency range from 30 to 212 MHz with a frequency step of 51.75 kHz. This frequency step is equal to the subcarrier spacing in G.fast2 technology as defined in [29]. To obtain the E-field level in dBμV/m, the antenna correction factor, as well as the connector and cable losses between the antenna and the receiver, were added to the values measured on the receiver.
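A minimal sketch of that correction step is given below; the antenna factor, cable and connector losses are placeholder numbers, not the calibration data of the instruments used.

```python
def efield_dbuv_per_m(receiver_dbuv: float, antenna_factor_db: float,
                      cable_loss_db: float, connector_loss_db: float = 0.0) -> float:
    """Convert the receiver reading (dBuV) to a field strength (dBuV/m) by adding
    the antenna correction factor and the losses between antenna and receiver."""
    return receiver_dbuv + antenna_factor_db + cable_loss_db + connector_loss_db

# Illustrative numbers only:
print(efield_dbuv_per_m(receiver_dbuv=25.0, antenna_factor_db=14.0,
                        cable_loss_db=1.2, connector_loss_db=0.3))   # 40.5 dBuV/m
```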
MEASUREMENT RESULTS
The radiated E-field level from the telecommunications network installed in the anechoic chamber is measured with the antenna in a vertical position because a higher radiated signal is measured in that position. According to the specifications defined in [8], the E-field radiation was measured using a peak detector and a measuring bandwidth of 120 kHz. The G.fast2 aggregate transmit power is set to 4 dBm, as specified in [29]. For ease of comparison, the mean value, x̄, of the radiated E-field was calculated for each cable used in this paper. To obtain the E-field reference level, the measurement was first performed when the G.fast2 modems were turned off. This measurement result represents the noise level in the anechoic chamber and is used for comparative reasons to estimate the radiation level from the telecommunications network installed in the chamber. The result of the reference (noise) level measurement is presented in Figure 4. Since the radiation from the modem power supply unit (AC/DC adapter) also contributes to the total network radiation, in order to estimate this radiation the measurement was performed when one modem was turned on as well as when six modems were turned on (in this measurement mode, the modems were not synchronized with the DSLAM; there was no data transmission in the telecommunications cable). The result of this measurement is shown in Figure 5. As expected, the radiation in the anechoic chamber increases with an increasing number of modems in power-ON mode.
The results of measuring the E-field radiation when the G.fast2 signal is transmitted through a TK33U cable are shown in Figure 6. To analyse the cumulative effect of increasing network radiation when several twisted wire pairs in the cable are used simultaneously, the E-field radiation is measured when one twisted pair of wires (one modem) is used, as well as when six twisted pairs of wires (six modems) are used for data transmission. Although a situation where several twisted wire pairs are simultaneously used in a single user premise to transfer G.fast2 data is unlikely to be found in practice, this measurement can be used as a good indicator to estimate the increase in radiation when the number of used wire pairs increases. Figure 6 shows that, although the CPE equipment used in this study meets all relevant EMC requirements, the radiation from the in-house part of the telecommunications network is significantly above the radiation limits when the G.fast2 signal is transmitted through a TK33U cable; for example, the highest E-field level is measured at 128.6 MHz and is 48.5 dBμV/m, which is 8.5 dB above the limit defined in the ITU-T K.60 Recommendation. Such a high radiation increase is the result of the common-mode current produced by the G.fast2 modem, which propagates to the modem's G.fast port and radiates into the surrounding area via the connected cable. In addition, Table 2 shows that the mean E-field level in the frequency range from 30 to 212 MHz when the G.fast2 signal is not transmitted is x̄ = 6.4 dBμV/m, and when the G.fast2 signal is transmitted via a single twisted wire pair of the TK33U cable the mean E-field level increases to x̄ = 33 dBμV/m. The radiation additionally increases when the number of used twisted wire pairs increases; for example, when six twisted wire pairs are used, x̄ = 43.1 dBμV/m, which is 36.7 dB above the reference E-field level and 10.1 dB above the mean E-field level when one pair of twisted wires is used. Since the R&S ESMD receiver is set to measure the frequency range from 30 to 212 MHz with a frequency step of 51.75 kHz (which is equal to the subcarrier spacing), it can easily be shown that 16.8% of the subcarriers used for data transmission have an E-field level above the limits defined in the ITU-T K.60 Recommendation when one twisted wire pair of the TK33U cable is used. From the presented results it is evident that the proposed radiation limits are too optimistic regarding the radiation from a telecommunications network of which the TK33U cable is an integral part. The presented results also indicate the need to significantly reduce the G.fast2 power spectral density in order to achieve compatibility with the proposed radiation limits if a TK33U cable is used.
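A small sketch of how such per-scan statistics (mean level, maximum, share of subcarriers above the 40 dBμV/m limit) can be derived from a sweep on the 51.75 kHz grid is given below; the data are synthetic and the helper function is hypothetical, not part of the measurement software.

```python
import numpy as np

K60_LIMIT_DBUV_M = 40.0   # ITU-T K.60 limit at 3 m for 30-212 MHz, dBuV/m

def summarize_scan(freqs_mhz: np.ndarray, efield_dbuv_m: np.ndarray) -> dict:
    """Mean E-field, maximum and share of subcarriers above the K.60 limit."""
    above = efield_dbuv_m > K60_LIMIT_DBUV_M
    return {"mean_dbuv_m": float(efield_dbuv_m.mean()),
            "max_dbuv_m": float(efield_dbuv_m.max()),
            "freq_of_max_mhz": float(freqs_mhz[efield_dbuv_m.argmax()]),
            "pct_above_limit": 100.0 * float(above.mean())}

# Synthetic scan on the 51.75 kHz subcarrier grid used by the receiver:
freqs = np.arange(30.0, 212.0, 0.05175)
levels = np.random.default_rng(0).normal(33.0, 5.0, freqs.size)
print(summarize_scan(freqs, levels))
```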
The measurement results of the E-field radiation when the G.fast2 signal is transmitted through the TK59U telecommunication cable are shown in Figure 7. As expected, the radiation from the telecommunications network when the TK59U cable is used is lower than the radiation from the network when the TK33U cable is used. This is due to the fact that the TK59U cable, unlike the TK33U cable, has a sheath (aluminium tape) that reduces unwanted emission from the cable, as presented in Table 1. The highest radiated E-field level from the network when using the TK59U cable is measured at 164.8 MHz and is 43.5 dBμV/m, which is 3.5 dB above the limit defined in the ITU-T K.60 Recommendation. As can be seen in Table 2, the number of subcarriers above the proposed limits also decreases compared to the measurement when using a TK33U cable.
The results of E-field radiation measurements when the G.fast2 signal is transmitted through a UTP cat.5E cable are shown in Figure 8. It is evident that the radiation from the telecommunications network when using UTP cat.5E is lower than when the TK33U cable is used. This is the result of the cable construction (i.e. shorter twist length), which allows efficient data transmission at higher frequencies and reduces radiation from the UTP cat.5E cable. Figure 8 also shows that the radiation from the telecommunications network increases with the number of twisted wire pairs used. It should be noted that the largest number of twisted wire pairs in the UTP cat.5E cable is four (eight copper wires). The highest radiation when using one twisted wire pair is measured at 119.45 MHz and is 41.6 dBμV/m, while when four twisted wire pairs are used, the highest radiation is measured at 106.9 MHz and is 47.2 dBμV/m. Furthermore, when one pair of twisted wires is used, the radiation is 1.6 dB above the limit defined in the ITU-T K.60 Recommendation; more precisely, only 9 subcarriers (0.2% of the total number of subcarriers) have an E-field level above the limit defined in the ITU-T K.60 Recommendation. These results can be used as a good indicator of the radiation that can be expected in user premises, given that the UTP cat.5E cable is often used for installation of the in-house and in-building network.
In order to analyse the E-field radiation from the telecommunications network when a higher-quality cable is used, the G.fast2 signal is transmitted through an S/FTP cat.7A cable. This cable was developed with strict specifications regarding protection against crosstalk and electromagnetic interference. The S/FTP cat.7A cable has twisted wire pairs that are individually wrapped in aluminium-laminated plastic foil, and all the wires are additionally covered with a common tinned copper braid. This cable is designed to operate at frequencies up to 1000 MHz, enabling a 10-gigabit Ethernet connection. The results of measuring the E-field radiation from the telecommunications network when using the S/FTP cat.7A cable are presented in Figure 9. As expected, owing to the best shielding of all the cables used in this paper, the lowest level of E-field radiation is measured when an S/FTP cat.7A cable is used. The mean radiation level when using one pair of twisted wires is x̄ = 25.4 dBμV/m, which is 0.9 dB lower than when using a TK59U cable, 2.3 dB lower than when using a UTP cat.5E cable and 7.6 dB lower than when using a TK33U cable with the same number of twisted wire pairs. The highest E-field level is measured at 122.8 MHz and is 39 dBμV/m, indicating that all subcarriers are below the limits proposed in the ITU-T K.60 Recommendation.
If the wires are not perfectly balanced, the common-mode signal produced by the introduced differences in the amplitude and phase of the signals in the cable will cause an increase in radiation from the telecommunications cable [30]. To simulate radiation from an unbalanced cable, a measurement is performed with one of the wires of the twisted pair in the TK33U cable disconnected, as shown in Figure 2(c). The results of the E-field radiation measurements when a balanced and an unbalanced TK33U cable are used, compared to the limits defined in the ITU-T K.60 Recommendation, are shown in Figure 10.
From Figure 10 it can be seen that the E-field radiation when using an unbalanced cable is higher than when using a balanced TK33U cable (one pair of twisted wires). In particular, the mean value of the E-field radiation when using an unbalanced TK33U cable is 13 dB higher than when using a balanced TK33U cable. The maximum radiation when using an unbalanced cable is measured at 75 MHz and is 59.8 dBμV/m, which is 19.8 dB above the limit value defined in the ITU-T K.60 Recommendation. According to the results presented in Figure 10 and Table 2, it is evident that the radiation from the telecommunications network when an unbalanced cable is used can cause serious problems with radio signal reception.
Since an unbalanced cable is the result of a faulty cable condition, to prevent radio reception disturbances, the cable must be repaired or replaced as soon as possible.
CONCLUSION
In this paper, the radiation from the in-house part of the telecommunications network when the G.fast2 signal was transmitted through a TK33U cable, a TK59U cable, a UTP cat.5E cable and an S/FTP cat.7A cable was measured and analysed. The presented results show that the radiation from the telecommunications network significantly depends on the cable structure used for G.fast2 data transmission; that is, although the electronic equipment is manufactured in accordance with the EMC directive, the radiation from the network can be above the defined radiation limits if an inadequate cable is used. This could pose a serious problem in the process of implementing G.fast2 technology, as most telecommunications cables already installed in user premises are not designed for data transmission at the high frequencies used by G.fast2 technology.
The measurement results showed that the highest radiation level was measured when a TK33U cable was used, while the lowest radiation levels were measured when the TK59U and S/FTP cat.7A cables were used, owing to the improved shielding of these cables. In addition, the radiation from the in-house part of the telecommunications network was measured when an unbalanced TK33U cable was used. As expected, the results show that the radiation increases when using an unbalanced TK33U cable compared to using a balanced TK33U cable.
In order to reduce unwanted emission from the wire-line telecommunications network, as well as to protect radio services operating in the same frequency range, the ITU-T K.60 radiation limits have been proposed. To assess whether or not the radiation from the telecommunications network meets these limits, a comparison between the measured E-field radiation values and the radiation limit values was also made. The measurement results show that only the radiation from the in-house part of the telecommunications network when using an S/FTP cat.7A cable meets the radiation limits defined in the ITU-T K.60 Recommendation, while the radiation from the network when using the other cables is above these limits.
This clearly indicates that full protection of radio services is not possible due to the fact that it would require very low radiation from the network which could not be achieved without significant investment in telecommunications infrastructure. Therefore, in order to ensure coexistence between broadband wire-line telecommunications networks and radio services operating in the same frequency range, effective techniques and methods must be applied to reduce the radiation from the telecommunications network (e.g. notching certain subchannels, reducing power spectral density, etc.).
ACKNOWLEDGMENTS
This work and the study behind it would not have been possible without the exceptional technical and financial support of the Croatian regulatory authority for network industries (HAKOM). We would also like to thank the following people without whom we would not have been able to complete this research: Goran Jurin M.Sc. for his enthusiasm, motivation, and immense knowledge, as well as our colleagues from the RF Spectrum Monitoring Department for their assistance in data collection and insightful comments.
"Physics"
] |
Interevent-time distribution and aftershock frequency in non-stationary induced seismicity
The initial footprint of an earthquake can be extended considerably by the triggering of clustered aftershocks. Such earthquake–earthquake interactions have been studied extensively for data-rich, stationary natural seismicity. Induced seismicity, however, is intrinsically inhomogeneous in time and space and may have a limited catalog of events; this may hamper the distinction between human-induced background events and triggered aftershocks. Here we introduce a novel Gamma Accelerated-Failure-Time model for efficiently analyzing interevent-time distributions in such cases. It addresses the spatiotemporal variation and quantifies, per event, the probability that it was triggered. Disentangling the obscuring aftershocks from the background events is a crucial step towards better understanding the causal relationship between operational parameters and non-stationary induced seismicity. Applied to the Groningen gas field in the north of the Netherlands, our model elucidates geological and operational drivers of seismicity and has been used to test for aftershock triggering. We find that the hazard rate in Groningen is indeed enhanced after each event and conclude that aftershock triggering cannot be ignored. In particular, we find that the non-stationary interevent-time distribution is well described by our Gamma model. This model suggests that 27.0 (± 8.5)% of the recorded events in the Groningen field can be attributed to triggering.
1. Gamma IAFT model details and maximum-likelihood equations
1.1. Notation and survival analysis.
In this Supplementary Information we use the standard notation in probability theory: random variables are represented by capital letters (e.g. X), while their realizations are written as lower-case letters (e.g. x). A probability is denoted as P(·), the expectation as E(·) and H0 represents a statistical null hypothesis. Estimates of parameters are indicated by a hat, e.g. k̂.
We will use (and briefly state) standard concepts from survival analysis. The mathematical proofs and definitions used are well described in several textbooks, e.g. (1), but will be mostly omitted here. In survival, or failure-time, analysis the central concept is the hazard rate, the instantaneous event rate, equal to h(u) = −(dS(u)/du)/S(u), where S(u) = P(U > u) = 1 − F(u) is the survival function. In statistical seismology this instantaneous event rate is often referred to as the intensity function.
The following equivalence relations are of great importance. Let H(u) equal ∫_{s=0}^{u} h(s) ds, the integrated (or accumulated) hazard, and let f(u) represent the probability-density function (pdf, f(u) = −dS(u)/du). Then S(u) = exp(−H(u)) and f(u) = h(u) exp(−H(u)).
1.2. Gamma Instantaneous Accelerated-Failure-Time model.
The pdf and survival function of a random variable U0 that follows a Gamma(τ0, k) distribution equal

f0(u) = u^(k−1) exp(−u/τ0) / (Γ(k) τ0^k),   S0(u) = Γ(k, u/τ0) / Γ(k),   (6)

where Γ(k, s) = ∫_{s}^{∞} x^(k−1) exp(−x) dx is the upper incomplete Gamma function. Equivalently, the hazard function equals h0(u) = f0(u)/S0(u). Finally, the integrated hazard function equals H0(u) = log(Γ(k)) − log(Γ(k, u/τ0)).
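For numerical work with these expressions, the sketch below evaluates the stationary Gamma survival, hazard and integrated hazard with SciPy; the values k = 0.73 and τ0 = 20 days are illustrative assumptions, not the fitted Groningen estimates.

```python
import numpy as np
from scipy.stats import gamma
from scipy.special import gammaincc   # regularized upper incomplete gamma Q(k, x)

def gamma_survival(u, k, tau0):
    """S0(u) = Gamma(k, u/tau0) / Gamma(k)."""
    return gammaincc(k, u / tau0)

def gamma_hazard(u, k, tau0):
    """h0(u) = f0(u) / S0(u)."""
    return gamma.pdf(u, a=k, scale=tau0) / gamma_survival(u, k, tau0)

def gamma_integrated_hazard(u, k, tau0):
    """H0(u) = -log S0(u) = log Gamma(k) - log Gamma(k, u/tau0)."""
    return -np.log(gamma_survival(u, k, tau0))

# For k < 1 the hazard is elevated right after an event (clustering) and decays
# towards the constant background rate 1/tau0 at long times.
u = np.array([0.1, 1.0, 10.0, 100.0])
print(gamma_hazard(u, k=0.73, tau0=20.0))
print(1.0 / 20.0)   # long-time limit of the hazard
```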
Let t equal the process time starting at t = 0. In this research we model the hazard by a Gamma hazard changing over time as a result of a time-varying scale parameter τ(t). More specifically, to model the interevent times in the Groningen field we model the Gamma scale parameter as a function of (time-varying) local characteristics. The scale parameter represents the inverse of the background rate: since, assuming τ(t_l + u) > 0, lim_{u→∞} [h(u) − 1/τ(t_l + u)] = 0, the interpretation of 1/τ(t) as the background rate is indeed appropriate for this time-varying model. The IAFT model introduced here can also be fitted with distributions with more parameters whose long-time hazard equals a Poisson hazard, i.e. converges to a constant. In the Groningen case study we only have access to the monthly compaction rate and therefore the scale parameter is constant per month. The background rate is thus a discontinuous function with jumps at the end of each month. Let n(u + t_l) equal the number of mutually exclusive months at time u after t_l. In this context, let s_j equal the time in days, after t = t_l, at the end of month j, with the exceptions that s_0 = 0 and s_{n(u+t_l)} = u. Then the integrated hazard equals

H(u, t_l) = Σ_{j=1}^{n(u+t_l)} [ log Γ(k, s_{j−1}/τ_j) − log Γ(k, s_j/τ_j) ],   (11)

where τ_j denotes the scale parameter during month j. The local occurrence probabilities, P(event occurred in region x | event occurred at time t), are denoted w(x, t). The local (region-specific) hazard at time t in region x is defined as h_x(u, t_l) = w(x, t_l + u) h(u). Finally, the log-likelihood of the observed interevent times and event locations in the Groningen field equals*

ll((u1, x1), ..., (un, xn)) = log(f((u1, x1), ..., (un, xn))),   (13)

where n is the total number of events, u_i and x_i are the interevent time and the region of the i-th event, and t_{l,i} = Σ_{j=1}^{i−1} u_j.
1.3. Maximum-likelihood estimates.
The parameters (k, τ0, β) of the Gamma IAFT model are estimated by maximizing the log-likelihood defined in Eq. (13). These maximum-likelihood estimates are hard to find numerically as a result of the existence of local maxima of our complex log-likelihood. If τ(t) is constant over the course of each interevent time, then the log-likelihood simplifies, and the joint distribution is a member of the exponential family. Therefore, the simplified log-likelihood is concave and a unique maximum exists, see e.g. Appendix A in (3). Finding the parameters such that the gradient of this log-likelihood equals zero gives rise to this unique maximum. The components of the gradient, as well as the Hessian of the simplified log-likelihood (Eq. (14)), can be derived analytically; here ψ(x) = d/dx log(Γ(x)) equals the Digamma function. Our optimization strategy is as follows. We numerically find the roots of the gradient (here the Hessian is used). These parameter values are then used as the starting values to numerically find the maximum-likelihood estimates of the complex model (the Broyden-Fletcher-Goldfarb-Shanno algorithm (4) is used).

* In this research we ignore the ongoing (censored) interevent time at the end of the catalog since its influence was negligible. More generally, the log-likelihood for a process with censored times also contains the log survival contribution of each censored interevent time.
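As a simplified illustration of this two-step strategy (not the full likelihood of Eq. (13)), the sketch below fits a constant-scale Gamma model to synthetic interevent times with SciPy's BFGS optimizer; parameters are optimized on the log scale to keep them positive.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

def neg_loglik(log_params, u):
    """Negative log-likelihood of iid Gamma(tau0, k) interevent times;
    (k, tau0) are optimized on the log scale to keep them positive."""
    k, tau0 = np.exp(log_params)
    return -np.sum(gamma.logpdf(u, a=k, scale=tau0))

# Synthetic interevent times (days); the real analysis uses the month-varying
# scale and the full likelihood of Eq. (13) instead of this simplified version.
rng = np.random.default_rng(1)
u = rng.gamma(shape=0.73, scale=20.0, size=397)

res = minimize(neg_loglik, x0=np.log([1.0, u.mean()]), args=(u,), method="BFGS")
print(np.exp(res.x))   # estimated (k, tau0)
```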
2. Determination of model covariates and parameter values for Groningen
In this study we have excluded events from the KNMI catalog based on a magnitude of completeness equal to 1.3 as suggested in (5).
2.1. Candidate covariates for Groningen.
The covariates taken into account in the case-study analysis can be found in Table S1. The stationary covariates are derived from the geological top Rotliegend surface model from the NAM, provided via TNO (6). For the compaction covariates we use the compaction model provided by Shell (7). For stability of the numerical optimization of the likelihood the covariates are transformed. First, y_F(x) is scaled by 10^−5, y_T(x) by 10^−1, y_Ċ(x, t) by 10^5 and y_C(x, t) by 10. Subsequently, all covariates are standardized by their mean (over the regions, see Table S1), e.g. y_C(x, t) := 10·y_C(x, t) − 1.47. The numerical values of the stationary covariates obtained by this procedure can be found in the corresponding supplementary table. Furthermore, we considered a critical total compaction by introducing a truncation c, mimicking the compaction level at which all faults in a region have reached their fault strength. The Gamma time-scale parameter used in the Groningen case study, including this truncation, is given by Eq. (26).
2.2. Final model.
In statistics it is common to rely on a forward- or backward-selection procedure to select the covariates that should be included in the model. However, the optimal model can easily be missed when covariates are highly correlated. We therefore fit all possible models that include an effect of the total compaction (since this is the only covariate that can explain the increase in intensity over time). Each model is fit using the approach explained in Section 1.3. However, the derivative of ll((u1, x1), ..., (un, xn)) with respect to the compaction-truncation variable c does not exist; the gradient can only be derived given a value of c. For several values of the c variable we numerically find the roots of the gradient (here the Hessian is used). Subsequently, the different c fits are compared and the parameters giving rise to the highest log-likelihood are selected. In turn these estimates are used as the starting values of the numerical optimization. The final model is selected based on Akaike's information criterion (AIC) (8) to prevent overfitting. Below we present the (log-likelihood-based) best-fitting model with J covariates involved.
It is important to note that the k estimate proves insensitive to the model choice; thus our estimate of the fraction of triggered events is also insensitive to the model choice. Based on the AIC, the model with the covariates total compaction, compaction rate, total fault density, percentage of faults with specific strike angle, throw-thickness ratio and truncation is selected. The effect estimates and corresponding standard errors are presented in Table S4.
The final estimates define the development of τ(t), the evolution of which is displayed in Fig. S2. This background rate, 1/τ(t), is partitioned over the 15 regions using the weights of the Gamma IAFT model. For a subset of the regions, the evolution of the proportion of seismic events over time is displayed in Fig. S3.
The effect estimates and cap estimate presented in Table S4 can be transformed back such that they can be interpreted in the units of the original covariates introduced in Table S1; e.g. the cap variable in meters of compaction is estimated as (0.94 + 1.47)/10 with a corresponding standard error of 0.044/10. These back-transformed values are presented in Table 1 of the main text. The final model thus includes a truncation on the total compaction. This effect is shown for each region in Fig. S4. The Gamma IAFT model suggests that all faults in the most active region 10, the Loppersum area, were critically stressed at the end of 2007, when the local compaction was 0.24 meter.

Table S3. Maximum-likelihood Gamma IAFT-model parameter estimates using J = 3, ..., 10 covariates respectively.
Risk of bias due to successive-event overlaps in seismograms.
In real catalogs, short interevent times cannot be detected because the waveforms of successive events overlap in the seismograms. As a result it is impossible to observe interevent times below some threshold. The observed interevent times are thus realizations of a conditional distribution (random interevent time U > u0, for some threshold u0). If the probability that U ≤ u0 is reasonably large, then this conditional distribution deviates seriously from the full distribution. Parameter estimates obtained by fitting the non-conditional distribution will then be biased. Instead one can fit the conditional distribution by maximizing

Σ_{i=1}^{n} [ log f_i(u_i) − log S_i(u0) ] = Σ_{i=1}^{n} [ log f_i(u_i) + H_i(u0) ],   (27)

where f_i, S_i and H_i are the pdf, survival function and integrated hazard function of the i-th interevent-time distribution. The potential bias is the result of the Σ_{i=1}^{n} H_i(u0) contribution that lowers minus the log-likelihood. It is thus important to evaluate Σ_{i=1}^{n} H_i(u0) after the model has been fitted. If the contribution is low, then the conditional distribution is close to the full distribution and the thresholding will not result in biased estimates.
In the Groningen catalog, which we used to fit the final model (Tables S3 and S4), the smallest interevent time was equal to 0.00069 days (1 minute). From Groningen seismograms, see e.g. (9, 10), it becomes clear that the time between the first P-wave arrival at a seismic station and the moment when the signal amplitude has decayed below the first P-wave amplitude is of the order of seconds. Thus, it is reasonable to assume that all interevent times larger than or equal to 1 minute are observed. Using the model fitted in this study we find that Σ_{i=1}^{397} H_i(0.00069) = 0.2, which is extremely small compared to the final minus log-likelihood of 2335.8. In this case we thus do not suffer from time-threshold bias. In case the thresholded distribution does deviate from the unconditional distribution, one should maximize Eq. (27) instead of Eq. (13). As an example we present the thresholded-distribution fit using u0 = 5 minutes, such that we fit 395 interevent times; the estimates are presented in Table S5. Indeed the estimates are similar to those presented in Table S4.
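A minimal sketch of this check, assuming a constant shape k and per-event Gamma scales (the shape and scale values below are assumptions, not the fitted values), is:

```python
import numpy as np
from scipy.special import gammaincc

def truncation_correction(u0, k, tau0_per_event):
    """Sum_i H_i(u0) for Gamma interevent-time distributions with (possibly
    different) scale parameters per event; if this sum is small relative to
    minus the log-likelihood, the detection threshold u0 can safely be ignored."""
    tau0_per_event = np.asarray(tau0_per_event, dtype=float)
    return float(np.sum(-np.log(gammaincc(k, u0 / tau0_per_event))))

# The threshold of 0.00069 days is the smallest observed interevent time
# mentioned in the text; k = 0.73 and the 20-day scales are illustrative.
print(truncation_correction(u0=0.00069, k=0.73, tau0_per_event=np.full(397, 20.0)))
```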
Comparison with the memoryless IAFT model.
To test for the presence of earthquake-earthquake interaction we compare the final Gamma IAFT-model fit with the fit of an Exponential (k = 1) model. The maximum-likelihood estimates of the parameters of the Exponential model are obtained in the same way. Based on the likelihood-ratio test (p < 0.0001) we reject the hypothesis H0 : k = 1 and conclude that the improved fit cannot be explained by chance alone. It is important to note that both models are fitted to the Groningen dataset and thus predict roughly the same number of events in the period October 1st 1995 - October 1st 2018. The difference between the models is clearly illustrated by plotting the temporal hazards over time, see Fig. S5. The hazard of the Gamma IAFT model is far more spiky than the hazard of the Exponential IAFT model. The Gamma IAFT model results in a more clustered profile of events, as illustrated in Fig. 3a.
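A sketch of this likelihood-ratio test of H0: k = 1 is given below; the two log-likelihood values are illustrative assumptions, not the fitted Groningen values.

```python
from scipy.stats import chi2

def likelihood_ratio_pvalue(ll_full: float, ll_restricted: float, df: int = 1) -> float:
    """p-value of the likelihood-ratio test: 2*(ll_full - ll_restricted) is
    compared to a chi-square(df) under H0 (here H0: k = 1, the Exponential model)."""
    return chi2.sf(2.0 * (ll_full - ll_restricted), df)

# Illustrative log-likelihoods (Gamma vs. Exponential fit):
print(likelihood_ratio_pvalue(ll_full=-2335.8, ll_restricted=-2360.0))
```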
3. Model validation with simulated Groningen data
The final model obtained in Section 2.2 for the Groningen field results in an estimate of the non-stationary driving function 1/τ(t) and an interevent-time distribution after each event time t_l. Since in the Groningen case the time scales of variation of τ vs. t and h vs. u turn out to be well separated, the instantaneous distribution, following each t_l, can be approximated by a Gamma distribution. As a result the time variation of τ can be scaled out, giving a single Gamma, Γ(k, 1), distribution for the variable U/τ(t_l + U) over the full time period. This function is shown in Fig. S6 together with the actual Groningen interevent data rescaled by τ̂(t); the rescaled data are consistent with this pdf (Wald-Wolfowitz runs test, p = 0.17). Thus, once we account for the change in intensity over the years, the mixed distribution of background and triggered interevent times can be well described by this universal Gamma curve. However, since the data are scarce, the agreement between curve and data is in itself no sufficient validation of our model, and more formal validation tests will be presented below.
3.1. Simulation description.
To simulate past-event catalogs based on the final model we rely on the theory of survival analysis. In general a random variable with cumulative distribution function (cdf) F is generated by drawing a realization of a Uniform[0, 1] random variable V and evaluating the inverse cdf at this V, i.e. F^{-1}(V). Equivalently, one could evaluate the inverse of the survival function, S = 1 − F, at the random V. From survival analysis we know that F(u) = 1 − exp(−H(u)). Our Gamma IAFT model defines the hazard rate, and thus the integrated hazard H(u), presented in Eq. (11), and the cdf.
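The sketch below illustrates this inverse-transform step for a stationary Gamma hazard by solving H(u) = −log V numerically; for the time-varying model the same root-finding applies with the piecewise-constant H(u) of Eq. (11). The parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaincc

def draw_interevent_time(k, tau0, rng):
    """Draw U by inverting the integrated hazard: solve H(u) = -log(V), V ~ U(0,1)."""
    target = -np.log(rng.uniform())                   # Exp(1) draw
    H = lambda u: -np.log(gammaincc(k, u / tau0))     # integrated Gamma hazard
    u_hi = tau0 * (target + 50.0)                     # upper bracket with H(u_hi) > target
    return brentq(lambda u: H(u) - target, 1e-12, u_hi)

rng = np.random.default_rng(0)
print([round(draw_interevent_time(0.73, 20.0, rng), 2) for _ in range(5)])
```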
The maximum-likelihood estimates of the final model theoretically follow an asymptotic multivariate normal distribution†, see e.g. (11). More precisely,

√n (θ̂_n − θ) →_d N(0, I_θ^{−1}),

where θ̂_n are the maximum-likelihood estimates for (k, β1, ..., βJ) based on a sample of size n, θ = (k, β1, ..., βJ), →_d represents convergence in distribution and I_θ equals the Fisher information matrix. Let θ̂ represent the maximum-likelihood estimates of the Gamma IAFT-model parameters. We estimate the covariance matrix Σ̂ by computing (numerically) the inverse of the Hessian of the log-likelihood evaluated at the maximum-likelihood estimates θ̂. The first step in the simulation of an earthquake catalog consists of drawing a realization θ of the multivariate normal N(θ̂, Σ̂).
Subsequently, a clock is set to t = 0 (October 1995 in the Groningen case study) and an interevent time U1 is realized. Furthermore, a multinomial random variable, with probabilities equal to the Gamma IAFT-model weights at time U1, is realized to determine the event location. Next, the clock is set to U1, after which a second interevent time, U2, is realized based on the integrated hazard after U1, and the location of this event is generated based on the weights at time U1 + U2. These steps continue until N events are realized such that the simulated clock passes the end of the catalog period. One thousand generated catalogs based on the models fitted to the interevent times from the Groningen catalog (Gamma IAFT and Exponential IAFT) are shown in Fig. S7.

† This theory holds for the model presented as Eq. (10) and Eq. (12). However, for the model of Eq. (26) used in the Groningen case, θ = (k, c, β1, ..., βJ) and the theory breaks down, since c → min{c, y_C(x, t)} is not differentiable w.r.t. c at y_C(x, t). In practice the parameter estimates will not change if we use a smooth approximation of the min function. Using this approximation the theory does apply and thus allows us to rely on the discussed result by numerically deriving the inverse of the Hessian of the log-likelihood evaluated at the maximum-likelihood estimates θ̂.

The empirical distribution of the Cox-Snell residuals can be compared to the unit Exponential distribution using the Kolmogorov-Smirnov (KS) test (13). For the final model the p-value of this test is equal to 0.681, from which we conclude that the proposed Gamma IAFT model seems suitable. The p-value of the KS test comparing the unit Exponential with the Cox-Snell residuals of the Exponential IAFT model instead equals 0.009, as a result of the underestimation of the clustering. This explains the superiority of our final IAFT model compared to the Exponential IAFT model once more.
Randomness.
The distribution of the Cox-Snell residuals can be used to test the overall performance. However, in non-stationary processes it is very important to investigate the randomness of the sequence of the residuals. E.g. in a setting where the Cox-Snell residuals are expected to be smaller at the beginning of the catalog and larger towards the end, the proposed model does not describe the underlying mechanism of the data. To test whether the Cox-Snell residuals (sorted on event date) are independent and identically distributed we apply the Wald-Wolfowitz runs test (13). The resulting p-value equals 0.159, from which we conclude that there is no reason to doubt the appropriateness of the model.
Spatial fit.
We have stressed that appropriate modeling of the spatial heterogeneities is key to modeling the temporal clustering. To validate the spatial fit we use Pearson's chi-squared test (13). More specifically, the test statistic equals

χ² = Σ_i (X_i − E_Gamma IAFT[X_i])² / E_Gamma IAFT[X_i],

where X_i is the observed number of events in region i.
The expected counts, E_Gamma IAFT[X_i], per region are estimated from the thousand simulations and presented in Fig. S9 (left). For the Groningen data the statistic equals χ² = 37.8 and the contributions per region are presented in Fig. S9 (right). The distribution of χ² under the Gamma IAFT model is approximated using the thousand simulations. Based on this empirical distribution we find P(χ² > χ²_data) = 0.048.
This probability is rather low and thus calls the spatial fit of our model into question. However, with a closer look at the expected and realized counts we obtain a chi-square statistic of χ² = 24.5, with a p-value of P(χ² > χ²_data) = 0.24. This shows that our model is capable of reproducing the spatial distribution over the field.
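A sketch of this simulation-based chi-squared comparison, with randomly generated counts standing in for the real per-region data, is given below.

```python
import numpy as np

def chi2_statistic(observed, expected):
    """Pearson's chi-squared statistic over the regions."""
    observed = np.asarray(observed, float)
    expected = np.asarray(expected, float)
    return float(np.sum((observed - expected) ** 2 / expected))

def empirical_pvalue(chi2_data, simulated_counts, expected):
    """P(chi2 > chi2_data) estimated from catalogs simulated under the fitted model."""
    sims = np.array([chi2_statistic(c, expected) for c in simulated_counts])
    return float(np.mean(sims > chi2_data))

# Randomly generated per-region counts: 15 regions, 1000 simulated catalogs.
rng = np.random.default_rng(2)
expected = rng.uniform(10, 50, size=15)
simulated = rng.poisson(expected, size=(1000, 15))
observed = rng.poisson(expected)
print(empirical_pvalue(chi2_statistic(observed, expected), simulated, expected))
```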
3.3. Hindcast.
3.3.1. Gamma IAFT model. Finally, we investigate how well we can forecast the number of events in the period 2014-2018 based on the information available before that period. This is of particular interest since the production of gas has decreased drastically from 2014 onward, giving rise to compaction rates that had not been observed before. If we want to interpret our claims causally, we should be able to predict event counts in a production scenario that has not been observed so far. The maximum-likelihood estimates of the Gamma IAFT model based on the period 1995-2013 are given in Table S7. Simulations for the 2014-2018 period of the Gamma IAFT model with the parameters from Table S7 are presented in Fig. S11.
3.3.2. Ignoring spatial heterogeneity. The importance of taking care of the spatial heterogeneity can be illustrated by studying the hindcast with a model with no spatial (ns) heterogeneity, which only describes the temporal process. Now, we model the scale parameter of the Gamma IAFT model, τ_ns, as a function of the temporal covariates only. The maximum-likelihood estimates of this temporal model based on the period 1995-2013 can be found in Table S8. In this case, the compaction truncation did not improve the model fit. The simulations based on this model can be found in the accompanying figure.

The Gamma IAFT model can be used to stochastically decluster an earthquake catalog by labeling events as triggered or background events based on random (Bernoulli) experiments.
Theorem 2. The probability of an event to originate from the background process, given the interevent time u, equals

p_background(u) = h_{k=1}(u) / h(u),

where h_{k=1}(u) = 1/τ(t_l + u) equals the Gamma IAFT hazard at u when setting k equal to 1.

Proof. Let U = min(U0, U1) be the random interevent time starting at t_l, where U0 and U1 equal the times to the next background event and to the next triggered (by earthquake-earthquake interaction) event, respectively. Here U0 is the interevent time, after t_l, of an inhomogeneous Poisson process with (background) rate 1/τ(t_l + u). Using Bayes' theorem and the survival-analysis equivalence relation Eq. (4), given U = u, we find p_background(u) = h0(u) / (h0(u) + h1(u)), and substituting h0(u) = 1/τ(t_l + u) gives the result.
By drawing a Uniform random variable, V, for each event, we label the event as a background event when V < p_background(u_i) and as a triggered event otherwise. To decluster the catalog of Groningen we start by drawing Gamma IAFT-model parameter realizations θ_j from the multivariate normal distribution discussed in Section 3.1 and then label each event (background or triggered) using p_background(u_i) given θ_j. Some examples of stochastically declustered catalogs of Groningen can be found in Fig. S13. At the end of the catalog (October 2018) the declustered catalogs contain on average 73% of the events.
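A minimal sketch of this Bernoulli labeling step is given below; the background probabilities are randomly generated stand-ins for the values that Theorem 2 would give.

```python
import numpy as np

def decluster_labels(p_background, rng):
    """Label each event as background (True) or triggered (False) by a Bernoulli
    draw with its background probability p_background(u_i)."""
    p_background = np.asarray(p_background, dtype=float)
    return rng.uniform(size=p_background.size) < p_background

# Randomly generated probabilities standing in for the values from Theorem 2:
rng = np.random.default_rng(3)
p = rng.uniform(0.4, 1.0, size=397)
labels = decluster_labels(p, rng)
print(labels.mean())   # fraction labelled as background in this single realization
```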
Fraction of background events.
We would like to compute the percentage of background events under the Gamma IAFT model. To do so we are interested in the expected value of the background-event probability, the ratio between the background rate and the total hazard of the Gamma IAFT model, at interevent time U (after t_l). The interevent-time pdf of the full model can be derived using the survival-analysis equivalence relations from h̃(u), f̃(u) and S̃(u), which equal respectively the hazard function, pdf and survival function of the stationary Gamma distribution with scale parameter τ(t_l + u). Since the pdf and cdf have no elegant closed-form expression, it is hard to derive E[p_background(U)] analytically. Let us assume there is a finite number of intensity changes over the period of interest. Now, we can define the n intensity changes at times s1, s2, ..., sn after t_l. During the interval [t_l + s_i, t_l + s_{i+1}) the background rate is constant and can be referred to as 1/τ_i. After time t_l + s_n, the background rate stays constant at the level 1/τ_{n+1}. Without loss of generality we assume s0 = 0. The expected proportion of background events can then be computed by integrating over these intervals. If the scale parameter τ(t) does not change during each interevent period, then each interevent time is Gamma distributed, with shape parameter k and a scale parameter that might differ per interevent time, and the expression simplifies, using Γ(k + 1)/Γ(k) = k, to k. If the scale parameter τ(t) does change, E[p_background(U)] will differ from k, but can be derived via simulation. For each simulation j the average (1/n_j) Σ_{i=1}^{n_j} p_background(u_i) can be computed and, by the Central Limit Theorem, the average over all simulations will converge to E[p_background(U)]. For the Gamma IAFT model using the maximum-likelihood estimates derived from the Groningen catalog, this fraction equals 0.73 and indeed equals k. This is a consequence of the differences between 1/τ_i and 1/τ_{i+1} being small, so that the complex distributions are very comparable to (constant scale-parameter) Gamma distributions.
Confidence interval of the fraction of background events in Groningen.
In an induced-seismicity study like Groningen, the stochastic process underlying the data can be modeled. The observed data can then be viewed as one specific realization of the stochastic model. Note that the fraction of background events can differ per realization of the stochastic model and can deviate from the expected fraction of the process (the latter has been discussed in the previous section). For the real data the event times are known and the fraction should thus be estimated conditional on these event times. The i-th event has a probability equal to p_background(u_i) of being a background event; thus the expected number of background events is equal to Σ_{i=1}^{n} p_background(u_i). For a Gamma IAFT model with the maximum-likelihood estimates, the expected number of background events is equal to the expected number of the stochastic process, E[Σ_{i=1}^{n} p_background(U_i)], discussed in the previous section, and thus equals 290 (73% of the total).
Each event label, L_i = 1{event i is a background event}, is Bernoulli distributed with the just-defined background probability. Therefore, the number of background events, N = Σ_{i=1}^{n} L_i, follows a so-called Poisson binomial distribution with parameters p1, p2, ..., pn (14). Confidence limits for the number of background events in a realized catalog of a known Gamma IAFT model are thus equal to the quantiles of this Poisson binomial distribution. In practice the parameters of the Gamma IAFT model are unknown and therefore estimated, which introduces uncertainty. Let θ represent the real Gamma IAFT-model parameters; then the distribution of N | θ is the Poisson binomial distribution just introduced. Now, let θ itself be a multivariate random variable with law f_θ. Then the distribution of N equals

P(N = m) = ∫_Θ P(N = m | θ) f_θ(θ) dθ,

where Θ is the support of the θ variable, namely {k > 0, τ0 > 0, βC ∈ R, ...}. Let us assume that θ follows the asymptotic multivariate normal distribution of the maximum-likelihood estimate (of the Gamma IAFT model) discussed in Section 3.1.
Since it is analytically very difficult to compute the 2.5% and 97.5% quantiles, we have used a Monte Carlo approach to derive the distribution of N. For j = 1 to j = 10000: 1. We have generated θ_j from N(θ̂, Σ̂).
2. We have generated N_j from the Poisson binomial distribution with parameters p_background(u_i) given θ_j.
Finally, the 2.5% and 97.5% quantiles of the empirical distribution are computed. For the Groningen catalog (n = 397), we have concluded that the confidence interval (CI) for the number of background events equals (256, 324), which equals (64.5%, 81.6%) of all events.‡ Note that the CI of k, based on the asymptotic normal distribution of k̂, equals (66.8%, 79.2%) and is thus tighter.

‡ To verify that the empirical distribution represents the distribution of N we check that the CI based on only the first five thousand simulated catalogs, (255, 325), is similar to the CI based on all simulations.
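The Monte Carlo procedure above can be sketched as follows; the per-event background probabilities are generated synthetically here, standing in for p_background(u_i) evaluated under each parameter draw θ_j.

```python
import numpy as np

def background_count_ci(p_background_per_draw, rng, alpha=0.05):
    """Monte Carlo CI for the number of background events: each row holds the
    per-event background probabilities under one parameter draw theta_j, and
    each row yields one Poisson-binomial realization N_j = sum_i Bernoulli(p_i)."""
    counts = [int((rng.uniform(size=len(p)) < np.asarray(p)).sum())
              for p in p_background_per_draw]
    lo, hi = np.quantile(counts, [alpha / 2, 1 - alpha / 2])
    return int(lo), int(hi)

# Synthetic stand-in: 10000 parameter draws, 397 events, probabilities around 0.73.
rng = np.random.default_rng(4)
draws = rng.beta(7.3, 2.7, size=(10000, 397))
print(background_count_ci(draws, rng))
```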
5. Comparison with ETAS-based models and data
The Epidemic-Type Aftershock-Sequence (ETAS) model is often used in practice to analyze seismicity (15). In the ETAS model the occurrence rate of aftershocks at time t due to an event at time v decreases according to the modified Omori law K/(c + t − v)^{1+θ}, where K, c and θ are model parameters (16). Each event of magnitude m triggers aftershocks at a rate proportional to 10^{α(m − m_min)}, where m_min is the magnitude of completeness. The hazard of the temporal ETAS model equals

h_ETAS(t) = 1/τ(t) + Σ_{i=1}^{n(t)} K · 10^{α(m_i − m_min)} / (c + t − v_i)^{1+θ}.

Here 1/τ(t) represents the background rate as in the Gamma IAFT model, n(t) equals the total number of past events before time t, and v_i and m_i represent the event time and magnitude of the i-th event. The ratio of background-to-total events can, only in steady state, be directly related to the so-called branching ratio, g, the number of first-generation daughter events per background event (17). For θ > 0 and α < b, where b is the parameter of the Gutenberg-Richter distribution, g can be written in terms of K, c, θ, α and b. The integrated hazard of the ETAS model, obtained by integrating the hazard over time, defines the survivor function S_ETAS(t), which allows us to simulate with the ETAS model, following the same lines as explained in Section 3.1.
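A small sketch of this temporal ETAS hazard is given below; the event times, magnitudes, background rate and the value of K are illustrative assumptions, while c, θ, α and m_min echo the Groningen-like settings discussed later in this section.

```python
import numpy as np

def etas_hazard(t, event_times, magnitudes, background_rate, K, c, theta, alpha, m_min):
    """Temporal ETAS hazard at time t: background rate plus the modified Omori
    contribution K*10**(alpha*(m_i - m_min))/(c + t - v_i)**(1 + theta) of each past event."""
    past = event_times < t
    omori = (K * 10.0 ** (alpha * (magnitudes[past] - m_min))
             / (c + t - event_times[past]) ** (1.0 + theta))
    return background_rate + omori.sum()

# Illustrative catalog; c = 0.25 days (6 h), theta = 0.10, alpha = 0.3, m_min = 1.3.
v = np.array([10.0, 12.5, 30.0])   # event times, days
m = np.array([1.8, 2.4, 3.1])      # magnitudes
print(etas_hazard(31.0, v, m, background_rate=0.05, K=0.02, c=0.25,
                  theta=0.10, alpha=0.3, m_min=1.3))
```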
Groningen-based ETAS model.
In this study our focus is on the Gamma IAFT model rather than an ETAS model, since the small database and the complexity of a non-stationary driving force ask for a parsimonious parameter set to avoid arbitrariness. We want to show that this parameter-poor Gamma model can nevertheless be effectively used on a Groningen catalog simulated with a temporal ETAS model. Based on the Cox-Snell test we assume that the τ0 and β estimates obtained from the Gamma IAFT fit to the Groningen catalogs accurately describe the background rate. To find values for the parameters c, θ, α and g that give rise to synthetic catalogs representative of the real Groningen catalog, we evaluate the ETAS log-likelihood (16). In Table S9, per c value, the parameter sets that give rise to the two best fits and the corresponding log-likelihood values are presented. Accurate estimation of the four remaining ETAS parameters requires a sufficiently large number of observations. To illustrate that this is not the case for the Groningen catalog, we compared the two best fits, (c = 6 h, α = 0.3, θ = 0.05, g = 0.8) and (c = 6 h, α = 0.3, θ = 0.10, g = 0.4). Their log-likelihoods are insignificantly different and indicate a strong intercorrelation of θ and g. The resulting systems are however vastly different in their cross-over times for saturation of individual aftershock sequences (17), and hence in intersequence overlap. Consistent with the choice of the Gamma timescale τ(t) as the true input background rate, we selected the fit with g = 0.4, i.e. limited intersequence overlap (18). Synthetic ETAS-simulated catalogs mimicking the Groningen data are then obtained with c = 6 hours, θ = 0.10, α = 0.3 and g = 0.4, see Fig. S14. Based on 500 simulations our temporal ETAS model gives rise to a mean fraction of 76.6% background events over the course of the Groningen catalog, see Fig. S16a. (Indeed this does not equal 1 − g, as the system is not in steady state.) On a par with the Gamma IAFT model, the temporal ETAS model did not consider individual event locations, although differences in compaction behavior over the Groningen field were accounted for via the time-varying background rate used.
Gamma IAFT-model fit to ETAS simulations.
To show the appropriateness of the Gamma IAFT model we first fitted it, following Section 1.3, to the union of all interevent times from all 500 simulations, by optimizing the log-likelihood ll_IAFT(u_{1,1}, ..., u_{1,n_1}, u_{2,1}, ..., u_{500,n_500} | k, β), which gives in particular k̂ = 0.746. Following the same procedure as in the paper and Section 3.2.1, we have compared the cdf of the Cox-Snell residuals from this Gamma IAFT-model fit, considering its maximum-likelihood estimates, to the cdf of a unit Exponential, see Fig. 3a (main text) and Fig. S15. We conclude that the interevent times generated by the temporal ETAS model are indeed well approximated by the Gamma IAFT-model distribution.
Background fraction estimation based on Gamma IAFT model or ETAS model. As the fraction of aftershocks is a stochastic quantity (19), the 'true' expected background fraction, µ_BF(g, θ, c, β), differs per synthetic catalog; its distribution is shown in Fig. S16a. We would like to investigate the performance of the estimation of the expected background fraction by fitting the ETAS model or the Gamma IAFT model to the synthetic ETAS catalogs.
Efficiency.
For each synthetic catalog, we have compared the estimated background fractions to µ_BF(g, θ, c, β). A histogram of the differences is presented in Fig. S16b. The median of these errors equals 0.03 and −0.03 for the ETAS and the Gamma IAFT fit, respectively. We conclude that, despite the finite sample size, the background rate and the percentage of background events are appropriately estimated with the Gamma IAFT model. The variances of these background-fraction estimates are referred to as σ²_ETAS and σ²_IAFT. We find σ̂²_ETAS = 0.05 and σ̂²_IAFT = 0.04, respectively. The standard error is significantly higher when fitting an ETAS model (Brown-Forsythe test, H0: σ²_ETAS ≤ σ²_IAFT, p < 10^−4). Hence, even for the synthetic Groningen-based ETAS catalogs, the fraction of triggered events can be estimated more efficiently by fitting a Gamma IAFT model.
5.4. Power of the Cox-Snell test for validating Gamma behavior. Actual catalogs as well as synthetic ETAS catalogs do not always conform to Gamma interevent distributions (20). To show how sensitively our approach can validate the appropriateness of the Gamma distribution, we simulated, 1000 times, 397 events for different stationary ETAS models. We used τ/c = 10^5, setting the values of n and θ such that, depending on the branching ratio and the tail of the Omori function, catalogs both far from and close to the asymptotic steady state were obtained; the base case n = 0.9, θ = 0.03, which is still far from steady state, corresponds to Fig. 1 in (20). These settings give rise to distributions that deviate from Gamma behavior, see Fig. S17. Consequently, the (Gamma) maximum-likelihood estimators of the shape (k) and scale (τ) parameters deviate from the real fraction of background events and the inverse background rate, respectively. In these settings the maximum-likelihood estimator of the shape parameter, k̂, should thus not be interpreted as the fraction of background events.

Fig. S17. Empirical estimate of the pdf of U/τ, using a logarithmic binning of all (1000 · 396) simulated interevent times with the ETAS model (black line), the Γ(%background, 1) pdf (cyan) and the Γ(k̂, 1) pdf (blue) for settings A-E described in Table S10.
Nevertheless, as shown in Table S11, the power (probability of rejecting the null hypothesis) of the goodness-of-fit test (of the Gamma distribution), based on the Cox-Snell residuals discussed in Section 3.2.1, is very high for these scenarios. So the Cox-Snell test can sensitively verify the appropriateness of the Gamma model. The interpretation of the Gamma IAFT model presented in this research is only valid after such formal model validation.
"Geology"
] |
Synthesis of a hollow-structured flower-like Fe3O4@MoS2 composite and its microwave-absorption properties
In order to realize the characteristics of new types of wave-absorbing materials, such as strong absorption, broad bandwidth, low weight and small thickness, a hollow-structured flower-like Fe3O4@MoS2 composite was successfully prepared by simple solvothermal and hydrothermal methods in this paper. The structural properties were characterized by X-ray diffraction, X-ray photoelectron spectroscopy, scanning electron microscopy (SEM) and transmission electron microscopy (TEM). In addition, the microwave and magnetic properties were measured using a vector network analyzer and hysteresis-loop measurements. SEM and TEM images revealed that MoS2 nanosheets grew on the surface of hollow nanospheres. The results showed that the composite exhibited excellent absorbing properties. When the molar ratio of Fe3O4 to MoS2 was 1 : 18, the minimum reflection loss value reached −49.6 dB at 13.2 GHz with a thickness of 2.0 mm and the effective absorption bandwidth was 4.24 GHz (11.68–15.92 GHz). Meanwhile, effective absorption in the entire X-band (8–12 GHz) and part of the C-band (4–8 GHz) and Ku-band (12–18 GHz) could be achieved by adjusting the sample thickness. In addition, the hollow structure effectively reduced the density of the material, which is in line with the current development trend of absorption materials. It can be predicted that the hollow core–shell structured composite has potential application prospects in the field of microwave absorption.
Introduction
With the advent of 5G, the relationship between wireless communication installations and human livelihoods is increasingly inseparable. Microwave radiation is becoming one of the most serious factors threatening human wellbeing, 1,2 so it is urgent to settle the issue of electromagnetic contamination. Consequently, the exploration of electromagnetic absorbers has distinct real-world significance owing to the desired characteristics of strong absorption, 3 low density, 4 broad bandwidth and small thickness. 5 Compared to conventional electromagnetic shielding materials, the use of microwave absorbers is regarded as a more efficacious method to control electromagnetic contamination. 6 A good electromagnetic absorber must have impedance matching and attenuation characteristics to eliminate the problem of secondary reflection contamination and allow the electromagnetic energy to be converted into heat energy in the material. 7 According to the fundamental formula, when the dielectric constant ε (ε = ε' − jε'') is equal to the permeability μ (μ = μ' − jμ''), the incident wave can totally enter the material without reflection. 8 The attenuation characteristics depend on the loss mechanism itself, which consists of magnetic loss, dielectric loss and conductivity loss. In addition, it has been reported that the morphology and construction have non-negligible effects on the absorbing properties. 9 Traditional ferrite materials possess the features of strong saturation magnetization, high complex permeability and low cost, and have been extensively applied in the field of microwave absorption. 10,11 Ferrosoferric oxide (Fe3O4), as the simplest ferromagnetic substance, has been widely used in dye degradation, 12 biomedical applications, 13 hydrogen storage, 14 energy storage devices, 15 electromagnetic absorbers, 16 etc. Especially in the domain of microwave absorption, natural resonance and the eddy-current effect give it excellent magnetic loss in the high-frequency range. Nevertheless, the high permeability and low permittivity of the material mean it is difficult for it to satisfy the requirements of new wave-absorbing materials. In recent years, some combinations of ferromagnetic and graphene materials have been shown to exhibit excellent microwave absorption performance. The advantages are as follows. First, satisfactory impedance matching can be achieved by adjusting the electromagnetic parameters. Second, the completely different loss mechanisms produce a beneficial synergistic effect. For example, Wang et al. 17 investigated novel flower-like CoFe2O4@graphene complexes, in which hundreds of CoFe2O4 microspheres are combined using graphene as a medium to form a whole flower-like structure. The characteristic appearance utilized multipolarization, hierarchical and synergistic effects to give an excellent absorbing capacity. The minimum reflection loss (RL) value reached −42 dB at 12.9 GHz with a thickness of 2.0 mm and the effective absorption (less than −10 dB) reached 4.59 GHz (11.2-15.79 GHz). Bateer et al. 18 prepared a NiFe2O4@RGO composite; the minimum RL value reached −27.7 dB at 9.2 GHz with a thickness of 3.0 mm and the effective absorbing bandwidth was 3.1 GHz. This material disperses well in nonpolar solvents, which gives it broad application prospects. However, the costly graphene and the high density of ferrite present new challenges. Therefore, it is necessary to find an alternative to graphene and to reduce the density of ferrite.
This is because light and economical absorbing materials have practical significance in industrial production.
The two-dimensional (2D) material molybdenum disulfide (MoS2) with graphene-like layered structure plays a significant role as a semiconductor, 19 solid lubricant, 20 catalyst, 21 etc. In the past few years, scientific researchers have discovered that MoS2 has superb dielectric loss property and is of low weight, which have made it a popular material in the preparation of lightweight electromagnetic absorbers. High-purity MoS2 can be prepared by means of hydrothermal treatment. Generally speaking, its morphology can be described as like a blooming flower which is made of flaky structures stacked on top of each other. This peculiar appearance will contribute to the amplification of specific surface area and consolidation of microwave absorbing capacity. 22 Furthermore, the raw material is cheap and available and the yield is high, making it promising for replacement of graphene.
In this paper, hollow pellets of Fe3O4 were prepared by a facile solvothermal method in order to reduce the density. Then, MoS2 nanosheets were gradually grown on the hollow Fe3O4 microspheres by hydrothermal treatment and the obtained Fe3O4@MoS2 composite showed a hollow flower-like structure. The morphology and structure of Fe3O4 MPs, MoS2 MPs and Fe3O4@MoS2 MPs were investigated and their electromagnetic parameters and microwave absorption properties were explored. The results indicated that the composite possessed properties of good microwave absorption, wide bandwidth and low mass.
Synthesis of hollow Fe 3 O 4
The synthesis of hollow Fe3O4 was achieved by a facile high-temperature reaction. Firstly, 2.0 g FeCl3·6H2O, 2 g CH4N2O, 2 g PEG-400 and 70 ml EG were mixed in a glass beaker, and stirred for a while by a magnetic stirrer until the solid substance was completely dissolved and the solution became transparent orange. Then all the liquid was transferred into a 100 ml Teflon liner equipped with a stainless steel reactor and the temperature was maintained at 200 °C for 16 h. After the product was cooled to normal temperature, it was collected by a magnet and washed with deionized water and anhydrous ethanol three times. Finally it was dried at 60 °C and ground ready for further use. The synthesis process of the Fe3O4@MoS2 composite was as follows: a certain amount of Fe3O4 powder, (NH4)6Mo7O24·4H2O and thiourea were dissolved in 60 ml deionized water, and then the mixture was transferred into a 100 ml Teflon liner and kept at 180 °C for 12 h. The product was collected by centrifugation and washed with deionized water and anhydrous ethanol three times. At last, it was dried at 60 °C and ground ready for further use. Here, we obtained multiple sets of target products by adjusting the molar ratio of the two materials, and denoted them as T3, T4, T5. Meanwhile, T1 and T2 represented pure Fe3O4 and MoS2, respectively. Table 1 presents the detailed data.
Characterization
Here, X-ray diffraction (XRD; Rigaku Ultimate IV, scanning angle from 10° to 80°) and X-ray photoelectron spectroscopy (XPS; Thermo Scientific K-Alpha) were utilized to characterize the compositions of the materials. The surface morphology was investigated by scanning electron microscopy (SEM; Hitachi High-Technologies-4800) and transmission electron microscopy (TEM; JEOL JEM 2100) imaging. Simultaneously, the magnetic properties were measured via hysteresis loops (VSM, Lake Shore 7404).
The electromagnetic parameters were obtained with a vector network analyzer (VNA; N5224A), which measured the 2-18 GHz range. A diagram of the VNA is shown in Fig. 1. Firstly, powders of the samples were mixed with paraffin at a 1 : 1 mass ratio. Then each was compressed into a concentric ring of specified size (Φout = 7 mm, Φin = 3.04 mm) by a standard mold. Finally, the electromagnetic parameters of the samples were calculated by the analysis software and used to simulate the absorption behavior at different thicknesses.
Mechanism of structure formation
The formation process of the hollow flower-like Fe3O4@MoS2 composite is shown in Fig. 2. Firstly, Fe3O4 pellets with a hollow structure were prepared by a solvothermal method. The synthesis route is revealed as follows. The hollow structure can be explained by the Ostwald ripening mechanism. During the reaction, the growth of the nanospheres is due to the combination of the grains. It is well known that the chemical potential of a particle decreases with an increase of particle size, which results in the energy of the internal grains being greater than that of the nanospheres. 23,24 Finally, under the action of high temperature, the grains gradually dissolve and diffuse to the surface of the sphere, where they re-form. At this point the energy of the nanospheres reaches its lowest value, and the hollow structure is formed. Then the MoS2 nanosheets were formed on the surface of the spheres. The reaction equations are as follows. 25 The process of the hydrothermal method is similar to that of crystallization in nature. Herein, the cations on the surface of Fe3O4 and the anions in the molybdate solution attract each other due to the Coulomb force, and the MoS2 crystal nucleus is formed at the growth site. With the diffusion of ions to the surface of the crystal nucleus and their deposition, the crystal grows directionally along a specific direction and forms a unique morphology.
Structural properties
The characterization of the crystal structure is conducive to analyzing the phase components and purity. Fig. 3a shows the XRD patterns of the samples (T1-T5). By referring to the Fe3O4 (JCPDS no. 75-0033) and MoS2 (JCPDS no. 75-1539) standard cards, we can see that the strong diffraction peaks of pure Fe3O4 (T1) correspond to the face-centered cubic structure planes of (1 1 1), (2 2 0), (3 1 1), (2 2 2), (4 0 0), (4 2 2), (5 1 1), (4 4 0) and (5 3 3). The principal diffraction peaks of pure MoS2 (T2) correspond to the planes of (0 0 2), (1 0 0) and (1 0 2). As for the composite samples (T3-T5), the characteristic peaks of the two component materials are displayed, and the diffraction angle values are at identical positions. All the patterns have distinct characteristic peaks without impurity peaks, indicating favorable purity and high crystallinity of the products. XPS was used to analyze the surface elements of the samples (Fig. 3b-f) and the full wide-scan spectrum (Fig. 3b) displays the coexistence of Fe, O, Mo and S in the composite. Fig. 3c depicts the O 1s spectrum of composite T4; it shows four peaks at 529.83 eV, 530.9 eV, 531.6 eV, and 533.06 eV, which correspond to lattice oxygen, the H-O bond, oxygen vacancies and adsorbed water on the surface, respectively. 26 The S 2p spectrum shows two peaks at 161.23 eV and 162.38 eV, which correspond to the S 2p3/2 and S 2p1/2 orbitals, respectively. What is more, the peak at 168.58 eV proves the existence of the S-O bond (Fig. 3d). As shown in Fig. 3e, the two peaks at 710.88 eV and 724.73 eV represent the Fe 2p3/2 and Fe 2p1/2 orbitals, while the small peak at 715.13 eV indicates the existence of the Fe-Mo bond. This heterogeneous structure is due to the existence of a large number of atomic vacancies in MoS2; in the hydrothermal process, Fe ions diffuse into MoS2 at the interface. 27 Fig. 3f shows the Mo 3d spectrum, and the two peaks at 228.33 eV and 231.48 eV represent the Mo 3d5/2 and Mo 3d3/2 orbitals. Evidently, the Mo element comes from 1T MoS2 (228.38 eV and 231.53 eV), 2H MoS2 (229.98 eV and 232.98 eV), and MoO3 (235.58 eV and 232.38 eV). 28 Therefore, the results of XRD and XPS are favorable evidence for the existence of the composite.
The morphology of a material has a great influence on the microwave-absorbing property. Therefore, SEM and TEM observations are indispensable. Fig. 4a-f show the microscopic morphology of each sample. As for pure Fe3O4 (T1), the diameter of each nanosphere is about 550 nm, and the size is uniform and the dispersion is good because the solvothermal method provides a stable and mild environment for growth (Fig. 4a). It can be seen from the TEM image that pure Fe3O4 consists of hollow spheres with an outer diameter of 550 nm and an inner diameter of 300 nm (Fig. 4b). This unique structure follows the Ostwald ripening mechanism, which not only reduces the system density effectively, but also contributes to the internal multiple reflection loss of EMW. 9 The pure MoS2 (T2) looks like flowers made of nano-flakes stacked together; it is not very well dispersed and shows an obvious agglomeration phenomenon (Fig. 4c). It can be predicted that the sheet structure extending in all directions greatly increases the specific surface area of the material. Then the petal structure grows on the microspheres. As shown in Fig. 4d, when the molar ratio of Fe3O4 to Mo precursor is 1 : 10, the nanosheets only grow locally on the microspheres and most of the microspheres are exposed, which indicates the amount of Mo precursor does not reach the ideal proportion. After further increasing the Mo precursor ratio, it can be seen that the surface of the microspheres is completely covered with nanoflakes (Fig. 4e). The composite looks like a flower in full bloom and each microsphere is about 850 nm in size. MoS2 nanosheets construct a conductive network around the composite. The dipole polarization generated by the defects and the rapid movement of the polarized electrons provide efficient conduction loss. Meanwhile, the heterogeneous structure of Fe3O4-MoS2 produces interfacial polarization and the interior of the composite provides magnetic loss. Compared to T1 and T2, we can predict that the synergies between the component materials will lead to better microwave absorption. 29 Fig. 4f shows the microstructure of T5; with the increase of Mo precursor, it is difficult for the microspheres to provide enough growth sites on the surface, and the remaining nanosheets can only grow between the petals. In brief, combined with the above structural properties, we successfully prepared hollow-structured flower-like Fe3O4@MoS2.
Microwave properties
It is well known that impedance matching and attenuation characteristics determine the absorbing properties of materials. Ideal impedance matching requires the minimum reflectivity of the EMW on a material's surface. The reflectivity is determined by the reflection coefficient (R), 30 the formula being expressed as follows, 31,32 where Z0 and Zin represent the air impedance and the interfacial impedance of the material and E and H are the electric and magnetic field strengths. They are all related to the complex permittivity (ε) and complex permeability (μ). As mentioned above, when ε is equal to μ, Zin and Z0 are matched and R reaches zero. According to Zin and Z0, we can obtain the following formulas for calculating the reflection loss, 33 where j is the imaginary unit, f is the frequency of the incident wave, d is the thickness of the absorbing layer and c is the speed of light in a vacuum. The effective EMW absorption band refers to the frequency range in which the reflection loss is lower than −10 dB; in this case, we consider that 90% of the EMW is absorbed. The reflection loss diagrams of samples T1-T5 are shown in Fig. 5.
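The reflection-loss expressions referenced above follow the standard transmission-line model; the minimal sketch below evaluates them for hypothetical complex permittivity and permeability values, not the measured spectra of samples T1-T5.

```python
# Sketch of the reflection-loss calculation (standard transmission-line model);
# the material parameters below are hypothetical placeholders.
import numpy as np

c = 3e8  # speed of light, m/s

def reflection_loss_db(eps_r, mu_r, f_hz, d_m):
    """RL = 20*log10(|(Z_in - Z_0)/(Z_in + Z_0)|), with Z_in normalized to Z_0."""
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(1j * 2 * np.pi * f_hz * d_m / c
                                           * np.sqrt(mu_r * eps_r))
    return 20 * np.log10(np.abs((z_in - 1) / (z_in + 1)))

# Hypothetical values at 13.2 GHz for a 2.0 mm thick absorber layer.
eps_r = 10.5 - 3.2j
mu_r = 1.05 - 0.08j
print(f"RL = {reflection_loss_db(eps_r, mu_r, 13.2e9, 2.0e-3):.1f} dB")
```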
We have selected the obvious absorption curves for each sample and marked the corresponding thicknesses. As for T1 (Fig. 5a and f), the magnetic loss generated by natural resonance means the minimum reflection loss value reaches −12.78 dB at 5.12 GHz with a thickness of 5.5 mm and the effective absorbing bandwidth is 1.52 GHz. Obviously, pure Fe3O4 has no palpable absorption in bands other than the C band, and the weak absorption capacity, narrow bandwidth and large thickness are not satisfactory. The EMW absorption of pure MoS2 (T2) is shown in Fig. 5b and g. Unlike T1, it has distinct absorption peaks in the X and Ku bands. When the thickness is 2.0 mm, the minimum reflection coefficient is −15.65 dB with effective absorption of 5.2 GHz (11.84-17.04 GHz). It exhibits wide bandwidth and small thickness, and can achieve effective absorption within 5.28-18 GHz by designing the thickness. Fig. 5c-e show the absorption of composites T3, T4, and T5. For T3, the absorption performance is significantly improved and the minimum reflection loss value reaches −29.50 dB at 5.44 GHz, which indicates that MoS2 plays a positive role. Effective absorption can be achieved in the range of 4.16-14.24 GHz when the thickness is changed in the range of 2.5-5.5 mm. However, the improvements of effective absorbing bandwidth and thickness are limited. With an increase of Mo precursor, the maximum absorption peak moves towards high frequency. As displayed in Fig. 5d, the reflection loss of T4 is −49.6 dB at 13.2 GHz and the effective absorbing bandwidth is 4.24 GHz (11.68-15.92 GHz) when the thickness is only 2.0 mm. With an increase of thickness, the sample still maintains outstanding reflection loss. At thicknesses of 2.5 mm, 3.0 mm and 3.5 mm, the effective absorption bandwidth is 3.2 GHz (9.28-12.48 GHz), 3.12 GHz (7.52-10.64 GHz) and 2.32 GHz (6.48-8.8 GHz), respectively. The EMW absorption capacity could not continue to increase with the further addition of Mo precursor. The minimum reflection loss value of T5 is −35.25 dB at 14.8 GHz. This is because the growth of excessive MoS2 among adjacent petals alters the flower-like structure and affects the multiple reflection loss. Moreover, the increase of the dielectric constant destroys the electromagnetic balance. Surprisingly, the effective absorbing bandwidth is 5.52 GHz. In the process of adjusting the sample thickness from 2.0 mm to 4.0 mm, it was found that the product could achieve effective absorption in the frequency band 5.2-18 GHz, which contains the X band, the Ku band and most of the C band. The reflection loss curve of each sample with a thickness of 2.0 mm is depicted in Fig. 5k, which shows that the microwave absorption intensity and effective absorption bandwidth of composites T4 and T5 were much better than those of single-material T1 and T2 under the condition of relatively small thickness. Although sample T4 has the highest microwave absorption intensity, its effective absorption bandwidth is slightly smaller than that of sample T5, which is attributed to the addition of excessive Mo precursor. In addition, the absorption curves shift towards low frequency as the sample thickness increases. This variation can be explained by the quarter-wavelength matching equation: 34 t_m = nc / (4 f_m √|εμ|) (n = 1, 3, 5, ...) (12), where t_m is the sample thickness and f_m is the corresponding frequency. When the value of n is 1 and c is the constant speed of light, t_m is inversely proportional to f_m.
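A small numerical sketch of the quarter-wavelength matching relation in eqn (12) follows; the permittivity, permeability and matching frequency used here are hypothetical values, chosen only to illustrate the inverse relation between matching thickness and frequency.

```python
# Sketch of the quarter-wavelength condition t_m = n*c / (4*f_m*sqrt(|eps*mu|)).
import numpy as np

c = 3e8
eps_r = 10.5 - 3.2j   # hypothetical permittivity at f_m
mu_r = 1.05 - 0.08j   # hypothetical permeability at f_m
f_m = 13.2e9          # hypothetical matching frequency, Hz

for n in (1, 3, 5):
    t_m = n * c / (4 * f_m * np.sqrt(abs(eps_r * mu_r)))
    print(f"n = {n}: t_m = {t_m * 1e3:.2f} mm")
# For n = 1, a thicker layer matches a lower frequency, which is why the
# absorption peak shifts towards low frequency as the thickness increases.
```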
By designing an appropriate thickness, we can therefore acquire strong absorption in the corresponding frequency band. However, it is difficult to achieve both strong absorption and small thickness in the low-frequency region. The microwave attenuation mechanism is closely related to the electromagnetic parameters, and can be expressed by the following formula: 35 α = (√2 π f / c) · √[ (μ''ε'' − μ'ε') + √( (μ''ε'' − μ'ε')² + (μ'ε'' + μ''ε')² ) ] (13), where α is the attenuation constant, f is the frequency, c is the speed of light in vacuum, ε' and μ' represent the real parts of the permittivity and permeability, and ε'' and μ'' represent the imaginary parts of the permittivity and permeability. It is widely known that the real parts of the electromagnetic parameters indicate the EMW storage capacity, and the imaginary parts represent the EMW loss capacity. In order to explore the mechanism of loss, we calculate the dielectric constant, permeability and the loss tangents. The relevant data are shown in Fig. 6.
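The following is a minimal sketch of eqn (13) as reconstructed above; the input spectra are hypothetical constants rather than the measured ε', ε'', μ', μ'' of the samples.

```python
# Sketch of the attenuation constant in eqn (13) with hypothetical parameters.
import numpy as np

def attenuation_constant(f_hz, eps1, eps2, mu1, mu2, c=3e8):
    """alpha = (sqrt(2)*pi*f/c) * sqrt((mu''eps'' - mu'eps')
               + sqrt((mu''eps'' - mu'eps')**2 + (mu'eps'' + mu''eps')**2))."""
    a = mu2 * eps2 - mu1 * eps1
    b = mu1 * eps2 + mu2 * eps1
    return np.sqrt(2) * np.pi * f_hz / c * np.sqrt(a + np.sqrt(a**2 + b**2))

f = np.linspace(2e9, 18e9, 5)        # frequency grid, 2-18 GHz
eps1, eps2 = 10.0, 3.0               # hypothetical eps', eps''
mu1, mu2 = 1.1, 0.1                  # hypothetical mu', mu''
print(attenuation_constant(f, eps1, eps2, mu1, mu2))
```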
As a high dielectric loss material, MoS2 has the highest values of ε' and ε'' among the five sets of samples, and the curves show an obvious downward trend with an increase of frequency. Fe3O4 is completely opposite to MoS2: it has the lowest values of ε' and ε'' and the variation of its curves is not significant at 2-18 GHz. As shown in Fig. 6a-c, when increasing the ratio of Mo precursor from T3 to T4, the dielectric coefficient and dielectric loss of the composites increase. This is due to the leading role of MoS2 nanosheets in the construction of an electron transport network. We know that the rapid transfer of electrons contributes to conduction loss and the nanosheets provide channels for electron conduction. As shown in Fig. 6b and c, ε'' and tan δε have similar curve distributions, which indicates that ε'' can reflect the dielectric loss indirectly. Fig. 6d-f show the complex permeability and magnetic loss. We can see that pure Fe3O4 (T1) and pure MoS2 (T2) have the highest and lowest values of μ' and μ'', respectively. With an increase of the Mo precursor ratio, the magnetic loss of the samples decreases gradually. Comparing with the variation of the complex permittivity, we find that Fe3O4 has the characteristics of high magnetic loss and low dielectric loss, while MoS2 has those of low magnetic loss and high dielectric loss. Therefore, by controlling the ratio, a composite with a double loss mechanism can be obtained and the electromagnetic parameters can be balanced. Besides, there are some resonance peaks located at 2-8 GHz for all samples (Fig. 6e and f). Magnetic loss is divided into hysteresis loss, eddy-current loss and residual loss. Since natural resonance exists at relatively low frequency, we consider that these characteristic peaks are caused by natural resonance.
In order to better understand the magnetic property of samples T1, T3, T4, and T5, hysteresis loop tests were carried out and the results are shown in Fig. 7. The data of saturation magnetization and coercivity are exhibited in Table 2.
Fig. 6 Plots of the real part of the permittivity (a) and permeability (d), the imaginary part of the permittivity (b) and permeability (e) and the tangent of dielectric loss (c) and magnetic loss (f) for samples T1-T5.
We know that high saturation magnetization is usually accompanied by high initial permeability, and high initial permeability leads to high magnetic loss. As shown in Table 2, pure Fe3O4 (T1) exhibits the highest saturation magnetization of 75.95 emu g⁻¹ with the highest coercivity of 37.5 Oe, while saturation magnetizations of 29.36, 13.52, and 5.46 emu g⁻¹ and coercivities of 32.5, 28.6, and 30.9 Oe are found for T3, T4 and T5. It follows that the magnetic loss of the samples can be arranged in the following order: T1 > T3 > T4 > T5; the result is consistent with the curves in Fig. 6f. In addition, the dielectric loss of the samples shown in Fig. 6c can be ranked in the following order: T2 > T5 > T4 > T3 > T1; and according to Fig. 5a-j, the order of absorption intensity is T4 > T5 > T3 > T2 > T1. The three different sets of results suggest that remarkable absorbing capacity is not determined by a single loss mechanism, but by the synergistic effect of multiple loss mechanisms.
The magnetic loss of ferrite is caused by resonance and eddy current loss. The C0 (C0 = μ''(μ')⁻² f⁻¹) values of the samples are calculated from the magnetic permeability and the curves are drawn. In Fig. 8a, each sample has some obvious vibration peaks in the frequency range of 2-10 GHz. Considering that natural resonance generally occurs in the lower frequency band, it can be considered that these peaks are caused by natural resonance. In addition, when the C0 value of a sample remains constant, the magnetic loss at that point is considered to be eddy current loss. Fig. 8a shows that in the frequency range of 10-18 GHz, the C0 values of all curves tend to flatten out or even stop changing, so there is eddy current loss in the samples.
In order to further illustrate the microwave attenuation capacity of the samples, the attenuation constant (α) of each sample was calculated and the results are shown in Fig. 8b. According to the formula for the attenuation constant, an increase of μ'', ε'' and frequency is beneficial to increasing its value. Among samples T1-T5, the attenuation constant of sample T1 is the smallest, which is consistent with the reflection loss result. Its curve increases first and then decreases, because T1 has the highest μ'' at low frequencies. Surprisingly, the highest attenuation constant was found for sample T2, which reached a maximum of 277 at a frequency of 18 GHz. The attenuation constant curves of samples T2-T5 showed an upward trend on the whole, but fluctuated slightly due to the influence of μ''. In general, a good attenuation constant value should be accompanied by an excellent microwave loss capacity. However, compared with samples T4 and T5, the reflection loss of sample T2 is obviously weaker. This indicates that the attenuation constant is not the only factor affecting the microwave loss capacity.
As mentioned earlier, attenuation and impedance matching are important factors in determining the microwave loss ability. Fig. 9 shows the impedance modulus corresponding to the maximum absorption curve of each sample (T1-T5). We know that when Z = 1, a material reaches impedance matching. The corresponding impedance moduli of T4 and T5 at frequencies of 13.2 GHz and 14.8 GHz are 0.975 and 0.961, which represent an almost perfect match. As for T1, T2 and T3, their corresponding impedance moduli at frequencies of 5.12 GHz, 14.4 GHz and 5.44 GHz are 1.576, 0.7058 and 0.7686, respectively. Obviously, the combination of MoS2 and Fe3O4 effectively adjusts the electromagnetic parameters, which leads to the Fe3O4@MoS2 composite having better impedance matching than the single materials. Although T2 has the largest attenuation constant, poor impedance matching has a negative effect on its microwave absorption ability.
In summary, the three composite samples exhibit better absorption ability, effective bandwidth and thickness than each single component. The reason is attributed to the unique construction, the suitable impedance matching and the double loss mechanism. The loss model of EMW is shown in Fig. 10. First, the flower-like surface, which is stacked from MoS2 sheets, greatly increases the specific surface area of the material and enables it to receive EMW from all directions. The multiple reflection of incident waves is one of the most significant factors in removing EMW, and it occurs not only between the adjacent nanosheets, but also between the flower-like nanospheres. 24 Second, under a high-frequency electromagnetic field, there are a large number of atomic vacancies on the MoS2 nanosheets to generate dipole polarization, and the polarized electrons that gain energy move toward the inner core. Third, the heterostructure of Fe3O4@MoS2 is conducive to the accumulation of free charge, which makes each Fe3O4 sphere negatively charged outside and positively charged inside, resulting in the phenomenon of interfacial polarization and a great loss of electromagnetic energy. 36 Finally, the residual EMW enter the sphere and disappear as heat under the action of resonance and eddy current loss. Meanwhile, the internal hollow structure causes multiple reflections of EMW, which accelerates the loss of electromagnetic energy. The excellent loss properties and good impedance matching mean that the Fe3O4@MoS2 composite has remarkable absorption strength (−49.6 dB) and satisfactory effective absorption bandwidth (4.24 GHz) at an extremely small thickness. In recent years, a lot of research has been done on Fe3O4-based materials, and Table 3 lists results for some other absorbers. Through comparison, it is found that the Fe3O4@MoS2 composite is expected to be an outstanding microwave-absorbing material.
Conclusion
In short, a hollow-structured flower-like Fe3O4@MoS2 composite was successfully prepared by simple solvothermal and hydrothermal methods. Compared with Fe3O4 and MoS2, excellent impedance matching and synergies between the materials play a positive role, with the composite exhibiting strong absorption, broad bandwidth and small thickness. In particular, sample T4 has the highest absorption intensity of microwaves; its reflection loss is −49.57 dB at 13.2 GHz and the effective absorbing bandwidth is 4.24 GHz (11.68-15.92 GHz) when the thickness is only 2 mm. Effective absorption in the entire X-band (8-12 GHz) and part of the C-band (4-8 GHz) and Ku-band (12-18 GHz) can be achieved by designing the sample thickness. Sample T5 has the largest effective absorption bandwidth: when the thickness is 2 mm, the effective absorption bandwidth is 5.52 GHz, and the minimum reflection loss value is −35.25 dB. By adjusting the sample thickness, effective absorption in the 5.2-18 GHz frequency band can be realized. In addition, the hollow structure not only effectively reduces the density of the material, but also has a positive effect on microwave absorption. It can be predicted that the hollow-structured flower-like composite has a potential application prospect in the field of microwave absorption.
Fig. 8 The magnetic eddy current (a) and attenuation constant (b) for samples T1-T5.
Fig. 9 The impedance modulus corresponding to the minimum reflection loss curve of each sample. | 7,170.8 | 2021-06-03T00:00:00.000 | [
"Materials Science",
"Physics"
] |
Single-machine rescheduling problems with learning effect under disruptions
Rescheduling in production planning means scheduling the already sequenced jobs again together with a set of newly arrived jobs so as to generate a new feasible schedule, which creates a disruption for any job whose position changes between the original and adjusted schedules. In this paper, we study rescheduling problems with learning effect under disruption constraints to minimize several classical objectives, where the learning effect means that workers gain experience during the process of operation, making the actual processing time of jobs shorter than their normal processing time. The objectives are to find optimal sequences that minimize the makespan and the total completion time under a limit on the disruptions from the original schedule. For the considered objectives under a single disruption constraint or a disruption cost constraint, we propose polynomial-time algorithms and pseudo-polynomial time algorithms, respectively.
1. Introduction. Learning effect in a manufacturing process means that many similar products or parts are produced continuously and repetitively, which enables the manufacturer/worker to perform subsequent operations faster than before. The impact of learning on productivity in manufacturing was first observed by Wright [22], who used a learning curve to model the aircraft industry. Recently, Anzanello and Fogliatto [1] provided a good review of learning curves. The concept of learning effect was first introduced into machine scheduling problems by Biskup in 1999. For several scheduling problems with learning effect, Mosheiov [17] gave several examples to demonstrate that the optimal schedules are very different from those of the corresponding classical versions. Bachman and Janiak [4] considered some single-machine scheduling problems in which a job's processing time depends on the position of this job in a sequence. Furthermore, Biskup [6] surveyed most of the recently published articles on machine scheduling problems with learning effects. Recently, Wang et al. [21] considered single-machine scheduling problems with controllable processing time, truncated job-dependent learning and deterioration effects to minimize a cost function containing several classical objectives and the total resource cost. There are still many interesting topics to be explored; readers can refer to Yin et al. [25], Vahedi-Nouri et al. [20] and Cheng et al. [8], etc.
In an actual manufacturing environment, unforeseen disruptions often occur, such as the arrival of new jobs, changes of order priority and changes of release dates. Such uncertainties create a need to reschedule jobs to meet the emergent requests. Rescheduling means scheduling the jobs again together with a set of new jobs so as to minimize the objectives. Many researchers have studied rescheduling problems. For example, Hall and Potts [11,12] considered a rescheduling problem in a single-machine setting with newly arrived jobs to minimize the maximum lateness and the total completion time under a limit on disruptions. Azizoglu and Alagoz [3] considered rescheduling problems with a period of unavailability on one machine due to disruption. They traded off two conflicting criteria, total flow time and the number of disrupted jobs, and illustrated that all the efficient schedules with respect to the two criteria can be found in polynomial time. Yang [24] studied a single-machine rescheduling problem with the arrival of new jobs and compressed processing times. Yuan and Mu [26] studied a single-machine rescheduling problem with release dates to minimize the makespan under a maximum sequence disruption constraint. Zhao and Tang [28] studied two single-machine rescheduling problems with linearly deteriorating jobs under a disruption to minimize the total completion time, which can be solved in polynomial time. Liu and Young [16] provided three approximation algorithms for a single-machine rescheduling problem to minimize some cost objectives while rescheduling jobs without excessively disrupting the original schedule.
In recent years, many studies have also addressed rescheduling problems in multiple-machine settings. Ozlen and Azizoglu [18] considered an unrelated parallel-machine rescheduling problem with a disruption on one of the machines; they provided polynomial-time algorithms to minimize the total disruption cost among the schedules with minimum total flow time. Chiu and Shih [9] considered an integrated model that analyzed both preventive maintenance and rush orders in a two-machine flow shop using two different rescheduling methods. Katragjini et al. [14] developed rescheduling algorithms for flow shop rescheduling layouts and generated three types of disruption that interrupted the original schedules simultaneously. Liu and Zhou [15] investigated an identical parallel-machine rescheduling problem, and proposed two polynomial-time algorithms for lexicographically optimizing two conflicting rescheduling criteria: the total completion time and the disruption cost. Sun et al. [19] investigated a hybrid flow shop rescheduling problem to minimize the total weighted completion time, the total waiting time, and the difference in the number of operations processed on different machines for different stages in the original schedule and the revised schedule. There are also other references to be noted, such as Filar et al. [10], Hoogeveen et al. [13], Zhang et al. [27] and Arnaout [2], etc.
Few of the above references attempted to take learning effects into account in rescheduling problems. However, there are many applications in production or service operations, e.g., urgent orders and emergency patients, which should be dealt with quickly so as to decrease losses. In this paper, we consider single-machine rescheduling problems with learning effect under disruption constraints to minimize the makespan and the total completion time of the new jobs being integrated into the schedule, respectively. For such problems, we propose polynomial-time algorithms to find the optimal job sequences.
The remainder of this paper is organized as follows. Notations and the formulation of the considered problem are described in Section 2. Section 3 provides structural results that are useful for the proposed problems, and studies several rescheduling models with a single disruption constraint. Rescheduling models with several practical measures of disruption for minimizing the makespan and the total completion time are considered in Section 4. Finally, concluding remarks and ideas for future research are given in Section 5.
2. Formulation and notations. A set of original jobs £_O = {J_1, ..., J_{n_O}} is to be processed on a single machine without preemption. We assume that the jobs in £_O have been scheduled optimally to minimize some classical objective and that π* is an optimal schedule. Let £_N = {J_{n_O+1}, ..., J_{n_O+n_N}} denote a set of new jobs which will be inserted to be processed along with the jobs in £_O, written as £ = £_O ∪ £_N. Let n = n_O + n_N. If job J_j ∈ £ is processed in the rth position of a sequence, its actual processing time is denoted as p_jr = p_j r^α, where −1 ≤ α < 0 is the learning index of all jobs and p_j denotes the normal processing time. Suppose the new jobs arrive together at time zero, after the sequence of the jobs in £_O has been determined but before processing begins. For any schedule σ of the jobs of £, we define the following notations (a small computational sketch of these quantities follows the list):
• C_j(σ) denotes the completion time of job J_j ∈ £.
• J_i ≺ J_j denotes that the normal processing times of jobs J_i and J_j satisfy p_i ≤ p_j.
• D_j(π*, σ) = |r_2 − r_1| denotes the position disruption of job J_j ∈ £_O, where J_j is the r_1th job to be processed in π* and the r_2th job to be processed in σ.
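The following is a minimal sketch of the definitions above: actual processing times under the learning effect p_jr = p_j r^α and the resulting completion times of a given job sequence. The instance data are hypothetical.

```python
# Sketch: completion times of a sequence under the learning effect p_jr = p_j * r**alpha.
def completion_times(normal_times, alpha):
    """normal_times: normal processing times listed in sequence order;
    alpha: learning index with -1 <= alpha < 0."""
    c, t = [], 0.0
    for r, p in enumerate(normal_times, start=1):
        t += p * r ** alpha       # actual processing time at position r
        c.append(t)
    return c

# Hypothetical normal processing times, already in SPT order;
# alpha = log2(0.8) ~ -0.322 is a commonly used learning index.
p = [2.0, 3.0, 5.0, 7.0]
print(completion_times(p, alpha=-0.322))
```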
3. Rescheduling problem with a single disruption constraint. In this section, we study rescheduling problems with learning effect under a single disruption constraint. Before discussing the models, we give several lemmas which will be used in the subsequent sections.
Lemma 3.1. For problems 1|p_jr = p_j r^α|γ, where γ ∈ {C_max, ΣC_j}, an optimal schedule can be obtained by sequencing the jobs in non-decreasing order of p_j (SPT rule).
From Hall and Potts [11], we know that if a rescheduling problem has the (SPT, SPT) property, then a polynomial-time algorithm can be designed. Therefore, if the objective is to minimize the makespan or the total completion time, we assume that the jobs in £_O are indexed and sequenced in SPT order in π*, i.e., p_1 ≤ p_2 ≤ ... ≤ p_{n_O}, for all objectives. In the following, we will show that the SPT rule applies to the several considered rescheduling problems. Let £_M = {J_j ∈ £_N | p_j < p_{n_O}}; obviously, we have the following lemma. Lemma 3.2. The problems 1|p_jr = p_j r^α, D_max(π*) ≤ k|γ and 1|p_jr = p_j r^α, Δ_max(π*) ≤ k|γ, where γ ∈ {C_max, ΣC_j}, under the job set (£_O, £_N) can be polynomially reduced to the corresponding problems under the job set (£_O, £_M). Lemma 3.3. The problems 1|p_jr = p_j r^α, D_max(π*) ≤ k|γ and 1|p_jr = p_j r^α, Δ_max(π*) ≤ k|γ, where γ ∈ {C_max, ΣC_j}, all have an optimal schedule with no idle time between jobs, and (i) a schedule for problem 1|p_jr = p_j r^α, D_max(π*) ≤ k|γ is feasible if and only if the number of jobs of £_N scheduled before the last job of £_O is no more than k; (ii) a schedule for problem 1|p_jr = p_j r^α, Δ_max(π*) ≤ k|γ is feasible if and only if the total actual processing time of the jobs of £_N scheduled before the last job of £_O is no more than k.
Proof. The process of the proof is similar to that of Lemma 1 in Hall and Potts [11], and is omitted here. Proof. For the jobs in £_O, suppose there exists an optimal schedule σ* in which the jobs are sequenced in non-SPT order. Let J_i be the job with the smallest index that appears relatively later than other jobs of £_O in σ* compared with π*, and let J_j (i < j) be the last job of £_O that precedes job J_i in σ*. Suppose jobs J̄_1, J̄_2, ..., J̄_h are processed between J_j and J_i. Let the starting time of job J_j be s, and let it be scheduled in the rth position in σ*. Interchange the positions of job J_j and job J_i to obtain a new schedule σ̄, in which the starting time of job J_i is s and it is scheduled in the rth position. Since i < j, which implies p_i < p_j, the jobs J̄_1, J̄_2, ..., J̄_h are completed earlier in σ̄ than in σ*, which means that the value of the corresponding objective function does not increase after the interchange. On the other hand, since job J_i is processed in the ith position in π* and job J_j in the jth position, suppose that job J_j is processed in the r_1th position in σ̄. If C_i(π*) < C_j(π*), then we have Δ_j(π*, σ̄) < Δ_i(π*, σ*) and Δ_i(π*, σ̄) < Δ_i(π*, σ*). Based on the above analysis, σ̄ is a feasible and optimal sequence.
A finite number of repetitions of this argument shows that there exists an optimal schedule in which the jobs in £_O are sequenced in SPT order as in π*, and a similar job insertion argument establishes that the jobs in £_N can also be sequenced in SPT order. Obviously, an optimal schedule should contain no idle time; otherwise, removing this idle time maintains feasibility and decreases the value of the objective function.
From Lemma 3.3, we know that there are at most k jobs in £_N to be sequenced before the last job in £_O. Thus, by referring to the (SPT, SPT) property and the above lemmas, we propose the following algorithm.
Algorithm I
Step 0. Set the value of k (≥ 0).
Step 1. Index the jobs of £_N in SPT order: J_{n_O+1} ≺ J_{n_O+2} ≺ ... ≺ J_{n_O+n_N}.
Step 2. If p_{n_O+1} > p_{n_O}, then go to Step 5; otherwise, choose m (≤ k) feasible jobs.
Step 3. Sequence jobs J 1 , ..., J n O , J n O +1 , ..., J n O +m in SPT-order in the first n O + m positions, and update the job set £ O .
Step 4. Sequence jobs J n O +m+1 , ..., J n O +n N in SPT-order in the last n N − m positions, and update the job set £ N .
Step 5. Sequence the job sets of £ N following £ O .
Obviously, from Step 1, we know that sequencing the n_N jobs requires O(n_N log n_N) time. In Steps 3 and 4, it requires O(n) time to merge the first m jobs of the SPT-ordered jobs of £_N with the jobs of £_O as sequenced in π* and then place the last n_N − m jobs of the SPT-ordered jobs of £_N at the end of the schedule. Therefore, the complexity of Algorithm I is O(n + n_N log n_N), and we have the following proposition.
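The sketch below is a minimal, assumed interpretation of Algorithm I: at most k of the new jobs that are shorter than the longest original job are merged into the SPT sequence of the original jobs, and the remaining new jobs are appended in SPT order. The helper name and the instance data are hypothetical.

```python
# Sketch of Algorithm I under the (SPT, SPT) property (assumed interpretation).
def algorithm_one(orig, new, k):
    """orig: normal processing times of original jobs in SPT order;
    new: normal processing times of new jobs; k: bound on D_max."""
    new = sorted(new)
    # Jobs of L_M = {J in new : p_j < p_{n_O}} form a prefix of the sorted list.
    n_feasible = sum(p < orig[-1] for p in new)
    m = min(k, n_feasible)                    # number of new jobs merged in
    head = sorted(list(orig) + new[:m])       # SPT merge of originals and m new jobs
    return head + new[m:]                     # remaining new jobs appended in SPT order

print(algorithm_one([2, 4, 6, 9], [1, 3, 10], k=1))   # hypothetical instance
```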
Similarly, we consider problem 1|p_jr = p_j r^α, ΣD_j(π*) ≤ k|ΣC_j. The lemmas above show that a locally optimal partial schedule can be found by merging several appropriate jobs of £_O and £_N in SPT order. An optimal merging can be performed subject to the given constraint on the total sequence disruption by the following dynamic programming algorithm.
Algorithm DP
Step 0. Set the value of k (≤ n O n N ).
Step 1. Index the jobs of £_N in SPT order: J_{n_O+1} ≺ J_{n_O+2} ≺ ... ≺ J_{n_O+n_N}.
Step 3. Let f(i, j, δ) = minimum total completion time of a partial schedule for jobs J_1, J_2, ..., J_i and J_{n_O+1}, ..., J_{n_O+j}, where the total sequence disruption is equal to δ (boundary condition f(0, 0, 0) = 0).
Step 4. Compute f(i, j, δ) by the recurrence relation discussed below.
Step 5. Optimal solution value: min_{0≤δ≤k} f(n_O, n_N, δ).
Obviously, for the recurrence relation in Step 4, the term f(i − 1, j, δ − j) corresponds to the case where the partial schedule ends with job J_i ∈ £_O, and j jobs of £_N appear before job J_i in such a partial schedule; the increase in the total sequence disruption is equal to j. The term f(i, j − 1, δ) corresponds to the case where the partial schedule ends with job J_{n_O+j} ∈ £_N; the total sequence disruption is unchanged. Thus, we have the following proposition. For problem 1|p_jr = p_j r^α, ΣD_j(π*) ≤ k|C_max, we only modify the recurrence relation of the above algorithm accordingly, and similarly we have the following proposition.
Proposition 3. An optimal schedule for problem 1|p_jr = p_j r^α, ΣD_j(π*) ≤ k|C_max can be found in O(n_O² n_N²) time. Similarly, we consider 1|p_jr = p_j r^α, Δ_max(π*) ≤ k|γ, where γ ∈ {C_max, ΣC_j}. The (SPT, SPT) property shows that an optimal schedule is found by merging the SPT-ordered lists of jobs of £_O and £_N. We know that C_j(π*) can be obtained by sequencing the first j jobs in £_O in SPT order. Because of the (SPT, SPT) property, we have C_j(σ) ≥ C_j(π*) for J_j ∈ £_O. Thus, the constraint Δ_max(π*) ≤ k reduces to C_j(σ) ≤ C_j(π*) + k, which means job J_j ∈ £_O has a deadline d̄_j = C_j(π*) + k, while d̄_j = ∞ for job J_j ∈ £_N; it is obvious that d̄_1 ≤ d̄_2 ≤ ... ≤ d̄_{n_O}. Once the deadlines have been computed, the problem can be denoted as 1|p_jr = p_j r^α, d̄_j|γ, where γ ∈ {C_max, ΣC_j}. Since Smith (1956) solved the problem 1|d̄_j|ΣC_j by a backward procedure in which a feasible job with the largest due date is assigned to the last unfilled position in a sequence, we propose an algorithm obtained by modifying Smith's rule to solve the problem.
Algorithm MS
Step 1. Index the jobs of £_O in SPT order: J_1 ≺ J_2 ≺ ... ≺ J_{n_O}, and the jobs of £_N in SPT order: J_{n_O+1} ≺ J_{n_O+2} ≺ ... ≺ J_{n_O+n_N}, then reindex the jobs of £ in SPT order and let ζ be the set of jobs sequenced after job J_{n_O}.
Step 2. If job J_{n_O} is sequenced before job J_{n_O+1}, then terminate; otherwise, let job J_{n_O+j} be the last job of £_N that is sequenced before job J_{n_O}.
Step 3. Compute C_j(£) for all jobs of £_O; if C_j(£) ≤ d̄_j, go to Step 5; otherwise, remove job J_{n_O+j} from £.
Step 5. Sequence the jobs of ζ in SPT order after £ and obtain the optimal schedule. Proposition 4. An optimal schedule for problem 1|p_jr = p_j r^α, Δ_max(π*) ≤ k|γ, where γ ∈ {C_max, ΣC_j}, can be found in O(n + n_N log n_N) time.
Proof. The jobs in £_N are indexed in SPT order, so the job to be scheduled in the last unfilled position of the sequence is either the last unscheduled job in π* or the last unscheduled job among the SPT-indexed jobs of £_N. The ordering of the jobs of £_N requires O(n_N log n_N) time and the schedule construction requires O(n) time. From Lemma 3.3, it only remains to enumerate all possible ways of merging the SPT-ordered lists of jobs of £_O and £_N. Algorithm MS does so by comparing the cost of all possible state transitions to find an optimal schedule, while the recurrence relation requires constant time for each set of values of the state variables. Thus, the overall time complexity of Algorithm MS is O(n + n_N log n_N). Now, we focus on problem 1|p_jr = p_j r^α, ΣΔ_j(π*) ≤ k|ΣC_j. We know that the problem reduces to 1|ΣΔ_j(π*) ≤ k|ΣC_j when α = 0, which was proved to be binary NP-hard (Hall and Potts, 2004). Therefore, problem 1|p_jr = p_j r^α, ΣΔ_j(π*) ≤ k|ΣC_j is at least binary NP-hard. The lemmas above show that a locally optimal partial schedule can be found by merging several appropriate jobs of £_O and £_N in SPT order. For this problem, an optimal merging can be performed subject to the given constraint on the total disruption by the following dynamic programming algorithm.
Obviously, for the recurrence relation in Step 5, the first term corresponds to the case where the partial schedule ends with job J_i ∈ £_O, and j jobs of £_N appear before job J_i in such a partial schedule; the increase in the total sequence disruption is equal to j. The term f(i, j − 1, τ) corresponds to the case where the partial schedule ends with job J_{n_O+j} ∈ £_N; the total disruption is unchanged. Thus, we have the following propositions. Proposition 5. An optimal schedule for problem 1|p_jr = p_j r^α, ΣΔ_j(π*) ≤ k|ΣC_j can be found in pseudo-polynomial time. For problem 1|p_jr = p_j r^α, ΣΔ_j(π*) ≤ k|C_max, we only modify the recurrence relation of the above algorithm accordingly, and similarly we have the following proposition. Proposition 6. An optimal schedule for problem 1|p_jr = p_j r^α, ΣΔ_j(π*) ≤ k|C_max can be found in O(n_O n_N min{n_O P_N, n_N P_O}) time.
4. Rescheduling objective and disruption cost problem. In this section, we consider rescheduling problems that minimize the sum of several classical objectives and the costs of the position disruption caused by inserting new jobs. For such problems, we try to find an appropriate partial sequence to insert into the original job sequence so as to minimize the objectives. That is, if there exist n_N new jobs to be inserted into the original job sequence, the position disruption reaches up to n_N at most. Thus, according to the definition of D_max, we use the approach that solved problem 1|p_jr = p_j r^α, D_max(π*) ≤ k|C_max (k = 0, 1, 2, ..., n_N) to study problem 1|p_jr = p_j r^α|C_max + µD_max(π*). Let σ_k denote an optimal schedule for the former problem and compute min_{0≤k≤n_N}{C_max(σ_k) + µk}; we propose the following algorithm.
Algorithm II
Step 1. Index the jobs of £_N in SPT order, i.e., J_{n_O+1} ≺ J_{n_O+2} ≺ ... ≺ J_{n_O+n_N}.
Step 2. If p_{n_O} ≤ p_{n_O+1}, then terminate; otherwise, define j such that n_O + j is the index of the last job of £_N sequenced before job J_{n_O}.
Step 2.1 Let r_j be the processing position of job J_{n_O+j}, and x_j be the number of jobs sequenced after job J_{n_O+j} up to and including J_{n_O}.
Step 2.2 Set σ j to be the schedule defined by the current job sequence, and compute C max (σ j ).
Step 3.1 Form schedule σ_{j−1} from σ_j by moving job J_{n_O+j} immediately after job J_{n_O}.
Proposition 7. For problem 1|p_jr = p_j r^α|C_max + µD_max(π*), an optimal schedule can be found by Algorithm II in O(n + n_N log n_N) time.
Proof. Algorithm II computes min_{0≤k≤n_N}{C_max(σ_k) + µk}, where σ_k is found by solving problem 1|p_jr = p_j r^α, D_max(π*) ≤ k|C_max, and it generates an optimal schedule since all the possible values of k are considered.
Step 1 requires O(n N log n N ) time to construct a SPT schedule of all jobs of £ N , and O(n) time to construct a SPT schedule of all jobs, while Step 2 requires O(n) time.
Step 3 is performed at most j times since j ≤ n_N, and each application requires constant time. Therefore, the computational complexity of Algorithm II is O(n + n_N log n_N).
The following numerical example illustrates the working principle of the above algorithm. Obviously, if all the jobs are rescheduled by the SPT rule, the objective value is 53.4803. Using the above algorithm, a finite number of new jobs (here j = 4) are inserted into the original sequence to construct a new sequence, i.e., σ_j = (J_1, J_a, J_2, J_3, J_b, J_c, J_4, J_5, J_d, J_6, J_e, J_f), and according to Step 3.1 we have σ_{j−1} = (J_1, J_a, J_2, J_3, J_b, J_c, J_4, J_5, J_6, J_d, J_e, J_f). Thus, we have r_j = 9, x_j = 1 and obtain A_j = (p_{n_O}, p_{n_O+4}) = (p_{n_O}, p_d), B_j = (r_j^α, (r_j + x_j)^α) = (9^α, 10^α). Therefore, C_max(σ_j) = C_max(σ_{j−1}) + (p_{n_O}, p_d) H_2 (9^α, 10^α)^T. Repeating this computation, we obtain the optimal sequence among the k candidate sequences, and the optimal cost is 51.032.
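The brute-force sketch below illustrates the principle behind Algorithm II rather than its O(n + n_N log n_N) incremental update: for every admissible k it builds the merged SPT schedule σ_k, evaluates C_max(σ_k) + µk, and keeps the best. The instance data and helper names are hypothetical.

```python
# Sketch of the trade-off evaluated by Algorithm II (hypothetical instance).
def makespan(seq, alpha):
    """Makespan of a sequence of normal processing times under the learning effect."""
    return sum(p * r ** alpha for r, p in enumerate(seq, start=1))

def best_tradeoff(orig, new, alpha, mu):
    new = sorted(new)
    n_feasible = sum(p < orig[-1] for p in new)   # new jobs shorter than the longest original
    best = None
    for k in range(n_feasible + 1):
        # Merge the k smallest new jobs with the originals in SPT order; D_max = k.
        seq = sorted(list(orig) + new[:k]) + new[k:]
        value = makespan(seq, alpha) + mu * k
        if best is None or value < best[0]:
            best = (value, k, seq)
    return best

print(best_tradeoff([2, 4, 6, 9], [1, 3, 5, 10], alpha=-0.322, mu=0.5))
```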
Secondly, we consider problem 1|p jr = p j r α | C j + µD max (π * ). For such problem, we propose the following algorithm.
Algorithm III
Step 1. Index the jobs of £ N in SPT order.
Step 2. If job p n O ≤ p n O +1 , then terminate; otherwise, define j such that n O + j is the index of the last job of J N that is sequenced before job J n O .
Step 2.1 Let r_j be the processing position of job J_{n_O+j}, and x_j be the number of jobs sequenced after job J_{n_O+j} up to and including J_{n_O}.
Step 2.3 Set σ_j to be the schedule defined by the current job sequence, and compute Σ_{h∈£} C_h(σ_j).
Step 3.1 Form schedule σ k−1 from σ k by moving job J n O +k immediately after job J n O .
Step 3.2 Compute Σ_{h∈£} C_h(σ_{k−1}).
Step 4. Find the optimal schedule σ* = σ_{k*} such that k* = arg min_{0≤k≤n_N} {Σ_{h∈£} C_h(σ_k) + µk}.
Proposition 8. For problem 1|p_jr = p_j r^α|ΣC_j + µD_max(π*), an optimal schedule can be found by Algorithm III in O(n + n_N log n_N) time.
Proof. The proof is similar to that of Proposition 4. If all the jobs are rescheduled by the SPT rule, the objective value is 225.4965. Following the same procedure as in Example 1 and using the above algorithm, we can obtain the optimal schedule, and the optimal cost is 223.5387.
Algorithm IV
Step 1. Index the jobs of £ N in SPT order.
Step 2. If job p n O ≤ p n O +1 , then terminate; otherwise, define j such that n O + j is the index of the last job of J N that is sequenced before job J n O .
Step 2.1 Let r j be the processed position of job J n O +j , and x j be the number of those jobs which are sequenced after jobs J n O +j up to and including J n O .
Step 2.2 Set σ j to be the schedule defined by the current job sequence, and compute C n O (σ j ) and C max (σ j ).
Step 3.1 Form schedule σ_{k−1} from σ_k by moving job J_{n_O+k} immediately after job J_{n_O}.
Step 4. Find schedule σ_{k*} such that k* = arg min_{0≤k≤n_N} {C_max(σ_k) + µΔ_max(π*, σ_k)}.
Proposition 9. For problem 1|p_jr = p_j r^α|C_max + µΔ_max(π*), an optimal schedule can be found by Algorithm IV in O(n + n_N log n_N) time.
Proof. The process of the proof is similar to that of Proposition 4. If all the jobs are rescheduled by the SPT rule, the objective value is 47.0544. Following the same procedure as in Example 1 and using the above algorithm, we can obtain the optimal schedule, and the optimal cost is 45.5074.
Algorithm V
Step 1. Index the jobs of £ N in SPT order.
Step 2. If job p n O ≤ p n O +1 , then terminate; otherwise, define j such that n O + j is the index of the last job of J N that is sequenced before job J n O .
Step 2.1 Let r j be the processed position of job J n O +j , and x j be the number of those jobs which are sequenced after jobs J n O +j up to and including J n O .
Step 2.2 Set σ j to be the schedule defined by the current job sequence, and compute C n O (σ j ) and C max (σ j ).
Step 3.1 Form schedule σ j−1 from σ j by moving job J n O +k immediately after job J n O .
Proposition 10. For problem 1|p_jr = p_j r^α|ΣC_j + µΔ_max(π*), an optimal schedule can be found by Algorithm V in O(n + n_N log n_N) time.
Proof. The process of the proof is similar to that of Proposition 4. If all the jobs are rescheduled by the SPT rule, the objective value is 216.5706. Following the same procedure as in Example 1 and using the above algorithm, we can obtain the optimal schedule by sequencing the jobs in the order J_1 ≺ J_a ≺ J_2 ≺ J_b ≺ J_c ≺ J_d ≺ J_3 ≺ J_4 ≺ J_5 ≺ J_e ≺ J_f ≺ J_g, and the optimal cost is 216.1061.
5. Conclusions. In this paper, we consider single-machine rescheduling problems with learning effect under disruption constraints. In the proposed scheduling models, the learning effect means that workers gain experience from producing jobs, making the actual processing time of jobs shorter than their normal processing time, while rescheduling means scheduling the already sequenced jobs again together with a set of newly arrived jobs so as to generate a new feasible schedule, which creates disruptions. The goal of optimizing system performance is addressed by minimizing the makespan or the total completion time of the new jobs being integrated into the schedule under limited disruption constraints. For the considered problems, by using several proposed lemmas and classical scheduling rules, we provide polynomial-time algorithms for minimizing the makespan and the total completion time under several disruption constraints between the original and revised schedules. For some models, we also designed dynamic programming algorithms. However, in this paper we only study single-machine rescheduling problems with learning effect for deterministic new job arrivals; therefore, we will extend the models to dynamic new job arrivals and multiple machines in the future. | 7,422.8 | 2017-09-01T00:00:00.000 | [
"Engineering"
] |
Influence Research of Digital Inclusive Finance on Innovation Efficiency of Agricultural Science and Technology in Jiangsu Province
Based on the balanced panel data of 13 Jiangsu prefecture-level cities from 2011 to 2020, this study utilizes an overall effect model and an intermediary effect model to explore the impact of digital inclusive finance on the innovation efficiency of agricultural science and technology. It is found that digital inclusive finance has a positive and direct impact on the innovation efficiency of agricultural science and technology in Jiangsu. Further inspection shows that the marketization level has a masking effect between digital inclusive finance and the innovation efficiency of agricultural science and technology, while the urbanization level plays a partial intermediary role between them. Concerning the positive effect of digital inclusive finance on agricultural science and technology innovation efficiency, the central, southern and northern regions of Jiangsu province rank first, second and third, respectively. Finally, the forecast model shows that in Central, Northern and Southern Jiangsu, the improvement of the innovation efficiency of agricultural science and technology will benefit from the driving power of digital inclusive finance, except in Nantong.
Introduction
With the deepening of Reform and Opening-up, innovation activities in the agricultural sector have become increasingly active, but they still face problems such as the uneven allocation of production factors, low levels of economic development and a wide income gap between urban and rural residents, which hinder the sustainable improvement of the innovation efficiency of agricultural science and technology. In recent years, with the development of digital technology and deepening financial change, digital inclusive finance has been more widely used, with an average annual increase of 29.1% in the Digital Inclusive Finance Index between 2011 and 2020. To further promote agricultural technology innovation and high-quality development, the 14th Five-Year Plan emphasized the important role of digital inclusive finance in leading agricultural technology innovation and accelerating the process of agricultural modernization. The overall development of digital inclusive finance in Jiangsu Province is relatively good, ranking among the top in China in terms of financial market players, products and services as well as external environment construction. At the same time, the driving effect of science and technology innovation on agricultural development is increasingly prominent. The contribution rate of agricultural science and technology innovation in Jiangsu Province currently reaches 55.7%, exceeding the national average by 8%, but still about 20% lower than in developed countries, and the driving effect of science and technology innovation has yet to be further strengthened. Based on the influence mechanism of digital inclusive finance on the innovation efficiency of agricultural science and technology, this paper discusses the direct effect of digital inclusive finance on the innovation efficiency of agricultural science and technology in Jiangsu Province and the indirect effects through the degree of marketization and the level of urbanization.
Literature review and research hypotheses
Research on the effects of digital financial inclusion
Regarding the effects of digital inclusive finance, scholars such as Arner, Khin and Ozili have focused on macro effects such as social well-being, infrastructure development and economic growth [1][2][3][4][5]. In terms of the impact of digital inclusive finance on technological innovation, studies have focused mainly on cities, enterprises and manufacturing industries. For example, Li studied the effects of digital inclusive finance on enterprise innovation in China from an inclusive perspective [4], and Pan explored its effects in enhancing innovation in cities [5]. Overall, research on the effect of digital financial inclusion on agricultural technology innovation remains relatively scarce.
Research on the evaluation of the innovation efficiency of agricultural science and technology
Research on the evaluation of the innovation efficiency of agricultural science and technology has long been a focus of academic discussion. For example, Du and Zhao analyzed the innovation efficiency of agricultural science and technology in China using the DEA model and the SFA model, respectively [6,7]. Xiao and Chen measured the innovation efficiency of agricultural science and technology in Anhui Province and Henan Province, respectively, using DEA models [8,9]. Most domestic scholars evaluate the innovation efficiency of agricultural science and technology along two dimensions, innovation input and innovation output, but there is not yet full agreement on the selection of specific measurement indicators. In the relevant literature, most studies choose the number of scientific researchers and research funding as agricultural innovation input indicators [7][8][9], and the number of agricultural patents [8] and agricultural value added [10] as output indicators. Regarding evaluation methods, domestic and foreign scholars mostly use statistical regression analysis, data envelopment analysis (DEA), stochastic frontier analysis (SFA), and weighting-based methods for the quantitative evaluation of science and technology innovation efficiency. DEA is the most widely used method for evaluating agricultural science and technology innovation efficiency, as it can compare and evaluate multiple decision-making units simultaneously without requiring prior assumptions about scale or functional form, thus effectively avoiding subjectivity.
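To make the DEA approach concrete, the sketch below solves the input-oriented CCR linear programme once per decision-making unit (city). It is a minimal illustration with hypothetical data, not the exact specification of this study, which would use the Table 1 input-output indicators.

import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    # X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs); returns one efficiency score per DMU.
    n, n_in = X.shape
    n_out = Y.shape[1]
    scores = np.empty(n)
    for k in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs:  sum_j lambda_j * x_ij - theta * x_ik <= 0
        A_in = np.hstack([-X[k].reshape(n_in, 1), X.T])
        # Outputs: -sum_j lambda_j * y_rj <= -y_rk
        A_out = np.hstack([np.zeros((n_out, 1)), -Y.T])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(n_in), -Y[k]],
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores[k] = res.x[0]  # theta* = 1 means the DMU is efficient
    return scores

# e.g. 13 cities, 3 inputs, 2 outputs (random placeholder data):
# dea_ccr_input(np.random.rand(13, 3) + 0.1, np.random.rand(13, 2) + 0.1)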
Mechanisms of how digital inclusive finance influences the innovation efficiency of agricultural science and technology
Digital inclusive finance significantly improves the efficiency of financial services through information technology and digital product innovation, empowers disadvantaged groups, helps stimulate the research and development activities of agricultural innovation subjects, and has an important impact on the innovation efficiency of agricultural science and technology. Specifically, digital inclusive finance first improves the availability of funds for agricultural SMEs and farmers and promotes the popularization and low cost of financial services. Because digital inclusive finance is convenient, widely accessible and commercially sustainable, it matches the frequent innovation activities and small capital needs of SMEs and can effectively alleviate the financing difficulties faced by agricultural SMEs and farmers in science and technology innovation. Secondly, digital inclusive finance can break through the limitations of time and space, using digital technology and the Internet as a carrier to optimize the flow of savings and credit products and services and to simplify payment procedures. Accordingly, digital inclusive finance saves time and transaction costs in accessing funds and effectively meets the capital needs of agricultural SMEs and farmers, especially during the COVID-19 pandemic. Thirdly, digital inclusive finance can help farmers with weak financial risk identification skills to avoid financial risks, thus stimulating their innovation momentum. In recent years, the digital economy has strengthened the roots of financial inclusion, enhanced the transparency of financial market operations, effectively discouraged risks such as financial fraud, and stimulated agricultural SMEs and farmers to carry out innovative activities with the support of digital inclusive finance (see Figure 1). Accordingly, this paper proposes research hypothesis 1: digital inclusive finance directly contributes to the innovation efficiency of agricultural science and technology.
Intermediary effects of marketization, urbanization
The rapid development of digital inclusive finance can optimize the structure of capital supply, expand its scale, help agricultural SMEs and farmers overcome difficult and expensive financing, stimulate innovation momentum and consumption potential, promote the cross-domain flow of technology, services and commodities, and thus further raise the level of marketization. A higher level of marketization strengthens the mobility of factors, providing a favourable external environment for the flow of innovation factors into agriculture and for stimulating the innovation dynamics of researchers; it promotes healthy competition in agricultural science and technology innovation and helps strengthen both the efficiency orientation of agricultural innovation activities and the quality assurance of innovation results. Accordingly, this paper proposes research hypothesis 2: digital inclusive finance contributes to the improvement of the innovation efficiency of agricultural science and technology by promoting marketization.
Supported by advanced digital technology and sophisticated big-data risk-control models, financial institutions using digital inclusive finance can identify loan risks accurately, expand the scale of credit and improve the efficiency of financial services, which in turn provides the security and efficiency needed to fund urbanization. At the same time, digital inclusive finance, with its long-tail effect, can provide agricultural SMEs and farmers with timely and appropriate financial products and services at a lower cost, alleviating their financial difficulties in joining the urbanization process. Continued urbanization boosts the inflow of scientific and technological innovation into rural areas, enhances agricultural SMEs' and farmers' awareness and experience of the benefits of agricultural science and technology, and stimulates their enthusiasm for independent innovation. It also enhances the attractiveness of rural areas to external resources, and the inflow of advanced educational resources and financial support from cities supports the development of scientific and technological innovation activities. Accordingly, this paper proposes research hypothesis 3: digital inclusive finance promotes the innovation efficiency of agricultural science and technology through increased urbanization.
With reference to the measurement indicators of related studies [6][7][8][9][10][11][12][13], this paper constructs a system of agricultural science and technology innovation efficiency indicators along the two dimensions of input and output (as shown in Table 1). Since the input indicators are not recorded directly in the available statistics, the treatment of the number of agricultural R&D personnel, full-time equivalents of agricultural R&D personnel and agricultural R&D expenditure follows Chen and other scholars. The results based on the DEA method show that the innovation efficiency of agricultural science and technology in all 13 prefecture-level cities of Jiangsu Province increased during 2011-2020. Among them, Nanjing, Wuxi, Changzhou and Suzhou had already reached the optimal level of 0.9 or above in 2020. Nantong, Yangzhou and Zhenjiang, slightly behind these cities, reached an average agricultural science and technology innovation efficiency of 0.827 in 2020. Although Huai'an, Yancheng, Taizhou and Suqian started from a lower level of agricultural innovation efficiency, their rate of increase was significant, averaging 3.9% between 2011 and 2020. In contrast, the innovation efficiency of agricultural science and technology in Xuzhou and Lianyungang rose more slowly, by about 1.5% over the decade. Meanwhile, from 2011 to 2020, the innovation efficiency of agricultural science and technology in all prefecture-level cities of southern Jiangsu was higher than the provincial level, while that in central and northern Jiangsu was below the provincial level, with the exception of Nantong. Nantong was ahead of the provincial level until 2014 but gradually fell behind thereafter (see Table 2).
Model settings
In order to study the impact of digital inclusive finance on the innovation efficiency of agricultural science and technology in Jiangsu Province, and to test the mechanism through which digital inclusive finance promotes this efficiency via an increased degree of marketization and urbanization, this paper constructs the overall regression model and the mediating effect models shown below.
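The three models take the following form (a reconstruction consistent with the symbol definitions below and with the coefficients α1, λ1 and β2 referred to in the mediation test, rather than the paper's verbatim equations):

Sci_it = α0 + α1 Dfi_it + αx X_it + γi + δt + ε_it,   (1)
M_it = λ0 + λ1 Dfi_it + λx X_it + γi + δt + ε_it,   (2)
Sci_it = β0 + β1 Dfi_it + β2 M_it + βx X_it + γi + δt + ε_it.   (3)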
In Equation (1), Sci_it represents the agricultural science and technology innovation efficiency index of prefecture-level city i in year t; Dfi_it represents digital financial inclusion (composite index) for prefecture-level city i in year t; X represents the group of control variables affecting the innovation efficiency of agricultural science and technology; γi and δt denote city fixed effects and time fixed effects, respectively; and ε_it is a random disturbance term. M in Equations (2) and (3) denotes the mediating variables, and the remaining symbols are as in the overall regression model.
Variable design and data sources
Based on the preceding mechanism analysis, this paper selects the innovation efficiency of agricultural science and technology as the explained variable, the digital inclusive finance index (composite index) as the core explanatory variable, agricultural productivity development, industrial structure, economic development level and infrastructure construction as the control variables, and the degree of marketization and the urbanization rate as the intermediary variables (see Table 3). In the data collation process, linear interpolation is used to fill the missing values of some variables. Explained variable: the explained variable in this paper is the innovation efficiency of agricultural science and technology (Sci), calculated using data envelopment analysis.
Core explanatory variable: The core explanatory variable in this paper is digital inclusive finance (Dfi), which is measured using the Digital Inclusive Finance Composite Index published by the Digital Finance Centre of Peking University [14].
Control variables: since this paper focuses on the innovation efficiency of science and technology in agriculture, the total annual power of agricultural machinery in each region is used to measure the level of agricultural productivity development; the value added of the primary industry as a proportion of GDP measures the industrial structure; the logarithm of GDP measures the level of economic development; and infrastructure construction is measured by the total sown area of crops in each region.
Intermediary variables: the degree of marketization is reflected by the total marketization score in the Fan Gang Marketization Index; the urbanization rate is the proportion of the urban resident population to the total population.
Descriptive statistics
The calculation results show that the mean value of agricultural science and technology innovation efficiency from 2011 to 2020 is 0.79 with a small standard deviation, indicating that the differences in agricultural science and technology innovation efficiency among the 13 prefecture-level cities of Jiangsu Province are not large. For the digital inclusive finance development index, the maximum value is 313.90 and the minimum value is 50.35, indicating that digital inclusive finance in Jiangsu Province has grown markedly over the past ten years with a clear upward trend. The minimum value of the economic development level is 15.36 and the maximum is 18.81, indicating that the level of economic development varies considerably among prefecture-level cities. The large ranges of agricultural productivity development, industrial structure and infrastructure construction reflect differences in agricultural resource endowments and in the importance the 13 prefecture-level cities attach to agricultural development and their development orientation.
Overall effect
Based on the estimation results of Equation (1), digital inclusive finance has a direct effect on the innovation efficiency of agricultural science and technology, and the results of models (1) to (5) show that Equation (1) is robust (see Table 5). Model (1) introduces only the level of development of digital inclusive finance, while model (2) adds the four control variables of agricultural productivity development, industrial structure, economic development level and infrastructure construction. Models (3) to (5) add city fixed effects, time fixed effects and two-way fixed effects, in that order. In model (5), the coefficient on digital inclusive finance rejects the null hypothesis at the 1% significance level, indicating a significant positive effect on the innovation efficiency of agricultural science and technology: each 1% increase in digital inclusive finance raises the innovation efficiency by 0.532 percentage points, consistent with hypothesis 1 above.
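A minimal sketch of how a two-way fixed-effects specification such as model (5) could be estimated follows; the DataFrame df and all column names (Sci, Dfi, machinery, industry, lngdp, sown_area, city, year) are hypothetical stand-ins for the variables of Table 3, not the authors' actual code.

import statsmodels.formula.api as smf

# City and year dummies implement the two-way fixed effects of model (5);
# standard errors are clustered by city.
fe_model = smf.ols(
    "Sci ~ Dfi + machinery + industry + lngdp + sown_area"
    " + C(city) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["city"]})
print(fe_model.params["Dfi"])  # the Dfi coefficient (cf. Table 5)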
Robustness test
To further test the robustness of the results, the following three evaluation dimensions of digital inclusive finance are introduced and estimated separately (as shown in Table 6). The robustness results are broadly consistent with the regression results in Table 5, and the signs of the coefficients are unchanged. Although the depth of use of digital inclusive finance does not significantly affect the innovation efficiency of agricultural science and technology, the significant positive effects of the breadth of coverage and of digitalization remain. The coefficient of the breadth of coverage (0.944) is larger than the baseline regression coefficient, indicating that digital inclusive finance operates most strongly through coverage breadth: it can extend financial services to remote areas that traditional financial institutions find difficult to reach, improve the financial accessibility of agricultural operators, meet the financing needs arising from agricultural science and technology innovation, and thereby help improve the innovation efficiency of agricultural science and technology.
Heterogeneity test
The estimates of models (9) to (11) show that digital inclusive finance has a significant positive effect on agricultural science and technology innovation efficiency in central, southern and northern Jiangsu, rejecting the null hypothesis at the 1%, 5% and 10% significance levels, respectively (see Table 7). The promotion effect of digital inclusive finance on agricultural science and technology innovation efficiency in central Jiangsu is thus higher than in southern and northern Jiangsu. The effect in northern Jiangsu appears weakest, indicating that digital inclusive finance contributes less to agricultural innovation efficiency in regions with weak financial services and innovation bases.
The intermediary effect test of marketization and urbanization
From the test results of the mediating effect of marketization, the regression coefficient of digital inclusive finance on the degree of marketization is significantly positive, indicating that digital financial inclusion has raised the level of marketization. After both digital financial inclusion and the degree of marketization are added to the regression model, the coefficient of the degree of marketization is significantly negative at -0.128 (see models (12) and (13) in Table 8). The mediating effect of marketization posited in hypothesis 2 therefore does not hold between digital financial inclusion and agricultural innovation efficiency. The reason may be the lack of effective supervision of deviations in capital flows in current practice. According to the research method of Wen [15], λ1β2 and α1 have different signs, so the result is interpreted as a masking effect, with effect size |λ1β2/α1|. In other words, when the degree of marketization is controlled and the flow of capital is strictly regulated, the promoting effect of digital financial inclusion on agricultural scientific and technological innovation is enhanced. According to the test results of the intermediary effect of urbanization, the regression coefficient of digital inclusive finance on the urbanization rate is significantly positive, indicating that digital inclusive finance raises the level of urbanization. After both the digital inclusive finance and urbanization rate variables are included in the regression model, the coefficients of both remain significant (see models (14) and (15) in Table 8), showing that urbanization plays a partial mediating role in the effect of digital inclusive finance on agricultural science and technology innovation efficiency; hypothesis 3 is thus verified. An accelerated urbanization process can strengthen the connection between rural and urban areas, promote the flow of urban science and technology resources to rural areas, and enhance the capacity for autonomous innovation in agriculture.
Drive prediction
Based on the preceding analysis of the mechanism through which digital inclusive finance affects the innovation efficiency of agricultural science and technology in Jiangsu Province, a Grey prediction model GM(1,1) was built in MATLAB on the 2011-2020 data to simulate the trend of agricultural science and technology innovation efficiency in southern, central and northern Jiangsu over the next five years (2022-2026) (see Figures 3, 4 and 5). The simulation results show that during 2022-2026, in the 12 prefecture-level cities of southern, central and northern Jiangsu (excluding Nantong), the trend of digital financial inclusion is highly consistent with the trend of agricultural science and technology innovation efficiency, both showing a clear underlying upward trend. By 2026, Nanjing, Suzhou, Changzhou and Wuxi are all forecast to reach 1 (the extreme value), while Yangzhou and Suqian are both forecast to reach 0.95 or above, ranking first in the central and northern regions, respectively.
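For illustration, a compact GM(1,1) implementation is sketched below (in Python rather than the MATLAB used by the study, and with placeholder inputs); the fitting follows the standard grey-model recipe of least squares on the accumulated series.

import numpy as np

def gm11_forecast(x0, horizon):
    # Grey GM(1,1): fit on the positive series x0 and forecast `horizon` steps ahead.
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                          # accumulated generating sequence
    z1 = 0.5 * (x1[1:] + x1[:-1])               # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # developing coefficient a, grey input b
    k = np.arange(len(x0) + horizon)            # k = 0 corresponds to the first observation
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.r_[x0[0], np.diff(x1_hat)]      # inverse accumulated generating operation
    return x0_hat[len(x0):]                     # the forecast values only

# e.g. forecasting five years of efficiency scores from a short series:
# gm11_forecast([0.72, 0.75, 0.79, 0.83, 0.86], horizon=5)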
Policy recommendations
(1) Strengthening the promotion and publicity of digital inclusive finance
The development of digital inclusive finance is still at an initial stage, and farmers and agricultural SMEs are not yet strongly aware of it. To address this, the authorities should improve the layout of digital inclusive finance business outlets and the construction of modern information infrastructure in rural areas across the province, especially in northern Jiangsu, and further increase the coverage of outlets and the speed of information dissemination. The authorities should also strengthen the publicity of digital inclusive finance policies and business knowledge, and popularize the necessary education to enhance the financial literacy of agricultural business entities and improve the utilization efficiency of digital inclusive finance.
(2) Strengthening the supervision of the flow of digital inclusive finance science and technology credit funds
While the authorities increase the proportion of digital inclusive finance credit devoted to agricultural science and technology and relieve the financial bottleneck constraining agricultural science and technology innovation, it is also necessary to strengthen the examination and supervision of digital inclusive finance credit projects and funds, in order to prevent the misallocation of science and technology credit funds and to ensure that the funds are used exclusively for their intended purpose. Monitoring during the lending process should be strengthened, verifying the true flow of funds by reviewing vouchers for their use and focusing on the match between the innovative use of credit funds and the client's recent substantive innovation activities; post-loan management services should also be strengthened, with timely settlement of client funds and evaluation of client creditworthiness.
(3) Increasing the financial support for urbanization by digital inclusive finance
The authorities should strengthen the credit support function of digital inclusive finance for rural public goods and services, and increase financial support for urbanization infrastructure and the credit budget for modern education, healthcare and information in rural areas. It is also necessary to strengthen subsidized financial support for the introduction of agricultural science and technology talent. At the same time, a joint capital injection model based on digital inclusive finance and developers should be actively explored to further promote high-level urbanization, enhance the attractiveness of rural areas in gathering external resources, and channel more high-quality urban resources and scientific and innovative elements into rural areas and agriculture.
Figure 1. Mechanisms of how digital inclusive finance influences the innovation efficiency of agricultural science and technology
Table 1. Agricultural science and technology innovation efficiency input-output indicator system
Table 2. Output results of agricultural science and technology innovation efficiency in Jiangsu Province from 2011 to 2020
Table 3. Name and source of model variables
Table 5. Overall effect model
Table 6. The test of agricultural science and technology innovation efficiency from various dimensions of digital inclusive finance
Table 7. Heterogeneity test of different locations in Jiangsu Province
Table 8. Intermediary effects model
"Economics"
] |
Is there 1.5-million-year-old ice near Dome C, Antarctica?
Ice sheets provide exceptional archives of past changes in polar climate, regional environment and global atmospheric composition. The oldest dated deep ice core drilled in Antarctica has been retrieved at EPICA Dome C (EDC), reaching ∼ 800 000 years. Obtaining an older paleoclimatic record from Antarctica is one of the greatest challenges of the ice core community. Here, we use internal isochrones identified from airborne radar, coupled to ice-flow modelling, to estimate the age of basal ice along transects in the Dome C area. Three glaciological properties are inferred from the isochrones: surface accumulation rate, geothermal flux and the exponent of the Lliboutry velocity profile. We find that old ice (> 1.5 Myr, 1.5 million years) likely exists in two regions: one ∼ 40 km south-west of Dome C along the ice divide to Vostok, close to a secondary dome that we name "Little Dome C" (LDC), and a second region named "North Patch" (NP) located 10-30 km north-east of Dome C, in a region where the geothermal flux is apparently relatively low. Our work demonstrates the value of combining radar observations with ice-flow modelling to accurately represent the true nature of ice flow, and to understand the formation of ice-sheet architecture, in the centre of large ice sheets.
Introduction
Since around 800 000 years ago, glacial periods have been dominated by a ∼ 100 000-year cyclicity, as documented in multiple proxies from marine, terrestrial and ice core records (Elderfield et al., 2012; Jouzel et al., 2007; Lisiecki and Raymo, 2005; Loulergue et al., 2008; Lüthi et al., 2008; Wang et al., 2008; Wolff et al., 2006). These data have provided evidence of consistent changes in polar and tropical temperatures, continental aridity, aerosol deposition, atmospheric greenhouse gas concentrations and global mean sea level over numerous glacial cycles. Conceptual models (Imbrie et al., 2011) have been proposed to explain these asymmetric 100 000-year cycles in response to changes in the configuration of the Earth's orbit and obliquity (Laskar et al., 2004), and involve threshold behaviour between different climate states within the Earth system (Parrenin and Paillard, 2012). The asymmetry between glacial inceptions and terminations may, for example, be due to the slow build-up of ice sheets and their rapid collapse once fully developed, due to glacial isostasy (Abe-Ouchi et al., 2013). Observed sequences of events and Earth system modelling studies (Fischer et al., 2010; Lüthi et al., 2008; Parrenin et al., 2013; Shakun et al., 2012) have shown that climate-carbon feedbacks also play a major role in the magnitude of glacial-interglacial transitions.
Critical to our understanding of these 100 000-year glacial cycles is the study of their onset during the Mid-Pleistocene Transition (MPT; Jouzel and Masson-Delmotte, 2010), which occurred between 1250 and 700 kyr BP (thousands of years before 1950; Clark et al., 2006), and most likely during Marine Isotope Stages (MIS) 22-24, around 900 kyr BP (Elderfield et al., 2012). Prior to the MPT, marine sediments (Lisiecki and Raymo, 2005) show glacial-interglacial cycles occurring at obliquity periodicities (40 kyr) and with a smaller amplitude. The exact cause of the MPT remains controversial and several mechanisms have been proposed, including the transition of the Antarctic ice sheet from a wholly terrestrial to a part-marine configuration (Raymo et al., 2006), a hypothesis which is, however, unsupported by long-term simulations (Pollard and DeConto, 2009); a nonlinear response to weak eccentricity changes (Imbrie et al., 2011); the merging of North American ice sheets (Bintanja and Van de Wal, 2008); changes in sea ice extent (Tziperman and Gildor, 2003); a time-varying insolation energy threshold (Tzedakis et al., 2017); a threshold effect related to the atmospheric dust load over the Southern Ocean (Martínez-Garcia et al., 2011); and a long-term decrease in atmospheric CO2 concentrations (Berger et al., 1999), the latter hypothesis being challenged by indirect estimates of atmospheric CO2 from marine sediments (Hönisch et al., 2009).
A continuous Antarctic ice core record extending back at least to 1.5 Myr BP would shed new light on the MPT reorganisation (Jouzel and Masson-Delmotte, 2010) by providing records of Antarctic temperature, atmospheric greenhouse gas concentrations and aerosol fluxes prior to and after the MPT. The opportunity to measure cosmogenic isotopes (10Be) would also provide information on changes in the intensity of the Earth's magnetic field, especially during the Jaramillo transition (Singer and Brown, 2002). Retrieving Antarctica's "oldest ice" is therefore a major challenge for the ice core science community (Brook et al., 2006). A necessary first step towards this goal is to identify potential drilling sites based on available information on ice-sheet structure and accompanying age modelling (Fischer et al., 2013; Van Liefferinge and Pattyn, 2013).
The maximum age of a continuous ice core depends on several parameters (Fischer et al., 2013). Mathematically, the age χ of the ice at a level z above bedrock can be written as

χ(z) = ∫_z^H D(z′) / (a(z′) τ(z′)) dz′,   (1)

where D(z) is the relative density of the material (< 1 for the firn and = 1 for the ice), a(z) is the accumulation rate (initial vertical thickness of a layer, in metres of ice yr⁻¹), τ(z) is the vertical thinning function, i.e. the ratio of the vertical thickness of a layer in the ice core to its initial vertical thickness at the surface, and H is the total ice thickness. The maximum age χ_max can be increased by increasing H or by decreasing a or τ. At first glance, one might select a site where H is maximum and a is minimum, but this neglects the importance of τ, notably through basal melting. In general, τ decreases toward the bed and, in steady state, reaches the value µ = m/a, where m is the basal melting. m is therefore a crucial parameter of the problem, as it destroys the bottom of the ice record. As ice is a good insulator, a larger H either brings the ice temperature closer to melting under frozen basal conditions or, when melting is present, increases m together with the geothermal flux underneath the ice sheet (Fischer et al., 2013). Consequently, "oldest-ice" sites have a better chance to exist where the ice is not so thick as to lead to basal melting (Seddik et al., 2011), yet thick enough to contain a continuous ancient accumulation. The distance of a site to the ice divide is also an important parameter. This distance influences the profile of τ, which is increasingly non-linear right at a dome. Therefore, χ_max can be up to 10 times larger at a dome than a few kilometres downstream (Martín and Gudmundsson, 2012). Moreover, assuming a largely constant ice-sheet configuration across glacial cycles, an ice record close to the divide has travelled a shorter horizontal distance and therefore has a better chance of being stratigraphically undisturbed (Fischer et al., 2013).
The depth-age profile in an ice sheet can be obtained using radar observations at VHF ranges to identify englacial reflections (e.g. Fujita et al., 1999) and trace them as isochrones across the ice sheet (Cavitte et al., 2016; Siegert et al., 1998). Until now, such analysis has been restricted to the top ∼ three-fourths of the ice thickness in East Antarctica. However, depth-age information from internal layers can be used in conjunction with ice flow models and age information from ice cores to extrapolate down to the bed. Radar observations also allow estimates of poorly known ice-sheet parameters, such as the geothermal flux (Shapiro and Ritzwoller, 2004) and past changes in the position of ice domes and divides.
The Dome C sector is one of the target areas for the "oldest-ice" challenge and has a number of distinct benefits over other regions: it has already been heavily surveyed by geophysical techniques (Cavitte et al., 2016; Siegert et al., 1998; Tabacco et al., 1998), a reference age scale has been developed through the existing ice core work (Bazin et al., 2013; Veres et al., 2013) and it is logistically accessible from nearby Concordia Station. In this study, we concentrate on airborne radar transects (Fig. 1), which are all related to the EDC ice core. These data resolve the bed (Young et al., 2017) and internal isochrones (Cavitte et al., 2017) and are suitable for the oldest-ice search (Winter et al., 2017). The isochrones are dated up to about 366 kyr BP using the most recent AICC2012 chronology established for the EDC ice core (Bazin et al., 2013; Veres et al., 2013). We extrapolate the age of the isochrones toward the bed using an ice flow model in order to identify potential oldest-ice sites along these transects. We also build maps of surface accumulation rate, geothermal flux and a linearity parameter of the vertical velocity profile. The spatial and temporal variations of surface accumulation rates are discussed in detail in a companion paper (Cavitte et al., 2017).
Method
We use a 1-D pseudo-steady (Parrenin et al., 2006) ice flow model, which assumes that the geometry, the shape of the vertical velocity profile, the ratio µ = m/a and the relative density profile are constant in time. Only a temporal factor R(t) is applied to both the accumulation rate a and the basal melting m:

a(x, t) = R(t) a(x),  m(x, t) = R(t) m(x),   (2)

where a(x) and m(x) are the temporally averaged accumulation and melting rates at a given point x. Under the pseudo-steady assumption, the vertical thinning function is given by

τ(ζ) = (1 − µ) ω(ζ) + µ,   (3)

where ω is the horizontal flux shape function (Parrenin et al., 2006). While there is no physical reason to assume covariance of basal melting and surface accumulation, comparison with a transient dating model (Parrenin et al., 2007) shows errors of only 6 % maximum in the evaluation of the thinning function. Moreover, the fact that there is an analytical expression for the thinning function allows one to drastically reduce the computation time, an important factor since the 1-D model needs to be applied at many locations and with many different sets of parameters. A steady age χ_steady is first calculated assuming a steady accumulation a and a steady melting m. The real age χ is then calculated using the following equation (Parrenin et al., 2006):

χ_steady = ∫_0^χ R(t) dt.   (4)

R(t) (Fig. 2) is directly inferred from the accumulation record of the EDC ice core (Bazin et al., 2013; Veres et al., 2013). Beyond 800 kyr BP, it is assumed to be equal to 1; that is to say, the accumulation before 800 kyr BP is assumed equal to the average accumulation over the last 800 kyr. The horizontal flux shape function is determined using an analytical expression (Lliboutry, 1979; Parrenin et al., 2007):

ω(ζ) = 1 − (p + 2)/(p + 1) (1 − ζ) + 1/(p + 1) (1 − ζ)^(p+2),   (5)

where ζ = z/H is the normalised vertical coordinate (0 at bedrock and 1 at the surface) expressed in ice equivalent, and p is a parameter modifying the non-linearity of ω (the smaller p, the more non-linear ω). This formulation assumes a negligible basal sliding ratio, as is the case at EDC (Parrenin et al., 2007). This might not be the case elsewhere, but adding a basal sliding term has a similar effect to increasing the p parameter for the top ∼ three-fourths of the ice sheet. The age of the ice at any depth is deduced from Eq. (1) using the relative density profile at EDC (Bazin et al., 2013).
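As an illustration of the pseudo-steady machinery above (our own sketch, not the authors' code), the following evaluates the Lliboutry shape function, the thinning function of Eq. (3), and the steady age by numerical integration, assuming pure ice (D = 1) and a uniform steady accumulation.

import numpy as np

def lliboutry_omega(zeta, p):
    # Horizontal flux shape function (Lliboutry, 1979), no basal sliding.
    return (1.0 - (p + 2.0) / (p + 1.0) * (1.0 - zeta)
            + (1.0 - zeta) ** (p + 2.0) / (p + 1.0))

def steady_age(zeta, a_bar, mu, p, H):
    # Steady age (yr) at normalised heights zeta (ascending, zeta[-1] = 1 at surface),
    # for a steady accumulation a_bar (m ice / yr) and melt ratio mu = m / a.
    tau = (1.0 - mu) * lliboutry_omega(zeta, p) + mu    # thinning function, Eq. (3)
    integrand = 1.0 / (a_bar * tau)                     # yr per metre of ice
    chi = np.zeros_like(zeta)
    for i in range(len(zeta) - 2, -1, -1):              # integrate from the surface down
        dz = (zeta[i + 1] - zeta[i]) * H
        chi[i] = chi[i + 1] + 0.5 * (integrand[i] + integrand[i + 1]) * dz
    return chi

# e.g. steady_age(np.linspace(0.02, 1.0, 500), a_bar=0.02, mu=0.05, p=2.0, H=3200.0)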
To compute the basal melting, we use a simple steady-state 1-D thermal model. Neglecting the heat production by deformation (since there is minimal horizontal shear), we solve the heat equation

∂/∂z (k_T ∂T/∂z) = ρ_i c u_z ∂T/∂z,

where T is the temperature, u_z is the vertical velocity and ρ_i = 917 kg m⁻³ is the ice density (Cuffey and Paterson, 2010). The thermal conductivity k_T (W m⁻¹ K⁻¹) is given by (Cuffey and Paterson, 2010)

k_T = 9.828 exp(−5.7 × 10⁻³ T),

and c (J kg⁻¹ K⁻¹), the specific heat capacity (Cuffey and Paterson, 2010), is given by

c = 152.5 + 7.122 T.

The boundary conditions are a prescribed temperature T = T_S at the surface and a prescribed heat flux G_0 at the base (replaced by T = T_f when the melting point is reached), where T_S = 212.74 K is the average temperature at the surface, assumed to be equal to the one at Dome C (Parrenin et al., 2013), G_0 is the geothermal flux and T_f, the fusion temperature, is given by Ritz (1992) as a linear (Clausius-Clapeyron) function of pressure, where P_1 = 10⁶ Pa is the partial pressure of air and P, the pressure, is approximated by the hydrostatic pressure

P = ρ_i g (H − z),

where g = 9.81 m s⁻² is the gravitational acceleration. We use this formula since it gives the best agreement with the measured temperature profile at EDC (Passalacqua et al., 2017). The basal melting is given by

m = (G_0 − G) / (ρ_i L_f),

where G is the vertical heat flux at the base of the ice sheet and L_f = 333.5 kJ kg⁻¹ is the latent heat of fusion (Cuffey and Paterson, 2010).
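A rough numerical sketch of this thermal model is given below (our own illustration; H, G0 and the other values are placeholders, and the vertical velocity is assumed to follow the Lliboutry flux shape). It integrates the steady heat equation as a boundary-value problem.

import numpy as np
from scipy.integrate import solve_bvp

SPY = 365.25 * 24 * 3600.0          # seconds per year
RHO = 917.0                          # ice density, kg m-3
H, TS, G0 = 3200.0, 212.74, 0.055    # thickness (m), surface T (K), geothermal flux (W m-2)
a, m, p = 0.025 / SPY, 0.0, 2.0      # accumulation and basal melt (m s-1, ice equiv.)

def omega(z):                        # Lliboutry flux shape function, zeta = z / H
    zeta = z / H
    return 1 - (p + 2) / (p + 1) * (1 - zeta) + (1 - zeta) ** (p + 2) / (p + 1)

def rhs(z, y):                       # y = [T, q] with q = k_T dT/dz
    T, q = y
    k = 9.828 * np.exp(-5.7e-3 * T)  # conductivity (Cuffey and Paterson, 2010)
    c = 152.5 + 7.122 * T            # heat capacity (Cuffey and Paterson, 2010)
    uz = -(m + (a - m) * omega(z))   # downward vertical velocity (assumed shape)
    return np.vstack([q / k, RHO * c * uz * q / k])

def bc(y_bed, y_surf):               # upward conductive flux G0 at the bed, T = TS at surface
    return np.array([y_bed[1] + G0, y_surf[0] - TS])

z = np.linspace(0.0, H, 400)
guess = np.vstack([TS + 50.0 * (1.0 - z / H), np.full_like(z, -G0)])
sol = solve_bvp(rhs, bc, z, guess)
print("basal temperature (K):", sol.y[0, 0])  # compare with the fusion temperature T_f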
To prevent p from being < −1 (Eq. 5 has a singularity at p = −1), we work with the transformed parameter

p′ = ln(1 + p).

The values of a, G_0 and p′ are reconstructed using a variational inverse method with the radar isochrone constraints. The cost function to minimise is formulated as a least-squares expression:

J = Σ_{i=1}^{N} (χ_mod(d_i^iso) − χ_i^iso)² / (σ_i^iso)² + (p′ − p′_prior)² / σ_p′² + (G_0 − G_0,prior)² / σ_G0²,   (16)

where N is the number of isochrones (3 ≤ N ≤ 18, see Table 1 and Fig. 3), d_i^iso and χ_i^iso are the depths and ages of the isochrones respectively, σ_i^iso is the confidence interval on their age and χ_mod is the modelled age. p′_prior = ln(1 + 1.97) is the a priori estimate of p′, inferred from the age-scale model of the EDC ice core (Parrenin et al., 2007), and σ_p′ = 2 is its standard deviation, chosen to be sufficiently large to allow for a wide range of p values. G_0,prior = 51 mW m⁻² is the a priori estimate of the geothermal flux, calculated from satellite magnetic data (Fox Maule et al., 2005; Purucker, 2013) and from analysis of the heat required to maintain melting above subglacial lakes (Siegert and Dowdeswell, 1996). σ_G0 = 25 mW m⁻² is the uncertainty in the geothermal flux (Fox Maule et al., 2005; Purucker, 2013). The total uncertainty of the age of the isochrones σ_iso is composed of (1) the uncertainty in the depth of the traced isochrones (Cavitte et al., 2016), transferred into age, and (2) the uncertainty of the AICC2012 age of the isochrone at the EDC site.
To solve the least-squares problem formulated in Eq. (16), we use a standard Metropolis-Hastings algorithm (Hastings, 1970; Metropolis et al., 1953) with 1000 iterations. This allows one not only to obtain the most probable modelling scenario, but also to quantify the posterior probability distribution, in particular the confidence intervals of the modelled quantities.
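A minimal sketch of such a Metropolis-Hastings sampler follows (our own illustration; the cost function J, the starting point and the proposal step sizes are stand-ins for the quantities described above). Percentiles of the returned samples give confidence intervals such as the 15th and 85th percentiles shown in Fig. 3.

import numpy as np

def metropolis_hastings(cost_j, theta0, step, n_iter=1000, seed=0):
    # Samples theta = (a, G0, p') with posterior density proportional to exp(-J/2).
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    j = cost_j(theta)
    samples = [theta.copy()]
    for _ in range(n_iter):
        candidate = theta + step * rng.standard_normal(theta.size)
        j_cand = cost_j(candidate)
        # Accept with probability min(1, exp(-(J_cand - J) / 2)).
        if np.log(rng.random()) < 0.5 * (j - j_cand):
            theta, j = candidate, j_cand
        samples.append(theta.copy())
    return np.array(samples)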
Results and discussion
In our forward modelling, we used the 1-D pseudo-steady assumption. This assumption is numerically very convenient because there is then an analytical expression for the thinning function (Eq. 3), so there is no need to use a costly Lagrangian scheme following the trajectories of ice particles. Of course, reality is more complex than the pseudo-steady assumption, because the temporal variations in melting and accumulation rates are not related and are not the same at every point in space. In Parrenin et al. (2007), we used a more complex age model, with a ratio µ and an ice thickness allowed to vary in time; the results are very similar to those of the pseudo-steady model. This is because melting is small compared to the accumulation, and the variations in ice thickness are small compared to the total ice thickness. Regarding the spatial pattern of accumulation, we assumed that it is stable in time, which is roughly confirmed by the inversion of internal layers (Cavitte et al., 2017). Moreover, the 1-D assumption dominates the uncertainty, since we do not take into account horizontal advection and dome movement. We therefore suggest that the pseudo-steady assumption is good enough for a 1-D model.
An example age profile along the OIA/JKB2n/X45 radar transect (see Fig. 1 for its position) is displayed in Fig. 3. From these profiles, maps of the modelled age at 60 m above the bed, the minimum age at 60 m above the bed (at the 85 % confidence level), the height above the bed of the 1.5 Myr isochrone and the temporal resolution at 1.5 Myr are displayed in Fig. 4. We use 60 m above the bed as this is the height at EDC below which the ice becomes so disturbed that it cannot be interpreted stratigraphically (Tison et al., 2015). The modelled basal melting m and the inferred steady accumulation rate a, geothermal flux G_0 and p parameter of the vertical velocity profile are displayed in Fig. 5.
The bottom age inferred at EDC at 3200 m is 785 kyr, which is remarkably close to the age of ∼ 820 kyr inferred from the analysis of the ice core (Bazin et al., 2013; Veres et al., 2013). This 35 kyr difference represents a depth mismatch of 24 m and confirms the validity of the method, despite its assumptions (i.e. 1-D, pseudo-steady, Lliboutry velocity profile).
There are two main regions where the basal age is modelled to be older than 1.5 Myr. The first one is situated close to Little Dome C (LDC), ∼ 40 km south-west of EDC. In this region, which we call the LDC Patch (LDCP), the ice thickness is several hundred metres less than at EDC, reducing the likelihood of basal melting. The second region is 10-30 km north-east of EDC in the direction of the coast, at a place where the ice thickness is comparable to that at EDC but the geothermal flux is lower. We call this region "North Patch" (NP). In these two oldest-ice spots, the height above the bed of the 1.5 Myr isochrone is modelled to be greater than 150 m. The temporal resolution at 1.5 Myr is ∼ 10 kyr m⁻¹, which is sufficient to resolve the main climatic periods (Fischer et al., 2013).
Our LDCP area is generally consistent with Candidate A of Van Liefferinge and Pattyn (2013), although our area is smaller and constrained to the subglacial highlands under LDC. Van Liefferinge and Pattyn (2013) did not find a candidate at NP; however, the geothermal heat flux maps they relied on have a lower spatial resolution than the details we examine here. Our model does not predict very old ages for Candidates B-E of Van Liefferinge and Pattyn (2013), although the 1-D assumption is problematic in those areas, since ice particles have experienced very different ice thickness conditions along their paths.
One possible limitation of our simple ice-sheet model is that it does not allow for a layer of accreted ice. We argue that there are no discernible accretion features in the UTIG radargrams, although it is possible that accretion features do not show up in the basal layer, which is difficult to interpret.
We now examine the other variables inferred from the inversion. Basal melting is, of course, negligible at these two oldest-ice spots. Melting is, however, significant around EDC (consistent with the known basal melting at this site), on the other side of LDC, and on the bed ridge adjacent to the Concordia Subglacial Trench (called here the Concordia Ridge), consistent with the observation of subglacial lakes (Wright and Siegert, 2012; Young et al., 2017). While it is surprising that basal melting is so large across the bed ridge, where the ice thickness is smaller, the 1-D assumption is probably invalid in this region, since the ice has been significantly advected horizontally over regions with very different basal conditions (i.e. over the wet-based Concordia Subglacial Trench and then over the adjacent Concordia Ridge, which likely has a frozen base). The average surface accumulation rate shows a large-scale north-east to south-west gradient, probably linked to a precipitation gradient, as well as small-scale variations linked to surface features and probably due to snow redistribution by wind. The spatial and temporal variations of accumulation are the subject of a companion paper to this study (Cavitte et al., 2017). As for the geothermal flux, its reconstruction is only relevant when there is some basal melting (i.e. a temperate base). When the base is cold, its evaluation mainly relies on the prior used in the least-squares cost function: below the threshold of zero melting, further decreasing the geothermal flux has no effect on the basal melting, and therefore no effect on the modelled age. In the EDC region, the geothermal flux is estimated at around 60 mW m⁻². A high geothermal flux of ∼ 80 mW m⁻² is also estimated on the ridge adjacent to the Concordia Subglacial Trench. Again, these results should be taken with caution, since they could be an artifact of the 1-D assumption. The p value inferred
Conclusions
We developed a simple 1-D thermo-mechanical model constrained by radar observations to infer the age in an ice sheet. We identified two areas where the age of basal ice should exceed 1.5 Myr. They are located only a few tens of kilometres away from the French-Italian Concordia station, which could provide excellent logistical support for deep drilling.
The first area, LDCP, is close to a secondary dome, on a bedrock massif where the ice thickness is only ∼ 2700 m. It is located only ∼ 40 km from Concordia station in a south-westerly direction. The second area, NP, is 10-30 km north-east of Concordia in the direction of the coast. These "oldest-ice" candidates will be subject to further field studies to verify their suitability. A 3-D model approach would be necessary to study the effect of horizontal advection. Using the shape of the isochrones, which is better constrained than their absolute age, would shed more light on this problem. The possibility of a layer of stagnant ice should also be investigated. Ultimately, in situ study of the age of the bottommost ice at these sites will soon be feasible at minimal operational cost using new rapid-access drilling technologies (Chappellaz et al., 2012; Schwander et al., 2014), which will provide in situ measurements to further assess the age of the basal ice and the integrity of the ice core stratigraphy. If successful, this next step will open an exciting opportunity to extend the EDC records ∼ 700 kyr further back in time, which could help reveal the mechanisms controlling the last major climate reorganisation across the MPT.
Figure 1. Radar transects used in this study (dotted blue and red lines). The light colour scale represents the bedrock elevation (Fretwell et al., 2013), while the thin grey transparent lines represent the surface elevation (Fretwell et al., 2013). The red square in the inset shows the location of the zoomed map around EDC. The red star is the location of the EDC drilling site. The orange squared areas are oldest-ice candidates from Van Liefferinge and Pattyn (2013). The red dotted line is the OIA/JKB2n/X45 radar line displayed in Fig. 3.
Figure 2. R(t), the proportionality factor applied to accumulation and melting rates (see Eq. 2). The plot is cut at 1 Myr for better readability. R(t) is based on the accumulation record at EDC for the last 800 kyr (Bazin et al., 2013; Veres et al., 2013).
Figure 3. One-dimensional ice flow simulation along the OIA/JKB2n/X45 radar transect (see red dotted line in Fig. 1 for location). (a) Various inferred parameters (plain lines) as well as their 15th and 85th percentiles (dashed lines). From top to bottom: average surface accumulation rate, geothermal heat flux, p + 1 parameter of the velocity profile, average basal melting, bottom age 60 m above bedrock, height above bed of the 1.5 Myr isochrone and resolution of the 1.5 Myr isochrone. (b) Modelled age (colour scale; white is for ages older than 1.5 Myr), together with observed isochrones (in white) and bed (in thick black). Note the two main oldest-ice candidates at distance 25 km (North Patch, NP) and at distance 75 km (Little Dome C Patch, LDCP).
Figure 4. Various bottom-age-related variables along the radar transects, in vivid colours. The bedrock and surface elevations (greyscale and isolines, respectively) are shown as in Fig. 1. LDCP and NP are the two old-ice patches that we discuss in this study. (a) Modelled bottom age at 60 m above bedrock. (b) Minimum bottom age at 60 m above bedrock with 85 % confidence. (c) Height above bed of the 1.5 Myr isochrone. (d) Temporal resolution for the 1.5 Myr modelled isochrone.
Figure 5. Various variables reconstructed by the inverse method along the radar transects, in a vivid colour scale. The bedrock and surface elevations (greyscale and isolines, respectively) are shown as in Fig. 1. (a) Modelled temporally averaged basal melting. (b) Inferred temporally averaged surface accumulation rate. (c) Inferred geothermal flux. (d) Inferred p vertical velocity parameter.
Table 1. Age and total age uncertainty of the 18 isochrones used in this study.
"Environmental Science",
"Geology"
] |
Corporate Digital Responsibility: New Challenges to the Social Sciences
Contemporary practitioners and scientists increasingly highlight the extraordinarily rapid implementation of new technologies, including those based on artificial intelligence, and the unpredictable consequences of such actions. It is therefore important to be an active participant in the debate on the relationship between humans and modern technologies, a debate based on interdisciplinary scientific knowledge. The article refers to selected ideas related to knowledge management, organisational learning, the knowledge area, and the innovation environment. The challenge which social science researchers face, next to examining the theoretical aspects, is the application of various calculation methods and new technologies to make quicker and easier decisions in social contexts, with regard to various groups of people, e.g. employees, customers, or voters. Apart from the new methods, another serious challenge is to raise social awareness of digital responsibility in certain groups such as managers or, more generally, employers and employees. The responsibility of the elite and of scientific authorities should consist in instilling awareness in one another and approaching the new phenomenon with care, since its potential threats may completely change our civilisation. The presented discussion is based on a literature study covering selected theories and reports of research centres and scientific bodies. A particularly interesting case study discussed in this article is the TOP CDR initiative and a report prepared by the SW RESEARCH agency in cooperation with the Procontent public relations and digital marketing agency. The conclusions of this report indicate that corporate digital responsibility (CDR) may be a pioneering area for in-depth empirical studies. The nature of the topic, despite being clearly related to sociology, requires an interdisciplinary approach and the cooperation of numerous circles, not only scientific ones.
Keywords: corporate social responsibility, corporate digital responsibility, technology, artificial intelligence
Towards Corporate Digital Responsibility - Future or Present-Day Challenges?
Contemporary scientific authorities and, more and more frequently, political leaders highlight new kinds of threats to the global labour market posed by automation and the mass implementation of solutions based on artificial intelligence (AI). The development of new technologies, robotics, and process automation threatens existing workplaces in both industry and the service sector. These processes may create social unrest, and their consequences are difficult to foresee due to the dynamic nature of their progression.
The aim of the article is to characterise new challenges in corporate digital responsibility and the new research areas which emerge in that field for the social sciences. The author will identify certain theoretical aspects and potential consequences of the threats posed by the large-scale development of new technologies, artificial intelligence, automation, and the digitalisation of the social environment. Thematically selected, relevant reports of scientific bodies, employers' organisations, and companies collaborating with scientific circles will be analysed. The author will analyse in particular the TOP CDR initiative, which is the first project of this kind in Poland, focused on digitally responsible enterprises.
A Few Words about Methods
The analysis is based on theoretical considerations substantiated by selected research data. The theoretical themes referred to are part of the author's selective attempt to indicate significant areas of possible future research. The scope of the analysis is largely based on a literature study of selected concepts and is therefore significantly limited. All empirical remarks refer to existing data and to reports of research centres and scientific bodies. The analysis will also include the TOP CDR report prepared by the SW RESEARCH agency in cooperation with the Procontent public relations and digital marketing agency.
Development of Technologies in the Field of Artificial Intelligence -Responsibility and Challenges in Social Context
For many years, scientists and experts from virtually all scientific fields have been discussing the relationship between humans and technology, which is developing at an ever-increasing rate. These considerations concern not only new ways of learning or human reactions to the resulting changes in reality, but also the social processes which occur, or will occur in the future, due to technologisation and the increasing presence of machines and robots in everyday life.
The challenge which social science researchers face is the application of various calculation methods and new technologies to make quicker and easier decisions in social contexts with regard to various groups of people, e.g. employees, customers, or voters. This makes it possible to gather data faster and identify digital traces of human activity, whether in social networks or in information obtained during behavioural studies using mobile phones. Access to this type of data provides the ability to stimulate various behaviours. Specialised software applied to this kind of data has been perfected at an increasingly fast rate since the 1990s, moving towards the study of methods of AI operation. Initially, data compilation software was created to build databases using a specific type of reasoning mechanism. Nearly 70 years have passed since the first widely recognised definition of artificial intelligence was presented by Alan Turing in 1950. At the time, artificial intelligence was understood, for the purpose of the conducted experiment, as the ability of a machine to perform cognitive tasks effectively without making the human interrogator realise that the respondent is a machine (Turing, 1950). Nowadays, programmers focus on the creation of intelligent behaviour patterns which may be utilised in computer software. The goal is to develop a model allowing a machine to imitate sophisticated human manifestations of intelligence: making decisions under uncertainty, the analysis and synthesis of natural languages, logical reasoning, diagnosis, expertise, and participation in logic-based games such as chess. Machines have already achieved the ability to learn and perfect their behaviour on the basis of new experience. Using algorithms and specific data, machines can, through the process of induction, transition from supervised learning to unsupervised learning (Russell & Norvig, 2003).
More and more often, people are being replaced by machines, devices, and appropriate software, all of which learn specific forms of response based on the output data. Nowadays, the ethical question should be a top priority for scientists, because many people might be really hurt by these algorithms (Suchacka & Horáková, 2019, p. 917).
This has specific consequences: opportunities and threats. At the threshold of the revolution initiated by the introduction of artificial intelligence into various areas of socioeconomic life, an increasing number of socially sensitive practitioners and scientists call for a comprehensive strategy of AI development. The analysis of selected AI development programmes conducted by the Digital Poland Foundation in 2018 points to differences in approach between countries. Depending on the government's policy, emphasis is placed on retaining scientific leadership and developing basic research around AI (France), ensuring national security and order and monitoring the behaviour of citizens (China), or maintaining leadership in robotics, increasing the level of industrialisation, and supporting an ageing society (Japan). The report characterises the world's most prominent centres of innovation and highlights the need to promote an economy based on knowledge, cooperation, and the sharing of experience with the support of regional and national authorities (Digital Poland Foundation, 2018). Aware and responsible decision-makers should create conditions favourable to the close integration of the worlds of science and business and to the accelerated commercialisation of the results of their cooperation.
The responsibility of the elites and scientific authorities should consist in raising awareness about AI and taking this new phenomenon seriously. Potential threats may completely change our civilisation. The research conducted on the matter is still focused primarily on technical and IT issues, despite the fact that great minds of our times such as Bill Gates, Elon Musk, and Stephen Hawking have warned against developing a model of artificial intelligence able to continuously improve itself. It is difficult to imagine what is now becoming reality: a machine surpassing the human.
The most controversial issue is the use of artificial intelligence in the armed forces, from rockets and jets to all kinds of infrastructure control systems. At this stage, it is assumed that people remain in control and are not threatened by computers deciding anything themselves. It will stay this way until artificial intelligence begins to modify its own goals. Even if it is possible for a machine to become self-conscious, it will still have to set tasks for itself and find a justification for them, and that part is not immediately obvious. At the moment, intelligent technologies assist us with acquiring knowledge quickly and learning new behaviours outside the traditional system of education, which, however, should not be completely eliminated. Constant reflection, inherently sociological, should accompany these technological changes, for the algorithms behind ethical actions are the very traits of humanity, and it is difficult to assume that a machine will accept them or develop them itself without error in finding the proper access path (Suchacka & Horáková, 2019, p. 919).
From the very beginning, the development of artificial intelligence has been examined with disregard for the notion of machine self-awareness. This may have dramatic consequences at the onset of the final revolution, when AI combines knowledge from three areas of science encompassing the mechanisms of matter, life, and mind. It is only a matter of time before a machine outsmarts and surpasses human intelligence.
Review of Selected Ideas and Initiatives Related to the Impact of Technology on Changes in Global and Local Labour Markets
Setting aside considerations of futuristic visions of artificial intelligence seizing control over the world, it can be assumed with certainty that the most obvious area of contact between human and machine is the workplace. At work one can observe how changes in production, in the way certain tasks are completed, or in the use of more complicated tools can directly affect one's entire professional life. This entails the need for continuous learning and, in order to meet that necessity, the creation of human capital management strategies. Staff resources, the knowledge accumulated by employees, access to information, and the ability to use it skilfully are factors which largely determine the success of contemporary companies. The Austrian-American management thinker Peter Drucker emphasised that the traditional importance of the classic economic resources - labour, land, and money - is gradually decreasing. Slowly, the income from these sources is losing significance, and the only - or at least the main - sources of wealth are information and knowledge. "In fact whichever traditional industries managed to grow in the last 40 years did so because they restructured themselves around knowledge and information" (Drucker, 1999, p. 149). According to British analysts, "Traditional managerial systems have been developed to persuade bored people to keep their noses to the grindstone. But how do you manage people who keep the company's most valuable resources in their heads? […]" (Micklethwait & Wooldridge, 2000, p. 135).
Moreover, according to the American scholar Peter Senge, contemporary companies owe their competitive edge to their ability to learn and to the constant use of that ability. Senge points out that: At the heart of a learning organization is a shift of mind - from seeing ourselves as separate from the world to connected to the world, from seeing problems as caused by someone or something "out there" to seeing how our own actions create the problems we experience. A learning organization is a place where people are continually discovering how they create their reality. And how they can change it (Senge, 2004, p. 28).
Undoubtedly, such an approach, in combination with technological development, facilitates achieving high efficiency indicators and a high rate of economic growth. This long-term attitude derives its advantages from emphasising the ability to identify problems, skilful adaptation to the conditions of the surroundings, and innovativeness based on knowledge management. Japanese scholars noted that a properly designed knowledge management system allows a company to acquire, analyse, and use knowledge to make quicker, wiser, and better decisions leading to the achievement of competitive advantage.
The main issue related to knowledge-based management is the conversion of so-called tacit knowledge - personalised and rooted in an individual's experience, skills, intuition, and values - into accessible, explicit knowledge, usually codified (Nonaka & Takeuchi, 2000, p. 9).
Although their considerations were basic and covered the organisation and the public undertaking, new ideas offering a broader view of the matter started to develop at that time. These ideas referred to the development of a knowledge-based economy characterised by the emergence of regional innovation systems (Cooke, 1997), innovative milieus (Matteaccioli, 2006), clusters of innovation (Porter, 1998), and learning regions (Florida, 1995), for which the paradigm of geographical proximity is very significant (Rallet, 2007). Particularly interesting were the analyses conducted in the context of regional networks facilitating the formation of a knowledge region. An example of such an area in Poland is Silesia, which is the most industrialised region of the country and which is undergoing an extraordinary metamorphosis into a learning region (Suchacka, 2014).
A special role in this process is played by universities and research institutions, which attract creative individuals and collaborate in creating networks of connections with the economy: Universities are producers of knowledge and technology. This is a very important function and no other institution of the State can replace them in that role. In the 19th century, Humboldt was the first to suggest that such a task should be given to universities. Universities combine the immense potential of highly educated scholars. Provided that they are properly equipped with tooling and test equipment, a legion of young doctoral students' minds is able to complete any research task. Of course, large corporate laboratories and government science bodies conduct research on a massive scale and frequently are leaders in new technologies and factories of new inventions. However, it is the university, with its freedom of scientific research and its great potential of doctoral students, which became the breeding ground of new ideas and theories, taking the lead on the scientific frontier in terms of boldness and originality (Galwas, 2010, p. 11). Undoubtedly, the pace of contemporary economic change is an effect of the unprecedented acceleration of technological innovation. In these dynamic times, genuine social responsibility, rather than mere concern for a company's good image, is often forgotten.
Corporate Social Responsibility (CSR) and Corporate Digital Responsibility - Origin and Relationships
Contemporary managers are aware that technological and digital development is unavoidable. Their awareness is also changing due to the recognition of specific threats. A sense of responsibility unites certain groups of entrepreneurs, scientists, and decision-makers. Specific practical actions, studies, and theoretical academic works have been an answer to the rising concerns. Already in the 1960s, the idea of sustainable development gave impulse to a new approach to solving social, economic, and environmental issues. This idea is the source of corporate social responsibility (CSR). The main principles of this approach are related to maintaining a balance in business activity between three kinds of capital - economic, human, and natural. This correlates with increasingly evident civil pressure and a growing trend of business self-regulation. Transparency of operation, clarity of the rules applied in practice to employees, customers, and contractors, as well as participation in major local community events, are becoming important for a large portion of society. The expectations of various social groups - employees and customers, providers and contractors, environmental and social organisations, government and local authorities - are becoming a challenge for many managers. This leads to in-depth analyses, cooperation between certain circles, and the shaping of new kinds of relationships between the natural environment, people, and business. All social forces increasingly agree that in order to survive and develop they need each other. In the long term, this leads to changes in social awareness and in how entrepreneurs perceive their social environment. The demands placed on contemporary business are related not only to meeting specific customer needs, but also to preventing any damage to or degradation of natural and social resources. Although corporate social responsibility is a voluntary activity of an enterprise, this activity increasingly takes place in the context of a wider process - corporate self-regulation. A manifestation of this process is the appearance of so-called good practices. These include actions undertaken to reduce corruption and fraud and to increase the transparency of the rules and principles which guide entrepreneurs. Integrity of words and actions, concern for customer trust, investors' attention, and employees' pride are proof of the social sensibility of an enterprise. The spontaneous practices which have long been applied in this matter have been formalised in two EU documents which are considered an archetypal set of guidelines for future development - the Green Paper on CSR (2001) and the White Paper on CSR (2006). In response to these publications, the interested enterprises concordantly endeavoured to voluntarily integrate and to create new forms of cooperation between business and public authorities. Despite the continuing absence of developed and implemented standards, attempts are being made to create uniform and generally accepted procedures which would cover all areas of responsibility and the suggested procedural instruments. Following the technological and digital development of recent years, more attention is being given to the socially responsible creation and implementation of innovation. Managers extend their knowledge in this matter by participating in special educational programmes and studies.
Higher education institutions continuously strive to improve the level of their educational services in order to train highly qualified specialists with desirable abilities and professional skills, regardless of their location. Managers, as an extremely busy group of people, eagerly reach for new forms of acquiring knowledge such as e-learning (Morze, 2016).
To provide such educational services, institutions must create an open information and educational e-environment to be used in open learning: an innovative system for the evaluation of scientific research, management, and remote access to educational resources, an integral part of which is an e-learning system (Morze & Buinytska, 2019, p. 12).
Corporate Digital Responsibility (CDR), which has been taking formal shape in recent months, is a new initiative within social responsibility. It follows the trend of improving knowledge related to responsibility in business. CDR refers to the awareness of the duties binding organisations that are active in the field of technological development or that use technologies to provide services. Generally, this approach consists in trying to achieve balance and to steer technological development in a direction in which technology will have a positive impact on its surroundings.
The initiators of this new approach to social responsibility recognise the chances and risks presented by the deployment of new technologies. New technologies save time, offer new possibilities, and improve standards of living in general. On the other hand, they pose a threat by facilitating new kinds of addiction or by exposing people to the invasive and aggressive practices of individuals who exploit sensitive data and destroy trust between people. The dynamic development of technologies also threatens the global labour market due to automation and the mass implementation of solutions based on artificial intelligence. Jobs are disappearing both in industry and in the service sector. This creates social unrest and may even influence changes in political and educational systems. Experts and scholars aware of this dynamic process emphasise that businesses and employees have far less time to thoroughly examine the social consequences of the ongoing implementations related to digitalisation.
In parallel with these changes, attempts are being made to introduce systemic regulations and to provide support for persons who have lost their jobs as a result of automation and artificial intelligence. An example of these efforts is the reform of the European Globalisation Adjustment Fund adopted by the European Parliament in January 2019. The main task of the fund, which was renamed the European Fund for Transition (EFT), is to address the negative impact of globalisation and technological change. Provided that certain criteria are met, companies based in an EU Member State which are laying off employees can apply for support from the EFT.
A survey of the main sources of fear among average Americans conducted by Chapman University (USA) reveals that the respondents were more afraid of people in the workforce being replaced by machines than of death. In the report "American Fear Survey" (2018), these fears ranked 48th and 54th, respectively. More people realise that in the near future robots will be able to complete the same tasks as humans. The transition to automation and robotisation has already gathered considerable momentum. In Japan, the USA, and South Korea, there are already several hundred robots operating on production lines per 10,000 workers. The man-hour costs of human work on production lines are also rising.
Fears that technology will destroy more jobs than it will create are on the rise. At the same time, new positions of employment such as drone operator, social media administrator, or autonomous vehicle engineer are emerging. Robots like Da Vinci, which serves as a surgeon's assistant, help save human lives. Therefore, the main challenge is not to oppose the process of digitalisation, but to skilfully adjust the labour market, effectively use technologies, ensure data security, and improve employees' qualifications, especially with regard to digital competences.
Polish people, compared to other European nations, demonstrate low awareness of the threats posed by automation. A study conducted in 2018 by the Pew Research Center shows that only 24% of Polish respondents believe that within the next 50 years the human workforce will be replaced by robots and computers. In Greece, 52% of the respondents shared this view. The survey was performed in 9 countries from 21 May to 10 August 2018, and in the United States in 2015, 2016, and 2017, on a group of 9,670 respondents. The potential inability to find another job was what respondents feared the most, as this may lead to social stratification in terms of income. The majority of participants believe that the responsibility for preparing the workforce for these changes rests with the government. Polish respondents also indicated schools (62%) and employers (46%).
In the era of dynamic digital development, business targets are expected to be pursued in a responsible manner. This is especially emphasised by large corporations as part of their PR campaigns. Corporate digital responsibility is focused on ensuring that new technologies and, most importantly, data are used productively and wisely. A manifestation of this is the creation of comprehensive frameworks on data security and training programmes which prepare employees to manage digital information in difficult situations. A serious approach to the concerns of customers, employees, and partners can be beneficial for a company.
The main areas of corporate digital responsibility, within which certain measures are taken, focus on potential changes. The measures consist primarily in changing business models - creating new ones as a response to emerging technological products. This is accompanied by changes in work arrangements: an increase in the intensity of teleworking and the formation of virtual teams. Such changes are accompanied by an influx of data and content on the Internet as well as a rapid rise in the digital competences required. Steps taken by employers with regard to CDR should include allowing employees to obtain the necessary digital competences. However, this responsibility does not exclude, and is even complemented by, actions such as:
• ensuring that employees can rest by disconnecting from the digital world of the company,
• ensuring that traditional forms of social and inter-employee relationships are maintained,
• preventing the replacement of traditional forms of contact with virtual communication,
• taking an interest in and defining standards ensuring protection of the data processed within a company,
• fighting against digital addiction and the breakdown of social relations caused by technological development (TOP CDR programme document, ).
In practice, these actions are already taken by many companies which are aware of the problem and include such goals in their strategies of socially responsible business.
TOP CDR Initiative - Analysis of Assumptions and Survey Report
In spring 2019, as a result of cooperation between the SW RESEARCH agency and the Procontent public relations and digital marketing agency, a decision was made to start an initiative to promote CDR and to conduct a survey among employers on corporate digital responsibility. The programme document defines the main goals of the Top CDR initiative as:
• preparation of a good practices document based on surveys of employees and employers,
• rewarding good practices within CDR and promoting good CDR practice models,
• communicating the opportunities and threats related to technological development which should be covered by CDR regulations (TOP CDR programme document, ).
The efforts are supported by a Council of Experts composed of representatives of government institutions, technological companies, non-governmental organisations (NGOs), and universities specialised in the field of new technologies. Under the programme, an online platform will be created to gather in one place the debate on CDR in Poland, foreign reports, events, and news related to the notion of CDR. The results of the CDR opinion survey conducted in May 2019 were used to compile the "CDR in Poland" report. During a special debate, experts expressed their opinions on the results. There is also a plan to develop a guide to good CDR practices and to organise a "Technologically Responsible Company" competition. Companies winning that title should be characterised by compliance with good digital practices, concern for the safety of their employees on the Internet, and the organisation of learning activities raising the CDR awareness of their employees.
The "CDR in Poland" report presenting employees' fears related to automation and robotisation of work is particularly interesting. The survey was conducted at the turn of May and June 2019 by SW RESEARCH agency using computerassisted web interviewing (CAWI) via SW Panel available on-line. The study group consisted of 1,010 participants from companies employing more than 50 people. The majority of the respondents were women (53.1%). The majority of the respondents were in 25-34 (36.4%) and 35-49 (32.7%) age groups. Respondents with higher (54%) and secondary education (40.1%) were dominant in the study group.
The survey questionnaire was prepared in cooperation between the SW RESEARCH and Procontent agencies. It consists of 5 Likert-type scale questions in which respondents could express their opinion on specific issues, plus questions concerning personal data. The survey is an exploratory analysis aimed at an initial investigation of the problem and may serve as preparation for rigorous scientific research.
The results yielded by the survey are quite interesting. One in three respondents believes that the automation of work through robotics will force them to retrain professionally or change jobs within the next 10 years. The opposite opinion was expressed by 40% of the respondents, which indicates either low awareness of the problem or a complete absence of such a danger. The latter assumption seems to be confirmed by the high percentage of respondents (60%) noting no staff reductions due to the implementation of new technologies or automation in the last 3 years. However, almost one third of the participants admitted that such reductions had taken place in their workplace. Particularly interesting were the results concerning activities outside working hours: reading e-mails, messages on corporate messengers, and social networking profiles managed by their company. Only 16% of the respondents admitted that they dedicate some of their free time to such activities every day. One in four participants does so several times a week, and one in four stated that they do not use corporate messengers after work at all. The authors of the survey also asked about associations with the phrase "digitally responsible company", allowing respondents to choose 3 of the provided options. The majority of answers (more than 40%) pointed to conducting training on developing digital competences that help employees consciously use devices connected to the Internet and distinguish fake news and propaganda from legitimate information. Almost 40% of the respondents also indicated enabling employees to obtain the digital competences necessary to prevent job losses due to automation. The participants were somewhat divided on the issue of taking advantage of the technological improvements brought by automation and robotisation. Nearly one third of the respondents (30.5%) stated that they do not want help from robots and prefer to use services provided by humans. The majority of the respondents with the opposite opinion would like to use a robot to complete household chores on an everyday basis, and one in four participants would like to own a "smart house" or be served by an "automatic cashier." According to the survey results, the majority of the participants, provided that they perceive no danger from automation, would eagerly choose to benefit from technological improvements, whilst not excluding human contact. Employers are expected to enable their employees to improve their digital competences and to provide related training. The data indicate that there is no relationship between these answers and the educational level, age, or gender of the respondents.
In conclusion, it should be stressed that the survey was not strictly scientific and is preparatory to further, more detailed analyses. There is an evident need for a more in-depth investigation of the topic, as evidenced by the particular interest consulting companies are showing in this issue.
Conclusions
The development of modern technologies, robotics, and process automation creates new kinds of threats. The impact of technology on social life is becoming so dynamic that humans can no longer fully anticipate and remedy its consequences. As a result of specific social trends, more and more companies are beginning to devote attention to the need to take measures and to initiate cooperation between scientific, political, and economic circles. Raising social awareness of the impact of technologies - especially the latest ones related to artificial intelligence - on social life and on people is a great challenge for many important circles and authorities. The actions taken thus far have been grounded mainly in economic motives and have constituted an element of PR campaigns. Numerous scientific currents, such as the theory of sustainable development, theories of knowledge management, the concepts of the learning organisation and the learning region, and the creation of innovation networks and innovative environments, have supported and inspired companies to take proper care of their human capital and to build social capital in the broader sense. This was also facilitated by the concept of corporate social responsibility (CSR), which in recent years has been extended with corporate digital responsibility (CDR). This new area of entrepreneurs' interest is a rich field for profound empirical studies. Despite its clearly sociological origins, the nature of this topic requires an interdisciplinary approach and the cooperation of numerous circles, not only scientific ones.
...decisions in social contexts with regard to various groups of people, e.g. employees, customers, or voters. Apart from new research methods, a serious challenge is building social awareness of digital responsibility within specific groups, such as managers or, more broadly, employers and employees. The responsibility of broad elites and scientific authorities should consist in mutual awareness-raising and in taking the new phenomenon seriously. Potential threats may completely change our civilisation. The considerations presented here are based on literature studies covering selected theories as well as reports of research centres and scientific institutions. A particular kind of case study is provided by the TOP CDR initiative and the report discussed in the article, prepared in cooperation between the SW RESEARCH research agency and the Procontent public relations and digital marketing agency. The conclusions of this report show that corporate digital responsibility (CDR) may constitute a pioneering topic for many in-depth empirical studies. Despite its clearly sociological origins, the nature of the topic requires an interdisciplinary approach and the cooperation of many circles, not only scientific ones.
"Computer Science"
] |
Website quality evaluation: a model for developing comprehensive assessment instruments based on key quality factors
Purpose – The field of website quality evaluation attracts the interest of a range of disciplines, each bringing its own particular perspective to bear. This study aims to identify the main characteristics – methods, techniques and tools – of the instruments of evaluation described in this literature, with a specific concern for the factors analysed, and based on these, a multipurpose model is proposed for the development of new comprehensive instruments.
Design/methodology/approach – Following a systematic bibliographic review, 305 publications on website quality are examined, the field's leading authors, their disciplines of origin and the sectors to which the websites being assessed belong are identified, and the methods they employ characterised.
Findings – Evaluations of website quality tend to be conducted with one of three primary focuses: strategic, functional or experiential. The technique of expert analysis predominates over user studies and most of the instruments examined classify the characteristics to be evaluated – for example, usability and content – into factors that operate at different levels, albeit that there is little agreement on the names used in referring to them.
Originality/value – Based on the factors detected in the 50 most cited works, a model is developed that classifies these factors into 13 dimensions and more than 120 general parameters. The resulting model provides a comprehensive evaluation framework and constitutes an initial step towards a shared conceptualization of the discipline of website quality.
Introduction
Over the last three decades, websites have become one of the most important platforms on the Internet for disseminating information and providing services to society. Shortly after their first appearance, the need to evaluate website quality became evident. The earliest analyses were developed by experts in human-computer interaction and comprised usability heuristics. Indicators are the core elements of analysis that make it possible to operationalize and assess the parameters. Thus, for example, the dimension of "information architecture" includes "labelling" as one of its parameters and this, in turn, includes, among others, "conciseness", "syntactic agreement", "univocity" and "universality" as its indicators.
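To make the dimension, parameter and indicator hierarchy concrete, the short Python sketch below models the example given in the text; only "information architecture", "labelling" and the four indicators listed above come from the article, while the data structure itself is an illustrative choice rather than a prescribed format.

```python
# Illustrative sketch of the dimension -> parameter -> indicator hierarchy.
# Only the "information architecture" / "labelling" example and its indicators
# come from the article; the dictionary layout is hypothetical.

quality_model = {
    "information architecture": {      # dimension
        "labelling": [                  # parameter
            "conciseness",              # indicators (most specific unit of analysis)
            "syntactic agreement",
            "univocity",
            "universality",
        ],
    },
}

# Walk the hierarchy from the broadest grouping down to the unit of analysis.
for dimension, parameters in quality_model.items():
    for parameter, indicators in parameters.items():
        print(f"{dimension} > {parameter}: {', '.join(indicators)}")
```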
To evaluate these indicators, website quality studies employ different methodologies, experimental and quasi-experimental as well as descriptive and observational, typical of the associative or correlational paradigm. Likewise, such evaluations might adopt either qualitative or quantitative perspectives, undertaking both subjective and objective assessments. Similarly, they might employ either participatory and direct methods, as they record user opinions, or non-participatory and indirect methods, such as inspection or web analytics.
In the case of participatory methods, user experience (UX) studies have focused on user preferences, perceptions, emotions and physical and psychological responses that can occur before, during and after the use of a website (Bevan et al., 2015). The most frequently employed techniques are testing, which resorts to the use of such instruments as usability tests, A/B tests and task analyses; observation, centred on ethnographic, think-aloud and diary studies; questionnaires, including surveys, interviews and focus groups; and biometrics, which uses eye tracking, psychometric and physiological reaction tests, to name just a few (Rosala and Krause, 2020).
Among the most common methods of inspection, we find expert analysis, a procedure for examining the quality of a site or a group of sites employing guidelines, heuristic principles or sets of good practices (Codina and Pedraza-Jiménez, 2016). The most common instrument is that of heuristic evaluation, in which a group of specialists judge whether each element of a user interface adheres to principles of usability, known as heuristics (Paz et al., 2015; Jainari et al., 2022).
Other instruments employed in undertaking inspections include checklists, in which each indicator usually takes the form of a question, and whose answer, typically binary, shows whether or not the quality factor under analysis is met; scales, where each indicator is assigned a relative weight based on the importance established or calculated by the experts for each parameter under evaluation (Fernández-Cavia et al., 2014); indices, metrics that not only evaluate a website's quality, but also how good it is in comparison with similar sites (Xanthidis et al., 2009); and analytical systems, typically qualitative instruments of either a general or specialized nature, which are mainly aimed at evaluating individual websites, conducting benchmarking studies, or for use as web design guides. These systems of analysis vary depending on the factors that their creators consider key to determine the quality of a website (Sanabre et al., 2020). In this study, in order to standardise their name, we refer to them as "evaluation instruments".
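As a rough illustration of how a checklist with binary answers can be combined with an expert-weighted scale to produce a single score, the sketch below computes the weighted proportion of criteria met; the indicators, answers and weights are invented for this example and do not come from any of the instruments cited.

```python
# Minimal sketch: combining binary checklist answers with expert weights.
# Indicators, answers and weights are hypothetical.

checklist = {
    "internal search engine present": (True, 0.2),   # (binary answer, expert weight)
    "contact information visible":    (True, 0.1),
    "labels are concise":             (False, 0.3),
    "navigation is consistent":       (True, 0.4),
}

score = sum(weight for passed, weight in checklist.values() if passed)
total = sum(weight for _, weight in checklist.values())
print(f"Weighted quality score: {score / total:.2f}")  # fraction of weighted criteria met
```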
These instruments can be applied manually, that is, by experts in website quality or those with an understanding of the discipline; in a semi-automated fashion, with the help of software and specialised validators (Ismailova and Inal, 2017); or in a fully automated manner (Adepoju and Shehu, 2014), using techniques of artificial intelligence (Jayanthi and Krishnakumari, 2016) or natural language processing (Nikolić et al., 2020). Thus, content analysis, a major technique in website quality inspection, can be applied in one of three ways.
Finally, we also find techniques aimed at the strategic analysis of performance (Król and Zdonek, 2020), including return on investment; search engine positioning (Lopezosa et al., 2019); and competitiveness, including web analytics (Kaushik, 2010) and webmetrics (Orduña-Malea and Aguillo, 2014). Additionally, within this group we find mathematical models for decision making with multiple, hybrid, intuitive or fuzzy criteria (Anusha, 2014). By employing criteria at different, unconnected levels, these models establish a hierarchy of evaluable factors (Rekik et al., 2015). They are used, among other applications, to weight user responses and generate indices of satisfaction or purchase intention.
Thus, this review of the literature highlights that the study of website quality is multidimensional. Moreover, such evaluations can employ a range of different focuses and employ multiple techniques and instruments. With this as our working hypothesis, we seek here to determine the properties that characterise the main website quality evaluation instruments, as well as to identify the dimensions, parameters and indicators that they analyse in each case. Based on these outcomes, we develop a comprehensive evaluation framework (Rocha, 2012). This, in addition to unifying the different concepts examined and helping to clarify the broad panorama comprised by website quality publications, should serve both as a guide and model for the development of new instruments that can be employed by professionals and researchers alike in this field.
Objectives
The general objective of this article is to identify the main characteristics of the instruments of website quality evaluation described in the literature, with particular attention to the factors they analyse, and then, based on this analysis, to propose a multipurpose model for the development of new comprehensive instruments.
Specific objectives
(1) Characterize the main methods and techniques of evaluation used in website quality analyses, while identifying the specific focus of the instruments proposed: be it strategic, functional or experiential.
(2) Determine which website quality factors are used by the instruments employed in the most cited works, and how these are grouped into different dimensions, parameters and indicators.
(3) Build a model that can serve as a guide for the development of future instruments for evaluating website quality.
Methodology
To achieve the objectives outlined above, the systematic bibliographic review method (Booth et al., 2016) was employed, undertaking a search in academic databases and conducting a systematic mapping of the literature (Gough et al., 2017). Specifically, the review was carried out applying the SALSA protocol (Grant and Booth, 2009), which includes the search, appraisal, analysis and synthesis of the selected works.
In the search phase, to identify the main published works on website quality evaluation, we used the search equation presented below, comprising the most common keywords in the specialized literature and representative of the main facet of the field as it stands today: [website OR "web site" OR "web sites"] AND [quality] AND [evaluation OR evaluating OR evaluate OR analysis OR assessment OR assess OR assessing OR assurance OR index OR guideline OR standard OR heuristic].
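As a small practical aid, the snippet below simply assembles the search equation quoted above from its three keyword groups, e.g. for pasting into a database's advanced-search field; the keyword lists are copied from the text, while the helper function and formatting are our own.

```python
# Assemble the boolean search equation from its keyword groups.
# Keyword groups are taken from the article; the string assembly is illustrative.

subject = ['website', '"web site"', '"web sites"']
topic = ['quality']
action = ['evaluation', 'evaluating', 'evaluate', 'analysis', 'assessment',
          'assess', 'assessing', 'assurance', 'index', 'guideline',
          'standard', 'heuristic']

def or_group(terms):
    # Wrap a list of synonyms in brackets joined by OR, as in the article.
    return '[' + ' OR '.join(terms) + ']'

query = ' AND '.join(or_group(group) for group in (subject, topic, action))
print(query)
```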
The query was executed in the multidisciplinary databases of Web of Science (WoS) and Scopus and the results were ordered by relevance, filtered by language, selecting only studies published in English, and by year of publication, comprising the six-year period 2014-2019 (Codina, 2018). This procedure was repeated in other specialized databases of importance in the discipline, including IEEE, ACM, Emerald and the LISTA collection of EBSCO, among other information resources.
Likewise, the Google Scholar search engine was also used, which in addition to its wide coverage (Martín-Martín et al., 2018), includes books, technical reports and other documents of interest to both the academic and professional community in the field of website development. To these were added international guidelines and standards detected by undertaking a systematic mapping review (Gough et al., 2017). As a result, a corpus of 432 documents was created, once duplicates and false positives had been excluded.
These documents were appraised by conducting a manual examination of titles and abstracts to determine whether they met the established inclusion or exclusion criteria. The former included studies dedicated to website quality analysis published in the previously established period and language. Publications dedicated solely to web analytics, studies of mobile phone applications and studies focused on user psychology and not on a particular website were excluded. Thus, an evidence base (Yin, 2015) comprising 305 documents was finally obtained.
In the third phase, all the papers were reviewed, their formal aspects described, their quality attributes and methodological tools classified according to a code book (Lavrakas, 2008), and relevant data about their content collected. Then, based on the number of citations reported in Google Scholar as of September 2020, the average number of citations (average citation count, ACC) was determined, normalised according to the number of years elapsed since publication (Dey et al., 2018). Using this indicator, we identified the 50 most cited texts, which account for 86% of the total number of citations received.
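A minimal sketch of the normalisation step described above is given below: total Google Scholar citations divided by the years elapsed since publication, taking September 2020 as the reference point. The exact formula used in the study is not spelled out, so this particular division, the guard against a zero denominator and the sample records are assumptions made purely for illustration.

```python
# Hedged sketch of an average citation count (ACC) normalised by years since
# publication. Formula details and sample records are illustrative assumptions.

REFERENCE_YEAR = 2020  # citations were collected in September 2020

def acc(citations, year_published, reference_year=REFERENCE_YEAR):
    years_elapsed = max(reference_year - year_published, 1)  # avoid division by zero
    return citations / years_elapsed

sample = [("Guideline A", 2016, 480), ("Study B", 2019, 95)]  # (title, year, citations)
for title, year, cites in sample:
    print(f"{title}: ACC = {acc(cites, year):.1f} citations/year")
```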
Finally, in the synthesis phase, all the data were systematized onto a spreadsheet containing the following details: the characteristics of the websites evaluated; the parameters and indicators considered as quality factors; and the respective methods, models, instruments and software on which the evaluation instruments proposed in each study are based.
Results
The main findings from the coincidence count conducted on the data obtained in the synthesis phase, and the most relevant outcomes derived as a result, are detailed below.
Characteristics
Between 2014 and 2019, a total of 305 publications on website quality evaluation were found, with an average of 51 studies per year. A steady upward trend is evident in the period analysed.
Among the scientific journals, 166 different titles were detected, 44 of which belong to the field of health and medical informatics. The journals with the highest number of articles published on this subject were The Electronic Library, International Journal of Engineering and Technology, International Journal of Information Management, Online Information Review and Universal Access in the Information Society.
The number of citations received by each text according to Google Scholar (GS) was also recorded. Table 1 shows the fifteen works with the highest average citation count. The first twelve positions are occupied by website quality guidelines, such as those of the World Wide Web Consortium (W3C, 2016) and new editions of reference books in the discipline (Krug, 2014; Sauro and Lewis, 2016; Shneiderman et al., 2016). These publications mostly contain general recommendations, that is, applicable to any website, with the exception of the guide for websites of the European Union (European Commission, 2016) and the HONcode (Health On the Net, 2017), specialized in medical information.
The level of specialization of the evaluation instruments proposed in these works was also examined. Specifically, a distinction was drawn between those that propose an analysis applicable to all types of website (general) and those that focus on a specific sector. It emerged that most of the evaluation instruments (73.4%) focus on a particular sector (Figure 1).
The same figure shows that the latter are led by the education sector (universities, libraries and museums, among others), closely followed by the health sector, which includes both health sites and hospital websites. At a lower scale, we find the government sector, which focuses on the quality of websites of government administrations and municipalities; commerce, dominated by e-commerce stores; tourism, with sites of destinations, hotels and airlines; and the media, focused on the Internet news media.

Methods, focuses and techniques
A clear predominance of the associative or correlational paradigm is observed in the type of applied research conducted on the evaluation instruments as opposed to experimental. Indeed, most of the analytic instruments use observational or descriptive methodologies. Also evident is the pre-eminence of qualitative over quantitative approaches, and a balance between objective evaluations, based on the verification of verifiable characteristics, and subjective assessments, based on the perceptions of experts and users.
In turn, most of the proposals are based on non-participatory or indirect methods and, as a result, there are fewer instruments based on surveys or interviews. Similarly, there are a greater number of studies that focus on the verification of technical and functional requisites (57.4%) compared to those concerned with user experience (23.0%), with the strategic objectives of the site owner (14%), or with a mixed focus (5.5%).
If we examine more specifically the instruments present in all the publications (Table 2), we find that three-quarters were designed to be applied by professional experts in website quality, and include checklists, indices and scales, and specialized instruments that articulate various dimensions for evaluation.In contrast, usability tests and user questionnaires are much scarcer.
Dimensions, parameters and indicators
Of the 305 publications, 241 (79.0%) present website quality criteria expressed as dimensions, parameters or indicators, the latter being the most specific unit of analysis. To further our examination of these criteria, we concentrate on the systematization of the criteria present in the evaluation instruments proposed in the fifty works with the highest average number of citations.
Overall, we detected 38 factors explicitly stated as dimensions or parameters and 154 as indicators. As Table 3 shows, there is a degree of overlap between the two lists given that each author ranks and classifies the website quality factors differently depending on their own specific objectives.
It is apparent that usability and accessibility occupy the first positions both as a dimension or parameter and as an indicator. However, if all the factors directly linked to content are grouped together (that is, readability, language, transparency and others), this criterion is the one that concentrates the highest number of mentions. Information architecture and navigation and interface graphic design also feature prominently.
It should be noted here that there are entire studies that focus exclusively on a single parameter, as in the case, for example, of credibility (Choi and Stvilia, 2015) and accessibility (Kamoun and Almourad, 2014), but which are treated as just one more indicator in others. There are also instruments that include indicators that apply only to very specific types of site, such as "public values" and "citizen engagement" on local government websites (Karkin and Janssen, 2014) or "emotional appeal" and "use of science in argumentation" in health websites (Keselman et al., 2019).
Likewise, we detect indicators that differ greatly in their nature. Thus, atomic and dichotomous indicators, verifying the presence of a specific element such as an internal search engine or contact information, coexist with other more abstract, subjective properties, such as coherence, integrity, aesthetic appeal or familiarity. This multiplicity of characteristics and conditions in the nature of the indicators leads us to propose a categorization (Table 4) that should facilitate a better understanding of them.
As can be seen, the indicators can be designed with a specific focus in mind, be they strategic, functional or experiential in nature. The latter, for example, cannot be assessed by means of a metric or a technical inspection, but require a more complex evaluation, often expressed using a scale or score, applied by an expert or by recording the perceptions expressed by a website's users.

Instruments, tools and models
Precisely because of this need to measure indicators of a different nature, website quality evaluation uses a multiplicity of instruments, models and tools. Many originate from the research methodologies employed in the social sciences, as in the case, for example, of questionnaires, interviews and observation, while others, such as web analytics and code validators, were formulated specifically to evaluate a site's characteristics. Table 5 reports the techniques most frequently employed by the evaluation instruments described in the 50 publications with the highest number of average citations. It shows that undertaking surveys is the most frequently used technique for collecting user data in these studies. Other techniques used for this purpose include task observation, usability tests, interviews and focus groups. Expert analyses are also represented, as identified through the use of checklists, content analyses, manual inspections and web analytics, all of which are indirect methods that do not necessarily require user participation.
The instruments also employ specialized tools and software, among which we find both manual procedures, such as the DISCERN or HONcode guidelines (Dueppen et al., 2019; Manchaiah et al., 2019) for the evaluation of medical information on the Internet and the Web Content Accessibility Guidelines (WCAG) 2.0, and automated inspection mechanisms, including the W3C HTML code and CSS validators, the Majestic SEO tool for analysing backlinks and the Readability Studio software, aimed at determining text readability (Cajita et al., 2017).
Other software mentioned include AChecker, EvalAccess 2.0, WaaT and Fujitsu Web Accessibility Inspector for automated accessibility validation; Xenu's Link Sleuth and LinkMiner for broken link detection; Pingdom, for monitoring download speed and service availability; SortSite for website technical analysis; mobileOK for mobile adaptability; and SimilarWeb for measuring the site's traffic and that of its competitors, to name just a few (Ismailova and Inal, 2017).
We also find mathematical models designed for multiple-criteria decision-making that are employed primarily in e-commerce sites. In models of this type, user and expert responses, collected by means of assessment scales, are subjected to a variable-weighting mechanism to obtain, for example, an index of perceived quality (Cristóbal Fransi et al., 2017) or of content credibility (Choi and Stvilia, 2015).
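The toy example below illustrates the kind of variable-weighting mechanism mentioned above: mean user ratings on a 1-5 scale are normalised and combined with expert weights into a perceived-quality index that can then be used to rank sites. The criteria, weights and ratings are invented for illustration and are not taken from the cited models.

```python
# Toy weighted-sum index of perceived quality for two hypothetical shops.
# Criteria, expert weights and mean user ratings are illustrative assumptions.

weights = {"content": 0.4, "usability": 0.35, "visual design": 0.25}

ratings = {  # mean user rating per criterion, on a 1-5 scale
    "Shop A": {"content": 4.2, "usability": 3.8, "visual design": 4.5},
    "Shop B": {"content": 3.6, "usability": 4.4, "visual design": 3.9},
}

def perceived_quality(site_ratings):
    # Normalise each rating to [0, 1] and take the weighted sum.
    return sum(weights[c] * (r - 1) / 4 for c, r in site_ratings.items())

for site, site_ratings in sorted(ratings.items(),
                                 key=lambda kv: perceived_quality(kv[1]),
                                 reverse=True):
    print(f"{site}: index = {perceived_quality(site_ratings):.2f}")
```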
Proposed model
Following on from the review of the literature dedicated to website quality evaluation and drawing above all on the 50 most cited works, we propose a multipurpose model with three specific focuses for the formulation and application of comprehensive instruments of evaluation. We divide this model in three parts: the first provides a breakdown of website quality parameters, organized according to the specific focus they offer; the second serves as a visual scheme of the model's main dimensions and focuses; and the third comprises a set of tasks or a framework that synthesizes the stages that a researcher needs to consider when designing a website quality evaluation instrument.
In Table 6, we classify into thirteen dimensions the more than 120 website quality factors that appear most frequently in the 50 most cited texts. These factors are treated here as parameters because each of them can be broken down further into a number of different indicators. The dimensions are presented in descending order of frequency as they appear in the literature, while the parameters are organised according to the specific focus taken by the study.
Thus, the table compiles the parameters that have been the object of most attention in the website quality studies identified as having greatest impact. The model proposed on the basis of this mapping aims to offer researchers wanting to design new evaluation instruments a broad initial set of common parameters. The parameters, moreover, are all of a general nature and, as such, can be applied to any type of website. Consequently, the parameters can also be used to complement the specific parameters of sector-specific evaluation instruments.
As can be seen, usability and content are the dimensions with the most parameters, while the others are made up of fewer. However, here, we have opted for a hierarchical structure in order that important website factors, such as user assistance and support, advertising and legal aspects, which are typically dealt with less frequently in the literature, are more visible. In so doing, we also seek to identify gaps and, hence, research opportunities, as in the case, for example, of the parameters to evaluate website services, which are not as well developed as those of website content. It also emerges that while certain parameters respond to more than one focus within the same dimension, as in the case, for example, of multilingualism or user satisfaction, we have opted not to repeat them but rather to take a decision regarding their classification.
The second component of the model is a diagram (Figure 2) that synthesizes the dimensions of website quality evaluation, placing at its core the three analytical focuses proposed: strategic, experiential and functional. For each focus, it then shows, in a tiered arrangement, the dimensions that we consider most important for any website. The figure can be read as follows: starting from the base, with the site's essential elements, and working up to the peak, we begin the evaluation by determining how solid the content and services base is, continue by analysing its interface and user experience, and conclude by verifying whether the website owner's strategic objectives, a critical factor in any exhaustive evaluation, have been met.
Finally, the model also includes a framework or procedure for the creation of instruments to evaluate website quality. Table 7 classifies and sets out the individual steps required to design either a general or sector-specific instrument. It is organized in accordance with the most frequently employed techniques in the discipline: namely, user studies, expert analyses and strategic analyses. In this way, those responsible for the creation of the instrument can opt to incorporate those techniques they consider most pertinent, with triangulation being recommended for the most exhaustive evaluation.
This framework is divided, in turn, into five design stages: definition, research, parameterization, testing and validation. In the first, the design of the instrument is planned in relation to a set of given requirements, including the objectives and scope, and the conditions that delimit its use, including the resources and the degree of data access granted the key informants. In the second, the research stage, a study is undertaken of the specific characteristics of the sector to which the site belongs, its context of use, the profile of its users and the concrete recommendations previously made by other experts. These first two stages are common to each of the three techniques addressed.
From the third stage onwards, the tasks vary depending on the technique chosen by the creators of the instrument (see Table 7). In the parameterization stage, all the sector-specific quality factors relevant to the website's objective are determined. Then, in the testing stage, an initial test of the instrument is made to identify opportunities for improvement and to calibrate it for purposes of optimization. Finally, in the validation stage, its reproducibility is verified based on the observations of other experts.
In this way, the model guarantees that any evaluation instrument created using this methodology provides an exhaustive analysis of the quality of any given website. This is thanks to the fact that the model recommends the use of a triangulation of focuses and techniques and considers such components as: the testing of the general heuristics of usability; the expert analysis of sector-specific indicators; the study of users, albeit with indirect methods such as web analytics; and, importantly, the verification that the site meets its strategic objectives.
To ensure that the cycle of enhancement continues to have positive effects on the websites analysed, we recommend the communication of the results in a timely and effective manner, with a summary of the most relevant findings or insights, accompanied ideally by suggested approaches to address the solution of the most recurrent problems.
Discussion
Based on these results, and in line with the conclusions drawn by other authors (Rekik et al., 2018; Semerádová and Weinlich, 2020), it is evident that studies concerned with website quality evaluation have undergone steady growth in recent years, attracting primarily the interest of authors from academia, but also from the professional world. In this regard, the interest of a number of specific academic disciplines in such analyses is notable, led by the computer sciences, health sciences and business. However, it is worth stressing that no interdisciplinary or transdisciplinary studies involving these fields of study have been detected and that most of the papers cite almost exclusively references from their own discipline.
At the same time, it is apparent that proposals for sector-specific or specialized evaluation instruments are increasing (Morales-Vargas et al., 2020). The education and health sectors, above all the analysis of health information sites, are the sectors with the highest number of studies, followed by those of government, commerce and tourism.
A finding of some relevance here is the focus adopted by the website quality evaluation instruments. All in all, we detect three primary focuses: strategic, oriented to the fulfilment of the site owner's objectives; functional, present in more than half the proposals and designed to verify the presence of technical factors; and experiential, with a concern for user experience and perception. Sanabre et al. (2020) are pioneers in combining the strategic and functional focuses, but the incorporation of all three is not evident in any of the studies reviewed herein.
A common element in the way evaluation instruments are organized is the fact that most opt to express the factors to be analysed in dimensions, parameters and indicators. Although a variety of different names are employed to refer to them, including attributes, criteria, variables and characteristics, as reported by Chiou et al. (2010), what is present in all of them is the idea of starting from broad groups of properties which are then gradually broken down into more specific units of analysis that facilitate inspection.
[Table 7 fragment: I. Definition - 1. Define the instrument's objectives and identify its target users; 2. Establish its scope: general or sector-specific; 3. Determine the resources, deadlines and degree of effective access to information; 4. Delimit the scope of the analysis: exhaustive or centred on a specific parameter; 5. Determine the focus of analysis.]
Content, usability and accessibility are the most frequently occurring dimensions among the most cited studies, followed by information architecture and visual design. In the case of the pre-eminence of content, our results coincide with those of Cao and Yang (2016) and Hasan (2014). Similarly, as regards the number of different indicators detected, our results are in line with the outcomes reported by Sun et al. (2019). However, other studies have tended to assign the leading role to credibility (Choi and Stvilia, 2015; Huang and Benyoucef, 2014), functionality (Law, 2019) or trust (Daraz et al., 2019).
Our study also identifies indicators of a different nature, including, for example, their level of specificity. Therefore, here, for the first time, we propose a categorization of the parameters according to their scope, site of validation, focus, way of scoring and perspective. We construct a model that classifies the more than 120 parameters detected into thirteen dimensions and three focuses. In this way, we seek to identify the elements that make up an instrument for evaluating websites as well as the main characteristics it is designed to analyse. The classification we propose is based on previous studies that have been validated by experts, including, for example, the Lee-Geiller and Lee (2019) model.
Having identified the general parameters for the evaluation of all types of website, we propose a procedure for creating new instruments for evaluating website quality. The procedure comprises the following five stages: (1) definition of objectives; (2) study of the characteristics specific to a given sector; (3) parameterization of the most relevant attributes; (4) piloting or testing of the instrument; and (5) its subsequent validation by other experts. In this way, an evaluation centred on three main points of focus (strategic, functional and experiential) is guaranteed, satisfying also the need to use multiple tools, as detected by Rekik et al. (2018), and triangulation, as recommended by Whitenton (2021).
Conclusions
As is more than evident, website quality as a field of study continues to occupy a broad space in which different areas of knowledge are in continuous dialogue. But the field has yet to develop a shared terminology, a shortcoming that hinders efforts to establish its conceptualization as a discipline in its own right.
Despite the technological advances made and the growing technical mastery of their users, websites are still in need of evaluation instruments that can enhance both their performance and user experience.This is most apparent when these websites belong to a sector whose content, functions and services are characterised by a set of specific requirements.
As such, we wish to highlight the importance in the field of website quality of being able to identify and analyse a set of dimensions, parameters and indicators that are specific to each type of website. However, at the same time, it is critical that this be done by adopting a range of different focuses: in other words, the instrument of evaluation has to be able to assess the technical and functional requirements as well as the website's strategic objectives and user experience.
This study, therefore, proposes a model for the development of new comprehensive instruments for the evaluation of website quality that are applicable to a very broad set of domains.It also constitutes an initial step in the adoption of a shared conceptualization in this field of study.The latter should, moreover, promote the sharing, reuse and comparison of the instruments proposed by other website quality researchers and professionals working in different disciplines.
Figure 2. Focuses and dimensions of website quality evaluation. Source(s): created by authors based on most cited publications. | 6,678 | 2023-02-28T00:00:00.000 | ["Computer Science", "Business"] |
Active Low Intrusion Hybrid Monitor for Wireless Sensor Networks
Several systems have been proposed to monitor wireless sensor networks (WSN). These systems may be active (causing a high degree of intrusion) or passive (low observability inside the nodes). This paper presents the implementation of an active hybrid (hardware and software) monitor with low intrusion. It is based on the addition to the sensor node of a monitor node (hardware part) which, through a standard interface, is able to receive the monitoring information sent by a piece of software executed in the sensor node. The intrusion on time, code, and energy caused in the sensor nodes by the monitor is evaluated as a function of data size and the interface used. Then different interfaces, commonly available in sensor nodes, are evaluated: serial transmission (USART), serial peripheral interface (SPI), and parallel. The proposed hybrid monitor provides highly detailed information, barely disturbed by the measurement tool (interference), about the behavior of the WSN that may be used to evaluate many properties such as performance, dependability, security, etc. Monitor nodes are self-powered and may be removed after the monitoring campaign to be reused in other campaigns and/or WSNs. No other hardware-independent monitoring platforms with such low interference have been found in the literature.
Introduction
Wireless sensor networks (WSN) have been the subject of significant research and development in recent years. However, they are yet to be deployed on a mass scale because sensor networks may experience problems or errors in their operation. Many causes for such issues have been identified, such as interference in the transmission medium, security attacks (especially in WSN [1]), adverse environmental conditions, and malfunctioning nodes. The node faults, their sources, and detection approaches are diverse, as detailed in [2]. Although debugging and operational testing are usually carried out during the development and implementation of this type of network, once the sensors are deployed the conditions may be very different and unanticipated events often occur.
The availability of suitable sensor network diagnostic tools is a key issue in progressing to real-world deployment of WSN. Nowadays, there are no standard tools or standard architectures in this field. Most of the proposals for monitoring and debugging do not consider enough aspects of sensor networks to be fully useful, or are built for very specific network architectures. According to [3], there are many challenges in several aspects of sensor networks (architectural, functional, and dynamic) which have yet to be researched. Applications that require safe wireless sensor networks, such as the Internet of Things, critical e-health systems, and ambient intelligence, cannot be successfully addressed without these kinds of tools.
So-called monitoring systems-or simply "monitors"-are used to evaluate the performance and operation of a sensor network in controlled conditions or even in a real environment. Monitors can focus on many performance parameters, such as throughput, jitter, response time or reliability, and even on security and intrusion detection in the network, as described in [4].
The use of these monitoring systems could be helpful in all stages of the life-cycle of a WSN. WSN researchers could use a monitor to perform comparative analyses of new proposals. Designers could use a monitor to select the most suitable techniques for a given application's requirements. In implementation, the enhanced debugging capabilities brought by these monitoring tools are unbeatable. Deployment is much easier when the correct functioning of the nodes can be verified in situ. During operation, malfunctions can be diagnosed without stopping the system, and any system redesign can benefit from more detailed information about current functioning. In addition, these tools could become fundamental in the standardization and certification of applications based on WSN. Although monitoring systems generate a non-negligible cost, they are necessary to detect problems in the deployment of the wireless solution. These costs, as with many measurement tools, may be considered justified insofar as the monitoring system is only attached when necessary, used for a finite amount of time, and then removed to be reused, if possible, in another monitoring activity.
Monitors are usually built following one of two possible approaches. Active monitors involve additional hardware and/or software in the sensor nodes, interacting with them. Consequently, active monitors usually require the modification of the sensor nodes to be monitored. This interferes with the node's normal operation and measured parameters may vary from those of an unmonitored node. However, more variables may be observed, and thus the data obtained are more accurate.
On the other hand, passive monitors rely on the observation of the external behavior of the monitored system without any interference with its normal operation. Machine-learning algorithms [3]-which analyze the behavior of the system-may be used to evaluate and predict the presence of errors, undesirable operation or unexpected events. The monitor does not interfere with the monitored nodes, but only externally observable variables can be measured.
There is also another approach for monitor construction, which depends on whether the monitor is based on hardware or software. A software monitor is implemented by means of a specific code, application, or plug-in to the operating system of the node, which accesses the system status and reports relevant information. Usually, a software monitor yields in-depth information about the system's performance, but it may interfere with the operation of the monitored system.
A hardware monitor consists of electronic devices connected to the monitored system, which collect data from interesting system points. Hardware monitors are usually less intrusive than software monitors, but they involve the use of additional components.
Each monitor approach by itself cannot cover all aspects of monitoring tasks, as we will discuss in the next section. Monitors can also combine both approaches (hardware and software) in order to achieve the advantages of both types and obtain a complete vision of the system, while trying to keep interference to a minimum. These are the so-called hybrid monitors [5].
This paper presents an active low intrusion hybrid monitor [6], based on both hardware and software. This monitor can record the events which occur in a node of a sensor network and store them in a non-volatile memory for later analysis. Moreover, it can be incorporated into a complete monitoring platform which includes other acquisition possibilities, such as passive monitors, as described in [7]. This paper is structured as follows: after the introduction, monitoring tools are described in Section 2. Section 3 explains the architecture of the non-intrusive hybrid monitor. The implementation of the monitor is described in Section 4. Section 5 details the evaluation of the intrusion produced by the monitoring tool. Finally, a discussion and conclusions are presented in Sections 6 and 7.
Monitoring Tools
There are several tools and techniques for monitoring sensor networks; most of them follow one approach and focus on WSN. In [3], both monitoring and debugging tools are considered and compared. This section contains a brief summary of some of the most important and relevant tools. SNMS (Sensor Network Management System) [8] and Sympathy [9] are two of the earliest and best-known monitoring systems. SNMS is a complete management system, designed to work with any type of sensor network. It is built on TinyOS [10], an open source operating system designed for low-power devices, and enables a review of the state of a node and information to be saved locally. Nevertheless, it generates substantial intrusion and is oriented to management rather than monitoring.
On the other hand, Sympathy works as a passive monitor, and can detect and debug pre-and-post deployment errors. It operates by analyzing the data arriving at the sink of a sensor network, applying metrics, and inferring where in the network a fault or failure may occur. It also considers the aggregation of a small overhead in the network to increase its accuracy. However, it only considers the transmitted data and thus cannot observe the internal node information, something which could increase monitoring accuracy. SNIF (Sensor Network Inspection Framework) [11], Pimoto [12], LiveNet [13], SNDS (Sensor Network Distributed Sniffer) [14], NSSN (Network monitoring and packet Sniffing tool for wireless Sensor Networks) [15], and EPMOSt (Energy-efficient Passive Monitoring SysTem for WSN) [16] are examples of passive monitors. Their approach consists in deploying a network of sniffers with an interface to capture all transmissions from nodes. The main difference between them is how the captured data is processed. Some of them transmit the data-via TCP/IP (Transport Control Protocol/Internet Protocol) or another radio interface-to another device for processing, and others can function as a sink, collecting and analyzing the information. They can also provide real-time analysis of data sensor network operation. Nevertheless, all these tools only capture the transmitted frames "on the air"; they cannot obtain information directly from nodes.
Despite its name, Passive Diagnosis (PAD) for WSN [17] is an active monitor system with low intrusion. It is based on a probabilistic diagnosis approach, built on a Belief (or Bayesian) Network, to infer the root causes of abnormal WSN operation. It adds a probe in each node which marks the packets with relevant data with very little overhead. However, PAD has to wait for a message transmission to send information and it might not determine when an error has occurred. Moreover, as non-sensing nodes (such as router nodes) do not send sensed data, they are unable to send any monitoring information to indicate possible abnormal operation.
Memento [18] and Lightweight Tracing [19] are examples of active monitors. Both use short encoding with sensor node events and information. The first adds its code protocol to a message and transmits it. Memento can detect problems in a node by using information provided by their neighbors in the network. In Lightweight Tracing the events are stored in non-volatile memory by using a very light coding. Further reconstruction and debugging of node and network behavior is then possible. Because both are active monitors, they generate substantial intrusion.
Minerva [20] is not a monitor, but a test-bed for WSN. It uses a debugging port and tracing port connected to the sensor node to observe the behavior of the node. Minerva has very interesting features but is inadequate for monitoring in real environments.
Finally, Spi-Snooper [21] integrates hardware and software in a hybrid approach. The hardware architecture brings the sensor node and the monitor together in a single unit in a transparent manner. The monitor spies on the SPI interface used to connect the sensor node to its radio module. The software architecture has two operation modes: active and passive. In passive mode the monitor-called the co-processor-mainly logs the communication through the SPI bus and checks some node data. In active mode it assumes the control of the SPI and the radio interface. However, it can only be used in sensor nodes that transmit through a SPI port. Only the data transmitted through this SPI interface can be monitored. Obviously, this technique cannot be used in sensor nodes with built-in radio modules.
Each one of these proposals has advantages and disadvantages. The proposed active monitors usually involve a great deal of intrusion. Meanwhile, the proposed passive monitors can only observe transmitted data; they are unable to observe events inside the node. Both the addition of monitoring information to a transmitted message and the addition of new messages to the network cause a decrease in network performance. Monitors based exclusively on software cannot work if the node fails. The proposed hardware-based monitors are too architecture-specific. Consequently, a monitor system with sufficiently broad network information coverage which also keeps intrusion low is required. Furthermore, this system has to be sufficiently generic to be applied to any hardware architecture. Many characteristics of the proposals studied were taken into consideration to form a base for the design of our monitoring system, which also attempts to minimize the drawbacks of these previous tools.
Specifications and Characteristics of the Monitor
Taking into consideration the findings from many previous deployments of WSN, the characteristics of wireless sensor network monitors (addressed in Section 1) and the existing proposals discussed in Section 2, the following specifications are deemed necessary for a new WSN monitor:
1. It must be able to observe all the relevant data. This would endow it with significant debugging potential.
2. It must be as hardware/software independent as possible.
3. It must be reusable and configurable.
4. The monitor must be easily attachable and removable, without stopping the operation of the monitored system.
5. The monitor must cause minimum disturbance to the operation of the observed system (low intrusion).
When addressing these specifications, software traps are considered to be the best way to achieve a high observability while maintaining high flexibility, as the designer defines the relevant events and introduces the trap code in the required places in the code. In order to achieve a very low intrusion level, a hardware attachable node frees the monitored system from the task of monitoring data management. The communication between both nodes must be performed through standard interfaces already available in most of the monitored nodes, providing reusability and hardware independence. Consequently, an active hybrid hardware/software solution is considered to be the best proposal.
Moreover, although monitoring systems are usually expensive, this approach minimizes costs because the proposed system is reusable and scalable, adjusting to monitoring requirements.
Finally, it is particularly interesting that the design of the monitor is based on a modular approach to make the system easily adaptable to new monitoring environments and improve reutilization. In this way, the design must be partitioned following a hierarchical layered architecture in order to attain modularity, making it possible to change monitor modules without changing the whole implementation.
Proposed Architecture
In this section the architecture and operation of the proposed monitor are presented. It is based on a standard-oriented architecture in order to provide benefits such as universality, reusability and flexibility.
As observed in the bibliography and discussed above, every monitoring system has to address some issues in order to be functional. Our proposal identifies these problems and classifies them into three categories:
1. Monitoring data. These data must be captured, analyzed, and shown to the user in a meaningful way. The semantic meaning of the data depends on the application and the monitoring requirements.
2. Once obtained, data must be presented in a standard format to ensure the integration of data provided by heterogeneous sources. Some issues related to this category include a common time base and capturing conditions.
3. As the monitored system is distributed in space, all the obtained data must be centralized and stored to provide a global comprehension of the whole system functioning.
Keeping these different-and usually independent-problems in mind, the proposed architecture is composed of three layers, as shown in Figure 1.
The Monitoring Layer is located at the upper level of the architecture. This layer is in charge of all the issues related, and specific, to the WSN under observation. It must deal with the definition of what should be monitored, how this information should be acquired, and the way it has to be processed and shown to the user.
The Information Layer is located under the Monitoring Layer. It must represent the obtained information in a standard way. This level also deals with timing issues, such as when the information must be captured (triggering) and storing this time value with the obtained data (time stamp).
Finally, the Interchange Layer allows the information captured at different points alongside the monitored WSN to be transferred and stored. The upper layers will retrieve this information to be analyzed and/or visualized by the Monitoring Layer.
In this three-layer architecture the communication between layers and entities in the same layer is similar to that standardized in the OSI (Open Systems Interconnection) reference model for networks [22]. Each layer defines interfaces to communicate with the upper and lower ones. One of the advantages of working with a defined architecture is that changes and developments in an entity/layer should not affect the other layers. Hence, any improvements to the monitor system will be easier to develop and implement. Moreover, it is straightforward to reuse this hybrid monitor in another sensor network by making the required changes in the respective layer.
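To make the layer boundaries more tangible, the following sketch expresses each layer as a small C interface. It is purely illustrative: the type names (MonitoringLayer, InformationLayer, InterchangeLayer) and the function-pointer signatures are our own assumptions, not the authors' actual code.

```c
/* Illustrative sketch of the three-layer architecture as C interfaces.
 * All type and function names are hypothetical: they mirror the
 * responsibilities described in the text, not the authors' actual code. */
#include <stdint.h>
#include <stddef.h>

/* Interchange Layer: stores formatted records (e.g., on an SD card) so the
 * upper layers can later retrieve them for analysis and visualization. */
typedef struct {
    int (*store_record)(const char *record, size_t len);
} InterchangeLayer;

/* Information Layer: time-stamps raw trap data and renders it in a
 * standard format before handing it to the layer below. */
typedef struct {
    uint32_t (*timestamp_ms)(void);                   /* e.g., read from an RTC */
    int (*format_record)(uint32_t ts_ms, uint8_t event,
                         const uint8_t *extra, size_t extra_len,
                         char *out, size_t out_len);
    InterchangeLayer *lower;
} InformationLayer;

/* Monitoring Layer: decides what is monitored (software traps) and passes
 * each raw event plus optional data down to the Information Layer. */
typedef struct {
    void (*emit_trap)(uint8_t event, const uint8_t *extra, size_t extra_len);
    InformationLayer *lower;
} MonitoringLayer;
```

Keeping the layers behind such narrow interfaces is what would allow, for example, the Interchange Layer to be swapped from an SD card to a radio link without touching the upper layers.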
The Active Low Intrusion Hybrid Monitor
In this section an implementation based on this layered architecture is presented. As the main objective of this implementation is to study intrusion, the implemented system is focused on all modules that affect said intrusion (capture subsystem, information layer, and interchange layer; first column in Figure 1), simplifying the other modules that do not. Figure 2 shows the structure of the implemented hybrid monitor. Three hardware devices can be identified: the sensor node to be monitored (mote), the attachable and reusable monitor node, and a storage device.
Active Low Intrusion Hybrid Monitor Structure
The capture subsystem in the Monitoring Layer is implemented by means of a software traps mechanism which runs in the sensor node hardware.
The Information Layer entity of the hybrid monitor is implemented in the monitor node. The interface between the sensor node and the monitor node, supporting communication between the Monitoring and Information Layers, is performed through a standard communications interface (serial, SPI, parallel, etc.). The use of these standard interfaces provides hardware independence and facilitates reutilization, making the attach/detach operation straightforward. Finally, the Interchange Layer entity is implemented by means of a storage device handled by the monitor node, where the collected data will be stored. The communication between the storage and monitor node is done through a serial interface, as shown in Figure 2. Future implementations could consider the use of more sophisticated storage and distribution systems, such as distributed databases and secondary communication networks.
More powerful analysis and visualization subsystems, with lower layer support entities, are not considered at this point because they would not modify the intrusion evaluation carried out in this research.
Monitor Operation
In order to observe and evaluate the behavior of the system under observation, the monitor must register the sequence of events that may characterize said behavior. Consequently, the first step must be to define the monitoring points-called probes-which are of interest to the designer.
With this information, the application code must be modified in order to generate the appropriate monitoring events. This modification is implemented using software trap mechanisms.
When a trap is fired, some associated information bytes are generated that may include additional information along with the event code. This monitor allows the designer to decide which additional data can be included, so providing a highly flexible monitoring tool, and increasing the accuracy of obtained information. For example, an error event may include an additional error code that better describes its causes. A transmission event may include part or the whole message, making a packet traceable through the sensor network. This can be considered a significant improvement when compared with other proposed systems that record events but cannot associate additional information to them.
These bytes must be processed, formatted, and recorded in a log file. Most active monitors require the observed node to run these processes. Our approach frees the observed sensor node from having to run said processes-and thus reduces the intrusion-by means of the attached monitor node. The monitor node is in charge of these processes, which include time stamp, data format, data storage or (in future developments) data transmission.
Our solution (Figure 3) only requires the trap capture routine to transmit the event and its associated information to the monitor node through the Monitoring-Information interface (Mon-Inf interface). The monitor node is in charge of processing the data, including time-stamping. It also applies a standard data format, as described below. Finally, the monitor node must also record this data. In this approach, the data is stored in an SD (Secure Digital flash memory) module through the Information-Interchange interface, implemented by a serial link. In order to minimize the intrusion, the time and resources required for the sensor node to send the monitoring messages must be as low as possible. One of the main advantages of using standard interfaces is that most sensor nodes include specific hardware to perform the transmission in parallel with normal microcontroller operation, without disturbing it. The degree of parallelism obtained between both operations is critical in determining the level of intrusion. That is the reason why this article evaluates the intrusion obtained when the most commonly available interfaces are used for this Mon-Inf communication. Figure 4 shows the data flow and operation of the implemented hybrid monitor. The tasks which are usually performed by active monitors in the observed node have been divided between the sensor node and monitor node. The operations to be performed by the sensor node have been minimized and the rest, such as time stamp, data format, data storage, etc., have been moved to the monitor node. Communication routines have been included in the sensor node to transmit the data.
Consequently, as shown in Figure 4, two routines are included in the Monitoring Layer inside the sensor node. The first of them, which is called as a routine by the code running in the sensor node, prepares the event data to be sent to the monitor and sends the first bytes. As the monitor can manage traps that require many bytes, the message may be too long to be transmitted in just one iteration. In this case, a second routine is activated by an interrupt when an ACK (acknowledgment message) is received from the monitor node. This routine sends the additional data until the end of the message.
The use of an ACK mechanism has experimentally proved to be necessary to guarantee a more reliable monitoring operation. The intrusion introduced by this ACK mechanism can be considered very low, due to the low priority interrupts used in implementation, as discussed below.
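As a rough illustration of this two-routine scheme, the sketch below shows a possible sensor-node side implementation under the following assumptions: traps are sent as 16-bit words, iface_write_word() stands in for the non-blocking SPI/parallel driver call, and ACK_IRQHandler() is the low-priority interrupt fired when the monitor node acknowledges a word. None of these names come from the paper.

```c
/* Sensor-node side of the trap protocol: routine 1 queues the trap and sends
 * the first 16-bit word; routine 2 (the ACK interrupt) sends the remaining
 * words. A new trap must not be fired before the previous one has been
 * fully delivered, as noted in the text. */
#include <stdint.h>

#define TRAP_MAX_WORDS 8                     /* up to 128 bits per trap */

static volatile uint16_t trap_buf[TRAP_MAX_WORDS];
static volatile uint8_t  trap_len;           /* words queued for this trap */
static volatile uint8_t  trap_next;          /* index of next word to send */

extern void iface_write_word(uint16_t word); /* non-blocking HW transmit (placeholder) */

/* Routine 1: called from application code when a software trap fires. */
void trap_send(const uint16_t *words, uint8_t n_words)
{
    if (n_words > TRAP_MAX_WORDS)
        n_words = TRAP_MAX_WORDS;
    for (uint8_t i = 0; i < n_words; i++)
        trap_buf[i] = words[i];
    trap_len  = n_words;
    trap_next = 1;
    iface_write_word(trap_buf[0]);           /* hardware finishes the transfer
                                                while the application continues */
}

/* Routine 2: low-priority interrupt fired when the monitor node ACKs a word. */
void ACK_IRQHandler(void)
{
    if (trap_next < trap_len)
        iface_write_word(trap_buf[trap_next++]);
    /* else: the whole trap has been delivered, nothing more to send */
}
```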
The monitor node is always waiting for information from the sensor node. This data must be processed when it is received, which involves several operations. Data must be stamped with the appropriate time stamp. When the first byte arrives, the time stamp is registered. Then, the monitor node receives the rest of the message. An ACK message is sent to the sensor node for every piece of data received. The message has to be processed when all of it has been received. In this implementation, the message treatment consists in the creation of a record which includes the date, time stamp, event code, and additional data (which is used in our example to indicate the iteration number).
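A minimal sketch of this receive path on the monitor node is given below. It assumes, for illustration only, that the number of words in a trap is encoded in the high nibble of the first word, and that rtc_now_ms(), iface_read_word(), iface_send_ack() and process_record() wrap the actual RTC, interface and record-handling code; these names are not taken from the paper.

```c
/* Monitor-node receive path: time stamp on the first word of a trap, an ACK
 * for every word received, and record processing once the whole message has
 * arrived. The length-in-high-nibble framing is an assumption made for this
 * sketch; helper names are placeholders for the real drivers. */
#include <stdint.h>

#define MSG_MAX_WORDS 8

extern uint32_t rtc_now_ms(void);
extern uint16_t iface_read_word(void);
extern void     iface_send_ack(void);
extern void     process_record(uint32_t ts, const uint16_t *msg, uint8_t len);

void monitor_rx_irq(void)                     /* fired for every word received */
{
    static uint16_t msg[MSG_MAX_WORDS];
    static uint8_t  count;
    static uint8_t  expected;
    static uint32_t ts;

    uint16_t w = iface_read_word();

    if (count == 0) {                         /* first word of a new trap      */
        ts = rtc_now_ms();                    /* time stamp registered here    */
        expected = (uint8_t)(w >> 12);        /* assumed: high nibble = length */
        if (expected == 0 || expected > MSG_MAX_WORDS)
            expected = 1;
    }
    msg[count++] = w;
    iface_send_ack();                         /* ACK every piece of data       */

    if (count >= expected) {
        process_record(ts, msg, count);       /* build date/time/event record  */
        count = 0;
    }
}
```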
This implementation uses a 4-bit coding for events, as shown in Table 2. Designers are free to define their own codes (and additional data) in order to adjust the system to the monitoring and application operation they require.
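For illustration, such a 4-bit coding could be declared as a C enumeration. Only the four codes explicitly used later in the text (0x01, 0x04, 0x06 and 0x09) are reproduced here, and the identifier names are ours.

```c
/* Example of a 4-bit event coding. Only the four codes explicitly used later
 * in the paper are reproduced; identifier names are illustrative. */
typedef enum {
    LOG_SENSE0  = 0x1,   /* Read sensor 0             */
    LOG_WAKE_UP = 0x4,   /* Node wakes up from sleep  */
    LOG_TX_DATA = 0x6,   /* Node sends data           */
    LOG_SLEEP   = 0x9    /* Node goes to sleep mode   */
} trap_event_t;
```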
Finally, after the monitoring campaign, monitor nodes should be removed to be reused in other monitoring campaigns, even on WSNs whose sensor nodes are based on different hardware architectures.
Hardware Implementation
Figure 5 shows the hardware implementation of the monitor shown in Figure 2. The monitor node (on the left) is connected to the sensor node (on the right). The monitor node has been implemented using a commercially available microcontroller system, based on the STM32F051R8 ARM Cortex-M0 microcontroller (STMicroelectronics, Geneva, Italy). The authors consider this architecture to be representative of current applications. This microcontroller also offers several common interfaces (GPIO (General Purpose Input/Output), SPI, USART, and others) which can be used for our purpose [23]. The monitor board has been connected to an SD card through its SPI interface. The implementation cost in the sensor nodes is usually very low. Commonly, microcontrollers in sensor nodes have free communication interfaces that can be used to send the traps to the monitor node. The only additional hardware engineering required is to make the selected interface externally available through a connector. Figure 5 also shows a sensor node, which is used for monitor evaluation. It is a previously used sensor node which has already been deployed in a real temperature control application. It is based on an ARM Cortex-M0 (STMicroelectronics, Geneva, Italy) and includes an XBee (Digi International Inc., Minnetonka, MN, USA) wireless communication module, a MAXIM LM75 temperature transducer (Maxim Integrated Products, Inc., Sunnyvale, CA, USA) and an external antenna. It runs an application that periodically wakes up from sleep mode, takes a measurement from the temperature sensor, transmits the results through the wireless communications module, and enters sleep mode for 60 s. In this sensor node no hardware modifications were required, as the three studied interfaces (SPI, GPIO, and USART) were available through the connectors. The intrusion caused by the active hybrid monitor is related to the communications interface between the sensor node and monitor node. To evaluate this intrusion, three different interfaces were considered. The interfaces studied, which are usually found in most microcontrollers, were SPI (shown in Figure 5), USART (Universal Synchronous Asynchronous Receiver-Transmitter), and parallel transmission using 16 GPIO ports. Parallel and SPI data width is 16 bits, and USART data width is 8 bits. Moreover, three transmission speeds were evaluated for the SPI interface. To compare the performance of each interface, four sizes of transmitted data (common sizes of 16, 32, 64, and 128 bits per trap) were used. The larger the data being transmitted, the more detailed the information provided in each trap, but also the higher the interference.
The monitor node constitutes the Information Layer entity of the monitor. It has a built-in RTC (Real Time Clock), which is used to generate timestamps for the event register.
Software Implementation
The software part of the presented active hybrid monitor consists of a library added to the sensor node and the code running in the monitor node.
The library on the sensor node offers an Application Programming Interface (API) which gives the designer a user-friendly way to introduce monitoring capabilities into the application code. This has been implemented using software traps and constitutes the capture subsystem of the Monitoring Layer. These software traps consist of a set of instructions located in the code of the monitored node. They send information when they are executed, usually related to significant operations (from a monitoring point of view) in the node, and can also send optional additional information, such as the value of a variable or the contents of a message. Table 1 shows an example of a software trap used in the presented hybrid monitor. The code shown uses parallel communication as the interface between both nodes. The WriteLog function in the main procedure is a software trap which has been included in the application code. The parameters trap code (Log_sense0) and measured value (temper) are intended to be recorded by the monitoring system, indicating that a temperature reading has been taken and its current value.
The implementation of the WriteLog function is in charge of transmitting the value through the desired interface. In this example, only 16-bit-length traps are considered and the acknowledge mechanism is omitted for simplicity.
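A minimal sketch of what such a WriteLog trap might look like for the 16-bit parallel case, with the ACK mechanism omitted, is shown below. The packing of a 4-bit event code with 12 bits of data into a single word, the PARALLEL_PORT_OUT register symbol and the STROBE() helper (the extra line that interrupts the monitor node) are all assumptions made for this example; Table 1 of the paper contains the authors' actual version.

```c
/* Sketch of a WriteLog-style trap for a 16-bit parallel interface with the
 * ACK mechanism omitted. PARALLEL_PORT_OUT (the GPIO output register) and
 * STROBE() (the extra line that interrupts the monitor node) are placeholders,
 * and packing a 4-bit event code with 12 bits of data is an assumption of
 * this example. */
#include <stdint.h>

extern volatile uint16_t PARALLEL_PORT_OUT;  /* mapped to the 16 GPIO output lines */
extern void STROBE(void);                    /* pulses the line that triggers the
                                                monitor node's receive interrupt   */

#define Log_sense0  0x1                      /* "read sensor 0" trap code */

static inline void WriteLog(uint8_t code, uint16_t value)
{
    /* One 16-bit word: 4-bit event code in the high nibble, 12 bits of data. */
    PARALLEL_PORT_OUT = (uint16_t)(((code & 0x0Fu) << 12) | (value & 0x0FFFu));
    STROBE();                                /* the application resumes immediately */
}

/* Usage in the application's main procedure (temper is the measured value): */
/*     WriteLog(Log_sense0, temper);                                          */
```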
As can be seen in Table 1, software traps are implemented as a routine call with very few instructions and thus cause very little intrusion. The Monitoring Layer also includes the trap collection procedure in the sensor node, as shown in Figure 2, which must transmit the trap data through the standard communication interface. The left block in Figure 4 shows this transmission flowchart in the sensor node. The right block in this figure shows how the monitor node implements the software required to receive this data. Both the sensor node and monitor node may benefit from the availability of standard communication libraries, providing independence between hardware and software. For instance, the Cortex Microcontroller Software Interface Standard (CMSIS) has been used on both sides to implement these routines. This provides a standardized level, which may be easily reused in other CMSIS-based microcontrollers and even ported to other architectures by applying this methodology to their own libraries [24]. As the messages sent through the interface between the sensor node and monitor node do not change, the monitor node requires no modification.
The monitor node also implements the Information Layer, as shown in Figure 2. Consequently, it is in charge of applying a time stamp and data formatting to the received traps.
The monitor node is also responsible for Interchange Layer functions. Figure 2 shows that this implementation of the Interchange Layer consists of a non-volatile memory (Secure Digital-SD-card) where the captured information will be stored. This approach is similar to that used in Lightweight Tracing [19], but in our proposal this memory is attached to the monitor node instead of the sensor node. Future versions of the Interchange Layer could include a radio interface to transmit the collected information, either in real time or in a scheduled manner.
Monitoring Data Obtained
Although the main objective of this paper is not the application of the monitor to a real sensor network, but to study of the intrusion caused by it, the validation of the correct behavior of the monitor was considered to be a necessary, previous condition. Figure 6 shows a trace of the information registered by the monitor in a single node. Three kinds of events-wake up, transmission, and sleep-were deemed sufficient to measure the intrusion and, thus, only three traps were inserted in the code of the sensor node to be monitored. These codes are sent to the monitor node through the Mon-Inf interface. The monitor node processes the data in the Information Layer, adding the time stamp information and formatting said data in CSV (Comma Separated Values format (ASCII-American Standard Code for Information Interchange-text separated by commas). Finally, the Interchange Layer is also implemented in the monitor node to store said data in a SD card. CSV format was selected for the Information Layer because of its portability, it being compatible with most analysis and visualization tools. These tools are sufficient for the evaluation of the intrusion (the main objective of this research). Future implementations will improve the functions offered in analysis and visualization subsystems in accordance with the architecture shown in Figure 1. Figure 6 shows part of the information generated by the hybrid monitor after recovering the produced file from the SD card. This information includes date, time, ms, event code, and additional data. Event codes 04, 06, and 09 (Table 2) have been used for wake up, transmission, and sleep events, respectively. Additional values for wake up and transmission events-also sent as additional information-indicate the iteration number. All this data can be delivered to the visualization and control subsystem (Figure 1) by using the Information Layer services.
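As an illustration of the record format just described (date, time, ms, event code and additional data), the sketch below formats one CSV line on the monitor node before it is written to the SD card. The rtc_get_date()/rtc_get_time() and sd_write_line() helpers are hypothetical wrappers around the RTC and SD drivers, not functions from the paper.

```c
/* Formats one event record as a CSV line (date, time, ms, event code,
 * additional data) on the monitor node before writing it to the SD card.
 * rtc_get_date(), rtc_get_time() and sd_write_line() are hypothetical
 * wrappers around the RTC and SD drivers. */
#include <stdio.h>
#include <stdint.h>

typedef struct { uint16_t year; uint8_t month, day; }   rtc_date_t;
typedef struct { uint8_t hour, min, sec; uint16_t ms; }  rtc_time_t;

extern void rtc_get_date(rtc_date_t *d);
extern void rtc_get_time(rtc_time_t *t);
extern int  sd_write_line(const char *line);

int log_event_csv(uint8_t event_code, uint32_t extra)
{
    rtc_date_t d;
    rtc_time_t t;
    char line[64];

    rtc_get_date(&d);
    rtc_get_time(&t);
    /* Record layout: yyyy-mm-dd,hh:mm:ss,ms,event,extra */
    snprintf(line, sizeof line, "%04u-%02u-%02u,%02u:%02u:%02u,%03u,%02X,%lu\n",
             (unsigned)d.year, (unsigned)d.month, (unsigned)d.day,
             (unsigned)t.hour, (unsigned)t.min, (unsigned)t.sec, (unsigned)t.ms,
             (unsigned)event_code, (unsigned long)extra);
    return sd_write_line(line);
}
```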
Intrusion Evaluation
Three principal intrusion aspects must be considered. Time intrusion deals with the increase in execution time of the sensor node caused by the monitor. Taking into account that sensor nodes usually have a limited flash memory, code intrusion evaluates how many additional bytes are required to implement the monitoring of the sensor node's program. Finally, energy intrusion evaluates the additional energy the monitoring operation requires. Many experiments were performed, following a detailed experimental plan, which took into consideration the previously cited interfaces and data sizes in order to measure the intrusion of the active hybrid monitor. The results obtained are presented in this section.
In order to obtain reliable results, n replications were performed for each measurement, n being determined as follows. The results for each measurement were considered as random variables (X1, X2, …, Xn) with mean value μ. Measurements were repeated n times until an estimation of μ was obtained with a 90% confidence interval according to Equation (1), where t_{n−1,0.95} represents the upper critical value of the Student's t-distribution with n − 1 degrees of freedom, and X̄(n) and S²(n) are the mean and the variance of the results obtained in the different experiments:
X̄(n) ± t_{n−1,0.95}·√(S²(n)/n)    (1)
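For reference, the stopping rule behind Equation (1) can be sketched as a small helper that returns the 90% confidence half-width for the measurements collected so far. The Student's t quantile for n − 1 degrees of freedom is supplied by the caller (for example, from a table), and the acceptable half-width threshold is left to the experimenter; neither is taken from the paper.

```c
/* Running-statistics helper for the replication rule of Equation (1):
 * returns the half-width t(n-1,0.95) * sqrt(S2(n)/n) of the 90% confidence
 * interval for the mean of the n measurements collected so far. The t
 * quantile is supplied by the caller (e.g., from a table). */
#include <math.h>
#include <stddef.h>

double ci_halfwidth(const double *x, size_t n, double t_quantile)
{
    if (n < 2)
        return HUGE_VAL;                       /* not enough data yet */

    double mean = 0.0;
    for (size_t i = 0; i < n; i++)
        mean += x[i];
    mean /= (double)n;

    double m2 = 0.0;
    for (size_t i = 0; i < n; i++)
        m2 += (x[i] - mean) * (x[i] - mean);
    double s2 = m2 / (double)(n - 1);          /* sample variance S^2(n)   */

    return t_quantile * sqrt(s2 / (double)n);  /* half-width of the 90% CI */
}
```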
Time Intrusion Analysis
Time intrusion was determined by the replication of an experiment which consisted in measuring the amount of time needed to fulfill one thousand application iterations in the sensor node during both monitored and not-monitored operation. During monitored operation, a single software trap was added to be fired when the wake-up event occurred. The time required to fulfill the iterations in monitored mode is called monitor mode execution time. The execution time of the same program in the sensor node without traps was also measured (hereafter referred to as reference time). The program consists of a wake-up, measurement of the value from a transducer, and pass-to-sleep mode operations, repeated one thousand times. Reference time was found to be 5805 ms.
As previously mentioned, three interface implementations and four data sizes were combined to yield the results in Table 3. This table shows the difference between the monitor mode execution time and the reference time, divided by the number of iterations. As there is one trap per iteration, this value is the time intrusion per trap. SPI and USART interfaces with dedicated hardware allow the communication processes to be executed concurrently with application code execution. In these interfaces (see code in Table 1), trap routines are not concerned with communication and they merely write the outgoing data into the interface buffers. Then the sensor node program may continue. On the other hand, the parallel interface has no dedicated hardware controller and an additional line is required to generate a reception interrupt in the monitor node. That is the reason why the time intrusion is slightly greater than when using the SPI interface. As expected, time intrusion increases for larger data sizes. In the case of USART the intrusion time is about double that of the other interfaces, because it only sends 8 bits per transmission (the other interfaces considered send 16 bits) and it has to manage twice the number of interrupts. Figure 7 shows the time required to process a trap event in both the sensor node and monitor node. Despite the fact that the sensor node can execute part of its application code while the trap is being processed, it is not possible to launch a new trap before its treatment is complete. This minimum time between traps is shown in Figure 7 as the "Period for event log generation".
This period starts when the sensor node code reaches a software trap. The trap code and its associated data must be prepared to be sent in several messages through the Mon-Inf interface. When it is ready, the dedicated hardware in the sensor node is in charge of transmitting the first message containing this data to the monitor node. After that, the monitor node generates the time stamp and acknowledges the transmission to the sensor node. As the ACK is received, the sensor node transmits the next message containing the trap information. This process repeats until all the messages related to the trap event have been transmitted.
The time intrusion (Table 3) every trap introduces is the sum of several time intervals (red boxes in Figure 7).
It can be seen that this time intrusion in the sensor node is not affected by the communication time. The hardware implementation in modern microcontrollers allows the CPU to execute code (application code) while transmission is being performed. However, this communication time is not negligible when dealing with the maximum event generation rate, and it has to be taken into consideration when evaluating the maximum event generation frequency.
This frequency is related to the processing time in the monitor node. This time has also been measured and Table 4 shows the values obtained. As shown in Figure 7, the processing time is measured from the arrival of the first data to the receipt of the last piece of information. Timestamp generation takes 15.8 ms and is performed when the first data message of the trap arrives. As shown in Table 4, when dealing with 16-bit events the times obtained are similar for every interface-except the USART-as the process time depends mainly on timestamp generation. For larger data sizes the processing time is slightly less when using the parallel interface instead of SPI, especially when the SPI speed is low. In the case of the USART interface, the processing time is much greater because of its low transmission speed and the fact that it can only send 8 bits at a time, whereas the other options considered can send 16 bits at a time. As expected, the processing time increases for larger data sizes.
The values in Tables 3 and 4 are dependent on the architecture used (48 MHz ARM Cortex-M0, STMicroelectronics, Geneva, Italy) in both the sensor node and monitor node. A monitor node based on a faster microcontroller will reduce the timestamp generation time, and hence will also reduce the processing time and increase performance. Moreover, the time intrusion and processing time could change depending on the sensor node characteristics, such as core frequency, buffer availability, architecture, interface to monitor, among others. Nevertheless, the time intrusion in the sensor node is very low when compared with the processing time in the monitor node. The relationship between both times is not architecture-dependent. This means a low degree of time intrusion can be expected when monitoring other sensor node architectures.
When comparing this proposal with others in the bibliography, it is important to note that active non-hybrid monitors make the sensor node perform all the monitoring tasks (data capture, formatting, and storage); meanwhile, with our proposal, the time needed to perform all the aforementioned tasks is replaced by a small communication time, thereby substantially reducing the intrusion.
A USART interface was not deemed a good option to be used in our monitor because of its comparatively low speed and high processing time which result in low performance. Therefore this interface is considered in further results for comparison purposes only.
Code Intrusion Analysis
As memory resources are limited for sensor nodes, the evaluation of the intrusion on program code is relevant. Code used in the sensor node was generated by Keil Microcontroller Development Kit (MDK) version 5, a comprehensive software development environment for Cortex-M processor-based microcontrollers which includes an Integrated Development Environment (IDE), Compiler, and Debugger [25].
The size difference in bytes between the pure application code and the trap-modified code was measured on the compiled binary program. The main differences between both codes consist in the addition of the port initialization subroutine and several data transmission instructions. Table 5 shows the intrusion in bytes on program code in the sensor node. This intrusion is not related to the communication speed, thus this parameter has not been considered.
Since transmission code is reused for all the traps, a new event may be monitored by merely adding a trap call of 8 bytes to the sensor node code. Code intrusion in size (S_Intrusion) may be predicted by means of Equation (2), where S_init is the value appearing in Table 5 (which corresponds to the initialization and interrupt routine) and n is the number of trap calls included in the application code:
S_Intrusion = S_init + 8·n    (2)
As expected, for a small number of monitored traps, the code intrusion is mostly determined by the initialization code.
Energy Intrusion Analysis
Power consumption is a key aspect of sensor network nodes. This is the reason for the increasing number of microcontroller systems which include specific hardware to monitor their own power consumption. Furthermore, some microcontrollers, such as STM32L0 (STMicroelectronics, Geneva, Italy), are able to measure their instantaneous power consumption without additional hardware. These power measurements may be used to handle the node energy efficiently, and may even be sent with the data to provide global energy management.
In this environment, a complete monitoring tool must be able to handle the relevant information about the energy behavior of the sensor node.
Due to the energy restrictions in this kind of system, it is very important to reduce the energy consumed by the monitoring operation. Consequently, the best solution consists in the addition of an external self-powered node performing passive monitoring, which introduces no energy intrusion. Nevertheless, the solution proposed in this paper seems to be very close to these solutions, using an external monitor node but increasing observability by means of active monitoring mechanisms (software traps). It must be noted that the monitor node includes its own power source (battery, energy harvesting techniques or even a wired power installation) to avoid changing the behavior of the monitored sensor node.
The monitoring tool should be aware of the intrusion in power consumption that it introduces. Consequently, this section studies the energy intrusion caused in the sensor node when the monitoring system is capturing. This intrusion is caused, on the one hand, by the communication hardware interface used to support the Mon-Inf interface and, on the other hand, by the time taken to execute the trap. The latter has already been addressed in Section 5.1.
To evaluate the power consumption of the communication hardware interface with and without the monitoring operation, the instantaneous power consumption in both cases was measured by means of a set of experiments, performed in accordance with the plan described in Section 5. Power consumption was determined by modifying the application program used in the sensor node for the time intrusion evaluation to turn it into an infinite loop, avoiding sleep mode and keeping the peripherals used enabled. With this program in execution, about 7000 samples of sensor node electrical power were taken (18 samples per second) and averaged. Measurements were taken with an Agilent 34405A Multimeter (Agilent Technologies, Inc., Santa Clara, CA, USA). This multimeter has a 5.5 digit resolution, with an accuracy of ±(0.05% of reading + 0.015% of range) for our measuring conditions [26]. Both non-monitored and monitored operation measurements were repeated until a 90% confidence interval was achieved according to Equation (1).
These experiments were run for all the interfaces considered (parallel, SPI, and USART) and for several communication speeds and data sizes. The results obtained show that the electrical current with no monitoring operation was 22.53 mA. They also demonstrate that the evaluated communication speeds and data sizes do not affect the instantaneous power consumption. Table 6 shows the ratio between the electrical current required in monitored operation and that required in non-monitored operation when each interface is used. The monitoring operation introduces, as previously stated, two causes of energy intrusion: the electrical current increase caused by the communication hardware and the overtime introduced by the execution of the software trap's code. As a consequence, the energy intrusion may be calculated as shown in Equation (3):
E_m − E_r = V·(I_m·T_m − I_r·T_r)    (3)
where E_m, I_m and T_m are the energy, current and execution time when monitoring, and E_r, I_r and T_r are the same variables when no monitoring is being performed. V is the power voltage in all cases. From this expression it is easy to deduce Equation (4):
E_m − E_r = V·I_r·(T_m − T_r) + V·(I_m − I_r)·T_m    (4)
It can be seen that, as previously mentioned, the energy increment is caused by two factors. The first is the execution time increment due to monitoring tasks, and the other is the power consumption increment caused by the hardware used for monitoring. With regard to the first of these factors, the proposed hybrid solution only requires trap capture and basic communication tasks. With regard to the second, the proposed solution only requires the use of the communications hardware interface.
Finally, the percentage of energy intrusion may be calculated using Equation (5):
ΔE (%) = 100·(E_m − E_r)/E_r = 100·(I_m·T_m − I_r·T_r)/(I_r·T_r)    (5)
As all these data are known, it is possible to use the values for current increments (Table 6) and time intrusion (Table 3) to obtain the energy intrusion for each interface. Table 7 shows the values obtained. The energy intrusion can be considered low in relation to the normal operation of the sensor node (below 3%). From Table 7 it can be seen that the power consumption intrusion increases with data size and is higher for the parallel interface (as expected) due to the number of lines involved in trap transmission. It can be observed that, when the SPI interface is used, the transmission speed has very little influence on energy intrusion. The small amount of data transferred per trap makes speed almost irrelevant (Table 3), and no consumption difference has been found (Table 6).
It must be noted that these values correspond to the worst trap generation scenario, where a very small application code is executed (only the capture of a sensor). A lower trap generation rate would significantly reduce this energy intrusion.
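A small helper reproducing Equations (3) to (5) is sketched below; it simply evaluates the energy with and without monitoring from the measured currents and times under a constant supply voltage and returns the percentage intrusion (note that the voltage cancels out in the ratio). The function name and the idea of packaging the calculation this way are ours, not the authors'.

```c
/* Evaluates Equations (3)-(5): energy with/without monitoring from the
 * measured currents and execution times under a constant supply voltage V,
 * returning the percentage of energy intrusion. V cancels out in the ratio
 * but is kept to mirror Equation (3). */
double energy_intrusion_percent(double V, double Im, double Tm,
                                double Ir, double Tr)
{
    double Em = V * Im * Tm;                 /* energy when monitoring     */
    double Er = V * Ir * Tr;                 /* energy without monitoring  */
    return 100.0 * (Em - Er) / Er;           /* Equation (5)               */
}
```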
Discussion
From the previous numerical evaluation it can be concluded that, as observed, the use of a parallel interface presents a small advantage from the point of view of code intrusion. The parallel interface presents a greater advantage in energy intrusion, despite producing greater time intrusion, because it produces a low electrical current increment. However, a drawback of the parallel interface is that it requires approximately 20 lines (depending on the implementation), which are not always available in sensor nodes. If the lines are available, physical attachment should be straightforward using a parallel connector.
The serial USART-based interface also introduces a small amount of code, as it is a very common and easy-to-use interface. Its energy intrusion is close to that of the parallel case, despite its low transmission speed. Its greatest drawback is its time intrusion, which is many times greater than that presented by the other interfaces studied (Table 3). The shorter word length and lower transmission speed make communication with the monitor node very slow (Table 4). This implies that the number of events per second (trap frequency) could be limited by the capabilities of this interface.
Finally, the SPI-based interface presents a balanced behavior between the other two options. Although its code intrusion is greater than that of the other interfaces, it is only approximately 150 bytes. Additionally, this interface permits a high trap generation frequency. Physical attachment and detachment of the monitor node should also be straightforward, as the required lines (only four) are externally available, using a serial connector.
The intrusion when the proposed monitor uses the studied interfaces can be considered low in all of the three aspects mentioned (time, energy and code intrusion). The difference between the interfaces is not definitive; in some cases the choice of one of them may be conditioned by other circumstances, such as the available interfaces in the sensor nodes or application/designer restrictions. Even when no other interface is available, the serial USART-based interface may be an appropriate solution, accepting its drawbacks.
To summarize the previous studies, a ranking based on a preference index is shown in Table 8.
A Real Application
Previous analyses were performed in a high stress environment, with a high trap generation rate, in order to study the worst case scenario. This section describes the measurement of the influence of these intrusion effects when applied to the monitoring of a real WSN application.
The monitored application consists of a temperature measuring system. It is formed by a set of nodes with a temperature transducer which periodically (every 2000 ms) wake up, capture the temperature from the transducer, send it to the gateway (without routing capabilities), and finally return to sleep mode. The working time of this application is approximately 200 ms. The code size is approximately 12 KB, and its average current consumption before monitoring was measured at 24.58 mA when in active mode (working time). Both parallel and SPI interfaces were available for monitoring purposes.
In order to monitor this application, four traps were defined, registering four events: Node Wake Up from Sleep (0x04), Read sensor 0 (0x01), Node Sends Data (0x06), and Node Goes to Sleep Mode (0x09), as defined in Table 2. All these traps send 128 bits of additional data.
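By way of illustration, the sketch below shows how these four traps could sit in the wake/measure/transmit/sleep loop described above. The driver names (read_temperature(), xbee_send(), enter_sleep()) are placeholders, and the 128-bit additional payload used in the real experiment is abbreviated here to a single 16-bit value per trap.

```c
/* The monitored wake/measure/transmit/sleep loop with the four traps in
 * place. Driver names are placeholders; the 128-bit additional payload of
 * the real experiment is abbreviated to a single 16-bit value per trap. */
#include <stdint.h>

extern uint16_t read_temperature(void);
extern void     xbee_send(uint16_t value);
extern void     enter_sleep(uint32_t ms);
extern void     WriteLog(uint8_t event, uint16_t extra);  /* software trap */

void app_main_loop(void)
{
    uint16_t iteration = 0;
    for (;;) {
        WriteLog(0x04, iteration);            /* Node Wake Up from Sleep  */
        uint16_t temper = read_temperature();
        WriteLog(0x01, temper);               /* Read sensor 0            */
        xbee_send(temper);
        WriteLog(0x06, iteration);            /* Node Sends Data          */
        WriteLog(0x09, iteration);            /* Node Goes to Sleep Mode  */
        enter_sleep(2000);                    /* ~2000 ms period          */
        iteration++;
    }
}
```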
In the real application the execution time was measured for SPI and parallel interfaces. The time increment was found to be 130 ms and 120 ms when parallel and SPI interfaces were used respectively. Thus, the time intrusion caused was 0.065% in parallel and 0.06% in SPI.
The evaluation of the code intrusion in the proposed real application is very simple. Parallel implementation requires the addition of 300 bytes (see Equation (2)) that represents a code intrusion of 2.44%. On the other hand, SPI required 500 bytes of code, producing an intrusion of 4.07%.
Finally, electrical power consumption was measured in the sensor node when monitoring was active with both interfaces. Instantaneous power consumption when using the SPI interface was found to be 25.2 mA, and the parallel interface produced a measurement of 24.85 mA.
This increase of instantaneous power consumption, together with the additional time intrusion, produces an energy intrusion of approximately 2.6% in the case of the SPI and 1.2% in the case of the parallel interface. These results correspond to a four-event monitoring, so they are lower than those presented in Table 7 (four to five times lower). As has been said in the previous section, the results shown in Table 7 correspond to a worst-case scenario for trap generation rate.
Comparison with Bibliography Cases
The results are quite similar for both interfaces. In order to highlight the difference between them, Figure 8 shows how this intrusion would affect the working time values used in other real implementations found in the literature. For example, in [27] (a wireless sensor node to monitor track bicycle performance) the sensor node active time is 30 ms. In other cases, such as [28] (an application for heating and cooling loads) and [29] (a wireless node enabled by wireless power with some sensors), the node active time reaches 100 ms or more. In all cases, the percentage of intrusion is very low (less than 1.5% in the worst case), even in the smallest case (10 ms). The time intrusion of active monitoring tools is much greater than that caused by the presented solution. For example, an active tool like EnviroLog [3,30], in which the sensor node is in charge of capturing the trap events, processing them and storing them in flash memory, can generate an overhead of 70% for 15 traps. This overhead seems unacceptable in a real scenario, which explains the interest in hybrid solutions that free the sensor node from most of the monitoring functions.
The overhead of instruction code (inside the sensor node) of active monitoring tools is higher than that associated with the hybrid monitor presented. For example, additional features of Sympathy occupy 1558 bytes of ROM (Read-Only Memory); EnviroLog requires 15,160 bytes of node flash memory [3]; and the Lightweight Tracing program's requirement can reach approximately 4000 bytes [19].
No energy intrusion data was found in the bibliography studied. As the proposed systems consider monitoring to be a temporary phase within the lifetime of a WSN, their energy consumption has not been analyzed in depth. However, active monitoring techniques require the monitored node to perform many operations to record its monitoring information. From Equation (3), it seems obvious that these techniques increase the power consumption, as they increase the working time and, in most cases, also require the use of hardware resources from the sensor node. Both the increased working time and the use of hardware resources result in an energy consumption many times greater than that observed when using the hybrid solution approach presented in this paper.
Conclusions
In this paper an active hybrid (software and hardware) monitor has been presented. It is able to collect detailed information about the operation of a sensor node with very low intrusion, and thus without affecting the node's performance.
The software traps implementation is easily portable to many hardware platforms, and it has been released under an open-access license. The libraries are intended to be freely distributed in order to be used without any royalties. Software traps become non-operational when no monitor node is attached, with a negligible impact on the sensor node's performance.
The design of the monitor node makes it easily attachable and detachable from sensor nodes. Monitor nodes may then be removed when the monitoring campaigns ends.
If software traps are implemented, a monitoring campaign may be repeated at any moment simply by plugging the monitor nodes in again. For instance, the monitor nodes may be used for debugging at the implementation stage and then removed (simply disconnected) when the sensor nodes are finished. Later, after the deployment of the WSN, if undesirable behavior of a sensor node is detected, monitor nodes may be reconnected to evaluate its operation in situ. Once the problem is identified and corrected, the monitor nodes may be removed again.
At the same time, monitor nodes use standard interfaces so that they can be attached to many different sensor node architectures. The monitoring tools found in the literature are hardware-dependent, each built for a specific sensor node.
Each monitor node also includes its own power source, whether battery-based or a wired power installation, to avoid changing the behavior of the monitored sensor node.
As a first step, and following a study of monitoring tool requirements, a new architecture was presented for monitoring systems. This architecture addresses all the characteristics and functions required in monitoring tools, structured in a hierarchical way. It also provides advantages such as flexibility, reusability, and standardization.
Using this architecture, an active hybrid monitor was implemented. It is based on the use of new additional hardware, called a monitor node, which must be attached to the sensor node that is to be monitored. This monitor node is in charge of many monitoring functions (data formatting, data storage, time stamping, etc.), reducing the interference with the sensor node application.
The monitor node is complemented with software introduced into the sensor node. This piece of software captures the relevant events to be monitored and uses a standard interface to transmit them to the monitor node, if present. Designers may define their own relevant events and introduce them into the application code by means of software traps. This mechanism offers both flexibility and the possibility of obtaining detailed data on the internal behavior of the sensor node. If no monitor node is attached, the traps are disabled and cause only a negligible intrusion.
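As an illustration of the trap mechanism just described, the sketch below models the intended behavior in Python for readability; it is not the authors' released library, and monitor_attached() and the SPI routine are hypothetical stand-ins for the platform's GPIO test and interface driver.

```python
# Illustrative sketch (not the released trap library): a software trap that
# emits an event over the monitoring interface only when a monitor node is
# detected, and degrades to a near no-op otherwise.

TRAPS_ENABLED = True  # cleared at start-up if no monitor node is detected

def monitor_attached() -> bool:
    # Hypothetical: read a "monitor present" line on the standard connector.
    return True

def spi_send(frame: bytes) -> None:
    # Hypothetical stand-in for the sensor node's SPI driver.
    print("SPI ->", frame.hex())

def trap(event_id: int, payload: bytes = b"") -> None:
    """Emit a monitoring event; near no-op when no monitor node is attached."""
    if not TRAPS_ENABLED:
        return                               # negligible intrusion: one test
    spi_send(bytes([event_id]) + payload)    # monitor node timestamps/stores

# Application code marks its own relevant events:
TRAPS_ENABLED = monitor_attached()
trap(0x01)                # e.g., "radio transmission started"
trap(0x02, b"\x10")       # e.g., "sample acquired", with one data byte
```

The design point this illustrates is that all expensive work (timestamping, formatting, storage) happens on the monitor node; the sensor node only pays for one conditional test plus a short interface transfer per event.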
The use of a standard interface means that the monitor nodes can be easily added to the WSN during its development, improving system debugging and shortening development time, or during the deployment stage to verify the correct operation of the sensor network. After removal, the monitor nodes can be reused in the monitoring of another WSN, and their cost may be recouped through repeated use in different monitoring studies.
To increase reusability, several interfaces may be used to communicate between the sensor node and the monitor node. The performance of these interfaces may influence the overall intrusion caused to sensor node operation and thus affect the representativeness of the obtained results. In this paper several interfaces have been evaluated and characterized, with the parallel and SPI interfaces shown to offer the best performance. However, since eighteen pins are necessary for parallel communication, and these are not always available for monitoring purposes on sensor network nodes, it can be concluded that SPI, when available, is the better option in most cases.
The monitor was also applied to the monitoring of a real WSN application. Real measurements were obtained from true operation under laboratory conditions. These measurements carried associated timestamps and were stored on an SD non-volatile memory card for later analysis.
As a result, it has been shown that the low time interference permits a high trap generation rate. Likewise, the code intrusion is low enough to be acceptable in most common modern microcontrollers, even those with very limited memory resources. Finally, the hybrid philosophy allows the power consumption required to fulfill the monitoring tasks to be divided between the monitor node and the sensor node, thus reducing the energy intrusion in the latter. This makes these techniques highly suitable for WSN monitoring (for design, implementation, deployment, or debugging purposes), but they may also be applied to the monitoring of many other embedded systems.
Future research in monitor node development could include the use of communication interfaces, following the architecture in Figure 1, to provide greater support to the Interchange Layer. Many monitor nodes, and even passive monitors such as wireless network sniffers, could combine their data in a database. To achieve this goal it would be necessary to standardize the data format. Diverse monitoring data sources, even from different manufacturers, could then be joined to observe the behavior of the entire network. Real-time analysis and visualization of obtained data could also be improved, focusing on the use of standard tools.
A classification of WSNs according to their acceptable monitoring intrusion should also be defined; this intrusion index would then bound the overall intrusion (number of captured events, communication interface used, etc.) observed in a single monitoring campaign.
"Computer Science",
"Engineering"
] |
Analysis of Multifrequency GNSS Signals and an Improved Single-Epoch RTK Method for Medium-Long Baseline
To determine the optimal carrier linear combinations for multifrequency GNSS, scholars have also put forward several algorithms, including the clustering analysis method [9,10] and the analytical method [11]. The frequency division multiple access (FDMA) technique is used in GLONASS, whereas the code division multiple access (CDMA) technique is employed in GPS, BDS, and Galileo. For this reason, the current research considers only CDMA satellite systems to achieve single-epoch real-time kinematic (RTK) positioning with multiple frequencies and systems. Figures 1 and 2 present the number of satellites and the position dilution of precision (PDOP) for GPS and for the combined Galileo/BDS-2/GPS constellation, respectively, over days 140 to 146 of 2019 with the cutoff elevation set to 10°. As shown in Figures 1 and 2, the combined Galileo/BDS-2/GPS system offers about three times as many available satellites as GPS alone, with an average of 25, and smaller PDOP values than the GPS. Accordingly, the increase in visible satellites can significantly enhance the geometric strength of the RTK positioning model, which is conducive to the RTK resolution.
To achieve multifrequency combined GNSS positioning, the least-squares ambiguity decorrelation adjustment (Lambda), cascading integer resolution (CIR), and triple-frequency carrier ambiguity resolution (TCAR) methods have been put forward successively [12-14]. Teunissen et al. [15] performed a systematic comparison of triple-frequency ambiguity fixing for the TCAR, CIR, and Lambda methods; the results indicated superior performance of the Lambda approach over the CIR and TCAR models. The fixed rate of the narrow-lane (NL) ambiguity is susceptible to atmospheric errors over a medium-long baseline. Feng [16] proposed the geometry-based (GB) TCAR method and derived the optimal combined observations for GPS, Galileo, and BDS. Wang and Rothacher [17] introduced pseudorange observations to construct a minimal-noise geometry-free and ionosphere-free (GIF) combination and analyzed the applicability of the model for GPS, Galileo, and GLONASS; nevertheless, the results indicated that this model suffered from large noise when solving the NL ambiguity. Li et al. and Wu et al. [18,19] exploited two fixed extra-widelane (EWL) ambiguities to calculate the ionospheric delay and thus improve the NL ambiguity-fixed rate; under significant noise, the model must be smoothed for nearly 2 min to achieve adequate ionospheric delay accuracy.
In the above studies, the triple-frequency ambiguity resolution (AR) methods were analyzed theoretically, and the experiments were performed on simulated data. Triple-frequency data could be collected once the BDS-2 system became operational. Tang et al. [20] compared the performance of the CIR and Lambda methods with measured BDS-2 data and proposed a single-epoch solution model. To enhance the NL ambiguity-fixed rate over long distances, Li et al. [21] put forward the GIF model and found that the NL ambiguities were fixed over several minutes; after the WL and EWL ambiguities were fixed, the NL ambiguity could also be fixed rapidly using a partial ambiguity resolution (PAR) algorithm. Further, Zhang and He [22] tested the TCAR model, the Lambda method, and the GIF model using triple-frequency BDS-2 data and showed that the Lambda method with uncombined observations exhibited the best performance among the compared methods; it was also verified that the GIF model is easily affected by multipath. Zhao et al. [23] developed an optimized TCAR model in which the accuracy of the GIF combination was improved by using fixed EWL and WL ambiguities, but the AR of the NL was time-consuming. With the operation of BDS-3, ambiguity fixing has been investigated for the new BDS-3 signals. Li et al. [24] introduced a linear combination of the BDS-3 B1C/B1I/B2a carrier phase observations; numerical results on real data show that the ionospheric-reduced combination (2, 2, -3) can reach an 88.4% AR rate on a long baseline of up to 1,600 km.
In terms of multi-GNSS combined positioning, before the development of the BDS system, the focus of multi-GNSS combinations was primarily placed on the GLONASS and GPS systems. Al-Shaery et al. and Duan and Shen [25,26] explored single GLONASS and combined GPS/GLONASS positioning, and the experimental data revealed higher RTK positioning accuracy for the combined system than for a single system. With the official operation of the BDS system, combining BDS with other existing satellite systems has become a research hotspot [27,28]. The positioning performance of single-epoch GPS/BDS RTK was also investigated for a short baseline [29,30], and the results illustrated that the integrated GPS/BDS system exhibited a remarkably higher success rate of ambiguity fixing than the single GPS or BDS system. Teunissen et al. [31] reported that, compared to a single system, the integrated GPS/BDS system could enhance the positioning accuracy and ambiguity resolution at a cutoff elevation of 40°. In addition, the integrated multisystem combination was examined in [32], and the experimental data showed that the integer AR fixed rate of the integrated system was higher than that of single-, double-, and triple-system solutions. The related research has shown that multisystem GNSS can enhance the strength of the parameter solution model and the positioning availability, but combining systems can introduce certain problems (e.g., the inclusion of low-elevation satellites), which makes it more difficult to resolve the ambiguities of all satellites simultaneously. On the basis of the cutoff elevation angle and ambiguity variance, Gao et al. and Wang and Feng [33,34] formulated a PAR strategy, which significantly enhanced the fixed rate of the NL ambiguities.
Generally, the EWL and WL integer ambiguities become easier to fix when multifrequency signals are introduced. However, over a medium-long baseline, the NL ambiguity is still difficult to fix correctly in a single epoch; the ambiguity of the NL observations therefore requires further in-depth research. Given that, this study proposes an improved single-epoch multifrequency multisystem RTK method for the medium-long baseline. First, the Galileo and BDS EWL ambiguities are fixed at a high success rate, and the Galileo and BDS WL ambiguities are solved by a transformation process. Second, the ambiguity-fixed WL is adopted to raise the WL ambiguity-fixed rate for the GPS combination observations, and a parameterizing strategy for the ionospheric delay is employed to enhance the fixed rate of the GPS NL ambiguity. In addition, the availability of the proposed method is evaluated with real data.
The present work is organized as follows: Section 2 describes the data collection process and the signal quality analysis. Section 3 presents a single-epoch RTK method for multifrequency positioning and evaluates the feasibility of the presented approach experimentally. Finally, the main conclusions are drawn in Section 4.
Data Collection and Signal Quality Analysis
Observation data from two types of receivers (Trimble NETR9 and CHC N71) were collected on three baselines. The CUT0-PERT baseline was located on the campus of Curtin University in Australia, and the data were acquired on January 10-16, 2019 (DOY 10-16, 2019). The CMDN-PDJP baseline was located in Nanjing, China, and the data were collected on May 22-28, 2019 (DOY 142-148, 2019). The FXTH-JPST baseline was located in Shanghai, China, and the data were collected on August 21-27, 2018 (DOY 233-239, 2018). The three baseline lengths were 22.41 km, 30.20 km, and 50.61 km; Table 1 provides more details on the baselines. Figure 3 illustrates the time series of the visible satellite number and PDOP for Galileo, BDS-2, GPS, and the Galileo + BDS-2 + GPS combination at 10° and 35° cutoff elevation angles at the FXTH station (August 21, 2018). As presented in Figure 3, in multiple epochs the single Galileo system had fewer than 4 satellites at a cutoff elevation of 10° and could not provide real-time positioning services on its own. The number of BDS-2 visible satellites was approximately 10 at the 10° cutoff elevation, equal to the GPS satellite count; however, the BDS-2 system showed a higher PDOP value than the GPS, mainly because BDS-2 at this stage consisted of GEO and IGSO satellites covering the Asia-Pacific region. In addition, at the 10° cutoff elevation, the overall satellite count of the combined Galileo/BDS-2/GPS system was nearly 20, and its PDOP value, approximately 1.5, was the lowest among all the systems. Accordingly, abundant satellites enhance the satellite geometry strength, thus facilitating the RTK solution. Further, the number of GPS satellites at the 35° cutoff elevation was less than 4, while that of the combined Galileo/BDS-2/GPS system reached nearly 15, with a PDOP value of nearly 2. Thus, the benefit of the combined Galileo/BDS-2/GPS system is most noticeable in harsh environments (e.g., urban canyons). Since the elevation of the GEO satellites varied only slightly, the C/N0 value and the elevation angle were not noticeably related, so their relation is not presented in this paper. Figure 5 shows that the C/N0 of the B1, B2, and B3 signals for the IGSO satellites was basically the same. The C/N0 of the MEO satellite B2 and B3 signals was almost identical and 1-2 dB-Hz larger than that of the B1 signal at the same elevation. Furthermore, the C/N0 values of the MEO satellites were 2-3 dB-Hz larger than those of the IGSO satellites at the same elevation; it is reasonable to attribute this to the considerably lower orbital altitude of the MEO satellites compared with the IGSO satellites. Figure 6 gives the C/N0 values against the satellite elevation for GPS L1/L2. The C/N0 of the L2 signal varied from 17 dB-Hz to 45 dB-Hz, and that of the L1 signal from 35 dB-Hz to 52 dB-Hz; thus, at the same elevation, the C/N0 of the L1 signal for all three satellite types is noticeably higher than that of L2. According to the data for all the signals in Figure 7, the Galileo-FOC E5a/E5b exhibited the best performance at all frequencies, followed by the IIR-M L1, while the IIR-A/B L2 performed the worst. The Galileo-FOC satellites outperformed the other two systems mainly because of the application of advanced modulation schemes.
The MPC is commonly employed to assess the pseudorange multipath on a single-frequency observation. The MPC value can be obtained by

MPC_i = P^s_(r*,i) − ((f_i² + f_j²)/(f_i² − f_j²))·Φ^s_(r*,i) + (2f_j²/(f_i² − f_j²))·Φ^s_(r*,j) + B_(i,j),

where P^s_(r*,i) represents the pseudorange, Φ^s_(r*,i) and Φ^s_(r*,j) indicate the carrier observations, r represents the receiver, s represents the satellite, f represents the carrier frequency, i and j are the carrier frequency subscripts, * denotes the system, and B_(i,j) stands for the constant term absorbing the integer-valued ambiguities.
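As a worked illustration of the combination above, the following sketch computes the MPC for synthetic GPS L1/L2 data; removing the constant term B_(i,j) by subtracting the mean over a continuous arc is our assumption of the usual convention, since the paper does not spell that step out.

```python
import numpy as np

# Sketch of the multipath combination (MPC). Inputs are pseudorange and
# carrier phase in metres on frequencies fi and fj; the coefficients cancel
# the geometry and the first-order ionosphere exactly, leaving multipath,
# code noise, and a constant bias removed by de-meaning per arc.

def mpc(P_i, Phi_i, Phi_j, fi, fj):
    a = (fi**2 + fj**2) / (fi**2 - fj**2)
    b = 2 * fj**2 / (fi**2 - fj**2)
    raw = P_i - a * Phi_i + b * Phi_j
    return raw - np.mean(raw)               # de-bias per continuous arc

# GPS L1/L2 example with synthetic data (metres):
f1, f2 = 1575.42e6, 1227.60e6
rho = 2.2e7 + np.cumsum(np.full(100, 0.5))  # smooth geometric range
P1 = rho + np.random.normal(0, 0.3, 100)    # code noise stands in for multipath
L1 = rho + 5.0                              # carrier with constant phase bias
L2 = rho + 7.0
print(np.std(mpc(P1, L1, L2, f1, f2)))      # ~0.3 m, i.e. code-level scatter
```

One can verify the cancellation algebraically: the geometric term drops out because 1 − a + b = 0, and the ionospheric term drops out because a − b·f_i²/f_j² = 1 for these coefficients.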
The MPC values of the IOV-E12, FOC-E04, GEO-C01, IGSO-C08, MEO-C11, IIR-A/B-G23, IIR-M-G12, and IIF-G06 satellites are given in Figure 8. As shown in Figure 8, the MPC values of the different satellites fluctuated between -2 m and 2 m. Unlike the GPS and Galileo systems, the BDS-2 system exhibited code biases. The GEO satellites were affected by the code bias, but because their elevation varies over only a small interval, the correlation between the code bias and the elevation was slight.
Figure 9 displays the RMS of the MPC for each satellite type; the values for the Galileo signals were mainly within 0.4 m, in the order E1 > E5b > E5a.
For BDS-2, the B3 signal exhibited the optimal performance, whereas the B1 signal performed the worst.For the GPS, the RMS values of the MPC for the L2 signal were smaller than those for the L1 signal.
Multifrequency Multisystem Ambiguity Resolution
Following the triple-frequency combination observation theory, the pseudorange combination coefficients are denoted as a, b, and c, while the carrier combination coefficients are represented by d, e, and f, and the double-difference combination observations can be written as

ΔΦ_(d,e,f) = Δρ − β_(d,e,f)·ΔI₁ + λ_(d,e,f)·ΔN_(d,e,f) + ε_Φ,
ΔP_(a,b,c) = Δρ + β_(a,b,c)·ΔI₁ + ε_P,

where Δ denotes the double-difference operator; φ₁, φ₂, and φ₃ refer to the carrier phase observations; P₁, P₂, and P₃ refer to the pseudorange observations; Δρ collects the geometric terms; ΔI₁ is the first-order ionospheric delay on the first frequency; and ΔN_(d,e,f) is the combined integer ambiguity. The wavelength, the first-order ionospheric delay scale factor β_(d,e,f), and the noise amplification factor μ_(d,e,f) of the combined observation are

λ_(d,e,f) = c₀/(d·f₁ + e·f₂ + f·f₃),
β_(d,e,f) = f₁²·(d/f₁ + e/f₂ + f/f₃)/(d·f₁ + e·f₂ + f·f₃),
μ_(d,e,f) = √((d·f₁)² + (e·f₂)² + (f·f₃)²)/(d·f₁ + e·f₂ + f·f₃),

where c₀ is the speed of light and f₁, f₂, and f₃ are the carrier frequencies. Over the past few years, optimal linear combinations with reduced ionospheric delay, long wavelength, and low noise have been derived for GNSS triple-frequency observations [35-37]. Table 2 lists the optimal combination characteristics of the triple-frequency observations for the BDS and Galileo systems.
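The following sketch evaluates these combination characteristics for the coefficient sets the paper adopts for BDS; the B1/B2/B3 frequencies are the published values, and the formulas are the standard ones reconstructed above, so the printed numbers should match the kind of entries Table 2 contains.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light, m/s

def combo(freqs, d, e, g):
    """Wavelength, iono scale beta, and noise factor mu for coefficients
    (d, e, g) -- the paper's (d, e, f) -- on frequencies f1, f2, f3."""
    f1, f2, f3 = freqs
    fc = d * f1 + e * f2 + g * f3                       # combined frequency
    lam = C0 / fc                                       # wavelength (m)
    beta = f1**2 * (d / f1 + e / f2 + g / f3) / fc      # L1-referenced iono
    mu = np.hypot(np.hypot(d * f1, e * f2), g * f3) / fc
    return lam, beta, mu

bds = (1561.098e6, 1207.140e6, 1268.520e6)              # B1, B2, B3 (Hz)
for coeff in [(0, -1, 1), (1, 4, -5), (1, -1, 0), (1, 0, -1)]:
    lam, beta, mu = combo(bds, *coeff)
    print(coeff, f"lambda={lam:.3f} m, beta={beta:+.3f}, mu={mu:.1f}")
```

For instance, the BDS EWL1 combination (0, -1, 1) comes out with a wavelength of about 4.88 m and a noise factor near 28.5, which is what makes it so easy to fix in a single epoch.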
EWL/WL Ambiguity Resolution.
A medium-long baseline exhibits a large double-difference ionospheric delay, so the WL integer ambiguity-fixed rate is low. In this research, two sets of fixed EWL ambiguities were used to determine the WL ambiguities through a linear transformation. According to the characteristics of the triple-frequency combination observations for the BDS and Galileo systems, the BDS system used the (0, -1, 1) and (1, 4, -5) combinations, and the Galileo system used the (0, -1, 1) and (1, 5, -6) combinations; the two EWL combinations are denoted EWL1 and EWL2. Since EWL1 has the longer wavelength, it is easy to fix. The observation equation formed from the EWL1 carrier observation and the three pseudorange observations P₁, P₂, and P₃ is

v^S = [B^S  λI]·[X; N] − l^S,

where the superscript S denotes the BDS or Galileo system; the coefficient matrix B corresponds to the position vector parameters; v denotes the residual vector; X and N represent the baseline vector parameters and the carrier phase ambiguity vector, respectively; and λ, I, and l are the wavelength of the EWL1 observation, the identity matrix, and the observed-minus-computed (OMC) vector of the relevant observations, respectively. The least-squares method was employed to obtain a float solution of the ambiguity parameters, which was then fixed using the Lambda method. The EWL1 carrier observation with its fixed ambiguity can be treated as a high-precision pseudorange observation, and the EWL2 carrier observation can then be expressed together with it in the same geometry-based form, after which the EWL2 ambiguities were ascertained by the Lambda method. For the WL observations, both the BDS and Galileo systems adopted the (1, -1, 0) and (1, 0, -1) combinations, denoted WL12 and WL13, respectively, whereas the GPS employed only the (1, -1, 0) combination.
When the EWL ambiguities are fixed, the WL ambiguities of BDS and Galileo follow from the combination coefficients by a linear transformation: since (1, -1, 0) = (1, 4, -5) + 5·(0, -1, 1) and (1, 0, -1) = (1, 4, -5) + 4·(0, -1, 1) for BDS, N_WL12 = N_EWL2 + 5·N_EWL1 and N_WL13 = N_EWL2 + 4·N_EWL1; for Galileo the corresponding relations are N_WL12 = N_EWL2 + 6·N_EWL1 and N_WL13 = N_EWL2 + 5·N_EWL1. The GPS exhibits a low WL ambiguity success rate under a medium-long baseline. In this study, the WL carrier observations with the fixed WL ambiguities of the BDS and Galileo systems were treated as pseudorange observations, and the GPS WL carrier observation takes the same geometry-based form

v^G = [B^G  λ^G·I]·[X; N^G] − l^G,

where the superscript G denotes the GPS system and the remaining symbols have the same meaning as in Equation (4). After obtaining the float solution of the WL ambiguities and the corresponding vc-matrix, the Lambda method was adopted to fix the ambiguities. Two test scenarios were conceived to appraise the single-epoch GNSS positioning performance. In Scenario 1, single-system GPS WL AR was evaluated, while in Scenario 2, the fixed rate of GPS WL AR supported by BDS/Galileo WL observations was analyzed. In both scenarios, the Ratio threshold and the RMS of the positioning errors were used to verify the reliability of ambiguity fixing and the positioning performance. The solution mode adopted single-epoch AR; although single-epoch RTK solutions are rarely used in practical applications, the single-epoch analysis mode is more conducive to assessing the accuracy and feasibility of RTK. The Ratio was calculated as

Ratio = N₂/N₁ ≥ c,

where N₁ and N₂ represent the minimum and second-minimum quadratic forms of the ambiguity candidates with respect to the float ambiguity solution, respectively, and c is the threshold, which was set to 2 or 5. The Ratio values of Scenarios 1 and 2 for the three baselines are presented in Figure 10. As observed from the figure, the Ratio values of Scenario 2 are noticeably higher than those of Scenario 1, demonstrating better WL AR performance for Scenario 2. Table 3 lists the single-epoch WL AR results for the two scenarios. At a Ratio threshold of 2, the maximal AR fixed rate for Scenario 1 was only 48.44%, while that for Scenario 2 reached 97.64%. Even for the 50 km baseline, at a Ratio threshold of 2, the fixed rate of the GPS WL AR assisted by BDS/Galileo exceeded 95%. The Galileo/BDS WL carrier observations, together with their fixed WL ambiguities, strengthen the constraints of the model and thereby raise the GPS WL ambiguity-fixed rate. In addition to the single-epoch AR performance, the RTK accuracy was also evaluated. The single-epoch position errors in the east (E), north (N), and up (U) directions for the three baselines are illustrated in Figure 11. As revealed in these figures, the GPS single-epoch WL observations showed positioning errors below 0.2 m in the E and N directions and below 0.5 m in the U direction. The positioning accuracies of the GPS WL observations are listed in Table 4: for dataset B, the horizontal and vertical accuracies are 0.03 m and 0.05 m, respectively, while for the 50 km baseline the RMS was approximately 0.04 m horizontally and 0.1 m vertically. Briefly, the WL observations ensure sub-decimeter positioning accuracy.
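A minimal sketch of the two pieces above, the EWL-to-WL integer transformation and the ratio-test acceptance check; the coefficient relations follow directly from the combination coefficients quoted in the text, while the function names are our own.

```python
# Sketch of the WL-from-EWL linear transformation and a ratio-test check.
# For BDS, (1,-1,0) = (1,4,-5) + 5*(0,-1,1), hence N_WL12 = N_EWL2 + 5*N_EWL1;
# the other coefficients follow the same arithmetic.

def wl_from_ewl(n_ewl1: int, n_ewl2: int, system: str):
    """Return (N_WL12, N_WL13) from fixed EWL1/EWL2 integer ambiguities."""
    k12, k13 = {"BDS": (5, 4), "GAL": (6, 5)}[system]
    return n_ewl2 + k12 * n_ewl1, n_ewl2 + k13 * n_ewl1

def ratio_test(n1: float, n2: float, c: float = 2.0) -> bool:
    """Accept the fix if the second-minimum/minimum quadratic form >= c."""
    return n2 / n1 >= c

print(wl_from_ewl(3, -2, "BDS"))   # -> (13, 10)
print(ratio_test(0.8, 2.1))        # -> True (ratio ~2.6 exceeds c = 2)
```

Because the transformation is exact integer arithmetic, no further search is needed for the WL ambiguities of BDS and Galileo once the two EWL ambiguities are fixed.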
NL Ambiguity Resolution.
For medium-long baselines, fixing the NL ambiguity is subject to the ionospheric delay. Although the first-order ionospheric delay can be excluded through an ionosphere-free (IF) combination, the noise is amplified. In this study, the ionospheric delay error, the ambiguities, and the position are treated as unknown parameters. The Galileo/BDS/GPS WL ambiguities, together with the original carrier observations, can be used to form the observation model

v = [B  λI  C]·[X; N; Ion^G] − l,

where Ion^G denotes the ionospheric delay vector of the GPS NL observations and C is its coefficient matrix.
Equation (9) can be simplified to v = A·Y − l, where v denotes the observation residual matrix, A is the coefficient matrix of the parameter vector Y (comprising X, N, and Ion^G), and l represents the OMC matrix.
Let the weight matrix of the ionospheric parameters be P_G = Σ_G⁻¹, with corresponding vc-matrix Σ_G. The least-squares solution is then

Ŷ = (AᵀPA)⁻¹·AᵀPl,

where P is the observation weight matrix augmented with P_G for the ionospheric pseudo-observations. The matrix P_G is diagonal with elements 1/σ_G², where σ_G² denotes the prior variance of the ionospheric parameters.
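The sketch below illustrates how such ionospheric pseudo-observations enter a least-squares solution, under our reading of the equations above; the dimensions, weights, and σ_G are synthetic, and the ambiguity columns are omitted for brevity.

```python
import numpy as np

# Sketch: ionospheric delays carried as parameters with a zero-mean prior of
# variance sigma_G^2, entering the normal equations as pseudo-observations
# weighted by P_G = 1/sigma_G^2. All values here are synthetic.

rng = np.random.default_rng(1)
m, n_pos, n_ion = 12, 3, 4                   # observations, position, iono
A = rng.normal(size=(m, n_pos + n_ion))      # design matrix (ambiguities omitted)
l = rng.normal(size=m)                       # observed-minus-computed vector
P = np.eye(m)                                # observation weights

sigma_G = 0.05                               # assumed prior std of iono (m)
H = np.hstack([np.zeros((n_ion, n_pos)), np.eye(n_ion)])  # selects iono parms
N = A.T @ P @ A + H.T @ H / sigma_G**2       # normal matrix with iono prior
y_hat = np.linalg.solve(N, A.T @ P @ l)      # float solution [X; Ion]
print(y_hat)
```

A tight σ_G pulls the solution toward the IF-free geometry (strong constraint), while a loose σ_G approaches the ionosphere-float case; tuning this prior is exactly what the parameterizing strategy trades on.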
If the WL ambiguity cannot be fixed, a higher-precision observation cannot be obtained. In this regard, equivalent observations combining the triple-frequency EWL observations of BDS and Galileo with the NL carrier observations of GPS were jointly developed in this study. Figure 12 illustrates the multifrequency multisystem GNSS single-epoch RTK positioning process. As the ambiguities are fixed progressively, the accuracy of the RTK solution is also enhanced gradually. At the same time, the proposed method maintains useful positioning accuracy even when the WL and NL ARs fail.
As mentioned above, real data with baseline lengths ranging from 22.41 km to 50.61 km were used to confirm the feasibility of the proposed approach and to assess the NL AR fixed rate. Two scenarios were designed as follows: (1) Scenario 1: the GPS WL AR was first calculated using a GB model, and the NL ambiguities were then fixed using the IF combination; (2) Scenario 2: the WL AR was first calculated for the GPS, Galileo, and BDS systems using the GB model, after which the ambiguity-fixed WL observations of Galileo, BDS, and GPS acted as constraints and the ionospheric delay was parameterized. Figure 13 illustrates the Ratio values of Scenarios 1 and 2 across the three baselines, and the fixed rates of the single-epoch NL ambiguities are listed in Table 5. As shown in Figure 13, the Ratio values for Scenario 2 are significantly larger than those for Scenario 1, indicating a better NL ambiguity-fixed rate for Scenario 2. In addition, as presented in Table 5, for Baseline A at the Ratio threshold of 2, the NL ambiguity-fixed rate for Scenario 2 was 96.32%, which was 26.1% higher than that for Scenario 1. For the 50 km baseline, the NL ambiguity-fixed rate for Scenario 1 was only 47.74% at the Ratio threshold of 2, which cannot ensure the feasibility of single-epoch RTK positioning; with the proposed method, the fixed rate was still 93.26%, which can effectively ensure single-epoch RTK positioning.
To confirm the feasibility of the proposed approach, the positioning errors of the three baselines were calculated, as shown in Figure 14; the RMS values in the three directions are listed in Table 6. As shown in Figure 14, the errors of all three baselines were below 0.08 m in the horizontal directions and below 0.1 m in the vertical direction. The RMS values in the three directions increased with the baseline length, with the maximum in the vertical direction. For the 50 km baseline, the RMS values were better than 2 cm in the horizontal directions and approximately 2.9 cm in the vertical direction. As the critical factor affecting the NL ambiguity-fixed rate for medium-long baselines, the double-difference ionospheric delay should be parameterized. Moreover, in this study, the ambiguity-fixed WL observations of Galileo, BDS, and GPS were treated as high-precision observations, which effectively raised the NL ambiguity-fixed rate.
Conclusion
The narrow-lane (NL) observations of the GPS are subject to the double-difference ionospheric delay over a medium-long baseline, which affects the ambiguity-fixed rate. To address this problem, this study proposes an improved single-epoch multifrequency multisystem real-time kinematic (RTK) method. First, the Galileo and BDS extra-widelane (EWL) ambiguities are fixed at a high success rate, and the Galileo and BDS WL ambiguities are solved by the transformation process. Second, the ambiguity-fixed WL is employed to raise the GPS WL ambiguity-fixed rate, and a parameterizing strategy for the ionospheric delay is adopted to enhance the GPS NL ambiguity-fixed rate. The feasibility of the proposed approach is verified with measured data. The conclusions of this study are as follows: (1) The full operational capability (FOC) E5a/E5b signals achieve the best performance at all frequencies, followed by the IIR-M L1, while the IIR-A/B L2 performs the worst; the FOC satellites are superior to those of the other two systems, mainly because of the application of advanced modulation schemes. The MPC of the Galileo signals shows root mean square (RMS) values below 0.4 m, in the order E1 > E5b > E5a. In the BDS-2 system, the B3 signal exhibits the best performance, while the B1 signal performs the worst among all signals; for the GPS, the RMS values of the MPC for the L2 signal are lower than those of the L1 signal. (2) At the Ratio threshold of 2, the fixed rate of the GPS WL ambiguity resolution assisted by BDS/Galileo can exceed 95% for a medium-long baseline. The Galileo/BDS WL carrier observations, together with the fixed WL ambiguities of the BDS and Galileo systems, strengthen the constraints on the position coordinates, which is conducive to raising the fixed rate of the GPS WL AR. (3) For the 50 km baseline, the NL integer ambiguity-fixed rate of the GPS using the ionosphere-free (IF) combination is only 47.74% at the Ratio threshold of 2, which cannot ensure the feasibility of single-epoch RTK positioning. With the proposed method, however, the fixed rate of the GPS NL ambiguity reaches 93.26%, which further improves both the positioning accuracy and the AR over medium-long distances. The proposed approach therefore broadens future applications of deformation monitoring in medium-long baseline scenarios.
Figure 1: The quantity of satellites and PDOP for GPS at a cutoff elevation of 10°.
Figure 3: The visible satellite numbers and PDOP values of the Galileo, BDS-2, GPS, and combined Galileo/BDS-2/GPS systems at cutoff elevations of 10° and 35° at the FXTH station on August 21, 2018; labels "E," "C," "G," and "E + C + G" denote the Galileo, BDS-2, GPS, and combined Galileo/BDS-2/GPS systems, respectively.
Figure 7: C/N0 values versus the satellite elevation for satellites of varied types.

Figure 4 shows the C/N0 data of the Galileo signals against elevation; higher elevations resulted in larger C/N0 values. The C/N0 values of the E5a and E5b signals were almost equal and better than those of the E1 signal for the same type of satellite. At the same elevation, the C/N0 values of the FOC satellites were 2-3 dB-Hz larger than those of the IOV satellites. The results for the three different signals of the BDS-2 system are presented in Figure 5.

Figure 8: The MPC values versus the satellite elevation angle for different satellites at the CUT0 station with the Trimble NETR9 receiver, collected on January 10-16, 2019.
Figure 9: The RMS of the MPC for different satellites.
Figure 10: The AR performance of WL on datasets A, B, and C.
Figure 11: Time series of positioning errors for WL combination observations on datasets A, B, and C.
Figure 13: The AR performance of NL on datasets A, B, and C.
Figure 14: The positioning error series for the NL combination observations on datasets A, B, and C.
Table 1: Specific information of the test data.
Table 2: EWL/WL combination characteristics for the triple-frequency BDS and Galileo systems.
Table 3: The WL ambiguity-fixed results for different scenarios.
Table 4: Statistical data of positioning errors for WL observations (m).
Table 5: The NL ambiguity-fixed results for different scenarios.
Table 6: Statistical results of single-epoch RTK-positioning error for NL observations (m).
"Engineering"
] |
Observing Desert Dust Devils with a Pressure Logger
A commercial pressure logger has been adapted for long-term field use. Its flash memory affords the large data volume to allow months of pressure measurements to be acquired at the rapid cadence (> 1 Hz) required to detect dust devils, small dust-laden convective vortices observed in arid regions. The power consumption of the unit is studied and battery and solar/battery options evaluated for long-term observations. A two-month long field test is described, and several example dust devil encounters are examined. In addition, a periodic (∼ 20 min) convective signature is observed, and some lessons in operations and correction of data for temperature drift are reported. The unit shows promise for obtaining good statistics on dust devil pressure drops, to permit comparison with Mars lander measurements, and for array measurements.
Introduction
Dust devils (e.g., Balme and Greeley, 2006) are vertical convective vortices encountered in arid regions, in particular during periods of strong solar heating (early afternoon in summer). The vortical structure is rendered visible by lofted dust, which may (via sunlight absorption) itself contribute to the intensity of the convection.
Pressure drops in dust devils have been noted in the past on Earth (e.g., Wyett, 1954; Lambeth, 1966; Sinclair, 1973), but are actually more systematically documented in studies of dust devils on Mars (e.g., by Mars Pathfinder: Murphy and Nelli, 2002; and by the Phoenix lander: Ellehoj et al., 2010), where landers have recorded meteorological parameters over long periods at a cadence high enough to detect small vortical structures. A number of dust-devil-type vortices were detected by the Viking lander (via wind speed and direction, e.g., Ryan and Lucich, 1983; the pressure measurements were acquired too infrequently to be of use in this application). Interest in further measurements remains high: various proposals for network missions (including pressure sensing) have been made, such as NASA's MESUR, the French Netlander, and recently the Finnish METNET. The NASA rover Curiosity, which landed on Mars in August 2012, carries a meteorology package, as will the recently selected NASA InSight mission to be launched in 2016. All these missions may detect dust devils on Mars via pressure records and other data.
Routine terrestrial meteorological stations only record data at a ∼15 min cadence, too infrequently to detect dust devils. As discussed in Lorenz (2012), it would be highly desirable to obtain a dataset of ∼1 Hz or better fixed-station pressure data at a place and time where dust devils are known to occur. Such a dataset should span many weeks (Mars experience suggests of order one encounter per day can be expected, so several months of operation of a single station are needed) in order to obtain a useful number of dust devil encounters (i.e., of order a hundred or more, to permit robust comparison with Mars). While mobile measurement platforms (e.g., Sinclair, 1973) can obtain a larger number of encounters in a given time, they do so at considerable expense in labor, at the cost of introducing selection bias in the dust devils encountered (largest, slowest, etc.), and with possible vehicle effects on part or all of the record.
Sites with high dust devil activity tend to be hot and remote, and long-term unattended operation presents hazards of theft or vandalism. Certain controlled areas where equipment could be supervised (e.g., military bases) may be feasible but present access challenges. Another approach, although not easy with conventional meteorological masts, is to deploy compact instrumentation with a minimal visual signature, such that it is unlikely to be detected and thus interfered with. Recent technological developments in precision pressure sensors and flash memory allow such compact, and thus discreet, sensing platforms to be deployed for extended periods with a good probability of recovery, and at a cost at which a finite probability of loss is tolerable.
Commercial unit as supplied
The commercial pressure logger used here is the B1100-1 logger by Gulf Coast Data Concepts.The unit (Fig. 1) is essentially a small circuit board with a USB connector for data transfer: all components (including the pressure sensor, a reset switch, and a holder for a AA battery) are mounted on this board, which is protected by a three-part semitransparent plastic housing.The housing is about 2.5 cm in diameter and 10 cm long: with the AA battery installed, the unit weighs only 55 g.
The product literature (GCDC, 2010) reports that the unit will run for 3 weeks on an alkaline AA cell. This, however, appears to apply only at a low sample rate. A requirement for detecting small dust devils, distinct from more general meteorological investigations, is that samples be acquired at a high rate of one sample per second or better. Our initial tests showed that an alkaline AA cell will last only a few days at the highest rate of 10 Hz, although at 1-2 Hz sampling about 10 days of continuous operation can be expected.
The logger stores readings acquired at rates of up to 10 Hz in a comma-delimited text file (CSV), with a timestamp for each reading. These files are written as ASCII text to a 2 GB micro-SD flash memory card, which also holds a configuration file that determines the sample rate, file size, etc. The sensor in the unit is a BMP-085 Digital Pressure Sensor by Bosch Sensortec (Bosch Sensortech, 2009), in a 5 × 5 × 1.2 mm package; it communicates with the B1100 microcontroller over an I2C interface and has a nominal resolution of 0.01 mb. The BMP-085 is factory-programmed with 11 16-bit constants that are used, together with an on-chip temperature measurement, by the microcontroller to correct the raw pressure reading into a calibrated value.
Data storage
The ASCII file written by the loggers is less compact than a binary format, but considerably more convenient in most applications. A header contains the unit serial number, the file origination time, the battery voltage, and setting information; then a user-specified number of measurement lines follows, each containing the measurement time in decimal seconds after the file origination time, the pressure reading in Pascals as an integer, and optionally (see below) an integer temperature reading in tenths of a degree Celsius.
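A minimal reader for files in this format might look like the following sketch; the exact header syntax is not specified in the text, so the parser simply skips any line that does not begin with a numeric time field, which also handles the occasional corrupted lines discussed later.

```python
# Sketch of a reader for the logger's ASCII files. Data lines are assumed to
# be "seconds, pressure_Pa" with an optional third field (temperature in
# 0.1 C units) every N readings; header lines and corrupted lines (truncated
# lines, spurious "y" characters) are skipped by the same numeric test.

def read_logger_file(path):
    times, pressures = [], []
    with open(path) as fh:
        for line in fh:
            parts = line.strip().split(",")
            try:
                t = float(parts[0])       # seconds since file origination
                p = int(parts[1])         # pressure in Pa
            except (ValueError, IndexError):
                continue                  # header or corrupted line: skip
            times.append(t)
            pressures.append(p)
    return times, pressures

# t, p = read_logger_file("P10_0001.csv")   # hypothetical file name
```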
A file with 86 400 measurements (i.e., a day of data at 1 sample per second) occupies 1.5 MB; even at the maximum sample rate of 10 Hz, about 4 months of data could be stored on the 2 GB card supplied with the unit. Note that the filesystem on the logger only accommodates 999 data files. While large datafiles can be inconvenient (e.g., at least some versions of Microsoft Excel have difficulty plotting more than 32 768 points, and code in IDL or a similar language needs loop or index variables of long integer type), it may be necessary to use large files if high sample rates are intended for long periods of observation, in order to keep the total number of files below 1000.
Several additional pieces of information are stored. First, the battery voltage is recorded in the header of the file. Secondly, every 100 pressure readings (or at some other interval specified by the user in the configuration file) a temperature reading is written. Since the temperature sensor is simply on the circuit board, it has too high a thermal inertia to respond quickly to air temperature changes and is thus not of direct interest for dust devil measurements. Including the temperature data adds five characters to a line (a three-digit temperature reading plus a space and comma) that is typically 12-16 characters long with the time and pressure reading; i.e., it imposes a 20-30 % data overhead, so including the temperature at every reading would reduce the number of pressure readings that could be accommodated in the flash memory. Since the temperature data were not expected to be of immediate meteorological interest, an interval of 100 readings was chosen; we will return to this decision later.
The file header contains a time-tag from a real-time clock (set by the user). The time increment after this tag is recorded for each pressure measurement as a decimal value in seconds, so the absolute time of each reading can be calculated readily. Thus, data from different pressure loggers can be synchronized with modest effort and care in setting the clock for each unit. Since each clock can be set from the same PC, an initial synchronization to about 1 s should be achievable (no attempt has been made to evaluate the clock performance).

Fig. 2. Alkaline cell discharge behavior. Where the discharge curve intersects the voltage threshold of the logger (dotted line), the unit stops; in this case (threshold = 1.25 V) after a discharge of about 1 A-h, even though substantial capacity remains. One mitigation is to put two cells in series (dashed curve): here the curve intersects the threshold after some 2.2 A-h. The rather shallow discharge curve of alkaline cells means that if the threshold is even slightly higher, a significant drop in useful battery capacity results.
Power consumption
It was noted in initial tests that the logger ceased operation when the battery voltage fell to about 1.25 V. For conventional alkaline batteries, this in fact means there is considerable capacity left in the cell (see Fig. 2). Inquiries with the manufacturer indicated that a ∼1.2 V cut-off was introduced to avoid brownout of the system during file-write operations to the SD card, when the current draw can increase momentarily up to 100 mA. Corruption of the file system can result if the voltage drops too low under this current draw, and thus operations are suppressed when the voltage is low enough that this might be a risk. In fact, we have noted similar problems (and introduced a similar precaution to cease operations before the battery voltage fell too low) in experiments with digital time-lapse cameras used to study dust devils (Lorenz et al., 2010).
The device (Alex Kooney, personal communication, 2012) uses a boost regulator to generate a 3.3 V operating voltage from the battery supply. Because of this boost operation, current consumption decreases for higher battery voltages, which can exceed 1.5 V (although no advantage is gained above 3.3 V). We have measured the current consumption for 1.5 V and 3.0 V supplies and list the results in Table 1; indeed, for a given sample rate the current draw is reduced by a factor of just under two at the higher supply voltage. The unit's configuration file also allows the user to disable the indicator LEDs; this change makes only a modest (∼5 %) difference in consumption. For the 1.5 V supply, the current varies roughly linearly as I = 6 + 1.1s, where s is the sample rate in Hz and I the current in mA. Note that the currents in Table 1 are typical values; there are brief spikes during flash-write operations that are not accounted for. Thus, our approach to enhancing the longevity of the system is threefold. First, two alkaline cells are used in series; this reduces the current consumption due to the (typically) higher supply voltage, giving a factor of nearly two increase in endurance. Second, because the discharge curve of two cells in series only reaches 1.25 V when the cells have been almost completely depleted (∼2.1 A-h, compared with ∼0.8 A-h for a single cell down to 1.25 V, see Fig. 2), an additional factor of ∼2.5 is obtained. Finally, larger (D) cells are used, bringing a further factor of ∼6-10 over AA cells. Thus, the total endurance is enhanced by ∼30 compared with the single AA cell by the substitution of two D cells with a battery holder, totaling about $5 per unit. The logger and D cells/holders fit inside a "2.5-cup" polypropylene food container, sprayed with sand-textured paint for this application (see Fig. 3). Note that a hole must be drilled in this waterproof housing to permit external pressure changes to be rapidly communicated to the sensor and to prevent temperature changes from causing the internal pressure to vary.
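The endurance arithmetic above can be condensed into a small estimator; the current model I = 6 + 1.1s and the roughly factor-of-two saving at the higher supply voltage come from the text, while the usable capacities below are rough assumptions rather than datasheet values.

```python
# Sketch of the battery-endurance arithmetic. Usable capacities (A-h down to
# the 1.25 V cutoff) are approximate assumptions: ~0.8 for one AA, ~2.1 for
# two AA in series (per Fig. 2), and of order 15 for two D cells in series.

def endurance_days(sample_hz, supply="2xD"):
    i_ma = 6 + 1.1 * sample_hz               # current at 1.5 V (Table 1 fit)
    if supply in ("2xAA", "2xD"):
        i_ma /= 2                            # boost regulator draws less at ~3 V
    capacity_ah = {"1xAA": 0.8, "2xAA": 2.1, "2xD": 15.0}[supply]
    return capacity_ah * 1000 / i_ma / 24    # hours of mA-h budget -> days

for cfg in ("1xAA", "2xAA", "2xD"):
    print(cfg, f"{endurance_days(10, cfg):.0f} days at 10 Hz")
```

With these assumed capacities the estimator reproduces the behavior described in the text: a couple of days for a single AA at 10 Hz, about ten days for two AA cells, and a couple of months or more for two D cells.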
An obvious alternative approach is to use solar power, especially since the object of the study is areas subject to strong solar heating. A plethora of options exist. Simplest is to substitute solar power, when available, for primary (alkaline) power, but since the device also typically runs at night (actually optional: the configuration file allows for operation between designated times), this only reduces the consumption by about half. Better is to charge a secondary (rechargeable) battery during the day so that the energy from the solar cells can be exploited for night use as well; for overall energy balance, this requires a solar cell current that exceeds the power consumption by a factor of 3-4. Because extended cloudy periods may allow the secondary battery to become temporarily exhausted (which would reset the datalogger and stop its operation), this approach was adopted in parallel with a pair of alkaline AA cells as backup. This is particularly important since nickel-cadmium (NiCd) rechargeable cells perform poorly at the very high temperatures encountered by dust devil survey instrumentation. The solar cell charged a set of 3 NiCd AA cells (a nominal 3.6 V, dropped to 3 V at the logger power terminals by a 1N4148 diode in series; a similar diode in series with the alkaline cells isolated them unless the NiCd voltage fell below 2.4 V).
Design for measurement campaign
We now have the information at hand to configure the system best for field use. For a 1-week campaign, a single AA cell will suffice if 2 Hz data are adequate; for 10 Hz data, 2 × AA or 1 × D cell must be used. For a 1-month campaign, 2 × AA or 1 × D will allow 2 Hz data, but the 2 × D option must be exercised for 10 Hz data. For a 4-month campaign, the 2 × D cell (or solar) options are needed, and will permit 10 Hz data. Note, however, that in these cases and for longer measurements the data storage becomes the limiting factor. As indicated in Sect. 2.2 above, 4 months of operation at 10 Hz will fill the 2 GB memory. Note also that 4 months (∼10⁷ s) at 10 Hz corresponds to 10⁸ lines of data, so the 1000-file limit of the file system would require each file to exceed 100 000 lines in length.
Field trial
Three units were deployed on 16 April 2012 on a playa (dry lake bed) in southern Nevada (Fig. 4), a location known (e.g., Pathare et al., 2010) to see dust devil activity. Each was set at the foot of a small sagebrush (Fig. 5), which provided some shelter from wind and some concealment, and the location was recorded with GPS. Two units ("P10" and "P16") were powered by alkaline D cells, and the other ("P15") by three AA NiCd cells recharged with a solar cell, with a 2 × AA alkaline cell backup. The units were left unattended until recovered on 21 May 2012; the initial measurement period was therefore 35 days. At this point the data were downloaded, another battery-powered unit ("P12") was deployed, and operations continued for another three weeks until the surviving units were recovered on 11 June 2012.
All three units operated successfully for the first measurement period and some 2 Gbit of data were obtained. Contemporaneous hourly meteorological data were obtained from the Remote Automatic Weather Station (RAWS) at Red Rock Canyon, about 10 km to the northeast; these data are not discussed further in the present paper.
Datafiles were written with lengths of 32 000 samples and 86 400 samples, giving about 970 files and 360 files, respectively. The large number of files allows a good record of the battery voltage history (see Fig. 6). Three units and another ∼2 GB of data were obtained when the site was visited in early June 2012. Unfortunately the solar-powered unit was not present, presumably removed by unknown persons (the playa is frequented for various recreational activities, including drag racing, flying radio-controlled airplanes, etc.). Perhaps the necessity of maintaining a clear sky view to obtain solar power made this installation particularly visible to passersby. It may be that the penalty of finite battery lifetime is worth the reduced probability of theft or damage afforded by better concealment (or even burial), which is impossible for solar-powered installations. Similarly, adding a prominent notice "Scientific Equipment, Please Leave Undisturbed" may decrease the probability of interference or removal if the unit is found, but increases the probability of it being noticed in the first place.
Field data and noise performance
The absolute calibration of the unit is not important for the present application, although the manufacturer's data sheets (Bosch Sensortech, 2009; GCDC, 2010) quote 1 hPa (i.e., ∼1 mb). The noise level depends on the measurement mode and is quoted as 3-6 Pa, or 0.03 mb. The reading is output as an integer in Pa (i.e., a resolution of 1 Pa or 0.01 mb). Indoor tests on short sequences of data (where no trend in pressure is obvious) show Gaussian-distributed scatter with a standard deviation of ∼3 Pa, suggesting that the specification is met in quiescent conditions.
In addition to features in the pressure data interpreted to be dust devils (see Sect. 5), some other aspects of the field data were noted (see Fig. 7). While in general the noise characteristics of the data were comparable with the indoor trials indicated above, some dramatically poorer behaviors were also noted. These have been studied here in order to improve them for future measurements, and in particular to improve the performance of automatic dust devil detection algorithms.
First, a sawtooth pattern (Fig. 8) was sometimes apparent in the data, corresponding to what looks like noise in the daily view (Fig. 7). This did not degrade the data quality dramatically beyond the normal noise level for the two loggers that ran with a 100 ms sample interval, but the third unit, with a sample interval of 500 ms, showed a much more pronounced sawtooth. Interestingly, the scatter in the data for that unit within each sawtooth cycle was noticeably smaller than for the other two, suggesting that perhaps the highest sample rate causes higher measurement noise. The sawtooth cycle was a very regular 100 samples, which was the arbitrary value chosen as a compromise between the limited expected utility of the temperature data and the data volume cost of recording it. It was thus suspected, upon examination of the data, that the temperature reading is actually used in the temperature correction of the pressure sensor data, and that the sawtooth (most noticeable during the day) was due to comparatively rapid temperature changes such that the correction was not being correctly applied. An obvious mitigation strategy is simply to decrease the interval between temperature readings to just a few seconds (i.e., every 10th sample for a 500 ms pressure sampling cadence, or every 50th for the fastest 100 ms). The improvement in data quality is likely worth the data volume penalty (see Sect. 2.2). Because this temperature calibration error is somewhat deterministic, a correction algorithm can be applied to improve the data quality (Fig. 8). Different filtering options (even simple smoothing) can likely improve the detectability of dust devil pressure drops, especially when the width of the drop being sought is much (∼10×) wider than the interval between temperature corrections.
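One plausible implementation of such a correction, assuming the drift within each 100-sample block is roughly linear and resets at each temperature update, is sketched below; it is not necessarily the algorithm used for Fig. 8.

```python
import numpy as np

# Sketch of a sawtooth correction: remove a fitted linear drift within each
# block of readings between temperature updates, anchored to the block start.
# This assumes the in-block drift is roughly linear; the last partial block
# is left untouched.

def desawtooth(p, block=100):
    p = np.asarray(p, dtype=float)
    out = p.copy()
    x = np.arange(block)
    for start in range(0, len(p) - block + 1, block):
        seg = p[start:start + block]
        ramp = np.polyval(np.polyfit(x, seg, 1), x)
        out[start:start + block] = seg - (ramp - ramp[0])
    return out

# Synthetic demo: slow trend + ~0.25 mb sawtooth resetting every 100 samples
n = 1000
truth = 100000 + 0.01 * np.arange(n)
saw = 25 * ((np.arange(n) % 100) / 100.0)        # Pa
noisy = truth + saw + np.random.normal(0, 3, n)
print(np.std(noisy - truth), np.std(desawtooth(noisy) - truth))
```

On this synthetic series the residual scatter drops from roughly the sawtooth amplitude back toward the ∼3 Pa sensor noise floor, which is the kind of improvement sought before running a dip-detection pass.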
Another mitigation (Alex Kooney, personal communication) is to add a "raw-output" flag to the configuration file; this option (not documented in the product manual, GCDC, 2010) forces the device to output a reading that is not "corrected" with an erroneous temperature. While this entails post-processing to recover absolute pressures, or indeed to correct for long-term relative variations, this raw output varies smoothly without any sawtooth noise, which may be better for detecting short-term dips in pressure due to dust devils.
A related noise effect appears in some instances, having the appearance of an irregular square wave superposed on the data. This appears to be similar to the sawtooth in origin (in that the steps occur when temperature readings are made) but is due to the finite resolution of the temperature reading: the error corresponds to a temperature "correction" that uses a value differing from the true one by less than 0.1 °C, which is the limiting resolution of the BMP-085 onboard sensor but is large enough to give a noticeable pressure error. Significant correction of this noise by post-hoc processing is likely to be challenging, since the information to make the correction simply does not exist (this noise is roughly comparable with the residual noise after a bad sawtooth has been corrected). However, as for the sawtooth noise, changing the temperature measurement cadence to a much longer, or (desirably) much shorter, interval will mitigate its impact on dust devil detection.
Finally, an additional data quality issue occurs, in that the text file is occasionally corrupted. Typically a line is truncated, or, in most cases, a number of spurious "y" characters is introduced. A broadly effective mitigation is to apply robust error-trapping in the code used to read and plot the data and to reject those lines (the text file can be manually edited, but this would become prohibitively laborious for large amounts of data such as those analyzed here). No obvious time or temperature correlation has been noted with these dropouts (although notably one brief dropout occurs in the middle of a dust devil encounter, see later, suggesting that perhaps nearby electrostatic discharge associated with triboelectric effects in the dust can cause data transfer problems).

Fig. 9. Some example dust devil signatures. (a) An isolated 30 s-wide event with a peak pressure drop of 0.5 mbar; (b) a narrow (∼30 s, 0.3 mbar) feature and, 400 s later, a wide (300 s, 0.5 mbar) feature, the latter presumably due to a large devil at some distance; (c) a sequential pair of similarly sized devils, each ∼300 s wide and ∼0.4 mbar deep, separated by 5000 s. Note each plot is scaled differently. Data are uncorrected for sawtooth effects.
While these issues are inconvenient, especially for automated processing of the data, it should be recognized that overall they occur fairly seldom. Many files are unaffected, and in most of those that are, over 99 % of lines in the ASCII record are uncorrupted.
While the noise in the data acquired in this initial field test was somewhat distracting, owing to temperature excursions more severe than are typical in laboratory or domestic settings, it was nonetheless easy to identify many dust devil signatures by sight. Further, important lessons have been learned about the optimum settings to use for ongoing and future measurements.
Example dust devils and other observations
A systematic investigation of dust devil population data from the logger described here will be performed in future work, using both human and machine methods to detect dust devils. An initial (human) reconnaissance of the data shows many "classic" dust devil encounters, and some examples are shown in Figs. 9-11.
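As a sketch of what a simple machine-detection pass might look like, the following subtracts a running-median background and flags deep negative excursions; the window and threshold values are illustrative choices, not taken from this paper.

```python
import numpy as np
from scipy.ndimage import median_filter

# Sketch: flag candidate dust devil dips as samples that fall well below a
# slowly varying background. A running median is robust to the short dips
# themselves; window and threshold are illustrative, not tuned values.

def find_dips(p, fs=1.0, window_s=600, thresh_pa=20):
    p = np.asarray(p, dtype=float)
    background = median_filter(p, size=int(window_s * fs))
    residual = p - background
    idx = np.flatnonzero(residual < -thresh_pa)   # samples inside deep dips
    return idx / fs                               # candidate times (s)

# e.g., a 0.5 mbar (50 Pa) drop lasting 30 s stands well clear of 3 Pa noise
# with these settings; sawtooth correction beforehand reduces false alarms.
```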
Pressure drops in Fig. 9 are 0.3-0.5 mbar deep, with durations of 30-300 s. These examples are fairly symmetric in shape, although usually the dust devil introduces a discontinuity in the background pressure (i.e., the pressure recovers after dust devil passage to a value different from the pre-encounter value). This was noted, e.g., by Sinclair (1973), although only a handful of examples were shown.
Two of the plots show two dust devils; it is tempting to speculate that these may be pairs (e.g., formed by the breakup of a horizontal roll vortex into two counter-rotating vertical vortices). Future work will explore, with the robust statistics permitted by the large dataset being acquired, the possibility of pairing or other temporal clustering.
Figure 10 shows additional examples. The second event in Fig. 10a is noticeably asymmetric, with a slow dip but a fast recovery. This could be the result of a change in wind speed involving a slow approach and a faster departure for a dust devil of constant intensity, or a case where, for instance, a dust devil suddenly "dies" soon after closest approach. Again, the methodology described in the present paper will permit enough data to be acquired to determine whether such asymmetry (again, noted first by Sinclair, 1973) is random or whether, for example, the pressure decline on the leading face of a devil is shallower than on the trailing side. This example also underscores the potential for future measurements with an array of loggers to resolve the temporal-spatial ambiguity.
The third example in Fig. 10 is the most intense encounter observed so far, 1.5 mbar deep. Interestingly, it seems to have a distinct broad ramp on either side, a few tens of seconds wide, with a sharp central dip.
Finally, Fig. 11 shows a dual dust devil encounter as observed with three different stations. Stations P15 and P16 were about 50 m apart, whereas P10 was about 300 m to the south, which is the direction from which the prevailing wind usually blows. This seems consistent with the closest approach (assumed to be the time of deepest pressure drop) being ∼90 s earlier in P10 than in P15/16, suggesting a migration speed of 3-4 m s−1. The encounters with P10 were evidently closer than with the other two and/or the devils were more intense. Since the P15 dip appears deeper than P16, at least for the first devil, it seems likely that the devil passed closer to P15 than to P16.

Fig. 11. A pair of dust devils, about 8 min apart, observed in all three loggers (shifted to show the coherence of the records). The bottom two plots are essentially synchronous, but the first plot is about 90 s earlier in time. Note that the second devil is still detectable at the ∼0.5 mbar level in the last record (P16) despite the ∼0.25 mbar sawtooth noise (uncorrected in this plot). The second devil evidently passed closest to P10, where the pressure drop is both sharpest in structure and deepest (∼1.5 mbar); an 8.98 s gap exists in the data at 3469.496 s, in the middle of the dust devil encounter, suggesting possible electrostatic interference.
Both dust devils, in all three records, appear to be asymmetric, with a slower dip and a fast recovery. The possibility (discussed above) that this could be caused by the "death" of the dust devil after the P10 passage appears to be excluded in this instance, in that an apparently correlated encounter with P15/16 occurs; a random encounter with a new devil appears unlikely. It is clear that multiple stations offer interesting prospects for array study, and even with only three stations, important additional information is obtained that can address ambiguities inherent in single-station records.
Note that there is some evidence of a dual dip in the second devil, perhaps indicating the core. Interestingly, the P16 record has a data gap here where no data were written (while data gaps do occur briefly when a new datafile is being opened, in this instance the gap appears in the middle of a file). It is possible that some electrostatic discharge effect was responsible.
A final observation is noted here, which, while not a direct dust devil observation, may be related. A pseudoperiodic pressure cycle often emerges in the afternoon, with a characteristic period of the order of 1000 s (see Fig. 12). A similar periodicity was noted by Renno et al. (2004) in soil heat flux data in a dust devil field campaign in Eloy, Arizona, in May 2002. Variations of about 30 % in the soil heat flux were noted with a period of about 20 min (see their Fig. 5 - although the text and caption say 30 min, the peak spacing is more accurately expressed as 20 min). Renno et al. (2004) speculate that a radiative feedback may be involved, wherein strong surface heating prompts vigorous convection and dust-raising: dust scatters and absorbs sunlight, reducing surface heating and stabilizing the atmosphere to suppress convection; once the dust settles out or is advected away, surface heating begins again. The coincidence of the 20 min period here with that of Renno et al. (2004) is intriguing. The phenomenon will be investigated further with a more robust dataset.
Conclusions and future work
The unit investigated here is affordable enough ($120) to consider deploying in arrays to measure the two-dimensional structure of convection, or (with wider spacing) simply to increase the number of dust devil encounters obtained. The unit cost is modest enough compared with the costs of deployment that it can be considered partially expendable (i.e., some attrition of a deployed set can be tolerated with useful data still obtained).
The data storage capability of the units permits several months of data at a high sample rate (10 Hz) or a year or more at a lower rate.The default AA battery installation, however, permits less than a week at this rate.The power supply modifications described in this paper enable much longer-term operation than the nominal configuration, and thus permit a larger amount of data to be acquired, permitting (for example) robust statistics over several months to be obtained from a single deployment in the field.
Our initial surveys highlighted that solar-powered installations may suffer higher attrition than more covert battery-powered units. Additionally, the importance of temperature correction of the pressure data is noted: specifically, the temperature of the sensor should be logged (and thus the correction updated) at a cadence short compared with the duration of the events being searched for. Although a data volume penalty results, this cadence should be as high as possible, once a second or better. We have described, however, how data degraded by less frequent (∼50 s) updates can be substantially recovered by post-processing.
We have identified a number of dust devil signatures in the data, finding that dust devils with pressure drops of ∼0.3 mbar or larger are easily identified, and so far one 1.5 mbar event has been seen. Although this is not as deep as the largest drops measured by vehicle-borne instruments penetrating the core of devils, the record is unbiased by vehicle movement and unaffected by engine vibration. One double-devil event is detected by three different stations, showing the potential for array measurements. An interesting periodic pressure fluctuation has been noted in the convectively-active afternoon, comparable with a similar temperature/flux variation noted by other workers at dust devil sites.

In future work we will examine the data from ongoing field measurements, using both manual and automated methods, to derive an in-situ terrestrial dust devil census comparable with those done at Mars. Such analysis will need to take into account a time-variable noise background; with the improved temperature sampling it seems likely that a detection limit of better than 0.2 mbar may be achieved. The large number of encounters expected will permit statistically robust studies of periodicity/clustering, and an evaluation of the size statistics (Lorenz, 2012). Data from sensors such as these can be augmented by comparison with other meteorological variables such as wind speed.
Fig. 1. Datalogger with the casing partly removed. The USB connector is at right, and the AA battery terminals are visible at right and left (with user-supplied wires soldered there to provide an alternative power supply in this application). The pressure transducer is the silver square at the lower edge of the board. The memory, microcontroller, and other components are on the other side of the circuit board.
Fig. 2. Discharge curve of a Duracell alkaline AA cell (50 mA constant current) from the manufacturer's datasheet (circles), compared with a simple analytic model C = C_o/[1 + exp((V − V_o)/dV)] with C_o = 2.2 A h, V_o = 1.25 V, and dV = 0.07 V (solid line). Where this curve intersects the voltage threshold of the logger (dotted line), the unit stops - in this case (threshold = 1.25 V) after a discharge of about 1 A h - even though substantial capacity remains. One mitigation is to put two cells in series (dashed curve) - here the curve intersects the threshold after some 2.2 A h. The rather shallow discharge curve of alkaline cells means that if the threshold is even slightly higher, a significant drop in useful battery capacity results.
Fig. 3. Sand-colored waterproof food container used to house two D cells and the pressure logger for field measurements.
Fig. 4. The playa (about 6 km long) seen from a commercial airliner approaching Las Vegas. The approximate measurement site is arrowed. Highway I-95 runs roughly north-south. The blueish rectangular feature at left is a solar power facility.
Fig. 6. Battery voltage at the start of each datafile recorded by the three sensors in April 2012. Note the steep initial decline (compare with Fig. 2) for the top two alkaline-only units, and the slower decline for P16, with the lower sample rate. The lowest unit shows a generally uniform diurnal cycle, although peak voltage dropped on days 9 and 10 due to clouds.
Fig. 7. Example data from a logger ("P12") during a follow-up measurement campaign, showing about 12 h of data. (a) Sensor temperature rises throughout the day to a peak (in the shade) of 40 °C. Note that dips in temperature occur at ∼5000 s intervals. The timestamp on the data files is Eastern Daylight Time (from the author's laptop), which is 3 h ahead of local time (Pacific); thus, the record shown here begins at 08:38 a.m. LT (local time). (b) Pressure record, with an initial offset subtracted. Fluctuations of ∼10 Pa are seen throughout, but some isolated bursts of ∼100 Pa noise are seen, correlating with the temperature dips; the instance at 27 000 s is shown in more detail in Fig. 8.
Fig. 8. The 2000 s of data from Fig. 7, showing (a) the temperature evolution (note the 0.1 °C quantization) and (b) the temperature increment between successive samples (acquired at 100 pressure-sample intervals, or 50 s). The pressure readings (c) show a ∼100 Pa sawtooth noise due to improper temperature correction; the sawtooth amplitude and sign correlate rather well with the temperature derivative. (d) A post-processing correction algorithm subtracts a sawtooth correction (i.e., a cyclic ramp with a period of 50 s, synchronized to the temperature reading) with an amplitude proportional to the temperature derivative in (c): the noise is reduced by a significant factor (∼3-4) to ∼20-30 Pa.
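A minimal sketch of the sawtooth subtraction described in the Fig. 8d caption follows; the array names and the proportionality constant k are assumptions to be fitted to the actual data, not values from the paper.

```python
import numpy as np

def desawtooth(pressure, temperature, samples_per_update=100, k=1.0):
    """Subtract a cyclic ramp (period = one temperature-update interval)
    whose amplitude is proportional to the temperature change at each
    update, approximating the post-processing correction of Fig. 8d."""
    p = np.asarray(pressure, dtype=float).copy()
    t = np.asarray(temperature, dtype=float)
    n = samples_per_update
    # Phase of each sample within the 50 s update cycle, ramping 0 -> 1.
    ramp = (np.arange(p.size) % n) / n
    for i in range(n, p.size, n):
        d_temp = t[i] - t[i - n]                 # temperature increment (Fig. 8b)
        block = slice(i, min(i + n, p.size))
        p[block] -= k * d_temp * ramp[block]     # remove the sawtooth ramp
    return p
```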
Fig. 10. Further dust devil signatures. (a) A 0.4 mbar, 60 s event and a 0.6 mbar, 100 s event, spaced by 700 s. (b) A 1 mbar event, 500 s long, with an apparent double dip in pressure, perhaps due to a core wandering in a cycloidal pattern. (c) The largest event noted so far, 1.5 mbar deep and 100 s wide, with a distinct 0.6 mbar, 10 s-wide core. Note each plot is scaled differently. Data are uncorrected for sawtooth effects.
Fig. 12. Uncorrected pressure record over daylight hours. Beginning around 13 000 s (i.e., around local noon), an oscillation in pressure develops with a period of about 1000 s, and is seen most prominently in the middle of the center panel at around 14:00 LT, when dust devil activity is near its peak. In this uncorrected data, the periodic pressure cycle is somewhat highlighted by the sawtooth due to correlated temperature changes.
Table 1. B1100-1 current consumption for one fresh alkaline cell and two cells in series.
"Physics",
"Environmental Science"
] |
Including Facial Expressions in Contextual Embeddings for Sign Language Generation
State-of-the-art sign language generation frameworks lack expressivity and naturalness, as a result of focusing only on manual signs and neglecting the affective, grammatical, and semantic functions of facial expressions. The purpose of this work is to augment the semantic representation of sign language through the grounding of facial expressions. We study the effect of modeling the relationship between text, gloss, and facial expressions on the performance of sign generation systems. In particular, we propose a Dual Encoder Transformer able to generate manual signs as well as facial expressions by capturing the similarities and differences found in text and sign gloss annotation. We take into consideration the role of facial muscle activity in expressing the intensities of manual signs by being the first to employ facial action units in sign language generation. We perform a series of experiments showing that our proposed model improves the quality of automatically generated sign language.
Introduction
Communication between Deaf and Hard of Hearing (DHH) people and hearing non-signing people may be facilitated by emerging language technologies. DHH individuals are medically underserved worldwide (McKee et al., 2020; Masuku et al., 2021) due to the lack of doctors who can understand and use sign language. Educational resources that are available in sign language are also limited, especially in STEM fields (Boyce et al., 2021; Lynn et al., 2020). Although the Americans with Disabilities Act (United States Department of Justice, 2010) requires government services, public accommodations, and commercial facilities to communicate effectively with DHH individuals, the reality is far from ideal. Sign language interpreters are not always available, and communicating through text is not always feasible, as written languages are completely different from signed languages.
In contrast to Sign Language Recognition (SLR) which has been studied for several decades (Rastgoo et al., 2021) in the computer vision community (Yin et al., 2021), Sign Language Generation (SLG) is a more recent and less explored research topic (Quandt et al., 2021;Cox et al., 2002;Glauert et al., 2006).
Missing a rich grounded semantic representation, the existing SLG frameworks are far from generating understandable and natural sign language. Sign languages use spatiotemporal modalities and encode semantic information in manual signs and also in facial expressions. A major focus in SLG has been put on manual signs, neglecting the affective, grammatical, and semantic roles of facial expressions. In this work, we bring insights from computational linguistics to study the role of facial expressions in automated SLG. Apart from using facial landmarks encoding the contours of the face, eyes, nose, and mouth, we are the first to explore the use of facial Action Units (AUs) to learn semantic spaces or representations for sign language generation.
In addition, with insights from multimodal Transformer architecture design, we present a novel model, the Dual Encoder Transformer for SLG, which takes as input spoken text and glosses, computes the correlation between both inputs, and generates skeleton poses with facial landmarks and facial AUs. Previous work used either gloss or text to generate sign language, or used text-to-gloss (T2G) prediction as an intermediary step (Saunders et al., 2020). Our model architecture, on the other hand, allows us to capture information otherwise lost when using gloss only, and captures differences between text and gloss, which is especially useful for highlighting adjectives otherwise lost in gloss annotation. We perform several experiments using the PHOENIX14T weather forecast dataset and show that our model performs better than baseline models using only gloss or text.

Figure 1: Sign language uses multiple modalities, such as hands, body, and facial expressions, to convey semantic information. Although gloss annotation is often used to transcribe sign language, the above examples show that meaning encoded through facial expressions is not captured. In addition, the translation from text (blue) to gloss (red) is lossy, even though sign languages have the capability to express the complete meaning of the text. The lower example shows lowered brows and a wrinkled nose adding the meaning of kräftiger (heavy), present in the text, to the RAIN sign.
In summary, our main contributions are the following: • Novel Dual Encoder Transformer for SLG which captures information from text and gloss, as well as their relationship to generate continuous 3D sign pose sequences, facial landmarks, and facial action units.
• Use of facial action units to ground semantic representation in sign language.
Background and Related Work
More than 70 million Deaf and Hard of Hearing worldwide use one of 300 existing sign languages as their primary language (Kozik, 2020). In this section, we explain the linguistic characteristics of sign languages, the importance of facial expressions to convey meaning, and elaborate on prior work in SLG.
Sign Language Linguistics
Sign languages are spatiotemporal languages and are articulated by using the hands, face, and other parts of the body, which need to be visible. In contrast to spoken languages, which are oral-aural languages, sign languages are articulated in front of the top half of the body and around the head. No universal method such as the International Phonetic Alphabet (IPA) exists to capture the complexity of signs. Gloss annotation is often used to represent the meaning of signs in written form. Glosses do not provide any information about the execution of the sign, only about its meaning. Even more, as glosses use written languages rather than the sign language itself, they are a mere approximation of the sign's meaning, representing only one possible transcription. For that reason, glosses do not always represent the full meaning of signs, as shown in Figure 1. Every sign can be broken into four manual characteristics: shape, location, movement, and orientation. Non-manual components such as mouth movements (mouthing), facial expressions, and body movements are other aspects of sign language phonology. In contrast to spoken languages, where vowels and consonants occur sequentially, the phonological components of signing occur simultaneously. Although the vocabulary size of ASL in dictionaries is around 15,000 (Spread the Sign, 2017), compared to approximately 170,000 words in spoken English, the simultaneity of phonological components allows for a wide range of signs to describe slight differences of the same gloss. While in English various words describe largeness (big, large, huge, humongous, etc.), in ASL there is one main sign for "large": BIG. However, through modifications of facial expressions, mouthing, and the size of the sign, different levels of largeness can be expressed, just as in a spoken language (Grushkin, 2017). To communicate spoken concepts without a corresponding sign, fingerspelling (a manual alphabet) is sometimes used (Baker et al., 2016).
Grammatical Facial Expressions
Facial expressions are grammatical components of sign languages that encode semantic representations and whose exclusion leads to loss of meaning. Facial expressions have a particularly important role in distinguishing different types of sentences, such as WH-questions, Yes/No questions, doubt, negations, affirmatives, conditional clauses, focus, and relative clauses (da Silva et al., 2020). The following example shows how the same gloss order can present a question or an affirmation (Baker et al., 2016):

Example 1 Indopakistani Sign Language
a) FATHER CAR EXIST. "(My) father has a car."
b) FATHER CAR EXIST? "Does (your/his) father have a car?"

In this example, what makes sentence b) a question are raised eyebrows and a forward and/or downward movement of the head/chin in parallel to the manual signs.

Figure 2: Examples of different facial Action Units (AUs) (Friesen and Ekman, 1978) from the lower face relevant to the generation of mouthings in sign languages. AUs can occur with different intensity values between 0 and 5. AUs have been used in psychology and in affective computing to understand emotions expressed through facial expressions. Image from (De la Torre and Cohn, 2011).

In addition, facial expressions can differentiate the meaning of a sign, assuming the role of a quantifier. Figure 1 shows different signs for the same gloss, REGEN (rain). We can observe from the text transcript (in blue) that the news anchor says "rain" in the upper example but "heavy rain" in the lower. This example shows how gloss annotations are not perfect transcriptions of sign languages, as they only convey the meaning of the manual aspect of the signs. Information conveyed through facial expressions to show intensities is not represented in gloss annotation. To quantify the loss of information that occurs in gloss annotation, we used Spacy (Honnibal and Montani, 2017) to compute the Part-of-Speech (POS) annotation for text and gloss. In Table 1, the occurrences of nouns, verbs, adverbs, and adjectives are shown for text and gloss over the entire dataset. We can see that although gloss annotations have lower occurrences for all POS, the difference is statistically significant for adjectives with p < 0.05. To calculate this significance, we performed hypothesis testing with two proportions by computing the Z score. We used t-tests to determine the statistical significance of our model's performance.
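As an illustration of the two-proportion hypothesis test described above, the following sketch computes the Z score for adjective rates in text versus gloss; the counts used here are placeholders, not the paper's actual numbers.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z score for H0: p1 == p2, using the pooled-proportion estimate."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: adjectives among all text tokens vs. gloss tokens.
z = two_proportion_z(x1=4200, n1=100000, x2=3400, n2=90000)
print(f"z = {z:.2f}")  # |z| > 1.96 corresponds to p < 0.05 (two-sided)
```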
Sign Language Generation
Several advances in generating sign poses from text have been recently achieved in SLG; however, there is limited work that considers the loss of semantic information when using gloss to generate poses and aligned facial expressions. Previous work has generated poses by translating text-to-gloss (T2G) and then gloss-to-pose (G2S), or by using either text or gloss as input (Stoll et al., 2020; Saunders et al., 2020). We propose a Dual Encoder Transformer for SLG which trains individual encoders for text and gloss, and combines the encoders' outputs to capture similarities and differences. In addition, the majority of previous work on SLG has focused mainly on manual signs (Stoll et al., 2020; Saunders et al., 2020; Zelinka and Kanis, 2020; Saunders et al., 2021b). Saunders et al. (2021a) are the first to generate facial expressions and mouthing together with hand poses. The representation used for the non-manual channels is the same as for the hand gestures, namely coordinates of facial landmarks. In this work we explore the use of facial Action Units (AUs) (see Figure 2), which represent intensities of facial muscle movements (Friesen and Ekman, 1978). Although AUs have been primarily used in tasks related to emotion recognition (Viegas et al., 2018), recent works have shown that AUs help detect WH-questions, Y/N questions, and other types of sentences in Brazilian Sign Language (da Silva et al., 2020).

Figure 3: Our proposed model architecture, the Dual Encoder Transformer for Sign Language Generation. Our architecture is characterized by using two encoders, one for text and one for gloss annotation. The use of two encoders allows the outputs of both to be multiplied, emphasizing their differences and similarities. In addition to using skeleton poses and facial landmarks, we include facial action units (Friesen and Ekman, 1978).
Sign Language Dataset
In this work we use the publicly available PHOENIX14T dataset (Camgoz et al., 2018), frequently used as a benchmark dataset for SLR and SLG tasks. The dataset comprises a collection of weather forecast videos in German Sign Language (DGS), segmented into sentences and accompanied by German transcripts from the news anchor and sign-gloss annotations. PHOENIX14T contains videos of 9 different signers with 1066 different sign glosses and 2887 different German words. The video resolution is 210 by 260 pixels per frame at 30 frames per second. The dataset is partitioned into training, validation, and test sets with 7,096, 519, and 642 sentences, respectively.
Methods: Dual Encoder Transformer for Sign Language Generation
In this section, we present our proposed model, the Dual Encoder Transformer for Sign Language Generation. Given the loss of information that occurs when translating from text to gloss, our novel architecture takes into account the information from text and gloss, as well as their similarities and differences, to generate sign language in the form of skeleton poses and facial landmarks, as shown in Figure 3. For that purpose, we learn the conditional probability p(Y | X, Z) of producing a sequence of signs Y = (y_1, ..., y_T) with T frames, given the text of a spoken language sentence X = (x_1, ..., x_N) with N words and the corresponding glosses Z = (z_1, ..., z_U) with U glosses.
Our work is inspired by the Progressive Transformer (Saunders et al., 2020) which allows translation from a symbolic representation (words or glosses) to a continuous domain (joint and face landmark coordinates), by employing positional encoding to permit the processing of inputs with varied lengths. In contrast to the Progressive Transformer which uses one encoder to use either text or glosses to generate skeleton poses, we employ two encoders, one for text and one for glosses, to capture information from both sources, and create a combined representation from the encoder outputs to represent correlations between text and glosses. In the following we will describe the different components of the dual encoder transformer.
Embeddings
As our input sources are words, we need to convert them into numerical representations. Similar to transformers used for text-to-text translation, we use word embeddings based on the vocabulary present in the training set. As we are using two encoders to represent similarities and differences between text and glosses, we use one word embedding based on the vocabulary of the text and one based on the vocabulary of the glosses. We also experiment with using the text word embedding for both encoders. Given that our target is a sequence of skeleton joint coordinates, facial landmark coordinates, and continuous values of facial AUs with varying length, we use counter encoding (Saunders et al., 2020). The counter c varies between [0, 1] with intervals proportional to the sequence length; it allows the generation of frames without an end token. The target joints m_t are defined by concatenating each target pose y_t with its counter value c_t, i.e., m_t = [y_t, c_t], and are then passed to a continuous embedding, which is a linear layer.
Dual Encoders
We use two encoders, one for text and one for gloss annotations. Both encoders have the same architecture. They are composed of L layers, each with one Multi-Head Attention (MHA) and a feed-forward layer. Residual connections (He et al., 2016) are applied around each of the two sublayers, followed by layer normalization (Ba et al., 2016). MHA uses multiple projections of scaled dot-products, which permits the model to associate each word of the input with every other. The scaled dot-product attention outputs a vector of values, V, weighted by queries, Q, keys, K, and dimensionality, d_k:

Attention(Q, K, V) = softmax(QK^T / √d_k) V.

Different self-attention heads are used in MHA, which allows the model to generate parallel mappings of Q, V, and K with different learnt parameters.
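A minimal NumPy sketch of the scaled dot-product attention above; the shapes and names are illustrative rather than the paper's implementation.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (n_q, n_k) similarities
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # weighted sum of values

# Toy example: 3 queries attending over 4 keys/values of width d_k = 8.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 8)), rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)
```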
The outputs of MHA are then fed into a nonlinear feed-forward projection. In our case, where we employ two different encoders, their outputs can be formulated as h_{1:N} for the text encoder and h_{1:U} for the gloss encoder, with h_n being the contextual representation of the source sequence, N the number of words, and U the number of glosses in the source sequence.
As we want to use not only the information encoded in text and gloss but also their relationship, we combine the output of both encoders with a Hadamard multiplication. As N ≠ U, we stack h_n vertically U times and stack h_u vertically N times in order to obtain two matrices with the same dimensions; we then multiply both matrices with the Hadamard product. The Hadamard product is an element-wise multiplication of two matrices, where a_{i,j} and b_{i,j} are multiplied together to give a_{i,j}·b_{i,j}. This combines the output vectors from the text encoder with the output vectors from the gloss encoder.
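One plausible reading of the stacking step is sketched below in NumPy; the exact repeat/tile ordering is an assumption, since the text does not spell it out.

```python
import numpy as np

def combine_encoders(H_text, H_gloss):
    """Hadamard combination of text encoder output (N, d) and gloss
    encoder output (U, d): each of the N text rows is paired with each
    of the U gloss rows, giving an (N*U, d) joint representation."""
    N, d = H_text.shape
    U, _ = H_gloss.shape
    text_stacked = np.repeat(H_text, U, axis=0)   # each row repeated U times
    gloss_stacked = np.tile(H_gloss, (N, 1))      # full block repeated N times
    return text_stacked * gloss_stacked           # element-wise (Hadamard)

H_text, H_gloss = np.ones((5, 16)), np.ones((7, 16)) * 2.0
print(combine_encoders(H_text, H_gloss).shape)    # (35, 16)
```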
Decoder
Our decoder is based on the progressive transformer decoder (DPT), an auto-regressive model that produces continuous sequences of sign poses together with the previously described counter value (Saunders et al., 2020). In addition to producing sign poses and facial landmarks, our decoder also produces 17 facial AUs. The counter-concatenated joint embeddings, ĵ_u, which include manual and facial features (facial landmarks and AUs), are used to represent the sign pose of each frame. Firstly, an initial MHA sub-layer is applied to the joint embeddings, similar to the encoder but with an extra masking operation. The masking of future frames is necessary to prevent the model from attending to future time steps. A further MHA mechanism is then used to map the symbolic representations from the encoder to the continuous domain of the decoder. A final feed-forward sub-layer follows, with each sub-layer followed by a residual connection and layer normalization, as in the encoder. The output of the progressive decoder can be formulated as [ŷ_u, ĉ_u] = DPT(ĵ_{1:u−1}), where ŷ_u corresponds to the 3D joint positions, facial landmarks, and AUs representing the produced sign pose of frame u, and ĉ_u is the respective counter value. The decoder learns to generate one frame at a time until the predicted counter value, ĉ_u, reaches 1. The model is trained using the mean squared error (MSE) loss between the predicted sequence, ŷ_{1:U}, and the ground truth, y*_{1:U}:

L_MSE = (1/U) Σ_{u=1}^{U} (y*_u − ŷ_u)².
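A schematic of the counter-terminated autoregressive loop described above; `decoder_step` stands in for the trained progressive decoder and is an assumption.

```python
def generate_sequence(decoder_step, first_frame, max_frames=500):
    """Autoregressively produce frames until the predicted counter
    value reaches 1.0 (or a safety limit is hit)."""
    frames, counter = [first_frame], 0.0
    while counter < 1.0 and len(frames) < max_frames:
        pose, counter = decoder_step(frames)  # (joints+landmarks+AUs, c_u)
        frames.append(pose)
    return frames
```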
Computational Experiments

Features
We extract three different types of features from the PHOENIX14T dataset: skeleton joint coordinates, facial landmark coordinates, and facial action unit intensities. We use OpenPose (Cao et al., 2019) to extract skeleton poses from each frame and use for our experiments the coordinates of 50 joints representing the upper body, arms, and hands, which we refer to as "manual features". We also use OpenFace (Baltrusaitis et al., 2018) to extract 68 facial landmarks as well as the 17 facial action units (AUs) shown in Figure 2, which together constitute the "facial features".
Baseline Models
We will compare the performance of our proposed model (TG2S) with two Progressive Transformers (Saunders et al., 2020), one using gloss only to produce sign poses (G2S), and one that uses text only (T2S). We train each model only with manual features and also with the combination of manual and facial features through concatenation.
Evaluation Methods
In order to automatically evaluate the performance of our model and the baseline models, we use back translation, as suggested by Saunders et al. (2020). For that purpose we use the Sign Language Transformer (SLT) (Camgoz et al., 2020), which translates sign poses into text and computes BLEU and ROUGE scores between the translated text and the original text. As the original SLT was designed to receive video frames as input, we modified the architecture to enable the processing of skeleton poses and facial features. When facial AUs are added to the hands, body, and face features, the difference from using manual data only is slightly lower, with a BLEU-4 of 10.61. In Table 3, the results of using hands and body joint skeletons as the sole input to the baseline models and our proposed model are shown. We can see that our proposed model TG2S shows the highest BLEU-4 score of 8.19 on the test set, compared to 7.84 for G2S and 7.56 for T2S. Table 4 presents the results of including facial landmarks as well as facial AUs with body and hands skeleton joints as input. Also here we can see that our proposed model outperforms the baseline models, showing a BLEU-4 score of 5.76 on the test set; G2S obtained a BLEU-4 score of 6.37 and T2S 5.53.
Quantitative Results
We see in Tables 3 and 4 that G2S obtained higher scores than T2S. Given that gloss annotations fail to encode the richness of meaning in signs, it appears the smaller vocabulary helps the model achieve higher scores by neglecting information otherwise described in text. Our proposed model is able to obtain better results than G2S by making a compromise, using information from gloss, text, and their similarities and differences. We can also see in both tables that the inclusion of facial information reduces the overall scores. We believe that this might be due to the diverse range of facial expressions possible. We cannot directly compare the results of Tables 3 and 4, as different SLT models were used to compute the BLEU scores. Figure 4 shows the visual quality of our model's predictions when using manual and facial information. Both examples show that the predictions captured the hand shape, orientation, and movement from the ground truth. In the bottom example for RAIN, the predictions were even able to capture the repetitive hand movement symbolizing falling rain. What can also be noted is that the ground truth is not perfect.
Qualitative Results
In both examples, unnatural finger and head postures can be seen. In addition, the ground truth does not display movements of the eyebrows and mouth at the expected intensities. Figure 5 shows situations in which the predictions failed to represent the correct phonology of signs. In the first example we see that hand shape, orientation, and position are not correct. The predictions of our models also fail to capture pointing hand shapes, as shown in example 2.
Discussion and Conclusion
In this work, for the first time, we attempt to augment contextual embeddings for sign language by learning a joint meaning representation that includes fine-grained facial expressions. Our results show that the proposed semantic representation is richer and linguistically grounded. Although our proposed model helped bridge the loss of information by taking into account text, gloss, and their similarities and differences, there are still several challenges to be tackled by a multidisciplinary scientific community.
Complex hand shapes with pointing fingers are very challenging to generate. The first step toward improving the generation of the fingers is improving methods to recognize finger movements more accurately. Similarly, we need tools that are more robust in detecting facial expressions, even in situations of occlusion. We also realize that SLG models are overfitting to specific sign languages instead of learning generalized representations of signs.
We chose to work with a German sign language dataset since that is the only dataset with gloss annotation that could help us study our hypotheses. The How2Sign dataset (Duarte et al., 2021) is a feasible dataset for ASL, but it does not allow any model to extract facial landmarks, facial action units, or facial expressions from the original video frames, since the faces are blurred. In the future, we hope to see new datasets with better and more diverse annotations for different sign languages that would allow the design of natural and usable sign language generation systems.

Figure 4: The upper example shows that the predictions captured the correct hand shape, orientation, and movement of the sign CLOUD. In the lower example it is visible that the predictions captured the repeating hand movement meaning RAIN. Although at first glance the hand orientation seems incorrect, it is a slight variation which is still correct.
"Computer Science"
] |
3-({[(1-Phenylethyl)sulfanyl]methanethioyl}sulfanyl)propanoic acid
In the title compound, C12H14O2S3, a chain transfer agent (CTA) used in polymerization, the dihedral angle between the aromatic ring and the CS3 grouping is 84.20 (10)°. In the crystal, carboxylic acid inversion dimers linked by pairs of O—H⋯O hydrogen bonds generate R₂²(8) loops.
Comment
The title compound, C12H14O2S3, is a carbonotrithioate. It can be used as a chain transfer agent (CTA) in RAFT polymerization (Chong et al., 1999) to control the polymerization, and it will produce carbonotrithioate end-terminated polymers.
Very few single-crystal XRD data are available for CTAs, because most of them are liquids (Coady et al., 2008). Recently, we have reported the single-crystal data of a multi-functional CTA, which can be used for the synthesis of star polymers.
Carbonotrithioate CTAs are suitable for the polymerization of styrene, acrylates, and methacrylates. With an appropriate choice of the CTA (RAFT agent) and reaction conditions, RAFT polymerization can be successfully used to produce polymers of narrow polydispersity with predetermined molecular weights. Moreover, the polymers obtained by the RAFT process can be chain extended or used as precursors to synthesize stimuli-responsive block copolymers by the addition of further monomer(s). The title compound will result in a carboxylic acid end-terminated polymer; this functionality can be further modified and utilized for making block copolymers by reacting it with another homopolymer.
The compound C12H14O2S3 is stabilized by an O—H⋯O interaction with an R₂²(8) graph-set motif.
Experimental
The title compound was prepared by adding 3-mercaptopropanoic acid (1.00 g, 7.35 mmol) to a stirred suspension of K3PO4 (1.72 g, 8.09 mmol) in acetone (20 ml) over a period of ten minutes. CS2 (1.68 g, 22.06 mmol) was added, upon which the solution turned bright yellow. After stirring for ten minutes, (1-bromoethyl)benzene (1.26 g, 7.35 mmol) was added and an instant precipitation of KBr was noted. After stirring for three hours, the suspension was filtered and the cake was rinsed with acetone (2 × 20 ml). After removing the solvent from the filtrate under reduced pressure, the resulting yellow residue was purified by column chromatography on silica using a petroleum ether/ethyl acetate gradient to yield a light yellow solid (96%) that crystallized to form light yellow blocks.
Refinement
All hydrogen atoms were fixed geometrically and allowed to ride on the parent carbon atoms, with aromatic C-H = 0.93 Å, methyl C-H = 0.96 Å, and methylene C-H = 0.97 Å. The displacement parameters were set for phenyl and methylene H atoms at Uiso(H) = 1.2Ueq(C) and for methyl H atoms at Uiso(H) = 1.5Ueq(C).
"Chemistry"
] |
DAWM: Cost-Aware Asset Claim Analysis Approach on Big Data Analytic Computation Model for Cloud Data Centre
Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38544, Republic of Korea RLRC for Autonomous Vehicle Parts and Materials Innovation, Yeungnam University, Gyeongsan 38544, Republic of Korea Department of Computer Science and Engineering, Velagapudi Ramakrishna Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India Department of Computer Science and Engineering, Indian Institute of Information Technology, Kalyani, West Bengal 741235, India Department of Computer Science, CHRIST University, Bengaluru 560029, India Department of Computer Science, Shah Abdul Latif University, Khairpur, Sindh 66020, Pakistan Department of Computer Engineering, Faculty of Engineering and Architecture, Istanbul Gelisim University, Avcılar, Istanbul 34310, Turkey
Introduction
Nowadays, cloud computing has become a backbone for government, enterprise, and education sectors by providing continuous resource (memory, CPU, and bandwidth) allocation services to ensure application service reliability. The cloud service supplier shares the resources among end-users based on the cost function's value (CF) to meet the demanded system performance. Many service suppliers estimate the server cost based on the bandwidth usage rate (BUR) and the energy usage rate (EUR). As per the Gartner report, the cloud service provider (CSP) market would grow to approximately 331.2 billion dollars in 2022 [1]. The cloud global report [2] projects a 623.3-billion-dollar market in 2023 for data computation. The statistical analysis states that cloud computing has a notable impact on the Internet of Things (IoT), blockchain, and soft computing measurement systems with artificial intelligence models. The tasks are divided into subtasks with relative attribute definitions through DAG theory. The DAG approach shows a prominent impact when dealing with complex workflow applications such as systematic mathematical applications [3][4][5]. Data analytic languages such as Hive and Pig [6][7][8] handle the MapReduce model queries. Thus, the importance of DAG theory has changed tremendously over the past decade, since it influences the service execution time and resource usage. Therefore, this issue is formulated as NP-hard [9], and many heuristic approaches have resolved the same issue through resource usage consolidation [10][11][12].
Each machine provides a list of resource attributes (e.g., CPU, RAM size, and hard disc space) offered by the CSP. In our solution, the cloud resource cost is optimized by estimating user service demands (such as CPU, IOPS, memory, and storage). For instance, an online incremental learning method has been designed in [13][14][15] to estimate service completion time based on heuristic algorithms by allocating the arriving service requests to the correct VM. However, these approaches have not considered server size and machine resource usage rates, which causes performance delay. Therefore, our approach considers the CSC size, effective resource management of machines, and resource autoscaling methods, which are not present in state-of-the-art approaches. Several examinations have been carried out on designing effective resource allocation methods to reduce allocation cost while satisfying service request requirements. Most current studies [16] have not considered pricing models and data analysis models; some on-demand pricing models are considered with an inadequate measurement index. Several recent studies [17] recognize the importance of both on-demand data analytical models and reserved pricing models to minimize resource allocation costs. However, our solution assesses the server resource capacity rate, profit, and cost based on the data analysis model. The user service demand measurement algorithm is essential for profit maximization by autoscaling the resource allocation certainty.
The aim of our research work is to design a novel profit optimization model for CSPs to enhance their revenue maximization (RM) while maintaining a reliable quality of service (QoS). The profit optimization model must take into account active server count, cost, and speed to meet end-user satisfaction, which influences their service continuity. Without a precise profit optimization model, the profit, service quality, and revenue generation factors will all be affected. However, CSP revenue maximization has become a billion-dollar question in the competitive service computing market because of application tasks with heterogeneous resource requirements.
To address the listed issues, we develop two algorithms for profit maximization and adequate service reliability. First, a belief propagation-influenced cost-aware asset scheduling approach is derived based on the data analytic weight measurement (DAWM) model for effective performance and server size optimization. Second, the multiobjective heuristic user service demand (MHUSD) approach is formulated based on the CSP profit estimation model and the user service demand (USD) model with directed acyclic graph (DAG) phenomena for adequate service reliability. The DAWM model classifies prominent servers to preserve server resource usage and cost during an effective resource slicing process by considering each machine's execution factors (remaining energy, energy and service cost, workload execution rate, service deadline violation rate, cloud server configuration (CSC), service requirement rate, and service level agreement violation (SLAV) penalty rate). The MHUSD algorithm measures the user demand service rate and cost based on the USD and CSP profit estimation models by considering service demand weight, service tenant cost, and machine energy cost.
Key Contributions.
The trade-off between cost optimization and revenue maximization models is extensively examined in Section 2. Our manuscript's key contributions are summarized as follows: (1) We develop a data analytic weight measurement (DAWM) approach to optimize the service quality and price of the CSP during an effective resource slicing process by considering each machine's cost, revenue, and profit. (2) We develop a multiobjective heuristic user service demand (MHUSD) algorithm based on the CSP profit estimation model and the user service demand (USD) model to measure the user demand service rate and cost by considering service demand weight, service tenant cost, and machine energy cost. The MHUSD algorithm also considers the maximum bearable wait time of end-users to maximize CSP revenue and optimize operational energy cost. (3) Simulation results confirm the advantage of the proposed approaches, the enhancement rate of revenue, and the CSP's profit attributes. The impacts of the key mathematical factors are analyzed theoretically and practically. The remainder of the manuscript is organized as follows: Section 2 briefly explains the research gaps and problem statements of extant approaches. Section 3 describes the proposed system and its mathematical models with algorithms in detail. Section 4 evaluates the investigation outcomes, and Section 5 concludes the manuscript.
Related Work
This section describes the examination of related research work, which is classified into three areas: profit maximization, green data center, and graph theory-based task consolidation approaches.
Profit Maximization.
Several profit maximization methods have been proposed for the sustainability of green computing. We can observe the current scenario and requirement analysis of revenue in Figure 1. In [18], a broker management system has been designed to maximize the VM cost and minimize the user cost. The author formulates multiserver configuration cost as a profit maximization issue, and a heuristic method has been designed to solve this issue. The delay-sensitive workload dimensionality has been examined based on a novel online heuristic approach to optimize the system's cost and profit [19]. Subsequently, the offline issue is formulated as NP-hard, and it has been resolved by a linear programming concept. In [20], a dynamic cost charging method has been designed to fix specific prices for servers as per the resource demand. A pricing approach has been designed to regulate the prices dynamically as per the demand of each kind. In [21, 22], the service penalty is diminished and the profit enhanced by a VM replacement approach through a mixed-integer nonlinear program, which is NP-hard; subsequently, a novel heuristic method has been designed to optimize the penalties and profits. CSP profit maximization approaches have been extensively examined in this literature survey. In [23], the authors designed a stochastic programming scheme for the subscription of computing resources to maximize service providers' profit under user request uncertainty. In [24], a profit control policy has been designed to assess machine computing capacity, which decides how to maximize the service provider profit. In [25][26][27], an SLA-based resource allocation issue has been formulated with a profit maximization objective considering three dimensions (processing, storage, and communication). In [28], a service request (SR) distribution approach is designed to enhance the profit with the quality of service rate as per the service demand. In [29], the author has addressed the service provider revenue maximization issue by consolidating the service tenant cost and power consumption cost. A joint optimization scheduling model has been designed to manage delay-tolerant batch services based on pricing decisions to maximize service provider revenue [30]. In [31], the authors designed a model to maximize the service provider revenue based on the machine's tenant cost, resource demand size, and the application workload. A suitable online algorithm has been designed for the geo-distributed cloud with an adaptive VM resource cost scheme to maximize the service provider revenue [32]. The relationship between load balance, revenue, and cost has been studied to maximize the service provider revenue beyond state-of-the-art approaches [33]. In [34, 35], a virtual resource rental strategy has been designed based on tenant cost, task urgency, and task uncertainty to enhance provider profit.
A hill-climbing algorithm has been designed to estimate customer service satisfaction by analyzing demand marks and profit fluctuations [36]. It assesses customer satisfaction from the economic growth ratio by leveraging the cloud server configuration (CSC), task arrival rate, and profit fluctuations. Therefore, the CSC directly impacts the cloud user service satisfaction rate, and inadequate customer satisfaction also has a direct impact on the service request arrival rate. However, the lack of an accurate decision-making system and data analysis system affects the server's profit and performance cost. A profit estimation model has been designed by considering CSC, service requirement rate, SLA, SLAV penalty rate, energy cost, tenant cost, and current CSP margin profit [37]. A server task execution speed-based power usage model is also designed to assess the CSP profit.
Green Data Centre.
In [38], a mixed-integer linear program has been designed for resource allocation to optimize the data center cost and energy consumption. Green computing accomplishes the proficient processing and usage of assets by limiting the energy utilization. An enhanced ant colony approach for optimal VM execution has been developed to enhance energy utilization and to optimize the cost of the cloud environment [39][40][41][42]. The particle swarm optimization (PSO) approach resolves the task allocation issue by consolidating data center count and task demand. In distributed computing, the assets have to be scheduled effectively to achieve a high performance rate. Accordingly, the multitarget PSO approach remains preferable for enhancing the resource usage rates. Therefore, this approach effectively increases the usage of assets and lessens energy and makespan. The outcomes delineated that the proposed multiobjective particle swarm optimization (MOPSO) strategy performs quite beneficially compared with the concerned existing models. A VM scheduling approach has been designed based on multidimensional resource imperatives, for example, link capacity, to diminish the quantities of dynamic PMs and preserve energy utilization. The two-step heuristic approach resolves the VM scheduling through migration and VM positioning models [43, 44]. The designed method has consolidated the execution time compared with extant systems in a simulation platform. Asset overburdening is still an issue, and live relocation does not uphold the change of VM performance. In [45], the energy-aware asset allocation approach has been investigated to improve the energy productivity of a server farm without SLA negotiations. An asset scheduling strategy with a hereditary method has been proposed to improve the usage of assets and save the expense of energy in distributed computing [46, 47]. It utilizes a migration approach dependent on three load degrees (CPU usage, throughput of the network, and pace of disc I/O). The calculation succeeds in improving the usage of assets and saving energy, although the run-time asset scheduling cost is high. An energy preservation system is classified by assorting the assets into four distinct classifications (CPU, memory, storage, and networks). Additionally, the author designed a unique asset scheduling system dependent on cloud assets' energy streamlining with an assessment technique [48].
The study [49] evaluates every machine's fitness value, which helps assess the machine rank based on the performance and resource usage rate. However, the machine rank evolution process consumes more time, which influences the performance, and the task scheduling policy leads to a high performance cost. The complexity rate is high for large-scale frameworks.
Graph Theory-Based Resource/Task Scheduling.

A directed acyclic graph (DAG) has been used for task scheduling by considering PM capacity and task resource weight to formulate the issue [50]. Here, the X[i, j] matrix identifies the errand evolution time of all VMs under different instances. To address all these issues, we design a data analytic weight measurement (DAWM) approach to optimize a cloud service provider's quality and price during an effective resource slicing process by considering each machine's cost, revenue, and profit. Traditional DAG-based models do not iteratively consider the entire cost during the measurement of data analysis. Subsequently, we design a multiobjective heuristic user service demand (MHUSD) algorithm based on the CSP profit estimation model and the user service demand (USD) model to measure the user demand service rate and cost by considering service demand weight, service tenant cost, and machine energy cost.
DAWM System Model
A belief propagation-influenced data analysis model is designed for CSP profit maximization by formulating a DAG task and resource scheduling policy, as shown in Figure 2. The CSP receives a service request from the cloud user, and by default, the CSP has three service modes (on-demand, advanced reservation, and spot resource allocation), which help to slice the resources as per resource demand. For each received service request, the CSP assesses its demand, cost, performance, profit, and required server size factors. The CSP consolidates the overprovisioned machines by optimizing the service execution cost and machine asset usage. Cloud service suppliers run the data utility analytic method on machines to classify the high- and low-resource-usage-rate machines, preserve CDC usage and performance cost, and avoid instant repudiations/migrations.
It classifies adaptive servers after the first iteration by applying an exact data analytic weight measurement (DAWM) model. First, a belief propagation-influenced cost-aware asset scheduling approach based on the DAWM model effectively optimizes the performance cost and server size. The DAWM model classifies prominent servers to preserve server resource usage and cost during an effective resource slicing process by considering each machine's execution factors (remaining energy, energy and service price, workload execution rate, service deadline violation rate, cloud server configuration (CSC), service requirement rate, and service level agreement violation (SLAV) penalty rate). Second, the multiobjective heuristic user service demand (MHUSD) approach is processed based on the CSP profit estimation model and the user service demand (USD) model with directed acyclic graph (DAG) phenomena for adequate service reliability. The MHUSD algorithm prognosticates the user demand service rate and cost based on the USD and CSP profit estimation models by considering service demand weight, service tenant cost, and machine energy cost. The USD model estimates the resource service demand in order to estimate the profit and revenue gain and the system's performance cost.
The CSP profit estimation model helps assess the service profit by forecasting the server's performance cost, energy usage, and resource tenant cost. Each subsection below describes a subcomponent of the framework mathematically and theoretically.
Cloud Service Provider Model.
The CSP offers various services to cloud end-users. For instance, infrastructure is offered as a service, where the resources are provided as VMs to meet end-user satisfaction by running their applications. The user service request (USR) is submitted to the service provider, which runs a multiserver system to deliver the response for the received service requests. Consider a multiserver system (MSS) with N homogeneous servers of speed m, modeled as an M/M/N queuing system. Assume that the MSS framework receives user service requests at a rate u. The service time is v = x/m, where x refers to the instruction count required to execute the USR, with mean v̄ = x̄/m. The service rate of the USR is denoted as q = 1/v̄ = m/x̄. The server utilization rate, denoted Z, is estimated with equation (1):

Z = u·v̄/N = u·x̄/(N·m), (1)

where ρ_r refers to the probability that r service requests are executing at the server. If there are no tasks/service requests, the probability of zero service requests is

ρ_0 = [Σ_{r=0}^{N−1} (N·Z)^r/r! + (N·Z)^N/(N!·(1 − Z))]^{−1}.

Subsequently, ρ_b is the probability that newly arrived SRs must wait because the server system is busy executing assigned tasks:

ρ_b = ρ_N/(1 − Z),

where ρ_N refers to the probability of all N servers being occupied. The probability density function of the service waiting time d is defined with equation (5):

f_d(t) = (1 − ρ_b)·δ(t) + N·q·ρ_N·e^{−(1−Z)·N·q·t}, t ≥ 0. (5)

Figure 3 illustrates the DAG task classification and scheduling scheme, which is accomplished by evaluating the cost price per unit of the machine, magnified by the amount of time required for task completion.
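A small numerical sketch of the M/M/N quantities above (utilization, idle probability, and waiting probability); the input values are illustrative only.

```python
import math

def mmn_metrics(u, x_mean, m, N):
    """Utilization Z, idle probability rho_0, and waiting probability
    rho_b for an M/M/N queue with arrival rate u, mean task length
    x_mean (instructions), server speed m, and N servers."""
    q = m / x_mean                     # per-server service rate
    Z = u / (N * q)                    # utilization, must be < 1
    s = sum((N * Z) ** r / math.factorial(r) for r in range(N))
    tail = (N * Z) ** N / (math.factorial(N) * (1 - Z))
    rho_0 = 1.0 / (s + tail)
    rho_b = tail * rho_0               # Erlang-C: P(arrival must wait)
    return Z, rho_0, rho_b

print(mmn_metrics(u=8.0, x_mean=1.0, m=1.0, N=10))
```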
Service Level Agreement Model.
The SLA is a mechanism that maintains a trade-off between price and service quality between the end-user and the CSP. Here, the required service attribute x is executed within the response time T to meet the application deadline, where a is the service cost per unit, p is the penalty cost for any SLA violation, b is the constant weight of the SLA, and m_0 is the expected service processing speed. Three conditions are listed depending on the service waiting time d, with T = (d + x)/m:

(1) If d is lower than bc·m_0, the service is delivered with high quality and reliability.
(2) If d lies in the intermediate interval, the service quality is moderate.
(3) If d is longer than ((c/p) + (b/m_0) − (1/m))·x, then the service is free, because the service request waited a long time in the queue.

Equation (7) is used to assess the prognosticated service charge of the CSP based on five parameters: c, p, b, d, and m. Here, c refers to the service cost per unit, p refers to the SLAV penalty cost, m_0 refers to the expected service speed, b refers to the SLA constant weight, and d is the average service waiting time.
User Service Satisfaction Model.
User service satisfaction (USS) is estimated from two components: quality of service (QoS) and price of service (PoS). QoS describes the discrepancy between the user's expectation (how the service request should be served) and the user's perception (how the service is actually performed). The user's quality of service (η^sq_i(x, T)) is evaluated with equation (8). The expression η^tc_i = e^((S_ex − S_ac)/S_ex) is used to assess the price of service (PoS) in equation (9), where S_ex and S_ac denote the expected cost and the actual cost, respectively: (1) if S_ex = S_ac, then η^tc_i = 1, showing that there is no impact on user satisfaction; (2) if S_ex > S_ac, it leads to a higher service cost (η^tc_i < 1), and satisfaction decreases as the actual price increases; (3) if S_ex < S_ac, it leads to a lower service cost (η^tc_i > 1), and satisfaction increases as the actual price decreases. The USS (η^sa_i) is defined as the product of the price of service and the quality of service, i.e., the product of equations (8) and (9).
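A small sketch of the satisfaction bookkeeping above is given below; the exponential PoS form follows the expression quoted in the text, while the QoS value is treated as a plug-in input because equation (8) is not reproduced here.

```python
# Minimal sketch of the user-satisfaction bookkeeping described above.
import math

def price_of_service(S_ex, S_ac):
    # eta_tc = exp((S_ex - S_ac)/S_ex); equals 1 when expected == actual cost
    return math.exp((S_ex - S_ac) / S_ex)

def user_service_satisfaction(qos, S_ex, S_ac):
    # USS = QoS x PoS (product of equations (8) and (9))
    return qos * price_of_service(S_ex, S_ac)

print(user_service_satisfaction(qos=0.9, S_ex=10.0, S_ac=12.0))  # actual cost > expected
```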
User Demand Service Estimation Model and Algorithm.
The user service demand weight factor (η^expec_i,k) plays an essential role in optimizing the cost of the cloud service provider, and it is estimated with equation (11), where x refers to the list of service attributes, K_i refers to the service attribute weight, χ_k refers to the attribute perception, and c_k refers to the attribute expectation. The service demand is formulated as the product of the potential demand and the user service demand weight factor, and it is defined in equation (12), where α and β refer to the constant basic demand and the constant potential demand. Both values must be positive, i.e., α, β > 0.
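The following sketch shows one plausible reading of the demand model above, combining a constant basic demand with a potential demand scaled by the weight factor; since equations (11)-(12) are not reproduced in the extracted text, the functional forms and example values here are assumptions for illustration.

```python
# Hedged sketch: one plausible reading of the demand model above, not the
# paper's exact equations (11)-(12).
def demand_weight(K, chi, c):
    # weighted perception-vs-expectation ratio over the service attributes
    num = sum(k * x for k, x in zip(K, chi))
    den = sum(k * x for k, x in zip(K, c))
    return num / den

def service_demand(alpha, beta, eta_weight):
    assert alpha > 0 and beta > 0          # both constants must be positive
    return alpha + beta * eta_weight       # basic demand + weighted potential demand

eta = demand_weight(K=[0.5, 0.3, 0.2], chi=[0.8, 0.7, 0.9], c=[1.0, 1.0, 1.0])
print(service_demand(alpha=10.0, beta=50.0, eta_weight=eta))
```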
The MHUSD algorithm (Algorithm 1) assesses the user service demand. Lines 1-2 define the required parameters and attributes for estimating the user service demand. Line 4 assesses all the service attributes of the cloud service provider and also checks the CSP set. Line 5 assesses the lower and upper bound values, which should not be less than R. Line 6 estimates the median value of the service attribute demand. Line 7 assesses η^dema_i,x(u^k_m), which should not be less than 0; η^dema_i,x(u^k_m) refers to the user service demand of attribute k with the middle-range value, and the remaining two variables refer to the higher and lower values of the user service demand rate. Lines 12-15 update the concerned values at each iteration.
CSP Profit Estimation Model.
The CSP profit is assessed as the gap between the revenue gained by providing services to users and the monetary cost of processing user SRs. Equation (13) is defined as a function of the server number and server speed (i.e., N and m). The average revenue of the CSP is estimated as the product of the expected cost of an SR and the user service demand, where η^dema_i,k refers to the USD based on the user service attribute value. The CSP cost is defined as the paid infrastructure tenant cost plus the power cost of system operation, and it is assessed with equation (15). The server energy consumption is estimated with equation (17), where z refers to the server usage, Ω_nst refers to the dynamic power usage, and Ω_st refers to the static power usage. Assuming that ξ^n_s(t) refers to the energy usage cost at processing time t, the electricity bill ϖ(t) is defined accordingly. The CSP profit at time t is described as the revenue minus the rental and electricity costs, and it is estimated with equation (18):
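A minimal sketch of the profit bookkeeping above (revenue minus tenant and electricity costs, with a static-plus-dynamic power model) is given below; the linear power form mirrors the description of equation (17), but the specific rates and prices used here are assumptions for illustration only.

```python
# Minimal sketch: profit(t) = revenue(t) - (infrastructure tenant cost
# + electricity cost). Rates and prices below are assumed example values.
def server_power(z, omega_dyn, omega_static):
    return omega_static + z * omega_dyn          # Watts for one server

def csp_profit(revenue, tenant_cost, z, omega_dyn, omega_static,
               hours, price_per_kwh, n_servers):
    energy_kwh = n_servers * server_power(z, omega_dyn, omega_static) * hours / 1000.0
    electricity_bill = energy_kwh * price_per_kwh
    return revenue - tenant_cost - electricity_bill

print(csp_profit(revenue=5000.0, tenant_cost=1200.0, z=0.7,
                 omega_dyn=120.0, omega_static=90.0, hours=24,
                 price_per_kwh=0.04, n_servers=1000))
```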
CSP Profit Maximization Factor.
The probability of having N SRs is described with equation (19). Stirling's approximation (N! ≈ √(2πN) (N/e)^N) is used to assess the CSP profit, and the maximized CSP profit is then derived accordingly.
DAG Task Scheduling Methodology.
The tasks are assigned through a computational method based on the DAG process, considering the framework's performance weight, as shown in Figure 4.
A DAG is defined as G = (V, E), with V = {v_1, v_2, . . . , v_n}, where v_i represents a corresponding task t_i that executes sequentially on a machine, and E = {e_1, e_2, e_3, . . . , e_m} captures the precedence relations among tasks arising from data dependency. A task is not initiated until its predecessor tasks have finished.
Because conditions in the cloud differ, the capability of each PM varies. Therefore, we use the matrix X[i, j] to identify and keep track of the processing time of each task t_i on the j-th VM. We do not use fixed weight and performance factors to measure the resources; instead, we deliberately use a matrix of measured execution times on the various VMs rather than a constant weight factor to estimate the execution time. Following the data analysis model dataset, we measure each level (L_i) of the convolutional network with DAG-based Spark. Specifically, each Spark stage corresponds to a vertex, and the connection between two stages corresponds to a directed edge. The vertices with zero in-degree are regarded as stages that complete in parallel (P_i).
The zero-degree vertices of the DAG are denoted by L. The scheduling procedure is performed recursively and forwards its outcome to the next phase of the DAG. According to equation (23), we measure the maximum execution time over all processing phases running in parallel (P_i), which recursively updates the task finish time:
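The level-wise scheduling idea above can be sketched as follows: stages with zero in-degree form a parallel level whose finish time is the maximum stage time (cf. equation (23)), and levels are processed recursively. The "fastest-VM time per stage" lookup used here is an assumption standing in for the X[i, j] matrix.

```python
# Sketch of level-wise DAG makespan: each parallel level costs its maximum
# stage time, and levels are consumed recursively. Stage names and times
# are illustrative assumptions.
from collections import defaultdict

def levelwise_finish_time(edges, exec_time):
    """edges: list of (u, v) precedence pairs; exec_time: {stage: best per-VM time}."""
    indeg = defaultdict(int)
    succ = defaultdict(list)
    stages = set(exec_time)
    for u, v in edges:
        indeg[v] += 1
        succ[u].append(v)
    ready = [s for s in stages if indeg[s] == 0]   # zero-degree vertices = level L
    total = 0.0
    while ready:
        total += max(exec_time[s] for s in ready)  # a parallel level costs its max stage
        nxt = []
        for s in ready:
            for t in succ[s]:
                indeg[t] -= 1
                if indeg[t] == 0:
                    nxt.append(t)
        ready = nxt
    return total

X = {"s1": 4.0, "s2": 6.0, "s3": 3.0, "s4": 5.0}           # fastest-VM time per stage
print(levelwise_finish_time([("s1", "s3"), ("s2", "s3"), ("s3", "s4")], X))  # -> 14.0
```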
Estimation of Optimal Price.
The price-demand function estimates the optimal price of a service by considering the trade-off between the service price ϕ and the corresponding service demand Δ for each service request mode, such as on-demand service, reserved service, and spot instance service. It is formulated such that Δ_od refers to the price-demand of the on-demand service and Δ_re refers to the price-demand of the reserved service; similarly, ϕ_od refers to the price of the on-demand service and ϕ_re refers to the price of the reserved service.
Theorem 1. Assume that the CSP considers Z units of time. If the service price is φ and the average service execution time is t, then the anticipated service price is as follows.
Proof. The CSP considers Z units of time, and the optimal price is measured with the average service execution time t. It is defined as follows: the service request price is (n + 1)Zϕ in the time interval (nZ, (n + 1)Z]. The probability distribution function of t and the expected price follow. Hence, the theorem is proved, and the forecast service arrival demand is obtained approximately. The forecast service price is S_expec = φ − CSP_cost = φ − nϕ_re. So, the maximum price must satisfy ∂S_expec/∂ϕ = 0, such that, with L = 1 (for the first PL),
where s_los = Z^n e^{n(1−Z)} / √(2πn) refers to the loss of server profit; the probability of the expected server profit loss and, subsequently, the probability of the forecast service price follow. □
Estimating Optimal Price.
In Algorithm 2, the partial derivative is formulated through s_los. It determines an accurate service price even when the service arrival rate is high, with low profit loss. Lines 1-3 define the input variables, and line 4 applies the models to all arrived service requests. Lines 6-9 estimate the optimal price demand, and lines 10-19 estimate the optimal price value based on equations (31) and (13).
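The bisection search performed by Algorithm 2 over the price interval can be sketched as below; the profit-derivative function is a stand-in, since the paper's expressions through s_los and equations (13), (30), and (31) are not reproduced here.

```python
# Sketch of the bisection idea behind Algorithm 2: search [phi_st, phi_en]
# for the price where the profit derivative changes sign. d_profit is a
# stand-in for the paper's derivative expressions.
def optimal_price(d_profit, phi_st, phi_en, tol=1e-6, max_iter=100):
    lo, hi = d_profit(phi_st), d_profit(phi_en)
    if lo * hi > 0:                      # no sign change: optimum sits at a boundary
        return phi_st if abs(lo) < abs(hi) else phi_en
    for _ in range(max_iter):
        mid = 0.5 * (phi_st + phi_en)
        if abs(phi_en - phi_st) < tol:
            break
        if d_profit(phi_st) * d_profit(mid) > 0:
            phi_st = mid                 # sign change lies in the upper half
        else:
            phi_en = mid                 # sign change lies in the lower half
    return 0.5 * (phi_st + phi_en)

# Toy derivative: profit = demand(phi) * phi with demand = 100 - 4*phi
print(optimal_price(lambda phi: 100 - 8 * phi, phi_st=1.0, phi_en=20.0))  # -> ~12.5
```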
DAWM Algorithm for Cloud Server Size and Cost Analysis.
Algorithm 3 assesses the server size and performance cost. It assesses customer satisfaction from the machine economic growth ratio by leveraging the cloud server configuration (CSC), i.e., the server size, the task arrival rate, and the performance cost of the machine. Therefore, the CSC has a direct impact on the cloud user service satisfaction rate, and inadequate customer satisfaction in turn has a direct impact on the service request arrival rate. Line 1 defines the essential input parameters to accomplish the objectives. Lines 2-5 assess the service execution cost using equation (13) and update the machine matrix H[i, j] for effective prediction of the server configuration size. Lines 6 and 7 update all machine execution speed rates and maintain them in an array. Lines 8-10 assess the performance cost in association with the CSC (s), the service resource requirement rate (K), the SLAV penalty rate (L), and the energy and resource tenant costs. Lines 12-15 update the iterative values to mitigate the performance rate and system execution cost.
Experimental Result Analysis
The proposed DAWM is simulated with real data in MATLAB R2017b; the system specifications are 8 GB DDR4 memory and an Intel Core i7-6700HQ CPU at 2.6 GHz. We consider a DAG [V, E] consisting of 25-150 sensors. Every network enables 5% of data centres in the network size, and its capacity varies from 5000 to 75000 GHz. The active servers vary from 1000 to 1500. The idle-server constant energy consumption is 90-180 W; otherwise the energy consumption is measured based on the energy usage rate, which is in the range [0.5, 1.5]; the energy price is in the range [15, 55] per MWh. The link bandwidth between sensors varies from 1500 to 25,000 Mbps, and the transmission delay is 3-6 ms. The revenue gain is [0.15, 0.25], which is not static. Each service execution bandwidth is set to 15-25 Mbps, the computing demand is 3-5 GHz, and the execution of each service is 5-30 data packets/ms. The simulation parameters related to power cost, constant workloads, CSC, service requirement rate, SLAV penalty rate, energy cost, tenant cost, and current CSP margin profit are listed in Table 1. Figure 5 illustrates the average execution time required to process the user service requests. It is compared with four recently published state-of-the-art approaches (SPEA2, COMCPM, NSGA-II, and OMCPM). The proposed approach has a higher performance rate than the remaining approaches: it is 41.2%, 55.56%, 59.89%, and 61.52% faster than SPEA2, COMCPM, NSGA-II, and OMCPM, respectively. Figure 6 illustrates the profit, revenue, and cost of the proposed system and of the SPEA2, COMCPM, NSGA-II, and OMCPM approaches.
The proposed system achieved moderately higher revenue, by 10%, 8.1%, 8.9%, and 8.91%, than the SPEA2, COMCPM, NSGA-II, and OMCPM approaches. Subsequently, our approach achieves 2.31%, 2.01%, 1.7%, and 1.37% higher profit than the four approaches, since our approach estimates the demand of service requests and analyses the machine performance before assigning the load. The reason is that the user service request (USR) is submitted to the service provider, which runs on a multiserver system to deliver responses to the received service requests. The CSP assesses the machine data with our deep-learning data analytical model. It makes accurate decisions to enhance system performance by preserving the service cost and to enhance the revenue gain by consolidating each machine's performance. The second reason is that the tasks are scheduled based on DAG theory, which influences the energy and resources of the system and thereby enhances the revenue and optimizes the service request cost. Figure 7 shows the impact of user service demand flexibility. We can observe that the active cloud server count (from 15 to 75) and the processing speed m of the active servers are high, but there is no impact on the service execution demand rate. If the server count increases, the user service demand execution rate does not increase; it sometimes remains stable to keep a reliable quality of service with adequate computing performance. If the USD is high, the server system is frequently unable to meet the service demand requirement synchronously. In such cases, if the customer waits for a long time, the USD rate becomes low due to low service demand. Usually, the USD may remain constant when the USD market is stable, which would not be affected by third-party factors. Figure 8 shows the CSP profit outcomes. As we can observe, the profit rate decreases drastically when the active servers are increased from 15 to 75. A high server processing speed m has no impact, as we expected. The profit ratio increases when the USD rate grows faster than the new active-server cost. The revenue enhancement and server size factors do not impact the server cost, but the USD diminishes due to the decrement of the CSP profit. Consequently, the profit returns to a stable level when the USD becomes constant. Figure 9 shows the comparative study of the server processing speed.
The server processing speed decreases as the server size increases; the computation size is fixed, which restricts the execution of the services. The increased server count forces the system's service execution speed to decrease. Figure 10 illustrates the increased profit when the server size and the USD rate are increased. The USDs that require high computation lead to enhanced CSP profit. We can observe that the USD is moderate because of the server-size enhancement. We noticed that if the active servers are few but the server speed is high, the profit increases. If we maintain a constant computing capacity, the server speed is impacted by the increase of the active server count, which causes a decrease in profit. Therefore, if the server size is at its peak and the speed remains constant, the energy cost is saved, which impacts the CSP profit. Figure 11 shows the comparative analyses of the server size and profit obtained by regulating the server speed and the USD rate. To assess the outcomes, we used the parameters listed in Table 1. If we increase the value of m, the active server size becomes low due to the increment of m under USD certainty. The profit is impacted when the energy cost is high, which influences the service execution speed and diminishes the CSP profit. Table 2 shows the comparative study analysis with respect to all state-of-the-art approaches.

ALGORITHM 1 (interface): input: CSP set N = N_1 + N_2 + N_3 + · · · + N_n; output: user demand service.

ALGORITHM 2: Optimal price estimation algorithm.
input: u, t, n, ϕ_re, ϕ_od, Δ_re, Δ_od; output: optimal price of service
(1) Let ϕ_opti = −∞, Δ_opti = −∞;
(2) ϕ_st ← least price with server usage < 1;
(3) ϕ_en ← ϕ_od;
(4) for each N_i ∈ N do
(5)   Estimate Δ_st and Δ_en using equations (30) and (13);
(6)   if Δ_st × Δ_en > 0 then
(7)     ϕ_opti = ϕ_st;
(8)     Estimate Δ_opti using (13) and S_expec = φ − CSP_cost = φ − nϕ_re;
(9)   end
(10)  while Δ_st × Δ_en > error do
(11)    ϕ_mid = (ϕ_st + ϕ_en)/2;
(12)    Estimate Δ_mid using (13) and (31);
(13)    if Δ_st × Δ_mid > 0 then
(14)      ϕ_st ← ϕ_mid;
(15)    else
(16)      ϕ_en ← ϕ_mid;
(17)    end
(18)  end
(19)  ϕ_opti = (ϕ_st + ϕ_en)/2;
(20) end

ALGORITHM 3: DAWM algorithm.
input: (1) host set N = N_1 + N_2 + N_3 + · · · + N_n, (2) Ex: execution time matrix of the hosts, (3) C: cost weight matrix of the hosts/VMs; output: performance cost of the server
(1) Let T = T_1 + T_2 + T_3 + · · · + T_t
(2) for each N_i ∈ N do
(3)   Find the minimum cost-effective host using equations (6) and (13)
(14)  λ_i++
(15) end
(16) Return the performance cost of the server
The proposed system has an outstanding profit of 35.5% on average. This profit is achieved due to the data analysis model, and the performance rate of our system is also higher than that of the existing approaches. The machine performance and execution cost estimations played an essential role in gaining an adequate, noticeable profit for the CSP. Table 3 illustrates our approach's simulation outcomes with a unit price of 0.6$ and an average execution time of 0.6 ms. The service price, service price-demand, maximum average service arrival rate, error rate, and user cost are assessed with the average service execution time.
Conclusion
The proposed approach has been designed based on a belief-propagation-influenced analytical data model to enhance CSP profit through a DAG-based task and resource scheduling policy. It optimizes the CDC asset usage rate by consolidating overprovisioned machines. Cloud service suppliers run the data utility analytic method on machines with low resource usage rates to preserve CDC usage and performance cost and to avoid instant repudiations/migrations.
It initially recognizes feasible servers after the first iteration by constructing the data analytic weight measurement (DAWM) model. The DAWM model optimizes the cloud service provider's average cost by 51% by considering each machine's cost and revenue during an effective resource slicing process.
The multiobjective heuristic user service demand (MHUSD) algorithm improved the average server performance by 41% and the average CSP revenue gain by 35% thanks to the CSP profit estimation model and the user service demand (USD) model with directed acyclic graph (DAG) phenomena, while providing adequate service reliability. It considers the service demand weight, service tenant cost, and machine energy cost. Subsequently, the MHUSD algorithm also considers the maximum bearable wait time of the end-user to maximize CSP revenue and optimize the operational energy cost. The Google cloud trace confirms an optimized average system profit of 590$ and a service execution speed of 4.5 sec/MIPS with the m2.6X large core system. The simulation results show that our system has an average service execution speed faster than the remaining approaches: 41.2%, 55.56%, 59.89%, and 61.52% faster than SPEA2, COMCPM, NSGA-II, and OMCPM, respectively. Subsequently, the proposed system achieved moderately higher revenue, by 10%, 8.1%, 8.9%, and 8.91%, than the SPEA2, COMCPM, NSGA-II, and OMCPM approaches, and higher profit, by 2.31%, 2.01%, 1.7%, and 1.37%, than the state-of-the-art approaches.
Data Availability
No data were used to support the findings of this study.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 8,373.4 | 2021-05-28T00:00:00.000 | [
"Computer Science"
] |
Structural and Magnetic Properties of Nanoparticles of NiCuZn Ferrite Prepared by the Self-Combustion Method
NiCuZn ferrites were prepared by the sol-gel self-combustion method. Nanosized, homogeneous, and highly reactive powders were obtained at relatively low temperatures. In the present work, the variations of the structural, magnetic, and microwave properties of NiCuZn ferrite nanoparticles were studied as a function of the annealing temperature. The analysis of the XRD patterns showed that only the spinel phase is present. The cell parameters vary slightly with the thermal treatment, while the crystallite size increases. The magnetic nanoparticles were mixed with an epoxy resin for reflectivity studies with a microwave vector network analyzer using the waveguide method in the range of 7.5 to 13.5 GHz. The static saturation magnetization (measured by SQUID) and the microwave absorption show a clear dependence on the annealing temperature/particle size, and the absorption maximum moves towards higher frequencies with an increase in the average particle size.
Introduction
Spinel ferrites are commercially important materials because of their excellent magnetic and electrical properties [1]. They have been utilized as radar absorbing materials (RAM) in various forms for many years due to their large magnetic losses and large resistivity. However, in the microwave region, the applications of spinel ferrites are limited to the lower end of microwave frequencies (1-3 GHz) compared with hexaferrites [2]. The effect of Cu substitution on the magnetic and electrical properties of NiZn ferrite has been reported by Nakamura and Shroti [3][4]. The magnetic properties of ferrites, including the temperature dependence of the relative initial permeability, depend heavily on their compositions and microstructures [5][6][7][8][9]. Although coprecipitation and sol-gel processes are the most popular, they have some disadvantages since they are highly pH-sensitive and require special attention for complex systems such as NiCuZn. The sol-gel self-combustion method has much simpler processing steps and has the advantage of using inexpensive precursors [10]. In the present work, nanosized, homogeneous, and highly reactive powders were obtained at relatively low temperatures in order to investigate the influence of the annealing temperature on their structural and magnetic properties and microwave reflection losses (R_L).
Experimental
In the present work, the substituted Ni_0.35Cu_0.15Zn_0.5Fe_2O_4 ferrites were prepared by the self-combustion method [6]. Different proportions of iron nitrate and nickel, copper, and zinc oxalates were weighed according to the required stoichiometric proportion and diluted in water (in all preparations, [Fe(III)] + [Ni(II)] + [Zn(II)] + [Cu(II)] = 1 M). A 3 M citric acid solution (50 mL) was added to each metal solution (50 mL) and heated at 40 °C for approximately 30 min with continuous stirring. The final mixture was slowly evaporated until a highly viscous gel was formed. The resulting gel was heated at T ∼ 200 °C, when it ignited in a self-propagated process. The final residue (ash) was calcined at 550 °C, 800 °C, and 1100 °C for two hours. The sample nominations are shown in Table 1. In order to characterize the auto-combustion process, the original gel and the as-prepared ash powders (M-I) were heated in a Shimadzu TGA-51 instrument at a heating rate of 10 °C min−1 in air atmosphere. The nanoparticles were characterized by X-ray diffraction using a Bruker D8 Advance diffractometer equipped with a Cu tube, a Ge (111) incident-beam monochromator (λ = 1.5406 Å), and a Sol-X energy dispersive detector. The corresponding intensities were measured for 2θ angles from 15 to 90° with a step of 0.02° in the standard Bragg-Brentano geometry. The average crystallite size of the nanoparticles was calculated for the strongest diffraction peak, the (311) plane, from the line broadening using the Scherrer equation corrected for instrumental broadening [11], given by D = 0.89λ/(ξ cos θ), where D is the crystallite size, λ is the wavelength of the radiation used, ξ is the full width at half maximum after the correction for instrumental broadening, and θ is the scattering angle. The cell parameters of the ferrite phase were calculated from the XRD patterns using the cell powder programme. The XRD data were fitted using the FULLPROF program included in WinPLOTR [12][13]. In the profile refinement, the unit cell parameters, peak shape (pseudo-Voigt), a background, a systematic 2θ shift, an overall isotropic displacement, the U, V, W half-width parameters of the profile function, and asymmetry parameters were refined. Transmission electron microscopy (TEM) was performed after sedimentation of the particles on carbon-coated copper grids.
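The Scherrer estimate above is straightforward to compute; the short sketch below illustrates it. The quadratic subtraction used for the instrumental-broadening correction and the example peak widths are assumptions for illustration, not necessarily the exact correction of ref. [11].

```python
# Small sketch of the Scherrer estimate, D = 0.89*lambda / (beta*cos(theta)),
# with the FWHM corrected for instrumental broadening (quadratic subtraction
# assumed here). Example peak widths are illustrative.
import math

def scherrer_size(two_theta_deg, fwhm_obs_deg, fwhm_inst_deg, wavelength_A=1.5406):
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(math.sqrt(max(fwhm_obs_deg**2 - fwhm_inst_deg**2, 0.0)))
    return 0.89 * wavelength_A / (beta * math.cos(theta))   # crystallite size in angstroms

# (311) spinel reflection near 2theta ~ 35.5 deg, illustrative widths in degrees
print(scherrer_size(35.5, fwhm_obs_deg=0.60, fwhm_inst_deg=0.08) / 10.0, "nm")
```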
Magnetic measurements of the Ni_0.35Cu_0.15Zn_0.5Fe_2O_4 nanoparticles were performed at room and cryogenic temperatures using a superconducting quantum interference device (SQUID). The nanoparticles were placed into a gelatin capsule under slight pressure (in order to avoid their movement in a magnetic field). Hysteresis loops were measured at temperatures of 5, 10, 50, 150, and 300 K. Zero-field-cooling (ZFC) and field-cooling (FC) curves were obtained for field values selected on the basis of the analysis of the M(H) curves.
In order to explore the microwave-absorbing properties in the X-band, the magnetic nanoparticles of each sample were mixed with an epoxy resin to form a microwave-absorbing composite, and the microwave behavior was analyzed from 7.5 to 13.5 GHz. Reflectivity measurements were carried out using the waveguide method. Samples were prepared in a rectangular shape of 12.4 mm × 24.2 mm to fit in the rectangular waveguide of the X-band. All composite samples had the same composition (17/83 wt%) and the same thickness (2.0 mm). The waveguide fitted with the sample was backed by a metal short, and the reflection loss was measured using a network analyzer (model Anritsu 37247D, 40 MHz-20 GHz).
Results and Discussion
Figure 1 shows the TG curves of the obtained gel, recorded in order to clarify the chemical reactions of the Ni_0.35Cu_0.15Zn_0.5Fe_2O_4 precursors during the pyrolysis process. Three weight-loss processes were found, occurring at 160-166 °C, 166-233 °C, and 242-322 °C, with a total weight loss of 63.3%. The first weight loss was about 50.0% of the total weight, corresponding to the decomposition of the gel by oxidation reactions of organic compounds (citrates and acetates) and the loss of absorbed water in the precursor. The second weight loss was about 5.5% of the total weight, corresponding to the decomposition of nitrate. The third weight loss was about 7.8% and probably arises from the decomposition of Cu-Ni-Zn-Fe compounds into metal oxides. Above 350 °C the weight was almost constant, indicating that the final decomposed products are ferrites. The inset shows that the as-prepared ash powder presents a weight loss of 18% up to 550 °C. These results suggest that sample M-I retains some organic precursor compounds. These considerations can be related to the XRD analysis. The main result obtained from the XRD studies (Figs. 2 and 3) is that the obtained materials are indeed uniform single cubic fcc (Fd3m) phase spinels showing the typical NiCuZn ferrite reflections, such as (220) [14]. According to the XRD studies, the cell parameters vary slightly with the thermal treatment, while the crystallite size increases with the temperature of the post-preparation treatment from about 12 to 115 nm. The diffraction peaks of the small-sized M-I and M-II samples were broader than expected for small nanoparticles [15]. L. Yu et al. [14] have shown that a complete spinel structure of nanocrystalline NiCuZn ferrite may form at a temperature of about 480 °C. Therefore, the structure of the M-I sample was studied in more detail. Fine features of the XRD spectra indicate that, although the main volume of the M-I sample was the typical NiCuZn ferrite phase, the as-prepared nanoparticles may contain imperfections, local deviations from the cubic fcc spinel, accumulation of stresses, or the presence of elemental Zn segregations (for a volume amount below 1%). It can clearly be seen from the TEM images in Fig. 4 that there are areas with increased dislocation density filling the grain boundaries of the sample M-IV. For copper-substituted NiZn ferrite obtained by the coprecipitation process, the structural peculiarity near the grain boundaries was proved to be a Cu-rich phase [16]. The presence of Cu+ ions near the grain boundaries causes a decrease of the lattice parameters of the sample M-IV sintered at 1100 °C (Table 1). The grain size rises from about 12 nm to 115 nm with increasing sintering temperature, which is also much larger than that of NiZn ferrite prepared through the same fabrication process [5,6,17]. In this case, liquid-phase sintering takes place, in which material transfer is much faster than in the solid phase. This can explain the spectacular increase of the grain size.
Figure 5 shows two examples of the field dependence of the magnetization for the M-I and M-II types of Ni_0.35Cu_0.15Zn_0.5Fe_2O_4 nanoparticles (see also Table 2). The most important result is that for all the samples and temperatures a non-zero coercivity was observed, indicating the presence of a strong ferromagnetic contribution. The same conclusion was drawn from the analysis of the ZFC-FC curves. As the annealing temperature increases, the saturation magnetization (measured at 1 T) increases from 37 emu/g to 84 emu/g. Figure 6 shows zero-field-cooling and field-cooling curves for the following values of the external field H: 100 Oe, 200 Oe, and 1 kOe. As the temperature decreased from 340 to 5 K, a certain increase of the magnetization was observed (especially well seen for the H = 1 kOe curves). An increase of M_s with the increase of the post-preparation treatment temperature is not surprising. The first explanation comes from the detailed analysis of the x-ray data, namely the existence of imperfections, local deviations from the cubic fcc spinel, or accumulation of stresses in the structure of the M-I sample. The second reason is the small size of the nanoparticles with a very high surface-to-volume ratio, which decreases with an increase of the average grain size. There are many reports of a similar decrease of the saturation magnetization for magnetic nanoparticles with a decrease of their average size [18,19]. The shapes of the ZFC-FC curves clearly indicate that the superparamagnetic phase (if any) is not dominant for any of the nanoparticle types: there is no sharp peak in the ZFC branch of the curve, which is usually associated with the mean blocking temperature of superparamagnetic nanoparticles. Figure 7 shows the temperature dependence of the saturation magnetization and the coercive forces and the dependence of the room-temperature coercivity on the average size of the nanoparticles, which can be described by the empirical law H_c ~ 1/D. The highest coercivity appears in the M-I sample. It can also be explained by the existence of imperfections and local deviations from the cubic fcc spinel. Taking into account the analysis of the ZFC-FC curves, the M(H) loops were fitted considering both small superparamagnetic and ferromagnetic contributions using the IGOR PRO software [20] and the typical functions describing the principal magnetic contributions and the grain size of the nanoparticles [21,22]: M(H) = M_FM(H) + M_SPM(H), where M_FM(H) is the ferromagnetic contribution and M_SPM(H) is the superparamagnetic contribution; α and β are the parameters of the log-normal distribution of the grain sizes [23]. An example of the fits is shown in Fig. 8. Although the fits look fairly good (Fig. 8), the errors in the determination of the parameters were quite high, and the experimental errors increased strongly with an increase of the temperature of the post-preparation treatments. Table 2 lists selected fitting parameters for some samples; β is usually considered to be between 0.5 and 1.5 [21,22]. The difference between the grain size estimated from the structural methods and from the fitting of the magnetic measurements shows the usual tendency: for small nanoparticles D_m < D or D_TEM. The increase of the difference with increasing grain size indicates that the simple model becomes less appropriate for describing the magnetic system. No reasonable fitting was found for the M-III and M-IV samples, indicating that the state of the magnetic system becomes really complicated.
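To illustrate the two-component fit described above, a minimal sketch follows. The decomposition M(H) = M_FM(H) + M_SPM(H) is taken from the text; the specific tanh-shaped ferromagnetic branch, the Langevin superparamagnetic term, and the synthetic data are illustrative assumptions, and the log-normal size-distribution weighting (parameters α and β) used in the paper is omitted.

```python
# Hedged sketch of the two-component decomposition M(H) = M_FM(H) + M_SPM(H).
# The functional forms are common choices, not the paper's exact fit model.
import numpy as np
from scipy.optimize import curve_fit

def m_total(H, Ms_fm, Hc, Ms_spm, chi):
    m_fm = Ms_fm * np.tanh((H - Hc) / (0.5 * abs(Hc) + 1e-9))   # one loop branch
    x = chi * H + 1e-12
    m_spm = Ms_spm * (1.0 / np.tanh(x) - 1.0 / x)               # Langevin function
    return m_fm + m_spm

np.random.seed(0)
H = np.linspace(100.0, 10000.0, 200)                            # Oe, ascending branch only
M_obs = m_total(H, 30.0, 120.0, 10.0, 5e-4) + np.random.normal(0, 0.2, H.size)
popt, _ = curve_fit(m_total, H, M_obs, p0=[25.0, 100.0, 8.0, 1e-3], maxfev=20000)
print(popt)   # recovered [Ms_fm, Hc, Ms_spm, chi]
```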
Reflection losses (R_L) in ferrites are related to the magnetic and dielectric behavior of these materials, to the ferrite-composite ratio, and to the sample thickness. Figure 9 shows the frequency dependence of the return loss for all the ferrite samples. In the X-band range, a clear difference in the microwave behavior of the obtained nanoparticles was observed. The M-I sample shows a maximum R_L of 4.5 dB at 10.6 GHz, while the M-IV sample shows a maximum R_L of 4.0 dB centered at 12.7 GHz with a broad absorption peak of 0.8 GHz near 3 dB in the range 12.1-12.9 GHz. As was reported earlier for Ni ferrites, the spin resonance frequency increases with the particle size [24]. Thus, for low annealing temperatures, when the particle size is still small, only spin resonances are present. The domain wall resonances appear when the particle size reaches the single-domain size. R. H. Kodama et al. [25] showed that the disordered surface spins of ferrite nanoparticles can additionally contribute to the microwave absorption. Although further studies with a higher ferrite ratio/different thickness are necessary, the obtained results indicate that the Ni_0.35Cu_0.15Zn_0.5Fe_2O_4 ferrite/epoxy composite can be used as a RAM. Table 2. Selected data for the fitting of the M(H) loops of the M-I and M-II samples at room temperature. The particle size D_m was calculated from α and β, D from x-ray diffraction, and D_TEM from TEM.
Conclusions
Ultrafine NiCuZn nanoparticles were prepared by the sol-gel self-combustion method. Their structural, magnetic, and microwave properties were studied as a function of the annealing temperature. The obtained materials are indeed uniform single cubic fcc (Fd3m) phase spinels with the typical NiCuZn ferrite phases. The grain size increases from 12 nm to 115 nm with increasing sintering temperature. For all the samples and temperatures, a non-zero coercivity was observed, indicating the presence of a strong ferromagnetic contribution. The microwave absorption results show that in the X-band range the M-I ferrite has a maximum R_L of 4.5 dB at 10.6 GHz, while sample M-IV has a maximum R_L of 4.0 dB at 12.7 GHz; i.e., the M-I ferrite/epoxy composite can be used as a RAM.
Fig. 1. TG curves of the dried nitrate-citrate gel powder. The inset: the as-prepared ash powder (M-I) heated from room temperature to 1000 °C.
Fig. 3. XRD patterns of Ni_0.5-xCu_xZn_0.5Fe_2O_4 M-IV nanoparticles annealed at 1100 °C. Comparison of observed (circles) and calculated (solid line) intensities. The difference pattern is shown below. The vertical bars in the middle position indicate the positions of the Bragg reflections.
Fig. 5. Magnetization hysteresis loops measured at selected temperatures for the M-I (a) and M-II (b) Ni_0.35Cu_0.15Zn_0.5Fe_2O_4 nanoparticles. The insets show the details of the M(H) curves obtained in small external fields.
Fig. 6. Zero-field-cooling (ZFC) and field-cooling (FC) curves for various values of the external applied field H for the M-I (a), M-II (b), M-III (c), and M-IV (d) Ni_0.35Cu_0.15Zn_0.5Fe_2O_4 nanoparticles.
Fig. 7. Temperature dependence of the coercivity for the M-I, M-II, M-III, and M-IV samples (a); the inset shows the temperature dependence of the saturation magnetization. The dependence of the room-temperature coercivity on the average size of the Ni_0.35Cu_0.15Zn_0.5Fe_2O_4 nanoparticles (b).
Fig. 9. Reflection loss dependence on the frequency for the powders of the NiCuZn-epoxy ferrites.
Table 1. Nominations, structural, and room-temperature magnetic characteristics of the samples. A similar structure was earlier reported for Ni_0.35Cu_0.11Zn_0.57Fe_1.97O_4 ferrite nanoparticles prepared by the sol-gel self-combustion method in different atmospheres | 3,556 | 2010-12-01T00:00:00.000 | [
"Materials Science",
"Physics"
] |
The impact of film thickness on the properties of ZnO/PVA nanocomposite film
Polymer-inorganic nanocomposites are attracting a considerable amount of interest due to their enhanced electrical and optical properties. The inclusion of inorganic nanoparticles into the polymer matrix results in a significant change in the nanocomposite's properties. With this in mind, we have developed a nanocomposite film based on zinc oxide (ZnO) and polyvinyl alcohol (PVA) using a solution casting method with varying concentrations of ZnO nanopowder in the PVA matrix. The ZnO/PVA film surface morphology was observed by scanning electron microscopy (SEM). The micrographs indicate that the ZnO nanoparticles are homogeneously distributed in the PVA matrix. XRD results indicated that the crystallinity of the film was influenced by the interaction of the ZnO nanoparticles and the PVA main chain. Crystallinity is also affected by the doping of ZnO nanoparticles in the PVA matrix: it increases when the concentration of ZnO is low and then decreases when an excess concentration of ZnO is present in the PVA matrix. The FTIR transmission spectra confirmed that significant interaction took place between the ZnO nanoparticles and the PVA main chain over the wavenumber range of 400-4000 cm−1. The UV-vis spectra reveal that an increase in the concentration of ZnO nanoparticles in the polymer matrix results in a movement of the absorption edge towards higher wavelengths or lower energies associated with the blue/green portion of the visible spectrum. A decrease in the optical energy bandgap is observed with the increase in nano-ZnO concentration in the matrix. Thickness has a significant effect on the properties of the ZnO/PVA nanocomposite, and the morphology, particle size, degree of crystallinity, and bandgap of the ZnO/PVA nanocomposite samples were influenced by the thickness of the sample. The optimal thickness of 0.03 mm with a weight percentage of 16.6% (ZnO) and 83.3% (PVA matrix) was selected due to its higher bandgap of 4.22 eV, reduced agglomeration/aggregation, and smaller ZnO particle size of 14.23 nm in the matrix. The optimal film can be used in photovoltaic research.
Introduction
When a composite has at least one phase at the nanoscale, a nanocomposite is formed. The power of nanoscale materials maximizes the composite characteristics and can lead to new characteristics. In recent times, great attention has been paid to nanocomposites that contain organic and inorganic phases. The properties of nanocomposites depend not only on the properties of the individual components, but also on whether the attraction or compatibility between the two phases is significant [1,2]. An important application of nanocomposites is in electronics, where thin-film nanocomposites are useful in improving and enhancing the electrical characteristics. Work on polymer nanocomposites focuses on the final morphology, which depends on the polymer-nanoparticle interactions that promote good dispersion and distribution of the nanoparticles in the polymer matrix [3,4].
Zinc oxide is an inorganic compound with the chemical formula ZnO. It is a white powder that is almost insoluble in water. It crystallizes in two primary forms, the hexagonal wurtzite and the cubic zinc blende [5]. It is a group II-VI semiconductor with a broad band gap of approximately 3.33 eV. The strong and broad band gap in the near-UV spectral region and the large free-exciton binding energy make it a promising functional semiconductor with a wide range of new applications [6]. ZnO has recently been doped with nanoparticles of other materials. By controlling the dopants and their concentrations, it was determined that the particle's physical properties, such as the optical, electrical, and magnetic properties, can be engineered. This modification and enhancement is obtained because of the change of the electronic structure and the band gap. Studies in this field are very important for developing applications that can be used in optical devices [7]. ZnO is a technologically important material due to its diverse properties.
PVA is a whitish, odorless, non-toxic, biocompatible, thermostable, semi-crystalline linear synthetic polymer [8,9]. Poly(vinyl alcohol) (PVA) is a water-soluble and biodegradable synthetic polymer. Its degradability can be optimized by hydrolysis due to the inclusion of hydroxyl groups. PVA can be dissolved in water within 30 min at a solvent temperature of at least ~100 °C. The characteristics of PVA depend on the degree or extent of hydrolysis (of polyvinyl acetate), whether complete or partial; accordingly, it is classified into two grades: partially hydrolyzed and fully hydrolyzed [10]. PVA has a melting point of 230 °C for fully hydrolyzed grades and 170 °C-190 °C for partially hydrolyzed grades. It decomposes rapidly above 200 °C because it can undergo pyrolysis at high temperatures. It has excellent optical characteristics. Doping with nanofillers can easily change its mechanical, optical, and electrical characteristics. PVA is an ideal matrix for the production of versatile devices in the fields of electronics, optoelectronics, bioengineering, and other areas due to its lack of toxicity, biocompatibility, high hydrophilicity, and easy processability. It is therefore very important to tune the PVA properties for various targeted applications [11].
Various physical and chemical techniques have been used to create and manufacture the desired ZnO/PVA nanocomposite architecture, such as solvent or solution casting, vapor-liquid-solid growth, melt processing, chemical vapor deposition, and in situ methods [12]. Different researchers have observed enhanced optical, electrical, thermal, and dielectric properties in ZnO-based PVA nanocomposites prepared by various methods [13][14][15]. However, no stand-alone paper on the effect of film thickness on the properties of ZnO/PVA nanocomposite film has been reported. Therefore, efforts have been made in this study to find the optimal thickness of ZnO/PVA nanocomposite film for its possible use in photovoltaic research.
Zinc Oxide (ZnO)
Zinc oxide (ZnO) was purchased from R&M Chemicals. It was used as a filler in the polymer/ZnO nanocomposites. The diameter of the nanoparticles was < 100 nm.
Solvent
Deionized water (H2O), which carries no charge (it is ion-free and does not conduct electricity), was used as the solvent for the PVA supplied by Sigma-Aldrich. It is a common solvent for this type of polymer. Water is a polar protic solvent, while PVA structurally has a long carbon-chain backbone.
Synthesis of ZnO/PVA nanocomposite
The solution casting method (SCM) is an easy and versatile process for developing laboratory-scale nanocomposite thin films. In this study, the SCM was used to prepare a laboratory-scale thin film of ZnO/PVA nanocomposites. A ceramic template (petri dish) was made based on the desired (laboratory-scale) shape of the nanocomposite thin film for this study in order to characterize its composition.
The procedure of preparing the ZnO/PVA nanocomposite is briefly explained in figure 1. The composition of the total ZnO/PVA solution mixture and the weight percentages of the blended ZnO/PVA mixture with deionized water are given in tables 1 and 2, respectively. A total of 10 samples were prepared with different weight percentages of ZnO and PVA.
Different samples of ZnO/PVA nanocomposites with different weight percentages of ZnO and PVA were prepared in order to find the sample that gives a better and more efficient film in terms of thickness, as shown in table 3. The laboratory-scale ZnO-PVA nanocomposite thin film was made with a length × width (L × W) of 130 mm × 60 mm. The thickness of the nanocomposite samples was measured using a digital micrometer. The thickness of the different composite films varies from 0.03 mm to 0.10 mm.
Results and discussion
A total of 10 samples were selected for characterization. Each sample was characterized using SEM, FTIR, XRD, and a UV-vis spectrophotometer, with the following purposes: FTIR – observe the functional groups present in the ZnO/PVA nanocomposite; X-ray diffraction (XRD) – examine the diffraction pattern of the ZnO/PVA nanocomposite; UV-visible spectrophotometer (UV-vis) – find the absorption and bandgap of the ZnO/PVA nanocomposite. The following are the characterization results of each sample.
SEM analysis
The SEM image in figure 2(a) shows that nanosized crystal aggregates have formed as a result of the high surface energy. However, the micrograph contains several small particles with an average size of 14 nm. The surface morphology study of the ZnO/PVA nanocomposite film shows a variety of aggregates or chunks spread randomly on the upper surface of the film. The results indicate that some nanosized ZnO particles tend to form small aggregates when dispersed in the PVA polymer matrix [16]. Figure 3 shows the FTIR spectra of the ten samples. Figure 4 depicts the XRD patterns of 9 samples of the ZnO/PVA nanocomposite [19]. The formation of the ZnO/PVA nanocomposite has been confirmed by these diffraction peaks. By employing the Debye-Scherrer formula, the crystallite size of the 9 samples of PVA/ZnO nanocomposite film was measured, as shown in table 5. Doping of ZnO nanoparticles results in an increase in crystallinity, but excessive doping of ZnO nanoparticles results in a decrease in crystallinity due to variability in crystallite size and aggregate formation. Contraction of the peak widths and the sharpness of the peaks show the growth in grain size. There has been a rise in strain in the nanocomposite films due to the PVA matrix's effect on the ZnO nanocrystals. The peaks in the XRD pattern of the ZnO(6)PVA(10) sample did not match the PVA diffraction peak or the regular ZnO wurtzite hexagonal crystal structure in the PDF database (JCPDS 36-1451). No sharp peak is observed in this pattern, so the crystallite size of this ZnO/PVA nanocomposite film cannot be calculated using the Debye-Scherrer formula. Figure 5 shows the absorption spectra of the 10 samples of the ZnO/PVA nanocomposite. The sample ZnO(1)PVA(5), compared to the other samples, shows less agglomeration/aggregation and a smaller ZnO particle size of 14.23 nm. It also shows a high energy bandgap of 4.22 eV, obtained by converting the spectra into a Tauc plot, which is higher than that of the other samples made in this study.
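As a rough illustration of how a band gap such as the 4.22 eV value above is extracted from a Tauc plot, the sketch below fits the linear region of (αhν)² versus photon energy and takes the x-intercept; the synthetic absorption edge and the fit window are assumptions, not the paper's data.

```python
# Sketch of Tauc-plot band-gap extraction for a direct allowed transition:
# plot (alpha*h*nu)^2 vs photon energy and extrapolate the linear region
# to zero. Synthetic data with an assumed edge near 4.2 eV.
import numpy as np

def tauc_bandgap(energy_eV, alpha, fit_window):
    y = (alpha * energy_eV) ** 2                       # direct-transition Tauc variable
    lo, hi = fit_window
    sel = (energy_eV >= lo) & (energy_eV <= hi)        # linear rising edge
    slope, intercept = np.polyfit(energy_eV[sel], y[sel], 1)
    return -intercept / slope                          # x-intercept = optical band gap

E = np.linspace(3.0, 5.0, 400)
alpha = np.sqrt(np.clip(5e8 * (E - 4.2), 0, None)) / E  # toy absorption edge at 4.2 eV
print(f"estimated Eg ~ {tauc_bandgap(E, alpha, (4.4, 4.8)):.2f} eV")
```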
Conclusion
In summary, different samples of ZnO/PVA nanocomposite were successfully prepared by the solution casting method. Each sample was characterized by SEM, XRD, FTIR, and UV-vis, and the results for each sample were briefly discussed. The findings revealed that the morphology, particle size, degree of crystallinity, and bandgap of the ZnO/PVA nanocomposite film samples were affected by the thickness of the sample. The sample ZnO(1)PVA(5), with an optimal thickness of 0.03 mm, was selected because of its optimum bandgap value of 4.22 eV and the lower agglomeration/aggregation of its 14.23 nm particles. A thin film of ZnO/PVA nanocomposite with the optimal thickness can be applied in photovoltaic research due to its fascinating properties. However, future studies on the fabrication aspects of ZnO/PVA nanocomposites could be carried out for further improvement and investigation of their long-term use in photovoltaic research.
Data availability statement
All data that support the findings of this study are included within the article (and any supplementary files). | 2,525.6 | 2021-01-01T00:00:00.000 | [
"Materials Science"
] |
OTTERS: A powerful TWAS framework leveraging summary-level reference data
Most existing TWAS tools require individual-level eQTL reference data and thus are not applicable to summary-level reference eQTL datasets. The development of TWAS methods that can harness summary-level reference data is valuable to enable TWAS in broader settings and enhance power due to increased reference sample size. Thus, we develop a TWAS framework called OTTERS (Omnibus Transcriptome Test using Expression Reference Summary data) that adapts multiple polygenic risk score (PRS) methods to estimate eQTL weights from summary-level eQTL reference data and conducts an omnibus TWAS. We show that OTTERS is a practical and powerful TWAS tool by both simulations and application studies.
Following Stage I, which uses the GReX regression models to estimate effect sizes of SNP predictors that, in the broad sense, are expression quantitative trait loci (eQTLs), Stage II of TWAS proceeds by using these trained eQTL effect sizes to impute GReX within an independent GWAS of a complex human disease or trait. One can then test for association between the imputed GReX and phenotype, which is equivalent to a gene-based association test taking these eQTL effect sizes as corresponding test SNP weights [19][20][21].
For Stage I of TWAS, a variety of training tools exist for fitting GReX regression models using reference expression and genetic data, including PrediXcan 19 , FUSION 20 , and TIGAR 22 .
While these methods all employ different techniques for model fitting, they all require individual-level reference expression and genetic data to estimate eQTL effect sizes for TWAS. Therefore, these methods cannot be applied to emerging reference summary-level eQTL results such as those generated by the eQTLGen 23 and CommonMind 24 consortia, which provide eQTL effect sizes and p-values relating individual SNPs to gene expression. The development of TWAS methods that can utilize such summary-level reference data is valuable to permit applicability of the technique to broader analysis settings. Moreover, as TWAS power increases with increasing reference sample size 25, TWAS using summary-level reference datasets can lead to enhanced performance compared to using individual-level reference datasets since the sample sizes of the former often are considerably larger than the latter. For example, the sample size of the summary-based eQTLGen reference sample is 31,684 for blood, whereas the sample size of the individual-level GTEx V6 reference is only 338 for the same tissue. Consequently, TWAS analysis leveraging the summary-based eQTLGen dataset as reference likely can provide novel insights into genetic regulation of complex human traits.
In this work, we propose a framework that can use summary-level reference data to train GReX regression models required for Stage I of TWAS analysis.Our method is motivated by a variety of published polygenic risk score (PRS) methods [26][27][28][29][30][31] that can predict phenotype in a test dataset using summary-level SNP effect-size estimates and p-values based on single SNP tests from an independent reference GWAS.We can adapt these PRS methods for TWAS since eQTL effect sizes are essentially SNP effect sizes resulting from a reference "GWAS" of gene expression.Thus, our predicted GReX in Stage II of TWAS is analogous to the PRS constructed based on training GWAS summary statistics of single SNP-trait association.Here, we adapt four representative summary-data based PRS methods --P-value Thresholding with linkage disequilibrium (LD) clumping (P+T) 26 , frequentist LASSO 32 regression based method lassosum 27 , nonparametric Bayesian Dirichlet Process Regression (DPR) model 33 based method SDPR 29 , and Bayesian multivariable regression model based method with continuous shrinkage (CS) priors PRS-CS 28 for TWAS analysis.We apply each of these PRS methods to first train eQTL effect sizes based on a multivariable regression model from summary-level reference eQTL data (Stage I), and subsequently use these eQTL effect sizes (i.e., eQTL weights) to impute GReX and then test GReX-trait association in an independent test GWAS (Stage II).
As we will show, the PRS method with optimal performance for TWAS depends on the underlying genetic architecture for gene expression. Since the genetic architecture of expression is unknown a priori, we maximize the performance of TWAS over different possible architectures by proposing a novel TWAS framework called OTTERS (Omnibus Transcriptome Test using Expression Reference Summary data). OTTERS first constructs individual TWAS tests and p-values using eQTL weights trained by each of the PRS techniques outlined above, and then calculates an omnibus test p-value using the aggregated Cauchy association test 34 (ACAT-O) with all individual TWAS p-values (Figure 1). OTTERS is applicable to both summary-level and individual-level test GWAS data within Stage II TWAS analysis.
In subsequent sections, we first describe how to use the PRS methods on summary-level reference eQTL data in Stage I TWAS, and then describe how we can use the resulting eQTL weights to perform Stage II TWAS using OTTERS.We then evaluate the performance of individual PRS methods and OTTERS using simulated expression and real genetic data based on patterns observed in real datasets.Interestingly, when we assume individual-level reference data are available, we observe that OTTERS outperforms the popular FUSION 20 approach across all simulation settings considered.Many of the individual PRS methods also outperform FUSION in these settings.We then apply OTTERS to blood eQTL summary-level data (n=31,684) from the eQTLGen consortium 23 and GWAS summary data of cardiovascular disease from the UK Biobank (UKBB) 35 .By comparing OTTERS results to those of FUSION 20
using individual-level
GTEx reference data of whole blood tissue, we demonstrate that OTTERS using large summarylevel reference datasets and multiple gene expression imputation models can successfully reveal potential risk genes missed by FUSION based on smaller individual-level reference datasets and only one model.Finally, we conclude with a discussion.
Method Overview
For the standard two-stage TWAS approach, Stage I estimates a GReX imputation model using individual-level expression and genotype data available from a reference dataset, and then Stage II uses the eQTL effect sizes from Stage I to impute gene expression (GReX) in an independent GWAS and test for association between GReX and phenotype. GReX for test samples can be imputed from individual-level genotype data and eQTL effect size estimates.
When individual-level GWAS data are not available, one can instead use summary-level GWAS data for TWAS by applying the TWAS Z-score statistics proposed by FUSION 20 and S-PrediXcan 36 (see details in Methods).
Since eQTL summary data are analogous to GWAS summary data where gene expression represents the phenotype, we can follow the idea from PRS methods to estimate the eQTL effect sizes based on a multivariable regression model using only marginal least-squares effect estimates and p-values (based on a single variant test) from the eQTL summary data as well as a reference LD panel from samples of the same ancestry [26][27][28][29]. Although all PRS methods are applicable to TWAS Stage I, we only consider four representative methods -- P+T 26, frequentist lassosum 27, nonparametric Bayesian SDPR 29, and Bayesian PRS-CS 28 (see details in Methods).
As shown in Figure 1, OTTERS first trains GReX imputation models per gene g using the P+T, lassosum, SDPR, and PRS-CS methods, each of which infers cis-eQTL weights using cis-eQTL summary data and an external LD reference panel of the same ancestry (Stage I). Once we derive cis-eQTL weights for each training method, we can impute the respective GReX using that method and perform the respective gene-based association analysis in the test GWAS dataset.
We thus derive a set of TWAS p-values for gene g, one per training method. We then use these TWAS p-values to create an omnibus test using the ACAT-O 34 approach that employs a Cauchy distribution for inference (see details in Supplemental Methods). We refer to the p-value derived from the ACAT-O test as the OTTERS p-value. The ACAT-O 34 approach has been widely used in hypothesis testing to combine multiple testing methods for the same hypothesis [37][38][39], and it has been shown to be an effective approach to leverage different test methods to increase power while still controlling type I error. Adding TWAS p-values based on additional PRS methods to the ACAT-O test can possibly improve power further at the cost of additional computation.
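For readers unfamiliar with the Cauchy combination underlying ACAT-O, the following is a minimal sketch of how the per-method TWAS p-values for a gene could be merged; equal weights are assumed, and this illustrates the published ACAT formula rather than the exact OTTERS implementation.

```python
# Minimal sketch of the Cauchy combination (ACAT) used to merge the
# per-method TWAS p-values for one gene; equal weights are assumed.
import numpy as np
from scipy.stats import cauchy

def acat(pvals, weights=None):
    p = np.asarray(pvals, dtype=float)
    w = np.ones_like(p) / p.size if weights is None else np.asarray(weights, float)
    t = np.sum(w * np.tan((0.5 - p) * np.pi))      # Cauchy-transformed statistic
    return cauchy.sf(t / np.sum(w))                # combined (omnibus) p-value

# e.g. p-values from P+T(0.001), P+T(0.05), lassosum, SDPR, PRS-CS for one gene
print(acat([2e-6, 0.3, 0.04, 5e-5, 1e-4]))
```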
Simulation Study
We used real genotype data from 1894 whole genome sequencing (WGS) samples from the Religious Orders Study and Rush Memory and Aging Project (ROS/MAP) cohort 40,41 and the Mount Sinai Brain Bank (MSBB) study 42 for simulation. We divided 14,772 genes into five groups according to gene length and randomly selected 100 genes from each group (500 genes in total). We randomly split samples into 568 training (30%) and 1326 testing samples (70%) to mimic a relatively small sample size in the real reference panel for training gene expression imputation models. From the real genotype data, we simulated 6 scenarios with 2 different proportions of causal cis-eQTL (0.001, 0.01), as well as 3 different proportions of gene expression variance explained by causal eQTL, h2 = (0.01, 0.05, 0.1).
We generated the expression of each gene using the multivariable regression model described in the Methods. To evaluate the type I error of the individual PRS methods along with OTTERS, we picked one simulated replicate per gene from the scenario with h2 = 0.1 and a causal cis-eQTL proportion of 0.001, simulated 2 × 10^3 phenotypes from N(0,1), and permuted the eQTL weights for TWAS to perform a total of 10^6 null simulations. OTTERS was shown to be well calibrated in the tails of the distribution, as shown by quantile-quantile (Q-Q) plots of TWAS p-values in Figure S1. We also observed that OTTERS had well-controlled type I error at stringent significance levels between 10^−4 and 2.5 × 10^−6 (Table S1), which are typically utilized in TWAS. For more modest significance thresholds (e.g., 10^−2), we noted that OTTERS had a slightly inflated type I error rate.
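A minimal sketch of the kind of expression-simulation recipe described above is shown below; the choice of effect-size distribution, the standardization, and the toy genotype matrix are illustrative assumptions rather than the exact ROS/MAP-based procedure.

```python
# Hedged sketch: simulate expression with a given proportion of causal
# cis-SNPs (p_causal) and a target expression heritability (h2_e).
import numpy as np

def simulate_expression(genotypes, p_causal=0.001, h2_e=0.1, rng=None):
    rng = np.random.default_rng(rng)
    n, m = genotypes.shape
    n_causal = max(1, int(round(p_causal * m)))
    causal = rng.choice(m, size=n_causal, replace=False)
    beta = rng.normal(0.0, 1.0, size=n_causal)
    g = genotypes[:, causal] @ beta                       # genetic component
    g = (g - g.mean()) / g.std() * np.sqrt(h2_e)          # scale to variance h2_e
    e = rng.normal(0.0, np.sqrt(1.0 - h2_e), size=n)      # residual noise
    return g + e

X = np.random.default_rng(1).binomial(2, 0.3, size=(568, 2000)).astype(float)
expr = simulate_expression(X, p_causal=0.01, h2_e=0.05, rng=7)
print(expr.var())
```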
This modest inflation is consistent with the findings of the original ACAT-O work, which showed that the Cauchy-distribution-based approximation that ACAT-O employs may not be accurate for larger p-values when the correlation among tests is strong 34. This suggests that modest OTTERS p-values should be interpreted with caution.
We also compared the performance of our individual PRS training methods to those of FUSION assuming individual-level reference data were available for the latter method to train GReX models. As shown in Figure 2A, we interestingly observed that our training methods yielded similar or improved test R² compared to FUSION in this situation, with SDPR and PRS-CS outperforming FUSION across all simulation settings. Comparing TWAS power, we found that OTTERS outperformed FUSION by a considerable margin in our simulations (Figure 2B). These simulation results suggest that, while we developed OTTERS based on PRS training methods to handle summary-level reference data, OTTERS can still improve TWAS power when individual-level reference data are available. This is likely because OTTERS accounts for multiple possible models of genetic architectures of gene expression assumed by the different PRS training methods.
GReX Imputation Accuracy in GTEx V8 Blood Samples
To evaluate the imputation accuracy of the P+T (0.001), P+T (0.05), lassosum, SDPR, and PRS-CS methods in real data, we applied these training methods to summary-level eQTL reference data from the eQTLGen consortium 23 with n = 31,684 blood samples, to train GReX imputation models for 16,699 genes. For test data, we downloaded the transcriptomic data of 315 blood tissue samples that are in GTEx V8 but were not part of GTEx V6 (as GTEx V6 samples contributed to the reference eQTLGen consortium summary data). For these 315 samples, we compared imputed GReX to observed expression levels. We considered trained imputation models with test R² > 0.01 as "valid" models, as suggested by previous TWAS methods 20,43. We also compared the imputation accuracy of these five training methods to that of FUSION based on a smaller individual-level training dataset (the individual-level GTEx V6 reference dataset; see Methods). For such models, we compared the test R² for genes that had test R² > 0.01 by at least one training method.
By comparing the test R² per "valid" GReX imputation model by PRS-CS versus the other methods (Figure 3), we observed that PRS-CS had the best overall performance for imputing GReX, as it provided the most "valid" models with higher GReX imputation accuracy compared to the P+T methods, lassosum, SDPR, and FUSION. Comparing the test R² among the other four training methods, we observed that the two P+T methods obtained similar test R² per "valid" model. Meanwhile, the test R² per valid model varied widely among the P+T methods, lassosum, and SDPR (Figure S3), suggesting that none of these four was optimal across all genes and that their performance likely depended on the underlying unknown genetic architecture. These results are consistent with our simulation results.
TWAS of Cardiovascular Disease
Using the eQTL weights trained by the P+T (0.001), P+T (0.05), lassosum, SDPR, and PRS-CS methods with the eQTLGen 23 reference data and reference LD from GTEx V8 WGS samples 44, we applied our OTTERS framework to the summary-level GWAS data of cardiovascular disease from UKBB (n = 459,324, case fraction = 0.319) 35. Genes with TWAS p-values below the Bonferroni-corrected significance level were identified as significant TWAS genes for cardiovascular risk.
In total, we identified 40 significant TWAS genes by using OTTERS. To identify independently significant TWAS genes, we calculated the R² (squared correlation) between the GReX predicted by PRS-CS for each pair of genes. For a pair of genes with predicted GReX R² > 0.5, we only kept the gene with the smaller TWAS p-value as the independently significant gene. OTTERS obtained 38 independently significant TWAS genes (Table 2). Several of these genes were identified by FUSION only when considering the GTEx V6 reference data of artery, thyroid, adipose visceral, and nerve tibial tissues. For example, the most significant gene FES (OTTERS p-value = 2.87 × 10⁻³²) was identified by FUSION using GTEx reference data of artery tibial, thyroid, and adipose visceral omentum tissues, and was also identified as a TWAS risk gene for high blood pressure, which is strongly related to cardiovascular disease 46.
By comparing the OTTERS results with those obtained by the individual methods (Table 2; Figure 4; Figure S4), we found that all individual methods contributed to the OTTERS results. For example, the novel risk gene LINC01093 was only identified by lassosum, the genes CPEB4, SIDT2, and ACE were only detected by PRS-CS and SDPR, and the novel risk gene EDN3 was only identified by the P+T methods. To better understand the differences among the individual methods, we plotted the eQTL weights estimated by P+T (0.001), P+T (0.05), lassosum, SDPR, and PRS-CS for three example genes that were only detected by one or two individual methods (Figures S5-S7). For these genes, we plotted the eQTL weights produced by each method, color coded with respect to −log10(GWAS p-value) from the UKBB GWAS summary statistics and shape coded with respect to the direction of the UKBB GWAS Z-score statistics. Generally, significant TWAS p-values would be obtained by methods that assigned eQTL weights of relatively large magnitude to SNPs with relatively more significant GWAS p-values.
In Figure S5, we showed the eQTL weights for gene SIDT2, which was a significant risk gene identified by both PRS-CS and SDPR and had p-values < 10⁻⁴ by the other methods.
These results were consistent with our simulation study results, demonstrating that the performance of the different individual methods depended on the underlying genetic architecture. We do note that there were a handful of genes identified by an individual method that were not significant using OTTERS (Table S2). Nonetheless, the omnibus test borrows strength across all individual methods, and thus generally achieves higher TWAS power and identifies the most robust group of TWAS risk genes.
By examining the Q-Q plots of TWAS p-values, we observed a moderate inflation for all methods (Figure S8). Such inflation in TWAS results is not uncommon [48][49][50]; it could be due to similar inflation in the GWAS summary data and to not distinguishing pleiotropy from mediation effects for the considered gene expression and phenotype of interest 51 (Figure S9). We also observed a notable inflation in the GWAS p-values of cardiovascular disease from UKBB (Figure S9), as we estimated the LD score regression 52 intercept to be 1.1 from the GWAS summary data.
We did not consider directly comparing to FUSION in our above TWAS analyses of cardiovascular disease since we used the summary-level eQTLGen reference data. However, to assess the performance of OTTERS and FUSION in a real study where individual-level reference data are available, we performed an additional TWAS analysis of cardiovascular disease in the UK Biobank using the GTEx V8 data of 574 whole blood samples as the reference data. We trained OTTERS Stage I using cis-eQTL summary statistics obtained from these 574 GTEx V8 whole blood samples and reference LD from GTEx V8 WGS samples, and trained FUSION models using individual-level genotype data and gene expression data of the same 574 whole blood samples.
We tested TWAS association for 19,653 genes and identified genes with TWAS p-values < 2.53 × 10⁻⁶ (Bonferroni-corrected significance level) as significant TWAS genes. Training R² > 0.01 was used to select "valid" GReX imputation models for TWAS (Figure S10). To identify independently significant TWAS genes, we calculated the R² between the GReX predicted by lassosum for each pair of genes, since lassosum had the best training R² (Figure S10). For a pair of genes with predicted GReX R² > 0.5, we only kept the gene with the smaller TWAS p-value as the independently significant gene. As a result, OTTERS obtained 34 independently significant TWAS genes, while FUSION identified 21 independently significant TWAS genes (Figure S11). A total of 14 genes were identified by both FUSION and OTTERS (Table S3).
These results demonstrate the advantage of OTTERS in using multiple PRS training methods to account for the unknown genetic architecture of gene expression, which is consistent with our simulation results. These results also show the advantage of using eQTL summary data with a larger training sample size, as more independently significant TWAS genes were identified using the eQTLGen summary reference data (38 vs. 34), even with a more stringent rule (test R² > 0.01 instead of training R² > 0.01) applied to select test genes with "valid" GReX imputation models.
Computational Time
The computational time per gene of the different PRS methods depends on the number of test variants considered for the target gene. Thus, we calculated the computational time and memory usage for 4 groups of genes whose numbers of test variants were <2000, between 2000 and 3000, between 3000 and 4000, and >4000, respectively. Among all tested genes in our real studies, the median number of test variants per gene is 3152, and the proportion of genes in each group is 10.3%, 33.4%, 34.5%, and 21.8%, respectively. For each group, we randomly selected 10 genes on Chromosome 4 to evaluate the average computational time and memory usage per gene. We benchmarked the computational time and memory usage of each method on one Intel(R) Xeon(R) processor (2.10 GHz). The evaluation was based on 1000 MCMC iterations for SDPR and PRS-CS (default) without parallel computation (Table S4). We show that P+T and lassosum were computationally more efficient than SDPR and PRS-CS, whose speed was impeded by the need for MCMC iterations. Between the two Bayesian methods, SDPR, implemented in C++, uses significantly less time and memory than PRS-CS, implemented in Python.
Discussion
Our OTTERS framework represents an omnibus TWAS tool that can leverage summary-level expression and genotype results from a reference sample, thereby robustly expanding the use of TWAS into more settings. To this end, we adapted and evaluated five different PRS methods assuming different underlying genetic models, including the relatively simple method P+T 26 with two different p-value thresholds (0.001 and 0.05), the frequentist method lassosum 27, and the Bayesian methods PRS-CS 28 and SDPR 29, within our omnibus test for optimal inference. We note that additional PRS methods such as MegaPRS 30 or PUMAS 31 could also be implemented as additional OTTERS Stage I training methods. Higher TWAS power might be obtained by adding more PRS methods in OTTERS Stage I, at an additional computational cost. We also note that the existing SMR-HEIDI 53 method, which uses summary-level data from GWAS and eQTL studies to test for possible causal genetic effects on a trait of interest that are mediated through gene expression, could also be used as an alternative to TWAS. However, the SMR method generally restricts which eQTL are considered, excluding those whose eQTL p-values are larger than a certain threshold, e.g., 0.05.
In simulation studies, we demonstrated that the performance of each of these five PRS methods depended substantially on the underlying genetic architecture of gene expression, with the P+T methods generally performing better for sparse architectures and the Bayesian methods performing better for denser architectures. Consequently, since the genetic architecture of gene expression is unknown a priori, we believe this justifies the use of the omnibus TWAS test implemented in OTTERS in practice, as this test had near-optimal performance across all simulation scenarios considered. While we developed our methods with summary-level reference data in mind, we note that our prediction methods and OTTERS perform well (in terms of imputation accuracy and power) relative to existing TWAS methods like FUSION when individual-level reference data are available.
In our real data application using UKBB GWAS summary-level data, we compared OTTERS TWAS results using reference eQTL summary data from the eQTLGen consortium to FUSION TWAS results using a substantially smaller individual-level reference dataset from GTEx V6. OTTERS identified 13 significant TWAS risk genes that were missed by FUSION using individual-level GTEx V6 reference data of blood tissue, suggesting that the use of larger reference datasets like eQTLGen in TWAS can yield novel findings. Interestingly, the genes missed by FUSION were instead detected using individual-level GTEx reference data of other tissue types that are more directly related to cardiovascular disease. By comparing OTTERS to FUSION when the same individual-level GTEx V8 reference data of whole blood samples were used, we still observed that OTTERS identified more risk genes than FUSION, which we believe is due to the former method accounting for the unknown genetic architecture of gene expression by using multiple regression methods to train GReX imputation models. These applied results were consistent with our simulation results.
Among all individual methods, P+T is the most computationally efficient. The Bayesian methods SDPR and PRS-CS require more computation time than the frequentist method lassosum, as the former require a large number of MCMC iterations for model fitting. By comparing the performance of these five methods in terms of imputation accuracy and TWAS power in simulations and real applications, we conclude that none of these methods was optimal across different genetic architectures. We found that all methods provided distinct and considerable contributions to the final OTTERS TWAS results. These results demonstrate the benefits of OTTERS in practice, since OTTERS can combine the strengths of these individual methods to achieve optimal performance.
The OTTERS framework does have its limitations. First, training GReX imputation models with all individual methods costs on average ~20 minutes per gene across the 5 training models, which might be computationally challenging when studying eQTL summary data of multiple tissue types and ~20K genome-wide genes; users might consider prioritizing the P+T (0.001), lassosum, and SDPR training methods, as these three provide complementary results in our studies. Second, the currently available eQTL summary statistics are mainly derived from individuals of European descent. OTTERS trains GReX imputation models based on these eQTL summary statistics, and the resulting imputed GReX could consequently have attenuated cross-population predictive performance 56. This might limit the transferability of our TWAS results across populations. Third, OTTERS cannot provide the direction of the identified gene-phenotype associations, which should instead be inferred from the sign of the TWAS Z-score statistic per training method. Last, even though the method applies in principle to integrating both cis- and trans-eQTL with GWAS data, the computation time and the availability of summary-level trans-eQTL reference data remain the main obstacles; our current OTTERS tool only considers cis-eQTL effects. Extension of OTTERS to enable cross-population TWAS and to incorporate trans-eQTL effects is part of our ongoing research but out of the scope of this work.
Our novel OTTERS framework using large-scale eQTL summary data has the potential to identify more significant TWAS risk genes than standard TWAS tools that use smaller individual-level reference transcriptomic data and only a single regression method for training GReX imputation models. This tool provides the opportunity to leverage not only the available public eQTL summary data of various tissues for conducting TWAS of complex traits and diseases, but also the emerging summary-level data of other types of molecular QTL such as splicing QTLs, methylation QTLs, metabolomics QTLs, and protein QTLs. For example, OTTERS could be applied to perform proteome-wide association studies using summary-level reference data of genetic-protein relationships, such as those reported by the SCALLOP consortium 57, and epigenome-wide association studies using summary-level reference data of methylation-phenotype relationships reported by the Genetics of DNA Methylation Consortium (GoDMC) (see Web Resources). OTTERS would be most useful for the broad group of researchers who only have access to summary-level QTL reference data and summary-level GWAS data. The feasibility of integrating summary-level molecular QTL data and GWAS data makes our OTTERS tool valuable for wide application in current multi-omics studies of complex traits and diseases.
Traditional Two-Stage TWAS Analysis
Stage I of TWAS estimates a GReX imputation model using individual-level expression and genotype data available from a reference dataset. Consider the following GReX imputation model from n individuals and m SNPs (a multivariable regression model assuming linear additive genetic effects) within the reference dataset:
E_g = X β + ε,  ε ∼ N(0, σ² I).  (Equation 1)
Here, E_g is a vector representing the gene expression levels of gene g, X is an n × m matrix of genotype data for the m SNP predictors proximal to or within gene g, β is a vector of genetic effect sizes (referred to, in a broad sense, as eQTL effect sizes), and ε is the error term. Here, we consider only cis-SNPs within 1 MB of the flanking 5' and 3' ends of the gene as genotype predictors, following previous TWAS methods 19,20,22. Once we configure the model in Equation 1, we can employ methods like PrediXcan, FUSION, and TIGAR to fit the model and obtain estimates of the eQTL effect sizes (β̂).
Stage II of TWAS uses the eQTL effect sizes (β̂) from Stage I to impute gene expression (GReX) in an independent GWAS and then test for association between GReX and phenotype.
Given individual-level GWAS data with genotype data X_new and the eQTL effect sizes (β̂) from Stage I, the GReX can be imputed by GReX̂ = X_new β̂. With summary-level GWAS data, the corresponding gene-based association test uses a burden-type Z-score statistic of the form
Z_TWAS = ( Σ_j β̂_j σ̂_j Z_j ) / √( β̂' V β̂ ),  (Equation 2)
where Z_j is the single-variant Z-score test statistic in the GWAS for the j-th SNP, j = 1, …, m, over all test SNPs that have both eQTL weights with respect to the test gene and GWAS Z-scores; σ̂_j is the genotype standard deviation of the j-th SNP; and V denotes the genotype correlation matrix in the FUSION Z-score statistic and the genotype covariance matrix in the S-PrediXcan Z-score statistic of the test SNPs. In particular, σ̂_j and V can be approximated from a reference panel with genotype data of samples of the same ancestry, such as those available from the 1000 Genomes Project 58. If β̂ are standardized effect sizes estimated assuming standardized genotype and gene expression in Equation 1, the FUSION and S-PrediXcan Z-score statistics are equivalent 13.
Otherwise, the S-PrediXcan Z-score should be applied to avoid false positive inflation.
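A minimal sketch of this summary-level Stage II calculation is given below, assuming NumPy arrays `w_hat` (Stage I eQTL weights), `z_gwas` (GWAS single-variant Z-scores), `sd_geno` (reference-panel genotype standard deviations), and `ld_corr` (reference LD correlation matrix); all of these variable names are illustrative and are not taken from the OTTERS software itself.

```python
import numpy as np
from scipy.stats import norm

def twas_z(w_hat, z_gwas, sd_geno, ld_corr):
    """Burden-type TWAS Z-score from GWAS summary statistics (sketch of Equation 2).

    w_hat:   (m,) eQTL weights for the test gene
    z_gwas:  (m,) GWAS single-variant Z-scores for the same SNPs
    sd_geno: (m,) genotype standard deviations from a reference panel
    ld_corr: (m, m) LD correlation matrix from the same reference panel
    """
    scaled = w_hat * sd_geno                 # weight each SNP by its genotype SD
    num = np.sum(scaled * z_gwas)            # weighted sum of GWAS Z-scores
    den = np.sqrt(scaled @ ld_corr @ scaled) # SD of the imputed GReX burden
    z = num / den
    pval = 2.0 * norm.sf(abs(z))             # two-sided p-value
    return z, pval
```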
TWAS Stage I Analysis using Summary-Level Reference Data
We now consider a variation of TWAS Stage I that estimates the cis-eQTL effect sizes β̂ of the multivariable regression model (Equation 1) from summary-level reference data. We assume that the summary-level reference data provide information on the association between each single genetic variant j (j = 1, …, m) and the expression of gene g. This information generally consists of marginal effect size estimates (β̃_j, j = 1, …, m) and p-values derived from the following single variant regression models:
E_g = x_j β_j + ε_j,  j = 1, …, m.  (Equation 3)
Here, x_j is an n × 1 vector of genotype data for genetic variant j. Since eQTL summary data are analogous to GWAS summary data where gene expression represents the phenotype, we can estimate the eQTL effect sizes β̂ using the marginal least squares effect estimates (β̃_j, j = 1, …, m) and p-values from the eQTL summary data, together with reference linkage disequilibrium (LD) information from the same ancestry [26][27][28][29]. Although all PRS methods apply to this TWAS Stage I framework, we only consider four representative methods, as follows. P+T: The P+T method selects eQTL weights by LD-clumping and P-value Thresholding 26.
Given a p-value threshold and an LD r² threshold, we first exclude SNPs whose marginal p-values from the eQTL summary data exceed the p-value threshold, or that are strongly correlated (LD r² greater than the LD threshold) with another SNP having a more significant marginal p-value (or Z-score statistic value). For the remaining selected test SNPs, we use the marginal standardized eQTL effect sizes from the eQTL summary data as eQTL weights for TWAS in Stage II. We considered an LD r² threshold of 0.99 and p-value thresholds of 0.001 and 0.05 in this paper and implemented the P+T method using PLINK 1.9 55 (see Web Resources). We denote the P+T method with p-value threshold equal to 0.001 and 0.05 as P+T (0.001) and P+T (0.05), respectively.
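A minimal sketch of this selection step follows, assuming a pandas DataFrame `eqtl` with columns `snp`, `beta_std` (marginal standardized effect), and `pval`, and a callable `ld_r2(a, b)` returning the LD r² between two SNPs; these names are illustrative only (the paper itself performs clumping with PLINK 1.9 rather than custom code).

```python
def p_plus_t(eqtl, ld_r2, p_thresh=0.001, r2_thresh=0.99):
    """LD-clumping + p-value thresholding for eQTL weights (illustrative sketch)."""
    # Keep only SNPs passing the p-value threshold, most significant first.
    passing = eqtl[eqtl["pval"] <= p_thresh].sort_values("pval")
    kept = []
    for _, row in passing.iterrows():
        # Discard a SNP if it is in strong LD with an already-kept,
        # more significant SNP (the clumping step).
        if all(ld_r2(row["snp"], k["snp"]) <= r2_thresh for k in kept):
            kept.append(row)
    # The marginal standardized effects of the surviving SNPs become the TWAS weights.
    return {k["snp"]: k["beta_std"] for k in kept}
```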
Frequentist lassosum: With standardized E_g and X, one can show that the marginal least squares eQTL effect size estimates from the single variant regression model (Equation 3) are β̃ = X'E_g / n and that the LD correlation matrix is R = X'X / n; that is,
X'E_g = n β̃ and X'X = n R.  (Equation 4)
By approximating R by R_s = (1 − s) R_ref + s I (with a tuning parameter 0 < s < 1, a reference LD correlation matrix R_ref from an external panel such as one from the 1000 Genomes Project 58, and an identity matrix I) in the LASSO 32 penalized loss function, the frequentist lassosum method 27 can tune the LASSO penalty parameter λ and s using a pseudovalidation approach and then solve for the eQTL effect size estimates β̂ by minimizing the approximated LASSO loss function, requiring no individual-level data (see details in Supplemental Methods).
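For concreteness, the sketch below evaluates the approximated LASSO loss described above (up to an additive constant) for a candidate weight vector; the actual lassosum software solves this objective by coordinate descent and tunes s and λ by pseudovalidation. Variable names are illustrative assumptions, not the package's API.

```python
import numpy as np

def lassosum_objective(beta, r_marginal, ld_ref, s, lam):
    """Approximated LASSO loss used by lassosum-style fitting (sketch, up to a constant).

    beta:       (m,) candidate eQTL effect sizes
    r_marginal: (m,) marginal standardized effects (beta_tilde from eQTL summary data)
    ld_ref:     (m, m) reference LD correlation matrix
    s:          shrinkage parameter in (0, 1) blending ld_ref with the identity
    lam:        LASSO penalty parameter
    """
    quad = (1.0 - s) * beta @ ld_ref @ beta + s * beta @ beta  # beta' R_s beta
    fit = -2.0 * beta @ r_marginal                             # -2 beta' beta_tilde
    penalty = 2.0 * lam * np.sum(np.abs(beta))
    return quad + fit + penalty
```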
OTTERS Framework
As shown in Figure 1, OTTERS first trains GReX imputation models per gene g using the P+T, lassosum, SDPR, and PRS-CS methods, each of which infers cis-eQTL weights using cis-eQTL summary data and an external LD reference panel of similar ancestry (Stage I). Once we derive cis-eQTL weights for each training method, we can impute the respective GReX using that method and perform the respective gene-based association analysis in the test GWAS dataset using the formulas given in Equation 2 (Stage II). We thus derive a set of TWAS p-values for gene g, one p-value for each training model that we applied. We then use these TWAS p-values to create an omnibus test using the ACAT-O 34 approach, which employs a Cauchy distribution for inference (see details in Supplemental Methods). We refer to the p-value derived from the ACAT-O test as the OTTERS p-value.
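A minimal sketch of the Cauchy-combination step with equal weights is shown below; it is not the ACAT-O implementation itself (the published method also uses a special-case approximation for extremely small p-values), and the clipping constants are illustrative assumptions.

```python
import numpy as np

def acat_pvalue(pvals, weights=None):
    """Combine per-method TWAS p-values into one omnibus p-value via the Cauchy combination."""
    pvals = np.asarray(pvals, dtype=float)
    if weights is None:
        weights = np.full(pvals.shape, 1.0 / pvals.size)
    # Guard against p-values of exactly 0 or 1, which break the tangent transform.
    pvals = np.clip(pvals, 1e-300, 1.0 - 1e-16)
    stat = np.sum(weights * np.tan((0.5 - pvals) * np.pi))
    return 0.5 - np.arctan(stat / np.sum(weights)) / np.pi

# Example: combine the p-values from the five individual training methods.
# acat_pvalue([1e-6, 0.3, 0.02, 0.8, 5e-4])
```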
Marginal eQTL Effect Sizes
In practice, when training GReX imputation models using reference eQTL summary data, the marginal standardized eQTL effect sizes were approximated by β̃_j ≈ Z_j / √(n_g), where Z_j denotes the corresponding eQTL Z-score statistic from the single variant test and n_g denotes the median sample size across all cis-eQTLs of the target gene g.
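The snippet below illustrates this approximation; the array names are assumptions made for the example.

```python
import numpy as np

def standardized_marginal_effects(z_eqtl, n_per_snp):
    """Approximate standardized marginal eQTL effects from summary-level Z-scores.

    z_eqtl:    (m,) single-variant eQTL Z-scores for the target gene
    n_per_snp: (m,) per-SNP sample sizes reported in the eQTL summary data
    """
    n_med = np.median(n_per_snp)          # median cis-eQTL sample size for the gene
    return np.asarray(z_eqtl) / np.sqrt(n_med)
```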
LD Clumping
We performed LD clumping with an r² threshold of 0.99 for all individual methods in both the simulation and real studies. Using PRS-CS as an example, we also showed that LD clumping does not affect GReX imputation accuracy compared with no clumping in the real data testing (Figure S12).
LD Blocks for lassosum, PRS-CS, and SDPR
LD blocks were determined externally by ldetect 60 for lassosum and PRS-CS, and internally for SDPR, which ensures that SNPs in one LD block do not have non-ignorable correlation (r² > 0.1) with SNPs in other blocks.
Figure Titles and Legends
Figure 1.
Figure 2. Test R² (A) and TWAS power (B) comparison in simulation studies.
Figure 4. Manhattan plot of TWAS results by OTTERS, using GWAS summary-level statistics of cardiovascular disease and GReX imputation models fitted with eQTLGen summary statistics. Independently significant TWAS risk genes are labeled.
We compared our OTTERS results with the TWAS results shown on TWAS hub (see Web Resources) obtained by FUSION using the same UKBB GWAS summary data of cardiovascular disease but a smaller individual-level reference expression dataset from GTEx V6. OTTERS identified 38 independently significant TWAS genes (Table 2; Figure 3B), compared to 17 independently significant genes by P+T (0.001), 11 by P+T (0.05), 10 by lassosum, 41 by SDPR, and 12 by PRS-CS. Among these 38 independent TWAS risk genes identified by OTTERS, gene RP11-378A13.1 (OTTERS p-value = 9.78 × 10⁻⁹) was not within 1 MB of any known GWAS risk loci with genomic-control corrected p-value < 5 × 10⁻⁸ in the UKBB summary-level GWAS data. This novel risk gene RP11-378A13.1 was also identified as a significant TWAS risk gene in blood tissue for systolic blood pressure, high cholesterol, and cardiovascular disease by FUSION 1. Other risk genes (e.g., HAUS8, RPL28, CTSZ) were identified by FUSION only when considering all available tissue types in the GTEx V6 reference data.
Compared to lassosum, SDPR had more significant GWAS SNPs colocalized with eQTLs having relatively large weights in the test region, and PRS-CS had more non-significant GWAS SNPs colocalized with eQTLs having zero weights. Compared to the P+T methods, SDPR and PRS-CS, which are based on a multivariate regression model, modeled the LD among all test SNPs and thus estimated eQTL weights leading to significant TWAS findings. In Figure S6, we provided the results for gene EDN3, which was only identified by the P+T methods (p-values ≤ 9.15 × 10⁻⁸). Compared to the P+T methods, SDPR (p-value = 5.9 × 10⁻³) and PRS-CS (p-value = 0.0158) had fewer significant GWAS SNPs colocalized with eQTLs that had relatively large weights in the test region, while lassosum (p-value = 8.6 × 10⁻⁶) assigned relatively large weights to more non-significant GWAS SNPs. In Figure S7, we provided results for gene LINC01093, which was only identified by lassosum. For this gene, lassosum was the only method that produced relatively large eQTL weights co-localized with GWAS-significant SNPs; SDPR and PRS-CS estimated near-zero weights for most test SNPs with significant GWAS p-values in the test region, and most significant GWAS SNPs did not have eQTL test p-values < 0.001 or 0.05 and were thus filtered out by the P+T methods.
To enable the use of OTTERS by the public, we provide an integrated tool (see Availability of data and materials) to: (1) train GReX imputation models (i.e., estimate eQTL weights in Stage I) using eQTL summary data by P+T, lassosum, SDPR, and PRS-CS; (2) conduct TWAS (i.e., test gene-trait association in Stage II) using both individual-level and summary-level GWAS data with the estimated eQTL weights; and (3) apply ACAT-O to aggregate the TWAS p-values from the individual training methods. Since the existing tools for P+T, lassosum, SDPR, and PRS-CS were originally developed for PRS calculations, we adapted and optimized them for training GReX imputation models in our OTTERS tool. For example, we integrate the TABIX 54 and PLINK 55 tools in OTTERS to extract input data per target gene more efficiently. We also enable parallel computation in OTTERS for training GReX imputation models and testing gene-trait associations for multiple genes.
Bayesian SDPR and PRS-CS: The Bayesian Dirichlet process regression (DPR) framework estimates β̂ for the underlying multivariable regression model in Equation 1 by assuming a normal prior for β and a Dirichlet process prior 59 for its variance, with a given base distribution and concentration parameter. SDPR 29 assumes the same DPR model but can be applied to estimate the eQTL effect sizes β̂ using only eQTL summary data (see details in Supplemental Methods). The PRS-CS method 28 assumes a normal prior for β and a non-informative, scale-invariant Jeffreys prior on the residual variance σ² in Equation 1, where a local shrinkage parameter has an independent gamma-gamma prior and a global shrinkage parameter controls the overall sparsity of β. PRS-CS sets hyperparameters a = 1 and b = 1/2 so that the prior density of β has a sharp peak around zero, shrinking the small effect sizes of potentially false eQTL towards zero, as well as heavy, Cauchy-like tails that exert little influence on eQTLs with larger effects. Posterior estimates β̂ are obtained from the eQTL summary data (i.e., marginal effect size estimates β̃ and p-values) and a reference LD correlation matrix by Gibbs sampling (see details in Supplemental Methods). We set the global shrinkage parameter to the square of the proportion of causal variants in the simulations and to 10⁻⁴ per gene in the real data application.
The median cis-eQTL sample size per gene was also used as the sample size value required by the lassosum, SDPR, and PRS-CS methods. Since eQTL summary datasets (e.g., eQTLGen) are generally obtained by meta-analysis of multiple cohorts, the sample size per test SNP can vary across the cis-eQTLs of a test gene; using the median cis-eQTL sample size ensures robust performance when applying these eQTL summary data based methods.
Table 1. Test R² in 315 whole blood tissue samples from GTEx V8.
Given gene expression E_g simulated from the multivariate regression model E_g = Xβ + ε with a standardized genotype matrix X and ε ∼ N(0, (1 − h²)I), we assume GWAS phenotype data of N samples are simulated from a linear regression model in which the simulated GReX explains a proportion h_p² of the phenotypic variance and the residual error is normally distributed. Conditional on the true genetic effect sizes, the GWAS Z-score test statistics of all test SNPs follow a multivariate normal distribution whose mean is proportional to √(N h_p²) and whose covariance is R, the correlation matrix of the standardized genotypes of the test samples 38. Thus, for a given GWAS sample size, we can generate GWAS Z-score statistic values directly from this multivariate normal distribution (a sketch of this sampling step is given below). We restricted training to genes whose cis-eQTL summary data had a sample size > 3000; as a result, we used cis-eQTL summary data of 16,699 genes from eQTLGen to train GReX imputation models for use in OTTERS in this study. The GWAS summary statistics of cardiovascular disease from UKBB (n = 459,324, case fraction = 0.319) 35 were generated by BOLT-LMM based on the Bayesian linear mixed model per SNP 63, with assessment center, sex, age, and squared age as covariates. Although BOLT-LMM was derived from a quantitative trait model, it can be applied to analyze case-control traits and has a well-controlled false positive rate when the trait is sufficiently balanced (case fraction ≥ 10%) and samples are of the same ancestry. The tested dichotomous cardiovascular disease phenotype includes a list of sub-phenotypes: hypertension, heart/cardiac problem, peripheral vascular disease, venous thromboembolic disease, stroke, transient ischaemic attack (TIA), …
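The sketch below shows the sampling step for the simulated GWAS Z-scores, assuming the mean vector `mu_z` (which depends on the simulated effect sizes, the GWAS sample size, and h_p²) and the LD correlation matrix `ld_corr` have already been computed; the names are illustrative.

```python
import numpy as np

def simulate_gwas_z(mu_z, ld_corr, n_replicates=1, seed=0):
    """Draw GWAS Z-score vectors from a multivariate normal with LD-induced correlation.

    mu_z:    (m,) mean vector of the Z-scores under the simulated genetic effects
    ld_corr: (m, m) correlation matrix of the standardized test-SNP genotypes
    """
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(mu_z, ld_corr, size=n_replicates)
```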
Table 2. Independent TWAS risk genes of cardiovascular disease identified by OTTERS.
Reference eQTL summary data from the eQTLGen consortium and GWAS summary data from UKBB were used. The corresponding TWAS p-values by the 5 individual PRS methods and OTTERS are shown in the table, with significant p-values in bold; p-values for genes with test GReX R² ≤ 0.01 are shown as a dash. a: Risk gene of UKBB cardiovascular disease in TWAS-hub identified using GTEx whole blood tissue.
b: Risk genes of UKBB cardiovascular disease in TWAS-hub identified using other GTEx tissue types. c: Novel risk gene.
"Computer Science",
"Biology"
] |
DisCountNet: Discriminating and Counting Network for Real-Time Counting and Localization of Sparse Objects in High-Resolution UAV Imagery
Recent deep-learning counting techniques revolve around two distinct regimes of data: sparse data, which favors detection networks, or dense data, where density map networks are used. Both techniques fail to address a third scenario, where dense objects are sparsely located. Raw aerial images represent sparse distributions of data in most situations. To address this issue, we propose a novel and exceedingly portable end-to-end model, DisCountNet, and an example dataset to test it on. DisCountNet is a two-stage network that uses theories from both detection and heat-map networks to provide a simple yet powerful design. The first stage, DiscNet, operates on the theory of coarse detection, but does so by converting a rich and high-resolution image into a sparse representation where only important information is encoded. Following this, CountNet operates on the dense regions of the sparse matrix to generate a density map, which provides fine locations and count predictions on densities of objects. Comparing the proposed network to current state-of-the-art networks, we find that we can maintain competitive performance while using a fraction of the computational complexity, resulting in a real-time solution.
Introduction
Counting objects is a fine-grain scene-understanding problem which can arise in many real-world applications including counting people in crowded scenes and surveillance scenarios [1][2][3][4][5], counting vehicles [6], counting cells for cancer detection [7], and counting in agriculture settings for yield estimation and land use [8,9]. Counting questions also appear as some of the most difficult and challenging questions in Visual Question Answering (VQA). Despite very promising results in "yes/no" and "what/where/who/when" questions, counting questions (how many) are the most difficult questions for the system, which have the lowest performance [10,11].
In natural resource management, livestock populations are managed on pastures and rangeland consisting of hundreds or thousands of acres, which may not be easily accessible by ground-based vehicles. The emergence of micro Unmanned Aerial Vehicles (UAVs), featuring high flexibility, low cost, and high maneuverability has brought the opportunity to build effective management systems. They can easily access and survey large areas of land for data collection and translate this data into a user-friendly information source for managers.
Current methods to count animals and identify their locations through visual observation are very expensive and time consuming. UAV technology has provided inexpensive tools that can be used to gather data for such purposes, but this also creates an urgent need for the development of new automatic and real-time object detection and counting techniques. Existing computer vision algorithms for object detection and counting are mainly designed and evaluated on non-orthogonal photographs taken horizontally with optical cameras. For UAVs, images are taken vertically at higher altitudes (usually a hundred meters or less above ground level). In such images, the objects of interest can be very small, lacking important information; for example, an aerial image of an animal shows only the top view, which presents a blob shape containing no outstanding or distinguishing features. Additionally, this area of interest presents itself similarly to other objects in the background, such as trees and bushes, while the corresponding terrestrial image of the same animal has many distinguishing features, such as the head, body, or legs, which make it easier to recognize. Moreover, ground-based images offer a balance between background and foreground, which is not present in UAV images taken from a high altitude. A difference between a frontal view (ground-based) of an animal and a top view (aerial-based) is depicted in Figure 1. Objects in aerial images are small, flat, and sparse; moreover, objects and backgrounds are highly imbalanced. In human-centric photographs, different parts of objects (head, tail, body, legs) are clearly observable, while aerial imagery presents very coarse features. In addition, aerial imagery presents confounding features, such as shadows.
In addition, the UAV images we use in this project are associated with a large scene-understanding problem, which is still a challenging issue even for ground-based images. Specific challenges to count and localize animals in large pastures include: (1) animals may be occluded by bushes and trees; (2) variant lighting conditions; (3) small areas of animals in the imagery make it difficult to detect them based on shape features; (4) herding animals tend to group together (form a herd).
Recent advances in deep neural networks (DNNs) along with massive datasets have facilitated the progress in artificial intelligence tasks such as image classification [12], object recognition [13,14], counting [8,9], contour and edge detection [15] and semantic segmentation [16]. Most successful network architectures have improved the performance of various vision tasks at the expense of significantly increased computational complexity. In many real-world applications, real-time analysis of data is necessary. One of the goals of our research is to develop a real-time algorithm that can count and localize animals while on board UAVs. For this purpose, we need an algorithm that balances portability and speed with accuracy instead of sacrificing the former for the latter. To address this, we propose a novel technique influenced by both detection and density map networks along with specialized training techniques in which coarse and fine detection occurs. One network operates on a sparse distribution, while the other operates on a dense distribution. By separating and specializing these tasks, we compete with state-of-the-art networks on this particular challenge while maintaining impressive speed and portability.
In this research, we have designed a novel end-to-end network that takes a high-resolution, large image as input and produces the count and localization of animals as output. The first stage, DiscNet, is designed to discriminate between foreground and background data, converting a full, feature-rich image into a sparse representation where only foreground patches and their locations are encoded. The second network, CountNet, seeks to solve a density function. Operating on the sparse matrix from DiscNet, CountNet can limit its expensive calculations to important areas. An illustration of our network is presented in Figure 2. The novel contributions of our work include:
• We developed a novel end-to-end architecture for counting and localizing small and sparse objects in high-resolution aerial images. This architecture allows for a real-time implementation on board the UAV, while maintaining accuracy comparable to state-of-the-art techniques.
• Our DisCount network discards a large amount of background information, limiting expensive calculations to important foreground areas.
• The hard example training part of our algorithm addresses the issues of shadows and occluded animals.
• We collected a novel UAV dataset, prepared the ground truth for it, and conducted a comprehensive evaluation.
Counting Methods
Counting can be divided into several categories based on the annotation methods used for generating the ground-truth data.
Counting Via Detection
One can consider that perfect detection leads to perfect counting. When objects are distinct and can be easily detected, this assumption holds. In this method, objects need to be annotated with bounding boxes. Several methods [5,[17][18][19]] have applied detection for counting objects. For instance, Ref. [5] manually annotated bounding boxes and trained a Faster R-CNN [20] for counting people in a crowd. However, in many cases these methods suffer from heavy occlusions among objects. Moreover, the annotation cost can be very expensive and impractical for very dense objects. In our case, although animals are sparsely located in the image, they herd together, which results in dense patches. Using this observation, we use coarse detection methods to model the first density function.
Counting Via Density Map
In this case, annotation involves marking a point location for each object in the image. This annotation is based on a density heat map and is preferred in scenarios where many objects occlude each other. Density heat-map annotation has been used in several cases, including counting cells, vehicles, and crowds [6,[21][22][23][24][25][26]]. Since our counting involves counting occluded objects in the selected patches, we use the density heat-map annotation technique.
Counting on Image Level
This counting is based on image-level label regression [8,9,27], which is the least expensive annotation technique. However, these methods can only count. Since we are interested in both counting and localization of objects, we did not use image-level annotation, which provides only the global count in the image.
Counting Applications
Counting methods have been mainly applied to counting crowds [5,[28][29][30][31]], vehicles [6,32], and cells [7]. In agriculture, there has been limited research on counting apples and oranges [33], tomatoes [8,9], maize tassels [34], and animals [27]. However, the authors are not aware of any fully automatic techniques for counting animals or fruit from aerial imagery. The existing techniques for counting and detection of animals from UAVs [35][36][37] need manual preparation of training data such that each image contains a single animal [35,36], or an extra sensor such as a thermal camera [37]. Due to payload limitations, it is not always possible to add an extra sensor; thermal cameras are usually more expensive than optical ones, which is a prohibitive cost for local farmers. Moreover, counting in [35,36] is performed via a post-processing step using connected component analysis. Our approach is different from previous work, as we have developed a fully automatic technique where the regions of interest are selected automatically in the first part of the network (DiscNet) without any manual cropping of imagery, and counting is performed automatically in an end-to-end learning procedure on optical imagery.
Unmanned Aerial Systems
In recent years, UAS have been extensively used in various areas such as scene understanding and image classification [12], flood detection [16], vehicle tracking [38], forest inventory [39], soil moisture estimation [40], and wildlife and animal management [36,41]. There has been very limited work on the use of UAS for monitoring livestock, particularly for animal detection, feeding behavior, and health monitoring. For a review of these techniques, see [42].
In addition, several methods based on DNNs [32,[43][44][45]] have been developed for object detection and tracking in satellite and aerial imagery, particularly for vehicles. When counting and detecting man-made objects (such as vehicles in parking lots) in aerial imagery, one deals with imagery that contains an approximately equal distribution of objects of interest and background, and there is no overlap between objects. In typical crowd or vehicle counting from aerial imagery, more than 70% of the image contains the objects of interest, while in our case less than 1% of the imagery contains the object of interest (cattle). To our knowledge, there are no fully automatic techniques for counting sparse objects from UAV imagery. Objects in UAV images are usually flat, proportionally small, and missing normal distinguishing features. Moreover, the ratio of foreground (object of interest) to background data in UAV imagery is prohibitively small. This means that we need to handle sparse information to separate foreground information from background data. Additionally, most domesticated animals used in agriculture are herding animals, meaning they travel in groups. This means that even though they represent a minute amount of sparsely distributed information, they will tend to group, leading to dense situations that cannot be accurately handled by detection networks.
Data Collection
UAS flights for cattle and wildlife detection were conducted at the Welder Wildlife Foundation in Sinton, TX in December 2015. This coincides with the typical dates for wildlife counts due to leaf drop of deciduous trees. A fixed-wing UAV fitted with a single-channel non-differential GPS and a digital RGB camera for photogrammetry was flown by the Measurement Analytics Lab (MANTIS) at Texas A&M University-Corpus Christi under a blanket Certificate of Authorization (COA) approved by the United States Federal Aviation Administration (FAA). Over 600 acres were covered using a small fixed-wing UAV, the SenseFly eBee (Figure 3). It is an ultra-lightweight (0.7 kg), fully autonomous platform with a flight endurance of approximately 50 min on a fully charged battery in light wind, and it can withstand wind speeds up to 44 km/h. With this setup, it can cover ten square kilometers per flight mission. For this survey, the platform was equipped with a Canon IXUS 127 HS 16.1 MP RGB camera with automatic exposure adjustment for optimal image exposure. Four flights were conducted at 80 m above ground level with 75% sidelap and 65% endlap to seamlessly cover the entire study area, which resulted in over one thousand individual photographs. The resultant ground sample distance (GSD) was on average 3.8 cm. These images were post-processed using structure-from-motion (SfM) photogrammetric techniques to generate an orthorectified image mosaic (orthomosaic) (Figure 4). In this work, Pix4Dmapper Pro (Pix4D SA, 1015 Lausanne, Switzerland) was used to process the imagery. The SfM image processing workflow is summarized as follows [46]: (1) Image sequences are input into the software, and a keypoint detection algorithm, such as a variant of the scale-invariant feature transform (SIFT), is used to automatically extract features and find keypoint correspondences between overlapping images using a keypoint descriptor. SIFT is a well-known algorithm that allows for feature detection regardless of scale, camera rotations, camera perspectives, and changes in illumination [47]. (2) Keypoints, as well as approximate values of the image geo-position provided by the UAS autopilot (onboard GPS), are input into a least-squares bundle block adjustment to simultaneously solve for camera interior and exterior orientation. Based on this reconstruction, the matching points are verified and their 3D coordinates calculated to generate a sparse point cloud. (3) To improve the reconstruction, ground control points (GCPs) laid out in the survey area are introduced to constrain the solution and optimize the reconstruction. GCPs also improve the georeferencing accuracy of the generated data products. (4) Densification of the point cloud is then performed using a MultiView Stereo (MVS) algorithm to increase the spatial resolution. The resultant densified set of 3D points is used to generate a triangulated irregular network (TIN) and obtain a digital surface model (DSM). (5) The DSM is then used by the software to project every image pixel and to calculate a geometrically corrected image mosaic (orthomosaic) with uniform scale. Due to the low accuracy of the onboard GPS used to geotag the imagery, ground control targets were laid out in the study area, and RTK differential GPS was used to precisely locate their positions within 2 to 4 cm horizontal and vertical accuracy. These control targets were used during the post-processing of the imagery to accurately georeference the orthomosaic image product.
Dataset Feature Description
The prominent features of this data set are roads, cows, and fences, which are standard for ranch land in the southern United States. According to the USDA [48], each head of cattle requires roughly 2 acres (about 87,000 square feet, or 8100 square meters) of ranch land to maintain year-round foraging. On average, a cow viewed orthogonally occupies roughly 16 square feet, or 1.5 square meters. This means that, viewed as an area, a properly populated ranch will have only about 0.02% of its area (16 square feet out of roughly 87,000) pertaining to cattle. As can be seen in Figure 5, any given image in our dataset contains a large amount of background information. In this figure, background information is represented as translucent areas, while important areas, containing objects of interest, are transparent. In addition, cattle are herding animals, meaning they travel in groups. This is especially prevalent in calves, which stay within touching distance of their mothers. Due to this large disparity of area-to-cow and the propensity of the cattle to group up, one ends up with unique distributions and sub-distributions of data: densely packed locations of data scattered sparsely in a much larger area. Figure 5 shows an example of sparsity in an image. Out of 192 regions, only 11, signified by transparent areas, represent useful information in the given counting task. Furthermore, out of the useful regions, only 12% of the pixel area represents non-zero values in the probability heat map.
Dataset Preparation
Individual images taken from the UAV have a native resolution of 3456 by 4608, which is scaled down to 1536 by 2048. Ground-truth annotations for this dataset are center-of-object point locations, all of which were labeled by hand. The density map is generated by processing the object-center points with a Gaussian smoothing kernel with a size of 51 and a sigma of 9. This process is visualized in Figure 6, with an example region, its ground-truth point annotation, and the resultant Gaussian smoothing output. The size of the Gaussian smoothing kernel roughly corresponds to the average distance between the tip of a cow's head and the base of its tail. Due to the scale of the data, some unimportant areas may be labeled as important because they are located close to cows. To generate region labels, a sum operation is performed over each region. Any region with a value greater than zero is classified as foreground, with all others classified as background.
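The following is a minimal sketch of this ground-truth generation step, assuming the annotations are (row, col) pixel coordinates; the truncation used to match the paper's 51-pixel kernel is an assumption of this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_density_map(points, height=1536, width=2048, sigma=9, kernel_size=51):
    """Build a ground-truth density map from hand-labeled object centers.

    points: iterable of (row, col) center-of-object annotations
    sigma:  Gaussian smoothing sigma (the paper uses sigma = 9, kernel size 51)
    """
    density = np.zeros((height, width), dtype=np.float32)
    for r, c in points:
        density[int(r), int(c)] += 1.0          # one unit of mass per annotated cow
    # Smoothing spreads each unit of mass while preserving the total,
    # so density.sum() still equals the number of annotated objects.
    truncate = (kernel_size // 2) / sigma       # radius of 25 px -> 51-px kernel
    return gaussian_filter(density, sigma=sigma, truncate=truncate)
```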
Our Approach
In this work, we will employ deep-learning models to approximate two density maps, θ_d and θ_c, based on assumptions generated from observation of the data. These assumptions are:
1. The data can be accurately described as a set of two distributions.
2. The majority of our data can be classified as background information.
3. Background information can be safely discarded without losing contextual details.
4. Foreground information can be densely packed.
Therefore, we design a two-stage approach to solve the problem. The first stage, DiscNet, is designed to discriminate between background and foreground data, converting a full, feature-rich image into a sparse representation where only foreground data and their locations are encoded; this approximates the θ_d function. The second network, CountNet, approximates θ_c by operating on the sparse matrix from DiscNet; in this way, CountNet can limit its expensive calculations to the foreground regions. The result of this design, DisCountNet, is a two-stage, end-to-end supervised learning process that maintains remarkable accuracy while yielding a real-time solution to the provided problem.
Implementation and Training
The design for DisCountNet is detailed in Figure 2, which shows the full end-to-end implementation. DiscNet (the first stage) is an encoder characterized by convolutions with large kernels and leaky ReLU activation functions, followed by aggressive pooling. The first four convolutions use kernels of sizes seven, six, five, and four. The first three pooling operations are max pooling with a kernel size and stride of four, and the final pooling operation is another max pooling layer with a kernel size and stride of two. The next-to-last layer in the network is a final one-by-one convolution that reduces the feature map depth to two, followed by a SoftMax activation, yielding a 12 by 16 matrix of values that represent the likelihood that a cow is found in a given region. The aggressive striding allows us to use larger kernel sizes to capture contextual information that could otherwise be lost, while limiting expensive operations. DiscNet then uses this matrix to convert the original input image into a sparse representation, operating on the assumptions listed above. CountNet uses the sparse representation to generate per-pixel probability values. This flow of information is visualized in Figure 7, which shows the data representations at different stages of the proposed network.
Our training procedure is depicted in Algorithm 1. Given the dataset {X_i}, i = 0, …, N, DiscNet is trained using the full images and region-label ground truths via a weighted cross entropy loss to determine whether there is a cow in a given region. Each image X_i consists of R^(i) regions, where each region is labeled by y_r^(i) with r = 1, …, R^(i), and the corresponding network prediction is denoted ŷ_r^(i). We minimize a weighted cross entropy loss, given by Equation (1); for convenience, we drop the superscript (i) in the formula:
L_d = − Σ_r [ y_r · p_r^(−0.5) · log(ŷ_r) + (1 − y_r) · p_r^(0.5) · log(1 − ŷ_r) ],  (1)
where p_r ∈ [0, 1] represents the percentage of regions with desired information. This weighted loss function counterweights the loss values for our unbalanced dataset. For example, if an image is 90% background regions, the loss for foreground regions will be ten times higher. This causes the network to weigh the loss for positive examples more heavily than for negative examples, resulting in an increased number of false positives and fewer false negatives. In our implementation, a false negative hurts performance much more than a false positive: a false negative in the discriminator means that a region with a cow is not passed to CountNet, so no cows can be detected, whereas a false positive only means that a region without a cow is passed on, for which CountNet can still compensate.
In the second stage, as CountNet seeks to model a different function based on Assumption 4, it uses a different implementation. CountNet features a U-Net structure [49] with modified operations. Operating on the sparse representation of the original image generated by DiscNet, CountNet creates a sparse density map. The encoding pathway features four convolution-pooling operations with skip connections to the decoding pathway, which uses transposed convolutions. All the pooling operations are max pooling with a kernel size and stride of two. Each of the convolutions uses a three-by-three kernel, and all transposed convolutions use a stride of two.
Finally, the network uses a one-by-one convolution with a feature depth of one, which represents the likelihood that a cow is present at a given pixel. The U-Net-like architecture is proven to provide accurate inference while maintaining contextual information for per-pixel tasks. CountNet is trained on the sparse data generated by DiscNet. CountNet's loss value is computed over the selected regions and the corresponding ground-truth density map regions by minimizing the mean squared error. The ℓ2 loss function is given by Equation (2), where z_i is a given ground-truth density map region and ẑ_i is the corresponding prediction:
L_c = (1/N) Σ_i ‖ ẑ_i − z_i ‖²,  (2)
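The two loss terms above are summarized in the minimal PyTorch sketch below, assuming DiscNet outputs per-region probabilities and CountNet outputs per-pixel densities for the kept regions; tensor shapes and names are assumptions of this sketch rather than the released implementation.

```python
import torch

def weighted_region_ce(y_pred, y_true, p_fg, eps=1e-7):
    """Weighted cross entropy over regions, in the spirit of Equation (1).

    y_pred: (R,) predicted probability that each region contains a cow
    y_true: (R,) binary region labels (1 = foreground, 0 = background)
    p_fg:   scalar in (0, 1], fraction of regions containing objects
    """
    y_pred = y_pred.clamp(eps, 1.0 - eps)
    fg = y_true * (p_fg ** -0.5) * torch.log(y_pred)                # up-weighted foreground term
    bg = (1.0 - y_true) * (p_fg ** 0.5) * torch.log(1.0 - y_pred)   # down-weighted background term
    return -(fg + bg).sum()

def region_mse(pred_density, gt_density):
    """Mean squared error over the density-map regions kept by DiscNet (Equation (2))."""
    return torch.mean((pred_density - gt_density) ** 2)
```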
Hard Example Training
During end-to-end training, CountNet maintains a list of loss values per region. At the end of each epoch, it sorts this list, then truncates the lowest half. CountNet then randomly perturbs the remaining regions using random flipping and rotating, training again with a larger batch size. The loss from this training is again stored, and the process is repeated m − 1 times. As the population decreases by half in every iteration, m should be chosen to ensure that the population of regions does not drop below a given batch size. On observation, regions used multiple times contain confounding features, such as black cows that look similar to shadows or cows obscured behind foliage.
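A minimal sketch of this mining loop is given below, assuming the regions are NumPy-like (H, W) arrays and that `train_step` encapsulates one pass of CountNet training on a batch; all names are illustrative assumptions, not the authors' code.

```python
import random

def hard_example_rounds(regions, losses, train_step, m=3):
    """Repeatedly retrain on the hardest half of the regions.

    regions:    list of (image_region, gt_density_region) pairs
    losses:     per-region loss values recorded during the last epoch
    train_step: callable that trains CountNet on a batch and returns new per-region losses
    m:          number of mining rounds (kept small so the pool stays above the batch size)
    """
    pool = list(zip(regions, losses))
    for _ in range(m - 1):
        pool.sort(key=lambda item: item[1], reverse=True)   # hardest regions first
        pool = pool[: max(1, len(pool) // 2)]                # drop the easier half
        batch = []
        for (img, gt), _ in pool:
            # Randomly flip the hard regions before retraining on them.
            if random.random() < 0.5:
                img, gt = img[:, ::-1], gt[:, ::-1]
            batch.append((img, gt))
        new_losses = train_step(batch)
        pool = [(rg, ls) for (rg, _), ls in zip(pool, new_losses)]
    return pool
```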
Evaluation Metrics
For evaluation, we used five metrics in addition to comparing parameter counts between DisCountNet, RetinaNet [14], and CSRNet [50]. The targeted goal is to have as-accurate-as-possible counting and density map generation while providing a real-time solution on portable hardware. The metrics can be broken into three groups: image-level label comparison, region-level label comparison, and generated density map quality comparison.
Image Level Label Metrics
To compare raw counting results, we use the mean squared error (mSE) and mean absolute error (mAE):
mSE = (1/n) Σ_t (y_t − ŷ_t)²,  (3)
mAE = (1/n) Σ_t |y_t − ŷ_t|.  (4)
The resultant values provide an idea of the average error expected for any image in our testing set. In both equations, n is the total number of images, y_t is a given ground-truth label count, and ŷ_t is our count prediction for image t.
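For reference, these two counting errors can be computed as in the short sketch below (array names are illustrative).

```python
import numpy as np

def mse_mae(y_true, y_pred):
    """Image-level counting errors: mean squared error and mean absolute error."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    return mse, mae
```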
Image Region Level Metric
The grid average mean absolute error (GAME) [6] metric provides more precise information about counting quality. Mean absolute error does not care where errors occur as long as they average out, whereas GAME simultaneously considers the object count and the estimated locations of the objects. The formula for GAME is as follows:
GAME(L) = (1/n) Σ_t Σ_r | y_t^r − ŷ_t^r |,
where n is the total number of images, L determines the number of non-overlapping regions (4^L) into which each image is divided, y_t^r is the actual count for image t in region r, and ŷ_t^r is the corresponding predicted count. It should be noted that GAME(0) is equal to the mAE, as the region considered is the whole image.
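The per-image computation can be sketched as follows, assuming (H, W) density maps whose sums give the region counts; averaging the returned value over the test set yields the reported GAME(L).

```python
import numpy as np

def game(gt_density, pred_density, L):
    """Grid Average Mean absolute Error for one image at grid level L."""
    cells = 2 ** L                      # 2^L x 2^L = 4^L non-overlapping regions
    h, w = gt_density.shape
    err = 0.0
    for i in range(cells):
        for j in range(cells):
            rs, re = i * h // cells, (i + 1) * h // cells
            cs, ce = j * w // cells, (j + 1) * w // cells
            err += abs(gt_density[rs:re, cs:ce].sum() - pred_density[rs:re, cs:ce].sum())
    return err
```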
Density Map Quality Comparison
To evaluate the quality of the produced density heat maps, we use the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [51]. These metrics provide insight into the quality of the generated density map compared to the ground truth. Standard implementations of these metrics are non-distance evaluations, meaning that they cannot be used to evaluate raw counting results, but rather provide insight into a network's ability to create accurate per-pixel values. The formula for PSNR is
PSNR = 10 · log10( MAX_I² / mSE ),
with MAX_I being the maximum possible pixel value of a given image and mSE being the mean squared error found in Equation (3).
The formula for structural similarity is as follows:
SSIM(x, y) = ( (2 μ_x μ_y + c_1)(2 σ_xy + c_2) ) / ( (μ_x² + μ_y² + c_1)(σ_x² + σ_y² + c_2) ),
where μ represents the mean, σ the standard deviation, σ_xy the covariance, x is a ground truth, and y is a prediction. Finally, c_1 and c_2 are variables that stabilize the division.
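A minimal sketch of both map-quality metrics is shown below; note that it computes SSIM over the whole map with a single window, whereas library implementations typically use sliding local windows, and the stabilizing constants are illustrative defaults.

```python
import numpy as np

def psnr(gt, pred, max_val=1.0):
    """Peak signal-to-noise ratio between two density maps."""
    mse = np.mean((gt - pred) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(gt, pred, c1=1e-4, c2=9e-4):
    """Single-window (global) structural similarity between two density maps."""
    mu_x, mu_y = gt.mean(), pred.mean()
    var_x, var_y = gt.var(), pred.var()
    cov = ((gt - mu_x) * (pred - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```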
Experimental Setup
We compared the performance of our technique with two state-of-the-art techniques, RetinaNet and CSRNet [14,50]. RetinaNet [14] is a detection network that generates bounding boxes, while CSRNet [50] generates a density map.
All networks were trained on a Spectrum TXR410-0032R Deep Learning Dev Box with an Intel Core i7-5930K, 64 GB of RAM, and four Nvidia GeForce Titan Xs.
DisCountNet was trained with the Adam optimizer [52] and a learning rate of 1e-3. The batch size for DiscNet was 1, and the batch size for CountNet was the number of regions detected by DiscNet. During hard example training, the batch size was set to 24, and the number of repetitions m was set to three to maintain a large sample population. A larger dataset could theoretically use more repetitions, as the population of regions would remain above the batch size for longer.
To generate positive anchors, RetinaNet was trained using images containing cows, extracted from a three-by-three grid of the original images using bounding box ground truths.
Validation was run on a Dell Inspiron 15-7577 with a solid-state drive, an i5-7300 processor, 16 GB of RAM, and an Nvidia 1060 Max-Q. To operate on this more limited hardware, images were split into non-overlapping regions before being processed by CSRNet and RetinaNet: a three-by-three grid was used for RetinaNet [14], while CSRNet [50] was validated using a two-by-two grid. Using this hardware as an analog for consumer-attainable, portable hardware, DisCountNet averaged 34 frames per second, compared with 4 frames per second for RetinaNet [14] and 12 frames per second for CSRNet [50]. This shows that only our technique can count and localize objects in real time.
Qualitative and Quantitative Results
A sample full image, its ground truth, and the density map predicted by our algorithm are shown in Figure 8. In addition to the density map, the network predicts 6 objects in this image, which matches the actual count. As can be seen in Figure 8, our method is able to detect small and sparse objects in large UAV images. Further results for regions selected by our discriminator network are shown in Figure 9. This figure shows that our method can distinguish between two adjacent cattle and detect animals that are occluded by foliage. As can be seen in Table 1, DisCountNet maintains competitive metrics despite using just over 1% of the parameters of current state-of-the-art networks. In addition, DisCountNet limits computation in a way the other networks do not. For example, on an empty image, DisCountNet only incurs the operations of DiscNet, as CountNet would not run; RetinaNet [14] and CSRNet [50], however, would need to run in full on the empty image, performing computations with no benefit.
As can be seen in Tables 2 and 3, DisCountNet outperformed the state-of-the-art in all GAME metrics as well as SSIM. This shows that our method is more accurate in the simultaneous counting and localization of objects than the others. The SSIM score (Table 3) is within 1e-4 of a perfect score for DisCountNet. This is because the sparse representation allows an exact zero output, which results in absolutely no error for the majority (around 80%) of all pixels. CSRNet [50] does not have this type of design, so every output pixel value can be extremely close to zero but will statistically never be exactly zero. This leaves a small error in every pixel, which is even more pronounced in SSIM than in the other metrics.
Conclusions
In this paper, we propose a method for working with sparse datasets by designing a fully convolutional counting and localization network. Our method outperformed state-of-the-art techniques in quantitative metrics while providing real-time results. Through this design, we limit operations to important areas while discarding non-important areas. While our method greatly improves counting and localization performance, it remains limited in detecting and counting highly occluded objects: as can be seen in the bottom row of Figure 3, our network has difficulty detecting cows inside shrubbery with heavy occlusion. The technique is easily portable to other application domains, as it relies on general mechanisms rather than hand-crafted, domain-specific techniques. It could also be expanded into an iterative series of DiscNets, or a cascade of weak convolutional regional classifiers: by iterating from dense to sparse information representations, successive networks would operate on less and less information.
Conflicts of Interest:
The authors declare no conflict of interest.
"Computer Science",
"Environmental Science",
"Engineering"
] |
Well-being and the Great Recession in Spain
ABSTRACT This letter assesses the impact of the Great Recession on well-being in Spanish provinces using two alternative composite indicators of objective well-being that include somewhat different dimensions. Whereas the crisis notably eroded economic well-being, its impact on overall well-being – which in addition to economic dimensions also includes non-economic ones – was imperceptible. This result points to the need to carefully define and assess well-being in empirical analyses.
I. Introduction and motivation
Assessing well-being is a challenging task. Whereas traditional measures have been based on simple economic indicators, mostly GDP per capita, society as a whole calls for a more comprehensive way to gauge well-being. In addition to economic issues, new measures should include other non-economic dimensions of well-being such as health, education or the environment, to name just a few. In response to this social demand, the Human Development Index was created by the United Nations in the 1990s, combining income per capita, education and life expectancy (see UN 2016). Later on, the Commission on the Measurement of Economic Performance and Social Progress, launched in 2009 and headed by the Nobel laureate Joseph Stiglitz, suggested several non-economic dimensions that, beyond economic ones, can affect well-being (Stiglitz, Sen, and Fitoussi 2009). Furthermore, the OECD provides data on these dimensions at both the national (Better Life Index dataset) and regional (OECD Regional Well-being Database) levels (see Durand 2015).
Recently, several papers have employed these datasets to build objective composite indicators of well-being, mostly at the country level (e.g. Lorenz, Brauer, and Lorenz 2017; Peiró-Palomino and Picazo-Tadeo 2018). Nonetheless, the rankings of countries resulting from these indicators are not significantly different from those derived from conventional measures of well-being, including GDP per capita. In addition, other recent research papers also conclude that there is a close relationship between subjective well-being and income (Stevenson and Wolfers 2013).
According to the arguments outlined above, it might be sensible to assume that indicators based on economic dimensions such as GDP per capita are good proxies of overall well-being. However, in this letter we show that this might not be the case when it comes to assessing the impact of an economic crisis on well-being. In doing so, we empirically evaluate the impact of the Great Recession that began in 2008 (IMF 2009; Camacho, Gadea, and Pérez-Quirós 2018) on well-being in Spanish provinces, using two composite indicators of objective well-being computed with Data Envelopment Analysis (DEA) and Multi-Criteria Decision-Making (MCDM) techniques. One of them, referred to as economic well-being, includes just GDP per capita and the unemployment rate as essential economic dimensions of well-being, whereas the other, named overall well-being, also includes 5 non-economic dimensions. We find that: i) there are notable disparities in well-being across Spanish provinces; ii) when objective well-being is assessed only with economic dimensions, a sharp decline is observed as a result of the Great Recession; conversely, when non-economic dimensions are also accounted for, well-being remains fairly stable. Accordingly, our conclusion is that the choice of well-being indicators should be carefully justified in empirical analyses.
II. Data and methodology
We employ information about 7 well-being dimensions at the level of the 50 Spanish provinces, 1 built with data from different sources for the period 2000-14 (Table 1). In constructing this dataset, we have attempted to assemble a set of indicators as close as possible to those proposed by Stiglitz, Sen, and Fitoussi (2009) and provided by the OECD at both national and regional levels, but using the much more limited information available for the Spanish provinces. Using these figures, we have computed averages for each indicator in the growth period 2000-07 and the crisis period 2008-14. To ensure comparability across dimensions, and given that the indicators have different measurement units, the data have been standardised on a 0-1 scale using the min-max method, with higher values representing better performance; minimum and maximum values are chosen from the whole 2000-14 period to ensure comparability over time. Finally, normalised data on dimensions have been used to build a couple of composite indicators of objective well-being: the first, economic well-being, includes only the economic dimensions of income and jobs, while the other, overall well-being, considers all 7 dimensions.
Regarding the methodology, we have used DEA and MCDM techniques as in Peiró-Palomino and Picazo-Tadeo (2018). First, following Lovell, Pastor, and Turner (1995), we have computed a composite well-being indicator for each province p′ with DEA as:

$$\text{Well-being}_{p'} = \max_{w_{dp'}} \sum_{d=1}^{D} w_{dp'}\, \text{dimension}_{dp'}$$

subject to:

$$\sum_{d=1}^{D} w_{dp'}\, \text{dimension}_{dp} \le 1, \quad p = 1, \ldots, 50; \qquad w_{dp'} \ge 0, \quad d = 1, \ldots, D \quad (1)$$

where dimension_dp is the observed value for dimension d in province p, and w_dp is the idiosyncratic weight assigned to dimension d in the composite indicator of province p. Moreover, composite indicators from (1) are, by construction, bounded between zero and one, the latter representing the highest well-being; i.e., the lower the score, the worse the well-being.
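A minimal sketch of this benefit-of-the-doubt program, assuming min-max normalised data held in a NumPy array and using SciPy's linear programming routine:

```python
import numpy as np
from scipy.optimize import linprog

def bod_score(Y: np.ndarray, p0: int) -> float:
    """Benefit-of-the-doubt score for province p0: choose weights that
    maximise its weighted sum of dimensions, subject to no province
    scoring above one with the same weights (program (1))."""
    n, d = Y.shape
    res = linprog(c=-Y[p0],               # linprog minimises, so negate
                  A_ub=Y, b_ub=np.ones(n),
                  bounds=[(0, None)] * d,
                  method="highs")
    return -res.fun

# Illustrative data: 50 provinces, 7 min-max normalised dimensions.
Y = np.random.rand(50, 7)
scores = [bod_score(Y, p) for p in range(50)]
```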
Whereas DEA provides a successful approach to building a composite well-being indicator, it might be less effective when it comes to ranking provinces. In this respect, comparisons might be meaningless, as provinces' well-being indicators are computed with different sets of weightings (Kao and Hung 2005); besides, program (1) could assign a score of one, meaning the highest well-being, to some provinces merely because of a lack of discriminating power (see technical details in Dyson et al. 2001). In order to ensure comparability and also increase discriminating power, in a second stage we have employed MCDM techniques to compute a composite well-being indicator with common weights across provinces for the dimensions, as proposed by Despotis (2002). Formally:

$$\min_{w_d,\, m_p,\, h} \; t\,\frac{1}{50}\sum_{p=1}^{50} m_p + (1-t)\,h$$

subject to:

$$\sum_{d=1}^{D} w_d\, \text{dimension}_{dp} + m_p = \text{Well-being}_p^{DEA}, \qquad m_p \le h, \qquad m_p \ge 0, \qquad w_d \ge \varepsilon \quad (2)$$

where w_d is the common weight assigned to dimension d; ε is a non-Archimedean small number; h is a non-negative parameter to be estimated; m_p represents the deviation between the composite indicator for province p calculated with DEA and that computed with MCDM; finally, t is a parameter ranging from 0 to 1, which we have set to 1 (see details in Peiró-Palomino and Picazo-Tadeo 2018).
III. Results and discussion
Table 2 displays averages for economic and overall well-being in both the growth (2000-07) and crisis (2008-14) periods for provinces and regions; averages for regions are weighted by population. 2 In this respect, as well-being affects people, we consider population-weighted averages to be much more illustrative than simple ones. Figure 1 illustrates the geographic distribution of well-being in Spain, with darker colours representing better performance; well-being is categorised through the quintile distribution of our composite indicators for the entire period, in order to evaluate disparities across regions and also over time. Overall well-being is unevenly distributed across space with no clear pattern, although the lowest scores are found in the Mediterranean coastal and Southern provinces. Furthermore, a positive (although moderate) association is observed between overall well-being and the level of development of provinces and regions, measured by real GDP per capita, particularly in the recession period. 3 Besides, lower economic well-being is found in Southern and Western provinces, while Northern and Eastern provinces perform notably better. Lastly, intra-regional heterogeneity is high in most cases, especially for overall well-being. Generally speaking, our results show geographic patterns of well-being in line with those from other studies of well-being (or quality of life) in Spain carried out using different methodological approaches, aggregation levels and time periods. In this regard, Ventura (2011, 2018) focused on quality of life at the municipal level, although only for a limited sample of municipalities and overlooking temporal variation. Murias, Martinez, and de Miguel (2006) and Zarzosa Espina and Somarriba Arechavala (2013, 2018) also assessed well-being at the region level for the period 2006-15, but did not provide a composite indicator.
Regarding the impact of the Great Recession, a severe deterioration is observed in all provinces between 2000-07 and 2008-14 when well-being is assessed considering only economic dimensions; e.g., population-weighted average economic well-being decreases from 0.809 to 0.439. Conversely, well-being remains much more stable when it is assessed with our overall well-being indicator, whose weighted average even increases slightly, from 0.747 to 0.792. 4 These findings can be clearly seen in Figure 1, which also suggests that the geographical North-East versus South-West division observed in the growth years persisted during the crisis. Furthermore, the population-weighted distributions of overall well-being among Spanish provinces are not statistically different between 2000-07 and 2008-14 (Figure 2); conversely, those of economic well-being are statistically different; i.e., a notable shift to the left has occurred as a result of the crisis.
IV. Conclusions
In this letter, we report two main conclusions. First, the Great Recession profoundly affected the economic dimensions of well-being in Spain, whereas overall well-being, which also includes other non-economic dimensions, remained fairly stable. Second, leaving aside the desire of academics, international organisations and society as a whole to broaden the notion of well-being, this concept needs to be carefully defined and assessed in empirical studies, as different measures may lead to quite different interpretations.
"Economics"
] |
Optimization Design and Experimental Study of Low-Pressure Axial Fan with Forward-Skewed Blades
This paper presents an experimental study of the optimization of blade skew in a low-pressure axial fan. Using a back propagation (BP) neural network and a genetic algorithm (GA), the optimization was performed for a radial blade, and an optimized blade was obtained through forward blade skew. Measurements of the aerodynamic and aeroacoustic performance of the two blades were carried out. Compared to the radial blade, the optimized blade demonstrated improvements in efficiency, total pressure rise, stable operating range, and aerodynamic noise. Detailed flow measurements were performed in the outlet flow field to investigate the responsible flow mechanisms. The optimized blade causes a spanwise redistribution of flow toward the blade midspan and reduces tip loading. This results in significantly reduced total pressure loss near the hub and shroud endwall regions, despite a slight increase of total pressure loss at midspan. In addition, the measured spectra show that the broadband noise of the impeller is dominant.
INTRODUCTION
The skewed and swept blade technique originated from research on aircraft airfoils. Since this technique was introduced to the turbomachinery field, it has played a very important role in the performance improvement of turbomachinery. So far, many research results have proved that skewed and/or swept blades can promote aerodynamic efficiency, reduce throughflow losses, enhance the stable range, and decrease the aerodynamic noise of turbomachinery.
Beiler and Carolus [1] studied the aerodynamic performance of both forward- and backward-swept impellers of low-speed axial fans. The results showed that forward-swept blades could improve the aerodynamic performance and have the potential for widespread application, whereas the backward-swept blades performed poorly aerodynamically. Cai et al. [2] presented an experimental investigation and numerical simulation of the performance of an axial-flow fan with skewed rotating blades. The results showed that the forward-skewed blade achieved a higher pressure rise of 13.1%, a larger flow rate of about 5%, a higher efficiency of more than 3%, and a noise reduction of 2 to 4 dBA. Outa [3] studied the rotating stall performance of a single-stage subsonic axial compressor and found that a forward-swept blade would increase the throttle margin with decreased tip loss. Corsini and Rispoli [4] proved that the forward-swept blade of a subsonic axial fan operates more efficiently, in particular at low flow rates, with a delayed onset of stall.
With the development of computer technology and optimization algorithms, the optimum design of turbomachinery with swept and skewed blades has become practicable.
One notable attempt to optimize the blade of a transonic axial compressor was made by Yi et al. [5,6]. An optimal impeller with backward-swept and skewed blades was designed based on simulated annealing (SA). Experimental results showed that the adiabatic efficiency of the optimal impeller increased by 0.82%, while the flow rate and total pressure ratio were kept constant. Besides, a further efficiency gain of 1.05% was achieved by using the Gradient Method (GM) to optimize the swept angle and blade camber curves. Jang et al. [7,8] designed a backward-swept impeller for a transonic axial compressor using the response surface method (RSM). The adiabatic efficiency of the optimum impeller increased by 1.25%, and the separation line, defined by the interaction between the shock and the boundary layer on the blade suction surface, moved further downstream in the optimal impeller.
Concerning the optimization of skewed and swept blades in low-pressure axial fans, Yu and Yuan [9] introduced an optimization procedure for a multistage axial compressor with inlet guide vanes (IGV) and outlet guide vanes (OGV) using design of experiments (DOE) and sequential quadratic programming (SQP). The main optimization parameter was the swept and skewed blade stacking line. The efficiency of the optimized impeller increased by 1.26%, the mass flow rate by 1.56%, and the total pressure ratio by 1.77%, while the surge margin was extended by 9.38%. Lotfi et al. [10,11] also showed that optimizing the blade camber line of a low-speed axial fan with a GA achieves higher efficiency than the original design. In our former work [12], the aerodynamic and aeroacoustic performances of radial, forward-skewed, and backward-skewed blades of a low-pressure axial fan were studied comparatively. The skewed angle of both the forward and backward impellers was set to 8.3 deg. The forward-skewed impeller achieved a noise reduction of 4.3 dBA and a stable operating range extended by 5.69% compared to the radial impeller; however, its aerodynamic efficiency and total pressure rise decreased by 3.53% and 5.63%, respectively.
In the present study, the optimization design of the skewed angle is carried out based on the impeller parameters described in [12]. An optimization algorithm based on a GA and a BP neural network is adopted, together with three-dimensional (3D) Reynolds-averaged Navier-Stokes (RANS) computational fluid dynamics (CFD) simulations to determine the aerodynamic performance during the optimization procedure. The performance improvement of the optimized impeller is confirmed by experiment, and the flow mechanism is explained by detailed flow field measurements.
Skewed blade
So far, there is no unified definition of a skewed blade in the turbomachinery field. In this paper, a circumferential-skewed blade is defined as follows. Figure 1 illustrates the stacking line ABC of the blade, observed from the axial direction of the impeller. The stacking line is composed of a straight-line segment AB and an arc segment BC. Point "A" is located on the hub and point "C" at the blade tip. Point "B" is the intersection of the arc and the straight line of the stacking line, and point "O" is located on the axis of the impeller. As shown in the figure, the angle δ_sk between line OB and line OC is called the circumferential skewed angle of the blade. If δ_sk is greater than 0°, the blade is skewed along the rotation direction of the impeller, and vice versa; these cases are called the circumferential forward-skewed blade and the circumferential backward-skewed blade, respectively.
Study model
In this case, the optimization of blade skew is based on the blade of a low-pressure axial-flow fan. Because the circumferential skewed angle of the archetypal blade is very small, it can be approximately regarded as a radial blade. More detailed descriptions of the impeller are given in [12].
OPTIMIZATION DESIGN
The optimization design system consists of a parameterized blade geometry description, a 3D RANS flow solver, and an optimization procedure, as shown in Figure 2. The parameterized blade geometry is generated by the geometry program (GEOM). The commercial European aerodynamic numerical simulator (EURANUS) [13] is selected as the 3D RANS flow solver. The optimization tool combines a GA and an artificial neural network (ANN) with BP training. First, the blade geometry is defined by a small number of design parameters in GEOM. A database with multiple samples is created, and the CFD results of each sample are stored in the database using the 3D RANS flow solver. With the widely adopted BP learning algorithm and the trained ANN, an optimization model is set up that relates the input variables (blade geometry variables) to the output variables (e.g., efficiency and total pressure rise). Finally, the optimal parameters are obtained based on the GA and the 3D RANS flow solver.
Parameterization of blade geometry
The parametric blade representation includes the parametric blade sections at different spanwise locations, the location of the stacking point on the blade sections, and the parametric stacking line of the blade. The whole 3D blade geometry of the impeller is built from eight blade sections in the spanwise direction, located at λ = 0, 0.25, 0.38, 0.5, 0.63, 0.75, 0.87, and 1.00. In the parametric blade sections, a simple second-order Bezier curve is used to define the camber curve (Figure 3). Both the pressure side and suction side curves of the 2D blade profiles are defined by third-order Bezier curves, with rounded leading edge (LE) and trailing edge (TE) (Figure 4). The stacking points are located at the center of gravity of each 2D blade section. In the circumferential direction, the stacking line is controlled by a composite curve consisting of a third-order Bezier curve, a straight line, and another third-order Bezier curve (Figure 5). As shown in Figure 5, α_1 is the angle of the first Bezier curve at span = 0. Parameter C_1 is the spanwise extent of the first Bezier curve, and parameter P_1 is the fractional factor defining the location of the second control point of the first Bezier curve. α_2 is the angle of the linear segment, and α_3 is the angle of the second Bezier curve at span = 1. Parameter C_2 is the spanwise extent of the second Bezier curve, and parameter P_2 is the fractional factor defining the location of the second control point of the second Bezier curve.
Numerical method
The three-dimensional, incompressible, viscous flow is computed with EURANUS. In this optimization, the simulator solves the Reynolds-averaged form of the Navier-Stokes equations.
The turbulence is modeled by the Spalart-Allmaras (S-A) [14] one-equation turbulence model for this application. The spatial discretization is based on a finite-volume approach, allowing a fully conservative discretization with a central formulation. Time discretization is applied through a fourth-order Runge-Kutta procedure. An efficient combination of multigrid and implicit residual averaging is used to accelerate convergence to the steady state, which helps shorten the optimization time.
A structured mesh, generated using the interactive grid generator (IGG) [15], is required for the optimization computation of the archetypal blade. The computational mesh consists of three blocks: one in the passage and two at the tip clearance (Figure 6). Generally, a small alteration of the blade geometry has a more negative impact on I-mesh quality than on H-mesh quality, which may increase the number of "bad" samples in the database if an I-mesh is used. Therefore, only the H-mesh is used in the flow passage in the present optimization. In the flow passage, the mesh contains 129 points in the streamwise direction, 73 points in the spanwise direction, and 65 points in the azimuthal direction. The mesh in the tip clearance is embedded into the H&O-mesh. The tip clearance block with an O-mesh contains 13 points in the spanwise direction, 13 points in the azimuthal direction, and 161 points wrapping around the tip of the blade. The other tip clearance block with an H-mesh contains 13 points in the spanwise direction, 17 points in the azimuthal direction, and 65 points in the streamwise direction.
The boundary conditions are defined as follows. The inlet flow condition is specified from the measured total pressure, total temperature, and flow angle from the axial direction. The outlet flow condition is specified from the mass flow rate and static pressure. In the S-A turbulence model, the turbulence viscosity at the inlet is given. The hub and the blades are treated as rotating walls in the fixed reference frame, while the shroud is a stationary wall in the fixed reference frame. Periodic boundary conditions are imposed upstream and downstream of the blade and inside the tip clearance.
In the computation, the convergence criteria are as follows.
Optimization algorithm
This optimization tool is an algorithm based on a BP neural network and a GA. The GA, based on Darwin's theory of evolution and Mendelian genetics, is a global optimization algorithm: a population of individuals evolves over a number of generations using selection, crossover, and mutation, and the best individual is always transferred unchanged to the next generation. The ANN, which imitates how the brain solves problems, is an intelligent information processing technology: basic processing elements (neuron models) are connected through a network topology, and the resulting network carries out information processing functions similar to the brain's abilities to learn, recognize, and remember. The combined algorithm has the highly efficient global optimization ability of the GA and the strong local searching and learning of the ANN. Optimization design systems using only a GA or only an ANN were also examined, but none outperformed the combined algorithm in optimization time or accuracy of predicted results. A minimal sketch of such a surrogate-assisted loop is given below.
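In this sketch the CFD evaluation is replaced by a toy objective and scikit-learn's MLPRegressor stands in for the BP network, so none of the numerical details are taken from the paper; it only illustrates the alternation between surrogate training, GA search on the surrogate, and verification of the best candidate with the expensive solver:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
LB, UB = -30.0, 30.0              # bounds on the skew angles (deg)

def cfd_objective(x):
    """Placeholder for the 3D RANS evaluation of one blade sample."""
    a2, a3 = x
    return (a3 - 6.1) ** 2 + 0.1 * a2 ** 2   # toy function, not real CFD

# Initial database, as in the paper's 20-sample database.
X = rng.uniform(LB, UB, size=(20, 2))
y = np.array([cfd_objective(x) for x in X])

surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000)

for gen in range(30):
    surrogate.fit(X, y)                      # BP training on the database
    pop = rng.uniform(LB, UB, size=(50, 2))  # GA population
    for _ in range(20):                      # GA generations on the surrogate
        fit = surrogate.predict(pop)
        parents = pop[np.argsort(fit)[:25]]  # selection (minimisation)
        # clone random parents and mutate (simplified crossover)
        children = parents[rng.integers(0, 25, 25)] + rng.normal(0, 1.0, (25, 2))
        pop = np.clip(np.vstack([parents, children]), LB, UB)
    best = pop[np.argmin(surrogate.predict(pop))]
    X = np.vstack([X, best])                 # verify the candidate with "CFD"
    y = np.append(y, cfd_objective(best))

print("optimised skew angles:", X[np.argmin(y)])
```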
Objective function
Two of the parameters controlling the blade stacking line in the circumferential direction are chosen as design variables in this optimization: the angles α_2 and α_3 (Figure 5). The other parameters remain fixed, which means that the straight line "AB" is invariable and only the shape of arc "BC" may be altered. Point "B" is located at λ = 0.4 (Figure 1). An initial database with 20 samples is created; in each sample, the two design variables are chosen randomly between the given lower and upper bounds. Based on experience and experimental results from former designs, the variation range of the two design variables is from −30° to 30°. Thus, 20 blades with different circumferential skewed angles are obtained. The computational mesh of each sample is generated and the internal flow field is simulated.
In this optimization, the objective is to maximize the efficiency and total pressure rise, cast as the minimization of the function in Equation (1), with a constraint imposed on the mass flow rate in Equation (2):

$$F = m_1 (1 - \eta)^k + m_2 \left( \frac{P_{imp} - P_c}{P_{ref}} \right)^k \quad (1)$$

$$\frac{|G_1 - G_2|}{G_1} \le M \quad (2)$$

The imposed value of total pressure, P_imp, is usually higher than the true value, and the two pressures P_ref and P_imp are set equal. The weight factors m_1 and m_2 express the influence of the efficiency and total pressure rise in the objective function, respectively; both are set to 1.0. The exponent factor k is set to 2.0, and the constraint factor M to 0.5%. Figure 7 presents the evolution curve of the objective function in this optimization. Curve "PV" shows the values predicted by the BP neural network and GA, and curve "CFD" shows the computed results of the samples from the 3D N-S flow solver. As shown in the figure, the value of the objective function decreases as the iterations proceed; near the 30th step it shows little variation, indicating that the result has converged and the optimal result has finally been achieved.
Optimization result
The optimization result shows that the circumferential skewed angle of the optimized blade is 6.1 deg. Figure 8 presents the 3D models of the archetypal blade and the optimized blade. As shown in Figure 8, the optimized blade is a typical circumferential forward-skewed blade.
EXPERIMENTAL SETUP
In order to evaluate the performance, both aerodynamic and aeroacoustic experiments were carried out in the anechoic chamber of the turbomachinery laboratory of Shanghai Jiaotong University. The aerodynamic performance test follows the GB/T 1236-2000 standard [16] and the aeroacoustic performance test follows the GB/T 2888-91 standard [17]. The fan test rig consists of the test impeller, driving unit, experimental instrumentation, and duct system (Figure 9).
The test impellers include the archetypal impeller and the optimized impeller. The impeller is connected directly to a YSF-8014 electric motor, whose rotary speed is controlled by a Sanken MF-7.5K-380 frequency converter and measured with a non-contact photoelectric digital tachometer (SZG-441C). As shown in Figure 9, the performance parameters of the test impellers are obtained with the experimental instrumentation, including a manometer for static pressure, a pitot probe for total pressure, and a sound level meter for the overall sound pressure level. The flow rate of the fan is changed using a throttle cone with a diameter of 600 mm. In addition, the cone prevents environmental noise from entering the anechoic chamber through the duct system, which also helps reduce the error of the aeroacoustic measurements.
Measurement of the outlet flow field is performed with a five-hole probe. The measurement points are located on plane AB (Figure 9), 15 mm behind the outlet of the impeller. On this plane, the aerodynamic parameters of 21 points along the radial direction of the blade are measured. More detailed descriptions of the experimental design and measurement methods are given in [12].
Aerodynamic performance
Figure 10 shows the aerodynamic performance comparison between the archetypal impeller and the optimized impeller with forward-skewed blades. The key dimensionless aerodynamic parameters are the flow rate coefficient ϕ, the total pressure coefficient Ψ, and the total pressure efficiency η = P_t Q / N (see the Nomenclature). Both the total pressure rise and efficiency of the optimized impeller are higher than those of the archetypal impeller over almost the whole flow rate range, except ϕ = 0.20-0.22. At the design flow rate ϕ = 0.245, the total pressure efficiency of the optimized impeller is increased by 1.27% and its total pressure rise by 3.56%, compared to the archetypal impeller. Detailed results are shown in Table 2. As described in Section 1, the efficiency and total pressure rise of the forward-skewed impeller with a skewed angle of 8.3 deg were decreased by 3.53% and 5.63%, respectively, compared to the archetypal radial impeller. This clear difference proves that the optimization design method is effective for skewed blade design. At off-design low flow rates, as shown in Figure 10(a), the minimum stable volume flow coefficient of the optimized impeller is significantly extended from 0.2 to 0.18, which stands for a wider stable operating range for the present axial fan.
Aeroacoustic performance
Figure 11 shows the overall sound pressure level (SPL) and the average A-weighted sound pressure level of the two impellers at various flow rates. The average A-weighted sound pressure level, L_SA, is defined as the overall A-weighted sound pressure level per unit flow rate and unit total pressure rise, which makes it more suitable for comparing fan noise across different flow rates and total pressure rises. As shown in Figure 11, the SPL increases in the low flow rate range and decreases over the stable operating range. The minimum SPL of the stable flow range occurs at the design flow rate ϕ = 0.245, where the overall sound pressure level L_A and the average A-weighted sound pressure level L_SA of the optimized impeller are decreased by 6.5 dBA compared to the archetypal impeller.
Figure 12 shows the one-third octave spectra of the two impellers at the design condition. As shown in Figure 12, the noise frequency range of the two impellers extends from 100 Hz to 10 000 Hz, indicating that broadband noise dominates the measured spectra. Over the whole frequency range, the SPL of the optimized impeller is lower than that of the archetypal impeller, and the difference is most pronounced between 300 Hz and 4000 Hz. This indicates that the 300-4000 Hz band is the chief contributor to the noise reduction of the optimized impeller.
Detailed flow field
In order to measure the detailed flow field and clarify the differences between the archetypal and optimized impellers, a five-hole aerodynamic probe is adopted in the present study to survey the outlet flow field of the two impellers. Figure 13 illustrates the spanwise distribution of the circumferentially averaged total pressure loss coefficient at the outlet of the two impellers at the design condition. The total pressure loss coefficient is defined as

$$C_{pt} = \frac{p^*_{1w} - p^*_{2w}}{\rho U_t^2 / 2}$$

As shown in Figure 13, the spanwise distribution of the total pressure loss coefficient reveals that most of the aerodynamic losses are concentrated in the near-wall regions, namely the blade tip (λ > 0.7) and hub (λ < 0.1); the losses in the main flow region (0.1 < λ < 0.7) are relatively low. Furthermore, C_pt of the optimized impeller is clearly decreased at both the blade tip and hub regions, while slightly increased at midspan. According to the radial-equilibrium equation, since line "BC" of the stacking line is skewed markedly along the rotation direction of the impeller (Figure 1), the radial component of the body force, directed radially inward, increases markedly. As a result, low-energy fluid accumulated near the shroud endwall moves toward the mid-span region and the loss near the shroud endwall decreases. In the lower mid-span region, the stacking line shape is unchanged and the centrifugal force remains predominant, so low-energy fluid accumulated near the hub endwall still moves toward the mid-span region. The experimental results show that, in the optimized impeller, the decrease of the blade tip loss is 2.2 times larger than the increase of the loss in the midspan region. This indicates that the circumferential skew of the blade has a beneficial impact on the spanwise redistribution of the total pressure loss: a blade with an appropriate circumferential skewed angle can reduce the losses in the flow field and increase the aerodynamic efficiency of the fan.
Figure 14 presents the spanwise distribution of the circumferentially averaged total pressure rise at the outlet of the two impellers at the design condition. The maximum total pressure rise at the outlet of the archetypal impeller occurs in the upper mid-span region (0.8 < λ < 0.9), whereas for the optimized impeller it occurs near the mid-span (0.5 < λ < 0.7). According to the radial-equilibrium equation, the spanwise pressure distribution in the archetypal impeller is determined by the centrifugal force, which increases the blade loading of the upper mid-span. However, the radial component of the body force generated by the skewed blade opposes the centrifugal force, so the effect of the centrifugal force is weakened in the upper mid-span region. In the lower mid-span region, the main forces (i.e., the centrifugal force) are unchanged. This results in increased blade loading at mid-span in the optimized impeller. The experimental results indicate that, in the optimized impeller, the increase of the total pressure rise in the region 0.2 < λ < 0.8 is 1.4 times larger than the decrease of the total pressure rise at the blade tip.

Figures 15 and 16 show the spanwise distribution of the circumferentially averaged axial velocity and the computed distribution of axial velocity at the outlet of the two impellers at the design condition, respectively. As shown in Figure 15, the distribution of axial velocity is similar to the distribution of total pressure rise in the two impellers. The maximum flow rate at the outlet of the archetypal impeller occurs in the upper mid-span region (0.7 < λ < 0.8), whereas for the optimized impeller it occurs near the mid-span (0.5 < λ < 0.7). This shows that the circumferential skew of the blade has a great impact on the spanwise redistribution of the flow rate. In addition, the flow in the corner region between the blade suction surface and the shroud is clearly improved in the optimized impeller (Figure 16), which has a beneficial impact on delaying the onset of stall and extending the stable operating range.
CONCLUSIONS
(1) The aerodynamic optimization design of a forward-skewed blade for a low-pressure axial-flow fan has been achieved with a BP neural network and GA optimization method. Both the aerodynamic and aeroacoustic performance of the optimized impeller, with a forward-skewed angle of 6.1 deg, are improved compared to the archetypal radial impeller: the total pressure efficiency is increased by 1.27% and the total pressure rise by 3.56%. The stable operating range of the optimized impeller is greatly extended, to more than 30%, and the aerodynamic noise is reduced by more than 6 dBA at the design operating condition. In addition, the detailed spectra indicate that the broadband noise of the impeller is dominant. The optimization design procedure has proved effective for further skewed and swept impeller design.
(2) The detailed flow field results indicate that the impeller with forward-skewed blades causes a spanwise redistribution of flow rate and pressure toward the blade midspan and reduces tip loading. The aerodynamic losses of the optimized impeller are decreased significantly near the blade shroud and hub endwall regions, with only a small penalty at mid-span. The overall pressure losses are clearly lower than those of the archetypal radial impeller, which results in higher efficiency and lower noise.
NOMENCLATURE

r: Radial coordinate (m)
θ: Circumferential coordinate (deg)
z: Axial coordinate (m)
r_h: Hub radius
r_t: Tip radius
n: Impeller speed (r·min⁻¹)
ν: Hub-tip ratio = r_h/r_t
Z: Number of blades
λ: Blade span = r/r_t
β: Blade stagger angle (deg)
ε_R: Tip clearance (m)
ρ: Fluid density (kg/m³)
P_imp: Imposed value of total pressure (Pa)
P_ref: Reference value of total pressure (Pa)
P_c: Computational value of total pressure (Pa)
m_1, m_2: Weight factors of the objective function
k: Exponent factor of the objective function
G_1: Initial value of mass flow rate (kg/s)
G_2: Final value of mass flow rate (kg/s)
M: Constraint factor of mass flow rate
U_t: Tangential velocity of blade tip (m/s)
Q: Volume flow rate (m³/s)
P_t: Total pressure rise (Pa)
N: Shaft power (W)
ϕ: Flow rate coefficient
Ψ: Total pressure coefficient
η: Total pressure efficiency
Q_min: Minimum flow rate before flow separation (m³/s)
Q_1: Flow rate at peak efficiency point (m³/s)
Δ: Stable operating range
L_A: Overall sound pressure level (dBA)
L_SA: Average A-weighted sound pressure level
W: Relative velocity (m/s)
p*_1w: Inlet stagnation pressure (Pa)
p*_2w: Outlet stagnation pressure (Pa)
C_pt: Total pressure loss coefficient
Figure 1: Schematic of the stacking line of the blade.
Figure 7: Evolution curve of the objective function.
Figure 8: 3D models of the archetypal blade and the optimized blade.
Table 1 summarizes the key design parameters of the archetypal impeller.
Table 1: Key design parameters of the archetypal impeller.
Table 2: Comparison of aerodynamic performance between the two impellers at the design condition.
"Engineering",
"Physics"
] |
Nanofluid-Powered Dual-Fluid Photovoltaic/Thermal (PV/T) System: Comparative Numerical Study
A limited number of studies have examined the effect of dual-fluid heat exchangers used for the cooling of photovoltaic (PV) cells. The current study proposes an explicit dynamic model for a dual-fluid photovoltaic/thermal (PV/T) system that uses a nanofluid and air simultaneously. Mathematical modeling and a CFD simulation were performed using MATLAB® and ANSYS FLUENT® software, respectively. An experimental validation of the numerical models was performed using results from a published study. Additionally, to identify the optimal nanofluid type for the PV/T collector, metal oxide nanoparticles (CuO, Al2O3, and SiO2) with different concentrations were dispersed in the base fluid (water). The results revealed that the CuO nanofluid showed the highest thermal conductivity and the best thermal stability of the nanofluids evaluated herein. Furthermore, the influence of the CuO nanofluid in combination with air on heat transfer enhancement is investigated in different flow regimes, namely laminar, transition, and turbulent. Using the CuO nanofluid plus air and water plus air, the total equivalent efficiency was found to be 90.3% and 79.8%, respectively. It is worth noting that the proposed models can efficiently simulate both single- and dual-fluid PV/T systems, even under periods of fluctuating irradiance.
Introduction
Cooling the PV module with heat transfer fluids lowers the solar cell temperature and increases the electricity conversion efficiency. A compact heat exchanger with a smaller surface area would be an excellent choice to provide the amount of cooling required by the PV module. A conventional heat exchanger usually uses a single heat transfer fluid; such heat exchangers have an insufficient contact area between the PV cells and the circulating fluid. Additionally, the PV module temperature increases due to the continuous penetration of solar radiation, which results in heat losses to the ambient air [1]. For this reason, the optimization of the simultaneous production of heat and electricity from PV/T systems has been an important issue for the past two decades [2,3].
Over the years, the PV cells in PV/T collectors have been cooled using either water or air as the heat transfer fluid. Although air-based systems have been demonstrated to be more economical for PV cell cooling, air has a relatively low heat transfer rate [4]. On the contrary, water-based systems, although more expensive, are considered more practical because of water's higher thermal conductivity and higher heat extraction rate compared to air [5,6]. The low thermal conductivity of water or air can be overcome by suspending nanoparticles in water. Adding nanoparticles to water improves the thermal conductivity and, consequently, has a significant positive effect on the heat transfer performance [7]. The effect of different nanofluids on the thermal/electrical efficiency of the PV/T system was reported by Rejeb et al. [8]. It was found that, under similar conditions, nanofluids outperformed water as the cooling medium.
A series of baffles is arranged on the back plate surface at a fixed spacing transverse to the air flow direction, in order to break the laminar sub-layer between the channel walls and the circulating air and thereby reduce the thermal resistance. This arrangement significantly increases the heat transfer interactions among the different components within the back panel of the PV/T collector.
Mathematical Model
The proposed model of the dual-fluid PV/T system is developed by modifying the single-fluid PV/T model reported by Chow [18]. The transient energy balance equations across the different collector components were modified according to the dual-fluid PV/T design. The explicit dynamic model can be applied to both single- and dual-fluid PV/T systems. The parameters used for the simulation are given in Table 2. In the simulation, a control volume with a fictitious boundary is regarded as a node within which the mass and energy balances are satisfied [5]. Along the flow direction, heat accumulates, which leads to a positive temperature gradient in all the collector components. Due to the parallel arrangement, the fluid flow rate is assumed to be the same in all the copper tubes. The material properties and physical dimensions of the PV plate, copper tubes, and back panel are considered constant [18]. All the heat transfer coefficients used in the simulation are calculated in real time. In the PV/T system, the temperature change across the interface between layers of different materials is attributed to the thermal resistance; therefore, each component of the collector can be represented by a temperature node. The first node, 'p', represents the PV plate; node 't' the absorber tube; 'n' the nanofluid in the tube; 'a' the inside air; and the last node, 'b', the back panel.
PV Plate
The transient energy balance for the PV plate node (Equation (1)) balances the absorbed solar radiation and the electrical output against the heat exchanged with the surroundings, the absorber tube, the inside air, and the back panel, where α_p is the absorptivity of the PV plate; M_p, T_p, and C_p are the mass, temperature, and specific heat of the PV plate, respectively; G and E are the solar radiation and electrical output from the PV cells, respectively; and T_∞, T_t, T_a, and T_b are the ambient, absorber tube, inside air, and back panel temperatures, respectively. h_wind is the heat convection caused by wind [19], h_p∞ is the heat radiated from the PV plate to the ambient air, h_pt is the energy conducted from the PV plate to the absorber tube, h_pa is the heat convected from the PV plate to the inside air, and h_pb is the heat radiated from the PV plate to the back panel. The electrical output follows from

$$E = \eta_e\, P\, G\, A_p, \qquad \eta_e = \eta_r \left[ 1 - \beta_r (T_p - T_r) \right] \quad (2)$$

where P and η_e are the packing factor and electrical efficiency, respectively, and β_r and η_r are the temperature coefficient and cell efficiency at the reference operating temperature (T_r).
Here, u_a is the wind velocity and k_p is the thermal conductivity of the PV plate; ε_p and ε_b are the emissivities of the PV plate and back panel, respectively; W is the tube spacing [18]; and D_o is the outer diameter of the tube.
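As a minimal sketch of how the explicit dynamic model advances one temperature node, the forward-Euler update below is offered under assumed area weightings and parameter names (`p["Ap"]`, `p["At"]`, etc.), which are illustrative rather than the paper's exact formulation:

```python
def step_pv_plate(Tp, Tt, Ta, Tb, Tamb, G, E, dt, p):
    """One forward-Euler update of the PV-plate temperature node.
    The masses, areas and heat transfer coefficients in `p` are
    assumed inputs; the paper's model recomputes the h's in real
    time, and the exact area weighting of each term is an
    assumption here, not taken from the paper."""
    q_net = (p["alpha_p"] * G * p["Ap"] - E
             - (p["h_wind"] + p["h_pinf"]) * p["Ap"] * (Tp - Tamb)
             - p["h_pt"] * p["At"] * (Tp - Tt)
             - p["h_pa"] * p["Ap"] * (Tp - Ta)
             - p["h_pb"] * p["Ap"] * (Tp - Tb))
    return Tp + dt * q_net / (p["Mp"] * p["Cp"])
```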
Absorber Tube
The energy balance of the absorber tube involves C_t and M_t, the specific heat and mass of the absorber tube, respectively; h_tn and h_ta, the heat convected from the tube to the nanofluid and to the inside air, respectively; and h_tb, the heat radiated from the tube to the back panel. h_tn can be estimated using the following correlation [8]:

$$h_{tn} = \frac{Nu_n\, k_n}{D_{in}} \quad (10)$$

where k_n and Nu_n are the thermal conductivity and Nusselt number of the nanofluid, and D_in is the inner diameter of the tube. Assuming a spherical shape of the Al2O3 nanoparticles [20], the Nusselt number can be calculated as

$$Nu_n = Pr^{0.1039}\left( 1.0257\varphi + 1.1397\,Re^{0.205} + 0.788\,\varphi\,Re^{0.205} + 1.2069 \right) \quad (11)$$

where φ represents the volume concentration of the metal oxide nanoparticles in the water, and Pr and Re are the Prandtl number and Reynolds number, respectively.

For the nanofluid in the tube, the mean temperature is

$$T_n = (T_{n,o} + T_{n,in})/2 \quad (13)$$

where ṁ_n is the mass flow rate of the nanofluid; C_n and M_n are respectively the specific heat and mass of the nanofluid; and T_n,in and T_n,o are respectively the nanofluid inlet and outlet temperatures. For the inside air, the mean temperature is

$$T_a = (T_{a,o} + T_{a,in})/2 \quad (15)$$

where ṁ_a is the mass flow rate of the inside air; C_a and M_a are respectively the specific heat and mass of the inside air; T_a,in and T_a,o are respectively the air inlet and outlet temperatures; and h_ab is the heat convected from the back panel to the inside air.

Back Plate

The back panel balance (Equation (16)) involves C_b and M_b, respectively the specific heat and mass of the back panel, and h_b∞, the heat loss to the ambient air. The instantaneous total thermal efficiency of the dual-fluid PV/T collector is determined using the following correlation [4]:

$$\eta_{th} = \frac{\dot{m}_n C_n (T_{n,o} - T_{n,in}) + \dot{m}_a C_a (T_{a,o} - T_{a,in})}{G\, A_c} \quad (17)$$

where A_c is the collector area, which is considered to be the same as the PV plate area (A_p). The total yield of the dual-fluid PV/T system can be expressed in terms of the primary energy saving efficiency, or total equivalent efficiency; for this purpose, the electrical efficiency of the PV/T system is divided by the efficiency of a conventional electric power plant [15]:

$$\eta_{PVT} = \eta_{th} + \frac{\eta_e}{\eta_{pp}} \quad (18)$$

where η_PVT is the total equivalent efficiency and η_e is the electrical efficiency. An average value of η_pp = 38% is taken as the electric generation efficiency of a coal power plant [19]. As suggested by Maxwell [21], the thermal conductivity of the colloidal solution (nanofluid) can be expressed as

$$k_n = k_{bf}\, \frac{k_{np} + 2k_{bf} + 2\varphi (k_{np} - k_{bf})}{k_{np} + 2k_{bf} - \varphi (k_{np} - k_{bf})}$$

Considering the mixture rule [22], the nanofluid density is

$$\rho_n = \varphi \rho_{np} + (1 - \varphi) \rho_{bf}$$

and the specific heat of the nanofluid is [22]

$$C_n = \frac{\varphi \rho_{np} C_{np} + (1 - \varphi) \rho_{bf} C_{bf}}{\rho_n}$$

where ρ_n is the density of the nanofluid, and the subscripts np and bf denote the nanoparticles and the base fluid, respectively.
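A small sketch of the property relations above; the CuO property values in the example call are illustrative assumptions, not values from the paper:

```python
def nanofluid_properties(phi, k_bf, k_np, rho_bf, rho_np, c_bf, c_np):
    """Maxwell thermal conductivity plus mixture-rule density and
    specific heat for a dilute nanofluid, following the relations
    above (phi is the particle volume fraction)."""
    k_n = k_bf * (k_np + 2 * k_bf + 2 * phi * (k_np - k_bf)) \
               / (k_np + 2 * k_bf - phi * (k_np - k_bf))
    rho_n = phi * rho_np + (1 - phi) * rho_bf
    c_n = (phi * rho_np * c_np + (1 - phi) * rho_bf * c_bf) / rho_n
    return k_n, rho_n, c_n

# Example: 0.75 vol% CuO in water (illustrative property values).
k_n, rho_n, c_n = nanofluid_properties(
    phi=0.0075, k_bf=0.613, k_np=33.0,
    rho_bf=997.0, rho_np=6500.0, c_bf=4179.0, c_np=540.0)
```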
CFD Model
In order to analyze the performance of the dual-fluid PV/T system, the temperature distribution in the cooling conduits and the PV module was predicted using a model built in ANSYS FLUENT software (Release 14.5, ANSYS, Inc., Canonsburg, PA, USA) [23]. The CAD model of the proposed system was created with Creo Elements/Pro software. A precise and accurate prediction of the temperature profile across the PV module plays a key role in developing a prototype of a complex solar system such as a dual-fluid PV/T system [24]; therefore, special care was taken in the selection of the boundary conditions and discretization schemes. The modeling and numerical analysis were performed in three steps: first, developing the dual-fluid PV/T model; second, assessing the thermal performance of the collector under various operating conditions; and last, validating the developed model against experimental data.
Numerical Scheme
A computational fluid dynamics (CFD) solver based on the finite volume method (FVM) was used to discretize the continuous governing equations into their algebraic counterparts [25], which are then solved numerically to provide the solution field. Different convergence criteria were set for the velocity and temperature field solutions: the continuity and momentum equations are considered converged when the residuals drop to 10⁻³, and the energy equation when they drop to 10⁻⁶. To enhance the solution stability and accelerate the convergence rate, the under-relaxation factors were carefully chosen. Since the temperature condition and fluid flow rate can be considered the same in all the parallel tubes, the model is limited to the vicinity of a single tube [26,27].
To evaluate the heat flow through the fluid-solid interactions, a conjugate heat transfer mechanism was taken into account [28]. For this purpose the whole model is divided into two domains based on their physical nature: the solar cells, absorber tube, baffles, and back panel compose the solid domain, while the air and the nanofluid (or water) compose the fluid domain. The domains are differentiated by different colors, as shown in Figure 1b. A no-slip condition was imposed at all fluid-solid boundaries. The k-ε two-transport-equation model with an enhanced wall treatment was chosen to resolve the turbulence in the air duct and nanofluid pipe. The governing equations defining the velocity, pressure, and temperature in the fluid domain, written here for steady incompressible flow with u_i the velocity components (u, v, w) along x, y, z, are [29]:

Continuity equation:

$$\frac{\partial u_i}{\partial x_i} = 0$$

Momentum equation:

$$\rho u_j \frac{\partial u_i}{\partial x_j} = -\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left[ \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) \right]$$

Energy equation:

$$\rho c_p u_j \frac{\partial T}{\partial x_j} = \frac{\partial}{\partial x_j}\left( k \frac{\partial T}{\partial x_j} \right)$$
Boundary Conditions and Grid Study
A mass flow rate boundary condition was used at the tube absorber and air channel entrances. A pressure-outlet boundary condition was applied at the outlet sections of both fluids. For the analysis, the ambient air temperature was taken to be 25 °C and the incident solar radiation to be 900 W/m². Due to the opaque top surface of the PV/T collector, a fixed heat flux was applied as the thermal boundary condition instead of a solar ray tracing algorithm, because the solar load model's ray tracing algorithm in FLUENT does not include internals such as the heat gain for a model with an opaque rooftop [30].
For the meshing, an automatic mesh method, which toggles between tetrahedral (patch conforming algorithm) and sweep meshing, was used. A higher-density mesh was applied in the areas where the heat transfer is of greater concern. In order to obtain a reliable solution, a grid independence test was carried out, which is particularly important for problems involving heat transfer in fluids. In this study, different grid sizes were tested while monitoring the PV module temperature; the goal of this step is to ensure that the predicted PV temperature no longer changes as the grid is refined. This process of finding the optimal mesh size is also known as a grid independence study. Four grid sizes with 781,430, 903,638, 1,060,023, and 1,437,673 elements were tested for the given dual-fluid PV/T model, as shown in Table 3. The optimal mesh sizes for the independent and simultaneous fluid modes were found to be 903,638 and 1,060,023 elements, respectively.
Meanwhile, the numerical model of the nanofluid heat exchanger was validated by comparing the predicted results with the experimental data taken from the uncovered nanofluid PV/T system reported by Rejeb et al. [8]. To check the reliability of the model, a statistical parameter, the root mean square percentage deviation (RMSD), is used [15]. The RMSD, which is the most frequently used parameter for error analysis, measures the deviations between the results predicted by a model and the measured results:

$$\mathrm{RMSD} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{X_i - Y_i}{X_i}\times 100\right)^{2}}$$

where n is the number of data points, and Y_i and X_i are the predicted and measured values, respectively. The RMSD for the average PV surface and nanofluid outlet temperatures between the predicted and measured data was found to be 1.3% and 1.9%, respectively. The results derived from the model are consistent with the experimental data, and it is concluded that the obtained results demonstrate the reliability of the model used for the performance prediction of the PV/T system.
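As a quick numerical illustration of this error metric, the sketch below evaluates the percentage RMSD for a short series of hypothetical predicted and measured temperatures; the values are placeholders, not data from the study.

```python
import math

def rmsd_percent(predicted, measured):
    """Root mean square percentage deviation between predicted and measured data."""
    terms = [((x - y) / x * 100.0) ** 2 for y, x in zip(predicted, measured)]
    return math.sqrt(sum(terms) / len(terms))

# Placeholder temperature series (degC), for illustration only.
measured = [41.2, 45.6, 49.8, 52.3, 50.1]
predicted = [40.8, 46.1, 50.4, 51.7, 49.5]
print(f"RMSD = {rmsd_percent(predicted, measured):.2f} %")
```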
Results Derived from Mathematical Model
The selection of an optimal fluid type and the optimal concentration of nanoparticles are important for a higher energy production from the PV/T collector. Based on the availability, cost, and inertness to the PV/T material, three metal oxide nanoparticles were selected: aluminum oxide (Al2O3), copper oxide (CuO), and silicon dioxide (SiO2). Table 4 shows the thermo-physical properties of the metal oxide nanoparticles used in this study [32][33][34]. The influence of the nanoparticle concentrations on the collector's performance is investigated by considering important thermo-physical properties such as the viscosity and thermal conductivity. As depicted in Figure 4, the viscosity ratio and thermal conductivity ratio increase with the increasing nanoparticle concentration. However, the highest percentage increase in the thermal conductivity was found with the CuO nanofluid, followed by the Al2O3 and SiO2 nanofluids. The optimal concentration for the available nanoparticles in the base fluid (water) is around 0.75%; beyond this point, the aggregation of the nanoparticles and the thermal diffusivity increase significantly. One of the reasons that the CuO nanofluid affords the highest heat transfer performance is that it has a lower specific heat and a slightly higher thermal conductivity compared to the aforementioned nanofluids. Based on the preceding outcomes, CuO nanoparticles at a concentration of 0.75% in water are selected and employed as the optimal nanofluid throughout the rest of this study.

In order to locate the optimal flow rate of each fluid, both the nanofluid and air are operated independently while their counterparts are kept stagnant. Considering the proposed system configurations, the laminar, transition, and turbulent flow regimes for the nanofluid correspond to 0.006 kg/s, 0.015 kg/s, and 0.025 kg/s, respectively; and for air, to 0.009 kg/s, 0.024 kg/s, and 0.055 kg/s, respectively. Therefore, the mass flow rate of the nanofluid was varied from 0 to 0.03 kg/s, and the air flow rate from 0 to 0.1 kg/s.
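The concentration trends shown in Figure 4 can be reproduced qualitatively with classical effective-medium relations. The minimal sketch below uses the Maxwell model for thermal conductivity, the Brinkman model for viscosity, and simple mixture rules for density and specific heat; these particular correlations are an assumption here, since the study's own property models are not restated in this section, and the CuO and water property values are nominal literature numbers.

```python
# Effective thermo-physical properties of a CuO-water nanofluid as a function of
# particle volume fraction, using classical mixture models (Maxwell for k,
# Brinkman for viscosity). Correlations and property values are assumptions for
# illustration; the study's own models may differ.
def nanofluid_properties(phi, k_bf=0.613, mu_bf=8.9e-4, rho_bf=997.0, cp_bf=4179.0,
                         k_np=18.0, rho_np=6500.0, cp_np=540.0):
    """phi: particle volume fraction (e.g. 0.0075 for 0.75 %)."""
    k_eff = k_bf * ((k_np + 2 * k_bf - 2 * phi * (k_bf - k_np))
                    / (k_np + 2 * k_bf + phi * (k_bf - k_np)))   # Maxwell model
    mu_eff = mu_bf / (1.0 - phi) ** 2.5                           # Brinkman model
    rho_eff = (1.0 - phi) * rho_bf + phi * rho_np                 # mixture rule
    cp_eff = ((1.0 - phi) * rho_bf * cp_bf + phi * rho_np * cp_np) / rho_eff
    return k_eff, mu_eff, rho_eff, cp_eff

for phi in (0.0045, 0.0060, 0.0075):
    k, mu, rho, cp = nanofluid_properties(phi)
    print(f"phi={phi:.4f}: k={k:.3f} W/m K, mu={mu*1e3:.3f} mPa s, "
          f"rho={rho:.0f} kg/m3, cp={cp:.0f} J/kg K")
```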
The thermal and electrical efficiencies of the PV/T collector are predicted by operating the CuO nanofluid and air independently, as shown in Figure 5. The efficiency values increased with an increasing flow rate of both fluids. However, the impact of an increase of the air flow rate on the PV/T efficiency patterns is small compared to that of the nanofluid flow rate. Furthermore, the percentage increase of the thermal and electrical efficiencies with the air flow rate is very small. On the contrary, the increase in the collector efficiency is notable even at a low mass flow rate of the nanofluid. Therefore, due to its better thermal properties and high heat removal capability, the nanofluid flow rate was varied, as opposed to the air flow rate. This means that when both heat transfer fluids were operated simultaneously, the air flow was kept constant while a variable flow rate of the CuO nanofluid was considered.

Considering different heat transfer fluids, the daily PV module temperature is predicted under similar operating conditions. For a comparative analysis, four fluid modes were used: water, CuO nanofluid, water plus air, and CuO nanofluid plus air (Figure 6). During the simultaneous mode of fluid operation, the air and CuO nanofluid flow rates were fixed at 0.055 kg/s and 0.025 kg/s, respectively. During the independent mode, the flow rate of either one of the two heat transfer fluids was set to zero, as described by Abu Bakar et al. [14]. The predicted results show that the maximum PV module temperature with water, nanofluid, air plus water, and air plus nanofluid was 57.5 °C, 55.1 °C, 51.9 °C, and 48.6 °C, respectively. It is noted that the nanofluid (in either the simultaneous or independent mode) has an enormous potential as a heat transfer fluid compared to water. This indicates that a fluid with a high thermal conductivity can extract extra accumulated solar heat from PV cells and thus provide better and more targeted cooling.
In addition, the simultaneous application of two fluids (air and nanofluid in particular) results in a significant reduction in the PV cell temperature. The results showed that the application of two fluids remarkably enhanced the total surface area of the heat exchanger. It should be noted that when a dual-fluid heat exchanger is used for the independent mode of fluid operation, it might affect the secondary fluid outlet temperature due to the primary fluid which may have been trapped in the pipe bends.
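The electrical benefit of lowering the cell temperature can be illustrated with the usual linear efficiency-temperature relation. In the minimal sketch below, the module temperatures are those quoted above, while the reference efficiency and temperature coefficient are typical crystalline-silicon values assumed only for illustration.

```python
# Linear temperature dependence of PV electrical efficiency:
# eta_el = eta_ref * (1 - beta * (T_pv - T_ref)).
# eta_ref and beta are typical crystalline-silicon values assumed here.
eta_ref, beta, t_ref = 0.125, 0.0045, 25.0   # -, 1/K, degC (assumed)

module_temps = {"water": 57.5, "CuO nanofluid": 55.1,
                "water + air": 51.9, "CuO nanofluid + air": 48.6}  # degC, from Figure 6

for mode, t_pv in module_temps.items():
    eta = eta_ref * (1.0 - beta * (t_pv - t_ref))
    print(f"{mode:20s}: PV temperature {t_pv:4.1f} degC -> eta_el = {eta*100:.2f} %")
```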
The influence of a variable flow rate of the nanofluid or water at a fixed airflow on the fluid temperature rise is presented in Figure 7. When the fluids are circulated simultaneously, the temperature rise of both fluids decreased as the flow rate of the nanofluid increased. During the simultaneous operation of fluids, the temperature rise of both the CuO nanofluid and air was higher than that of the water and air. In both systems, the temperature rise of the liquid fluids (nanofluid and water) was smaller than that of the air. The discrepancy may be a result of the lower specific heat capacity of air. In addition, the findings demonstrate that the CuO nanofluid in combination with air can extract more accumulated solar heat from the PV/T system than water and air as a dual fluid. This is anticipated to be due to the higher thermal conductivity and the lower specific heat of the nanofluid obtained by dispersing CuO in the water, which removes solar heat faster than water.
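The heat extracted by each circulating fluid follows from a simple energy balance, Q = ṁ·cp·ΔT. In the sketch below the mass flow rates are those used in the simultaneous mode, while the temperature rises and specific heats are placeholder values chosen only to show the bookkeeping.

```python
# Useful heat extracted by each fluid in the dual-fluid mode, Q = m_dot * cp * dT.
# Mass flow rates are from the text; temperature rises and specific heats are
# placeholder values for illustration.
streams = {
    # name: (mass flow kg/s, specific heat J/kg K, temperature rise K)
    "CuO nanofluid": (0.025, 4050.0, 6.0),
    "air":           (0.055, 1006.0, 9.0),
}

q_total = 0.0
for name, (m_dot, cp, d_t) in streams.items():
    q = m_dot * cp * d_t
    q_total += q
    print(f"{name:13s}: Q = {q:6.1f} W")
print(f"total useful heat gain: {q_total:.1f} W")
```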
Table 5 shows the variations of the total equivalent efficiency of a dual-fluid PV/T system against the variable CuO nanofluid flow rate at a fixed airflow of 0.055 kg/s. The total equivalent efficiency is determined at a fixed quantity of daily solar radiation (23.25 MJ/m² day) and ambient temperature (21.47 °C). When the nanofluid flow rate is varied between 0.005 kg/s and 0.030 kg/s at a fixed air flow rate of 0.055 kg/s, the total equivalent efficiency of the PV/T collector increased to 79.8% and 90.3% with water plus air, and with nanofluid plus air, respectively. It is noted that at the lowest nanofluid flow rate of 0.005 kg/s, the total equivalent efficiency was found to be as low as 82.6%, while under similar operating conditions using water plus air, the minimum value was 73.7%. The results show that when the fluids are operated simultaneously, a reasonably good total equivalent efficiency is achievable even at a low mass flow rate. In comparison with water plus air, the total equivalent efficiency of the PV/T system using nanofluid plus air as the dual fluid was found to be approximately 10% higher. This can be attributed to the thermophysical properties of the nanofluid being sufficiently good to enhance the heat transfer behavior and thus increase the rate of heat removal from the PV module.

Figure 8 shows the variations of the convection heat transfer coefficient for various fluid flow rates and absorber temperatures. The convection heat transfer coefficient is calculated using the fluid average temperature and wall temperature, which are extracted from the ANSYS FLUENT software. To understand the influence of the nanofluid, the convection heat transfer coefficients of the CuO nanofluid (0.45%, 0.60%, and 0.75%) are discussed here, in comparison with those of water.
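The convection coefficient evaluated for Figure 8 follows from Newton's law of cooling once the wall and bulk fluid temperatures are known. The heat flux and temperatures in the sketch below are placeholder values, since only the procedure, not the extracted numbers, is described here.

```python
# Convection heat transfer coefficient from Newton's law of cooling:
# h = q_wall / (T_wall - T_fluid_mean). All numerical values are placeholders.
q_wall = 650.0            # wall heat flux, W/m^2 (assumed)
t_wall = 55.0             # absorber wall temperature, degC (assumed)
t_in, t_out = 30.0, 38.0  # fluid inlet/outlet temperatures, degC (assumed)

t_fluid_mean = 0.5 * (t_in + t_out)
h = q_wall / (t_wall - t_fluid_mean)
print(f"convective heat transfer coefficient h = {h:.1f} W/m^2 K")
```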
The results indicate that the convection heat transfer coefficient (h_ft) between the circulating fluid and the absorber wall increases for all heat transfer fluids with an increasing mass flow rate, as expected. However, initially, at an absorber temperature of 55 °C, the water shows a higher h_ft than the CuO nanofluid. The high specific heat of water may at least partly account for these results. Moreover, the low density and high thermal conductivity of the CuO nanofluid at a higher temperature enhance the random motion of the nanoparticles, which ultimately increases the nanoparticle contact with the absorber surface and hence the heat transfer rate. It is observed that at a higher absorber temperature of 95 °C, the 0.75% CuO nanofluid has the highest heat transfer rate, followed by 0.60% CuO, 0.45% CuO, and water.
Results Derived from CFD Model
Since the absorber tubes are arranged in parallel, the temperature distribution and fluid flow through all the tubes can be taken as being the same. Therefore, the vicinity of a single pipe can be used to analyze the thermal behavior of the entire PV panel [18]. The temperature distribution across the PV surface is predicted under the simultaneous and independent modes of fluid operation, namely: water, CuO nanofluid, water plus air, and CuO nanofluid plus air. In the simulation, the flow rates of the liquid fluid and air are considered to be fixed at 0.025 kg/s and 0.055 kg/s, respectively. The interface temperature between the PV module and both heat exchangers is presented in Figures 9 and 10. Due to the fluid-to-solid and solid-to-solid coupling, the shadow effects can be clearly seen at the interfaces or common faces. The PV surface temperature has been reported taking four modes of fluid operation into account: with solely a water heat exchanger, the PV surface reached a temperature of 59 °C; with the use of the nanofluid, the estimated PV surface temperature is 56 °C; with water plus air as a dual exchanger, the PV temperature fell to 52 °C; and with nanofluid plus air this value further declined to 47 °C.
This is attributed to the dual-fluid exchanger possibly covering most of the surface area of the PV module and ultimately contributing to a more efficient heat transfer. Furthermore, in the case of the simultaneous application of nanofluid plus air, in particular, an increase in the surface area of the heat exchanger is one among other possible explanations.
It is worthwhile to investigate the thermal performance of each fluid in a dual-fluid PV/T system when both fluids are operated at the same time, as shown in Figures 11 and 12. In particular, a combination of the CuO nanofluid and air as a dual fluid is attractive because of the superior thermo-physical properties of the nanofluid relative to those of water. Because of the simultaneous operation, the thermal performance of each fluid is directly associated with its counterpart. Therefore, it is worth noting the contribution of each fluid to the overall performance of a dual-fluid PV/T system. In a situation where the mass flow rate of water or nanofluid increases while considering a fixed air flow rate, the extra solar heat is extracted by the fluid with an increasing flow rate [15]. Therefore, a relatively small amount of solar heat remains to be removed by the air as the second fluid. The observed increase in the mass flow rate of the nanofluid or water at a constant air flow rate had a significant impact on the amount of heat extracted by the air. Hence, the total amount of accumulated solar heat extracted by a nanofluid plus air is higher than in the case of water plus air when used as a dual fluid.
The nano-engineered dual-fluid PV/T system is assessed in terms of effectiveness and reliability by comparing its performance with previously reported collectors using conventional heat transfer fluids such as air, water, nanofluid, and water plus air (Figure 13). As reported by Abu Bakar et al. [14], the maximum thermal and electrical efficiencies of the PV/T collector with water plus air were 65.1% and 11.3%, respectively. In contrast, using the proposed PV/T collector, the predicted thermal and electrical efficiencies were found to be 8.4% and 2.3% higher, respectively, than those of the aforementioned case. Compared to water-based and nanofluid-based PV/T systems [11], the proposed system had a 26% and 17.3% higher thermal efficiency, respectively. Furthermore, in the case of the reference PV module (without cooling) [35], the electrical efficiency was found to be 6.61%. This improvement may be attributed to the use of two fluids for cooling the PV cells, which increases the overall surface area for heat transfer and hence ultimately improves the heat extraction from the PV cells. Specifically, introducing the CuO nanofluid along with air as a dual fluid increases the total efficiency per unit area because of their superior thermo-physical properties. Using the CuO nanofluid in combination with air for a PV/T system is promising considering the higher overall performance that can be achieved compared to a collector employing conventional fluids. In addition, a nano-engineered dual-fluid PV/T system offers a wide range of thermal applications depending upon the energy needs.
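Total equivalent efficiencies of the kind compared above are commonly formed by weighting the electrical output with a power-plant conversion factor before adding it to the thermal efficiency. The sketch below only illustrates that bookkeeping; the 0.38 conversion factor and the individual efficiencies are assumptions, not values confirmed by the study.

```python
# Total equivalent efficiency of a PV/T collector, formed by converting the
# electrical efficiency to its thermal (primary-energy) equivalent:
#   eta_total = eta_thermal + eta_electrical / eta_power_plant
# The conversion factor and both efficiencies below are assumed values.
eta_power_plant = 0.38   # typical electricity generation/transmission factor (assumed)
eta_thermal = 0.60       # thermal efficiency (assumed)
eta_electrical = 0.10    # electrical efficiency (assumed)

eta_total = eta_thermal + eta_electrical / eta_power_plant
print(f"total equivalent efficiency = {eta_total*100:.1f} %")
```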
Conclusion
Transient mathematical and CFD models of a nano-engineered dual-fluid PV/T system were developed in this study. To determine the optimal fluid type for the dual-fluid PV/T system, the effect of different concentrations of metal oxide nanoparticles in the base fluid was evaluated. The 0.75% CuO nanofluid showed more promising results than the other colloidal solutions evaluated herein. When the fluids are being operated simultaneously, the total energy production of a PV/T system using CuO nanofluid plus air as a dual fluid was higher than that of the water plus air case. We observed that the maximum total equivalent efficiencies of the PV/T system using the CuO nanofluid plus air, and using the water plus air, were 90.3% and 79.8%, respectively. The results showed that the heat transfer behavior of the nanofluid was highly dependent on the nanoparticle concentration. The nanofluid as a coolant tends to extract extra accumulated solar heat from the PV module even at higher operating temperature, in comparison with water. The simulation results were in good agreement with the published data. It is important to emphasize that the utilization of two fluids (nanofluid and air in particular) instead of a single fluid affects the efficiency pattern of the PV/T collector. It is anticipated that even with a very small penalty in the form of the electrical cost to pump two fluids, the decrease in the PV module temperature and the increase in the thermal efficiency of the collector are enormous. Outdoor experimental testing will be a future research focus in order to optimize the proposed collector performance.
Author Contributions: All authors contributed equally to the research work and its final dissemination as an article in its current form.
Conflicts of Interest:
The authors declare no conflict of interest. | 10,781.4 | 2019-02-26T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Materials Science",
"Physics"
] |
Analyzing the Local Electronic Structure of Co3O4 Using 2p3d Resonant Inelastic X-ray Scattering
We present the cobalt 2p3d resonant inelastic X-ray scattering (RIXS) spectra of Co3O4. Guided by multiplet simulation, the excited states at 0.5 and 1.3 eV can be identified as the 4T2 excited state of the tetrahedral Co2+ and the 3T2g excited state of the octahedral Co3+, respectively. The ground states of Co2+ and Co3+ sites are determined to be high-spin 4A2(Td) and low-spin 1A1g(Oh), respectively. It indicates that the high-spin Co2+ is the magnetically active site in Co3O4. Additionally, the ligand-to-metal charge transfer analysis shows strong orbital hybridization between the cobalt and oxygen ions at the Co3+ site, while the hybridization is weak at the Co2+ site.
A. 2p XAS spectra background subtraction
Figure S1 shows the raw 2p XAS spectra (red), the subtracted 2p XAS spectra (blue), and the background profile (black). We subtracted the background signal from the original XAS results, where the background signal contains the edge jump(s), particle scattering, and a linear signal. The subtracted spectra were normalized to the maximum of the Co L3-edge. The photon energy of the RIXS beamline was calibrated to the spectra acquired at the WERA beamline, and the same calibration was also applied to the incident energy of the RIXS spectra.
B. Theoretical absorption background estimation
According to the tabulated data [s1], we can estimate the attenuation length for the individual elements of Co3O4 (ρ = 6.11 g/cm³). The partial densities of the cobalt and oxygen elements (ρCo and ρO) are 4.49 g/cm³ and 1.62 g/cm³, respectively. So the attenuation lengths at 780 eV for the cobalt and oxygen elements in Co3O4 are expected to be ~140 nm and ~700 nm, respectively. However, the attenuation lengths at the absorption edge are likely overestimated using Henke's table. For cobalt metal (ρ = 8.9 g/cm³), the estimated attenuation length is ~75 nm, but the experimental results indicate that the attenuation length was ~25 nm at the peak maximum [s2]. Thus, we estimated a value within the range from 25 nm to 140 nm for the attenuation length of the cobalt element at 780 eV.
The weighting of each elemental contribution to the absorption is proportional to the inverse of its attenuation length, so the weightings for Co and O are estimated to be 96-83% and 4-17%, respectively (for attenuation lengths of 25-140 nm and 700 nm). For pure Co3O4, the background absorption at 780 eV is the contribution of the oxygen absorption (the contributions of other edges were omitted). Thus, a value of ~10% of the total absorption is a suitable estimate for the background absorption.
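The elemental weightings quoted above follow directly from the inverse attenuation lengths. The short sketch below reproduces the 96-83% (Co) and 4-17% (O) range from the estimated bounds on the Co attenuation length; it is only an arithmetic illustration of the estimate described in the text.

```python
# Weighting of the Co and O contributions to the absorption at 780 eV,
# proportional to the inverse attenuation length of each element in Co3O4.
def weights(lambda_co_nm, lambda_o_nm=700.0):
    inv_co, inv_o = 1.0 / lambda_co_nm, 1.0 / lambda_o_nm
    total = inv_co + inv_o
    return inv_co / total, inv_o / total

for lam_co in (25.0, 140.0):   # lower and upper bounds for the Co attenuation length
    w_co, w_o = weights(lam_co)
    print(f"lambda_Co = {lam_co:5.0f} nm: Co {w_co*100:.0f} %, O {w_o*100:.0f} %")
```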
C. Used parameters and the effective crystal field energy
Tables S1-S4 give the parameters used. The Slater integrals F2dd, F4dd, F2pd, G1pd, and G3pd, as well as Udd and Upd, are used to determine the Coulomb interaction. The Slater integrals were taken in an ionic scheme, where ~80% (75%) of the values from the Hartree-Fock approximation is used for Co2+ (Co3+). The Udd and Upd values were set to the reference values. The 2p and 3d spin-orbit coupling constants describe the spin-orbit interaction. The charge transfer energy ∆ and the hopping integrals Ve(eg) / Vt2(t2g) mimic the energy splitting between two configurations and the electron hopping intensity from one configuration to another.
The crystal field energy 10Dq identifies the energy difference between the e (eg) and the t2 (t2g) orbitals in Td (Oh) symmetry. Once the ligand-to-metal charge transfer is included, the total effective crystal field energy 10Dqtot is composed of two parts: (i) the ionic crystal field energy of the cobalt 3d shell (10Dqionic) and (ii) an additional contribution caused by charge transfer and exchange interaction (10DqCT) [s4]. The 10Dqtot in the current work can be estimated from the 1A1g-1T1g and 4A2-4T2 excited-state energies for the octahedral Co3+ and the tetrahedral Co2+ sites, which are ~1.90 eV and ~−0.55 eV, respectively. The negative sign for the tetrahedral symmetry reflects the inversion of the t2 and e orbitals with respect to the octahedral symmetry. In contrast, for the simulation considering the charge transfer and exchange interaction effects, the 10Dqionic should be further reduced to 1.15 eV and −0.1 eV for the octahedral Co3+ site and the tetrahedral Co2+ site, respectively. Our theoretical crystal field energy values (obtained by LDA calculation) consider only the values applied to the Co 3d orbitals, which means the charge-transfer-induced crystal field splitting was not involved. Thus, only the 10Dqionic of the cobalt 3d shell has been compared in the main text. We note that the contraction induced by the core hole is applied to the whole valence state wave function (corresponding to 10Dqtot); thus, we applied a 10Dqtot value in the intermediate state that is reduced by ~15% in comparison with the ground state [s3] (1.59 eV for the Co3+ site and −0.47 eV for the Co2+ site).
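The relations between these crystal-field contributions can be summarized numerically. The minimal sketch below simply applies 10DqCT = 10Dqtot − 10Dqionic and a ~15% core-hole reduction of 10Dqtot, using the values given in the text; because the 15% figure is approximate, the computed intermediate-state values only approximately reproduce the quoted 1.59 eV and −0.47 eV.

```python
# Decomposition of the effective crystal-field splitting and the core-hole
# reduction applied in the intermediate state, using the values quoted in the text.
sites = {
    # site: (10Dq_tot in ground state, 10Dq_ionic), both in eV
    "Co3+ (Oh)": (1.90, 1.15),
    "Co2+ (Td)": (-0.55, -0.10),
}
core_hole_reduction = 0.15   # ~15 % contraction in the 2p core-hole intermediate state

for site, (dq_tot, dq_ionic) in sites.items():
    dq_ct = dq_tot - dq_ionic
    dq_intermediate = dq_tot * (1.0 - core_hole_reduction)
    print(f"{site}: 10Dq_CT = {dq_ct:+.2f} eV, "
          f"10Dq_tot(intermediate) = {dq_intermediate:+.2f} eV")
```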
In the simulation, a 300 meV (FWHM) Lorentzian convolved with a 300 meV (FWHM) Gaussian was used to simulate the intrinsic broadening and the instrumental broadening of the incident beam, which gives a total width of 0.6 eV. For the RIXS spectra, the same incident beam width was applied. In addition, a 50 meV (FWHM) Lorentzian convolved with a 60 meV (FWHM) Gaussian was used for the emitted beam, which implies a total width of 0.11 eV. These values are comparable to the experimental settings. Nevertheless, we note that the intrinsic broadening was fixed to a single value in the current simulation. Figure S2 presents the comparison of the simulated spectra with and without the ligand-to-metal charge transfer effect, using the parameters in Tables S1 and S2 (with charge transfer parameters) and Tables S3 and S4 (with Slater reduction), respectively. Overall, the spectra look similar. The simulation including the ligand-to-metal charge transfer at both the Co2+ and Co3+ sites shows better agreement.

Figure S2: Comparison of the simulated spectra with and without ligand-to-metal charge transfer effect.
E. Estimating the differential orbital covalency of a cation from the cluster model
Including the ligand-to-metal charge transfer effect means that the ground state configuration is a mixture of the 3d configurations and ligand-hole (L) configurations. We calculated the weights of the configurations up to two ligand holes and list them in Table S5. We then further estimated the cation orbital covalency of the Co2+ and Co3+ cations using the relation of refs. [s4-s6], in which the orbital label stands for the state corresponding to the e (eg) or t2 (t2g) orbitals, and 100% indicates a target orbital that is dominated by the ionic configuration. The coefficient N is a renormalization factor, i.e., the number of holes in the orbital divided by the number of holes in the 3dn configuration. For example, in the case of high-spin Co2+ (Td), there are three holes in the t2 orbital out of three holes in the 3d7 configuration, hence the renormalization factor is equal to one (3/3 = 1). In contrast, the renormalization factor for the e orbital is meaningless because it is fully occupied (no hole exists). One quantity entering the relation is the percentage of the configurations which accept electron transfer from the ligand into the orbital; the other is the summed percentage of all possible configurations involved in the hybridization, which is equal to one in this work. Note that we only consider one-electron transfer in the covalency estimation. Thus, the cation orbital covalencies of the t2 orbital on the Co2+ site and the eg orbital on the Co3+ site are given as ~80% and ~50%, respectively.

Table S3. The weight of configurations and orbital covalency in the ground state (unit in %). Although the number of ligand holes is considered up to two in the spectral simulations, the covalency is estimated only using the configurations with up to one ligand hole.

              |3d^n>   |3d^(n+1)L^1>   |3d^(n+2)L^2>   e(eg) covalency   t2(t2g) covalency
Co2+ (3d7)      79          20               1               100                 80
Co3+ (3d6)      40          50              10                50                100
"Physics",
"Chemistry"
] |
Aqueous Calcium Phosphate Cement Inks for 3D Printing
Mimicking the native properties and architecture of natural bone is a remaining challenge within the field of regenerative medicine. Due to the chemical similarity of calcium phosphate cements (CPCs) to bone mineral, these cements are well studied as potential bone replacement material. Nevertheless, the processing and handling of CPCs into prefabricated pastes with adequate properties for 3D printing has drawbacks due to slow reaction times, limited design freedom, as well as fabrication issues such as filter pressing during ejection through thin nozzles. Herein, an aqueous cement paste containing α‐tricalcium phosphate powder is proposed, which is stabilized by sodium pyrophosphate (Na4P2O7·10H2O) as additive. Since high powder loadings within pastes can result in filter pressing during extrusion, various concentrations and molecular weights of hyaluronic acid (HyAc) are added to the cement paste, resulting in reduced filter pressing during 3D extrusion‐based printing. These cement pastes are investigated regarding their setting reaction after activation with orthophosphate solution by isothermal calorimetry and X‐ray diffraction, as well as their hardening performance using Imeter measurements, while the processability is assessed by extrusion through 1.2 and 0.8 mm cannulas. The 3D‐printed structures with appropriate HyAc molecular weight and concentration demonstrate suitable mechanical properties and resolution for clinical application.
Introduction
Due to their excellent biocompatibility and their chemical similarity to human bone mineral, hydroxyapatite-forming calcium phosphate cements (CPCs) are well-established materials for bone repair. [1] Several commercial products are currently available for clinical usage, [2] which are based on a starting powder containing one or more soluble calcium phosphate phases. Upon combination with an aqueous solution as mixing liquid, these phases dissolve, enriching the mixing liquid with calcium (Ca²⁺) and phosphate (PO₄³⁻) ions. The solution then becomes supersaturated with respect to less soluble calcium phosphate hydrate phases, resulting in their precipitation. Depending on the hydration product, the cements can be assigned as either apatite or brushite cements. [3] A common constituent of apatite cements is α-tricalcium phosphate (α-TCP; α-Ca3(PO4)2). Hydration of α-TCP results in the formation of a calcium-deficient hydroxyapatite (CDHA) according to Equation (1). [4] As this reaction proceeds rather slowly, methods for acceleration are required. One option is the addition of Na2HPO4, acting as an accelerator due to the common ion effect. [5] The reactivity of α-TCP powder can be further increased by partial amorphization through prolonged milling, resulting in the formation of highly soluble amorphous TCP (ATCP). [6] Like crystalline α-TCP, this ATCP reacts under formation of CDHA according to Equation (1). [7] Hydration of α-TCP can be successfully suppressed for prolonged time periods by the addition of pyrophosphate (P₂O₇⁴⁻) ions, which are supposed to adsorb to the surfaces of the α-TCP particles, thus blocking their dissolution. [8] This effect can be utilized for the fabrication of premixed cement pastes, with the potential to simplify the mixing procedure during surgery by using standardized static mixing devices, reducing preparation-related effects, e.g., caused by user variations. [9] Such premixed pastes can then be activated in a controlled manner by the addition of an aqueous orthophosphate solution to start the setting reaction. In a recent study, it was demonstrated that the setting kinetics after activation of premixed pastes with an aqueous orthophosphate solution, containing an overall concentration of 30 wt% Na2HPO4 and NaH2PO4 (Na2/Na) in a weight ratio of 4:1, were well adjustable by varying the concentration of tetrasodium pyrophosphate decahydrate (Na4P2O7·10H2O; PP), as well as the added amount of Na2/Na activator solution. PP concentrations of 0.05 wt% and an addition of 21 vol% activator solution were proposed as best suitable with respect to the resulting setting performance.
[8] While small defects surrounded by intact bone are easy to fill with injectable cement pastes, the treatment of larger defects remains a challenge in bone regeneration, as pastes are not ideal due to their poor mechanical shape stability before hardening. Furthermore, the irregular shape of such defects is often difficult to rebuild while providing sufficient porosity for nutrient supply. Here, 3D printing of ceramics has gained increasing attention within the last decades due to the freedom in design of the fabricated constructs. Extrusion-based 3D printing enables the production of macroporous and mechanically stable scaffolds in patient-specific shape by using computer-aided design (CAD) models, for example, based on CT scans and/or MRI. However, the ceramic materials often require high-temperature sintering steps to provide sufficient mechanical properties, resulting in shrinkage of the final 3D shape. To overcome this, reactive cement pastes composed of cement powder and an oil/surfactant mixture have been developed, whereby an exchange of the oil by water post-printing initiates cement setting. [10] While this has been successful in 3D printing of microporous calcium phosphate scaffolds [11] and even patient-specific implants, [12] such cement pastes are only printable at very high solid contents >80 wt% for viscosity reasons, leading to a low-microporosity hydroxyapatite matrix with a limited degradation ability in vivo.
In the current work, we have chosen a different approach for the 3D printing of CPC scaffolds. Here, the previously described aqueous cement paste was further modified for an application in extrusion-based 3D printing by adding hyaluronic acid (HyAc) as a swelling agent to increase the paste viscosity. Although CPC modification with HyAc has already been described, [13] these cements have not yet been used for extrusion-based additive manufacturing approaches. Only one previous study used oxidized HyAc as an additive to a mixture of polyvinylalcohol and CPC to modulate the angiogenic properties of printed scaffolds. [14] The modification with HyAc allows viscosity adaptation via the liquid phase and hence will enable a lower ceramic content of the paste. Different molecular weights and concentrations of HyAc were applied, and the resulting properties such as viscosity, setting behavior, and mechanical performance were systematically investigated. Finally, printing experiments demonstrated the applicability of the paste, whereby direct printing into the hardening solution was beneficial to prevent fusion of the processed strands and to maintain the overall shape of the printed construct.
Characterization of α-TCP Starting Powder
Using a laser-scattering particle size distribution analyzer, the α-TCP powder was investigated regarding its particle sizes and distribution, indicating a bimodal particle distribution with maxima at 0.9 and 13.2 μm and an average particle size of 10.8 ± 3.4 μm. The samples were analyzed using X-ray diffraction (XRD) with Rietveld refinement and G-factor quantification regarding the phase composition of the material, resulting in 87 ± 2 wt% of crystalline α-TCP and 13 ± 2 wt% of ATCP.
Rheological Investigation of the Cement Pastes Containing HyAc
The viscosity and its dependence on the shear rate are decisive for the behavior of the cement systems before and during extrusion. Therefore, the viscosity as a function of shear rate was measured for different cement systems containing 1 wt% HyAc within the liquid phase, as shown in Figure 1a, while investigating different molecular weights of HyAc. Furthermore, in Figure 1b, the influence of the HyAc content on the viscosity was studied from 1 to 5 wt% HyAc with a molecular weight of 2-2.5 MDa, denoted in the following as HyAc_1, HyAc_3, HyAc_4, and HyAc_5, respectively.
Figure 1a shows a clear increase in initial viscosity with increasing molecular weight of the HyAc used. However, the cement system containing HyAc with 1-2 MDa showed a different behavior and resulted in the highest viscosity, even greater than that of the cement system with 2-2.5 MDa HyAc. This might be due to variations during sample preparation, as the highly viscous pastes were difficult to mix homogeneously. In the viscous cement pastes, small agglomerates can easily remain, even after mixing, which influence the measurement. Furthermore, the error bars of the samples with lower viscosity are larger compared to those with high viscosity (mean deviation, n = 3), which might be due to a decrease in stability of the lower-molecular-weight cement systems. This could be caused by the lower viscosity resulting in less stable cement pastes, which are more likely to show phase separation, leading to a cement system containing liquid parts and solid parts that influence the measurements. In general, initial viscosities of 3200 and 2450 Pa s, respectively, can be obtained for the 1 wt% HyAc formulations with molecular weights of 1-2 and 2-2.5 MDa. Figure 1b shows a strong increase in viscosity with an increase in the HyAc content in the liquid. With a fivefold increase (1 wt% → 5 wt%) of the HyAc content in the fluid, the viscosity increases from about 2500 to 48 000 Pa s, corresponding to an increase of 19.2 times the initial viscosity. Due to the strong shear thinning behavior of the cement paste containing HyAc_5, the material can be used for 3D-printing applications even though the viscosity increases dramatically.
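Shear thinning of this kind is often quantified with a power-law (Ostwald-de Waele) fit, η = K·γ̇^(n−1). The sketch below fits such a model to a few illustrative viscosity/shear-rate pairs; the data points are placeholders, not the measured curves in Figure 1, and the power-law description is an assumption rather than the rheological model used in the study.

```python
# Power-law (Ostwald-de Waele) fit eta = K * gamma_dot**(n - 1) to illustrate how
# the shear-thinning behaviour of the HyAc-modified pastes could be quantified.
# The viscosity/shear-rate pairs are placeholder values, not the measured data.
import math

shear_rate = [0.01, 0.1, 1.0, 10.0, 100.0]            # 1/s
viscosity = [48000.0, 7600.0, 1200.0, 190.0, 30.0]    # Pa s (placeholders)

# Linear regression in log-log space: log(eta) = log(K) + (n - 1) * log(gamma_dot)
x = [math.log(g) for g in shear_rate]
y = [math.log(e) for e in viscosity]
n_pts = len(x)
x_mean, y_mean = sum(x) / n_pts, sum(y) / n_pts
slope = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
         / sum((xi - x_mean) ** 2 for xi in x))
intercept = y_mean - slope * x_mean

n_index = slope + 1.0           # flow behaviour index (< 1 means shear thinning)
K = math.exp(intercept)         # consistency index
print(f"consistency index K = {K:.0f} Pa s^n, flow index n = {n_index:.2f}")
```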
Isothermal Calorimetry
All three samples investigated exhibited a pronounced initial heat flow maximum. The height of this maximum was clearly dependent on the HyAc content: it was the highest for HyAc_0 with a maximum value of 97 ± 8 mW g TCP⁻¹, while the maxima of the HyAc-containing samples were significantly lower, with 33 ± 3 mW g TCP⁻¹ for HyAc_3 and 23 ± 2 mW g TCP⁻¹ for HyAc_5 (Figure 2a). However, it is also evident that the heat flows of HyAc_3 and HyAc_5 started to increase earlier than those of HyAc_0. The samples also differ in the heat flow following the initial maximum: HyAc_0 and HyAc_5 showed a nearly linear decrease afterward, with the level of heat flow being significantly higher for HyAc_0 (Figure 2c). In contrast, a small, second maximum was visible after about 2 h in HyAc_3. In the later course of the reaction, the heat flows of HyAc_0 and HyAc_3 continuously decreased toward zero, while a rather broad shoulder appeared in HyAc_5 (Figure 2b).
In accordance with this, the increase in heat of hydration (HoH) was most rapid for HyAc_0 (Figure 2c). While the HoHs of both HyAc-containing samples developed quite similarly during the first 2 h, the HoH of HyAc_3 slightly exceeded that of HyAc_5 in the following hours. However, after around 15 h, this trend was reversed, obviously caused by the broad shoulder in HyAc_5. After hydration for 42 h, the HoHs were 115 ± 6 J g TCP⁻¹ for HyAc_0, 91 ± 5 J g TCP⁻¹ for HyAc_3, and 111 ± 7 J g TCP⁻¹ for HyAc_5. Hence, the values were practically identical for HyAc_0 and HyAc_5, while they were slightly lower for HyAc_3. Though the final HoHs after 42 h slightly differed, the reactions of all three samples can be considered reproducible. Calculations of the estimated reaction rates, as described in Experimental Section, Isothermal calorimetry, resulted in values of 90% ± 6% for HyAc_0, 67% ± 4% for HyAc_3, and 86% ± 7% for HyAc_5. Hence, no significant difference was recorded between HyAc_0 and HyAc_5, while a remarkably lower reaction rate was achieved for HyAc_3.
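The estimated reaction rates quoted above correspond to the measured heat of hydration normalized by the heat expected for complete conversion. The sketch below performs this normalization; the full-conversion enthalpy of about 128 J per gram of TCP is an assumption back-calculated from the HyAc_0 values rather than a figure taken from the experimental section, so the script only roughly reproduces the quoted percentages.

```python
# Estimated degree of reaction after 42 h: measured heat of hydration divided by
# the heat expected for complete hydration of the TCP. The full-conversion value
# (~128 J/g TCP) is an assumption inferred from the HyAc_0 data, not a measured number.
hoh_full = 128.0   # J per g TCP for complete conversion (assumed)

hoh_42h = {"HyAc_0": 115.0, "HyAc_3": 91.0, "HyAc_5": 111.0}  # J per g TCP (measured)

for sample, hoh in hoh_42h.items():
    degree = hoh / hoh_full * 100.0
    print(f"{sample}: estimated degree of reaction ~ {degree:.0f} %")
```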
Imeter Measurements
The Imeter hardness H_i20 of HyAc_0 showed a pronounced increase during the first 0.5 h of hydration and a slower, but continuous, increase afterward (Figure 3). In contrast, while the hardness of HyAc_5 continuously increased during the measurement time of around 3 h, its H_i20 values were always far below those of HyAc_0. Restarting the measurement of one sample of HyAc_5 resulted in H_i20 values strongly exceeding those reached after around 3 h in the same sample. The initial setting time (IST) was 9.1 min (0.15 h) for HyAc_0 and around 100 min (1.7 h) for HyAc_5. Since none of the samples reached a hardness of 63 MPa mm⁻¹, no final setting time (FST) data could be obtained. The measurements of both HyAc_0 and HyAc_5 were well reproducible.
Quantitative Phase Composition after Hardening
After 7 days of hydration at 37 °C, all three samples investigated (HyAc_0, HyAc_3, and HyAc_5) were composed of CDHA and residual α-TCP; no other crystalline phases were detected. An exemplary plot is presented in Figure 4a; the patterns of the other samples are practically identical. Quantitative analysis by Rietveld refinement and the G-factor method resulted in CDHA quantities ranging from 51 to 54 wt%, with no significant differences between the three samples (Figure 4b), and 4-5 wt% of crystalline α-TCP was left. Accordingly, the amorphous fraction, which included the residual water after cement setting, was around 42-44 wt%. Hence, no significant differences in quantitative phase composition were observable among the three samples with varying HyAc content.
The true crystallite size (True CS) parameters of CDHA were 8.46 ± 0.03 nm for HyAc_0, 8.1 ± 0.2 nm for HyAc_3, and 8.1 ± 0.1 nm for HyAc_5 (Figure 4c), thus slightly higher for the sample without HyAc and nearly identical for both HyAc-containing samples. Aspect ratios were around 3.1-3.2 for all samples, with no significant differences related to the HyAc content.
Extrudability of the α-TCP Cement Systems
To examine whether the cement systems can be used for the 3D-extrusion printing method, the extrudability was investigated. The extrudability as a function of the HyAc content at an extrusion speed of 20 mm s⁻¹ can be seen in Figure 5a, indicating mean values from three measurements. When the syringe plunger reached the bottom of the syringe, 100% extrudability was assumed and the material remaining in the needle was neglected. At extrudabilities below 100%, filter pressing occurred, meaning a phase separation within the paste into a liquid and a solid phase. Figure 5 indicates that all samples were extrudable with the 1.2 mm cannula, but the extrudability clearly decreased when using the 0.8 mm cannula. The extrudability as a function of the HyAc (2-2.5 MDa) concentration in the solution was measured at 20 mm s⁻¹ and the results can be seen in Figure 5b. The significantly higher extrudability of the system with the HyAc_1 solution (2-2.5 MDa) compared to the same composition in Figure 5a was achieved due to improved mixing by using a planetary vacuum mixer, which was necessary for the systems with >1 wt% HyAc due to their higher viscosity.
The reason for the low extrudability of HyAc_3, HyAc_4, and HyAc_5 is likely the high viscosity of these cement systems. The high viscosity leads to a significantly greater resistance during extrusion, resulting in leakage of the cement paste at the connection between the tip and the syringe. The poor extrudability of the HyAc_3 and HyAc_4 systems might be further caused by the fast extrusion speed used, as these systems were 100% extrudable at a speed of 10 mm s⁻¹.
3D-Printing of the α-TCP Cement System
After investigating the extrudability of the different cement pastes containing HyAc, the 3D printability was tested. Therefore, cubic or rectangular scaffolds with 0°-90° strand orientation were fabricated (Figure 6a). With a HyAc concentration of 1 wt%, the printed strands of the individual layers started to fuse, losing their roundish shapes and flattening onto the collector substrate 10 min after processing, as indicated in Figure 6b. Therefore, to prevent the flattening and fusion of the cement paste, the fibers were directly printed into the Na2/Na activator solution, enabling a fast hardening of the 3D-printed cement paste and more defined strands within the construct. However, printing directly into the Na2/Na solution resulted in less accurate fiber placement and clogging of the nozzle tip due to hardening of the cement paste already within the nozzle tip.
Another approach to improve the form stability of the fibers after 3D printing is to adapt the HyAc content within the cement paste by increasing it up to 5 wt%. HyAc_2 did not show any changes compared to the cement system using HyAc_1. 3D printing of HyAc_5 was still possible; however, the resulting fibers were not uniform and extrusion was only possible using the larger nozzle tip with a diameter of 1.2 mm. In contrast, using HyAc_4 resulted in accurate fiber placement within the 3D-printed construct and sufficient fiber morphology and uniformity, as shown in Figure 6c.
In Figure 6d,e, overview and magnified scanning electron microscope (SEM) images show CDHA crystals on the 3D-printed fibers after incubation in Na2/Na activator solution. The crystals cover the whole fiber surface (Figure 6d), showing their roundish shape in the overview image and their platelike structure in the magnified SEM image in Figure 6e.
Compressive Strengths after Hardening
To be suitable for use as a bone substitute material, the cement systems must be able to withstand mechanical stress to a certain extent. The strengths of manually fabricated samples as a function of the molecular weight of the HyAc are shown in Figure 7a, indicating that after 1 day of setting, a larger molecular weight of the HyAc had a negative effect on the stability of the cement system. The cement system with the largest HyAc had a strength of only ≈2.1 MPa after 1 day, whereas the system with the smallest HyAc reached a strength of 3.6 MPa. However, after 7 days of setting, systems with larger HyAc were significantly more stable than systems with small HyAc. The differences are particularly large for HyAc larger than 1 MDa. The peak value after 7 days was achieved by the cement system with a HyAc of 2-2.5 MDa, with a strength of 7.9 MPa. In comparison, the system with the 8-15 kDa HyAc only reached a strength of 5.0 MPa.
In addition to the change in molecular weight, the proportion of HyAc in the paste was also varied. The compressive strength of the cement systems with varying 2-2.5 MDa HyAc proportions is shown in Figure 7b. After 1 day of setting, HyAc had a negative effect on the stability of the test specimens for all concentrations. While the HyAc-free system showed a strength of 4.3 MPa after 1 day, the HyAc-containing systems ranged between 2.1 and 3.5 MPa. After 7 days, the HyAc-containing systems, except for HyAc_2, were above the strength of 7.7 MPa provided by the cement system without HyAc. However, only the systems HyAc_4 and HyAc_5 in the liquid were significantly higher, reaching strengths of 9.1 and 9.8 MPa. In comparison, the compressive strength of 3D-printed samples from HyAc_4 was found to be 8.6 ± 0.6 MPa, despite the microporous character of the sample.
Setting Reaction of HyAc-Modified α-TCP Cements
To interpret the setting reactions recorded by isothermal calorimetry, it should first be considered that the α-TCP starting powder contained 13 ± 2 wt% of ATCP. Hence, the hydration model established for α-TCP powders with amorphous content (ATCP) is relevant here: rapid hydration of the ATCP content, indicated by a sharp, early heat flow maximum, is followed by a slower reaction of crystalline α-TCP, visible as a comparably low, broad maximum. The reactions of ATCP and α-TCP are not clearly separated: α-TCP dissolution starts during the declining part of ATCP dissolution. [7] Based on this information, it can be concluded that the initial heat flows observed in the samples in this study most likely result from the reaction of ATCP. Accordingly, the following, continuously declining heat flow observed in HyAc_0 is proposed to result from α-TCP hydration, likely overlain by the decelerating reaction of ATCP. Reaction of the ATCP fraction present in the starting powder would result in an HoH of 33 J g(TCP)−1. This amount of heat release was only reached after about 3 h in both HyAc_3 and HyAc_5. Hence, it can be concluded that, in accordance with ref. [7b], the reaction of ATCP is actually not restricted to the sharp initial maximum, but proceeds during the second maximum, here overlaid by the α-TCP reaction.
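As a quick plausibility check of the 33 J g(TCP)−1 figure, the ATCP fraction can be combined with the hydration enthalpy given later in the Experimental Section (78 ± 2 kJ mol(TCP)−1 for ATCP → CDHA); the molar mass of Ca3(PO4)2 (≈310.2 g mol−1) is a standard constant and is not stated in the text. A minimal sketch:

# Heat expected from complete reaction of the ATCP fraction,
# expressed per gram of TCP starting powder.
M_TCP = 310.18    # g/mol, molar mass of Ca3(PO4)2 (standard value)
dH_ATCP = 78e3    # J/mol, ATCP -> CDHA hydration enthalpy (ref. [7b])
w_ATCP = 0.13     # ATCP mass fraction of the starting powder

hoh_atcp = w_ATCP * dH_ATCP / M_TCP
print(f"HoH from ATCP: {hoh_atcp:.1f} J/g(TCP)")  # ~32.7, matching the ~33 quoted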
In a similar cement system investigated in ref. [8], in situ XRD measurements indicated that the first, sharp maximum actually resulted from the ATCP reaction, while a second maximum was produced by hydration of α-TCP. The major difference between both studies is that the two maxima were clearly separated in ref. [8], while there was a more continuous transition in this study, indicating that the reaction was generally more rapid here. These differences might result from the usage of different α-TCP starting powders, varying in their grain size and hence their reactivity. Indeed, the calorimetry curve of HyAc_3 exhibited a clearly separated second maximum. This can be interpreted as a retardation of α-TCP hydration, induced by HyAc, separating its reaction from the initial reaction of ATCP. In HyAc_5, the continuously declining heat flow after the initial heat flow maximum was remarkably reduced compared to HyAc_0. However, heat flow proceeded over a prolonged time. These observations suggest that the reaction of the crystalline α-TCP was retarded even more strongly than in HyAc_3. In summary, there are clear indications that HyAc retarded the hydration of both ATCP and α-TCP, and the extent of this effect increased with increasing HyAc concentration. However, despite this retarding effect, it was evident that the heat flow started to increase earlier in HyAc_3 and HyAc_5 compared to HyAc_0. This suggests that at the very beginning, HyAc even promotes the onset of reaction.
Comparison of the heat flow curves with the Imeter data (Figure 8) showed that in both HyAc_0 and HyAc_5, the initial heat flow was accompanied by a pronounced increase of the hardness H i20. Subsequently, the continuously decreasing heat flow, resulting in a flattening of the HoH increase, went parallel to a slower but continuous increase of the H i20 data. In accordance with the isothermal calorimetry data, which place the HoH of HyAc_5 significantly below that of HyAc_0 in the time range considered here, the H i20 values of HyAc_5 were far below those of HyAc_0. Hence, both Imeter and isothermal calorimetry unambiguously demonstrated that the setting reaction was strongly retarded by HyAc. This effect is so pronounced that the degree of hydration reached by HyAc_5 at the end of the Imeter measurements was still far below that reached by HyAc_0 after around 3 h.
From qualitative observation of the mixed cement pastes, it was evident that the HyAc-modified samples had a rubberlike consistency, while HyAc_0 was rather liquid. This effect was even more pronounced at both higher HyAc concentrations and higher molecular weights. Hence, it is likely that HyAc absorbed part of the water, thus reducing the amount of water available for cement hydration and accordingly resulting in retardation. As it was further noticed that the consistency of the HyAc-modified pastes turned more liquid after storage for 24 h at 37 °C, it is possible that the HyAc liberates part of the water after some time, thus allowing the setting reaction to proceed further.
3D Printability of HyAc-Modified α-TCP Cement
With the addition of HyAc to the cement paste, 3D printability into defined strands with accurate deposition was enabled, while keeping the roundish strand shape in the xy- as well as the z-direction (Figure 6c-e) at a concentration of 4% HyAc (HyAc_4). Furthermore, the filter pressing effect occurring when processing the cement paste without HyAc, as well as leakage of the paste between the syringe and the nozzle tip during 3D printing, was decreased, and extrusion of pastes with different HyAc contents was demonstrated. Suitable extrudability was achieved for all the tested cement pastes containing HyAc with the 1.19 mm cannula. However, when using the smaller cannula with a diameter of 0.84 mm, filter pressing effects and leakage of the pastes at the connection points were more likely to happen, especially for the more viscous pastes containing higher HyAc contents. Nevertheless, the extrudability was improved even for the more viscous pastes by decreasing the extrusion speed. Furthermore, HyAc_4 enabled sufficient 3D printability, resulting in uniform strands with good shape fidelity. Although HyAc has been shown to be susceptible to ionic cross-linking (e.g., by 1 M Ca2+ in ref. [15]), the low calcium concentration in the cement liquid (the solubility of the cement component α-TCP is ≈2.5 mg L−1 [16]) is not expected to result in significant cross-linking. Indeed, we did not observe any change in the rheological behavior of the pastes over a course of several days.
Compressive Strength of 3D-Printed HyAc-Modified α-TCP Cement
The influence of the HyAc concentration, as well as the molecular weight, on the compressive strengths of the resulting samples was studied. This indicated that an increasing molecular weight of HyAc decreased the stability of the cement system after 1 day of setting. The cement systems with the largest and smallest HyAc molecular weights had strengths of 2.1 and 3.6 MPa, respectively, after 1 day. However, after 7 days of setting, the systems with the largest HyAc molecular weight were more stable than systems with small HyAc. A possible explanation could be the setting mechanism of hydroxyapatite from α-TCP, which forms small needle-shaped crystals (or platelets in the case of CDHA). Cement hardening is then caused by interlocking of the crystals, which proceeds continuously with setting time. Since the speed of cement setting decreases with increasing concentration of HyAc (see Sections 2.3 and 2.4), samples with larger HyAc molecules (and higher concentration) show initially slower crystal growth and hence lower mechanical stability. After a longer setting period of 7 days, the degree of cement conversion is nearly equal for all sample variations (see Section 2.5) and a reinforcement effect by the HyAc hydrogel phase is clearly visible, especially for higher molecular weights.
Phase Composition of HyAc-Modified α-TCP Cement after Hardening
Despite the remarkable retarding effect of HyAc, as observed in isothermal calorimetry and Imeter measurements, XRD evaluation of the storage samples indicated that there were no differences in final phase composition after 7 days of hardening at 37 °C, as the quantities of both residual α-TCP and precipitated, crystalline CDHA were practically identical. There were also no relevant differences in the size of the crystallites (coherent scattering domains [CSDs]) of CDHA, as obtained from X-ray diffraction analysis. This means in turn that the crystallite (CSD) growth of CDHA was not affected by HyAc. This is in accordance with results from another study with a HEMA-modified cement based on α-TCP, where only a minor reduction of CDHA crystallite size was induced by addition of the polymer. However, other studies suggest that slower CDHA formation might result in a pronounced increase of CDHA crystallite sizes. [17] This was obviously not the case in the present study for the retardation induced by HyAc. In addition, typical platelike CDHA crystals completely covering the fiber surface were visible in HyAc-modified cement in SEM images. This means that the growth of the CDHA crystals (the structures visible under an SEM, which might be composed of several CSDs) was also not remarkably affected by HyAc, as the CDHA developed its typical morphology.
Quantitative XRD measurements of the storage samples revealed that all samples contained a high fraction of amorphous content after 7 days of hydration at 37 °C (Figure 4b). It should be considered that the residual water left after hydration also contributes to this amorphous content. The water content of the samples after hydration was 28 wt%, taking into account the water loss during storage. Since the water uptake of CDHA is rather low (only 1.9% of its weight), most of this water should still be present as free water after hydration if no other hydration products were formed. However, amorphous fractions of 42-44 wt% were measured in the samples. Hence, it is likely that some kind of amorphous hydration product formed in addition to the crystalline CDHA, in amounts of around 15 wt% or slightly more, considering that the amorphous phase will probably also contain part of the water not incorporated by CDHA. Though crystalline CDHA with small crystallite sizes is the main hydration product, with amounts of 51-54 wt%, it should be considered that the additional amorphous phase might affect the biological performance of the set cements. Amorphous CaP phases are supposed to have a higher solubility than their crystalline counterparts; hence, faster degradation within the body would be expected. [18] Indeed, it was shown that apatite implant coatings with 40 wt% of amorphous fraction were very well resorbed in in vivo studies, even better than completely amorphous coatings. [19] Therefore, the amorphous fraction present in the samples investigated is supposed to have a positive influence on degradability. As there was no significant variation of the amorphous fraction between the samples with different HyAc contents, the ratio of crystalline CDHA to the proposed amorphous calcium phosphate phase was unaffected by HyAc addition. It should further be mentioned that the HyAc contained in the samples also contributes to the amorphous fraction. However, since its content in the set cements was only 0.8 and 1.4 wt%, respectively, the observations described earlier are not influenced by this.
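The ≈15 wt% estimate follows from a simple mass balance over the numbers given in this paragraph; the sketch below reproduces it with mid-range values (the exact partitioning of the water is, as stated in the text, an assumption):

# Mass balance per gram of set cement (mid-range values from the text).
w_amorphous = 0.43              # measured amorphous fraction (42-44 wt%)
w_water = 0.28                  # residual water after hydration
w_cdha = 0.52                   # crystalline CDHA (51-54 wt%)

w_bound = 0.019 * w_cdha        # water bound by CDHA (1.9% of its weight)
w_free = w_water - w_bound      # water presumably still free
w_extra = w_amorphous - w_free  # amorphous solid beyond the free water
print(f"amorphous hydration product: ~{100*w_extra:.0f} wt%")  # ~16 wt%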
Differences between the reaction rates obtained from isothermal calorimetry and from the XRD storage samples were observed. The reaction rates in the storage samples were higher, as only around 4 wt% of crystalline α-TCP was left, while maximum reaction rates of 90 ± 6% were reached in the isothermal calorimetry measurements, corresponding to around 10 wt% of unreacted α-TCP. There might be two reasons for this effect: first, it should be considered that the isothermal calorimetry measurements were stopped after 42 h, while the XRD storage samples were measured after 7 days. As the heat flow had not decreased to zero at the end of the calorimetry measurements, it is likely that the reaction proceeded further, resulting in an increased degree of hydration after 7 days. Alternatively, mixing of the pastes may have been less effective in the calorimetry setup, resulting in reduced reactivity of the pastes.
Isothermal calorimetry results suggested a reduction of the reaction rate in HyAc_3, compared to HyAc_0 and HyAc_5. However, as this was not confirmed by the XRD storage samples, it is proposed that the 3 wt% of HyAc in the sample (HyAc_3) does not generally reduce the extent of CDHA formation; it is rather an effect related to the mixing procedure, for example, a reduced mixing ability of the HyAc_3 paste compared to the others. Nevertheless, as an estimated reaction rate of 67 ± 4% was still reached in HyAc_3, the isothermal calorimetry results can be considered reliable.
Conclusion
The pronounced retarding effect of HyAc, as indicated by isothermal calorimetry and Imeter measurements, needs to be considered in the 3D-printing procedure, specifically in the subsequent hardening in Na2/Na activator solution. However, the study also demonstrated that the formation of CDHA over prolonged time periods is not hampered by HyAc. This confirms the feasibility of fabricating HyAc-modified CDHA-forming cements based on α-TCP. The resulting cement pastes containing HyAc enable sufficient 3D printability, as the filter pressing effect was reduced with the addition of HyAc to the cement paste. Furthermore, the cement paste HyAc_4 resulted in accurately 3D-printed constructs with uniform fibers and improved shape fidelity at a reduced extrusion speed of 10 mm s−1. After hardening in Na2/Na activator solution for 7 days, compressive strengths of up to 9.2 and 9.8 MPa were reached for the cement pastes HyAc_4 and HyAc_5, respectively, and typical CDHA platelike crystals covered the whole fiber surfaces. In conclusion, a cement composition with 4 wt% (HyAc_4) of 2-2.5 MDa HyAc is most suitable for 3D printing. Since the whole process chain works at ambient temperature, this opens the possibility to further modify the paste, e.g., by incorporating drugs such as antibiotics within the aqueous solution of tetrasodium pyrophosphate decahydrate (Na4P2O7·10H2O).
Experimental Section
Fabrication of α-TCP Starting Powder: 1000.0 g of CaHPO4 (Innotere, Germany) was mixed with 341.9 g CaCO3 (Merck, Germany) in a ploughshare mixer (M5, Lödige) for 1 h. Subsequently, the mixture was sintered in a high-temperature furnace (Oyten Thermotechnik, system Vecstar) for 5 h at 1400 °C, followed by quenching in air. The resulting α-TCP was then pulverized and ground in a planetary ball mill PM400 (Retsch, Haan, Germany) with six zirconia balls (d = 25 mm) and approximately 1 mL of isopropanol for 2.5 h at 200 rpm.
The α-TCP starting powder was characterized by powder XRD with a D8 Advance (Bruker AXS, Karlsruhe, Germany), with the following measurement parameters: range 7°-70° 2θ; step size 0.0112° 2θ; integration time 0.3 s; radiation: copper Kα; generator settings: 40 kV, 40 mA; divergence slit: 0.3°; and sample rotation at 30 min−1. Measurements were performed in triplicate. Quantitative phase composition was determined by Rietveld refinement combined with the G-factor method, an external standard method enabling indirect quantification of the amorphous fraction. [20] The structure ICSD #923 (α-TCP) [21] was applied for the Rietveld method; scale factor, lattice parameters, and crystallite size (Lorentz contribution) were refined. A Chebyshev polynomial of 5th order was used for the background. A slice of the natural rock quartzite, calibrated with fully crystalline silicon powder (NIST Si Standard 640d), served as external standard. Application of the G-factor method for the investigation of α-TCP cements is described in detail in ref. [7a]. Particle size was determined with a laser-scattering particle size distribution analyzer (LA-300, HORIBA) after dispersion in isopropanol.
Fabrication of Cement Pastes: The premixed pastes were composed of the α-TCP starting powder and an aqueous solution of 0.05 wt% tetrasodium pyrophosphate decahydrate (Na4P2O7·10H2O), referred to as PP solution hereafter. The liquid-to-powder ratio (L/P) was 0.4 mL g−1. For controlled activation, an aqueous solution of Na2HPO4 and NaH2PO4 in a weight ratio of 4:1 and an overall concentration of 30 wt% (labeled as Na2/Na) was used. All chemicals used for the preparation of the solutions were obtained from Merck (Darmstadt, Germany). For activation of the cement pastes, 21 vol% of Na2/Na related to the water fraction in the premixed pastes was added. [8] For the addition of the HyAc to the cement system, 1-5 wt% stock solutions of the corresponding HyAc salt were prepared with 0.05% (0.002 M) sodium pyrophosphate. Six HyAcs with different molecular weights (1: 8-15 kDa, 2: 80-100 kDa, 3: 0.1-0.5 MDa, 4: 0.6-1 MDa, 5: 1-2 MDa, and 6: 2-2.5 MDa) were investigated regarding their influence on the extrudability of the cement paste. Mixing with the α-TCP cement powder was achieved by using a planetary mixer (THINKY ARV-310P, THINKY U.S.A.).
For isothermal calorimetry, Imeter measurements, and the preparation of storage samples for XRD measurements, HyAc with 2-2.5 MDa was used in concentrations of 0, 3, and 5 wt% related to the water content of the premixed cement pastes. Samples were denominated HyAc_0, HyAc_3, and HyAc_5, respectively. For these measurements, the HyAc powder was added directly to the α-TCP powder. This alternative preparation method was chosen, as the highly viscous solutions with high HyAc contents would not have been workable within the setup used for isothermal calorimetry (Table 1).
Isothermal Calorimetry: Isothermal calorimetry of the samples HyAc_0, HyAc_3, and HyAc_5 was conducted on a thermal activity monitor (TAM) Air isothermal calorimeter (TA Instruments) equipped with eight twin-type channels (sample and reference chamber). A temperature of 37 ± 0.02 °C was adjusted by an integrated thermostat. Cement pastes were mixed by internal stirring with InMixErs (injection and mixing devices for internal paste preparation; FAU Erlangen, Mineralogy) directly in the measurement chamber to avoid any external disturbances that might affect the initial heat flow.
For preparation of the measurements, the α-TCP/HyAc starting powder was mixed with the PP solution for 1 min directly in the calorimeter crucibles using a spatula. The Na2/Na activator solution was inserted into syringes. The reaction was started by injecting the Na2/Na solution into the premixed pastes and stirring for 1 min by an external motor with a defined, constant stirring rate of 858 rpm. Measurements were performed in duplicate and evaluated with the software Microcal Origin V 2019. The heat flow curves were corrected for the calibration constant of the InMixEr tools and the time constant. [22] The total heat release (HoH) achieved at the end of the measurements was obtained by integrating the heat flow curves. Hydration enthalpies for the relevant reactions, i.e., the hydration of both ATCP and crystalline α-TCP to CDHA (see Equation (1)), were determined by Hurle et al. [7b] Based on these studies, values of ΔH R(α-TCP→CDHA) = 33 ± 2 kJ mol(TCP)−1 and ΔH R(ATCP→CDHA) = 78 ± 2 kJ mol(TCP)−1 were used. As ATCP was shown to be highly reactive, [7a] it can reasonably be assumed that it reacted completely during the initial part of hydration. Hence, the heat released by the reaction of all ATCP from the starting powder was subtracted from the measured HoH. As the difference was proposed to result from α-TCP hydration, the fraction of α-TCP needed to provide this amount of heat was calculated.
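The evaluation described in this paragraph amounts to a short calculation; a minimal sketch is given below, assuming the heat is expressed per gram of TCP powder and that the reported reaction rate refers to the total powder (both are assumptions about the authors' bookkeeping, not stated explicitly in the text):

def degree_of_reaction(hoh_measured):
    """Estimate the overall degree of reaction from a measured HoH.

    hoh_measured : total heat release in J per gram of TCP powder.
    """
    M_TCP = 310.18                  # g/mol, Ca3(PO4)2 (standard value)
    dH_alpha, dH_atcp = 33e3, 78e3  # J/mol (ref. [7b])
    w_atcp = 0.13                   # ATCP fraction of the starting powder

    q_atcp = w_atcp * dH_atcp / M_TCP             # heat from complete ATCP reaction
    q_alpha = hoh_measured - q_atcp               # remainder assigned to alpha-TCP
    w_alpha_reacted = q_alpha * M_TCP / dH_alpha  # g alpha-TCP reacted per g powder
    return w_atcp + w_alpha_reacted               # reacted fraction of total powder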
Imeter Measurements: The hardening performance of HyAc_0 and HyAc_5 was measured with an IMETER (IMETER/MSB Breitwieser MessSysteme, Augsburg, Germany) using the "Auto-Gilmore-Needle" approach. The IMETER method Nr. 20 was applied, providing the H i20 data as a measure of cement hardness. [23] The initial and final setting times (IST/FST) of the cements were determined according to the definition for cements. The criterion for IST was H i20 = 3.94 MPa mm−1, and H i20 = 63.0 MPa mm−1 for FST. The premixed pastes were prepared by mixing the α-TCP/HyAc powder with the PP solution for 1 min with a metal spatula. Then, the Na2/Na solution was added, stirred in for a further 1 min, and the paste was transferred into a circular sample holder. The sample chamber temperature was adjusted to 37 °C. Measurements were performed in duplicate. For one sample of HyAc_5, the measurement was interrupted after 3 h and restarted after 16 h to check the hardness development over a prolonged time period. The sample was stored in a humid atmosphere at 37 °C during the break to prevent desiccation of the paste.
Extrudability and Rheological Properties: To quantify the extrudability of the pastes, a HyAc solution (varying in concentration from 1 to 5 wt% and in molecular weight from 8-15 kDa to 2-2.5 MDa) was mixed with 3 g cement powder at a ratio of L/P = 0.4. Each sample was labeled by its combination of concentration and molecular weight. These pastes were transferred into syringes of 12 mm diameter and fixed in a custom-designed mount in the universal testing machine Z010 (Zwick/Roell, Ulm, Germany). The pastes were extruded through a 1.19 and a 0.84 mm cannula at a constant plunger speed of 20 mm min−1 until either the syringe was empty or a force of more than 350 N was reached. The extrudability was calculated from the residual cement paste in the syringe m residual and the initially loaded weight of the cement paste m full according to Equation (2), where m syringe is the weight of the empty syringe including the cannula. The rheological behavior of the pastes was measured over a shear rate range from 0.01 to 1000 s−1 with a rheometer (MCR 301 TruGap Ready, Anton Paar) with the PP50 measurement head (Ø 50 mm) and a plate distance of 0.5 mm (0.7 mm for highly viscous pastes).
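The display of Equation (2) did not survive extraction; the sketch below is a plausible reconstruction consistent with the three masses defined above, and its exact form (in particular whether the weighings include the syringe) is an assumption:

def extrudability_percent(m_loaded, m_after, m_syringe):
    """Assumed form of Equation (2).

    m_loaded  : syringe filled with paste before extrusion [g]
    m_after   : syringe with residual paste after extrusion [g]
    m_syringe : empty syringe including the cannula [g]
    """
    m_full = m_loaded - m_syringe     # initially loaded paste
    m_residual = m_after - m_syringe  # paste left in the syringe
    return 100.0 * (m_full - m_residual) / m_full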
3D Printing: The scaffolds were prepared using a 3D extrusion printer (3D Discovery, RegenHU, Switzerland) with a 0.84 mm cannula. For smooth printing, the applied pressure was always adjusted with the freshly prepared paste, but it remained in a range of around 0.15 to 3 bar, depending on the paste composition. Scaffolds of 24 × 24 mm with 4 layers, 12 × 12 mm with 4 layers, and 6 × 12 mm with 8 layers were printed.
Mechanical Properties: To address the cements' mechanical properties, 6 × 6 × 12 mm (H × W × L) samples were prepared by mixing 0.0832 mL of the Na2/Na per gram of cement powder, followed by transfer of the paste into silicone molds. The samples were hardened for 1 and 7 days at 100% humidity and 37 °C. For each time point and composition, 12 samples were removed and tested under compression load until failure. In addition to the traditionally prepared and hardened samples, dimensionally equal samples were printed and tested. Compression tests were performed in a universal testing machine (Z010, Zwick/Roell, Germany) with a crosshead speed of 30 mm min−1 until failure. The compressive strength was calculated according to Equation (3), i.e., as the ratio of F max, the force at failure, to A, the area of the sample in contact with the machine (a minimal numerical sketch is given after this subsection).
XRD Characterization of Hardened Samples: Storage samples were fabricated to investigate the quantitative phase content of the cements 7 days after activation. Activated cement pastes were prepared with the same procedure as for the Imeter measurements. The freshly prepared pastes were inserted into special plastic containers with an inner diameter of 23 mm and an inner height of 3 mm. The containers were tightly sealed with Parafilm to minimize water evaporation during storage. The cements were then allowed to harden in an incubator Heratherm (Thermo Fisher Scientific, Schwerte, Germany) at 37 °C for 7 days. Water loss during storage was determined by weighing the samples before and afterward. For XRD analysis of the samples, the lid was removed and the sample surface was polished using 120-grit sandpaper to remove any possible surface effects. Samples were covered with a Kapton polyimide film (Chemplex Industries, Cat. No. 440) to reduce evaporation of residual water during the XRD measurement.
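Returning to Equation (3) above: it is the usual strength definition, force at failure over loaded area, which for the 6 × 6 mm loaded face gives the following minimal sketch (function and variable names are illustrative):

def compressive_strength_MPa(f_max_N, width_mm=6.0, depth_mm=6.0):
    """Equation (3): compressive strength = force at failure / loaded area.
    For the 6 x 6 x 12 mm samples, the face in contact with the machine
    is 6 x 6 mm, so N/mm^2 equals MPa directly."""
    area_mm2 = width_mm * depth_mm
    return f_max_N / area_mm2

# e.g., a failure load of 280 N corresponds to 280/36 ~ 7.8 MPa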
The samples were analyzed on the D8 Advance used for the powder measurements. The integration time was increased to 0.4 s, while the other parameters were identical to the powder measurements. Measurements were performed in duplicate. For Rietveld refinement, the structure of hydroxyapatite (HAp) with ICSD #26204 [24] was used for CDHA. As the CDHA showed anisotropic crystallinity, i.e., an anisotropic size of the CSDs, a special ellipsoid model was applied for refinement of the CSDs. [25] Due to the constraints of the hexagonal symmetry, rx and ry were set to the same value. rx was aligned parallel to the crystallographic a-axis and rz to the crystallographic c-axis. The cube root of the model ellipsoid volume revealed the "true crystallite size" (True CS). The background contributions of the Kapton film and the residual water in the samples were each modeled by an hkl phase. [26]
Table 1. Overview of cement paste preparation approaches used for the different experimental methods in the study.
Figure 1. Mean viscosity as a function of the shear rate for cement systems differing by a) the molecular weight of the hyaluronic acids (HyAc; w/c = 0.4 and 1 wt% HyAc content) and b) the proportion of HyAc (2-2.5 MDa) within the liquid phase in wt%, labeled HyAc_1, HyAc_3, HyAc_4, and HyAc_5, respectively.
Figure 2. Isothermal calorimetry of samples HyAc_0, HyAc_3, and HyAc_5; a) initial heat flow; b) high resolution of prolonged measurement time; and c) overview of the complete reaction, including heats of hydration (HoHs); T = 37 °C; n = 2, all single measurements are shown.
Figure 4. a) Diffraction pattern of set cement, exemplarily shown for HyAc_0; b) quantitative phase composition of set cements, determined by the G-factor method; and c) true crystallite size (True CS) and aspect ratio rz/rx of calcium-deficient hydroxyapatite (CDHA); samples were hardened for 7 days at 37 °C; n = 2.
Figure 5. Extrudability of α-tricalcium phosphate (α-TCP) cement systems (w/c = 0.4) as a function of the cannula size (1.19 and 0.84 mm), measured at an extrusion speed of 20 mm s−1, for a) different molecular weights of the HyAcs (HyAc solution concentration of 1 wt%) and b) different HyAc concentrations using the HyAc with a molecular weight of 2-2.5 MDa.
Figure 6. The 3D-printed samples using the α-TCP cement system. a) Sample size of 12 × 6 × 6 mm used for compression tests. b) W/c = 0.4 with HyAc_1 (1 wt%; 2-2.5 MDa) after hardening in Na2/Na solution. c-e) W/c = 0.4 with HyAc_4 (4 wt%; 2-2.5 MDa) after hardening for 7 days in Na2/Na solution. Scanning electron microscope (SEM) images show the surface of the 3D-printed HyAc_4 structures after 7 days of hardening with the typical platelike crystals of CDHA as an overview image (d) and a magnified view (e).
Figure 7. Compressive strength of α-TCP cement systems after 1 and 7 days of setting time at a w/c ratio of 0.4, a) with 1 wt% HyAc in the liquid phase as a function of the different molecular weights of the HyAcs in Da, and b) with different HyAc contents (wt%) in the solution. A sample series without HyAc was used as reference.
Figure 8. Comparison of isothermal calorimetry and Imeter results of HyAc_0 and HyAc_5; T = 37 °C; n = 2; the means are presented for heat flows and HoHs, all single measurements are shown for the H i20 data. | 10,406.2 | 2023-08-01T00:00:00.000 | [
"Materials Science",
"Engineering",
"Medicine"
] |
Spin raising and lowering operators for Rarita-Schwinger fields
Spin raising and lowering operators for massless field equations constructed from twistor spinors are considered. Solutions of the spin-$\frac{3}{2}$ massless Rarita-Schwinger equation from source-free Maxwell fields and twistor spinors are constructed. It is shown that this construction requires Ricci-flat backgrounds due to the gauge invariance of the massless Rarita-Schwinger equation. Constraints to construct spin raising and lowering operators for Rarita-Schwinger fields are found. Symmetry operators for Rarita-Schwinger fields via twistor spinors are obtained.
I. INTRODUCTION
In four dimensional conformally flat spacetimes, the solutions of the massless field equations for different spins can be mapped to each other by spin raising and lowering procedures [1]. A spin raising operator is an operator constructed from a twistor spinor that gives a solution of the spin-(s + 1/2) massless field equation from a solution of the spin-s massless field equation. Similarly, a spin lowering operator maps a solution of the spin-s massless field equation to a solution of the spin-(s − 1/2) massless field equation by using twistor spinors. Twistor spinors are special spinors defined as the solutions of the twistor equation on a spin manifold. They appear in various problems of mathematical physics. Supersymmetry generators of both superconformal field theories in curved backgrounds and conformal supergravity theories correspond to twistor spinors [2][3][4][5]. They are also used in the construction of extended conformal superalgebras and are related to the conformal hidden symmetries of a background, namely conformal Killing-Yano forms [6][7][8][9]. Twistor spinors contain Killing spinors and parallel spinors as special cases, which are supersymmetry generators of supersymmetric field theories and supergravity theories. The classification of manifolds admitting twistor spinors in Riemannian and Lorentzian signatures has been investigated in [10,11]. In particular, twistor spinors exist in maximal number on conformally flat manifolds.
Starting with a twistor spinor, spin raising and lowering operators can be constructed for massless spin-0 fields, which satisfy the conformally covariant Laplace equation; massless spin-1/2 fields, which satisfy the massless Dirac equation; and massless spin-1 fields, which satisfy the source-free Maxwell equations [12][13][14]. However, for the case of higher spins, the construction is not straightforward and some constraints may arise in the procedure. Massless spin-3/2 fields are solutions of the massless Rarita-Schwinger equation, which determines the motion of gravitino particles in supergravity [15,16]. Rarita-Schwinger fields appear as sources of torsion and curvature in supergravity field equations. They correspond to spinor-valued 1-forms that are in the kernel of the Rarita-Schwinger operator, which can be seen as the generalization of the Dirac operator to spin-3/2 fields. Spin raising and lowering procedures can allow us to find the solutions of the massless Rarita-Schwinger equation by using spin-1 source-free Maxwell solutions and twistor spinors.
In this paper, we focus on the construction of spin raising, spin lowering, and symmetry operators for massless Rarita-Schwinger fields. We start by writing the Rarita-Schwinger field equations in a modern geometrical language [17][18][19]. Spin raising and lowering operators between the massless spin-1 and spin-3/2 fields are found by using twistor spinors, and the constraints for their construction are obtained. Spin raising operators are constructed for middle-form Maxwell fields in all even dimensions besides dimension four, and it is found that the twistor spinor used in the construction of the spin raising operator must be in the kernel of the spin-1 Maxwell field. Moreover, since the gauge invariance of the massless Rarita-Schwinger equation requires Ricci-flat backgrounds, it is shown that the spin raising operators automatically solve the massless Rarita-Schwinger equation in those backgrounds. Spin lowering operators are constructed for four dimensional fields, and a constraint relating the Rarita-Schwinger field to the curvature characteristics of the background is found. We also construct a symmetry operator for Rarita-Schwinger fields by using spin raising and lowering operators, which maps a solution of the massless Rarita-Schwinger equation to another solution.
The paper is organized as follows. We define the spin raising and lowering operators for lower spin massless fields in Sec. 2. In Sec. 3, we construct the spin raising and lowering operators for massless spin-3/2 fields. A symmetry operator of massless Rarita-Schwinger fields is proposed in Sec. 4. Section 5 concludes the paper. In an appendix, we give the transformation rules between the languages of Clifford calculus and gamma matrices, so that the equalities in the paper can be written in an alternative notation.
II. SPIN CHANGING OPERATORS FOR MASSLESS FIELD EQUATIONS
We consider massless and source-free field equations in curved backgrounds, written for particles with different spins. For example, massless spin-0 particles satisfy the conformally generalized Laplace equation (1) in n dimensions, where ϕ is a function, R is the scalar curvature of the background spacetime, and the Laplace-Beltrami operator ∆ is defined as the square of the Hodge-de Rham operator đ = d − δ, which contains the exterior derivative operator d and the coderivative operator δ. The operator given in (1) acting on the scalar field ϕ is called the conformally invariant Yamabe operator. For massless spin-1/2 particles, the field equation corresponds to the massless Dirac equation (2), where ψ is a spinor field and the Dirac operator D is defined as D = e^a.∇_{X_a} for the frame basis {X_a} and coframe basis {e^a}, which are related by the duality property e^a(X_b) = δ^a_b. Here δ^a_b is the Kronecker delta, ∇_X is the spinor covariant derivative with respect to the vector field X, and . denotes the Clifford multiplication. On the other hand, we also consider the source-free Maxwell equations (3) for spin-1 particles, which are written in terms of the Hodge-de Rham operator đ = d − δ, where F is the Maxwell 2-form field [17]. One can use twistor spinors to obtain solutions of the massless and source-free field equations written in (1), (2), and (3) from the solutions of each other. A twistor spinor u is a solution of the differential equation (4) in n dimensions, for any vector field X and its metric dual X̃. By taking the second covariant derivative of (4), one obtains the integrability conditions (5)-(7) of the twistor equation [10,14], where the 1-form K_a is defined in (8) in terms of the Ricci 1-forms P_a and the curvature scalar R, and its components correspond to the Schouten tensor. C_{ab} are conformal 2-forms, which are written in terms of the curvature characteristics and defined for n > 2 in (9), where R_{ab} are curvature 2-forms and ∧ denotes the wedge product. The components of the 2-form C_{ab} correspond to the conformal (Weyl) tensor. The second integrability condition, given in (6), is the standard Weitzenböck identity. One can see from the third integrability condition in (7) that all conformally flat manifolds can admit solutions of the twistor equation (4).
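The display equations of this section were lost in extraction. For orientation, the standard forms of (2)-(4) consistent with the definitions just given can be sketched in LaTeX as follows; the overall signs depend on conventions, and \slashed (from the slashed package) marks the Clifford-contracted operators, so this is a sketch rather than a verbatim restoration:

\begin{align}
  \slashed{D}\psi &= e^a \cdot \nabla_{X_a}\psi = 0 , && \text{(massless Dirac equation, (2))}\\
  \slashed{d}F &= (d - \delta)F = 0 , && \text{(source-free Maxwell equations, (3))}\\
  \nabla_X u &= \frac{1}{n}\, \widetilde{X} \cdot \slashed{D}u . && \text{(twistor equation, (4))}
\end{align}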
In four dimensions, a solution ψ of the massless Dirac equation given in (2) can be constructed from a solution ϕ of the massless spin-0 particle equation in (1) and a twistor spinor u that satisfies (4) [12,14]. The so-called "spin raising" operator from spin-0 to spin-1/2 is given in (10). It can easily be seen that, by using the defining equations of ϕ and u given by (1) and (4) and the integrability conditions (5)-(7), one finds the result Dψ = 0. A reverse procedure, obtaining a solution ϕ of the massless spin-0 equation from a solution ψ of the massless Dirac equation and a twistor spinor u, can also be constructed [13,14]. To do this, we consider the spin-invariant inner product ( , ) defined on spinor fields [17]. For any two spinor fields u and v, it has the property (u, v) = −(v, u), and for any differential form α it satisfies a relation involving α^ξ = (−1)^⌊p/2⌋ α, where ⌊ ⌋ denotes the floor function that takes the integer part of its argument. By using this inner product, the so-called "spin lowering" operator from spin-1/2 to spin-0 is given by (11). One can check, by using the defining equations (2) and (4) and the integrability conditions (5)-(7), that this function ϕ satisfies the massless spin-0 field equation given in (1). By combining the spin lowering and spin raising operations, one can also construct symmetry operators for massless Dirac fields. A symmetry operator takes a solution of an equation and gives another solution. So, by considering two twistor spinors u_1 and u_2 and a solution ψ of the massless Dirac equation, the symmetry operator (12) for massless Dirac fields can be written from the spin lowering and spin raising operations. Since the antisymmetric generalizations of conformal Killing vector fields, called conformal Killing-Yano forms, can be constructed from two twistor spinors [9], it can be shown that the symmetry operator (12) is equivalent to the symmetry operators of the massless Dirac equation written in terms of conformal Killing-Yano forms [12,20]. Spin raising and spin lowering operators can also be defined for obtaining solutions of the spin-1 source-free Maxwell equations from solutions of the massless Dirac equation and vice versa. To do this, we consider the dual spinor ū of a spinor u. The dual spinor ū is defined by its action on a spinor v in terms of the spinor inner product. We can consider tensor products of spinors and dual spinors, which correspond to linear transformations on spinors [13]. Since the inner products (v, w) and (w, v) give scalar quantities, and u and v̄ are a spinor and a dual spinor, respectively, the quantities u(v, w) and (w, u)v̄ correspond to a spinor and a dual spinor, respectively. This means that tensor products of spinors and dual spinors are elements of the Clifford algebra, and they can be Clifford multiplied by inhomogeneous differential forms; for any differential form α, there are corresponding relations for the tensor product of spinors and dual spinors. So, from a solution ψ of the massless Dirac equation and a twistor spinor u, one can write the spin raising operator (13) from spin-1/2 to spin-1 [12,13]. It can be seen from the properties of the tensor product given above that the differential form F in (13) is a 2-form in four dimensions, and from the defining equations (2) and (4) and the integrability conditions (5)-(7), it satisfies the source-free Maxwell equation (3) [12,13].
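The displays (10)-(12) are elided in the extracted text, but their gamma-matrix counterparts survive in Appendix A; translated back into the Clifford notation of this section (in four dimensions, hence the coefficient 1/2), they read:

\begin{align}
  \psi &= d\phi \cdot u + \tfrac{1}{2}\,\phi\, \slashed{D}u , && \text{(spin raising, (10))}\\
  \phi &= (u, \psi) , && \text{(spin lowering, (11))}\\
  L_{u_1 u_2}\psi &= d(u_1,\psi)\cdot u_2 + \tfrac{1}{2}(u_1,\psi)\, \slashed{D}u_2 . && \text{(symmetry operator, (12))}
\end{align}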
Similarly, we can also construct a spin lowering operator from a solution F of the source-free Maxwell equations and a twistor spinor u to obtain a solution of the massless Dirac equation. The spin lowering operator from spin-1 to spin-1/2 is given by (14). It can be checked that ψ in (14) is a harmonic spinor, namely, a solution of the massless Dirac equation. From the spin lowering and spin raising operations between the solutions of the spin-1 and spin-1/2 field equations, the symmetry operators (15) of the source-free Maxwell equations can be constructed, where u_1 and u_2 are two twistor spinors and F is a source-free Maxwell solution.
III. SPIN RAISING AND SPIN LOWERING FOR RARITA-SCHWINGER FIELDS
In this section, we construct spin raising and spin lowering operators for spin-3/2 fields that satisfy the massless Rarita-Schwinger field equations. These operators will be used to make connections between the solutions of the source-free Maxwell equations and the massless Rarita-Schwinger equations via twistor spinors in all even dimensions besides the special case of dimension four.
Let us first consider spinor-valued p-forms, which are constructed out of tensor products of spinor fields and differential p-forms and describe spin-(p + 1/2) particles. For a spinor with indices ψ^I and a p-form e^I, the spinor-valued p-form is defined with I a multi-index. Note that the tensor product of spinor fields and p-forms defined above is different from the tensor product of spinors and dual spinors defined in Sec. 2, although we denote them with the same symbol. In a similar way, we can also define a Clifford form-valued p-form N_p = n^A ⊗ e^A as a tensor product of a differential form n^A on the Clifford bundle and a differential p-form e^A on the exterior bundle. The action of a Clifford form-valued p-form N_p on a spinor-valued q-form Ψ_q is defined in terms of Clifford and wedge products, the result of which is a spinor-valued (p + q)-form. As a special case, we consider spinor-valued 1-forms representing spin-3/2 particles. In that case, we have a spinor field ψ_a and the coframe basis e^a of 1-forms to construct the spin-3/2 field Ψ = ψ_a ⊗ e^a; the action of a Clifford form α on Ψ is defined accordingly. Moreover, the inner product of a spinor-valued 1-form Ψ and a spinor u in terms of the spinor inner product ( , ) takes a spinor and a spinor-valued 1-form and gives a 1-form. The Levi-Civita connection defined on differential forms and spinor fields can also be induced on spinor-valued 1-forms for any vector field X. In a similar way to the definition of the Dirac operator on spinor fields, we can also define a Rarita-Schwinger operator which acts on spinor-valued 1-forms. From the definitions given above, the massless Rarita-Schwinger field equations of spin-3/2 fields in supergravity can be written for a spin-3/2 field Ψ = ψ_a ⊗ e^a as (21) and (22). Equation (21) can be seen as a generalization of the massless Dirac equation to spin-3/2 fields, and Eq. (22) is the tracelessness condition. Moreover, these equations imply a Lorentz-type condition ∇_{X^a}ψ_a = 0. This can be seen as follows. We take the covariant derivative of the Rarita-Schwinger field Ψ = ψ_a ⊗ e^a, using ∇_{X_b}e^a = 0 for normal coordinates in the following calculations. By Clifford multiplying with e^b from the left and using (21), we find Dψ_a ⊗ e^a = 0.
Then, we have Dψ_a = 0 for every a. Again, by Clifford multiplying with e^a from the left and using the Clifford algebra identity e^a.e^b + e^b.e^a = 2g^{ab} for the components of the (dual) metric g^{ab}, where we have used ∇_{X_b}e^a = 0 in normal coordinates, we obtain from (22) the Lorentz-type condition (23).
A. Spin raising
Let us consider a twistor spinor u which satisfies Eq. (4) and a middle-form Maxwell field F which satisfies Eq. (3). For even dimensions n = 2p, the p-form Maxwell field F is the generalization of the 2-form Maxwell field in four dimensions to all even dimensions. We propose the spinor-valued 1-form (24), that is, a spin-3/2 field, as the spin raising operator from the spin-1 Maxwell field F to the spin-3/2 Rarita-Schwinger field Ψ = ψ_a ⊗ e^a via the twistor spinor u. Here it is clear that ψ_a = ∇_{X_a}F.u − (1/n) F.e_a.Du. Now, we have to prove that the spin-3/2 field defined in (24) satisfies both of the Rarita-Schwinger field equations in (21) and (22).
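For readability, the elided displays (21)-(22) and the inline formula just given for (24) can be sketched in display form; the operator form of (21) is a reconstruction inferred from the manipulations above (Clifford multiplication of ∇_{X_b}Ψ by e^b yielding Dψ_a ⊗ e^a = 0), not a verbatim restoration:

\begin{gather}
  \mathbb{D}\Psi = e^b \cdot \nabla_{X_b}\psi_a \otimes e^a = 0 , \qquad
  e^a \cdot \psi_a = 0 , \\
  \Psi = \psi_a \otimes e^a , \qquad
  \psi_a = \nabla_{X_a}F \cdot u \;-\; \frac{1}{n}\, F \cdot e_a \cdot \slashed{D}u .
\end{gather}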
Using this ψ_a in (22), we obtain e^a.ψ_a = e^a.∇_{X_a}F.u − (1/n) e^a.F.e_a.Du.
For a p-form α, we have the relation e^a.α.e_a = (−1)^p (n − 2p)α. From this identity and the definition of the Hodge-de Rham operator đ acting on any differential form α as đα = e^a.∇_{X_a}α, and since F satisfies đF = 0 and n = 2p, one concludes e^a.ψ_a = 0.
As a consequence, Ψ defined in (24) satisfies the tracelessness condition of a Rarita-Schwinger field. To see whether Eq. (21) is satisfied by Ψ in (24), we apply the Rarita-Schwinger operator defined in (20), using (19) and ∇_{X_b}e^a = 0 in normal coordinates. From the property (17) and the definition of đ, we obtain an expression in which F satisfies (3) and u is a twistor spinor satisfying (4) and (5); here we have used e^b.∇_{X_a}F.e_b = (−1)^p (n − 2p)∇_{X_a}F = 0, since ∇_{X_a}F is an n/2-form. By direct computation, one can see that the action of the commutator of the Hodge-de Rham operator and the covariant derivative on a differential p-form α can be written in terms of the curvature operator; by using this identity, (28) transforms accordingly, where we have used (3) in the second line. The action of the curvature operator R(X_a, X_b) on a generally inhomogeneous differential form α which is a section of the Clifford bundle can be written in terms of the curvature 2-forms and the Clifford bracket [17]. Moreover, from the definition of the conformal 2-forms C_{ab} in (9), we can write the curvature 2-forms in terms of C_{ab} and K_a, using the definition of K_a in (8) and the expansion of the Clifford product in terms of the wedge product and the contraction i_X with respect to a vector field X, e_a.K_b = e_a ∧ K_b + i_{X_a}K_b, with the property i_{X_a}K_b = i_{X_b}K_a for zero torsion. So, by substituting (31) and (32) in (30), we find (33), where we have used the identity e^b.R_{ab} = −P_a for zero torsion, e^b.F.e_b = 0, and the integrability condition (7). Then, for Ψ to satisfy (21), we obtain the condition (34), or, from the definition (8) of K_a, it can also be written as (35). By Clifford multiplying (35) with e^a from the left and using the identities e^a.P_a = R, e^a.e_a = n, e^a.e^b = −e^b.e^a + 2g^{ab}, and the property e^a.F.e_a = 0, one obtains the equality (36). So, the condition (35) for Ψ to be a massless Rarita-Schwinger field transforms into the condition (37). This resembles a condition on Killing-Yano forms that can be used in the construction of symmetry operators of the massive Dirac equation with an electromagnetic minimal coupling term [21]. Those symmetry operators are constructed from the Killing-Yano forms ω that satisfy a condition written in terms of the Clifford bracket [ , ]_Cl. Since a Killing spinor u, which is a special twistor spinor that satisfies the massive Dirac equation at the same time, can be used in the construction of the Killing-Yano form ω as ω = u ⊗ ū [9], the above condition on ω reduces to the condition on u written in (37). On the other hand, the gauge invariance of the massless Rarita-Schwinger equation in a curved background requires the background to be Ricci flat, that is, P_a = 0. This can be seen as follows. The Rarita-Schwinger equation given in (21) and (22) can be written in a more compact form. Let us consider the Clifford-valued 1-form e = e^a ⊗ e_a and define the action of the covariant exterior derivative D on Ψ = ψ_a ⊗ e^a, using ∇_{X_a}e^b = 0 in normal coordinates and e^{ab} = e^a ∧ e^b. The action of the Hodge star * on DΨ is given by *DΨ = ∇_{X_a}ψ_b ⊗ *e^{ab}. From the action of Clifford-valued forms on spinor-valued forms defined in (16), we can compute e.*DΨ, using the identity e_c ∧ *e^{ab} = g_{ca} *e^b − g_{cb} *e^a in the second line and the action of the Hodge star in the last line. So, from (21) and (22), the Rarita-Schwinger equation is equivalent to e.*DΨ = 0.
This equation has to be gauge invariant under the transformation Ψ → Ψ + Dφ for a spinor φ. We can write the spinor φ as φ = φ ⊗ 1 and compute Dφ. By applying D once more, we obtain D²φ, using the antisymmetry of the indices in the third line and the definition of the curvature operator in the fourth line. Then, if we choose Ψ as a pure gauge term, Ψ = Dφ, we obtain e.*D²φ, using the identity e_c ∧ *e^{ab} = g_{ca} *e^b − g_{cb} *e^a in the second line, R(X_b, X_a)φ = (1/2) R_{ba}.φ in the third line, and e^b.R_{ba} = P_a in the last line. Gauge invariance implies the vanishing of the pure gauge term, namely e.*D²φ = 0. So, to obtain a gauge invariant massless Rarita-Schwinger equation, we must have a Ricci-flat background, P_a = 0. In that case, the right-hand side of (33) automatically vanishes and we obtain a Rarita-Schwinger field Ψ from a source-free Maxwell field F and a twistor spinor u, as constructed in (24).
B. Spin lowering
By using a twistor spinor u, we can also construct a spin lowering procedure to obtain a spin-1 Maxwell field F from a spin-3/2 Rarita-Schwinger field Ψ = ψ_a ⊗ e^a in four dimensions. From the inner product definition (18) for spinor-valued 1-forms, let us consider the 1-form A constructed out of a twistor spinor u and a Rarita-Schwinger field Ψ = ψ_a ⊗ e^a satisfying (21) and (22). We consider the 1-form A as the potential 1-form of the 2-form Maxwell field F = dA, using normal coordinates, the definition d = e^b ∧ ∇_{X_b}, and the twistor equation (4). By definition, F is an exact form and automatically satisfies dF = 0, so the action of the Hodge-de Rham operator đ = d − δ on F reduces to −δF. By taking the covariant derivative of F, we find from (39), using the identity (e^b.Du, ψ_a) = (Du, e^b.ψ_a) and antisymmetrizing the corresponding indices. From the identity i_{X^c}(e^b ∧ e^a) = g^{cb}e^a − g^{ca}e^b, we obtain the action of đ on F. Since Ψ is a Rarita-Schwinger field, we have e^c.∇_{X_c}ψ_a = Dψ_a = 0 and ∇_{X^a}ψ_a = 0. By defining the spinor Laplacian ∇² = ∇_{X_c}∇_{X^c}, using the Schrödinger-Lichnerowicz-Weitzenböck formula for spinor fields together with the twistor equation, and the integrability condition (5) of twistor spinors with Dψ_a = 0, we can evaluate đF. We can use the equality (e_a.K_c.u, ψ^c) = (u, K_c.e_a.ψ^c) and calculate the term e^c.K_c, using e^c.e_c = n and e^c.P_c = R. Finally, the quantity đF is found. This means that, to obtain a Maxwell field defined as in (39), the ψ_a of the Rarita-Schwinger field has to satisfy the condition (47). On the other hand, from the definition of the curvature operator, we have ∇_{X_c}∇_{X_a}ψ^c = ∇_{X_a}∇_{X_c}ψ^c + R(X^c, X_a)ψ_c. By using the action of the curvature operator on a spinor, R(X^c, X_a)ψ_c = (1/2) R^c{}_a.ψ_c, and the property (23), one can write the condition as (48), which is automatically satisfied in a flat background. The constant coefficient on the right-hand side turns out to be 1/6 in four dimensions.
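In display form, the spin lowering construction and the constraint (48) (the latter transcribed from its gamma-matrix version (A16) in the appendix) read:

\begin{gather}
  A = (u, \Psi) = (u, \psi_a)\, e^a , \qquad F = dA , \\
  \Big(\tfrac{1}{2}\, R_{bacd}\,\gamma^{cd} + K_{bc}\,\gamma^c \gamma_a\Big)\psi^b
  = \frac{n-3}{2(n-1)}\, R\, \psi_a .
\end{gather}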
IV. SYMMETRY OPERATORS
The construction of spin raising and spin lowering operators opens the way to writing down symmetry operators for massless Rarita-Schwinger fields. A symmetry operator is an operator that acts on a solution of an equation and gives another solution of it. By starting with a Rarita-Schwinger field Ψ and applying spin lowering and spin raising operators one after the other, one can find another Rarita-Schwinger field Ψ′ via a twistor spinor u. So, one can construct the symmetry operators of massless Rarita-Schwinger fields in terms of twistor spinors. However, this construction is subject to some constraints which arise in the spin raising and lowering procedures.
Let us consider a massless Rarita-Schwinger field Ψ = ψ_a ⊗ e^a in four dimensions which satisfies (21) and (22) with the extra condition (49). By using a twistor spinor u that satisfies (4) with the extra condition (50), we can construct the symmetry operators in the following way. Since Ψ satisfies (49), we have a well-defined spin lowering procedure from spin-3/2 to spin-1, as in Sec. 3.B, and can construct a source-free Maxwell field (51). We can use this source-free Maxwell field for spin raising to another massless spin-3/2 Rarita-Schwinger field. So, we can write the new spin-3/2 field Ψ′ from F in (51) by the procedure in Sec. 3.A as in (52), and by using the twistor equation (4) and the integrability condition (5), we obtain (53). Since u satisfies (50), Ψ′ is a massless Rarita-Schwinger field. So, we construct a symmetry operator between massless Rarita-Schwinger fields, Ψ → L_u Ψ = Ψ′, subject to some extra constraints. The symmetry operator L_u constructed from a twistor spinor u can be deduced from (53). It can also be deduced from (53) that the eigentensor spinors of the operator L_u, which satisfy the condition L_u Ψ = kΨ for a constant k and a Rarita-Schwinger field Ψ = ψ_a ⊗ e^a, correspond to the solutions of the equality (54), which is not a trivial equation to solve.
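Combining the lowering step (51) with the raising map (24), as the text describes, the composed operator can be sketched as follows (the form is the composition before the simplifications leading to (53), which are omitted here):

\begin{equation}
  F = d(u,\Psi) , \qquad
  L_u\Psi = \Big(\nabla_{X_a}F\cdot u - \tfrac{1}{n}\, F\cdot e_a \cdot \slashed{D}u\Big)\otimes e^a .
\end{equation}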
V. CONCLUSION
We have constructed a solution generating technique for massless spin-3/2 Rarita-Schwinger fields by using source-free Maxwell fields and twistor spinors. A spin raising operator that maps a solution of the source-free Maxwell equations to a solution of the massless Rarita-Schwinger equation in terms of a twistor spinor is found for all even dimensional Ricci-flat backgrounds, Ricci flatness being the requirement for the gauge invariance of the massless Rarita-Schwinger equation. A spin lowering operator that maps a solution of the massless Rarita-Schwinger equation to a solution of the source-free Maxwell equations is also obtained in four dimensions, with an extra constraint depending on the curvature characteristics of the background. From these spin raising and lowering procedures, a symmetry operator between massless Rarita-Schwinger fields is also constructed.
One can also investigate the construction of spin raising and lowering operators for spin-3/2 fields in more general spacetimes. For example, one can explore the possibility of constructing spin changing operators via gauged twistor spinors, which are generalizations of twistor spinors to Spin^c spinors and can exist on more general backgrounds. This can extend the solution generating concept discussed in this paper to more general cases. Moreover, the construction of spin raising and lowering operators for spin-2 and higher spin fields may also be investigated by similar procedures. Because of the consistency problems in the interactions of massless higher spin fields with nontrivial gravitational backgrounds, some restrictions may appear in the construction of spin raising and lowering operators for higher spin fields. These restrictions may reduce the admissible backgrounds to constant curvature spacetimes such as anti-de Sitter and flat backgrounds.
Appendix A
Clifford algebra and spinor identities in the physics literature are extensively written in terms of gamma matrices and abstract indices. Since the Clifford and exterior calculus notations are used in the papers that contain previous calculations on the topic of this paper, we prefer to use this notation here to have a direct connection with the previous results. The calculations are easier in this notation, which is also more economical and elegant. However, it can easily be transformed into the language of gamma matrices and abstract indices. In this appendix, we give a summary of the transformation rules between the two notations and give the basic formulas of the paper in terms of gamma matrices.
In a flat Lorentzian spacetime, the gamma matrices satisfy the Clifford algebra identity with η_{μν} the flat Lorentzian metric, where μ, ν are flat space indices. In a curved spacetime, the coframe basis 1-forms e^a can be written in terms of coordinate components as e^a = e^a_μ dx^μ, where e^a_μ are called tetrad components and e^μ_a is the inverse tetrad. However, for the Clifford bundle, e^a corresponds to the basis of the Clifford algebra and can be written in terms of gamma matrices as e^a = e^a_μ γ^μ. The curved space gamma matrices are defined in terms of tetrad components as γ^a = e^a_μ γ^μ, and they satisfy the Clifford algebra identity with the inverse metric g^{ab}, where a, b are curved space indices. So, the Clifford algebra basis e^a and the curved space gamma matrices γ^a are identical to each other. The Dirac operator defined in (2) can be written in terms of gamma matrices, where we use ∇_{X_a} = ∇_a in terms of abstract indices and omit the Clifford product notation. In this way, the twistor equation in (4) has a corresponding gamma-matrix form. A p-form α, as an element of the Clifford bundle, can be written in terms of the Clifford algebra basis with components α_{a_1 a_2 ... a_p}, where γ^{a_1 a_2 ... a_p} corresponds to the antisymmetric combination of γ^{a_1} γ^{a_2} ... γ^{a_p}. For example, we have γ^{ab} = (1/2)(γ^a γ^b − γ^b γ^a). So, the action of a p-form α on a spinor ψ via the Clifford product . is given by α.ψ = (1/p!) α_{a_1 a_2 ... a_p} γ^{a_1 a_2 ... a_p} ψ.
Then, the integrability conditions of the twistor equation given in (5) and (7) are written as ∇_a Du = (n/2) K_{ab} γ^b u and C_{abcd} γ^{cd} u = 0 (A6), while (6) remains unchanged. Here K_{ab} are the components of the Schouten tensor and C_{abcd} are the components of the conformal (Weyl) tensor. For any two spinor fields u and v and a p-form α, the spinor inner product ( , ) satisfies the property given in the equation above (11) in the main text. So, for a massless spin-0 field φ, we can obtain a massless spin-1/2 field via the spin raising operator given in (10), ψ = (∂_a φ) γ^a u + (1/2) φ Du (A7), and from a massless spin-1/2 field ψ we can obtain a massless spin-0 field via the spin lowering φ = (u, ψ). The symmetry operators given in (12) are L_{u1 u2} ψ = ∂_a (u_1, ψ) γ^a u_2 + (1/2)(u_1, ψ) Du_2.
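The spin raising and lowering maps between massless spin-0 and spin-1/2 fields and the induced symmetry operator, as written above, can likewise be typeset as follows. This is simply a LaTeX restatement of (A7) and the neighbouring expressions (u, u_1, u_2 are twistor spinors and ( , ) is the spinor inner product), assuming the same amsmath/slashed preamble as in the previous block.

\begin{align}
  % spin raising: massless scalar field \phi -> massless spin-1/2 field \psi, cf. (A7)
  \psi &= (\partial_{a}\phi)\,\gamma^{a}u + \tfrac{1}{2}\,\phi\,\slashed{D}u \\
  % spin lowering: massless spin-1/2 field \psi -> massless scalar field \phi
  \phi &= (u,\psi) \\
  % symmetry operator on massless spin-1/2 fields built from two twistor spinors, cf. (12)
  L_{u_{1}u_{2}}\psi &= \partial_{a}(u_{1},\psi)\,\gamma^{a}u_{2}
      + \tfrac{1}{2}\,(u_{1},\psi)\,\slashed{D}u_{2}
\end{align}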
For the case of spin raising and lowering operators between massless spin-1/2 and spin-1 fields, (13) and (14) can be rewritten in terms of gamma matrices in the same way, leading to (A11). A spin-3/2 field Ψ = ψ_a ⊗ γ^a is a massless Rarita-Schwinger field if it satisfies the massless Rarita-Schwinger field equations (reproduced in standard form in the block after this paragraph), where D = γ^a ∇_a is the Rarita-Schwinger operator defined in (20). The Lorentz-type condition (23) corresponds to ∇^a ψ_a = 0. The spin raising operator of (24), which produces a massless spin-3/2 Rarita-Schwinger field from a spin-1 Maxwell field F via a twistor spinor u, translates in the same manner. The condition (37) on the twistor spinor u corresponds to F_{a1 a2 ... ap} γ^{a1 a2 ... ap} u = 0, i.e., F_{ab} γ^{ab} u = 0 for the Maxwell 2-form.
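Since the displayed equations for the massless Rarita-Schwinger field are missing from the extracted text above, the following LaTeX block is only a sketch of their standard gamma-matrix form, consistent with the surrounding description (D = γ^a ∇_a as the Rarita-Schwinger operator and ∇^a ψ_a = 0 as the Lorentz-type condition (23)); the precise form and the equation labels should be checked against (20)-(23) in the main text. The last line writes the constraint (37) for the Maxwell 2-form F. The same preamble as in the earlier blocks is assumed.

\begin{align}
  % standard massless Rarita-Schwinger system; a sketch, not a quotation of the source
  \slashed{D}\,\psi_{a} &= 0 && \text{(massless Rarita-Schwinger equation)} \\
  \gamma^{a}\psi_{a}    &= 0 && \text{(gamma-tracelessness)} \\
  \nabla^{a}\psi_{a}    &= 0 && \text{(Lorentz-type condition (23))} \\
  F_{ab}\,\gamma^{ab}u  &= 0 && \text{(constraint (37) on the twistor spinor $u$)}
\end{align}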
The manipulations between (24) and (37) can be done in terms of gamma matrices in a manner similar to the calculations in Sec. 3.A. For the spin lowering from spin-3/2 to spin-1, we have A = (u, Ψ) as given in (38), and the condition to obtain the Maxwell solution in (48) corresponds to ((1/2) R_{bacd} γ^{cd} + K_{bc} γ^c γ_a) ψ^b = [(n − 3)/(2(n − 1))] R ψ_a (A16), where R_{bacd} are the components of the Riemann tensor. The transformation rule between two massless Rarita-Schwinger fields given in (53) can be written in terms of gamma matrices in the same way. With the conventions defined in this appendix, all the derivations in the paper can be carried out using gamma matrices, in a manner equivalent to the Clifford and exterior calculus methods used in the paper.
"Physics",
"Mathematics"
] |
Mechanisms of primordial follicle activation and new pregnancy opportunity for premature ovarian failure patients
Primordial follicles are the starting point of follicular development and the basic functional unit of female reproduction. Primordial follicles are formed around birth, and most of them then enter a dormant state. Because primordial follicles are limited in number and cannot be renewed, a primordial follicle cannot return to dormancy once it enters the growing state. Thus, the orderly, selective activation of primordial follicles directly affects the rate of follicle consumption and thereby determines the length of the female reproductive lifespan. Studies have found that appropriately inhibiting the activation rate of primordial follicles can effectively slow the rate of follicle consumption, maintain fertility and delay ovarian aging. Based on the known mechanisms of primordial follicle activation, in vitro activation (IVA) techniques for primordial follicles have been developed for clinical use. IVA can help patients with premature ovarian failure, middle-aged infertile women, and women rendered infertile by gynecological surgery or medical treatment to solve their infertility problems. Studying the mechanisms of selective activation of primordial follicles can contribute to the development of more efficient and safer IVA techniques. In this paper, recent advances in the mechanisms of primordial follicle activation and their clinical applications are reviewed.
Introduction
The ovary is an important reproductive and endocrine organ in female mammals. Normal ovarian function is fundamental to an appropriate reproductive lifespan and a stable endocrine environment. There are two types of follicles in the adult ovarian follicle pool: growing follicles, and a large number of primordial follicles that constitute the ovarian reserve. The primordial follicle pool is not renewable, and a primordial follicle cannot return to dormancy once it enters the growing state (Zhang and Liu, 2015; Kallen et al., 2018). Therefore, orderly primordial follicle activation plays a decisive role in maintaining the length of female reproductive life (Reddy et al., 2010; Zhao et al., 2021). Follicles in the ovary pass through different developmental stages, including primordial, primary, secondary, antral and preovulatory follicles, but most follicles are primordial follicles, and these primordial follicles are in a dormant, static state (Pedersen, 1970; Hsueh et al., 2015; Zhang et al., 2015; Monget et al., 2021) (Figure 1). Primordial follicles consist of a single central oocyte surrounded by multiple pre-granulosa cells. Interestingly, the oocytes are arrested in the first meiotic division and their growth is relatively static, while the cell cycle of the pre-granulosa cells is inhibited (Jaffe and Egbert, 2017; Granados-Aparici et al., 2019). This state can be maintained for as long as a year in mice, and up to 50 years in humans. Dormant primordial follicles are recruited from the primordial follicle pool and enter the growing follicle stage; this process is named initial recruitment, also called primordial follicle activation (Lintern-Moore and Moore, 1979). Primordial follicle initial recruitment is different from cyclic recruitment (Kallen et al., 2018). It is generally believed that cyclic recruitment is regulated by gonadotropins, whereas initial recruitment is not (McGee and Hsueh, 2000; Bian et al., 2021). Initial recruitment is mainly regulated by signals in the pre-granulosa cells and the oocyte, as well as by conditions such as growth factors and stress in the primordial follicle microenvironment (Bian et al., 2021). After the primordial follicle is activated, the pre-granulosa cells gradually change from flat to wedge-shaped, then cuboidal, and are later called granulosa cells. Meanwhile, the oocyte diameter increases (Kallen et al., 2018) (Figure 2). Once a primordial follicle is activated and enters the growing follicle stage, it enters an irreversible growth and development process, so the activation of the primordial follicle is effectively a gate for follicular development (Hirshfield, 1991). To maintain a suitable length of reproductive life and the reproductive health of the body, primordial follicles in the ovaries need to be properly activated at the right time (Chen et al., 2022). The understanding of the mechanisms of primordial follicle activation is still limited. To better capture the progress of research on primordial follicle activation, we summarize the currently known key networks that regulate the activation of primordial follicles in this review.
Two waves of primordial follicle activation in the mouse
Primordial follicles are formed around birth. The primordial follicles in the ovarian medulla are synchronously activated to become the first wave of activated follicles, whereas the dormant primordial follicles in the cortical region of the ovary are gradually activated as a second wave (Hirshfield, 1991; Mork et al., 2012). Primordial follicles are mainly stored in the cortical area, and growing follicles are mainly located in the medullary area. The developmental dynamics, functions and mechanisms of the two waves of follicles are different (Zheng et al., 2014; Dai et al., 2022). The activation of the first-wave primordial follicles in the medulla is regulated by oocytes and is already determined during the formation of primordial follicles in the embryonic stage (Dai et al., 2022). In contrast, the activation of the second-wave primordial follicles in the cortex of the adult ovary may be regulated by pre-granulosa cells. The first wave of primordial follicle development later contributes to the onset of puberty, while the second wave of primordial follicles contributes to the entire reproductive process in adulthood (Zheng et al., 2014). The current understanding of the mechanisms underlying the two waves of primordial follicle activation is limited, and this remains a fundamental scientific topic that needs attention in the study of primordial follicle development.
The signaling pathways in the oocyte
PTEN-PI3K-AKT signaling
Primordial follicles are composed of only two types of cells: oocytes and pre-granulosa cells. The activation of primordial follicles requires the participation of both cell types. In the process of primordial follicle activation, two signaling pathways play a key role: the phosphatidylinositol-3 kinase (PI3K) signaling pathway and the mechanistic target of rapamycin complex 1 (mTORC1) signaling pathway. PI3K signaling in oocytes is required for primordial follicles to maintain the dormant state and the follicular reserve (Adhikari and Liu, 2009; Adhikari and Liu, 2010; Zhang et al., 2014; Maidarti et al., 2020; Zhao et al., 2021). Phosphatase and tensin homolog (PTEN) negatively regulates intracellular levels of phosphatidylinositol-3,4,5-trisphosphate (PIP3) and functions as a tumor suppressor by negatively regulating the protein kinase B (PKB/AKT) signaling pathway (Worby and Dixon, 2014; Yehia et al., 2020). PTEN-PI3K-AKT is a relatively well-studied and clearly defined signaling pathway in primordial follicle activation (Reddy et al., 2005; Liu et al., 2006). PTEN is mainly localized in the oocytes of dormant primordial follicles; deletion of Pten in primordial follicle oocytes leads to excessive activation of the PI3K signaling pathway in oocytes, premature activation of primordial follicles, and ultimately premature ovarian failure (Reddy et al., 2008). 3-Phosphoinositide-dependent protein kinase 1 (PDK1) activates AKT through co-binding to the PIP3 generated by PI3Ks (Gagliardi et al., 2018). Conditional knockout of Pdk1 in primordial follicle oocytes results in depletion of the majority of primordial follicles around the onset of sexual maturity. PTEN-PDK1 signaling in oocytes thus controls the survival, loss and activation of primordial follicles (Reddy et al., 2009).
FIGURE 2
Characteristics of tissue structure changes during primordial follicle activation. A follicle consists of a single oocyte in the middle and several somatic cells surrounding it. Primordial follicle activation is mainly characterized by two structural changes. On the one hand, after primordial follicle activation the pre-granulosa cells slowly change from flat to wedge-shaped, and then to cuboidal. On the other hand, the oocyte diameter increases.
TSC1/TSC2-mTOR signaling in the oocyte
mTOR is essential for oogenesis, follicular development, maintenance of the follicular reserve, and oocyte maturation (Liu et al., 2018; Correia et al., 2020). TSC complex subunit 1 (TSC1) and TSC complex subunit 2 (TSC2) negatively regulate mammalian target of rapamycin complex 1 (mTORC1) signaling. mTOR is a conserved kinase that mediates cellular responses to stresses such as nutrient deprivation, growth factors and DNA damage (Salussolia et al., 2019). Deletion of Tsc1/2 in primordial follicle oocytes leads to overactivation of the mTOR signaling pathway in oocytes, which also causes premature activation of primordial follicles and eventually premature ovarian failure. However, primordial follicles are activated normally in the absence of mTOR in primordial follicle oocytes, although subsequent follicle development is arrested and granulosa cells transform into Sertoli-like cells (Guo et al., 2018).
CDC42
Cell division cycle 42 (CDC42) is a small GTPase of the Rho subfamily, which regulates signaling pathways that control diverse cellular functions, including cell morphology, migration, endocytosis and cell cycle progression (Heinrich et al., 2021; Campbell et al., 2022; Wirth et al., 2022). The subcellular localization of CDC42 during primordial follicle activation is interesting. In dormant primordial follicles, CDC42 is specifically expressed in the oocyte cytoplasm. When primordial follicles are activated, CDC42 expression on the oocyte membrane is greatly enhanced, as is the GTP-bound active form of CDC42. CDC42 binds to the p110-β protein, regulates the activation of the PI3K signaling pathway in oocytes, and promotes primordial follicle activation (Yan et al., 2018). In Yan's study, the expression of CDC42 and PTEN in primordial follicle oocytes was found to be mutually exclusive, but the specific regulatory relationship between CDC42 and PTEN is unclear and needs to be explored in future research.
E-cadherin
Cell adhesion is essential for tissue structure and function. Members of the cadherin family play a key role in cell-cell recognition and adhesion and interact with intracytoplasmic proteins through adaptor proteins (Collins et al., 2017). E-cadherin, a classical cadherin of the cadherin superfamily, is a calcium-dependent cell adhesion molecule that is involved in the establishment and maintenance of epithelial cell morphology during embryogenesis and adulthood (Zaidel-Bar, 2013). E-cadherin is specifically localized to the cytomembrane of oocytes in primordial follicles. E-cadherin in primordial follicle oocytes plays an indispensable role in the maintenance of the primordial follicle pool by facilitating follicular structural stability and regulating NOBOX expression (Yan et al., 2019). This study also demonstrates that oocyte-derived factors are necessary for the maintenance of follicles.
The signaling pathways in pre-granulosa cells
TSC1/TSC2-mTOR signaling in pre-granulosa cells
Interestingly, primordial follicles fail to be activated after deletion of Rptor, which encodes a key member of the mTORC1 complex, in pre-granulosa cells, whereas primordial follicles are hyperactivated after deletion of Tsc1/2. Studies using multiple transgenic mouse models reveal that pre-granulosa cells initiate and govern the activation of the second wave of primordial follicles (Zhang et al., 2015). Under the stimulation of surrounding factors such as hypoxia, nutritional cues and stress, mTOR in pre-granulosa cells is upregulated, the pre-granulosa cells grow and differentiate into granulosa cells, and at the same time they secrete more KIT ligand. KIT ligand binds to KIT receptors on the oocyte membrane and activates the PI3K signaling pathway in the oocyte (Kissel et al., 2000; Nilsson and Skinner, 2004; Hutt et al., 2006; Zhang et al., 2014; Saatcioglu et al., 2016). This enables downstream FOXO3a to be phosphorylated and transported out of the nucleus, relieving the inhibition of oocyte growth and thereby enabling primordial follicle activation (Castrillon et al., 2003; Zhang et al., 2014; Ezzati et al., 2015). Other studies have also found that CREB, MAPK, HDAC6, NGF and other molecules can regulate the activation of primordial follicles through the mTOR signaling pathway (He et al., 2017; Zhao et al., 2018; Li et al., 2020; Zhang et al., 2021b; Zhang et al., 2022). These studies further illustrate the important role of the mTOR signaling pathway in the activation of primordial follicles.
FOXL2
Forkhead box L2 (FOXL2) is a forkhead transcription factor that contains a forkhead DNA-binding domain and plays a role in ovarian development and function (Benayoun et al., 2011; Georges et al., 2013). Expansion of a polyalanine repeat region and other mutations in FOXL2 are a cause of blepharophimosis syndrome, premature ovarian failure and granulosa cell tumour (De Baere et al., 2002; Nallathambi et al., 2007; Pierini et al., 2020). The formation of primordial follicles in Foxl2 knockout mice was not affected, but the pre-granulosa cells failed to differentiate and remained flat, resulting in no growing follicles in the ovary, and the female mice were sterile (Schmidt et al., 2004). This study also demonstrates that the developmental status of pre-granulosa cells is critical for the activation of primordial follicles and the development of subsequent growing follicles.
SMAD3
SMAD family member 3 (SMAD3) is known to serve as a signaling intermediate for the transforming growth factor beta (TGF-β) family (Kawabata et al., 1998). Smad3 knockout mice are viable. Notably, primordial follicle formation was not affected in Smad3 knockout mice, but the activation of primordial follicles and the development of growing follicles were delayed, resulting in reduced fertility (Tomic et al., 2002). The transcription factor SMAD3 is expressed in the nucleus of pre-granulosa cells. SMAD3 directly regulates the transcription of CCND2 and inhibits the expression of Myc. CCND2 is bound by p27, thereby arresting the cell cycle of pre-granulosa cells and maintaining the dormant state of primordial follicles. When the level of TGF-β increases, SMAD3 is transported out of the nucleus and p27 dissociates from CCND2, relieving the inhibition of the pre-granulosa cell cycle and promoting the activation of primordial follicles (Granados-Aparici et al., 2019). From the current research, p27 and SMAD3 play key roles in follicle development and oogenesis, and they may regulate primordial follicle activation mainly by affecting the pre-granulosa cell cycle (Rajareddy et al., 2007). However, although p27 and Smad3 knockout mice have been used in these studies, the precise roles and mechanisms of p27 and SMAD3 in primordial follicle activation have not been fully elucidated, so further research is needed.
AMH
Anti-Müllerian hormone (AMH) is a secreted ligand of the TGF-β superfamily (Pepinsky et al., 1988; Howard et al., 2022). AMH is exclusively produced by granulosa cells of ovarian follicles during the early stages of follicle development (Moolhuijsen and Visser, 2020). AMH plasma levels reflect the continuous non-cyclic growth of small follicles, thereby mirroring the size of the resting primordial follicle pool and thus acting as a useful marker of ovarian reserve (Dewailly et al., 2014). AMH is currently the best measure of ovarian reserve across different clinical conditions (Teede et al., 2019; Shrikhande et al., 2020; Vatansever et al., 2020). AMH supplementation is able to maintain the follicular reserve in some ovarian injury models, such as chemotherapy-induced premature ovarian failure and polycystic ovary syndrome (PCOS) (Sonigo et al., 2019; Hoyos et al., 2020; Ou et al., 2021; Rudnicka et al., 2021).
ESR2
Estrogen and its receptors play an integral role in the cyclic recruitment of growing follicles, and estrogen receptor knockout causes infertility in female mice due to abnormal meiosis (Shoham and Schachter, 1996; Liu et al., 2017; Tang et al., 2019). A recent study found that disruption of estrogen receptor β (ESR2) signaling results in increased protein levels of AKT and mTOR in both granulosa cells and oocytes, leading to increased activation of primordial follicles (Chakravarthi et al., 2020). This study suggests that estrogen receptors may have no effect on the activation of the first wave of primordial follicles but may regulate the activation of the second wave. It is also possible that deletion of the estrogen receptor alters the development of growing follicles, and the resulting changes in the ovarian microenvironment lead to abnormal activation and loss of primordial follicles.
Other important molecules and related signaling pathways
HIPPO
The HIPPO pathway was first discovered in Drosophila melanogaster; the pathway is named for the hippopotamus-like overgrowth of the head and eyes of Drosophila carrying mutations in its key molecules. The HIPPO pathway is highly conserved from Drosophila to mammals (Moya and Halder, 2019). The upstream membrane protein receptors of the HIPPO signaling pathway act as receptors for extracellular growth-inhibition signals; once they sense these signals, they trigger a kinase cascade of phosphorylation reactions that eventually phosphorylates the downstream effectors Yes-associated protein (YAP) and transcriptional coactivator with PDZ-binding motif (TAZ). Cytoskeletal proteins bind to the phosphorylated YAP and TAZ, retaining them in the cytoplasm and reducing their activity, thus regulating organ size and volume (Huang et al., 2005; Yu et al., 2015; Ma et al., 2019). Ovarian fragmentation promotes actin polymerization; p-YAP levels then decrease, which promotes nuclear translocation of YAP. Nuclear localization of YAP further promotes the expression of CCN growth factors and BIRC apoptosis inhibitors, which ultimately promotes follicular overgrowth (Li et al., 2010; Kawamura et al., 2013). The changes in the HIPPO pathway after ovarian fragmentation are a double-edged sword: on the one hand, they can lead to follicular overgrowth and premature ovarian failure due to early follicular depletion; on the other hand, this property can be exploited to promote primordial follicle activation and to develop primordial follicle in vitro activation techniques that help infertile patients to conceive children.
HDAC6
Histone deacetylase 6 (HDAC6) is a special histone deacetylase with two deacetylation domains and one ubiquitin-binding domain. HDAC6 plays a central role in several processes, including positive regulation of peptidyl-serine phosphorylation, protein deacetylation, protein destabilization and microtubule stability (Olzmann et al., 2007; Wang et al., 2018; Wang et al., 2020a; Osseni et al., 2020; Wang et al., 2022). Our study showed that HDAC6 is expressed heterogeneously in different primordial follicles. About 3%-4% of primordial follicles in neonatal and adult mouse ovaries had low HDAC6 expression, and about 65% of the primordial follicles with low HDAC6 expression were activated. Further studies found that HDAC6 is transiently downregulated during primordial follicle activation, mediating the selective activation of mouse primordial follicles by regulating the expression of mTOR (Zhang et al., 2021b). Interestingly, overexpression of Hdac6 extends fecundity in female mice; longer telomeres and reduced DNA damage may also reduce tumorigenesis in Hdac6-overexpressing mice. Combining these studies, we speculate that HDAC6 may regulate the selective activation of primordial follicles, prolong telomere length in follicular cells and reduce DNA damage, ultimately prolonging the female reproductive lifespan.
SIRT1
NAD-dependent protein deacetylase sirtuin-1 (SIRT1) has been reported to be involved in the regulation of cellular senescence, aging and organism longevity through the deacetylation of its substrates, altering their transcriptional and enzymatic activities as well as their protein levels (Yao and Rahman, 2012; Chen et al., 2020). SIRT1 binds directly to the Akt1 and mTOR promoters to promote their transcription, and increased AKT and mTOR expression promotes primordial follicle activation. We conducted a study with clinical translational potential and found that short-term SIRT1-agonist treatment activates primordial follicles in vitro and that these follicles develop normally, in both mice and humans. In vitro fertilization experiments in mice showed that the quality of oocytes obtained by this method was normal. These results suggest that SIRT1 may be a key protein regulating primordial follicle activation and has certain clinical value. Interestingly, overexpression of Sirt1 was able to delay ovarian aging, and this effect was similar to that of calorie restriction; calorie restriction protects fertility in female mice by activating SIRT1 (Long et al., 2019; Zhang et al., 2019).
TGF-β1
Transforming growth factor beta 1 (TGF-β1), encoded by TGFB1, is a secreted ligand of the TGF-β superfamily. TGF-β binds to various TGF-β receptors, leading to the recruitment and activation of SMAD family transcription factors that regulate gene expression. The members of the TGF-β superfamily, including TGF-β, GDF9, BMP2, BMP4, BMP5, BMP6, BMP7, BMP15, activins and inhibin, are expressed by ovarian somatic cells and oocytes in a developmental stage-related manner and function as intraovarian regulators of folliculogenesis (Lee et al., 2001; Hanrahan et al., 2004; Zhao et al., 2016; Vander Ark et al., 2018). Fetal mouse ovaries at embryonic day 18.5 were cultured with added TGF-β ligand for 5-7 days in vitro (Wang W. et al., 2014). The results showed that the primordial follicle reserve was reduced and primordial follicle activation was inhibited; the opposite result was obtained after incubation with SD208, an inhibitor of TGFβ-R1. Further testing found that TGF-β maintained the primordial follicle inventory and primordial follicle dormancy by inhibiting the mTOR signaling pathway. In Wang's research, TGF-β only affected the mTOR signaling pathway and had no effect on the PI3K signaling pathway (Wang Z.-P. et al., 2014). Zhang's research showed that mTOR signaling in pre-granulosa cells initiates and regulates primordial follicle activation: after the mTOR pathway in pre-granulosa cells is activated, key PI3K proteins in oocytes are phosphorylated. Combining the analyses of Wang's and Zhang's studies, we speculate that long-term addition of TGF-β may maintain the primordial follicle pool and primordial follicle dormancy by regulating the mTOR signaling pathway in oocytes. Interestingly, when 4-day-old mouse ovaries were cultured with TGF-β for 2 h, the phosphorylation of S6, a key downstream effector of the mTOR pathway, was significantly increased, p-AKT was unchanged, and SMAD3 nuclear export in pre-granulosa cells was increased, thereby promoting primordial follicle activation (Granados-Aparici et al., 2019). In our study (data not shown), the mTOR and PI3K signaling pathways were significantly inhibited after adding SD208 to cultured 2 dpp mouse ovaries for 2 days. TGF-β therefore plays different roles at different stages of follicular development, and long-term and short-term upregulation of TGF-β may lead to different or even opposite outcomes for primordial follicle development.
NGF
Neurotrophins are growth factors that promote the survival, proliferation and differentiation of neuronal and non-neuronal cells (Wang W. et al., 2014; Denk et al., 2017). Nerve growth factor (NGF) is a prototypical glycoprotein belonging to the neurotrophin family. NGF has two classes of receptors: the high-affinity receptor tyrosine kinase TrkA and the low-affinity receptor p75 (Chao and Hempstead, 1995; di Mola et al., 2000). The expression of NGF and its receptors is developmentally regulated during folliculogenesis in the mouse ovary (Chaves et al., 2010). In Ngf knockout mice, the number of primordial follicles was not changed, but the number of primary and secondary follicles was significantly reduced; in the absence of NGF, primordial follicles cannot be activated. It is worth noting that exogenous addition of NGF has no effect on the activation of primordial follicles (Kerr et al., 2009; Dorfman et al., 2014). After the ovary is mechanically injured, the expression of NGF in the stromal cells near the injury site increases rapidly, and NGF induces selective activation of primordial follicles near the injury site, including near the ovulation site, through the mTOR signaling pathway (He et al., 2017). However, how NGF in the stromal cells induces the activation of nearby primordial follicles, and the specific signal transduction and molecular mechanisms involved, are still unclear. This is a scientific issue that needs attention in the future.
EGF
Epidermal growth factor (EGF) is a member of the epidermal growth factor superfamily and acts by binding with high affinity to the cell surface epidermal growth factor receptor (EGFR) (Schneider and Wolf, 2009). In an in vitro ovarian culture model, addition of EGF promotes primordial follicle activation by activating the PI3K pathway in oocytes, and short-term treatment (30 min) can induce the activation of primordial follicles of humans and mice in vitro. EGF is therefore a highly effective candidate drug for primordial follicle activation in vitro (Fujihara et al., 2014; Zhang et al., 2020). EGF is highly expressed in the zebrafish ovary and testis; EGFRa is expressed in various organs, including the brain, while EGFRb is mainly expressed in the lung and ovary. It is worth noting that only deletion of EGFRa inhibited primordial follicle activation in vivo, whereas primordial follicle activation was not affected by deletion of EGF (Song et al., 2022). This suggests that other growth factors may promote primordial follicle activation through EGFR.
p27
Cyclin-dependent kinase inhibitor 1B (Cdkn1b), also known as p27 or p27Kip1, is a suppressor of cell cycle progression (Polyak et al., 1994; Chu et al., 2008; Razavipour et al., 2020). The expression pattern of p27 in the ovary during primordial follicle formation and activation is interesting. During primordial follicle formation, p27 is only expressed in the nucleus of somatic cells and not in oocytes; after primordial follicle formation, p27 is expressed in both pre-granulosa cells and oocytes, and its expression decreases in granulosa cells during primordial follicle activation. In p27 knockout mice, primordial follicles are formed prematurely and are then also activated prematurely; in addition, a large number of follicles undergo atresia, eventually leading to premature ovarian failure. Many studies have found that PI3K can regulate the expression of p27, but interestingly p27 and PI3K act independently in the process of primordial follicle activation (Rajareddy et al., 2007). However, during chemotherapy, dormant primordial follicles are simultaneously overactivated in the ovary via the PI3K/FOXO3a/p27 pathway. Further studies found that, in the model of premature ovarian failure induced by cisplatin injection, FOXO3a binds to the promoter of p27 to inhibit its transcription, resulting in excessive activation of primordial follicles. When melatonin and gastrin were injected at the same time, the binding activity of FOXO3a and p27 increased, which promoted the transcription of p27 and rescued the cisplatin-induced over-activation of primordial follicles (Jang et al., 2017).
Clinical application of primordial follicle activation in vitro
Primordial follicles (about 1,000) remain in the ovaries of patients with premature ovarian failure (POF), but these primordial follicles are dormant, and their development is not regulated by gonadotropins (Nelson, 2009; De Vos et al., 2010). To utilize the primordial follicle resources in the ovarian tissue of POF patients, the dormant primordial follicles must first be activated and allowed to develop to a stage at which they can respond to gonadotropins; assisted reproductive technology can then be used to achieve pregnancy (Telfer and Anderson, 2021). In vitro activation (IVA) of primordial follicles was recently developed on the basis of the mechanisms of primordial follicle activation and can help patients with premature ovarian failure to achieve fertility (Yin et al., 2016). In addition, IVA can also be used in middle-aged women who are infertile, or women rendered infertile by medical treatment, allowing them to use their own oocytes to have offspring (Bertoldo et al., 2018).
The HIPPO signaling pathway determines organ size and is conserved from Drosophila to mammals (Seo and Kim, 2018; Ma et al., 2019; Wu and Guan, 2021). Following ovarian damage, disruption of the HIPPO pathway accelerates follicle development, including that of primordial and growing follicles, which results in increased ovarian size in mice. By exploiting this feature of the HIPPO signaling pathway to promote primordial follicle activation, in combination with agonists of the PI3K or mTOR signaling pathways, a method of primordial follicle activation in vitro was developed that has helped patients with premature ovarian failure to successfully have healthy babies (Kawamura et al., 2013; Zhai et al., 2016; Fabregues et al., 2018; Grosbois and Demeestere, 2018; Lee and Chang, 2019; De Roo et al., 2020; Devenutto et al., 2020; Hsueh and Kawamura, 2020; Tanaka et al., 2020; Zhang et al., 2021a). At the same time, factors such as the long in vitro processing time, the poor in vitro activation efficiency of primordial follicles, and ethical issues have hindered the clinical application of this technology, and several studies are devoted to improving these adverse factors. The combined use of PI3K and mTOR agonists, resveratrol (a SIRT1 agonist), and Rac/Cdc42 activator II (a CDC42 agonist) can induce primordial follicle activation in mouse ovaries in vitro and greatly shortens the time required for in vitro activation; these agents may be potential new drugs for in vitro activation (Sun et al., 2015; Yan et al., 2018; Zhang et al., 2019). Further research found that orthotopic injection of a CDC42 agonist into the ovary can promote the activation of primordial follicles in premature ovarian failure mice and induce primordial follicle activation in human ovarian tissue in vitro. By inducing activation directly in vivo, this method avoids the unknown risks associated with in vitro exposure of ovarian tissue. In vitro activation of primordial follicles, as a new assisted reproductive technology, thus provides new fertility hope for patients with premature ovarian failure.
Conclusion
The rate of primordial follicle activation controls the duration of female fertility. The activation of primordial follicles is regulated by various signaling pathways acting between oocytes and granulosa cells, and is the result of close interactions between molecules and between cells (Figure 3). Current studies show that the first wave of primordial follicle activation is determined by PI3K signaling in the oocyte and contributes to female puberty, whereas the second wave of primordial follicle activation is determined by mTOR signaling in pre-granulosa cells and determines female fertility throughout life. Understanding the mechanisms of primordial follicle activation will help us to further dissect follicle development and promote the progress of in vitro activation technology.
Author contributions
MH, TZ, JZ, TY, and TC collected the information. MH and TZ wrote the manuscript. TZ, ZX, CW, and ZX revised the manuscript. All authors read and approved the final manuscript.
Funding
This study is funded by the National Natural Science Foundation of China (32100686 to TZ, 32100913 to MH and 82260291 to TZ) and the Guizhou Provincial Science and Technology Projects [ZK (2022) 4017].
Acknowledgments
We thank the members of TZ's laboratory for their constructive suggestions in the preparation of the manuscript.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
FIGURE 3
Schematic model depicting the mechanisms of primordial follicle activation. Primordial follicle activation results from the delicate interaction of pre-granulosa cells, oocytes and the follicular microenvironment. The mTOR and WNT pathways in pre-granulosa cells, the mTOR and PI3K pathways in oocytes, and the communication channel (KITL-KIT) between pre-granulosa cells and oocytes are all necessary for primordial follicle activation. The mTOR signaling pathway in pre-granulosa cells initiates and regulates primordial follicle activation in the adult ovary. The mTOR signal in the pre-granulosa cells senses changes in surrounding nutrients, pressure and other cues, so that the pre-granulosa cells secrete more KITL. After KITL binds to its receptor on the oocyte membrane, it activates the PI3K signaling pathway in the oocyte and then promotes primordial follicle activation.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
"Biology"
] |