Optical-image transfer through a diffraction-compensating metamaterial

Cancellation of optical diffraction is an intriguing phenomenon enabling optical fields to preserve their transverse intensity profiles upon propagation. In this work, we introduce a metamaterial design that exhibits this phenomenon for three-dimensional optical beams. As an advantage over other diffraction-compensating materials, our metamaterial is impedance-matched to glass, which suppresses optical reflection at the glass-metamaterial interface. The material is designed for beams formed by TM-polarized plane-wave components. We show, however, that unpolarized optical images with arbitrary shapes can be transferred over remarkable distances in the material without distortion. We foresee multiple applications of our results in integrated optics and optical imaging.

Introduction

Due to diffraction, transversely confined optical fields diverge upon propagation in free space. As a result, optical images are not propagation-invariant and can be made sharp only in a single transverse plane. A simple way to overcome the divergence is to guide light in a waveguide such as an optical fiber. As an example, a bundle of optical waveguides can be used to transfer two-dimensional images over long distances without considerable distortion [1][2][3][4][5]. However, the image at the output is usually broken into circular pixels and has a low resolution. A different approach has been proposed in the field of spatially dispersive photonic crystals. It has been shown that, if at a given frequency the wave vector k in the crystal depends on the wave propagation direction such that its projection onto a certain direction (z) remains constant (kz = C), light will propagate along this direction without divergence [6][7][8][9][10][11][12][13][14][15][16][17]. The phenomenon is called self-collimation. As a rule, it is obtained at a wavelength (λ) close to the Bragg diffraction regime, and therefore the surface reflectivity of the crystal appears to be high. In addition to photonic crystals, hyperbolic-type metamaterials have been shown to exhibit diffraction cancellation [14,18,19]. These materials are composed of infinitely wide metal plates or infinitely long metal rods distributed periodically with a period Λ < λ/2. The materials are also spatially dispersive, but for them the condition kz = C can cover also free-space evanescent waves, which yields subdiffraction-limited resolution for optical-image transfer through the material. The phenomenon is called canalization. Fishnet metamaterials have also been shown to provide diffraction compensation at optical frequencies [20,21]. These materials, however, show very high absorption: Im(kz) ≈ 6 µm⁻¹, corresponding to an imaginary refractive index of ca. 0.7. Recently, we have proposed one more way to suppress optical diffraction by combining spatial dispersion with the optical anisotropy of silver-disc metamolecules [22]. Our approach makes it possible to tune the wave impedance in the material towards that of the surrounding medium and, as a result, dramatically reduce the reflection at the metamaterial surface. Often, the materials proposed for diffraction-free guidance of optical beams are two-dimensional, meaning that the beam divergence is suppressed only for waves with wave vectors in a single plane. Materials like these are usually fabricated in the form of a slab waveguide parallel to the plane in question. Waves propagating at an angle to this plane are confined in the slab by total internal reflection.
In this work, we introduce a three-dimensional metamaterial that suppresses optical diffraction for three-dimensional optical fields. The material is composed of silver nanobars aligned parallel to the desired diffraction-compensation axis z. The increase of the wavenumber with the angle θ between z and k (required to obtain constant kz) is achieved for TM-polarized waves as a result of their more efficient coupling to the longitudinal plasmon resonance of the nanobars at larger propagation angles θ. We demonstrate the effect of suppressed diffraction with the example of a radially polarized hollow optical beam. Then we consider the possibility of achieving distortion-free transfer of arbitrary two-dimensional images through the designed metamaterial. We show that, if the image is formed by unpolarized or circularly polarized light, it can be transferred through the material without distortion over a remarkable distance, even though the material is designed to deal only with TM-polarized waves. Eventually, the image becomes composed purely of TM-polarized waves, as the TE-polarized part of the image freely spreads in the material. Hence, the material acts as an unconventional optical polarizer. The wave parameters and the wavelength of the diffraction-cancellation effect can be selected by tuning the dimensions of the nanobars and the unit cells. Compared to already demonstrated diffraction-compensating photonic crystals [8-11, 14, 15, 17] and metamaterials [14,18,19,23], our material has a negligible surface reflectivity. In addition, it has a much lower absorption coefficient than typical "canalizing" metamaterials.

The introduced diffraction-compensating metamaterial can be used as an optical waveguide or a distortion-free image-transferring device. Independently of the field intensity profile, incidence angle and localization at the material surface, the field will propagate along the designed diffraction-compensation axis and preserve its transverse shape and size. In particular, multiple laser beams with coinciding focal spots but different incidence angles will propagate along the same path in the material, which implies an increased information-transfer capacity compared to conventional waveguides [24]. Furthermore, the material can be used to create wide-angle antireflection and high-reflection coatings or beam splitters acting independently of the incidence angle. The designed metamaterial can also be used in laser technology to create, e.g., planar laser resonators insensitive to mirror alignment and Fabry-Perot-type intracavity filters free of the wave walk-off problem [25,26].
Design and characterization of metamaterials

The metamaterials designed in this work consist of a lattice of silver metamolecules embedded in glass. They are conveniently characterized by the effective refractive index (n) and impedance (Z) seen by a plane wave (fundamental Bloch mode [27,28]) in the material. Due to spatial dispersion, these parameters depend on the wave propagation direction. The retrieval of n and Z is done by considering reflection and transmission of plane waves by a metamaterial slab [29][30][31]. If the material contains tilted metamolecules, one has to calculate the slab reflection and transmission coefficients ρ1 and τ1 for a wave incident at an angle θi, and coefficients ρ2 and τ2 for a wave incident from the opposite side at an angle π − θi [29]. To minimize the computational load, we use a metamaterial with negligible evanescent-wave coupling between the metamolecular layers, in which case the consideration of a single-layer slab is sufficient. In the calculations, the slab is assumed to be surrounded by the host medium. We use the COMSOL Multiphysics software to calculate the field distributions around and inside the slab, which yields the required reflection and transmission coefficients. The normal component kz of the effective propagation constant in the metamaterial is calculated from the phase-shifted coefficients gi = ρi exp(ik0z Λz) and fi = τi exp(ik0z Λz), where k0 is the wave vector in the host medium, Λz denotes the lattice period in the z-direction, and m is a physically justified integer branch index [29][30][31]. The refractive index n can then be solved from kz and the wavenumber in vacuum, kvac. The impedance is defined as the ratio of the spatially averaged transverse electric and magnetic fields; it is expressed in terms of Z0, the impedance of the host medium, and σ = ±1 for the TE- and TM-polarizations, respectively. In general, the impedances seen by two counter-propagating waves are equal if the metamaterial is not bifacial [30,32]. The parameters n and Z are functions of the angle θi, which is connected to the propagation angle θ in the medium by the complex Snell's law [33].

The design of diffraction-compensating metamaterials follows the procedure proposed in [22] with the aim of reaching suppressed diffraction for three-dimensional optical beams. The procedure allows us to achieve not only three-dimensional diffraction compensation, but also impedance-matching of the metamaterial to the surrounding medium. Then, to model propagation of optical beams in the designed metamaterial, we use a general method we have previously developed [34]. The method makes use of a rigorous vectorial plane-wave decomposition that allows one to model the beam-metamaterial interaction very efficiently, independently of the beam propagation distance.

Diffraction compensation is achieved when the refractive index, plotted in spherical coordinates as a function of the wave propagation direction, shows a large enough flat part. We fulfill this requirement for TM-polarized waves using a strongly optically anisotropic and spatially dispersive metamaterial composed of silver nanobars in glass. Hence, the material ensures effective diffraction cancellation for optical beams composed of TM-polarized waves, such as radially polarized Hermite-Gaussian modes [35].
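The display equations of the retrieval procedure did not survive the text extraction. As a hedged reconstruction, assuming the standard Bloch-mode retrieval for a periodic stack (the exact expressions of [29-31] may differ in detail), the dispersion relation and the index read

\[
\cos(k_z \Lambda_z) = \frac{1 + f_1 f_2 - g_1 g_2}{f_1 + f_2},
\qquad
k_z = \frac{1}{\Lambda_z}\left[\pm\arccos\!\left(\frac{1 + f_1 f_2 - g_1 g_2}{f_1 + f_2}\right) + 2\pi m\right],
\qquad
n = \frac{\sqrt{k_z^2 + k_x^2}}{k_\mathrm{vac}},
\]

where k_x is the transverse wavenumber conserved across the slab interfaces. A quick consistency check: for an empty unit cell (g_i = 0, f_i = e^{i k_{0z}\Lambda_z}) the dispersion relation gives k_z = k_{0z}, as it must. The impedance then follows from an NRW-type square-root expression in g_i and f_i, with σ = ±1 distinguishing the TE and TM conventions.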
A diffraction-compensating metamaterial

The metamaterial we have designed has the periodic three-dimensional structure shown in Fig. 1. It consists of silver nanobars arranged in a tetragonal lattice in glass. The nanobars are cuboids with dimensions Lx = Ly = 30 nm and Lz = 130 nm. The lattice dimensions are Λx = Λy = 120 nm and Λz = 200 nm. The material is designed to suppress divergence of optical beams propagating along the z-axis. The parameters n and Z were evaluated using Eqs. (4) and (5) for each given angle θ, wavelength and both TM and TE polarizations. For TM-polarized plane waves, the coupling to the longitudinal plasmon resonance of the nanobars is more efficient at larger propagation angles (θ), which leads to a higher refractive index. The unit-cell dimensions are adjusted such that, when plotted in spherical coordinates, the real part of the refractive index forms a flat surface around θ = 0 and obeys the relation n(θ) = n(0)/cos(θ). The symmetry of the structure makes the refractive index independent of the azimuthal propagation angle, as long as the wave stays TM- or TE-polarized. The same holds for the wave impedance. Hence, to describe the material, it is enough to show only two-dimensional polar plots of n(θ) and Z(θ). Figure 2(a) shows these plots for TM-polarized waves at a vacuum wavelength λvac = 913 nm. In the calculations, the spectrum of the refractive index of silver was taken from [36].

The black solid and red dashed curves in Fig. 2(a) show the real and imaginary parts of the wave parameters, respectively. The real part of the refractive index is seen to be flat at small angles θ. The real part of the normalized impedance is close to 1 at these angles, which ensures a low reflection loss at the glass-metamaterial interface. The angles θ corresponding to the gray sectors cannot be reached from glass, which has a lower refractive index than the metamaterial. Because the values of Im(n) and Im(Z/Z0) are very small, they have been multiplied by factors of 100 and 10, respectively, to make the corresponding curves visible in the pictures [Im(n) = 0.0003 and Im(Z/Z0) = −0.0004 at θ = 0]. Low values of the absorption and reflection coefficients of the metamaterial are important for good optical-image transfer, since absorption and reflection can significantly reduce the intensity as well as distort the image. The rapid increase of the imaginary part of n, and of both the real and imaginary parts of Z, when θ approaches its maximum value is caused by enhanced excitation of localized surface plasmons and increased spatial dispersion as the structure gets closer to the Bragg-reflection regime with increasing Re(n).
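The link between this flatness condition and the self-collimation criterion kz = C quoted in the introduction is a one-line calculation:

\[
k_z(\theta) = n(\theta)\, k_\mathrm{vac} \cos\theta = \frac{n(0)}{\cos\theta}\, k_\mathrm{vac} \cos\theta = n(0)\, k_\mathrm{vac} = \mathrm{const},
\]

so every plane-wave component within the flat part of n(θ) accumulates the same longitudinal phase, and the transverse field profile is reproduced in every cross section.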
The designed metamaterial differs from a hyperbolic-type metamaterial made of infinitely long wires in the presence of gaps between the ends of the nanobars. For comparison, we removed these gaps and tried to satisfy the diffraction-compensation and impedance-matching conditions for λvac < 1 µm. The wave parameters were retrieved from numerically calculated reflection and transmission coefficients of a 1 µm thick metamaterial slab. We have verified that the calculated wave parameters are valid also for thicker slabs. The best result was obtained at a slightly longer wavelength of 1.12 µm for a metamaterial with Λx = Λy = 180 nm (Lx = Ly = 30 nm). This hyperbolic-type metamaterial, however, has significant drawbacks compared to the nanobar metamaterial. While at normal incidence it performs very well, the optical absorption and impedance mismatch of the material increase with θ much faster: when θ increases from 0 to 7°, Im(n) increases from 0.0001 to 0.02 and Z/Z0 changes from about 0.97 to 0.56 − 0.07i. Furthermore, the range of angles for which n(θ) stays approximately flat is more than three times narrower, being limited by θ = 7°. In addition, within this range the first-order divergence parameter, calculated as the angle-averaged root-mean-square value of the derivative ∂Re(kz)/∂Re(kx), is much higher than that of the nanobar metamaterial, 0.07 instead of 0.001. At a fixed θ, this derivative can be considered as the plane-wave inclination parameter [20]. Finally, the wavelength range of the diffraction-compensation regime at small θ is fifty times narrower than for the nanobar metamaterial. This spectral range is defined by requiring that the deviation of n(θ) from a perfectly flat profile nfl(θ) at θ = 7° stays below |nfl(θ) − n(0)|/10; at this limit, the divergence is 10 times smaller than in pure glass. The diffraction-compensation bandwidth of the nanobar material, ca. 50 nm, is relatively wide, which is an important advantage of our metamaterial in view of practical realization of the metamaterial concept and verification of the predicted diffraction-compensation phenomenon.

The wavelength at which the metamaterial is designed to suppress optical diffraction can be changed by tuning some of the structural dimensions. For example, if the longitudinal period Λz is changed from 200 nm to 220 nm, the wavelength of the diffraction-compensation effect moves from λvac = 913 nm to λvac = 883 nm. The plots of n(θ) and Z(θ)/Z0 corresponding to this case are presented in Fig. 2(b). If, instead of Λz, we change the thickness of the bars, e.g., from 30 nm to 40 nm, the operation wavelength of the material shifts to 793 nm. The functions n(θ) and Z(θ)/Z0 calculated for this case are plotted in Fig. 2(c). The refractive-index flatness condition, n(θ) = n(0)/cos(θ), is no longer strictly satisfied around θ = 0, but is met at a larger angle of about 20°. Therefore, the material is expected to provide diffraction compensation for optical images with wider angular spectra (smaller features), but over a shorter propagation distance compared to the materials of Figs. 2(a) and 2(b).
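The first-order divergence parameter used in this comparison is straightforward to estimate numerically. The sketch below is illustrative only: the index functions and the value n(0) = 1.6 are assumed stand-ins, not the retrieved data of Fig. 2.

```python
import numpy as np

def divergence_parameter(n_of_theta, k_vac, theta_max_deg=7.0, num=200):
    """Angle-averaged RMS of dRe(kz)/dRe(kx) over 0..theta_max.

    n_of_theta: callable returning the effective index at propagation
    angle theta (radians). A perfectly flat index surface,
    n(theta) = n(0)/cos(theta), yields a parameter of exactly zero.
    """
    theta = np.linspace(0.0, np.radians(theta_max_deg), num)
    n = np.array([n_of_theta(t) for t in theta])
    kz = np.real(n * k_vac * np.cos(theta))
    kx = np.real(n * k_vac * np.sin(theta))
    slope = np.gradient(kz, kx)          # numerical d kz / d kx
    return np.sqrt(np.mean(slope ** 2))  # RMS over the sampled angles

k_vac = 2 * np.pi / 913e-9                                    # 913 nm
flat = divergence_parameter(lambda t: 1.6 / np.cos(t), k_vac)  # -> 0
sphere = divergence_parameter(lambda t: 1.6, k_vac)            # -> ~0.07
print(flat, sphere)
```

A spherical (glass-like) index surface gives an RMS close to the tangent of the averaging angle, i.e. about 0.07 over 0-7°, which matches the scale of the hyperbolic-material value quoted above; a flat surface gives zero by construction.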
The isofrequency surfaces of the wave parameters evaluated for TE-polarized plane waves are spherical, since these waves essentially do not interact with the nanobars and "see" only the host medium. The values of the refractive index and impedance for all possible propagation directions of these waves are equal to the corresponding values obtained for TM-polarized waves at θ = 0. For this reason, we do not show the plots of n(θ) and Z(θ) for the TE-polarization.

Propagation of light in the designed metamaterials

To verify the effect of diffraction compensation for light composed of TM-polarized waves, we consider a radially polarized hollow beam at λvac = 913 nm normally incident onto the surface of the metamaterial of Fig. 2(a) from glass. The beam waist is located at the surface and has a radius of 1 µm. In glass, the divergence angle of the beam is 16°. Figure 3(a) shows the longitudinal cross-section of the beam intensity normalized to its peak value. The glass-metamaterial interface is shown by the vertical white line. The beam is seen to propagate in the material essentially free of diffraction over a distance of 50 µm. The reflection loss at the interface is negligibly low (ca. 0.2%), as expected, since Z ≈ Z0 for θ < 16°. The 1/e² power decay length of the beam is evaluated to be ca. 200 µm. To better illustrate the evolution of the beam cross section upon propagation, we show the calculated transverse intensity profiles of the beam at z equal to 100 µm, 200 µm and 300 µm (see Fig. 3(b)). In spite of some divergence of the beam, the diffraction-compensation effect is evident. If, for example, the metamaterial were replaced with glass, the beam radius at z = 300 µm would be about 90 µm instead of the observed 1.5 µm. Optical absorption does not affect the shape of the profile, even though the plane-wave absorption coefficient depends on θ. In conclusion, the designed metamaterial shows excellent diffraction-compensation characteristics for radially polarized optical beams. It can be used not only for the considered hollow beam, but also for higher-order Hermite-Gaussian modes with radial polarization.

In the next step we show that the designed metamaterial can be used to guide not only circularly symmetric, but essentially arbitrary optical images over long distances in the material without distortion. As the material is sensitive to light polarization, we choose to use unpolarized radiation to form the original image at the entrance surface of the material. In the calculations, we model the unpolarized field by incoherently summing two waves with opposite circular polarizations. Alternatively, one could use orthogonal linear polarizations. Circular polarization, however, includes all possible directions of the transverse electric-field vectors, and in principle, if the image is circularly polarized, it also preserves its shape upon propagation. For images formed by unpolarized light, the intensity is calculated as the sum of the intensities of two orthogonal circularly polarized components.
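For readers who want to reproduce the qualitative behavior, here is a minimal scalar, one-transverse-dimension sketch of the plane-wave (angular-spectrum) propagation idea. The actual simulations use the rigorous vectorial method of [34]; the flat dispersion kz = n(0)kvac and the value n0 = 1.6 are idealized assumptions. An unpolarized image would simply be two such propagations with orthogonal polarizations, summed in intensity.

```python
import numpy as np

wavelength = 913e-9                  # vacuum wavelength
n0 = 1.6                             # assumed on-axis index (illustrative)
k_vac = 2 * np.pi / wavelength

x = np.linspace(-20e-6, 20e-6, 2048)
dx = x[1] - x[0]
field0 = np.exp(-(x / 1e-6) ** 2)    # 1 um waist at the input surface

kx = 2 * np.pi * np.fft.fftfreq(x.size, dx)
spectrum = np.fft.fft(field0)

def propagate(z, flat=True):
    """Angular-spectrum step: multiply each plane wave by exp(i kz z)."""
    if flat:
        kz = np.full_like(kx, n0 * k_vac)               # diffraction-compensating
    else:
        kz = np.sqrt((n0 * k_vac) ** 2 - kx ** 2 + 0j)  # ordinary glass-like
    return np.fft.ifft(spectrum * np.exp(1j * kz * z))

for z in (100e-6, 300e-6):
    i_flat = np.abs(propagate(z, True)) ** 2
    i_glass = np.abs(propagate(z, False)) ** 2
    print(z, i_flat.max(), i_glass.max())  # flat kz preserves the profile exactly
```

With a constant kz the propagator is a global phase, so the profile is reproduced exactly; with the spherical dispersion the same beam spreads rapidly.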
Figure 4(a) shows the image of the letter M at the entrance surface of the material (at z = 0; the material corresponds to that in Fig. 2(a)) and the intensity profiles of the field at distances of 1/3 mm, 2/3 mm and 1 mm from the surface. At the input surface, the field within the M has a planar wavefront and a gradually decreasing intensity at the edges. The edges are made smooth in order to keep the angular spectrum of the incident field within the angular spread of the flat part of n(θ). The image is seen to preserve its shape even after a 1 mm propagation distance in the material. If the material were replaced with glass, the thickness of each line in the M at z = 1 mm would be about 200 µm, making the image unrecognizable. The optical power confined within the image decreases somewhat faster upon propagation than the power of the previously considered radially polarized beam, because the contribution of the TE-polarized plane-wave components to the image vanishes at long propagation distances. Nevertheless, the material clearly exhibits the ability to transfer optical images of arbitrary shape without significant distortion. If circularly polarized light is used instead of unpolarized light, the image starts to show some spiral artefacts, visible in Fig. 4(b). Otherwise the image propagates in a similar way to the originally unpolarized image. Two-dimensional images have previously been transferred through metamaterial structures made of effectively infinite metal rods [14], but only over very short distances, on the order of 10 µm, due to high absorption losses. In our metamaterial design, the absorption is remarkably low. For the metamaterial adjusted to operate at λvac = 883 nm, the image transfer characteristics are illustrated in Fig. 4(c). A weak halo around the M at z = 1 mm is explained by a larger curvature of the refractive-index surface of the TM-polarized waves in this case compared to the case of Fig. 4(a). It is noticeable that, as the originally unpolarized image propagates through the material, it becomes more and more polarized because of the diffraction loss of the TE-polarized components. We have found that after a long enough propagation distance, the image becomes composed of the TM-polarized plane waves only, which does not cause any considerable changes in the details of the image. Hence, the metamaterial acts as a polarizer, but in a conceptually new way, making the field become composed of TM-polarized waves.

The propagation-induced gain of the degree of polarization within the image is illustrated in Fig. 4(d). We choose the metamaterial of Fig. 2(c), as it has a wider flat part of n(θ), and make the lines composing the M thinner to increase the divergence of its TE-polarized part. The width of the lines is now only 400 nm, while the wavelength is 793 nm. The intensity profiles of the image are shown for propagation distances of 25 µm and 50 µm. The last two intensity profiles in Fig. 4(d) correspond to the same propagation distance of 50 µm, but the first of them shows the total intensity (It), while the second shows only its y-polarized component (Iy). We see that the image is polarized mostly in the direction perpendicular to the lines of the M, which proves that the image is composed of the TM-polarized waves. The major distortion of the image is the appearance of a double-slit-type interference pattern. This pattern originates from a non-negligible curvature of the n(θ) contour and a stronger divergence of light due to the narrower features of the image.
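The paper does not spell out its polarization metric; assuming the standard two-component definition of the degree of polarization, the polarizer action can be summarized as

\[
P(z) = \frac{I_\mathrm{TM}(z) - I_\mathrm{TE}(z)}{I_\mathrm{TM}(z) + I_\mathrm{TE}(z)} \longrightarrow 1 \quad (z \to \infty),
\]

since I_TE decays by free diffraction out of the image region while I_TM is guided.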
Conclusions

In this paper, we have introduced a simple metamaterial design that provides compensation of optical diffraction for radially polarized optical beams. In addition, we have demonstrated nearly propagation-invariant transfer of essentially arbitrary two-dimensional images created by unpolarized or circularly polarized waves. The wave impedance and the operation wavelength of the material were adjusted by tuning the dimensions of the nanobars and the unit cells. The metamaterial was designed to be approximately impedance-matched to glass in order to minimize the reflection loss at the glass-metamaterial interface.

In the material, images composed of TE-polarized waves diverge as fast as in glass. Therefore, initially unpolarized intensity profiles preserve their shapes and become polarized (composed of TM-polarized waves) upon propagation in the material. Hence, the material acts as a new type of optical polarizer. We foresee applications of the designed metamaterial in optical imaging systems, integrated optics, optical communications, and in microfabricated light sources and detectors. We believe that in the future, diffraction-compensating metamaterials can be further developed to obtain even lower optical absorption loss and higher-quality image transfer.

Fig. 1. Diffraction-compensating silver-nanobar metamaterial. The nanobars are 30 nm thick in the x- and y-directions and 130 nm long. They form a tetragonal lattice in glass and suppress optical diffraction for light propagating along the Poynting vector S. The unit-cell dimensions are Λx = Λy = 120 nm and Λz = 200 nm.

Fig. 2. Refractive index n and normalized impedance Z/Z0, where Z0 is the impedance of glass, plotted in polar coordinates as functions of the plane-wave propagation angle θ for three different metamaterials. The black solid and red dashed curves represent the real and imaginary parts of the quantities, respectively. The imaginary parts are multiplied by factors of 100 and 10 as shown for each plot separately. The angles unavailable for waves incident from glass are marked by the gray sectors. The parameter values by which the materials in (a), (b) and (c) differ from each other are as follows: (a) Lx = Ly = 30 nm, Λz = 200 nm and λvac = 913 nm, (b) Lx = Ly = 30 nm, Λz = 220 nm and λvac = 883 nm, and (c) Lx = Ly = 40 nm, Λz = 200 nm and λvac = 793 nm.

Fig. 3. The longitudinal (a) and transverse (b) intensity profiles of a radially polarized hollow optical beam at λvac = 913 nm focused onto the surface of a diffraction-compensating metamaterial (white line). The cross sections in (b) correspond to z-coordinates of 0, 100 µm, 200 µm and 300 µm. The intensity is normalized to its maximum value at z = 0.

Fig. 4. Propagation of an optical image of the letter M in three different diffraction-compensating metamaterials. The z-coordinates of the presented intensity profiles are shown above each profile. The intensity is normalized to its maximum value at z = 0. In (a) and (b), the material corresponds to that in Fig. 2(a) and the image is originally formed by (a) unpolarized and (b) left-handed circularly polarized light. In (c), the material corresponds to that in Fig. 2(b) and the image is originally unpolarized. In (d), the material is as in Fig. 2(c) and the image is unpolarized. The last two pictures of case (d) show the total intensity (It) and the intensity of the y-polarized part of the image (Iy) at the same coordinate z = 50 µm.
Near-Infrared Organic Phototransistors with Polymeric Channel/Dielectric/Sensing Triple Layers

A new type of near-infrared (NIR)-sensing organic phototransistor (OPTR) was designed and fabricated by employing a channel/dielectric/sensing (CDS) triple-layer structure. The CDS structures were prepared by inserting poly(methyl methacrylate) (PMMA) dielectric layers (DLs) between poly(3-hexylthiophene) (P3HT) channel layers and poly[{2,5-bis-(2-octyldodecyl)-3,6-bis-(thien-2-yl)-pyrrolo[3,4-c]pyrrole-1,4-diyl}-co-{2,2′-(2,1,3-benzothiadiazole)-5,5′-diyl}] (PODTPPD-BT) top sensing layers. Two different thicknesses of PMMA DLs (20 nm and 50 nm) were applied to understand the effect of DL thickness on the sensing performance of the devices. The results showed that the NIR-OPTRs with the CDS structures operated in a typical p-channel mode (note: consistent with the hole mobility and the p-channel behavior reported throughout the body) with a hole mobility of ca. 0.7-3.2 × 10⁻⁴ cm²/Vs in the dark and delivered gradually increasing photocurrents upon illumination with NIR light (905 nm). As the NIR light intensity increased, the threshold voltage shifted noticeably, and the resulting transfer curves showed a saturation tendency in terms of curve shape. The operation of the NIR-OPTRs with the CDS structures was explained by a sensing mechanism in which the excitons generated in the PODTPPD-BT top sensing layers induce charges (holes) in the P3HT channel layers via the PMMA DLs. Optically modulated and reflected NIR light could be successfully detected by the present NIR-OPTRs with the CDS structures.

Particular attention has recently been paid to near-infrared (NIR) light-sensing OPTRs, since NIR technology has become one of the most important cores of advanced control and sensing systems such as night vision for cars and airplanes, light detection and ranging (LiDAR) sensors for autonomous cars and drones, probe beams for biomedical devices and diagnostics, and optical communications [18][19][20][21][22][23][24]. However, NIR-absorbing organic materials are very rare because of the difficulty of synthesizing compounds with the required narrow energy band gap of ca. 0.89-1.65 eV, which corresponds to the wavelength (λ) range of ca. 750-1400 nm [25][26][27][28][29][30][31]. In this regard, conjugated polymers have been considered a viable chemical platform for NIR-absorbing materials, since their energy band gaps can be narrowed by combinations of electron-donating and electron-accepting comonomers [32][33][34][35][36][37][38]. In the basic structure of OPTRs, which is actually identical to that of organic field-effect transistors (OFETs), the channel layers must simultaneously play a sensing role [1]. However, some conjugated polymers do not have sufficient charge-carrier mobility for the operation of OFETs even though they deliver good NIR-absorbing characteristics [39]. On this account, NIR-absorbing conjugated polymers have been applied as a gate-sensing layer (GSL) in an advanced OPTR structure [40]. The GSL concept can expand the choice of NIR-absorbing organic materials, regardless of whether they are semiconductors or not. Our recent work has demonstrated that OPTRs with the GSL structure can be properly operated by applying top channel layers, with a sensing efficiency of 40-60% (λ = 780-1000 nm) compared to the theoretical maximum photoresponsivity [41].
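The band-gap window and wavelength range quoted above are two sides of the same photon-energy relation E = hc/λ, conveniently written as

\[
E\,[\mathrm{eV}] \approx \frac{1240}{\lambda\,[\mathrm{nm}]}:
\qquad
\frac{1240}{750\ \mathrm{nm}} \approx 1.65\ \mathrm{eV},
\qquad
\frac{1240}{1400\ \mathrm{nm}} \approx 0.89\ \mathrm{eV}.
\]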
However, further progress in device design on the microscale and/or nanoscale is required for the advancement of NIR-OPTRs that can be applied in various system environments [42][43][44][45]. Here, we demonstrate a new type of NIR-OPTR which consists of polymeric channel/dielectric/sensing (CDS) triple layers in a transistor geometry with a bottom gate and bottom source/drain contacts. The CDS structure was prepared by sequential spin-coating of poly(3-hexylthiophene) (P3HT), poly(methyl methacrylate) (PMMA), and poly[{2,5-bis-(2-octyldodecyl)-3,6-bis-(thien-2-yl)-pyrrolo[3,4-c]pyrrole-1,4-diyl}-co-{2,2′-(2,1,3-benzothiadiazole)-5,5′-diyl}] (PODTPPD-BT) on the silver electrode-deposited PMMA gate-insulating layers. The PMMA dielectric layer (DL) in the middle of the CDS structure was designed to play a dual role: protecting the channel layers beneath it during spin-coating, and mediating the dipole induction by photogenerated excitons (charges). To investigate the influence of the PMMA DLs on the sensing performance, two different DL thicknesses (20 and 50 nm) were employed for the fabrication of the CDS structures. For practical applications, the sensing performance of the NIR-OPTRs with the CDS structures was examined under on/off modulation of NIR light and for NIR light reflected (scattered) from an object.

Materials and Solutions

The P3HT polymer (weight-average molecular weight = 50 kDa) was purchased from Solaris Chem (USA). The PODTPPD-BT polymer (weight-average molecular weight = 8.7 kDa, polydispersity index (PDI) = 1.34) was synthesized via a Suzuki coupling reaction using a palladium catalyst, as reported in our previous work [41]. The P3HT solutions were prepared employing toluene (Sigma-Aldrich, St. Louis, MO, USA) as a solvent at a solid concentration of 26 mg/mL, while the PODTPPD-BT solutions were prepared using chlorobenzene (Sigma-Aldrich, USA) at a solid concentration of 15 mg/mL. The PMMA polymer (weight-average molecular weight = 120 kDa) was purchased from Sigma-Aldrich (USA). For the preparation of the gate-insulating layers, the PMMA solutions (80 mg/mL) were made using chlorobenzene as a solvent. To form the dielectric layers (DLs), n-butyl acetate was used as a solvent for the PMMA solutions at two different concentrations (3.6 and 9 mg/mL). All prepared solutions were subjected to continuous stirring on a hot plate at 60 °C for 24 h.

Thin Film and Device Fabrication

The OFETs with a bottom-gate, bottom-source/drain contact structure were fabricated using patterned indium-tin-oxide (ITO)-coated glass substrates (ca. 20 Ω/cm²). The ITO-glass substrates were immersed in acetone and subjected to ultrasonication for 30 min. The initially cleaned ITO-glass substrates were then cleaned and rinsed with isopropyl alcohol using the same ultrasonic cleaner, followed by drying with a nitrogen gas flow. The dried ITO-glass substrates were subjected to a 20 min ultraviolet (UV)/ozone treatment using a UV/ozone cleaner (50 mW/cm², AC-6, AHTECH LTS Co., Ltd., Anyang-si, Gyeonggi-do, Korea). On top of the ITO-coated sides of the treated substrates, the PMMA solutions (solvent: chlorobenzene) were spun at 2000 rpm for 60 s, leading to a 450 nm-thick gate-insulating layer. The PMMA layer-coated ITO-glass substrates were thermally treated at 120 °C for 60 min.
After transferring these samples to a vacuum chamber equipped inside an argon-filled glovebox, the 60 nm-thick silver (Ag) source/drain electrodes were deposited on the PMMA layers by thermal evaporation. Note that a shadow mask defining a channel length of 70 µm and a channel width of 2 mm was used during the deposition of the Ag electrodes. The Ag electrode-deposited samples were moved out and thermally treated at 120 °C for 30 min. Next, the P3HT solutions were spun on top of the Ag electrode-deposited samples at 1500 rpm for 30 s, followed by soft-baking at 120 °C for 30 min. To form the DLs, the two PMMA solutions (solvent: n-butyl acetate) were spun on the P3HT layers at 2000 rpm for 60 s and then soft-baked at 120 °C for 30 min. The thickness of the PMMA DLs was 20 nm and 50 nm for the 3.6 mg/mL and 9 mg/mL solutions, respectively. Finally, the PODTPPD-BT layers were formed on the PMMA DLs by dropping the PODTPPD-BT solutions while spinning the DL-coated sample substrates at 1500 rpm for 60 s. The resulting thickness of the PODTPPD-BT layers was 50 nm. All fabricated devices were stored inside the argon-filled glovebox before measurement to minimize possible attack by moisture and oxygen.

Measurement

A surface profilometer (Dektak XT, Bruker, Billerica, MA, USA) was used to measure film thickness. A UV-visible-NIR spectrometer (Lambda 750, PerkinElmer, Waltham, MA, USA) was employed to measure the optical absorption spectra of the film samples. The channel area of the OFETs was examined on a microscale using an optical microscope (SV-55, Sometech, Seoul, Korea). The transistor performances were measured using a semiconductor parameter analyzer (2636B, Keithley, Cleveland, OH, USA). For the measurement of phototransistor performances, the channel area of the devices was illuminated with a laser diode (905 nm, VD9030V, Delos Laser, Seoul, Korea). All devices and laser diodes were placed inside a dark metal box to avoid any influence of ambient light. The incident light intensity (PIN) of the laser diode was measured using a calibrated photodiode (818-UV, Newport, Irvine, CA, USA) and adjusted using a neutral density filter set (CVI Melles-Griot SP Pte. Ltd., Singapore). For the measurement of reflected and scattered NIR light, the 905 nm light from the laser diode was directed at an object (optical post holder, PH-3, NAMIL Optical Instrument Co.), and the resulting scattered light was detected by the OPTRs fabricated in this work.

Results and Discussion

As illustrated in Figure 1a, the present OPTRs feature polymeric channel/dielectric/sensing (CDS) triple layers composed of multi-stacked P3HT/PMMA/PODTPPD-BT polymers. The P3HT channel layers can be protected by the PMMA dielectric layers from solvent attack during spin-coating of the PODTPPD-BT solution (solvent: chlorobenzene, CB) for the preparation of the top sensing layers. Note that PMMA dissolved in n-butyl acetate only at a high temperature (>70 °C), and n-butyl acetate did not dissolve the P3HT layers at all. Similarly, the thick (450 nm) PMMA gate-insulating layers were found not to be seriously affected by the CB solvent, given the short spin-coating time (30 s) used for the preparation of the P3HT channel layers. In order to minimize any possible influence on the very thin (20-50 nm) PMMA DLs, the PODTPPD-BT solutions were dropped while spinning the PMMA DL-coated samples, so that the contact time of the CB solvent could be as short as 1 or 2 s.
The resulting CDS structures were prepared on quartz substrates for the examination of their optical absorption properties. As shown in Figure 1b, the prepared CDS structures (P3HT/PMMA/PODTPPD-BT) delivered a broadband absorption covering the whole visible range and the NIR region up to 1100 nm, even though the pristine P3HT and PODTPPD-BT layers could absorb only limited visible and NIR regions, respectively. The inset photographs in Figure 1b provide eye-catching evidence of the well-prepared CDS structures. This result confirms that the PMMA DLs successfully played their role in protecting the P3HT channel layers from the CB solvent during spin-coating of the PODTPPD-BT top sensing layers.

The performances of the present OPTRs in the dark were measured to understand the basic characteristics of the transistors with the CDS structures. As observed from the output curves in Figure 2a, the devices showed typical p-channel transistor behavior with a clear dependency of the drain current (ID) on the gate voltage (VG) at a fixed drain voltage (VD). Interestingly, the level of drain current was relatively lower for the devices with the 20 nm-thick PMMA DLs than for those with the 50 nm-thick DLs. A similar trend in the drain current difference was measured for the transfer curves (see Figure 2b). The sweeping test in the dark revealed almost no hysteresis in the output curves but very slight hysteresis in the transfer curves (see the inset graphs in Figure 2). In addition, the off current was considerably poorer for the devices with the 20 nm-thick PMMA DLs than for the devices with the 50 nm-thick DLs (refer to the gate current (IG) for each case).
The poor dark performances of the devices with the 20 nm-thick PMMA DLs can be attributed to the imperfect protection provided by the 20 nm-thick DLs against the attack of the CB solvent during deposition of the PODTPPD-BT top sensing layers, considering the higher probability of pinhole generation in a thinner film than in a thicker one. In more detail, some of the CB solvent might permeate through pinholes in the 20 nm-thick DLs and cause partial damage to the P3HT channel layers, leading to such poor device performance. From the ID^0.5-VG curves, the hole mobility (µh) of the present OPTRs in the dark was calculated as ~0.7 × 10⁻⁴ cm²/Vs and ~3.2 × 10⁻⁴ cm²/Vs for the 20 nm-thick and 50 nm-thick PMMA DLs, respectively (see Table 1). The high threshold voltage (VTH), shifted in the positive voltage direction, may reflect the existence of interfacial charges formed in the course of the multi-layer deposition processes in the present device structures [46].

Next, the OPTRs with the 20 nm-thick and 50 nm-thick PMMA DLs were subjected to an examination of their photosensing characteristics under illumination with NIR light using a high-power laser diode of the kind used in practical LiDAR applications (wavelength λ = 905 nm). First, the NIR sensing characteristics were measured by adjusting the output of the laser diode to a lower light density level of ca. 2.3-742 µW/cm². As shown in Figure 3a, for both the 20 nm-thick and 50 nm-thick PMMA DLs, the output curves at VG = −30 V were gradually shifted toward a (negatively) higher drain current with increasing incident NIR intensity (PIN). This result indicates that the present OPTRs with the CDS structures did properly work and respond to the incident NIR light. Because the P3HT channel layers and PMMA layers do not absorb any NIR light, it is clear that the PODTPPD-BT top sensing layers absorb the incident NIR light, and the generated excitons might act as a floating gate (external bias) inducing charges in the P3HT channel layers via the PMMA DLs. A similar working mechanism has been reported for liquid crystals (LCs) with a high dielectric constant in our previous reports [46][47][48]. It is also worth noting that the drain current difference became larger at higher drain voltage. This may directly reflect that more of the charges (holes) in the P3HT channel layers, induced by the photogenerated excitons in the PODTPPD-BT top sensing layers, could be transported at higher drain voltages. Further investigation of the transfer curves at VD = −30 V reveals gradual shifts in the threshold voltage with the incident NIR light intensity (see Figure 3b). Note that the intrinsic off-current characteristics were not changed upon NIR illumination, as the poor off current of the devices with the 20 nm-thick PMMA DLs persisted for all PIN cases. Considering that a threshold voltage shift in principle indicates charge-trapping phenomena in devices, it is supposed that the charges induced by the excitons generated in the PODTPPD-BT top sensing layers might be trapped at the layer interfaces of the CDS structures. The threshold voltage shift toward positive voltage reflects that the trapped charges in the CDS structure mainly induced holes in the P3HT channel layers.
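As a side note, the saturation-regime extraction behind the quoted µh values can be sketched in a few lines. This is a hedged illustration: W, L and the insulator thickness are taken from the text, but the PMMA relative permittivity (≈3.6) and the synthetic data are assumptions.

```python
import numpy as np

EPS0 = 8.854e-12          # vacuum permittivity, F/m
W, L = 2e-3, 70e-6        # channel width and length from the text (m)
d_ins = 450e-9            # PMMA gate-insulator thickness (m)
Ci = 3.6 * EPS0 / d_ins   # gate capacitance per area (assumed eps_r ~ 3.6)

def extract_mobility(vg, i_d):
    """Fit sqrt(|ID|) vs VG in saturation: ID = (W*Ci/(2L))*mu*(VG-VTH)^2."""
    root = np.sqrt(np.abs(i_d))
    slope, intercept = np.polyfit(vg, root, 1)
    mu = 2 * L * slope ** 2 / (W * Ci)   # mobility in m^2/Vs
    vth = -intercept / slope             # threshold voltage
    return mu * 1e4, vth                 # report mobility in cm^2/Vs

# Synthetic p-channel example with mu = 3.2e-4 cm^2/Vs and VTH = +5 V:
vg = np.linspace(-30, -10, 21)
i_d = -(W * Ci / (2 * L)) * 3.2e-8 * (vg - 5.0) ** 2
print(extract_mobility(vg, i_d))         # ~ (3.2e-4, 5.0)
```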
To understand the detailed trend, the device parameters were plotted as a function of the incident NIR light intensity. As shown in the top panel of Figure 4a, the overall drain current increased almost linearly with the incident NIR light intensity for both cases (20 nm-thick and 50 nm-thick PMMA DLs). After removing the dark-current portion, the linearity was still kept with the incident NIR light intensity (see the bottom panel of Figure 4a). This result implies that a similar portion (ratio) of the charges induced by the excitons generated in the PODTPPD-BT top sensing layers was transported between the source and drain electrodes, irrespective of the incident NIR light intensity. Taking into account the linearly increasing trend of the threshold voltage (see the top panel of Figure 4b), the ratio of trapped charges might increase proportionally with the incident NIR light intensity for both devices. However, a close look at the slopes of the net photocurrent (ΔID) as well as the net threshold voltage shift (ΔVTH) suggests that, as the incident NIR light intensity increased, the ratio of charge transport to charge trapping became higher for the OPTRs with the thinner (20 nm) PMMA DLs. In other words, more charges could be trapped for the OPTRs with the thinner (20 nm) PMMA DLs at higher incident NIR light intensity. This result gives a rough indication of the degree of film perfection (conversely, of pinhole-like defects) in the two different thicknesses of PMMA DLs. Given that, over the whole range of incident NIR light intensity, the net photocurrent was always higher for the OPTRs with the thinner PMMA DLs than for those with the thicker DLs (see the bottom panel of Figure 4a), the charges induced via the DLs from the excitons generated in the PODTPPD-BT top sensing layers might follow the basic capacitance-thickness relation, which defines higher capacitances at lower thicknesses [49].
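To put a number on that capacitance-thickness argument, assuming a textbook PMMA relative permittivity of εr ≈ 3.6 (not given in the text), the parallel-plate estimate C′ = ε0 εr / d gives

\[
C'_{20\,\mathrm{nm}} \approx 0.16\ \mu\mathrm{F/cm^2},
\qquad
C'_{50\,\mathrm{nm}} \approx 0.064\ \mu\mathrm{F/cm^2},
\]

i.e. the thinner DL couples the photogenerated excitons to the channel roughly 2.5 times more strongly, consistent with the higher net photocurrent observed for the 20 nm devices.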
For practical applications, a stronger NIR light with a power density of ca. 2.8-3.8 mW/cm² was applied to the present OPTRs with the CDS structures. As shown in Figure 5a, the output curves were largely shifted with the incident NIR intensity for both devices. In addition, a higher drain current difference was measured at higher drain voltage, similarly to what was observed for the low-power NIR irradiation in Figure 3. This result supports the notion that the present OPTRs with the CDS structures function properly for the detection of high-power NIR light as well. Note that the drain current measured at PIN = 3.8 mW/cm² was still higher for the OPTRs with the 20 nm-thick PMMA DLs than for those with the 50 nm-thick DLs. In particular, all the transfer curves in Figure 5b showed a largely different shape from those in Figure 3b, as the drain current in the positive voltage region did not drop steeply. This can be ascribed to the considerably shifted threshold voltages caused by the trapped charges under the high-power NIR illumination. As the incident NIR intensity increased from PIN = 2.8 mW/cm² to PIN = 3.8 mW/cm², the transfer curves gradually shifted toward a higher drain current. Note that the sweeping test revealed almost no hysteresis in the output curves but very slight hysteresis in the transfer curves (see the inset graphs in Figure 5).
The detailed changes in drain current and threshold voltage were analyzed and plotted as a function of the high-power incident NIR intensity. As shown in the top panel of Figure 6a, the overall drain current showed a linearly increasing trend with the high-power incident NIR intensity between PIN = 2.8 mW/cm² and PIN = 3.8 mW/cm². The net photocurrent (ΔID) increased linearly with the high-power incident NIR intensity irrespective of the thickness of the DLs (see the bottom panel of Figure 6a), while the increasing slope was slightly higher for the OPTRs with the 20 nm-thick PMMA DLs than for those with the 50 nm-thick PMMA DLs. This trend was in accordance with the result in Figure 4a. However, as observed from Figure 6b, the slope of the threshold voltages was almost similar for both devices, which is different from the result in Figure 4b. Therefore, it is considered that the degree of threshold voltage shift might be less sensitive to the incident NIR intensity in the high-power regime because the interfaces and/or channels could be considerably saturated by the induced charges (holes) due to the high population of excitons generated in the PODTPPD-BT top sensing layers. Here, it is worth noting that the net threshold shift (ΔVTH) was still higher for the OPTRs with the 20 nm-thick PMMA DLs than for those with the 50 nm-thick PMMA DLs.
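The paper reports photocurrent trends rather than responsivity values; for completeness, here is a hedged sketch of how a responsivity figure would follow from such data, taking the nominal channel area as the illuminated area (an effective illuminated area would change the absolute numbers).

```python
# Nominal channel dimensions from the text, in cm.
W, L = 0.2, 7e-3
area = W * L                        # 1.4e-3 cm^2

def responsivity(delta_id_amps, p_in_w_per_cm2):
    """R = net photocurrent / incident optical power on the channel (A/W)."""
    return delta_id_amps / (p_in_w_per_cm2 * area)

# Example with assumed numbers (not reported values): a 100 nA net
# photocurrent at PIN = 742 uW/cm^2 would correspond to ~0.096 A/W.
print(responsivity(1e-7, 742e-6))
```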
Finally, the present OPTRs with the CDS structures were tested for the direct or indirect detection of NIR (905 nm) light optically modulated with a constant on/off frequency. As shown in Figure 7, the drain current quickly increased when the modulated NIR light was incident on the OPTRs, irrespective of the thickness of the DLs. However, there was a delayed increase after the initial quick jump for both devices, which can be attributed to the charging behavior of the devices when the OPTRs were exposed to the NIR light for such a long time. When the NIR light was blocked (the off phase of the modulation), the drain current quickly dropped, leaving a marginal drain-current tail. The slope of the tail drain-current signals was slightly higher for the OPTRs with the 50 nm-thick PMMA DLs than for those with the 20 nm-thick PMMA DLs, indicative of more charge-trapping behavior in the thicker PMMA DLs. As illustrated in Figure 8a, the reflected NIR light was also detected by the present OPTRs. When the moving wheel was slowly rotated, the NIR light was reflected or scattered by the wheel frame, and then some part of the reflected NIR light could be incident on the OPTR mounted inside the sample holder. As shown in Figure 8b, the OPTRs could successfully detect the reflected NIR light irrespective of the thickness of the PMMA DLs.
This result implies that the present OPTRs can potentially be used as actual sensors for LiDAR systems, which must properly detect NIR light reflected from an object [22].

Conclusions

The NIR-OPTRs with the channel/dielectric/sensing (CDS) triple layers were successfully fabricated by applying two different thicknesses of the PMMA DLs. The devices with the CDS structures showed typical p-channel transistor performances in the dark irrespective of the DL thickness, while their hole mobility was measured in the range of 0.7-3.2 × 10⁻⁴ cm²/Vs. Upon illumination with low-power NIR light (905 nm), the drain current of the devices gradually increased with the NIR light intensity in both the output and transfer curves. In addition, the threshold voltage in the transfer curves shifted proportionally with the intensity of the low-power NIR light. A similar gradual drain current increase was measured upon illumination with the higher-power NIR light, while the shape of the transfer curves was almost identical for the NIR-OPTRs with the same DL thickness. The net photocurrent was higher for the NIR-OPTRs with the 20 nm-thick DLs than for those with the 50 nm-thick DLs, which can be explained by the basic capacitance-thickness relation defining higher capacitances at lower thicknesses. These results confirm that the CDS structures in the present devices actually function as a sensing medium for NIR light via a charge-induction mechanism that forms charges (holes) in the P3HT channel layers through the PMMA DLs from the excitons generated in the PODTPPD-BT top sensing layers. The optimized NIR-OPTRs with the CDS structures exhibited stable sensing performances upon on/off modulation of NIR light and could sense the reflected (scattered) NIR light from an object. This test supports that the present NIR-OPTRs with the CDS structures are promising as a potential NIR sensor for LiDAR systems.
These results confirm that the CDS structures in the present devices do actually function as a sensing medium for NIR light via a charge induction mechanism that forms charges (holes) in the P3HT channel layers through the PMMA DLs from the excitons generated in the PODTPPD-BT top sensing layers. The optimized NIR-OPTRs with the CDS structures exhibited stable sensing performances upon on/off modulation of NIR light and could sense the reflected (scattered) NIR light from an object. This test supports that the present NIR-OPTRs with the CDS structures are promising as a potential NIR sensor for LiDAR systems.

Conflicts of Interest: The authors declare no conflict of interest.
DAMA: a method for computing multiple alignments of protein structures using local structure descriptors

Abstract

Motivation: The well-known fact that protein structures are more conserved than their sequences forms the basis of several areas of computational structural biology. Methods based on structure analysis provide more complete information on residue conservation in evolutionary processes. This is crucial for the determination of evolutionary relationships between proteins and for the identification of recurrent structural patterns present in biomolecules involved in similar functions. However, algorithmic structural alignment is much more difficult than multiple sequence alignment. This study is devoted to the development and applications of DAMA, a novel effective environment capable of computing and analyzing multiple structure alignments.

Results: DAMA is based on local structural similarities, using local 3D structure descriptors, and thus accounts for the nearest-neighbor molecular environments of aligned residues. It is constrained neither by protein topology nor by its global structure. DAMA is an extension of our previous study (DEDAL), which demonstrated the applicability of local descriptors to pairwise alignment problems. Since the multiple alignment problem is NP-complete, an effective heuristic approach has been developed without imposing any artificial constraints. The alignment algorithm searches for the largest, consistent ensemble of similar descriptors. The new method is capable of capturing most of the biologically significant similarities present in canonical test sets and is discriminatory enough to prevent the emergence of larger, but meaningless, solutions. Tests performed on the test sets, including protein kinases, demonstrate DAMA's capability of identifying equivalent residues, which should be very useful in discovering the biological nature of protein similarity. Performance profiles show the advantage of DAMA over other methods, in particular when using a strict similarity measure QC, which is the ratio of correctly aligned columns, and when applying the methods to more difficult cases.

Availability and implementation: DAMA is available online at http://dworkowa.imdik.pan.pl/EP/DAMA. Linux binaries of the software are available upon request.

Supplementary information: Supplementary data are available at Bioinformatics online.

Introduction

Structures of proteins are conserved more than their sequences. It is believed that methods based on analysis of structure, rather than sequence analysis, provide more complete information on residue conservation in evolutionary processes. This knowledge is crucial both for the determination of evolutionary relationships between proteins and for the identification of structural patterns involved in similar functions. Unfortunately, aligning structures is a computationally much more demanding task than aligning sequences. Nevertheless, structure alignment is commonly used in protein classification (e.g. Fox et al., 2014; Orengo et al., 1997) or in structural motif recognition (Singh and Saha, 2003). One of the most challenging tasks in computational biology is multiple structure alignment (MStA). There are several methods for computing MStA, like CBA (Ebert and Brutlag, 2006), Matt (Menke et al., 2008), MASS (Dror et al., 2003), MAMMOTH-Multi (Lupyan et al., 2005), MultiProt (Shatsky et al., 2004), MUSTANG (Konagurthu et al., 2006) or POSA (Ye and Godzik, 2005). Some of them were reviewed in Berbalk et al. (2009).
Since then, some new methods have been developed, like MAPSCI (Ilinkin et al., 2010), MISTRAL (Micheletti and Orland, 2009), 3DCOMB (Wang et al., 2011), msTALI (Shealy and Valafar, 2012), mTM-align (Dong et al., 2018) or Caretta (Akdel et al., 2020). The MStA problem can be formulated in several ways and existing algorithms differ by the type of alignments they compute. We distinguish methods in which structures are treated as rigid bodies (CBA, 3DCOMB) from methods that allow a certain level of flexibility (Matt, POSA, MASS). Some algorithms may allow for circular permutations and other rearrangements (MASS). Most approaches either gradually join pairwise structural alignments into one MStA (i.e. perform progressive alignment), or align all structures simultaneously. DAMA has been designed to bridge a gap between these approaches and ensure robust performance while imposing the least constraints on the solution. Quality of alignments is achieved by ensuring that the entire molecular environment of the aligned residues is taken into account. There are no constraints on the global superposition, which enables alignment of structures with significant spatial distortions (e.g. a different arrangement of domains connected by a flexible linker). Circular permutations and segment swaps are allowed as well. The concept of progressive alignment is applied in the initial phase for generating several alignments to be further improved by an evolutionary algorithm. Selected algorithms have already been implemented and optimized for CUDA graphical processors (Daniluk et al., 2019).

Approach

Finding optimal multiple alignments is a computationally difficult problem. However, it has been proven that a seemingly simpler problem of computing a multi-alignment as a consensus of given pairwise alignments is intractable as well (Daniluk and Lesyng, 2014). This is due to the fact that not all sets of pairwise alignments may constitute a valid multiple alignment. It is disallowed for two different residues from a single structure to belong to the same column in the alignment. Nevertheless, it is easy to construct a set of pairwise alignments which would lead to such a condition if merged into a multiple alignment (see Fig. 1). Therefore, computing a multiple alignment from a set of pairwise alignments would require identifying all such conflicts, and finding an optimal way of removing them. In this study, we tackle the multi-alignment problem in its most generic form. We aim at finding multiple alignments that may contain circular permutations, segment swaps and other sequential rearrangements, as well as structural deformations. The only constraint we enforce on the multiple alignments is the similarity of local physico-chemical environments, which are characterized using molecular fragments called local descriptors of protein structure (Daniluk and Lesyng, 2011b; Hvidsten et al., 2009). A local descriptor is a small part of a structure that can be viewed as a residue-attached local environment. In principle, it is possible to build a descriptor for every residue of a given protein. This process begins by identifying all residues in contact with the descriptor's central residue. Elements are then built by including two additional residues along the main-chain, both upstream and downstream of each contact residue. Any overlapping elements are concatenated into single segments. Thus, a descriptor is typically built of several disjoint pieces of the main chain (Fig. 2); a minimal sketch of this construction is given below.
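The following sketch is illustrative only and is not the DAMA implementation; the contact predicate is assumed here as a black box (its geometric definition is given in the text below), and the residue indexing is invented for illustration.

```python
# Sketch of local-descriptor construction as described above (illustrative only).

def build_descriptor(center, n_residues, in_contact):
    """Build the descriptor around residue `center`.

    `in_contact(i, j)` is assumed to return True when residues i and j
    are in contact. Returns a sorted list of residue indices.
    """
    # 1. Residues in contact with the central residue (the center included).
    contact_centers = [j for j in range(n_residues)
                       if j == center or in_contact(center, j)]

    # 2. Each contact residue spawns a 5-residue "element": the residue itself
    #    plus two sequential neighbors on each side, clipped at chain ends.
    residues = set()
    for c in contact_centers:
        residues.update(range(max(0, c - 2), min(n_residues, c + 3)))

    # 3. Overlapping elements merge automatically in the set representation;
    #    contiguous runs of indices correspond to the descriptor's segments.
    return sorted(residues)

def segments(indices):
    """Group sorted residue indices into maximal contiguous segments."""
    segs, start = [], None
    for i, idx in enumerate(indices):
        if start is None:
            start = idx
        if i + 1 == len(indices) or indices[i + 1] != idx + 1:
            segs.append((start, idx))
            start = None
    return segs
```

Contiguous runs of residue indices in the returned set correspond to the descriptor's segments, so the concatenation of overlapping elements described above falls out of the representation for free.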
Such a descriptor reflects approximately the range of local, most significant physico-chemical interactions between its central residue and the surrounding amino acids. This constitutes a significant difference compared to the single segments so frequently used in other studies. Single segments reflect features along the main-chain exclusively, while descriptors are spatial, and thus add a three-dimensional context to local properties of a protein molecule. In the preliminary stage of computing, all pairs of similar descriptors belonging to the compared structures are identified. Such pairs constitute small, local pairwise alignments, which can be viewed as building blocks for a larger alignment. In the following stages only alignments which comprise such descriptor pairs are considered. It has already been established that this approach yields remarkably accurate (also in terms of low RMSD) pairwise alignments (Daniluk and Lesyng, 2011b). The main difficulty in building a multiple alignment from descriptor alignments results from the fact that, contrary to the pairwise case, it is not enough to select a set of descriptor pairs in which every two of them form a valid alignment. This condition does not prevent conflicts of the kind mentioned above. Therefore, an approach based on computing maximal cliques cannot be straightforwardly used in this case. [It would be possible to search for maximal cliques, and then try to resolve all conflicts. However, the problem of resolving conflicts constitutes the most difficult (and intractable) part of the multiple alignment. Furthermore, because resolving conflicts shrinks a clique, such a method would have to take into consideration all maximal cliques, not just the largest ones.] We have overcome this problem by building an alignment incrementally. If we divide the set of structures into two sets, an optimal multiple alignment is an optimal alignment of multiple alignments of these subsets (alignments of these subsets do not have to be optimal). Aligning two multiple alignments may lead to conflicts as well, but their number is expected to be low when the similarity within these alignments is high. This principle leads to an application of a neighbor join (NJ) method. We use a randomized version of the NJ algorithm to generate a set of specimens to be further improved by an evolutionary algorithm.

Fig. 1. Example of inconsistency between pairwise alignments. Each row of dots presents residues in a protein structure. Arcs between dots denote pairwise alignment between residues. Three alignments between structures S1, S2 and S3 cannot be combined into a single multiple alignment. A gap at position 4 in S2-S3 causes ambiguity which makes several residues in e.g. S2 transitively (via S1 and S3) aligned.

Local descriptors of protein structure

Descriptors have already been applied in several studies (Björkholm et al., 2009; Daniluk and Lesyng, 2011b, 2014; Drabikowski et al., 2007; Hvidsten et al., 2003, 2009; Strömbergsson et al., 2006, 2008). Here, we use a version of the local descriptor methodology described in Daniluk and Lesyng (2011a,b, 2014). Every descriptor is built around its central residue. It contains residues that are in contact with the central residue (i.e. dα ≤ 6.5 Å, or dC ≤ 8 Å and dα − dC ≥ 0.75 Å, where dα and dC denote distances between Cα atoms and geometrical centers of side-chains, respectively). In the second step, elements around the selected residues are built by taking four sequential neighbors, two on each side. Finally, overlapping elements are merged into segments. For more details, see Daniluk and Lesyng (2011a, 2014). It should be noted that this kind of similarity is totally sequence-independent. Because elements are the smallest indivisible blocks, it is possible that one segment will be aligned to two smaller ones which are a few residues apart.

Fig. 2. An exemplary descriptor built around the residue MET70 of 1lg7A contains nine contacts (dashed lines) between its central amino acid (red) and residues forming the centers of its elements. Some of the elements overlap, forming longer segments [in particular fragments of two β-strands (blue and yellow) and a fragment of an α-helix (green)]. Altogether, this descriptor consists of five continuous segments.

Pairwise alignments

Once a set of pairs of similar descriptors is computed, one can define a graph whose nodes correspond to descriptor pairs. It can be easily proven that the largest alignment of two structures corresponds to a maximal clique in such a graph. Cliques in the graph are identified using a heuristic approach based on the Motzkin-Straus theorem, by iteratively searching for a maximum of a certain quadratic form which corresponds to the largest clique in the graph (Daniluk et al., 2019). Possible conflicts are identified and then resolved by a branch-and-bound algorithm searching for a set of descriptor pairs whose removal should result in the smallest possible reduction of the alignment size.
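By the Motzkin-Straus theorem, a graph with adjacency matrix A and clique number ω(G) satisfies max x^T A x = 1 − 1/ω(G), where the maximum is taken over the standard simplex, so large cliques can be sought by maximizing this quadratic form, for instance with replicator dynamics. The sketch below is illustrative only; it is not the CUDA-MS implementation, and the final clique-extraction step is a simple greedy heuristic added here for completeness.

```python
import numpy as np

def motzkin_straus_clique(adj, iters=2000, tol=1e-12):
    """Heuristic max-clique search via the Motzkin-Straus quadratic form.

    `adj` is a symmetric 0/1 numpy array with zero diagonal. Replicator
    dynamics x_i <- x_i * (A x)_i / (x^T A x) climb x^T A x on the simplex;
    the support of a local maximizer indicates a clique.
    """
    n = adj.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        ax = adj @ x
        q = x @ ax
        if q < tol:
            break  # no edges among the current support
        x_new = x * ax / q
        if np.linalg.norm(x_new - x, 1) < tol:
            x = x_new
            break
        x = x_new
    # Greedily turn the (soft) support into an explicit clique.
    clique = []
    for i in np.argsort(-x):
        if all(adj[i, j] for j in clique):
            clique.append(int(i))
    return sorted(clique)
```

In the setting described above, the vertices of `adj` would be pairs of similar descriptors and edges would connect descriptor pairs that are mutually consistent as a two-structure alignment.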
Stochastic generation of initial multiple alignments

We incorporated the progressive alignment method to generate a starting population for the evolutionary algorithm. In order to obtain several such alignments, we have developed a process of randomly generating guide trees. We use a method akin to the NJ algorithm, where at each step a pair of clusters with the highest average similarity of their elements is joined. In our implementation, a pair to be joined is chosen randomly with a probability proportional to the average similarity of elements.

Evolutionary algorithm

Multiple alignments generated in the stochastic progressive alignment phase are refined using an evolutionary algorithm. It is an application of a generic strategy mimicking evolutionary processes by maintaining a population of specimens that correspond to solutions to a problem.

Mutation and crossover

In the case of mutation, an internal node of a spanning tree is randomly selected and the multiple alignment connected with it is recomputed as a pairwise alignment of the multiple alignments in its children. The process is later repeated for all nodes on the path from the selected node to the root of the tree. In the crossover procedure, structures are randomly divided into two subsets and subalignments containing these subsets are extracted from the chosen specimens. These subalignments are then aligned to obtain a new specimen. A pair of structures is chosen with probability proportional to their distance in the spanning trees. They form centroids of the subsets for both specimens. After that, structures close to the respective centroids in the spanning trees are added to the subsets. If the process stops before all structures are exhausted, new centroids are selected and the iteration is resumed.

Steady-state algorithm

We applied a steady-state evolutionary algorithm (Whitley et al., 1989). It is performed independently for each specimen immediately after its generation. In this manner, successful individuals can contribute to the new population without an unnecessary delay. To achieve quick convergence, we used a variant of elitist selection. A new specimen is preserved (added to the population) if its fitness exceeds the fitness of the individual most similar to it, or if the population has not reached its maximal size. All individuals whose identity to the newly added specimen exceeds 80% are removed from the population.

Gradual extension of the search space

The evolutionary algorithm starts with refining the consensus of pairwise alignments. After converging, the most under-performing structure is identified by comparing its contribution to the score of the best multiple alignment with the scores of its pairwise alignments. This structure is freed by including all descriptor pairs in which one descriptor belongs to this structure in the set of allowed similarities. Then the evolutionary algorithm is restarted. The process is repeated until all structures become unconstrained.

Measure of the alignment size and quality

To evaluate the global quality we assess the spatial arrangement of the local components. We enumerate all pairs of the aligned residues which are in contact in at least one of the aligned structures. Then, for each such contact, we compute the RMSD of the respective five-residue pieces (elements) of the backbone. These distances are averaged for each residue over all its contacts, for each pair of structures, and, after raising to the power of two, for the whole multiple alignment. The result can be viewed as an average 'tension' exerted on the structures when the superimposed structures are treated as elastic objects. Sometimes a pairwise alignment can be divided into regions with no contacts between them. In such a case, possible conformational distortions would not influence the tension. Therefore, we augment the score of all regions, except the largest one, by a factor proportional to √((1 + cos α)/2), where α is the angle between the rotations required to superimpose the largest region and the augmented one, respectively.
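A minimal sketch of the aggregation just described follows. It is one possible reading of the scheme, not DAMA's data structures: the per-contact element RMSD is assumed to be supplied by a helper (e.g. a Kabsch superposition of the two five-residue backbone pieces), and the dictionary layout is invented for illustration.

```python
import numpy as np

def tension_score(contacts, element_rmsd):
    """Aggregate per-contact element RMSDs into a 'tension'-style score.

    `contacts` maps (structure_pair, residue) -> list of that residue's
    contacts; `element_rmsd(pair, contact)` returns the RMSD of the two
    superimposed 5-residue backbone pieces around the contact.
    All names here are illustrative assumptions.
    """
    per_residue = []
    for (pair, residue), contact_list in contacts.items():
        # average the element RMSDs of this residue over all its contacts
        per_residue.append(np.mean([element_rmsd(pair, c) for c in contact_list]))
    # square the per-residue averages and average over the whole alignment
    return float(np.mean(np.square(per_residue)))
```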
SISY-multiple dataset In this study, we have used SISY-multiple-a set of multiple alignments created especially for assessment of the quality of multiple structure alignment methods (Berbalk et al., 2009). It is based on SISYPHUS-a manually curated set of multiple structure alignments (Andreeva et al., 2007), which has been pruned from ambiguities. It contains 106 multiple alignments comprising from 3 to 119 structures ($13 on average). Several of them contain particular difficulties such as repetitions, insertions/deletions, permutations or conformational variabilities. This set has been used for testing several alignment algorithms already. Berbalk et al. use two measures to assess the similarity of a given alignment to a reference one. A more stringent measure (Q C ) is the ratio of correctly aligned columns, while a more lenient one (Q P ) is the ratio of correctly aligned residue pairs. It should be noted, that all alignments in SISY-multiple have all columns completely filled (i.e. contain residues belonging to a core common to all structures). We present values of Q C and Q P for alignments computed by Caretta (Akdel et al., 2020), MASS (Dror et al., 2003), Matt (Menke et al., 2008), MultiProt (Shatsky et al., 2004), MUSTANG (Konagurthu et al., 2006), POSA (Ye and Godzik, 2005), 3DCOMB (Wang et al., 2011), MISTRAL (Micheletti and Orland, 2009), MAMMOTH (Lupyan et al., 2005), MAPSCI (Ilinkin et al., 2010), mTM-align (Dong et al., 2018) and DAMA. The results for the first five methods are taken from (Berbalk et al., 2009). For the remaining ones, we have performed computations ourselves. Several methods fail in some cases either due to internal faults or incompatibility of input data (MUSTANG, Matt and POSA are incapable of aligning structures with multiple chains). In his study Berbalk et al. have chosen a set of 61 alignments for which no program has failed. DAMA turned out to be the most accurate method on the whole SISY-multiple dataset achieving median accuracy of 82.3% for the Q C and 92.7% for Q P measure, second-best 3DCOMB achieved 67.1% and 89.8% respectively (67.6% and 89.9% when limited to cases for which program has not failed). If one disregards cases for which programs have failed, Matt (Q C : 81.4%, Q P : 90.6%), POSA (Q C : 77.4%, Q P : 88.3%) and MUSTANG (Q C : 75.9%, Q P : 90.6%) perform better than 3DCOMB. However, these three methods have the highest number of failures. Results for the remaining methods are provided in Table 1 and Figure 3. Performance profiles (Dolan and Moré, 2002) are convenient to assess quality on a large dataset. In this case, however, we used them to compare the accuracy of the algorithms tested. Let c m;a be the accuracy of the solution computed by the method m for the alignment a-with either the Q C or Q P measures. We define accuracy ratio as r m;a ¼ cm;a maxmcm;a . These ratios are aggregated into profiles for each method: where A is a set of reference alignments, and j Á j denotes the set cardinality. According to Dolan and Moré performance profiles may be interpreted as probabilities for a method to achieve performance not worse than the best method by a given ratio. Performance profiles for all alignments in the SISY-multiple dataset for the Q C (a) and Q P (b) measure are presented in Figure 4, and profiles for the subset of 61 safe alignments selected from the SISY-multiple dataset by Berbalk et al. are presented in Supplementary Figure S2 in Supplementary Materials. 
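With these definitions, computing the profiles is straightforward; the following is a minimal sketch with an invented data layout (a nested dict of accuracies), not the scripts actually used in the study:

```python
import numpy as np

def performance_profiles(accuracy, taus):
    """Dolan-More-style accuracy profiles.

    `accuracy[m][a]` is the accuracy c_{m,a} of method m on reference
    alignment a (e.g. the QC or QP score); `taus` is a grid of ratio
    thresholds in [0, 1]. Returns {method: [P_m(tau) for tau in taus]}.
    """
    methods = list(accuracy)
    alignments = list(next(iter(accuracy.values())))
    best = {a: max(accuracy[m][a] for m in methods) for a in alignments}
    profiles = {}
    for m in methods:
        ratios = np.array([accuracy[m][a] / best[a] if best[a] > 0 else 0.0
                           for a in alignments])
        # P_m(tau) = fraction of alignments with accuracy ratio >= tau
        profiles[m] = [float(np.mean(ratios >= t)) for t in taus]
    return profiles
```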
Performance profiles show that DAMA is the method most likely to retain a satisfactory alignment regardless of the desired accuracy. MUSTANG, Matt and POSA, along with 3DCOMB, perform very well on the subset of easier alignments. However, when compared using the whole set, only 3DCOMB and DAMA remain outstanding. There is a significant difference between the QC and QP profiles for these methods, indicating that DAMA is more likely to align whole columns correctly, and thus identify the whole common core. The numerical values of AUC are provided in the Supplementary Materials.

Case study: protein kinases

As a reference, we have taken an alignment of 31 kinases prepared by Scheeff and Bourne (2005). This alignment includes 25 typical protein kinases (TPKs) and 6 atypical kinases (AKs). The authors describe 20 features characteristic of some kinase families or of all of them (see Supplementary Tables S1 and S2 in Supplementary Materials). We have identified 240 aligned positions in that curated multiple alignment corresponding to notable features, and used them to test the two methods performing best on the SISY-multiple set: 3DCOMB and DAMA.

Highly conserved residues

There are a few residues playing an important role in kinase activity which are conserved in all structures, in particular K72, E91, D166 or D184 [positions according to PKA (1cdk)]. Residues crucial for ATP hydrolysis (D184) and present in the catalytic region (D166) were aligned correctly by both methods. However, only DAMA did equally well with residues responsible for ATP stabilization in the binding pocket (K72 and E91), while 3DCOMB shifted its alignment of three structures by 1-2 residues (see Fig. 5). There are also a number of other residues which are conserved in some structures. Examples are H158 and D220, forming hydrogen bonds stabilizing the catalytic region in most kinases, and N171 or an equivalent isoleucine or glutamine interacting with an Mg2+ cation important for the catalytic process. Aligning N171 or H158 caused no difficulties. 3DCOMB aligned all residues at position D220, while DAMA shifted the respective residues in PDK1, GSK3 and PKB by one position and in IRK by three.

Secondary structure

There are several secondary structures conserved in the reference alignment. Among them, helices denoted with the letters A-F are common to all structures (except for the B-helix), while the G to I α-helices are present only in TPKs and vary in length. The B-helix is shared only by AGC kinases (five of the TPKs) and ChaK, while in the other structures a loop takes its place. Also, a number of β-strands numbered from 1 to 8, forming a β-sheet N-terminal domain, are present in all structures (see Fig. 6 and Supplementary Fig. S2 in Supplementary Materials). C- and D-helices, as well as β-strand 4, were aligned correctly for TPKs by both programs. 3DCOMB failed, however, to align the corresponding regions in AKs due to incorrect gap placement. Well-aligned remaining secondary structures are the F-helix, containing the aspartate D220 forming an H-bond with H158, common to all kinases, as well as the G- and H-helices, present in TPKs exclusively. In the DAMA alignment four shifts occur in the F-helix and propagate further through all following helices, but the remaining structures are aligned correctly, including the AKs' F-helix. 3DCOMB, on the other hand, incorrectly aligned the F-helix in AKs, but committed no errors in the case of the G- and H-helices in TPKs. The most troublesome elements were helix B (or its substitute loop), and the α-helix I common to all TPKs.
3DCOMB aligned correctly only half of the I-helices, applying minor shifts in the remaining structures, while the result from DAMA shows shifts only in the four previously mentioned structures. Helix B, on the other hand, was aligned correctly by DAMA, while 3DCOMB found only two correct pairs out of thirty (partially aligning the remaining pairs).

Insertions

There are four insertions present only in some kinases. One, present in the catalytic region of one structure (AFK), was not aligned with any residues from other structures by either program, as expected. Long insertions in CKA-2 and APH were correctly aligned by DAMA, while 3DCOMB achieved approximately 25% accuracy (see Fig. 7). The similarity of insertions between the G- and H-helices shared by CMGC kinases was detected by DAMA in 3 out of 5 structures, while 3DCOMB failed to align any inserted regions in these structures. In the case of the insertion preceding the I-helix, shared by five AGC kinases, DAMA missed one structure, while 3DCOMB detected similarity for only one pair of structures.

Other

A summary of all identified structural features and the performance of DAMA and 3DCOMB can be found in Supplementary Tables S4 and S5 in Supplementary Materials.

Conclusion

DAMA is capable of computing multiple alignments of large sets of structures, imposing only minor constraints in the process. It is based on local similarities, which may comprise several disjoint segments of a protein backbone and encompass complete physico-chemical neighborhoods of amino-acid residues. It is constrained neither by protein topology, thus permitting segment swaps and circular permutations, nor by its global structure, allowing for conformational variability which, in particular, is essential for enzymatic and other activities. In our approach, the alignment algorithm searches for the largest non-contradictory ensemble of similar descriptors. Local descriptors are generic enough to capture most of the biologically significant similarities present in the test set, while at the same time they are discriminatory enough to prevent the emergence of larger, but meaningless, solutions. This result is consistent with our previous study demonstrating the applicability of local descriptors to the pairwise alignment problem (Daniluk and Lesyng, 2011b, 2014). When solving a so-called black-box optimization problem, in which the optimized function (in this case the alignment score) cannot be differentiated and has to be independently evaluated for each attempted solution, it is crucial to limit the number of unsuccessful trials by reducing the number of infeasible solutions. It has been established that the multiple alignment problem is NP-complete (Daniluk and Lesyng, 2014), and thus the development of an accurate algorithm with polynomial time complexity is highly unlikely. However, the DAMA example shows that an effective heuristic approach can be developed without imposing artificial restrictions (e.g. lack of segment swaps, a global RMSD threshold) which would limit the solution space. 3DCOMB, the second-best method after DAMA tested in this study, uses a similar definition of local similarities, so-called local and global structure environments. Its search space is subject to the aforementioned constraints due to the usage of dynamic programming and the TM-score (Zhang and Skolnick, 2004) for assessment of the resulting alignments. This gives 3DCOMB an advantage over DAMA in easy cases.
In several cases, however, these simplifying assumptions make the correct alignment infeasible, which causes 3DCOMB to perform slightly worse.
Human placental mesenchymal stem cells ameliorate liver fibrosis in mice by upregulation of Caveolin-1 in hepatic stellate cells

Background

Liver fibrosis (LF) is a common pathological process characterized by the activation of hepatic stellate cells (HSCs) and accumulation of extracellular matrix. Severe LF causes cirrhosis and even liver failure, a major cause of morbidity and mortality worldwide. Transplantation of human placental mesenchymal stem cells (hPMSCs) has been considered as an alternative therapy. However, the underlying mechanisms and the appropriate time window for hPMSC transplantation are not well understood.

Methods

We established mouse models of CCl4-injured LF and administered hPMSCs at different stages of LF once a week for 2 weeks. The therapeutic effect of hPMSCs on LF was investigated according to histopathological and blood biochemical analyses. In vitro, the effect of hPMSCs and the secretomes of hPMSCs on the inhibition of activated HSCs was assessed. RNA sequencing (RNA-seq) analysis, real-time PCR array, and western blot were performed to explore possible signaling pathways involved in the treatment of LF with hPMSCs.

Results

hPMSC treatment notably alleviates experimental hepatic fibrosis, restores liver function, and inhibits inflammation. Furthermore, the therapeutic effect of hPMSCs against mild-to-moderate LF was significantly greater than against severe LF. In vitro, we observed that the hPMSCs as well as the secretomes of hPMSCs were able to decrease the activation of HSCs. Mechanistic dissection studies showed that hPMSC treatment downregulated the expression of fibrosis-related genes, and this was accompanied by the upregulation of Caveolin-1 (Cav1) (p < 0.001). This suggested that the amelioration of LF occurred partly due to the restoration of Cav1 expression in activated HSCs. Upregulation of Cav1 can inhibit the TGF-β/Smad signaling pathway, mainly by reducing Smad2 phosphorylation, resulting in the inhibition of activated HSCs, whereas this effect could be abated if Cav1 was silenced in advance by siRNAs.

Conclusions

Our findings suggest that hPMSCs could provide multifaceted therapeutic benefits for the treatment of LF, and the TGF-β/Cav1 pathway might act as a therapeutic target for hPMSCs in the treatment of LF.

Supplementary Information

The online version contains supplementary material available at 10.1186/s13287-021-02358-x.

Background

Hepatic fibrosis is a reversible wound healing response caused by many chronic liver diseases, such as viral infection, alcohol abuse, and autoimmune hepatitis. It is characterized by activation of HSCs and excessive accumulation of extracellular matrix (ECM) in the liver [1,2]. HSCs are a resident mesenchymal cell type located in the subendothelial space of Disse and usually display a quiescent state [3]. Following liver injury, HSCs are activated and transdifferentiate into fibrogenic myofibroblasts, the major cell type that causes fibrosis and collagen synthesis. If the source of the injury is sustained, HSC activation and accumulation of ECM persist [4]. Advanced LF can develop into irreversible cirrhosis, portal hypertension, and liver failure, and correlates with an increased risk of hepatocellular carcinoma [5]. The available treatment methods mainly focus on inhibiting HSC activation and/or promoting ECM degradation. Although there are many drugs to treat LF, the effect is very limited and they may also aggravate liver damage [1,6].
Therefore, new strategies to delay or prevent the progression of LF are urgently required. Cell therapy is a promising approach for the treatment of liver disease; hepatocyte-based therapies have been shown to be very effective in experimental animals, but limited cell sources and low proliferation have restricted their large-scale application [7,8]. Recently, many studies have demonstrated the therapeutic potential of mesenchymal stem cells (MSCs) in liver disease [9,10]. It has recently been shown that MSCs can secrete various cytokines in a paracrine manner to regulate inflammatory responses, hepatocyte apoptosis, and fibrosis, and finally restore liver function after acute injury or chronic fibrogenesis [11,12]. Along with their properties of high self-renewal, multipotent differentiation capacity, and immunosuppressive qualities [13-15], MSCs are considered to be the most suitable source of cells for cell-based therapy for LF. Similarly, the secretomes of MSCs, which contain a large number of soluble proteins, free nucleic acids, lipids, and extracellular vesicles, have been proven to retain the same beneficial effect as the cells of origin for the treatment of LF [10,16]. The placenta is another promising source of MSCs. In contrast to autologous MSCs, including bone marrow and adipose mesenchymal stem cells, human placental mesenchymal stem cells (hPMSCs) can be easily obtained in massive numbers by a simple and painless procedure [17,18]. Furthermore, they exhibit greater self-renewal, multilineage differentiation capacity, and strong immunologic privileges [19,20]. More importantly, many studies indicate that hPMSCs have therapeutic potential in liver diseases. A previous study proved the therapeutic potential of hPMSCs in a miniature pig model of acute liver failure [21]. Additional studies have shown that hPMSCs exert an anti-fibrotic effect when transplanted into rats with carbon tetrachloride (CCl4)-injured livers by promoting hepatic regeneration via increased autophagy [22]. Although the use of hPMSCs has been studied in animal models of LF, the number of MSCs transplanted and the appropriate time window, as well as the mechanisms responsible for liver repair by hPMSCs, are not well understood. The aim of the present study was to investigate whether transplantation of hPMSCs reduces fibrosis in CCl4-injured mouse liver and to perform a comparative analysis of the important factors involved in MSC-based cell therapy. We further analyzed the involvement of the TGF-β/Cav1 pathway in hPMSC-mediated anti-fibrosis activity in vitro and in vivo. Our results provide further support that hPMSCs could provide a new avenue for the treatment of LF.

Isolation and identification of human placental-derived mesenchymal stem cells

Placental tissue was obtained from three healthy donors at the Sichuan Maternal and Child Health Hospital, upon consent of the donors and according to procedures approved by the Medical Ethics Committee, Sichuan University (K2018109-1). hPMSCs were isolated and purified; the immunophenotype and differentiation potential of hPMSCs were then determined according to reported procedures [17,20]. hPMSCs were cultured in mesenchymal stem cell basal medium (DAKEWE, Beijing, China) supplemented with 5% UltraGRO™ (HPCFDCRL50, Helios), and cells between passages 3 and 6 were used for all experiments.
HSC culture and in vitro study

Human primary hepatic stellate cells (HSCs) were provided by ScienCell Research Laboratories and cultured in Stellate Cell Medium (SteCM, San Diego, CA) supplemented with growth supplement (SteCGS). HSC activation was induced with TGF-β1 (a well-known pro-fibrotic ligand; 20 ng/mL) after the cells had been grown in DMEM with only 0.2% FBS for 24 h, and was confirmed by western blot analysis (Figure S3). To investigate the effect of hPMSCs in vitro, hPMSC secretomes were harvested and concentrated 15-fold with ultrafiltration tubes (Millipore); activated HSCs were cultured with the concentrated hPMSC secretomes in a gradient ratio (10%, 20%, 40%). Additionally, to exclude the effect of medium composition, concentrated MSC complete medium was treated in the same manner, and the results were compared to those obtained with the hPMSC secretomes. Unactivated and activated HSCs without extra treatment were used as controls.

CCl4-injured mouse liver fibrosis and hPMSC transplantation

To induce liver fibrosis, 8-week-old C57BL/6 mice (20 ± 2 g) were intraperitoneally injected with CCl4 (0.5 mL/kg body weight, dissolved in olive oil, 1:9; Sigma-Aldrich) twice a week for 6 weeks (n = 20). Five mice were sacrificed every 2 weeks for histopathological examination and liver function tests. The liver tissue sections from CCl4-treated mice exhibited focal fibrosis, confirming the successful establishment of an animal model of liver fibrosis. No animals died during the experiments after CCl4 administration. All experimental procedures involving animal experiments were approved by the Sichuan University Medical Ethics Committee (K2018109-2). In this study, hPMSCs were injected into the tail vein of mice in the liver fibrosis model after 4 or 6 weeks of CCl4 treatment, once a week for 2 weeks. These two time points of treatment with hPMSCs correspond to the mild-to-moderate stage of LF and the severe stage of LF (TM and TS, respectively). Meanwhile, two different doses of hPMSCs (high dose, 5 × 10^7 cells/kg; low dose, 2 × 10^7 cells/kg) were also tested in each group, as shown in Fig. 1a. The liver fibrosis group mice were injected with PBS alone. Untreated mice served as controls. There were four hPMSC treatment groups in Fig. 1: TMhigh, TMlow, TShigh, and TSlow. The default for all hPMSC groups in the subsequent in vivo experiments was TMlow. Serum and liver tissue samples were collected at 2 weeks after the 6-week course of CCl4 administration. Specifically, samples were collected at the same time for mice of all groups, including the normal, fibrosis, TMhigh, TMlow, TShigh, and TSlow groups in Fig. 1 and the normal, fibrosis, and hPMSC groups in Fig. 2 and Fig. 6.
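For scale, the absolute per-animal cell numbers implied by these weight-based doses for a nominal 20 g (0.02 kg) mouse follow from simple arithmetic (an illustrative calculation, not figures quoted in the study):

```latex
% Implied per-animal cell numbers for a nominal 0.02 kg mouse
N_{\mathrm{high}} = 5\times10^{7}\ \mathrm{cells/kg}\times 0.02\ \mathrm{kg} = 1\times10^{6}\ \mathrm{cells}
\qquad
N_{\mathrm{low}} = 2\times10^{7}\ \mathrm{cells/kg}\times 0.02\ \mathrm{kg} = 4\times10^{5}\ \mathrm{cells}
```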
Western blot analysis

Total proteins were extracted from HSCs with different treatments; equal amounts of soluble protein were separated via sodium dodecyl sulphate-polyacrylamide gel electrophoresis using 10% Tris-glycine mini-gels and transferred onto a nitrocellulose membrane (Bio-Rad). The primary antibodies are listed in Table S1; GAPDH mAb (Santa Cruz Biotechnology) was used as an internal control. Following incubation with horseradish peroxidase-conjugated secondary antibody (Zsbio, Beijing, China) for 2 h at room temperature, the bands were detected with a chemiluminescent substrate ECL kit (Merck Millipore).

Flow cytometry and immunofluorescence staining

The immunophenotype of hPMSCs and intrahepatic macrophages was detected by flow cytometry. The single-cell suspensions were filtered, and Fixable Viability Stain 620 (BD Biosciences) was used to discriminate live and dead cells. The cells were then blocked with Fc-block (BD Biosciences) and stained with fluorochrome-labeled mAbs. Data were acquired on a NovoCyte flow cytometer. The primary antibodies are listed in Table S1. For immunofluorescence staining, the cells were fixed in 4% paraformaldehyde for 20 min. The fixed cells were blocked with goat serum and subsequently incubated with primary antibodies at 4°C overnight. After thorough washing, secondary antibodies were applied. Nuclei were visualized with DAPI (Roche, Basel, Switzerland). The primary antibodies are listed in Table S1.

Quantitative real-time reverse transcription-polymerase chain reaction (qRT-PCR)

Total RNA was isolated from liver tissue and other cells using Trizol reagent (Invitrogen, USA). qRT-PCR was performed using the Step-One Real-Time PCR system (Takara) according to the manufacturer's instructions. The expression of genes was normalized to GAPDH. All reactions were repeated in triplicate, and the primer sequences are listed in Tables S2 and S3.

Microarray analysis

Total RNA of HSCs or liver tissues obtained from the different groups was prepared with Trizol reagent (Invitrogen, Carlsbad, CA, USA). The products were sequenced on an Illumina HiSeq2500 instrument at Shanghai Majorbio Biopharm Technology Co. Ltd. (Shanghai, China). Data were extracted and normalized according to the manufacturer's standard protocol [23,24]. Differentially expressed genes were identified using the DESeq (2012) functions estimateSizeFactors and nbinomTest. Genes displaying twofold or greater changes (P < 0.05, t test) in expression level between the control group and a test group were selected to generate the heatmap and for GO term enrichment analysis. The RNA-seq raw expression files and details of liver tissues have been deposited in NCBI GEO under accession nos. SRR12777460, SRR12777461, and SRR12777462. The RNA-seq raw expression files and details of HSCs have been deposited in NCBI GEO under accession nos. SRR12806194, SRR12806195, SRR12806196, and SRR12806197.

Liver function tests and histological analysis

Liver function and fibrotic degree were assessed by analyzing the levels of serum aspartate aminotransferase (AST), alanine aminotransferase (ALT), albumin (ALB), and hydroxyproline using a UniCel DxC 800 Synchron (Beckman Coulter) according to the manufacturer's instructions. For histopathological and immunohistochemical analysis, formalin-fixed, paraffin-embedded liver samples were cut into 4-μm-thick sections and stained with hematoxylin-eosin (H&E), Sirius red, and Masson staining. At least 3 animals per group were examined. The primary antibodies are listed in Table S1.

Enzyme-linked immunosorbent assay

The serum of the mice in each group was separated and analyzed according to the manufacturer's instructions for Xinbosheng's QuantiCyto® Mouse ELISA kits. IL-6, TNF-α, and IFN-γ were tested.

Transfection of Cav1 small interfering RNA in HSCs

To investigate the role of Cav1 in inhibiting the activation of HSCs, we designed three small interfering RNAs (siRNAs) and transfected them into HSCs with Invitrogen's Lipofectamine™ 3000 reagent. The downregulation of Cav1 in HSCs was determined by western blot analysis and qRT-PCR. siRNA-negative control (siRNA-NC) and untransfected HSCs were treated as controls. The sequences of the siRNAs are listed in Table S4.
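As a rough illustration of the differential-expression selection rule described in the microarray analysis above (twofold or greater change at P < 0.05), a minimal sketch follows; the file name and column names are hypothetical assumptions, not the actual output format of the DESeq workflow used in the study:

```python
import pandas as pd

# Hypothetical per-gene statistics table from a differential-expression run,
# with columns 'gene', 'log2_fold_change', 'p_value' (names are illustrative).
results = pd.read_csv("deg_results.csv")

# Twofold or greater change in either direction (|log2 FC| >= 1) at P < 0.05.
degs = results[(results["log2_fold_change"].abs() >= 1.0)
               & (results["p_value"] < 0.05)]

# These genes would then feed the heatmap and GO term enrichment analysis.
degs.to_csv("degs_for_heatmap_and_go.csv", index=False)
```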
Statistical analyses

For statistical analysis, the experimental data were analyzed with SPSS version 17.0. Multivariate data were compared using analysis of variance. After statistical significance was established, pairwise comparisons were performed with Sidak's multiple comparisons test. All statistical graphs were drawn using Prism 6.0 (GraphPad). p ≤ 0.05 was considered significant.

Fig. 1. Therapeutic effects of hPMSCs in CCl4-induced liver fibrosis. a Experimental scheme of hPMSC transplantation in CCl4-injured liver fibrosis. Intravenous injection of hPMSCs was administered. TMhigh, treatment with a high dose of hPMSCs in the mild-to-moderate stage of LF; TMlow, treatment with a low dose of hPMSCs in the mild-to-moderate stage of LF; TShigh, treatment with a high dose of hPMSCs in the severe stage of LF; TSlow, treatment with a low dose of hPMSCs in the severe stage of LF. A high dose was specified as 5 × 10^7 cells/kg and a low dose as 2 × 10^7 cells/kg. b Hepatic function assessed by serum levels of AST, ALT, and ALB, and hepatic hydroxyproline content in liver tissues of CCl4-injured mice treated with or without hPMSCs. c Photomicrographs of liver sections stained with Sirius red (upper) and Masson trichrome (bottom) at week 8. d Immunohistochemical staining using anti-α-SMA (red) and DAPI (blue) at week 8. e Expression of Acta2, Col1a1, Timp1, and Vimentin determined using qRT-PCR. Relative mRNA expression was normalized to β-actin and compared with the fibrosis group. Mice from the fibrosis group received PBS followed by CCl4 injection. Scale bar, 50 μm. ****p < 0.0001, ***p < 0.001, **p < 0.01, *p < 0.05; ns, no significance. hPMSCs, human placental mesenchymal stem cells. TMhigh mice, n = 4/group; normal, TShigh, and fibrosis mice, n = 5/group; TMlow and TSlow mice, n = 7/group.

hPMSC transplantation alleviated CCl4-injured liver fibrosis in mice

In order to evaluate the effect of hPMSCs in the treatment of LF, we tested the efficiency of hPMSC engraftment in experimental LF induced by CCl4 administration, as shown in the modeling process (Figure S2 A). After CCl4 treatment for 2 weeks, normal liver lobules were destroyed and the fibrous connective tissue in the portal area was significantly increased (Figure S2 B-C). Four weeks later, the fibrous tissue had further increased, extending to adjacent liver lobules and dividing the liver tissue. Six weeks later, the increased collagen fibers had formed linear fibrous septa, and pseudolobules had formed, according to Sirius red staining, Masson staining, and α-SMA staining (Figure S2 B-C). AST, ALT, and hepatic hydroxyproline content were also elevated in CCl4-treated mice, while the ALB levels decreased, with the trend consistent with the histopathological examinations (Figure S2 D). Time point and cell dose are two important parameters for cell-based therapy. In previous studies, MSCs were usually transplanted in vivo at the 4th week or 6th week after CCl4 administration [25,26]. These reflect the mild-to-moderate stage of LF and the severe stage of LF, respectively. In this study, these two time points (TM and TS) were chosen, and a comparative study was conducted, as shown in Fig. 1a. Two doses of hPMSCs, including a high cell dose (5 × 10^7 cells/kg) and a low cell dose (2 × 10^7 cells/kg), were analyzed as well; mice from the fibrosis group, which received PBS followed by CCl4 injection, served as controls.
Compared with the fibrosis group, biochemical parameters of liver function, including ALT and AST levels, were reduced, while the ALB levels increased in all hPMSC treatment groups, especially in the TM groups (Fig. 1b). Moreover, we found that the level of hydroxyproline, the main component of collagen tissue, was also reduced in all hPMSC treatment groups (Fig. 1b). Histopathological examination using Sirius red staining and Masson staining was performed to quantify the degree of LF. Compared to the fibrosis group, the fibrous area of liver tissue was significantly reduced in the hPMSC treatment groups (Fig. 1c). In addition, immunostaining showed that α-SMA expression was decreased in the hPMSC treatment groups (Fig. 1d). Furthermore, the expression of fibrosis-related genes, including Acta2, Col1a1, and Vimentin, was decreased upon hPMSC treatment, and the downregulation of these genes was greater in the TM groups than in the TS groups, as determined by qRT-PCR analysis (Fig. 1e). These results suggest that hPMSC treatment could improve liver function and alleviate LF in CCl4-treated mice. Compared with the TS groups, the therapeutic effects of hPMSC transplantation were more profound in the TM groups according to the results of the blood biochemical indices, collagen area, and fibrosis-related genes. Moreover, increasing the cell dose from TMlow to TMhigh did not further improve the therapeutic effect of hPMSCs, indicating that low cell doses (2 × 10^7 cells/kg) are sufficient to play a therapeutic role in experimental LF (Fig. 1b-e). The default for all hPMSC groups in the subsequent in vivo experiments was TMlow.

Fig. 2. hPMSCs improve the inflammatory microenvironment in CCl4-induced liver fibrosis. a Immunohistochemistry staining using anti-F4/80 antibodies at 2 weeks after the second injection of hPMSCs. b Monocyte-derived macrophages (MoMF) or Kupffer cells (KC) isolated from liver tissues were detected by FACS analysis. MoMFs (CD11b^high F4/80^low) are inside the black circle and KCs (CD11b^low F4/80^high) are inside the red circle. c A statistical analysis was performed to determine the proportion of MoMF or KC among CD45+ cells. d Inflammatory cytokines in serum (IL-6, TNF-α, IFN-γ) were examined with ELISA assays. e Expression of inflammatory cytokine mRNA was determined using qRT-PCR. Relative mRNA expression was normalized to β-actin and compared with the fibrosis group. Mice from the fibrosis group received CCl4 followed by PBS injection. Scale bar, 50 μm. ****p < 0.0001, ***p < 0.001, **p < 0.01, *p < 0.05; ns, no significance. Normal and fibrosis mice, n = 5/group; hPMSC mice, n = 7/group.

hPMSC transplantation has an anti-inflammatory effect in CCl4-injured mouse liver

Inflammation is vital to the initiation and progression of LF [1]. To investigate the potential anti-inflammatory effects of hPMSCs in vivo, we examined the differences in intrahepatic macrophages and inflammatory cytokines in liver tissue isolated from the different mice. Compared to normal liver tissue, a large number of infiltrating macrophages (F4/80+) were found in fibrotic livers according to immunohistochemical staining, while the number of macrophages was reduced with hPMSC treatment (Fig. 2a). To explore the source of the infiltrating macrophages, we then examined the proportion of monocyte/macrophage cells using FACS analysis. The results showed that there was no statistical difference in the proportion of monocyte-derived macrophages (MoMF, CD11b^high F4/80^low) [27] among the different liver tissues.
Interestingly, the proportion of Kupffer cells (CD11b^low F4/80^high) [27] was 17.44 ± 3.18% in the hPMSC group, which was significantly lower than that in the fibrosis group (34.64 ± 2.12%) (Fig. 2b, c). These results suggested that hPMSC treatment could suppress the infiltration of macrophages, mainly by reducing the number of Kupffer cells. We then detected the expression of inflammatory cytokines in the serum of mice by ELISA. As expected, the expression levels of inflammatory factors, including IL-6 and TNF-α, were lower in the hPMSC treatment group than in the LF group (Fig. 2d). However, there was no significant change (P = 0.07) in the expression level of IFN-γ, which may be related to the alleviation of inflammation in mice after treatment. These results were further confirmed by qRT-PCR analysis (Fig. 2e). Our findings indicated that hPMSC treatment contributes to the improvement of CCl4-injured LF, at least in part through anti-inflammatory processes.

hPMSCs inhibit TGF-β1-induced HSC activation in vitro

HSC activation is an indispensable component in the initiation and progression of LF [3]. TGF-β1 treatment is a common way to activate resting HSCs in vitro [4]. In the presence of TGF-β1, the number of activated HSCs expressing the myogenic marker α-SMA was markedly increased, as determined by western blot analyses (Fig. 3a; Figures S3 and S4A). Activated HSCs became elongated, with a dendritic-like shape, compared with unactivated HSCs (Fig. 3b; Figure S4B). Recent studies support the idea that the beneficial effects observed with MSC-based therapy can be mediated through the paracrine release of soluble proteins or other biologically active molecules and extracellular vesicles, which together constitute the MSC secretomes [6,16,25,28,29]. Therefore, we investigated whether hPMSCs and their secretomes could regulate the activity of HSCs in vitro. In the present study, hPMSCs were co-cultured with activated HSCs. Meanwhile, the secretomes, which are 15-fold concentrates of the culture supernatant from hPMSCs, were cultured with activated HSCs in a gradient ratio of 10%, 20%, and 40%. Unactivated and activated HSCs without extra treatment were used as controls. Western blot (Fig. 3a; Figure S4A) and immunofluorescence staining showed that α-SMA levels were significantly decreased, and this was accompanied by changes in cell morphology that resulted in a morphology close to that of unactivated HSCs (Fig. 3b; Figure S4B). The suppression of HSC activation was also confirmed by qRT-PCR analysis, which revealed a restoration of fibrosis-related gene expression in the hPMSC and hPMSC secretome groups compared with activated HSCs. In particular, Acta2, the α-SMA coding gene, was significantly downregulated compared with the activated HSCs, while the expression of Timp1, an anti-fibrotic gene, was increased with hPMSC and hPMSC secretome treatment (Fig. 3c; Figure S4C). These results demonstrate that HSC activation can be inhibited by treatment with hPMSCs and hPMSC secretomes. Interestingly, the expression level of α-SMA protein in activated HSCs was downregulated after treatment with 10% hPMSC secretomes. This was the same effect that was seen with hPMSC co-culture treatment. Furthermore, this inhibitory effect was more pronounced as the secretome concentration was increased to 40%, indicating that hPMSC secretomes inhibited HSC activation in a concentration-dependent manner (Fig. 3a, b).
Additionally, to exclude the effect of MSC medium composition, MSC complete medium was also concentrated and tested in the same manner. Compared with the hPMSC supernatant, the concentrated MSC medium (40%) exhibited limited effects on the inhibition of HSC activation (Fig. 3a, b). The default for all hPMSC groups in the subsequent in vitro experiments was a 40% hPMSC secretome concentration.

Fig. 3. The activation of HSCs was inhibited by hPMSCs in vitro. a Representative western blot of α-SMA and GAPDH of activated HSCs in the presence of concentrated MSC medium or concentrated hPMSC culture supernatant (hPMSC secretomes). b Typical cell morphology (upper) and α-SMA immunofluorescence staining (lower) of HSCs. c Expression of fibrosis-related genes in activated HSCs was determined using qRT-PCR. Relative mRNA expression was normalized to β-actin and compared with the TGF-β1 group. Cells from the blank group were unactivated HSCs, cells from the TGF-β1 group were activated HSCs induced by TGF-β1, and cells from the medium group or hPMSC group were activated HSCs treated with either concentrated MSC medium or hPMSC secretomes. Scale bar, 50 μm. ****p < 0.0001, ***p < 0.001, **p < 0.01; *p < 0.05; ns, no significance. HSCs, hepatic stellate cells.

Cav1 is a potential target for hPMSC treatment in liver fibrosis

To further investigate the mechanism by which hPMSC treatment relieves LF, we performed RNA sequencing (RNA-seq) analysis of liver tissues obtained from normal C57 mice (Normal group), mouse models of hepatic fibrosis (Fibrosis group), and hPMSC-treated fibrotic mice (hPMSC group). RNA-seq analyses showed that the gene expression profiles of the hPMSC group more closely resembled those seen in normal liver tissues and were significantly different from those of the fibrosis group (Fig. 4a). Furthermore, the genes included in three key functional clusters, comprising fibrosis-, cytoskeleton-, and inflammation-related factors, were analyzed. The results revealed a significant change in the expression of these genes in fibrotic liver tissues, which could be restored after hPMSC treatment (Fig. 4b-d). In addition, the top 10 GO biological process terms are listed in Fig. 4e. The terms related to the TGF-β signaling pathway, SMAD protein signal transduction and SMAD protein phosphorylation in this list suggest that regulation of TGF-β/Smad signaling may be a potential mechanism in the treatment of LF with hPMSCs. We also performed qRT-PCR analyses to check the expression of ten important fibrosis-related genes (Fig. 4f). Among them, Cav1 revealed the most significant differences between the fibrosis group and the hPMSC group. Importantly, previous studies have shown that Cav1 can participate in regulating the TGF-β/Smad signaling pathway in many situations [30], indicating that Cav1 may be a potential target for hPMSC treatment in LF. To further confirm the above findings, we performed RNA-seq analyses on different cells, including activated HSCs (TGF-β1 group), unactivated HSCs (Blank group), and activated HSCs that were then treated with hPMSC secretomes (hPMSC group) or medium (Medium group), respectively. The gene expression profiles of the hPMSC group more closely resembled those seen in the blank group and were significantly different from those of the TGF-β1 group and the medium group (Fig. 5a). Furthermore, the expression of fibrosis-related genes as well as ECM-associated genes was similar to the findings in liver tissue.
It appears that hPMSC secretomes can restore, to some extent, the expression of genes that had changed in the TGF-β1 group or medium group (Fig. 5b). In addition, the top GO biological process terms, related to the proteinaceous ECM and the ECM, suggest that hPMSC secretomes may reduce the formation of ECM by inhibiting the activation of HSCs, a key factor in alleviating LF (Fig. 5c). We also performed qRT-PCR analyses to check the expression of ten important fibrosis-related genes (Fig. 5d). The expression of Cav1 was significantly upregulated in activated HSCs when cultured with hPMSC secretomes, further supporting the findings of the in vivo analysis and indicating that Cav1 might be a potential target for hPMSC treatment in LF.

Fig. 4. Cav1 participates in the improvement of CCl4-injured liver fibrosis after hPMSC treatment. a Heatmap showing the differentially expressed genes (DEGs) among liver tissues from the hPMSC, normal, and fibrosis groups (log2 fold change ≥ 2, p < 0.05). b-d Heatmaps of three key gene clusters associated with liver fibrosis. e GO analysis (biological processes) of significantly downregulated DEGs between the hPMSC group and the fibrosis group (p < 0.05). f Expression of ten fibrosis-related genes in the different groups was determined using qRT-PCR. Relative mRNA expression was normalized to β-actin and compared with the fibrosis group. Mice from the fibrosis group received PBS followed by CCl4 injection. ***p < 0.001, **p < 0.01, *p < 0.05; ns, no significance; Cav1, Caveolin-1.

Downregulation of Cav1 is associated with activation of HSCs

To test this hypothesis, we investigated the effect of Cav1 on HSC activation. We measured the expression of Cav1 and α-SMA in liver tissues by immunofluorescence staining. The results showed that Cav1 was expressed at a low level both in normal liver tissues and in fibrotic liver tissues, where α-SMA was greatly upregulated after CCl4 administration. However, after treatment with hPMSCs, the expression of Cav1 was upregulated compared with that in fibrotic liver tissues, accompanied by the reduction of α-SMA in the hepatic lobular margin (Fig. 6a). To further illustrate the relationship between Cav1 downregulation and HSC activation, we also tested the expression of Cav1 and α-SMA in activated HSCs. Compared to unactivated HSCs, α-SMA was significantly increased while Cav1 was decreased in activated HSCs. After treatment with hPMSC secretomes, α-SMA levels were greatly attenuated, while Cav1 levels were partially restored in activated HSCs (Fig. 6b). These data demonstrated the involvement of Cav1 in HSC activation. We then carried out loss-of-function experiments by transfecting an siRNA targeting human Cav1, which effectively reduced Cav1 expression in HSCs. Cav1 was not knocked down in siRNA-negative control (siRNA-NC)-transfected HSCs. Untransfected HSCs served as control cells (blank). HSC mRNAs from the different groups were collected and subjected to qRT-PCR testing. Compared to control cells, the expression of pro-fibrotic genes such as Acta2 and Col1a1 was upregulated by 1.5-3-fold in Cav1-silenced HSCs, accompanied by the attenuation of Timp1, an anti-fibrotic gene (Fig. 6c), indicating a vital role of Cav1 in HSC activation and collagen production.
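Fold changes of the kind quoted above are conventionally derived from qRT-PCR Ct values; a minimal sketch of the standard 2^(-ΔΔCt) calculation follows. Both the method and the example numbers are illustrative assumptions; the study states only that expression was normalized to β-actin and compared with the control group.

```python
# Sketch of the conventional 2^(-ddCt) relative-expression calculation
# (illustrative assumption; not necessarily the exact workflow of the study).

def relative_expression(ct_gene, ct_ref, ct_gene_ctrl, ct_ref_ctrl):
    """Fold change of a target gene vs. a control sample by the ddCt method.

    ct_gene / ct_ref: Ct values of the target and reference gene
    (e.g. beta-actin) in the test sample; *_ctrl: the same in the control.
    """
    d_ct_test = ct_gene - ct_ref            # normalize to the reference gene
    d_ct_ctrl = ct_gene_ctrl - ct_ref_ctrl
    dd_ct = d_ct_test - d_ct_ctrl           # compare with the control sample
    return 2.0 ** (-dd_ct)

# Hypothetical example: the target amplifies ~1.6 cycles earlier (relative)
# in silenced cells, giving roughly a 3-fold upregulation.
print(relative_expression(22.0, 18.0, 23.6, 18.0))  # ~3.0
```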
It is noteworthy that Smad genes related to the TGF-β signaling pathway, including Smad2 and Smad4, were upregulated in Cav1-silenced HSCs but not in the negative control group or control cells (Fig. 6d). These data reveal the involvement of the TGF-β/Smad signaling pathway in Cav1-mediated HSC activation.

hPMSCs inhibit HSC activation by restoring Cav1 function

To explore the interplay between hPMSCs and Cav1 in HSC activation, we prepared siRNA-transfected HSCs, which were activated by TGF-β1 and then treated with hPMSC secretomes (columns 3-6). Untransfected but activated HSCs served as the control (column 2). Additionally, unactivated HSCs were also tested (column 1). The detailed experimental design is shown in Table S5. As shown in Fig. 7a, the expression of Cav1 was reduced in HSCs in the presence of TGF-β1.

Fig. 5 Cav1 is a potential target of hPMSC secretomes in the inhibition of activated HSCs. a Heatmap showing the DEGs among HSCs from the hPMSC, blank, and medium groups (log2 fold change ≥ 2, p < 0.05). b Heatmaps of two key gene clusters associated with HSC activation and liver fibrosis. c GO analysis (biological processes) of significantly downregulated DEGs between the hPMSC group and the TGF-β1 group (p < 0.05). d Expression of ten fibrosis-related genes in different groups determined using qRT-PCR. Relative mRNA expression was normalized to β-actin and compared with the fibrosis group. Cells from the blank group were unactivated HSCs, cells from the TGF-β group were activated HSCs pretreated with TGF-β, and cells from the medium group or hPMSC group were activated HSCs treated with either MSC medium or hPMSC secretomes. ***p < 0.001; **p < 0.01; *p < 0.05; ns, no significance

It is noteworthy that the decreased Cav1 in activated HSCs could be upregulated by hPMSC secretomes. Knockdown of Cav1 in activated HSCs, however, attenuated the effect of hPMSC secretomes on the upregulation of Cav1 (Fig. 7a). Furthermore, the relative expression of the pro-fibrotic genes Acta2, Col1a1, and Desmin was also measured and normalized to β-actin. The data showed that the trend of changes in pro-fibrotic gene expression in the different groups was opposite to that of Cav1 (Fig. 7a). These results indicate that hPMSC secretomes inhibit HSC activation by restoring Cav1 function in activated HSCs. We then collected protein from HSCs of the different groups and validated the results by western blot analysis. Consistent with the results of the qRT-PCR assay, Cav1 was reduced in activated HSCs induced by TGF-β1 (Fig. 7b). In contrast, hPMSC treatment upregulated Cav1 expression in activated HSCs and inhibited HSC activation, as indicated by the reduction in α-SMA production. However, Cav1 knockdown in activated HSCs attenuated the inhibitory effect of hPMSCs on activated HSCs (Fig. 7b). Importantly, after hPMSC treatment, HSCs showed reduced TGF-β/Smad signaling, as reflected by significantly lower Smad2 phosphorylation and α-SMA expression compared with activated HSCs. However, knockdown of Cav1 by siRNA increased Smad2 phosphorylation and α-SMA expression in activated HSCs even after treatment with hPMSC secretomes (Fig. 7b). In summary, hPMSC treatment restored the function of Cav1, and elevated Cav1 was sufficient to inhibit HSC activation and collagen production, partly by regulating the TGF-β/Smad signaling pathway.
Fig. 6 Upregulation of Cav1 after hPMSC treatment was important in relieving liver fibrosis and inhibiting activated HSCs. a Immunohistochemistry staining using an anti-CAV1 antibody in liver sections. b Immunofluorescence staining using an anti-CAV1 antibody in HSCs. c, d Expression of fibrosis-related genes (c) and TGF-β/Smad signaling pathway-related genes (d) in different groups determined using qRT-PCR. Relative mRNA expression was normalized to β-actin and compared with the NC group. Cells from the blank group were unactivated HSCs; cells from the si1-si3 groups or the NC group were HSCs transfected with Cav1 siRNA 1-3 or the Cav1 siRNA-negative control, respectively. Scale bar, 50 μm. ****p < 0.0001, ***p < 0.001, **p < 0.01, *p < 0.05; ns, no significance

Discussion

In this study, we demonstrated that transplantation of hPMSCs can reduce LF in a mouse model, with changes including the improvement of liver function, inhibition of inflammation, and a reduction in ECM deposition. Moreover, the therapeutic effects of hPMSCs against mild-to-moderate LF were significantly greater than those in severe fibrotic cases. Furthermore, our in vitro and in vivo data indicated that the therapeutic effects of hPMSCs are achieved partly through inhibition of the TGF-β/Smad signaling pathway via upregulation of Cav1 in activated HSCs, which results in inhibited HSC activation and alleviated LF. In recent years, hPMSC-based cell therapy in regenerative research has gained broad interest owing to the cells' great potential for self-renewal and differentiation and their immunomodulatory properties [19,20]. Compared with the well-investigated autologous MSCs, including bone marrow and adipose mesenchymal stem cells, the number of studies using hPMSCs in the treatment of liver disease is relatively small, and the underlying molecular mechanisms have not yet been fully elucidated. In this study, we used mouse models of CCl4-injured LF to explore the therapeutic value of hPMSCs. Our findings indicated that hPMSC transplantation not only enhanced general hepatic function, as indicated by improved liver function indices, including ALT, AST, and ALB, but also alleviated LF, as demonstrated by reductions in collagen fiber regions in liver tissues. This effect occurred concomitantly with a reduction in activated HSCs and downregulation of fibrosis-related genes. This is consistent with previous studies in miniature pig and rat models of CCl4-injured liver [21,22]. Therefore, our findings provide further proof of the potential therapeutic effects of hPMSCs in LF. Choosing an appropriate time window is a key factor for hPMSC transplantation. Although the effects of hPMSCs in LF have been reported in previous studies [31], to date, little is known about the optimal time window for the therapeutic procedure. In previous studies, MSCs were transplanted at 4 weeks after CCl4 administration [26]. However, in a few studies, MSCs were administered at 6 weeks after CCl4 administration, at a more severe stage of LF according to liver function tests and histopathological examination [25]. In this study, two representative time points were chosen and a comparative study was performed. We found that the therapeutic effects of hPMSCs at the early stage of LF were significantly greater than those in advanced LF. These results suggest that earlier intervention with hPMSCs may afford better therapeutic effects. These findings will need to be considered in the design of future clinical studies.
In addition, to explore the effectiveness of different infusion doses of hPMSCs in LF, we adopted two commonly used doses (high and low), based on other trials that reported improved LF [12]. Our results showed that both doses improved LF and restored all indicators, with no significant difference between them. This provides a reference for the selection of the minimum effective dose (MED) of MSC treatment for use in clinical trials [32]. In the present study, we also investigated the possible mechanisms involved in the relief of LF by hPMSCs. In summary, the following mechanisms may account for these effects. First, transplanted hPMSCs inhibit the secretion of multiple cytokines that otherwise promote inflammation and impair liver restoration [6,15,33]. Serum IL-6 and TNF-α levels were significantly lower in hPMSC-treated mice than in untreated fibrotic mice. Furthermore, the number of Kupffer cells in the fibrotic liver also decreased after hPMSC treatment. These results provide evidence supporting the immunomodulatory roles of hPMSCs, which are beneficial to the improvement of liver function as well as the inhibition of LF.

Fig. 7 hPMSCs inhibit the TGF-β/Smad signaling pathway by restoring the function of Cav1 in HSCs. a HSCs were transfected with Cav1 siRNA 1-2 or Cav1 siRNA-NC, respectively. Transfected HSCs were treated with TGF-β1 and then cultured with or without hPMSC secretomes (40%). Expression of Cav1 and fibrosis-related genes (Acta2, Col1a1, Desmin) in different groups was determined using qRT-PCR. Relative mRNA expression was normalized to β-actin and compared with the TGF-β1 group (column 2). b Representative western blot of CAV1, α-SMA, and Smad2 from HSCs in different groups. Cells from the TGF-β1 group were activated HSCs without extra treatment. ****p < 0.0001, ***p < 0.001, **p < 0.01, *p < 0.05; ns, no significance

Second, the therapeutic potential of hPMSCs in LF stems mainly from inhibiting HSC activation. According to RNA-seq analysis of liver tissues and GO enrichment analysis (Figure S5), we found that hPMSC treatment contributed to the upregulation of Cav1 in activated HSCs, which then helped inhibit HSC activation by regulating the TGF-β1/Smad signaling pathway. CAV1 is a fatty acid- and cholesterol-binding protein that constitutes the major structural protein of caveolae [30,34]. Previous studies have confirmed that CAV1 can exert a homeostatic function in the process of fibrosis by regulating TGF-β and its downstream signaling [35,36]. Moreover, Cav1-knockout mice have revealed impaired wound healing and profound fibrosis in the lungs, heart, and liver [36-38]. However, it remained unclear whether Cav1-mediated signaling pathways play an important role in the relief of LF by hPMSC treatment. In this study, we found that the mRNA and protein levels of Cav1 were decreased in activated HSCs. Importantly, Cav1 upregulation can be achieved by hPMSC treatment. Cav1 influenced the activity of Smad2, Smad3, and Smad4; in particular, it reduced the phosphorylation of Smad2, thereby inhibiting the TGF-β1/Smad signaling pathway as well as HSC activation. Conversely, endogenous inhibition of Cav1 by siRNA attenuated the effect of hPMSC secretomes on the upregulation of Cav1, as shown by immunofluorescence staining and qRT-PCR assays. The results of the in vitro assays were consistent with the findings in our animal models. In summary, these combined mechanisms contribute significantly to the therapeutic potential of hPMSCs in the treatment of LF.
To our knowledge, this is the first report to support the important role of Cav1 in MSC-based therapy for LF. Two limitations were also present in this study. First, we did not verify our findings in clinical samples because LF tissue is scarce. Second, it remains unclear whether anti-fibrotic factors or hPMSC-derived exosomes contribute to Cav1 upregulation after hPMSC treatment, and the underlying mechanisms remain to be elucidated. These points will be addressed in further studies.

Conclusions

Collectively, we present further evidence demonstrating the potential of hPMSCs in treating LF. The injection of hPMSCs at conventional doses at the mild-to-moderate stage of experimental LF had a significant therapeutic effect. Moreover, hPMSC-mediated upregulation of Cav1 in activated HSCs plays a key role in deactivating HSCs via inhibition of TGF-β1/Smad signaling. These findings will contribute to the development of effective treatments for fibrotic liver diseases.

Additional file 6: Table S1. Antibodies used for immunofluorescence, FACS analysis, and WB. Table S2. Quantitative RT-PCR primer sequences (mouse). Table S3. Quantitative RT-PCR primer sequences (human). Table S4. Small interfering RNAs targeting Caveolin-1. Table S5. Effects of hPMSC secretomes on HSC activation with simultaneous silencing of Caveolin-1 expression.
9,100
2021-01-07T00:00:00.000
[ "Biology", "Medicine" ]
Community-Led Development and Participatory Design in Open Source Software : This whitepaper delves into the active role of community-led development (CLD) and participatory design (PD) in open source software, highlighting how these complementary approaches bring stakeholders from various backgrounds together to create a cooperative atmosphere for developing stable solutions. It emphasizes the importance of these methodologies in enabling communities to tackle real-world issues effectively and robustly, thus influencing the expansion of open-source development. Integrating CLD and PD within open-source projects fosters a more inclusive collaborative development environment, driving innovation and user-centric solutions. Through case studies like Kubernetes and Konveyor, it is evident that these methodologies significantly contribute to project success by enhancing adaptability, ensuring broad community engagement, and addressing diverse user needs. The findings underscore the vital role of these strategies in creating sustainable and resilient software solutions, highlighting their potential to transform the technology development landscape.

Introduction

The evolution of open source software development is moving away from traditional models, steering towards a more inclusive and collaborative framework. This whitepaper delves into the roles of community-led development (CLD) and participatory design (PD) within this transformative context. These methodologies are central to a broader movement that aspires to democratize technology creation and stimulate innovation. The discussion aims to reveal how their integration improves the development process and enhances open-source initiatives.

In the active open-source landscape, blending community insights and stakeholder engagement through CLD and PD is a critical strategy for developing robust and innovative solutions. This paper highlights the contributions of these approaches in creating adaptable and resilient software while fostering an environment where diversity of thought and collaboration are highly valued. By exploring the implementation and outcomes of CLD and PD within the open-source ecosystem, the paper illuminates a path towards a future characterized by technological advancement driven by inclusivity, shared ownership, and collaboration.

Community-Led Development (CLD)

Community-Led Development (CLD) refers to a development approach in which community members collaboratively identify their goals and objectives, develop and implement strategies to achieve them, and foster relationships within the community and with external parties [1]. This approach leverages the community's collective strengths and leadership to make progress on the project. In the context of open source, CLD is characterized by several key attributes: participation, inclusiveness, sustainability, accountability, community leadership, adaptability, and collaboration [1]. CLD emphasizes the importance of everyone working together and having a say in decisions. This method ensures that the people involved are responsible for the project's development and final results. It fosters a more vibrant and resilient community, leading to the creation of software that reflects the diverse needs and insights of its users and contributors.
Participatory Design (PD)

Participatory Design (PD) is a collaborative design methodology that emerged from Scandinavian work-life research in the 1970s, focusing on involving stakeholders, especially users, in the design process. Its roots in co-creation, democracy, and mutual learning highlight the importance of engaging all participants in shaping outcomes that meet their needs and enhance usability [2], [3]. This approach democratizes the design process, bridging the gap between developers and users to ensure products are both functional and reflective of diverse user requirements.

PD fits naturally into open-source software development, which shares its key ideas of openness, collaboration, and community involvement. It means bringing users into the process of making the software, from the first design steps to ongoing improvements. This approach ensures the software is robust from a technical point of view and truly meets what users need and want. By encouraging everyone to share their ideas and give continuous feedback, PD helps create creative, flexible software that meets the needs of the people using it. Incorporating this method in open source makes projects more lasting and impactful.

Historical Context and Relevance

Integrating CLD and PD into open-source projects marks a major transition in software development. In the past, software development was top-down, directed mainly by a small group of developers or project leaders who made most of the decisions with little input from the actual users or the wider community [4]. Although this method worked in some situations, it often led to software that did not fully meet the needs of its users or benefit from the community's collective knowledge.

The emergence of open source software changed the game by promoting a more open, collaborative way of creating software, emphasizing working together, being transparent, and sharing ownership [4]. Within this environment, CLD and PD have helped make the development process even more democratic. CLD lets community members have a say in the direction of a project, ensuring it grows in line with what users need and want. PD goes hand in hand with CLD by involving users in designing the software, ensuring it is functional and user-friendly.

This move towards an inclusive and collaborative approach in software development is important because it means projects are more likely to stay relevant and keep up with changes [5]. By drawing on a wide range of ideas and expertise from the community, open-source projects can innovate faster and create solutions that more people can use and support. Bringing CLD and PD into open source reflects a more significant change toward making technology development unbiased and collaborative, where everyone has a chance to contribute.

Implementing Community-Led Development (CLD) and Participatory Design (PD)

Fig 1. Representation of CLD and PD in an Open-Source Project community

Implementing CLD and PD in open-source projects means using an organized plan while remaining flexible enough to accommodate the ways communities work and what they aim to achieve. The starting point is to make sure everyone knows what the project is about, its vision, its values, and how contributors can get involved [6]. This step involves setting up easy-to-use tools for sharing ideas and working together, like forums, messaging apps, and project management systems. These tools are crucial for clear and open communication among everyone involved [7].
Keeping the community active and interested takes continuous effort. It can be done by regularly sharing updates, recognizing the work people contribute, and creating a friendly space where all feedback is valued. Implementing recognition programs and contribution guidelines can help maintain a high level of participation and ensure that contributions are aligned with the project's vision and goals.

To facilitate collaborative development and design, it is important to have tools that support real-time collaboration, like Discord or Slack, track changes, and manage tasks via GitHub issues or Jira [8]. These technologies make the process of building the project more efficient and allow people to contribute from anywhere, regardless of their location or time zone.

Challenges and Solutions

Integrating CLD and PD into open-source projects presents several challenges [2]: ensuring inclusivity, balancing stakeholder expectations, addressing long feedback loops, and balancing rapid innovation with stability.

Ensuring Inclusivity

• Challenge: Inclusivity is essential for creating a welcoming and diverse community where all voices are heard and valued. However, achieving true inclusivity can be challenging, especially in large and diverse communities.
• Solution: Actively promote diversity and inclusion within the community by creating policies, initiatives, and resources that support underrepresented groups [10].
• How it relates: Enabling diversity and inclusion ensures that community members from all backgrounds feel welcome and valued [10]. The community benefits from broader perspectives and experiences by actively seeking and amplifying diverse voices. This enriches the collective pool of knowledge and creates a welcoming environment for all contributors, ultimately leading to a stronger and more resilient community.

Balancing Stakeholder Expectations

• Challenge: Balancing the needs and expectations of various stakeholders, including users, developers, sponsors, and other community members, can be challenging, as their priorities may differ.
• Solution: Prioritize stakeholder engagement by actively engaging with stakeholders throughout the development cycle to ensure project goals and priorities align with their needs and expectations [11].
• How it relates: Prioritizing stakeholder engagement fosters a sense of ownership and commitment among all parties involved in the project. By obtaining input and feedback from stakeholders early and often, the project can better align its goals with the needs of its diverse user base.

Addressing Long Feedback Loops

• Challenge: In integrating CLD and PD, projects may encounter long feedback loops, extending the time between proposing a change and seeing it implemented or receiving feedback. This can slow the development cycle and impact the timely release of new features or updates [2].
• Solution: Implement more efficient tools and processes for collecting and acting on community feedback. This could include regular sprint reviews, where community input is gathered and prioritized for action, or real-time collaboration tools to accelerate decision-making processes [12].
• How it relates: By streamlining how feedback is collected and responded to, projects can reduce the time it takes to implement changes or introduce new features. This keeps the development cycle agile and ensures that community contributions have a real impact on the project's progress, maintaining engagement and satisfaction among contributors.

Balancing Rapid Innovation with Stability

• Challenge: The dynamic nature of open-source development, powered by CLD and PD, encourages rapid innovation and the frequent introduction of new ideas and features. However, this can sometimes compromise stability, especially in projects critical to users' operations or businesses.
• Solution: Adopt a dual-track development approach [13], as shown in Figure 3. One possible strategy is to separate the project's development into two tracks: one focused on rapid experimentation and innovation (often in an edge environment) and another on maintaining a stable, thoroughly tested release for production use. This approach allows participatory design and community-led development to thrive without compromising the project's stability.
• How it relates: A dual-track development strategy finds a middle ground between wanting to innovate and needing to keep things stable. This approach creates a win-win situation for everyone involved, ensuring the project meets both its innovative goals and the users' need for reliability.

Fig 3. Dual-track development approach

Real-World Examples

This section explores case studies demonstrating the successful application of CLD and PD across various open-source projects. Each case details the approach taken, the challenges encountered, and the impact on project outcomes. From small-scale community initiatives to large, globally distributed projects, these examples showcase the versatility and effectiveness of integrating CLD and PD methodologies.

Case Study: Kubernetes

Background: Kubernetes is an open-source container orchestrator that simplifies container management, enabling efficient resource utilization and PaaS development [14].

CLD Approach: Kubernetes exemplifies community-led development with its governance in the hands of its community. It has a well-structured, open, and inclusive process for contributions, including special interest groups (SIGs) for various aspects of the project. These SIGs are open for anyone to join and contribute to, fostering a diverse and collaborative environment [15].

PD Methodology: Kubernetes also incorporates participatory design principles by actively involving its users in the development process. Users can contribute through GitHub issues, pull requests, Kubernetes Enhancement Proposals (KEPs) [16], and participation in SIG meetings. This direct line of communication between users and contributors ensures that the platform meets real-world needs and use cases.

Impact: The success of Kubernetes can be attributed to its open, community-driven approach, which has led to rapid innovation, extensive adoption, and a robust ecosystem of tools and services built around the platform. Kubernetes' ability to meet the complex needs of modern, cloud native applications is a direct result of the collaborative efforts of its global community.

Challenges: Despite its successes, the Kubernetes project faces challenges, including managing the sheer scale of contributions [17], [18], ensuring the quality and security of the code, and navigating the diverse needs and opinions within its community. However, its governance model and commitment to openness and inclusivity have effectively helped it address these challenges.

Conclusion: Kubernetes is a prime example of how CLD and PD can drive the success of an open-source project. Its governance structure, commitment to community involvement, and mechanism for incorporating user feedback into its development cycles have made it a worldwide model for open-source projects. The project's ability to innovate and adapt quickly to changing technologies and user needs underscores the value of a community-led, participatory approach to open-source software development.
Case Study: Konveyor

Background: Konveyor is a community project aimed at helping organizations rehost, replatform, and refactor applications to run on Kubernetes. It provides tools and practices to accelerate the process of migrating existing applications to Kubernetes, addressing common challenges and streamlining the transition to containerized environments [19].

CLD Approach: The Konveyor project operates on a community-led development model, where the community drives contributions, decisions, and leadership. It leverages its community members' collective expertise and efforts to develop rules and best practices for Kubernetes migrations. The project encourages participation from individuals and organizations, fostering a collaborative environment where everyone can contribute to its success [20].

PD Methodology: Participatory design is at the core of Konveyor, with the project actively involving its target users (developers, system administrators, and IT professionals) in the development process. This involvement takes various forms, including feedback on tool usability, contributions to the codebase, documentation improvements, and participation in community meetings [21]. By engaging its end users directly, Konveyor ensures that its tools and practices align with the actual needs and challenges organizations face when migrating their workloads to Kubernetes.

Impact: Konveyor reduces the complexity, time, and cost of transitioning legacy applications to modern, cloud native platforms. The project enables organizations to successfully adopt Kubernetes, contributing to the broader adoption and success of the technology.

Challenges: Like many open-source projects, Konveyor faces challenges, including ensuring broad community engagement, managing diverse user needs, and maintaining momentum in a rapidly evolving technology landscape. However, its commitment to an open, inclusive, and participatory development process has helped it navigate these challenges and continue to grow and evolve.

Conclusion: Konveyor demonstrates the benefits of integrating CLD and PD by fostering a collaborative, inclusive community and actively involving users in the development process. Konveyor has developed practical solutions to real-world problems, facilitating the widespread adoption of Kubernetes. The project's success so far highlights the value of user participation in driving innovation and addressing complex technical challenges.

Conclusion

This whitepaper investigates the essential roles that Community-Led Development (CLD) and Participatory Design (PD) play in open source software. It shows how these methods help create new ideas, encourage working together, and make sure everyone feels included. By looking closely at how CLD and PD work, their history, and examples of their use, it is clear that using these methods together is key to making technology that meets the community's needs. The stories of Kubernetes and Konveyor are examples that highlight how effective CLD and PD can be in open-source projects, proving that a development process focused on community cooperation can solve complex problems and support ongoing growth. CLD and PD provide a guide for creating technology solutions that are flexible, strong, and centered around users' needs. By embracing Community-Led Development and Participatory Design, the open-source community paves the way for a future rich in innovation, inclusivity, and collective advancement.
3,406.2
2024-04-16T00:00:00.000
[ "Computer Science", "Environmental Science", "Sociology" ]
Measurement of Permeability in Horizontal Direction of Open-Graded Friction Course with Rutting : Although the permeability of open-graded friction course (OGFC) materials in the transverse direction and the reduction in permeability associated with long-term traffic loading are important issues, they have remained under-researched thus far. In this study, testing equipment and a procedure were developed to evaluate the permeability of an OGFC specimen along the horizontal direction and its reduction due to rutting. Horizontal permeability tests were conducted by varying the hydraulic gradient of specimens with porosities of 19.6%, 15.6%, and 10.3%. The reduction in cross-section due to traffic loading was simulated via a wheel tracking test, and the permeability was subsequently evaluated. The reliability of the test methodology was successfully verified; the tendency of the relationship between discharge velocity and hydraulic gradient was in good agreement with existing research results. The reduction in cross-sectional flow area due to rutting decreased the horizontal permeability. The test results obtained using the developed testing equipment will enable efficient OGFC design.

Introduction

Problems such as the decline in ground water levels, depletion of ground water resources, and increase in flood damage in densely developed areas are becoming increasingly common in modern society, because urbanization has led to a decrease in green and permeable areas and an expansion of impervious areas. Low impact developments (LIDs) have been suggested as a suitable approach for resolving such water-related environmental issues and for recovering the water circulation in urban environments. An LID considers the integrated hydrological system, administration of small-scale distribution, source management, and diversity based on the water circulation characteristics under natural conditions, by applying the concept of better site design devised by the Prince George's County Department of Environmental Resources, Maryland, U.S.A., while planning integrated facilities. Furthermore, several countries are actively transitioning from centralized to decentralized management and implementing the corresponding applications. Practical examples of LID include the decentralized urban design in the Netherlands and the water-sensitive urban design in the U.K. [1,2]. Such LID facilities in metropolitan regions have been reviewed intensively to reduce the damage caused by floods and non-point source pollution [3][4][5][6].

Among the many factors that influence the coverage of impervious materials in densely developed areas, roads occupy approximately 30% of urban spaces. Roads are laid for the convenience of transportation, and the area of paved roads in urban spaces is approximately twice the area occupied by buildings [7]. Conventionally, paved roads negatively impact the water circulation system (WCS) by discharging the rain water that falls on their surfaces, reducing ground water base flow, potentially increasing flood damage, transporting urban pollution to water sources, and interfering with the natural water circulation cycle. These problems are caused by the materials used to build roads or pavements, such as concrete and asphalt, which are impervious [8,9].
Open-graded friction course (OGFC), an LID technique used worldwide, has been suggested as a method for solving such water-related environmental issues and recovering urban water circulation [10]. The most important OGFC parameter during the hydrological design of LID-based road and traffic facilities is porosity [11][12][13]. The porosity of porous asphalt materials can be calculated by measuring the weight of a specimen when it is saturated and dry. Studies on the porosity of porous asphalts have been conducted by Montes et al. [14], Neithalath et al. [15], and Ahn et al. [16]. Further research on porous asphalt remains difficult owing to the problems of pore blockages and aggregate desorption encountered during the early stages. However, the use of OGFC has increased because it supports sustainable development, especially for WCSs [17]. The Federal Highway Administration (FHWA) proposed the consideration of a hydrological design whereby rain water is discharged through permeable asphalt pavements, introduced a testing method [18][19][20][21], and indicated that additional research on pore blockages is required. The FHWA further stated that urban flooding and the volume of runoff rain water can be reduced by adjusting the thickness of the OGFC, adding a water-permeable layer, and using a trench filled with aggregates [22]. Amirjani [23] suggested a permeation test that considered blockages, and Marcaida et al. [24] performed experimental research that assessed OGFC blockage based on the size of the blockage particles. Suresha et al. [25] and Deo et al. [26] experimentally and theoretically investigated methods to prevent long-term blockages caused by pore clogging. Additionally, they developed test equipment for evaluating the permeability of OGFCs. Fwa et al. [27,28] conducted experimental research on the permeability characteristics and the phenomenon of blockages. Ahn et al. [29] developed testing equipment that could be applied to permeable base courses, supplementary base courses, and other materials and was also capable of adjusting the hydraulic gradient via a method other than the falling head method. Furthermore, they simulated pore blockage by building on the strengths and addressing the weaknesses of the permeability test devices for existing OGFCs (Figure 1). Andrés-Valeri et al. [30] conducted a performance test of permeable asphalt in the horizontal direction. However, since Andrés-Valeri et al. [30] installed slab-shaped samples on the floor and measured the amount of interflow induced by the rainfall intensity, it is difficult to accurately evaluate the permeability coefficient inside the pavement using their approach.

Research on the vertical rain water permeability of pavements and on pavement pore blockage due to sediments has progressed well in recent years. In the case of an OGFC, it is presumed that the structure allows all drainage water to pass vertically through to the ground because it does not contain any impervious layers. In addition, all the layers of OGFCs are designed such that they feature permeable characteristics. However, research on the permeability of such pavements in the horizontal direction (i.e., the transverse direction) is currently insufficient. Moreover, testing equipment and analysis methods to predict and simulate the reduction in permeability caused by the reduction in pore size resulting from rutting under long-term traffic loading remain inadequate.
Therefore, the pore characteristics in the discharge direction and the influence of traffic loading on these pore characteristics must be considered during the design phase of OGFCs. Furthermore, permeability should be evaluated to analyze the phenomenon of pore blockage along the permeation direction and that caused by traffic loading, and also to standardize verification plans, simulation methods, and testing equipment. This study is aimed at developing laboratory-scale equipment to evaluate the permeability of an OGFC specimen in the horizontal direction (by varying the hydraulic gradient) and the pore reduction caused by traffic loading. For this purpose, the composition of the newly developed testing equipment, procedure, and method are described. A permeability test in the horizontal direction along the hydraulic gradient was conducted based on the proposed testing method, and the pore reduction caused by rutting due to traffic loading was simulated via a wheel tracking (WT) test. Additionally, the permeability was evaluated. Based on the test results, the practical applicability of OGFC and additional research topics are discussed.

Materials

OGFC specimens were designed with open-graded aggregate and asphalt binder based on SUPERPAVE (superior performing asphalt pavement) [31]. The specifications of the OGFC mixtures and the properties of the asphalt binder used are shown in Tables 1 and 2, respectively. The nominal maximum size used for the OGFC mixtures was 10 mm, and the gradations are shown in Figure 2. Specimens with dimensions of 300 × 300 × 50 mm (width × length × height) were molded according to the KS F 2374 standard [33]. The maximum theoretical density was calculated by the methods of KS F 2366 [34] and AASHTO T-209 [35], and the porosity was calculated from the ratio of the measured density to the theoretical density of the sample. The target porosities of the specimens were 20%, 15%, and 10%, and the actual porosities were measured to be 19.6%, 15.6%, and 10.3%, respectively.
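As a quick illustration of the porosity calculation above, here is a minimal sketch assuming the usual air-void relation, porosity = (1 − bulk density / maximum theoretical density) × 100; the density values are hypothetical and merely chosen to land near the three reported porosity levels:

```python
def porosity_percent(bulk_density, max_theoretical_density):
    """Air-void content (%) from the measured bulk density and the
    maximum theoretical density (KS F 2366 / AASHTO T-209)."""
    return (1.0 - bulk_density / max_theoretical_density) * 100.0

# Hypothetical densities in g/cm^3; rho_max is illustrative only.
rho_max = 2.452
for rho_bulk in (1.97, 2.07, 2.20):
    print(f"rho_bulk = {rho_bulk:.2f} -> porosity = {porosity_percent(rho_bulk, rho_max):.1f} %")
```

Run as written, this prints porosities of about 19.7%, 15.6%, and 10.3%, close to the three specimen levels used in the study.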
Test Equipment

OGFC composed of porous asphalt material exhibits viscoelastic behavior under the influence of traffic loading and high temperatures. The pavement material maintains its strength at low temperatures, whereas the mixture becomes soft at high temperatures. Applying a load on the asphalt pavement at high temperatures results in rutting, which permanently transforms the surface characteristics and reduces pavement permeability. Therefore, pore reduction due to the rutting caused by long-term traffic loading was simulated in this study using WT test equipment. During the test, the dynamic stability was calculated (Equation (1)) to evaluate the resistance of the pavement to rutting. Dynamic stability is evaluated over the interval in which the deformation curve becomes a nearly straight line and the rate of deformation change approaches zero (i.e., the deformation between 45 and 60 min), as shown in Figure 3:

DS = N × (t2 − t1) / (d2 − d1), (1)

where d1 (mm) and d2 (mm) represent the deformations at t1 (45 min) and t2 (60 min), respectively, and N is the wheel loading rate in passes per minute. The test was performed in accordance with the KS F 2374 standard [33] and utilized a WT compaction machine and a measuring machine (Figure 4). Three specimens with porosities of 10.3%, 15.6%, and 19.6% were subjected to rutting until each specimen reached a rut depth of 4, 8, and 12 mm, respectively. Thereafter, the permeability was evaluated.
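A small sketch of the Equation (1) computation; the loading rate of 42 passes per minute is an assumption (a value common in wheel-tracking standards, to be checked against KS F 2374), and the deformation readings are hypothetical:

```python
def dynamic_stability(d1_mm, d2_mm, t1_min=45.0, t2_min=60.0, passes_per_min=42.0):
    """Equation (1): dynamic stability in passes/mm over the 45-60 min window.

    passes_per_min = 42 is an assumed wheel loading rate; consult the
    applicable standard for the exact value.
    """
    return passes_per_min * (t2_min - t1_min) / (d2_mm - d1_mm)

# Hypothetical deformation readings: 3.6 mm at 45 min, 4.0 mm at 60 min.
print(f"DS = {dynamic_stability(3.6, 4.0):.0f} passes/mm")  # 1575 passes/mm
```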
Specimen Preparation

The test specimens were prepared in accordance with the following steps: (1) preparation of the WT specimen mold (Figure 5a); (2) preparation of the OGFC specimen (shape: square; length of one side: 300 ± 5 mm; thickness: 50 mm); (3) compaction of the specimen up to 100% ± 1% of the standard density of the Marshall stability test [36], with a maximum compaction load of 8820 N [33]; (4) curing of the compacted specimen at room temperature for 12 h; (5) curing of the specimen at a constant temperature of 60 ± 0.5 °C for 5 h before beginning the test, with a maximum curing time of 24 h.

Equipment Development

The testing equipment developed in this study was used to evaluate the permeability of the OGFC in the horizontal (i.e., road-transverse) direction as well as the vertical direction, unlike the existing testing equipment, which only evaluates the permeability in the vertical direction. The configuration of the testing equipment is shown in Figure 6a. The features of this testing equipment include the ability to evaluate the non-linear permeability characteristics along the horizontal direction of OGFCs and to adjust the hydraulic gradient. The equipment consists of a specimen-fixing mechanism, a water tank, and a head-adjusting part. The hydraulic gradient can be adjusted by positioning the specimen-fixing mechanism at the top of the head-adjusting part in the water tank.

(1) Specimen Mold Fixture

A specimen mold fixture was designed to attach the test specimen to the testing apparatus (Figure 6b). It is additionally capable of accommodating specimens of various dimensions, depending on the size of the water tank. The test specimen can be firmly fixed using bolts; moreover, as watertight materials such as rubber or silicone are used, water leakage between the specimen and the fixing part does not affect the measurements. In addition, stainless steel (thickness of 3.0 mm) was used to fabricate the specimen mold fixture to prevent corrosion and deformation.

(2) Water Tank

The water tank had dimensions of 500 × 500 × 530 mm (width × length × height), as shown in Figure 6c.
The tank was used to saturate the specimen and to store and discharge the water that penetrated the specimen. Water overflowed through the outlet if the volume of water that passed through the specimen exceeded the capacity of the water tank; in such cases, the volume of water that passed through the specimen was considered as the flow.

(3) Head-Adjusting Part

The head-adjusting part consisted of head-adjusting pedestals of various heights (30-60 mm) and a specimen-fixing support, as shown in Figure 6d. This part controlled the hydraulic gradient using the head-adjusting pedestals. A sufficient cross-sectional flow area was provided at the specimen-fixing support such that any excess water that penetrated the specimen could be smoothly routed to the water tank to minimize its influence on the water flow.

General Procedures

The testing equipment for horizontal permeability is based on constant heads. Permeability is evaluated by measuring the flow that penetrated the OGFC specimen over a specific time interval, varying the hydraulic gradient against a specimen with a specific cross section and thickness. The testing procedure is performed in accordance with the following steps (Figure 7). (1) Mark the direction in which the horizontal permeability will be evaluated on the test specimen (Figure 7a). (2) Set the hydraulic gradient using the head-adjusting pedestals (Figure 7c) and set up the testing specimen by mounting it on the specimen mold fixture (Figure 7d).

The hydraulic gradient (i) is the ratio of the head (Δh) to the specimen length (l) along the flow direction, i.e., i = Δh/l. The testing equipment developed for this study controls the head (Δh) by positioning the head-adjusting pedestals, specimen-fixing support, and specimen mold fixture in the water tank. The applicable maximum height of the head is 490 mm, because it is restricted by the height of the water tank, and the test can be performed for various hydraulic gradients. The hydraulic gradients for the various thicknesses of the head-adjusting pedestals are listed in Table 3. The minimum and maximum hydraulic gradients (i) that can be tested are 0.1 and 1.3, respectively.
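The mapping from an applied head to a hydraulic gradient (tabulated in Table 3) follows directly from i = Δh/l; a minimal sketch assuming the 300 mm horizontal flow length of the specimens and hypothetical head values spanning the reported range of i = 0.1-1.3:

```python
SPECIMEN_LENGTH_MM = 300.0  # horizontal flow length of the 300 x 300 x 50 mm specimen
MAX_HEAD_MM = 490.0         # upper bound imposed by the water-tank height

def hydraulic_gradient(head_mm):
    """i = dh / l for the constant-head horizontal permeability test."""
    if not 0.0 < head_mm <= MAX_HEAD_MM:
        raise ValueError("head must be positive and within the tank limit")
    return head_mm / SPECIMEN_LENGTH_MM

# Hypothetical heads (mm) reproducing the reported testable range of i.
for head in (30, 90, 150, 240, 390):
    print(f"head = {head:3d} mm -> i = {hydraulic_gradient(head):.2f}")
```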
Darcy's law assumes that the relationship between the discharge velocity and the hydraulic gradient is linear (v = k × i), which explains the permeability of typical soils or soil materials; that is, the discharge velocity (v) is proportional to the hydraulic gradient (i). However, Fwa et al. [28], Huang et al. [37], Coleri et al. [38], and Liu et al. [39] proved through experimental research that for porous asphalt materials the relationship between the discharge velocity and the hydraulic gradient is non-linear, because the pores are large. This relationship is expressed as

v = k × i^n, (2)

where k = coefficient of permeability, or permeability (mm/s), i = hydraulic gradient, and n = experimental coefficient.

This study measured the horizontal permeability in the longitudinal direction (i.e., the driving direction) of the road and in the lateral direction across the road to evaluate the permeability of the OGFC. The values obtained were averaged and calculated using the following equation:

k = (v × l) / (A × h × t), (3)

where v = volumetric flow of water (mm³), l = sample length (mm), A = sample area (mm²), h = differential head (mm), and t = time for flow (s).
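Equations (2) and (3) translate into a short analysis sketch: Equation (3) converts each constant-head measurement into a permeability, and a least-squares fit in log-log space recovers k and n of Equation (2). All measured values below are hypothetical:

```python
import numpy as np

def permeability(volume_mm3, length_mm, area_mm2, head_mm, time_s):
    """Equation (3): k = v*l / (A*h*t), returned in mm/s."""
    return volume_mm3 * length_mm / (area_mm2 * head_mm * time_s)

def fit_power_law(i, v):
    """Fit v = k * i**n (Equation (2)) by linear regression on log-log data."""
    n, log_k = np.polyfit(np.log(i), np.log(v), 1)
    return np.exp(log_k), n

# One hypothetical constant-head reading: 2.5e6 mm^3 collected in 60 s through a
# 300 mm long specimen with a 300 x 50 mm cross-section under a 150 mm head.
print(f"k = {permeability(2.5e6, 300.0, 300.0 * 50.0, 150.0, 60.0):.2f} mm/s")

# Hypothetical discharge velocities (mm/s) at several hydraulic gradients,
# generated to follow a non-linear (n < 1) trend like the one reported.
i = np.array([0.1, 0.3, 0.5, 0.9, 1.3])
v = 12.0 * i ** 0.6
k, n = fit_power_law(i, v)
print(f"fitted k = {k:.2f} mm/s, n = {n:.2f}")  # recovers k = 12, n = 0.6
```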
Permeability after Rutting

When installing a rutted sample in the specimen mold fixture for the permeability test, impermeable material was used to fill the rut to prevent water from flowing through the space generated by the rutting. The following steps were performed to determine the permeability of the test specimen after rutting: (1) prepare the specimen with rutting via the WT test; (2) seal the rutted portion with impermeable material (Figure 7b). To evaluate the horizontal permeability of the OGFC corresponding to the hydraulic gradient and rutting, the tests were conducted following the sequence in Figure 8.

Horizontal Permeability

Horizontal permeability tests were performed on the OGFC specimens with porosities of 19.6%, 15.6%, and 10.3%. Each test was conducted in accordance with the described test procedure for horizontal permeability, using the testing equipment developed. For each sample, the permeability tests were conducted at least three times, and the averages of the discharge velocities and permeabilities from the multiple tests are presented in Figures 9 and 10. Figure 9 shows that the discharge velocity increased as the hydraulic gradient increased, and that the relationship between the hydraulic gradient and the discharge velocity is non-linear, which is in accordance with the results of Fwa et al. [28], Huang et al. [37], Coleri et al. [38], and Liu et al. [39]. The permeability gradually decreased as the hydraulic gradient increased, as presented in Figure 10. As the porosity decreased, the horizontal permeability also decreased.

Horizontal Permeability after Rutting

The permeability was evaluated after making ruts with depths of 4, 8, and 12 mm, which simulate the changes in OGFC cross-sections due to long-term traffic loading. Two values of the hydraulic gradient, 0.1 and 0.5, were adopted in the tests. To investigate the relative differences among specimens with and without ruts, the permeabilities were normalized by the permeability of the same sample with no rut; this normalized value is defined as the horizontal permeability ratio.
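A one-step sketch of this normalization, with hypothetical permeability values for a single specimen:

```python
# Hypothetical permeabilities (mm/s) for one specimen at rut depths of 0, 4, 8, 12 mm.
k_by_rut_depth_mm = {0: 5.6, 4: 4.4, 8: 3.1, 12: 1.9}

k_no_rut = k_by_rut_depth_mm[0]
ratios = {depth: k / k_no_rut for depth, k in k_by_rut_depth_mm.items()}
for depth, ratio in ratios.items():
    print(f"rut depth = {depth:2d} mm -> horizontal permeability ratio = {ratio:.2f}")
```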
To investigate the relative differences among specimens with and without ruts, the permeabilities were normalized by the permeability of the same sample with no rut; the resulting quantity is defined as the horizontal permeability ratio. Figures 11 and 12 present the horizontal permeability ratios with respect to rut depth for hydraulic gradients of 0.1 and 0.5, respectively. The results indicated that the horizontal permeability decreased as the rut depth increased; the reduction in cross-sectional flow area caused the decrease in porosity. In addition, the reduction in horizontal permeability was more severe when the hydraulic gradient was smaller.

Figure 11. Change in horizontal permeability after rutting for each porosity (i = 0.1).
Figure 12. Change in horizontal permeability after rutting for each porosity (i = 0.5).

Practical Implications

As mentioned in Section 4, the proposed horizontal permeability testing equipment yielded results similar to those reported by Fwa et al. [28], Huang et al. [37], Coleri et al. [38], and Liu et al. [39].
In addition, the testing equipment and procedures were developed and successfully applied to evaluate the permeability of OGFC specimens in the horizontal direction (Figure 13a). As the permeability in the horizontal direction can be evaluated with the geometric design of the road taken into account, the method can be applied to the design of OGFCs when the pavement is required to drain water laterally to the side rather than only vertically to the permeable base course. Roads are designed differently depending on their function and scale (Table 4), as is the width of the vehicle tires that transfer different wheel loads to the OGFC. The ratio of the widths of the wheel (50 mm) and the specimen (300 mm) used in the WT test was 0.167, which reflects the typical range of values in the field. If one assumes a passenger car driving on a local road, the wheel width may be approximately 0.215 m to 0.265 m and the lane width 3.0 m, giving a ratio of wheel width to lane width of 0.143 to 0.177. This study provided an opportunity to develop an improved testing method that considers factors influencing the permeability of OGFCs, such as horizontal permeability and the permeability reduction caused by rutting. Our results are expected to ultimately enable the design and maintenance of efficient OGFCs in the future (Figure 13b).

Conclusions

This study developed permeability testing equipment and a procedure to evaluate the permeability of OGFCs in the horizontal direction. Horizontal permeability tests were conducted while varying the hydraulic gradient for specimens with porosities of 19.6%, 15.6%, and 10.3%. The reduction in the cross-section of the OGFC due to traffic loading was simulated via a wheel tracking test, and the permeability was subsequently evaluated. Tests on OGFCs with no rut showed that the relationship between hydraulic gradient and discharge velocity is non-linear. The permeability of the OGFC was higher when the hydraulic gradient was smaller and the porosity was higher.
The reliability of the test methodology was successfully verified; the tendency of the relationship between discharge velocity and hydraulic gradient was in good agreement with existing research results. Rut depths of 4, 8, and 12 mm were made in the OGFC specimens to simulate the decrease in cross-section caused by rutting produced by long-term traffic loading. The results of constant-head tests with rutted specimens showed that the horizontal permeability decreases owing to the decrease in the cross-sectional flow area. It would be necessary to incorporate, in design, the effect of the change in permeability due to long-term traffic loading. The horizontal permeability of OGFC and the permeability reduction due to rutting are important considerations in hydrological design; these can be evaluated using the testing equipment and procedure proposed in this study.

Conflicts of Interest: The authors declare no conflict of interest.
8,052.8
2020-08-10T00:00:00.000
[ "Materials Science" ]
Stiffer Bonding of Armchair Edge in Single-Layer Molybdenum Disulfide Nanoribbons

Abstract

The physical and chemical properties of nanoribbon edges are important for characterizing nanoribbons and applying them in electronic devices, sensors, and catalysts. The mechanical response of molybdenum disulfide nanoribbons, which is an important issue for their application in thin resonators, is expected to be affected by the edge structure, although this effect has not yet been reported. In this work, the width-dependent Young's modulus is precisely measured in single-layer molybdenum disulfide nanoribbons with armchair edges using a nanomechanical measurement developed for the transmission electron microscope. The Young's modulus remains constant at ≈166 GPa above 3 nm width, but is inversely proportional to the width below 3 nm, suggesting a higher bond stiffness for the armchair edges. Supporting the experimental results, the density functional theory calculations show that buckling causes electron transfer from the Mo atoms at the edges to the S atoms on both sides, which increases the Coulomb attraction.

Introduction

The nanoribbons of 2D materials are expected to exhibit unanticipated functionalities due to their unique electronic, mechanical, and optical properties. Research on these nanoribbons has been vigorously pursued, from fundamental understanding to application, as new devices are devised using the nanoribbons of graphene, molybdenum disulfide (MoS2), etc. [1-4] Edges influence the electronic structure of a nanoribbon [7-15], such that graphene nanoribbons with armchair edges have a band gap caused by quantum confinement, while those with zigzag edges have a band gap caused by spin polarization at the edges. [16,17] However, very few experimental studies have been conducted to reveal the bond stiffness at the edge [20,21], due to technical difficulties: the bond stiffness must be measured precisely while the atomic structure is observed simultaneously. MoS2 nanoribbons attract much attention because of their high chemical stability and stiffness, intrinsic band gap, and other characteristics. [22,23,34] Through first-principles calculations, the bandgap of single-layer MoS2 (SLMoS2) is predicted to decrease as the tensile strain increases, [35] such that the transition from a direct to an indirect gap occurs at 0.01 strain, and that from semiconductor to metal occurs at 0.10 strain. [36] In short, SLMoS2 tunes its electronic and optical properties through mechanical deformation. Considering that it is expected to be utilized for fabricating transistors that exhibit extremely high on/off ratios and very low power dissipation, [37-41] the mechanical properties of SLMoS2 must be clearly understood. [44-48] In one study, an exfoliated SLMoS2 was transferred onto a patterned substrate containing a series of circular holes. Its mechanical properties were then measured by nanoindentation based mainly on AFM. In 2011, Bertolazzi et al. [42] estimated the effective modulus of SLMoS2 as 270 ± 100 GPa through AFM indentation tests. Their value was in good agreement with the 210 GPa Young's modulus predicted using the first-principles density functional theory (DFT) calculations reported by Cooper et al.
[49] in 2013. The Young's modulus, breaking strength, and friction coefficient of SLMoS2 can be measured by AFM indentation tests. However, MoS2 film prestretching, caused by the internal strain between the layer and the substrate induced in the transfer process, should be considered because it may lead to a large error in the measured Young's modulus. [50] In addition, the uniaxial elastic modulus could not be measured because the AFM probe was pressed at the center of the suspended SLMoS2 in the circular hole. Thus, the size and structure dependence of the Young's modulus of SLMoS2 nanoribbons could not be obtained through the AFM nanoindentation tests.

The size and edge-structure dependence of the Young's modulus of SLMoS2 nanoribbons have mainly been investigated using molecular dynamics simulations. Jiang et al. [51] reported that the Young's modulus for both zigzag- and armchair-edge SLMoS2 (Arm-SLMoS2) decreased as the width decreased. However, Bao et al. [52] reported the opposite tendency and showed that the elastic modulus of Arm-SLMoS2 increased as the width became narrower. These conflicting theoretical results called for an experimental study. [55,56] The LER (length-extension resonator) made from quartz with a high Young's modulus has a high resonant frequency of ≈1 MHz due to its elongated shape, which is effective for noise reduction. It also has a high Q-factor that can sufficiently suppress the dissipated energy during the measurements, resulting in high accuracy. The equivalent spring constants of the nanoribbon can be measured with at least one order of magnitude higher accuracy by the LER compared to a conventional Si cantilever. In situ transmission electron microscopy (TEM) observation also offers the possibility of identifying the structure of an ultra-narrow nanoribbon, which is suitable for estimating the edge contribution. The size and shape of nanocontacts (NCs) and their supporting bulk parts are determined by TEM observations. Hence, the Young's moduli of gold (Au) [55] and platinum (Pt) [56] NCs were previously measured precisely by removing the contribution of the bulk parts from the experimental data. We thought that this nanomechanical measurement meets the requirements for measuring the bond stiffness at the nanoribbon edges.

In this work, we precisely measure the bond stiffness of the MoS2 nanoribbon edges. To the best of our knowledge, this is the first work to perform a precise measurement of the width-dependent Young's modulus of the Arm-SLMoS2 nanoribbon through in situ TEM observation using a transmission electron microscope holder equipped with an LER. Our experimental results show that the Young's modulus of Arm-SLMoS2 is inversely proportional to the width, indicating that the armchair-edge bonding is stiffer than the interior. These experimental results are explained by the buckling of the Mo─S bonding at the armchair edge found in the DFT calculations.
Preparation and Characterization of Multilayer MoS2 Flakes

The samples for the in situ TEM observation were prepared as shown in Figure 1a. First, a 200-mesh TEM grid was cut in half and coated with conducting silver (Ag) paste on the grid bars. Second, a block of natural MoS2, which was a few hundred micrometers thick, was adhered to a double-sided adhesive tape on a glass slide and repeatedly peeled off to thin it using Scotch tape. Third, the prepared half TEM grid coated with Ag paste was adhered to a MoS2 sheet. The half TEM grid was removed with tweezers once the Ag paste was cured. Accordingly, MoS2 flakes with a layer thickness ranging from a few to tens of layers remained at the half TEM grid edge. Finally, the prepared TEM grid was adhered to a copper plate fixed at the head of our homemade TEM holder.

The quality of the natural MoS2 block was characterized via X-ray diffraction, in which the peak positions indicated the MoS2-2H structure (Figure S1, Supporting Information). The size and quality of the exfoliated MoS2 flake were further evaluated using a 200 kV transmission electron microscope, JEM-ARM200F. The suspended MoS2 flake had a rectangular shape with 3.5 μm length and 2.8 μm width (Figure 1b). The annular dark-field scanning TEM (ADF-STEM) images and the corresponding fast-Fourier transformation (FFT) pattern (Figure 1c) showed that the flake had a highly crystalline, multilayer structure. The lattice constant estimated from an ADF-STEM image was ≈0.32 nm, consistent with the MoS2 structure (a = 0.312 nm) (Figure 1d). Considering that the flakes are often folded at the edges, the layer number of the MoS2 flake was determined from the folded flake edge in the TEM image. In Figure S2 (Supporting Information), the MoS2 flake shows eight clear parallel dark lines, indicating eight layers. The measured spacing of these dark lines was 0.65 nm, which matched the bulk MoS2 layer spacing in Figure 1d.

Fabrication and Observation of the Single-Layer MoS2 Nanoribbons

The SLMoS2 nanoribbon was produced by peeling the outermost layer of the folded edge of a multilayer MoS2 flake with an approaching tungsten (W) tip (Movie S1, Supporting Information). [59] The space between the layers at the outer sides of the folded MoS2 flake increased, making it easier for the layers to separate from each other. [60] This suggests that the outermost layer of the folded MoS2 flake can be peeled off as SLMoS2 by the W tip.

Figure 2a illustrates the peel-off process. First, the edge of the small multilayer MoS2 nanoflakes was identified through the TEM images in Figure 2b. These small nanoflakes may be formed during the exfoliation of the MoS2 block. Next, the W tip was moved toward the edge of the multilayer MoS2 nanoflakes and attached to their outermost single layer (Figure 2c). The W tip was moved at an inclined angle after making contact. Consequently, the SLMoS2 followed the W tip and was gradually peeled off from the multilayer MoS2 flakes (Figure 2d). Finally, the SLMoS2 nanoribbon was fabricated, and its orientation was tilted to identify its width and edge structures by slightly tuning the W tip position (Figure 2e). Supporting Information Movie S1 shows the whole fabrication process of the SLMoS2 nanoribbons. The number of layers was confirmed by the side-view TEM images of the SLMoS2 nanoribbons in Figures S3 and S4 and Movies S2 and S3 (Supporting Information).
However, the observation showed that SLMoS2 above 10 nm width could hardly be pulled off. We think that the outermost monolayer may have had defects, including cracks, [47,61] introduced during the flake folding process, so that only part of the layer was pulled out with the W tip, resulting in a thin nanoribbon. If no defects were produced, pulling out the layer with the W tip would have been difficult because it was strongly adsorbed to the flake body. Choosing the folded MoS2 flake and controlling the peeling direction of the W tip made it possible to fabricate armchair- or zigzag-edge SLMoS2 nanoribbons (Figure S5, Supporting Information). Few-layer nanoribbons can also be fabricated, as depicted in Figure S6 and Movies S4 and S5 (Supporting Information). However, at present, we cannot precisely give the mechanical parameters for fabricating MoS2 nanoribbons with a specified number of layers. We will further analyze our data and try to clarify the experimental parameters for preparing MoS2 nanoribbons with different layer numbers in future work. In this study, we focused on Arm-SLMoS2 nanoribbons with widths ranging from 5.15 to 1.13 nm to investigate the effect of the edge or surface on the mechanical properties of the MoS2 nanoribbons.

In Situ TEM Experiment on the Single-Layer MoS2 Nanoribbons

The in situ TEM experiment on the SLMoS2 nanoribbon was set up as shown in Figure 3a. A quartz LER was used to estimate the Young's modulus of the MoS2 nanosheets (Figure S4, Supporting Information). The W tip was made to approach and establish contact with the ≈3.7 nm-wide SLMoS2 nanoribbon for the stiffness measurement. The TEM image in Figure 3bA illustrates the W tip in contact with the SLMoS2 nanoribbon. They were separated, as in the TEM image in Figure 3bB, when the W tip was pulled back (Movie S6, Supporting Information). This SLMoS2 nanoribbon revealed armchair-edge structures, which can be identified by a reciprocal lattice spot in the FFT pattern shown in Figure 3bB.

In Figure 3c, the nanoribbon stiffness was measured simultaneously with the TEM observation. The average stiffness was found to be ≈60 N m−1 when the W tip was in contact with the nanoribbon in Stage A. It became 0 when the W tip was removed from the nanoribbon in Stage B. Figure 4 exhibits the TEM images of the Arm-SLMoS2 nanoribbons with four different widths of 5.15, 3.85, 2.14, and 1.13 nm. The nanoribbons in Figure 4a-d were fabricated by peeling off the outermost layer from the flakes along the direction parallel to the armchair edge to make an armchair-edge single-layer nanoribbon. These armchair edges were confirmed by the corresponding FFT patterns showing that the nanoribbon axis was parallel to the {100} reciprocal lattice vector (Figure S8, Supporting Information). Each nanoribbon configuration seemed stable and without obvious defects, because neither the shape and contrast in the TEM images nor the time evolution of the stiffness changed (Figure 4e-h).

The irradiation damage to the nanoribbons seemed negligible. [62] In the case of 2D materials, the knock-on damage caused by electron irradiation creates vacancies or structural changes at acceleration voltages above 100 kV. However, this damage is practically recovered at temperatures higher than room temperature.
[63] In this experiment, we supposed that the temperature may be raised by current annealing with a 10 mV bias voltage to maintain the original structure. The corresponding average stiffnesses were measured as 211, 146, 113, and 97 N m−1. The Young's moduli of 22 Arm-SLMoS2 nanoribbons with different widths were also obtained, including these four nanoribbons.

Young's Modulus of the Armchair-Edge Single-Layer MoS2 Nanoribbons

Note that the measured stiffness (k_m) includes contributions from the Arm-SLMoS2 nanoribbon (k_ribbon), the MoS2 flake (eight layers in thickness) connected with the nanoribbon (k_flake), and the W tip (k_W). The measured stiffness (k_m) is a series coupling of these three stiffness values:

1/k_m = 1/k_ribbon + 1/k_flake + 1/k_W (1)

Precisely estimating the Young's modulus of the Arm-SLMoS2 nanoribbon required removing the contributions of the MoS2 flake and the W tip supporting the Arm-SLMoS2 nanoribbon from the measured stiffness.

We estimated the MoS2 flake stiffness from its dimensions. The flake had a length, width, and thickness of 3.5 μm, 2.8 μm, and 5.2 nm (eight layers), respectively (Figure 1b and Figure S2, Supporting Information). The Young's modulus of suspended MoS2 nanosheets with five to 25 layers was 330 ± 70 GPa. [50] Hence, the MoS2 flake stiffness (k_flake) was calculated as 1373 N m−1 using [55]

k = Y·w·d/L (2)

where Y, w, L, and d correspond to the material's Young's modulus, width, length, and thickness, respectively. The stiffness of the W wire (k_W), including the connection part with the MoS2 nanoribbon, was calculated to be on the order of 10^5 N m−1. Figure S9 and Tables S1 and S2 (Supporting Information) present the calculation details. The W wire tip cut using pliers was not as sharp as one made by chemical etching. The aspect ratio of the length to the wire diameter at the connection part with the nanoribbon (Figure S9, Supporting Information) was small. Therefore, the inverse of k_W could be ignored in Equation (1). The Arm-SLMoS2 nanoribbon stiffness was obtained by removing the MoS2 flake contribution (k_flake = 1373 N m−1). The Young's modulus of each nanoribbon was estimated using Equation (2), in which the length was 2 nm and the widths were 5.15, 3.85, 2.14, and 1.13 nm, as depicted in the TEM images in Figure 4. Figure 5 displays the Young's modulus of the nanoribbons, which was inversely proportional to the width of the Arm-SLMoS2 nanoribbon below 3 nm. The Young's modulus increased from 179 ± 8 to 215 ± 11 GPa as the width decreased from 2.39 to 1.13 nm. By contrast, it was almost constant at ≈165 GPa above 3 nm width. The values in the present results were slightly lower than those obtained in the previous studies (i.e., 270 ± 100 GPa [42] and 185 ± 46 GPa [49]) that performed AFM indentation tests. The differences from our results can be attributed to the different measurement methods, considering that the Young's modulus was estimated under biaxial tensile stress in the AFM indentation tests but measured under uniaxial stress along the armchair edges in this work. Akhter et al. [64] and Hung et al.
[65] pointed out that the experimental results for the biaxial elastic modulus in AFM indentation tests were higher than the simulation results for the uniaxial elastic modulus. The Young's modulus of SLMoS2 under uniaxial tension had previously been reported only in theoretical calculations [51,66-68] due to experimental difficulty. To the best of our knowledge, we are the first to report experimental results for the width-dependent Young's modulus of Arm-SLMoS2 nanoribbons under uniaxial tensile stress. Note that the Young's modulus measurements for Arm-SLMoS2 nanoribbons of different lengths showed almost the same width dependence as those for the nanoribbons of 2 nm length (Figure S10, Supporting Information). Thus, the length of the nanoribbon did not matter for the obtained width dependence of the Young's modulus.

DFT calculations for the Arm-SLMoS2 nanoribbon were performed to understand the reason behind the width-dependent Young's modulus. Since the length of the nanoribbon did not matter for the width dependence, we assumed infinite periodicity in the length direction. In these calculations, a supercell with a sufficiently wide area perpendicular to the nanoribbon axis, with 50 Å width and 20 Å thickness, was prepared to verify the edge effects. Assuming a uniform distortion, the size of the supercell along the nanoribbon axis direction was changed from 5.363 to 5.603 Å in 0.02 Å steps to evaluate the stiffness. The stiffness was obtained from the second-order derivative of the calculated energy with respect to the strain at the minimum. The Young's modulus was evaluated by considering the Arm-SLMoS2 nanoribbon dimensions (Supporting Information Sections S8 and S9).

Figure 5 shows that the calculated Young's modulus of the Arm-SLMoS2 nanoribbons, represented by red spheres in the figure, increases with decreasing width, the same tendency as in the experimental results. The calculated Young's modulus decreased from 201 to 180 GPa as the width increased from 0.97 to 2.89 nm. It remained constant at ≈179 GPa for widths above 2.89 nm. [71] The measured Young's modulus was slightly lower than the calculated one when the nanoribbons were wider than 3 nm. However, the ultra-narrow nanoribbons with widths below 3 nm showed a Young's modulus similar to the theoretical one, suggesting a reduction in defects. This result is in agreement with previous works showing fewer defects for narrow nanoribbons. [47,61]

Interpretation of the Width-Dependent Young's Modulus

Figure 6 shows the charge distribution of the Arm-SLMoS2 nanoribbon obtained through the DFT calculations. The electron density isosurfaces of 7 × 10−2 e Å−3, indicated by brown closed surfaces in the figure, show that more electrons were accumulated on the edge S atoms of the Arm-SLMoS2 nanoribbon than on the interior S atoms. This result suggests that electrons are transferred from the edge Mo atoms of the Arm-SLMoS2 nanoribbon to the S atoms on both sides. The optimized geometry confirmed that the edge Mo atom was buckled. Dimerization is known to form a 2 × 1 reconstruction on the Si (001) surface.
[72] The dimer is further reduced in energy when one Si atom buckles to take on an asymmetric atomic configuration. This is attributed to the buckling eliminating the s- and p-orbital hybridization, creating an s-like state in the atom displaced toward the surface and a p-like state in the atom displaced toward the substrate, such that the electrons move to the atom displaced to the surface side. As an analogy to this asymmetric dimer, we think that buckling caused the electrons to be transferred from the edge Mo ion to the S ions, albeit the bonding between the Mo and S ions is a mixture of ionic and covalent natures. [73] The DFT calculations gave a Mo─S bond length of 2.31 Å at the edge and 2.43 Å in the interior, consistent with the results of previous studies. [74] The calculated Mulliken charges of the edge Mo and S atoms, indicated by Mo1 and S7,8 in Figure S13 (Supporting Information), were 0.39 and −0.20, respectively. The charge difference between the edge Mo and S atoms was obviously larger than that between the internal Mo (0.23, indicated by Mo2-Mo6 in Figure S13, Supporting Information) and S (−0.11 to −0.12, indicated by S9,10-S17,18 in Figure S13, Supporting Information) atoms. The increase in the Mulliken charge of the edge S atoms relative to the interior S atoms (0.160) was almost the same as the decrease for the edge Mo atoms relative to the interior Mo atoms (0.157). In other words, the charge was mainly transferred from the edge Mo atoms to the edge S atoms, which is consistent with the result in Figure 6. The Coulomb attraction in the Mo─S bond at the edge may be greater than that in the Mo─S bond inside the nanoribbon, and the Mo─S covalent interaction may have been reduced by the charge transfer at the edge. The Mo─S covalent interaction may also be enhanced by the unsaturated bonds of the edge S atoms. The edge Mo and S ions of the Arm-SLMoS2 nanoribbon are easily displaced because of their low coordination numbers; hence, we think that the Coulomb attraction shortened the bond length and enhanced the stiffness.

With the above discussion, we may conclude that the Mo─S bond at the edge can be stiffer than that in the interior. The smaller the nanoribbon width, the greater the edge effect, which is consistent with the experimental result that the width dependence of the Young's modulus is more pronounced below 3 nm width. The ratio of the edge atoms, including the edge atoms of both the four- and five-membered rings, to the interior atoms was approximately 21% when the width was narrower than ≈3 nm (corresponding to nine six-membered rings in width), which is a non-negligible value.
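Before the conclusions, the stiffness-extraction step used in the DFT calculations above lends itself to a short sketch: fit the computed energy-strain curve near its minimum, take the second derivative, and divide by an equilibrium cell volume to obtain a modulus. The synthetic energy curve, its curvature, and the cell dimensions below are invented for illustration only.

```python
import numpy as np

# Scan the supercell length, fit E(strain) near its minimum, and take the
# second derivative to get the stiffness, as described in the text.
strain = np.linspace(-0.02, 0.02, 9)
energy = 0.5 * 1200.0 * strain**2 + 1e-4 * strain**3   # eV per supercell, synthetic

c2 = np.polyfit(strain, energy, 3)[1]    # coefficient of strain**2
d2E = 2.0 * c2                           # d^2E/d(eps)^2 at the minimum, eV

# Young's modulus Y = (1/V0) * d2E/d(eps)^2, with V0 the equilibrium volume.
EV_TO_J = 1.602176634e-19
V0 = 5.48e-10 * 3.0e-9 * 0.65e-9         # m^3: cell length x width x thickness (assumed)
Y = d2E * EV_TO_J / V0
print(f"Y ~ {Y / 1e9:.0f} GPa")          # ~180 GPa for these invented inputs
```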
Conclusion

The width dependence of the Young's modulus of armchair-edge single-layer MoS2 (Arm-SLMoS2) nanoribbons was investigated herein through in situ TEM observation, which allowed us to obtain structural information while simultaneously measuring the stiffness. Arm-SLMoS2 nanoribbons can be fabricated by peeling the outermost MoS2 layer from a folded MoS2 flake. The Young's modulus of the Arm-SLMoS2 nanoribbons used in this work was estimated precisely by removing, from the measured stiffness, the contributions of the flake and the W tip that support the nanoribbon. The Young's modulus was inversely proportional to the width of the Arm-SLMoS2 nanoribbon. That is, the Young's modulus remained almost constant at ≈166 GPa when the ribbon width was above 3 nm and clearly increased from 179 to 215 GPa when the ribbon width decreased from 2.4 to 1.1 nm. This dependence was well reproduced by the DFT calculations, which revealed that the Mo─S bonds at the armchair edge are stiffer than those in the interior due to buckling. The edge effect enhanced and dominated the Young's modulus of the armchair-edge SLMoS2 nanoribbon as the width decreased, especially at widths smaller than ≈3 nm. In conclusion, the edges play an important role in the mechanical properties of SLMoS2 nanoribbons.

Experimental Section

Developed TEM Holder and Measurement System: Figure S7a (Supporting Information) depicts the head part (sample stage) of the developed in situ TEM holder. The left side of the figure shows the prepared MoS2 flakes fixed on the copper plate, while the right side illustrates a 10 μm-diameter tungsten (W) tip attached to the end of the LER with Ag paste. The W tip position was controlled to approach and establish contact with the MoS2 flake edge using a compact ultrasonic linear motor (TULA50, Technohands) for coarse motion and a tube piezo for fine motion.

Figure S7b (Supporting Information) shows the stiffness measurement system. The LER was induced to oscillate at its resonance frequency (f0) by applying an excitation voltage to one of its electrodes (blue color, Figure S7, Supporting Information). The MoS2 nanoribbon stiffness (k) was obtained from the resonance frequency shift (∆f) as k ≈ 2k0(∆f/f0). The resonance frequency was determined by the total stiffness corresponding to the serial coupling of the LER stiffness and the stiffness of the sample in contact with the LER (frequency modulation method). [75] Supporting Information Section S4 provides details on the measurement methods.

Cleaning: Prior to the in situ TEM experiment, the sample mounted in the TEM holder was baked at ≈100 °C for at least 24 h in a vacuum chamber to remove contamination from the prepared multilayer MoS2 flake as much as possible.

TEM Observation: High-resolution TEM observations were conducted using an ultra-high-vacuum transmission electron microscope (JEM-2000VF) at a 200 kV accelerating voltage at room temperature. The ultra-high-vacuum conditions (≈1 × 10−6 Pa) inside the TEM column were effective in avoiding contamination and gas adsorption onto the sample. The TEM images were captured by a charge-coupled device camera at 0.2 s intervals while the stiffness was simultaneously measured.
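A minimal sketch of the full stiffness pipeline described above may be useful: the LER frequency shift gives the measured stiffness via k ≈ 2k0(∆f/f0); the flake contribution is then removed through the series coupling of Equation (1); and Equation (2), k = Y·w·d/L, is inverted for the ribbon's Young's modulus. The spring constant k0 and the frequency shift are assumed numbers (chosen so the measured stiffness lands near the 211 N m−1 example), and the 0.65 nm effective thickness is an assumption based on the measured layer spacing, so the printed modulus is indicative only and does not exactly reproduce the reported values.

```python
K_FLAKE = 1373.0        # N/m, flake stiffness quoted in the text
THICKNESS = 0.65e-9     # m, single-layer MoS2 (assumed effective value)
LENGTH = 2.0e-9         # m, ribbon length used in the paper

def measured_stiffness(k0, f0, df):
    # Frequency-modulation readout: k_m ~ 2 * k0 * (df / f0)
    return 2.0 * k0 * (df / f0)

def ribbon_stiffness(k_m):
    # Equation (1): 1/k_m = 1/k_ribbon + 1/k_flake (+ 1/k_W, negligible: ~1e5 N/m)
    return 1.0 / (1.0 / k_m - 1.0 / K_FLAKE)

def youngs_modulus(k_m, width_m):
    # Equation (2) inverted for the ribbon: Y = k_ribbon * L / (w * d)
    return ribbon_stiffness(k_m) * LENGTH / (width_m * THICKNESS)

k_m = measured_stiffness(k0=5.4e5, f0=1.0e6, df=195.0)   # ~211 N/m
print(f"k_m ~ {k_m:.0f} N/m, Y ~ {youngs_modulus(k_m, 5.15e-9) / 1e9:.0f} GPa")
```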
Figure 1. Multilayer MoS2 sample preparation and characterization: a) schematic illustration of the MoS2 flake preparation; b) TEM image of a suspended multilayer MoS2 flake; c) ADF-STEM and higher-magnification (yellow square) images of the MoS2 flake with a white atom contrast showing a 0.319 nm lattice constant (inset: corresponding fast-Fourier transformation pattern); and d) side and front views of the multilayer MoS2 flake model, with the adjacent layers stacked with a 60° rotation. The measured layer spacing was 0.65 nm.

Figure 2. Fabrication process of the single-layer MoS2 nanoribbons: a) schematic of the fabrication method of the SLMoS2 nanoribbon from a multilayer MoS2 flake during TEM and b-e) TEM images captured in time sequence from Movie S1 (Supporting Information) showing the fabrication process.

Figure 3. a) Schematic illustration of the in situ TEM experiment on the SLMoS2 nanoribbon. b) TEM images captured during (A) and after (B) the stiffness measurement. The top side of the TEM images depicts a multilayer MoS2 flake with a peeled-off SLMoS2 nanoribbon. The bottom side displays the W tip used to make contact with the SLMoS2 nanoribbon. The lower-right image shows the corresponding FFT pattern. c) Typical variations in the SLMoS2 nanoribbon stiffness during the measurement. A and B correspond to the captured moments in (b).

Figure 4. a-d) TEM images and e-h) corresponding measured stiffness of the armchair-edge SLMoS2 nanoribbons with 5.15, 3.85, 2.14, and 1.13 nm widths. The TEM image tended to blur as the nanoribbon became thinner due to the mechanical vibration caused by the noise from the voltage applied to the piezo.

Figure 5. Width dependence of the Young's modulus of the Arm-SLMoS2 nanoribbons. The black squares with error bars represent the experimentally measured values. The red spheres indicate the values simulated by the DFT calculations (Supporting Information Sections S8 and S9).

Figure 6. Top a) and side b) views of the deformation electron density isosurfaces of the Arm-SLMoS2 nanoribbon with a finite width (four or five rings). The top view shows the x-y plane. The side view depicts the y-z cross-sections. The blue spheres in the model represent the Mo atoms, while the yellow ones represent the S atoms. The brown closed surfaces depict the 7 × 10−2 e Å−3 electron density isosurfaces. Note that SLMoS2 comprises S, Mo, and S layers with different heights.
6,246.8
2023-09-11T00:00:00.000
[ "Materials Science" ]
Engineering Feasibility of Building Blocks Produced from Recycled Rice Husks

The abundance of rice husks in Nigeria requires the consideration of their alternative economic uses to prevent environmental pollution from the waste heaps, litter, and combustion. This study focused on determining the feasibility of blocks made from recycled rice husks for building construction. Twenty-four absolute cubes were moulded from a mixture of fine aggregate (sand), binder (cement), and water. These were used for control experiments. In addition, 144 cubes in which sand was partially replaced with rice husks in steps of 10, 20, 30, 40, 50, and 60% were produced and cured for 7, 14, 21, and 28 days like the absolute cubes. They were weighed and tested in triplicate for several engineering properties, including compressive strength. The average values of the triplicate readings were recorded and documented. Laboratory strength results on the 28th day were compared with the reference strength for sandcrete blocks provided in the Federal Building Code to ascertain the performance of the partial sandcrete cubes. The low maximum compressive strength of 0.54 N/mm², obtained at 30% replacement and 28 days of curing, showed that rice husks were not feasible for replacing fine aggregate in sandcrete blocks at the percentages tested. This strength value is far less than the minimum allowable compressive strength of 1.75 N/mm² for individual blocks provided in the Federal Building Code.

Introduction

Waste heaps from rice husks generate serious environmental disturbance in areas where rice is produced and processed and where the wastes are disposed of. Stakeholders are usually bothered about the disposal of these husks from the environment and ignore the economic benefits accruable from the wastes. Owners of rice mills do not see economic gain in rice husks; therefore, they give them away to free their environments of these wastes. However, Chukwudebelu et al. (2015), Opara (2006), Opeyemi and Makinde (2012), and Nicholas and Folorunsho (2012) have proven the economic and housing benefits of rice husks in the building industry through their research on recycling rice husks in some forms, especially as rice husk ash (RHA). Carter et al. (1982) worked on the incorporation of unground rice husks into handmade, kiln-fired bricks. They measured properties such as density, compressive strength, modulus of rupture, water absorption, and initial rate of absorption, and concluded that it was possible to substitute up to 50% rice husks (by volume of clay) into bricks without dropping the properties of the bricks outside the acceptable limits in developing countries.

Rice husks, also known as rice hulls, are the hard protective coverings of rice grains. They are by-products of threshing paddy and constitute close to 20% of the dry mass of harvested paddy. They are made up of about 50% cellulose, 23-35% lignin, and 15-20% silica. These husks are economically and readily available in Nigeria. Research on recycling wastes for building-material production needs urgent attention because a large demand has been placed on the building-materials industries owing to the increase in population and rising prices, resulting in a shortage of building materials.
Sandcrete block, commonly used in Nigeria, is defined in the Federal Building Code (2006) as a homogeneous mixture of composite material made up of cement, sharp sand, and water (Anosike and Oyebade, 2012), with 2.00 N/mm² (300 psi) as the average standard strength of blocks and 1.75 N/mm² (250 psi) as the lowest strength of an individual block. This strength, if achieved through research on recycling wastes for the production of building blocks for the construction industry, will go far in saving cost and speeding up national development, which requires the availability of good shelter for the optimal productivity of citizens. Therefore, this work studied the suitability of rice husks for producing low-cost and lightweight construction blocks for the building industry instead of the more expensive sandcrete blocks in current use. This is also in pursuit of the use of environmentally friendly, low-cost, and lightweight materials of the required standard in the building industries.

Materials

The materials include rice husks from a rice mill in Auchi, Edo State of Nigeria; sharp sand (fine aggregate) with a sieve size of 3.35 mm, a moisture content of 0.85%, a specific gravity of 2.64, and a coefficient of uniformity of 2.91, free from loam, organic matter, clay, dirt, and any chemical matter; binder (ordinary Portland cement); and potable water.

2.1.1. Production of samples

Following the provisions of the Federal Building Code (2006), cement and sand were properly mixed at a ratio of 1:8 to achieve an evenly coloured, consistent mixture. An adequate volume of water was added to ensure a mixture of adequate workability. In the same way, rice husks were introduced in different percentages (10, 20, 30, 40, 50, and 60%) to produce some blocks. For this, cement and rice husks were properly mixed to achieve a uniform colour. Water was added in an adequate proportion to ensure mixture workability before moulding with a mould of 100 mm × 100 mm × 100 mm dimensions.

Methods

Twenty-four absolute cubes were moulded from a mixture of fine aggregate (sand), binder (cement), and water. These were used for control experiments. In addition, 144 cubes in which sand was partially replaced with rice husks in steps of 10, 20, 30, 40, 50, and 60% were produced and cured for 7, 14, 21, and 28 days like the absolute cubes. They were weighed and tested in triplicate for several engineering properties, including compressive strength. The average values of the triplicate readings were recorded and documented. Laboratory data from the tested cubes were analyzed, and the compressive strengths on the 28th day were compared with the reference strength for sandcrete blocks provided in the Federal Building Code (2006) to ascertain the performance of the partial sandcrete cubes.

Results and Discussion

The compressive strength results are displayed in Figures 1 and 2. Figure 1 centred on the variation of the control cubes' strengths with curing age. Figure 2 centred on the comparison of the compressive strengths of cubes with partial rice husk replacement with that of the control cube at a curing age of 7 days. This comparison was significant for determining the relationship between the strengths of the partial sandcrete cubes and the 7-day strength of the control cubes. It showed a great increase in the strength of the control cubes over the partial sandcrete cubes.
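As an aside, the strength figures discussed below follow directly from the cube geometry (failure load over the 100 mm × 100 mm loaded face). A minimal sketch of that computation and of the Federal Building Code (2006) check, using hypothetical failure loads, is given here.

```python
# Illustrative check of cube compressive strength against the Federal
# Building Code (2006) limits quoted above (1.75 N/mm^2 per block,
# 2.00 N/mm^2 average). The failure loads below are hypothetical.

CUBE_FACE_MM2 = 100.0 * 100.0
MIN_INDIVIDUAL = 1.75   # N/mm^2
MIN_AVERAGE = 2.00      # N/mm^2

def strength(load_n):
    # Compressive strength = failure load / loaded face area
    return load_n / CUBE_FACE_MM2   # N/mm^2

loads_n = [5400.0, 5100.0, 4800.0]          # triplicate failure loads, N (assumed)
strengths = [strength(p) for p in loads_n]
avg = sum(strengths) / len(strengths)
ok = all(s >= MIN_INDIVIDUAL for s in strengths) and avg >= MIN_AVERAGE
print(f"strengths = {[round(s, 2) for s in strengths]}, mean = {avg:.2f}, pass = {ok}")
```

With loads of this magnitude the computed strengths fall near the 0.54 N/mm² level reported below, so the check fails, consistent with the paper's conclusion.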
There was an increase in compressive strength with age, but a decrease in strength as the percentage of rice husks increased from 10 to 20%, a rise from this point to a peak strength at 30% replacement, and a final decrease in strength with further replacement of sand in the cubes. Cubes with rice husks attained a maximum strength of 0.54 N/mm² at a percentage replacement of 30% on the 28th curing day. This strength was found to be far less than the minimum strength of 1.75 N/mm² specified for sandcrete blocks in the Federal Building Code (2006). As shown in Figure 3, absolute sandcrete absorbed less water in comparison with partial sandcrete in which sand was replaced with rice husks. The water absorption rate increased with the percentage replacement with rice husks. As discovered in previous work by Subramani and Ravi (2015), the higher the water absorption capacity of a sandcrete block, the weaker the block, and vice versa. The bond between blocks and mortar is highly dependent on their water absorption capacities. Thus, if the water absorption rate of a block is high, it absorbs water from freshly laid mortar, and this ultimately results in weak strength (Subramani and Ravi, 2015). A good number of the cubes produced with partial replacement of sand with rice husks had water absorption capacities above the maximum 12% specified in the Federal Building Code (2006).

The bulk densities of the control sandcrete cubes were far greater than those of the cubes produced with partial replacement of sand, and they decreased with curing age. The bulk density of the absolute sandcrete cube at day 7 was 1992 kg/m³, while that of the cube with 10% replacement at a curing age of 14 days was 1512 kg/m³, the highest value for partial sandcrete. Bulk densities were found to decrease with increasing percentage replacement of sand with rice husks. The control samples were above the minimum of 1500 kg/m³ in the British Standards Institution (BSI, 2002) specification, and cubes with 10% rice husk replacement at curing ages of 7 and 14 days were also above the minimum required bulk density of 1500 kg/m³.

Conclusions

This work showed that rice husks were not feasible for replacing sand in sandcrete blocks at the percentages of substitution studied. This became obvious from the very low maximum compressive strength value of 0.54 N/mm² for blocks with rice husks. This value is far below the minimum acceptable compressive strength of 1.75 N/mm² for individual blocks provided in the Federal Building Code (2006), and it occurred at 30% replacement and a 28-day curing age. More water was required for mixing partial sandcrete than for mixing control sandcrete. Partial sandcrete required more binder (cement) than the control sandcrete.
2,066.4
2019-10-01T00:00:00.000
[ "Engineering", "Environmental Science" ]
Surface Anchoring Effects on the Formation of Two-Wavelength Surface Patterns in Chiral Liquid Crystals

We present a theoretical analysis and linear scaling of two-wavelength surface nanostructures formed at the free surface of cholesteric liquid crystals (CLCs). An anchoring model based on the capillary shape equation with a high-order interaction of the anisotropic interfacial tension is derived to elucidate the formation of the surface wrinkling. We show that the main pattern-formation mechanism originates from the interaction between lower- and higher-order anchoring modes. A general phase diagram of the surface morphologies is presented in a parametric space of anchoring coefficients, and a set of anchoring modes and critical lines is defined to categorize the different types of surface patterns. To analyze the origin of the surface reliefs, the correlation between surface energy and surface nano-wrinkles is investigated, and the symmetry and similarity between the energy and surface profiles are identified. It is found that the surface wrinkling is driven by the director pressure and is annihilated by two induced capillary pressures. A linear approximation for cases with sufficiently small anchoring coefficients is used to reveal the intrinsic properties of, and relations between, the surface curvature and the capillary pressures. The contributions of the capillary pressures to surface nano-wrinkling and the relations between the capillary vectors are also systematically investigated. These findings establish a new approach for characterizing two-length-scale surface wrinkling in CLCs and can inspire the design of novel functional surface structures with potential optical, friction, and thermal applications.

Introduction

A variety of periodic surface structures and wrinkled textures are widely found in the plant and animal kingdoms [1-6]. Since these surface ultrastructures with micro/nanoscale features provide unique optical responses and iridescent colors [7-11], understanding their formation mechanism is crucial for realizing structural color in nature and for the biomimetic design of novel photonic systems. As similar nano/microscale periodic wrinkles are formed at the free surface of both synthetic and biological cholesteric liquid crystals (CLCs) [12,13], and CLC phases are widely found in nature and in living soft materials both in vivo and in vitro [13,14], nematic liquid crystal self-assembly has been proposed as the formation mechanism of helicoidal plywoods and the surface ultrastructures in many fibrous composites, ranging from plant cell walls to arthropod cuticles [15-19]. Moreover, it has been shown that the characteristics of chiral phases control the unique colors and optical properties exhibited by films and fibers made of cellulose-based CLCs [20,21].
Inspired by surface ultrastructure in Nature, engineered surface structures incorporating chiral nematic structures can be fabricated to mimic the unique optical properties.If the formation of the surface patterns can be efficiently captured by a rigorous model based on a CLC mesophase, we can elucidate the pattern formation mechanisms for the construction of biomimetic proof-of-concept prototypes.In our previous works [22][23][24], significant efforts have been made in formulating and validating theoretical models to explain the formation of surface wrinkles in a plant-based CLC as a model material system.We identified the chiral capillary pressure, known as director pressure, that reflects the anisotropic nature of CLC through the orientation contribution to the surface energy as the fundamental driving force in generating single-wavelength wrinkling.However, surface wrinkling in nature can include more complex patterns such as multiple-length-scale undulations [11,[25][26][27]. To elucidate this feature, we previously proposed a physical model [28,29] that combines membrane bending elasticity and liquid crystal anchoring.A rich variety of multi-scale complex patterns, such as spatial period-doubling and period-tripling are presented for the cases in which the anchoring and bending effects are comparable [28].In a recent communication [30], we briefly presented a pure higher order anchoring model in the absence of bending elasticity, surprisingly capturing multiple length-scale surface wrinkles.In this previous work, a novel mechanism for the formation of two-scale nano-wrinkling was proposed, which was exclusively based on anchoring energy including quartic harmonics.Here, we present a complete and rigorous new analysis of the multiple-length-scale surface wrinkles based on the pure higher order anchoring model in full detail and approximate the response of the surface structure to chirality and anchoring coefficients based on a linear model.In addition, a fundamental characterization of the capillary vector and capillary pressures required to connect surface geometry and mechanical forces is presented. The objective of this paper is to identify the key mechanisms that induce and resist the multiple-length-scale surface wrinkling in CLCs based on a pure higher order anchoring model.To develop the anchoring model, we used the generalized shape equation for anisotropic interfaces using the Cahn-Hoffman capillarity [31] and the Rapini-Papoular quartic anchoring energy [32].The presented model depicts the formation mechanism of two-length scale surface patterns based on the interaction between lower and higher order anchoring modes.The linear approximations of surface curvatures are derived to provide the explicit relations between the anchoring coefficients, helix pitch, and surface profile of the two-length scale wrinkles.These new findings can establish a new strategy for characterizing two-length scale surface wrinkling in biological CLCs, and inspire the design of novel functional surface structures with the potential optical, friction, and thermal applications. 
The organization of this paper is as follows. Section 2 presents the geometry and structure of the CLC system. Section 3 presents the governing nemato-capillary shape equation expressing the coupling mechanism between the surface geometry and the anisotropic ordering for CLC free interfaces with a quartic anchoring energy and a pure surface splay-bend deformation. Appendix A presents the details of the derivation of the Cahn-Hoffman capillary vector thermodynamics for CLC interfaces. Appendix B describes the capillary shape equation in terms of three capillary pressures. Appendix C presents the shape equation in terms of the driving and resisting terms. Section 4 analyzes the effect of the anchoring coefficients and helix pitch on the surface normal angle and the resulting surface profile. In this section, a general phase diagram of surface profiles in the parametric space of anchoring coefficients is presented, and the origin of the two length scales is revealed through the linear theory. Then, the linear approximations of the surface curvatures, assuming small values of the anchoring coefficients, are derived to identify the leading mechanism controlling the surface wrinkling. Appendix D proposes the analytical expression for the linear approximation of the surface relief. The surface energy associated with the CLC interface is also analyzed to establish an energy transfer mechanism from the anchoring energy of a flat surface into a wrinkled surface. Furthermore, the surface wrinkles are evaluated by analyzing the three capillary pressures, and pressure-curvature relations are introduced to explore the variation of the curvature profile with respect to the capillary pressures. Appendix E presents the derivation of the pressure-curvature relations. Finally, the capillary vectors are formulated to provide a clear physical explanation for the formation of the surface wrinkles. Appendix F formulates the capillary vectors. Section 5 presents the conclusions.

Geometry and Structure

Figure 1 depicts the schematics of the CLC structure, where ellipsoids indicate the fiber orientation in each parallel layer. We assume that the helix axis H is parallel to the surface; other, more complex structures occurring when the helix axis H is distorted are beyond the scope of this paper. The fiber orientation at the interface is defined by the director n. The pitch length P0 is defined as the distance over which the fibers undergo a 2π rotation. For a rectangular (x, y, z) coordinate system, the surface relief directed along the x axis can be described by a deviation y(x, z) from the xz plane. The amplitude of the vertical undulation is h(x). As the surface relief is constant in the z direction for a linear texture, the curvature in the z direction is zero. The unit tangent t and the unit normal k to the surface can be expressed with the normal angle ϕ: t(x) = (sin ϕ(x), −cos ϕ(x), 0), k(x) = (cos ϕ(x), sin ϕ(x), 0). L is the given system length in the x direction. The arc-length measure of the undulating surface is s.
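A minimal numerical sketch of this geometry may be helpful: given a trial normal-angle profile ϕ(x), the relief h(x) follows from the tangent t = (sin ϕ, −cos ϕ, 0), i.e., dh/dx = −cos ϕ/sin ϕ. The pitch and the small tilt amplitudes below are assumed values; including a second harmonic in ϕ produces a two-wavelength relief of the kind analyzed in this paper.

```python
import numpy as np

P0 = 1.0e-6                      # helix pitch, m (assumed)
q = 2.0 * np.pi / P0
eps, delta = 0.05, 0.02          # small tilt amplitudes, radians (assumed)

x = np.linspace(0.0, P0, 2001)
phi = np.pi / 2 + eps * np.sin(2 * q * x) + delta * np.sin(4 * q * x)
dhdx = -np.cos(phi) / np.sin(phi)    # slope from the tangent vector t

# Cumulative trapezoidal integration for h(x), with h(0) = 0.
h = np.concatenate(([0.0],
                    np.cumsum(0.5 * (dhdx[1:] + dhdx[:-1]) * np.diff(x))))
print(f"relief amplitude ~ {h.max() - h.min():.2e} m")
```

For these inputs the relief amplitude comes out at the nanometer scale with a micron-range wavelength, matching the qualitative picture in Figure 1.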
Governing Equations

In this paper, we assume that the multi-length-scale surface wrinkles are formed through modulation of the surface energy at the anisotropic-air interface of CLCs. The typical capillary shape equations, which are generalized forms of the Laplace equation including the liquid crystal order and gradient density, have been comprehensively formulated and presented previously for liquid crystal fibers, membranes, films, and drops [33]. Here, the coupling mechanism between the surface geometry and the CLC order is demonstrated through the capillary shape equation for CLC free interfaces with a pure surface splay-bend deformation.

Figure 1. Schematic of a cholesteric liquid crystal (CLC) and surface structures. H is the helix unit vector, and P0 is the pitch. The surface director has an ideal cholesteric twist in the bulk. The helix uncoiling near the surface creates a bend and splay planar (2D) orientation and surface undulations of nanoscale relief h(x) with micron-range wavelength P0/2. Adapted from [22].
The formation of surface nanostructures at CLC interfaces is a complex phenomenon involving interfacial tension, surface anchoring energy, and bulk Frank elasticity, which requires integrated multi-scale modelling of bulk and surface. However, the analytic solution of the problem with the usual formalism is very complicated. Here, we assume a cholesteric director field in the bulk region, n_b(x) = (0, cos θ, sin θ), and a splay-bend director field at the interface, n(x) = (cos θ, sin θ, 0), where θ = qx, q = 2π/P0, θ is the director angle, q is the wave vector, and P0 is the helix pitch.

Based on the generalized Rapini-Papoular equation [24], the interfacial surface energy γ between a liquid crystal phase and another phase can be described by [32]

γ(n, k) = γ0 + Σ_i μ_{2i} (n · k)^{2i},    (1)

where γ0 is the isotropic contribution, n is the director field at the interface, k is the surface unit normal, and μ_{2i} are the temperature/concentration-dependent anchoring coefficients. The preferred orientation that minimizes the anchoring energy (Equation (1)) is known as the easy axis. The actual stationary surface director orientation is the result of a balance between surface anchoring and bulk gradient Frank elasticity [34]. For the cases in which the gradient Frank elasticity is insignificant, the actual stationary and preferred director fields are identical. As shown in ref. [22], for the cholesteric-air interface with quite strong anchoring, the gradient Frank elasticity is negligible in comparison with anchoring in the formation of the surface undulations. It should be noted that here we neglect the Marangoni flow that is likely to be formed due to the orientation-driven surface tension gradients [35-37]. Other effects and processes, such as 3D orientation structures, strong nonlinearities, hydrodynamic [38,39] and viscoelastic effects [40-42], discussed elsewhere, are beyond the scope of this paper.

The generalized Cahn-Hoffman capillary vector Ξ [43,44] is the fundamental quantity that reflects the anisotropic contribution of the CLC in the capillary shape equation. It contains two orthogonal components: the normal vector Ξ⊥, representing the increase in surface energy through dilation (change in area), and the tangent vector Ξ∥, representing the change in surface energy through rotation of the unit normal. The derivation details of the Cahn-Hoffman capillary vector thermodynamics for anisotropic interfaces are given in Appendix A [31].
Here I_s = I − kk is the 2 × 2 unit surface dyadic, and I is the identity tensor. The dyadic (kk)_m is similar to (tt)_m (see [45] and [46] for details), and the corresponding identity holds. The interfacial static force balance equation at the CLC/air interface is expressed by Equation (6), where T_{a/b} represent the total stress tensors in the air and in the bulk CLC phase, ∇_s = I_s · ∇ is the surface gradient operator, and T_s is the interface stress tensor. The air and bulk CLC stress tensors are given by T_a = −p_a I and by the bulk expression containing the Frank energy and the Ericksen stress, where p_{a/b} are the hydrostatic pressures, f_g is the bulk Frank energy density, and T_E is the Ericksen stress tensor. The bulk Frank energy density f_g for a CLC contains the splay, twist, bend, and saddle-splay contributions, where {K_i} (i = 1, 2, 3) are the splay, twist, and bend elastic constants, respectively, and K_4 is the saddle-splay elastic constant. The projection of Equation (6) along the direction k yields the capillary shape equation, in which the stress jump SJ is the total normal stress jump and p_c is the capillary pressure. Usually we take p_a − p_b = 0 and consider the other terms as an elastic correction. In the interfacial torque balance equation, λ_s is the Lagrange multiplier and h is the surface molecular field, composed of two parts; here γ_g is the gradient interfacial free-energy density, defined by introducing the surface gradient energy density vector g. By multiplying both sides of Equation (11) by (∇n)^T, the torque balance equation can be rewritten in the compact form of Equation (14), which gives an alternative path to compute kk : T_E. The expansion of the term hk : (∇n)^T gives hk : (∇n)^T = 0. Thus, only the bulk energy density f_g contributes to the elastic correction, which is negligible [22]. For typical cholesteric liquid crystals, the internal length K/γ0 is of the order of 1 nm (an order-of-magnitude estimation of the elastic constant K and the surface tension γ0 gives K ≈ 10^−11 J/m and γ0 ≈ 10^−2 J/m²) [43]. As the ratio W/γ0 at the cholesteric-air interface with quite strong anchoring is of the order B = W/γ0 = 0.01, the extrapolation length scale K/W is about 100 nm.
With these values, for a typical CLC with a pitch P0 ≈ 1.2 μm, the ratio of the extrapolation length scale to the pitch is of the order of K/(W P0) = 100 [nm]/1200 [nm] ≈ 0.08. So, the elastic correction contributes about 8% to the shape equation and can be neglected to describe the nano-scale surface undulations. As a result, the final shape equation becomes Equation (16) (see Appendix B). The first two terms contain ∇_s k = −κ tt, providing information about the surface curvature κ = dφ/ds, where φ is the normal angle and s is the arc-length. The first term on the right-hand side of Equation (16), which is the usual Laplace pressure, corresponds to the contribution from the normal component of the Cahn-Hoffman capillary vector. The second term, which is the anisotropic pressure due to the preferred orientation (known as Herring's pressure), corresponds to the contribution from the tangential component of the Cahn-Hoffman capillary vector Ξ∥. The last term in Equation (16) represents the additional contribution to the capillary pressure, which corresponds to the director curvature due to orientation gradients (see Appendix C).

Considering a rectangular coordinate system (x,y,z), where x is the wrinkling direction and y is the vertical axis, and considering the typical quartic anchoring model [24], γ = γ0 + μ2 (n · k)² + μ4 (n · k)⁴, yields the nonlinear ordinary differential equation (ODE), Equation (17), in terms of the normal angle φ. Here F_Dr denotes the driving force and F_Rs the resisting term. The boundary condition at x = 0 is φ|_{x=0} = π/2; μ2* and μ4* are the anchoring coefficients scaled by the isotropic surface tension γ0, μ2* = μ2/γ0 and μ4* = μ4/γ0; and φ̃(x) is the approximation of φ(x). The generic features of the normal angle and its periodicity are the important outputs of the shape equation. There are three significant system parameters that influence φ(x): the scaled anchoring coefficients (μ2*, μ4*) and the sign and magnitude of the helix pitch P0. Thus, the surface profile h(x) is a function of two material properties (μ2*, μ4*) and one structural order parameter (P0). In the following, we always assume that the helix pitch is constant at P0 = 1.2 μm. Figure 2a depicts the regions with different surface wrinkling modes in the parametric (μ2*, μ4*) space of the scaled anchoring coefficients, obtained using Equation (17); here O, H, and P refer to oblique, homeotropic, and planar director anchoring modes, respectively. The reader is directed to reference [30]. The anchoring coefficients corresponding to all computed curves are less than 0.01.
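The order-of-magnitude estimates above can be checked with a few lines of arithmetic; the numbers below are the ones quoted in the text (K ≈ 10⁻¹¹ J/m, γ0 ≈ 10⁻² J/m², B = W/γ0 = 0.01, P0 = 1.2 μm), and the script is only a bookkeeping sketch.

```python
# Order-of-magnitude check of the length scales quoted above.
K = 1e-11        # Frank elastic constant [J/m] (text estimate)
gamma0 = 1e-2    # isotropic surface tension [J/m^2] (text estimate)
B = 0.01         # scaled anchoring strength W/gamma0 (text estimate)
P0 = 1.2e-6      # helix pitch [m]

internal = K / gamma0          # internal length K/gamma0 ~ 1 nm
W = B * gamma0                 # anchoring strength [J/m^2]
extrapolation = K / W          # extrapolation length K/W ~ 100 nm
ratio = extrapolation / P0     # relative elastic correction to the shape equation

print(f"K/gamma0 = {internal*1e9:.1f} nm")
print(f"K/W      = {extrapolation*1e9:.0f} nm")
print(f"K/(W*P0) = {ratio:.2f}  (~8% elastic correction)")
```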
Surface Profile

The surface normal angle φ(x) can be directly obtained by solving the governing shape equation, Equation (17). The generic features of the normal angle φ(x), its magnitude, and its periodicity are the three key outputs of the model. The two significant parameters influencing φ(x) are the helix pitch P0 and the scaled anchoring coefficients μ2* and μ4*, which affect the periodicity and the magnitude of φ(x), respectively. Theoretically, μ2* and μ4* give two degrees of freedom to the governing equation. However, for small anchoring coefficients and constant helix pitch, the shape of φ(x) is only a function of the anchoring ratio r = μ2/2μ4. The plot of the normal angle φ(x) as a function of the distance x, corresponding to the points A, B, C, and D, is shown in Figure 3a. As expected, the periodicity equals the half pitch, P0/2, and the amplitude shows a slight deviation, φ(x) = π/2 + ε(x). Figure 3b shows the effect of the helix pitch on the normal angle φ(x) for the particular point B at three different values of the helix pitch: P0, P0/2, and −P0/2. The helix pitch does not influence the amplitude span of the normal angle, but it changes the periodicity of the normal angle. By reducing the helix pitch to half, a more squeezed normal-angle profile is observed. The sign of P0 reflects the normal-angle profile with respect to π/2. It should be noted that we can estimate the behavior of the curvature κ by checking the slope of φ(x). The surface profile h(x) is then obtained from the normal-angle profile. Figure 4a shows typical surface profiles h(x) and the corresponding energy profiles for the points B and D.
As shown in Figure 4a, increasing P0 results in both higher periodicity and higher magnitude. We can clearly see that the surface relief profiles of points B and D exhibit mirror symmetry, while changing the sign of P0 results in the same mirror symmetry. These surface undulations can be validated against the two-length-scale surface modulations observed in sheared CLC cellulosic films [25]. The two different-scale periodic gratings include a primary set of bands perpendicular to the shear direction and a smoother texture characterized by a secondary periodic structure containing "small" bands. It has been shown that the development and periodicities of the small bands are mainly ruled by the CLC characteristics. The chirality of the CLC can therefore be mainly responsible for the formation of the secondary bands. The model can also be validated against the two-scale surface pattern of the Queen of the Night tulip [11], for which the ratio of amplitudes is h2/h0 = 0.01 and the corresponding wavelength is λ = 1.2 μm.

Figure 4b shows the scaled energy profile, (γ* − 1)/q, in comparison with the surface profile for point B. The scaled energy profile gives a plot similar to the surface relief.

If we denote the parametric vector as μ* = (μ2*, μ4*), then h(x) becomes a function of two variables, the vector μ* and the helix pitch P0. Within the linear regime (|μ2*| << 1, |μ4*| << 1), the following identities hold true:

Geometric symmetry: h(−μ*, P0) = −h(μ*, P0),    (19a)
Surface geometry-energy relation: q h(μ*, P0) = γ*(μ*, P0) − 1.    (19b)

These identities formulate the symmetry property of the surface relief and its relation to the surface energy. Figure 4b is a clear demonstration of the symmetry and scaling laws formulated in Equations (19a,b): if we compare B and D we have mirror symmetry, and if we plot the anchoring energy of B we see the same plot as the surface relief, −h(D, P0) = h(B, P0) (symmetry) and h(B, P0) = (γ*(B, P0) − 1)/q (geometry-energy).

Another important parameter that categorizes the shape of the surface relief is the ratio between its two wavelengths. The origin of the two scales can be revealed through the linear theory, which gives the signed amplitudes h0 and h2 (the nomenclature is defined in Figure 4b) as functions of the anchoring ratio r = μ2/2μ4. L1 and L2 are defined as the two mode-transition lines. Line L1, which gives a four-wave profile within one period, corresponds to the condition μ4* = −μ2* (r = −1/2, h0 = h2). Line L2, which gives a two-wave profile within one period, corresponds to the condition μ4* = −μ2*/2 (r = −1, h2 → 0). In addition, if μ4* → 0, then r → ∞ such that h2 → 0, which also gives a two-wave profile.

Figure 2b shows the general phase diagram of h-profiles in the parametric (μ2*, μ4*) plane. As shown in the figure, the transition lines L1 and L2 are the critical lines across which the surface relief changes its shape. We identify line L1 as a resonant line with the maximum interaction between the quadratic and quartic anchoring effects.

The computations show that the h-profile is centrally symmetric with respect to the origin, which can be observed in Figure 2b. As summarized in Table 1, the profiles computed along the two lines L2 and μ2* = 0 are distinguished from those in the region μ4* = 0 by the existence of a small plateau. This small plateau corresponds to the discontinuity of the capillary vector diagram, which will be discussed later. (Table 1 notes: the results are considered within one period. Nomenclature: O (oblique), P (planar), and H (homeotropic) refer to the type of anchoring; the L_i refer to the transition lines; see text.)
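To make the two-mode character of the relief concrete before turning to Table 1, the sketch below plots an assumed profile h(x) = h0 sin(2qx) + h2 sin(4qx) for a few values of h2/h0 and counts its extrema per pitch; the functional form is an illustrative assumption consistent with the half-pitch periodicity, not the exact output of Equation (17).

```python
import numpy as np

# Illustrative two-mode relief: the half-pitch harmonic plus its first overtone.
P0 = 1.2e-6
q = 2 * np.pi / P0
x = np.linspace(0.0, P0, 1000)

def relief(h0, h2):
    """Assumed two-mode profile; the ratio h2/h0 controls the profile shape."""
    return h0 * np.sin(2 * q * x) + h2 * np.sin(4 * q * x)

for h2_over_h0 in (0.0, 0.01, 1.0):   # single-mode, tulip-like (h2/h0 = 0.01), equal modes
    h = relief(1.0, h2_over_h0)
    n_extrema = np.sum(np.diff(np.sign(np.diff(h))) != 0)
    print(f"h2/h0 = {h2_over_h0:5.2f}: {n_extrema} extrema per pitch")
```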
Table 1 summarizes the four main types of surface relief profiles. The regions O+4, O−4, H+4, H−4 and the line L1 all give four waves within one period; the difference is that the four waves are identical on line L1. The regions H+2, H−2, P+2 with μ4* = 0 and the lines L2 and μ2* = 0 both give two waves within one period, so h2/h0 equals 0. The difference between these two modes is that the region H+2, H−2, P+2, μ4* = 0 gives a very smooth surface geometry, while the lines L2 and μ2* = 0 give sharp peaks on the surface profile.

Surface Curvature

In this subsection we present, discuss, and characterize the surface curvature obtained from direct numerical simulations of the governing equations and from a new and highly accurate linear model. The surface behavior is affected not only by the magnitude of the surface relief but also by the surface curvature. The curvature can be computed directly in two equivalent forms (Equation (21)). The first computing method in Equation (21) is exactly based on the governing Equation (17). Considering that for small values of the anchoring coefficients the resisting term is mainly controlled by the isotropic energy γ0, we obtain for the resisting term denoted in Equation (17) F_Rs = 1. So, the linear approximation of the curvature follows as Equation (22), where κ̃_φ denotes the linear approximation of the curvature assuming that φ = π/2. The analytical expression for the linear approximation of the surface relief is proposed in Appendix D. By assuming κ_h = h_xx, we can also obtain another approximation of the surface curvature. It can easily be found that κ̃_h = κ̃_φ, as we made similar assumptions to approximate the surface curvature based on Equation (21). A more sophisticated approximation of the curvature, κ̃_G, can be derived without linearizing the governing equation (Equation (23)). As illustrated in Figure 5, the linear approximation of the curvature κ̃_φ obtained from Equation (22) and κ̃_G from Equation (23) provide very good approximations of the curvature. As κ̃_φ has an explicit and simple expression, it allows us to mathematically derive more tractable relations to characterize the formation of the surface relief.
Surface Energy

Understanding the surface energy behavior is another perspective on the surface profile, which helps us establish an energy transfer mechanism from the anchoring energy of a flat surface into a wrinkled surface. For sufficiently small values of the anchoring coefficients, as the normal-angle profile φ(x) fluctuates around π/2 with a very small amplitude, an explicit relation between the linearized surface profile and the total surface energy can be estimated from the linear approximation:

q h(x) ≈ γ*(x) − 1,    (24)

where (γ* − 1) is the scaled anisotropic anchoring energy and q h is the scaled surface relief. This correlation is seen in Figure 4b, where h and (γ* − 1)/q are essentially identical for small anchoring coefficients. This simple expression implies an essential physical phenomenon. Equation (24) verifies that zero anisotropic surface energy results in a flat surface (h = 0). As a result, based on this expression, the anchoring energy is the driving force contributing to the surface relief, which is in accordance with the previous findings [22]. Moreover, the expression confirms the expected insight that the uppermost surface areas contain the highest surface energy.
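As a numerical illustration of Equation (24), the sketch below evaluates the scaled quartic anchoring energy γ* = 1 + μ2*(n·k)² + μ4*(n·k)⁴ along a nearly flat interface, taking n = (cos qx, sin qx, 0) and k ≈ (0, 1, 0) so that n·k ≈ sin qx; the flat-surface estimate of n·k and the coefficient values are assumptions of the sketch. The implied relief (γ* − 1)/q then automatically contains the two harmonics cos 2qx and cos 4qx, i.e., the two length scales of the wrinkling.

```python
import numpy as np

# Linearized geometry-energy relation (Equation (24)): q*h(x) ~ gamma*(x) - 1.
P0 = 1.2e-6
q = 2 * np.pi / P0
x = np.linspace(0.0, P0, 4000)

mu2s, mu4s = 0.002, 0.004           # scaled anchoring coefficients (assumed values)
nk = np.sin(q * x)                  # n.k for a nearly flat surface (assumption)
gamma_star = 1.0 + mu2s * nk**2 + mu4s * nk**4   # scaled quartic anchoring energy

h = (gamma_star - 1.0) / q          # implied relief from Equation (24)

# Harmonic content: the relief is built from cos(2qx) and cos(4qx) only.
c2 = 2 * np.trapz(h * np.cos(2 * q * x), x) / P0
c4 = 2 * np.trapz(h * np.cos(4 * q * x), x) / P0
print(f"relief span ~ {h.max() - h.min():.2e} m; harmonics: c2 = {c2:.2e}, c4 = {c4:.2e}")
```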
Capillary Pressures

As mentioned above, the three main contributions to the capillary pressure are (1) P_dil: the dilation pressure (Laplace pressure), (2) P_rot: the rotation pressure (Herring's pressure), and (3) P_dir: the director-curvature pressure, which is the anisotropic pressure due to the preferred orientation (see Equation (16)). P_dir is the driving force that wrinkles the interface. The explicit expansion of Equation (16) in terms of (n · k) yields Equation (25). As all the pressures are scaled by the isotropic tension γ0, they have the same unit as curvature. Figure 6a shows the wrinkling mechanism through the changes of the capillary pressures along x; the three scaled pressure contributions are plotted as functions of x for the particular point B.

As shown in the figure, the capillary pressures cancel each other out, maintaining their sum at zero. The important observation from these pressure profiles is that P_dil and P_dir are always out of phase, while P_rot is always negative. These outcomes, P_dir · P_dil ≤ 0 and sgn(P_rot) = −sgn(P0), can also be read off from the linear model. Figure 6a also shows that P_rot is two orders of magnitude smaller than P_dil and P_dir. This confirms that P_dir is the source of the wrinkle formation, annihilated by inducing area change and area rotation. Another observation from the linear model is that P_rot has an expression similar to that of the curvature κ. This similarity suggests that the capillary pressures can also be analyzed in the κ-P frame. Figure 6b shows the variation of the curvature profile with respect to the capillary pressures. We can see from the figure that, in the linear region and for constant P0, each capillary pressure lies only on intrinsic curves independent of the anchoring coefficients. The linear approximation gives the intrinsic curves (Equation (26); see Appendix E for the details).

The κ-P relations confirm that the helix pitch P0 is the only parameter affecting the intrinsic curves. Equation (26) implies that the intrinsic curves obtained for −P0 show central symmetry. Variations in the anchoring coefficients do not impose any influence on the intrinsic curves; they only change the arc-length of the intrinsic curves (denoted by l). The analytical expression of the arc-length of the intrinsic curves is given by Equations (27)-(29); using the linear curvature approximation (Equation (22)), the interval of κ can be found from Equations (30) and (31). These findings indicate that the span of the curvature is associated with the anchoring coefficients and ideally exhibits a linear correlation with 1/P0. So, we expect that if the helix pitch is increased to 2P0 under the same anchoring conditions, the span of the curvature is reduced by half.

Figure 6b illustrates the numerical solutions for the director, dilation, and rotation pressures obtained from Equation (25) in comparison with the intrinsic lines defined by Equation (26). We can observe that there are no considerable deviations between the director pressures and the intrinsic lines approximated by the linear model. As shown in Figure 6b, the span of the actual curvature is in accordance with the minimum and maximum values of the curvature computed from Equations (30) and (31), which confirms that the linear approximation is valid within the linear region (small anchoring coefficients).

In partial summary, in this subsection we have shown that (i) the key balancing pressures are the Laplace and director pressures (Figure 6); (ii) quadratic curvature contributions are proportional to the pitch; and (iii) the curvature-pressure relations follow intrinsic curves (Equation (26)) whose lengths are affected by the anchoring, such that lower (higher) anchoring decreases (increases) their lengths (Equations (27)-(31)).
Capillary Vectors

The behavior of the capillary vectors gives another perspective from which to analyze the surface wrinkling. If we assume that μ4* = 0, then the magnitudes of the two capillary vectors, ξ⊥ and ξ∥, naturally satisfy an ellipse equation (Equation (32)), where ξ denotes the magnitude of the capillary vector Ξ. From this equation, we can read off an ellipse with eccentricity e_cc = √3/2, which is independent of the anchoring coefficient μ2. The two capillary vectors change proportionally: ξ⊥ oscillates around γ0 + (1/2)μ2 with an amplitude of (1/2)|μ2|, while ξ∥ oscillates around zero with an amplitude of |μ2|. This ellipse with invariant shape provides a clear physical explanation of how the capillary vectors are formed. Figure 7a illustrates the plots of the ellipse equation for the anchoring coefficient |μ2*| = 0.002. Considering that the CLC surface is differentiable, we can introduce two foci (F1 and F2, defined by μ2* in Figure 7b) such that every point P in the vector diagram is restrained by the focal condition of the ellipse; in the degenerate limit, the ellipse becomes a circle with a radius of |μ2*|, which can be considered as a point. From Figure 7a we can also observe that ξ⊥ only reaches its extrema when ξ∥ vanishes. This corresponds to Ξ∥ = I_s · ∂_k γ, i.e., ξ∥ = t · ∂_k ξ⊥. However, when ξ∥ reaches its extrema, ξ⊥ does not vanish, as the isotropic surface tension prevents ξ⊥ from being reduced to zero.

The solution of the ellipse equation yields explicit algebraic relations between ξ⊥ and ξ∥ (Equation (33)). Recall that the capillary vectors and the normal angle are related by Equation (34). Replacing ξ⊥ with ξ∥ from Equation (33), the normal angle can be expressed only in terms of ξ∥ (see Appendix F). Equation (35) clarifies the source of the fluctuation: the perturbation φ̃(ξ∥) is imposed onto the normal-angle profile due to the presence of ξ∥, which is fixed by the ellipse equation.

If we assume that μ2* = 0, the magnitudes of the two capillary vectors ξ⊥ and ξ∥ satisfy Equation (36), which describes a teardrop curve. Figure 7b illustrates the plots of the teardrop equation for the anchoring coefficient |μ4*| = 0.002. The main parameters defining this teardrop curve are given in Figure 7b. Similarly to the ellipse curves shown in Figure 7a, the magnitude of μ4* does not change the shape of the teardrops, while it controls the size of the teardrop curves. It should be noted that the teardrop curves are not continuous at the origin (point O in Figure 7b). Both the ellipse and the teardrop curves show a symmetry under a change of sign of the anchoring coefficients, and shrink to zero as the anchoring coefficients go to zero.
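The quoted eccentricity follows directly from the stated amplitudes: taking the semi-major axis a = |μ2| (the amplitude of ξ∥) and the semi-minor axis b = |μ2|/2 (the amplitude of ξ⊥ about its mean), one finds, independently of μ2,

```latex
e_{cc} \;=\; \sqrt{1-\frac{b^{2}}{a^{2}}}
      \;=\; \sqrt{1-\frac{(|\mu_{2}|/2)^{2}}{|\mu_{2}|^{2}}}
      \;=\; \sqrt{1-\tfrac{1}{4}}
      \;=\; \frac{\sqrt{3}}{2}\,.
```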
Conclusions

This paper presents a rigorous model, based on the nonlinear nemato-capillary shape equation and its linear approximation, to describe the main formation mechanism of the two-length-scale surface wrinkling formed at the CLC/air interface. The role of the three capillary pressure contributions (dilation, rotation, and director curvature) in the formation of the surface curvature has been elucidated, and the effect of the helix pitch and the anchoring coefficients has been characterized. The linear approximation provides a simple model that describes the wrinkling behavior with high accuracy and less computation when the two anchoring coefficients are very small. The linear approximation can also serve as the main criterion to classify the type of surface relief. The key mechanism driving the surface wrinkling is identified and discussed from two perspectives: capillary pressures and capillary vectors. Moreover, the surface normal is expressed through the capillary pressures, whose sum must remain zero, serving as the constraint on the system. The proposed model and its linear approximation augment previous models dedicated to understanding and mimicking the complex surface patterns observed at the free surfaces of synthetic and biological chiral nematic liquid crystals, chiral polymer solutions, surfactant-liquid-crystal surfaces and membranes, and frozen biological plywoods. The present results can inspire the design and fabrication of complex surface patterns with possible applications in optical, high-friction, and thermal contexts. The three capillary pressures and the resulting expression for the surface curvature are derived in the appendices.

Figure 1. Schematic of a cholesteric liquid crystal (CLC) and its surface structures. H is the helix unit vector, and P0 is the pitch. The surface director has an ideal cholesteric twist in the bulk. The helix uncoiling near the surface creates a bend-splay planar (2D) orientation and surface undulations of nanoscale relief h(x) with a micron-range wavelength P0/2. Adapted from [22].

Figure 3. Normal angle profile. (a) The normal angle profiles corresponding to the points A, B, C, and D as illustrated in Figure 2a: A (green; mode P+2), B (blue; mode O+4), C (red, full line; mode H+2), and D (red, dashed line; mode H−4). (b) The normal angle profile for the point B at different helix pitch values of P0, −P0, P0/2 and −P0/2, where P0 = 1.2 μm.
Figure 4. Mirror symmetries observed in surface relief profiles. (a) The surface relief profiles at point B with different helix pitches are given by the two blue curves and the black curve; the red curve gives the surface relief profile at point D. The red and black ellipsoids depict the director orientation for point B with P0/2 and point D with P0, respectively. These ellipsoids show where the surface extrema occur for planar, homeotropic, and oblique anchoring. (b) The surface profile at point D and the scaled energy profile at point B. This figure indicates the similarity between the surface relief profile and the energy profile. The helix pitch is P0 = 1.2 μm.

Figure 5. Surface curvature profiles computed numerically and with the two approximation methods, κ̃_φ and κ̃_G. Blue and red solid lines are the numerical solutions of the governing equation for points B and D, respectively. Blue hollow circles and blue filled triangles represent the data points of the computed κ̃_φ and κ̃_G at point B, respectively. Red hollow squares and red filled circles represent the data points of the computed κ̃_φ and κ̃_G at point D, respectively. As the two approximations κ̃_φ and κ̃_G are identical, the filled circles and triangles are superimposed on the hollow squares and circles. The helix pitch is P0 = 1.2 μm.

Figure 6. Capillary pressure profile. (a) The three components of the capillary pressure with respect to the x axis for the point B. The black solid line, black dashed line, and blue dotted line represent the dilation pressure, director pressure, and rotation pressure, respectively. (b) Curvature-pressure plot at point B. Red, blue, and purple lines represent the numerical solutions for the director, dilation, and rotation pressures, respectively. Black dashed lines are the intrinsic lines defined by Equation (26). Green dashed lines mark the span of curvature computed by Equations (30) and (31). The two black points are where the span of the numerical solution for the curvature ends. The helix pitch is P0 = 1.2 μm.

Figure 7. Plots of the capillary vector components for two limiting anchoring coefficient values. (a) μ4* = 0 results in an ellipse. (b) μ2* = 0 results in a teardrop curve. The sign of the anchoring coefficient imposes a mirror symmetry. The axes of the loops are determined by the anchoring coefficients.
Table 1. As shown in Table 1, there are mainly three types of surface wrinkling patterns. It should be noted that there is no difference between O4…
An Optimal Transmission Strategy for Joint Wireless Information and Energy Transfer in MIMO Relay Channels

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

An optimal resource allocation strategy for a MIMO relay system is considered in a simultaneous wireless information and energy transfer network, where two users with multiple antennas communicate with each other assisted by an energy harvesting MIMO relay that gathers energy from the received signal by applying a time switching scheme and forwards the received signal using the harvested energy. We focus on the precoder design and resource allocation strategies that allocate the resources among the nodes in decode-and-forward (DF) mode. Specifically, an optimal precoder design and energy transfer strategy for the MIMO relay channel are first proposed. Then, we formulate the resource allocation optimization problem. Closed-form solutions for the time and power allocation are derived. It is revealed that the solution can flexibly allocate the resources in the MIMO relay channel to maximize the sum rate of the system. Simulation results demonstrate that the proposed algorithm outperforms the traditional fixed method.

Introduction

Wireless power transfer technology, where the receiver can scavenge energy from the received signals, has recently attracted much attention in academia and industry [1,2]. It is a promising technology to overcome the bottleneck of energy-constrained wireless networks. The nodes collect energy from radio frequency (RF) signals to charge their batteries according to electromagnetic radiation theory [3]. RF energy transfer can be fully controlled and is suited to scenarios with strict quality-of-service constraints. Meanwhile, conventional renewable energy resources (wind/solar/tide energy) are not suitable for these scenarios due to their intermittent and unpredictable nature [4].

The concept of simultaneous wireless information and power transfer (SWIPT) was first proposed by Varshney in [5]. In [6], Zhou et al. proposed a SWIPT architecture for receiver design and suggested two methods to separate information and energy: time switching (TS) and power splitting (PS). Considering co-channel interference, an optimal design is proposed for the outage-energy tradeoff and the rate-energy tradeoff in SWIPT [7]. In [8-10], researchers extended this concept to MIMO, OFDM, and cooperative systems. Zhang considers a MIMO broadcasting method for SWIPT; the optimal precoder design is proposed and the rate-energy region is obtained in MIMO scenarios [11]. Considering a limited feedback constraint, an optimal beamforming design to trade off energy harvesting and information transfer in multiple-antenna systems is investigated in [12]. The work in [13] proposes an amplify-and-forward relaying protocol for SWIPT with TS and PS modes and investigates the influence of different parameters on the system performance. In [14], the throughput of a Gaussian relay network with an energy harvesting constraint is analyzed. Ding et al. proposed several power allocation strategies to optimize the outage probability in a DF cooperative network where multiple source-destination pairs communicate via a shared energy harvesting relay [15]. Combining MIMO and cooperative technology, Krikidis et al.
investigate a low-complexity antenna switching between decoding and rectifying in order to achieve the optimal outage probability with simultaneous information and energy transfer in [16].

In this paper, we focus on a general MIMO cooperative network, where two users communicate with each other via an energy harvesting relay. Specifically, each node is equipped with multiple antennas. The relaying transmission is powered by the energy scavenged from the signals sent by the users. Assuming that the battery of the relay is sufficiently large, the relay can accumulate a significant amount of power for the relaying transmission. The aim of this paper is to analyze the precoder design for the source node and the relay node to optimize the system performance, and to study how to efficiently distribute the time resource between information transmission and power transfer. Moreover, the optimal power allocation strategies for the system are also investigated.

This paper is organized as follows. Section 2 introduces the system model and the basic notation. In Section 3, we present the proposed precoder design for the DF model; moreover, the optimal transmission ratio between energy transfer and information transmission is also derived. Simulation results and comparisons are given in Section 4, and, finally, conclusions are drawn in Section 5.

System Model

We consider a MIMO energy harvesting (EH) cooperative network with two users and a relay node, as shown in Figure 1. All nodes operate in half-duplex mode. The two users are equipped with N antennas each and a fixed power supply. They cannot communicate with each other directly; they exchange messages assisted by the energy harvesting relay node. The relay node is battery-free and equipped with M antennas. It can scavenge energy from its observation. We adopt the time switching method to charge the battery: the relay node harvests energy from the RF signals transmitted by the users and uses this energy to relay the users' information.

The relaying protocol with energy harvesting can be stated as follows. In the first phase (Slot 1), the source terminal transmits the signal to the energy harvesting relay node with precoder matrix S_EH ∈ C^{N×N} for energy transfer. It is worth noting that the EH receiver at the relay node does not need to convert the received signal from the RF band to the baseband to scavenge the carried energy. Nevertheless, according to the law of energy conservation, it can be assumed that the total harvested RF-band power is proportional to that of the received baseband signal. The harvested energy can be normalized by the baseband symbol period over all receiving antennas at the EH receiver. Therefore, the scavenged power at the relay node can be presented as in Equation (1), where x ∈ C^{N×1} denotes the data symbol to be transmitted by the source user. In addition, we assume that there is a power constraint P at the transmitter across all transmitting antennas, denoted as tr(S_1) ≤ P, where S_1 = E[S_EH x (S_EH x)^H] indicates the covariance matrix of the transmitted signal in the first phase. H ∈ C^{M×N} denotes the channel coefficients between the relay and the source terminals; η (0 < η < 1) is the energy conversion efficiency at the relay node, and α (0 < α < 1/2) denotes the percentage of transmission time allocated to the EH phase. For simplicity, it is assumed in (1) that the energy scavenged from the background noise at the EH receiver is negligible and thus can be ignored.
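A small numerical sketch of the Slot-1 harvesting model described above: the harvested power is taken as Q = η α tr(H S_1 H^H), and the energy-beamforming covariance S_1 = P v_1 v_1^H (v_1 the dominant right singular vector of H, derived later in the paper) is compared against an isotropic covariance S_1 = (P/N) I. The channel draw and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 4                       # transmit / relay antennas (assumed)
P, eta, alpha = 1.0, 0.6, 0.3     # power budget, conversion efficiency, EH time fraction (assumed)

# Rayleigh-fading relay channel H (M x N), illustrative draw.
H = (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))) / np.sqrt(2)

def harvested(S1):
    """Q = eta * alpha * tr(H S1 H^H): the Slot-1 scavenged power."""
    return eta * alpha * np.real(np.trace(H @ S1 @ H.conj().T))

# Energy beamforming: all power on the dominant right singular vector of H.
_, s, Vh = np.linalg.svd(H)
v1 = Vh.conj().T[:, 0]
S1_beam = P * np.outer(v1, v1.conj())
S1_iso = (P / N) * np.eye(N)

print(f"Q (beamforming) = {harvested(S1_beam):.3f}  (= eta*alpha*P*sigma1^2 = {eta*alpha*P*s[0]**2:.3f})")
print(f"Q (isotropic)   = {harvested(S1_iso):.3f}")
```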
In the second phase (Slot 2), the source terminal transmits the signal to the energy harvesting relay node with precoder matrix S_ID ∈ C^{N×N} for information decoding (ID). The relay's detection is based on the observation y, the received signal at the relay node, where n ∈ C^{M×1} is the additive Gaussian noise vector for the baseband signal and n ∼ CN(0, I). The transmit power constraint must also be maintained: E‖S_ID x‖² ≤ P. The throughput of the system within the second phase can be expressed in terms of S = E[S_ID x (S_ID x)^H], the covariance matrix of the transmitted signal. We consider a DF relay protocol for the MIMO relaying network with energy harvesting. The relay node decodes the signal and reconstructs the message x_r ∈ C^{M×1} for the transmission in the third phase. Furthermore, the relay node simultaneously processes and forwards the received signals using the harvested energy.

It is noted that the total duration of the first and second phases is half of the total transmission time of the relay system. This maintains consistency with the conventional relay transmission protocol, in which the durations of the first hop and the second hop are the same. Then, the energy consumed by the transmitter in the proposed protocol is the same as in the conventional relay transmission protocol. Therefore, the simulation comparison in Section 4 is fair for all the mentioned algorithms.

In the third phase (Slot 3), the relay node broadcasts the signal with precoder matrix S_BC ∈ C^{M×M} using a multiplexing strategy. The received signal y_d at the destination terminal can be presented in terms of G ∈ C^{N×M}, the channel coefficients between the relay and the destination terminal; the additive Gaussian noise at the destination terminal is n_d ∈ C^{N×1} ∼ CN(0, I). Note that the relay node is battery-free; it needs to scavenge energy from the received signal in the first phase. Therefore, the energy constraint (1/2) E‖S_BC x_r‖² ≤ Q must be maintained in the third phase. The data rate of the destination terminal can then be expressed in terms of S_r = E[S_BC x_r (S_BC x_r)^H], the covariance matrix of the broadcast signal.

Then, the rate of the DF cooperative network with energy harvesting can be presented as the minimum of the two hop rates. In the following, we investigate the optimal transmit precoder matrices S_EH, S_ID, and S_BC that maximize the transferred energy and the information rate for the relay node. Moreover, the optimal time switching ratio is also analyzed to maximize the system throughput.

Problem Formulation and Optimal Solution

In this section, we formulate the optimization problem to maximize the system throughput. Considering the three-phase MIMO link from the transmitter to the relay node and from the relay node to the receiver, the optimization problem can be formulated as maximizing, over α, S_EH, S_ID, and S_BC, the rate R = min(R_r, R_d) subject to the power and energy constraints. Since there are too many variables in the optimization problem, and the energy constraint and the rate at the relay node with energy harvesting are non-concave functions of α, it is difficult to solve this problem directly. In the following, we propose dividing the original problem into four subproblems. Each subproblem can be solved efficiently using convex optimization techniques.
Consider the MIMO link of the first phase from the transmitter to the relay node; the design objective in this case is to maximize the scavenged power, which constitutes subproblem (1). Since the objective function is monotonically increasing in α, we focus on the optimal design of S_EH. Applying the singular value decomposition (SVD), the MIMO link between the transmitter and the relay node can be decoupled into several independent subchannels, H = U Σ V^H, where U ∈ C^{M×M} and V ∈ C^{N×N} are complex unitary matrices, each of which consists of orthogonal columns with unit norm, and Σ = diag{√h1, √h2, . . ., √hm}, m = min{M, N}, with the singular values in order of decreasing size. Then, applying the eigenvalue decomposition, S_1 can be expressed in terms of its eigenvectors, where h̃_i is the i-th column of H V_1. Apparently, in order to maximize the scavenged power Q, we just need to find the maximum value among all ‖h̃_i‖² and allocate all the power to this subchannel; the resulting inequality is (13). Considering (9), if v_{1,1} in V_1 is taken as the first column of V, corresponding to the largest singular value of H, which is √h1, the equality in (13) holds. Therefore, the optimal solution for subproblem (1) is obtained, and the optimal harvested power at the relay node is Q* = η α P h1.

Consider the MIMO link of the second phase from the transmitter to the relay node; the design objective in this case is to maximize the data rate over the MIMO channel, which constitutes subproblem (2). It is a typical optimization problem for a MIMO system [17]. Applying the SVD method, the optimal solution can be obtained, and the water-filling power allocation maximizes the data rate of the MIMO channel. It can be stated as λ_{2,i} = (μ − 1/h_i)^+ (Equation (18)), where μ is the constant water level that makes the sum power of all subchannels satisfy the power constraint Σ_{i=1}^{m} λ_{2,i} ≤ P. Therefore, the sum rate of the system within the second phase is Σ_{i=1}^{m} log2(1 + λ_{2,i} h_i).
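A compact numerical sketch of the water-filling rule used in the second and third phases; the bisection on the water level μ and the example gains are illustrative assumptions, with per-subchannel rates log2(1 + λ_i h_i) as above.

```python
import numpy as np

def water_filling(gains, P_total, iters=100):
    """Allocate P_total over subchannels with gains h_i via lambda_i = max(mu - 1/h_i, 0)."""
    lo, hi = 0.0, P_total + 1.0 / np.min(gains)   # bracket for the water level mu
    for _ in range(iters):                        # bisection on the power constraint
        mu = 0.5 * (lo + hi)
        powers = np.maximum(mu - 1.0 / gains, 0.0)
        if powers.sum() > P_total:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - 1.0 / gains, 0.0)

h = np.array([2.0, 1.0, 0.5, 0.1])     # example subchannel gains h_i (assumed)
P = 2.0                                 # power budget (assumed)
lam = water_filling(h, P)
rate = np.sum(np.log2(1.0 + lam * h))   # second-phase sum rate
print(f"powers = {np.round(lam, 3)}, sum = {lam.sum():.3f}, rate = {rate:.3f} bits/s/Hz")
```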
Consider the MIMO link of the third phase from the relay node to the receiver; the design objective in this case is also to maximize the data rate over the MIMO channel. Supposing that α is fixed, the optimal solution has the same form as in the second phase, where V_r ∈ C^{M×M} is the unitary matrix obtained from the SVD of G, written as G = U_r Σ_BC V_r^H with Σ_BC = diag{√g1, √g2, . . ., √gm} containing the singular values in order of decreasing size, and Λ_BC = diag{λ_{r,1}, λ_{r,2}, . . ., λ_{r,m}} denotes the power values assigned to the subchannels. Then, just like in the second phase, the optimal power allocation for the subchannels of the third-phase MIMO channel is given by water-filling, λ_{r,i} = (ν − 1/g_i)^+, where ν is the constant water level that makes the sum power of all subchannels satisfy the power constraint (1/2) Σ_{i=1}^{m} λ_{r,i} ≤ Q*, and the corresponding sum rate follows.

Until now, the original problem can be reformulated as in (25): maximize R = min(R_r, R_d) over α. Due to the rate constraint of the DF protocol, the optimal point of the problem is obtained when the rates of the two hops are equal; in other words, the equality condition (26) must be established. Since h_i is known within each transmission period, the optimal λ*_{2,i} can be calculated through (18). Then we can set Σ_{i=1}^{m} log2(1 + λ*_{2,i} h_i) as a fixed value r. Substituting (22) and (26) into (25), we obtain a function of only one variable, α. It is a nonlinear logarithmic equation, which can be solved efficiently by using the library function "fzero" in Matlab. Once the optimal α* is found, the optimal Q* follows.

In summary, we have proposed an optimal transmission strategy for a MIMO relay network with energy harvesting. The whole transmission is divided into three phases. For the first phase, an energy beamforming precoder matrix that maximizes the scavenged power at the relay node is proposed. For the second and third phases, we propose a spatial multiplexing precoder matrix to obtain the maximum data rate. Moreover, the optimal time switching ratio is investigated to balance the power transfer and the information transmission.
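The rate-balance step can be sketched numerically as below. Assuming, per the text, slot durations α, 1/2 − α, and 1/2, a fixed second-hop subchannel rate r0, and harvested power Q*(α) = η α P h1 feeding a water-filled third hop, the root of f(α) = (1/2 − α) r0 − R_d(α) gives α*; all parameter values and the exact bookkeeping of the time factors are illustrative assumptions, not the paper's exact balance equation, and a Python root finder stands in for Matlab's "fzero".

```python
import numpy as np
from scipy.optimize import brentq

eta, P, h1 = 0.6, 100.0, 2.5       # conversion efficiency, power, top eigenchannel gain (assumed)
r0 = 6.0                            # fixed second-hop rate sum_i log2(1 + lam2_i h_i) (assumed)
g = np.array([1.5, 0.8, 0.3])       # third-hop subchannel gains (assumed)

def wf_rate(budget):
    """Water-filling sum rate over gains g for a given power budget (bisection on the level)."""
    lo, hi = 0.0, budget + 1.0 / g.min()
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - 1.0 / g, 0.0).sum() > budget:
            hi = mu
        else:
            lo = mu
    lam = np.maximum(0.5 * (lo + hi) - 1.0 / g, 0.0)
    return np.sum(np.log2(1.0 + lam * g))

def balance(alpha):
    """f(alpha): relay-hop rate minus destination-hop rate; its root equalizes the two hops."""
    Q = eta * alpha * P * h1                            # harvested power Q*(alpha)
    return (0.5 - alpha) * r0 - 0.5 * wf_rate(2.0 * Q)  # (1/2) sum lam_r <= Q gives budget 2Q

alpha_star = brentq(balance, 1e-6, 0.5 - 1e-6)
print(f"alpha* = {alpha_star:.4f}, balanced rate = {(0.5 - alpha_star) * r0:.3f} bits/s/Hz")
```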
Simulation Results

In this section, the performance of the MIMO relay network with energy harvesting is investigated. For comparison, we adopt four reference methods: (1) the same precoder matrix for the first and second phases with the optimal time switching ratio; (2) the optimal precoder matrices for the first and second phases with a fixed time switching ratio; (3) random antenna switching for energy harvesting and information transmission; and (4) the antenna switching scheme in [18]. The two users and the relay node are distributed in a one-dimensional region. Suppose that the two terminals are separated by a normalized distance. Denote the distance between the transmitter and the relay node as d and the distance to the destination terminal as (1 − d). With a large-scale path loss factor of 3, the channel coefficients between the relay and the terminals carry the additional large-scale path losses Ω1 = d^{−3} and Ω2 = (1 − d)^{−3}, respectively. We set the noise power to −60 dBmW.

First, we evaluate the system throughput of all the mentioned methods. Suppose that each node is equipped with 4 antennas and the relay node is midway between the source and destination terminals. Figure 2 shows the performance of all the mentioned methods under different transmit powers; the proposed solution outperforms the other methods. For method (1), there is a power loss because the energy beamforming precoder matrix design is not optimal: the power transferred through the subchannels with low coefficients wastes the source energy, and the performance gap becomes especially apparent in the high transmit power region. For method (2), the fixed ratio between energy transfer and information transmission cannot balance the rate and the available energy for the third phase. For example, if the power scavenged in the first phase with a fixed splitting ratio is small, it cannot support the information transmission of the third phase, since the target rate of the third phase is constrained by the second transmission phase under the fixed splitting ratio.

For method (3), the antennas are divided into an energy harvesting mode and an information receiving mode with an optimal precoder design during the first phase. In the low transmit power region, the scavenged power is the bottleneck for the system throughput. In the high transmit power region, since each antenna has a good channel condition, the performance is close to that of the proposed method. Method (4) selects the antennas working in energy harvesting mode or information decoding mode by analyzing the channel condition of each antenna; however, this algorithm does not consider the precoder design to improve the system performance, and the performance loss increases, especially in the high transmit power region.

Next, with a total transmit power of 20 dBmW, we investigate the influence of the distance between the relay node and the users on the rate. Figure 3 shows that the performance decreases when the distance between the source terminal and the relay node increases. This is because the harvested energy and the received signal strength at the relay node decrease due to the larger path loss. Moreover, the proposed method outperforms the other solutions. The random antenna switching method degrades at long distances due to the large path loss. This demonstrates that the proposed method can balance the harvested energy and the rates of the two hops to maximize the system throughput in different scenarios.

Conclusions

In this paper, an optimal transmission strategy for a DF MIMO relay network with energy harvesting is investigated. We propose optimal precoders designed specifically for energy harvesting and for information transmission. For energy harvesting, an energy beamforming matrix is designed to scavenge the maximum power during the finite time. For information transmission, the traditional SVD method is adopted. Moreover, the optimal ratio of the time between energy harvesting and information transmission is also derived. Numerical results show that the proposed method outperforms the other traditional methods. In future work, optimal strategies with QoS constraints for multiple relay nodes will be investigated.

Figure 1: System model of the MIMO relay network with energy harvesting.

Figure 2: Achievable sum rate of the SWIPT relay network for different transmission strategies.

Figure 3: Achievable sum rate of the SWIPT relay network for different relay positions (total transmit power = 20 dBmW).
DFT in supermanifold formulation and group manifold as background geometry

We develop the formulation of DFT on a pre-QP-manifold. The consistency conditions, like the section condition and the closure constraint, are unified by a weak master equation. The Bianchi identities are also characterized by the pre-Bianchi identity. Then, the background metric and connections are formulated by using a covariantized pre-QP-manifold. An application to the analysis of DFT on a group manifold is given.

Introduction

Since Siegel proposed to formulate spacetime duality in superspace [1,2], double field theory (DFT) has been developed and investigated by many authors [3,4,5,6,7,8] with the aim to construct an effective theory of strings with manifest T-duality symmetry. The gauge transformations and diffeomorphisms are unified into the generalized Lie derivative of the doubled space [5], similarly as done in generalized geometry [9,10,11,12]. However, unlike generalized geometry, the gauge algebra of DFT generated by the generalized Lie derivative does not close. To achieve closure we need to constrain the algebra. One way to do so is to impose the section condition (strong constraint), a particular solution of which reduces the DFT action to the standard action of supergravity.

On the other hand, the section condition is not the only possibility to close the DFT algebra [8,13]. This was demonstrated via a Scherk-Schwarz [14] type compactification of DFT on the double torus, now known as generalized Scherk-Schwarz (GSS) compactification [15,16,8,17]. In the GSS compactification, the fields depend also on the internal space coordinates, which enter in a very particular way through the GSS ansatz. It has been shown that the resulting fluxes can be identified with the structure constants of the gaugings of gauged supergravity theories, giving a geometric interpretation to all of them, including the non-geometric fluxes. This is very interesting, since in the compactified directions the section condition is not imposed, i.e., the GSS compactification gives an alternative solution to the closure constraint. One aim of this paper is to provide a unified and geometric characterization of the closure constraint based on the supermanifold formulation. The supermanifold considered here is a non-negatively graded QP-manifold, i.e. a supermanifold with a graded Poisson structure (P-structure) and a nilpotent vector field (Q-structure), and its generalization called a pre-QP-manifold, which we use for the formulation of DFT. The advantage of the supermanifold approach is that its QP-structure gives a concise characterization of the underlying algebra/algebroid structure. After this preparation, we formulate DFT on a pre-QP-manifold and elaborate on the closure condition of the generalized Lie derivative and the meaning of the section condition in this wider structure. We shall see that the failure of the closure of the algebra can be understood as a failure of the classical master equation on the QP-manifold. However, unlike the original supermanifold approach [22,24], we do not make use of the section condition, since it is not the only possibility to achieve closure of the algebra [23,8,26]. From the pre-QP-structure formulation we obtain new criteria for the closure of the algebra of the generalized Lie derivative. We will also see that the O(D,D) transformation is still realized as a canonical transformation.
With the generalized metric and the generalized dilaton, the DFT action of the NS-NS sector [7] can be written in the generalized-metric form of Equation (2.6). In DFT, the diffeomorphisms and gauge transformations of the D-dimensional theory are unified into the generalized Lie derivative [5], similarly as in generalized geometry [9,10],

(L_Λ V)^M = Λ^N ∂_N V^M + (∂^M Λ_N − ∂_N Λ^M) V^N,

which reduces to the standard Lie derivative L_λ with respect to a D-dimensional vector λ in the supergravity frame. The antisymmetrization of the generalized Lie derivative of DFT is known as the C-bracket,

[Λ_1, Λ_2]^M_C = Λ_1^N ∂_N Λ_2^M − (1/2) Λ_{1N} ∂^M Λ_2^N − (Λ_1 ↔ Λ_2).

In the supergravity frame, the C-bracket reduces to the Courant bracket. The obstruction to defining a generalization of the diffeomorphisms by the generalized Lie derivative is that L_Λ does not satisfy the Leibniz rule; that is,

∆^M(Λ_1, Λ_2, V) = ([L_{Λ_1}, L_{Λ_2}] − L_{[Λ_1, Λ_2]_C}) V^M

does not vanish in general, whereas the corresponding expression for the standard Lie derivative L_λ does. Vanishing of ∆^M(Λ_1, Λ_2, V) means that the commutator of generalized Lie derivatives satisfies [L_{Λ_1}, L_{Λ_2}] = L_{[Λ_1, Λ_2]_C}, which is also called the closure condition. Closure is always guaranteed when the section condition is imposed,

∂_M ∂^M Φ = 0,  ∂_M Φ ∂^M Ψ = 0,

where Φ and Ψ denote any fields and gauge parameters of DFT. However, the section condition is not necessary to achieve closure of the algebra.

Generalized Scherk-Schwarz (GSS) compactification

In this paper, we also analyze the generalized Scherk-Schwarz (GSS) compactification of DFT, where the 2D-dimensional DFT is reduced to a (D − n)-dimensional gauged supergravity. The generalized Lie derivative of the 2D-dimensional DFT is twisted by the GSS ansatz, and the constraint on its internal space is relaxed compared to the section condition [15,16,8,17]. For the GSS compactification, it is convenient to start with the DFT action in the flux formulation, which is equivalent to the DFT action in the generalized-metric formulation (2.6) up to terms that vanish under the section condition [15]; this formulation provides a unified picture of the fluxes discussed in [22].

As in the original Scherk-Schwarz compactification ansatz [14], the generalized Scherk-Schwarz (GSS) compactification of DFT splits the 2D-dimensional target space with coordinates X into a 2d-dimensional external space with coordinates X and a 2(D − d)-dimensional internal space with coordinates Y, as X = (X, Y). The ansatz for the generalized vielbein E_Â^M(X), the generalized dilaton d(X), and the gauge parameter Λ^M(X) is given in terms of twist matrices [25,15]. The twist matrices are assumed to satisfy

U_Î^M ∂_M g(X) = ∂_Î g(X),    (2.24)

for any field g of the reduced theory, which means that the twist occurs only in the internal space and preserves the Lorentz invariance of the external spacetime of the reduced theory. Imposing this ansatz, the resulting theory becomes independent of the internal coordinates, and the DFT action is reduced to the effective action of the so-called gauged double field theory (GDFT) [16]. The gauge algebra of GDFT is inherited from the original DFT. The corresponding generalized Lie derivative of a generalized vector V̄(X) in the reduced theory can be derived from that of the original DFT by substituting the GSS ansatz. Thus, the generalized Lie derivative L̄ of GDFT is defined on the reduced fields and gauge parameters, which depend only on the external spacetime coordinates X. The algebra of L̄ closes by virtue of the closure constraint for the GDFT fields and the Jacobi identity of the structure constants f_ÎĴK̂. Note that the closure condition for the internal space is relaxed compared to the solution obtained by using the section condition.
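For orientation, a standard component form of the twisted generalized Lie derivative of GDFT and of the fluxes built from the twist matrices is sketched below (cf. [15,16]); the normalization and sign conventions of the flux f may differ from the ones used in this paper:

```latex
\big(\bar{\mathcal L}_{\bar\Lambda}\bar V\big)^{\hat I}
  = \bar\Lambda^{\hat J}\partial_{\hat J}\bar V^{\hat I}
  + \big(\partial^{\hat I}\bar\Lambda_{\hat J}-\partial_{\hat J}\bar\Lambda^{\hat I}\big)\bar V^{\hat J}
  + f^{\hat I}{}_{\hat J\hat K}\,\bar\Lambda^{\hat J}\bar V^{\hat K},
\qquad
f_{\hat I\hat J\hat K}=3\,\Omega_{[\hat I\hat J\hat K]},\quad
\Omega_{\hat I\hat J\hat K}=U_{\hat I}{}^{M}\partial_{M}U_{\hat J}{}^{N}\,U_{\hat K N}.
```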
Then, we define a pre-QP-manifold, which is a generalization of a QP-manifold that can describe the algebraic structure of DFT. On the pre-QP-manifold the closure condition is relaxed, which fits the generalized Lie derivative and gives a new understanding of the section condition. The algebra on a QP-manifold is a kind of simplified graded algebra as used in the BV- and BRST-formalisms; see e.g. [18] and references therein. There are two structures on this supermanifold. One is called the P-structure, which specifies a graded Poisson bracket and thus defines the derivations and vector fields on the supermanifold. The other is called the Q-structure, which is a nilpotent Hamiltonian vector field Q, an analogue of the BRST charge, whose nilpotency is imposed by the classical master equation. It is known that a QP-manifold of degree 2 gives a concise definition of a Courant algebroid [27] and thus fits the formulation of generalized geometry [9,10]. The QP-manifold, however, is too strict to apply to DFT, as one can imagine from the relation between double geometry and generalized geometry or, more concretely, from the relation between the C-bracket and the Courant bracket. We shall see that on a pre-QP-manifold we have more freedom to accommodate the DFT algebra, and we can obtain a new closure condition for the generalized Lie derivative, which gives another alternative to the section condition used in the original DFT. Pre-QP-manifold and derived bracket A P-manifold of degree n is a pair (M, ω), where M is a graded manifold with Z-graded coordinates and ω is a graded symplectic form of degree n defining the P-structure. In our context, we always consider the case of non-negative n. The graded Poisson bracket {−, −} is calculated from the graded symplectic form ω as

$$ \{f, g\} = (-1)^{|f|+1}\, \iota_{X_f}\, \iota_{X_g}\, \omega $$

for f, g ∈ C∞(M), where |·| denotes the degree. Here $X_f$ is the Hamiltonian vector field defined by $\iota_{X_f}\omega = -df$. Using graded Darboux coordinates $(q^a, p_a)$, the graded Poisson bracket has the local coordinate representation

$$ \{f, g\} = \frac{f\,\overleftarrow{\partial}}{\partial q^a}\, \frac{\overrightarrow{\partial}\, g}{\partial p_a} - \frac{f\,\overleftarrow{\partial}}{\partial p_a}\, \frac{\overrightarrow{\partial}\, g}{\partial q^a}. \qquad (2.34) $$

The graded Poisson bracket is of degree −n and satisfies the following graded versions of skew-symmetry, the Leibniz rule and the Jacobi identity:

$$ \{f, g\} = -(-1)^{(|f|-n)(|g|-n)}\, \{g, f\}, $$

$$ \{f, gh\} = \{f, g\}\, h + (-1)^{(|f|-n)|g|}\, g\, \{f, h\}, $$

$$ \{f, \{g, h\}\} = \{\{f, g\}, h\} + (-1)^{(|f|-n)(|g|-n)}\, \{g, \{f, h\}\}. $$

On the P-manifold we can define canonical transformations as follows. Let α ∈ C∞(M) be a degree n function. Then, the canonical transformation generated by α is defined by the exponential adjoint action,

$$ e^{\delta_\alpha} f = f + \{f, \alpha\} + \tfrac{1}{2!}\, \{\{f, \alpha\}, \alpha\} + \cdots, $$

which satisfies $e^{\delta_\alpha}\{f, g\} = \{e^{\delta_\alpha} f,\; e^{\delta_\alpha} g\}$ for any smooth functions f, g on M. On the P-manifold we can also define a Q-structure by specifying a degree n + 1 function Θ ∈ C∞(M). This function Θ defines a degree 1 graded vector field Q as

$$ Q = \{\Theta, -\}. $$

If this vector field is nilpotent, i.e., Q² = 0, it is called a homological vector field, and the corresponding condition on Θ is the classical master equation {Θ, Θ} = 0. On a pre-QP-manifold the nilpotency of Q is not imposed; nevertheless, also in this case we call Θ the Hamiltonian function of the pre-QP-manifold. As we shall see, in the pre-QP-manifold approach the classical master equation is replaced by another condition, and the section condition is just one of the possible solutions to that condition. Since the definition of a canonical transformation is independent of the Q-structure, canonical transformations on a pre-QP-manifold are defined in the same way. An important point for applying the pre-QP-manifold to DFT is the fact that the condition 6)

$$ \{\{\{\{\Theta, \Theta\}, f\}, g\}, h\} = 0 \qquad (2.47) $$

is the condition for the Leibniz identity 5) (2.46) of the derived bracket; we refer to (2.47) as the weak master equation. 5) If the bracket [·, ·] is graded antisymmetric, (2.46) is the graded Jacobi identity. In general, it is called the graded Leibniz identity. In this paper we simply call it the Leibniz identity. 6) This condition is also derived in [20] as the condition on the pre-QP-manifold, and later in the context of an L∞-algebra in [21].
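The derived bracket itself is used repeatedly below but its definition fell outside the quoted equations, so here is a hedged sketch of the standard construction as it appears in the graded-geometry literature; the paper's equation (2.46) presumably attaches to the Leibniz identity in its own sign conventions, which may differ from the schematic form given here:

```latex
% Derived bracket of a Hamiltonian function \Theta (standard definition):
[f, g]_{\Theta} \;=\; \{\{f, \Theta\}, g\},
\qquad f, g \in C^{\infty}(\mathcal{M}).

% On a QP-manifold, \{\Theta, \Theta\} = 0 implies the Leibniz identity
%   [f, [g, h]_\Theta]_\Theta
%     = [[f, g]_\Theta, h]_\Theta \pm [g, [f, h]_\Theta]_\Theta ,
% with the graded sign fixed by the shifted degrees. On a pre-QP-manifold
% the weaker condition (2.47), tested on the relevant functions f, g, h,
% is enough for the same identity, which is why (2.47) replaces the
% classical master equation in the DFT setting.
```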
Generalized Lie derivative on pre-QP-manifold In the following we give the formulation of the generalized Lie derivative of DFT on the pre-QP-manifold. It gives a characterization of the closure of the gauge algebra using the weak master equation. In order to construct the supermanifold formulation of DFT, we take a 2D-dimensional doubled space M as the base manifold and a graded symplectic manifold M of degree 2 over it, with degree 0 coordinates $X^M$, a degree 1 Darboux pair $(Q^M, P_M)$ and degree 2 coordinates $\Xi_M$. The symplectic structure on M is

$$ \omega = \delta X^M \wedge \delta \Xi_M + \delta Q^M \wedge \delta P_M, $$

which leads to the following graded Poisson brackets,

$$ \{X^M, \Xi_N\} = \delta^M{}_N, \qquad \{Q^M, P_N\} = \delta^M{}_N, $$

with all other brackets among the coordinates vanishing. In order to formulate DFT on the pre-QP-manifold, it is necessary to introduce the following coordinates (Q′, P′), which we call the DFT basis: linear combinations of the degree 1 coordinates built with the O(D,D) metric η, normalized such that

$$ \{P'^M, P'^N\} = \eta^{MN}, \qquad \{Q'^M, Q'^N\} = -\eta^{MN}, \qquad \{Q'^M, P'^N\} = 0. \qquad (2.53) $$

Then, the push-forward $j'_*$ and the pull-back $j'^*$ of a generalized vector field $V = V^M \partial_M$ are defined such that

$$ j'_*(V) = V_M\, P'^M \in C^\infty(\mathcal{M}). $$

We can also define a similar map for a 1-form on M. In the following, we identify the generalized vectors $V^M \partial_M$ with $V_M P'^M$ and omit writing $j'_*$ and $j'^*$ for simplicity. The O(D,D)-invariant inner product of generalized vector fields V and V′ can be defined by using the graded Poisson bracket of the corresponding functions on the supermanifold,

$$ \langle V, V' \rangle = \{V_M\, P'^M,\; V'_N\, P'^N\} = \eta^{MN}\, V_M\, V'_N. \qquad (2.58) $$

In order to formulate the generalized Lie derivative of DFT by using the pre-QP-manifold, we take the degree 3 function constructed from $\Xi_M$ and $P'^M$,

$$ \Theta_0 \propto \Xi_M\, P'^M, \qquad (2.59) $$

with the normalization constant fixed by matching the derived bracket to the generalized Lie derivative. It is straightforward to show that the derived bracket using this Θ₀ gives the generalized Lie derivative on a generalized vector field V,

$$ \{\{\Lambda, \Theta_0\}, V\} = \mathcal{L}_\Lambda V, $$

as well as the generalized anchor map for a function f,

$$ \{\{\Lambda, \Theta_0\}, f\} = \Lambda^M\, \partial_M f. $$

Thus, the Θ₀ in (2.59) is a suitable Hamiltonian for the supermanifold formulation of DFT. On the other hand, it is easy to see that the classical master equation (2.41) for Θ₀ fails: one finds

$$ \{\Theta_0, \Theta_0\} \propto \eta^{MN}\, \Xi_M\, \Xi_N, $$

which does not vanish. This means that for the algebra of the derived bracket with Θ₀, i.e., the generalized Lie derivative defined on a pre-QP-manifold, the closure condition has to be generalized as discussed in the previous subsection. There we have shown that the condition for the Leibniz identity of the derived bracket on a pre-QP-manifold is given by the weak master equation (2.47). In the DFT case, the weak master equation is

$$ \{\{\{\{\Theta_0, \Theta_0\}, f\}, g\}, h\} = 0. \qquad (2.63) $$

Eq. (2.63) gives the closure constraint of the generalized Lie derivative as follows. The gauge parameters of the generalized Lie derivative are generalized vectors; thus, on the pre-QP-manifold, we require the weak master equation for the generalized vectors V₁, V₂, V₃:

$$ \{\{\{\{\Theta_0, \Theta_0\}, V_1\}, V_2\}, V_3\} = 0. $$

This condition is equivalent to the closure constraint of the generalized Lie derivative in DFT. Bianchi identity and GSS twist in pre-QP-formalism In this section, we first analyze canonical transformations on the pre-QP-manifold. We show that the generalized vielbein, and with it the generalized flux, can be introduced by a canonical transformation of the Hamiltonian. Canonical transformation on supermanifold To define the generalized vielbein and the fluxes of DFT, we need to introduce a local flat frame. Correspondingly, we generalize the supermanifold by adding a second set of degree 1 coordinates carrying flat indices. We also define the corresponding DFT basis coordinates $\bar Q'^{\hat A}$ and $\bar P'^{\hat A}$, in the same way as the DFT basis coordinates $Q'^M$ and $P'^M$ defined in (2.53). Here, $\eta_{\hat A\hat B}$ and its inverse are used to lower and raise the flat indices. Canonical transformations are generated by degree 2 functions on the n = 2 supermanifold. All possible degree 2 functions made from P′ and $\bar P'$ are given by linear combinations of the following functions,

$$ A = A_{\hat A M}\, \bar P'^{\hat A}\, P'^M, \qquad T = \tfrac{1}{2}\, T_{MN}\, P'^M\, P'^N, \qquad \bar T = \tfrac{1}{2}\, \bar T_{\hat A\hat B}\, \bar P'^{\hat A}\, \bar P'^{\hat B}, \qquad (3.3) $$

where the parameters $A_{\hat A M}$, $T_{MN}$ and $\bar T_{\hat A\hat B}$ are functions on the base manifold carrying the indicated indices in each frame. We can, in principle, also consider degree 2 functions including Q′ and $\bar Q'$; however, we do not need them in the following discussions. We could also introduce degree 2 functions made from $\Xi_M$, but they do not generate canonical transformations of the fiber directions.
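Before each generator in (3.3) is discussed in turn, a hedged sketch of how such a degree 2 function acts may be useful. The computation below assumes the bracket normalization quoted above; the paper's own factors and signs may differ:

```latex
% Action of the degree 2 generator T = (1/2) T_{MN} P'^M P'^N on the
% DFT basis, assuming \{P'^M, P'^N\} = \eta^{MN} as above. Since the P'
% are odd, only the antisymmetric part of T_{MN} contributes:
\{P'^M, T\} = T_{NP}\, \eta^{MN}\, P'^P \equiv T^{M}{}_{P}\, P'^P ,

% so the exponential adjoint action is a linear frame rotation,
e^{\delta_T}\, P'^M = \big(e^{T}\big)^{M}{}_{N}\, P'^N ,
% which for antisymmetric T_{MN} is an O(D,D) transformation — matching
% the remark in the introduction that O(D,D) transformations are
% realized as canonical transformations.
```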
We discuss the canonical transformations generated by each function in (3.3) in the following. Canonical transformation by A and generalized flux The canonical transformation generated by A = θE rotates P′ into $\bar P'$ and shifts $\Xi_M$ by derivative terms of the vielbein, where $\Omega_{MNP} = E^{\hat A}{}_M\, E^{\hat B}{}_N\, E^{\hat C}{}_P\, \Omega_{\hat A\hat B\hat C}$ with $\Omega_{\hat A\hat B\hat C} = E_{\hat A}{}^M\, \partial_M E_{\hat B}{}^N\, E_{\hat C N}$. In the following, we consider the transformation with parameter θ = π/2. In this case, the transformation rules simplify, and the twisted Hamiltonian function becomes a frame-rotated kinetic term plus a cubic term in $\bar P'$, where $F_{\hat A\hat B\hat C}$ and $\Omega_{\hat A\hat B\hat C}$ are the generalized flux and the generalized Weitzenböck connection introduced in (2.19). Thus, we have obtained the generalized flux and the Weitzenböck connection as a flux generated by a canonical transformation of the Hamiltonian Θ₀. We have concluded that the second term in (3.12) is in fact a generalized flux by comparing with the explicit form of the vielbein $E_M{}^{\hat A}$ given in (2.19). This can also be proven directly, using the representation of the generalized flux defined by the derived bracket, as follows. We regard the generalized vielbein as a set of generalized vectors $E_{\hat A} = E_{\hat A}{}^M\, P'_M$ on the pre-QP-manifold. The first line is the definition of the generalized flux by the derived bracket. After applying the canonical transformation, we obtain the last line, where the Poisson brackets with $\bar P'_{\hat A}$, $\bar P'_{\hat B}$, $\bar P'_{\hat C}$ pick up the coefficient of $\bar P'^{\hat A}\, \bar P'^{\hat B}\, \bar P'^{\hat C}$ in the transformed Hamiltonian $e^{\frac{\pi}{2}\delta_E}\Theta_0$. Thus, we obtain the representation of the generalized flux on the pre-QP-manifold as

$$ F_{\hat A\hat B\hat C} = \{\{\{\, e^{\frac{\pi}{2}\delta_E}\, \Theta_0,\; \bar P'_{\hat A}\},\; \bar P'_{\hat B}\},\; \bar P'_{\hat C}\}. \qquad (3.14) $$

This explains why the generalized flux is generated by a canonical transformation of the Hamiltonian. Finally, we consider the O(D,D)-covariant canonical transformation generated by the function $\bar T$ in (3.3). This transformation leads to the same formulas as in (3.15) and (3.16), with $P'^M$ replaced by $\bar P'^{\hat A}$ and T by $\bar T$, respectively; for the Hamiltonian function we obtain the corresponding twisted expression. Generalized Bianchi identity on pre-QP-manifold In this subsection, we propose a formulation of the Bianchi identity on the pre-QP-manifold and derive the DFT Bianchi identities satisfied by the generalized flux and the Weitzenböck connection. On a QP-manifold, the Bianchi identity can be obtained from the classical master equation {Θ_F, Θ_F} = 0 for the Hamiltonian function Θ_F twisted by a flux, as discussed in [22]. In particular, the n = 2 QP-manifold can be applied to generalized geometry and to the Bianchi identities of its fluxes. For this purpose we introduce the most general Hamiltonian function Θ_F, which includes all possible fluxes. Since DFT on the pre-QP-manifold can be formulated by using only P′ and $\bar P'$, we need to consider the degree 3 Hamiltonian function consisting of $(X^M, \Xi_M, P'^M, \bar P'^{\hat A})$. It can be written by using six arbitrary tensors on the enlarged supermanifold, denoted by $\bar\rho$, ρ, F, Φ, Δ, Ψ, one for each independent degree 3 monomial built from Ξ, P′ and $\bar P'$. We have seen above that we can generate some of the fluxes by a canonical transformation of the Hamiltonian Θ₀ (2.59) without flux. We show that the Bianchi identities for the corresponding fluxes can be obtained by using the two Hamiltonians Θ₀, Θ_F combined with a canonical transformation, as follows. The Bianchi identity on the pre-QP-manifold can be defined by introducing a function B built from the pair (Θ₀, Θ_F) together with a canonical transformation, where α is a canonical transformation function of degree 2. Then, the condition on the pre-QP-manifold for the Bianchi identity is the vanishing of the function B, which we call the pre-Bianchi identity:

$$ B = 0. \qquad (3.23) $$

The generalized Bianchi identity of DFT can be obtained from the pre-Bianchi identity (3.23) in the following way. Here, we take the Hamiltonian function Θ_F of the form (3.24), in which the fields E, Φ and F are considered as independent objects, and for Θ₀ we take the standard flux-free Hamiltonian (2.59) [8,26]; see also [28].
The fourth equation (3.29) gives another generalized Bianchi identity, for $\Phi_{\hat A MN}$. The equation (3.30) does not give a new condition. Note that in the above derivation we have used the Θ_F given in (3.24) for simplicity. In principle, however, we can use the most general Hamiltonian with fluxes given in (3.21); as a result of the pre-Bianchi identity, we then obtain the condition that the redundant fluxes vanish. GSS twist as canonical transformation In this section, we show that the GSS ansatz (2.21) can be understood as a canonical transformation. The generalized Lie derivative (2.31) and the generalized flux (2.25) after the compactification will be derived by using canonical transformations on the pre-QP-manifold. For the GSS compactification in the supermanifold formalism, we also split the base manifold into internal and external spaces. We use the same notation for the coordinates $\mathbb{X} = (X, Y)$ as in section 2.1.2; that is, X is used for the 2(D − d)-dimensional external space and Y for the 2d-dimensional internal space. Then, the canonical transformation $e^{-\frac{\pi}{2}\delta_U}$ provides the GSS twist of the generalized vielbein $\bar E_{\hat A}{}^{\hat I}(X)$ and of the gauge parameter $\bar\Lambda^{\hat I}(X)$ of the reduced theory. When we assume that $T_{\hat I\hat J}$ depends only on Y, we can regard the matrix $T_{\hat I}{}^{\hat J}\, U_{\hat J}{}^M(Y)$ as a GSS twist matrix. In (3.37) and (3.38), the GSS twist is generated by the canonical transformation $e^{\frac{\pi}{2}\delta_U}$; on the other hand, when we take $U_{\hat I}{}^M = \delta_{\hat I}{}^M$ in (3.39) and (3.40), the GSS twist is generated by the canonical transformation $e^{\delta_T}$. In the following, we discuss the GSS twist by the canonical transformation $e^{\frac{\pi}{2}\delta_U}$. The twisted Hamiltonian function is given by the same equation as (3.12), with $E_{\hat A}{}^M$ replaced by $U_{\hat I}{}^M(Y)$. Here, the Weitzenböck connection $\Omega_{\hat I\hat J\hat K} = U_{\hat I}{}^M\, \partial_M U_{\hat J}{}^N\, U_{\hat K N}$ is made from $U_{\hat I}{}^M(Y)$, and the resulting flux $f_{\hat I\hat J\hat K} = 3\, \Omega_{[\hat I\hat J\hat K]}$ is assumed to be constant by the GSS ansatz. The generalized Lie derivative of the reduced theory is derived from the parent theory by evaluating the corresponding derived bracket; the right-hand side is calculated using the properties of the canonical transformation, and from the result we can read off the twisted bracket. Thus, this derived bracket realizes the generalized Lie derivative of GDFT (2.31). The closure condition for the derived bracket is provided by the weak master equation, which leads to the corresponding conditions for the generalized vectors and the structure constants, namely the closure constraint for the reduced fields and the Jacobi identity of $f_{\hat I\hat J\hat K}$. The dynamical field of the reduced effective theory resides in $\bar E_{\hat A}{}^{\hat I}$. Therefore, the generalized flux of the theory after the GSS compactification is calculated in the superspace formalism by applying the canonical transformation once more. This shows that the GSS twisted flux appears in the twisted Hamiltonian function in the same way as in (3.14). We summarize the correspondence between the DFT and the GSS-compactified DFT objects on the pre-QP-manifolds in Table 1; for example:

Table 1. Correspondence of objects on the pre-QP-manifold.
  object             | DFT                          | GSS
  generalized vector | $V_M(\mathbb{X})\, P'^M$     | $\bar V_{\hat I}(X)\, \bar P'^{\hat I}$

The Q-structure Θ₀ is replaced by Θ_GSS in the GSS-compactified DFT. Thus, the deformation of the background of DFT on the pre-QP-manifold is realized by a deformation of the Hamiltonian function. Here, we formulate the double geometry on a non-trivial background with the pre-QP-manifold. As we have seen in the previous sections, the structure of the generalized Lie derivative and the consistency conditions are characterized by the pre-QP-structure. For this purpose, we introduce the covariantized pre-QP-manifold and analyze the double geometry of the background. The background vielbein is an element of GL(2D), and the metric $\eta_{MN}$ is not constant in general.
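The next passage introduces an affine connection Γ and a spin connection W on this background. As a point of reference, here is a hedged sketch of the standard covariant derivative and vielbein postulate that such a pair usually satisfies; the index placements follow common conventions and may differ from the paper's:

```latex
% Covariant derivative on the doubled background with affine connection
% \Gamma and spin connection W (standard form; conventions may vary):
\nabla_M V^N \;=\; \partial_M V^N + \Gamma_{MP}{}^{N}\, V^P ,

% Vielbein postulate: the background vielbein is covariantly constant
% with respect to the pair (\Gamma, W), which ties the two connections:
\nabla_M E_{\hat I}{}^{N}
  \;=\; \partial_M E_{\hat I}{}^{N}
      + \Gamma_{MP}{}^{N}\, E_{\hat I}{}^{P}
      - W_{M\hat I}{}^{\hat J}\, E_{\hat J}{}^{N}
  \;=\; 0 .
```

The remark below that the vielbein postulate guarantees the equivalence of the generalized Lie derivatives defined on the two different frames is exactly the statement that the two covariantizations agree once this compatibility holds.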
As in standard geometry, we introduce the generalized affine connection and the covariant derivative. The generalized Lie derivative and the fluxes are also considered in the background geometry. Then, the fluctuation is introduced on this background. We apply this formulation to the DFT on group manifolds proposed in [29]. GL(2D) covariant formulation of pre-QP-manifold Here, $R_{MN\hat S\hat R}$ and $\bar R_{MN\hat I\hat J}$ are the curvature tensors defined by $\Gamma_{MN}{}^P$ and $W_{M\hat I}{}^{\hat J}$, respectively, where $E_{\hat I}{}^M$ is the inverse vielbein, $E_{\hat I}{}^M\, E^{\hat J}{}_M = \delta_{\hat I}{}^{\hat J}$. Furthermore, we assume that φ acts trivially on the base manifold coordinates, φ(X^M) = X^M. We also require the condition of preservation of the P-structure. The canonical transformation φ acts on the DFT basis $Q'^M$, $\bar Q'^{\hat I}$, $P'^M$ and $\bar P'_{\hat I}$ accordingly. Pre-QP-structure and gauge algebra Since the covariant coordinate $\Xi^\nabla_M$ realizes the background covariant derivative, we can formulate the pre-Q-structure on the covariantized pre-QP-manifold. The pre-Q-structure written with $\Xi^\nabla_M$ realizes the generalized Lie derivative in background covariant form. The simplest Hamiltonian function is the covariant analogue of (2.59),

$$ \Theta^\nabla_0 \propto \Xi^\nabla_M\, P'^M. \qquad (4.50) $$

The derived bracket of this Hamiltonian function defines the covariant generalized Lie derivative $\mathcal{L}^\nabla_\Lambda$, with generalized vector $\Lambda = \Lambda_M(X)\, P'^M$, acting on a generalized vector $V = V_M(X)\, P'^M$ on the background, where the covariant generalized Lie derivative is given by replacing the derivative in the generalized Lie derivative by $\nabla_M$:

$$ \mathcal{L}^\nabla_\Lambda V^M = \Lambda^N\, \nabla_N V^M + \big(\nabla^M \Lambda_N - \nabla_N \Lambda^M\big)\, V^N. $$

We also define the canonically transformed Hamiltonian function using the φ introduced in the previous subsection. Here we want to make some remarks. First, note that the vielbein postulate guarantees the equivalence between the generalized Lie derivatives (4.49) and (4.52) defined on the two different frames. Second, once the covariant generalized Lie derivative is defined, we can formulate the generalized torsion of DFT on the pre-QP-manifold. The generalized torsion of the background is then determined by

$$ T_{PNM} = \Gamma_{PNM} - \Gamma_{NPM} + \Gamma_{MPN}. \qquad (4.54) $$

Finally, note that we could consider other possibilities for Hamiltonian functions written with Q′ and $\bar Q'$, but in our discussion of the DFT case it is sufficient to consider the generalized vectors in the P′ and $\bar P'$ sector of the DFT basis. The closure condition of the generalized Lie derivative (4.52) is the weak master equation (2.47) for generalized vectors:

$$ \{\{\{\{\Theta^\nabla, \Theta^\nabla\}, V_1\}, V_2\}, V_3\} = 0. $$

The above weak master equation leads to a condition (4.56) on the spin connection $W_{M\hat I\hat J}$ and arbitrary generalized vectors V₁, V₂ and V₃. We discuss this condition order by order in the derivatives of the generalized vectors; then the following conditions are sufficient to satisfy (4.56). The first condition is the closure condition of the generalized Lie derivative, and it is satisfied with the section condition. The second and third conditions can be solved in various cases. Here we just show that the solutions for ordinary DFT and for DFT_WZW are included. 1) The second condition (4.58) is satisfied by taking a vanishing spin connection, corresponding to ordinary DFT. 2) The second condition (4.58) can also be satisfied by a choice characterized by the matrices κ and λ, defined as $\kappa_{MN} = A_{\hat I M}\, A^{\hat I}{}_N$ and $\lambda_{\hat I\hat J} = A_{\hat I}{}^N\, A_{\hat J N}$, respectively. In this case, the canonical transformations for A = θE are written as

$$ e^{\theta\delta_E}\, P'^M = P'^M \cos\theta + E_{\hat I}{}^M\, \bar P'^{\hat I}\, \sin\theta, \qquad (4.67) $$

$$ e^{\theta\delta_E}\, \bar P'^{\hat I} = -\, E^{\hat I}{}_M\, P'^M\, \sin\theta + \bar P'^{\hat I}\, \cos\theta. \qquad (4.68) $$

The coordinate $\Xi^\nabla_M$ is invariant under this canonical transformation. Then, the canonical transformation $e^{\theta\delta_E}$ of the Hamiltonian function $\Theta^\nabla_0$ can be evaluated directly. Applying the similar discussion to the flat-frame parameter $A_{\hat A\hat I}$, we can introduce the fluctuation vielbein $\bar E_{\hat A}{}^{\hat I}$.
When we take $A_{\hat A\hat I} = \frac{\pi}{2}\, \bar E_{\hat A\hat I}$, we obtain the corresponding canonical transformation rules, and the generalized flux can be calculated by the derived bracket, similarly to (3.14), as in (4.78). Pre-Bianchi identities Now we can consider the pre-Bianchi identity for DFT on the covariantized pre-QP-manifold. To define the B function (3.22), we take the Hamiltonian function with general fluxes (the covariant analogue of (3.21)), and for Θ₀ we take the $\Theta^\nabla_0$ given in (4.50). Since, as we have seen before, the canonical transformations $e^{\frac{\pi}{2}\delta_{\bar E}}\, e^{\frac{\pi}{2}\delta_E}$ generate the fluctuation on the background Hamiltonian $\Theta^\nabla_0$, we choose the α in eq. (3.22) as $\frac{\pi}{2}E$. Then, the B function follows by the same construction as in section 3. Application to DFT_WZW In this section, we apply our discussion to DFT_WZW and specify the pre-QP-structure. We assume the background space to be a group manifold G, so that we can regard the coordinate $\bar P'_{\hat I}$ of its tangent space TG as a generator of the Lie algebra of G via the injection map $j'_*(\bar P'_{\hat I}) = T_{\hat I}$. Then, the derived bracket of the $\bar P'_{\hat I}$ should reproduce the Lie bracket $[T_{\hat I}, T_{\hat J}] = f_{\hat I\hat J}{}^{\hat K}\, T_{\hat K}$; the left-hand side is calculated directly from the covariant Hamiltonian. Summary and Conclusion In this paper we formulated the algebraic structure of DFT on a pre-QP-manifold in a unified way. We have also shown that the GSS compactification fits into this formalism and gives the right description of the generalized flux in gauged DFT (GDFT). One advantage of the superfield formulation is its background independence, as can be seen, e.g., from the special structure of the derived bracket: all information on the background is completely contained in the Hamiltonian function Θ_GSS of the intermediate frame. From the geometrical point of view, it is natural to formulate the geometry by using a connection and a covariant derivative. Therefore, in the last section, we developed the covariantized pre-QP-manifold to formulate the background geometry and gave a consistent theory with the Hamiltonian Θ∇ instead of Θ_GSS. One important observation is the algebraic property of the Ξ∇ coordinate: the Poisson structure is preserved in the original (Q, P) coordinates as well as in the primed DFT basis. Note that the coordinate Ξ∇ is fixed by the requirement of conservation of the P-structure and the vielbein postulate. We have shown that the familiar geometric objects are obtained from the pre-QP-manifold through certain identifications. Thus, we have also shown the application of the superfield formulation to the group manifold case. A construction of DFT on group manifolds has been intensively discussed in [30] in the wider context of Poisson-Lie T-duality, which contains both abelian and non-abelian T-duality as special cases. The solution of the weak master equation in section 4 reduces consistently to the DFT_WZW theory discussed in [30]. Finally, we discuss the relation of our approach to the GSS compactification and to DFT on group manifolds in the supermanifold formulation. In section 3.3 we derived the GSS twist in terms of a canonical transformation. There, a GSS twist matrix $U_M{}^N(Y)$ is introduced via the canonical transformation of the degree 2 function $A = \frac{\pi}{2}U$ in (3.35). The GSS twisted vielbein is (3.37), and the twisted Hamiltonian function is (3.41). On the other hand, the GSS-compactified DFT can be regarded as a covariantized DFT whose background manifold is a generalized twisted torus. From this point of view, the background vielbein $E_{\hat I}{}^M$ is identified with the GSS twist matrix $U_{\hat I}{}^M(Y)$. With this identification, the total vielbein $E_{\hat A}{}^M$ is identified with the GSS twisted vielbein.
In our approach, this corresponds to the GSS twist matrix introduced by the canonical transformation $e^{\delta_A}$ in section 4.1.4. We identify the transformation function $A_{\hat I}{}^M$ with the GSS twist matrix $E_{\hat I}{}^M(Y)$. Then, the GSS twisted vielbein is obtained accordingly, and the GSS twisted Hamiltonian function (4.50) follows. The difference between the Hamiltonian function (3.41) and the one given here, (5.2), is due to the fact that the former was not written in covariant form, while the latter is covariant; the two differ by explicit connection terms. However, these terms do not affect the generalized Lie derivative, and we obtain the same formula for the generalized flux from the connection in the covariantized formulation. In order to see the correspondence to the original DFT structures, we restricted the transformation function $E_{\hat A}{}^M$ to an element of O(D,D) and discussed the canonical transformation $e^{\frac{\pi}{2}\delta_E}$; for the general θ case, the twisted Hamiltonian function takes the analogous form. D Double Field Theory on group manifolds Let us briefly recall the DFT on group manifolds defined in ref. [29]. D.1 Background vielbein, covariant derivative and fluctuation We consider a 2D-dimensional group manifold G and introduce local coordinates $X^M$ on it. The action of DFT_WZW is rewritten using the generalized flux [29]. The DFT on group manifolds has been developed as DFT_WZW, which is considered as a double field formalism of the closed string field theory (CSFT) for the Wess-Zumino-Witten (WZW) model [31].
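To close the group-manifold discussion, here is a hedged sketch of the statement that appendix D refers to: on a group-manifold background the Weitzenböck connection of the background vielbein yields constant generalized fluxes equal to the Lie-algebra structure constants. This is the standard DFT_WZW statement from [29,31], written in the index style used above, with conventions that may differ in detail:

```latex
% Background vielbein E_{\hat I}{}^M of the group manifold G, its
% Weitzenboeck connection, and the resulting constant generalized flux:
\Omega_{\hat I\hat J\hat K}
  = E_{\hat I}{}^{M}\, \partial_{M} E_{\hat J}{}^{N}\, E_{\hat K N},
\qquad
F_{\hat I\hat J\hat K} = 3\, \Omega_{[\hat I\hat J\hat K]} = f_{\hat I\hat J\hat K} .

% This matches the derived-bracket requirement of section 4, where the
% flat coordinates are injected as Lie-algebra generators:
j'_{*}(\bar P'_{\hat I}) = T_{\hat I},
\qquad
[T_{\hat I}, T_{\hat J}] = f_{\hat I\hat J}{}^{\hat K}\, T_{\hat K} .
```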
6,944
2018-12-09T00:00:00.000
[ "Physics" ]
Mesenchymal Stem Cell-Derived Extracellular Vesicles as Mediators of Anti-Inflammatory Effects: Endorsement of Macrophage Polarization Abstract Mesenchymal Stem Cells (MSCs) are effective therapeutic agents enhancing the repair of injured tissues mostly through their paracrine activity. Increasing evidence shows that besides the secretion of soluble molecules, the release of extracellular vesicles (EVs) represents an alternative mechanism adopted by MSCs. Since macrophages are essential contributors toward the resolution of inflammation, which has emerged as a finely orchestrated process, the aim of the present study was to carry out a detailed characterization of EVs released by human adipose-derived MSCs to investigate their involvement as modulators of MSC anti-inflammatory effects inducing macrophage polarization. The EV-isolation method was based on repeated ultracentrifugations of the medium conditioned by MSCs exposed to normoxic or hypoxic conditions (EVNormo and EVHypo). Both types of EVs were efficiently internalized by responding bone marrow-derived macrophages, eliciting their switch from an M1 to an M2 phenotype. In vivo, following cardiotoxin-induced skeletal muscle damage, EVNormo and EVHypo interacted with macrophages recruited during the initial inflammatory response. In injured and EV-treated muscles, a downregulation of IL6 and of the early marker of innate and classical activation Nos2 was concurrent with a significant upregulation of Arg1 and Ym1, late markers of alternative activation, as well as an increased percentage of infiltrating CD206pos cells. These effects, accompanied by an accelerated expression of the myogenic markers Pax7, MyoD, and eMyhc, were even greater following EVHypo administration. Collectively, these data indicate that MSC-EVs possess effective anti-inflammatory properties, making them potential therapeutic agents more practical and safer than MSCs. Stem Cells Translational Medicine 2017;6:1018–1028 INTRODUCTION Tissue repair, sometimes called healing, refers to the restoration of tissue architecture and function after an injury [1]. It is a multistep, dynamic process and consists of three consecutive and overlapping stages: inflammation, new tissue formation, and remodeling [2]. The transition from one stage to another is controlled and regulated by cell-released mediators, which are common to most regenerating tissues, with the exception of some specialized ones, such as liver and skeletal tissues, that possess distinctive forms of regeneration and follow separate pathways [3]. There is increasing evidence that the inflammatory microenvironment resulting from the initial cell interactions dictates how the healing process will proceed [4]. In particular, innate immune cells, such as macrophages, lead the inflammatory cascade reaction guiding revascularization and repair at injury sites [5,6]. Diversity and plasticity are distinctive characteristics of macrophages. Classically (M1) and alternatively (M2) activated macrophages represent two extremes of a dynamic state of activation. M1 macrophages exhibit potent antimicrobial properties, a high capacity to present antigen, and consequent activation of Th1 responses. Conversely, M2 macrophages possess the capacity to facilitate tissue repair and regeneration [7]. The contribution of mesenchymal stem cells (MSCs) to tissue repair has been addressed in a variety of disease models [8,9].
Contextually, their efficacy in the functional improvement of injured tissues was mostly related to a paracrine effect rather than to direct engraftment and differentiation [10][11][12]. We have recently demonstrated that in an inflammatory environment such as the one generated during the early phases of the wound healing process, MSC paracrine activity is significantly modulated, promoting a functional switch of macrophages from a pro- to an anti-inflammatory state, thus corroborating evidence showing that the mobilization of innate immune cells mediates the activation of regenerative processes [10,13]. Among the factors responsible for the paracrine effects of MSCs, extracellular vesicles (EVs) have recently been described as new players in cell-to-cell communication, serving as vehicles for the transfer between cells of membrane and cytosolic proteins, lipids, and genetic information [14,15]. EVs are defined as a mixed population of membrane-surrounded structures with overlapping composition, density, and sizes, including exosomes, ectosomes, microvesicle particles, and apoptotic bodies, in accordance with the recommendations of the International Society for Extracellular Vesicles (ISEV) [16]. Recent studies demonstrated that EVs represent physiologically relevant and powerful components of the MSC secretome, playing important roles in the local induction of tissue regeneration [8,12,17]. In the present study, we focused on the detailed characterization of EVs released by human adipose tissue-MSCs to evaluate whether the crosstalk between MSCs and cells of the innate immunity could be carried out by secreted EVs and whether these interactions occur also in a regenerative microenvironment such as the one generated following skeletal muscle damage. To mimic the typical environment established during tissue injury, EVs were isolated from the conditioned medium of MSCs harvested under both normoxic and hypoxic culture conditions (EVNormo and EVHypo, respectively). We here report that both types of vesicles acted as mediators of the dynamic interplay between MSCs and cells of the innate immunity in vitro and in vivo. EVs effectively triggered macrophage proliferation and polarization from an M1 to an M2 phenotype. Of note, the hypoxic preconditioning induced an intensified release of EVs enriched with miRNAs involved in different stages of the healing process. Taking advantage of a cardiotoxin (CTX)-induced skeletal muscle injury model, we confirmed a potent EV-mediated anti-inflammatory effect, through the significant downregulation of the inflammatory cytokine IL6 accompanied by the concomitant upregulation of IL10. At the same time, we also observed a downregulation of the M1 marker Nos2 and an increased expression of the putative M2 markers Arg1 and Ym1, together with an increased percentage of CD206pos cells infiltrating damaged and EV-treated muscles. Mice C57Bl/6 (MHC H2b haplotype) male mice between 3 and 5 months old were used. All mice were bred and maintained at the Animal Facility of "IRCCS Azienda Ospedaliera Universitaria San Martino - IST, Istituto Nazionale per la Ricerca sul Cancro." All animal procedures were approved by the Local Ethical Committee and performed in accordance with the national current regulations regarding the protection of animals used for scientific purposes (D. Lgs. 4 Marzo 2014, n. 26, legislative transposition of Directive 2010/63/EU of the European Parliament and of the Council of 22 September 2010 on the protection of animals used for scientific purposes).
Adipose Tissue-Derived MSC Isolation and Culture Subcutaneous adipose tissue in the form of liposuction aspirates was obtained from human healthy donors (n = 18) during routine lipoaspiration after informed consent. Protocols and procedures were approved by the local ethical committee. For more details regarding MSC isolation and characterization, see Supporting Information Materials and Methods. Bone Marrow-Derived Macrophage Isolation and Culture Bone marrow (BM)-derived macrophages (Mφ) were isolated from C57Bl/6 mice by flushing the BM with 5 ml of Phosphate Buffered Saline (PBS), as previously described [10]. Each primary culture was obtained from the BM of 3 mice, and a total of 6 primary cultures were used. Details are in Supporting Information Materials and Methods. Preparation of MSC Conditioned Media and EV Isolation EVs were isolated from the conditioned media derived from human MSCs. When cells reached a confluence of 80%, extensive washes in PBS were performed to remove any possible residue of FBS. The cells were transferred into EV-isolation medium (serum-free Dulbecco's Modified Eagle Medium (D-MEM) not supplemented with Fibroblast Growth Factor-2) and the culture was split into two subcultures maintained for 48 hours under normoxic (20% O2) and hypoxic (1% O2) conditions, respectively. EVs were isolated from the normoxic- and hypoxic-conditioned media (EVNormo and EVHypo) by differential centrifugation at 300g for 10 minutes, 2,000g for 20 minutes, and 10,000g for 30 minutes at 4°C to eliminate cells and debris. The obtained supernatants were depleted of residual floating cells and cell debris by filtration with 0.22 µm filter units (Merck Millipore Ltd, Vimodrone, MI, Italy), followed by two consecutive steps of ultracentrifugation at 100,000g for 90 minutes, including a washing step in PBS, to precipitate EVs. A Beckman Coulter ultracentrifuge (Beckman Coulter Optima L-90K ultracentrifuge; Beckman Coulter, Fullerton, CA) was used with swinging bucket rotors type SW28 and SW41Ti. EVs were collected in 100 µl of filtered PBS and used immediately after isolation. Transmission Electron Microscopy The morphological evaluations of isolated EVNormo and EVHypo, and of the corresponding MSC monolayers, were performed by transmission electron microscopy (TEM). For details, see Supporting Information Materials and Methods. Protein Quantification and Immunoblot Analysis The protein contents of isolated EVs were measured using a BCA protein assay kit (Thermo Scientific Pierce, Rockford, IL) following the manufacturer's instructions. Sample preparation for immunoblot analysis is described in Supporting Information Materials and Methods. Cell Viability and BrdU Cell Proliferation Assay 3 × 10^4 Mφ in serum-free medium were plated in 96-well plates for 24 hours in the presence or absence of either EVNormo or EVHypo. Cell proliferation was measured with the use of the Cell Proliferation Enzyme-Linked Immunosorbent Assay (ELISA), Bromodeoxyuridine (BrdU) kit (Roche, Mannheim, Germany), according to the manufacturer's instructions. Five independent experiments were performed. In Vivo Angiogenic Assay The in vivo angiogenic assay is described in Supporting Information Materials and Methods. Flow Cytometry Analysis At least nine independent preparations of both EVNormo and EVHypo were stained with 10 µM Cell Trace (Molecular Probes) in combination with the mouse anti-human monoclonal antibody (mAb) CD63 (Clone: H5C6) (BD Pharmingen) or the anti-human mAb CD105 (Clone: SN6) (eBioscience).
A set of microsphere suspensions (1 and 4 µm) (Molecular Probes) was used as a size reference. An unstained sample was acquired to detect the sample autofluorescence and to set the photomultipliers for all three channels used; fluorescent spill-over was controlled by spectral overlap adjustment, acquiring single-color stained tubes. Forward and side scatter channels (FSC and SSC) were used on a logarithmic scale visualized in bi-exponential mode. The FSC and SSC photomultipliers were set using background noise as the lower optical limit, acquiring a tube of sterile PBS. The threshold, set on the FSC channel, was regulated to reduce the noise progressively, allocating dots in the lower left corner of the plot, in order to clearly detect EVs. Details about the absolute count of EVs, the immunophenotype of Mφ cultured in the presence/absence of EVs and the immunophenotype of Mφ infiltrating the injured tibialis anterior (TA) muscles are reported in Supporting Information Materials and Methods. RNA Extraction The RNA extraction procedure for both EV pellets and TA muscles is described in Supporting Information Materials and Methods. microRNA Profiling The miRNA fraction of each sample was subjected to stem-loop RT-qPCR amplification, as described in Supporting Information Materials and Methods. Quantitative Real-Time PCR To validate the RNA sequencing data, we performed a qPCR analysis of miR-199a-3p, miR-126, miR-223, and miR-146b. Each microRNA was tested on three independent preparations of both EVNormo and EVHypo, and three independent experiments were performed. The miRNA-specific miScript Primer Assays were purchased from QIAGEN (MS00007602 for miR-199a-3p, MS00003430 for miR-126, MS00003871 for miR-223, and MS00003542 for miR-146b). Details are reported in Supporting Information Materials and Methods. Details about the quantification of IL-6, IL-10, Nos2, Arg1, Ym1, MCP1, eMyhc, Pax7, and MyoD mRNAs in the TA muscles of CTX- and EV-injected mice are described in Supporting Information Materials and Methods. Labeling and Internalization of EVs EVNormo and EVHypo (derived from three different MSC cultures) were labeled using PKH67 membrane-binding fluorescent labels according to the manufacturer's recommendations (Sigma-Aldrich, Allentown, PA). Three independent primary cultures of Mφ seeded on glass slides placed in 24-well plates were incubated at 37°C with labeled EVs at a concentration of 1 µg EVs/10,000 cells. Uptake was stopped after 3 hours by washing and fixation in 4% paraformaldehyde for 20 minutes. Immunofluorescence Analysis The immunofluorescence analysis performed on Mφ is included in Supporting Information Materials and Methods. Mouse Model of Cardiotoxin-Induced Muscle Injury Eight-week-old male C57BL/6 mice (six per group) were anesthetized with isoflurane. Twenty microliters of 10 mM cardiotoxin (CTX) (Sigma) in PBS were intramuscularly administered into the TA muscle of both legs. One microgram of EVs (diluted in 20 µl PBS) derived from normoxic or hypoxic MSCs was injected into the right and left TA muscles, respectively. Control mice were treated with 20 µl of vehicle solution. EVs or vehicle solution were injected 2 hours post-administration of CTX, and a boost of EVs was given 4 days after muscle injury. Mice were sacrificed 1, 2, and 7 days post-lesion induction, and the harvested TA muscles were snap-frozen in liquid nitrogen before further RNA extraction processing.
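As a concrete illustration of the relative-quantification step behind the qPCR validation above, the following is a minimal sketch of the textbook 2^(−ΔΔCt) arithmetic with U6 as the endogenous reference. The Ct values are invented placeholders; the authors' actual pipeline is the one described in their Supporting Information, and this sketch only shows the calculation such an analysis typically rests on:

```python
def fold_change(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method.

    ct_target_* : Ct of the miRNA of interest (e.g., miR-223)
    ct_ref_*    : Ct of the endogenous reference (here the U6 small RNA)
    test / ctrl : e.g., EV-Hypo as test sample, EV-Normo as control
    """
    d_ct_test = ct_target_test - ct_ref_test   # normalize the test sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize the control sample
    dd_ct = d_ct_test - d_ct_ctrl
    return 2.0 ** (-dd_ct)

# Invented placeholder Ct values, for illustration only:
fc = fold_change(ct_target_test=24.1, ct_ref_test=18.0,
                 ct_target_ctrl=26.3, ct_ref_ctrl=18.2)
print(f"fold change (test vs control): {fc:.2f}")  # 4.00 -> upregulated
```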
Histology and Morphometric Analysis The histological analysis of differentially-treated muscle tissues is described in Supporting Information Materials and Methods. Statistical Analysis All results were expressed as mean ± SD or as mean ± SEM from at least three independent experiments. Statistical comparisons between two groups were performed using an unpaired two-tailed Student's t test. Differences among multiple groups were statistically analyzed employing one-way ANOVA and Tukey's multiple comparisons test. A p value below .05 was considered statistically significant. All statistical analyses were performed using GraphPad Prism Version 6.0a (GraphPad Software, La Jolla, CA). Hypoxic Conditioning of MSCs Enhances the Release of EVs Endowed With Angiogenic Potential The cargo and function of EVs depend on their cells of origin, suggesting that intercellular communication through vesicles is a dynamic system, adapting its message depending on the conditions of the producing cells [18]. Changes in oxygen concentration affect many of the distinctive characteristics of stem and progenitor cells [19]. On this basis, we evaluated whether hypoxic conditioning of human adipose tissue-derived MSCs could influence their EV secretion. Confluent primary MSC cultures fulfilling the minimal criteria proposed by the International Society for Cellular Therapy [20] (Supporting Information Fig. 1A) were maintained for 48 hours in serum-free medium in a normoxic or hypoxic environment. After the starvation period, more than 85% of MSCs were viable in both culture conditions (Supporting Information Fig. 1B). As expected, MSCs cultured in hypoxic conditions had a higher level of HIF-1α expression than those cultured in normoxic conditions (Supporting Information Fig. 1B). After 48 hours of medium conditioning, the isolated EVNormo and EVHypo, and the corresponding MSC monolayers (MSCNormo and MSCHypo), were analyzed by TEM. TEM revealed the presence of larger shedding vesicles (microvesicles) as well as several multivesicular bodies (MVBs) containing exosomes within the cell cytoplasm in both culture conditions, indicating the release of a mixed population of EVs (Fig. 1A). In both samples, EVs appeared with a round-shaped morphology, mainly isolated or, less frequently, aggregated in small groups. They showed a diameter ranging from 40 to 250 nm, suggesting that the separation procedure selected a population of nano-scaled vesicles referable mostly, but not only, to exosomes. No morphological differences between EVNormo and EVHypo were observed with regard to their size, shape, or electron density (Fig. 1A). In order to characterize the isolated EVs, immunoblot and flow cytometry analyses were performed. Western blot analysis revealed that both EVNormo and EVHypo express the specific vesicular protein CD81, a member of the tetraspanin family, and Alix, which is involved in MVB formation (Fig. 1B). EVNormo and EVHypo were further characterized taking advantage of a multiparametric flow cytometry approach. To separate true events from background noise, EVs were defined as events that were included in the dimensional gate of 1 µm, which was established according to a well-defined light scatter profile of beads of absolute size (Fig. 1C). EVs were targeted with the Cell Trace labeling, in order to consider only intact membrane structures, along with either the mesenchymal marker CD105 or the vesicular marker CD63.
Both types of Cell Trace-labeled EVs expressed the CD105 and CD63 antigens, but the percentage of EVs co-expressing CD63 was significantly higher in the hypoxic condition compared to the normoxic one (p < .01) (Fig. 1D). The absolute quantification of EVNormo and EVHypo was determined by comparing their events to a known number of fluorescent bead events (Trucount beads, Fig. 1C). The hypoxic conditioning induced a significantly increased release of EVs when compared to the normoxic condition (p = .0318) (Fig. 1E). The observations that the regenerative properties mediated by MSCs, including the ability to stimulate angiogenesis, are mediated by EV secretion, and that hypoxia is a factor that favors the accumulation of pro-angiogenic molecules [21], led us to explore the angiogenic potency of MSC-EVs in vivo by performing the Matrigel plug assay. After 3 weeks of implantation, we observed that EVNormo and EVHypo induced the formation of vessel-like endothelial structures (Fig. 1F). Matrigel plugs in the presence of both types of vesicles were enriched in angiogenic molecules, such as Pecam1 and VegfA, when compared with control empty plugs (Fig. 1G). The presence of vessels along the periphery of the plugs was confirmed in all the experimental conditions by hematoxylin and eosin staining and CD31 immunostaining (Fig. 1H). Noteworthy, in EVHypo-treated plugs, a higher expression of Pecam1 and VegfA and an increased density of vessels with a larger diameter were detectable (Fig. 1G, 1H). EVs Secreted Under Hypoxia Express miRNAs Actively Involved in Different Stages of the Healing Process miRNAs influence many biological processes and can be taken up as EV cargo also by distant cells [22,23]. To compare the profile of miRNAs present in both EVNormo and EVHypo, each sample was tested for the expression of 384 different miRNAs by PCR array. In order to identify differentially expressed miRNAs in EVs released under hypoxic conditions, raw data were normalized using the small U6 RNA as endogenous reference. Setting EVNormo as the control sample and EVHypo as the test sample, the fold change was calculated by dividing the normalized gene expression profile of the test sample by that of the corresponding control sample. The hypoxic cell treatment during EV release induced the significant over-expression of 20 miRNAs and the under-expression of 48 miRNAs (Fig. 2A, 2B). We focused on four specific miRNAs that are implicated in the inflammatory (miR-223 and miR-146b) [24][25][26] and in the proliferative and differentiative phases (miR-126 and miR-199a) [27,28] of the healing process (Fig. 2C-2F). The significantly upregulated expression of these miRNAs was confirmed by quantitative Real-Time PCR, thus suggesting that hypoxia-driven pathways are critical for successful tissue repair. MSC-Derived EVs Promote Macrophage Polarization In the healing process, macrophages mediate the inflammatory phase by maintaining a pro-inflammatory phenotype in order to inhibit possible infections. However, they switch to a pro-resolving, anti-inflammatory phenotype as soon as the initial "emergency" is over [29,30]. To evaluate the role exerted by EVs in macrophage polarization, we began by characterizing the interactions of EVs with recipient cells. We tested whether BM-derived macrophages (Mφ) were able to internalize both EVNormo and EVHypo.
Mφ that were incubated for 3 hours in the presence of either EVNormo or EVHypo, previously stained with the fluorescent lipophilic membrane-diffuse dye PKH67, efficiently internalized EVs within their cytoplasm (Fig. 3A). This result was also confirmed by flow cytometry analysis performed on responding cells after the coculture period. More than 70% of EV-treated macrophages were positive for the expression of the FITC-fluorescent dye PKH67 used to stain EVs, and no FITC-positive signal was detectable in untreated macrophages (Fig. 3B). Cell proliferation of recipient Mφ maintained for 24 hours in serum-free culture conditions was evaluated using a BrdU-uptake assay. Macrophage proliferation was significantly increased following treatment with both EVNormo and EVHypo compared to untreated cells (p < .0001), and this increase was even greater in the hypoxic condition compared to the normoxic one (p = .0011) (Fig. 3C). Flow cytometric analysis of Mφ maintained in standard culture medium (Mφ w/o EVs) or in the presence of either EVNormo or EVHypo was performed. In standard conditions, Mφ expressed statistically significantly higher levels of the pro-inflammatory M1-like markers Ly6C, CD11b, CD40, and CD86 compared to the EV-treated cells (Ly6C: p = .0186; CD11b: p = .0017; CD40: p = .0073; CD86: p = .0019) and did not express any of the typical M2 markers, such as the scavenger receptor CD36, the mannose receptor CD206 or the αvβ3 integrin CD51 (Fig. 3D, 3E). Interestingly, 72 hours of treatment with both EVNormo and EVHypo induced a significant switch of recipient macrophages toward an anti-inflammatory phenotype (CD206: p < .0001; CD51: p = .0126; CD36: p = .0027) (Fig. 3D, 3E). It is noteworthy that EVs that were released under hypoxic conditions exerted a strengthened anti-inflammatory effect compared to EVs released under normoxia, downregulating the expression of the costimulatory molecule CD86 and the activation marker CD11b (p = .0095 and p = .0448, respectively) (Fig. 3E). Taken together, these data indicate that MSC-derived EVs, and in particular those released under hypoxic conditions, actively interact with key components of the innate immune system and influence their immunoregulatory and regenerative behavior. EVs Regulate the M1/M2 Balance of Infiltrating Macrophages in a Skeletal Muscle Injury Model In Vivo Skeletal muscle has a remarkable capacity for regeneration through a complex injury/repair process that includes inflammation, myofiber regeneration, and angiogenesis [31,32]. Observations that different Mφ subsets are associated with different stages of muscle regeneration led us to investigate whether EV treatment could influence macrophage polarization from the M1 to the M2 phenotype in vivo. We opted for CTX injury of the mouse TA muscle, a reproducible model that recapitulates all healing phases. Muscles subjected to CTX damage followed by injection of either EVNormo or EVHypo were examined at different times (Fig. 4A). One day after CTX injection, the histopathological evaluation of muscle damage was performed in all experimental groups. Normal myofibers with uniform size, polygonal shape and peripheral nuclei were observed in untreated (naive) mice (Fig. 4B). Following injury, CTX-treated mice, as well as EVNormo- and EVHypo-treated animals, had extensive necrotic muscle fibers with a vigorous mononuclear cell infiltrate (Fig. 4B).
However, at days 1 and 2 post-lesion induction, the ratio between the IL6 and IL10 cytokines (IL6/IL10) progressively decreased in EV-treated muscles compared to CTX-treated controls (day 1: p = .0024; day 2: p < .0001), thus indicating that the injection of both types of EVs significantly mitigated the inflammatory milieu within the injured tissues (Fig. 4C). At day 2, this observation was accompanied, in both types of EV-treated muscles, by a significant increase of the M2 markers Arginase 1 (Arg1) and Chitinase 3-like 3 (Ym1) (p = .0453 and p = .0087, respectively), parallel to a decreased expression of the M1 marker Nitric Oxide Synthase 2 (Nos2) (Fig. 4D-4F). The latter results were also confirmed by flow cytometry, analyzing the cells recovered from the damaged and/or EV-treated muscles. The percentage of CD206-positive (CD206pos) Mφ compared to the percentage of Ly6C-positive (Ly6Cpos) cells was significantly higher within the cells recovered from EVHypo-treated muscles compared to both CTX-treated and EVNormo-treated samples (p = .0006) (Fig. 4G). Given the important role of Mφ in muscle regenerative activities, chemokines that are known to attract and interact with these innate immune cells play pivotal roles in the process of muscle recovery after an injury. Among others, monocyte chemoattractant protein-1 (MCP-1) coordinates inflammation-dependent events involved in muscle regeneration [33]. Interestingly, at day 2 post-lesion induction, the expression level of MCP-1 was significantly upregulated in EVHypo-treated muscles compared to the other experimental conditions (p = .028) (Supporting Information Fig. 2A). Since CTX-induced skeletal muscle injury is an optimal model of muscle self-repair, we analyzed key genes playing a dominant role during the overlapping regeneration and remodeling phases that follow inflammation. At day 7, when compared to CTX-treated and EVNormo-treated muscles, EVHypo-treated muscles presented a significant upregulation of both the Paired Domain Transcription Factor 7 (Pax7) and Myogenic Differentiation Antigen (MyoD) genes, selectively expressed by activated satellite cells (p = .048 and p = .0006, respectively), as well as of embryonic myosin heavy chain (eMyhc), expressed by regenerating fibers (p = .018) (Supporting Information Fig. 2B). Concurrently, the progression of muscle regeneration and the prospective differences between EV-treated and CTX-treated muscles were confirmed by histological observations. As expected, many newly formed centrally nucleated fibers were present in CTX-treated muscles (Supporting Information Fig. 2C). It is well known that multinucleated muscle fibers form from the fusion of mononucleated myoblasts [34]. We observed that the number of mononucleated myoblasts was significantly decreased in both types of EV-treated muscles compared to the CTX controls (Supporting Information Fig. 2C-2E, 2G). Interestingly, in the same EV-treated muscles the number of fibers containing two or more centrally located nuclei was significantly increased compared to the CTX-injured muscles, and this increase was greater following EVHypo injection (Supporting Information Fig. 2C, 2D, 2F, 2G). These results suggest that MSC-derived EVs, and in particular those released under hypoxic conditions, accelerate the muscle regeneration process. DISCUSSION EVs represent novel players in various cell communication systems, being involved in the regulation of many routes of signaling pathways and intercellular information transfer [35].
Thanks to their vast range of properties, EVs have been successfully applied in different fields, such as tumor biology, immunology and regenerative medicine [36]. Stem/progenitor cells, and in particular MSCs, are active biological components of many regenerative medicine therapies [37]. Recent efforts to elucidate the mechanisms of action of these therapies have revealed an increasingly important role of the cells' paracrine activity in enhancing positive outcomes without significant cell engraftment [13,38]. We recently demonstrated a new role of MSCs in wound healing, showing that they can act as modulators of the inflammatory response, secreting cytokines and factors able to induce the switch of pro-inflammatory macrophages toward a pro-resolving, anti-inflammatory phenotype [10]. Indeed, the initial inflammation underlying all regenerative processes is finely coordinated to obtain an efficient outcome, and an altered identity of the inflammatory infiltrate can result in a persistent rather than resolved inflammatory phase [39]. Macrophages, which are an essential component of the inflammatory infiltrate, play important roles in the maintenance of tissue homeostasis [40]. In response to different signals, macrophages undergo a reprogramming into two different polarization states that mirror the Th1/Th2 nomenclature [41]. Classically activated M1 macrophages, induced by interferon-γ alone or in combination with microbial stimuli and/or inflammatory cytokines, exert pro-inflammatory activities. On the contrary, cytokines such as IL-4 and IL-13 induce the alternative activation of M2 macrophages, which become involved in inflammation resolution [42]. Since secreted vesicles represent a relevant component of the MSC regenerative milieu [43], in the present study we investigated the possible role of EVs in modulating the MSC paracrine capacity to actively interact with innate immune cells. Given that the presence of areas of hypoxia is a prominent feature of various inflamed, diseased tissues, contributing to modulate the MSC regenerative milieu, these interactions were evaluated after both normoxic and hypoxic cell conditioning [44]. We showed that: (a) hypoxic conditioning induced an increased secretion of EVs by MSCs, enriching the EV content in microRNAs involved in different phases of the healing process; (b) MSC-EVs acted as "switchers" of macrophage polarization toward an anti-inflammatory phenotype. The latter result was observed both in vitro and in vivo in a mouse model of skeletal muscle regeneration. Literature reports indicate that hypoxic conditioning of MSCs regulates the cargo and protein packaging into EVs [45]. As shown herein, the higher expression of both pro-angiogenic factors and specific microRNAs, such as miR-223, miR-146b, miR-126, and miR-199a, in response to hypoxia could be at least in part due to the higher number of EVs released by MSCs. Among the microRNAs carried by EVs, miR-223 represents a novel regulator of macrophage polarization, being responsible for suppressing classic pro-inflammatory pathways and enhancing alternative anti-inflammatory responses, whereas the enforced expression of miR-146b in human monocytes leads to a significant reduction in the production of several pro-inflammatory cytokines and chemokines, such as IL6 [25]. In addition, the increased expression of miR-126 and miR-199a plays important roles in the repair process, restoring vascular integrity and inducing cell differentiation, respectively [27,28].
An increasing number of literature reports indicate that MSCs possess the capacity to reduce inflammation and to promote tissue repair processes through their paracrine activity [13,46]. In particular, it was recently reported that lipopolysaccharide preconditioning of umbilical cord-MSCs increased the secretion of exosomes responsible for the switch of macrophages to an M2-like profile [47]. In line with this evidence, we here demonstrated for the first time that adipose tissue-derived MSCs release EVs endowed with potent anti-inflammatory capacities able to shift the balance of macrophage polarization toward an M2 profile, especially after hypoxic pre-treatment. The in vitro stimulation of GM-CSF-treated macrophages with either EVNormo or EVHypo led responding cells to increase their proliferation rate and to progressively acquire an M2 phenotype characterized by the expression of CD206, CD51, and CD36. The proper recruitment of macrophages is a key feature of efficient muscle regeneration [31,32]. Indeed, macrophages exert specific functions throughout the inflammatory response following muscle damage, which includes the sequential release of pro-inflammatory effectors, the phenotype shift and the activation of myogenic precursors [33]. In this context, CTX-induced skeletal muscle damage represents a highly reproducible model useful to study each step of the inflammatory cascade. The expression level of the typical pro-inflammatory Th1 cytokine IL6 was significantly downregulated in EV-treated muscles at days 1 and 2 post-lesion induction, which represents the timeframe in which maximum macrophage infiltration occurs [48]. This was strictly associated with a significant upregulation of IL10, a cytokine that contributes to promoting an anti-inflammatory microenvironment [49]. At the same time points, the dynamics of macrophage activation marker expression in response to EV administration were investigated. At day 2, the early marker of innate and classical activation, Nos2, was downregulated, whereas the expression of Arg1 and Ym1, late markers of alternative activation, was upregulated. This effect was even greater following EVHypo administration. In the EV-treated muscles, the changes in the expression of these early/late markers coincided with an increased percentage of CD206pos macrophages. MCP-1, also known as CCL2, is important in macrophage recruitment and activation. Mice deficient in CCL2/MCP-1 show impaired muscle regeneration, characterized by a decrease in the diameter of the new myofibers, a reduced number of capillaries, and fat accumulation [50]. In our experimental setting, the administration of hypoxic vesicles determined, at day 2, an accumulation of MCP-1 parallel to the macrophage shift toward an M2 phenotype. These concomitant events could underlie the increased expression, at day 7, of the myogenic markers Pax7 and MyoD, which are upregulated and activated in satellite cells, the increased expression of eMyhc, which is upregulated in regenerating myofibers, as well as the significantly increased number of newly formed multinucleated muscle fibers, thus indicating an acceleration of tissue repair triggered by EV administration. When developing novel regenerative medicine strategies, the rational control of inflammation represents a critical aspect to consider. In this context, the anti-inflammatory, pro-regenerative effects mediated by MSC-EVs could be exploited for therapeutic purposes.
From a translational perspective, the use of EVs, in comparison to either traditional cell-based therapies or more recent cell-free strategies based on the use of the MSC secretome, presents undeniable advantages. Compared with traditional cell-based therapies, the benefit of using EVs lies in the possibility of developing safer cell-free therapeutic approaches that could overcome the regulatory obstacles and clinical risks associated with the use of transplanted progenitor cells. Compared to the use of poorly characterized soluble factors, the advantage lies in the ability of EVs to interact with and reprogram the surrounding microenvironment, a consequence of the variety of their cargo, thereby influencing many biological processes, particularly in injured tissues. CONCLUSION This study demonstrates that MSCs cultured under both normoxic and hypoxic conditions release EVs endowed with anti-inflammatory effects. When co-cultured with responding BM-derived macrophages, EVs are efficiently internalized by responding cells, inducing, in the short term, an increase in their proliferation rate and shifting the balance toward an M2 pro-resolving phenotype. A significant enrichment in microRNAs involved in different phases of the healing process was detectable in EVs, especially those derived from hypoxia-conditioned MSCs. Direct administration of EVs in a CTX-induced skeletal muscle injury reduced the inflammatory response, upregulating key markers of alternative activation patterns and accelerating the expression of myogenic markers. These effects were even greater following EV Hypo administration. Although additional investigations of the mechanisms underlying the therapeutic effects of MSC-EVs are still necessary before proceeding with clinical trials, these results already provide the basis for the use of EVs as an alternative cell-free approach for the induction of regenerative processes.
7,462
2017-01-31T00:00:00.000
[ "Biology", "Medicine" ]
Two way workable microchanneled hydrogel suture to diagnose, treat and monitor the infarcted heart During myocardial infarction, microcirculation disturbance in the ischemic area can cause necrosis and formation of fibrotic tissue, potentially leading to malignant arrhythmia and myocardial remodeling. Here, we report a microchanneled hydrogel suture for two-way signal communication, pumping drugs on demand, and cardiac repair. After myocardial infarction, our hydrogel suture monitors an abnormal electrocardiogram through a mobile device and triggers nitric oxide release on demand via the hydrogel suture's microchannels, thereby inhibiting inflammation, promoting microvascular remodeling, and improving the left ventricular ejection fraction in rats and minipigs by more than 60% and 50%, respectively. This work proposes a suture for bidirectional communication that acts as a cardio-patch to repair myocardial infarction, remotely monitors the heart, and can deliver drugs on demand. All values are presented as mean ± SD, n = 5 independent replicates.
Figure S3. Suture tissue damage test. The DTMS and 5-0 silk suture were respectively threaded through the rat's back tissue and surgically knotted. After 7 days, the tissue was fixed and stained with HE. White arrows indicate DTMS and silk thread, respectively. Black arrows indicate areas of tissue damage, inflammation, and necrosis. Bar = 200 μm.
Figure S4. Suture tissue damage test. The DTMS and 5-0 silk suture were respectively threaded through the rat's back tissue and surgically knotted. After 7 days, the tissue homogenate was extracted and used for ELISA detection. a. IL-6, one-way ANOVA with multiple comparison tests; all values are presented as mean ± SD, n = 6 biologically independent replicates. b. MPO, one-way ANOVA with multiple comparison tests; all values are presented as mean ± SD, n = 8 biologically independent replicates.
Figure S6. Glucose levels in interstitial fluid extracted by DTMS. Blue dots represent the 24-hour interstitial fluid concentration in deep tissues collected by DTMS; red dots represent the peripheral blood glucose concentration measured by a blood-glucose meter.
Figure S10. a. The BMD101 Bluetooth module. b. Schematic diagram of DTMS perfusion and sensing functions. c. ECG signal measured by the BMD101 chip during rat heartbeats. d. ECG signal measured by the BMD101 chip during rat heartbeats; the red box marks the drug-injection period, during which the internal electrical signal was slightly disturbed before returning to normal.
Figure S12. a. NIR photothermal experiment of DTMS and PRIS in vitro. b, c. Heating curves of DTMS in vitro. The DTMS under the skin was irradiated with a 3 W 808 nm near-infrared laser at a distance of 25 cm for 10 min.
Figure S14. Photothermal bacteriostatic ability of sutures in Staphylococcus aureus solution. a. After near-infrared laser irradiation, each group of Staphylococcus aureus solution was further cultured. b. Quantitative statistics of the absorbance at 600 nm wavelength over 24 hours. All values are presented as mean ± SD, n = 7 independent replicates.
Figure S16. a. HUVEC live and dead cell staining; all groups were treated for 48 h and stained with calcein and PI. b. MTT colorimetry of the viability of HUVEC cells; all groups were treated with a gradient concentration of SNAP for 48 h. All values are presented as mean ± SD, n = 3 independent cell replicates.
Figure S17. SNAP-treated H9C2 cells with 100 μM H2O2 for 24 h. a. SNAP and H2O2 co-incubated with H9C2. b. MTT colorimetry of the cytotoxicity of SNAP; all values are presented as mean ± SD, n = 6 independent cell replicates. c. MTT colorimetry of the cytotoxicity of SNAP + H2O2; all values are presented as mean ± SD, n = 5 independent cell replicates.
Figure S18. Cardiac function of rats. Quantitative analysis of LVDD, LVDS and LVEF evaluated by echocardiography on days 60 (a-c) and 90 (d-f). One-way ANOVA with multiple comparison tests. All values are presented as mean ± SD, n = 5 biologically independent replicates.
Figure S21. Normal group cMRI based on tissue feature tracking of the cine sequence (a: diastole, b: systole).
851.2
2024-01-29T00:00:00.000
[ "Medicine", "Engineering" ]
Engineering a flux-dependent mobility edge in disordered zigzag chains There has been great interest in realizing quantum simulators of charged particles in artificial gauge fields. Here, we perform the first quantum simulation explorations of the combination of artificial gauge fields and disorder. Using synthetic lattice techniques based on parametrically coupled atomic momentum states, we engineer zigzag chains with a tunable homogeneous flux. The breaking of time-reversal symmetry by the applied flux leads to analogs of spin-orbit coupling and spin-momentum locking, which we observe directly through the chiral dynamics of atoms initialized to single lattice sites. We additionally introduce precisely controlled disorder in the site energy landscape, allowing us to explore the interplay of disorder and large effective magnetic fields. The combination of correlated disorder and controlled intra- and inter-row tunneling in this system naturally supports energy-dependent localization, relating to a single-particle mobility edge. We measure the localization properties of the extremal eigenstates of this system, the ground state and the most-excited state, and demonstrate clear evidence for a flux-dependent mobility edge. These measurements constitute the first direct evidence for energy-dependent localization in a lower-dimensional system, as well as the first explorations of the combined influence of artificial gauge fields and engineered disorder. Moreover, we provide direct evidence for interaction shifts of the localization transitions for both low- and high-energy eigenstates in correlated disorder, relating to the presence of a many-body mobility edge. The unique combination of strong interactions, controlled disorder, and tunable artificial gauge fields present in this synthetic lattice system should enable myriad explorations into intriguing correlated transport phenomena. INTRODUCTION The idea that the transport of quantum particles in a random environment can be completely arrested due to the interference of multiple transport pathways was first pointed out by Anderson six decades ago [1]. While Anderson considered the localization of electrons in disordered solids, the presence of electron-phonon coupling and electron-electron interactions prohibits direct observation of most single-particle localization phenomena in such systems, even at low carrier density. In contrast, quantum simulation experiments using light [2] or atoms [3] have become an important testbed for disorder physics, since in these systems the issues of lattice phonons and interparticle interactions are either naturally unimportant or can be precisely controlled. For cold atoms, the abilities to tune system dimensionality, applied disorder, atomic interactions, artificial gauge fields, and lattice geometry open up myriad possibilities for exploring novel localization phenomena. In the absence of interactions, Anderson localization is the generic fate of quantum states in lower-dimensional (d ≤ 2) systems featuring static, random potential energy landscapes and short-ranged tunneling [1,4]. In higher dimensions, the increasing density of states with increasing energy ensures the possibility of delocalization. The exploration of an energy-dependent localization transition, i.e., a mobility edge, has even been undertaken in atomic gases [5,6] in three dimensions through precise control over disorder and atomic state energies.
Cold atom techniques in principle also allow for the exploration of such physics in lower-dimensional systems, where mobility edges can be introduced by correlations in the applied disorder or modified lattice connectivities (e.g., through long-range tunneling). Despite the exquisite control over cold atom systems and the observations of localization in one dimension (1D) over a decade ago, for both nearly random disorder [7] and correlated pseudodisorder [8], single-particle mobility edges (SPMEs) in lower dimensions have gone unexplored. The reasons for this are somewhat technical - it is quite difficult to modify lattice connectivities, and the varieties of engineered disorder that have been explored in experiment have either been practically random (speckle disorder [5-7,9], with short-range correlations due to diffraction) or of a particular form of correlated disorder which, due to a peculiar fine-tuning, does not admit a SPME. In the latter case, the pseudodisorder that arises in a lattice system due to shifts of the site energies by an added, weaker incommensurate lattice is well described by the Aubry-André model [8,10-12]. While this form of correlated pseudodisorder allows for a localization transition in 1D, the fine-tuning of the cosine-distributed site energies and the cosine nearest-neighbor band dispersion results in an energy-independent metal-insulator transition, and thus the absence of a SPME. By deviating from this fine-tuned condition, either by modifying the band dispersion [13] or by modifying the form of the pseudodisorder [14], one can, in principle, controllably introduce a SPME in such a system. In this work, we add multi-ranged tunneling pathways to a one-dimensional lattice that features site-energy pseudodisorder described by the Aubry-André (AA) model. Specifically, we use our synthetic lattice system based on parametrically coupled atomic momentum states to engineer independently controllable nearest-neighbor (NN) and next-nearest-neighbor (NNN) tunneling terms (Fig. 1(a)). [Figure 1 caption (fragment): The two-dimensional zigzag lattice representation, formed by a rearrangement of the one-dimensional picture of (a). A uniform clockwise flux φ through each triangular plaquette is generated via NNN tunneling phases φ with alternating sign. (c) Atomic dispersion indicating first- (black arrows) and second-order (dashed red arrows) Bragg transitions used to couple NN and NNN lattice sites, respectively, in the momentum-space lattice.] The combination of NN and NNN tunneling pathways results in closed tunneling loops that can support a nontrivial flux (Fig. 1(b)), which we control directly through the complex phase of the various tunneling terms. This system realizes an effective zigzag chain with a tunable magnetic flux. With the combination of controlled pseudodisorder and tunable flux, we perform the first explorations of the interplay of disorder and artificial gauge fields. We observe direct evidence for a flux-dependent SPME in this system, through measurement of the localization properties of the extremal energy eigenstates. In addition to the SPME that results from multi-ranged hopping, we observe asymmetric (with applied flux) localization behavior of the system's lowest-energy and highest-energy eigenstates, caused by the presence of effectively attractive interparticle interactions in the lattice of momentum states [15].
The influence of interactions is even more strongly evident in the case of the 1D AA model with only NN tunneling, where a drastic shift in the localization transition is observed between low- and high-energy eigenstates, corresponding to a mobility edge driven purely by inter-particle interactions. EXPERIMENTAL METHODS To experimentally engineer effective zigzag chains, which are equivalent to a lattice model with NN and NNN tunneling terms, we coherently couple an array of discrete atomic momentum states with both first- and second-order Bragg transitions, as depicted in Fig. 1(c). Starting with atoms from a stationary Bose-Einstein condensate (BEC) of ~10^5 87Rb atoms, we apply a set of counter-propagating lattice laser beams with wavelength λ = 1064 nm, wavenumber k = 2π/λ, and angular frequency ω+ = 2πc/λ, allowing for quantized momentum transfer to the atoms in units of ±2ℏk. The parametric coupling of states separated in momentum by 2ℏk, which mimics NN tunneling, is realized by using a pair of acousto-optic modulators to write a controlled spectrum of frequency components onto one of the lattice beams. Starting with atoms at rest, the counter-propagating beams are able to couple the momentum states p_n = 2nℏk as synthetic lattice sites. For example, to create a NN tunneling link between adjacent momentum states p = 0 and p = 2ℏk, a first-order Bragg resonance (solid black arrows in Fig. 1(c)) is fulfilled by matching the photon energy difference of the two laser fields to the added kinetic energy of an atom moving with momentum p = 2ℏk. More generally, there exists a unique energy difference between any pair of adjacent states with momenta p_n and p_{n+1}, owing to the quadratic free-particle dispersion. In this way, the multiple frequency tones imprinted onto one Bragg laser field enable the simultaneous addressing of many Bragg resonances. In this study, we introduce the novel capability to engineer multi-range tunneling through the simultaneous addressing of first- and second-order Bragg transitions, shown in Fig. 1(c) as solid black and dashed red arrows, respectively. Because each of the spectral tones associated with a given NN or NNN tunneling term is unique, we are able to individually control each of the tunneling links in our synthetic lattice. Specifically, all of the site energies, tunneling amplitudes, and tunneling phases in our synthetic zigzag chains are individually controlled by the strength, phase, and frequency of a corresponding frequency component of the multi-frequency beam. For all of the studies described herein, a total of 21 synthetic lattice sites (momentum states) are coupled through first- and second-order Bragg transitions. In addition to local parameter control, this system supports site-resolved detection by a simple time-of-flight expansion period, during which the momentum states separate in space according to their momenta, after which absorption imaging is used to determine the population at each site. A more detailed description of this momentum-space lattice scheme can be found in Refs. [16-19]. HOMOGENEOUS GAUGE FIELD STUDIES We first demonstrate our control of a homogeneous synthetic gauge field in the zigzag lattice. We directly impose a synthetic magnetic flux φ on every three-site plaquette using engineered tunneling phases.
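Because the free-particle dispersion is quadratic, every NN and NNN link maps to its own distinct resonance frequency. As a quick numerical check of this uniqueness, the minimal sketch below computes the first few resonance frequencies from the values quoted above (λ = 1064 nm, 87Rb); the function names are ours, and standard physical constants are assumed.

```python
import numpy as np

hbar = 1.054571817e-34   # J s
M_Rb = 1.4431609e-25     # kg, mass of 87Rb
lam = 1064e-9            # m, lattice laser wavelength
k = 2 * np.pi / lam      # lattice wavenumber

def kinetic_energy(n):
    """Kinetic energy of the momentum state p_n = 2 n hbar k."""
    p = 2 * n * hbar * k
    return p ** 2 / (2 * M_Rb)

# Each NN (NNN) link n -> n+1 (n -> n+2) has a unique resonance frequency
# because the free-particle dispersion is quadratic.
for n in range(3):
    f_nn = (kinetic_energy(n + 1) - kinetic_energy(n)) / (2 * np.pi * hbar)
    f_nnn = (kinetic_energy(n + 2) - kinetic_energy(n)) / (2 * np.pi * hbar)
    print(f"NN link {n}->{n + 1}: {f_nn / 1e3:6.2f} kHz | "
          f"NNN link {n}->{n + 2}: {f_nnn / 1e3:6.2f} kHz")
```

The first two NN links come out near 8.1 kHz and 24.3 kHz, i.e., spaced by 8E_R/ℏ ≈ 2π × 16.2 kHz, consistent with the tooth spacings quoted in the off-resonant-excitations discussion below.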
[Figure 2 caption: Chiral dynamics in the zigzag lattice. (a) Band structure for φ/π = ±0.5 considering a two-site unit cell (yellow boxes in lattice cartoon), for tunneling ratio t′/t = 0.62. Color represents spin polarization ⟨σ⟩, i.e., the overlap of the quasimomentum eigenstate with the top (red, spin up) or bottom (blue, spin down) row of the lattice. Dashed black curves represent the folded band structure for t′/t = 0. q should be considered "quasiposition" in our momentum-space lattice and is given in terms of the unit-cell lattice spacing d = 4ℏk. (b) Population imbalance between sites 2 and −2 of the 21-site lattice, measured after 180 µs of dynamics (~1.05 ℏ/t), with optical density (OD) images of atomic populations at φ/π = 0, ±0.5 above. Dashed and solid curves represent an ideal simulation of the experiment using Eq. (1) and a full simulation of experimental parameters, respectively. (c,d) Site population dynamics for applied flux (c) φ/π = 0.5 and (d) φ/π = −0.5. Left to right: data, full simulation, and ideal simulation of the experiment. Arrows indicate the direction of chiral motion. Data for (b-d) were taken with averaged NN tunneling time ℏ/t = 176(2) µs and tunneling ratio t′/t = 0.622(3). All error bars denote one standard error of the mean. OD images in (b) and extracted site populations in (c,d) are plotted with the color scale in (b).] Because the plaquettes alternate pointing up and down, to generate a homogeneous positive flux φ we impose an alternating sign on the NNN tunneling phases, as shown in Fig. 1(a,b). The effective tight-binding Hamiltonian describing the 21-site zigzag lattice is then given by

Ĥ = −Σ_n [ t ĉ†_{n+1} ĉ_n + t′ e^{i(−1)^n φ} ĉ†_{n+2} ĉ_n + h.c. ],   (1)

where t (t′) is the NN (NNN) tunneling energy and ĉ†_n (ĉ_n) is the creation (annihilation) operator at site n. The synthetic gauge field, which can lead to the breaking of time-reversal symmetry, allows us to engineer an analog of spin-momentum locking in the zigzag lattice [20-26]. We consider the upper and lower rows of the lattice as an effective spin degree of freedom with (pseudo)spins σ = 1 and −1, respectively (Fig. 2(a)). Under conditions of broken time-reversal symmetry (φ ≠ 0, ±π) we expect to observe chiral trajectories for atoms "polarized" on one row of the lattice. The band structure of the lattice (shown for the tunneling ratio t′/t = 0.62 used in experiment) shows this correlation between the sign of the group velocity and the (colored) spin/row degree of freedom [19]. The two bands here reflect the two-site unit cell of the lattice, highlighted in yellow boxes. To explore this spin-momentum locking in experiment, we initialize atoms on the lower row at site 0 and quench on the tunnel couplings according to Eq. (1). With zero applied flux, the population delocalizes across the lattice symmetrically, as shown in the top middle optical density (OD) image of Fig. 2(b). For positive flux φ/π = +0.5 (right panel), population initially in site 0 moves towards lattice site 2, corresponding to counter-clockwise chiral motion. Under a negative flux φ/π = −0.5 (left panel), population moves in a clockwise fashion to lattice site −2. These observed chiral flows for φ = ±π/2 are clear signatures of spin-momentum locking. By tuning the applied flux, we map out the entire range of chiral behavior, as shown in Fig. 2(b), bottom. Here we plot the population imbalance P_2 − P_−2 between lattice sites 2 and −2, such that a positive (negative) value of the imbalance indicates counter-clockwise (clockwise) motion. The data agree qualitatively with an ideal simulation of the experiment using only Eq. (1) (dashed curve), but agree more closely with a full simulation of the system parameters (solid curve), which considers the exact form of atomic coupling to the many laser frequency components, accounting for off-resonant Bragg couplings [19].
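The ideal quench dynamics can be sketched numerically. The snippet below builds a 21-site single-particle matrix using our reconstructed form of Eq. (1), evolves an atom initialized at site 0 for ~1.05 ℏ/t, and prints the imbalance P_2 − P_−2. The sign conventions of the alternating NNN phase (and hence which flux sign maps to counter-clockwise motion) are assumptions on our part, and interactions and off-resonant couplings are ignored.

```python
import numpy as np

N = 21                    # synthetic lattice sites, physical labels -10..10
t, tp = 1.0, 0.62         # NN and NNN tunneling energies (units of t)
phi = 0.5 * np.pi         # flux per triangular plaquette

def zigzag_hamiltonian(t, tp, phi, N):
    """Single-particle matrix for the reconstructed Eq. (1)."""
    H = np.zeros((N, N), dtype=complex)
    for i in range(N - 1):
        H[i + 1, i] = -t                                    # NN links
    for i in range(N - 2):
        H[i + 2, i] = -tp * np.exp(1j * ((-1) ** i) * phi)  # NNN links, +/- phi
    return H + H.conj().T

H = zigzag_hamiltonian(t, tp, phi, N)
psi0 = np.zeros(N, dtype=complex)
psi0[N // 2] = 1.0        # atom initialized at site n = 0

tau = 1.05                # evolution time in units of hbar/t (~180 us here)
E, V = np.linalg.eigh(H)
psi = V @ (np.exp(-1j * E * tau) * (V.conj().T @ psi0))
P = np.abs(psi) ** 2
# Phase conventions fix which flux sign gives counter-clockwise motion.
print("imbalance P_2 - P_-2 =", P[N // 2 + 2] - P[N // 2 - 2])
```

Flipping the sign of `phi` reverses the sign of the printed imbalance, mirroring the chiral reversal seen between the left and right panels of Fig. 2(b).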
We are also able to directly observe the fully site-resolved chiral dynamics of initially localized atomic wave packets, as shown in Fig. 2(c,d). For positive flux, we see that atomic population moves counter-clockwise from site 0 to site 2, and further on to sites 4 and 6, remaining confined to the bottom row. Because the initial state (site 0) does not project entirely onto states with positive group velocity, a portion of the population stays near the center plaquette and oscillates between site 0 and sites ±1. Off-resonant Bragg coupling causes deviations from the ideal simulation (right), but these major qualitative features remain present in both the data (left) and the full simulation (middle). For the case of negative applied flux, we observe the opposite chiral behavior, demonstrating that the nature of the spin-momentum locking can be controlled by the applied synthetic flux. LOCALIZATION STUDIES Localization phenomena in disordered quantum systems depend intimately on the properties of the applied disorder and on the connectivity between regions of similar energy. For random potential disorder in three dimensions, a localization-delocalization transition is assured for states with energies beyond a critical value due to an increasing density of states. For a given disorder strength, a mobility edge, or energy-dependent localization transition, is found in such a system [5,6]. In lower dimensions, for truly random potential disorder, all energy states remain localized in the thermodynamic limit even for arbitrarily small strengths of disorder [4]. Considering instead the influence of correlated pseudodisorder, one finds that the localization physics is strongly modified, with delocalization and mobility edges permitted even in lower dimensions. One form of quasiperiodic pseudodisorder that has been of interest to quantum simulation studies with both light [27] and atoms [8] is that described by the diagonal AA model. Interest in this model has stemmed in part from its intriguing localization phenomenology and connections to the Hofstadter lattice model [10,28,29]. Experimental interest in this form of disorder has also been driven by the relative ease of its realization through the overlap of two incommensurate optical lattices [8]. The AA model of pseudodisorder has interesting properties in the context of SPMEs. The highly correlated disorder allows for the possibility of metallic, delocalized states in lower dimensions. However, a subtlety arises due to a correspondence between the distribution of pseudodisorder - characterized by quasiperiodic, cosine-distributed site energies - and the cosine dispersion in a NN-coupled 1D lattice. This fine-tuning results in a metal-insulator transition that occurs at the same critical disorder value (in units of the tunneling energy) for all energy eigenstates, and thus the absence of a mobility edge. By moving away from this fine-tuned scenario in any number of ways - by introducing longer-range hopping [13], by modifying the pseudodisorder correlations [14], or by adding nonlinear interactions [11,12,30-32] - a SPME can be introduced into the AA model. The addition of longer-range tunneling, as in our zigzag lattice, allows for the band dispersion to be modified from its simple cosinusoidal form.
For a flux of φ = ±π/2, as shown in Fig. 2(a), increasing the tunneling ratio t′/t from zero leads to a deformation of the low-energy band structure from quadratic, to quartic, to a double-well structure [33-35], with a symmetric modification of the band energies at high energy. The high ground-state degeneracy of the quartic band in this system, and of flat bands in similar multi-range hopping models, has attracted great interest [36-38]. Such systems promise interesting localization properties under disorder [13], and the inherent high single-particle degeneracy allows for the study of emergent physics driven by interactions [36-39]. For all other flux values (φ ≠ ±π/2) the dispersion of the bands at low and high energies is asymmetric, and this system permits the localization properties of the extremal energy eigenstates to be tuned through modification of the effective mass at low and high energies. Here, we study the localization properties under the AA model on a 1D lattice and on the multi-range hopping zigzag lattice, observing evidence for an interaction-induced mobility edge as well as the emergence of a flux-dependent SPME. 1D Aubry-André localization transition We first examine the localization properties of the one-dimensional AA model, or the t′/t = 0 limit of the zigzag lattice. Figure 3(a) shows this model's pseudodisordered distribution of site energies ε_n = ∆ cos(2πβn + ϕ), for an irrational periodicity β = (√5 − 1)/2 and a given value of the phase degree of freedom ϕ. Under this model, all energy eigenstates experience a transition from delocalized metallic states to localized insulating states at the same critical disorder, (∆/t)_c = 2, for an infinite system size. To probe the crossover in our finite 21-site system, we initialize various energy eigenstates and explore their localization properties as a function of ∆/t. The experiment begins with population at site 0 (the BEC at rest) with all tunnelings turned off. In this initial limit of infinite disorder, (∆/t)_i = ∞, all eigenstates are trivially localized to individual sites of the lattice, with a vanishing localization length. We can initialize our atoms in a particular energy eigenstate of the system through the choice of ϕ, as the eigenstates and eigenstate energies are solely determined by the site energies in this t = 0 limit. We then slowly ramp the magnitude of the tunneling energy to a final value, and probe the localization properties of the prepared eigenstate as a function of ∆/t. The ramp of t to its final strength t/ℏ = 2π × 1013(9) Hz (corresponding to a tunneling time of ℏ/t = 157(1) µs, determined through two-site Rabi oscillations) is linear and performed over 1 ms, slow enough to largely remain within the prepared eigenstate. In each experiment, the disorder strength is fixed to a given value ∆, such that the tunneling ramp (always to the same t value) can be seen as traversing in parameter space from ∆/t = ∞ to the chosen final value (shown as an arrow in Fig. 3(b)). We expect that for final values with ∆/t > (∆/t)_c, the population should largely remain localized to the initial site, whereas for ∆/t < (∆/t)_c we should see population begin to delocalize across the lattice. In Fig. 3(b), we plot the measured population outside the central three sites, P_out, averaged over four realizations of the AA phase ϕ/π = {0.96, 0.64, 1.35, 1.88} corresponding to energy eigenstates {|ψ0⟩, |ψ7⟩, |ψ7⟩, |ψ18⟩}, where |ψ0⟩ is the ground state and |ψ20⟩ is the highest excited state.
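A minimal numerical illustration of this NN-only AA limit follows: it assembles the 21-site AA Hamiltonian for one of the quoted phase values and tracks how localized the extremal eigenstates are as ∆/t crosses the critical value of 2. We use the inverse participation ratio (IPR) as a convenient localization diagnostic; this choice is ours (the experiment instead measures P_out after a ramp).

```python
import numpy as np

N = 21
beta = (np.sqrt(5) - 1) / 2
phase = 0.96 * np.pi      # one of the AA phase values quoted above

def aa_hamiltonian(delta_over_t, phase, N=N):
    """NN-coupled Aubry-Andre chain with t = 1 and sites n = -10..10."""
    n = np.arange(N) - N // 2
    eps = delta_over_t * np.cos(2 * np.pi * beta * n + phase)
    off = -np.ones(N - 1)
    return np.diag(eps) + np.diag(off, 1) + np.diag(off, -1)

for d in (1.0, 2.0, 4.0):
    E, V = np.linalg.eigh(aa_hamiltonian(d, phase))
    ipr = np.sum(np.abs(V) ** 4, axis=0)   # ~1/(number of occupied sites)
    print(f"Delta/t = {d:>4}: IPR ground = {ipr[0]:.2f}, "
          f"IPR highest = {ipr[-1]:.2f}")
```

Well below the transition the IPR of both extremal states is small (weight spread over many sites), while well above it the IPR approaches unity, mirroring the localization crossover probed in Fig. 3.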
As expected, the measured delocalized fraction is almost entirely absent for large disorder, and grows steeply for ∆/t < (∆/t)_c. We find excellent agreement between our ϕ-averaged measurements and numerical simulation results based on our experimental ramp (dashed curve, idealized simulations ignoring off-resonant Bragg couplings) in Fig. 3(b), suggesting the observation of a localization crossover that is broadened due to finite-size effects as well as the finite ramp duration. This same behavior can also be seen in the integrated optical density data, shown in the inset, which directly shows the averaged site populations for each final disorder value ∆/t. For large disorder, population remains localized to the initial site, while the metallic regime shows population spreading out to sites n = ±7. The data for individual energy eigenstates are also shown, both as integrated optical density images in Fig. 3(c) and via the P_out observable in Fig. 3(d). While all four data runs show localization crossovers, their positions in terms of a critical disorder-to-tunneling ratio (∆/t)_c differ according to the state energies. Visually, the ground state |ψ0⟩ appears to localize for smaller disorders than the intermediate energy eigenstates, with the highly excited state |ψ18⟩ requiring the largest critical disorder strength for localization. While some of the broadening of the transition observed in Fig. 3(b) can be attributed to effects of finite size and finite ramp durations, to a large degree it is explained by this averaging over unique localization transitions of different energy eigenstates. The difference in localization properties for different energy eigenstates runs counter to our expectations of an energy-independent transition for the NN-coupled AA model, but can be explained by the presence of nonlinear atomic interactions in our momentum-space lattice [15,19]. In particular, the interactions between indistinguishable bosons in momentum space are effectively attractive and site-local, in the sense that direct interactions are present for collisions between two atoms occupying any pair of momentum modes, while exchange interactions are present only when two identical bosons occupy distinguishable modes [40,41]. Thus, while the momentum-space interactions are physically long-ranged and repulsive, they give rise to an effective local attraction. For atoms initially prepared at the site with lowest energy, attractive interactions can be seen to bring atoms further away from tunneling resonance with other sites (Fig. 3(d), inset). Thus, such a state should remain localized even when the disorder drops below the single-particle critical value. In contrast, for atoms prepared at the highest energy site, attractive interactions effectively lower the total site energy and bring the atoms closer to tunneling resonance with the unoccupied lower-energy sites of the lattice (Fig. 3(d), inset). Then, by filling the high-energy sites with attractively interacting bosons, the disorder potential can be effectively smoothed out at high energies by atomic interactions [30]. This behavior for our effectively attractive momentum-space interactions is exactly the opposite of that found for real-space repulsive interactions, the influence of which has previously been studied on the ground-state localization properties of the AA model [30]. The simulation curves in Fig.
3(d) take into account the effective attractive interactions present in our system at an approximate, mean-field level (also ignoring the inhomogeneous atomic density and neglecting off-site contributions of the effective attraction, which arise due to partial indistinguishability of atoms in different momentum states resulting from superfluid screening [19]). The simulations assume a mean-field interaction based on our condensate's central mean-field energy U0/ℏ ≈ 2π × 860 Hz (as measured through Bragg spectroscopy), which is of the order of the single-particle tunneling energy t/ℏ = 2π × 1013(9) Hz. To account for the inhomogeneous density distribution, we take a weighted average over homogeneous mean-field energies ranging from 0 to the peak mean-field energy U0 to get an average mean-field energy of U/ℏ ≈ 2π × 500 Hz. We then use this average value as a homogeneous mean-field energy in our simulations. These simplified simulation curves already reproduce well the observed shifts of the localization transitions for the low- (|ψ0⟩) and high-energy (|ψ18⟩) states. These direct observations of interaction-induced localization and delocalization for low- and high-energy states, respectively, are indicative of a many-body mobility edge. Such measurements are enabled by our unique ability to stably prepare any particular eigenstate in our synthetic lattice. Localization studies in zigzag chains With the addition of longer-range tunneling, the energy-independent transition of the simple 1D AA model begins to depend critically on the eigenstate energy even at the single-particle level. By tuning the NNN tunneling strength and the artificial flux in our effective zigzag chains, we can introduce a tunable SPME through band-structure engineering. While in the demonstration of control over flux and the observation of spin-momentum locking in Fig. 2 we employed a tunneling ratio of t′/t ≈ 0.6, here we work at a smaller value of t′/t ≈ 1/4. Under this condition, a maximal difference in the band dispersion at low and high energies appears for flux values of 0 and π, where a quartic dispersion appears at high and low energies, respectively. To probe the mobility edge, we prepare the two extremal energy eigenstates of the system, the ground state (GS) and the highest excited state (ES), and compare their localization properties. As in the 1D study, our experiment begins with all atomic population prepared at site 0 with all tunnelings turned off, i.e., in the infinite-disorder limit of the system (∆/t = ∆/t′ = ∞), where all energy eigenstates are localized to individual sites of the lattice. To initialize the atoms in a particular energy eigenstate of the system, we simply vary the AA phase: ϕ = π for the GS and ϕ = 0 for the ES, such that the initially populated site 0 is the global energy minimum or maximum, respectively. In short, we track how the prepared eigenstate evolves as the parameters of the Hamiltonian, given by

Ĥ = −Σ_n [ t ĉ†_{n+1} ĉ_n + t′ e^{i(−1)^n φ} ĉ†_{n+2} ĉ_n + h.c. ] + Σ_n ε_n ĉ†_n ĉ_n,   (2)

are smoothly and slowly varied to some final desired conditions of ∆/t for fixed tunneling ratio t′/t and fixed flux φ. To help ensure adiabaticity over a large part of the parameter ramp, an extra potential offset of strength V is added at the initial site n = 0, such that the modified site energies are given by ε_n(V) = ∆ cos(2πβn + ϕ) − V δ_{n,0}. By setting V > 0 (V < 0) for the GS (ES), we further separate the initial eigenstate from the rest of the spectrum by a potential well (hill).
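The eigenstate-loading protocol just described can be sketched numerically, assuming our reconstructed form of Eq. (2): the tunnelings ramp on linearly while the central well of depth V0 ramps off, and the final overlap with the target ground state is reported. The well depth, step count, and ramp duration (here ~6 ℏ/t, roughly the 2 ms quoted below for t/ℏ ≈ 2π × 0.5 kHz) are illustrative assumptions.

```python
import numpy as np

N, beta = 21, (np.sqrt(5) - 1) / 2
tf, tpf = 1.0, 0.25            # final tunnelings (units of the final t)
delta, phase, phi = 3.0, np.pi, np.pi   # disorder, AA phase (GS prep), flux
V0 = 4.0                       # initial well depth at n = 0 (assumed value)
T_ramp, steps = 6.2, 2000      # ~2 ms expressed in units of hbar/t

n = np.arange(N) - N // 2
eps = delta * np.cos(2 * np.pi * beta * n + phase)

def H_of(s):
    """Hamiltonian of the reconstructed Eq. (2) at ramp fraction s in [0, 1]."""
    H = np.diag((eps - (1 - s) * V0 * (n == 0)).astype(complex))
    for i in range(N - 1):
        H[i + 1, i] = -s * tf
    for i in range(N - 2):
        H[i + 2, i] = -s * tpf * np.exp(1j * ((-1) ** i) * phi)
    return H + H.conj().T - np.diag(np.diag(H))   # hermitize, count diagonal once

psi = np.zeros(N, dtype=complex)
psi[N // 2] = 1.0              # all population starts at site n = 0
dt = T_ramp / steps
for step in range(steps):      # midpoint-sampled piecewise-constant evolution
    E, U = np.linalg.eigh(H_of((step + 0.5) / steps))
    psi = U @ (np.exp(-1j * E * dt) * (U.conj().T @ psi))

E, U = np.linalg.eigh(H_of(1.0))
print("overlap with target ground state:", np.abs(U[:, 0].conj() @ psi) ** 2)
```

Slowing the ramp (larger `T_ramp`) pushes the printed overlap toward unity, which is the adiabaticity criterion discussed in the ramp-procedure section below.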
Starting from the initial limit of V/t = ∞ and ∆/t = ∞, we adiabatically load our desired eigenstate by linearly ramping up both tunneling terms (t and t′) over 2 ms while also smoothly removing the potential well by ramping V to zero [19]. We perform this procedure over parameter ranges 1 ≤ ∆/t ≤ 4.25 and 0 ≤ φ/π ≤ 1, mapping out the localization behavior of the GS and the ES in Fig. 4(a,d). We plot the standard deviation of the population distribution in the lattice, σ_n (i.e., the momentum standard deviation σ_p normalized to the spacing between sites of 2ℏk), where the values are resampled from the actual (∆/t, φ/π) points where data were taken (small black dots). The ∆/t values of the data have variations and uncertainties stemming from variations and measured uncertainties in the calibrated tunneling rates for the experimental runs, with an overall averaged NN tunneling rate t/ℏ = 2π × 493(2) Hz and tunneling ratio t′/t = 0.247(4). For the ground state in Fig. 4(a), we see that the region of metallic, delocalized states (red region, corresponding to states with large σ_n) extends out to larger ∆/t values when the applied flux is near zero than for the case of an applied π flux. This can also be seen in the integrated optical density images at bottom: sites as far as n = ±2 remain populated even at large disorder ∆/t ~ 3.5 at small flux φ/π = 0.05 (left), while for large flux φ/π = 0.95 (right) the population fully localizes for ∆/t > 3. The top panel of Fig. 4(b) highlights that for a fixed disorder-to-tunneling ratio of ∆/t ~ 2.9, the GS can be driven from metallic to insulating by changing only the flux. In the absence of flux, the shift of the GS localization transition to larger disorder values as compared to the t′ = 0 case is intuitive: simply adding longer-range tunneling increases the connectivity of the lattice, increasing the dispersion at low energy and enhancing delocalization. As non-zero flux is added, however, the GS localization transition shifts towards smaller critical disorder values. This effect is perhaps surprising when considering effects such as the suppression of weak localization by broken time-reversal symmetry, as observed recently in measurements of coherent backscattering [42]. However, in the context of our zigzag flux chains, this flux-enhanced localization of the GS is easy to interpret. The shift of the GS localization transition towards smaller (∆/t)_c is driven by a flattening of the low-energy band dispersion, owing to kinetic frustration of the different tunneling pathways. The system is maximally frustrated at φ = π for t′/t = 1/4, corresponding to a nearly flat, quartic low-energy dispersion (Fig. 4(c), right). Under these conditions, the states at low energy become heavy (large effective mass) and easier to localize in the presence of disorder. In considering the flux-dependent localization properties of the highest energy eigenstate, a similar line of argumentation holds, but with the opposite trend with applied flux. The high-energy states of the band structure are maximally dispersive for φ = π, becoming flatter for decreasing flux, with a quartic band appearing for zero flux. The consequence of this modified band structure on the localization properties of the ES is reflected in the measured dependence of the ES localization properties following the parameter ramp to final ∆/t values for different flux values (Fig. 4(d)).
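For concreteness, the width observable used in Fig. 4 might be computed from the measured site populations as in the short helper below; the normalization details are assumptions on our part.

```python
import numpy as np

def sigma_n(populations):
    """Standard deviation of the site-population distribution, in lattice sites.

    `populations` holds the atom numbers measured at sites n = -10..10 after
    time-of-flight; normalization conventions here are our assumptions.
    """
    p = np.asarray(populations, dtype=float)
    p = p / p.sum()
    n = np.arange(len(p)) - len(p) // 2
    mean = np.sum(n * p)
    return np.sqrt(np.sum((n - mean) ** 2 * p))

# A state spread over sites -1, 0, 1 sits just below the empirical
# localization threshold of 0.68 lattice sites quoted in the text.
example = np.zeros(21)
example[9:12] = [0.2, 0.6, 0.2]
print(sigma_n(example))   # ~0.63
```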
[Figure 5 caption (fragment): ... as in Fig. 4(a,d). Non-interacting and interacting simulations (U/ℏ = 2π × 500 Hz used in the latter) are shown as dashed and dotted lines, respectively. For flux values where no critical disorder is plotted, atomic population was determined to be delocalized (based on the set threshold value of the standard deviation) over the full range of disorder strengths. Vertical gray line at φ/π = 0.5 denotes the flux value at which the GS and ES curves should cross in the absence of interactions and any off-resonant coupling terms. Error bars denote one standard error of the mean.] The flux dependence of the localization transition is also seen in striking fashion in the integrated OD images at the bottom of Fig. 4. For both states, we empirically estimate the approximate "critical" disorder strength (normalized to t) relating to the metal-insulator transition by finding the ∆/t value at which σ_n equals 0.68 lattice sites. This estimate is determined for each flux value of the data, and the extracted critical disorder strengths are shown as white circles in Fig. 4(a,d). We can compare these experimentally extracted points to the predicted threshold values of disorder, based on numerical simulations of our experimental ramp protocol. These single-particle predictions are shown as dashed lines in Fig. 4(a,d), and show the same qualitative trend as the experimental points for both the GS and ES. To better contrast the localization behavior of the GS and ES, we additionally plot both the experimentally determined transition points and the theory predictions for both extremal eigenstates together in Fig. 5. With the two datasets overlaid, one can more clearly see the direct evidence for a flux-dependent SPME. While this sampling of the two extremal eigenstates does not determine the critical energy at which delocalization occurs for given values of ∆/t and φ, it does provide the first direct experimental evidence for a SPME in lower dimensions. The transition ∆/t values for the GS and the ES follow nearly opposite trends. For flux values near zero, the disorder strength needed to localize the GS exceeds that of the ES by nearly t, due to kinetic frustration of the high-energy states. The situation reverses for flux values near π: the GS becomes localized at lower disorder strengths ∆/t ~ 2.3, and the ES remains delocalized even up to the highest disorder value used in experiment, ∆/t ~ 4.25. This apparent asymmetry, i.e., that a larger magnitude of shift between the GS and ES transition points is found for flux values near π than for flux values near 0, is in disagreement with the single-particle prediction. Moreover, at the single-particle level the flux dependence of the GS and ES localization properties should essentially be mirror images of one another (dashed lines, with a slight asymmetry resulting from effects due to off-resonant driving), such that their transition points should cross very near to φ/π = 0.5 (vertical gray line in Fig. 5). However, the apparent crossing point is offset to lower flux values by nearly 0.1π. As in the previously discussed case of the 1D AA model with only NN tunneling (Fig. 3(c,d)), the nonlinear interactions present in our atomic system are largely responsible for this asymmetry observed between the localization properties of our low- and high-energy eigenstates.
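The empirical transition extraction described above (finding where σ_n crosses 0.68 lattice sites) can be sketched as a simple bracketing-and-interpolation step; the linear-interpolation choice and the toy data are ours.

```python
import numpy as np

def critical_disorder(delta_over_t, sigma, threshold=0.68):
    """Estimate (Delta/t)_c as the first threshold crossing of sigma_n.

    Assumes `delta_over_t` is sorted ascending and sigma_n decreases with
    disorder; a linear interpolation locates the crossing point.
    """
    d = np.asarray(delta_over_t, dtype=float)
    s = np.asarray(sigma, dtype=float)
    for i in range(len(d) - 1):
        if (s[i] - threshold) * (s[i + 1] - threshold) <= 0:
            frac = (threshold - s[i]) / (s[i + 1] - s[i])
            return d[i] + frac * (d[i + 1] - d[i])
    return None   # no crossing: state stays delocalized over the scanned range

d = np.linspace(1.0, 4.25, 14)
sigma = 2.5 * np.exp(-(d - 1.0) / 0.9) + 0.2   # toy data for illustration only
print(critical_disorder(d, sigma))
```

Returning `None` when no crossing is found mirrors the convention in Fig. 5, where no critical disorder is plotted for flux values at which the state remains delocalized over the full scanned range.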
As described earlier in the context of the NN-coupled AA model, we can approximately capture the influence of the momentum-space interactions in this system by including a site-local mean-field attraction in a multi-site nonlinear Schrödinger equation [19], with an interaction energy that is determined independently by calibration via Bragg spectroscopy. Including these interactions (dotted lines, also shown in Fig. 4(a,d)), the transition lines get shifted to lower (GS) and higher (ES) disorder values, so that they cross at lower flux values. The interacting simulation results better capture the localization properties of the ES, whose transition was shifted to significantly higher disorder strengths than predicted at the single-particle level. The interacting simulation also qualitatively captures the shift of the crossing of the critical disorder curves in Fig. 5 to lower flux values, although it predicts a slightly larger shift than seen in experiment. In the future, by studying fluctuations of the atomic number distribution and inter-site correlations in our synthetic lattice, or by more closely studying fine features of the localization properties, this simulation platform may enable unique explorations into the physics of interacting disordered systems, in particular related to the physics of many-body localization. It also offers a unique platform to study the interplay of disorder, artificial gauge fields, and interactions. CONCLUSIONS This work represents the first direct observation of a single-particle mobility edge in lower dimensions, which is enabled by the unique ability to stably prepare atoms in any energy eigenstate and explore their localization properties in a system with precisely controlled disorder and tunable artificial gauge fields. We also present the first direct quantum simulation evidence for a many-body mobility edge, studied through a shift of the localization properties of low- and high-energy eigenstates in the 1D AA model that arises due to many-body interactions. These interaction shifts are also observed in the localization transitions of a multi-range hopping AA model that admits a flux-dependent SPME, leading to the interplay of single-particle and many-body shifts of the localization transition for states at different energies. This work also constitutes the first quantum simulation study combining synthetic gauge fields and disorder, and its extension to fully two-dimensional lattices beyond coupled chains promises to pave the way towards studies of disordered quantum Hall systems. In particular, by moving to a larger system containing bulk lattice sites, a robustness of the observed chiral propagating modes to disorder (similar to the robustness to disorder observed recently for the bulk winding of chiral-symmetric wires [18]) should be readily observable. EXPERIMENTAL SETUP Our experiments begin with a Bose-Einstein condensate (BEC) containing ~10^5 87Rb atoms, held in an optical dipole trap primarily formed by a single focused laser beam (wavelength λ = 1064 nm, wavenumber k = 2π/λ) with weak additional trapping provided by one other beam. To create the lattice, we allow this primary beam to pass through two acousto-optic modulators (AOMs) which, together, write onto the beam a spectrum of radio-frequency tones in the sub-MHz range. This multi-frequency beam is sent back towards the atoms along the same path as the incoming single-frequency beam.
The interference of these two beams creates a time-dependent optical potential comprised of multiple superimposed optical lattices moving at different velocities. Each of these velocities is determined by the frequency difference between one frequency component of the multi-frequency beam and the incoming beam, specifically tuned to address a particular Bragg transition between two discrete momentum states. The simultaneous driving of many Bragg transitions mimics tunneling between sites (momentum states) in our synthetic lattice [1,2]. By varying the amplitude and phase of each frequency component, along with the frequency detuning from Bragg resonance, we can control the amplitude and phase of each effective tunneling link, as well as each site-energy term. Because the single-particle dispersion relation is quadratic (E = p²/2M_Rb), all of the first- and second-order Bragg transitions have unique frequencies and can be individually addressed. Next-nearest-neighbor (NNN) tunnelings are realized via a four-photon, second-order Bragg process. Shown in Fig. S1(a) as dashed red lines, this involves virtually absorbing two photons from the single-frequency beam (ω+) and emitting two photons into the multi-frequency beam (ω_{0,2}), or vice versa. Control of the effective lattice parameters for these four-photon processes requires slightly different considerations compared to the nearest-neighbor (NN) terms. For example, to enable a NNN tunneling phase of π, we apply half this phase (π/2, relative to the incoming field) to the corresponding frequency component ω_{0,2}. A similar consideration holds for the relationship between site energy and frequency detuning from resonance. More generally, these differences can be summarized in the relationships between the tunneling terms for NN (t_nn e^{iϕ_nn}) and NNN (t_nnn e^{iϕ_nnn}) processes. Taking into account the field strengths (assumed to be real) of the incoming beam (Ω_I) and a particular frequency component of the multi-frequency beam (Ω_R), the phases of these same fields (φ_I and φ_R), the large single-photon detuning from atomic resonance ∆ (roughly 100 THz for our laser wavelength λ = 1064 nm), and the recoil energy E_R = h²/2M_Rb λ², these terms are given at resonance as: (S1) [Figure S1 caption (fragment): ... t′/t = 0.628(6) and applied flux φ/π = −0.5 (same data as in Fig. 2(d)). Dashed and solid curves in (c,d) represent results from an ideal simulation of the experiment and a full simulation accounting for off-resonant coupling, respectively.] Here we present the exact NN tunneling times ℏ/t and NNN-to-NN tunneling ratios t′/t for each individual data set shown in the main text. For the variable-flux data of Fig. 2(b), ℏ/t = 172(1) µs and t′/t = 0.633(3). For the dynamics under φ/π = 0.5 of Fig. 2(c), ℏ/t = 182(1) µs and t′/t = 0.605(5). For the dynamics under φ/π = −0.5 of Fig. 2(d), ℏ/t = 174(2) µs and t′/t = 0.628(6). The tunneling ratio used to make the band structures of Fig. 2(a) was an average of these three values: t′/t = 0.622(3). For the 1D Aubry-André data of Fig. 3, ℏ/t = 157(1) µs. For the multi-range hopping Aubry-André data of Fig. 4 and Fig. 5, ℏ/t = 323(1) µs and t′/t = 0.247(4), averaged over all data points taken. OFF-RESONANT EXCITATIONS While we seek to address individual Bragg transitions with single frequency components, in practice we apply the full spectrum of frequencies to the condensate, as shown in Fig. S1(b).
Thus each transition is not only addressed by one resonant frequency, but also feels the effects of all of the other non-resonant frequency components. For a lattice with only NN tunnelings, the frequency components are equally spaced at 8E_R/ℏ = 8 × ℏk²/2M_Rb ≈ 2π × 16.2 kHz. Adding NNN links halves the spacing of the applied frequency components to 4E_R/ℏ ≈ 2π × 8.1 kHz. On a lattice with only NN tunnelings, the off-resonant couplings result in step-like intervals in the dynamics (Fig. S1(c)). For this data (from Ref. [3]), atomic population undergoes a continuous-time quantum walk on a 21-site lattice engineered with 20 equally spaced frequency teeth. Due to the equal spacing of the frequency teeth, the off-resonant effects add up constructively. This is evident in the period T between these steps, which corresponds exactly to the spacing between adjacent frequency teeth, T = h/8E_R. We have shown in a previous study [3] that adding random tunneling phases onto equally spaced frequency teeth suppresses these steps, resulting in smoother dynamics. We note that the magnitude of the tunneling rate plays a significant role in the magnitude of off-resonant effects. If the tunneling rate is comparable to or greater than the frequency spacing between the first-order Bragg resonances (t/ℏ ≳ 8E_R/ℏ), then each frequency tooth may address multiple transitions. The extreme limit of non-resonant addressing is often encountered in cold atom experiments, e.g., when a deep stationary potential (two interfering fields with equal frequency) that is suddenly turned on results in Kapitza-Dirac diffraction [4] of atomic matter waves. With respect to synthetic lattices, this can be viewed as a system with constant NN tunneling terms in the presence of a quadratic potential, set simply by the single-particle dispersion relation. This leads to expansion dynamics in momentum space at short times, up until population reaches outer regions where the site (momentum state) energy roughly equals the effective tunneling bandwidth [5]. On the other hand, in the limit where the tunneling energy is much smaller than the energy spacing between relevant Bragg resonances (i.e., in the limit that the rotating-wave approximation is valid), each transition will be ideally addressed by only a single spectral component, and the step-like behavior in Fig. S1(c) (data, solid curve) should approach the ideal smooth behavior (dashed curve). Intuitively, this occurs as the number of steps per tunneling period gets very large, or in other words as the tunneling time ℏ/t gets much larger than the step period T, such that the dynamics are spread out over many, many steps. For the data in Fig. S1(c), we are in the intermediate regime (with a tunneling time ℏ/t = 111.6(7) µs, corresponding to a tunneling rate of t/ℏ = 2π × 1425(9) Hz), where off-resonant coupling primarily results in the observed dynamics with small step-like behavior. For the zigzag lattice, we introduce NNN frequency teeth that halve the spacing to 4E_R/ℏ ≈ 2π × 8.1 kHz (addition of dashed red peaks in Fig. S1(b)). This results in longer steps of exactly twice the duration, i.e., T′ = 2T (Fig. S1(d)), due to off-resonant first-order Bragg processes. The smaller structure on top of these long steps, with a spacing of T/2 relating to a frequency spacing of 16E_R/ℏ, is due to off-resonant second-order Bragg processes. For this data, we used a tunneling ratio t′/t = 0.628(6) and NN tunneling time ℏ/t = 174(2) µs, corresponding to a tunneling rate t/ℏ = 2π × 917(9) Hz.
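The quoted tooth spacings and step periods follow directly from the recoil energy; the arithmetic check below reproduces them for λ = 1064 nm and 87Rb (standard physical constants assumed).

```python
import numpy as np

h = 6.62607015e-34        # J s
hbar = h / (2 * np.pi)
M_Rb = 1.4431609e-25      # kg
lam = 1064e-9             # m
k = 2 * np.pi / lam

E_R = hbar ** 2 * k ** 2 / (2 * M_Rb)          # recoil energy
print(f"E_R/h                    = {E_R / h:7.0f} Hz")
print(f"NN tooth spacing  8E_R/h = {8 * E_R / h / 1e3:5.1f} kHz")   # ~16.2 kHz
print(f"NNN tooth spacing 4E_R/h = {4 * E_R / h / 1e3:5.1f} kHz")   # ~ 8.1 kHz
print(f"step period T = h/(8E_R) = {h / (8 * E_R) * 1e6:5.1f} us")  # ~61.7 us
```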
The "full" simulations in Fig. S1(c,d) (solid curves) and in the main text account for both resonant and off-resonant driving on every Bragg transition, and thus retain these step-like features. The "ideal" simulations in Fig. S1(c,d) (dashed curves) and in the main text ignore off-resonant effects and consider only the smooth behavior of the idealized tight-binding Hamiltonians. BAND STRUCTURE CALCULATIONS The band diagrams shown in Fig. 2(a) and Fig. 4(c) were calculated using the same method as described in Sec. II of Ref. [6]. While the studies there focused on a semisynthetic lattice with one synthetic dimension and one real-space dimension, the same physics holds for our fully synthetic momentum-space lattice. We consider the zigzag lattice in the absence of any applied disorder. We can take the rows of the lattice to be an effective spin degree of freedom, with a two-site unit cell comprised of one spin-up (σ = +1) site from the top row and one spin-down (σ = −1) site from the bottom row. We generate the spinful dispersion by calculating the 2×2 Hamiltonian introduced in Ref. [6] at each value of quasimomentum. In terms of the creation and annihilation operators at spin σ and quasimomentum q (ĉ†_{σ,q} and ĉ_{σ,q}), the Hamiltonian is given by

Ĥ(q) = Σ_{j,j′} h_{jj′}(q) ĉ†_{j,q} ĉ_{j′,q},   (S3)

where h_jj = −2t′ cos(qd + (−1)^j φ) and h_12 = h_21 = 2t cos(qd/2), for lattice spacing d = 4ℏk (not 2ℏk, due to the two-site unit cell). By diagonalizing this Hamiltonian for every value of q ∈ [−π/d, π/d], we generate the double-band dispersions. We note that for t′/t = 0 (dashed black curves in Fig. 2(a) and Fig. 4(c)), the dispersion relation is cosinusoidal, but folded back at the edges of the Brillouin zone due to the two-site unit cell. The spin magnetization ⟨σ⟩ is simply the projection of the quasimomentum eigenvectors derived from Eq. (S3) onto the rows of the lattice. We take the difference between the projections onto the upper row and the lower row, such that a positive (negative) ⟨σ⟩ corresponds to population on the upper (lower) row. INFLUENCE OF INTERACTIONS As mentioned in the main text, atomic interactions show effects on the localization properties of both the 1D Aubry-André data in Fig. 3 and the longer-range Aubry-André data in Fig. 4 and Fig. 5. Interactions in momentum-space lattices are described in detail in Ref. [7], where we show that effects like self-trapping can be observed when the mean-field energy becomes large compared to the tunneling. Atoms in a particular momentum state experience an added positive self-energy due to repulsive cold collisions, i.e., mean-field interactions. In addition, atoms overlapped in space but occupying distinct spatial eigenstates (i.e., distinguishable plane-wave momentum states) experience both a direct interaction as well as an added exchange energy, resulting in twice as large a repulsive energy [8,9]. For a fixed total density, this situation, in which atoms occupying the same momentum state have a weaker repulsive interaction energy, may be recast as an effectively site-local attraction with a scale set by the mean-field interaction energy U. In reality, for an interacting degenerate Bose gas, superfluid screening can make distinct plane-wave states partially indistinguishable, resulting in some off-site contribution to the effective attraction (although this vanishes as U becomes much less than 2E_R). As discussed in Ref. [7], these interactions shift the Bragg resonance frequencies away from the single-particle resonances.
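The two-band calculation described above can be sketched directly from our reconstructed Eq. (S3). The snippet below evaluates the low-band curvature at the band bottom for t′/t = 1/4, showing that it vanishes at φ = π (the quartic, maximally frustrated case discussed in the main text) but not at φ = 0; the overall sign convention for h_12 does not affect the spectrum.

```python
import numpy as np

def low_band_curvature(t=1.0, tp=0.25, phi=np.pi, dQ=1e-2):
    """Quadratic coefficient of the lower band of the reconstructed Eq. (S3)
    near the band bottom at Q = qd = 0."""
    def lower(Q):
        diag = [-2 * tp * np.cos(Q + ((-1) ** j) * phi) for j in (1, 2)]
        off = 2 * t * np.cos(Q / 2)
        return np.linalg.eigvalsh(np.array([[diag[0], off],
                                            [off, diag[1]]]))[0]
    return (lower(dQ) - lower(0.0)) / dQ ** 2

for phi in (0.0, np.pi):
    print(f"phi/pi = {phi / np.pi:.1f}: low-band curvature ~ "
          f"{low_band_curvature(phi=phi):.3f}")
```

The curvature comes out near 0.5 for φ = 0 but essentially zero for φ = π, where the band bottom opens quartically, which is the kinetic-frustration mechanism invoked for the flux-enhanced localization of the ground state.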
Under typical experimental conditions, we measured this shift to be ~300 Hz, relating to a peak mean-field energy U0/ℏ = gn0/ℏ ≈ 2π × 860 Hz at the center of the harmonic trap, and a homogeneous mean-field energy U/ℏ ≈ 2π × 500 Hz averaged over the entire trap (the measured shift is distinct from the average U value due to a combination of the aforementioned screening effects and the long duration of the Bragg pulses used in this determination). The central atomic density is n0 ≈ 10^14 cm^−3 and g = 4πℏ²a/M_Rb, where a is the scattering length [10]. We incorporate this mean-field energy U by considering an attractive interaction that depends on the population of atoms at each site [7], resulting in the curves shown in Fig. 3(d) and the dotted theory curves in Fig. 4. Specifically, the evolved state at the end of the tunneling ramps described in the main text is found by solving a time-dependent multi-site nonlinear Schrödinger equation that includes a local attractive self-nonlinearity −U|ψ_n|², where the ψ_n are c-numbers (with normalization Σ_n |ψ_n|² = 1) relating to the atomic field terms at each site. This approach is approximate in a number of ways: it ignores quantum fluctuations, ignores off-site contributions to the effective atomic interaction, ignores energy-dependent corrections to the collisional scattering cross-section, ignores spatial variations in the atomic density in our trapped sample, ignores effects such as the loss of spatial overlap of the momentum wavepackets, and explicitly restricts the collisions to be mode-preserving (ignoring both s-wave collisions that may scatter atoms out from the considered set of 21 modes into many "halos" of additional states, as well as mode-changing collisions within our defined set of states, which would be energetically suppressed in the absence of our drive fields, but may be effectively enabled through higher-order, Bragg-mediated processes). We point out that a fuller quantum treatment of the problem (still restricted to being mode-preserving, still ignoring spatial variations of the density, still ignoring loss of spatial overlap of momentum states, and still ignoring energy-dependent corrections to the scattering cross-section) would instead include an interaction term (U/N) Σ_{n,n′} ĉ†_n ĉ†_{n′} ĉ_{n′} ĉ_n in the effective tight-binding Hamiltonians of the main text. Even for our modest 21-mode system, the time dependence of this problem would become intractable for particle numbers well below the ~10^5 used in experiment. RAMP PROCEDURE For the localization studies of the zigzag lattice of Fig. 4 and 5, we slowly load into the extremal eigenstates (ground state or highest excited state) of the Hamiltonian we wish to explore (Eq. (2)). We begin, as described in the main text, by preparing with high fidelity the ground state of the system in the zero-tunneling (t = t′ = 0) limit by shifting the Aubry-André site energy distribution such that the initial site has the lowest energy (ϕ = π for ∆ > 0). To ensure that there is a relatively large energy gap from this initially populated ground state to all other eigenstates even after finite tunneling is introduced, we add an effective potential well of depth V at the central site. Then, over the course of 2 ms, we smoothly vary the system parameters until we reach the desired Hamiltonian.
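Returning to the mean-field model described above, a minimal split-step integrator for the multi-site nonlinear Schrödinger equation with the local attraction −U|ψ_n|² might look as follows. The split-step scheme and step sizes are our choices (the text specifies only the equation), and U/t ≈ 0.5 mirrors the calibrated ratio quoted earlier.

```python
import numpy as np

def evolve_dnls(psi, H, U, dt, steps):
    """Split-step integrator for the multi-site nonlinear Schrodinger
    equation with site-local attraction -U |psi_n|^2 (hbar = 1).

    The attraction is applied as a diagonal phase half-step before and
    after each linear step under H; the scheme itself is our choice.
    """
    E, V = np.linalg.eigh(H)          # H is time-independent here
    for _ in range(steps):
        psi = np.exp(0.5j * U * np.abs(psi) ** 2 * dt) * psi
        psi = V @ (np.exp(-1j * E * dt) * (V.conj().T @ psi))
        psi = np.exp(0.5j * U * np.abs(psi) ** 2 * dt) * psi
    return psi

N = 21
H = -(np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1))  # NN chain, t = 1
psi0 = np.zeros(N, dtype=complex)
psi0[N // 2] = 1.0
psi = evolve_dnls(psi0, H, U=0.5, dt=0.01, steps=500)   # U/t ~ 0.5 as calibrated
print("norm preserved:", np.sum(np.abs(psi) ** 2))
```

Note the sign: a site energy −U|ψ_n|² produces the phase factor exp(+iU|ψ_n|²dt), which is why the nonlinear half-steps carry a positive exponent.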
RAMP PROCEDURE

For the localization studies of the zigzag lattice in Fig. 4 and Fig. 5, we slowly load into the extremal eigenstates (ground state or highest excited state) of the Hamiltonian we wish to explore (Eq. (2)). We begin, as described in the main text, by preparing with high fidelity the ground state of the system in the zero-tunneling (t = t′ = 0) limit by shifting the Aubry-André site-energy distribution such that the initial site has the lowest energy (ϕ = π for ∆ > 0). To ensure that there is a relatively large energy gap from this initially populated ground state to all other eigenstates even after finite tunneling is introduced, we add an effective potential well of depth V at the central site. Then, over the course of 2 ms, we smoothly vary the system parameters until we reach the desired Hamiltonian. If these ramps are quasistatic (adiabatic), such that the energy scale associated with the ramp rate is much smaller than the smallest energy gap encountered, this procedure should prepare the desired ground state with high fidelity. First, we describe the different ramps used in experiment. The depth of the potential well at site n = 0 is ramped from V to zero over 2 ms (for comparison, the NN tunneling time for this experiment was ℏ/t = 304(4) µs). Over the same 2 ms duration, we also ramp both the NN and NNN tunneling amplitudes linearly from zero to their final magnitudes t and t′, respectively. Over the course of this ramp we preserve the flux distribution, imposed by fixed tunneling phases. We additionally preserve the ratio t′/t by ramping the field strengths of the first- and second-order Bragg spectral components (∝ τ and ∝ √τ, respectively) according to their distinct scalings with the applied field strengths (Eqs. (S1) and (S2)). One complication arises from the small spacing between applied spectral components (frequency teeth), as mentioned in the "off-resonant excitations" section above.
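For illustration only, the sketch below encodes the stated scalings: ramping the first- and second-order tooth amplitudes as τ and √τ keeps the ratio t′/t fixed, under the assumption (consistent with, but not stated in, the text above) that t scales linearly and t′ quadratically with the respective tooth amplitude.

```python
import numpy as np

# Hypothetical sketch of the 2 ms loading ramps: NN and NNN tunnelings rise
# linearly while the central well depth V falls linearly.

t_final, tp_final, V0 = 1.0, 0.5, 2.0   # units of the final NN tunneling t
tau = np.linspace(0.0, 1.0, 5)          # normalized ramp time

amp1 = tau                 # first-order tooth amplitude  -> t  = t_final * amp1
amp2 = np.sqrt(tau)        # second-order tooth amplitude -> t' = tp_final * amp2**2
t_of_tau = t_final * amp1
tp_of_tau = tp_final * amp2**2
V_of_tau = V0 * (1.0 - tau)

# ratio t'/t is constant (away from tau = 0, where both vanish)
print(np.round(tp_of_tau / np.maximum(t_of_tau, 1e-12), 3))
```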
Using innovative interactive technologies for forming linguistic competence in global mining education

Globalization of mining education imposes new requirements on mining engineers' competence. Nowadays, linguistic competence is among the most in demand: it guarantees technical university graduates the possibility of global employment, on the one hand, and the chance of getting cutting-edge education in the world's leading training centers, on the other. Distance education is actively developing all over the world and is widely used in technical colleges and universities as well. The interactive method, which involves the active engagement of students, appears to be of the greatest interest due to the introduction of modern information and communication technologies for distance learning. The paper presents the step-by-step implementation of several interactive technologies (jigsaw, case study, brainstorming, and role-play) that can be used in distance education in the process of teaching subjects in foreign languages with the help of information and communication technologies. In response to changes in the conditions of the educational process, the implementation of the methods has been transformed into a combination of traditional (in-class) and distance (online) learning.

Introduction

It is generally accepted that distance learning is an integral part of modern education alongside traditional in-class studying. A number of authors consider innovative structures and components of distance learning in their research papers [1]. Mining engineers' training is no exception, with the world's leading higher technical schools and universities offering distance-learning services. The main language of educational programs is English, which requires students' knowledge of not only general but also professional vocabulary. The tendency towards a gradual conversion of global mining education to the distance-learning format is quite understandable given the intensive development of technical means over recent decades. The functioning of a XXI century university is, in our opinion, not possible without new resources, including modern information and communication technologies that make it possible to increase the number of participants in the educational process. The development of distance-learning technologies ensures the possibility of providing educational services without being tied to a specific geographical location (Open and Distance Learning).

Material and method

Even though distance-learning technologies in the sphere of mining engineers' training are innovative, they have been shown to affect the process positively. Thus, a positive relation between students' use of information technologies for educational purposes and their involvement in learning and interaction with lecturers has been established [2,3,4]. Another study has shown that the use of Twitter for educational purposes has a positive effect on students' learning interest and average semester grades [5]. Some authors analyze the effectiveness of distance learning based on students' own evaluation of the educational process [6,7] and on the evaluation of students by their teachers [8]. A group of scientists compared traditional (offline), online and mixed forms of education and showed that educational results in groups with mixed education methods are the highest [9].
In the context of implementing distance-learning technologies in the training of mining engineers, the process of educational design (the targeted planning of training courses) should be adjusted towards finding learning forms and methods that can encourage students to self-study foreign languages alongside their university training. This goal can be reached by applying e-learning and distance learning, which can be used successfully for forming the linguistic competence of future mining engineers. E-learning can be applied in a traditional form (in-class, face-to-face learning), when students use electronic devices and resources of the university or their own mobile devices as educational tools. Distance learning allows students to acquire knowledge by accessing the university's virtual educational environment from any computer connected to the Internet. Educational design of courses with elements of distance education requires a certain level of professional creativity from the educators teaching courses in foreign languages. Among the innovative methods of foreign language teaching, the interactive approach appears to be the most promising in terms of distance technologies. It is based on the active interaction of students: teachers act as consultants and coordinators of students' educational and research activities, stepping aside while students' work intensifies and comes to dominate the process. Some research works are dedicated to the importance of the interactive approach in learning using materials and information about the real situation in the economy in general and in the mining industry in particular [10,11]. Researchers pay attention to the theoretical description of the method, analyze specific industrial technologies [12], or consider the possibility of using interactive methods when teaching particular subjects concerning the mining industry [13]. Evidently, it is of interest to contemplate the usage of the interactive approach in distance education. New teaching methods based on modern information and communication technologies (webinars, interactive multimedia, online learning, etc.) allow the development of distance forms that do not involve the teacher's direct physical participation in the process. The value of modern information and communication technologies can hardly be overestimated: they provide access to previously unused sources of information; significantly increase the efficiency of students' independent work; provide completely new opportunities for creativity, for identifying and demonstrating capabilities, and for acquiring and consolidating different skills, for both teacher and student; and allow the implementation of essentially new forms and methods of training (local and global information networks, conference calls, e-mail, e-libraries, forums, chats, and others). It is worth mentioning that distance-learning technologies can noticeably reduce the level of panic and stress experienced by students when taking tests and exams in traditional forms, and can increase students' initiative and motivation, as well as their psychological comfort. Using information and communication technologies in the educational process makes it possible to address the following didactic tasks more effectively: forming and improving reading, writing, speaking, and listening skills, as well as extending students' active and passive vocabulary.
The authors particularly emphasize that students are provided with an extra opportunity to encounter the social and cultural environment of the foreign languages they study (speech etiquette, verbal behavior, culture issues, traditions of foreign countries, etc.), as well as to increase their academic motivation through the use of authentic materials [14]. Undoubtedly, this contributes to forming skills useful for solving professional problems in a foreign language as a means of both direct and indirect communication. In both traditional and distance educational systems, the fulfillment of educational goals in disciplines taught in foreign languages depends on the quality of learning materials, the facilities of the educational environment, and the professional skills as well as the information and communication competence of teachers. An important role is also attributed to the correct choice of tasks that demand the use of information and communication technologies, such as working with e-learning materials, using Internet resources in foreign languages, performing online tests and exercises, etc.

Results and discussion

Summarizing the authors' long-term practice in using innovative methods for teaching foreign languages, it can be stated that it is the interactive approach that demonstrates maximum efficiency in developing linguistic competence in a foreign language through the use of distance technologies. In terms of distance learning, among interactive teaching techniques the greatest interest lies in technologies and methods implemented within two stages: preparation and presentation. It is possible to use distance-learning technologies at the preparation stage, when students communicate, for example, by means of Skype-based video conferencing. This technology is used in synchronous distance education, which offers an environment closer to the traditional in-class environment and allows students to establish visual and voice communication with teachers [15]. During video conferences students discuss their tasks, choose the leader of the working group, distribute responsibilities, produce and edit texts, etc. The set of actions performed at the preparation stage depends on the technology used. The teacher's participation in such video conferences is optional. This article focuses on a step-by-step description of implementing traditional interactive learning technologies with the help of innovative information and communication technologies in the course of foreign language teaching. The technologies under consideration are jigsaw, case study, brainstorming, and role-play. Jigsaw technology was developed by Dr. Elliot Aronson in 1978. Using this technology assumes that students are divided into teams of 4 to 6 people to work with some training material (for example, a text on a section of the subject), which is divided into fragments (logical or semantic units). It is recommended to perform the division into teams and explain the task during in-class activity. During the preparation (distance) stage the following set of tasks is performed: − each team member examines a part of the material (offline independent out-of-class activity); − an "experts meeting" is held, in which students from different teams who explored the same part of the material meet and exchange information, acting as experts in this section (online team out-of-class activity); − experts return to their teams and share the discovered information with other team members; thus, each expert acts as a "tooth of a saw" (online out-of-class activity).
Each participant in the educational process (each expert) is responsible for the extent to which all members of the team acquire the new information. The work of each team member (expert and/or student) is reflected in their final evaluation. The presentation (traditional) stage, the evaluation of the acquired knowledge, is held by the teacher during in-class work (face-to-face learning). Testing is carried out individually and/or as a team. The choice of a particular task type depends on the level of the linguistic skills of the academic group [16], the type of material under consideration, the time available, etc. «Case study» technology was first applied in the mid-twentieth century at Harvard Business School. The principles of the technology are as follows: students receive a training case; after considering the task and the problem it poses, which, as a rule, has no unique solution, students are supposed to propose their own solution based on the acquired knowledge and skills. Applying case technology during language classes improves students' speaking skills as a result of discussing the problem under consideration. The preparation stage of training based on «Case study» technology consists of two phases: organization and implementation. It is advisable to carry out the organizational work as a traditional offline in-class team activity. At this phase, the teacher explains the task and introduces the materials to be examined. The implementation phase (students' detailed acquaintance with the information on the case, task execution, and decision-making) is held online (online independent out-of-class work). At the same time, students have access to additional resources that might be required during the preparation. The final (traditional) stage, presentation of the case solution, is carried out in class (offline in-class team activity). The teacher assesses the level of language skills developed and provides the team with feedback on their performance. Each member's personal contribution to the teamwork is defined by the students themselves. «Brainstorming» technology is one of the most popular technologies used in foreign language training. It is effective for stimulating students' creativity. Participants are encouraged to propose a great variety of suggestions, including the most improbable ones. The discussion of the problem usually lasts from 1 to 5 minutes; afterwards, the most feasible solution to the problem is chosen. It is generally considered that this technology can be used offline only, since it is necessary to write the ideas down on the board or on a piece of paper for the purpose of selecting the best ones. However, modern information and communication technologies provide a way of not only seeing and hearing one another during the communication process, but also of actively interacting with a variety of online graphic editors available to all participants simultaneously. The preparation stage may be carried out in two ways: 1. Offline in-class team activity: the teacher divides the students into teams, appoints the leaders of each team (members of the teams who are responsible for organizing the conference, recording ideas, and managing the discussion), and provides them with the tasks for consideration. 2. Online out-of-class teamwork: the teacher organizes the conference, appoints the leaders, and provides the task.
The preparation stage concludes with an online discussion in which students select the best ideas and prepare to defend their point of view in front of the other teams. The presentation (traditional) stage is held as an offline in-class team activity during which the results are presented by the teams. As an additional activity, one idea can be chosen and developed into an independent project (project technology). «Role-play» technology is another effective way of forming linguistic competence; it makes it possible to bring foreign language training close to the model of mining engineers' future work. During this activity students learn how to deal with problems that they might encounter in their professional activity. At the preparation stage (offline in-class team activity), the teacher forms teams, explains the tasks for the role-play, and determines the degree of students' creative freedom. Discussing key positions (distributing roles, determining the course of the game, etc.) and writing an open-ended script take place in distance form (online out-of-class team activity). The teacher reviews the script via e-mail. The presentation (traditional) stage is carried out in the classroom (offline in-class team activity) in the form of presenting the game to the teacher and other academic groups. At the stage of discussing the results of the activity (offline in-class team work), students have an additional opportunity to communicate in a foreign language. Surveys reveal that students have been using information and communication technologies for homework preparation without extra motivation from their teachers. This fact indicates that it is university teachers, not students, who should reconsider their position regarding the use of distance-learning technologies in teaching foreign languages. The success of students' remote communication depends largely on teachers' motivation and competence in implementing information and communication technologies in the educational process.

Recommendations

Developing forms and methods of distance learning as a part of designing and implementing professional training programs still raises a number of issues. At present, strengthening the position of distance learning with the help of information and communication technologies in the process of teaching foreign languages requires the following: − conducting regular surveys of students on the issues of training with elements of distance-learning technologies; − working out guidelines on using distance-learning technologies in teaching; − holding seminars for university teachers devoted to distance-learning technologies in teaching; − including in educational content tasks that require using information and communication technologies; − developing a mechanism for monitoring students' out-of-class activities with the use of distance-learning technologies.

Conclusions

Like any living organism, the educational process can develop and improve only within the terms and conditions of the society it lives in. Modern society cannot be imagined without the information and communication technologies extensively used in education, including distance learning. The next step in developing distance education is transforming methods used in traditional education into new forms of teaching that can be implemented with the
m6A-induced lncRNA RP11 triggers the dissemination of colorectal cancer cells via upregulation of Zeb1

Background: Long noncoding RNAs (lncRNAs) have emerged as critical players in cancer progression, but their functions in colorectal cancer (CRC) metastasis have not been systematically clarified. Methods: lncRNA expression profiles in matched normal and CRC tissues were examined using microarray analysis. The biological roles of a novel lncRNA, RP11-138J23.1 (RP11), in the development of CRC were investigated both in vitro and in vivo, and its association with the clinical progression of CRC was further analyzed. Results: RP11 was highly expressed in CRC tissues, and its expression increased with CRC stage in patients. RP11 positively regulated the migration, invasion and epithelial-mesenchymal transition (EMT) of CRC cells in vitro and enhanced liver metastasis in vivo. Post-translational upregulation of Zeb1, an EMT-related transcription factor, was essential for RP11-induced cell dissemination. Mechanistically, the RP11/hnRNPA2B1/mRNA complex accelerated the mRNA degradation of two E3 ligases, Siah1 and Fbxo45, and subsequently prevented the proteasomal degradation of Zeb1. m6A methylation was involved in the upregulation of RP11 by increasing its nuclear accumulation. Clinical analysis showed that m6A can regulate the expression of RP11 and that the RP11-regulated Siah1-Fbxo45/Zeb1 axis is involved in the development of CRC. Conclusions: m6A-induced lncRNA RP11 can trigger the dissemination of CRC cells via post-translational upregulation of Zeb1. Considering the high and specific levels of RP11 in CRC tissues, our present study paves the way for further investigations of RP11 as a predictive biomarker or therapeutic target for CRC. Electronic supplementary material: The online version of this article (10.1186/s12943-019-1014-2) contains supplementary material, which is available to authorized users.

Introduction

Colorectal cancer (CRC), also known as large bowel cancer, is a major public health problem worldwide [1]. Epidemiological data have revealed that the 5-year survival rate of CRC patients ranges from 90% for patients with stage I disease to 10% for those with metastatic disease [2]. Although numerous studies have revealed that alterations in oncogenes and tumour suppressor genes contribute to tumorigenesis and the development of CRC [3], the precise molecular mechanisms underlying CRC pathogenesis, particularly metastasis, remain to be fully elucidated. Long noncoding RNAs (lncRNAs), which are more than 200 nt in length and have limited or no protein-coding capacity, play both oncogenic and tumour-suppressor roles in tumorigenesis and progression [4,5]. LncRNAs can regulate gene expression via multiple mechanisms, including chromatin remodelling, modulation of the activity of transcriptional regulators, and posttranscriptional modifications [5]. Dysregulated lncRNA expression has been reported to modulate the progression of various types of cancers, such as bladder, prostate, lung, breast, gastric and colorectal cancers [6,7]. Increasing evidence suggests that lncRNAs can trigger metastatic progression, increase chromosomal instability, and promote CRC tumorigenesis [8][9][10]. Therefore, further identification of CRC-related lncRNAs and investigation of their functions in CRC are imperative. Metastasis is the major cause of CRC-related death [11].
The epithelial-mesenchymal transition (EMT), a process by which epithelial cells gain a migratory and invasive mesenchymal phenotype [12], is considered the first and most important step in cancer cell metastasis. During EMT, epithelial cells acquire mesenchymal components and motility features, lose epithelial components and cell adhesion, and infiltrate into the tumour vasculature [13]. Increasing evidence indicates that EMT is a pivotal step for tumour infiltration and distant metastasis in a variety of carcinomas [14]. EMT transcription factors (EMT-TFs), including Twist, Snail, and Zeb1, have been implicated in the control of EMT [15]. The important role of Zeb1 in EMT regulation has been described for many cancer types [16,17]. LncRNAs have been reported to regulate EMT-TFs and subsequently trigger the EMT of cancer cells [18]. We were therefore interested in determining whether any lncRNAs can regulate EMT-TFs to trigger the EMT and dissemination of CRC cells. In this study, a CRC-associated lncRNA (RP11, RP11-138J23.1) that displayed a remarkable trend towards increasing expression from normal colorectal to CRC tissues was identified and selected for further validation and functional analysis in terms of CRC progression. We demonstrate that post-translational upregulation of Zeb1 is required for the lncRNA RP11-induced EMT and dissemination of CRC cells.

Microarray and computational analysis

Fresh paired normal and histologically confirmed CRC tumour tissues were obtained before any treatment, during surgery, from 3 stage I CRC cases and 3 stage IV cases with distant metastasis at the Sixth Affiliated Hospital of Sun Yat-sen University from February to October 2014. Total RNA from the samples (3 stage I CRC tissues, 3 stage IV CRC tissues, and their corresponding paired nontumour tissues) was extracted, amplified and transcribed into fluorescent cRNA using the Quick Amp Labeling kit (Agilent Technologies, Palo Alto, CA, USA). The labelled cRNA was then hybridized onto the Human LncRNA Array v2.0 (8 × 60 K, ArrayStar, Rockville, MD, USA), and after the washing steps, the arrays were scanned with the Agilent Scanner G2505B. Agilent Feature Extraction software (version 10.7.3.1) was used to analyze the acquired array images. Quantile normalization and subsequent data processing were performed using the GeneSpring GX v11.5.1 software package (Agilent Technologies). Differentially expressed lncRNAs with statistical significance were identified by volcano plot filtering; the threshold used to screen upregulated or downregulated lncRNAs was a fold change ≥ 2.0 with p < 0.05.
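As a minimal sketch of the volcano-style screening step just described (fold change ≥ 2.0 and p < 0.05), the following Python fragment applies those thresholds to a hypothetical normalized expression table; the column names and example values are placeholders, not the actual GeneSpring output.

```python
import pandas as pd

# Hypothetical probe-level table: log2-scale group means and p-values.
df = pd.DataFrame({
    "probe": ["A", "B", "C"],
    "mean_tumour": [8.1, 5.2, 9.4],
    "mean_normal": [6.0, 5.1, 10.9],
    "p_value": [0.01, 0.40, 0.03],
})

df["log2_fc"] = df["mean_tumour"] - df["mean_normal"]          # log2 fold change
df["up"] = (df["log2_fc"] >= 1.0) & (df["p_value"] < 0.05)     # FC >= 2.0
df["down"] = (df["log2_fc"] <= -1.0) & (df["p_value"] < 0.05)  # FC <= 0.5
print(df[df["up"] | df["down"]])                               # candidates
```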
Database (DB) search

The expression of lncRNA RP11 in CRC and other cancers was analyzed using the GEPIA (Gene Expression Profiling Interactive Analysis) online database (http://gepia.cancer-pku.cn). The expression of RP11 between tumour and normal tissues and among different stages of CRC was also analyzed with GEPIA. GEPIA delivers fast and customizable functionalities based on data from The Cancer Genome Atlas (TCGA), including differential expression analysis, correlation analysis and patient survival analysis [19]. We used the Kaplan-Meier plotter to assess the prognostic value of RP11, Zeb1, and their values normalized to Siah1 or Fbxo45 expression in CRC patients, based on the data from the GEPIA online database. High expression was defined as greater than the median transcript value, and low expression as less than the median. Data on the expression of Zeb1 in CRC and normal tissues were further obtained from the Oncomine database (www.oncomine.org) as follows: Hong Colorectal [20] and Skrzypczak Colorectal 2 [21]. The sample information and expression data are available in the Gene Expression Omnibus (GEO) database [Accession nos. GSE2091 (Skrzypczak Colorectal 2) and GSE9348 (Hong Colorectal) at www.ncbi.nlm.nih.gov/geo]. The expression profiles of Zeb1, Fbxo45, METTL3 and Siah1 among the N stages of CRC in patients were downloaded from LinkedOmics (http://www.linkedomics.org), a publicly available portal that includes multi-omics data from all 32 TCGA cancer types. The LinkedOmics website allows a flexible exploration of associations between a molecular or clinical attribute of interest and all other attributes, providing the opportunity to analyse and visualize associations between billions of attribute pairs for each cancer cohort [22].

Animal studies

All animal experiments complied with the Zhongshan School of Medicine Policy on the Care and Use of Laboratory Animals. To evaluate the potential roles of RP11 in the growth of CRC, ten female BALB/c nude mice (4 weeks old) purchased from the Sun Yat-sen University (Guangzhou, China) Animal Center were raised under pathogen-free conditions and randomly divided into two groups. HCT-15 RP11 stable overexpression or control cells (2 × 10^6 per mouse) diluted in 100 μl normal medium + 100 μl Matrigel (BD Biosciences) were subcutaneously injected into the immunodeficient mice to investigate tumour growth. Once the tumours of all mice became visible, the tumour volumes were measured every 3 d using manual callipers and calculated using the formula V = 1/2 × larger diameter × (smaller diameter)². At the end of the experiment, mice were sacrificed, and tumours were removed and weighed for use in histological and other analyses. For the in vivo liver metastasis model, HCT-15 RP11 stable overexpression or control cells (1 × 10^6 per mouse) were injected into both male and female BALB/c nude mice (n = 7 for each group) via the tail vein to analyze distant metastasis. Eight weeks after injection, the experiment was terminated, and livers were analyzed for the presence of metastatic tumours.

Protein stability

To measure protein stability, cells were treated with cycloheximide (CHX, final concentration 100 μg/ml) for the indicated time periods. Zeb1 expression was measured by western blot analysis.

RNA immunoprecipitation

RNA immunoprecipitation (RIP) experiments were performed using a Magna RIP RNA-Binding Protein Immunoprecipitation Kit (Millipore, Bedford, MA, USA) according to previously described procedures [23]. Antibodies against IgG, Zeb1, Siah1, Fbxo45, hnRNPA2B1, and m6A for the RIP assays were diluted 1:1000. After RIP, RNA concentrations were measured using the Qubit RNA High-Sensitivity (HS) Assay Kit and Qubit 2.0. The co-precipitated RNAs were detected by reverse transcription (RT)-PCR. The gene-specific primers used for detecting RP11 are presented in Additional file 2: Table S2. RNA expression was normalized to the total amount of RNA used for reverse transcription.
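Both the CHX chase described above and the Act-D mRNA-stability assay described below reduce to the same calculation: fit ln(abundance) versus time to a line and take t1/2 = ln2/|slope|. The sketch below shows this on made-up numbers; it is an illustration of the stated formula, not the authors' analysis script.

```python
import numpy as np

time_h = np.array([0.0, 2.0, 4.0, 8.0])       # hours after CHX / Act-D
signal = np.array([1.00, 0.71, 0.52, 0.26])   # normalized abundance

slope, intercept = np.polyfit(time_h, np.log(signal), 1)
t_half = np.log(2) / abs(slope)
print(f"t1/2 = {t_half:.2f} h")               # ~4 h for this fake data
```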
RNA pull-down/mass spectrometry analysis

LncRNA RP11 and its antisense RNA were transcribed in vitro from the pGEM-T-RP11 vector, biotin-labelled with the Biotin RNA Labeling Mix (Roche Diagnostics, Indianapolis, IN, USA) and T7/SP6 RNA polymerase (Roche), treated with RNase-free DNase I (Roche), and purified with an RNeasy Mini Kit (Qiagen, Valencia, CA, USA). One milligram of protein from extracts of HCT-15 cells stably transfected with pcDNA3.1-RP11 was then mixed with 50 pmol of biotinylated RNA, incubated with streptavidin agarose beads (Invitrogen, Carlsbad, CA, USA), and washed. The proteins were resolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and silver-stained, and the specific bands were excised. In-gel proteolysis was performed using trypsin (89871, Pierce, Rockford, IL, USA). Mass spectrometry (MS) analysis was then performed on a MALDI-TOF instrument (Bruker Daltonics) as described elsewhere [24].

mRNA stability

To measure RNA stability in HCT-15 RP11 stable overexpression or control cells, 5 μg/ml actinomycin D (Act-D, Catalogue #A9415, Sigma, St. Louis, MO, USA) was added to the cells. After incubation for the indicated times, cells were collected, and RNA was isolated for qRT-PCR. The mRNA half-life (t1/2) of Zeb1, Siah1 or Fbxo45 was calculated as ln2/slope, with GAPDH used for normalization.

Statistical analysis

Statistical analysis was performed using SPSS v. 16.0 software (SPSS, Chicago, IL, USA). The expression levels of lncRNA RP11 in CRC patients were compared with the paired-sample t test. Survival curves were generated using the Kaplan-Meier method, and the differences were analysed with the log-rank test. The χ² test, Fisher's exact test, and Student's t-test were used for comparisons between groups. Data are expressed as the mean ± standard deviation (SD) from at least three independent experiments. All p values were two-sided, and p < 0.05 was considered statistically significant.

RP11 is upregulated in CRC cells and tissues

To identify potential oncogenic lncRNAs involved in the tumorigenesis and progression of CRC, we analysed lncRNA expression profiles in matched normal and CRC tissue pairs (3 stage I CRC cases and 3 stage IV CRC cases; full data available in GEO, Accession Number GSE110715) using microarray analysis. Hierarchical clustering showed systematic variations in lncRNA expression between stage I CRC, stage IV CRC, and their corresponding paired adjacent normal samples (Fig. 1 a). The differentially expressed lncRNAs between the CRC tissues and paired adjacent samples were further analysed. As shown in Fig. 1 b, stage I and stage IV CRC tissues shared 325 lncRNAs that were upregulated with a ≥ 2.0-fold change relative to their corresponding paired nontumour counterparts. Among these lncRNAs, 8 also exhibited greater expression in stage IV than in stage I CRC tissues (Fig. 1 b&c, Additional file 2: Table S1). To validate the findings of the microarray analysis, we chose the 8 upregulated candidates and randomly selected 2 downregulated lncRNAs and analysed their expression levels by qRT-PCR in 5 pairs of CRC and corresponding nontumour tissues (Additional file 1: Figure S1 A). Fig. 1: a Heat maps of lncRNAs differentially expressed between stage I samples (a, cancer tissues) and matched adjacent normal samples (b, normal samples) (left) or between stage IV samples and matched adjacent normal samples (right).
The colour scale on the left illustrates the relative RNA expression levels: red represents high expression, and green represents low expression. b Venn diagram showing the overlap of 2-fold upregulated lncRNAs between stage I and normal samples, stage IV and normal samples, and stage IV and stage I samples. c Heat maps of the 8 lncRNAs simultaneously upregulated between stage I and normal samples, between stage IV and normal samples, and between stage IV and stage I samples (the red probe targets lncRNA RP11). d The relative fold change of RP11 in 32 paired human colon cancer tissues versus matched adjacent normal mucosa tissues. e The relative expression of RP11 in colon (left) and rectal (right) cancer tissues and their corresponding adjacent normal tissues based on data available from the TCGA database. f The levels of RP11 in CRC cell lines and human colon mucosal epithelial NCM460 cells were measured by qRT-PCR. Data are presented as the mean ± SD from three independent experiments. *p < 0.05 compared with control. The results confirmed that all 8 upregulated lncRNAs were overexpressed in CRC, whereas the expression levels of the pl078441 and agiseq14311 target genes were decreased (p < 0.05 for all). Among the eight candidates, the targets of the CUST_8502_PI428631609 (lncRNA RP11, RP11-138J23.1) and CUST_9335_PI428631609 (lncRNA AC123023.1) probes have been shown to be lncRNAs. Microarray analysis suggested that the elevation of lncRNA RP11 (RP11) in CRC tissues versus adjacent normal tissues was greater than that of lncRNA AC123023.1 (Table S3). qRT-PCR confirmed that the abundance of RP11 was significantly greater than that of lncRNA AC123023.1 in 5 CRC tissues (Additional file 1: Figure S1 C). RP11 is located at chromosome 5: 104,079,911-104,105,403, with a transcript length of 574 nt (ENSG00000251026, Additional file 1: Figure S1 B). It is polyA-tailed: in poly-dT bead pull-down and qRT-PCR experiments, its enrichment in the bound fractions was 11-fold greater than that in the unbound fractions. To confirm the role of RP11 in the progression of CRC, we compared RP11 levels in CRC tissues and paired adjacent non-cancerous mucosa from 32 individual patients (Table S1). RP11 was successfully amplified in all tumour and normal specimens analysed. According to the qRT-PCR analysis, RP11 expression was significantly increased in 30 of 32 (93.8%) tumour samples compared with the adjacent normal mucosa tissues (Fig. 1 d). In this cohort, the average expression level of RP11 in the tumour tissues was 48-fold greater than that in the adjacent normal mucosa tissues. However, there was no significant difference in RP11 expression between different ages, sexes or stages (Table S1), which might be due to the small sample size. We further assessed RP11 expression in a TCGA pan-cancer dataset obtained from the GEPIA online database (http://gepia.cancer-pku.cn). TCGA data confirmed that the expression of RP11 in colon and rectal carcinoma (COAD, READ) was significantly (p < 0.05) greater than that in the adjacent normal tissues (Fig. 1 e). In addition, the expression of RP11 in COAD and READ was relatively high among all measured cancers (Additional file 1: Figure S1 D and E). RP11 expression was verified in multiple colon cancer cell lines, namely, SW620, LoVo, HCT-116, Caco2, HT29, HCT-15, HCT-8, SW480, DLD1, and RKO, and in human colon mucosal epithelial NCM460 cells. The results indicated that the RP11 levels in all of the measured CRC cell lines except RKO were greater than that in NCM460 cells (Fig. 1 f).
SW620 cells, which were originally derived from a lymph node metastasis of a CRC patient, had the highest level of RP11 among all analysed cell lines (Fig. 1 f). Collectively, these data show that lncRNA RP11 is increased in CRC cells and tissues.

RP11 triggers the dissemination of CRC cells both in vitro and in vivo

The potential biological roles of RP11 in CRC progression were investigated. We overexpressed RP11 in HCT-15, HCT-8, DLD1, SW480 and RKO cells (cells with low RP11 expression, Additional file 1: Figure S2 A). CCK-8 analysis showed that RP11 overexpression had no significant effect on the proliferation of these cells (Fig. 2 a). Consistent results were obtained upon RP11 silencing in SW620 or HCT-116 cells (cells with high RP11 expression, Additional file 1: Figure S2). The effects of RP11 on the in vitro migration and invasion of CRC cells were then evaluated. Wound healing analysis revealed that RP11 overexpression triggered the migration of both HCT-15 (Fig. 2 b) and HCT-8 (Additional file 1: Figure S2 J) cells. Transwell analysis confirmed that RP11 can increase the in vitro invasion of HCT-15 cells (Fig. 2 c). RP11 silencing inhibited the in vitro migration (Additional file 1: Figure S2 K) and invasion (Additional file 1: Figure S2 L) of SW620 cells. CRC cells overexpressing RP11 assumed a spindle-like fibroblast appearance and lost their cobblestone-like epithelial morphology (Additional file 1: Figure S2 M), suggesting that RP11 may regulate EMT and cancer metastasis. This was confirmed by western blot analysis, which showed a decrease in the expression of the epithelial cell marker E-cadherin (E-Cad) and an increase in the expression of the mesenchymal cell markers fibronectin (FN) and vimentin (Vim) in HCT-15 and HCT-8 cells transfected with RP11 (Fig. 2 d). RP11 silencing impaired EMT progression in SW620 (Fig. 2 e) and HCT-116 (Additional file 1: Figure S2 N) cells. Collectively, our data suggested that RP11 can induce the migration, invasion and EMT of CRC cells. To evaluate the in vivo effects of RP11 on tumour development, we examined the expression levels of EMT-related markers in RP11-overexpressing HCT-15 tumour xenografts in nude mice. At the end of the experiment, the tumour sizes, volumes and weights in the RP11 group were comparable to those in the control group (Fig. 2 f, g). This was confirmed by IHC analysis of the expression of Ki67, a nuclear antigen expressed in proliferating cells: the Ki67 level was comparable between the RP11 and control groups (Fig. 2 h). The IHC data showed that RP11 increased the levels of Vim and FN in HCT-15 tumour xenografts (Fig. 2 h). To further determine the impact of RP11 on in vivo metastasis, equal numbers of HCT-15 RP11 stable overexpression and control cells (1 × 10^6 in 100 μl) were injected into BALB/c nude mice via the tail vein, and distant liver metastasis was analysed. Eight weeks after injection, the experiment was terminated, and the livers were analysed for the presence of metastatic tumours. As shown in Fig. 2 i & j, the numbers and sizes of the liver tumours derived from RP11-overexpressing HCT-15 cells were significantly greater than those derived from the control cells. Collectively, our data showed that RP11 can enhance the in vitro and in vivo dissemination of CRC cells and induce EMT.

Upregulation of Zeb1 mediates the RP11-induced dissemination of CRC cells

LncRNAs can activate the transcription of nearby genes in cis by promoting chromatin looping from transcriptional enhancers [25,26].
We therefore investigated the effects of RP11 on its nearby transcripts, including NUDT12, C5orf30, PPIP5K2, GIN1, RP11-6N13.1, and CTD-2374C24 (Additional file 1: Figure S1 B). The expression levels of the tested genes showed no significant difference between the HCT-15 RP11 stable and control cells (Additional file 1: Figure S3 A). In SW620 cells, RP11 knockdown also had no effect on the expression of its nearby transcripts (Additional file 1: Figure S3 B). Thus, the biological functions of RP11 appear not to involve cis regulation. EMT-TFs such as Snail, Slug, Twist and Zeb1 can regulate the progression of EMT by targeting E-Cad expression [27]. To investigate the mechanisms responsible for the RP11-induced dissemination of CRC cells, we analysed the effects of RP11 on the expression of EMT-TFs in CRC cells. The results showed that RP11 overexpression increased the expression of Zeb1 in both HCT-15 and HCT-8 cells, while si-RP11 downregulated the expression of Zeb1 in SW620 and HCT-116 cells (Fig. 3 a and Additional file 1: Figure S3 C). RP11 overexpression or knockdown had no effect on the expression of Snail, Slug or Twist (Fig. 3 a and Additional file 1: Figure S3 C). Subcellular fractionation showed that RP11 overexpression increased the nuclear accumulation of Zeb1 in HCT-15 cells (Fig. 3 b). Consistently, RP11 increased Zeb1 expression in HCT-15 tumour xenografts (Fig. 3 c). Intriguingly, neither RP11 overexpression in HCT-15 cells (Fig. 3 d) nor knockdown in SW620 cells (Additional file 1: Figure S3 D) had a significant effect on the mRNA levels of the tested EMT-TFs. Consistently, RP11 overexpression had no effect on the mRNA expression of Zeb1 in Caco2, HT-29, SW480, DLD1, or RKO cells (Additional file 1: Figure S3 E). Although Zeb1 has been well demonstrated to induce the EMT of cancer cells, including CRC cells, by inhibiting E-Cad [17,28], the role of Zeb1 in the RP11-induced dissemination of CRC cells was unknown and was therefore investigated. Wound healing analysis showed that Zeb1 knockdown attenuated RP11-induced cell migration (Fig. 3 e, Additional file 1: Figure S3 F). Western blot analysis confirmed that Zeb1 knockdown attenuated the RP11-induced upregulation of FN and downregulation of E-Cad (Fig. 3 f). These results indicated that RP11 may increase Zeb1 expression via post-translational regulation. This was confirmed by data showing that the half-life of the Zeb1 protein in HCT-15 (Fig. 3 g) and HCT-8 (Additional file 1: Figure S3 G) RP11 stable overexpression cells was significantly greater than that in the corresponding control cells. Because ubiquitylation of Zeb1 is critical for its stability [29], we hypothesized that RP11 modifies the ubiquitylation level of Zeb1. Immunoprecipitation results showed that RP11 significantly decreased the ubiquitylation of Zeb1 in both HCT-15 (Fig. 3 h) and HCT-8 (Fig. 3 i) cells. Collectively, our present data suggest that post-translational upregulation of Zeb1 is involved in the RP11-induced dissemination of CRC cells.

Downregulation of Siah1 and Fbxo45 mediates RP11-induced upregulation of Zeb1

Because lncRNAs can directly interact with proteins and thereby regulate protein stability [25,30], the binding of Zeb1 to RP11 was investigated by RIP-PCR. The data showed that immunoprecipitation (IP) of Zeb1 did not significantly enrich RP11 in either HCT-15 or HCT-8 cells (Additional file 1: Figure S4 A).
In addition, Zeb1 overexpression had no effect on RP11 expression in either HCT-15 or HCT-8 cells (Additional file 1: Figure S4 B). Consistently, the RP11 pull-down/MS analysis did not show binding between RP11 and Zeb1 in either the HCT-15 control or RP11 stable overexpression cells (Table S4). This suggested that the RP11-induced upregulation of Zeb1 is not due to a direct interaction. GSK-3β, β-catenin, p65, MAPK/ERK, p38-MAPK, PI3K/Akt, and STAT3 have been reported to regulate Zeb1 expression and EMT [31]. However, no significant variation was observed in the total and phosphorylated levels of these signalling molecules between HCT-15 RP11 stable overexpression and control cells (Additional file 1: Figure S4 C). To systematically investigate the specific factors involved in the RP11-induced stabilization of Zeb1 in CRC cells, we examined the mRNA expression levels of 7 reported factors of the ubiquitin-proteasome system that can post-translationally regulate the stability of Zeb1 (summarized in Table S5). The results indicated that RP11 overexpression significantly (p < 0.05) decreased the expression levels of Siah1 and Fbxo45 but had no significant effect on the other factors in either HCT-15 (Fig. 4 a) or HCT-8 (Fig. 4 b) cells. This was confirmed by western blot analysis showing that RP11 overexpression downregulated the expression of Siah1 and Fbxo45 in both HCT-15 and HCT-8 cells (Fig. 4 c). Consistently, RP11 decreased the expression of Siah1 and Fbxo45 in HCT-15 tumour xenografts (Fig. 4 d). To verify the roles of Siah1 and Fbxo45 in the expression of Zeb1, we overexpressed Siah1 and Fbxo45 in HCT-15 cells (Fig. 4 e). The results showed that overexpression of Siah1 and Fbxo45 attenuated the RP11-induced upregulation of Zeb1 in HCT-15 cells (Fig. 4 e). However, RIP-PCR showed that RP11 had no significant effect on the recruitment of the Siah1 or Fbxo45 protein in HCT-15 cells (Fig. 4 f). Consistently, the RP11 pull-down/MS analysis did not show binding between RP11 and Siah1 or Fbxo45 in HCT-15 cells (Table S4). These results suggested that RP11 downregulates the mRNA levels of Siah1 and Fbxo45 but does not bind to the Siah1 or Fbxo45 protein.

RP11 regulates Siah1 and Fbxo45 expression by forming the RP11-hnRNPA2B1-mRNA complex

To investigate the potential mechanisms of the RP11-regulated mRNA expression of Siah1 and Fbxo45, we performed RNA pull-down assays followed by MS, with biotinylated RP11 and antisense RP11 as a negative control. Among the identified proteins summarized in Table S4, hnRNPA2B1 was identified as a protein that directly interacts with RP11 (Fig. 5 a) and has been reported to shorten mRNA half-lives [32]. RIP analysis verified the interaction between hnRNPA2B1 and RP11 in HCT-15 cells (Fig. 5 b&c). hnRNPA2B1 is an RNA-binding protein (RBP) that localizes to both the cytoplasm and the nucleus. Our data showed that RP11 overexpression increased the cytoplasmic localization of hnRNPA2B1 in both HCT-15 (Fig. 5 d) and HCT-8 (Additional file 1: Figure S5 A) cells. RIP-PCR showed that hnRNPA2B1 could recruit both Siah1 and Fbxo45 mRNA in HCT-15 cells (Fig. 5 c). Furthermore, hnRNPA2B1 overexpression decreased the expression of Siah1 and Fbxo45 (Fig. 5 e). Computational analysis revealed that RP11 could directly bind to the CDS of Siah1 (Fig. 5 f) and the 3'UTR of Fbxo45 (Fig. 5 g). In vitro transcription and RIP-PCR confirmed that RP11 could directly bind to the mRNAs of Siah1 and Fbxo45 in HCT-15 cells (Fig. 5 h). RP11 overexpression significantly decreased the mRNA stability of Siah1 (Fig.
5 i) and Fbxo45 (Fig. 5 j) in HCT-15 cells. We further investigated whether the binding between hnRNPA2B1 and the mRNAs of Siah1 and Fbxo45 is RP11 dependent. RIP-PCR showed that the binding between hnRNPA2B1 and the mRNAs of Siah1 and Fbxo45 in HCT-15 RP11 stable overexpression cells was significantly greater than that in the control cells (Fig. 5 k). Consistently, RP11 knockdown decreased the binding between hnRNPA2B1 and the Siah1 mRNA and between hnRNPA2B1 and the Fbxo45 mRNA in HCT-15 cells (Additional file 1: Figure S5 C). These data suggested that RP11 regulates Siah1 and Fbxo45 expression by forming the RP11-hnRNPA2B1-mRNA complex.

m6A modification is involved in the upregulation of RP11 in CRC cells

The epigenetic mechanisms responsible for the upregulation of RP11 in CRC cells were investigated. First, treatment with 5-aza-dC (a DNA methyltransferase inhibitor) had no significant effect on RP11 expression in either HCT-15 or HCT-8 cells (Additional file 1: Figure S6 A), suggesting that DNA methylation might not be involved in RP11 expression in CRC cells. The role of histone acetylation in RP11 expression was investigated by treating HCT-15 cells with specific inhibitors of HDAC1, 3, 4, 6 and 8 or with broad-spectrum HDAC inhibitors such as SAHA and NaB. These HDAC inhibitors had no significant effect on RP11 expression in HCT-15 cells (Additional file 1: Figure S6 B). This was confirmed by data showing that overexpression of HDAC6 and HDAC8 had no effect on RP11 expression in HCT-15 cells (Additional file 1: Figure S6 C). The N6-methyladenosine (m6A) modification modulates all stages of the RNA life cycle, such as RNA processing, nuclear export and translation [33,34], and thereby regulates the expression and functions of RNAs, including lncRNAs.

Fig. 5 (caption, continued): The secondary structure of RP11 was predicted (http://rna.tbi.univie.ac.at/); the red colour indicates strong confidence in the prediction of each base. c RNA pull-down detection of the interaction between hnRNPA2B1 and RP11, Siah1, or Fbxo45 in HCT-15 cells. d hnRNPA2B1 expression in the cytoplasmic and nuclear fractions of HCT-15 RP11 stable overexpression and control cells was analysed by western blot. e HCT-15 cells were transfected with pcDNA (vector) or pcDNA/hnRNPA2B1 for 24 h, and the expression of Siah1 and Fbxo45 was verified by western blot analysis. f & g Computational prediction of the interaction between RP11 and the Siah1 (f) or Fbxo45 (g) mRNA based on IntaRNA 2.0 (http://rna.informatik.uni-freiburg.de/IntaRNA/Input.jsp) [53]. h After in vitro transcription to generate biotin-labelled RP11 and RP11-AS, RIP-PCR was performed to analyse the relative enrichment of Siah1 or Fbxo45 mRNA on RP11 in HCT-15 cells. i & j After treatment with Act-D for the indicated times, the mRNA levels of Siah1 (i) or Fbxo45 (j) in HCT-15 RP11 stable overexpression and control cells were measured by qRT-PCR. k Binding between hnRNPA2B1 and Siah1 mRNA or between hnRNPA2B1 and Fbxo45 mRNA in HCT-15 RP11 stable overexpression and control cells was analysed by RIP-PCR. Data are presented as the mean ± SD from three independent experiments. **p < 0.01 compared with control.

m6A RNA immunoprecipitation (RIP)-qPCR showed 9.3- and 5.0-fold enrichment of RP11 with the m6A antibody in HCT-15 and HCT-8 cells, respectively (Fig. 6 a), while the level of enrichment in NCM460 cells (2.3-fold) was significantly lower than that in the CRC cells (Fig. 6 a).
We found that overexpression of Mettl3 (Additional file 1: Figure S6 D), the key m6A methyltransferase ("writer") in mammalian cells [35,36], increased RP11 expression in both HCT-15 and HCT-8 cells (Fig. 6 b). Consistently, overexpression of ALKBH5 (Additional file 1: Figure S6 E), an m6A demethylase, decreased RP11 expression (Fig. 6 c). These data indicated that m6A positively regulates RP11 expression in CRC cells. We then evaluated the possible mechanisms involved in the m6A-regulated expression of RP11 in CRC cells. By treating cells with Act-D to terminate transcription, we found that Mettl3 overexpression had no significant effect on the half-life of RP11 in HCT-15 cells (Fig. 6 d). Subcellular fractionation analysis showed that Mettl3 overexpression markedly increased the localization of RP11 to chromatin (Fig. 6 e), which might be because Mettl3 increases the stability of nascent RP11. However, Mettl3 overexpression had no effect on the mRNA expression of Siah1 or Fbxo45 in HCT-15 cells (Additional file 1: Figure S6 F). Furthermore, Mettl3 overexpression increased the binding between RP11 and hnRNPA2B1 in both HCT-15 and HCT-8 cells (Fig. 6 f), which might be because Mettl3 increases RP11 expression and hnRNPA2B1 is an m6A reader involved in RNA processing events [37]. These data suggested that the m6A modification can increase RP11 expression in CRC cells by increasing RP11 nuclear accumulation.

The m6A/RP11/Zeb1 axis and the in vivo progression of CRC

At this point, we asked whether there is a link between m6A methylation-regulated RP11, its downstream molecules Siah1, Fbxo45, and Zeb1, and clinical CRC development.

Fig. 6: The m6A modification is involved in the upregulation of RP11 in CRC cells. a m6A RIP-qPCR analysis of RP11 in HCT-15, HCT-8 and NCM460 cells. b After transfection with vector control or ppB/Mettl3 for 24 h, RP11 expression was measured by qRT-PCR. c After transfection with vector control or pcDNA/Alkbh5 for 24 h, RP11 expression was measured by qRT-PCR. d After transfection with vector control or ppB/Mettl3 for 24 h, HCT-15 cells were further treated with Act-D for the indicated times, and RP11 expression was measured by qRT-PCR. e After transfection with vector control or ppB/Mettl3 for 24 h, the cytoplasmic, nuclear, and chromatin fractions of HCT-15 cells were separated for RNA extraction and qRT-PCR. f After transfection with vector control or ppB/Mettl3 for 24 h, the binding between RP11 and hnRNPA2B1 in HCT-15 and HCT-8 cells was analysed by RIP-PCR using an antibody against hnRNPA2B1. Data are presented as the mean ± SD from three independent experiments. *p < 0.05, **p < 0.01 compared with control.

Zeb1 expression in CRC tissues was significantly (p < 0.01) greater than that in normal tissues according to the Hong Colorectal (Fig. 7 a) and Skrzypczak Colorectal 2 (Fig. 7 b) data from the Oncomine database. Significantly increased Zeb1 expression was observed in patients with N2 stage CRC compared to patients with N0 stage CRC (Fig. 7 c). Consistently, decreased expression of Fbxo45 was observed in patients with N2 stage CRC compared to patients with N0 stage CRC (Fig. 7 d). In addition, Mettl3 expression in patients with N2 stage CRC was significantly greater than that in patients with N1 stage CRC (Additional file 1: Figure S7 A).
However, there was no significant difference in Siah1 expression between patients with N0, N1 or N2 stage CRC (Additional file 1). These findings suggest an increasing trend for METTL3 and Zeb1 expression and a decreasing trend for Fbxo45 expression during the malignant transformation of CRC. We further verified the co-expression relationships underlying RP11-regulated CRC progression. RP11 expression was significantly negatively correlated with ALKBH5 expression in CRC patients (Additional file 1: Figure S7 C), supporting the conclusion that m6A can regulate the expression of RP11 and that RP11-regulated Siah1-Fbxo45/Zeb1 signalling is involved in the development of CRC. Using the online Kaplan-Meier plotter bioinformatics tool, we found that colon cancer patients with increased RP11 expression showed reduced disease-free survival (DFS, Fig. 7 e) and overall survival (OS, Additional file 1: Figure S7 D), although without statistical significance (p > 0.05). When RP11 expression was normalized to that of Siah1 (i.e., the level of RP11 relative to that of Siah1) or Fbxo45, there was a trend towards significance for reduced DFS in colon cancer patients with higher RP11/Siah1 (Fig. 7 f) or RP11/Fbxo45 (Fig. 7 g) levels compared with patients with lower values. Similarly, there was a trend towards significance for reduced OS in colon cancer patients with higher RP11/Siah1 (Additional file 1: Figure S7 E) or RP11/Fbxo45 (Additional file 1: Figure S7 F) levels compared with those with lower values. Colon cancer patients with increased Zeb1 expression showed significantly (p < 0.05) reduced DFS (Fig. 7 h). When Zeb1 expression was normalized to that of Fbxo45 (Fig. 7 j), but not Siah1 (Fig. 7 i), the DFS of colon cancer patients with higher Zeb1/Fbxo45 levels was significantly reduced compared to that of patients with lower values. Similarly, colon cancer patients with increased Zeb1 expression showed significantly (p < 0.05) reduced OS compared with patients with low levels (Additional file 1: Figure S7 G). Normalization to Fbxo45 (Additional file 1: Figure S7 I), but not Siah1 (Additional file 1: Figure S7 J), likewise revealed significantly reduced OS in colon cancer patients. These results suggest that the m6A/RP11/Zeb1 axis triggers the in vivo progression of CRC.
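For readers who wish to reproduce this kind of median-split survival comparison on their own expression data, a minimal sketch using the lifelines package is shown below. The data frame and column names are hypothetical placeholders; the analysis above used the online Kaplan-Meier plotter with GEPIA/TCGA data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort: expression value, follow-up time, event indicator.
df = pd.DataFrame({
    "rp11": [5.1, 0.8, 3.3, 0.4, 7.2, 1.1],
    "months": [20, 60, 34, 72, 12, 55],
    "event": [1, 0, 1, 0, 1, 1],   # 1 = relapse/death observed
})

high = df["rp11"] > df["rp11"].median()        # dichotomize at the median
kmf = KaplanMeierFitter()
for label, grp in df.groupby(high):
    kmf.fit(grp["months"], grp["event"], label=f"high={label}")
    print(label, kmf.median_survival_time_)    # median survival per group

res = logrank_test(df.loc[high, "months"], df.loc[~high, "months"],
                   event_observed_A=df.loc[high, "event"],
                   event_observed_B=df.loc[~high, "event"])
print(f"log-rank p = {res.p_value:.3f}")
```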
Discussion

The application of next-generation sequencing has revealed that thousands of lncRNAs are involved in the progression of human disease. Several lncRNAs have been reported to play key roles in cancer developmental processes, including proliferation, survival, migration and genomic stability [25]. Among the few lncRNAs that have been functionally characterized, several have been linked to cancer cell invasion and metastasis [38,39]. Regarding CRC progression, lncRNAs have been reported to regulate cell survival [40], tumorigenicity [10], and asymmetric stem cell division [41]. Using microarray analysis and functional screening, we show that lncRNA RP11, which is upregulated by m6A methylation, can trigger the migration, invasion and EMT of CRC cells via post-translational upregulation of the EMT-TF Zeb1. Our study highlights the function and mechanisms of RP11 in regulating CRC metastasis. Among the 8 lncRNAs simultaneously upregulated between stage I and normal tissues, stage IV and normal tissues, and stage IV and stage I tissues, RP11 expression in CRC tissues was not only greater than that in adjacent normal tissues but also higher than that in other cancers, suggesting that RP11 might be a specific target for CRC diagnosis and therapy. By screening for its potential roles in cell proliferation, colony formation, cell cycle progression, apoptosis, drug sensitivity/accumulation, and ROS generation via gain- and loss-of-function assessments, we found that RP11 can trigger the migration, invasion and EMT of CRC cells both in vitro and in vivo. This was evidenced by the observed upregulation of FN and Vim and downregulation of E-Cad. Together with published reports of cancer metastasis-related lncRNAs, such as lncRNA-ATB [38], SChLAP1 [39], NKILA [30], and PNUTS [42], our study confirms the regulatory roles of lncRNAs in EMT and cancer metastasis. High RP11 expression correlates with positive lymph node metastasis and advanced TNM stage, suggesting that RP11 can be a strong predictor of CRC metastasis and prognosis. We find that post-translational regulation of Zeb1 plays an essential role in the RP11-triggered dissemination of CRC. Zeb1 is a well-known and powerful EMT-TF that promotes EMT, metastasis, and the generation of cancer stem cells in many types of malignancies, including CRC [28,43]. We find that RP11 has no effect on the mRNA expression of Zeb1 but increases its protein expression in CRC cells by increasing Zeb1 protein stability and decreasing Zeb1 ubiquitination. By screening for factors responsible for the stability of Zeb1 in cancer cells, we confirm that downregulation of Siah1 and Fbxo45 mediates the RP11-induced stabilization of Zeb1 in CRC cells. As ubiquitin E3 ligases, Siah1 and Fbxo45 can induce Zeb1 degradation through the ubiquitin-proteasome pathway [44,45]. LncRNAs can modulate the stability and nuclear turnover of specific mRNAs via RBPs and miRNAs [5]. In this work, the RP11-hnRNPA2B1-mRNA complex decreased the mRNA stability of Siah1 and Fbxo45 in CRC cells. RP11 can be detected in both the cytoplasm and the nucleus of CRC cells, and its action in decreasing mRNA stability through hnRNPA2B1 can be attributed to its cytoplasmic localization. This is supported by the observation that RP11 increases the cytoplasmic accumulation of hnRNPA2B1, while hnRNPA2B1 overexpression decreases the expression of Siah1 and Fbxo45. Several existing studies have demonstrated that lncRNAs form complexes with RBPs and then trigger mRNA decay [32,46]. HnRNPA2B1 is known to form complexes with lncRNAs and is emerging as an important mediator of lncRNA-induced transcriptional repression [47]. Recently, hnRNPA2B1 bound to the lncRNA lncHC has been reported to directly bind the Cyp7a1 and Abca1 mRNAs and reduce their expression levels in hepatocytes [32]. In addition, the interaction of hnRNPA2B1 with the lncRNA RMST may indicate the participation of this lncRNA in alternative splicing, mRNA trafficking, and neuronal cell survival [48]. Although our findings link RP11 and hnRNPA2B1 to the suppression of mRNA stability, the detailed molecular mechanism is not yet understood in depth. One possibility is that hnRNPA2B1 recruits factors involved in the mRNA degradation pathway (such as P bodies) to accelerate mRNA degradation.
Finally, we explored whether m6A methylation, but not DNA methylation or histone acetylation, is involved in the upregulation of RP11 in CRC cells. m6A involvement is evidenced by the observations that RP11 is significantly enriched by m6A-RIP and that Mettl3 significantly increases RP11 expression in CRC cells. As one of the most common RNA modifications, m6A can be found on almost all types of RNAs; can modulate all stages of the RNA life cycle, such as RNA processing, nuclear export and translation [33,34]; and can therefore regulate cancer progression processes, such as cell proliferation [49] and tumorigenesis [50]. However, investigations of the functions of m6A in lncRNAs are few. One recent study first revealed an m6A-dependent model of lincRNA/miRNA interaction, in which the m6A modification of linc1281 was required for the direct binding of let-7 to linc1281 in embryonic stem cells (ESCs) [51]. We reveal that m6A could increase RP11 accumulation in the nucleus and on chromatin. We find that Mettl3 overexpression could increase the binding between hnRNPA2B1 and RP11 in CRC cells, which might be due to m6A-induced alterations in the local RNA structure that enhance the RNA binding of hnRNPs [52]. Considering that knowledge of the mechanisms of RNA methylation is still in its infancy, additional m6A-mediated regulatory patterns affecting the biogenesis and functions of lncRNAs are worth exploring in the future. In conclusion, our findings demonstrate the pro-metastatic role of lncRNA RP11 in the dissemination of CRC cells. We have discovered that RP11 post-translationally stimulates Zeb1 expression by downregulating the mRNA expression of Siah1 and Fbxo45 through binding to hnRNPA2B1. Furthermore, m6A modification may increase RP11 expression and function in CRC cells and tissues. Considering the high and specific levels of RP11 in CRC tissues, our present study identifies a potent target that may serve as a predictive marker of metastasis and as an effective target for anti-metastatic therapies in CRC patients. Additional file Additional file 1: Figure S1. RP11 is increased during the tumourigenesis and progression of CRC. Figure S2. RP11 triggers the dissemination of CRC cells both in vitro and in vivo. Figure S3. Upregulation of Zeb1 mediates RP11-triggered dissemination of CRC cells. Figure S4. Downregulation of Siah1 and Fbxo45 mediates RP11-induced upregulation of Zeb1. Figure S5. RP11 regulates Siah1 and Fbxo45 expression by forming the RP11-hnRNPA2B1-mRNA complex. Figure S6. The m6A modification is involved in the upregulation of RP11 in CRC cells. Figure S7. The m6A/RP11/Zeb1 axis and in vivo progression of CRC. (DOCX 14708 kb) Additional file 2: Table S1. The clinicopathological features of clinical CRC tissues (n = 32). Table S2. Sequences of primers. Table S3. Information on the 8 lncRNAs. Table S4. Protein information from the RP11 pull-down/MS analysis. Table S5.
9,587.2
2019-04-13T00:00:00.000
[ "Biology", "Medicine" ]
Redox state-dependent modulation of plant SnRK1 kinase activity differs from AMPK regulation in animals The evolutionarily highly conserved SNF1-related protein kinase (SnRK1) is a metabolic master regulator in plants, balancing the critical energy consumption between growth- and stress response-related metabolic pathways. While the regulation of the mammalian (AMP-activated protein kinase, AMPK) and yeast (SNF1) orthologues of SnRK1 is well-characterised, the regulation of SnRK1 kinase activity in plants is still an open question. Here we report that the activity and T-loop phosphorylation of AKIN10, the kinase subunit of the SnRK1 complex, are regulated by the redox status. Although this regulation is dependent on a conserved cysteine residue, the underlying mechanism is different to the redox regulation of animal AMPK and has functional implications for the regulation of the kinase complex in plants under stress conditions. Keywords: AKIN10; protein kinase; redox regulation; reactive oxygen species; SnRK1 In plants, the SNF1-related protein kinase (SnRK1), an orthologue of the mammalian AMP-activated protein kinase (AMPK) and the yeast SNF1 protein kinase, plays a central role in regulating energy homeostasis, development and various stress responses [1-7]. SnRK1 is evolutionarily highly conserved, and orthologues can be found in all three domains of life [8]. Like its mammalian and yeast counterparts, the SnRK1 protein kinase is a heterotrimeric complex consisting of one catalytic S/T protein kinase 'alpha' subunit and two regulatory 'beta' and 'gamma' subunits [9,10]. In Arabidopsis thaliana (At), three genes encode SnRK1 alpha subunit isoforms. Two of them, AKIN10 (AtSnRK1a1) and AKIN11 (AtSnRK1a2), are ubiquitously expressed in vegetative tissue, while AKIN12 (AtSnRK1a3) is predominantly expressed in pollen and seeds [11]. Single knockout mutants of akin10 and akin11 produce viable offspring, whereas the double mutant is lethal [1,4]. Much progress has been made in understanding how AMPK activity is regulated in mammals. There, the complex is under the control of a multitude of factors, ranging from direct regulation by small effector molecules such as AMP/ADP/ATP, through activity modulation by AMPK-interacting proteins, to more indirect regulation via hormone signalling [12]. In plants, in contrast, the molecular mechanisms regulating SnRK1 activity remain poorly understood [13]. Due to the obvious differences in physiology and metabolism between mammals, fungi and plants, many of the well-studied regulatory mechanisms of AMPK and SNF1 seem not to be valid for plant SnRK1.
A common feature of the AMPK, SNF1 and SnRK1 catalytic subunits is their several hundred-fold activation by phosphorylation of a threonine in the so-called 'activation loop' (or T-loop), which is conserved in all eukaryotic orthologues [4]. This mechanism of activation is very similar to mitogen-activated protein kinase (MAPK) activation by upstream MAPK kinases (MKKs) [14]. In mammals, T-loop phosphorylation of AMPK presents its major mode of activation and was shown to be highly dynamic and positively correlated with the AMP/ATP ratio-dependent activity of the kinase [15]. Binding of AMP to the gamma subunit of AMPK activates the complex by three mechanisms: (a) promotion of T-loop phosphorylation by upstream kinases, (b) inhibition of T-loop dephosphorylation by phosphatases and (c) allosteric activation of the kinase [12]. In plants, however, it is still unclear whether and how internal energy levels control the T-loop phosphorylation status of SnRK1. Although AMP also inhibits dephosphorylation of the T-loop in SnRK1, a phosphorylation-promoting effect was not observed for the plant kinase complex [16]. Nevertheless, recent studies in vegetative leaves of Arabidopsis showed dynamics in SnRK1 T-loop phosphorylation [1,7]. A drop in T-loop phosphorylation was observed within the first 20 min of an extended night period, which recovered immediately after 40 min of continued extended night treatment [1]. A positive correlation between a rising AMP/ATP ratio and increasing T-loop phosphorylation has so far only been observed during the first hour of submergence of whole At plants [7]. Surprisingly, a further increase of the AMP/ATP ratio after longer submergence was not reflected by enhanced T-loop phosphorylation, which, in fact, decreased again after the first hour of treatment [7]. While these experiments demonstrate that the T-loop phosphorylation status of plant SnRK1 changes under certain conditions, they also indicate that something in addition to the AMP/ATP ratio influences SnRK1 kinase activity. Furthermore, data obtained from assaying the activity of whole SnRK1 complexes [17] suggest that SnRK1 is insensitive to allosteric activation by AMP. Importantly, the residues which were found to be critical for AMP-dependent activation of AMPK are not conserved in SnRK1 [9]. Activation of SnRK1 via direct interaction with AMP, as observed for AMPK in animals, is therefore unlikely, which leaves us with the question of how the cellular energy status is relayed to SnRK1 in plants. In young tissues, this could in part be achieved by sugars acting as small signalling molecules. Trehalose-6-phosphate (T6P), which is thought to act as a sugar-availability signalling molecule, as well as glucose-6-phosphate (G6P) and glucose-1-phosphate (G1P), were found to have independent and potentially synergistic inhibitory effects on SnRK1 [18][19][20]. However, inhibition of SnRK1 activity by these sugar molecules requires an additional factor, which is only present in very young organs [19,20]. How energy availability could modulate T-loop phosphorylation and SnRK1 activity in mature tissues is still an open question. Here we report for the first time that AKIN10 activity is strongly dependent on the redox status in vitro and that this redox sensitivity is conferred by a single cysteine residue.
Although the orthologous cysteine residue was found to be involved in mammalian AMPK redox regulation too [21], our data suggest that the molecular mechanism behind it differs considerably between plant AKIN10 and mammalian AMPK. Combining our results with published data on redox dynamics, we discuss the relevance of our findings on SnRK1 redox dependency in the cellular in vivo context for plants. Purification of recombinantly expressed proteins Arabidopsis thaliana AKIN10 (splicing form 1/3), basic region leucine zipper 63 (bZIP63; splicing form 2), and calcium-dependent protein kinase 3 (CPK3) were amplified from cDNA, subcloned into different expression plasmids by ApaI/NotI digestion and ligation, and transformed into the Escherichia coli strains ER2566 or BL21 for protein expression. Inactive versions of AKIN10 (K48M and T198A) and cysteine-to-serine mutants (C156S, C200S and C156/200S) were made by site-directed mutagenesis. Primers for cloning and mutagenesis can be found in Table 1. The AKIN10 variants and bZIP63 were expressed from pGEX-4T (GE Healthcare, Little Chalfont, UK) and SnAK2 from pDEST15 (Thermo Fisher). Protein expression and purification were done as described in [22]. C-terminally truncated nitrate reductase (NIA2) was recombinantly expressed and purified as described in [23]. For redox-dependent kinase assays, elution buffers were exchanged for a basic kinase reaction buffer (50 mM Hepes, 2 mM MgCl2, pH 7.5) using PD MiniTrap G-25 columns (GE Healthcare). Proteins were then concentrated using Vivaspin 500 Centrifugal Concentrators (GE Healthcare) and stored at −80 °C after adding glycerol to a final concentration of 10%. In vitro kinase assays Proteins were recombinantly expressed in E. coli. The 'AIARA' peptide (AIARAASAAALARRR) was obtained from GenScript as a chemically synthesised peptide. The final concentration of the AIARA peptide in kinase assays was 100 µM. The kinase and substrate were incubated in kinase reaction buffer (50 mM Hepes, 2 mM MgCl2, 50 µM ATP, pH 7.5, plus 20 µM CaCl2 in assays with CPK3) supplemented with the indicated concentrations of either DTT, reduced glutathione (GSH) or H2O2. For radioactive assays, 1 µCi [γ-32P]ATP was included in each reaction. Incubation times and temperatures were adjusted to in vitro kinase activity: assays with CPK3 were incubated for 10 min at room temperature, while assays with GST-AKIN10 or GST-SnAK2 were incubated for 30 min at 30 °C or at room temperature (AIARA phosphorylation). For simulated oxidative burst assays, the respective kinases were mixed with their substrates in the presence of 3.5 mM GSH in kinase reaction buffer. Reactions were started by adding 1 µCi [γ-32P]ATP and the respective amount of H2O2 to reach the indicated H2O2 concentrations. For the sequential kinase assays presented in Fig. 4C, AKIN10 was first incubated in kinase reaction buffer containing 3.5 mM GSH and 50 µM ATP, either with or without SnAK2, for 30 min to allow for saturating phosphorylation of AKIN10 by SnAK2. Subsequently, bZIP63 was added to the reactions together with 1 µCi [γ-32P]ATP, once maintaining 3.5 mM GSH and once adding 3.5 mM H2O2 to the reaction in the presence of 3.5 mM GSH. The secondary reactions were run for 15 min. Reactions were stopped by the addition of 4× Laemmli buffer and boiling at 95 °C for 4 min. Proteins were then separated by SDS gel electrophoresis.
Radioactive gels were dried and exposed on a Storage Phosphor Screen (GE Healthcare) before reading the signals with a Typhoon 8600 Variable Mode Imager (Amersham/GE Healthcare, Little Chalfont, UK). Nonradioactive gels were used for western blotting and detection of phosphorylation using phospho-specific antibodies: α-P-14-3-3: Phospho-(Ser) 14-3-3 Binding Motif Antibody, #9601, Cell Signaling Technology (Danvers, MA, USA); and α-P-AMPK: Phospho-AMPKα (Thr172) Antibody, #2531, Cell Signaling Technology. HRP-conjugated secondary antibodies were from GE Healthcare. Signal quantification with ImageJ The '.gel' files from the Typhoon Imager were imported into FIJI (ImageJ; https://imagej.net/Fiji) [24], and regions of interest were defined to extract the signal intensity of each band. For GST-bZIP63, the whole band was quantified and background-subtracted. For assays with the 'AIARA' peptide, six equally sized circles were placed within each band (avoiding areas with strong background spots) and their intensities were summed, followed by background subtraction and subtraction of the signal intensity of the control sample without kinase. Signal intensities of each assay were normalised to the 10 mM DTT sample before calculating the mean intensities across experiments. Band shift assay for AKIN10 oligomerisation The GST-AKIN10 variants were incubated for 20 min at room temperature in kinase reaction buffer (50 mM Hepes, 2 mM MgCl2, pH 7.5) containing different concentrations of either DTT or H2O2 and mixed with Laemmli buffer either without or with 2-mercaptoethanol. Samples containing 2-mercaptoethanol were boiled for 5 min at 95 °C to improve the breaking of disulphide bonds. The proteins were then separated by SDS gel electrophoresis, followed by western blotting with an antibody against AKIN10 (α-AKIN10, AS10919; Agrisera, Umeå, Sweden). Protein sequence alignments Protein sequences of all Arabidopsis proteins were taken from TAIR (www.arabidopsis.org). Sequences of other plant AKIN10 proteins were obtained from Phytozome (https://phytozome.jgi.doe.gov/pz/portal.html) by blasting the Arabidopsis AKIN10 protein sequence. In cases with more than one hit, only the hit with the highest blast score or similarity to the Arabidopsis protein was considered. Sequences for human AMPKα1 and yeast SNF1α1 were downloaded from NCBI (https://www.ncbi.nlm.nih.gov/). Protein alignments were done in Geneious (Version 10) using the multiple alignment tool and selecting ClustalW alignment with default settings (Cost matrix: BLOSUM, Gap open cost: 10, Gap extend cost: 0.1). Box-and-line alignment schemes and short extracts of the text alignment were exported as images. Results In plants, redox changes of metabolites and proteins are an integral part of the signalling elicited by biotic and abiotic stimuli, as well as of the response to a changing energy balance [31][32][33]. A recent publication linked AMPK kinase activity to thioredoxin (Trx1)-dependent redox regulation via conserved cysteine residues in its kinase domain and T-loop [9,21]. It is therefore tempting to hypothesise that SnRK1, too, is connected to redox signalling processes. Indeed, we observed strong redox-dependent changes in AKIN10 activity in in vitro phosphorylation assays with different substrates, supporting this idea. We first compared the phosphorylation of the previously described SnRK1 target AtNIA2 [34] in the presence and absence of 1 mM DTT.
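The AIARA-band quantification described above (sum six circular ROIs per band, subtract background and the no-kinase control, normalise to the 10 mM DTT lane) reduces to a few array operations once the ROI intensities have been extracted. A minimal sketch in Python/NumPy follows; the intensity values and the helper function are hypothetical and only illustrate the arithmetic of the procedure, not the original FIJI/ImageJ macro.

```python
# A minimal sketch of the AIARA-band quantification arithmetic described
# above; all intensity values are hypothetical.
import numpy as np

def band_signal(circle_sums, background, no_kinase):
    # Sum the six equally sized circular ROIs placed within one band, then
    # subtract the background and the signal of the no-kinase control.
    return float(np.sum(circle_sums)) - background - no_kinase

# Hypothetical summed ROI intensities (arbitrary units) for one redox series
raw = {
    "10 mM DTT": band_signal(np.array([210, 190, 205, 198, 202, 195]), 60, 15),
    "1 mM DTT":  band_signal(np.array([130, 125, 128, 131, 127, 129]), 60, 15),
    "1 mM H2O2": band_signal(np.array([80, 78, 82, 79, 81, 77]), 60, 15),
}

# Normalise each lane to the 10 mM DTT sample, as done before averaging
# signal intensities across independent experiments.
norm = {lane: signal / raw["10 mM DTT"] for lane, signal in raw.items()}
print(norm)
```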
Remarkably, phosphorylation of the functionally important 14-3-3 binding site by AKIN10 required DTT (Fig. 1A). In contrast, phosphorylation of the same residue by AtCPK3 [35], a member of the SnRK/CDPK group of plant protein kinases, was redox-independent (Fig. 1A). To further substantiate these findings, we performed a series of kinase assays covering a broad range of redox conditions, from strongly reducing (10 mM DTT) to strongly oxidising (10 mM H2O2), with AtbZIP63, a recently well-described in vivo substrate of AKIN10 [2]. Again, we observed higher AKIN10 activity under reducing conditions. The signal was strongest with 10 mM DTT, dropped to ~60% with 1 mM DTT and stayed stable at ~35% between 100 µM DTT and 1 mM H2O2, before almost vanishing at 5-10 mM H2O2 (Fig. 1B, Fig. S1B). Importantly, the same kinase assay series with CPK3, another known bZIP63 kinase [2], showed no redox dependency (Fig. S1A,B). As AKIN10 targets partly different residues on bZIP63 than CPK3 does, we wanted to exclude the possibility that the observed activity changes in the AKIN10 kinase assays are due to redox-dependent conformational changes of the substrate. We therefore designed the AIARA peptide as a novel artificial AKIN10 substrate for our assays. This peptide is similar to the well-described AMARA [36] peptide substrate for AKIN10 but does not contain amino acids that are reducible/oxidisable under the tested conditions. These assays again confirmed the redox dependency of AKIN10 kinase activity and the redox insensitivity of CPK3 kinase activity (Fig. S1C,D). It should be noted, though, that the redox state/kinase activity correlation plots look slightly different for assays with bZIP63 (Fig. S1B) and the AIARA peptide (Fig. S1D): AKIN10 activity towards the AIARA peptide decreased more slowly and more gradually than observed for bZIP63. This indicates that, in the case of bZIP63, the substrate might also undergo redox-dependent conformational changes, which decrease its ability to be phosphorylated by AKIN10 but not by CPK3. We next wanted to identify the residues responsible for AKIN10 redox sensitivity. Shao et al. [21] reported that the residues C130, just before the active site, and C174, in the T-loop, are the main sites involved in AMPK redox regulation. Both cysteines are evolutionarily conserved, also throughout the plant kingdom (Fig. 2A, Fig. S2), which made them the most interesting candidates for our analysis. The relative sequence position of the first cysteine is even retained in the closest homologues of the SnRK1 alpha kinases, the CIPK family, but no longer in the more distantly related cluster of the CPK family [37] (Fig. S3). In the AKIN10 splice variant 2, AMPK C130 and C174 correspond to AKIN10 C156 and C200, respectively. To investigate their roles in AKIN10 activity, we mutated either one or both cysteines to serine residues, which are supposed to mimic the reduced form of cysteine. The C156S mutant showed reduced kinase activity compared to the wild-type (wt) AKIN10, which is not surprising given its close proximity to the active site. The redox sensitivity, however, was retained (Fig. 2B,C). In contrast, the C200S mutant retained most of the wt activity, but its redox sensitivity was abolished (Fig. 2B,C and Fig. S1C,D). The C156/200S double mutant combined both the overall reduction in activity and the insensitivity to the redox state (Fig. 2B,C).
Therefore, we concluded that only C200 is required for the observed dynamic redox-dependent changes in the kinase activity of AKIN10. This indicates a fundamental difference between the regulation of AKIN10 and AMPKα, as for AMPKα it was shown that mutating the orthologous cysteines did not affect intrinsic kinase activity [21]. For AMPK, it was found that under oxidising conditions, C130 and C174 participate in intermolecular S-S bond formation, leading to oligomerisation as well as to inhibition of AMPK activation by upstream kinases [21]. For this reason, we examined AKIN10 and found that it, too, formed oligomers in vitro (Fig. S4). In fact, only under strongly reducing conditions (10 mM DTT) was a substantial amount of AKIN10 present in its monomeric form. Under less reducing conditions, the majority of AKIN10 was present in high molecular weight complexes (Fig. S4). Surprisingly, neither C156 nor C200 seems to play a crucial role in AKIN10 oligomerisation, as the same behaviour was observed for the wt and all three C/S variants (Fig. S4). Combined with the fact that the C200S mutant is active even under oxidising conditions, when it is fully oligomerised, this suggests that the oxidation-induced oligomerisation of AKIN10 is not responsible for its reduction in activity. As C200 is in close vicinity to T198 in the T-loop (Fig. 3A), the substrate of the AKIN10-activating kinases SnAK1 and SnAK2 [13,[38][39][40][41]], we asked whether the C200 redox state would also influence AKIN10 phosphorylation by SnAK2. To this end, we first performed a series of kinase assays with SnAK2 and an inactive variant of AKIN10 (K48M) as substrate under different redox conditions. Using autoradiography for analysis, we observed that phosphorylation of AKIN10 by SnAK2 was highest under the reducing conditions of 10 mM DTT and gradually dropped towards more oxidising conditions (Fig. 3B), suggesting that reducing conditions could promote AKIN10 phosphorylation by its upstream kinases. Notably, under strongly oxidising conditions (5 and 10 mM H2O2), SnAK2 autophosphorylation was also reduced (Fig. 3B). This indicates that SnAK2 kinase activity, too, is redox-dependent, albeit to a much lower extent than observed for AKIN10. The use of a T198A mutant as a non-activatable AKIN10 variant revealed that SnAK2 is also able to phosphorylate residues other than T198 in AKIN10 (Fig. 3B). To obtain a more precise view of T-loop phosphorylation, we therefore analysed redox-dependent AKIN10 phosphorylation by SnAK2 via western blotting, using an antibody which specifically recognises phosphorylated T198. These assays essentially confirmed the results from the autoradiography, demonstrating that T198 is most efficiently phosphorylated under highly reducing conditions (Fig. 3B). Interestingly, with the C200S variant as a substrate, SnAK2 could efficiently phosphorylate AKIN10 at T198 under all conditions (Fig. 3C). The moderate decrease of AKIN10 T198 phosphorylation at the two highest H2O2 concentrations might be attributed to the overall diminished SnAK2 kinase activity under these conditions (Fig. 3B,C). These findings are surprising, as it was reported that AMPK activation by LKB1, the functional SnAK2 orthologue, was dependent on functional C130 and C174 side chains and was almost abolished when AMPK C130S/C174S variants were used in the kinase and interaction assays [21].
This is remarkable, as the T-loop of AKIN10 is among the most conserved features of the AMPK/SNF1/SnRK1 kinase family at the sequence level (Fig. 2A). As DTT is not a naturally occurring reductant in the plant cytosol, we wanted to make sure that our observations also apply to relevant in planta reductants such as GSH. We therefore tested the effect of GSH on AKIN10 activity and phosphorylation by SnAK2 in a series of kinase assays analogous to those shown in Figs 2B and 3B. Those assays confirmed that GSH, like DTT, is able to efficiently keep AKIN10 C200 in a reduced state, thereby resulting in full intrinsic AKIN10 kinase activity (Fig. S5A) as well as allowing the phosphorylation of T198 by SnAK2 (Fig. S5B). The notion that plant metabolism relies on a reducing cytoplasm [33,42] suggests that under normal conditions, SnRK1 is present in its reduced form. This prompted us to ask whether a simulated oxidative burst would lead to changes in AKIN10 activity. To this end, we set up a series of kinase assays in the presence of 3.5 mM GSH and added rising concentrations of H2O2. Indeed, with rising H2O2 concentrations, we observed an increasingly diminished intrinsic AKIN10 kinase activity (Fig. 4A), as well as a decrease in the phosphorylation of AKIN10 by SnAK2 (Fig. 4B). This is similar to what we observed in the sequential redox series. As the C200 redox state affects two distinct features of AKIN10, the intrinsic kinase activity and the activation site for its upstream kinase, we asked whether phosphorylation of T198 in the activation loop prior to an oxidative burst would protect AKIN10 from being inactivated by oxidation. Interestingly, this was not the case. Preincubation of AKIN10 with SnAK2 to fully phosphorylate T198 did not prevent a loss of AKIN10 kinase activity in response to H2O2 treatment (Fig. 4C). This suggests that oxidative bursts in the plant cell have the potential to modulate SnRK1 activity, regardless of its phosphorylation state, and could present a novel regulatory mechanism for SnRK1. Discussion One major question arising from this work is whether the described redox-dependent activity changes of AKIN10 would also be observable in the fully assembled AtSnRK1 complex. Structural modelling of the At SnRK1 subunits AKIN10, AKINβ1 and SNF4 on the basis of the mammalian trimeric AMPK and AMPKα crystal structures shows that C200 in the T-loop resides on the surface of the protein and is unlikely to be blocked by domains of the regulatory beta or gamma subunits (Fig. 3A). [Legend to Fig. 4: GST-AKIN10 (AKIN10, or the inactive AKIN10 K/M = K48M) was first mixed with either its substrate GST-bZIP63 (bZIP63) or its upstream kinase GST-SnAK2 (SnAK2) in kinase buffer containing 3.5 mM GSH. The kinase reactions were started by adding [γ-32P]ATP and rising concentrations of H2O2 as indicated; proteins were separated by SDS/PAGE and phosphorylated proteins detected via autoradiography. The coomassie brilliant blue (CBB)-stained gel is depicted below; the positions of the full-length proteins are indicated by arrowheads and degradation products of GST-AKIN10 by asterisks. (C) Redox dependency of in vitro AKIN10 activity before and after phosphorylation by SnAK2: GST-AKIN10 was first prephosphorylated (left panel) or not (right panel) by SnAK2 in kinase buffer containing 3.5 mM GSH; GST-bZIP63 and [γ-32P]ATP were then added, either in the continued presence of 3.5 mM GSH (left lane) or in the presence of 3.5 mM GSH + 3.5 mM H2O2 (right lane), and bZIP63 phosphorylation was analysed by autoradiography. As activated AKIN10 is ~25 times more active than nonactivated AKIN10, the autoradiographs were developed separately in order to avoid oversaturation of the image from activated AKIN10.] Also, there is no SH group, either from AKIN10 or from its beta or gamma subunits, in the vicinity of C200, so intramolecular disulphide bond formation involving C200 is highly unlikely to cause the observed effects. Comparison of structures obtained from modelling AtSnRK1 on the heterotrimeric AMPK complex and on AMPK alpha subunits alone revealed that the AKIN10 core elements, such as the T-loop and the active site, are highly similar in all analysed situations (Fig. S6). It was also shown that the regulatory beta and gamma subunits of the fully assembled SnRK1 complex do not interfere with AKIN10 activation by SnAK2 and that the kinase activities of the heterotrimeric SnRK1 complex and of AKIN10 alone rise to similar levels when activated by SnAK2 [43]. Furthermore, AKIN10 targets used in this study were also identified as in vivo targets of the SnRK1 complex [1,2], indicating that using the kinase subunit alone does not affect substrate recognition. Therefore, it is likely that our results obtained with the AKIN10 subunit are also valid for the heterotrimeric SnRK1 complex. Together, these data support our notion that the AKIN10 C200 oxidation state directly influences its kinase activity, as well as the ability of SnAK2 to 'recognise' and efficiently phosphorylate the AKIN10 T-loop, also in the fully assembled SnRK1 complex. The next question to be addressed is how the reported redox-dependent AKIN10 activity dynamics fit into the biological context. Plant metabolism generally relies on a reduced cytoplasm [33,42]. Under these conditions, we can assume that AKIN10 is present in a reduced form. However, local accumulation of reactive oxygen species (ROS) has been observed frequently in plants as a result of different environmental perturbations [44,45]. These ROS bursts indeed have the potential to oxidise cysteine SH side chains of different proteins [46,47]. In this respect, C200 oxidation of AKIN10, which, as we have shown, leads to its inactivation regardless of its phosphorylation state, may represent part of a mechanism inactivating AKIN10, allowing the plant to terminate the AKIN10-dependent stress response phase. A pending question connected to this proposed inactivation mechanism is whether phosphatases dephosphorylating the AKIN10 T-loop [48] would also be affected by the redox state of C200. To underline the importance of our findings for AKIN10 regulation in vivo, it would be essential to show that critical AKIN10 cysteine side chains are dynamically reduced/oxidised in vivo as well. In fact, first evidence that AKIN10 could be regulated by oxidation in vivo comes from cell culture experiments, in which an oxidation-specific interaction of AKIN10 with a Yap1-based probe for the detection of sulfenylated cysteine side chains in proteins was observed under oxidative stress (Frank van Breusegem, personal communication). However, it will be most important to identify which residues of AKIN10 exhibit dynamic oxidation states in vivo. Suitable methods are being developed [46,47,49], and applying them specifically to AKIN10 will shed light on the extent of AKIN10 activity modulation at different cytosolic redox states.
In case the oxidation of C200 in AKIN10 takes place in vivo, one may also wonder which oxidation state it will reach: sulfenylation (-SOH), sulfinylation (-SO2H) or sulfonylation (-SO3H). -SO3H is regarded as irreversible in biological systems, whereas -SO2H was reported to be reversible in some cases, aided by enzyme-catalysed reactions [50]. In this context, it would be interesting to know whether, in plants, specific interactions between AKIN10 and thioredoxins occur, which would lead to the reduction of AKIN10, similar to what was reported for AMPK regulation in mammals [21]. An interesting result of this study is that neither C156 nor C200 seems to be involved in the redox-dependent oligomerisation of AKIN10, which is contrary to what was observed for their orthologous residues C130 and C174 in mammalian AMPK [21]. For AMPK, oligomerisation via C130-C174 disulphide bond formation was even proposed as the mechanism of oxidation-dependent inactivation [21]. This we can exclude for AKIN10, which implies that the mechanism of oxidation-dependent inactivation of AKIN10 must be a different one. One puzzling fact about the redox modulation of mammalian AMPK is that in one study it was found to be deactivated under oxidising conditions [21], while in other studies it was reported that oxidative stress leads to activation of AMPK via GST-mediated S-glutathionylation of the catalytic AMPK alpha and the regulatory AMPK beta subunits [51,52]. For AKIN10, it seems unlikely that S-glutathionylation has a direct influence on the C200-mediated redox-dependent activity changes. In our assays, the C200S mutant of AKIN10 shows similar kinase activity under both reducing (DTT/GSH) and oxidising (H2O2) conditions. This suggests that the reduced cysteine is sufficient for full AKIN10 activation. Furthermore, the decrease of AKIN10 activity by H2O2 in the oxidative burst assays (Fig. 4A) speaks against an activation of AKIN10 by spontaneous S-glutathionylation of C200. A deactivation of AKIN10 by S-glutathionylation of C200 is equally unlikely, as AKIN10 activity was reduced under oxidising conditions in assays not containing any glutathione (Fig. 2B). Still, at this point, we cannot exclude that SnRK1 activity may be regulated via GST-catalysed S-glutathionylation on cysteines other than C200 in vivo. A scenario like that in mammals can even be imagined, where different redox mechanisms modulate AMPK activity depending on the cell type in which the kinase is located [21]. However, to answer these questions, further experiments targeting possible S-glutathionylation dynamics in vivo are necessary. From the presented data, we can draw the conclusion that AKIN10 activity has the potential to be redox-modulated in vivo. An estimation of the extent to which SnRK1 activity is redox-controlled in the cell is currently difficult, as the subcellular dynamics of H2O2 signalling are only starting to be understood [44]. Along this line, a potential role of 'redoxosomes', a well-established component of mammalian redox signalling [53], has only been superficially addressed in plants so far. On the other hand, GSH was demonstrated to exhibit more subcellular dynamics than previously assumed [54]. Additionally, the subcellular localisation dynamics of SnRK1 itself add to the complexity of this question. For yeast SNF1, transient complex formation at mitochondria has been described recently [55].
Although it is accepted that the regulatory beta subunits are responsible for AKIN10 partitioning between nucleus and cytoplasm [10], detailed knowledge of the location of transient signalling complex formation, as has been observed in yeast, is still missing in plants. However, our data provide a solid basis for designing the further experiments necessary to elucidate the full extent of the described redox-modulated AKIN10 activity in the plant itself. Supporting information Additional Supporting Information may be found online in the supporting information tab for this article: Fig. S1. Arabidopsis AKIN10 but not CPK3 kinase activity is redox-sensitive. Fig. S2. Evolutionary conservation of AKIN10 C156 and C200. Fig. S3. Positional conservation of C200 and C156 in increasingly distantly related kinase families compared to SnRK1. Fig. S4. AKIN10 redox-dependent oligomerisation. Fig. S5. AKIN10 C200 can be reduced by GSH in vitro. Fig. S6. Core functional elements of AKIN10 are likely to be structurally similar in the SnRK1 heterotrimeric structure and the AKIN10 monomer.
6,876.8
2017-10-04T00:00:00.000
[ "Biology", "Environmental Science" ]
PRODUCTIVITY IMPROVEMENT IN THE UTILIZATION OF DOMESTIC AND IMPORTED INPUTS IN RESOURCE AND NON-RESOURCE-BASED INDUSTRIES: 1983 – 2005 : The focus of the study is to examine the improvement in productivity in the utilization of intermediate inputs in resource- and non-resource-based industries of the Malaysian manufacturing sector. Since improvement in productivity can determine how well an input performed, our main interest rests on whether there exists any discrepancy between the performance of domestic and imported intermediate inputs. To undertake such an analysis, we employed various publications of the Malaysian Input-Output Tables. The input-output coefficients of domestic and imported inputs were then simulated by using the commodity technology model. Three main findings were obtained from this study. Firstly, non-resource-based industries showed a higher improvement in productivity for both inputs compared to resource-based industries. Secondly, this study revealed that resource-based industries improved productivity relatively more in imported input use than in domestic input use. Thirdly, the number of industries that were efficient in utilizing imported input was higher in both resource- and non-resource-based industries. Results from this study show that imported intermediate inputs are still important in the production of manufactured products, even though many incentives have been given in order to increase the efficiency of domestic input use. Introduction Since Malaysian independence in 1957, various economic policies, especially on import substitution, were undertaken with the intention of reducing the importation of goods, which for the most part comprised material inputs. As such, the Import Substitution Policy (1958-1967) was implemented in particular to reduce the importation of goods mostly comprising consumer goods, which were produced by foreign companies in the country. The policy was subsequently followed by Phase II of the Import Substitution Policy (1981-1985), emphasizing the reduction of imported inputs used in the manufacturing sector (Alavi, 1996). This policy was undertaken specifically to develop local industry, especially the small and medium-scale industries (SMIs), while at the same time handing out incentives to foreign companies with the purpose of encouraging greater utilization of domestically produced inputs. In addition, the Investment Incentive Act (1986) provides incentives to foreign companies that utilize domestic inputs in their production. In general, it is hoped that the combination of these efforts will increase the deployment of domestic inputs in chains of production. Thus, in supporting efforts to enhance the utilization of domestic inputs, the Malaysian government, in the course of the Sixth Malaysia Plan (1991-1995), entrusted a new institution known as the Malaysian Industrial Development Authority (MIDA) with invigorating the manufacturing sector, especially the resource- and non-resource-based industries (Malaysia, 1991). MIDA's industrial strategy served as a conduit for reducing dependence on imported material inputs and, in turn, encouraging the use of domestic material inputs. Implicitly, it works as a strategy for promoting domestic production and exports by both local and foreign companies with a high content of domestic inputs. The use of domestic inputs by resource-based industries and
non-resource-based industries is actually supported by several factors. The most important factor is to increase domestic value-added production in both resource- and non-resource-based industries. Furthermore, these industries need to create strong linkages between economic sectors, especially the manufacturing and agricultural sectors. In addition, these efforts will create linkages between foreign and local industries, particularly the SMIs. Finally, domestic input use can improve the deficit in the current account of the balance of payments, for the most part through reduced dependency on imported inputs. Realising the above factors, the purpose of this paper is to examine the relative efficiency of domestic inputs and imported raw materials used in industries of the manufacturing sector, which is classified into resource- and non-resource-based industries. Material inputs, sometimes referred to as intermediate material inputs, are the major 1 sources of inputs in Malaysian manufactures. In pursuing this issue, one has to bear in mind that the utilization of domestic input is associated with resource-based industries, and imported input with non-resource-based industries. The findings of this study show which industrial base utilized domestic and imported inputs efficiently. In addition, this study also seeks to analyse which subsector of the manufacturing sector, classified into resource- and non-resource-based industries, had more reliance on domestic input or imported raw materials between the periods of study. Therefore, the purpose of this paper is to examine the productivity improvement of domestic and imported inputs used among the subsectors of the manufacturing sector, classified into resource- and non-resource-based industries. This study uncovered findings as to whether inputs were used productively or efficiently. In addition, this study also analysed which subsector of the manufacturing sector significantly utilized more inputs during the phase of the study. This paper is organized into six sections, beginning with the introduction in section 1, followed by section 2, which discusses the related indicators of the manufacturing sector that support the issue of the study as presented in section 1. Section 3 offers the theoretical framework of the study. Section 4 outlines the model used in this study, data collection and the input-output aggregation process. Section 5 presents the results of the study and discusses its findings. Finally, section 6 provides conclusions and some policy implications related to the study.
Changes in Economic Structure As clearly highlighted in Table 1, the importance of the agricultural sector is shrinking in terms of its share of Gross Domestic Product (GDP) and exports. In contrast, the manufacturing sector has gained importance in terms of the average annual rate of growth, share in GDP and percentage of exports. It should be noted that within the agricultural sector, diversification had taken place, thereby enabling a reduction in the traditional importance of rubber exports in the 1970s in favour of palm oil, timber and cocoa in the 1980s and the 1990s. Similarly, the importance of tin in the mining sector had been replaced by the production of petroleum and gas. The share of manufacturing in exports has increased since 1970. From 2000 to 2005, its share increased from 60.4% to more than 80.0%. Amongst the manufacturing industries, the electrical and electronics subsector contributes more than 70.0% of Malaysia's overall exports (Malaysia, 2006). The Performance of Export and Import The role of foreign direct investment (FDI) has an undeniably marked importance in the context of the Malaysian economy. Malaysia actually experienced substantial FDI inflows, especially in the manufacturing sector. These have unfortunately been declining in the later period, especially after China launched its world trade transition economy. For resource-based industries, since the majority of these sectors serve domestic-oriented markets, although some of them are also export-oriented industries, such as the rubber, wood product, paper product and plastic product industries (see Appendix 2), it is important to analyse the utilization of domestic intermediate input, which shows that resource-based industries are expected to create a higher value added for manufactured products. For resource-based industries that serve export-oriented markets, these industries are able to maximize the potential output produced, and a high domestic input content in exports may reduce a high deficit in the current account balance. (GDP growth: 1980-1985: 1978=100; 1990-2005: 1987=100.) Unrestrained and high importation of raw materials for the chains of production in non-resource-based industries can exert pressure on a country's current account. In fact, the deficit in the current account has been a major concern, particularly since imported raw materials create huge leakages and a heavy financial burden in terms of acquiring machines, parts and technology. Although the trade account was in surplus from 1985 to 2005, Malaysia experienced a continuous deficit in its current account balance from 1985 to 1995 (see Table 3); a surplus in the current account balance was only exhibited in the later period, from 1998. Moreover, the current account deficit increased from -2.1% in 1990 to -9.7% in 1995. Total imports as a percentage of total exports remained above 75.0% over the period 1980 to 2005, with the highest value, 99.9%, recorded in 1995. Imports increased in parallel with exports. Since manufactured goods contribute a large share of Malaysia's exports, the exports of the manufacturing sector may reflect a high content of imported raw materials. As shown in Figure 1 and Figure 2, only two subsectors of the resource-based industries indicated imported input use of more than 50%, while four subsectors were observed in the non-resource-based industries. These are the
subsectors of chemicals and other chemical products for resource-based industries, and the subsectors of basic metal products, non-electrical machinery, electrical machinery and motor vehicles for non-resource-based industries (Source: Malaysian Input-Output Tables 2005). Theoretical Framework The above relative efficiency appraisal relates to the testing of the new growth theory, especially by proponents once popularly headed by Kaldor. He analysed the factors of production from the viewpoint of how resources contribute towards output in the economy. Kaldor argued that in many areas manufacturing industries grow faster than agriculture, as assumed in the embodiment theory, in which both physical and non-physical elements work in combination to increase output. In productivity theory, the efficiency of a factor of production is related to the concept of efficiency. While productivity is the amount of output produced relative to the amount of resources used, efficiency is the value of output relative to the costs of the inputs used. A change in the price of inputs might lead a firm to change the mix of inputs used, in order to reduce input costs and improve efficiency, without actually increasing the quantity of output relative to the quantity of inputs. A change in technology, however, might allow the firm to increase output with a given quantity of inputs; such an increase in productivity would be more technically efficient, but might not reflect any change in allocative efficiency. The Input-Output Model In this study, the computation of the technical coefficients is adopted from the Commodity Technology Model (CTM). Unlike the CTM, the conventional model is the well-known one proposed by Leontief (1953), which uses a single table of input-output matrices. The transaction table 2 in the conventional model presumes that commodities and sectors are classified in the same way. Thus, the technical coefficient of that model is called the direct technical coefficient, a_ij = x_ij / x_j, where x_ij = inputs from sector i used to produce outputs in sector j, and x_j = total inputs of sector j, which is equal to the total outputs in the j-th row of the input-output table. The CTM, by contrast, employs the basic tables of the input-output matrices, which provide a procedure compatible with a modern input-output table. The basic tables are separated into two subtables, the 'supply' and 'use' tables (SUT), as suggested by many authors (Raa & Kop Jansen, 1990; Viet, 1986). The model recognises that sectors use a multitude of inputs to produce an output. Therefore, the separate input and output matrices that already exist in the SUT need not be forced into a single matrix; the multiplication of the 'use' and 'make' matrices will result in a symmetric table, so the SUT can be used directly in input-output analysis (Raa, 2004). Moreover, it is preferable to keep the raw 'use' and 'make' matrices separate, without purified or otherwise manipulated industries. The technical coefficients, A_c, of the CTM employ the 'supply' and 'use' tables (SUT), as presented in equation (ii): U = A_c V^T (ii), where U denotes the 'use' table and V denotes the 'supply' table.
The 'use' table is also known as the input matrix, which shows the consumption of intermediate inputs by industries, and the 'supply' table is known as the output or 'make' matrix 3. In the System of National Accounts, the 'use' table records the inputs used by industries, where u_ij shows the total input of commodity i consumed by industry j. The 'make' matrix records the primary and secondary products produced by each industry, where v_ij shows the total output of industry i producing commodity j; in other words, commodity j is produced by industry i (Raa and Wolff, 1991). If the 'use' table has dimension products by industries and the 'make' table has dimension industries by products, then V^T (transposed) has dimension products by industries. The input-output coefficients postulate proportionality between the inputs collected from the 'use' table and the outputs collected from the 'supply' table, which needs to be transposed. Solving equation (ii) by matrix operations, we obtain the technical coefficients derived from the CTM (Raa, 2004) as equation (iii): A_c = U (V^T)^(-1) (iii), where T and -1 represent the combined operations of transposition and inversion of the indicated matrix, and the subscript c denotes the commodity technology model. The CTM, the best selection from all the models for the computation of technical coefficients, fulfils all the axioms of input-output analysis (Raa, 2004). The choice of the model is made on the basis of reasonable assumptions. The model assumes that each commodity has a unique input structure, irrespective of the sector of fabrication; the number of activities must equal the number of commodities. The model also assumes that each commodity is produced by the same technology, irrespective of the producing industry. In this case, industries are considered independent combinations of outputs in sector j, each with their separate input coefficients. In this study, the U matrix, referred to as the input matrix, is classified into two: the domestic input matrix (U_d) and the imported input matrix (U_m). The change in input coefficients for each input, domestic and imported, can be presented as in equation (v): ΔA_cij = a_cij(t1) - a_cij(t0) (v), where ΔA_cij = change in input coefficients; a_cij = input coefficients from sector i to sector j, or the intermediate inputs of the i-th sector used by the j-th sector (i, j = 1, 2, 3, ..., n); and t1 and t0 = the terminal year and the initial year.
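To make the computation concrete, the sketch below implements equations (ii)-(v) in Python/NumPy on a toy two-commodity, two-industry example. All numbers are purely illustrative, and the output-weighted averaging shown is one reasonable reading of the weighting described in the text, not the authors' exact procedure.

```python
# A minimal sketch (toy data) of the CTM computation A_c = U (V^T)^-1 for a
# 'use' matrix U and a 'supply' ('make') matrix V, followed by the change in
# input coefficients of equation (v).
import numpy as np

def ctm_coefficients(U, V):
    # U: 'use' table (commodities x industries);
    # V: 'supply'/'make' table (industries x commodities).
    return U @ np.linalg.inv(V.T)

# Toy 2-commodity / 2-industry domestic-use and supply tables
U_d_t0 = np.array([[30.0, 10.0], [5.0, 20.0]])    # domestic use, initial year t0
V_t0   = np.array([[100.0, 5.0], [10.0, 80.0]])   # supply (make), t0
U_d_t1 = np.array([[28.0, 12.0], [6.0, 18.0]])    # domestic use, terminal year t1
V_t1   = np.array([[110.0, 6.0], [9.0, 90.0]])    # supply (make), t1

A0 = ctm_coefficients(U_d_t0, V_t0)
A1 = ctm_coefficients(U_d_t1, V_t1)

# Equation (v): a negative entry signals a productivity improvement
# (less input needed per ringgit of output) in that input use.
dA = A1 - A0

# Output-weighted average of the proportionate change per industry
x_out = V_t1.sum(axis=1)                 # gross output by industry, t1
col_change = (dA / A0).mean(axis=0)      # mean proportionate change per industry
weighted_avg = np.average(col_change, weights=x_out)
print(dA, weighted_avg)
```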
Equation (v) estimates the changes in domestic and imported inputs used to produce one unit of output over time, with reference to the sub-periods of the study. The changes in both domestic and imported input use can measure the efficiency of the respective input in producing one unit value of output. The coefficient shows the requirement of input from sector i used in sector j in order to produce one ringgit of output in sector j. Therefore, column-wise, the matrix A presents the amounts of inputs required to produce one unit value of output in Malaysian ringgit. The input coefficient also reflects the unit cost per ringgit of output. The results for the change in input coefficients are expected to carry both positive and negative signs. In general, a negative sign shows an improvement in the productivity of the input used; this also means that the input is utilized efficiently. A positive sign, on the other hand, reveals a deterioration of productivity. Furthermore, the change in input coefficients, for both domestic and imported inputs, is weighted by output to obtain the weighted average of the proportionate change in input coefficients for each sub-period of the study. Data Sources and Input-Output Aggregations This study employs data from the Malaysian I-O Tables for 1983, 1987, 1991, 2000 and 2005. Results and Discussion Based on the classification in Appendix 1, resource-based industries comprise 22 subsectors of the manufacturing sector, while non-resource-based industries consist of 9 subsectors. In this study, the improvement in productivity in producing one unit value of output measures the efficiency of the input used, for both domestic and imported inputs. As shown in Table 4, for non-resource-based industries, it was found that productivity improved relatively more compared to resource-based industries when using domestic intermediate input, as indicated by 40.1% and 35.5% during the sub-periods 1983-1987 and 1987-1991, respectively. The result is similar in the case of imported intermediate input use, for which the non-resource-based industries also indicated a high percentage of productivity improvement: 50.4%, 25.3% and 367% during the sub-periods 1983-1987, 1987-1991 and 1991-2000, respectively. For resource-based industries, this study revealed that productivity improved relatively more for imported input than for domestic input use, accounting for 22.6% and 19.5% during the sub-periods of 1983-1987 and 1991-2000. For the sub-period of 1987-1991, these industries registered a lower percentage of productivity improvement, only 0.7% for domestic input use and -14.7% for imported input. The lower percentage of improvement during this sub-period is due to the economy emerging from the 1985 recession. The recovery of the economy can be seen from the percentage increase in productivity improvement in both domestic and imported input use in the sub-period of 1991-2000.
During the sub-period of 2000-2005, resource-based industries indicated 12.9%, while non-resource-based industries accounted for 12.7%, of productivity improvement in domestic input use. The finding shows that the resource-based industries had a lower productivity improvement in domestic input use among the subsectors, even though the number of subsectors in resource-based industries is actually larger than in non-resource-based industries. For imported input, both industry groups showed percentages that decreased to 2.7% and 2.4%, respectively, for resource- and non-resource-based industries. This was lower due to the global economic slowdown during the period 2000 to 2005, and the use of imported input in resource-based industries dropped from 38.4% to 22.0% (see Appendix 2). In terms of total input, non-resource-based industries indicated 46.1%, 32.9% and 30.0% during the sub-periods of 1983-1987, 1987-1991 and 1991-2000, respectively. Resource-based industries indicated 13.0%, 8.0% and 14.3%, respectively. These are relatively lower than for non-resource-based industries. The lower percentages for resource-based industries highlight that these industries still have room for improvement, especially in terms of domestic input use. The improvement of domestic input use will increase the value added of the domestic input content. Meanwhile, local industry produces less wastage of domestic resources, and domestic input use will also reduce the dependency on non-resource-based industries. However, for the sub-period of 2000-2005, productivity in imported input use improved by only 4.3% for non-resource-based industries and 6.7% for resource-based industries. The productivity improvement for both domestic and imported inputs can be related to the larger contribution of intermediate input as the major component of growth in total factor productivity (TFP) for the manufacturing sector. This implies that the growth in TFP of the manufacturing sector is dependent on input growth. In other words, growth in TFP is actually led by an 'input-driven' economy. This might be true, as other studies found that the miracle of the East Asian economies may be characterized by 'input-led' growth (Krugman, 1994; Young, 1994b; Kim & Lau, 1994). These studies revealed that the Korean economy's catch-up process with the industrial nations in its late industrialization was predominantly input-led growth. Past studies on growth with respect to Malaysia also conclude that input growth, particularly intermediate input, makes a larger contribution to output growth (Okamoto, 1994; Maisom, Mohd Ariff & Nor Aini, 1993; Tham, 1996; 1997; Noriyoshi, Nor Aini, Zainon, Rauzah & Mazlina, 2002).
The larger contribution of intermediate input to growth in manufacturing output was also obtained in several other studies. Tham (1996) found that, in general, the average value shares of intermediate input in Malaysian manufacturing output growth between 1986 and 1990 were the highest among all the inputs. Tsao (1985) found the same result for Singapore between 1970 and 1979, where the average value shares of intermediate input in output growth were the highest among all inputs. Similarly, Nishimizu and Robinson (1984) indicated the same results for Japan between 1955 and 1973, Korea (1960-1977), Turkey (1963-1976) and Yugoslavia (1965-1978). In the same way, the study by Gan, Wong and Tok (1993) on the Singaporean manufacturing sector yielded a similar result, in which the major source of output growth between 1986 and 1990 was the growth in material input. Moreover, in all these studies, input growth contributed relatively more to output growth. Table 5 shows the number of subsectors efficient in domestic and imported input use among resource- and non-resource-based industries over the four sub-periods of the study. For non-resource-based industries, the percentages of subsectors with relatively improved productivity in domestic input use were 70.0%, 80.0%, 40.0% and 88.9%, while for imported input they were 80.0%, 60.0%, 80.0% and 11.1%, respectively. The findings show that non-resource-based industries were rather efficient in using both domestic and imported input during the study, except for imported input in 2000-2005. In the case of resource-based industries, about 85.7% of the subsectors improved in imported input use in the sub-periods of 1983-1987 and 1991-2000, respectively. The results show that the number of subsectors that improved in imported input use is relatively larger than the others, even though the share of imported input use is less than 40.0% of total input 4 (see Appendix 2). The percentage of subsectors that improved in domestic input use accounted for 38.1%, 52.4% and 47.6% during the sub-periods of 1983-1987, 1987-1991 and 1991-2000, respectively. For these three sub-periods, this study implies that the percentage of subsectors that improved in domestic input use is relatively low, even though the average share of domestic input use among the subsectors is relatively high, at more than 60% of total input. However, the percentage increased to 81.8% for the sub-period of 2000-2005. The improvement in productivity in domestic input use can be seen in the processed rubber, rubber products, furniture and fixtures, other chemical products and plastic products industries (see Appendix 3). The increase in the percentage of subsectors improving in domestic input use implies that domestic input has gained improvement in productivity; domestic input has received priority for utilization among manufacturers in resource-based industries.
A previous study found that resource-based industries were more export-oriented than non-resource-based industries during the period 1975-1994. In addition, almost 70 per cent of the manufacturing industries were highly dependent on imported input, and almost all of these industries were non-resource-based (Alavi, 1999). The results also revealed a positive relationship between export share and imported input content for non-resource-based industries, whereas the relationship was negative for resource-based industries. Surprisingly, the findings show that domestic-oriented industries were generally more dependent on imported inputs than export-oriented industries.

Figures 3 to 6 plot the subsectors of resource-based and non-resource-based industries by domestic and imported input use; a subsector improved in productivity if it is located below the horizontal line. For the sub-period 1983-1987, most subsectors of resource-based industries were located in the improvement area in terms of imported input use. Resource-based industries improved relatively in domestic input use during the sub-period 1987-1991, as did non-resource-based industries in imported input use. The improvement in domestic input use during 1987-1991 may be attributable to the economic recession of 1985. For the sub-period 1991-2000, both industries showed greater relative improvement in imported than in domestic input use, similar to the first sub-period of the study. Conversely, during the sub-period 2000-2005, both industries improved relatively more in domestic than in imported input use. This substantial progress shows that domestic input is used efficiently in both industries, even though the content of imported input has remained at 40.0% for resource-based and 50.0% for non-resource-based industries. The improvement is contributed by a majority of the subsectors in resource-based industries, except the beverages, wood products, paper and printing, and paint and lacquers industries; a similar contribution can be seen in most non-resource-based industries, except textiles (see Appendix 3).

Conclusion and Policy Implications

Based on the study, three main findings need to be highlighted. Firstly, non-resource-based industries have shown a higher percentage of subsector improvement in using both domestic and imported intermediate inputs. Secondly, resource-based industries show a high percentage of productivity improvement in imported input use, while improvement in domestic input use was rather low during the first three sub-periods of the study. This reflects that resource-based industries are relatively less efficient in using domestic inputs than imported inputs: they have shown productivity improvement in imported input use, but not in domestic input use. Thirdly, the number of industries that improved in using imported input is higher in both resource-based and non-resource-based industries, indicating that both have used imported input more productively, while resource-based industries have not used domestic input efficiently.
The three main results of this study indicate, firstly, that non-resource-based industries rely substantially on imported raw materials. Heavy reliance on imported raw materials has an adverse effect on the country's balance of payments. As reported in the Annual Report of Bank Negara (2005), imported raw materials constituted 20% of the total raw materials utilized in resource-based industries, while in non-resource-based industries the share can be as much as 60%. Most leading firms in the non-resource-based industries are in fact multinational FDI companies. It is thus no surprise that the leading firms of non-resource-based industries such as electronics and electrical machinery have a particularly high content of imported raw materials, as high as 70%. It is also interesting to note that the share of the economy's total exports held by non-resource-based industries is phenomenal (more than 70.0%) compared with that of resource-based industries, which hovers below 20.0% (Bank Negara, 2006). Over-dependence on imported raw materials is typically a characteristic of multinational companies operating in host countries, engaging in processing industries that import unfinished components and export finished products (Tsao, 1985). This results in weak linkages between indigenous industries and foreign companies; in contrast, linkages within the multinationals' worldwide network of plants tend to be stronger.

Secondly, the number of subsectors in resource-based industries that are relatively efficient in domestic input use is smaller than for imported input over the period of the study, although it increases in the later period. At the same time, non-resource-based industries have also shown an increasing trend in the number of subsectors relatively efficient in using domestic input. Both resource-based and non-resource-based industries, however, show a higher number of subsectors relatively efficient in using imported inputs. This pattern may arise because resource-based industries, despite their local sources of domestic input, did not use domestic input as productively as imported input, leading to probable underutilization of domestic input, while non-resource-based industries are highly dependent on imported input.

Thirdly, in both resource-based and non-resource-based industries, imported raw materials are used more efficiently than domestic raw materials, in terms of the number of industries efficient over the period of the study. In resource-based industries in particular, domestic raw materials are not used as efficiently as imported raw materials. It is interesting to note that, although resource-based industries source their material inputs domestically, Malaysian manufacturers utilize their minor material inputs more efficiently than their major ones. This implies that Malaysian manufacturers did not utilize domestic input efficiently, perhaps because local input is in abundant supply. In contrast, multinational companies have shown efficiency in both domestic and imported input use in their production.

Notes. 1. Zainal Aznam Yusof and Phang (1994) demonstrated that the largest component of cost in the Malaysian manufacturing sector was the cost of raw materials. 2. The transaction table refers to the table of intermediate inputs. 3. The 'use' matrix refers to the use of commodities by the producing industry, and the 'make' matrix shows the quantities of each commodity made by each industry.
Figure 1 shows that resource-based industries registered more than 60.0% of the share of domestic input and less than 40.0% of imported input, except in 1991. In contrast, non-resource-based industries have shown less than 50.0% of domestic input and more than 50.0% of imported input used (see Appendix 2). This implies that resource-based industries are essentially sourced from domestic inputs, while non-resource-based industries rely on imported input, and that FDI in Malaysia is concentrated in non-resource-based industries.

Figure 1. Share of domestic and imported inputs used among subsectors of resource-based industries, 2005.
Figure 2. Share of domestic and imported inputs used among subsectors of resource-based industries, 2005.
Figure 3. Distribution of subsectors in resource- and non-resource-based industries in domestic and imported input used, 1983-1987.
Figure 4. Distribution of subsectors in resource- and non-resource-based industries in domestic and imported input used, 1987-1991.
Figure 5. Distribution of subsectors in resource- and non-resource-based industries in domestic and imported input used, 1991-2000.
Figure 6. Distribution of subsectors in resource- and non-resource-based industries in domestic and imported input used, 2000-2005.

Source. (1) and (2) are estimated from equation (4). Note. (1) and (2) are estimated by average proportionate change. (i) A negative sign shows improvement in efficiency and vice versa. (ii) Figures in parentheses indicate percentages. (iii) Average proportionate changes of other sectors are equal to the weighted average of the proportionate changes of other sectors.

Malaysia is still higher compared to other ASEAN countries, with the exception of Singapore. The United Nations Conference on Trade and Development reported that, out of USD 37.1 billion of FDI inflow into the South East Asian region, Malaysia had received USD 3.9 billion in 2005 (UNCTAD, 2006). As most FDI is involved in non-resource-based industries, these industries contribute a high share of the export of manufactured goods. For this reason, these industries can also be categorized as export-oriented industries (see Appendix 1). As shown in Table 2, non-resource-based industries registered a figure of 79.

Table 2. Share of Export in the Manufacturing Sector (%).
Table 3. Notes: 2. Intermediate goods (food and beverage mainly for industry, industrial supplies, metal, fuel and lubricants, parts and accessories of capital goods (except transport equipment)). 3. Real … published by the Department of Statistics, Malaysia.

This study is classified into the sub-periods 1983-87, 1987-91, 1991-2000 and 2000-05. The basic table of the Malaysian I-O is utilized, which includes the basic tables of the domestic input, imported input and output matrices. The basic table of imported input is obtained as the difference between the basic table of total input and the basic table of domestic input. The existing framework of national accounts has governed the potential maximum size of the Malaysian I-O tables. However, as this study focuses only on the manufacturing sector, the I-O tables were reduced to 32 by 32 industries/commodities, encompassing all 31 industries of the manufacturing sector and a 'single' sector representing 'other sectors', which includes services, agriculture, mining, construction and the rest of the public sectors.

Table 4. Productivity Improvement by Input Use (%).
Table 5. Number of Subsectors Improved by Input Use (%).
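The derivation of the imported input table described above is a simple matrix subtraction. A minimal sketch in Python, using hypothetical 3-sector numbers rather than the actual 32-by-32 Malaysian I-O tables:

```python
import numpy as np

# Hypothetical 3-sector input-output (I-O) tables; the study's actual
# tables are 32 x 32 (31 manufacturing industries + one 'other sectors').
total_input = np.array([[20.0, 5.0, 3.0],
                        [4.0, 18.0, 6.0],
                        [2.0, 7.0, 15.0]])    # basic table of total input
domestic_input = np.array([[15.0, 3.0, 2.0],
                           [3.0, 12.0, 4.0],
                           [1.0, 5.0, 11.0]])  # basic table of domestic input

# Imported input table = total input table - domestic input table
imported_input = total_input - domestic_input

# Share of domestic and imported input in each industry's total input use
col_totals = total_input.sum(axis=0)
domestic_share = domestic_input.sum(axis=0) / col_totals * 100
imported_share = imported_input.sum(axis=0) / col_totals * 100

for j, (d, m) in enumerate(zip(domestic_share, imported_share), start=1):
    print(f"industry {j}: domestic {d:.1f}%, imported {m:.1f}%")
```

The column-wise shares computed this way correspond to the domestic and imported input use shares reported per subsector in Appendix 2.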
A security method of hardware Trojan detection using path tracking algorithm Recently, the issue of malicious circuit alteration and attack has drawn more attention than ever before due to the globalization of IC design and manufacturing. Malicious circuits, also known as hardware Trojans, have been found able to degrade circuit performance or even leak confidential information; accordingly, it is an issue of immediate concern to develop detection techniques against hardware Trojans. This paper presents a ring oscillator-based detection technique to improve hardware Trojan detection performance. A circuit under test is divided into a great number of blocks, path assignment is optimized using a path tracking algorithm, and a high coverage is reached accordingly. It is now recognized that a designed circuit can be maliciously altered during the manufacturing process and work in an unintended way, giving rise to a security concern. Malicious circuit alterations may threaten the operation of household appliances, public transportation systems and financial infrastructures, or even put weapon systems in danger.

There are three stages involved in the development of ICs. Stage 1 is the design stage, in which IPs, circuit models, design tools and designers are involved. Stage 2 is the fabrication stage, involving masks, wafer and packaging plants. Stage 3 is the test stage. As is well known, chip designers employ reliable computer-aided design (CAD) tools in the design stage of ASICs. Nonetheless, IPs, circuit models and standard components are regarded as untrusted in the design, packaging and test stages. The fabrication stage is even less trustworthy, since hardware Trojans can be inserted into chips during the fabrication process. The fabrication and test stages are considered reliable only when they are carried out by trusted semiconductor manufacturing companies or state-run institutes.

As explicitly stated in [1-3], there are mainly two approaches, i.e., IC certified protection and hardware Trojan detection, to ensure that chips are designed and manufactured in a trustworthy way. The former refers to a certified protection against malicious alteration of designed circuits, and is a high-cost scheme since all the manufacturing processes must be certified. More importantly, IC manufacturing has been globalized in an attempt to keep manufacturing costs down, making the scheme even less practicable. Hence, the latter has been developed as a low-cost alternative, and is investigated herein. As its name indicates, hardware Trojan detection is a technique to determine whether any hardware Trojan has been inserted, so as to win clients' trust. A performance comparison between an originally designed circuit and a circuit under test is conducted as a post-manufacturing detection. In recent times, this has been acknowledged as the mainstream detection technique against hardware Trojans, and a continuous effort has been made to improve the detection effectiveness against a wide variety of hardware Trojans. There is a volume of publications on precautionary measures against malicious circuit insertion and attack [4-7]. This paper presents a ring oscillator-based detection circuit, consisting of an odd number of inverters and working alongside the originally designed circuit. The detection circuit is configured in such a way that there is a frequency shift in the detection circuit once a hardware Trojan exists.
In this paper, a combined use of a ring oscillator-based detection circuit and a path tracking algorithm is found to improve the wire and net coverages, meaning that the detection scope can be maximized. This work is presented in five sections. Section 1 is the introduction; Sect. 2 describes hardware Trojan taxonomy and the basics of Trojan detection. Section 3 gives the configuration and characteristics of ring oscillators, and a detailed discussion of the presented path tracking algorithm for hardware Trojan detection and even protection against the activation of malicious circuits. Section 4 presents simulation results, and Sect. 5 concludes this paper and suggests future work.

Related works

This section firstly describes the taxonomy of hardware Trojans, then the various detection techniques for hardware Trojans.

Modules

As indicated in [8-11], there are mainly two modules involved in hardware Trojans, namely a trigger module and a payload module. The former can be triggered either internally or externally, or may have no trigger at all (always active). The latter works in the following ways when triggered: (1) messages are transmitted to the hardware Trojan makers, (2) circuit operations are maliciously altered, or (3) the chip is sabotaged.

Types of hardware Trojans

Hardware Trojans can be classified according to the types of the trigger and payload modules involved [12,13]. There are two types of trigger modules, namely digital and analogue types. The digital module is further sorted into combinational and sequential types [14]. In the combinational case [12], a hardware Trojan is triggered on the condition that A = 0 and B = 0, and an error appears at node C of the payload module, as illustrated in Fig. 1a. In the sequential case, a hardware Trojan is also known as a time bomb and is triggered by a specific sequence or periodic signals. As reported in [8,12,15], the sequential type can be further categorized into synchronous, asynchronous, mixed and rare sequential types. As with the trigger modules, payload modules can be further divided into digital and analogue types. In the digital case, logic operations or memory contents can be maliciously altered through the nodes in the payload module, while, in the analogue case, circuit performance, e.g., efficiency, power dissipation, noise tolerance, etc., can be degraded through malicious alteration of parameters, say, extra path delay due to the introduction of a load capacitance, as illustrated in Fig. 1b.

Hardware Trojan detection solutions

Hardware Trojan detection techniques [12] are discussed here. A wide variety of detection techniques have been proposed, since no single technique can successfully detect all types of hardware Trojans.

Types of hardware Trojan detection approaches

Hardware Trojan detection approaches are mainly classified into non-destructive and destructive approaches, as illustrated in Fig. 2, and are detailed as follows. 1. Destructive detection: metal layers are removed using a chemical mechanical polishing (CMP) process, and the circuits are then viewed using a scanning electron microscope (SEM). 2. Non-destructive detection: it is further classified into invasive and non-invasive detections. The latter is done using a logic comparison between an originally designed IC and one under test, and is further categorized into run-time and test-time detections.
Test-time detections incur no extra hardware cost, while a major disadvantage is that an originally designed IC is required for comparison purposes. They are further classified into logic test and side channel detections. In the former, signals are applied to the inputs of an IC under test, and the outputs are monitored for any unmatched signal; if one is found, the IC under test is very likely to contain a hardware Trojan. In the latter, the changes of electric parameters due to inserted Trojans, e.g., transient current, power dissipation and path delay, are investigated, as presented in [16]. Tables 1 and 2 give the advantages and disadvantages of logic test and side channel detections. 3. Design for hardware trust: this is an alternative way of hardware Trojan detection, and is done using built-in detection circuits in the originally designed ICs.

Hardware Trojan detections

According to the way Trojans are detected, side channel detections can be categorized into power-based [1,3] and timing-based detections. The power-based detection is so capable a technique that even untriggered hardware Trojans can be detected. As presented in [1], hardware Trojan detection was done by a combined use of power dissipation and charge/discharge approaches. In contrast, transient power supply signals were adopted to test whether there existed any hardware Trojan [3], as a way to reduce leakage current and the influence of fabrication parameters. As put forward in [17], the timing-based detection is a technique that mainly measures the path delay between registers, and an IC under test is diagnosed as having Trojans if the path delay is found to be longer than a threshold. Design for hardware trust is an alternative way of hardware Trojan detection. As referred to in [4], ring oscillators, consisting of an odd number of inverters, are introduced into an IC under test. A Trojan is detected once there is a frequency change in a ring oscillator. Fig. 3 illustrates a ring oscillator. A transient-effect ring oscillator (TERO) is proposed in [30] and added to the original circuit for detection of hardware Trojans. The TERO is a non-destructive approach [31] and more sensitive to frequency changes, so it is more efficient than conventional ring oscillators. Based on measuring the time delay, ring oscillators can be inserted into a scan chain for hardware Trojan detection. The design-for-testability (DFT)-based methods [32] are therefore applied to find the inserted Trojans and an optimal trade-off between area overhead and security strength. As indicated in [33], ring oscillators are also adopted to detect Trojans; this approach can avoid the effect of process variations and locate the Trojan. For 3D integrated circuits, a 3D ring oscillator is presented in [34] to detect the existence of a Trojan using delay measurements; this approach can also be used to detect the additional delay due to process variations. As a design for hardware trust, this work involves a combined use of the detection technique and the presented path tracking algorithm to form a ring oscillator-based detection circuit. In this manner, a Trojan can be detected before being triggered.

Proposed method

A ring oscillator-based detection circuit is employed herein to determine whether any hardware Trojan exists, and a path tracking algorithm and a multiplexer are presented as well. The detection performance is tested on the ISCAS85 c17 benchmark.
A combination of a ring oscillator

This work aims to present an effective hardware Trojan detection technique and to improve the wire and net coverages using a combination of a ring oscillator, a 2-to-1 multiplexer and a path tracking algorithm. Illustrated in Fig. 4 is a ring oscillator in the ISCAS85 c17 benchmark. A 2-to-1 multiplexer precedes a ring oscillator so as to choose a specific path for detection. With the 2-to-1 multiplexer as a controlled switch, the circuit under test can either work as expected, or the ring oscillator can operate to detect hardware Trojans.

An alternative way for hardware Trojan detection

A hardware Trojan can be detected once there is a frequency shift in a ring oscillator. However, the frequency shift depends on the way the hardware Trojan is inserted, so hardware Trojans are categorized according to the way they are inserted. An original detection circuit is illustrated in Fig. 5, where PATH represents a chosen path and D denotes an added circuit for Trojan detection, with the oscillation frequency

f = 1 / (2 N D),

where N represents the number of inverters and D represents the delay of a single inverter. Hardware Trojans can be inserted at any position in a circuit. As categorized in Fig. 6a to 6d, it is important to understand the various ways that Trojans can be inserted. Therefore, a short circuit path is added to the original circuit to represent a frequency change in a ring oscillator. Based on the original detection circuit, Trojan detections are classified into two types according to the added path configuration: the path lies on a single route in Type 1, while it runs across multiple routes in Type 2. Both types are illustrated as follows.

In Type 1 the path is on a single route, and this type is further categorized into three modes. In mode 1, as illustrated in Fig. 6a, a path is inserted between the input and PATH. Taking the added path into account, the frequency of the detection circuit is modified to

f = 1 / (2 (D_ip + D_p + D_d)),

where D_ip, D_p and D_d represent the delay between the input and PATH, the delay of PATH and the delay of the added inverters, respectively. In mode 2, as illustrated in Fig. 6b, a path is inserted between the output and PATH. Taking the added path into account, the frequency of the detection circuit now becomes

f = 1 / (2 (D_op + D_p + D_d)),

where D_op, D_p and D_d represent the delay between the output and PATH, the delay of PATH and the delay of the added inverters, respectively. Mode 3 is further classified into five cases, and the frequency is now modified to

f = 1 / (2 (D_ex + D_p + D_d)),

where D_ex, D_p and D_d represent the delay of the added path, the delay of PATH and the delay of the added inverters for detection. As illustrated in Fig. 6c, a path is added inside PATH in case 1, outside PATH in case 2, within the detection circuit in case 3, between the detection circuit and the input/output in case 4, and across the detection circuit and PATH in case 5.

Unlike Type 1, in Type 2 a path is added between PATHs. As illustrated in Fig. 6d, Type 2 is further categorized into 8 cases: a path is added between input 1 and input 2 in case 1, between output 1 and output 2 in case 2, between input 1 and output 2 in case 3, between PATH 1 and PATH 2 in case 4, between input 1 and PATH 2 in case 5, between output 1 and PATH 2 in case 6, between the detection circuits D1 and D2 in case 7, and between D1 and PATH 2 in case 8.
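To make the frequency relations above concrete, the following sketch computes the nominal detection-circuit frequency and the shift once an extra Trojan path delay is present. The delay values are hypothetical placeholders, not figures from the paper.

```python
def ro_frequency(total_loop_delay_s: float) -> float:
    """Oscillation frequency of a ring-oscillator loop.

    One full oscillation traverses the loop twice (rising and
    falling transitions), so the period is 2 x total loop delay.
    """
    return 1.0 / (2.0 * total_loop_delay_s)

# Hypothetical delays (seconds): chosen PATH, detection inverters,
# and an extra delay D_ex introduced by an inserted Trojan path.
D_p = 3.0e-10   # delay of PATH
D_d = 5.0e-10   # delay of the added inverters of the detection circuit
D_ex = 0.8e-10  # extra delay of a (hypothetical) inserted Trojan path

f_clean = ro_frequency(D_p + D_d)          # Trojan-free circuit
f_trojan = ro_frequency(D_ex + D_p + D_d)  # mode-3-style insertion

shift_ppm = (f_clean - f_trojan) / f_clean * 1e6
print(f"clean: {f_clean/1e6:.1f} MHz, with Trojan: {f_trojan/1e6:.1f} MHz")
print(f"frequency shift: {shift_ppm:.0f} ppm")
```

Any extra series delay lowers the oscillation frequency, which is exactly the signature the detection circuit looks for.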
As explicitly stated previously, in Type 2 paths are added across PATHs, which makes it rather difficult to analyze the frequency of the Type 2 cases. In either type, however, the added paths demonstrate a strong effect on the frequency.

Detection circuits for series and parallel configurations

Detection circuits are categorized according to the way they are configured, i.e., series, parallel and mixed configurations, detailed in turn as follows. Illustrated in Fig. 7 is the configuration of an original detection circuit. The series configuration is illustrated first, followed by the parallel configuration, which is further categorized into two cases; the mixed configuration is described last. The series configuration is illustrated in Fig. 8, in which all the PATHs are connected in series, and a loop is formed by a detection circuit D1 and the PATHs so as to detect a hardware Trojan. The parallel configuration is further classified into two types according to the configuration of the added paths: a path is added to a single PATH in Type 1, while it is added between PATHs in Type 2. It is assumed that a hardware Trojan is inserted into a single PATH, and a detection circuit is accordingly built across each PATH, as illustrated in Fig. 9a. Fig. 9a can, however, be simplified into Fig. 9b, since there is a frequency change once a hardware Trojan is inserted into an arbitrary PATH. As illustrated in Fig. 10a, a hardware Trojan is inserted into a path, shown in blue, between PATHs. In this context, the inserted Trojan cannot be detected using the configuration in Fig. 10a, but it can be detected using the configuration in Fig. 10b, where a detection circuit D_ex, shown in red, is inserted into the blue path. The mixed configuration is finally illustrated in Fig. 11, where a mixture of the above-stated series and parallel configurations is employed for hardware Trojan detection.

Algorithm

In a large-scale integrated circuit, there are tens of thousands of paths. Path selection is therefore critical for optimizing the detection performance.

A brief flowchart

As illustrated in Fig. 12, a Verilog file is first read to obtain the configuration information of a circuit. Subsequently, matrices are built from the Verilog file for path search, and path match is performed to locate the optimal path and then to assign the input nodes to the chosen path. The above steps comprise the path tracking algorithm herein, while the rest is referred to as the multiplexer match algorithm, which aims to improve the coverage in an attempt to reach a 100% detection. As illustrated in Fig. 13, a Verilog file contains the information on each gate and its inputs. A total of 2 matrices are created from the Verilog file; a sketch of this matrix construction is given below. The first matrix specifies the interconnections between nodes, according to which path tracking is performed until all the edges are detected; a logic 0 and a logic 1 represent the existence and non-existence of an interconnection between nodes, respectively. The second matrix specifies the connections between nodes and inputs, which allows path match to be performed efficiently; a logic 0 and a logic 1 in this matrix represent the same things as in the first matrix. The 'path search' step, illustrated in Fig. 12, involves the first matrix.

Path tracking algorithm

Path tracking is performed as a prerequisite of the 'path match' step, and is illustrated as follows.
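As a rough illustration of the two matrices described above, the sketch below parses a tiny gate list (a hypothetical stand-in for the Verilog file, not the paper's parser) and builds a node-to-node matrix and a node-to-input matrix. For readability it uses the common convention 1 = connected, whereas the paper states the opposite encoding.

```python
import numpy as np

# Hypothetical gate list distilled from a Verilog netlist:
# (gate_output_node, [input_nodes]); primary inputs are named 'i*'.
gates = [
    ("n1", ["i1", "i2"]),
    ("n2", ["i2", "i3"]),
    ("n3", ["n1", "n2"]),
    ("n4", ["n2", "i4"]),
]

nodes = sorted({g for g, _ in gates} |
               {s for _, ins in gates for s in ins if not s.startswith("i")})
inputs = sorted({s for _, ins in gates for s in ins if s.startswith("i")})
n_idx = {n: k for k, n in enumerate(nodes)}
i_idx = {i: k for k, i in enumerate(inputs)}

# Matrix 1: node-to-node interconnections (edges traversed in path search).
node_mat = np.zeros((len(nodes), len(nodes)), dtype=int)
# Matrix 2: node-to-primary-input connections (used in path match).
input_mat = np.zeros((len(nodes), len(inputs)), dtype=int)

for out, ins in gates:
    for s in ins:
        if s.startswith("i"):
            input_mat[n_idx[out], i_idx[s]] = 1
        else:
            node_mat[n_idx[s], n_idx[out]] = 1  # edge: source node -> gate output

print("nodes:", nodes)
print("node-to-node matrix:\n", node_mat)
print("node-to-input matrix:\n", input_mat)
```

Path search then walks the first matrix to enumerate candidate paths, and path match consults the second matrix to check which primary inputs can drive each candidate.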
Line 7 indicates the number of edge traversals using the first matrix, as listed in Table 3. Subsequently, lines 15-21 perform the intersection of the node configurations and the path, as illustrated in Fig. 14. Paths whose intersection contains the largest number of elements are chosen, as illustrated in Table 4. Path tracking is then optimized using Table 4, as follows. The path with the least number of edge traversals is chosen for optimization, since it is likely to have an inadequate number of inputs. In this case, path 4 is chosen because merely edges 3 and 6 are traversed. Line 22 assigns the chosen paths to an input using the second matrix, as shown in Table 5. As illustrated in Table 4, node 2 is the first node in path 4, and a corresponding gate is chosen. In this case, two inputs are available for choice, either of which can be taken to form a path and, accordingly, a ring oscillator-based detection circuit. Lines 23-25 delete the chosen inputs, edges and paths constituting a detection circuit, to avoid repetition, meaning that an input cannot be assigned to two different paths. For instance, deletion of path 4 and input 6 gives Tables 6, 7 and 8. The path tracking algorithm terminates once one of the following conditions, i.e., those listed on line 9, is fulfilled.

Condition 1: all the edges are traversed, while a number of inputs may not be involved. Condition 2: contrary to condition 1, all the inputs are involved, while not all the edges are traversed. Condition 3: part of the edges cannot be traversed using the remaining inputs.

Condition 1 is detailed as follows. All the edges are covered by the paths, meaning that hardware Trojan detection may not involve all the inputs. Condition 1 is further classified into two cases. In the first case, a Trojan is maliciously inserted right at the input of a circuit, and the introduction of a ring oscillator requires a multiplexer, leading to a rise in cost. In contrast to the first case, the second case requires no further handling. Condition 2 is due to a shortage of inputs: the inputs are all involved while not all the edges are traversed, meaning that path match can no longer be performed. There are solutions to this hardware limitation, e.g., the use of alternative multiplexers, but the issue is not addressed herein. Condition 3, as opposed to condition 2, frequently occurs in large-scale circuits, and is simply due to a shortage of edges. The above steps are illustrated as a flowchart in Fig. 15, where the path match mechanism is highlighted in gray. To begin with, paths are chosen from those given in the 'path search' step, and the output of the final step is determined as the path for building a ring oscillator.

Multiplexer match algorithm

An algorithm, designated the multiplexer match algorithm and listed below, is developed to improve the coverage, since a 100% coverage cannot be reached by the path tracking algorithm alone, and the choice of multiplexer is found to be a key factor in determining the coverage. Line 29 gives the value of N as a function of the number of paths involving an input, and line 30 specifies the required multiplexer. For instance, an 8-to-1 multiplexer is required in the case of UIN = 7.

Experimental results

Computer simulations are conducted and the performance is validated using the ISCAS85 benchmark. The simulation results are presented as parts 1-4.
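The multiplexer sizing just described can be sketched as follows. The exact rule (one extra multiplexer input reserved for normal operation, rounded up to a power-of-two width) is an assumption consistent with the quoted UIN = 7 to 8-to-1 example, not the paper's listing.

```python
def required_mux_size(uin: int) -> int:
    """Smallest power-of-two N for an N-to-1 multiplexer that can
    select among `uin` candidate detection paths plus the functional path.

    Assumption: one input is reserved for normal circuit operation, and
    multiplexers come in power-of-two widths (consistent with the
    paper's example of UIN = 7 -> 8-to-1 multiplexer).
    """
    needed = uin + 1
    n = 1
    while n < needed:
        n *= 2
    return n

for uin in (1, 3, 7, 9):
    print(f"UIN = {uin}: {required_mux_size(uin)}-to-1 multiplexer")
```

For UIN = 1 this reduces to the 2-to-1 multiplexer used by the path tracking algorithm alone, and for UIN = 7 it yields the 8-to-1 multiplexer of the example.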
Part 1 gives the simulation results of the path tracking algorithm used alone, part 2 gives those of the combined use of the path tracking and multiplexer match algorithms, part 3 is a performance comparison between parts 1 and 2, and finally part 4 gives the number and types of the multiplexers employed and a delay comparison among benchmarks. Each case involves two quantities. The first is the wire coverage, defined as

wire coverage = (number of internal wires detected) / (total number of internal wires) x 100%.

The second is the net coverage, defined as

net coverage = (number of detected internal and input wires) / (total number of internal and input wires) x 100%.

Single use of the path tracking algorithm

The path tracking algorithm aims to improve the wire coverage while employing only a 2-to-1 multiplexer. Comparisons of the wire and net coverages among benchmarks are illustrated in Figs. 16 and 17, respectively.

Combined use of the path tracking and the multiplexer match algorithms

This combination aims to improve the wire and net coverages through a proper choice of multiplexers according to the number of paths involving an input.

Performance comparison between both algorithms

Performance is compared between both approaches in terms of the wire and net coverages. Wire coverage comparison: as illustrated in Fig. 20, a 100% wire coverage cannot be reached in a large-scale benchmark by the path tracking algorithm alone, while it can be achieved in each benchmark by the combined use of the path tracking and multiplexer match algorithms. Net coverage comparison: as illustrated in Fig. 21, a 100% net coverage likewise cannot be reached in a large-scale benchmark by the path tracking algorithm alone, while it can be achieved in each benchmark by the combined approach.

The number of multiplexers required between two coverages

Using the combination of the path tracking and multiplexer match algorithms, comparisons of the number of multiplexers required for the wire and net coverages are illustrated in Figs. 22 and 23, respectively. It is clear that the number of multiplexers required for the net coverage increases significantly, particularly 2-to-1 multiplexers.

Delay comparison

To validate the performance of the proposed algorithms, the delay is compared between the two coverages. For simplicity, each multiplexer is assumed to have one unit of delay, which allows the method to migrate easily to different logic families and process technologies, such as static, dynamic and domino logic. The delay comparison between the wire and net coverages is listed in Table 9 for each benchmark circuit.

Conclusions

This paper presents a ring oscillator-based technique to improve the wire and net coverages. A circuit under test is divided into a great number of blocks, into each of which a ring oscillator is introduced as a way to detect hardware Trojans. Furthermore, a path tracking algorithm is presented to optimize the path assignment. Accordingly, simulation results on the ISCAS85 benchmarks validate the presented approach.

Fig. 22. Comparison of the number of multiplexers required among benchmarks for the wire coverage, using a combination of the path tracking and multiplexer match algorithms.
Fig. 23. Comparison of the number of multiplexers required among benchmarks for the net coverage, using a combination of the path tracking and multiplexer match algorithms.
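Under the coverage definitions above, the two metrics can be tallied for a netlist as in the short sketch below; the wire sets and the detected set are hypothetical inputs, not data from the ISCAS85 runs.

```python
def wire_coverage(detected: set[str], internal: set[str]) -> float:
    """Wire coverage (%): detected internal wires / all internal wires."""
    return 100.0 * len(detected & internal) / len(internal)

def net_coverage(detected: set[str], internal: set[str],
                 inputs: set[str]) -> float:
    """Net coverage (%): detected internal + input wires / all of them."""
    all_nets = internal | inputs
    return 100.0 * len(detected & all_nets) / len(all_nets)

# Hypothetical example: all internal wires detected, one input missed.
internal = {"n1", "n2", "n3", "n4"}
inputs = {"i1", "i2", "i3"}
detected = {"n1", "n2", "n3", "n4", "i1", "i2"}

print(f"wire coverage: {wire_coverage(detected, internal):.1f}%")         # 100.0%
print(f"net coverage:  {net_coverage(detected, internal, inputs):.1f}%")  # 85.7%
```

The example mirrors the reported behavior: the path tracking algorithm alone can saturate the wire coverage while the net coverage still falls short until the multiplexer match algorithm brings uncovered inputs into play.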
The Dynamic Impact of Agricultural Fiscal Expenditures and Gross Agricultural Output on Poverty Reduction: A VAR Model Analysis: China was the first developing country to achieve the poverty eradication target of the 2030 Agenda for Sustainable Development Goals (SDG) 10 years ahead of schedule. Its approach has mainly been to allocate more fiscal spending to rural areas while strengthening accountability for poverty alleviation. However, some literature suggests that poor rural areas still lack the endogenous dynamics for sustainable growth. Using a vector autoregression (VAR) model based on data from 1990 to 2019, we find that fiscal spending plays a much more significant role in reducing the poverty ratio than agricultural development. When poverty alleviation is treated as an administrative task, each poor village must complete the spending of top-down poverty alleviation funds within a time frame that is usually shorter than that required for successful specialty agriculture. As a result, the greater the pressure of poverty eradication and the more funds allocated, the more poverty alleviation projects become an anchor for accountability, and the more local governments' consideration of industry cycles and input-output analysis gives way to formalism, homogeneity, and even complicity. We suggest using the leverage of fiscal funds to direct more resources to productive uses, thus guiding future rural revitalization in a more sustainable direction.

Introduction

According to the World Bank, China has lifted more than 850 million people out of poverty since its reforms began in 1978, contributing over 70% to global poverty reduction [1]. China was the first developing country to reach all the Millennium Development Goals (MDG) by 2015 and to achieve the poverty eradication target set out in the 2030 Agenda for Sustainable Development Goals (SDG) 10 years ahead of schedule [2]. The country has now set a five-year transition period (2021-2025), gradually shifting the policy focus from poverty alleviation to the comprehensive and holistic promotion of rural vitalization. By 2025, China's agricultural and rural modernization is expected to make substantial progress, with a more solid agricultural foundation, a narrowing of the income gap between urban and rural residents, and the basic modernization of agriculture where conditions permit [2]. It is, therefore, necessary to sort out the factors behind China's past triumph over poverty and analyze the dynamic impact of these variables on the effectiveness of poverty reduction at a macro level, in order to better implement future rural revitalization. Numerous recent policy studies have reviewed China's campaign against poverty. One of the highlights is the role of cadre residence in the targeted poverty alleviation project. The program has selected 255,000 village-based working groups and more than 3 million village committee first secretaries to be stationed in 832 impoverished counties and 128,000 poor villages to accurately identify the poor population and accompany them with tailored poverty eradication measures. Studies have shown that the match, competency, and effort of resident cadres, the scale of financial support, and the integration of top-down policy and bottom-up autonomy are important factors affecting the effectiveness of poverty eradication [1,3-7]. Another highlight is the ongoing financial commitment.
During 2016-2019, China's national general public budget spending on agriculture and rural areas reached RMB 6.07 trillion (approximately US$925.9 billion). This figure represents an average annual growth of 8.8%, which is higher than the growth of national fiscal expenditures. Over the years, fiscal agricultural and rural expenditures have mainly supported the supply of basic agricultural products, such as grain and pork, poverty eradication and industrial supply-side structural reform, the improvement of weak links related to agriculture and rural areas, and the enhancement of rural governance systems [8]. In the next five years, financial input to support agriculture and rural areas will be further increased. However, some researchers have found that, when viewed in a sustainable livelihood framework, fiscal funding at this stage has not been effective in helping poor people build the assets they need to sustain an adequate living income [9]. Similar findings have shown that infrastructure development and the promotion of rural labor to urban job markets are the main drivers of poverty alleviation in poor counties in southwest China, while policy tools such as increasing funding for rural compulsory education, training farmers to equip them with technology, financial support for the new rural cooperative health care system, and providing agricultural insurance and easy access to loans for farmers have not been effective in alleviating poverty [10]. Other studies have recognized the importance of upgrading in agricultural value chains and other non-fiscal policies that remain inadequate in practice [8,11].

Due to the low profitability of agricultural and sideline products, the public finances of all levels of government are often determined by the level of development of the secondary and tertiary sectors, and public financing in rural areas is therefore often more difficult. Support for agriculture and rural areas is channeled directly through various production expenditures by the state treasury to collectives or households, and indirectly through various allocations to public enterprises and institutions, such as rural water conservancy, in addition to being financed by extrabudgetary revenues from local treasuries (such as organizational activities, depreciation funds, and budget contract balances) and low-interest loans from the financial sector. Since the abolition of agricultural taxes in 2006, grassroots governments, which used to rely on taxes and fees collected from the countryside to sustain their operations, have shifted to relying on transfers from higher levels of government, while rural societies have shifted mainly to self-governance. In recent years, however, with the implementation of targeted poverty alleviation, administrative power has been re-extended downwards through cadre residence and industrial poverty alleviation funds. In the recent three-year (2018-2020) action plan to eliminate poverty, poverty alleviation by developing industries was treated as an administrative task, and every poor village had to complete the spending of top-down industrial poverty alleviation funds within a certain time frame. In terms of industry cycles, however, it takes at least five to ten years for a region's specialty agriculture to succeed, marked by the formation of a cluster of villages featuring a particular industry with brand recognition and market share.
Therefore, the greater the pressure of the poverty eradication assessment from the higher-level government and the more funds allocated, the more the lower-level government's industrial poverty alleviation projects become an anchor of accountability, and the more the local governments' consideration of industry cycles and input-output analysis gives way to formalism, homogeneity, and even complicity [12]. Various case studies have identified a number of common issues. First, although the three-year action plan has improved the rural living environment, the lack of agricultural infrastructure in less developed rural areas, such as small-scale water conservation, farmland improvement, and agricultural logistics and e-commerce, still hinders agricultural industrialization. Second, cross-regional pairing-off cooperation, with various pro-consumption projects treated as administrative tasks, masks a mismatch between supply and demand, a lack of competitiveness of products, and a homogenization of specialty agricultural capacity. Third, the strict regulation of poverty alleviation funds and the obligation for enterprises using them to employ local poor people have raised the cost of doing business, so market-oriented and efficient enterprises lack the incentive to participate in poverty alleviation projects [12,13].

Literature and Hypothesis Development

There are two streams of literature summarizing poverty alleviation approaches, one emphasizing exogenous factors, such as participatory poverty alleviation [14], social support networks [15,16], and the structuralist theory of state intervention [17-19], and the other emphasizing endogenous development, such as economic growth [20], agricultural industrialization [21-24], cultural tourism development [25-27], and technological advances in agriculture [28]. Exogenous drivers, such as national systems and top-down policies, play a crucial role in poverty alleviation, but, as some literature suggests, poor rural areas still lack the endogenous dynamics for sustainable growth. Recently, many scholars have started to focus on the long-term mechanism for rural revitalization, which supports the view that agricultural fiscal expenditures are not sustainable despite their short- to medium-term effectiveness [29-31]. The existing literature on the prevalent lack of sustainable endogenous development in less developed rural areas is either based on case studies in poor villages or on static analyses of selected rural areas, and there is little empirical research using macroeconomic indicators from a holistic and dynamic perspective. One study used least squares regression to empirically analyze the effect of fiscal policy on agriculture and agricultural economic growth [32], but it did not address the issue of poverty alleviation. This paper is one of the first attempts to empirically analyze the factors affecting the effectiveness of China's poverty eradication approaches and their dynamic relationships at the macro level. As shown in Figure 1, from a macroeconomic perspective, poverty alleviation strategies, such as agricultural tax abolition, investment in rural infrastructure, microfinance loans and interest subsidies for poor households, targeted poverty alleviation through relocation, education, technical assistance, health promotion, photovoltaic power generation and e-commerce, and pairing-off cooperation, all involve agricultural fiscal expenditures at one level of government or another at some point.
These approaches can be categorized as exogenous financial support to agriculture and rural areas. On the other hand, the opening up of agricultural markets, the development of agricultural industries, the advancement of agricultural technology, and the diversified business activities of rural poor households can be categorized as endogenous development of impoverished rural areas. A body of literature shows that industrial development in poor areas still suffers from a lack of endogenous dynamics [2,30,31]. Here, we put forth two hypotheses:

Hypothesis 1 (H1). There is a stationary relationship between agricultural fiscal spending (exogenous factor), gross agricultural output (endogenous factor), and poverty alleviation, i.e., their basic processes and the underlying relationships among them are essentially stable.

Hypothesis 2 (H2). Agricultural fiscal spending (exogenous factor) is statistically more significant than gross agricultural output (endogenous factor) in reducing the proportion of the population living in poverty.

We regress poverty alleviation on lagged explanatory variables, including agricultural fiscal expenditure, gross agricultural product and poverty incidence, using annual data from the statistical yearbooks for the years 1990 to 2019. The dynamic interactions and time lags between fiscal expenditure, agricultural development, and poverty alleviation complicate our identification. To address this issue, we use a vector autoregression (VAR) model, which imposes minimal economic assumptions, to assess the dynamics of the joint variables and the interactions between them. Our key findings are: (1) China's poverty eradication efforts over the past 30 years have not seen a major shift in mechanism or intensity but have on the whole been steady, with intensity and results commensurate with each other; even when reaching the most difficult populations, stable poverty reduction outcomes have been achieved through administrative accountability and precision poverty alleviation. (2) As the time required for the success of the agricultural industry is much longer than the allocation and assessment cycle of financial support for agriculture, the higher the incidence of poverty and the greater the pressure to eradicate poverty, the more funds are allocated to support rural development and production, and the more poverty alleviation projects become an anchor of accountability. Coupled with the fact that these areas are often poorly endowed, local governments' consideration of industrial cycles and input-output analysis has given way to formalism, homogeneity, and even complicity, and the endogenous impetus for agricultural development remains insufficient. Overall, fiscal spending has played a much greater role in reducing poverty rates than agricultural development. The rest of the article is outlined as follows. Section 3 discusses the data and research methods. Section 4 presents our empirical tests and result analyses. Section 5 discusses the results, and Section 6 concludes the article.
Data

Our analysis is based on data from the statistical yearbooks compiled by the National Bureau of Statistics of China from 1990 to 2019. The statistical yearbooks contain financial support for agriculture used by governments at all levels, including expenditure for capital construction of agricultural public enterprises and institutions, utility fees, and science and technology promotion fees. They also contain data on funds used to support rural development and production, such as subsidies for small farmland water conservancy and soil and water conservation, funds to support rural production organizations, subsidies for rural agricultural technology promotion and plant protection, subsidies for rural pasture and livestock protection, subsidies for rural afforestation and forestry protection, and subsidies for rural aquaculture. The Bureau of Statistics also compiles statistics on gross agricultural product and nationwide rural poverty incidence by year.
Variables

In order to reveal the dynamic relationship between agricultural fiscal expenditure at all levels, gross agricultural output and the effectiveness of poverty alleviation, we transformed the data into three variables: agricultural fiscal expenditure (ZFTR), which includes expenditures on agricultural production and administration, appropriations for capital construction, new product promotion funds, rural relief funds, and others; gross agricultural output (NYCZ), the total output value of agriculture, forestry, animal husbandry, and fishery; and poverty alleviation (CX), which equals 1 minus the poverty incidence, multiplied by 100. To overcome heteroskedasticity and drastic fluctuations in the data, the values of each of the three variables were taken in logarithms.

Methods

Vector autoregression (VAR) is a multivariate forecasting algorithm that is used when two or more time series influence each other. The structure is that each variable is a linear function of past lags of itself and past lags of the other variables, and its standard form is

X_t = A_1 X_(t-1) + A_2 X_(t-2) + ... + A_p X_(t-p) + e_t,

where X_t is the k-dimensional endogenous variable vector, p is the lag order, A_1, ..., A_p are k x k coefficient matrices, and e_t is the k-dimensional random error term vector, which is a white noise process, i.e., its elements cannot be correlated with their respective lag terms or with the variables on the right side of the model. A VAR model can be used to evaluate the dynamic relationship of the joint endogenous variables and the interaction among them with minimal economic assumptions [33]. The literature review shows that there is a dynamic interaction among agricultural fiscal spending, gross agricultural output and poverty alleviation. Moreover, there is a certain lag in the effect of agricultural fiscal spending and agricultural output on rural poverty reduction, so the VAR model can be used to reveal the dynamic relationship between these three variables. Our aim is first to test whether there is a stationary relationship between agricultural fiscal expenditure, gross agricultural product, and poverty alleviation, i.e., whether their time series and their underlying relationships are generally stable such that no major shifts in mechanism or intensity have occurred over the past three decades. We then sort out the statistically more significant factors behind China's poverty alleviation, to see whether it is exogenous fiscal spending or endogenous development.

Unit Root Test

To avoid the problem of spurious regressions, we need to check whether the time series are stationary, i.e., whether their means and variances are constant over time. The unit root test results are shown in Table 1, in which (c, t, p) denotes the test specification: c is the constant, t is the trend term, and p is the lag order; D denotes the first difference. From Table 1, we can see that the three variables lnCX, lnZFTR, and lnNYCZ are non-stationary, but their first differences DlnCX, DlnZFTR, and DlnNYCZ are all stationary. This indicates that poverty alleviation, agricultural fiscal expenditure and gross agricultural output are all integrated of order one. These variables thus satisfy the necessary condition for cointegration, and we have the following model:

DlnCX_t = θ_1 DlnZFTR_t + θ_2 DlnNYCZ_t + e_t,    (2)

where t = 1, 2, ..., 30 denotes the period from 1990 to 2019, e_t is the error term, and the θ are the coefficients of the variables.
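As a minimal sketch of the stationarity check described above (the file name and CSV layout are assumptions; the statistical yearbook data themselves are not reproduced here), the unit root tests can be run with statsmodels:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Hypothetical layout: annual series for 1990-2019 with columns CX, ZFTR, NYCZ.
df = pd.read_csv("china_agri_1990_2019.csv", index_col="year")
logs = np.log(df[["CX", "ZFTR", "NYCZ"]])

# Test each log series and its first difference for a unit root.
for name, series in logs.items():
    for label, s in ((f"ln{name}", series), (f"Dln{name}", series.diff().dropna())):
        stat, pvalue, *_ = adfuller(s, autolag="AIC")
        verdict = "stationary" if pvalue < 0.05 else "non-stationary"
        print(f"{label}: ADF stat = {stat:.3f}, p = {pvalue:.3f} -> {verdict}")
```

A series that fails the test in levels but passes in first differences is integrated of order one, which is the pattern Table 1 reports for all three variables.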
The stationary process of the first differences of agricultural fiscal expenditure, gross agricultural output and poverty alleviation suggests that there has been no major shift in the basic characteristics of China's agricultural industry, the tilt of fiscal policy towards it, or the intensity of poverty alleviation over the past three decades.

Cointegration Test

The general assumption is that fiscal expenditure and agricultural output affect poverty reduction, and therefore these variables may be cointegrated, so that a linear combination of them is stationary. The Johansen cointegration test in the VAR framework helps to test this. The Johansen test is a test of the null hypothesis of no cointegration against the alternative of cointegration, and the results are shown in Table 2. As can be seen from Table 2, the null hypotheses of no cointegration, at most one cointegrating relation and at most two cointegrating relations among the three variables DlnCX, DlnZFTR, and DlnNYCZ are rejected, and thus the cointegration of the three variables as in model (2) holds. This suggests that there is a stable long-term equilibrium relationship between agricultural fiscal expenditure, agricultural output, and poverty alleviation, and that the effectiveness of rural poverty alleviation can be explained by the past values of agricultural fiscal expenditure, agricultural output, and poverty incidence.

To further examine the stationarity of the model and to determine the optimal lag p, the AR(p) model is tested. Since AR(p) processes are VAR processes on a higher-dimensional state space, either both the AR(p) and the VAR are stationary, or neither is. A necessary condition for an AR(p) to be stationary is that all the eigenvalues of the corresponding VAR's companion matrix lie inside the unit circle. As shown in Figure 2, when the lag is set from 1 to 5, all the eigenvalues are within the unit circle; when the lag is greater than 5, some of the eigenvalues fall outside the unit circle. Combining the test results of the AR(p) model with the Akaike information criterion (AIC), the Hannan-Quinn information criterion (HQIC), and the Schwarz information criterion (SBIC), the optimal lag length is 5. By choosing lag 5, the VAR model can be constructed, as shown in Table 3.
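Continuing the hypothetical pipeline from the previous sketch, the Johansen test, lag selection, and the stability (eigenvalue) check might look as follows; `logs` is the data frame of log series assumed above.

```python
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import coint_johansen

# First-differenced log series, renamed DlnCX, DlnZFTR, DlnNYCZ.
dlogs = logs.diff().dropna().add_prefix("Dln")

# Johansen cointegration test (constant term, 1 lagged difference).
joh = coint_johansen(dlogs, det_order=0, k_ar_diff=1)
print("trace statistics:", joh.lr1)  # compare against joh.cvt critical values

# Lag-order selection by AIC/HQIC/SBIC, then fit a VAR(5).
model = VAR(dlogs)
print(model.select_order(maxlags=5).summary())
res = model.fit(5)

# Stability check: the process is stable when the companion-form
# eigenvalue condition holds, mirroring the unit-circle plot in Figure 2.
print("stable:", res.is_stable())
```

With only 30 annual observations a VAR(5) in three variables is heavily parameterized, which is why the information criteria and the eigenvalue check are combined before settling on lag 5.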
With lag 5, the VAR model can be constructed; the estimates are shown in Table 3. The results in Table 3 show that fiscal expenditure has a significant impact on the effectiveness of poverty reduction in the subsequent five years, whereas the significance of agricultural development only becomes apparent gradually after a lag of five years. In addition, the effectiveness of poverty eradication had a significant impact on itself in the subsequent two years. Granger Causality Test From Table 4, we can draw the following conclusions. First, at lag 1, neither DlnZFTR nor DlnNYCZ Granger-causes DlnCX, i.e., the effects of fiscal expenditure and agricultural output on poverty reduction cannot be fully realized within a short period. However, at lags from 2 to 5, both DlnZFTR and DlnNYCZ Granger-cause DlnCX, i.e., the effects of fiscal expenditure and agricultural output on poverty reduction gradually come into play, and fiscal support and agricultural production can jointly contribute to reducing the rural poverty rate. Impulse Response Analysis Impulse response functions, which describe the evolution of the system in reaction to a shock in one or more variables, can be used to analyze the VAR model further. The results of the impulse response analysis are shown in Figure 3, where the X-axis represents the time period and the Y-axis the strength of the response. As shown in Figure 3, when DlnCX receives a one-unit shock from itself, it rises immediately, but after period t = 2 the response quickly approaches 0. This means that a DlnCX impulse has only short-term effects on itself and that the effect of poverty reduction in rural China is relatively stable year by year. When DlnZFTR receives a one-unit shock, it drives a small upward trend in DlnCX over the t = 1 to 5 period, after which the response remains stable. This indicates that agricultural fiscal support improves the effectiveness of rural poverty alleviation, that the effect is relatively stable, and that the promotion effect is maintained over the longer term. When DlnCX is hit by a DlnNYCZ shock, agricultural production likewise improves the effectiveness of rural poverty alleviation, but the effect is smaller than that of agricultural fiscal support. When DlnNYCZ is hit by a DlnZFTR shock over the t = 1 to 5 period, it first shows a relatively large increase, then a decrease, and then remains stable. This means that in the short term the level of agricultural development is influenced by agricultural fiscal input with relatively large fluctuations, but in the longer term the effect remains stable. Overall, all nine responses shown in Figure 3 eventually level off, which indicates that the VAR impulse response functions constructed in this paper are meaningful. It can also be seen that when the shock variable changes, the response of the shocked variable needs some time to adjust before its effect is revealed.
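The Granger tests of Table 4 and the impulse responses of Figure 3 can be sketched as follows, again under the assumed data layout from the previous snippets; grangercausalitytests treats the second column as the candidate cause of the first.

```python
from statsmodels.tsa.stattools import grangercausalitytests

# Pairwise Granger tests at lags 1-5 (cf. Table 4).
grangercausalitytests(data[["DlnCX", "DlnZFTR"]], maxlag=5)  # fiscal spending -> poverty
grangercausalitytests(data[["DlnCX", "DlnNYCZ"]], maxlag=5)  # agricultural output -> poverty

# Orthogonalized impulse responses over 10 periods (cf. Figure 3).
irf = res.irf(10)
irf.plot(orth=True)
```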
Variance Decomposition The variance decomposition indicates how much information each variable contributes to the other variables in the autoregression. To further understand and compare the contributions of fiscal policy and agricultural development to the effectiveness of rural poverty reduction, a variance decomposition is applied; the results are shown in Table 5. As can be seen from Table 5, the effectiveness of rural poverty alleviation (CX) is affected only by itself in the t = 1 period of the decomposition, while the impacts of fiscal policy (ZFTR) and agricultural development (NYCZ) on the effectiveness of rural poverty reduction show up only from the t = 2 period, with contributions of 13.817% and 10.171%, respectively. Thereafter, the contribution of ZFTR increases with small fluctuations and reaches its maximum of 18.565% at t = 7. The contribution of NYCZ increases relatively slowly, reaching a maximum of 10.205% at t = 5; it remains smaller than that of ZFTR, with a maximum difference of more than 8 percentage points. This shows that fiscal policy and agricultural development both have significant effects on rural poverty alleviation, with the former contributing more strongly; meanwhile, under the mutual reinforcement of these exogenous and endogenous factors, the effectiveness of poverty reduction in rural China has continued to improve on the basis of its own effectiveness in the previous stage, and China finally achieved the target of eradicating extreme poverty for all. Discussion Recent literature has comprehensively studied and reviewed China's campaign against poverty, with the highlight being the targeted poverty alleviation project [1,3-7]. From a macroeconomic perspective, poverty alleviation strategies such as tax repeal, infrastructure investment, interest subsidies, targeted poverty alleviation, and pairing-off cooperation all involve agricultural fiscal expenditures at one level of government or another. These approaches can be categorized as exogenous financial support to agriculture and rural areas. The opening up of markets, the development of agricultural industries, and the advancement of agricultural technology can be categorized as endogenous development of poor rural areas.
The overall conclusion is that China's current achievements in poverty alleviation have been driven more by exogenous fiscal policies, and a lack of sustainable endogenous development is still prevalent in poor rural areas. Our key findings from the VAR model are: (1) The constructed VAR model is stable, and the lag order is consistent with empirical experience. This suggests that China's poverty eradication efforts over the past 30 years have seen no major shift in mechanism or intensity but have on the whole been steady, with intensity and results commensurate with each other; even as the campaign reached the most difficult populations, it still achieved stable poverty reduction outcomes through administrative accountability and precision poverty alleviation. (2) The Granger causality test finds that agricultural fiscal policy at all levels and agricultural development contribute strongly to the effectiveness of rural poverty alleviation. Conversely, however, feedback from poverty reduction achievements does not affect central and local governments' decisions on fiscal policy or levels of agricultural development in the short run. This suggests that China's campaign against poverty is based on a longer-range objective, aiming to significantly reduce disparities in urban-rural development. This conclusion is consistent with the data on agricultural fiscal expenditures and the gross output of the agriculture, forestry, animal husbandry, and fishery industries in the statistical yearbook. In reality, increases in government inputs to support agriculture and growth in agricultural output clearly promote the effectiveness of poverty alleviation in poor areas, but the effectiveness of poverty alleviation does not necessarily increase or reduce government inputs or agricultural output value, the latter two being determined by multiple factors. (3) Impulse response analysis finds that the effects of fiscal inputs and the level of agricultural development on the effectiveness of rural poverty reduction are stable in the long run, despite some small fluctuations in the short run. Meanwhile, the level of agricultural development in the short run is influenced by agricultural fiscal support, and this influence is also stable in the long run. It can therefore be said that increasing government financial support to agriculture and the continuous development of agricultural production are the key factors promoting the effectiveness of rural poverty alleviation. (4) Variance decomposition finds that the short-term effectiveness of poverty alleviation continuously improved on the basis of previous effectiveness, but in the long run fiscal spending has played a significantly larger role. In other words, the effectiveness of rural poverty alleviation relies heavily on the external factor of financial support from governments at all levels, while the endogenous development of agricultural production and rural areas plays a much less significant role. This confirms the finding in the literature that China's current achievements in poverty eradication have been driven more by exogenous fiscal policies, and that a lack of sustainable endogenous development is still prevalent in poor rural areas. Conclusions China's approach to poverty alleviation has mainly been to allocate more fiscal spending to rural areas while strengthening accountability for poverty alleviation.
However, some literature suggests that poor rural areas still lack the endogenous dynamics for sustainable growth. Using a VAR model based on data from 1990 to 2019, we find that China's poverty eradication efforts over the past 30 years have seen no major shift in mechanism or intensity, and that fiscal spending plays a much more significant role in reducing the poverty ratio than agricultural development. Because the time required for an agricultural industry to succeed is much longer than the allocation and assessment cycle of financial support for agriculture, the higher the incidence of poverty and the greater the pressure to eradicate it, the more funds flow to support rural development and production, and the more poverty alleviation projects become an anchor of accountability. Coupled with the fact that these areas are often poorly endowed, local governments' consideration of industrial cycles and input-output analysis has given way to formalism, homogeneity, and even complicity, and the endogenous impetus for agricultural development remains insufficient. Therefore, in the future, it is necessary to set up a national integrated rural revitalization fund as a vehicle for investment in the agricultural industry and to develop specialty agriculture according to local conditions. The investment cycle should be matched to the agricultural industrialization cycle, and the focus of investment in rural infrastructure should shift to production-oriented infrastructure such as agricultural water conservancy, farmland improvement, agricultural logistics, and e-commerce. A national information network for agricultural products should be established to prevent the homogenization of industrial poverty alleviation and to better address the adverse selection of leading enterprises and the moral hazard of agricultural workers. This study can be extended further. Firstly, the empirical analysis can go beyond the overall impact of fiscal support for agriculture on poverty alleviation and consider fiscal expenditure on industrial poverty alleviation, education poverty alleviation, and rural infrastructure, forming a panel structure for multiple regression analysis. Secondly, as China's economic structure has obvious east-west and north-south differences, the impact of fiscal spending on agriculture on poverty alleviation will vary from region to region, with some regions essentially undergoing industrialization in the periphery and others essentially modernizing agriculture, so the measure of endogenous dynamics and sustainability is not necessarily just the level of agricultural output. These issues require further research.
Meleagrin, a New FabI Inhibitor from Penicillium chrysogenum with at Least One Additional Mode of Action Bacterial enoyl-acyl carrier protein reductase (FabI) is a promising novel antibacterial target. We isolated a new class of FabI inhibitor from Penicillium chrysogenum, which produces various antibiotics, the mechanisms of some of which are unknown. The isolated FabI inhibitor was determined to be meleagrin by mass spectrometry and nuclear magnetic resonance spectral analyses, and its more active and inactive derivatives were chemically prepared. Consistent with their selective inhibition of Staphylococcus aureus FabI, meleagrin and its more active derivatives directly bound to S. aureus FabI in a fluorescence quenching assay, inhibited intracellular fatty acid biosynthesis and growth of S. aureus, and increased the minimum inhibitory concentration for fabI-overexpressing S. aureus. The compounds were not effective against the FabK isoform, yet they inhibited the growth of Streptococcus pneumoniae, which contains only the FabK isoform. Additionally, no mutants resistant to the compounds were obtained. Importantly, fabK-overexpressing Escherichia coli was not resistant to these compounds, but was resistant to triclosan. These results demonstrate that the compounds inhibit another target in addition to FabI. Thus, meleagrin is a new class of FabI inhibitor with at least one additional mode of action that could have potential for treating multidrug-resistant bacteria. Introduction Multidrug-resistant bacteria such as methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant Enterococci, and vancomycin-resistant S. aureus have become an important global health concern [1,2]. One approach to combating antibiotic resistance is to identify new drugs that function through novel mechanisms of action. One such target is bacterial type II fatty acid synthesis (FASII), which is essential for bacterial cell growth [3-5]. FASII is conducted by a set of individual enzymes, whereas mammalian fatty acid synthesis is mediated by a single multifunctional enzyme-acyl carrier protein (ACP) complex referred to as type I. Enoyl-ACP reductase catalyzes the final and rate-limiting step of the chain-elongation process of FASII. Four isoforms have been reported for enoyl-ACP reductase. FabI is highly conserved among most bacteria, including S. aureus and Escherichia coli. Streptococcus pneumoniae contains only FabK, whereas Enterococcus faecalis and Pseudomonas aeruginosa contain both FabI and FabK, and Bacillus subtilis contains both FabI and FabL. Recently, the FabV isoform was isolated from Vibrio cholerae, Pseudomonas aeruginosa, and Burkholderia mallei [6,7]. No analogous protein performing a similar transformation is present in mammals; thus, FabI inhibitors should not interfere with mammalian fatty acid synthesis. Because of these properties, FabI is an attractive target for antibacterial drug development [8,9]. As drugs with single targets, such as rifampicin and fosfomycin, are particularly vulnerable to mutational resistance [10], bacteria also tend to develop resistance to FabI-specific inhibitors through mutations that alter the drug-binding site. FabI is known to be the main target of triclosan and isoniazid, which have been used in consumer products and for treating tuberculosis, respectively [11,12]. Triclosan-resistant bacteria and isoniazid-resistant Mycobacterium tuberculosis are highly prevalent because of point mutations in their FabI genes [13-15].
In addition, rapid development of resistance mutations has often been reported for synthetic FabI inhibitors [16]. Thus, it has recently been emphasized that ideal antibiotics should bind to multiple targets [17]. Many FabI inhibitors have been reported from high-throughput screening of existing compound libraries. However, most are not suitable for the development of new antibiotics because of their lack of cell membrane permeability, their efflux, and their high mutational frequency [18]. The problem with such screening results lies in the compound libraries, which are systematically biased. Microorganisms produce diverse antibiotics that function in an antagonistic capacity in nature, where they face competition. Most antibacterial agents in clinical use today are either microbial products or their analogs [19]. A few FabI inhibitors have been reported from microorganisms [20-22], and most of these are phenolic compounds. Therefore, more unique FabI inhibitors need to be obtained from microorganisms. During our continued screening for FabI inhibitors from microbial metabolites, we found meleagrin (1), which has a druggable structure, during solid-state fermentation of a seashore slime-derived Penicillium chrysogenum, a penicillin-producing species (Figure 1). Here, we report the isolation and analog preparation of meleagrin, in addition to its inhibition of FabI isoforms and whole cells of various pathogenic bacteria, target validation, and its multitarget effect. Bacterial strains The bacterial strains used in the antibacterial activity assays were obtained from the Culture Collection of Antimicrobial Resistant Microbes of Korea and the Korean Collection for Type Cultures. The pump-negative (tolC) E. coli EW1b was obtained from the E. coli Genetic Stock Center of Yale University. Screening and isolation of compound 1 Over 25,000 microbial extracts from actinomycetes and fungi were screened against S. aureus FabI and confirmed through a target-based whole-cell assay using fabI-overexpressing S. aureus. This analysis led to the identification of compound 1 from fungal strain F717 (Fig. 1). Compound 1 was isolated from the fermented whole medium of fungal strain F717, which was isolated from seashore slime collected at Daechun beach, Chungcheongnam-do, Korea. The strain was identified as Penicillium chrysogenum based on standard biological and physiological tests and taxonomic determination. Seed culture was conducted in a liquid medium containing 2% glucose, 0.2% yeast extract, 0.5% peptone, 0.05% MgSO4, and 0.1% KH2PO4 (pH 5.7 before sterilization). A sample of the strain from a mature plate culture was inoculated into a 500-mL Erlenmeyer flask containing 80 mL of the sterile seed liquid medium and cultured on a rotary shaker (150 rpm) at 28 °C for 3 days. Subsequently, 5 mL of the seed culture was transferred into 500-mL Erlenmeyer flasks (54 flasks) containing 80 g of bran medium, which was cultivated for 7 days at 28 °C to produce the active compound. The solid-state culture was extracted with 80% acetone, and the extract was concentrated in vacuo to an aqueous solution. The aqueous solution was then extracted 3 times with an equal volume of ethyl acetate (EtOAc). The EtOAc extract was concentrated in vacuo to dryness. The crude extract was subjected to SiO2 (Merck Art No. 7734.9025) column chromatography followed by stepwise elution with CHCl3-MeOH (100:1, 50:1, and 10:1).
The active fractions eluted with CHCl3-MeOH (50:1) were pooled and concentrated in vacuo to give an oily residue. The residue was applied to a Sephadex LH-20 column and eluted with CHCl3-MeOH (1:1). The active fraction was dissolved in MeOH and further purified by reverse-phase high-performance liquid chromatography (20 × 150 mm; YMC C18) using a photodiode array detector. The column was eluted with MeOH:H2O (75:25) at a flow rate of 5 mL/min to afford compound 1 with >99% purity at a retention time of 19.4 min. The chemical structure of compound 1 was determined to be meleagrin [23] by mass spectrometry (MS) and nuclear magnetic resonance (NMR) spectral analyses. Preparation of derivatives of compound 1 Several derivatives of 1 were obtained by chemical modification of functional groups such as the hydroxyl and amine groups (Fig. 1). Demethoxylation of compound 1 afforded glandicolin A (2) together with compound 7 as a byproduct. Methylation of compound 1 produced oxaline (3), N14-methylmeleagrin (4), and O,N14-dimethylmeleagrin (5). O,N14-dimethylglandicolin (6) was obtained by methylation of compound 2. Details regarding the preparation procedures and spectral data of compounds 2-7 are presented in Information S1. FabI and FabK assay S. aureus FabI and E. coli FabI enzymes were cloned, overexpressed, and purified as described previously [24]. The wild-type fabK gene was amplified by PCR from genomic DNA obtained from Streptococcus pneumoniae KCTC 5412 using the primers 5′-GGAAACCATATGAAAACGCGTATTACGAA-3′ and 5′-CCGCTCGAGGTCATTTCTTACAACTCCTGT-3′, which contained NdeI and XhoI restriction sites, respectively. After the DNA sequence was confirmed, the gene was cloned into the pET22b vector (Novagen, Gibbstown, NJ, USA). The construct was transformed into E. coli BL21 (DE3) for expression following induction with isopropylthiogalactoside. The C-terminal His-tagged protein was purified as described previously [24]. Assays were conducted in half-area, 96-well microtiter plates. The compounds were dissolved in DMSO and evaluated in 100-µL assay mixtures containing components specific for each enzyme (see below). Reduction of the trans-2-octenoyl N-acetylcysteamine (t-o-NAC thioester) substrate analog was measured spectrophotometrically by following the consumption of NADH or NADPH at 340 nm and 30 °C during the linear period of the assay. S. aureus FabI assays contained 50 mM sodium acetate (pH 6.5), 200 µM t-o-NAC thioester, 200 µM NADPH, and 150 nM S. aureus FabI. NADH was used as the cofactor rather than NADPH for the E. coli FabI assay. Substrate concentrations used for the Lineweaver-Burk plot were 100, 200, 300, and 400 µM, whereas the cofactor concentrations were 100, 200, 400, and 600 µM. The rate of decrease in NADPH in each reaction was measured with a microtiter enzyme-linked immunosorbent assay (ELISA) reader using the SOFTmax PRO software (Molecular Devices, Sunnyvale, CA, USA). The inhibitory activity was calculated according to the following formula: % inhibition = 100 × [1 − (rate in the presence of compound/rate in the untreated control)]. IC50 values were calculated by fitting the data to a sigmoid equation. An equal volume of DMSO solvent was used for the untreated control. FabK assays contained 100 mM sodium acetate (pH 6.5), 2% glycerol, 200 mM NH4Cl, 50 µM t-o-NAC thioester, 200 µM NADH, and 150 nM S. pneumoniae FabK.
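The Methods state only that IC50 values were obtained by fitting to a sigmoid equation. A minimal sketch of such a fit, assuming a four-parameter logistic (Hill) model with entirely made-up concentration-response data, might look as follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ic50, slope):
    """Percent inhibition as a sigmoid function of inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (ic50 / conc) ** slope)

conc = np.array([1.56, 3.13, 6.25, 12.5, 25, 50, 100, 200])  # µM, hypothetical
inhib = np.array([4, 9, 18, 32, 49, 66, 80, 89])             # % inhibition, hypothetical

popt, _ = curve_fit(hill, conc, inhib, p0=[0, 100, 25, 1])
print(f"IC50 = {popt[2]:.1f} µM, Hill slope = {popt[3]:.2f}")
```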
Fluorescence quenching assay Fluorescence spectra were measured using a SHIMADZU fluorescence spectrophotometer (model RF-5310PC). S. aureus FabI (15 ng/mL) was incubated with different concentrations of triclosan (1, 2, 4, 8, and 16 nM in PBS buffer) and compounds 1, 5, or 7 (10, 20, 40, 80, and 160 nM in PBS buffer). Protein quenching was monitored at 25 °C using 5-nm excitation and emission bandwidths. The excitation wavelength was 280 nm, and the emission spectra were recorded between 290 and 430 nm. Determination of minimum inhibitory concentrations (MICs) Whole-cell antimicrobial activity was determined by broth microdilution as described previously [21]. The test strains, except for S. pneumoniae, were grown to mid-log phase in Mueller-Hinton broth and diluted 1,000-fold in the same medium. Cells (10^5/mL) were inoculated into Mueller-Hinton broth and dispensed at 0.2 mL/well into a 96-well microtiter plate. S. pneumoniae was grown in tryptic soy broth supplemented with 5% sheep blood. MICs were determined in triplicate by serial 2-fold dilutions of the test compounds. The MIC was defined as the concentration of a test compound that completely inhibited cell growth during a 24-h incubation at 30 °C. Bacterial growth was determined by measuring the absorption at 650 nm using a microtiter ELISA reader. Measurement of the inhibition of macromolecular biosynthesis To monitor the effects of compound 1 on lipid, DNA, RNA, protein, and cell wall biosynthesis, its effects on the incorporation of radiolabeled precursors in S. aureus and S. pneumoniae were measured as described previously [21]. S. aureus was grown exponentially to an A650 of 0.2 in Mueller-Hinton broth. S. pneumoniae was grown in tryptic soy broth supplemented with 5% sheep blood. Each 1-mL culture was treated with drugs at 2× the MIC for 10 min. An equal volume of DMSO solvent was added to the untreated control. After incubation with the radiolabeled precursors at 37 °C for 1 h, followed by centrifugation, the cell pellets were washed twice with PBS buffer. After acetate incorporation, the total cellular lipids were extracted with chloroform-methanol-water, and the incorporated radioactivity in the chloroform phase was measured by scintillation counting. For the other precursors, incorporation was terminated by adding 10% (w/v) TCA and cooling on ice for 20 min. The precipitated material was collected on Whatman GF/C glass microfiber filters, washed with TCA and ethanol, dried, and counted using a scintillation counter. The total counts incorporated at 1 h of incubation without inhibitors were >7,000 for the [U-14C]-labeled precursors. Frequency of spontaneously resistant mutants The frequency of spontaneous resistance was determined for S. aureus RN4220, S. aureus KCTC 1916, and E. coli KCTC 1942, which is highly sensitive to antibiotics. The organisms were grown to log phase by diluting an overnight culture in fresh media and re-incubating at 35 °C until the cultures reached a cell density of approximately 10^9 CFU/mL. A volume of 100 µL of the bacterial suspension was then applied to solid media containing 4× the MIC of 1, 5, or triclosan. Inocula were determined by applying 100 µL of 10-fold dilutions to solid media without drug. Colony-forming units were counted after 48 h of incubation at 35 °C. The ratio of the number of colonies on drug-containing plates to that on control plates was taken as the in vitro frequency of isolation of resistant CFU.
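The fluorescence quenching assay described at the start of this section is reported qualitatively in the Results. One common way to quantify such data, which the paper does not state it used and is therefore an assumption here, is a Stern-Volmer analysis, F0/F = 1 + Ksv[Q]; all intensity values below are invented.

```python
import numpy as np

# Quencher (compound 5) concentrations match the assay above, converted to molar.
q = np.array([10, 20, 40, 80, 160]) * 1e-9            # M
f0 = 1000.0                                            # unquenched intensity (arbitrary units, made up)
f = np.array([930.0, 870.0, 760.0, 610.0, 450.0])      # quenched intensities (made up)

# Stern-Volmer: F0/F = 1 + Ksv*[Q]; the slope of a linear fit estimates Ksv.
slope, intercept = np.polyfit(q, f0 / f, 1)
print(f"Ksv ≈ {slope:.3g} M^-1 (intercept ≈ {intercept:.2f}; ~1 expected)")
```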
Overexpression assay An overexpression assay using S. aureus RN4220, S. aureus RN4220 (pE194), and S. aureus RN4220 (pE194-fabI) was conducted to perform target validation of the FabI inhibitors as described previously [21]. Additionally, both fabI- and fabK-overexpressing E. coli were constructed to test the multitarget effect of the compounds. The wild-type fabI gene from the genomic DNA of E. coli W3110 was amplified by PCR using the primers 5′-ATGGGTTTTCTTTCCGGTAAGCGCA-3′ and 5′-TTTCAGTTCGAGTTCGTTCATT-3′. The wild-type fabK gene from the genomic DNA of S. pneumoniae KCTC 5412 was amplified by PCR using the primers 5′-ATGAAAACGCGTATTACA-3′ and 5′-GTCATTTCTTACAACTCCTGTCCA-3′. The resulting products were cloned into the pBAD-TOPO TA expression vector (Invitrogen, Carlsbad, CA, USA) to yield the pBAD-fabI and pBAD-fabK recombinant plasmids, which placed the expression of fabI and fabK, respectively, under the control of the arabinose promoter [25]. Recombinant pBAD-fabI and pBAD-fabK were then introduced into the pump-negative (tolC) E. coli EW1b via electroporation to generate E. coli EW1b (pBAD-fabI) and E. coli EW1b (pBAD-fabK), respectively. Isolation of meleagrin as a new FabI inhibitor A FabI inhibitor was isolated from Penicillium chrysogenum F717, a known penicillin-producing species. MS and NMR spectral analyses revealed that the inhibitor was meleagrin (1) (Fig. 1). Compound 1 inhibited both E. coli and S. aureus FabI. Mode of FabI inhibition The FabI reaction mechanism requires the nucleotide cofactors NADH or NADPH as the first substrates [26]. A FabI inhibitor could bind to the free enzyme, to the enzyme-substrate complex, or to both to prevent catalysis. In the first case, the inhibition pattern with respect to the cofactor would be competitive; in the second, non-competitive; and in the third, mixed-type inhibition would occur. Inhibition of S. aureus FabI by compound 1 was mixed with respect to trans-2-octenoyl N-acetylcysteamine, with a Ki value of 39.8 µM (Fig. 2A and 2C). In addition, compound 1 exhibited mixed inhibition with respect to NADPH, with a Ki value of 32.3 µM (Fig. 2B). Thus, compound 1 must bind to both the free enzyme and the FabI-NADPH complex to prevent binding of the nucleotide cofactor and the substrate, respectively. Effects of structural changes in compound 1 on FabI and related activity To determine whether structural changes in compound 1 influence its effects on FabI, compound 1 and its derivatives were tested against S. aureus and E. coli FabI and against bacterial growth (Table 1). Compounds 5 and 6, which were modified at both the 9-OH and 14-NH groups, showed significantly increased S. aureus and E. coli FabI-inhibitory activity and enhanced antibacterial activity against S. aureus and E. coli. In contrast, compounds 2, 3, and 4, which were modified at the 1-NH, 9-OH, and 14-NH groups, respectively, did not show altered activity. Compound 7, which was brominated at the benzene ring of compound 2, completely lost activity.
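The mixed-inhibition pattern and Ki values reported above imply a fit of the general mixed-inhibition rate law to rates collected at several substrate and inhibitor levels. The sketch below illustrates one such fit; the paper does not describe its fitting procedure, and all rate values are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def mixed(X, vmax, km, ki, alpha):
    """General mixed-inhibition rate law: v = Vmax*S / (Km*(1+I/Ki) + S*(1+I/(alpha*Ki)))."""
    s, i = X
    return vmax * s / (km * (1 + i / ki) + s * (1 + i / (alpha * ki)))

# Substrate (µM) crossed with inhibitor (µM) levels, mirroring the Lineweaver-Burk design.
s = np.array([100, 200, 300, 400] * 3, dtype=float)
i = np.repeat([0.0, 25.0, 50.0], 4)
v = np.array([0.40, 0.57, 0.67, 0.73,
              0.26, 0.38, 0.45, 0.49,
              0.19, 0.28, 0.34, 0.37])  # made-up rates

popt, _ = curve_fit(mixed, (s, i), v, p0=[1.0, 150.0, 40.0, 1.0])
print(f"Ki ≈ {popt[2]:.1f} µM, alpha ≈ {popt[3]:.2f}")
```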
Effects on fluorescence quenching of S. aureus FabI We examined whether the active compounds directly bind FabI by fluorescence quenching analysis. S. aureus FabI displayed strong maximal fluorescence at 307 nm after excitation at 270 nm (Fig. 3), whereas triclosan, kanamycin, 5, and 7 showed no fluorescence at this wavelength (data not shown). When S. aureus FabI was incubated with increasing amounts of the active compound 5, its fluorescence intensity decreased gradually (Fig. 3A), whereas the inactive compound 7 did not exhibit such an effect (Fig. 3B). Compound 1 showed the same pattern as compound 5 (data not shown). As a positive control, triclosan binding resulted in fluorescence quenching of S. aureus FabI (Fig. 3C), whereas kanamycin, as a negative control, did not (Fig. 3D). These data indicate that the active compounds 1 and 5 directly interact with S. aureus FabI, whereas compound 7 does not, thus explaining their effects on FabI. Inhibition of cellular fatty acid synthesis To evaluate whether the active compounds inhibit cellular fatty acid synthesis, we determined whether they blocked the incorporation of acetate into membrane fatty acids in vivo by measuring their effects on the incorporation of [1-14C]acetate into membrane fatty acids in S. aureus. In agreement with their antibacterial and FabI-inhibitory activities, the more active compounds 5 and 6 indeed blocked the incorporation of radioactively labeled acetate into chloroform/methanol-extractable phospholipids in vivo in a concentration-dependent manner, with approximately 2-fold higher activity than the less active compounds 1, 3, and 4 (Table 1). The inactive compound 7 did not inhibit fatty acid synthesis even at 200 µM, as expected. As a positive control, triclosan inhibited fatty acid synthesis in a concentration-dependent manner (data not shown). In contrast, the incorporation of leucine into proteins was not inhibited by the active compounds (Table 1), whereas the protein synthesis inhibitor chloramphenicol inhibited incorporation (data not shown). Antibacterial activity Consistent with their FabI-inhibitory activity, compounds 5 and 6 showed 2-4 times higher antibacterial activity than compound 1 against S. aureus RN4220 and the highly sensitive strain E. coli KCTC 1942 (Table 1), as expected. Interestingly, the compounds, although inactive against the FabK isoform, exhibited antibacterial activity against S. pneumoniae KCTC 3932, which contains only the FabK isoform. This finding suggests that the compounds inhibit not only FabI but also another target. Effects on fabI-overexpressing S. aureus An increase in the MIC for a fabI-overexpressing strain relative to the wild type indicates that FabI is the target of the antibacterial action [27]. The antibacterial activity of the active compounds against the fabI-overexpressing strain was investigated to determine whether overexpression of fabI shifted the MIC for S. aureus. The MICs for the fabI-overexpressing strain S. aureus RN4220 (pE194-fabI) were 4-8-fold higher than those for the wild-type strain S. aureus RN4220 or the vector-containing strain S. aureus RN4220 (pE194) (Table 2). The MIC of triclosan, used as a positive control, also increased in the fabI-overexpressing strain. Erythromycin, the selection marker for the vector pE194, showed increased MICs for both the fabI-overexpressing strain and the vector-containing strain, indicating that the engineered constructs functioned as expected. Antibiotics with different modes of action, such as oxacillin and norfloxacin, were applied as negative controls and did not change the MICs for the 3 strains, indicating that altered expression of fabI does not alter the sensitivity of cells to antibiotics in general. These results indicate that the active compounds inhibited the growth of S. aureus by inhibiting the fabI-encoded enoyl-ACP reductase. Frequency of spontaneously resistant mutants We attempted to isolate resistant mutants to determine which other gene or genes were targeted by the active compounds (Table 3).
As a control, triclosan-resistant mutants were isolated at frequencies of 3.30 ± 0.13 × 10^−8, 2.58 ± 0.04 × 10^−9, and 9.07 ± 0.08 × 10^−8 from S. aureus RN4220, S. aureus KCTC 1916, and the antibiotic-sensitive E. coli KCTC 1942, respectively. However, no mutants resistant to compounds 1 and 5 were detected in any of the strains tested. These results suggest that compounds 1 and 5 inhibit multiple targets. Effects on macromolecular biosynthesis To identify other pathways inhibited by compound 1, its effects on the incorporation of radiolabeled precursors of macromolecular synthesis in S. pneumoniae and in S. aureus were investigated. All reference antibacterial agents selectively inhibited their respective macromolecular synthesis pathways, consistent with their known mechanisms of action (Table 4). Compound 1 inhibited the incorporation of acetate into lipids in S. aureus and S. pneumoniae by 62% and 65%, respectively, whereas the incorporation of thymidine, uridine, isoleucine, and N-acetylglucosamine into DNA, RNA, protein, and the cell wall, respectively, was not inhibited. Because compound 1 is inactive against the FabK isoform, these data suggest that compound 1 inhibits at least one additional target besides FabI in the fatty acid pathway. Effects on fabK-overexpressing E. coli To demonstrate that the active compounds 1 and 5 inhibit not only FabI but also an additional target, we cloned fabK and fabI into an arabinose-inducible expression system, the vector pBAD TOPO, and placed the resulting plasmids in a TolC-negative E. coli host. Because FabK is insensitive to compounds 1 and 5, if the compounds inhibited only FabI, expression of fabK in E. coli would confer resistance to compounds 1 and 5, since the expressed FabK could compensate for the inhibited FabI. As expected, the MICs of compounds 1 and 5 for the fabI-overexpressing E. coli EW1b (pBAD-fabI) were 4-fold higher than those for wild-type E. coli EW1b and the vector-containing E. coli EW1b (pBAD) in the presence of arabinose (Table 5). However, the MICs for the fabK-overexpressing E. coli EW1b (pBAD-fabK) did not change. As a positive control, triclosan, which does not inhibit FabK, showed inducer-dependent higher MICs for both fabK-overexpressing and fabI-overexpressing E. coli. Thus, S. pneumoniae FabK can replace E. coli FabI for fatty acid synthesis, which in turn indicates that FabI is the only target of triclosan in this system. Ampicillin, the selection marker for the pBAD vector, showed increased MICs for all vector-containing strains, demonstrating normal functioning of the constructs. Actinonin, a peptide deformylase (PDF) inhibitor applied as a negative control, did not change the MICs of any of the tested strains. These results clearly indicate that, unlike triclosan, the active compounds 1 and 5 inhibit an additional target as well as FabI. Discussion We screened 25,000 microbial extracts consisting of actinomycetes and fungi to identify new FabI inhibitors. Meleagrin was isolated from the solid-state fermentation of the fungal strain P. chrysogenum F717. Meleagrin was previously isolated from P. meleagrinum [28] and P. chrysogenum [29], but its biological activity, including antimicrobial activity, had not been reported. Although its activity was weak, meleagrin clearly showed inhibition selective for S. aureus FabI over S. pneumoniae FabK. Importantly, the binding of meleagrin to S. aureus FabI was demonstrated by the fluorescence quenching assay.
Furthermore, its inhibition of FabI was supported by results obtained with its chemical derivatives, the intracellular fatty acid synthesis assay, and the fabI-overexpression assay. Interestingly, meleagrin and its more active derivatives showed antibacterial activity against S. pneumoniae, in which FabK is the sole enoyl-ACP reductase, and they did not produce spontaneously resistant mutants of S. aureus or E. coli, in contrast to triclosan, which suggests that meleagrin inhibits multiple targets. Meleagrin inhibited the incorporation of radiolabeled acetate into lipids in S. pneumoniae and S. aureus, whereas incorporation of thymidine (DNA), uridine (RNA), isoleucine (protein), and N-acetylglucosamine (cell wall) was not inhibited, which indicates that these compounds inhibit fatty acid synthesis through one or more modes of action in addition to FabI inhibition. The multitarget effect was confirmed by the fabK-overexpression assay in E. coli. The multitarget effect is very important from the point of view of drug development, because a single point mutation in one gene can render a strain resistant to a single-target drug and the drug useless. Thus, considering that one of the advantages of antibacterial agents with multiple targets is the reduced development of drug resistance [10], meleagrin and its derivatives hold promise for the development of new antibiotics that can treat infections caused by multidrug-resistant pathogens. Several FabI inhibitors have been reported, and most were derived from compound libraries and developed synthetically using structure-based approaches, including 1,4-disubstituted imidazoles, aminopyridines, naphthyridinones, and thiopyridines [30]. Although synthetic inhibitors are potent, they have the disadvantage that resistant mutants occur at relatively high frequency [16]. A few natural FabI inhibitors have been reported, such as vinaxanthone [21], cephalochromin [31], kalimantacin/batumin [22], epigallocatechin gallate (EGCG), and flavonoids [32]. EGCG and flavonoids inhibit several targets, such as FabG, FabZ, and FabI. The mode of action of vinaxanthone, cephalochromin, and kalimantacin/batumin was demonstrated using FabI-overexpressing strains. To our knowledge, this is the first study of a multitarget effect of FabI inhibitors. In summary, meleagrin is a new class of FabI inhibitor with antibacterial activity against multidrug-resistant bacteria such as MRSA and quinolone-resistant S. aureus (QRSA). Meleagrin is structurally unique and inhibits at least one more target in addition to FabI, so that no resistant mutants arise; thus, meleagrin may have potential as a useful lead compound for the development of a new anti-MRSA agent.
Design of a Compact Printed Log-Periodic Biconical Dipole Array Antenna for EMC Measurements : This article presents the design, modeling, and fabrication of a printed log-periodic biconical dipole array antenna (PLPBDA) for electromagnetic compatibility (EMC) measurements. The proposed structure uses bow tie-shaped dipoles instead of typical dipoles to achieve a size reduction of 50% and a bandwidth enhancement of 170% with the help of PCB technology. Furthermore, a balanced feeding method and modifications of the bow tie-shaped dipole dimensions were utilized to obtain a broad bandwidth of 5.5 GHz (from 0.5 GHz to 6 GHz). The structure comprises 12 dipole elements with a compact size of 170 × 160 × 1.6 mm and, with the help of an extra dipole, exhibits a low-fluctuation gain of about 4.6-7 dBi. Moreover, the achieved frequency and radiation characteristics (simulated and measured) agree with each other and are compatible with the results of classical EMC antennas. The proposed structure shows promising results compared with both the reviewed literature and the commercially available reference antenna HyperLOG® 7060. Introduction With the rapid growth of wireless communications, the need for ultra-wideband antennas has appeared in various applications. Furthermore, the antenna design is key to every wireless system, since it controls the radiation characteristics according to the application's specifications [1]. Ultra-wideband antennas are used in different applications, e.g., in radio systems for communications and in electromagnetic compatibility (EMC) measurement applications. The evolution of wireless systems has motivated researchers to develop new communication forms to exploit the spectrum in the best way and enhance reception quality [2-5]. Cognitive radio technology has been the best solution for this, as it consists of two different antennas: one for sensing, with an ultra-wide band (3.1-10 GHz), to identify the state of the band (idle or active), while the other is a communication antenna (a reconfigurable antenna) [6-11]. This research focuses on EMC applications, where an ultra-wideband antenna can be used as a reference antenna for emission and immunity tests of the device under test (DUT) inside the EMC chamber [12]. Several antenna configurations have been utilized to measure electromagnetic interference (EMI), depending on the operating frequency and radiation characteristics. For instance, in [13], the authors showed the antenna factor (AF) characteristics of a sleeve dipole antenna for EMC measurement by changing the sleeve dipole parameters, which offered an 86% size reduction compared with the conventional biconical antenna with similar characteristics. The performance of the log-periodic dipole array antenna was improved in [14] using a saw-tooth-shaped feedline, in which successive dipoles are arranged in the same horizontal plane, eliminating the unwanted vertical electric field component. A complementary log-periodic dipole array with cross-polarization was proposed in [15]. This structure has a set of dipole antennas orthogonal to the conventional log-periodic dipole antennas, offering circular polarization without any hybrid junction. A pair of printed broadband Vivaldi antennas with a coaxial feeding method operating from 0.5 GHz to 4 GHz was designed, fabricated, and tested [16].
Moreover, that design served as a reference antenna for EMC measurement since it exhibited stable radiation characteristics and a maximum gain of 6.2 dBi. The width of the ridge of the double-ridged guide horn (DRGH) antenna was tapered linearly in [17]. This process maximized the effective radiation aperture and reduced the beamwidth compared with a conventional 1-18 GHz DRGH. Another horn antenna with miniature size and wide bandwidth was presented in [18], where the idea of extending the lower frequencies was inspired by the fishtail structure and the classical ridge structure. A UWB skeletal antenna, a member of the wire UWB antenna family, was proposed in [19]; it showed good VSWR results compared with the biconical antenna in the band up to 200 MHz. Ref. [20] presented a novel UWB monopole antenna for EMC measurement applications, covering two bands (0.79-1 GHz) and (1.37-10 GHz). In Ref. [21], the authors proposed a novel method for optimizing a small elliptical planar dipole antenna for ultra-wideband EMC applications. The characteristics of this antenna, such as its wide band (1-5 GHz) and flat gain, made it a powerful tool for EMC measurements. The LPDA antenna is extensively used because it provides high directivity and flat gain over a wideband spectrum [22]. Moreover, an LPDA antenna is called frequency-independent when the ratio of the higher to the lower frequency is more than ten, in which case the impedance and radiation characteristics remain essentially constant with frequency. The lowest operating frequency of the LPDA determines its size and, consequently, the length of the longest dipole. Since the targeted operating band starts from 500 MHz, the LPDA length will be considerably large. To overcome this size limitation, the printed log-periodic dipole array (PLPDA) antenna has recently been presented, utilizing printed circuit board (PCB) technology, which offers good specifications such as low cost, low profile, small size, and easy fabrication [23]. In a PLPDA, all the parameters of the conventional LPDA antenna are divided by the square root of the effective dielectric constant (√ε_eff). The majority of EMC reference antennas are dedicated to the band from 700 MHz to 2.4 GHz, since this band is occupied by different applications, such as GSM 850-900 MHz, mobile 1800 MHz, 3G 2100 MHz, and Wi-Fi 2400 MHz, and has a high probability of interference [24]. On the other hand, the band from 2.5 GHz to 6 GHz must also be taken into account, because it is occupied by another set of critical applications, such as WiMAX 3.5 GHz and 5.3 GHz, the 5G mid-band 2.5-3.8 GHz, PAN 4.8 GHz, and WLAN 5.8 GHz [25]. In the last decade, several PLPDA structures serving different applications have been presented; some offer size reduction, while others provide wide bandwidth. For instance, Casula et al. [26] showed an ultra-wideband (4-18 GHz) printed log-periodic dipole array antenna design with 15 dipoles. An infinite balun was realized using two symmetrical coaxial cables attached at the top and bottom sides. Moreover, this antenna was designed to stabilize its radiation pattern without changing the phase center across the operating band. Step-by-step design procedures for a PLPDA antenna were illustrated in [27]. The design started with nine dipole elements according to scaling and spacing factor values of 0.78 and 0.14, respectively.
Then, three extra dipoles were added to satisfy the condition S11 < −10 dB across the whole operating band. This antenna therefore offered a wide bandwidth from 800 MHz to 2.5 GHz with size reduction, using only 12 dipoles. In [28], a PLPDA antenna with a balanced feed structure was presented. The authors modified the width of the feeding lines to compensate for the soldering effect and obtain a broad impedance bandwidth from 500 MHz to 3 GHz. Furthermore, a stable high gain with a low tolerance of 0.5 dB was achieved. In [29], 48 dipole elements were utilized to obtain a wide bandwidth of 8.5 GHz using a hat-loading technique for the first three dipoles and T-shaped loading for the following three dipoles. Moreover, a wide impedance bandwidth was achieved using meandered-line and trapezoid-stub methods. Another wideband PLPDA structure (0.5-10 GHz) with 25 dipole elements was presented in [30]; wide bandwidth and size reduction were achieved using dual-band dipole technology. Ref. [31] offered a PLPDA with a bandwidth of 0.8-2.5 GHz using 12 dipole elements and a small size. These 12 dipoles were arranged so that the length of each one decreases gradually relative to the next, and each dipole resonates at its center frequency so as to cover the overall EMC L-band spectrum. A wideband printed LPDA antenna (0.4 GHz to 8 GHz) was proposed in [32]. The low-frequency response of this structure was improved by replacing the longest traditional dipole with a triangular shape and optimizing the width, length, and spacing of the following four dipoles. The upper frequency range of the PLPDA antenna proposed in [33] was increased to operate from 780 MHz up to 18 GHz by introducing a ratio-factor parameter together with a truncation method to improve the antenna's properties. One of the motivations for using a compact PLPDA antenna rather than a classical one in EMC measurement is the shorter measurement distance. A shorter measurement distance can achieve a high field strength in the uniform field area (UFA) without increasing the input power in the radiated immunity test. Furthermore, radiated emission and radiated immunity are essential criteria for EMI measurements and should be evaluated in the far-field region. Figure 1 depicts the EMC measurement setup according to CISPR standards. The radiation pattern of the reference antenna must cover the device under test to obtain a proper response. Usually, the devices under test have different dimensions, so different reference antennas would be required to obtain the maximum field strength. Unfortunately, keeping many antennas in one EMC laboratory is not the right choice; the alternative is to use a small number of reference antennas and achieve the maximum field strength by changing the measurement distance according to the device under test. Compact antennas suit this approach, since the DUT remains in their far-field region even as the test distance changes [34]. The minimum measurement distance is controlled by the largest dimension of the antenna (D), the DUT dimension, and the maximum resonance frequency (above 1 GHz) according to CISPR 16-1-2 [35]. Let us compare the classical log-periodic dipole array antenna, which has a largest dimension of D = 340 mm, with the compact antenna having D = 170 mm, both operating up to 6.5 GHz. According to (1), the shorter measurement distance for the classical antenna should be ds ≥ 3 m.
On the other hand, the minimum measurement distance for the proposed antenna should be ds ≥ 0.6 m, while it must be ≥1 m according to the CISPR standard [35]. Due to the compact size of the proposed antenna, the measurement distance (ds) can be adjusted to 1.25 m in the case of small DUTs; in this case, the illumination area will be 1.5 m, making it suitable for most DUTs. The other motivation is the test configuration issue. Based on EMC standards, i.e., CISPR 16-2-3, the distance between the reference antenna and the ground plane must not be less than 25 cm. The main problem occurs in tests with the antenna in vertical orientation, where the antenna comes very close to the ground, especially at low frequencies. This leads to wrong measurements due to interference between the reference antenna and the ground plane [36]. It is not an issue for the printed reference antenna: its size is minimized by using a substrate with a relatively high permittivity (εr = 4.3), so the condition is satisfied even at low frequencies. This paper presents an analytical study of a small-size printed log-periodic dipole array antenna based on bow tie-shaped dipoles instead of the typical printed dipoles. The structure aims to tackle both goals, bandwidth enhancement and size reduction, to serve as a reference antenna in EMC measurements for the band from 0.5 GHz to 6.5 GHz. Section 2 describes the comparative analysis of conventional and bow tie-shaped dipoles. The basic design of the log-periodic antenna is illustrated in Section 3. Section 4 briefly discusses the various feed techniques and their effect on the antenna characteristics, while Section 5 presents the simulation and measurement results and compares the reviewed literature with the proposed design. Finally, Section 6 presents a comprehensive conclusion with recommendations. Comparative Analysis of Conventional and Biconical Dipoles This section focuses on the benefits of using a biconical dipole instead of a classical one. The size reduction and bandwidth enhancement have been demonstrated by designing and simulating two dipoles, a conventional dipole and a biconical dipole, in CST Microwave Studio with a discrete feeding port, as shown in Figure 2. Both dipoles have a length L = 170 mm and widths w1 = 6 mm and w2 = 16 mm, respectively. The proposed structures are based on an FR-4 substrate with a size of Ls = 180 mm and Ws = 160 mm. Figure 3 shows the reflection coefficient in dB versus frequency. It is clear that the proposed dipole requires a shorter length than a conventional one to achieve the same resonance frequency, and this advantage compounds when an array of these dipoles is used. Moreover, the proposed dipole offers a wider bandwidth than the traditional one, as is evident from the biconical dipole impedance curve in Figure 4, which is flatter with frequency than the conventional dipole curve. Figure 5 shows the role of the dimension d (in mm) in tuning the bandwidth of the biconical antenna; this feature is valuable in the following design steps for obtaining the optimum dimensions of the PLPDA antenna. It can be concluded that keeping the starting points of the electromagnetic waves close to each other directly helps broaden the bandwidth [1], which is why the biconical dipole has a wider bandwidth than the conventional one.
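A quick back-of-envelope check makes the shortening effect concrete: a printed half-wave dipole is roughly a factor of √ε_eff shorter than its free-space counterpart. The ε_eff value below is an assumed round number for traces on FR-4, not a value quoted in this work, yet the result lands close to the 170 mm element length used in Figure 2.

```python
c = 3e8           # speed of light, m/s
f = 0.5e9         # lowest design frequency, Hz
eps_eff = 3.0     # assumed effective permittivity for traces on FR-4 (eps_r = 4.3)

l_free = c / (2 * f) * 1e3                  # free-space half-wave dipole length, mm
l_printed = l_free / eps_eff ** 0.5         # printed length after sqrt(eps_eff) scaling, mm
print(f"free space: {l_free:.0f} mm, printed: {l_printed:.0f} mm")  # 300 mm vs ~173 mm
```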
Printed Log-Periodic Dipole Array Antenna Design The log-periodic dipole antenna was first derived from the conventional dipole (radiating at half a wavelength) by Isbell [37]. It consists of several dipoles, each of which resonates at the wavelength corresponding to its length. It is worth mentioning that dipoles longer than the resonant length act as reflectors, whereas dipoles shorter than the resonant length act as directors [38]. Moreover, the classical analysis method was described by Carrel [39], who presented straightforward design procedures along the following steps: 1. According to the desired directivity, the scaling factor (τ) and spacing factor (σ) can be evaluated, with the optimum spacing given by the straight line σ = 0.243τ − 0.051. 2. Use Equations (2)-(4) to find the required number of dipoles, where $B_s$ and $B_{ar}$ denote the structure bandwidth and the active-region bandwidth, respectively: $\cot\alpha = 4\sigma/(1-\tau)$ (2), $B_{ar} = 1.1 + 7.7(1-\tau)^2 \cot\alpha$ (3), and $N = 1 + \ln(B_s)/\ln(1/\tau)$ with $B_s = B \cdot B_{ar}$, where B is the desired bandwidth $f_{max}/f_{min}$ (4). 3. The length of the longest dipole (the first one), which corresponds to the lowest frequency, follows from Equation (5), $L_1 = \lambda_{max}/2 = c/(2 f_{min})$. 4. The distance between successive dipoles can be calculated using Equation (6), $d_n = 2\sigma L_n$. 5. Finally, the lengths of the dipoles, the widths of the dipoles, and the spacings between dipoles should be divided by the square root of the effective dielectric constant, giving $L_n/\sqrt{\varepsilon_{eff}}$, $W_n/\sqrt{\varepsilon_{eff}}$, and $R_n/\sqrt{\varepsilon_{eff}}$, respectively [32]. The effective dielectric constant is described by Equation (12), $\varepsilon_{eff} = (\varepsilon_r+1)/2 + ((\varepsilon_r-1)/2)(1 + 12h/w)^{-1/2}$. For the EMC measurement application, narrow bandwidth and large size were the main issues in designing printed log-periodic antennas. Using an antenna as a reference in EMC measurements requires a wide bandwidth to cover electromagnetic interference (EMI) in the communication bands spread over the whole spectrum. On the other hand, the size strongly affects the achievable measurement distance and the test configuration. Therefore, the classical dipole elements were replaced with a trapezoidal shape to form a biconical array antenna instead of the typical dipole array, since the biconical antenna offers a wider bandwidth than a classical dipole antenna [40]. By doing so, the proposed design achieves both bandwidth improvement and size reduction simultaneously. The geometries of both the conventional and the biconical dipole array antennas are presented in Figure 6. The spacing between adjacent dipoles becomes smaller toward the high-frequency dipoles. The longest dipoles, which cover the low frequencies, have wider individual bands than the shortest dipoles, whose bands are sharp; the spacing toward the short dipoles should therefore be kept small so that these sharp bands lie close to each other, which leads to a wide overall band. Figure 7 shows the reflection coefficient of the conventional and proposed designs. The biconical dipoles significantly improve the impedance bandwidth (from 0.5 GHz to 5.5 GHz) compared with the linear dipoles (from 0.7 GHz to 3.3 GHz). Hence, the biconical dipoles perform better than the conventional dipoles. Even with this promising reflection-coefficient result for the biconical dipole array antenna, the voltage standing wave ratio still does not satisfy the condition VSWR < 2, especially at 2.4 GHz, where the reflection coefficient is approximately −9 dB.
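Returning to the design procedure listed earlier in this section, the Carrel steps can be collected into a short script. The τ, σ, band edges, and ε_eff below are illustrative assumptions, and the element count produced corresponds to a conventional LPDA; the biconical elements adopted in this work are what allow the final design to manage with only 12 dipoles.

```python
import math

tau, sigma = 0.85, 0.155          # scaling and spacing factors (assumed)
f_lo, f_hi = 0.5e9, 6.0e9         # target band, Hz
eps_eff = 3.0                     # assumed effective permittivity
c = 3e8

cot_alpha = 4 * sigma / (1 - tau)                      # Eq. (2)
b_ar = 1.1 + 7.7 * (1 - tau) ** 2 * cot_alpha          # active-region bandwidth, Eq. (3)
b_s = (f_hi / f_lo) * b_ar                             # structure bandwidth
n = 1 + math.ceil(math.log(b_s) / math.log(1 / tau))   # number of dipoles, Eq. (4)

l1 = c / (2 * f_lo)                                    # longest dipole, Eq. (5)
lengths = [l1 * tau ** k for k in range(n)]
spacings = [2 * sigma * l for l in lengths[:-1]]       # d_n = 2*sigma*L_n, Eq. (6)

# Printed version: divide all dimensions by sqrt(eps_eff), then convert to mm.
lengths_mm = [l / math.sqrt(eps_eff) * 1e3 for l in lengths]
print(f"N = {n}, longest printed dipole = {lengths_mm[0]:.0f} mm, "
      f"first spacing = {spacings[0] / math.sqrt(eps_eff) * 1e3:.0f} mm")
```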
This additional dipole was inserted between the input port and biconical element number 11 [27]. Throughout the whole frequency band from 0.5 GHz to 6 GHz, VSWR < 2 and the reflection coefficient is now below −10 dB. Furthermore, it was found that changing the length of this extra dipole also significantly affects the gain. Figure 9 presents the gain variation for different lengths of the extra dipole. The length L12 = 10 mm gives the lowest gain fluctuation, which is necessary to achieve a good antenna factor with low uncertainty. The gain could be made flatter by increasing the length of the additional dipole further, but this would degrade the impedance matching; there is a trade-off. Therefore, L12 = 10 mm is the optimum value for both the S-parameters and the gain. Figure 10a shows the optimized geometry of the design, while Table 1 lists the optimum values of each dipole element's parameters. The dipole width is set to 10 mm, except for the first dipole's width (W1 = 13 mm) and the extra dipole's width (W12 = 5 mm). On the other hand, the parameter d plays a vital role in achieving broadband impedance matching, since it is the central part of shaping every dipole's biconical profile, as shown previously in Figure 5. Finally, an optimization of the overall dimensions was performed to obtain better performance using the facilities of CST Microwave Studio [41]. The substrate is epoxy FR-4 with relative permittivity εr = 4.3 and loss tangent tan δ = 0.025. Figure 10b depicts the cross-section of the proposed structure. Feeding Techniques The feeding technique plays a vital role in the design of a log-periodic dipole array antenna. The typical feeding method consists of two non-radiating microstrip lines, one attached to each side of the substrate, connecting the successive dipoles. There is a 180° phase difference between every two consecutive dipoles, ensuring that energy radiates only from the excited dipole, with no contribution from the coupling of the next dipole, which points in the reverse direction. The width of each microstrip feeding line w_f can be calculated using Equation (13) [40], where h is the substrate height and z_0 is the characteristic impedance of 50 Ω. In this work, the balanced feeding method is employed. The top microstrip line has a width w_f1 = 3.5 mm, while the width of the bottom microstrip line is w_f2 = 5 mm. The 50 Ω impedance point is usually located near the narrow tip of the PLPDA antenna. To keep the reflection coefficient below −10 dB for all elements in the array, a parametric sweep was performed on the transmission-line width, creating a balun that balances the surface current distribution between the two sides of the transmission line, as described in [28]. These optimum values were obtained with the help of CST Microwave Studio to achieve a wide impedance bandwidth from 0.5 GHz to 6 GHz. Simulation and Measurement Results The proposed design was fabricated and tested in the EMC laboratory of the Faculty of Electrical Engineering, University of West Bohemia. The fabrication was carried out on an LPKF ProtoMat S100 CNC machine. The substrate is epoxy FR-4 with relative permittivity εr = 4.3 and loss tangent tan δ = 0.025. The prototype of the fabricated design is shown in Figure 11.
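Before turning to the measured results, the sizing procedure of Section 3 and the feed-line width of Equation (13) can be summarized in code. Since Equations (2)-(6), (12), and (13) are not reproduced in the text, the sketch below uses the standard Carrel relations (as given, e.g., in Balanis' textbook treatment) and Hammerstad's microstrip synthesis as stand-ins; the values τ = 0.76 and ε_eff = 3.2 are illustrative assumptions, not the paper's optimized parameters.

```python
import math

C = 3e8  # speed of light (m/s)

def carrel_lpda(f_low, f_high, tau, eps_eff):
    """Classical Carrel LPDA sizing; standard textbook relations are used
    here because the paper's Equations (2)-(6) are not reproduced."""
    sigma = 0.243 * tau - 0.051                      # optimal spacing factor (step 1)
    cot_alpha = 4.0 * sigma / (1.0 - tau)
    b = f_high / f_low                               # desired bandwidth
    b_ar = 1.1 + 7.7 * (1.0 - tau) ** 2 * cot_alpha  # active-region bandwidth
    b_s = b * b_ar                                   # structure bandwidth (step 2)
    n = math.ceil(1.0 + math.log(b_s) / math.log(1.0 / tau))
    length = C / (2.0 * f_low)                       # longest free-space dipole (step 3)
    lengths, spacings = [], []
    for _ in range(n):
        lengths.append(length / math.sqrt(eps_eff))               # printed scaling (step 5)
        spacings.append(2.0 * sigma * length / math.sqrt(eps_eff))  # step 4
        length *= tau
    return n, lengths, spacings

def microstrip_width(z0, eps_r, h):
    """Hammerstad synthesis for a narrow line (W/h < 2), a common
    stand-in for the paper's Equation (13)."""
    a = (z0 / 60.0) * math.sqrt((eps_r + 1.0) / 2.0) \
        + (eps_r - 1.0) / (eps_r + 1.0) * (0.23 + 0.11 / eps_r)
    return h * 8.0 * math.exp(a) / (math.exp(2.0 * a) - 2.0)

n, lengths, spacings = carrel_lpda(0.5e9, 6.0e9, tau=0.76, eps_eff=3.2)
print(n, f"{1e3 * lengths[0]:.1f} mm")                        # element count, longest element
print(f"{1e3 * microstrip_width(50.0, 4.3, 1.6e-3):.2f} mm")  # 50-ohm line width
```

With these assumed inputs the sketch yields 13 elements, a longest printed element of about 168 mm, and a 50 Ω line width of about 3.1 mm on 1.6 mm FR-4, all of the same order as the 12 elements, 170 mm board length, and 3.5 mm feed-line width reported in this paper.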
S11-Parameter The reflection coefficient of the proposed design was measured using a RIGOL DSA875 spectrum analyzer (9 kHz-7.5 GHz) with directional couplers (RIGOL VB 1032 and RIGOL VB 2032), as shown in Figure 12a. The two directional couplers are used together to cover the band up to 8 GHz (RIGOL VB 1032: 0.1-3.2 GHz; RIGOL VB 2032: 2-8 GHz). Figure 12b shows the simulated and measured return losses. The design offers a wide impedance bandwidth of 0.55-6 GHz in both simulation and measurement. Surface Current Distribution The surface current distribution gives deep insight into the structure's behavior, and this quantity is available in CST Microwave Studio. Figure 13 shows the surface current distribution in various frequency bands. The transition of the active region from the large dipoles to the smaller ones coincides with the transition of the resonance frequency. A smooth and continuous transition of the active region translates into high gain and a stable radiation pattern. Radiation Pattern The EMC chamber at the University of West Bohemia was used for the radiation pattern and gain measurements, as shown in Figure 14. The proposed antenna is rotated 360° about its axis in both vertical and horizontal orientations to obtain the E-plane and H-plane results, respectively. The simulated and measured radiation patterns in the E-plane and H-plane are shown in Figures 15 and 16, respectively. The measured radiation patterns in both planes agree well with the simulated results from CST Microwave Studio. The direction of the main lobe is at 90° in both the elevation and azimuth planes. The back lobe in the azimuth plane is prominent at low frequencies and decreases gradually towards high frequencies. A deterioration of the radiation pattern is clearly observed in the high-frequency bands in both the E-plane and H-plane patterns of Figures 15 and 16, respectively [42]. Axial Ratio, Co- and Cross-Polarization Important parameters in the design of a reference antenna are the co-polarization (desired radiation) and cross-polarization (orthogonal to the desired radiation) of the radiation pattern in both the azimuth and elevation planes. The EMC reference antenna is required to be linearly polarized. In practice, however, the realized behavior deviates slightly from the designer's intention. For instance, even the log-periodic dipole array antenna, which uses an alternating dipole arrangement to provide a smooth phase transition between the elements, exhibits elliptical polarization. Furthermore, unwanted radiation appears when the phase is not aligned with the main element (cross-polarization); this contribution cannot be eliminated, only minimized by proper design. For EMC applications, the acceptable cross-polarization rejection ratio is 14 dB to 20 dB [43]. The cross-polarization of the proposed antenna satisfies the EMC standards in both the E-plane and the H-plane. The axial ratio (AR) indicates the type of polarization: circular, elliptical, or linear. The axial ratio of circular polarization lies between 0 and 3 dB, the AR of elliptical polarization is higher than 3 dB, and linear polarization corresponds to an AR tending (theoretically) to infinity. In fact, there is no practical or industrial norm for differentiating an elliptically polarized antenna from a linearly polarized one in terms of the axial ratio.
The authors in [44] claimed that their design exhibits linear polarization with AR > 10 dB in the directions (ϕ = 0, θ = 0) and (ϕ = 90, θ = 0), since linear polarization may be viewed as a special case of elliptical polarization. The phase difference between the two gain components was close to zero in the direction (θ = 0), indicating linear polarization arising from the radiation pattern properties [45]. The proposed structure offers linear polarization with an AR of more than 20 dB except at a few frequencies, as shown in Figure 17. Realized Gain and Antenna Factor As mentioned earlier, the bandwidth and the antenna factor are critical in the design of a reference antenna. A wide bandwidth allows detection of EMI over a wide range of applications, while the antenna factor measures how well the structure serves as a reference antenna: it allows the incident field in space to be determined from the received voltage. According to Equation (14), the antenna factor is inversely proportional to the wavelength times the square root of the realized gain [16]; Equation (15) expresses the same relation in decibels. The simulated and measured realized gains (dBi) are depicted in Figure 18. The relatively small fluctuation of the gain values (4.6-7 dBi) translates into well-behaved antenna factor values (24-41 dB/m). Figure 19 shows the antenna factor versus frequency, and the antenna factor values are listed numerically for each frequency band in Table 2. The gain and, consequently, the antenna factor are in line with the typical values of a standard EMC antenna [46]. Comparison with the Reviewed Literature In the last decade, several PLPDA antenna structures have been proposed to serve as reference antennas inside the chamber for EMC measurements, using different techniques for size reduction and bandwidth enhancement. Table 3 lists the design specifications and achievements of these articles. It is worth mentioning that the fractional bandwidth (FBW) expresses the bandwidth as a percentage and can be evaluated using Equations (16) and (17). Additionally, the size is given in terms of the wavelength at the lower band edge (f_l). Table 3 illustrates the design specifications of several designs proposed as reference antennas for EMC measurements inside the chamber [27-32]. Bandwidth enhancement and size reduction are the main goals of all these works, controlled by the number of dipole elements and the spacing factor. For instance, [29] offers a wide impedance bandwidth of about 8.5 GHz (FBW = 177%) with a fluctuating gain of 2.4-7.8 dBi, but requires 48 elements and a size of 0.49 × 0.355 λ_L. The authors in [30] use a dual-band dipole element technique to achieve a wide bandwidth of 9.5 GHz (FBW = 181%) with a gain of 3-6 dBi, but require 25 elements and a size of 0.36 × 0.43 λ_L. Our work, on the other hand, tackles bandwidth enhancement and size reduction together. The proposed design uses biconical dipoles to obtain a wide impedance bandwidth of 5.5 GHz (FBW = 170%) with a relatively low gain fluctuation of 4.6-7 dBi, and it requires only 12 elements on a small footprint of 0.28 × 0.26 λ_L. Table 4 presents the miniaturization techniques used in [29,30] and the size-reduction percentage compared to our work.
Moreover, the nearly constant gain over the whole frequency band translates into a good antenna factor compared to the commercial design. Table 4. Comparison between the miniaturization techniques used in [29,30] and the proposed work. Comparison with the Commercial LPDA Antenna (HyperLOG® 7060) A comprehensive comparison between the proposed structure and the commercial design HyperLOG® 7060 from the AARONIA AG website is given in Table 5 [47]. The HyperLOG® 7060 antenna has a relative bandwidth of 158% with a size of 340 × 200 × 25 mm, whereas the proposed antenna has a better relative bandwidth of 170% with a compact size of 170 × 160 × 1.6 mm. Moreover, both the commercial and proposed designs have an acceptably low gain variation for EMC applications and exhibit good antenna factor (AF) values. The antenna factor measures how suitable the proposed design is to serve as a reference antenna, by comparing the AF of the proposed structure with the standard AF. Unfortunately, none of the reviewed works reports the antenna factor. In this work, the antenna factors of the reviewed designs covering bands up to 6 GHz, and of the proposed design, were calculated from their reported gains (in dBi) using Equation (15). The results are compared with the commercial HyperLOG 7060 in Table 6. The AF of the proposed design has a lower tolerance than that of the commercial HyperLOG 7060 thanks to the small fluctuations of the realized gain. It is worth mentioning that the minimum 3 dB beamwidth of the proposed antenna complies with the standard limits for the classical PLPDA in CISPR 16-1-2, as shown in Table 7. The minimum dimension w can be calculated using Equation (18), where w is the minimum dimension of the line tangent to the DUT formed by the minimum 3 dB beamwidth (∅3dB), as shown in Figure 1: w = 2 × d × tan(0.5 × ∅3dB) (18), where d is the minimum measurement distance between the reference antenna and the DUT and can be 1 m, 3 m, or 10 m. Conclusions A compact-size printed log-periodic dipole array antenna has been designed, modeled, and fabricated. The design is dedicated to serving as a reference antenna for EMC measurements. The use of dipoles with biconical shapes instead of conventional ones yields a size reduction of 50% and a bandwidth enhancement (relative bandwidth of 170%). Furthermore, the balanced feeding method is deployed to obtain wideband impedance matching (from 0.5 GHz to 6 GHz). The compact size gives the freedom to change the measurement distance to 1.25 m in the case of a small DUT; in this case the illuminated width will be 1.5 m, which is suitable for most DUTs. A good realized gain with very small fluctuation (4.6-7 dBi) is achieved over the whole bandwidth with the help of an extra dipole. Calculating the antenna factor and comparing it with the standard values of the conventional LPDA antenna is a trusted way to demonstrate the validity of the proposed design: the antenna factor of 23-41 dB/m for the proposed design compares well with the 26-41 dB/m of a commercial 0.7-6 GHz LPDA antenna (HyperLOG® 7060). In future work, further investigations such as calibration and equivalent-circuit modeling could be performed on this antenna.
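The two figures of merit used throughout, the antenna factor and the illuminated width, reduce to one-line formulas. The sketch below evaluates the standard free-space relation AF = 9.73/(λ√G), which is assumed here to correspond to Equations (14) and (15), together with Equation (18); the 62° beamwidth in the example is an assumed value, chosen only because it reproduces the 1.5 m width quoted for a 1.25 m distance.

```python
import math

def antenna_factor_db(f_hz, gain_dbi):
    """AF = 9.73 / (lambda * sqrt(G)) in dB(1/m); the standard free-space
    relation assumed to correspond to the paper's Equations (14)-(15)."""
    lam = 3e8 / f_hz
    g = 10.0 ** (gain_dbi / 10.0)
    return 20.0 * math.log10(9.73 / (lam * math.sqrt(g)))

def illuminated_width(d_m, bw3db_deg):
    """Equation (18): w = 2 * d * tan(0.5 * phi_3dB)."""
    return 2.0 * d_m * math.tan(math.radians(0.5 * bw3db_deg))

print(f"{antenna_factor_db(1e9, 6.0):.1f} dB/m")  # ~24.2 dB/m at 1 GHz, 6 dBi
print(f"{illuminated_width(1.25, 62.0):.2f} m")   # ~1.50 m (62 deg is illustrative)
```

A gain of 6 dBi at 1 GHz gives about 24 dB/m, which sits inside the 24-41 dB/m range reported in Table 2.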
Using Graphs of Queues and Genetic Algorithms to Fast Approximate Crowd Simulations † : The use of crowd simulation for re-enacting different real-life scenarios has been studied in the literature. In this field of research, the interplay between ambient assisted living solutions and the behavior of pedestrians in large installations is highly relevant. However, when designing these simulations, the necessary simplifications may result in different ranges of accuracy. The more realistic the simulation task is, the more complex and computationally expensive it becomes. We present an approach towards a reasonable trade-off: given a complex and computationally expensive crowd simulation, how to produce fast crowd simulations whose results approximate those of the detailed and more realistic model. These faster simulations can be used to forecast the outcome of several scenarios, enabling the use of simulations in decision-making methods. This work contributes a simplified, faster simulation model that uses a graph of queues to model an environment in which a set of agents navigates. The model is configured using genetic algorithms (GA) applied to data obtained from complex 3D crowd simulations. This is illustrated with a proof-of-concept scenario in which a 3D simulation of one floor of a faculty building, with its corresponding students, is re-enacted in the network-of-queues version. The success criterion is achieving a similar total number of people in particular floor areas along the simulation in both the simplified simulation and the original one. The experiments confirm that this approach approximates the number of people in each area with a sufficient degree of fidelity with respect to the results obtained by a more complex 3D simulator.
Introduction Crowd simulation is a challenging application area for agent-based modeling and simulation [1,2]. There is an active field of research on this topic thanks to its usefulness in applications such as evacuation planning and the design and planning of pedestrian areas, indoor buildings, subways, and sports stadiums, among others. Several agent-based tools for this kind of simulation, both commercial and research-based, address the problem, such as Vadere (http://www.vadere.org/), Pedestrian Dynamics (https://www.incontrolsim.com/product/pedestrian-dynamics/), PEDSIM (http://pedsim.silmaril.org/), and Legion (http://www.legion.com/). These systems try to represent the behavior of the individuals in a crowd simulation with high fidelity and realism, usually including a 3D graphical representation of the individuals, complex animations, shortest-path calculation, and interpolated movements. They typically require the definition of 3D models representing the environment or physical area to simulate, the pedestrian avatars, and the obstacles, to name a few. Although useful for purposes such as modeling the interplay between pedestrians and ambient assistive solutions [3], building these assets is expensive in resources, time, and computational power. Some works propose mathematical techniques to solve crowd movement problems [4] as a faster approximation method for simulating crowd behavior [5,6]. In these systems, the environment is modeled as a network of walkway sections, where the nodes represent rooms and the links represent doors, and each pedestrian is treated as a separate agent. In other approaches [7], the agents are modeled as unique flow objects, and the time to traverse each link depends on the overall density of pedestrians present at the link. The problem of these methods is their trade-off against other features, such as accuracy in the movement of characters, in exchange for faster output. Nevertheless, there are situations where it is better to sacrifice accuracy in order to obtain a faster simulation (for instance, a reinforcement learning system for simulating pedestrian navigation [8]) or to run on less powerful computing devices (for instance, a smartphone inside a building without Internet access has no communications during an evacuation). The fact that some simulations are expensive does not mean that they are useless or expendable. Sometimes complexity is really needed, but in other cases efficiency and high performance are preferred, so it is interesting to have an alternative that maintains both kinds of simulations (high-accuracy and fast ones). The research question is then how to combine both and keep them consistent during development or in production. This work contributes to this issue with a method to derive a fast crowd simulation from a more complex and slow crowd simulation. If the faster simulation produces valuable results for some variable or parameter of interest, such results need to be consistent with those returned by the more complex simulation along the duration of the original simulation.
This paper shows how this can be done, using as the selected parameter the occupation of spaces within a floor of a university building. As a first step, the paper focuses on the final values of this parameter to investigate whether the approach is correct; future work can then address intermediate states, which are more challenging. The faster solution is based on a network-of-queues model, and the derivation method is based on genetic algorithms (GA), which produce particular configurations of the queue network. The GA is applied to a set of samples obtained from a more complex crowd simulation to determine the runtime parameters of the queue-based simulation. This queue-graph approach is similar to the one presented in [5,6]. The queueing network is called a graph of queues and contains two types of elements: nodes and edges. The nodes of the graph represent the different walkable spaces (not necessarily rooms) into which the physical space to simulate has been divided; each node contains a queue of pedestrians with a fixed size. The edges of the graph represent the interconnections between two nodes and restrict the flow of pedestrians that can pass through them per unit of time. The problem of these methods is the large number of system parameters that have to be set to configure the system and to ensure a good degree of similarity with the environment to simulate. For example, setting the maximum queue size of each node, or the pedestrian flow in each area, becomes unmanageable when the number of areas is high. This is solved here by using genetic algorithms (GA) [9]. The GA uses as input the number of pedestrians allocated to different pre-determined areas, which are represented by nodes in the graph. This traffic information is obtained from a more complex simulation. The GA uses this information to find a configuration of the parameters of the model that yields counts similar to those of the system we want to approximate. The remainder of this paper is organized as follows. Section 2 reviews related work. Section 3 presents our approach to building faster simulators and how to configure them using a genetic algorithm. Section 4 shows the results of the experimentation. Finally, Section 5 presents some conclusions and future lines of work. Related Work In the last decade, most works on crowd modeling follow an agent-based approach where the agents move through a two- or three-dimensional map. Some works focus on simple models. For instance, in [10], the authors use a 2D map divided into small squares that represent places in the environment that can be free or occupied by an obstacle or an agent. This approach is a good solution to reduce the computational cost and the development time when the modeled world is small. Nevertheless, it requires an expensive cell structure for path-finding compared with other models such as navmeshes [11]. Another problem of this representation is the lack of accuracy at the frontiers that delimit the walkable and non-walkable areas and the obstacles [12], although in certain environments the model may be sufficiently precise. This approach is less computationally expensive than other approaches such as the one used by Narain et al.
[13], where realism prevails at the cost of more computationally demanding techniques. That work aims to manage thousands of pedestrians in 3D, with animations, as opposed to the previous one, which intended to obtain a less detailed simulation. Its goal is to replicate behaviors that appear when the density of individuals is very large, like the pilgrimage to Mecca or the evacuation of a sports stadium. These kinds of situations are close to those that our work intends to consider. In the experiments, we compare our approach with an indoor simulator called MASSIS [14], which produces similarly realistic simulations. Another example of this more realistic approach is ClearPath [15], where the authors use the power of parallel computing and a custom collision avoidance algorithm to simulate the behavior of thousands of agents in complex 3D environments in real time. These strongly disparate approaches show that different levels of realism and accuracy are needed, depending on the environment to be simulated and the information we want to obtain from the simulation. Despite this, even the simplest 2D models are computationally demanding because of the cost of the path-finding algorithms. For that reason, other computationally lighter models are also used. One of the most widely used is queueing theory [16], the mathematical theory of waiting lines, which can be applied to a multitude of human processes. For instance, the authors of [17] describe a queueing model of pedestrian traffic flow, which they call the M/G/c/c queue model. Their model adequately captures the congestion that occurs in public buildings when pedestrian arrivals follow a Poisson distribution. Xu et al. [18] use a queueing network to model the pedestrian flow in a subway station. The proposed model has the limitation that it is not bidirectional: in bidirectional sections, the system uses two independent links, so that traffic in one direction does not hinder the other. In addition, the authors assume that passenger arrivals and departures at the platform follow another Poisson probability distribution. In our approach, we do not assume any specific statistical distribution, because it depends on the context to be simulated. Our simulation models pedestrian arrivals by using simple agents that move from one point to another; in each simulation the arrival frequency will be different, and it is part of the simulation. Each agent has a previously defined behaviour, which is simplified as a list of way-points. This approach is more costly but also more realistic and independent of the environment to be simulated. Moreover, our model allows bidirectional movement between the different nodes. To conclude, in our approach we use genetic algorithms to adjust the queue model to the simulation context. Some uses of genetic algorithms [19] in crowd simulations have been documented in the literature, but not specifically for the optimization of a queue system. In our approach, the genetic algorithm obtains a configuration of the parameters of the queue model. This is different from the typical approaches found in the literature, where GA are usually used to optimize trajectories or to achieve more realistic agent behavior. To cite some examples, we can see the work of Wolinski et al.
in [20], where the authors describe an optimization framework that uses GA in combination with greedy algorithms to create pedestrian trajectories. In that work, the authors compare the optimization of two collision-avoidance methods: RVO2 [21] and social forces. Another use of GA in crowd simulation is shown in [22], where different methods of partitioning the space are compared. Model for Crowd Simulation The purpose of this work is to obtain a simplified model that subsumes the main aspects of a more complex pedestrian simulation while returning similar results in significantly less time. In this section, we describe the graph model that we propose and the genetic algorithm used to configure and optimize it. Graph of Queues In this work, the physical environment is modeled using an undirected graph where the nodes are the spaces through which pedestrians can transit and the edges are doors or other structures that connect the spaces. We assume, for simplicity, that these structures (edges) can be traversed by a pedestrian in zero units of time. Each node is internally modeled as a queue of a specific size. The time that pedestrians spend walking between two points is modeled with a variable in each node that establishes the time an individual takes to traverse the node at the average speed of an adult (about 1.2 m/s [23,24]). In this model, pedestrians are enqueued when they arrive at a node. Furthermore, each edge of the graph can extract a number of pedestrians buffered in the node at each time step. Each agent representing a pedestrian has a path to follow and selects the edge through which to leave the node according to this path. A pedestrian can leave its current node when it has stayed long enough; we consider the stay long enough when the waiting time would have sufficed for an agent to traverse the space in the original simulation. This time is calculated using the average time to traverse the node, which is stored in each node, and the speed of the pedestrian (usually the average speed). In the experiments, we assume for simplicity that all pedestrians walk at the average speed. The number of pedestrians that can depart from a node is determined by the number of individuals that can move simultaneously through the edges connecting the node with its neighbors within the simulated time interval. These edges are derived from the physical corridors that connect areas. For example, if there were twenty simulated pedestrians waiting to leave a node, and the edges connecting this node with others could accommodate 10 of them per cycle, it would take two cycles to evacuate the waiting pedestrians.
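A minimal sketch of the mechanics just described is given below; the dictionary-based pedestrian records and all names are illustrative choices, not the authors' implementation.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Node:
    q_max: int                  # maximum queue size q_i
    t_traverse: float           # average time to traverse the node t_i (s)
    queue: deque = field(default_factory=deque)

@dataclass
class Edge:
    capacity: int               # pedestrians allowed per time step c_k
    used: int = 0               # pedestrians that crossed in this step |e_k|

def step(nodes, edges, pedestrians, dt=1.0):
    """One cycle: move each pedestrian to the next node on its route if the
    movement condition holds, otherwise increase its waiting time."""
    for e in edges.values():
        e.used = 0
    for p in pedestrians:
        if not p["route"]:
            continue                                        # pedestrian has arrived
        cur, nxt = p["node"], p["route"][0]
        e = edges.get((cur, nxt)) or edges.get((nxt, cur))  # undirected graph
        if (e is not None
                and p["wait"] >= nodes[cur].t_traverse        # stayed long enough
                and len(nodes[nxt].queue) < nodes[nxt].q_max  # room at destination
                and e.used < e.capacity):                     # edge not saturated
            nodes[cur].queue.remove(p["id"])
            nodes[nxt].queue.append(p["id"])
            p["node"], p["wait"] = nxt, 0.0
            p["route"].pop(0)
            e.used += 1
        else:
            p["wait"] += dt

# Toy usage: one pedestrian walking from node "A" to node "B".
nodes = {"A": Node(10, 2.0), "B": Node(5, 3.0)}
edges = {("A", "B"): Edge(capacity=2)}
peds = [{"id": 0, "node": "A", "route": ["B"], "wait": 0.0}]
nodes["A"].queue.append(0)
for _ in range(4):
    step(nodes, edges, peds)
print(peds[0]["node"])  # "B": moved after waiting t_traverse at "A"
```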
According to these abstractions, we define a graph-of-queues framework, as Figure 1 shows. The framework is represented by the structure G(N, E, M), an undirected graph composed of a set of nodes N = {n_1, n_2, ..., n_n} that represent the different subdivisions of the physical simulation space; a set of edges E = {e_1, e_2, ..., e_m} that represent the interconnections between pairs of nodes; and a movement function (M) that determines when a pedestrian can move across nodes. Each node contains a queue of pedestrians from which pedestrians can be removed (they leave the node) or to which they can be added (they arrive at the node). Given a pedestrian, its movement across nodes is determined by the connecting edges (E) and the movement function (M). The set of edges E represents the interconnection between two nodes n_i, n_j ∈ N such that e(n_i, n_j) means a character can move from node n_i to node n_j (1). The division is determined manually by the designer of the simulation and must take into account the physical configuration of the building. Each node has a pair of values (q_i, t_i) that represents its maximum queue size (q) and the average time to traverse the node (t) (2). Each edge has a parameter (c) that represents the number of pedestrians that can move between the two nodes joined by the edge (in Equation (3) these are n_i and n_j). One pedestrian p_k ∈ P = {p_1, p_2, ..., p_p} can be defined as a sequence of consecutive movements; it is necessary to keep track of its current queue (the node it is currently occupying) and its waiting time in that queue (w). Each cycle moves a pedestrian from the current node n_c to the next node n_i through an edge e_k when the pedestrian has waited for a certain time w_k and the following condition is satisfied: e_k(n_c, n_i) ∈ E ∧ w_k ≥ t_c ∧ |n_i| < q_i ∧ |e_k| < c_k, where |n_i| is the current size of the queue in node n_i, c_k is the capacity of the edge e_k, and |e_k| is the number of pedestrians that have transited the edge e_k in this time interval. In other words, a movement between two nodes n_c, n_i is possible if there exists an edge that interconnects both nodes, the waiting time of the pedestrian in the source node is greater than the time to traverse the source node, the number of pedestrians enqueued in the destination node is less than its maximum capacity, and, finally, the edge is not fully occupied by other pedestrians. The queue-based simulation is defined over G as a cycle where: 1. For each pedestrian, determine whether it can move from one node to the next. 2. If a pedestrian cannot move to the next node according to M, it waits in the current node until it can, increasing its current waiting time w by some δ. 3. If a pedestrian can move, add it to the queue of the next node in its travel sequence and remove it from the previous node. Configuration and Optimization of the Model The configuration of this model requires defining the topology of the graph, the size of each node (maximum queue size), the average time to traverse each node, and the size of each connection (the number of pedestrians that can transit each edge in each simulation time step). This configuration is a tedious problem that involves measuring the free space of each node, the average time to traverse it, and the flow density at the interconnections. Moreover, hand-made optimization of this model would probably require a process of trial and error to set the parameters correctly.
For that reason, we use a genetic algorithm to obtain the configuration of all the parameters of the model, namely the size of the queues, the time to traverse the nodes, and the number of pedestrians that can cross an inter-node connection. Each particular configuration of these variables becomes an individual of the population. The algorithm needs a reference criterion that guides the search and determines whether the obtained configurations are correct. This reference criterion should be easy to calculate and obtain, so that a large population of individuals can be considered. In this case, we propose to count the number of persons leaving each node in the queue simulation at different instants of time. This information can easily be obtained from a real scenario (there are automatic methods, but it can also be collected manually by counting people) and especially from the initial complex simulations we wanted to address. With this information, we can compare the results obtained by the simulator we want to approximate with the results obtained by our model. This information could also easily be obtained in a real environment, if that were the application scenario. As a first step towards a more complete solution, this space-occupation measurement is computed at the end of the simulation; intermediate states are not checked yet. We will refer to this information as the reference traversal dataset. Therefore, the purpose of the genetic algorithm is to find an individual representing the configuration of the queue-graph model that best reproduces the final total people count per designated section. The error is computed with respect to the known values of the reference traversal dataset, which stores the number of pedestrians counted at each node in the simulation. The genetic algorithm follows these steps: reproduction, crossover, mutation, and replacement. In the reproduction phase, the algorithm selects the individuals that make up the next generation. This selection is based on how well they solve the problem (in this case, reproducing the reference traversal dataset). There are different types of selection in the literature; we implemented three of them: tournament, hierarchical, and roulette [25]. The algorithm then crosses over the selected individuals with a certain probability. The crossover mechanism is similar to genetic crossover among chromosomes, and a plethora of methods exists; we implemented uniform, single-point, and multi-point crossover [25]. Next, with a certain probability, the algorithm randomly modifies individuals in the mutation phase. Finally, the new individuals are evaluated using a fitness function that quantifies how well each individual solves the problem. The new population replaces the previous one, and the algorithm repeats the process for the configured number of generations. In our approach, each individual of the GA has already been introduced as a specific configuration of the queue-graph model. The genotype (the value of the individual) encodes each parameter of the system as an integer number. The average time to cross a node must be a discrete magnitude to be represented by an integer; a unit can, for example, represent one second or half a second, depending on the accuracy we want to obtain.
In order to keep consistency among the different types of parameters, an individual is coded using three arrays of parameters: the size of the queue of each node, the time to traverse each node, and the size of each link. The numbers of edges and nodes can differ, so the genetic operators are applied to each array separately. Figure 2 shows an example of coding a simulation graph into an individual of the genetic algorithm. The implementation also allows elitism and dynamic diversity control, using the entropy of the population as the metric [26]. The diversity parameter of the GA is used to adjust the mutation probability and the tournament size. If the diversity value is near 1, the mutation probability is low and the tournament size is high, maintaining a greater selective pressure; this combination favors convergence towards the nearest local minimum. Conversely, if the diversity is low, the mutation probability is high and the tournament size is smaller, reducing selective pressure, increasing diversity, and promoting the exploration of new local minima. All the operators have to preserve the integrity of the individuals: they must always generate valid solutions within the value ranges of the three parameter types. The ranges are shown in Table 1. Table 1. The ranges of the three types of parameters codified in the genetic individual: maximum queue size, 5 to 600; time to traverse a node, 0 to 40; link size, 2 to 20. The fitness function of the GA is the execution of the queue-graph simulator configured with the different individuals of the population. When the simulation finishes, the graph simulator returns the number of pedestrians counted in each node during the simulation time. This result is compared with the results obtained by the 3D multi-agent simulation system MASSIS (a multi-agent simulator of pedestrian crowds) [14], i.e., the reference traversal dataset. The number of pedestrians per second present in an area is obtained automatically in the simulation by inspecting it at run time; however, it is not distinguished whether the same person has been counted twice in two time intervals. This information is used in the fitness function to compare the results of the queue-graph simulation and the original MASSIS simulation. The similarity measure used is one minus the average relative error over the nodes, sim = 1 − (1/|N|) Σ_{n_i ∈ N} e(n_i) (5), where, as in Equation (6), e(n_i) = |p(n_i) − p′(n_i)|/p(n_i) is the absolute error in node n_i divided by the people counted by the MASSIS simulation; here p(n_i) denotes the number of people counted by the MASSIS simulation in node n_i and p′(n_i) the number counted by the graph simulator. The fitness function discards the nodes that counted 0 individuals, because they would always have an error of 0 and hence a similarity of 1, which would artificially increase the average. Section 4 describes some of the experiments performed with the solution proposed in this section, using different simulation scenarios and different GA configurations; the results obtained for the different scenarios are also discussed.
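As an illustration of the similarity measure reconstructed above, the following minimal sketch computes the fitness from two count vectors; the node names and counts are invented for the example.

```python
def similarity(reference, approx):
    """1 minus the mean relative error over nodes, skipping nodes whose
    reference count is zero, as the fitness function does."""
    errors = [abs(p - approx.get(node, 0)) / p
              for node, p in reference.items() if p > 0]
    return 1.0 - sum(errors) / len(errors)

ref = {"MainGate": 120, "Hall": 80, "Classroom1": 30, "Storage": 0}
est = {"MainGate": 100, "Hall": 85, "Classroom1": 30, "Storage": 0}
print(round(similarity(ref, est), 3))  # 0.924
```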
Experimentation We have performed a set of experiments to test whether the queue-graph simulator configured with the GA can produce a final people count per section similar to the one obtained with a more complex 3D simulation. If this premise is correct, the system could be used as a reliable fast approximation of a more complex simulator in environments where using the more complex simulator is not applicable or too costly. The introduction (Section 1) presented some scenarios where the fast simulator is of interest, e.g., a wearable or smart device with significantly lower computational capability, or machine learning scenarios. As a proof of concept, a run of the queue-graph simulator, as introduced in Section 3.1, was fast enough to apply GA optimization and produce the results presented in this section. In these experiments, we used the graph of queues as part of the fitness calculation for populations of hundreds of individuals representing valid configurations of the simulator; the evaluation of each generation took only a few seconds on a laptop, whereas the original 3D simulator took several minutes. The modeled scenario comprises the floor areas represented in Figure 3, whose actual 3D representation, segmented by area, is shown in Figure 4. To evaluate the results, we used a reference traversal dataset that counted the persons occupying each section, as mentioned in Section 3. We compared the numbers of people counted by MASSIS and by the queue-graph approximation configured with the GA. Both simulators measured their environment with the same time interval, defined as 1 s for these experiments. The number of agents representing pedestrians was the same in both simulators, and the agents followed the same routes. The simulation length was also the same in both systems, but MASSIS executed in real time, while our approximation ran as fast as the computer could. The environment chosen for the experimentation is a 3D model of the building of the Faculty of Computer Science of the Universidad Complutense of Madrid. Using this model, we defined the simplified graph shown in Figure 4. The subdivision into areas became nodes of the queue graph (see Figure 3), with the sole criterion of creating square nodes when possible, so that the average traversal time (a parameter of the queue simulator) was a realistic approximation. To simplify the experiment, all the agents (i.e., the simulated people) moved at the same speed. The simplified graph has 28 nodes that represent the different places of one floor of the building and the interconnections between them. Some places represent rooms or corridors, but others are subdivided to obtain a better representation of the environment. Some interconnections are doors, but others have no associated structure. In the experiments, we simulated different scenarios in this environment with different numbers of people following different routes, which produced different people counts in the nodes. The configuration of the nodes and places was the same in all the simulated scenarios. The simulated scenarios were: • Scenario 1: Students could enter the Faculty through two gates, the main gate and the back gate. Students entering through the main gate went to classrooms 1 to 3; students entering through the back gate went to classrooms 3 and 4.
The number of students simulated was 150, with 30 per classroom. • Scenario 2: Students evacuated the classrooms in an emergency situation. Each class left the building through the nearest gate. The number of students simulated was 150, with 30 per classroom. • Scenario 3: This simulation showed some typical behaviors that occur in a university: a group of people entered the Faculty building while another group of students left; another group left the cafeteria and went to their classrooms; some students changed classrooms; and, finally, a group of people waited for an event in the events hall. The number of people simulated was 210. • Scenario 4: Entry of the morning shift and departure of the afternoon shift. The number of students simulated was 240. Each scenario was run in the original 3D simulator to obtain the corresponding reference traversal datasets. Table 2 shows the results obtained for each scenario at the end of the simulation. The last column, sim, shows the similarity obtained with respect to the reference traversal dataset. The rest of the table shows the configuration of the GA applied to obtain the result, where P is the population size, Gen is the number of generations, Sel is the selection method, Cross is the crossover method, Mr is the mutation rate (which, as explained in Section 3, is dynamic depending on the diversity and varies within the range shown in the table), Cr is the crossover probability, and El is the degree of elitism. The selection method used was tournament selection (TU in the table) and the crossover method was uniform crossover (UN in the table). We performed executions with different configurations of the GA, but the best results were obtained with tournament selection and uniform crossover. For space reasons, only the results for the first and second scenarios are shown in Figure 5. As Table 2 shows, the similarity between both simulators is very high for all the tested scenarios, especially the first and second, which reach a similarity degree close to 95%. Figure 5a,b show the results obtained by both simulators and indicate that in most nodes the results are very close. The biggest deviations were obtained in the nodes MainGate and BackGate, which refer to the main and back doors used to enter the Faculty. In both simulations most of the students started from these nodes, so the differences there likely result from bottlenecks that delayed the entrance of students to a greater extent in the graph of queues than in the MASSIS simulation. In the rest of the nodes, the number of counted people is remarkably similar. In the third and fourth scenarios, the accuracy is lower but still high (close to 90%). The simulation of scenario 3 shows that the biggest deviation occurred in the ElevatorsLobby node and the first part of the entrance hall. Scenario 4 presents minor deviations with respect to the rest of the scenarios. As the results show, the system can approximate the results obtained by MASSIS when we optimize the graph with the GA on the same scenario in both simulators. This result is promising and applicable to known scenarios, but we also want to assess the ability of the model to generalize to more than one scenario. With this premise, we performed two additional experiments.
First, we modified the fitness function of the GA to simulate several scenarios at the same time. The total fitness of an individual is the average of the accuracies over the scenarios. This function aims to find a graph configuration that approximates all these scenarios with reasonable accuracy. The experiment obtained an average accuracy of 79.81%, executing the GA for 1500 iterations with a population of 1000 individuals and a configuration similar to the previous experiments for the remaining parameters. The accuracy achieved is lower than in the previous experiments because there the simulator over-fitted to maximize the results on the single scenario used in the fitness function, whereas the new fitness function makes the GA look for a generic configuration that fits all the scenarios at the same time. The second experiment aimed to configure the queue graph with a set of scenarios and then use the same queue graph to approximate a different, non-trained scenario to validate its performance. Using a leave-one-out approach, one scenario was used for validation while the others were used for training. Table 3 shows the results of this experiment for the different combinations of scenarios used in the fitness function and as validation scenario. As can be seen, the similarity obtained when the third and fourth scenarios are held out is acceptable, around 70% accuracy; however, for the first and second scenarios it is very poor. This is because the third and fourth scenarios are more similar to each other than to the other two: the first and second scenarios are entirely different from the rest, so the model is not correctly configured for them when they are absent from training. This experiment shows that the capacity of the model to generalize is not very good, as it tends to over-fit. Therefore, if we want to configure a graph that can approximate several types of scenarios (different from those used in the configuration process), we must carefully select the set of scenarios used in the configuration; these scenarios must be representative of the possible scenarios to simulate. Otherwise, the simulator may not predict new scenarios correctly. This result is probably caused by two circumstances: on the one hand, the limitations of the graph of queues used; on the other hand, the over-fitting produced by the GA, which tunes the graph to maximize the results on the scenarios used to set up the fitness function. This result does not invalidate the approach, but it suggests that a more elaborate method is needed. We expect that one way to overcome this is to classify the current situation into one of a number of known, trained scenarios whose accuracy is satisfactory; in this way, the right queue-graph configuration could be selected and the results correctly predicted. Conclusions and Future Work This paper has presented a fast and easy-to-configure method to approximate complex pedestrian crowd simulators using a graph of queues that is capable of approximating them with a high degree of similarity. As the experiments have demonstrated, this approach can quickly approximate a variable whose final value would otherwise be obtained after a complex simulation, which makes this system an ideal alternative in a multitude of environments where execution time (and low computing resources) is crucial. The graph-of-queues simulator is auto-configured using a GA that optimizes the similarity in terms of the number of people present in certain areas of the simulated building.
The interest of the network of queues is that we could obtain intermediate results for the variable we were observing. In the experiments we focused on its final value to validate the approach, but we intend to run additional experiments and adjustments to estimate the variation of the variable along the simulation and how consistent it is with the values observed in the original simulation. Currently, the main issue is that the model tends to over-fit the trained scenarios, leading to unsatisfactory results in others; a more generic approach to the problem is one of the aims of future work. As an additional line of research, we want to use this estimator of crowd simulations in a machine learning system, and to create cheap simulations on smart devices that can compute in a few seconds the outcome of an evacuation of a public building at the exact moment an event occurs, without the need for cloud-based alternatives, since in some emergency situations there may be no good Internet connectivity. Figure 1. Example of a graph of queues. Figure 2. Example of coding a simulation graph as an individual of the genetic algorithm. Figure 3. The graph model of the faculty. Figure 4. The division of the environment into regions (capture obtained from the MASSIS simulator; legend layer added afterwards). Figure 5. Experimental results comparing MASSIS and the graph of queues per node in the first and second scenarios. Table 2. Summary of the results obtained by the best individual after applying the GA in the different scenarios. Table 3. Summary of the scenarios used to configure the simulator and the scenario used to validate it.
Rejoinder to discussion of the paper "Human life is unlimited—but short" What can be learned from data about human survival at extreme age? In this rejoinder we give our views on some of the issues raised in the discussion of our paper Rootzén and Zholud (Extremes 20(4), 713–728, 2017). Introduction We thank the discussants for very stimulating, thought-provoking, and educational comments. We were impressed by the title of Davison's contribution, by the precise prediction in the Bible cited by Stoev & Battacharya, and by their attempt to "play God's advocate". Biology, accident deaths, compression of morbidity Nerman gives a quick and useful pointer to the very large literature on biological theories of aging, and writes that the question of a limit for human lifespans is not primarily statistical but biological. We agree with this, but we also think that one should use available data as efficiently as possible to give an empirical underpinning to biological theories, and also because of the intrinsic interest of the problem. In their impressive contribution, a full paper in its own right, Stoev & Battacharya make the intriguing comment that not knowing the cause of death may lead to bias. E.g., if the roof of the home of a supercentenarian falls down and kills all of the inhabitants, the observed supercentenarian life length should perhaps be considered as censored rather than fully observed. In a less dramatic, and often occurring, event, if a supercentenarian has a fall which shortens her life length, should this be taken into account in the analysis? It could perhaps have been avoided by changing the layout of the home; but, on the other hand, the fall is usually also an effect of the frailty of the supercentenarian. Should the answer to the question about a biological limit for the human lifespan aim at describing life lengths of humans living in a "test tube" where no falls are possible and where they would not die from infectious diseases? But could any human live like this? In the end, perhaps the most interesting approach is still to study lifespans as they are observed, under the biological and cultural circumstances in which the supercentenarians have lived. However, taking cause of death into account in a statistical analysis would amount to changing some observations from truncated to truncated and censored, and would lead to a longer estimated lifespan. We agree with the hope expressed by Stoev & Battacharya that extreme value statistics could contribute to the quite difficult statistical analysis surrounding the question of whether we will "age healthy" or "age sick". This discussion was started by the "compression of morbidity" hypothesis of Fries (1980). His arguments build on an assumed finite limit for the human lifespan, but an unbounded lifespan is also compatible with both compression and expansion of morbidity. Fries inferred a finite limit of life lengths from an ideal "rectangularization" of survival curves, from a projected upper limit of 85 years for life expectancy, to be achieved in 2045, and from the fact that, at the time his paper was written, the largest known human lifespan was 114 years. However, the ideal rectangular shape of the survival curve in Fig. 1 of his paper is contradicted by the very dramatic increase in survival up to age 100; see Vaupel (2010).
Further, in 2015 the expected life length for women in Japan was 86.8 years, well above the Fries limit; finally, the IDL database contains 10 humans validated at level A who lived longer than 115 years, and the version of the GRG database used in our paper contains an additional 14, with several added later. Practical extreme value statistics To address comments by Nerman, Segers, Stoev & Battacharya, and Zhou: the goals of an extreme value statistics analysis more often than not are both to increase the understanding of the extreme events, say extreme life lengths, and to extrapolate the distribution of event sizes a bit outside the range of observations, but never to extrapolate all the way to infinity. Our practical approach to this is to find the simplest possible model for the extreme observations at hand, and then use it for understanding and extrapolation. Occam's razor is the classical expression of "simple", and "simple" is also expressed in Einstein's adage "Raffiniert ist der Herrgott, aber boshaft ist er nicht" ("Subtle is the Lord, but malicious He is not"). This is the hope that, in the absence of information to the contrary, simple models are those which describe our world most usefully. Simple models increase understanding: one learns from the ways data agree with or deviate from the simple model, and learning increases if different researchers start by trying the same simple model, as opposed to everyone using a different complicated model. "Simple" means different things in different contexts. For excesses of high thresholds, the simplest model is that they follow an exponential distribution: then excesses of even higher thresholds have the same exponential distribution, and the exponential distribution (of course, and for many parent distributions) occurs as the limiting distribution of scale-normalized excesses. Assuming an exponential distribution is the default in statistical reliability theory. The second simplest model is the family of generalized Pareto (GP) distributions. For GP distributed excesses, excesses of a higher threshold also follow a scale-changed version of the same GP distribution, and the GP family comprises exactly the distributions which can be obtained as limits of distributions of threshold excesses. These characterizations are completely parallel to the properties of the normal distribution which make it the simplest distribution in non-extreme statistics. The next level of generality could then be to include covariates in the parameters, or to use second-order regular variation to construct a more general family of distributions, and so on. For the IDL supercentenarian data the simplest model is an exponential distribution for excess ages, without any influence of the covariates sex, time, or group of countries, as checked by embedding it in the family of generalized Pareto distributions and by testing for non-exponentiality and for inclusion of covariates. A further crucial confirmation of the simple model is given by the nonparametric analysis in Gampe (2010). The model that survival after age 110 is exponential thus constitutes what we can learn from existing data, and what can be used for extrapolation. As always, extrapolation beyond the range of data comes with caveats, as discussed below. Nerman does not see any convincing reason to restrict analysis of excess life length data by assuming that they follow a generalized Pareto distribution, and is not convinced by the extrapolation to the age range 120-130 years. Above we have set out our reasons for disagreeing with Nerman's first point.
But we think Nerman is right in his comment about extrapolation: data convincingly show an exponential distribution of survival for ages 110-115 and indicate that for ages 116-122 survival is also exponential. For ages 123-130 there are no data, and reality could turn out to be different. Extrapolation to these ages is still useful, we believe, because of its intrinsic interest, and because it makes it possible to detect interesting changes in survival as quickly as possible. And then, to extrapolate, one should use the simplest model. In contrast to Nerman, Stoev & Battacharya write "Extreme Value Theory is the most natural framework that can provide a principled answer to the question about whether or not natural human lifespan is finite". Davison, Segers, and Zhou also use this framework, but raise a number of questions related to our analysis. Under the heading "Uncertainty quantification", Segers assumes that data follow a generalized Pareto distribution and discusses the issue that from finite data one can never be sure that a parameter of this distribution has a specific value, say 0. A general version of this problem is that if a smaller statistical model is continuously embedded into a larger one, then from observing a finite number of values one can never be sure that the smaller model is the right one. An extreme and unwanted conclusion from this argument would be that model selection, one of the most important tools of applied statistics, is invalid and that one always should use the largest model one can imagine. For some discussion of this issue, see Section 3.2 of our paper. We found the philosophical arguments in Mayo and Cox (2006) helpful. Stoev & Battacharya address the same issue as Segers from a different angle by using "testing affinity" to quantify the statistical difficulty of the question of finiteness or not of the human lifespan. Their conclusion is that the amount of data so far available may not be sufficient to give a very confident answer to the question. Similarly, Zhou, using expected information rather than observed information and assuming untruncated observations, notes that for the number of observations in the IDL database, an estimate of −0.082 or lower of the shape parameter γ has to be obtained before the null hypothesis γ = 0 can be rejected. A similar way to treat the same issue, briefly mentioned in our paper, is through power calculations. Zhou makes the most detailed use of extreme value theory by assuming that human life lengths belong to the domain of attraction of an extreme value distribution with a second order index ρ, and writes that then the optimal sample fraction, k, to use is O(n^{2ρ/(2ρ−1)}), where n is the total number of observations. The total number of deaths, n, in the countries and time periods included in the IDL data is of the order 10^8 (so very likely the IDL data is the most extreme one any of us has seen). Solving the equation n^{2ρ/(2ρ−1)} = 566, one can see that as soon as the second order index is less than −0.26 the IDL sample size is smaller than what would be optimal, and hence that bias does not dominate. However, to use calculations like this one for practical statistics is carrying mathematics too far, we believe. Instead, second order regular variation could be seen as a way to construct more general models that include the generalized Pareto models.
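The calculation quoted above can be checked with a few lines of arithmetic (a sketch; 566 is the number of IDL observations and n = 10^8 the rough total number of deaths mentioned above):

import math

n, k = 1e8, 566
a = math.log(k) / math.log(n)   # a = 2*rho / (2*rho - 1)
rho = a / (2 * (a - 1))         # invert the exponent for rho
print(round(rho, 3))            # about -0.262, i.e. the -0.26 quoted above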
Further, from our practical point of view, Zhou's comment about the existence of distributions which have a finite endpoint but asymptotically exponential threshold excesses is irrelevant. We do not try to find a γ which lives all the way out in asymptotia, but use asymptotic reasoning to suggest suitable models for the data which have been observed. In conclusion, the comments about the limited statistical resolution of the IDL (or any) data set are relevant and have to be kept in mind when using our results (as also discussed in our paper). Similarly, one never knows if a prediction outside of the range of the data will hit the mark. But available data do not give any reason to come to any other conclusion than that survival after age 110 is exponential, so that human life is unlimited but short.

Confidence intervals and GP fitting to data covering lower ages

Davison used the IDL validation level A data (using also the parts of the US and Japan data which were excluded in our paper) to provide profile likelihood confidence intervals for the endpoints of the fitted GP distributions. He obtained intervals which all contain ∞ and have relatively high lower limits, and made the remark that these intervals probably are conservative. Stoev & Battacharya used new statistical technology developed in their contribution to provide confidence intervals for the endpoint of the distribution of lifespans, and only used the 100 or 200 longest lifespans. Their intervals are built on regular variation at a finite endpoint, and are similar to, but somewhat wider than, Davison's intervals, as can be expected since they use less of the data. As far as we understand, Stoev & Battacharya did not take truncation into account. We wonder if their methods could be modified to handle truncation. We agree that confidence intervals are a useful way of complementing the tests performed in our paper. We also enjoyed the Stoev & Battacharya simulation-based heatmaps. Davison next comments that his results disagree with those of Einmahl et al. (2017), who use Dutch data to conclude that there is a finite limit to the human lifespan. He writes that one possible explanation is that it is unreasonable to extrapolate from the very old persons in the Dutch data to the (even much older) supercentenarians, and raises the possibility that, say, a logistic force of mortality function which first increases and then plateaus out could fit the Dutch data. Davison notes that this plateauing in fact may also show up in the Dutch data. We completely agree with Davison's comments: the Dutch data is dominated by ages around 100 where human mortality is clearly increasing, and to accommodate this a fitted GP distribution has to have an increasing force of mortality, or equivalently a finite endpoint. However, the (in fact quite surprising) fact shown by the IDL data, that after age 110 human mortality is at a constant plateau, is then not caught by the GP model. Additionally, Einmahl et al. (2017) present pooled estimates of the limit of life length based on their Dutch data: 114.1 years for men and 115.7 years for women. However, the IDL data and our GRG data together contain 7 men who lived longer than the limit 114.1 years and 10 women who lived longer than the limit 115.7 years, and right now (April 19, 2018) the GRG database lists 3 women who are alive and older than 115.7 years. Jeanne Calment lived even longer than the pooled 95% upper confidence limit 120.3 for the endpoint of the lifespan for women given in Einmahl et al. (2017), and longer than the upper endpoint of more than half of the confidence intervals in this paper.
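The profile likelihood intervals for the GP endpoint discussed above can be sketched as follows (a minimal illustration, not Davison's code: when γ < 0 the endpoint is −σ/γ, so for each candidate endpoint e we set σ = −γe and profile out γ; the data are synthetic and truncation is again ignored):

import numpy as np
from scipy import stats, optimize

def gp_loglik(x, gamma, sigma):
    return stats.genpareto.logpdf(x, gamma, loc=0, scale=sigma).sum()

def profile_loglik(x, endpoint):
    # For a fixed finite endpoint e the constraint is sigma = -gamma * e,
    # gamma < 0; maximize the GP log-likelihood over gamma alone.
    res = optimize.minimize_scalar(lambda g: -gp_loglik(x, g, -g * endpoint),
                                   bounds=(-5.0, -1e-8), method="bounded")
    return -res.fun

x = stats.expon(scale=1.4).rvs(size=566, random_state=0)  # synthetic excesses
g_hat, _, s_hat = stats.genpareto.fit(x, floc=0)
ll_max = gp_loglik(x, g_hat, s_hat)

# 95% interval: endpoints whose profile log-likelihood lies within
# chi2_{1;0.95}/2 = 1.92 of the maximum. With exponential-looking data the
# set typically has no finite upper bound, i.e. the interval contains
# infinity, as in the intervals described above.
grid = np.linspace(x.max() + 0.5, 60.0, 120)
inside = [e for e in grid if ll_max - profile_loglik(x, e) <= 1.92]
print(f"lower 95% confidence limit for the endpoint: {min(inside):.1f}")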
Davison also presents a quote from Einmahl et al. (2017) which argues for the use of death cohorts (rather than birth cohorts), and then, in a section titled "Non-stationarity", presents an analysis which discusses the bias arising from using death cohorts. We again agree with Davison's analysis. An "extreme" example which illustrates what could happen if one uses death cohorts is as follows: suppose that in a large country all men who are born in an even year are drafted into war and killed, and that one studies the life lengths of men who died at age 110 or over in some specific even year. One conclusion would then be that male supercentenarians in this country can only live an odd number of years. This conclusion has nothing to do with the biology of aging; it is an artefact caused by the wars. Davison refers to a milder version of this example, the European heatwave of 2003, which killed many old persons.

Truncation and censoring

We appreciate the positive comments by Keiding, Davison, Stoev & Battacharya, and Zhou on our efforts to incorporate the details of the IDL sampling frame into the statistical analysis, and found Davison's point process based derivation of the likelihood function for truncated data instructive. Keiding suggests that it would be possible to use the same techniques to handle also the 2000 - mid 2003 US data. We did not do this because the US data did not give the exact dates of deaths, only the death year, which makes it unclear how to handle the truncation. This was possible to do for the longer time period 1980-2000, see the discussion in our paper, but seemed problematic for a 2.5 year period. As a further comment, the 2000 - mid 2003 data only include persons who were alive on Jan 1, 2000, and this also had to be included in the analysis (Rootzén and Zholud 2016), which might make it even more fragile. As a more general comment, taking truncation into account in the analysis often did not change estimates much. But one cannot know if this is the case or not without doing the correct analysis, which takes truncation into account. And it did make a difference for some of the analyses.

Age-biased sampling and the GRG database

Zhou provides a number of examples which illustrate how conclusions may be distorted if the sample is age-biased. A general view of this is that, for all practical purposes, age bias can transform any age distribution into any other age distribution with the same, or smaller, support. The argument is as follows. Assume that observations are i.i.d., that the true age distribution is supported on the entire real line, and that it has a continuous probability density function g(x) > 0. Further assume that the "probability" of including a life length x in the sample is h(x). Then the density function of the observations in the sample is

e(x) = h(x)g(x) / ∫ h(y)g(y) dy.   (1)

Let f(x) be some other probability density function on the positive real line, and assume first that there is a constant K such that sup{f(x)/g(x); 0 ≤ x} ≤ K. Taking h(x) = f(x)/(Kg(x)) in Eq. (1) gives that the density of the observations is f(x). If instead sup{f(x)/g(x); 0 ≤ x} = ∞, then assume that it is possible to find an A such that ε := ∫_A^∞ f(y)dy is arbitrarily small and such that sup{f(x)/g(x); x ≤ A} ≤ K, and set h(x) = f(x)/(Kg(x)) for x ≤ A and h(x) = 0 otherwise.
Then Eq. (1) gives that the density of the observed values is e(x) = f(x)·1{x ≤ A}/(1 − ε), and hence, by making ε small, the distribution given by e(x) can be made arbitrarily close to the distribution given by f(x). A similar argument applies if the support of g(x) is a subset of the positive real line. From a practical point of view, it is not likely that age-biased sampling would change a distribution into a substantially different one. However, age bias could easily change an age distribution into a similar one, say, change a GP distribution into another GP distribution with a somewhat different shape parameter. The authors of the IDL database have made a serious effort to avoid age bias. In contrast, clicking on the link to the "GRG World Supercentenarian Rankings List" on GRG (2016) and scrolling to the bottom of the page, one can read: "To Our Readers: Do you know of someone aged 110 or older currently living who is not on this list, but has the documents to prove it? In this case, please contact one of our two Supercentenarian Claims Investigators". Thus GRG data are collected by investigating claims sent to the GRG group. It is inconceivable that this collection method would not lead to age bias. Most likely it is more probable that older supercentenarians are reported. Hence the bias goes in the opposite direction to the examples in Zhou (2018), and, if anything, would change a true γ to a smaller one.

Since the sizes of the cohorts vary between years, it follows that cohort maxima for different years have different distributions: the distribution of the maximum of a larger cohort is stochastically larger than that of the maximum of a smaller cohort. Hence analyzing cohort maxima as if they were identically distributed is wrong. This is the same mistake as was made by Dong et al. (2016). The end of Section 2 of Ferreira and Huang (2018) contains a discussion of whether truncation should be taken into account. We find this discussion confusing. It concerns the rationale for the formula on p. 724 of our paper. In this expression the numerator describes the real age distribution, which is a threshold-stable GP distribution, as it should be, while the denominator comes from the sampling frame and has nothing to do with threshold stability or extreme value theory. Also, in reply to the penultimate sentence of Section 2 of Ferreira and Huang (2018): "model checking and optimizing estimation methods" is possible also for analyses which take truncation into account, see e.g. our paper and Rootzén and Zholud (2016).

Updated version of LATool

Section 1 of Segers (2018) mentions a private communication about how our GUI, LATool, computed the interval (b, e) which is used to correct the likelihood for truncation. However, what we told Segers was inaccurate. For each country, our code computed b as the beginning of the year where the first death occurred, and e as the end of the year with the last death. This means that our intervals (b, e) agreed with the intervals (b, e) given in the IDL metadata, except in three cases: the b for Spain, and the e-s for Japan and the USA. We have now written an updated version of LATool, available in the supplementary material of this rejoinder, which throughout uses the intervals (b, e) given in the metadata. We have also fixed a bug in LATool and improved the estimation procedure. This has led to some changes of values in our paper. For completeness we have included updated versions of these in the supplementary material. The changes have no influence on the conclusions or discussion in our paper.
However, for the new version of Fig. 5, left panel, Keiding's comment that "the observed quantiles seem to sit rather marginally among the simulations" no longer applies. We have also tried to make LATool more user-friendly, and hope it will be used for alternative analyses.

A misprint

Johan Segers pointed out a misprint: the paper three times says that we have excluded persons "who died in Japan after August 31, 2003" from the analysis. This should be "who died in Japan after September 30, 2004".

The secret of (extremely) long life

Except for Keiding, all discussants tackle the question of whether or not there exists a finite limit to the human lifespan, but they do not write about the conclusion that there is no detectable difference between females and males, between (groups of) countries, or between time periods. The first question was also our motivation for starting this research: to find out if there is a hard biological limit to the human lifespan. However, we now think the latter conclusion is the most interesting and intriguing one. Differences would have pointed to factors which are important for long life, and which we could use to live longer; and this question interests most of us. Much of supercentenarian research is driven by it. A non-statistical approach to the question is taken in Jeune et al. (2010), where the authors describe the life stories of the longest-living humans. Their conclusion is: "The life journeys of these very old people differed widely, and they are almost without common characteristics, aside from the fact that the overwhelming majority are women (only two are men), most smoked very little or not at all, and they had never been obese. Still, they all seem to have been powerful personalities, but decidedly not all were domineering personalities. They are living examples of the fact that it is possible to live a very long life while remaining in fairly good shape. Although these people aged slowly, all of them nonetheless became extremely frail in their final years." This agrees completely with our result that none of the most obvious factors seem to influence the chance to live very long. There is now quite some exciting ongoing research which tries to find genetic factors which make long life possible. And, as written by Nerman, "in the era of quick development of organ transplantations, of stem cell therapies and of regenerative medicine" it seems quite possible that in the near future the human lifespan will become (much) longer. However, so far these efforts have presumably not been crowned with success; if they had, we would all know about it. So, the secret of extremely long life is still hard to find!

Electronic supplementary material

Updated versions of LATool, the MATLAB toolbox for life length analysis, and of Figure 5 and Tables 2-5 in Rootzén and Zholud (2017).
Comparison of orders generated by Ky Fan type inequalities for bivariate means

In this paper, we deal with seven types of Ky Fan type relations between bivariate, symmetric and homogeneous means. For each relation we determine necessary and sufficient conditions for means to be in this relation. Additionally, we investigate the dependencies between these relations.

A little bit of history

Let A_n, G_n and H_n denote the arithmetic, geometric and harmonic means of positive arguments x = (x_1, ..., x_n). It is well known that for arbitrary x the inequalities H_n(x) ≤ G_n(x) ≤ A_n(x) hold. In the 1960s it was discovered that for x ∈ (0, 1/2]^n

H_n(x)/H_n(1 − x) ≤ G_n(x)/G_n(1 − x) ≤ A_n(x)/A_n(1 − x),

where 1 − x = (1 − x_1, ..., 1 − x_n). Beckenbach and Bellman [5] attributed the right-hand side of this elegant result to Ky Fan, while the left inequality was proved by Wang and Wang [15]. Later on, similar results were obtained for other multivariate, homogeneous means, also in weighted cases, see e.g. [8,14]. As the three classical means are members of the monotone family of power means A_{r,n}(x) = A_n^{1/r}(x^r), where x^r = (x_1^r, ..., x_n^r) and A_{0,n} = G_n, it is natural to ask whether similar inequalities hold for them. Chan et al. [6] discovered that for r < s the inequality

A_{r,n}(x)/A_{r,n}(1 − x) ≤ A_{s,n}(x)/A_{s,n}(1 − x)

holds only if the number of variables equals 2. For more details see the review paper by Alzer [4]. Neuman and Sándor [11] proved the following sequence of inequalities for bivariate means: for x, y ∈ (0, 1/2],

L(x, y)/L(1 − x, 1 − y) ≤ P(x, y)/P(1 − x, 1 − y) ≤ NS(x, y)/NS(1 − x, 1 − y) ≤ T(x, y)/T(1 − x, 1 − y),   (1)

where L, P, NS, T stand for the logarithmic, first Seiffert, Neuman-Sándor and second Seiffert means. A further chain of inequalities, (2), looks like an intriguing companion of (1); its outermost inequality is due to Sándor [13], while the refinement was found by Alzer [2]. In [12] Neuman and Sándor developed a method that allowed them to deduce yet another chain of inequalities, (3), from (2). The additive counterparts of (1) and (3) were found by Alzer [1,3]. What is even more important from our point of view, Alzer also found out that the analogous inequality between the geometric and harmonic means is not valid. Three more types of inequalities between the classical means have been investigated by mathematicians. For x = (x_1, ..., x_n), in [7] we find the counterpart of the classical Ky Fan inequalities, and there is also a version for differences of reciprocals of means, proven in [9]. Finally, the additive analogue follows from Jensen's inequality applied to log(1 + exp y) with the substitution y_i = log x_i [10, 3.2.34], while its companion can be obtained by applying the Jensen inequality and the substitution y_i = 1/x_i to the function y/(1 + y). As in the Ky Fan case, a similar inequality between the geometric and harmonic means cannot be established.

What is this paper about?

In the previous section we presented seven types of inequalities between the classical means: arithmetic, geometric and harmonic. We also know that similar inequalities can be established for other bivariate means. In this paper we focus on the family of bivariate, symmetric and homogeneous means defined on R_+. We shall denote this family by S. Each type of inequality defines a certain relation on S; e.g. the standard inequality M ≤ N leads to the relation {(M, N) : M(x, y) ≤ N(x, y) for all x, y > 0}. The aim of this paper is twofold: (a) for each relation we determine necessary and sufficient conditions for means to be in this relation, and (b) we investigate the dependencies between these relations.
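As a quick numerical sanity check of the classical Ky Fan chain recalled above (a sketch with randomly drawn points in (0, 1/2]^n; not part of the paper's argument):

import numpy as np

rng = np.random.default_rng(0)
A = lambda x: x.mean()
G = lambda x: np.exp(np.log(x).mean())
H = lambda x: 1.0 / (1.0 / x).mean()

for _ in range(10_000):
    x = rng.uniform(0.01, 0.5, size=5)
    rH, rG, rA = (M(x) / M(1 - x) for M in (H, G, A))
    assert rH <= rG + 1e-12 and rG <= rA + 1e-12   # H/H' <= G/G' <= A/A'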
The conditions in (a) will be expressed in the uniform language of Seiffert functions (see Definition 3.1), which will allow them to be easily compared and will be used to prove the dependencies. The first results in this direction were established by one of the authors in [17]. Another essential class of functions we use in this paper is the class of Ky Fan functions, recalled below.

Definition 3.1 A function f : (0, 1) → R satisfying

z/(1 + z) ≤ f(z) ≤ z/(1 − z) for all z ∈ (0, 1)   (4)

is called a Seiffert function.

If for x ≠ y we denote z = |x − y|/(x + y), then the important relation between means and Seiffert functions is given by the following identity from [16]:

M(x, y) = |x − y|/(2m(z)).   (5)

This shows the one-to-one correspondence between means and Seiffert functions, given by the formula m(z) = z/M(1 + z, 1 − z). The name of the Seiffert function comes from Heinz-Jürgen Seiffert, who was the first to show that P(x, y) = |x − y|/(2 arcsin(|x − y|/(x + y))) and T(x, y) = |x − y|/(2 arctan(|x − y|/(x + y))) are means (called today the first and the second Seiffert means). The means will be denoted by uppercase letters, while lowercase ones will denote the corresponding Seiffert functions. We shall use sans-serif font to denote the well-known means and their Seiffert functions. In particular, since min(x, y) ≤ M(x, y) ≤ max(x, y) for every mean, formula (5) shows that m satisfies (4), which reflects the condition fulfilled by every mean. Let us note two obvious properties of Seiffert functions that follow immediately from (4): f is positive, and f(z) → 0 as z → 0+. If there is no risk of ambiguity, we shall skip the argument of means. Next we recall the definition of Ky Fan functions from [17]: a function f : (0, 1) → R is called a Ky Fan function if f(t) ≤ f(a) whenever 0 < a < 1 and 0 < t < a/(2a + 1). Clearly every nondecreasing function is a Ky Fan function, but this class is much broader. For example, if f is nondecreasing in (0, 1/3) and f(x) ≥ f(1/3−) for x ≥ 1/3, then f is also a Ky Fan function. The following property will be useful.

Property 3.2 If f is a Ky Fan function and lim_{x→0+} f(x) ≥ 0, then f ≥ 0.

The Ky Fan functions will play an important role in our considerations due to the following fact, proven in [17].

Theorem 3.1 For a function f : (0, 1) → R the following conditions are equivalent:
(i) f is a Ky Fan function;
(ii) f(|x − y|/((1 − x) + (1 − y))) ≤ f(|x − y|/(x + y)) for all x ≠ y with x, y ∈ (0, 1/2].

We shall give here another useful equivalence.

Theorem 3.2 For a function f : (0, 1) → R the following conditions are equivalent:
(i) f is nondecreasing;
(ii) f(|x − y|/((1 + x) + (1 + y))) ≤ f(|x − y|/(x + y)) for all x ≠ y with x, y > 0.

Proof Observe that |(1 + x) − (1 + y)|/((1 + x) + (1 + y)) assumes all values between 0 and a as x varies from zero to infinity while a = |x − y|/(x + y) remains fixed. Therefore (ii) can be written as follows: for each 0 < a < 1, sup_{t<a} f(t) ≤ f(a), which yields (i).

∧-shaped functions will appear quite often in our examples, so it is good to have an easy criterion to check whether such a function is Ky Fan.

Lemma 3.1 For a ∧-shaped function f the following conditions are equivalent:
(a) f is a Ky Fan function;
(b) f(a/(2a + 1)) ≤ f(a) for every a ∈ (0, 1).

Proof Choose arbitrary 0 < a < 1 and 0 < t < a/(2a + 1), and suppose (b) holds. Then one checks, using the ∧-shape of f, that sup_{0<t<a/(2a+1)} f(t) ≤ f(a), so f is Ky Fan. If (b) does not hold, then by continuity of f one can easily find an a close to 1 such that f(a/(2a + 1)) is strictly greater than f(a), so the Ky Fan condition is not satisfied.

Let us note one more fact, quite surprising, but extremely important in our considerations.

Lemma 3.2 Every Seiffert function is a Ky Fan function.

Proof Choose arbitrary 0 < a < 1 and 0 < t < a/(2a + 1). Then, by (4), every Seiffert function f satisfies f(t) ≤ t/(1 − t) < (a/(2a + 1))/(1 − a/(2a + 1)) = a/(a + 1) ≤ f(a).

Let us now define the relations between means.

Definition 3.4 For means M, N ∈ S we define seven relations, (6)-(12): the standard inequality M ≤ N; the Ky Fan type relations M ≺C N, M ≺R N and M ≺A N, obtained by comparing ratios, differences of reciprocals, and differences of the values of M and N at (x, y) and at (1 − x, 1 − y); and their counterparts M ≺+C N, M ≺+R N and M ≺+A N, in which (1 − x, 1 − y) is replaced by (1 + x, 1 + y).

Necessary and sufficient conditions

In this section, we show how the different types of inequality between means can be expressed by properties of their Seiffert functions. These results will be the basis for comparing the relationships between the different types of inequality. Recall that by M, N we denote means, and by m, n their respective Seiffert functions. Let us start with the most basic.

Theorem 4.1 M ≤ N if and only if n ≤ m.

Proof Follows immediately from (5).

The following three theorems concern Ky Fan type inequalities.

Theorem 4.2 [17, Theorem 3] Let M, N be means and m, n their Seiffert functions. The following conditions are equivalent: (a) M ≺C N; (b) m/n is a Ky Fan function.

Theorem 4.3 [17, Theorem 4] Let M, N be means and m, n their Seiffert functions. The following conditions are equivalent: (a) M ≺R N; (b) m − n is a Ky Fan function.

Theorem 4.4 Let M, N be means and m, n their Seiffert functions. The following conditions are equivalent: (a) M ≺A N; (b) 1/m − 1/n is a Ky Fan function.

Proof By (5), the inequality in (a) can be written as a comparison of the values of 1/m − 1/n at |x − y|/((1 − x) + (1 − y)) and at |x − y|/(x + y), and application of Theorem 3.1 completes the proof.
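A compact summary of the correspondence used throughout these theorems (a sketch consistent with (4), (5) and the examples in the text; the explicit normalization m(z) = z/M(1 + z, 1 − z) is inferred from a(z) = z and t(z) = arctan z rather than quoted from the paper):

\[
  z = \frac{|x-y|}{x+y}, \qquad
  M(x,y) = \frac{|x-y|}{2\,m(z)}, \qquad
  m(z) = \frac{z}{M(1+z,\,1-z)},
\]
\[
  A(x,y) = \frac{x+y}{2} \;\Longrightarrow\; a(z) = z, \qquad
  T(x,y) = \frac{|x-y|}{2\arctan z} \;\Longrightarrow\; t(z) = \arctan z .
\]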
And now three theorems where M(1 + x, 1 + y) is involved.

Theorem 4.5 Let M, N be means and m, n their Seiffert functions. The following conditions are equivalent: (a) M ≺+C N; (b) m/n is a nondecreasing function.

Proof Using (5), the inequality M(x, y)/M(x + 1, y + 1) ≤ N(x, y)/N(x + 1, y + 1) can be written as (m/n)(|(1 + x) − (1 + y)|/((1 + x) + (1 + y))) ≤ (m/n)(|x − y|/(x + y)), and application of Theorem 3.2 completes the proof. Proofs of the following two theorems are similar.

Theorem 4.6 Let M, N be means and m, n their Seiffert functions. The following conditions are equivalent: (a) M ≺+R N; (b) m − n is a nondecreasing function.

Theorem 4.7 Let M, N be means and m, n their Seiffert functions. The following conditions are equivalent: (a) M ≺+A N; (b) 1/m − 1/n is a nondecreasing function.

Examples

Before formulating our results regarding the dependencies between the relations (6)-(12), let us consider some examples of means and investigate their properties.

Example 5.1 Consider two Seiffert functions and their corresponding means K_0 and K_1. Consider now the family of weighted harmonic means of K_0 and K_1, denoted K_t, where 0 < t < 1; the reader will easily verify their Seiffert functions k_t. We will investigate relations between K_t and the arithmetic mean A (recall that a(z) = z). Let us begin with the function (k_t − a)(z). Next consider k_t/a: it strictly increases for t ≤ 1/3 and is ∧-shaped otherwise. Using Lemma 3.1 we see that k_t/a is a Ky Fan function if, and only if, t ≤ 3/7. Finally we come to the function 1/k_t − 1/a. We easily calculate that it is a Ky Fan function for all t ∈ (0, 1). Note that it is negative for 0 < t < 1/2 and changes sign for 1/2 < t < 1. Using Theorems 4.1-4.7 we can summarize this example as follows. Our second example will be quite similar, but the differences are essential.

Example 5.2 Consider two Seiffert functions and their corresponding means and, as in Example 5.1, create a family of weighted harmonic means J_t, where 0 < t < 1, with Seiffert functions j_t. The function a − j_t is increasing for t ≤ 3/8 and ∧-shaped otherwise. Thus we conclude that a − j_t is nonnegative if, and only if, t ≤ 1/2, and is a Ky Fan function if, and only if, t ≤ 117/238 ≈ 0.4916. Let us establish the properties of a/j_t: the function a/j_t is strictly increasing for t ≤ 1/3 and ∧-shaped otherwise. Using Lemma 3.1 we discover that it is Ky Fan for t ≤ 9/19 ≈ 0.4737. Finally we consider 1/j_t − 1/a. We see that (1/j_t − 1/a)(0+) = 0. We shall show that it is either increasing or ∧-shaped. Let us calculate the derivative; its numerator, a polynomial L_t, has a real negative root. Our goal is to show that it does not have more than one root in the interval (0, 1). Consider two cases: if for some t the polynomial L_t had two additional roots in the interval (0, 1), then its derivative would be positive to the right of the largest of them, but this is impossible. Therefore the derivative either does not change sign or changes sign exactly once, and 1/j_t − 1/a is increasing for t below approximately 0.2753 and ∧-shaped otherwise. Using Lemma 3.1 we then determine for which t it is a Ky Fan function.

Example 5.3 We rewrite the inequalities (4) in an equivalent form and use them to define the Seiffert function m_c(z) = c²z/(c² + z²(2c − z)). The definition of m_c implies that for all c we have m_c ≤ a. Let us recall that a(z) = z, so (a/m_c)(z) = 1 + z²(2c − z)/c². The expression (d/dz)[z²(2c − z)] has two zeroes, at z_0 = 0 and z_1 = 4c/3. Therefore a/m_c is strictly increasing if c ≥ 3/4 and is ∧-shaped otherwise. Applying Lemma 3.1 we see that it is a Ky Fan function if and only if c > 13/24. Consider now the difference a − m_c. We are interested in the sign of its derivative, which is the same as the sign of u_c(z) = z⁴ − 4cz³ + 4c²z² − 4c²z + 6c³. One finds that a − m_c is strictly increasing for c ≥ 1 and, by Lemma 3.1, is a Ky Fan function for c ≥ 2/3.

We have lim_{z→0+}(m/n)(z) = 1. This fact, Property 3.2 and the properties of monotonic functions, in conjunction with Theorems 4.2, 4.5, 4.3 and 4.6, show that the relations ≺C, ≺R, ≺+C, ≺+R are also antisymmetric. Therefore ≤, ≺C, ≺R, ≺+C, ≺+R define partial orders on the set of means. Surprisingly, the relations ≺A and ≺+A are not reflexive. Let us spend a while investigating this interesting phenomenon. Let us rewrite the definition of Seiffert functions (4) in an equivalent form. The relations ≺A and ≺+A are determined by the difference of reciprocals of Seiffert functions, and Theorems 4.4 and 4.7 apply for any c. Now we shall prove some lemmas about properties of our relations.
Lemma 6.1 If M ≺+C N, then M ≺C N; similarly, M ≺+R N implies M ≺R N, and M ≺+A N implies M ≺A N.

Proof Follows immediately from Theorems 3.1, 3.2 and the fact that every nondecreasing function is a Ky Fan function.

Lemma 6.2 If M, N are means and M ≺R N or M ≺+R N, then M ≤ N.

Proof By Theorem 4.3 the difference of the corresponding Seiffert functions, m − n, is a Ky Fan function, and since lim_{z→0+}(m − n)(z) = 0 we have m − n ≥ 0 by Property 3.2, so M ≤ N by Theorem 4.1.

Lemma 6.3 If M, N are means and M ≺C N or M ≺+C N, then M ≺R N.

Proof Suppose M ≺R N does not hold. Then for some 0 < a < 1 and 0 < t < a/(2a + 1) the difference of the corresponding Seiffert functions satisfies (m − n)(t) > (m − n)(a), while (m/n)(t) ≤ (m/n)(a) by Theorem 4.2. This yields n(t) > n(a), which is impossible, since n is a Seiffert function and thus Ky Fan.

Example 5.5 shows that the additive relations imply neither the classical nor the reciprocal relations. One of the main reasons for this is that two means can be in an additive relation even if they are not comparable in the ordinary sense. But looking at Example 5.1 (0.48 < t < 0.5) we see that even if the difference of reciprocals of Seiffert functions preserves its sign, the classical and reciprocal relations may not hold. As we shall see from the lemma below, the reason is that the means in Example 5.1 are comparable in the wrong direction.

Lemma 6.4 If M, N are means and M ≺A N and N ≤ M, then N ≺C M and N ≺R M.

Proof If 0 < a < 1 and 0 < t < a/(2a + 1), then for the corresponding Seiffert functions we have

0 ≤ (n(t) − m(t))/(m(t)n(t)) ≤ (n(a) − m(a))/(m(a)n(a)).

Multiplying this by n(t) ≤ n(a) and adding 1 we get (n/m)(t) ≤ (n/m)(a), which is equivalent to N ≺C M. By Lemma 6.3, N ≺R M is also valid.
Table 1 shows the correspondences between the relations. An empty cell in row R1 and column R2 means that a R1 b implies neither a R2 b nor b R2 a for all a, b. The symbol ⇒ in row R1 and column R2 means that a R1 b implies a R2 b for all a, b. The symbol ⇐ in row R1 and column R2 means that a R1 b implies b R2 a for all a, b. Letters refer to the justifications listed below. The last line has been added to illustrate the special case described in Lemma 6.4.

Justifications: (a) take t = 1/2 in Example 5.2; (b) Lemma 6.2; (c) take t = 0.48 in Example 5.2; (d) Lemma 6.1 (every nondecreasing function is Ky Fan); (e) Lemma 6.3; (f) take t = 0.42 in Example 5.2; (g) Example 5.5; (h) take t = 0.3 in Example 5.2; (i) Example 5.7; (j) take c = 0.65 in Example 5.3; (k) take c = 0.9 in Example 5.3; (l) Example 5.8; (m) take t = 0.35 in Example 5.2; (n) take t = 0.4 in Example 5.2; (o) Lemma 6.4.

Table 1 Correspondences between relations
Reconsidering plasmid maintenance factors for computational plasmid design

Plasmids are genetic parasites of microorganisms. The genomes of naturally occurring plasmids are expected to be polished via natural selection to achieve long-term persistence in the microbial cell population. However, plasmid genomes are extremely diverse, and the rules governing plasmid genomes are not fully understood. Therefore, computationally designing plasmid genomes optimized for model and nonmodel organisms remains challenging. Here, we summarize current knowledge of plasmid genome organization and the factors that can affect plasmid persistence, with the aim of constructing synthetic plasmids for use in gram-negative bacteria. Then, we introduce publicly available resources, plasmid data, and bioinformatics tools that are useful for computational plasmid design.

Introduction

Plasmids are autonomously replicating DNA molecules present in microorganisms. Plasmids are also known to be mobile genetic elements that can be horizontally transferred among different organisms [1,2]. Plasmids can be considered genetic parasites in the sense that their reproduction depends to some extent on their host and that they do not necessarily share the fate of a specific cell lineage, as they are horizontally transmissible. Plasmids have been used as primary genetic tools for exogenous DNA expression and microbial metabolic engineering. The importance of plasmid vectors has increased in recent years [3]. Currently, the plasmid genome is difficult to design computationally because the elements contained within plasmids are not conserved across plasmid groups. Additionally, a number of factors affect plasmid persistence. Understanding the key factors affecting replication and stable maintenance of plasmids in a host cell population is essential to control plasmids as synthetic vectors. Conversely, construction of synthetic vectors based on our knowledge and testing of their persistence in a model host could indicate how far we have to go to understand undiscovered plasmid maintenance factors. If designed plasmids are stably maintained, it follows that the selected elements (genes, intergenic regions, etc.) play a positive role in plasmid persistence. In this review, we summarize current knowledge of the key factors that affect plasmid persistence and then introduce publicly available resources (plasmid data and bioinformatics tools) potentially useful for designing synthetic plasmids, aiming at their use in Escherichia coli and other gram-negative bacteria. Reviews of the mechanisms of action of each element of a plasmid's basic functions can be found elsewhere ([4,5] for partition, [6,7] for transfer, [8-10] for replication, and [11] for toxin-antitoxin mechanisms). In this review, incompatibility (Inc) group classification is used to refer to plasmid groups [12,13]. Inc groups and representative plasmid vectors relevant to gram-negative bacteria are listed in Table 1. Different plasmids belonging to the same Inc group are incompatible and unable to be inherited in a single bacterial cell line. We note, however, that there are also conditions in which very similar or identical replicons can co-exist in the same cell [14]. Some Inc groups defined in Pseudomonas are equivalent to those defined in Escherichia coli; for example, IncP-1, IncP-3, IncP-4, and IncP-6 are equivalent to IncP, A/C, IncQ, and IncG/U, respectively [15,16].
Key Factors in Plasmid Design

Based on recent progress in plasmid biology and bioinformatics, we consider three factors that should be taken into account to design a synthetic plasmid (Table 2): 1) plasmid gene content; 2) interaction with the host (host factors and the fitness cost imposed by plasmids); and 3) constraints in the genome (size, sequence composition [e.g., G + C content, oligonucleotide composition, and codon usage], and gene direction). These factors are described in detail in the subsequent sections.

Defining the Plasmid Core

Plasmids show gene content variations, even within the same Inc group [17]. Thus, plasmids are likely to experience gene gain and loss over evolutionary time [18,19]. A comparative analysis of closely related taxa can categorize a genome into two parts: (i) "core" genes conserved in all members within a defined group (e.g., bacterial species, Inc group, etc.), and (ii) "noncore" genes absent in some members within the group. Being a core gene does not necessarily mean that the gene positively contributes to plasmid maintenance in particular hosts, but it suggests that the gene sets have co-evolved together since the divergence from the most recent common ancestor. The long-term co-evolution of core genes can result in the formation of an operon with a coordinated regulatory system that balances the efficiency of horizontal and vertical transmission [20-22]. These core genes may be linked together upon construction of a vector. A recent analysis of recombination tracts in the plasmid core genome highlighted a block of evolutionarily linked genes [23]. These findings also suggest that the plasmid core undergoes recombinational allelic exchange within the group on an evolutionary time scale. Core and noncore genes can be identified by homologous gene clustering for a defined plasmid group, e.g., using all-against-all protein sequence comparisons with BLASTP [24]. We previously found that homolog clusters specific to each of the six Inc groups (F, H, I, N, P-1, and W) (Table 1) were involved in plasmid replication, partition, and transfer [17]. Based on the BLASTP (E-value < 1e−5) comparison, replication initiation (Rep) proteins for the six Inc groups (RepB and RepE for IncFI, RepA for IncFII, RepHIA for IncH, RepZ for IncI, RepA for IncN, TrfA for IncP-1, and RepA for IncW) formed distinct homolog clusters (exceptions were RepB and RepHIA, which formed a single homolog cluster) and were conserved in all members within each of the Inc groups.
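A minimal sketch of how such core-gene detection can be implemented (our illustration, not the pipeline of [17]: it assumes an all-against-all BLASTP run already filtered at E-value < 1e−5 and saved in tabular format, plus a hypothetical protein_to_plasmid mapping; proteins are clustered by single-linkage over significant hits, and clusters represented on every plasmid of the group are reported as core):

import csv
from collections import defaultdict

def core_clusters(blast_tsv, protein_to_plasmid):
    # Union-find over protein IDs; each significant BLASTP hit merges
    # the query's and subject's clusters (single-linkage clustering).
    parent = {p: p for p in protein_to_plasmid}
    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p
    with open(blast_tsv) as fh:
        for row in csv.reader(fh, delimiter="\t"):   # -outfmt 6: qseqid, sseqid, ...
            q, s = row[0], row[1]
            if q in parent and s in parent:
                parent[find(q)] = find(s)
    clusters = defaultdict(list)
    for p in parent:
        clusters[find(p)].append(p)
    # "Core" = cluster with at least one member from every plasmid in the group.
    all_plasmids = set(protein_to_plasmid.values())
    return [members for members in clusters.values()
            if {protein_to_plasmid[p] for p in members} == all_plasmids]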
Functional Modules Comprising a Plasmid

Gene products which contribute to plasmid maintenance in bacterial hosts require cis-acting sites to elicit their functions. In this review, a functional module is defined as a pair consisting of gene products and their acting site on a plasmid. Each functional module often contains its own regulatory function. In such cases, the elements of each functional module should not be separated upon construction of a synthetic plasmid. Below, we briefly describe the features of representative functional modules comprising a plasmid, i.e., the replication module, partition module, toxin-antitoxin module, multimer resolution module, DNA transfer module, and antirestriction module. Plasmid genomes are often considered an assembly of these functional modules (Fig. 1). Plasmid functional modules are potential sources of biological parts for synthetic biology projects, such as BioBrick [26] and SEVA [27].

Replication Module

Plasmids can carry two types of replication origins; one is a vegetative origin (oriV), and the other is a transfer origin (oriT). In this section, we describe the replication module that uses oriV. In ColE1-type plasmids, the replication module consists of oriV and the genes for two noncoding RNAs (RNA I and RNA II) and the Rop protein, which are produced from the region near oriV [28]. RNA II is converted to primer RNA (and thus acts as an initiator of replication), whereas RNA I and the Rop protein cooperatively inhibit RNA II maturation (and thus act as inhibitors of replication). The copy number of ColE1-type plasmids is maintained at around 10-15 copies/cell [9]. This type of replicon has been used in cloning vectors, including the pUC and pET vectors (Table 1). For pUC vectors, deletion of the Rop protein gene and a point mutation in RNA II result in a dramatic increase in copy number (500-700 copies/cell) [29].

[Table 1 footnotes: "-" indicates that the genes involved in conjugation have not been detected, whereas "NA" indicates that the nucleotide sequences of the plasmid are not available. (f) Plasmid host range determined based on genome sequencing projects (hosts in which a plasmid has been found) and/or filter mating assays. (g) Original hosts are unknown because exogenous plasmid capturing was used.]

The replication modules of so-called iteron-containing plasmids, e.g., RK2 and R6K (Table 1), consist of a replication initiation protein (Rep protein) gene and oriV, which are in general located next to each other on the plasmid (Fig. 2A). oriV contains a Rep protein-binding region (iterons), a host DnaA-binding region (DnaA boxes), and DNA unwinding elements (DUE), which are motifs in an A + T-rich region within oriV [8]. Rep proteins act as both initiators and inhibitors of replication [8,28,30]. Purified Rep proteins are mostly dimeric, whereas only the monomeric Rep protein is active in unwinding the DUE (Fig. 2B). DnaB helicase is loaded onto the unwound DUE in either a host DnaA-dependent or -independent manner. The Rep protein can also bind a specific strand of the unwound DUE and assist replisome assembly on one strand via direct interaction with the β-clamp, leading to unidirectional replication [31]. Dimeric Rep proteins prevent oriV melting by pairing iterons in a phenomenon called handcuffing (Fig. 2C) [32-34]. An increased monomer-to-dimer ratio dissociates the paired iterons [32,33]. A Rep protein mutant of the R6K π protein (the pir-116 allele) [35] lacks replication inhibition activity (it is unable to form dimers) and has been used to increase vector copy number in specific Escherichia coli cloning hosts [36]. The copy number of iteron-containing plasmids is normally 1-8 copies/chromosome [37]. Replication initiation from oriV usually requires host DnaA. Theta-type replication can be either uni- or bidirectional, whereas rolling circle replication is unidirectional [9]. Strand-displacement replication of IncQ plasmids is bidirectional [10]. In most plasmids, theta-type replication is unidirectional (exceptions include the linear Streptomyces plasmids [38]), and there is no replication termination site (exceptions include the plasmid R6K [9,39]).

Partition Module

Naturally occurring low copy number plasmids have active segregation mechanisms to avoid plasmid loss upon cell division. These mechanisms are equivalent to the function of the spindle apparatus in a eukaryotic cell [40]. Currently, three types of segregation mechanisms have been proposed [4,5].
Each system consists of a centromere site (often referred to as parS), a centromere-binding protein (ParB), and a motor protein (ParA). Here, we call the set of genes and sites for those elements a partition module. A centromere site is generally located directly upstream or downstream of the par genes [41,42]. The segregation mechanism employed by the type I partition system is shown in Fig. 3. In the P1 prophage (Table 1), the partition module consists of the parA-parB operon and its downstream parS region, which contains multiple ParB binding sites and a host IHF binding site [41]. ParA molecules bound to ATP (ParA*) can bind DNA non-specifically and thus localize to the nucleoid. The binding of ParB to ParA* activates ATP hydrolysis by ParA, disrupting the ability of ParA to bind DNA and releasing it from the nucleoid. Once ParA* is cleared, the ParB/plasmid complex diffuses through the nucleoid until it makes contact with ParA*. ParB/plasmid complexes in close proximity generate repulsive forces as they clear ParA* between them. Therefore, replicated plasmid copies are pulled to opposite ParA*-dense areas following the gradient of ParA* (Fig. 3) [5]. repABC family plasmids from Alphaproteobacteria [43] carry a replication module and a partition module in the same locus (repABC), and the repABC locus has been used as the vector core for certain types of vectors [44].

Toxin-antitoxin (TA) Module

Because plasmids are not tightly connected to the chromosome, which carries the genes essential for the bacterial host, cell division can generate plasmid-free cells. If the plasmid is lost upon cell division, the plasmid-free cells, which grow faster than plasmid-containing cells, can show an increase in relative population size. This phenomenon can be suppressed by a mechanism called postsegregational cell killing, wherein plasmids produce both a stable toxin and an unstable antitoxin that counteracts the toxin; plasmid loss results in increased toxin levels in the cells, leading to growth inhibition or death of plasmid-free cells [45]. The genetic module responsible for this phenomenon is called the TA module. TA modules can be categorized into six groups according to their mechanism of action [11,46,47]. The first TA system discovered was the hok/sok system of plasmid R1 (Table 1) [45], currently classified as a type I TA module, in which sok encodes an antisense RNA that inhibits the translation of the Mok protein, a regulator of the Hok toxin, which generates pores in the cell membrane. The hok/sok module of plasmid R1 was applied to improve vector maintenance in the chemostat [48]. The ccdA/ccdB module discovered in plasmid F (Table 1) [49] has also been used in biotechnology. The CcdB toxin inhibits the function of the host DNA gyrase. The ccdB gene has been used as a counter-selection marker [50], e.g., in Gateway cloning technology and in allele replacements in the chromosome [51,52]. By separating the toxin element and the antitoxin element of a TA module into the chromosome and vector, respectively, StabyCloning technology (Delphi Genetics) enables stable maintenance of a protein-expression vector in the Escherichia coli cell population.

Table 2. Key factors in the construction of a plasmid vector (factors and notes on what should be considered):
- Plasmid gene content: include a set of plasmid core genes; include a selection marker or a toxin-antitoxin system to prevent the generation of plasmid-free cells; include cis-elements, such as a centromere-like site and a resolution site.
- Interaction with host: select a basic replicon that has evolved in species closely related to the model host; a transcriptional regulator or NAPs (H-NS homologs) for plasmid genes could reduce the fitness cost imposed by the plasmid.
- Constraints in genome: the G + C content of the plasmid should match that of the host; highly expressed essential genes should be on leading strands.

Multimer Resolution Module

Replicated plasmid copies can recombine into dimers or higher multimers via homologous recombination; this negatively affects plasmid partition. Naturally occurring plasmids encode a genetic module to resolve this problem. Small mobilizable plasmids use host-encoded proteins (the site-specific recombinases XerC and XerD and the accessory proteins PepA and ArgR [53]) for their dimer resolution, and these plasmids carry only a cis-acting resolution site (e.g., cer for ColE1 and its related plasmids, psi for pSC101 [54-56]). Larger self-transmissible plasmids, e.g., IncP-1 plasmids, carry a host-independent multimer resolution module consisting of a site-specific recombinase (resolvase) gene and a resolution site that also functions as a regulatory region for the resolvase gene [57]. Lack of a resolution module on the plasmid appears to be eventually compensated for by the acquisition of a functionally equivalent cointegrate-resolution system of a Tn3 family transposon, according to observations in experimental evolution [58].

DNA Transfer Module and Antirestriction

Conjugative transfer is an important feature of plasmids that enables them to spread genetic information among bacteria (current paradigms for conjugation are summarized in [7]). There are self-transmissible plasmids, mobilizable plasmids, and nonmobilizable or nontransferrable plasmids [59]. The self-transmissible plasmids carry all the gene sets and a cis-acting site (oriT) required for mating pair formation and DNA processing, whereas mobilizable plasmids carry the genes and site only for DNA processing. The Ti plasmid of the genus Agrobacterium carries two types of DNA transfer modules: (i) tra/trb operons for DNA transfer between bacteria and (ii) a vir operon for DNA transfer between bacteria and plants [60]. Plasmids from gram-negative bacteria generally use a type IV secretion system for DNA transport, whereas some plasmids from gram-positive bacteria use different DNA transport mechanisms [7,61,62].

[Fig. 3 caption, panels E and F: (E) When replicated plasmid copies are present in close proximity, a ParA*-free area is generated between them. Each ParB/plasmid complex diffuses until finding its closest ParA*; thus, their interactions are repulsive. (F) ParB/plasmid complexes are pulled to ParA*-dense areas at opposite ends, following the gradient of ParA*. Illustration follows [5], with modifications.]

Non-self-transmissible plasmids, including the IncQ plasmids represented by RSF1010 (Table 1), can be mobilized by self-transmissible plasmids, e.g., by the IncP-1 plasmid RK2 [63]. oriT has been embedded in some cloning vectors to mobilize the vectors into various hosts for which transformation methods have not been established or are inefficient [64-66]. Plasmid gene content analysis revealed that the complete gene set responsible for self-transmissibility is not necessarily conserved across members of each self-transmissible plasmid group, e.g., IncW and IncP-1 [17].
Interestingly, a gene encoding an antirestriction protein, which blocks the host's restriction system upon plasmid entry into new hosts, was found to be an element of the plasmid core in IncP-1 and IncW [17]. ArdB, KlcA, ArdA, and ArdC homologs can confer antirestriction against the host's type I restriction-modification system [67-69]. These antirestriction genes may be important for the transfer of synthetic plasmids between different bacterial lineages.

Testing the Functionality of Functional Modules

To evaluate the contribution of each functional module to plasmid maintenance, a set of highly unstable broad-host-range plasmid vectors based on the RK2 replicon of the IncP-1 group has been constructed [70]. For example, the functionality of the partition module of an IncU plasmid (Table 1), the chromosome partitioning system of Pseudomonas aeruginosa, and the hipAB TA system of the Paracoccus kondratievae plasmid have been confirmed using these vectors [70].

Selection Markers

Antibiotics have traditionally been used to select plasmid-containing cells in culture in the laboratory. Mainly for biosafety reasons, various antibiotic marker-free selection approaches have been developed [3,71,72]. Some of the tricks used in such approaches are based on plasmid-derived elements: for example, the RNA I and RNA II of plasmid ColE1 have been used in an antibiotic-free host-vector system [73].

Interactions with the Host

Early biochemical studies and recent experimental evolution studies have suggested the importance of host factors and of the fitness cost of plasmid carriage. These factors are discussed below.

Host Factors

Most plasmids require the host's replication initiator DnaA and a DNA helicase encoded by the host chromosome or the plasmid itself upon replication initiation from oriV [9]. Whether plasmids can load DNA helicase at the oriV using DnaA or the plasmid's Rep protein determines the capability of plasmid replication in the host cells and the replication host range [74,75]. Nucleoid-associated proteins (NAPs), such as the histone-like nucleoid-structuring protein (H-NS), are known to make the DNA structure more compact [76]. Moreover, chromosomally encoded NAPs have been shown to affect gene expression from the IncP-7 plasmid pCAR1 [77,78].

Fitness Cost Imposed by Plasmids

When plasmids are introduced into novel hosts, they initially impose a fitness cost on the hosts and are thus not necessarily stably maintained, particularly in laboratory systems [72,79,80]. It should be noted that in nature, plasmids can persist without positive selection, despite their detectable costs in laboratory systems [72]. Resequencing of experimentally evolved plasmid-host pairs in several independent studies suggests that initial interactions between host genes and plasmid genes are unfavorable for the host's growth. Although the cause of the cost can differ among plasmid-host pairs, reduced interaction appears to improve host growth and plasmid maintenance [81-83]. These observations are consistent with the complexity hypothesis, which states that the number of interaction partners predicts the horizontal transfer ability of a gene [84,85]. Using a series of antibiotic resistance genes as a model of horizontally acquired genes, Porse et al. [86] demonstrated that physiological interaction of the gene products with hosts imposes a greater cost than nucleotide signals, e.g., G + C content and codon usage.
The causes of cost are likely related to the kinds of interactions summarized elsewhere [87] (e.g., disruptive interactions with cellular networks). Currently, it is difficult to predict which interactions negatively affect host fitness and plasmid persistence for an arbitrarily chosen host-plasmid pair. Experimental evolution may help reduce the fitness cost imposed by a synthetic plasmid. Transcriptome disturbance by a plasmid in a new host is initially high, but is reduced during fitness cost amelioration [81,82]. Moreover, plasmids encoding H-NS-like stealth proteins reduce their fitness cost, probably by silencing the transcriptional activities of genes in A + T-rich regions through the binding of H-NS-like proteins [78,88]. In contrast to smaller or nontransmissible plasmids, larger and transferable plasmids carry multiple NAP genes [89,90]. Three different NAPs encoded on plasmid pCAR1 are involved in plasmid stability and its conjugation in the host cells [91]. Therefore, minimizing unnecessary transcription may be important for minimizing the cost imposed by plasmids.

Constraints in the Genome

Bioinformatics analysis has revealed constraints in plasmids with respect to size, sequence composition (G + C content, oligonucleotide composition, and codon usage), and gene direction. These features may be a result of plasmid-host co-evolution, which can stabilize plasmids in host cell populations. It is important to note that the sequence composition can vary among genes/segments within a plasmid/genome [23,92,93].

Size Constraint

The size distribution of sequenced plasmids available in public databases has been studied. For example, the sizes of the 4,602 completely sequenced plasmids ranged from 744 bp to 2.58 Mb with a mean value of 80 kb, and the mean size of mobilizable plasmids was smaller than that of transmissible plasmids [59]. Among the 92 plasmids from the IncF, IncH, IncI, IncN, IncP-1, IncW, A/C, IncL/M, IncP-9, IncQ, IncU, PromA, and Ri/Ti groups used in Suzuki et al. [17], the sizes of the non-self-transmissible IncQ plasmids (median size of 8.7 kb) were smallest. Among the self-transmissible plasmids belonging to the six Inc groups F, H, I, N, P-1, and W, the median size was highest for the IncH group (241 kb), followed by those of the IncF (110 kb), IncI (101 kb), IncP-1 (66 kb), IncN (64 kb), and IncW (39 kb) groups (Fig. 4). Because each plasmid group has a specific range of genome sizes, it may be important to keep the plasmid size in the appropriate range considering the replicon type used in the vector. Plasmid size may be associated with copy number. For the 11 plasmids found in Bacillus thuringiensis strain YBT-1520, the plasmid sizes (ranging from 2 to 416 kb) and the copy numbers determined by quantitative polymerase chain reaction (ranging from 1.38 to 172) were negatively correlated [94]. Plasmid F, a member of IncF (median size: 110 kb), is present at 1 or 2 copies per chromosome, whereas the copy number of RK2, a member of the IncP-1 group (median size: 66 kb), is 3-5 copies/chromosome (in the presence of the large replication protein TrfA1) or 1-2 copies/chromosome (without TrfA1) [95]. Plasmid pR28, a member of the IncP-9 group (median size: 83 kb), has a copy number of 1.6-3.7/chromosome [58]. Copy numbers of the IncQ mobilizable plasmids (median size: 8.7 kb) are 10-16/chromosome [96]. Copy numbers of ColE1-related plasmids are 20-44/chromosome [87,97].
Conlan et al. (2014) determined the copy numbers of plasmids in Enterobacteriaceae (3 A/C, 6 IncF, 1 IncHI2, 8 IncN, and other plasmids) from the average sequence coverage (depths of PacBio and MiSeq reads) of each plasmid relative to that of the chromosome and showed that the copy numbers were 1-3/chromosome [98]. Plasmid copy number estimates can vary depending on the bacterial growth conditions and DNA extraction methods used [97,99]. Therefore, copy number data should be interpreted carefully. To the best of our knowledge, there is no database that catalogs plasmid copy numbers in various hosts under the same experimental conditions. The elucidation of clear features of plasmid maintenance functions associated with copy number or replicon type requires further investigation.

G + C Content

G + C contents vary widely among bacterial genomes, putatively reflecting a balance among biases generated by mutation and selection [100]. Because bacterial genomes have small regions of noncoding DNA and more protein-coding constraints on first- and second-codon positions than on third-codon positions, most of the variation is due to synonymously variable third-codon positions [101,102]. Growth rate experiments in Escherichia coli and Caulobacter crescentus showed that decreased genic G + C contents at synonymous sites have negative effects on bacterial fitness when gene expression levels are induced [100,103]. Previous studies have reported that small bacterial genomes tend to exhibit low G + C contents, with some exceptions [104], and that intracellular symbionts, such as plasmids and phages, tend to have lower G + C contents than their hosts [92,105,106]. For the 209 plasmids and their host chromosomes, the G + C contents are highly correlated, and in 164 (78.5%) of the cases plasmids had lower G + C contents than their hosts (Fig. 5). Possible explanations for the lower G + C contents of plasmids relative to their hosts include selection for plasmids that tolerate gene silencing by host H-NS [88,107] and reduced nucleotide synthesis costs [105]. Thus, it may be important that the G + C contents of synthetic plasmids match those of the host chromosomes.

Oligonucleotide Composition

The composition of oligonucleotides, such as di-, tri-, and tetranucleotides (also known as k-mers, e.g., 2-, 3-, and 4-mers), has been studied for the characterization and classification of various organismal genomes [108,109]. Plasmids have oligonucleotide compositions similar to those of their host chromosomes [93,109]. The compositional similarity of plasmids and their hosts suggests that plasmids have acquired their hosts' nucleotide compositions due to amelioration by host-specific mutational biases [110]. Thus, possible plasmid-host pairs are predictable based on the similarity of their oligonucleotide compositions [17]. Earlier studies investigated sequence motifs in the IncP-1 plasmids RK2 [111] and R751 [112] and suggested that some sequence motifs (e.g., tetranucleotide and hexanucleotide palindromic sequences acting as restriction-modification sites) may have been eliminated from plasmids through natural selection. Computational analysis of oligonucleotide compositions has been used to identify novel regulatory DNA sequence motifs [113], some of which may be important for stable plasmid maintenance.
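The plasmid-host matching idea can be illustrated with a toy k-mer comparison (a sketch, not the method of [17]: each sequence is represented by its normalized 4-mer frequency vector, and candidate hosts are ranked by Euclidean distance):

from itertools import product
import numpy as np

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
INDEX = {k: i for i, k in enumerate(KMERS)}

def kmer_profile(seq, k=4):
    # Normalized 4-mer frequency vector; windows containing ambiguous
    # bases (e.g. N) are skipped.
    v = np.zeros(len(KMERS))
    seq = seq.upper()
    for i in range(len(seq) - k + 1):
        j = INDEX.get(seq[i:i + k])
        if j is not None:
            v[j] += 1
    return v / max(v.sum(), 1.0)

def rank_hosts(plasmid_seq, host_chromosomes):
    # host_chromosomes: hypothetical dict of host name -> chromosome sequence.
    p = kmer_profile(plasmid_seq)
    dist = {h: float(np.linalg.norm(p - kmer_profile(s)))
            for h, s in host_chromosomes.items()}
    return sorted(dist, key=dist.get)   # most compositionally similar host first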
Codon Usage

In bacteria such as Escherichia coli and Bacillus subtilis, highly expressed genes (e.g., those encoding translation elongation factors and ribosomal proteins) tend to preferentially use the subset of synonymous codons that are best recognized by the most abundant tRNA species [118,119]. This is considered evidence of natural selection on synonymous codon usage for translational efficiency and/or accuracy (also called translational selection) [114,120]. Previous studies have indicated that the strength of translational selection on chromosomes varies among bacteria and that fast-growing bacteria with more rRNA and tRNA genes are subject to stronger selection pressure [102]. The strength of translational selection also varies among replicons within the same organism; for example, in Sinorhizobium meliloti, the codon usage of the chromosome and of plasmids pSymB and pSymA reflects their importance for competitive cell growth and expression during the free-living stage of the organism [121]. The codon usage of plasmids is not always similar to that of the host chromosome. Measuring the distance between the codon usages of pairs of Agrobacterium tumefaciens replicons (circular and linear chromosomes and plasmids pAt and pTi) revealed that the distances between chromosomes and plasmids are larger than the distances between the two chromosomes (circular and linear) or the two plasmids (pAt and pTi) [122]. For each pair of three Agrobacterium species (Agrobacterium tumefaciens C58, Agrobacterium vitis S4, and Agrobacterium radiobacter K84), the codon usages of their plasmids, despite varying gene contents, are more similar than the codon usages of their chromosomes [123]. It remains unclear whether codon usage influences stable plasmid maintenance in hosts or contributes to the fitness cost imposed by plasmids.

Fig. 5 (caption). Plot of G + C contents of 209 plasmids and their host chromosomes. Each point represents a plasmid-chromosome pair from 209 prokaryotes. To minimize bias in the numbers of sequenced organisms and replicons available in public databases (e.g., thousands of genome projects for Escherichia coli, and multireplicons for Borrelia species), RefSeq data for completely sequenced prokaryotes that consist of one chromosome and one plasmid were retrieved on April 17, 2017 from a list of all selected representative prokaryotic genomes (ftp://ftp.ncbi.nlm.nih.gov/genomes/GENOME_REPORTS/prok_representative_genomes.txt). The G + C contents of plasmids tend to be lower than (and are correlated with) those of the host chromosomes.

Gene Direction

Bioinformatics algorithms based on replication strand biases, such as GC skew, defined as (C - G)/(C + G), have been used to predict the replication origin and terminus in bacterial chromosomes and plasmids [124][125][126]. The degree of GC skew differs between plasmids with and without rolling-circle replication and is correlated between plasmids and chromosomes of the same bacteria, suggesting that replication-related mutation and selection determine the strength of GC skew for replicons within the same host [127]. Previous studies reported that coding sequences (5′ to 3′ orientation) in the bacterial chromosome are preferentially located on the template strands for lagging-strand synthesis (also simply referred to as leading strands [128]), and this codirectional bias of replication and transcription is further enriched in essential and/or highly expressed genes [128][129][130][131].
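Because the GC-skew statistic just defined drives these origin/terminus predictions, a minimal sketch may be useful; window and step sizes are arbitrary assumptions, and which extremum of the cumulative skew marks the origin versus the terminus depends on the sign convention used:

```python
def gc_skew(seq, window=1000, step=500):
    """Windowed GC skew, (C - G)/(C + G), along a DNA sequence."""
    skews = []
    for start in range(0, len(seq) - window + 1, step):
        w = seq[start:start + window].upper()
        c, g = w.count("C"), w.count("G")
        skews.append((c - g) / (c + g) if (c + g) else 0.0)
    return skews

def cumulative_skew(skews):
    """Running sum of windowed skews; its extrema flag candidate
    replication origin/terminus positions."""
    total, out = 0.0, []
    for s in skews:
        total += s
        out.append(total)
    return out
```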
It remains unclear whether gene expressivity and essentiality influence the orientation bias of plasmid genes; however, it may be better to carry important genes on the leading strand of a synthetic plasmid, following the trend observed in chromosomes.

Publicly Available Resources

Comparative sequence analyses of closely related plasmids with different features, such as replication, maintenance, transfer, and host range, can provide hypotheses regarding the genetic determinants of these plasmid features. Over the past 10 years, plasmid sequence data have increased dramatically, and convenient bioinformatics tools have been developed to manage and analyze the data. These resources are briefly described in this section.

Plasmid Sequence Data

High-throughput DNA sequencing has generated a large amount of plasmid sequence data, which can be retrieved from the International Nucleotide Sequence Database Collaboration or INSDC: DDBJ, EMBL-EBI, and NCBI (http://www.insdc.org). As of 2010, the 1,730 complete plasmid sequences in GenBank had been obtained from plasmid-sequencing projects (62%) and microbial genome projects (38%) [132]. In 2015, Shintani et al. [59] analyzed the 4,602 complete plasmid sequences then available (see the size analysis above). Because INSDC databases covering all available nucleotide data are not always well curated and structured, secondary databases have been developed. For example, the ACLAME database (http://aclame.ulb.ac.be) [133] has been developed and used to investigate general features of sequenced plasmids, such as their distribution per host species [134]. Orlek et al. [135] presented a curated dataset of complete Enterobacteriaceae plasmids compiled from the NCBI database (https://figshare.com/s/18de8bdcbba47dbaba41). The web servers PLSDB (https://ccb-microbe.cs.uni-saarland.de/plsdb/) [136] and pATLAS (http://www.patlas.site) [137] provide a more comprehensive collection of bacterial plasmids retrieved from the NCBI nucleotide database.

Bioinformatics Tools

Bioinformatics tools can be used to design synthetic plasmids by searching, assembling, and adjusting key factors, including functional modules (genes and cis-elements) and genome constraints. Table 3 lists bioinformatics tools for plasmids, with their URLs.

Concluding Remarks

Plasmids have been used as primary genetic tools for microbial engineering, particularly for nonmodel organisms. In synthetic biology, there have been attempts to build vectors by assembling functional modules [27,150]. Fortunately, the number of known plasmid sequences has increased dramatically in recent years, which has enabled the detection of core genes and co-evolving gene sets for each plasmid group. Plasmid functional modules identified by experimental or bioinformatics methods can contribute to biological parts/module databases, such as BioBrick [26], SEVA [27], and Clostron [150]. Following the rules observed in natural plasmid genomes, we can design synthetic plasmids. For example, a set of core genes, as well as a selection marker or TA system, should be included to prevent the generation of plasmid-free cells (Table 2). The G + C content of a plasmid should be similar to (and lower than) that of the host, and highly expressed essential genes should be located on lagging-strand templates (i.e., leading strands). We also emphasize that optimization of external settings for the plasmid, for example, the type of growth medium and the presence or absence of spatial structure in the growth environment, could greatly influence plasmid population dynamics.
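As a toy check of the G + C design rule stated above (plasmid G + C similar to, and not above, the host's), a minimal sketch; the tolerance thresholds are arbitrary illustrative values, not taken from the text:

```python
def gc_content(seq):
    """Fraction of G or C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def gc_compatible(plasmid_seq, host_seq, max_excess=0.0, max_deficit=0.10):
    """True if the plasmid's G + C content is similar to, but not above,
    the host's (thresholds are arbitrary assumptions)."""
    diff = gc_content(plasmid_seq) - gc_content(host_seq)
    return -max_deficit <= diff <= max_excess
```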
Although further work is needed, a synthetic biology approach, e.g., de novo synthesis of artificial plasmids followed by experimental evaluation of plasmid maintenance, may lead to the construction of stable vectors and improve our understanding of why plasmids are so successful as genetic parasites.

Table 3 (caption). List of bioinformatics tools for plasmid sequence analysis and vector design. Columns: Usage, Name, URL. First entry (viewing/editing plasmid sequences): ApE (A Plasmid Editor).

Competing Interests

The authors declare no competing interests.
The cGAS/STING/TBK1/IRF3 innate immunity pathway maintains chromosomal stability through regulation of p21 levels

Chromosomal instability (CIN) in cancer cells has been reported to activate the cGAS–STING innate immunity pathway via micronuclei formation, thus affecting tumor immunity and tumor progression. However, adverse effects of the cGAS/STING pathway as they relate to CIN have not yet been investigated. We addressed this issue using knockdown and add-back approaches to analyze each component of the cGAS/STING/TBK1/IRF3 pathway, and we monitored the extent of CIN by measuring micronuclei formation after release from nocodazole-induced mitotic arrest. Interestingly, knockdown of cGAS (cyclic GMP-AMP synthase) along with induction of mitotic arrest in HeLa and U2OS cancer cells clearly resulted in increased micronuclei formation and chromosome missegregation. Knockdown of STING (stimulator of interferon genes), TBK1 (TANK-binding kinase-1), or IRF3 (interferon regulatory factor-3) also resulted in increased micronuclei formation. Moreover, transfection with cGAMP, the product of cGAS enzymatic activity, as well as add-back of WT cGAS (but not catalytic-dead mutant cGAS), or of WT or constitutively active STING (but not an inactive STING mutant), rescued the micronuclei phenotype, demonstrating that all components of the cGAS/STING/TBK1/IRF3 pathway play a role in preventing CIN. Moreover, p21 levels were decreased in cGAS-, STING-, TBK1-, and IRF3-knockdown cells, which was accompanied by a precocious G2/M transition and an enhanced micronuclei phenotype. Overexpression of p21 or inhibition of CDK1 in cGAS-depleted cells reduced micronuclei formation and abrogated the precocious G2/M transition, indicating that the decrease in p21 and the subsequent precocious G2/M transition is the main mechanism underlying the induction of CIN through disruption of cGAS/STING signaling.

Signaling through cGAS, a protein that detects DNA in the cytosol, prevents chromosomal instability (CIN) in cancer cells by regulating the levels of the cell-cycle inhibitor p21. Alterations in chromosome number or structure are hallmarks of cancer cells, but their contribution to disease is unclear. Previous studies have shown that CIN activates cGAS, triggering the activation of an immune response and cell death. However, Jae-Ho Lee at Ajou University, Suwon, South Korea, and colleagues now show that defects in cGAS signaling in cells treated with a drug that stops cell-cycle progression lead to CIN and the formation of extra-nuclear bodies containing damaged chromosome fragments, known as micronuclei. Restoring cGAS activity or increasing p21 expression levels prevented micronuclei formation, highlighting a mechanism through which cancer cells can maintain chromosomal stability.

Introduction

Innate immunity provides a line of defense against invading pathogens because it detects pathogen-associated molecular patterns (PAMPs) and induces an immune response that eradicates the pathogens. Sometimes, the immune system is activated in the absence of infection owing to the presence of damage-associated molecular patterns (DAMPs) that can be released during sterile inflammation or injury. Accordingly, each cell has various pattern-recognition receptors (PRRs), each of which has a predefined role [1]. Cyclic GMP-AMP synthase (cGAS) is one such PRR that detects cytosolic double-stranded DNA (dsDNA), whether foreign or self.
Upon detection of dsDNA, cGAS binds it and synthesizes the second messenger cyclic GMP-AMP (cGAMP) [2,3]. cGAMP then binds the endoplasmic reticulum transmembrane protein stimulator of interferon genes (STING), which becomes active and translocates to the intermediate compartments between the endoplasmic reticulum and Golgi [4]. During translocation, STING recruits TANK-binding kinase-1 (TBK1), which phosphorylates STING, leading to recruitment of interferon regulatory factor-3 (IRF3) [5]. TBK1 phosphorylates IRF3, causing it to dimerize and move into the nucleus, where it induces transcription of genes encoding various cytokines, interferons, and chemokines. TBK1 also phosphorylates IκBα, an inhibitor of the transcription factor NF-κB (nuclear factor kappa-light-chain-enhancer of activated B cells), marking it for proteasomal degradation; IκBα degradation releases NF-κB, which translocates together with IRF3 into the nucleus, providing a synergistic response against invading pathogens [6].

Genomic instability is a hallmark of cancer. The most common causes of genomic instability are chromosomal missegregation and impaired DNA damage repair (DDR) pathways. There are two possible outcomes after a cell has undergone genomic instability: DNA mutations and/or chromosomal instability (CIN) [7]. CIN can be structural or numerical. Structural CIN results in phenotypic manifestations such as the formation of micronuclei, binuclei, or multinuclei, whereas numerical CIN gives rise to aneuploidy, an abnormal number of chromosomes [8]. However, the prominent effect in chromosomally unstable cells is an increase in the formation of micronuclei, reflecting the fact that this outcome may arise from two major chromosomal segregation errors (lagging chromosome or chromatin bridge formation) during the preceding mitosis. Because cancer cells are known to proliferate rapidly and have compromised cell-cycle checkpoints, they frequently undergo chromosomal missegregation events during mitosis that, upon successive rounds of cell division, result in CIN [8].

It was previously reported that cGAS is capable of detecting dsDNA inside ruptured micronuclei, which have fragile envelopes; this detection results in the activation of downstream signaling, indicating that CIN activates the cGAS/STING pathway mainly through micronuclei formation [9][10][11][12][13]. The outcome of activation of the cGAS/STING pathway with respect to cancer progression is a matter of controversy. A recent report indicated that activation of this pathway elicits an antitumor response that is subsequently exploited by cancer cells to evade immune surveillance by containing the immune response within the tumor microenvironment at suboptimal levels and promoting tumor metastasis through activation of the noncanonical NF-κB pathway [14]. However, some reports have suggested an opposite role of cGAS/STING pathway activation in tumor progression and metastasis, suggesting that cancer cells with elevated levels of cGAS/STING/IRF3 proteins show enhanced cGAS-STING pathway activation, which induces mitochondrial outer-membrane permeabilization and causes apoptotic cell death [15,16]. Although there are numerous reports of CIN activating cGAS via micronuclei formation, the reverse relationship, namely the effects of the cGAS/STING pathway on CIN, has not attracted serious research interest.
Nevertheless, it has been suggested that cGAS can indirectly decrease CIN by detecting cytosolic DNA in the form of micronuclei and eliciting an innate immune response that removes cells with CIN. However, whether cGAS can directly contribute to CIN, that is, without the involvement of immune responses, has not been addressed. Here, we demonstrate for the first time that stably decreasing cGAS expression levels in different cancer cell lines (HeLa and U2OS) subjected to nocodazole-induced mitotic arrest induces chromosomal missegregation events, leading to increased micronuclei formation. Moreover, cGAS add-back or cGAMP transfection rescues micronucleated cells by restoring proper regulation of chromosomal segregation, an effect that is dependent on cGAS enzymatic activity. We also found that the downstream pathway components STING, TBK1, and IRF3 are necessary for the induction of proper chromosomal segregation. Interestingly, our data suggest that these effects of the cGAS/STING/TBK1/IRF3 pathway are mediated by p21 downregulation.

Synchronization and drug treatment

To synchronize the cells at the G1/S phase by double thymidine block (DTB), cells were grown on coverslips and incubated in growth medium containing 1 mM thymidine (Sigma, T9250) for 16 h. Cells were then released from the thymidine block by first washing with thymidine-free medium (first release) and then culturing them in growth medium for 8 h. Subsequently, cells were subjected to a second thymidine block for an additional 16 h. For G2/M phase-arrested cells, cells were synchronized by the double thymidine block as described above; cells that had been arrested with thymidine for 16 h were washed with thymidine-free medium and cultured in complete medium for 7 h (HeLa cells) or 8 h (U2OS cells). The cells were then cultured in medium containing 9 μM RO3306 (Enzo, ALX-270_463) for 2 h (HeLa cells) or 3 h (U2OS cells) to arrest them at the G2/M transition. To induce mitotic arrest, cells were synchronized at prometaphase with 100 ng/mL nocodazole (Sigma, M1404) for 16 h. To inhibit protein degradation, cells were treated with 10 μM MG132 (Sigma, C2211) for 8 h after knockdown of STING. For inhibition of protein translation, cells were transfected with siRNA targeting STING and then treated with 10 μg/mL cycloheximide (Sigma, C7698) for the indicated time intervals.

Immunoblotting

Conventional immunoblotting was performed as previously described using the corresponding antibodies. Briefly, cell lysates (30 μg) were resolved by sodium dodecyl sulfate-polyacrylamide gel electrophoresis and transferred to polyvinylidene fluoride membranes. After blocking for 1 h at room temperature (RT) with TBS containing 0.1% (v/v) Tween-20 and 5% (w/v) nonfat milk, membranes were incubated with the corresponding primary antibodies at 4°C, followed by washing with TBS containing 0.1% Tween-20 and incubation with a horseradish-peroxidase-conjugated anti-rabbit or anti-mouse IgG (Amersham Biosciences, Piscataway, NJ) for 1 h at RT. Detection was carried out using ECL reagents (Amersham Biosciences) and exposure of the membranes to X-ray film.

Immunocytochemistry

Mitotic cells were split onto poly-L-lysine (PLL, P6282, Sigma-Aldrich)-coated slides. Cells grown on the slides were then fixed in 100% methanol for 15 min at −20°C. Fixed cells were preincubated in blocking solution (3% BSA in PBS), followed by incubation with primary antibodies at 4°C overnight.
Cells were then washed three times in PBS with shaking and probed with fluorophore (Cy3 or Alexa Fluor 488)-conjugated anti-mouse or anti-rabbit secondary antibodies. After washing three times with PBS, DAPI (Invitrogen, D3571) was used for DNA counterstaining. Three washes with PBS were followed by mounting in mounting solution (Biomeda, M01). The samples were examined under a fluorescence microscope (Axio Imager M1, Carl Zeiss).

Time-lapse analysis

HeLa cells were transfected with siControl or sicGAS and then treated with nocodazole (100 ng/mL) for 16 h. Cells at prometaphase were collected by shaking the dish, seeded (1 × 10⁴ cells/well) in a four-well glass dish (Thermo Scientific™ Nunc™ Lab-Tek II Chambered Coverglass, MA, USA) and incubated overnight in standard culture conditions to enable estimation of the duration of mitosis along with visualization of unstable chromosomal phenotypes by time-lapse photomicroscopy. To visualize chromosomes, cells were incubated with 1 μg/mL Hoechst 33342 (Thermo Scientific™ Hoechst® 33342, MA, USA) for 30 min. Fluorescence images were acquired every 5 min for 24 h on a Nikon Eclipse Ti (Nikon, Tokyo, Japan) with a ×40 dry Plan-Apochromat objective. Images were captured with an iXonEM+ 897 electron-multiplying charge-coupled device camera (Teledyne Princeton Instruments, Trenton, NJ, USA) and analyzed with Nikon NIS-Elements Advanced Research (AR) software (Nikon, Tokyo, Japan).

Mitotic index

Mitotic cells were stained with aceto-orcein solution in 60% acetic acid (Merck, ZC135600) to visualize the condensed chromosomes. To determine the mitotic index, the percentage of mitotic cells with condensed chromosomes was quantified under a light microscope.

Statistical analysis

Most data are presented as means ± standard deviations (SDs). Each experiment was performed in triplicate. Statistical differences were analyzed by Student's t-test, and asterisks (*) indicate significant differences: *P < 0.05; **P < 0.01; and ***P < 0.005.

CIN is enhanced in cGAS-depleted cells

To determine whether cGAS directly contributes to CIN without the involvement of the immune system, we first assessed cGAS/STING levels in four cell lines to find a model suitable for our experimental setting: hTERT-RPE1, U2OS (human osteosarcoma), HEK293T (human embryonic kidney), and HeLa cells. Both cGAS and STING were detected in HeLa and U2OS cells but not in hTERT-RPE1 or HEK293T cells (Fig. S1a). Moreover, STING expression levels were very low in U2OS cells compared with HeLa cells (Fig. S1a). We then confirmed that cGAS localizes to micronuclei in the cGAS-positive cell lines, as previously reported, following treatment with nocodazole (100 ng/mL) for 16 h to induce mitotic arrest and release from the arrest for 10 h, which enabled us to observe the cells in the subsequent interphase (Fig. S1b, c). To confirm that the cGAS/STING/TBK1/IRF3 pathway is intact and functional in HeLa cells, we introduced dsDNA into these cells by transfecting them with an empty pcDNA vector and then assessed cGAS-STING pathway activation by determining phospho-TBK1 and phospho-IRF3 levels (Fig. S1d). The results of this analysis suggested that HeLa cells were a suitable cell line for investigating the contribution of the cGAS/STING pathway to CIN.
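As a side note on the quantification used throughout this study (percentages of micronucleated cells from three independent experiments of n = 300 cells each, compared by Student's t-test), a minimal sketch with made-up counts; it is illustrative only and not the authors' analysis code:

```python
import statistics
from scipy import stats

def percent_micronucleated(counts, totals):
    """Per-replicate percentage of micronucleated cells."""
    return [100.0 * c / t for c, t in zip(counts, totals)]

# Hypothetical counts of micronucleated cells out of n = 300 per replicate
control = percent_micronucleated([12, 15, 13], [300, 300, 300])
knockdown = percent_micronucleated([48, 52, 45], [300, 300, 300])

print(f"control:   {statistics.mean(control):.1f} +/- {statistics.stdev(control):.1f} %")
print(f"knockdown: {statistics.mean(knockdown):.1f} +/- {statistics.stdev(knockdown):.1f} %")

# Two-sample Student's t-test, as in the Statistical analysis section
t, p = stats.ttest_ind(control, knockdown)
print(f"t = {t:.2f}, P = {p:.5f}")
```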
Although the cGAS/STING/TBK1/IRF3 pathway could in principle be affected indirectly by the hypertriploid chromosome number of HeLa cells, this is not known, and our decision to use this cell line was based solely on the cGAS and STING expression levels detected by western blot in the four tested cell lines (Fig. S1a). We then used an RNA interference (RNAi) approach to knock down cGAS in HeLa cells; we used two different small interfering RNAs (siRNAs) and measured the extent of CIN by monitoring micronuclei formation 10 h after release from nocodazole-induced mitotic arrest (100 ng/mL nocodazole for 16 h) (Fig. 1a). Interestingly, immunocytochemical analysis revealed a marked increase in the fraction of micronucleated cells among cGAS-depleted cells compared with control siRNA-transfected cells (Fig. 1b), suggesting that cGAS is necessary to maintain chromosomal stability in cycling cells. To test this hypothesis, we compared micronuclei formation in wild-type (WT) HeLa and cGAS-knockout HeLa cells. cGAS-knockout HeLa cells exhibited an increased CIN phenotype similar to that of cGAS-knockdown HeLa cells, as evidenced by an increase in the percentage of micronucleated cells compared with WT HeLa cells (Fig. 1c). Moreover, the addition of WT cGAS (siRNA-resistant) to cGAS-depleted HeLa cells clearly decreased the number of micronucleated cells, confirming cGAS-dependent regulation of chromosomal stability (Fig. 1d). Additionally, transfection of cGAS-knockout HeLa cells with a plasmid encoding WT cGAS also reduced micronuclei formation (Fig. 1e). To exclude the possibility that this is a cell-type-specific response, we transfected U2OS cells with an siRNA targeting cGAS. These cells displayed an increase in micronucleation compared with control cells, similar to what was observed in cGAS-knockdown HeLa cells (Fig. 1f). These data clearly suggest that cGAS plays a role in maintaining chromosomal stability.

Chromosomal segregation defects are enhanced by cGAS depletion

Micronuclei primarily arise from two basic errors: lagging chromosomes and chromatin bridges, which may be partly induced by multipolar division in the mitotic phase [18]. To address the relationship between these phenotypes and abnormal chromosomal segregation, we monitored cells in anaphase 1 h after release from nocodazole-induced mitotic arrest by counting cells with lagging chromosomes, multipolar division, or chromatin bridges (Fig. 2a). We found that these chromosomal missegregation phenotypes were significantly more abundant in cGAS-knockdown cells than in control cells (Fig. 2b). Add-back experiments in which cGAS-knockdown cells were transfected with an siRNA-resistant WT MYC-cGAS expression plasmid clearly revealed that upon restoration of cGAS protein, cells regained the ability to undergo proper chromosomal segregation (Fig. 2c). Moreover, a comparison of cGAS-knockout and WT HeLa cells undergoing chromosomal segregation 1 h after release from mitotic arrest revealed that more chromosomal segregation errors occurred in cGAS-knockout cells than in WT cells (Fig. 2d). Again, the addition of cGAS to cGAS-knockout HeLa cells rescued the chromosomal missegregation phenotype (Fig. 2e). These findings clearly indicate that cGAS is necessary for normal chromosomal segregation during mitosis, and that it suppresses CIN during cell-cycle progression.

cGAS activity is necessary for this effect
To this end, we transfected cGAS-knockout HeLa cells with plasmids encoding WT cGAS or a catalytic-dead (CD) cGAS mutant and then arrested them in mitosis by treating with nocodazole, as described above. Western blotting confirmed that the cGAS CD mutant was unable to induce phosphorylation of IRF3 and thus could not activate the cGAS/STING pathway. Immunocytochemical analysis performed 10 h after release from nocodazole arrest revealed that the cGAS CD mutant, unlike WT cGAS, was completely incapable of reducing the number of cells containing micronuclei, as summarized in Fig. 3a. Consistent with this, transfection of the cGAS-knockout HeLa cells with cGAMP significantly decreased the number of cells exhibiting micronuclei formation (Fig. 3b). Finally, transfection of cGAS-knockdown HeLa cells with cGAMP also resulted in fewer micronucleated cells than was observed in cGAS-knockdown cells transfected with a vehicle control (Fig. 3c). Collectively, these findings demonstrate that cGAS acts through the promotion of cGAMP synthesis during cell-cycle progression to inhibit micronuclei formation, thus confirming the requirement for cGAS enzymatic activity in reducing the CIN phenotype in successive cell cycles.

STING acts as a mediator in regulating chromosomal stability

cGAS activation results in the synthesis of cGAMP, which then binds STING to activate downstream signaling. To determine whether STING plays a role in chromosome stability, we transfected HeLa cells with an siRNA targeting the 3′-UTR of STING (siSTING) and assessed micronuclei formation after release from nocodazole-induced mitotic arrest. Indeed, siRNA-mediated STING knockdown was accompanied by an increase in micronucleated cells compared with siControl-transfected cells, suggesting the requirement of STING in the cGAS-dependent maintenance of proper chromosomal segregation (Fig. 4a). To confirm that this effect of STING is dependent on its activity, we added back WT STING, a constitutively active STING mutant, or an inactive STING mutant to STING-depleted HeLa cells. Western blotting confirmed that both WT and constitutively active STING were able to induce phosphorylation of IRF3, whereas the inactive STING mutant was not. Importantly, the inactive STING mutant failed to abrogate micronuclei formation, whereas either WT STING or constitutively active STING effectively suppressed micronuclei formation (Fig. 4b). We also transfected these mutants into cGAS-knockout HeLa cells and again observed an increase in the percentage of micronucleated cells in cells transfected with either the inactive STING mutant or the control vector; in contrast, transfection of cells with WT STING or constitutively active mutant STING substantially decreased the percentage of micronucleated cells (Fig. 4c). Taken together, these results indicate that STING activation mediates cGAS regulation of chromosomal stability.

TBK1 and IRF3 are also necessary for the maintenance of chromosomal integrity

We further tested the possible involvement of downstream components of the cGAS/STING pathway, TBK1 and IRF3, in maintaining chromosomal stability. Knockdown of TBK1 in HeLa cells also resulted in more cells with micronuclei than was observed following transfection with the siControl (Fig. 5a), suggesting the involvement of TBK1. We then knocked down IRF3 using siRNA and induced mitotic arrest by treatment with nocodazole. Again, cells with attenuated IRF3 levels displayed increased micronuclei formation compared with control cells (Fig. 5b). These data, in addition to our previous findings, indicate that all components of the cGAS/STING/TBK1/IRF3 pathway play discrete roles in maintaining chromosomal stability as cells undergo cell-cycle progression.

Figure legend (panels b-e). b Thirty hours after transfection with sicGAS, mitotic cells were collected by shaking the plate, and they were then treated for 16 h with nocodazole and seeded on a PLL-coated coverslip. After 1 h, cells were fixed with 100% ice-chilled methanol, and ICC was performed. Cells showing the indicated chromosomal segregation errors were calculated as percentages. The results are given as the mean ± SD from three independent experiments (n = 300). ***P < 0.001 as assessed by Student's t-test. c Co-transfection of sicGAS with pcDNA or MYC-cGAS (non-targeting siRNA) in HeLa cells for 20 h before beginning treatment with nocodazole, which lasted for 16 h. Cells arrested in mitosis were collected and reseeded on PLL-coated coverslips. After 1 h, the percentage of micronucleated cells was quantified using ICC. The results are given as the mean ± SD from three independent experiments (n = 300). ***P < 0.001 by Student's t-test. d cGAS−/− and wild-type HeLa cells were treated with nocodazole for 16 h. Then, mitotic cells were collected by shaking the plate, and the collected cells were seeded on PLL-coated coverslips for 1 h before ICC was performed to analyze the number of cells showing the indicated chromosomal missegregations. The results are given as the mean ± SD from three independent experiments (n = 300). ***P < 0.001 as assessed by Student's t-test. e cGAS−/− HeLa cells were transfected with pcDNA or MYC-cGAS and then were treated with nocodazole for 16 h to induce mitotic arrest. After release from nocodazole treatment, mitotic cells were seeded on PLL-coated coverslips for 1 h and were then subjected to ICC to quantify the percentage of cells with chromosomal segregation errors. The results are given as the mean ± SD from three independent experiments (n = 300). *P < 0.05, ***P < 0.001 as assessed by Student's t-test.

cGAS/STING pathway-dependent expression of p21

We next asked what the possible mechanism could be by which the cGAS/STING/TBK1/IRF3 pathway regulates chromosomal segregation. This pathway could directly affect the mitotic process or act on an interphase process that subsequently induces chromosomal missegregation. Previous reports suggest two possible scenarios in which the cGAS/STING pathway might affect CIN. The first possibility is that it affects centrosome number, given reports that TBK1 interacts with centrosomal proteins to play a role in microtubule stability [19]. However, we found no significant difference in centrosome number after STING depletion or add-back (Fig. S2a, b). The second possibility is that STING acts as a positive regulator of p21 levels through activation of the NF-κB and p53 axes [20]. Since p21 can inhibit cell-cycle progression, including the G2/M transition, downregulation of p21, which is expected in the absence of STING, might affect the G2/M transition and subsequent mitotic events as well. Thus, we analyzed the expression levels of p21 protein in cGAS-depleted HeLa cells and found that they were significantly reduced relative to their levels in WT HeLa cells, suggesting that cGAS serves as an upstream signaling protein that regulates p21 levels during cell-cycle progression (Fig. 6a). We further observed that p21 levels were reduced in HeLa cells with STING knocked down (Fig. 6b), as reported previously [20]. We then tested the effects of IRF3 knockdown; previous studies reported that IRF3 enhances transcriptional activation of p53 and p53-dependent growth inhibition [21,22]. Interestingly, IRF3 downregulation also resulted in significantly decreased p21 levels, suggesting a role for IRF3 in regulating p21 levels in our experimental setting (Fig. 6c). Next, we addressed whether changes in p21 levels by cGAS/STING/TBK1/IRF3 were dependent on p53. We first depleted p53 using siRNA and found that this led to a significant decrease in p21 along with an increase in micronuclei formation, indicating that p21 levels in these cells were dependent on p53 (Fig. 6d). Next, we depleted cGAS, p53, or both and observed that (1) cGAS depletion does not affect p53 levels and (2) even without p53, p21 downregulation was evident after cGAS knockdown, indicating that p21 downregulation in cGAS-depleted cells is p53-independent (Fig. 6e). A reduction in p21 levels would result in precocious mitotic entry, which might induce chromosomal missegregation because of unresolved DNA damage or uncoordinated mitotic entry, among other possibilities [23][24][25]. Although there was no significant change in the extent of DNA damage in cGAS-depleted cells or cGAMP-transfected HeLa cells (Fig. S2c, d), measurements of the mitotic index after release from the double thymidine block revealed that STING-depleted HeLa cells reproducibly exhibited a G2/M transition that was 1 h earlier than that of WT HeLa cells (Fig. 6f). This suggests the possibility that activation of the cGAS/STING pathway leads to upregulation of p21 during the G2 phase, which would allow cells sufficient time to properly prepare for mitotic entry.

Figure legend (fragment). The results are given as the mean ± SD from three independent experiments (n = 300). ***P < 0.001 as assessed by Student's t-test. c HeLa cells transfected with sicGAS for 20 h were transfected with or without cGAMP for 6 h and then subjected to nocodazole treatment for 16 h. Ten hours after release from nocodazole treatment, western blotting (left panel) with the indicated antibodies and ICC (right panel) were performed to quantify cells displaying a micronuclei phenotype. The results are given as the mean ± SD from three independent experiments (n = 300). ***P < 0.001 as assessed by Student's t-test.

Figure legend (fragment). The results are given as the mean ± SD from three independent experiments (n = 300). ***P < 0.001 as assessed by Student's t-test. c cGAS−/− HeLa cells were transfected with the indicated plasmids for 12 h before nocodazole treatment commenced for 16 h. Ten hours after release from mitotic arrest, cells were subjected to western blotting with the indicated antibodies and ICC to calculate the percentage of cells showing micronuclei formation after nocodazole release; representative fluorescent images show micronuclei (white arrowheads denote micronuclei). The results are given as the mean ± SD from three independent experiments (n = 300). ***P < 0.001 as assessed by Student's t-test.
p21 plays an important role in cGAS depletion-induced CIN

To determine whether the increase in CIN phenotypes (i.e., micronuclei) following attenuation of the cGAS/STING pathway is primarily attributable to a decrease in p21 levels, we downregulated p21 using an RNAi approach and counted the number of cells with micronuclei. Indeed, the percentage of micronucleated cells was robustly increased in p21-deficient cells compared with control cells after release from nocodazole-induced mitotic arrest (Fig. 7a); this effect was accompanied by precocious entry into mitosis after release from the double thymidine block (Fig. 7b). Next, we tested whether overexpression of p21 could overcome the cGAS depletion-induced micronuclei phenotype. Importantly, overexpression of p21 in cGAS-knockdown cells resulted in a decrease in the number of cells with CIN compared to cGAS-knockdown cells transfected with an empty pcDNA vector control (Fig. 7c); overexpression of p21 also abolished the precocious G2/M transition induced by cGAS depletion (Fig. 7g). Similarly, overexpression of p21 in cGAS-knockout HeLa cells decreased the number of micronucleated cells by approximately 50% compared with pcDNA-transfected cGAS-knockout HeLa cells (Fig. 7d). These data strongly suggest that cGAS depletion induces CIN through a decrease in p21 levels, which causes precocious entry into mitosis. If precocious entry into mitosis results in chromosomal missegregation, inducing a delay in the G2/M transition under these conditions should decrease CIN phenotypes. To test this hypothesis, we used RO3306, an inhibitor of cyclin-dependent kinase-1 (CDK1), to delay the G2/M transition after release from the double thymidine block and assessed its effect on cGAS depletion-induced CIN. Indeed, RO3306 significantly attenuated cGAS depletion-induced micronuclei formation (Fig. 7e), an effect that was accompanied by abrogation of the precocious G2/M transition (Fig. 7f). Collectively, our findings suggest that CIN caused by downregulation of the cGAS/STING pathway arises from a precocious G2/M transition that occurs because of decreased p21 levels.

Discussion

The cGAS/STING pathway has been extensively studied as a part of the innate immune system. However, reports on its impact on cell-cycle progression are sparse, and little or nothing is known about its effects on chromosomal segregation. Our findings provide the first evidence for a role of the cGAS/STING pathway in maintaining chromosomal homeostasis as a cell undergoes division. There have been reports of a role for STING in maintaining chromosomal stability via the NF-κB/p53/p21 axis and of a role for TBK1 in regulating chromosomal segregation during mitosis through binding to Cep170 and NuMA [19,20]. Another report suggested that IRF3 overexpression causes cell-cycle arrest at the G1/S phase, resulting in inhibition of DNA synthesis [26]. Some recent reports have suggested that detection of DNA in ruptured micronuclei by cGAS can elicit an immune response that helps eliminate cells with CIN phenotypes, thus indirectly decreasing CIN [15,16]. However, there have been no reports on possible direct contributions of the cGAS/STING/TBK1/IRF3 pathway to CIN. Our findings suggest the first mechanism by which the cGAS-STING pathway directly regulates CIN without involvement of the immune system and demonstrate that all components of the cGAS/STING/TBK1/IRF3 signaling pathway function together to maintain chromosomal stability.
Theoretically, the cGAS/STING pathway might affect chromosomal stability through actions during interphase that subsequently induce CIN during mitosis, or through direct effects on mitotic progression. In terms of the first of these two possibilities, there is literature supporting the conclusion that in interphase cells, cGAS is capable of detecting dsDNA inside micronuclei with fragile envelopes and subsequently eliciting a downstream pathway that induces transcription of inflammatory cytokines and chemokines via the transcription factors IRF3 and NF-κB. The net effect of this pathway is to recruit cytotoxic T cells to the tumor microenvironment and promote apoptotic cell death, thereby indirectly decreasing the number of cells with CIN. It has also been reported that IRF3 can induce transcription of p53, resulting in upregulation of p21, which arrests cell-cycle progression [21,22]. Here, we clearly showed that IRF3-dependent downregulation of p21 is involved in cGAS depletion-induced micronuclei formation via precocious entry into mitosis (Figs. 6 and 7). In terms of possible direct effects of the cGAS/STING pathway during mitosis, the pathway might affect microtubule stability or spindle assembly, given that TBK1 is known to interact with the centrosome proteins Cep170 and NuMA to regulate mitotic progression [19]. The cGAS/STING pathway may also be involved in various mitotic events, including cytokinesis, mitotic checkpoint function, and mitotic cell death, as well as progression at prometaphase (Fig. S3). However, dissecting a direct role of the cGAS/STING pathway in mitotic progression, independent of p21-mediated regulation of the G2/M transition, will require additional and more precisely designed studies.

Various outcomes have been attributed to the downregulation of p21 during the interphase of cell-cycle progression. The first is that p21 downregulation can force mitotic entry of S phase-arrested cells, since p21 is no longer available to inhibit the formation of cyclin B1 and CDK1 complexes, resulting in premature mitotic entry that initiates before DNA synthesis is complete. This gives rise to abnormal mitotic phenotypes with dispersed chromosomes and disorganized bipolar spindle assembly. These mitotic cells will "slip through" this forced mitosis, leading to gross micronucleation and apoptotic cell death [25,27]. The second outcome follows from the fact that p21 is considered the sole regulator of the G2/M DNA damage checkpoint. When p21 is depleted, cells with DNA double-strand breaks in the preceding S phase can transit into the M phase without a proper DDR because p21 can no longer inhibit the phosphorylation of CDK1 at threonine 161, which is necessary to enforce the G2 DNA damage checkpoint, leading to chromosome missegregation events and micronuclei formation [28][29][30][31]. This is consistent with our observations, given that p21 downregulation was able to enhance micronuclei formation and support a precocious G2/M transition, giving rise to CIN. However, we could not detect an increase in γH2AX foci to support our hypothesis that a decrease in p21 levels can result in inefficient DDR in the preceding interphase, leading to enhanced micronuclei formation in our system (Fig. S2).

Fig. 6 legend (from figure on previous page). cGAS-STING pathway-dependent p21 expression levels resulting in abnormal cell-cycle progression. a HeLa cells were transfected with sicGAS, and after 48 h they were subjected to western blotting with the indicated antibodies. b HeLa cells were transfected with an siRNA targeting STING, and after 48 h the cell lysates were subjected to western blotting with antibodies against p21, STING, and GAPDH. c IRF3 was knocked down using RNA interference, and after 48 h cell lysates were collected and subjected to western blotting with the indicated antibodies. d p53 was knocked down in HeLa cells using sip53, and micronuclei were checked 10 h after release from nocodazole by ICC (right panel); p21 levels were assessed with the indicated antibodies by western blot (left panel). The results are given as the mean ± SD from three independent experiments (n = 300). ***P < 0.001 as assessed by Student's t-test. e HeLa cells were transfected with sip53, sicGAS, or both, and then micronuclei formation was quantified after release from nocodazole using ICC (right panel). cGAS, p53, and p21 levels were assessed with the indicated antibodies using western blot. The results are given as the mean ± SD from three independent experiments (n = 300). ***P < 0.001 as assessed by Student's t-test. f HeLa cells were transfected with siSTING and then were subjected to a double thymidine block with 2 mM thymidine for 40 h. Six hours after release from the DTB, the percentage of mitotic cells with condensed chromosomes was quantified by aceto-orcein staining at the indicated time points. The results are given as the mean ± SD from three independent experiments (n = 300).

There are two theoretically possible mechanisms by which a decrease in the activity of the cGAS/STING/TBK1/IRF3 pathway might downregulate p21. One is that the decrease in p21 levels is a consequence of transcriptional changes that are normally controlled by the cGAS/STING pathway and are lost upon its depletion; this possibility is based on the fact that p21 levels are mainly regulated at the transcriptional level via various mechanisms, and it predicts the possible involvement of IRF3 and/or NF-κB transcriptional activity. The second possible mechanism is that changes in p21 levels after depletion of the cGAS-STING pathway are attributable to post-translational changes. The half-life of p21 in actively dividing cells has been reported to be 20-60 min [32]. Three E3 ubiquitin ligase complexes, SCF-Skp2, CRL4-Cdt2, and APC/C-Cdc20, are involved in p21 degradation at specific stages of the cell cycle. Since APC/C-Cdc20 is known to be involved in p21 degradation during the G2 and M phases, it may be involved in p21 downregulation under these circumstances [33]. Moreover, two other reports suggest that post-translational modification of the p21 protein (acetylation and deubiquitylation mediated by Tip60 and the deubiquitylase USP11, respectively) regulates cell-cycle progression and DNA damage responses by increasing p21 expression levels [34,35]. These observations suggest the need for additional studies on the possible association between the cGAS/STING/TBK1/IRF3 axis and the post-translational modification of the p21 protein. Ongoing efforts in our laboratory are focused on deciphering the precise mechanism underlying the cGAS/STING pathway-dependent regulation of p21 levels.
MAPCap allows high-resolution detection and differential expression analysis of transcription start sites

The position, shape and number of transcription start sites (TSS) are critical determinants of gene regulation. Most methods developed to detect TSSs and study promoter usage are, however, of limited use in studies that demand quantification of expression changes between two or more groups. In this study, we combine high-resolution detection of transcription start sites and differential expression analysis using a simplified TSS quantification protocol, MAPCap (Multiplexed Affinity Purification of Capped RNA), along with the software icetea. Applying MAPCap to developing Drosophila melanogaster embryos and larvae, we detected stage- and sex-specific promoter and enhancer activity and quantified the effect of mutants of the maleless (MLE) helicase at X-chromosomal promoters. We observe that MLE mutation leads to a median 1.9-fold drop in expression at X-chromosome promoters and affects the expression of several TSSs with sexually dimorphic expression on autosomes. Our results provide quantitative insights into promoter activity during dosage compensation.

Overall, Akhtar and colleagues have combined existing methodologies to develop a novel MAPCap method, which is improved because it allows quantitative comparisons not possible with prior approaches for quantifying TSSs. They have used early multiplexing of samples, removed PCR duplicates using random barcodes, and used external spike-in controls to allow accurate quantification of TSS expression. They quantified X-chromosome dosage compensation and discovered that MLE has an interesting sex-specific role in the brain. The authors also confirmed prior observations that different roX promoters are developmentally regulated. There are several important concerns to address:

1) The authors measure dosage compensation without looking at elongation, which is a caveat that should be mentioned in the text.
2) All figures are very small and many are missing axis labels. Please recheck all labels and increase the size of figures.
3) I would like to see more details about their computational icetea method. I read their tutorial on Bioconductor but I do not think they provided enough detail in this paper.
4) The authors have shown enhancer RNAs that have stage-specific expression, but they should have determined whether eRNAs also showed sex-specific expression. Were there no sex-specific eRNAs (Fig. 2g)? This should be mentioned and commented on.
5) Embryo and sexed larval brains is not the best comparison because the embryo is both male and female, and the brain is a specific tissue while the embryo represents a whole organism. It would be much better if larval and adult brain could be compared in males and females separately.
6) Does MLE have any role in females or on autosomes? This should be addressed.

Specific comments:
1) Their materials and methods missed some detailed information, such as the stage and number of embryos used.
2) Fig. S1d shows CAGE to have much higher sensitivity and precision than MAPCap in detecting TSSs. Is this due to the superiority of CAGE, or were the modENCODE CAGE data used in defining TSSs in the Ensembl annotation? The authors should check this, and if this is the case, an annotation predating or not defined with the CAGE data should be used. If DNA accessibility or active histone marks are available for these tissues, they could be used as orthogonal evidence for transcription in comparing the methods.
Fig. S1d also shows MAPCap with similar sensitivity to RAMPAGE but lower precision; does this suggest that MAPCap has a higher false positive rate? Can this be explained?
3) The correlation between RAMPAGE and RNA-seq should be added, to contrast CAGE and MAPCap with Fig. 1d, e and to demonstrate whether the accuracy of gene expression estimates compared to RNA-seq is an advantage for MAPCap.
4) The authors give handling low-input samples, in combination with sample multiplexing, as an advantage of MAPCap. What are the minimum or recommended RNA input requirements, and how do they compare with other methods?
5) The introduction states that "the variability of promoter usage and expression of different transcripts (such as ncRNAs, eRNAs etc.) has not been investigated" in dosage compensation. However, ncRNAs and eRNAs were excluded from the DE analysis performed in the MLE mutant investigation of dosage compensation by counting just the reads overlapping 5'UTRs. If DE analysis of ncRNAs is possible with MAPCap/icetea, this could be demonstrated.
6) In the abstract, "high-resolution detection of transcription start-sites and differential expression analysis in a single setup, using a fast and simple protocol" is misleading, considering the protocol requires a multi-step, multi-day procedure. The authors should be more precise in the description of the method and provide more detail, including the initial RNA amount and how the quantity and quality of RNA change during the steps, as the protocol includes multiple rounds of heating, sonication, and incubation periods. What is the mapping rate to different genomic regions (5', intergenic, intronic, etc.) compared to other methods?
7) On page 6, the authors state that "the s-oligo incorporates the sequences of both standard sequencing adapters which omits the usage of an RT-primer and allows for a highly efficient intramolecular ligation." The authors should describe in more detail how the s-oligo is constructed and the nature of the oligo: is it RNA-based or DNA-based? If DNA-based, please describe the ligation efficiency between the s-oligo and RNA, as this process is known to be highly inefficient. If RNA-based, is it a complete or partial modification of the oligo? While the RT-primer is not used, reverse transcription still takes place, and it is surprising that G addition is not observed in the MAPCap protocol. Can the authors explain how and why G addition is omitted compared to other protocols?
8) The authors state on page 9 that "replicate-based analysis increases sensitivity and robustness of TSS detection." But isn't this to be expected when two libraries are combined instead of looking at one library at a time? Please explain in more detail the strength of this approach when considering the same read (depth) coverage as other protocols, and provide statistics to support the claim that the method yields more robust TSS detection.
9) The authors state on page 11 that "an in-vivo high resolution analysis comparison different promoters has not been done before", which is clearly not true, as many in vivo analyses using CAGE have been reported. This one in particular shows allele-specific usage of TSSs in zebrafish: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4820030/

In the present study, Bhardwaj et al. describe a new quantitative method called 'MAPCap' for detecting transcription start sites at high spatial resolution as well as gene expression levels at the same time.
MAPCap combines affinity purification of mRNAs containing a 5' cap with the sequencing library preparation of the FLASH protocol, an approach that was previously developed by the same group (Aktas et al., Nature, 2017). The MAPCap sequencing library preparation relies on random barcodes, which allow the identification and computational removal of PCR duplicates, and also includes spike-in controls to quantify changes in gene expression. Additionally, the authors provide a new R/Bioconductor package called 'icetea' for the computational analysis of MAPCap data. The authors apply the new method to developing fly embryos and larvae and observe developmental stage- and sex-specific TSS activities. By using mutants of the maleless RNA helicase (MLE), which is important for balancing X-chromosomal gene dosage between male and female flies, the authors provide evidence for a global 2-fold downregulation of TSS activity on the X chromosome. Although a set of protocols is available for determining the exact genomic locations of TSSs, these methods usually do not allow the accurate quantification of gene expression levels. MAPCap overcomes this limitation by combining the purification of capped mRNAs with a state-of-the-art sequencing library preparation method that also includes spike-in RNA controls. I have no doubts that this protocol will be broadly applicable, because it allows TSS and differential gene expression to be measured at the same time and because of the computational analysis package that is provided for MAPCap data. The new method and the major findings of this study will be of broad interest for researchers in the transcription field and for colleagues in other research areas with a general interest in gene regulation. Here are some comments that need to be addressed prior to publication.

Major comments:
(1) Several protocols are available to determine TSS positions at high resolution, such as CAGE, RAMPAGE, GRO-cap and others. The authors now present a new approach for profiling TSSs at high resolution. A question that immediately arises in the first section of the Results is how reproducibly TSS locations are detected by MAPCap between biological replicates. The authors should provide this information early on, for instance at the beginning of the second paragraph (page 6).
(2) Along these lines, the authors also compare MAPCap data with data obtained from other TSS profiling methods such as CAGE. On page 7 (second paragraph) the authors claim that 'MAPCap signal shows good correlation with other protocols'. The authors should mention the correlation coefficients between the different methods here. New figures that result from this analysis can be added to the supplement.
(3) The authors claim that MAPCap is 'fast and easy to perform'. This statement is vague. RAMPAGE takes two days. How long does MAPCap take? What does 'easy' exactly mean here? Fewer experimental steps? A detailed comparison with the experimental steps of CAGE, RAMPAGE and GRO-cap would help. Along these lines, the authors should highlight in Figure 1a the new steps of the MAPCap approach and should clearly label the steps that were adapted from pre-existing methods such as the FLASH protocol (mainly the sequencing library prep).
(4) MAPCap, as compared to CAGE and RAMPAGE, shows a relatively low precision (~0.5) using paraclu (Figure S1d). This means that 1 out of 2 identified TSSs is a false positive. This needs to be discussed in the manuscript.
How are the sensitivity and precision improved when the 'local enrichment' algorithm that is implemented in 'icetea' is applied?

Minor comments:
(1) Regarding the description of the MAPCap library preparation in the methods section: please add the unit number for each enzyme that is used, including for RNase H, Terminator exonuclease etc., and also mention how much s-oligo was used for ligation. This information is essential for potential future users of this approach.
(2) The authors provide evidence that removal of PCR duplicates in the absence of random barcodes leads to near-complete loss of signal in the case of the CAGE data, as expected (Fig. 1f). This is shown for only one gene (Chc). The authors should provide evidence as to whether this is also true on a global scale.
(3) On page 3, second paragraph, and on page 7, third paragraph, the authors mention that RAMPAGE relies on 'pseudo-random barcodes'. What is the difference to 'real' random barcodes? Please clarify in the main text.
(4) Page 9, second paragraph: what do the authors mean by 'long TSSs'? An extended genomic region with multiple TSSs? This needs to be clarified in the main text.
(5) The font sizes need to be increased in almost all figures, including the supplementary figures. Labels and legends are sometimes missing (e.g. Figure 3d, h) and genome tracks are lacking gene labels (e.g. Figure 2d, e and Figure 3e, g). This needs to be fixed. Figure 1b: the colors for CAGE (dark blue) and RAMPAGE (light blue) can hardly be distinguished. Please change.
(6) The manuscript contains several typos that need to be fixed such as on:

[Authors' response]
• We provide a detailed, step-by-step MAPCap protocol in the Nature Protocols format (as a reviewer's file, also uploaded to Protocol Exchange) that would simplify its widespread adoption.
• We have expanded our comparison of MAPCap with external protocols by adding additional statistics and evidence for the TSSs, such as ChIP-seq datasets of active marks and DNase-seq data in embryos.
• We added a new MAPCap experiment in mouse ESCs, comparing it with nAnT-iCAGE as well as demonstrating the allele-specific detection of TSSs.
• We added a new MAPCap experiment in S2 cells to show the TSS enrichment at various concentrations of starting RNA (up to 100 ng).
• We show that icetea provides superior results using replicates even on external datasets, using CAGE and DNase-seq analysis in S2 cells.
• We expanded our analysis of stage- and sex-specific eRNAs by adding online CAGE data from adult male and female heads and comparing them to our embryo and larvae MAPCap data.

Finally, we added various clarifications and comparisons in the text, expanded the figure sizes and answered all other individual issues raised by the reviewers. All these changes are highlighted in yellow.

Reviewer #1
Overall, Akhtar and colleagues have combined existing methodologies to develop a novel MAPCap method which is improved because it allows quantitative comparisons not possible with prior approaches for quantifying TSS. They have used early multiplexing of samples, removed PCR duplicates by random barcodes and used external spike-in controls to allow accurate quantification of TSS expression. They quantified X-chromosome dosage compensation and discovered that MLE has an interesting sex-specific role in the brain. The authors also confirmed prior observations that different roX promoters are developmentally regulated.

We thank the reviewer for the supportive comments and for finding our method novel.
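As background for the random-barcode duplicate removal summarized above, a minimal sketch of the idea; this is a toy model, not the actual MAPCap/icetea implementation:

```python
def dedup_by_barcode(reads):
    """Collapse PCR duplicates: reads sharing (chromosome, 5' position,
    strand, random barcode) are counted as one original molecule."""
    return set(reads)

reads = [
    ("chrX", 1_000_000, "+", "ACGTAG"),
    ("chrX", 1_000_000, "+", "ACGTAG"),  # PCR duplicate -> collapsed
    ("chrX", 1_000_000, "+", "TTGCAA"),  # same position, new barcode -> kept
]
assert len(dedup_by_barcode(reads)) == 2
```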
There are several important concerns to address: 1) The authors measure dosage compensation without looking at elongation, which is a caveat that should be mentioned in the text. Indeed, MAPCap is specifically designed to assess transcription initiation by mapping transcription start sites. We have now elaborated on this point and indicated which methods would be suitable to calculate initiation and elongation rates (page 19). 2) All figures are very small and many are missing axis labels. Please recheck all labels and increase the size of figures. Apologies for the inconvenience. We have increased the font sizes in the figures and placed labels wherever required. 3) I would like to see more details about their computational icetea method. I read their tutorial on Bioconductor but I do not think they provided enough detail in this paper. We have now added details of the methods implemented in the icetea package in a new section, "TSS analysis methods implemented in the icetea package", under Methods (pages 32-33). 4) The authors have shown enhancer RNAs that have stage-specific expression, but they should have determined whether eRNAs also showed sex-specific expression. Were there no sex-specific eRNAs? Indeed, we analyzed the difference in eRNAs between sexes and detected sex-specific eRNAs (Fig. 2h, Supplementary Fig. 4e). We note one caveat: these eRNAs show high variability in expression between biological replicates, and therefore it is difficult to conclude whether a majority of them are truly sex-specific (Supplementary Fig. 5d). Therefore, we focussed more on the stage-specific eRNAs in the manuscript. 5) Embryo and sexed larval brains are not the best comparison, because the embryo is both male and female, and the brain is a specific tissue while the embryo represents a whole organism. It would be much better if larval and adult brain could be compared in males and females separately. We thank the reviewer for the suggestion. Upon revision, we have now expanded the analysis (Fig. 2g-h, Supplementary Fig. 4b-d) to include CAGE data from male and female adult heads. Most eRNAs are stage-specific rather than sex-specific. However, we agree that it would be interesting to investigate the tissue specificity of eRNA expression using MAPCap by doing a larger tissue-by-tissue comparison in a future study. 6) Does MLE have any role in females or on autosomes? This should be addressed. There might be some misunderstanding by the reviewer. Indeed, we did investigate the role of MLE in females and on autosomes. We find that genes with sexually dimorphic expression are regulated by MLE in both males and females, and many of them are also on autosomes. This points to a novel role of MLE in regulating an X-independent process in both sexes (pages 13-14, Fig. 3f, Supplementary Fig. 4e-h). Specific comments: Their Materials and Methods section is missing some detailed information, such as the stage and number of embryos used. Thanks for pointing this out. We have added these details in the revised version (page 22). We used log(counts per million) of reads for all these plots (with a pseudocount of 1 to avoid the log of zeros). We updated Fig. 1c-d, Supplementary Fig. 1 and the corresponding legend to mention this. Page 9: why brain tissue, why not study sexed embryos? Why this stage? Is this the stage at which the brain has developed? Sexual dimorphism in the brain leads to sex-specific behaviour in flies, such as male courtship behaviour.
Although structural and transcriptomic studies have been performed on the brain, an analysis of promoter usage has been lacking. Therefore, we thought it would serve as a nice system for MAPCap, and our data should also serve as a useful resource for future studies. Although the cells that contribute to the brain start differentiating in early embryos, a functional and dissectable brain can only be obtained at the larval stage. We specifically chose L3 larvae, as this is the stage at which defects in dosage compensation lead to male lethality. This also gives us an opportunity to understand the role of MLE in sexual dimorphism, as we discussed in the manuscript (pages 13-14). We have clarified this rationale in the revised version (page 10). Along with the concordance of differential expression estimates provided before (Fig. 3d, Supplementary Fig. 5a), we have now added the GO term analysis and Venn diagrams in Supplementary Fig. 5b-c. We used DEGs from both methods for the comparison of wild-type males and females, while we used DEGs from MAPCap for the wild-type:mutant comparison. Page 13, top paragraph: MLE mutants caused both autosomal and X-linked genes to change (Fig. 3f), so the calculation of the X:A ratio in mutant males and females may not be appropriate. We only calculated the X:A ratio in mutants because we were curious to compare it with the X:A dosage compensation estimates presented in previous studies (which used other orthogonal approaches). For all other analyses, we have only used X:X ratios between sexes. It would be better to plot a histogram of the changes on the X and the changes on the autosomes and overlay them. We have added this in the revised version (Fig. 4a). We have added the comparison with the Lott and Eisen (2011) study, which examined the difference in gene expression between sexes during the onset of dosage compensation in early embryos (Supplementary Fig. 6d, page 15). We find that the MLE-sensitive genes identified in our study seem to be tightly dosage compensated, while the MLE-insensitive genes have a compensation score (male:female slope ratio) similar to autosomes. We couldn't download the gene list from … We had performed a t-test assuming a normal/log-normal distribution of gene expression. During the revision, we checked the normality of both expression and distance to nearby features using the Shapiro–Wilk test. The results suggest that the assumption is correct (p < 2.2e-16). A KS test, however, also indicates a significant difference (p ≈ 0). (Fig. 2d; page 7, top paragraph). Therefore, we believe that MAPCap would indeed be appropriate to study imprinting and allele-specific expression at TSSs. We are interested in further optimizing and applying MAPCap to study these processes in mammals in the future. Reviewer #2: (Fig. 1f, Supplementary Fig. 2b-c; page 7, top paragraph). The correlation is lower than our initial expectation. However, this is very likely due to 1) a technical issue: low material left after cap enrichment, leading to more PCR cycles (18 cycles); and 2) the two cell lines not being of exactly the same background. We strongly believe that the correlation can be further improved if both protocols are systematically performed in parallel on the same cell line. This, however, deserves an independent benchmarking study, which is beyond the scope of the current manuscript. Nonetheless, we keep this new analysis in the manuscript, as it provides a "proof of principle" experiment to show the versatility of MAPCap in mammalian cells.
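Several of the replies above refer to replicate correlations computed on log(counts per million) with a pseudocount of 1. As a rough sketch of that kind of check (not the authors' actual pipeline; the count table, column names and helper function below are invented for illustration):

```python
import numpy as np
import pandas as pd

def log_cpm(counts, pseudocount=1):
    """Counts-per-million with a pseudocount, log10-transformed.

    `counts` is a (features x samples) DataFrame of raw read counts.
    """
    cpm = counts / counts.sum(axis=0) * 1e6
    return np.log10(cpm + pseudocount)

# Hypothetical per-TSS count table with two biological replicates.
counts = pd.DataFrame(
    {"rep1": [120, 0, 55, 3000], "rep2": [98, 2, 61, 2750]},
    index=["TSS_a", "TSS_b", "TSS_c", "TSS_d"],
)

expr = log_cpm(counts)
r = expr["rep1"].corr(expr["rep2"])  # Pearson correlation on log-CPM
print(f"replicate correlation: r = {r:.3f}")
```

The pseudocount simply keeps features with zero counts in one replicate from producing undefined log values, as mentioned in the response above.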
2) Fig. S1d shows CAGE to have much higher sensitivity and precision than MAPCap in detecting TSSs. Is this due to the superiority of CAGE, or were the modENCODE CAGE data used in defining the TSSs in the Ensembl annotation? The authors should check this, and if this is the case, an annotation predating or not defined with the CAGE data should be used. We agree with the reviewer, and this is indeed the case. We used the Ensembl (release 79) annotation of the TSSs for this analysis. For Drosophila, both Ensembl and UCSC annotations are, in fact, obtained from FlyBase: • http://mar2015.archive.ensembl.org/Drosophila_melanogaster/Info/Annotation • http://hgdownload.cse.ucsc.edu/goldenPath/dm6/database/ Since 2010, FlyBase makes use of the TSS mapping data from modENCODE and other TSS mapping projects (including RAMPAGE) to update its annotation, and therefore using any (UCSC/Ensembl/FlyBase) annotation of the dm6 assembly would produce results in favor of the CAGE and RAMPAGE protocols: • https://wiki.flybase.org/wiki/FlyBase:Gene_Model_Annotation_Guidelines • http://www.g3journal.org/content/5/8/1721.long We therefore do not expect to perform better on these metrics when using any dm6 annotation as a gold standard, but rather aim to achieve comparable results. If DNA accessibility or active histone marks are available for these tissues, they could be used as orthogonal evidence for transcription in comparing the methods. We thank the reviewer for the suggestion. Upon revision, we have now used the modENCODE ChIP-seq data for active histone marks (H3K4me1, H3K4me3 and H3K27ac) and DNase-seq data from a comparable stage as additional evidence for an active TSS (Supplementary Fig. 3b). We find that about one fifth of the TSSs classified as "false positives" have additional evidence of active transcription. Fig. S1d also shows MAPCap with similar sensitivity to RAMPAGE but lower precision; does this suggest that MAPCap has a higher false-positive rate? Can this be explained? In the previous analysis, we plotted the metrics (precision/sensitivity and F1-score) at the minimal peak detection threshold from paraclu (signal enrichment (density rise) >= 1, number of reads in peaks >= 1), which led to a high number of false positives in the MAPCap data. In the new comparison, we optimized these parameters to achieve the maximum F1-score for each protocol independently (similar to Adiconis et al., 2018). We also utilized the DHS peaks to detect false negatives. This improves the precision-sensitivity metrics for MAPCap (revised Supplementary Fig. 3a). Further, adding the additional evidence suggested by the reviewer, we find that one fifth of the "false positive" peaks in MAPCap have additional supporting evidence (Supplementary Fig. 3b). We would therefore suggest that MAPCap performs comparably to RAMPAGE and CAGE in TSS detection. We have now added this in Supplementary Fig. 1h. We have further added a PCA plot in Supplementary Fig. 1f to show the comparison of all three protocols in the same plot. 4) The authors cite the handling of low-input samples, in combination with sample multiplexing, as an advantage of MAPCap. What are the minimum or recommended RNA input requirements, and how do they compare with other methods? In order to show the effect of low-input samples on peak calling and signal enrichment, we added data from a MAPCap experiment on wild-type S2 cells with a range of starting material.
MAPCap shows good enrichment down to 100 ng of total RNA as starting material, but in the absence of replicates, we recommend starting with about 500 ng to avoid false-positive peaks (Fig. 1g, Supplementary Fig. 2e). We have discussed these results on page 7 and the comparison with other protocols in the Discussion section (pages 16-17). (page 14, Fig. 4b). 5) The introduction states that "the variability of promoter usage and expression of different transcripts …" 6) In the abstract, "high-resolution detection of transcription start-sites and differential expression analysis in a single setup, using a fast and simple protocol" is misleading, considering the protocol requires a multi-step, multi-day procedure. The authors should be more precise in the description of the method and provide more detail, including the initial RNA amount and how the quantity and quality of the RNA change during the steps, as the protocol includes multiple rounds of heating, sonication, and incubation. What is the mapping rate to different genomic regions (5', intergenic, intronic, etc.) compared to other methods? We agree that a more detailed description of the MAPCap protocol is necessary for its broad adoption. We therefore provide a detailed, step-by-step MAPCap protocol in this revision (reviewer's file, which we have also uploaded to the Protocol Exchange platform), mentioning the time required for each step and including RNA quality plots. Mapping rates are shown in Supplementary Fig. 2b. 7) On page 6, the authors state that "the s-oligo incorporates the sequences of both standard sequencing adapters which omits the usage of an RT-primer and allows for a highly efficient intramolecular ligation." The authors should describe in more detail how the s-oligo is constructed and the nature of the oligo: is it RNA-based or DNA-based? If DNA-based, please describe the ligation efficiency between the s-oligo and RNA, as this process is known to be highly inefficient. If RNA-based, is the modification on the oligo complete or partial? While the RT-primer is not used, reverse transcription still takes place, and it is surprising that G addition is not observed in the MAPCap protocol. Can the authors explain how and why G addition is avoided as compared to other protocols? We have added more details on the s-oligo and G addition (pages 5-6) under the Results section. The s-oligo is an RNA-DNA hybrid in which the RNA part is used for ligation and the DNA part serves as the pre-designed template for reverse transcription. Further details on the design of the s-oligo will also appear in our upcoming manuscript (under review), in which we studied several human RNA-binding proteins using FLASH. We therefore describe the design here only briefly in order to avoid redundancy. The SuperScript III reverse transcriptase (used in MAPCap) has intrinsically low terminal deoxynucleotidyl transferase (TdT) activity, which, in fact, depends on the concentration of enzyme, time of incubation, temperature and buffer conditions 1,2. Cap-trapping methods enhance this activity by changing the buffer concentrations of Mn2+ and Mg2+ ions 3, while template-switching protocols maximize it further by additionally providing the non-template oligos 4. In MAPCap, we simply skip these optimization steps, as we do not rely on them for the selection of capped cDNA, thereby reducing the 'G' addition bias. Combination of replicates can be performed by pooling (merging of reads) or intersection (taking common TSSs) of replicates.
In the revised version, we have further added a comparison of our algorithm with these two alternative approaches, using the AUPRC (area under the precision-recall curve) as the performance metric. Our method provides better results than these alternatives (revised Fig. 2a-b). Our algorithm is protocol-agnostic and can also be applied to other protocols with replicates to improve the accuracy of TSS detection. In the revised manuscript, we show the results with 1 million subsampled reads for both MAPCap and CAGE data. In the case of the CAGE data, we obtain many more true-positive peaks while keeping the same optimal F1-score as with paraclu (Fig. 2b and Supplementary Figs. 3d-f, pages 7-8). 9) The authors state on page 11 that "an in-vivo high resolution analysis comparing different promoters has not been done before", which is clearly not true, as many in vivo analyses using CAGE have been reported. This one in particular shows allele-specific usage of TSSs in zebrafish: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4820030/ We referred to this in the context of dosage compensation in flies. We have now clarified this in the text (page 13). 10) In addition to spike-in normalization, it would be relevant to compare up/downregulated genes using more conventional data normalization methods such as TMM, DESeq, RC, etc. We have added a comparison of our results with those obtained from TMM and DESeq normalization in terms of the overall numbers, direction and fraction of affected TSSs on the X-chromosome. These alternative normalization methods produced a balanced set of up- and downregulated genes and masked the global expression shift on the X-chromosome. We added the comparison in Supplementary Fig. 5i and expanded the text (page 13). Minor points: Peak calling with paraclu is often performed on pooled reads from all samples; the authors should also add this comparison to Fig. 2a. We have now expanded Fig. 2a to compare our results with paraclu on "pooled" samples, as well as on individual samples followed by taking an "intersection" of the results. Indeed, the icetea method provides better accuracy in TSS detection than either of these approaches. The authors should specify the procedure used for 'internal normalization'. In the original submission, we used the DESeq2 method as "internal normalization" to compare with our spike-in results, which we have now expanded to include multiple methods (page 14). We provide multiple normalization methods such as TMM, DESeq2 etc. in the icetea package, which we have now mentioned in the extended description of the icetea package under Methods (pages 32-33). We have now provided a metagene profile of the different protocols in Supplementary Fig. 2a, which shows the 5'-specificity as well as the preservation of signal after removal of PCR duplicates in the MAPCap protocol. We typically find that >60% of duplicate-free reads in our MAPCap experiments fall within TSSs (Supplementary Fig. 4f). This is close to CAGE/RAMPAGE (70-80%) and better than low-input protocols such as SLIC-CAGE/nanoCAGE. However, this can be increased further if the TSS detection using icetea is optimized. Instead of a single gene in … Are replicates available for either the CAGE or RAMPAGE data to compare with Fig. S1c? Replicates are available for the CAGE data in S2 cells from modENCODE; we have processed them using the same parameters as MAPCap and added them for comparison (both before and after PCR de-duplication) in Supplementary Fig. 1e-f.
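For readers unfamiliar with the AUPRC metric and the per-protocol F1 optimization discussed above, the following is a small, self-contained sketch of how both can be computed for scored peak calls. The scores and labels are invented, and this is not the icetea implementation:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, auc

# Hypothetical per-peak scores (e.g. read counts or enrichment) and labels
# (1 = peak overlaps an annotated/DHS-supported TSS, 0 = no support).
scores = np.array([0.9, 0.8, 0.75, 0.6, 0.5, 0.4, 0.3, 0.2])
labels = np.array([1,   1,   0,    1,   0,   1,   0,   0])

precision, recall, thresholds = precision_recall_curve(labels, scores)
auprc = auc(recall, precision)  # area under the precision-recall curve

# Best F1 over the threshold sweep (the per-protocol optimization
# described above); guard against 0/0 at the curve endpoints.
f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
best = np.argmax(f1)
print(f"AUPRC = {auprc:.3f}, best F1 = {f1[best]:.3f}")
```

Unlike a single precision/sensitivity pair reported at one fixed threshold, the AUPRC summarizes performance across all thresholds, which is why it is the fairer metric when protocols differ in their optimal cutoffs.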
Accession numbers for the external datasets used should be added (CAGE, RAMPAGE, RNA-seq). We added accession numbers for all external datasets. The Greek characters delta and mu are not inserted, leaving boxes. We replaced the figure labels and fixed the issue with the Greek letters. Reviewer #3: We thank the reviewer for appreciating the usefulness of our study and for the encouraging remarks. Here are some comments that need to be addressed prior to publication. We thank the reviewer for this comment. Upon revision, apart from the correlation on known 5'-UTRs, we now also provide the correlation between replicates on the detected TSSs. For comparison, we also calculated a similar correlation on the modENCODE CAGE data for S2 cells with replicates, using an identical approach, with and without PCR duplicates (see Methods; page). MAPCap shows a consistent 90-94% correlation between replicates on previously known 5'-UTRs as well as on detected TSSs. CAGE also performs in a similar range, although the duplicate-free correlation is relatively lower. We added these results in Supplementary Fig. 1d-f and in the second paragraph of our Results section (page 5), as suggested. We have added the correlation coefficients in the text (page 6), and have further expanded Supplementary Fig. 1 to add scatter plots and also a PCA plot to show the relationship between the methods. We have updated Fig. 1a to highlight the steps that are common between the MAPCap and FLASH protocols. (4) MAPCap as compared to CAGE and RAMPAGE shows a relatively low precision (~0.5) using paraclu (Figure S1d). This means that 1 out of 2 identified TSSs is a false positive. This needs to be discussed in the manuscript. How do the sensitivity and precision improve when the 'local enrichment' algorithm that is implemented in 'icetea' is applied? Upon revision, we further investigated the "false positives" from the MAPCap protocol. After taking the maximal F1-score for each protocol independently, the gap in sensitivity between MAPCap and the other protocols is reduced (please also see the reply to comment #2 from Reviewer #2). We then looked for further supporting evidence using DNase-seq and ChIP-seq data on these false-positive TSSs and find that one fifth of the false-positive TSSs have orthogonal supporting evidence. We have added these results in the manuscript (pages 8-9; Supplementary Fig. 3a-b). We have also plotted the precision-recall curve and added the AUPRC values (Fig. 2b) to show how the sensitivity and precision improve when icetea is applied to the data. Minor comments: (1) Regarding the description of the MAPCap library preparation in the methods section: please add the number of units for each enzyme that is used, including for RNase H, Terminator exonuclease etc., and also mention how much s-oligo was used for ligation. This information is essential for potential future users of this approach. We agree with the reviewer that these details are important. Our step-by-step protocol (reviewer's file) includes the concentrations/units of all required reagents. (2) The authors provide evidence that removal of PCR duplicates in the absence of random barcodes leads to near-complete loss of signal in the case of the CAGE data, as expected (Fig. 1f). This is shown for only one gene (Chc). The authors should provide evidence as to whether this is also true on a global scale. We have added metagene profiles showing the TSS signal between protocols on all genes after removing PCR duplicates (Supplementary Fig. 2a).
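The role of the random barcodes discussed throughout this exchange can be made concrete with a toy deduplication sketch: reads sharing mapping position and barcode are treated as PCR duplicates, whereas without barcodes the key degenerates to position alone, which collapses the genuinely independent molecules that pile up at a sharp TSS (the near-complete loss of CAGE signal noted above). This is a simplified illustration, not the actual MAPCap/icetea code:

```python
def dedup_by_umi(reads):
    """Collapse PCR duplicates: reads sharing chromosome, strand,
    5' position AND random barcode (UMI) are counted once.

    `reads` is an iterable of (chrom, strand, pos5, umi) tuples --
    a simplified stand-in for parsed alignments.
    """
    seen = set()
    unique = []
    for read in reads:
        if read not in seen:       # key = (chrom, strand, pos5, umi)
            seen.add(read)
            unique.append(read)
    return unique

reads = [
    ("chr2L", "+", 1000, "ACGT"),  # original molecule
    ("chr2L", "+", 1000, "ACGT"),  # PCR duplicate -> removed
    ("chr2L", "+", 1000, "TTGA"),  # same position, different UMI -> kept
]
print(len(dedup_by_umi(reads)))  # 2
```

Dropping the `umi` field from the key would collapse all three reads into one, which is exactly why position-only deduplication destroys quantitative signal at TSSs.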
We actually used "long TSSs" interchangeably with "broad TSSs" in that paragraph. We apologise for the confusion and have replaced "long TSSs" with "broad TSSs" in the revised version. Labels and legends are sometimes missing (e.g. Figure 3d, h) and genome tracks are lacking gene labels (e.g. Figure 2d, e and Figure 3e, g). This needs to be fixed. We have increased the font size for all figures, placed axis labels and gene names, and also updated Fig. 1b to better distinguish the colors. We have fixed the above along with the other typos that we could identify in the manuscript.
Chronic Melatonin Administration Reduced Oxidative Damage and Cellular Senescence in the Hippocampus of a Mouse Model of Down Syndrome

Previous studies have demonstrated that melatonin administration improves spatial learning and memory and hippocampal long-term potentiation in the adult Ts65Dn (TS) mouse, a model of Down syndrome (DS). This functional benefit of melatonin was accompanied by protection from cholinergic neurodegeneration and the attenuation of several hippocampal neuromorphological alterations in TS mice. Because oxidative stress contributes to the progression of cognitive deficits and neurodegeneration in DS, this study evaluates the antioxidant effects of melatonin in the brains of TS mice. Melatonin was administered to TS and control mice from 6 to 12 months of age, and its effects on the oxidative state and levels of cellular senescence were evaluated. Melatonin treatment induced antioxidant and antiaging effects in the hippocampus of adult TS mice. Although melatonin administration did not regulate the activities of the main antioxidant enzymes (superoxide dismutase, catalase, glutathione peroxidase, glutathione reductase, and glutathione S-transferase) in the cortex or hippocampus, melatonin decreased protein and lipid oxidative damage by reducing thiobarbituric acid reactive substances (TBARS) and protein carbonyl (PC) levels in the TS hippocampus, owing to its ability to act as a free radical scavenger. Consistent with this reduction in oxidative stress, melatonin also decreased hippocampal senescence in TS animals by normalizing the density of senescence-associated β-galactosidase-positive cells in the hippocampus. These results showed that this treatment attenuated the oxidative damage and cellular senescence in the brain of TS mice and support the use of melatonin as a potential therapeutic agent for age-related cognitive deficits and neurodegeneration in adults with DS.

Introduction
Down syndrome (DS) is characterized by the triplication of a complete or partial copy of chromosome 21 (Hsa21). The cognitive impairment of DS individuals is partially due to developmental alterations, although in later life stages this impairment is progressively aggravated due to accelerated aging and the development of Alzheimer's disease (AD) neuropathology [1,2]. Among the different mouse models of DS, the Ts65Dn (TS) mouse, which contains three copies of 92 genes orthologous to Hsa21 genes [3], recapitulates numerous phenotypic characteristics of DS, including cognitive deficits and alterations in brain morphology and function [1,4], which become more pronounced as the animals age [5]. One of the mechanisms that contributes to the accelerated aging and the cognitive and neuronal dysfunction in DS is increased oxidative stress, which is present from early life stages and affects neurogenesis as well as neuronal differentiation, connectivity and survival [6][7][8]. During later life stages, oxidative stress is aggravated in DS and in the TS mouse [9][10][11][12], contributing to the progression of cognitive decline and neuronal degeneration [4,9,13].
The enhanced oxidative stress found in DS individuals and in the TS mouse is caused by the triplication of several Hsa21 genes [9], including SOD1/Sod1, the gene encoding superoxide dismutase (SOD), which catalyzes the conversion of superoxide anions into hydrogen peroxide (H2O2). The increase in SOD activity results in the formation of disproportionate levels of H2O2, leading to the overproduction of highly reactive oxygen species (ROS). In addition, oxidative stress induces cell senescence [14,15], a process that is characterized by permanent arrest of cell proliferation [16]. Melatonin is an indoleamine mainly synthesized and secreted by the pineal gland. Its production progressively decreases as animals age [17], and its exogenous administration has been demonstrated to induce neuroprotective effects [18][19][20]. Melatonin protects against oxidative stress by regulating anti- and pro-oxidant enzymes, acting as a potent ROS scavenger [21], and repairing molecules damaged by ROS overgeneration. Thus, melatonin has been proposed to be a powerful tool in the treatment of neuropathologies in which oxidative stress is enhanced. Our previous studies [10,22] showed that melatonin treatment during adulthood improved spatial memory and hippocampal long-term potentiation (LTP) in TS mice. This functional benefit was associated with protection against cholinergic neurodegeneration and the normalization or attenuation of several hippocampal neuromorphological alterations of TS mice. The aim of this study was to evaluate the oxidative status and the density of senescent cells in the brain of adult TS mice and to evaluate the effect of chronic melatonin treatment on these processes.

Animals and housing
This study was approved by the Cantabria University Institutional Laboratory Animal Care and Use Committee and was carried out in accordance with the Declaration of Helsinki and the European Communities Council Directive (86/609/EEC). Mice were generated, karyotyped, housed and maintained as previously described by Corrales et al. [22].

Melatonin treatment and experimental groups
TS mice and their euploid littermates (CO) were orally treated with melatonin (Mel: 100 mg/L; Sigma-Aldrich, Madrid, Spain) or its diluent from 6 to 12 months of age, as previously described by Corrales et al. [10], and assigned to one of the following experimental groups: TS-Mel, CO-Mel, TS-vehicle or CO-vehicle. Six animals per group were used to assess the effects of melatonin administration on oxidative stress, while 7 additional animals per group were used to perform the senescence study.

Oxidative stress assays
Sample preparation. Cortex and hippocampal samples were homogenized in cold buffer containing 20 mM sodium phosphate, pH 7.4; 0.1 % Triton; and 150 mM NaCl (1:20 w/v) and centrifuged at 5000 g for 5 min. The supernatant was used to perform all the biochemical determinations, which were carried out in triplicate. Antioxidant enzyme assays. Enzymatic activities were measured as described by Parisotto et al.
[23]. Briefly, CAT activity was analyzed at 240 nm by quantifying the decrease in the level of H2O2 (expressed in mmol/min/g) in a 10 mM H2O2 solution. To determine SOD activity, the oxidation of epinephrine (from pH 2.0 to pH 10.2), which produces the superoxide anion and a pink chromophore (expressed in USOD/g), was quantified at 480 nm. GPx activity was determined by measuring the oxidation of NADPH at 340 nm (expressed in μmol/min/g). Glutathione reductase (GR) activity (expressed in μmol/min/g) was analyzed by quantifying the oxidation of NADPH at 340 nm due to the formation of GSH from GSSG via the GR present in the assay solution. Lipid peroxidation assessment: Lipid oxidation was determined spectrophotometrically at 535 nm via the quantification of thiobarbituric acid-reactive substances (TBARS, expressed in nmol/g) as described by Parisotto et al. [23]. Protein carbonyls (PC): Oxidative damage caused by protein carbonylation was determined by measuring carbonyl absorbance at 360 nm as previously described [23]. The PC concentration was expressed in mmol/mg. Tissue preparation: The animals were anesthetized and perfused, and the hippocampi were removed and processed for histology and cell counting as previously described [10]. Nissl staining. To calculate the area of the subgranular zone (SGZ) of each mouse, a randomly chosen series was used to perform Nissl staining. The SGZ area was measured via the standard Cavalieri method as previously described by Llorens-Martín et al. [24]. Briefly, the total SGZ extension was measured using a semiautomatic system (ImageJ v.1.33, NIH, USA, http://rsb.info.nih.gov/ij/) with the series of images from toluidine blue-stained sections. We then drew the SGZ below the internal side of the granular cell layer on the computer screen and measured the length of the resulting lines. The SGZ area of a series was calculated by multiplying the total SGZ extension by the thickness of the sections (50 µm). Histochemical detection of senescence-associated β-galactosidase. To estimate the density of senescent cells in the SGZ of the dentate gyrus (DG) in the different groups of mice, we used the SA-β-gal (senescence-associated β-galactosidase) assay described by He et al. [25]. The hippocampal sections were washed twice with PBS and fixed for 15 min at room temperature with a 0.5 % glutaraldehyde solution. Next, the sections were washed and incubated with a staining solution containing 5-bromo-4-chloro-3-indolyl-β-D-galactopyranoside (X-gal, Thermo Fisher Scientific, MA, USA) for 24 h at 37 °C, mounted on Superfrost Plus glass slides, dehydrated, cleared, and coverslipped with mounting medium. The density of SA-β-gal-positive cells (showing a blue reaction product over the cell soma) was determined by counting all blue cells in the SGZ of the DG of each animal using a Zeiss Axioskop 2 plus microscope with a 40X objective and dividing this number by the SGZ area.

Statistical analysis
Data were analyzed using MANOVA ('genotype' x 'treatment') followed by post-hoc group comparisons after Bonferroni corrections when all groups were compared, and by Student's t-test when two individual groups were compared. All of the analyses were performed in SPSS (version 22.0, Chicago, IL, USA) for Windows.
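The 2x2 'genotype' x 'treatment' design described above can be illustrated with a short analysis sketch. The following is a minimal example of a two-way ANOVA with interaction using Python/statsmodels rather than SPSS; the data values, column names and group sizes are invented for illustration and are not the study's data:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Invented example data: one oxidative-stress readout (e.g. PC levels)
# for the four groups (TS/CO x melatonin/vehicle), n = 6 per group.
df = pd.DataFrame({
    "genotype":  ["TS"] * 12 + ["CO"] * 12,
    "treatment": (["mel"] * 6 + ["veh"] * 6) * 2,
    "pc":        [8.1, 8.4, 7.9, 8.0, 8.3, 8.2,   # TS + melatonin
                  9.3, 9.1, 9.4, 9.0, 9.2, 9.5,   # TS + vehicle
                  8.0, 8.2, 7.8, 8.1, 7.9, 8.3,   # CO + melatonin
                  8.4, 8.6, 8.3, 8.5, 8.2, 8.7],  # CO + vehicle
})

# Two-way ANOVA with a genotype x treatment interaction term,
# analogous to the factorial design described in the text.
model = smf.ols("pc ~ C(genotype) * C(treatment)", data=df).fit()
print(anova_lm(model, typ=2))
```

The same call would be repeated for each readout (PC, TBARS, enzyme activities) in turn; pairwise comparisons between two groups, as in the text, would use a Student's t-test (e.g. scipy.stats.ttest_ind).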
Results
To determine the brain oxidative stress status of adult TS mice and the possible beneficial effects of chronic administration of melatonin, we evaluated the levels of brain oxidative damage and the state of the antioxidant enzymatic system in the hippocampus and cortex of vehicle- and melatonin-treated TS and CO mice. Table 1 shows the results of the multivariate analysis for each variable assessed. To evaluate the effects of melatonin treatment on the levels of brain oxidative damage, we measured the levels of PC (a marker of protein damage induced by ROS) and of TBARS (a marker of lipid damage induced by ROS, i.e., lipid peroxidation). Although MANOVA revealed no significant differences due to genotype, post hoc analyses showed a significant increase in the level of oxidized proteins (PC, ~10%; t=2.48; p=0.038; Fig. 1A) and of lipid peroxidative damage (TBARS, ~16%; t=2.33; p=0.042; Fig. 1B) in the hippocampus of adult vehicle-treated TS mice compared to CO mice under the same treatment. However, no differences in the cortical levels of either marker were observed between the different groups of animals (Fig. 1A and B). Melatonin significantly decreased the levels of protein damage (~12%; t=5.86; p=0.001) and lipid peroxidation (~29%; t=5.14; p<0.001) in the hippocampus of TS mice (Fig. 1A and B, Table 1) and produced a significant decrease in the levels of PC in the hippocampus of CO mice (~10%; t=2.68; p=0.028; Fig. 1A, Table 1). Next, we examined whether melatonin might also exert its antioxidant effects by regulating the activity of the most important antioxidant enzymes in the cortex and hippocampus of the animals. As expected, SOD activity was increased (~143%) in the cortex of vehicle-treated TS mice compared to their CO littermates (Fig. 2A, Table 1). In the hippocampus, although MANOVA revealed the absence of significant differences due to genotype, post hoc comparisons showed higher SOD activity in vehicle-treated TS mice than in CO mice (~33%) under the same treatment (t=2.31; p=0.046). Melatonin administration did not modify SOD activity in either brain region (Table 1). We then analyzed the activity of CAT and GPx, because the activity of SOD must be coordinated with the activity of these two antioxidant enzymes to metabolize the H2O2 that is produced by SOD into H2O and O2. As shown in Fig. 2B, CAT activity was increased in the cortex (~64%) but not in the hippocampus of TS animals. In addition, GPx activity was increased in both brain structures (cortex: ~34%; hippocampus: ~27%, Fig. 2C) in vehicle-treated TS mice when compared to CO animals (Table 1). Chronic melatonin administration did not modify the activity of CAT or GPx in the cortex or hippocampus of TS or CO mice (Figs. 2B and C, Table 1). To detoxify hydroperoxides, GPx requires the participation of glutathione (GSH) as a cofactor to counteract the continuous formation of its oxidized form (GSSG), which is very toxic to cells. Thus, we also measured the activity of GR, a flavoprotein that allows the conversion of GSSG back into GSH. GR activity did not differ in the cortex or hippocampus of TS or CO mice treated with melatonin or vehicle during adulthood (Fig. 2D, Table 1). Finally, we measured the activity of GST, an enzyme that participates in the detoxification of the endogenous hydroperoxides continuously generated through cellular lipoperoxidation processes.
TS mice presented lower hippocampal GST activity (~12%) than CO mice that received the same treatment. However, GST activity was similar in the cortex of the different groups of mice (Fig. 2E, Table 1). Because oxidative stress induces premature cellular senescence, we assessed the density of SA-β-gal-positive cells in the hippocampus of the four groups of mice. TS mice treated with vehicle presented a higher density of SA-β-gal-positive cells than CO mice, indicating greater hippocampal senescence (MANOVA 'genotype': F(1,25)=6.35, p=0.019; Fig. 3). Interestingly, melatonin treatment significantly reduced the density of cells with a senescent phenotype in the DG of TS mice (MANOVA 'treatment': F(1,25)=9.07, p=0.005; Fig. 3B).

Discussion
We have previously demonstrated that melatonin exerts cognitive-enhancing effects in the adult TS mouse [10,22]. These beneficial effects of melatonin in TS animals were partially due to the prevention of cholinergic degeneration and to the normalization of the function and/or morphology of the hippocampus. In this study, we evaluated other mechanisms that could be involved in the altered cognition of these mice, i.e., the oxidative stress status and the density of senescent cells in the brain, as well as the effect of melatonin treatment on these processes. We first evaluated the status of the antioxidant defense system in the brain of TS and CO mice. Consistent with the triplication of the Sod1 gene and the increased activity of SOD observed in different tissues of DS individuals and TS mice [15,23,26,27], we found that SOD activity was increased in the cortex and hippocampus of TS mice. Although SOD catalyzes the dismutation of superoxide, a free radical with high toxicity, to non-radical molecules such as oxygen and H2O2, the accumulation of H2O2, or its inefficient removal, leads to the formation of the most deleterious hydroxyl radical (•OH), which damages membrane lipids, proteins and other biomolecules [28]. Thus, the increased activity of SOD found in the brain of TS mice may lead to an excess of •OH production, resulting in high oxidative damage. To prevent the accumulation of H2O2, SOD activity must be balanced with GPx and CAT activities. In DS neurons, higher SOD activity without a concomitant increase in the activities of the complementary antioxidant enzymes CAT and GPx creates a redox imbalance that may not efficiently neutralize the excess of H2O2 [29]. In this study, while GPx activity was increased in the cortex and hippocampus, CAT activity was only increased in the cortex of TS mice. The increase in GPx activity found in the brains of TS mice may be due to induction of the enzyme by excess H2O2 and lipid peroxides as an adaptive response to oxidative stress [30]. However, the fact that CAT activity was not increased in the hippocampus of TS mice to compensate for the enhanced SOD activity could result in insufficient removal of H2O2, which would favor the generation of •OH; this, in turn, could produce persistent oxidative stress.
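The enzyme-balance argument above can be summarized by the standard dismutation and peroxide-clearing reactions (textbook biochemistry, not equations taken from this paper):

```latex
\begin{align*}
2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+}
  &\xrightarrow{\ \mathrm{SOD}\ } \mathrm{H_2O_2} + \mathrm{O_2} \\
2\,\mathrm{H_2O_2}
  &\xrightarrow{\ \mathrm{CAT}\ } 2\,\mathrm{H_2O} + \mathrm{O_2} \\
\mathrm{H_2O_2} + 2\,\mathrm{GSH}
  &\xrightarrow{\ \mathrm{GPx}\ } \mathrm{GSSG} + 2\,\mathrm{H_2O} \\
\mathrm{GSSG} + \mathrm{NADPH} + \mathrm{H^+}
  &\xrightarrow{\ \mathrm{GR}\ } 2\,\mathrm{GSH} + \mathrm{NADP^+}
\end{align*}
```

When flux through SOD outpaces the combined CAT/GPx capacity, surplus H2O2 persists and can be converted to •OH via Fenton-type chemistry, which is the imbalance proposed here for the TS hippocampus.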
Among the anti-oxidative-stress-related enzymes, GST acts in xenobiotic detoxification by catalyzing the conjugation of GSH to chemical toxins [31] and contributes to the detoxification of the endogenous hydroperoxides that are continuously generated through cellular lipoperoxidation processes [32]. In agreement with previous observations in the blood of DS subjects [27,31], in this study the activity of the GST enzyme in the hippocampus of TS mice was lower than that in CO mice. Considering the antioxidant effects of this enzyme [31], this may be an additional factor contributing to the oxidative damage in this structure in this model of DS. The levels of GSH, another major neuronal endogenous antioxidant, are reduced in DS subjects [27,31]. Consistent with previous studies [33], we found no differences in the brain activity of GR (a central player in the conversion of GSSG to GSH) between CO and TS mice. However, the levels of GSH also depend on the levels of γ-glutamylcysteine synthase and glucose-6-phosphate dehydrogenase [34], which were not measured in our study. Melatonin has been shown to reduce oxidative stress through different mechanisms [21,34,35]: indirectly, by modulating the activity of the enzymes that are involved in controlling oxidative processes (up-regulating antioxidant and down-regulating pro-oxidant enzymes); and directly, by acting as a free radical scavenger and interacting with oxidative radicals. In this study, chronic melatonin treatment did not modify the activity of the SOD, CAT, GPx, GR or GST enzymes, suggesting that melatonin does not exert its antioxidant effects in the brains of adult TS mice by regulating the antioxidant defense system. This lack of an effect of melatonin on the indirect mechanism of reducing oxidative stress in TS mice might be due to the fact that the ability of this indoleamine to detoxify free radicals is highly variable and depends on the tissues and species involved [35,36]. For example, Olcese et al. [37] found that long-term melatonin administration to a mouse model of AD diminished the mRNA expression levels of the antioxidant enzymes SOD, CAT and GPx. However, a previous study in another model of AD reported opposite effects of melatonin treatment on antioxidant enzymes [38]. In TS mice, the genetically predetermined increase in SOD activity induces a dysregulation of the other two enzymes, and other triplicated genes also influence redox metabolism [39]. These factors are likely to contribute to the divergent results found after melatonin administration in the expression levels of these enzymes in TS mice and in murine models of AD. Furthermore, the regulation of antioxidant enzymes by melatonin can also differ under basal versus elevated oxidative stress conditions [34], as occurs in the brain of TS mice. Melatonin can also contribute to neuroprotection by exerting other indirect antioxidative and prosurvival effects, regulating different anti- or pro-apoptotic proteins and enzyme cofactors [40,41]. In addition, melatonin modulates a broad set of ROS- and survival-related signaling pathways that implicate transcription factors, such as NF-κB or Akt [42][43][44], which activate different pro-inflammatory genes involved in age-related processes under increased oxidative stress. Future studies should explore the effects of melatonin administration on these signaling pathways in TS mice. Lipids and proteins are the molecules most prone to major oxidative injury.
Therefore, we measured the levels of oxidative damage to proteins and lipids in the brains of TS mice. Consistent with previous studies in DS individuals and TS mice in which other markers of oxidative stress were analyzed [10,12], the levels of TBARS and PC were increased in the hippocampus, but not in the cortex, of vehicle-treated TS mice. These findings suggest that in adult TS mice, cells in the cortex may exhibit greater tolerance of, or better regulation against, oxidative stress than those in the hippocampus. Further support for this hypothesis comes from the finding that the activity of CAT and GPx was also increased in the cortex of TS mice. These effects may compensate for the enhanced SOD1 activity, leading to a reduction in the production of ROS and in oxidative injury in the cortex. Conversely, an imbalance of these enzymes in the hippocampus would lead to increased oxidative stress in this structure, which could play a role in the cognitive deficits and neurogenesis alterations that have previously been described in this model of DS [4]. Therefore, further studies involving brain structure-specific manipulations of the levels of the SOD and catalase enzymes are needed to determine whether these enzymes account for the higher vulnerability of the hippocampus to oxidative stress damage. The second mechanism by which melatonin reduces oxidative damage is its direct ROS-scavenging activity [21], detoxifying a variety of free radicals and reactive oxygen intermediates, including •OH, the peroxynitrite anion, singlet oxygen and nitric oxide [45], to avoid oxidative damage. In this study, we showed that melatonin treatment rescued the levels of protein damage and lipid peroxidation in the hippocampus of TS mice, as they did not differ from those of CO mice, indicating that the main antioxidant action of this indoleamine in this mouse model of DS may be due to its action as a free radical scavenger. This protective effect of melatonin against lipid and protein damage is consistent with previous reports in other mouse models with different neuropathologies [20,38] or with brain damage induced by radiation or by toxin exposure [46,47], in which oxidative stress plays an important role. In addition, oxidative stress is an important factor that causes cell senescence [14,15,25]. Because the hippocampus of TS mice seems to be exposed to greater amounts of oxidative stress, we analyzed the density of senescent cells in this structure. We found that vehicle-treated TS mice showed a higher density of SA-β-gal-positive cells in the DG of the hippocampus than CO mice. Consistent with these findings, it was recently demonstrated that fibroblasts with trisomy 21 present signs of premature cell senescence secondary to increased oxidative damage [15]. Interestingly, melatonin normalized the density of SA-β-gal-positive cells in TS mice to a level comparable to that found in vehicle-treated CO mice, indicating a potential antiaging protective effect in the hippocampus of this model. These results are consistent with previous studies showing that melatonin treatment effectively reverses H2O2-induced senescent phenotypes in mesenchymal stem cells [14]. Thus, it is likely that the melatonin-induced antioxidant effects in the hippocampus of TS mice are involved in the reduction of senescent cells in this structure.
Although administration of SGS-111, an antioxidant analogue of the nootropic piracetam, from conception or during adulthood failed to improve cognition in the TS mouse [48], other antioxidants, such as vitamin E, normalize oxidative stress and delay impairments in cognitive performance [11]. It has been demonstrated that, due to a variety of physiological and metabolic advantages [35], the protective effects of melatonin against oxidative damage are more potent than those induced by vitamins C or E [21], partially because the metabolites formed by free radical scavenging also present antioxidant activity [21,35]. Because the pro-cognitive effects of melatonin are related to its multiple antioxidant actions [20,37,46,47], and the learning and memory deficits in TS mice are associated with increased brain oxidative stress, our results suggest that the beneficial effects of melatonin treatment on TS mouse cognition [22] could be partially due to its antioxidant action as a ROS scavenger. However, because melatonin has also been demonstrated to exert other neuroprotective effects, its administration could improve cognition through other mechanisms, such as recovering LTP or promoting neurogenesis [19,20,[49][50][51]. In our previous work [10], melatonin also decreased cholinergic degeneration; increased the density of proliferating cells, differentiated neuroblasts and mature granular cells; and improved the impaired LTP in the hippocampus of TS mice. In the present study, melatonin also reduced the density of cells with a senescent phenotype in TS mice. These results support the idea that the pro-cognitive effects of this indoleamine could also be due to other neuroprotective actions. Therefore, melatonin could be a more effective pharmacological tool than other antioxidants for reducing several DS-altered phenotypes, owing to its multiple effects.

Conclusions
In summary (see Fig. 4), the results of the present study revealed an enhanced pro-oxidant status in the hippocampus of adult TS mice. This persistent oxidative state may account for the higher density of cells with a senescent phenotype that was demonstrated for the first time in the hippocampus of TS mice. Melatonin administration exerted antioxidant effects in the hippocampus of TS animals that were apparently not mediated by regulation of the activity of the antioxidant defense system but by the action of melatonin as a ROS scavenger, which led to attenuation of the levels of oxidative damage. In addition, melatonin exerted an antiaging effect, as demonstrated by a reduction in the density of senescent cells in the hippocampus of TS mice. The present results provide further support for the neuroprotective effects of melatonin administration in adult TS animals and suggest that melatonin could be a potentially beneficial supplement for the treatment of age-related cognitive decline in DS individuals.

Fig. 1. Mean ± S.E.M. of the levels of PC (A) and TBARS (B) in the cortex and hippocampus of the four groups of mice.
Fig. 2. Mean ± S.E.M. of the activity levels of different antioxidant enzymes in the hippocampus and cortex.
Fig. 4. Schematic diagram summarizing the differences in brain oxidative status between TS and CO mice.
Table 1. F values of MANOVA (genotype x treatment) for oxidative stress markers and antioxidant enzymatic activities in the cortex and hippocampus of the four groups of animals.
The Crucial Role of Timely Forensic Examinations in Investigating Crimes against the Sexual Integrity of Minors: A Case Study of Kazakhstan's Forensic Analysis System

Timely appointment of forensic examinations in the prosecution of crimes against the sexual integrity of minors is key not only to their rapid and complete investigation, but is also the most important means of proving the guilt of suspects. In most cases, apart from the victim's testimony and the results of identification, the guilt of the criminal is confirmed only with the help of forensic expert opinions, and a delay in the implementation of forensic analysis can lead to the irreparable loss of traces of a criminal offense. The role of forensic expertise in the fight against crimes against the sexual integrity of minors is increasing massively in modern realities, as it directs investigations and provides evidence to combat the changing face of crime. In recent years, the Republic of Kazakhstan has experienced a qualitative development of its forensic analysis system, which is reflected in numerous adopted legislative initiatives. Proper organisation of criminal investigation and high-quality interaction of intelligence and investigative services in collecting and recording evidence, as well as strict compliance with the requirements of the Criminal Procedural Code of the Republic of Kazakhstan during forensic analysis, allow identifying crimes against the sexual integrity of minors and bringing the perpetrators to criminal responsibility. Strict adherence to protocols and procedures that ensure the integrity of medical records, documentation and all collected clinical and forensic evidence can only increase the value of a medical assessment of child sexual abuse during a forensic analysis.

INTRODUCTION
Until the early 1970s, sexual abuse of children was considered a rare phenomenon that concerned only the impoverished segments of society. Increased public awareness led to an increase in the volume of reporting; for example, from 1970 to 1990, the number of reports of sexual abuse of children increased more than for other categories of neglect or abuse. 1 In the countries of the Commonwealth of Independent States, this problem is very relevant in modern realities. For example, in the Republic of Kazakhstan, the fact that, according to official data from the Ministry of Internal Affairs, 550 crimes against the sexual integrity of minors were committed in the period from January to August 2020 alone 2 is of great concern. Offenses that violate the sexual sanctity of minors are particularly alarming and require legal consequences. Different societies, with their unique histories and cultures, hold individuals accountable for these types of crimes. Sexual violence includes any behavior involving a child before they reach the age of consent that is meant to satisfy the desires of an adult or a significantly older child. The most widespread form of sexual violence is child abuse committed by family members, known as incest. However, it can be challenging to detect and control sexual violence within families. The conduct of forensic medical, biological, psychiatric, and other types of analyses plays a crucial role in resolving these crimes. The dynamics of sexual violence against children differ from the dynamics of sexual violence against adults. 3 In particular, children rarely talk about sexual violence immediately after the event.
Moreover, disclosure is usually a process, not a single episode, and is often initiated after a physical complaint or a behavior change. The examination of children requires special skills and methods for collecting anamnesis and conducting a forensic medical analysis. Obvious signs of trauma to the genitals are rarely observed in cases of sexual abuse of children, since physical force is rarely used. The accurate interpretation of injuries resulting from crimes against the sexual integrity of minors requires special training and, if possible, experts in this field should be consulted. In this regard, the World Health Organisation strongly recommends a second consultation. Although a physical examination may not be necessary, a second consultation provides an opportunity to analyze any psychological problems that may have occurred since then and to ensure that the child and its guardian receive adequate social support and counseling. 4 The expert's opinion is crucial for the investigation of criminal proceedings relating to crimes against the sexual integrity of minors: it is the basis for bringing a person to criminal responsibility and for the correct qualification of the crime or changing the charge, and it can expose the suspect or victim as giving false testimony. 5 As a result, it is critical to prevent errors in the activities of experts during forensic analysis. The above highlights the importance of considering the role of forensic analysis in proving cases of crimes against the sexual integrity of minors, using the example of the Republic of Kazakhstan. To achieve the intended purpose, the following tasks of the study were defined: 1) to analyze the current legislative initiatives of the Republic of Kazakhstan regarding the forensic analysis procedure, as well as its role in proving cases of crimes against the sexual integrity of minors; 2) to identify current issues in this area and possible ways to improve the mechanism of conducting forensic analysis in cases of crimes against the sexual integrity of minors in the Republic of Kazakhstan.

MATERIALS AND METHODS
Considering the complex and historical nature of the subject matter, the philosophical basis of the methodological paradigm of the study is dialectics as a general philosophical method of cognition. In the course of the study, the dialectical method allowed forming a holistic view of the development of, and the organizational and tactical support for, the investigation of crimes against the sexual integrity of minors. Comparative legal and formal legal methods were used in the analysis of the provisions of the current criminal procedural legislation of the Republic of Kazakhstan. The comparative legal method was also used to compare the scientific research and concepts available in Kazakh and world science and the provisions of other regulations. This method was used to identify positive legislative practice that would be appropriate and possible to test on the territory of the Republic of Kazakhstan, considering the specific features of the national legal system. The research methodology was based on a set of principles, among which the principle of the unity of theory and practice is the main one. The study involved an integrated approach, which became its methodological basis and allowed for systematic consideration of the relevant issues.
The chosen methodological approaches allowed studying the subject matter in the unity of its social content and legal form.

Forensic analysis in the criminal justice system of Kazakhstan
Forensic analysis is an essential element of the criminal justice system. Forensic experts examine and analyze evidence from crime scenes and other sources to obtain objective results that can help in the investigation and prosecution of criminals or remove suspicion from an innocent person. On 10 February 2017, Law of the Republic of Kazakhstan No. 44-VI "On forensic expert activity" was adopted, 6 which consolidated the basic requirements for the procedure of forensic expert analysis, which are based on the achievements of science, technology and special disciplines, and whose main purpose is to clarify the factual circumstances of what happened. This activity is performed by specialists of forensic analysis bodies, by persons conducting forensic expert analysis on the basis of a license, or by other persons in accordance with the procedure and conditions set out in the legislation (Article 273 of the Criminal Procedure Code of the Republic of Kazakhstan of 4 July 2014, No. 231-V 7 ); the grounds for the analysis are a procedural document and the objects that are subject to research (Article 272 CPC RK); the results of the analysis are drawn up in the form of an expert opinion, which acts as a procedural source of evidence (Article 283 CPC RK); and the period of judicial expert research is a period of time not exceeding 30 days, except in cases provided for by the legislation of the Republic of Kazakhstan (Article 34 of the Law of the Republic of Kazakhstan "On forensic expert activity"). Moreover, the corresponding terms are established in accordance with the Rules for determining the categories of complexity of forensic analyses. 8 The forensic expert system of the Republic of Kazakhstan is currently represented by the Republican State-owned Enterprise "Forensic Expertise Centre" of the Ministry of Justice of the Republic of Kazakhstan. 9 Forensic analysis is performed according to the "List of types of forensic analyses" established by the Ministry of Justice of the Republic of Kazakhstan, 10 the State Register of Forensic Research Methods of the Republic of Kazakhstan, 11 which are subject to mandatory validation, and the Rules for handling objects of forensic analysis. 12 Acceptance of research objects, their accounting and storage for the duration of the analysis, as well as their return, are performed in accordance with the requirements of the Rules for handling objects of forensic analysis. Having studied the international experience of the countries of the Organisation for Economic Cooperation and Development and the countries of the Eurasian Economic Community, as well as of international associations of forensic expert institutions, since 2016 the World Bank's Strengthening Forensic Expertise Project has been implemented in the Republic of Kazakhstan 13 with the support of such international organizations as Key Forensic Services 14 and King's College London, 15 and includes the development and piloting of baseline data on the effectiveness of forensic analysis, modernization of the legislative and institutional structure of forensic analysis, and improving the qualification levels of forensic experts.
It is worth noting that the Law "On forensic expert activity" for the first time provided for a new form of activity of licensed forensic experts by establishing the Chamber of Forensic Experts of the Republic of Kazakhstan. In 2020, the Ministries of Justice and Internal Affairs and the Prosecutor General's Office of the Republic of Kazakhstan signed a joint order on the transfer of certain types of forensic analyses appointed by the internal affairs bodies in criminal cases to a competitive environment through outsourcing. 16 When it comes to crimes committed against the sexual integrity of minors, forensic analysis is frequently required during criminal investigations. In the Republic of Kazakhstan, the Criminal Code 17 specifies the provisions that outline the legal responsibility for these crimes: rape of minors (Article 120), violent acts of a sexual nature (Article 121), sexual intercourse or other acts of a sexual nature with a person under the age of 16 (Article 122), and coercion to sexual intercourse, sodomy, lesbianism, or other acts of a sexual nature (Article 123). Sexual abuse against children encompasses a range of actions, such as touching or exposing the genitals, forcing children to view pornography or participate in its production, and engaging in vaginal, oral, or rectal penetration. Crimes against the sexual integrity of minors involve sexual activity with a child who is unable to provide informed consent or is unprepared for such activity due to their age or developmental stage, and which violates societal norms or laws. Such violence may occur between a child and an adult or another child in a position of trust or power, and may involve inducing or coercing a child into illegal sexual activities, exploiting a child in prostitution, or using children in pornographic productions. 18 (E. Quayle, Prevention, Disruption and Deterrence of Online Child Sexual Exploitation and Abuse, "ERA Forum" 2020, vol. 21, pp. 429-447.) If it is necessary to appoint a forensic analysis, a corresponding resolution is issued (Article 272 CPC RK), and the analysis is conducted with the written consent of the persons to be examined (Article 274 CPC RK). The appointment and performance of expert analysis are mandatory if it is necessary to establish the following in the case: the causes of death; the nature and severity of the harm to health; the age of the suspect, the accused, or the victim; the mental or physical condition of the suspect and the accused; the mental or physical condition of the victim or a witness; or other circumstances of the case that cannot be reliably established by other evidence (Article 271 CPC RK). Most often, during criminal proceedings for these crimes, a forensic medical analysis of the victim and the suspect, a forensic medical examination of the objects of investigative information, a forensic psychiatric analysis of the suspect, a forensic psychological analysis of the victim, and a general forensic analysis are appointed. The purpose of medical forensic analysis is to establish and evaluate facts that require, apart from knowledge of general forensic medicine, special scientific knowledge in forensic identification and the use of various special laboratory research methods (physical, photographic, technical, chemical, and mathematical).
Forensic examination of minors in cases of sexual abuse: importance, challenges, and common mistakes To obtain data on the commission of sexual acts related to vaginal, anal, or oral penetration into the child's body using genitals or any other object, as well as to establish the type of bodily injuries indicating a violent sexual act, a forensic medical examination of the victim is appointed after the opening of criminal proceedings. It is used to determine whether sexual intercourse took place and to document the consequences of rape (the severity of bodily injuries, pregnancy, infection with venereal disease, etc.). The objects of research of this type of analysis in cases of crimes against the sexual integrity of minors are the victim, the suspect, their clothes and shoes, as well as other material evidence important for establishing accurate and reliable information that will be further recorded in the expert's opinion. Before sending the case materials, clothing, shoes, and other things seized from the victim or suspect for examination, the investigator and the prosecutor must obtain samples of blood, saliva, and hair of the victim or suspect for comparative research during the preparation of examinations. It is important to note that not every examination will involve so many steps and analyses, as the child may be harmed in different ways. The examination of a minor in a situation of suspected sexual abuse is a delicate process that requires careful attention to detail and a thorough understanding of the potential challenges that may arise. Unfortunately, several common mistakes can occur during the examination process and compromise the safety and well-being of the minor involved. Some of the most common mistakes include:
1. Lack of proper training. Examiners who lack proper training and experience in conducting examinations of minors who have experienced sexual abuse may inadvertently harm the minor through inadequate or inappropriate examination techniques.
2. Insufficient attention to the minor's emotional needs. It is important to remember that minors who have experienced sexual abuse are likely to be traumatized and emotionally fragile. Examiners who fail to provide appropriate emotional support and guidance during the examination process may inadvertently cause further harm.
3. Failure to document findings accurately. Failure to document the findings of the examination accurately can lead to incorrect conclusions, improper treatment, or a failure to provide adequate support to the minor and their family.
4. Insufficient communication with other professionals. The examination of a minor in a situation of suspected sexual abuse often involves a team of professionals, including law enforcement, child protective services, and medical personnel. Insufficient communication between these professionals can lead to confusion and inconsistencies in the examination process.
To avoid these common mistakes, it is important for examiners to receive proper training and to maintain ongoing education in the field of forensic examination of minors. Examiners should also prioritize the emotional needs of the minor and their family, providing appropriate support and guidance throughout the examination process.
It is also essential to document findings accurately and to maintain clear communication with other professionals involved in the case. Finally, it is crucial to ensure that the minor's safety and well-being are protected throughout the examination process, which may include providing appropriate medical care and counseling, as well as working with law enforcement and child protective services to ensure that the minor is safe from further harm. Based on the above, it can be concluded that in recent years there has been a qualitative development of the forensic analysis system in the Republic of Kazakhstan, reflected in numerous adopted legislative initiatives. Forensic analyses play a decisive role in the investigation of attacks on the sexual integrity of minors, since they can be used to procure objective evidence of the involvement of particular persons in a crime; their essence is an expert study necessary to clarify certain circumstances of the case that are important for criminal proceedings. Thus, considering the nature and complexity of the investigation of criminal cases of this category, it can be concluded that only the proper organization of a criminal investigation, qualitative interaction of intelligence and investigative services in gathering and recording evidence, and rigorous compliance with the CPC RK in conducting forensic analysis make it possible to identify crimes against the sexual integrity of minors and bring offenders to criminal responsibility. DISCUSSION In cases of confirmed sexual violence against minors, physical evidence may not always be present, making the child's testimony crucial in determining the likelihood of abuse. A conversation, as emphasized by A.P. Giardiano and M.A. Finkel, should start with topics that are interesting and not "dangerous" for the child. This study nicely complements the current article with observations on conducting an examination. It is important for experts to establish a positive and comfortable rapport with the child during questioning, as children may be intimidated by doctors or authority figures. Wearing non-intimidating clothing and starting with topics that interest the child can help establish trust. Interviews should be conducted in private, unless the child's guardian is involved in the abuse. Careful documentation of the questions and answers is necessary. 19 However, the impact of abuse may not always be immediately apparent, especially in young children or when the abuser is known to the child. Emotional distress may be the only symptom present during the examination, with more severe effects manifesting later in life. Most psychological disorders related to sexual abuse occur in adulthood rather than in childhood. 20 The reasons for a child not showing physical evidence of sexual abuse can be varied, according to experts. It is crucial to consider that sexual violence against a child can encompass non-penetrative forms, such as touching or caressing erogenous zones, or other forms of violence that do not involve physical contact. Additionally, fears of losing access to the victim can lead an abuser to engage in non-traumatic sexual actions, as noted by M. Pillai. 21
Furthermore, detecting the biological material of the offender on the victim's body may be challenging due to the short period during which such traces remain, even in cases involving penetration or types of touching with ejaculation, according to the observations of M. Nittis and M. Stark. 22 Finally, research by A.K. Myhre et al. showed that genital or anal injuries heal quickly, leaving no traces or only non-specific ones. 23 These findings are consistent with the results reported in the current article. In addition, all the dangers of delaying the examination are described there in more detail. Therefore, it is possible that physical evidence may not be found during a forensic analysis of sexual violence because of the time that has passed between the abuse and the examination. Finally, it is necessary to consider the possibility of false reports of sexual violence. The minor's dependence on the aggressor makes it probable that a false report will be created. In situations where the perpetrator of sexual violence is a family member, particularly a parent, there can be a great deal of fear surrounding the disclosure of abuse. Victims may be threatened with rejection or other forms of punishment, leading to feelings of conflict and ambivalence. This can be especially true when the perpetrator is also the provider of financial support, affection, and attention, as noted by A.H. Green. 24 Even in cases where the abuser is a stranger, there can be overwhelming feelings of shame and guilt that make it difficult for victims to come forward. In addition, the fear of potential threats or retaliation can prevent victims from disclosing information about the abuse. W. Silva and U. Barroso emphasized the low sensitivity of forensic medical analyses for sexual violence detection. 25 The identification of sexual violence is highly reliant on the information provided in the medical records; nevertheless, a forensic medical examination is regarded as an effort to acquire physical proof of a criminal act. Such research, like the present paper, highlights the need to enhance the quality of forensic medical assessment. Nonetheless, the authors of the article propose a distinct remedy: the engagement of independent experts. From the standpoint of medical evidence of the fact of violence, the main role belongs to doctors of medical institutions, where children are usually first taken for examination. Y.N. Grigorieva pointed out that most of the entries do not meet the requirements for medical documentation: they fail to record the time of examination or the color and hue of bruises, or to describe the condition of abrasions, the edges, ends, and bottom of wounds, or the exact location, quantity, shape, and size of the injuries, including at follow-up. 26 Doctors confuse anatomical names and give contradictory information within a single examination. Frequently, examinations performed within a few hours of each other by doctors of the same specialty are contradictory. Entries in medical records are sometimes illegible. All of the above prevents any conclusions about the age of the injuries, their nature, etc. Taking smears and other traces of biological origin usually does not cause difficulties, but there have been cases in which smears taken within 5 days by forensic experts yielded results different from those taken on the day of the incident, which indicates an incorrect technique for sampling biological material. 27
This state of affairs can be explained by the lack of understanding and skills among doctors of clinical specialties in describing the genitals and their injuries in children and adolescents for the purpose of further forensic medical analysis. The authors of the current article agree that the low qualification of doctors can cause errors during the examination, and that it is necessary to take care of improving the qualifications of personnel. In recent years, as T.Z. Zhakupova et al. rightly emphasized, major efforts were made to centralize and optimize the functionality of forensic activities in the Republic of Kazakhstan, which allowed concentrating financial resources, creating a unified infrastructure, optimizing the number of administrative staff in the course of the transformation of forensic activities, and creating the House of Forensic Experts and regulating its activity, thereby laying the basis for the transition to a competitive environment. 28 This proposal coincides with the authors' opinion. Private examinations solve a number of issues, so they are definitely a positive step. Currently, some problematic issues concerning the training and advanced training of forensic experts remain unresolved in the Republic of Kazakhstan. In this regard, G.T. Alayeva indicated the need to address the issue of equal methodological support of forensic activity for both experts employed by the state body and licensees, and highlighted the presence of certain discrimination in access to training, participation in international programmes, and new scientific and methodological information relating to licensees of forensic activities. 29 According to M.B. Kurbanmaev and T.A. Shagdarova, an increase in the level of professional knowledge will be facilitated by the involvement of private experts in research work, the development of methods of forensic expert analysis, and their approbation and implementation in the practice of the forensic analyses they conduct. 30 These initiatives, according to these scholars, are one of the development factors of the institution of private forensic experts in the Republic of Kazakhstan. As a result of the discussion, it can be concluded that the diagnosis of sexual abuse of children can often be made based on the child's medical history. Physical examination alone is rarely diagnostic without anamnesis and/or particular laboratory data. The doctor's duty is to interpret the injury, gather samples, treat the injury and, above all, help and support the vulnerable patient. Strict adherence to protocols and procedures that ensure the integrity of medical records, documentation, and all collected clinical and forensic evidence can only increase the value of a medical assessment of abuse. Attention to detail will benefit a child victim of sexual abuse by improving the identification of trauma and providing better prevention of pregnancy and infection, and will also contribute to a more effective investigation and prosecution of the offender. At present, in the Republic of Kazakhstan, there is a need to consider the issue of equal methodological support for forensic expert activities, for both experts employed by the state body and licensees, and the involvement of private experts in research work. The decision to allow private entities to conduct forensic examinations in Kazakhstan was made in an effort to improve the quality and speed of forensic examinations, as well as to reduce the backlog of cases awaiting examination.
Prior to this decision, all forensic examinations were conducted by state-owned forensic institutions, which often experienced delays due to a lack of resources and staffing. Since private entities were allowed to conduct forensic examinations, there has been an increase in the number of examinations performed, which has led to a reduction in the backlog of cases awaiting examination. Additionally, private entities have been able to provide more specialized services in certain areas of forensic examination, such as DNA analysis and digital forensics, which has led to improvements in the quality of examinations in those areas. 31 In recent years, the Republic of Kazakhstan has experienced a qualitative development of the forensic analysis system, which is reflected in numerous adopted legislative initiatives and which plays a crucial role in the investigation of attacks on the sexual integrity of minors, since forensic analyses help obtain objective evidence of the involvement of particular individuals in the crime. Proper organization of the criminal investigation, high-quality interaction of intelligence and investigative services in collecting and recording evidence, and strict compliance with the requirements of the CPC RK during forensic analysis make it possible to identify crimes against the sexual integrity of minors and bring the perpetrators to criminal responsibility. CONCLUSIONS It can be concluded that the forensic expert becomes a witness to various aspects of the case, including sociological and psychological aspects in addition to technical details and fundamental scientific grounds. The expert opinion should be reasoned, should contain answers to the questions put to the expert within their competence, and should be specific. The globalization caused by information technologies requires law enforcement officers, judicial authorities, and lawyers to become more familiar with the basics of digital forensic science. An integrated approach to the study of physical evidence containing video information will allow experts of different specialties to solve their specific questions within the framework of one analysis, conduct research in parallel, perform the necessary interaction, give answers to complex questions and, as a result, substantially reduce the time of the preliminary investigation and the judicial investigation. The data presented in this study suggest that errors during forensic analysis can be caused by the unsatisfactory work of the investigator (or court), i.e. the subject appointing the analysis, as well as by forensic experts who are not sufficiently aware of the subtleties of the methodology for investigating crimes against the sexual integrity of minors, whose level of professionalism is low, or who are insufficiently aware of the possibilities of the assigned analysis and the requirements for the materials that should be submitted to the expert. Such shortcomings include incompleteness of the submitted materials, their unreliability, or poor quality. The Impact Cyber Trust project can assist forensic experts of the Republic of Kazakhstan in solving crimes against the sexual integrity of minors by addressing the lack of data exchange, which is key to improving the quality and pace of research, especially in such an area as forensic analysis of computer technology.
Nowadays, the Republic of Kazakhstan faces a need to consider the issue of equal methodological support for forensic expert activities, for both experts employed by the state body and licensees, as well as the involvement of private experts in research work and the development of methods of forensic expert analysis, their approbation, and their implementation in the practice of the forensic analyses they carry out. These initiatives, according to scholars, are one of the development factors of the institution of private forensic experts in the Republic of Kazakhstan.
Foundation Pattern, Productivity and Colony Success of the Paper Wasp, Polistes versicolor : Polistes versicolor (Olivier) (Hymenoptera: Vespidae) colonies are easily found in anthropic environments; however, there is little information available on the biological, ecological and behavioral interactions of this species under these environmental conditions. The objective of this work was to characterize the foundation pattern, the productivity, and the success of colonies of P. versicolor in anthropic environments. From August 2003 to December 2004, several colonies were studied in the municipal district of Juiz de Fora, Southeastern Brazil. It was possible to determine that before the beginning of nest construction the foundress performs recognition flights in the selected area and later begins the construction of the peduncle and the first cell. As new cells are built, the hexagonal outlines appear and the peduncle is reinforced. Foundation of nests on gypsum plaster was significantly more frequent (p < 0.0001; χ2 test) than on the other types of substrate, revealing the synanthropy of the species. On average, a P. versicolor nest presents 244.2 ± 89.5 (100–493) cells and a mean production of 171.67 ± 109.94 (37–660) adults. Cells that produced six individuals were verified. Usually, new colonies were founded by an association of females, which was responsible for a success rate of 51.5%. Although these results enlarge knowledge on the foundation pattern of P. versicolor in anthropic environments, other aspects of the foundation process require further investigation. Introduction The neotropical social wasp, Polistes versicolor (Olivier) (Hymenoptera: Vespidae), builds nests consisting of a single comb fixed to the substratum by a peduncle (Richards 1978). The simple arrangement of suspended cells seems to protect the colony from ant attacks, which constitute the largest predatory pressure on social wasps (Jeanne 1975, 1980; Post and Jeanne 1985). P. versicolor colonies can be found on different types of substrata such as leaves, branches, roots, and stones, and also in abandoned nests of other social wasp species. The nests are built with vegetable material, chewed and mixed with secretion of the salivary glands, and the peduncle is resinous (West-Eberhard 1969; Spradbery 1973; Reeve 1991). In anthropic areas, nests using several structures as nesting substrata have been observed (Fowler 1983; Giannotti 1992; Lima et al. 2000; Prezoto 2001; Prezoto et al. 2007). However, the behavior and the biology of this species in the anthropic environment are little understood. During nest foundation, solitary nesting females typically construct and oviposit in combs with 20 to 30 cells (West-Eberhard 1969). A Polistes foundress has at least two reproductive options besides solitary nest founding. She can join conspecific females in another nest or attempt to take over a nest initiated by a conspecific female (Reeve 1991). This behavior creates a series of advantages for the new nests: productivity can increase and, consequently, colony success can increase; offspring survival can improve in the case of the dominant female's death; and defense against natural enemies can become more effective (West-Eberhard 1969; Itô 1985; Butignol 1992; Giannotti and Mansur 1993; Tannure and Nascimento 1999; Sinzato and Prezoto 2000; Tibbetts and Reeve 2003).
During the colony's foundation (i.e. prior to the eclosion of new adults), aggressive interactions happen among the nestmates, many times involving intense fights (West-Eberhard 1969; Gamboa and Dropkin 1979; Strassmann 1989). The objective of this work was to characterize the foundation pattern, the productivity and the colony success of P. versicolor in anthropic environments. Material and Methods The study was conducted from August 2003 to December 2004 in the Juiz de Fora municipal district (21º 46' S; 43º 21' W, mean altitude of 678 m), Minas Gerais state, Southeastern Brazil, characterized by a tropical highland climate according to the Köppen classification. For data collection, the work was divided into three stages: foundation pattern characterization and nesting substrata, nest productivity analysis, and attendance and success of the colonies. Foundation pattern characterization and nesting substrata For the collection of information on the foundation pattern of P. versicolor colonies, weekly visits to the colonies took place at different places in the city of Juiz de Fora, preferably at the end of the afternoon (17 h), when the individuals were finishing their foraging activities, allowing a more precise count of the number of foundresses present in the colony. During the visits, the following parameters were noted: the number of females involved during the colony foundation phase (n = 100 nests) and the substratum type used for the colony foundation (n = 192). In addition, behavioral information (ad libitum sensu Altmann 1974) exhibited by the individuals relating to the new nest construction process was obtained. Nest productivity analysis For the productivity analysis, 37 P. versicolor nests collected at different places around the study area were sampled. The nests were dissected, and the information was schematized as maps on standardized sheets. For each nest, the following parameters were recorded: total number of cells, total number of productive cells, total number of adults produced (by counting the meconium layers deposited in the cells), number of adults produced per cell, and the ratio of adults produced per cell. Attendance and success of the colonies The P. versicolor colonies were considered successful when they reached the post-emergence phase, according to the classification proposed by Jeanne (1972), with the production of at least one adult. A hundred colonies were followed from the foundation phase to the first adult's emergence and/or abandonment of the nest. A colony was considered unsuccessful when, for three consecutive visits, the complete absence of adults and immatures was observed, together with the lack of egg laying and construction of new cells. Statistical analyses In order to verify the existence of differences among the categories of nesting substrata used by P. versicolor, the χ2 test was applied. The Spearman correlation test was used to correlate the total number of cells and the total number of adults produced in the sampled nests. The tests were completed using Bioestat 4.0.
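Both tests are standard and can be reproduced with common tools; the sketch below is a minimal illustration in Python using SciPy rather than the authors' Bioestat 4.0 workflow, and all input values are hypothetical placeholders, not the data reported in this study.

# Minimal sketch of the two statistical tests used in this study,
# here with SciPy instead of Bioestat 4.0. All input values are
# hypothetical placeholders, not the data reported in the paper.
from scipy.stats import chisquare, spearmanr

# chi-square test: observed nest counts per substrate category
# (hypothetical counts; the null hypothesis is a uniform distribution)
observed = [120, 30, 25, 17]  # e.g., gypsum plaster, vegetation, masonry, other
chi2, p_chi2 = chisquare(observed)
print(f"chi2 = {chi2:.2f}, p = {p_chi2:.4g}")

# Spearman correlation: total cells vs. total adults per nest
# (hypothetical per-nest values)
cells = [100, 180, 244, 310, 493]
adults = [37, 120, 171, 260, 660]
rho, p_rho = spearmanr(cells, adults)
print(f"Spearman rho = {rho:.3f}, p = {p_rho:.4g}")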
Results and Discussion Foundation pattern characterization of the colonies The P. versicolor nests were built with chewed vegetable material, which was added to the peduncle and to the cells, resulting in a grayish coloration. Before beginning the construction, the foundress made recognition flights and inspected the structures to be used for the nest (Figure 1A). This construction pattern is similar to those described for other Polistes species (West-Eberhard 1969; Reeve 1991; Karsai and Theraulaz 1995). Once the place for the new colony foundation was established, the nest construction began, starting with the peduncle. This was followed by construction of the first cells with a circular format (Figures 1B and 1C), and as the number of cells increased, these assumed hexagonal outlines (Figures 1D and 1E). With the increase in the number of cells, the peduncle was reinforced through the addition of chewed vegetable fiber. Colony contact with the substratum was reduced by the fine peduncle, which represents a defensive adaptation against predatory pressure from ants (Jeanne 1975). Six behavioral actions exhibited by P. versicolor during nest construction were identified, corroborating the description of the genus by Evans and West-Eberhard (1970):
1) Inspection of sites for nest foundation, characterized by flights close to the selected area. The foundress touches the substratum with the antennae.
2) Construction of the pedicle: vegetable fiber is chewed with saliva and then attached to the substrate for construction of the peduncle in thread form.
3) Initial cell construction: after construction of the peduncle, the initial cell is constructed with a circular format. During this activity, the female constantly touches the cell sides with the antennae.
4) Construction of peripheral cells: new cells are added around the initial cell, assuming hexagonal outlines as they are attached to neighboring cells.
5) Cell prolongation: as larvae develop, chewed vegetable fiber is added to the extremities of the cells, increasing their height.
6) Peduncle invigoration: as the cell number increases, the peduncle is reinforced with construction material, which makes it thicker so it can support the enlarging nest.
The results of this study demonstrated the synanthropy of P. versicolor in relation to constructions with little human interference, a behavior already described for this species (Butignol 1992; Sinzato and Prezoto 2000), as well as for P. lanio (Giannotti 1992) and P. simillimus (Prezoto 2001). The vegetation present in the anthropic environment was used by a small number of colonies, which might be attributed to the fragility of the plants: they consisted of species used for gardening that did not offer appropriate support for the nest and exposed it to the stress of the weather. Lima et al. (2000) studied the substrata used by social wasps in an area close to this study area, and they verified that the Polistes species found nested preferably in human constructions; finding nests in the vegetation was rare. Butignol (1992) observed that the plants used as substratum by P. versicolor, in Florianópolis, southern Brazil, had perennial leaves, such as Acacia podzarilifolia, Fucreasea gigantea and Acalipa wilkesianae. The use of plants as nesting substratum in anthropic environments was also observed by Giannotti (1995), who recorded nine Polistes subsericeus colonies on a single Pandanus veitichi (Pandanaceae) plant. The author suggested that this plant offers a protected and cryptic shelter for this species' colonies. Although the anthropic environment offers nesting resources, some species demonstrate a preference for nesting in the natural environment, as observed by Clapperton (2000) for Polistes humilis and Polistes chinensis antennalis. P. versicolor foundations were also registered at places used previously by other conspecific colonies (n = 25). This behavior was also registered by Giannotti (1992) for 12 P. lanio colonies and by Prezoto (2001) for 13 P. simillimus colonies.
Prezoto (2003) suggested that this behavior reflects the ability to perceive and analyze information regarding appropriate nesting sites, perhaps including odor left by an old colony that could be an incentive for nest foundation. Colony productivity It was verified that the mean number of cells produced by P. versicolor nests was 244.2 ± 89.5 (100-493); the percentage of unproductive cells was 44.5% (13.6-72.2%), and the average number of adults produced per nest was 171.67 ± 109.94 (37-660) (Table 2). The ratio of individuals produced per cell was 0.66, with a maximum of six uses registered for a single cell. Gobbi and Zucchi (1985) studied the productivity of P. versicolor in an anthropic area, the municipality of Ribeirão Preto, São Paulo state, Southeastern Brazil, in 1975 and 1976, and they observed a variation in the number of cells produced (191.80 ± 56.51 and 221.67 ± 132.05, respectively) as well as in the number of adults produced in those years (98.70 ± 40.09 in 1975 and 174.61 ± 153.20 in 1976). Based on these results, the authors suggested that P. versicolor may present short-cycle colonies (a few months), as registered in 1975, and long-cycle colonies (around a period of a year), as registered for the colonies studied in 1976, which favored a larger productivity of the latter. The results found in the present study are similar to those verified by Gobbi and Zucchi (1985) for the colonies studied in 1976. This suggests that the colonies studied presented a long cycle, which is a common occurrence in anthropic areas. In a comparative work, Gobbi et al. (1993) studied P. simillimus and P. versicolor productivity, and they verified an average of 391.3 ± 302.34 and 80.0 ± 114.88 cells per nest, respectively. About 60% of the P. simillimus colonies used the cells to produce two adult generations, while for P. versicolor, only 25% of the colonies used cells more than once. However, for P. versicolor the authors found cells that produced three generations. Giannotti (1997) verified that P. cinerascens nests in Rio Claro, São Paulo, included 102.9 cells and 94.2 individuals on average per nest, whose adult/cell ratio was 0.8, and some cells produced up to four individuals. Santos and Gobbi (1998), in a savanna area in Bahia, verified that P. canadensis nests possess 184.17 (29-477) cells on average, which produce 163 (10-576) individuals on average, with a single cell able to produce up to four individuals. According to Prezoto (2001), P. simillimus produces nests with about 337.28 (8-1325) cells, of which 57.21% (4.38-95.46%) are unproductive, reflecting a small number of reutilizations (36% for two uses and 24% for three). He also affirms that P. simillimus nests can produce 256.36 (1-1355) adults on average, with a ratio of 0.44 (0.04-1.02) adults produced per cell, a smaller value than the one found for P. versicolor in our study. There was a positive correlation (r = 0.8498; p < 0.001, Spearman correlation test) between the total number of cells and the total number of adults produced by the P. versicolor nests. As the colonies grew, there was an increase in the number of adults produced, as well as more cell reutilization, while for other species, such as P. simillimus (Prezoto 2001), the number of reutilizations is smaller. Ramos and Diniz (1993), also studying P. versicolor in an urban area of Brasília, observed a positive correlation (r = 0.902, p < 0.001) between the number of cells and the number of adults produced, and the cells were used up to four times.
The unproductive cells of P. versicolor were concentrated on the periphery of the comb, as also noted for P. canadensis (Santos and Gobbi 1998) and P. simillimus (Prezoto 2001), and the cells with the largest number of utilizations were located close to the peduncle and in the central nest area, which are the oldest parts of the comb. This disposition can work as a strategy against predatory pressure, parasitism, and reproductive conflicts, all mentioned by Gobbi et al. (1993) as factors that impose limits on the number of cells built in Polistes nests. Colony success Most of the new P. versicolor nests were founded by an association of foundresses (n = 68) (Figure 1F), which was responsible for the largest number of successful colonies (n = 35; 51.5%). Foundation by solitary females showed a smaller incidence (n = 32), and its success was even smaller (n = 3; 9.4%). Studies accomplished at other places in Brazil describe foundress association as a foundation type commonly observed for P. versicolor (Itô 1985; Butignol 1992; Giannotti and Mansur 1993; Ramos and Diniz 1993; Tannure and Nascimento 1999; Sinzato and Prezoto 2000). Foundress association is also a common strategy in other neotropical species such as Polistes ferreri (Tannure and Nascimento 1999), P. canadensis (Itô 1985) and P. lanio (Giannotti 1992). However, Prezoto (2001) observed that foundation by a single female constituted 56.3% of P. simillimus foundations, with 37.09% of them successful. In spite of that, the author observed that, even though it accounted for the smaller part of the total foundations, foundress association was responsible for the largest number of successful colonies in P. simillimus. Itô (1985) observed that in colonies with a larger number of foundresses the duration of the pre-emergence phase was reduced, and group size was positively related to the number of cells built, so these colonies were more productive. However, the author found that the productivity of individual foundresses was lower in these colonies. Therefore, the association of females during the foundation is interpreted as an optimization strategy, in which ecological pressures such as parasitism and usurpation, social pressures such as the effects of ergonomic synergism, and the increase in survival levels are all associated (West-Eberhard 1969; Gamboa 1978; Gibo 1978; Strassmann 1981, 1989; Itô 1985; Reeve 1991; Wenzel 1996). The high failure rate (90.6%) of the colonies founded by a single P. versicolor female in the present study occurred mainly because the foundress abandoned the nest during the initial colony establishment phase, before the larvae appeared. The same phenomenon was described by Tannure and Nascimento (1999) for this species. It is believed that this behavior is associated with the fact that the wasps migrate in search of association with other foundresses. Other factors, such as foundress death or disappearance and dominance disputes, also promote colony failure in Polistes species (Reeve 1991; Giannotti and Mansur 1993; Tannure and Nascimento 1999; Prezoto 2001). This study's results demonstrate that P. versicolor nesting behavior is very similar to that described for other Polistes species. In an anthropic environment, P. versicolor exhibited a preference for artificial substrata for nesting, which probably provides greater longevity for the nests due to protection from the stress of the weather.
In this type of environment, groups of females usually found their nests under different climatic conditions, which results in colonies of various sizes and different productivity among them. Although these results enlarge knowledge on the P. versicolor foundation pattern in anthropic environments, many subjects need further study, mainly to increase knowledge about the nesting behavior of other neotropical Polistes species.
Structure and Properties of Water in a New Model of the 10-Å Phase: Classical and Ab Initio Atomistic Computational Modeling : The 10-Å phase is an important member of the family of dense hydrous magnesium silicates (DHMSs) that play a major role in the water budget in the Earth's upper mantle. Its nominal composition is usually written as Mg3Si4O10(OH)2·xH2O, and its structure is often described as layers of talc with some amount of water present in the interlayer space. However, its actual structure and composition and the detailed mechanisms of retaining H2O molecules within the mineral are not yet sufficiently known. In particular, more recent spectroscopic and diffraction data indicate the presence of Si vacancies in the tetrahedral silicate sheets of the 10-Å phase, leading to the formation of Q2-type Si sites terminated by silanol groups. These silanols are, in turn, hydrogen bonded to interlayer H2O molecules. Here, we use classical and ab initio molecular dynamics (MD) simulations to compare the structures and properties of ideal and defect models of the 10-Å phase under ambient conditions. For classical MD simulations, the most recent modification of the ClayFF force field is used, which can accurately account for the bending of Mg–O–H and Si–O–H angles in the mineral layers, including the structural defects. The crystal lattice parameters, elastic constants, structure, and dynamics of the interlayer hydrogen bonding network for the model 10-Å phase are calculated and compared with available experimental data. The results demonstrate that the inclusion of Si vacancies leads to better agreement with crystallographic data, elastic constants, and bulk and shear moduli compared to a simpler model based on the idealized talc structure. The results also clearly illustrate the importance of the explicit inclusion of Mg–O–H and Si–O–H angular bending terms for accurate modeling of the 10-Å phase. In particular, the properly constrained orientation of the silanol groups promotes the formation of strong hydrogen bonds with the interlayer H2O molecules. Introduction The so-called 10-Å phase (TAP) belongs to the family of dense hydrous magnesium silicates (DHMS). It plays a key role in transporting and storing water in the Earth's mantle at subduction zones [1][2][3][4][5]. Therefore, TAP structural, thermodynamic, and mechanical properties are important, especially at high pressures and temperatures [6][7][8][9]. TAP is assumed to have the composition Mg3Si4O10(OH)2·xH2O and consists of talc-like T-O-T layers, where one sheet (O) of octahedrally coordinated Mg atoms is sandwiched from both sides by two sheets (T) of tetrahedrally coordinated Si atoms. In contrast to hydrophobic talc, TAP is assumed to contain some amount of H2O molecules in the interlayer between its T-O-T layers. Water contents of x = 2/3 [7], x = 1.0 [10], and x = 2.0 [6] are most commonly assumed (Figure 1). The H2O molecules occupy the six-membered siloxane rings of the tetrahedral sheets. According to experimental results, TAP has different types of symmetry [9,11]. Comodi et al. [11] showed that the TAP structure is very similar to that of a homo-octahedral, 1M trioctahedral mica and has a monoclinic unit cell with the space group C2/m. However, Pawley et al. [9] used the trigonal 3T polytype structure of TAP for studying its volumetric behavior at high pressures and temperatures.
[Figure 1. Green octahedra: MgO; blue tetrahedra: SiO; red spheres: O; white spheres: H. VESTA software [12] was used to visualize the atomistic model.] 29Si NMR spectroscopic measurements indicate the presence of Si vacancies in the tetrahedral sheet of TAP [13,14]. Each such vacancy results in the formation of one additional Mg-O-H and three additional Si-O-H groups in the tetrahedral sheet (Figure 2). Thus, each Q2-type Si site (see Figure S1 of the Supplementary Materials) contains silanol groups that donate hydrogen bonds (H-bonds) to the interlayer H2O molecules. These results suggest that the formation of TAP can involve a defect mechanism that allows favorable H-bonding interaction between the interlayer H2O molecules and the normally hydrophobic siloxane oxygens of the talc-like layer [14,15]. The proportion of the Q2-type Si sites inferred from the NMR data is estimated to be around 10% [14], corresponding to one silanol for each pair of six-membered siloxane rings of the talc tetrahedral sheet. (Here we are using the common notation for silicon-oxygen tetrahedra, Qn, where the superscript shows the number of other silicon-oxygen tetrahedra attached to the silicon tetrahedron under study [16,17].) Given the uncertainties in the structure and composition of TAP, computational atomistic modeling can be used as a powerful tool to clarify this picture. Classical MD simulations using semi-empirical force fields to describe interatomic interactions within model systems have made a significant contribution over the last 10-15 years to the detailed understanding of the structure and properties of clays and clay-related minerals, other complex nanostructured and nanoporous materials, and their interaction with water and aqueous solutions [18][19][20][21][22][23]. ClayFF [18,22] has emerged as one of the most popular and widely used force fields and has already been thoroughly tested in atomistic simulations of such systems [22][23][24]. Its recent modification, ClayFF-MOH, explicitly takes into account Metal-O-H (M-O-H) angular bending motions in the mineral structure [25,26], leading to a better description of the hydroxyl behavior on the edges of mineral particles and at irregular surfaces [27][28][29].
At the same time, ab initio methods of atomistic modeling are now also widely used to study layered minerals and other similar materials [30][31][32][33][34][35][36][37][38][39][40]. Based on a more rigorous quantum chemical foundation, the density functional theory (DFT), than force-field-based methods, they require, however, orders of magnitude more computational power for their effective use (e.g., [38]). [Figure 2. The structural defect in the tetrahedral sheet of TAP. Green: MgO octahedra; blue: SiO tetrahedra; red spheres: O; white spheres: H. (See Section 3.2 for an explanation of the labels for each specific type of atom in the structure.) VESTA software [12] was used to visualize the atomistic model.] Both classical and ab initio methods have already been successfully applied to study the structure and properties of talc, its interaction with water, and an idealized model of TAP that did not include any structural defects in the talc-like layers [30,39,41,42]. Here, we used classical molecular dynamics (CMD) and ab initio molecular dynamics (AIMD) simulations to study the structure and properties of a more realistic model of TAP that includes structural defects. All CMD simulations were performed for both the original and modified versions of the ClayFF force field. Comparing these CMD results with the results of the AIMD simulations and available experimental data allowed us to make a judgment about the reliability and ranges of applicability of each of the computational approaches. All simulations reported here were performed at ambient conditions with the objective of testing the new models and the new version of the force field. CMD and AIMD simulations at high temperatures and pressures using these models, and an analysis of the potential geochemical and geophysical implications of those results, will be reported in a separate publication. Construction of TAP Models The 1M polytype of TAP with phlogopite-type stacking and water content x = 1 was used in this paper, as it is estimated to be most stable under ambient conditions [1,41]. The ideal model of TAP was based on experimental single-crystal X-ray diffraction data, which provided unit cell parameters of a = 5.323(1) Å, b = 9.203(1) Å, c = 10.216(1) Å, and β = 99.98° with space group C2/m [11]. A supercell containing 6144 atoms (8 × 4 × 4 unit cells along the a, b, and c vectors, respectively) was used in the CMD simulations. For the AIMD calculations, smaller supercells with only 768 atoms (4 × 2 × 2 unit cells) were used. The smaller supercell with 768 atoms was used as the basis for constructing the TAP model with structural defects. First, a random Si atom in a tetrahedral sheet was deleted. Then, four H atoms were added at the site of the Si vacancy to protonate the oxygen atoms that used to coordinate with that Si, forming four hydroxyl groups in its place (Figure 2). To assure a random and uniform distribution of defects, further Si vacancy locations were selected so that the distances between defects in the structure were about 12 Å.
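The single-vacancy step just described can be scripted; the sketch below is a minimal illustration using ASE, assuming a hypothetical structure file tap_unit_cell.cif for the idealized cell, a 2.0 Å cutoff for the coordinating oxygens, and an O-H distance of 0.95 Å along the O-to-vacancy direction. It is not the authors' actual workflow.

# Sketch of the Si-vacancy construction: delete one Si from a tetrahedral
# sheet and protonate its four former O neighbours to hydroxyls. The input
# file name is hypothetical; H placement along the O->vacancy direction at
# 0.95 A is a simplifying assumption (periodic wrapping is ignored).
import numpy as np
from ase import Atom
from ase.io import read, write

cell = read("tap_unit_cell.cif")        # hypothetical idealized 1M TAP cell
supercell = cell.repeat((4, 2, 2))      # 4 x 2 x 2 supercell, as for AIMD

si = [a.index for a in supercell if a.symbol == "Si"][0]   # vacancy site
si_pos = supercell[si].position.copy()

o_all = [a.index for a in supercell if a.symbol == "O"]
d = supercell.get_distances(si, o_all, mic=True)
o_neigh = [supercell[o].position.copy()
           for o, dist in zip(o_all, d) if dist < 2.0]     # 4 coordinating O

del supercell[si]                       # create the vacancy
for o_pos in o_neigh:                   # protonate each dangling oxygen
    u = (si_pos - o_pos) / np.linalg.norm(si_pos - o_pos)
    supercell.append(Atom("H", o_pos + 0.95 * u))

write("tap_defect_supercell.cif", supercell)

In a full workflow this step would be repeated for further randomly chosen Si sites, keeping the defects roughly 12 Å apart, as described above.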
The procedure was repeated for all tetrahedral sheets in the supercell, resulting in the creation of one Si defect for every 32 Si atoms in the crystal structure. Such an arrangement of defects closely reproduced the experimentally determined concentration of defects with ~10% of Q2-type Si sites [13,14]. The resulting supercell with structural defects contained 780 atoms and was used in all AIMD calculations. This small supercell was duplicated in all three dimensions to produce a larger supercell with 6240 atoms for the CMD calculations, which was equivalent to 8 × 4 × 4 unit cells. Classical MD Simulations The original version of the ClayFF force field, ClayFF-orig [18], and its more recent modification, ClayFF-MOH [22], were used to describe the interatomic interactions in two series of simulations in order to make a detailed performance comparison between the two versions, and to compare them both with the results of the AIMD simulations described below. The values of the additional parameters for the M-O-H angle bending terms of the ClayFF-MOH version were recently determined by fitting the structural and spectroscopic results of CMD simulations to the results of AIMD calculations for brucite (Mg(OH)2), gibbsite (Al(OH)3), and kaolinite (Al2Si2O5(OH)4) using a simple harmonic functional form [22,25,26]: E(θ) = k(θ − θ0)^2, where θ0 represents the equilibrium bond angle for a three-body interaction, and k is the stiffness coefficient. The following parameters for the Mg-O-H and Si-O-H angles were used in our calculations [22,25,26]: θ0,MgOH = 110°, kMgOH = 6 kcal·mol−1·rad−2, θ0,SiOH = 100°, and kSiOH = 15 kcal·mol−1·rad−2. In addition, the original harmonic hydroxyl bond stretching terms were replaced here with a more accurate Morse potential [22,42]. Water molecules were described by the flexible SPC/E model [43]. The LAMMPS (7 January 2022 version, Sandia National Laboratories, Albuquerque, NM, USA) software package [44,45] was used to perform all CMD simulations. The cutoff radius for calculating the short-range Lennard-Jones interatomic interactions was 12.5 Å, and the particle-particle particle-mesh method was used for the long-range electrostatic interactions [46]. The standard Lorentz-Berthelot combining rules were used to calculate the Lennard-Jones parameters for different atom types [22]. The classical Newtonian equations of atomic motion were numerically integrated with a timestep of 1.0 fs using the velocity Verlet algorithm [46]. The developed atomistic models of TAP were initially equilibrated for 1 ns at 300 K and 1 bar using the Nosé-Hoover thermobarostat [46]. During this equilibration in the NPT statistical ensemble (constant number of particles, pressure, and temperature), no symmetry constraints were imposed on the crystal structures, and all cell parameters were allowed to vary. After the NPT equilibration, an equilibrium simulation run was performed for another 1 ns in the NVT statistical ensemble (constant number of particles, volume, and temperature) using the Nosé-Hoover thermostat [46]. The collected equilibrium dynamic trajectories of the atoms were then used for further statistical analysis.
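For reference, the two bonded terms discussed above can be written out explicitly; the sketch below evaluates the harmonic bending energy with the Mg-O-H parameters quoted in the text, and shows the Morse stretching form with placeholder parameters (the values of D, alpha, and r0 below are illustrative, not the published ClayFF values).

# Bonded terms discussed above: harmonic angle bending E = k*(theta - theta0)^2
# and a Morse bond stretch E = D*(1 - exp(-alpha*(r - r0)))^2.
# The angle parameters are those quoted in the text for Mg-O-H; the Morse
# parameters below are placeholders, not the published ClayFF values.
import math

def harmonic_angle(theta_deg, theta0_deg, k):
    """Angle-bending energy; k in kcal/mol/rad^2, angles in degrees."""
    dtheta = math.radians(theta_deg - theta0_deg)
    return k * dtheta**2

def morse_bond(r, D, alpha, r0):
    """Morse stretching energy; D in kcal/mol, alpha in 1/A, r and r0 in A."""
    return D * (1.0 - math.exp(-alpha * (r - r0)))**2

# Mg-O-H term from the text: theta0 = 110 deg, k = 6 kcal/mol/rad^2
print(harmonic_angle(120.0, 110.0, 6.0))   # ~0.18 kcal/mol for a 10 deg bend

# O-H stretch, placeholder parameters for illustration only
print(morse_bond(1.05, D=100.0, alpha=2.0, r0=0.96))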
Ab Initio MD Simulations The DFT and AIMD calculations were performed using the Gaussian and plane wave basis approach, as implemented in the CP2K simulation software package (version 2022.1, T.D. Kühne et al.) [47]. Goedecker-Teter-Hutter (GTH) pseudopotentials [48][49][50] were used for the Mg, Si, O, and H atoms, including 10, 4, 6, and 1 valence electrons, respectively. Double-zeta valence polarized (DZVP MOLOPT) basis sets [51] were used for all calculations, along with an auxiliary plane wave basis with a 600 Ry energy cutoff. The generalized gradient approximation (GGA) parametrized by Perdew et al. [52] was used for the exchange-correlation terms, with the Grimme D3 dispersion correction [53] without the C9 term. Both the idealized (talc-based) and defect-containing small supercells of TAP were optimized using the Broyden-Fletcher-Goldfarb-Shanno method [54] before starting the AIMD simulations. The nuclear dynamics were treated within the Born-Oppenheimer approximation, and the convergence criterion on forces was chosen to be 10−6 a.u. All AIMD calculations were performed in the NVT ensemble at 300 K with a 0.5 fs timestep using the Nosé-Hoover scheme [47]. Each system was pre-equilibrated for 2 ps before the 10 ps production run was performed for further statistical analysis of the resulting properties. Periodic boundary conditions [46,47] were applied in all CMD and AIMD calculations, and no symmetry constraints were imposed on the simulated structures. Crystallographic Parameters The equilibrium crystallographic unit cell parameters of the idealized and defective TAP structures were obtained using CMD calculations at 300 K and from the zero-temperature cell optimization with DFT (Table 1), as described in the previous section. Six models were considered in total, all of which were in qualitative agreement with the experimental data [7,10,11,13,55]. Previously, only the ideal talc-based TAP structure had been investigated using CMD simulations with the ClayFF-orig force field [41] and DFT calculations without dispersion corrections [30]. [Table 1. Crystallographic unit cell parameters of the TAP models: simulated results and experimental data.] Among the classical models, the best agreement was achieved with the defective structure and the ClayFF-MOH force field; the error of the unit cell volume did not exceed 3% (Table 1). The results for the idealized structure with ClayFF-orig virtually coincided with the previous calculations [41]. The zero-temperature DFT calculations gave the most accurate value of the unit cell volume compared to the experimental data, but the cell vectors and shape of the unit cell were not quite correct: the cell was squeezed along the c-axis and stretched along the a- and b-axes. The difference between the ideal and defect models was rather small, probably due to the symmetry constraints during the DFT optimization procedure. The lattice parameters from the earlier DFT calculations [30] deviated significantly from the experimental data. However, those results were obtained without a dispersion correction, which is especially important in systems containing hydroxyls and/or H2O molecules in the structure [25,56].
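As a quick consistency check on the cell parameters discussed above, the monoclinic unit cell volume follows from V = a·b·c·sin β; the sketch below evaluates it for the experimental parameters quoted earlier (the simulated volume passed to the error function is a hypothetical placeholder, and the 3% figure is the error bound mentioned in the text).

# Monoclinic cell volume V = a*b*c*sin(beta) for the experimental TAP cell
# quoted above (Comodi et al. [11]); useful for checking the <3% volume
# error reported for the defect model with ClayFF-MOH.
import math

a, b, c = 5.323, 9.203, 10.216          # Angstroms
beta = math.radians(99.98)              # monoclinic angle

v_exp = a * b * c * math.sin(beta)
print(f"experimental V = {v_exp:.1f} A^3")   # ~492.9 A^3

def volume_error(v_sim, v_ref=v_exp):
    """Relative volume error in percent."""
    return 100.0 * abs(v_sim - v_ref) / v_ref

print(volume_error(505.0))              # hypothetical simulated volume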
Atom Positions and Interatomic Distances

All calculated average distances between the metal atoms (Mg, Si) and the various oxygen atoms in the structure (Oa, apical oxygen; Ob, basal oxygen; Oh, hydroxyl oxygen; Ow, H2O oxygen) were in fairly good agreement with the experimental results [11,55] (Table 2). The maximum deviation of the calculated values from the experimental data was below 6%. However, the AIMD calculations usually reproduced the experimental bond distances better than the CMD simulations. Atomistic modeling of the TAP structure with defects made it possible to obtain the distances between atoms, at least one of which belonged to a structural defect. The average distance between Sis and Obs (Si and basal O belonging to silanol groups) was slightly larger than the Si-Ob distance in all simulations. The same result was also obtained for the Si-Oa pair in the AIMD calculations (Table 2).

The experimental interlayer thickness of TAP was poorly reproduced with the AIMD simulations; it was about 0.2 Å lower in both cases. A similar discrepancy was also observed in the DFT simulations of talc without dispersion corrections [31]. The best agreement with the experimental data for TAP was observed using the defect model and the ClayFF-MOH force field.

Radial Distribution Functions

Radial distribution functions (RDFs) are especially important for understanding the structural arrangement and ordering of the H2O molecules in the TAP interlayers. They were calculated for the different H-O pairs existing in our models. The first Hw-Ow peaks from the AIMD calculations were located at 4.7 Å. The results were almost the same for the ideal and defect models (Figure 3a; see also Figure 2), probably due to interlayer shrinkage leading to less mobile H2O molecules compared to the experimental data (Table 2). The first peak for the ideal model was slightly shifted to the left in the CMD simulations, but the AIMD-like position was obtained using the defect model with the ClayFF-MOH force field. This was due to the formation of stronger H-bonds between the hydrogen atom of the silanol groups (Hhs) and Ow of the water molecules (see also Figure 2). The hydroxyl groups in the structural defects enhanced the hydrophilicity, exhibiting stronger attraction towards the nearest H2O molecules approaching the tetrahedral layer. Consequently, the nearest water molecules were slightly displaced away from the other interlayer water molecules. This effect was not observed with the ClayFF-orig force field.

The position of the first Hw-Ob RDF peak for the ClayFF-MOH model with defects was also in good agreement with the AIMD calculations (Figure 3b; see also Figure 2). However, all RDF peaks from the CMD simulations were broader, and the average Hw-Ob distance from the AIMD calculations was smaller than the classical values (3.3 Å vs. 3.4-3.6 Å, respectively). This was also consistent with the smaller interlayer space thickness resulting from the AIMD calculations (see Table 2).
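A minimal sketch of how such atom-pair RDFs can be computed from a stored trajectory frame is given below; it assumes an orthorhombic box and a single frame for brevity, whereas the production analysis would use the full simulation cell and average over the whole equilibrium trajectory.

```python
import numpy as np

def rdf(pos_a, pos_b, box, r_max=6.0, n_bins=120):
    """g(r) between two atom groups in an orthorhombic box (minimum image).

    pos_a, pos_b : (N, 3) and (M, 3) arrays of Cartesian coordinates (Angstrom)
    box          : (3,) array of box edge lengths (Angstrom)
    """
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for ra in pos_a:
        d = pos_b - ra
        d -= box * np.round(d / box)        # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        r = r[(r > 1e-6) & (r < r_max)]     # skip self-pairs
        counts += np.histogram(r, bins=edges)[0]
    # Normalize by the ideal-gas expectation for each spherical shell.
    rho_b = len(pos_b) / np.prod(box)
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    g = counts / (len(pos_a) * rho_b * shell)
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, g
```

The first-peak position of a given pair (e.g., Hw-Ow) is then read off as the r value that maximizes the frame-averaged g(r).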
The Hh-Ow RDFs (Figure 4a; see also Figure 2) showed different widths for the first peak. The first peak for the ClayFF-MOH model with defects was wider than for the other classical models and closer to the AIMD results. This could be explained by the greater mobility of the H2O molecules in the six-membered tetrahedral rings [11], which were free from short-range interactions due to the structural defects. This suggestion was also confirmed by the Hhs-Ow RDFs for the defective structures (Figure 4b; see also Figure 2), where the first peak locations resulting from the ClayFF-MOH and AIMD simulations were almost the same (1.8 and 1.7 Å, respectively). Such short donor-acceptor distances, especially in the case of the AIMD simulations, indicated the formation of particularly strong Hhs···Ow H-bonds compared to the CMD calculations with ClayFF-orig.

Fairly good agreement between the ClayFF-MOH and AIMD simulation results was also observed for the Hhd-Obs RDFs (Figure 5; see also Figure 2). The average distance between Hhd and Obs from the simulations with the ClayFF-orig model was smaller than in the ClayFF-MOH and AIMD simulations. The peak intensity for this RDF from the ClayFF-MOH and AIMD simulations was also higher than for the ClayFF-orig results. This possibly indicated competition between three Obs atoms for H-bond formation with one of the Hhd atoms inside the structural defect for the ClayFF-MOH and AIMD models, leading to the formation of so-called bifurcated H-bonding [57]. For the ClayFF-orig model, the formation of a single stronger H-bond between one of the Hhd atoms and Obs was most likely.

Interlayer Atomic Density Distributions

To better understand the TAP interlayer structure, two-dimensional contour maps of the time-averaged atomic density distributions were calculated from the CMD and AIMD simulations (Figures 6 and 7). For the ideal TAP model, there were no differences between the two versions of ClayFF and the AIMD results. In all cases, the H2O molecules were located above the structural hydroxyls of each six-membered siloxane ring [41] of the talc tetrahedral layers (see Figure 7a). Also, the H2O molecules were coordinated by the Ob atoms of the tetrahedral layer [41]. According to previous DFT results [30], the Hw atom can form multi-furcated H-bonds with Ob atoms. Water molecules located far from the structural defects behaved similarly to the H2O molecules in the ideal model for both ClayFF versions (Figure 6). However, the H2O molecules near the defects behaved differently due to their stronger interaction with the silanol groups of the defects. The CMD results with the ClayFF-MOH model demonstrated an ordered pattern of H2O molecule arrangement near the defects, which was due to their acceptance of strong H-bonds from the silanol groups of the defects (Figure 7b).
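Contour maps of this kind can be produced by histogramming the in-plane atomic positions over the trajectory; a minimal sketch, assuming coordinates already wrapped into an orthogonal ab-plane, is shown below.

```python
import numpy as np

def density_map_xy(traj_xy, box_xy, n_bins=100):
    """Time-averaged 2D atomic density (atoms per square Angstrom) in the ab-plane.

    traj_xy : (n_frames, n_atoms, 2) in-plane coordinates, already wrapped
    box_xy  : (2,) in-plane box lengths (Angstrom)
    """
    pts = traj_xy.reshape(-1, 2)
    H, xedges, yedges = np.histogram2d(
        pts[:, 0], pts[:, 1],
        bins=n_bins, range=[[0, box_xy[0]], [0, box_xy[1]]])
    # Divide by number of frames and bin area to get a time-averaged density.
    area = (box_xy[0] / n_bins) * (box_xy[1] / n_bins)
    return H / (traj_xy.shape[0] * area), xedges, yedges
```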
Hydrogen Bonding Structure and Dynamics in the TAP Interlayers

A common geometrical definition was used to determine whether a H-bond (HB) existed between a donor and an acceptor [58,59]: R(Od-Oa) ≤ 3.5 Å, R(Oa-Hd) ≤ 2.45 Å, and φ(Hd-Od-Oa) ≤ 30°, where d denotes the donor and a the acceptor. The average lifetime of H-bonds (τHB) was estimated by integrating the so-called continuous time correlation functions of H-bonds [58,59]. The average number of H-bonds (nHB) per donor was also calculated (Table 3).
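A minimal Python sketch of this H-bond analysis is given below; it illustrates the quoted geometric criterion and the continuous-lifetime estimate, but it is not the actual analysis code, and the averaging over multiple time origins used in a production calculation is omitted for brevity.

```python
import numpy as np

R_OO_MAX, R_OH_MAX, ANGLE_MAX = 3.5, 2.45, 30.0  # criteria from the text

def is_hbond(o_d, h_d, o_a):
    """Geometric H-bond test for one donor O, its H, and an acceptor O (Angstrom)."""
    r_oo = np.linalg.norm(o_a - o_d)
    r_oh = np.linalg.norm(o_a - h_d)
    v1, v2 = h_d - o_d, o_a - o_d               # angle at the donor oxygen
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return (r_oo <= R_OO_MAX) and (r_oh <= R_OH_MAX) and (angle <= ANGLE_MAX)

def continuous_lifetime(h_t, dt):
    """tau_HB from the continuous HB correlation function S(t).

    h_t : (n_frames, n_pairs) boolean array; h_t[i, j] = pair j bonded at frame i
    dt  : time between stored frames (ps)
    """
    n_frames = h_t.shape[0]
    s = np.zeros(n_frames)
    alive = h_t[0].astype(float)                # pairs bonded at t = 0
    for i in range(n_frames):
        alive *= h_t[i]                         # a single break terminates the bond
        s[i] = alive.sum()
    if s[0] > 0:
        s /= s[0]
    return np.trapz(s, dx=dt)                   # tau_HB = integral of S(t)
```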
The short lifetimes of the H-bonds indicated that the H2O molecules had significant rotational mobility. This suggestion was confirmed by the second-order orientational time correlation function [60,61], which was calculated using the orientation of the unit vector along the Ow-Hw bond in the H2O molecules (Figure 8). Thus, the interaction between the structural hydroxyls and H2O molecules played a very important role in the behavior of the interlayer H2O molecules in the ideal model.

Defect Model

The observed behavior of the H2O molecules far from the TAP structural defects was the same as in the ideal model. The Hw···Ob H-bonding lifetime did not change significantly (Table 4). However, the number of Hw···Ob H-bonds was smaller in all simulations. On the contrary, an increase in the average number of Hh···Ow H-bonds was observed for the ClayFF-MOH and AIMD simulations. The lifetime of the Hh···Ow H-bonds also increased, and the longest lifetime was observed for the ClayFF-MOH force field (see Figure S2b of the Supplementary Materials). This strengthening of H-bonds was also reflected in the orientational relaxation of the H2O molecules. The orientational relaxation time was the largest for the defect model with the ClayFF-MOH force field (Figure 8). Similar to the ideal case, Hh···Ob H-bonds were formed only with the ClayFF-orig force field.

In the new TAP model with defects, several new donor-acceptor H-bonding pairs were possible in addition to the main Hw···Ob and Hh···Ow H-bonds. The most important was the Hhs···Ow pair. It had the longest lifetime, especially as observed in the AIMD simulations (Table 4 and Figure S3 of the Supplementary Materials). However, the simulation results with the ClayFF-orig force field led to only very weak H-bonds for this donor-acceptor pair. This was due to the possibility of frequent and completely unrestricted reorientation of the silanol hydroxyl groups. The average number of Hhs···Ow H-bonds was also the smallest for the ClayFF-orig case, while fairly close results were observed for the ClayFF-MOH and AIMD simulations. Another important H-bonding pair was Hw···Obs, which was due to H-bond donation by the H2O molecules to the oxygen atoms of the silanols. The CMD simulations demonstrated the existence of such H-bonds, but they were not observed during the AIMD runs. The stronger Hhs···Ow H-bonds probably prevented the formation of such stable pairs in the AIMD simulations. The formation of weak H-bonds donated by the structural hydroxyls to the oxygen atoms of the silanol groups (Hhd···Obs pairs) was also possible inside the structural defects. Thus, the interaction of the silanol groups with the interlayer H2O molecules played an even greater role in the defect TAP model.

Vibrational Properties

The power spectra of atomic motion, or vibrational density of states (VDOS), were obtained by Fourier transform of the velocity autocorrelation function of the respective atoms (see, e.g., [23]). The total VDOS (Figure S4 of the Supplementary Materials) was decomposed into several atomic and molecular contributions: H2O molecules (Figure S5 of the Supplementary Materials), Hh atoms (Figure S6 of the Supplementary Materials), and Hhs atoms (Figure 9).
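A minimal sketch of this VACF-based procedure is shown below; the Hann window and the normalization are common but not unique choices, and the frame spacing dt is assumed to be given in femtoseconds.

```python
import numpy as np

def vdos(vel, dt):
    """Power spectrum of atomic motion from the velocity autocorrelation.

    vel : (n_frames, n_atoms, 3) velocities of the selected atoms (e.g. Hhs)
    dt  : time between stored frames (fs)
    Returns wavenumbers (cm^-1) and the (arbitrarily normalized) VDOS.
    """
    n = vel.shape[0]
    # Mass-less VACF, averaged over atoms and Cartesian components.
    vacf = np.array([
        np.mean(np.sum(vel[:n - t] * vel[t:], axis=2)) for t in range(n // 2)
    ])
    vacf /= vacf[0]
    # Window the VACF to suppress truncation ripples, then Fourier transform.
    spectrum = np.abs(np.fft.rfft(vacf * np.hanning(len(vacf))))
    freq_hz = np.fft.rfftfreq(len(vacf), d=dt * 1e-15)   # fs -> s
    wavenumber = freq_hz / 2.99792458e10                  # Hz -> cm^-1
    return wavenumber, spectrum
```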
According to the Raman spectroscopic data in the high-frequency region, the peak of the Oh-Hh stretching mode was located at 3622 cm−1, and there were two peaks reflecting the symmetric and asymmetric stretching of the Ow-Hw bonds within the water molecules at 3593 cm−1 and 3668 cm−1 [1]. The stretching modes of the H2O molecules obtained from the CMD simulations were located at slightly higher frequencies because only a simple harmonic function for the intramolecular vibrations was used in the SPC/E water model (Table 5). A more accurate and elaborate H2O intramolecular potential would be necessary to better reproduce the positions and widths of these peaks (e.g., [62]). The Ow-Hw stretching modes from the AIMD calculations showed better agreement with the experimental data; however, the resolution was not sufficient to clearly distinguish the two peaks. The AIMD peak positions reflecting the stretching mode of the structural hydroxyls (Hh atoms) were blue-shifted compared to the experimental data and the results of the CMD simulations (Table 5). The experimentally observed peak at 3622 cm−1 was better reproduced for the TAP model with defects using the ClayFF-MOH classical force field (see Figure S6 of the Supplementary Materials).

In the high-frequency region, there were also peaks reflecting the O-H stretching mode of the hydroxyls of the silanol groups (Hhs atoms) (Figure 9). The peak position obtained from the AIMD simulations was located at 3194 cm−1, while the CMD results showed peaks at 3577 and 3431 cm−1 for ClayFF-orig and ClayFF-MOH, respectively (Table 5). This was due to stronger H-bond formation between the hydroxyls of the silanol groups and the interlayer H2O molecules (see Table 4). In all calculations, the vibrational peaks of the silanol groups were located at lower frequencies than similar peaks at the edges of talc crystals [63]. They were also red-shifted compared to the high-pressure experimental data of Pawley and Welch [64], in which these vibrational modes were detected at 3587 cm−1. However, these modes are sensitive to pressure and to the strength of the interlayer H-bonds; thus, additional studies are required for a more detailed quantitative comparison with the experimental data.

The Raman spectrum of TAP in the lower frequency region [1] shows several vibrational modes assigned to Mg-O-H and Si-O-Si bending, OH translation, OH libration, Si-O-Si symmetric stretching, and Si-O stretching. The VDOS calculated from our CMD and AIMD simulations showed good agreement with some experimental peaks in this lower frequency region. Generally, we were able to identify the OH librations in the calculated spectra. The calculated partial VDOS for Hh from the CMD simulations with ClayFF-orig, in the models both with and without defects, showed that the associated peak was located at a lower frequency compared to the ClayFF-MOH and AIMD models (Figure S6 of the Supplementary Materials).
This was a clear indication that the addition of the constraining Mg-O-H angle-bending term in the ClayFF-MOH model induced better agreement with the AIMD results. Better agreement between the ClayFF-MOH and AIMD results was also observed for the Si-O-H bending vibrations of the silanol groups of the defects in the lower frequency region of the spectra, reflecting librational motions (Figure 9). Such librational modes of H2O were located at ca. 310-320 cm−1 in the CMD simulations and at ca. 375-385 cm−1 in the AIMD simulations (Figure S5 of the Supplementary Materials). The results were consistent with the Raman spectroscopic data [1,65]. In general, due to the intrinsic uncertainty of these kinds of classical models, a comparison of the absolute values of the calculated vibrational frequencies with the experimental data is less meaningful than a comparison of trends in these vibrational modes with temperature, pressure, and composition (e.g., [22,23,29]). These trends deserve a much more detailed analysis and discussion and will be the subject of a separate publication.

Elastic Properties

CMD simulations with the ClayFF force field have been successfully used to calculate the elastic and thermal properties of clays and clay-related materials [22,66,67], and it has recently been demonstrated that the modified ClayFF-MOH version allows for better reproduction of the elastic constants of a number of minerals [28,29]. A series of special equilibrium CMD runs was performed here to calculate the elastic constants of the different TAP models. First, the stress tensor components were calculated as the sum of the kinetic and virial terms for the equilibrium supercell. Then, negative and positive supercell deformations of 1.0% were applied to the supercell in six independent directions, and the stress tensor components were calculated for the specified deformations. Such deformation amplitudes are suitable for talc-like materials [37]. The CMD calculations of the initial and deformed TAP supercells were carried out in the NVT statistical ensemble for 0.5 ns. The elastic constants of the crystal were then numerically determined according to the generalized Hooke's law. The bulk modulus (KH) and shear modulus (GH) of TAP were also calculated using the Voigt-Reuss-Hill approximation [68]. It is important to emphasize that all components of the elastic tensor were calculated completely independently, without any symmetry constraints imposed on the crystal structure (Table 6). They were compared with previous DFT calculations for monoclinic talc (C2/c) [37,69]. The off-diagonal components were less than 10 GPa in all cases. The TAP and talc elastic constants were very close to each other, except for the C33 and C55 values. The C33 constant is responsible for the elasticity along the c-axis; the smaller values for TAP are, obviously, due to the presence of H2O molecules in the interlayers, which leads to softer behavior along the c-axis. The presence of interlayer H2O molecules and the absence of shifts of the TOT layers with respect to each other in all TAP models explained the smaller values of the C55 constant, which is responsible for the shear elastic behavior. The values of KH and GH for the defect TAP model were closer to the talc DFT results. As demonstrated in the previous sections, a stable H-bonding network is formed inside the interlayer space of TAP, which additionally stiffened the defect structure.
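The numerical procedure can be sketched as follows; this is an illustration under the stated assumptions, not the production script, and in particular the sign convention of the virial stress must be handled consistently with the applied strains.

```python
import numpy as np

def elastic_constants(stress_plus, stress_minus, strain=0.01):
    """6x6 stiffness matrix (GPa) by central differences over +/-1% strains.

    stress_plus, stress_minus : (6, 6) arrays; row j holds the six Voigt
    stress components measured after straining the cell in direction j.
    """
    c = np.empty((6, 6))
    for j in range(6):
        c[:, j] = (stress_plus[j] - stress_minus[j]) / (2.0 * strain)
    return c

def voigt_reuss_hill(c):
    """Bulk and shear moduli K_H, G_H (GPa) from the stiffness matrix."""
    s = np.linalg.inv(c)                       # compliance matrix
    k_v = (c[0, 0] + c[1, 1] + c[2, 2] + 2 * (c[0, 1] + c[0, 2] + c[1, 2])) / 9
    g_v = (c[0, 0] + c[1, 1] + c[2, 2] - (c[0, 1] + c[0, 2] + c[1, 2])
           + 3 * (c[3, 3] + c[4, 4] + c[5, 5])) / 15
    k_r = 1.0 / (s[0, 0] + s[1, 1] + s[2, 2] + 2 * (s[0, 1] + s[0, 2] + s[1, 2]))
    g_r = 15.0 / (4 * (s[0, 0] + s[1, 1] + s[2, 2])
                  - 4 * (s[0, 1] + s[0, 2] + s[1, 2])
                  + 3 * (s[3, 3] + s[4, 4] + s[5, 5]))
    return 0.5 * (k_v + k_r), 0.5 * (g_v + g_r)  # Hill averages
```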
Conclusions

Six atomistic models of the 1M polytype of the 10-Å phase (TAP), Mg3Si4O10(OH)2·xH2O, were constructed and quantitatively studied using classical and ab initio molecular dynamics simulations. A water content of x = 1 was used in this study, as it was the most stable model under ambient conditions. The simulation results clearly demonstrated that the inclusion of Si vacancies in the TAP structure and the presence of silanol groups around these structural defects provided a better description of the experimental volumetric data. The advantage of the defect TAP model was also demonstrated by comparing the elastic constants and the bulk and shear moduli between the 10-Å phase and monoclinic talc.

Orientational ordering of the hydroxyls of the silanol groups (Obs-Hhs) was observed in the AIMD simulations, with the atomic positions showing a distinct structural pattern. However, reorientation events were very rare. CMD simulations using the recently modified ClayFF-MOH force field resulted in similar behavior, while the earlier version of the force field, ClayFF-orig, led to much more mobile and disordered Obs-Hhs groups. These structural observations were also supported by the analysis of the vibrational spectra calculated from the results of both the CMD and AIMD simulations. In all cases, the modified ClayFF-MOH version of the force field provided more accurate descriptions that were closer to the DFT and AIMD results than the original version of ClayFF. However, the stretching modes of the Obs-Hhs bonds in the CMD simulations were located at somewhat lower frequencies than suggested by the experimental data.

In the idealized talc-based and defectless TAP structure, all types of H-bonds are quite weak, and this facilitates the orientational relaxation of the interlayer H2O molecules, which is very fast compared to normal liquid water. However, stronger H-bonds were formed between the silanol groups of the structural defects and the interlayer H2O molecules in the TAP structural model with defects. Thus, a stable H-bonding network could be observed in the interlayers of this TAP model. This phenomenon could certainly play an important role in the behavior of TAP at high pressures and temperatures and could affect the retention and transport of water by this phase in the Earth's upper mantle in subduction zones. A detailed analysis of CMD and AIMD simulations at high temperatures and pressures for the new model of the 10-Å phase with defects, as well as a discussion of the possible geochemical and geophysical implications of these results, will be the subject of a separate publication improving on the earlier simulations of the idealized talc-based and defectless TAP model [41,42].

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/min13081018/s1: Figure S1. Q3- and Q2-type Si sites in the tetrahedral sheet of the TAP; Figure S2. Continuous H-bonding time autocorrelation functions; Figure S3. Continuous H-bonding time autocorrelation functions for the Hhs···Ow pairs; Figure S4. Total VDOS for the ideal and defect TAP models; Figure S5. Partial VDOS of the interlayer H2O molecules for the ideal and defect TAP models; Figure S6. Partial VDOS of the Hh atoms for the ideal and defect TAP models.
Public, Private, or Inter-Municipal Organizations: Actors’ Preferences in the Swiss Water Sector : To improve sustainable service provision, the public sector has been repeatedly subject to administrative reforms. Yet, the question arises of which types of organizations might be preferred. To address this, we systematically analyze which water supply organizations decision-makers and stakeholders, across different levels of government in Switzerland, prefer. We find that the actors prefer public organizations that involve coordination between municipalities and reject private organizations. Distinguishing between different actor levels reveals a distinct pattern, mainly related to the level of responsibility: the national (confederation) and regional (cantonal) actors only prefer coordination across municipalities, where local politicians lose a degree of control. In contrast, the local actors prefer those organizations where they can maintain democratic control the most. However, such organizations are not expected to perform sustainably, mainly because of lengthy decision-making processes, lack of access to external funds, and short-term financial planning. We, thus, conclude that, at the local level, there is potentially a trade-off between democratic values and performance. Introduction To improve sustainable service provision, the public sector has been repeatedly subject to administrative reforms [1]. The aim is to re-organize or create new structures to enhance coordination amongst a variety of actors to co-manage service provision and to improve sustainable functioning, effectiveness, and efficiency [2,3]. Simply because an organization is expected to be more sustainable, effective, and efficient [4], it may not be deemed preferable by the concerned actors. For instance, a new organizational type might affect existing authority, shift competencies, or be at odds with actors' interests and values [5]. However, it is crucial for successful reforms that they are supported by decision-makers and stakeholders [6]. Moreover, different actors play a key role in drafting reform proposals and implementing them [7]; thus, what they prefer seems crucial to ease reform design and implementation [8]. Furthermore, new policies or changed organizational types have faced opposition both by (local) decision-makers and stakeholders in various contexts [9,10]. The issue of preferences thus becomes all the more pressing in the context of public service provision in general and, specifically, about such vital services as public water supply. In this context, we ask what types of organizations are preferred by stakeholders and decision-makers in the water sector. To answer this question, we first review the literature on administrative reforms and types of organizations. An analysis of such reforms requires an assessment of traditional state-centered types of service delivery and New Public Management (NPM) arguments, in terms of establishing organizations with increased autonomy from public authorities for service delivery [11]. We specifically look at coordination and organizational autonomy. We then conduct an empirical analysis of stakeholder and decision-maker preferences for different organizational types in the water sector. To cope with challenges of austerity, aging infrastructure, and climate change impacts, such as flooding and droughts, responsibilities in the water sector have been spread across different actors and multiple centers of power at different political levels [12]. 
The Swiss water sector provides an interesting context to assess actors' preferences, as the support of diverse actors is critical in the context of Swiss direct democracy [13]. The remainder of this article is structured as follows. We first address the administrative reforms and types of organizations proposed in the literature. Following this, we present our methods before showcasing our results from a survey among decision-makers and stakeholders in the Swiss water sector to assess the degree to which actors prefer different organizational types. Finally, we discuss the feasibility and potential developments of administrative reforms and types of organizations in light of the empirical results.

Administrative Reforms and Changes in Types of Organizations

As single municipalities are often overloaded with new duties, conventional and fragmented approaches are perceived as insufficient to accomplish tasks sustainably, efficiently, and effectively [2,14]. Against this backdrop, administrative reforms and changes in types of organizations call for decision-making and administration to become consolidated, where public and private actors join to tackle common tasks as a result of functional interdependencies rather than political boundaries [15,16]. Such structural reforms involve the creation of larger organizations (typically supra-municipal) to provide a service [17]. Such changes can be differentiated by two key criteria: (1) the degree of coordination, as derived from the literature on administrative consolidation; and (2) the degree of organizational autonomy, meaning the degree of decision-making freedom from government, as derived from the New Public Management literature [14,18]. Our focus is on organizations that provide a service rather than serving a coordinative function (e.g., councils of government). Coordination refers to centralization with substantial horizontal interaction [19]. As shown in Figure 1 and Table 1, the coordination of actors to jointly provide a service can range from no joint work to co-managing and sharing responsibilities for service supply tasks, finances, and infrastructure [2,20]. New Public Management (NPM) reforms focus on changes within an organization that stipulate flat hierarchies. NPM reforms call for establishing organizations with increased autonomy from public authorities for service delivery, including privatization or contracting out to private firms [11,21]. NPM addresses the increasing independence of service providers at the operational level and the decreasing direct influence of public authorities as well as democratic participation [22]. In other words, emphasis has been placed on the "clear separation of political and managerial roles" [14] (p. 332), resulting in increasingly autonomous organizations for public service provision where citizens have little control over their water management [23]. The operator would no longer have to be embedded within the municipal administration, as municipal governments delegate their authority to a new organizational entity, which has its own legal personality and statute [17]. Organizational autonomy can be differentiated between legal framing, financial authority, and democratic control [18]. Combining different degrees of coordination and organizational autonomy, we identify six organizational types in Figure 1 and Table 1.
First, the public bureau is fully integrated into the local public administration, citizens have direct voting rights on financial issues, and services are limited to the municipality (no coordination). Second, the level of coordination can differ both across and within the contractual consortium, the inter-municipal association, and the public joint-stock corporations. These can all be seen as forms of inter-municipal coordination [24], which aim at establishing structures to enhance coordination among municipalities and a variety of actors to co-manage service provision [2,3]. Assessing the concrete level of coordination remains an empirical question in terms of how many municipalities are involved in an organization [25]. A central element is that all decision-making is now delegated to municipal authorities and is no longer connected to the citizens. Third, Public-Private Partnerships (PPPs) differ from the other organizations in the sense that they can include private actors alongside municipalities. This results in higher levels of autonomy from the political system but also increases the level of coordination among actors [22]. Finally, the private company, as conceptualized in this article, involves a high degree of autonomy but no coordination, as it is conceived as a single actor. A central aspect here is that all decisions are made by private actors with no public representation.

Coordination and autonomy might stand at odds with each other, as organizational autonomy can lead to more fragmented organizations if, for instance, single private providers take over small-scale infrastructure. However, different types of organizations can also include elements of both coordination and autonomy, as service providers can gain autonomy from local politicians (municipal governments) by creating a larger organization and sharing managerial and operational responsibilities between public and private partners [1,21,23].

The above reforms and organizational types are broadly applicable to utility sectors such as electricity and gas (also referred to as network industries), which entail fixed and extensive infrastructure systems to deliver services. However, there are some specificities of the water sector that make it different from other utility sectors. Water is a bulky resource that cannot be cheaply transported to different places; hence, water is typically sourced locally. This has led to the local organization and management of the water supply, often with de facto local monopolies [12]. In addition, municipal water supply is not a marketable product; its price is not determined by a market, as with electricity or gas, but is typically set and heavily regulated by the government. This leads to a lack of market competition and strong public control and management [12].

Materials and Methods

Switzerland is an ideal country to study actors' preferences for different types of organizations in general, and in the water supply sector in particular. First, its federalist structure and direct democratic institutions provide significant decisional and implementation competencies to regional and local authorities. Thus, with varying institutional and organizational settings, analyses of local service provision embedded in a multi-level federal system are highly relevant [26]. Second, administrative reforms with changes in the types of organizations currently play an important role in the Swiss water supply sector [27].
However, in Switzerland, full-scale or material privatization, in terms of the divestiture of infrastructure from public to private actors, is non-existent; only one Swiss waterworks is privately owned [26]. Surveys show that water users in Switzerland have a strong preference for public organizations and that they are satisfied with the current system [28]. Current reforms in the Swiss water supply sector involve increased coordination among water suppliers (typically municipalities) with increasing autonomy from the political system, such as through inter-municipal associations or public joint-stock corporations.

As shown in Figure 2, our study covers the subnational jurisdiction (canton) of Basel-Landschaft in Northern Switzerland, which is representative of the Swiss plateau in terms of sociodemographic and economic structure as well as geophysical conditions, and encompasses diverse organizations. We also include the national perspective by including actors at the federal level in our analysis. While the water supply in Switzerland is generally abundant, there have been droughts in this region, with water scarcity particularly in the hilly areas [29]. This region includes contested and failed consolidation endeavors. Of the selected types of organizations (see Figure 1 and Table 1), all but the fully privatized company already exist in the water sector within Basel-Landschaft. The private company is not considered here, as it is not relevant to the Swiss context; the empirical analysis thus investigates the preferences for five of the six organizations presented in Table 1 and Figure 1.

To assess the actors' preferences, we surveyed decision-makers and stakeholders. We distinguish between those two kinds of actors for several reasons: formal decision-makers, such as municipal and subnational governments, are responsible for administrative reforms. These actors are also known as the political elite, and such studies have a long tradition in research on public policies [29]. However, concentrating on a wider array of actors, so-called stakeholders, particularly makes sense in situations where citizens or private companies are heavily involved in decision-making or implementation [29]. Public service delivery in general, and water supply in particular, is characterized by such multi-level involvement of public and private actors [30]. For this reason, we also include stakeholders such as water technicians, water suppliers, engineers, and interest groups (see Table A1 in Appendix A) in our analysis. We identified the actors by first assessing who has formal decision-making competences and then, through interviews and discussions with the Canton, pinpointing which additional stakeholders are relevant for our research question. We asked the respondents to answer on behalf of their organization, and we selected the heads of the organizations. A total of 172 actors participated in the survey between September and December 2015. We conducted a mail survey, sending the questionnaire by postal service; actors were reminded by e-mail and with phone calls. The overall response rate was 90.1% (see Table A1 in Appendix A). For the five types of organizations relevant to our study (see Table 1; as no private company exists, we do not include this type), we created four- and five-item indices capturing the key organizational characteristics (see Table A2 in Appendix A). Survey participants were asked to evaluate each item on a scale ranging between "fully agree" (+1.5) and "fully disagree" (−1.5). Based on the scores for each question, indices (weight = 1) for each organization were calculated. Aggregating all items per organization then allows for assessing the preference for each type of organization for all decision-makers and stakeholders.
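For illustration, this aggregation amounts to a simple, equally weighted mean over the item scores; the sketch below uses hypothetical scores and organization labels, since the actual items are listed in Table A2.

```python
import numpy as np

# Hypothetical item scores for one respondent, on the survey's scale from
# "fully agree" (+1.5) to "fully disagree" (-1.5); the real items are in Table A2.
responses = {
    "public bureau":          [1.5, 0.5, -0.5, 1.0],
    "inter-municipal assoc.": [1.5, 1.0, 0.5, 1.0, 0.5],
}

def preference_index(item_scores):
    """Equally weighted (weight = 1) index: the mean of all item scores."""
    return float(np.mean(item_scores))

for org, scores in responses.items():
    print(f"{org}: {preference_index(scores):+.2f}")
```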
We display aggregated results (preferred organizational type) for different actors. We distinguished between national (confederation), regional (subnational jurisdictions, i.e., cantons), and local (municipalities) actors to see if hierarchical and jurisdictional levels might reveal any difference in preferences. However, affiliation might have a substantial influence on actors' preferences, as reforms might affect existing structures and interests [5]. Hence, we further distinguish between seven actor categories (see Table A1). The first is the "confederation" and includes the federal offices responsible for water supply. The second is the "canton" and encompasses a broad variety of departments from the subnational administration as well as representatives from the executive branch. Third, members of municipal councils and mayors are grouped under the category "municipalities"; this includes representatives of very small municipalities as well as larger cities and covers different political parties. Fourth, the "water technicians" operate at the local level, in particular in small municipalities, and are responsible for the operation and maintenance of the local water infrastructure; this can range from a part-time position within the municipal staff to a service provided by a private firm. The fifth category, "water suppliers," includes representatives from waterworks (semi-autonomous from the municipality). Sixth, several engineering companies are involved in water supply management, mostly in terms of consulting and planning activities; these are categorized under "engineers." Finally, the seventh category covers the "interest groups," which include a broad variety of professional associations as well as environmental interest groups. This classification distinguishes the main players involved in the water supply. While some overlap is, in principle, possible (e.g., municipal delegates in water associations or membership in professional associations), in such cases another representative was asked to respond to the survey. Only in one case was this not possible.

Results

Figure 3 shows the average actors' preferences for the five types of organizations (cf. Figure 1 and Table 1). We show this for all actors surveyed, and then distinguish between the decision-makers (those who have formal decision-making rights) and the rest of the stakeholders. The figure reveals a positive assessment for three types of organizations, namely the public bureau, the contractual consortium, and the inter-municipal association, with the latter scoring highest, making it the most preferred organizational type across all actors. The contractual consortium scores higher than the public bureau. In contrast, both the public joint-stock corporation and the PPP are rejected, the former being most strongly disapproved of. Differentiating between decision-makers and stakeholders shows a similar pattern across the two groups and reflects the general pattern found for all actors. However, as Figure 3 shows, decision-makers seem to have a more pronounced view regarding all organizational types than the stakeholders. This suggests that the decision-makers have more distinct preferences than the stakeholders, but this could also result from a stronger heterogeneity of the stakeholder group.
Distinguishing between the three actor levels (national, regional, and local) reveals a distinct pattern, mainly related to the level of responsibility, from local to national (Figure 4): the national (confederation) and regional actors seem to prefer only one organizational type, namely the inter-municipal association. The local actors are, overall, rather positive about all organizational types (except the public joint-stock corporation) but prefer the more decentralized and non-autonomous public bureau or contractual consortium over the other organizational types.

Besides different levels, diverse actors may have diverging interests regarding reforms in the water supply sector. Thus, Figure 5 displays the average actor preferences for the five organizational types across the seven actor categories (confederation, canton, municipalities, water associations, engineers, water technicians, and interest groups). As Figure 5 shows, public joint-stock corporations are rejected by all actor categories except the water associations, most notably by actors at the national level. PPPs are slightly supported by municipal actors and water technicians but rejected by all other actors. This support for a PPP by local actors might reflect that several tasks (e.g., water quality control, infrastructure maintenance) for the local water supply are increasingly carried out by private companies rather than by municipal employees. However, there is broader support for organizations under public law with a strong link to the municipalities. This is particularly reflected in the support for inter-municipal associations, but also for public bureaus and contractual consortia. In particular, the key actors, namely the municipal actors as well as actors from the canton, clearly approve of these three organizational types. Municipalities show stronger support for contractual consortia and traditional public bureaus, but somewhat less support for inter-municipal associations, which would remove key competencies from the local to the regional level by sharing these with other municipalities. The engineers and water associations also strongly prefer municipal coordination, with inter-municipal associations being the clear winner. Despite their partiality to municipality-based organizations, water technicians are cautious in their assessment. This might result from the fact that the water technicians would be the actors most directly affected by changes in the type of organization. Though national actors and interest groups do not support organizational structures that exclusively focus on the municipality as a service provider, they more strongly endorse institutionalized coordination among municipalities in terms of an inter-municipal association. Overall, the broad support for inter-municipal associations across actor categories implies a demand for enhanced coordination among municipalities for water supply.

Table 2 summarizes the results from the survey for the six types of organizations. We see a clear preference for public organizations, that is, those under public law and closely tied to the municipality, retaining a degree of democratic control. Our data show demand and willingness for enhanced inter-municipal coordination, as expressed in the broad approval of the inter-municipal association. However, preferences for maintaining the link to the municipal level remain strong, indicating potential opposition to reforming traditional direct public management forms into more delegated ones.
Conversely, organizational types that have very high degrees of autonomy from the political system, both in terms of financial authority and democratic control, are not preferred. Overall, the actors tend to favor the solutions they know best, i.e., more direct public management rather than delegated (private) management. These results cohere with other research indicating a trade-off between democratic control and operational discretion within the characteristic of organizational autonomy [22,31]. Indeed, on the one hand, it is particularly the prospect of decreasing democratic control that seems to lower the preferences for alternative organizations such as the public joint-stock corporation and the PPP. This value of democratic control is especially strong at the local level, as the water technicians and municipalities prefer non-autonomous organizations, such as public bureaus, while the national and regional actors favor inter-municipal associations. On the other hand, in terms of structural reforms, all actors overall prefer an inter-municipal association without a major shift in autonomy over the status quo of direct public management (public bureau and contractual consortium).

Discussion

Previous work has also found that, particularly in a country like Switzerland with a strong direct-democratic tradition, organizations with high democratic control are preferred for water-related services [26]. Similarly, a national survey by the Swiss Gas and Water Industry Association, representative of the Swiss population, has shown that 93% of the population is against water privatization, which includes private legal forms [28]. Likewise, a new water law in the canton of Zurich, Switzerland, was turned down in 2019 by a popular vote because it would have allowed PPPs (https://www.nzz.ch/zuerich/wassergesetz-privatisierungsgegner-in-stadt-undland-ld.1458822?reduced=true (accessed on 12 June 2022)). The general anti-privatization argument is that the public water systems work well: the population is satisfied with the quality of drinking water and is afraid that privatization would lead to lower quality and higher prices [26]. In light of our survey results, we could infer that, in the Swiss context, the factors of governmental control and a mismatch between private and public interests appear to be central, and, in selecting an organization that would be likely to be implemented, these factors might need to be weighted more than others. Despite reform pressures and research indicating the benefits of increasing coordination and autonomy [2,3], this research shows how local, regional, and national actors in Switzerland do not necessarily prefer organizational types with decreased governmental control. Indeed, decision-maker and stakeholder preferences in this study seem to subscribe to the logic that public services should either be directly or indirectly provided by the government through direct or delegated public management, that is, by public organizations integrated into the local public administration or through inter-municipal associations. Given that the survey was conducted seven years prior to this publication, it is important to reflect on what has happened since then. According to the Canton of Basel-Landschaft, not much has changed. There have been incremental changes in some areas, where even the inter-municipal route was difficult to pursue. Instead, the Canton has facilitated a dialogue among the stakeholders in order to find pragmatic solutions.
Given the challenge of finding an appropriate and preferred organizational type, the actors have instead focused on technical solutions, e.g., connecting physically but not organizationally. The Canton hopes that this technical coordination will eventually also lead to organizational coordination.

Conclusions

This article analyzed actors' preferences for different types of organizations in the water sector. We argue that actor preferences in a specific context are important for sustainable solutions: when the key actors who manage water infrastructure, or are directly affected by it, support changes, this eases the realization and implementation of these reforms. We first presented different types of organizations with varying degrees of coordination and organizational autonomy, two dimensions derived from the administrative consolidation and NPM literature, respectively. We then conducted a survey that included local, regional, and national decision-makers and stakeholders, that is, the key actors in Swiss water service delivery. We asked those actors which of the five organizational types, or dimensions thereof, they preferred for the respective jurisdiction in Switzerland they are responsible for.

The results of this study show that the stakeholders strongly prefer public organizations in contrast to the other, more autonomous and private types. Specifically, the organizational type most preferred by the stakeholders is the inter-municipal association. According to the literature, this form of inter-municipal coordination might be a promising form of administrative consolidation, as it enables the pooling of resources, the internalization of externalities, shorter decision-making processes, access to external funds, and a degree of longer-term financial planning, while retaining a degree of governmental oversight [20,22]. Nonetheless, public bureaus and contractual consortia are also often preferred in the Swiss context, mostly by local actors whose interests are vested in democratic values and control. Yet, according to the literature, such organizations are not expected to perform sustainably, mainly because of lengthy decision-making processes, a lack of access to external funds, and short-term financial planning [22,31,32]. Here, we find a classic trade-off between a focus on results, as promoted by NPM-driven public sector reforms, and democratic values in a given context. This trade-off is most significant at the local level, where municipalities and local actors prefer the contractual consortia over the inter-municipal association, and less so at the regional and national levels, where the actors favor the inter-municipal association the most.

The Swiss water sector provides an interesting context to assess administrative reforms and changes in the type of organization. The water supply structures in Switzerland are traditionally decentralized, based on the principle of subsidiarity and direct public management [26]. However, current developments, such as urbanization, aging infrastructure, or natural disasters, put the water supply under pressure, call for more sustainable and long-term solutions, and challenge the municipalities to join forces and to become (more) professionalized through legal, financial, and democratic autonomy.
Switzerland is, furthermore, an interesting case for reflecting on sustainability, effectiveness, and efficiency in light of actors' preferences, as direct-democratic instruments and the legal system create a multitude of veto points where (local) decision-makers and stakeholders can block an administrative reform and perpetuate the status quo. Said differently, the aspect of preference seems to be (almost) as important as the fulfillment of legally defined goals in such a system. Future research is needed that examines actors' preferences in other political systems, where the public has different or less direct democratic instruments at its disposal to block decision-making or implementation than in Switzerland. Further analyses would also benefit from connecting an assessment of actors' preferences and goal achievement with the implications for sustainability.

Conflicts of Interest: The authors declare no conflict of interest. The funders played a role in the design of the study (in terms of selecting the regions) but played no role in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Realization of wire selection platform based on three dimensional visualization technology : A three-dimensional route-selection platform for power transmission lines is constructed using modelling technology and three-dimensional scene-rendering technology, combined with digital elevation data, digital orthoimages, and a geographic information system. Application to the design and route selection of a 330 kV line project (circuits I and II) shows that, when three-dimensional visualization technology is applied to the optimized route selection of power transmission lines, the design engineers can view the three-dimensional scene of the line clearly and vividly, optimize the route, save on line investment, reduce the labour intensity of the personnel, and ultimately improve the design quality of the line.

Introduction

Route selection and surveying are extremely important steps in the design of power transmission lines; the quality of the route-selection plan has an important influence on the economic and technical indices of the line and on its construction and operating conditions. The selection of a power transmission line route is based on spatial geographic information. The traditional route-selection method uses medium- and small-scale topographic maps for a rough selection, and the plan is then refined and optimized through field investigation and measurement. Because the 1:10,000 and 1:50,000 topographic maps currently used in our country were mostly surveyed in the 1980s or earlier, they are badly outdated and cannot reflect the current ground conditions at design time [1]. At the same time, a topographic map is not intuitive, and its extent is limited, so the designer can only search for the optimal route within a corridor about 30 km wide along the line. It is thus difficult for the traditional route-selection method to select the ideal route plan, and it cannot meet the demands of power construction. How to overcome the disadvantages of topographic-map-based route selection is a problem that power-line designers in our country have paid close attention to.

Following the development of modern survey and measurement technology and the appearance of versatile commercial survey image data, the required data are now easy to obtain. If three-dimensional visualization technology is applied to the route selection of power transmission lines, then a designer viewing the terrain model on a computer can inspect the ground as conveniently and intuitively as from the air. This makes it possible to optimize the route plan over a large area, to see the relative positions of the line and various buildings, roads, railways, etc., to obtain results that are difficult to achieve with the traditional route-selection method, and to reduce the labour intensity of the survey and design personnel [2].

Three dimensional visualization technology

At present, three-dimensional visualization technology is gradually being applied in every field of society and daily life, such as digital city construction, military applications, environment monitoring, scenic-spot planning, geology and mining, hydrogeological work, traffic monitoring, real-estate development, and medical aid.
has greatly advanced the development of three-dimensional visualization, simulation, and virtual technology. Three-dimensional route selection for power transmission lines effectively combines the digital elevation model, the digital orthophoto image, three-dimensional model data, and other GIS data on a three-dimensional GIS platform, reproducing the real conditions of the transmission-line construction environment. Geographic information system A Geographic Information System (GIS) is an important tool for acquiring, processing, managing, and analyzing geographic spatial data. Its main functions include data input and editing, data management, data processing, and data display and output. A three-dimensional GIS platform can present ground, surface, underground, and underwater information in an integrated way, solving problems such as the seamless integration of three-dimensional CAD design results with the three-dimensional spatial information platform and the integration of two- and three-dimensional data. The EV-Globe platform integrates the latest geographic-information and three-dimensional software technology. It offers integrated management of large-area, massive, multi-source data and fast real-time three-dimensional roaming; supports three-dimensional spatial query, analysis, and calculation; can be integrated with traditional GIS software; provides basic imagery covering the whole globe; and allows a three-dimensional spatial information service system to be built conveniently and quickly, enabling a rapid extension from a two-dimensional GIS to a three-dimensional one. It is a new generation of large-scale spatial information service platform. Parameterization modelling technology Parameterized modelling controls the size and other information of a model through parameter data. Its main characteristic is dimension driving: the dimensions of a parameterized model are expressed by relationships rather than by fixed values. When one parameter value is changed, all dimensions related to it change automatically, generating a new model of the same type. By modifying dimensions, the user can produce a series of part models with the same shape but different specifications. A parameterized model represents the geometry as a geometric-constraint model consisting of geometric elements and the constraint relationships between them. The key step in parameterized modelling is establishing these geometric constraints, which comprise topological constraints and dimensional constraints. Topological constraints describe the structure of the product, expressing relationships between geometric elements such as parallelism, symmetry, and perpendicularity; these relationships remain unchanged while the dimensions drive the graph. Dimensional constraints express positional relationships between geometric elements through dimension annotations such as distances, angles, and radii, as shown in Fig. 1; they are the objects of parametric driving. The result is serialized design: by modifying the dimensions of the graph, new variants are generated while the topology of the original graph is preserved.
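To make the dimension-driving idea concrete, here is a minimal Python sketch (the class and the specific ratios are illustrative assumptions, not taken from the platform described in this paper): dependent dimensions are expressed as relationships to a driving parameter, so changing that parameter regenerates a consistent model of the same type.

```python
from dataclasses import dataclass

@dataclass
class TowerModel:
    """Minimal parameterized model: dimensions are relationships, not fixed values."""
    nominal_height: float          # driving parameter (m)
    base_ratio: float = 0.25       # dimensional constraint: base = ratio * height

    @property
    def base_width(self) -> float:
        # Dimensional constraint expressed as a relationship; it updates
        # automatically whenever nominal_height changes (dimension driving).
        return self.base_ratio * self.nominal_height

    @property
    def crossarm_span(self) -> float:
        # Another dependent dimension tied to the same driving parameter.
        return 0.4 * self.nominal_height

# Generating a series of same-shape, different-size models by changing one parameter:
for h in (36.0, 42.0, 48.0):
    t = TowerModel(nominal_height=h)
    print(f"height={h:.0f} m  base={t.base_width:.1f} m  span={t.crossarm_span:.1f} m")
```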
Graphic modelling technology Graphic modelling is an important part of three-dimensional model construction. While some equipment with complicated and varied shapes can be modelled through parameterized modelling, graphic modelling compensates for the shortcomings of parameterized modelling and is therefore an essential complementary approach. In particular, when parameterized modelling cannot achieve a high-precision representation of the complicated parts of some equipment and the simulation requirements on that equipment are high, the model can only be produced with graphic modelling. A model built with graphic modelling technology carries corresponding property parameters of two kinds. Geometric parameters record the spatial relationships of the nodes of the three-dimensional model, such as the overall size and the inner-hole diameter; electrical parameters record the electrical properties of the model, such as the equipment type and material information, as shown in Fig. 2. Construction of power transmission line scene During large-scene rendering of the power transmission line, the graphic rendering algorithm is optimized in three parts: rendering of the terrain and topography three-dimensional scene model, rendering of three-dimensional models of crossed objects, and rendering of electrical-equipment models [3][4][5]. Rendering of the crossing object three dimensional model and electrical equipment model Besides the geographic-information data, the three-dimensional simulation scene contains massive data for additional objects on the ground: crossed objects (houses, roads, power lines, etc.) and the equipment of the transmission line itself (transmission towers, fittings, etc.). The geographic-information data and these additional models together constitute the complete three-dimensional simulation scene of the transmission-grid project. Model rendering must balance rendering efficiency over the large scene against the level of detail of the models, so the models are processed by sectional loading and graded display: models in the whole three-dimensional scene are loaded section by section according to the viewing distance and the extent of the current view. For the newly built 330 kV I and II circuit line project, the whole line consists of two single circuits erected in parallel. The total length of the preliminary design route is 72.5 km; the terrain along the line is mainly aggraded valley plain and desert, as shown in Table 1.
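As a small illustration of the two parameter groups carried by a graphically modelled object (the field names and sample values below are illustrative assumptions, not the platform's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class EquipmentModel:
    """A graphically modelled object carrying both parameter groups."""
    name: str
    # Geometric parameters: spatial relationships and sizes of the model nodes.
    geometry: dict = field(default_factory=dict)    # e.g., overall size, inner-hole diameter
    # Electrical parameters: electrical properties of the modelled equipment.
    electrical: dict = field(default_factory=dict)  # e.g., equipment type, material

tower = EquipmentModel(
    name="suspension tower (hypothetical type)",
    geometry={"overall_height_m": 42.0, "root_spread_m": 8.4},
    electrical={"voltage_class_kV": 330, "material": "steel"},
)
print(tower.electrical["voltage_class_kV"])  # -> 330
```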
Table 1. Terrain units along the route (full name and altitude, m). Import of geographical information data Geographic-information data are the GIS graphical data of the map used for route selection of the overhead line. By type they consist of vector data and raster data; by category they are classified into three kinds: basic geographic data, power-specific data, and engineering data. Three-dimensional route selection of a power transmission line starts from the processing of the three-dimensional geographic-information data. The three-dimensional geographic data source for the project area is prepared with data-processing tools: according to the known coordinate system of the project, the longitude and latitude parameters of the area are entered in the data-processing settings; the central meridian is then set, the corresponding coordinate mode is entered, and the zone-division method and the central meridian line are selected. Once the three-dimensional digital route-selection system is opened, the imported imagery is positioned in the three-dimensional GIS scene [6]. Route optimization Using three-dimensional visualization on the GIS platform together with satellite and navigation imagery, the preliminary design route plan is optimized on the basis of collected documents and site investigation, as shown in Tables 2 and 3. The following principles are considered during optimization: comprehensively weigh construction and operation factors and keep the line close to roads where possible; avoid houses and optimize the route in sections with large volumes of house demolition; select the positions of the various crossings carefully, since the positions of road, river, and power-line crossings strongly influence the project cost, and crossings of existing or under-construction transmission lines, especially at high voltage classes, should be reduced as far as possible; and take full account of the terrain and forest cover along the line. Rendering of terrain and topography three dimensional scene model The geographic-information data for terrain rendering consist of DEM elevation data and DOM image data. To reduce the load that such large data volumes place on memory and other hardware resources, a blocking and graded rendering technique is applied to the visualization of the terrain. To switch quickly between geographic data of different levels and different blocks, a pyramid model and a quadtree structure are used when pre-processing and storing the original geographic data, as shown in Fig. 3. Figure 3. Processing flow of DEM and DOM data. The level of detail of a model is controlled precisely by the viewing distance: at long range only a low-precision model and the outer profile frame of the model are rendered, while at short range the finest detail of the model is loaded and rendered. In this way large numbers of models can be displayed in the same scene, as shown in Fig. 4. Figure 4. Fine-detail rendering sample of a model.
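A minimal sketch of the quadtree pyramid and distance-graded display described above, assuming a simple "refine while the camera is close relative to tile size" rule; the tile layout and thresholds are illustrative, not the platform's actual implementation:

```python
import math

class TerrainTile:
    """One node of a quadtree pyramid over DEM/DOM data."""
    def __init__(self, x0, y0, size, level, max_level):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level
        self.children = []
        if level < max_level:
            half = size / 2
            self.children = [
                TerrainTile(x0 + dx * half, y0 + dy * half, half, level + 1, max_level)
                for dx in (0, 1) for dy in (0, 1)
            ]

    def select(self, cam_x, cam_y, out):
        """Graded display: refine a tile only when the camera is close to it."""
        cx, cy = self.x0 + self.size / 2, self.y0 + self.size / 2
        dist = math.hypot(cam_x - cx, cam_y - cy)
        # Refine while the tile looks "large" from the camera; otherwise
        # render this coarse level (low-precision model / outer profile).
        if self.children and dist < 2.0 * self.size:
            for c in self.children:
                c.select(cam_x, cam_y, out)
        else:
            out.append((self.level, self.x0, self.y0, self.size))

root = TerrainTile(0.0, 0.0, 72_500.0, level=0, max_level=5)  # ~72.5 km route extent
tiles = []
root.select(cam_x=10_000.0, cam_y=8_000.0, out=tiles)
print(f"{len(tiles)} tiles selected; finest level used: {max(t[0] for t in tiles)}")
```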
Compared with the traditional two-dimensional geological map of the power transmission line, the three-dimensional scene provides an important reference for tower arrangement and line positioning. The three-dimensional GIS route-selection platform greatly improves the working efficiency of the design personnel, which helps to improve the design quality of the line and to guarantee that the engineering technical indices are achieved.
2,793
2016-01-01T00:00:00.000
[ "Engineering" ]
Detection of normal transitions in a hybrid single-phase Bi2223 high temperature superconducting transformer by using the active power method and a magnetic flux detection coil The authors have been developing a hybrid single-phase Bi2223 high temperature superconducting (HTS) transformer for use in an AC current source with a rated current of over 500 A. Its primary coil is a copper coil and its secondary coil is a Bi2223 HTS coil. In this paper, the authors propose a new method for detecting normal transitions in the secondary coil, using the active power method together with a magnetic flux detection coil attached to the inside of the secondary coil. In the proposed method, normal transitions are detected by measuring the active power dissipated in the secondary coil; the voltage induced in the magnetic flux detection coil by the primary flux and the leakage flux of the secondary coil makes it possible to calculate the active power dissipated in the secondary coil alone. Experimental results for a hybrid single-phase Bi2223 HTS transformer show that the proposed method can detect normal transitions in its secondary superconducting coil. Introduction The authors have been developing a small and light AC power source with a hybrid single-phase Bi2223 HTS transformer for supplying a large current. The transformer consists of a primary copper coil and a secondary Bi2223 HTS coil. It is important to detect normal transitions in the secondary coil to protect it from excessive heating in the normal zone. The authors previously presented the active power method for detecting normal transitions in a superconducting coil [1]. In the active power method, normal transitions are detected as an active power signal in the superconducting coil. The hybrid transformer, however, has a primary copper coil and an iron core, so no-load losses are always consumed in the transformer, and the conventional active power method cannot detect normal transitions accurately because of these no-load losses [1]. The authors therefore propose a new detection method that combines the active power method with a magnetic flux detection coil attached to the inside of the secondary coil. The magnetic flux detection coil makes it possible to measure the active power dissipated in the secondary coil alone. This paper demonstrates the usefulness of the proposed method for detecting normal transitions in the hybrid HTS transformer. Principle of detection of normal transitions by using a magnetic flux detection coil In the hybrid single-phase Bi2223 HTS transformer, the copper loss in the primary coil and the iron loss are always consumed as no-load losses. In the conventional active power method, these no-load losses are indistinguishable from active power due to normal transitions, so false detections may occur. In the proposed method, the voltage of the magnetic flux detection coil makes it possible to calculate the active power dissipated in the secondary coil alone. The mounting position of the magnetic flux detection coil and an equivalent circuit referred to the primary side of the hybrid single-phase Bi2223 HTS transformer are shown in Figures 1 and 2, respectively.
In Figure 2, v1 is the primary voltage, i1 the primary current, r1 the primary resistance, x1 the primary leakage reactance, i0 the excitation current, g0 the excitation conductance, b0 the excitation susceptance, i2 the secondary current, x2 the secondary leakage reactance, r2 the secondary resistance generated after a normal transition, v2 the secondary voltage, Z the secondary load impedance, a the turn ratio of the primary coil to the secondary coil, a' the turn ratio of the magnetic flux detection coil to the secondary coil (a = a'), and vt the voltage of the magnetic flux detection coil. As shown in Figure 1, the magnetic flux detection coil is mounted to match the secondary coil on the inside of the bobbin, so an induced voltage is generated in it by the primary flux and the leakage flux of the secondary coil. Its primary-side conversion a'vt is shown in Figure 2; the turn ratio a equals a'. The secondary resistive voltage is detected as the difference between a'vt and av2 (the primary-side conversion of v2), and the active power dissipated in the secondary coil is calculated as

P2' = (1/T) ∫0→T (a'vt − av2)(i2/a) dt. (1)

In this method, the active power P2' is not affected by the primary winding resistance r1 or the excitation conductance g0. Moreover, the protection system becomes simpler than the conventional one because the detection can be achieved by measuring only three signals: the secondary voltage, the secondary current, and the voltage of the magnetic flux detection coil. Protection tests of the hybrid HTS transformer In order to verify the above method, detection and protection tests for a hybrid single-phase Bi2223 HTS transformer were carried out. The configuration and specifications of the transformer are shown in Figure 3 and Table 1. The primary copper coil is a square solenoid coil to suppress leakage flux, and the secondary HTS coil is a cylindrical solenoid coil to suppress bending stress. The cooling container is made of styrene foam and has a cylindrical bore for the iron core, so only the secondary HTS coil is cooled in liquid nitrogen. The protection circuit is shown in Figure 4. In order to reduce electromagnetic noise in the calculated P2', the signal is passed through a low-pass filter. When the resultant active power P2' reaches a specified threshold Pth', the thyristor switches are turned off and the transport current is shut off. In the protection tests of the transformer, a transport current of 21 A peak at 60 Hz was supplied to the primary coil, producing a current of 500 A peak in the secondary coil. The secondary side of the transformer was connected to a resistor of 1.1 mΩ (= Z). Normal transitions were induced by a heater mounted on the secondary coil. The tests for normal transitions in the secondary coil were carried out and the results are shown in Figure 5. Figures 5 (a)-(d) show waveforms with expanded time axes in the superconducting state. Figure 5 (a) shows the primary current of 21 A peak at 60 Hz, and (b) shows the secondary current of 500 A peak at 60 Hz, transformed according to the turn ratio of the transformer as mentioned above. Figure 5 (c) shows the voltage across the secondary coil, which equals the voltage across the connected resistor. Figure 5 (d) shows the voltage across the magnetic flux detection coil, i.e., the voltage induced by the primary flux and the leakage flux of the secondary coil. Figure 5 (e) shows the active power signal P2' in the secondary coil before and after the normal transition.
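The following numerical sketch illustrates Equation (1) on synthetic waveforms; the turn ratio, the normal-zone resistance, and the sampling setup are assumed values chosen to be consistent with the test conditions reported here, not measured data. The active power is the cycle average of the primary-referred secondary resistive voltage times the primary-referred secondary current.

```python
import numpy as np

def active_power_p2(vt, v2, i2, a, a_prime):
    """Equation (1): cycle-averaged power dissipated in the secondary coil only.

    vt : sampled voltage of the magnetic flux detection coil
    v2 : sampled secondary voltage
    i2 : sampled secondary current
    a, a_prime : turn ratios (a == a_prime in the proposed design)
    """
    v_resistive = a_prime * vt - a * v2   # primary-referred secondary resistive voltage
    return np.mean(v_resistive * (i2 / a))

# Synthetic 60 Hz example: a small secondary resistance r2 appears after a transition.
fs, f = 60_000, 60.0
t = np.arange(0, 10 / f, 1 / fs)
i2 = 500.0 * np.sin(2 * np.pi * f * t)     # 500 A peak secondary current
r2 = 2e-4                                   # ohms, normal-zone resistance (assumed)
a = a_prime = 24.0                          # assumed turn ratio (~500 A / 21 A)
v2 = 1.1e-3 * i2                            # 1.1 mOhm load resistor
vt = (a / a_prime) * (v2 + r2 * i2)         # detection-coil voltage consistent with (1)

p2 = active_power_p2(vt, v2, i2, a, a_prime)
print(f"P2' = {p2:.1f} W")  # ~ r2 * Irms^2 = 2e-4 * (500/sqrt(2))^2 = 25 W
```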
In the superconducting state, until about 150 s, P2' remained constant at a small value. It then increased drastically because of the secondary resistance generated by the normal transition. The signal contains no no-load losses, so the normal transition can be detected with a higher SN ratio than with the conventional method [1]. The thyristor switches were turned off when P2' reached the specified threshold Pth' = 25 W [2]. Figure 5 (f) shows the temperature of the normal zone near the heater on the secondary coil. The maximum temperature of the secondary coil was suppressed to 178 K, which is lower than the permissible temperature [3]. From the test results, it was verified that the protection system based on the proposed method worked successfully for the hybrid HTS transformer. The small value of P2' seen in Figure 5 (e) in the superconducting state is presumed to be a signal due to AC loss in the secondary coil. This signal is much smaller than the threshold Pth' and therefore causes no false detection. Figure 5. Test results for the hybrid single-phase Bi2223 HTS transformer. Conclusions The authors proposed a detection and protection system that combines the active power method with a magnetic flux detection coil to detect normal transitions in a hybrid single-phase HTS transformer. The system detects only the normal transitions in the secondary HTS coil, regardless of the resistance of the primary copper coil and the iron loss. The usefulness of the system for the hybrid HTS transformer was verified through the experimental results.
1,936.2
2017-07-26T00:00:00.000
[ "Physics" ]
Scalable Lightweight Protocol for Interoperable Public Blockchain-Based Supply Chain Ownership Management Scalability prevents public blockchains from being widely adopted for Internet of Things (IoT) applications such as supply chain management. Several existing solutions focus on increasing the transaction count, but none of them address the scalability challenges introduced by integrating resource-constrained IoT devices with these blockchains, especially for the purpose of supply chain ownership management. Thus, this paper solves the issue by proposing a scalable public blockchain-based protocol for the interoperable ownership transfer of tagged goods, suitable for use with resource-constrained IoT devices such as widely used Radio Frequency Identification (RFID) tags. The use of a public blockchain is crucial for the proposed solution, as it is essential to enable transparent ownership data transfer, guarantee data integrity, and provide the on-chain data required by the protocol. A decentralized web application developed using the Ethereum blockchain and an InterPlanetary File System is used to prove the validity of the proposed lightweight protocol. A detailed security analysis is conducted to verify that the proposed lightweight protocol is secure from key disclosure, replay, man-in-the-middle, de-synchronization, and tracking attacks. The proposed scalable protocol is proven to support secure data transfer among resource-constrained RFID tags while being cost-effective at the same time. Introduction Scalability is the main concern for public blockchain-based Internet of Things (IoT) applications. As the number of IoT devices increases each year, and with the emergence of the 5G network, this trend is expected to accelerate, driven by the release of new generations of IoT devices. Existing public blockchains try to solve scalability issues using different approaches, such as on-chain (e.g., sharding) and off-chain (e.g., sidechains, layer-2 scaling) methods, or by using entirely different data structures (e.g., the Directed Acyclic Graph used in IOTA), as discussed in [1]. Ethereum 2.0, currently known as the Consensus Layer, is expected to offer a throughput of up to 100,000 transactions per second (TPS) once sharding is implemented. One of its existing technologies, rollups, is already used to improve scalability through layer-2 protocols [2]. This is achieved by offloading heavy computational processes from MainNet to a rollup-specific chain, which in turn speeds up the transactions. Two types of rollups were introduced by the Ethereum 2.0 blockchain: optimistic rollups and zk-rollups. The former have a long transaction finality time due to their fraud-proof mechanism, which is used to detect incorrectly calculated transactions. In contrast, zk-rollups offer fast finality but require heavy computation for the proving system (e.g., Zk-SNARK) [3]. Both of these rollup solutions have their own weaknesses; therefore, instead of improving them, there is a need for an alternative scaling solution that can be supported by IoT devices. There are several challenges, including the use of resource-constrained IoT devices themselves. Typically, supply chain management solutions use proprietary low-cost Radio Frequency Identification (RFID) tags for goods tracking and ownership management. These tags are classified as Class I IoT devices, meaning they have limited resources and processing capabilities to support complex cryptographic algorithms [1].
In order to enable more transparent, secure, and efficient supply chain management of RFID-tagged goods, this paper proposes a scalable protocol that allows for a secure batch ownership transfer of these tagged goods using the Ethereum public blockchain. While Ethereum 2.0 with rollups significantly reduces overall transaction fees, the proposed solution can decrease them further, as only a single fee is charged for managing a batch of IoT devices, ensuring increased scalability. The proposed scalable protocol is designed using a lightweight cryptographic algorithm to protect resource-limited IoT devices from common security attacks, including key disclosure, replay, man-in-the-middle, de-synchronization, and tracking attacks [4]. The protocol is made scalable with the help of an InterPlanetary File System (IPFS) and is integrated with a public blockchain to perform transparent IoT device ownership data transfer in batches using only a single transaction. The integration of a public blockchain into the proposed solution is essential for transparent ownership data transfer, data integrity, and access to on-chain data. The main contributions of this paper are as follows: 1. A novel scalable public blockchain-based lightweight protocol using bitwise exclusive-OR and simple permutation operations is presented. Its purpose is to perform a secure batch ownership data transfer associated with resource-limited RFID tags; 2. The proposed scalable lightweight protocol is able to protect RFID-tagged goods in the supply chain system from key disclosure, replay, man-in-the-middle, de-synchronization, and tracking attacks; 3. The proposed lightweight protocol offers partial transparency for interoperable supply chains, wherein the public can only view transaction records; only legitimate owners are allowed to view the full supply chain details through the IPFS; 4. The proposed protocol allows offline data transfer in batches, which further reduces transaction costs. The remainder of the paper is organized as follows: Section 2 describes related work. Section 3 presents the designed scalable lightweight protocol, together with a proof of concept in Section 4. Sections 5 and 6 demonstrate the theoretical and formal analysis of the designed protocol. Section 7 analyzes the performance of the proposed protocol, and Section 8 concludes the paper. Related Work Public blockchains specifically designed for tracking supply chain systems, such as VeChain and Waltonchain, tend to have lower throughput and higher transaction fees compared to other high-performance public blockchains that are not focused on supply chain systems, such as Polygon, IOTA, and Solana. The average throughput of the VeChain network is 165 TPS [5], whereas the Waltonchain network throughput is approximately 13.5 TPS [6]. The network latency, i.e., the block generation time in this case, is 10 s for VeChain and around 30.73 s for Waltonchain. These latencies are quite high compared to those of the high-performance public blockchains, as shown in Table 1. VeChain uses a dual-token system to prevent fees from being volatile. In April 2021, these fees were reduced to 1×10^13 Wei (equivalent to USD 0.027 per transaction at the time of writing) to attract enterprise interest in using this blockchain [5]. Unlike VeChain's, Waltonchain's transaction costs fluctuate similarly to fees in the Ethereum network.
Although high-performance public blockchains offer high throughput and relatively low transaction fees, as shown in Table 1, they have not been widely used in resource-constrained and data-sensitive IoT applications. Several research works have been presented to improve public blockchain scalability by proposing scalable storage models [7,8], cross-chain integration protocols [9], and efficient consensus protocols [10][11][12]. Off-chain data storage can improve blockchain scalability, as on-chain data introduces high computational and resource overheads. A blockchain storage model was designed by Chen et al. using a Distributed Hash Table (DHT) and an IPFS to improve its scalability [7]. An IPFS is a distributed file storage protocol that has been used to enable peer-to-peer file sharing for a variety of IoT applications, such as healthcare [8] and supply chains [13]. An IPFS is used not only as off-chain storage to extend blockchain storage or hold sensitive data but also as a tool to achieve interoperability [14,15]. Sidechains can also be used to achieve scalability and interoperability. For example, Rozman et al. proposed a cross-chain integration protocol to create a scalable framework for the Ethereum blockchain and the xDai sidechain network to enable shared manufacturing [9]. However, in order to enable efficient and secure cross-chain transfers between a blockchain's main chain and a sidechain, more work is needed [16]. Although efficient data storage models and cross-chain integration protocols can increase blockchain data throughput, the scalability of blockchains can be improved significantly using lightweight consensus algorithms. The first-generation blockchain (e.g., Bitcoin) and most of the second-generation blockchains (e.g., Ethereum 1.0, Litecoin, Monero, etc.) use the Proof-of-Work (PoW) algorithm to achieve consensus. However, blockchains that use PoW have low throughput; thus, other consensus algorithms, such as Proof of Stake (PoS), have been adopted by newer blockchain generations for better scalability. Some public blockchains provide high scalability with high throughput using scalable consensus algorithms, such as Polygon, which uses PoS, or Solana, which uses Proof of History (PoH). These blockchains have been widely integrated with Ethereum applications to provide higher throughput and lower transaction fees, as shown in Table 1. Some researchers have proposed new consensus algorithms to increase blockchain throughput and reduce communication overhead, such as dynamic PoW [10], the Zyzzyva consensus protocol [11], and the Groupchain consensus protocol [12]. To summarize, public blockchains designed for tracking goods along supply chain systems, i.e., VeChain and Waltonchain, have lower throughput and higher transaction fees compared to high-performance public blockchains that are not focused on supply chains, such as Polygon, IOTA, or Solana. Several approaches, such as off-chain data storage, cross-chain integration protocols, and efficient consensus protocols, have been proposed to improve public blockchain scalability. Additionally, the use of PoS or PoH lightweight consensus algorithms and newly designed consensus protocols can significantly improve public blockchain scalability. However, even though the aforementioned solutions can improve throughput rates, none of them propose a scalable and interoperable supply chain solution for resource-constrained IoT devices.
Thus, this paper addresses this research gap by proposing a scalable lightweight protocol for public blockchain-based interoperable supply chain systems that use resource-limited IoT devices. Lightweight Permutation Operation Several proposed solutions use the concept of permutation to enhance the security of RFID protocols [20][21][22]. However, [20] requires heavy computations that are unsuitable for resource-constrained passive RFID tags. In [21,22], the permutation operations must analyze every bit in a string to rearrange another string; these operations have a complexity of O(n) in Big O notation. The lightweight permutation operation introduced in this paper analyzes only 3 characters of a 64-character hexadecimal string. It removes a certain number of characters from a string and inserts them into a specific position, as shown in Figure 1; each of these three indexing operations has a complexity of O(1). Suppose there are two 256-bit strings A and B (two hexadecimal strings of length 64). The newly proposed lightweight permutation operation Per(A,B) refers to a process in which a certain number of characters of string A are removed from either the left- or right-hand side, based on string B, and then inserted into a specific position of string A, as described in Table 2. Table 2. Control characters of the permutation operation. B0: determines the number of characters to be removed/inserted from/to the string. B25: determines the insert position in the string. B57: determines the remove and insert direction; an odd value means characters are removed from the left-hand side and inserted at the right-hand side of the string, and vice versa for an even value.
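One possible reading of Per(A,B) is sketched below; since Figure 1 and Table 2 leave some details open, the exact index arithmetic (e.g., taking the hexadecimal value of the characters B0, B25, and B57) is our assumption rather than the authors' exact definition.

```python
def per(a_hex: str, b_hex: str) -> str:
    """One possible reading of the lightweight permutation Per(A, B).

    A and B are 64-character hexadecimal strings (256 bits). Three single
    characters of B control the operation (each lookup is O(1)):
      B[0]  -> how many characters to move,
      B[25] -> where to re-insert them,
      B[57] -> direction (odd: take from the left; even: take from the right).
    """
    assert len(a_hex) == len(b_hex) == 64
    n = int(b_hex[0], 16) or 1                  # length of the moved block (avoid 0)
    direction = int(b_hex[57], 16) % 2          # 1 = remove from left, 0 = from right
    if direction == 1:
        block, rest = a_hex[:n], a_hex[n:]
    else:
        block, rest = a_hex[-n:], a_hex[:-n]
    pos = int(b_hex[25], 16) % (len(rest) + 1)  # insert position inside the remainder
    return rest[:pos] + block + rest[pos:]

a = "0123456789abcdef" * 4
b = "f" + "0" * 24 + "7" + "0" * 31 + "3" + "0" * 6
print(per(a, b))  # same 64 characters, rearranged using three O(1) index operations
```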
Scalable Lightweight Protocol The Ethereum blockchain was used in the proof-of-concept stage of this work, since it supports smart contracts at its core; this further allowed creating a decentralized web application for the proposed supply chain system that interacts with these smart contracts. The system involves five parties: tag, reader, supply chain node, public blockchain, and IPFS. Basic supply chain nodes consist of manufacturers, distributors, retailers, and end-users; their functionality is described in Table 3. The proposed protocol is designed using bitwise exclusive-OR and lightweight permutation operations so that it can be used with resource-constrained low-cost RFID tags. In contrast, resource-rich supply chain nodes are designed to perform heavier computations, such as hashing Content Identifiers (CIDs) using the SHA-256 function. In addition, the supply chain node is required to perform the Elliptic Curve Integrated Encryption Scheme on the secp256k1 curve to encrypt or decrypt CIDs and files uploaded to the IPFS. An assumption is made that the communication channel between supply chain nodes is secure, whereas the communication channel between the reader and a tag is insecure. Notations used in the proposed protocol are described in Table 4. Table 3. Supply chain nodes and their functionality. Manufacturer: designing, producing, and delivering products to distributors or retailers. Distributor: distributing products purchased from manufacturers to retailers. Retailer: selling products purchased from distributors to end-users. End-user: purchasing products from retailers. The proposed scalable lightweight protocol consists of two phases: the initial phase and the authentication phase. The initial phase involves steps that need to be performed before the authentication phase can take place. These steps are as follows: 1. Each supply chain node creates an account in the Ethereum blockchain and obtains a public key and a private key; 2. The manufacturer node generates ID and K for each RFID tag and stores the ID and K pairs, together with two generated random numbers, n and q, in a text file called the DATA file; 3. The manufacturer node hashes q||n to compute Hq. It then stores the Hq string as well as the ID and K pair in each RFID tag; 4. The manufacturer node encrypts the DATA file with its public key and uploads the encrypted DATA file to the IPFS. A CID, Qm, generated by the IPFS is returned to the manufacturer node; 5. The manufacturer supply chain node encrypts the Qm with its public key to obtain H.
A transaction is then made on the Ethereum blockchain, with the H string included as input data in the transaction. A transaction hash, Tx, is generated once the transaction is completed; 6. The manufacturer supply chain node can include, in the DATA file, supply chain data to be shared with other supply chain nodes. The authentication phase is conducted when RFID tags communicate with the supply chain node's reader. Data transfer happens during the authentication phase when the DATA file is encrypted with the next supply chain node's public key. In order to prevent the old owner from tracking the RFID transaction, the DATA file is encrypted using the current supply chain node's public key, as listed in steps 5-9 below; in this case, no ownership data transfer happens. The description of the proposed lightweight protocol shown in Figure 2 is as follows: 1. The reader generates a random number, r, and computes Hr by hashing the r value to initiate a session. It then sends a Hello message and the Hr value to the tag; 2. After receiving both messages, the tag generates a random number m. The tag then computes ID_F and K_F from its stored ID and K. Next, the tag computes messages A and B using the ID_F, K_F, and Hq values, as well as the m and Hr values. The tag sends the computed messages A and B to the reader; 3. The reader forwards r and messages A and B to the supply chain node X (i.e., the manufacturer supply chain node); 4. The supply chain node X obtains the latest H value stored on the blockchain from the decentralized web application. It then decrypts the H value with its private key to obtain its Qm value; 5. The supply chain node X obtains the encrypted DATA file from the IPFS. Next, it decrypts the file and computes Hq by hashing q||n, where q and n are obtained from the file. In order to find the correct ID and K pair, the supply chain node X computes ID_F and K_F for the ID and K values stored in the DATA file. It then extracts m from message B using the computed Hq and K_F values: m = B ⊕ Hq ⊕ K_F. (5) It then computes A using the computed Hq, the hash value of r, the extracted m value, and the ID_F value paired with the K_F value. If the computed A is equal to the received A, the correct ID and K pair has been found in the DATA file; otherwise, the current session is terminated; 6. The supply chain node X generates a random number, s. It then updates the ID and K pair and stores them in the DATA file: ID_new = ID ⊕ m ⊕ s. (7) In addition, the random value r is added to the DATA file as the new n value, and the node computes Hq_new by hashing q||r. Supply chain node X encrypts the DATA file with the public key of supply chain node Y (i.e., the distributor supply chain node) and uploads the encrypted DATA file to the IPFS; 7. The IPFS generates a CID, Qm_new, and returns it to the supply chain node X; 8. Supply chain node X encrypts the Qm_new with the public key of supply chain node Y to obtain H_new; 9. Supply chain node X performs a transaction on the Ethereum blockchain and sends the transaction with the H_new string as input data to supply chain node Y; 10. The supply chain node X computes ID_S and K_S. Next, it computes messages C, D, and E, and sends them to the reader; 11. The reader forwards messages C, D, and E to the tag. After receiving the messages, the tag extracts Hq_new from message C using its m value. Based on its computed Hq_new value, it then computes ID_S and K_S.
It extracts s from message D using its computed K_S value. The tag then computes E using the extracted Hq_new and s, together with its computed ID_S value. If the computed E is equal to the received E, the tag proceeds with updating its ID and K pair as well as its Hq string; otherwise, the session is terminated. Supply chain nodes such as manufacturers, distributors, and retailers often prefer to perform multiple tag ownership transfers, since a large number of RFID tags are involved in the process. In a single tag transfer, i.e., when a tagged object is transferred from the retailer supply chain node to the end-user node, a single tag K and ID are stored in the DATA file and encrypted with the end-user's public key. In contrast, for batch transfers, multiple tag K and ID values are stored in the DATA file and encrypted using the supply chain node's public key. Since all of the tags in a batch use the same q and r values, they have the same Hq_new value. After the reader has finished reading all of the tags, the DATA file is encrypted and uploaded to the IPFS. A new Qm returned from the IPFS is encrypted and stored on the blockchain to enable the supply chain node to retrieve the encrypted DATA file during the next authentication phase. Proof of Concept A proof of concept for the proposed scalable lightweight protocol was developed and deployed on the Ethereum Goerli testnet. The smart contract was written in Solidity; its purpose is to manage RFID tag ownership transfer in a supply chain system. The smart contract consists of one external setValue() function and one public custom-defined node structure. The setValue() is a mutator function that stores the node ID (unsigned integer data type) and the encrypted CID, also known as H (byte array data type), as shown in Figure 3. The decentralized web application shown in Figure 4 consists of three sections, described below: 1. Ownership Transfer: (a) File encryption: the uploadipfs() function is called to encrypt the DATA file and upload it to the IPFS. The resultant CID generated by the IPFS is encrypted to obtain H; (b) Ownership transfer: a transaction is made using the setValue() function. The supply chain node ID and the H values are sent as input in the transaction to a designated address. 2. View Transaction: the getvalue() function is called, where data including the timestamp, the sender and receiver addresses, and H are obtained using the Etherscan Ethereum Developers Application Programming Interfaces (APIs); 3. Retrieve File: the getfile() function is called to decrypt H and obtain the CID plaintext, Qm. The uploaded encrypted DATA file is downloaded from the IPFS based on the CID value and then decrypted using a private key.
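For illustration, a node-side client could submit the ownership-transfer transaction roughly as follows using web3.py; the endpoint, contract address, ABI fragment, and key handling are placeholders, and only the setValue(nodeId, H) signature is taken from the text above.

```python
from web3 import Web3

# Placeholder endpoint and deployed address (hypothetical values).
w3 = Web3(Web3.HTTPProvider("https://goerli.example/rpc"))
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000001"
ABI = [{  # ABI fragment assumed to match the setValue(uint256, bytes) mutator
    "name": "setValue", "type": "function", "stateMutability": "nonpayable",
    "inputs": [{"name": "nodeId", "type": "uint256"},
               {"name": "h", "type": "bytes"}],
    "outputs": [],
}]

contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ABI)

def transfer_ownership(node_id: int, h_encrypted_cid: bytes, account: str, private_key: str):
    """Store the node ID and the encrypted CID H on-chain in a single transaction."""
    tx = contract.functions.setValue(node_id, h_encrypted_cid).build_transaction({
        "from": account,
        "nonce": w3.eth.get_transaction_count(account),
    })
    signed = w3.eth.account.sign_transaction(tx, private_key)
    return w3.eth.send_raw_transaction(signed.rawTransaction)  # returns the Tx hash
```

Because the whole batch is represented by one encrypted CID, this single call covers an arbitrary number of tags, which is what makes the transaction count independent of the batch size.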
Security Analysis of the Proposed Protocol The security of the proposed protocol was analyzed against five attacks. Certain assumptions are made based on the Dolev-Yao intruder model to aid in this analysis, described as follows: 1. It is possible for the attacker to initialize communication both with the tag and with the reader; 2. It is possible for the attacker to eavesdrop on, block, and modify the messages sent during the communication sequence between the tag and the reader; 3. The attacker is unable to obtain the asymmetric-cryptography private key of any supply chain node. Key Disclosure Attack An attacker is unable to retrieve the secret information, the ID and K pairs, from the DATA file, because asymmetric encryption is used to encrypt the file. The same applies to the IPFS CID: Qm is also encrypted with the supply chain node's public key to prevent the attacker from obtaining the encrypted DATA file from the IPFS. This is an additional level of protection for the secret ID and K pairs. A random number m is used to encrypt the permutated ID and K values for each new session in the communication channel between the tag and the reader. This random number is not sent in plaintext, as explained previously. To limit the number of random guesses the attacker can make at the value of m, a threshold of three attempts is set. The reader terminates the session if this threshold is exceeded, and a new session is initialized with a new random number r; the tag then has to use the hash value of r and a newly generated random number m to compute messages A and B. The attacker cannot perform brute-force attacks to obtain the ID and K values due to the limited number of trials.
Furthermore, guessing the values of ID and K solely from messages C, D, and E is out of the question, since the random numbers s, m, and Hq_new are used to encrypt these messages, respectively. The Hq_new number is different for each session, since it is computed by hashing the concatenation of q and a random number, r. In addition, ID and K are updated for each new successful session. This increases the difficulty for the attacker of obtaining the ID and K values. Replay Attack An attacker may try to perform a replay attack. The process typically involves capturing and delaying messages, A and B or C, D, and E in this case, and then fraudulently replaying them. Two scenarios are possible, described below; however, both are ineffective. 1. The attacker captures messages A and B and replays them to the reader in the next session. Since the messages are encrypted with new random numbers Hr and m for every new session, the supply chain node is not able to authenticate them; 2. The attacker captures messages C, D, and E and replays them to the tag during the next session. The tag is unable to authenticate the messages because they are encrypted with different Hq_new, m, and s random numbers for each new session. Based on the above, the attacker fails to convince either the reader or the tag to authenticate the replayed messages; therefore, the protocol is resistant to replay attacks. Man-in-the-Middle Attack Man-in-the-middle (MITM) attacks are eavesdropping attacks accomplished by the adversary inserting themselves between the tag and the reader in order to impersonate both parties. The following scenarios demonstrate that the attack would be unsuccessful: 1. An attacker captures messages A and B and blocks them from being sent to the reader. These messages are modified and only then sent to the reader. However, since the attacker was unable to obtain the correct values of K, ID, m, and Hq, the supply chain node is unable to authenticate the modified messages; 2. An attacker captures messages C, D, and E and blocks them from being sent to the tag. These messages are modified and only then sent to the tag. However, the original messages were encrypted with m, s, and Hq_new, respectively, which are unknown to the attacker; therefore, the tag is unable to authenticate the modified messages. Based on the above, the proposed protocol is considered to be secure from man-in-the-middle attacks. De-Synchronization Attack A de-synchronization attack is a type of attack wherein the attacker tries to break the synchronization between the tag and the reader. Several scenarios are possible; however, all of them are ineffective, since RFID tags and readers can still communicate in the following sessions using either the current or the previous versions of the stored values. For example: 1. The attacker interferes and blocks messages A and B from reaching the reader. In this scenario, the reader keeps waiting for messages from the tag; if the messages are not received, the current session is terminated after a certain period of time; 2. The attacker blocks messages C, D, and E from reaching the tag. Thus, the tag is unable to update its data, including the ID and K pair and the Hq string. As a result, the ID and K values stored in the tag differ from those stored in the latest DATA file uploaded by the supply chain node. The supply chain node can obtain the previously encrypted DATA file using the previous H string obtained from the decentralized web application.
The supply chain node then proceeds with steps 5-11 in order to confirm that the Hq, ID, and K values are synchronized between the tag and the supply chain node. The same process applies to attacks that happen during data transfer between supply chain node X and supply chain node Y. This method, unlike other state-of-the-art solutions, allows secret data to be re-synchronized without sending the RFID-tagged goods back to the old owner. Tracking Attack This type of attack is typically used for the unauthorized tracking of RFID-tagged goods. We considered several scenarios that show that the attack is ineffective. Attackers might eavesdrop on a session and obtain K_F ⊕ ID_F ⊕ Hr by computing A ⊕ B. They can then obtain K_F ⊕ ID_F by XORing K_F ⊕ ID_F ⊕ Hr with the Hr obtained at the beginning of the session. The K_F and ID_F values are computed using the proposed lightweight permutation algorithm of Section 3.1, where K and ID are restructured based on the Hr value. The Hr is obtained by hashing the r value, a random number generated at the beginning of each communication session; thus, Hr, as well as the computed K_F and ID_F, is different for each session. Therefore, attackers are unable to trace any tag from the eavesdropped messages. In addition, attackers are unable to extract K_F and ID_F from K_F ⊕ ID_F and, subsequently, unable to extract K and ID from the proposed permutation algorithm. Attackers might obtain messages C, D, and E by eavesdropping on the communication channel between an RFID tag and a reader. Attackers can then obtain K_S ⊕ ID_S ⊕ m by XORing messages C, D, and E. Since m is a random number freshly generated by the tag for each session, attackers cannot extract K_S and ID_S from these messages. In addition, all of the messages C, D, and E are encrypted using the random numbers m or s; thus, attackers cannot perform tracking attacks on the RFID tags. Furthermore, ID and K are updated at the end of the protocol by XORing the random numbers m and s. Note that although ID ⊕ K is equivalent to ID_new ⊕ K_new, ID is not equal to ID_new and K is not equal to K_new. Since newly generated random numbers and permutated ID and K values (i.e., ID_F, K_F, ID_S, K_S) are used to compute the transmitted messages for each new session, attackers are unable to track a tag, because the tag never returns a constant response. Formal Analysis of the Proposed Protocol Theoretical analysis and formal analysis tools, rather than experimental analysis, have always been broadly used to analyze security protocols [23]. Thus, in addition to the theoretical security analysis presented in Section 5, the proposed protocol was further analyzed using the formal analysis tool AVISPA. The protocol is written in the High-Level Protocol Specification Language (HLPSL). Two back-ends of the AVISPA tool were selected to verify the security of the proposed protocol: the On-the-Fly Model-Checker (OFMC) and the Constraint-Logic-based Attack Searcher (CL-AtSe) [24]. The other two back-ends are not included in this formal security verification because they are unable to support the exclusive-OR operations used in the proposed protocol. AVISPA uses the Dolev-Yao model for its analysis, where attackers obtain knowledge of normal sessions after the first run. As shown in Figure 5, the OFMC back-end found no attack trace after searching four nodes in 0.04 s with a search depth of 2. CL-AtSe checks whether there is any reachable state wherein attackers might attack and obtain secret keys.
If there are reachable states, it analyzes each state to determine whether the safety condition holds, i.e., whether attackers are unable to obtain secret keys. The CL-AtSe back-end result shows that no states were reachable for performing security attacks, implying that the protocol is safe, as indicated in Figure 5. The summary results of OFMC and CL-AtSe prove that the proposed protocol is secure from replay and man-in-the-middle attacks. Performance Analysis The proposed protocol is compared with existing supply chain solutions in terms of scalability, transaction cost, interoperability, computational complexity, storage, and security. Scalability Analysis Blockchain scalability can be analyzed based on the number of TPS. Currently, the Ethereum 2.0 blockchain allows for approximately 36.09 TPS. The scalability of the proposed public blockchain-based supply chain management solution can be improved further by performing batch ownership data transfer associated with RFID tags. As explained in Section 3.2, the DATA file is used for storing secret RFID tag data, and a CID is generated after uploading the DATA file to the IPFS. As the transaction input consists only of the supply chain node ID and the encrypted CID, the number of transactions is independent of the number of RFID tags; therefore, the time needed to perform a transaction does not depend on the number of RFID tags. An experiment was conducted using a Lenovo T14s laptop equipped with an AMD Ryzen™ 5 Pro 4650U central processing unit running at a 2.10 GHz base clock speed, with 16 GB of DDR4 random access memory, to analyze the time needed for a transaction to be included in the Ethereum Goerli testnet. For a DATA file with RFID data from a random number of tags between 1 and 1000, the transaction time is consistently between 10 and 20 s. The time needed to scan the 1000 RFID tags and update the DATA file is not analyzed in this section, as it is outside the scope of public blockchain scalability. Transaction Fee Analysis Assuming a supply chain line with 1000 RFID-tagged goods, a transaction was made with the proposed solution to perform the data transfer of those 1000 tags using the Ethereum Goerli testnet. A total of 292,781 gas was used to execute the setValue() function of the Supply.sol smart contract. The transaction fee was 0.014053488 Ether, approximately 16.8 USD at the time of writing. The proposed solution supports the transfer of RFID tags in batches instead of 1000 individual transactions; as a result, it significantly reduces the transaction costs, e.g., by over 99% compared to the individual transaction costs required for 1000 RFID tags. Interoperability The proposed solution provides efficient data management by enabling the sharing of specific data from one supply chain node to another. To achieve efficient interoperability, the proposed solution needs to meet three fundamental privacy requirements: new-ownership privacy, old-ownership privacy, and a solution to the windowing problem. New-ownership privacy is preserved by restricting the old owner's access to the new DATA file uploaded by the new owner; this is achieved by encrypting the DATA file with the new owner's public key and then uploading it to the IPFS. The proposed protocol also guarantees that the new owner cannot track the previous transactions of the tag, because the new owner cannot decrypt the old IPFS CID encrypted using another supply chain node's public key.
To avoid the windowing problem, in which there must be no time slot during which both the old and the new owner can access the tag, the new owner should update the ID and K of the tags as soon as the tagged objects are received from the old owner.

Computational Complexity Analysis

Since the proposed system targets resource-constrained IoT devices, the performance of the RFID tags was analyzed to show that the tags can support the proposed lightweight protocol. According to [1], IoT devices can be categorized into four classes, depending mainly on their processing capabilities and power consumption. Passive RFID tags are Class I devices, which are resource-limited. The total storage cost for the data that a tag needs for the data transfer process is merely the cost of storing the ID, K, and Hq values, which amounts to 768 bits. This storage size can be supported by passive RFID tags with a chip memory capacity of more than 768 bits; such RFID tag chips include the ATA5590 with 1024 bits of user memory and the UCODE HSL with 2048 bits of user memory. In addition, Class I devices have low computational capabilities. They cannot support heavy computation beyond simple bitwise operations, such as the one-way hash functions and asymmetric encryption supported by Class II devices [4]. The computational cost of an exclusive-OR operation, T_xor, is negligible, as it is far below the cost of the aforementioned heavy computations [25]. During the authentication process, an RFID tag has a total computational cost of 13 T_xor.

Security of Raw Data Storage

The security of raw data storage is vital and can be analyzed in terms of data confidentiality, integrity, and availability. To protect data confidentiality, the DATA file that stores the ID and K pairs is encrypted using asymmetric encryption before being uploaded to the IPFS. Although attackers might obtain the encrypted DATA file through the CID, they are not able to decrypt it, because it can only be decrypted using the private key assigned to a specific supply chain node. In addition, the IPFS CID is encrypted, and this encrypted string is stored on the Ethereum blockchain to protect its integrity. To guarantee data availability upon request, all supply chain nodes need to participate as IPFS nodes, ensuring that at least some IPFS nodes stay online at all times to handle IPFS requests. All supply chain nodes also need to pin the CID to ensure that important data is retained.

Smart Contract Security Analysis

Smart contracts are immutable. Thus, before deployment, it is vital to ensure that a smart contract is free from vulnerabilities such as integer overflow/underflow, reentrancy, and denial of service. The designed smart contract, Supply.sol, was analyzed using three security tools: MythX, Slither, and SmartCheck. Supply.sol passed all checks by these tools. MythX is a software-as-a-service platform that provides higher performance and vulnerability coverage than standalone tools such as Slither and SmartCheck. MythX has three analyzers: its static analyzer parses the Solidity abstract syntax tree, its symbolic analyzer detects vulnerable states, and its greybox fuzzer detects vulnerable execution paths. Slither and SmartCheck are both static analyzers, which can detect simple vulnerabilities faster than MythX. The details of the vulnerability checks covered by these security tools can be found in [26][27][28].
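The 768-bit storage figure above is consistent with three 256-bit values; the field width is our assumption for illustration, not a parameter stated in the text. A quick check against the quoted tag memories:

```python
FIELD_BITS = 256                    # assumed width of each of ID, K, and Hq
storage_bits = 3 * FIELD_BITS       # ID + K + Hq stored on the tag
assert storage_bits == 768          # matches the figure quoted above

# User memory of the RFID chips cited in the text
chips = {"ATA5590": 1024, "UCODE HSL": 2048}
for name, capacity in chips.items():
    print(name, capacity >= storage_bits)   # True for both chips
```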
Other related solutions do not provide much information on storage and computational cost. However, as shown in Table 5, our proposed system supports Class I IoT devices and outperforms all other systems in terms of security and transaction fees. Furthermore, our proposed protocol allows both batch and single-tag data transfer; thus, it provides more flexible and efficient data transfer than existing state-of-the-art proposals.

Conclusions

This paper presented a scalable lightweight protocol for public blockchain-based supply chain systems that uses resource-constrained RFID tags and can transfer RFID tag data offline in batches. A lightweight RFID protocol was designed with bitwise exclusive-OR and permutation operations to enable secure communication between RFID readers and tags. A proof of concept, consisting of a decentralized application deployed on the Ethereum public blockchain and an IPFS, was created for a full performance evaluation in a real-world environment. A smart contract was designed and analyzed using formal security tools. The proposed protocol has been proven safe against five attacks, using both theoretical and formal analyses: key disclosure, replay, man-in-the-middle, desynchronization, and tracking. The proposed lightweight protocol has proven to be efficient in terms of security, transaction cost, scalability, interoperability, storage, and computational cost. Future research will include developing ownership-transfer decentralized applications using Non-Fungible Tokens.
Status and prospects of light bino-higgsino dark matter in natural SUSY Given the recent progress in dark matter direct detection experiments, we examine a light bino-higgsino dark matter (DM) scenario ($M_1<100$ GeV and $\mu<300$ GeV) in natural supersymmetry with the electroweak fine-tuning measure $\Delta_{EW}<30$. By imposing various constraints, we note that: (i) For $sign(\mu/M_1)=+1$, the parameter space allowed by the DM relic density and collider bounds can almost be excluded by the very recent spin-independent (SI) scattering cross section limits from the XENON1T (2017) experiment. (ii) For $sign(\mu/M_1)=-1$, the SI limits can be evaded due to the cancellation effects in the $h\tilde{\chi}^0_1\tilde{\chi}^0_1$ coupling, while rather stringent constraints come from the PandaX-II (2016) spin-dependent (SD) scattering cross section limits, which can exclude the higgsino mass $|\mu|$ and the LSP mass $m_{\tilde{\chi}^0_1}$ up to about 230 GeV and 37 GeV, respectively. Furthermore, the surviving parameter space will be fully covered by the projected XENON1T experiment or the future trilepton searches at the HL-LHC.

I. INTRODUCTION

Scrutinizing the mechanism that stabilizes the electroweak scale has become more pressing after the Higgs discovery at the LHC [1,2]. Besides, there is overwhelming evidence for the existence of dark matter from cosmological observations. Identifying the nature of dark matter is one of the challenges in particle physics and cosmology. Weak-scale supersymmetry is widely regarded as one of the most appealing new physics models at the TeV scale. It can successfully solve the naturalness problem of the Standard Model (SM) and also provides a compelling cold dark matter candidate. Among various supersymmetric models, natural supersymmetry is a well-motivated framework (see, e.g., [3][4][5][6][7][8][9][10][11]), which usually implies light higgsinos in the spectrum [12]. If unification of gaugino mass parameters is further assumed, the current LHC bound on the gluino ($m_{\tilde{g}} \gtrsim 2$ TeV [13]) would imply correspondingly heavy winos and binos, resulting in a higgsino-like lightest supersymmetric particle (LSP). However, the thermal abundance of a light higgsino-like LSP is typically lower than the observed dark matter abundance, due to the large higgsino-higgsino annihilation rate. These considerations motivate us to explore the phenomenology of neutralino dark matter in natural SUSY by giving up the gaugino mass unification assumption. One possibility is to allow for a light bino in natural SUSY. Such a mixed bino-higgsino neutralino dark matter can solve the above-mentioned problems of a pure higgsino LSP without worsening the naturalness of natural SUSY. Studies of bino-higgsino dark matter have also been carried out in [14][15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33]. In this work, we will confront the light bino-higgsino dark matter scenario in natural SUSY with the recent direct detection data. In particular, we focus on the light dark matter regime ($m_{\tilde{\chi}^0_1} < 100$ GeV) and attempt to determine the lower limit on the mass of an LSP that saturates the dark matter relic abundance. In natural SUSY, a small µ parameter leads to a certain bino-higgsino mixing, so that the spin-independent/dependent neutralino LSP-nucleon scattering cross sections can be enhanced. We will utilize the recent XENON1T [34] and PandaX-II [35] limits to examine our parameter space.
Since the couplings of the LSP with the SM particles depend on the relative sign ($sign(\mu/M_1)$) of the mass parameters µ and $M_1$, we include both $sign(\mu/M_1) = \pm 1$ in our study and show the impact on the exclusion limits for our scenario. Besides, we explore the potential to probe such a scenario by searching for trilepton events at the 14 TeV LHC. The structure of this paper is organized as follows. In Section II, we discuss the light bino-higgsino neutralino parameter space in natural SUSY. In Section III, we perform the parameter scan and discuss our numerical results. Finally, we draw our conclusions in Section IV.

II. LIGHT BINO-HIGGSINO NEUTRALINO IN NATURAL SUSY

In the MSSM, the minimization of the tree-level Higgs potential leads to the following equation [36]:

$\frac{M_Z^2}{2} = \frac{m_{H_d}^2 - m_{H_u}^2 \tan^2\beta}{\tan^2\beta - 1} - \mu^2,$

where $m^2_{H_{u,d}}$ denote the soft SUSY-breaking masses of the Higgs fields at the weak scale. It should be noted that the radiative EWSB condition usually imposes a nontrivial relation between the relevant soft mass parameters at the high scale in a UV model, such as mSUGRA. However, the scenario studied in our work is the low-energy phenomenological MSSM, in which a successful EWSB is always assumed; in this case, the above-mentioned strong correlation between parameters from the radiative EWSB condition in UV models is not applicable. Using the electroweak fine-tuning measure $\Delta_{EW}$ [6], one can see that the higgsino mass parameter µ should be at most of the order of 300 GeV to satisfy the requirement $\Delta_{EW} < 30$ [37][38][39][40]. Light higgsinos have been searched for through chargino pair production in the LEP-2 experiment [41], which indicates $\mu \gtrsim 100$ GeV. We use this LEP-2 limit as a lower bound on the higgsino mass. However, the relic abundance of a thermally produced pure higgsino LSP falls well below the dark matter measurements unless its mass is in the TeV range. In order to provide the required relic density, several alternative ways have been proposed, such as multi-component dark matter that introduces the axion [42]. On the other hand, without fully saturating the relic density (under-abundance), the higgsino-like neutralino dark matter in radiatively-driven natural supersymmetry with $\Delta_{EW} < 30$ [43] or the natural mini-landscape [44] has been confronted with various (in-)direct detections and is also expected to be accessible to the XENON1T experiment. In our study, we achieve the correct dark matter relic density by allowing a light bino to mix with the higgsinos. The two neutral higgsinos ($\tilde{H}^0_u$ and $\tilde{H}^0_d$) and the two neutral gauginos ($\tilde{B}$ and $\tilde{W}^0$) combine to form four mass eigenstates called neutralinos. In the gauge-eigenstate basis ($\tilde{B}$, $\tilde{W}^0$, $\tilde{H}^0_d$, $\tilde{H}^0_u$), the neutralino mass matrix takes the form

$M_{\tilde{N}} = \begin{pmatrix} M_1 & 0 & -c_\beta s_W M_Z & s_\beta s_W M_Z \\ 0 & M_2 & c_\beta c_W M_Z & -s_\beta c_W M_Z \\ -c_\beta s_W M_Z & c_\beta c_W M_Z & 0 & -\mu \\ s_\beta s_W M_Z & -s_\beta c_W M_Z & -\mu & 0 \end{pmatrix},$

where $c_\beta = \cos\beta$, $s_\beta = \sin\beta$, and $s_W$, $c_W$ are the sine and cosine of the weak mixing angle; $N_{11}$ denotes the bino component of the lightest neutralino mass eigenstate in the diagonalizing mixing matrix N. It can be seen that the SI scattering cross section depends on the relative sign of $M_1$ and µ, and for tan β → 1 the $Z\tilde{\chi}^0_1\tilde{\chi}^0_1$ coupling, which controls the SD cross section, will vanish [16]. However, a low value of tan β is disfavored by the observed Higgs mass in the MSSM.

III. PARAMETER SCAN AND NUMERICAL RESULTS

In our numerical calculations, we vary the relevant parameters in the following ranges. We scan values of $M_1$ up to 100 GeV, since we are interested in the light DM region and attempt to determine the lower limit on the LSP mass. For higher upper values of µ and $M_1$, a heavy mixed higgsino-bino LSP may also produce the right DM relic abundance [20], while the result for the lower bound on the LSP mass obtained in the following calculation will not change.
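The dependence of the bino-higgsino mixing on sign(µ) can be made concrete by diagonalizing the mass matrix above numerically. The sketch below is illustrative only; the benchmark values of $M_1$, µ, and tan β are ours, not scan points from the paper.

```python
import numpy as np

MZ, SW2 = 91.19, 0.231                 # Z mass [GeV], sin^2(theta_W)
sw, cw = np.sqrt(SW2), np.sqrt(1 - SW2)

def neutralino_spectrum(M1, M2, mu, tanb):
    b = np.arctan(tanb)
    cb, sb = np.cos(b), np.sin(b)
    M = np.array([
        [M1,            0.0,           -cb * sw * MZ,  sb * sw * MZ],
        [0.0,           M2,             cb * cw * MZ, -sb * cw * MZ],
        [-cb * sw * MZ, cb * cw * MZ,   0.0,          -mu],
        [sb * sw * MZ, -sb * cw * MZ,  -mu,            0.0]])
    # Takagi diagonalization is needed in general; for real parameters an
    # eigendecomposition of the symmetric matrix suffices to read off
    # masses (up to a sign absorbed into field redefinitions) and mixings.
    vals, vecs = np.linalg.eigh(M)
    order = np.argsort(np.abs(vals))
    return vals[order], vecs[:, order]

for mu in (+200.0, -200.0):            # benchmark: M1 = 50 GeV, tan(beta) = 10
    vals, vecs = neutralino_spectrum(50.0, 2000.0, mu, 10.0)
    N11 = vecs[0, 0]                    # bino fraction of the lightest state
    print(f"mu = {mu:+.0f} GeV: m_chi1 = {vals[0]:.1f} GeV, |N11|^2 = {N11**2:.3f}")
```

Flipping the sign of µ changes the interference between the higgsino components of the lightest state, which is the origin of the cancellation in the $h\tilde{\chi}^0_1\tilde{\chi}^0_1$ coupling discussed above.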
The stop and gluino contribute to the naturalness at loop level; they are expected to satisfy $m_{\tilde{t}_1} \lesssim 2.5$ TeV and $m_{\tilde{g}} \lesssim 3-4$ TeV for $\Delta_{EW} < 30$ [37,45]. By recasting the LHC Run-2 searches with ~15 fb^{-1} of data, it was found that the lower bounds on the stop mass and gluino mass are about 800 GeV [46-51] and 1.5 TeV [52] in natural SUSY, respectively. Given the irrelevance of the third-generation parameters for our neutralino dark matter, we fix the third-generation squark soft masses as $M_{\tilde{Q}_{3L}} = 3$ TeV, $M_{\tilde{t}_{3R}} = M_{\tilde{b}_{3R}} = 1$ TeV and vary the stop trilinear parameter in the range $|A_t| < 2$ TeV for simplicity. The physical stop mass $m_{\tilde{t}_1}$ has to be less than 2.5 TeV to satisfy $\Delta_{EW} < 30$. We also require that each sample reproduce the correct Higgs mass and satisfy vacuum stability [53,54]. The first two generations of squark soft masses and all slepton soft masses are set to 3 TeV. Other trilinear parameters are fixed as $A_f = 0$. We also decouple the wino and gluino by setting $M_{2,3} = 2$ TeV. We impose the following constraints in our scan: (1) The light CP-even Higgs boson masses of our samples should be within the range 122-128 GeV; the package SuSpect [55] is used to calculate the Higgs mass. (2) The samples have to be consistent with the Higgs data from LEP, the Tevatron, and the LHC. (5) The invisible width of the Z boson is required to be less than 0.5 MeV to satisfy the LEP limit. In Fig. 1, we show the samples satisfying the dark matter relic density for $sign(\mu) = \pm 1$. Since a bino-like LSP has rather small couplings with the SM particles, a certain portion of higgsino components is required to meet the observed relic density; otherwise, the universe would be overclosed. Therefore, except for the two resonance regions ($m_{\tilde{\chi}^0_1} \approx M_Z/2$ and $m_{\tilde{\chi}^0_1} \approx m_h/2$), the samples require a sizable higgsino fraction, as shown in Fig. 1. However, such a region will be excluded by the dark matter direct detections, as shown in the following. In Fig. 2, we present the spin-independent/dependent neutralino LSP-nucleon scattering cross sections. The SD cross section is largely determined by Z-boson exchange and is sensitive to the higgsino asymmetry, $\sigma_{SD} \propto |N_{13}^2 - N_{14}^2|^2$. The relic density constraint requires a large higgsino asymmetry, so the SD cross section is enhanced. Therefore, a strong bound on this scenario comes from the PandaX-II (2016) SD neutralino LSP-neutron scattering cross section limits, which rule out about 70% of our samples and exclude the higgsino mass |µ| and the LSP mass $m_{\tilde{\chi}^0_1}$ up to about 230 GeV and 37 GeV, respectively. These lower limits would not change even if we extended the scan ranges of $M_1$ and µ to larger values. The current SD neutralino LSP-proton limits from PandaX and PICO are still weak. Both $sign(\mu) = \pm 1$ scenarios can be completely covered by the projected XENON1T experiment in the future. Besides direct detection, neutralino annihilation in the Sun to neutrinos can also be enhanced by the higgsino component of the LSP. The null results from neutrino telescopes, such as IceCube, have produced a strong bound on the SD neutralino LSP-proton scattering cross sections and have excluded a sizable portion of the parameter space for $sign(\mu) = -1$.
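The naturalness requirement used throughout the scan can be evaluated directly from the tree-level minimization condition. Below is a minimal sketch using the standard definition $\Delta_{EW} = \max_i |C_i|/(M_Z^2/2)$ with $C_{H_d}$, $C_{H_u}$, and $C_\mu$ read off the tree-level relation; the loop contributions that the full measure includes are omitted, and the sample inputs are our own.

```python
MZ = 91.19  # GeV

def delta_ew(mHu2, mHd2, mu, tanb):
    """Tree-level electroweak fine-tuning measure (loop terms omitted)."""
    t2 = tanb**2
    C_Hd = mHd2 / (t2 - 1.0)
    C_Hu = -mHu2 * t2 / (t2 - 1.0)
    C_mu = -mu**2
    return max(abs(C_Hd), abs(C_Hu), abs(C_mu)) / (MZ**2 / 2.0)

# Example: |mu| = 250 GeV already gives Delta_EW ~ 15, comfortably within
# the naturalness requirement Delta_EW < 30 quoted above.
print(delta_ew(mHu2=-(100.0**2), mHd2=(1000.0**2), mu=250.0, tanb=10.0))
```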
Next, we discuss the LHC potential of probing the parameter space of our scenario allowed by the constraints (1-6) and the above direct/indirect detections. In Fig. 3, we plot the decay branching ratios of $\tilde{\chi}^0_2$ and $\tilde{\chi}^0_3$. For $sign(\mu) = -1$, the neutralinos $\tilde{\chi}^0_{2,3}$ mainly decay to $\tilde{\chi}^0_1 Z$. When $Br(\tilde{\chi}^0_2 \to \tilde{\chi}^0_1 Z)$ increases, $Br(\tilde{\chi}^0_3 \to \tilde{\chi}^0_1 Z)$ decreases because of the Goldstone theorem [25]. A similar correlation can be seen in the decay channel $\tilde{\chi}^0_{2,3} \to \tilde{\chi}^0_1 h$. For $sign(\mu) = +1$, however, the neutralino $\tilde{\chi}^0_2$ still dominantly decays to $\tilde{\chi}^0_1 Z$, while the neutralino $\tilde{\chi}^0_3$ preferentially decays to $\tilde{\chi}^0_1 h$. This indicates that the samples with a negative sign of $\mu/M_1$ will produce more trilepton events through the process $pp \to \tilde{\chi}^0_{2,3}(\to Z\tilde{\chi}^0_1)\,\tilde{\chi}^\pm_1(\to W^\pm\tilde{\chi}^0_1)$ than those with a positive sign of $\mu/M_1$, and can be more easily excluded by the null results of electroweakino searches at the LHC. Given the above decay modes, we first recast the LHC searches for the electroweakinos listed in Table I (which lists, for each search, the final states and the source of signal in our scenario) with CheckMATE2 [67]. We generate the parton-level signal events with MadGraph5_aMC@NLO [68] and perform the shower and hadronization with Pythia-8.2 [69]. The fast detector simulation is carried out with the tuned Delphes [70]. We implement the jet clustering with FastJet [71] using the anti-$k_t$ algorithm [72]. We use Prospino2 [73] to calculate the QCD-corrected cross sections of electroweakino pair production at the LHC. Then, we estimate the exclusion limit by evaluating the ratio $r = \max(N_{S,i}/S^{95\%}_{obs,i})$, where $N_{S,i}$ is the number of signal events in the i-th signal region and $S^{95\%}_{obs,i}$ is the corresponding 95% C.L. observed upper limit. A sample is excluded at 95% C.L. if r > 1. After checking all surviving samples, we find that the LHC data in Table I cannot further exclude the parameter space because of the strong direct detection bound on the higgsino mass parameter, µ > 230 GeV. In Fig. 4, we show the prospect of testing our surviving samples through searches for electroweakino pair production in the trilepton final state at the 14 TeV LHC with a luminosity of L = 3000 fb^{-1}. Such an analysis [77] has been implemented in the CheckMATE package. In order to reduce the Monte Carlo fluctuations, we generate 200,000 events for each signal point and evaluate the exclusion at 95% C.L. Therefore, we conclude that our light bino-higgsino neutralino dark matter scenario will be fully tested by either the future XENON1T or HL-LHC experiments.

IV. CONCLUSION

In this work, we examined light bino-higgsino neutralino dark matter in natural SUSY by imposing various constraints from the LEP, dark matter, and LHC experiments. We found that the relative sign of the mass parameters µ and $M_1$ can significantly affect the dark matter and LHC phenomenology of our scenario. For $sign(\mu/M_1) = +1$, the very recent SI limits from the XENON1T (2017) experiment can almost exclude the whole parameter space allowed by the relic density and collider bounds. For $sign(\mu/M_1) = -1$, however, the SI limits can be avoided due to the cancellation effects in the $h\tilde{\chi}^0_1\tilde{\chi}^0_1$ coupling. In this case, a strong bound comes from the PandaX-II (2016) SD neutralino LSP-neutron scattering cross section limits, which can exclude the higgsino mass |µ| and the LSP mass $m_{\tilde{\chi}^0_1}$ up to about 230 GeV and 37 GeV, respectively. Furthermore, the surviving parameter space will be fully covered by the projected XENON1T experiment or future trilepton searches at the 14 TeV LHC with a luminosity of L = 3000 fb^{-1}.
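The exclusion criterion used in the recast can be written in a few lines. This is a schematic of the r-ratio test described above, with made-up signal-region numbers rather than yields from the actual analyses.

```python
def exclusion_ratio(signal_events, s95_obs):
    """r = max_i N_S,i / S95_obs,i over all signal regions."""
    return max(n / s for n, s in zip(signal_events, s95_obs))

# Hypothetical signal yields in three signal regions and the corresponding
# 95% C.L. observed upper limits from the recast analyses.
N_S = [12.4, 3.1, 25.0]
S95 = [15.0, 4.0, 22.0]

r = exclusion_ratio(N_S, S95)
print(r, "excluded at 95% C.L." if r > 1 else "allowed")   # r ~ 1.14
```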
Achieving the ultimate precision limit with a weakly interacting quantum probe The ultimate precision limit in estimating the Larmor frequency of N unentangled qubits is well established and is highly important for magnetometers, gyroscopes, and other types of quantum sensors. However, this limit assumes perfect projective measurements of the quantum registers. This requirement is not practical in many physical systems, such as NMR spectroscopy, where a weakly interacting external probe is used as a measurement device. Here, we show that in the framework of quantum nano-NMR spectroscopy, in which these limitations are inherent, the ultimate precision limit is still achievable using control and a finely tuned measurement.

INTRODUCTION

The field of quantum sensing studies the precision of measuring various physical quantities using quantum protocols. It was shown that employing quantum schemes can greatly improve the precision in different problems, such as atomic clocks, magnetometry, and frequency estimation [1][2][3][4][5]. A typical setting in this field is one in which a measurable quantity, e.g., a frequency, is associated with a quantum register of which many copies are available. The registers can be individually addressed or read out by a strong measurement. When the quantum registers are qubits and the measurable quantity is the Larmor frequency, the ultimate precision limit in a noisy environment is well established and is achievable by performing Ramsey spectroscopy on each register [6,7] (see left side of Fig. 1):

$\Delta\omega = \frac{1}{\sqrt{N}\,t}, \quad (1)$

where N is the number of copies and t is the measurement time. We henceforth refer to the equality in Eq. (1) as the standard quantum limit (SQL). Note that usually the SQL is defined only by the scaling $\Delta\omega_N \propto 1/\sqrt{N}$ [1,4,5,8], while here it is defined as an equality. This distinction is key, since the prefactor can reduce the precision by orders of magnitude, and therefore it discriminates between non-optimal schemes and the fundamental precision limit. However, in some physical scenarios the registers cannot be measured directly; they can only be manipulated by global operations and measured through their weak interaction with an external probe (see right side of Fig. 1). It is unknown whether the limit (1) can be achieved in these settings or, more generally, whether there is a tight bound on the precision of such measurements. A prominent example of this scenario is nano-scale nuclear magnetic resonance (NMR), i.e., detecting the Larmor frequencies of nano-scale samples of nuclear spins [9][10][11][12][13][14][15]. Since the nuclear spins cannot be measured directly, an external probe that interacts with the sample is manipulated and measured to retrieve information about the Larmor frequencies of the ensemble. The nitrogen-vacancy (NV) center is a natural candidate for an external probe, since in the past decade it has been shown to be an excellent magnetometer on the nano-scale [16,17], and many successful experiments have been carried out in platforms similar to Fig. 2 [9][10][11][12][13][14][15]. Here, we provide and analyze a protocol that, in the general setting of an ensemble measurement with a simplified interaction Hamiltonian between the ensemble and the quantum probe, achieves the SQL given by Eq. (1) (up to a small prefactor). The protocol is then extended to the physical setting of quantum NMR spectroscopy, where the interaction Hamiltonian is more involved. We show that our protocol still achieves the SQL up to a small prefactor.
An open question in the field is whether spectroscopy of nuclear spins can be performed efficiently with shallow NVs, i.e., in the limit of strong back-action [15,18]. In our protocol the SQL is reached by utilizing the strong back-action caused by the entanglement formed between the sensor and the ensemble, which thus provides an affirmative (theoretical) answer. Interestingly, this convergence to the ultimate precision limit, despite the weak interaction, stems from the superradiant nature of the interaction. In superradiance, N atoms interact with the same probe, a single mode of radiation; even if the fluorescence of each atom is very weak, the ultimate precision limit of a perfect projective measurement can still be reached given that N is large enough. In our case, N nuclear spins interact with the same quantum probe. The weak interaction corresponds to the weak fluorescence and, equivalently, given a large enough N, the SQL is achievable. An alternative intuition for the behavior of the precision is given by the following reasoning. Whether the sensing is quantum or classical, the acquired signal is

$S(t) = N A \cos(\omega_N t), \quad (2)$

where $\omega_N$ is the nuclei's Larmor frequency and A is the amplitude of the signal generated by a single nucleus. In the quantum case the signal is the population of the sensor, and the amplitude A is replaced by ϕ, the phase acquired from a single nucleus, which is a function of the interaction strength and the interrogation time. In the classical case the signal is just a classical magnetic field (e.g., measured by the current in a coil). The uncertainty of the frequency measurement of the signal (2) scales inversely with the derivative and thus can be written as

$\Delta\omega = \frac{C}{N A t} = \frac{1}{N \phi t}, \quad (3)$

where the first equality holds for either classical or quantum sensing with some constant C, while the second holds only for the quantum case. We henceforth refer to Eq. (3) as "Heisenberg scaling with N" or as the "weak limit", for reasons that will readily become apparent. Comparing Eqs. (3) and (1) implies that whenever $\sqrt{N}\,\phi \gtrsim 1$ the behavior of the signal must change and the SQL might be achievable. Since the phase increases as the sensor is brought into close proximity with the sample, there is a critical distance $d_c$ at which the behavior of the signal transforms from classical to quantum (see the right side of Fig. 2). State-of-the-art experimental setups can approach this critical distance (precise evaluations are provided in the following sections).

RESULTS

Simplified model: constant coupling

Let us start with a simplified model that captures the essentials. The physical system consists of a quantum sensor, taken to be a two-level system with energy splitting $\omega_0$, and an ensemble of N spin-1/2 nuclei with Larmor frequency $\omega_N$, as described by the Hamiltonian

$H_0 = \frac{\omega_0}{2}\sigma_z + \frac{\omega_N}{2}\sum_j I^j_z, \quad (4)$

where $\sigma_i / I^j_i$ is the Pauli matrix of the sensor/j-th nucleus in the i direction. The NV and the nuclei interact with a constant coupling,

$H_1 = g\,\sigma_z \sum_j I^j_x. \quad (5)$

Our protocol for the estimation of $\omega_N$ starts by initializing the system to the state $|\psi_i\rangle = |\uparrow_X\rangle_S\,|\uparrow_X\rangle \cdots |\uparrow_X\rangle$, after which the system propagates freely while the interaction is suppressed. Note that we explicitly assume that the nuclear spins are completely polarized in the x direction. The assumptions of constant coupling, full polarization, and an interaction of the form $\sigma_z I_x$ are relaxed in the following sections.
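The reduced dynamics of the sensor in this toy model can be simulated directly. The sketch below is our own illustration: σ and I are taken as Pauli matrices per the Hamiltonians above, the numerical parameters are made up, and the Y-basis readout follows the convention used in the next section. It builds the per-nucleus overlap whose N-th power gives the sensor coherence; its modulus and argument are the decay r and the signal Φ derived below.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |up_X>

def sensor_coherence(omega_N, t, g, tau, N):
    # Free precession of one nucleus by theta = omega_N * t about z
    psi = expm(-1j * omega_N * t * sz / 2) @ plus_x
    # Conditional x-rotation generated by H1 = g * sigma_z * sum_j I_x:
    # the sensor's sigma_z eigenvalue (+1/-1) fixes the sign during tau
    U_plus, U_minus = expm(-1j * g * tau * sx), expm(+1j * g * tau * sx)
    a = np.conj(psi) @ (U_minus.conj().T @ U_plus) @ psi
    return a ** N      # product over N identical, initially unentangled nuclei

a = sensor_coherence(omega_N=1.0, t=0.7, g=1e-5, tau=1.0, N=3 * 10**5)
r, Phi = np.abs(a), np.angle(a)
P_Y = 0.5 * (1 + r * np.sin(Phi))    # Y-basis population, cf. Eq. (13) below
print(r, Phi, P_Y)
```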
The propagation in the interaction picture with respect to the sensor's free Hamiltonian is

$|\psi(t)\rangle = |\uparrow_X\rangle_S \, \bigl(e^{-i\frac{\omega_N}{2} t I_z}|\uparrow_X\rangle\bigr)^{\otimes N}. \quad (6)$

Then $H_1$ is turned on while $H_0$ is turned off, so the propagation for an additional time τ results in

$|\psi(t,\tau)\rangle = e^{-i g \tau\, \sigma_z \sum_j I^j_x}\,|\psi(t)\rangle. \quad (7)$

The reduced density matrix of the sensor after both steps is

$\rho_S = \frac{1}{2}\begin{pmatrix} 1 & c \\ c^* & 1 \end{pmatrix}, \quad (8)$

where the off-diagonal element is given by

$c = \bigl[\cos(2g\tau) - i\,\sin(2g\tau)\cos(\omega_N t)\bigr]^N. \quad (9)$

To evaluate the product it is convenient to write the complex number (9) in a polar representation, $c = r\,e^{i\Phi}$, with

$\Phi \approx 2 N g \tau \cos(\omega_N t), \quad (10)$

$r \approx e^{-2 N (g\tau)^2 \sin^2(\omega_N t)}, \quad (11)$

where the approximations are made using the assumption of weak coupling, gτ ≪ 1. We henceforth refer to the accumulated phase Φ as "the signal", since it corresponds to the "classical" signal (2) with the aforementioned accumulated phase ϕ = 2gτ, and we refer to r as the decay. The decay is caused by the entanglement between the NV and the nuclear spin ensemble, as illustrated in Fig. 3. For times $\omega_N t = n\pi$ the dynamics is purely classical, because the rotation around the x-axis is trivial, whereas for other times the rotation causes the sensor state to become entangled with the collective state of the ensemble. Therefore, the dimensionless quantity characterizing the decay, $N(g\tau)^2$, is interpreted as the back-action. As the coupling constant g depends on the distance between the sensor and the sample, the transition from weak to strong back-action is dictated by the NV's depth for a given τ. This transition is the one predicted in the introduction, as it occurs when $\phi = 2g\tau \sim 1/\sqrt{N}$. Substituting Eqs. (10) and (11) into Eq. (9),

$c = r\,e^{i\Phi}. \quad (12)$

The measurable quantity associated with (12) is the probability distribution, which is related to Eq. (12) by

$P_{|Y\rangle} = \frac{1}{2}\bigl(1 + r \sin\Phi\bigr). \quad (13)$

Equation (13) is the non-approximate form of Eq. (2). It is drawn on the right-hand side of Fig. 2 with N = 3·10^5 and gτ = 10^{-6} (top) or gτ = 0.01 (bottom), which correspond to weak and strong back-action, respectively. In order to determine the precision of the estimation of $\omega_N$, we use the tools of quantum metrology. The Fisher information about a parameter g given a discrete distribution $P_i$ is $I_g = \sum_i (\partial_g P_i)^2 / P_i$. For a quantum system one can optimize over all possible measurement bases; this leads to the definition of the quantum Fisher information (QFI) [1]. For a density matrix $\rho = \sum_j \lambda_j |j\rangle\langle j|$, the QFI about g is $I_g = \sum_{j,k} 2\,|\langle j|\partial_g \rho|k\rangle|^2/(\lambda_j + \lambda_k)$ [6]. The precision of any measurement is bounded by the Cramer-Rao bound, $\Delta g \geq 1/\sqrt{I}$. Since this is a tight bound, we use it henceforth to quantify precision. For the density matrix (8) the QFI can be expressed through the relevant Bures distance [6,20]:

$I = r^2\left(\frac{\partial\Phi}{\partial g}\right)^2 + \frac{1}{1-r^2}\left(\frac{\partial r}{\partial g}\right)^2. \quad (14)$

Fig. 1 Possible measurement setups. Left: an ensemble of qubits (blue arrows), i.e., atoms, ions, superconducting circuits, with energy splitting ω that can be strongly measured (depicted above via optical access and an array of photo-detectors). The optimal precision of measuring ω is achieved by performing Ramsey spectroscopy and is given by Eq. (1). Right: an ensemble of qubits stores information about a desired measurable quantity, e.g., a magnetic field or a Larmor frequency. The desired information is accessible only through its weak interaction with an external quantum probe (red arrow). The interaction is illustrated by the color gradient, such that the amount of color is proportional to the interaction strength. The fundamental bound on the precision of a weakly interacting probe is unknown.

Thus, Eqs. (10), (11), and (14) lead to

$I = t^2 \sin^2\theta \left[4 N^2 (g\tau)^2 r^2 + \frac{16 N^2 (g\tau)^4 \cos^2\theta\; r^2}{1 - r^2}\right], \quad (15)$

where θ = $\omega_N t$. In the limit of weak back-action, $N(g\tau)^2 \ll 1$, the decay is negligible and $I \approx 4N^2(g\tau)^2 t^2 \sin^2\theta$. The QFI in this limit is optimal when

$\sin^2(\omega_N t) = 1, \quad (18)$

since the derivative of the signal is then maximal and the decay is negligible. For the optimal time (18), the QFI (15) is

$I = 4 N^2 (g\tau)^2 t^2, \quad (19)$

which corresponds to the uncertainty $\Delta\omega = \frac{1}{2Ng\tau t}$, as in Eq. (3) with ϕ = 2gτ.
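Since the readout (13) has two outcomes, the classical Fisher information $(\partial_\omega P)^2 / [P(1-P)]$, which lower-bounds the QFI for this basis, can be evaluated numerically on top of the previous sketch. Continuing with the same hypothetical parameters:

```python
def fisher_information(omega, t, g, tau, N, eps=1e-7):
    # Classical Fisher information of the binary Y-basis measurement,
    # with the omega derivative taken by a central finite difference.
    def P(w):
        a = sensor_coherence(w, t, g, tau, N)
        return 0.5 * (1 + np.abs(a) * np.sin(np.angle(a)))
    p = P(omega)
    dp = (P(omega + eps) - P(omega - eps)) / (2 * eps)
    return dp**2 / (p * (1 - p))

# Scanning omega reproduces the narrowing discussed below: the peak grows
# with N*(g*tau)^2 but becomes sharper, so a good prior estimate of
# omega_N is needed to sit on it.
I_peak = max(fisher_information(w, 1.0, 1e-3, 1.0, 10**5)
             for w in np.linspace(0.5, 2.5, 400))
print(I_peak)
```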
Thus, in the limit of weak back-action, we achieve an uncertainty that scales as $N^{-1}$ in exchange for the large factor $(g\tau)^{-1}$. Achieving better precision, however, requires $N g^2 \tau^2 \gg 1$, where, consequently, the decay starts to affect the quantum sensor. The optimal time (18), therefore, must change to account for the decay. The optimal time in this regime is approximately

$\sin^2(\omega_N t) = \frac{1}{4 N (g\tau)^2}, \quad (20)$

for which the QFI (15) is

$I = \frac{N t^2}{e}, \quad (21)$

and corresponds to the uncertainty $\Delta\omega = \sqrt{e/N}/t$. The decay, therefore, "corrects" the apparent Heisenberg scaling at high precision.

Fig. 3 The protocol. The NV and the nuclear spins are initialized along the x direction. The nuclei are then allowed, using an appropriate pulse sequence, to propagate according to the free Hamiltonian $H_0$ for a time t, which results in a rotation by θ = $\omega_N t$ around the z-axis. Then, by changing the external pulse sequence, the system propagates under the interaction Hamiltonian $H_1$ for a duration τ. This results in a rotation of the nuclei around the x-axis by an angle of ±ϕ = ±2gτ, depending on the sensor's state. The sensor will experience an effective dephasing depending on the extent of its entanglement with the ensemble.

Fig. 2 Nano-NMR based on NV centers. The NV center (red) is situated at a depth d below the diamond surface. The nuclear spin ensemble (blue), which is located on top of the surface, is partially polarized due to the external magnetic field B. The local magnetic field at the NV's position is only affected by the nuclear spins within a hemisphere of radius d centered above the NV's position (dashed red line), since the dipolar interaction creates an effective cutoff. The sensor is initialized and then measured after a given time, in order to reproduce the probability distribution, which is analogous to the classic NMR signal (top right). However, as the sensor is brought into close proximity with the sample, strong back-action causes the signal to change (bottom right). This change dramatically affects the precision of the frequency estimation (the full expression of the probability distribution plotted on the right-hand side is taken from Eq. (13)).

We note that the first step of the protocol, where the nuclei propagate according to $H_0$, can be implemented by initializing the sensor in the eigenstate $S_Z = 0$, which eliminates $H_1$, or by applying an external drive to the NV that suppresses the interaction. Therefore, the uncertainty is limited solely by the coherence time of the nuclei, as in refs. [13,15,21-24], and it does not depend on the coupling constant g. Moreover, if we could manipulate and read out each nucleus with unit fidelity, the optimal QFI would be the SQL (1). Therefore, the presented protocol achieves the optimal precision, up to the numerical factor of $e^{-1}$. Figure 4 compares the different QFI scalings to our protocol. We show in Supplementary Note 2 that this behavior of the QFI is ubiquitous in superradiant measurements. A similar behavior can be observed with N atoms that weakly interact with a single mode of radiation: given a large enough N, the ultimate precision limit can be obtained despite the weak interaction. Hence this behavior is due to the superradiant nature of the interaction in Eq. (5). To obtain this QFI we assume optimization of θ and of the measurement basis, which requires knowledge of $\omega_N$. As shown in Fig. 5, the QFI increases with $N(g\tau)^2$, but it also becomes increasingly narrow, such that a more accurate estimation of $\omega_N$ is required to achieve it.
This, however, can be resolved by using an adaptive measurement protocol: a sequence of non-optimal measurements is performed to acquire an estimate of $\omega_N$, and the details of the next measurements are updated according to the outcomes of the previous measurements. This can be repeated until the estimate of $\omega_N$ is good enough to attain the optimal precision. An example of such a protocol is given in Supplementary Note 8, and further analysis of this model using multiple sensors can be found in Supplementary Note 7.

Spatially dependent coupling

In the previous section, we presented the simplified model with the interaction Hamiltonian (5). This is an approximate form of the dipole-dipole interaction Hamiltonian (22). In the interaction picture with respect to the sensor's free Hamiltonian, after taking the rotating-wave approximation, the remaining terms are the secular ones (23). If we further assume that the nuclei can be driven in the x direction sufficiently fast compared to the interaction and the entanglement generation rate, we arrive at the Hamiltonian

$\tilde{H}_1 = \sigma_z \sum_j g_j(t)\, I^j_x. \quad (24)$

The difference between Eq. (24) and the toy model (5) is that the coupling differs from one nucleus to another; it depends, in fact, on the position of the nucleus and, therefore, also on time. Further description of the interaction requires a specific realization of the sensor; henceforth we use the NV center. In a spherical coordinate system where the NV is found at the origin, the NV's magnetization axis coincides with the z-axis, and the i-th nucleus position is denoted by $\{r_i, \theta_i, \varphi_i\}$, the coefficients $g_i$ are given by the dipolar angular factors (25), with the physical coupling constant $J = \frac{\mu_0 \hbar \gamma_e \gamma_N}{4\pi} = 0.49\ \mathrm{MHz}\cdot\mathrm{nm}^3$, where $\mu_0$ is the vacuum permeability, ℏ is the reduced Planck constant, and $\gamma_{e/N}$ are the electronic/nuclear gyromagnetic ratios. Repeating the derivation of the previous section with the interaction (24), Eq. (9) becomes a product over nuclei with accumulated phases $2G_j$ (26), where $G_j = \int_0^\tau \mathrm{d}t\, g_j(t)$. In the limit of weak coupling, $G_i \ll 1$, Eq. (26) can be approximated by an exponential whose phase goes as $\sum_i G_i \cos(\omega_N t)$ and whose decay goes as $\sum_i G_i^2$ (27).

Fig. 4 The optimal QFI follows the weak back-action limit (green), given by Eq. (19), when $N(g\tau)^2 \ll 1$. As N increases, the back-action corrects the scaling until $N(g\tau) > 1$, where the QFI approaches the strong limit given by Eq. (21). In the strong back-action regime, the QFI of our scheme is only smaller by a factor of $e^{-1}$ than the optimal QFI, $Nt^2$, achieved by Ramsey spectroscopy (blue).

Fig. 5 Narrowing of the QFI for strong back-action. The QFI (15) for t = 1 and N = 10^5 for different values of gτ. As the back-action $N(g\tau)^2$ gets stronger, the peak of the QFI increases until it reaches an optimum, since it no longer depends on the interaction. The increase in QFI comes at the cost of it becoming increasingly narrow. Therefore, to achieve the optimal scaling (21), a good estimate of $\omega_N$ is already required. This can be attained by using an adaptive protocol, as explained in SI Note 8.

The quantity that determines the behavior of the optimal QFI is $s_2 = \sum_i G_i^2$. The regime of interest is $s_2 \gg 1$, where, as we readily show, a modified SQL scaling can be achieved. Note that $I \le t^2 N/e$, where equality is attained only for homogeneous couplings. To obtain the QFI in a nano-NMR scenario we need, therefore, to find the relevant $s_1$, $s_2$. The results will of course depend on the parameters of the problem. We denote by $n = N/V$ the nuclei number density, D the diffusion coefficient of the sample, d the depth of the NV, and α the NV's tilting angle, measured between the normal to the diamond surface and its magnetization axis.
Let us first calculate $s_1$, assuming N ≫ 1 (30). While $s_1$ is proportional to the average magnetic field, it can be observed that $s_2$ goes as the magnetic-field auto-correlation (31), where $\langle\cdot\rangle$ is an average over realizations and $P(r, r_0, t)$ is the stationary diffusion propagator from r to $r_0$ with time difference t. The correlation decays with a characteristic time of $\tau_D \sim d^2/D$, which dictates the behavior of $s_2$. Although the full expression of $s_2$ is involved [25], the asymptotic behavior is rather simple (32), where $B^2_{rms}$ is the instantaneous fluctuation of the magnetic field (33). The mean field (30) and $B_{rms}$ (33) can be estimated more explicitly, where we took α = 54.7° and $n_{water} = 33\ \mathrm{nm}^{-3}$ as the water number density. We use these quantities for the remainder of the article in our quantitative estimates. The validity condition of the second-order approximation in $G_i$ can be put into physical terms with the definitions (30) and (33); e.g., Eqs. (27) and (28) are valid when $|B_{rms}/\langle B\rangle| \ll 1$, which means that the polarization should be larger than the statistical polarization. Hence, for increasingly shallow NVs, higher orders should be taken into account (see the discussion in Supplementary Note 3). We are now fully equipped to calculate the optimal QFI by substituting $s_1$ and $s_2$ (Eqs. (30) and (32), respectively) into Eq. (29). In the weak back-action regime, $s_2 \ll 1$, the optimal QFI reads (36); for strong back-action the optimal QFI depends explicitly on $\tau_D$ (37), where the optimal θ is given by (38). Note that the transition from Eq. (36) to Eq. (37) occurs when $\frac{2}{\pi}\gamma_e B_{rms} T_2^{NV} = 1$, which defines the critical depth $d_c$ in terms of the physical parameters. The additional factor of $\pi^{-1}$ derives from the dynamical decoupling sequence applied to the NV in order to produce $H_1$ (see the next section or a detailed derivation in Supplementary Note 4). For common parameters of liquids, an interrogation time of $T_2^{NV} = 1$ ms, and a fully polarized nuclear spin ensemble, we find that $d_c \sim 130$ nm. The result (36) and the short-times limit of (37) are similar to those of the simplified model, with the difference that, due to the dipolar interaction and the geometry, there is an additional dependence on the NV's tilting angle. In Eq. (37) this is translated into replacing the total number of particles in Eq. (21) by an effective number N ≈ 17.5nd³, which is proportional to the number of particles in the effective interaction region. The long-times regime of Eq. (37) may look puzzling, since for large enough τ we can get an arbitrarily large QFI, which seems paradoxical. This arbitrarily large QFI is due to the fact that we consider an infinite sample volume, i.e., an infinite number of nuclear spins. Restricting ourselves to a finite volume V with N nuclei and imposing the SQL yields the restriction $\frac{\langle B\rangle^2}{e\,B^2_{rms}}\frac{\tau}{\tau_D}\,t^2 \le N t^2$. This limitation on τ can be taken to be τ ≪ $\tau_V$ (see Supplementary Note 3), where $\tau_V = V^{2/3}/D$ is the volumetric diffusion time, i.e., the characteristic time it takes a particle to move from one of the volume's boundaries to another. For sufficiently long times, τ ≫ $\tau_V$, the QFI scaling changes to (39). This is the equivalent of Eq. (21). In the simplified model the sensor is coupled equally to all the nuclei from the start; therefore, the QFI scales with the total number of nuclei. In reality, the dipolar interaction creates an effective cutoff, such that at short times the QFI scales as nd³, and only after a long time, τ ≫ $\tau_V$, when all the nuclei have passed through the interaction region, does the QFI scale with the total number of nuclei. As in Eq. (21), in this model the QFI is not limited by the coherence time of the NV, $T_2^{NV}$, but only by the coherence time of the nuclei, $T_2^N$, which is usually longer by orders of magnitude.
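The critical depth quoted above can be recovered from the transition condition $\frac{2}{\pi}\gamma_e B_{rms}(d)\,T_2^{NV} = 1$ once the depth dependence of $B_{rms}$ is specified. The sketch below assumes the dipolar scaling $B_{rms} \propto \sqrt{n}/d^{3/2}$ and calibrates the prefactor so that $d_c = 130$ nm is reproduced for $T_2^{NV} = 1$ ms and fully polarized water; both the calibration and the resulting $d_c \propto (\mathrm{pol}\cdot T_2)^{2/3}$ scaling are our reading of the formulas above, not independent inputs.

```python
# Calibration point from the text: d_c = 130 nm for T2 = 1 ms, pol = 1.
T2_REF = 1e-3   # s
D_REF = 130.0   # nm

def d_crit(T2, pol=1.0):
    # gamma_e * B_rms * T2 ~ pol * sqrt(n) * T2 / d**1.5 = const
    # => d_c scales as (pol * T2)**(2/3)
    return D_REF * (pol * T2 / T2_REF) ** (2.0 / 3.0)

print(d_crit(1e-3))             # 130 nm by construction
print(d_crit(1e-3, pol=0.7))    # ~102 nm, close to the ~100 nm quoted below
```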
Undriven nuclei

In the previous section, we assumed that we can drive the nuclei sufficiently fast to achieve the approximate dynamics given by Eq. (24). In some experimental settings this is unfeasible for practical reasons; hence, in what follows, we drop this assumption. The approximate dipolar Hamiltonian (23) is rewritten accordingly (40). The NV is then driven with π-pulses every time $\tau_p$, so that the effective pulse frequency $\omega_p = \pi/\tau_p$ is close to $\omega_N$. This yields the effective Hamiltonian (see Supplementary Note 4) (41) and the remaining free Hamiltonian (42), where δω = $\omega_N - \omega_p$ and $g_i^\pm = -\frac{3}{2} J\,[r_i(t)]^{-3}\sin[\theta_i(t)]\cos[\theta_i(t)]$. Following the same protocol as before (see Supplementary Note 5), the signal is as in the driven case, with the average magnetic field given by (30); note that the signal is the same up to a prefactor of order 1 and a constant, known phase. The decay, however, changes, so that it is non-zero for any given time t. This is expected: previously, the interaction caused all the nuclei to rotate around the same axis, which resulted in an entanglement-induced decay. Without the external drive, the interaction causes each nuclear spin to rotate around a different axis in the xy plane of the Bloch sphere, which induces additional classical dephasing (see Supplementary Figure 3). It is worth pointing out that the quantum contribution (45) can be negative, but the total $B_{rms}$ is always real and positive. When the decay is small, $\gamma_e B_{rms}\tau \ll 1$, we retrieve the result (36) up to a prefactor ~1, since it only depends on the signal. If the decay is dominant, the classical and quantum decays compete when optimizing the QFI. On the one hand, the strong back-action regime requires $\gamma_e B^{UDQ}_{rms}\tau > 1$, which implies that τ has to be large enough. On the other hand, the classical dephasing causes an exponential decrease in the QFI, $e^{-(\gamma_e B^{UDC}_{rms}\tau)^2}$, that depends only on τ. To limit this effect we require $\gamma_e B^{UDC}_{rms}\tau < 1$. Hence, when the nuclei are undriven, fine tuning of τ is required in order to optimize these competing processes. The optimal protocol will no longer be universal and will depend on the physical parameters. First, we assume strong back-action, in order to derive the protocol analogous to the previous sections and to emphasize our last statement regarding the decay. The optimal t satisfies $\cos^2(\delta\omega t) = 1$. This results in a reduction by a reasonable factor of $e^{-1}$. This inequality is a function of the NV's tilting angle alone, and holds for $\alpha \geq \pi/3$. Since the NV's natural tilting angle is α ≈ 0.32π, this is approximately the case, and the slight deviation will lead to a further reduction of the QFI by a small factor.
Another issue with this strategy is that it is not guaranteed that $\tau = \frac{1}{\gamma_e B^{UDC}_{rms}} \leq T_2^{NV}$; indeed, for a typical number density and an NV depth of d = 140 nm, the required time is τ ≈ 3 ms > $T_2^{NV}$. Obtaining other scalings is possible by optimizing t and τ, depending on the parameters; a suitable choice gives the same result as in Eq. (37). Note that this is the case of "critical back-action", when $\gamma_e B_{rms}\tau = 1$; a smaller τ will lead to the weak back-action limit, and a larger τ will cause an exponential decrease. Since the same typical parameters yield τ ≈ 2 ms, the lack of an external drive reduces Eq. (37) by a factor of $e^{-1}$. The results for τ ≫ $\tau_D$ can be derived by reasoning similar to the previous section. However, the long-times limit of Eqs. (37) and (39) will typically no longer be achievable, because the optimal τ will be much longer than $T_2^{NV}$.

Partial polarization

The initial state of partially polarized nuclear spins is a product of single-nucleus density matrices with polarization p (−1 ≤ p ≤ 1) along x. Owing to this partial polarization, the ultimate precision limit is no longer $I = Nt^2$ (see Supplementary Methods 1), but $I = p^2 N t^2$; hence the QFI is degraded by a factor of p². We thus wish to inquire whether we can approach this limit using our protocol. The effect of partial polarization is somewhat similar to that of the undriven case: the polarized nuclei behave as in the driven case and induce a quantum dephasing on the external probe, while the unpolarized dynamics creates classical dephasing. The QFI for τ ≪ $\tau_D$ is given in the Supplementary Information, where pol = |p| and $\langle B\rangle$, $B_{rms}$ are given by Eqs. (30) and (33), respectively. Therefore, the results of the previous section also apply for a partially polarized driven ensemble with minor modifications. The optimal strategy in the strong back-action regime is to take $\sin^2\theta = 1$ and $\tau^2 = \frac{1}{4\gamma_e^2 B_{rms}^2}$, which yields a QFI of the form (50). Therefore, even with finite polarization our protocol achieves the ultimate precision limit, as long as the polarization is greater than the statistical polarization. When the dephasing caused by the back-action is larger than the one inflicted by the unpolarized dynamics, other approaches can achieve the optimal QFI (see Supplementary Note 6). In that case it is required that pol ≥ 70%, and the critical distance changes to $d_c \sim \mathrm{pol}^{2/3}\,T_2^{2/3}$, such that for pol = 70% it drops from 130 nm to $d_c \sim 100$ nm for $T_2$ = 1 ms. This fine tuning might yield a certain advantage over the optimal time parameters provided above, since the QFI depends explicitly on the optimal time.

DISCUSSION

We provide a protocol for nano-NMR that achieves the SQL, up to a prefactor, in the strong back-action regime. Moreover, the uncertainty does not depend on the coherence time of the sensor or on the probe-nucleus coupling. Our analysis implies that in the strong back-action regime, as long as the polarization is greater than the statistical polarization, the optimal precision is achieved by performing a single measurement rather than by using sequential measurement schemes. These features make the protocol highly applicable for sensing very small samples, which is the ultimate goal of the field.
Trends in functional disability and cognitive impairment among the older adult in China up to 2060: estimates from a dynamic multi-state population model

Background: Available evidence suggests that cognitive impairment (CI), which leads to deficits in episodic memory, executive functions, visual attention, and language, is associated with difficulties in the capacity to perform activities of daily living. Hence any forecast of the future prevalence of functional disability should account for the likely impact of cognitive impairment on the onset of functional disability. Thus, this research aims to address this gap in the literature by projecting the number of older adults in China with functional disability and cognitive impairment while accounting for the impact of cognitive impairment on the onset of functional disability.

Methods: We developed and validated a dynamic multi-state population model which simulates the population of China and tracks the transition of Chinese older adults (65 years and older) from 2010 to 2060, to and from six health states: (i) active older adults without cognitive impairment, (ii) active older adults with cognitive impairment, (iii) older adults with 1 to 2 ADL limitations, (iv) older adults with cognitive impairment and 1 to 2 ADL limitations, (v) older adults with 3 or more ADL limitations, and (vi) older adults with cognitive impairment and 3 or more ADL limitations.

Results: From 2015 to 2060, the number of older adults 65 years and older in China is projected to increase, of which the number with impairment (herein referred to as individuals with cognitive impairment and/or activity of daily living limitations) is projected to increase more than fourfold, from 17·9 million (17·8–18·0 million) in 2015 to 96·2 (95·3–97·1) million by 2060. Among the older adults with impairment, those with ADL limitations only are projected to increase from 3·7 million (3·6–3·7 million) in 2015 to 23·9 million (23·4–24·6 million) by 2060, with an estimated annual increase of 12·2% (12·1–12·3); the number with cognitive impairment only is estimated to increase from 11·4 million (11·3–11·5 million) in 2015 to 47·8 million (47·5–48·2 million) by 2060, representing an annual growth of 7·07% (7·05–7·09).

Conclusion: Our findings suggest there will be an increase in demand for intermediate and long-term care services among older adults with functional disability and cognitive impairment.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12877-021-02309-4.

Background

During the past few decades, the world has seen a rapidly ageing population in both developing and developed countries, due to declining fertility and mortality rates. China, with the largest population of older adults in the world, is rapidly ageing [1]. From 2007 to 2017, the number of persons in China aged 65 years and older increased from 106·36 million (8·1% of the total population) to 158·31 million (11·4% of the total population) [2]. According to the World Health Organization (WHO), it is estimated that by 2050 the proportion of the population of China aged 60 and older will reach 35·1% [3]. The rapid growth in the number of older adults in China is a source of concern for policymakers due to the health and social care implications of aging. Aging could lead to undesirable outcomes such as rising dependency and caregiver burden, increased health care utilization for both acute and long-term care, and escalating healthcare costs [4][5][6][7][8][9][10].
While a fraction of the population is increasingly avoiding fatal events due to changes in lifestyle which modify the risk factors for mortality, thus delaying the age-at-onset and progression of diseases, the majority are not avoiding the physiological changes associated with aging and the accumulation of chronic conditions such as cognitive impairment and functional disability [11][12][13][14][15][16]. The rapid socio-economic development and the strict implementation of family planning policies since 1980 have affected the structure of families in China, resulting in the erosion of traditional family support for older persons [17]. This has led to increased demand for long-term care services from society and the government. Recently, the Chinese government has responded to the huge challenge of caring for older adults by proposing policies aimed at increasing the supply of, and access to, community-based older adult care services [18]. To plan and provide the long-term care services needed in China, the health and social services policy-makers responsible for planning intermediate and long-term care services for older adults require an evidence-based and credible forecast of the current and future number of older Chinese adults with functional disability and cognitive impairment needing assistance, to inform policy decisions. Projections of cognitive impairment and functional disability often involve simple extrapolation [19,20] that fails to account for transitions across different health states; when transition rates are accounted for [21][22][23][24], the projections fail to consider the effect cognitive impairment has on the development of functional disabilities. Available evidence suggests that cognitive impairment, which leads to deficits in episodic memory, executive functions, visual attention, and language [25], is associated with difficulties in the capacity to perform activities of daily living [26]; hence any forecast of the future prevalence of functional disability should account for the likely impact of cognitive impairment/dementia on the onset of functional disability. Thus, this research aims to address this gap in the literature by projecting the number of older adults in China with cognitive impairment and functional disability up to 2060, while simultaneously accounting for the impact of cognitive impairment on the onset of functional disability, measured herein by limitations in activities of daily living. The evidence-based projections from this research could help support planning for the number of older adults with care needs, the infrastructural capacity required to meet those care needs, and the human resources required to provide care services for older adults in China. Although this study focuses on China, the general insights presented herein are potentially useful to other countries undergoing a similar demographic transition, including Japan, South Korea, Taiwan, India, and Singapore.
Model design

In order to project the number of older adults in China with cognitive impairment and functional disability, we developed and validated a dynamic multi-state population model [27,28] which simulates the population of China and tracks the transitions of older Chinese adults (65 years and older) from 2010 to 2060, to and from six health states: (1) active older adults without cognitive impairment (where active means having no ADL limitation), (2) active older adults with cognitive impairment, (3) older adults with 1 to 2 ADL limitations, (4) older adults with cognitive impairment and 1 to 2 ADL limitations, (5) older adults with 3 or more ADL limitations, and (6) older adults with cognitive impairment and 3 or more ADL limitations. Within each health state, the population was further divided along a two-dimensional vector: age (from 65 to 100 years and older) and gender (male and female). To ensure that our model is consistent with the population of China, an additional state which accounts for the population below age 65 is included; each year, the population turning 65 was assumed to enter the "active older adult without cognitive impairment" health state. Like the older adult population, the population below age 65 was subdivided by age (0-64) and gender. The population below age 65 increases via births and immigration and decreases via deaths, emigration, and turning 65. Births were estimated using the female reproductive ages and fertility rates; deaths were obtained from life tables. At the end of each year, the surviving population in each age cohort flows to the subsequent cohort, with the exception of the final age cohort (age 100 and older). The health states with cognitive impairment are further divided into three categories (mild, moderate, and severe cognitive impairment), with age-, gender-, and health-state-specific transition rates accounting for the movements across cognitive impairment categories; the cognitive impairment transition rates are reported in an earlier publication as cited [28]. Transitions across health states are determined by 1-year age-, gender-, and health-state-specific transition rates.

Health states

Cognitive function of the older adults was measured using the Chinese version of the Mini-Mental State Examination (MMSE), consisting of 30 items [29,30]. MMSE scores range from 0 to 30; higher scores indicate better cognition. Participants' orientation, memory, attention, calculation, language, and written and visual construction are assessed in the MMSE. Cognitive function was classified as follows: intact (MMSE score ≥ 24), mild cognitive impairment (18-23), moderate cognitive impairment (10-17), and severe cognitive impairment (≤ 9) [28,31]. Functional status was measured using Activities of Daily Living (ADL), consisting of taking a bath/shower, dressing, eating, standing up from or sitting down on a chair, walking around the house, and using the toilet. Those who reported no limitations in any of these activities were classified as active older adults; those with 1 or 2 ADL limitations were classified as older adults with 1 to 2 ADL limitations. Lastly, those with 3 or more ADL limitations were classified as older adults with 3 or more ADL limitations. Thus, an active older adult without cognitive impairment is defined herein as an older adult who has no ADL limitations and is cognitively intact.
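The year-by-year bookkeeping of such a model can be sketched as a discrete-time multi-state simulation. The transition matrix below is a placeholder with made-up rates (the real model uses age-, gender-, and state-specific rates estimated in the next section), and the aging chain and below-65 state are omitted for brevity; only the six health states plus death are tracked.

```python
import numpy as np

STATES = ["active", "active+CI", "ADL1-2", "ADL1-2+CI",
          "ADL3+", "ADL3+ +CI", "dead"]

# Placeholder 1-year transition probabilities (each row sums to 1).
T = np.array([
    [0.90, 0.04, 0.03, 0.01, 0.005, 0.005, 0.01],
    [0.02, 0.86, 0.01, 0.05, 0.005, 0.035, 0.02],
    [0.05, 0.00, 0.85, 0.03, 0.04,  0.01,  0.02],
    [0.00, 0.03, 0.02, 0.83, 0.01,  0.07,  0.04],
    [0.00, 0.00, 0.00, 0.00, 0.90,  0.04,  0.06],  # no improvement from ADL3+
    [0.00, 0.00, 0.00, 0.00, 0.02,  0.88,  0.10],
    [0.00, 0.00, 0.00, 0.00, 0.00,  0.00,  1.00]])

pop = np.array([60e6, 11e6, 2e6, 1e6, 1e6, 0.9e6, 0.0])  # illustrative 2015 stocks
NEW_65 = 8e6                 # illustrative annual inflow of people turning 65

for year in range(2015, 2061):
    pop = pop @ T            # T[i, j] = P(move from state i to state j)
    pop[0] += NEW_65         # entrants join "active without CI"

impaired = pop[1:6].sum()    # every living state except active-without-CI
print(f"2060: {impaired / 1e6:.1f} million older adults with impairment")
```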
An active older adult with cognitive impairment is defined as an older adult with no ADL limitations but with cognitive impairment, i.e., either mild, moderate, or severe cognitive impairment.

Model assumptions

Fertility rates from 2010 to 2017 were used, and the 2017 fertility rate was assumed to remain constant throughout the projection, because a change in the fertility rate will have no impact on the older adult population by 2060. On mortality, a future decline of 1.5% per annum was assumed [32]. Lastly, a 1% annual improvement in transition rates (based on the authors' estimates) across all health states was assumed, to account for future advancements in behavioural and pharmacological interventions. Data were drawn from the Chinese Longitudinal Healthy Longevity Survey (CLHLS), which collects data on demographics, socioeconomic status, lifestyle and dietary behaviours, health status, diseases, cognitive function, and physical performance. Further information regarding the CLHLS can be found in the source as cited [33]. The 2012 and 2014 waves were selected because they were follow-up surveys of a closed cohort (i.e., a cohort with fixed membership: once the cohort is defined by enrolling subjects and follow-up begins, no one can be added to it), as compared to other waves. The survey consists of 1824 Chinese older adults (814 men and 1010 women) aged 65 years and older. Demographic data (fertility rates, mortality rates, and the initial population distribution) used to initialize the model were obtained from the National Bureau of Statistics in China. Functional disability was measured using ADL limitations, and cognitive function by the Chinese version of the MMSE.

Data and estimation of transition probabilities

Using the 2012 and 2014 waves of the CLHLS, the rates of transitioning from one cognitive state to another or to death (with the exception of improvement in cognitive state from moderate or severe CI), as well as of transitioning from one functional status to another (with the exception of improvement from 3 or more ADL limitations) or to death, were estimated for the overall sample [34,35]. In order to estimate 1-year transition probabilities from two survey waves conducted 2 years apart (2012 and 2014), we employed a method adapted from the SAS code of a study by Cai et al., similar to that used in a previous publication estimating the number of older persons with cognitive impairment in China [28,36]. By assigning a cognitive state/functional status to the participants in the year 2013, we were able to circumvent the lack of data in 2013 with minimal assumptions. To ensure comparability with the earlier study by Ansah et al. [28], the following rules were applied: (1) If a participant had the same cognitive state/functional status in both 2012 and 2014, we assume he/she had been in that particular cognitive state/functional status in 2013, since there is no information on transition from the survey; and (2) If a participant was in different cognitive states/functional statuses in 2012 and 2014, then the transition is assumed to happen randomly between 2012 and 2014 (i.e., the cognitive state/functional status in 2013 can be the same as that in 2012, with the transition happening at the end of 2013 or the beginning of 2014, or the same as that in 2014, with the transition happening at the end of 2012 or the beginning of 2013) [28]. Equation 1 below, with age and sex as covariates, was used to estimate the transition rates across cognitive states [28]; it was solved with multinomial logistic regression models using the "multinom" function in R (v3.2.1) [28]:

$\ln\left(\frac{p_{ij}}{p_{ii}}\right) = \beta_{0,ij} + \beta_{1,ij}\,\mathrm{age} + \beta_{2,ij}\,\mathrm{sex} \quad (1)$
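A Python analogue of the R "multinom" step is sketched below. The study used R; this version uses scikit-learn's multinomial logistic regression on synthetic data purely to illustrate how destination-state probabilities are estimated from age and sex, and every number in it is made up.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.integers(65, 101, n)
sex = rng.integers(0, 2, n)            # 0 = female, 1 = male

# Synthetic destination states: 0 intact, 1 mild, 2 moderate, 3 severe, 4 dead.
# Worsening is made more likely with age, mimicking the observed pattern.
logits = np.zeros((n, 5))
logits[:, 1:] = (age[:, None] - 65) / 35.0 * np.array([1.0, 1.2, 1.5, 2.0])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
dest = np.array([rng.choice(5, p=p) for p in probs])

X = np.column_stack([age, sex])
model = LogisticRegression(multi_class="multinomial", max_iter=1000).fit(X, dest)

# Estimated one-year transition probabilities for an 80-year-old woman:
print(model.predict_proba([[80, 0]]).round(3))
```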
where p_ij is the transition rate from the current state i to state j (i != j), i corresponding to intact, mild, moderate, or severe cognitive impairment, and j corresponding to the same states as well as death [28]. Transition rates were disaggregated by age (single-year cohorts from 65 to 100 and older) and gender (female, male) [28]. Eq. 2 (with age, sex, and cognitive impairment status as covariates) was used to estimate the transition rates for functional disability; it was likewise solved with multinomial logistic regression models using the "multinom" function in R (v3.2.1). Here p_ij is the transition rate from the current state i to state j (i != j), where i corresponds to healthy, 1 to 2 ADL limitations, or 3 or more ADL limitations, and j corresponds to the same states as well as death. Transition rates were disaggregated by age (single-year cohorts from 65 to 100 and older), gender (female, male), and cognitive impairment (no CI, with CI).

Model validation and sensitivity analysis

For the purpose of model validation, the simulation model was presented to demographers to verify its structure, assumptions, and the parameters used to initialize it. In addition, we compared our model estimates of the total population and the older adult population with official estimates from the National Bureau of Statistics of China, and our estimates of the number of older adults with ADL limitations and cognitive impairment with available published estimates. Following a process similar to Ansah et al. [28], after computing point estimates for the transition rates from the multinomial logistic regressions, the bootstrap method was used to estimate the likely distribution of the transition rates and to obtain 95% confidence intervals around the point estimates. First, the sampling weights were rescaled to sum to 100%. Using the "sample" function in R (v3.2.1), the weights were then used as probabilities to draw respondents (by identification, ID) with replacement; in this sampling, each respondent (ID) may be drawn once, more than once, or not at all. The process was repeated 1000 times to obtain 1000 datasets, the transition rates were estimated on each dataset with the multinomial logistic regression model, and the distribution of age- and sex-specific transition rates and their 95% confidence intervals were obtained from the 1000 sets of estimates. To obtain the likely variation in the projected number of older adults with cognitive impairment, the transition rates from the bootstrap analysis were used as input to the sensitivity analysis.
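The resampling scheme just described can be sketched as follows (a minimal Python sketch under stated assumptions: `df` is a pandas DataFrame with one row per respondent, and `fit_multinomial` is a user-supplied stand-in for the multinomial regression step, which the authors performed with R's "multinom"):

```python
import numpy as np

def bootstrap_transition_rates(df, weights, fit_multinomial, n_boot=1000, seed=1):
    """Weighted bootstrap for transition-rate confidence intervals."""
    rng = np.random.default_rng(seed)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()                    # rescale sampling weights to sum to 1
    n = len(df)
    estimates = []
    for _ in range(n_boot):
        idx = rng.choice(n, size=n, replace=True, p=p)  # draw respondents by ID
        estimates.append(fit_multinomial(df.iloc[idx])) # refit on each dataset
    est = np.stack(estimates)
    lo, hi = np.percentile(est, [2.5, 97.5], axis=0)    # 95% CI per rate
    return est.mean(axis=0), lo, hi
```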
Results

Transition rates by age, gender, and cognitive impairment

Fig. S1 in the Additional file 1 shows the transition rates by functional disability. For both sexes, the rate of transitioning to a worse functional disability state or to death increased with age and with cognitive impairment, while the rate of transitioning to an improved functional disability state decreased with age and with cognitive impairment. Transition rates from active to all other health states, from 1 to 2 ADL limitations to 3 or more ADL limitations, and to death (including from 3 or more ADL limitations to death) are higher for males than for females; however, the transition from 1 to 2 ADL limitations back to active was higher for females than for males. In addition, in line with the results of Ansah et al. [28], age- and gender-specific transition rates across cognitive states and to death are presented in Fig. S2 in the Additional file 1. For both sexes, the transition rate to a worse cognitive state or to death increased with age, whereas the rate of transitioning to a better cognitive state decreased with age. In accordance with the results of Ansah et al. [28], transition rates from intact to mild, moderate, or severe CI were higher in females than in males; similar effects were found for the transition rates from mild CI to intact or to death, and from severe CI to death [28]. In contrast, transition rates from intact to death, from mild CI to moderate or severe CI, and from moderate CI to severe CI were higher for males than for females. Lastly, the transition rate from moderate CI to death for males decreased to a level below that of females at very old age (about 97 years) [28].

The disaggregation of older adults with impairment by type of impairment is indicated in Table 3. The further disaggregation by type and age cohort, shown in Table 4, indicates that the majority of older adults with impairment, irrespective of type, is observed among those 85 years and older. By 2060, the number of adults 85 years and older with ADL limitations only is projected to be 18·692 (18·141-19·243) million, with cognitive impairment only 29·408 (29·083-29·732) million, and with both ADL limitations and cognitive impairment 21·764 (21·193-22·299) million. Among those with ADL limitations only, adults 85 years and older constitute 77·9%, while the corresponding share for cognitive impairment only is 61·4%; lastly, 89·3% of older adults with both cognitive impairment and ADL limitations are 85 years and older. Tables S1-S7 in the Additional file 1 provide supplementary projected results.

Discussion

Our estimates are consistent with the available evidence in the literature on the future number of older Chinese adults with cognitive impairment and functional disability due to population aging [24,28]. Our findings suggest that the number of older Chinese adults with cognitive impairment and functional disability is projected to increase more than fourfold from 2015 to 2060; consequently, the proportion of older adults with impairment is projected to rise significantly by 2060. Among those with impairment, the majority will be women aged 85 years and older, and cognitive impairment is projected to contribute more to impairment than ADL limitations. The projected increase in older adults with cognitive impairment and functional disability in China is due mainly to population aging. As income per capita increases in China, coupled with rising educational attainment and growing access to healthcare services, mortality is expected to decrease, thus shifting the age distribution of the population. Based on our simulation model, the proportion of older adults in China who are 85 years and older is projected to increase from 7% in 2015 to 32% by 2060, while the proportion aged 65 to 74 is projected to decrease from 62% in 2015 to 35% by 2060. Since the prevalence of functional disability and cognitive impairment increases with age, as the proportion of older adults 85 years and older grows (from 7% in 2015 to 32% in 2060), the number of older adults with functional disability and cognitive impairment is expected to increase as well.
However, although increased access to healthcare services and changes in lifestyle could modify the risk factors and delay the age at onset of functional disability and cognitive impairment at younger ages (65 to 84 years), at older ages these conditions are much harder to avoid owing to physiological changes; hence the increase in the number of older adults with functional and cognitive disability. The finding that 21·8% of older Chinese adults will have impairment (functional disability or cognitive impairment) by 2060, the majority of them women aged 85 years and older, has policy implications for health and social care service needs. Evidence suggests that impairment among older adults is associated with greater informal care (care hours provided by informal caregivers are much higher than for those without impairment [37]), greater formal long-term care use [38,39], and greater acute care utilization [40], and may result in growing health care expenditure [41]. Overall, this finding suggests that health and social care needs among older adults in China are expected to increase significantly. Consequently, policy makers must be proactive in responding to these needs, lest unmet care needs among older adults increase, leading to poor health outcomes. A delayed response to the health and social care needs of older adults could lead to longer waiting times for services, an increased family/informal caregiving burden, and a growing number of older adults in worse health states, driving up health care costs. Further analysis indicates that, among those with impairment, the number of older adults with nursing-home-type care needs (individuals with three or more ADL limitations, or moderate to severe cognitive impairment, or both) is forecast to increase significantly, to 57·164 million (55·993-58·335 million) by 2060, as shown in Fig. 1. Among those with nursing-home-type care needs, the number with dementia only (dementia defined herein as moderate or severe cognitive impairment) is estimated to increase from 7·856 million (7·801-7·911 million) in 2015 to 31·109 million (30·745-31·472 million) by 2060; note that the three subgroups below sum to the 57·164 million total. The number of older adults with three or more ADL limitations only is projected to increase from 2·047 million (2·013-2·081 million) in 2015 to 15·999 million (15·527-16·472 million) by 2060, while the number with dementia and three or more ADL limitations is estimated to increase from 1·101 million (1·081-1·122 million) in 2015 to 10·056 million (9·721-10·391 million) by 2060. The emphasis on filial piety and on the family as the unit of care for older adults in Asian culture suggests that older adults with functional disability and cognitive impairment are likely to be cared for at home (Liu and National University of Singapore, 1998). It is estimated that up to 96% of Chinese older adults with dementia are cared for by family members at home [42]. As the number of older adults with disability increases, the care burden (especially the informal care burden, if current living arrangements remain unchanged) is expected to increase. This expected increase in informal care burden will put strain on family caregivers, exposing them to negative caregiving consequences such as depression, anxiety disorders, and weakened immunity [43].
Hence, policy makers should implement caregiver support systems for family caregivers.

The findings of this research emphasise the need for China to proactively develop its primary care sector to provide enhanced healthcare services for a rapidly aging population with chronic conditions such as cognitive impairment and functional disability. In addition, the capacity of social care services, such as the recently proposed community-based older adult care services and nursing homes, needs to be ramped up and effectively linked to the healthcare system, to provide the integrated health and social care that will meet the needs of older adults and enhance active aging. Policy makers should also emphasize educating the general population on the likely increase in informal/family caregiving burden as the number of older adults with disability increases. Education programs will raise awareness of the impairments associated with aging and equip families with the skills and knowledge to care for and manage individuals with cognitive impairment and functional disability. Lastly, the findings of this study, especially the transition rates for the different disability groups, could be used as baseline transition rates to evaluate the impact of World-Wide FINGER studies [44] on cognitive decline among older adults in China. FINGER is the Finnish multi-domain geriatric intervention study to prevent cognitive impairment and disability; similar studies are ongoing around the world (such as US-POINTER, MIND-CHINA, SINGER in Singapore, and UK-FINGER) [42]. The value of the dynamic multi-state population model presented herein is its ability to take trial results from the MIND-CHINA multi-domain intervention and scale them to the population level to explore the health and economic benefits of the intervention.

The simulation model used for this study has several limitations. First, we assume that individuals turning 65 in China enter age 65 without any cognitive impairment or functional disability. Including the transition rates of cognitive impairment and functional disability among adults becoming 65 would likely increase marginally the projected number of individuals with impairment; future models projecting trends in functional disability and cognitive impairment among older adults 65 years and older in China should consider including this transition to increase the accuracy of the projections. Second, we assume that the future improvement in transition rates is fixed at 1% per year over the simulation period. This percentage may increase or decrease depending on future advances in behavioural and pharmacological interventions, thus changing the projections presented herein. Lastly, the transition rates across health states were estimated from a fairly small CLHLS dataset given the population of China; the reliability of the estimated transition rates should be established on larger datasets to obtain robust projections.

Conclusion

In conclusion, our evidence-based multi-state population forecasting model, incorporating transitions across different health states and the impact of the increasing prevalence of cognitive impairment on functional disability, projects that the number of older Chinese adults with cognitive and functional impairment will increase significantly by 2060. This expected increase is due to population aging and the resulting shift in the age distribution of the population.
The expected rise in the burden of age-related impairment poses a significant challenge that requires urgent policy development to address both intended and unintended consequences. Our findings suggest that there will be increased demand for intermediate and long-term care services among older adults with functional disability and cognitive impairment. Whether this demand is met by the family, the private sector, or the government is an issue that policy makers should consider in planning for health and social care in China.

Authors' contributions

... cognitive impairment. He also conducted the analysis and manuscript writing. CTC conducted the statistical analysis for the transition probabilities using the available data and developed the R algorithm used for the data analytics. TLSM conducted the literature review and supported the drafting of the manuscript; she also edited the manuscript and prepared it for submission. ACWY supported the literature review, the generation of figures, and the drafting of the manuscript. DBM supported the data analysis, data collection, and drafting of the manuscript. All authors have read and approved the manuscript.

Funding

This work was supported by the Singapore Ministry of Health's National Medical Research Council under its STaR Award Grant (grant number NMRC|STaR|0005|2009), as part of the project 'Establishing a Practical and Theoretical Foundation for Comprehensive and Integrated Community, Policy and Academic Efforts to Improve Dementia Care in Singapore' received by DBM. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Availability of data and materials

The data for this study are available at the Duke Center for the Study of Aging and Human Development: https://sites.duke.edu/centerforaging/programs/chinese-longitudinal-healthy-longevity-survey-clhls/

Declarations

Ethics approval and consent to participate

The Ethics Committees of Peking University and the National University of Singapore approved this study. Consent was obtained before participation in the study. We received administrative permission to access and use the data because the data collection was partly funded by our research lab.

Consent for publication

Not applicable.
BiSync: A Bilingual Editor for Synchronized Monolingual Texts

In our globalized world, a growing number of situations arise where people are required to communicate in one or several foreign languages. In the case of written communication, users with a good command of a foreign language may find assistance from computer-aided translation (CAT) technologies. These technologies often allow users to access external resources, such as dictionaries, terminologies or bilingual concordancers, thereby interrupting and considerably hindering the writing process. In addition, CAT systems assume that the source sentence is fixed and also restrict the possible changes on the target side. In order to make the writing process smoother, we present BiSync, a bilingual writing assistant that allows users to freely compose text in two languages, while maintaining the two monolingual texts synchronized. We also include additional functionalities, such as the display of alternative prefix translations and paraphrases, which are intended to facilitate the authoring of texts. We detail the model architecture used for synchronization and evaluate the resulting tool, showing that high accuracy can be attained with limited computational resources. The interface and models are publicly available at https://github.com/jmcrego/BiSync and a demonstration video can be watched on YouTube: https://youtu.be/_l-ugDHfNgU.

Introduction

In today's globalized world, there is an ever-growing demand for multilingual communication. To give just a few examples, researchers from different countries often write articles in English, international companies with foreign subsidiaries need to produce documents in multiple languages, research institutions communicate in both English and the local language, etc. However, for many people, writing in a foreign language (L2) other than their native language (L1) is not an easy task. With the significant advances in machine translation (MT) in recent years, in particular due to the tangible progress in neural machine translation (NMT, Bahdanau et al., 2015; Vaswani et al., 2017), MT systems are delivering usable translations in an increasing number of situations. However, it is not yet realistic to rely on NMT technologies to produce high-quality documents, as current state-of-the-art systems have not reached the level where they could produce error-free translations. Also, fully automatic translation does not enable users to precisely control the output translations (e.g., with respect to style, formality, or term use). Therefore, users with a good command of L2, but not at a professional level, can find help in existing computer-assisted language learning tools or computer-assisted translation (CAT) systems. These tools typically provide access to external resources such as dictionaries, terminologies, or bilingual concordancers (Bourdaillet et al., 2011) to help with writing. However, consulting external resources causes an interruption in the writing process due to the initiation of another cognitive activity, even when writing in L1 (Leijten et al., 2014). Furthermore, L2 users tend to rely on L1 (Wolfersberger, 2003) to prevent a breakdown in the writing process (Cumming, 1989). To this end, several studies have focused on developing MT systems that ease the writing of texts in L2 (Koehn, 2010; Huang et al., 2012; Venkatapathy and Mirkin, 2012; Chen et al., 2012; Liu et al., 2016).
However, existing studies often assume that users can judge whether the provided L2 texts precisely convey what they want to express. Yet, for users who are not at a professional level, evaluating L2 texts may not be so easy. To mitigate this issue, researchers have also explored round-trip translation (RTT), which translates the MT output in L2 back into L1 in order to evaluate the quality of the L2 translation (Moon et al., 2020). Such studies suggest that it is helpful to augment L2 writing with the display of the corresponding synchronized version of the L1 text, in order to help users verify their composition. In such settings, users obtain synchronized texts in two languages, while only making the effort to compose in one.

Figure 1: User interface of our online bilingual editing system. Users can freely choose the language in which they compose and alternate between text entry boxes. The system automatically keeps the other box in sync.

A bilingual writing assistant system should allow users to write freely in both languages and always provide synchronized monolingual texts in the two languages. However, existing systems do not support both functionalities simultaneously. The system proposed by Chen et al. (2012) enables free composition in two languages, but only displays the final texts in L2. Commercial MT systems like Google, DeepL and SYSTRAN always display texts in both languages, but users can only modify the source side, while the target side is predicted by the system and is either locked or can only be modified with alternative translations proposed by the system. CAT tools, on the contrary, assume the source sentence is fixed and only allow edits on the target side. In this paper, we present BiSync, a bilingual writing assistant aiming to extend commercial MT systems by letting users freely alternate between two languages, changing the input text box at will, with the goal of authoring two equally good and semantically equivalent versions of the text.

BiSync Text Editor

In this work, we are interested in a writing scenario that broadens existing commercial online translation systems. We assume that the user wants to edit or revise a text simultaneously in two languages. See Figure 1 for a snapshot of our BiSync assistant. Once the text is initially edited in one language, the other language is automatically synchronized so that the two entry boxes always contain mutual translations. In an iterative process, and until the user is fully satisfied with the content, texts are revised in either language, triggering automatic synchronizations that ensure both texts remain mutual translations. The next paragraphs detail the most important features of our online BiSync text editor.

Bidirectional Translations

The editor allows users to edit both text boxes at will. This means that the underlying synchronization model has to perform translations in both directions, as the roles of the source and target texts are not fixed and can change over time.
Synchronization

This is the most important feature of the editor. It ensures that the two texts are always translations of each other. As soon as one text box is modified, BiSync synchronizes the other box. To enhance the user experience, the system waits a few seconds (the delay) before the synchronization takes place; a sketch of this debounce behaviour is given at the end of this section. When a text box is modified, the system prevents the second box from being edited until the synchronization has completed. Users can also disable the synchronization process using the "freeze" button. In this case, the frozen text will not be synchronized (modified); changes are only allowed in the unfrozen text box. This is the standard modus operandi of most commercial translation systems, which consider the input text as frozen and allow only a limited number of edits in the translation box.

Prefix Alternatives

The editor can also provide several translation alternatives for a given sentence prefix. When users click just before a word w in a synchronized sentence pair, the system displays in a drop-down menu the most likely alternatives that can complete the translation starting from the word w. Figure 2 (bottom) illustrates this functionality, where translation alternatives are displayed after the prefix "Je rentre", in the context of the English sentence "I'm going home because I'm tired"; in this example, the user clicked right before the French word "à".

Paraphrase Alternatives

Another important feature of our BiSync editor is the ability to propose edits for sequences of words at arbitrary positions in both text boxes. Figure 2 (top) illustrates this scenario, where paraphrases of the English segment "going home" are displayed in the context "I'm ... because I'm tired", given the French sentence "Je rentre à la maison parce que je suis fatigué". Such alternatives are triggered by selecting word sequences in either text box.

Other Features

Like most online translation systems, BiSync has a "listen" button that uses a text-to-speech synthesizer to read the content of a text box, and a "copy" button that copies the content to the clipboard. It also displays the number of characters written in each text box. Figures 1 and 2 illustrate these features.

Settings

The "gear" button pops up several system parameters that can be configured by users. The "IP" and "Port" fields identify the address where the BiSync model is launched and waits for translation requests. The "Languages" field indicates the pair of languages that the model understands and is able to translate. "Alternatives" denotes the number of hypotheses that the model generates and that are displayed by the system. "Delay" sets the number of seconds the system waits after an edit takes place before starting the synchronization; the countdown is reset each time a revision is made. Figure 3 displays the BiSync settings with default parameters.
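The delay-and-freeze behaviour described above can be sketched as follows (a minimal illustration of the debounce logic, not BiSync's actual implementation):

```python
import threading

class SyncDebouncer:
    """Each edit restarts a countdown; the synchronization callback fires only
    after `delay` seconds without further edits, and never for a frozen box."""
    def __init__(self, delay, sync_fn):
        self.delay = delay        # seconds to wait after the last edit
        self.sync_fn = sync_fn    # callback that requests the translation
        self.timer = None
        self.frozen = False

    def on_edit(self, text, direction):
        if self.frozen:
            return                # frozen text box: no synchronization
        if self.timer is not None:
            self.timer.cancel()   # a new revision resets the countdown
        self.timer = threading.Timer(self.delay, self.sync_fn,
                                     args=(text, direction))
        self.timer.start()
```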
Synchronization Model

We consider a pair of parallel sentences (f, e) and a sentence f′ as an update of f. The objective is to generate the sentence e′ that is parallel to f′ while remaining as close as possible to e. Three types of update are distinguished; Figure 4 displays an example of each update type:

• Insertion: adding one or more consecutive words at any position in f;
• Deletion: removing one or more consecutive words at any position in f;
• Substitution: replacing one or more consecutive words at any position in f by one or more consecutive words.

In practice, training such models requires triplets (f′, e, e′), as the sentences f are not used by the models studied in this work. Inspired by Xiao et al. (2022), we integrate several control tokens into the source side of the training examples of a standard NMT model to obtain the desired results. This approach is straightforward and does not require modifying NMT architectures or decoding algorithms; the integration of control tokens is model-agnostic and can be applied to any NMT architecture. Several tokens are used to indicate the target language (<en>, <fr>) and the edit type. The first pattern refers to a regular translation task (TRN), and is used when translating from scratch, without any initial sentence pair (f, e), as in the following example:

The white cat <fr> → Le chat blanc

where only the target language tag is appended to the end of the source sentence. The second pattern corresponds to an update task (INS, DEL or SUB), used for re-synchronizing an initial sentence pair (f, e) after changing f into f′:

f′ <lang> e <update> → e′

as in the following examples:

The white cat <fr> Le chat <ins> → Le chat blanc
The white cat <fr> Le chat est blanc <del> → Le chat blanc
The white cat <fr> Le chat noir <sub> → Le chat blanc

where the edited source sentence f′ = [The white cat] is followed by the target language tag <fr>, the initial target sentence e, and a tag indicating the edit type that updates the initial source sentence f. The third pattern corresponds to a bilingual text infilling task (BTI, Xiao et al., 2022):

f <lang> e_g → e_G

The model is trained to predict the tokens masked in a target sentence e_g in the context of the source sentence f:

The white cat <fr> Le <gap> blanc → chat

where e_g = [Le <gap> blanc] is the target sentence with missing tokens to be predicted. The model only generates the masked tokens e_G = [chat].

Synthetic Data Generation

While large amounts of parallel bilingual data (f′, e′) exist for many language pairs, the triplets required to train our model are hardly available. We therefore study ways to generate synthetic triplets (f′, e, e′) from parallel data (f′, e′) for each type of task (INS, DEL, SUB and BTI) introduced above.

Insertion: We build examples of initial translations e for INS by randomly dropping a segment from the updated target e′. The length of the removed segment is randomly sampled with a maximum length of 5 tokens, and we impose that the removed segment does not exceed 0.5 of the length of e′.

Deletion: Simulating deletions requires the initial translation e to be an extension of the updated target e′. To obtain extensions, we employ an NMT model enhanced to fill in gaps (fill-in-gaps). This model is a regular encoder-decoder Transformer trained with a balanced number of regular parallel examples (TRN) and paraphrase examples (BTI), as detailed in the previous section. We extend training examples (f′, e′) with a <gap> token inserted at a random position in e′ and use fill-in-gaps to decode these training sentences, as proposed in (Xu et al., 2022). In response, the model predicts the tokens that best fill the gap. For instance:

The white cat <fr> Le chat <gap> blanc → est

and the target extension is therefore e = [Le chat est blanc].

Substitution: Similar to deletion, substitution examples are obtained using the same fill-in-gaps model. A random segment is masked in e′, which is then filled by the model. At inference time, the model computes an n-best list of substitutions for the mask, and we select the most likely sequence that is not identical to the masked segment. For instance:

The white cat <fr> Le chat <gap> → [blanc; bleu; clair; blanche; ...]

and the target substitution is e = [Le chat bleu].

Note that extensions and substitutions generated by fill-in-gaps may be ungrammatical. For instance, the proposed substitution e = [Le chat blanche] has a gender agreement error: the correct adjective is "blanc" (masculine) instead of "blanche" (feminine). Since such possibly noisy sentences only serve as the initial translation e on the input side, while the reference e′ always comes from genuine parallel data, the model always learns to produce grammatical sentences e′ parallel to f′.
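As an illustration of the insertion case described above, a minimal sketch of triplet generation (the function name and the exact tag placement follow the pattern notation of the previous section, but are our assumptions rather than the authors' code):

```python
import random

def make_insertion_example(f_tokens, e_prime_tokens, max_len=5, max_ratio=0.5,
                           rng=random):
    """Build one INS training example from a parallel pair (f', e'): drop a
    random segment (<= 5 tokens, at most half of e') from the updated target
    e' to obtain the initial translation e."""
    n = len(e_prime_tokens)
    max_drop = min(max_len, int(n * max_ratio))
    if max_drop < 1:
        return None                          # sentence too short to corrupt
    seg_len = rng.randint(1, max_drop)       # segment length, inclusive bounds
    start = rng.randint(0, n - seg_len)      # segment position
    e_initial = e_prime_tokens[:start] + e_prime_tokens[start + seg_len:]
    # Model input: f' <lang> e <ins> ; model output: e'
    src = f_tokens + ["<fr>"] + e_initial + ["<ins>"]
    return src, e_prime_tokens
```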
Paraphrase: Given sentence pairs (f, e), we generate e_g by masking a random segment of the initial target sentence e. The length of the masked segment is randomly sampled with a maximum length of 5 tokens. The target side of these examples (e_G) only contains the masked token(s).

Experiments

Datasets

To train our English-French (En-Fr) models we use the official WMT14 En-Fr corpora as well as the OpenSubtitles corpus (Lison and Tiedemann, 2016). A very light preprocessing step is performed to normalize punctuation and to discard examples exceeding a length ratio of 1.5 or falling outside the length range [1, 250], measured in words. Statistics of each corpus are reported in Table 1. For testing, we use the official newstest2014 En-Fr test set made available for the same WMT14 shared task, containing 3,003 sentences. All our data is tokenized using the OpenNMT tokenizer. We learn a joint Byte Pair Encoding (Sennrich et al., 2016) over the English and French training data with 32k merge operations. The training corpora used for learning our model consist of well-formed sentences: most sentences start with a capital letter and end with a punctuation mark. However, our BiSync editor must also expect incomplete sentences, when synchronization occurs before the text is complete. To train our model to handle this type of sentence, we lowercase the first character and remove the ending punctuation of both source and target examples with a probability of 0.05.

Experimental Settings

Our BiSync model is built using the Transformer architecture (Vaswani et al., 2017) implemented in OpenNMT-tf (Klein et al., 2017). More precisely, we use the following setup: embedding size: 1,024; number of layers: 6; number of heads: 16; feed-forward layer size: 4,096; dropout rate: 0.1. We share parameters of the input and output embedding layers (Press and Wolf, 2017). We train our models using the Noam schedule (Vaswani et al., 2017) with 4,000 warm-up iterations. Training is performed on a single V100 GPU during 500k steps with a batch size of 16,384 tokens per step. We apply label smoothing to the cross-entropy loss with a rate of 0.1. The resulting models are built by averaging the last ten checkpoints saved during training. For inference, we use CTranslate2, which implements a custom runtime with many performance optimization techniques to accelerate decoding and reduce the memory usage of models on CPU and GPU. We also evaluate our model with weight quantization using 8-bit integer (int8) precision, thus reducing model size and accelerating execution compared to the default 32-bit float (float) precision.

Evaluation

We evaluate the performance of our synchronization model BiSync against a baseline translation model with exactly the same characteristics but trained only on the TRN task over bidirectional parallel data (base). We report performance with BLEU scores (Papineni et al., 2002) as implemented in SacreBLEU (Post, 2018) over the concatenated En-Fr and Fr-En test sets. For tasks requiring an initial target e, we synthesize e from (f′, e′) pairs following the same procedures used for generating the training set (see details in Section 2).
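The inference setup just described can be sketched with CTranslate2's Python API (a hedged sketch: the model directory name is hypothetical, and the single-batch, single-thread, int8 configuration mirrors the constraints discussed in the results below):

```python
import ctranslate2

# Load a converted BiSync model with int8 weight quantization on CPU.
translator = ctranslate2.Translator("bisync_ct2_model",   # hypothetical path
                                    device="cpu",
                                    compute_type="int8",
                                    inter_threads=1,
                                    intra_threads=1)

# One synchronization request: f' <fr> e <ins>, decoded with beam size 3.
tokens = ["The", "white", "cat", "<fr>", "Le", "chat", "<ins>"]
result = translator.translate_batch([tokens], beam_size=3)
print(result[0].hypotheses[0])   # expected to resemble: Le chat blanc
```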
Table 2 reports BLEU scores for our two systems on all tasks:

Table 2: BLEU scores over concatenated En-Fr and Fr-En test sets for all tasks.

           TRN    INS    DEL    SUB    BTI
  base     36.0    -      -      -      -
  BiSync   34.9   87.9   95.5   78.2   42.6

The base system is only trained to perform regular translations (TRN), for which it obtains a BLEU score of 36.0, outperforming BiSync, which is trained to perform all tasks. This difference can be explained by the fact that BiSync must learn a larger number of tasks than base. When performing the INS, DEL and SUB tasks, BiSync vastly outperforms its TRN results, as it makes good use of the initial translation e. When we use BiSync to generate paraphrases of an initial input (BTI), we obtain a BLEU score of 42.6, higher than that of the regular translation task (TRN, 34.9). This demonstrates the positive impact of using target-side context for paraphrasing.

Next, we evaluate the ability of our BiSync model to remain close to an initial translation when performing synchronization. Note that for a pleasant editing experience, synchronization should introduce only a minimal number of changes; otherwise, despite re-establishing synchronization, the additional changes may wipe out updates previously made by the user. To evaluate this capability, we take an initial translation (f, e) and introduce a synthetic update f′ as detailed in Section 3.1. This update leads to a new synchronization that transforms e into e′, and we would like e′ to remain as close as possible to e. Table 3 reports TER scores (Snover et al., 2006) between e and e′, computed by SacreBLEU (TER signature: nrefs:1|case:lc|tok:tercom|norm:no|punct:yes|asian:no|version:2.0.0). These results indicate that BiSync produces synchronizations significantly closer to the initial translations than those produced by base, which also confirms the findings of Xu et al. (2022).

Finally, Table 4 reports the inference efficiency of our BiSync model using CTranslate2. We indicate differences in model size and speed for different quantization settings, devices, batch sizes and numbers of threads. Lower memory requirements and higher inference speed are obtained with int8 quantization on both GPU and CPU devices, in contrast to float32. When running on CPUs, an additional speedup is obtained with multithreading. Comparable BLEU scores are obtained in all configurations. Note that for the tool presented in this paper, the single-batch-size, single-thread results (bold figures in Table 4) are the relevant ones, since synchronization requests are produced for isolated sentences and therefore cannot take advantage of multiple threads and large batches.

Conclusion and Further Work

In this paper, we presented BiSync, a bilingual writing assistant system that allows users to freely compose text in two languages while always displaying the two monolingual texts synchronized, with the goal of authoring two equally good versions of the text. Whenever users make revisions in either text box, BiSync takes the initial translation into account and reduces, as much as possible, the number of changes needed in the other text box to restore parallelism. BiSync also assists the writing process by suggesting alternative reformulations for word sequences or alternative translations of given prefixes. The synchronization process applies several performance optimization techniques to accelerate inference and reduce memory usage with no accuracy loss, making BiSync usable even on machines with limited computing power.
In the future, we plan to equip BiSync with a grammatical error prediction model and a better handling of prior revisions: the aim is to enable a finer-grained distinction between parts that the system should modify and parts that have already been fixed or that should remain unchanged. Last, we would like to perform user studies to assess the division of labor between users and BiSync in an actual bilingual writing scenario.

Figure 2: BiSync editor displaying paraphrases (top) and translation alternatives for a given prefix (bottom).

Figure 4: Source sentences f when updated (f′) by means of insertion (Ins), deletion (Del) and substitution (Sub), and their corresponding translations (e and e′). Source sentences f are not employed by the models of this work.

Table 1: Statistics of training corpora.

Table 3: TER scores between e and e′ issued from different update types. En-Fr and Fr-En test sets are concatenated.

Table 4: Inference speed and model size when decoding test sets with several settings: quantization (Quant), device (Dev), batch size (BS) and number of threads. Decoding beam size is set to 3. Speed is measured in tokens/second. GPU is a single V100 GPU with 32 GB memory; CPU 1 has 32 cores with 86 GB memory and CPU 2 is an Intel i7-10850H with 32 GB memory.
Dynamics of Correlation Structure in Stock Market

In this paper a correction factor for Jennrich's statistic is introduced in order not only to test the stability of correlation structure, but also to identify the time windows where the instability occurs. While Jennrich's statistic only tests the stability of the correlation structure along predetermined non-overlapping time windows, the corrected statistic provides us with the history of correlation structure dynamics from time window to time window. A graphical representation is provided to visualize that history. This information is necessary for further analyses of, for example, the change of topological properties of the minimal spanning tree. An example using NYSE data illustrates its advantages.

Introduction

The correlation structure among stocks in a given portfolio is a complex structure represented numerically in the form of a symmetric matrix where all diagonal elements are equal to 1 and the off-diagonal elements are the correlations between two different stocks. That matrix is the so-called correlation matrix [1]. It is clear that the larger the number of stocks, the higher the complexity of that structure and the harder it is to understand [2]. From the recent literature, e.g., [1-3], we learn that understanding correlation structure is one of the most important problems in econophysics. Theoretically, the correlation matrix among stocks is a random matrix [4]. The vital importance of random matrices in this field is very well known; their role can be found not only in stock market analysis but also in many other areas such as, for example, portfolio optimization [5,6], asset prices [7] and ex-ante optimal portfolios [8]. It is also a major problem to understand which non-overlapping time windows, if any, provide the most stable correlation structure [8].

There are two mainstreams in analyzing the complex structure of the correlation matrix. The first is to filter the important information contained therein. This mainstream notion was pioneered by Mantegna [1], who introduced the application of (i) the subdominant ultrametric to construct an economic classification of the stocks in the form of an indexed hierarchical tree, and (ii) the minimal spanning tree (MST) to filter the topological structure of the stocks. See also [9] for a recent development of robust filters. Nowadays, these two tools have become indispensable in econophysics, as can be seen, for example, in [10-13]. The second is to model the dynamics of the correlation structure from one time window to another [4,8,14,15]. Under the assumption that the time series data representing the stocks are governed by the geometric Brownian motion (GBM) law, the logarithmic returns are independent and normally distributed. Thus, in this case, the correlation between two different stocks is customarily quantified as the Pearson correlation coefficient (PCC) between the corresponding logarithmic returns [1,2].
In this paper our discussion focuses on the second topic, especially on how to numerically represent the occurrence of correlation structure dynamics from time window to time window; more specifically, on how to identify the time windows where the instability of the correlation matrix occurs and to what extent it occurs. Since that problem is multivariate in nature, in the rest of the paper the study focuses on statistical model building in a multivariate setting. In that setting, Larntz and Perlman [16] remarked that the statistical model that has been advanced to test the stability of correlation structure is the one developed by Jennrich [17]. They further reported that this test has commendable computational and distributional properties. These are among the reasons why Jennrich's test is considered the most appropriate test of correlation structure stability [5].

Nowadays, under the above assumption that the time series representing the stocks follow a GBM process, Jennrich's test is standard practice in finance and financial market analysis [6,8,18]. Its applications can be found in many studies, for example, in global markets [15], the property business [19,20], equity analysis [21], real estate [22], and stock market analysis [23]. Evidently, there is no doubt that this test plays a vital role in testing the stability of correlation structures [5,18]. However, as we will show, if the result is negative, Jennrich's test cannot provide any information about the correlation structure dynamics from one time window to another; it only tells us whether the correlation structure is stable along all time windows. Thus, if it is unstable, how can we identify the time windows where the instability occurs? This is the main problem discussed in this paper.

The rest of the paper is organized as follows. In the next section we begin by briefly recalling Jennrich's test and its limitation, which forms the background and motivation of this paper. In the third section, we construct a statistic, mathematically equivalent to Jennrich's, to overcome that limitation. Then, in the fourth section, a correction factor for each term in Jennrich's statistic is introduced in order to identify the time windows where the dynamics of correlation structure occur. In the fifth section, an example using NYSE data illustrates the advantages of the corrected statistic. To close this presentation, concluding remarks are highlighted in the last section.

Background and Motivation

Suppose n stocks are available in a portfolio under study and each stock is represented by the time series of its price. Let $p_i(t)$ and $r_i(t)$ be the price of stock i and the logarithmic return of stock i at time t, respectively. Thus:

$r_i(t) = \ln p_i(t) - \ln p_i(t-1)$   (1)

for all i = 1, 2, …, n.
Under the assumption that $p_i(t)$ is governed by the GBM law, the interrelations or, equivalently, the similarities among stocks are summarized in the form of a correlation matrix C of size $n \times n$, whose general element in the i-th row and j-th column is defined as the PCC, see [1,2,14]:

$c(i,j) = \dfrac{\sum_t \big(r_i(t) - \bar{r}_i\big)\big(r_j(t) - \bar{r}_j\big)}{\sqrt{\sum_t \big(r_i(t) - \bar{r}_i\big)^2 \, \sum_t \big(r_j(t) - \bar{r}_j\big)^2}}$   (2)

where $\bar{r}_i$ is the average of $r_i(t)$ over all t. Thus, the matrix C is a numerical representation of the complex system of stocks' interrelationships, and it plays an important role in econophysics as a main source of economic information. Analyzing the complex structure of C is not simple: the greater the number of stocks, the higher the complexity of that structure [2]. However, from the literature we learn that there are two parts to analyzing the complex structure of C, namely (i) filtering the important information contained therein [1], and (ii) modeling the dynamics of correlation structure instability from one time window to another, as discussed in [4,6,8].

In what follows our discussion focuses on the second topic, especially on how to numerically represent the history of correlation structure instability. For that purpose we introduce a correction factor for each term in Jennrich's statistic. While the original Jennrich statistic can only be used to test whether the correlation matrix is stable along all time windows, the corrected statistic is able to identify the particular windows at which the instability, if any, occurs. This information is necessary for further analysis of correlation structure dynamics in terms, for example, of stock topological properties.

It is important to note that under more general time series conditions, the use of the PCC as a similarity measure between two different time series might not be apt. In this case, other similarity measures such as dynamic time warping [24], detrended correlation [25,26], and the Hayashi-Yoshida correlation [27] are available. Dynamic time warping measures the similarity of two time series which may vary in time frame; detrended correlation was introduced for the case where a non-stationary and/or non-GBM process is involved; and the Hayashi-Yoshida correlation is designed for the case where the two time series are observed in a non-synchronous manner. See [24-27] for the details.

Review of Jennrich's Statistic

Actually, testing the stability of correlation structure has a long history predating the test Jennrich introduced in [17], which has nowadays become popular as the most appropriate test [5]. See, for example, [28] for early developments, and [29,30] for more recent works; those works show that this research area is very active. In the next paragraph we briefly recall Jennrich's test and then highlight its limitations.

Suppose m non-overlapping time windows of stock price time series data are of concern in studying the dynamics of correlation structure. Let $T_i$ be the length of the i-th window and $C_i$ the correlation matrix of the stocks in that time window. To test the stability of the correlation structure across these time windows, Jennrich [17] proposed the statistic

$J = \sum_{i=1}^{m} J_i, \qquad J_i = \tfrac{1}{2}\,\mathrm{tr}\!\big(Z_i^2\big) - \mathrm{dg}(Z_i)^{\mathsf T}\, G^{-1}\, \mathrm{dg}(Z_i)$   (3)

where: (i) $Z_i = \sqrt{T_i}\,\bar{C}^{-1}\big(C_i - \bar{C}\big)$; (ii) $\bar{C} = \frac{1}{T}\sum_{i=1}^{m} T_i C_i$, with $T = \sum_{i=1}^{m} T_i$, is the pooled correlation matrix; (iii) $\mathrm{dg}(Z_i)$ is the column vector whose j-th component is equal to the j-th diagonal element of $Z_i$; and (iv) the general element of G is $g_{k\ell} = \delta_{k\ell} + \bar{c}_{k\ell}\,\bar{c}^{\,k\ell}$, where $\delta_{k\ell}$ is the Kronecker delta and $\bar{c}_{k\ell}$ and $\bar{c}^{\,k\ell}$ are the $(k,\ell)$-th elements of $\bar{C}$ and $\bar{C}^{-1}$, respectively.
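Before turning to its distribution, Eq. (3) as reconstructed above can be computed directly (a minimal NumPy sketch; the function name and interface are ours):

```python
import numpy as np

def jennrich_J(corr_mats, T_lens):
    """Jennrich's statistic J of Eq. (3) for m windowed correlation matrices
    C_1..C_m with window lengths T_1..T_m."""
    Cs = [np.asarray(C) for C in corr_mats]
    T = np.asarray(T_lens, dtype=float)
    C_bar = sum(t * C for t, C in zip(T, Cs)) / T.sum()  # pooled correlation
    C_bar_inv = np.linalg.inv(C_bar)
    n = C_bar.shape[0]
    G = np.eye(n) + C_bar * C_bar_inv                    # elementwise (Hadamard) product
    G_inv = np.linalg.inv(G)
    J = 0.0
    for t, C in zip(T, Cs):
        Z = np.sqrt(t) * C_bar_inv @ (C - C_bar)
        dz = np.diag(Z)
        J += 0.5 * np.trace(Z @ Z) - dz @ G_inv @ dz
    return J  # compare with chi-square, df = (m - 1) * n * (n - 1) / 2
```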
He showed that J is asymptotically distributed according to a chi-square distribution with $(m-1)\,n(n-1)/2$ degrees of freedom. Therefore, for significance level $\alpha$, the correlation structure along all time windows is declared unstable if J exceeds the cut-off value given by the $(1-\alpha)$ quantile of that chi-square distribution.

Despite its popularity, Jennrich [17] remarked at the end of his paper that, although the asymptotic behavior of J in Equation (3) is that of a chi-square variable, the individual terms $J_i$ need not be asymptotically chi-square for the time windows i = 1, 2, …, m. This is the limitation of Jennrich's test that will be handled in the next two sections by introducing a correction factor. As a consequence of that limitation, if the correlation structure along all time windows is unstable, J cannot provide any information about the time windows, if any, at which the correlation structure changed. This would not be the case if the distribution of $J_i$ were known; therefore, we need to investigate the distributional behavior of $J_i$. In the remaining sections, in order to derive that distribution, a correction factor for $J_i$ is introduced through the construction of an equivalent alternative formula for $J_i$ in the form of a Mahalanobis squared distance. We need the correction factor and this equivalent form because it is difficult to derive the distribution of $J_i$ directly from Equation (3). It is the distribution of the corrected $J_i$ that allows us to investigate the dynamics of correlation structure stability. First, we discuss the distributional behavior of $C_i$.

Asymptotic Behavior of the Correlation Matrix among Stocks

Let $\Sigma_i$ be the theoretical correlation matrix among stocks in the i-th time window. The asymptotic distributional behavior of $C_i$ is given in the following theorem [31].

Theorem 1: $\sqrt{T_i}\,\mathrm{vec}(C_i - \Sigma_i)$ is asymptotically distributed as multivariate normal of dimension $n^2$ with mean vector 0 and covariance matrix $\Omega$, denoted $N_{n^2}(0, \Omega)$, where $\Omega$ is expressed in terms of the commutation matrix K, the diagonal matrix $D_K$ whose diagonal elements are those of K, and Kronecker products involving $\Sigma_i$; the explicit form of $\Omega$ is given in [31].

In that theorem, the matrix K is the so-called commutation matrix and vec(·) is the vectorization of a matrix, obtained by stacking each column underneath the other; see [31] and [32] for the details. It is very important to note that this theorem cannot be used directly to derive the distribution of $J_i$ because the covariance matrix $\Omega$ of $\mathrm{vec}(C_i)$ is singular. This motivates us, in the next section, to investigate the asymptotic distribution of the squareform of $C_i$, which simplifies our discussion. More specifically, working with this form is more advantageous than working with $C_i$ itself because (i) it contains the same information as $C_i$ in terms of correlation structure, and (ii) its covariance matrix is non-singular. These properties lead us to the construction of a statistic, equivalent to Jennrich's statistic, that allows us to investigate the dynamics of correlation structure instability along all time windows.

An Equivalent Form of Jennrich's Statistic

Actually, since $C_i$ is symmetric and all its diagonal elements are non-random, what we need in the study of correlation structure dynamics is only the information contained in the lower (or upper) off-diagonal part of $C_i$. To represent that part in a compact way, the notion of the squareform operator used in [33] is adopted. That operator transforms $C_i$ into a vector containing all elements of $C_i$ below or above the diagonal; in this paper we choose the upper off-diagonal part and denote it by $\mathrm{vecp}(C_i)$, a column vector of dimension $k = n(n-1)/2$. Our discussion then focuses on the distributional behavior of the corresponding distance in the Mahalanobis sense. To derive that distribution, we need to know the covariance matrix of $\mathrm{vecp}(C_i)$.
For this purpose, we define a linear transformation M from $\mathbb{R}^{n^2}$ to $\mathbb{R}^{k}$ such that

$\mathrm{vecp}(C_i) = M\,\mathrm{vec}(C_i)$   (4)

The transformation M can be represented in matrix form as a block matrix $M = (M_1 \; M_2 \; \cdots \; M_n)$, where $M_1$ is a zero matrix and, for r = 2, 3, …, n, $M_r$ is the selection block that picks out the r − 1 upper off-diagonal elements of the r-th column of $C_i$; here $C^2_r$ denotes the number of combinations of 2 out of r objects.

The transformation in Equation (4) and the asymptotic distributional behavior of $\mathrm{vec}(C_i)$ in Theorem 1 yield the asymptotic distribution of $\mathrm{vecp}(C_i)$: from Equation (4), its covariance matrix is $\Psi = M\,\Omega\,M^{\mathsf T}$, where $\Omega$ is defined in Theorem 1. Since $\Psi$ is non-singular, the distribution of the Mahalanobis squared distance is given in Property 1, which is a consequence of Theorem 2.2.2 in [31]. A special case of that distribution, under the hypothesis that the correlation structure is stable over the time windows, is given in Property 2. Based on this property, an equivalent form of Jennrich's statistic J in Equation (3) is developed and presented in Property 3, which leads us to the correction factor for $J_i$ in Property 4.

Property 2: Under the stability hypothesis $H_0: \Sigma_1 = \Sigma_2 = \cdots = \Sigma_m \;(= \Sigma_0)$, the statistic

$D = \sum_{i=1}^{m} T_i \big(\mathrm{vecp}(C_i) - \mathrm{vecp}(\Sigma_0)\big)^{\mathsf T}\, \Psi_0^{-1}\, \big(\mathrm{vecp}(C_i) - \mathrm{vecp}(\Sigma_0)\big)$

is asymptotically distributed as chi-square with mk degrees of freedom, where $\Psi_0$ is $\Psi$ evaluated at $\Sigma_0$.

In practice, $\Sigma_0$ is unknown, and so is $\Psi_0$. In this case, as suggested by Jennrich [17], $\Sigma_0$ is estimated by $C_{pooled}$; accordingly, $\Psi_0$ is estimated by $\hat{\Psi}_0$, obtained from $\Psi_0$ by replacing $\Sigma_0$ with $C_{pooled}$. Since $C_{pooled}$ is a consistent estimator of $\Sigma_0$, the following property, which presents an equivalent form of Jennrich's statistic J in (3), is straightforward.

Property 3: $D = \sum_{i=1}^{m} D_i$, with $D_i = T_i \big(\mathrm{vecp}(C_i) - \mathrm{vecp}(C_{pooled})\big)^{\mathsf T}\, \hat{\Psi}_0^{-1}\, \big(\mathrm{vecp}(C_i) - \mathrm{vecp}(C_{pooled})\big)$, has the same asymptotic property as D in Property 2. By construction, see [17], $D_i$ is mathematically equivalent to $J_i$ in (3); moreover, D in Property 3 is also mathematically equivalent to J. As mentioned in Subsection 2.1, the correlation structure is declared unstable along all time windows if D or, equivalently, J exceeds the cut-off value.

Although J is preferable to D in terms of computational efficiency, as can be seen in the next section, the statistic D provides the opportunity to develop a correction factor for $J_i$, which is useful for studying the dynamics of correlation structure instability.

Correction Factor

Although D is asymptotically distributed as a chi-square variable, as remarked in Jennrich [17], the distribution of the individual term $D_i$ is still unknown. This is the reason why D or, equivalently, J cannot be used to investigate the dynamics of correlation structure instability. To handle this problem, a correction factor for each term $D_i$ is proposed. Since the time windows are non-overlapping, testing the stability of the correlation structure ($H_0: \Sigma_1 = \cdots = \Sigma_m = \Sigma_0$) is equivalent to repeatedly testing $H_0: \Sigma_i = \Sigma_0$ for all i = 1, 2, …, m [34]. Based on this equivalence relation, we have the following property, whose proof is given in the Appendix.

Property 4: $\dfrac{T}{T - T_i}\, D_i$ is asymptotically distributed as chi-square with k degrees of freedom, for all i = 1, 2, …, m.

We conclude that the term $D_i$ in Property 3, corrected by the factor $T/(T - T_i)$, is asymptotically distributed as chi-square with k degrees of freedom. For computational reasons, the corrected statistic is computed through $J_i$: instead of a matrix inversion of size $n(n-1)/2 \times n(n-1)/2$ as required by $D_i$, the matrix inversions in the former are of size $n \times n$. As we will see in the next section, this corrected statistic provides us with a graphical representation of the history of correlation structure dynamics.
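Continuing the earlier sketch, the per-window corrected statistics of Property 4 follow by scaling each $J_i$ (again an illustrative sketch, not the authors' code):

```python
import numpy as np

def corrected_window_statistics(corr_mats, T_lens):
    """Per-window corrected statistics [T / (T - T_i)] * J_i of Property 4,
    each to be compared with a chi-square on k = n(n-1)/2 degrees of freedom."""
    Cs = [np.asarray(C) for C in corr_mats]
    T = np.asarray(T_lens, dtype=float)
    T_tot = T.sum()
    C_bar = sum(t * C for t, C in zip(T, Cs)) / T_tot
    C_bar_inv = np.linalg.inv(C_bar)
    n = C_bar.shape[0]
    G_inv = np.linalg.inv(np.eye(n) + C_bar * C_bar_inv)
    out = []
    for t, C in zip(T, Cs):
        Z = np.sqrt(t) * C_bar_inv @ (C - C_bar)
        dz = np.diag(Z)
        J_i = 0.5 * np.trace(Z @ Z) - dz @ G_inv @ dz
        out.append(T_tot / (T_tot - t) * J_i)   # correction factor of Property 4
    return np.array(out)
```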
Example

To illustrate how the corrected statistic introduced in Property 4 works, we use NYSE data from January 2007 until December 2009 for the 100 most capitalized stocks, classified in ten industry sectors. The data were downloaded from [35] on 9 May 2013. The distribution of stocks across sectors, each represented by a different color, is given in Table 1. Four stocks are not included in this study due to data availability, leaving n = 96 stocks and hence k = n(n − 1)/2 = 4,560.

NYSE Correlation Structure Dynamics

As an illustration of the advantages of the corrected statistic, let us first test the stability of the correlation structure on a half-yearly basis (January-June 2007, July-December 2007, January-June 2008, July-December 2008, January-June 2009, and July-December 2009) using Jennrich's test. Equation (3) applied to the half-yearly data gives J = 28490.90. Since the number of degrees of freedom, (m − 1)k = 22,800, is large, for significance level $\alpha$ = 2.5%, as suggested in [36], a normal approximation gives a cut-off value of 23218.53. We conclude that, since J exceeds the cut-off value, the correlation structure along all six half-yearly time windows is unstable.

That is all the information provided by Jennrich's statistic: it can only be used to test whether the correlation structure is stable along all six half-yearly time windows. In the next paragraph, using the corrected statistic developed in Property 4, we investigate the dynamics of that structure further.

The values of $J_i$ and of the corrected statistic are presented in Table 2. Based on the corrected statistic (the last column of that table), with significance level $\alpha$ = 2.5%, the half-yearly history of correlation structure instability is represented graphically in Figure 1. The dots represent the half-yearly values of the corrected statistic for the i-th time window, i = 1, 2, ..., 6, and the straight line is the cut-off value for the corrected $J_i$, i.e., the $(1-\alpha)$ = 97.5% quantile of the chi-square distribution with k = 4,560 degrees of freedom, which equals 4,747.17. What we learn from Figure 1 is not only the instability of the half-yearly correlation structure but also the history of its dynamics, viewed with $C_{pooled}$ as the reference. The figure also shows that in the following time windows the correlation structure is significantly different from the reference: January-June 2007, July-December 2007, January-June 2008, and July-December 2009.
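Both cut-off values can be checked with the usual normal approximation to an upper chi-square quantile (a small sketch; using (m − 1)k = 22,800 degrees of freedom for the overall test is our inference, but it matches the reported 23218.53):

```python
from math import sqrt

def chi2_cutoff_normal_approx(df, z=1.96):
    """Normal approximation to the upper chi-square quantile: a chi-square
    variable has mean df and variance 2*df, so the 97.5% cut-off is roughly
    df + z * sqrt(2 * df)."""
    return df + z * sqrt(2 * df)

m, k = 6, 4560                                    # six windows, k = n(n-1)/2 for n = 96
print(chi2_cutoff_normal_approx((m - 1) * k))     # ~23218.5, matching 23218.53
print(chi2_cutoff_normal_approx(k))               # ~4747.2, matching 4747.17
```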
Tracking Correlation Structure Changes

The information in Figure 1 provided by the corrected statistic makes possible further investigation of the extent to which the correlation structure has changed. In this example, the correlation structure changes will be studied by comparing the pattern of the MST-based network topology issued from each time window with that issued from C_pooled. First, we compare them in terms of the power law of the degree distribution and, later on, in terms of the Jaccard index.

In Figure 2 we present the dynamics of the correlation structure in terms of the MST-based network topology among stocks [1,2,10-12]. Let us consider the pooled correlation matrix issued from all the time windows as reference. We call the MST-based network topology of C_pooled in Figure 2a the reference network topology. In Figure 2b-g we also present the network topology of the first until the sixth time window, respectively.

In that figure, the weight of the link between two stocks i and j represents the distance d(i, j), related to c(i, j) in Equation (2) and defined in [1,2] as:

d(i, j) = sqrt(2(1 - c(i, j))).

From that figure we can investigate how the degree distributions differ from that of the reference correlation structure. This could lead us to investigate further the topological properties of the MST-based network, such as the dynamics of the most influential stocks, by observing centrality measures such as degree centrality, closeness centrality, betweenness centrality, and eigenvector centrality, as usually used in network analysis [11,12,37-40]. In what follows we focus the discussion on the degree distribution. We show that, in this example, the dynamics of the correlation structure in Figure 1, as monitored by the corrected Jennrich statistic, can nicely be explained in terms of the power law of the degree distribution for each MST in Figure 2.

Graphically, in log-log scale, the degree distribution of the reference network together with that of each time window is presented in Figure 3. The horizontal and vertical axes represent log(degree) and log(degree frequency), respectively. At a glance, this figure shows the dynamics of the correlation structure in terms of the power law of the degree distribution. Specifically, let us write the power-law model P(k) = c k^(-γ), where P(k) is the probability that a particular stock has degree k, and c and γ are constants. For each time window, the constant c and the exponent γ are given in Table 3. From this table we learn that: (i) according to Lawrence and Lawrence [41], for all time windows, the power-law model P(k) = c k^(-γ) fits the empirical pattern of the degree distribution in Figure 3 reasonably well, since the mean absolute percentage error (MAPE) is between 20% and 50% for all time windows; (ii) only the power laws of the fourth and fifth time windows are close to the reference power law related to C_pooled. These results are in line with the result in Figure 1.

Jaccard Index

To track the changes of the correlation structure, we can also use the Jaccard similarity coefficient, also known as the Jaccard index, between the reference structure C_pooled and that of each time window. This index measures the similarity between the MST of a particular time window and the reference MST. For the i-th time window, the Jaccard index I_i, i = 1, 2, ..., 6, is defined by:

I_i = |MST_i ∩ MST_Ref| / |MST_i ∪ MST_Ref|,

where MST_i and MST_Ref represent the MST of the i-th time window and that of the reference, respectively, and |A| is the number of elements in a set A. As can be seen in Table 4, this index captures the similarity between MST_i and MST_Ref as well as the degree distribution does. The indices for the fourth and fifth time windows are higher than the others. This is also in line with the result given by the corrected statistic in Figure 1.
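The pipeline described above (Mantegna distance, MST, Jaccard index between edge sets) can be sketched as follows; helper names are illustrative and scipy is assumed available.

```python
# Build the MST from a correlation matrix via d(i,j) = sqrt(2(1 - c_ij))
# and compare two MSTs with the Jaccard index over their edge sets.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_edges(C):
    D = np.sqrt(2.0 * (1.0 - C))          # Mantegna distance matrix
    np.fill_diagonal(D, 0.0)
    # note: scipy's dense csgraph convention treats exact zeros as missing
    # edges, which only matters for perfectly correlated pairs
    T = minimum_spanning_tree(D).tocoo()  # the n-1 edges of the MST
    return {tuple(sorted(e)) for e in zip(T.row.tolist(), T.col.tolist())}

def jaccard_index(C_i, C_ref):
    E_i, E_ref = mst_edges(C_i), mst_edges(C_ref)
    return len(E_i & E_ref) / len(E_i | E_ref)
```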
Concluding Remarks

Under the assumption that the time series representing stocks are governed by the GBM law, Jennrich's statistic J can be used to test the stability of the correlation structure among stocks in the sense of PCC. However, if the correlation structure is unstable, J is not able to provide any information about the time windows at which the instability occurs. Therefore, J cannot tell us the dynamics of correlation structure instability along all time windows.

In this paper a correction factor is introduced in order to improve the role of Jennrich's statistic in understanding the dynamics of correlation structure. More specifically, the corrected statistic can be used not only to test the stability of the correlation structure but also to identify the particular time windows at which the correlation structure has significantly changed.

By using the corrected statistic, a visual representation of the history of correlation structure instability along all time windows can be constructed. The information from this representation is necessary for further investigation, for example, of the extent to which the correlation structure in a particular time window has changed. We have demonstrated these advantages in analyzing the dynamics of the correlation structure at the NYSE. In that NYSE case, the dynamics of the correlation structure is closely related to the power law of the degree distribution. Furthermore, the Jaccard index is able to quantify the similarity between two MST-based network topologies.

Appendix: Proof of Property 4

Let us write the decomposition in Equation (A1). The Kronecker structure in Theorem 1 leads us to the asymptotic distribution of the Mahalanobis squared distance between the per-window and pooled correlation vectors. The first term on the right-hand side of Equation (A1) is simply the quadratic form associated with window i; according to Theorem 2.2.2 in [31], this gives the distribution of the second term on the right-hand side of Equation (A1), and, if T_i grows without bound for all i = 1, 2, ..., m, we have Property 4.

The operator used above transforms C_i into a vector containing all elements of C_i below or above the diagonal. In this paper we choose the upper off-diagonal part and denote it by vec(C_i); the Mahalanobis squared distance is then equivalent to the Euclidean length of the corresponding transformed vector.

Table 1. Distribution of stocks in each sector.
Table 2. Corrected statistic for each time window.
Table 3. The constant c and the exponent γ for each time window.
Table 4. Jaccard index for each time window.
Impact of Rotational Motion Estimation Errors on Passive Bistatic ISAR Imaging via Backprojection Algorithm

This work investigates the impact of motion estimation errors on passive inverse synthetic aperture radar (ISAR) images of rotating targets when the backprojection algorithm (BPA) is employed to focus the data. Accurate target motion estimation can be quite challenging, especially in noncooperative target scenarios. In these cases, BPA is applied under erroneous target kinematics information, entailing defocusing and distortions of the final image product. Starting from the evaluation of the image point spread function (PSF) and the resolution properties of the BPA image under ideally known target motion, it will be analytically shown that, at first order, the PSF under motion estimation errors is approximately a scaled and rotated version of the nominal one. Then, theoretical solutions to predict the location of the scatterers in the image will be provided to characterize in closed form the distortion of the BPA plane. Numerical results under different use cases of practical interest are provided to analyze the level of accuracy required by the motion estimation task for a reliable focus in the challenging passive radar scenario. Experimental results using both terrestrial and satellite signals of opportunity are also provided, showing the general validity of the approach in different passive ISAR systems. The present analysis is not limited to passive radars and can also be applied to active bistatic radars having limited transmitted bandwidth.

Fabrizio Santi and Debora Pastina are with the Department of Information Engineering, Electronics and Telecommunications, Sapienza University of Rome, 00184 Rome, Italy (e-mail: fabrizio.santi@uniroma1.it; debora.pastina@uniroma1.it). Digital Object Identifier 10.1109/JSTARS.2023.3341337

I. INTRODUCTION

At present, passive radar sensors represent a powerful alternative to conventional active radars for remote surveillance. They exploit the radio frequency (RF) signals already existing in the environment without the need for a dedicated transmitter, therefore enabling lower development and maintenance costs. This facilitates the installation of such systems also in environments where active sensors cannot be installed or are undesired due to their harmful radiations. Passive radar technology has reached today a significant readiness level, with products on the market for the detection, localization, and tracking of moving targets, and researchers are now exploring advanced functionalities to extend the range of potential applications, with passive inverse synthetic aperture radar (ISAR) representing one of the forefront topics [1].
ISAR approaches typically place demanding requirements on a radar sensor, such as wideband waveforms to achieve fine range resolution and sophisticated signal processing algorithms to achieve well-focused images. These factors long made ISAR imaging an operative mode for active radars only. Nevertheless, the continuous progress of digital signal processing know-how and the availability of an increasing number of terrestrial and satellite communication signals led to the first passive ISAR demonstrations. Most of the research efforts concentrated on the widely available terrestrial TV signals, able to provide acceptable range resolution and relatively high transmitted power [2], [3]. Other studies considered frequency modulation radio [4] and global system for mobile communication [5] illuminators, even though limited to cross-range profiling due to the very narrowband transmissions, while, for short-range/indoor scenarios, Wi-Fi-based passive ISAR approaches can be found in [6] and [7]. Space-based passive radars benefit, with respect to terrestrial ones, from wider accessibility on the global scale, less reliance on potentially vulnerable infrastructure, and signal reception less sensitive to blockage by obstacles such as in mountain areas. In this framework, digital video broadcasting-satellite (DVB-S) signals represent one of the most interesting options, allowing passive imaging with both fine range and cross-range resolutions, thanks to the availability of transmissions over relatively wide channels in the X/Ku-band [8], [9], [10], [11]. Global navigation satellite system (GNSS) signals are another promising alternative, thanks to their global availability and large constellation design. A first introduction and experimental proof-of-concept of GNSS-based passive ISAR has been provided in [12].
Passive ISAR has a number of relevant differences with respect to its active counterpart. First, the geometry is inherently bistatic, as the transmitter and receiver are not colocated. Moreover, the waveforms are not within the control of the user and are not designed for radar purposes. Available bandwidths are generally much narrower than in dedicated imaging systems, the ambiguity function may have worse properties, and longer coherent processing intervals (CPIs) can be required to cope with the unfavorable power budget. This latter condition also applies when transmissions in the lower part of the spectrum are exploited, e.g., UHF and VHF, in order to achieve sufficiently fine azimuth resolutions in spite of the relatively long wavelengths [4]. Depending on the particular illuminators, several strategies can be applied to overcome the limitations posed by the challenging passive radar conditions. These typically aim at exploiting the information available over different domains. For example, to increase the range resolution, multichannel combination techniques can be adopted when the allocated spectrum is populated by transmissions over adjacent channels, as often occurs in terrestrial digital television [2], [3]. Multiangle acquisitions may increase the spatial resolution of the image, at the same time significantly reducing shadowing effects and yielding a higher signal-to-noise ratio (SNR) [13], [14]. In this framework, a particularly appealing class of illuminators is represented by GNSS, where signals emitted by satellites widely separated in angle can be collected by a single receiver for passive multistatic ISAR tasks [15], [16], [17]. Another interesting approach capitalizes on the signals collected by differently polarized antennas: different approaches and experimental results on passive polarimetric ISAR can be found in [9], referring to DVB-S illuminators.

The range-Doppler (RD) algorithm is the most widely adopted option for passive ISAR focusing. It produces a map of the target scatterers related to their bistatic range and Doppler shifts without precise knowledge of the target rotational motion, as long as the Doppler shift experienced by each scatterer is constant during the CPI. However, since for classification procedures it is often preferable to have the image in a homogeneous plane (m x m), especially for the estimation of the target's physical dimensions, cross-range scaling is also required. Furthermore, the bistatic geometry induces an additional scaling to be applied to the slant-range dimension to convert the bistatic range into the more meaningful monostatic range [13]. However, the RD algorithm provides its output in an image projection plane (IPP) that depends on the particular illumination and motion conditions. Therefore, RD is not suitable for the direct combination of images taken from a variety of bistatic geometries, which is a fruitful means to overcome the inherent limitations of passive ISAR [14], [15], [16], [19].
The backprojection algorithm (BPA) is an imaging technique that provides a number of benefits over conventional RD. First, wider observation angles can be synthesized regardless of the migration of the scatterers through the Doppler filters. Indeed, it does not make any specific assumption about the target motion, representing the optimum filter in the case of perfectly known motion [13]. Even though a higher computational complexity than standard RD is typically required, the increase in the computational power of modern processors is alleviating this issue. Unlike the IPP of RD, BPA permits imaging in a Cartesian plane representing the target coordinates; therefore, it allows direct extraction of the target's geometrical features from the image products. Hence, BPA is particularly suitable when comparing information acquired by different systems. This is the case when direct fusion/comparison of passive images taken from different aspect angles [19] and/or exploiting multiple illuminators of opportunity, e.g., digital video broadcasting-terrestrial (DVB-T) and DVB-S [20], is conducted.

Previous investigations using BPA for passive ISAR assumed exact knowledge of the target motion [9], [19], [20]. Although the obtained results showed its potential for passive ISAR imaging, the assumption of perfect knowledge of the target motion was clearly unrealistic. To overcome this issue, in this work, we remove the known-target-motion assumption by analyzing the robustness of BPA in passive ISAR with respect to motion estimation errors. Particularly, we focus on the impact of inaccuracies in the knowledge of the rotational motion in general bistatic geometries, considering different rotation motion types. (Some preliminary results considering a DVB-S-based passive radar have been anticipated in [21] for targets undergoing uniform rotations.) Other contributions in the literature deal with bistatic ISAR [22], [23], [24] and, more specifically, with passive bistatic ISAR [1]. These works focus on the impact of (time-varying) bistatic geometries on ISAR imaging when conventional monostatic RD processors are used. In contrast, the adoption of BPA enables effective image formation regardless of the time-varying behavior of the bistatic angle. Therefore, given the difficulty of achieving fine motion estimates in passive systems, we study the impact of errors in the estimate of the target motion on the quality of the image focused on the Cartesian plane. It is also worth stating that, although detailed for the passive radar case, where the exploitation of opportunistic waveforms might entail a limited capability to accurately recover the target dynamics, the derivations provided in this work can in principle be applied to any class of bistatic ISAR systems, especially those characterized by coarse range resolution.
In this frame, this article analyzes the effects of estimation errors both on the resolution characteristics and on the distortion affecting the images. For the former task, the bistatic point spread function (PSF) in the BPA plane will be analytically derived for both the ideal (i.e., error-free) and perturbed (i.e., with motion estimation errors) cases. Later, the distortion effects will be evaluated, providing closed-form relationships between the error types and the mispositioning of target scatterers in the resulting image. The ultimate goal of this work is to provide passive ISAR users with analytical tools for understanding the robustness of the focusing under different types of operative conditions, so as to establish the level of accuracy to be guaranteed by the motion estimation task. Exemplary numerical results obtained under different use cases are provided to illustrate the tolerance to different motion estimation errors that can be accepted depending on a few possible user needs. Moreover, experimental results using both terrestrial (DVB-T) and satellite (DVB-S) illuminators of opportunity are provided, showing the image quality deterioration in the presence of erroneous kinematic information.

The rest of this article is organized as follows. Section II describes the operative conditions and the BPA focusing, including the resolution properties of the image, under the hypothesis of exactly known target motion. Section III analyzes the impact of motion estimation errors, both in terms of PSF and image distortion. Numerical results for a few exemplary use cases are provided in Section IV. Section V shows experimental results obtained with different illuminators of opportunity. Finally, Section VI concludes this article.

Notations: We introduce here some notations used throughout this article. Scalars are denoted by nonboldface type, vectors by boldface lowercase letters, and matrices by boldface uppercase letters. Superscripts (•)^T and (•)^* denote transpose and complex conjugate, respectively. The dot notation ẋ denotes the derivative with respect to the slow time, while the Euclidean norm of vector x is denoted as ||x||. Unit vectors are identified by the hat accent x̂. The diacritic sign ~ denotes a quantity modified with respect to its nominal value due to the nonideal knowledge of the target kinematics: let x, x, and X be the actual values of a scalar, a vector, and a matrix, respectively; then x̃, x̃, and X̃ denote their corresponding values corrupted by the estimation errors.
A. Operative Conditions and BPA Image

The operative conditions comprise a receive-only device (Rx), a transmitter of opportunity (Tx), and a target (TgT). We do not make any specific assumption on the particular bistatic geometry or illuminator of opportunity, except that it provides a relatively coarse range resolution (i.e., narrowband signals). The target is modeled as a rigid body in the far field composed of a number of scatterers with constant amplitude during the aperture time T_a (CPI). As usual in the ISAR literature, we decompose its motion into the translation of a reference point and the rotation of the target body around that point (target fulcrum). As the focus here is on targets undergoing their own rotations (e.g., ship targets), the former is assumed negligible in the aperture time or already compensated, while the latter gives rise to the Doppler gradient making the imaging effective. Namely, we are assuming that the residual translational motion (if it exists) does not entail residual range and Doppler migration. This is likely fulfilled because of the coarse range resolution and because target rotations are usually exploited in scenarios in which target translations do not suffice to provide a Doppler gradient sufficient to achieve cross-range resolution.

Fig. 1(a) shows the considered right-handed (0, x, y, z) reference system, inertial with the target and centered in its fulcrum; namely, the x, y, and z axes represent the longitudinal, lateral, and vertical axes, respectively, of the target itself. Fig. 1(b) illustrates the relevant vectors for the subsequent derivations as they will be introduced in this section. For ease of future reference, the Nomenclature lists the main symbols with their descriptions.

Let u ∈ [-T_a/2, T_a/2] be the variable spanning the slow time. At the image time u = 0, d_R, d_T, and d_b are the Rx-to-TgT, Tx-to-TgT, and Tx-to-Rx distances. The target is assumed to pitch, roll, and yaw. Therefore, in the target-fixed reference system, the transmitter and receiver rotate around the target in opposite directions, and their instantaneous positions can be written in terms of the rotation matrix M_ω(u) applied to the unit vectors φ_R = [cos(θ⁰_R) cos(ψ⁰_R), sin(θ⁰_R) cos(ψ⁰_R), sin(ψ⁰_R)]^T and φ_T = [cos(θ⁰_T) cos(ψ⁰_T), sin(θ⁰_T) cos(ψ⁰_T), sin(ψ⁰_T)]^T of the lines Rx-TgT and Tx-TgT, with θ⁰_R (respectively θ⁰_T) denoting the aspect angle of the receiver (respectively transmitter) measured clockwise from the x-axis, and ψ⁰_R (respectively ψ⁰_T) denoting the receiver (respectively transmitter) elevation angle [see Fig. 1(a)], evaluated at the image time u = 0.
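As a quick check of the geometry just introduced, the sketch below builds the line-of-sight unit vectors from the aspect and elevation angles and recovers the bistatic angle of the DVB-S case study quoted in Section III (names are illustrative).

```python
# Line-of-sight unit vectors from aspect (theta) and elevation (psi) angles,
# and the bistatic angle beta between them.
import numpy as np

def los_unit_vector(theta_deg, psi_deg):
    th, ps = np.radians(theta_deg), np.radians(psi_deg)
    return np.array([np.cos(th) * np.cos(ps),
                     np.sin(th) * np.cos(ps),
                     np.sin(ps)])

phi_R = los_unit_vector(0.0, 0.0)    # receiver: theta = 0 deg, psi = 0 deg
phi_T = los_unit_vector(45.0, 37.0)  # transmitter: theta = 45 deg, psi = 37 deg
beta = np.degrees(np.arccos(phi_R @ phi_T))
print(round(beta, 2))                # 55.62, as in the Section III case study
```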
M_ω(u) is the instantaneous matrix accounting for the roll, pitch, and yaw motions, equal to the product of M_x(u), M_y(u), and M_z(u), the instantaneous rotation matrices accounting for roll, pitch, and yaw, whose entries are C_γ = cos[ϑ_γ(u)] and S_γ = sin[ϑ_γ(u)], ϑ_γ(u) being the instantaneous angle swept around the γ = x, y, z axis. By deriving ϑ_γ(u), γ = x, y, z, with respect to the slow time u, it is easy to obtain the instantaneous angular velocities ω_x(u), ω_y(u), and ω_z(u), denoting the roll, pitch, and yaw instantaneous rotations, which can be collected into the vector ω(u) = [ω_x(u), ω_y(u), ω_z(u)]^T. Considering the possibility of targets observed for significantly long CPIs (as often needed in the case of low-power illuminators or signals at lower frequencies), the rotation rate around each axis is expanded in a first-order Taylor series, thus accounting for a constant rotation plus a rate, i.e., ω_γ ≈ ω⁰_γ + ω̇_γ u, with the apex "0" denoting the value at the image time u = 0. Overall, the instantaneous rotation vector can be rewritten as ω(u) ≈ ω⁰ + ω̇ u.
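A minimal sketch of the elementary rotation matrices and of the first-order rotation-rate model follows; the composition order chosen for M_ω is one possible convention, since the source does not fix it here.

```python
# Elementary rotation matrices around x, y, z and the first-order rate model.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def M_omega(theta_x, theta_y, theta_z):
    # one possible composition order (assumption, not fixed by the text)
    return rot_z(theta_z) @ rot_y(theta_y) @ rot_x(theta_x)

def omega_inst(omega0, omegadot, u):
    """First-order model omega(u) ~ omega0 + omegadot*u, per axis (rad/s)."""
    return np.asarray(omega0) + np.asarray(omegadot) * u
```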
We assume the receiver to be equipped with a reference channel to collect the direct signal, plus a surveillance channel to collect reflections from the surveyed area. To achieve a focused image of the target in the passive scenario, the following main stages are usually implemented, as sketched in Fig. 2. As the illuminators of opportunity typically operate with signals continuous in time, data reformatting according to the equivalents of fast time and slow time (τ, u) is implemented. This is achieved by segmenting the received data into consecutive batches according to an equivalent pulse repetition interval (PRI), namely, τ ∈ [0, PRI]. Hereinafter, we will assume that the fictitious PRI has been chosen sufficiently short to avoid Doppler spectrum folding. After downconversion, for each batch, the data are range compressed by cross-correlating the complex envelope of the signal received in the surveillance channel with a replica of the direct signal registered by the reference channel. Then, by Fourier-transforming the slow time, an RD map of the surveyed area is obtained. Here, the target can be detected and the area of interest, containing the target energy, is cropped from the whole map. The resulting cropped map can be interpreted as an unfocused image of the target (if needed, the CPI exploited for ISAR imaging can be extended by juxtaposing data corresponding to consecutive RD maps). Depending on the particular conditions, advanced techniques could be required to detect the target under unfavorable power budget conditions, and additional signal processing stages could be needed to remove undesired disturbances, such as clutter and direct path interference [13]. The analysis of the impact of such effects is beyond the scope of this work and will not be further considered in the following. This means we assume the target is already detected and the motion estimation errors to be the only source of nonideality affecting the focusing.

The last stage is image focusing, aiming at producing a well-focused and interpretable image of the target to enable classification procedures. In our work, this step is performed through BPA, which produces an image in the (0, x, y, z) target reference system by compensating, over a grid, the instantaneous delay and phase according to the available kinematic parameter information. Our implementation of the BPA is as follows (see also [9] and [20]).

Let a = [x, y, z]^T be a hypothesized scatterer belonging to the target body. The signal received from a can be written in the fast-frequency and slow-time domain (f, u) as

s_a(f, u) = S(f) e^{-j2π(f + f_c)τ_a(u)},

where S(f) is the spectrum of the transmitted signal after compensation of the phase modulation induced by the data payload content, f_c is the central frequency, and τ_a(u) is the instantaneous time delay. In the geometry under consideration, it is given by

τ_a(u) = (||t_x(u) - a|| + ||r_x(u) - a||) / c,

c being the speed of light. Without loss of generality, let us assume the BPA produces a top-view image of the target at a given height z. Let I(x̄, ȳ; z) be the backprojected image; this is achieved as

I(x̄, ȳ; z) = ∫_{T_a} ∫_{B} s(f, u) e^{j2π(f + f_c)τ_ā(u)} df du,   (6)

where ā = [x̄, ȳ, z]^T and B is the exploited signal bandwidth. If the target kinematics are perfectly known, the BPA correctly integrates the energy in the fast frequency over the whole aperture time, thus maximizing the image quality as much as possible. The resolution properties of the image will be analyzed in the following sections, while Section III will address the case of motion estimation errors.
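The BPA in (6) can be sketched as a direct delay-and-phase compensation over the output grid. In the sketch below, rc is a hypothetical interpolator into the range-compressed data; the implementation is illustrative rather than the authors' code.

```python
# Minimal single-height BPA following (6): for each pixel, compensate the
# instantaneous bistatic delay and carrier phase, then integrate over slow time.
import numpy as np

def backproject(rc, u_axis, grid_xy, z, tx_pos, rx_pos, fc, c=3e8):
    """rc: callable (u_index, tau) -> complex range-compressed sample;
    tx_pos/rx_pos: callables u -> 3-D position in the target frame."""
    image = np.zeros(len(grid_xy), dtype=complex)
    for iu, u in enumerate(u_axis):
        t_x, r_x = tx_pos(u), rx_pos(u)
        for ip, (x, y) in enumerate(grid_xy):
            a = np.array([x, y, z])
            tau = (np.linalg.norm(t_x - a) + np.linalg.norm(r_x - a)) / c
            # conjugate-phase compensation of the received term exp(-j2*pi*fc*tau)
            image[ip] += rc(iu, tau) * np.exp(2j * np.pi * fc * tau)
    return image
```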
B. Image PSF

The resolution properties of the image can be described by means of the generalized ambiguity function (GAF), representing the correlation coefficient between the returns from the target fulcrum 0 and a point a in its vicinity [25]. By setting P̄(f) = P(f)/∫P(f)df, P(f) being the power spectral density of the transmitted signal, (7) can be rewritten as

χ(a) = ∫_{T_a} ∫_{B} P̄(f) e^{j2πf τ_d(u)} e^{j2πf_c τ_d(u)} df du,   (8)

where τ_d(u) is the differential time delay between point a and the fulcrum. Accounting for translational motion compensation, without loss of generality we can set τ_0(u) = 0, and therefore τ_d(u) = τ_a(u). It is worth noting that, since all the scatterers share the same translational velocity, τ_d(u) is invariant to any residual translational motion.

The two exponential terms in the equation above imply range and Doppler migrations, respectively. Under ideal knowledge of the target kinematics, the range migration experienced over the aperture time will be perfectly compensated by the BPA; therefore, the first exponential term can be simplified as exp{j2πf τ⁰_a}, where the apex "0" denotes the value at the reference instant. At the same time, the Doppler migration will be compensated as well; therefore, the second exponential term becomes exp{j2πf_c[τ⁰_a + τ̇_a u]}, where τ̇_a is the differential delay rate. By using these positions and neglecting scale factors, (8) simplifies to

χ(a) ≈ e^{j2πf_c τ⁰_a} p(τ⁰_a) sinc(T_a f_c τ̇_a).   (9)

It can be observed that χ(a) is given by the product of three terms: an initial phase, the time response of the signal of opportunity p(·) (specifying the range resolution properties), and a sinc function characterizing the Doppler properties.

While (9) describes the resolution properties in the delay (i.e., range) and Doppler domains, its arguments can be manipulated to obtain the PSF representation in Cartesian space. For this purpose, considering the limited size of the targets of interest with respect to the transmitter- and receiver-to-target distances, we can approximate τ_a(u) around the fulcrum, arresting the series expansion at the first order,

τ_a(u) ≈ [∇τ(u)]^T a = (1/c)(φ_R + φ_T)^T M_ω(u) a,   (10)

where ∇ is the vector differential operator and (φ_R + φ_T) = 2 cos(β/2) φ_β, β being the bistatic angle and φ_β the unit vector lying on its bisector [see Fig. 1(b)]. Then, (10) can further be expanded in a Taylor series around the image time. Arresting the series expansion at the second order,

τ_a(u) ≈ τ⁰_a + τ̇_a u + τ̈_a u²/2,   (11)

where Ṁ_ω and M̈_ω denote the first and second derivatives of M_ω(u) evaluated at u = 0 (for ease of reference, their derivations are included in Appendix A).

As M⁰_ω coincides with the identity matrix, the constant term is immediately obtained as

τ⁰_a = (2 cos(β/2)/c) φ_β^T a.   (12)

The delay rate is given by

τ̇_a = (2 cos(β/2)/c) (ω⁰ × φ_β)^T a,   (13)

where × denotes the cross product. Let ζ be the angle between ω⁰ and φ_β; then the cross product in (13) can be rewritten as ||ω⁰|| sin(ζ) ξ, where ξ is the unit vector normal to the plane containing ω⁰ and φ_β [see Fig. 1(b)], so that

τ̇_a = (ω_Eff/c) ξ^T a,   (14)

where ω_Eff = 2 cos(β/2) ||ω⁰|| sin(ζ).

For ease of future reference when defining the model of motion errors in Section III, the second-order term is also derived. Considering the derivation of M̈_ω (see Appendix A), τ̈_a is given by (15), where Ω is a lower triangular matrix defined as in (16).

By using (12) and (14) in (9), the passive ISAR image PSF after ideal BPA focusing as a function of the spatial coordinates is obtained as

χ(a) ≈ p((2 cos(β/2)/c) φ_β^T a) sinc((T_a ω_Eff/λ) ξ^T a),   (17)

λ being the wavelength, where the initial phase term has been omitted.

For the sake of simplicity, we set the BPA output grid to z = 0 (hereinafter referred to as the ground plane). Let K be the number of target scatterers; the image will be the coherent superposition of the K scatterer responses. The nominal position ā_k of the k-th scatterer in the image and its actual position a_k belong to the same isorange and the same iso-Doppler lines. Therefore, ā_k can be calculated by solving the following linear system:

φ_β^T ā_k = φ_β^T a_k,   ξ^T ā_k = ξ^T a_k,   with ā_k = [x̄_k, ȳ_k, 0]^T.   (18)

Namely, ā_k = P a_k (19), where P is the [2 x 3] matrix relating the positions of the scatterers in R³ to their positions in the image. (If BPA produces an image at a height z ≠ 0, matrix P can be obtained by modifying (18) accordingly.)

C. Image Resolution Properties

The couple (φ_β, ξ) sets the bistatic IPP, where φ_β and ξ define the bistatic range and azimuth resolution directions, respectively. Particularly, on the IPP, from (17), it is easy to see that the range and azimuth resolutions of the system (without loss of generality evaluated at -3 dB) are given by, respectively,

ρ_φ = k_p c / (2B cos(β/2)),   (20)
ρ_ξ = 0.886 λ / (T_a ω_Eff),   (21)

where B is the exploited signal bandwidth of the illuminator of opportunity and k_p is a factor accounting for the shape of p(·). However, unlike RD or polar format algorithms, BPA produces an image on a plane generally not coinciding with the IPP. Let α_φ and α_ξ be the directions of the projections of φ_β and ξ on the ground plane (measured counterclockwise from the x-axis), representing the image range and azimuth resolution directions, respectively, and let ψ_φ (respectively ψ_ξ) be the angle between φ_β (respectively ξ) and the ground plane [see Fig. 1(b)]. The values of the image range and azimuth resolutions can then be obtained as

ρ_{α_φ} = ρ_φ / cos(ψ_φ),   (22)
ρ_{α_ξ} = ρ_ξ / cos(ψ_ξ).   (23)

It is worth pointing out that in the bistatic SAR case, range and azimuth resolution directions are generally not orthogonal [25]. In contrast, in the ISAR framework considered here, φ_β and ξ are orthogonal by construction, as results from the cross product in (13). Nevertheless, because of the projections in (22) and (23), the ground range and ground azimuth resolution directions α_φ and α_ξ are generally not orthogonal, and therefore ρ_{α_φ} and ρ_{α_ξ} cannot be proper indicators of the image resolution properties, which are instead captured by the resolution ellipse. This ellipse is defined as the locus of points for which, on the ground plane, |χ(a)| = 1/√2 (-3 dB resolution), and its major and minor axes, ρ_MAX and ρ_min, respectively, define the worst and best values of the spatial resolution. The resolution ellipse is evaluated as in [26], Equation (24). Solving (24) for each α, we can get the resolution parameter ρ_α(α) describing how the spatial resolution varies with the direction α; the directions and the corresponding values of the best and worst spatial resolutions in the BPA plane are therefore obtained as in (25).
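The resolution formulas (20)-(21) can be checked numerically. With the DVB-T parameters used in Section IV and k_p = 0.886 (the -3 dB width factor of a sinc-shaped p(·)), the sketch below reproduces the figures quoted there: 2.52/18.08 m for β = 30° and 3.78/27.17 m for β = 100°.

```python
# Range and azimuth resolutions per (20)-(21) for the DVB-T case study
# (B = 7.61 MHz, fc = 626 MHz, Ta = 2.5 s, yaw speed 2 deg/s, zeta = 90 deg).
import numpy as np

c, kp = 3e8, 0.886
B, fc, Ta = 7.61e6, 626e6, 2.5
omega0 = np.radians(2.0)                        # |omega^0| in rad/s
for beta_deg in (30.0, 100.0):
    cb = np.cos(np.radians(beta_deg) / 2)
    omega_eff = 2 * cb * omega0                 # sin(zeta) = 1 for pure yaw
    rho_range = kp * c / (2 * B * cb)           # ~18.08 m / ~27.17 m
    rho_az = kp * (c / fc) / (Ta * omega_eff)   # ~2.52 m / ~3.78 m
    print(beta_deg, round(rho_range, 2), round(rho_az, 2))
```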
III. IMPACT OF MOTION ESTIMATION ERRORS

So far, we have assumed perfect knowledge of the system topology. Namely, with reference to Fig. 1, we assumed perfect knowledge of the relative Tx, Rx, and TgT positions (i.e., in the target-fixed reference system, of the vector positions t_x and r_x) as well as of the target dynamics. However, this information must generally be recovered directly from the received data and is, therefore, subject to nonidealities. Errors on the vector positions t_x and r_x entail estimation errors on the bistatic angle and on the direction of the range resolution unit vector. However, in typical surveillance scenarios, the inaccuracy of the relative target position is much lower than its distance from the transmitter and receiver. Therefore, estimation errors on β and φ_β can usually be neglected. Also, variations of the bistatic angle during the aperture time can be neglected, since we only consider targets undergoing relatively slow translations and rotations around their fulcrum. Moreover, while the time-varying behavior of the bistatic angle can significantly affect focusing performance in the case of RD processors [22], [23], [24], BPA does not make any specific assumption on the motion characteristics and can, therefore, handle such a case.

The major sources of nonideality affecting the BPA data focusing are the estimation errors on the target rotation vector. Indeed, the ω vector can significantly change its orientation among different frames, and it can be characterized by nonuniform speed during the selected aperture time [24], [27]. Its estimation is typically quite challenging even for active radar systems, and, in the passive ISAR framework, the lack of a proper power budget and of dedicated waveform design can worsen the situation significantly.

Understanding how residual rotational motion estimation errors affect the final image product may allow setting proper bounds on the required accuracy of the motion estimation module in order to guarantee a reliable interpretation of the images and their proper exploitation. To this purpose, in this section, after introducing a straightforward model for the motion errors, we will derive the PSF of the image when BPA works under nonexact knowledge of the target motion; then, the distortion effects caused by such inaccuracies will be derived in closed form.

A. Image PSF and Resolution Properties Under Motion Estimation Errors

Let δω be the rotation motion estimation error, comprising a constant and a time-varying part, δω⁰ = [δω_x, δω_y, δω_z]^T and δω̇ = [δω̇_x, δω̇_y, δω̇_z]^T, respectively, where δω_γ and δω̇_γ (γ = x, y, z) denote the errors on the single components. Therefore, the estimated rotation vector can be written as ω̃(u) = ω(u) + δω⁰ + δω̇ u. Consequently, an erroneous instantaneous time delay τ̃_a(u) ≠ τ_a(u) is injected into the "BPA focusing" block in Fig. 2, and the resulting image is written as in (27).
Like the actual instantaneous time delay, τ̃_a(u) can also be expanded in a Taylor series, giving rise to

τ̃_a(u) ≈ τ⁰_a + τ̃̇_a u + τ̃̈_a u²/2,   (28)

where the constant term coincides with the actual one, as it does not depend on the rotation vector. By replacing ω⁰ with ω̃⁰ in (13), the apparent delay rate τ̃̇_a in (29) is immediately obtained. The apparent derivative of the delay rate can be evaluated by replacing in (15) the matrix Ω and the vector ω̇ with their apparent versions. Matrix Ω̃ can be obtained from (16) by replacing in each of its entries ω⁰_γ with ω⁰_γ + δω⁰_γ. Thereby, it is easy to verify that Ω̃ can be written as

Ω̃ = Ω + δΩ,   (30)

where the matrix δΩ denotes the error term. Therefore, we can write τ̃̈_a = τ̈_a + δτ̈_a (31).

Overall, because of the error vector δω, an instantaneous error δτ_a(u) = δτ̇_a u + δτ̈_a u²/2 must be taken into account in the BPA image formation procedure, and the resulting image (27) can be rewritten as

Ĩ(x̄, ȳ; z) = ∫_{T_a} ∫_{B} s(f, u) e^{j2π(f + f_c)τ_ā(u)} e^{j2πf δτ_ā(u)} e^{j2πf_c δτ_ā(u)} df du.   (32)

It can be observed that the error term δτ_a entails a residual range migration as well as a residual Doppler migration, represented by the two exponential terms in the integral. Concerning the residual range migration, it is worth pointing out that only scatterers quite far from the fulcrum undergo a tangible variation of the range position when τ̃_a is used in lieu of τ_a. Because of the limited size of the targets of interest and the coarse range resolutions achievable with opportunistic signals (further worsened by the bistatic geometry), the residual range migrations due to δω can be neglected, and (32) simplifies to (33), where only the residual Doppler term is retained.

The PSF of the image can be derived by injecting the Doppler migration term into the GAF evaluation, carrying to (34), shown at the bottom of this page, where m(τ̇_a, δτ̇_a; δτ̈_a) denotes the solution of the integral over the slow time. With respect to the ideal solution in (9) [i.e., m = sinc(T_a f_c τ̇_a)], in the case of motion estimation errors this comprises a Doppler shift (δτ̇_a) and a defocusing term (δτ̈_a). The effects of the two terms are analyzed in the following sections.

1) Doppler Shift: Let us first assume that the defocusing effect can be neglected (i.e., δτ̈_a ≈ 0). Under this condition, m(·) in (34) takes the form sinc[T_a f_c (τ̇_a + δτ̇_a)]. To have a representation of this PSF in spatial coordinates, we can write δτ̇_a as in (13) by replacing ω⁰ with δω⁰. That is, by replacing the actual instantaneous angular velocity with its error version, we obtain

δτ̇_a = (δω_Eff/c) δξ^T a,   (35)

where δζ is the angle between δω⁰ and φ_β, δξ is the unit vector normal to the plane containing δω⁰ and φ_β, and δω_Eff is defined as 2 cos(β/2) ||δω⁰|| sin(δζ). Therefore, omitting the initial phase term, the image PSF can be rewritten as (36).
Comparing (36) with the ideal PSF (17), we can observe that the error term δω⁰ implies an apparent azimuth resolution direction equal to ξ̃ in lieu of ξ, while the parameter defining the azimuth resolution value, ω_Eff, becomes ω̃_Eff. Therefore, the apparent azimuth resolution direction in the image is given by α_ξ̃, denoting the projection of ξ̃ on the ground plane, while the image azimuth resolution (23) is modified into its apparent version

ρ̃_{α_ξ} = ρ̃_ξ / cos(ψ_ξ̃),   (37)

where ψ_ξ̃ is the angle between ξ̃ and the ground plane. Then, by means of (36), the resulting resolution cell of the image can be calculated by resorting to the resolution ellipse as done in (24), looking for the locus of points for which |χ̃(a)| = 1/√2. This provides the apparent versions of the best and worst spatial resolutions of the image, ρ̃_min and ρ̃_MAX, with directions α̃_min and α̃_MAX, respectively. Therefore, on the ground plane, χ̃(a) at first order can be regarded as a scaled and rotated version of χ(a).

Note that the estimation task must provide both an orientation and a magnitude of the rotation vector; namely, it must provide estimates of both ξ and ω_Eff. However, in some scenarios, the orientation of the rotation vector could be known in advance, for example, if the target is forced to rotate around a specific direction, as in the case of a ship observed in low sea state conditions, where pitch and roll could be neglected. Therefore, we can consider two possible situations: 1) ξ̃ = ξ and 2) ξ̃ ≠ ξ. The former case occurs when the error on the constant part of the rotation vector is confined to its magnitude; the estimated version of the rotation vector can then be written as ω̃⁰ = κ_ω ω⁰ with κ_ω > 0.

A simple case study is here presented to illustrate the shapes of the ideal and apparent PSFs. With reference to the passive ISAR geometry in Fig. 1, let us assume a passive receiver with θ⁰_Rx = 0° and ψ⁰_Rx = 0° collecting the signal emitted by one transponder of a DVB-S transmitter of opportunity, with bandwidth 32 MHz and carrier frequency 11.347 GHz, reflected by a rotating target with a given rotation vector ω⁰. The transmitter illuminates the target with aspect and elevation angles θ⁰_Tx = 45° and ψ⁰_Tx = 37°, respectively, thus resulting in a bistatic angle β = 55.62°. The aperture time is set equal to 0.5 s and, for the sake of simplicity, a constant power density of the DVB-S signal is assumed over the receiver bandwidth (i.e., p(·) takes the form of a sinc function). Fig. 3 shows the resulting PSFs and the corresponding resolution ellipses considering the following estimates of the rotation vector: 1) ω̃ = ω, i.e., the ideal case χ(a) [see Fig. 3(a)]; 2) an error confined to the magnitude of the rotation vector [see Fig. 3(b)]; and 3) an error on both the magnitude and direction of the rotation vector [see Fig. 3(c)].

Comparing the figures, it can be seen that, in the case of a motion error confined to the magnitude of the rotation vector, χ̃(a) essentially undergoes a shrinkage or an expansion over the direction of the image azimuth resolution, depending on whether the rotation speed is over- or underestimated. In the case of motion estimation errors on both the magnitude and direction of the rotation vector, the nominal and apparent azimuth resolution directions differ and, as a consequence, χ̃(a) undergoes both a rotation and a scaling with respect to χ(a). In the considered example, the best spatial resolution direction α_min is equal to 108.8° in the ideal case χ(a) and it remains unaltered for χ̃(a) when ξ̃ = ξ; in contrast, for χ̃(a) when ξ̃ ≠ ξ, α̃_min = 113.6°.
2) Defocusing: So far, we have assumed a negligible quadratic phase term arising from the motion estimation error. If such a condition does not apply, m(τ̇_a, δτ̇_a; δτ̈_a) takes the form of a blurred sinc function centered in (τ̇_a + δτ̇_a), with higher sidelobe levels according to δτ̈_a. A criterion to avoid the k-th scatterer being significantly defocused can be set by imposing that the uncompensated quadratic phase at the edge of the CPI (i.e., at u = T_a/2) stays below a given angle φ_lim (depth-of-focus criterion), i.e., from (31) and (33),

δφ_k = 2πf_c |δτ̈_{a_k}| (T_a/2)²/2 ≤ φ_lim.   (38)

In active high-resolution imaging systems, a widely adopted assumption is φ_lim = 90°; however, this is somewhat arbitrary, and the specific criterion can be set depending on the particular user requirements [28]. In the passive ISAR scenario, due to the challenging conditions and the lower expected image quality, this criterion might be relaxed, allowing φ_lim greater than 90°. Some examples of image PSFs for different δφ_k values are provided in the following.

Actually, for a given level of motion estimation errors, scatterers undergo more significant defocusing when the CPI is extended, especially in the case of uncompensated accelerations (i.e., δω̇ ≠ 0). As a case study, let us consider the geometry in Fig. 3 and assume T_a = 0.8 s, while the target rotation includes both a constant rotation vector ω⁰ and a rotation rate ω̇ (in °/s²). Fig. 4 shows the residual quadratic phase at the edge of the CPI as a function of the scatterer's coordinates (assuming them to lie on the ground plane). As is apparent, for equal distance from the fulcrum, a different level of defocusing is experienced according to the scatterer's angular position.

To show the resulting PSFs when affected by different quadratic phase terms, BPA as in (32) has been implemented by simulating the returns from point scatterers in the following positions: (0, 0) m (i.e., the target fulcrum), δφ₁ = 0°; (-16, 19) m, δφ₂ ≈ 90°; and (-33, 38) m, δφ₃ ≈ 180°. Fig. 5 shows the resulting images (the figures have been centered around the scatterer position in the image: because of δω⁰, each scatterer appears in a position different from the nominal one, as will be addressed in the following section). Comparing the images, it can be observed that the focusing quality progressively deteriorates, also causing a loss in SNR. A slight defocusing is observed for δφ_k ≤ 90°, a medium defocusing is experienced for 90° < δφ_k ≤ 180°, while the focusing capability is severely compromised when δφ_k > 180°.
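A sketch of the depth-of-focus check in (38) follows; the residual delay-rate-derivative value used below is purely illustrative.

```python
# Depth-of-focus criterion (38): residual quadratic phase at u = Ta/2.
import numpy as np

def quad_phase_deg(fc, Ta, dtau_ddot):
    # phase of exp{j*2*pi*fc*dtau_ddot*u^2/2} evaluated at u = Ta/2, in degrees
    return np.degrees(2 * np.pi * fc * abs(dtau_ddot) * (Ta / 2) ** 2 / 2)

fc, Ta = 11.347e9, 0.8         # DVB-S carrier, extended CPI as in the case study
dtau_ddot = 1e-12              # hypothetical residual second derivative, s^-1
phi_k = quad_phase_deg(fc, Ta, dtau_ddot)
print(phi_k, phi_k <= 90.0)    # compare against phi_lim, e.g. 90 deg
```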
B. Image Distortions

As shown in (34), rotation errors entail a Doppler shift and a defocusing term for each scatterer. While the latter affects the quality of the focusing, the former entails an erroneous scaling of the Doppler axis, thus causing mispositioning of the target scatterers in the final image. The resulting image distortion negatively affects (geometrical) target size estimation, which is the main information typically used by passive ISAR users for classification procedures [29]. In this section, we derive closed-form equations to predict how the ideal scatterer position ā_k is mapped into the "apparent" position ã_k because of errors on the rotation vector. Particularly, we derive a matrix transforming ā_k into ã_k for a given observation geometry and given motion estimation errors, referred to as the deformation matrix.

In the general case, the estimation of the rotation vector implies an estimate of the azimuth resolution direction different from the nominal one, i.e., ξ̃ ≠ ξ. To find the apparent position of the scatterer, we can consider that δω does not affect the range position of the scatterer, i.e., ā_k and ã_k belong to the same isorange line. Moreover, the scatterer's nominal delay rate is equal to the delay rate calculated using the estimated version of the rotation vector and the apparent scatterer position. Combining these two constraints, the following linear system can be written:

φ_β^T ã_k = φ_β^T ā_k,   ω̃_Eff ξ̃^T ã_k = ω_Eff ξ^T ā_k.   (39)

Therefore,

ã_k = S_ξ̃ ā_k,   (40)

where S_ξ̃ is the deformation matrix mapping the scatterer with ideal position ā_k in the image plane into its apparent position ã_k.

As explained in Section III-A, when the ω⁰ estimation error is confined to the magnitude of the rotation vector (i.e., ω̃⁰ = κ_ω ω⁰), the nominal and apparent azimuth resolution directions coincide, i.e., ξ̃ = ξ. Let S_ξ be the deformation matrix pertaining to this case; it can be obtained from (40) by replacing ξ̃ with ξ and ω̃_Eff/ω_Eff with κ_ω. After some calculus (see Appendix B), this carries to the special form in (41).

From the deformation matrix in (40), as well as its special form (41), the k-th scatterer position error can be calculated as the distance between the nominal and apparent scatterer positions, i.e.,

δā_k = ã_k - ā_k = (S_ξ̃ - I) ā_k,   (42)

where I is the 2 x 2 identity matrix. As a case study, let us consider the same scenario as in Fig. 3 and a set of point-like scatterers. Fig. 6(a) shows the image focused via BPA using the nominal rotation vector [PSF in Fig. 3(a)], where the green markers denote the nominal scatterers' positions on the ground plane ā_k evaluated by (19). Then, BPA focusing is implemented injecting an estimation error such that ω̃ = 1.3 ω [PSF in Fig. 3(b)], and the corresponding image is shown in Fig. 6(b), where the red markers denote the theoretical apparent positions evaluated by the deformation matrix S_ξ in (41). As is apparent, the positions of the scatterers in the image correspond to the theoretical expectations. Finally, Fig. 6(c) shows the result of the BPA focusing using ω̃ = [0, -0.5, 2.1]^T °/s [PSF in Fig. 3(c)]. In this case, as ξ̃ ≠ ξ, the theoretical apparent scatterer positions have been calculated using the general form S_ξ̃ of the deformation matrix (40). Also in this case, the correspondence between the scatterer positions in the image and the theoretical expectations is achieved.
It is worth mentioning that a further simplification of the deformation matrix in (41) is possible when the ground plane and the IPP coincide. This situation occurs when the bistatic range resolution direction lies on the ground plane and the rotation vector is collinear with the vertical axis z, as, for example, in the case of a DVB-T-based passive radar located on the ground observing a ship undergoing a dominant yaw motion. As in such a case range and azimuth resolutions are orthogonal over the image, S_ξ can be simplified as

S_ξ^⊥ = M_{α_ξ}^{-1} Δ M_{α_ξ},   (43)

where the apex ⊥ denotes the case α_ξ = α_Θ + 90°, M_{α_ξ} is the 2-D clockwise rotation matrix by angle α_ξ, and Δ = diag(ω_Eff/ω̃_Eff, 1) is a scaling matrix. Namely, in these particular conditions, the apparent scatterer position can be calculated from the nominal one by applying: 1) a clockwise rotation by angle α_ξ, allowing a transition to a reference plane (x', y'), where x' and y' represent the azimuth and range resolution directions; 2) a scaling over x' according to the ratio between the nominal and estimated values of the target speed; 3) a counterclockwise rotation by angle α_ξ to return to the (x, y) reference system. Moreover, the magnitude of the position error (42) can be written as

||δā_k|| = |ω_Eff/ω̃_Eff - 1| ||ā_k|| |cos(α_k - α_ξ)|,   (44)

where α_k is the angular position of the scatterer on the ground plane (measured counterclockwise from the x-axis). Therefore, for the ground plane coinciding with the IPP and for equal distance from the fulcrum, the position error is maximum (in magnitude) for scatterers aligned with the azimuth resolution direction (i.e., α_k = α_ξ) and zero for scatterers lying on the range resolution direction (i.e., α_k = α_Θ). The derivations of (43) and (44) are reported in Appendix C.
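The rotate-scale-rotate structure of (43) and the error magnitude (44) can be sketched as follows; names are illustrative, and the scaling is written explicitly as the nominal-to-estimated speed ratio.

```python
# Simplified deformation (43)-(44): rotate into the (azimuth, range) frame,
# scale the azimuth coordinate, rotate back.
import numpy as np

def apparent_position(a_xy, alpha_xi_deg, omega_eff, omega_eff_est):
    al = np.radians(alpha_xi_deg)
    M = np.array([[np.cos(al), np.sin(al)],    # clockwise rotation by alpha_xi
                  [-np.sin(al), np.cos(al)]])
    S = np.diag([omega_eff / omega_eff_est, 1.0])
    return M.T @ S @ M @ np.asarray(a_xy)      # M.T undoes the rotation

def position_error(a_xy, alpha_xi_deg, omega_eff, omega_eff_est):
    # matches (44): zero along range direction, maximum along azimuth direction
    return np.linalg.norm(apparent_position(a_xy, alpha_xi_deg,
                                            omega_eff, omega_eff_est)
                          - np.asarray(a_xy))

# example: 10% overestimated speed, azimuth direction at 30 deg from the x-axis
print(position_error([20.0, 0.0], 30.0, 1.0, 1.1))
```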
IV. NUMERICAL ANALYSIS

In this section, we provide numerical analyses to investigate the robustness of the BPA-based passive ISAR focusing in different scenarios. A few possible user requirements on image quality are preliminarily defined, in order to establish a required accuracy level for the target motion estimation module. Particularly, five criteria are defined to measure the impact of the motion estimation error on the image area, from level 0 (negligible) to level 4 (severe).

Motion estimation errors are assumed negligible if they cause residual quadratic phase errors below φ_lim = 45° and if the maximum shift of the scatterer is lower than the best spatial resolution of the system ρ_min. However, such constraints can be particularly demanding for passive ISAR systems. A scatterer can be sufficiently well focused as long as the quadratic phase errors entail a residual phase below 90° [see Fig. 5(b)]. Moreover, it is worth pointing out that the exploitation of signals of opportunity makes the resolution cell generally much larger along α_MAX than along α_min, and therefore, the condition on the maximum shift could be further relaxed. Indeed, especially in scenarios characterized by coarse spatial resolutions [30], a less stringent requirement could be set by imposing that the maximum shift cannot exceed the equivalent diameter ρ_eq, defined as the diameter of the circle having the same area as the resolution ellipse, i.e., ρ_eq = sqrt(ρ_min ρ_MAX). Based on the considerations above, we can define two criteria corresponding to negligible and slight impact.

1) Level 0 (negligible impact): The position error is below the best spatial resolution of the image ρ_min, and the quadratic phase error does not exceed 45°. Under such conditions, the perturbed image is very close to the ideal one, and therefore no appreciable effects on the information contained in the image are expected.

2) Level 1 (slight impact): The shift is greater than ρ_min but does not exceed ρ_eq, and the quadratic phase error does not exceed 90°. Setting φ_lim = 90° still guarantees a good quality of the focusing; moreover, a shift lower than ρ_eq is assumed to only slightly affect the classifier performance. Under such conditions, some differences between the perturbed and ideal images can be observed, but these are not expected to sensibly affect the classifier performance.

For shifts larger than ρ_eq and/or residual quadratic phase errors larger than 90°, significant differences between the perturbed and ideal images begin to appear that could lead the classifier to commit errors. Particularly, three further criteria for a higher impact of the errors can be defined.

1) Level 2 (medium impact): The shift is still lower than ρ_eq, but the quality of the focusing deteriorates, the quadratic phase error being larger than 90° but still below 180°. Even though the geometric distortion is tolerable, SNR losses due to defocusing could partly compromise the capability of correctly extracting the target segment from the background.

2) Level 3 (high impact): φ_lim is set equal to 180°, and the position error is larger than ρ_eq but still lower than the worst spatial resolution of the system ρ_MAX. Under these conditions, geometric distortion is assumed to become relevant.

3) Level 4 (severe impact): If the quadratic phase error exceeds 180° and/or the position error is larger than ρ_MAX, the reliability of the information contained in the focused image is assumed severely affected, possibly compromising the success of the classification procedure.

For the sake of clarity, Table I summarizes these criteria. It is worth stressing that these are used here as an example, as they depend on the particular classifier behavior and settings. Different requirements could obviously be adopted while maintaining the quality of the results discussed in the remainder of the section.
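The criteria of Table I translate directly into a classification rule; the sketch below is one possible reading of the thresholds.

```python
# Impact-level classification per Table I, from a scatterer's position error
# (m) and residual quadratic phase (deg).
import numpy as np

def impact_level(shift, phase_deg, rho_min, rho_max):
    rho_eq = np.sqrt(rho_min * rho_max)  # equivalent diameter of the ellipse
    if shift <= rho_min and phase_deg <= 45:
        return 0   # negligible
    if shift <= rho_eq and phase_deg <= 90:
        return 1   # slight
    if shift <= rho_eq and phase_deg <= 180:
        return 2   # medium
    if shift <= rho_max and phase_deg <= 180:
        return 3   # high
    return 4       # severe
```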
First, let us consider a passive radar system exploiting DVB-T signals, using a single channel with a bandwidth of 7.61 MHz and a carrier frequency of 626 MHz [2], [20]. The system geometry is given by a transmitter and receiver coplanar with the target and characterized by the aspect angles θ⁰_T and θ⁰_R. During an aperture time equal to 2.5 s, the target is supposed to constantly yaw at a speed of 2°/s. In the following, we assume exact knowledge of the direction of the rotation vector and consider the yaw speed estimated with different error levels, in the form of δω_z equal to a percentage of the absolute velocity (i.e., the ω̃⁰ = κ_ω ω⁰ case). Fig. 7 shows the norm of the position error ||δā_k|| as a function of the nominal scatterer coordinates over an area of 200 m x 200 m, which is supposed to be large enough to contain any target of practical interest in the considered scenario, when δω_z = 10% ω_z (i.e., κ_ω = 1.1). We point out that in this case study the ground plane coincides with the IPP, and therefore the position error can be evaluated by means of (44). As expected, the position error is null for points lying along the range resolution direction α_Θ (white dotted line in the figure), while it is maximized over the azimuth resolution direction α_ξ (black dotted line).

The maximum allowed area can be defined as the region of the (x, y) plane containing the nominal scatterers' positions for which the impact of δω_z does not exceed one of the criteria defined above. Since both the resolution cell size and the residual quadratic phase δφ_k vary with the bistatic angle, we consider two bistatic geometries corresponding to bistatic angles β₁ = 30° and β₂ = 100°. In the former case, the PSF is characterized by ρ_min1 = 2.52 m and ρ_MAX1 = 18.08 m, and therefore ρ_eq1 = 6.75 m, while in the latter ρ_min2 = 3.78 m and ρ_MAX2 = 27.17 m, and therefore ρ_eq2 = 10.15 m. The maximum allowed area is here calculated for both geometries for different motion estimation errors, by considering δω_z = (κ_ω - 1) ω_z with (κ_ω - 1) = 5%, 10%, 15%, 30%. Fig. 8(a) and (b) shows the allowed areas experiencing negligible impact (level 0) for bistatic angles β₁ and β₂, respectively, while Fig. 8(c) and (d) shows the allowed areas for slight impact (level 1). Comparing Fig. 8(a) and (b) with Fig. 8(c) and (d), we can observe that, for the same error percentage, targets belonging to larger dimensional classes can be imaged if the classifier can reliably operate under the level-1 criteria. However, a general tolerance of the focusing is observed even for the most stringent requirements (level 0), allowing an almost ideal focusing for quite inaccurate rotation motion estimates for target sizes of practical interest. It is worth pointing out that in the considered scenario of constant rotation speed, the main effect of the motion estimation errors is the geometrical distortion of the resulting image. Noticeably, such a distortion does not depend on the absolute value of the rotation speed, but only on the relative error ω̃_Eff/ω_Eff and, if ξ̃ ≠ ξ, on the difference between the nominal and estimated azimuth resolution directions. Moreover, the scatterer position error is not a function of the bistatic angle: the differences between the results for bistatic angles β₁ and β₂ in Fig. 8 are due to the different resolutions experienced, as both range and azimuth resolution are scaled by cos(β/2) with respect to their monostatic versions.
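Under the level-0 shift constraint and the special case (44), a maximum allowed extent along the azimuth direction follows in closed form; the sketch below assumes a pure magnitude error κ_ω on a constant yaw, with the scaling entering as 1/κ_ω, and is illustrative rather than a reproduction of Fig. 8.

```python
# Level-0 maximum extent along the azimuth direction from (44):
# |1/kappa - 1| * r <= rho_min for scatterers on that direction.
def max_allowed_azimuth_extent(rho_min, kappa):
    return rho_min / abs(1.0 / kappa - 1.0)

print(round(max_allowed_azimuth_extent(2.52, 1.10), 1))  # beta = 30 deg, 10% error
```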
In the previous case study, the target was supposed to undergo a rotational motion confined to the plane, around its vertical axis. However, in most cases, the target exhibits complex dynamics, including pitch and roll rotations. Often, especially in the case of ships, these are modeled as sinusoidal rotations: the angle swept with time around the γ-th axis (γ = x, y) is given by ϑ_γ(u) = A_γ sin(2πf_γ u + ϕ_γ), where A_γ, f_γ, and ϕ_γ are the amplitude, frequency, and initial phase of the sinusoidal motion [31]. In such conditions, the pitch and roll rotations exhibit both a constant and a time-varying component, equal to ω⁰_γ = A_γ 2πf_γ cos(ϕ_γ) and ω̇_γ = -A_γ (2πf_γ)² sin(ϕ_γ), respectively. The recovery of the target's 3-D (and, moreover, time-varying) rotation vector is a very challenging task even for dedicated active radar imaging systems, typically requiring high range resolution, high SNR, and multichannel (colocated or distributed) receiver configurations [32], [33], [34], [35], [36]. When exploiting illuminators of opportunity, this goal might become extremely difficult, and often horizontal rotations (i.e., due to pitch and roll) are neglected in the focusing procedures. The methods presented in this article are exploited here to investigate the tolerance of the BPA focusing when the presence of pitch and roll rotations is neglected. Particularly, we assume a ship target undergoing a dominant yaw motion with constant speed and additional sinusoidal pitch and roll rotations. The motion estimation module is supposed to be able to recover the yaw motion exactly, while roll and pitch are completely neglected during the focusing; namely, the estimated rotation vector retains the yaw component only.

To analyze the tolerance of the focusing for increasing pitch and roll, we fix the frequency and initial phase of the rotations and progressively increase their amplitude. Particularly, f_x and f_y have been set equal to 0.1 Hz, ϕ_x = 30°, and ϕ_y = -45°, while A_x and A_y have both been set equal to 0°, 0.5°, 1°, and 1.5° in four different case studies (A-D). Table II lists the target motion and acquisition parameters.
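The constant and time-varying rotation components induced by the sinusoidal pitch/roll model can be evaluated directly for the four case studies; the sketch below uses the parameters quoted above (names are illustrative).

```python
# omega0 = A*2*pi*f*cos(phi) and omegadot = -A*(2*pi*f)**2*sin(phi)
# for theta(u) = A*sin(2*pi*f*u + phi), evaluated for cases A-D.
import numpy as np

f_x, f_y = 0.1, 0.1                  # Hz
phi_x, phi_y = np.radians(30), np.radians(-45)
for A_deg in (0.0, 0.5, 1.0, 1.5):   # cases A, B, C, D
    A = np.radians(A_deg)
    w0 = (A * 2 * np.pi * f_x * np.cos(phi_x),
          A * 2 * np.pi * f_y * np.cos(phi_y))
    wdot = (-A * (2 * np.pi * f_x) ** 2 * np.sin(phi_x),
            -A * (2 * np.pi * f_y) ** 2 * np.sin(phi_y))
    print(A_deg, np.degrees(w0), np.degrees(wdot))  # deg/s and deg/s^2
```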
Fig. 9 shows the achieved images for the four case studies A-D listed in Table II (corresponding to different levels of the sinusoidal roll and pitch amplitudes) along with the 3-D point-scatterer target model. In these figures, each scatterer has been displayed with a marker according to the experienced level of impact of the motion estimation errors, to highlight which parts of the target belong to the classes defined in Table I. Fig. 9(a) corresponds to null pitch and roll (case A), therefore representing the ideal image and serving as reference. When A_x = A_y = 0.5° [case B, Fig. 9(b)], only scatterers quite close to the fulcrum experience almost no impact of the errors (blue "•" markers), but most of the target can still be imaged with a limited impact on the result (green "+" markers), and only the ship's endpoints are affected by medium errors (magenta "×" markers). In fact, comparing the BPA image with the ideal result in Fig. 9(a), very limited differences can be observed. Increasing the amplitudes to 1° [case C, Fig. 9(c)], a general worsening of the image can be appreciated. In this case, most of the scatterers belonging to the ship deck are quite significantly shifted from their nominal positions (red "*" markers), possibly affecting the capability of correctly extracting detailed information about the target shape. Moreover, as points outside the ground plane are the most sensitive to errors on the horizontal components of the rotation vector, the scatterers belonging to the mainmast are affected by a severe impact of neglecting pitch and roll (black markers). Finally, when the amplitudes reach 1.5° [case D, Fig. 9(d)], most of the scatterers on the target edges are severely affected by the motion estimation errors. Comparing the ISAR images in Fig. 9(a) and (d), it can be seen that, with respect to the ideal image, the target shape is much harder to recognize. The SNR losses caused by the defocusing make it difficult to extract the individual scattering returns from the noisy background; combined with the image distortion, this reduces the match score with reference ship models [37], thus increasing the probability that a classifier assigns the target to a different type.

A. DVB-T-Based Passive ISAR

Passive radar systems based on digital terrestrial television illumination were among the first used to obtain images of man-made targets via passive ISAR approaches, DVB-T being the most widely adopted standard worldwide [2]. As these sources operate in the UHF/VHF band, the resulting images may be characterized by a poor azimuth resolution that, when combined with the limited bandwidths (typically around 8 MHz), often results in low-resolution products. On the other hand, they can benefit from relatively large transmitted powers, allowing images characterized by significant SNRs to be achieved. Moreover, by extending the CPI and adopting multichannel or compressive sensing-based approaches, the poor spatial resolution issue can be mitigated [3].

An experimental campaign has been conducted in order to collect DVB-T data in Livorno (Italy) using the experimental passive radar system ATLAS [see Fig. 10(a)], developed by Fraunhofer FHR. Such a system is a scalable software-defined receiver. Table III lists the parameters of the acquisition, pertaining to a time interval of about 6 s which is used for imaging.
Fig. 11(a) shows the imaging result using the known-motion BPA. It can be observed that, due to the particular operative conditions, different scattering points along the x-axis, namely along the deck line, can be identified, while the size of the PSF along the y-axis is larger than the ferry's width. Therefore, the major information that can be extracted from this image is the length of the ferry. The black dotted line superimposed on the figure denotes the ship's centerline, whose edges have been highlighted with the black "•" markers, taken at a distance of 32 m (i.e., compliant with the ferry's nominal length). Then, two further images have been focused using erroneous versions of the rotation vector, as described in the following.

As can be observed from Table III, the ferry was undergoing a dominant yaw motion. Particularly, the unit vector of ω⁰ is approximately equal to [0 0 −1]^T, while its magnitude is about 1.87°/s. In this case study, we assume perfect knowledge of ω⁰ and consider motion estimation errors confined to the direction of the rotation vector. Fig. 11(b) and (c) shows the resulting images focused using the erroneous rotation vectors. Both images are affected by negligible defocusing (δϕ_k < 45° within the target area, see (38)); however, a quite significant distortion is experienced. In the figures, the superimposed black "•" markers show the theoretical positions of the ferry's edges calculated according to (40), allowing one to appreciate how the individual scatterers are mapped into positions according to the theory. Therefore, the proposed theoretical framework allows the length estimation error to be predicted. Specifically, identifying the ship's endpoints in the ISAR images and measuring the span over the x direction, in the nominal case a value very close to the actual one is found, while about 45 m and 55 m are obtained from the ISAR images in Fig. 11(b) and (c), respectively, leading to relative errors of about 40% and 70%.

B. DVB-S-Based Passive ISAR

The exploitation of DVB-S signals of opportunity is one of the most recent trends in the passive radar community, turning out to be a promising solution for the protection of critical infrastructures thanks to nearly global availability, relatively wide bands, and fine velocity resolutions [38], [39], [40]. DVB-S-based passive ISAR images can be characterized by fine spatial resolutions that could even be comparable to dedicated imaging systems, especially when signals emitted by multiple transponders are acquired. Indeed, the DVB-S spectrum is densely populated by multiple channels with small gaps, which can be combined to enhance the range resolution [8]. Moreover, signals with both horizontal and vertical polarization states are available, thus providing further degrees of freedom to enrich the image's information space [9], [19].

To obtain DVB-S-based passive ISAR data, experiments were conducted alongside the Rhine in Bonn, Germany. The experimental hardware was the passive radar system SABBIA developed by Fraunhofer FHR, as shown in Fig. 12(a).
This comprised two identical receiver front-ends, one for surveillance and one for reference, both using a dish antenna with horizontal polarization. Each antenna was connected to a global positioning system/IMU unit to obtain precise information about location and antenna pointing direction, and they were both locked to the same 10 MHz reference signal to ensure phase coherence. The transmitter of opportunity was the Astra 1KR satellite located at the 19.2°E orbital position, which provides coverage of the entire European continent [41]. An instantaneous signal bandwidth of 80 MHz was acquired, centered on the carrier frequency 11 347 MHz of the DVB-S spectrum. Fig. 12(b) shows the power spectral density measured by the reference channel, where it can be seen that slightly more than two transponders' emissions have been acquired. The target was the Königswinter ferry (46.24 m × 20 m), whose side-view photograph is shown in Fig. 12(c). The ferry was located at about 800 m from the receiver and was equipped with an IMU located on its central superstructure [see Fig. 12(c)] to record its kinematics. In the trials, the target was observed for about 200 s, during which it was rotating over a large angle, offering the chance to image it under different illumination and motion conditions.

The parameters of the first case study are listed in Table IV (DVB-S passive ISAR experimental PSF parameters). In this case, to experimentally obtain a PSF of the image, the processed bandwidth was limited to 25 MHz around the central frequency 11 361.75 MHz (i.e., +14.75 MHz from the acquired carrier) to process the signal pertaining to one single transponder, where the spectrum shows a nearly flat behavior [see Fig. 12(b)], and therefore, the corresponding range profile closely resembles a sinc function. However, as the DVB-S standard adopts a square-root raised-cosine pulse shaping and, therefore, the spectrum is only approximately flat, a more complicated function than a sinc should be used to model the range response [42]. For the sake of convenience, here we adopt the approximation p ≈ sinc(2(B_eq/c) cos(β/2) φ_β^T a), where B_eq is the reciprocal of the time resolution evaluated at the −4 dB level of the actual range response of the reference channel (here equal to ∼22 MHz).

We selected a short frame of about 200 ms in which an isolated strong return could be observed in the area corresponding to the central superstructure where the IMU was located. Particularly, we focused the image via BPA using three different values of the rotation vector, namely the actual motion (as registered by the IMU and reported in Table IV) and versions obtained by injecting an error of ±30% on the yaw component. Fig. 13 shows the resulting images, where the experimental PSFs are displayed on the top row along with the corresponding theoretical PSFs on the bottom row. A nice correspondence between the experimental and theoretical results can be appreciated for both the ideal and the motion-error cases. This correspondence is further confirmed by inspecting the cuts along the ρ_min axis direction shown in Fig. 14.
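As a quick numerical illustration of the sinc approximation above, the sketch below evaluates the range response and its −4 dB mainlobe width for a single-transponder B_eq of 22 MHz. The inclusion of the 1/c factor in the argument (so that it is dimensionless when the projected position is in metres) and the chosen bistatic angle are our assumptions.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_psf(r_m, b_eq_hz, beta_deg):
    """Sinc-shaped range response p = sinc(2*(B_eq/c)*cos(beta/2)*r), where
    r is the projection of the scatterer position onto the bistatic range
    resolution direction (phi_beta^T a), in metres."""
    arg = 2.0 * (b_eq_hz / C) * np.cos(np.deg2rad(beta_deg) / 2.0) * r_m
    return np.sinc(arg)  # numpy's sinc(x) = sin(pi*x)/(pi*x)

r = np.linspace(-20.0, 20.0, 4001)
p = range_psf(r, 22e6, 30.0)
mainlobe = r[np.abs(p) >= 10.0 ** (-4.0 / 20.0)]  # samples within -4 dB
print(f"-4 dB mainlobe width ~ {mainlobe.max() - mainlobe.min():.2f} m")
```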
Table V lists the parameters of a second case study, where several scattering points could be observed in the image. In this case, the whole acquired 80 MHz bandwidth has been exploited. It could be shown that, even though the spectrum of the signal over this interval includes some gaps, these are sufficiently small: they do not produce any grating lobes in the target area and thus do not significantly affect the range response. Particularly, B_eq has been measured equal to ∼70 MHz. Considering also the target rotation vector and the acquisition geometry, this results in a PSF having a quite fine spatial resolution in all directions, with a resolution ellipse with ρ_min ≈ 0.4 m and ρ_MAX ≈ 2.1 m, and therefore with an area of about 0.7 m². Such a fine resolution potentially allows a number of details of the imaged target to be identified and, in particular, may enable target edge identification for a quite accurate target dimension estimation.

Fig. 15(a) shows the top view of the target photograph, where the dotted lines highlight a grid to identify different portions of the ferry. Images have been focused via BPA using the nominal value of the rotation vector [see Fig. 15(b)] and by injecting an error equal to ±30% on the yaw component [see Fig. 15(c) and (d)]. It could be shown that these errors do not cause significant defocusing because of the limited target size (see (38)). In the ISAR images, 0 dB denotes the mean background power (for the sake of better visualization of the dominant scattering centers, the color dynamic range has been saturated between 5 and 25 dB), while the white dotted lines highlight the same grid sketched in Fig. 15(a). In the ideal image, the points of the grid are located in their nominal positions, whereas in the images focused using erroneous rotation vectors, their positions have been calculated according to the deformation matrix S_ξ. It can be appreciated that, for both erroneous values of the yaw speed, the scattering points fit well within the areas outlined by the "distorted" grids (see, for example, the bottom right corner, where several bright returns can be seen). The distortion of the images will entail an erroneous evaluation of the target edges' location, which in turn affects the capability of measuring the target's physical dimensions, as can be appreciated by looking at the superimposition of the images over the ferry's optical photograph in Fig. 16. It can be seen that, in this case, the underestimation of the rotation speed led to larger errors in identifying the ferry's edges than its overestimation. Inspecting, for example, the bottom right corner, we observe that several bright returns are imaged well outside the actual ferry's area when ω̂_z = ω_z − 0.3ω_z [see Fig. 16(b)], while they still remain inside it for ω̂_z = ω_z + 0.3ω_z [see Fig. 16(c)].
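The grid mapping just described can be sketched numerically. The explicit form of S_ξ is not given here, so the snippet below assumes the simplest special case of a pure yaw-speed error with unchanged azimuth direction, in which the azimuth coordinate scales by ω_z/ω̂_z; the grid coordinates are illustrative.

```python
import numpy as np

def apparent_positions(points_xy, s_xi):
    """Map nominal ground-plane positions through the image deformation
    matrix S_xi: apparent = S_xi @ nominal, applied row-wise."""
    return points_xy @ s_xi.T

# Hypothetical special case: pure yaw-speed error, unchanged azimuth
# direction, so the azimuth (here x) coordinate scales by omega_z/omega_hat.
err = -0.30                             # -30% yaw-speed estimation error
s_xi = np.diag([1.0 / (1.0 + err), 1.0])

grid = np.array([[x, y] for x in (-16.0, 0.0, 16.0) for y in (-4.0, 4.0)])
print(apparent_positions(grid, s_xi))
# Endpoints at x = +/-16 m map to ~ +/-22.9 m: a 32 m length reads as ~46 m,
# of the same order as the ~45 m obtained in Fig. 11(b) for the -30% case.
```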
VI. CONCLUSION

The BPA showed great potential for ISAR image formation, including the possibility of directly extracting the target size from the images and of focusing the data without any assumption on the target motion. However, the target kinematic parameters needed for the focusing must generally be obtained by data-driven estimation procedures, which could be significantly affected in the passive radar scenario, where waveforms not originally intended for imaging are used. In this work, to achieve a comprehensive understanding of the effects caused by rotational motion estimation errors, a generalized approach able to quantify their impact on the passive ISAR image products has been defined. Particularly, different types of rotational motion estimation errors in general bistatic geometries have been considered, and their effects on the PSF and on the image plane have been analytically derived. A few scenarios have been considered to illustrate the effectiveness of the methods in case studies of practical interest. These also include experimental results with terrestrial illuminators in the UHF band and satellite signals in the Ku-band, showing the wide applicability of the proposed methods in different classes of passive systems. This work aims at serving as a tool for passive ISAR users to set proper requirements for the motion estimation task to ensure the image's reliability. Moreover, the provided derivations can be used to build parametric motion estimation procedures. As the BPA is particularly suitable for multiangle acquisitions, the next stage of this research is moving from the bistatic to the multistatic case.

APPENDIX

We explicitly point out that, by means of the skew-symmetric matrix (A1), the cross product in (13) is easily obtained as a matrix product. Carrying out the calculus for (A2), and recalling that α_φ = tan⁻¹(φ_y/φ_x) and α_ξ = tan⁻¹(ξ_y/ξ_x), some simple manipulations lead to (41). It is easy to verify that the matrix in (C1) can be rewritten as the matrix multiplications in (43), which provides a clear geometric interpretation of the procedure to calculate the apparent scatterer position under these special conditions.
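As a minimal numerical check of the skew-symmetric construction referenced in (A1), the sketch below builds the standard skew-symmetric matrix (assumed to match (A1)) and verifies that it reproduces the cross product; the numerical values are arbitrary.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix Omega built from w = [wx, wy, wz] such that
    Omega @ a equals the cross product w x a, turning the cross product
    in (13) into a matrix product."""
    wx, wy, wz = w
    return np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])

omega = np.array([0.01, -0.02, 0.033])  # example rotation vector, rad/s
a = np.array([12.0, -5.0, 3.0])         # example scatterer position, m
assert np.allclose(skew(omega) @ a, np.cross(omega, a))
print(skew(omega) @ a)
```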
The position error (42) in this case can be rewritten accordingly. As δā⊥_k here lies along the azimuth resolution direction, its norm can be easily calculated by applying a clockwise rotation by the angle α_ξ to the equation above, so as to transit into the (x′, y′) reference system.

Fabrizio Santi, Member, IEEE, Iole Pisciottano, Diego Cristallini, and Debora Pastina, Member, IEEE

Abstract-This work investigates the impact of motion estimation errors on passive inverse synthetic aperture radar (ISAR) images of rotating targets when the backprojection algorithm (BPA) is employed to focus the data. Accurate target motion estimation can be quite challenging, especially in noncooperative target scenarios. In these cases, the BPA is applied under erroneous target kinematics information, entailing defocusing and distortion of the final image product. Starting from the evaluation of the image point spread function (PSF) and the resolution properties of the BPA image under ideally known target motion, it will be analytically shown that, at first order, the PSF under motion estimation errors is approximately a scaled and rotated version of the nominal one. Then, theoretical solutions to predict the location of the scatterers in the image will be provided to characterize in closed form the distortion of the BPA image plane. Numerical results under different use cases of practical interest are provided to analyze the level of accuracy required of the motion estimation task for reliable focusing in the challenging passive radar scenario. Experimental results using both terrestrial and satellite signals of opportunity are also provided, showing the general validity of the approach in different passive ISAR systems. The present analysis is not limited to passive radars and can also be applied to active bistatic radars having limited transmitted bandwidth.

Index Terms-Backprojection algorithm (BPA), bistatic inverse synthetic aperture radar (ISAR), ISAR focusing, motion estimation errors, passive ISAR.

Manuscript received 26 June 2023; revised 29 September 2023 and 13 November 2023; accepted 2 December 2023. Date of publication 11 December 2023; date of current version 22 January 2024. (Corresponding author: Fabrizio Santi.)

NOMENCLATURE
r_x: Receiver vector position.
t_x: Transmitter vector position.
d_R: Receiver-to-target distance.
d_T: Transmitter-to-target distance.
d_b: Transmitter-to-receiver distance.
ψ_0R: Receiver elevation angle.
ψ_0T: Transmitter elevation angle.
θ_0R: Receiver aspect angle.
θ_0T: Transmitter aspect angle.
ψ_φβ: φ_β elevation angle.
ψ_ξ: ξ elevation angle.
β: Bistatic angle.
φ_R: Unit vector of the receiver-to-target line.
φ_T: Unit vector of the transmitter-to-target line.
φ_β: Unit vector of the bistatic range resolution direction.
ξ: Unit vector of the Doppler resolution direction.
ω_0: Target rotation vector at the image time.
ω_x: Roll.
ω_y: Pitch.
ω_z: Yaw.
a: Target scatterer vector position.
α_φ: Image range resolution direction.
α_ξ: Image azimuth resolution direction.
I. INTRODUCTION

[...] superimposition of the K PSFs (each scaled by the corresponding scatterer's complex reflectivity) evaluated at the ground-plane level and centered at the kth positions ā_k = [x̄_k ȳ_k]^T, representing the position on the ground plane of the scatterer with vector position a_k = [x_k y_k z_k]^T. For those scatterers belonging to the ground plane, i.e., a_k = [x_k y_k 0]^T, ā_k = a_k\z, where the subscript \z denotes the elision of the z component. In contrast, for scatterers lying outside the ground plane, i.e., a_k = [x_k y_k z_k ≠ 0]^T, ā_k is generally different from a_k\z. The position of the scatterer in the image can be calculated considering that both the vectors a_k and [ā_k^T, 0]^T [...]

Fig. 3. Passive ISAR PSF examples in [dB] (top row) and corresponding resolution ellipses (bottom row). (a) Ideal focusing. (b) Estimation error confined to the magnitude of the rotation vector. (c) Estimation error in both magnitude and direction of the rotation vector.

Fig. 9. DVB-S passive radar case study: BPA images and 3-D target model for (a) case A (ideal focusing), (b) case B, (c) case C, and (d) case D.

Fig. 11. DVB-T case study: (a) known-motion BPA image; (b) and (c) images focused using the erroneous rotation vectors.

Fig. 12. DVB-S-based passive radar experimental campaign: (a) receiving hardware, (b) acquired direct-signal power spectral density, and (c) cooperative target side-view photograph.

Fig. 15. (a) Top view of the target photograph, with dotted lines highlighting a grid identifying different portions of the ferry; (b) BPA image using the nominal rotation vector; (c) and (d) BPA images with an error of ±30% injected on the yaw component.

TABLE II. DVB-S-Based Passive ISAR Case Study: Simulation Parameters.
TABLE III. DVB-T Passive ISAR Experimental Images Parameters.
TABLE V. DVB-S Passive ISAR Experimental Images Parameters.
16,618
2024-01-01T00:00:00.000
[ "Engineering", "Physics" ]
Sharing the Corporate Tax Base: Equitable Taxing of Multinationals and the Choice of Formulary Apportionment

Tax avoidance by multinational enterprises (MNEs) is a global problem. Most cross-border trade occurs within MNEs, susceptible to abuse of gaps and loopholes in domestic and international tax law that allow "profit shifting" between fiscal jurisdictions in order to reduce corporate tax liability. A lack of transparency makes this kind of tax avoidance difficult to quantify - let alone to monitor and control. This paper provides a case study of profit shifting using publicly available, unique, country-by-country reporting data for Vodafone Group Plc, the first large MNE to voluntarily publish such data. We show the tax impact of a move to formulary apportionment on a global basis, and under the European Union's Common Consolidated Corporate Tax Base proposal. We also consider the rationale for the current proposals for apportionment factors and propose an alternative.

Introduction

The avoidance of corporation tax by multinational enterprises (MNEs) - essentially on behalf of their shareholders - is facilitated by current international tax rules, based on the separate entity and arm's length principles. MNEs are able to exploit this system to minimise their tax liability, by shifting profits to countries with low or zero tax rates, undermining the tax base of those where real activities take place and reducing government revenues worldwide, in both developed and developing countries. The scale of this profit shifting to low-tax jurisdictions - known to the International Monetary Fund (IMF) as "conduits" - is very large, involving as much as two-fifths of MNE profits. It has also exacerbated tax competition between countries: the global average statutory corporate tax rate has fallen by more than half over the past three decades (Zucman et al., 2018). Offshore investment hubs also play a major role in global investment. Some 30% of cross-border corporate investment stocks have been routed through conduit countries before reaching their destination as productive assets, and a logical corollary of the outsized role of offshore hubs in global corporate investments is tax planning (UNCTAD, 2015). In consequence, G20 world leaders in 2013 gave their support to the Organisation for Economic Cooperation and Development (OECD) project on base erosion and profit shifting (BEPS), calling for reform of the rules to ensure that MNEs would be taxed "where economic activities occur and value is created". 1 However, the approach taken under the BEPS project 2 still relies on transfer pricing rules, which start from the independent entity principle and transactional analysis, the so-called "arm's length principle". Unfortunately, this principle is extraordinarily difficult to apply objectively in practice. Alternatives to the arm's length principle do exist (Faccio and Picciotto, 2017), and a logical alternative (ICRICT, 2018) would be to assess multinationals on a worldwide basis (country-by-country reporting, or CbCR) and apportion profits (that is, the tax base) by a formula which would allocate a firm's worldwide income across countries, based on allocation factors that reflect real economic activities (e.g. sales, employees, assets). Domestic corporate taxes would be paid on the share of the worldwide income that is allocated to each jurisdiction. Such apportionment systems do exist, of course, within federal states.
Historically, many US states have used the so-called "Massachusetts formula", which uses equal weights on property, payroll and sales, to assess local corporate tax liability from national accounts. Canada employs a similar system, but with equal weights on gross receipts and payroll. Following a similar logic, the European Union (EU) has recently decided to relaunch a project for a Common Consolidated Corporate Tax Base (CCCTB) 3 based on formulary apportionment, with a decision expected by the end of 2018. Initial estimates by the IMF - discussed below - of the effect of such a system, using aggregate data for US firms overseas, indicate that the tax revenue gains would be large for both developed and developing countries, the impact depending on the weights used in the apportionment formula (IMF, 2014). Previous studies using firm-level data (Clausing and Lahav, 2011; Krchniva, 2014) are based on extrapolation from multinationals' financial information available in databases or from financial statements. However, there have been no studies using publicly available CbCR data for multinational firms covering a large number of countries, both developed and developing. The purpose of this paper is to examine in detail the scale of profit shifting and the effects of apportionment at the firm level, using CbCR recently published by Vodafone Group Plc. Vodafone is the first multinational group to voluntarily publish CbCR data, and we hope that its effort to increase transparency by publishing basic financial and qualitative information for each of the countries in which it operates will be followed by other multinational groups. Section 2 explores the tax apportionment issue and establishes three models to be applied to this data. The Vodafone data are presented in Section 3, and the results of the three apportionment models are discussed. Section 4 examines critically the logical basis for these apportionment proposals and sketches a possible alternative based on equity criteria. Section 5 concludes with some implications for future research and policy discussion.

Formulary apportionment

Tax avoidance by MNEs is a global problem. The greater part of cross-border commerce takes place within MNEs, with an estimated two-thirds of global trade involving related parties (UNCTAD, 2013). This type of trade is susceptible to abusive exploitation of gaps and loopholes in domestic and international tax law that allow for "profit shifting" from country to country, with the intention of reducing the taxes paid on profits. A lack of transparency makes this kind of tax avoidance difficult to quantify, let alone monitor or prevent. Under the arm's length principle, which underlies separate entity accounting, a multinational corporate group should price transactions with its affiliated entities as if those transactions had occurred with unrelated entities. For tax purposes, affiliated businesses should set transfer prices at levels that would have prevailed had the transactions occurred between unrelated parties. Multinationals are therefore required to identify market-based prices for goods and services transferred within the multinational, to obtain a price that approximates the result that independent entities would reach in the market. Transfer pricing rules attempt to construct prices for the transactions among entities that are part of MNEs as if they were independent.
This is inconsistent with the economic reality of modern-day MNEs, which are unified firms run by a single management entity and organised to reap the benefits of integration across jurisdictions. This approach requires subjective, ad hoc and discretionary evaluation of each taxpayer by tax authorities in the different jurisdictions in which the taxpayer operates. This system also requires significant resources from skilled tax authorities and maintains the incentive for multinationals to create ever more complex group structures to minimise taxes (e.g. investment schemes involving offshore financial centres and special purpose entities) (UNCTAD, 2016). Profits can be shifted between the affiliates of multinationals in many ways: through the provision of services or sale of goods (multinational groups can manipulate intra-group export and import prices so that subsidiaries in high-tax countries export goods and services at low prices to related firms in low-tax countries and import from them at high prices; such transfer price manipulations reduce profits in high-tax countries and increase them in low-tax countries), through intra-group lending (affiliates in high-tax countries borrow money from affiliates in low-tax countries, which again reduces profits in high-tax countries and increases them in low-tax countries) and through the licensing of intangible assets (e.g. proprietary trademarks, logos and patents owned by affiliates in low-tax countries are licensed to other affiliates within the group; these affiliates then receive royalties which reduce profits in high-tax countries). An alternative to the arm's length approach espoused by the OECD in the BEPS project would be to tax multinationals under formulary apportionment. Under formulary apportionment, multinationals are treated as a unitary business based on the legal and economic control the parent corporation exercises over its subsidiaries. This unitary business is treated as a single taxpayer, and its income is calculated by subtracting worldwide expenses from worldwide income, based on a global common accounting system. The resulting net income is apportioned among taxing jurisdictions on the basis of a formula that takes into account various agreed factors (e.g. sales, employees). Each jurisdiction then applies its tax rate to the income apportioned to it by the formula and collects the amount of tax resulting from this calculation. As the global profits of the multinational are distributed across different jurisdictions on the basis of an agreed formula, the multinational would not need to calculate the taxable profits earned by each entity of the group in each jurisdiction. In fact, formulary apportionment is currently adopted in the United States and Canada for the intra-country allocation of the profits of a single entity or a group of entities. In the experience of US states, income has been allocated to state jurisdictions using a variety of formulas. Historically, many states have used the so-called "Massachusetts formula", which employs equal weights on property, payroll and sales, although, over the years, a significant number of states have moved to a formula that gives more weight to the sales factor (Mintz, 2007). Canada uses equal weights on gross receipts and payroll, with each factor weighted by one-half. The experience of these countries shows that implementation challenges mainly hinge on the apportionment system and the lack of uniformity across states (e.g.
how the elements of the apportionment formulae are defined) and the lack of consolidation. The importance of gaining agreement among states on a common tax base and a common formula is a crucial insight from the experience in the United States and Canada (Weiner, 2005). Despite these challenges, the experience of these two countries provides a useful blueprint for the adoption of this system at the international level. Under a proposed formulary apportionment system, firms would no longer have an artificial tax incentive to shift income to low-tax locations where their real economic activity is not located. A move to formulary apportionment would also reduce the distortionary features of the current tax system, reducing its complexity and administrative burden. By ignoring internal arrangements that lead to BEPS, formulary apportionment would enormously simplify international tax rules, ending the need for the complex rules on hybrids, source of income, treaty abuse, and the like. It would also lead to a significant reduction in conflict and uncertainty, by dispensing with ad hoc decisions that require subjective value judgements. A move to formulary apportionment would also be cost-effective and simple for MNEs, as they would need to prepare a global tax return to be submitted to the tax authorities in each of the countries where the multinational operates. There would be an initial setup cost for the appropriate accounting system, but this would be significantly lower than the current cost of implementing, documenting and defending transfer pricing structures under the arm's length approach. Through formulary apportionment, tax authorities and governments would have a better understanding of MNEs' profit allocation across countries. Such a system would also be more suited to an integrated world economy and result in simplification gains and administrative savings. Although a country could introduce formulary apportionment unilaterally, by requiring MNEs to determine what element of their global profits is taxable in that country, a shift towards formulary apportionment is likely to require coordination to facilitate a move to this system, negotiate an appropriate formula and address some of the associated technical issues (e.g. definition of a common tax base, procedure for consolidation of profits and compliance). So far, formulary apportionment has been tested only at a country level in a limited number of countries (e.g. the United States and Canada), so a coordinated global move to formulary apportionment would likely be complex, but not more complex than the current system and, in any event, more closely aligned to the economic reality of the modern world. The EU has recently decided to relaunch a project for a Common Consolidated Corporate Tax Base (CCCTB) 4, a single set of rules to calculate companies' taxable profits in the EU based on formulary apportionment. With the CCCTB, cross-border companies would only have to comply with a single EU system for computing their taxable income, rather than many national rulebooks, and would be able to offset losses in one Member State against profits in another. The consolidated taxable profits would be shared between the Member States in which the group is active, using an apportionment formula. Each Member State would then tax its share of the profits at its own national tax rate.
It is to be expected that the redistributive effect of the re-apportionment of the tax base would be considerable, although, as yet, there are no reliable estimates of the scale. Figure 1 summarizes the estimates made by the IMF on the basis of data for US firms operating abroad in 2010 (IMF, 2014). Four elements of the apportionment model are considered separately, each allocated according to its location in the respective tax jurisdiction: sales, assets, payroll and employment. As the Fund points out (2014, p. 38), "These are no more than illustrative, but point to large and systematic effects. Advanced economies generally gain tax base, whichever factor is used, while substantial tax base moves out of conduit countries; emerging and developing economies clearly gain base only if heavy weight is placed on employment." The category of "conduit" countries as defined by the IMF (2014, p. 18) "refers to countries that are widely perceived as attractive intermediate destinations in the routing of investments-whether for tax or other reasons". The IMF (2014) identifies Bermuda, Ireland, Luxembourg, the Netherlands, Singapore and Switzerland as "conduit" countries. Specifically, as Figure 1 indicates, conduit jurisdictions see large reductions (between 50 and 100%) in their tax bases for all four apportionment factors, as would be expected. Further, developed countries experience broadly similar increases in their tax bases under all four factors - of between 30 and 50%. In other words, as far as these two groups of countries are concerned - and assuming that US firms are representative of all MNEs - the redistributive effect would be robust to the precise apportionment formula used. The same, however, is not true of developing countries, where each factor (and thus its weight in the formula) has a radically different effect - due essentially to the asymmetrical allocation of these factors between developed and developing countries by MNEs. Specifically, developing countries gain from employment factors and lose from asset factors, as economic theory would predict, due to the lower capital-labour ratios (i.e. technologies) used by firms there as compared with developed countries. The payroll factor actually leads to revenue losses for developing countries because wages are much higher in developed countries. 5 However, the sales factor seems to benefit developed and developing countries to a similar degree, although in absolute terms the gains are much greater to developed countries, owing to their greater national incomes and thus tax bases. In sum, unlike developed countries, the gains for tax bases in developing countries from the different models of apportionment do depend crucially on the weights given to the factors in the respective formulae. Absent a comprehensive international database for MNEs similar to that maintained by the US Department of Commerce, an alternative approach to assessing apportionment rules would be to look at individual MNEs. To one such unique case we now turn.

Figure 1. Estimated change in corporate tax base under formulary apportionment, by factor, for advanced countries, developing countries and "conduit" countries (IMF, 2014).

The Vodafone case study

Enhancing transparency in the way MNEs report and publish their accounts would help tackle tax avoidance at very low cost. Despite publishing their consolidated accounts as if they are unified entities, MNEs are not taxed in this way. Each business entity within an MNE is taxed individually, making it difficult to establish an overview of what is happening within a group of companies for tax purposes.
This would be different if reporting were done on a country-by-country basis. Public country-by-country reporting (CbCR) is the publication of a defined set of facts and figures by large MNEs, thereby providing the public with a global picture of the taxes that MNEs pay on their corporate income and the allocation of profits across the group's entities. CbCR data is considered to be suitable for high-level transfer pricing risk assessment and for evaluating other BEPS-related risks. 6 Vodafone is the first large multinational 7 to have voluntarily published country-by-country data, in a report titled Vodafone Group Plc - Taxation and our total economic contribution to public finances 2016-2017. 8 The data provided by the Group for 2016-17 (see Appendix to this paper) allows the identification of the sixty countries where the Group operates, the scale of operations in each country, and the allocation of group taxable profits across the different countries in which the Group operates. Although the data Vodafone supplies fall short of the country-by-country data that MNEs will eventually have to file with tax authorities across the world as part of the OECD CbCR guidelines, 9 as well as of the EU proposal for a directive on corporate tax transparency through country-by-country reporting 10, and of the data advocated by tax justice campaigners, 11 these data do finally provide country-by-country information on the revenue and taxable profits, corporate tax payments, employees and assets of the multinational. Tax paid does not correspond to potential Group tax in the aggregate because liability depends on the tax regime in each jurisdiction and the distribution of the tax base, as well as adjustments from previous years. The full database in the Appendix to this paper clearly shows the misalignment between the current taxable profit allocation and indicators of the Group's real economic activities (sales, employees and assets) in the countries where Vodafone operates, and thus the potential for BEPS activities by the Group through the use of low-tax "conduit" countries. 13 Table 1 shows the Group revenue, profit before tax, employment, assets and tax paid for the 10 largest country operations, which accounted for some 70% of Group activity by sales. We have also calculated the effective tax rate paid (tax paid divided by profit before tax). Data for a single year are not always representative: nonetheless it is notable that six of these 10 country operations reported losses, and one country (Italy) achieved an effective tax rate well below the statutory "headline" rate. In contrast, sales revenue does seem broadly correlated with employment and assets, once the relative capital intensity of developed and developing countries, discussed above, is taken into account. The striking exceptions are the profits reported in Luxembourg, far larger than sales (although these are commensurate with employment), and in Malta, leading inevitably to the hypothesis that these two are the main conduit countries for the Group, with reported profits roughly equal to net profits for the Group as a whole and very low effective tax rates. In sum, it is clear that considerable profit shifting is occurring within the Vodafone Group - whether for reasons of "tax planning" or "commercial reasons" is unclear, but fortunately we do not have to resolve this issue here.
However, the data do permit us to see how different models of global formulary apportionment might affect the way the Vodafone tax base is distributed across tax jurisdictions and thus provide a firm-level case study comparable to the aggregate-level IMF study discussed above. Figure 2 shows how these profits (that is, the corporation tax base) are distributed between regions, based on the World Bank's classification 14 of low-income, lower-middle-income, upper-middle-income and high-income countries. This aggregation also helps to smooth out some of the noise inherent in the individual country figures. Vodafone's profits are reported as 1% in low-income countries, 14% in lower-middle-income countries, 27% in upper-middle-income countries, 19% in high-income countries and 38% - the largest share of all - in the "conduit group" of Malta and Luxembourg.

14 https://datahelpdesk.worldbank.org/knowledgebase/articles/906519

Our first apportionment exercise is based on equal weighting of sales, assets and payroll, 15 as an approximation of the US ("Massachusetts") formula (Figure 3). This weighting would decrease the share of global Group profits attributable to developing countries (low-income and lower-middle-income countries in the World Bank definition) from 15% to 13%, which would indicate that using a factor that takes into account wage costs may not be beneficial for developing countries. However, replacing the payroll factor with employment (i.e. number of employees per country) increases the share of global Group profits attributable to developing countries from 15% to 23% (Figure 4). In both scenarios, the major gainers would be the developed countries (upper-middle-income and high-income countries), nearly doubling their share, while the conduit group is, of course, the main loser. Figure 5 shows an apportionment based on sales and number of employees only, equally weighted. The share attributable to developing countries rises slightly compared to Figure 4, at the expense of developed countries, as might be expected - although less so than the IMF estimates discussed above.

15 Unfortunately, no payroll figures are provided in the Vodafone data, only employment figures. However, the International Labour Organisation states that there is a close correlation between national wage/salary rates and income per capita (ILO, 2016). We have thus used the ratios between income per capita for our four country groups, as given by the World Bank database in 2017 (https://data.worldbank.org/products/wdi), as a proxy for the earnings ratios, and then applied these to the Vodafone employment data to derive the appropriate apportionment of the 'payroll' element.

An apportionment based on sales alone, as some would propose, yields the results in Figure 6. This allocation further increases the share of developed countries, but at the expense of developing ones. In sum, the introduction of formulary apportionment does result in a major reassignment of the tax base, mainly to the benefit of developed countries, although developing countries also gain considerably. Although overall it is likely that different apportionment formulae would not fundamentally alter the outcome for developed countries, the impact on developing countries could be significant. The data suggest that the use of an employment factor would be likely to result in a higher allocation of profits to developing countries, relative to the use of the payroll factor.
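The arithmetic behind these exercises is simple enough to state in a few lines of code. The sketch below implements the generic weighted-factor formula, in which each country's share of consolidated profit is the weighted average of its shares of the individual factors (the logic of the Massachusetts and CCCTB formulas); the country figures are invented placeholders, not Vodafone's data.

```python
def apportion(profit, factors, weights):
    """Weighted-factor formulary apportionment: each country's share of the
    consolidated profit is the weighted average of its shares of each
    factor; the jurisdiction's tax is then its share times its own rate."""
    shares = {}
    for name, weight in weights.items():
        total = sum(factors[name].values())
        for country, value in factors[name].items():
            shares[country] = shares.get(country, 0.0) + weight * value / total
    return {country: profit * share for country, share in shares.items()}

# Invented placeholder figures (millions), not Vodafone's actual data.
factors = {
    "sales":     {"DE": 10_000, "UK": 8_000,  "LU": 50},
    "assets":    {"DE": 6_000,  "UK": 5_000,  "LU": 200},
    "employees": {"DE": 14_000, "UK": 12_000, "LU": 30},
}
weights = {"sales": 1 / 3, "assets": 1 / 3, "employees": 1 / 3}
print(apportion(1_000.0, factors, weights))
```

Replacing the employees factor with a payroll proxy, or re-weighting towards sales, reproduces the variants compared in Figures 3-6.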
Finally, we simulate how the Group profits would be allocated between the EU Member States individually according to the proposed EU CCCTB: sales, employees and assets equally weighted. 16

16 As no payroll data are provided in the CbCR data, and nearly all EU Member States in which the Group operates are high-income countries, no payroll adjustment has been made.

Figure 7 shows that, as expected, the clear losers would be Luxembourg and Malta, which would lose almost all their present Vodafone tax base, as well as Italy. Clear winners would be Germany and the United Kingdom, with significant increases also showing for Spain, the Netherlands and Portugal. The United Kingdom and Germany are Vodafone's top two countries for revenues and are also among its top 10 countries for number of employees, but losses before tax are currently reported for these two countries, and this explains why a movement to formulary apportionment would be particularly beneficial to them. The balance of the loss to conduit states would, of course, accrue to the rest of the world - both developed and developing.

Figure 7. Reapportionment of the Vodafone tax base under the CCCTB formula across EU Member States (Austria, France, Romania, Hungary, Greece, Ireland, Netherlands, Portugal, Spain, United Kingdom, Germany). Source: Appendix.

Apportionment and equity issues

The previous section examined in detail a particular case, although a significant one, because Vodafone is a relatively large, global (with CbCR data reported for 49 countries) and technologically advanced MNE. We have shown how profit shifting occurs and what the redistributive effect of various reapportionment formulae would be if applied to this case. The results are interesting and consistent with the IMF study of US MNEs, with the main gainers from reapportionment indicated to be the tax authorities of developed countries, as might be expected; within the EU the main gainers would be Germany and the United Kingdom. We have taken the factors (sales, assets, employment and payroll) and the formulae (US, Canada and EU) for apportionment from the current international policy framework. Almost inevitably, these formulae have emerged from political negotiation over fiscal resources rather than from a coherent economic or political theory. Above all, they have emerged within federal polities where there are other redistributive mechanisms, particularly the allocation of the resources generated by corporate taxation. There is no reason, therefore, why such formulae should be best for an international non-federal system, other than that they form a useful precedent for negotiating. The three canonical criteria for judging taxation are "equity, efficiency and ease". 17 As the staff of the US Congress states:

Analysts generally apply three principal economic criteria when judging the merits of any tax system: Does that tax system increase or decrease equity across taxpayers? Does it increase or decrease economic efficiency (that is, the extent to which market decisions are free of distortions introduced by the tax)? And can that tax system be easily administered? (JCT, 2008, p. 48)

"Ease" refers to administrative feasibility and cost on the one hand, and transparency on the other. It is clear that formulary apportionment in any form is superior in "ease" to the present system of conflicting jurisdictions, and that, by effectively eliminating conduits, it would raise tax revenue without great administrative cost, because MNE groups already prepare CbCR for their internal use.
"Efficiency" in the sense of reducing market distortions is clearly achieved by any formulary apportionment because it would eliminate the enormous present complexity and distortions created by tax avoidance schemes and the use of artificial conduits. There is less clarity about the first criterion, that of "Equity". Internationally (and indeed between federal states) this concept in the present context relates not so much to individual taxpayers but rather to equity in distribution between tax jurisdictions. This of course is the rationale behind the three formulae discussed above, which aim to achieve a more equitable distribution of tax base (and thus revenue) between countries. The somewhat scarce policy literature on the subject appears to be based on a concept of taxing profits "where economic activities occur and value is created". The OECD intergovernmental agreements on BEPS refer to the need for the tax base to "reflect the underlying economic reality" 18 without explicitly stating how this is to be defined; while the Independent Commission on the Reform of International Corporate Taxation states that "… these factors, such as employment, sales, resources used, fixed assets, etc., should be chosen to reflect the MNE's real economic activity in each jurisdiction" (page 6) and that "It is the Commission view that global formulary apportionment is the only method that allocates profits in a balanced way using factors reflecting both supply (e.g., assets, employees, resources used) and demand (sales). Neither can create value without the other." (ICRICT, 2018, p. 7) However, while such an approach to the creation of "value" has some appeal in terms of political economy, there is little economic theory to underpin it. The so-called "Massachusetts Formula" apparently has become accepted through precedent (i.e. political negotiation between states) rather than as the result of economic analysis or research into the impact. A line of argument might be derived from the "contribution of factors of production" approach with, say, the location of "land", "labour" and "capital"; but this would exclude sales and extend the definition of assets. Moreover, from a textbook standpoint, profits are attributable to capital alone because the other factors are rewarded according to their marginal productivity; and, of course, in the standard neoclassical model (with no scale economies), profits are the marginal productivity of capital itself plus the reward to entrepreneurship. On this basis, apportionment should be based on the true location of real fixed capital, technology and management or entrepreneurship. In neither approach does sales come into the economic argument. The case for including sales seems to be based more on ease of administration than anything else. However, the attraction of this case is that it ultimately implies replacing direct with indirect taxation -which in turn has undesirable consequences for equity (IMF, 2013). Corporation tax is, in essence, a withholding tax on dividends and is thus strongly progressive, reducing income inequality; sales taxes on the other hand are usually regressive. Moreover, the "value creation" approach seems to misunderstand the fact that large firms' profits arise from market power (including intellectual property and the like) and specifically from their multinational nature -or to put it another way, these are spatially unlocated rents that should be taxed. As Avi-Yonah and Clausing (2007, p. 
13) explain: multinational firms exist in large part because these interactions generate more income than would separate domestic firms interacting at arm's length; thus requiring firms to allocate this additional income among domestic tax bases is necessarily artificial and arbitrary, because it would by definition disappear if the related entities operated at arm's length. Finally, assessment of the distributive effects of different apportionment schemes should take into account not only the direct impact on different countries' revenues but also the response of companies to the new rules. For instance, a company could sub-contract its labour inputs in any one jurisdiction and thus shift its tax liability under formulary apportionment. What this illustrates is the problem of effectively assessing value chains that stretch across sectors and countries, where effective control may be exercised not only through ownership but also through contracts, technology, franchising and other means. In addition, we have already seen how apportionment systems would necessarily benefit developed countries most (at the expense of conduit countries, some of them developing countries), because this is where most sales, capital and high wages are to be found. There is a case, therefore, for examining what other criteria might be used to underpin the formula for international apportionment. Here we will briefly sketch just one 19 in outline: the application of an apportionment principle of equity between countries that is based on income per capita. When designing personal income taxation, it is conventional to include an element of progressivity on the grounds of the greater "ability to pay" of richer strata of the population - or, in economic terms, the declining marginal utility of money with income. This is normally called "vertical equity", in contrast to "horizontal equity", which ensures that taxpayers at similar income levels pay similar amounts, independently of the source of income. By extension, we could argue that current apportionment proposals are mainly concerned with horizontal equity between jurisdictions, but that, logically, an element of vertical equity should also be introduced. In other words, the apportionment weights should be based on - or at least include - the level of per capita national income, to ensure a more equal distribution of taxing rights (i.e. how the multinational's tax base is shared between developed and developing countries). This may appear to be a radical proposal, but it does have indirect precedents. On the one hand, within federal polities (upon which the current formulae are based) there does exist - implicitly - a strong redistributive element, insofar as federal direct taxation is "returned" in the form of fiscal transfers on a notionally per capita basis. On the other, the current system of international development cooperation ("aid") is essentially fiscal, involving the raising of taxation in the donor country and the support of public expenditure 20 in the recipient country. A somewhat more conventional form of this proposal would parallel the special provisions in trade agreements for less developed participants. In terms of formulary apportionment, this could take the form of an agreed adjustment factor for the three developing-country groupings discussed in the previous section. A move to formulary apportionment, whether based on existing apportionment formulae or on our proposal, would affect both the tax revenue generated and investment decisions by MNEs.
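To make the vertical-equity idea concrete, one possible mechanism is sketched below: each jurisdiction's conventional formulary share is scaled by a declining function of its income per capita and the result renormalised. The power-law form and the exponent are purely illustrative assumptions of ours, not part of any current proposal; the figures are invented.

```python
def equity_adjusted_shares(raw_shares, income_per_capita, theta=0.5):
    """Hypothetical vertical-equity adjustment: scale each jurisdiction's
    formulary share by (income per capita)**(-theta) and renormalise, so
    that poorer countries receive a larger share of the tax base."""
    adjusted = {c: s * income_per_capita[c] ** (-theta)
                for c, s in raw_shares.items()}
    total = sum(adjusted.values())
    return {c: v / total for c, v in adjusted.items()}

# Invented example: shares from a conventional formula, then adjusted.
raw = {"high": 0.60, "upper_middle": 0.25, "lower_middle": 0.12, "low": 0.03}
ipc = {"high": 45_000, "upper_middle": 9_000, "lower_middle": 2_200, "low": 750}
print(equity_adjusted_shares(raw, ipc))
```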
Whilst taxation is only one of the factors on which investment decisions are based, in addition to eliminating opportunities for base erosion and profit shifting, a system of formulary apportionment could remove the inherent subjectivity of the current system of international tax rules, thereby providing greater economic certainty to taxpayers and governments; this should in turn encourage cross-border investment. The risk of double taxation in the current system is high, with multiple countries asserting taxing rights over the same tax base. Under a system of formulary apportionment, however, investors would be able to predict, in advance of the investment decision, the effective rates at which each country will impose its tax, therefore increasing tax certainty.

Conclusion

The analytical and empirical evidence in this paper shows that a move to formulary apportionment is likely to minimise the allocation of MNEs' profits to low-tax jurisdictions, where multinationals have limited economic activities. The profits currently allocated to these jurisdictions would be reallocated to both developed and developing countries. Research on this subject has been constrained by the lack of firm-level data. However, the results of the detailed examination in this paper of the CbCR data of Vodafone Group Plc, the first large multinational to voluntarily publish such data, allow us to demonstrate the profit-shifting process and to estimate the effect of formulary apportionment for a major MNE based in the United Kingdom, which supports the aggregate analysis of US corporations overseas by the IMF. We also suggest that the current formula proposals are limited by a lack of clear economic rationale, on the one hand, and insufficient attention to the equitable treatment of developing countries, on the other. This paper has four policy implications. First, clearly much more research covering a longer time period is needed at the firm level. Ideally this would be comprehensive, but if that is not possible, then a representative selection should be made of MNEs in distinct sectors and based in distinct countries. In particular, MNEs based outside the US and EU (particularly those from emerging-market economies) should be well covered. Second, policy debate should move on from the need for formulary apportionment to the nature of the formula and participation in its determination, with particular attention to low-income countries. There appears to be some current momentum towards basing apportionment on sales, driven in good part by concerns about e-commerce, but this may not be helpful to developing countries. Third, although formulary apportionment does not require a global body to collect or redistribute tax, it does require a multilateral forum where rules can be established, methodology approved and disputes arbitrated. These rules would cover not only the apportionment formula as such but also the reconciliation of national and regional differences in accounting criteria and tax expensing. Whether the OECD (which has already made progress on these topics) or the UN (which has representational legitimacy) should be the locus for such an initiative is an open question. Fourth, a clear linkage should be established between debates on international taxation and other global debates on income inequality, sustainable development and multilateral institutions. Fiscal coordination is not just an issue of financing for development but rather one of the bases for global economic cooperation as such.
Source: Vodafone Group Plc country-by-country reporting data (https://www.vodafone.com/content/dam/sustainability/pdfs/vodafone_2017_tax.pdf).
8,233.4
2018-09-14T00:00:00.000
[ "Economics", "Law", "Business" ]
A Mathematical Model for Controlling a Quadrotor UAV Given the recent surge in interest in UAVs and their potential applications, a great deal of work has lately been done in the field of UAV control. However, UAVs belong to a class of nonlinear systems that are inherently difficult to control. In this study we devised a mathematical model for a PID (proportional integral derivative) control system, designed to control a quadrotor UAV so that it follows a predefined trajectory. After first describing quadrotor flight dynamics, we present the control model adopted in our system (developed in MATLAB Simulink). We then present simulated results for the use of the control system to move a quadrotor UAV to desired locations and along desired trajectories. The positive results of these simulations support the conclusion that a quadrotor UAV spatial orientation control system based on this model will also fulfil its task successfully in real conditions. CONTROL SYSTEMS FOR UNMANNED AERIAL VEHICLES (UAVS) Multi-rotor flying platforms are a relatively young and dynamically developing field. Such devices are finding more and more applications, but they still face unsolved problems. With advances made in the development of materials, electronics, sensors and batteries, the size of micro UAVs now ranges from 0.1 to 0.5 metres in length and 0.1 to 5 kg in weight. The construction of a flying object of this type appears simple; in practice, however, it poses serious challenges for designers and programmers. Due to the nature of the platform, active thrust control must be applied to keep the robot in the air. Although simple PID (proportional integral derivative) controllers are sufficient for flight stabilisation, achieving control quality that allows practical use of this type of platform is still a challenge. A class of unmanned aircraft with four rotors, known as quadrotors, is becoming increasingly popular. Given this surge of interest, much effort has lately gone into developing new control methods for quadrotors in particular. One example can be found in the HMX-4, a quadrotor weighing about 0.7 kg, measuring 76 cm between the ends of the rotor axes, and with a flight time of about 3 minutes. Due to the weight limitation of this type of aircraft, no GPS or other instrumentation (such as accelerometers) can be added. Position and velocity data can be obtained via cameras. For feedback, it uses an early micro-electromechanical system (MEMS) as a pilot assistant. The feedback linearisation controller controls the altitude and yaw angle. Because the quadrotor drifts in the x-y plane under this controller, a backstepping controller is needed for attitude control. Another commonly studied testbed uses an embedded PID controller to jointly control position and attitude [1,2,3,12,13]. The AscTec Hummingbird is a typical model of this type of quadrotor. This model has a wood or carbon fibre frame, making it robust and lightweight, at around 0.5 kg. These quadrotors carry their own sensors for state estimation. The controller regulates the plant based on the difference between the set points and the measured values. Another PID control system is used in the STARMAC testbed. This quadrotor weighs about 1.1-1.6 kg and can carry an additional payload of about 2.5 kg. One of the externally available trajectory-tracking models is called the X4-flyer. Its controller was developed to regulate vehicle orientation error and keep it very low.
The dynamic model was obtained using the Euler-Lagrange method. The PID controller is used to control the hovering unmanned vehicle or track its trajectory [4,5,6,14,15]. Dynamics and control model The rotors are designed to transmit force upward; their rotation also introduces torques. The forces and moments produced depend on the rotor speed. The main challenge is to control the appropriate speeds of the four rotors to ensure stable flight along the desired trajectory. Figure 1 shows a simplified representation of how the rotors control the movement of a quadrotor. Rotors 1 and 3 rotate counter-clockwise and produce torque in the clockwise direction; conversely, rotors 2 and 4 rotate in a clockwise direction and produce torque in a counter-clockwise direction. All rotors produce force in the upward direction perpendicular to the plane of rotation. In the example shown in Fig. 1, the rotational speeds of rotors 2 and 4 are greater than those of rotors 1 and 3. The effective torque will therefore be in the counter-clockwise direction, and the quadrotor will rotate in the counter-clockwise direction. When the four rotors have the same rotational speed, the torque in the counter-clockwise direction will balance the torque in the clockwise direction. In order to develop a dynamic model of the quadrotor, we position the fuselage (body) frame so that it lies on the main axes of the quadrotor. As shown in Figure 1, the origin of the fuselage frame is positioned at the centre of the quadrotor, at point O. One arm is selected as the X axis and the other as the Y axis. Dynamic model The world frame is denoted by the letter w and labelled X_w, Y_w, Z_w. With roll φ, pitch θ and yaw ψ defined in the Z-X-Y Euler-angle convention, the rotation matrix from the body frame to the world frame is
\[
R = \begin{pmatrix}
c\psi\, c\theta - s\phi\, s\psi\, s\theta & -c\phi\, s\psi & c\psi\, s\theta + c\theta\, s\phi\, s\psi \\
c\theta\, s\psi + c\psi\, s\phi\, s\theta & c\phi\, c\psi & s\psi\, s\theta - c\psi\, c\theta\, s\phi \\
-c\phi\, s\theta & s\phi & c\phi\, c\theta
\end{pmatrix},
\]
where c and s abbreviate cosine and sine. If r is the position vector of the centre of mass in the world frame, the acceleration of the centre of mass is given by the equation [8]:
\[
m\ddot{r} = \begin{pmatrix} 0 \\ 0 \\ -mg \end{pmatrix} + R \begin{pmatrix} 0 \\ 0 \\ \sum_{i=1}^{4} F_i \end{pmatrix}.
\]
In the body frame, the components of the angular velocity of the robot are p, q and r. The relationship between these values and the derivatives of the roll, pitch and yaw angles is [9]:
\[
\begin{pmatrix} p \\ q \\ r \end{pmatrix} =
\begin{pmatrix}
\cos\theta & 0 & -\cos\phi\,\sin\theta \\
0 & 1 & \sin\phi \\
\sin\theta & 0 & \cos\phi\,\cos\theta
\end{pmatrix}
\begin{pmatrix} \dot\phi \\ \dot\theta \\ \dot\psi \end{pmatrix}.
\]
Each rotor i produces a moment M_i perpendicular to its plane of rotation. The reaction torque on the airframe is opposite to the direction of rotation of the blades, so the moments M_1 and M_3 of the counter-clockwise rotors act in the -Z_B direction, while M_2 and M_4 act in the opposite direction, +Z_B. The distance from the centre of mass to each rotor's axis of rotation is denoted by the letter L. The moment of inertia matrix I is referenced to the centre of gravity and aligned with the X_B-Y_B-Z_B axes. The angular acceleration is obtained using Euler's equations, as shown below:
\[
I \begin{pmatrix} \dot p \\ \dot q \\ \dot r \end{pmatrix} =
\begin{pmatrix} L(F_2 - F_4) \\ L(F_3 - F_1) \\ M_2 + M_4 - M_1 - M_3 \end{pmatrix}
- \begin{pmatrix} p \\ q \\ r \end{pmatrix} \times I \begin{pmatrix} p \\ q \\ r \end{pmatrix}.
\]
In this paper, we present the mathematical model for a PID controller for quadrotor control. The controller adjusts the process control inputs to minimise the error, defined as the difference between the measured process variable and the desired setpoint [10]. The roll and pitch angles serve as inputs. Using a backstepping approach, two types of position control can be obtained. One is the UAV hover controller, which regulates the rotors to hold the vehicle at a desired position. The other is a four-rotor controller for tracking arbitrary trajectories in 3D [11,16,17].
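As a quick illustration of the model above, the sketch below evaluates the translational and rotational accelerations for given rotor forces and moments. It is a minimal sketch, not the authors' Simulink model; the mass, inertia and arm-length values are placeholder parameters, and the standard Z-X-Y convention is assumed.

```python
import numpy as np

def rotation_world_from_body(phi, theta, psi):
    """Z-X-Y Euler-angle rotation taking body-frame vectors to the world frame."""
    c, s = np.cos, np.sin
    Rz = np.array([[c(psi), -s(psi), 0], [s(psi), c(psi), 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, c(phi), -s(phi)], [0, s(phi), c(phi)]])
    Ry = np.array([[c(theta), 0, s(theta)], [0, 1, 0], [-s(theta), 0, c(theta)]])
    return Rz @ Rx @ Ry

def accelerations(angles, omega, F, M, m=1.0, g=9.81,
                  I=np.diag([0.01, 0.01, 0.02]), L=0.25):
    """Return (r_ddot, omega_dot) for rotor forces F[0..3] and moments M[0..3].

    Indices 0..3 correspond to rotors 1..4 in the text; m, g, I, L are
    placeholder parameters, not the values of Table 1.
    """
    R = rotation_world_from_body(*angles)
    # Newton: m r_ddot = (0, 0, -m g) + R (0, 0, sum F_i)
    r_ddot = np.array([0.0, 0.0, -g]) + R @ np.array([0.0, 0.0, sum(F)]) / m
    # Euler: I omega_dot = tau - omega x (I omega), with torque components
    # matching the sign convention above (rotors 1 and 3 give -Z_B moments).
    tau = np.array([L * (F[1] - F[3]),
                    L * (F[2] - F[0]),
                    M[1] + M[3] - M[0] - M[2]])
    omega_dot = np.linalg.solve(I, tau - np.cross(omega, I @ omega))
    return r_ddot, omega_dot
```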
Flight controller The quadrotor will hover when the nominal thrust of the propellers is equal to the force of gravity, in other words when
\[
\sum_{i=1}^{4} F_i = 4 k_F \omega_h^2 = mg,
\]
and the engine speed is
\[
\omega_h = \sqrt{\frac{mg}{4 k_F}},
\]
where k_F is the rotor thrust coefficient (F_i = k_F ω_i²). Let r_T(t) denote the trajectory position and ψ_T(t) the yaw angle; together they define the desired path. Note that ψ_T(t) = 0 for the UAV flight controller. The position error is given by:
\[
e_p = r_T - r.
\]
The recommended acceleration can be calculated using the PID law:
\[
\ddot{r}_{des} = \ddot{r}_T + K_d \dot{e}_p + K_p e_p + K_i \int e_p \, dt.
\]
For hovering, r_T is constant, so \(\dot{r}_T = \ddot{r}_T = 0\). The relationship between the desired accelerations and the roll and pitch angles is as follows:
\[
\phi_{des} = \frac{1}{g}\left(\ddot{r}_{des,x} \sin\psi_T - \ddot{r}_{des,y} \cos\psi_T\right), \qquad
\theta_{des} = \frac{1}{g}\left(\ddot{r}_{des,x} \cos\psi_T + \ddot{r}_{des,y} \sin\psi_T\right).
\]
Height control In the case of hovering, a proportional-derivative control law is used:
\[
\ddot{z}_{des} = K_{d,z}(\dot{z}_T - \dot{z}) + K_{p,z}(z_T - z).
\]
From the above equation we obtain the result for ω:
\[
\omega = \sqrt{\frac{m\left(g + \ddot{z}_{des}\right)}{4 k_F}}.
\]
Simulation of a dynamic model This section presents the numerical simulation results for the validation of the dynamic and control model discussed in the previous section. The parameters used for the simulation are shown in Table 1. Tab. 1. Parameters of the dynamic model. Based on the dynamic model of the quadrotor, the control model was developed in MATLAB Simulink. For the simulation, the quadrotor flies from the initial position (0, 0, 0) to the final location (10, 10, 10) and hovers at this point. The distance units given here are expressed in metres. Fig. 4 shows the actual path the quadrotor travels. To further verify the dynamic and control model with respect to following a desired trajectory (or target location), the quadrotor was put into motion along a circle with a centre at (5, 0) and a radius of 5 m. Fig. 8 shows the simulation result of the desired and actual trajectory of the quadrotor following the circular trajectory. Figs. 9 and 10 show the simulation results for the desired and actual x, and the desired and actual y, for the circular path. The control model was then verified, showing that the quadrotor moves from its resting position to the desired location and lands again. Fig. 11 shows a simulation of UAV motion. The quadrotor took off at point (0, 0, 0), flew to point (0, 0, 10), then rotated, moved to point (10, 10, 10) and landed at point (10, 10, 0). Fig. 11. Simulation of the outcome of the quadrotor's movement to successive desired locations. The above numerical simulations showed that the quadrotor was able to navigate to the specified waypoint locations and follow the desired trajectories. Fig. 12 shows selected flight parameter waveforms presenting the operation of the unmanned aerial vehicle control system during the landing phase with the use of the on-board vision system. The data presented in the diagrams were obtained in simulations. The graphs obtained show that the applied algorithm allows a satisfactory quality of control of the aircraft to be achieved in the landing phase. CONCLUSIONS Existing control algorithms for multi-rotor UAV platforms leave room for superior systems that control altitude, position and flight trajectory while aiming at full autonomy of the aerial vehicle. The first obstacle to overcome in achieving this goal is poor design of the mechanical structure. As a flying apparatus, a multi-rotor craft needs to have a very low empty weight while providing adequate rigidity and strength. At the same time, a key issue is the quality of the orientation measurement, essential in the platform stabilisation process. As the rigidity of the superstructure increases, the quality of control improves due to the smaller influence of structural deformations on forces and measurements.
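The control laws above translate almost directly into code. The following is a minimal sketch of the position and height PID loop under the near-hover assumption; the gains, mass and thrust coefficient are illustrative values, not the parameters of Table 1.

```python
import numpy as np

class PositionPID:
    """Near-hover PID position controller following the laws sketched above.

    Gains kp, kd, ki, mass m and thrust coefficient k_f are illustrative."""

    def __init__(self, kp, kd, ki, m=1.0, g=9.81, k_f=6.1e-8, dt=0.01):
        self.kp, self.kd, self.ki = kp, kd, ki
        self.m, self.g, self.k_f, self.dt = m, g, k_f, dt
        self.e_int = np.zeros(3)  # integral of the position error

    def update(self, r, r_dot, r_T, r_T_dot, r_T_ddot, psi_T=0.0):
        e = r_T - r                      # position error e_p
        e_dot = r_T_dot - r_dot
        self.e_int += e * self.dt
        # PID law for the commanded acceleration
        a_des = r_T_ddot + self.kd * e_dot + self.kp * e + self.ki * self.e_int
        # Near hover: desired roll/pitch from the desired x, y accelerations
        phi_des = (a_des[0] * np.sin(psi_T) - a_des[1] * np.cos(psi_T)) / self.g
        theta_des = (a_des[0] * np.cos(psi_T) + a_des[1] * np.sin(psi_T)) / self.g
        # Height control: total thrust, then the common rotor speed
        thrust = self.m * (self.g + a_des[2])
        omega_rotor = np.sqrt(max(thrust, 0.0) / (4 * self.k_f))
        return phi_des, theta_des, omega_rotor
```

Calling `update` once per control step with the current and desired states yields the attitude setpoints and rotor speed that an inner attitude loop would then track.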
At the same time, however, the impact of the interference generated by the moving elements, and especially by the drive units, on the quality of measurements increases. In this study, we focused on two problems: (1) obtaining a dynamic quadrotor model, and (2) route planning and its optimisation. With respect to the first problem, the dynamics of the quadrotor were analysed and a controller based on the PID method was then developed. In this part, parameters were chosen to make the simulation model almost identical to a real quadrotor. Point-to-point navigation and trajectory experiments were then carried out using the simulation model developed from the dynamics of the quadrotor. The results showed that the model worked well in practice. For the second problem, a trajectory providing full coverage of a defined area was developed. Trajectory parameters were identified and the Lagrange multiplier algorithm was used to obtain the parameters that minimised the total time required to traverse the entire trajectory. To recap, then, in the study reported herein, we developed a mathematical model of quadrotor dynamics, which we then used, after its linearisation, to design a spatial orientation control system. The positive results of the simulation tests allow us to expect that this system will also fulfil its task in real conditions.
2,714.6
2021-09-01T00:00:00.000
[ "Computer Science" ]
Ensuring Reliable Communication in Disaster Recovery Operations with Reliable Routing Technique The purpose of this research paper is to ensure reliable and continuous communication between the rescue officers and other people during disaster recovery and reconstruction operations. Most of the communication infrastructure gets damaged during a disaster, and proper communication cannot be established in the area, which leads to longer delays in emergency operations and increased damage to life and property. Various methods proposed to enable communication between people using wireless ad hoc networks do not guarantee reliable delivery of data with fast-moving devices. This paper presents a Reliable Routing Technique (RRT) that ensures reliable data delivery at the destination device even when the people with the mobile devices are moving in the network. We make use of the broadcasting property of the wireless network and create a priority list of probable forwarding candidates at each device. With this technique, RRT ensures that if a forwarder device is unable to forward the data packet due to the movement of mobile devices, the next priority candidate forwards the data packet to the destination device, thus ensuring reliability of data delivery in the network. Simulation results show that RRT achieves significant performance improvement with better data delivery in ad hoc networks. Introduction The world has witnessed a number of natural disasters over the years, causing huge losses to human and animal life, infrastructure, and almost everything in the affected region. Natural hazards like earthquakes, floods, and hurricanes have always struck in different places at unpredictable times, leading to increased damage to life and property. Although science has made vast progress in many areas, scientists are still unable to accurately predict the time and place of these disasters and the extent of damage that might occur. The Indian Ocean tsunami of 2004 [1,2] is one among many events that have made us realize the extent of damage a natural disaster can cause in unpredictable situations. So the major focus has always been on minimizing the damage that might be caused by a natural disaster and on staying ready for disaster recovery and reconstruction operations [3]. Disaster recovery and reconstruction operations have always been a challenging task for the government, local authorities, and the people. The primary aim of the firemen, police officers, local guards, and other rescue officers arriving just after the event is to look for survivors and to help the injured. These first responders arriving at the site immediately after the disaster have to deal with a number of issues and challenges. In some cases it is necessary for them to prevent the damage from spreading to other areas. They have to search for survivors within the damaged buildings and also have to make sure that medical assistance reaches the survivors in minimum time. Once the survivors are found and medical assistance is given, the next major task is to rebuild the basic infrastructure to start the reconstruction works. A major issue during these recovery and reconstruction operations is that most of the infrastructure used for transportation, communication, and so forth will have been completely or partially damaged by the disaster, and it becomes quite difficult to handle and coordinate the entire process without the help of this infrastructure.
One of the most important requirements in disaster recovery and emergency response situations is to establish reliable and continuous communication between the officers, medical team, and other rescue workers [4]. Effective communication is very important in coordinating the rescue work and also in reconstruction works after the disaster. In order to carry out efficient and quick recovery, the rescue workers, police officers, and everyone involved may have to move at a fast pace to different locations within the area to minimize the damage and to find more survivors of the disaster. But in most situations the communication infrastructure gets damaged and proper communication cannot be established in the area, which leads to longer delays in emergency operations and increased damage to life and property [5]. A number of solutions have been proposed over the years to establish communication in disaster management services [6][7][8]. The use of ad hoc networks [9][10][11] is one of the best techniques for establishing communication during disaster relief operations. Ad hoc wireless networks are a collection of mobile devices that can be configured to work as a single network and can be deployed in these areas without the help of any infrastructure or centralized control. Any number of mobile devices like mobile phones and personal computers can be attached to the ad hoc network. Every mobile device is free to join or leave the network at any point of time. Every device in the network acts as the router as well as the host. When a mobile device sends a data packet into the network, the devices in its transmission range receive the data packet and then forward the packet to the next device in their transmission range, and so on till it reaches the destination. Although a number of techniques have been proposed for the transmission of data from the source to the destination, due to the constant movement of mobile devices, none of the methods guarantee delivery of the information at the destination [12,13]. Also, most of the methods do not support continuous communication between the mobile devices [14]. It is very important for all the people at the place of the disaster, including the survivors and rescue workers, to be able to communicate with each other while moving quickly from one place to another for safety and to minimize the damage. Thus reliable delivery of information and continuous communication become two important factors in the efficient working of disaster recovery operations, even with fast-moving people using devices like mobile phones, laptops, iPads, and so forth.
This paper provides a new technique called the Reliable Routing Technique (RRT) that utilizes opportunistic routing to guarantee the delivery of information at the destination device. We use the term data packet for the information passed in the wireless network. When a device sends a data packet into the network, all the devices in its transmission range receive the data packet. We create a priority list of these mobile devices. The mobile device that is nearest to the destination is given the highest priority and is always selected to forward the data packet to the next device. If that particular mobile device moves away during this time, the next priority mobile device forwards the data packet in the network. Thus communication is maintained as long as one mobile device receives the transmission, and the delivery of the data packet at the destination is guaranteed. Simulation results show that RRT achieves high data delivery even when the mobile devices are moving rapidly from one place to another. The first section of this paper discusses the various research works that have already been carried out in the area of reliable communication in disaster management. The next section analyses the reasons for the failure of communication systems during the disaster recovery process and also analyses the importance of telecommunications in disaster recovery and reconstruction processes, based on the data collected from questionnaires and personal interviews. The section after that describes the proposed Reliable Routing Technique for reliable and continuous delivery of information during the disaster recovery and reconstruction process. The next section explains the implementation details with results and discussion. We conclude the paper in the last section with a discussion of future work and enhancements. Related Work A number of research papers have been published highlighting the effects of natural disasters in various regions around the world. Most of these papers have highlighted the importance of being prepared for emergency and disaster recovery works. Reference [15] has reviewed the effect of the earthquake that hit Kobe, Japan, in 1995. The paper highlights the changes that have taken place over time in the region for effective disaster management. Another book [16] reviews the major natural disasters that hit various regions of the world in 2011 and discusses the effects of natural disasters on the country and its people. Reference [17] highlights the problems and issues that can occur if people are not well prepared for disaster management. Reference [18] explains the various steps that are required in every region for disaster preparation and management, with reference to the island of Hawaii. The paper [19] describes the various steps that need to be implemented for prevention of and efficient recovery from natural disasters.
The importance of communication during the disaster recovery process has been a major area of study over the years, with a number of research papers published highlighting the need for effective means of communication in disaster recovery and reconstruction works. Reference [4] has given a detailed explanation of the need for telecommunication during the disaster recovery process. The paper also highlights the various steps that can be implemented in preparation for the disaster recovery process. Reference [20] proposed a public safety communication system integrating a wireless local area network and radio. Although this was a new approach to public safety communication, the method had scalability problems and some major issues. Reference [10] gave an ad hoc network for disaster relief operations that was used for communication and for tracing people inside damaged buildings. Although the method had a number of additional functions apart from communication, reliable delivery and high performance could not be achieved with moving mobile devices. Reference [21] reviewed the various methods that are available for communication in public safety and disaster management. Reference [22] gave a rapidly deployable emergency mobile communication node that uses several communication technologies to provide multiple communication services in disaster management situations. The system also supported live video streaming and multimedia content sharing along with the traditional communication services. The proposed node included a hybrid power source based on both renewable and nonrenewable energy generators and batteries to provide electric autonomy. A pneumatic telescopic mast is installed to support communication antennas, providing mobility and increased coverage range. One or more nodes can be easily and quickly deployed in any location of interest to provide communication services. Reference [23] proposed a flexible network architecture that provided a common networking platform for heterogeneous multioperator networks, for interoperation in case of emergencies. Reference [24] gave excellent techniques using key agreements to enhance the security of mobile wireless networks. Reference [25] discussed the communication services that can be exploited during disaster recovery and reconstruction operations.
As the focus was more on deploying the ad hoc network and its security, fewer papers have worked on reliable and continuous data delivery between fast-moving mobile devices. The traditional topology-based protocols like DSR [26] and DSDV [27] suffer from increased node mobility. These protocols are much more focused on fixed routes, and their performance decreases with dynamic topology and node mobility. This leads to data loss in the network, which is unacceptable in emergency situations. The idea of geographic routing [28] provided a much more reliable and better way of transferring data in ad hoc networks. Geographic routing uses location information, or the geographic position of the node, to transfer data from one node to the other. The location information is transferred as a one-hop beacon between the nodes. GPSR [29] is one of the most popular geographic routing protocols; it uses greedy forwarding and perimeter routing to transfer data in dynamic wireless networks. But even geographic routing suffers from a major drawback: it is very sensitive to inaccuracies in location information, and its performance comes down with high mobility of nodes. This gave way to opportunistic routing and opportunistic forwarding [30]. The broadcasting nature of wireless networks was exploited by the ExOR [30] opportunistic routing protocol. The ExOR protocol provided an improved way to utilize the broadcasting property of wireless links to enhance communication at the data link and network layers of multihop wireless networks that remained static. It is a hybrid routing and MAC protocol for wireless networks that improves the data delivery of unicast transmissions. Here the sender broadcasts a batch of packets. Every packet has a list of nodes which can forward it. To maximize the progress of each transmission, the forwarding node would send data packets in the order of their nearness to the destination node. To reduce redundant transmissions, ExOR uses a batch map which stores the list of packets received at each node; every forwarding node would only forward data that has not been acknowledged by the nodes nearer to the destination in their particular batch maps. ExOR provides significant throughput improvement over earlier routing strategies, but ExOR cannot support multiple simultaneous flows, which limits the practical use of this protocol in fast-reconfiguring networks. This problem was addressed by the SOAR [31] protocol. SOAR supports simultaneous flows on multiple paths. It also incorporates adaptive forwarding path selection to leverage path diversity and minimize duplicate transmissions. But one of the major limitations of this protocol is that it uses a link-state-style topology database for routing. In order to determine the rate of packet loss, we often require periodic network-wide updates and measurement. This would be impractical in wireless networks with highly dynamic nodes. In order to avoid this problem, some protocols introduced a batching system [32]. But many applications using this system incurred considerable delay in packet arrival. Most of the recent location-based routing schemes [33] do not entirely address the problem of additional memory consumption and overhead incurred. A community-aware opportunistic routing scheme [34] was proposed to work on mobile nodes with social characteristics. Reference [35] proposed an opportunistic routing scheme to handle communication voids in fast-moving ad hoc networks. Reference [36] proposed a parallel routing scheme which is performed by many nodes
simultaneously to maximize the opportunistic gain while controlling the inter-user interference. Most of these routing protocols suffered from one or more disadvantages and were not able to guarantee reliable data delivery at the destination. This has motivated us to work on this new method, RRT, which guarantees reliable delivery of information between the rescue workers during the time of disaster and in its recovery and reconstruction phases. Analysing the Reasons for the Failure of Communication Systems during the Time of Disaster Data for the analysis was collected through questionnaires from the people affected by the 2013 Uttarakhand floods [37] in India. Personal interviews were carried out with the rescue workers and other people who took part in the disaster recovery and reconstruction works. The questionnaires and interviews were focused on two major things: (1) the reasons for the failure of communication systems during the time of disaster and (2) the importance of a reliable communication network during the disaster. Figure 1 shows the analysis of the data collected through questionnaires. Out of 419 responses received, 321 (76.61%) people indicated that the primary reason for the failure of telecommunication systems during the disaster is the destruction of communication infrastructure and network elements. 74 (17.66%) people indicated the isolation and failure of supporting elements in communication as the major reason for the failure. 21 (5.01%) people were of the opinion that excess network load and network congestion were responsible for the failure of communication during the time of the disaster and in the recovery process. Figure 2 shows the analysis of the data collected through personal interviews. Out of 44 rescue workers interviewed, 20 (45.45%) people indicated that the primary reason for the failure of telecommunication systems during the disaster is the destruction of communication infrastructure and network elements, 16 (36.36%) people indicated the isolation and failure of supporting elements in communication as the major reason for the failure, and 8 (18.18%) people were of the opinion that excess network load and network congestion were responsible for the failure of communication during the time of the disaster and in its recovery process. From the above results we concluded that the failure of communication systems during natural disasters and in recovery and reconstruction works occurs mainly for these three reasons, with most of the people citing the destruction of the physical infrastructure needed for communication as the primary reason. (i) Destruction of Communication Infrastructure and Network Elements. Natural disasters often destroy the physical communication infrastructure and network components like transmission towers, base stations, and so forth. Most of these pieces of telecommunication equipment, such as transmission towers, are very prone to natural disasters due to their structure and place of deployment. Once this physical telecommunication infrastructure gets destroyed, it becomes very difficult to have proper means of communication in the disaster-affected areas.
(ii) Isolation and Failure of Supporting Elements in Communication. This is one of the major challenges faced in ensuring communication during the time of the disaster and afterwards. The infrastructure, like electricity supply, transportation systems, and so forth, that supports communication infrastructure gets damaged during these unpredictable events. These supporting elements play a vital role in the telecommunication sector, and thus the entire communication system is hampered by their destruction. (iii) Network Overload and Congestion. During the time of a disaster, most people try to communicate with others and overload the available communication bandwidth. Analysing the Importance of Reliable and Continuous Communication during the Disaster Communication networks play a vital role in disaster management services. In this section we analyse the importance of reliable and continuous communication in disaster management services using the data obtained from the questionnaires and interviews. Figure 3 shows the analysis of the data collected through questionnaires. We used a Likert scale to mark the responses. Out of 419 responses received, 345 (82.33%) people indicated that reliable and continuous communication was "very important" during the disaster and in disaster management services. 40 (9.54%) people called it "important", while 29 (6.92%) people indicated it was "moderately important". A small minority, 3 (0.72%) people, called it "of little importance", and 2 (0.48%) people called it "unimportant". Figure 4 shows the analysis of the data collected through personal interviews. Out of 44 rescue workers interviewed, 38 (86.36%) people indicated that reliable and continuous communication was "very important" during the disaster and in disaster management services. Four (9.09%) people indicated that reliable communication was "important", while 2 (4.55%) people indicated it was "moderately important". None of the rescue workers selected the "of little importance" or "unimportant" options. So it is evident that all the rescue workers who took part in disaster recovery and reconstruction works regard reliable communication as a very important factor in disaster management. From the analysis it is very evident that reliable and continuous communication between the people, rescue workers, and everyone at the site was extremely important during the time of the disaster and also afterwards in the disaster recovery and reconstruction processes. So it is necessary to provide a technique for reliable and continuous communication between the rescue officers and the various people involved during the disaster recovery process to ensure minimum damage to life and property.
Reliable Routing Technique (RRT) The working of the Reliable Routing Technique (RRT) is illustrated in Figure 5. The small circles depict the wireless devices, like mobile phones, laptops, iPads, and so forth, used by different people in the disaster-hit area. Let us first consider device S1, held by a rescue officer trying to send information to device D1, held by another rescue officer in the wireless network. Wireless devices communicate with each other by broadcasting every data packet they receive into the network [38]. We make use of this property of the wireless medium in designing our new method: the person using mobile device S1 broadcasts the information intended for the person with device D1 into the wireless ad hoc network. The position of the destination is obtained using a location registration and lookup service as used in [29]. This service maps node addresses to locations. We consider two situations to illustrate the working of the Reliable Routing Technique (RRT). In the first situation we assume that there are no disruptions or problems in the wireless link and wireless channel and that no devices move out of the transmission area. Mobile devices X, Y, and Z, which are in device S1's transmission range, receive the data packet. We create a priority list of these devices such that the mobile device nearest to the destination is selected as the device that will forward the data further towards the destination device. We then share the priority list between the neighbouring devices. So, based on the priority list, device Y is selected to forward the data packet, or the information, towards the destination device. Device Y first checks whether the destination device is in its transmission range. If yes, it directly delivers the data packet to the destination. If not, device Y broadcasts the data packet towards the destination. Devices X and Z, which are in the transmission range of Y, also receive a copy of the same data packet, and thus they realize that the data packet has already been forwarded by another, better forwarder device. So they drop the data packet. Meanwhile the packet is received by devices P and Q. Based on the priority list, P has the maximum progress towards the destination. Mobile device P initially checks whether the destination device D1 is in its transmission range or not. As it finds destination D1 in its transmission range, it delivers the data packet to the destination. In the second scenario, we assume that the rescue officers with the mobile devices are moving to different places. Let us consider device S2, held by a rescue officer trying to send information to device D2, held by another rescue officer in the wireless network. Device S2 broadcasts the data, which is received by A, B, and C. Based on the method, we select device B as the first-priority candidate to forward the packet. But device B moves out of the transmission range because of the movement of the rescue officer handling device B.
Thus device B is unable to receive the data packet. We have set a timer (T) for every device. Once the timer expires and devices A and C have not overheard a copy of the same data packet (devices A and C have already obtained one copy of the data packet from S2), then, based on the priority list set, the second-priority device, device A, forwards the packet towards the destination. Similarly, device F receives the data packet and delivers it to the destination device. So as long as there is one device in the priority list, the delivery of the data at the destination device and continuous communication are guaranteed. Developing the Priority List of Forwarding Candidates. We construct a priority list of forwarding nodes in the network, so that if one node is unable to receive or forward the packet, the next priority node can do it, thus ensuring continuous communication even with highly mobile nodes. We have set up the priority list in such a way that only nodes located in the forwarding area are given priorities and the chance to forward the packet. The algorithm for constructing the forwarder priority list is as follows. Initialization: set the destination node as N_D, set the Forwarder Priority List as FPL and initialize it to empty, set the Neighbor Node List as NNL, set the transmission range as T_R, and set the distance from the current node to the destination node N_D as CN_DIST:

(1) begin
(2) if destination node is in the list of neighbors then
(3) set destination node as the next hop node
(4) return
(5) end if
(6) for j ← 0 to length(NNL) do
(7) compute NNL[j].DIST, the distance from NNL[j] to N_D
(8) end for
(9) sort NNL in ascending order of DIST
(10) for j ← 0 to length(NNL) do
(11) if distance from current node to NNL[j] > T_R/2 then
(12) remove NNL[j] from NNL and update FPL
(13) continue
(14) end if
(15) if NNL[j].DIST < CN_DIST
(16) then
(17) FPL.add(NNL[j]) and set the priority for each node starting from 1
(18) end if
(19) end for

The source node broadcasts the data packet intended for the destination into the network. The proposed algorithm (steps (1)-(5)) initially checks whether the destination node is in the list of neighbors of the node receiving the data packet. If the destination node is found, the data packet is delivered. Next we consider all the neighboring nodes of the initial node, calculate the distance (steps (6)-(9)) of each neighboring node to the destination, and sort the list. Then we allocate priorities to each node, with the node with the shortest distance to the destination getting the first priority, followed by the next and so on. The first-priority node is selected as the best forwarder node by the neighboring node for that particular data transmission. For each node we constantly check (steps (10)-(17)) whether the distance to the next hop node exceeds half of the transmission range of that node and whether it moves away from the transmission range. If it exceeds this distance, we remove it from our list and update the Forwarder Priority List (FPL).
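As a companion to the pseudocode, here is a minimal Python sketch of the forwarder-priority-list construction under the stated rules. The node representation and position handling are our own simplifications, and the timer-based fallback is noted in comments rather than modelled.

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_fpl(current_pos, neighbors, dest, dest_pos, t_r):
    """Return the Forwarder Priority List (FPL) for the current node.

    neighbors: {node_id: (x, y)} of nodes in transmission range t_r.
    Nodes are prioritised by nearness to the destination; neighbours more
    than t_r/2 away from the current node (likely to move out of range)
    and neighbours making no progress towards the destination are excluded.
    """
    if dest in neighbors:                      # steps (1)-(5): direct delivery
        return [dest]
    cn_dist = distance(current_pos, dest_pos)  # current node's own distance
    candidates = []
    for node, pos in neighbors.items():        # steps (6)-(9): distances, sort
        candidates.append((distance(pos, dest_pos), node, pos))
    candidates.sort()
    fpl = []
    for d, node, pos in candidates:            # steps (10)-(17): filtering
        if distance(current_pos, pos) > t_r / 2:
            continue                           # likely to leave range; skip
        if d < cn_dist:                        # must make progress to dest
            fpl.append(node)                   # priority = position in list
    return fpl
```

The first entry of the returned list acts as the forwarder; if its transmission is not overheard before the timer T expires, the next entry takes over, exactly as in the two scenarios described above.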
Results and Discussion We study the performance and properties of the Reliable Routing Technique through simulations in Network Simulator-2 [39]. We developed different types of topologies with different numbers of nodes for evaluating the performance. Using Network Simulator-2 we create an environment similar to a disaster-hit area with many wireless devices. The movement of the rescue officers is simulated by moving the mobile devices randomly in the network. Initially we carry out a simulation with 100 nodes in a uniformly distributed network topology. The packet size is set at 256 bytes and the transmission range is 250 m. The nodes are distributed over a 1000 m × 800 m rectangular region. The two-ray ground propagation model is used for the simulation. Mobility is introduced in the network with the Random Waypoint mobility model. The speed of the nodes is then varied from a minimum of 1 m/s to various maximum limits in each topology setup to analyse the performance of the protocol in fast-changing MANETs with highly mobile nodes. Constant Bit Rate traffic is generated between the nodes. The simulation time is set at 1000 seconds. We compare and evaluate the performance of RRT with the AODV and GPSR protocols based on five metrics. These metrics are crucial in deciding the reliability and performance of a routing protocol, especially with highly dynamic nodes. (i) Packet Delivery Ratio (PDR). This is one of the most important metrics in deciding the performance of a routing protocol in a network. It is defined as the ratio of data packets received at the destination(s) to the number of data packets sent by the source(s). (ii) Average End-to-End Delay. This is the average delay in receiving an acknowledgement for a delivered data packet. (iii) Length of Every Path. This is the average end-to-end (node-to-node) path length for successful packet delivery. (iv) Data Forwarding Times for Each Hop. This is the average number of times a packet is forwarded from the network layer to deliver data over each hop. (v) Data Forwarding Times for Each Packet. This is the average number of times a packet is forwarded from the network layer to deliver the data. Initially we varied the speed of the mobile devices from 0 to 10 m/s and noted the corresponding values of the performance metrics. Table 1 shows the values of the performance metrics with varying mobility of the nodes. As the speed of the wireless nodes increases, there is a small decline in the Packet Delivery Ratio and a small increase in the average delay experienced by the data packets. But as the speed variation is limited to 10 m/s, much variation in the performance metrics cannot be observed. This is because the proposed opportunistic routing scheme works equally well in normal and dynamic scenarios. As a result, even the number of hops taken by the data packet to reach the destination remains the same, with a value of 2.1. When we further increase the speed of the nodes to 50 m/s, more variation is found, as shown in Figures 6-10. From Figure 6 we can see that using the Reliable Routing Technique the Packet Delivery Ratio is very close to 1. As the Packet Delivery Ratio is very close to the optimal value, we can infer that the packet loss that occurred is minimal. This is because RRT guarantees delivery at the destination as long as there is one node in the Forwarder Priority List.
Figures 6, 7, 8, 9, and 10 show the comparison of the performance of RRT with the GPSR and AODV protocols. From Figure 6 it is evident that the Packet Delivery Ratio (PDR) of RRT is much better than that of the other two protocols. This implies that the number of packets received at the destination is much larger for the RRT protocol. The PDR value for the RRT protocol is very near 1, which means that almost all the data packets sent are delivered at the receiver. As the mobility of the nodes increases, the PDR of the GPSR and AODV protocols comes down considerably, but RRT maintains a high delivery ratio. Figure 7 shows that the average delay experienced by the data packets using the RRT protocol is much lower than with the other two. Also, as the mobility of the nodes increases, the performance of GPSR and AODV degrades. Similarly, from Figure 8 we can see that RRT takes a smaller number of hops to deliver the data packet compared with the other two protocols. These results show that RRT ensures reliable delivery of data packets with minimum delay in fast-changing ad hoc networks. Figure 9 shows the average packet forwarding times per hop for the three protocols. We can see that the average time taken by RRT to forward a data packet is lower than for the GPSR and AODV protocols. This shows that the delay experienced by the RRT protocol at each hop is much lower than for the other two protocols, leading to RRT's high efficiency. Figure 10 shows the average packet forwarding times per packet for the three protocols. From the figure we can see that RRT takes less time to forward the data packet than the other two protocols. This shows that the delay experienced by each packet using RRT is much lower than for the other two protocols in the network. Also we can see that as the speed of the nodes increases, the performance of GPSR and AODV comes down, but RRT maintains a steady performance. This shows that the RRT protocol maintains very good performance even with highly mobile and random nodes in the network. From the simulation results it is evident that RRT achieves better performance than the other two protocols and ensures a high rate of data delivery even in fast-changing and reconfiguring mobile ad hoc networks with highly mobile nodes. Conclusions In this paper we initially analysed the importance of reliable and continuous communication in disaster recovery and reconstruction works. The data obtained from the questionnaires and personal interviews confirmed that reliable and continuous communication is very important in disaster management services. We also discussed a number of methods given by various researchers for communication in disaster environments. Most of these methods could not guarantee reliable data delivery at the destination device. Using the broadcast property of the wireless medium, the proposed Reliable Routing Technique was used for data delivery between two mobile devices in highly mobile ad hoc networks. Results from the simulations and comparisons with other popular data transfer methods confirmed that our method gave very high performance, guaranteed reliable data delivery at the destination device, and also ensured continuous communication between the devices. Figure 1: Reasons for the failure of communications system: data from the questionnaires.
Figure 2: Reasons for the failure of communications system: data from the interviews. Figure 3: Importance of reliable and continuous communication during the disaster: data from questionnaires. Figure 4: Importance of reliable and continuous communication during the disaster: data from interviews. Figure 5: Data transferred between the wireless nodes using RRT. Figure 8: Length of every path. Figure 9: Data forwarding times for each hop. Figure 10: Data forwarding times for each packet. Table 1: Performance analysis of RRT.
7,340.2
2016-02-11T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
The optical conductivity of few-layer black phosphorus by infrared spectroscopy The strength of light-matter interaction is of central importance in photonics and optoelectronics. For many widely studied two-dimensional semiconductors, such as MoS2, the optical absorption due to exciton resonances increases with thickness. However, as we show here, few-layer black phosphorus exhibits the opposite trend. We determine the optical conductivity of few-layer black phosphorus with thickness down to bilayer by infrared spectroscopy. Contrary to our expectations, the frequency-integrated exciton absorption is found to be enhanced in thinner samples. Moreover, the continuum absorption near the band edge is almost constant, independent of the thickness. We show that this behaviour is related to the quantum of the universal optical conductivity of graphene (σ0 = e2/4ħ), with a prefactor originating from the band anisotropy. For many two-dimensional semiconductors, such as MoS2, the exciton absorption increases with thickness. Here, the authors show that, in black phosphorus, less material absorbs more light due to exciton resonances. In recent years, two-dimensional (2D) materials, including graphene, transition metal dichalcogenides (TMDCs) and black phosphorus (BP), have been at the forefront of scientific research. Strong light-matter interactions have been demonstrated in these atomically thin materials [1-12], holding great promise for photonic and optoelectronic applications. Optical absorption in 2D materials is a fundamental light-matter interaction process, basically governed by the optical sheet conductivity σ(ħω). Graphene is a well-known 2D example, which exhibits a universal conductivity σ0 = e2/4ħ over a broad frequency range, where e is the electron charge and ħ is the reduced Planck constant [1,2]. Moreover, N-layer graphene exhibits an optical conductivity of Nσ0, showing quantized optical transparency [1,13]. In semiconducting TMDCs, such as MoS2, the optical conductivity due to the K-point exciton increases with layer number [14]. However, in this paper, we show that excitons in thinner samples absorb more light in few-layer BP. Meanwhile, the absorption due to the electron-hole continuum near its own edge is almost the same for each thickness, with a value close to that in monolayer graphene. Few-layer BP is an elemental 2D semiconductor beyond graphene, with a strongly layer-dependent direct bandgap [15-18], offering an ideal platform to probe layer-dependent properties and the dimensional crossover from 3D to 2D. Moreover, the intrinsic in-plane band anisotropy clearly distinguishes few-layer BP from other widely studied 2D materials, such as graphene, TMDCs and InSe. Along with the moderate bandgap and large tunability, few-layer BP is unique and promising for polarized IR detectors and emitters. From this point of view, a quantitative determination and thorough understanding of the optical absorption in few-layer BP is in great demand. The optical absorption of few-layer BP is found to be dominated by robust excitons [10-12]. In previous optical studies [10,15,16], the absorption intensity was not well quantified. In other words, quantitative insight into the absolute absorption is yet to be gained, though it is highly desirable for future optoelectronic applications, such as evaluating the quantum efficiency of photoluminescence (PL) and photocurrent generation.
Our study resolves this issue and provides new insights into light-matter interactions in anisotropic 2D gapped materials. Results Sample preparation and IR characterization. A Fourier transform infrared (FTIR) spectrometer was used to obtain the extinction spectra (1 − T/T0) of few-layer BP on polydimethylsiloxane (PDMS) or quartz substrates (see Methods for sample preparation and IR characterization), where T and T0 denote the light transmittance of the substrate with and without BP samples, respectively, as illustrated in Fig. 1d. For atomically thin materials on a thick transparent substrate, as in our case, when the optical conductivity is not large, the extinction (1 − T/T0) is approximately proportional to (the real part of) the optical conductivity [2,4]: σ(ħω) = (1 − T/T0)·(ns + 1)·c/8π, where ns is the refractive index of the substrate (ns = 1.39 for PDMS in the measured IR range [19]) and c is the speed of light. We systematically measured IR conductivity σ(ħω) spectra (in units of σ0) for 2-7 L BP over a broad range of 0.4-1.36 eV at room temperature, as shown in Fig. 2. The incident light is normal to the layer plane and polarized along the armchair (AC) direction. The spectra for zigzag (ZZ) polarization are featureless [15], hence not discussed here. Previous studies have revealed quantized subband structures in few-layer BP [15,16], due to the quantum confinement in the out-of-plane direction and considerable interlayer interactions, in analogy to traditional quantum wells (QWs). In symmetric QWs with normal light incidence, optical transitions obey the Δj = 0 selection rule (j is the subband index) [20,21]. Ejj denotes the exciton resonance associated with the optical transition between the jth pair of subbands (vj → cj) at the Γ point of the 2D Brillouin zone, as illustrated in Fig. 1b. As seen from Fig. 2, the E11 resonance exhibits a very narrow linewidth even at room temperature, especially for thicker BP. In addition, the Stokes shift of few-layer BP is almost negligible (see Supplementary Fig. 1 for an example of 3 L), indicating good sample quality. The IR conductivity σ(ħω) at the E11 resonance reaches 6.6σ0 in 2 L BP, directly translating to a light absorption of 15% in the free-standing case. This suggests very strong light-matter interactions in this atomically thin material. Moreover, we can observe a spectrally flat and broad response above the exciton energy, attributed to the continuum of band-to-band transitions. Step-like features in the continuum absorption underline the step-like 2D joint density of states (DOS) in QW-like structures, as sketched in Fig. 1c; this still holds in anisotropic few-layer BP [22]. Layer-dependent exciton absorption. The σ(ħω) spectra provide a wealth of information on the exciton oscillator strength, which is directly related to the frequency-integrated conductivity (or absorption) of excitons [23,24], as indicated by the shaded areas in Fig. 2. The peak height of the exciton feature is not as informative, since it is sensitive to the sample quality. Figure 3a shows the integrated conductivity σI of the ground-state (1s) exciton of few-layer BP as a function of layer number N. The details of the spectral fitting and the extraction of σI are presented in Methods. For each layer thickness, at least three samples were measured, with the error bar defined as the spread of the data. From Fig. 3a, one can see that thinner BP has larger absorption.
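As a worked example of this conversion, the sketch below applies the thin-film relation quoted above to turn a measured extinction into a conductivity in units of σ0. It is a minimal sketch under that approximation; the 15% extinction value used in the example is illustrative, not a measured data point.

```python
import numpy as np

# Thin-film conversion: extinction (1 - T/T0) to sheet conductivity in
# units of sigma_0 = e^2/(4*hbar). In Gaussian units the relation above is
# sigma(hw) = (1 - T/T0) * (n_s + 1) * c / (8*pi), and sigma_0 = alpha*c/4,
# so the ratio reduces to a dimensionless prefactor.

ALPHA = 1 / 137.036  # fine structure constant

def extinction_to_sigma0_units(extinction, n_s=1.39):
    """sigma / sigma_0 for a 2D film on a substrate of refractive index n_s."""
    # sigma/sigma_0 = extinction * (n_s + 1) / (2 * pi * alpha)
    return extinction * (n_s + 1) / (2 * np.pi * ALPHA)

# Example (illustrative numbers): a 15% extinction on PDMS corresponds to
print(extinction_to_sigma0_units(0.15))   # ~7.8 sigma_0
```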
In other words, less material absorbs more light at exciton resonances. This remarkable result is in sharp contrast to the widely studied 2D semiconducting TMDCs, in which the exciton absorption increases with layer number (see Supplementary Fig. 2 for the absorption of 1-4 L MoS2 and also ref. [14]). To be more quantitative, the integrated absorption due to excitons is directly proportional to Lz·|φex(0)|², where Lz is the thickness of the sample and |φex(0)|² is the modulus squared of the exciton wavefunction at the origin, describing the probability of finding the electron and the hole at the same location [23-25]. Certainly, the smaller the thickness, the larger the confinement in the z direction and hence the larger |φex(0)|². Theory has predicted that |φex(0)|² ∝ 1/Lz² for QWs [26], if the penetration of the electron and hole wavefunctions into the barriers is negligible [27,28]. This gives an integrated absorption ∝ 1/Lz (Lz ∝ N for BP), which means stronger absorption for thinner samples. This argument correctly predicts the trend, but a fit with such a relation does not work well. The prediction shows a much steeper decrease than what we observed, as shown in Fig. 3a (black dashed curve), which calls for a more refined model. We can tackle this problem from another perspective. In the 2D hydrogen model for excitons, we find that the integrated absorption for the 1s exciton is proportional to the exciton binding energy (Eb) (see Supplementary Note 1 for details). This is a very reasonable conclusion, since the larger the binding energy, the closer the electron is to the hole, which favours light absorption at exciton resonances. We adopt this result for our analysis, even though excitons are not ideally 2D in few-layer BP. Olsen et al. proposed a simple screened hydrogen model for 2D excitons [29], in which Eb can be analytically expressed as a function of the reduced effective mass μ and the 2D sheet polarizability χ. Under the condition 32πμχ/3 >> 1, Eb can be simplified as Eb ≈ 3/(4πχ), which is proved to be valid for few-layer BP [10]. The sheet polarizability χ relates to the dielectric screening of electron-hole interactions. In the case of Fig. 2, few-layer BP is supported by a PDMS substrate, which introduces additional dielectric screening. Thus, both the substrate and the material itself contribute to the screening. In view of this, the sheet polarizability χ is replaced by an effective value for N-layer BP, expressed as χeff = χ0 + Nχ1, with χ0 = 6.5 Å and χ1 = 4.5 Å describing the screening from the PDMS substrate and from single-layer BP, respectively, based on our previous study [10]. Thus, we have σI ∝ 1/(χ0 + Nχ1). With this relation, it is clear that the exciton absorption increases as the layer number decreases, though not as dramatically as 1/N. We use this relation to fit the data for E11, as shown in Fig. 3a; the overall agreement is much better than with the 1/N scaling. The basic behaviour is well captured by the modified model, though the agreement with the experimental data is still not excellent. The deviation is mainly caused by the experimental uncertainty in determining the optical conductivity, especially for thinner BP samples, which are expected to be more susceptible to the environment. The substrate plays a role in reducing the exciton binding energy, and hence the integrated absorption. For free-standing BP, this model also gives a scaling of 1/N (ref.
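The one-parameter fit just described is simple enough to sketch explicitly. In the snippet below, the χ0 and χ1 values are taken from the text, but the σI data points are placeholders, not the measured values of Fig. 3a.

```python
import numpy as np
from scipy.optimize import curve_fit

# Layer-number fit discussed above: sigma_I(N) ∝ 1/(chi_0 + N*chi_1).
chi0, chi1 = 6.5, 4.5          # in angstroms, from the text (PDMS + per-layer BP)

def model(N, A):
    """Integrated exciton conductivity vs layer number, amplitude A free."""
    return A / (chi0 + N * chi1)

N = np.array([2, 3, 4, 5, 6, 7])
sigma_I = np.array([1.00, 0.80, 0.67, 0.58, 0.51, 0.46])  # placeholder data

(A_fit,), _ = curve_fit(model, N, sigma_I)
print(A_fit, model(N, A_fit))  # fitted amplitude and fitted curve at each N
```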
[10]), consistent with the previous prediction for QWs. For the E22 resonances, the exciton absorption likewise increases with decreasing layer number, as also shown in Fig. 3a. It should be noted that the thickness dependence of the exciton absorption in BP is qualitatively quite different from that in traditional QWs. For the latter, the absorption reaches a maximum at a certain thickness and decreases again with decreasing thickness, due to the leakage of the exciton wavefunction into the barriers in the ultrathin limit [27,28]. This scenario becomes more evident in shallow QWs. The hard confinement in atomically thin BP clearly distinguishes it from traditional QWs. From this perspective, atomically thin BP provides us with unique opportunities to examine this new type of QW. The frequency-integrated conductivity of excitons is robust against temperature, although the linewidth is typically vulnerable to various factors. Since the aforementioned PDMS substrate thermally expands (contracts) during the heating (cooling) process, significant strain effects can be introduced into few-layer BP samples during temperature-dependent measurements, overwhelming the pure temperature effect. Therefore, we transferred few-layer BP samples to quartz substrates with a much smaller thermal expansion coefficient, so that the strain effect can be ignored [30]. Figure 4a, d show the IR extinction spectra of a 3 L and a 7 L BP at temperatures varying from 10 K to 300 K, respectively. To improve the signal-to-noise ratio of such measurements, the incident IR beam size was set to be larger than the sample size. Thus, the absolute intensity of the exciton absorption is underestimated, but it is legitimate to compare the integrated area of the exciton peaks for the same sample at different temperatures, which is directly related to the exciton oscillator strength. To quantitatively probe the temperature effect, the integrated area and linewidth of the exciton peaks are extracted from spectral fitting using Eq. (3) in Methods. Clearly, the exciton width increases with temperature for both E11 and E22, as expected (Fig. 4c, f). This scenario is common in semiconductors and is attributed to enhanced electron-phonon scattering at elevated temperature [31,32]. The integrated areas of the exciton peaks, on the other hand, are nearly independent of temperature, as shown in Fig. 4b, e. The slight deviation from a constant may arise from the experimental uncertainty and the data fitting procedure. A recent study shows that for typical semiconductors, in the incoherent region of light-matter interactions, the integrated exciton absorption depends only on the radiative decays, rather than the scattering decays, hence the absorption is independent of temperature [33]. This is exactly the case for our observations. Since the integrated exciton intensity is proportional to the exciton binding energy, our results indicate that the exciton binding energy is almost constant within the tested temperature range. Continuum absorption. Next, we focus on the layer dependence of the absorption due to the continuum transitions. As seen in Fig. 2, a relatively flat response can be observed in the σ(ħω) spectra above the ground-state exciton energy, mainly associated with the continuum band-to-band transitions (labeled Tjj) and possibly excited excitonic states [10]. A closer examination of Fig. 2 shows that the continuum absorption is not strictly constant but decreases gradually with increasing photon energy.
For simplicity, we focus on the continuum absorption close to the band edge (ħω ≈ E_g), indicated by the red (blue) arrows for T_11 (T_22) transitions in Fig. 2. The results are summarized in Fig. 3b as a function of layer number N; within the experimental uncertainty, they approximately follow a constant value. This means that the absorption due to the band-edge continuum transitions is almost the same, regardless of the thickness of the BP samples. According to Fermi's golden rule, if a light beam with frequency ω and linear polarization along ê_0 is normally incident on a direct-gap 2D system, the dimensionless absorption A(ħω) due to a pair of conduction and valence bands can be expressed as (in SI units) 34

A(ħω) = [πe²/(n_s c ε_0 m_0² ω)] (1/A_r) Σ_k |ê_0 · P_cv(k)|² δ(E_cv(k) − ħω),   (1)

where ε_0 is the vacuum permittivity, m_0 is the free electron mass, n_s is the refractive index of the surrounding medium, and A_r is the area of the 2D material. The momentum matrix element is P_cv(k) = ⟨c,k|p̂|v,k⟩, with p̂ = −iħ∇ the momentum operator; E_cv(k) = E_ck − E_vk, and k is the wave vector. At first glance, the absorption appears strongly band-parameter dependent, since both P_cv(k) and the density of states (DOS) enter Eq. (1). Surprisingly, the absorption in isotropic graphene 1,2 and InAs QWs 35 is found to be universal, indicating that there is a cancellation mechanism between these two terms. For massless graphene, which has a unique linear energy dispersion near the Dirac point, E_c,v(k) = ±ħv_F k (v_F is the Fermi velocity), the cancellation is exact with no frequency dependence, leading to a well-defined universal absorption quantum πα, where α = e²/(4πε_0ħc) is the fine-structure constant (~1/137) 36. For anisotropic 2D massive semiconductors (see Supplementary Note 2 for details), k·p perturbation theory 34 gives |ê_x · P_cv(k)|² ≈ m_0²E_g/(4μ_x), where μ_x(y) is the reduced effective mass in the AC (ZZ) direction. With this, and also taking into account the spin and valley degeneracies g_s and g_v, we thus have

A(ħω) = [g_s g_v πα/(2n_s)] (E_g/ħω) √(μ_y/μ_x) Θ(ħω − E_g),   (2)

where Θ(ħω − E_g) is the step function, describing the 2D joint DOS, and E_g is the bandgap. This indicates that most of the band-parameter-dependent terms cancel out, leaving behind the frequency-dependent factor E_g/ħω and the band-anisotropy factor √(μ_y/μ_x). Discussion As a simple explanation of what we observed for both exciton and electron-hole continuum transitions, we can revisit the 2D band structure of few-layer BP. Due to the strong coupling between layers, the conduction and valence bands split into multiple 2D subbands with sizable energy spacing (for sample thicknesses below 10 L) 15. If we focus on the bandgap continuum transition, regardless of the thickness or the total number of subbands, the relevant ones are only the c_1 and v_1 subbands. As a consequence, one cannot expect more absorption at photon energies right above the bandgap when the layer number increases, given that only the first pair of 2D subbands is involved and the 2D joint DOS barely varies. Therefore, increasing the sample thickness does not enhance the continuum absorption. On the other hand, the excitonic effect weakens with increasing layer number due to weaker confinement in the z direction, which decreases the exciton absorption. This can qualitatively explain the findings in Fig. 3.
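To make the subband-counting picture concrete, the sketch below evaluates the step-like continuum absorption implied by the reconstruction of Eq. (2) above, one step per subband pair. Both that functional form (including its E_j/ħω frequency dependence) and the subband gaps and masses are assumptions chosen for a BP-like system, not fitted values from this work.

```python
import numpy as np

ALPHA = 1 / 137.036  # fine-structure constant, e^2/(4*pi*eps0*hbar*c)

def continuum_absorption(hw, subband_gaps, mu_x, mu_y, gs=2, gv=1, ns=1.0):
    """Step-like continuum absorption for light polarized along AC (x).

    Each pair of 2D subbands with gap E_j contributes one absorption step
    of order pi*alpha at its edge, scaled by the anisotropy factor
    sqrt(mu_y/mu_x) and the E_j/hw frequency dependence assumed in Eq. (2).
    """
    hw = np.asarray(hw, dtype=float)
    A = np.zeros_like(hw)
    for E_j in subband_gaps:
        step = np.heaviside(hw - E_j, 0.5)  # 2D joint-DOS onset
        A += (gs * gv * np.pi * ALPHA / (2 * ns)) \
             * np.sqrt(mu_y / mu_x) * (E_j / hw) * step
    return A

# Illustrative 3 L BP-like parameters: three subband pairs in the probed window.
hw = np.linspace(0.4, 1.4, 500)  # photon energy (eV)
A = continuum_absorption(hw, subband_gaps=[0.53, 0.82, 1.20],
                         mu_x=0.06, mu_y=0.70)
print(f"A just above the first edge: {A[np.searchsorted(hw, 0.60)]:.3f}")
```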
This argument applies to other 2D materials as well. For MoS_2, the electronic bands associated with the direct-gap exciton at the K point show almost no splitting when the layer number increases from 1 to 2 (or to other thicknesses) 37. The degenerate subbands certainly double the DOS, and the exciton absorption can increase accordingly, giving the opposite trend to BP. As for N-layer graphene, in spite of the complexity of the very-low-energy band structure, in the range from the visible to the near-IR, N pairs of subbands are involved in the optical absorption and each pair contributes an absorption quantum πα 1,2. For N-layer BP, in contrast, if we focus only on the bandgap region, a single pair of subbands is involved and a single absorption step is contributed, regardless of the thickness. Of course, if we increase the photon energy, more and more subbands contribute and the absorption increases step by step; eventually the continuum absorption becomes comparable to that of N-layer graphene. Therefore, counting the 2D subbands does give a good estimate of the absorption in different 2D systems 36. In summary, we have determined the optical conductivity of 2-7 L BP using IR absorption spectroscopy. Our results reveal that the exciton absorption increases as the layer number decreases, a direct consequence of enhanced excitonic effects in reduced dimensionality. Moreover, the absorption from the continuum states near the band edge exhibits a layer-independent value. Few-layer BP provides an ideal platform to probe the dimensional effect on the strength of optical absorption from bound (exciton) and unbound (continuum) states in the same material. The highly enhanced exciton absorption of atomically thin BP, which tends to host a high density of optical excitations, may open up new possibilities for applications in nonlinear optics and quantum optics. Our results are expected to stimulate further theoretical interest in anisotropic 2D materials. Methods Sample preparation. Few-layer BP samples were prepared by a modified mechanical exfoliation method 38. First, a piece of bulk BP crystal (HQ Graphene Inc.) was cleaved several times using Scotch tape. Second, the tape containing BP flakes was pressed lightly against a PDMS substrate of low viscosity and then peeled off rapidly, leaving some thin BP flakes with relatively large area and clean surfaces on the PDMS substrate. To achieve high optical quality, the samples on PDMS were used directly for room-temperature IR measurements, without any additional transfer step. Supplementary Fig. 1 shows an example of a 3 L BP flake on PDMS; the negligible Stokes shift indicates good sample quality. To avoid sample degradation in air, the samples were prepared in an N_2 glove box with O_2 and H_2O levels below 1 ppm and then loaded into a cryostat purged with N_2 for the subsequent IR measurements. The cryostat serves only for sample protection, not for temperature control. The layer number and crystal orientation of BP were readily identified using polarized IR spectroscopy 15. Polarized IR spectroscopy. The IR extinction (1 − T/T_0) spectra of few-layer BP were measured using a Bruker FTIR spectrometer (Vertex 70v) equipped with a Hyperion 2000 microscope, as illustrated in Fig. 1d. A tungsten halogen lamp was used as the light source to cover the broad spectral range of 0.4-1.36 eV with 1 meV resolution, in combination with a liquid-nitrogen-cooled mercury-cadmium-telluride (MCT) detector. The lower cutoff photon energy is set by the substrate. Linearly polarized incident light was obtained by passing the IR beam through a broadband ZnSe grid polarizer; it was then focused onto the BP samples using a ×15 IR objective.
For room-temperature measurements, the aperture size was set to be smaller than the sample size, so that the entire IR beam passes through the sample. A spectrum was typically acquired over 1000 averages to improve the signal-to-noise ratio. Low-temperature IR measurements. For low-temperature measurements, few-layer BP samples were transferred from PDMS to quartz substrates, which have a much smaller thermal expansion coefficient, to avoid the significant strain that PDMS would impose during heating (cooling). The samples were then loaded into a liquid-He-cooled cryostat covering the temperature range from 10 K (or lower) to 300 K. To improve the signal-to-noise ratio, the aperture size was set to be larger than the sample size. Data analysis. Generally, a homogeneously broadened lineshape is characterized by a Lorentzian function. However, for the E_11 peaks in Fig. 2, attempts to fit the lineshape with a single Lorentzian failed, since the high-energy tail deviates from the ideal lineshape, especially for thinner BP. Similar observations have been reported in QWs 39-41: the exciton absorption (or emission) lineshape is slightly asymmetric, exhibiting a Lorentzian profile on the low-energy side and an exponential slope on the high-energy side. This is caused by the so-called "exciton localization effect", a consequence of spatial inhomogeneity 39. Nevertheless, this imperfection does not affect our determination of the integrated areas of the exciton peaks. To determine the integrated conductivity (σ_I), the E_11 peaks in Fig. 2 are fitted using a documented model, first proposed by Schnabel et al. 39 and successfully applied to QW-like structures 40,41; this model is expressed analytically as Eq. (3). Its first part is a Lorentzian function, describing the symmetric lineshape with area L_S and linewidth γ_L; Δ = ħω − ħω_0, with ħω_0 denoting the average transition energy between the conduction and valence subbands. The second part describes the asymmetric lineshape: γ is the linewidth, η describes the asymmetric broadening by exciton localization, and L_A is the spectral weight. We use Eq. (3) to fit the E_11 peaks. As shown in Fig. 2 (red curves), the overall agreement is excellent, except for a small discrepancy at the high-energy end of the peaks, given that the excited excitonic states (2s or 3s) and the continuum absorption also contribute in this spectral region. The E_22 peaks (blue curves), in contrast, can be fitted well using only the symmetric (Lorentzian) part. The integrated conductivities shown in Fig. 3a are extracted from the fitted peaks. In addition, as indicated by the extracted asymmetry parameter η for each layer thickness (Supplementary Fig. 4), thinner BP exhibits larger asymmetry. This is reasonable, since thinner BP is more sensitive to the underlying substrate and environment. Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request.
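As a sketch of the fitting procedure described under Data analysis above, the snippet below implements a lineshape of the kind described there: a symmetric Lorentzian plus an asymmetric component that is Lorentzian on the low-energy side and decays exponentially on the high-energy side. The exact form of Eq. (3) is not reproduced in the text, so this functional form (and the initial-guess values) should be read as an assumed stand-in that reuses the parameter names L_S, γ_L, L_A, γ and η from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

def exciton_lineshape(hw, hw0, L_S, gamma_L, L_A, gamma, eta):
    """Symmetric Lorentzian plus an asymmetric term: Lorentzian on the
    low-energy side, exponential decay (localization tail, scale eta)
    on the high-energy side. Assumed stand-in for Eq. (3)."""
    d = hw - hw0
    sym = L_S * (gamma_L / (2 * np.pi)) / (d**2 + (gamma_L / 2) ** 2)
    lor = (gamma / (2 * np.pi)) / (d**2 + (gamma / 2) ** 2)
    # Exponential branch matches the Lorentzian peak height at d = 0;
    # clipping avoids overflow when evaluating at d < 0.
    expo = (2 / (np.pi * gamma)) * np.exp(-np.clip(d, 0, None) / eta)
    asym = L_A * np.where(d <= 0, lor, expo)
    return sym + asym

# Usage on a measured spectrum (hw_data in eV, sigma_data in arb. units):
# p0 = [0.73, 1.0, 0.02, 0.3, 0.02, 0.01]   # rough initial guesses
# popt, _ = curve_fit(exciton_lineshape, hw_data, sigma_data, p0=p0)
# sigma_I = np.trapz(exciton_lineshape(hw_data, *popt), hw_data)
```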
5,531.4
2020-04-15T00:00:00.000
[ "Physics" ]
Stem-Like Signature Predicting Disease Progression in Early Stage Bladder Cancer. The Role of E2F3 and SOX4 The rapid development of the cancer stem cell (CSC) field, together with powerful genome-wide screening techniques, has provided the basis for the development of future alternative and reliable therapies aimed at targeting tumor-initiating cell populations. Urothelial bladder cancer stem cells (BCSCs), identified for the first time in 2009, are heterogeneous and originate from multiple cell types, including urothelial stem cells and differentiated cell types (basal, intermediate stratum and umbrella cells). Some studies hypothesize that BCSCs do not necessarily arise from normal stem cells but might derive from differentiated progenies following mutational insults and acquisition of tumorigenic properties. Conversely, there are data indicating that normal bladder tissues can generate CSCs through mutations. Prognostic risk stratification through the identification of predictive markers is of major importance in the management of urothelial cell carcinoma (UCC) patients. Several stem cell markers have been linked to recurrence or progression. The CD44v8-10 to standard CD44 ratio (total ratio of all CD44 alternative splicing isoforms) in urothelial cancer has been shown to be closely associated with tumor progression and aggressiveness. ALDH1 has also been reported to be associated with BCSCs and a worse prognosis in a large number of studies. UCC includes low-grade and high-grade non-muscle-invasive bladder cancer (NMIBC) and high-grade muscle-invasive bladder cancer (MIBC). Important genetic defects characterize the distinct pathways in each of the stages and probably grades. As an example, amplification of chromosome 6p22 is one of the most frequent changes seen in MIBC and might act as an early event in tumor progression. Interestingly, among NMIBC there is a much higher rate of amplification in high-grade NMIBC compared with low-grade NMIBC. CDKAL1, E2F3 and SOX4 are highly expressed in patients with the chromosomal 6p22 amplification, alongside five other known genes in the region (ID4, MBOAT1, LINC00340, PRL, and HDGFL1). On that basis, SOX4, E2F3 or 6p22.3 amplifications might represent potential targets in this tumor type. Focusing on SOX4, it seems to exert its critical regulatory functions upstream of the Snail, Zeb, and Twist families of transcriptional inducers of EMT (epithelial-mesenchymal transition), but without directly affecting their expression, as seen in several cell lines of the Cancer Cell Line Encyclopedia (CCLE) project. SOX4 gene expression correlates with advanced cancer stages and poor survival in bladder cancer, supporting a potential role as a regulator of bladder CSC properties. SOX4 might serve as a biomarker of the aggressive phenotype, also underlying progression from NMIBC to MIBC. The amplicon in chromosome 6 contains SOX4 and E2F3 and is frequently found amplified in bladder cancer; these genes/amplicons might be a potential target for therapy. As an existing hypothesis is that chromatin deregulation through enhancers or super-enhancers might be the underlying mechanism responsible for this deregulation, a potential way to target these transcription factors could be through epigenetic modifiers. Urothelial Stem Cells Normal adult stem cells in the basal layer of the urothelium can regenerate and proliferate to restore urothelial integrity after damage [1].
These basal urothelial stem cells have a high nuclear-to-cytoplasmic ratio and express CD44, laminin receptor (LR), β1 and β4 integrins, and specific "basal" cytokeratins (CK-5/14, CK-17) [2]. Phenotypically similar populations of cells have been isolated from urothelial cancer cell lines and primary tumors [3]. The tissue-specific stem cells of the normal urothelium have been proposed to reside in the basal layer, responsible for the maintenance of tissue homeostasis and renewal. The basement membrane serves as a nidus for epithelial-stromal interactions that are essential for stem cell preservation [4]. Alternatively, it has also been suggested that at least two independent pools of urothelial stem cells exist [1]. In the attempt to identify and target tumor-initiating cell populations, the similarities between normal and tumor stem cells of the same tissue have been exploited. Many molecules expressed by normal stem cells have been found in their malignant counterparts [1]. For example, the embryonic stem cell marker OCT3/4, a key regulator of self-renewal, showed high expression in human bladder cancer, and the level of expression correlated with tumor aggressiveness and progression rate in patients [5,6]. Another marker, CD44, is one of the most prominent stem cell markers. CD44+ cells are located in the basal layer of the normal urothelium as well as in UCC [7]. CD44 is a cell surface molecule that has been related to multiple functions including cell differentiation, proliferation, migration and angiogenesis. Other potential functions include presenting cytokines, chemokines, and growth factors to the corresponding receptors, docking of proteases at the cell membrane, and cell survival signaling. Cancer Stem Cells (CSC) in Bladder Cancer The understanding of the role of stem cells in tissue biology is the basis of the proposal that cancers might similarly develop from a progenitor pool (the "cancer stem cell (CSC) hypothesis"). This idea holds that CSCs differentiate into cancer cells and thereby give rise to tumors, just as normal adult tissues arise from a specific stem-like cell population. These cells would drive tumor growth and the ability to metastasize, as well as resistance to conventional antitumor therapy. Cancer stem cells (CSC) were defined in 2006 as malignant cells with an ability to renew and differentiate to form all of the cell types in a given tumor (American Association for Cancer Research Workshop 2006) [8]. In 2009, bladder cancer stem cells (BCSCs) were identified for the first time via the markers used to isolate normal stem cells [9], and their existence was supported by subsequent studies [10]. Urothelial cancer stem cells comprise a tumor cell subpopulation with tumor-initiating potential, self-renewal, and clonogenic and proliferative capacity. These cells are observed in tumors, as in normal urothelium, and have the ability to conserve cellular heterogeneity via differentiation and hierarchical tissue organization. The rapid development of the CSC field, together with powerful genome-wide screening techniques, has provided the basis for the development of future alternative and reliable therapies. One of the characteristics of CSCs is stemness, i.e., the ability to self-renew and differentiate [11], for which several signaling pathways, such as JAK/STAT, Wnt/β-catenin, Nanog, and Notch, are overactivated depending on the tumor type [12,13]. CSCs sometimes phenotypically resemble non-stem cancer cells, and this has been related to therapy resistance.
Non-stem cancer cells can acquire stemness by dedifferentiation in response to multiple stimuli, possibly including conventional cancer therapies [14,15]. Some studies hypothesize that bladder CSCs (BCSCs) do not necessarily arise from normal stem cells but might derive from differentiated progenies following mutational insults and acquisition of tumorigenic properties [1]. Recent studies support that normal bladder tissues might transform into CSCs through mutations in urothelial stem cells, basal cells, intermediate cells and terminally differentiated umbrella cells [10]. Older studies supported the idea that CSCs arise from normal stem cells having suffered gene mutations rather than deriving from differentiated progenies [16]. Although this concept was abandoned, it has re-emerged now that recent papers have shown that co-mutagenesis of ARID1A, GPRC5A and MLL2 by CRISPR/Cas9 technology significantly enhanced the self-renewal and tumor-initiation properties of BCSCs [17]. CSC Markers in Bladder Cancer BCSCs are heterogeneous, and identification of these cells is crucial since they are integral to the initiation, high recurrence and chemoresistance of bladder cancer. Based on all these data, the use of a combination of markers could help refine the CSC phenotype in BC and in other tumors. CSC as Markers of Progression In clinical practice we are still missing reliable prognostic markers to identify those papillary tumors more prone to progress to muscle-invasive disease. In this context, the use of CSC markers as a prognostic tool has been limited by the functional and phenotypic heterogeneity of CSC populations. Along the same lines, recurrence of NMIBC has been related to the presence of undifferentiated cells exhibiting stem-like properties, the so-called cancer stem cells (CSCs) [26,27]. These tumor-initiating undifferentiated cells can undergo unlimited self-renewal and have been shown to re-form tumors when implanted in immunocompromised mice [28,29]. Several stem cell markers have been found to be linked to recurrence or progression. One example is the CD44v8-10 to standard CD44 ratio (total ratio of all CD44 alternative splicing isoforms) in urothelial cancer, which has been shown to be closely associated with tumor progression and aggressiveness [4,30]. ALDH1 has also been reported to be associated with BCSCs and a worse prognosis in a large number of studies [31,32]. Stem Cell Differences between NMIBC and MIBC Most bladder cancers are non-invasive urothelial papillary tumors. These display high recurrence rates after resection but only rarely infiltrate the bladder wall or develop metastasis. However, 10-30% of them develop into high-grade invasive tumors, and it is critical to identify genetic defects and predictive markers for risk stratification. For example, NMIBC, which account for 70-80% of human UCC cases, are frequently associated with activating mutations of proto-oncogenes, of which fibroblast growth factor receptor 3 (FGFR3) and HRAS are the most prevalent. These mutations are present in up to 75% and 30% of papillary tumors, respectively, and are linked to different outcomes. In contrast, the remaining 20-30% of human UCC are constituted by MIBC, related to loss of p53, RB and PTEN and activation of EMT-TFs (Epithelial Mesenchymal Transition-Transcription Factors). In this subset, proliferation seems to be mediated by E2F3 (controlled by miR-125b), and EMT is regulated by the miR-200 family [1,33].
All these mutations result in genomic instability and an anti-apoptotic phenotype, which enables tumor progression through the accumulation of mutations. Both the high recurrence rates and the tumor heterogeneity of bladder cancer have been related to bladder cancer stem cells [34]. Attempts to isolate BCSCs based on expression of the basal cell surface marker CD44 (despite its close association with tumor progression and aggressiveness) showed substantial variation among basal tumor subtypes and have been largely unsuccessful in non-muscle-invasive tumors [35-37]. Trying to link grade and stage with stem-like properties, Brandt et al. proposed that BCSCs of low-grade papillary/noninvasive and high-grade flat/invasive UCC have different origins associated with distinct genetic backgrounds. Their gene-profiling study showed that noninvasive urothelial carcinomas predominantly express mRNA encoding markers of differentiated urothelial cells, among others the superficial/umbrella cell marker uroplakin 2 and the cell adhesion proteins LAMB3 and ITGB4 [34]. Teixeira et al. [38] showed that distinct cell subsets of muscle-invasive BC can show molecular features of stem-like cells with an aggressive phenotype, enhanced chemoresistance and tumor-initiating ability. In their report [38], they used distinct stem cell-related markers, such as embryonic transcription factors (OCT4 (POU5F1), SOX2 and NANOG), ABC transporters (PGP (ABCB1) and BCRP (ABCG2)), aldehyde dehydrogenase isoforms (ALDH1A1, ALDH2 and ALDH7A1), and basal urothelial stem cell markers (CD44, CD47 and KRT14). The authors showed a significant co-upregulation of CD44 and of basal-type KRT14. The KRT14 cytokeratin is a primitive stem cell marker, a precursor to KRT5 and KRT20 in urothelial differentiation, and has been associated with tumor recurrence and poor overall survival independently of known clinical and pathological variables [39,40]. They analyzed gene expression patterns in primary clinical samples and could identify a two-gene stem-like signature (SOX2 and ALDH2) potentially useful to identify muscle-invasive tumors that are more susceptible to progression or metastasis. SOX4 and E2F3 were not analyzed in this study. Based on this study [38], the role of CSCs as driving forces in the pathogenesis and relapse of invasive BC is reinforced by the expression of at least two stemness-related markers in muscle-invasive tumors. This supports the interest of identifying novel therapeutic approaches considering CSCs as a target population. This signature could help to prospectively identify BC patients who could benefit from a more aggressive therapeutic intervention targeting CSCs at earlier time points. Prospective confirmation of these findings will be required. Role of Amplification of 6p22 in Bladder Cancer Even though knowledge about copy number alterations in BC is limited, amplification of chromosome 6p22 is frequently described [41,42]. Using The Cancer Genome Atlas (TCGA) dataset and cBio Cancer Genomics Portal analysis, Shen et al. [31] observed amplification of chromosome 6p22 in 18% of bladder cancer patients. Some authors have described that amplification of chromosome 6p22 was significantly associated with MIBC in contrast to NMIBC [43-45]. This observation was supported by a subsequent paper by Shen et al., in which amplification of chromosome 6p22.3 was found in 22% of MIBC in contrast to 9% of NMIBC (p = 0.04) [45].
Interestingly, they also observed a much higher rate of 6p22.3 amplification in high-grade NMIBC (13%; 12/93) compared with low-grade NMIBC (2%; 1/47). Tumor depth of invasion in MIBC was also associated with 6p22.3 amplification (p = 0.12). However, they failed to show a significant association of amplification (35/181; 19.2%) with survival (log-rank p = 0.438) in the 181 MIBC patients who underwent cystectomy with curative intent. The authors hypothesize that 6p22.3 amplification might act as an early event in tumor progression. This report supports that 6p22.3 amplification, together with the standard pathological factors such as grade, depth of invasion (pT) and positive nodes (pN), is associated with a more aggressive phenotype [45]. When examining the 6p22.3 region of amplification, eight known genes (ID4, MBOAT1, E2F3, CDKAL1, SOX4, LINC00340, PRL, and HDGFL1) are present [31]. RNA-seq results showed that CDKAL1, E2F3 and SOX4 in the 6p22.3 region were highly expressed in patients with the chromosomal 6p22 amplification. E2F3 has been characterized as a potential cell proliferation effector of 6p22 amplification: knockdown of E2F3 was observed to inhibit cell proliferation in a 6p22.3-dependent manner, while knockdown of CDKAL1 and SOX4 did not affect cell proliferation [45]. "Oncogene addiction", a term first coined in 2000 by Bernard Weinstein, reveals a possible "Achilles' heel" within the cancer cell that can be exploited therapeutically. One could hypothesize that 6p22.3 could be explored as a potential "Achilles' heel" and this region of amplification as an area of "amplicon addiction" that could be modulated epigenetically. Role of 6p22 Amplification in Cell Lines Three MIBC cell lines (5637, TCC-SUP and HT1376) that contain amplification of the 6p22 region have been described [45]. E2F3a, E2F3b, CDKAL1 and SOX4 are highly expressed in the 6p22-amplified 5637 cells. In TCC-SUP and HT-1376 cells, the E2F3a and E2F3b mRNA levels were similar to those in non-6p22-amplified control cells, and amplification of 6p22 did not correlate with gene expression values. SW780 and J82 cells showed high expression of CDKAL1, and RT-112 and RT-112-D21 cells showed high expression of SOX4; however, none of them contained the amplified 6p22 region. Proliferation of the 5637 bladder cancer cell line was found to be highly dependent on the 6p22.3 amplicon, particularly on the E2F3 gene. In this cell line, cell proliferation was reduced when E2F3a or E2F3b was knocked down, in contrast to the lack of effect of SOX4 and CDKAL1 knockdown [43,44,46]. CCND1 was also downregulated in response to shE2F3a. The authors confirmed that cell proliferation induced by E2F3 is dependent on chromosomal 6p22 amplification by repeating the experiment in 253J and T24, two other cell lines without 6p22 amplification, in which knockdown of E2F3 did not inhibit cell proliferation. This supports that the role of E2F3 depends on the presence of chromosomal 6p22 amplification, perhaps through an "amplicon addiction" mechanism [45]. Role of E2F3 in Bladder Cancer In human bladder cancer, amplification of the E2F3 gene, located at 6p22, is associated with overexpression of its mRNA and high expression of the E2F3 protein. This overexpression is seen in over one third of primary transitional cell carcinomas and increases with tumor stage and grade.
Because of the role of E2F3 in cell cycle progression, these findings support that the E2F3 gene represents a candidate bladder cancer oncogene activated by DNA amplification and overexpression [43,44,47]. Additionally, CCND1, a key cell cycle regulator specifically of the G1-to-S transition, has been identified as a potential target of E2F3 [48]. Binding of cyclin D1 to cyclin-dependent kinases (CDKs) leads to retinoblastoma protein (pRb) phosphorylation, followed by release of E2F transcription factors, allowing G1-to-S-phase progression. Inhibition of CCND1 and CDK4/6 may potentially reverse the oncogenic effect elicited by E2F3 amplification in TCC-UB cell lines with amplified 6p22 (palbociclib testing is ongoing). In addition, increased tumor recurrence and progression in patients with NMIBC has been associated with increased E2F and Ezh2 expression in a study that provides a genetically defined model for human high-grade NMIBC. The report shows that the Rb-E2F-Ezh2 axis can promote tumor development when disrupted [49]. SOX4 SOX4 is a member of the SOX (SRY-related HMG-box) family of transcription factors involved in organogenesis of the heart, pancreas, and brain, and in T lymphocyte differentiation. SOX4 gene expression is upregulated in many cancer types, and increased SOX4 activity contributes to cellular transformation, cell survival, and metastasis [50]. The SOX family of proteins is found in all metazoans, and high expression of SOX2 has been used to recognize the presence of CSCs; this approach has been applied in some studies of urothelial carcinoma [21,32,51,52]. SOX4 in Tumors High levels of SOX4 gene expression have been reported in diverse human cancers including leukemia, colorectal cancer, lung cancer and breast cancer. It has been related both to apoptosis (leading to cell death) and to tumorigenesis, suggesting a role in the development of these malignancies [53,54] through the epithelial-to-mesenchymal transition (EMT) mechanism [53]. A systematic review and meta-analysis of SOX4 as a potential prognostic factor in human cancers was carried out, analyzing the expression status of SOX4 at the protein level in twenty kinds of human cancers (The Human Protein Atlas). The positive rate of SOX4 expression was about 78% across cancer tissues. The meta-analysis showed that SOX4 overexpression correlated with poor overall survival, with a pooled hazard ratio (HR) of 1.67 (95% CI 1.01-2.78). The study concluded that SOX4 is a potential prognostic biomarker in human cancers [55]. SOX4 in Bladder Cancer SOX4 gene expression was increased 2.2-fold in bladder tumors compared with normal tissue by immunohistochemistry and real-time PCR [56]. Immunostaining, used to confirm the presence of the protein, showed significant differences between bladder tumors and normal bladder tissue (p = 0.001). Altogether, these data suggest that the SOX4 gene may have a role in bladder cancer tumorigenesis [56]. Based on gene expression profiling, another report showed a correlation of high SOX4 expression with advanced cancer stages and poor survival, again supporting a potential role of SOX4 as a regulator of BCSC properties that may serve as a biomarker of the aggressive phenotype in bladder cancer [31]. One study has provided a contradictory result [57], potentially related to differences in the SOX4-specific antibody used: this study, conducted in 2360 clinically annotated bladder tumors using tissue microarrays, unexpectedly found a correlation (p < 0.05) between strong SOX4 expression and increased patient survival.
SOX4 and EMT BCSCs are enriched with elevated levels of genes acting in EMT [58]. EMT underlies the pathophysiology by which sessile epithelial cells lose polarity and cell-cell adhesion and transition to motile, mesenchymal stem cells. This process increases migratory and invasive potential during organismal development and is key for the progression of epithelial tumors to metastatic cancers. Activation of the EMT program in cancer cells switches CSCs from a stationary to a migratory phenotype. Cancer cells can then enter the blood circulation, extravasate, and eventually metastasize to target organs [1]. EMT may endow cancer cells with cancer stem cell properties and/or stimulate the expansion of the malignant BCSC population, giving rise to a more aggressive tumor type. It is therefore critical to explore the connection between EMT and cancer stemness to assess their potential implications for bladder cancer therapy [1]. Breast cancer studies have suggested that SOX4 induces epithelial-to-mesenchymal transition (EMT) and cooperates with the RAS oncogene in cancer progression [59]. In breast cancer, SOX4 is believed to play a critical role in the early stages of malignant progression; the same might happen in other tumor types like bladder cancer [50]. Tewari went on to identify SOX4 target genes validated by qRT-PCR in response to siSOX4, including CCND1, CDK1, FGFR1, FGFR3, MYB and MYC. The authors also found that tight junction proteins, such as CRB3, TJP1 and TJP3, were specifically upregulated in response to knockdown of SOX4. These gene expression profiling data indicate that SOX4 is involved in other signaling pathways and cellular processes that regulate cell migration and invasion, besides its known role in classic cell cycle regulation. Interestingly, their findings support that SOX4 exerts its regulatory functions upstream of the Snail, Zeb, and Twist families of transcriptional inducers of EMT, but without directly affecting their expression, as seen in several cell lines of the CCLE project [50]. They concluded that SOX4 is a master regulator of epithelial-mesenchymal transition by governing the expression of the epigenetic modifier Ezh2 [50]. As of today, the detailed mechanisms of SOX in the regulation of BCSCs are far from clear. To further update the SOX4 dependencies in different bladder cancer cell lines, Table 1 describes recent dependency data based on CERES scores from CRISPR Avana screens. Epigenetics and CSC Beyond common CD markers like CD44 and CD133, ALDH1, and SOX, EMT-related markers like the polycomb repression complex (PRC)-related pathway have a role in CSC biology [60]. PRC2 mediates the trimethylation of histone H3 at lysine 27, a hallmark of gene silencing and facultative heterochromatin formation, and its dysregulation is linked to human diseases. PRC2 consists of four core subunits, EZH2, Eed, SUZ12 and Rbbp4. Among these, EZH2 is the catalytic subunit, which requires Eed and SUZ12 for catalysis [61]. EZH2 plays a role in tumor invasiveness, colony formation and migration and is related to the expression of CSC-related genes (CD44, KLF4, OCT4 and ABCG2). EZH2 was found to be regulated by E2F1 [62], which was previously linked to aggressiveness and prognosis of bladder cancer [62,63]. Bmi-1 (a member of PRC1) serves as a gene silencer involved in cellular senescence and cell death, and it can contribute to cancer when improperly expressed.
Bmi-1 overexpression (mostly due to gene amplification) leads to repression of the INK4A/ARF locus and consequent inactivation of Rb and p53 [64]. In contrast to the many studies on the potential involvement of Bmi-1 in the oncogenesis of various lymphomas and leukemias, there is still a lack of knowledge about its role in the pathogenesis of many solid tumors, including UCC. A recent study confirmed that overexpression of Bmi-1 protein in BC correlates with tumor classification, recurrence, TNM stage, and survival, proposing it as a possible prognostic marker. Bmi-1 protein was upregulated to a much greater extent than Bmi-1 mRNA in cancer tissue, suggesting deregulation at the post-transcriptional level [65], although some authors reported no significant Bmi-1 mRNA expression [66]. A genomics approach revealed an 11-gene signature (including BMI1) that consistently displayed a stem-cell-resembling expression profile in distant metastatic lesions of different cancers, including bladder cancer [67]. Other data suggest that Bmi-1 overexpression is probably not a primary event in the genetics of BC but is involved in the progression of the tumor [68]. Other members of the Polycomb family have also been shown to correlate with disease development, presenting novel potential targets for therapy; for example, CBX7 expression inversely correlated with tumor stage and grade progression in BC, and EZH2 expression showed a significant increase in UCC specimens and bladder cancer cell lines [69,70]. Interestingly, Polycomb group proteins are commonly abnormally overexpressed years prior to cancer pathology, making early targeted therapy an option to reverse tumor formation [4,71]. Chromatin-modifying genes are frequently mutated in bladder cancer. This chromatin dysregulation might be responsible for epigenetic modification through enhancers or super-enhancers in different areas of the genome. The chromosome 6 amplicon that contains SOX4 and E2F3, frequently amplified in bladder cancer, might be epigenetically regulated and might be a potential target for therapy. Epigenetics and SOX4 Genetic and epigenetic alterations have been linked to transitional cell carcinoma of the urinary bladder (TCC). The transcription factor SOX4 has been identified as a master regulator of EMT. It controls a number of EMT-relevant genes in addition to EZH2, a critical SOX4 target gene during EMT. There seems to be interplay between transcriptional and epigenetic control during EMT, which suggests that inhibition of EZH2 could be an attractive avenue for therapeutic intervention against progression [50]. Of note, in early-stage lymph-node-negative breast cancer, concomitant high expression of Ezh2 and SOX4 significantly correlated with poor metastasis-free survival [50]. There seems to be a complex gene regulatory network driving the increased expression of SOX4 during the early phases of TGFβ-induced EMT. These findings are indeed hypothesis-generating and warrant further investigation to unravel the action of transcription factors and epigenetic regulators in driving the transcriptional reprogramming underlying progression to subsequent stages. Therapeutic Implications. Need to Eradicate CSC in Therapy CSCs are credited with tumorigenic potential and the ability to dictate invasion and metastatic progression, as well as enhanced resistance to therapy.
Among the mechanisms underlying these properties are quiescence or slow cycling kinetics, enhanced DNA repair, and overexpression of multidrug-resistance-type membrane transporters; all of these processes can contribute to the failure of existing therapies [77,78]. Considerable evidence associates CSCs with high recurrence rates, poor survival and failure of adjuvant treatment in patients with MIBC [10,38,79]. Although metastatic bladder cancer is highly responsive to chemotherapy with cisplatin, only a small cohort of patients (10-20%) respond completely, as evidenced by eradication of BCSCs in metastases and prolonged survival [1]. The remaining cisplatin-resistant bladder cancer cells include CSC-like cells that display higher levels of Bmi1 and Nanog expression, EMT characteristics, CSC marker expression, and sphere-forming capacity, conferring on them a role in the progression and drug resistance of bladder cancer [80]. The inability of currently widely used anti-cancer therapies to kill CSCs is one of the probable reasons for their failure and for tumor relapse. Underlying CSCs remain viable as quiescent BCSCs in patients with metastatic disease. This tumor dormancy is a key limiting factor in the treatment of metastatic disease, and understanding the mechanisms behind stem cell proliferation and differentiation could lead to the development of new anti-cancer strategies. Such strategies, alone or in combination, could successfully target the BCSC population by inhibiting the maintenance of the stem cell state as well as by killing the bulk tumor cell population [1]. Few targeted therapies have shown promising results in bladder cancer [81,82]. In contrast, immunotherapy (e.g., immune checkpoint blockade) has provided good objective responses and prolonged survival [83-85]. Experimental monoclonal antibodies against some surface markers (such as 67LR and CD47) have already given promising results in human xenografts and in vitro studies. As an example, CD47 is highly expressed on UCC and functions as a ligand of the SIRPα inhibitory molecule expressed on phagocytes; blockade of CD47 by a monoclonal antibody resulted in efficient and specific macrophage engulfment of bladder cancer cells in vitro [4]. Inhibition of Ezh2 function could also be an interesting alternative for therapeutic intervention during tumor progression, as shown by promising early results with the Ezh2 inhibitor 3-deazaneplanocin (DZNep) [86,87]; results from clinical trials are awaited [50]. Another potential agent, honokiol, a biologically active biphenolic compound isolated from Magnolia officinalis, has been shown to inhibit cancer cell proliferation, survival, cancer stemness, migration and invasion through the downregulation of EZH2 expression, along with reductions in the expression of matrix metalloproteinase 9, CD44 and SOX2 and the induction of the tumor suppressor miR-143, in bladder cancer [52]. Similarly, CD274 (PD-L1), another relevant CSC marker, showed useful results in a study in cholangiocarcinoma based on in vivo and in vitro experiments [88]; the results provided direct evidence of the participation of PD-L1 in the activity of CSCs. More research into the mechanisms through which PD-L1 affects BCSCs is required to illustrate how CSCs participate in these checkpoint immunomodulatory actions [19]. The fact that PD-L1 is a CSC marker may help to explain the long-lasting responses seen with immunotherapy.
Conclusions There is now evidence that E2F3 in 6p22-amplified bladder cancer is a potential oncogene of critical importance. The 6p22.3 amplicon itself might also be a potential target for therapeutic intervention. Future studies are needed to determine the interactions of E2F3 with other genes in the 6p22 amplicon, such as SOX4. In addition, other potential oncogenes, such as CCND1, not in the 6p22 amplicon yet frequently altered in bladder cancer and dependent on E2F3, could be secondarily impacted. Importantly, the critical biological role of SOX4 in EMT has raised the question of whether SOX4 contributes to malignant tumor progression and metastasis. The fact that the 6p22.3 amplicon is frequently amplified in MIBC and contains the SOX4 transcription factor suggests that SOX4 might be the underlying force of progression from high-grade NMIBC to MIBC. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflicts of interest.
6,671
2018-08-02T00:00:00.000
[ "Medicine", "Biology" ]
Preliminary analysis of two NAC transcription factor expression patterns in Larix olgensis The NAC transcription factor family is plant-specific with various biological functions. However, there are few studies on NAC genes in coniferous species. Bioinformatics research and expression analysis of NAC genes in Larix olgensis can be used to analyse the function of the NAC genes in the future. Screening of excellent genetic material and molecular breeding have been utilized to cultivate high-quality, stress-resistant larches. Based on transcriptome data for L. olgensis, the genes Unigene81490 and Unigene70699, with complete ORFs (open reading frames), were obtained by conserved-domain analysis and named LoNAC1 and LoNAC2, respectively. The cDNAs of LoNAC1 and LoNAC2 were 1971 bp and 1095 bp in length, encoding 656 and 364 amino acids, respectively. The molecular weights of the proteins encoded by the two genes were predicted to be 72.61 kDa and 41.13 kDa, and subcellular localization analysis indicated that the proteins are concentrated in the nucleus. The results of real-time quantitative PCR analysis showed that, at different growth stages and in different tissues of L. olgensis, the relative expression levels of the two NAC genes were highest in the stem, and the expression differences were more obvious in non-lignified tissues. After drought, salt and alkali stress and hormone treatment, expression was induced to different degrees. The expression levels of LoNAC1 and LoNAC2 in semi-lignified L. olgensis were higher than in the other two periods (non-lignified and lignified), and expression levels significantly increased under drought and salt stress. Relative expression levels also changed under hormone treatment. It is speculated that these two genes may not only be related to drought and salt stress and secondary growth but may also be induced by hormones such as abscisic acid. Overall, LoNAC1 and LoNAC2 are genetic materials that can be used for the molecular breeding of larch. Introduction The NAC transcription factor family is plant-specific and widely distributed in terrestrial plants. It is also regarded as one of the gene families with the most transcription factors (Riechmann et al. 2000; Olsen et al. 2005). More than 8000 transcription factors of the NAC family have been found in plants, 151 of them in Arabidopsis thaliana (L.) Heynh. alone (de Oliveira et al. 2011). These studies found that the N-terminus of the NAC protein has a highly conserved domain, while the C-terminus has a transcription activation domain and presents diversity, which is an important recognition-related feature
of the NAC protein structure (Aida et al. 1997). There are approximately 150 amino acids in the N-terminal domain of the NAC protein, which can be divided into five subdomains (Chen et al. 2019). The C-terminus has a simple amino acid sequence with high repetition and contains more Ser, Thr, and Glu and some acidic amino acid residues than the N-terminal domain (Olsen et al. 2005). By aligning the NAC protein sequences of Arabidopsis, researchers have shown that some common sequences can be found even at the C-terminus (Ooka et al. 2003). In the NAC gene family, the Petunia NAM gene was the first found to be related to plant morphogenesis (Souer et al. 1996). Researchers found that in Arabidopsis, the key factor regulating secondary wall thickening of xylem fibre cells is the specific expression of the NAC family NST1 and NST3/SND1 genes (Mitsuda et al. 2007). Huang et al. (2015) found that the genes NAC29 and NAC31 in rice are related to the regulation of cellulose synthesis. He et al. (2005) reported that the Arabidopsis transcription factor AtNAC2 was highly expressed in roots under high-salt conditions, that the lateral roots of plants overexpressing this gene were well developed, and that changes in auxin and ethylene were observed. Arabidopsis ANAC096 can help plants survive dehydration-related osmotic stress and is related to ABA-induced genes (Xu et al. 2013). Liu et al. (2018) noted that TsNAC1 can target an important proton transporter to improve salt tolerance. Researchers have studied the function of the NAC gene family, but most of these studies have focused on plants such as Arabidopsis, tobacco, and rice; the function of NAC family genes in coniferous species remains to be explored and verified. Larix olgensis A. Henry, belonging to the larch genus of the pine family, is a fast-growing timber species as well as a species for soil and water conservation in China (Zhang 2012). With advances in science and technology enabling the genetic improvement of larch, molecular breeding has been combined with traditional breeding to improve the efficiency of genetic improvement and to accelerate the improvement process (Levee et al. 1997). In this study, two full-length NAC genes of L. olgensis were used, and their functions were initially estimated from the expression levels of the genes in different tissues and under drought and saline-alkali conditions and hormone treatment. This will provide the basis for verification of the NAC genes in subsequent experiments and for the screening of genetic material for genetic engineering-based breeding. Prediction of gene sequence structure and function Through the National Center for Biotechnology Information (NCBI) online tool Blastx, more than 10 candidate sequences of NAC genes obtained in the laboratory were compared, and the structural domains of the sequences were predicted and analysed with the online tool CD-search (Marchler-Bauer et al. 2015). Unigene81490 (LoNAC1) and Unigene70699 (LoNAC2) were selected as having the NAC family conserved structural domain and a complete open reading frame. MEGA5.0 software (Tamura et al. 2011) was used to construct the neighbour-joining tree. Amino acid sequences corresponding to full-length genes in Blastx and to several full-length genes with the highest similarity in the evolutionary tree were combined and used to perform multiple sequence comparisons in BioEdit software.
Protparam (https://web.expasy.org/cgi-bin/protparam/protparam) was used to predict and analyse the physicochemical properties of the proteins encoded by the full-length NAC genes. GOR4 (https://npsa-prabi.ibcp.fr/cgi-bin/npsa_automat.pl?page=npsa_gor4.html) was used to predict the secondary structure, and WoLF PSORT (https://wolfpsort.hgc.jp/) and SwissModel (https://www.swissmodel.expasy.org/) were used to predict the subcellular localization and three-dimensional structure of the proteins, respectively. Analysis of LoNAC1 and LoNAC2 expression in L. olgensis Total RNA was extracted using a Universal Plant Total RNA Extraction Kit (BIOTEKE, Beijing, China), and cDNA obtained by reverse transcription of the total RNA was used as the template (ReverseScript RT reagent Kit, TaKaRa). Primer Premier 5.0 software was used to design the qRT-PCR primers (Table 1), and primer specificity was screened by gel electrophoresis. LoTublin was selected as the reference gene, amplification was performed according to the Real Master Mix (SYBR Green) kit instructions, and an ABI 7500 real-time PCR instrument was used. Three replicates were set up in the quantitative PCR instrument, and the results were analysed by the 2^-ΔΔCt method, with Relative Expression = 2^-ΔΔCt, where ΔCt = Ct_gene - Ct_LoTublin and ΔΔCt = ΔCt_sample - ΔCt_control (Pfaffl 2001). Identification of LoNAC1 and LoNAC2 The NCBI online tool Blastx was used to compare the genes from the L. olgensis transcriptome database obtained in the laboratory. Two genes, Unigene81490 and Unigene70699, with the NAC family-specific conserved domain (NAM), were obtained and named LoNAC1 and LoNAC2, respectively. The cDNAs were 1974 bp and 1098 bp in length, encoding 657 and 365 amino acids, respectively (Fig. 1). Predicted physicochemical properties of LoNAC1 and LoNAC2 proteins According to the prediction of physical and chemical properties, the theoretical molecular weights of the LoNAC1 and LoNAC2 proteins are 72.61 kDa and 41.13 kDa (1 kDa = 1000 Da = 1000 g/mol), respectively, and the predicted isoelectric points are 4.99 and 4.79, respectively. The LoNAC1 and LoNAC2 proteins contain 63 and 40 positively charged amino acids, respectively, and 91 and 61 negatively charged amino acids, respectively. The instability index of the LoNAC1 protein is 38.13 and that of LoNAC2 is 50.12, and the hydropathicity values of the two sequences are -0.460 and -0.667, respectively. Subcellular localization analysis predicted that the proteins are concentrated in the nucleus. Prediction of the secondary structures of the two proteins with the GOR4 webpage (Fig. 2) showed that they also have certain similarities in secondary structure: the LoNAC1 and LoNAC2 proteins are mainly composed of random coils, containing 21.31% and 26.58% α-helices, 18.57% and 16.44% extended chains, and 60.12% and 56.99% random coils, respectively, and lacking β-turns. SwissModel homology modelling was used to predict the tertiary structures of the two proteins. As shown in Table 2 and Fig. 3, the N-termini of the LoNAC1 and LoNAC2 proteins have a highly conserved domain (NAM), an important recognition-related feature of the NAC protein structure, and the predicted tertiary structures of the two proteins are very similar.
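The relative-expression formula quoted above translates directly into code; a minimal sketch follows, in which the triplicate Ct values are hypothetical placeholders rather than measured data.

```python
import numpy as np

def relative_expression(ct_gene_sample, ct_ref_sample,
                        ct_gene_control, ct_ref_control):
    """Livak/Pfaffl 2^-ddCt relative expression.

    dCt  = Ct(gene) - Ct(reference gene, here LoTublin)
    ddCt = dCt(sample) - dCt(control)
    """
    d_ct_sample = ct_gene_sample - ct_ref_sample
    d_ct_control = ct_gene_control - ct_ref_control
    return 2.0 ** -(d_ct_sample - d_ct_control)

# Hypothetical triplicate Ct values for one treated sample vs. a control:
ct = dict(gene_s=np.array([24.1, 24.3, 24.2]), ref_s=np.array([18.0, 18.1, 17.9]),
          gene_c=np.array([26.5, 26.4, 26.6]), ref_c=np.array([18.1, 18.0, 18.2]))
fold = relative_expression(ct["gene_s"].mean(), ct["ref_s"].mean(),
                           ct["gene_c"].mean(), ct["ref_c"].mean())
print(f"relative expression = {fold:.2f}-fold vs. control")
```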
Sequence alignment and evolutionary tree analysis of LoNAC1 and LoNAC2 proteins LoNAC1 and LoNAC2 were translated to obtain the amino acid sequences, and the sequences were compared with those of NAC family proteins of Arabidopsis thaliana using MEGA5.0 software to construct a phylogenetic tree (Fig. 4). Figure 4 shows that LoNAC1 and LoNAC2 of L. olgensis and AtNAC053, AtNAC078, AtNAC082, and AtNAC103 of Arabidopsis thaliana cluster on the same branch. It is speculated that these genes are relatively close in evolutionary kinship. Using BioEdit software, LoNAC1 and LoNAC2 were aligned with the Arabidopsis NAC members AtNAC053, AtNAC078, AtNAC082, and AtNAC103 in a multiple sequence alignment (Fig. 5). There is a highly conserved domain at the N-terminus of the amino acid sequence, and this domain can be divided into multiple subdomains; the C-terminus shows diversity and is highly variable. Analysis of the tissue expression patterns of LoNAC1 and LoNAC2 Seedlings at different growth stages, i.e., non-lignified (approximately 60 days), semi-lignified (approximately 120 days) and lignified (180 days) (Fig. 6), stored at -80 °C, were used to extract plant RNA, which was reverse transcribed into the corresponding cDNA. Gene expression in different tissues (roots, stems, needles) was measured at the different growth stages by qRT-PCR (Fig. 7). In the needles of L. olgensis, both genes had their highest relative expression levels during the semi-lignification period, and the expression levels at the different stages were in the order semi-lignified > lignified > non-lignified. In the stems, the relative expression of the genes was also highest during the semi-lignification period, with levels in the order semi-lignified > non-lignified > lignified. In the roots, the relative expression level was again highest in the semi-lignification stage, with levels in the order semi-lignified > lignified > non-lignified, similar to the needles. In all three tissues, the two genes had their highest relative expression levels during the semi-lignification period, and the relative expression levels in the roots differed significantly among the three growth stages (Fig. 7). Expression analysis of LoNAC1 and LoNAC2 under abiotic stress Under drought stress, the expression levels of LoNAC1 and LoNAC2 were upregulated at all five time points (Fig. 8). The relative gene expression levels in L. olgensis seedlings reached their highest values at 96 h, increasing 12.7-fold (LoNAC1) and 12.8-fold (LoNAC2), respectively. Under salt stress, LoNAC1 was downregulated at 12 h and 24 h, while LoNAC2 was downregulated at 12 h; at the other time points, both genes were upregulated, and the relative expression level peaked at 96 h, at approximately 10 times the pre-treatment level. Under alkali stress, the relative expression levels of LoNAC1 at 12 h and 24 h and of LoNAC2 at 24 h and 48 h were lower than in the controls, and expression was inhibited; expression was induced at the remaining time points, with the highest relative expression 96 h after treatment, reaching 2-3 times the control value. Analysis of the expression patterns of LoNAC1 and LoNAC2 under hormone induction qRT-PCR was used to measure gene expression in L. olgensis treated with six hormones (Fig. 9). Bioinformatics analysis of LoNAC1 and LoNAC2 Bioinformatics analysis showed that the full-length sequences of both the LoNAC1 and LoNAC2 genes contain the special NAM domain.
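The stability and hydrophilicity conclusions drawn next rest on ProtParam-style descriptors (instability index, hydropathicity), which can be reproduced locally, for example with Biopython's ProtParam module. A minimal sketch follows; the sequence shown is a hypothetical placeholder, not the actual LoNAC1 or LoNAC2 translation.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis

# Hypothetical placeholder peptide; substitute the real LoNAC1/LoNAC2
# translations to reproduce the values reported in this paper.
seq = "MGSSHHHHHHSSGLVPRGSHMDELVQALEPFIK"

pa = ProteinAnalysis(seq)
print(f"MW: {pa.molecular_weight() / 1000:.2f} kDa")
print(f"pI: {pa.isoelectric_point():.2f}")
# Instability index > 40 => predicted unstable (the criterion used below)
print(f"instability index: {pa.instability_index():.2f}")
# Negative GRAVY => hydrophilic on average
print(f"GRAVY: {pa.gravy():.3f}")
```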
The instability index of LoNAC1 is less than 40 and that of LoNAC2 is greater than 40; i.e., the LoNAC1 protein is predicted to be stable and the LoNAC2 protein unstable. It is speculated that the LoNAC1 protein may be present for a long duration in L. olgensis, while the LoNAC2 protein may appear only at certain stages. Both proteins have negative hydropathicity values and are presumed to be hydrophilic. The evolutionary tree and homology analysis of the amino acid sequences encoded by Arabidopsis NAC family genes showed that LoNAC1 and LoNAC2 cluster on the same branch with AtNAC053, AtNAC078, AtNAC082, and AtNAC103, and it is speculated that these genes are close in evolutionary kinship; their structures and functions may be similar. Under proteotoxic stress, AtNAC053 and AtNAC078 work together to activate the expression of many factors so that the plant produces sufficient protein homeostasis machinery, such as the 26S proteasome, to regulate the proteotoxic stress response; the AtNAC053 and AtNAC078 proteins act as central regulators in this process (Gladman et al. 2016). By contrast, the induction of AtNAC103 expression in Arabidopsis depends on bZIP60, which participates in the growth and development of Arabidopsis and in the endoplasmic reticulum stress response (Sun et al. 2018). It is believed that the functions of LoNAC1 and LoNAC2 are similar; in the phylogenetic tree analysis, the affinity of LoNAC1 and LoNAC2 reached 97%, and these two genes may cooperate in some regulatory mechanisms. Because gene functions differ between plants and are influenced by evolutionary distance, this conclusion requires further study. Analysis of LoNAC1 and LoNAC2 qRT-PCR results Abiotic stress restricts the growth and development of plants, leading to reductions in the yield and quality of agricultural and forestry crops. Research indicates that environmental stresses have caused almost half of crop losses globally. Among environmental stress factors, drought is considered the most important factor restricting the development of global agriculture (Boyer 1982). As soil salinity becomes an increasingly serious problem, saline-alkali stress has also emerged as one of the factors that restrict crop growth. Plant hormones are a class of substances that regulate growth and development and are related to processes of plant environmental adaptation. They regulate, both independently and cooperatively, seed maturation, dormancy and germination, vegetative and reproductive growth, and plant adaptation to abiotic and biotic stresses. Previous studies have found that NAC genes participate in the stress responses of plants (Lu et al. 2007; Mao et al. 2014) and in secondary growth, regulate plant cell and tissue death (Tran et al. 2009; Ma et al. 2018), and are related to hormone synthesis and regulatory networks (Fujita et al. 2004; Gao et al. 2010; Mao et al. 2017). NAC is a family of transcription factors with various biological functions discovered in recent research. According to the analysis of the qRT-PCR results, during the non-lignification period the distribution of LoNAC1 and LoNAC2 among different tissues varied greatly, whereas there was little difference in the lignification period; the two genes had their highest relative expression levels in the semi-lignification period, indicating that they may participate in the secondary growth of L. olgensis.
The evolutionary tree and homology analysis of the amino acid sequences encoded by Arabidopsis NAC family genes showed that LoNAC1 and LoNAC2 cluster on the same branch as AtNAC053, AtNAC078, AtNAC082, and AtNAC103, so it is speculated that these genes are close in evolutionary kinship and that their structures and functions may be similar. Under proteotoxic stress, AtNAC053 and AtNAC078 work together to activate the expression of many factors so that the plant produces sufficient protein-homeostasis machinery, such as the 26S proteasome, to regulate the proteotoxic stress response; the AtNAC053 and AtNAC078 proteins act as central regulators in this process (Gladman et al. 2016). The induction of AtNAC103 expression in Arabidopsis, by contrast, depends on bZIP60, and the gene participates in the growth and development of Arabidopsis and in the endoplasmic reticulum stress response (Sun et al. 2018). The functions of LoNAC1 and LoNAC2 are believed to be similar: in the phylogenetic analysis, the similarity of LoNAC1 and LoNAC2 reached 97%, and the two genes may cooperate in some regulatory mechanisms. Because gene functions differ between plants and are influenced by evolutionary distance, this conclusion requires further study.

Analysis of LoNAC1 and LoNAC2 qRT-PCR results

Abiotic stress restricts the growth and development of plants, leading to reductions in the yield and quality of agricultural and forestry crops. Research indicates that environmental stresses cause almost half of all crop losses globally. Among environmental stress factors, drought is considered the most important factor restricting the development of global agriculture (Boyer 1982). As soil salinity becomes an increasingly serious problem, saline-alkali stress has also emerged as one of the factors restricting crop growth. Plant hormones are a class of substances that regulate growth and development and are involved in plant environmental adaptation. They regulate, both independently and cooperatively, seed maturation, dormancy and germination, vegetative and reproductive growth, and plant adaptation to abiotic and biotic stresses. Previous studies have found that NAC genes, a family of transcription factors with various biological functions, participate in the stress responses of plants (Lu et al. 2007; Mao et al. 2014) and in secondary growth, and also regulate plant cell and tissue death (Tran et al. 2009; Ma et al. 2018), processes that are related to hormone synthesis and regulatory networks (Fujita et al. 2004; Gao et al. 2010; Mao et al. 2017). According to the qRT-PCR results, during the non-lignification period the distribution of LoNAC1 and LoNAC2 across tissues varied greatly, whereas in the lignification period there was little difference; the two genes had their highest relative expression levels in the semi-lignification period, indicating that they may participate in the secondary growth of L. olgensis.

Under drought and salt stress, the relative expression levels of LoNAC1 and LoNAC2 changed markedly, while under alkaline stress the changes were relatively small. These results indicate that LoNAC1 and LoNAC2 respond to drought and salt stress. Under hormone treatment, both LoNAC1 and LoNAC2 were induced to different degrees, and the relative expression levels in the 2,4-D, ABA and GA3 treatments were significantly different, suggesting that the expression of the two genes is highly correlated with these three hormones. 2,4-D is a representative synthetic plant hormone, an auxin analogue widely used as a growth regulator for some crops (Hu et al. 2019). Studies have shown that 2,4-D can delay senescence in citrus plants: the levels of many endogenous hormones changed, defence-related genes and proteins were upregulated, improving defence capability under adversity, and some NAC family genes were overexpressed (Ma et al. 2014). ABA plays a vital role in stress responses and regulates various developmental processes such as seed maturation and dormancy, organ shedding, and leaf senescence (Erik and Stokstad 2010). There are also ABA-dependent regulatory pathways that respond to abiotic stresses such as drought, high salinity, and cold (Yamaguchi-Shinozaki and Shinozaki 2005). Several NAC genes have been linked to the pathways responding to 2,4-D and ABA: for example, in 2,4-D-treated citrus plants some NAC family genes were upregulated, improving defence capability under adversity (Hu et al. 2019), and OsNAC52 may respond to ABA to increase drought tolerance in transgenic plants (Gao et al. 2010). In this study, analysis of the qRT-PCR results shows that the expression of the two NAC genes was significantly upregulated under drought and salt stress and that the genes were induced by treatment with the hormones 2,4-D and ABA; they may therefore be connected to related regulatory networks in L. olgensis. The promoters of the two genes are currently unknown; they will be identified by RACE technology in the future, and promoter sequence analysis will be completed to establish the regulatory network of these two genes.

Conclusion

LoNAC1 and LoNAC2 have conserved NAM domains, and their functions are believed to be similar to those of other NAC family genes. Analysis of the qRT-PCR results shows that the two genes were induced by 2,4-D, ABA, and GA3, participated in the secondary growth of L. olgensis, and responded to drought and salt stress. These genes can serve as promising genetic material for molecular breeding.

Author Contributions QC and LZ conceived and designed the study. QC, PA and SZ performed the experiments. QC wrote the paper. JW, HZ and LZ reviewed and edited the manuscript. All authors read and approved the manuscript.
Agri-Food Sector Potential in the Chosen CIS Countries

Since the early 1990s, the chosen CIS countries (Armenia, Azerbaijan, Belarus, Georgia, Kazakhstan, Moldova, Russia and Ukraine) have undergone a transition from a centrally planned to a market-oriented economy. Several countries from the group have sizeable agricultural sectors, and countries like Russia and Ukraine play an important role on the international markets. The untapped agricultural potential of these countries is the subject of this paper. The main goal of this paper is to improve the understanding of agri-food sector performance in the chosen CIS countries. The paper provides a content analysis of country reports and a cross-country SWOT analysis of agri-food sector performance and development potential, supplemented with expert evaluation.

Introduction

In a world where long-term food security is an issue, it is important to evaluate the untapped potential for food, feed and biomass production. Several countries of the Commonwealth of Independent States (CIS) have sizeable agricultural sectors. Yet, although countries like Russia and Ukraine play an important role on the international markets, much of the region's agricultural potential remains untapped. The objectives of this paper are:

1. To investigate the main strengths and weaknesses of the agri-food sector on the basis of a content analysis of the country studies.
2. To emphasize the main opportunities and threats to agri-food sector development in the above-mentioned countries.
3. To identify the main and specific problems in agri-food sector development in the analysed countries.
4. To analyse the expected potential growth directions of the agricultural and food sector based on expert evaluation.

Research methods

This paper presents the results of a study performed under the EU 7th Framework Programme AGRICISTRADE project. The study combines methods of qualitative and quantitative analysis to give a more comprehensive understanding of the situation of the agricultural and food sectors in the eight analysed countries. The quantitative analysis justifies the comparison of the state of affairs between the countries. The impact of the recent events in Ukraine, which had affected the development of the main indicators, was excluded from the study. In order to cover data gaps and obtain useful insights into agricultural and food sector development directions and potentials, a qualitative analysis was conducted by chosen experts from every country; the experts represented the leading institutions of each country specialised in economic research on agriculture. This allowed us to highlight the main strengths, weaknesses, opportunities and threats (SWOT) of the agricultural and food sectors in the analysed countries. The experts were then asked to indicate the relevance of particular factors for their country, marked on three levels: "low", "intermediate" and "high". In addition, all experts were asked to emphasize the main problems faced by the agricultural and food sector. Cross-country analyses reveal the similarities and differences between the main problems and bottlenecks in the analysed countries. The valuation method applied by the experts gave insight into the development potential of the sectors and specified the measures the experts proposed to improve the situation.
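To make the expert-rating step concrete, the following sketch (in Python, with pandas) maps the three-level relevance marks onto numeric scores for a crude cross-country comparison; the factors and marks shown are hypothetical illustrations, not the study's actual Table 1.

    import pandas as pd

    # Hypothetical relevance marks for three strengths in three countries.
    marks = pd.DataFrame(
        {"Russia":  ["+++", "++", "+"],
         "Ukraine": ["+++", "+++", "++"],
         "Belarus": ["+",   "++", "+++"]},
        index=["agroclimatic conditions", "low input costs", "state support"],
    )
    # Map "+" / "++" / "+++" (low / intermediate / high) to 1 / 2 / 3.
    scores = marks.apply(lambda col: col.map({"+": 1, "++": 2, "+++": 3}))
    print(scores.mean())  # per-country average relevance of the listed strengths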
SWOT analysis of the agricultural and food sector in the CIS

The main strength of the agricultural and food sectors of all the analysed countries, except Belarus, is good agroclimatic conditions (Table 1; relevance of a factor for a country is marked "+" low, "++" intermediate, "+++" high; source: synthesis of country reports (AGRICISTRADE, 2015) and expert valuation). Russia covers a large area and occupies regions with different agroclimatic conditions. However, according to the current direction of its national policy, the country has sufficiently good conditions to ensure national food self-sufficiency and to improve exports of niche products.

One of the main strengths of the aforementioned countries is relatively low input costs, securing higher competitiveness. It should be noted that the origin of this situation differs from country to country and is of natural or artificial nature. As a rule, labour costs in the agricultural and food sectors of the analysed countries are lower than in countries with well-functioning market economies, and the comparison of labour costs between countries shows significant variation in salaries as well. The attractiveness of other inputs may be determined by state support or special conditions in the domestic market. Some countries benefit from lower fuel and energy prices because they are net exporters of primary energy products (e.g., Russia and Azerbaijan), import them at prices lower than the world average (e.g., Belarus) or apply state interventions (for example, fuel subsidies). Countries often benefit from developed domestic agriculture-related industries (for example, Russia and Belarus have local fertilizer and machinery industries).

The strong decrease in agricultural production and the early stage of formation of new trade networks after the collapse of the USSR left considerable potential for agricultural and food sector development. All countries except Belarus, Moldova and Ukraine have more or less significant potential to increase the area of land used for agricultural purposes. The development potential of Belarus is limited by relatively low land productivity, unfavourable climate conditions and the exclusion of certain areas from agricultural use after the Chernobyl accident in 1986. Moldova and Ukraine have a high share of agricultural land in their land structure and face the challenge of preserving the productive potential of the soil.

Unfortunately, the analysed countries also inherited common weaknesses, deep-seated legacies of the central planning system, which slowed the development of the agricultural and food sectors during the transition period (Table 2). Most of the typical weaknesses could be overcome by implementing a sustainable policy of agricultural and food sector development. However, the situation is complicated by budget constraints and the lack of appropriate funding from other sources, which makes significant progress, shifting the agricultural and food sector from its factual to its potential state, unlikely. Low productivity, the lack of qualified labour and the drawbacks of national agricultural education and research organizations are the most acute problems that must be solved. The majority of the countries stress the importance of renewing machinery and equipment as well as infrastructural and institutional development issues.

The most important and commonly repeated opportunities and threats for the national agricultural and food sectors in the eight countries are summarized in Table 3. The main opportunities for the sectors' development are untapped domestic markets and demand for niche products on the world market.
Positive changes in the investment environment and in yields (productivity) are anticipated to reduce the gap between the actual and potential possibilities of the agricultural and food sector. It should be noted that market liberalization is perceived as a threat to the national agricultural and food sectors; the deeper penetration of foreign producers into domestic markets is named as a challenge. Experts also stressed the importance of social and environmental issues for the future development of the national sectors. The most visible social challenge is the aging of the population in rural areas due to the high level of migration. This causes a shortage of labour and changes in the national structure of agriculture. Some countries are also faced with soil degradation issues, which could significantly reduce the potential of the national agricultural and food sector.

Main and specific problems in the agricultural and food sector of the CIS

All the country experts were asked to point out the main problems relevant to their country. The problems turn out to be interconnected across countries. Major concerns are related to two main problems. The first is obsolete infrastructure in rural areas, including outdated irrigation and drainage systems. The second is underdeveloped human capital: low technical skills, education and scientific development. These two problems were mentioned in seven out of eight country reports (a stylized sketch of such a tally is given below). The analyses also identified specific problems in the analysed countries. Armenia, Azerbaijan, Georgia and Kazakhstan are concerned about small-scale farming and land issues; in particular, these countries stressed the stalled land consolidation processes and the weak market position of small farms. Belarus, Kazakhstan, Russia and Ukraine pointed out the ineffectiveness of subsidy mechanisms for the agricultural sector. The problem of low productivity and yields is relevant for all the analysed countries but was underlined as one of the most important problems in Armenia, Azerbaijan and Ukraine. The problem of an obsolete technological base in agricultural production and processing was underlined in small countries like Armenia and Azerbaijan as well as in a large country like Russia. Growing natural risks and environmental challenges were highlighted by Belarus, Kazakhstan, Moldova and Russia. For Armenia and Belarus, a further problem is the huge concentration of their foreign trade in agri-food and forest products on the Russian market (around 80% of total agri-food and forest product exports go to Russia), stressed by experts as one of the most important problems in these countries. Such a high export concentration is very risky and creates a strong dependency on political events and macroeconomic developments in Russia. Small countries like Moldova and Armenia were concerned about the low competitiveness of the agri-food sector and the difficulty of entering new markets with low production volumes, specific technical requirements on foreign markets, etc.
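A tally of this kind can be expressed in a few lines of Python; the per-country problem lists below are hypothetical stand-ins for the coded report contents, shown only to illustrate how counts such as "seven out of eight reports" arise.

    from collections import Counter

    problems_by_country = {  # hypothetical stand-ins for the coded reports
        "Armenia":    {"obsolete infrastructure", "human capital", "small-scale farming"},
        "Azerbaijan": {"obsolete infrastructure", "human capital", "low yields"},
        "Belarus":    {"obsolete infrastructure", "human capital", "subsidy inefficiency"},
        "Georgia":    {"obsolete infrastructure", "human capital", "small-scale farming"},
        "Kazakhstan": {"obsolete infrastructure", "human capital", "subsidy inefficiency"},
        "Moldova":    {"obsolete infrastructure", "human capital", "land degradation"},
        "Russia":     {"obsolete infrastructure", "human capital", "subsidy inefficiency"},
        "Ukraine":    {"human capital", "low yields", "subsidy inefficiency"},
    }
    tally = Counter(p for probs in problems_by_country.values() for p in probs)
    for problem, n in tally.most_common():
        print(f"{problem}: mentioned in {n}/8 reports")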
Agri-food sector potential in the analysed countries

AGRICISTRADE experts were asked to identify the areas of the national agricultural and food sector with the highest development potential in the immediate future. The comparison of the most prospective areas of agriculture in the eight countries is provided in Table 4 (relevance of an area for a country is marked "+" attractive for potential development, "-" unattractive for potential development or not mentioned; source: synthesis of country reports (AGRICISTRADE, 2015) and expert valuations).

Ukraine has very good agroclimatic conditions, but the performance of its agricultural sector is relatively poor (though high compared to the other analysed countries). The lag is caused by the domination of farms with low capital intensity, which cannot afford to invest in modern machinery, high-quality seeds, plant protection products and fertilizers. Fertile soils partly compensate for these drawbacks and make Ukraine competitive in a large number of agri-food products (cereals, flour, oilseeds, vegetable and animal fats, vegetables, fruit, residues of the food industry, animal fodder, dairy products, etc.). Cheap feed and raw material supplies for the livestock sector and the processing industry are also important driving forces of agri-food market development. Some research on identifying the organic production potential has been conducted, and significant efforts have been made to develop the institutional framework of this niche. The untapped potential of the organic market could be an attractive development perspective for the country, as Ukraine's fertile soils could give it a competitive advantage. The huge debt incurred in relation to natural gas and interruptions in energy supply could encourage investment in the development of biomass production in the immediate future.

The driving force of the agricultural and food sector development potential in Russia is the state policy of import substitution. Such a policy is inefficient and costly from the economic point of view, but it should certainly increase the production of particular agri-food products (i.e., fruits, vegetables, meat and dairy products). Some positive impact on agriculture-related industries is also possible. Though the coverage of domestic market gaps in pork, vegetable and fruit production could be quick, production development in agricultural inputs, beef and dairy will take time. Significant growth in the agricultural and food sector is expected. Agricultural land restructuring and an increase in productivity, which is below the average of countries with similar agroclimatic conditions, will reduce the gap between the actual and potential possibilities of the sector. Russia will probably strengthen the production and export of grain and oilseeds, which have a natural competitive advantage. Although the organic product market is an attractive niche, the institutional environment is not favourable for the rapid development of this area. Biomass potential will not be developed, as it would exclude valuable arable land from production, endanger national self-sufficiency and raise food security issues.

The situation of Belarus is distinctive among the countries. Belarus has no potential to increase its arable area, as land is almost fully utilised. The performance of the agricultural sector is close to that recorded in neighbouring EU countries, with the exception of some products (e.g., rapeseed, wheat, maize, barley, etc.). Moderate development potential is seen in yield increases and in regulating the structure of cultivated crops. Reasonable growth in vegetables (cabbage, carrot, onion, cucumber), apples and strawberries is possible due to demand on the domestic and Russian markets.
The development potential of the livestock sector is an attractive direction. The most promising areas are dairy products and related beef production, pork and poultry production. The growing demand for organic products on the domestic market shows that this production could also be an attractive niche in the immediate future. Privileged prices for energy do not encourage investment in the further development of biomass production.

Moldova has good climate conditions and fertile soils; however, yields are below the EU average. Old equipment and machinery, labour force qualification issues and traditional production technologies result in low yields and productivity. The most attractive commodities with export potential are sunflower, walnuts, wheat and maize. Moldova has potential in wine and fruit production, which could be realized on new markets. Land degradation issues are very important due to the over-intensive and unsustainable use of land. The development of the organic product market could be an important tool for keeping land in production longer; however, this type of farming requires a well-functioning institutional environment and support. Some potential in biomass production could be realized, as the government plans to increase consumption of this type of energy.

Georgia has the lowest yields for almost all agricultural commodities. The situation could be changed with significant investments in human resources and in the technological development of the sector. Increases in yields and enlargement of the planted area may be treated as potential for growth. Grapes have the highest development potential; this product is important for the national wine industry, which has good export potential. Apples, hazelnuts and apricots were mentioned by experts as products with growth and export potential. Some progress in organic production has been made, and further development could be an attractive niche. An increase in biomass production has low potential, as this area is not supported by the government.

Yields in Kazakhstan are significantly lower than in countries with similar agroclimatic conditions. Growth could be achieved by investing in machinery, plant protection and fertilizers, and irrigation systems. Kazakhstan is a leading country in wheat and wheat flour exports. This specialization still has growth capacity, as yields are very low. Linseed, rapeseed, soya beans and grain maize have export growth potential. Kazakhstan also has export potential for beef and sheep meat. The organic market is at the initial stage of development and has no significant demand on the domestic market. Though the potential for production development is high, the current institutional environment will not encourage remarkable changes in this area. The development of the biomass sector in the country is also questionable.

Yields in Armenia still have potential for development. The absence of irrigation, poor farmers' skills, old machinery and farm plot fragmentation are the main factors influencing economic results. Armenia has a good position in the markets for tomatoes, cucumbers, peaches, apricots and grapes. The berries sector is growing, and experts note its potential in the organic farming niche. The country has made notable progress in the development of organic production; the certification system allows products to be labelled and sold as organic even on foreign markets. Fish and crustacean production is competitive due to low production costs.
The Armenian tobacco industry lacks local tobacco, which is cheaper than imported tobacco. Wine and brandy production has growth potential; however, this sector is vulnerable, as a high share of its production is exported to Russia.

Productivity and yields in Azerbaijan are low in all sectors of agriculture. Crop rotation, a shift to modern technologies and the enlargement of cultivated areas are among the most important factors for increasing the potential of the agricultural and food sector. The highest growth potential was noted for fresh fruit (apples, pomegranates, citrus, etc.), tubers (potatoes), vegetables (cucumbers, tomatoes, cabbages, gherkins, etc.) and animal products (canned meat, eggs, wool, leather, etc.). The government of Azerbaijan plans to increase the yields and cultivated areas of maize, sugar beet and industrial crops. The potential growth of the vegetable, fruit and livestock sectors will be achieved by increasing productivity. Azerbaijan does not cover local demand for meat, milk and fish, which is why the livestock sector offers good potential for growth. Some progress in the development of the organic market has been achieved; however, a well-functioning institutional environment would make this niche more attractive. The rapid development of biomass production in an energy-independent country is unlikely.
Mathematical Game Creation and Play Assists Students in Practicing Newly-Learned Challenging Concepts

Twenty-four high-performing fifth grade students (aged 10-11 years) participated in a year-long study in which conditions alternated across six instructional units between lecture-based mathematics instruction with practice through solving additional problems in small groups, and practice through designing and playing mathematics games related to the topic. Students scored similarly on all units at the time of the posttest. Creating games allowed students to examine concepts on their own, making sense of them at a deeper level and avoiding confusion. Game-making may also have made the mathematics more personal, relevant, and interesting. The authors suggest that mathematics teachers consider adding game-making to their strategies for practicing and applying mathematical concepts.

Introduction

Many educators avoid spending precious instructional time allowing students to creatively invent and play games to practice mathematical concepts because they believe that more direct instruction or student small-group work in solving and discussing given problems will lead to greater learning gains (Au, 2007). To determine if this is indeed the case, the researchers conducted a repeated measures study with upper elementary students for six relatively new mathematics topics. Students alternated between more self-directed methods that involved game-making followed by game-playing and more teacher-centered methods of additional instruction followed by small-group practice of given problems. This research design allowed the researchers to assess student mathematical performance and attitudes for each condition. The following sections briefly review the literature on self-directed learning, inquiry and learning through play, and previous use of student game-making in learning mathematics.

Self-Directed Learning and Self-Regulation

When students are provided opportunities to take control of and evaluate their own learning, they learn the valuable skill of self-regulation (cf. Bandura, 1989; Zimmerman & Schunk, 2004). The literature on self-directed learning and self-regulation supports the use of games and other playful activities in classrooms regardless of students' prior achievement levels and motivation (Oblinger, 2004). Self-regulation depends upon learners being able to set their own goals and standards for performance; therefore, students need opportunities to practice these abilities (Winne, 1995). Creating personalized games related to mathematical concepts is inherently a goal-setting and standard-building behavior. Another aspect of assisting students in becoming more self-directed is providing them opportunities to evaluate their final products (Butler & Winne, 1995), as well as imposing consequences for their behavior (Miller & Brickman, 2004). When students are given opportunities to design playful activities in the classroom as a means of learning content, they evaluate their creations as they engage in the activity. Furthermore, the rules of the game or activity provide a set of self-imposed contingencies that supply the necessary consequences and feedback for learning.

Inquiry and Learning through Play

Many studies have shown the benefit of play in educating young students (e.g., Ailwood, 2003; Isenberg, 2002; Moyer, 2014).
The benefits cited in these studies include, among others, the development of fine and gross motor skills, interpersonal communication, negotiation, stress reduction, goal seeking, cognitive development, and problem solving. Nevertheless, American school districts continue to focus on direct academic instruction for test performance, excluding most imaginative pretend play and "choice" time from kindergarten and classes for older students (Miller & Almon, 2009). Clear articulation of how the cognitive skills children develop during pretend or structured play impact future learning more than merely memorizing standardized information is crucial to keeping play from being excluded from school (Bergen, 2002). Another factor preventing the use of play in classrooms is the belief many teachers hold that play and learning are two separate, mutually exclusive concepts (Hyvonen, 2011). This might be true if play were viewed as a strictly imaginative, no-rules free-for-all. "Affording play" (play with elaboration and assessment), in which the teacher acts as facilitator, advisor, observer, and encourager, bridges the gap between play and learning (Hyvonen, 2011). The use of affording play in the form of student-made mathematical games during this study may help to define its benefits for high-achieving students.

Games in Mathematical Education

Use of games in teaching mathematics is considered a best practice (Moore, 2012) that is recognized by students as making mathematics more meaningful (Miller, 2009). Games encourage logico-mathematical thinking (Kamii & Rummelsburg, 2008), facilitate the development of mathematical knowledge while having a positive influence on the affective component of learning situations (Booker, 2000), and have a positive effect on students' interest and motivation (Bragg, 2007). For example, teachers using commercial games to increase understanding of algebra, spatial sense, and multistep problem solving found that students were highly motivated and engaged during game-playing (Lach & Sakshaug, 2005). Additionally, elementary school students in Queensland who played probability games not only exhibited enjoyment and motivation, but developed more positive attitudes toward the utility of learning about chance, with decreased mathematics anxiety related to the topic (Nisbet & Williams, 2009). Computer technology has allowed simple mathematical games to become more customized, variable, and personal. This variability has made games more effective in exposing students to more problems per day than simple worksheets allowed (Lee, 2004), in addition to providing immediate feedback and appropriate follow-on problems. Lee (2004) also found that the seven- to eight-year-old students in that study routinely increased the difficulty of their games without direct instruction to do so, suggesting that computer games motivated students to take risks in practicing mathematics concepts. Computer games produced significant gains in mathematics achievement for K-12 students more than two decades ago (Randel, Morris, Wetzel, & Whitehill, 1992). Besides playing commercial or teacher-made games, students obtain many benefits from designing their own games. The Playground Project (Noss & Hoyles, 2006) was a research endeavor in which young students interacted in an online environment across countries in building, modifying, sharing, and playing mathematical computer games.
The project directors found that even young students could learn how to modify the rules of a virtual reality environment without direct instruction; for example, students learned how to formally state rules to do things like change the color of objects, move spaceships, and produce a sound when a bat hit a ball. Many students made mathematical discoveries that were advanced for their age, such as the fact that two-dimensional motion can be decomposed into horizontal and vertical components. Engaging students in using computer tools and objects allowed them to translate standard mathematics concepts into personal knowledge constructions. Student-made games have been utilized in many contexts to assist students in developing deeper mathematical understandings. Secondary mathematics teachers derived content knowledge of the history of mathematics along with pedagogical knowledge through game creation (Huntley & Flores, 2010). In another study, university freshmen designed and played their own games to learn mathematics concepts in pre-calculus and calculus courses (Gallegos & Flores, 2010). These two studies were descriptive reports of how game-making was incorporated into innovative courses rather than controlled experiments comparing student performance with and without game-making and playing. Therefore, to better compare the effects of student-centered game-making versus more teacher-centered instruction and small-group working of given problems, a small research study was conducted with upper elementary students. The games developed by the fifth grade students in the current study were primarily card- and board-based. Students first examined and played some teacher-made games to become familiar with various game formats. Then, the teacher gave the students instruction on how to make games variable from play to play, how to design them to be easier or more difficult, and how to allow the game story or actions to develop through the experience of playing. These factors allowed the games to behave more like modern computer games, while still being simple enough for students to build in a few days with items regularly found in a classroom. Fengfeng (2008) reported that many commercially available games lack connection to the curricular goals that students need to meet in the current climate of high-stakes testing. Our students' games had the advantage of requiring the students to design each of the mathematics standards from the current topic into their play. To determine the effects of students creating and playing mathematical games related to the topic on enjoyment, motivation, perceived understanding and mathematical performance, we designed a repeated measures study. The same group of fifth grade students (aged 10-11 years) alternated between two conditions so that their performance and attitudes related to topics studied under one condition could be compared to those studied under the other. A repeated measures research design is especially effective in controlling the following potential threats to internal validity: selection of research subjects, maturation of students, loss of subjects, and regression (Creswell, 2002). The treatments were designed to be distinct (game-making followed by game-playing for practice versus additional teacher instruction followed by small-group solving and discussion of problems) and the treatment periods were fairly short (a few weeks), making this within-group research design effective.
We hypothesized that the experimental condition, in which students spent time creating games and then practiced mathematics by playing the games designed by other groups of classmates, would be more enjoyable and motivating, with better perceived student understanding of concepts and higher resulting mathematical performance scores, compared to the control condition of additional teacher instruction and solving/discussing problems in small groups. Details of the methodology are provided in the next section.

Participants

Fifth grade students (n = 24; 16 male, 8 female; 18 White, 3 Black, 3 Biracial), aged 10 or 11 years, who had shown advanced performance in mathematics and who were enrolled in a mathematics class that addressed the sixth grade mathematics curriculum at a suburban school district in the Midwestern United States participated in the study. Internal review board approval from the overseeing university, school district approval, and both student and parent or guardian written consent were obtained for all study participants.

Research Design

The study was completed over the course of an entire school year. Six main mathematics topics that the instructor judged as presenting new mathematical concepts, rather than review, were chosen for inclusion in the study and randomly assigned to the conditions. The other, more review-focused mathematics topics addressed during the school year were not included in the study. A pretest-posttest repeated measures design was used in which the same group of fifth grade students alternated between a control condition, learning the mathematics topic through small-group working of problems with discussion, and an experimental condition that utilized student-made games to practice concepts. The advantage of this repeated measures design was that comparisons were made with the same group of students and the same enthusiastic teacher, with mathematics instructional units that each lasted approximately the same number of days. Each mathematics unit began with students responding to the school district's benchmark assessment pretest. Then, the same instructor presented interactive lessons on the mathematics. During the last three days of this instruction, students experiencing the experimental condition designed their mathematical games in small groups, whereas students experiencing the control condition continued to receive interactive instruction from the teacher on the current mathematical topic. Students subsequently practiced the mathematics in one of two ways: (1) through the more conventional method of solving and discussing a given set of practice problems (control condition), or (2) through playing student-made mathematics games that addressed the topic and provided student-generated practice problems to be solved, with a correct answer key (experimental condition). After students had practiced the concepts for five class periods, they completed the school's benchmark posttest assessment. In the games condition, students created and played the games of other groups during those five class periods. In the non-games condition, those five class periods were spent with students working in small groups to practice new problems, with additional instruction from the teacher as needed or requested. The design of the study is shown in Table 1.

Instrumentation

All students took the district-provided identical pretest and posttest (district benchmark assessments) for each mathematics unit.
These tests were mostly constructed response (only a few multiple choice questions throughout the six units) and comprised 10-15 questions each (the exception was the decimal operations unit, whose test was 20 questions, over half of them multiple choice). The tests were tied to the state mathematics standards for each topic and devised by the school district. Each student responded to a quick attitude survey after completing the practice work on each unit (the group-work problems or the creation and playing of games). This survey consisted of three questions answerable by circling a number on a rating scale from 1 to 10. Students were asked to "Please circle a number below to rate: 1) your enjoyment of mathematics during the unit we just completed; 2) ... your understanding of this mathematics topic; and 3) ... how motivated you felt to learn more about the mathematics during this unit." On the scale, "1" signified "not enjoyable at all," "did not understand at all," or "not motivated at all"; and "10" signified "very enjoyable," "understood it very well," or "very motivated". Students were asked to give two reasons for each of their responses. All student responses to a question for each condition were transferred to a spreadsheet and sorted using the constant comparison method, in which similar responses were grouped into categories while simultaneously comparing all the responses to the given question. The categories were repeatedly refined as new responses were read, changing the category labels to define new relationships as needed (Dye, Schatz, Rosenberg, & Coleman, 2000).

Pretest and Posttest Scores

Pretest and posttest mean scores are shown in Table 2. Students performed just as well under each condition, as evidenced by similar posttest scores (no significant differences were found). This indicates that spending time creating and playing games related to the mathematical content being taught results in performance similar to more conventional small-group solving of mathematics practice problems.
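The comparison just described is a within-subjects one; a minimal sketch of such a paired test is shown below. The scores are hypothetical stand-ins for per-student posttest scores, since the study reports only that the differences were non-significant.

    from scipy.stats import ttest_rel

    # Hypothetical per-student posttest percentages under the two conditions.
    game_units     = [88, 91, 76, 84, 95, 80, 89, 73]
    non_game_units = [86, 93, 78, 82, 94, 81, 87, 75]

    t, p = ttest_rel(game_units, non_game_units)
    print(f"t = {t:.2f}, p = {p:.3f}")  # p > .05 would mirror the reported result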
Student Attitudes

Table 3 shows the mean attitude ratings for the game and non-game units. Overall, the scores were fairly high for all ratings, regardless of condition, reflecting student appreciation for their enthusiastic teacher, who enjoyed teaching mathematics and invested a lot of time in his instruction. Differences in mean ratings across conditions were non-significant (e.g., motivation to learn more about the mathematics of the unit: 7.9 (1.7) for game units versus 7.8 (2.0) for non-game units). Understanding was perceived similarly under both conditions and was rated higher than enjoyment or motivation, a finding congruent with the similar posttest scores for the games and non-games units. It was no surprise that enjoyment was rated lower than understanding, but it was somewhat unexpected that enjoyment was as high as it was, compared to attitudes toward mathematics in the general school population (Furner & Duffy, 2002). The students in the current study were somewhat advanced in mathematics, taking sixth grade mathematics in fifth grade. They likely felt more competent in mathematics than typical students, and this feeling of competency correlates with self-motivation (Deci & Ryan, 2010). This enjoyment indicates the relatively high levels of motivation of this advanced group, as, in general, humans seek that which brings them pleasure. In general, the reasons students gave for their ratings on the attitude surveys were brief and surprisingly similar.

Table 4 shows the mean student ratings of enjoyment for the game compared to the non-game units, along with reasons for these ratings. Fewer students mentioned liking the game-unit topics (line 4), but students in both conditions expressed that they found the unit work (game and non-game) to be fun. This finding indicates that although topics may not be perceived as interesting, creating games may transform the learning into an enjoyable activity. During the games units, students more often reported enjoying the unit because they were challenged (line 6). In both the games and non-games units, the reason students most often gave for a lack of enjoyment was boredom. Students in the non-games units also reported fairly frequently that they did not enjoy the unit because it was difficult or frustrating (line 2). This finding contrasts with the pretest scores, which showed that the game units were initially more difficult for students, indicating that making games to practice difficult concepts was more interesting and less frustrating than practicing concepts through more conventional discuss-and-solve methods.

Table 5 shows that students in the games units most often explained their understanding of the material by commenting that the material was easy or that they had significant prior knowledge (line 2). This rating is interesting when contrasted with the struggle students displayed on the unit pre-assessments and the fact that students frequently reported their final proficiency with the material. Students in the non-games units most often explained their understanding by stating that the concepts made sense and that the teacher explained ideas well (lines 3 and 4). Students in the non-games units more frequently reported confusion or difficulty of the material as the reason for less understanding of the unit; in contrast, students in the games units connected any lack of understanding to a lack of proficiency in solving problems.

Student Motivation

Table 6 shows that the most commonly given reason for high motivation during all units was new learning. High-achieving students are often motivated by gaining knowledge, and this group was no different. Again, students expressed greater liking for the mathematical topics of the non-games units, but somewhat more frequently noted the games units as being fun. The bottom part of Table 6 confirms the previous finding that students thought the material of the non-games units was more confusing and difficult, even though they scored higher on the pretests of these topics. Creating games may have allowed students to examine concepts on their own, making sense of them at a deeper level and avoiding confusion.

Student-Created Games

Students worked in small groups of two to four to create the games. The self-chosen groups varied from unit to unit and were mostly same-sex groups, though mixed groups occurred occasionally. The simplest games made by students involved rolling dice or drawing a card to fill in a portion of a number sentence that a player then had to solve to score points. Occasionally, these games pitted players against each other or against a clock in a race. These simple games appeared in each of the games units, but after the first games unit they became less frequent as students noticed other game possibilities and began to increase the complexity of their games. The second type of game included a wide array of board games.
Some of these were very basic games in which a player rolls a die, moves around a board, and encounters various obstacles and benefits on different spaces. Examples of this type of game made by students are shown in Figure 1 and Figure 2. Some of the board games included elements of game play that had nothing to do with mathematics but were personally motivating to students, as they addressed current celebrities with trivia questions, such as a game about the popular singer Taylor Swift titled "The Swift Challenge" (see Figure 3 and Figure 4). These games were quite popular and were the most commonly produced type. One reason the students liked to make this type of game was that they were able to create outlandish concepts with fun features while being able to easily incorporate mathematics review into the play. Some students took board games to a higher level of complexity. These games often included a board, but the board more resembled a map. The maps included treasure chests, one-way doors, and enemies that were visible or hidden. Players chose characters with a variety of attributes relating to hit points, strength manifested in the ability to retry missed problems, or many other attributes that the creators invented. Students engaged in these games through adventures or role-playing, with correct responses to mathematics problems required to progress to different parts of the map, to defeat enemies, or to obtain treasure and items. Groups of both sexes produced adventure games, but the themes were different: males created fighting games, while females focused more on games about popular celebrities. The mixed groups generated games like "Mustache Chase" and "Bacon Maze" (see Figure 1 and Figure 2). The object of Mustache Chase was to acquire five mustache cards by moving around a board, answering questions. The Bacon Maze game featured a labyrinth in which the player chose different paths with hazards, such as stoves or frying pans, that stopped a player who could not answer the math questions correctly. This game included the opportunity to earn bonus cards for skipping a space by answering mathematics questions at special spaces. The humor and absurdity students incorporated into games like these added to the joy of the activity. The most complex games were a series of choose-your-own-adventure type games created during the final games unit. Several groups of students were interested in making video games for their last unit, but quickly found that they would need far more time than a few days to produce something with variability and enough mathematics review to fit the project requirements. Instead, students developed a plan to put the game on a website with links that would lead from point to point with a variety of challenges embedded throughout. However, building websites also proved too time-consuming. Students settled on using PowerPoint as the foundation of their games, with hot buttons that jumped from slide to slide as a player made decisions. Students produced a football game, a fishing game, and a treasure hunting game. These games were intricate and fairly massive. "Treasure Hunter" had 45 slides and was highly enjoyed by students, with good replay value. Several slides from this game are shown in Figure 5. Students displayed creativity, attention to detail, and thoughtfulness in meshing mathematics review with fun, innovative games.
Some games were simple, some were complex, but all were valuable in helping the students in this class master difficult mathematics concepts. As students designed the games, they discussed how to make a game more challenging for the mathematics being addressed. They were therefore mentally reviewing the concepts and deciding which were more difficult (metacognition) and should be incorporated into some "challenging" questions.

Conclusion

Students in this study reached the same levels of mathematical achievement by practicing concepts through creating and playing games as through more conventional solving of practice problems. Although students found the topics of the games units more difficult initially (as evidenced by pretest scores) and less interesting (as acknowledged on the attitude survey), they reported much less frustration and confusion, along with more ease of learning and more fun, during the units in which they practiced with games. These findings indicate that allowing students the time to create and play each other's games is at least as effective as more conventional group practice and seems to have the additional benefits of less frustration and greater conceptual clarity for students.

Figure 5. Example slides from a game using PowerPoint.

Many teachers across the globe feel great pressure to prepare students for standardized tests, often resorting to direct instruction and drill in mathematics rather than more student-centered approaches. The current study shows that having students create games with clear guidelines for required content can be just as effective and may produce the additional benefit of deeper understanding. Ultimately, it may have been the elements of student choice and self-direction that made the difference. Creating one's own problems to solve, selecting how to apply the mathematical concepts, incorporating preferences for celebrities, or parodying popular games actively involved students in considering the essential aspects of the mathematical concepts and how to incorporate them into a game in their own way. When asked, students said that they liked "being able to create things" and "trying to outsmart each other with harder and harder math problems". This ownership, along with the dissection of the mathematical concepts so that they might be applied in the games, motivated students and allowed them to perceive the learning as fun and easy, whether or not they initially thought the topics were interesting. The results that emerge from this study indicate that there is certainly room in mathematics education for games, creativity, and developmental play at the upper elementary levels. In addition to students reporting enjoyment of the process of creating and playing the games, the gains they made between the pretests and posttests, when compared to the non-game units, were evident, with large effect sizes. The pretest scores for the games units were significantly lower than for the non-game units, yet students achieved at the same level on the posttests. The authors encourage mathematics teachers to use student-invented games in their instruction. Students in the current study were focused and engaged during the game-making process. They tried to generate more and more creative set-ups as the year progressed. Games evolved from simple dice, card, and coin-flip games to expansive board games, adventure games with combat (e.g.,
enemies were damaged when students were able to solve problems), and even a few "choose your own adventure" PowerPoint-based games with active links. Students looked forward to building games during each unit. This level of student engagement is hard to achieve in our schools, especially during mathematics. The authors suggest that additional studies be conducted on using student-made games with students who are less proficient in mathematics. Because a game has rules, protocols, props, or paths along a board, the structure of the game may carry some mathematical procedures that are difficult for a student to keep in working memory. The student may therefore perform at higher levels with the support of the game, allowing the student to practice and gain more mathematical proficiency. The metacognitive aspects of choosing problems for games, and the motivational impact of choosing a game theme that includes interesting characters or ideas related to favorite leisure activities, may have a positive effect on learning.
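The large pretest-to-posttest effect sizes mentioned in the conclusion can be illustrated with a paired Cohen's d; the scores below are hypothetical, not the study's data.

    import statistics

    pre  = [42, 35, 50, 38, 44, 47, 31, 40]  # hypothetical pretest scores
    post = [52, 60, 55, 68, 59, 67, 39, 58]  # hypothetical posttest scores

    # Paired Cohen's d: mean of the per-student gains over their SD.
    diffs = [b - a for a, b in zip(pre, post)]
    d = statistics.mean(diffs) / statistics.stdev(diffs)
    print(f"Cohen's d (paired) = {d:.2f}")  # values above 0.8 are read as 'large'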
Parts of Falling Objects: Galileo's Thought Experiment in Mereological Setting

This paper aims to formalize Galileo's argument (and its variations) against the Aristotelian view that the weight of free-falling bodies influences their speed. I obtain this via the application of the concepts of parthood and of mereological sum, and via recognition of a principle which is not explicitly formulated by the Italian thinker but seems to be natural and helpful in understanding the logical mechanism behind Galileo's train of thought. I also compare my reconstruction to one of those put forward by Atkinson and Peijnenburg (Stud Hist Philos Sci 35(1):115-136, 2004), and propose a formalization which is based on a principle introduced by them, which I shall call the speed is mediative principle.

The Verification of Hypotheses and Galileo's Reasoning

Confronting a scientific hypothesis which a scientist is convinced is false, and lacking suitable empirical machinery to reject it, she may resort to the power of pure thought. If h is such a hypothesis and K is the body of knowledge the scientist is working with, one of the possible pure-thought strategies is to assume that h obtains, incorporate it into the body of knowledge, and check what the consequences of K + h are. If from K + h the scientist manages to derive a false statement, then something among K + h is false (because false sentences cannot be consequences of true ones). Since h is the main suspect, the reasonable strategy is to reject it and accept the negation of h instead as an element of one's knowledge. Since Aristotle, and before Galileo, people had been convinced that if two falling bodies differ in weight, then their speeds must differ as well: the heavier one falls faster than the lighter. This view was (and still is) supported by everyday experience. It took a genius like Galileo to break through the surface of things and discover that, if this were indeed so, the ontological principle of consistency would be violated. Whether the great thinker of Pisa performed the famous experiment of dropping cannonballs from the leaning tower of his hometown is still an object of debate among historians and philosophers of science. What is undeniable is his thought experiment, in which he demonstrates the falsity of the widespread view on falling bodies. The thought experiment depicts the following situation:

(G1) Consider two falling stones, B and b, assuming that the weight of the first is larger than that of the second.

(G2) Assume that the stones are united somehow.

(G3) According to Galileo (1954): "[...] on uniting the two, the more rapid one will be partly retarded by the slower, and the slower will be somewhat hastened by the swifter." (i.e., B is retarded by b, and b is hastened by B).

(G4) "[...] if a large stone moves with a speed of, say, eight, while a smaller stone moves with a speed of four, then when they are united, the system will move with a speed less than eight; but the two stones when tied together make a stone larger than that which before moved with a speed of eight. Hence the heavier body moves with less speed than the lighter; an effect which is contrary to your supposition."

(G5) In other words, the united body composed of B and b is heavier than B but moves with less speed than B.

In the above, K is elementary knowledge about the behaviour of spatial things, and h is Aristotle's viewpoint.
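The logical skeleton of this verification strategy can be stated explicitly; the following rendering (my formulation in standard logical notation, not the paper's) is the reductio schema at work:

    \[
    K \cup \{h\} \vdash \bot
    \quad\Longrightarrow\quad
    K \vdash \neg h
    \]

That is, if the body of knowledge augmented with the hypothesis yields a falsehood, while K itself is retained as true, the hypothesis must be rejected.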
The contradictory statement (G4) (i.e., that there is an object which at the same time does and does not have some property) allows for the repudiation of h. The Aristotelian principle has led us to a contradiction, and we are justified in rejecting it: it is not true that heavier bodies fall with greater speed than lighter ones. Since it is tacitly rejected that lighter bodies fall with greater speed than heavier ones, one may conclude that weight itself does not influence the speed of falling bodies.

The Nature of Thought Experiments

The nature of the mental operations known as thought experiments stirred a heated debate in the 1980s and 1990s, with opposite views represented mainly by John D. Norton and James Robert Brown. To set the stage for my interpretation of the Galilean thought experiment, let me recapitulate the main points of the debate. Norton (1991) defines a thought experiment as an argument which posits a state of affairs that is either hypothetical or counterfactual, and invokes particulars which do not harm the generality of the conclusion of the argument. In consequence, by its very definition every thought experiment can be reconstructed as an argument, a stance which is embodied in the following Reconstruction Thesis:

(RT) All thought experiments can be reconstructed as arguments based on tacit or explicit assumptions. Belief in the outcome-conclusion of the thought experiment is justified only insofar as the reconstructed argument can justify the conclusion.

From Norton (1996) we can infer that in order to make a fully satisfactory analysis of a thought experiment we have to:

(RT1) explicitly formulate all the premises incorporated in the experiment, including enthymematic ones upon which the experimenter may seem not to rely at first sight,

(RT2) formulate a statement which embodies the posited hypothesis,

(RT3) show that the premises are strong enough to justify the conclusion as a consequence of the premises, either in a deductive or an inductive sense (in which case embrace the hypothesis as part of knowledge), or use the reductio ad absurdum method to show that the posited hypothesis is inconsistent with knowledge (and reject it),

(RT4) last but not least, ensure that it is clear which elements of the thought experiment are essential to the point and which are mere colourful details or 'stage-setting' to make it imaginable.

Therefore, as Norton (1996) points out, "the success of the thought experiment is determined by the validity of the argument". With reference to the opening section of this paper, the analysis of a thought experiment requires checking whether all the premises (both explicit and enthymematic) constitute items of knowledge, formulating the hypothesis the experimenter wants to reject, and finally demonstrating that from the premises and the hypothesis we can deduce some falsehood or absurdity (i.e., applying the reductio ad absurdum method). Following Norton, let me observe that the thought-experiment-as-argument stance gives us precise criteria for judging the reliability of thought experiments:

(C1) the argument must be based on true premises (with the possible exception of a hypothesis put to a test),

(C2) the argument must be valid, i.e., the process of reasoning must not be fallacious (which in particular means that all intermediate steps in the argument must be justified by the premises and accepted rules of inference).
Only when (C1) and (C2) are satisfied has the thought experiment managed to produce real knowledge.

A somewhat opposite view on the nature of thought experiments is advocated by James Robert Brown, this being what is known in the literature as the Platonic view of thought experiments. Brown proposes a taxonomy of thought experiments in which there is a special branch of those which are destructive and constructive at the same time: (P1) A Platonic thought experiment is a single thought experiment which destroys an old or existing theory and simultaneously generates a new one; it is a priori in that it is not based on new empirical evidence nor is it merely logically derived from old data; and it is an advance in that the resulting theory is better than the predecessor theory. (P2) This a priori knowledge is gained by a kind of perception of the relevant laws of nature which are, it is argued, interpreted realistically. Just as the mathematical mind can grasp (some) abstract sets, so the scientific mind can grasp (some of) the abstract entities which are the laws of nature. According to Brown, Galileo's thought experiment is the most prominent example: it destroys the Aristotelian theory and generates a new one according to which all bodies fall alike. Why is it Platonic? It gives us an instant glimpse into the abstract realm of laws of nature, which are relations among non-spatiotemporal objects (universals). Why is the knowledge provided by the experiment a priori? There are three distinct reasons for this: (AP1) there have been no new empirical data, (AP2) the new theory is not logically deduced from old data, nor is it any kind of logical truth, (AP3) the transition from Aristotle's to Galileo's theory is not just a case of making the simplest overall adjustment to the old theory; that is, we not only have a new theory, we have a better one. Among these, (AP2) is in stark contrast with the Nortonian thought-experiment-as-argument view, and it is a crucial factor for the Brownian approach, as no empiricist should have any qualms about (AP1) and (AP3). But (AP2) is a serious bone of contention between Platonists and empiricists. From the opening section it should be clear that I sympathize with Norton's treatment of thought experiments, and this strongly motivates my reconstruction of Galileo's experiment in what follows. It is not the purpose of this paper to present a critique of the Platonic view; therefore, I will only point out those controversial aspects which are relevant for the remaining part of this paper, and which will allow for the exposition of my personal view on Galileo's thought experiment. Every thought experiment is accompanied by reasoning, understood as a kind of mental process. As such it is not intersubjective, but we make it so by means of verbalization, and thanks to this we put forward a model of this reasoning in the form of an argument. A good thought experiment should be easy to communicate (though not necessarily easy to comprehend), and so the thoughts behind it must be clear and precise enough to be conveyed by sentences of a language, either natural or mathematical, or a concoction of the two. From a logician's point of view, the most important consequence of the above is that (C1) and (C2) serve as perspicuous criteria for testing the quality of a thought experiment. Once the premises have been verbalized and the argument carried out, we can ask about the status of the former and the validity of the latter. By employing this strategy, in Sect.
5 I will show that (G3) is a flaw in the Galilean thought experiment. Thanks to this, I am able to propose a reasonable premise that could be accepted by the Aristotelian and that allows for rejection of the Stagirite's theory. The same strategy allows for analysis of different forms of argument based on the thought experiment, which are examined mainly in Sect. 7. The consequence of such an approach is that knowledge obtained by dint of the experiment is not a priori, but is justified (in a deductive way) by the data already included in the premises. From the point of view of this paper, the thought experiment does not open any door to the realm of laws of nature but lets us discern what is hidden in the information we possess, yet hard to see, so to speak. Thus, for example, (AP1) seems to be satisfied by my analysis since the premises I propose do not contain any new empirical data compared to those available to the Stagirite and his followers (including the novel one I introduce), but (AP2) is unsustainable since all steps towards a new theory are taken via deduction from the premises. I basically agree with (AP3), since the Galilean outcome is in a way revolutionary, but this cannot be treated as an advantage on the side of a priori knowledge advocates. Any ground-breaking theory, however obtained and justified, satisfies (AP3). The choice of the argument approach to thought experiments is of course at the same time a rejection of (P1) and (P2). The fact that I have chosen the mereological approach has one more important consequence, which supports (RT4). My analysis embraces two initial steps: firstly, I expound the premises; and, secondly, I translate them into the formal language of mereology. This leaves us with the flesh and bones of the argument, setting all the particulars aside.

Galileo's Thought Experiment as an Argument

Upon analysis, we may distinguish in Galileo's thought experiment the following premises: (I) every spatial body has a weight and a speed, (II) there are at least two disjoint bodies which differ in weight, (III) (disjoint) bodies can be united into a single body, (IV) any given body is heavier than any of its proper parts. The fragment of the reasoning which is not addressed in the four points above is (G3). In the literature, it was recognized as a weak and controversial point which undermines Galileo's conclusion (see e.g. Schrenk 2004). Galileo himself seems to assume it, or to suggest that it stems from the assumptions of Aristotelian physics, and he introduces it into the thought experiment, which allows him to repudiate Aristotle's view. In the literature, the counterpart of (G3) has the form: (V) […] natural speed is a property such that if a body A has natural speed s₁, and a body B has natural speed s₂, the natural speed of the combined body A–B will fall between s₁ and s₂. (Gendler 1998) It is known under the name of the speed is mediative postulate. As I mentioned before, the postulate is contentious. Therefore, I not only present a mereological formalization of the thought experiment assuming the aforementioned postulate, but I also put forward a version of the experiment which, instead of the contentious postulate, uses the following simple and intuitive principle: (V') every part of a falling spatial body has the same speed as the body itself, which, as I point out in Sect. 7, is related to a certain principle used by Atkinson and Peijnenburg (2004).
I will show that in the theory presented further in this paper, the contradiction can be obtained by means of the counterpart of (V') and other axioms motivated by (II)–(IV) plus the principle formalizing the weak Aristotelian viewpoint (see (SWAD) below), but omitting (V') results in a consistent system. From a philosophical point of view, this could be obtained by accepting (RT) as a leading thesis. As was rightly raised by one of the referees, (V') (a) strongly suggests that the weight of a falling body is irrelevant for its speed, and (b) its acceptance in place of (G3) changes the dialectic of the original reasoning. As for (a), it should be noted that Aristotelian physics was the naïve physics of everyday experience. Therefore the hard step for an Aristotelian, concerning the phenomenon of falling bodies, was to treat two falling bodies (plunging with different speeds) as a single entity and to draw the conclusion that it cannot be their weight that influences their speeds. Therefore, while articulating (V') (and the remaining assumptions) I use the term "body" with the intended meaning of a rigid body, i.e. one whose deformations are null or negligible. With this interpretation in mind, and in light of the naïve-physics interpretation of the Aristotelian theory, I venture to maintain that (V') is a reasonable assumption that could be accepted by an Aristotelian, along with one which says that any pair of (disjoint) rigid bodies can be combined into a single one. I explicitly make this assumption later while couching the postulates in the language of mereology and distinguishing in the domain of discourse the subset of rigid bodies. As for (b), it is true that the replacement of (G3) with (V') alters the dialectic of the original reasoning, but there are reasons to do so. Firstly, such a change of dialectic is no novelty in the literature, as, for example, Atkinson and Peijnenburg (2004) consider modifications of Galileo's reasoning by replacing (G3) with different principles. In Sect. 7 I compare my approach to one of those proposed by them, and I also propose a new formalization of the argument which is based on the speed is mediative principle of Atkinson and Peijnenburg. Secondly, to locate this situation in the thought-experiment-as-argument setting, let me point out that, due to the controversies surrounding (G3), Galileo's original argument seems to fail criterion (C1), if we agree that (G3) is one of the assumptions. If we were to treat it as a consequence of the set of assumptions, the situation is no better, since it can easily be shown that (G3) is not a consequence of the very basic assumptions of the thought experiment, and together with the Aristotelian viewpoint it results in an inconsistent set of sentences; therefore, the argument fails the other criterion, (C2). Hence my decision to introduce (V') for rigid bodies, which (in the presence of other postulates) allows for the rejection of the Aristotelian viewpoint, a rejection which is based on fairly reasonable assumptions, and thus for a reconstruction of Galileo's aim (to establish by the power of pure thought that weight does not influence speed). To what extent I have really managed to do this is to be judged by the reader. Before I continue, let me emphasize after Norton (1996, pp. 342–343) that any argument based on Galileo's thought experiment is conclusive only if we accept the tacit assumption that "The speed of fall of bodies depends only on their weights".
No one who is not ready to embrace it will ever be convinced.

An Extended System of Mereology

The underlying logic of the system presented is first-order classical logic with identity. The symbols '¬', '∧', '∨', '→', '←→', '∀', '∃' and '=' are interpreted, respectively, as negation, conjunction, disjunction, material implication, material equivalence, the universal and the existential quantifier, and identity. If A and B are sets, then A × B is their Cartesian product, i.e., the set of all ordered pairs ⟨a, b⟩ such that a ∈ A and b ∈ B. For a set A, P(A) is its power set, i.e. the collection of all subsets of A. For a fixed domain M, whose elements will be called (spatial) bodies, let ⊏ ⊆ M × M be the part of relation. By means of ⊏ (and logical constants) I define the auxiliary relations of ingrediens (also called improper parthood), overlap and disjointness:

x ⊑ y ←→ (x ⊏ y ∨ x = y),
x ○ y ←→ ∃z (z ⊑ x ∧ z ⊑ y),
x ⊥ y ←→ ¬ x ○ y.

Ingrediens and overlap are of course reflexive; disjointness is irreflexive. In terms of the parthood and overlap relations, the key notion of mereology, a mereological sum of a given set of objects, is defined thus:

x Sum A ←→ (∀a∈A a ⊑ x ∧ ∀y (y ⊑ x → ∃a∈A y ○ a)). (df Sum)

From a philosophical point of view, a mereological sum may be treated as a faithful mathematical model of the assembly of an object from given entities. The notion of a mereological sum seems to be a good candidate to "spell out exactly what would constitute a proper unification of bodies", the problem raised, among others, by Schrenk (2004). In order to avoid the controversy related to (V'), addressed in Sect. 3, in the set M of all bodies I distinguish a set R whose elements will be called rigid bodies, and I accept the following axiom of sum existence:

∀x,y∈R ∃z∈R z Sum {x, y}, (∃Sum_R)

which says that every pair of rigid objects has a rigid sum. In the special case when M = R (i.e. we only consider a universe of rigid bodies), (∃Sum_R) postulates the existence of mereological sums of arbitrary finite collections of things. In general, I do not assume that mereological sum is an operation, i.e., it does not have to be the case that a given pair of objects has exactly one rigid sum. But, assuming that r : M × M → P(R) is the function which attributes to any pair of objects all of its rigid sums, r(x, y) := {z ∈ R : z Sum {x, y}}, by ⟨x, y⟩ I will denote an arbitrary element of r(x, y). Thus ⟨x, y⟩ is a randomly chosen rigid sum of x and y. Of course, where we use the part of relation and mereology to model spatiotemporal dependencies, the sum uniqueness property is a more than reasonable assumption. The fact that I do not include it in the body of axioms has nothing to do with any ontological or philosophical stance whatsoever. I just want to show that uniqueness is not necessary to model Galileo's thought experiment. Thus the reason to exclude uniqueness is purely logical. I assume that each element of M has a weight and a speed (see (I)). These are normally expressed using real numbers, but for the sake of the analysis of the reasoning, it is enough to assume that we have a non-empty set V of values, which may or may not be ordered by some binary relation. Thus, there are two functions w : M → V and s : M → V. For any spatial body z, w(z) plays the role of the weight of z, and s(z) the role of its speed. The only place where I do assume that the set of values is the field of real numbers is Sect. 7, in which Atkinson and Peijnenburg's version of the argument is examined. From the point of view of modern logic, the paper deals with two-sorted structures ⟨M, V, R, ⊏, w, s⟩, which can additionally be extended with other relations and operations (as is done, for example, when we want an order on the set of values).
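To make the formal apparatus concrete, here is a minimal sketch of my own (not taken from the paper): it encodes a small finite structure in Python and checks the definitions of ingrediens, overlap, disjointness and mereological sum reconstructed above.

```python
# A minimal illustrative sketch (not from the paper): the mereological
# definitions checked on a finite structure.  'part' encodes proper
# parthood; the remaining relations are defined exactly as in the text.
M = {"x", "y", "z"}
part = {("x", "z"), ("y", "z")}      # x and y are proper parts of z (as in Fig. 1)

def ingrediens(a, b):
    """Improper parthood: a is a proper part of b, or a is identical with b."""
    return (a, b) in part or a == b

def overlap(a, b):
    """a and b have a common ingrediens."""
    return any(ingrediens(c, a) and ingrediens(c, b) for c in M)

def disjoint(a, b):
    return not overlap(a, b)

def is_sum(s, A):
    """s Sum A: every element of A is an ingrediens of s (upper bound), and
    every ingrediens of s overlaps some element of A (s is not too large)."""
    upper_bound = all(ingrediens(a, s) for a in A)
    not_too_large = all(any(overlap(c, a) for a in A)
                        for c in M if ingrediens(c, s))
    return upper_bound and not_too_large

print(is_sum("z", {"x", "y"}))   # True: z is a mereological sum of {x, y}
print(is_sum("z", {"x"}))        # False: y is an ingrediens of z disjoint from x
```

The second call illustrates the role of the right-hand conjunct of (df Sum): z is an upper bound of {x}, but it is "too large" to count as its sum.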
Natural Speeds are Mediative

This section is devoted to the formalization of the Galilean thought experiment in which the speed is mediative postulate is used. To properly express it within the system introduced in the previous section, I must equip the set of values with a strict order relation < (i.e. irreflexive and transitive) which allows for the comparison of elements of V. I standardly assume that the order is total, i.e., any two distinct elements are comparable with respect to <. In this setting the speed is mediative postulate can be formalized as the condition:

∀x,y∈R (s(x) < s(y) → ∀z∈r(x,y) (s(x) < s(z) ∧ s(z) < s(y))), (SpdM)

which says that in case the speed of x is less than the speed of y, the speed of any sum of x and y falls in between the speeds of x and y. We now need to interpret the Aristotelian postulate, according to which the weight of a body influences its speed. Atkinson and Peijnenburg call the weak dogma the Aristotelian stance according to which "heavier bodies fall more quickly than lighter ones". By the name of the strong dogma they call "the quantitative statement that the natural motion of a body is proportional to its weight". I will stick to the weak dogma, which can be nicely expressed as:

∀x,y∈R (w(x) < w(y) → s(x) < s(y)). (WAD)

The counterpart of (IV) takes the form of:

∀x,y (x ⊏ y → w(x) < w(y)). (Wght)

Finally, let me introduce the formal analogue of (II):

∃x,y∈R w(x) ≠ w(y), (∃2)

according to which there are at least two rigid bodies with distinct weights. This is a bit weaker than (II), since I skipped the disjointness requirement, but it is still strong enough to obtain the results I am aiming for. If the reader feels uncomfortable about the absence of disjointness, she can easily add it to (∃2) and convince herself that all the proofs can be repeated along lines similar to those to follow.

Conventions. From now on, in case S is a set of axioms (postulates) and ϕ₁, …, ϕₙ are sentences, then by S + ϕ₁ + ⋯ + ϕₙ I denote the set S ∪ {ϕ₁, …, ϕₙ}. In a similar way, in case ϕ is a sentence, S − ϕ is the set of postulates from which ϕ has been removed.

[Fig. 1: the three-element mereological structure, with x and y proper parts of z. Fig. 2: the structure with two isolated bodies x and y.]

It is routine to verify that P₁ := (∃2) + (∃Sum_R) + (Wght) + (WAD) is consistent. Let us take the three-element mereological structure (see Fig. 1) with a suitable interpretation of w and s. It is easy to see that all the conditions are satisfied in the structure. So, the conclusion is that without additional assumptions it cannot be demonstrated that the weak Aristotelian dogma is false. The following set P₂ := (∃2) + (∃Sum_R) + (Wght) + (SpdM) is also consistent. Again, take the structure from Fig. 1 in which the interpretation is as in the case of P₁, except for the speeds, which are chosen so as to satisfy (SpdM). Let me also observe that the presence of the mereological sum axiom is relevant for the derivation of the contradiction in the theorem to follow; that is, the set (P₁ ∪ P₂) − (∃Sum_R) is consistent as well. To see this, consider a structure composed of two isolated bodies (see Fig. 2). Observe now that:

Theorem 1 The set P₁ ∪ P₂ is inconsistent.

Proof Assume all the postulates. Take x, y ∈ R such that (a) w(x) < w(y). By (WAD) we obtain that s(x) < s(y). Fix ⟨x, y⟩. We have two possibilities: y = ⟨x, y⟩ or y ⊏ ⟨x, y⟩. In the first case (i) s(y) = s(⟨x, y⟩). In the second one, firstly (Wght) entails that w(y) < w(⟨x, y⟩), and secondly (WAD) gives us that (ii) s(y) < s(⟨x, y⟩). Yet by (SpdM) we obtain that s(x) < s(⟨x, y⟩) < s(y), so s(⟨x, y⟩) < s(⟨x, y⟩) in both (i) and (ii), a contradiction.

The reasoning in the proof of Theorem 1 is within, so to speak, Galileo's dialectic.
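Continuing the sketch above, one concrete interpretation witnesses the consistency of P₁; the assignment of weights and speeds is my own illustrative choice (the dropped Fig. 1 interpretation was presumably of this kind):

```python
# Continuing the earlier sketch: a witness for the consistency of
# P1 = (∃2) + (∃Sum_R) + (Wght) + (WAD).  Weights and speeds are my own
# illustrative assignment.
R = M                                  # all three bodies are rigid
w = {"x": 1, "y": 2, "z": 3}           # weights: proper parts are lighter
s = {"x": 1, "y": 2, "z": 3}           # speeds: increasing with weight

exists2 = any(w[a] != w[b] for a in R for b in R)
sum_ex  = all(any(c in R and is_sum(c, {a, b}) for c in M)
              for a in R for b in R)
wght    = all(w[a] < w[b] for (a, b) in part)
wad     = all((not w[a] < w[b]) or s[a] < s[b] for a in R for b in R)

print(exists2 and sum_ex and wght and wad)   # True: all four postulates hold
```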
In that dialectic, we take two bodies, consider their unification into a single body (in the form of their mereological sum), apply the remaining assumptions (of which (Wght) is implicitly used in (G3)) and arrive at the contradictory conclusion that there is a body faster than itself. In the reasoning, the speed is mediative principle plays a crucial role in deriving a contradiction, and it is one of the assumptions. The problem is that it is controversial, so the argument seems to fail to satisfy criterion (C1). Moreover, it is easy to see that P₂ − (SpdM) plus the negation of (SpdM) is consistent, by equipping the model from Fig. 1 with speeds that violate (SpdM). Therefore the speed is mediative postulate cannot be a consequence of the basic premises, and the argument fails the validity criterion. Adding both the postulate and the weak Aristotelian dogma, the inconsistency is obtained, so a hardened Aristotelian could easily defend his view by attacking (SpdM). In the next section I venture to put forward a remedy for this situation.

Galileo's Argument Cleared of the Flaw

From now on I will try to deploy a minimal amount of concepts and make my assumptions as weak as possible, yet strong enough to achieve Galileo's objective. First of all, until Sect. 7 I will no longer require that the values in the set V are ordered; this is not necessary for the setting I propose. And so, the postulate (Wght) is replaced by:

∀x,y (x ⊏ y → w(x) ≠ w(y)), (w-Wght)

saying that any proper part of a given body must have a different weight from the body itself. This is, of course, weaker than (Wght), which entails it. As I already pointed out, we are perfectly entitled to accept the stronger condition, but it is irrelevant for what follows.

[Fig.: a structure with bodies x, y, z and the rigid bodies marked as R.]

The following axiom is a formal counterpart of (V'):

∀x∈R ∀y (y ⊑ x → s(y) = s(x)), (Spd)

and says that every part of a given falling rigid body must have the same speed as the body itself. If the reader finds herself uncomfortable with the second quantifier ranging over the whole set of bodies, she may restrict it to the set R of rigid bodies; this does not influence the argument. Or she may accept a reasonable axiom according to which every part of a rigid body must itself be rigid. Let me show how (Spd) relates to the sentence:

∀x,y∈R s(x) = s(y), (Spd*)

which says that all rigid bodies fall alike. In the absence of (∃Sum_R), (Spd) does not entail (Spd*), nor vice versa, even in the case when both (∃2) and (w-Wght) hold. For the first, take the structure from the figure above, with y a proper part of z, R := {x, z}, pairwise different weights, s(y) = s(z) and s(x) ≠ s(z); then (Spd) holds while (Spd*) fails. For the second, reinterpret the speeds so that s(x) = s(z) while s(y) ≠ s(z); under this interpretation all rigid bodies have the same speed, while z has a part with a speed different from that of z itself, so (Spd*) holds while (Spd) fails. Observe that in both cases the models satisfy (w-Wght) and (∃2), yet fail to meet (∃Sum_R), since x and z are rigid bodies without a mereological sum. If we agree that the set (w-Wght) + (∃2) constitutes elementary knowledge about bodies with respect to weight, then we conclude that such knowledge is too weak to establish any dependency between (Spd*) and (Spd). So, I have identified two postulates, (∃Sum_R) and (Spd), which together entail that all rigid bodies fall alike (Fact 2). However, from a logical point of view, these are not enough to reject Aristotle's stance. Let us remember that the Aristotelian viewpoint was that the lighter body has a smaller speed than the heavier one. From the point of view of the correctness of the argument, it is irrelevant in which way weight influences speed; it is enough to assume that it indeed has influence, so we will consider the following hypothesis, which partially reflects Aristotle's stance:

∀x,y∈R (w(x) ≠ w(y) → s(x) ≠ s(y)). (SWAD)

(SWAD) is the acronym for super-weak Aristotelian dogma.
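The two reinterpretations used above to separate (Spd) from (Spd*) can also be checked mechanically; continuing the Python sketch (again my own illustration):

```python
# Continuing the sketch: two speed assignments on the structure in which
# y is a proper part of z, x is isolated, and R = {x, z} (so the rigid
# bodies x and z have no mereological sum).
M = {"x", "y", "z"}
part = {("y", "z")}
R = {"x", "z"}

def spd(s):
    """(Spd): every ingrediens of a rigid body has that body's speed."""
    return all(s[b] == s[a] for a in R for b in M if ingrediens(b, a))

def spd_star(s):
    """(Spd*): all rigid bodies fall alike."""
    return all(s[a] == s[b] for a in R for b in R)

s1 = {"x": 1, "y": 2, "z": 2}      # rigid bodies differ in speed
s2 = {"x": 2, "y": 1, "z": 2}      # z has a part with a different speed

print(spd(s1), spd_star(s1))       # True False: (Spd) without (Spd*)
print(spd(s2), spd_star(s2))       # False True: (Spd*) without (Spd)
```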
Since both the strong and the weak dogmas of Atkinson and Peijnenburg (2004) entail the super-weak one, the refutation of the latter is enough to falsify the former two. Observe now that there are models of (∃Sum_R) + (Spd) + (SWAD), e.g. any structure with R = ∅ and only isolated bodies (atoms) in which every element has a different weight and speed, or a degenerate one-element structure M := {x} =: R (this is the model with which Parmenides would probably have been very content). Therefore we need more to derive a contradiction and reject the super-weak dogma. In reference to the opening section, let

K := (∃2) + (∃Sum_R) + (w-Wght) + (Spd)

constitute the body of knowledge which we take into account, the common ground between an Aristotelian and Galileo. Let me show that K satisfies the very minimal requirement for knowledge, i.e. it is consistent. To this end, take again the structure from Fig. 1, with all bodies rigid and pairwise different weights. As for the speed, I define it to be 1 for all bodies in the structure. We leave it to the reader to check that all the sentences from K are indeed true in the model. Now, Galileo's reasoning can be encapsulated in the proof of the following theorem:

Theorem 3 The set K + (SWAD) is inconsistent.

Proof By (∃2) there are rigid bodies x and y with different weights: w(x) ≠ w(y). Therefore by (SWAD) they have different speeds: s(x) ≠ s(y). On the other hand, by Fact 2 we have that s(x) = s(y): a contradiction.

Going back to the opening section of the paper, K constitutes knowledge and (SWAD) is the hypothesis which we aim to reject. This is achieved by accepting (SWAD) and deriving a contradiction from K + (SWAD). To anyone who accepts classical logic, the argument satisfies Norton's (C2) requirement. Does it also satisfy (C1)? Well, today we do not need this argument to convince ourselves that it is not the weight that influences the speed of falling bodies. So a better question is whether the premises could count as true for Galileo and the Aristotelians. (∃2) should not raise questions for anyone; similarly, (Spd) is very appealing and intuitively plausible. As for the sum existence axiom, probably the most controversial among the three, in the next theorem I show that it is relevant for deriving a contradiction from K. In consequence, if we want Galileo's argument to be valid, we should at least consider taking it as a true premise (see also the final section of the paper for a short discussion of the mereological sum axiom and its role in the context of Galileo's thought experiment). I also show that (Spd) is relevant for deriving the contradiction, which justifies its inclusion in the body of knowledge.

Theorem 4 (K − (∃Sum_R)) + (SWAD) is consistent.

Proof Take the structure from Fig. 2 with two isolated rigid bodies of different weights and different speeds; it is easy to see that all the remaining postulates are satisfied.

Theorem 5 (K − (Spd)) + (SWAD) is consistent.

Proof I use the structure from Fig. 1 with suitably chosen weights and speeds. Notice that (Spd) fails, since x ⊏ z yet s(x) ≠ s(z), and all axioms from the set are satisfied by the structure. (∃Sum_R) holds since z Sum {x, y}, and it is a routinely verified property of mereological sum that for every a, a Sum {a}. For (w-Wght) it is enough to notice that only x ⊏ z and y ⊏ z, and by the definition of the w function we have w(x) ≠ w(z) and w(y) ≠ w(z). For (SWAD) notice that the speeds are pairwise different: s(x) ≠ s(z), s(y) ≠ s(z) and s(x) ≠ s(y).

Natural Speeds are Intensive: Division Versus Summation

The speed is mediative postulate was replaced in Atkinson and Peijnenburg (2004) by the axiom according to which the natural speeds of falling bodies are intensive, by which they mean that: if two bodies with the same natural speeds are bound together, the natural speed of the composite is the same as that of each of the two constituent bodies.
In our setting this can be nicely presented in mereological notation as follows:

s(x) = s(y) → s(⟨x, y⟩) = s(x). (Z1)

I omit the prefix of universal quantifiers since for this section, until Theorem 8, I assume that R = M, i.e. all bodies are rigid. I make this assumption to avoid unnecessary complications which could cloud the focus of this section. The above postulate is closely related to (Spd) in the sense that (Z1) is its consequence, for (Spd) entails that s(x) = s(⟨x, y⟩). However, (Spd) is not only strictly stronger than (Z1), but the latter is also too weak to prove the counterpart of Theorem 3, even in the presence of the stronger version of the sum existence axiom, according to which every pair of rigid objects has exactly one sum:

∀x,y∈R ∃!z∈R z Sum {x, y}. (∃!Sum_R)

Let K′ be K in which (∃Sum_R) has been replaced by the stronger version of the sum existence axiom, (∃!Sum_R).

Theorem 6 The set ((K′ − (Spd)) + (Z1)) + (SWAD) is consistent.

Atkinson and Peijnenburg reproduce a version of the Italian thinker's argument with the aid of (Z1), which means that they use principles other than those used by me so far, and it is interesting to see how their argument can be recaptured in a mereological setting. The two other principles mentioned are: (Z2) weight is extensive, according to which any body composed of two bodies of the same weight is twice as heavy as either of the bodies:

(x ⊥ y ∧ w(x) = w(y)) → w(⟨x, y⟩) = 2 · w(x), (Z2)

and (Z3): the natural speed of a falling body is a continuous function of its weight. In order to interpret this postulate in my setting an additional function f : V → V is needed, and the set of values must be such that it allows for speaking of the continuity of f. I standardly assume that V is the set ℝ of all real numbers with its standard field structure (we need the standard operations on reals to properly express the reasoning). Thus, in Atkinson and Peijnenburg's setting we are dealing with structures ⟨M, ℝ, R, ⊏, w, s, f⟩. The two additional axioms put upon f (which together form (Z3)) are: (Z3a) f is continuous, (Z3b) the speed of a body is an f-function of its weight: s(x) = f(w(x)). To carry out the reasoning we will need an axiom concerning the divisibility of bodies (tacitly assumed by Atkinson and Peijnenburg):

∀x ∃y,z (y ⊥ z ∧ x Sum {y, z} ∧ w(y) = w(z) = w(x)/2). (Div)

Define AP := (Z1) + (Z2) + (Z3a) + (Z3b) + (Div). The following theorem and its proof are based on the thought experiment carried out by Atkinson and Peijnenburg (2004, p. 121).

Theorem 7 Assuming the Axiom of Dependent Choices, it is a consequence of AP that all bodies fall alike, and the set AP + (SWAD) + '∃x w(x) ≠ 0' is inconsistent.

Proof Take a body x whose weight is r ∈ ℝ. By (Div) there are bodies x₁ and x′₁ which are proper parts of x and such that w(x₁) = r/2 = w(x′₁). Further, divide x₁ into bodies x₂ and x′₂ which satisfy (Div), so we have that w(x₂) = r/4 = w(x′₂). Choose x₂ and fix r/4. When the n-th stage is reached, choose xₙ and fix r/2ⁿ. Applying the Axiom of Dependent Choices we come up with countable sequences of bodies (xₙ)_{n∈ℕ} (with x₀ = x) and of their weights (r/2ⁿ)_{n∈ℕ}. The limit of the latter sequence is 0. By (Z3b) we have that for every n ∈ ℕ, s(xₙ) = f(r/2ⁿ), and so by (Z3a) it must be the case that the limit of (s(xₙ))_{n∈ℕ} is f(0). By construction and by (Div), for every n ∈ ℕ, xₙ = ⟨xₙ₊₁, x′ₙ₊₁⟩ and w(xₙ₊₁) = w(x′ₙ₊₁). From this and (Z3b) we obtain that s(xₙ₊₁) = s(x′ₙ₊₁). By the speed is intensive postulate we obtain that s(xₙ) = s(⟨xₙ₊₁, x′ₙ₊₁⟩) = s(xₙ₊₁), so we have that s(x) = s(x₀) = s(x₁) = s(x₂) = ⋯, and the continuity of f entails that s(x) = f(0) (for w(x) = r and s(x) = f(w(x))). By the arbitrariness of x we obtain that for every body its natural speed is f(0), and therefore all bodies fall alike, i.e. ∀x,y∈M s(x) = s(y).
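The chain of equalities just established can be displayed compactly; this is my own rendering of the limit step, with f the speed-of-weight function from (Z3b):

```latex
% The limit argument of Theorem 7, restated (my own rendering).
\begin{align*}
  w(x_n) &= \frac{r}{2^{n}} && \text{by $n$ applications of (Div),}\\
  s(x_n) &= f\!\left(\frac{r}{2^{n}}\right) && \text{by (Z3b),}\\
  s(x_n) &= s\big(\langle x_{n+1}, x'_{n+1}\rangle\big) = s(x_{n+1})
         && \text{by (Z1), since } s(x_{n+1}) = s(x'_{n+1}),\\
  s(x)   &= s(x_0) = s(x_n) = f\!\left(\frac{r}{2^{n}}\right)
           \xrightarrow[\,n\to\infty\,]{} f(0)
         && \text{by (Z3a), the continuity of } f,
\end{align*}
% whence s(x) = f(0) for every body x, i.e. all bodies fall alike.
```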
For the second part of the theorem: since, by assumption, there is a body x whose weight w(x) is different from 0, the divisibility axiom entails that it has a proper part y such that w(y) < w(x), and so by (SWAD) it must be the case that s(x) ≠ s(y): a contradiction. To see that the assumption that there is a body with non-zero weight is relevant for the contradiction, observe that the set AP + (SWAD) is consistent. As a model take the infinite binary tree T from Fig. 5 in which every body is assigned weight 0 (and hence, by (Z3b), the same speed f(0)). It is also interesting to observe that the mereological sum axiom fails at T, since if we put R := M, objects such as x₁₀ and x₀₁ do not have any sum. The only candidate is x, which is an upper bound of {x₁₀, x₀₁}; however, x₁₁ is part of x and is disjoint from both these objects. Therefore, Galileo's aim can be obtained without the mereological sum axiom. Let me emphasize that from now on I drop the assumption that R = M. Theorem 6 shows that (Spd) is stronger than (Z1). Its strength is manifest in the following:

Theorem 8 (i) (Spd) together with 'R ≠ ∅' and the following weaker version of the axiom of divisibility:

∀x∈R ∃y,z∈R (y ⊏ x ∧ z ⊏ x ∧ w(y) ≠ w(z)), (DIV′)

is inconsistent with the super-weak Aristotelian dogma. (ii) Similarly, the set (Spd) + (w-Wght) is inconsistent with the dogma, if only R ≠ ∅ and every rigid body has a rigid proper part: ∀x∈R ∃y∈R y ⊏ x.

Proof (i) By assumption there is a rigid body x, and by (DIV′) there are, rigid as well, y, z ⊏ x such that w(y) ≠ w(z). So (SWAD) entails that s(y) ≠ s(z). On the other hand, by (Spd) we have that s(y) = s(x) = s(z), a contradiction. (ii) Let x, y ∈ R be such that y ⊏ x. By (w-Wght) it is the case that w(y) ≠ w(x), so (SWAD) entails that s(y) ≠ s(x). But the speeds of y and x must be equal by (Spd), a contradiction.

It remains to verify that the sets of premises without the super-weak Aristotelian dogma are consistent. For (Spd), (DIV′) and 'R ≠ ∅' take the full binary tree T from Fig. 5 and put: R := M, V := ℝ, w(x) := 1, and for every xᵢ on the tree whose weight is r, let w(xᵢ₁) := r/3 and w(xᵢ₀) := 2r/3. Let the speed of all objects be equal to 1. A slightly simpler model, the infinite descending chain from Fig. 6, is enough to demonstrate the consistency of the premises from the second point of the theorem. Put R := M, V := ℝ, fix a positive real number r, and let w(xₙ) := r/2ⁿ and s(xₙ) := 1 for every natural number n. It is routine to check that the postulates are true in the model. The most serious objection we can raise against the above versions of the argument is that they significantly change the thinking behind Galileo's original thought experiment, as the unification of objects is replaced by their division. So, can we reject the Aristotelian dogma by means of the speed is intensive postulate and some weaker logical apparatus than that applied in the proof of Theorem 7, remaining at the same time as close to Galileo's original idea as possible? Of course, this just boils down to finding reasonable assumptions, and the question is: can we find such assumptions? Yes, we can, and I am going to put forward yet another version of the argument in which (Spd) is replaced with the weaker speed is intensive postulate and the mereological sum is used in a relevant way. Take the following existence postulate (which is a formal analogue of (II)):

∃x,y∈R x ⊥ y, (∃2′)

and the following weight measurability axiom:

∀x,y∈R (w(x) < w(y) → ∃z∈R (z ⊏ y ∧ w(z) = w(x))), (Msr)

assuming at the same time that < is a strict linear order on the set of values. Define ℛ := (∃2′) + (∃Sum_R) + (w-Wght) + (Msr) + (WS) + (Z1), where (WS) is the following, self-explanatory, principle:

∀x,y∈R (w(x) = w(y) → s(x) = s(y)). (WS)

Observe that this is a consistent set of postulates. Indeed, as a model take the mereological structure from Fig.
1, put R := M and V := {1, 2} with 1 < 2. Let w(x) := 1 =: w(y), w(z) := 2 and, for all a ∈ R, s(a) := 2. I leave it to the reader to check that ℛ is true in the model, and that the part of relation of the model is transitive. However, we have:

Theorem 9 ℛ is inconsistent with the super-weak Aristotelian dogma (assuming that parthood is transitive).

Proof Fix rigid disjoint bodies x and y. We have two possibilities. (i) If w(x) = w(y), then by (WS) and the speed is intensive postulate we have that s(x) = s(⟨x, y⟩). But the disjointness of the bodies and the reflexivity of ⊑ entail that x ⊏ ⟨x, y⟩, so by (w-Wght) it must be the case that w(x) ≠ w(⟨x, y⟩). In consequence, by the super-weak Aristotelian dogma, s(x) ≠ s(⟨x, y⟩), a contradiction. (ii) In the second case the weights of x and y are different, and without loss of generality we may assume that w(x) < w(y). By (Msr) there is a rigid body z ⊏ y such that w(x) = w(z). By (WS) and the speed is intensive postulate we obtain that s(x) = s(⟨x, z⟩). However, x and z are disjoint (since x and y are, and transitivity holds), and so again x ⊏ ⟨x, z⟩, which together with (w-Wght) entails that w(x) ≠ w(⟨x, z⟩). Therefore s(x) ≠ s(⟨x, z⟩) by (SWAD), a contradiction.

Let me verify that the key sentences from ℛ cannot be left out (transitivity holds in all models below). In particular, (ℛ − (∃Sum_R)) + (SWAD) is consistent: take the two-element structure from Fig. 2, with two disjoint rigid bodies of equal weights and equal speeds; it is easy to verify that it satisfies all the remaining postulates. There are at least a couple of aspects that make the version of the Galilean argument from Theorem 9 interesting. Firstly, the rather strong postulate (Spd), according to which every part of a rigid body has the same speed as the body itself, has been eliminated in favour of the weaker speed is intensive postulate. Secondly, the remaining assumed postulates seem to be at least reasonable and could be accepted by Aristotle's followers. Thirdly, none of the premises assumes the potential infinite divisibility of objects. Lastly, in the course of the proof, only standard transformations based on the principles of classical logic are made.

Conclusion

The observant reader might have noticed that the notion of mereological sum might be too strong for obtaining the goal of the paper. If the reader is so kind as to go through all the facts involving the notion and the sum existence axiom, she will see that the right-hand conjunct from (df Sum) is never used. That is, in none of the proofs is it important that the sum of two bodies x and y is not too large, so to speak. From a logical point of view, what matters is that the body ⟨x, y⟩ is an upper bound of the two, i.e. contains x and y as its parts. However, such a choice, if logically correct, could somewhat mar the ontological flavour of the paper, especially in the analysis of the speed is mediative principle in Sect. 5. For these reasons, I have decided to stick to the mereological sum concept, since it is the best formalization of the unification of bodies I am aware of. The reader may also ask in what way the formalization put forward is superior to other reconstructions that have been given in the literature. While I would not like to advocate for superiority, I believe that certain points make this formalization at least interesting. Since the argument concerns falling bodies and their parts, mereology is a very natural setting for its logical reconstruction.
As Galileo's original thought experiment relies heavily on the unification of falling bodies, the question "why not use a mereological sum principle as a mathematical model of the process?" is very natural, and the investigation of its consequences is, in my opinion, interesting for mereologists, Galileo scholars and philosophers of science, as well as for philosophically-minded logicians. It is also an advantage of the mereological approach of this paper that all premises have been couched in a uniform, precise language which allows for the scrutiny of the mutual dependencies between them and between the various forms of the argument. I have encountered the objection that, since my reconstruction relies heavily on a contentious mereological sum principle, it is quite hard to find a justification for the whole process of formalization within the mereological setting. However, if the reader has similar thoughts, I would like to ask her to change her perspective. Although I agree that unrestricted mereological sum principles might be contentious from an ontological point of view, I ask the reader to notice that (∃Sum_R) is restricted to rigid bodies only, and it turns into the unrestricted version only if we apply an extra axiom saying that all bodies are rigid. Of course, one may still object that the unrestricted sum principle for rigid bodies is no better from an ontological point of view, as it postulates the existence of beings well beyond the limits of necessity. But, on the other hand, in order to precisely reconstruct Galileo's thought experiment we must address the issue of unification (which in this context seems to be more important than the notion of the division of bodies). The sum principle is a reasonable proposal since it is precise, relatively simple and, as I have proven, relevant for the whole process of the reasoning in its formalized form. Also, if we aim at the universality of the principles we want to establish, we cannot say that we only accept the possibility of unification for particular bodies, since in such a case the conclusion could be applied only to the same particular bodies. What Galileo does is formulate the universal law of nature: the weight of falling bodies does not influence their speed. If unification is relevant for a derivation of the law, then it is more than reasonable to accept unification in a strong form. So the change in perspective is that we do not perceive the mereological sum as just another postulate frowned upon due to its contentious consequences, but we treat it as a postulate which permits the explication of the establishment of one of the basic laws of nature. Therefore, there might be more to strong mereological sum principles than meets the eye, I venture to say.
11,278.4
2020-05-29T00:00:00.000
[ "Philosophy" ]
Assessment of groundwater quality and determination of hydrochemical evolution of groundwater in Shillong, Meghalaya (India)

Deterioration of surface water quality in various parts of India due to increasing urbanization has led to the extensive usage of groundwater for various domestic and irrigation needs, thereby raising concerns over its quality. However, there are very few studies focussing on the issue of groundwater quality in the North-Eastern region of India. In order to assess the quality of groundwater for drinking and irrigation purposes, this study was carried out in Shillong, the capital city of Meghalaya State in North-East India, during the pre-monsoon and post-monsoon seasons of 2018. Standard sampling and analytical procedures were followed for groundwater quality assessment. Minimal variation was observed between the water quality of the pre- and post-monsoon seasons. However, the study found that the groundwater samples have acidic pH, and the presence of nitrate is also reported. Some of the samples also showed the presence of mercury, nickel, and cadmium. The presence of these contaminants could be attributed to the industrial activities in the state. Overall, the groundwater quality was found suitable for drinking and irrigation purposes after conventional treatment. Hydrochemical studies further inferred that groundwater properties in the region are influenced by rock weathering along with atmospheric precipitation. We foresee the findings being of use for water management in the region.

Introduction

Periodic water quality assessment of surface water and groundwater is necessary for the well-being of the ecosystem in general, and for human society in particular. Declining surface water resources along with increasing levels of pollution have rendered the use of groundwater mandatory in various parts of the globe [1,2]. As the largest user of groundwater in the world, India fulfils 85% of its drinking water needs and more than 60% of its irrigation requirements through groundwater resources [3,4]. Therefore, it is of utmost importance to look after groundwater resources on a regular basis so that required action, if any, can be taken well in time. The chemical constituents of water are affected by natural as well as anthropogenic factors. Increasing use of chemicals (fertilizers and pesticides) for agricultural practices is also one of the anthropogenic causes of the deterioration of both surface and ground water quality [5]. Therefore, it is necessary to keep a check on water quality in order to ensure the well-being of the people. Considering the importance of groundwater and its quality degradation due to urbanization and increasing pollution, many researchers have discussed groundwater chemistry and its human health risk assessment across the globe [6][7][8][9]. Not only heavy metals and bacterial contamination, but also out-of-range values of basic water quality parameters such as pH, total dissolved solids, and nitrate have been reported in groundwater owing to unsustainable use and indiscriminate subsurface discharge of various pollutants [4,7]. In order to understand the effects of these factors on health and agriculture, many researchers have employed various statistical and multivariate statistical analysis tools [1]. As the water quality index is a comprehensive approach to assessing the quality of groundwater, Su et al. have used an entropy-weighted water quality index [10]. Use of fuzzy methods has also been reported by researchers for easy and accurate estimation of water quality [7].
Groundwater is affected by various other factors as well, such as geological features, precipitation pattern, rock weathering mechanism, river system, oxidation-reduction, evaporation, sorption, and exchange reactions [11,12]. The water quality of Shillong is also affected by various such processes. Meghalaya, a north-eastern hilly state, is one of the 29 states of India, bounded to the south by Bangladeshi divisions, and to the north and east by Assam, India. This state is the wettest region of India, receiving approximately 12,000 mm of rain in a year. Therefore, it is obvious that the groundwater chemistry of the area would show the influence of the rainfall pattern of the area. The present study was carried out in Shillong, which was known as the 'Scotland of the East' during the British period owing to the presence of rolling hills around the town [13]. Although this region has historically been pollution-free, population explosion and urbanization have recently led to various environmental problems. Surface water scarcity is one of those problems and is of serious concern due to the topography of the region [14]. Due to the shortage of clean surface water, pressure on groundwater resources is now increasing, and hence it was considered necessary to assess their quality. In this manuscript, the assessment and suitability of the groundwater in the Shillong region for drinking and irrigation purposes are detailed.

Study area

Shillong lies in the East Khasi Hills District, one of the seven districts of Meghalaya State. The climate of the area ranges from temperate humid to subtropical humid, with temperature varying from 1.7 °C to 24 °C [14]. The annual climate distribution of the district is shown in Fig. 2. It depicts that the highest rainfall in the area is received in the months of June and July, though none of the months is completely dry. The highest temperatures are also reported in the same months, thus making the climate humid. The south-west monsoon, which originates from the Bay of Bengal, has a large effect on the weather pattern in Meghalaya. It results in heavy rainfall of more than 12,000 mm in various districts of the state. Mawsynram, which receives about 12,270 mm of rainfall, is the wettest place on earth owing to its specific geographical and climatological conditions. Shillong is also characterized by the presence of a number of rivers, such as the Umtrew, Umiam, and Umkhen in the northern parts, and the Umiew (Shella), Umngot, and Umngi (Balat) in the southern part. The rivers present in the northern part of the district drain into the Brahmaputra River (India), while the southern rivers drain into the Surma River (Bangladesh) [14]. Precambrian rocks of gneissic composition are the dominant rock types in the study area. Basically, these rocks form the base of the overlying Shillong rocks. Another rock type present in the study area is quartzite. These quartzites attained their final form after metamorphosis, though they are originally of sedimentary origin, as evidenced by the presence of bedding and ripple marks. As the terrain in Shillong is mountainous and undulating, the groundwater resources in the area are influenced by the topography, the presence of rock fractures, and weathering zones. Generally, the groundwater in the region is found in the weathered and fractured zones of quartzite, under water-table conditions. Groundwater resources have been reported in the form of springs, seepages, wells, and bore wells.
The property of retaining water in bore wells is also influenced by the underlying rocks, as it has been seen that metabasic rocks provide better inflow of groundwater. Consequently, the wells over the metabasic rocks have water available throughout the year, while the wells over the quartzites become devoid of water in the dry season. The hydrogeological map of East Khasi Hills District of Meghalaya is shown in Fig. 3, which shows that the highest groundwater potential is in the coarse sandstone, silt, shale, and clay formations, though their occurrence is limited. The majority of the area is occupied by quartzite and granite rocks having a groundwater potential of 5–15 m³/hr.

Sampling and preservation

Twenty groundwater samples were collected during each of the pre-monsoon and post-monsoon seasons of 2018. The details of the sampling locations are shown in Table 1. The sampling points are located in the middle of the basin, as the northeast and southwest parts of the study region were forests/hilly areas and hence not accessible (Fig. 1). The samples were collected from bore wells in clean polyethylene bottles, and acid was added in order to preserve them [4,15,16]. The water samples for trace element analysis were collected in acid-leached polyethylene bottles and preserved by adding ultra-pure nitric acid (5 mL/L). All the samples were stored in sampling kits maintained at 4 °C and brought to the laboratory for detailed physico-chemical analysis. The distribution of sampling locations is shown in Fig. 1.

Analysis

All the chemicals used for the analysis were of analytical grade (Merck). To analyse the metal content in the samples, standard solutions of metal ions were procured from Merck, Germany. De-ionized water was used for the analysis. Samples for metal analysis were filtered using a 0.45 µm membrane filter. Glassware and all other containers used for trace element analysis were thoroughly cleaned using appropriate methods [4]. Prescribed standard methods were used for the analysis of physico-chemical parameters [15,16]. The analysis of anions and cations was carried out using an Ion Chromatograph (IC) (Make: Metrohm, Model 930). Metal analysis was done by Inductively Coupled Plasma Mass Spectrometry (ICP-MS) (Make: PerkinElmer, Model: ELAN DRC-e). For bicarbonate analysis, a Potentiometric Auto Titrator (Model 888 system) was used. Analytical precision was < 5% for all the analytes (anions and cations) and metals, and accuracy was < 5%. Alkalinity was determined by setting the end point using the Potentiometric Auto Titrator, and finally bicarbonate was calculated using the inbuilt formula in the system (titration accuracy < 2.0%, precision < 1.5% and systemic error < ± 0.010 mL). Errors in ionic balance were < 5% for each analysis. The ionic balance was calculated by the formula [{(TZ⁺ − TZ⁻)/(TZ⁺ + TZ⁻)} × 100], thus establishing the reliability and quality of the analytical results. Calibration curves of standard solutions for the respective constituents were drawn for the quantification of chemical constituents. AQUACHEM 2011.1 software was used for drawing the Piper plot.

Groundwater quality evaluation for drinking purposes

Groundwater quality estimation in Shillong was done for all the necessary organoleptic and physico-chemical parameters. The metals were analysed only for the samples collected during pre-monsoon. Tested values were compared with the standard values given by the Bureau of Indian Standards (BIS) and the World Health Organization (WHO) [17,18] (Tables 2–3).
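Before turning to the results, note that the ionic-balance criterion quoted in the Analysis subsection is straightforward to compute; the following sketch uses illustrative numbers, not the paper's data:

```python
# Charge-balance error, per the formula quoted above:
# error % = (TZ+ - TZ-) / (TZ+ + TZ-) * 100, with all ions in meq/L.
# The concentrations below are illustrative, not the paper's measurements.
cations = {"Na": 1.2, "K": 0.1, "Ca": 1.5, "Mg": 0.8}    # meq/L
anions  = {"Cl": 1.4, "SO4": 0.6, "HCO3": 1.5, "NO3": 0.2}

tz_plus  = sum(cations.values())
tz_minus = sum(anions.values())
error = (tz_plus - tz_minus) / (tz_plus + tz_minus) * 100

# An analysis is conventionally accepted when |error| < 5 %.
print(f"charge-balance error = {error:+.2f} %")
```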
From Tables 2–3 it can be seen that, among the general parameters, nitrate (NO₃⁻) is the only one whose value exceeds the acceptable limit. Another parameter of interest is pH. As per the BIS and WHO, the acceptable pH values for drinking purposes should lie within the range of 6.5–8.5. It was seen that none of the samples had a pH value above 8.5; however, 12 samples recorded values below 6.5, the minimum being 3.5. This indicates acidic contamination in the groundwater. One of the reasons for this acidic contamination might be the influence of the geological factors of the area. Meghalaya is known for its large coal deposits [19], and Indian coal is characterized by a high content of the iron sulphide pyrite [20]. Moreover, the geology of Meghalaya is also characterized by the presence of high iron content [14]. Iron sulphide upon oxidation forms sulphuric acid (Eqs. 1 and 2) [21], which might increase the acidity of the groundwater. Another reason might be contamination from acid mine drainage [22]. The high concentration of nitrate in samples collected from wells number 5, 7, 12, 15, and 20 (Table 1) is also indicative of unhygienic conditions near these wells and of contamination due to municipal sewage, which was found flowing through the open drains. There was no diffuse contamination from fertilizers [5]. Contamination due to sewage is of serious concern and needs to be addressed, as it is difficult to restore groundwater quality once it is contaminated [7]. The spot value maps for pH and nitrate are presented in Fig. 4. Analysis of the water quality results indicates that the sites where pH and nitrate values are persistently beyond the acceptable range are largely the same. High values of nitrate can be attributed to faecal contamination through the open municipal drains [5]. Further, the analytical results of metals reveal that the water quality of the area is affected to a considerable extent by the presence of iron (Fe), manganese (Mn), mercury (Hg), nickel (Ni), and cadmium (Cd). Among these elements, the presence of Fe and Mn can be attributed to local geogenic causes. Low pH in the groundwater might be one of the reasons for the occurrence of high amounts of Fe and Mn: it is a fact that acidic pH results in greater dissolution of Fe and Mn [23]. Since the pH values in the groundwater of Shillong are far below the acceptable range and the water is acidic (Table 2), the excess amount of Fe and Mn is to be expected. In such conditions, iron occurs in the form of Fe²⁺. Such water might take on a rusty colour upon being brought into the atmosphere, owing to the oxidation of Fe²⁺ to Fe³⁺. The occurrence of Fe and Mn is not very harmful because of their natural presence in the human body [4,24,25]. However, dissolved iron (Fe²⁺) results in the growth of iron bacteria within the bore wells, which might create problems of unpleasant taste and odour in the bore well waters. Therefore, it is advisable to disinfect the bore wells and plumbing fixtures at regular time intervals. The occurrence of Hg, Ni, and Cd is undoubtedly a reason for concern, considering their harmful impacts on the human body. These three elements are generally of industrial origin. Though very few samples exceed the acceptable limit, the presence of these metals shows that there is seepage either from point or non-point sources, which is contaminating the groundwater. Discharge from open municipal drains could also be one of the reasons for the occurrence of metals.
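The reactions cited above as Eqs. (1) and (2) did not survive in this copy; the standard pyrite-oxidation reactions usually given for acid generation, and presumably the ones intended, are:

```latex
% Standard pyrite-oxidation reactions generating acidity (presumably the
% intended Eqs. 1 and 2).
\begin{align}
  2\,\mathrm{FeS_2} + 7\,\mathrm{O_2} + 2\,\mathrm{H_2O} &\rightarrow
    2\,\mathrm{Fe^{2+}} + 4\,\mathrm{SO_4^{2-}} + 4\,\mathrm{H^+} \tag{1}\\
  4\,\mathrm{Fe^{2+}} + \mathrm{O_2} + 4\,\mathrm{H^+} &\rightarrow
    4\,\mathrm{Fe^{3+}} + 2\,\mathrm{H_2O} \tag{2}
\end{align}
```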
Coming back to the metals: inappropriate disposal of waste from industries manufacturing dry cell batteries, light bulbs, and other fluorescent items also contributes toward groundwater contamination [22].

Groundwater quality evaluation for irrigation purposes

For agricultural purposes, it is necessary to evaluate groundwater samples, as water of suitable quality is one of the prime requirements for enhancing crop growth and soil properties [26]. With this purpose, chemical parameters were assessed to determine the quality of the water for irrigation purposes (Table 4). Total dissolved solids (TDS) is one of the most important parameters, and its value for all the samples is below 1000 mg/L, the maximum value being 454.4 mg/L. This indicates water of a non-saline nature. Electrical conductivity is another parameter representing salinity. High conductivity is not considered good, as it might lead to high salinity. Table 4 shows that 70% of the samples have conductivity < 250 µS/cm, while 25% lie in the range of 250–750 µS/cm. Therefore, this water is suitable for irrigation purposes in terms of salinity hazard. Apart from salinity, another important factor to consider for irrigation purposes is alkalinity. Sodium concentration in the soil affects the sodium adsorption ratio. A high concentration of sodium results in an alkali hazard in the soil. In such conditions, clay particles tend to absorb sodium ions and displace magnesium and calcium ions. This results in saturation of the cation exchange complex with sodium, and it further leads to dispersion of clay particles, thereby altering the soil structure [4]. Permeability of the soil may also be affected by such an exchange of cations [34]. In the present study, the alkalinity hazard is less than 10, which indicates that the water is suitable for agricultural purposes. As per the permeability index also, the majority of the samples lie within the suitable range. Percent Na and Kelly's ratio indicate the sodium content in water. For both these parameters, the values were found to be within the suitable range (Table 4). The US salinity diagram was plotted for assessing the sodium and salinity hazard, as shown in Fig. 5. A total of 45% of the pre-monsoon samples lie in the C1-S1 category, while 25% of the samples lie in the C2-S1 category. Similarly, among the post-monsoon samples, 50% lie in the C1-S1 category and 20% lie in the C2-S1 category. The C1-S1 category represents low salinity and low sodium hazard, while the C2-S1 category represents medium salinity and low sodium hazard [28,35]. Thus, this analysis corroborates that the groundwater is fit for irrigation purposes. However, it is to be noted that one pre-monsoon sample lies in the C3-S1 category as well, which indicates high salinity and low sodium hazard. This particular condition may be attributed to local factors. Plants having sufficient salt tolerance may be preferred for cultivation using this groundwater [4]. The magnesium ratio and residual sodium carbonate are also important parameters for determining the alkalinity hazard in water to be used for irrigation purposes. Kumar et al. reported that alkalinity may increase if the water contains a high magnesium content, which ultimately affects the yield of the crop [36]. It is intriguing to note that out of the 20 samples in each of the pre-monsoon and post-monsoon seasons, only 2 samples in each season were found suitable as far as the magnesium ratio is concerned (Table 4).
However, in such a case, the soil to be used for cultivation may be treated with some organic/inorganic acidifying materials. Another important parameter of interest is residual sodium carbonate (RSC), which is usually assessed to check the suitability of irrigation water for clayey soil. This is so because clayey soils possess a high cation exchange capacity. It can be seen in Table 4 that all the samples are found to be suitable, having RSC values less than 1.25 [32]. The corrosivity ratio is the ratio of saline salts to alkaline earths in groundwater [33] and is usually calculated for determining the suitability of groundwater for transportation and distribution through metallic pipes. In case the water is found to be corrosive for the pipes, polyvinylchloride (PVC) pipes may be utilized. In the Shillong water, almost half of the samples were found to be corrosive (Table 4), and therefore the use of PVC pipes is advised. Thus, it can be seen that the water quality of Shillong is suitable for agricultural purposes in respect of all the parameters except the magnesium ratio and the corrosivity ratio.

Correlation among hydrochemical variables

Pearson's correlation matrix is a good method to establish the relationship among various variables. The correlation matrices for the pre-monsoon and post-monsoon seasons (Table 5 and Table 6, respectively) show the inter-relationships among 13 hydrochemical parameters. An excess amount of salt dissolved in water (TDS) increases the electricity-conducting ability of the water, thereby establishing the correlation with EC. Positive correlation of EC also exists with Na, K, Ca, Cl, etc., in both the pre-monsoon and post-monsoon seasons. Further, strong correlation is also seen between TDS and Na, K, Ca, and Cl. Positive correlation is also seen between total hardness (TH) and Ca and Mg. Since hardness is caused by the carbonates and bicarbonates of Ca and Mg, the positive correlation among them is expected. It is interesting to note that Cl is strongly correlated with NO₃ in both seasons. This might be due to the influence of sewage contamination, as sewage was found flowing through the open drains near the sampling sites.

Chemical nature of the groundwater

The Piper trilinear diagram is a very convenient method to classify groundwater [37]. This diagram was developed for the groundwater of Shillong for both the pre-monsoon (Fig. 6) and post-monsoon (Fig. 7) seasons. It can be seen that during the pre-monsoon season, most of the cations are of the calcium type and the sodium-and-potassium type. The majority of the anions are of the chloride type, and a few are of the bicarbonate type. Similarly, in the post-monsoon season too, most of the cations are of the calcium type, the majority of the anions are of the chloride type, and a few are of the bicarbonate type. Thus, seasonal variation does not have much effect on groundwater quality. Overall, the groundwater samples are distributed among the calcium chloride and mixed-type hydrochemical facies. The findings of the Piper diagram can also be corroborated with Chadha's diagram, as shown in Fig. 8. It can be seen that the pre-monsoon samples are distributed among three hydrochemical facies, viz. Ca–Mg–HCO₃ type, Na–Cl type, and Ca–Mg–Cl type. However, the post-monsoon samples are more or less equally distributed among four facies, viz. Ca–Mg–HCO₃, Na–Cl, Ca–Mg–Cl, and Na–HCO₃, and hence such groundwater may be called mixed type.
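The irrigation-quality indices used in the preceding subsections follow textbook definitions; the sketch below uses the standard formulas (which Table 4 presumably also follows) with illustrative concentrations, not the paper's data:

```python
# Standard irrigation-quality indices (textbook definitions).  Major ions
# in meq/L, except the corrosivity ratio, which is conventionally computed
# from mg/L values.  All concentrations below are illustrative.
from math import sqrt

na, k, ca, mg = 1.2, 0.1, 1.5, 0.8        # meq/L
co3, hco3 = 0.0, 1.5                      # meq/L
cl_mgl, so4_mgl, hco3_mgl, co3_mgl = 50.0, 30.0, 92.0, 0.0   # mg/L

sar      = na / sqrt((ca + mg) / 2)             # sodium adsorption ratio (< 10: suitable)
pct_na   = (na + k) / (ca + mg + na + k) * 100  # percent sodium
kelly    = na / (ca + mg)                       # Kelly's ratio (< 1: suitable)
mg_ratio = mg / (ca + mg) * 100                 # magnesium ratio (< 50: suitable)
rsc      = (co3 + hco3) - (ca + mg)             # residual sodium carbonate (< 1.25: suitable)
cr = ((cl_mgl / 35.5) + 2 * (so4_mgl / 96.0)) / (2 * (hco3_mgl + co3_mgl) / 100.0)
                                                # corrosivity ratio (> 1: corrosive)

print(f"SAR={sar:.2f}  %Na={pct_na:.1f}  KR={kelly:.2f}  "
      f"MgR={mg_ratio:.1f}  RSC={rsc:.2f}  CR={cr:.2f}")
```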
Gibbs diagram and rock-water interaction To understand the mechanisms governing the groundwater chemistry of Shillong, the Gibbs diagram is a useful tool [38]. Gibbs diagrams were plotted for both cations and anions during the pre- and post-monsoon seasons, as shown in Fig. 9. It is evident that, during both the pre-monsoon and post-monsoon seasons, rock dominance and atmospheric precipitation are the two most important mechanisms affecting the ionic concentrations in the groundwater of Shillong [38]. The important cations in groundwater are Na, K, and Ca, while the anions are Cl and SO4. The relation between these ions and TDS indicates that the water is in partial equilibrium with the rocks of the region [38]. Moreover, as Meghalaya is one of the regions receiving the heaviest rainfall in India, the prominence of the atmospheric precipitation mechanism is expected; the chemical constituents of the groundwater are influenced by dissolved salts derived from atmospheric precipitation. These diagrams therefore indicate that rock weathering is not the sole mechanism controlling the chemistry of the groundwater; the composition of dissolved salts is influenced by atmospheric precipitation as well. Mode of weathering and identification of hydrogeochemical processes The Gibbs diagram showed that rock weathering is one of the factors controlling the groundwater chemistry, alongside atmospheric precipitation. It is therefore important to explore the weathering process. The scatter diagram of (Ca + Mg) vs. (HCO3 + SO4) in Fig. 10a shows that the samples lie near the equiline in the carbonate weathering zone during both the pre-monsoon and post-monsoon seasons, indicating that carbonate weathering is the predominant mechanism affecting Shillong's groundwater. To determine the rock types of Ca and Mg involved in the weathering process, the Ca/Mg ratio was calculated. A Ca/Mg ratio of unity denotes the dissolution of dolomite, while higher ratios indicate the dominance of calcite [4,39]. Still higher ratios, viz. Ca/Mg > 2, represent the dissolution of silicate minerals into the groundwater [40]. The Ca/Mg ratios in the groundwater of Shillong district point to the weathering of silicate minerals in the rocks, as both the pre-monsoon and post-monsoon samples lie above a Ca/Mg ratio of 2. Conclusion The study was carried out to assess the suitability of the groundwater of the Shillong region in India for drinking and irrigation purposes and to understand the hydrochemical processes involved. It was found that the groundwater in the region is acidic in nature and has a high concentration of nitrate. Iron and manganese were also found in high amounts. Among the metals, nickel, mercury, and cadmium occurred in high concentrations in some of the samples, which is an issue of concern. Anthropogenic factors could be responsible for the high concentrations of these parameters. Consumption of water contaminated with such metals might result in a variety of health ailments, and therefore its use for drinking without necessary treatment is not recommended. However, the adoption of suitable removal technologies, such as oxidation/filtration, might help improve the water quality. Moreover, the water quality in the region was found suitable for agricultural purposes in respect of all the parameters except the magnesium ratio and the corrosivity ratio.
The variation between the pre-monsoon and post-monsoon groundwater samples was minimal. The hydrochemical studies indicate that the groundwater in the region is influenced by rock weathering along with atmospheric precipitation, which is consistent with Meghalaya receiving the highest rainfall in India. Code availability Not applicable. Compliance with ethical standards Conflict of interest The authors declare no conflicts of interest. Ethical approval Not applicable. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
5,653.2
2021-01-01T00:00:00.000
[ "Environmental Science", "Geology" ]
TIME–SPACE ANALYSIS OF TRANSPORT SYSTEM USING DIFFERENT MAPPING METHODS. Transport systems exist within at least two types of space. One is the apparent geographic space, but equally important is the time–space implied by the travel time relations created by the system. Differences between the geographic and time–spaces are properties induced by the transport system. Methods for transforming geographic space into time–space in order to explore, visualize, and analyse transport systems were initially developed in the 1960s and 1970s, but they were not pursued beyond this initial flurry of research activity, most likely because of the low computational capacity then available for handling and processing large amounts of digital geographic data. This paper presents a case study of the transformation possibilities, and particularly the use of non-affine transformations of maps, the Rubber-Sheet Method (RSM), using a typical GIS software package, ArcView, in order to analyse the current status and development possibilities of the Hungarian railway system. Introduction It is very common to build distorted graphics in order to highlight relevant information, for example the CO2 emissions on Earth by country (Fig. 1, Bournay 2008): the higher the CO2 emission, the larger the distortion. The authors have investigated the use of distorted geographical maps in order to reveal the distortion of travel time. Understanding the travel time relationships induced by a transport system can be crucial for assessing its performance. Transport systems attempt to improve the efficiency of trading time for space when moving between geographic locations. Greater time efficiency for movement can enhance individuals' accessibility to activities and resources by freeing more time for travel and activity participation. Conversely, less time efficiency in geographic movement can reduce accessibility through the consumption of scarce temporal resources that could otherwise be used for travel and activity participation (Hägerstrand 1970). Spatial variations and patterns in these travel time relationships can help transport analysts and planners understand relative differences in system performance, guiding the planning, design and deployment of transport infrastructure and services towards efficient and equitable outcomes. The travel time relationships induced by a transport system imply a time–space in which relative locations and proximity relationships can differ from those in geographic space. As with geographic space, mapping and spatial analysis of time–spaces can be illuminating. Time–space maps can provide a synoptic visual summary of the travel time relationships in a given environment, indicating areas where the transport system is performing well and other areas where it is inefficient. Also, since induced travel time relations are central to transport systems, spatial analysis of time–space can be more meaningful than analysis of geographic space in understanding transport system performance (Ahmed, Miller 2007). Several attempts at travel-time-based maps have been made in Hungary as well (Fig. 2). The authors' aim was to build a distorted map that clearly shows the changes in travel time compared with the geographical map.
The authors have investigated different kinds of transformations of railway infrastructure maps in order to gain new information on the infrastructure (e.g. the degree of centralisation, missing links, etc.). The basic Hungarian railway infrastructure has been examined, but the described method can be adapted to other transport modes and other countries as well. Railways are now entering their second 'golden age': at the European level, more and more funds are available for railway investment in order to increase the efficient use of the railroad (Gašparík, Zitrický 2010). A method was investigated that is able not only to analyse the reduction of travel time as a social benefit for the current system, but is also capable of estimating the social benefits of future investments. Methodology Mapping time–spaces has a long history in spatial analysis. Research dates back to pioneering work in the 1960s by Bunge (1960) and Tobler (1961). Cartographic transformations to generate time–spaces reached a peak in the 1970s with the work of researchers such as Marchand (1973), Forer (1974), Ewing, Wolfe (1977), Clark (1977), Muller (1978). Despite the efforts of these and subsequent researchers, key issues surrounding time–space mapping remain unresolved. Inconclusive results regarding the nature of time–spaces and their structure probably result from the state of key transformation techniques such as Multi-Dimensional Scaling (MDS) and map comparison techniques. There are different ways to establish the connection between the two different types of maps (the travel time map and the geographical map), as in the case of Berta and Török (2010). The easiest and most accurate way was to find control points (significant points, which can easily be found on both maps) to determine the mathematical relationship. In our case 34 different points were used in the transformations (all county seats and major border crossing points for passenger train transport). The corresponding travel time data were collected between them and two different matrices were built in a Microsoft Excel spreadsheet (Figs 3 and 4). Travel time can act as a distance in the mathematical sense, and a symmetric travel-time distance matrix between m points can be developed:

D = [d_ij], i, j = 1, ..., m, (1)

where D is the overall distance matrix (a symmetric, square matrix) and d_ij is the travel time distance between cities i and j. The matrix is symmetric because it is assumed that d_AB = d_BA, and d_AB = 0 if A = B; that is, the authors assume that in railway transport the travel times are the same in both directions. Mathematically, travel time behaves like a distance function, so it can also serve as the basis of a graph. In order to visualise the two different graphs built from the distances (geographical and time distances), the matrices were imported into the SPSS statistical analysis software (Statistical Package for Social Sciences, http://www.ibm.com/software/analytics/spss). In Euclidean space, the distance between two points is given by the Euclidean (2-norm) distance; in 2 dimensions, the minimum distance between two points is the length of the line segment between them, i.e. the shortest straight-line distance. The authors had to face the fact that the 2-norm 'Cartesian' distance does not describe the situation correctly, because the railway tracks do not follow the 'shortest' path. That is the reason why the authors changed the 'Cartesian' distance to the 'travel' distance.
'Travel' distance describes the distance between cities A and B along the route between them. To build a graph from the distances (geographical and time based), the necessary relative coordinates of the cities were calculated as the vertices of the graphs. MDS, a set of related statistical techniques often used in data visualization, was applied in SPSS. An MDS algorithm starts with a matrix (a matrix of distances in this case) and then assigns to each vertex a 'location' suitable for graphing:

x_i = (x_i1, x_i2), i = 1, ..., m, (2)

such that the pairwise Euclidean distances between the assigned locations reproduce the input distances as closely as possible:

d_ij = sqrt((x_i1 - x_j1)^2 + (x_i2 - x_j2)^2). (3)

As can be seen, there is a direct relation between Eq. (1) and Eq. (3): relation (3) describes the matrix of Euclidean distances based on the relative coordinates of the cities (vertices) in the graph. This is how the computer calculates the positions of the vertices (cities) relative to one another. The output of the MDS method in SPSS is the set of relative coordinates of the cities for the travel time distances (see Fig. 5, 2nd and 3rd columns). The graphs were visualized in a Microsoft Excel spreadsheet (Fig. 5); a minimal illustrative sketch of this MDS step is given below. A similar method was used by Dusek (2010), but there the calculations and graphical representation were conducted with the Darcy 2.0 software, and therefore only the Rubber-Sheet Method (RSM) with 23 nodes was used by Dusek. Since then, the development of computational capacity has made it possible to run the RSM with 34 nodes. The authors investigated the possibilities of linear and quadratic transformation. Finally, the graphs (geographical and travel time based) were saved in graphical (jpeg) format so that they could be imported into ArcView 10 to perform geographical information analysis. ArcView is a typical GIS software package, distributed by ESRI (http://www.esri.com). The software offers three possible ways to create the mathematical connection between the two point clouds. The first and most commonly used is the affine transformation, with which the transformed coordinates are derived as a linear function of the original coordinates; straight lines remain straight, and parallel lines remain parallel, after the transformation (Detrekői, Szabó 1995):

x' = a_0 + a_1 x + a_2 y;
y' = b_0 + b_1 x + b_2 y,

where (x, y) are the original and (x', y') the transformed coordinates, and a_i, b_i are the transformation parameters. Increasing the order of the transformation might give a better fit between the two datasets, so the second way is the quadratic transformation, whose main equations are the following:

x' = a_0 + a_1 x + a_2 y + a_3 x^2 + a_4 xy + a_5 y^2;
y' = b_0 + b_1 x + b_2 y + b_3 x^2 + b_4 xy + b_5 y^2.

The third is the rubber-sheet transformation. It is based on a 'flexible surface' in which the original map points are not uniformly transformed. Rubber-sheet transformations can also be implemented piecewise (the pieces are usually called patches), so the map can be divided into regions and every part can have its own transformation equation. The equations need to satisfy the continuity condition between parts, namely that the first and second derivatives agree at the connecting points. Therefore the residuals are always zero. The main equation cannot be described in closed form; it varies locally. Fig. 6 summarizes the technological steps. The described method is not inverted, as it makes no sense to build a spatial map from a time map, and this holds independently of the method of transformation. Results The transformation matrices were used to modify the geographical map in order to investigate railway travel times in Hungary (Fig. 7). The input dataset of travel times can vary through time (summer/winter period or day/night).
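Before turning to the results, here is the minimal sketch of the MDS step promised above. The paper used SPSS; this assumed equivalent uses scikit-learn, and the 4-city travel-time matrix is made up, standing in for the 34-node matrix.

    import numpy as np
    from sklearn.manifold import MDS

    # symmetric travel-time matrix D (minutes): d_ii = 0, d_ij = d_ji, cf. Eq. (1)
    D = np.array([
        [0.0, 150.0, 210.0, 300.0],
        [150.0, 0.0, 120.0, 240.0],
        [210.0, 120.0, 0.0, 180.0],
        [300.0, 240.0, 180.0, 0.0],
    ])

    # assign 2-D locations whose Euclidean distances approximate D, cf. Eqs. (2)-(3)
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    coords = mds.fit_transform(D)
    print(coords)  # relative coordinates of the cities in time-space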
The input dataset was based on the average travel times from the timetable. The three transformations require different numbers of control points: the first two are easier to handle because the affine transformation needs at least 3 and the quadratic at least 6 control points. Due to the local solutions of the rubber-sheet transformation, it is not possible to state an exact number of control points for it. In our case 34 points were used in the transformations (all county seats and major border crossing points for passenger train transport), which in the first two cases is more than the minimum, so these methods are easy to analyse. ArcView has a built-in least squares algorithm (LSA) to determine the elements of the different transformation matrices, and the Root Mean Square (RMS) error, which refers to the total length of the residual vectors, is also computable (a least-squares fitting sketch is given at the end of this section). Preliminary results of this model had already been published, but since then the model and the statistical analysis have been further developed (Ficzer et al. 2011). The Total RMS Error (TRMSE) takes the usual form

TRMSE = sqrt((1/n) * sum_{i=1..n} (dx_i^2 + dy_i^2)),

where n is the number of control points and (dx_i, dy_i) is the residual vector of control point i. The meaning of the RMS is illustrated in Fig. 8. As can be seen from Fig. 9, only rotation was used as the linear transformation to align the control points. The statistical results show that the TRMSE is quite high and therefore needs to be reduced. For this reason the authors increased the order of the transformation in order to obtain a better fit and a lower error. As can be seen from Fig. 10, linear elements were distorted by the quadratic transformation into parallelogram-like elements. The statistical results show that the TRMSE is smaller in the quadratic case than in the linear one, but it is still significant and therefore needs to be reduced further. The authors then did not increase the order of the transformation for a better approximation but chose another approach: the RSM, which provides zero TRMSE by definition, as it uses different distortion matrices for different locations. The computer established 34 local patches around the 34 cities and achieved a perfect fit with zero error. Rubber-sheeting based on planar affine transformations (White, Griffin 1985; Saalfeld 1985) has been very popular as an effective map conflation technique (Doytsher 2000). These techniques have been used for the rubber-sheeting of historical maps (Fuse et al. 1998; Shimizu et al. 1999); more recently, implementations have been reported by Niederoest (2002). The result of the rubber-sheet transformation in this case can be seen in Fig. 11. Conclusions The result of the investigation (Fig. 11) clearly shows the centralised position of the capital, Budapest, and the travel time distortion. Fig. 12 shows that the dark grey background now lies in the neighbouring countries. The remaining topology is evidently centralised: the core is Budapest. The time map reveals the missing links, since nowadays these belong to the neighbouring countries. Mostly radial directions were found in Hungary. The authors found that the radial tracks should be developed and extended with side lines. These results are based only on travel times, not on passenger counts. The linear and quadratic models had large errors, so the results gained from these models were not used for the country-wide investigation. These methods could, however, give a good basis for local investigations and optimisations (e.g. local bus route planning).
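The least-squares fitting referenced above can be sketched as follows; this is a minimal illustration, not ArcView's internal implementation, and the control-point pairs are made-up placeholders.

    import numpy as np

    src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])  # geographic
    dst = np.array([[0.1, 0.0], [1.2, 0.1], [0.0, 0.9], [1.1, 1.1], [0.6, 0.3]])  # time-space

    # design matrix for x' = a0 + a1*x + a2*y (same form for y')
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)

    pred = np.column_stack([A @ coef_x, A @ coef_y])
    residuals = dst - pred
    trmse = np.sqrt(np.mean(np.sum(residuals**2, axis=1)))  # total RMS error
    print("x' params:", coef_x, "y' params:", coef_y, "TRMSE:", trmse)

A quadratic fit would extend the design matrix with x^2, xy, and y^2 columns; the RSM, by contrast, fits a separate local transformation per patch, driving the residuals, and hence the TRMSE, to zero.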
As a result, it can be stated that map distortion functions well as a tool for railway infrastructure investigation. New and additional information can be derived from time maps as an analytic tool of visualization.
3,115
2014-09-22T00:00:00.000
[ "Geography", "Engineering" ]
Regularization versus renormalization: Why are Casimir energy differences so often finite? One of the very first applications of the quantum field theoretic vacuum state was in the development of the notion of Casimir energy. Now field theoretic Casimir energies, considered individually, are always infinite. But differences in Casimir energies (at worst regularized, not renormalized) are quite often finite --- a fortunate circumstance which luckily made some of the early calculations, (for instance, for parallel plates and hollow spheres), tolerably tractable. We shall explore the extent to which this observation can be made systematic. For instance: What are necessary and sufficient conditions for Casimir energy differences to be finite (with regularization but without renormalization)? And, when the Casimir energy differences are not formally finite, can anything useful nevertheless be said by invoking renormalization? We shall see that it is the difference in the first few Seeley--DeWitt coefficients that is central to answering these questions. In particular, for any collection of conductors (be they perfect or imperfect) and/or dielectrics, as long as one merely moves them around without changing their shape or volume, then physically the Casimir energy difference (and so also the physically interesting Casimir forces) are guaranteed to be finite without invoking any renormalization. Introduction Quantum field theoretic Casimir energies (considered in isolation) are typically infinite, requiring both regularization and renormalization to extract mathematically sensible answers, at the cost of sometimes obscuring the physics [1][2][3][4][5]. On the other hand, Casimir energy differences are quite often finite, and have a much more direct physical interpretation [1,2]. Additional background and general developments may be found in references [6][7][8][9][10][11][12][13][14][15]. In this article, I shall first argue (mathematically) that there are a large number of interesting physical situations where the Casimir energy differences, (and so the Casimir forces), are automatically known to be finite, even before starting specific computations. Secondly, I shall argue (mathematically) that one can often develop physically interesting "reference models" such that the Casimir energy difference between the physical system and the "reference model" is known to be finite, even before starting specific computations. (I will not actually calculate any Casimir energies - knowing that the result you are after is finite is often more than half the battle.) I shall first start with a simple formal argument to get the discussion oriented, and then provide a more careful argument in terms of regularized (but not renormalized) Casimir energies. Formal argument The formal argument starts with the exact result that:

ω = -(1/(2√π)) ∫_0^∞ [e^{-ω²t} - 1] t^{-3/2} dt.

Now let ω_n and (ω*)_n be two infinite sequences of numbers; then, again as an exact result:

Σ_n (ω_n - ω*_n) = -(1/(2√π)) ∫_0^∞ Σ_n [e^{-ω_n² t} - e^{-(ω*_n)² t}] t^{-3/2} dt.

Then in terms of the heat kernel K(t) defined by

K(t) = Σ_n e^{-ω_n² t},

we formally have:

Σ_n ω_n - Σ_n ω*_n = -(1/(2√π)) ∫_0^∞ [K(t) - K*(t)] t^{-3/2} dt.

But, (now assuming that the ω_n² and (ω*_n)² are in fact the eigenvalues of some second-order linear differential operators), by the standard Seeley-DeWitt expansion we have both

K(t) ≃ (4πt)^{-d/2} Σ_{i=0,1/2,1,3/2,...} a_i t^i  and  K*(t) ≃ (4πt)^{-d/2} Σ_{i=0,1/2,1,3/2,...} a*_i t^i.

Note d is the number of space dimensions. As will be discussed more fully below, the integer indexed a_n have both bulk and boundary contributions, while the half-integer indexed a_{n+1/2} have only boundary contributions.
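As a concrete numerical illustration (a sketch, not from the paper, in Python purely for illustration): for Dirichlet modes ω_n = nπ/a on an interval of length a in d = 1, the heat kernel should approach its two leading Seeley-DeWitt terms as t → 0, a bulk term proportional to the "volume" a and a constant boundary term.

    import numpy as np

    def heat_kernel(a, t, n_max=20000):
        """K(t) = sum_n exp(-omega_n^2 t) for Dirichlet modes omega_n = n*pi/a."""
        n = np.arange(1, n_max + 1)
        return np.sum(np.exp(-((n * np.pi / a) ** 2) * t))

    a = 1.0
    for t in (1e-2, 1e-3, 1e-4):
        exact = heat_kernel(a, t)
        seeley_dewitt = a / np.sqrt(4 * np.pi * t) - 0.5  # bulk + boundary terms
        print(t, exact, seeley_dewitt)  # agreement improves rapidly as t -> 0

With the expansion verified in this toy case, we return to differences of heat kernels.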
Then for the difference in heat kernels we have:

K(t) - K*(t) ≃ (4πt)^{-d/2} Σ_{i=0}^{(d+1)/2} ∆a_i t^i + (UV finite), with ∆a_i = a_i - a*_i. (2.9)

Here the designation "UV finite" means that any remaining terms contributing to the "UV finite" piece are now guaranteed to not have any infinities coming from the t → 0 region of integration. That is, taking E_Casimir = (1/2) Σ_n ω_n, we have the formal result:

∆E_Casimir = -(1/(4√π)) ∫_0^∞ [K(t) - K*(t)] t^{-3/2} dt = (divergent terms ∝ ∆a_i) + (finite).

All of the potentially UV-divergent terms are now concentrated in the d + 2 leading terms proportional to the ∆a_i. The rest of the article will involve several refinements on this simple theme. Generally, in d space dimensions, if we are comparing two physical systems for which the first d + 2 Seeley-DeWitt coefficients are equal, then the difference in Casimir energies will be finite. Exact argument Let us now regularize everything a little more carefully, to develop an exact rather than formal argument. (Initially we shall use the complementary error function [erfc(x) = 1 − erf(x)] as a particularly simple and mathematically transparent regulator, but will subsequently show that physically almost any smooth cutoff function will do.) We have the exact result that:

ω erfc(ω/Ω) = (Ω/√π) e^{-ω²/Ω²} - (1/(2√π)) ∫_{1/Ω²}^∞ e^{-ω²t} t^{-3/2} dt.

This leads to the further exact result that:

Σ_n ω_n erfc(ω_n/Ω) = (Ω/√π) Σ_n e^{-ω_n²/Ω²} - (1/(2√π)) Σ_n ∫_{1/Ω²}^∞ e^{-ω_n² t} t^{-3/2} dt.

But, because all the relevant quantities are guaranteed finite, we can now exchange sum and integral to obtain the exact (no longer just formal) result:

Σ_n ω_n erfc(ω_n/Ω) = (Ω/√π) K(1/Ω²) - (1/(2√π)) ∫_{1/Ω²}^∞ K(t) t^{-3/2} dt.

Now apply the Seeley-DeWitt expansion to the heat kernel. For the heat kernel term (now choosing N = d), one obtains a finite sum of powers of Ω controlled by the a_i with i ≤ d/2. Working with the integral term is a little trickier. In the integral we instead choose N = d + 1; then, treating the logarithmic term separately, the a_{(d+1)/2} contribution integrates to a term proportional to ln Ω. That the a_{(d+1)/2} term leads to a logarithmic term in the Casimir energy (and effective action) is well-known. See for instance references [5,16,17]. Performing the remaining integrals and assembling all the pieces, we now have the exact result:

E_Ω = (1/2) Σ_n ω_n erfc(ω_n/Ω) = Σ_{i=0,1/2,...}^{d/2} k_i a_i Ω^{d+1-2i} + k_{(d+1)/2} a_{(d+1)/2} ln Ω + (finite) + O(1/Ω). (3.9)

For our current purposes the specific values of the dimensionless coefficients k_i are not important. For the difference between two systems this yields

∆E_Ω = Σ_{i=0,1/2,...}^{d/2} k_i ∆a_i Ω^{d+1-2i} + k_{(d+1)/2} ∆a_{(d+1)/2} ln Ω + (finite) + O(1/Ω).

We can now safely take the limit as the cutoff is removed (Ω → ∞). We have: Theorem 2 (Casimir energy differences) If we compare two systems where the first d + 2 Seeley-DeWitt coefficients are equal, then:

∆E_Casimir = lim_{Ω→∞} ∆E_Ω = (finite).

This is a very nice mathematical theorem, but how relevant is it to real world physics? Just how general is this phenomenon? Unchanging Seeley-DeWitt coefficients Perhaps unexpectedly, there are very many physically interesting situations where the (first few) Seeley-DeWitt coefficients are unchanging. The pre-eminent cases are these: • Parallel plates. • Hollow spheres. In both of these cases an infra-red regulator is needed, and some subtle thought is still required. Much more radically: • Take any collection of perfect conductors. Move them around relative to each other. (Without distorting their shapes and/or volumes.) • Then the change in Casimir energy is finite. • Then the Casimir forces are finite. (Subsequently, we shall show that similar comments can be made for both imperfect conductors and dielectrics.) To establish these results we note that for a region V with boundary ∂V we have the quite standard results that:

a_0 ∝ vol(V); a_{1/2} ∝ area(∂V); a_1 = ∫_V {R, V(x)} d^d x + ∫_∂V {K} d^{d-1}x; a_{3/2} = ∫_∂V {K², K_{ab}K^{ab}, R, V(x)} d^{d-1}x; ...

Here the { , , } denote various species-dependent linear combinations of the relevant terms. For current purposes we do not need to know the specific values of any of the dimensionless coefficients. (There are also contributions to the a_i from kinks and corners; but let's stay with smooth boundaries for now.) Above we have retained terms due to both intrinsic and extrinsic curvature, plus a scalar potential V(x).
One could in principle obtain even more terms from background electromagnetic or gauge fields, but the terms retained above are sufficient for current purposes. Parallel plates Working with QED (so V = 0) in flat spacetime (Riemann tensor zero) with flat boundaries (extrinsic curvature zero):

a_0 ∝ (volume); a_{1/2} ∝ (surface area); a_1 = a_{3/2} = a_2 = ... = 0.

So for finite Casimir energy differences one just needs to keep volume and surface area fixed. For example: Apply periodic boundary conditions in d − 1 spatial directions, and apply conducting box boundary conditions in the remaining spatial direction. Physically this means you put the Casimir plates inside a big box, of fixed size, with two faces parallel to the plates. Consider the situation where one varies the distance between the Casimir plates while keeping the size of the big box (the infra-red [IR] regulator) fixed. From the above, and with no further calculation required, we can at least deduce that the Casimir energy difference (and so the Casimir force between the plates) is finite. Hollow spheres We are now working with QED in flat spacetime with thin spherical boundaries. The idea is to understand as much as we can regarding Boyer's calculation [2], but without explicit computation. (We shall assume 3+1 dimensions.) Step I (QED in flat spacetime) Using only the fact that we are working with QED (V = 0) in flat spacetime (Riemann tensor zero): a_0 ∝ (volume); a_{1/2} ∝ (surface area). Since the extrinsic curvature is non-zero, K ≠ 0, keeping control of the higher a_i, the higher-order Seeley-DeWitt coefficients, is now a little trickier. Step II (thin boundaries) As long as the boundaries are thin, then K_inside = −K_outside, leading to cancellations in both a_1 and a_2. Similarly the thin boundaries take up zero volume, so the total volume is held fixed. (The outermost boundary, the IR regulator, is always held fixed.) Then: ∆a_0 → 0; ∆a_1 → 0; ∆a_2 → 0. Step III (rescaling - conformal invariance) As long as the inner boundaries for the two situations we are considering are simply rescaled versions of each other, then ∫_∂V K² √g₂ d²x is scale invariant, thus leading to a cancellation in a_{3/2}. (The outermost boundary, the IR regulator, is always held fixed.) Then: ∆a_{3/2} → 0. Note we still have to deal with ∆a_{1/2}. Step IV (TE and TM modes) In spherical symmetry, one can easily define TE and TM modes. Note that they have equal and opposite contributions to a_{1/2}, again leading to a cancellation in a_{1/2}. (The outermost boundary is always held fixed.) Then: ∆a_{1/2} → 0. This finally is enough to guarantee finiteness of the Casimir energy difference. Step V (finiteness) From the above we have ∆(Casimir Energy) = (finite). (5.1) This observation underlies the otherwise quite "miraculous cancellations" in Boyer's calculation of the Casimir energy of a hollow sphere [2]. Comparing two hollow spheres of radius a and b, and letting the IR regulator (which is the same for each sphere) move out to infinity:

E_Casimir(a) − E_Casimir(b) = (finite).

Boyer uses Riesz resummation, (the so-called "Riesz means"), which is justified only in hindsight. If you know the answer you want is finite, then any of the standard "regular" resummation techniques will do [18]. In contrast if you don't know beforehand that the answer you want is finite, then blindly calculating is asking for trouble. Arbitrary arrangement of fixed-shape fixed-volume perfect conductors Consider now any collection of fixed-shape fixed-volume perfect conductors in 3+1 dimensions. We are working with QED (V = 0) in flat spacetime (Riemann tensor zero).
Then: a_0 ∝ (volume); a_{1/2} ∝ (surface area); and fixed shape plus fixed volume imply fixed extrinsic curvature, so all the ∆a_i ≡ 0. That is: • Take any collection of perfect conductors. Move them around relative to each other. (Without distorting their shapes and/or volumes.) • Then the change in Casimir energy, and the Casimir forces, are finite. We shall subsequently see how to generalize this result to imperfect conductors and/or dielectrics. Reference models Consider now a non-zero potential (V ≠ 0), in flat spacetime (Riemann tensor zero), with periodic boundary conditions (so that there is no boundary). We have:

a_0 ∝ (volume); a_{1/2} = a_{3/2} = 0 (no boundary); a_1 ∝ ∫ V(x) d^d x; a_2 ∝ ∫ {V(x)², ∇²V(x)} d^d x.

So for finiteness we "just" need to keep a_0, a_1, and a_2 fixed. 1+1 dimensions In (1+1) dimensions let us define the spatial average

⟨V⟩ = (1/L) ∫_0^L V(x) dx.

Compare the two situations: the physical problem, with potential V(x) and eigenvalues ω_n², and the reference problem, with constant potential ⟨V⟩ and eigenvalues (ω̃_n)². Then ∆a_0 = 0 and ∆a_1 = 0, (and there are no boundary terms), so the Casimir energy difference is finite. In fact in this situation the reference eigenvalues ω̃_n can be written down explicitly as

ω̃_n = √((2πn/L)² + ⟨V⟩), n ∈ Z.

The ω_n depend on V(x) and can be quite messy; the difference between the ω_n and the reference ω̃_n is however well behaved. What if Casimir energy differences are not finite? Now there are certainly (mathematical) situations where the ∆a_i ≠ 0 and the Casimir energy difference is not naively finite. This merely means one has to be more careful thinking about the physics. For instance: • Real metals and real dielectrics are transparent in the UV. • The UV cutoff Ω is then merely a stand-in for all the complicated physics. For real metals and real dielectrics the cutoff represents real physics. See for instance the discussion in references [20][21][22] and compare with the discussion in [23][24][25][26]. Note that the discussion regarding real metals and real dielectrics has often led to considerable disagreement regarding interpretation [27][28][29]. (My own view, as should be clear from the current article, is that Casimir energies are ultimately determined by looking at differences in zero-point energies, summed over all relevant modes.) General class of cutoff functions Let us write a general class of cutoff functions as

f(ω/Ω) = ∫_0^∞ g(ξ) erfc(ξω/Ω) dξ, with ∫_0^∞ g(ξ) dξ = 1.

Note f(0) = 1, while f(∞) = 0, and f(ω/Ω) is monotone decreasing (for g(ξ) ≥ 0). To see just how general this class of cutoff functions is, substitute χ = 1/ξ² in the defining integral: up to elementary manipulations, this is just the Laplace transform of g(χ^{-1/2})/χ, evaluated at the point s = ω²/Ω². Consequently, as long as the inverse Laplace transform of f(s^{1/2}) exists, which is a relatively mild condition on the cutoff function f(s^{1/2}), then we can determine g(ξ) in terms of f(ω/Ω). Indeed, there is a little-known algorithm due to Post [30], see also Bryan [31], and reference [32], that allows for inversion of Laplace transforms by taking arbitrarily high derivatives. Specifically, if G(s) is the Laplace transform of g(z), then

g(z) = lim_{k→∞} [(-1)^k / k!] (k/z)^{k+1} G^{(k)}(k/z).

This algorithm may not always be practical, since one needs arbitrarily high derivatives. Even if not always practical, it again settles an important issue of principle - knowledge of the cutoff f(ω/Ω) in principle allows one to reconstruct an equivalent weighting g(ξ). The point is that almost any cutoff function f(ω/Ω) can be cast in this "weighted integral over erf-functions" form. (In particular we could rephrase all of the preceding discussion concerning erf-regularization in terms of this more general f-regularization, but when ∆a_i = 0 nothing new is obtained. It is only when ∆a_i ≠ 0 that general f-regularization becomes at all interesting.)
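To make the preceding theorems concrete, here is a minimal numerical sketch (not from the paper): a (1+1)-dimensional box of fixed total length L with a movable Dirichlet partition at x = a, so that the mode frequencies are nπ/a and nπ/(L−a). Moving the partition keeps the first d + 2 = 3 Seeley-DeWitt coefficients fixed, so the regulated energy difference between two partition positions should converge as Ω → ∞, and should be independent of the cutoff shape.

    import numpy as np
    from scipy.special import erfc

    def regulated_energy(a, L, Omega, cutoff, n_max=20000):
        """E_Omega = (1/2) * sum_n omega_n * f(omega_n/Omega), for both cavities."""
        E = 0.0
        for length in (a, L - a):
            w = np.arange(1, n_max + 1) * np.pi / length
            E += 0.5 * np.sum(w * cutoff(w / Omega))
        return E

    L = 1.0
    exp_cutoff = lambda x: np.exp(-x)   # a second, independent cutoff shape
    for Omega in (100.0, 400.0, 1600.0):
        d_erfc = regulated_energy(0.3, L, Omega, erfc) - regulated_energy(0.5, L, Omega, erfc)
        d_exp = regulated_energy(0.3, L, Omega, exp_cutoff) - regulated_energy(0.5, L, Omega, exp_cutoff)
        print(Omega, d_erfc, d_exp)

    # Both columns should approach -(pi/24)*(1/0.3 + 1/0.7 - 2/0.5) ~ -0.0997,
    # the standard 1-D Casimir result: the divergent ~Omega^2 pieces cancel
    # because the total length (the a_0 coefficient) is held fixed.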
f-regularized Casimir energy Let us now consider a generic regularized sum of eigen-frequencies: Σ_n ω_n f(ω_n/Ω). Inserting the representation of f as a weighted integral over erfc cutoffs, the erfc-regularized expansion obtained previously becomes an expansion of exactly the same form, and the integrals over g(ξ) can be absorbed into redefining the dimensionless constants k_i in an f-dependent manner. That is: Theorem 3 (Physical cutoff) For a general cutoff f(ω/Ω) one has

(1/2) Σ_n ω_n f(ω_n/Ω) = Σ_{i=0,1/2,...}^{d/2} [k(f)]_i a_i Ω^{d+1-2i} + k_{(d+1)/2} a_{(d+1)/2} ln Ω + (finite) + O(1/Ω).

The [k(f)]_i are dimensionless phenomenological parameters that depend on the detailed physics of the specific cutoff function f(ω/Ω). However k_{(d+1)/2} is cutoff independent. The Ω dependence represents real physics. Live with it! Part of the reason it was never worthwhile to keep explicit track of the k_i is that, once the f-cutoff is introduced, the k_i would in any case be replaced by the purely phenomenological and cutoff dependent [k(f)]_i. Furthermore, if the first d + 2 of the ∆a_i are zero, then the cutoff dependence drops out of the calculation. That is, even for imperfect conductors and dielectrics, if one is comparing two situations where the conductors/dielectrics have merely been moved around, (without changing shape and/or volume), then the difference in Casimir energies (and so the Casimir forces) are guaranteed finite. Forcing finiteness? Can one force the Casimir energy difference to be finite? By hook or by crook find a number of "simple" problems D_i such that

a_j(physical system) − Σ_{i=1}^m a_j(D_i) = 0 for j = 0, 1/2, ..., (d+1)/2. (8.1)

Then it is certainly safe to say

(Casimir energy of physical system) − Σ_{i=1}^m (Casimir energy of D_i) = (finite). (8.2)

Of course this does not calculate the "finite piece" for you, but it gives you some confidence regarding what to aim for before you start calculating. More formally, if the D_i are sufficiently simple one might apply analytic techniques (such as zeta functions [5,19] or the like) to argue that it might make sense to define the Casimir energy of the physical system as the sum of the Casimir energies of the D_i plus a finite piece. Such a definition, while analytically continued to be finite, is purely formal. It need not be a physical energy difference. In short, one should seek at all times to calculate Casimir energy differences between clearly defined and specified physical systems. This might, at a pinch, involve differences between linear combinations of physical systems, but to get a physically meaningful Casimir energy one must either enforce the condition (8.1) on the Σ_{i=1}^m a_j(D_i), or develop an explicit physical model for the cutoff f(ω/Ω). Conclusions In (d + 1) dimensions, iff the first d + 2 Seeley-DeWitt coefficients agree,

∆a_0 = ∆a_{1/2} = ... = ∆a_{(d+1)/2} = 0, (9.1)

then the difference in Casimir energies is guaranteed finite. This is an extremely useful thing to check before you start explicitly calculating. The erfc function, in the form erfc(ω/Ω), is a perhaps unexpectedly useful regulator: erfc(0) = 1; erfc(∞) = 0. For real metals and real dielectrics, which become transparent in the UV, the cutoff is physical, and its influence on the Casimir energy is encoded in a small number of dimensionless parameters [k(f)]_i and an overall cutoff scale Ω. Various generalizations of this argument, (such as counting differences in eigenstates, or calculating differences of sums of powers of eigenvalues), are also possible. Similar arguments, regarding differences in Seeley-DeWitt coefficients, can also be applied to the one-loop effective action [33].
Finally, I should emphasise that I have not renormalized anything anywhere in this article; the worst I have done is to temporarily regularize some infinite series, to allow some otherwise formal manipulations to be mathematically well-defined.
4,047.4
2016-01-06T00:00:00.000
[ "Physics" ]
Dantu Blood Group Erythrocytes Form Large Plasmodium falciparum Rosettes Less Commonly ABSTRACT. Dantu erythrocytes, which express a hybrid glycophorin B/A protein, are protective against severe malaria. Recent studies have shown that Dantu impairs Plasmodium falciparum invasion by increasing erythrocyte membrane tension, but its effects on pathological host–parasite adhesion interactions such as rosetting, the binding of uninfected erythrocytes to P. falciparum–infected erythrocytes, have not been investigated previously. The expression of several putative host rosetting receptors—including glycophorin A (GYPA), glycophorin C (GYPC), complement receptor 1 (CR1), and band 3, which complexes with GYPA to form the Wrightb blood group antigen—is altered on Dantu erythrocytes. Here, we compare receptor expression, and rosetting at both 1 hour and 48 hours after mixing with mature trophozoite-stage Kenyan laboratory–adapted P. falciparum strain 11019 parasites, in Dantu and non-Dantu erythrocytes. Dantu erythrocytes showed lower staining for GYPA and CR1, and greater staining for band 3, as observed previously, whereas Wrightb and GYPC staining did not vary significantly. No significant between-genotype differences in rosetting were seen after 1 hour, but the percentage of large rosettes was significantly less in both Dantu heterozygous (mean, 16.4%; standard error of the mean [SEM], 3.2) and homozygous donors (mean, 15.4%; SEM, 1.4) compared with non-Dantu erythrocytes (mean, 32.9%; SEM, 7.1; one-way analysis of variance, P = 0.025) after 48 hours. We also found positive correlations between erythrocyte mean corpuscular volume (MCV), the percentage of large rosettes (Spearman's rs = 0.5970, P = 0.0043), and mean rosette size (rs = 0.5206, P = 0.0155). Impaired rosetting resulting from altered erythrocyte membrane receptor expression and reduced MCV might add to the protective effect of Dantu against severe malaria. INTRODUCTION Plasmodium falciparum causes more than 220 million clinical malaria infections annually, of which between 1% and 3% develop into severe, life-threatening disease episodes.1 Two key processes that are important in the pathophysiology of severe malaria are the adhesion of P. falciparum-infected erythrocytes to the lining of blood vessels (cytoadhesion)2 and to uninfected erythrocytes, a feature known as rosetting.3 Both lead to the obstruction of microvascular blood flow,4 tissue ischemia,5 anaerobic glycolysis,6 and acidosis.7 Both cytoadhesion and rosetting are mediated by the binding of parasite-encoded adhesion proteins on the surface of infected erythrocytes to specific host cell receptors.8,9 A number of erythrocyte surface molecules have been proposed as rosetting receptors,10 including the blood group antigens A and B,11 complement receptor 1 (CR1),8,12 glycophorin A (GYPA),13 and glycophorin C (GYPC).14 Most recently, monoclonal antibody Fab fragments against the Wrightb blood group antigen, formed by GYPA in complex with the erythrocyte anion transporter band 3 (AE1/SLC4A1),15 have been shown to disrupt rosettes,16 suggesting that the GYPA/band 3 complex could be a host rosetting receptor. Glycophorins are abundant, highly glycosylated erythrocyte surface proteins that are responsible for creating a negatively charged, repulsive force between erythrocytes, preventing hemagglutination.
17 Glycophorin A is a 131-amino acid glycoprotein comprising a 72-amino acid extracellular domain, a single membrane-spanning domain, and a 36-amino acid cytoplasmic domain.18 Glycophorin B (GYPB) is a smaller, less abundant glycoprotein that is closely related to GYPA, comprising a total of 72 amino acids that form a short extracellular domain and a transmembrane region with almost no intracellular domain. The genes that encode GYPA and GYPB (GYPA and GYPB) are adjacent on chromosome 4,18 (Figure 1A19,20). This genetic region is a recombination hotspot involving complex rearrangements at the GYPA/B/E locus, one outcome of which is a hybrid GYPB/A gene that encodes the Dantu glycoprotein, comprising the extracellular domain of GYPB and the transmembrane and cytoplasmic domains of GYPA (Figure 1A). Dantu is a rare glycophorin variant that is found at a frequency of up to 9.5% on the coast of Kenya,21 but is generally absent outside East Africa.18,19 Previous genome-wide association studies have shown that Dantu is associated strongly with protection against all clinical forms of severe malaria.18,21 More recently, we demonstrated that membrane tension is increased in Dantu erythrocytes, and that this increased tension is associated with reduced P. falciparum merozoite invasion, providing a plausible explanation for the malaria protective effect of Dantu.22 We also showed that levels of a number of membrane proteins are altered in Dantu erythrocytes, which could potentially affect rosetting.22 Flow cytometry showed significantly reduced levels of GYPA, which was confirmed by quantitative mass spectrometry (19% of the non-Dantu level in Dantu-variant erythrocytes, P < 0.0001). However, minor changes in CR1 and GYPC suggested by flow cytometry were not confirmed in the quantitative assay. Band 3 showed significantly increased staining in Dantu cells, but significantly lower levels by quantitative mass spectrometry (49% of the non-Dantu level in Dantu-variant cells, P < 0.05). This discrepant result may be explained by the band 3 monoclonal antibody (mAb) used in flow cytometry having greater access to its epitope in Dantu cells as a result of the reduced level of GYPA.22 Other studies23,24 have shown that the Dantu hybrid glycophorin lacks the Wrightb antigen, because the extracellular region of GYPA required to form the Wrightb epitope with band 315 is missing in the hybrid protein. To our knowledge, staining for the Wrightb antigen in Dantu erythrocytes has not been reported previously, and the relative expression level of Wrightb in Dantu and non-Dantu erythrocytes is unknown. The previously described changes in erythrocyte membrane proteins raise the possibility that Dantu might also lead to impaired P. falciparum rosetting. Various other human erythrocyte polymorphisms associated with protection against severe malaria, including CR1 deficiency,25 blood group O,26,27 Hemoglobin S,28 Hemoglobin C,29 and the Knops blood group,30 cause reduced parasite rosetting. Because the size, strength, and frequency of rosettes all influence the degree of microvessel blockage,4,31 the occurrence of fewer, smaller, and weaker rosettes when parasites infect erythrocytes with malaria protective polymorphisms may improve microvascular blood flow and protect against pathology.
We hypothesized that the altered expression of rosetting receptors in Dantu erythrocytes might impair their ability to form rosettes, a mechanism of protection against severe malaria that might add to the invasion phenotype demonstrated previously.22 To investigate this possibility, we examined the expression of putative rosetting receptors in Dantu and non-Dantu erythrocytes by flow cytometry, and the ability of Dantu and non-Dantu erythrocytes to form rosettes with an East African P. falciparum line. MATERIALS AND METHODS Study participants. Blood samples were obtained from 21 children younger than 13 years from Kilifi County on the Indian Ocean coast of Kenya. Dantu sample genotyping. Genomic DNA was extracted from whole blood using a QIAamp 96 DNA QIAcube HT kit on a QIAcube HT System (QIAGEN, Manchester, United Kingdom) according to the manufacturer's instructions. Genotypes at the Dantu marker single nucleotide polymorphism, rs186873296, were determined by a CviQI (Thermo Fisher, Waltham, MA) restriction fragment length polymorphism assay as described elsewhere.22 The 21 samples studied here are a subset of the 42 samples described previously.22 Sample preparation. Erythrocyte samples were purified from whole blood before cryopreserving in glycerolyte28 and storing in liquid nitrogen. The frozen erythrocytes were shipped to Edinburgh, where they were thawed by standard methods32 in sets of three (ABO blood group-matched Dantu homozygous, Dantu heterozygous, and non-Dantu erythrocytes). After thawing, erythrocytes were kept at 4°C and used within 48 hours. Sample genotypes were masked until all experiments were completed. Monoclonal antibody Fab fragment preparation. The mAbs used in flow cytometry are described in Table 1. Fab fragments were generated from 100 µg of IgG2a mAbs by papain digestion (5-6 hours, 37°C) using a Pierce Fab Micro Preparation Kit (44685, Thermo Fisher Scientific, Waltham, MA) according to the manufacturer's instructions. Immunoglobulin G1 Fab fragments were prepared by ficin digestion (3-5 hours, 37°C) using a Pierce Mouse IgG1 Fab and F(ab′)2 Micro Preparation Kit (44680, Thermo Fisher Scientific) according to the manufacturer's instructions. Purified Fab fragments were concentrated using Amicon Ultra-0.5 10-kDa MWCO Centrifugal Filter Devices (UFC501024, Sigma-Aldrich, St. Louis, MO) according to the manufacturer's instructions. Total protein in the Fab preparations was quantified using a NanoDrop spectrophotometer (Thermo Fisher Scientific), and successful digestion was confirmed by sodium dodecyl sulfate-polyacrylamide gel electrophoresis.
Flow cytometry analysis of erythrocyte surface receptors. Dantu homozygous, Dantu heterozygous, and non-Dantu erythrocytes were stained for Wrightb (BRIC 14), GYPA (BRIC 256), band 3 (BRIC 200), GYPC (Ret40f), and CR1 (J3D3), and were analyzed by flow cytometry. A packed cell volume (PCV) (0.5 µL) of erythrocytes was added to each of eight tubes, washed once with 750 µL phosphate-buffered saline (PBS), washed once with 750 µL PBS/0.1% bovine serum albumin (BSA), and resuspended in 4 µL PBS/0.1% BSA. Erythrocytes were incubated with 0.1 mg/mL antibody Fab fragments or isotype controls in PBS/1% BSA at a final hematocrit (Ht) of 8% for 1 hour at 37°C, with mixing every 15 minutes. Samples were washed twice with 750 µL PBS/0.1% BSA and incubated at 2.5% Ht with a 1:1000 dilution of Alexa Fluor 488-conjugated goat anti-mouse IgG in PBS/0.1% BSA for 45 minutes on ice in darkness, with mixing every 15 minutes. Erythrocytes were washed twice with cold PBS/0.1% BSA, then resuspended in 500 µL cold FACS buffer (PBS/0.5% BSA/0.02% sodium azide). Samples were run on an LSRII flow cytometer (BD Biosciences, Wokingham, United Kingdom) using the 530/30 detector off the blue (488-nm) laser, with at least 10,000 singlet erythrocyte events counted per sample. Data were analyzed with FlowJo software (BD Biosciences), and the geometric mean fluorescence intensity of each sample was used to compare staining among genotypes. The gating strategy and background staining with isotype controls are shown in Supplemental Figure S1. Purification of infected erythrocytes. Infected erythrocyte purification and rosetting assays were performed either on the day of PKH2 staining or the following day, depending on the availability of mature-stage parasitized erythrocytes. Plasmodium falciparum-infected erythrocytes were purified using a magnetic-activated cell sorting (MACS) column (Miltenyi Biotec, Bisley, United Kingdom) as described previously,34 with the addition of 1 mg/mL heparin to all buffers to disrupt rosettes. The post-purification parasitemia was determined by staining an aliquot of erythrocyte suspension with 25 µg/mL ethidium bromide (37°C for 2 minutes) and assessing the percentage of infected erythrocytes out of 200 erythrocytes counted on a wet preparation by fluorescence microscopy. This ranged from 58% to 73%, depending on the efficacy of the MACS purification. Purified, infected erythrocyte mixing with PKH2-stained Dantu erythrocytes. Approximately 20 µL PCV of the purified, infected erythrocyte pellet was resuspended in 150 µL of incomplete RPMI/1 mg/mL heparin/0.5% BSA, and 50-µL aliquots were placed into three Eppendorf tubes. One hundred microliters of 50% Ht PKH2-stained erythrocytes from each donor was centrifuged at 1,000 × g for 2 minutes, the supernatant removed, and the erythrocytes resuspended in incomplete RPMI/1 mg/mL heparin/0.5% BSA. The stained erythrocyte suspensions were added to the tubes containing the purified, infected erythrocytes, giving a final parasitemia of 4%. Cells were washed three times with incomplete RPMI, once with complete RPMI, and resuspended in 2.5 mL complete RPMI. The erythrocyte suspension was transferred to a T25 culture flask, gassed for 30 seconds with 1% O2/5% CO2/94% N2, and incubated at 37°C. Rosetting assays. Rosetting assays were performed 1 hour and 48 hours after mixing purified, infected erythrocytes with stained Dantu erythrocytes (Figure 1B). A 200-µL aliquot of culture suspension was stained with 25 µg/mL ethidium bromide for 2 minutes at 37°C.
A 10-µL aliquot of this stained erythrocyte suspension was placed on a microscope slide and covered with a 22- × 22-mm coverslip. The erythrocytes were viewed with a Leica DM2000 fluorescence microscope (×40 magnification), with white light combined with the TRITC filter to identify ethidium bromide-stained infected erythrocytes or the FITC filter to view the PKH2 staining. The rosette frequency of each sample was determined by counting the percentage of mature, infected (pigmented trophozoite- or schizont-infected) erythrocytes binding two or more uninfected erythrocytes, at least one of which had to be PKH2 stained. This allowed for the determination of rosetting with the test erythrocytes, while excluding rosettes formed with the subpopulation of unstained, uninfected erythrocytes carried over in the MACS purification. One hundred infected erythrocytes were assessed for rosetting in each of two different parts of the wet preparation slide, and the values were averaged to give the rosette frequency based on 200 infected erythrocytes per sample. Mean rosette size was determined by counting the number of uninfected erythrocytes bound to infected erythrocytes in 50 rosettes per sample. Only rosettes that contained PKH2-stained erythrocytes were included in this count. The percentage of large rosettes was determined from the rosette size counts, with a large rosette defined as four or more uninfected erythrocytes per rosette. This definition was based on studies of rosette size and stability showing enhanced survival of large rosettes in narrow vessels.31 Measurement of erythrocyte mean corpuscular volume. Erythrocyte mean corpuscular volume (MCV) was measured using an automated Coulter counter (Beckman Coulter, Indianapolis, IN) as described previously.22 Statistical analysis and graphing. Data were visualized and analyzed using GraphPad Prism v7.0 (GraphPad Software, La Jolla, CA). Differences between sample means were analyzed by one-way analysis of variance (ANOVA), with Dunnett's multiple comparisons test to compare values in Dantu homozygous and Dantu heterozygous samples to those in non-Dantu erythrocytes. The χ² test was used to examine differences between genotypes in the frequency distribution of rosette size. Spearman's rank correlation coefficient was used to assess the relationship between erythrocyte MCV and rosetting. All raw data are provided in Supplemental datasheets S1 through S3.
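The statistical workflow just described can be sketched in a few lines (illustrative only, not the study's code; the arrays are made-up placeholders for the per-donor values, and scipy.stats.dunnett requires SciPy 1.11 or later).

    import numpy as np
    from scipy import stats

    # hypothetical "% large rosettes" per donor (n = 7 per genotype)
    non_dantu = np.array([20.0, 55.0, 30.0, 25.0, 40.0, 35.0, 25.0])
    dantu_het = np.array([10.0, 25.0, 15.0, 20.0, 12.0, 18.0, 15.0])
    dantu_hom = np.array([12.0, 18.0, 14.0, 16.0, 15.0, 17.0, 16.0])

    # one-way ANOVA across the three genotypes
    f_stat, p_anova = stats.f_oneway(non_dantu, dantu_het, dantu_hom)

    # Dunnett's test comparing each Dantu group to the non-Dantu control
    dunnett = stats.dunnett(dantu_het, dantu_hom, control=non_dantu)

    # Spearman's rank correlation, e.g. MCV vs. percentage of large rosettes
    mcv = np.array([78.0, 92.0, 85.0, 80.0, 88.0, 86.0, 83.0])
    rho, p_spear = stats.spearmanr(mcv, non_dantu)

    print(p_anova, dunnett.pvalue, rho, p_spear)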
RESULTS Dantu erythrocytes express putative rosetting receptors including the Wrightb antigen. Immunofluorescent staining was carried out to assess the relative expression levels of erythrocyte rosetting receptors in Dantu homozygous, Dantu heterozygous, and non-Dantu erythrocytes. Glycophorin A staining was lower in Dantu homozygous erythrocytes compared with non-Dantu erythrocytes, and band 3 staining was greater, confirming previous results22 (Figure 2 and Supplemental Figure S2). Staining for CR1 was significantly lower in Dantu heterozygotes, again confirming previous data,22 whereas GYPC staining showed no difference among genotypes. Staining for the Wrightb antigen tended to mirror the pattern of GYPA staining, but the variation within genotypes was large and no statistically significant differences were observed (Figure 2). Given the requirement for the extracellular region of GYPA to bind to band 3 to create the Wrightb epitope,15 a correlation between GYPA and Wrightb staining would be expected. Examination of the flow cytometry data showed a strong positive correlation between the geometric mean fluorescence intensity for GYPA and Wrightb for the data set as a whole (rs = 0.7373, P = 0.0001, Supplemental Figure S3). Each genotype also showed a positive correlation between GYPA and Wrightb staining, although the strength of the relationship varied, being most marked in the non-Dantu donors (non-Dantu rs = 0.9643, P = 0.0028; Dantu heterozygous rs = 0.7143, P = 0.0881; Dantu homozygous rs = 0.5225, P = 0.2397). Large rosettes are less common in Dantu erythrocytes. To investigate whether the Dantu polymorphism influences rosetting, Dantu homozygous, Dantu heterozygous, and non-Dantu erythrocytes were stained with the PKH2 fluorescent dye and then incubated with MACS-purified, P. falciparum-infected erythrocytes from Kenyan parasite line 11019, and rosetting was assessed at 1 hour and 48 hours. The two time points allowed for assessment of rosetting when only the uninfected erythrocytes in the suspension were stained (at 1 hour), and when the parasites had reinvaded so that both infected and uninfected erythrocytes were stained (at 48 hours). The second time point allows adhesion interactions to reach their maximum strength35 and also allows any reduction in adhesion due to impaired P. falciparum erythrocyte membrane protein one (PfEMP1) display to become manifest.29 At the 1-hour time point, there was no significant difference in rosette frequency, mean rosette size, or percentage of large rosettes between the Dantu and non-Dantu erythrocytes (Figure 3A-C). At 48 hours, there was also no significant difference among genotypes in rosette frequency, but there was a nonsignificant trend toward smaller mean rosette size and lower variance in Dantu homozygous compared with non-Dantu erythrocytes (Figure 3D-F). The percentage of large rosettes was significantly lower in Dantu homozygous donors (mean, 15.4; standard error of the mean [SEM], 1.43) and Dantu heterozygous donors (mean, 16.43; SEM, 3.20) compared with non-Dantu erythrocytes at 48 hours (mean, 32.86; SEM, 7.12; P = 0.025, one-way ANOVA) (Figure 3D-F). This difference in rosette size between genotypes was also seen by examining the frequency distribution of rosette size at 48 hours, with more frequent two- and three-uninfected-erythrocyte rosettes, and fewer rosettes with four, five, six, or seven uninfected erythrocytes in Dantu compared with non-Dantu donors (Figure 4; P < 0.0001, χ² test).
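The χ² comparison of the pooled rosette-size distributions can be sketched as follows (again illustrative, with made-up counts standing in for the pooled frequencies of Figure 4).

    import numpy as np
    from scipy.stats import chi2_contingency

    # rows: non-Dantu, Dantu heterozygous, Dantu homozygous
    # columns: rosettes with 2, 3, 4, 5+ uninfected erythrocytes (made-up counts)
    counts = np.array([
        [120, 110, 60, 60],
        [160, 120, 40, 30],
        [170, 125, 35, 20],
    ])

    chi2, p, dof, expected = chi2_contingency(counts)
    print(chi2, p, dof)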
There is a positive correlation between erythrocyte mean corpuscular volume and rosetting. On average, Dantu erythrocytes are smaller than non-Dantu erythrocytes,22 so we investigated whether the effect of Dantu on rosetting that we observed might be related to the reduced size of Dantu erythrocytes. We found no significant relationships between MCV and rosetting at 1 hour (Supplemental Figure S4); however, we did find a significant positive correlation between MCV and mean rosette size (Spearman's rs = 0.5206, P = 0.0155), and between MCV and the percentage of large rosettes (rs = 0.5970, P = 0.0043) at 48 hours (Figure 5). The MCV values for four of seven Dantu homozygous donors were within the normal range (> 80 fL); however, their percentage of large rosettes was generally lower than that of non-Dantu donors with similar MCVs. This may suggest that both erythrocyte size and reduced expression of membrane receptors contribute to impaired rosetting in Dantu erythrocytes. DISCUSSION In our study, we confirmed the alterations in erythrocyte membrane receptor expression in Dantu erythrocytes described previously,22 and investigated whether Dantu erythrocytes show any difference in rosetting compared with non-Dantu controls. We found no effect of Dantu genotype on rosette frequency (the proportion of infected erythrocytes forming rosettes), but significantly fewer large rosettes were found after the parasites had reinvaded and grown in Dantu erythrocytes for 48 hours. This result has pathophysiological implications, because large rosettes are more stable and resistant to disruption in microvessels compared with small rosettes.31 Hence, large rosettes are more likely to contribute to microvascular obstruction in severe malaria. The effect of Dantu on the reduction of large rosettes mirrors that of the ABO blood group, in which reduced numbers of large rosettes in group O erythrocytes compared with non-O blood groups may contribute to malaria protection.11,26 Our study was limited in terms of the number of erythrocyte donors tested (seven of each genotype) with a single P. falciparum line. Future studies using additional donors are needed to confirm the findings reported here. Furthermore, given the complexity of the molecular mechanisms of rosetting, which involve diverse parasite ligands and host erythrocyte receptors,[8][9][10][11][12][13][14] there is a need to carry out Dantu blood group rosetting experiments with a wider set of P. falciparum lines with differing rosetting phenotypes. If the results shown here are confirmed, we can consider the possible mechanisms that could be responsible for the paucity of large rosettes in Dantu erythrocytes. These include reduced numbers of erythrocyte rosetting receptors,5,8 reduced expression of the parasite adhesion molecule PfEMP1,28,29 or reduced erythrocyte size.5 Our data do show reduced expression of some putative erythrocyte rosetting receptors, although it is unclear whether the relatively modest changes in expression would be sufficient to impact adhesion. We did not measure PfEMP1 expression level after parasite invasion into Dantu erythrocytes, but this could be done in future studies by generating antibodies against the rosette-mediating PfEMP1 variant from the 11019 P. falciparum line and by testing other rosetting parasite lines28 for which PfEMP1 antibodies are available. Previous data show that Dantu erythrocytes have significantly lower MCV compared with non-Dantu controls.
A strong positive correlation between erythrocyte MCV and mean rosette size was noted in a previous study, 36 with microcytic erythrocytes from donors with iron-deficiency anemia and thalassaemia showing reduced rosetting capacity. In our study, we confirmed a significant positive correlation between MCV and mean rosette size, and also showed a significant positive correlation between MCV and the percentage of large rosettes. With Dantu erythrocytes on average having lower MCV, it is plausible that the size of Dantu erythrocytes is responsible in part for the impaired formation of large rosettes, combined with the specific erythrocyte membrane receptor changes, as we hypothesized originally.

One surprising finding from our study was that the Wright b blood group antigen, formed by a physical association between band 3 and GYPA, 15 was detected at high levels in all genotypes. Previous studies 23,24 have found that the Wright b antigen is not present on the hybrid glycophorin of Dantu-positive erythrocytes. This is expected because the extracellular region of GYPA, which is missing from the hybrid protein, is required for Wright b expression. 15 Why, then, did we observe robust staining for the Wright b antigen on Dantu homozygous erythrocytes? Recent whole-genome sequencing has confirmed that the Dantu locus contains an intact copy of the normal GYPA gene, in addition to the genes encoding the hybrid protein. 18,19 Hence, although the hybrid glycophorin cannot form the Wright b antigen with band 3, normal GYPA should also be present in Dantu erythrocytes to allow formation of Wright b. Our data show definitively that Dantu erythrocytes from both heterozygous and homozygous donors express the Wright b antigen. There was a trend toward lower geometric mean fluorescence intensity for Wright b staining in Dantu erythrocytes, but statistically significant differences were not detected in this small sample set. Future studies with larger sample sizes are needed to determine whether there are consistent differences in Wright b expression between Dantu genotypes.

CONCLUSION

Overall, we report that Dantu erythrocytes show reduced expression of some candidate rosetting receptors, and that the Dantu phenotype impairs the ability of P. falciparum to form large rosettes. These data suggest that in addition to the effects on erythrocyte membrane tension and parasite invasion shown previously, the protection against severe malaria afforded by the Dantu blood group may also include an effect on P. falciparum rosetting.

FIGURE 1. Schematic of Dantu glycophorins and the Dantu rosetting experiment. (A, top) Organization of non-Dantu, Dantu heterozygote, and Dantu homozygote glycophorin genes. 19 (A, bottom) Composition of glycophorin A (GYPA), glycophorin B, and the Dantu glycophorin B-A hybrid (GYPB-A) across the Dantu genotypes. The regions of the glycophorin molecules encoded by the corresponding exons are numbered. 20 The lower expression of GYPA in Dantu erythrocytes is indicated by paler shading. (B) Dantu rosetting experimental design in which purified Plasmodium falciparum-infected erythrocytes stained with ethidium bromide were mixed with Dantu homozygous, Dantu heterozygous, or non-Dantu erythrocytes stained with PKH2 fluorescent dye, with rosetting assessed after 1 hour and 48 hours.

FIGURE 3.
Fewer large rosettes observed in Dantu erythrocytes. The ability of different Dantu genotype erythrocytes to form rosettes was measured by mixing Plasmodium falciparum strain 11019 purified, infected erythrocytes with PKH2-stained Dantu or non-Dantu erythrocytes. Rosetting was assessed by fluorescence microscopy after 1 hour (A-C) and 48 hours (D-F). The rosette frequency is the percentage of mature, infected (pigmented trophozoite- or schizont-infected) erythrocytes binding two or more uninfected erythrocytes, at least one of which was stained with PKH2 dye. The mean rosette size is the number of uninfected erythrocytes per rosette, based on 50 rosettes per sample. Large rosettes are defined as those containing four or more uninfected erythrocytes. Seven non-Dantu, seven Dantu heterozygous, and seven Dantu homozygous samples were tested. Statistical comparison across groups was performed by one-way analysis of variance with Dunnett's multiple comparisons test (*P < 0.05). Bars show the mean and standard error of the mean for each genotype.

FIGURE 4. Frequency distribution of rosette size at 48 hours differs among genotypes. The rosette size data for donors within each genotype were pooled, and the relative frequency of rosettes in each size category (expressed as a percentage of all rosettes) is shown for Dantu homozygous, Dantu heterozygous, and non-Dantu donors. Rosette size indicates the number of uninfected erythrocytes in each rosette. Statistical comparison between groups was performed using a chi-squared test, P < 0.0001.

TABLE 1. H+L = heavy and light chains; IBGRL = International Blood Group Reference Laboratory. All primary antibodies are mouse monoclonal antibodies.
THE COST OF DIRECT TAXATION ON INVESTMENT IN BRAZIL

This paper analyzed the impact of taxation on investment in Brazil, focusing on the taxation of corporate income. Following the literature, an economic model was used to calculate two indicators of effective tax rates: the Effective Marginal Tax Rate (EMTR) and the Effective Average Tax Rate (EATR). The EMTR measures the increase in the cost of capital due to the corporate income tax. The EATR represents a measure of the average tax rate levied on an investment that has a pre-defined economic profit. The results suggest Brazil may face some difficulties in attracting foreign investment. The country presents high rates for the EATR and the EMTR, higher than the average of the rich countries and well above the figures of developing countries like Chile, Mexico, South Africa, Russia and China, potential competitors in attracting investments.

INTRODUCTION

Direct taxation on investments has been the subject of concerns and disputes between countries. Data from the Organization for Economic Cooperation and Development (OECD, n.d.) point to sharp declines in rates of corporate income tax over the last 25 years, as a way to attract investment.

In Brazil, however, when thinking about investment, companies always point out the high costs of indirect taxation, but not of direct taxes. This is because indirect taxes and contributions on investment in Brazil are only partly compensated. Recent work by the National Industry Confederation (CNI, 2014) on the tax cost of investment proposes eliminating indirect taxes (ISS, ICMS, PIS, Cofins, IPI and AFRMM) 1 on investment, but says nothing about direct taxes.

However, in most developed countries this discussion is already outdated. Indirect taxes such as the Value Added Tax are not levied on investment, or, when they are, they are quickly recovered. All the tax cost of the investment is related to direct taxation, such as the Income Tax.

This paper aims to analyze the tax cost of investment in Brazil, focusing on the taxation of corporate income. As the literature suggests, an economic model has been employed to calculate two indicators of effective rates: the Effective Marginal Tax Rate (EMTR) and the Effective Average Tax Rate (EATR).

The EMTR measures the increase in the cost of capital as a result of the taxation of corporate income. It is assumed that investment will occur up to the level where the marginal gain of an additional unit of investment equals the cost of capital. The tax increases the cost of capital and ultimately reduces investment. The EATR is a measure of the average effective tax rate on an investment that has a predefined economic profit.

From the perspective of policymakers, the EATR could serve as an important indicator for choosing the location of a factory among several countries, while the EMTR is believed to be a relevant indicator for the level of investment that would be realized.

The results show that Brazil may face difficulties in attracting investments. The country has high rates of EATR and EMTR, higher than the average of OECD countries and above those of important developing countries such as Chile, Mexico, South Africa, Russia and China, which are potential competitors in terms of investment attraction.
After this brief introduction, the following section presents a literature review that highlights the models used to calculate the EMTR and the EATR and the results already found. Section 3 develops the theoretical model used in this paper. Section 4 presents the main results and compares the figures for Brazil with those of a group of relevant countries. Finally, Section 5 concludes with suggestions for Brazilian tax policy.

LITERATURE REVIEW

The development of indicators as a measure of the effective cost of taxation on investment began with the work of Auerbach (1979), which built a theoretical model where the present value of the investment income flow should equal the cost of capital in order to determine the level of investment. King and Fullerton (1984) built a new theoretical model and developed the EMTR as a measure of the impact of effective marginal tax rates on investment. In their paper, they applied the EMTR only to the United States, the United Kingdom, Sweden and Germany.

The idea of effective marginal rates on investment gained popularity and international momentum with Alworth (1988), Keen (1991) and OECD (1991), which incorporated this indicator in its comparative analyses of taxation between its member countries.

The EMTR constitutes a tool to analyze the impact of the current tax system on the income stream from an investment. The EMTR can be used for calculating the impact of taxation on the level of investment. Devereux and Griffith (1998) introduced another indicator involving effective tax rates, called the EATR. This new approach measures the effective average tax rate and estimates the impact of taxation on an investment that earns a given pre-tax rate of return. Thus, the EATR is believed to be a useful tool for investment location decisions, since it provides a complementary view to the EMTR.

Later, Devereux and Griffith (2003) established a theoretical link between the EMTR and the EATR. Their model innovated by incorporating the net present value of depreciation deductions, thus allowing the forward-looking calculation of the EATR. Their model was applied to the case of the UK and the US: an EATR time series was built, and the location decision for a firm's new industrial plant was simulated as a choice among France, Germany and the UK. Lorentz (2008) applied both indicators in a broad analysis of the taxation of corporate income in the OECD countries between 1982 and 2007. In that period the nominal rates declined, but the tax base broadened, which resulted in a relative stability of revenues. Both the EATR and the EMTR presented a downward trend during the period, although smaller than the decrease in the statutory rates. On average, the EMTR fell from 34% to 22% and the EATR was reduced from 37% to 24%. Klemm (2008) extended the model of Devereux and Griffith (2003) to consider the possibility of permanent investment. Originally, the model predicted that the increase in the capital stock would occur in one period, i.e., the cost of capital was calculated on the basis of a one-period perturbation in the stock of capital. In Klemm's revised version, the capital increase becomes permanent. The author argues that the new model is more appropriate for the analysis of long-term investments. But he also points out that the present value calculation procedure for depreciation deductions in the model of Devereux and Griffith (2003) attenuates the differences between the two models.
Botman, Klemm and Baqir (2008) applied the model introduced by Klemm (2008) to a number of Southeast Asian countries such as the Philippines, Laos, Cambodia, Malaysia, Indonesia, Vietnam and Thailand. Almeida (2004) and Almeida and Paes (2013) calculated the EMTR for Brazil by using the King and Fullerton (1984) methodology. In the former case, the author estimated effective marginal tax rates for Brazil in 2004, including investments in machinery and equipment, buildings and inventories, which could be financed by retained earnings, new shares or debt. In the latter, the authors updated Almeida (2004) with 2012 data and included the impact of interest on net equity in the calculation of effective rates.

Some studies have calculated the EMTR and the EATR for several countries. For example, the Institute for Fiscal Studies - IFS (1997) estimated the EMTR for investment in buildings, machinery and inventories funded by retained earnings, new equity and debt, for a group of 10 OECD countries. Devereux et al. (2002) repeated the exercise for 19 OECD countries, and the IFS (2010) updated the numbers of its 1997 study. Polito (2010) calculated the EMTR for the United Kingdom and the United States for investments in plants and machinery with funding made by issuing shares or debt. Finally, Bilicka and Devereux (2012) published a ranking of the EMTR and the EATR for OECD and G20 countries.

Although widespread, the use of the EMTR and the EATR as analytical tools for tax policy is not free of criticism. The main issue is that the effective tax rate indicators do not deal with all the complexity existing in modern tax systems, such as accelerated depreciation rules, treatment of tax losses, special regimes, tax planning and tax evasion. However, as stressed by Devereux et al. (2004), both the EMTR and the EATR provide a better picture of the taxation of corporate income than the simple nominal rates, and thus they may be useful in the design of tax policy. This paper contributes to the literature in several aspects. First, it presents a time series for the EMTR and the EATR for Brazil since 1990. Until now, the two Brazilian indicators had only been calculated for 2004 by Almeida (2004) and for 2012 by Almeida and Paes (2013) and Bilicka and Devereux (2012). In addition, this paper incorporates the effects of the indexation that existed in Brazil until 1996 and, by doing so, sheds light on an important peculiarity of Brazil's tax system. A second contribution is to allow comparative analysis among developing countries such as Argentina, India, China, Russia and South Africa. While the literature focuses mainly on OECD countries due to the availability of data, this article builds time series of the EATR and the EMTR for both groups of countries, which allows for the comparison of the tax cost of investment in Brazil and its peers in the world.

MODEL

The economic model was built similarly to the one developed by Devereux and Griffith (2003). It allows a microeconomic determination of the EATR and the EMTR. Consider the value of the firm as given by equation (1), where V_t is the firm's value at time t; D_t are dividends paid at time t; N_t is new equity issued at time t; i_t is the nominal interest rate; m_i is the personal tax rate on interest income; m_d is the personal tax rate on dividends; c is the rate of tax credit granted for dividends; and z is the personal tax rate on capital gains.
By simplifying the expression for the value of the firm, equation (2) is obtained. Dividends may be expressed as in equation (3), where Q(K_{t-1}) is the output of the economy at time t, which depends only on the capital from the previous period; B_t is a one-period debt of the firm issued at time t; τ is the corporate tax rate; and φ is the rate at which capital expenditure can be offset against tax.

The EMTR measures the rise in the cost of capital due to the corporate income tax. To calculate the cost of capital, one must consider a disturbance in the capital stock at time t. Investment increases by one unit in period t and decreases in the following period t + 1, making capital rise by one unit in t and return to its previous value in the other periods. Thus, the net present value for the shareholder of this disturbance in t, R, is equal to the change in the market value of the firm, as in equation (4).

By setting the economic profit to zero at the margin, the condition R = 0 defines the cost of capital and allows finding the optimal capital stock at time t. The investment increases by one unit and decreases in the following period by (1 - δ)(1 + π), where δ is the one-period depreciation rate and π is the inflation rate. The addition of capital increases real output in t + 1 by (p + δ), where p is the real financial return on investment; the nominal output increases by (p + δ)(1 + π).

The disturbance in the capital stock affects the dividends, as in equation (5). Only the hypothesis that the new investment will be entirely financed by means of retained earnings 2 will be considered. In this case, the return on investment will be distributed as dividends, and it is determined by equation (6). By substituting the derivatives into the previous equation, equation (7) is obtained.

By setting R_RE = 0, as a measure of the cost of capital, the marginal financial rate of return is obtained. The EMTR can then be obtained as the difference between the marginal rate of return of the investment and the real interest rate, as in (9). By replacing values and assuming that there is no taxation on interest and capital gains (m_i = z = 0), the expression for the EMTR in the case of retained earnings is obtained in (10).

In the case of the EATR, the first step is to find the present value of the net investment for shareholders in the absence of taxation. Based on (7), this yields (11). In this case γ = 1 and ρ = i, giving (12).

The EATR can be calculated as the proportional difference between the return on an investment with and without taxes. However, to obtain a rate that represents this indicator, one needs to normalize this difference. An alternative is to scale the difference by the net present value of the pre-tax income stream, net of depreciation, p/(1 + r). Thus, the EATR is defined by (13). Assuming that there is no tax at the shareholder level (m_i = m_d = z = c = 0), the expression for the EATR in (14) is obtained.

We still have to find the value of φ, which represents the net present value (NPV) of the depreciation allowances. In general, tax legislation accepts two different methods for calculating depreciation. The first, known as straight-line depreciation, states that an asset may be depreciated at a constant rate. A second possibility is the use of declining balance, which allows higher depreciation rates at the beginning of the lifetime of an asset.

2 Other usual financing possibilities in the literature are debt and the issuance of new equity. For simplicity, it was chosen to work only with retained earnings.
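For reference, under retained-earnings financing and no personal taxes, the quantities defined above have well-known closed forms in the Devereux-Griffith framework. The block below is a summary in the notation of the cited literature, with A denoting the NPV of depreciation allowances per unit invested (the role played by φ in (15)-(17)); it is intended to match equations (9), (10) and (14) up to notation, not to reproduce the paper's own typesetting.

```latex
% Devereux--Griffith closed forms, retained-earnings case, no personal
% taxes (m_i = m_d = z = c = 0, so \rho = i). A is the NPV of depreciation
% allowances per unit invested; r = (1+i)/(1+\pi) - 1 is the real rate.
\begin{align}
\tilde{p} &= \frac{(1-A)\left[\,i - \pi + \delta(1+\pi)\,\right]}
                  {(1-\tau)(1+\pi)} - \delta
  && \text{(cost of capital)} \\
\mathrm{EMTR} &= \frac{\tilde{p}-r}{\tilde{p}} \\
R^{*} &= \frac{p-r}{1+r}, \qquad
R = -(1-A) + \frac{(p+\delta)(1+\pi)(1-\tau) + (1-\delta)(1+\pi)(1-A)}{1+i} \\
\mathrm{EATR} &= \frac{R^{*}-R}{\,p/(1+r)\,}
\end{align}
```

Setting p equal to the cost of capital p̃ makes R vanish, and the EATR then collapses to the EMTR, which is precisely the link between the two indicators established by Devereux and Griffith (2003).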
In both cases, the value of φ can be calculated using the following formulas: straight-line depreciation (15) and declining-balance depreciation (16). Each country determines the depreciation method that is acceptable: straight-line, declining balance or both.

RESULTS AND DISCUSSION

The model has been applied to a set of countries in order to study the evolution of these indicators. All data related to depreciation and tax rates have been taken from the Centre for Business Taxation at Oxford University 3. Inflation data have been obtained from the International Monetary Fund's International Financial Statistics online database 4. The IMF data cover various countries and years, depending on each case (initial year in brackets in the footnote) 5.

5 Argentina (1990); Australia (1982); Austria (1981); Belgium (1983); Brazil (1990)

In the case of Brazil, an adjustment is needed to calculate the present value of depreciation for the period before 1996, since Brazilian law allowed the use of monetary restatement on the depreciation values. As the country's tax law only accepts linear depreciation, the φ calculation formula for the 1990-1995 period has been changed to: linear depreciation, Brazil (1990-1995) (17).

The monetary restatement (c.m.) has been calculated by using data extracted from the Table for Updating the Cost of Assets and Rights on the website of the Federal Revenue of Brazil (RFB), considering the annual value measured in December of each year 6.

In all the calculations of the EMTR ahead, as is standard in this literature, it has been assumed that the investment is made in machinery and equipment with a real interest rate of 5%.

Figure 1 shows the EMTR results for Brazil. The calculations have been made based on equations (10) and (17) for the period 1990-1995 and on equations (10) and (15) for the period from 1996 on.

The EMTR is very high until 1996, with marginal rates exceeding 50%. There are two main reasons for that. First, the rate of corporate income taxation in the country was 54% until 1993; it then dropped to 49% in 1994 and to 46% in 1995. A second reason is the hyperinflation of the period, which eroded the value of the depreciation allowances despite the indexation allowed at the time. The combination of high tax rates with huge inflation resulted in high marginal tax rates on investment in the country.

Inflation in Brazil was substantially reduced from 1996 on in the time series. The stabilization of the economy affected both factors. The corporate income tax rate declined to 32.4% in 1996, was 33% between 1996 and 1998, rose to 37% in 1999, and has been 34% since 2000. Inflation was also substantially reduced, and this allowed the present value of depreciation to increase, even with the end of monetary restatement in 1996. The Brazilian EMTR then stabilized at around 36%, a level slightly above the corporate tax rate 7.
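As an illustration of how equations (10), (14), (15) and (16) fit together, the sketch below implements the standard Devereux-Griffith formulas summarized earlier. It is not the calculation code behind the paper's figures: the inflation rate, depreciation rate and asset life are assumed placeholder values, chosen only so that the output lands near the post-1996 Brazilian levels reported above.

```python
# Minimal sketch of the Devereux-Griffith EMTR/EATR calculation under
# retained-earnings financing and no personal taxes. Parameter values
# (pi, delta, the 10% straight-line rate) are illustrative assumptions.

def npv_allowances(tau, phi, i, straight_line=True):
    """NPV of depreciation allowances per unit invested (first claim at t=0)."""
    if straight_line:
        # phi per year over 1/phi years, each deduction worth tau*phi
        T = round(1.0 / phi)
        return tau * phi * sum((1.0 + i) ** -j for j in range(T))
    # declining balance at rate phi over an infinite horizon
    return tau * phi * (1.0 + i) / (i + phi)

def effective_rates(tau, A, r, pi, delta, p):
    i = (1.0 + r) * (1.0 + pi) - 1.0                # nominal interest rate
    # cost of capital: real return at which the marginal project breaks even
    p_tilde = (1.0 - A) * (i - pi + delta * (1.0 + pi)) / (
        (1.0 - tau) * (1.0 + pi)) - delta
    emtr = (p_tilde - r) / p_tilde
    R_star = (p - r) / (1.0 + r)                    # pre-tax economic rent
    R = -(1.0 - A) + ((p + delta) * (1.0 + pi) * (1.0 - tau)
                      + (1.0 - delta) * (1.0 + pi) * (1.0 - A)) / (1.0 + i)
    eatr = (R_star - R) / (p / (1.0 + r))
    return emtr, eatr

tau, r, pi, delta, p = 0.34, 0.05, 0.04, 0.1225, 0.10   # assumed inputs
i = (1.0 + r) * (1.0 + pi) - 1.0
A = npv_allowances(tau, phi=0.10, i=i)                  # 10-year straight line
emtr, eatr = effective_rates(tau, A, r, pi, delta, p)
print(f"EMTR = {emtr:.1%}, EATR = {eatr:.1%}")          # ~35% for both here
```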
A comparison of the values of the Brazilian EMTR with those of other Latin American countries and with the average of OECD countries provides a better picture of the comparative impact of Brazilian taxation on the return on investment. During the period of hyperinflation and high corporate income tax rates, the Brazilian EMTR was far higher than that of its Latin American neighbors and the average for OECD countries. With the stabilization of the economy and reduced tax rates, the country witnessed a progressive decline in the EMTR, which eventually stabilized at a relatively high level. The Brazilian EMTR is second only to Argentina's, but it is well above that of Chile, Mexico and the OECD. The graph shows the gradual and steady decline of the EMTR that developed countries promoted by means of reductions in statutory rates. While the marginal effective tax rate on investment in Brazil is close to 35%, the average rate of OECD countries is about 20%.

Figure 3 ahead compares the evolution of the EMTR within the BRICS. Within the group of major developing nations, Brazil also shows a comparatively high tax on investment. However, the trajectories show that soon after the stabilization of Brazil's economy in 1996, the country had one of the lowest EMTRs in the block, higher only than China's. Over time, Russia sharply reduced its nominal rates, from 35% in 1999 to 20% from 2009, and, more recently, South Africa has also promoted cuts in taxation, with its corporate income tax rate declining from 34.55% in 2012 to 28% in 2013 and 2014. Brazil currently has the second highest EMTR in the BRICS, behind only India.

As for the EATR, it is important to remember that this indicator is suitable for making investment location decisions among different countries, since it provides a measure of the average impact of taxation on the return on investment.

Once again, the results have been calculated assuming an investment in machinery and equipment with a real interest rate of 5%. It is assumed that the real financial return expected from the investment is 10%.

Figure 4 shows the EATR results for Brazil. As in the EMTR case, it was necessary to incorporate the monetary restatement until 1996. Therefore, the calculations have been made based on equations (14) and (17) for the period from 1990 to 1995 and on equations (14) and (15) for the period from 1996 on.

Results show that the EATR was also influenced by the high inflation and the high tax rates present in Brazil until 1995. Since the stabilization of the economy in 1996 and the fall in corporate income tax rates, the EATR has remained at 35.4%, a level slightly lower than the EMTR, but above the current tax rate of 34%. The comparison of Brazil's rates to those of the major economies of Latin America and the average for developed countries (OECD) helps to understand the taxation on investment in a comparative perspective. Figure 5 ahead shows the results. The behavior of the EATR is clearly very similar to that of the EMTR. Brazil almost always has the second highest EATR level. Noteworthy is the Chilean indicator, whose values are the lowest along the whole series, ranging from 15% to 20% between 1996 and 2014, largely due to the reduced statutory rates in that country.
The following figure compares the trajectories of the EATR within the BRICS. The results are again similar to those obtained with the EMTR, with only minor differences. For example, China's EMTR was 20% and its EATR was calculated at 22%; India's EMTR was calculated at 44%, while its EATR was 42%. Compared to the other BRICS countries, Brazil also showed a relatively high rate, 35%, second only to India. Interestingly, both countries have the same nominal rate, but India's inflation is higher and the depreciation method adopted in that country is also different: India adopts declining-balance depreciation and Brazil straight-line depreciation.

It should be noted that the reduction of corporate income tax rates that occurred in the BRICS countries from 2007 on has not been followed by Brazil and India. For example, China reduced its rate from 33% to 25% in 2007; in South Africa it fell from 34.5% to 28% from 2012 on; and in Russia from 24% to 20% from 2008 on.

The framework suggested by the indicators is that Brazil may face difficulties in attracting investment. The high EATR puts the country in an unfavorable condition in the competition for attracting investment. Moreover, the high EMTR also suggests that, even if the country is chosen, the investment size may be smaller.

But some of those results must be put into perspective. Taxation is only one part of the complex decision-making process in the allocation of investments. Several assumptions about the structure and financing of the investment must be made, and some real-world features are not dealt with by the model. For example, the existence of tax benefits such as accelerated depreciation, reduced tax bases, loss compensation and incentives for research and development is not considered.

Still, the EMTR and the EATR provide a better picture of the taxation of corporate income than the simple nominal rates. Both indicators are important for providing an average indication of the size of the taxation a firm would actually face in the country. In Brazil, the combination of high rates and still high inflation ends up making it more difficult to attract investments.

CONCLUSION

This paper has investigated the role of the taxation of corporate income on investment by using an economic model to calculate two indicators of effective tax rates: the EMTR and the EATR.

It has been observed that the effective tax rates on investment in Brazil are high, despite their decrease since the stabilization of the currency in the mid-1990s. Two reasons can be pointed out for this reduction. First, the rate of corporate income taxation dropped from 54% in 1993 to 34% from 2001 on. Second, the control of inflation increased the present value of depreciation. Even with reduced effective tax rates, both the EMTR and the EATR still remain relatively high in Brazil. They are higher than the average rates of the OECD countries and those of relevant developing countries such as Mexico, Chile, Russia, China and South Africa. As highlighted before, high effective rates could put Brazil in an unfavorable condition in the competition for attracting investments.

The reduction of corporate income tax rates seems to be an appropriate measure for Brazil. The country has very high rates that do not result in higher revenues. Lower rates, accompanied by the suppression of tax benefits, are expected to reduce the opportunities for tax avoidance and can help the country to boost its competitiveness in terms of investment attraction.
A second improvement measure for the country could be a revision of the depreciation rules to make them friendlier to investment. Due to technological developments, machinery and equipment tend to depreciate faster and faster, and it would be important to adapt the tax legislation in Brazil to this trend.

Figure 4 - EATR, Brazil, 1990-2014.
Analysis of the Fire Properties of Blown Insulation from Crushed Straw in Buildings

Sustainable development in civil engineering is the clear and necessary goal of the current generation. There are many possibilities for reducing the use of depletable resources. One of them is to use renewable and recyclable materials on a larger scale in the construction industry. One possibility is the application of natural thermal insulators. A typical example is crushed straw, which is generated as agricultural waste in the Czech Republic. Due to its small dimensions and good thermal insulation parameters, this material can also be used as blown thermal insulation. The research aims to examine the fire resistance of crushed straw as blown insulation. The results of the single-flame source fire test, the thermal attack by a single burning item (SBI) test and a large-scale test of a perimeter wall segment are shown. The results show that blown insulation made of crushed straw meets the requirements of fire protection. In addition, crushed straw can also be used to protect load-bearing structures due to its behaviour. This article also briefly shows the production process of crushed straw used as blown insulation.

Introduction

Sustainable development in civil engineering entails not only the need to improve conventional construction methods and materials but also the development of new methods or the rediscovery of forgotten techniques [1,2]. This is an even larger problem at this time, as building material prices are rising. Straw as a part of buildings (roofs, insulation and load-bearing elements) appeared in the Middle Ages and was also used at the beginning of the 20th century [3]. The use of straw in construction complies with the principles of sustainable development thanks to the use of the material as a secondary raw material [4,5]. In the Czech Republic, 2 million tonnes of straw are wasted annually [6]. Worldwide, a large amount of straw is burned in the field [7] or used in incinerators for energy production [8], which entails a certain increase in the environmental burden; however, this burden can be reduced by alternative uses of straw. We must not forget the increased interest in ecological construction due to the indisputable need to further reduce the impact of human activities on the environment and global climate problems [9][10][11]. Straw can be used as prefabricated straw wall panels [12,13] or self-supporting straw bales [14,15]. Another possible use of straw in construction can be various composite structural elements, for example in the form of a straw fibre cement composite [16,17]. A very interesting possibility is the application of crushed straw in construction as blown insulation [18]. It should be noted that each type of use of straw has a different economic and environmental impact and, especially, different principles of behaviour of the entire structure and its individual elements. The thermal insulation [17,19,20], sound [21,22], moisture [23,24] and diffusion [25] properties of straw are relatively well researched, but there is a lack of more extensive knowledge of fire resistance, flammability and fire parameters. Fire characteristics of building materials play an important role in building design [26]. This aspect is even more important for buildings made of natural materials than, for example, for masonry buildings, as natural materials are expected to be less resistant to fire than industrially produced materials.
Additionally, information about less typical materials and structures and resistance to thermal attack is available, which can be used in fire resistance research [27][28][29]. This information can be considered in the research of crushed straw fire resistance. A lot of information about fire tests of straw bales can be found [3,[30][31][32], but there is less information about blown insulation with crushed straw. The article builds on previous research [33,34] and complements it with new knowledge, especially in the field of evaluation of large-format fire tests. The research aims to prove the fire resistance of crushed straw as blown insulation.

Production

The production process of crushed straw begins with the harvest of cereals [35]. By mowing grown cereals, loose straw stalks are created, and they can also be tied into bales. More often, however, straw blades are left in the field, where they dry out to a moisture value of 15% under favourable climatic conditions. Then, the loose stalks are collected using a baling machine (see Figure 1a) [36], which creates bales from them. The compression and shape of the product depend on the type of machine, which produces bales in the form of rollers or blocks. After the bales are created in the baling machine, where the stalks are tied together at the same time, they are transported to covered warehouses for storage.

The next step leading to the formation of crushed straw is the actual crushing of the straw stalks in the tied bales. After transport from the warehouse, the bales are placed on a receiving table or a rake conveyor. These machines transport the bales to a shredder (see Figure 1b) [36]. The disassembled stalks are then cut to a length of about 150 mm. The cut stalks are then crushed to the desired fraction in a hammer crusher. The crushed straw is then often used as bedding for livestock.
Figure 1. Harvesting press for large prismatic straw bales (a); biomass shredder (b) [36].

Preparation and Application

Blown insulation is an alternative to the insulation available today in the form of mats or boards. Blown insulation made of cellulose or wooden fibres is known and popular. In the future, crushed straw can be among the standard natural blown insulations. In contrast to mats and boards, the main advantage is the application of the straw by using a blower; therefore, it can fill even hard-to-reach places and spaces. In the implementation, crushed straw is poured into the application machine (see Figure 2a), and the insulator is divided into small parts using rotary blades (see Figure 2b). Subsequently, the insulation is transported to the destination through air pressure and hose lines. The application machine can be set exactly according to the insulation material parameters. Because of this setting, the optimal bulk density of the insulation in the structure can be obtained to prevent it from settling [18]. Another advantage of this type of insulation, compared to boards and mats, is the minimisation of waste during application.

Fire Tests of Crushed Straw

Not only in the Czech Republic is fire resistance determined in minutes according to legislation [37,38]. For the evaluation of materials, the reaction to fire class is used. This class shows how resistant a given material is to the spread and development of fire. Other criteria for classification into a reaction to fire class are the amount of energy released by the material during a fire, the intensity of smoke during the burning of the material and the occurrence of burning drops. This classification is based on tests performed in certified laboratories according to the procedure prescribed by legislation [37,38].
Single-Flame Source Fire Test

This experiment was partially introduced before in [33], but because the single-flame source test was the first step in determining the fire resistance, the basic principle and results are described here as well. The test was conducted according to EN ISO 11925-2 [39]. A vertical container with minimum dimensions of 180 mm × 90 mm × 40 mm was prepared and filled with the tested crushed straw. The test container was made of wire mesh, and on the exposed side was the opening for the flame. A burner with a small standard flame was placed at an angle of 45° at a distance of 40 mm above the lower edge of the container. The test was performed by moving the burner onto the test specimen at such a distance that its flame touched the specified point of contact. The burner remained in this position for 15 s. The test was performed on five samples. After the test, the sample was evaluated based on whether it ignited and whether the burnout reached a height of 150 mm from the point of contact.

Thermal Attack by a Single Burning Item (SBI) Fire Test

To determine reaction to fire classes A2 to D, thermal attack by a single burning item (SBI) according to EN 13823 was used [40]. This test simulates the course of a fire in a corner of a room on a real scale. The tested material was prepared in a vertical form in a room with floor plan dimensions of 3 m × 3 m and a height of 2.4 m. The unexposed surface of the test specimen was made of oriented strand board (OSB) panels. Filling holes for the crushed straw application were located at the top of the samples. The fire source was set at a critical point (corner) on the floor. The test segment consisted of two parts, 0.5 m and 1.0 m wide. Both parts were 1.5 m high. A 30 kW propane sand burner with at least 95% technical propane was used to cause a fire. A flue gas extractor was set up in the upper part of the room. The classification parameters of the SBI test are the fire growth rate index (FIGRA), the smoke growth rate (SMOGRA), the total smoke production during the first 600 s (TSP600s), the lateral flame spread (LFS), and flaming droplets and particles according to their occurrence during the first 600 s of the test. After measuring and calculating, these values were used to classify the reaction to fire class according to the criteria given in Table 1.

Large-Scale Fire Test of a Wall Segment

Large-scale fire tests are most often used to verify the fire resistance parameters of entire building structures. Real structures are exposed to fire tests, which are performed in specialised and certified laboratories equipped with the necessary technical facilities for these tests. In addition to the large-format fire test, which is expensive, a cheaper variant can be used for the preliminary determination and verification of fire resistance, with the possibility of verifying several test specimens at once. This cheaper variant is called a preliminary fire test according to EN 1364-1 [41].
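The classification logic behind Table 1 can be made concrete with a small sketch. This is not code from the standard or from the paper: the threshold values are the commonly cited EN 13823/EN 13501-1 limits and should be verified against the standards before any real use, and the input reading is hypothetical, chosen to reproduce the C-s1, d0 outcome reported in the results below.

```python
# Illustrative sketch of the SBI classification logic (Table 1).
# Thresholds are the commonly cited EN 13823 / EN 13501-1 limits;
# verify against the standards. Inputs are a hypothetical reading.

def classify_reaction_to_fire(figra_02, figra_04, thr_600s,
                              smogra, tsp_600s, droplets_600s):
    # Main class from fire growth rate (W/s) and total heat release (MJ)
    if figra_02 <= 120 and thr_600s <= 7.5:
        main = "B"
    elif figra_04 <= 250 and thr_600s <= 15:
        main = "C"
    elif figra_04 <= 750:
        main = "D"
    else:
        main = "E"
    # Smoke class from SMOGRA (m2/s2) and TSP600s (m2)
    if smogra <= 30 and tsp_600s <= 50:
        smoke = "s1"
    elif smogra <= 180 and tsp_600s <= 200:
        smoke = "s2"
    else:
        smoke = "s3"
    # Droplet class, simplified: d0 if no flaming droplets within 600 s
    # (the d1/d2 split by droplet persistence is omitted here)
    drops = "d0" if droplets_600s == 0 else "d1"
    return f"{main}-{smoke}, {drops}"

# A FIGRA0.2MJ just above the 120 W/s limit pushes the sample from B to C:
print(classify_reaction_to_fire(figra_02=130, figra_04=130, thr_600s=6.0,
                                smogra=20, tsp_600s=40, droplets_600s=0))
# -> "C-s1, d0"
```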
In contrast to the classical test, the segment is tested for informative purposes with smaller dimensions, only 0.8 m × 0.8 m, and the test cannot be performed with loading of the samples, because the individual segments are separated from each other by a lining. This test can be used to determine the fire resistance values I (insulation) (min) and E (integrity) (min). It is not possible to determine the fire resistance value R (load capacity) (min).

For the analysis of different combinations of properties, three different variants of the cladding were prepared together with crushed straw (see Table 2). The supporting structure of the test specimens was made of a vertically perforated LAG frame [42]. The test specimens were sheathed and filled with crushed straw with a bulk density of 90 kg·m−3. This test aimed to determine the fire resistance value EI (min) of the tested samples. The value E (integrity) indicates the ability of the material (sample) to prevent the passage of flame and hot gases when heated on one side and to prevent the occurrence of flames on the unexposed side. The value I (insulation) indicates the ability of the material (sample) to limit the temperature rise on the other side when one side is heated [41]. The criteria for cracks, holes and continuous combustion are checked visually with gauges. For criterion I, the decisive factor is the time at which the temperature on the unexposed surface of the sample rises by 140 °C above the average initial temperature of this surface. The temperature is measured using thermocouples during the test.

To ensure the external conditions for the preliminary fire test according to the standard [41], it was necessary to prepare the samples under such conditions that the moisture content of the sample was close to the normal conditions in practice. During the test, the ambient air temperature in the vicinity of the test furnace must not fall by more than 10 °C and must not increase by more than 20 °C from an initial temperature of between 10 °C and 40 °C. It is also important to check the furnace temperature, which must not rise above 50 °C in the five minutes before starting the test. The values recorded by the individual thermoelectric sensors before the test need to be checked. These values determine the initial measurement temperature. The test time is recorded from the moment the burners ignite in the furnace. The sensors record the temperature every minute.

Single-Flame Source Fire Test

The tested crushed straw was natural, without any chemical or artificial additives. The bulk density of the samples was 90 kg·m−3. The reason for choosing the bulk density of 90 kg·m−3 was the finding that no settling would occur over time [18]. The sample was ignited, and after moving the burner away, the sample was immediately extinguished (see Figure 3a). The flame spread was up to 65 mm from the bottom edge (see Figure 3b). During the test, no burning particles of crushed straw fell off the container.
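The insulation criterion described above lends itself to a simple per-minute evaluation of the thermocouple readings. The sketch below shows one way to do this; it is a hypothetical illustration, not the laboratory's evaluation code, and for brevity it checks only the mean-temperature-rise condition quoted in the text (the standard also caps the rise at any single measurement point).

```python
# Illustrative sketch (hypothetical data): evaluating criterion I from
# per-minute thermocouple readings on the unexposed surface. Criterion I
# fails when the mean surface temperature rises more than 140 °C above
# its initial mean; the single-point limit is omitted for brevity.

def insulation_failure_minute(readings, initial_mean):
    """readings: list of per-minute lists, one temperature per thermocouple."""
    for minute, temps in enumerate(readings, start=1):
        mean_rise = sum(temps) / len(temps) - initial_mean
        if mean_rise > 140.0:
            return minute
    return None  # criterion I not reached during the test

# Hypothetical series: three thermocouples, slow rise, then a jump
series = [[20 + m, 21 + m, 19 + m] for m in range(90)]
series += [[180, 175, 185]]  # sheathing falls off, surface heats rapidly
print(insulation_failure_minute(series, initial_mean=20.0))  # -> 91
```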
Experimental results are shown in Table 3. The single-flame source test showed that crushed straw placed in the structure does not heavily contribute to the spread of fire. Based on the results, crushed straw can be classified in a better reaction to fire category than E. For this reason, another test was prepared: thermal attack by a single burning item.

Thermal Attack by a Single Burning Item Fire Test

During the SBI experiment, at time t = (300 ± 5) s, the main burner ignited, and surface ignition of the test specimen occurred. The flame from the burner spread over the surface of the sample only to the upper edge of the test sample. There was no flame spread to the side edge of the short or long wing (see Figure 4). Therefore, there was no lateral flame spread (LFS) to evaluate for this parameter. No flaming particles or droplets fell from the surface of the exposed parts of the test sample. According to Table 1, the tested sample can be classified as d0. The smoke growth rate parameter (SMOGRA) and the total smoke production of the test specimen in the first 600 s of the fire test (TSP600s) met the requirements even for Class A2 according to the reaction to fire, and crushed straw can be classified as s1 according to smoke production. The fire growth rate index (FIGRA0.2MJ) exceeded the required value FIGRA0.2MJ ≤ 120 W/s, necessary for classification into reaction to fire class B. The difference between the required value and the measured value was very small. Crushed straw with a density of 90 kg·m−3 can be classified as Class C-s1, d0. The evaluation parameters and classification of the individual test samples from the crushed straw experiment are given in Table 4.

Large-Scale Fire Test of a Wall Segment

The fire test can be terminated after reaching one of the limit states (E, I), when the safety of operating personnel is endangered, if there is a risk of damage to the test equipment, or at the request of the client. In this case, the preliminary fire test was terminated when the integrity limit state (E) was reached. This limit state was reached in the 92nd minute from the start of the test. The insulation limit (I) was not reached during the test. The temperature curves of all samples are shown in Figure 5.
Sample No. 1 was coated on the exposed side with a 12.5 mm gypsum fibreboard, which itself has a reaction to fire class of A2 (see Figure 6b). On the unexposed side, the cladding was made of a 25 mm cement panel WS with reaction to fire class A2-s1. It was assumed that, because the cladding material was classified in the better reaction to fire class, the sample with this cladding would have the best fire resistance. This hypothesis was not confirmed. The best results among the tested samples were observed in the case of the composition with a 15 mm oriented strand board with reaction to fire class D on the exposed side (see Figure 6a).

From the measured values of surface temperatures on the unexposed side and the reaction of the tested samples in terms of the limit states of integrity (E) and insulation (I), a preliminary value of fire resistance EI (min) could be determined. As the test specimens were multilayered, with wooden load-bearing elements suitable for wooden constructions, it is necessary to supplement the EI value with a classification of components and parts according to Categories DP1 (nonflammable structural system), DP2 (mixed structural system) and DP3 (flammable structural system). For this classification, the material from which the load-bearing part of the structure is made and its influence on the intensity of the fire are important. In Category DP1, the load-bearing system consists only of noncombustible materials (Classes A1, A2), and only for buildings up to a height of 2.5 m. Categories DP2 and DP3 include structures with load-bearing elements made of materials with reaction to fire in Classes A2 (buildings higher than 2.5 m) to D. For their classification, however, the decisive material is the cladding of the structure and its reaction to fire class. This classification is important in practice because the legislation specifies exactly what materials can be used for a given space and function, e.g., load-bearing, fireproof, partition walls, horizontal structures, etc. This classification according to structural parts appears only in Czech legislation, as Czech requirements for wooden buildings are among the strictest in Europe.

The tested segments contained load-bearing elements made of wood, pertaining to reaction to fire class D. The cladding of the exposed side for sample No. 1 consisted of material with reaction to fire class A2, i.e., gypsum fibreboard FC. Therefore, the samples could be classified as Category DP2. For sample No.
2, the cladding on the exposed side consisted of a flammable OSB board of reaction to fire class D. This sample could only be classified into Category DP3.

To determine the fire resistance time of structures in Category DP2, the decisive factor is the time for which one of the limit states and the integrity of the surface layers, the cladding, is maintained. The surface layers must limit the burning of the load-bearing parts and insulations (thermal or sound) so that they do not ignite within the required time, which could cause burning and increase the intensity of the fire. The lowest time at which the first of the assessed limit states (E, I, R) was exceeded was taken as the determining time of fire resistance.

During the test, the time to reach the limit states I and E was monitored on the unexposed side of the furnace. Furthermore, the time for which the integrity of the cladding of the segments was ensured was monitored for classification into Categories DP2 and DP3. After the sheathing fell off or burned off, the supporting elements of the samples burned. In this case, constructions with compositions identical to the samples cannot be classified into Category DP2. The crushed straw did not fall out during the test and was thus able, at least partly, to protect the supporting elements of the sample from the flame. In the case of the load-bearing elements, only parts of them gradually burned out in places that were directly exposed to fire. The other elements were without major damage.

Based on the measured surface temperatures on the unexposed side of the furnace and the behaviour of the samples during the fire test, the indicative values of fire resistance EI (min) were determined (see Table 5). In the experiment, the temperature sensors were placed only on the unexposed surfaces of the samples, not inside. The estimation of the behaviour of the samples inside during the fire test was performed based on visual observation of the exposed side of the furnace and the measured surface temperatures. There was a rapid increase in the average surface temperature on the unexposed side after the sheathing burned off or fell off. Within 5 min, between the 21st and 25th minutes, the surface temperature of the sample increased to the maximum surface temperature measured during the test (88 °C). The other samples had a similar course. It can be assumed that the course of fire propagation in samples with crushed straw after the falling off or burning of the casing will be identical to the course of flame propagation in the flammability test described in Section 4.1. Similarly, as in the single-flame source test, after the ignition of the crushed straw and its charring in the sample, the flames did not spread deeper into the layer of crushed straw. This finding corresponds to the theory that flame does not propagate into the depth of the straw.
Based on the results, one can notice that most specimens classified as DP3 have a fire resistance of 90 min and can be classified as EI 90 [41]. This fire resistance could be higher for some specimens, but the preliminary fire test was terminated at 92 min. Only segment No. 2 has a fire-resistance value of EI 60, as a permanent flame appeared in this sample at the 89th minute of the test. The achieved values allow load-bearing structures of the same compositions to be designed up to the fourth degree of fire safety of the fire section. Discussion All these fire tests need to be considered in terms of the practical use of these wall compositions, which can be highly valued alternatives to other thermal insulation materials in residential structures [42]. It is necessary to analyse other properties of these materials; e.g., the thermal conductivity of crushed straw is λ = 0.045 W·m−1·K−1, while cellulose as an alternative blown insulation has values of about λ = 0.036 W·m−1·K−1 [43]. The values for other materials are as follows: wood, λ = 0.2 W·m−1·K−1; gypsum board, λ = 0.17 W·m−1·K−1; rock wool, λ = 0.038 W·m−1·K−1 [44]. From this point of view, crushed straw is not the best, but it shows good performance. In terms of fire resistance, straw has similar or the same properties as cellulose and, at the same time, much better resistance than mineral wool, fibreglass and bamboo, even with fire retardants added [44,45]. From the point of view of the fire properties of crushed straw, i.e., its reaction to fire and the fire resistance of structures filled with it, the fire tests have shown that crushed straw has fire properties comparable to the natural blown insulations used today. The requirement for fire resistance then depends on the type of structure (for example, ceiling, wall, roof), on whether the structure is located on an underground or above-ground floor, and on the degree of fire safety of the fire section (type of fire zone). For small standard buildings, such as single-family homes, the standard requires a fire resistance of at least 15 min. If a tested sample reaches, for example, a fire resistance of EI 15 DP2 during the test, we know that such a construction can be used, for example, in family houses [41]. The reaction to fire class of crushed straw with a bulk density of 90 kg·m−3 was classified, based on the SBI fire test, into Class C-s1, d0: materials with a limited contribution to fire. For example, blown cellulose in a wall, with a bulk density of 70 kg·m−3 (trade name CLIMATIZER PLUS, CZ), has reaction to fire class B-s1, d0; blown cellulose is thus a material with a very limited contribution to fire. Blown wood fibre (trade name STEICO ZELL, CZ) with a bulk density of 32 to 40 kg·m−3 has reaction to fire class B-s2, d0. Both materials have a lower contribution to fire than crushed straw, but in terms of normative properties, all these materials are flammable. Because they are not classified as A1 or A2 (nonflammable materials), their use in building construction is limited. If we compare the results of the SBI fire test of crushed straw (see Table 4) with the classification criteria, we can see that crushed straw failed to meet only the fire growth rate (FIGRA0.2MJ) criterion (W/s) [40]. The limit for Classes A2 and B is FIGRA0.2MJ ≤ 120 W/s, whereas crushed straw has FIGRA0.2MJ = 123.0 to 134.9 W/s (see Table 4). For example, blown wood fibre insulation (trade name STEICO ZELL) has FIGRA0.2MJ ≤ 72.1 W/s, and blown cellulose insulation has FIGRA0.2MJ of about 100 W/s. If we compare the other SBI fire test criteria, crushed straw has lower values than wood and cellulose blown insulations. Improving the FIGRA criterion of crushed straw would be possible only by adding a chemical fire retardant, such as the magnesium sulphate retardant contained in cellulose insulation. However, for the practical use of crushed straw, flame retardants are of little use.
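The FIGRA comparison above can be summarized by a one-criterion check. The sketch below is illustrative only, since a full EN 13501-1 classification weighs several further criteria (THR, SMOGRA, lateral flame spread) that are omitted here:

```python
# Hedged sketch of the FIGRA criterion discussed above: a material passes
# the class A2/B threshold only if FIGRA_0.2MJ <= 120 W/s. This checks one
# criterion, not the full reaction-to-fire classification.

FIGRA_LIMIT_B = 120.0  # W/s, limit for classes A2 and B

def meets_class_b_figra(figra_w_per_s: float) -> bool:
    return figra_w_per_s <= FIGRA_LIMIT_B

for material, figra in [
    ("crushed straw (worst test run)", 134.9),
    ("crushed straw (best test run)", 123.0),
    ("blown wood fibre (STEICO ZELL)", 72.1),
    ("blown cellulose", 100.0),
]:
    verdict = "meets" if meets_class_b_figra(figra) else "fails"
    print(f"{material}: FIGRA = {figra} W/s -> {verdict} the class B limit")
```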
Conclusions The presented research on the fire parameters of crushed straw has proven that crushed straw placed in a structure does not contribute to the spread of fire. The general assumption that constructions made of straw burn easily and quickly was thus dispelled. Crushed straw can be a suitable and cheaper alternative to other blown insulations. In real construction, crushed straw must be compacted to the required density; the minimum bulk density of crushed straw in structures should be 90 kg·m−3. Under these conditions, a flame acting on the surface of crushed straw does not penetrate deeper into the structure, as the surface becomes partially closed. Thanks to this phenomenon, crushed straw can be used as insulation in wooden structures to protect their load-bearing elements for a certain period and thus to extend their fire resistance. Author Contributions: The contribution was fully prepared by J.T. The author has read and agreed to the published version of the manuscript. Funding: Financial support from VŠB-Technical University of Ostrava by means of the Czech Ministry of Education, Youth and Sports through institutional support for the conceptual development of science, research and innovation for the year 2021 is gratefully acknowledged. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: All data are in the article.
8,989.4
2021-08-01T00:00:00.000
[ "Physics" ]
Drug Disease Relation Extraction from Biomedical Literature Using NLP and Machine Learning Extracting the relations between medical concepts is very valuable in the medical domain. Scientists need to extract relevant information and semantic relations between medical concepts, including protein and protein, gene and protein, drug and drug, and drug and disease. These relations can be extracted from the biomedical literature available in various databases. This study examines the extraction of semantic relations that can occur between diseases and drugs. Findings will help specialists make good decisions when administering a medication to a patient and will allow them to continuously be up to date in their field. The objective of this work is to identify different features related to drugs and diseases from medical texts by applying Natural Language Processing (NLP) techniques and the UMLS ontology. The Support Vector Machine classifier uses these features to extract valuable semantic relationships among text entities. The contributing factor of this research is the combination of the strength of a suggested NLP technique, which takes advantage of the UMLS ontology and enables the extraction of correct and adequate features (frequency features, lexical features, morphological features, syntactic features, and semantic features), and Support Vector Machines with a polynomial kernel function. These features are manipulated to pinpoint the relations between drug and disease. The proposed approach was evaluated using a standard corpus extracted from MEDLINE. The findings considerably improve the performance and outperform similar works, especially the F-score for the most important relation, "cure," which is equal to 98.19%. The accuracy percentage is better than those in all the existing works for all the relations. Introduction Biomedical information is abundantly available in journal articles and research studies in various databases, such as MEDLINE, PubMed, and Medscape. Scientists need to automatically extract relevant information, for instance, semantic relations between medical entities, from these databases. For example, scientists need to know which drug cures a given disease or which diseases are the side effects of a given drug. These relations can help specialists update their knowledge and improve their expertise in their field. These relations can be discovered from a variety of texts in the biomedical literature. Various methods have been applied to extract relations from the biomedical literature [1][2][3][4][5]. The relationship extraction studies have focused on specific types of relations, including interactions between protein and gene, protein and protein [6], drug and disease, and drug and drug [7]. Therefore, the objective of this study is to contribute to a better understanding of the drug-disease relation. This paper explores the extraction of drug-disease relations from biomedical texts and proposes a semantic relation extraction approach between biomedical entities (drug and disease) which exploits the specific features of these entities, discovered using a suggested NLP technique and the UMLS ontology. These extracted features form the input to the Support Vector Machine (SVM) classifier for the classification of relations between these entities. Extraction of Relations between Medical Concepts.
Many different biomedical text relation extraction strategies have been proposed to discover relationships, including protein and protein, gene and gene, gene and protein, gene and disease, gene and drug, and drug and drug. The works about protein-protein relation extraction are generally based on the identification of protein features (lexical features) rather than similarity methods [8][9][10] or classification methods [11], which are applied to discover the interaction between pairs of proteins. For gene-gene relation extraction, researchers focused on the use of ontologies, such as Gene Ontology [12], or statistical models [13,14]. To identify gene-protein relations, various works have proposed the use of machine learning and NLP techniques [15][16][17]. To discover gene-disease relations, classification models that support these relationships were built [18]. In other works, NLP tools and ontologies were exploited [19][20][21]. For gene-drug relation extraction, various works recommended text mining approaches supported by classification models [22]. To discover drug-side effect relations, dictionaries and ontologies were built from the Unified Medical Language System (UMLS) Metathesaurus [29]. Drug-Disease Relation Extraction. Discovering the relationship between drugs and diseases plays a crucial role in medical domain development. The huge medical literature sources allowed the automatic identification of significant relations hidden in free text. Various computational methods have been proposed to discover the relations between drugs and diseases. Rosario and Hearst [30] proposed a method that distinguishes seven relations between two semantic entities, "treatment" and "disease." Five graphical models and a neural network were presented. Seven relations were detected, but only three relations, namely, cure, prevent, and side effect, were represented, with accuracy levels of 92.6, 38.5, and 20, respectively. Abacha and Zweigenbaum [31] suggested a hybrid approach associating a pattern-based method and a statistical learning method (linear SVM) to extract two relations between a disease and a treatment. F-scores were given as the effectiveness measure: 95 and 15.15 for cure and prevent, respectively. Frunza et al. [32,33] applied a machine learning technique to extract diseases and treatments from medical papers. Six classification algorithms were used, including probabilistic models, adaptive learning models, decision-based models, and linear classifiers like SVM. Three data representation techniques were adopted to extract treatment relations: Bag-of-Words, NLP, and medical concepts. The effectiveness measures of the three detected relations, namely, cure, prevent, and side effect, are 93.6, 76.5, and 50, respectively. Suchitra and Sudah [34] used NLP and machine learning techniques to extract relations between drugs and treatments. Rule-based approaches, statistical models, and logic techniques were used for co-occurrence analysis. A Bloom filter was applied to remove unwanted data. Naive Bayes, SVM, inductive logic techniques, and statistical models were used. The obtained results had an overall F-score of 90.3 and an overall accuracy of 90 for the three extracted relations, namely, cure, prevent, and side effect. Muzaffar et al. [35] used the Unified Medical Language System and ranking algorithms to rank verb phrases. The relations between drugs and treatments were classified using SVM and Naive Bayes techniques.
Three relations were detected, namely, cure, prevent, and side effect. The F-scores were 98.05, 93.55, and 88.89 for cure, prevent, and side effect, respectively. The accuracies were 96.1, 97.4, and 96.4 for the cure, prevent, and side effect relations, respectively. Wang et al. [36] suggested a pattern-based relationship extraction method to extract two types of relations between drugs and diseases, namely, treatment (a drug treats/cures a disease) and inducement (the side effect of a drug). They created a drug and disease lexicon from the UMLS and used drug-disease pair seeds for the pattern-based method to extract the relations between drugs and diseases. The reported results showed an F-score of 90.49 for the cure relation and an F-score of 87.56 for the side effect relation. Some researchers proposed relation extraction between three concepts, namely, drug, disease, and protein [37] or drug, disease, and gene [38]. Other researchers have focused on a particular disease or a particular drug when looking for relations, for example, the extraction of treatments for psoriasis [39], the association between diabetes and the treatments for diabetes [40], and the effect of estrogen replacement therapy on Alzheimer's disease and Parkinson's disease [41]. Table 1 shows a comparison between the most important works in the field of relation extraction between drugs and diseases. The existing works based on the drug-disease relationship did not take into account many important features of drugs and diseases. These features (frequency features, lexical features, morphological features, syntactic features, and semantic features) can be very useful for the detection of good and valuable relations. To overcome this issue, we proposed a novel methodology that discovers the drug-disease association based on a Natural Language Processing strategy with the help of the UMLS ontology and a machine learning technique, namely the SVM model, for automatic relation extraction from biomedical texts. The Proposed Approach The methodology adopted in this study was developed from studies and concerns related to relation extraction from the medical literature, text mining, and machine learning. The proposed approach entailed three main components, namely, preprocessing, feature extraction, and relation extraction. The first component starts with free-text sentences, performs a preprocessing task, and outputs a set of annotated words. The second component identifies various features of the sentences, which later help the relation extraction. Thirdly, the output of the previous component is fed into a machine learning component, thereby completing the identification of associations between drug and disease entities. The architecture of the proposed approach, named "DDRel," is shown in Figure 1. The steps outlined in Figure 1 are discussed in detail in the following subsections. Preprocessing. Preprocessing, the first step of the approach, was based on Natural Language Processing (NLP) techniques. It eliminates noisy data and outputs all words in the medical texts related to the biomedical concepts (treatments and diseases). It included four major stages, namely, (i) sentence splitting, (ii) tokenization, (iii) part-of-speech tagging, and (iv) semantic annotation; a minimal sketch of the first two stages is given below.
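The sketch below is a minimal pure-Python stand-in for the first two stages; the actual system relies on GATE/ANNIE components and MetaMap, whose behaviour this regex-based illustration does not reproduce:

```python
import re

# Minimal stand-in for sentence splitting and tokenization as described
# above (the paper itself uses ANNIE components); POS tagging and the
# UMLS semantic annotation are out of scope for this sketch.

def split_sentences(text: str) -> list[str]:
    # Split on sentence-final punctuation followed by whitespace.
    return [s.strip() for s in re.split(r'(?<=[.?!])\s+', text) if s.strip()]

def tokenize(sentence: str) -> list[str]:
    # Words (optionally hyphenated) or single non-space symbols.
    return re.findall(r"[A-Za-z0-9]+(?:-[A-Za-z0-9]+)*|[^\sA-Za-z0-9]", sentence)

text = ("Preliminary evidence suggests that interferons beta may also induce "
        "regression of metastatic renal cell carcinoma. Further trials are needed.")
for sid, sentence in enumerate(split_sentences(text)):
    print(sid, tokenize(sentence))
```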
Sentence Splitting. This step divided texts into smaller units, and an identifier was assigned to each unit. Texts were segmented into sentences using the punctuation markers ".", "?", and "!". In this step, the ANNIE English Sentence Splitter was used as a cascade of finite-state transducers to split the text into sentences, as shown in Figures 2(a) and 2(b). Tokenization. After sentence splitting, each sentence was segmented into tokens. Tokenization is the segmentation of sentences into a sequence of words using nonalphabetic characters, such as a line break, space, or punctuation characters. The result of tokenization was presented as an XML file that gathers tokens associated with the following: (i) the sentence identifier (id-sentence); (ii) the token identifier (id); (iii) the token length (length); (iv) the token orthography (orth); (v) the token kind (kind); and (vi) the token itself (string). The display of the XML file for the user is presented in Figure 3(a). Part-of-Speech Tagging. Part-of-Speech (POS) tagging is the method of annotating words in a text according to their grammatical function, definition, and context, such as noun (NN), verb (VB), adjective (JJ), conjunction (CC), and plural noun (NNS). The algorithm of the ANNIE POS Tagger was implemented. The output is an XML file in which each word is associated with its grammatical function. The display of the XML file for the user is presented in Figure 3(b). Semantic Annotation. This step involved the extraction of the named entities of drugs and diseases. Extracting drugs and diseases was difficult for many reasons: each medical concept can be identified by several synonyms, different terms, and abbreviations, and simple dictionaries cannot cover new drugs and diseases in our context. The MetaMap system was configured to detect the concepts of the UMLS Metathesaurus hidden in the biomedical texts. The UMLS is a medical ontology that originated from the National Library of Medicine. The output of this step was the identification of concepts as Concept Id, Concept Name, Preferred Name, and Semantic Type. The most important information extracted in this step was the Semantic Type, which is defined in the UMLS. This significant knowledge helps determine whether a concept is a drug or a disease. Figure 4 shows the results of the semantic annotation. Feature Extraction. Feature extraction was the second step of the proposed approach. It builds features as combinations of characteristics and is inspired by Rosario and Hearst [30] in relation to the semantic type. The features for each word in a sentence were as follows: the semantic types, such as Word, Part of Speech (POS), and Phrase Constituent, belonging to the same chunk as in the previous work; the MeSH mapping of the words; Domain Knowledge; and morphological features. In this work, unlike that of Rosario and Hearst [30], the features were built for each sentence instead of each token. Moreover, new kinds of features were created, which were assumed to be more suitable for extracting drug-disease relations. The new features proposed to extract drug-disease relations include the following: (i) frequency features, (ii) lexical features, (iii) morphological features, (iv) syntactic features, and (v) semantic features. Frequency Features. The frequency features represented the following: (i) the order of words present in an NE; (ii) the order of words present between every two NEs; (iii) the sequence of "n" words preceding every NE; and (iv) the sequence of "n" words after every NE. A sketch of these window-style features follows.
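A hedged sketch of such window-style context features is given below; the feature names and the window size n are illustrative choices, not the paper's exact feature vocabulary:

```python
# For each named entity (NE) in a tokenized sentence, collect the n tokens
# before and after it, plus the token span between each pair of NEs.

def ne_window_features(tokens, ne_spans, n=3):
    """tokens: list of words; ne_spans: list of (start, end) index pairs."""
    feats = {}
    for i, (s, e) in enumerate(ne_spans):
        feats[f"before_NE{i}"] = tokens[max(0, s - n):s]
        feats[f"after_NE{i}"] = tokens[e:e + n]
    for i in range(len(ne_spans) - 1):
        feats[f"between_NE{i}_NE{i+1}"] = tokens[ne_spans[i][1]:ne_spans[i + 1][0]]
    return feats

tokens = ("interferons beta may also induce regression of "
          "metastatic renal cell carcinoma").split()
# NE0 = 'interferons beta' (TREAT), NE1 = 'metastatic renal cell carcinoma' (DIS)
print(ne_window_features(tokens, [(0, 2), (7, 11)], n=2))
```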
Morphological Features. In this step, morphological features were extracted, including the following: (i) the lemma order of the words between every two NEs; (ii) the lemma order of the "n" words preceding every NE; and (iii) the lemma order of the "n" words after every NE. Syntactic Features. These features concern the POS of each NE and include the following: (i) the POS order of words between every two NEs; (ii) the POS order of the "n" words preceding each NE; (iii) the POS order of the "n" words after every NE; (iv) the verb sequence between every two NEs; (v) the first verb preceding every NE; and (vi) the first verb after every NE. Semantic Features. The purpose of this step is to extract the semantic types of the concepts in the sentence. The values of these semantic types are DIS (DISease) and TREAT (TREATment). Example of Feature Extraction. Consider the following sentence: "Preliminary evidence suggests that interferons beta may also induce regression of metastatic renal cell carcinoma." The output of the feature extraction step for this sentence is provided in detail in Table 2. The result of feature extraction is displayed for the user in Figure 5. Relation Extraction. The relationships between drug and disease were extracted using a machine learning classifier. The relation extraction process is based on a classification process that proceeds according to the following relation classes: CURE, PREVENT, SIDE EFFECT, NO CURE, and OTHER RELATION. This classification helped extract relations between entities and was performed by exploiting the extracted features output by the previous step. Traditional machine learning classification techniques perform poorly when the classified data are immense. Therefore, this approach used an SVM, which scales up relatively well to high-dimensional data [3]. SVM is a well-known supervised learning algorithm. The input of this algorithm is the set of features detected in the previous step. These features are used by the machine learning method to find a hyperplane that separates the feature space into classes with a maximum margin. When maximizing the margin, the SVM algorithm attempts to achieve maximum separation between classes and thus minimize misclassification errors. In this paper, a supervised SVM classifier was used to classify the drug-disease relations from biomedical databases. The objective of the SVM was to discriminate between classes of relations. The SVM was used with a polynomial kernel, because this type of kernel function is very well suited to our context. The first step of relation extraction was to provide the classifier with a training set. The training set is composed of feature vectors: labeled data assigning a relation class to each sentence, as follows: CURE, PREVENT, SIDE EFFECT, NO CURE, and OTHER RELATION. The training set is used by the SVM to build a model that predicts the target relation class. The second step was the prediction. To predict the relation class for each sentence in the data file, the SVM applies the model to the feature vectors already created in the preprocessing and semantic annotation steps. These vectors gather all the features related to each sentence in the data file (one vector for each sentence). Figure 6 shows the results of the relation extraction. By clicking on drug-disease relations extraction, the list of drugs and a list of relations are displayed. Alternatively, when choosing a drug and a type of relation (prevent, cure, ...), the diseases that have such a relationship with this drug are displayed.
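As an illustration of this two-step classification (not the authors' code), the sketch below trains a polynomial-kernel SVM on hashed feature dictionaries using scikit-learn; the toy features, the hashing dimension, and the polynomial degree are all assumptions:

```python
from sklearn.feature_extraction import FeatureHasher
from sklearn.svm import SVC

# Toy feature dictionaries standing in for the sentence feature vectors;
# string values are hashed by FeatureHasher as "name=value" indicators.
train_feats = [
    {"verb_between": "cures", "sem_pair": "TREAT-DIS"},
    {"verb_between": "treats", "sem_pair": "TREAT-DIS"},
    {"verb_between": "prevents", "sem_pair": "TREAT-DIS"},
    {"verb_between": "protects", "sem_pair": "TREAT-DIS"},
    {"verb_between": "causes", "sem_pair": "TREAT-DIS"},
    {"verb_between": "induces", "sem_pair": "TREAT-DIS"},
]
train_labels = ["CURE", "CURE", "PREVENT", "PREVENT",
                "SIDE EFFECT", "SIDE EFFECT"]

hasher = FeatureHasher(n_features=64, input_type="dict")  # size is a guess
clf = SVC(kernel="poly", degree=2)  # polynomial kernel; degree assumed
clf.fit(hasher.transform(train_feats), train_labels)

test = [{"verb_between": "cures", "sem_pair": "TREAT-DIS"}]
print(clf.predict(hasher.transform(test)))  # typically ['CURE'] on this toy data
```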
Experiment Setup. To validate the proposed approach, a system was implemented. Screenshots are presented in Figures 2-6. For the experiments, we used the standard corpus obtained from MEDLINE 2001. This corpus was annotated with the types of semantic relationships between a treatment (TREAT) and a disease (DIS). These relationships were CURE, PREVENT, SIDE EFFECT, and NO CURE. This corpus was validated using the MEDLINE 2001 database of biomedical papers [30]. The corpus was used to guarantee the validity of the comparison of the results. 4.2. Results. For the evaluation, performance measures were deduced from a confusion matrix, shown in Table 3, with rows and columns and the following categories: False Positives (FP), False Negatives (FN), True Positives (TP), and True Negatives (TN). A particular row in the matrix records the instances in an actual class, and each column records the instances in the predicted class. The confusion matrix for the implemented system is for multiclass classification, as shown in Table 4. For the class CURE, there are 785 TP instances, because they are CURE instances and are predicted as CURE. The number of FN instances is 25 = 10 + 5 + 10, because they belong to the class CURE but are not predicted as such. The number of FP instances is 4 = 2 + 1 + 1, because they are predicted as CURE but do not belong to that class. The number of TN instances is 82 = 57 + 25 + 0, because they are neither predicted as CURE nor belong to it. For recall and precision, only the results of Abacha and Zweigenbaum [31] and Wang et al. [36] were available. The recall and the precision of the class NO CURE for Abacha and Zweigenbaum [31] were not available (NA). Also, the recall and the precision of the classes PREVENT and NO CURE for Wang et al. [36] were not available, because this work was interested in only two relations, namely, CURE and SIDE EFFECT. The recall in Table 5 shows that Abacha and Zweigenbaum [31] had a better recall (100%) compared with Wang et al. [36] (89.8%) and with the proposed approach (96.91%) for the extraction of the CURE relation. For the rest of the relations, the proposed approach performed better. Also, for the precision measures in Table 6, the proposed approach performed better than the works of Abacha and Zweigenbaum [31] and Wang et al. [36]. The F-score measure was not reported in the work of Rosario and Hearst [30]. Also, the F-score measure of the class PREVENT was not available for Wang et al. [36], because this work was interested in only two relations. Moreover, the F-score measure of the class SIDE EFFECT was not available for Abacha and Zweigenbaum [31]. The accuracy measure was not available for the works of Abacha and Zweigenbaum [31], Frunza et al. [33], and Wang et al. [36]. Nevertheless, the results in Table 8 show that the proposed approach achieved a higher accuracy compared with all similar works for all the relations. The accuracy of NO CURE was not reported in any work except the proposed approach. Table 9 presents the specificity measure of the implemented system; this measure is not available in the other works. The results computed in this study were promising and showed that the combination of the used techniques outperforms the majority of the previous approaches using the same corpus.
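The CURE-class arithmetic above can be checked directly; the short script below reproduces the reported recall (96.91%) and the F-score of 98.19% quoted in the abstract from the stated confusion-matrix counts:

```python
# Worked check of the CURE-class counts quoted above
# (TP = 785, FN = 25, FP = 4, TN = 82).

tp, fn, fp, tn = 785, 25, 4, 82

recall = tp / (tp + fn)
precision = tp / (tp + fp)
f_score = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tp + fn + fp + tn)

print(f"recall    = {100 * recall:.2f}%")     # 96.91%
print(f"precision = {100 * precision:.2f}%")  # 99.49%
print(f"F-score   = {100 * f_score:.2f}%")    # 98.19%, as in the abstract
print(f"accuracy  = {100 * accuracy:.2f}%")   # 96.76%
```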
The possible reasons for this are the appropriate mixture of the suggested NLP technique and the UMLS ontology in the detection of relevant features (frequency features, lexical features, morphological features, syntactic features, and semantic features) for drug and disease, and the machine learning method (SVM). The proposed approach seems to be suitable when dealing with semantic relations in natural language texts. The novel idea presented in the study is the integration of a novel NLP approach reinforced by the UMLS ontology and a machine learning method that performed better in a multidimensional context. Conclusion and Future Work We proposed a novel computational approach for relation extraction between drugs and diseases from the biomedical literature. This study significantly contributed to the existing literature on relation extraction between drugs and diseases. The main contribution of this work is the identification of specific features (lexical, semantic, ...) related to the medical concepts (drug and disease). This finding confirms that, in the field of text mining, these features are relevant for the discovery of interesting relationships between concepts. The experimental results showed an improvement in performance compared with other similar works. Upcoming research will focus first on further improvements of the proposed approach, with more investigations of the features of medical concepts. Then, the next direction will focus on updating the method to assist professionals in finding relevant and authentic information when extracting semantic relations between other medical entities. Data Availability No data were used to support this study. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
4,827.2
2021-05-19T00:00:00.000
[ "Computer Science" ]
Deep Learning Algorithm-Based Magnetic Resonance Imaging Feature-Guided Serum Bile Acid Profile and Perinatal Outcomes in Intrahepatic Cholestasis of Pregnancy This study aimed to explore magnetic resonance imaging (MRI) based on a deep learning belief network model in evaluating the serum bile acid profile and adverse perinatal outcomes of intrahepatic cholestasis of pregnancy (ICP) patients. Fifty ICP pregnant women diagnosed in hospital were selected as the experimental group, 50 healthy pregnant women as the blank group, and 50 patients with cholelithiasis as the gallstone group. A deep learning belief network (DLBN) was built by stacking multiple restricted Boltzmann machines, and its recognition rate was compared with that of a convolutional neural network (CNN) and a support vector machine (SVM) to determine the error rate of the different recognition methods on the test set. It was found that the error rate of the deep learning belief network (7.68%) was substantially lower than that of the CNN (21.34%) and the SVM (22.41%) (P < 0.05). The levels of glycoursodeoxycholic acid (GUDCA), glycochenodeoxycholic acid (GCDCA), and glycocholic acid (GCA) in the experimental group were significantly higher than those in the blank group (P < 0.05). Both the experimental group and the blank group showed notable clustering of the serum bile acid profile, and the experimental group and the gallstone group could be well distinguished. In addition, the incidence of amniotic fluid contamination, asphyxia, and prematurity among the perinatal infants in the experimental group was significantly higher than that in the blank group (P < 0.05). The deep learning belief model had a low error rate and can effectively extract the features of liver MRI images. In summary, the serum characteristic bile acids of ICP were glycoursodeoxycholic acid, glycochenodeoxycholic acid, and glycocholic acid, which has a positive effect on clinical diagnosis. The toxic effect of high concentrations of serum bile acids was the main cause of adverse perinatal outcomes and sudden death. Introduction Intrahepatic cholestasis of pregnancy (ICP) refers to hepatic cholestasis occurring in pregnant women, mostly in the second and third trimesters of pregnancy [1][2][3]. It is estimated that the incidence of intrahepatic cholestasis of pregnancy in different populations is between 0.3% and 15%, with most reports between 0.3% and 0.5% [4][5][6]. Palmer et al. [7] pointed out that the serum total bile acid level is related to the pregnancy outcome. Studies have pointed out that pregnant women with intrahepatic cholestasis of pregnancy have a unique profile of serum bile acid metabolism, and hepatobiliary diseases such as hepatitis, cirrhosis, and cholelithiasis all show an increase in serum total bile acid levels, but their respective bile acid profiles differ from those of pregnant women with intrahepatic cholestasis of pregnancy [8][9][10]. Therefore, among pregnant women with abnormal liver function, early and accurate identification of pregnant women with intrahepatic cholestasis of pregnancy is conducive to the early selection of appropriate intervention treatment and the improvement of adverse perinatal outcomes. Magnetic resonance imaging (MRI) has the characteristics of strong tissue contrast, no radiation, and strong repeatability, and it plays an important role in the diagnosis of liver diseases. Medical image diagnosis mainly depends on doctors' professional knowledge and clinical experience, and subjective factors may lead to different diagnosis results.
As an important branch of artificial intelligence, deep learning is widely used in medical imaging [11]. The deep learning belief network model can automatically extract target features from massive MRI medical image data; accurately segment and identify the liver's anatomical structure; divide the image signals of the various hepatic lobes, segments, porta hepatis, hepatic arteries, portal veins, and branches of the hepatic veins; and establish different layers of information and reorganize them, thus eliminating the influence of subjective factors and extracting higher-level target features, which helps doctors diagnose diseases accurately [12]. The innovation of this research is that a new MRI evaluation based on a deep learning belief network model was proposed to assess the imaging data of 50 pregnant women with ICP. Ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) was used to analyze the serum bile acid profile of ICP pregnant women in middle and late pregnancy. Then, partial least squares discriminant analysis (PLS-DA) was adopted to establish an ICP diagnostic model, and the perinatal outcomes of ICP pregnant women were analyzed. This research was developed to screen differential bile acids and analyze the related factors affecting perinatal outcomes, so as to provide evidence for clinical diagnosis and treatment. Research Objects. Fifty pregnant women with ICP diagnosed in hospital from October 15, 2019, to April 25, 2021, were selected as the experimental group. Another 50 healthy pregnant women were recruited as the blank group, and 50 patients with cholelithiasis were taken as the gallstone group. The age range was 24-43 years, with an average age of (28.82 ± 5.74) years; the gestational age was 26-38 weeks at the time of enrollment. The experiment had been approved by the ethics committee of the hospital. Patients and their families understood the research situation and signed informed consent. The inclusion criteria were as follows: (i) patients who met the diagnostic criteria for ICP in the Intrahepatic Cholestasis Diagnosis and Treatment Expert Consensus formulated by the expert committee for the diagnosis and treatment of intrahepatic cholestasis; (ii) ICP diagnosed at more than 28 weeks of gestation; and (iii) a single fetus. The exclusion criteria were as follows: (i) patients complicated with viral hepatitis, hepatolithiasis, acute fatty liver of pregnancy, gestational hypertension, gestational diabetes, premature rupture of membranes, placenta previa, or other diseases; (ii) patients with abnormal liver function and bile metabolism before pregnancy; and (iii) pregnant women with congenital heart disease or other serious congenital diseases that may affect pregnancy. Observation Indexes. Determinations were performed by the circulating enzyme method, including fasting venous blood total bile acid (TBA), total bilirubin (TBIL), conjugated bilirubin (DBIL), alanine aminotransferase (ALT), and aspartate aminotransferase (AST) of the pregnant women in the experimental group and the blank group. The perinatal cord blood bile acid (TBA), creatine kinase (CK), lactate dehydrogenase (LDH), and cardiac troponin I (cTnI) levels, as well as adverse outcomes, in the experimental group and the blank group were determined. 2.3. MRI Scan. Images were captured using an all-digital Ingenia 3.0 T superconducting magnetic resonance scanner. Liver scans were performed with a 16-channel body coil, including an MRI plain scan and an enhanced scan.
TR = 3.2 ms, TE = 1.5 ms, and flip angle = 150°. The matrix was 320 × 256, the slice thickness was 3 mm with no interslice gap, and the field of view (FOV) was 40 cm × 32 cm. Enhanced scanning was performed with a high-pressure syringe by injecting the contrast agent gadopentetate meglumine (0.1 mmol/kg) through the cubital vein at a flow rate of 2.5 mL/s, followed by 20 mL of normal saline at the same flow rate. The arterial phase, the portal vein phase, and the equilibrium phase were performed at 25 s, 60 s, and 150 s, respectively, after the contrast agent was injected. The serial images of all pregnant women were copied from the secondary operating station and the PACS system, exported in DICOM format, and stored on a mobile hard disk, and the name and number of each pregnant woman were standardized. Construction of the Deep Learning Belief Network Model. A restricted Boltzmann machine (RBM) is a probabilistic network model with a two-layer structure: the layers are fully connected to each other, with no connections within a layer, which allows the network to refine the target characteristics. It can also be used to pretrain a traditional feedforward neural network, which greatly improves the discriminative ability of the network. If multiple RBMs are stacked, a deep learning belief network (DLBN) can be formed: each low-level RBM is trained, its output is used as the input of the next, higher-level RBM, and the data are passed layer by layer. Stacking multiple RBMs in this way forms a complete deep learning belief network structure, with an abstract, characterizing feature vector formed at the highest level (Figure 1). Deep Learning Belief Network Model Training Process. The training process of the deep learning belief network model is divided into two main steps, sketched in code below. In the pretraining stage, an unsupervised layer-by-layer greedy training method is adopted: the parameters of each restricted Boltzmann machine layer are trained from the bottom to the top, ensuring that as much feature information as possible is retained when the low-level feature vectors are mapped to the high-level feature space. After pretraining, there is a supervised fine-tuning phase, in which the parameters of each layer are fine-tuned from the top layer to the bottom layer of the network. Parameter settings include number, location, size, shape, edge blur, and neighboring organization. Combined with the characteristics of the different sequences of the pregnant women, the training set was trained and adjusted to obtain a complete training data set. In the network training process, the performance of the network will be greatly reduced if only pretraining is carried out without fine-tuning the parameters. The fine-tuning process is supervised: it takes the difference between the output label of the network and the sample label as an error and propagates it layer by layer, modifying the parameters of each layer to bring the network to a better state, as illustrated in Figure 2.
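To make the two-step scheme concrete, below is a minimal numpy sketch of greedy layer-wise pretraining with one-step contrastive divergence (CD-1). It is an illustration only: the layer sizes, hyperparameters, and random stand-in data are assumptions, not the paper's actual architecture, and the supervised fine-tuning stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=50, lr=0.05):
    """Train one RBM with one-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    for _ in range(epochs):
        v0 = data
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ W.T + b_v)          # reconstruction
        p_h1 = sigmoid(p_v1 @ W + b_h)
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_v += lr * (v0 - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_h

# Greedy layer-wise pretraining: 64 -> 32 -> 16 "feature" units.
X = (rng.random((200, 64)) < 0.3).astype(float)   # stand-in for image patches
layer_input, layers = X, []
for n_hidden in (32, 16):
    W, b_h = train_rbm(layer_input, n_hidden)
    layers.append((W, b_h))
    layer_input = sigmoid(layer_input @ W + b_h)  # propagate to next RBM

print("top-level feature vector shape:", layer_input.shape)  # (200, 16)
```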
2.6. Experiment Procedure. The network structure of the deep learning belief network model was built with three layers, and the last layer was connected to a nonlinear classifier. MRI images were taken as the training samples; if no fine-tuning measures were taken after they were input into the network, the classification error rate was 21.32%. After pretraining, supervised fine-tuning of the network followed, allowing the errors to propagate while the parameters of each layer were adjusted. The MRI test images were then fed into the adjusted network as input data. The error rate of the learned classification was 5.14%, and the accuracy was increased by 16.18% compared with that before fine-tuning. Chromatography and Mass Spectrometry Conditions. For the ACQUITY UPLC BEH C18 (50 × 2.1 mm, 1.7 μm) column, mobile phase A was a 0.1% formic acid aqueous solution, and mobile phase B was a 0.1% formic acid-acetonitrile solution. The flow rate was 0.4 mL/min, and the column temperature was 45 °C. The gradient elution program for phase B was 0~2.5 min, 36%~45%; 2.5~3.5 min, 45%~47%; 3.5~4.5 min, 47%~58%; 4.5~6.5 min, 58%; 6.5~8.5 min, 58%~65%; 8.5~10.5 min, 65%~95%; and 10.5~11.5 min, 36%. The electrospray ion source was operated in negative ion mode and multiple-reaction monitoring mode. The capillary voltage was set to 3.5 kV, and the ion source temperature was set to 140 °C. The desolvation gas temperature was set to 400 °C, the desolvation gas flow rate to 800 L/h, and the cone gas flow rate to 50 L/h. 2.8. Statistical Methods. SPSS 21.0 was used for the statistical analysis of the data. Partial least squares discriminant analysis (PLS-DA) was used to compare the serum bile acid profiles and screen the differential bile acid spectrum among the groups, using SIMCA-P 13.0 (Umetrics, Sweden). The analysis results are illustrated by two-dimensional and three-dimensional score plots. Calculated data conforming to the normal distribution were expressed as mean ± standard deviation (x̄ ± s), and data not conforming to the normal distribution were expressed as percentages (%). In addition, P < 0.05 indicated a significant difference. Experimental Results of the Deep Learning Belief Network Model. The deep learning belief network model of this experiment was compared with a convolutional neural network (CNN) and a support vector machine (SVM) in terms of recognition rate, and the error rates of the different recognition methods on the test set were measured. The error rate of the constructed deep learning belief network (7.68%) was substantially lower than that of the convolutional neural network (21.34%) and the support vector machine (22.41%), and the difference was significant (P < 0.05) (Figure 3). MRI Manifestations of ICP Pregnant Women. In pregnant women with ICP, the bile ducts were dilated with cholestasis, the intrahepatic bile ducts were slightly compressed by the neck of the gallbladder, and the gallbladder was retained and expanded. There was a small amount of fluid in the gallbladder fossa, and the greater omentum was wrapped around the dilated bile ducts, which showed obvious high signal on T2WI and low signal on DWI (Figures 4(a) and 4(b)). Where the bile duct was not dilated, biliary sludge deposition was present, and irregular low-signal areas were observed within the dilated bile ducts of high signal intensity on T2WI, with markedly high signal intensity on T1WI and DWI (Figures 4(c) and 4(d)).
Analysis of the Serum Total Bile Acid Profile in Each Group. Ten known bile acids were determined in the serum of each group: lithocholic acid (LCA), ursodeoxycholic acid (UDCA), chenodeoxycholic acid (CDCA), deoxycholic acid (DCA), cholic acid (CA), taurolithocholic acid (TLCA), glycoursodeoxycholic acid (GUDCA), glycochenodeoxycholic acid (GCDCA), glycodeoxycholic acid (GDCA), and glycocholic acid (GCA). Figure 5 shows the analyses of the ten serum bile acid profiles. The serum bile acid profiles of the healthy blank group, the gallstone group, and the ICP experimental group showed different characteristics. The levels of glycoursodeoxycholic acid (GUDCA), glycochenodeoxycholic acid (GCDCA), and glycocholic acid (GCA) in the experimental group were significantly higher than those in the blank group (P < 0.05). The level of glycodeoxycholic acid (GDCA) in the gallstone group was significantly higher than that in the blank group and the experimental group (P < 0.05). Analysis of the Total Serum Bile Acid Profile between the Experimental Group and the Blank Group. Based on the levels of the ten known serum bile acids detected by mass spectrometry, the serum bile acid profiles of the experimental group and the blank group were analyzed by PLS-DA, and the serum differential bile acids were screened. The contribution values of the various bile acids are their contributions to the separation of the groups on the PLS-DA score plot; it is generally believed that bile acids with contribution values >1 can be regarded as the differential bile acids between groups. The PLS-DA model established for the experimental group and the blank group (R²Y = 0.125, Q² = 0.134) showed low values of R²Y and Q². In Figures 6(a) and 6(b), the 2D and 3D scores showed notable clustering of both the experimental group and the blank group, and the two groups could be well distinguished. Figure 6(c) shows the four bile acids with contribution values >1, which can be regarded as the serum differential bile acids between the groups. Analysis of the Total Serum Bile Acid Profile of the Experimental Group and the Gallstone Group. A PLS-DA model of the ten serum bile acids in the experimental group and the cholelithiasis group was established (R²Y = 0.258, Q² = 0.195), as illustrated in Figures 7(a) and 7(b), and the serum differential bile acid spectrum was screened. Figures 7(a) and 7(b) show a partial overlap between the experimental group and the gallstone group, indicating that the characteristics of the serum bile acid profiles of the two groups were similar, although there were differences in their individual serum bile acids. Figure 7(c) shows that the three bile acids with contribution values >1 could be used as the serum differential bile acids between the groups; the contribution value of LCA was greater than that of UDCA, which in turn was greater than that of CDCA.
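A hedged sketch of the PLS-DA screening is given below: PLS regression on a 0/1 group label, fitted with scikit-learn on simulated bile-acid data. SIMCA-P computes proper contribution/VIP values; the absolute first-component weights printed here are only a rough stand-in, so the numbers are illustrative, not the study's results:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
bile_acids = ["LCA", "UDCA", "CDCA", "DCA", "CA",
              "TLCA", "GUDCA", "GCDCA", "GDCA", "GCA"]

n = 50
blank = rng.lognormal(mean=0.0, sigma=0.3, size=(n, 10))
icp = blank.copy()
icp[:, [6, 7, 9]] *= 3.0          # elevate GUDCA, GCDCA, GCA in the "ICP" group

X = np.vstack([blank, icp])
y = np.array([0] * n + [1] * n)   # 0 = blank, 1 = ICP

pls = PLSRegression(n_components=2)
pls.fit(X, y)

weights = np.abs(pls.x_weights_[:, 0])
for name, w in sorted(zip(bile_acids, weights), key=lambda t: -t[1])[:4]:
    print(f"{name}: |weight| = {w:.3f}")   # GUDCA/GCDCA/GCA should rank high
```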
Comparison of the Blood Biochemical Index Levels of the Pregnant Women in the Experimental Group and the Blank Group. The levels of TBA, TBIL, DBIL, ALT, and AST of the pregnant women in the experimental group were significantly higher than those in the blank group (P < 0.05), as illustrated in Figure 8. The Expression of Umbilical Cord Blood Indexes in Perinatal Infants. The comparison of the perinatal serum biochemical index TBA and the myocardial enzyme spectrum CK, LDH, and cTnI levels between the experimental group and the blank group showed that the levels in the experimental group were significantly higher (P < 0.05), as illustrated in Figure 9. Figure 10 shows the comparison of the perinatal outcomes between the experimental group and the blank group. The incidence of amniotic fluid contamination, asphyxia, and prematurity among the perinatal infants in the experimental group was significantly higher than that in the blank group (P < 0.05). Discussion Intrahepatic cholestasis of pregnancy is a common clinical hepatobiliary disease that leads to adverse fetal outcomes and may lead to unexpected and sudden fetal death. Therefore, once the disease is diagnosed, intervention measures should be taken immediately [13]. Currently, for pregnant women with a gestation of about 29 weeks, the measured TBA is used as an indicator of liver function, and intrahepatic cholestasis of pregnancy is screened according to whether the clinical symptoms include pruritus [14][15][16]. However, the use of TBA alone as a laboratory indicator still has some limitations. Cifci et al. [17] reported that abnormal liver function can occur at a normal TBA level, so a normal TBA level cannot exclude the risk of ICP. The serum bile acid metabolism profile of ICP pregnant women is specific: in this study, the levels of glycoursodeoxycholic acid, glycochenodeoxycholic acid, and glycocholic acid were remarkably increased in ICP pregnant women. When intrahepatic cholestasis occurs during pregnancy, liver function is impaired and the bile acid concentration changes; clinical laboratories mainly measure the levels of total bile acid and the individual bile acids [18]. The results showed that, although the total bile acid levels were similar, the total bile acid profiles of the healthy pregnant women in the blank group, the ICP experimental group, and the cholelithiasis control group differed. Therefore, the serum bile acid profile has a positive effect on the clinical diagnosis of ICP, which is similar to the results of Chappell et al. [19]. ICP can cause great harm to both pregnant women and fetuses, especially the fetus [20]. The concentration of total bile acid in the umbilical cord blood serum of an ICP fetus is remarkably increased, mainly because the transfer of total bile acid from the fetus to the mother is limited. The fetus is accompanied by hypoxia, and myocardial cells are generally sensitive to fetal hypoxia; CK and LDH are used as specific indicators. ICP and acute hypoxia of the placenta are the direct causes of damage to the fetus. In this study, the results showed that the levels of TBA, TBIL, DBIL, ALT, and AST and the myocardial zymogram CK, LDH, and cTnI in the experimental group were higher than those in the blank group (P < 0.05). The incidence of amniotic fluid contamination, asphyxia, and prematurity among the perinatal infants in the experimental group was significantly higher than that in the blank group (P < 0.05). High concentrations of TBA have a direct toxic effect on the fetus, especially on myocardial cells, which is the main cause of adverse perinatal outcomes and sudden death. Starting from the research background and development of deep learning, this research studied the RBM-based deep learning model and its application in MRI. Firstly, it explained in detail how the RBM model constitutes the deep learning belief model and discussed the error rates of the different models in the simulation experiment. Good results were achieved in the experiment, indicating that the deep learning belief model is the optimal one [20].
Conclusion In this research, the imaging data of 50 ICP pregnant women were evaluated by constructing an MRI approach based on a deep learning belief network model. A comprehensive analysis of the serum bile acid profile in ICP pregnant women was conducted to screen for differential bile acids and to analyze the perinatal outcomes of ICP pregnant women. It turned out that the error rate of the deep learning belief model was low. The serum characteristic bile acids of ICP were glycoursodeoxycholic acid, glycochenodeoxycholic acid, and glycocholic acid, which play a positive role in clinical diagnosis. Moreover, the toxic effect of a high concentration of serum bile acids was the main cause of adverse perinatal outcomes and sudden death. However, the deficiency of this study is that the sample size is small, and the selection of cases is subjective to some extent. Therefore, the sample size should be expanded for further study at a later stage. In conclusion, this study provides a reference for the clinical diagnosis of ICP. Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
4,751.6
2022-06-06T00:00:00.000
[ "Computer Science" ]
Structural and Magnetic Studies of Bulk Nanocomposite Magnets Derived from Rapidly Solidified Pr-(Fe,Co)-(Zr,Nb)-B Alloy. In the present study, the phase constitution, microstructure and magnetic properties of nanocrystalline magnets, derived from fully amorphous or partially crystalline samples by annealing, were analyzed and compared. The melt-spun ribbons (with a thickness of ~30 µm) and suction-cast 0.5 mm and 1 mm thick plates of the Pr9Fe50Co13Zr1Nb4B23 alloy were soft magnetic in the as-cast state. In order to modify their magnetic properties, the annealing process was carried out at various temperatures from 923 K to 1033 K for 5 min. The Rietveld refinement of the X-ray diffraction patterns, combined with the partial or no known crystal structures (PONKCS) method, allowed one to quantify the component phases and calculate their crystallite sizes. It was shown that the volume fractions of the constituent phases and their crystallite sizes for the samples annealed at a particular temperature depend on the rapid solidification conditions, and thus on the presence or absence of crystallization nuclei in the as-cast state. Additionally, a thermomagnetic analysis was used as a complementary method to confirm the phase constitution. The hysteresis loops have shown that most of the samples exhibit a remanence enhancement typical of soft/hard magnetic nanocomposites. Moreover, for the plates annealed at the lowest temperatures, the highest coercivities, up to ~1150 kA/m, were measured. Introduction Hard magnetic materials have proven to be indispensable in modern technology. They can be characterized by a few macroscopic parameters, including: (i) the coercivity field (JHc), which refers to the maximum reversed magnetic field up to which the hard magnetic specimen resists demagnetization; (ii) the polarization remanence (Jr), the value of magnetic polarization measured at zero external magnetic field for an initially magnetically saturated specimen; (iii) the maximum magnetic energy product (BH)max, the amount of magnetic energy that can be stored in the magnet; and (iv) the Curie temperature (TC), the temperature of the transition from the ferro- to the paramagnetic state of the ferromagnetic material. Over the past half century, RE-Fe-B-type (RE = rare earth element) hard magnetic alloys have become commonly used in different areas of life, including automotive, electronic, and household appliances [1]. In particular, green-energy-related applications in hybrid vehicles or wind turbines triggered a burst of interest in both the processing routes, as well as in alteration of the Materials and Methods The ingot samples of the Pr9Fe50Co13Zr1Nb4B23 alloy were produced by arc-melting of the high-purity constituent elements, with the addition of an Fe-B pre-alloy. Subsequently, the melt-spun ribbon (~30 µm thick) and the suction-cast 0.5 mm and 1 mm thick plates were produced. In order to modify their magnetic properties, the annealing process was carried out at various temperatures (Ta), from 923 K to 1033 K, for 5 min. For this purpose, the samples were sealed in quartz tubes under a low pressure of Ar and annealed in a laboratory resistance furnace. The Ta values were chosen based on differential scanning calorimetry measurements carried out on fully amorphous ribbons [29]. The X-ray diffraction (XRD) studies were performed in order to determine the phase constitution of the investigated specimens.
Furthermore, the Rietveld refinement was used in order to quantify the amounts of the constituent phases in samples annealed at various temperatures and having different shapes. Additionally, the application of the partial or no known crystal structures (PONKCS) method allowed one to estimate the amount of the remaining amorphous phase within the samples subjected to annealing. The crystallite sizes, as well as the unit cell parameters, were determined based on the Rietveld refinement. The XRD patterns were collected using a Bruker D8 Advance diffractometer (Bruker AXS GmbH, Karlsruhe, Germany) with Cu Kα radiation, equipped with a LynxEye detector (linear focus of 25 mm, primary beam divergence slit of 0.6 mm) and with Soller slits on the primary and diffracted beams. The measurements were performed in the Bragg-Brentano configuration, with a Ni Kβ filter on the detector side. The 2θ step size was 0.02 deg and the step time was 5 s. The Rietveld refinements were performed using the DIFFRAC SUITE TOPAS 4.2 software (Bruker AXS GmbH, Karlsruhe, Germany). The temperature dependences of magnetization were measured using a Faraday balance, in the temperature range from 300 K to 820 K, at an external magnetic field of 0.1 T; this was used as a complementary method to confirm the phase constitution. Measurements of the magnetic hysteresis loops allowed one to determine the optimal annealing conditions for obtaining the highest magnetic parameters. The measurements were performed using a LakeShore 7307 VSM magnetometer operating at room temperature, in external magnetic fields up to 1590 kA/m. Results and Discussion In Figure 1, the XRD patterns measured for the melt-spun ribbon, as well as for the suction-cast 0.5 mm and 1 mm thick plates, are presented.
These studies have shown that the melt-spun ribbons were fully amorphous, which was also confirmed by the measurement of the hysteresis loop, typical for soft magnetic materials (Figure 1: the X-ray diffraction (XRD) patterns measured for the melt-spun ribbon and the suction-cast 0.5 mm and 1 mm thick plates). In the case of the 0.5 mm thick plate, the XRD pattern revealed the characteristic bump of the amorphous phase; the presence of low-intensity diffraction peaks was possible but could not be established unambiguously, due to the relatively high background. The lack of pronounced diffraction peaks might be related to a weight fraction of the precipitating crystalline phase that is too low to be detected by the XRD method. In the case of the 1 mm thick plate, very low intensity diffraction peaks were detected in addition to the amorphous bump. Nevertheless, these studies proved the relatively good glass-forming ability (GFA) of the Fe50Co13Pr9Zr1Nb4B23 alloy. In order to modify the microstructure and magnetic properties, a further annealing process was performed at temperatures (Ta) ranging from 923 K to 1033 K, for 5 min. The annealing conditions were adjusted based on the differential scanning calorimetry (DSC) measurements [29], which showed a crystallization process manifested by a wide crystallization peak, with the onset of crystallization at Tx = 931 K and the maximum of crystallization at Tp = 944 K. The XRD studies of the annealed samples are presented in Figure 2. The annealing of the thin ribbon at 923 K (Figure 2a) resulted in significant crystallization of the sample. Although the Tx measured for the ribbon is higher than 923 K, one has to consider that the DSC measurements were performed dynamically, with a heating rate of 10 K/min. A shift of the crystallization temperature to higher values with an increasing heating rate is characteristic of DSC measurements [31]; thus, under static annealing conditions, crystallization can begin at lower temperatures. The qualitative XRD analyses revealed the presence of the hard magnetic Pr2(Fe,Co)14B (2:14:1) and paramagnetic Pr1+xFe4B4 (1:4:4) crystalline phases in all ribbon specimens subjected to annealing. Furthermore, the increase of the annealing temperature caused an increase of the diffraction peak intensities corresponding to the soft magnetic α-Fe phase.
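As a quick plausibility check on such phase identifications, the expected peak positions of a candidate cubic phase can be computed from Bragg's law. The sketch below (not part of the original analysis) does this for bcc α-Fe with the Cu Kα radiation used here; the lattice parameter is the textbook value, not a refined one from this work.

```python
# A minimal sketch: expected 2-theta positions of bcc alpha-Fe reflections for Cu K-alpha.
import math

WAVELENGTH = 1.5406   # Cu K-alpha1 wavelength in angstroms
A_ALPHA_FE = 2.8665   # bcc alpha-Fe lattice parameter in angstroms (textbook value)

def two_theta(h, k, l, a, lam=WAVELENGTH):
    """Return the 2-theta position (degrees) of reflection (hkl) for a cubic cell."""
    d = a / math.sqrt(h * h + k * k + l * l)          # interplanar spacing
    return 2.0 * math.degrees(math.asin(lam / (2.0 * d)))

# bcc structure factor: only reflections with h + k + l even are allowed
for hkl in [(1, 1, 0), (2, 0, 0), (2, 1, 1)]:
    print(hkl, round(two_theta(*hkl, A_ALPHA_FE), 2))
# (1, 1, 0) -> ~44.7 deg: the alpha-Fe peak whose intensity grows with annealing temperature
```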
In all annealed 0.5 mm and 1 mm thick plates, the same crystalline phases as those detected for the ribbon specimens were revealed. However, in the case of multiphase nanocrystalline materials, unambiguous phase identification is a difficult task for several reasons, the most important being the strong peak overlapping, the angular broadening due to the nanocrystalline structure, and the low intensity of the peaks corresponding to the constituent phases. This led us to consider a further quantitative analysis of the XRD patterns using the Rietveld refinement method. The crucial precondition for obtaining a successful fit is a reasonable initial model of the material's phase structure, which allows one to avoid false local minima of the fit. To provide such a condition, complementary studies of the temperature dependences of magnetization M(T) were performed for the as-cast and annealed specimens.
The measurements were carried out both under heating and under cooling conditions, at a low external magnetic field (µ0H = 0.1 T); therefore, the measured value of the magnetization is not the saturation magnetization. The heating and cooling M(T) curves have different shapes (Figure 3). In the case of the M(T) curves measured on heating of the annealed samples, the increase of temperature causes a decrease of the anisotropy constant of the hard magnetic Pr2Fe14B phase [32,33], and thus the alignment of the magnetic domains along the external magnetic field. As a result, a significant rise of the magnetization with temperature is observed below the Curie point (TC) of the Pr2Fe14B phase. This could cause difficulties in distinguishing the characteristic steps at TC coming from the magnetic phase transitions of the other ferromagnetic phases. Therefore, the M(T) curves measured under heating for the as-cast specimens and under cooling for the annealed samples are presented in Figure 4. TC was determined as the minimum of the first derivative of the magnetization with respect to temperature, and its values are collected in Table 1. In the case of the amorphous phase, however, the magnetization decreases gradually with increasing temperature. Therefore, in order to determine TC, the M^(1/β)(T) curves were constructed, where β = 0.36 is an effective critical exponent [34]. In this procedure, TC was determined by extrapolating the linear part of the M^(1/β)(T) curve to M = 0.
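The two TC estimators described above are straightforward to implement. The following is a minimal sketch, assuming the M(T) curves are available as sampled arrays; the fit window for the linear extrapolation is an arbitrary choice here, not one specified in the paper.

```python
# A minimal sketch of the two T_C estimators (hypothetical sampled M(T) data).
import numpy as np

def tc_from_derivative(T, M):
    """T_C as the minimum of dM/dT (used for the crystalline ferromagnetic phases)."""
    dMdT = np.gradient(M, T)
    return T[np.argmin(dMdT)]

def tc_from_critical_scaling(T, M, beta=0.36, fit_window=(0.7, 0.95)):
    """T_C of the amorphous phase: extrapolate the linear part of M^(1/beta)(T) to M = 0.

    beta = 0.36 is the effective critical exponent [34]; fit_window selects the
    fraction of the temperature range used for the linear fit (an assumption).
    """
    y = M ** (1.0 / beta)
    lo, hi = (int(f * len(T)) for f in fit_window)
    slope, intercept = np.polyfit(T[lo:hi], y[lo:hi], 1)
    return -intercept / slope   # temperature where the linear fit crosses y = 0
```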
For the as-cast ribbon (Figure 4a), the heating and cooling curves look similar: the magnetization gradually decreases with increasing temperature, and at ~515 K the sample becomes paramagnetic. The annealing of the ribbon at 923 K resulted in a significant increase of the TC corresponding to the amorphous component phase, which is still present in the sample. This increase is related to the change of the chemical composition of the amorphous phase due to the formation of the hard magnetic and paramagnetic crystalline components. The presence of the hard magnetic phase was evidenced by the second step in the M(T) curve. This result is in agreement with the XRD studies, which proved the presence of the hard magnetic Pr2(Fe,Co)14B and paramagnetic Pr1+xFe4B4 phases. Annealing of the ribbon at 953 K and higher temperatures resulted in a significant change of the M(T) curves: a ferro- to paramagnetic transition occurs at ~708 K. Such a high TC measured for the hard magnetic 2:14:1 phase can be attributed to the partial occupation of the Fe positions in the unit cell by Co atoms; such an effect was also reported in [29,35] for Pr-Fe-Co-B alloys. An increase of TC with the rise of the annealing temperature might be attributed to changes of the Co substitution in the 2:14:1 phase. Additionally, the thermomagnetic measurements confirmed the presence of the α-Fe phase for the specimens annealed at 953 K, 1003 K and 1033 K. Due to the limited range of the measurement temperatures, the α-Fe ferro- to paramagnetic transition is not visible on these curves, but the presence of this phase is indicated by the high magnetic moment of the samples at temperatures higher than 750 K. The M(T) curves also confirmed the increase of the volume fraction of the α-Fe phase with the rise of Ta. Partial crystallization of both the as-cast 0.5 mm and 1 mm thick plates was reflected in the two-step M(T) curves measured for these samples (Figure 4b,c). This effect was more pronounced for the 1 mm thick plates, where the cooling rate during rapid solidification was much lower than that for the 0.5 mm thick plates. On the other hand, the annealing of the 0.5 mm thick plate at 953 K and 983 K resulted in the formation of only one ferromagnetic phase, 2:14:1, without precipitation of α-Fe.
An increase of Ta up to 1003 K resulted in the additional formation of the α-Fe phase, which is reflected in the significant value of the magnetization at temperatures higher than 750 K. In the case of the 1 mm thick plates, the α-Fe phase is present for the sample annealed at 983 K, and its fraction increases with the increase of Ta. Examples of Rietveld refinements carried out for the annealed ribbon and plates of various thicknesses are shown in Figure 5. The criteria of fit [36] are collected in Table 2. In the starting structural model used in the Rietveld refinement, the presence of the two crystalline phases 1:4:4 and 2:14:1 was considered for all annealing temperatures. The α-Fe and amorphous phases were also incorporated into the model when a positive indication of their presence was demonstrated by the thermomagnetic studies. Quantification of the amorphous phase was possible with the use of the PONKCS method [37]. In this approach, the amorphous component is modeled as a group of peaks (a so-called "peak phase") with its specific ZMV parameter. In the case of known crystalline phases, this parameter is related to the mass (ZM) and the volume (V) of the unit cell. For the amorphous phase, however, such structural details cannot be specified. Therefore, the value of the ZMV parameter for such a phase is derived from the diffraction pattern measured for a mixture of a known amount of the amorphous phase and a well-characterized internal standard. This ZMV value has no physical meaning, but serves as a calibration value for the calculation of the amorphous phase concentration.
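In practice, once the scale factors of all phases (including the calibrated amorphous peak phase) have been refined, the weight fractions follow from the standard quantitative-phase-analysis relation Wj = Sj(ZMV)j / Σi Si(ZMV)i. A minimal sketch, with purely illustrative numbers rather than values from this work:

```python
# A minimal sketch of the quantification step: weight fractions from refined Rietveld
# scale factors S and (calibrated) ZMV constants. All numbers below are placeholders.

def weight_fractions(phases):
    """phases: dict name -> (scale_factor, ZMV). Returns dict name -> weight fraction in %."""
    products = {name: s * zmv for name, (s, zmv) in phases.items()}
    total = sum(products.values())
    return {name: 100.0 * p / total for name, p in products.items()}

example = {  # hypothetical refined values for one annealed sample
    "Pr2(Fe,Co)14B": (1.2e-3, 5.8e4),
    "Pr1+xFe4B4":    (0.4e-3, 3.1e4),
    "alpha-Fe":      (2.0e-3, 1.1e3),
    "amorphous":     (0.9e-3, 4.0e4),  # ZMV here is the calibration constant, not a physical ZMV
}
print(weight_fractions(example))
```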
The Rietveld refinement was used to calculate the weight fractions (Wf) and the unit cell parameters of the constituent phases, as well as the volume-weighted coherently diffracting domain sizes (Lvol), which are a measure of the crystallite size (d). In Table 3, the starting values of the lattice parameters for the crystalline phases included in the model are presented together with their refined values. As one can see from Table 3, the addition of Co to the alloy composition resulted in a reduction of the unit cell parameters of the 2:14:1 phase. It has been frequently reported that the addition of Co to hard magnetic RE-Fe-B alloys changes their magnetic properties without changing the structure type of the 2:14:1 phase [38][39][40][41]. The Co atoms take the Fe sites in the unit cell of the 2:14:1 phase, resulting in a monotonic decrease of the unit cell parameters. This leads to a change of the interatomic distances, thus affecting the exchange interactions. It was proven by neutron diffraction analysis [32] and Mössbauer spectroscopy [42] that Co atoms preferentially take the 4e position in the unit cell of the 2:14:1 phase, while the other positions (16k1, 16k2, 8j1 and 4c) are occupied by Co randomly. This has a crucial influence on TC [29,35,38]. It is worth mentioning that the unit cell parameter c calculated for the 1:4:4 phase also decreased, while the α-Fe unit cell remained almost unchanged. The dependences of the calculated average diameters of the nanocrystallites (d) and of the weight fractions (Wf) of the component phases on the annealing temperature are shown in Figures 6 and 7, respectively; the refined values of these parameters are collected in Table 4. For all samples, the rise of Ta led to an increase in the average grain diameters of the constituent crystalline phases (Figure 6). The Rietveld refinement has shown that the hard magnetic phase forms the largest crystallites during annealing, with diameters changing from 20 to 40 nm. Comparable crystallite sizes (from 18 to 30 nm) were calculated for the α-Fe phase in the ribbon specimens. In the case of the bulk samples, the crystallites of the α-Fe phase were much smaller, with average diameters below 10 nm. The weight fractions of the constituent phases change in various ways with the increase of the annealing temperature (Figure 7). In the case of the thin ribbon, the largest fraction of the hard magnetic 2:14:1 phase is formed for the sample annealed at 923 K; for this sample, the amorphous as well as the paramagnetic 1:4:4 phases were also observed. An increase in the annealing temperature led to a gradual decrease in the Wf of the hard magnetic phase, in favor of the paramagnetic 1:4:4 and/or the soft magnetic α-Fe phases (the latter appears at annealing temperatures above 953 K). For the ribbons annealed at temperatures higher than 923 K, no amorphous phase was found. Examples of transmission electron micrographs (TEM), together with the electron diffraction patterns obtained for the 0.5 mm thick plates in the as-cast state and after annealing at 983 K, are presented in Figure 8.
Due to the characteristics of the TEM studies (the observations were carried out only on limited areas of the sample), the crystallites nucleated during rapid solidification of the bulk sample were not revealed, and the electron diffraction image of the as-cast plate is typical for the amorphous phase. In the case of the annealed specimen, the microstructure is heterogeneous, with crystal particles of different sizes, and the electron diffraction image is typical for nanocrystalline specimens. The presence of crystal nuclei within the amorphous matrix of the as-cast plate resulted in the growth of crystallites nucleated during the rapid solidification and the simultaneous nucleation of new ones during the heat treatment. The average crystallite sizes are similar to those calculated using whole powder pattern fitting in the Rietveld refinement procedure. The changes in the phase constitution with the annealing temperature are reflected in the shapes of the magnetic hysteresis loops (Figure 9). For the as-cast ribbon, the shape of the hysteresis loop confirms its fully amorphous structure (Figure 9a). Annealing of the ribbon at 923 K allowed one to reach a coercivity JHc of 762 kA/m, which is the highest for the annealed ribbons.
Heat treatment of the ribbons at higher temperatures resulted in a gradual decrease of JHc and an increase of the saturation polarization Js. This can be related mainly to the increase of the weight fraction of the α-Fe phase with the increase of the annealing temperature, and to the growth of the crystallites of the 2:14:1 phase, as shown by the Rietveld refinement. The highest maximum energy product (BH)max and remanence Jr were measured for the ribbon annealed at 953 K. More complex shapes of the hysteresis loops, measured for both the as-cast 0.5 mm and 1 mm thick plates (Figure 9b,c), correspond to their partially crystalline structure. Here the wasp-waisted shapes of the hysteresis loops are typical for specimens in which small fractions of the hard magnetic phase are diluted within the soft magnetic amorphous matrix. In the case of the bulk samples, the highest values of the coercivity JHc, reaching 1150 kA/m, and of the maximum magnetic energy product, (BH)max = 25 kJ/m3, were measured for the 0.5 mm and 1 mm thick plates annealed at 983 K and 953 K, respectively. These samples also had the lowest saturation polarization. The increase in the saturation polarization of the samples annealed at higher temperatures can be attributed to the increase in the content of the α-Fe phase. The magnetic parameters determined from the hysteresis loops, i.e., the magnetic polarization remanence Jr, the saturation polarization Js, the coercivity field JHc and the maximum magnetic energy product (BH)max, are collected in Table 5, and their dependences on Ta are presented in Figure 10.
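For reference, these four parameters can be extracted from a measured demagnetization branch as in the following sketch; it assumes a monotonic branch and SI units, and wasp-waisted loops would require a piecewise treatment.

```python
# A minimal sketch of extracting Js, Jr, JHc and (BH)max from one demagnetization branch.
import numpy as np

MU0 = 4e-7 * np.pi                         # vacuum permeability, T*m/A

def loop_parameters(H, J):
    """H: applied field in A/m, decreasing from +Hmax to -Hmax.
    J: magnetic polarization in T, assumed monotonic along the branch (a simplification).
    """
    Hr, Jr_arr = H[::-1], J[::-1]          # reverse so both arrays increase for np.interp
    Js = float(np.max(np.abs(J)))          # saturation polarization
    Jr = float(np.interp(0.0, Hr, Jr_arr))        # remanence: J at H = 0
    JHc = -float(np.interp(0.0, Jr_arr, Hr))      # coercivity: |H| where J crosses zero
    B = MU0 * H + J                        # flux density in T
    q2 = (H <= 0.0) & (B >= 0.0)           # second quadrant of the B(H) loop (assumed non-empty)
    BHmax = float(np.max(-(B * H)[q2]))    # energy product in J/m^3
    return Js, Jr, JHc, BHmax
```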
High values of Jr/Js > 0.6 for all annealed specimens suggest the presence of the spring effect in all investigated magnets. The highest value of Js = 0.715 T was measured for the ribbon annealed at 1033 K; however, the Js(Ta) dependences are similar for all specimens. The rise of Js with Ta can be attributed to the increase of the volume fraction of the α-Fe phase. In addition, for all investigated ribbons, JHc was lower than that measured for the bulk specimens. This also has an impact on the (BH)max values, which were smaller for the annealed ribbons. The highest JHc and (BH)max were measured for the 0.5 mm thick plates over the whole range of annealing temperatures, despite the fact that JHc decreased with increasing Ta. Conclusions For the Pr9Fe50Co13Zr1Nb4B23 alloy, a 30 µm thick melt-spun ribbon, as well as suction-cast 0.5 mm and 1 mm plates, were produced. XRD studies carried out for these samples proved the good glass-forming ability of this alloy. The ribbon samples were fully amorphous, while in the case of the plates the amorphous phase constituted the biggest part of the as-cast specimens. The XRD analysis, as well as the magnetic and thermomagnetic measurements, revealed partial crystallization of both types of as-cast plates. One should note that the ribbon and plate samples are rapidly solidified at different cooling rates [43]; this has an impact on the initial phase constitution of the samples. Further heat treatment led to the precipitation of various crystalline phases. The phase constitution, however, changed both with the annealing temperature and with the thickness of the specimen. For the fully amorphous ribbon, it should be assumed that the formation of the crystalline phases occurs simultaneously at a given temperature and that their content changes with Ta.
For the bulk samples, one can assume that the initial fraction of the crystalline component phases has the main impact on the magnetic parameters of the annealed samples. Thus, the heat treatment causes, in the first instance, a growth of the crystallites nucleated during the rapid solidification and the simultaneous nucleation of new ones. The differences in the phase constitution between the ribbon and plate specimens of various thicknesses were revealed using the Rietveld refinement; its outcome is consistent with the results of the M(T) measurements. The changes of TC observed for the 2:14:1 phase in specimens subjected to annealing at various temperatures are related to the replacement of Fe atoms by Co in the unit cell of the 2:14:1 phase. This is also reflected in the unit cell parameters calculated for the 2:14:1 phase on the basis of the XRD studies. The dependences of JHc, Jr and (BH)max on the annealing temperature for the ribbon and plate samples are closely related to the evolution of the phase constitution with the annealing temperature. In all annealed specimens, besides the hard magnetic 2:14:1 phase, the paramagnetic 1:4:4 phase was also present. Moreover, the fractions of the ferromagnetic phases (hard magnetic 2:14:1 and soft magnetic α-Fe) are similar for all samples and change with the annealing temperature in a similar way (the amount of the 2:14:1 phase slightly decreases, while that of α-Fe increases with the rise of Ta). Furthermore, in the case of the ribbons, the average crystallite diameters (d) of the 2:14:1 phase increase from 20 nm to 33 nm with the rise of Ta, while for the bulk specimens the d parameter changes from ~30 nm to 45 nm. On the other hand, in the ribbons, the d values for the α-Fe phase are much higher than those for the bulk specimens. Therefore, the lower coercivities measured for the ribbon compared with the bulk samples might be explained by the smaller crystallite sizes of the hard magnetic phase. Similar behavior was observed for Nd-Fe-B magnets and presented in [5,44]. The highest JHc of 1150 kA/m was measured for the 0.5 mm and 1 mm thick plates annealed at 983 K and 953 K, respectively. The magnetic parameters of the nanostructured magnets presented in this work can be compared to other rare-earth (RE) magnets with reduced RE content. An interesting example in this group is the Pr7Fe88B5 alloy with high Fe and low B contents in the form of a ribbon [45]. Annealing of this ribbon resulted in a quite high value of Jr, reaching 1.2 T due to the presence of the α-Fe phase, and a very moderate coercivity JHc = 461 kA/m. Similar magnetic properties were measured for the as-quenched melt-spun ribbons of the Pr9Fe79B12 alloy, for which 100 nm grains of the Pr2Fe14B and Fe3B phases were found; their magnetic parameters reached Jr = 0.7 T, JHc = 563 kA/m and (BH)max = 52 kJ/m3 [46]. It should be noted that there are only a few publications by other authors concerning bulk nanostructured RE-Fe-B magnets derived from bulk glassy precursors. Interesting results were obtained for the as-cast 1.5 mm diameter rods of the (Fe86-xNbxB14)0.88Tb0.12 alloy, for which JHc reaches up to 5600 kA/m; however, due to the low volume fraction of the Tb2Fe14B hard magnetic phase, its (BH)max reaches only ~13 kJ/m3 [47]. The investigated specimens in the form of plates seem to be particularly interesting due to their potential applications.
Processing of the specimens in two steps, (i) suction-casting to a partly amorphous structure and (ii) subsequent annealing, provides the possibility of obtaining fully dense bulk nanostructured magnets. The advantage of the investigated alloy and the use of a short-time manufacturing technique are reflected in the reduction of processing costs. Furthermore, suction-casting allows one to obtain miniature magnets of the demanded shapes, which might be used in small-size electromagnetic devices. The only drawback is the need for high-purity constituent elements in order to maintain the good glass-forming ability of the alloy. The nanocrystalline magnets reported in the present work were isotropic, due to the application of conventional heat treatment in a lab resistance furnace. In order to obtain better-performing magnets, the annealing of the amorphous precursor has to be carried out in an external magnetic field to induce anisotropy; our further studies will therefore be focused on this point. Funding: This research received no external funding. Conflicts of Interest: The authors declare no conflict of interest.
9,756.6
2020-03-26T00:00:00.000
[ "Materials Science" ]
Study of the Ashkin Teller model with spins $S$ = $1$ and $\sigma$ = $3/2$ subjected to different crystal fields using the Monte-Carlo method Using the Monte-Carlo method, we study the magnetic properties of the Ashkin-Teller model (ATM) under the effect of the crystal field with spins $S = 1$ and $\sigma = 3/2$. First, we determine the most stable phases in the phase diagrams at temperature $T = 0$ using exact calculations. For higher temperatures, we use Monte-Carlo simulation. We find rich phase diagrams with the ordered phases: a Baxter $3/2$ phase and a Baxter $1/2$ phase, in addition to a $\left\langle \sigma S\right\rangle$ phase that shows up neither in the spin-1 ATM nor in the spin-$3/2$ ATM and, lastly, a $\left\langle \sigma\right\rangle = 1/2$ phase, with first- and second-order transitions. Introduction In recent years, the behavior of spins on different lattice structures has been a central manifestation of magnetism; it has also allowed one to examine the nature of phase transitions as well as critical behavior in statistical mechanics [1]. In addition, the properties of magnetic materials and their technological applications, such as thermomagnetic recording media and micro-electromechanical systems [2], are characterized by the phenomenon of mixed spins, which is well described within the Ising model approach [3]. Studies of mixed-spin magnetic materials have been extended to the Ising model in the presence of a crystal field and, specifically, applied to the mixed spin (1, 3/2) case. These studies revealed some interesting behaviors using effective field theory: the mixed-spin system has first-order transition lines, exhibits tricritical and triple points, and the possible types of its phase diagrams were classified [4]. Also in the context of mixed spin (1, 3/2), a first-order transition line separating two ferromagnetic regions was found for the Blume-Capel Ising model on a cubic lattice [5]. Using Monte Carlo simulations, it was shown that the nearest-neighbor interactions J1, J2 and J3 of the Ising model with frustration are the main barriers to the change of the transition with increasing temperature and also indicate an Ashkin-Teller behavior; that study estimated the transition points at the four-state Potts critical point and confirmed the first-order transition behavior in the stabilization state of antiferromagnetic J3 [6]. The nature of the thermal phase transition of the degenerate four-state model was also studied by Monte Carlo simulation and finite-size scaling; first-order behavior was noted below the four-state Potts critical point, indicating that the four-state transition in the antiferromagnetic Ising model represents a similar transition [7]. In this context, and to properly describe the notion of phase transitions, Ashkin and Teller [8] developed a very interesting model of such Ising systems, which simplified their study in statistical mechanics. In this model, one could introduce the cooperative phenomena of quaternary alloys on a lattice, described by a Hamiltonian in a form suitable for magnetic systems [9]. Kramers and Wannier located critical points of a particular case of the Ashkin-Teller model in which three of the four components are degenerate [10].
Their arguments were extended to the Ashkin-Teller model by Fan [11], and relate to those of Wegner, who showed in general that such arguments do not by themselves establish a critical point; hence, it is interesting to study closely the problem of the transition in this model [12]. In addition, Wegner proved that the Ashkin-Teller model is equivalent to the staggered eight-vertex model, which has not been solved exactly; only one critical line in the phase diagram of the isotropic Ashkin-Teller model is known as precisely as possible, thanks to the duality relation found by Fan [11]. One of the most interesting critical properties of this model is the non-universality of the critical behaviour on the self-dual lines, along which the critical exponents evolve continuously [13]. The model is a two-dimensional system in which two layers of Ising spins interact with each other through a four-spin coupling; within each layer there is a two-spin interaction between nearest neighbors, i.e., the model is a coupling of two Ising models characterized by the spins located at each lattice site, with a four-spin interaction parameter [14]. In particular, the numerical study of the spin-1 Ashkin-Teller model under the crystal-field effect was carried out by Badehdah et al. [15]. In addition, Wu and Lin found a diversity of Ising-type phase transitions of the anisotropic Ashkin-Teller model [16], and for this system Bekhechi et al. analyzed the critical behavior of the Ashkin-Teller model using the mean-field and Monte Carlo methods, with which they established that the σ phase appears in the isotropic case when the interactions are antiferromagnetic [14]. Recently, developments of the Ashkin-Teller model have included the adsorption of selenium compounds on the Ni surface [17] and the phase diagram obtained from the elastic response of DNA [18]. This model is also needed for the study of the thermodynamic properties of superconducting CuO2 layers [19]; furthermore, the oxygen ordering in YBa2Cu3Oz can also be mapped onto the two-dimensional Ashkin-Teller model [13]. The model has also been applied over the years in other fields, such as chemical reactions in metal alloys [20], and can describe further phenomena as well [15]. In addition, because the model presents a complex and important phase diagram, different theoretical and numerical methods have been applied to it, including Monte Carlo simulation [21], the mean-field approximation [22], effective field theory [23], the transfer-matrix method [24] and renormalization-group theory [25]. In this paper, we study the isotropic mixed-spin (S = 1, σ = 3/2) Ashkin-Teller model: in the fundamental state at zero temperature (T = 0) using exact calculations, and at higher temperatures using Monte Carlo simulations. We determine the stable phases of this model for different values of the parameters K4, D1 and D2. The paper is structured as follows: after the introduction, we determine the fundamental state of the model and the corresponding phase diagrams. The subsequent section describes the Monte Carlo simulation and its formalism, with the presentation of the phase diagrams of the model. Then, we discuss the results of the simulations. Finally, we present a concluding section. Model and phase diagram of the fundamental state In this work, we consider the Ashkin-Teller model in the case of mixed spins σ = 3/2 and S = 1. We analyze this case under the effect of different crystal fields.
Thus, this model is described by the following Hamiltonian (reconstructed here from the term-by-term description that follows): H = -K2 Σ⟨i,j⟩ (σi σj + Si Sj) - K4 Σ⟨i,j⟩ σi σj Si Sj - D1 Σi Si² - D2 Σi σi², (2.1) where the variables σi and Si take the values (±3/2, ±1/2) and (±1, 0), respectively, and are located on the sites of a cubic lattice; ⟨i, j⟩ refers to a pair of nearest-neighbor spins. The first term of equation (2.1) describes the bilinear interactions between the spins located at sites i and j, with coupling parameter K2. The second term describes the interaction of the four spins, with coupling constant K4. The last terms account for the two ionic crystal fields D1 and D2. Expressing the Hamiltonian as a sum of nearest-neighbor contributions of the pairs S1, S2, σ1 and σ2, we obtain the corresponding pair energy Ep. According to the values taken by the variables Si and σi, there are 144 (3² × 4²) possible pair configurations for the ground state at T = 0; using symmetry, this number reduces to 24 configurations. For each set of parameters K2, K4, D1 and D2, we select the configuration with minimal energy Ep. This leads to the phase diagram in the fundamental state (T = 0). The different phases will be given in the form (S1, σ1, S2, σ2). In what follows, we consider different situations by fixing one parameter and varying the others (the latter are normalized by K2).
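This minimization is straightforward to reproduce numerically. The sketch below (a minimal illustration, not the authors' code) enumerates all 144 pair configurations and returns the one of lowest pair energy; the weight w of the single-site crystal-field terms per bond depends on how the lattice sum is split between bonds and is therefore left as an input.

```python
# Enumerate the 144 = 3^2 * 4^2 pair configurations (S1, s1, S2, s2) and select the
# T = 0 ground state. The crystal-field weight per bond, w, is a convention choice.
from itertools import product

S_VALUES = (-1.0, 0.0, 1.0)               # spin S = 1 states
SIGMA_VALUES = (-1.5, -0.5, 0.5, 1.5)     # spin sigma = 3/2 states

def pair_energy(S1, s1, S2, s2, K2, K4, D1, D2, w=1.0):
    bilinear = -K2 * (S1 * S2 + s1 * s2)  # two-spin terms of Eq. (2.1)
    four_spin = -K4 * S1 * S2 * s1 * s2   # four-spin term
    crystal = -w * (D1 * (S1 ** 2 + S2 ** 2) + D2 * (s1 ** 2 + s2 ** 2))
    return bilinear + four_spin + crystal

def ground_state(K2, K4, D1, D2, w=1.0):
    configs = product(S_VALUES, SIGMA_VALUES, S_VALUES, SIGMA_VALUES)
    return min(configs, key=lambda c: pair_energy(*c, K2, K4, D1, D2, w))

# Deep in the ferromagnetic region the Baxter 3/2 phase should win
# (up to a global spin reversal):
print(ground_state(K2=1.0, K4=1.0, D1=0.0, D2=0.0))
```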
In figure 1, we plot the phase diagram in the (D2/K2, K4/K2) plane, letting D1 = 0: • In one region of the diagram, the Si spins are parallel, such that S = 1, while the σi spins are antiparallel; consequently, the stable phase is antiferromagnetic. • In another region, the spins Si and σi are both aligned in the same direction, so σS = 1/2; this corresponds to a ferromagnetic phase. • In a further region, the σi spins are parallel, such that σ = 3/2, and the Si spins are antiparallel; consequently, the stable phase is again antiferromagnetic. Otherwise, if K4/K2 > −1 and K4/K2 > −D2/K2 − 1, the spins Si and σi are both aligned in the same direction, with S = 1 and σ = 3/2, so σS = 3/2, and we have the ferromagnetic 3/2 phase. In the second situation, we obtain the diagram representing the variation of K4/K2 as a function of D1/K2: • For D1/K2 < −0.1, or K4/K2 > 4/9 D1/K2 − 4/9 and K4/K2 > −4/9 D1/K2 − 4/9, we have a stable phase called the σ phase, because the Si spins are equal to zero, such that S = σS = 0 and σ = −3/2. • In another region, the Si spins are parallel, such that S = 1, while the σi spins are antiparallel, with σ = 3/2; consequently, the stable phase is antiferromagnetic. Otherwise, if K4/K2 > −1, the spins Si and σi are both aligned in the same direction, with S = 1 and σ = 3/2, so σS = 3/2, and the ferromagnetic 3/2 phase is the stable phase. The third case uses D1 = D2 = D (figure 2). We draw the diagram of K4/K2 as a function of the crystal field D/K2: • For K4/K2 > −12/9 D/K2 − 12/9 and K4/K2 > −0.4, we have σ = 3/2, S = 1 and σS = 3/2, such that the σi and Si spins are both parallel; this is the Baxter 3/2 phase, the ferromagnetic Baxter phase (the stable phase). • For K4/K2 < 4/9 D/K2 − 4/9 and K4/K2 < −0.4, in this part of the diagram the σi spins are parallel while the Si spins are antiparallel; this means that we have an antiferromagnetic Baxter phase, which is still a Baxter 3/2 phase. • For the zone of the phase diagram specified by the equations K4/K2 > 4/9 D/K2 − 4/9 and K4/K2 < −4/9 D/K2 − 4/9, and if D = −1, we have the spins S = 0 and σ = 3/2, such that the Si are equal to zero and the σi are parallel. Finally, we obtain the σ phase with σ = 3/2, which exists neither in the mixed spin-1/2 case [14] nor in the spin-3/2 Ashkin-Teller model [26]. The Monte Carlo simulation In our work, to determine the magnetic properties of the Ashkin-Teller model at non-zero temperatures, we use Monte Carlo simulations implemented with the Metropolis algorithm, with periodic boundary conditions, to update the lattice configurations. We consider a 2D square lattice of size L × L, which contains N = L² sites. We performed the simulations for a system size L = 30, for selected values of the parameters K4, D1 and D2, using P = 100000 Monte Carlo steps (MCS) after discarding the first 20000 MCS for thermal equilibration. The magnetization of the system is given by (reconstructed in the standard form): m_α = ⟨(1/N) |Σi αi|⟩_c, (3.1) with α = σ, S, σS, where i runs over the lattice sites and the average runs over the obtained system configurations c. The lattice is updated by a sweep of the N spins (one Monte Carlo step) after the system reaches thermal equilibrium. The magnetic susceptibility is given by: χ_α = (N/kB T) (⟨m_α²⟩ − ⟨m_α⟩²), (3.2) with α = σ, S, σS, and the Binder cumulant is given by: U_α = 1 − ⟨m_α⁴⟩/(3⟨m_α²⟩²). (3.3) Errors are estimated using the blocking method.
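A minimal sketch of such a simulation follows (again not the authors' code): a single-site Metropolis update of both spin variables, together with the estimators of equations (3.1)-(3.3); the proposal scheme and parameter choices are simplifying assumptions.

```python
# A minimal sketch of the Metropolis update and the observables of Eqs. (3.1)-(3.3).
import numpy as np

rng = np.random.default_rng(0)
S_VALS = np.array([-1.0, 0.0, 1.0])           # spin S = 1 states
SIG_VALS = np.array([-1.5, -0.5, 0.5, 1.5])   # spin sigma = 3/2 states

def site_energy(S, sig, i, j, K2, K4, D1, D2):
    """Energy of site (i, j) with its 4 neighbors (periodic boundaries)."""
    L = S.shape[0]
    e = -D1 * S[i, j] ** 2 - D2 * sig[i, j] ** 2
    for a, b in ((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L):
        e -= K2 * (S[i, j] * S[a, b] + sig[i, j] * sig[a, b]) \
             + K4 * S[i, j] * S[a, b] * sig[i, j] * sig[a, b]
    return e

def sweep(S, sig, T, K2, K4, D1, D2):
    """One Monte Carlo step: a Metropolis update attempt at every site."""
    L = S.shape[0]
    for i in range(L):
        for j in range(L):
            e_old = site_energy(S, sig, i, j, K2, K4, D1, D2)
            s_old, g_old = S[i, j], sig[i, j]
            S[i, j] = rng.choice(S_VALS)      # propose new (S, sigma) at this site
            sig[i, j] = rng.choice(SIG_VALS)
            dE = site_energy(S, sig, i, j, K2, K4, D1, D2) - e_old
            if dE > 0.0 and rng.random() >= np.exp(-dE / T):
                S[i, j], sig[i, j] = s_old, g_old   # reject the move

def observables(m_samples, N, T):
    """Equations (3.1)-(3.3) from a series of magnetization samples m_alpha."""
    m = np.abs(np.asarray(m_samples))
    m1, m2, m4 = m.mean(), (m ** 2).mean(), (m ** 4).mean()
    chi = N * (m2 - m1 ** 2) / T              # susceptibility, k_B = 1
    binder = 1.0 - m4 / (3.0 * m2 ** 2)       # Binder cumulant
    return m1, chi, binder
```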
Results and discussions We obtain the behavior of the magnetization as a function of temperature, as well as the susceptibilities of the studied system, for different values of the coupling parameters. As shown in figure 4, our MC results at low temperature show a ferromagnetic Baxter phase (S1 σ1 S2 σ2) = (1 3/2 1 3/2) with σS = 3/2 (figure 4a) and a ferromagnetic Baxter phase (1 1/2 1 1/2) with σS = 1/2 (figure 4b), as expected from the T = 0 phase diagram (figure 1). We also find a new partially ordered phase, σS, identified by σ = S = 0 and σS ≠ 0. For high temperatures, in both cases the system becomes disordered. The critical transition temperature is estimated from the maximum of the susceptibility associated with the different magnetizations: we found Tc = 7.39 for case (a) and Tc = 1.89 for case (b). In addition, the transition between the phases mentioned is always of second order, due to the continuity of the order parameters across the transition line. In figure 5, in the first case (a), at low temperature we have σ = 1/2 and S = σS = 0, corresponding to the phase (0 1/2 0 1/2); at high temperature, the system undergoes a transition to the paramagnetic phase. In the second case (b), the low-temperature state is the ferromagnetic Baxter phase (1 3/2 1 3/2). The susceptibility plot shows a peak corresponding to σ and S at the critical temperature Tc1 = 11.09; by contrast, the susceptibility corresponding to σS shows a distinct peak at the transition temperature Tc2 = 14.19, clearly defining a partially ordered σS phase at high temperature, separating the disordered phase from the Baxter phase. The phase diagram in figure 6 shows the stable phases at different temperatures in the (D2/K2, T/K2) plane, in the case D1 = 0, for the coupling parameter K4 = 1. We found that for low values of D2/K2, two phases are separated by a first-order transition line, as shown in figure 8; the nature of the phase transition is determined from the discontinuity or continuity of the order parameters [25]. The two phases are the ferromagnetic Baxter 1/2 and the ferromagnetic Baxter 3/2 phases. The former phase (σ = 1/2) was found neither in the spin-1/2 ATM [9] nor in the spin-3/2 Ashkin-Teller model [26]. At high temperature, a second-order transition to the disordered paramagnetic phase takes place. The ferromagnetic Baxter 3/2 phase is stable for large values of D2/K2. In figure 7, we plot the phase diagram in the (T/K2, D/K2) plane. We found a form of the phase diagram similar to that of figure 6, except that the low-temperature, low-D phase is now the σ = 1/2 phase, with a first-order transition line to the ferromagnetic Baxter 3/2 phase for large values of D. We also note that the σ = 1/2 phase was not found in the spin-3/2 Ashkin-Teller model [26]. Moreover, when the coupling parameter is increased to K4/K2 = 3, in the case D1 = D2 = D and with growing values of D/K2, we find in figure 9 a partially ordered σS phase (figure 5b) between the ordered ferromagnetic Baxter 3/2 phase and the disordered paramagnetic phase at high temperature; the other phases were already illustrated in the previous cases (figures 6, 7). This new σS phase was also found in the mixed spin-1/2 Ashkin-Teller model [14], but not in the spin-3/2 Ashkin-Teller model [26]. By decreasing the crystal field D/K2, we find a transition from the ferromagnetic Baxter 3/2 phase to the paramagnetic phase, which is of first-order type. We also observe the same σ = 1/2 phase as in figure 7, separated from the high-temperature paramagnetic phase at low values of D/K2 by a second-order transition, and from the ferromagnetic Baxter 3/2 phase by a first-order transition. The phase transition points as a function of temperature, at fixed values of the coupling K4/K2 and of the crystal field D2/K2, were located from the intersection points of the Binder cumulant curves defined by equation (3.3). We show this for D2/K2 = 3 (D1/K2 = 0) for different sizes L = (10, 20, 30, 40, 60); the susceptibilities and the Binder cumulant as functions of temperature are plotted in figure 10. One notes in figure 10 that the susceptibility peaks, as L increases, indicate the transition point, i.e., the critical temperature. Moreover, the Binder cumulant curves in figure 10 show an intersection point that defines the critical temperature, Tc = 6.55.
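The crossing point of two such cumulant curves can be located by linear interpolation between the sampled temperatures, as in the sketch below (the input curves are assumed to be measured on a common temperature grid):

```python
# A minimal sketch: locate T_c from the crossing of Binder cumulants of two lattice sizes.
import numpy as np

def binder_crossing(T, U_small, U_large):
    """Temperature at which the Binder cumulants of two lattice sizes intersect."""
    diff = np.asarray(U_small) - np.asarray(U_large)
    k = np.flatnonzero(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]  # first sign change
    t = diff[k] / (diff[k] - diff[k + 1])      # fractional position of the root
    return T[k] + t * (T[k + 1] - T[k])
```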
Conclusion In order to describe well the magnetic properties of typical Ising systems in statistical mechanics, we analyzed the Ashkin-Teller model with spins (1, 3/2) on a cubic lattice under the effect of different crystal fields and of the coupling parameters defined in equation (2.1). The first step of our study was the determination of the stable phases in the fundamental state (zero temperature) for three cases of the crystal field; the system undergoes first-order phase transitions between these stable phases, and we found phases that do not exist in the spin-1/2 ATM. For non-zero temperature, we treated the AT model by Monte Carlo simulation, specifically using the Metropolis method. We fixed the values of the coupling parameters and varied the crystal field and the temperature. We obtained phase diagrams with second-order transitions, containing stable phases such as the Baxter 3/2 phase as well as the paramagnetic phase, for the different cases of the crystal field in the parameter space (K4/K2, D1/K2, D2/K2, D/K2, T/K2), delimited by lines with multicritical points. Crucially, we found a new phase in the phase diagram in the space (K4/K2, D/K2, T/K2). Finally, we verified the nature of the phase transitions of this model, which are second-order phase transitions of the Ising type.
Timescales of Chaos in the Inner Solar System: Lyapunov Spectrum and Quasi-integrals of Motion

Numerical integrations of the Solar System reveal a remarkable stability of the orbits of the inner planets over billions of years, in spite of their chaotic variations characterized by a Lyapunov time of only 5 million years and the lack of integrals of motion able to constrain their dynamics. To open a window on such long-term behavior, we compute the entire Lyapunov spectrum of a forced secular model of the inner planets. We uncover a hierarchy of characteristic exponents that spans two orders of magnitude, manifesting a slow-fast dynamics with a broad separation of timescales. A systematic analysis of the Fourier harmonics of the Hamiltonian, based on computer algebra, reveals three symmetries that characterize the strongest resonances responsible for the orbital chaos. These symmetries are broken only by weak resonances, leading to the existence of quasi-integrals of motion that are shown to relate to the smallest Lyapunov exponents. A principal component analysis of the orbital solutions independently confirms that the quasi-integrals are among the slowest degrees of freedom of the dynamics. Strong evidence emerges that they effectively constrain the chaotic diffusion of the orbits, playing a crucial role in the statistical stability over the Solar System lifetime.

I. INTRODUCTION

The planetary orbits in the inner Solar System (ISS) are chaotic, with a Lyapunov time distributed around 5 million years (Myr) [1-4]. Still, they are statistically very stable over a timescale that is a thousand times longer. The probability that the eccentricity of Mercury exceeds 0.7, leading to catastrophic events (i.e., close encounters, collisions, or ejections of planets), is only about 1% over the next 5 billion years (Gyr) [5-7]. The dynamical half-life of Mercury's orbit has recently been estimated at 30-40 billion years [4,7]. A disparity of nearly four orders of magnitude between the Lyapunov time and the timescale of dynamical instability is intriguing, since the chaotic variations of the orbits of the inner planets cannot be constrained a priori. While the total energy and angular momentum of the Solar System are conserved, the disproportion of masses between the outer and inner planets implies that unstable states of the ISS are in principle easily realizable through exchanges of these quantities. The surprising stability of the ISS deserves a global picture in which it can emerge more naturally.
To our knowledge, the only study addressing the timescale separation in the long-term dynamics of the ISS is based on the simplified secular dynamics of a massless Mercury [8]: all the other planets are frozen on regular quasi-periodic orbits; secular interactions are expanded to first order in masses and degree 4 in eccentricities and inclinations; an a priori choice of the relevant terms of the Hamiltonian is made. The typical instability time of about 1 Gyr [8,9] is, however, too short and in significant contrast with realistic numerical integrations of the Solar System, which show a general increase of the instability rate with the complexity of the dynamical model [7]. We have shown that truncating the secular Hamiltonian of the ISS at degree 4 in eccentricities and inclinations results in an even more stable dynamics, with an instability rate at 5 Gyr that drops by orders of magnitude when compared to the full system [10]. From the perspective of these latest findings, the small probability of 1% of an instability over the age of the Solar System may be naturally regarded as a perturbative effect of terms of degree 6 and higher. Clearly, the striking stability of the dynamics at degree 4 is even more impressive in the present context, and remains to be explained.

A strong separation in dynamical timescales is not uncommon among classical quasi-integrable systems [e.g., 11,12]. This is notably evinced by the Fermi-Pasta-Ulam-Tsingou (FPUT) problem, which deals with a chain of coupled weakly-anharmonic oscillators [13]. Far from the Kolmogorov-Arnold-Moser (KAM) and Nekhoroshev regimes (as is likely to be pertinent to the ISS, see Sect. III), one can generally state that the exponential divergence of close trajectories occurring over a Lyapunov time is mostly tangent to the invariant tori defined by the action variables of the underlying integrable problem, and hence contributes little to the diffusion in the action space [14,15]. In other words, the Lyapunov time and the diffusion/instability time scale differently with the size of the terms that break integrability, and this can result in very different timescales [12]. However, this argument is as general as it is unsatisfactory in addressing quantitatively the timescale separation in a problem as complex as the present one. Moreover, even though order-of-magnitude estimates of the chaotic diffusion in the ISS suggest that it may take hundreds of millions of years to reach the destabilizing secular resonance g 1 − g 5 [16], the low probability of an instability over 5 Gyr still remains unexplained [4]. Establishing more precisely why the ISS is statistically stable over a timescale comparable to its age is a valuable step in understanding the secular evolution of planetary systems through metastable states [4,17,18]. With its 8 secular degrees of freedom (d.o.f.), this system also constitutes a peculiar bridge between the low-dimensional dynamics often addressed in celestial mechanics and the systems with a large number of bodies studied in statistical mechanics: it cannot benefit from the straightforward application of standard methods of the two fields [e.g., 19, Appendix A].

This work aims to open a window on the long-term statistical behavior of the inner planet orbits. Section II briefly recalls the dynamical model of the forced secular ISS introduced in Ref.
[4]. Section III presents the numerical computation of its Lyapunov spectrum. Section IV introduces the quasi-symmetries of the resonant harmonics of the Hamiltonian and the corresponding quasi-integrals (QIs) of motion. Section V establishes a geometric connection between the quasi-integrals and the slowest d.o.f. of the dynamics via a principal component analysis (PCA) of the orbital solutions. Section VI states the implications of the new findings on the long-term stability of the ISS. We finally discuss the connections with other classical quasi-integrable systems and the methods used in this work.

II. DYNAMICAL MODEL

The long-term dynamics of the Solar System planets consists essentially of the slow precession of their perihelia and nodes, driven by secular, orbit-averaged gravitational interactions [2,20]. At first order in planetary masses, the secular Hamiltonian corrected for the leading contribution of general relativity reads as in Hamiltonian (1) [e.g., 4,21]. The planets are indexed in order of increasing semi-major axes a_i, i = 1, ..., 8; m_0 and m_i are the Sun and planet masses, respectively, e_i the eccentricities, G the gravitational constant and c the speed of light. The vectors r_i are the heliocentric positions of the planets, and the bracket operator represents the averaging over the mean longitudes resulting from the elimination of the non-resonant Fourier harmonics of the N-body Hamiltonian [4,21]. Hamiltonian (1) generates Gauss's dynamics of Keplerian rings [4,22], whose semi-major axes a_i are constants of motion of the secular dynamics.

By developing the 2-body perturbing function [23,24] in the computer algebra system TRIP [25,26], the secular Hamiltonian can be systematically expanded in series of the Poincaré rectangular coordinates in complex form, x_i and y_i, defined in terms of the reduced masses of the planets, the inclinations I_i, the longitudes of the perihelia ϖ_i, and the longitudes of the nodes Ω_i [27]. Pairs (x_i, −j x̄_i) and (y_i, −j ȳ_i) are canonically conjugate momentum-coordinate variables. When truncating at a given total degree 2n in eccentricities and inclinations, the expansion provides Hamiltonians H_2n = H_2n[(x_i, x̄_i, y_i, ȳ_i), i = 1, ..., 8] that are multivariate polynomials.

Valuable insight into the dynamics of the inner planets is provided by the model of a forced ISS recently proposed [4]. It exploits the great regularity of the long-term motion of the outer planets [2,20,28] to predetermine their orbits in a quasi-periodic form,

x_i(t) = Σ_l x̃_il exp[ j (m_il · ω_o) t ],  y_i(t) = Σ_l ỹ_il exp[ j (n_il · ω_o) t ],  for i ∈ {5, 6, 7, 8},  (3)

where t denotes time, x̃_il and ỹ_il are complex amplitudes, m_il and n_il integer vectors, and ω_o = (g_5, g_6, g_7, g_8, s_6, s_7, s_8) is the septuple of the constant fundamental frequencies of the outer orbits. Frequencies and amplitudes of this Fourier decomposition are established numerically by frequency analysis [29,30] of a comprehensive orbital solution of the Solar System [4, Appendix D]. Gauss's dynamics of the forced ISS is obtained by substituting the predetermined time dependence in Eq. (1), so that H = H[(x_i, y_i), i = 1, ..., 4; t]. The resulting dynamics consists of two d.o.f. for each inner planet, corresponding to the x_i and y_i variables, respectively. Therefore, the forced secular ISS is described by 8 d.o.f. and an explicit time dependence. As a result of the forcing from the outer planets, no trivial integrals of motion exist and its orbital solutions live in a 16-dimensional phase space.

A truncated Hamiltonian H_2n for the forced ISS is readily obtained by substituting Eq.
( 3) in the truncated Hamiltonian H 2n of the entire Solar System.At the lowest degree, H 2 generates a linear, forced Laplace-Lagrange (LL) dynamics.This can be analytically integrated by introducing complex proper mode variables When expressed in the proper modes, the truncated Hamiltonian can be expanded as a finite Fourier series: where I = (X, Ψ) and θ = (χ, ψ) are the 8-dimensional vectors of the action and angle variables, respectively, and we introduce the external angles φ(t) = −ω o t.The The quasi-periodic form of the outer orbits in Eq. (3) contains harmonics of order higher than one, that is, m il 1 > 1 and n il 1 > 1 for some i and l, where • 1 denotes the 1-norm.Therefore, the dynamics of H 2n and H 2n are not exactly the same [4].Still, the difference is irrelevant for the results of this work, so we treat the two Hamiltonians as equivalent from now on.Despite the simplifications behind Eqs. ( 1) and ( 3), the forced secular ISS has been shown to constitute a realistic model that is consistent with the predictions of reference integrations of the Solar System [2,5,6,20].It correctly reproduces the finite-time maximum Lyapunov exponent (FT-MLE) and the statistics of the high eccentricities of Mercury over 5 Gyr [4].Table I presents a summary of the different Hamiltonians and corresponding dynamics we consider in this work. III. LYAPUNOV SPECTRUM Ergodic theory provides a way, through the Lyapunov characteristic exponents (LCEs), to introduce a fundamental set of timescales for any differentiable dynamical system ż = F (z, t) defined on a phase space P ⊆ R P [31][32][33][34].If Φ(z, t) denotes the associated flow and z(t) = Φ(z 0 , t) the orbit that emanates from the initial condition z 0 , the LCEs λ 1 ≥ λ 2 ≥ • • • ≥ λ P are the logarithms of the eigenvalues of the matrix Λ(z 0 ) defined as where M(z 0 , t) = ∂Φ/∂z 0 is the fundamental matrix and T stands for transposition [32,33].Introducing the Jacobian J = ∂F /∂z, the fundamental matrix allows us to write the solution of the variational equations ζ = J(z(t), t)ζ as ζ(t) = M(z 0 , t)ζ 0 , where ζ(t) ∈ T z(t) P belongs to the tangent space of P at point z(t) and ζ 0 = ζ(0).The multiplicative ergodic theorem of Oseledec [31] states that if ρ is an ergodic (i.e.invariant and indecomposable) measure for the time evolution and has compact support, then the limit in Eq. ( 7) exists for ρ-almost all z 0 , and the LCEs are ρ-almost everywhere constant and only depend on ρ [32].Moreover, one has lim for ρ-almost all z 0 , where λ (1) > λ (2) > . . .are the LCEs without repetition by multiplicity, and z0 is the subspace of R P corresponding to the eigenvalues of Λ(z 0 ) that are smaller than or equal to exp λ (i) , with 8) is irrelevant [32,34].Once the LCEs have been introduced, a characteristic timescale can be defined from each positive exponent as λ −1 i .In the case of the maximum Lyapunov exponent λ 1 , the corresponding timescale is commonly called the Lyapunov time. For a Hamiltonian system with p d.o.f.(i.e., P = 2p), the fundamental matrix is symplectic and the set of LCEs is symmetric with respect to zero, that is, If the Hamiltonian is time independent, a pair of exponents vanishes.In general, the existence of an integral of motion C = C(z) implies a pair of null exponents, one of them being associated with the direction of the tangent space that is normal to the surface of constant C [e.g.33]. 
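Before specializing to the ISS, the mechanics of extracting a full spectrum of finite-time exponents from the tangent flow can be illustrated on a toy system. Here is a minimal sketch for the Chirikov standard map (not the secular model), using QR reorthonormalization in place of explicit Gram-Schmidt; all parameter values are illustrative.

```python
import numpy as np

def standard_map(z, K=1.2):
    """One iteration of the Chirikov standard map (p, q) -> (p', q')."""
    p, q = z
    p_new = (p + K * np.sin(q)) % (2 * np.pi)
    q_new = (q + p_new) % (2 * np.pi)
    return np.array([p_new, q_new])

def jacobian(z, K=1.2):
    """Tangent map of the standard map at phase-space point z."""
    _, q = z
    a = K * np.cos(q)
    return np.array([[1.0, a],
                     [1.0, 1.0 + a]])

def lyapunov_spectrum(z0, n_steps=100_000, K=1.2):
    """Finite-time LCEs via QR reorthonormalization of tangent vectors."""
    z = np.array(z0, dtype=float)
    Q = np.eye(2)                 # orthonormal set of tangent vectors
    log_r = np.zeros(2)
    for _ in range(n_steps):
        Q = jacobian(z, K) @ Q    # propagate tangent vectors
        z = standard_map(z, K)    # propagate the orbit
        Q, R = np.linalg.qr(Q)    # reorthonormalize
        log_r += np.log(np.abs(np.diag(R)))   # accumulated stretching
    return log_r / n_steps        # exponents per map iteration

print(lyapunov_spectrum([2.0, 1.0]))
```

For this symplectic map the two exponents come out as a pair (λ, −λ), the discrete-time analogue of the spectrum symmetry stated in Eq. (9).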
The ISS is a clear example of a dynamical system that is out of equilibrium.Its phase-space density diffuses seamlessly over any meaningful timescale [5,28].Therefore, the infinite time limit in Eq. ( 7) is not physically relevant.The non-null probability of a collisional evolution of the inner planets [5,6,35,36] implies that such limit does not even exist as a general rule.Most of the orbital solutions stemming from the current knowledge of the Solar System are indeed asymptotically unstable [4,7].Physically relevant quantities are the finite-time LCEs (FT-LCEs), λ i (z 0 , t), defined from the eigenvalues The time dependence of the phase-space density translates in the fact that no ergodic measure is realized by the dynamics, and the FT-LCEs depend on the initial condition z 0 in a non-trivial way [37]. The FT-MLE of the forced secular ISS has been numerically computed over 5 Gyr for an ensemble of stable orbital solutions of the Hamiltonian H with initial conditions very close to their nominal values [38].Its long-term distribution is quite large and does not shrink over time [4,Fig. 3].At 5 Gyr, the probability density function (PDF) of the Lyapunov time peaks at around 4 Myr, it decays very fast below 2 Myr, while its 99th percentile reaches 10 Myr [4, Fig. 4].The significant width of the distribution relates to the aforementioned out-of-equilibrium dynamics of the ISS, as the FT-MLE of each orbital solution continues to vary over time.The dependence of the exponent on the initial condition is associated with the non-ergodic exploration of the phase space by the dynamics.As a remark, the fact that the lower tail of the FT-MLE distribution, estimated from more than 1000 solutions, does not extend to zero implies that invariant (KAM) tori are rare in a neighborhood of the nominal initial conditions (if they exist at all).This fact excludes that the dynamics is in a Nekhoroshev regime [12,39], in agreement with the indications of a multidimensional resonance overlapping at the origin of chaos [19,40].In such a case, the long dynamical half-life of the ISS should not be interpreted in terms of an exponentially-slow Arnold diffusion. Computations of the FT-MLE of the Solar System planets have been reported for more than thirty years [1,3].However, the retrieval of the entire spectrum of exponents still represents a challenging task.Integrating an N -body orbital solution for the Sun and the eight planets that spans 5 Gyr requires the order of a month of wall-clock time [e.g.41].The computation by a standard method of the entire Lyapunov spectrum for a system with p d.o.f. also requires the simultaneous time evolution of a set of 2p tangent vectors [42].On the top of that, a computation of the exponents for an ensemble of trajectories is advisable for a non-ergodic dynamics [4].These considerations show how demanding the computation of the Lyapunov spectrum of the Solar System planets is.By contrast, a 5-Gyr integration of the forced ISS takes only a couple of hours for Gauss's dynamics (H) and a few minutes at degree 4 (H 4 ).This dynamical model thus provides a unique opportunity to compute all the FT-LCEs that are mainly related to the secular evolution of the inner orbits. We compute the Lyapunov spectrum of the truncated forced ISS using the standard method of Benettin et al. 
[43], based on Gram-Schmidt orthogonalization.Manipulation of the truncated Hamiltonian H 2n in TRIP allows us to systematically derive the equations of motion and the corresponding variational equations, which we integrate through an Adams PECE method of order 12 and a timestep of 250 years.Parallelization of the time evolution of the 16 tangent vectors, between two consecutive reorthonormalization steps of the Benettin et al. [43] algorithm, significantly reduces the computation time.Figure 1a shows the positive FT-LCEs expressed as angular frequencies over the next 10 Gyr for the Hamiltonian truncated at degree 4. The FT-LCEs are computed for 150 stable solutions, with initial conditions very close to the nominal values of Gauss's dynamics and random sets of initial tangent vectors [19,Appendix C].The figure shows the [5th, 95th] percentile range of the marginal PDF of each exponent estimated from the ensemble of solutions.For large times, the exponents of each solution become independent of the initial tangent vectors, the renormalization time, and the norm chosen for the phase-space vectors (see Appendix A and Fig. 8a).In this asymptotic regime, the Benettin et al. [43] algorithm purely retrieves the FT-LCEs as defined in Eq. (10), and the width of their distributions only reflects the out-of-equilibrium dynamics of the system.The convergence of our numerical computation is also assessed by verifying the symmetry of the spectrum stated in Eq. ( 9) (see Appendix A and Fig. 8b). The spectrum in Fig. 1a has distinctive features.A set of intermediate exponents follow the MLE, ranging from 0.1 to 0.01 yr −1 , while the smallest ones fall below 0.01 yr −1 .Figure 1a reveals the existence of a hierarchy of exponents and corresponding timescales that spans two orders of magnitude, down to a median value of λ −1 8 ≈ 500 Myr.The number of positive exponents confirms that no integral of motion exists, as one may expect from the forcing of the outer planets.We also compute the spectrum for the Hamiltonian truncated at degree 6.As shown in Appendix A (Fig. 9), the asymptotic distributions of the exponents are very similar to those at degree 4.This result suggests that long-term diffusion of the phase-space density is very close in the two cases.The different instability rates of the two truncated dynamics mainly relates to the geometry of the instability boundary, which is closer to the initial position of the system for H 6 than for H 4 [7]. The relevance of the Lyapunov spectrum in Fig. 1a emerges from the fact that the existence of an integral of motion implies a pair of vanishing exponents.This is a pivotal point: By a continuity argument, the presence of positive exponents much smaller than the leading one constitutes a compelling indication that there are dynamical quantities whose chaotic decoherence over initially very close trajectories takes place over timescales much longer than the Lyapunov time.In the long term, such quantities should diffuse much more slowly than any LL action variable.Therefore, Fig. 
1a suggests that the secular orbits of the inner planets are characterized by a slowfast dynamics that is much more pronounced than the well-known timescale separation arising from the LL integrable approximation.The existence of slow quantities, which are a priori complicated functions of the phasespace variables, is crucial in the context of finite-time stability, as they can effectively constrain the long-term diffusion of the phase-space density toward the unstable states.The next section addresses the emergence of these slow quantities from the symmetries of the Fourier harmonics that compose the Hamiltonian. IV. QUASI-INTEGRALS OF MOTION The emergence of a chaotic behavior of the planetary orbits can be explained in terms of the pendulum-like dynamics generated by each Fourier harmonic that composes the Hamiltonian in Eq. ( 6) [e.g.44].One can write H 2n (I, θ, t) = H 0,0 2n (I) + M2n i=1 F i (I, θ, t), with where (k i , i ) = (0, 0), M 2n is the number of harmonics in H 2n with a non-null wave vector, and c.c. stands a O is the order of the harmonic.b τ res is the fraction of time the harmonic is resonant.Only harmonics with τ res > 1% are shown.c 5th and 95th percentiles of the time distribution of ∆ω as subscripts and superscripts, respectively. for complex conjugate.Chaos arises from the interaction of resonant harmonics, that is, those harmonics F i whose frequency combination k i • θ + i • φ(t) vanishes at some point along the motion.Using the computer algebra system TRIP, the harmonics of H 10 that enter into resonance along the 5-Gyr nominal solution of Gauss's dynamics have been systematically retrieved, together with the corresponding time statistics of the resonance half-widths ∆ω [19].The resonances have then been ordered by decreasing time median of their halfwidths.The resulting ranking of resonances is denoted as R 1 from now on.Table II recalls the 30 strongest resonances that are active for more than 1% of the 5-Gyr time span of the orbital solution.The wave vector of each harmonic is identified by the corresponding combination of frequency labels the order of each harmonic, defined as the even integer O = (k, ) 1 .The support of the asymptotic ensemble distribution of the FT-MLE shown in Fig. 1a overlaps in a robust way with that of the time distribution of the half-width of the strongest resonances.In other words, where ∆ω R1 stands for the half-width of the uppermost resonances of ranking R 1 .Equation ( 12) shows the dynamical sources of chaos in the ISS by connecting the top of the Lyapunov spectrum with the head of the resonance spectrum.Computer algebra allows us to establish such a connection in an unbiased way despite the multidimensional nature of the dynamics.We stress that such analysis is built on the idea that the time statistics of the resonant harmonics along a 5-Gyr ordinary orbital solution should be representative of their ensemble statistics (defined by a set of stable solutions with very close initial conditions) at some large time of the order of billions of years.This assumption was inspired by the good level of stationarity that characterizes the ensemble distribution of the MLE beyond 1 Gyr [4,19], and that extends to the entire spectrum in Fig. 1a.We remark that, strictly speaking, ranking R 1 is established on the Fourier harmonics of the Lie-transformed Hamiltonian H 2n [19,Appendix G].New canonical variables are indeed defined to transform H 2n in a Birkhoff normal form to degree 4. 
The goal is to let the interactions of the terms of degree 4 in H 2n appear more explicitly in the amplitudes of the harmonics of higher degrees in H 2n , the physical motivation being that the non-linear interaction of the harmonics at degree 4 constitutes the primary source of chaos [19].Keeping in mind the quasiidentity nature of the Lie transform, here we drop for simplicity the difference between the two Hamiltonians.Moreover, all the new analyses of this work involve the original variables of Eq. (5). A. Quasi-symmetries of the resonant harmonics In addition to the dynamical interactions responsible for the chaotic behavior of the orbits, Table II provides information on the geometry of the dynamics in the action variable space.Ranking the Fourier harmonics allows us to consider partial Hamiltonians constructed from a limited number m of leading terms [7,19], that is, The dynamics of a Hamiltonian reduced to a small set of harmonics is generally characterized by several symmetries and corresponding integrals of motion.We are interested in how these symmetries are progressively destroyed when one increases the number of terms taken into account in Eq. (13).Consider a set of m harmonics of H 2n and a dynamical quantity that is a linear combination of the action variables, that is, γ ∈ R 8 being a parameter vector.From Eq. ( 11), the partial contribution of the m harmonics to the time derivative of C γ along the flow of and Ċγ = Ċγ,M2n , where M 2n is the total number of harmonics with a non-null wave vector that appear in H 2n .Any quantity C γ with γ • k i = 0 is conserved by the one-d.o.f.dynamics generated by the single harmonic F i .In other words, such a quantity would be an integral of motion if F i were the only harmonic to appear in the Hamiltonian.Considering now m different harmonics, these do not contribute to the change of the quantity , that is, if the vector γ belongs to the orthogonal complement of the linear subspace of R 8 spanned by the wave vectors (k i ) m i=1 .We also consider the quantity Because of the explicit time dependence in the Hamiltonian, the partial contribution of a set of m harmonics to the time derivative of C γ along the flow of ) and one has Ċ Dynamical quantities C γ or C γ that are unaffected by a given set of leading harmonics, that is, with null partial contribution in Eq. ( 15) or (17), are denoted as quasiintegrals of motion from now on.More specifically, we build our analysis on ranking R 1 , since the resonant harmonics are those responsible for changes that accumulate stochastically over long timescales, driving chaotic diffusion. In the framework of the aforementioned considerations, the resonances listed in Table II possess three different symmetries. a. First symmetry.The rotational invariance of the entire Solar System implies the d'Alembert rule [21,24,45,46].Moreover, the Jupiterdominated eccentricity mode g 5 is the only fundamental Fourier mode of the outer planet forcing to appear in Table II.The quantity with b. 
Second symmetry.We write the eccentricity and inclination parts of the harmonic wave vectors explicitly, that is, k = (k ecc , k inc ) with k ecc , k inc ∈ R 4 .One can visually check that the harmonics in Table II verify the relation ).Therefore, denoting γ 1 = (0 4 , 1 4 ), the quantity is conserved by these resonances.C inc is the angular momentum deficit (AMD) [47] contained in the inclination d.o.f.This symmetry can then be interpreted as a remnant of the conservation of the AMD of the entire (secular) Solar System.We remark that the AMD contained in the eccentricity d.o.f., C ecc = 4 i=1 X i , is not invariant under the leading resonances because of the eccentricity forcing mainly exerted by Jupiter through the mode g 5 .The conservation of C inc depends on two facts: the inclination modes s 6 , s 7 , s 8 of the external forcing do not appear in Table II; low-order harmonics like 2g 1 −s 1 −s 2 , 2g 1 − 2s 1 , and 2g 1 − 2s 2 are never resonant (even if they can raise large quasi-periodic contributions), so that two AMD reservoirs C ecc and C inc are decoupled in Table II.We recall that the absence of an inclination mode s 5 in the external forcing relates to the fixed direction of the angular momentum of the entire Solar System [2,21,46]. c. Third symmetry.The first two symmetries could be expected to some extent on the basis of physical intuition of the interaction between outer and inner planets.However, it is not easy to even visually guess the third one from Table II.Consider the 30 × 8 matrix K 30 whose rows are the wave vectors (k i ) 30 i=1 of the listed resonances.A singular value decomposition shows that the rank of K 30 is equal to 6. Therefore, the linear subspace V 30 = span(k 1 , k 2 , . . ., k 30 ) spanned by the wave vectors has dimension 6.A Gram-Schmidt orthogonalization allows us to determine two linearly independent vectors that span its orthogonal complement Since the second symmetry clearly requires that γ 1 ∈ V ⊥ 30 , the three quantities C inc , C 2 , C ⊥ 2 are not independent and one has indeed The additional symmetry can thus be interpreted in terms of a certain decoupling between the d.o.f. 3, 4 and 1, 2, representing in the proper modes the Earth-Mars and Mercury-Venus subsystems, respectively. The aforementioned symmetries, that exactly characterize the resonances listed in We remark that, differently from C inc and C 2 , the quantity E 2n is a non-linear function of the action-angle variables.However, as far as stable orbital evolutions are concerned, the convergence of the series expansion of the Hamiltonian is sufficiently fast that the linear LL approximation E 2 = H 2 +g 5 1 8 •I = C γ3 , with γ 3 = −ω LL +g 5 1 8 , reproduces reasonably well E 2n along the flow of H 2n for n > 1.The vector γ 3 is used in Sect.V, together with γ 1 and γ 2 , to deal with the geometry of the linear action subspace spanned by the QIs.The explicit expressions of these vectors are given in Appendix B. We mention that, differently from γ 1 and γ 2 , the components of γ 3 are not integer and they have the dimension of a frequency. B. Slow variables The QIs of motion E 2n , C inc , C 2 are clearly strong candidates for slow variables once evaluated along the orbital solutions.In what follows, to assess the slowness of a dynamical quantity when compared to the typical variations of the action variables, we consider the variance of its time series along a numerical solution. 
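Before proceeding, the linear-algebra step behind the third symmetry above can be made concrete. A minimal sketch follows, with a toy wave-vector matrix standing in for the actual K 30 (which is not reproduced here); any vector of the computed complement yields a quantity C_γ = γ · I that is conserved by each listed harmonic taken in isolation.

```python
import numpy as np

def orthogonal_complement(K, tol=1e-10):
    """Orthonormal basis of the subspace of all gamma with K @ gamma = 0,
    obtained from the SVD of the wave-vector matrix K (one row per k_i)."""
    _, s, Vt = np.linalg.svd(K.astype(float))
    rank = int(np.sum(s > tol * s[0]))
    return Vt[rank:].T            # columns span the kernel of K

# Toy check with two wave vectors in R^4: every returned gamma satisfies
# gamma . k_i = 0, so C_gamma = gamma . I is untouched by those harmonics.
K = np.array([[1, -1, 0, 0],
              [0, 1, -1, 0]])
G = orthogonal_complement(K)
assert np.allclose(K @ G, 0.0)
```

Applied to the rank-6 matrix K 30 of the text, this construction returns exactly the two linearly independent directions discussed above.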
We define the dimensionless QIs where C 0 stands for the current total AMD of the inner planets, that is, the value of C ecc + C inc at time zero.We stress that, by introducing the unit vectors At degree 2, one also has E 2 = C γ3 /C 0 .We then consider the ensembles of numerical integrations of H 4 and H 6 , with very close initial conditions and spanning 100 Gyr in the future, that have been presented in Ref. [7]. The top row of Fig. 2 shows the time evolution over 5 Gyr of the dimensionless QIs and of two components of the dimensionless action vector I = I/C 0 along the nominal orbital solutions of the two ensembles.We subtract from each time series its mean over the plotted time span.The time series are low-pass filtered by employing the Kolmogorov-Zurbenko (KZ) filter with three iterations of the moving average [4,48].A cutoff frequency of 1 Myr −1 is chosen to highlight the long-term diffusion that can be hidden by short-time quasi-periodic oscillations.This is in line with our definition of quasi-integrals based on contribution from resonant harmonics only.Figure 2 clearly shows that the QIs are slowly-diffusing variables when compared to an arbitrary function of the action variables.The behavior of the QIs along the nominal orbital solutions of Fig. 2 is confirmed by a statistical analysis in Appendix C. Figure 10 shows the time evolution of the distributions of the same quantities as Fig. 2 over the stable orbital solutions of the entire ensembles of 1080 numerical integrations of Ref. [7]. Figure 11 details the growth of the QI dispersion over time.We remark that C 2 and E 2n show very similar time evolutions along stable orbital solutions, as can be seen in the top row Fig. 2.This is explained by the interesting observation that the components of the unit vectors γ 2 and γ 3 differ from each other by only a few percent, as shown in Appendix B. However, we stress that the two vectors are in fact linearly independent: C 2 does not depend on the actions X 1 and X 2 , while E 2n does.The two QIs move away from each other when high eccentricities of Mercury are reached, that is, for large excursions of the Mercury-dominated action X 1 . C. Weak resonances and Lyapunov spectrum A fundamental result from Table II is that the symmetries introduced in Sect.IV A are still preserved by resonances that have half-widths an order of magnitude smaller than those of the strongest terms.It is natural to extract from ranking R 1 the weak resonances that break the three symmetries.A new ranking of reso-nances R 2 is defined in this way.Table III II, only harmonics that are resonant for more than 1% of the 5-Gyr time span of the nominal solution of Gauss's dynamics are shown.The leading symmetry-breaking resonances have half-widths of about 0.01 yr −1 .For each QI, the dominant contribution comes from harmonics involving Fourier modes of the outer planet forcing other than g 5 : the Saturn-dominated modes g 6 , s 6 and the modes g 7 , s 7 mainly associated to Uranus.In the case of C inc , there is also a contribution that starts at about 0.006 yr −1 with F 8 = 4g 1 − g 2 − g 3 − s 1 − 2s 2 + s 4 and comes from high-order internal resonances, that is, resonances that involve only the d.o.f. 
of the inner planets.We remark that the decrease of the resonance half-width with the index of the harmonic in Table III is steeper for C inc than for E 2n , C 2 , and is accompanied by a greater presence of high-order resonances.This may notably explain why the secular variations of C inc are somewhat smaller in the top row of Fig. 2. We finally point out the important symmetry-breaking role of the modes g 7 , s 7 , representing the forcing mainly exerted by Uranus.Differently from what one might suppose, these modes cannot be completely neglected when addressing the long-term diffusion of ISS.This recalls the role of the modes s 7 and s 8 in the spin dynamics of Venus [49], and is basically a manifestation of the long-range nature of the gravitational interaction. As we state in Sect.III, a pair of Lyapunov exponents would vanish if there were an exact integral of motion.In the presence of a weakly broken symmetry, one may expect a small positive Lyapunov exponent whose value relates to the half-width of the strongest resonances driving the time variation of the corresponding QI.Such an argument is a natural extension of the correspondence between the FT-MLE and the top of the resonance spectrum given in Eq. ( 12).Comparison of Table III with the Lyapunov spectrum in Fig. 1a shows that the time statistics of the half-widths of the symmetry-breaking resonances of ranking R 2 overlaps with the ensemble distribution of the three smallest FT-LCEs, that is, λ 6 , λ 7 , λ 8 .One can indeed write where ∆ω R2 stands for the half-width of the uppermost resonances of ranking R 2 .a Only harmonics that are resonant for more than one percent of time are shown, i.e., τ res > 1%. exponents: Equation ( 23) is not a one-to-one correspondence, nor should it be understood as an exact relation since, for example, λ 6 is not well separated from the larger exponents.Its physical meaning is that the QIs are among the slowest d.o.f. of the ISS dynamics.Such a claim is one of the core points of this work.In Sect.V, we show its statistical validity in the geometric framework established by a principal component analysis of the orbital solutions.Moreover, Sect.IV D shows that Eq. ( 23) can be stated more precisely in the case of a simplified dynamics that underlies H 2n .We remark that E 2n , C inc , C 2 constitute a set of three QIs that are independent and nearly in involution, and it is thus meaningful to associate three different Lyapunov exponents with them.On the one hand, the independence is easily checked at degree 2 as the vectors γ 1 , γ 2 , γ 3 are linearly independent. D. New truncation of the Hamiltonian The fundamental role of the external modes g 6 , g 7 , s 6 , s 7 in Table III IV and related to C 2 are resonant for a few timesteps and their time statistics is very tentative.More precise estimations of the half-widths should be obtained over an ensemble of different orbital solutions, possibly spanning more than 5 Gyr.In any case, the fundamental point here is the drastic reduction in the size of the uppermost resonances with respect to Table III, and this is a robust result.We remark that resonances of order 12 and higher may also carry an important contribution at these scales, but they are excluded by the truncation at degree 10 adopted in Ref. [19] to establish the resonant harmonics, so they do not appear in the tables of this work. 
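For intuition about how such resonant frequency combinations are screened, a brute-force sketch follows. The frequency values are indicative present-day secular frequencies in arcsec/yr, not the time-dependent values used along the orbital solutions in the actual analysis, and the bounds are deliberately small.

```python
import numpy as np
from itertools import product

# Indicative secular frequencies (arcsec/yr): g1..g4, s1..s4 of the inner
# planets; g5 is the dominant external forcing mode.  Approximate values.
omega_inner = np.array([5.59, 7.45, 17.37, 17.92,
                        -5.61, -7.06, -18.85, -17.75])
G5 = 4.257

def near_resonances(omega, g5, kmax=2, order_max=6, tol=0.1):
    """Flag integer combinations k . omega + l*g5 that nearly vanish.
    Only even orders are kept, consistent with the d'Alembert rule.
    Brute force over ~2e6 combinations: a few seconds."""
    hits = []
    for k in product(range(-kmax, kmax + 1), repeat=len(omega)):
        for l in range(-2, 3):
            order = sum(map(abs, k)) + abs(l)
            if order == 0 or order > order_max or order % 2:
                continue
            comb = float(np.dot(k, omega)) + l * g5
            if abs(comb) < tol:
                hits.append((k, l, comb))
    return sorted(hits, key=lambda h: abs(h[2]))

for k, l, comb in near_resonances(omega_inner, G5)[:5]:
    print(k, l, f"{comb:+.4f}")
```

With these rounded values the well-known combination 2(g4 − g3) − (s4 − s3) comes out on top; the systematic analysis of the text works instead with the resonance half-widths along a 5-Gyr solution, where the frequencies themselves drift.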
Hamiltonian H • 2n .The implications of Table IV suggest to introduce an additional truncation in the Hamiltonian H 2n .This consists in dropping the harmonics of Eq. ( 6) that involve external modes other than g 5 : where φ 1 (t) = −g 5 t and • = ( 1 , 0, . . ., 0), with 1 ∈ Z. Consistently with the absence of symmetry-breaking resonances related to E 2n in Table IV, the corresponding dynamics admits the exact integral of motion which represents the transformed Hamiltonian under the canonical change of variables that eliminates the explicit time dependence in Eq. ( 24).We point out that, as the additional truncation is applied to the action-angle formulation of Eq. ( 6), the external modes other than g 5 still enter the definition of the proper modes of the forced Laplace-Lagrange dynamics [4].The orbital solution arising from H • 2n is initially very close to that of H 2n .A frequency analysis over the first 20 Myr shows that the differences in the fundamental frequencies of the motion between H • 2n and H 2n are of the order of 10 −3 arcsec yr −1 , an order of magnitude smaller than the typical frequency differences between H 4 and H 6 [4, Table 3].Therefore, even though H • 2n constitutes a simplification of H 2n , it should not be regarded as a toy model.Its dynamics, in particular, still possesses 8 d.o.f.We compute the Lyapunov spectrum of the Hamiltonian H • 4 in the same way as described in Sect.III in the case of H 2n .Since its dynamics turns out to be much more stable than that of H 4 (see Sect.VI, Fig. 7), we extend the computation to a time span of 100 Gyr.The marginal ensemble PDFs of the positive FT-LCEs are shown in Fig 1b .Comparing to the Lyapunov spectrum of H 4 , one notices that the distributions of the leading exponents turn out to be quite similar, apart from being more spaced and except for a slight decrease in their median values.However, such a decrease is more pronounced for smaller exponents, and the drop in the smallest exponents is drastic.The smallest one, λ 8 , decreases monotonically, consistently with the fact that E • 4 from Eq. ( 25) is an exact integral of motion.The exponent λ 7 drops by more than an order of magnitude, and apparently begins to stabilize around a few 10 −4 arcsec yr −1 , while λ 6 also reduces significantly, by a factor of three, to about 0.005 yr −1 .The drop in the smallest exponents agrees remarkably well with that of the half-width of the leading symmetry-breaking resonances when switching from Table III to Table IV.One can indeed write where ∆ω R3,Q stands for the half-width of the uppermost resonances of ranking R 3 related to the quasi-integral Q.The hierarchy of the three smallest exponents in the spectrum of Fig. 1b These one-to-one correspondences are a particular case of Eq. ( 23) and support the physical intuition behind it.In Sect.V, we prove the validity of Eq. ( 27) in the geometric framework established by a principal component analysis of the orbital solutions of H • 2n .Numerical integrations.We compute ensembles of 1080 orbital solutions of the dynamical models H • 4 and H • 6 , with initial conditions very close to the nominal ones of Gauss's dynamics and spanning 100 Gyr in the future.This closely follows what we did in Ref. [7] in the case of the models H 2n .The bottom row of Fig. 2 shows the filtered dimensionless QIs along the nominal solutions of the two models over the first 5 Gyr.The hierarchy of the QIs stated in Eq. 
( 27) is manifest.The quantity C 2 has secular variations much slower than C inc , while the latter is itself slower with respect to its counterpart in the orbital solutions of H 2n .We remark that, as E • 2n is an exact integral of motion for the model H • 2n , we do not plot it.From Fig. 2 it is also evident how difficult can be the retrieval of the short-lasting resonances affecting C 2 from a solution of H • 2n spanning only a few billion years.The hierarchy of the QIs is confirmed by a statistical analysis in Appendix C. Figure 10 shows the entire time evolution of the distributions of the filtered dimensionless QIs over the stable orbital solutions of the ensembles of 1080 numerical integrations.Figure 11 details the growth of the QI dispersion over time.As suggested by Table IV, the drop in the diffusion rates of the QIs when switching from H 2n to H • 2n is manifest. V. STATISTICAL DETECTION OF SLOW VARIABLES Section IV shows how the slow-fast nature of the ISS dynamics, indicated by the Lyapunov spectrum, emerges from the quasi-symmetries of the resonant harmonics of the Hamiltonian.QIs of motion can be introduced semianalytically and they constitute slow quantities when evaluated along stable orbital solutions.In this section, we consider the slow variables that can be systematically retrieved from a numerically integrated orbital solution by means of a statistical technique, the principal component analysis.We show that, in the case of the forced secular ISS, the slowest variables are remarkably close to the QIs, and this can be established in a precise geometric framework. A. Principal component analysis PCA is a widely used classical technique for multivariate analysis [50,51].For a given dataset, PCA aims to find an orthogonal linear transformation of the variables such that the new coordinates offer a more condensed and representative view of the data.The new variables are called principal components (PCs).They are uncorrelated and ordered according to decreasing variance: the first PC and last one have, respectively, the largest and the smallest variance of any linear combination of the original variables.While one is typically interested in the PCs of largest variance, in this work we employ the variance of the time series of a dynamical quantity to assess its slowness when compared to the typical variations of the action variables (see Sect.IV B).We thus perform a PCA of the action variables I and focus on the last PCs, as they give a pertinent statistical definition of slow variables.We stress that, when coupled to a lowpass filtering of the time series, the statistical variance provides a measure of chaotic diffusion. 
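Before the formal description, the whole pipeline (low-pass filter, mean subtraction, diagonalization of the sample covariance) can be condensed into a short sketch; the names are hypothetical and the KZ filter is realized as three iterated moving averages with a window given in samples.

```python
import numpy as np

def kz_filter(x, window, iterations=3):
    """Kolmogorov-Zurbenko low-pass filter: iterated moving average."""
    kernel = np.ones(window) / window
    for _ in range(iterations):
        x = np.convolve(x, kernel, mode='same')
    return x

def last_principal_components(actions, window=4001, n_slow=3):
    """PCA of filtered, mean-subtracted action time series.
    actions: (8, n) array of sampled actions; returns the basis vectors
    and variances of the n_slow smallest-variance components."""
    D = np.apply_along_axis(kz_filter, 1, actions, window)
    D = D - D.mean(axis=1, keepdims=True)
    cov = D @ D.T / (D.shape[1] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    return eigvecs[:, :n_slow], eigvals[:n_slow]
```

The columns returned here play the role of the vectors a 8, a 7, a 6 below; since eigh orders eigenvalues in ascending order, the first columns are the smallest-variance, i.e., slowest, directions.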
Implementation. Our procedure for the PCA is described briefly as follows [for general details see, e.g., 52,53]. Let I(t) = (X(t), Ψ(t)) be the 8-dimensional time series of the action variables evaluated along a numerical solution of the equations of motion. As in Sect. IV B, we apply the KZ low-pass filter with three iterations of the moving average and a cutoff frequency of 1 Myr −1 to obtain the filtered time series Î(t) [4,48]. In this way, the short-term quasi-periodic oscillations are mostly suppressed, which better reveals the chaotic diffusion over longer timescales. We finally define the mean-subtracted filtered action variables over the time interval [t_0, t_0 + T] as Ĩ(t) = Î(t) − n^{−1} Σ_{i=0}^{n−1} Î(t_0 + iΔt), where the mean is estimated by discretization of the time series with a sampling step Δt such that T = (n − 1)Δt. The discretized time series over the given interval is stored in an 8 × n matrix,

D = [Ĩ(t_0), Ĩ(t_0 + Δt), . . ., Ĩ(t_0 + (n − 1)Δt)].  (28)

The PCA of the data matrix D consists in a linear transformation P = A^T D, where A is an 8 × 8 orthogonal matrix (i.e., A^{−1} = A^T) defined as follows. Writing A = [a_1, . . ., a_8], the column vectors a_i ∈ R^8 are chosen to be the normalized eigenvectors of the sample covariance matrix, in order of decreasing eigenvalues: (n − 1)^{−1} D D^T = A Σ A^T, where Σ = diag(σ_1, . . ., σ_8) and σ_1 ≥ · · · ≥ σ_8. The PCs are defined as the new variables after the transformation, that is, PC_i = a_i · I with i ∈ {1, . . ., 8}. The uncorrelatedness and the ordering of the PCs can be easily seen from the diagonal form of their sample covariance matrix, (n − 1)^{−1} P P^T = Σ, from which it follows that the variance of PC_i is σ_i.

Among all the linear combinations of the action variables I, the last PC, i.e., PC_8, has the smallest variance over the time interval [t_0, t_0 + T] of a given orbital solution. The second last PC, i.e., PC_7, has the second smallest variance and is uncorrelated with PC_8, and so on. It follows that the linear subspace spanned by the last k PCs is the k-dimensional subspace of minimum variance: the variance of the sample projection onto this subspace is the minimum among all the subspaces of the same dimension. These properties indicate that the last PCs provide a pertinent statistical definition of slow variables along numerically integrated solutions of a dynamical system. The linear structure of the PCA, in particular, seems adapted to quasi-integrable systems close to a quadratic Hamiltonian, like the ISS. In such a case, one may reasonably expect that the slow variables are, to a first approximation, linear combinations of the action variables. We remark that the mutual orthogonality allows us to associate a linear d.o.f. with each PC.

Aggregated sample. Instead of considering a specific solution, it is also possible to take the same time interval from m different solutions and stack them together to form an aggregated sample, D_agg = [D_1, . . ., D_m], where D_i is the data matrix of Eq. (28) for the ith solution. Since this work deals with a non-stationary dynamics, as the ISS ceaselessly diffuses in the phase space [7], we always consider the same time interval for each of the m solutions. The aggregated sample is useful in capturing globally the behavior of the dynamics, because it averages out temporary and rare episodes arising along specific solutions.
B. Principal components and quasi-integrals

Both the QIs and the last PCs represent slow variables, but they are established through two different methods. Equations (23) and (27) claim that the QIs found semi-analytically in Sect. IV are among the slowest d.o.f. of the ISS dynamics. This naturally suggests comparing the three QIs with the three last PCs retrieved from numerically integrated orbital solutions. In this part, we first introduce the procedure that we implement to establish a consistent and systematic correspondence between QIs and PCs. We then present both a visual and a quantitative geometric comparison between them.

1. Tweaking the QIs

The three last components PC 8, PC 7, PC 6 are represented by the set of vectors S_PCs = {a_8, a_7, a_6} belonging to R^8. By construction, these PCs have a linear, hierarchical, and orthogonal structure. In other words: the PCs are linear combinations of the action variables I; denoting by ≺ the ordering by statistical variance, one has PC 8 ≺ PC 7 ≺ PC 6; the unit vectors a_8, a_7, a_6 are orthogonal to each other. On the other hand, the QIs of motion C_inc, C_2, E_2n do not possess these properties. Therefore, we adjust them in such a way as to reproduce the same structure.

a. Linearity. While C_inc and C_2 are linear functions of the action variables, E_2n is not when n > 1. Nevertheless, as we explain in Sect. IV A, as far as one considers stable orbital solutions, the linear LL approximation E_2 = γ_3 · I reproduces E_2n reasonably well. Therefore, we consider the three linear QIs of motion C_inc, C_2, E_2, which are respectively represented by the set of R^8-vectors S_QIs = {γ_1, γ_2, γ_3}. In this way, the 3-dimensional linear subspaces of the action space spanned by the sets S_QIs and S_PCs can be compared.

b. Ordering. We define a set of QIs that are ordered by statistical variance, as is the case for the PCs. We follow two different approaches according to whether we consider the model H°_2n in Eq. (24) or H_2n in Eq. (6) (clearly n > 1).

c. Orthogonality. We apply the Gram-Schmidt process to the ordered set S_QIs to obtain the orthonormal basis S′_QIs = {α_1, α_2, α_3}. The set S′_QIs clearly spans the same subspace as S_QIs. Moreover, the Gram-Schmidt process preserves the hierarchical structure, that is, the two m-dimensional subspaces spanned by the first m ≤ 3 vectors of S′_QIs and S_QIs, respectively, are identical.

In the end, we obtain a linear, ordered, and orthogonal set of modified QIs of motion {QI_1, QI_2, QI_3}, where QI_i = α_i · I.

2. Visual comparison

We now visually compare the vectors α_1, α_2, α_3 of the modified QIs with the corresponding vectors a_8, a_7, a_6 of the last three PCs. We use the ensembles of 1080 numerically integrated orbital solutions of the models H_4 and H°_4 considered in Sects. IV B and IV D, respectively. The nominal solution of each set is denoted as sol. #1 from now on. For the model H_4, we also consider two other solutions: sol. #2, which represents a typical evolution among the 1080 solutions, and sol. #3, representing a rarer one. The particular choice of these two solutions is detailed in Sect. V B 3.

Hamiltonian H°_4. The modified QIs can be explicitly derived in this case and comprise interpretable physical
quantities.One has QI 1 proportional to C 2 and QI 2 proportional to C ⊥ 2 .Moreover, QI 3 is the component of E 2 that is orthogonal to both C 2 and C ⊥ 2 .Figure 3 shows the comparison between the modified QIs and the corresponding PCs for three different time intervals along sol.#1 of H • 4 (see Fig. 2 bottom left for its time evolution).The agreement of the pairs (QI 1 , PC 8 ), (QI 2 , PC 7 ), and (QI 3 , PC 6 ) across different intervals is manifest and even impressive.One can appreciate that the "slower" the PC, the more similar it is to its corresponding QI.The overlap between the modified QIs and the three last PCs means that the QIs of motion span the slowest 3dimensional linear subspace of the action space.Therefore, to a linear approximation, they represent the three slowest d.o.f. of the H • 4 dynamics.The quasi-integral C 2 represents the slowest linear d.o.f.: it coincides with the last principal component PC 8 , which has the smallest variance among all the linear combinations of the action variables.C inc and E 2 represent the second and the third slowest linear d.o.f., respectively: the component of C inc orthogonal to C 2 , i.e., C ⊥ 2 , matches the second last principal component PC 7 ; the component of E 2 orthogonal to the subspace generated by (C 2 , C inc ) matches the third last principal component PC 6 .The strong hierarchical structure of the slow variables for the simplified dynamics H • 4 is clearly confirmed by the almost frozen basis vectors of the PCs. Hamiltonian H 4 .In this case, the QIs of motion C inc , C 2 , E 2 do not show a clear hierarchical structure in terms of statistical variance.Therefore, we consider the whole subspace spanned by the three QIs with respect to that spanned by the three last PCs.Since it is not easy to visually compare two 3-dimensional subspaces of R 8 , we compare their basis vectors instead.The basis α 1,2,3 of modified quasi-integrals QI 1,2,3 is computed according to the algorithm presented in Sect.V B 1. Figure 4 presents the comparison between the modified QIs and the corresponding PCs across three different time intervals of three solutions of H 4 (see Fig. 5 for their time evolution).The first two, sols.#1 and #2, show thorough agreement between the pairs of QIs and PCs across all intervals, which indicates close proximity between the two subspaces V QIs = span(S QIs ) and V PCs = span(S PCs ).One can appreciate that the directions of the basis vectors are quite stable.The last component PC 8 , in particular, remains close to C inc .The slowest linear d.o.f. of H 4 can thus be deduced to be close to C inc , in line with the discussion in Sect.IV C. Such a result shows how interesting physical insight can be gained through the PCA.Some changes in the basis vectors can arise, however, as for the first time interval of sol.#2.This may be expected from a dynamical point of view.Differently from H • 4 , there is no pronounced separation between the slowest d.o.f. at the bottom of the Lyapunov spectrum in Fig. 1a: the marginal distributions of consecutive exponents can indeed touch or overlap each other.Therefore, the hierarchy of slow variables is not as frozen as in H • 4 and it can change along a given orbital solution. 
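The visual agreement between the pairs (QI_i, PC_j) can also be turned into a number, anticipating the subspace metric formalized in Sect. V B 3. A sketch of the principal-angle (chordal) distance follows; note that this plain cosine-based route is ill-conditioned for very small angles, as discussed later, but suffices for illustration.

```python
import numpy as np

def subspace_distance(A, B):
    """Normalized chordal distance between span(A) and span(B);
    A, B: (n, m) matrices whose columns span each subspace."""
    Qa, _ = np.linalg.qr(A)               # orthonormalize the bases
    Qb, _ = np.linalg.qr(B)
    cos_theta = np.clip(np.linalg.svd(Qa.T @ Qb, compute_uv=False),
                        0.0, 1.0)          # cosines of principal angles
    m = min(A.shape[1], B.shape[1])
    return np.sqrt(np.sum(1.0 - cos_theta**2) / m)
```

Values near 0 indicate nearly coincident subspaces, while randomly drawn 3-dimensional subspaces of R^8 typically score above 0.6, as Fig. 6 shows.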
Solutions #1 and #2 represent typical orbital evolutions. If the same time intervals of all the 1080 solutions are stacked together to form an aggregated sample on which the PCA is applied, the features mentioned above persist: the agreement between QIs and PCs, the stability of the basis vectors, and the similarity between PC 8 and C_inc (see Fig. 4). Once again, the PCA confirms that the subspace spanned by the three QIs is overall close to the slowest 3-dimensional linear subspace of the action space. Therefore, to a linear approximation, they represent the three slowest d.o.f. of the H_4 dynamics. We remark that the slowness of the 3-dimensional subspace spanned by the QIs is a much stronger constraint than the observation that each QI is a slow variable. To give an example, let Q = q · I be a slow variable with unit vector q. If ε is an arbitrary small vector, i.e., ‖ε‖ ≪ 1, then Q′ = (q + ε) · I can also be considered as a slow variable, whereas the normalized difference of the two quantities, (ε/‖ε‖) · I, is generally not. Therefore, the linear subspace spanned by Q and Q′, that is, by q and ε, is not a slow 2-dimensional subspace.

FIG. 5. Time evolution over 5 Gyr of the dimensionless QIs of motion (C̃_inc, C̃_2, Ẽ) and of two representatives of the dimensionless action variables (X̃_1, Ψ̃_3) for three solutions of H_4, that is, sol. #1 (top), sol. #2 (middle), and sol. #3 (bottom). Ẽ stands for Ẽ_4. The time series are low-pass filtered with a cutoff frequency of 1 Myr −1 and the mean over 5 Gyr is subtracted.

Solution #3 in Fig. 4 represents an edge case (see Fig. 5 for its time evolution). Typically, the variances of the QIs are at least one order of magnitude smaller than those of the action variables, which allows a clear separation. Nevertheless, the distinction between the QIs and faster d.o.f. can be more difficult in two rare situations. Firstly, if the change in a QI accumulates continually in one direction, its variance can inflate over a long time interval. This is the case for the interval [0, 5] Gyr of sol. #3. Secondly, the variance of a variable that is typically fast can suddenly dwindle during a certain period of time, for example, Ψ_3 over the interval [1, 2] Gyr of sol. #3. In both cases, the slow subspace defined by the three last PCs can move away from the QI subspace owing to contamination by d.o.f. that are typically faster. This is reflected in the mismatch of QI_3 and PC 6 on the last two time intervals of sol. #3 in Fig. 4. We remark that PC 8 and PC 7 are still relatively close to QI_1 and QI_2, which indicates that the slowest 2-dimensional subspace spanned by PC 8 and PC 7 still resides inside the QI subspace. It should be stressed that this disagreement between QIs and PCs does not mean that the QIs are not slow variables in this case. The mismatch has a clear dynamical origin instead. The resonance tables of this work have been retrieved from a single, very long orbital solution, with the idea that its time statistics is representative of the ensemble statistics over a set of initially very close solutions [19]. Therefore, the QIs derived from these tables characterize the dynamics in a global sense. The network of resonances can temporarily change in an appreciable way along a specific solution, or be very particular along rare orbital solutions. In these cases, a mismatch between the last PCs and the present QIs may naturally arise. Moreover, the contamination of the QIs by d.o.f.
that are typically faster may also be expected from the previously mentioned lack of a strong hierarchical structure of the slow variables.The Lyapunov spectrum in Fig. 1a shows that the marginal distributions of the exponents λ 5 and λ 6 , for example, are not separate but overlap each other. Distance between the subspaces of PCs and QIs The closeness of the two 3-dimensional linear subspaces V PCs , V QIs ⊂ R 8 spanned by the sets of vectors S PCs and S QIs , respectively, can be quantitatively measured in terms of a geometric distance.This can be formulated using the principal (canonical) angles [55][56][57]. Let A and B be two sets of m ≤ n independent vectors in R n .The principal vectors (p k , q k ) m k=1 are defined recursively as solutions to the optimization problem: between the two subspaces span(A) and span(B) are then defined by The principal angle θ 1 is the smallest angle between all pairs of unit vectors in span(A) and span(B); the principal angle θ 2 is the smallest angle between all pairs of unit vectors that are orthogonal to the first pair; and so on.Given the matrices defining the two subspaces, the principal angles can be computed from the singular value decomposition of their correlation matrix.The result is the canonical correlation matrix diag(cos θ 1 , . . ., cos θ m ).This cosine-based method is often ill-conditioned for FIG. 6. PDF of the distance between two random 3dimensional linear subspaces of R 8 (blue, 10 5 draws) compared with the PDF of the distance between the two subspaces VPCs (PC8,7,6) and VQIs (QI1,2,3) arising from the time interval [0, 5] Gyr of 1080 solutions of H • 4 (top) and 10 800 solutions of H4 (bottom) (green).For each model, the subspace distance from the same time interval of representative solutions (vertical red lines) and of the aggregated sample of all the solutions (vertical dark green line) are indicated.The subspace distance is given by Eq. (31). small angles.In such case, a sine-based algorithm can be employed [58].In this work, we use the combined technique detailed in Ref. [59]. Once the principal angles have been introduced, different metrics can be defined to measure the distance between two subspaces.In this work, we choose the normalized chordal distance [57]: The distance is null if A and B are the same subspace and equal to 1 when they are orthogonal.We use this metric to show that the subspace closeness suggested by Figs. 3 and 4 is indeed statistically significantly.More precisely, we provide evidence against the null hypothesis that the distribution of distances between V PCs and V PCs , arising from the H • 4 and H 4 dynamics, coincides with that of randomly drawn 3-dimensional subspaces of R 8 .The PDF of the distance between two random 3-dimensional subspaces of R 8 is shown in Fig. 6 in blue (such random subspaces can be easily generated by sampling sets of 3 vectors uniformly on the unit 7-sphere [60]).While the range of possible distances is [0,1], the distribution concentrates on the right-hand side of the interval, with a probability of approximately 99.3% that the distance is larger than 0.6.In this regard, we remark that the notion of distance in high-dimensional spaces is very different from our intuition in a 3-dimensional world.If we draw randomly two vectors in a very high-dimensional space, it is extremely likely that they will be close to mutual orthogonality. The upper panel of Fig. 
The upper panel of Fig. 6 shows in green the PDF of the distance between V_PCs and V_QIs arising from the time interval [0, 5] Gyr of the 1080 orbital solutions of model H°4. In the lower panel, we consider a larger ensemble of 10 800 solutions of model H_4 spanning the same time interval [7], and plot the corresponding PDF of the distance between V_PCs and V_QIs. In both cases, the distance stemming from the aggregated sample of all the solutions is indicated by a vertical dark green line. We also report the distances from the specific solutions considered in Figs. 3 and 4 as vertical red lines. As the PDFs of both models peak at small distances, there is strong evidence that the distribution of distances between the subspaces spanned by the PCs and the QIs is not that of random subspaces. In this sense, the closeness of the subspaces V_PCs and V_QIs is a statistically robust result. In the case of the simplified dynamics H°4, the PDF peaks around a median of roughly 0.08 and has a small variance. Switching to model H_4, the median increases to about 0.26 and the PDF is more spread out, with a long tail toward larger distances. The differences between the PDFs of the two models follow quite naturally from the discussion in Sect. V B 2: a quasi-frozen hierarchy of the slowest variables for H°4; a larger variance for H_4, related to contamination by d.o.f. that are typically faster and to variations of the resonant network with respect to the nominal solution of Gauss's dynamics, which is used to infer the QIs. Solution #3 in Fig. 4 represents in this sense an edge case of the distance distribution, while sol. #2 is a typical solution close to the PDF median.
VI. IMPLICATIONS FOR LONG-TERM STABILITY
The existence of slow variables can have crucial implications for the stability of the ISS. The QIs of motion can effectively constrain, in an adiabatic way, the chaotic diffusion of the planet orbits over long timescales, forbidding in general a dynamical instability over a limited time span, e.g., several billions of years. Here we give compelling arguments for such a mechanism.
Figure 7 shows the cumulative distribution function (CDF) of the first time that Mercury's eccentricity reaches a value of 0.7, from the ensembles of 1080 orbital solutions of the H°4 and H°6 models; such a high eccentricity is a precursor of the dynamical instability (i.e., close encounters, collisions, or ejections of planets) of the ISS [6]. We also report the same CDF for the models H_4 and H_6, which we recently computed in Ref. [7]. One can appreciate that the time corresponding to a probability of instability of 1% is greater than 100 Gyr for the H°4 model, while it is about 15 Gyr for H_4; at degree 6, this time still increases, from 5 Gyr for H_6 to about 20 Gyr for H°6. Since the difference between the H_2n and H°2n models relates to the smallest Lyapunov exponents (Fig. 1), and is accompanied by a much slower diffusion of the QIs for H°2n (Figs. 2, 10, and 11), Fig. 7 indicates that the dynamical half-life of the ISS is linked in a critical way to the speed of diffusion of these slow quantities. We stress that the slower diffusion toward the dynamical instability in the H°2n model derives from neglecting the external forcing mainly exerted by Saturn, Uranus, and Neptune.
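The instability statistic in Fig. 7 is a simple first-passage time; a minimal numpy sketch of its estimation from an ensemble of eccentricity time series (array names are hypothetical):

import numpy as np

def instability_cdf(ecc, threshold=0.7):
    # Empirical CDF (per time index) of the first time the eccentricity
    # reaches the threshold. ecc is a hypothetical (n_solutions, n_times)
    # array; solutions that never cross the threshold are right-censored.
    hit = ecc >= threshold
    crossed = hit.any(axis=1)
    # argmax returns the index of the first True along each row
    first = np.where(crossed, hit.argmax(axis=1), ecc.shape[1])
    return np.array([(first <= i).mean() for i in range(ecc.shape[1])])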
We also observe that, to a linear approximation, the knowledge of C_inc and E_2 allows us to bound the variations of the action variables X, Ψ. Recalling that the actions are positive quantities, from Eq. (19) one sees that fixing a value of C_inc puts an upper bound on the variations of the inclination actions Ψ (each action being bounded by C_inc divided by its positive coefficient). As a consequence, at degree 2 in eccentricities and inclinations, fixing a value of the QI associated with γ_3 = (γ_3^ecc, γ_3^inc) also bounds from above the variations of the eccentricity actions X, since the components of γ_3^ecc all have the same sign, as do those of γ_3^inc (see Appendix B). This is an important point, as we state in Sect. I that the lack of any bound on the chaotic variations of the planet orbits is one of the reasons that complicate the understanding of their long-term stability. We remark that the secular planetary phase space can be bounded by fixing the value of the total AMD, that is, C_ecc + C_inc [47]. A statistical study of the density of states that are a priori accessible can then be realized [61]. It is not, however, fully satisfying to consider a fixed value of the total AMD of the ISS, as we show that C_ecc is changed by some of the leading resonances of the Hamiltonian, as a result of the eccentricity forcing mainly exerted by Jupiter through the mode g_5. Moreover, the destabilization of the ISS indeed consists in a large transfer of eccentricity AMD, C_ecc, from the outer system to the inner planets through the resonance g_1 − g_5 [5,6,36,62]. It should be noted that C_ecc can still be considered a slow quantity with respect to an arbitrary function of the action variables, as it is only changed by the subset of the leading resonances involving the external mode g_5. This slowness has indeed been observed in stable orbital solutions of the Solar System [47] and supports the statistical hypothesis in Ref. [61] that allows one to obtain a very reasonable first guess of the long-term PDFs of the eccentricities and inclinations of the inner planets.
The emerging picture explains the statistical stability of the ISS over billions of years in a physically intuitive way. The chaotic behavior of the planet orbits arises from the interaction of a number of leading resonant harmonics of the Hamiltonian, which determine the Lyapunov time. The strongest resonances are characterized by some exact symmetries, which are only broken by weak resonant interactions. These quasi-symmetries naturally give birth to QIs of motion, quantities that diffuse much more slowly than the LL action variables, constraining the variations of the orbits. The long dynamical half-life of the ISS is connected to the speed of this diffusion, which eventually drives the system to the instability. It should be stressed that, besides the speed of diffusion, the lifetime of the inner orbits also depends on the initial distance of the system from the instability boundary defined by the resonance g_1 − g_5. This geometric aspect includes the stabilizing role of general relativity [5,6], which moves the system away from the instability boundary by 0.43″ yr⁻¹, and the destabilizing effect of terms of degree 6 in eccentricities and inclinations of the planets [7].
VII. DISCUSSION
This work introduces a framework that naturally justifies the statistical stability shown by the ISS over a timescale comparable to its age. Considering a forced secular model of the inner planet orbits, the computation of the Lyapunov spectrum indicates the existence of very different dynamical timescales. Using the computer algebra system TRIP, we systematically analyze the Fourier harmonics of the Hamiltonian that become resonant along a numerically integrated orbital solution spanning 5 Gyr. We uncover three symmetries that characterize the strongest resonances and that are broken by weak resonant interactions. These quasi-symmetries generate three QIs of motion that represent slow variables of the secular dynamics. The size of the leading symmetry-breaking resonances suggests that the QIs are related to the smallest Lyapunov exponents. The claim that the QIs are among the slowest d.o.f. of the dynamics constitutes the central point of this work. On the one hand, it is supported by the analysis of the underlying Hamiltonian H°2n, in which one neglects the forcing mainly exerted by Saturn, Uranus, and Neptune and in which, as a consequence, the diffusion of the QIs is greatly reduced. On the other hand, the geometric framework established by the PCA of the orbital solutions independently confirms that the QIs are statistically the slowest linear variables of the dynamics. We give strong evidence that the QIs of motion play a critical role in the statistical stability of the ISS over the Solar System lifetime, by adiabatically constraining the long-term chaotic diffusion of the orbits.
A. Inner Solar System among classical quasi-integrable systems
It is valuable to contextualize the dynamics of the ISS within the class of classical quasi-integrable systems. A comparison with the Fermi-Pasta-Ulam-Tsingou (FPUT) problem, in particular, deserves to be made. This concerns the dynamics of a one-dimensional chain of identical masses coupled by nonlinear springs. For weak nonlinearity, the normal modes of oscillation remain far from the energy equipartition expected from statistical mechanics for a very long time [13]. One way to explain the lack of energy equipartition reported by Fermi and collaborators is through the closeness of the FPUT problem to the integrable Toda dynamics [63-65]. This translates into a very slow thermalization of the action variables of the Toda problem, and of the corresponding integrals of motion, along the FPUT flow [15, 65-69]. In the framework of the present study, the very long dynamical half-life of the ISS is also likely to be the result of the slow diffusion of some dynamical quantities, the QIs of motion. We find, in particular, an underlying Hamiltonian H°2n for which this diffusion is greatly reduced, as a consequence of neglecting the forcing mainly exerted by Saturn, Uranus, and Neptune. This results in a dynamics that can be considered as stable in an astronomical sense. We stress that, differently from the FPUT problem, H°2n is not integrable as the Toda Hamiltonian is. It is indeed chaotic and shares with the original Hamiltonian H_2n the leading Lyapunov exponents. The QIs that we find in this work are only a small number of functions of the action-angle variables of the integrable LL dynamics, and are related to the smallest Lyapunov exponents of the dynamics. Our study suggests that in the FPUT problem the very slow thermalization occurring beyond the Lyapunov time might be understood in terms of combinations of the Toda integrals of motion diffusing over very different timescales.
The long-term diffusion in chaotic quasi-integrable systems should be generally characterized by a broad range of timescales, resulting from the progressive, hierarchical breaking of the symmetries of the underlying integrable problem by resonant interactions [70-72]. A hierarchy of Lyapunov exponents spanning several orders of magnitude, in particular, should be common among this class of systems [e.g., 73].
B. Methods
The long-term dynamics of the ISS is described by a moderate but not small number of d.o.f., which places it far from the typical application fields of celestial mechanics and statistical physics: the first discipline often studies dynamical models with very few degrees of freedom, while the second deals with the limit of a very large number of bodies. Chaos also requires a statistical description of the inner planet orbits. But the lack of a statistical equilibrium, resulting from a slow but ceaseless diffusion of the system, places the ISS outside the standard framework of ergodic theory. The kind of approach we develop in this work is heavily based on computer algebra, in terms of systematic series expansion of the Hamiltonian, manipulation of the truncated equations of motion, extraction of given Fourier harmonics, retrieval of polynomial roots, etc. [4,19]. This allows us to introduce QIs of motion in a 16-dimensional dynamics by analyzing how action-space symmetries are progressively broken by resonant interactions. Our effective method, based on the time statistics of resonances arising along a single, very long numerical integration, is an alternative to formal approaches that define QIs via series expansions [e.g., 74, 75]. The practical usefulness of such formal expansions for a dynamics that covers an intricate, high-dimensional network of resonances seems indeed doubtful. Through the retrieval of the half-widths of the symmetry-breaking resonances, computer algebra also permits us to extend the correspondence between the Lyapunov spectrum and the spectrum of resonances well beyond the standard relation linking the Lyapunov time to the strongest resonances [76].
In the context of dynamical systems with a number of d.o.f. that is not small, this work also considers an approach based on PCA. The role of this statistical technique can be twofold. We use PCA as an independent test to systematically validate the slowness of the QIs. While being introduced semi-analytically as dynamical quantities that are not affected by the leading resonances, they can indeed be related to the last PCs. By extension, the first PCs should probe the directions of the main resonances. This leads to a second potential application of the PCA, which should offer a way to retrieve the principal resonant structure of a dynamical system. In this sense, PCA represents a tool to systematically probe numerical integrations of a complex dynamics and distill important hidden insights. We emphasize that PCA is the most basic linear technique of dimensionality reduction and belongs to the more general class of unsupervised learning algorithms. There are more sophisticated methods of feature extraction that can be more robust [e.g., 77, 78] and can incorporate nonlinearity [79]. These methods are often less intuitive to understand, less straightforward to apply, and harder to interpret than PCA. Yet, they might be more effective and worth pursuing in future work.
With long-term numerical integration and a computer algebra system at one's disposal, the entire strategy we develop in this work can in principle be applied to other planetary systems and to quasi-integrable Hamiltonian dynamics with a moderate number of d.o.f.
To quantitatively estimate the numerical precision of the computed FT-LCEs, we exploit the symmetry of the spectrum stated in Eq. (9). For a single orbital solution, the relative numerical error ε_i on each exponent λ_i can be estimated from the defect of this symmetry, as defined in Eq. (A1). We plot in Fig. 8b the medians of ε_i for the ensemble of 150 orbital solutions of Fig. 1a. The relative errors decrease asymptotically with time, as expected. Even in the case of the smallest exponent, λ_8, the median error is less than 10% at 10 Gyr.
Hamiltonian H_6. We compute for comparison the FT-LCEs of the forced ISS truncated at degree 6 in eccentricities and inclinations, that is, H_6. We consider 150 stable orbital solutions with initial conditions very close to the nominal values of Gauss's dynamics and random sets of initial tangent vectors, as we do for the truncation at degree 4. Figure 9 shows the [5th, 95th] percentile range of the marginal PDF of each FT-LCE estimated from the ensemble of solutions. Apart from being somewhat larger, the asymptotic distributions of the exponents are very similar to those of H_4 shown in Fig. 1a.
Appendix C: Ensemble distributions of the quasi-integrals over time
To retrieve the long-term statistical behavior of the QIs, we consider the ensembles of 1080 numerical integrations of the dynamical models H_4 and H_6, with very close initial conditions and spanning 100 Gyr in the future, that have been presented in Ref. [7]. We also consider the similar ensembles of solutions for the simplified Hamiltonians H°4 and H°6 that we introduce in Sect. IV D. We report in Fig. 10 the time evolution of the ensemble PDFs of the low-pass filtered dimensionless QIs and dimensionless actions X_1, Ψ_3 for the different models (the cutoff frequency of the time filter is set to 1 Myr⁻¹, as in Sec. IV B). More precisely, to highlight the growth of the statistical dispersion, we consider at each time the PDF of the signed deviation from the ensemble mean, so that all the plotted distributions have a null mean. At each time, the PDF estimation takes into account only the stable orbital solutions, that is, those solutions whose running maximum of Mercury's eccentricity is smaller than 0.7 [7]. Figure 10 shows that the QIs are indeed slow quantities when compared to the LL action variables. The growth of the QI dispersion is detailed in Fig. 11, where we report the time evolution of the interquartile range (IQR) of their distributions. After a transient phase lasting about 100 Myr and characterized by the exponential separation of close trajectories, the time growth of the IQR follows a power law typical of diffusion processes.
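Eqs. (9) and (A1) are not reproduced in this excerpt; assuming Eq. (9) is the usual symplectic pairing λ_i = −λ_{2N+1−i} of the full spectrum, a sketch of a pairing-defect error estimate in the spirit of Eq. (A1) could read:

import numpy as np

def ftlce_relative_errors(spectrum):
    # Pairing-defect error estimate for FT-LCEs. `spectrum` holds all 2N
    # exponents sorted in decreasing order. This assumes the symplectic
    # pairing lambda_i = -lambda_{2N+1-i}; the exact form of Eq. (A1)
    # may differ from this reading.
    mirrored = -spectrum[::-1]                 # equals `spectrum` if the pairing is exact
    defect = np.abs(spectrum - mirrored)       # |lambda_i + lambda_{2N+1-i}|
    scale = np.maximum(np.abs(spectrum), np.abs(mirrored))
    return defect / scale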
FIG. 1. Positive FT-LCEs λ_i of the forced secular ISS from Hamiltonians H_4 (a) and H°4 [Eq. (24)] (b), and corresponding characteristic timescales λ_i⁻¹. The bands represent the [5th, 95th] percentile range of the marginal PDFs estimated from an ensemble of 150 stable orbital solutions with very close initial conditions. The lines denote the distribution medians.
FIG. 5. Time evolution over 5 Gyr of the dimensionless QIs of motion (C_inc, C_2, E) and of two representatives of the dimensionless action variables (X_1, Ψ_3) for three solutions of H_4, that is, sol. #1 (top), sol. #2 (middle), and sol. #3 (bottom). E stands for E_4. The time series are low-pass filtered with a cutoff frequency of 1 Myr⁻¹ and the mean over 5 Gyr is subtracted.
FIG. 8. (a) Positive FT-LCEs of Hamiltonian H_4 and corresponding characteristic timescales, for a single initial condition and an ensemble of 150 random sets of initial tangent vectors. The bands represent the [5th, 95th] percentile range of the marginal PDFs. The lines denote the distribution medians. (b) Medians of the relative numerical errors ε_i on the FT-LCEs λ_i, as defined in Eq. (A1), for the ensemble of 150 orbital solutions of Fig. 1a.
FIG. 9. Positive FT-LCEs λ_i of Hamiltonian H_6 and corresponding characteristic timescales λ_i⁻¹. The bands represent the [5th, 95th] percentile range of the marginal PDFs estimated from an ensemble of 150 stable orbital solutions with very close initial conditions. The lines denote the distribution medians.
FIG. 10. Time evolution over 100 Gyr of the PDF of the signed deviation from the mean of the low-pass filtered dimensionless QIs and dimensionless actions X_1, Ψ_3. Estimation from an ensemble of 1080 numerical orbital solutions for the different models (H_4, H_6, H°4, and H°6). First row: C_inc. Second row: C_2. Third row: E_4 (H_4) and E_6 (H_6). Fourth row: X_1. Fifth row: Ψ_3. The time of each curve is color coded. At each time, the estimation only takes into account stable solutions, that is, those with a running maximum of Mercury's eccentricity smaller than 0.7. The quantity E°2n is an exact integral of motion for the model H°2n and its PDF has null dispersion.
FIG. 11. Time evolution of the interquartile range (IQR) of the ensemble PDFs of the QIs shown in Fig. 10. Left: C_inc. Middle: C_2. Right: E_4 (H_4) and E_6 (H_6). The quantity E°2n is an exact integral of motion for the model H°2n and its PDF has a null IQR.
TABLE I. Summary of the different models of forced secular ISS considered in this work. Gauss's dynamics results from first-order averaging of the N-body Hamiltonian over the mean longitudes of the planets. The dynamics generated by H_2n and H̄_2n are practically equivalent and treated as such. The H°2n models are introduced and discussed in Sec. IV D.
At degree 2, one has H_2 = −ω_LL · I, where ω_LL = (g_LL, s_LL) ∈ R⁴ × R⁴ are the LL fundamental precession frequencies of the inner planet perihelia and nodes. Hamiltonian H_2n is in quasi-integrable form.
In an equivalent way, the time-dependent canonical transformation θ → θ + g_5 t 1_8, with unchanged action variables, allows us to remove the explicit time dependence in these harmonics. The quantity E_2n coincides with the transformed Hamiltonian, and the harmonics in Table II do not contribute to its time derivative; E_2n is therefore unaffected by the resonances listed in Table II.
The three symmetries associated with Table II naturally represent quasi-symmetries when considering the entire spectrum of resonances R_1. They are indeed broken at some point by weak resonances (see Sect. IV C). Quantities E_2n, C_inc, and C_2 are the corresponding QIs of motion. The persistence of the three symmetries under the 30 leading resonances is somewhat surprising. Concerning C_inc and C_2, for example, one might reasonably expect that, since the ISS has 8 d.o.f., the subspace spanned by the wave vectors of just a dozen harmonics should already have maximal dimension, destroying all possible symmetries.
FIG. 2. Time evolution over 5 Gyr of the dimensionless QIs (C_inc, C_2, E) and of two representatives of the dimensionless action variables (X_1, Ψ_3) along the nominal orbital solutions of different models. Top row: H_4 and H_6 (E stands for E_4 and E_6, respectively). Bottom row: H°4 and H°6 from Eq. (24) (E°2n is exactly conserved and not shown). The time series are low-pass filtered with a cutoff frequency of 1 Myr⁻¹ and the mean over 5 Gyr is subtracted. The variations of the QIs are enlarged in the insets. The H°2n models are introduced and discussed in Sec. IV D.
TABLE II. The 10 strongest symmetry-breaking resonances that change E_2n, C_inc, and C_2, respectively.
TABLE III. Top of ranking R_2. First 10 symmetry-breaking resonances of H_10 along the 5-Gyr nominal solution of Gauss's dynamics that change E_2n, C_inc, and C_2, respectively (see Table II for details).
TABLE IV. Top of ranking R_3. First 10 symmetry-breaking resonances of H_10 along the 5-Gyr nominal solution of Gauss's dynamics that only involve g_5 among the external modes and change E_2n, C_inc, and C_2, respectively.
This raises the question of which symmetry-breaking resonances persist if one excludes all the Fourier harmonics that involve external modes other than g_5. Therefore, we define a new ranking R_3 by extracting such resonances from ranking R_2. Table IV reports the 10 strongest resonances per broken symmetry. The difference with respect to Table III is manifest. As g_5 is the only external mode remaining, there are no resonances left that can contribute to the time evolution of E_2n. For the remaining two QIs, the only harmonics that appear in Table IV are of order 8 or higher, and this is accompanied by a significant drop in the half-width of the leading resonances. In the case of C_inc, the half-width of the uppermost resonances is now around 0.005″ yr⁻¹. One can appreciate that the activation times τ_res of the resonances do not exceed a few percent, differently from Table III. The most impressive change is, however, related to C_2: only harmonics of order 10 appear in Table IV, and the half-width of the uppermost resonances drops by two orders of magnitude. We stress that such harmonics are resonant for very short periods of time along the 5 Gyr spanned by the nominal solution of Gauss's dynamics. To retrieve the time statistics of the resonances affecting C_2, we indeed chose to repeat the computations of Ref. [19], increasing the cutoff frequency of the low-pass filter applied to the time series of the action-angle variables from (5 Myr)⁻¹ to 1 Myr⁻¹ [19, Appendices F.2 and G.5]; the filtered time series have then been resampled with a timestep of 50 kyr.
In H°4, the hierarchy of the statistical variances of the QIs consistently follows that suggested in Table IV by the very different sizes of the leading resonances. In other words, one can state:
H°2n: A strong hierarchy of statistical variances among the QIs emerges from the size of the leading symmetry-breaking resonances in Table IV and from the orbital solutions in Figs. 2, 10, and 11. One has E°2n ≺ C_2 ≺ C_inc. While E°2n is an exact non-linear integral of motion, we expect that its linear truncation E°2 = E_2 varies more than C_2 and C_inc. Therefore, we consider the ordered set of QIs of motion {C_2, C_inc, E_2}, represented by the ordered set of vectors S_QIs = {γ_2, γ_1, γ_3}.
H_2n: Since the leading resonances affecting the QIs in Table III have comparable sizes, there is no clear order of statistical variances that can be inferred. We then implement a systematic approach that orders the QIs by simply inheriting the ordering of the PCs. More precisely, we define a set of ordered vectors S′_QIs through the projections of the three last PCs onto the linear subspace generated by the QIs: S′_QIs = {proj_S_QIs(a_8), proj_S_QIs(a_7), proj_S_QIs(a_6)} [54]. As a result, the new set of QIs mirrors the hierarchical structure of the PCs. We stress that S′_QIs spans the same subspace of R⁸ as S_QIs, since the ordered QIs are just linear combinations of the original ones.
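A small numpy sketch of this projection-based ordering, assuming the QI vectors γ and the last PCs are available as matrix columns (names are hypothetical):

import numpy as np

def ordered_qi_vectors(G, A_last):
    # G: (8, 3) matrix with the QI vectors (gamma_2, gamma_1, gamma_3) as
    # columns; A_last: (8, 3) matrix with the last PCs (a_8, a_7, a_6) as
    # columns. Returns the columns proj(a_8), proj(a_7), proj(a_6).
    projector = G @ np.linalg.solve(G.T @ G, G.T)  # orthogonal projector onto span(G)
    return projector @ A_last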
19,903.8
2023-05-02T00:00:00.000
[ "Physics" ]
Towards a water quality database for raw and validated data with emphasis on structured metadata
On-line continuous monitoring of water bodies produces large quantities of high frequency data. Long-term quality control and applicability of these data require rigorous storage and documentation. To carry out these activities successfully, a database has to be built. Such a database should provide the simplicity to store and document all relevant data and should be easy to use for further data evaluation and interpretation. In this paper, a comprehensive database structure for water quality data is proposed. Its goal is to centralize the data, standardize their format, provide easy access, and, especially, document all relevant information (metadata) associated with the measurements in an efficient way. The emphasis on data documentation enables the provision of detailed information not only on the history of the measurements (e.g., where, how, when and by whom a value was measured) but also on the history of the equipment (e.g., sensor maintenance, calibration/validation history), personnel (e.g., experience), projects, sampling sites, etc. As such, the proposed database structure provides a robust and efficient tool for functional data storage and access, allowing future use of data collected at great expense.
INTRODUCTION
Automated monitoring stations and state-of-the-art instrumentation are used to continuously monitor and control water bodies over the long term, and increasingly also in real time. This on-line, continuous monitoring is used to collect data at high frequency, thus generating large sets of data (Rieger & Vanrolleghem). However, these large quantities of data are only beneficial if they are accessible, well-documented and reliable (Copp et al.). Thus, the tasks of efficient storage and quality control are crucial to their interpretation and further application. Generally, in many organizations, storage and quality checking of the collected data are done individually by the users at their work space. However, each user organizes, structures and evaluates the data in a different manner (Camhy et al.). As personnel change over time, this diversification hinders data interpretation, understanding and reproduction, leading to inconsistencies in further studies. Thus, to successfully manage these large amounts of heterogeneous data, a systematic and efficient storage system is needed (Rieger et al.). In this respect, Camhy et al. and Horsburgh et al. identified several data management challenges: the collected raw data have a highly variable format; the database has to be flexible and adaptable because it is growing continuously (monitoring programs are modified, additional variables are measured and different sensors are used); and the personnel involved in collecting and managing the data change. It is thus critical to document the collected data with all relevant metadata (data about data). Metadata are any additional information that provides more details about the data and their identification: the measured attributes, their names, units, the extent, the quality, the spatial and temporal aspects, the content, and how the value was obtained (Gray et al.; ISO). This information is essential for other potential users to understand and interpret the collected data. The issues of metadata are illustrated with an example of a one-month measurement campaign conducted at a full-scale wastewater treatment plant.
For this campaign, a number of automated sensors to measure water quality parameters (TSS, N-components, etc.) were installed. If only the measured values are stored, the data will have very limited meaning. At the very least, metadata such as the variable names and their units should be stored as well. However, even with the addition of these metadata, the relevance and application of the data set will most likely be limited to persons that were directly involved in the campaign. Subsequently, the data will either be shelved and lost, or applied unsuccessfully in a further study because too much information on the data is missing. If we want the efforts of such a measurement campaign to transcend this limited life expectancy, much more detailed metadata should be stored: the exact location where the sensors were placed, the type of sensors (and their measurement principles), their maintenance, calibration and validation history, the weather conditions during the campaign, etc. Providing a systematic structure to store all these metadata is an important challenge for effective data management. Some commercial databases to store water quality and hydrological data in a structured way are offered on the market. Nevertheless, accessing the raw data or making a modification of the metadata is sometimes limited or not possible, and can only be done through a predefined graphical user interface (GUI) (Camhy et al.). Moreover, data have to be continuously transformed to the proprietary format of the software. In addition, any modification relies on vendor support, thus placing important restraints on customized use. Also, some organizations have proposed standards to exchange environmental data, including data description, analysis and reporting, e.g., the Environmental Data Standard. Using their experience with high frequency data collection, the modelEAU research group at Université Laval in Québec City (Canada) developed a database structure to be applied to water quality data from rivers, sewer systems and water resource recovery facilities (WRRFs). The main objectives of this database are to centralize data storage from on-line measurements, laboratory analysis and data post-treatments, and to deal with the challenges presented above, especially regarding the storage of metadata. This paper presents the structure of the developed database and its application.
DATABASE DESIGN
The database structure that was designed, named datEAUbase (water database; 'eau' is water in French), offers robustness, data format uniformity, flexibility if modifications are needed, efficient storage of relevant metadata, and the possibility to comprehensively document a monitoring program. The datEAUbase has been designed to store all relevant data, i.e., the raw, filtered and validated data, laboratory measurements and corresponding metadata (see Figure 1). The storage of the raw, filtered and laboratory data in the same database has been considered essential since all of them are related, and crucial to validate the data series and assure their quality.
datEAUbase STRUCTURE
The metadata considered are presented in Figure 2 and include detailed information about the sites, the sampling points, the watershed, the parameters, the equipment used, the measurement procedure followed, the project in which the data have been collected, the purpose for which the value has been measured, the person responsible for the value and the weather conditions when the value was taken. The design presented in Figure 2 is materialized by 23 different, interrelated tables in MySQL. The overall structure of the datEAUbase is presented in Figure 3. Compared to other software, e.g., MS Access, MySQL not only offers a large capacity but, more importantly, also the possibility to work with m-to-n relationships (MS Access, for instance, only allows 1-to-n relations). An m-to-n relationship means that each row in one table can be related to multiple rows in another table and vice versa. For example, many people can be involved in one project, and one person can also be involved in several projects. The links between the tables are made through specific keys (called IDs in Figure 3) associated with each row of a table. The storage requirements for each data type included in the datEAUbase are described in Table 1.
Primary tables
The general structure is based on primary and lookup tables. The primary tables are the Metadata, Value and Comments tables presented in Figure 3 (see Table 2). To illustrate the database's structure, an example follows. In the primary tables, the information stored can be: on June 15, 2015 at 10:40:00 GMT, a value of 6.5 was measured; this value is linked to Metadata_ID 22; moreover, a comment can be added that the calibration activity was unsuccessful. Through the internal links with the lookup tables, all the metadata associated with this value can then be retrieved. Ultimately, by its specific structure, the datEAUbase not only permits rigorous documentation of all measured values but also allows a memory of the measuring campaigns to be built in a reliable way. For instance, the structure allows tracking the history of a piece of equipment, e.g., in which projects a sensor has been used or what its calibration/validation history is. It also allows one to know, e.g., who has been involved in a certain project or who has used certain equipment, which can be useful information if some experienced person is needed.
Lookup tables
The lookup tables have been divided into six different blocks, shown in Figure 3: all information about the instruments and equipment, the sampling locations, the projects, the contacts, the purposes of the measurements, and the weather.
Sampling location information
The Sampling location tables contain the information about the site and the identification of the specific sampling points. Also, some more information about urban and hydrological characteristics is included.
Project information
In the Project table, information about the project is detailed. This table is linked to other parts of the database by a number of tables containing n-to-m links. These linking tables contain information about who is working on a project, where a project takes place and which equipment is used, and, vice versa, in how many projects someone is working, for how many projects a location is used, and in how many projects a piece of equipment is used. For example, the monEAU project deals with the usefulness of automatic monitoring stations (AMS) to study water quality. The measurements are located at the inlet of Grandes-Piles F/AL. The following equipment is used: conductivity_001, pH_003 and ammolyser_001. The personnel involved are Alferes, Plana and Vanrolleghem.
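A minimal, hypothetical sketch of this primary/lookup layout (Python with sqlite3 standing in for MySQL; the real design has 23 tables, so the names and columns here are illustrative only):

import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE project (Project_ID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE contact (Contact_ID INTEGER PRIMARY KEY, Last_name TEXT);
-- m-to-n link: many people per project, many projects per person
CREATE TABLE project_has_contact (
    Project_ID INTEGER REFERENCES project(Project_ID),
    Contact_ID INTEGER REFERENCES contact(Contact_ID));
-- primary tables: every value points to one complete metadata combination
CREATE TABLE metadata (Metadata_ID INTEGER PRIMARY KEY,
    Project_ID INTEGER REFERENCES project(Project_ID),
    Parameter TEXT, Unit TEXT);
CREATE TABLE measurement_value (Value_ID INTEGER PRIMARY KEY,
    Timestamp TEXT, Value REAL,
    Metadata_ID INTEGER REFERENCES metadata(Metadata_ID));
""")
# The example from the text: a value of 6.5 on 2015-06-15 at 10:40:00 GMT,
# linked to Metadata_ID 22 (the parameter and unit are invented here).
con.execute("INSERT INTO metadata VALUES (22, NULL, 'pH', '-')")
con.execute("INSERT INTO measurement_value VALUES (1, '2015-06-15 10:40:00', 6.5, 22)")
con.commit()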
Contact information
In the Contact table, detailed information about the people involved in the different projects is stored. This information includes the first name, the last name, the affiliation together with the address of the corresponding office, and the person's function. Also, the e-mail address, the phone number, the Skype name or the LinkedIn information are stored.
Purpose of the measurement information
The Purpose table stores information about the aim of each value included in the database, i.e., on-line measurement, laboratory analysis, calibration, validation or cleaning. This is accompanied by a detailed description of the different purposes. For example, the purpose of the measurement is sensor validation: a routine sensor validation activity for verification of proper operation.
Weather information
Weather data such as daily rainfall or hourly temperatures can also be stored in the database.
datEAUbase APPLICATION
The structure and design of the datEAUbase create a comprehensive environment to store and document data alongside their relevant metadata in a robust and highly efficient way. Moreover, they ensure that each value stored in the datEAUbase is unique, being linked to a specific time stamp and a complete set of metadata. Although these features represent the core functionality of the datEAUbase, practical maintenance and use are supported through a user interface. The following important steps in the maintenance and application of the datEAUbase are facilitated through the user interface (Figure 5):
• Before measurements can be stored in the datEAUbase, their metadata need to be present in the lookup tables. The interface allows easy addition or modification of metadata (for example, adding a new sensor in an existing project).
• Different metadata_IDs have to be created in the metadata table for each new combination of metadata before the corresponding values can be stored.
• Non-automated data (such as laboratory results) can be entered in the datEAUbase through the user interface. This also consists of a simple coupling of the measured values to their corresponding metadata_ID.
• One of the main features of the interface is its application to search the database and extract a specific data set of interest, or information on sensor or project history.
• During the search process, an internal quality check is also performed. Data will only be available for extraction if all internal links are present: all metadata combinations that are present in the metadata table should also be linked internally in the lookup tables.
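Continuing the hypothetical sqlite3 sketch above, the search-and-extract step with its internal quality check might look as follows; the INNER JOIN only returns values whose Metadata_ID actually resolves, mirroring the link check described in the last bullet:

import sqlite3

def extract(con: sqlite3.Connection, parameter: str):
    # Return (timestamp, value, unit) rows for one parameter. Values whose
    # metadata link is missing are silently excluded by the join.
    return con.execute("""
        SELECT v.Timestamp, v.Value, m.Unit
        FROM measurement_value AS v
        JOIN metadata AS m ON m.Metadata_ID = v.Metadata_ID
        WHERE m.Parameter = ?
        ORDER BY v.Timestamp""", (parameter,)).fetchall()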
CONCLUSIONS
Technological advances in water quality measurement lead to the creation of large quantities of high frequency data. Without efficient storage and rigorous documentation, the life expectancy of these data is often limited to the specific project for which they were collected. Such common practices represent a significant loss of information, as well as of the expense that often goes into a measurement campaign. To maintain understanding of the collected data, track their history and secure their usefulness in further studies, documentation by metadata is crucial. This includes detailed information about the sites, the sampling points, the watershed, the parameters, the equipment used, the measurement procedure followed, the project in which the data have been collected, the purpose for which the value has been measured, the person responsible for the value and the weather conditions when the value was taken. This paper presents a comprehensive database structure (the datEAUbase) that offers a data storage system with an emphasis on metadata. It provides robust, large storage capacity with flexibility for future modifications and possible improvements. Its specific structure, consisting of a combination of three primary tables interlinked with 20 lookup tables, allows for very efficient storage of huge amounts of information while avoiding redundancy. Moreover, this rigorous documentation of all measured values with their metadata allows a reliable memory of sensor history, project history and so on to be built. Since this tool is meant for large data users to store and exchange water quality data, easy access and maintenance are ensured through a user-friendly interface.
3,189.2
2018-11-14T00:00:00.000
[ "Computer Science" ]
Systematic Cys mutagenesis of FlgI, the flagellar P-ring component of Escherichia coli The bacterial flagellar motor is embedded in the cytoplasmic membrane, and penetrates the peptidoglycan layer and the outer membrane. A ring structure of the basal body called the P ring, which is located in the peptidoglycan layer, is thought to be required for smooth rotation and to function as a bushing. In this work, we characterized 32 cysteine-substituted Escherichia coli P-ring protein FlgI variants, which were designed to substitute every 10th residue in the 346 aa mature form of FlgI. Immunoblot analysis against FlgI protein revealed that the cellular amounts of five FlgI variants were significantly decreased. Swarm assays showed that almost all of the variants had nearly wild-type function, but five variants significantly reduced the motility of the cells, and one of them in particular, FlgI G21C, completely disrupted FlgI function. The five residues that impaired motility of the cells were localized in the N terminus of FlgI. To demonstrate which residue(s) of FlgI is exposed to solvent on the surface of the protein, we examined cysteine modification by using the thiol-specific reagent methoxypolyethylene glycol 5000 maleimide, and classified the FlgI Cys variants into three groups: well-, moderately and less-labelled. Interestingly, the well- and moderately labelled residues of FlgI never overlapped with the residues known to be important for protein amount or motility. From these results and multiple alignments of amino acid sequences of various FlgI proteins, the highly conserved region in the N terminus, residues 1–120, of FlgI is speculated to play important roles in the stabilization of FlgI structure and the formation of the P ring by interacting with FlgI molecules and/or other flagellar components. INTRODUCTION Bacteria swim by the rotation of flagella in a screw-like manner. The flagellar motor is embedded in the cytoplasmic membrane and the rotational power generated by the motor is transmitted to the helical flagellar filament through the hook. The flagellar motor, which is composed of numerous proteins, is divided into two parts, the rotor and the stator that surrounds the rotor. The rotor is mainly composed of the MS ring and the C ring, which is located on the cytoplasmic side of the MS ring. The stator is composed of the MotA/MotB complex, which contains at least four molecules of MotA and two molecules of MotB (Kojima & Blair, 2004), and functions as a proton channel (Blair & Berg, 1990; Stolz & Berg, 1991). About 10 of these stator units surround the rotor, and interactions between the stator and rotor are believed to generate the driving force for rotation (Reid et al., 2006). The MotA/MotB stators assemble around the flagellar basal body, and the functional flagellar motor is established. It is thought that the peptidoglycan binding motif of MotB is involved in the assembly. A recent study has revealed that in Escherichia coli, the stator complexes incorporated into the motor are exchanged frequently with those that float on the cytoplasmic membrane near the motor (Leake et al., 2006), suggesting that the interaction between the rotor and the stator is weak and/or temporary. The P ring is one of the components of the basal body.
In Gram-negative bacteria, it assembles around a proximal part of the basal body and is thought to be attached to the peptidoglycan layer; it forms a stiff cylindrical structure to hold the central rod together with the L ring, which assembles at the LPS (outer membrane) layer (Akiba et al., 1991). The P ring is a part of the basal body, but is believed to be a non-rotating component that holds the rod as a bushing. The P ring is thought to consist of 26 copies of a single protein, FlgI (Jones et al., 1990; Sosinsky et al., 1992), which is expressed as a precursor form with a cleavable N-terminal 19 aa leader sequence and exported to the periplasmic space via the Sec apparatus (Homma et al., 1987; Jones et al., 1989), where it assembles into the P ring surrounding the rod (Kubori et al., 1992). The flagellar structure is constructed by a highly ordered process. First the MS ring is assembled at the cytoplasmic membrane as a base plate; then the C ring, the transport apparatus and the rod structure are assembled in turn. Next, the P- and L-ring structures are assembled around the rod, followed by the hook and the filament. Disruption of any flagellar component causes the assembly of the flagellar structure to arrest; a disruption in FlgI causes a motility defect because flagellar construction terminates at the rod structure. Recently, we revealed that intramolecular disulfide bond formation in FlgI is not necessary for P-ring assembly but is important to protect the protein against degradation (Hizukuri et al., 2006). Various interactions have been speculated for the P-ring protein FlgI in the flagellar basal body (Fig. 1). To expand our knowledge of the P-ring structure and to understand the spatial arrangement around the rod in the periplasmic space, we constructed and characterized a series of systematically Cys-substituted E. coli FlgI variants. Among the 32 FlgI Cys variants constructed, the protein amounts of five variants were significantly decreased, and cells carrying five of the variants showed reduced motility. We further characterized the variants using a thiol-specific reagent to investigate which residues of the protein are exposed to solvent on the protein surface. Interestingly, this work showed that the residues of FlgI that can be labelled never overlap with the residues found to be important for protein stability or motility.
METHODS
Bacterial strains, growth conditions, and media. The E. coli strains used in this work are listed in Table 1. To delete the cat gene cassette of the ΔflgI::cat strain YZ1 (Hizukuri et al., 2006), we used the method described by Datsenko & Wanner (2000), and the constructed strain was named YZ11. The ΔmotAB::cat strain YS5 was kindly provided by Yoshiyuki Sowa (Oxford University). To construct a ΔflgI ΔmotAB::cat triple deletion strain, the ΔmotAB::cat region of YS5 was transferred into strain YZ11 by using P1 phage (Silhavy et al., 1984), and the resultant strain was named YZ12-1. E. coli cells were cultured at 37 °C in LB medium (1% Bacto tryptone, 0.5% yeast extract, 0.5% NaCl) or at 30 °C in TG medium (1% Bacto tryptone, 0.5% NaCl, 0.5%, w/v, glycerol). When necessary, ampicillin and kanamycin were added to a final concentration of 50 µg ml⁻¹.
Construction of plasmids. Routine DNA manipulations were carried out according to standard procedures (Sambrook et al., 1989). The plasmids used in this work are listed in Table 1. To construct the pYZ301 plasmid, a 1.4 kb KpnI-SphI fragment containing the E. coli flgI gene was cut out of pYZ201 (Hizukuri et al., 2006) and inserted into the corresponding sites of the vector pSU38.
To obtain a series of plasmids expressing FlgI Cys variants, we performed site-directed mutagenesis on pYZ301, by which the full length of the plasmid DNA was amplified with PfuUltra High-Fidelity DNA Polymerase (Stratagene) using a pair of complementary primers carrying a mutagenized codon, and then digested by DpnI. To construct the pJN726 plasmid, a 2.4 kb SalI-SalI fragment containing the motAB genes was cut out of pYA6022 (Asai et al., 2003) and inserted into the vector pBAD24.
Motility assays. Swarming motility was assayed as follows. Overnight culture (2 µl, grown in LB medium at 37 °C) was dropped on a soft agar T broth plate (1% Bacto tryptone, 0.5% NaCl, 0.27% Bacto agar) containing 50 µg ml⁻¹ each of ampicillin and kanamycin and 0.04% L-arabinose. If necessary, 5 mM DTT was added to the agar plates. The plates were incubated at 30 °C for the time indicated for each experiment. The relative swarm size of each FlgI Cys mutant was calculated by normalizing to the diameter of the swarm ring of the wild-type FlgI-expressing cells after subtracting the diameter of the swarm ring of the vector-containing cells.
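That normalization is simple arithmetic; a small sketch with hypothetical ring diameters makes it concrete:

def relative_swarm_rate(d_mutant, d_wild_type, d_vector):
    # Relative swarm size as described above: the vector-only control
    # diameter is subtracted before normalizing to the wild-type ring
    return (d_mutant - d_vector) / (d_wild_type - d_vector)

# Hypothetical diameters in mm: a mutant ring of 18 mm with the wild type
# at 30 mm and the vector control at 5 mm gives (18 - 5) / (30 - 5) = 0.52
print(relative_swarm_rate(18.0, 30.0, 5.0))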
In the future we plan to investigate the interaction between the P ring and the MotA/MotB stator complex; however, we focused our current investigation on the P-ring structure and its function. We systematically constructed a series of FlgI Cys variants. We designed cysteine substitutions every 10th residue in the mature form of FlgI (the numbers correspond to the positions of the amino acid residues in the mature form of FlgI). Two additional mutants were also designed in which Ile 3 and Ile 346 were substituted with cysteine (Fig. 6). FlgI protein has two native cysteine residues, at positions 254 and 338, and they remained intact in our Cys-substituted variants. From the 37 candidates designed, we obtained 32 FlgI Cys variants, but were unable to generate D41C, Q61C, A131C, Q221C or L261C. We examined the protein amounts of the FlgI Cys variants by immunoblot analysis using anti-FlgI antibodies (Fig. 2). Most of the FlgI variants showed slightly decreased amounts of product compared to wild-type FlgI. However, the protein amounts of five variants, I3C, D111C, I181C, G241C and L251C, were significantly decreased compared to the other variants ( Fig. 2, b-ME +, filled triangles). We have reported that FlgI C254A, FlgI C338A and the double mutant seem to be more susceptible to degradation (Hizukuri et al., 2006), so we speculate that FlgI Gly 241 and Leu 251 (which is located near Cys 254 ) affect the susceptibility of the protein to degradation when replaced with Cys. In FlgI E1C, an additional band that was larger (by~2 kDa) than the estimated monomer band was detected. The larger protein may be a precursor form (38 kDa) of the mature form of FlgI (36 kDa) because replacement of the residue next to a signal cleavage site probably affects the cleavage efficiency. In the absence of the reductant b-mercaptoethanol, disulfide cross-linked products were detected in most of the FlgI Cys variants (Fig. 2, b-ME -). The apparent molecular masses of the cross-linked products showed a wave-like pattern with a peak at residues~160-190. The variants with Cys replacements at the N-or C-terminal regions had the approximate estimated size of dimers (72 kDa). On the other hand, replacements near the central region caused decreased mobility of the bands, probably because the cross-linked dimers at the middle positions formed aberrant shapes. It is worth noting that FlgI Y191C showed a large number of cross-linked products, whereas FlgI N171C and FlgI A321C showed almost no cross-linked products. Effects of the FlgI Cys variants on motility Defects in FlgI cause a failure of P-ring assembly and result in the termination of flagellar formation after rod construction. To assess the effects of the Cys replacements in FlgI on P-ring assembly, we first examined swarming ability in soft agar plates. We measured swarm ring sizes after sufficient incubation and calculated the relative swarm rates for each FlgI Cys mutant against that of wild-type FlgI (Fig. 3). Most of the FlgI Cys mutants retained swarming ability, but five mutants, FlgI I3C, G21C, G51C, G81C and D111C, had significantly decreased swarm rates (see also Fig. 4a, upper panel). When we observed swimming ability in liquid medium using dark-field microscopy, cells carrying each of these five mutations except G21C were motile, although the fractions of motile cells were extremely low (,5 %); cells carrying the G21C mutation completely lost swimming ability. 
When we observed the cells by electron microscopy, the mutant cells expressing FlgI G21C were shown to be completely non-flagellate (data not shown), implying that the Gly 21 residue of FlgI is critical for P-ring assembly.
We investigated the effects of reducing agents such as DTT on the motility of the weakly motile mutants FlgI I3C, G51C, G81C and D111C, and of the non-motile mutant G21C (Fig. 4). In the absence of DTT, these five mutants showed poor or no motility, as described above. On the other hand, when 5 mM DTT was added to the motility agar, the swarm ring size of the wild-type cells was slightly decreased, but the four weakly motile mutants had significantly restored swarming abilities. The non-motile mutant G21C remained completely non-motile. These results suggest that the motility defects of the four weakly motile mutants were caused by the formation of an incorrect disulfide bond by the replacement Cys, either within FlgI itself or with another Cys-containing protein(s).
Thiol modification of the FlgI Cys variants by mPEG-maleimide
To obtain structural information about FlgI, we examined the cysteine modification of the FlgI Cys variants using mPEG-maleimide, which is a membrane-impermeable thiol-specific reagent that carries an attached polyethylene glycol and has a high molecular mass of ~5000 Da (Akiyama et al., 2004). We detected the reaction by mobility shifts of the bands on SDS-PAGE gels (Fig. 5a). The band of wild-type FlgI did not shift in mobility, suggesting that the two native Cys residues of FlgI were not accessed by mPEG-maleimide, probably because they formed an intramolecular disulfide bond. On the other hand, some of the FlgI Cys variants showed a ~5 kDa shift of the monomer band (36 kDa), indicating that the additional Cys residue was accessible to mPEG-maleimide, which means that the additional Cys residues are likely to be exposed to solvent on the surface of the protein. To analyse accessibility further, we quantified the labelling efficiencies of each FlgI Cys variant, which are given relative to the total amount of FlgI present (Fig. 5b). Based on this analysis, we could classify these variants into three groups: the high labelling efficiency group (>30%, open rectangles), FlgI G11C, G161C, Y191C and S211C; the moderate labelling efficiency group (>15%, closed rectangles), FlgI D31C, T71C, T101C, N121C, Q301C, N311C and Q331C; and the low labelling efficiency and non-labelling group (<15%), which contains the other mutants.
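The three-group classification above reduces to two thresholds; a trivial sketch (the example efficiencies are hypothetical, not measured values from Fig. 5b):

def labelling_group(efficiency_percent):
    # Thresholds as given in the text: >30% high, >15% moderate, else low
    if efficiency_percent > 30:
        return "high"
    if efficiency_percent > 15:
        return "moderate"
    return "low/non-labelled"

assert labelling_group(34.0) == "high"          # e.g. a G161C-like variant
assert labelling_group(20.0) == "moderate"
assert labelling_group(5.0) == "low/non-labelled"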
From this alignment analysis and our experimental results, the N-terminal highly conserved region of FlgI is suggested to play important roles in various functions, such as maintenance of the structure of FlgI and formation of an interface with other flagellar proteins or with itself.

DISCUSSION

In this study, we characterized a series of FlgI Cys-substituted mutants with respect to protein amount, motility of the cells, and modification by a thiol-specific reagent. The results of this work are summarized in Fig. 6. Among the mutations, the well- or moderately labelled residues (rectangles) never overlapped with the residues that affected protein amount (triangles) or the motility of cells (circles). This may suggest that the residues important for protein folding or P-ring assembly are not exposed to solvent on the surface of the protein. This seems reasonable, because residues important for protein folding or assembly are likely to be located inside the protein (i.e. forming the core) or at a protein-protein interface. FlgI is secreted to the periplasmic space and then assembled around the rod of the flagellar basal body to form the P-ring structure. FlgI interacts with other FlgI molecules to form the P ring, with the L-ring protein that is located above the P ring, with rod proteins, with peptidoglycan (PG), and possibly with the MotA/MotB stator complex or other components in the periplasmic space. Speculative interactions of FlgI are illustrated in Fig. 1.

Fig. 6. Profiles of the FlgI Cys mutants. The grey bar at the N terminus indicates the leader peptide, which is cleaved when the protein is exported to the periplasm. The white vertical bands indicate the residues substituted with Cys in this study, while the grey vertical bands indicate the substitutions that we were unable to generate. C 254 and C 338 are shown above to indicate the positions of the native cysteine residues in FlgI. Profiles are shown under the name of each variant: Amount, the residues for which the protein amount was decreased when substituted with Cys; Motility, the residues that caused decreased (filled circles) or completely disrupted (open circles) motility of the cells; Modified, the residues that were well (open rectangles) or moderately (filled rectangles) labelled by mPEG-maleimide. The residues well or moderately labelled by mPEG-maleimide (rectangles) never overlap with the residues that affect protein amount (triangles) or the motility of cells (circles).

FlgI is predicted to form strong interactions with FlgI itself or with FlgH, the L-ring protein. In Salmonella enterica serovar Typhimurium, the P and L rings form an extremely stiff cylindrical structure: only the L-P ring complex remains after 7.5 M urea treatment, during which all of the other flagellar components dissociate (Akiba et al., 1991). Based on this evidence, it has been inferred that the FlgI-FlgI and FlgI-FlgH interactions are probably very strong. FlgI may also interact with FlgG, a distal rod protein. In the process of P-ring assembly, secreted FlgI is predicted to recognize the rod structure in the periplasm and associate with the rod. The rod is part of the rotor structure, but the P ring is believed to be part of the stator that supports the rod as a bushing and allows the rotor to run smoothly. Therefore, it has been predicted that the FlgI-FlgG interaction is temporary and/or very weak. The P ring is located in the peptidoglycan layer (hence the name P ring), and it may interact with the peptidoglycan layer.
Considering the role of the P ring, it is very likely that the P ring is fixed in the peptidoglycan layer to stabilize rotation of the motor. The highly conserved region in the N terminus of FlgI, residues 1-120, is suggested to play an important role, such as stabilizing the structure of FlgI or forming an interface with other FlgI molecules or other flagellar components, e.g. FlgH. The role of the conserved region is not known, but it may be important for maintaining FlgI structure, because this region contains many Gly and Pro residues. The G161C variant was the most accessible to the cysteine modification reagent. When hook basal bodies (HBBs) isolated from cells expressing FlgI G161C were treated with mPEG-maleimide, FlgI was labelled and showed band shifting (data not shown). In addition, FlgI Y191C showed numerous cross-linked products compared to other variants (Fig. 2). These results may suggest that the central region of FlgI is exposed on the outer surface of the P ring. In our previous work, we reported that replacement of the native Cys residues of FlgI (Cys 254 and/or Cys 338) with Ala has little effect on motility but results in a significantly decreased amount of protein (Hizukuri et al., 2006). We concluded that the intramolecular disulfide bond formed between Cys 254 and Cys 338 is required to prevent degradation of the protein. Here, we showed that the amount of FlgI protein is decreased for FlgI G241C and L251C, but is not changed in FlgI Q331C or A341C. The FlgI C254A mutation has a more severe effect on both flagellar motility and protein amount than the FlgI C338A mutation (Hizukuri et al., 2006). These results may suggest that the amino acids around Cys 254 are more important for protein folding or protection against degradation than those around Cys 338. Recently, a novel structure, named the T ring, was discovered in the flagellar basal body of Vibrio alginolyticus (Terashima et al., 2006). The T ring is located on the periplasmic side of the P ring and is composed of MotX and MotY, which are essential proteins for motor function. The T ring is proposed to interact with the PomA/PomB stator complex, which is homologous to the MotA/MotB complex, through an interaction between MotX and PomB. The T ring has an important role in the incorporation and stabilization of the stator (Okabe et al., 2005; Terashima et al., 2006). E. coli and Salmonella species do not have a T ring or protein homologues of MotX or MotY. We think that the P ring of E. coli might have a role similar to that of the T ring in the incorporation or stabilization of the MotA/MotB stator in the motor. The C-terminal peptidoglycan-binding (PGB) motif of MotB is believed to anchor to the peptidoglycan layer via the central flexible linker region of MotB and to stabilize the stator complex during rotation. The P ring is also located in the peptidoglycan layer; thus, the stator complex may be associated with the P ring via the PGB motif of MotB when it assembles and functions around the motor. In future studies, we will assess the possible interactions between the P ring and MotB.
Shallow defects and variable photoluminescence decay times up to 280 µs in triple-cation perovskites

Quantifying recombination in halide perovskites is a crucial prerequisite to control and improve the performance of perovskite-based solar cells. While both steady-state and transient photoluminescence are frequently used to assess recombination in perovskite absorbers, quantitative analyses within a consistent model are seldom reported. We use transient photoluminescence measurements with a large dynamic range of more than ten orders of magnitude on triple-cation perovskite films showing long-lived photoluminescence transients featuring continuously changing decay times that range from tens of nanoseconds to hundreds of microseconds. We quantitatively explain both the transient and steady-state photoluminescence with the presence of a high density of shallow defects and consequent high rates of charge carrier trapping, thereby showing that deep defects do not affect the recombination dynamics. The complex carrier kinetics caused by emission and recombination processes via shallow defects imply that the reporting of only single lifetime values, as is routinely done in the literature, is meaningless for such materials. We show that the features indicative of shallow defects seen in the bare films remain dominant in finished devices and are therefore also crucial to understanding the performance of perovskite solar cells.
Non-radiative recombination via defects is one of the most important loss processes in most photovoltaic technologies 1,2. Thus, a considerable amount of photovoltaic research has been dedicated to suppressing non-radiative recombination as well as characterizing and quantifying its extent 3,4. This is especially true for emerging photovoltaic technologies such as halide perovskites that mostly rely on solution-processed polycrystalline thin films. In lead halide perovskites, non-radiative recombination is much less of a problem compared to other polycrystalline materials used for photovoltaic applications 5,6. The common explanation is that most intrinsic defects are either shallow or unlikely to form 7. Interestingly, the experimental community has so far worked under the paradigm that deep defects dominate recombination, whereas shallow defects are mostly considered irrelevant 8,9. The only shallow defects considered crucial for device functionality are mobile ions causing field screening and hysteresis in the current-voltage curve 10,11.

Identifying the properties of the defects dominating non-radiative recombination is important for various reasons. Depending on the dominant defect species, different material optimization strategies and characterization approaches are needed. For deep traps, both transient photoluminescence (PL) and PL quantum yields are viable methods to quantify recombination, and the information content of both quantities is basically identical 12. In the presence of deep traps, transient PL measurements lead to monoexponential decays at sufficiently low injection conditions, from which charge carrier lifetimes can be extracted. Those lifetimes must then be consistent with the PL quantum yields obtained from steady-state PL measurements and consequently correlate with the voltage difference E_g/q - V_oc, where E_g, q and V_oc are the bandgap energy, elementary charge and open-circuit voltage, respectively. Figure 1a shows this correlation for a range of perovskite studies 13,14, where the voltage difference was calculated from the solar cell data while the lifetime was obtained from film measurements. While many data points seem to show a correlation between lifetime and voltage difference, others (especially the data points (stars) labelled 'this work' and 'ref. 15') feature decay times that seem too long for the associated value of E_g/q - V_oc. This raises the question of whether transient PL decay times are always a valid method to quantify recombination and voltage losses in halide perovskites. Moreover, the finding raises doubts regarding the implicit assumption that deep defects dominate recombination losses and transient PL decay.
Here we show that typical triple-cation perovskite layers, layer stacks and solar cells are strongly affected by shallow defects that manifest themselves in steady-state and transient PL data. The most convincing evidence for shallow defects is the presence of an extremely long-lived PL signal in time-resolved PL (tr-PL) measurements with a high dynamic range of greater than ten orders of magnitude. Combined with the observation of PL quantum yields in the range of 2% (that is, much smaller than unity) in films, we can rule out radiative band-to-band recombination as the reason for the long-lived decay. Consistent with the dominant influence of shallow defects, the decay approximately follows a power law (PL flux ϕ ∝ t^-α, where t is time and α is a constant) over the investigated time range and never saturates to an exponential decay (Supplementary Fig. 5). The high dynamic range is important to disentangle different mechanisms in large-signal transients 16 and is not typically found in the literature on halide perovskites (Fig. 1b). The differential decay times exceed 100 µs at the end of the decay, which implies that these decays may be the longest measured so far in halide perovskites or any other direct semiconductor considered for photovoltaic applications 6. The high dynamic range combined with the power-law nature of the decay allows us to observe differential decay times that vary over four orders of magnitude, spanning tens of nanoseconds at the beginning of a decay up to hundreds of microseconds. This implies that the heavily used concept of a single 'lifetime' of charge carriers in halide perovskites can be highly misleading and may have to be replaced by effective recombination coefficients. We note that decay times exceeding tens of microseconds are observed also in films with one or two charge-extracting layers and in full devices. This finding implies that the shallow traps and long lifetimes are consistent with efficient charge extraction in solar cells with fill factors exceeding 80%. The absence of any type of saturation of the decay time to a constant value shows that (1) the halide perovskite films are extremely intrinsic 17 and (2) the Shockley-Read-Hall (SRH) lifetime for recombination via deep defects must be extremely long (in the range of hundreds of microseconds; Supplementary Fig. 7). The latter finding implies a change of the dominant paradigm that reduction of deep defects is crucial for further efficiency improvements. Instead, we show that a high density of shallow defects dominates recombination and limits device performance.

Fig. 1 | Meta-analysis of reported energy loss and decay time in publications. a, The voltage difference E_g/q - V_oc as a function of decay time of perovskite films. The lines indicate the relationship between carrier lifetime and energy loss 13,14, calculated based on a step-function absorptance by taking p_0 = 0, p_a = 0, p_e = 0.05, G_ext = 5.3 × 10^21 cm^-3 s^-1 and V_oc^SQ = 1.297 V (corresponding to a bandgap of 1.57 eV), where p_0 is the equilibrium carrier concentration, p_a is the parasitic absorption probability, p_e is the emission probability, G_ext is the generation rate of electron-hole pairs due to external illumination and V_oc^SQ is the open-circuit voltage in the Shockley-Queisser (SQ) model. Additionally, k_rad is the radiative recombination coefficient. b, Meta-analysis of the dynamic range of tr-PL decay curves in publications. The colours represent the fitting methods of the tr-PL decay curves. More information is in Supplementary Note 5.
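The notion of a continuously changing differential decay time can be made concrete with a short numerical sketch. This is illustrative only: it assumes an idealized power-law transient ϕ ∝ t^-2 and defines τ_diff = -(d ln ϕ/dt)^-1, a common convention for large-signal transients (the paper's exact definition is given in its Supplementary Note 1).

```python
import numpy as np

# Idealized power-law PL transient phi(t) ~ t^-2, as expected for purely
# bimolecular (or shallow-trap-mediated) decay in an intrinsic film.
t = np.logspace(-8, -3, 500)           # 10 ns to 1 ms
phi = t**-2.0                          # arbitrary units

# Differential decay time: tau_diff = -(d ln(phi)/dt)^-1.
# For phi ~ t^-alpha this gives tau_diff = t/alpha, so the "lifetime"
# grows with time and never settles to a single value.
dlnphi_dt = np.gradient(np.log(phi), t)
tau_diff = -1.0 / dlnphi_dt

print(f"tau_diff at t = 10 ns : {tau_diff[0] * 1e9:6.1f} ns")   # ~5 ns
print(f"tau_diff at t = 1 ms  : {tau_diff[-1] * 1e6:6.1f} us")  # ~500 us
```

For ϕ ∝ t^-α the result is τ_diff = t/α, which is why any single reported 'lifetime' depends entirely on the time window (and repetition rate) of the measurement.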
Defect-mediated recombination

Recombination via defects is the most relevant recombination mechanism for thin-film photovoltaics, as it reduces the open-circuit voltage of solar cells and often also the fill factor and the short-circuit current. The SRH model is used to identify non-radiative recombination and estimate its effect on device performance. The SRH recombination rate for one species of singly charged defects is given by 18,19

R_SRH = (np - n_i^2) / [(n + n_1)τ_p + (p + p_1)τ_n],   (1)

where n, p and n_i represent electron, hole and intrinsic carrier concentrations, and τ_p and τ_n are the SRH lifetimes for holes and electrons. Here n_1 and p_1 are in the unit 'per cubic centimetre' and are given by

n_1 = N_C exp[-(E_C - E_T)/(k_B T)],   p_1 = N_V exp[-(E_T - E_V)/(k_B T)],

which use further variables such as the effective density of states for the conduction and valence bands (N_C and N_V, respectively) and the energy of the trap (E_T), conduction band edge (E_C) and valence band edge (E_V), as well as the Boltzmann constant (k_B) and temperature (T). Given that lead halide perovskites behave like intrinsic semiconductors 17, the equation is typically simplified using the two assumptions n = p and n ≫ n_i. Furthermore, n_1 and p_1 are typically considered negligible relative to n and p, implying that detrapping is neglected, which is typically a good approximation for a deep trap.
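Before applying these simplifications, equation (1) can be explored numerically. The following is a minimal sketch with illustrative parameter values (not the fitted values of this work) showing how the effective reaction order slides between the limits discussed next:

```python
import numpy as np

def srh_rate(n, p, n1=1e15, p1=1e5, tau_n=1e-6, tau_p=1e-6, ni=1e6):
    """Full SRH rate for one singly charged defect level (cm^-3 s^-1).

    n1 and p1 encode the trap depth via n1 = Nc*exp(-(Ec-Et)/kT) etc.;
    a shallow trap near the conduction band has large n1 and tiny p1.
    All parameter values here are arbitrary illustrations.
    """
    return (n * p - ni**2) / ((n + n1) * tau_p + (p + p1) * tau_n)

# Intrinsic absorber: n = p. Scan the carrier density across n1.
for n in (1e13, 1e15, 1e17):
    R = srh_rate(n, n)
    # Effective reaction order delta from d(ln R)/d(ln n):
    delta = (np.log(srh_rate(n * 1.01, n * 1.01)) - np.log(R)) / np.log(1.01)
    print(f"n = {n:8.0e} cm^-3  R = {R:9.3e}  order ~ {delta:4.2f}")
```

With these numbers the order is close to 2 well below n_1 and close to 1 well above it, passing through non-integer values in between, which is exactly the regime described below.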
These simplifications lead to R_SRH = n/(τ_p + τ_n), that is, to the situation in which SRH recombination is often considered synonymous with first-order recombination, which has a rate that is linear in n (and p), that is, R_SRH ∝ n^δ, where δ = 1 is the reaction order. However, this is only a special case, where the trap is between the quasi-Fermi levels under operation 20. If a trap dominating recombination is close to either the conduction or the valence band edge, one of the two voltage-independent terms n_1 or p_1 will become comparable to n and p, and hence affect the recombination rate. Without loss of generality, we assume that we have a defect close to the conduction band, implying that n_1 ≫ p_1. In this case, the rate R_SRH = n^2/[(n + n_1)τ_p + nτ_n] can scale linearly with n (for the case of n_1 ≪ n); it may scale quadratically with n (for n_1 ≫ n); or it may have a non-integer recombination order if n and n_1 are similar in magnitude. Thus, depending on the trap level and the quasi-Fermi levels, SRH recombination may lead to 1 < δ < 2, but in consequence, the ideality factor n_id will assume non-integer values over a wide range of Fermi levels, that is, 1 < n_id < 2. Equation (1) describes SRH recombination in a steady-state situation relevant for explaining the current-voltage curve, the open-circuit voltage and the steady-state PL. For a transient experiment, the SRH formalism becomes a set of coupled rate equations that can be solved numerically (Supplementary Note 6). Analytical solutions are possible 15,16 but are commonly used only in the absence of detrapping. A perovskite film on glass whose recombination is dominated by a deep trap will exhibit a monoexponential PL decay at sufficiently low excitation conditions, where radiative recombination can be disregarded. In the presence of shallow traps, however, the decay will have additional features related to detrapping and changes in trap occupation due to the movement of the quasi-Fermi levels relative to the trap position during the transient process. Note that increased apparent lifetimes caused by detrapping have previously been reported for multicrystalline Si wafers 21,22 and for kesterite solar cells 23.

PL experiments

To investigate the nature of defects, we prepared Cs0.05FA0.73MA0.22PbI2.56Br0.44 triple-cation perovskite films post-treated with n-octylammonium iodide (OAI), with a bandgap of ∼1.63 eV, and subsequently fabricated ITO/Me-4PACz/perovskite/C60/BCP/Ag inverted solar cells with non-radiative recombination losses as low as ∼100 mV (ITO, indium tin oxide; Me-4PACz, [4-(3,6-dimethyl-9H-carbazol-9-yl)butyl]phosphonic acid; BCP, bathocuproine). To quantitatively analyse non-radiative recombination, we measured tr-PL decays as a function of light intensity. Figure 2a shows that the initial PL flux ϕ(t = 0) of transient PL measurements scales with the square of the laser power and hence with n^2, suggesting that the electron and hole concentrations are identical just after the pulse (that is, before recombination could have happened) over a range of pulse intensities. Thus, the perovskite film cannot have either a high hole or a high electron density in the dark, as otherwise the initial amplitude should have scaled linearly with laser power, as seen for instance for Sn-based perovskites 17. Thus, we can treat the OAI-modified film as well as an unmodified control (Supplementary Fig.
12) as intrinsic semiconductors, a finding that is consistent with reports on similar triple-cation perovskites 24.

Furthermore, we measured tr-PL decays with two different methods (single-photon counting and a gated CCD (charge-coupled device) camera) over approximately ten orders of magnitude in dynamic range (Fig. 2b). In Fig. 2b, the normalized decay data have been transformed to differential decay time versus Fermi-level splitting. The details of the transformation and the figure before transformation can be found in Supplementary Fig. 4. Figure 2b shows that the tr-PL data obtained using the gated CCD camera feature a constantly changing decay time that varies from tens of nanoseconds at high Fermi-level splitting (beginning of the decay) to >100 µs at a Fermi-level splitting of ∼1 V. The covered range of ∼500 meV in Fermi-level splitting corresponds to exp(500 meV/k_B T) ≈ 9.5 orders of magnitude dynamic range (k_B, Boltzmann's constant). Increasing the laser intensity can enhance the signal-to-noise ratio, which can further increase the dynamic range to over ten orders of magnitude. Then a detectable decay time of >280 µs was observed (Supplementary Fig. 7). The decay time (τ) is nearly continuously changing with a constant slope, where τ ∝ exp(-ΔE_F/(θk_B T)), 2 ≤ θ ≤ 3, and ΔE_F is the Fermi-level splitting. This implies that the decay is approximately consistent with a power law of the type ϕ ∝ t^-2, as expected (Supplementary Fig. 5) for radiative recombination, or shallow defects, in an intrinsic semiconductor. An alternative to the determination of a (constantly changing) decay time is therefore the determination of a differential recombination coefficient k_diff (details in Supplementary Note 1), which would be constant for a recombination quadratic in free carrier density. This would enable the description of the recombination dynamics by a single constant parameter in the case of the absence of deep defects. Such a recombination coefficient has been frequently used in the organic solar cell community 25,26 but is so far uncommon in the description of halide perovskites.

We also note that the single-photon counting data partly overlap with the data from the gated CCD but have additional features. Due to the repetition rate limitation of the single-photon counting method, the decay times shown using the gated CCD are impossible to measure in the absence of a measurement system that can handle repetition rates below a few kilohertz (Supplementary Fig. 4). In addition, we demonstrate that exponential fitting is unable to reliably extract PL decay times (Supplementary Table 1). Finally, we measured the steady-state PL to determine the ideality factor of the films and obtained a non-integer ideality factor of n_id ≈ 1.2 (Fig. 2c).

To verify that the data are consistent with shallow traps but inconsistent with recombination being limited by deep traps, we use a rate-equation model to simulate both the tr-PL and the steady-state PL data (equations are shown in Supplementary Note 6). The solid lines shown in Fig. 2b,c represent fits to the data, with the parameters shown in Supplementary Table 2. We use three shallow defects to fit the data, and they have a distance to the nearest band of about 55, 95 and 125 meV. Furthermore, we know from Fig. 2a that the defects cannot dope the layer, that is, they have to be acceptor-like defects close to the conduction band or donor-like defects close to the valence band (as visualized in Supplementary Fig.
17). We note that defect positions closer to mid-gap would lead to substantially different shapes of the tr-PL (Supplementary Fig. 20). Furthermore, the only way to consistently explain decay times of hundreds of microseconds in combination with steady-state PL values that are much lower than the radiative limit is to invoke the presence of shallow traps that release charge carriers at longer times, thereby leading to a delayed luminescence, a power-law decay and, in consequence, extremely long decay times towards the end of the decay. Detrapping effects can cause the peculiar situation of PL decay times that increase with increasing shallow defect density, which is the opposite of the trend observed for deep defects (Supplementary Fig. 23).

Influence of charge-extracting layers

Figure 3a shows the ΔE_F of layer-stack samples acquired from steady-state PL measurements (Supplementary Fig. 30). The ITO/Me-4PACz/perovskite sample shows the highest ΔE_F value. However, interfacing the perovskite layer with C60 substantially lowers ΔE_F, possibly by introducing additional interfacial defects, as suggested by studies 27,28 and consistent with previous reports on steady-state 29 and transient PL 15,30,31. OAI modification can effectively passivate the perovskite/C60 interface defects, as samples with the perovskite/C60 interface show a stronger enhancement in ΔE_F after OAI modification than stacks without C60. Figure 3b,c shows the time-dependent tr-PL decay curves and the differential decay time τ_diff as a function of the ΔE_F of different layer stacks. The decay times of different stacks are remarkably similar. While interfaces between perovskite and C60 reduce ΔE_F values, interfaces with only Me-4PACz show an increase in ΔE_F. This indicates that film growth on Me-4PACz improves the bulk properties and suggests that the Me-4PACz/perovskite interface is electronically rather benign. This also contributes to the Me-4PACz/perovskite samples showing the longest τ_diff at high ΔE_F values. While the general shape of the τ_diff versus ΔE_F curves is similar, the samples with charge-extracting interfaces (either the electron transport layer, ETL, or the hole transport layer, HTL) show a somewhat lower slope at intermediate values of ΔE_F (highlighted by the plateau in the figure). Possible reasons for this feature are Coulomb effects that have previously been shown to lead to S-shaped decay time versus ΔE_F curves 15,32.

Device characteristics

Finally, inverted solar cells were fabricated based on the films. Figure 4a shows that the V_oc of the device increases from 1.114 to 1.214 V after OAI modification, resulting in an efficiency increase from 19.9% to 21.4%. Figure 4b shows the statistical open-circuit voltage data; the non-radiative voltage loss is V_oc^rad - V_oc ≈ 100 mV (current density versus voltage (J-V) curves in Supplementary Fig. 36), whereby the open-circuit voltage in the radiative limit is given by V_oc^rad = 1.332 V. The horizontal lines represent the values of V_oc expected for different PL quantum efficiency (Q_e^lum) values. The results show that Q_e^lum increases from ∼0.04% to ∼2% as a function of OAI concentration. Although this value is lower than the record value of >5% (refs. 33,34), it is still higher than most triple-cation-perovskite-based inverted devices (Supplementary Fig. 37). More details about device performance as well as the band alignment between absorber and contact layers can be found in Supplementary Note 4.
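The Q_e^lum lines in Fig. 4b follow from the standard reciprocity relation V_oc = V_oc^rad + (k_B T/q) ln Q_e^lum. A quick check with the numbers quoted above (V_oc^rad = 1.332 V; Q_e^lum ≈ 0.04% and ≈2%) reproduces the measured voltages reasonably well; the room-temperature thermal voltage used here is our assumption.

```python
import math

kT_q = 0.02569           # thermal voltage at ~298 K (V), assumed
V_oc_rad = 1.332         # radiative V_oc limit quoted in the text (V)

for label, Q in (("control (~0.04%)", 4e-4), ("OAI-modified (~2%)", 2e-2)):
    V_oc = V_oc_rad + kT_q * math.log(Q)   # reciprocity relation
    print(f"{label:20s} predicted V_oc = {V_oc:.3f} V")

# control:      1.332 - 0.201 = 1.131 V  (measured: 1.114 V)
# OAI-modified: 1.332 - 0.100 = 1.232 V  (measured: 1.214 V)
```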
Figure 4c shows the intensity-dependent open-circuit voltages of full devices, from which we derive ideality factors of around 1.4 (control) to 1.5 (OAI-modified device). These ideality factors are considerably lower than 2, as would be expected from an intrinsic absorber layer dominated by a deep defect, and are therefore consistent with the assumption that the existence of shallow traps still dominates the behaviour in the final cell. This observation is further corroborated by the behaviour of the decay times from tr-PL, shown in Fig. 4d. The decay times show a rather similar behaviour to the films and layer stacks shown in Fig. 3c. At high ΔE_F (for example, 1.3-1.5 eV), the high carrier concentration results in strong radiative recombination, leading to the fast variation of τ_diff. In the intermediate region (for example, 1.05-1.3 eV), the decay follows a roughly constant slope of approximately exp(-ΔE_F/6k_B T), which is less steep than for the films and layer stacks. At low ΔE_F (for example, <1.05 eV), τ_diff sharply increases again. One possible reason could be the capacitance effect caused by the electrodes 15. The single-photon counting data cannot reflect the real variation of τ_diff in the low ΔE_F region because of the limitation of the repetition rate.

Outlook

We show that typical triple-cation perovskite layers, layer stacks and solar cells are strongly affected by shallow defects that manifest themselves in steady-state and transient PL data. Detrapping from such shallow traps then leads to extremely long decay times of hundreds of microseconds that can only be measured using a technique with an extremely low repetition rate. These shallow traps are less problematic for device performance than deeper traps with given SRH lifetimes of τ_n and τ_p but still dominate the steady-state properties. Furthermore, the signatures of shallow traps in transient and steady-state experiments are difficult to distinguish from radiative recombination, which may have contributed to the wide spread of reported values for the radiative recombination coefficient in lead halide perovskites [35][36][37][38][39][40] as well as the frequent reports on non-radiative contributions to the quadratic recombination coefficient 9,12,38,39,41. Furthermore, the work highlights that the often-used approximations of the SRH recombination rate must be applied with caution and should not be considered as the default recombination model. The work also shows that the absolute value of the PL decay time extracted from single- or multi-exponential fits to low dynamic range fractions of the complete datasets can lead to highly misleading values, as decay times may vary over orders of magnitude (tens of nanoseconds to hundreds of microseconds) depending on the excitation density and the repetition rate of the PL set-up. Thus, considering the decay time observed from transient experiments on halide perovskites to be a single number is one of the key fallacies the community needs to overcome to gain insights on recombination dynamics in these materials. A possible alternative to effective decay times for decays that rather resemble a power law instead of an exponential decay is the determination of an effective recombination coefficient.
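For a loss channel quadratic in carrier density, dn/dt = -k n^2 integrates to n(t) = n_0/(1 + k n_0 t), which produces the t^-2 tail in the PL (ϕ ∝ n^2). A hedged sketch of extracting such an effective coefficient from a transient follows; the signal model and parameter values are illustrative assumptions, and the paper's own definition of k_diff is in its Supplementary Note 1.

```python
import numpy as np

# Synthetic transient for a purely quadratic loss channel.
k_true, n0 = 1e-10, 1e16               # cm^3 s^-1, cm^-3 (illustrative)
t = np.logspace(-9, -3, 400)
n = n0 / (1.0 + k_true * n0 * t)       # solution of dn/dt = -k n^2
phi = n**2                             # PL flux ~ n*p = n^2 (intrinsic film)

# Recover the carrier density from the (normalized) PL and estimate
# k_diff(t) = -(dn/dt) / n^2, which is constant for this model.
n_est = n0 * np.sqrt(phi / phi[0])
k_diff = -np.gradient(n_est, t) / n_est**2

print(f"k_diff early: {k_diff[10]:.2e}  late: {k_diff[-10]:.2e}  (true {k_true:.0e})")
```

Unlike τ_diff, which drifts over orders of magnitude during the decay, k_diff stays pinned at the input value, which is the attraction of reporting a coefficient rather than a lifetime when deep defects are absent.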
Device fabrication

Patterned ITO glasses (Kinetic, 2.0 × 2.0 cm^2) were used as substrates and ultrasonically cleaned with soap solution (Hellmanex III, 2%, 50 °C, 20 min), acetone (20 °C, 20 min) and IPA (20 °C, 20 min), one by one. The substrates were further cleaned by oxygen plasma (Diener Zepto, 50 W, 13.56 MHz, 10 min) and then transferred into a N2-filled glove box to await use. The Me-4PACz powder was dissolved in EtOH at a concentration of 1 mmol l^-1. After it was completely dissolved, the solution was spin-coated on the substrates at 3,000 r.p.m. for 25 s (acceleration time, 4 s) and then annealed at 100 °C for 10 min. For the perovskite solution, a 1.2 M Cs0.05FA0.73MA0.22PbI2.56Br0.44 triple-cation perovskite precursor solution was prepared by mixing CsI (0.06 M), methylammonium (MA) iodide (0.264 M), formamidinium (FA) iodide (0.876 M), PbBr2 (0.264 M) and PbI2 (0.936 M) solutes in DMF/DMSO (3:1 volume ratio) solvent. Then PMMA (∼0.06 mg ml^-1) was added to the solution. The precursor solution was stirred at 75 °C until fully dissolved, and then filtered with a polytetrafluoroethylene filter (0.45 µm). Some 180 µl of solution was dropped onto the Me-4PACz layer and spin-coated at 4,000 r.p.m. for 15 s (acceleration time, 5 s) and 6,000 r.p.m. for 40 s (acceleration time, 5 s). Some 300 µl of anisole was dripped onto the film as an antisolvent 20 s before the end of the spin process. The films were immediately annealed at 100 °C for 20 min. For the OAI-modified samples, 100 µl OAI/IPA solutions (with concentrations of 1, 1.5, 2 and 2.5 mg ml^-1) were each dynamically spin-coated on a perovskite layer at 5,000 r.p.m. for around 30 s, and then annealed at 100 °C for 5 min. The as-prepared films were covered by C60 (25 nm) and BCP (8 nm) layers by thermal evaporation at a rate of 0.1 Å s^-1. Finally, 80 nm of silver was thermally evaporated on the film with a mask. All of the solution preparation and film preparation was performed in a N2-filled glove box and attached thermal evaporation system. The active cell area (0.06 and 0.16 cm^2) is the intersection of the silver and patterned ITO.
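As a consistency check (not part of the original protocol), the solute molarities quoted above can be verified against the nominal Cs0.05FA0.73MA0.22PbI2.56Br0.44 stoichiometry with a few lines of arithmetic:

```python
# Molar amounts from the 1.2 M precursor recipe (mol per litre).
CsI, MAI, FAI, PbBr2, PbI2 = 0.06, 0.264, 0.876, 0.264, 0.936

Pb = PbBr2 + PbI2                   # 1.20 -> the "1.2 M" quoted in the text
I = CsI + MAI + FAI + 2 * PbI2      # 3.072
Br = 2 * PbBr2                      # 0.528

print(f"Cs {CsI/Pb:.2f}  FA {FAI/Pb:.2f}  MA {MAI/Pb:.2f}  "
      f"I {I/Pb:.2f}  Br {Br/Pb:.2f}")
# -> Cs 0.05  FA 0.73  MA 0.22  I 2.56  Br 0.44
```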
Material characterizations

The surface morphologies of the perovskite films were characterized by scanning electron microscopy (Zeiss LEO 1550VP). Absorptance spectra of the film samples were measured by an ultraviolet-visible-near-infrared spectrometer (PerkinElmer Lambda 950). The thickness of the perovskite films was measured by a step profiler (Veeco Dektak 6M). Ultraviolet photoelectron spectroscopy (UPS) measurements were carried out for each layer to investigate the energy level alignment. The UPS system is a Multiprobe MXPS system from Scienta Omicron with an ARGUS hemispherical electron spectrometer and is part of the JOSEPH cluster system at the research center in Jülich. The base pressure in the system is 3 × 10^-11 mbar. The light source for UPS measurement is a HIS13 He I gas discharge VUV source from FOCUS (main line He Iα, 21.22 eV). The binding energy scale is referenced to the Fermi edge of a freshly evaporated gold sample, measured under identical conditions. Work functions were determined from the spectra by measuring the position of the cut-off at high binding energies using linear fits to the background and the steep edge. The same method was applied to the leading edge of the UPS spectra to determine the valence band position for the HTL and ETL. For the control and OAI-treated perovskite films, the valence band onset was plotted on a logarithmic scale and determined by exponential fitting.

Device characterizations

The current-voltage curves were measured under a calibrated air mass 1.5 (AM1.5) spectrum from a class AAA solar simulator (WACOM-WXS-140S-Super-L2 with a combined xenon/halogen lamp-based system) using a crystalline silicon cell as a reference and providing a power density of 100 mW cm^-2. The reference cell was certified by the photovoltaic calibration laboratory at the Fraunhofer ISE, Germany, and the spectral mismatch factor is ∼0.98. For both forward and reverse scans, the scan speed was about 76 mV s^-1 with a measurement time of around 17 s. All the samples were kept uniformly under the light for 5 s before scanning without any other preconditioning. Additionally, a white light light-emitting diode (LED; Cree XLamp CXA3050) was also used as a light source. The light intensity of the LED was adjusted to the one sun condition using the short-circuit current resulting from the solar simulator measurement of a perovskite solar cell. A Keithley 2450 was used as a source measure unit. All measurements were carried out under inert atmosphere in a glove box. We did not use a mask for the measurements: masking can yield erroneous V_oc and fill factor (FF) values, although it makes the determination of the short-circuit current density J_sc more accurate 42. To address the J_sc issue, we instead validated J_sc against the external quantum efficiency results; for our samples, the two matched well.

For the external quantum efficiency measurement, a set-up with a xenon light source (Osram XPO 150 W) and a Bentham monochromator (TMC 300) was used. A photodiode (Gigahertz Optik SSO-PD 100-04) was used to calibrate the light source. The cells were mounted inside a sealed, nitrogen-filled sample box with a quartz cover glass. The raw data of external quantum efficiency and integrated J_sc have been corrected by subtracting the reflectance of the cover glass.
Fourier-transform photocurrent spectroscopy measurements were carried out using a Fourier-transform infrared spectrometer (Bruker Vertex 80v) equipped with a halogen lamp. A low-noise current amplifier (Femto DLPCA-200) was used to amplify the photocurrent generated upon illumination of the solar cell devices with light modulated by the Fourier-transform infrared spectrometer. We used a mirror speed of 2.5 kHz and a resolution of 12 cm^-1. Measurements with different filters were combined to get a spectrum with a higher dynamic range around the bandgap.

Sample preparation for PL measurement

We prepared five types of sample for PL measurement, that is, perovskite film, film/C60, Me-4PACz/film, Me-4PACz/film/C60 and the full device. Unless otherwise noted, the OAI-modified perovskite samples were prepared using 2 mg ml^-1 OAI/IPA solution. Perovskite film and film/C60 samples were prepared on quartz glass substrates. In order to reduce the defect density of the glass/perovskite interface, we prepared a PMMA film on the glass before perovskite preparation. To be specific, 20 mg ml^-1 PMMA was dissolved in chlorobenzene solvent and then spin-coated on the quartz glass at 3,000 r.p.m. for 25 s, followed by annealing at 100 °C for 10 min. Other types of stacks were prepared on ITO with the same preparation parameters as the devices described previously.

Apart from the triple-cation perovskite, we also performed transient PL measurements on Cs0.05FA0.95PbI3 and CsPbBr3 films, as well as a GaAs wafer. The GaAs wafer was purchased commercially and used without any further treatment. The CsPbBr3 precursor was prepared by adding 0.35 M CsBr and 0.35 M PbBr2 into DMSO solution. After fully dissolving it at 70 °C, 100 µl of solution was dropped on bare glass and spin-coated at 4,000 r.p.m. for 1 min, followed by annealing at 100 °C for 10 min. As for Cs0.05FA0.95PbI3, the precursor was mixed from formamidinium iodide (1.71 M), PbI2 (1.8 M) and CsI (0.09 M) powders and then dissolved in DMF/DMSO (8:1 v/v). To improve crystallinity, we added an extra 5% PbI2 and 30% methylammonium chloride (molar ratio) into the solution. The perovskite layer was prepared by spin-coating with ∼100 µl of precursor at 1,000 r.p.m. (10 s) and 5,000 r.p.m. (30 s). Some 200 µl of chlorobenzene was used as antisolvent and dropped onto the film 10 s before the end of the spin process. The film was annealed at 100 °C for 30 min. All processes were performed in the glove box.

The tr-PL measurement

The tr-PL decay was measured using time-correlated single-photon counting and gated CCD recording, separately. For the time-correlated single-photon counting set-up, a 630 nm laser with a pulse width of 96 ps was used. The laser spot size was 50 µm in diameter, and the laser pulse repetition rates applied were 25 kHz and 50 kHz. The time resolution of the system was approximately 2 ns. To vary the laser intensity hitting the sample, filters with different OD values were used. The applied excitation fluences using 2.6 OD, 2 OD, 1 OD and 0 OD filters were 2.00, 7.97, 79.65 and 796.54 nJ cm^-2, respectively. The whole system was placed in a black box to protect the signal from ambient light.
Regarding the gated CCD recording, a pulsed UV solid-state laser was used as an excitation source, which served as a pump laser for the dye laser. The set-up parameters were as follows unless otherwise noted. The pumped dye (Coumarin) used in the tr-PL set-up emitted down-converted, pulsed laser radiation at 512 nm. The repetition rate was 100 Hz. This radiation passed through an optical fibre and impinged at an angle of 30° on the sample surface, illuminating an elliptically shaped spot with a diameter of 3.07 mm on the samples. The applied excitation fluence was around 2.83 µJ cm^-2, making the corresponding initial carrier concentration and ΔE_F values 1.46 × 10^17 cm^-3 and 1.48 eV, respectively. The PL signal emitted by the samples was focused and coupled into the spectrometer (SPEX 270M from Horiba Jobin Yvon). An intensified CCD camera (iStar DH720 from Andor Solis) was used to detect the spectrally dispersed signals. To get a time resolution, we exploited the inherent shutter functionality of our intensified CCD camera and a signal of the laser as a trigger. By changing the delay time between the trigger signal and the acquisition of a spectrum, the PL can be measured at different times after the excitation pulse.

For both methods, the samples were mounted inside a sealed, nitrogen-filled sample box. To analyse the data, we first subtracted the background and then normalized the data, as well as shifting the peak to position zero. Detailed instructions can be found in ref. 15.

Steady-state PL measurement and quasi-Fermi-level splitting calculation

The samples were optically excited by a continuous wave 532 nm laser (Coherent Sapphire). The laser beam was widened to a square of about 5.3 mm × 5.3 mm to illuminate the entire cell area (4 mm × 4 mm). The luminescence spectra were detected via a spectrometer (Andor Shamrock 303) with an Andor Si (deep depletion) CCD camera (iDus Series). The laser power was 17.3 mW. PL measurements were performed for different laser intensities impinging on the sample by using different OD filters. During the measurements, dark spectra were taken following each illuminated measurement to subtract the background.

The quasi-Fermi-level splitting of layer-stack samples was acquired from steady-state PL data. With the open-circuit voltage V_oc and corresponding PL intensity ϕ_PL,cell at the 1 sun condition (V_oc(ϕ_sun) and ϕ_PL,cell(ϕ_sun), respectively) of the control device as a reference, the ΔE_F of layer-stack samples was calculated by

ΔE_F = qV_oc(ϕ_sun) + k_B T ln[ (ϕ_PL / ∫ A ϕ_BB dE) / (ϕ_PL,cell(ϕ_sun) / ∫ Q_EQE ϕ_BB dE) ],

where k_B, T, h and c are Boltzmann's constant, the temperature, Planck's constant and the speed of light in a vacuum, respectively, and ϕ_PL, A, E and Q_EQE are the PL intensity of the sample, absorptance of the films, energy and external quantum efficiency of the cells, respectively. ϕ_BB is the spectral black-body radiation, given by

ϕ_BB(E) = (2πE^2 / h^3 c^2) × 1 / (exp[E/(k_B T)] - 1).

PL quantum yield calculation

The PL quantum yield (Q_e^lum) values of devices were calculated from

Q_e^lum = exp[ -q(V_oc^rad - V_oc) / (k_B T) ],

where V_oc^rad is the radiative open-circuit voltage limit. It can be calculated using the approach described in ref. 43.

Numerical simulation

Band diagrams of devices were simulated using SCAPS software based on the UPS measurement results. The tr-PL and steady-state PL simulations were performed with self-developed MATLAB scripts based on the coupled rate equations (Supplementary Note 6).

Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
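The quoted initial carrier concentration for the gated CCD measurement follows from the excitation fluence if one assumes complete absorption of the 512 nm pulse; the absorber thickness used below is our assumption, as it is not stated in this excerpt.

```python
h, c = 6.626e-34, 2.998e8          # Planck constant (J s), speed of light (m/s)
wavelength = 512e-9                # m, dye-laser line quoted above
fluence = 2.83e-6 * 1e4            # 2.83 uJ cm^-2 converted to J m^-2
thickness = 500e-9                 # m, ASSUMED film thickness

E_photon = h * c / wavelength                  # ~3.88e-19 J (~2.42 eV)
n0_m3 = fluence / E_photon / thickness         # carriers per m^3, full absorption
print(f"n0 ~ {n0_m3 * 1e-6:.2e} cm^-3")        # -> ~1.46e17 cm^-3, matching the text
```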
Fig. 2 | The tr-PL and steady-state PL results of OAI-modified films. a, Change of initial amplitude ϕ(t = 0) of the tr-PL decay curve (time-correlated single-photon counting set-up) for an OAI-modified perovskite film as a function of carrier concentration. The amplitude is proportional to n^2, indicating the film is intrinsic. b, Measured differential decay time as a function of Fermi-level splitting by both time-correlated single-photon counting (using different optical density (OD) filters) and gated CCD set-ups. Note that the curve for 0 OD is nearly overlapping with the gated CCD curve. The solid line is the simulated result. c, Calculated and simulated results of ΔE_F versus illumination intensity for the OAI-modified film. The calculated data are based on the steady-state PL results.

Fig. 4 | Device performance. a, J-V curves of the control and OAI-modified (2 mg ml^-1) small-area devices. The values shown in the figure, from top to bottom, are J_sc, V_oc, FF and efficiency. b, Statistical open-circuit voltage data of control devices and OAI-modified devices with different OAI concentrations in milligrams per millilitre. The solid lines indicate the PL quantum yields. The box contains the values from the upper to lower quartiles. The lines outside the box
Lack of Modulation of Nicotinic Acetylcholine Alpha-7 Receptor Currents by Kynurenic Acid in Adult Hippocampal Interneurons

Kynurenic acid (KYNA), a classical ionotropic glutamate receptor antagonist, is also purported to block the α7-subtype nicotinic acetylcholine receptor (α7* nAChR). Although many published studies cite this potential effect, few have studied it directly. In this study, the α7*-selective agonist, choline, was pressure-applied to interneurons in hippocampal subregions, CA1 stratum radiatum and hilus of acute brain hippocampal slices from adolescent to adult mice and adolescent rats. Stable α7*-mediated whole-cell currents were measured using voltage-clamp at physiological temperatures. The effects of bath-applied KYNA on spontaneous glutamatergic excitatory postsynaptic currents (sEPSCs) as well as choline-evoked α7* currents were determined. In mouse hilar interneurons, KYNA totally blocked sEPSC whole-cell currents in a rapid and reversible manner, but had no effect on choline-evoked α7* whole-cell currents. To determine if this lack of KYNA effect on α7* function was due to regional and/or species differences in α7* nAChRs, the effects of KYNA on choline-evoked α7* whole-cell currents in mouse and rat stratum radiatum interneurons were tested. KYNA had no effect on either mouse or rat stratum radiatum interneuron choline-evoked α7* whole-cell currents. Finally, to test whether the lack of effect of KYNA was due to unlikely slow kinetics of KYNA interactions with α7* nAChRs, recordings of α7*-mediated currents were made from slices that were prepared and stored in the presence of 1 mM KYNA (>90 minutes exposure). Under these conditions, KYNA had no measurable effect on α7* nAChR function. The results show that despite KYNA-mediated blockade of glutamatergic sEPSCs, two types of hippocampal interneurons that express choline-evoked α7* nAChR currents fail to show any degree of modulation by KYNA. Our results indicate that under our experimental conditions, which produced complete KYNA-mediated blockade of sEPSCs, claims of KYNA effects on choline-evoked α7* nAChR function should be made with caution.

Introduction

Nicotinic acetylcholine receptors (nAChRs) are ligand-gated, non-selective cation channels. To date, nine α-subunits (α2-10) and three β-subunits (β2-4) have been discovered in the CNS (reviewed in [1,2,3]). The α-subunits are required for ligand activation while the β-subunits serve as structural components and can affect receptor characteristics, such as ligand affinity and desensitization rate [1,2,3]. Heterologous expression studies, as well as studies with null mutant mice, show that these subunits assemble in various combinations to form pharmacologically and biophysically distinct nAChR subtypes, and these subtypes show regionally distinct patterns of expression [1,2,3]. Kynurenic acid (KYNA) is a well-established antagonist of the AMPA-, NMDA- and kainate-type glutamate receptors [17,18]. A metabolite of tryptophan, KYNA is synthesized primarily by glia and released into the extracellular space (reviewed in [18,19]). Although cerebral spinal fluid (CSF) levels of KYNA are below the established IC50 values for AMPA and NMDA receptors, some studies indicate that de novo synthesis and release of KYNA reduces glutamate-mediated excitotoxicity, suggesting that KYNA release may be located near synaptic sites, thus creating microdomains of high KYNA concentration [19,20]. In 2001, it was reported that KYNA also blocks α7* nAChRs [21].
This study measured the direct effects of KYNA on α7* receptors expressed in cultured embryonic hippocampal neurons and revealed that KYNA had greater affinity for α7* receptors than for NMDA receptors [21]. Additional studies in hippocampal slices showed that KYNA reduced choline-evoked increases in GABAergic spontaneous inhibitory postsynaptic currents (sIPSCs), an indirect measure of α7* function. However, the KYNA effect in slices was much less robust than that seen in cultured neurons [21]. The lower potency of KYNA for α7* receptors in hippocampal slices as compared to cultured neurons was interpreted to result from diffusion barriers inherent to slices as well as the relative hydrophobicity of KYNA (however, a recent report suggests that the age of the tissue could account for the reduced effects of KYNA [26]). Subsequent studies directly measured the effects of KYNA on α7* nAChRs expressed in hippocampal slices, confirming the results of their initial report. Recently, however, reports have failed to find any effect of KYNA on α7*-mediated events [22,23], and we present further support for the lack of KYNA effects on α7* nAChR currents using direct patch-clamp recording from adolescent or mature rodent acute brain slices.

Hippocampal Slices

Male C57BL/6J/Ibg mice, 45- to 60-days old, were obtained from the Institute for Behavioral Genetics (Boulder, CO). Male Sprague Dawley rats, 21-28 days old, were obtained from Harlan (Wilmington, MA) and tested at 30-45 days of age. Housing and treatment of all animals were in accordance with the NIH and the University of Colorado, Boulder IACUC guidelines. The mice were sacrificed by cervical dislocation and rats were sacrificed by isoflurane anesthesia. The brains were removed quickly and placed into a 'cutting solution' of the following composition (in mM): sucrose 75, NaCl 87, NaHCO3 25, KCl 2.5, NaH2PO4 1.25, CaCl2 0.5, MgCl2 7, and glucose 25, bubbled continuously with a mixture of 95% O2 and 5% CO2 at 4 °C. The brains were blocked and secured to the cutting platform using cyanoacrylate glue. Horizontal hippocampal slices (250 µm thickness for mice, 300 µm thickness for rat) were obtained using a Vibratome (VT1000P, Leica Microsystems, Wetzlar, Germany) and transferred to a storage chamber containing a continuously bubbled solution comprised of a 50:50 mixture of cutting solution and artificial CSF (ACSF) of the following composition (in mM): NaCl 126, NaHCO3 26, KCl 3, NaH2PO4 1.2, CaCl2·2H2O 2.4, MgCl2 1.5, and glucose 10. Slices were allowed to equilibrate for at least 1 hr at 34-35 °C before they were transferred to the recording chamber.

Electrophysiological Recordings

All experiments were performed at 32-34 °C while the tissue was superfused with ACSF at a rate of 2.5 ml/min. Whole-cell patch-clamp recording was accomplished by using glass pipettes pulled on a Flaming/Brown electrode puller (Sutter Instruments, Novato, CA). The resistance of the pipettes was 3-5 MΩ when filled with a potassium gluconate-based internal solution, which consisted of (in mM): 132 K-gluconate, 4 KCl, 1 EGTA, 2 MgCl2, 0.1 CaCl2, 2 Mg-ATP, 0.3 Na-GTP, and 10 HEPES, adjusted to pH 7.25 with additional KOH. Cells were viewed with an upright microscope equipped with IR-DIC optics (Nikon 800 FN, or Olympus BX51WI). Neurons were recorded using the whole-cell voltage-clamp technique with a Multiclamp 700 (Axon Instruments, Foster City, CA).
Data were recorded to a desktop computer and analyzed off-line using pClamp 9 software (Axon Instruments, Foster City, CA).

Drug Application

Kynurenic acid, DHβE, and MLA were delivered by bath application. Brief pulses (10-300 ms) of choline (10 mM) were applied directly to the cell body via pressure microejection (2-10 psi, pipette tip ~20-50 µm from the cell border) from pipettes identical to the recording pipettes, using a Picospritzer II (General Valve, Fairfield, NJ). Due to the brief duration of agonist application (10-300 ms), choline was applied at 20-30 second intervals without any measurable desensitization. Glutamate was applied in a similar fashion (2-10 psi, 10-100 ms), but the interval between puffs was extended to five minutes to avoid the possibility of glutamate receptor-induced plasticity effects on the glutamate-evoked and spontaneous synaptic glutamate currents.

Statistical Analysis

Data were analyzed using either the paired or unpaired Student's t-test, where appropriate.

Kynurenic Acid Effects on α7* nAChRs Expressed on Mouse Hilar Interneurons

Previous studies of rat hilar neurons revealed functional α7* nAChRs [24], and studies in the mouse revealed a high density of α7* nAChRs using a ligand binding assay [25]. We sought to investigate whether mouse hilar interneurons expressed functional α7* nAChRs. The present study utilized pressure application of choline (10 mM) to adolescent-to-adult hilar neurons under voltage-clamp control. The initial results showed that choline application elicited an inward current that was completely blocked by bath application of the α7-selective antagonist, MLA (Figure 1A and 1B). As an additional control, we performed experiments using α7 null mutant mice that revealed no choline-evoked responses under identical experimental conditions (Figure 1A and 1B). Together, these results indicate that the choline-evoked inward currents recorded from mouse hilar interneurons were mediated by functional α7* nAChRs. To the best of our knowledge, this is the first demonstration of functional α7* nAChRs expressed in the mouse hilar region. One characteristic of hilar interneuron recordings is the high-frequency, large-amplitude spontaneous excitatory postsynaptic currents (sEPSCs) that are resistant to MLA and made the analysis of the α7* currents problematic (see Figure 1A and 1C). To determine the pharmacology of these sEPSCs, we applied a saturating concentration (1 mM) of the broad-spectrum ionotropic glutamate receptor antagonist KYNA. Bath-applied KYNA (1 mM) completely blocked the sEPSCs, indicating the sEPSCs were glutamatergic. Surprisingly, concurrent measurements of evoked α7* currents revealed that 1 mM KYNA failed to block these responses (Figure 1C and 1D), indicating that the α7* currents in mouse hilar interneurons were insensitive to KYNA. Figure 1C shows representative choline-evoked α7* currents before, during, and after bath-applied KYNA (1 mM). Notice that while the sEPSCs are absent during the presence of KYNA, the α7* current is unaffected. Out of 23 neurons studied, 20 displayed choline-induced and KYNA-resistant whole-cell currents; the remaining three neurons were unresponsive to choline. The results presented here showed that all of the choline-responsive mouse hilar neurons fail to show evidence of modulation by KYNA.
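The amplitude comparisons described under Statistical Analysis above can be sketched in a few lines; the current amplitudes below are placeholders rather than the recorded data, and scipy is used where the original analysis may have relied on other software.

```python
import numpy as np
from scipy import stats

# Placeholder choline-evoked alpha7* current amplitudes (pA), measured
# within-cell before and during bath-applied 1 mM KYNA.
baseline = np.array([210.0, 340.0, 155.0, 420.0, 275.0])
kyna = np.array([205.0, 352.0, 149.0, 431.0, 268.0])

# Paired t-test for within-cell drug effects...
t_paired, p_paired = stats.ttest_rel(baseline, kyna)
# ...and an unpaired test for between-group comparisons
# (e.g., naive slices vs. slices stored >=90 min in 1 mM KYNA).
t_unp, p_unp = stats.ttest_ind(baseline, kyna)

print(f"paired:   t = {t_paired:5.2f}, p = {p_paired:.3f}")
print(f"unpaired: t = {t_unp:5.2f}, p = {p_unp:.3f}")
```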
Effects of KYNA on Glutamatergic Whole-cell Currents in Mouse Hilar Interneurons

In the original report of KYNA blockade of α7* nAChRs, the effect of KYNA on α7* nAChRs was much less pronounced in acute hippocampal slices compared to cultured hippocampal neurons [21]. The authors concluded that the reduced effect of KYNA on α7* nAChR function in slices was due to a reduced ability of KYNA to penetrate the hippocampal slice preparation. This is unlikely, given that KYNA readily blocked sEPSCs in the current study, while having no effect on the α7* currents. However, one possible explanation for the lack of KYNA blockade of α7* currents in the current study is that the pressure application of choline displaced KYNA from its binding site. To test for this possibility, we pressure-applied glutamate and determined the effects of bath-applied KYNA on glutamate-evoked whole-cell currents in mouse hilar interneurons. Our results showed that pressure-applied glutamate failed to displace KYNA from its binding site: KYNA completely blocked both the exogenous glutamate currents and the endogenous sEPSCs. Furthermore, the onset of antagonism was rapid, with substantial block after 15 min bath exposure, and complete reversal of blockade after 20 min washout (Figure 2). The results of these control experiments address two main technical questions related to the lack of modulation of α7* nAChR currents by KYNA, indicating that minimal diffusion barriers exist for KYNA in the hippocampal slice preparation and that pressure-applied agonist does not displace KYNA from its site of action. Given that KYNA has a greater affinity for α7* nAChRs compared to glutamate receptors [21], we interpret that the lack of KYNA blockade of α7* currents in the current study is not best explained by its displacement by the pressure application of choline.

KYNA Effects on CA1 α7* nAChRs in Mouse and Rat Stratum Radiatum Interneurons

Previously published reports of KYNA effects on α7* nAChRs in hippocampal slices were done in interneurons located in the rat CA1 stratum radiatum subfield [21,26,27,28]. Another possible explanation for the lack of KYNA effect on hilar α7* nAChRs is that they are somehow different from those expressed in the CA1 region, possibly due to different post-translational modifications or the inclusion/exclusion of additional subunits. Indeed, evidence exists that native α7* nAChRs may include other subunits [29,30,31,32,33]. To test this hypothesis, we measured the KYNA sensitivity of choline-evoked α7* currents expressed in mouse CA1 stratum radiatum interneurons and again found no evidence for KYNA modulation of α7* nAChRs (Figure 3A and 3B). Another possible explanation for the lack of KYNA blockade of mouse α7* nAChRs is that they differ from those expressed in the rat. Papke and colleagues showed that pharmacological differences exist between rat and human α7 nAChRs expressed in oocytes [34,35]. We tested for this by recording choline-evoked α7* currents expressed in rat CA1 stratum radiatum interneurons, and again found no evidence for KYNA blockade (Figure 3C and 3D).

Effects of Long Term KYNA Exposure on α7*-mediated Whole-cell Currents

Hilmas et al. (2001) [21] suggested that KYNA blockade of α7* nAChRs is slow to develop in the slice. To address this issue, experiments were done in which hippocampal slices were cut, stored, and continuously perfused with 1 mM KYNA.
Additionally, 1 mM KYNA was present in the choline (10 mM) puffer pipette to control for the possibility that choline application was displacing KYNA from its site of action. In these experiments, the slices were exposed to 1 mM KYNA for at least 90 min. Figure 4A shows representative traces from a hilar neuron exposed to 1 mM KYNA for 2 hrs. The top trace shows the inward response to a 30 ms pressure application of 10 mM choline/1 mM KYNA. The bottom trace shows the choline response was blocked by 10 nM MLA, indicating that it was mediated by α7* nAChRs and not due to a mechanical artifact resulting from pressure application. Figure 4B presents the time course for this experiment, showing stable, large-amplitude α7* nAChR-mediated currents in the presence of 1 mM KYNA that were subsequently blocked by bath-applied MLA (10 nM). Because no baseline responses were obtained in these experiments, the range of amplitudes of choline-evoked responses obtained in the absence of KYNA from separate experiments was compared to those obtained after at least 90 min of continuous KYNA exposure; the comparison is summarized in Figure 4C. The amplitudes of baseline choline-evoked responses ranged from 34-574 pA (n = 20). Choline-evoked responses from neurons exposed to KYNA for at least 90 min ranged from 185-557 pA (n = 10). Given the large range of amplitudes for each condition, no statistical significance for a KYNA effect was seen. However, if KYNA were partially blocking α7* receptors, one would expect to see a shift in the range of amplitudes toward the lower end, which was not observed. These results show that long-term KYNA exposure has no effect on choline-evoked α7*-mediated currents. Also, the inclusion of KYNA in the application pipette confirms, yet again, that choline application in the previous experiments was not displacing KYNA from its supposed binding site on the receptor.

Discussion

Results presented here failed to replicate prior reports by the Albuquerque laboratory [21,26,27,28] showing that KYNA blocks α7* nAChRs. However, our negative results are consistent with those reported recently by the Hernandez-Guijo and Kew laboratories [22,23]. In the original report of KYNA antagonism of α7* nAChRs, Hilmas et al. (2001) [21] stated that DMSO was used to get KYNA into solution. It is possible that the high concentrations of DMSO necessary to dissolve KYNA produced indirect nonspecific effects. Indeed, one group that failed to observe a modulatory role for KYNA on α7* nAChRs, Mok et al. (2009) [23], showed that high concentrations of DMSO inhibit α7* currents regardless of the presence of KYNA. This result may explain the discrepancy between the initial report [21] and the results presented here, as well as those of Mok et al. (2009) [23]. However, these discrepancies are not accounted for by later reports of KYNA effects on α7* nAChRs citing that KYNA was dissolved using NaOH [26,27]. There is increasing evidence that some native α7* nAChRs may be heteromeric (i.e., containing non-α7 subunits) [14,29,30,31,32,33]. These studies show that heteromeric α7* nAChRs have different pharmacological and biophysical properties compared to homomeric α7 nAChRs. Since nAChR subunits are differentially expressed both regionally and developmentally [36], this raises the possibility that regional differences in α7* nAChR subunit composition could account for the differences in sensitivity to KYNA.
To determine if our initial lack of KYNA effect on α7* currents was due to a regional difference in α7* nAChRs (i.e., hilar vs. CA1), we recorded choline-evoked α7* currents in CA1 stratum radiatum interneurons. These studies also revealed no effect of KYNA, indicating that, with regard to KYNA sensitivity, α7* nAChRs in the hilus and CA1 stratum radiatum are similar. Species differences in the pharmacological sensitivity of α7 nAChRs have been reported [34,35] and could account for the lack of effect we saw in our initial studies of KYNA effects on mouse α7* nAChRs. To address this, we measured the ability of KYNA to block choline-evoked α7* currents in rat CA1 stratum radiatum interneurons and, again, found no effect. This result was also reported by Mok et al. (2009) [23]. One indirect measure that did yield a modulatory action of KYNA on α7* nAChR function was choline-evoked GABA release in hippocampal slices [21,23]. This phenomenon is action potential-dependent [23] and, as such, requires the coordinated action of several cellular functions (i.e., activation of voltage-gated sodium and calcium channels) required for neurotransmitter release. Therefore, KYNA could be acting nonspecifically anywhere between the activation of the α7* nAChRs and the activation of GABA receptors. Indeed, Mok et al. (2009) [23] showed that KYNA blocked α7* nAChR-induced increases in GABAergic synaptic transmission; however, like the results we report here, concurrent recordings of α7* currents showed no effect of KYNA on these currents. These authors also showed that KYNA blocked GABA-A receptors in cultured rat hippocampal neurons; however, this result was not replicated for spontaneous GABAergic IPSCs in hippocampal slices. Together, these results indicate that in the adolescent rat brain, KYNA blocks choline-evoked GABAergic synaptic transmission at a site other than the α7* or GABA-A receptors. One explanation for the variability of the KYNA effect on α7* nAChR function reported in the literature, put forth by Albuquerque and colleagues [26], is the age of the preparation. They report that α7* nAChRs in preweaned (<18 days old) rat hippocampal slices are insensitive to KYNA, while α7* nAChRs in postweaned (>18 days old) rats are sensitive to KYNA blockade. On the surface this explanation seems plausible, as both Arnaiz-Cot et al. [22] and Mok et al. (2009) [23] report that α7* nAChRs expressed in cell culture are insensitive to KYNA blockade; however, Hilmas et al. (2001) [21] report the opposite. Additionally, the results presented here used tissue obtained from adolescent-to-adult mice (45-60 days old) and from early adolescent-to-adolescent rats (30-45 days old). Given the wide range of results for comparable preparations, it seems more likely that subtle differences in methodology not discernible from the published methods are responsible for the disparate results. One result that appears to be consistent is the effect of KYNA to block choline-induced increases in GABAergic function in hippocampal slices. Both Albuquerque and colleagues [21,26,28,37] and Mok et al. (2009) [23] report that KYNA blocks choline-induced increases in GABAergic function in hippocampal slices. However, concomitant recordings of α7* nAChR function in these experiments revealed that this was not due to α7* nAChR blockade [23]. Recently, KYNA was demonstrated to be an agonist for the orphan G-protein-coupled receptor GPR-35 [38,39]. GPR-35 is expressed in the brain [40] and is linked to the Gi/o pathway [38,39].
Other receptors coupled to the Gi/o pathway have been shown to block action potential-dependent neurotransmitter release (e.g., the GABA-B receptor and group II & III metabotropic glutamate receptors; reviewed in [41,42,43]). If GPR-35 is located on GABAergic nerve terminals, this raises the possibility that KYNA actions previously attributed to its effects on α7* nAChRs could be the result of its actions on GPR-35 or some other pharmacological target. Regardless, the results presented here, as well as the finding that KYNA could be acting through GPR-35, suggest that caution should be used when interpreting the mechanism of action of KYNA in complex preparations.

Figure 4. Effects of long-term exposure to KYNA on pressure-applied choline-evoked α7* currents. In these experiments, slices were prepared, stored and recorded in the presence of 1 mM KYNA (at least 90 min exposure). Panel A shows representative traces for a baseline choline-evoked α7* current after 2 hrs exposure to KYNA (top trace) and after 5 min exposure to MLA (10 nM, bottom trace). Panel B shows the time course for this experiment. Panel C shows the distributions for control choline-evoked α7* currents (n = 20) and choline-evoked α7* currents after at least 90 min exposure to 1 mM KYNA (n = 10). Each point represents the average of 5-10 events; the error bars were omitted for clarity. There was no significant difference between groups (t = 0.5899, df = 28, p = 0.56, unpaired t-test). doi:10.1371/journal.pone.0041108.g004
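The unpaired comparison quoted in the Figure 4 legend can be sanity-checked directly: with group sizes of 20 and 10 neurons, df = 20 + 10 - 2 = 28, and the two-tailed p-value follows from the t statistic. A minimal SciPy sketch:

```python
from scipy import stats

n1, n2 = 20, 10                      # group sizes reported in Figure 4C
t_stat = 0.5899                      # t statistic quoted in the legend
df = n1 + n2 - 2                     # 28, matching the reported df
p_two_tailed = 2 * stats.t.sf(abs(t_stat), df)
print(df, round(p_two_tailed, 2))    # 28, ~0.56 (reported p = 0.56)
```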
4,772.6
2012-07-25T00:00:00.000
[ "Biology" ]
Linear Phase Two-Dimensional FIR Digital Filter Functions Generated by Applying the Christoffel-Darboux Formula for Orthonormal Polynomials

Filter theory represents one of the strictest disciplines, with possibilities for application in various frequency ranges and technologies [1-4]. In this theory, successful applications of powerful orthogonal polynomials are well-known [4-7]. A number of problems in various scientific and technical areas have been solved by applying the classical Christoffel-Darboux formula for all classical orthogonal polynomials [8, 9]. A new class of explicit filter functions for continuous signals, generated by the classical Christoffel-Darboux formula for the classical Jacobi and Gegenbauer orthonormal polynomials, has been described in detail [10, 11]. On the other hand, there have been a number of attempts to solve the complex problem of generating linear phase two-dimensional finite impulse response (FIR) digital filters of lower order, e.g. [12]. They are based either on a transformation of one-dimensional FIR filters or on direct application of approximation techniques in two dimensions. A further generalization of the previous research [10, 11] to two dimensions is presented in this paper. The global Christoffel-Darboux formula for four orthonormal polynomials on two equal finite segments for generating filter functions is proposed here in a compact explicit form. A new class of linear phase two-dimensional FIR digital filters generated by the proposed formula is given.
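For orientation, the classical one-dimensional Christoffel-Darboux identity on which the four-polynomial generalization builds can be stated as follows (a standard result for orthonormal polynomials p_m with leading coefficients k_m; the paper's own two-segment, four-polynomial form is not reproduced here):

```latex
\sum_{m=0}^{n} p_m(x)\, p_m(y)
  = \frac{k_n}{k_{n+1}}\,
    \frac{p_{n+1}(x)\, p_n(y) - p_n(x)\, p_{n+1}(y)}{x - y},
  \qquad x \neq y.
```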
Let the polynomials P_m(x) be orthogonal on the finite segment [a, b] with respect to a weight function w(x) ≥ 0, with the m-th order norm h_m^2, the orthogonality being defined by

$$\int_a^b w(x)\,P_r(x)\,P_s(x)\,dx = h_r^2\,\delta_{rs},$$

so that P_m(x)/h_m are the corresponding orthonormal components. The finite (summed from the zero-th to the n-th component) global Christoffel-Darboux formula for two same-order orthogonal polynomials with x as a variable, P_r(x) and P_s(x), on the equal finite segment [a, b], and for two same-order orthogonal polynomials with y as a variable on the equal finite segment [c, d] (n is the order of the continual orthogonal polynomials), is proposed here in an explicit compact representative form of orthonormal components. By standard techniques, the proposed formula can be mapped into the new domains, analogue (s) and digital (z) [13]-[15]; for example, it can be mapped into the z_1 (or z_2) domain, and a third way of mapping is also available.

Filter function

A linear phase two-dimensional FIR filter of N x N order is defined by

$$H(z_1, z_2) = K \sum_{r=0}^{N} \sum_{k=0}^{N} b(r,k)\, z_1^{-r} z_2^{-k},$$

where K is the gain constant and b(r,k) are the filter coefficients, which are real numbers. The square of the filter frequency response can be represented as

$$A^2(\omega_1, \omega_2) = \bigl|H(e^{j\omega_1}, e^{j\omega_2})\bigr|^2,$$

or alternatively, in absolute units and in dB respectively,

$$A(\omega_1, \omega_2), \qquad a(\omega_1, \omega_2) = 20 \log_{10} A(\omega_1, \omega_2)\ \mathrm{dB}.$$

New class of two-dimensional FIR filter functions

Applying the proposed formula, Eq. (9), a new class of two-dimensional FIR filter functions is obtained. For the linear phase two-dimensional symmetric FIR digital filters generated by the proposed approximation technique, the following symmetries are valid:

$$b(r,k) = b(N-r,\,k) = b(r,\,N-k).$$

The linear phase function of the two-dimensional symmetric FIR digital filter defined by Eq. (20) then follows from these symmetries. The two-dimensional frequency response of this filter, in absolute units and in dB, as well as the contour plot, is given in Fig. 1 - Fig. 3. The view from above of the frequency response is presented in Fig. 1(a), 2(a) and 3(a), while the view from below (the corresponding response multiplied by -1) is presented in Fig. 1(b), 2(b) and 3(b).
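As a practical complement, the following Python sketch (using NumPy; the 3x3 coefficient matrix is an arbitrary illustrative choice, not a filter designed by the proposed formula) evaluates the frequency response of a two-dimensional FIR filter whose real coefficients satisfy the linear-phase symmetries above:

```python
import numpy as np

def freq_response_2d(b, w1, w2):
    """H(w1, w2) = sum_r sum_k b[r, k] * exp(-j*(w1*r + w2*k))."""
    r = np.arange(b.shape[0])
    k = np.arange(b.shape[1])
    E1 = np.exp(-1j * np.outer(w1, r))      # (len(w1), N+1)
    E2 = np.exp(-1j * np.outer(w2, k))      # (len(w2), N+1)
    return E1 @ b @ E2.T                    # (len(w1), len(w2))

# Coefficients with b(r,k) = b(N-r,k) = b(r,N-k): a symmetric 2D kernel,
# which guarantees linear phase in both frequency variables.
b = np.array([[1.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 1.0]]) / 16.0
w = np.linspace(-np.pi, np.pi, 201)
H = freq_response_2d(b, w, w)
a_db = 20.0 * np.log10(np.maximum(np.abs(H), 1e-12))  # response in dB
```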
Conclusions

This paper presents an original approach to linear phase two-dimensional FIR digital filter design, yielding significant improvements. The global Christoffel-Darboux formula for four orthonormal polynomials on two equal finite segments, for generating linear phase two-dimensional FIR digital filter functions, is proposed in a compact explicit representative form. The proposed formula represents a powerful identity for solving the extremely complex and ever-relevant problem of linear phase two-dimensional filter design: it can be applied most directly in generating two-dimensional filter functions and in solving mathematically the approximation problem of a filter function of even and odd order. It enables efficient design of high-order filters. The filters designed in this way are highly selective, and all parasitic effects are suppressed. These filters can be applied in various areas, including telecommunications, where they can be of special interest. An example of a new class of extremely economic linear phase two-dimensional FIR digital filters without multipliers, obtained by the proposed approximation technique, is presented. The generated linear phase two-dimensional FIR filter functions have two symmetries. A three-dimensional frequency response (and the corresponding contour plot) of a new class linear phase two-dimensional FIR digital filter is presented, illustrating the advantages of the proposed approach.

Fig. 1. A three-dimensional (3-D) plot of the two-dimensional frequency response of the linear phase two-dimensional FIR digital filter designed by the proposed formula: (a) view from above, (b) view from below.
1,433.6
2012-03-04T00:00:00.000
[ "Engineering", "Computer Science", "Mathematics" ]
Stable Semisimple Modules, Stable t-Semisimple Modules and Strongly Stable t-Semisimple Modules

Throughout this paper, three concepts are introduced, namely stable semisimple modules, stable t-semisimple modules and strongly stable t-semisimple modules. Many features co-related with these concepts are presented. Also, many connections between these concepts are given. Moreover, several relationships between these classes of modules and other co-related classes and other related concepts are introduced.

Hadi I-M.A. and Shyaa F.D. in (3) extended the notion of t-semisimple modules to strongly t-semisimple modules and studied them. In (4), they introduced and studied the concept of FI-semisimple modules, where "an R-module M is called FI-semisimple if every fully invariant submodule is a direct summand" (4). "M is called an FI-t-semisimple module if for each fully invariant submodule N of M, there exists W ≤⊕ M such that W ≤tes N" (4). "M is called strongly FI-t-semisimple if for each fully invariant submodule N of M, there exists a fully invariant direct summand W of M with W ≤tes N" (4). "A submodule N of M is called fully invariant if for each endomorphism f of M (i.e. f ∈ End(M)), f(N) ⊆ N" (1). "N is called stable if for each homomorphism f : N → M, f(N) ⊆ N" (5). "M is called duo (fully stable) if every submodule is fully invariant (stable)" (6) and (5). Obviously, "every stable submodule is fully invariant, but the converse is not true in general", see (5), (7). This motivates us to introduce and study the following types of modules: stable semisimple, stable t-semisimple and strongly stable t-semisimple modules.

Section 2 is devoted to studying stable semisimple modules. The direct sum of stable semisimple modules is stable semisimple (see Proposition 3). However, a direct summand of a stable semisimple module inherits the property under a certain condition (see Proposition 4). Also, stable submodules inherit the property if the module is stable injective (see Proposition 5). In Section 3, stable t-semisimple modules are introduced and studied as a generalization of t-semisimple modules and also of FI-t-semisimple modules. The direct sum of stable t-semisimple modules M1 and M2 is stable t-semisimple, and the converse holds if M = M1 ⊕ M2 is stable injective and ann M1 + ann M2 = R (see Theorem 1). Besides this, many characterizations of stable t-semisimple modules (under certain conditions) are presented. In Section 4, strongly stable t-semisimple modules are introduced and studied. This concept is a generalization of strongly t-semisimple modules and also of strongly FI-t-semisimple modules. Many connections between this concept and other concepts, such as stable semisimple and Z2-torsion, are given. Strongly stable t-semisimple modules and strongly FI-t-semisimple modules coincide under certain conditions (see Remarks and Examples 3(6), (7)). The direct sum of two strongly stable t-semisimple modules M1, M2 with ann M1 + ann M2 = R is strongly stable t-semisimple, and the converse holds if M = M1 ⊕ M2 is stable injective (Theorem 3). Also, every stable direct summand of a strongly stable t-semisimple module is strongly stable t-semisimple if M is stable injective (see Proposition 4). Many other results are given in Section 4.

Stable Semisimple:

In this section, stable semisimple modules are introduced and studied.

Definition 1: An R-module M is called stable semisimple (briefly s-semisimple) if every stable submodule of M is a direct summand. A ring R is s-semisimple if every stable ideal of R is a direct summand of R.
Note that an R-module M is an s-semisimple module if for each stable submodule N of M, there exists W ≤⊕ M such that N = W; that is, every stable submodule is itself a direct summand.

Remarks and Examples 1:
1. Every semisimple module is s-semisimple, but the converse may not hold; for instance, the Z-module Z is s-semisimple, since it has only two stable submodules, namely (0) and Z, and they are direct summands, while Z is not semisimple.
2. Since every stable submodule is fully invariant, every FI-semisimple module is s-semisimple. However, an s-semisimple module need not be FI-semisimple; for example, Z as a Z-module is s-semisimple, but it is not FI-semisimple, since every proper nonzero submodule of Z is fully invariant but is not a direct summand.
3. "An R-module M is called stable extending (s-extending) if every stable submodule of M is essential in a direct summand" (7).

Proposition 1: Let M be s-semisimple and let N be a stable submodule of M. Then M/N is s-semisimple.
Proof: Let U/N be a stable submodule of M/N, where U ≤ M and U contains N. By Lemma 1, U is a stable submodule of M. But M is s-semisimple, hence U ≤⊕ M; that is, U ⊕ V = M for some V ≤ M. This implies M/N = U/N ⊕ (V + N)/N, so that U/N ≤⊕ M/N and M/N is s-semisimple.

Corollary 1: Let f : M → M′ be an epimorphism such that Ker f is a stable submodule of M. If M is s-semisimple, then M′ is s-semisimple.

The following proposition shows that the property of being s-semisimple passes to direct summands under certain conditions. First, the following lemma is given.

Proposition 5: Let M be an s-injective R-module. If M is an s-semisimple module, then every stable submodule of M is s-semisimple.
Proof: Let N be a stable submodule of M and let W be a stable submodule of N; then W is a stable submodule of M by (8, Lemma 2.15).

Stable t-Semisimple Modules:

In this section, the concept of stable t-semisimple modules is introduced and studied; it is a generalization of s-semisimple modules, and also a generalization of t-semisimple modules and of FI-t-semisimple modules.

Remarks and Examples 2:
1. Clearly, every s-semisimple module is s-t-semisimple, but the converse is not true in general. For example, the Z-module Z4 is s-t-semisimple, since for each N ≤ Z4, N is stable and (0) ≤tes N (because (0) + Z2(N) = N ≤ess N), see (2, Proposition 1.1).
2. Every singular (and hence Z2-torsion) module is s-t-semisimple, since for each N ≤ M, (0) + Z2(N) = (0) + N = N ≤ess N, and hence (0) ≤tes N by (2, Proposition 1.1).
3. Every t-semisimple module is s-t-semisimple, but the converse may not be true. For example, Z as a Z-module is not t-semisimple, but Z is s-t-semisimple since it is s-semisimple. Also, M = Z ⊕ Z2 as a Z-module is s-t-semisimple, but it is not t-semisimple. Note that under the class of fully stable modules the two notions, t-semisimple and s-t-semisimple, are equivalent. They are also equivalent under the class of comultiplication modules, since "every comultiplication module is fully stable", see (9, Lemma 1.2.12, p. 39).
4. Every FI-t-semisimple module is s-t-semisimple, but the converse may be false, as the following example shows: Z as a Z-module is s-t-semisimple, but it is not FI-t-semisimple.

Proposition 6: Let M be an s-injective module. If M is an s-t-semisimple module, then every stable submodule of M is s-t-semisimple.
Proof: Let U be a stable submodule of M and let W be a stable submodule of U. Since M is stable injective, W is stable in M by (8, Lemma 2.15). It follows that there exists K ≤⊕ M with K ≤tes W, since M is s-t-semisimple. Hence M = K ⊕ T for some T ≤ M, and so U = (K ⊕ T) ∩ U = K ⊕ (T ∩ U); thus K ≤⊕ U, and hence U is stable t-semisimple.

Recall that for any submodule N of M, N is contained in a t-closed submodule T of M such that N ≤tes T, by (10, Lemma 2.3). T is called a t-closure of N (10).
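To make these definitions concrete in the simplest finite setting, the following Python sketch (an illustration restricted to cyclic modules Z_n; it is not part of the paper) enumerates the submodules dZ_n, confirms each is carried into itself by every endomorphism x ↦ kx (so it is fully invariant), and tests which submodules are direct summands:

```python
def submodule(n, d):
    """The submodule dZ_n of Z_n, for a divisor d of n."""
    return set(range(0, n, d))

def is_fully_invariant(n, d):
    """Every endomorphism of Z_n is x -> k*x (mod n); check f(N) <= N."""
    members = submodule(n, d)
    return all((k * x) % n in members for k in range(n) for x in members)

def is_direct_summand(n, d):
    """Search for a complement C with N + C = Z_n and N meeting C only in 0."""
    members = submodule(n, d)
    for e in (e for e in range(1, n + 1) if n % e == 0):
        comp = submodule(n, e)
        sums = {(a + b) % n for a in members for b in comp}
        if sums == set(range(n)) and members & comp == {0}:
            return True
    return False

for n in (4, 6):
    for d in (d for d in range(1, n + 1) if n % d == 0):
        print(n, d, is_fully_invariant(n, d), is_direct_summand(n, d))
# In Z_6 every submodule is a direct summand (6 is squarefree), whereas in
# Z_4 the submodule 2Z_4 is fully invariant but not a direct summand.
```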
Proposition 8: Let M be an s-injective module such that a complement of Z2(M) is stable and a t-closure of a stable submodule is stable. If M is s-t-semisimple, then M is t-stable extending.
Proof: By Theorem 2 ((1)⇒(5)), each stable submodule N of M with Z2(M) ⊆ N satisfies N ≤⊕ M. Hence every t-closed stable submodule is a direct summand, since every t-closed submodule contains Z2(M). On the other hand, by hypothesis a t-closure of a stable submodule is stable; hence, by (8, Proposition 2.5), M is t-stable extending.

Proposition 9: Let M be an s-injective module such that a complement of a stable submodule is stable and a t-closure of a stable submodule is stable. If M is s-t-semisimple, then M/C is s-t-semisimple for each stable t-closed submodule C.
Proof: By Proposition 8, M is t-stable extending, so by (8, Proposition 2.5), every stable t-closed submodule is a direct summand of M. Hence M = C ⊕ C′ for some C′ ≤ M. It follows that C′ is a complement of C, and hence C′ is a stable submodule of M. Thus, by Proposition 6, C′ is s-t-semisimple. But C′ ≅ M/C, so M/C is stable t-semisimple.

Strongly Stable t-Semisimple Modules:

Our concern in this section is extending the notion of s-t-semisimple modules to strongly stable t-semisimple modules. This concept is also a generalization of the concept of strongly t-semisimple modules introduced in (3), where "an R-module M is strongly t-semisimple if for each submodule N of M, there exists a fully invariant direct summand (hence a stable direct summand) W of M such that W ≤tes N" (3).

Definition 3: An R-module M is called strongly stable t-semisimple (shortly s-s-t-semisimple) if for each stable submodule N of M, there exists a stable direct summand W of M with W ≤tes N.

Remarks and Examples 3:
1) Every s-semisimple module is s-s-t-semisimple, but not conversely, as can be seen from the example Z12 as a Z-module, which is s-s-t-semisimple but not stable semisimple.
2) Every strongly t-semisimple module is s-s-t-semisimple, but the converse may not be achieved. For example, let M = Z ⊕ Z2 as a Z-module. Since M has only two stable submodules, namely M and (0), M is s-semisimple and hence, by (1), is s-s-t-semisimple. However, M is not strongly t-semisimple, since M/Z2(M) ≅ Z is not t-semisimple (9, Ex. 4, p. 26).
3) Every Z2-torsion module is s-t-semisimple by (3, Rem. & Ex. (3)), so it is s-s-t-semisimple. Note that Z4 as a Z4-module is s-t-semisimple but not Z2-torsion.
4) Every s-s-t-semisimple module is s-t-semisimple.
5) "An R-module M is called strongly FI-t-semisimple if for each fully invariant submodule N of M, there exists a fully invariant direct summand W of M with W ≤tes N" (4). Every strongly FI-t-semisimple module is s-s-t-semisimple, but the converse is not achieved. For example, the Z-module Z is s-s-t-semisimple, but Z is not strongly FI-t-semisimple since, if N = nZ, n ∈ Z, n > 1, then (0) is the only direct summand of Z such that (0) ⊆ N, but (0) ≰tes N.
6) Let M be an FI-quasi-injective R-module. Then M is s-s-t-semisimple if and only if it is strongly FI-t-semisimple.
7) Let M be a fully stable R-module. Then the following statements are equivalent: (1) M is strongly t-semisimple; (2) M is strongly FI-t-semisimple; (3) M is t-semisimple.

Proof: It is clear that K ≤ N. Now, since K ≤⊕ M, then K ⊕ K′ = M, and so N = K ⊕ (K′ ∩ N); thus K ≤⊕ N. But by (9, Rem. 1.1.36), K is stable in N. Thus K is a stable direct summand of N with K ≤tes W, so that N is s-s-t-semisimple.

Corollary 2: Let M be s-injective. If M is s-s-t-semisimple, then every nonsingular stable submodule of M is s-s-t-semisimple.
Proof: Let N be a nonsingular stable submodule of M. Since M is s-s-t-semisimple, M is stable t-semisimple by Rem. & Ex. 3(4). And by Theorem 3.5 (1⇒4), N ≤⊕ M. Thus N is s-s-t-semisimple by Proposition 4.
Corollary 3: For an s-injective R-module M which satisfies the condition that a complement of Z2(M) is stable: if M is s-s-t-semisimple, then every stable submodule N of M which contains Z2(M) is s-s-t-semisimple.
Proof: Since M is an s-s-t-semisimple module, by Theorem 3.5 (1⇒5), N ≤⊕ M. It follows that N is s-s-t-semisimple by Proposition 4.
2,559.2
2021-01-01T00:00:00.000
[ "Mathematics" ]
Herceptin-Mediated Cardiotoxicity: Assessment by Cardiovascular Magnetic Resonance

Herceptin (trastuzumab) is a recombinant, humanized, monoclonal antibody that targets the human epidermal growth factor receptor 2 (HER2) and is used in the treatment of HER2-positive breast and gastric cancers. However, it carries a risk of cardiotoxicity, manifesting as left ventricular (LV) systolic dysfunction, conventionally assessed for by transthoracic echocardiography. Clinical surveillance of cardiac function and discontinuation of trastuzumab at an early stage of LV systolic dysfunction allow for the timely initiation of heart failure drug therapies that can result in the rapid recovery of cardiac function in most patients. Often considered the reference standard for the noninvasive assessment of cardiac volume and function, cardiac magnetic resonance (CMR) imaging has superior reproducibility and accuracy compared to other noninvasive imaging modalities. However, due to limited availability, it is not routinely used in the serial assessment of cardiac function in patients receiving trastuzumab. In this article, we review the diagnostic and prognostic role of CMR in trastuzumab-mediated cardiotoxicity.

Introduction

Herceptin (trastuzumab) is a recombinant, humanized, monoclonal antibody directed against the extracellular domain IV of the human epidermal growth factor receptor 2 (HER2) and is indicated for the treatment of HER2-positive breast and gastric cancers [1][2][3]. HER2 positivity is relatively frequent, found in around one-fifth of breast and gastric cancer patients [4,5]. Trastuzumab has been transformational for the prognosis of these patients, acting through its mechanisms of preventing HER2 dimerization and downstream signalling, HER2 internalization and degradation, and antibody-dependent cellular cytotoxicity [6,7]. Although the chemotherapeutic mechanisms of trastuzumab are well characterised, the molecular aspects of trastuzumab-induced cardiotoxicity, recognised since its phase III trial [8], remain incompletely understood. Early studies reported trastuzumab-related cardiotoxicity to be largely reversible, with endomyocardial biopsies demonstrating an absence of the typical anthracycline-induced cardiomyocyte vacuolization or dropout [9]. However, in vivo mouse studies have found trastuzumab to alter the expression of 15 genes involved in cardiac contractility, adaptation to stress, as well as DNA repair, cellular proliferation, healing, and mitochondrial function [10]. Furthermore, trastuzumab-mediated phosphorylation of HER1 and HER2 has been reported to activate the autophagy-inhibitory Erk signalling pathway in human primary cardiomyocytes, inducing cardiotoxicity by disrupting the cardiomyocyte's ability to recycle cellular toxins [11]. These data, together with analyses of major trastuzumab trials, have highlighted the potential for trastuzumab to induce persistent left ventricular (LV) systolic dysfunction (LVSD) despite drug cessation [12]. This is of concern particularly as heart failure induced by cancer therapy is associated with worse outcomes than that of more common heart failure patients [13]. Despite this, it is important to recognise that close clinical surveillance and discontinuation of trastuzumab at an early stage of LVSD will allow the timely initiation of heart failure drug therapies that can result in the rapid recovery of cardiac function in most patients [1,14].
Consequently, a distinct multidisciplinary clinical subspecialty, cardio-oncology, has emerged with the aim of preventing, monitoring, and treating cancer therapeutics-related cardiac dysfunction (CTRCD) [15]. In current cardio-oncology practice, transthoracic echocardiography (TTE) remains the first line for cardiac surveillance among oncology patients due to its widespread availability and lack of radiation exposure [16][17][18][19]. However, with a reported temporal inter- and intra-observer variability of 10% in the assessment of left ventricular ejection fraction (LVEF) by 2D TTE [20], cardiac magnetic resonance (CMR) is gaining an increasingly prominent role in cardio-oncology. Often considered the reference standard for the assessment of cardiac volume and function, CMR has demonstrated superior reproducibility and accuracy compared to other conventional methods [21]. However, due to limited availability, it is not widely used in serial monitoring for cardio-oncology assessment. Here, we aim to review the diagnostic and prognostic role of CMR in trastuzumab-mediated cardiotoxicity.

Volumetric Assessment and CMR

The assessment of cardiac function before, during, and after therapy is essential for all cancer patients undergoing potentially cardiotoxic therapy [16]. Whilst CMR is widely considered as the reference standard for cardiac volumetric assessment, its current role remains reserved for patients with inadequate echocardiographic windows, due to limitations in availability, higher cost, and the requirement of patient cooperation with breath-holding and an absence of claustrophobia [1,16,21]. Conversely, echocardiography, with its wider availability and cost-effectiveness, is highly suited for serial surveillance. Consequently, given that definitions of cardiotoxicity in many oncology trials are based on a reduction of LVEF, TTE-derived LVEF remains the first-line method for the detection of CTRCD according to consensus guidelines [1,[16][17][18][19]] (Figure 1). One of the key limitations of 2D TTE is its significant inter- and intra-observer variation, often quoted at 10% [16,20]. Therefore, it can be challenging to discern whether a change in LVEF, for instance from 55 to 45%, represents true dysfunction or merely inter-study variation. This variability can be improved with the use of LV opacification contrast, and is better still with 3D TTE [20,23]. Similarly, 3D TTE has been reported to possess superior sensitivity (53%) to 2D TTE (25-29%) for the identification of LVEF <50% in adult survivors of childhood cancer when using CMR quantification as the reference standard [24]. However, it is evident from a recent survey of 96 echocardiographic laboratories from 22 different countries across Europe that there are wide variations in the adoption of 3D TTE, with only 32% of centres routinely capturing 3D data for all TTE studies, and 20% of centres not performing any 3D TTE [25]. Furthermore, the feasibility of 3D TTE can be suboptimal even under research conditions. In a study of 100 breast cancer patients undergoing baseline and surveillance TTE during chemotherapy, 3D TTE was reported to be feasible in only 66% of studies, with factors such as increasing age, weight, smoking, mastectomy, and concomitant radiotherapy contributing to poor 3D image quality [26]. Multigated acquisition (MUGA) scanning was once a commonly used method for the serial evaluation of cardiotoxicity.
Despite low inter- and intra-observer variability, such methodology may be rendered obsolete in modern times due to low sensitivity to subtle changes and radiation exposure [27] (Table 1). The superior accuracy and lower variability of CMR lend it clinical significance not only for the timely diagnosis of CTRCD, via detection of true positive cases, but also for its ability to avoid false negatives, thereby preventing unnecessary treatment interruptions. This is evident from a retrospective cohort study of 369 patients receiving trastuzumab therapy for breast cancer, where trastuzumab was withheld for at least 4 weeks in patients who had experienced a decline in LVEF ≥16%, or a decline ≥10% whilst below normal LVEF limits [28]. This treatment interruption allowed time for cardiology review and cardioprotective therapy initiation. Despite trastuzumab being recommenced in those whose LVEF recovered to normal, patients experiencing any treatment interruption had significantly worse outcomes in terms of both disease-free survival (adjusted hazard ratio of 4.4, P = 0.001) and overall survival (adjusted hazard ratio 4.8, P < 0.001) [28]. In the absence of randomized prospective studies directly comparing patient outcomes from CMR- and TTE-derived LVEF, guidance from the British Society for Echocardiography (BSE) and British Cardio-Oncology Society (BCOS) recognises the addition of recent pilot data on the safe use of trastuzumab in patients with asymptomatic reductions in TTE-derived LVEF down to 40% [17,29,30]. These guidelines may help to compensate for the variability associated with TTE-derived LVEF discussed above, emphasising the ESC's personalized approach to cardiac surveillance by cardio-oncology services [16], and support echocardiography in remaining at the core of cardio-oncology diagnostics.

Myocardial Strain and CMR

While LVEF has historically been used as a standard measure of systolic function, there is increasing interest in the use of more sensitive markers that can detect "subclinical" signs of LV dysfunction, which can aid earlier initiation of cardioprotective therapy. The extent of myocardial deformation which occurs following the application of contractile and relaxation forces can be quantified as strain, defined as the percent change in myocardial length from the relaxed to the contractile state. This deformation represents a fundamental property of the tissue [31], and there is increasing evidence for a causative relationship between the development of myocardial fibrosis and a reduction in ventricular deformation across a range of conditions [32][33][34]. Deformation imaging may, therefore, act as a functional imaging biomarker of myocardial fibrosis and offer additional prognostic information for the personalized management of patients receiving trastuzumab. Unlike the inherent flaws of a simplistic measurement such as LVEF, strain allows quantification of the different spatial components of contractile function in the longitudinal (GLS), circumferential (GCS), or radial (GRS) directions, both globally and regionally. Most myocardial strain studies in patients receiving trastuzumab have used GLS derived from 2D speckle tracking echocardiography (STE). A meta-analysis of 9 studies found reduced GLS to be associated with a higher CTRCD risk (odds ratio 12.27; 95% CI 5.84-42.85; area under the hierarchical summary receiver operating characteristic curve) [35].
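Since strain is simply a normalised length change, both a strain value and the relative GLS reduction used as a cardiotoxicity marker reduce to one-line computations. The sketch below (Python; the baseline and follow-up values are hypothetical, not data from the cited studies) makes the arithmetic explicit:

```python
def lagrangian_strain(l_ed, l_es):
    """Percent length change from end-diastole to end-systole.
    Negative values indicate shortening, as for GLS."""
    return 100.0 * (l_es - l_ed) / l_ed

# Hypothetical serial GLS values (%); more negative = better function.
baseline_gls = -20.0
followup_gls = -16.5
relative_reduction = 100.0 * (abs(baseline_gls) - abs(followup_gls)) / abs(baseline_gls)
print(relative_reduction)   # 17.5 -> exceeds the >15% threshold discussed later
```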
However, there remains uncertainty regarding whether a strain-guided management approach offers incremental prognostic value compared to an LVEF-guided approach. In an observational study where 24 out of 81 consecutive women receiving trastuzumab developed CTRCD, GLS reduction was the strongest predictor of cardiotoxicity [36]. However, in the only prospective randomized controlled trial, where 331 anthracycline-treated patients were randomized to either LVEF- or GLS-guided therapy, there were no significant differences in the primary outcome of change in LVEF between the two study arms [37]. Despite this, it is important to recognise that the GLS-guided approach led to greater use of cardioprotective therapy, a higher final LVEF, and a lower incidence of CTRCD [37]. The current limitation of STE-derived GLS lies in its significant inter-vendor variability [38], with guideline-quoted normal GLS values of <-17% for men and <-18% for women being specific to General Electric (United States) analysis software [17], alongside the demand for good image quality. Strain analysis of 3D STE datasets is also feasible. However, as a relatively novel technique, there is a lack of data for its use in CTRCD, and it generally requires patients to breath-hold, as well as a regular cardiac rhythm, to enable multi-beat 3D acquisition [39]. Myocardial strain quantification is also feasible with CMR and is traditionally performed with one of many dedicated "tagging" sequences (such as spatial modulation of magnetization (SPAMM), harmonic phase (HARP), displacement encoding (DENSE), and strain encoding (SENC)). These sequences magnetise temporary tags into the myocardium, which are prominent during systole and fade during diastole. These tags can be tracked throughout the cardiac cycle to highlight myocardial movement. CMR-tagging derived GLS and GCS have been noted to be worse (less negative) than STE-derived strain in a study of 46 cancer survivors exposed to anthracycline therapy with normal-range LVEF, suggesting CMR to be more sensitive to subclinical LV dysfunction compared to TTE [40]. Looking beyond cardio-oncology, CMR-tagging GCS was again found to offer incremental predictive value over the traditional parameters of LVEF, left ventricular mass, and cardiovascular risk factors for the future onset of heart failure in 1768 asymptomatic individuals from the Multi-Ethnic Study of Atherosclerosis (MESA) cohort [41]. The main disadvantage of dedicated deformation CMR sequences is their time-consuming nature. To overcome this, it is possible to derive strain from feature-tracking of steady-state free precession (SSFP) cine images, with important distinctions being made between 2D (average strain value of three long-axis studies) and 3D derived strain values [42]. Whereas 3D STE is adversely affected by both poor spatial and temporal resolution (leading to coarser speckle patterns) and requires stitching together of volumes to achieve adequate frame rates for analysis at higher heart rates, CMR cine stack datasets are intrinsically three-dimensional, with strain quantification highly feasible [42]. Theoretically, 3D strain quantification (either by CMR or STE) overcomes the overestimation of myocardial movement that results from the through-plane loss of features into the third dimension, which plagues 2D myocardial deformation techniques [43]. This means that the absolute values of 3D strain are usually lower than those of 2D strain and likely provide a closer representation of the underlying myocardial mechanics [42].
As a relatively novel technique, CMR feature-tracking derived strain has an incremental value that is not yet well characterised, with only one study confirming the feasibility of 2D CMR feature tracking and its correlation with CMR-derived LVEF [44]. A large meta-analysis comprising 65 studies and 2888 patients compared the most used noninvasive imaging modalities to the reference standard CMR over the last two decades [45]. The findings revealed a significant negative bias in LV end-diastolic volume (LVEDV) and LV end-systolic volume (LVESV) for 2DE ± contrast and 3DE, demonstrating that echocardiography-based techniques tend to underestimate these values, whereas computed tomography (CT) correlates closely with CMR (Figure 2). In an earlier study involving 114 patients, echocardiography was compared to CMR imaging, focusing on the reference standard for LV function [24]. The study reported that LV volume was consistently underestimated by 2DE and 3DE compared to CMR, and cardiac mass was higher by 2DE than CMR. Compared to CMR, the echocardiographic methods correlated rather poorly, specifically 2D TTE, which demonstrated a low sensitivity (25%) and a high false-positive rate (75%), with a mean LVEF 5% higher than CMR. While 3D TTE compared more favourably to CMR and demonstrated less variability, the authors concluded that the technique lacks the desired accuracy to detect subtle changes that may have important therapeutic implications.

Varying Definitions of Cardiotoxicity

Cardiotoxicity is a broad term that refers to any direct untoward toxic effects on cardiac structure and function, or the acceleration of cardiovascular disease (CVD) among patients with cardiovascular risk factors or preexisting CVD, as a result of cancer therapy [46]. A universal definition of cardiotoxicity is lacking, and existing definitions are often oversimplified, leaving the term shrouded in controversy due to a lack of clarity. Since cardiotoxicity was first defined [47], the definitions used for clinical decisions have varied among different consensus guidelines and clinical trials, usually based on variable cut-off values for LVEF in various imaging modalities [48] (Table 2). More recently, the European Association of Cardiovascular Imaging (EACVI) and the American Society of Echocardiography (ASE) defined cardiotoxicity as a ≥10% decline in LVEF to a final LVEF <53% by echocardiography, multigated acquisition scan (MUGA), or cardiac magnetic resonance imaging (CMR); these were also the first reported guidelines to include a global longitudinal strain (GLS) reduction, defined as >15% [16,19]. The British Society of Echocardiography (BSE) and the British Cardio-Oncology Society (BCOS) have jointly published similar guidelines for adult cancer patients, specifically patients receiving anthracycline ± trastuzumab therapy [17]. The consensus guideline classified cardiotoxicity into three categories: (1) cardiotoxicity, (2) probable subclinical cardiotoxicity, and (3) possible subclinical cardiotoxicity, which should ideally be assessed via advanced echocardiographic measures (2D/3D LVEF and GLS). Additionally, technical considerations should be accounted for due to various factors (clear visualisation of the endocardial border and timing of measurement during the cardiac cycle) that could influence GLS values, thereby further limiting efforts to define abnormal GLS.
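The EACVI/ASE criterion quoted above maps directly onto a simple rule. The following Python sketch (a hedged illustration of that published definition, not a clinical tool) flags CTRCD from serial LVEF measurements, with an optional relative GLS check:

```python
def ctrcd_eacvi_ase(lvef_baseline, lvef_followup,
                    gls_baseline=None, gls_followup=None):
    """>=10-point LVEF decline to a final LVEF < 53%, with an optional
    relative GLS reduction > 15% as a marker of subclinical dysfunction."""
    lvef_positive = (lvef_baseline - lvef_followup >= 10.0
                     and lvef_followup < 53.0)
    gls_positive = None
    if gls_baseline is not None and gls_followup is not None:
        rel_drop = 100.0 * (abs(gls_baseline) - abs(gls_followup)) / abs(gls_baseline)
        gls_positive = rel_drop > 15.0
    return {"lvef_criterion": lvef_positive, "gls_criterion": gls_positive}

print(ctrcd_eacvi_ase(58.0, 47.0, gls_baseline=-19.0, gls_followup=-15.0))
# {'lvef_criterion': True, 'gls_criterion': True}
```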
Establishing a definitive description of cardiotoxicity is vitally important, with major clinical implications, because while failing to detect cardiotoxicity promptly is harmful, overdiagnosis is equally detrimental, potentially causing interruption to a patient's cancer treatment and thereby impacting upon oncological outcomes. Trastuzumab has demonstrated effectiveness when used either as monotherapy or in combination with other substances [52]. However, trastuzumab is rarely administered as a single agent; it is instead more commonly combined with surgery, chemotherapy, and radiotherapy as adjuvant therapy. To date, most patients treated with trastuzumab monotherapy have previously been exposed to other forms of treatment such as anthracycline, either prior to, or concurrently with, trastuzumab administration. Consequently, the assessment of trastuzumab-related cardiotoxicity is often confounded by the lack of patients with no prior anthracycline exposure. This is important as trastuzumab and anthracycline are considered to have different mechanisms of action. Trastuzumab tends to cause cellular dysfunction in most patients and is perceived to be largely reversible (type 2 cardiotoxicity), whereas anthracycline cardiomyopathy is associated with irreversible myocyte necrosis in the form of apoptosis (type 1 cardiotoxicity) (Table 3) [9]. However, this distinction may be further complicated, as recent evidence suggests that trastuzumab could share some common mechanisms with anthracycline-mediated cardiotoxicity, with equally profound toxicity, particularly amongst the elderly population with near-normal ejection fraction and risk factors for CVD. While anthracycline cardiotoxicity is often perceived to be irreversible, there have been reports of partial recovery of cardiac function; conversely, trastuzumab-induced cardiotoxicity is not always reversible [67,68]. Hence, the classifications of cardiotoxicity so far are oversimplifications, failing to reflect the nuance of its complex pathophysiology and natural history.

Mechanisms of Trastuzumab Cardiotoxicity

The mechanisms of trastuzumab-induced cardiotoxicity remain to be definitively identified. Limited data from myocardial biopsies reveal rather different mechanisms between trastuzumab and anthracycline, and the prompt recovery of trastuzumab-induced toxicity upon treatment discontinuation further supports this [9]. Several mechanisms have been proposed; the cardiotoxicity is potentially multifactorial and likely attributable to the anti-HER2 activity, and this remains a topic of extensive discussion. In vivo work in HER2-deleted mice showed that interruption of the HER2 signalling pathway resulted in the spontaneous development of dilated cardiomyopathy [75], supporting the notion that HER2 signalling is an important modifier in heart failure. Preclinical studies revealed an overactive HER pathway, characterised as overexpression of the HER2 receptor on a breast tumour cell or multiple copies of the HER2 gene in the nucleus of the cell, as the potential underlying mechanism of HER2+ breast cancer [76]. Presently, disruption of NRG/ErbB signalling is recognised as the most likely mechanism of trastuzumab-induced cardiotoxicity. Trastuzumab is known to selectively bind to the juxtamembrane domain IV of HER2, a section of the extracellular domain essential for HER2-ErbB4 dimerization within the cardiomyocytes.
Upon binding, the antibody downregulates the expression of HER2, which initiates a cascade of downstream signalling of the PI3K-AKT-mTOR pathway, an important contributor to cellular growth, proliferation, and survival [77]. In patients preexposed to anthracycline, it is probable that a subclinical or clinical apoptotic/necrotic process has already begun, thereby increasing susceptibility to further myocardial damage. Trastuzumab-associated heart failure is likely the result of ongoing attrition of myocytes over time.

Prognosis and Reversibility of Trastuzumab-Induced Cardiotoxicity

In contrast to anthracycline, the clinical outcome of trastuzumab-induced cardiotoxicity is generally considered to be more favourable, since LV dysfunction appears largely reversible upon the discontinuation of trastuzumab, and the inclusion of standard cardioprotective therapy seems to accelerate the recovery process [78]. A right ventricular (RV)-focused CMR study by Barthur et al. [50] found that while RVEF and LVEF declined, with increased RVEDV and RVESV, during therapy, all parameters had normalised at 18 months, six months following the cessation of therapy. Consistent with these findings is another study by Ong et al. [72], which utilised feature tracking (FT) strain analysis. The authors reported a reduction in LVEF, FT-GLS, and FT-GCS at 6 and 12 months into therapy. By 18 months, with treatment completed 6 months prior, the parameters returned to near-baseline levels. Ewer et al. [9] reported on the reversibility of trastuzumab-related LVEF reduction, showing improvements in cardiac function typically at 4 to 6 weeks (before, 0.61 ± 0.13; during, 0.43 ± 0.16; after, 0.56 ± 0.11) following the withdrawal of therapy [79]. Trastuzumab-mediated cardiotoxicity is generally considered not to cause ultrastructural changes, though benign ultrastructural changes were observed in endomyocardial biopsy samples in a trial by Ewer and Ewer [80]. It should be noted that while this is a sensitive method for the evaluation of chemotherapeutic drug-induced cardiotoxicity, its invasive nature and questionable ability to predict clinical outcome render it impractical for routine clinical use. Moreover, abnormalities uncovered by cardiac biopsy only reflect recent and ongoing changes rather than earlier insults. Additionally, an earlier trial comprising 160 patients by Fallah-Rad et al. [81] identified 10 trastuzumab-induced cardiotoxic patients with subepicardial linear LGE in the lateral portion of the LV. Interestingly, at the 6-month follow-up evaluation, despite EF recovery in 6 of the 10 patients, these LGE findings persisted, suggesting persistent myocardial injury. Such findings were amplified in a study by Wadhwa et al. [82], where, of the 36 patients that developed mostly asymptomatic cardiotoxicity, subepicardial linear LGE of the LV was observed in 34 patients. Elevation of troponin-I was also reported in 4 patients following >6 cycles of treatment in another trial [83], implying ongoing myocardial necrosis. The underlying mechanism for the presence of LGE is unclear, particularly in the subepicardial lateral portion of the LV; it is perhaps merely a typical distribution and location associated with this agent (Figure 3). Relatively little is currently known about the long-term prognosis of trastuzumab-induced cardiotoxicity. To our knowledge, CMR studies to date have seldom followed up patients beyond 18 months.
From the available data [50,72,73], despite most CMR parameters having demonstrated statistically significant changes at 18 months, the magnitude of the reductions is small. This raises the question as to whether these statistically significant reductions are also truly clinically significant for previously cardiotoxic patients, or whether they might potentially pose a greater risk of cardiac functional deterioration in the coming years. These findings suggest trastuzumab-mediated cardiotoxicity could be associated with long-term marked impairment of cardiac function and may contribute to an increased risk of late-occurring cardiovascular disease in survivors of HER2-positive breast cancer. One long-term study aimed to determine whether trastuzumab-induced cardiotoxicity recovers and to explore any association with long-term cardiopulmonary dysfunction in survivors of HER2+ breast cancer [84]. The trial enrolled 57 patients after completion of trastuzumab-based therapy (median, 7.0 years after therapy). Patients were assessed in three groups using speckle-tracking echocardiography: (1) a group that developed cardiotoxicity during therapy (TOX), (2) a group with no evidence of cardiotoxicity during therapy (NTOX), and (3) a third group. A large meta-analysis of randomized and cohort studies of over 29,000 women with breast cancer observed the frequency of severe cardiotoxicity up to 3 years following trastuzumab initiation [85]. Among the 58 studies, severe cardiotoxicity occurred in 844 breast cancer patients, accounting for 3% (95% CI 2.41-3.64) of the total sample. 557 incident cases occurred in the early breast cancer group, 203 in the metastatic breast cancer group, and 84 in the mixed population. Mild or asymptomatic cardiotoxicity was reported in 45 studies, with a total of 2251 incident cases (out of 20,491 patients). Two years following the initiation of trastuzumab therapy, severe cardiotoxicity was reported in approximately 3% of the total patient cohort. The incidence rate observed in cohort studies is higher than in randomized controlled trials, possibly because such trials exclude patients at higher risk of adverse events. Accordingly, this renders those studies less reflective of real-world settings. Variability of incident cases between studies was high, with frequencies ranging from 0 to 9.8% in the early breast cancer group and 0 to 16.1% in the metastatic group. Such variability of cardiotoxic events is likely associated with patient selection, the definition of cardiotoxicity, and methods of assessment. Based on these findings, the consensus is that trastuzumab-mediated cardiotoxicity is largely reversible, or at least partially reversible, particularly from a functional standpoint. Though the true prevalence and extent of reversibility are debatable, late toxicity remains a possibility. With the toxicity profile of trastuzumab yet to be fully established, treatment necessitates close monitoring, and in the face of new, emerging data, such issues warrant revisiting. An important limitation of these studies, from the CMR perspective, other than the small sample sizes, is the lack of CMR imaging for evaluating cardiotoxicity; cardiac biomarkers, myocardial biopsy (in some cases), and echocardiography, or other imaging modalities, were adopted instead. Large, prospective CMR studies are warranted to enable a more definitive conclusion on the diagnostic and prognostic role of CMR in trastuzumab-induced cardiotoxicity.
Collectively, these studies highlight the potential need for the utilisation of cardiac MRI in the early detection of subclinical cardiotoxicity, as well as its extended toxicity profile. Establishing a validated risk stratification tool to distinguish patients at increased risk of developing cardiotoxicity from those at lower risk may be necessary, so that monitoring by and utilisation of CMR can be reserved for those at higher risk. From the present data, a multitude of risk factors are associated with an increased risk of trastuzumab-related cardiac events, including age [72,73]. A scoring system based on these parameters may be valuable for estimating the risk of developing cardiotoxicity during therapy. Additionally, it is important to establish the length of follow-up for previously cardiotoxic patients deemed to be at potentially higher risk of late toxicity. It is yet to be established whether "recovered" patients with mild-to-moderate cardiotoxicity and asymptomatic or oligosymptomatic status possess a higher risk of late toxicity compared to those that developed severe toxicity with intense clinical symptomology.

Tissue Characterisation and CMR

Chemotherapy-associated myocardial oedema, diffuse interstitial fibrosis (collagen deposition in the absence of myocyte loss), and coarse replacement fibrosis (collagen deposition in the presence of myocyte necrosis) can be uniquely imaged with CMR-based T2 mapping, T1 mapping, and late-gadolinium enhancement (LGE) sequences, respectively [86] (Table 4). Given the lack of consensus on a precise LVEF-based definition of CTRCD, the increasing evidence for deformation imaging to provide incremental prognostic information, and a potential causative relationship between myocardial fibrosis development and reduced myocardial strain [32][33][34], there is increasing appeal for direct myocardial characterisation in the earlier detection of CTRCD. To date, only the presence or absence of LGE has been studied following trastuzumab therapy [69]. While T1 and ECV increase following anthracycline therapy [87,88], this has not been characterised for trastuzumab (Table 5).

Summary: When Should You Do a CMR for Trastuzumab?

In the 2016 ESC position paper, there was recognition of the value of CMR for the following: evaluating cardiac structure and function, identifying the cause of LV dysfunction, and distinguishing left and right ventricular function in difficult cases where other imaging modalities are unsuccessful [1]. Consistent with this are the consensus recommendations from the European Society for Medical Oncology (ESMO) and the joint guidelines from the BSE and BCOS, which recommend the utilisation of CMR if significant and unexplained discrepancies exist in echo-derived measures of LVEF and GLS [17,101]. While CMR is the reference standard procedure for assessing cardiotoxicity, it remains largely underutilised for breast cancer cardiotoxicity surveillance [16]. The choice of imaging modality depends on local expertise and availability; it is strongly encouraged that the imaging modality utilised for baseline assessment remains the same for the remainder of the treatment pathway. A potential protocol for CMR assessment of trastuzumab cardiotoxicity is illustrated in Figure 4. CMR is demonstrably superior to echo-based imaging of left ventricular function, whether by assessment of LVEF or strain, offering greater sensitivity and specificity in the detection of cardiotoxicity in patients receiving trastuzumab.
Furthermore, it offers the ability to assess for myocardial oedema, diffuse interstitial fibrosis, or replacement fibrosis. It also carries some limitations: currently published normal LVEF reference ranges show considerable overlap; application of CMR to patients would require a baseline CMR and regular surveillance scans, with associated healthcare costs; and it requires gadolinium administration. There are several important lines of enquiry to guide future research. Firstly, whether CMR-based detection of cardiotoxicity as assessed by LVEF and strain leads to improved outcomes compared with detection by echo remains to be determined. Secondly, whether CMR-based identification of oedema and fibrosis, particularly the type and distribution of the latter, leads to improved risk stratification in trastuzumab cardiotoxicity is unknown. Thirdly, can CMR, through tissue characterisation findings, detect features that suggest a greater likelihood of recovery from cardiotoxicity? Given the greater availability of echocardiography than CMR, these three questions are central to further research into the optimum detection, follow-up, and surveillance of cardiotoxicity in trastuzumab patients. In the interim, we agree on the current echo-based
6,374.2
2022-02-27T00:00:00.000
[ "Medicine", "Biology" ]
A Novel Fault Prediction Method of Wind Turbine Gearbox Based on Pair-Copula Construction and BP Neural Network : I. INTRODUCTION According to the structure of the transmission system, wind turbines (WTs) are mainly divided into two categories: doubly-fed WTs with a gearbox, and direct-drive WTs without a gearbox [1]. Because WTs operate in harsh environments for long periods [2] and are affected by factors such as random wind loads and acceleration and deceleration shocks [3]-[6], the gearbox has become one of the key components with a high failure rate [7]. The maintenance cost of the gearbox is also relatively high; its repair time and cost are among the highest of all major components. An effective way to reduce the cost of breakdown maintenance is to use condition monitoring technology for early detection of faults. Therefore, carrying out research on condition monitoring and fault early warning of the WT gearbox, and rationally adjusting operation and arranging maintenance according to the health degeneration trend of the gearbox, are of great significance for improving reliability and reducing maintenance costs. Gearbox condition monitoring methods mainly include oil analysis, vibration analysis, and SCADA data analysis. The essence of oil analysis is to analyze the physical and chemical properties of the lubricating oil. The health state of the gearbox is monitored through indicators such as metal abrasive particles, which directly reflect its mechanical deterioration, so the method has high accuracy [8]-[10]. However, the cost of metal abrasive particle sensors is high and the real-time performance of oil analysis is poor; it is therefore necessary to further investigate low-cost solutions [11]-[13]. Vibration analysis collects vibration signals from the parts of the gearbox most sensitive to fault features. Characteristic factors are extracted and analyzed by time-domain and frequency-domain signal processing methods. Vibration analysis has high sensitivity and good real-time performance, and it can perform full life-cycle condition monitoring and fault diagnosis of the gearbox. Li Z.X. et al. proposed a periodic potential underdamped stochastic resonance (PPUSR) method and used it to extract the characteristics of gearbox vibration signals; the results showed that the proposed method can detect gear wear faults and broken-tooth faults [14]. In the research of Li Y.B. et al., the vibration data were filtered using the Vold-Kalman filter (VKF), and fault features were extracted using refined composite multi-scale fuzzy entropy (RCMFE). Rolling bearing failure was adopted as an example to prove the effectiveness of the VKF-RCMFE method. Experimental results showed that the performance of the proposed method was better than that of RA-RCMFE and VKF-MFE, and it could accurately diagnose inner race, outer race, and ball faults [15]. However, early fault signals are much weaker than those of severe faults and are therefore difficult to detect, so the above feature extraction methods cannot achieve early fault prediction. In response to this problem, Lu L. et al. proposed using a deep belief network (DBN) to extract early fault features from vibration signals. A least squares support vector machine (LSSVM) was used to establish a fault prediction model, and early gearbox faults were detected [16].
However, most diagnostic methods based on vibration data provide after-the-fact diagnosis, so it is difficult for them to give more than one day of early warning when the gearbox has a potential failure. In addition, dedicated sensors must be installed to acquire vibration signals, which entails significant upfront investment and subsequent maintenance costs. In wind farms, the supervisory control and data acquisition (SCADA) system is widely adopted to monitor the operating status of WTs and their components. The SCADA system provides a rich data foundation for research on fault prediction technology, so mining the fault information contained in SCADA data is becoming a research hotspot. SCADA data-based condition monitoring methods are mainly divided into three categories: the first is based on temperature monitoring, the second on power curves, and the third on machine learning methods. In studies of condition monitoring based on SCADA data, many take temperature as the monitoring target [17]. In [18], gearbox oil temperature, nacelle temperature, and rotor rotations were used as inputs to predict output power. In [19], rear bearing temperature, active power output, nacelle temperature, and turbine speed were used as inputs, rear bearing temperature was used as the output, and a normal behavior model was established based on an artificial neural network (ANN). In [20], the bearing temperature was used as the predicted target variable, and a Time Delay Neural Network (TDNN) was used to establish generic models. In [21], bearing temperature was also selected as the prediction target variable, an abnormal level index (ALI) was defined to quantify the abnormal level of the prediction error of each selected model, and a fuzzy synthetic evaluation method was used to integrate the identification results. In [22], a health index based on temperature-related parameters was developed by comparing, within a certain time window, statistics of the predicted values of key temperature parameters against the measured values. In recent years, there have also been many studies that achieve condition monitoring by comparing power curves based on SCADA data. In [23], a method of evaluating WT performance based on a kernel method was proposed: first, the kernel method was used to estimate the distribution of power data, and then a similarity index was used to evaluate the abnormality of the power. In [24], Pandit used a Gaussian Process (GP) to model the power curve and monitored the abnormal state of the WT by comparing the difference between the normal and abnormal power curves. In [25], Pandit used a GP to model the power curve and realized the detection of absolute yaw error; the experimental results showed that the performance of the proposed method is better than an online power curve model and probabilistic assessment using binning. In [26], Sun Q.L. found that the rotor speed-power and rotor speed-pitch angle curves can accurately reveal WT abnormalities, and faults of the pitch system were accurately identified by the distance from the actual operating point to the theoretical curve. Although fault prediction and diagnosis can be realized through the above methods, they use few variables. A single variable is easily disturbed by the external environment, so the resulting signal contains considerable noise.
Consequently, weak fault information will be overwhelmed, and the timeliness and accuracy of fault prediction are affected. In order to make full use of SCADA data, mine fault information from multiple variables, and achieve more accurate fault prediction, a variety of machine learning algorithms have been applied to WT condition monitoring. In [27], a WT fault detection method based on expanded linguistic terms and rules using non-singleton fuzzy logic was proposed. In [28], a multivariable power curve model was constructed with a modified Cholesky decomposition Gaussian process (GP) and validated with Supervisory Control and Data Acquisition (SCADA) data. In [29], a stepwise data cleaning procedure was proposed through irregular space division and nonlinear space mapping; on this basis, an optimized least squares support vector machine (LSSVM) was selected to model the WT power curve (WTPC). In [30], a new condition monitoring approach was introduced for extracting fault signatures in WT blades by utilizing data from a real-time SCADA system, and a hybrid fault detection system based on a Generalized Regression Neural Network Ensemble for Single Imputation (GRNN-ESI) algorithm was proposed. In [31], a first attempt to use Dempster-Shafer (D-S) evidence theory for WT fault diagnosis on SCADA data was presented. Traditional parametric methods such as artificial neural networks and Gaussian mixture models suffer from shortcomings such as poor generalization ability and a limited ability to mine complex correlations among variables, which result in low modeling accuracy. Deep learning can make up for these deficiencies and effectively use massive SCADA data. In [32], Wan proposed a deep feature learning (DFL) approach for wind speed forecasting because of its advantages in both multi-layer feature extraction and unsupervised learning. In [33], a framework based on a deep neural network (DNN) was developed to monitor the condition of the WT gearbox and identify impending failures. In [34], Jiang proposed a novel fault detection method based on a denoising autoencoder (DAE). In [35], Wang established a normal behavior model of WTs based on deep belief networks; the optimized modeling method can capture the sophisticated nonlinear correlations among different monitoring variables, which helps enhance prediction performance. However, current deep learning models are poorly interpretable, good parameter tuning skills are required to ensure algorithm performance, and further research on practical engineering applications is necessary. A further disadvantage of deep learning is that data samples covering all fault types and all operating conditions are necessary to train a high-precision prediction model. There are two reasons why long-term data are difficult to retain: on the one hand, the storage capacity of the SCADA system is limited; on the other hand, a large amount of data needs to be collected during operation. Therefore, data samples covering all operating conditions and all failure types are scarce. The available data are mostly small-scale data sets close to the failure time, which makes research on condition monitoring methods based on small-scale data sets highly significant and valuable in application.
The innovations and main contributions of the proposed method are listed below. In this paper, conditional mutual information and the Pair-Copula model are introduced to tackle WT fault prediction. With the powerful variable filtering capability of conditional mutual information, redundant variables can be removed while retaining useful variables to the greatest extent. This method is used to solve the problem of filtering the model input variables from many candidate variables, thereby improving modeling accuracy and fault prediction accuracy. The Pair-Copula model can handle multiple variables and mine the correlations between them, overcoming the limitation that conventional Copula models can only handle two-dimensional variables. The Pair-Copula model combines three input variables into one variable, which greatly reduces the complexity of modeling. A complete fault prediction model is established based on the combination of the Pair-Copula model and a BP neural network. In order to solve the problem that the conventional Pair-Copula model cannot process real-time data, which is required for fault prediction, an improved Pair-Copula model combined with kernel density estimation is used to process the real-time data. Finally, the method in [42] has been modified and applied to the determination of the fault alarm threshold. This method can continuously update the threshold with changes in operating conditions and has better accuracy and flexibility. The rest of this paper is organized as follows. In Section 2, the proposed method is introduced, and the basic knowledge of each part of the method is explained in detail. Section 3 describes the modeling process and validates the effectiveness of the proposed method with actual SCADA data; the experimental results are discussed in Section 4. Finally, conclusions are drawn in Section 5. II. METHODS INTRODUCTION According to Sklar's theorem, if the marginal distributions of the random vector X = [X_1, X_2, ..., X_n] are denoted as F_1, F_2, ..., F_n and the corresponding joint distribution function is denoted as F, then there is a Copula distribution function C such that equation (1) holds for any X in R^n:

F(x_1, x_2, ..., x_n) = C(F_1(x_1), F_2(x_2), ..., F_n(x_n)).    (1)

In equation (1), [X_1, X_2, ..., X_n] is the input vector and [x_1, x_2, ..., x_n] is a data point of the vector; n is the number of variables in the vector. A. PRINCIPLE OF COPULA Copula functions include the Normal-Copula, the t-Copula, and the Archimedean-Copula function cluster. Only the Archimedean-Copula function cluster can describe asymmetric correlation. It includes three functions: Gumbel-Copula, Clayton-Copula, and Frank-Copula. The principle and characteristics of each function are as follows [36]. Gumbel-Copula:

C(u_1, ..., u_n; θ) = exp{ -[ Σ_{i=1}^{n} (-ln u_i)^θ ]^{1/θ} }.    (2)

In equation (2), U = (u_1, u_2, ..., u_n) is the input vector and n is the number of input variables. θ ∈ [1, +∞); when θ = 1, the random variables are independent of each other, and when θ → +∞, the random variables are completely dependent. The distribution of the Gumbel-Copula function is relatively sensitive to the upper-tail correlation among variables. Frank-Copula:

C(u_1, ..., u_n; θ) = -(1/θ) ln{ 1 + [ Π_{i=1}^{n} (e^{-θu_i} - 1) ] / (e^{-θ} - 1)^{n-1} }.    (3)

In equation (3), θ ∈ (-∞, +∞) \ {0}. When θ is positive, the random variables are positively correlated; when θ is negative, the random variables are negatively correlated. The distribution of the Frank-Copula function is relatively even and can describe the overall correlation among variables well.
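To make the three Archimedean families concrete, the following minimal Python sketch evaluates the multivariate Gumbel-, Frank-, and Clayton-Copula functions at a vector of marginal CDF values; the Clayton form used here is the one given in equation (4) below. The function names and the example θ values are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def gumbel_copula(u, theta):
    """Gumbel-Copula (eq. 2): exp(-(sum((-ln u_i)^theta))^(1/theta)), theta >= 1.
    Sensitive to upper-tail dependence."""
    u = np.asarray(u, dtype=float)
    return np.exp(-np.sum((-np.log(u)) ** theta) ** (1.0 / theta))

def frank_copula(u, theta):
    """Frank-Copula (eq. 3), theta != 0; describes overall dependence evenly."""
    u = np.asarray(u, dtype=float)
    num = np.prod(np.expm1(-theta * u))          # prod of (e^{-theta*u_i} - 1)
    den = np.expm1(-theta) ** (len(u) - 1)       # (e^{-theta} - 1)^{n-1}
    return -np.log1p(num / den) / theta

def clayton_copula(u, theta):
    """Clayton-Copula (eq. 4): (sum(u_i^-theta) - n + 1)^(-1/theta), theta > 0.
    Sensitive to lower-tail dependence."""
    u = np.asarray(u, dtype=float)
    return (np.sum(u ** (-theta)) - len(u) + 1.0) ** (-1.0 / theta)

# Example: joint probability of two marginal CDF values under each family.
print(gumbel_copula([0.8, 0.9], theta=2.0))   # ~0.781
print(clayton_copula([0.8, 0.9], theta=2.0))  # ~0.746
print(frank_copula([0.8, 0.9], theta=5.0))    # ~0.757
```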
Clayton-Copula:

C(u_1, ..., u_n; θ) = ( Σ_{i=1}^{n} u_i^{-θ} - n + 1 )^{-1/θ}.    (4)

In equation (4), θ ∈ (0, +∞). If θ → 0, the random variables tend to be independent; if θ → +∞, they tend to be completely dependent. The Clayton-Copula function has the opposite characteristics to the Gumbel-Copula function and is sensitive to the lower-tail correlation among variables. B. PAIR-COPULA CONSTRUCTION Aiming at the problem that the correlations among WT parameters are complex and difficult to model accurately, Pair-Copula is used to model the correlation between multiple parameters. The Pair-Copula method was first proposed by Aas et al. [37]. The n-dimensional Pair-Copula model has n - 1 layers, denoted T_p (p = 1, 2, ..., n - 1). Each layer has a root node connected to the other nodes, and the multivariate probability distribution is constructed by merging these nodes hierarchically. The structure is shown in Fig. 1. Each node in the figure is a binary Copula function, and u_i is the distribution function value corresponding to the i-th input variable. In equations (5) and (6), j = 2, 3, ..., n - 1 and t = 1, 2, ..., n - j. The structure of the Pair-Copula contains multiple nodes corresponding to multiple types of binary Copula functions, which makes the flexibility and fitting accuracy of the Pair-Copula model higher than those of conventional Copula functions [37]. C. FITTING ACCURACY EVALUATION It is necessary to calculate the fitting accuracy in order to quantitatively evaluate the fitting effect of the probability model. In this paper, the Euclidean distance (ED) is used as the evaluation index; its calculation is shown in equation (7):

ED = sqrt( Σ_{i=1}^{m} [ C_em(u_{1i}, ..., u_{ni}) - C(u_{1i}, ..., u_{ni}) ]^2 ).    (7)

One of the data samples in the n-dimensional input vector X is denoted as (x_{1i}, x_{2i}, ..., x_{ni}) (i = 1, 2, ..., m), where m is the number of data samples contained in X. C_em is the empirical distribution function, and C is the Copula function obtained by training. The smaller the ED, the better the Copula function fits the distribution of the input vector. D. OVERALL FRAMEWORK The overall framework of this paper is shown in Figure 2. During the model training phase, the Pair-Copula model is used to fit the complex correlations among the three input variables and construct a multivariate joint distribution based on historical SCADA data in the normal state. The output value of the Pair-Copula model is called the "state parameter". The gearbox bearing temperature at moment t - 1, T_B(t - 1), and the state parameter are taken as the inputs of a BP neural network, and the gearbox bearing temperature T_B(t) is used as the prediction target. The BP neural network model is trained with historical SCADA data in the normal state. During the real-time monitoring phase, the Pair-Copula model and the BP model trained in the training phase are used to predict the gearbox bearing temperature, with real-time SCADA data as input. The residual between the predicted value and the actual value is calculated, and whether the WT has a potential fault can be judged by whether the residual exceeds the threshold. The specific steps are as follows: (1) Variable selection: wind speed (WS), main shaft rotation speed (RS), active power (AP), and the gearbox bearing temperature at moment t - 1, T_B(t - 1), are selected as input variables, and the gearbox bearing temperature T_B(t) is selected as the predicted target. (2) The Copula function types of each node in the Pair-Copula structure are determined, and the optimal model parameters are calculated. The Pair-Copula model of the WT in the normal state is trained, and the state parameter is calculated.
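The fitting-accuracy index of equation (7) (Section C above) translates directly into code. A sketch follows, where U is a placeholder for an (m, n) array of pseudo-observations (marginal CDF values) and copula is one of the functions sketched earlier:

```python
import numpy as np

def empirical_copula_at(U, pt):
    """Empirical copula C_em(pt): fraction of samples componentwise <= pt."""
    return np.mean(np.all(U <= pt, axis=1))

def euclidean_fit_distance(U, copula, theta):
    """ED of eq. (7): distance between the empirical copula and the fitted
    copula C(.; theta), evaluated at the pseudo-observations themselves."""
    diffs = [empirical_copula_at(U, pt) - copula(pt, theta) for pt in U]
    return np.sqrt(np.sum(np.square(diffs)))
```

A smaller returned value indicates a better fit, matching the selection rule applied to the node functions later in the paper.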
(3) The BP neural network parameters are determined. The state parameter and T_B(t - 1) are taken as the inputs of the BP neural network, the gearbox bearing temperature T_B(t) is taken as the prediction target variable, and the BP neural network is trained to produce the prediction model of the WT in the normal state. (4) The real-time state parameter can be calculated by inputting the real-time SCADA data into the Pair-Copula model trained in step (2). The predicted value of the gearbox bearing temperature can be calculated by inputting the real-time T_B(t - 1) and the real-time state parameter into the BP model trained in step (3). (5) The residual between the predicted value and the actual value is calculated. (6) The threshold is calculated from the residual in the normal state and is used to determine whether the gearbox has a potential fault. III. CASE STUDY Real SCADA data of a doubly-fed WT in Fujian, China are used to verify the actual effect of the proposed method. The rated power of the WT is 1.5 MW, the cut-in wind speed is 3 m/s, and the cut-out wind speed is 25 m/s. The SCADA system records operational data every 10 minutes. At 10:20 on July 13, the WT was shut down due to a gearbox bearing failure. The SCADA data were divided into two parts. The first part was collected before the shutdown caused by the fault, 14,000 data samples in total; this part is called the fault data set (FDS). The other part was collected after the WT was repaired, also 14,000 data samples in total; this part is called the health data set (HDS). When the two data sets are used in this research, samples close to the cut-in wind speed and samples exceeding the rated wind speed are removed, leaving 9000 data samples. In chronological order, 3500 samples are selected as training data, 1000 samples as testing data, and 4500 samples as verifying data. The two data sets are allocated in the same way, as shown in Table 1. A. VARIABLES SELECTION There are 19 parameters in the SCADA system, as shown in Table 2. Gearbox bearing failure is the actual example in this paper, and gearbox bearing temperature is the most relevant variable that can directly reflect the health state of the gearbox bearing. Hence, gearbox bearing temperature is selected as the dominant variable in the variable screening process. This is also why gearbox bearing temperature is selected as the predicted target variable in Fig. 2. The conditional mutual information (CMI) is then used to screen the effective auxiliary variables from the other 18 parameters. Information entropy is an index measuring the uncertainty of a random variable. The information entropy H(M) of a random variable M is calculated by the following formula [38]:

H(M) = - ∫ f(m) ln f(m) dm.    (8)

In the equation, f(m) is the probability distribution function of M. The information entropy of the two-dimensional random variable (M, N) is called the joint entropy H(M, N), and its calculation formula is:

H(M, N) = - ∬ f(m, n) ln f(m, n) dm dn.    (9)

Here f(m, n) is the joint distribution of (M, N). If M is known, the information entropy of N is called the conditional entropy H(N|M), calculated by equation (10):

H(N|M) = - ∬ f(m, n) ln f(n|m) dm dn.    (10)

In the equation, f(n|m) is the conditional probability distribution of N. The mutual information I(M, N) reflects the amount of information about random variable N contained in random variable M. The calculation formula is equation (11):

I(M, N) = H(N) - H(N|M) = ∬ f(m, n) ln [ f(m, n) / (f(m) f(n)) ] dm dn.    (11)

The larger the mutual information, the more information about N is contained in M, and the stronger the correlation between M and N.
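The entropies in equations (8)-(11) are defined for continuous densities; in practice they can be approximated from histograms. The sketch below estimates mutual information and, as used in the screening step described next, a conditional mutual information with one conditioning variable. The bin count is an assumption of ours:

```python
import numpy as np

def entropy_nd(cols, bins=10):
    """Histogram estimate of the joint entropy H of one or more variables."""
    h, _ = np.histogramdd(np.column_stack(cols), bins=bins)
    p = h / h.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return -np.sum(p * np.log(p))

def mutual_information(m, n, bins=10):
    """I(M, N) = H(M) + H(N) - H(M, N), cf. eq. (11)."""
    return entropy_nd([m], bins) + entropy_nd([n], bins) - entropy_nd([m, n], bins)

def conditional_mi(n, ms, z, bins=10):
    """I(N; Ms | Z) = H(N, Z) + H(Ms, Z) - H(Z) - H(N, Ms, Z)."""
    return (entropy_nd([n, z], bins) + entropy_nd([ms, z], bins)
            - entropy_nd([z], bins) - entropy_nd([n, ms, z], bins))
```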
If the auxiliary variable set is a multidimensional random variable M = {M_1, M_2, ..., M_s} and the dominant variable is a one-dimensional random variable N, then the conditional mutual information between the auxiliary variable M_s and the dominant variable N is I(N; M_s | M_1, M_2, ..., M_{s-1}), calculated as:

I(N; M_s | M_1, ..., M_{s-1}) = H(N | M_1, ..., M_{s-1}) - H(N | M_1, ..., M_s).    (12)

In this paper, the dominant variable N is the gearbox bearing temperature. The auxiliary variables M_1, ..., M_s correspond to the 18 variables in Table 2 other than the dominant variable. According to formula (12), the conditional mutual information between the 18 parameters and the gearbox bearing temperature can be calculated, s = 1, 2, ..., 18. The variables with conditional mutual information I > 0.4 are selected as effective auxiliary variables. A total of 3 variables are screened out, namely main shaft rotation speed, wind speed, and active power. In addition, owing to the specific heat capacity of the solid components, the temperature change is affected by the temperature at the previous moment. Therefore, when the gearbox bearing temperature T_B(t) is predicted, the influence of the temperature at moment t - 1, T_B(t - 1), should be considered. Summarizing the above, a total of 4 variables (wind speed, active power, main shaft rotation speed, and gearbox bearing temperature) are chosen to establish the normal behavior model of the WT's gearbox bearing. Wind speed, active power, main shaft rotation speed, and the gearbox bearing temperature at moment t - 1, T_B(t - 1), are used as input variables, and the gearbox bearing temperature T_B(t) is used as the predicted target variable, as shown in Fig. 2. B. ESTABLISHMENT OF PAIR-COPULA MODEL The modeling process of the Pair-Copula model is described in this section. The premise of the Pair-Copula model is that the distributions and probabilities of the input variables are known. However, the distribution of the full population cannot be reflected by a small number of data samples, and if a large number of data samples were accumulated, the timeliness of fault prediction would be seriously reduced. To solve these problems, kernel density estimation (KDE) is used to estimate the probability of the real-time input data. Suppose x_1, x_2, ..., x_N is a set of normalized wind speed data with N samples, f(x) is the original probability density function of the wind speed, and f̂(x) is the KDE of f(x), expressed as:

f̂(x) = (1/(Nh)) Σ_{i=1}^{N} K( (x - x_i) / h ),    (13)

where h is the window width, K(u) is the kernel function, and x_i is the i-th wind speed data sample. The Gaussian kernel function is chosen in this paper, that is,

K(u) = (1/√(2π)) exp(-u²/2).    (14)

The estimate f̂(x) is then expressed by equation (15):

f̂(x) = (1/(Nh√(2π))) Σ_{i=1}^{N} exp( -(x - x_i)² / (2h²) ).    (15)

The estimation error is calculated by the root mean square error. The window width h plays a local smoothing role for f̂(x). If h is too small, the estimation bias is improved but the randomness is increased, resulting in an irregular shape of f̂(x). If h is too large, f̂(x) will be too smooth and the detailed features of f(x) cannot be displayed sufficiently [39]. In this paper, an iterative method is used to determine the window width of the KDE. As shown in Fig. 3, the probability densities of the three input variables (wind speed, active power, and main shaft rotation speed) are estimated by KDE, and the three probability densities are then input to the Pair-Copula model to calculate the intermediate variable "state parameter", which is a joint distribution of the three input variables and has no practical physical meaning.
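A direct implementation of the Gaussian-kernel estimate in equation (15) is short. The window width and the placeholder data below are illustrative only; the paper determines h iteratively:

```python
import numpy as np

def gaussian_kde_1d(samples, h):
    """Gaussian KDE of eq. (15):
    f_hat(x) = 1/(N*h*sqrt(2*pi)) * sum_i exp(-(x - x_i)^2 / (2*h^2))."""
    samples = np.asarray(samples, dtype=float)
    def f_hat(x):
        u = (x - samples) / h
        return np.mean(np.exp(-0.5 * u ** 2)) / (h * np.sqrt(2.0 * np.pi))
    return f_hat

# The density at a real-time observation feeds the Pair-Copula model
# as a marginal estimate.
wind_speed = np.random.rand(500)              # placeholder normalized data
f_hat = gaussian_kde_1d(wind_speed, h=0.05)   # h is a hypothetical value
print(f_hat(0.5))
```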
If a key component of the WT has a potential fault, the relationship between the three input variables changes from the normal state, which is reflected in the distance by which the three-dimensional joint distribution deviates from the normal state. If the distance is large, the health state of the WT is worse; if the distance is small, the health state is good. Therefore, although the state parameter has no practical physical meaning, it can represent the health state of the WT. Fig. 4 shows the distribution of the state parameter corresponding to the HDS and the FDS. It can be seen from the figure that when there is a potential fault in the WT, the distribution of the state parameter changes significantly. This indicates that whether the WT is faulty can also be judged by comparing the distributions. However, this method needs to accumulate many data samples for each comparison, and the accumulation of data samples takes a long time; the fault may evolve from a minor fault to a serious fault within this period. The distribution comparison method therefore seriously reduces the timeliness of fault prediction and cannot meet the requirements of real-time fault prediction. Aiming at this problem, a combined model of Pair-Copula and BP neural network is proposed, which uses the change in the residual between predicted and actual values to realize real-time fault prediction. The Pair-Copula model is used to fit the relationship among the three input variables, and the three variables are merged into one variable, which effectively reduces the number of input variables. The relationship between the inputs and output of the prediction model is thus simplified. Based on the above analysis, a learning algorithm with strong learning ability and a simple structure can meet the modeling need. The BP neural network is a basic, widely used neural network with good learning performance; therefore, the BP neural network is selected as the complementary model. In the rest of this section, the modeling process of the Pair-Copula model is described in detail. First, wind speed, main shaft rotation speed, and active power are used as inputs to the Pair-Copula model, and the correlation coefficients between each pair of variables are calculated, as shown in Table 3. Main shaft rotation speed has a strong correlation with both wind speed and active power, so main shaft rotation speed is selected as the input variable corresponding to the root node in Fig. 3. The three input variables are combined into one variable (the state parameter) by the Pair-Copula model. Determining the function types of the node Copula functions c_{1,2}, c_{1,3}, c_{2,3|1} and the corresponding optimal parameters is the key step in establishing the Pair-Copula model. 1) CORRELATION BETWEEN MAIN SHAFT ROTATION SPEED AND ACTIVE POWER The node c_{1,2} is the connection point between u_1 and u_2, where u_1 corresponds to main shaft rotation speed and u_2 corresponds to active power. The frequency histogram of main shaft rotation speed and active power is shown in Fig. 5; it can be seen that the correlation is asymmetric. According to the characteristics of the three types of Copula functions in Section 2.1, only Gumbel-Copula, Frank-Copula, and Clayton-Copula can be used to fit the asymmetric correlation. The correlation between main shaft rotation speed and active power is fitted by the three functions, respectively. Table 4 shows the parameter values obtained by maximum likelihood estimation.
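Maximum likelihood estimation of θ, as used for Table 4, requires the copula densities. A lighter alternative sometimes used in practice, sketched here in place of MLE rather than reproducing the paper's procedure, inverts Kendall's tau, which has closed forms for the Gumbel (θ = 1/(1 - τ)) and Clayton (θ = 2τ/(1 - τ)) families:

```python
from scipy.stats import kendalltau

def gumbel_theta(x, y):
    """Moment estimate of the Gumbel-Copula parameter: theta = 1 / (1 - tau)."""
    tau, _ = kendalltau(x, y)
    return 1.0 / (1.0 - tau)

def clayton_theta(x, y):
    """Moment estimate of the Clayton-Copula parameter: theta = 2*tau / (1 - tau)."""
    tau, _ = kendalltau(x, y)
    return 2.0 * tau / (1.0 - tau)
```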
Before comparing the various Copula functions, the definition of the empirical Copula function must be introduced. Definition (empirical Copula): let (x_i, y_i) (i = 1, 2, ..., n) be a data sample taken from the two-dimensional vector (X, Y), and let the empirical distribution functions of X and Y be F(x) and G(y), respectively. The empirical Copula of u and v is calculated by the following formula [40]:

C_e(u, v) = (1/n) Σ_{i=1}^{n} 1[F(x_i) ≤ u] · 1[G(y_i) ≤ v],    (16)

where 1[·] is the indicator function. The fitting effect is evaluated by the squared Euclidean distance (SED)

d = Σ_{i=1}^{n} [ C_e(u_i, v_i) - C(u_i, v_i) ]²,

where n is the number of data samples in the input vector, C_e(*) is the empirical Copula function, and C(*) is the evaluated Copula function. The smaller the value of d, the smaller the distance between the fitted distribution and the empirical Copula, and the better the fitting effect. Analysis of Fig. 6 shows that the differences among the figures are small, indicating that the fitting results of the above three functions are all very close to the empirical Copula. Therefore, it is necessary to further evaluate the fitting effect using the SED values. Since d_G is the smallest, it can be determined that Gumbel-Copula is the most suitable function to fit the correlation between main shaft rotation speed and active power. It can be seen from Fig. 5 that the distribution of main shaft rotation speed and active power is denser at the upper tail, and the Gumbel-Copula function is characterized by a better description of the upper-tail characteristic; this is consistent with the analysis by the SED values. 2) CORRELATION BETWEEN MAIN SHAFT ROTATION SPEED AND WIND SPEED The Copula function selection process for the node c_{1,3} is similar to part (1). It can be seen from the frequency histogram in Fig. 7 that the correlation between main shaft rotation speed and wind speed at the lower tail is strong, and the correlation is also asymmetric. Among the fitting results, d_C is the smallest, indicating that the Clayton-Copula function has the best fitting effect. According to the characteristics of the Clayton-Copula, the lower-tail correlation is better described by this function; the judgment by the SED is consistent with the characteristics of Fig. 7. The Copula function of the second-layer node c_{2,3|1} is determined in the same way as in parts (1) and (2). The frequency histogram of the first-layer nodes c_{1,2} and c_{1,3} is shown in Fig. 9. The SED values determine that the optimal function for the node c_{2,3|1} is Frank-Copula. C. MODEL PERFORMANCE EVALUATION In this paper, the combination of the Pair-Copula model and a BP neural network is used as the gearbox fault prediction model of the WT. In order to prove the effectiveness of the proposed method, four methods are adopted to establish four models, and their respective performances are compared. The modeling methods and model types are as follows: (1) BP model: wind speed, main shaft rotation speed, active power, and T_B(t - 1) are taken as inputs, and the gearbox bearing temperature is used as the target variable to train a BP neural network and obtain the corresponding prediction model. (2) SVM model: wind speed, main shaft rotation speed, active power, and T_B(t - 1) are taken as inputs, the gearbox bearing temperature is taken as the predicted target variable, and an SVM is trained to obtain the corresponding prediction model. (3) C-BP model: the state parameter output by the Pair-Copula model and T_B(t - 1) are taken as inputs to a BP neural network, as in the overall framework. (4) C-SVM model: the same inputs as in (3) are used to train an SVM. In order to evaluate the accuracy of each model, three indexes are introduced, namely RMSE, MAE, and R2, defined as follows [41]:

RMSE = sqrt( (1/n) Σ_{i=1}^{n} (o_i - ô_i)² ),
MAE = (1/n) Σ_{i=1}^{n} |o_i - ô_i|,
R2 = 1 - Σ_{i=1}^{n} (o_i - ô_i)² / Σ_{i=1}^{n} (o_i - ō)²,

where o_i is the i-th actual value, ô_i is the i-th predicted value, and ō is the mean of the actual values.
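The three indexes translate directly into code; a minimal sketch:

```python
import numpy as np

def regression_metrics(actual, predicted):
    """RMSE, MAE and R2 as defined above."""
    o = np.asarray(actual, dtype=float)
    o_hat = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((o - o_hat) ** 2))
    mae = np.mean(np.abs(o - o_hat))
    r2 = 1.0 - np.sum((o - o_hat) ** 2) / np.sum((o - o.mean()) ** 2)
    return rmse, mae, r2
```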
The smaller the values of RMSE and MAE, the smaller the error between the predicted and actual values; the larger the value of R2, the better the overall fitting effect. The prediction accuracies of the four models are shown in Table 5. It can be seen from the table that the prediction accuracy of BP is similar to that of SVM, and the accuracy of C-BP is similar to that of C-SVM. Moreover, the prediction accuracies of C-BP and C-SVM are significantly higher than those of the other two models, which indicates that the addition of the Pair-Copula model improves prediction accuracy. The main reasons are twofold. First, the multi-parameter joint distribution density model is constructed and the complex correlations among multiple variables are captured by the Pair-Copula model; the shortcomings of conventional learning algorithms in capturing complex correlations are compensated for, and the accuracy of the prediction model is improved. Second, the dimension of the input variables is reduced by the Pair-Copula model, which greatly reduces the complexity of the complementary model and indirectly improves prediction accuracy. According to the results in Table 5, the prediction accuracy of C-BP is slightly higher than that of C-SVM, so C-BP is selected as the final prediction model. A. THRESHOLD CALCULATION The MD-Weibull method was proposed in [42] and used for fault detection in notebook computers. In this paper, the method is modified to suit the needs of this study, and the modified MD-Weibull method is used to calculate the threshold. The Mahalanobis distance (MD) is a unitless distance measurement that can be used to calculate the degree to which a single data sample deviates from the mean of a sample set. In this paper, the residual data are used as the measurement object, and the MD value of each residual data sample is calculated. The residual data sequence is denoted as E, and the i-th data sample in the sequence is denoted as e_i, where i = 1, 2, ..., n and n is the number of data samples in the sequence. Before the MD value is calculated, the data need to be normalized using formula (21):

z_i = (e_i - Ē) / S,    (21)

where Ē is the mean of E and S is the standard deviation, calculated as:

S = sqrt( (1/(n - 1)) Σ_{i=1}^{n} (e_i - Ē)² ).    (22)

Since the residual data form a 1-dimensional vector, the calculation formula of the MD can be simplified to:

MD_i = sqrt(z_i²) = |e_i - Ē| / S.    (23)

The MD values of all elements in E are calculated by formula (23), and the MD data set is obtained. It can be seen from the histogram in Fig. 10 that the distribution of the MD data set is basically in line with a Weibull distribution, so the 2-parameter Weibull distribution is used to fit the MD data set. The 2-parameter Weibull distribution can be expressed by formula (24):

f(x) = (β/η) (x/η)^{β-1} exp( -(x/η)^β ),  x ≥ 0,    (24)

where β is the shape parameter, which determines the form of the graph, and η is the scale parameter, which determines the spread of the distribution. The parameters β and η of the Weibull distribution can be estimated by maximum likelihood estimation. The Weibull distribution's CDF is used to calculate the fault alarm threshold. The CDF is defined as

F(x) = 1 - exp( -(x/η)^β ).    (25)

Fig. 11 shows the CDF curve. V% in the figure is defined as the normal interval: residual values within this range are normal, and residual values outside the range are abnormal. Taking Fig. 11 as an example and assuming V% is equal to 70%, the corresponding MD value D_V can be obtained from:

v_a = D_V = η ( -ln(1 - V%) )^{1/β}.    (26)

In formula (26), v_a is the corresponding fault alarm threshold when the normal interval is set to 70%.
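A compact sketch of the modified MD-Weibull threshold follows. Treating the 1-D Mahalanobis distance as the absolute normalized residual is our reading of equations (21)-(23), and scipy's weibull_min is used for the 2-parameter fit with the location fixed at 0:

```python
import numpy as np
from scipy.stats import weibull_min

def md_weibull_threshold(residuals, normal_interval=0.70):
    """Sketch of the modified MD-Weibull threshold:
    1. normalize residuals and take |z| as the 1-D Mahalanobis distance;
    2. fit a 2-parameter Weibull to the MD values (location fixed at 0);
    3. invert the Weibull CDF at V% to obtain the alarm threshold D_V."""
    e = np.asarray(residuals, dtype=float)
    md = np.abs((e - e.mean()) / e.std(ddof=1))
    beta, _, eta = weibull_min.fit(md, floc=0)  # shape beta, scale eta
    # F(x) = 1 - exp(-(x/eta)^beta)  =>  D_V = eta * (-ln(1 - V))^(1/beta)
    return eta * (-np.log(1.0 - normal_interval)) ** (1.0 / beta)
```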
The normal interval V% can be set to any value within 0-100% according to the actual situation. The three thresholds in Fig. 12 correspond to the three normal intervals of 70%, 80%, and 90%, respectively. Fig. 12(a) shows the original residual data calculated by the C-BP model with the HDS. Fig. 12(b) is obtained after the original residual data are processed by the sliding window method, where the window width w = 1000, the number of original residual data n_o = 4500, and the number of data after processing n_p = 3500, with n_p = n_o - w. The sliding window method is defined as:

e_w(i) = (1/w) Σ_{j=r+1}^{i} e(j),    (27)

where i = w, w + 1, ..., n and r = i - w; e_w is the residual value after sliding window processing, e is the original residual value, w is the window width, and n is the number of original residual data. In engineering applications, the system can calculate the normal interval V% based on the peak value of the residual fluctuation when the WT is in the normal state, and the alarm threshold is then calculated automatically. This function can be achieved through simple programming without theoretical difficulty. Since different faults correspond to different monitored index variables, the corresponding residuals and thresholds differ; in other words, the threshold is related to the type of failure. In this paper, the normal interval and threshold can be determined by analyzing Fig. 12(b). For the C-BP model, setting the normal interval to 70% ensures that the residual values of the HDS stay below the threshold; that is, when the WT is in a healthy state, no false alarm occurs. However, the residual fluctuation of the BP model is relatively large, so a higher threshold needs to be determined. In order to make a unified comparison of the two models, the normal interval is set to 95%, and the corresponding threshold is 0.0187, as shown in Fig. 14 and Fig. 15. B. RESULT ANALYSIS The actual and predicted values of the BP model based on the HDS are compared in Fig. 13(a). In order to clearly show the fluctuation of the curves, data points 1-500 are selected for comparison. It can be seen from Fig. 13(a) that although the overall trends of the predicted and actual values are consistent, the pointwise error is large and the prediction accuracy is low. The predicted and actual values of the C-BP model are compared in Fig. 13(b). By comparing Fig. 13(a) and (b), it can be found that the prediction accuracy of the C-BP model is significantly higher than that of the BP model, and the prediction error is significantly reduced. This result indicates that the accuracy of the prediction model can be effectively improved by the Pair-Copula model. Fig. 14 compares the BP and C-BP prediction residuals based on the HDS, with the original residual data processed by the sliding window method. As shown in Fig. 14, there is a large fluctuation in the residual curve of the BP model, which seriously affects the result of fault prediction. In comparison, the variation of C-BP is small, and the overall curve is relatively stable. Since the WT is in the normal state, the residual curve should theoretically be stable overall, with no upward trend; the residual curve of C-BP is therefore more authentic and representative. The residuals of BP and C-BP based on the FDS are compared in Fig. 15, with the same threshold as in Fig. 14. The residual curve of C-BP shows an increasing trend, because the deviation between the predicted and actual values of the gearbox bearing temperature increases with time.
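Returning to the sliding-window smoothing of equation (27) used for Fig. 12(b) and Fig. 14: it is a trailing moving average, and a vectorized sketch using a cumulative sum is given below, with the first full window dropped so that the output length matches the paper's n_p = n_o - w:

```python
import numpy as np

def sliding_window(e, w=1000):
    """Trailing moving average of eq. (27): e_w(i) = mean of e[i-w+1 .. i]."""
    e = np.asarray(e, dtype=float)
    c = np.cumsum(np.insert(e, 0, 0.0))
    means = (c[w:] - c[:-w]) / w   # n - w + 1 full windows
    return means[1:]               # drop one window to keep n - w values
```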
The upward trend of the curve indicates that the gearbox bearing temperature gradually deviates from the normal state, which intuitively reflects the degradation tendency of the gearbox bearing. The residual curve corresponding to the C-BP model exceeds the threshold at the 843rd data point and then rises continuously, which can be used as an early alarm point for the WT. In contrast, the prediction residual curve of the BP model fluctuates greatly, so the fault cannot be predicted effectively. Although the curve exceeds the threshold at the 549th point, it stays above the threshold only very briefly before falling back below it, so this point cannot be used as a valid alarm point. At the 2526th data point, the curve again exceeds the threshold, remains above it for a longer period, and then falls below the threshold again; the 2526th data point can therefore barely be used as an alarm point for the BP model. In summary, although both the BP model and the C-BP model can implement fault prediction, the performance of the C-BP model is significantly better in terms of prediction accuracy and timeliness. The SCADA data sampling interval is 10 minutes. As discussed above, the alarm point of the C-BP model is the 843rd data point. From this point to the WT's shutdown there are 2657 sample points, which converts to 443 hours, about 18.5 days. The alarm point of the BP model is the 2526th sample point, with 974 sample points between this point and the shutdown, which converts to 162 hours, about 6.8 days. This analysis shows that the prediction effect of the C-BP model is significantly better than that of the BP model, which once again proves the effectiveness of the proposed method. V. CONCLUSION Based on the Pair-Copula model, a novel WT gearbox fault prediction method is proposed in this paper, and its superiority and effectiveness are verified with actual SCADA data. The work of the paper is summarized as follows: (1) Conditional mutual information is introduced for variable screening, and 3 variables are selected from 18. Comparative analysis shows that all 3 variables are effective auxiliary variables, and generator rotation speed is accurately eliminated as a variable redundant with main shaft rotation speed. These results show that conditional mutual information can retain useful variables and accurately eliminate redundant ones. (2) The good performance of the Pair-Copula model in mining correlations among multiple variables and establishing a multidimensional joint distribution is demonstrated. The Pair-Copula model is combined with SVM and BP neural networks to form two combined models. The experimental results show that the prediction accuracies of the two combined models are significantly higher than those of the original models, and the key functions of the Pair-Copula are fully reflected. (3) The conventional Pair-Copula model cannot process real-time data, but fault prediction requires real-time calculation. To solve this problem, kernel density estimation is adopted to modify the Pair-Copula model so that it can process real-time data. (4) The proposed method is more suitable for small-scale data samples. Experimental results show that the proposed method has higher prediction accuracy and can identify potential faults earlier than conventional learning algorithms.
Finding potential faults early and taking preventive measures can ensure the safe operation of WTs and reduce maintenance costs. Although the gearbox bearing fault is taken as the case in this paper, the proposed method may also be effective in predicting other faults; further exploring its application to other faults is planned as follow-up work. In addition, the research results of references [43]-[48] show that air density has a significant effect on the output power of WTs, so a SCADA data processing method that takes the influence of air density into account is also one of the future research directions.
9,977.8
2020-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
BUYING TIME: DETECTING VOCS IN SARS-COV-2 VIA CO-EVOLUTIONARY SIGNALS : Introduction Current genomic sequencing efforts facilitate virological and epidemiological surveillance close to real time. The challenge is to efficiently identify variants within these viral sequences that pose further threats. We present here a bottom-up framework facilitating the rapid detection of variants of interest (VOI) and of concern (VOC), given a time series of multiple sequence alignments (MSA) consisting of viral genomes. The key idea is to identify maximal sets of sites exhibiting co-evolutionary signal within the MSA instead of considering the emergence of particular mutations. These signals naturally induce a complex of motifs formed by sets of co-evolving sites. The sites are selected on the basis of exhibiting sufficient mutational activity and satisfying a certain diversity criterion. Higher-dimensional simplices are constructed using distances capturing the co-evolutionary coupling of pairs. Via this method we develop and analyze an alert protocol: an alert is triggered by a cluster exhibiting a significant fraction of newly emerging sites. Our alerts are then put to the test by retrospectively analyzing SARS-CoV-2 sequence data collected from November 2020 through August 2021. That is, we issue alerts based on "historic" data rather than in real time. These alerts are issued with no a priori assumptions, except for the Wuhan reference sequence upon which the MSAs are built. MSAs are constructed on a weekly basis for England, the USA, India, and South America (SA). Thereafter we relate our alerts to established VOIs and VOCs, i.e. employing the a posteriori knowledge of VOI/VOC designations and lineages in order to evaluate the accuracy of the issued alerts. We remark that an alert does not provide any biological semantics; its primary purpose is to enable a fast biological analysis of a handful of critical sites. Background Genomic surveillance plays an instrumental role in combating rapidly mutating RNA viruses [14]. In particular, it is becoming a vital necessity in the effective mitigation and containment of the COVID-19 pandemic [41,8]. While mRNA vaccine development and distribution were successful in the US, recent VOCs, in particular the Delta variant [32], raise questions regarding the efficacy of current vaccines, and timely vaccine development necessitates the rapid recognition of critical adaptations within SARS-CoV-2. Genomic surveillance leverages applications of next-generation sequencing and phylogenetic methods to detect variants that are phenotypically or antigenically different, facilitating early anticipation and effective mitigation of potential viral outbreaks. One of the central tasks in genomic surveillance is to identify emerging variants that are more virulent or more resistant to available vaccines. The designation of SARS-CoV-2 variants of concern/interest exemplifies such an identification process [26,48].
Currently, the designation of such a VOC is based on phylogenetic methods and involves four steps: lineage assignment, mutation extraction, biological analysis, and declaration. First, a large phylogenetic tree is constructed from publicly available SARS-CoV-2 genomes, and its sub-trees are examined and cross-referenced against epidemiological information to designate new lineages [3,40]. Secondly, a collection of mutations frequently observed in a lineage is extracted and defined to be characteristic. Thirdly, the biological impact of this collection of mutations is analyzed in wet-lab/in silico experiments. Finally, in the wake of identified biological features, such as an increase in transmissibility or severity, the lineage/variant is declared a VOC. Population-based approaches were developed to complement phylogeny-based methods with the goal of rapidly identifying and monitoring critical mutations on the SARS-CoV-2 genome. Frequency analysis is widely used to monitor variant circulation [27,35]: the increasing prevalence of a mutation might indicate the emergence of a new variant. Entropy measurements, derived from nucleotide frequencies, highlight nucleotide positions with high variation and facilitate the compact representation of SARS-CoV-2 variants [12]. Mutations on viral genomes do not always appear independently. For example, the D614G (A23404G) mutation on the SARS-CoV-2 genome is almost always accompanied by three other mutations: C241T, C3037T and C14408T [37]; these four positions exhibit a co-evolutionary pattern. In fact, positions in a molecule that share a common constraint do not evolve independently and therefore leave a signature in patterns of homologous sequences [11,38]. Extracting such co-evolution signals from a sequence alignment leads to a deeper understanding of the impact of mutations and can facilitate the early detection of emerging variants. Present correlation analysis techniques are not directly amenable to co-evolutionary analysis. For instance, the Pearson correlation coefficient [36] requires the computation of the average value of a random variable; while the nucleotide type at a fixed position in an alignment can be regarded as a random variable, averaging techniques are not straightforwardly applicable. While Spearman's correlation coefficient [34] and the Kendall tau rank coefficient [25] work for rank correlation analysis, there exists no canonical ranking for the different types of nucleotides. Co-evolution detection strategies are currently based on observing the frequency of nucleotide combinations at two distinguished positions [31,47]. It is a challenge to dynamically keep track of all such pairwise frequencies on sizable data sets. Thus, a novel measurement of the degree of co-evolution is of relevance.
In protein folding, MSAs are employed to identify related positions [46] via mutual information. There are also contributions at the level of networks: [1] studies mutual information networks of enzymatic families in protein structures to unveil functional features. Much of this work focuses on how to account for the effect of phylogeny on this identification [6]. To this end, two modifications to mutual information were introduced: row-column weighting [15] and average product correction [10]. Mutual information has also been used to detect co-evolution signals in alignments of RNA sequences [17]. In [16,49], statistical methods differentiate correlation patterns induced by functional constraints from those induced by shared ancestry. These methods are concerned with pairwise relations, since their objective is to determine RNA secondary structure. Thus, the idea of considering pairwise relations between columns within an MSA was successfully used in protein and RNA folding more than a decade ago. The motif complex represents an extension of this, encapsulating k-ary relations within the MSA that cannot be reduced to pairwise relations. When applying our method to SARS-CoV-2, however, we consider only pairwise relations. This allows us to use mutual information and standard clustering algorithms, where clusters approximate maximal motifs. The motif complex suggests extending the notions of mutual information and distance beyond two random variables and two points, respectively. Motifs and alerts. The framework developed here represents a bottom-up approach requiring no a priori knowledge of phylogeny, lineages or any type of biological impact analysis. Its output consists of a small number of distinguished clusters composed of critical, tightly co-evolving positions on the SARS-CoV-2 genome. The particular nucleotide identities at these positions, as relevant as they are for subsequent analysis, play only a subordinate role. The notion of a reference sequence also plays a substantially different role: it is exclusively employed for the generation of the multiple sequence alignment, from which the aforementioned notion of position/site (i.e. column in the alignment matrix) originates. A group of mutations co-evolving via a similar pattern exhibits footprints of evolutionary selection pressure. The fitness induced by a group of co-evolving mutations can be more significant than their total fitness when they occur independently, as hidden links might exist between the sites in question. Therefore, a group of sites having sufficient nucleotide diversity that are clustered by means of co-evolution measures can represent a signal of selective advantages.
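Since the SARS-CoV-2 application uses pairwise mutual information between alignment columns, a minimal sketch is given below; it estimates I from joint nucleotide frequencies prior to any phylogenetic correction such as APC. Column inputs are plain Python sequences over the alphabet {e, A, T, C, G}, with e denoting a gap:

```python
import numpy as np
from collections import Counter

def column_mi(col_i, col_j):
    """Mutual information between two MSA columns:
    I = sum_{a,b} p(a,b) * log( p(a,b) / (p(a) * p(b)) )."""
    m = len(col_i)
    p_ab = Counter(zip(col_i, col_j))   # joint nucleotide counts
    p_a, p_b = Counter(col_i), Counter(col_j)
    mi = 0.0
    for (a, b), c in p_ab.items():
        pab = c / m
        mi += pab * np.log(pab / ((p_a[a] / m) * (p_b[b] / m)))
    return mi

# Two perfectly co-varying columns yield maximal MI for their entropy.
print(column_mi("AATTC", "CCGGA"))
```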
We consider here clusters that carry a significant portion of newly active sites. These sites represent a sufficiently large additive fitness component of the underlying cluster and are potentially indicative of the emergence of a functional block, namely a keystone mutation event. The alert picks up the induced differential in the evolutionary dynamics, very much in the spirit of a derivative. Specifically, alerts (Section The motif complex of SARS-CoV-2) are closely related to the derivative of the logarithm of the size of the cluster inducing the alert. We are in effect constructing a guidance system, alerting not only to the sites where biological analysis should be performed, but also quantifying the rate at which they co-evolve. This provides crucial information for biologists, since identifying co-evolution relations provides clues about underlying biological mechanisms. GISAID sequence data are rich enough to allow for a weekly time resolution, and within this timeframe only a handful of alerts, each involving on the order of 10 sites, are triggered. Relating alerts to the a posteriori knowledge of VOI/VOC designations and lineages, motif-induced alerts detect VOIs/VOCs rapidly, typically weeks earlier than current methods. We show how motifs provide insight into the organization of the characteristic mutations of a VOI/VOC, organizing them as co-evolving blocks. Finally, we study the dependency of the motif reconstruction on the metric and clustering method, and provide the receiver operating characteristic (ROC) of the alert criterion. The motif complex In this section we specify a mathematical framework that allows us to express co-evolutionary signals in MSAs. In a natural way, these signals give rise to a weighted simplicial complex, upon which our notion of alert is based. Let A = (a_{p,q}) denote a multiple sequence alignment (MSA) composed of m sequences of length n. Here a_{p,q} ∈ A = {e, A, T, C, G} represents the nucleotide at the q-th position of the p-th sequence, and e denotes a gap. We consider a k-tuple of A-columns i_1, i_2, ..., i_k and query the existence of a k-ary relation between the nucleotides present at i_1, i_2, ..., i_k, respectively. We stipulate that such a relation is the result of selective pressure exerted on a collective of sites forming a block with an implicit connection in the viral genome. Such a site-dependency can manifest via a variety of mutational constellations. For each collection of sites {i_1, i_2, ..., i_k}, a relation corresponds to a set M_k[i_1, ..., i_k] consisting of k-tuples (a_{i_1}, ..., a_{i_k}), representing all the constellations that satisfy the "hidden" relation. We shall refer to M_k[i_1, ..., i_k] as the set of k-motifs, or simply motifs. Constellations are projective: (a_{i_1}, ..., â_{i_j}, ..., a_{i_k}) ∈ M_{k-1}[i_1, ..., î_j, ..., i_k] for any j ∈ {1, ..., k}, where â_{i_j} expresses the fact that a_{i_j} is omitted; i.e., any such (k - 1)-tuple is itself a motif. The projectivity reflects the fact that, by construction, any sub-motif will be observed as an induced co-evolutionary dependency.
Suppose the set of all motifs, X := ∪_k X_k, is given. Its simplices encapsulate relations that are represented by mutational constellations within the MSA, and it is natural to endow them with weights representing the number of distinct constellations realizing them. Accordingly, X gives rise to a weighted simplicial complex [5] over the set of columns. We now adopt the following perspective: suppose we are given an MSA providing consistent labelings of the sites relative to a reference sequence, and suppose a family of motifs (M_k[i_1, ..., i_k])_k exists but is not known to the observer. Then the MSA allows one to obtain information about maximal motifs, representing the sets of co-evolving sites. Depending on the size and composition of the MSA, as well as errors introduced in constructing the MSA, the "true" motif complex, i.e. the collection of all blocks having implicit connection, can only be approximated. To this end we construct simplices starting from vertices (sites, i.e. 0-simplices) up to maximal simplices. These maximal simplices are of central relevance, since they represent maximal collections of co-evolving positions, which include the crucial functional units in the virus genome. In view of M_1[i_1] = {(a_{i_1}) | a_{i_1} ∈ A}, there exist no a priori constraints on the selection of the sites of the motif complex. We shall select sites within the MSA that play a distinguished role in the evolutionary dynamics of the sequence sample: • sites contained in competing variants within the multiple sequence alignment, or • sites exhibiting significant variation for intrinsic, biochemical reasons. In order to recover the motif complex, we employ measures of nucleotide diversity and co-evolution distances as follows: first we identify the critical sites where selection induces evolutionary variation, and secondly we quantify those pairs of sites that co-evolve. First, we use the Hamming distance and explicitly incorporate a particular class of relations induced by permutations; secondly, we employ entropy and mutual information. We remark that in the latter case, although not explicitly encoded, permutation-induced relations again emerge. The motif complex of SARS-CoV-2 In this section we consider the motif complex of SARS-CoV-2. Using results from section Materials and Methods, we approximate the complex and discuss alerts, actual clusters, and true and false positives. By construction, the motif complex does not allow us to draw conclusions as to which motifs will constitute a "problem"; this can only be achieved by detailed biological analysis. Short of providing such an analysis, the identification of motifs is critical and of timely value because of • a dramatic reduction of the number of potentially relevant sites from the order of 10^4 to 10^2, • rapid detection of collections of sites that constitute potential threats. We next specify alerts. To this end, we refer to a site as newly emerging ((+)-site) if it changed its activity state from inactive to active within the MSA, and as a (-)-site otherwise. Here, the activity is measured by the average number of pairwise different nucleotides at a specific site, denoted by D(i). We approximate motifs as detailed in Section Materials and Methods, referring to these as clusters. Given a discrete time series, a particular cluster M splits as M = M+ ∪ M-. Alert: the emergence of a motif M such that |M| ≥ 5 and ρ_M = |M+|/|M| ≥ 0.5. A cluster triggering an alert is referred to as predicted positive.
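The alert criterion itself is essentially a one-liner; the sketch below checks |M| ≥ 5 and ρ_M ≥ 0.5 for a cluster given the set of newly emerging sites. The site labels in the example are arbitrary placeholders:

```python
def is_alert(cluster, plus_sites, min_size=5, rho_min=0.5):
    """Alert: |M| >= min_size and rho_M = |M+| / |M| >= rho_min,
    where M+ consists of the cluster's newly emerging (+)-sites."""
    m = set(cluster)
    m_plus = m & set(plus_sites)
    return len(m) >= min_size and len(m_plus) / len(m) >= rho_min

# A 6-site cluster with 4 (+)-sites triggers an alert (rho = 4/6 >= 0.5).
print(is_alert({1, 2, 3, 4, 5, 6}, plus_sites={1, 2, 3, 4}))  # True
```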
A cluster of size at least five containing at least one (+)-site is referred to as actual. Actual clusters provide the background for the ROC curve provided in Subsection Alerts: genericity and ROC-curves. As we shall see, alerts are not random and, as the case studies below show, only few are triggered at a given time, each involving on the order of 10 positions.

An alert, i.e. a predicted positive, can be either a true or a false positive. This is decided via the following criterion: in case more than 70% of the sites contained in the cluster, irrespective of them being (+)- or (−)-sites, are later confirmed to be characteristic mutations of a single VOI/VOC, we consider the alert a true positive, and a false positive otherwise. We give a detailed analysis of the dependency of the alert criterion on its key parameter ρ_M. The alert criterion specified above is arguably ad hoc. It turns out that while it can be optimized, the optimization does not increase true positives, but decreases false positives; see Subsection Alerts: genericity and ROC-curves for details.

Finally, we draw attention to an additional feature of motifs: when mapped onto specific VOCs and VOIs, motifs provide deeper insight into how characteristic mutations organize, which in itself aids the biological analysis.

In Tab. 1 we compare the time of detection of critical motifs corresponding to VOCs/VOIs to the time of (a) WHO designation [48] as being of concern/interest and (b) Pango lineage designation [39]. We next discuss alerts and how their underlying motifs relate to VOCs/VOIs in terms of four case studies, including Alpha (England), Delta (India) and Delta AY.3.

Delta exhibits 20 characteristic mutations, including the C_1-mutations (grey) also found in any current variant. We shall consider here the organization of the remaining 16 mutations. In November, we find that 10 of the 16 characteristic mutations are active, and 9 cluster, leaving A28881T isolated. A28881T is a C_2-mutation which emerged earlier and can be found in other variants. In December, all 16 characteristic mutations are active, 14 forming a cluster, while A28881T and G24410A are isolated. G24410A is an essential mutation in the spike protein region, resulting in the D950N amino acid substitution. The situation is somewhat similar during January and February; in March, however, all 16 characteristic mutations become active and form three clusters: one of size 8 (red), one of size 7 composed of newly emerging mutations (green), and one of size one, T26267C (blue). In April, the situation is similar to March: two large clusters are observed, where T26267C merges into one of them (green). We conduct the same type of analysis via the HCS-method on the April data. Here the 16 characteristic mutations partition into clusters in a comparable manner. However, we also observe systematic differences: in general, HCS-clustering tends to produce smaller clusters when compared to the k-means method. This is due to the fact that forming a highly connected component is a restrictive condition. We also observe that the P-distance produces slightly more signals than the J-distance.

To study the diagnostic capability of alerts, we perform a receiver operating characteristic (ROC) analysis [13]. The key parameter here is the threshold on the fraction of (+)-mutations, θ. In case the fraction of (+)-mutations contained in the cluster exceeds θ, an alert is triggered and the corresponding cluster is considered a predicted positive. By construction, the total number of predicted positive clusters is a monotonically decreasing function of θ.
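The θ-sweep reduces to a few lines; the sketch below (toy cluster records of ours, with is_voc labels standing in for the VOC/VOI matching) counts predicted positives per θ, making the claimed monotonicity evident, and the same loop yields the (FPR, TPR) pairs used in the ROC analysis:

```python
def sweep(actual_clusters, thetas):
    """actual_clusters: list of (plus_fraction, is_voc); returns, per theta,
    the number of predicted positives and the (FPR, TPR) point."""
    P = sum(1 for _, v in actual_clusters if v)
    N = len(actual_clusters) - P
    out = []
    for theta in thetas:
        pred = [(f, v) for f, v in actual_clusters if f > theta]
        tp = sum(1 for _, v in pred if v)
        out.append((len(pred), (len(pred) - tp) / N, tp / P))
    return out

# Toy universe: 10 actual clusters, 3 of which map to a VOC/VOI.
clusters = [(0.9, True), (0.7, True), (0.6, True), (0.8, False),
            (0.55, False), (0.4, False), (0.3, False), (0.2, False),
            (0.1, False), (0.05, False)]
for n_pred, fpr, tpr in sweep(clusters, [0.0, 0.5, 0.75]):
    print(n_pred, round(fpr, 2), round(tpr, 2))
# Predicted positives: 10, 5, 2 -- monotonically decreasing in theta.
```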
Any predicted positive cluster corresponding to a VOC/VOI is considered a true positive, and a false positive otherwise. Let TP and FP denote the total number of true positive and false positive clusters, respectively. Then the true and false positive rates TPR and FPR are given by TP/P and FP/N, where P and N denote the numbers of actual positives and negatives, respectively, and where an actual cluster consists of at least 5 sites and contains at least one (+)-site. The latter guarantees that the cluster is only counted once. When at least 70% (50%) of an actual cluster corresponds to the characteristic mutations of a certain VOC/VOI, this cluster is considered to be associated with that VOC/VOI, and contributes to P. If an actual cluster does not correspond to any VOC/VOI, then it contributes to N.

We note that the ROC curve depends on (P, N) being correctly recognized. If, for instance, a variant that is only later declared a VOC is not taken into consideration, all fractions will change. Each θ induces a tuple (FPR, TPR). Varying θ produces the ROC curve [13], where the x-axis and y-axis represent the FPR and TPR, respectively.

Integrating over all data, that is, considering all geographical locations, we observe a total of 163 actual clusters (each consisting of at least 5 sites and containing at least one (+)-site). Among them, P = 20 correspond to a VOC/VOI and are accordingly considered to be actual positives. The remaining N = 143 do not correspond to a VOC/VOI and are considered to be actual negatives. We point out that none of the actual clusters is associated with Mu. In fact, the characteristic mutations of Mu are spread across a large number of actual clusters, as discussed below.

The motif complex introduced here detects maximal sets of sites that experience selection pressure as a collective (and this is the crucial point). This amounts to identifying differential changes within the MSA, providing information about the viral "heartbeat". We have shown that this pressure leads to a small number of constellations that appear as distinguished patterns within the MSA. Thus the method represents a significant reduction in data and facilitates subsequent biological analysis.

The motif complex does not require any a priori assumptions, as it is a bottom-up approach. In contrast, the current approach of defining a lineage or variant is based on its location in the phylogenetic tree. The determination of branch points can be biased, and there is no clear boundary between lineages, since their characteristic mutations can overlap. Motifs provide a new way of partitioning mutations: a position can belong to only one cluster at a time. A cluster possibly represents a functional block, and the currently defining variant is a combination of these functional blocks. For example, all variants contain the cluster mutations C241T, C3037T, C14408T, and A23604G.

The more sequences the MSA contains, the more easily such constellations are observed. Quality and quantity of sequence data affect the fidelity of the approximation.
Sparse data, on the other hand, as is the case for India, have a detrimental effect, since the MSA does not provide a sufficient basis for the reconstruction of the "true" motifs. England and USA surveillance have a higher number of sequences [4], allowing for the reconstruction of the motif complex with much higher fidelity. We observe that retrospective alerts based on England and USA GISAID data appear rarely and typically correspond to VOCs/VOIs. By contrast, Colombia, where the Mu variant emerged, has much fewer sequences due to global disparities in sequence surveillance [4]. It is worth mentioning that sequence surveillance thus has an impact on the motif complex detecting the Mu variant in Colombia.

Alerts are based on the approximation of motifs. Irrespective of the particular parameterization, observing a motif is non-random, since it is produced by selection pressure acting on a collective of sites. Evolution in the absence of selection, i.e. on a flat landscape, produces lineages and clusters [9], but never exhibits P- or J-distances small enough to form even a motif composed of only two positions. Thus, in contrast to lineages, the existence of motifs is tantamount to the existence of selection pressure.

A set of co-evolving sites can originate from a variety of biological scenarios. Particularly relevant events closely connected to motifs are, for instance, functional blocks. These induce subsets of motifs, since the method cannot rule out that only a core of sites is directly relevant for the underlying functionality, while the remaining sites are "carried along" by founder effects or other mechanisms. It is, for instance, easily conceivable to have two functional units forming a motif, where the existence of one excludes the other. In any case, motifs represent a dramatic data reduction, since there are only a few of them and they typically consist of ≤ 20 sites.

Even if all motifs corresponded to functional blocks, not all would result in VOCs or VOIs. The emergence of these depends on the viral dynamics itself, strain competition, as well as external factors such as selection pressures exerted via vaccinations or social distancing.

The approximation of the motif complex of an MSA identifies the co-evolutionary relations between sites on the genome. This constitutes key data that can be utilized to get an instant read on the evolutionary dynamics within the viral sample.

Our results show that this information is instrumental for the early detection of differential changes in the dynamics of the sequences in the MSA. Such signals can be detected before the adaptation of a variant is complete. As the Delta variant exemplifies, this adaptation can be a months-long process, through which sites configure themselves into an optimal constellation in multiple steps, each of which leaves its co-evolutionary footprint.

Combining the maximal simplices or clusters with a phylogenetic analysis provides deeper insight into how VOCs are organized, see Fig. 2. As a result, we can show that the characteristic mutations representing a VOC split into distinguished components of co-evolving sites.

The concept of alerts works well to achieve the goals of the method. Over a wide parameter range we produce a true positive rate of 1, i.e. no VOI/VOC is missed, which motivates the title of this contribution. That is not to say, however, that the method is without limitations. This has to do with our notion of "universe", i.e.
the set of actual clusters and what amounts to a positive. The Mu variant does not appear to cover a sufficient fraction of any actual cluster and is therefore not counted as an actual positive. Consequently, it is fair to say that our method genuinely detects all VOIs/VOCs that exhibit a significant fraction of de novo mutations. Mu can be considered a recombination of sites that are present in other variants. Mu-sites are thus distributed over a large number of actual clusters, which by construction leads to its exclusion from the set of positives. Similarly, our method does not detect signals mapping to the Beta variant in the four countries/regions, see Tab. 1.

The framework generates false positives at a rate below 0.2. This is acceptable in view of the fact that the underlying clusters are small and emerge within only a week. We remark that our measure of false positives includes clusters that may, under different circumstances, not be false positives at all. These clusters do represent a critical threat, but for reasons of strain competition, founder effects, or external measures such as social distancing, this threat never materializes.

Materials and Methods

First approximation. We first compute the nucleotide diversity of a column i, i.e. its average Hamming distance [19],

D(i) = (1 / binom(m, 2)) ∑_{1≤k<j≤m} Δ_{a_{k,i}, a_{j,i}},

where Δ_{i,k} = 1 − δ_{i,k} and δ_{i,k} is the Kronecker symbol.

We proceed by approximating the 1-simplices. To this end we consider all permutations τ : A → A and make the Ansatz (see Fig. 6)

P(i, j) = min_τ (1/m) ∑_{p=1}^{m} Δ_{τ(a_{p,i}), a_{p,j}},

i.e. the minimum, over all permutations of the alphabet, of the normalized Hamming distance between the τ-image of column i and column j. In the context of a noisy data set, P(i, j) can be viewed as reverse-engineering the dependencies induced by permutations between the two columns. Such permutations produce a restricted, yet relevant, collection of binary relations. For instance, relations like identity or complementarity can readily be expressed via such mappings. P(i, j) satisfies by construction P(i, j) = P(j, i) and the triangle inequality P(i, h) ≤ P(i, j) + P(j, h). That is, P(i, j) is a pseudo-metric, and since there are only 5! permutations, P(i, j) can be computed easily. P(i, j) is completely determined by the joint distribution p_{i,j}(x, y) of pairs of nucleotides, namely

P(i, j) = min_τ ( 1 − ∑_{x∈A} p_{i,j}(x, τ(x)) ).

We are now in a position to approximate the motif complex based on D(i) and P(i, j) as follows: first, we define the 0-simplices to be the columns i such that D(i) is greater than a threshold h_0, i.e. D(i) > h_0. Second, we define the 1-simplices as follows: a pair of columns (i, j) is a 1-simplex if P(i, j) is smaller than a threshold ε, i.e. P(i, j) < ε. In view of the property that if P(i, j) = P(j, h) = 0, then P(i, h) = 0, we shall thirdly approximate the higher-dimensional k-simplices for k ≥ 2 as follows: any k + 1 columns [i_0, i_1, ..., i_k] form a k-simplex of the motif complex if any pair of these columns forms a 1-simplex. These k-simplices for k ≥ 2 can be approximated by means of cluster analysis or, alternatively, by extracting highly connected subgraphs of a similarity graph induced by P(i, j) via the HCS-algorithm.

Second approximation. We determine the 0-simplices of the complex via the Shannon entropy H(i) of a site i, given by [43]

H(i) = − ∑_{x∈A} p_i(x) log_2 p_i(x),

where the units of H are bits and p_i(x) is the probability of the nucleotide x appearing in column i. The entropy H(i) has been widely utilized to quantify the diversity of nucleotides at position i in a population of sequences [43, 7].
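Both quantities transcribe directly into code; the sketch below brute-forces all 5! = 120 alphabet permutations exactly as the text suggests (the toy columns reproduce the worked example of Fig. 6):

```python
from itertools import permutations

ALPHABET = "eATCG"

def diversity(col):
    """D(i): average number of pairwise different nucleotides in a column."""
    m = len(col)
    diff = sum(col[k] != col[j] for k in range(m) for j in range(k + 1, m))
    return diff / (m * (m - 1) / 2)

def p_distance(col_i, col_j):
    """P(i,j): Hamming distance between tau(col_i) and col_j, normalized by
    the column length and minimized over all permutations tau of the alphabet."""
    m = len(col_i)
    best = m
    for perm in permutations(ALPHABET):
        tau = dict(zip(ALPHABET, perm))
        best = min(best, sum(tau[a] != b for a, b in zip(col_i, col_j)))
    return best / m

# The worked example of Fig. 6: v_i = CCAA, v_j = AAGG.
print(diversity("CCAA"))           # 4/6, about 0.667
print(p_distance("CCAA", "AAGG"))  # 0.0, realized e.g. by C->A, A->G
```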
Figure 6. Permutation-induced relations: at sites i and j suppose v_i = CCAA and v_j = AAGG, respectively. Let g map e to e, A to C, C to G, G to U, and U to A. Then g(v_i) = GGCC and the Hamming distance between g(v_i) and v_j is four. For f mapping e to e, A to G, C to A, G to C and U to U, the Hamming distance between f(v_i) and v_j is zero, and P(i, j) = 0 accordingly.

Based on the entropy, we shall construct a distance via joint entropy and mutual information as follows: the joint entropy H(i, j) of two sites i and j is defined as

H(i, j) = − ∑_x ∑_y p_{i,j}(x, y) log_2 p_{i,j}(x, y),

where p_{i,j} denotes the joint distribution of columns i and j, i.e. p_{i,j}(x, y) specifies the probability of the pair of nucleotides (x, y) ∈ A × A. Clearly, the marginal probability distributions for columns i and j are given by p_i(x) = ∑_y p_{i,j}(x, y) and p_j(y) = ∑_x p_{i,j}(x, y), respectively.

k-means clustering [30] is an unsupervised machine-learning algorithm for vector quantization, aiming to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean (cluster center). The problem is in general NP-hard [2], but efficient heuristic algorithms are available and can achieve a local optimum [20, 42]. As the number of clusters k is part of the input, the first step is to determine a suitable k. Here we use the gap statistic method [45], with maximum k = 30, to determine the optimum k. The analysis is performed with the factoextra package in R [29, 23].

We also perform highly connected subgraphs (HCS) clustering [21]. The HCS clustering algorithm is based on the partition of a similarity graph into all its highly connected subgraphs. Based on the co-evolution distance, we construct the similarity graph G = (V, E) as follows: two active positions v_1, v_2 ∈ V are connected by an edge if their P- or J-distance is smaller than the threshold ε, e.g. J(v_1, v_2) < ε. We then group the active positions into disjoint clusters C_1, C_2, ..., C_r via HCS-clustering on the similarity graph; more precisely, two active columns v_1, v_2 ∈ V are in the same cluster if they belong to the same highly connected subgraph of G. As already mentioned, HCS clustering does not make any a priori assumptions on the number of clusters. Furthermore, it satisfies the following property: all clusters C_1, C_2, ..., C_r have diameter at most 2, which guarantees that the co-evolution distance between two positions of the same cluster is at most 2ε.

Data preparation. High-quality SARS-CoV-2 whole-genome data were collected from GISAID [44]. Each sequence was individually aligned to the reference sequence collected from Wuhan, 2019 (GISAID ID: EPI_ISL_402124). A multiple sequence alignment (MSA) was produced by MAFFT [24]. We partitioned the collected sequences by week, further differentiating them by country or region. For the scope of our analysis, we consider data from England, India, the USA and SA, since these cover the regions from which the respective VOCs originate.

We consider all current VOCs Alpha, Beta, Delta (AY.3 included) and Gamma, and all current VOIs Lambda and Mu. We collect the characteristic mutations of these VOCs and VOIs, both synonymous and non-synonymous, from Outbreak.info and NextClade [33, 18].
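The entropy-based quantities above reduce to empirical frequency counts; a minimal sketch (toy columns of ours; since the excerpt does not fully specify how mutual information is converted into the J-distance, the last function only computes I(i; j) = H(i) + H(j) − H(i, j)):

```python
from collections import Counter
from math import log2

def entropy(col):
    """Shannon entropy H(i) in bits of a single MSA column."""
    m = len(col)
    return -sum(c / m * log2(c / m) for c in Counter(col).values())

def joint_entropy(col_i, col_j):
    """Joint entropy H(i,j) of the empirical pair distribution."""
    m = len(col_i)
    pairs = Counter(zip(col_i, col_j))
    return -sum(c / m * log2(c / m) for c in pairs.values())

def mutual_information(col_i, col_j):
    """I(i;j) = H(i) + H(j) - H(i,j); large I signals co-evolution."""
    return entropy(col_i) + entropy(col_j) - joint_entropy(col_i, col_j)

# Perfectly coupled toy columns: knowing one determines the other.
print(mutual_information("CCAA", "AAGG"))  # 1.0 bit
print(mutual_information("CCAA", "ACAC"))  # 0.0 bits
```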
Analysis Protocol. We validate our framework employing curated SARS-CoV-2 data as follows: moving back in time and exclusively using the MSA, we reconstruct the motif complex, i.e. identify the clusters of positions corresponding to co-evolving mutations. We then take advantage of the fact that we also have an independent biological analysis, including phylogenies. This in turn allows us to discuss how our clusters "fit" into the landscape of recognized VOCs and VOIs. We show that our motifs are detected weeks, if not months, before the associated variants begin to be observed by other means. A cluster of size at least five containing at least one (+)-site is considered the signal of a potentially emerging variant. Finally, our findings are discussed within the context of the identified VOCs and VOIs, in particular how early we can detect blocks of positions corresponding to mutations that were later recognized to be characteristic for a VOC.

Figure 1. Motif-induced alerts: each vertical bar represents a cluster satisfying our alert criterion, where the height denotes the size of the cluster and the colored portion of a bar represents the ratio of (+)-positions within the cluster. An alert mapping onto a particular VOI/VOC is colored accordingly, and grey otherwise. Triangles label the first week of detection of the motif-based alert, WHO and Pango designation, see Tab. 1.

Figure 3. The evolution of the co-evolutionary footprint of Delta: India, from November 2020 to April 2021, based on P-distance and k-means clustering. Sites not clustered are inactive at the respective time (D(i) ≤ 0.1, see Materials and Methods).

Figure 4. k-means and HCS clustering based on J- and P-distance: the x- and y-axes display active columns (D(i) > 0.1). If the i-th and j-th positions belong to the same cluster, (i, j) is colored black, and white otherwise. LHS: J-distance and k-means clustering (upper triangle), P-distance and k-means clustering (lower triangle). RHS: J-distance and HCS clustering (upper triangle), P-distance and HCS clustering (lower triangle).

Figure 5. ROC curves of alerts: the x- and y-axes denote the false and true positive rates FPR and TPR. Each point corresponds to a specific θ, i.e. the fraction of newly emerging sites within the cluster triggering the alert, ranging from 1 to 0 (left to right). We display the ROC curves for true positives corresponding to 70% (red) and 50% (blue) of the alert-cluster representing characteristic mutations of a VOC.

Table 1. Rapid detection: motif-based alert, WHO and Pango designation. Motif detection: the date an alert is observed. WHO designation: the date the respective variant was declared to be of concern/interest. Pango designation: the date a lineage was assigned to the respective variant according to the Pango designation website.
7,686
2022-07-21T00:00:00.000
[ "Computer Science" ]
The method of determination of mercury adsorption from flue gases

For several years now, the Faculty of Energy and Fuels of the AGH University of Science and Technology in Krakow has conducted intensive studies on the occurrence of mercury in thermal and coking coals, as well as on the possible reduction of fossil-fuel mercury emissions. This research focuses, among other things, on the application of sorbents for the removal of mercury from flue gases. In this paper we present a methodology for testing mercury adsorption using various types of sorbents under laboratory conditions. Our model assumes burning a coal sample with a specific mercury content, over a strictly determined time period and under defined temperature conditions and oxygen or air flow rates, and passing the flue gases through a sorbent held at a specific temperature. It was developed for particular projects concerning the possibilities of applying different sorbents to remove mercury from flue gases. The test stand itself is composed of a vertical pipe furnace inside which a quartz tube is mounted for sample burning purposes. At the furnace outlet there is a heated glass vessel with a sorbent sample through which the flue gases pass. The furnace allows burning at a defined temperature. The exhaust-gas flow path is heated to prevent condensation of the mercury vapor prior to contact with the sorbent. The sorbent container is positioned in a heating element with controlled and stabilized temperature, which allows mercury sorption to be tested at various temperatures. Mercury content is determined before the process (in coal and sorbent) as well as after it (in sorbent and ash). The mercury balance is calculated based on the Hg content determination results. This testing method allows sorbent efficiency to be studied as a function of sorption temperature, sorbent grain size, and flue-gas rates.

Introduction

Thermal coal combustion is one of the major sources of anthropogenic mercury emission to the atmosphere. In Poland, this is the dominant source, with an annual share of approximately 56% [1], which amounts to 5,700 kg of emitted mercury per year. In Polish power plants, flue gases leaving the boiler are treated in the following manner. Fly ash is removed using electrostatic precipitators (ESPs), fabric filters or, to a lesser extent, cyclone separators. Sulfur oxides are removed using wet and semi-dry installations. Nitrogen oxides are removed either by catalytic or by non-catalytic reduction [2]. Depending on the mercury content in coal, its chemical composition, the type of boiler, the combustion conditions, and the flue-gas treatment installation, the mercury content in gases released to the atmosphere varies from 1 μg/Nm³ to as much as 30 μg/Nm³ [1,3]. The combination of high mercury content in coal and an unfavorable elemental composition (low chlorine, bromine and iron content, high calcium content) requires, in addition to passive methods, the use of additional technologies reducing mercury emission (so-called 'active methods') [4,5]. Potentially the Best Available Techniques (BATs) include: injection of pulverized activated carbon into flue gases, adsorption on an immobilized activated-carbon bed, and the use of sulfur-impregnated sorbents. Reduction of mercury emission can also be achieved by selection of the coals to be combusted, their physical enrichment, mild pyrolysis, as well as combustion in a fluidized bed [5]. One of the methods with the highest efficiency, widely used in the USA, is the injection of pulverized activated carbons into flue gases.
These activated carbons are often modified to facilitate mercury removal [5,6]. Although adsorptive methods are highly efficient, their cost is very high, reaching from 40 to 90 thousand USD per 1 kg of mercury removed. This cost can be decreased by using cheaper sorbents such as selected fractions of coke dust, especially those acquired from coke dry-cooling installations [7]. The first step in determining whether a given sorbent is suitable for mercury removal from flue gases is a laboratory test. This enables precise evaluation of the efficiency of mercury removal from specified flue gases by a given sorbent. Such an evaluation can be achieved by measuring the mercury content in gases before and after sorbent treatment. However, such measurements on the laboratory scale encounter several problems, which can significantly reduce the reliability of the results. Because of that, the mercury balance for the measuring system is a more reliable method of assessing mercury removal in laboratory-scale experiments. In recent years, the Faculty of Energy & Fuels has been performing research on mercury occurrence in coals as well as on technologies reducing mercury emission to the atmosphere. Within this research, much effort is devoted to using pulverized and granulated cheap sorbents to remove mercury from the gaseous phase. In this paper, several methods of laboratory-scale mercury adsorption measurement are presented. Calculations of mercury-removal efficiency were based on the mercury balance and not on direct measurements conducted before and after adsorption.

Sorbents

Within the scope of this research, three types of sorbents were analyzed: mineral (A), organic, a waste product from energochemical coal processing (B), and organic, produced during waste pyrolysis (C). The properties of these sorbents are presented in Table 1.

Figure 1 presents a scheme of the test stand for measuring mercury sorption from flue gases from solid-fuel combustion. It consists of a tube furnace with temperature regulation, a quartz combustion pipe, a gas cylinder, a flowmeter and a sorbent-containing container. The temperature of the flue gases can be controlled between the quartz-tube outlet and the combustion pipe. The fuel sample is combusted in a flow of air or oxygen, and the flue gases are directed through the sorbent container. Subsequently, the amount of adsorbed mercury is measured. The analysis is performed under defined, controlled conditions which include: temperature and time of combustion, temperature of the flue gases passing through the sorbent, and flow rate. The fuel sample is inserted in a small ceramic container, which is gradually transferred into the area of highest temperature. The combustion time in this area is 10 min. A single fuel-sample input is around 1 g. During a single experiment it is possible to combust several portions of fuel, the number of which depends on the mercury content. The sorbent container is located inside a heating element, which allows stabilization of the sorbent's temperature during the measurement. This allows simulation of industrial conditions as well as analysis of mercury sorption at different temperatures. The sorbent mass, which is determined mainly by its density and granulation, varies between 1 g and 4 g. Before the procedure, the mercury content is measured in both fuel and sorbent according to the methods described in Section 2.3. After the procedure, the mercury content is analyzed in the ash as well as in the sorbent. In all instances the experimental conditions and the coal used were the same (see Table 2).
Mercury content analysis

The mercury content in samples of coal, sorbent and ash was measured using the atomic absorption spectrometry with cold-vapor atomization (CV-AAS) method on an automatic mercury analyzer MA-3000 (Nippon Instruments Corporation). The sample mass was between 50.0 and 52.5 mg, with an analytical granulation of 0.2 mm. The program for subbituminous coal samples was used. Briefly, the sample was heated for 2 min at 180 °C, then heated for 7 min at 850 °C. Subsequently, mercury vapors were reduced to the elemental form (Hg⁰), which was captured as an amalgam on a gold-covered sand bed. After the sample was decomposed, the amalgam was heated to approximately 700 °C. The released mercury vapors were directed into a cuvette and measured with UV light at 253.7 nm. The final result was a mean of three to five analyses of the same samples. When the standard deviation exceeded 10%, additional measurements were performed.

Comparison with other methods

Other analytical methods for mercury content determination are presented in many publications [8][9][10][11][12][13][14][15][16][17][18][19]. They are based on the following detection methods:

• atomic absorption spectrometry (AAS): this method draws upon the high vapour pressure of Hg at relatively low temperatures. A very often used technique is cold-vapour atomic absorption spectrometry,
• atomic fluorescence spectrometry (AFS): initially, atomic fluorescence spectrometry with flame atomization was used; further modifications included electrothermal atomization and the cold-vapour method,
• atomic emission spectrometry (AES): traditional induction methods such as flame induction or arc discharge are replaced with DC-induced plasma, radio-frequency-induced plasma and microwave-induced plasma,
• mass spectrometry (MS): the first applications of MS for determining the mercury content appeared as a variation of the method based on spark ionization (Spark Source, SS-MS). Coupling MS with inductively coupled plasma (ICP-MS) has found many more applications,
• UV-visible spectrophotometry (colorimetry): this had been a popular method of total mercury determination until the 1960s, when the AAS method was introduced,
• neutron activation analysis (NAA): the main advantages of this method are short analysis time, non-destructive analysis, and large sensitivity and precision,
• X-ray fluorescence spectrometry (XRF): the main advantages are the same as in the case of NAA. Additionally, it allows many elements to be tested simultaneously,
• electron-capture detection spectrometry (ECD): this method is widely used for determining organic mercury compounds. Its main advantage is the possibility of immediate methylmercury determination without the need to convert it into a volatile form.

Table 3 presents the detection limits of selected methods of mercury determination.

Table 3. Detection limits of selected methods of mercury determination [9,12,14,15,17,18].

Calculating the balance

The above-mentioned method allows the preparation of a mercury balance through analysis of the mercury distribution between the combustion products and the sorbent. The following data are required for calculating the balance:

• mass of mercury in fuel (C),
• mass of mercury in sorbent before analysis (SB),
• mass of mercury in sorbent after analysis (SA),
• mass of mercury in ash (A).

Thus the efficiency of mercury removal using a given sorbent was calculated according to formula (1); a sketch of this balance calculation is given below.

Results and discussion

Table 4 presents the results of the validation of the method in terms of CV-AAS accuracy.
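Formula (1) itself did not survive extraction; under the balance variables defined above, one natural reading, offered here only as an assumption of ours, is that the removal efficiency compares the mercury captured by the sorbent (SA − SB) with the mercury that actually entered the gas phase (C − A):

```python
def removal_efficiency(C, SB, SA, A):
    """Hedged reconstruction of formula (1): percent of gas-phase mercury
    captured by the sorbent. All masses in the same unit (e.g. ng).

    C  - mercury in the fuel        SB - mercury in sorbent before the test
    A  - mercury left in the ash    SA - mercury in sorbent after the test
    """
    gas_phase = C - A      # mercury released from the fuel into the flue gas
    captured = SA - SB     # mercury gained by the sorbent
    return 100.0 * captured / gas_phase

# Toy numbers (invented): 100 ng Hg in fuel, 10 ng retained in ash,
# sorbent gains 72 ng -> efficiency = 80%.
print(removal_efficiency(C=100.0, SB=5.0, SA=77.0, A=10.0))
```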
Table 5 presents a summary of the validation of the mercury-content determination method in terms of repeatability and reproducibility for coal samples, where: SD - standard deviation, CV - variation coefficient, and the indexes r and R denote repeatability and reproducibility, respectively. Table 6 presents additional parameters of the mercury determination method acquired during validation.

Validation of mercury content determination

The full validation performed for the CV-AAS method showed that: (i) the method is accurate for coal samples with mercury content between 30 and 330 ppb, (ii) the repeatability and reproducibility of the method are acceptable, (iii) the extended uncertainty of the method lies between 3 and 10%, depending on the range, (iv) the quantification limit of the method is 0.006 ng of mercury, and (v) the method is highly linear. To sum up, the performed validation confirmed that the CV-AAS method using the MA-3000 automated mercury analyzer is suitable for determining the mercury content in coal samples.

For the calculations it was assumed that all mercury in the coal is transferred to the combustion products (ash and flue gases), and subsequently to the sorbent bed. This assumption is fulfilled because of the tightness of the installation and the forced flow of oxidizer and flue gases. Values of mercury-removal efficiency are presented in Table 7.

Conclusions

During the presented study, the main challenge was the determination of the mercury content in the analyzed samples. Most presently used methods employ different variants of atomic spectrometry, mass spectrometry, or fluorescence spectroscopy [19]. In the presented system, the scope of analysis was reduced to the mercury content in solid matrices (coal sample, ash, and sorbent). Determination of mercury content in the solid phase is easier than in the gaseous phase, because it lacks the complications resulting from condensation during flue-gas cooling, mercury speciation, and the high dust levels present in industrial installations. The presented laboratory test stand for studying mercury sorption enables: (i) analysis of mercury distribution between fuel, combustion products and ash, without the need for troublesome measurements of the mercury content in the gaseous phase, (ii) analysis of sorbent efficiency based on the mercury balance, (iii) analysis of the influence of sorbent temperature on mercury sorption, (iv) analysis of the influence of sorbent granulation on mercury sorption, and (v) analysis of the influence of gas flow rate on mercury sorption.

The experiments reported here were supported by the Ministry of Science and Higher Education in Poland (Projects AGH University of Science and Technology No. 11.11.210.213 and 11.11.100.276).
2,696.4
2017-01-01T00:00:00.000
[ "Environmental Science", "Chemistry" ]
A New Algebraic Inequality and Some Applications in Submanifold Theory

We give a simple proof of the Chen inequality involving the Chen invariant δ(k) of submanifolds in Riemannian space forms. We derive Chen's first inequality and the Chen-Ricci inequality. Additionally, we establish a corresponding inequality for statistical submanifolds.

Introduction

One of the most important topics of research in the geometry of submanifolds of Riemannian manifolds is to establish sharp relationships between extrinsic and intrinsic invariants of a submanifold. The most used intrinsic invariants are the sectional curvature, the scalar curvature and the Ricci curvature. The main extrinsic invariant is the squared mean curvature. There are well-known relationships between the above extrinsic and intrinsic invariants for a submanifold in a Riemannian space form: the (generalized) Euler inequality, the Chen-Ricci inequality, the Wintgen inequality, etc. In [1,2], B.-Y. Chen introduced a sequence of Riemannian invariants, which are known as Chen invariants. They are different in nature from the classical Riemannian invariants. B.-Y. Chen established optimal relationships between the squared mean curvature and Chen invariants for submanifolds in Riemannian space forms, known as Chen inequalities (see [2]). The proofs of these inequalities use an algebraic inequality discovered by B.-Y. Chen in [1]. In the present paper, we give simple proofs of some Chen inequalities by using a different algebraic inequality. Other Chen inequalities were proved in [3] by applying another inequality.

Let {e_1, ..., e_n} be an orthonormal basis of T_pM. The scalar curvature τ at p is given by

τ(p) = ∑_{1≤i<j≤n} K(e_i ∧ e_j),

where K(e_i ∧ e_j) is the sectional curvature of the plane section spanned by e_i and e_j. If X is a unit vector tangent to M at p, consider the orthonormal basis {e_1 = X, e_2, ..., e_n} of T_pM. The Ricci curvature is defined by

Ric(X) = ∑_{j=2}^{n} K(X ∧ e_j).

Let L be an r-dimensional subspace of T_pM and {e_1, ..., e_r} an orthonormal basis of L, 2 ≤ r ≤ n. Then the scalar curvature τ(L) of L is given by

τ(L) = ∑_{1≤α<β≤r} K(e_α ∧ e_β).

In particular, for r = 2, τ(L) is the sectional curvature of L, and for r = n, τ(T_pM) = τ(p) is the scalar curvature of M at p. We shall consider the Chen invariant δ(k), which is given by

δ(k)(p) = τ(p) − inf { τ(L_k) },

where L_k is any k-dimensional subspace of T_pM.

An Algebraic Inequality

In this section we give an algebraic inequality and study its equality case. As an application, we get a simple proof of the Chen inequality for the invariant δ(k).

Lemma 1. Let k, n be nonzero natural numbers, 2 ≤ k ≤ n − 1, and a_1, a_2, ..., a_n ∈ R. Then

( ∑_{i=1}^{n} a_i )² ≤ (n − k + 1) [ ( ∑_{α=1}^{k} a_α )² + ∑_{j=k+1}^{n} a_j² ].

Moreover, the equality holds if and only if ∑_{α=1}^{k} a_α = a_j for all j ∈ {k + 1, ..., n}.

Proof. We prove this lemma by using the Cauchy-Schwarz inequality applied to the n − k + 1 numbers ∑_{α=1}^{k} a_α, a_{k+1}, ..., a_n. We have

( ∑_{α=1}^{k} a_α + ∑_{j=k+1}^{n} a_j )² ≤ (n − k + 1) [ ( ∑_{α=1}^{k} a_α )² + ∑_{j=k+1}^{n} a_j² ],

which implies the desired inequality. The equality holds if and only if we have equality in the Cauchy-Schwarz inequality, i.e., ∑_{α=1}^{k} a_α = a_j for all j ∈ {k + 1, ..., n}. A quick numerical sanity check of Lemma 1 is sketched below.

Proof of the Chen Inequality for δ(k)

We apply Lemma 1 to obtain a simple proof of the Chen inequality corresponding to the Chen invariant δ(k) for submanifolds in Riemannian space forms.

Let M̃(c) be an m-dimensional Riemannian space form of constant sectional curvature c. The Euclidean space E^m, the sphere S^m and the hyperbolic space H^m are the standard examples. Consider M an n-dimensional submanifold of M̃(c) and denote by h the second fundamental form of M in M̃(c). The mean curvature vector H(p) at p ∈ M is defined by

H(p) = (1/n) ∑_{i=1}^{n} h(e_i, e_i),

where {e_1, ..., e_n} is an orthonormal basis of T_pM.
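As a sanity check on the reconstructed statement of Lemma 1 (this verification script is ours, not part of the paper), one can test the inequality on random data and confirm the equality case:

```python
import random

def lemma_gap(a, k):
    """RHS minus LHS of Lemma 1; nonnegative iff the inequality holds."""
    n = len(a)
    lhs = sum(a) ** 2
    rhs = (n - k + 1) * (sum(a[:k]) ** 2 + sum(x * x for x in a[k:]))
    return rhs - lhs

random.seed(0)
for _ in range(10_000):
    n = random.randint(3, 8)
    k = random.randint(2, n - 1)
    a = [random.uniform(-5, 5) for _ in range(n)]
    assert lemma_gap(a, k) >= -1e-9          # inequality holds

# Equality case: a_{k+1} = ... = a_n equal the sum of the first k entries.
a, k = [1.0, 2.0, 3.0, 6.0, 6.0], 3          # 1 + 2 + 3 = 6
assert abs(lemma_gap(a, k)) < 1e-12
```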
The submanifold M is called minimal if the mean curvature vector H(p) vanishes at every p ∈ M. We recall the Gauss equation (see [4]):

R(X, Y, Z, W) = c [ g(X, W) g(Y, Z) − g(X, Z) g(Y, W) ] + g(h(X, W), h(Y, Z)) − g(h(X, Z), h(Y, W)),

for all vector fields X, Y, Z, W tangent to M.

Theorem 1. Let M̃(c) be an m-dimensional Riemannian space form of constant sectional curvature c and M an n-dimensional submanifold of M̃(c). Then, for any 2 ≤ k ≤ n − 1 and any k-dimensional subspace L ⊂ T_pM, one has the following Chen inequality:

τ(p) − τ(L) ≤ [n²(n − k) / 2(n − k + 1)] ‖H‖² + [n(n − 1) − k(k − 1)] c / 2.

Moreover, the equality holds at a point p ∈ M if and only if there exist suitable orthonormal bases {e_1, ..., e_n} ⊂ T_pM and {e_{n+1}, ..., e_m} ⊂ T_p⊥M such that the shape operators take the forms

A_{n+1} = ( A′  0 ; 0  μ I_{n−k} ),  tr A′ = μ,
A_r = ( A′_r  0 ; 0  0 ),  tr A′_r = 0,  r = n + 2, ..., m,

where A′ and A′_r are symmetric k × k matrices.

Proof. Let p ∈ M, L ⊂ T_pM be a k-dimensional subspace and {e_1, ..., e_k} an orthonormal basis of L. We take {e_1, ..., e_k, e_{k+1}, ..., e_n} ⊂ T_pM and {e_{n+1}, ..., e_m} ⊂ T_p⊥M as orthonormal bases, respectively. The Gauss equation implies

2τ(p) = n(n − 1) c + n² ‖H‖² − ‖h‖².

Additionally, by the Gauss equation one has

τ(L) = [k(k − 1)/2] c + ∑_{r=n+1}^{m} ∑_{1≤α<β≤k} [ h^r_{αα} h^r_{ββ} − (h^r_{αβ})² ].

Then we get

τ(p) − τ(L) = [n(n − 1) − k(k − 1)] c / 2 + ∑_{r=n+1}^{m} [ ∑_{1≤i<j≤n} ( h^r_{ii} h^r_{jj} − (h^r_{ij})² ) − ∑_{1≤α<β≤k} ( h^r_{αα} h^r_{ββ} − (h^r_{αβ})² ) ].

By using the algebraic inequality from the previous section, applied for each r to a_i = h^r_{ii}, we obtain

( ∑_{i=1}^{n} h^r_{ii} )² ≤ (n − k + 1) [ ( ∑_{α=1}^{k} h^r_{αα} )² + ∑_{j=k+1}^{n} (h^r_{jj})² ],

which, after discarding the nonnegative terms (h^r_{ij})² with i < j not both ≤ k and summing over r (note ∑_r (∑_i h^r_{ii})² = n² ‖H‖²), implies the inequality to prove. If the equality case holds at a point p ∈ M, then we have equalities in all the inequalities in the proof, i.e., h^r_{ij} = 0 whenever i < j are not both ≤ k, and ∑_{α=1}^{k} h^r_{αα} = h^r_{jj} for every j > k, for any r ∈ {n + 1, ..., m}. If we choose e_{n+1} parallel to H(p), then the shape operators take the above forms.

For k = 2, we derive Chen's first inequality [1]. Let M̃(c) be an m-dimensional Riemannian space form of constant sectional curvature c and M an n-dimensional submanifold of M̃(c). Then one has

τ(p) − inf K ≤ [(n − 2)/2] [ n²/(n − 1) ‖H‖² + (n + 1) c ].

Equality holds at a point p ∈ M if and only if, with respect to suitable orthonormal bases {e_1, ..., e_n} ⊂ T_pM and {e_{n+1}, ..., e_m} ⊂ T_p⊥M, the shape operators take the following forms:

A_{n+1} = diag(a, b, μ, ..., μ),  a + b = μ,
A_r = ( c_r  d_r ; d_r  −c_r ) ⊕ 0_{n−2},  r = n + 2, ..., m.

Recall that δ(n − 1) = max Ric. Then, from Theorem 1 we deduce the Chen-Ricci inequality:

Ric(X) ≤ (n²/4) ‖H‖² + (n − 1) c,  for any unit vector X ∈ T_pM.

It is known that the Clifford torus T is a minimal hypersurface of S^{n+1}, but a non-minimal submanifold of E^{n+2}.

A Chen Inequality for Statistical Submanifolds

A statistical manifold is an m-dimensional Riemannian manifold (M̃, g) endowed with a pair of torsion-free affine connections ∇̃ and ∇̃* which satisfy

X g(Y, Z) = g(∇̃_X Y, Z) + g(Y, ∇̃*_X Z)

for any X, Y, Z ∈ Γ(TM̃). The connections ∇̃ and ∇̃* are called dual connections (see [6,7]), and it is easily seen that (∇̃*)* = ∇̃. The pairing (∇̃, g) is said to be a statistical structure. If (∇̃, g) is a statistical structure on M̃^m, then (∇̃*, g) is a statistical structure too [6,8]. Any torsion-free affine connection ∇̃ on M̃ always has a dual connection, given by

∇̃ + ∇̃* = 2∇̃⁰,

where ∇̃⁰ is the Levi-Civita connection on M̃. The dual connections are called conjugate connections in affine differential geometry (see [9]). Denote by R̃ and R̃* the curvature tensor fields of ∇̃ and ∇̃*, respectively. They satisfy

g(R̃(X, Y)Z, W) = −g(Z, R̃*(X, Y)W).

A statistical structure (∇̃, g) is said to be of constant curvature ε ∈ R if

R̃(X, Y)Z = ε [ g(Y, Z) X − g(X, Z) Y ].   (2)

A statistical structure (∇̃, g) of constant curvature 0 is called a Hessian structure. Equation (2) implies that if (∇̃, g) is a statistical structure of constant curvature ε, then (∇̃*, g) is also a statistical structure of constant curvature ε (obviously, if (∇̃, g) is Hessian, (∇̃*, g) is also Hessian).

The dual connections are not metric; hence one cannot define a sectional curvature in the standard way. A sectional curvature on a statistical manifold was defined by B. Opozda [10]. More precisely, if one considers p ∈ M̃, a plane section π in T_pM̃ and an orthonormal basis {X, Y} of π, then a sectional curvature can be defined which is independent of the choice of the orthonormal basis.

Next, we consider a statistical manifold (M̃, g) and a submanifold M of dimension n of M̃.
Then (M, g|_M) is also a statistical manifold, with the connections induced by ∇̃ and ∇̃* and the induced metric g. In Riemannian geometry, the fundamental equations are the Gauss and Weingarten formulae and the equations of Gauss, Codazzi and Ricci. As usual, we denote by Γ(T⊥M) the set of sections of the bundle normal to M. In our case, for any X, Y ∈ Γ(TM), according to [8], the corresponding Gauss formulae are

∇̃_X Y = ∇_X Y + h(X, Y),  ∇̃*_X Y = ∇*_X Y + h*(X, Y),

where h and h* are symmetric and bilinear, called the imbedding curvature tensor (see [6,8]) of M in M̃ for ∇̃ and the imbedding curvature tensor of M in M̃ for ∇̃*, respectively. In [8] it was also proven that (∇, g) and (∇*, g) are dual statistical structures on M. Since h and h* are bilinear, there are linear transformations A_ξ and A*_ξ on TM, defined by

g(A_ξ X, Y) = g(h(X, Y), ξ),  g(A*_ξ X, Y) = g(h*(X, Y), ξ),

for any ξ ∈ Γ(T⊥M) and X, Y ∈ Γ(TM). Further (see [8]), the corresponding Weingarten formulae are

∇̃_X ξ = −A*_ξ X + ∇⊥_X ξ,  ∇̃*_X ξ = −A_ξ X + ∇*⊥_X ξ,

for any ξ ∈ Γ(T⊥M) and X ∈ Γ(TM). The connections ∇⊥ and ∇*⊥ are Riemannian dual connections with respect to the induced metric on Γ(T⊥M).

Let {e_1, ..., e_n} and {e_{n+1}, ..., e_m} be orthonormal tangential and normal frames, respectively, on M. Then the mean curvature vector fields are defined by

H = (1/n) ∑_{i=1}^{n} h(e_i, e_i)  and  H* = (1/n) ∑_{i=1}^{n} h*(e_i, e_i),

with components h^α_{ij} = g(h(e_i, e_j), e_α) and h*^α_{ij} = g(h*(e_i, e_j), e_α), for 1 ≤ i, j ≤ n and n + 1 ≤ α ≤ m. The Gauss equations for the dual connections ∇̃ and ∇̃*, respectively, are given by (see [8])

g(R̃(X, Y)Z, W) = g(R(X, Y)Z, W) + g(h(X, Z), h*(Y, W)) − g(h*(X, W), h(Y, Z)),

together with the analogous relation in which h and h* are interchanged.

Geometric inequalities for statistical submanifolds in statistical manifolds with constant curvature were obtained in [11]. In this section we prove the Chen inequality corresponding to the Chen invariant δ(k) for statistical submanifolds in statistical manifolds of constant curvature.

We consider an m-dimensional statistical manifold M̃(ε) of constant curvature ε and an n-dimensional statistical submanifold M. Let p ∈ M and L a k-dimensional subspace of T_pM. Denote by {e_1, ..., e_k} an orthonormal basis of L, by {e_1, ..., e_k, e_{k+1}, ..., e_n} an orthonormal basis of T_pM and by {e_{n+1}, ..., e_m} an orthonormal basis of T_p⊥M, respectively.

The Gauss equation implies a corresponding curvature relation involving h⁰ = (h + h*)/2, where h⁰ is the second fundamental form of the Riemannian submanifold M. We denote by τ⁰ the scalar curvature with respect to the Levi-Civita connection, and set τ̃⁰ = ∑_{1≤i<j≤n} K̃⁰(e_i ∧ e_j).

We state the following result.

Theorem 2. Let M be an n-dimensional statistical submanifold of an m-dimensional statistical manifold M̃(ε) of constant curvature. Then, for any p ∈ M and any k-plane section L of T_pM, we have a Chen-type inequality bounding the curvature difference in terms of the squared mean curvatures ‖H‖², ‖H*‖² and ε, obtained by applying Lemma 1 to the imbedding curvature tensors h and h*.
2,531.2
2021-05-23T00:00:00.000
[ "Mathematics" ]
Vortex Thermometry for Turbulent Two-Dimensional Fluids

We introduce a new method of statistical analysis to characterise the dynamics of turbulent fluids in two dimensions. We establish that, in equilibrium, the vortex distributions can be uniquely connected to the temperature of the vortex gas, and apply this vortex thermometry to characterise simulations of decaying superfluid turbulence. We confirm the hypothesis of vortex evaporative heating leading to Onsager vortices proposed in Phys. Rev. Lett. 113, 165302 (2014), and find previously unidentified vortex power-law distributions that emerge from the dynamics.

Turbulence arises in chaotic dynamical systems across all scales, from mammalian cardiovascular systems, to climate, and even to the formation of stars and galaxies [1]. The unpredictability inherent to turbulent systems is further confounded by physical properties such as boundaries and spatial dimensionality, and due to its complexity, there is currently no unified theoretical framework to explain turbulence. As such, there is a need to develop new methods to characterise the evolution of turbulent states in order to provide further insights into this important problem.

Onsager developed a model of statistical hydrodynamics to describe turbulence in two-dimensional (2D) flows [2]. In this representation the fluid is modelled by an N-particle gas of interacting point-like vortices which can be characterised by an equilibrium temperature. As the bounded system of vortices has a finite configuration space, the entropy S of the system has a maximum, and hence there is a range of energy E where the absolute Boltzmann temperature T = (∂S/∂E)^{−1} becomes negative [2][3][4]. These states correspond to large-scale rotational flows known as Onsager vortices [2].

Experimentally, BECs provide unprecedented opportunities to investigate 2D superfluid turbulence due to the high degree of controllability available in these systems. It is now possible to create and image complex vortex configurations such as dipoles [30][31][32][33], few-vortex clusters [34] and quantum von Kármán vortex streets [35]. Many experiments have also been devoted to the study of quantum turbulence in both two- [11,19,20,23] and three-dimensional [22,36,37] geometries. However, the formation of Onsager vortex structures in statistical equilibrium has not yet been reported. Recent theoretical works have suggested that one significant obstacle is the harmonic trapping commonly used in experiments, as vortex clusters appear to be suppressed in this geometry [14,20,38]. In addition, the detection of vortex circulation signs is experimentally difficult, and only recently have techniques been proposed [39] and implemented [23] to achieve this. Analysis of turbulent dynamics is made even more challenging by current condensate imaging methods, which only allow a small number of frames to be captured for a single experimental realisation [40].
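In the point-vortex picture just described, the configurational energy of equal-strength vortices is, up to boundary corrections, a pairwise logarithmic interaction. The sketch below (our illustration; it uses the standard unbounded-domain form and deliberately ignores the image charges a circular boundary would add) evaluates this energy for signed vortices:

```python
import numpy as np

def point_vortex_energy(pos, signs, core=1e-3):
    """Logarithmic pair interaction E ~ -sum_{i<j} s_i s_j ln(r_ij),
    in units of rho_s * kappa^2 / 4 pi; `core` regularizes r -> 0
    (cf. the hard vortex core used in the Monte Carlo calibration)."""
    n = len(signs)
    E = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = max(np.linalg.norm(pos[i] - pos[j]), core)
            E -= signs[i] * signs[j] * np.log(r)
    return E

# A tight same-sign pair is a high-energy (negative-temperature-favoured)
# configuration; a tight dipole is low-energy.
pair = np.array([[0.0, 0.0], [0.05, 0.0]])
print(point_vortex_energy(pair, [+1, +1]))  # about +3.0 (high energy)
print(point_vortex_energy(pair, [+1, -1]))  # about -3.0 (low energy)
```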
As such, it is desirable to be able to characterise the state of a turbulent fluid using a robust method of statistical analysis that links the instantaneous microscopic configuration of the system to its macroscopic behaviour. Onsager's thermodynamical description of turbulence is one such method, and hence we propose to use its central observable, the vortex temperature, for this purpose. In contrast to velocimetry-based observables that require the measurement of the velocities of the atoms, the thermometry presented here only requires the measurement of the positions and circulation signs of the quantised vortices.

We first outline our method for measuring the temperature of the vortex gas, before examining a specific case of decaying superfluid turbulence using mean-field Gross-Pitaevskii simulations. In the dynamics, we observe that the vortex gas undergoes rapid equilibration before settling into a quasi-equilibrium state where it continues to heat adiabatically via vortex evaporation [13]. We have discovered that in this evolution, the numbers of clusters, dipoles and free vortices follow robust power-laws with respect to the total vortex number. The existence of this quasi-equilibrium allows quantitative thermometry of the turbulent fluid.

To calibrate the vortex thermometer, we use Monte Carlo (MC) simulations to map out the equilibrium vortex configurations as a function of the inverse temperature β = 1/k_B T, where k_B is Boltzmann's constant. We do this for a neutral system of N_v = 50 vortices confined within a circular boundary of radius R [13,43], and set a hard vortex core of radius 0.003 R to prevent energy divergences. As we vary the temperature across both positive and negative regimes, we quantify the effect on the vortex configuration using a vortex classification algorithm [10,44]. The algorithm uniquely divides the vortex gas into three separate components: clusters of ≥ 2 like-sign vortices, closely bound vortex-antivortex dipoles, and relatively isolated free vortices (for further details, see Ref. [44]). We then calculate the number of clusters N_c, dipoles N_d and free vortices N_f as functions of temperature, and the resulting fractional population curves are presented in Fig. 1.

At low positive absolute temperatures (left hand side of Fig. 1), the vortex gas is at its 'coldest', as both the energy and entropy are minimised. In this regime, bound vortex-antivortex dipole pairs dominate the configuration, as shown in the schematic inset of Fig. 1. Above the Berezinskii-Kosterlitz-Thouless (BKT) critical temperature β_BKT [45][46][47][48][49], the vortex dipoles dissociate, causing an increase in both the energy and entropy. At β = 0, the vortex configuration becomes a disordered arrangement of vortices and antivortices, thereby maximising the entropy. In the negative temperature region, low-entropy clusters of like-sign vortices tend to form (see schematic inset), and because of their high energy, these negative temperature states are 'hotter' than those at positive temperature. Above the critical temperature β_EBC, the vortices form an Einstein-Bose condensate (EBC), a state where the Onsager vortex clusters condense, as indicated by the saturation of the cluster population in Fig. 1 [13,44,50]. For a neutral vortex gas the two aforementioned critical temperatures are defined as β_BKT = 2/E• and β_EBC = −4/(N_v E•) [6], respectively, where the energy E• = ρ_s κ²/4π is defined in terms of the superfluid density ρ_s and the quantum of circulation κ = h/m, with m being the mass of the condensed atoms.
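The classification step can be sketched with simple nearest-neighbour rules; the following is a simplified stand-in of ours for the algorithm of Ref. [44], whose actual rules are more involved (positions and signs are toy inputs): an opposite-sign pair that are mutually nearest neighbours forms a dipole, a vortex whose nearest same-sign neighbour is closer than any opposite-sign vortex joins a cluster, and everything else is free.

```python
import numpy as np

def classify(pos, signs):
    """Simplified vortex classification: one label per vortex,
    'd' (dipole), 'c' (cluster) or 'f' (free)."""
    pos, signs = np.asarray(pos, float), np.asarray(signs)
    n = len(signs)
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    np.fill_diagonal(dist, np.inf)
    nn = dist.argmin(axis=1)            # nearest neighbour of each vortex
    labels = ['f'] * n
    for i in range(n):
        j = nn[i]
        if signs[i] != signs[j] and nn[j] == i:
            labels[i] = 'd'             # mutually nearest, opposite sign
        elif signs[i] == signs[j] and dist[i, j] < dist[i][signs != signs[i]].min():
            labels[i] = 'c'             # same-sign neighbour closer than
                                        # any opposite-sign vortex
    return labels

# Toy configuration: a tight dipole plus a same-sign pair far away.
pos = [(0, 0), (0.1, 0), (5, 5), (5.2, 5)]
signs = [+1, -1, +1, +1]
print(classify(pos, signs))  # ['d', 'd', 'c', 'c']
```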
Figure 1 demonstrates that the dipole and cluster populations are monotonic functions of β; this is the key observation enabling thermometry of the vortex gas. Given an arbitrary vortex configuration in thermal equilibrium, we may determine its temperature by calculating the populations of clusters and/or dipoles and comparing the obtained values to the curves in Fig. 1. Strictly, the p_j(β) curves in the negative temperature region of Fig. 1 depend on the vortex number. However, we repeated our MC simulations for N_v = 100 and 200 vortices and found that, for the vortex numbers relevant to the dynamical simulations presented here, there is no qualitative change to the thermometry curves, and the quantitative change is not significant (see Supplemental Material [41]).

The cluster and dipole fractions are not the only observables that vary monotonically with vortex temperature in our MC simulations. For example, both the energy and the dipole moment of the vortex gas also fulfill this requirement [13], and could therefore, in principle, be used for thermometry. However, of all variables considered, we have found that the cluster and dipole fractions provide the most robust thermometers. Also shown in Fig. 1 is the Einstein-Bose condensate fraction, which quantifies the density of vortices in the largest cluster (for details, see Ref. [44]). For β > β_EBC, the condensate fraction is zero, but when β < β_EBC it rises sharply. In this extreme temperature region, the other thermometers saturate and the condensate fraction becomes the relevant observable for vortex thermometry.

As an application of our vortex thermometer, we use it to characterise decaying turbulence in a disk-shaped BEC, as previously studied in Refs. [13,14]. We simulate the two-dimensional time-dependent Gross-Pitaevskii equation

iℏ ∂ψ/∂t = [ −(ℏ²/2m) ∇² + V_tr(r) + g_2D |ψ|² ] ψ,   (1)

where ψ ≡ ψ(r, t) is the classical field of the Bose gas and g_2D is the two-dimensional interaction parameter resulting from the s-wave atomic collisions. To obtain the uniform circular geometry, we use a two-dimensional power-law trapping potential of the form V_tr = µ(r/R)^{50}, where r = √(x² + y²) is the radial distance from the axis of the trap, µ is the chemical potential, and R ≈ 171 ξ is the radius of the trap, measured in units of the healing length ξ = √(ℏ²/2mµ) [14]. The interaction parameter is set to g_2D = 4.6 × 10⁴ ℏ²/m. We solve the GPE using a fourth-order split-step Fourier method on a 1024² numerical grid with a spacing of approximately ξ/2. Turbulence is generated by imprinting vortices into the phase of ψ and evolving Eq. (1) for a short amount of imaginary time to establish the vortex core structures. We detect vortices and their circulation signs within r < 0.98 R by locating singularities in the phase of the field.

The initial vortex configurations used in our GPE simulations are produced by randomly drawing N_v = 800 vortex locations from a uniform distribution, with equal numbers of each circulation sign. The resulting state is well approximated to have β ≈ 0, although the short imaginary time propagation step causes a small amount of cooling towards positive temperatures. As the turbulence decays, the vortices annihilate and the vortex gas evaporatively heats, resulting in the emergence of two large Onsager vortices at late times [13,14].
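The paper integrates Eq. (1) with a fourth-order split-step Fourier method; the sketch below shows the simpler second-order (Strang) variant of the same idea, with grid size, units and parameters chosen arbitrarily by us for illustration:

```python
import numpy as np

# Dimensionless units: hbar = m = 1, so Eq. (1) reads
# i dpsi/dt = [-(1/2) laplacian + V + g |psi|^2] psi.
N, L, g, dt = 256, 50.0, 1.0, 1e-3
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k)
K2 = KX**2 + KY**2

mu = 1.0
V = mu * (np.sqrt(X**2 + Y**2) / (0.4 * L)) ** 50   # steep power-law trap

def step(psi):
    """One Strang split step: half kinetic, full potential, half kinetic."""
    psi = np.fft.ifft2(np.exp(-0.25j * dt * K2) * np.fft.fft2(psi))
    psi *= np.exp(-1j * dt * (V + g * np.abs(psi) ** 2))
    psi = np.fft.ifft2(np.exp(-0.25j * dt * K2) * np.fft.fft2(psi))
    return psi

psi = np.sqrt(np.maximum(mu - V, 0) / g).astype(complex)  # Thomas-Fermi guess
for _ in range(100):
    psi = step(psi)
print(np.sum(np.abs(psi) ** 2) * (L / N) ** 2)  # norm is conserved
```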
Three sample frames from a single simulation are shown in Fig. 2, where panels (a)-(c) show the density |ψ|² of the fluid, and panels (d)-(f) show the corresponding vortex configuration after the vortex detection and classification steps. A Helmholtz decomposition [8] has been used to extract the divergence-free component of the condensate velocity field, and the resulting streamlines are also shown in the lower panels. The Onsager vortex clusters are clearly visible in panel (f).

The numbers of clusters, dipoles and free vortices are shown in Fig. 3 as functions of both time t (inset) and the total number of vortices N_v(t). The time-dependent populations (inset) do not follow any simple function. However, the populations as functions of the total number of vortices (main frame) show clear power-law scaling behaviour. The corresponding power-laws are N_c ∝ N_v^{4/5} for the clusters and N_d, N_f ∝ N_v^{6/5} for the dipoles and free vortices. We also consider the mean number of vortices per cluster, N_vc ≡ N_c/N_cl, where N_cl is the total number of clusters of any size at a given time. To study the effects of system size on these power-laws, we have also considered two smaller disk-shaped systems of radii R ≈ 49 ξ and R ≈ 85 ξ, respectively, each with N_v = 100 vortices initially imprinted. We find that the scaling behaviour is unchanged in these smaller systems, suggesting that the evolution of the vortex gas is underpinned by a universal microscopic process.

In this system, the primary cause of vortex number decay is the annihilation of vortex-antivortex dipoles. Despite this, the populations of dipoles and free vortices follow approximately the same power-law, demonstrating an interconversion between the vortex populations. However, a distinct power-law emerges for the vortex clusters. This behaviour points toward a two-fluid model, where the dipoles and free vortices behave as a weakly interacting thermal cloud, while the clusters act as a quasi-condensate whose relative weight grows over time as a result of vortex evaporative heating. Extrapolating the data toward N_v → 0 leads to the inevitable decay of all dipoles and free vortices, with only Onsager vortex clusters remaining. At this point, the rate of pair annihilation becomes insignificant in the dynamics due to the very low probability of vortex-antivortex collisions.

In Fig. 3, the N_d and N_f curves are well described by the N_v^{6/5} scaling throughout the dynamical evolution. The N_c curve, on the other hand, only begins to follow the N_v^{4/5} power-law once the total vortex number has decayed to N_v ≲ 200, suggesting that the statistical behaviour of the vortex gas changes at this point in the dynamics. In accordance with the existence of power-law scaling, we interpret this change to be the realisation of a state of quasi-equilibrium for the decaying turbulence. Under this quasi-equilibrium condition, the vortex evaporative heating process becomes adiabatic in the sense that the vortex gas is able to rearrange into a higher entropy configuration between the vortex annihilation events. For N_v ≳ 200, the vortex number decays too rapidly for this to be possible.

FIG. 4. The temperature is measured independently using the populations of both clusters and dipoles. In the inset, the temperature readings from each thermometer are shown as a function of the total vortex number N_v(t). As in Fig. 1, the positive and negative temperature regions have been scaled by their respective critical temperatures, and a dashed horizontal line denotes β = 0. The vertical axis of the inset is the same as for the main frame.
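A power-law exponent such as the N_v^{4/5} scaling above can be estimated by a straight-line fit in log-log space; a minimal sketch, with synthetic data of ours standing in for the measured populations:

```python
import numpy as np

def powerlaw_exponent(nv, n_pop):
    """Least-squares slope of log(n_pop) versus log(nv)."""
    slope, _ = np.polyfit(np.log(nv), np.log(n_pop), 1)
    return slope

rng = np.random.default_rng(1)
nv = np.linspace(30, 200, 60)                            # total vortex number
nc = 2.0 * nv ** 0.8 * rng.lognormal(0, 0.05, nv.size)   # noisy N_c ~ N_v^{4/5}
print(powerlaw_exponent(nv, nc))                         # close to 0.8
```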
This quasi-equilibrium condition is not a true equilibrium of the system, since vortex-antivortex annihilations and vortex-sound interactions are continuously driving energy from the vortices into the sound field. Presumably, the true equilibrium of the condensate will only be realised when all vortices have decayed and the total entropy of the system is maximised. In the Supplemental Material [41], we present vortex number decay data for a range of other initial vortex configurations, observing in all cases evidence for the same power-law and quasi-equilibration behaviour.

We now have an algorithm to assign a vortex temperature to the dynamical GPE simulations. We determine the fractional populations of vortex dipoles and clusters as a function of time, and use each of these to read off a temperature from the curves in Fig. 1. The two resulting measurements of β(t) are presented in the main frame of Fig. 4. Both measurements show that the temperature of the vortex gas is spontaneously increasing as Onsager vortex clusters form, thereby confirming the evaporative heating scenario posited in Ref. [13]. At late times, a small discrepancy between the two temperature readings emerges, which we attribute to the compressibility of the fluid not being accounted for in the MC model. The same temperature measurements are plotted as a function of the total vortex number in the inset. Based on our quasi-equilibrium interpretation discussed above, we note that the temperature reading is strictly only valid for N_v ≲ 200 (t ≳ 2000 ℏ/µ), since outside of this range the vortices are out of equilibrium and their temperature is not well defined. To obtain these curves, we have applied the thermometer calibrated with N_v = 50 vortices, despite the fact that N_v varies between ≈ 30 and ≈ 200 throughout the equilibrium dynamical evolution. In the Supplemental Material [41], we show that using a thermometer calibrated with a different number of vortices does not affect the qualitative shape of the β(t) curve.

We have developed a methodology that allows the temperature of point vortices in two-dimensional fluids to be determined using only the information about the vortex positions and their signs of circulation. We have applied the vortex gas thermometers to freely decaying two-dimensional quantum turbulent systems and quantitatively shown the transition to negative temperatures and the emergence of Onsager vortices due to the evaporative heating of the vortex gas [13,14]. Our vortex thermometers may also be useful for the characterisation of turbulent classical fluids, as continuous vorticity distributions can be approximated accurately by a discretised set of point vortices before performing the vortex classification and thermometry. This methodology may therefore open new pathways to quantitative studies of two-dimensional turbulence.

To assess the sensitivity of the observed power-laws (Fig. 3 of the main text) to the choice of initial vortex configuration, we have run Gross-Pitaevskii simulations with a diverse range of initial conditions. In addition to the randomly sampled initial condition (case I) described in the main text, we have considered four other types of initial state. Cases II and III are configurations with lower incompressible kinetic energy E^i_k, created by imprinting the vortices randomly throughout the condensate as dipole pairs with sizes 8 ξ and 12 ξ, respectively (before the imaginary time propagation step). For case IV, motivated by experiments (e.g. [1][2][3]), the vortex creation is simulated dynamically by stirring an initially unperturbed condensate with a repulsive Gaussian potential of waist 30 ξ and amplitude 5 µ. The stirring potential is moved back and forth with centroid position x_0(t) = 100 ξ cos(2πµt/1050ℏ) for four periods, and then ramped down to zero over a fifth period. Finally, a large incompressible kinetic energy in case V is initiated by imprinting a periodic square array of vortex clusters with alternating circulation sign, each with a radius of ≈ 43 ξ and containing up to 25 randomly placed vortices. Examples of initial conditions for cases II-V are shown in Fig. 5, where the vortices have been classified into clusters, dipoles and free vortices as described in the main text.

FIG. 5. Examples of initial vortex configurations for our turbulent Gross-Pitaevskii simulations. Panels (a)-(d) correspond to cases II-V, respectively, and show the vortices in positive (negative) clusters as blue (green) squares, dipoles as red triangles, and free vortices as yellow circles. Note that each vortex dipole contains one vortex and one antivortex. The streamlines in each frame are obtained by calculating the incompressible component of the velocity field of the classical field describing the Bose gas. Panel (c) shows the dynamically stirred configuration immediately after the stirrer has been switched off. See also Fig. 2(d) in the main text, which shows the initial condition for case I. Panel (e) shows the incompressible kinetic energy E^i_k per vortex for each of the five cases, averaged over 80 (10) initial conditions generated for case I (cases II-V). Error bars denote one standard deviation. The shading is indicative of how 'cold' (blue) or 'hot' (red) a given initial state is.
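Reading a temperature off the calibration curves, as done throughout this work, amounts to inverting a monotonic function; a minimal sketch (the tabulated calibration values here are invented placeholders of ours for the Fig. 1 data):

```python
import numpy as np

# Hypothetical calibration table: cluster fraction p_c at sampled beta values
# (beta in arbitrary scaled units; p_c rises monotonically toward beta_EBC).
beta_grid = np.array([-1.0, -0.75, -0.5, -0.25, 0.0, 0.5, 1.0])
p_c_grid = np.array([0.78, 0.70, 0.60, 0.48, 0.38, 0.22, 0.10])

def beta_from_cluster_fraction(p_c):
    """Invert the monotonic calibration curve p_c(beta) by interpolation.
    np.interp needs increasing x, so the reversed table is supplied."""
    return np.interp(p_c, p_c_grid[::-1], beta_grid[::-1])

print(beta_from_cluster_fraction(0.65))  # -0.625, between -0.75 and -0.5
```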
To assess the sensitivity of the observed power-laws (Fig. 3 of the main text) to the choice of initial vortex configuration, we have run Gross-Pitaevskii simulations with a diverse range of initial conditions. In addition to the randomly sampled initial condition (case I) described in the main text, we have considered four other types of initial state. Cases II and III are configurations with lower incompressible kinetic energy E_k^i, created by imprinting the vortices randomly throughout the condensate as dipole pairs with sizes 8 ξ and 12 ξ, respectively (before the imaginary time propagation step). For case IV, motivated by experiments (e.g. [1][2][3]), the vortex creation is simulated dynamically by stirring an initially unperturbed condensate with a repulsive Gaussian potential of waist 30 ξ and amplitude 5 µ. The stirring potential is moved back and forth with centroid position x_0(t) = 100 ξ cos(2πµt/1050ℏ) for four periods, and then ramped down to zero over a fifth period. Finally, a large incompressible kinetic energy in case V is initiated by imprinting a periodic square array of vortex clusters with alternating circulation signs, each with a radius of ≈ 43 ξ and containing up to 25 randomly placed vortices.

Examples of initial conditions for cases II-V are shown in Fig. 5, where the vortices have been classified into clusters, dipoles and free vortices as described in the main text. [FIG. 5 caption: Examples of initial vortex configurations for our turbulent Gross-Pitaevskii simulations. Panels (a)-(d) correspond to cases II-V, respectively, and show the vortices in positive (negative) clusters as blue (green) squares, dipoles as red triangles, and free vortices as yellow circles. Note that each vortex dipole contains one vortex and one antivortex. The streamlines in each frame are obtained by calculating the incompressible component of the velocity field of the classical field describing the Bose gas. Panel (c) shows the dynamically stirred configuration immediately after the stirrer has been switched off. See also Fig. 2(d) in the main text, which shows the initial condition for case I. Panel (e) shows the incompressible kinetic energy E_k^i per vortex for each of the five cases, averaged over 80 (10) initial conditions generated for case I (cases II-V). Error bars denote one standard deviation. The shading is indicative of how 'cold' (blue) or 'hot' (red) a given initial state is.] The corresponding mean incompressible kinetic energy for each initial condition is shown in Fig. 5(e). This energy is defined as E_k^i = (m/2) ∫ |ψ|² |v_i|² d²r, where v_i(r) is the divergence-free component of the total velocity field.

The resulting number decay curves for each vortex type are shown in Figs. 6(a)-(c). The dipole and free vortex decay curves [panels (b) and (c), respectively] remain relatively unchanged across different initial configurations. By contrast, the clusters [panel (a)] show clear variation across the set of initial conditions, suggesting that the system initially behaves very differently in each case. However, despite the initial differences (at large N_v), all cluster decay curves eventually exhibit behaviour consistent with the power-laws obtained in Fig. 3 of the main text, demonstrating a loss of memory of the initial vortex configuration. This provides further evidence that these power-laws correspond to a state of quasi-equilibrium in which the vortex gas should have a well-defined temperature, as argued in the main text. In Fig. 6(a), the approximate value of N_v at which the N_c curve begins to follow the N_v^(4/5) power-law is also highlighted, which we interpret as the point at which the vortex gas reaches quasi-equilibrium. Applying our thermometer (Fig. 1 of the main text) to all five initial conditions, we obtain the temperature readings for each, which are presented in Fig. 7. Here, the temperature is calculated from the mean of the cluster and dipole thermometer measurements, which are themselves ensemble averaged over 80 (10) simulations for case I (cases II-V).
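As an illustration of the case IV stirring protocol described above, the stirrer can be written as a time-dependent Gaussian potential. A minimal sketch in GPE units (lengths in ξ, energies in µ, times in ℏ/µ); the 1/e² waist convention for the Gaussian is an assumption, since the text does not state it:

```python
import numpy as np

def stirrer_potential(x, y, t, amplitude=5.0, waist=30.0, period=1050.0):
    """Repulsive Gaussian stirrer of case IV, oscillating along x with
    centroid x0(t) = 100*cos(2*pi*t/period). The ramp-down during the
    fifth period is omitted for brevity."""
    x0 = 100.0 * np.cos(2.0 * np.pi * t / period)
    r2 = (x - x0) ** 2 + y ** 2
    return amplitude * np.exp(-2.0 * r2 / waist ** 2)
```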
These curves show a clear dependence on the initial condition, with the low-energy configurations (cases II and III) being consistently colder than those with high energy (cases IV and V). The random initial configuration (case I) lies between the two extremes. The approximate value of N_v at which the vortex gas appears to reach equilibrium in each case [see Fig. 6(a)] is also shown. Even before this point (i.e. for larger N_v), the vortex thermometer provides a plausible temperature reading, but the measurement is not reliable if the vortex gas is out of equilibrium. In cases IV and V, the equilibration point corresponds to a turning point in the temperature curve, providing further evidence for our interpretation of the vortex gas equilibrium condition.

VORTEX NUMBER DEPENDENCE OF THERMOMETRY

In Fig. 4 of the main text, the temperature measurement of the decaying turbulence for case I was obtained using a single thermometer calibrated with N_v = 50 vortices. Strictly, this thermometer is only quantitatively valid for the short time in the dynamical evolution when N_v ≈ 50, and additional thermometers should be calibrated to obtain more accurate measurements for other vortex numbers. Here we demonstrate that changing N_v in the Monte Carlo simulations has only a small effect on the thermometry curves, and consequently on the dynamical β(t) measurements. We have repeated our Monte Carlo simulations with N_v = 100 and N_v = 200 (as argued in the main text, the vortex gas appears to be out of equilibrium for N_v ≳ 200 for case I, and hence any vortex numbers beyond 200 are not relevant for thermometry here). The obtained p_c(β) and p_d(β) curves are shown in Fig. 8 for all three values of N_v. The thermometry curves show very little variation as N_v is changed, especially in the positive temperature region. Most importantly, the curves are always monotonic, regardless of N_v, and hence can always be used for thermometry. Using the three cluster thermometers in Fig. 8, we have remeasured the dynamical temperature β(t) from our Gross-Pitaevskii simulations, and the resulting curves are presented in Fig. 9. [FIG. 9 caption: Inverse temperature of the vortex gas as a function of time, averaged over a set of 80 dynamical GPE simulations for case I, and measured using three different thermometers, calibrated using N_v = {50, 100, 200}, respectively. The temperature is measured using the populations of clusters only. As in Fig. 8, the positive and negative temperature regions have been scaled by their respective critical temperatures, and a dashed horizontal line denotes β = 0. The mean number of vortices remaining in the system is indicated at particular times by vertical dotted lines. See also Fig. 4 of the main text.] Evidently, our measurement using the N_v = 50 thermometer slightly underestimates the temperature at early times (50 ≲ N_v ≲ 200), and slightly overestimates it at the latest times (N_v ≲ 50). Despite this minor quantitative correction, the qualitative behaviour of vortex heating (our main conclusion from the data) is unchanged regardless of which thermometer is used. Note that we have measured these temperatures using the spline fits to the data in Fig. 8, as described in the main text. The fluctuations that are visible in the β(t) curves arise from the variations in p_c(t), which then appear in each temperature measurement.
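The spline-fit inversion mentioned above can be sketched as follows; the noisy tanh-shaped samples stand in for the tabulated Monte Carlo points of Fig. 8:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import brentq

rng = np.random.default_rng(0)
beta = np.linspace(-2.0, 2.0, 21)
p_c = 0.5 * (1.0 + np.tanh(-beta)) + 0.01 * rng.standard_normal(beta.size)

# Smoothing spline through the noisy calibration points (s ~ n * sigma^2).
spline = UnivariateSpline(beta, p_c, s=beta.size * 0.01 ** 2)

def read_beta(p_measured):
    """Solve spline(beta) = p_measured on the calibrated interval."""
    return brentq(lambda b: spline(b) - p_measured, beta[0], beta[-1])

print(read_beta(0.7))
```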
Review of Methods for Automatic Plastic Detection in Water Areas Using Satellite Images and Machine Learning

Ocean plastic pollution is one of the global environmental problems of our time. "Rubbish islands" formed in the ocean grow larger every year, damaging the marine ecosystem. In order to effectively address this type of pollution, it is necessary to accurately and quickly identify the sources of plastic entering the ocean, identify where it is accumulating, and track the dynamics of waste movement. To this end, remote sensing methods using satellite imagery and aerial photographs from unmanned aerial vehicles are a reliable source of data. Modern machine learning technologies make it possible to automate the detection of floating plastics. This review presents the main projects and research aimed at solving the "plastic" problem. The main data acquisition techniques and the most effective deep learning algorithms are described, various limitations of working with space images are analyzed, and ways to eliminate such shortcomings are proposed.

Introduction

Water ecosystems play a major role in human life. The functioning of industrial plants, energy, fisheries, water management, agricultural production, and other sectors depends on them. Human activity, in turn, has a considerable impact on the state of the aquatic environment. Pollution of water areas, i.e., the introduction of components into the water that are not typical of its natural composition, is the consequence of a negative human impact on aquatic ecosystems. One of the most large-scale pollutants of our time is plastic. Rivers are considered the main source of plastic entering the ocean [1]. Plastic can also reach the ocean as a result of fishing and shipping, and also due to illegal dumping of waste into the marine environment [2,3]. Water availability and sustainability is one of the seventeen Sustainable Development Goals (SDGs) adopted by the United Nations Member States in 2015. Another goal strives to conserve marine ecosystems [4]. Although the likelihood of reaching all the SDGs under real-world conditions is assessed as low, it is necessary to try to maximize their implementation by 2030 through the development of science and technology [5]. In addition, one of the first marine environmental protection organizations in the world, Ocean Conservancy, says there is a direct link between climate change and the release of plastic waste into the ocean. The reason is that plastics are produced using fossil fuels, such as oil, gas, and coal. Plastics are already responsible for 3-4% of global greenhouse gas emissions; if production continues to grow, this share will triple by 2050 [6]. A second reason for the climate impact of plastic is that small polymer particles destroy bacteria and plankton in the ocean. Such particles are formed as a result of the breakdown of large plastics and are called microplastics (MPs) [7]. Due to their size, which is practically indistinguishable to the human eye (<5 mm), MPs are called an "invisible problem". However, the significance of this problem has reached global proportions. To date, microplastics are found in all components of the environment, and animals mistakenly feed on them, which leads to the ingestion of this pollutant into the human body [8]. At the same time, the harmful effect of MP particles on health has been repeatedly confirmed by scientists [9][10][11]. There are three ways to fight plastic pollution: 1.
Reducing waste entering the ocean from land, i.e., improving waste management policies; 2. Identifying debris in the ocean, which allows large accumulations of plastic to be detected, their sources identified, and the debris then cleaned up; 3. Cleaning the ocean of debris.

It is necessary to gradually expand the waste management infrastructure worldwide to keep pace with the growth of consumer demand for this material, which scientists estimate at 210% by 2060 [1]. The Ocean Conservancy is especially calling on the U.S. to do so, as the world's leading producer of plastic waste [12]. The crisis in this sphere concerns Russia as well. As of 2022, according to a study by the Russian Environmental Operator (REO), the real share of waste utilization in Russia is only 11.9% [13]. At the same time, according to the Federal State Statistics Service, the output of plastic products in October 2023 was 13.7% higher than in October 2022 [14]. In the Rostov region, for example, there are only three waste processing complexes, so only 5-6% of the production and consumption waste generated in the region is recycled or neutralized [15]. A comprehensive inventory and analysis of key indicators of the waste management industry was conducted by the REO in 2019-2020, and the following indicators were established: 1. The volume of waste generation is 65 million tons per year; 2. The volume of waste generation per person is 450 kg per year; 3. The volume of solid waste treatment is 18.2 million tons; 4. The volume of solid waste utilization is 2.7 million tons [16].

In the coastal zone of Lake Baikal, waste accumulation has been found both in the most visited places and in hard-to-reach coastal areas, where light, floating garbage is carried by winds, storms, and surf [17]. Therefore, the problem of dumping and accumulation of solid waste, including plastic, in water areas requires a broad solution at both the global and regional levels on the part of the state, science, and the public. However, it is impossible not to mention the positive side of the issue. Dutch inventor Boyan Slat founded an ocean cleanup project called Ocean Cleanup in 2013. The project continues to scale up its technology for cleaning plastic from the world's oceans, with the goal of removing 90% of the debris. To achieve this goal, a dual strategy is used: capturing plastic in rivers to reduce the influx of pollutants and cleaning up the trash already accumulated in the ocean [18]. Another guiding vector in combating the "plastic" problem has been the numerous developments in detecting accumulations of waste plastic in river systems and the ocean. Modern satellite systems that make Earth remote sensing data (RSD) publicly available allow such pollution to be identified.
Since the last century, space images have been actively used in various branches of science, including ecology. Industries have the strongest impact on the environment. Remote sensing (RS) methods help to promptly obtain information necessary for environmental monitoring of industrial waste disposal sites, which in the long term allows specialists to assess the negative impact of the mineral resources sector on the environment [19]. In addition, monitoring of oil spills on the water surface is an important application of satellite imagery in ecology, driven by the growth of offshore mining operations [20]. When not balanced ecologically and economically, this type of mining negatively impacts the vulnerable ecosystems of the Arctic, where it has been actively developed recently [21].

Space images have also found wide application in the study of the geological and structural features of hard-to-reach territories in order to identify disturbances and fractured rock zones [22]. In mining, remote sensing is also used to study the dynamics of enterprise operations, check the composition of mining and transportation equipment, and evaluate the results of reclamation of disturbed areas [23]. The impact of large industrial complexes on the environment is usually assessed by analyzing samples, but this method does not capture spatial and dynamic processes, so a combination of methods is used, supplementing sample analysis with satellite observations [24] (p. 181). In addition to space data, aerial photography from unmanned aerial vehicles (UAVs) is often carried out, and maps of pollutant distribution are made from the images [25].

Any anthropogenic activity entails certain risks, both to the environment and to human health. In this regard, it is important to assess the probability of hazardous events, for which space methods are also used. Thus, the processes of ground uplift and subsidence in the territory of the Kirov mine were studied using space radar imaging data. The method increased the efficiency and reliability of the forecast of geomechanical processes, contributing to risk prevention at the enterprise and, consequently, reducing the cost of eliminating negative consequences [26]. These examples show the importance of using satellite data both during the operation of industrial enterprises and after their liquidation for the assessment of environmental damage. However, it is necessary to emphasize the possibility of using RS data at the initial stage as well. When planning landscape transformations, i.e., before mining or any other operations, satellite data are used to analyze the ecosystem diversity of the territory [27]. Multitemporal images also help to quantify the forest cover of a region and identify "deforestation" zones, which is important for the sustainable development of the areas under exploitation [28].

In addition to industries, scientists identify urbanization and the growth of cities as one of the significant environmental impact factors. The analysis of urban infrastructure, especially the monitoring of the state of green spaces, is also conducted using RSD [29]. Returning to the problem of plastic pollution in the hydrosphere, it should be noted that cities are the main source of waste generation. Tourism and population growth as a result of urbanization also add to this [30].
Satellite-based methods are indispensable in studying the sources of waste discharges into the marine environment and estimating their accumulation. A study showed that the amount of plastic waste is influenced by a number of variables, including demographic factors and economic activity [31]. The physics of plastic transport in the aquatic environment plays an important role in studies of plastic accumulation in water. The dynamics of the distribution of plastic debris depend on two factors: 1. The physical characteristics of the plastic (density and size); 2. In one study, the speed of plastic debris movement was estimated at 6 km/day [32]. Moreover, plastic in the ocean can float on the surface or settle in the water column, which creates significant difficulties for its detection by satellites [33].

In addition to the possibility of covering large areas, an important advantage of using RS methods is the minimization of time. For example, one aerial survey of the Hawaiian Islands required eight analysts to work 688 h over three and a half months to manually interpret all the large macro- and mega-debris [34]. It is also worth noting that modern artificial intelligence (AI) and machine learning (ML) technologies are helping to reduce the time required to process survey results. Deep learning (DL) is the identification of complex patterns by neural networks trained on a large dataset. Machine learning is a type of artificial intelligence based on various tools of mathematical statistics, numerical methods, mathematical analysis, etc. Such technologies make it possible to automatically process large arrays of data while increasing the speed and accuracy of the process.

For the task of recognizing plastic in images, the dataset consists of satellite images or UAV images, which are used by AI or ML to determine the presence of contamination. In deep learning, this is accomplished by having neural networks extract basic features, such as lines, angles, and textures, after which they recognize more complex levels of features, such as the shapes and boundaries of plastic contamination. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) are effective for recognizing objects in images [35]. In machine learning, plastic identification is achieved by training different models to find pixels similar in spectral brightness, i.e., the machine "reads" pixels containing plastic and finds similar ones by grouping them together. This makes DL and ML a suitable choice for remote sensing data processing. This approach allows, firstly, a large number of images to be processed quickly, and secondly, features hidden to humans to be found between objects in the images. In addition, neural networks and machine learning involve the creation of models that can be adapted and improved as new data are acquired, which further strengthens the advantages of this approach [36]. Thus, the use of remote sensing and UAV imagery combined with machine learning and artificial intelligence to identify areas of plastic pollution on the water surface could be a key solution to a global problem. In the following, the review describes the technology of plastic identification from satellite images using machine learning, based on examples of existing studies. A generalized methodological flowchart showing the sequential steps of this technology is shown in Figure 1.
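As a toy illustration of the pixel-grouping idea described above (not any specific study's pipeline), spectrally similar pixels can be grouped with an off-the-shelf clustering algorithm; the random array below is a stand-in for a real multispectral scene:

```python
import numpy as np
from sklearn.cluster import KMeans

scene = np.random.rand(100, 100, 6)          # stand-in (rows, cols, bands) reflectances
pixels = scene.reshape(-1, scene.shape[-1])  # one spectrum per row

# Group pixels by spectral similarity; clusters whose mean spectrum matches
# known plastic samples would then be flagged for inspection.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pixels)
label_map = labels.reshape(scene.shape[:2])  # per-pixel class map
```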
Criteria for Choosing a Satellite

Satellite images are the main source of data for the remote method. Satellite data from the Sentinel-2 spacecraft are currently used to identify plastic debris in the images. The mission consists of two identical satellites, Sentinel-2A and Sentinel-2B, developed and operated by the European Space Agency as part of the Copernicus program. Each satellite carries a multispectral instrument that measures reflected solar radiation (i.e., it works passively), providing high spatial resolution (10 m, 20 m, or 60 m, depending on the spectral band), which is its main advantage compared to other satellites. The choice of Sentinel is also determined by its revisit period: the flyover takes place every five days, which makes it possible to re-survey the necessary territory at short time intervals. It is also important that Sentinel-2 provides free and open data to all users [37]. However, Sentinel satellites do not have coverage over the open oceans. Coverage is limited to near-shore waters and to inland seas (like the Mediterranean).
It is also known that the Worldview-2 satellite has been successfully utilized. In 2011, after the Japanese earthquake, it was used to monitor the formation of marine plastic debris, but recent studies do not mention the possibility of using Worldview-2 data [32]. The Landsat-8 satellite was also previously used for this purpose, but its spatial resolution is 30 m, which makes it impossible to distinguish small garbage accumulations in the images [38]. In addition, the interval between images of this spacecraft is 16 days. Over such a period of time, the object under study may move a significant distance or change in area due to ocean currents. For these reasons, Landsat-8 is not currently used for detecting plastics on the surface of water [39]. However, the effectiveness of combining Landsat-8 data with data from the Planet and Sentinel-2 satellites has been proven. The simultaneous use of several satellites, according to the authors, can become an effective tool for monitoring marine plastic debris and the basis for a future model for predicting waste accumulation in the ocean [32].

In summary, the choice of the Sentinel-2 satellite for the identification of plastic debris in the marine environment is based on the following facts: 1. The revisit period of the satellite is 5 days; 2. The spatial resolution is high (10, 20, or 60 m, depending on the band); 3. The data are provided free and open to all users; 4. The satellite has 13 spectral bands covering the visible, near infrared (NIR), and shortwave infrared (SWIR) spectra. Despite the recommendation to use Sentinel-2 data, many authors consider the developed methods for polymer identification reproducible with other satellites. The only important condition is the identity of the spectral bands to those of Sentinel-2, as the detection algorithms are based on them [30].

Examples of the Use of Plastic 'Targets' for Data Collection and Remote Sensing Experiments on Plastic on Water by Satellites and UAVs

The possibility of detecting plastics in the aquatic environment from unmanned aerial vehicles and the Sentinel-2 satellite was first studied within the framework of the Plastic Litter Project (PLP) in 2018 [40,41]. Also, as part of this study, an experiment was conducted to develop a reference dataset on polymer materials entering the sea. In the experiment, so-called "targets" consisting of plastic bags, bottles, and natural garbage with built-in GPS sensors were placed on the water in the Gulf of Gera on the island of Lesbos in Greece. The 10 × 10 m targets (imitating a satellite pixel) were fixed on the water with special anchors. Ground control points were placed on the beach. The plastic rafts were released on the water every 5 days for three months while the Sentinel-2 satellite was flying over the area. Together with the satellite data, aerial photography from UAVs was conducted with a 30 min difference (this is not a serious error in the experiment, as noted by the authors) [42]. However, the position of the targets, although firmly fixed, is unstable. Therefore, this time interval should be minimized in future studies. The experiment was repeated in 2019, but with 1 × 5 m and 5 × 5 m targets. All data from the experiment and information about the project are in the public domain and available online [43]. For 2018 and 2019, it was the only large-scale project of this type. The project continues to the present, with the latest results published for 2023.
Based on this existing experience, in 2019, researchers succeeded in verifying that smaller plastic targets can also be identified from Sentinel-2 images. For this purpose, 3 × 10 m plastic rafts with GPS trackers were installed in the water area of the city of Limassol (Cyprus) at a distance of 200 m from the shoreline. The study was also conducted using UAV aerial photographs and Sentinel-2 satellite images: the multispectral cameras of the UAV allow the spectral response of plastic to be studied and then compared with the Sentinel-2 spectra [37].

However, in real conditions, plastic accumulations are heterogeneous, with many natural components present. It was possible to obtain a spectrum of such mixed material after collecting information about the presence of litter on the coasts of Canada, Vietnam, and Scotland from the scientific literature, press, and social networks [38]. In a recent study, data on plastic pollution were also collected from scientific reports, articles, and social media [39]. Figure 2 shows the difference between the plastic target made for the experiment and plastic pollution in the real environment. As can be seen from the figure, under uncontrolled conditions in the marine environment, plastic accumulates together with natural debris, changes color under the influence of sunlight, and undergoes biofouling, which makes it much more difficult to identify in satellite images. It should be noted that in the previously mentioned examples, satellite and UAV images are used together. This is due to the fact that there are limitations and disadvantages to using them separately (Table 1).
Table 1. Advantages and disadvantages of using satellite and UAV imagery [compiled by the authors].

Satellite imagery, advantages: 1. Large area coverage; 2. Access to a large archive of data for different time periods (it is possible to view images from the past several years) [44]; 3. Free access to images of any territory at any time; 4. The ability to survey the entire globe on a day-by-day basis.

Satellite imagery, disadvantages: 1. Lower spatial resolution compared to UAVs (10-30 m from free services, 3-5 m from paid services, 30 cm at the commercial level); 2. Photography of a specific area is not conducted on a daily basis (usually once every 5-7 days); 3. Dependence on weather conditions: in cloudy weather, the areas to be imaged may be obscured by clouds; 4. The cost is higher than with UAVs [44].

UAV imagery, advantages: 1. Higher spatial resolution compared to satellite imagery (image accuracy can be up to 1-2 cm per pixel) [45]; 2. The possibility of obtaining up-to-date data on any day; 3. Convenient remote control; 4. The flight is below the clouds, so there is less dependence on weather conditions.

UAV imagery, disadvantages: 1. Less coverage of territory compared to satellite images; 2. Shorter flight distance (battery life and range limit the area that can be captured in a single flight); 3. Geographically restricted use (currently, UAVs are not allowed to fly over urban areas due to privacy concerns); 4. A well-trained person is required to launch and operate the UAV.

Table 2 presents the most significant projects in the field of plastic detection in water from 2015 to the present.

Features of Satellite Imagery Processing for Floating Plastic Detection

The choice of an effective atmospheric correction algorithm for coastal water bodies is important for improving the accuracy of detecting floating plastics from RSD [31]. The ACOLITE atmospheric correction processor, developed by the Royal Belgian Institute of Natural Sciences (RBINS) for the application of Landsat and Sentinel data in the aquatic environment, performs atmospheric correction and calculates the surface reflectance using water parameters [30]. Atmospheric correction in ACOLITE can be performed using two methods: the exponential extrapolation function (EXP) and dark spectrum fitting (DSF). The latter method has proven most effective for detecting plastic in water areas from satellite images [32,39,42]. A "land mask", computed via the Normalized Difference Vegetation Index (NDVI), is used to reduce the probability of pixel misidentification [39]. However, cloud cover can hinder this operation. Modern machine learning methods can improve image quality by reconstructing a high-quality cloud-free image using various algorithms (Linear Regression (LR), Random Forest Regression (RFR), Support Vector Regression (SVR)) based on Sentinel-2 images [51]. If the atmospheric correction is incorrect, a classification error may occur. An example is shown in Figure 3 [44].
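A minimal sketch of the NDVI-based land mask, using the standard NDVI definition on the Sentinel-2 red (B4) and NIR (B8) bands; the 0.2 threshold is an assumption and would be tuned per scene in practice:

```python
import numpy as np

def ndvi_land_mask(red, nir, threshold=0.2):
    """NDVI = (NIR - RED) / (NIR + RED); pixels above the threshold are
    treated as land/vegetation and excluded from the water search area."""
    ndvi = (nir - red) / (nir + red + 1e-12)   # epsilon guards against zero division
    return ndvi > threshold
```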
Methods and Applications for Obtaining Spectral Characteristics of Various Components of the Marine Environment and Plastics

When applied to remote sensing, spectral analysis means extracting qualitative and quantitative information from the reflectance spectra of a given pixel based on wavelength-dependent properties. In machine learning, the spectral characteristics of objects are used to train models, i.e., they are the features by which the program assigns an object in the image to one class or another. The spectral characteristics of objects are obtained by capturing them with the multispectral cameras of unmanned aerial systems (UASs) or with a spectroradiometer. For example, the SVC HR-1024 spectroradiometer was used to obtain the spectral characteristics of water surfaces and plastic "targets". Imaging was performed from heights of 1.5 and 3 m at 20 points in 1 m increments in order to test the spectral response at different heights [37]. In order to obtain spectra of controlled "garbage targets" located on the beach for UAV imaging, the spectrometry session can be carried out under laboratory conditions [52].

The spectral properties of the following materials have been studied by L. Biermann [38]: 1. Wood; 2. It was found that plastic shows a reflectance peak mainly in the near-infrared (NIR) spectrum, while seaweed also reflects light in the green (560 nm) and red (700-780 nm) bands [38]. The spectral characteristics of plastic bottles at different depths, and of different types of plastic (polyethylene terephthalate and polyethylene), have been studied in the same way [52]. The spectral reflectance of materials other than plastics allows the spectral responses of all objects to be compared with that of plastic [39]. Figure 4 shows examples of the spectral characteristics of different objects obtained in the studies [32,39,52].
Description of Spectral Indexes Used for the Identification of Floating Plastics

In order to develop a classification model to predict the presence or absence of plastic waste in imagery, a certain set of attributes is required. Such attributes are the indexes. In 2020, L. Biermann [38] developed a special index for plastics detection, the Floating Debris Index (FDI), which involves three spectral bands. The FDI is based on the previously known Floating Algae Index created for the Landsat-8 satellite, except that in this case the red channel was replaced by the red edge channel [36]. The formula for the calculation is:

FDI = R_NIR - R'_NIR, where R'_NIR = R_RE2 + (R_SWIR1 - R_RE2) x (λ_NIR - λ_RED)/(λ_SWIR1 - λ_RED) x 10,

where R_NIR, R_RE2, and R_SWIR1 denote the reflectance values measured by the satellite per grid cell in the near infrared (NIR), red edge 2 (RE2), and shortwave infrared (SWIR-1) bands, respectively, and λ_NIR, λ_RED, and λ_SWIR1 are the wavelengths (in nanometers) of the NIR, RED, and SWIR-1 bands of the Sentinel-2 satellite presented in Table 3.

In addition to the FDI, the Plastic Index (PI) was developed in 2020. It is most effective when combined with spectral channels B4 and B8 [37]. Together with it, the authors used the reversed vegetation index RNDVI for the first time, whereas before that only the NDVI was known. The NDVI vegetation index makes it possible to distinguish seawater, wood materials, pumice (volcanic rock), and sea foam in the images, but it is not sensitive to the response of plastic. NDVI values range from -1 to 1, with low values for water and high values for vegetation. With the FDI, plastic objects can be distinguished in the image, but vegetation and other natural materials will create identification errors. Therefore, the NDVI and FDI are most often used in combination to achieve the highest efficiency in recognizing materials in the image [38].

The most complete assessment of the effectiveness of the indexes for identifying plastics in images is given in the study by M. Duarte. The XGBoost machine learning model was trained on all Sentinel-2 spectral indexes and spectral bands in order to identify ineffective elements, remove them from the sample, retrain the model on the remaining components, and, on this basis, determine the best combination of channels and indexes. The result was a combination of channels B1 and B8A with the indexes NDSI, MNDWI, NDWI, OSI, FDI, WRI, and MARI [39]. Table 4 shows all the indexes and the formulas for their calculation, which were used to identify plastics on the surface of the water.
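Written out in code, the FDI computation defined above is a one-liner per pixel. A sketch assuming approximate Sentinel-2 band-centre wavelengths (B4 ≈ 665 nm, B8 ≈ 833 nm, B11 ≈ 1614 nm; the exact values come from Table 3):

```python
LAMBDA_RED, LAMBDA_NIR, LAMBDA_SWIR1 = 665.0, 833.0, 1614.0  # nm, approximate

def floating_debris_index(r_nir, r_re2, r_swir1):
    """FDI: NIR reflectance minus a baseline interpolated between the
    red edge 2 (RE2) and SWIR-1 bands, following Biermann et al. [38]."""
    baseline = (r_re2 + (r_swir1 - r_re2)
                * (LAMBDA_NIR - LAMBDA_RED) / (LAMBDA_SWIR1 - LAMBDA_RED) * 10.0)
    return r_nir - baseline

print(floating_debris_index(0.08, 0.05, 0.02))  # positive values flag floating matter
```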
2.6. The Most Effective Machine Learning Methods for Detecting Plastic on the Surface of Water

Machine learning (ML) is one of the promising directions in many fields of science. The data on which a model is trained are images of various objects that may be present in accumulations of floating debris. These include plastic itself, as well as various other objects, including seawater, foam, wood, etc. The attributes from which the model learns general patterns are the indexes. The last component of the model is the machine learning algorithm.

B. Basu [30] studied the performance of two unsupervised (K-means and Fuzzy C-means (FCM)) and two supervised classification algorithms (Support Vector Regression (SVR) and Shape Fuzzy C-Means (SFCM)) for identifying floating plastics in coastal water bodies. In order to test the performance of each model, three different sets of attributes were selected. It was found that the performance of each of these algorithms is higher with the largest set of attributes. The most efficient model is SVR [30]. L. Biermann used spectral curves to identify macroplastics and the Naive Bayes algorithm to classify mixed materials, which were successfully identified as plastics with 86% accuracy [38].

Support Vector Machine (SVM) and Random Forest (RF) models have also been tested for classification analysis. The spectral characteristics of different materials, together with the indices, were used to develop the ML models. For this purpose, a spectral curve profile of marine plastic was created to differentiate floating plastic from other marine debris. Both the SVM and RF algorithms performed well across five models and combinations of test cases, but the highest performance was observed for the RF algorithm [53].

The main results in the area of detecting plastic in aquatic environments using machine learning are summarized in Table 5. Supervised classification algorithms are undoubtedly used more frequently and show higher efficiency, but unsupervised methods are applied when insufficient data are available [30]. Taken together, the results suggest that high-resolution remote sensing imagery and automated ML models can be an effective way to rapidly detect marine floating debris.
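As a schematic of how such supervised classifiers are trained on index-valued features (the arrays below are random stand-ins, not real labelled spectra):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((500, 7))            # per-pixel index values (e.g. NDVI, FDI, PI, ...)
y = rng.integers(0, 2, 500)         # hypothetical labels: 1 = plastic, 0 = other

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))
```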
Discussion

After this detailed description of the methods used in the identification of plastics, in this section we draw attention to the drawbacks and limitations of satellite use that are highlighted in the studied works.

1. Cloud cover. The presence of clouds in images is highlighted as a major limitation when working with satellite data in most studies [39,54,55]. Despite the fact that images are usually selected with a cloudiness filter of <25%, the obtained data may be insufficient [32]. In addition to the clouds themselves, classification quality can also be affected by cloud shadows, increasing the recognition error for objects in the images. In order to avoid classification errors due to clouds, three machine learning algorithms (LR, RFR, SVR) capable of generating "synthetic" pixels whose spectra match those of real objects have been tested on Sentinel-2 images. This method can serve as a solution to the problem of image cloudiness. In addition, interference can be caused by sun glare in the images [42]. For more details on the atmospheric correction of satellite images, see Section 2.3.

2. Limited data availability. A number of authors point out the need to expand the existing library of marine debris data [39,51,52]. In this context, "data" refers to images of various kinds of marine debris on which machine learning models can be trained. Unsupervised classification methods are also used, but their accuracy is lower than that of "learning with a teacher" (see Table 5). With supervised classification, user and producer accuracy are improved, but insufficient data can lead to classification errors. Researchers from different countries are calling for a global spectral database of marine debris from around the world.

3. Inability to distinguish the material type. This is a limitation rather than a disadvantage. If information on the plastic types in a contaminated area, or any other qualitative assessment, is needed, the data obtained from satellites and UAVs should be supplemented with in situ measurements [56]. The use of remote sensing methods alone, however, can provide a comprehensive picture of the extent of pollution, e.g., to calculate its area or to construct a map of litter density [34,56]. If images need to be processed in large volumes, it is advisable to turn to machine learning algorithms in order to automate the process [37].

4. Plastic accumulation. When speaking about the detection of plastic in a water area using space images, we mean its accumulations on the water surface [39]. At small volumes, it is practically impossible to identify. This is evidenced by the studies described in Section 2.2: the minimum size of a plastic target recognizable in an image is 1 × 5 m. At the same time, the Sentinel-2 pixel coverage should be at least 25% [44]. Taking into account the rapid pace of development of the space industry, it can be assumed that satellites with finer spatial resolution will be launched in the future, and this problem will then be solved.

5. Inability to obtain information on submerged litter. The use of satellite imagery has produced a breakthrough in recognizing plastic waste on the water's surface. However, identifying submerged debris remotely is currently not possible [32]. In the context of solving this problem, it is suggested that efforts should be directed towards the timely removal of marine debris to prevent its submergence due to biofouling and decomposition into microparticles [44].

6. Weather conditions can be an obstacle to the detection of debris. We are not talking here about cloud cover or sun glare but about natural oceanic phenomena, such as storms and strong winds, that can last for long periods of time [32]. This can be a serious problem for the identification of plastics on the high seas, so the focus should be on preventing debris from entering the ocean. For this purpose, it is necessary to introduce a system for monitoring waste accumulations in coastal areas, on beaches, and in river systems. These places are the primary sources of plastic entering the ocean.

7. Classification errors may occur in coastal waters. This is related to the spectral response of water: deep water has a higher reflection coefficient, so plastics are distinguished more effectively and the results are more reliable [39]. In coastal waters, sand and stoniness can interfere with the detection of plastics. However, it should be taken into account that it is technologically easier to conduct a controlled experiment in the coastal area; in addition, storms and strong winds can hinder the detection of plastics at a great distance from the coast (see the previous point).
8. Plastic biofouling. Plastics lose their natural properties when exposed to water for a long time: their structure, shape, and size change, and natural material accumulates on them, which changes the spectral response of the plastic [57]. The studies presented in this review demonstrate the effectiveness of addressing this problem with the help of machine learning, whose algorithms are capable of automatically decoding images and recognizing suspicious objects. However, no ML model can yet be put on the same level as a qualified RS specialist [58]. ML misses many debris objects, especially if they are biofouled, mistaking them for vegetation, wood, and other natural materials. Further research on improving deep learning models and expanding the database may solve this problem in the future [34].

More than 50 studies were analyzed in the preparation of this review. Figure 5 shows the percentage of all limitations in plastic identification from satellite imagery mentioned in the reviewed studies.

Thus, the disadvantages of using satellite optical images impose significant limitations when working with them, in particular the influence of weather conditions on the survey results. In this regard, it is necessary to pay attention to the possibilities of another type of remote sensing: radar imaging of the Earth. Synthetic Aperture Radar (SAR) uses an active sensor whose detector emits electromagnetic (EM) waves and also records the reflected signal [59]. The EM wave received by the sensor is called the measured backscatter. The SAR image is a two-dimensional visualization of the measured backscatter. Unlike optical sensors, an SAR sensor can operate both day and night, independently of sunlight, because it emits the signal itself [60,61]. In addition, electromagnetic waves can penetrate clouds and 'see' under tree crowns, ensuring operation in any weather conditions. This is due to the fact that an SAR sensor uses microwave wavelengths, ranging from the K-band (7.5 × 10^-3 m) to the P-band (1 m), while an optical sensor uses wavelengths from the visible (4 × 10^-7 m) to the thermal infrared (15 × 10^-6 m) [62].

However, it should be realized that SAR images cannot be immediately interpreted by the human eye, as they contain only the backscatter signal, and the pre-processing of SAR data is a long and complex procedure, including application of the orbit file, radiometric calibration, gap removal, multilooking, speckle filtering, and terrain correction [61]. SAR sensing is actively used in various industries, including environmental monitoring: oil spills, urban sprawl, flooding, green space monitoring, etc. Given the advantages of microwave sensing, the possibility of applying SAR to detect plastic pollution on the water surface is worth considering in future studies.
Future work by the authors. To date, no studies on the remote detection of plastic debris on the water surface have been conducted in Russia. In this regard, we note the need to conduct such studies there. The existing problems with the disposal of solid waste in the country put river and marine ecosystems at high risk of pollution, which, in turn, affects the state of the aquatic environment of neighboring countries [63,64]. The water ecosystems of the Arctic, as the region most exposed to climate change, are particularly vulnerable [21,65,66]. Researchers estimate that the total amount of plastic currently floating in Arctic waters may reach 1200 tonnes [67]. The issue of microplastic pollution is also acute, particularly in the Barents Sea [68]. We want to replicate in Russia the experience of our foreign colleagues described in this article. This may become a significant contribution to the development of a global system for monitoring plastic waste around the world.

We would also like to note that, although this article describes mainly the application of machine learning algorithms, deep learning offers a wide range of possibilities for recognizing objects in images. This is because neural networks can use the pixel data of images and find patterns in them. Therefore, in our future work, we also want to explore the possibility of using artificial intelligence to detect plastic in images.

Conclusions

The use of computer and information technologies has become a necessity for solving most tasks in many branches of science in the modern world. They allow us to simplify and automate the process of working with various data by means of computation and communication. In the field of Earth remote sensing, computers, geoinformation systems, and software allow specialists to acquire, process, and interpret space images, which, in combination with ground observations, serve as an indispensable tool for solving various tasks. This method is actively used in the field of ecology: space images are used to calculate the areas of mining enterprises' dumps, to promptly identify forest fires, to detect oil spills, to monitor the condition of tree vegetation, and much more. However, it is necessary to go further and, through modern computer technologies, find new ways to solve global environmental problems. Detecting the sources and spread of plastic pollution in the ocean is one of these challenges.

Plastic pollution of the ocean can be called a major environmental disaster of our time. The increasing levels of this material in the marine environment pose serious threats to the marine ecosystem and biodiversity, and potentially to humans as well. In the fight against plastic pollution of the planet, ocean clean-up projects are being created, but this is not enough to eliminate the accumulated damage. To date, there are a number of problems associated with the disposal of municipal solid waste, which is the root cause of plastic in water areas. Accurate detection of plastic litter can help in taking appropriate measures to reduce the amount of plastic in water bodies, providing an opportunity to identify the sources of dumping of solid waste into water bodies and to determine its distribution pathways and the locations where this pollution should be eliminated.
The method is performed by collecting data on the spectral response of materials through the installation of special plastic targets and their imaging from unmanned aerial systems. In the meantime, images of the selected area are acquired from spacecraft. The European Space Agency's Sentinel-2 satellite, with a spatial resolution of 10 m, is chosen for this purpose. The images are then manually processed by operators, and plastic is identified using advanced techniques. These include the new FDI and PI indexes, which "read" the spectral response of the material.

Manual processing of large amounts of data is labor-intensive, so the process of detecting debris in images is being automated using machine learning. For this purpose, scientists have tested various algorithms, such as XGBoost, SVR, SVM, and others, whose reliability shows good results. One of the highest-performing models is Support Vector Regression.

Remote sensing data and advanced machine learning algorithms are effective solutions for identifying large plastic patches, but the potential of such resources for detecting and recognizing marine floating debris is limited. The main problems when using satellite images are cloudiness and cloud shadows in the images, solar glare, the lack of data for training machine learning models, the possibility of identifying plastic only in large volumes (close to the Sentinel-2 pixel size), and others. In accordance with this, future research is possible along the following directions: 1. Collecting data on the location of plastic pollution around the world, including natural debris, to train AI models; 2. Developing a global database of different types of plastic litter for deep learning; 3. Developing and testing new ML algorithms; 4. Creating a system for monitoring sources of plastic waste entering coastal areas, beaches, and river systems;

Figure 1. Methodological flowchart showing the sequential steps for automatic detection of plastic waste in the ocean [compiled by the authors].

Figure 3. Example of a classification error in a Sentinel-2 image (18 April 2019). Misidentified "plastic targets" are shown in the right middle part of the image with a bold white square. Areas labeled "A" represent commission errors due to the presence of clouds and shadows; "B" represents false detection of ships, tracks, and exhaust fumes; and "C" represents false detection of pixels filled with intense sunlight. Almost the entire coastline is misclassified due to the effect of reflection from the bottom [44].
Figure 4. (a) Comparison of the spectral reflectance of all pixels containing plastic objects and all pixels capturing water [39]; (b) spectral curves of the two types of plastic [52]; (c) average spectra calculated over all pixels identified in the study [39]; (d) spectra of plastic, seaweed, and water [32].

Figure 5. Percentage of all limitations in plastic identification from satellite imagery referred to in the reviewed studies [compiled by the authors].

Table 1. Advantages and disadvantages of using satellite and UAV imagery [compiled by the authors].

Table 2. Projects in the detection of plastic in water from 2015 to 2024 [compiled by the authors].

Table 4. Indexes and formulas used to investigate the possibility of detecting plastic in water.

Table 5. Research in the field of plastic detection in water using machine learning [compiled by the authors].
10,987.2
2024-08-01T00:00:00.000
[ "Environmental Science", "Computer Science", "Engineering" ]
Nanomaterial Complexes Enriched With Natural Compounds Used in Cancer Therapies: A Perspective for Clinical Application Resveratrol and quercetin are natural compounds contained in many foods and beverages. Reports indicate implications for the health of the general population; on the other hand, both compounds have shown interesting results in the treatment of many diseases, such as cardiovascular conditions, diabetes, Alzheimer's disease, and viral and bacterial infections, among others. Based on their described anti-inflammatory, antioxidant, and anti-aging capacities, resveratrol and quercetin show antiproliferative and anticancer activity specifically in malignant cells. These molecular characteristics have triggered the pharmacological repurposing of both compounds and spurred research on treating different cancer types, with interesting results in in vitro, in vivo, and clinical trial studies. Meanwhile, the development of systems for site-specific drug release, such as nanomaterials and specifically nanoparticles, strengthens the prospect of personalized treatment alongside current cancer therapies, which are usually invasive and aggressive; the perspective of nanomedicine as more effective and less invasive has gained popularity. Knowledge of the molecular interactions of resveratrol and quercetin in disease confirms the evidence of multiple benefits, while multiple analyses suggest a positive response for the treatment and diagnosis of cancer at different stages, including the metastatic stage. The present work reviews reports on the impact of resveratrol and quercetin in cancer treatment and their effects when the antioxidants are encapsulated in different nanoparticle systems, which improve the prospects of cancer treatment. INTRODUCTION Cancer is a collective of multi-factorial diseases brought about by intracellular and environmental factors such as genetic, epigenetic, and metabolic deregulations, among other risk factors (1,2). This complex of diseases arises from many variations at the molecular level that deregulate molecular pathways and intermediary molecules and cause a loss of control over distinct pathways, stimulating tumorigenesis; it is associated with lifestyle, longevity, and other risk factors related to contemporary life. However, palaeontological evidence has revealed cancer in fossil remains up to 1.98 million years old (3,4). Nowadays, cancer has grown into a public health problem of universal attention; because of the high cost and invasiveness of treatments, considerable research has been directed at different drugs and modern treatments that are less invasive and more efficient (5,6). The progress of technologies such as nanomedicine (Nm) has opened new scopes for the treatment of many diseases (7). By definition, Nm serves to improve biodistribution and in situ drug release for a specific disease, particularly in pathologies whose treatment strategies are invasive and nonspecific. For example, it could improve the release of chemotherapeutic drugs in cancer therapy and could improve the balance between the effectiveness and toxicity of the drugs (8). In this context, the application of different nanomaterials in the form of nanoparticles (Nps) has attracted attention because of the multiple advantages they present, such as low manufacturing cost, delivery of drugs to specific sites, less invasive therapies, and greater efficiency in treatment and recovery (9).
Nowadays, cancer treatment comprises the application of drugs and chemotherapy, along with surgical intervention (10). For cancer therapy, alternative drugs such as resveratrol (Rv) and quercetin (Qr) have shown potential, since their chemical characteristics allow them to form complexes with different Nps, improving their release (11,12) (Figure 1 and Table 1). The present work reviews the perspectives and advances in implementing these drugs and their complexation with nanomaterials for use in cancer treatment. CURRENT DRUGS IMPLICATED IN CANCER TREATMENT Nowadays, the application of chemotherapeutic drugs is the conventional treatment, where each treatment must be specific for the type of cancer (19,20). Constant research on new drugs and new prospects has brought about novel points of view on current cancer therapy (21). For example, Zhou et al. conducted a phase 3 clinical trial in which they tested the response of lung cancer (LC) patients to Atezolizumab (Az) plus chemotherapy, showing that this combination led to longer survival and a lower likelihood of relapse (22). In another study, the effect of Lutetium-177-Dotatate, an experimental drug in phase 3 for the therapy of neuroendocrine tumors, showed that participants treated with Lutetium-177-Dotatate had a higher survival rate; moreover, the participants showed longer progression-free survival (PFS) in comparison with the control group treated with high doses of Octreotide LAR (20 to 30 mg) (23). The impact of Olaparib in subjects diagnosed with human epidermal growth factor receptor type 2 (HER2)-negative breast cancer (BC) showed important benefits, and the data suggested that the application of Olaparib extends PFS by 2.8 months, while the risk of disease progression was 42% lower compared to monitored control groups (24). In Alzheimer's disease, Rv modulation of neuro-inflammation showed the regulation of genes related to neurodegenerative disorders such as sirtuin 1 (30,31). Glycemic and HDL-cholesterol levels could be controlled by Rv through the regulation of genes such as PPAR-γ and sirtuin 1 in patients with type 2 diabetes mellitus and coronary heart disease, improving the metabolic status of patients after 4 weeks of treatment; notwithstanding some limitations, the study suggested a relevant effect of Rv in type 2 diabetes mellitus patients (32). Most et al. demonstrated the impact of a 12-week Rv treatment on fat oxidation; the results suggested a modification of the patients' microbiota, specifically in men, and a positive effect on fat oxidation and mitochondrial oxidative capacity. Nevertheless, more studies are necessary to understand the correlation between gender and therapy effect (33). Another study analyzed 85 patients with stable coronary heart disease. The study reported improvement of treatment with β-blockers, statins, and aspirin administered with Qr as an adjuvant, supporting the cardioprotective properties of Qr at different levels (34). On the other hand, the use of Qr for treating different pathogens was analyzed in recurrent HPV: 59 patients were treated for 551 days, and after treatment 100% of patients were HPV-free. The analysis of results did not show adverse events in symptomatic patients; in contrast, the results were not replicable in asthma patients, so more studies are necessary (35).
Another Qr application is the control of metabolic diseases. Qr supplementation limited the colitis effects of Citrobacter rodentium, consistent with other studies that have shown an impact on the modification of the microbiota and on the inflammatory process. Although Citrobacter rodentium is a non-human pathogen, the data suggested a possible use of Qr for treating gut pathogens (36). Likewise, Qr treatment improved the morphology and histopathology of the testis. The treatment increased plasma and testicular testosterone concentrations, suggesting that Qr could prevent the toxic effects induced by sodium arsenite (37). Notwithstanding, the pharmacological repurposing of Rv and Qr for cancer treatment has increased since researchers have shown, among other effects, their anti-inflammatory, antioxidant, and anticancer activity. Perspectives of Resveratrol in Cancer Treatment The use of Rv as a drug in cancer treatment has been studied. Rv is a natural polyphenolic stilbene produced by certain plants with antioxidant, anticancer, anti-inflammatory, and anti-aging properties (38-41). Investigations of Rv confirmed these properties in different pathologies such as rheumatoid arthritis, Alzheimer's disease, type 2 diabetes, allergic rhinitis, and cancer (31,42-44). Researchers have studied the uses of Rv in cancer therapy. Banaszewska et al. reported Rv's impact on the reduction of ovarian and adrenal androgens in the treatment of polycystic ovary syndrome (POS) (45). Another published clinical trial showed Rv's effect on reducing Vascular Endothelial Growth Factor (VEGF) and Hypoxia-Inducible Factor 1-alpha (HIF1) expression in granulosa cells of women with POS. The study showed a relation between the protective effect of Rv in the patients and changes in sexual hormone levels, probably through modification of the angiogenesis pathway related to VEGF and HIF1 expression levels; besides, the study suggested that Rv consumption leads to higher oocyte quality and embryo rate (46). A pilot study of a phytotherapeutic approach to preventing PC relapse by applying turmeric, Rv, green tea, and broccoli sprout revealed phytotherapeutics as a workable approach to prevent relapse of PC and as a potential treatment in men with biochemically recurrent PC and a moderately increasing serum prostate-specific antigen (47). Rv has been examined in BC treatment in a group of female mice (C57BL/6) that received a special diet plus Rv; the treated group developed smaller tumors in contrast with the control group; likewise, the in situ presence of Rv indicated a protective effect on BC development (48). Conversely, other reports indicated that Rv can induce BC apoptosis via p53 through the phosphorylation of Ser-15, an effect that can be masked by dihydrotestosterone in these cells through receptor competition; however, the result also suggested a protective role mediated by the induction of apoptosis by Rv (49). Other investigations treated CRC cells with Rv in conjunction with pharmacological inhibitors. The data showed that Rv affects cell phenotype, suppressing invasion and reducing cell viability in comparison with the pharmacological inhibitors. Associating the effect with Sirt1 up-regulation, FAK down-regulation, and the inhibition of focal adhesion, Rv use showed inhibition of the NF-κB pathway by suppression of molecular intermediates involved in invasion, metastasis, and apoptosis (50).
Another investigation exhibited the anti-tumor effects of Rv in CRC cells. The authors showed that the application of Rv together with MALAT1 lentivirus short hairpin RNA inhibited the Wnt/β-catenin signaling pathway by interfering with the expression of targets such as c-Myc, MMP-7, and MALAT1 (51). Perspectives of Quercetin in Cancer Treatment Qr is a naturally occurring flavonoid found in a wide variety of fruits such as watermelon, cantaloupe, avocado (one Qr unit/piece), blueberries (96 units), and apples (24-58 units); vegetables such as tofu, squash, and corn (one unit), yellow onions (326 units), and green beans (25 units); and beverages such as Lipton tea (26 units) and wine (8.4 units) (52). Qr is investigated for its biological activity as an antioxidant, anti-inflammatory, and anticancer molecule. Besides, the antiproliferative and proapoptotic activity of Qr is specific to cancer cells (53-55). In vitro research reported improved apoptosis induction in human melanoma cells treated with Qr in comparison with tamoxifen (Tx). The use of 3 μM of each compound triggered an important change in the clonogenic capacity of M10, M14, and MNT1 cell lines, as well as the ability of Qr to promote cell apoptosis through heat shock protein-70 (Hsp70) downregulation. The data suggested a protective role of Qr and Tx in melanoma, indicating the potential effect of both drugs in cancer treatment (56). Nowadays, many strategies, such as doxorubicin (Dx), result in dose-dependent toxicity, a common hazard to continuing therapy, specifically in drug-resistant cancers. In liver cancer (LiC), study of the protective role of Qr against toxicity in mouse models revealed that Qr potentiated the anti-tumor effect of Dx and protected normal cells from the dose-dependent toxicity generated by the therapy. The Qr anti-tumor effect in LiC cells was mediated by the accumulation of p53 and the subsequent activation of mitochondrial apoptosis through the cleavage of pro-caspases. The use of Qr against toxicity could protect the patient's normal cells and could mitigate therapy resistance (57). Recently, the effectiveness of Qr as an adjuvant in the therapy of advanced pancreatic cancer (PcC) and other cancer types was demonstrated. Gemcitabine (Gc) use in PcC is frequent, but the development of drug resistance is normally the source of chemotherapy failure. Therapy enriched with Qr induced apoptosis, caused cell arrest in the S phase, and increased p53 expression; besides, a boosted Gc effect by Qr, especially in cancer cells resistant to Gc, was observed (58). Furthermore, the evidence indicated Qr's anticancer role in PcC resistant to Gc treatment, presenting a relation between the high mortality rate and the receptor for advanced glycation end products and its role in different signaling cascades. The authors tested autophagy stimulation in Gc-resistant PcC cell lines treated with Qr as an adjuvant. The results implied that the autophagy effect was mediated by the deletion of the receptor for advanced glycation end products, which led to an increase in the Bax/Bcl-2 ratio and down-regulation of NF-κB p65 expression, triggering CASP3-dependent apoptosis in the cell lines studied (59). Other reports indicated absent or low Qr toxicity in rats treated for PC.
The data showed that a dose of 30-3,000 mg of Qr/kg for 28 days did not produce secondary effects in the experimental groups, demonstrating that Qr can generate chemo-protection in in vivo models through down-regulation of oncogenes related to cell survival and regulation of proteins related to apoptosis signaling (60). Furthermore, Qr could be employed as a preventive therapy for BC in female ACI rats by providing food enriched with a dose of 2.5 g/kg of Qr for eight months. Rats fed Qr plus 17β-estradiol showed a higher PFS rate in comparison with rats fed only 17β-estradiol. The survival rate in the group fed Qr plus 17β-estradiol was lower compared with the group fed only Qr (61). NANOPARTICLES AND THEIR USE IN CANCER TREATMENT In past years, Nm has experienced exponential growth, generating new approaches in the treatment of several diseases, including different types of cancer (66). Research has generated remarkable progress in the synthesis, fabrication, and characterization of Nps, specifically liposomes for intravenous administration (67,68). Nps can form complexes with polymers, metals, and inorganic materials to control delivery efficiency and effective properties. The synthesis of an Np with an indocyanine green core, coated with poly(lactic-co-glycolic acid) and cancer cell membrane, has shown impact in in vitro and in vivo models; the groups treated with the system plus laser underwent cell lysis. The therapy inhibited tumor growth 6 days after starting the treatment, and the survival rate was 40%; likewise, it showed a higher biomimetic rate and the ability of the system for theranostics (69). Perspectives in BC treatment analyzed the activity of nanostructured lipid carriers with sulforaphane and Tx, considered the "gold line" in some BC molecular subtypes; the system potentiated the Tx effect, decreased drug toxicity, which is a regular problem for patients' quality of life, and improved the binding affinity of Tx to estrogen receptors (70). A phase 2 clinical trial analyzed the use of nanoparticle albumin (Am)-bound paclitaxel (Px) in combination with Gemcitabine-Cisplatin (GcCs) in advanced biliary tract cancer therapy. The typical therapy results in an average PFS of 8 months and an overall survival of 11.7 months. Notwithstanding, this approach prolonged the average PFS to 14.9 months, and 58% of participants reported adverse events (71). Resveratrol Nanoparticles in Cancer Treatment The use of Nps is valuable in biological applications, especially in the implementation of certain natural compounds such as Rv (Table 2). Table 2 summarizes some reported examples of the use of Rv Nps in the treatment of different cancer types. In 2018, Peñalva et al. determined the increased bioavailability of Rv complexed with casein Nps. The report described the release of Rv at physiological pH and showed that 100% release efficiency occurred at gastric fluid pH after 9 h; besides, the data were similar in Wistar rat models; pharmacokinetic data reported a half-life of about 2.7 h, i.e., ten times higher accumulation compared with other distribution procedures; also, they demonstrated that the excretion of the system occurred 48 h post-oral administration (87).
A study showed that the release of Rv from mesoporous silica Nps in PC reached 100% at pH 7.4, 8 h post-administration; the therapy diminished PC3 cell proliferation at 20 μM, while the IC50 was 14.86 μM; besides, the use of this system plus Dc showed a 50% increase in cytotoxicity in cells immune to Dc (88). In another 2018 study, Rv used as an adjuvant to omega-3 polyunsaturated fatty acids encapsulated in a lipid matrix showed a 25% lower oxidation rate in rat models; the integration of the complex in HT-29 CRC cells was 277% greater, and the cell-growth inhibition rate improved at 72 h post-treatment in different adenocarcinoma cell lines; besides, CASP3 activation in cells was 150% higher compared with the control groups. Cell proliferation in the treated group was 20.4% lower in contrast with the controls (82). The impact of Rv-ferulic acid loaded into chitosan-coated, folic acid-functionalized solid lipid Nps on apoptosis induction was also studied; the drug release was 42.87%, while the IC50 was around 10 μg/ml; besides, the induction of apoptosis was stronger in HT-29 and NIH 3T3 cells treated with the compound relative to non-treated groups (89). An in vivo study analyzed the impact of Rv and Dc encapsulated in lipid-polymer hybrid Nps conjugated with epidermal growth factor (EGF); the pharmaceutical co-delivery in vitro was 90% in HCC827 cells. By contrast, the co-release results in HUVEC cells did not show a difference between the treated group and the control group. Furthermore, in vivo models showed in situ localization in tumor tissue 48 h post-application, leading to smaller tumors and a tumor growth rate 79% lower in comparison with the control group (90). In 2016, a study analyzed the impact of an Rv-gold Np complex on the invasive process of MCF-7 (BC) cells prompted by 12-O-tetradecanoylphorbol-13-acetate. Therapy with 10 μM of the complex inhibited cellular migration and invasion, apparently by blocking NF-κB phosphorylation and the subsequent activation of MMP-9 and COX-2, molecules required in the metastatic cancer process (91). For Rv loaded in nano-capsules in a melanoma mouse model, in vitro results showed that 100 μM and 300 μM decreased the cell viability of B16F10 cells between 24 and 72 h post-treatment; besides, the treated mice produced smaller tumors 10 days post-therapy in contrast with the control group (92). The application of Nps in theranostic procedures has had an effective impact in human neuroglioma with low toxicity (<10%); the system supplemented with Rv induced apoptosis in 81.4% of the treated cells; besides, the results showed greater targeting of tumor cells 5 min post-treatment and improved in situ localization 6 h post-treatment (83). On the other hand, Lv et al. reported the manufacture of a micro-bubble structure capable of discharging Rv at a specific pH; the capability of this system to deliver or release Rv was faster at acidic pH (~5.0) than at near-physiological pH (~7.0); besides, its bio-safety was higher than that of other systems (93).
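Several of the studies above quote IC50 values (e.g., 14.86 μM and ~10 μg/ml). As a hedged illustration of how such values are typically estimated, the sketch below fits a four-parameter logistic (Hill) curve to concentration-viability data; the data points are synthetic placeholders, not measurements from the cited reports.

```python
import numpy as np
from scipy.optimize import curve_fit

# Four-parameter logistic (Hill) dose-response model: viability falls from
# `top` to `bottom` around the half-maximal concentration `ic50`.
def hill(conc, bottom, top, ic50, slope):
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100])  # uM (hypothetical doses)
viab = np.array([99, 97, 90, 75, 55, 30, 12])   # % viability (hypothetical)

# Fit with all parameters constrained to be non-negative.
popt, _ = curve_fit(hill, conc, viab, p0=[5, 100, 10, 1], bounds=(0, np.inf))
print(f"estimated IC50 ~ {popt[2]:.1f} uM")
```

The same fitting procedure applies regardless of the delivery system; what the nanoparticle formulations above change is the effective concentration reaching the cells, which shifts the fitted IC50.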
In vitro and in vivo experiments analyzed the impact of encapsulating Rv in Am Nps with human serum albumin (HSA) in PANC-1 cells and Balb/c nude mice. The procedure could encapsulate 62.5% of the Rv, with efficient drug release at pH 5.0 and 37°C; the system triggered cellular apoptosis in most cells (85%) through pyknotic nuclei formation; also, the half-life of the Np was improved with HSA in in vivo models. Furthermore, the system showed no tissue toxicity (84). Another study demonstrated the apoptotic potential of Gc in human ovarian cancer using Rv as a reducing and stabilizing agent in silver Nps; the system reduced viability and cellular proliferation in the A2780 line to below 60%; likewise, it showed free-radical generation, and the treated cells raised CASP3 and CASP9 expression. Furthermore, the system led to DNA damage and the consecutive induction of apoptosis (85). On the other hand, Gumireddy et al. generated a novel nano-compound based on 2-hydroxypropyl β-cyclodextrin in complex with Cm and Rv in a solid lipid Np; this formulation increased the solubility of the Np. They confirmed that the bio-availability and anticancer activity of the compounds rose; in vitro application showed that drug release was optimal under physiological conditions, with an IC50 of 9.9 μM (94). The Rv effect in MDA-MB-231 BC cells promoted by oxidized mesoporous carbon Nps raised the intracellular levels of Rv in BC cells; the results pointed to a 2.8-fold increase relative to the control group; apoptosis induction was 40% stronger than in the control group; also, the compound caused apoptosis via PARP cleavage and activation of CASP3, and the cytotoxicity of the Np was lower 24 h post-treatment (86). Another perspective demonstrated, in primary patient samples of chronic lymphocytic leukemia, the improved transfection of ribonucleic acids such as mRNA and siRNA encapsulated in Nps mediated by Rv; this method improved the transfection rate after 1 h of exposure to 10 μM of Rv, indicating that this approach enhances transfection; besides, the results showed a minimal toxicity rate in treated cells (95). Quercetin Nanoparticles in Cancer Treatment Another attractive natural compound used in cancer therapy analysis is Qr. Table 3 summarizes some reported examples of the use of Qr Nps in the treatment of different cancer types. Bishayee et al. studied the Qr gold complex with Nps of poly(DL-lactide-co-glycolide); treatment of HepG2 HCC, HeLa, A375, and WRL-68 cells with the system showed differential toxicity in cancer cells, especially in HepG2 cells; they proved arrest in the S phase of the cell cycle, contributing to reduced cell proliferation. The complex has the capacity to interact with DNA and promote the production of ROS, leading to cell apoptosis (102). Other studies in the HaCaT cell line treated with poly(lactide-co-glycolide) copolymer loaded with Qr showed that the release kinetics at physiological pH were similar to those at acidic pH, with a cumulative drug release of 70% that triggered a lower cancer-cell viability rate in contrast with the control group (103). For its part, a second report on the same Np system showed an encapsulation efficiency of 81.7% and the inhibition of COX-2 after 6 h of UV damage; besides, the protective capability of Qr against risk factors increased (104).
Nano-diamonds loaded with Qr in HeLa cells showed that a concentration of 100 μM of Nps inhibited cell growth by 54% 58 h post-treatment, and anti-proliferative properties appeared in 74% of the cells; besides, cell viability was reduced to 50% 72 h post-treatment; the results indicated the induction of apoptosis through the cleavage of pro-CASP3 72 h post-treatment (105). Other reports revealed that Qr-gold Nps impacted autophagy induction and apoptosis in U87 cells and in male BALB/c mice; the use of 50 μg/ml reduced cell viability by up to 50%; further, in vitro experiments indicated conversion of LC3B-I into LC3B-II. Furthermore, p62 induction was reduced in the Qr-treated group; meanwhile, in vivo results showed a KI-67 decrease; besides, mice treated with the system did not develop detectable tumors, and the treatment could inhibit the PI3K/Akt/mTOR pathway (97). In fifty cervical cancer patient samples, treatment with gold Nps loaded with Qr led to autophagosome induction and lower Janus kinase 2 expression; besides, the treatment arrested cells in the G0/G1 phase and reduced the induction of the S phase; this effect induced down-regulation of STAT5 and Bcl-2 and up-regulation of BAX, BAD, Cyto-c, Apaf-1, and CASP3. Furthermore, the results showed suppression of the PI3K/AKT pathway, and cyclin-D1 suppression led to the formation of autophagosomes and cell apoptosis (106). Other strategies for cancer therapy, based on superparamagnetic Nps enriched with Qr, showed in the MDA-MB-231 and HeLa cell lines that the system could load 12.1% of the drug with an encapsulation percentage of up to 80% of Qr; under physiological conditions, up to 83% of the Qr could be released 250 h post-treatment, triggering a reduction in cell growth. Furthermore, the nanocarrier achieved high bio-compatibility and strengthened the intracellular delivery of Qr (107). In the context of site-specific drug delivery, Liu et al. reported that the cleavage of CASP3 and CASP9, together with changes in cytochrome-c localization, increased the apoptosis rate by 20% and regulated AP-2β/hTERT signaling as well as the p50/NF-κB/COX-2 and Akt/ERK1/2 pathways; consistently, treated mouse models showed significantly lower tumor volumes and weights (109). A gold nano-cage with tetradecanol was loaded with Qr and Dx to analyze co-delivery in MCF-7/ADR cells, determining that the system could release 10% of the Dx and 7% of the Qr after 2 h at 37°C, showing a relation between temperature and release efficiency; besides, the treatment inhibited the expression of permeability glycoprotein in MCF-7/ADR cells; likewise, the rate of early apoptosis was 55.9%, and the system arrested 57.9% of treated cells in the G2/M cell-cycle phase; besides, the IC50 of the system was 1.5 μg/ml (110). Zhao et al. camouflaged Qr-loaded hollow bismuth selenide Nps in macrophage membrane for BC therapy; the system presented rapid binding and drug release, showing high bio-compatibility in 4T1 cells; in vitro data showed down-regulation of HSP70 and a 43.3% apoptosis rate mediated by the inhibition of AKT phosphorylation; the induction of cleaved CASP3 and photo-thermal therapy synergy in a mouse model proved a strong targeting ability 6 h post-intravenous injection, and tumor size reduction appeared 8 days post-treatment; interestingly, metastasis capacity decreased by 17%, and the system presented a low hemolysis rate, indicating high biocompatibility (99).
Different treatment perspectives were approached by Nan's research group, which produced TPP-chitosan Nps in complex with Qr to treat and prevent skin deterioration and skin cancer in HaCaT cells and by topical application in mouse models; in vitro data revealed that the system increased internalization and retention in HaCaT skin cells; in vivo results confirmed the protective effect of the system after UV damage through inhibition of the NF-κB/COX-2 signaling pathway via down-regulation of IκB-α. In fact, the system prevented the mice in the experimental group from developing edema, showing a higher thickness of the epidermis and dermis (111). Therapy with 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-methoxy-poly(ethylene glycol 2000) and D-α-tocopherol polyethylene glycol succinate in complex with Qr and alantolactone released 7.6% of the total Qr loaded; in CT26-FL3 tumor-bearing mice, treated tumors had a considerably smaller volume; the therapy reduced the content of Treg cells, showing inhibition of IL-10, TGF-β, IL-1β, and CCL2 and an increased effect of CD3+ T-cells, and the system improved the survival rate (100). On the other hand, Liu et al. analyzed the relation between Qr-loaded nanocrystals of different sizes and their biological effects. Treatment of A549 cells with three different size systems (200 nm, 500 nm, and 3 μm) reduced cellular proliferation by 50%, specifically for the 200 nm and 500 nm systems; smaller nanocrystals with higher Qr concentrations correlated with poorer formation of microfilaments, blocking the normal localization of actin fibers, and the reduction of STAT3 expression changed the migration rate after 24 h of treatment (112). Other therapy perspectives based on Zr-MOF loaded with Qr could sensitize the DNA in different tumor cell lines; the treatment reduced the survival rate by 18%, and treated cells were more sensitive to irradiation. DNA breakage and the induction of γ-H2AX were higher in the treated group; in vivo models showed 8% in situ bio-distribution. The inhibition of HIF1 could suppress the development of neo-vascularization in tumor tissues, and the analysis of BALB/c mice showed that the treated group had a 52.8% tumor inhibition rate through the down-regulation of Ki-67 (113). In relation to the effect of Qr on DNA, Abbaszadeh et al. reported that the use of a chitosan-based nano-hydrogel loaded with Qr could alter global genomic DNA methylation and down-regulate DNMTs (DNMT1/3A/3B) in HepG2 cancer cells, increasing the level of methylated cytosine; the correlation between Qr use and DNA methylation rate improved the anti-tumor effect (101). Other analyses proved Qr delivery by TiO2 and Al2O3 Nps in MCF-7 cells, showing that the use of 25 μg/ml of the system has 90% bio-compatibility, and the internalized nanocarrier plus irradiation reduced cell viability by 50% relative to the control group. Furthermore, the treatment with nano-sheets plus irradiation promoted ROS production and DNA breakage and altered mitochondrial function, triggering apoptosis or other cell-death pathways (114). Other processes described for producing economical Nps enriched with HA plus Qr for tumor targeting showed a 100% compound release rate and promoted internalization via the CD44 receptor in 4T1 and HepG2 cells; besides, they showed a 26.55% apoptosis rate, and in vivo data showed that the treatment led to less tumor growth (115).
CONCLUSIONS Multiple results from in vitro, in vivo, and clinical studies in different phases emphasize the potential of natural compounds as drugs for cancer therapy; here we brought to light the general benefits of these compounds compared with synthetic drugs. In addition, diverse studies showed the role of natural compounds as adjuvants in minimizing the secondary effects of classical therapies such as chemotherapy and surgery, with improvement observed in the groups treated with natural compounds. In this review, we focused especially on the effects of resveratrol and quercetin in cancer treatment. We noted an upsurge of interest among investigators in finding out the mechanisms of these compounds in cancer treatment and, based on the reports of different investigations, the possible specificity of these natural compounds in sensitizing abnormal cells, triggering different molecules that lead to cellular damage through the activation of apoptosis; the induction of this pathway could be related to the activation of other pathways, such as caspases, PI3K/AKT/mTOR activation, and DNA double-strand breaks. The development of different techniques has boosted the efficiency of drug release; these technological advances could improve cancer treatment through the induction of an immune-system response, and the complexation of nanomaterials with different cellular membranes could strengthen drug specificity for cancer cells. In this understanding, the development of nanomaterials aided by in silico applications results in pre-designed systems of molecules more efficient for certain types of cancer, potentiating the multiple benefits of nanocomplexes; likewise, novel approaches and modification of classic methodologies could decrease the cost of nanoparticle production. Despite the promising benefits presented in in vitro and in vivo models of resveratrol and quercetin complexed with different nano-materials enriched with certain drugs, and because of the insufficient clinical evidence, phase I clinical studies are currently required to confirm the results reported in in vitro and in vivo models; besides, other technologies, such as 3D culture analysis, could bridge the gap between these pre-clinical studies and phase I clinical studies. Novel perspectives for preventing the evolution of any type of cancer are being pursued, centered on treatment with natural drugs. The evidence presented in certain research proposes that the protective character of resveratrol and quercetin could make them molecules with potential application in preventive medicine; some data showed the preventive aspect of certain types of diets under a specific plan. The fast advancement in phytopharmaceutical research has triggered many efforts to reinterpret the treatment of a disease as complex as cancer.
The many advantages of using natural compounds have been shown in different cancer types, frequently in clinical studies in different phases. Although these studies have not concentrated on resveratrol or quercetin complexed with any type of nano-material, they have shown that the use of these molecules can be enhanced, because this type of nano-formulation favors not only the treatment but can also serve in preventive complexes, with promising molecules and hopeful results in theranostics, thus improving the probability of good progression for patients. AUTHOR CONTRIBUTIONS MS-L, EJ-A, CL-C, AP-G, CL-P, and MS-C wrote all the sections of the manuscript. The corresponding author and principal author conceived and designed the review. All authors contributed to the article and approved the submitted version.
7,366.8
2021-04-01T00:00:00.000
[ "Medicine", "Materials Science" ]
Nano-Structured Gratings for Improved Light Absorption Efficiency in Solar Cells Due to the rising power demand and substantial interest in acquiring green energy from sunlight, there has been rapid development in the science and technology of photovoltaics (PV) in the last few decades. Furthermore, the synergy of the fields of metrology and fabrication has paved the way to improved light-collecting ability for solar cells. Based on recent studies, the performance of solar cells can improve with the application of subwavelength nano-structures, which results in smaller reflection losses and better light manipulation and/or trapping at the subwavelength scale. In this paper, we propose a numerical optimization technique to analyze the reflection losses on an optimized GaAs-based solar cell which is covered with nano-structured features of the same material. Using the finite difference time domain (FDTD) method, we have designed, modelled, and analyzed the performance of three different arrangements of periodic nano-structures with different pitches and heights. The simulated results confirmed that different geometries of nano-structures behave uniquely towards the impinging light. Introduction Given the economic and environmental incentives, and the resulting paradigm shift towards more sustainable development, producing clean energy is becoming increasingly important, and PV systems development is one of the fastest growing industries. Solar cells or photovoltaic (PV) systems, categorized as solid-state electrical devices, have been suggested as a clean alternative method for generating electricity from sunlight. They have attracted a lot of attention due to their viability, reliability, and accessibility, especially in remote areas and even in satellite systems [1]. The Compound Annual Growth Rate (CAGR) of PV installations was reported to be 44% between 2000 and 2014 [2]. However, the performance of solar cells is still far from a satisfactory level. Ongoing research towards enhancing the generation of clean electrical energy is focused on improving the functionality of solar cells. Studies on the optimization of PV cells are mainly about the construction of more efficient and flexible PV cells, allowing for easy installation and transportation. Solar cells convert sunlight into DC voltage due to the PV effect [3]. Subsequently, the generated DC electricity is transformed into an AC source via a converter, and thus it can be introduced to various systems for applications. Furthermore, the transformation of solar energy to chemical energy can be performed by a PV cell named a photoelectrolysis cell (PEC), which uses photon energy to split water [4,5]. Absorption of the illuminated sunlight results in the creation of electron-hole (e-h) pairs, which are later separated under the influence of an internal electric field. Generally, PV cells are composed of layers of semiconductor materials, conventionally silicon in its different crystalline forms, i.e., mono- or multi-crystalline. While crystalline silicon alone does not show promising electrical conductivity, selectively contaminating the semiconductor at a controlled level, namely doping, helps to generate a good amount of electric current [6,7]. Usually, the top and bottom layers of the PV cell are doped with phosphorus and boron to enable negative and positive charge generation, respectively [8].
An individual square cell averaging about four inches on each side is only able to produce a small amount of power, and due to its small size, once it is exposed to harsh environmental conditions, it might fail to function properly. Thus, interconnected solar cells are usually grouped together and connected in series to form modules for commercial usage. Furthermore, bigger units called solar panels and solar arrays can be assembled in groups or used individually. The panels and arrays are usually protected with glass or plastic front covers against environmental threats to eliminate potential damage [9]. Second-generation solar cells, which benefit from relatively newer manufacturing technologies, are a thin-film solar cell group developed by depositing layers of thin films on a substrate [10]. The deposited material is not confined to silicon, and this method is growing rapidly due to its relatively easy mass-production capability, flexibility to work under different situations, and fine performance under high temperatures. Regardless of the advantages of thin-film solar cells, it has been argued that the alternative technologies used for the development of second-generation solar cell designs do not achieve a better efficiency compared with the first generation. Therefore, the technology of third-generation or next-generation solar cells was introduced, which includes thin-film solar cells combined with organic [11,12], polymer [13], dye-sensitized [14-16], nano-structures, and nano-structured interfaces, i.e., nano-wire/particle or quantum dot solar cells [17-20]. There are several environmental parameters affecting solar cell performance, such as geographical latitude, seasons, weather, position, and the sunlight's angle of incidence [21-23]. However, it is fundamental to study the characteristics of PV systems from a basic point of view. These include parameters which have a direct relation with the material composition and physical design of the PV cell, which ultimately enable the device to collect more sunlight. The choice of material for nano-structures in solar cell design is broad, and the PV industry is facing challenges to achieve the best efficiency [24-26]. Hence, it is useful to design solar cells with specific geometries to benefit from some interesting characteristics; e.g., using metallic and semiconductor nano-structures can cause extraordinary optical transmission for light localization [27-29]. Using metals to build nano-structures introduces plasmonic properties; however, semiconductor nano-structures are also popular for improving light absorption performance [30-32]. While our understanding of semiconductor nano-structures is growing rapidly, interest in engineering them is rising due to the fact that semiconductors are less lossy than metals [33]. Among all the mentioned types, a convenient method to obtain optimal light-matter control and manipulation at the subwavelength (SW) scale is to use nano-structured features. Recently, nano-structures have received attention due to unique characteristics which have led to the development of various nanoscale instruments, such as biosensors, imaging devices, photodetectors, and PV cells [34-38]. Therefore, the design of proper nano-structures for specific purposes has become of great interest [39-41].
The working mechanism of nano-structures basically involves interaction with light, providing characteristics different from bulk material properties. Renewable energy, and specifically solar cell designs, are directly influenced by these properties, such as the larger surface-to-volume ratio provided for sunlight exposure on the PV cell surface once covered with nano-structures [42]. In this paper, we study the effect of nano-structure optimization on the performance of solar cells with our specific choice of material, i.e., gallium arsenide (GaAs). Due to extensive research on silicon technology, many photovoltaic and optoelectronic devices have been based on silicon, given its well-established processing and affordability [43,44]. However, silicon solar cells have a lower efficiency compared with other substitute materials. GaAs has interesting properties that can easily overcome silicon applications in industry, and it is turning into the reference system for thin-film solar cells [45]. These properties include having a wide and direct band gap and being twice as effective as silicon in converting the incident solar radiation to electrical energy while being much thinner than the common bulk silicon bases. This quality results from the faster movement of electrons through crystalline structures [46]. Moreover, the GaAs surface is very resistant to moisture and ultraviolet (UV) radiation, which makes it quite durable. Moreover, GaAs in its pure single-crystal form has a high optical absorption coefficient and mobility near the optimum range for solar energy conversion [47,48]. Alta Devices, Inc. reported that GaAs naturally moves more electrons to the conduction band and effectively converts the sun's energy into electricity [49], and presented a new world record for the solar cell conversion efficiency of GaAs thin film of about 28.2% [50]. Several interesting studies have been reported afterwards to improve the performance of GaAs-based solar cell devices [51-53]. Considering the extensive research done on the characterization of GaAs-based solar cells, we are interested in studying a thin-film-based solar cell partially covered with optimized nano-features that act as an anti-reflective coating and improve power absorption efficiency. Furthermore, they satisfy the zero-order (ZO) diffraction grating condition [54-56], for which, apart from the zero order of diffraction, higher orders are excluded as they decay during propagation, i.e., they are evanescent. Therefore, ZO diffraction gratings are considered to behave like a slab of ordinary homogeneous material with an effective refractive index in which, ideally, the losses are minimal [57,58]. For solar cell optimization, we intend to study different nano-structure geometries and investigate the T, A, and R parameters of the proposed designs.
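As a quick check of the ZO condition mentioned above, the sketch below counts the propagating diffraction orders from the grating equation, sin(θ_m) = sin(θ_i) + mλ/(nΛ); an order m propagates in a medium of index n only if |sin(θ_m)| ≤ 1. The parameter values mirror the 830 nm illumination used here; the snippet is an illustrative aid, not part of the reported simulations.

```python
import numpy as np

def propagating_orders(pitch_nm, lam_nm=830.0, n=1.0, theta_i=0.0):
    """Return the diffraction orders m that propagate (|sin(theta_m)| <= 1)
    in a medium of refractive index n for a grating of the given pitch."""
    orders = []
    for m in range(-3, 4):
        s = np.sin(theta_i) + m * lam_nm / (n * pitch_nm)
        if abs(s) <= 1.0:
            orders.append(m)
    return orders

# In air at normal incidence, pitches below 830 nm support only m = 0
# (zero-order gratings), while 900 nm also supports m = +/-1.
for pitch in (100, 400, 800, 900):
    print(pitch, "nm ->", propagating_orders(pitch))
```

Running this reproduces the statement below that the 900 nm pitch falls outside the ZO regime while the shorter pitches remain zero-order gratings.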
The most important parameter for evaluating the performance of solar cells is the quantum efficiency. Quantum efficiency indicates the ratio of the carriers collected to the number of photons impinging on the solar cell. Therefore, it is necessary to reduce the reflection losses in solar cell design to increase the number of photons reaching the active area. Although nano-structures have proven to be efficient for light absorption in solar cells, it is important to examine how the relevant modification affects the overall performance of the device [42,59,60]. Our proposed GaAs-based design is composed of periodic nano-structures with a duty cycle (DC) of 100%. This indicates that within the nano-structured area, there is no active layer section which is directly exposed to the sunlight; hence, one nano-structure's bottom-base width is equal to the pitch. However, for trapezoidal and triangular features, grooves are created between two full pitches as a result of the inclined walls on the sides of the nano-structures. FDTD Simulation Method In this section, we introduce the modelling method implemented to obtain the light reflection (R), transmission (T), and absorption (A) response of the solar cell design. The finite difference time domain (FDTD) method is a simple numerical solution to model electromagnetic field components through the computation domain [61,62], for which a numerical grid network is designed, composed of unit cells for the transverse magnetic (TM) field, as shown in Figure 1. Electric (E) and magnetic (H) field components can be identified with respect to Maxwell's equations at the edges of the computation grid [63]. For TM illumination, only the E_x, E_z, and H_y components of the wave are non-zero. This condition provides the required wave vector alignment and facilitates the study of resonant interactions on the surface of nano-structured features. The distribution of the field components is always shown on the edges of the square cells indicating the FDTD grid system; thereby, each cell's field component information is required to update the calculations for the next cell. For the outer edges of the 2-D computation domain, an absorbing boundary condition, i.e., an anisotropic perfectly matched layer, is designed to successfully estimate the field components from the previous cell computation and prevent reflection loss from the boundaries. A mesh step size of 10 nm was selected for the computation; however, simulation results with a finer mesh also agree with the results reported in this paper. The excitation field is a Gaussian-modulated continuous wave with a center wavelength of 830 nm. For a photon to be absorbed by a semiconductor, its energy should be equal to or higher than the material bandgap [64]. Hence, in PV studies, the highest efficiency occurs for photons with energy close to the bandgap [65].
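The sketch below is a minimal one-dimensional FDTD loop illustrating the leapfrog update scheme described above. It is a simplified, hypothetical stand-in for the 2-D TM OptiFDTD simulation used in the paper: there is no PML, and GaAs is reduced to a lossless slab using only the real part of its index. It is included only to make the staggered E/H update equations and the Gaussian-modulated source concrete.

```python
import numpy as np

c0, mu0, eps0 = 3e8, 4e-7 * np.pi, 8.854e-12
dx = 10e-9                      # 10 nm mesh step, as in the paper
dt = 0.5 * dx / c0              # time step satisfying the Courant limit
n_cells, n_steps = 2000, 6000

eps_r = np.ones(n_cells)
eps_r[1200:1300] = 3.666 ** 2   # ~1 um GaAs slab (real part of index only)

Ez = np.zeros(n_cells)
Hy = np.zeros(n_cells)
f0 = c0 / 830e-9                # center frequency of the 830 nm source

for step in range(n_steps):
    # Update H from the spatial derivative (curl) of E on the staggered grid
    Hy[:-1] += (dt / mu0) * (Ez[1:] - Ez[:-1]) / dx
    # Update E from the spatial derivative (curl) of H
    Ez[1:] += (dt / (eps0 * eps_r[1:])) * (Hy[1:] - Hy[:-1]) / dx
    # Soft source: Gaussian-modulated continuous wave, as described above
    t = step * dt
    Ez[100] += np.exp(-((t - 60e-15) / 20e-15) ** 2) * np.sin(2 * np.pi * f0 * t)

print(f"peak field in the slab: {np.max(np.abs(Ez[1200:1300])):.3e}")
```

A production 2-D TM solver adds the E_x component, PML boundaries, and a dispersive GaAs model, but the alternating half-step updates shown here are the core of the method.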
Design of GaAs-Based Solar Cell In this section, we describe the solar cell design we have simulated using Optiwave software (OptiFDTD). This solar cell is composed of a semiconductor base with one-dimensional (1-D) nano-films/gratings on top. The incident light hits the nano-structure surface normally. Our study focuses on the geometrical properties of nano-structures made of GaAs, with a refractive index of 3.666 + i0.0612, on top of the same material, while the nano-structures' height and pitch vary. Furthermore, in this study, we investigate how covering the surface with nano-structures affects the solar cell performance, as varying the pitch size results in a change in the active area which is covered with nano-structures; e.g., a 900 nm pitch size covers the entire surface. For consistency, the duty cycle, which defines the lateral width of the nano-structure in one pitch, is selected to be 100% throughout this study. Nonetheless, for triangular and trapezoidal geometries with 100% DC, the spacing between the ridges provides texture on the top surface; hence, the reflected light might strike the walls and be absorbed, while this light on a flat surface is considered as loss. The specific geometries' responses will be discussed separately in the relevant sections, and the R, A, and T of the solar cell devices will be calculated; thereby, depending on our simulation results, we will have an overview of the nano-structures' suitable size for experimental studies. As shown in Figure 2, the application of nano-structures helps to reduce the reflection while interacting with the illuminated light. Figure 2b confirms that even applying a thin film assists in more field absorption. We design rectangular, trapezoidal, and triangular nano-structures with heights ranging from 50 nm to 400 nm and nano-structure pitches varying from 100 nm to 900 nm. Considering the wavelength of illumination in this study, the 900 nm pitch would be excluded from the ZO diffraction grating regime; hence, the device performs differently for nano-structures at 900 nm pitch compared with the shorter pitch sizes, which are all zero-order diffraction gratings. A normal silicon PV cell has a thickness of around 200 µm to 500 µm. Considering the conventional methods, thinner films are not easy to fabricate, and silicon has a low absorptivity factor which makes the usage of thicker films essential. Furthermore, thicker films provide good surface passivation and less surface recombination. However, we propose a thin-film-based solar cell design with a 1 µm thickness of semiconductor material, i.e., GaAs. While the device might lose the advantages of conventional thick solar cell fabrication, the flow of current and light trapping in the PV cell can be improved via the application of periodic nano-structures; hence, less material is needed to absorb the solar flux, which reduces the device size. Furthermore, nano-structures have the ability to make the electrons and holes less likely to interact on the PV cell surface. Oscillation of the incident sunlight coupled with the free electrons on the surface of the nano-structures makes e-h recombination less probable, and a high level of light trapping occurs in the GaAs thin film.
Optimization of Nano-Structured Solar Cell In this section, we provide information about optimizing solar cell performance with the application of different shaped nano-structures. For conventional solar cells, a substantial portion of light is expected to go through reflection losses [66]. Therefore, the application of diffraction nano-structures helps as an optimization method.
Our design, apart from having ZO diffraction grating characteristics, provides a useful property known as a gradual change in refractive index, which leads to a perfect anti-reflective (AR) medium. Hence, the nano-structures act like a homogeneous medium with an AR coating that allows a lower reflection loss to be achieved for a range of wavelengths and angles of incidence with respect to the incident light wavelength [67]. The first AR coating was experimentally presented on a glass substrate by Fraunhofer in 1817 [68]. However, the concept of a gradual transition in refractive index was first introduced by Lord Rayleigh in 1879, who mathematically reported the amount of reflection at the interface between tarnished glass and air [69].
Once the solar cell surface is illuminated, the photons hitting the top surface are either reflected at the top, absorbed by the semiconductor structure, or transmitted through the material [75]. Only the absorbed photons can move an electron from the valence band to the conduction band and generate power; the reflected and transmitted portions of the sunlight are undesirable and count as energy losses.

We vary the aspect ratios of the nano-structures, which govern the absorption and reflection of light in the PV cell. The aspect ratio is defined here as the ratio of the top base to the bottom base of the nano-structure. For an aspect ratio of one, the SW nano-structure is rectangular; for an aspect ratio of zero, it becomes triangular; and for any value between these two, it is trapezoidal, as made explicit in the sketch below.
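The following small helper makes this aspect-ratio convention explicit; the function name and the interval checks are our own choices, not part of the paper.

```python
def shape_from_aspect_ratio(ar: float) -> str:
    """Map the top-base/bottom-base aspect ratio to the grating geometry
    studied here (hypothetical helper mirroring the paper's convention)."""
    if ar == 1.0:
        return "rectangular"   # top base equals bottom base (thin film)
    if ar == 0.0:
        return "triangular"    # top base collapses to a point
    if 0.0 < ar < 1.0:
        return "trapezoidal"
    raise ValueError("aspect ratio must lie in [0, 1]")
```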
Rectangular Shaped Nano-Structured Solar Cell

In this subsection, rectangular nano-gratings with 100% duty cycle and different thicknesses are studied. This design reduces to a uniform thin film of varying height on the top surface of the solar cell. Investigating this shape is necessary because, in treating zero-order gratings as an anti-reflective medium, the medium is assumed to behave as a homogeneous slab; in the following, we show how a real homogeneous thin film affects the solar cell performance.

In Figure 3, the light reflection and absorption curves of solar cells topped with rectangular nano-structures at 100% duty cycle are shown. Each curve illustrates the behavior for a fixed nano-structure height as the pitch varies; the curves consist of eight discrete data points giving the maximum light reflection (Figure 3a) and absorption (Figure 3b) for eight different designs, and the procedure is repeated for heights from 50 nm to 400 nm. The smallest and biggest heights in Figure 3a, i.e., 50 nm and 400 nm, give the largest reflection values regardless of pitch size, while the heights of 200 nm and 250 nm show the smallest reflection values, with 250 nm also giving the maximum light absorption. These observations hold for all pitch sizes; therefore, 250 nm is the optimized height over the whole wavelength range for this design. As shown in Figure 3b, the absorption increases from 50 nm to 250 nm and decreases afterwards, consistent with the absorption depth of GaAs.

To isolate the effect of the thin film on the solar cell performance, simulations are also performed for the solar cell without any nano-structure. The A, T, and R results for the 100 nm pitch and 50 nm height are almost the same as for the bare solar cell; the electric field distributions of these structures are shown in Figure 2b. For other pitches and heights, however, A, T, and R begin to differ, as shown in Figures 3 and 4.

Figure 4 shows the transmission curves for designs with different pitch sizes across the visible and near-infrared wavelengths, with the thickness kept constant at the value giving maximum absorption and minimum reflection. The transmission decreases with increasing pitch size, and the undulating shape of the transmission curves originates from light interference and reflections at the material interfaces.

Since the relation T + R + A = 1 (conservation of the incident light) always holds for the transmission, reflection, and absorption plots, the transmission behavior for different heights can be predicted from the results in Figure 3. The least reflection and transmission occur for rectangular nano-structures with bigger pitches. Considering the results presented in Figures 3 and 4, the optimized pitch is close to 800 nm, because a bigger part of the surface is then covered with the thin film.

Trapezoidal Shaped Nano-Structured Solar Cell

In Figure 5a,b, the light reflection and absorption curves for trapezoidal nano-structures applied on the top surface of the solar cells are presented. Varying the height from 50 nm to 400 nm, the maximum reflection occurs for the 50 nm height and the least reflection for the 200 nm height. Comparing the thin-film results (Figure 3) with the trapezoidal ones (Figure 5) indicates that the thin-film reflection behavior is smoother, but the trapezoidal nano-structures reduce the reflection minima for all heights. The absorption plot in Figure 5b also shows increased absorption for trapezoidal nano-gratings compared with the thin-film design in Figure 3b. The plots in Figure 5 include one additional data point, the 900 nm pitch, for each height. Considering the illuminating wavelength, this pitch size does not satisfy the zero-order grating condition, and the abrupt change at 900 nm pitch in both plots confirms this fact; one simple reading of this condition is sketched below.
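One simplified reading of the zero-order condition, for reflected orders in air at normal incidence, is that the m-th diffraction order propagates only when |m|·λ/Λ ≤ 1, so a pitch below the illumination wavelength supports only the zero order. The sketch below applies this criterion with an assumed 900 nm wavelength; the paper's exact criterion (e.g., accounting for orders diffracted into the substrate) may differ.

```python
def only_zero_order(pitch_nm: float, wavelength_nm: float) -> bool:
    """Grating equation at normal incidence in air: order m propagates when
    |m| * wavelength / pitch <= 1, so only m = 0 survives when the pitch is
    smaller than the wavelength (a simplification of the ZO condition)."""
    return pitch_nm < wavelength_nm

for pitch in (100, 400, 800, 900):
    # With an assumed 900 nm illumination, only the 900 nm pitch fails:
    print(pitch, only_zero_order(pitch, wavelength_nm=900.0))
```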
Triangular Shaped Nano-Structured Solar Cell

In this subsection, we present results for triangular nano-structures applied on the top surface of the solar cell. This geometry has sharp edges and, as the extreme case with an aspect ratio of zero, it is useful to study whether the light reflection and absorption behavior of triangular nano-structures favors the solar cell performance.
The maximum light reflection of the solar cells with triangular nano-structures is shown in Figure 6 for different heights. Each curve is composed of discrete values of the maximum reflection for a fixed height ranging from 50 nm to 400 nm. The minimum reflection occurs at the 800 nm pitch for all heights and, based on the results in Figure 6, the height with the least reflection in the triangular case is 300 nm. To confirm that the 800 nm pitch is the optimized size, it should also provide satisfactory absorption. The light absorption curves for the triangular nano-structures with pitches from 100 nm to 900 nm are shown in Figure 7; the maximum absorption occurs for the 800 nm pitch across all heights.

In Figure 8, we compare the results for nano-structures with aspect ratios of one and zero, i.e., rectangular (thin-film) and triangular nano-structures, respectively. The horizontal axis in Figure 8 spans the wavelength range for a fixed nano-structure height of 350 nm, and the absorption behavior is studied for three different pitches. Because this height is close to the optimized height of the triangular structure, the maximum absorption occurs for the triangular nano-structures. Furthermore, the light absorption for both rectangular and triangular nano-structures increases with pitch size: a bigger portion of the surface is covered with nano-structures, so the light interacts more effectively, with the best performance at the optimized pitch. A small helper for picking the optimum geometry from such sweeps is sketched below.
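Selecting the optimized geometry from these sweeps amounts to an argmax over the height–pitch grid. The sketch below shows only the bookkeeping, with placeholder data; in practice the absorption array would be filled from the FDTD results behind Figures 3–7.

```python
import numpy as np

# Hypothetical sweep results: absorption[i, j] holds the peak absorption
# for heights[i] x pitches[j]; random numbers stand in for FDTD output here.
heights = np.arange(50, 401, 50)     # nm, 8 heights as in the study
pitches = np.arange(100, 901, 100)   # nm, 9 pitches as in the study
absorption = np.random.default_rng(0).random((len(heights), len(pitches)))

i, j = np.unravel_index(np.argmax(absorption), absorption.shape)
print(f"max absorption at height = {heights[i]} nm, pitch = {pitches[j]} nm")
```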
For each nano-structure geometry, the reflection is closely coupled to the absorption and transmission. A summary of the best combination of these parameters for the rectangular, trapezoidal, and triangular shaped nano-structures is given in Table 1.

Conclusions

In this paper, in order to optimize the solar cell design, diffraction nano-gratings made of the same material as the cell were employed on top of the solar cell to eliminate unwanted reflection losses through their graded refractive index. Different subwavelength nano-structures, namely rectangular (thin-film), trapezoidal, and triangular shaped nano-gratings, were designed on top of a GaAs-based solar cell, and the impact of each geometry was examined under light illumination. As the aspect ratios of the nano-structures were varied, the transmission, reflection, and absorption of each design were calculated, and the results were analyzed and compared. Based on our simulation results, we introduced optimized sizes for the different nano-structure/diffraction grating shapes that give better solar cell performance. The maximum absorption was obtained for the trapezoidal nano-structures and the minimum reflection for the triangular nano-structures. It is therefore confirmed that slanted side walls on the top-surface nano-structures yield less reflection and better absorption, as the surface area available for interaction with the incident light increases. The development of GaAs-based solar cells could substantially advance the photovoltaic market, with potential applications ranging from household heating and electricity generation to telecommunications and transportation. This study should help simplify the fabrication of photovoltaic devices with better light absorption than conventional solar cells.

Figure 1. Electromagnetic field components at specific locations on the FDTD grid for a TM-polarized wave.

Figure 2. Schematic cross-section view of the GaAs-based solar cell with (a) rectangular (thin-film), (c) trapezoidal, and (d) triangular nano-structure shapes to assist efficient light manipulation and trapping inside the device; (b) shows the electric field intensity for the solar cell without nano-structures (left) and with a 50 nm thin film on top (right).
Figure 3. Light reflection (a) and absorption (b) values, in percent, for solar cells with rectangular-shaped nano-structures.

Figure 4. Light transmission values, in percent, for solar cells with rectangular nano-structures on top. Each curve is associated with a specific pitch while the nano-grating height is kept constant at 250 nm.

Figure 5. Light reflection (a) and absorption (b) curves, in percent, for solar cells with trapezoidal nano-structures on the top surface, for nano-structure pitches ranging from 100 nm to 900 nm.

Figure 6. Light reflection curves, in percent, for solar cells with triangular nano-structures for different pitch sizes. Each curve indicates the reflection behavior for a single height.

Figure 7. Light absorption curves, in percent, for solar cells with triangular nano-structures for different pitch sizes. Each curve indicates the absorption behavior for a single height.

Figure 8. Light absorption curves, in percent, for solar cells with triangular and rectangular nano-structures for different pitches. The nano-structure height is kept constant at 350 nm.

Table 1. The optimized values, in percent, for A, R, and T of the three different nano-structure shapes. The DC is 100%.
8,625.4
2016-09-19T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
On Local Linear Approximations to Diffusion Processes

Diffusion models have been used extensively in many applications. These models, such as those used in financial engineering, usually contain unknown parameters which we wish to determine. One way is to use the maximum likelihood method with discrete samplings to devise statistics for the unknown parameters. In general, the maximum likelihood functions for diffusion models are not available in closed form, hence it is difficult to derive the exact maximum likelihood estimator (MLE). Many different approaches have been proposed by various authors over the past years; see, for example, the excellent books by Kutoyants (2004), Liptser and Shiryayev (1977), Kushner and Dupuis (2002), and Prakasa Rao (1999), and also the recent works by Aït-Sahalia (1999, 2002, 2004), and so forth. Shoji and Ozaki (1998; see also Shoji and Ozaki 1995, 1997) proposed a simple local linear approximation. In this paper, among other things, we show that Shoji's local linear Gaussian approximation indeed yields a good MLE.

Introduction

Diffusion processes are used as theoretical models in analyzing random phenomena evolving in continuous time. These models may be described in terms of Itô-type stochastic differential equations

$$dX_t = A(X_t, \theta)\,dt + \sigma(X_t, \theta)\,dW_t, \qquad (1.1)$$

where $(W_t)_{t \ge 0}$ is a Brownian motion, with some unknown parameters $\theta$ to be determined in rational ways.

It is, however, difficult to derive the maximum likelihood estimator for $\theta$ if the diffusion coefficient (i.e., the volatility) $\sigma$ is unknown. On the other hand, in practice, the volatility is determined first (using, when $\sigma$ is a constant, the quadratic variation of the sample path). Therefore we will limit ourselves to diffusion models with constant volatility:

$$dX_t = A(X_t, \theta)\,dt + dW_t. \qquad (1.3)$$

Since there is not much difference at the technical level, we will consider one-dimensional models only. That is, we will assume throughout the paper that $W$ is a one-dimensional Brownian motion and $X$ is real valued. The distribution $\mu_X^T$ of $(X_t)_{t \ge 0}$ over a finite time interval $[0, T]$ has a density with respect to the Wiener measure $\mu_W^T$ (the law of the Brownian motion $W$), given by the Cameron–Martin formula (1.4), which is in turn the likelihood function under continuous observation. In practice, only discrete values $X_{t_0}, \ldots, X_{t_n}$ may be observed over the duration $[0, T]$, where $0 = t_0 < t_1 < \cdots < t_n = T$ and $t_i - t_{i-1} = \delta$. The corresponding likelihood function $L(\theta)$ is the conditional expectation under the Wiener measure:

$$E\left[L(\theta) \mid X_{t_0}, \ldots, X_{t_n}\right] = \frac{\prod_{j=1}^{n} p_\theta(\delta, X_{t_{j-1}}, X_{t_j})}{\prod_{j=1}^{n} G(\delta, X_{t_{j-1}}, X_{t_j})}, \qquad (1.5)$$

where $p_\theta(t, x, y)$ is the conditional probability density function of $X_t$ given $X_0 = x$, and $G(t, x, y)$ is the Gaussian density $(1/\sqrt{2\pi t}) \exp\{-|x - y|^2 / 2t\}$; see [1]. Since the denominator of (1.5) does not depend on $\theta$, we may simply consider the numerator

$$L(X_{t_0}, \ldots, X_{t_n}) \equiv \prod_{j=1}^{n} p_\theta(\delta, X_{t_{j-1}}, X_{t_j}) \qquad (1.6)$$

as a likelihood function. Therefore, the MLE for $\theta$ under a discrete observation may be found by solving, either explicitly if possible or numerically, the likelihood equation

$$\nabla L(X_{t_0}, \ldots, X_{t_n}) = 0. \qquad (1.7)$$
The difficulty with this approach is that, unless the drift vector field $A$ is very special, an explicit formula for $p_\theta(t, x, y)$ is not known. To overcome this difficulty, many approximation methods have been proposed in the literature. The idea is to replace the diffusion model (1.3) by an approximation model for which an explicit formula for the likelihood function is available. One possible candidate is of course the Euler–Maruyama approximation

$$\widetilde{X}_j = \widetilde{X}_{j-1} + A(\widetilde{X}_{j-1}, \theta)\,\delta + \xi_j \sqrt{\delta},$$

where $\{\xi_j\}$ is an i.i.d. sequence with standard normal distribution $N(0, 1)$ and $\widetilde{X}_0 = X_0$ (a simulation sketch of this scheme appears at the end of this introduction). However, the likelihood function $L_1(X_0, \ldots, X_n)$ for this model is not, in general, close enough to that of the diffusion model when measured in terms of the ratio of the corresponding likelihood functions.

The second approach is to discretize the likelihood function $d\mu_X^T / d\mu_W^T$ for continuous observations. In order to utilize this likelihood function, one needs to handle the Itô integral it contains, whose right-hand side involves only the sample $X$. This idea of getting rid of Itô's integral and replacing it by an ordinary one has far-reaching consequences; see the interesting paper [2] for some applications.

One can also use approximations to the probability density function $p_\theta(t, x, y)$ and construct functions which are close to the maximum likelihood function. A great number of articles are devoted to this approach, such as [3–5], for example. The difficulty, however, is that even if $f(t, x, y)$ is a uniform approximation of $p_\theta(t, x, y)$, there is no guarantee that the approximate likelihood function $\prod_j f(t, x_{j-1}, x_j)$ tends to $\prod_j p_\theta(t, x_{j-1}, x_j)$ as $n \to \infty$.

In this paper we consider the linear diffusion approximation proposed by Shoji and Ozaki [6] for the diffusion model (1.3), which leads to the following approximation of the likelihood function $L(X_{t_0}, \ldots, X_{t_n})$:

$$L_2(X_{t_0}, \ldots, X_{t_n}) = \prod_{j=1}^{n} h_j(\delta, X_{t_{j-1}}, X_{t_j}), \qquad (1.12)$$

where $t_j = jT/n$, so that $X_{t_j}$ is a sample with fixed duration $\delta = t_j - t_{j-1}$ over $[0, T]$, and $h_j(t, x, y)$ is the probability transition density of a linear diffusion model (see (3.3) below) on $t_{j-1} \le t < t_j$ started at $X_{t_{j-1}}$. The approximation (1.12) is called the local linearization of the diffusion model (1.3) and has been studied in Shoji and Ozaki [6]. Shoji showed numerically that local linearizations do yield better estimates. Shoji's approximation was revisited in Prakasa Rao [7], without a definite conclusion.

The main goal of the paper is to prove Theorem 3.1, which implies that the local linear approximation (1.12) is efficient for the purpose of deriving the MLE from discrete samples.

The paper is organized as follows. In Section 2, we present the MLE for linear models such as (1.12). In Section 3, we state our main result for Shoji's local linear approximation and comment on the conditions on the sampling data; our main theorem provides a deterministic convergence rate for the likelihood functions. In Section 4, we prove that the likelihood function for the local linear approximation converges to the Cameron–Martin density, but only in the sense of convergence in probability. Sections 5, 6, and 7 are devoted to the proof of our main result. In Section 5, we state the main tool, a representation formula for diffusions established by Qian and Zheng [8]. In Section 6, we develop the main technical estimates needed to prove Theorem 3.1, whose proof is completed in Section 7. Section 8 contains a discussion of the Euler–Maruyama approximation, which concludes the paper.
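For concreteness, here is a minimal simulation of the Euler–Maruyama scheme for the constant-volatility model (1.3); the linear drift used in the example and all function names are our own illustrative choices.

```python
import numpy as np

def euler_maruyama(A, theta, x0, T=1.0, n=1000, rng=None):
    """Simulate dX_t = A(X_t, theta) dt + dW_t on [0, T] by the scheme
    X_j = X_{j-1} + A(X_{j-1}, theta) * d + sqrt(d) * xi_j, xi_j ~ N(0, 1)."""
    rng = rng or np.random.default_rng()
    d = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for j in range(1, n + 1):
        x[j] = x[j - 1] + A(x[j - 1], theta) * d + np.sqrt(d) * rng.standard_normal()
    return x

# Example with a linear drift A(x, theta) = theta[1] - theta[0] * x:
path = euler_maruyama(lambda x, th: th[1] - th[0] * x, (2.0, 1.0), x0=0.0)
```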
Linear Diffusions

Let us begin with the MLE of the parameters $a$, $b$, and $\sigma > 0$ for the linear diffusion model

$$dX_t = (b - aX_t)\,dt + \sigma\,dW_t \qquad (2.1)$$

(Mishra and Bishwal [9] discussed a similar model), whose finite-dimensional distributions are Gaussian, determined through the probability transition function $h(t, x, y)$. Fortunately we have an explicit formula for $h$. Indeed, the linear equation (2.1) may be solved explicitly, and its solution is given by the formula

$$X_t = e^{-a(t-s)} X_s + \frac{b}{a}\left(1 - e^{-a(t-s)}\right) + \sigma \int_s^t e^{-a(t-u)}\,dW_u \qquad (2.2)$$

(formula (6.8) of Karatzas and Shreve [10], page 354), and therefore $h(t, x, y)$ is a Gaussian density in $y$ with explicitly computable mean and variance.

Suppose we have a discrete sample observed on an equal time scale during the period $[0, T]$: $X_{iT/n}$, $i = 0, \ldots, n$. According to the Markov property, their joint distribution, i.e., the maximum likelihood function, is

$$L(a, b, \sigma; x_0, \ldots, x_n) = \mu(x_0) \prod_{i=1}^{n} h(\delta, x_{i-1}, x_i), \qquad (2.4)$$

where $\delta = T/n$ and $\mu(x)$ is the probability density function of the initial distribution. The logarithm of the maximum likelihood function, $l(a, b, \sigma; x_0, \ldots, x_n)$, follows by taking logarithms in (2.4) (equation (2.5)). The maximum likelihood estimates for $a$, $b$, and $\sigma$ are the stationary points of $l$, that is, solutions of the equation $\nabla l = 0$. Set $\rho = e^{-aT/n}$; then

$$a = -\frac{n}{T} \log \rho, \qquad \beta = \frac{b}{a}. \qquad (2.6)$$

Proposition 2.1 records the resulting closed-form maximum likelihood estimates for the linear diffusion model (2.1) with discrete observations. As an interesting consequence we have the following.

Corollary 2.2. The maximum likelihood estimators $\hat a, \hat b, \hat\sigma$ for the linear diffusion model (2.1) are not sufficient statistics, while $\hat a, \hat b, \hat\sigma, X_0, X_n$ are sufficient.

Diffusion Models

We consider the diffusion model (1.3). Our approach and our conclusions apply to multidimensional cases as long as the diffusion coefficients are constant; for simplicity, we consider only the one-dimensional case. The question is to estimate $\theta$ from a discrete observation $\{x_0, \ldots, x_n\}$ on the time scale $\delta$ over the interval $[0, T]$. Up to a constant factor, the maximum likelihood function is

$$L(x_0, \ldots, x_n) = \prod_{j=1}^{n} p(\delta, x_{j-1}, x_j), \qquad (3.1)$$

where $p(t, z, y)$ is the transition probability density of $X_t$ (we have dropped the subscript $\theta$ for simplicity). The approximate maximum likelihood function, proposed in [6], is

$$L_2(x_0, \ldots, x_n) = \prod_{j=1}^{n} h_j(\delta, x_{j-1}, x_j), \qquad (3.2)$$

where $h_j(t, x, y)$ is the transition density function of the linear diffusion model

$$dX_t = (b_j + a_j X_t)\,dt + dW_t, \qquad (3.3)$$

which is the first-order approximation to (1.3). In what follows we assume that $A$ has bounded first and second derivatives,

$$|A(x, \theta)| + |A'(x, \theta)| + |A''(x, \theta)| \le C_0,$$

for some constant $C_0 > 0$ independent of the parameters $\theta$. A numerical sketch of evaluating the densities $h_j$ is given below.
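A minimal sketch of evaluating the local linear transition density $h_j$ and the approximate likelihood $L_2$ of (3.2) is given below, assuming $A'(x) \neq 0$ at the sample points (the $a_j \to 0$ limit would be handled by a series expansion); the function names are ours.

```python
import numpy as np

def local_linear_density(A, dA, x_prev, y, delta):
    """Transition density h_j(delta, x_prev, y) of (3.3): the drift is frozen
    at its tangent line at x_prev, with a_j = A'(x_prev) and
    b_j = A(x_prev) - a_j * x_prev, so the step follows the linear SDE
    dX_t = (b_j + a_j X_t) dt + dW_t, whose transition is Gaussian."""
    a = dA(x_prev)
    b = A(x_prev) - a * x_prev
    mean = np.exp(a * delta) * x_prev + (b / a) * (np.exp(a * delta) - 1.0)
    var = (np.exp(2.0 * a * delta) - 1.0) / (2.0 * a)
    return np.exp(-0.5 * (y - mean) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def log_L2(A, dA, x, delta):
    """log L2(x_0, ..., x_n) = sum_j log h_j(delta, x_{j-1}, x_j), cf. (3.2)."""
    return sum(np.log(local_linear_density(A, dA, xp, xn, delta))
               for xp, xn in zip(x[:-1], x[1:]))
```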
The main result of the paper is the following.

Theorem 3.1. Assume that $A(\cdot, \theta)$ and $A'(\cdot, \theta)$ are bounded uniformly in $\theta$. Let $T > 0$ be a fixed time and $C > 0$ a constant. Suppose $\{x^n_j\}_{j \le n}$ ($n = 1, 2, \ldots$) is a family of discrete samples such that

$$|x^n_j| \le C, \qquad |x^n_j - x^n_{j-1}|^2 \le C\,\delta_n,$$

for all pairs $(j, n)$ with $j \le n$, $n = 1, 2, \ldots$, where $\delta_n = T/n$. Then

$$\lim_{n \to \infty} \frac{L_2(x^n_0, \ldots, x^n_n)}{L(x^n_0, \ldots, x^n_n)} = 1, \qquad (3.6)$$

where $L$ and $L_2$ are defined in (3.1) and (3.2) with $\delta = \delta_n = T/n$.

The convergence in (3.6) holds in a deterministic sense, so conditions such as the above on the samples are natural. Since $E|X_{t_j} - X_{t_{j-1}}|^2$ is of order $\delta_n$, on average we should indeed have $|x^n_j - x^n_{j-1}|^2 \le C\delta_n$. Since $X_t$ has continuous sample paths, the set $\{X_t(\omega) : t \in [0, T]\}$ is bounded for each fixed sample point $\omega$; since the $x^n_j$ are sampled over the fixed duration $[0, T]$, we may thus assume that $\{x^n_j\}$ is bounded, even though there are countably many samples. It is possible to relax this constraint, for example by imposing $|x^n_j| \le Cn^{\alpha}$ with $\alpha < 1/2$, but for simplicity we consider only the bounded case. This condition plays the role of an "integrability" condition on the samples.

From the asymptotics of the transition density function $p(t, x, y)$, it is easy to see that

$$\lim_{n \to \infty} \frac{h_j(\delta_n, x_{j-1}, x_j)}{p(\delta_n, x_{j-1}, x_j)} = 1 \qquad (3.8)$$

for each $j$. However, as our observation $\{x_0, \ldots, x_n\}$ is taken over the fixed time interval $[0, T]$, the ratio in (3.6) as $n \to \infty$ is really an infinite product, so its behavior depends on the global behavior of $p(t, x, y)$. Although there are many results on bounds for $p(t, x, y)$ in the literature (see, e.g., [2, 11]), the best we could find are those which yield (3.8) uniformly in the $x_j$; none of them yields the precise limit (3.6). In fact, the proof of (3.6) depends on careful estimates of $p(t, x, y)$ obtained through a representation formula established in [8].

Linear Diffusion Approximations

Without loss of generality, we may assume that $T = 1$. Let $X_{j/n}$ be a discrete observation of the diffusion model (1.3) at $t_j = j/n$ ($j = 0, \ldots, n$); for simplicity, write $X_j$ for $X_{j/n}$ if no confusion may arise. Consider the family of linear diffusions (3.3) with coefficients

$$b_j = A(X_{j-1}, \theta) - X_{j-1} A'(X_{j-1}, \theta), \qquad a_j = A'(X_{j-1}, \theta), \qquad (4.2)$$

and let $h_j$ denote the corresponding Gaussian transition densities, where $\delta = 1/n$. The approximating likelihood function is $L_2(\delta) = \prod_{j=1}^{n} h_j(\delta, X_{j-1}, X_j)$.

We need to compare this function with the likelihood function for continuous observation — the Cameron–Martin density — which, however, must be discounted with respect to the Wiener measure. Thus we renormalize $L_2(\delta)$ against the discrete version of Brownian motion, i.e., against $\prod_{j=1}^{n} G(\delta, X_{j-1}, X_j)$, and consider the logarithm of the resulting ratio. The proof proceeds by setting $D_j = X_j - X_{j-1}$ and expanding this logarithm (the displays (4.10) and (4.11) of the original); since $b_j = A(X_{j-1}, \theta) - a_j X_{j-1}$ and $a_j = A'(X_{j-1}, \theta)$, the resulting Riemann-type sums converge (4.12) in probability to the logarithm of the Cameron–Martin density. The claim thus follows immediately.

A Representation Formula

From this section on, we develop the estimates needed to prove Theorem 3.1. In this section we recall the main tool of our proof, a representation formula proved by Qian and Zheng [8]. Based on this formula, we prove the main estimate (6.65), which has independent interest, in the next section; the proof of Theorem 3.1 is completed in Section 7.

Let $x \in \mathbb{R}$, and consider the linear diffusion (5.1), of the form (3.3) with fixed coefficients $a$ and $b$, with $(X_t, P^x)$ its solution started at $x$. Our main tool is the representation formula (5.7) discovered in [8]: Proposition 5.2 expresses the ratio $p(T, x, y)/h(T, x, y)$, for $x, y \in \mathbb{R}$ and $T > 0$, as $1$ plus a double integral involving a functional $U_t$ which is a martingale under the probability $P^x$. To prove (3.6), we need to estimate the double integral appearing on the right-hand side of (5.7), which requires a precise estimate of the integrand; this can be achieved since we know the precise form of $h(T, x, y)$. Of course, if we knew the joint distribution of $(U_t, X_t)$ our task would be easy, but that is rarely the case. Our arguments are based on the fact that $U_t$ is a martingale under $P^x$, together with some delicate estimates for the functional integral

$$P^x\!\left[\left|\frac{\nabla h(T - t, X_t, y)}{h(T, x, y)}\right|^p\right], \qquad (5.10)$$

which will be carried out in the next section.

Main Estimates

We use the notation established in the previous section. Let $T > 0$, $x, y \in \mathbb{R}$, and $d = y - x$. The quantity $S(T)$, defined in (6.3) in terms of $d$, $a$, $b$, and the time horizon, measures the discrepancy between $y$ and the mean displacement of the linear diffusion started at $x$; for $t \in [0, T]$ and $p > 1$ we also introduce, for simplicity, the functional $D_p(t)$.

Lemma 6.1. For any $p > 1$, the two inequalities (6.5) hold for all $t \in [0, T]$; they follow from the fact that the function

$$t \mapsto \frac{e^{2aT} - 1}{p\left(e^{2aT} - 1\right) - (p - 1)\left(e^{2a(T - t)} - 1\right)} \qquad (6.6)$$

attains its maximum $1$ and minimum $1/p$ on $[0, T]$.

In what follows, we always assume that $T > 0$ is chosen so that condition (6.23) is satisfied. Next we estimate $D_p(t)$, as follows.

Lemma 6.5. Let $p > 1$. Then $D_p(t)$ satisfies the bound (6.26), where the positive constant $C_1$ depends only on $p$, $\zeta$, and $C_0$.
Proof. By the Hölder inequality, the expectation appearing in Lemma 6.5 splits into two factors, each of which can be controlled using the explicit Gaussian form of $h$; this yields the bound (6.26) with a positive constant $C_1$ depending only on $p$, $\zeta$, and $C_0$.

Lemma 6.6. Let $T > 0$ satisfy condition (6.23), let $x, y \in \mathbb{R}$, and let $p > 1$ and $q > 1$ be such that $1/p + 1/q = 1$. Then the ratio $p(T, x, y)/h(T, x, y)$ admits the explicit exponential bound (6.32).

Proof. Since $U_t$ is a martingale under $P^x$, the Hölder inequality applied to the representation (6.9) yields (6.32). The remaining computations are explicit Gaussian calculations: under $P^x$, after centering and scaling, the variable $N = \sqrt{2a / (e^{2aT} - e^{2a(T-t)})}\, Z_t$ has the standard normal distribution $N(0, 1)$, so the functional integrals reduce to Gaussian integrals which can be evaluated and simplified directly. Collecting these estimates gives, in particular, the key bound (6.65) on $\int_0^T D_p(t)\,dt$.

Proof of Theorem 3.1

We are now in a position to prove Theorem 3.1. We may assume that $T = 1$, so that $\delta_n = 1/n$. Let $x^n_j$ ($j = 0, 1, \ldots, n$) be discrete samples with time scale $\delta = \delta_n = 1/n$ on $[0, 1]$. By our assumptions, $|x^n_j - x^n_{j-1}|^2 \le C\delta_n$ and $|x^n_j| \le C$ for all pairs $(j, n)$ with $0 \le j \le n$ and $n \ge 1$. For simplicity we write $x_j$ for $x^n_j$ if no confusion may arise. In the proof below, we use $C_i$ to denote nonnegative constants which may depend on $C$, $T = 1$, and the bounds on $A$ and $A'$ in the diffusion model (1.3), but which are independent of $n$.

Recall that $h_j(t, x, y)$ is the probability transition density function of the diffusion (3.3), that is,

$$dX_t = (b_j + a_j X_t)\,dt + dW_t, \qquad (7.1)$$

where $b_j = A(x_{j-1}, \theta) - x_{j-1} A'(x_{j-1}, \theta)$ and $a_j = A'(x_{j-1}, \theta)$. According to (6.65), the ratio $p(\delta, x_{j-1}, x_j)/h_j(\delta, x_{j-1}, x_j)$ is bounded by $1$ plus an error term controlled by a constant $C_8$, the exponential factors $e^{|a_j|\delta}$, and the increments $x_j - x_{j-1}$ (inequality (7.4)). Since the $a_j$ and $x_j$ are bounded and $|a_j x_{j-1} + b_j| = |A(x_{j-1}, \theta)| \le C_4(1 + |x_{j-1}|) \le C_9$, the product of these ratios over $j \le n$ converges to $1$ as $n \to \infty$. The proof of Theorem 3.1 is complete.

The Euler–Maruyama Approximation

Recall that the Euler–Maruyama approximation to (1.3) is the Markov chain given by

$$X_j = X_{j-1} + A(\theta, X_{j-1})\,\delta + \xi_j \sqrt{\delta}, \qquad (8.1)$$

where $\{\xi_j\}$ is an i.i.d. random sequence with standard normal distribution $N(0, 1)$. The conditional distribution of $X_j$ given $X_{j-1} = x_{j-1}$ is Gaussian with mean $x_{j-1} + A(\theta, x_{j-1})\delta$ and variance $\delta$, so the likelihood function is

$$L_1(x_0, \ldots, x_n) = \prod_{j=1}^{n} G\big(\delta,\; x_{j-1} + A(\theta, x_{j-1})\delta,\; x_j\big). \qquad (8.2)$$

From this one may deduce that the logarithm of $L_1$, suitably renormalized against the Wiener measure, converges uniformly in $\theta$, in probability with respect to the Wiener measure, to $l$, the logarithm of the Cameron–Martin density (1.4).
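As a sanity check of the two approximations on a case where the exact density is available, the sketch below simulates the linear model $dX_t = (b - aX_t)\,dt + dW_t$ exactly and compares its exact log-likelihood (for which the local linearization is exact, since the drift is already linear) with the Euler–Maruyama log-likelihood built from (8.1); the parameter values are arbitrary choices of ours.

```python
import numpy as np
rng = np.random.default_rng(1)

a, b, T, n = 2.0, 1.0, 1.0, 500
delta = T / n

# Exact simulation of dX_t = (b - a X_t) dt + dW_t: Gaussian transitions
rho = np.exp(-a * delta)
var = (1.0 - rho**2) / (2.0 * a)
x = np.empty(n + 1); x[0] = 0.0
for j in range(1, n + 1):
    x[j] = rho * x[j - 1] + (b / a) * (1 - rho) + np.sqrt(var) * rng.standard_normal()

# Exact log-likelihood (equals the local linear one for a linear drift)...
mean = rho * x[:-1] + (b / a) * (1 - rho)
log_L = np.sum(-0.5 * (np.log(2 * np.pi * var) + (x[1:] - mean) ** 2 / var))

# ...versus the Euler-Maruyama log-likelihood (8.2): Gaussian with mean
# x_{j-1} + A(x_{j-1}) * delta and variance delta
em_mean = x[:-1] + (b - a * x[:-1]) * delta
log_L1 = np.sum(-0.5 * (np.log(2 * np.pi * delta) + (x[1:] - em_mean) ** 2 / delta))

print(log_L, log_L1)  # the gap reflects the discretization error of (8.1)
```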
4,578
2011-09-27T00:00:00.000
[ "Mathematics" ]
An Intelligent Metaheuristic Optimization with Deep Convolutional Recurrent Neural Network Enabled Sarcasm Detection and Classification Model

Sarcasm is a mode of speech in which the speaker says something outwardly unfriendly, with the purpose of abusing or deriding the listener and/or a third person. Since sarcasm detection depends mainly on the context of utterances or sentences, it is hard to design a model that proficiently detects sarcasm in the domain of natural language processing (NLP). Although various methods for detecting sarcasm have been created using statistical machine learning and rule-based approaches, they are unable to discern the figurative meanings of words. Models developed using deep learning (DL) approaches have shown superior performance for sarcasm detection over traditional approaches. With this motivation, this paper develops a novel deep learning enabled sarcasm detection and classification (DLE-SDC) model. The DLE-SDC technique primarily involves a pre-processing stage encompassing single-character removal, multi-space removal, URL removal, stop-word removal, and tokenization. After data preprocessing, the preprocessed data is converted into feature vectors by the GloVe embedding technique. Next, a convolutional neural network with a recurrent neural network (CNN-RNN) is utilized to detect and classify sarcasm. To boost the detection outcomes of the CNN-RNN technique, a hyperparameter tuning process utilizing the teaching and learning based optimization (TLBO) algorithm is employed so that the classification performance is increased. The DLE-SDC model is validated on a benchmark dataset, and the performance is examined in terms of precision, recall, accuracy, and F1-score.

I. INTRODUCTION

Sarcasm detection in conversations has become ever more popular among natural language processing (NLP) researchers with the growing use of conversational threads on social networking platforms. Natural language is an essential data source of human emotions. Automatic sarcasm detection is typically framed as an NLP problem since it mainly requires understanding the language of human emotions and the expressions conveyed by textual and non-textual content. Sarcasm detection has gained attention in recent decades since it facilitates precise analysis of online reviews and comments [1]. As a figurative device, sarcasm uses words in a manner which differs from their conventional meaning and order, resulting in misleading polarity classification; the results obtained from its detection can be used for information categorization. Sarcasm can be considered an implied form of emotion: usually, it conveys the reverse of what is meant. Generally, sarcasm is related to literary devices like satire and wit/irony, which are utilized to insult, refute, amuse, or mock. For instance, a teacher exclaims, "Credit to your hard work. I have never been more impressed in my lifetime. Lol!" On the surface these sentences express gratitude, but the speaker's expression and the context demonstrate their sarcastic manner. In the absence of visible expression, identifying sarcasm on Twitter is challenging. A stimulating perspective on sarcasm was proposed in [2], in which the analysis was carried out for two sarcastic states: allocentric and egocentric.
The former indicates that the sarcasm is observed or felt only from the participant's point of view and not from the addressee's perception, while the latter indicates sarcasm observed from both the addressee's and the participant's perspectives. The generic understanding of this result is that prosodic features, those involving patterns of sound and stress, are more useful in identifying sarcasm than contextual features. A basic sentiment analysis of text may not be effective in understanding the true intent because of the presence of different literary devices like irony, sarcasm, and so on [3]. Thus, sarcasm detection is highly required to avoid misinterpretation in all kinds of communication and to ensure that the meaning intended in a statement is understood accordingly. Automatically identifying sarcasm is a difficult task, addressed through automatic sarcasm analysis and detection. Identifying sarcastic statements is an essential process in social networking applications, since it affects organizations that mine social networking data. Despite the many potential features that can be extracted from text, they can be gathered into a few major classes, such as contextual, lexical, pragmatic, and hyperbolic features [4].

The fundamental objective of this study is to classify sarcasm into different kinds that aid in understanding the intent to hurt, or the level of hurt, present in sarcastic statements. Because sarcasm may elicit a broad range of feelings in a person, it can either make the receiver laugh or, in the worst case, elicit a deeper sense of emotional harm. Applications of type detection may be effective in understanding the sentiments behind sarcasm, offering a perspective on the emotional condition of the persons engaging in a sarcastic discussion, namely the one at whom the sarcasm is aimed and the person who employs it.

Several machine learning, rule-based, deep learning, and statistical methods have been reported in related work on automated sarcasm detection in single sentences, frequently based on the content of words in isolation. This involves a variety of methods, such as multimodal (text-image) content [5], sense disambiguation, and polarity-flip detection in text [6]. Previous research on detecting sarcasm in text includes pragmatic (context) and lexical (content) clues [7] such as sentiments, interjections, and punctuation alterations, which are major indicators of sarcasm [8]. The features in such studies are handcrafted and do not generalize, due to the presence of figurative slang and the informal language often used in online communication. Recent research [9, 10] uses neural networks to learn contextual and lexical features, eliminating the need for handcrafted features thanks to the development of DL methods; word embeddings are used to train recurrent, deep convolutional, or attention-based neural networks to achieve state-of-the-art results on a variety of large-scale datasets.

This paper develops a novel deep learning enabled sarcasm detection and classification (DLE-SDC) model. The DLE-SDC technique primarily involves a pre-processing stage which takes place at different levels. Then, the GloVe embedding technique is used for the representation of word vectors. Moreover, a convolutional neural network with a recurrent neural network (CNN-RNN) is utilized to detect and classify sarcasm.
To boost the detection outcomes of the CNN-RNN technique, a hyperparameter tuning process using the teaching and learning based optimization (TLBO) algorithm is employed so that the classification performance is increased. A wide range of simulations is carried out on benchmark datasets, and the results are validated in terms of different measures.

II. LITERATURE REVIEW

In Nayel et al. [11], a method relying on a supervised ML approach, namely SVM, was utilized for detecting sarcasm. The presented method was evaluated on the ArSarcasm-v2 dataset, and its efficiency was compared with other methods submitted to the shared task on sarcasm detection and sentiment analysis. Kumar and Harish [12] proposed a new method for classifying sarcastic text with a content-based feature selection (FS) technique. The proposed method comprises a two-phase FS procedure for selecting better representative features: in the initial phase, traditional FS approaches such as mutual information (MI), information gain (IG), and chi-square are utilized to select an appropriate feature subset; in the following phase, a k-means clustering process is utilized to select the best representative features among similar features. The selected features are then categorized by two classifiers, SVM and RF. Chatterjee et al. [13] designed features to detect sarcasm via pragmatic features which consider the context of words; the method depends on a linguistic approach which describes how humans differentiate among various kinds of untruth, after which they train different ML-based classifiers and compare their accuracy. Razali et al. [14] focus on detecting sarcasm in tweets by combining DL-derived features with contextually constructed feature sets. A feature set is retrieved from a CNN framework and carefully combined with a handcrafted feature set developed from contextual knowledge, with every feature set specifically designed for the single task of detecting sarcasm. The aim is to find the optimum features: a few sets are beneficial even when used individually, while other sets are not really significant without integration. The experimental results are positive in terms of precision, accuracy, F1-measure, and recall; the feature combinations are categorized by ML methods for comparison, and the LR approach is identified as the best classification approach in that work. In Rajeswari and ShanthiBala [15], a supervised classification method, namely MNNB, is utilized for detecting sarcasm, and SVM is utilized for detecting the types of sarcasm. In that work, sarcasm is extracted from tweets using MNNB; the tweets contain noisy messages that are handled carefully for efficient detection of sarcasm, and the types of sarcasm are also detected for diagnosing the state of the user. Zhang et al. [16] proposed the use of neural networks for detecting sarcastic tweets and compared the effects of continuous automated features against discrete manual features. In particular, they utilize a bi-directional gated RNN to capture syntactic and semantic information in tweets, and a pooling NN to extract contextual features from past tweets. Akula and Garibay [17] concentrate on identifying sarcasm in textual conversation from various social and online networking platforms, developing an interpretable DL method with gated recurrent units and multi-head self-attention.
The major goal of the work in [18] is the sentiment analysis of people's opinions expressed on Facebook regarding the then-current epidemic situation, in a low-resource language. To perform this, the authors built a large-scale dataset consisting of 10,742 automatically categorized comments in the Albanian language. Moreover, they reported their effort on the design and development of a DL-based sentiment analysis approach, along with the experimental findings obtained with different classification methods using static and contextualized word embeddings, i.e., BERT and fastText, trained and validated on the collected and curated dataset. In Das and Kolya [19], the sarcastic word distribution properties of a common pop-culture sarcasm corpus, which includes sarcastic speeches and dialogues, are automatically extracted; further, they propose an amalgamation of 4p LSTM models, each containing a unique activation classifier, intended to effectively identify sarcasm in the text corpus. Sundararajan and Palanisamy [20] aim to enhance existing methods by integrating a novel perspective that categorizes sarcasm on the basis of the level of harshness applied. The main application of that study is associating the mood of an individual with the type of sarcasm shown by him or her, which can give key insights into the emotional behaviour of an individual. An ensemble-based FS approach was proposed for choosing the optimum collection of features for detecting sarcasm in tweets; this optimal collection of attributes was used to determine whether a tweet was sarcastic or not, and after identifying a sarcastic sentence, a multi-rule based method was proposed for determining the sarcasm type. Kumar et al. [21] used MUStARD, a typical conversation dataset, to examine the use of an ensemble supervised learning approach for identifying sarcasm; this can also be useful in reducing model bias and assisting decision makers in knowing how to use the model accurately. Liyuan Liu et al. [22] proposed a method called A2Text-Net which combines auxiliary variables to improve the performance of sarcastic sentiment classification.

III. THE PROPOSED MODEL

This study develops a DLE-SDC technique to classify the presence of sarcasm. The working process is demonstrated in Fig. 1. The proposed method involves different processes, namely preprocessing, GloVe-based word vector representation, CNN-RNN based classification, and TLBO-based parameter optimization.

A. Data Pre-processing

At the first stage, the data is pre-processed to transform it into a compatible format. The sub-processes involved in data pre-processing are:

• Remove single-letter words.
• Remove multiple spaces.
• Remove punctuation marks.
• Remove numbers.
• Remove stop words.
• Convert uppercase characters into lowercase.

A minimal sketch of these cleanup steps is given below.
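The sketch assumes English text, simple regular expressions, and a toy stop-word list; a real pipeline would use a full stop-word lexicon and a proper tokenizer.

```python
import re

STOP_WORDS = {"a", "an", "the", "is", "are", "and", "or", "of", "to", "in"}  # toy subset

def preprocess(text: str) -> list[str]:
    """Apply the listed cleanup steps, then tokenize on whitespace."""
    text = text.lower()                        # uppercase -> lowercase
    text = re.sub(r"https?://\S+", " ", text)  # URL removal
    text = re.sub(r"[^a-z\s]", " ", text)      # punctuation and numbers
    text = re.sub(r"\b[a-z]\b", " ", text)     # single-letter words
    text = re.sub(r"\s+", " ", text).strip()   # multiple spaces
    return [t for t in text.split() if t not in STOP_WORDS]

print(preprocess("Wow, GREAT job!!! See https://t.co/x 4 more :)"))
# -> ['wow', 'great', 'job', 'see', 'more']
```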
B. Glove based Word Representation

The GloVe approach generates a vector representation of words, using the similarity between words as an invariant. Two earlier methods exist for this task, CBOW and skip-gram; the problems with these conventional methods include limited accuracy, long processing times, and so on. The primary objective of GloVe is to incorporate the ideas proposed by both techniques while assuring better accuracy. Before building the GloVe model, the vector representation of words has to be set up: the approach assigns a vector of fixed dimension (d) to every word, and words occurring in similar contexts are taken to have similar meanings. Consider the following terms before presenting the formulation of GloVe:

• Take a matrix of word-to-word co-occurrence counts, denoted by X, whose entries X_ij store the number of times word j occurs in the context of word i.
• Let X_i = Σ_k X_ik denote the number of times any word appears in the context of word i.
• Finally, let P_ij = P(j | i) = X_ij / X_i denote the probability that word j appears in the context of word i.

Consider two words i and j that are related in content; e.g., suppose that cricket is the subject matter, with probe words such as duck and boundary. Analyzing the ratio of co-occurrence probabilities with distinct probe words k reveals the relationships between these words. For words k related to duck but not to boundary, the ratio P_ik / P_jk is maximized; similarly, for words k related to boundary but not to duck (say, six), the ratio is minimized. For words like score, appropriate to both duck and boundary, the ratio is close to 1. Conventional logic therefore suggests that the ratio of co-occurrence probabilities can serve as a starting point for calculating the similarity between terms. The ratio depends on three words i, j, and k, so the standard method models the process as

$$F(w_i, w_j, \tilde{w}_k) = \frac{P_{ik}}{P_{jk}},$$

where $w$ denotes a word vector and $\tilde{w}$ a separate context word vector. As vector spaces are inherently linear structures, it is natural to pass to vector differences:

$$F(w_i - w_j, \tilde{w}_k) = \frac{P_{ik}}{P_{jk}}. \qquad (2)$$

The application of arguments from algebraic functions and group theory then gives

$$F\left((w_i - w_j)^{T} \tilde{w}_k\right) = \frac{F(w_i^{T} \tilde{w}_k)}{F(w_j^{T} \tilde{w}_k)},$$

in which the dot product couples the two vector spaces and F acts as an exponential function. Taking F = exp,

$$w_i^{T} \tilde{w}_k = \log P_{ik} = \log X_{ik} - \log X_i, \qquad (4)$$

and since log X_i is a constant for word i, the above relation is altered by absorbing it into bias terms:

$$w_i^{T} \tilde{w}_k + b_i + \tilde{b}_k = \log X_{ik},$$

in which $b_i$ represents a bias for word i and $\tilde{b}_k$ a bias for word k. From a machine learning perspective, the right-hand side is computed from the corpus and the left-hand side should be fitted to reproduce it; thus the hypothesis (h) corresponds to the LHS, while the RHS is referred to as the output (y). The cost function is then formed with the least-squares method,

$$J = \sum_{i,j} \left( w_i^{T} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2,$$

which is to be minimized. Before employing gradient descent (GD), however, the cost function should be weighted so that individual word pairs do not contribute disproportionately; the cost function thus acts as a memory that preserves data depending on previously counted co-occurrences:

$$J = \sum_{i,j} f(X_{ij}) \left( w_i^{T} \tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij} \right)^2,$$

where $f(X_{ij})$ denotes the weight associated with the co-occurrence of term i with term j; generally, f is taken as $f(x) = (x/x_{\max})^{\alpha}$ for $x < x_{\max}$ and $f(x) = 1$ otherwise. Later, the partial derivative of J is taken with respect to $w_i$ (a vector of dimension d) and the remaining parameters, and GD with learning rate alpha is employed to train the model. Consequently, once the model is trained, words with similar meanings can be retrieved by presenting an arbitrary word as input; for business, words with similar meanings such as market, industry, products, share market, stock, etc., are returned. A sketch of loading pre-trained vectors of this kind into an embedding matrix is given below.
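A common way to connect pre-trained GloVe vectors to a downstream network is to build an embedding matrix indexed by the vocabulary. The sketch below assumes the standard GloVe text-file format ("word v1 ... vd" per line) and a word-to-index vocabulary; the function names are of our own choosing.

```python
import numpy as np

def load_glove(path: str) -> dict[str, np.ndarray]:
    """Parse a GloVe text file into a word -> vector dictionary."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def embedding_matrix(vocab: dict[str, int], vectors: dict[str, np.ndarray],
                     dim: int = 100) -> np.ndarray:
    """Row i holds the vector for the word with index i; words without a
    pre-trained vector (and the padding index 0) stay zero."""
    E = np.zeros((len(vocab) + 1, dim), dtype=np.float32)
    for word, idx in vocab.items():
        if word in vectors:
            E[idx] = vectors[word]
    return E
```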
C. Sarcasm Detection using CNN-RNN Technique

The extracted feature word vectors are fed into the CNN-RNN technique for the classification of sarcasm. An RNN is a kind of neural network that maintains an internal hidden state to model the dynamic temporal behaviour of sequences of arbitrary length through directed cyclic connections among its units. It can be considered an extension of a hidden Markov model with a nonlinear transition function, able to model long-term temporal dependencies. The LSTM extends the RNN by adding a forget gate controlling whether to forget the current state, an input gate indicating whether to read the input, and an output gate controlling whether to output the state [23]. These gates enable the LSTM to learn long-term dependencies in a sequence and also ease optimization, since they let the input signal propagate effectively through the recurrent hidden state without distorting the output. The LSTM also handles the gradient exploding and vanishing problems which usually appear in RNN training. Fig. 2 illustrates the framework of the CNN model. In the gate equations, σ denotes an activation function, ⊙ the product with the gate value, and the various matrices are learned parameters; the rectified linear unit (ReLU) is used as the activation function in this work.

A CNN-RNN architecture is employed for multi-label classification problems. It consists of two parts: the CNN extracts a semantic representation from the image, and the RNN models label/image relations and label dependency. The recurrent, label, and image representations are projected into the same low-dimensional space to model the label redundancy and the image-text relation. Fig. 3 demonstrates the structure of the RNN model. The RNN is applied as a compact but strong representation of the label co-occurrence dependencies in this space: it takes the embedding of the predicted label at every time step and maintains a hidden state to model the label co-occurrence information. The a priori likelihood of a label, given the previously predicted labels, can be calculated from the dot product of its embedding with the sum of the recurrent and image embeddings.

A label is denoted as a one-hot vector $e_k = [0, \ldots, 0, 1, 0, \ldots, 0]$, which is 1 at the kth position and 0 elsewhere. The label embedding is obtained by multiplying the one-hot vector by a label embedding matrix $U_l$, whose kth row is the label embedding of label k,

$$w_k = U_l^{T} e_k. \qquad (10)$$

The dimension of $w_k$ is generally lower than the number of labels. The recurrent layer takes the label embedding of the previously predicted label and models the co-occurrence dependency in its hidden recurrent state by learning a nonlinear function,

$$o(t),\; h(t) = \mathrm{RNN}\big(h(t-1),\, w_k(t)\big), \qquad (11)$$

where $h(t)$ and $o(t)$ are the hidden state and output of the recurrent layer at time step t, respectively, $w_k(t)$ is the label embedding of the t-th label on the prediction path, and RNN denotes the nonlinear recurrent function. The image representation and the output of the recurrent layer are projected into the same low-dimensional space as the label embeddings,

$$x(t) = h\big(U_o^{x}\, o(t) + U_I^{x}\, I\big), \qquad (12)$$

where $U_o^{x}$ and $U_I^{x}$ denote the projection matrices for the recurrent layer output and the image representation I, respectively; the number of columns of $U_o^{x}$ and $U_I^{x}$ matches that of the label embedding matrix. The learned joint embedding effectively characterizes the relevance of labels and images. Lastly, the label scores are calculated by multiplying by the label embedding matrix, i.e., computing the distance between x(t) and every label embedding,

$$s(t) = U_l\, x(t), \qquad (13)$$

and the predicted label likelihood is computed by softmax normalization of the scores. Fig. 4 depicts the architecture of the LSTM model. A plausible realization of such a stack for the present task is sketched below.
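One plausible Keras realization of a CNN-RNN stack for binary sarcasm classification is sketched below; the layer sizes, kernel width, and dropout rate are illustrative assumptions rather than the paper's exact configuration.

```python
from tensorflow.keras import layers, models, initializers

def build_cnn_rnn(vocab_size: int, embed_matrix, max_len: int = 60):
    """CNN branch extracts local n-gram features from the GloVe embeddings;
    a bidirectional LSTM then models longer-range context; a sigmoid head
    scores the sarcastic class."""
    model = models.Sequential([
        layers.Embedding(vocab_size, embed_matrix.shape[1],
                         embeddings_initializer=initializers.Constant(embed_matrix)),
        layers.Conv1D(128, kernel_size=3, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dropout(0.3),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The learning rate of the optimizer is the quantity tuned by TLBO in the next subsection.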
D. Hyperparameter Optimization using TLBO Algorithm

At the final stage, the learning rate of the CNN-RNN technique is optimally chosen with the TLBO algorithm so that the sarcasm detection outcome is improved. TLBO is a metaheuristic approach based on the teaching-learning model, established by Rao et al. [24] for solving optimization problems. It simulates the passing of knowledge in a class, where students first gain information from the teacher and then through mutual interaction. TLBO is a population-based optimization technique in which the set, or class, of students is regarded as the population; each student of the class signifies a possible solution of the problem. The TLBO technique includes the two stages given below.

1) Teacher phase: This phase models the learning of students from the teacher. The teacher attempts to improve the knowledge level of the students so that they obtain better marks; however, a student gains information and attains marks according to the quality of the teaching delivered and the quality of the students present in the class. To simulate this, suppose there are m subjects (design variables) offered to n students (population size, k = 1, 2, ..., n). At a given teaching-learning cycle (iteration) i, let $M_{j,i}$ represent the mean result of the students in a specific subject j. The teacher is the most skilled, experienced, and learned person in the society; to simulate this, the best student (possible solution) in the whole population is regarded as the teacher. The difference between the result of the teacher and the mean result of the students in subject j is

$$\mathrm{Difference}_{j,i} = r_i \left( X_{j,\mathrm{best},i} - T_F\, M_{j,i} \right), \qquad (14)$$

where $T_F$ is the teaching factor that decides the value of the mean to be changed and $r_i$ is a random number in the range 0 to 1. $T_F$ is not a parameter of the TLBO technique; its value is either one or two [25]. A possible solution (student) is improved by moving its position toward the position of the best possible solution (teacher), taking into account the current mean of the possible solutions. To simulate this, the kth possible solution in the population at the ith teaching-learning cycle is updated as

$$X'_{j,k,i} = X_{j,k,i} + \mathrm{Difference}_{j,i}. \qquad (15)$$

If $X'_{k,i}$ is better than $X_{k,i}$, it is accepted; otherwise it is rejected. Every accepted possible solution is retained, and these form the input to the student phase.

2) Student phase: Here, students gain information through mutual communication: a student interacts randomly with another student of the class to improve his or her knowledge. Thus, when student v is superior to student u, student u is moved toward student v; otherwise, student u is moved away from student v. The learning rule of this phase is as follows. Two students (possible solutions $X_u$ and $X_v$) are randomly selected from the class (population), where u and v are two random integers belonging to [1, n] with $u \ne v$. Then

$$X'_{j,u,i} = \begin{cases} X_{j,u,i} + r_i \left( X_{j,u,i} - X_{j,v,i} \right), & \text{if } f(X_{u,i}) \text{ is better than } f(X_{v,i}), \\ X_{j,u,i} + r_i \left( X_{j,v,i} - X_{j,u,i} \right), & \text{otherwise}, \end{cases}$$

where f denotes the fitness function (FF) used to evaluate the possible solutions and $X'_{j,u,i}$ is the jth design variable of the altered possible solution from the student phase at the ith teaching-learning iteration. Afterward, the fitness value of $X'_{u,i}$ is estimated, and the better of $X'_{u,i}$ and $X_{u,i}$ is retained. A compact sketch of this procedure is given below.
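A compact implementation of the two TLBO phases for minimization follows; the population size, iteration budget, and the surrogate objective in the usage line are our own illustrative choices.

```python
import numpy as np

def tlbo_minimize(f, bounds, pop=20, iters=100, rng=None):
    """Teaching-learning-based optimization (minimization). The teacher
    phase moves each learner toward the best solution relative to the
    class mean; the learner phase lets random pairs pull each other."""
    rng = rng or np.random.default_rng()
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    F = np.array([f(x) for x in X])
    for _ in range(iters):
        teacher = X[np.argmin(F)]
        mean = X.mean(axis=0)
        for i in range(pop):
            # --- teacher phase ---
            Tf = rng.integers(1, 3)  # teaching factor, 1 or 2
            cand = np.clip(X[i] + rng.random(len(lo)) * (teacher - Tf * mean), lo, hi)
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
            # --- learner phase ---
            j = rng.integers(pop)
            if j == i:
                continue
            step = X[i] - X[j] if F[i] < F[j] else X[j] - X[i]
            cand = np.clip(X[i] + rng.random(len(lo)) * step, lo, hi)
            fc = f(cand)
            if fc < F[i]:
                X[i], F[i] = cand, fc
    return X[np.argmin(F)], F.min()

# Example: tune a learning rate against a validation-loss surrogate
best_lr, _ = tlbo_minimize(lambda x: (np.log10(x[0]) + 3) ** 2, [(1e-5, 1e-1)])
```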
B. Results Analysis. This section examines the sarcasm detection performance of the DLE-SDC technique from several aspects. The DLE-SDC technique is investigated in terms of different measures, namely precision, recall, accuracy, and F-measure. The confusion matrix generated by the DLE-SDC technique on the classification of sarcasm is depicted in Fig. 5. The figure shows that the DLE-SDC technique classified a total of 14166 instances as non-sarcastic and 12639 instances as sarcastic. A brief analysis of the classification results of the DLE-SDC technique in comparison with other DL techniques is given in Table I. A loss graph of the DLE-SDC technique is examined under a varying number of epochs in Fig. 8. The figure shows that the training and validation loss values decrease as the number of epochs increases; in particular, the validation loss is lower than the training loss of the DLE-SDC technique. Finally, a comprehensive comparative study of the DLE-SDC technique with other techniques is presented in Table II [27]. From the above results analysis, it is observed that the DLE-SDC technique accomplishes the best sarcasm detection performance and can be employed to detect sarcasm in online social media content. V. CONCLUSION. This paper has presented a new DLE-SDC technique to identify and classify sarcasm using DL techniques. The proposed DLE-SDC technique comprises different stages of operation, namely pre-processing, word vector representation, CNN-RNN based classification, and TLBO based hyperparameter optimization. In addition, the CNN-RNN technique involves the BiLSTM model for the detection and classification of sarcasm. To increase the sarcasm detection performance of the CNN-RNN model, the TLBO algorithm is applied to determine the optimal learning rate of the presented CNN-RNN model, which boosts the detection performance to the maximum extent. A wide range of simulations took place on a benchmark dataset, and the results were validated in terms of different measures. The simulation outcomes pointed out the supremacy of the DLE-SDC technique over recent state-of-the-art techniques. As part of future work, the sarcasm detection performance can be extended by the design of feature selection and clustering techniques.
5,651.8
2022-01-01T00:00:00.000
[ "Computer Science" ]
Federal Algorithm Design and Numerical Experiments of Ensemble Kalman Filter Data Assimilation Analysis Process Based on the federated Kalman filter proposed by Carlson for linear systems, we propose a federated computing scheme for the ensemble Kalman filter (EnKF) assimilation analysis process for nonlinear systems, and give an optimal information fusion estimation algorithm weighted by a diagonal matrix under the linear minimum variance criterion; that is, the assimilation analysis value of each variable in the global estimate is a linear combination of the assimilation analysis values of the corresponding variables in the local estimates of the sub-filters, and the calculation of the combination coefficients is given. The federated algorithm of the EnKF assimilation analysis process for nonlinear systems is verified on the Lorenz (1963) system. Introduction With the development of science and technology, a variety of observation instruments usually provide information in observing the atmosphere and ocean. If the standard EnKF [1][2][3][4] data assimilation method is adopted, the amount of computation is very large due to the need to centrally fuse the measurement data of all observation instruments. In addition, this centralized EnKF method offers no pluggability in the design of assimilation algorithms for different types of observation data: once an observation instrument fails or an observation is wrong, the entire data assimilation process cannot work properly. If an assimilation method with separate processing and a federated design is adopted for the different types of observation data, it can not only reduce the computational burden of centralized assimilation, but also improve the scalability and fault tolerance of the assimilation system. According to the principle of the federated Kalman filter [5][6], a federated computational algorithm based on matrix-weighted optimal fusion can be formulated for the EnKF assimilation analysis process for nonlinear systems. However, this federated method for the EnKF has two disadvantages. One is the need to calculate the inverse of the covariance matrix of the state-variable analysis values after the assimilation analysis of the different types of observation data [7]: because the covariance matrix in the EnKF is obtained by statistical methods, its inverse does not necessarily exist, especially when the ensemble size is smaller than the number of variables. The other is that matrix inversion carries a large computational burden. In view of these shortcomings, we propose an optimal information fusion estimation algorithm weighted by a diagonal matrix under the linear minimum variance criterion. The diagonal-matrix weighted optimal fusion formula is derived by the Lagrange multiplier method, and the optimal weighting matrix is replaced by the calculation of optimal weighting coefficients, which resolves the difficulty of federating the EnKF. Finally, a numerical experiment is carried out to test the proposed algorithm. Let $\hat{x}_i$ and $P_i$ ($i = 1, 2, \dots, N$) denote the local analysis value of the $i$-th sub-filter and its variance-covariance matrix, and let $\hat{x}_{N+1}$ and $P_{N+1}$ represent the analysis value of the main filter and its variance-covariance matrix, respectively; $N$ is the number of types of observation data and the number of sub-filters. The core of the federated filtering algorithm is to fuse the results of each filter according to the following formulas:

$\hat{x}_g = P_g \sum_{i=1}^{N+1} P_i^{-1} \hat{x}_i, \qquad (1)$

$P_g = \Big( \sum_{i=1}^{N+1} P_i^{-1} \Big)^{-1}, \qquad (2)$

which give the global optimal estimate $\hat{x}_g$. Carlson [8] gave proofs of Eq. (1) and Eq. (2) when the errors between the local estimates are not correlated.
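The matrix-weighted fusion of Eqs. (1) and (2) can be sketched as follows. This is a generic NumPy illustration of Carlson-style fusion of uncorrelated local estimates, not code from the paper; the toy state dimension and covariances are assumed.

```python
import numpy as np

def fuse_matrix_weighted(x_list, P_list):
    """Carlson fusion, Eqs. (1)-(2): global estimate from uncorrelated
    local estimates x_i with covariances P_i."""
    info = sum(np.linalg.inv(P) for P in P_list)      # sum of P_i^{-1}
    P_g = np.linalg.inv(info)                         # Eq. (2)
    x_g = P_g @ sum(np.linalg.inv(P) @ x              # Eq. (1)
                    for x, P in zip(x_list, P_list))
    return x_g, P_g

# Toy example: two sub-filters and a 3-variable state
rng = np.random.default_rng(2)
x1, x2 = rng.normal(size=3), rng.normal(size=3)
P1 = np.diag([0.4, 0.6, 0.9]); P2 = np.diag([0.6, 0.9, 0.4])
x_g, P_g = fuse_matrix_weighted([x1, x2], [P1, P2])
print(x_g, np.diag(P_g))
```

Note that this form requires inverting every local covariance matrix, which is exactly the difficulty for the EnKF discussed next.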
Then each sub-filter uses the EnKF calculation method proposed by Evensen [2]. The main filter has two functions: first, to perform the time update, and second, to fuse the estimation results of the sub-filters globally; the main filter has no measurement update. Global Optimal Estimation Federated Filtering Algorithm Weighted by a Diagonal Matrix under the Linear Minimum Variance Criterion For the federated EnKF assimilation analysis scheme for nonlinear systems, the current problem is how to use the local estimates given by each sub-filter, including the main filter, to obtain the global optimal estimate. If the global optimal estimation of the state-variable analysis values estimated by each sub-filter and the main filter is performed according to Eq. (1) and Eq. (2), two difficulties are encountered. Firstly, in the EnKF method, the analysis covariance matrix $P_i$ ($i = 1, 2, \dots, N, N+1$) of each sub-filter is obtained from the statistics of the ensemble of analysis state variables of that sub-filter, so the existence of the inverse of the analysis covariance matrix of each sub-filter is not guaranteed, especially when the number of analysis state variables is greater than the number of ensemble members; this causes trouble in the calculation of the global optimal estimate in Eq. (1). The second difficulty is that the statistical calculation of the covariance matrix of the analysis variables and the matrix inversion are computationally very demanding [2][7]. In order to overcome these two difficulties, we propose a global optimal estimation federated filtering algorithm weighted by a diagonal matrix under the linear minimum variance criterion. In this algorithm it is assumed that the $k$-th element $x_g(k)$ of the global estimate $x_g$ is a linear combination only of the $k$-th elements $\hat{x}_i(k)$ of the local estimates:

$x_g(k) = \sum_{i=1}^{N+1} w_{i,k}\, \hat{x}_i(k), \qquad k = 1, 2, \dots, n,$

where the $w_{i,k}$ are weight coefficients and $n$ is the number of control variables; equivalently, $x_g = \sum_{i=1}^{N+1} W_i \hat{x}_i$ with diagonal weighting matrices $W_i = \operatorname{diag}(w_{i,1}, \dots, w_{i,n})$. The global estimate should satisfy the following conditions: ① $x_g$ is unbiased; ② each diagonal element of the error variance matrix $P_g = E[(x_g - x)(x_g - x)^{\mathrm{T}}]$ of the global estimate takes its minimum value, where $x$ is the true value. Because the local estimates are not correlated, that is, $E[(\hat{x}_i - x)(\hat{x}_j - x)^{\mathrm{T}}] = 0$ for $i \ne j$, and according to condition ①, that $x_g$ is unbiased, it can be obtained that

$\sum_{i=1}^{N+1} W_i = I,$

where $I \in \mathbb{R}^{n \times n}$ is the unit diagonal matrix, and the variance matrix of the global estimate is

$P_g = \sum_{i=1}^{N+1} W_i P_i W_i^{\mathrm{T}},$

whose $k$-th diagonal element is $P_g(k,k) = \sum_{i=1}^{N+1} w_{i,k}^2\, D_{i,i}(k)$, where $D_{i,i}(k) = P_i(k,k)$. The objective function is defined as follows:

$J = \sum_{k=1}^{n} \Big[ \sum_{i=1}^{N+1} w_{i,k}^2\, D_{i,i}(k) + \lambda_k \Big( 1 - \sum_{i=1}^{N+1} w_{i,k} \Big) \Big],$

where $\Lambda = (\lambda_1, \dots, \lambda_n)^{\mathrm{T}}$ is the Lagrange multiplier vector. It is necessary to determine the weight coefficients and $\Lambda$ so that the objective function $J$ is minimized; for this, let

$\frac{\partial J}{\partial w_{i,k}} = 2\, w_{i,k}\, D_{i,i}(k) - \lambda_k = 0, \quad i = 1, 2, \dots, N+1, \qquad \frac{\partial J}{\partial \lambda_k} = 1 - \sum_{i=1}^{N+1} w_{i,k} = 0.$

The above two equations can be written in the matrix form of Eq. (11). Since the matrices $D_{i,i}$ and $I$ are diagonal, Eq. (11) can be decomposed into $n$ independent linear systems of order $N+2$, one for the $N+1$ weights $w_{1,k}, \dots, w_{N+1,k}$ and the multiplier $\lambda_k$ of each element $k$. It can be seen that each system has a unique solution:

$w_{i,k} = \frac{1 / P_i(k,k)}{\sum_{j=1}^{N+1} 1 / P_j(k,k)}, \qquad i = 1, 2, \dots, N+1,$

so that each weight is inversely proportional to the corresponding local error variance. Only the diagonal elements of the analysis covariance matrices are needed, and no matrix inversion is required.
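For comparison with the matrix-weighted sketch above, the diagonal-matrix-weighted fusion just derived reduces to the following few lines, which use only the diagonal elements of the local analysis covariances and therefore need no matrix inversion; the function and variable names are illustrative.

```python
import numpy as np

def fuse_diagonal_weighted(x_list, P_list):
    """Per-element fusion: w_{i,k} proportional to 1 / P_i(k,k).
    Uses only diagonal variances, avoiding matrix inversion."""
    X = np.stack(x_list)                         # (N+1, n) local estimates
    V = np.stack([np.diag(P) for P in P_list])   # (N+1, n) local variances
    W = (1.0 / V) / (1.0 / V).sum(axis=0)        # weights sum to 1 per element
    return (W * X).sum(axis=0)

rng = np.random.default_rng(3)
x1, x2 = rng.normal(size=3), rng.normal(size=3)
P1 = np.diag([0.4, 0.6, 0.9]); P2 = np.diag([0.6, 0.9, 0.4])
print(fuse_diagonal_weighted([x1, x2], [P1, P2]))
```

For strictly diagonal covariances this coincides with the matrix-weighted fusion; for full covariances it trades some optimality for robustness and computational cost.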
Numerical Experiments To test the feasibility and effectiveness of the proposed federated EnKF design scheme with the global optimal estimation algorithm weighted by a diagonal matrix under the linear minimum variance criterion, we use the Lorenz (1963) system [23] to carry out numerical experiments. The system control equations are

$\dot{x} = \sigma (y - x), \qquad \dot{y} = x(\rho - z) - y, \qquad \dot{z} = xy - \beta z,$

and model error is not considered. The fourth-order Runge-Kutta method is used for the numerical solution of the equations. The time integration step is 0.001, and 800 steps are integrated in total. The probability density distribution of the three initial perturbations is a Gaussian distribution with a mean value of 0 and a variance of 0.4, and the number of ensemble perturbations is 100. The first kind of observation data is a combined observation $O(k)$ of the state, where $O(k)$ denotes the observation at time $k$; $x(k)$, $y(k)$, and $z(k)$ are the values at the corresponding time calculated from the true initial value, and $\varepsilon(k)$ is the observation perturbation, a Gaussian white noise with a mean value of 0 and a variance of 0.6. The second kind of observation data is

$x^o(k) = x(k) + w_x(k), \qquad y^o(k) = y(k) + w_y(k), \qquad z^o(k) = z(k) + w_z(k),$

where the superscript $o$ denotes observation data; $x(k)$, $y(k)$, and $z(k)$ are the values at the corresponding time calculated from the true initial value, and $w_x(k)$, $w_y(k)$, and $w_z(k)$ are observation disturbances, Gaussian white noises with a mean value of 0 and a variance of 0.9, respectively. These two kinds of observation data are assimilated every 10 steps, i.e., 80 times in total. In order to assess the assimilation effect of the various EnKF test schemes, the error function is defined as $J = |\varphi^a - \varphi^t|$, where $\varphi$ represents $x$, $y$, or $z$; the quantity with superscript $a$ is the estimate obtained by the various EnKF assimilation schemes, and the quantity with superscript $t$ is the true value, i.e., the quantity calculated from the true initial value. Firstly, the assimilation effect of the EnKF assimilation analysis process using the federated scheme is tested. We carried out a first group of numerical experiments comprising three test schemes. In the first experiment, the two kinds of observation data were assimilated by the federated method and the optimal solution was then obtained by global fusion, with an even distribution of the information factors, i.e., $\beta_1 = \beta_2 = \beta_3 = 1/3$. The second experiment assimilates only the first kind of observation data; the third experiment assimilates only the second kind. Fig. 2 shows the variation of the error function with time in the three experiments of the first group. It can be seen from the figure that the assimilation analysis values of the three experiments gradually approach the true value as the number of assimilated observations increases. For experiments 2 and 3, in which only one kind of observation data is assimilated, the assimilation analysis values approach the true value noticeably more slowly than in experiment 1, which uses the federated processing to assimilate the two kinds of observation data simultaneously. These numerical results also show that the proposed federated scheme and the global optimal estimation method weighted by a diagonal matrix under the linear minimum variance criterion are feasible, and the accuracy of the assimilation results obtained by assimilating the two kinds of observation data with the federated scheme is clearly better than that obtained by assimilating a single kind of data. To further analyze the assimilation accuracy of the federated EnKF processing, we carried out a second group of numerical experiments. The first experiment was the same as the first experiment of the first group; the second experiment assimilates the two kinds of observation data by the traditional centralized processing method; the third experiment uses the federated processing method for the two kinds of observation data with information factors $\beta_1 = 0.4$, $\beta_2 = 0.1$, and $\beta_3 = 0.5$. In addition, we carried out multiple further sets of assimilation experiments with different information distribution factors.
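The twin experiment described above can be reproduced along the following lines: integrate the Lorenz (1963) system with a fourth-order Runge-Kutta scheme at a step of 0.001 for 800 steps and generate noisy observations every 10 steps. The Lorenz parameter values, the true initial state, and the combined form assumed for the first observation type are illustrative assumptions; the paper does not state them.

```python
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # classical Lorenz-63 parameters (assumed)

def lorenz63(s):
    x, y, z = s
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def rk4_step(s, dt):
    k1 = lorenz63(s)
    k2 = lorenz63(s + 0.5 * dt * k1)
    k3 = lorenz63(s + 0.5 * dt * k2)
    k4 = lorenz63(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(4)
dt, n_steps = 0.001, 800
truth = np.empty((n_steps + 1, 3))
truth[0] = np.array([1.0, 1.0, 1.0])   # assumed true initial state
for k in range(n_steps):
    truth[k + 1] = rk4_step(truth[k], dt)

# Observations every 10 steps (80 assimilation times)
obs_idx = np.arange(10, n_steps + 1, 10)
obs1 = truth[obs_idx].sum(axis=1) + rng.normal(0, np.sqrt(0.6), obs_idx.size)  # combined obs (form assumed)
obs2 = truth[obs_idx] + rng.normal(0, np.sqrt(0.9), (obs_idx.size, 3))         # direct obs of x, y, z
print(obs1.shape, obs2.shape)   # (80,), (80, 3)
```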
Fig. 3 shows that the federated EnKF using the global optimal estimation algorithm weighted by a diagonal matrix under the linear minimum variance criterion is effective. The accuracy of its results is the same as that of centralized data assimilation, and differences in the information distribution factors do not affect the accuracy of the global filtering. Conclusions A federated calculation scheme is proposed for the EnKF with nonlinear systems; its optimal information fusion is weighted by a diagonal matrix under the linear minimum variance criterion, in which the optimal weighting matrix is replaced by the calculation of optimal weighting coefficients. Based on the Lorenz (1963) system with two kinds of observation data, the federated EnKF calculation scheme and the fusion estimation algorithm are tested. The numerical test results show that the federated EnKF calculation scheme and the optimal information fusion estimation algorithm weighted by the diagonal matrix are feasible and effective. In addition, the system used in the numerical experiments is relatively simple, and the observation data are simulated; if actual observation data are used, the performance of this method in complex numerical models needs to be further tested. Acknowledgment Supported by the National Natural Science Foundation of China, Grant No. 42075080.
Figure 1. The federated design diagram of the EnKF assimilation analysis process.
Figure 2. The variation of the error function with time in the first group of three experiments (solid line: Experiment 1; dotted line: Experiment 2; dashed line: Experiment 3).
2,606.8
2024-04-01T00:00:00.000
[ "Computer Science", "Engineering", "Mathematics" ]
EPLAODV: Energy Priority and Link Aware Ad-hoc On-demand Distance Vector Routing Protocol for Post-disaster Communication Networks In Mobile Ad-hoc Networks (MANETs), the major issue between nodes for data transfer is link quality. The majority of MANET routing protocols try to maintain the link quality between nodes, but still need improvement. This paper focuses on the link quality between nodes in an ad hoc network in emergency scenarios through the design of the Energy Priority and Link Aware Ad-hoc On-demand Distance Vector (EPLAODV) routing protocol. EPLAODV improves the link quality between nodes through SNR. For the performance analysis, the NS2.35 network simulator was used, with simulation time and traffic load as the major simulation parameters. From the simulation results, it is observed that EPLAODV performs better than AODV. Keywords: mobility; SNR; EPLAODV; link quality; lifetime. I. INTRODUCTION Many users use wireless communication devices and channels for the exchange of information. In a MANET, mobile nodes are connected through wireless links. A MANET needs no pre-deployed infrastructure, which makes it suitable for environment monitoring and disaster scenarios [1][2][3]. Mobile nodes use RF frequencies to flood data packets from source to destination nodes. A MANET has a flexible and dynamic topology: the nodes of the network frequently move at different speeds within the transmission range, and nodes may join or leave the network, thus changing its topology. Node mobility is a main cause of frequent link failure. To prolong the network lifetime and obtain maximum throughput, MANETs use reactive, proactive, and hybrid routing protocols to discover short, stable, and efficient routes. The reactive routing protocols AODV [4], DSR [5], and TORA [6] are on-demand routing protocols that discover routes when required, avoid traffic congestion, reduce routing overhead, minimize end-to-end delay, and reduce traffic collisions [7]. Generally, MANET nodes have battery constraints; in a network, nodes with a very low energy level become disconnected. To discover a route, the conventional AODV routing protocol does not consider the fairness of node energy consumption. MANETs are used in various real-time applications such as environment monitoring and control, air traffic control, battlefields, and many emergency scenarios like major accidents and disaster relief operations [8]. In real-time scenarios, where the replacement or recharging of batteries is almost impossible, the lifetime of the network is important. To prolong the network lifetime, various energy-efficient and link-aware routing protocols have been proposed. These routing protocols avoid link failures and reduce node energy consumption [9][10][11][12], although the current status in maximizing network lifetime still needs improvement. In disaster-affected areas, the telecommunication infrastructure is seriously damaged or even collapsed. MANETs can be deployed in critical areas to carry out rescue operations. Rescue workers or first responders are equipped with mobile devices to share important information about operations at the disaster site: they share videos, make voice calls, and send text messages to their supervisors to report the situation in the area. Wireless communication is used to support the operational analysis of the disaster response.
To avoid communication interruption among the rescue workers, and to reduce the losses of infrastructure and human lives, critical information must travel through high-powered nodes and stable links. The proposed Energy Priority and Link Aware Ad-hoc On-demand Distance Vector (EPLAODV) routing protocol discovers energy-efficient and link-aware routes from source to destination for the exchange of critical information in disaster operations. II. LITERATURE REVIEW MANET routing protocols are used to discover routes from source to destination. Node mobility, unfair consumption of node energy, and frequent link failures are inherent in a MANET, and they cause routing overhead and end-to-end delay. The conventional AODV reactive routing protocol considers neither link stability nor energy consumption, while energy consumption is the main issue of MANETs. Different routing protocols based on node energy consumption and link quality parameters have been proposed to prolong the network lifetime. The authors in [13] proposed the MAODV routing protocol, which is based on AODV. The algorithm considers two parameters: node energy level and node distance. The distances among the nodes are measured, and a node with a good energy level and small distances is considered in route selection. MAODV selects an energy-efficient route in which all nodes possess a good energy level. This protocol consumes less energy and maximizes the packet delivery ratio and the network lifetime. In [14], the authors suggested a local repair method to avoid link breakage. The protocol takes preemptive measures based on the residual energy and uses three state modes to avoid link failure: normal, selfish, and sleep. The algorithm selects stable and energy-efficient routes; it minimizes routing overhead by avoiding link breakages and maximizes the network lifetime. The authors in [15] proposed the energy-efficient routing protocol Ad-hoc On-demand Distance Vector Energy Aware (AODVEA), which utilizes a min-max algorithm to take routing decisions. A node must have a minimum remaining energy to participate in route selection. The algorithm computes the minimum remaining energy of all possible routes; to prolong the network lifetime and minimize end-to-end delay, the protocol selects the path having the maximum value of minimum remaining energy. The authors in [16] proposed the Dynamic Energy Ad-hoc On-demand Distance Vector (DE-AODV) routing protocol, in which route selection is based on node energy level. The node energy level is compared with a threshold, and a node having more energy is selected for a route. To avoid link failures, the algorithm provides external batteries to the nodes with less energy. The protocol maximizes the network lifetime by minimizing energy consumption. The authors in [17] suggested a new method to improve link quality, based on the ant colony optimization (ACO) algorithm. The algorithm considers the received signal strength (RSS) as a link quality parameter and selects nodes for a route that have stable links with good RSS values. The protocol minimizes routing overhead and node energy consumption by avoiding link failure. The authors in [18] proposed the Route Stability and Energy Aware (RSEA)-AODV routing protocol, which considers RSS, node drain rate, remaining energy, and delay as route metrics to select a stable route. In the route discovery process, these parameters are compared with their respective thresholds; a less congested node having a good energy level and good RSS values is selected for an energy-efficient and stable route.
The protocol minimizes energy consumption and end-to-end delay and maximizes the network lifetime and packet delivery ratio. III. THE PROPOSED EPLAODV In the proposed EPLAODV routing protocol, the discovered routes are based on traffic priority, node energy, and link quality. The RREQ and RREP control packets are modified by adding new priority fields. High priority 0 is assigned to time-critical traffic, for the exchange of information in the form of voice and video; low priority 1 is assigned to normal traffic, for the exchange of information in the form of text and data. When the source node wants to discover a route to a destination, it sets the traffic priority in the RREQ control packet and broadcasts it to all its neighbors. In the following section, the route discovery process is discussed in detail. A. Route Discovery Assume that node Ns is the source and node Nd is the destination, as shown in Figure 1. The source node initiates the route discovery process: Ns assigns traffic priority 0 in the RREQ control packet and broadcasts RREQ packets to the neighbor nodes n1, n3, and n5. The intermediate nodes n1, n3, and n5 compare their residual energy with the energy threshold The. If their residual energy is greater than The, they make a reverse path entry and broadcast RREQ packets to their neighbors; if any node among n1, n3, or n5 has a residual energy level lower than the threshold The, it drops the RREQ packet. Assuming that these nodes have a sufficient residual energy level, node n1 rebroadcasts the RREQ packet to node n2, which does not accept the packet due to its insufficient energy level. Similarly, node n3 rebroadcasts the RREQ packet to node n4 and node n5 rebroadcasts the RREQ packet to node n6. This process continues until the RREQ packet reaches the destination. When the RREQ control packet reaches the destination node Nd, the destination updates the priority field in the RREP control packets to assign the traffic priority, and the RREP packets are unicast to nodes n4 and n7. When nodes n4 and n7 receive the RREP packets, they check the traffic priority value and their own residual energy and signal-to-noise ratio (SNR) values. If the RREP packet contains a priority value of 0, the nodes assume that the route is being established for high-priority traffic; otherwise, a low-priority route is established. Intermediate nodes do not accept any RREP control packet until they have checked their node energy level and SNR value. If the residual energy of nodes n4 and n7 is greater than The, and their SNR value is greater than Thsnr0, then node n4 unicasts the RREP packet to node n3 and node n7 unicasts the RREP packet to node n6. If node n4 or n7 has a residual energy level lower than The or an SNR lower than Thsnr0, it drops the RREP packet. Node n5 drops the RREP control packet due to insufficient energy or an insufficient SNR value. This discovery process continues until the RREP control packet reaches the source node Ns. Finally, node n3 unicasts the RREP packet to the source node Ns, and the route from Ns through n3 to Nd is established. To discover low-priority routes, priority 1 is assigned in the RREQ and RREP control packets and the same steps are taken in the route discovery process to establish energy-efficient and stable routes. Once the route has been established, the source node forwards data packets to the destination node. B. EPLAODV Algorithm In Table I, the route setup process of the EPLAODV protocol is described; a sketch of the forwarding decision implied by these steps is given after the listing. 1. Let Ns denote the source node and Nd denote the destination node. 2. Let N represent the set of intermediate nodes between Ns and Nd, where N = {n0, n1, n2, …, nm}, and let ni represent the current node. 3. Let nie represent the residual energy level of the current node. 4. Let The represent the residual energy threshold. 5. Let Thsnr0 represent the SNR threshold for real-time traffic. 6. Let Thsnr1 represent the SNR threshold for normal traffic. 7. Node Ns sets the traffic priority value in the priority field of the RREQ packet and broadcasts it to its neighbors. 8. Node ni receives the RREQ packet, where ni ∈ N.
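The forwarding decisions in steps 1-8 can be rendered as a small sketch. It is an illustration of the threshold checks described above, not the authors' NS2 implementation; the node structure, the helper names, and the numerical threshold values are hypothetical.

```python
from dataclasses import dataclass

HIGH_PRIORITY, LOW_PRIORITY = 0, 1

@dataclass
class Node:
    residual_energy: float   # current energy level of the node (nie)
    snr: float               # measured link SNR at the node

TH_E = 20.0                                           # energy threshold The (hypothetical)
TH_SNR = {HIGH_PRIORITY: 25.0, LOW_PRIORITY: 15.0}    # Thsnr0 / Thsnr1 (hypothetical)

def forward_rreq(node: Node) -> bool:
    # Steps 3-4 and 8: an intermediate node makes a reverse path entry and
    # rebroadcasts the RREQ only if its residual energy exceeds the threshold;
    # otherwise the packet is dropped.
    return node.residual_energy > TH_E

def forward_rrep(node: Node, priority: int) -> bool:
    # On the reverse path, a node unicasts the RREP only if both its residual
    # energy and its SNR (checked against the threshold matching the traffic
    # priority carried in the packet) are sufficient.
    return node.residual_energy > TH_E and node.snr > TH_SNR[priority]

n4 = Node(residual_energy=35.0, snr=28.0)
print(forward_rreq(n4), forward_rrep(n4, HIGH_PRIORITY))   # True True
```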
IV. PERFORMANCE ANALYSIS The proposed EPLAODV routing protocol is implemented in the open source network simulator NS2 [19], which helps in evaluating the performance of MANET routing protocols. The proposed EPLAODV is compared with AODV. To measure the performance of the proposed EPLAODV routing protocol, we ran different emergency scenarios (simulations) in the NS2 environment with varying traffic load and simulation time. The simulation results show that the proposed EPLAODV performs better than the original AODV. The simulation parameters are presented in Table II. The simulation results are benchmarked against the AODV routing protocol, keeping in mind that energy performance is crucial in MANETs because node mobility drains node energy. The results show that the proposed EPLAODV handles node mobility efficiently and prolongs node energy compared to AODV. Figure 3 shows the simulation results of packet delivery ratio versus simulation time; EPLAODV and AODV are compared for high- and low-priority traffic. Initially, at 200 s, the packet delivery ratio is low because fewer data packets are generated and fewer packets reach their destination. As the simulation time increases, the packet delivery ratio also increases. In conclusion, EPLAODV performs better for both high-priority and low-priority traffic. Figure 4 shows the simulation results of routing overhead versus simulation time for the proposed EPLAODV and AODV. As the simulation time increases, EPLAODV selects more stable routes on the basis of link quality; the protocol generates a smaller number of control packets, which causes less routing overhead. It can be seen that EPLAODV performs better than AODV. Figure 5 shows the simulation results of network energy consumption versus simulation time for the proposed EPLAODV and AODV. Initially, at 200 s, the network consumes less energy. As the simulation time increases, more control and data packets are generated, causing more energy consumption; EPLAODV discovers energy-efficient and stable routes, which minimizes the flooding of control packets and the energy consumption. Figure 6 shows the simulation results of end-to-end delay versus simulation time; the proposed EPLAODV and AODV are compared for both high-priority and low-priority traffic. Initially, at 200 s, the end-to-end delay is maximal because more packets are generated, due to congestion or link breakage. As the simulation time increases, the end-to-end delay decreases, because EPLAODV selects stable paths based on link quality, which helps to avoid link breakage and traffic congestion. The proposed EPLAODV performs better for both high-priority and low-priority traffic. Figure 7 shows the simulation results of packet delivery ratio versus traffic load for high- and low-priority traffic. Initially, the packet delivery ratio is maximal; as the traffic load increases, the network becomes more congested, which reduces the packet delivery ratio.
The proposed protocol EPLAODV performs better than AODV in both cases. Figure 8 shows the simulation results of routing overhead versus traffic load. Initially, the routing overhead of the network is minimal; as the traffic load increases, the congestion of the network also increases, which causes link breakage and increased routing overhead. Again, the proposed EPLAODV routing protocol performs better than AODV. Figure 9 shows the simulation results of network energy consumption versus traffic load. Initially, with minimal traffic load, the energy consumption of the network is low; as the traffic load increases, the network becomes more congested. The AODV routing protocol generates more control and data packets, causing more energy consumption, whereas the proposed EPLAODV protocol reduces the broadcasting of control packets in order to minimize energy consumption. The proposed EPLAODV routing protocol performs better than AODV. Figure 10 shows the simulation results of end-to-end delay versus traffic load; EPLAODV and AODV are compared for both high- and low-priority traffic. Initially, with minimal traffic load, the end-to-end delay is low; as the traffic load increases, the congestion of the network also increases, which causes more end-to-end delay. The proposed EPLAODV performs better for both high-priority and low-priority traffic. V. CONCLUSION An energy-, link-, and traffic-aware extension of the AODV routing protocol, named EPLAODV, was introduced in this paper. EPLAODV selects energy-efficient and link-aware routes for time-critical and normal traffic. Several emergency scenarios were simulated by varying the simulation time and the network traffic load. The simulation results show that the proposed scheme performs better than traditional AODV for real-time and textual data in terms of packet delivery ratio, energy consumption, routing overhead, and end-to-end delay in all simulated scenarios.
3,546.2
2020-02-03T00:00:00.000
[ "Computer Science" ]
Corrosion Behavior of Duplex and Lean Duplex Stainless Steels in Pulp Mill The cyclic potentiodynamic polarization behavior of a duplex stainless steel (DSS) and a lean duplex stainless steel (LDSS) was studied in white and green liquors from a pulp processing plant. The corrosion behavior in industrial and synthetic liquors was compared. The polarization curves of the duplex steels in synthetic white liquor were shifted to lower potentials and higher current densities relative to the steel in industrial white liquor, which proved to be less aggressive to the duplex steel. The duplex steels also showed the highest values of transpassive potential in industrial white liquor compared to synthetic liquor. Cold and hot rolled duplex and lean duplex steels in green liquor showed the lowest values of transpassive potential. Introduction Chemical pulps are made by cooking (digesting) the raw materials, using the Kraft (sulfate) and sulfite processes. The Kraft process is the dominant chemical pulping process worldwide. In the Kraft pulp process, the active cooking chemicals (white liquor) are mainly sodium hydroxide (NaOH) and sodium sulfide (Na2S) [1,2]; the operation takes place at high temperature (about 170 °C), and pressures of 6.5 to 8.5 bar are used for delignification during the chip cooking cycle [3][4][5]. The chemical recovery cycle generates a byproduct extracted from the pulping of wood in the digester, called black liquor, which contains smaller amounts of wood extractives and residual inorganic pulping salts [6,7]. The combustion of the strong black liquor converts the recovered inorganic chemicals to a smelt, which is dissolved in water to give the green liquor, composed mainly of sodium carbonate (Na2CO3) and sodium sulfide (Na2S) generated during the liquor recovery cycle. The green liquor is causticized to regenerate the white liquor [8][9][10][11][12][13]. The scheme of the main operations of the chemical recovery cycle is shown in Figure 1. Moreover, the white liquor is considered the most aggressive of the alkaline pulping liquors [14]. Duplex stainless steel (DSS) has been used as a material of construction in the pulp and paper industry for the past 35 years due to its excellent corrosion resistance and high mechanical strength, which allows a thickness reduction in equipment. Lean DSS (LDSS) is a DSS with lower contents of molybdenum and nickel and thus a lower cost; nitrogen addition is used in LDSS to provide the austenite content in alloys with a lower nickel concentration [15][16][17]. Currently, the materials commonly selected for the construction of pulp digesters are lean duplex (UNS S32304) or standard duplex (UNS S32205) steels; older facilities were constructed from AISI 316 austenitic stainless steel coupled to an anodic protection system [18][19][20][21][22]. DSSs combine a low nickel content with high mechanical strength, which makes them an efficient and cost-effective alternative to austenitic stainless steel grades.
In the pulp and paper industry, carbon steel pulp mill equipment such as digesters and storage vessels has shown general corrosion and stress corrosion cracking [14,23][24][25]. Singh and Anaya evaluated the corrosion behavior of a carbon steel (A516-Gr 70), AISI 304 and 316 stainless steels, and two DSSs (UNS S32304, UNS S32205) in black liquors produced by pulping five wood species in a synthetic liquor. The potentiodynamic polarization tests were performed at room temperature, with and without the addition of catechols. The organic compounds in the black liquor were found to play a major role in steel corrosion, and the DSSs showed a high corrosion resistance in all tested black liquors [22]. The corrosion resistance of DSS in white liquor has been widely studied [14,21][22][23]. The corrosion properties and electrochemical behavior of different DSSs (UNS S32304, UNS S32205, and UNS S32101) in high-pH caustic and alkaline sulfide solutions at different temperatures were investigated by Bhattacharya and Singh [14], who studied the role of the alloying elements of DSS in these environments by analyzing the polarization behavior of pure Fe, Cr, Ni, and Mo, and of DSS UNS S32205. The increase in the corrosion rates of DSS with sulfide addition can be related to the presence of sulfur in the passive layer and the formation of metal-sulfur compounds that are less protective than the oxide film [14]. The S32205 steel was found to be the most susceptible to general corrosion, and the S32304 steel had the lowest corrosion rates in the sulfide caustic environment; a more stable passive film containing magnetite and awaruite (FeNi3) developed on the UNS S32304 steel, resulting in lower corrosion rates [14]. Wensley and Champagne [26] evaluated the effect of sulfide concentration on the corrosion resistance of carbon steel specimens with different silicon contents (low-silicon A285-Grade C and medium-silicon A516-Grade 70), an austenitic stainless steel (AISI 304L), and two DSSs (UNS S32304 and UNS S32205) in white, green, weak black, strong black, and flash tank liquors. All of the stainless steels (UNS S30403, UNS S32304, and UNS S32205) were highly resistant to corrosion in all the liquors tested, regardless of the sulfide content [23,26,27]. The aim of this work is to evaluate the corrosion behavior of DSS in two processing conditions, hot and cold rolled, in liquors provided by a pulp and paper plant, and also to compare the corrosion resistance of DSS and LDSS in synthetic and industrial white liquor (WL). To the best of our knowledge, the corrosion behavior of LDSS and DSS in industrial liquors from the pulp and paper industry has not been reported in the literature, and the mechanisms involved are not fully understood. Materials and Methods The duplex stainless steels were supplied by Aperam South America (Brazil) in hot rolled and cold rolled conditions, and the chemical compositions of the steels are shown in Table 1. The DSS studied is S31803, with 5 wt.% Ni and 2.6 wt.% Mo, and the LDSS is S32304, with lower contents of Ni (4 wt.%)
and Mo (0.3 wt.%). The steels were examined as received: hot rolled coils annealed at 1075 ± 25 °C, with a thickness of 4 mm, and cold rolled coils annealed at 1070 ± 25 °C, with a thickness of 1.8 mm. Table 2 shows the mechanical properties of the cold rolled and hot rolled steels, as provided by the manufacturer [28]. The metallographic analysis was performed after etching the steel samples in modified Beraha reagent [80 mL of distilled and deionized water, 20 mL of hydrochloric acid (HCl), and 1 g of potassium metabisulfite (K2S2O5)]; 2 g of ammonium bifluoride (NH4HF2) were added to this stock solution just before the etching [29]. The microstructure analyses were carried out using an optical light microscope (LOM, Leitz Metalloplan) with Image Pro software. The steel sheets were cut to dimensions of 1 cm × 1 cm, with the exposed surface being the rolling surface. The samples were embedded in epoxy resin, and the electrical connections necessary for the tests were made by welding a copper wire to the back of the sample, which was not in contact with the electrolyte. The samples were wet ground with SiC abrasive papers to 1200 grit, polished using 3 µm, 1 µm, and 0.25 µm diamond paste, and ultrasonically cleaned in ethanol. In order to avoid crevices, the samples were masked with black wax (Apiezon Wax W) [30] at least 12 h before testing and were stored in a desiccator; the black wax was dissolved in trichloroethylene to assist application. The white and green liquors were supplied by a North American pulp and paper company. The WL contained sodium hydroxide (NaOH) and sodium sulfide (Na2S) at pH > 13; the green liquor (GL) aqueous solution contained sodium carbonate (Na2CO3) and sodium sulfide (Na2S) at pH > 10. The electrochemical tests were also performed in a synthetic white liquor (SWL) composed of 150 g/L of NaOH and 153.8 g/L of Na2S·9H2O (3.75 mol/L NaOH + 0.64 mol/L Na2S) [14,31]. All sodium components were considered on the basis of the equivalent amount of sodium oxide (Na2O), with definitions based on those of the Technical Association of the Pulp and Paper Industry (TAPPI): the effective alkali (EA) is NaOH + 1/2 Na2S, expressed as Na2O; the total alkali is NaOH + Na2S + Na2CO3 + 1/2 Na2CO3, all expressed as Na2O [22]. The sulfide-containing caustic solutions used were: WL, consisting of 88 g/L EA as Na2O, and GL, consisting of 117 g/L total titratable alkali (TTA) as Na2O. The GL and WL originated from the pulping of pine and hardwood mixes. A Gamry Reference 600 potentiostat was used for the electrochemical tests. The polarization curves were collected by scanning in the anodic direction at 0.167 mV/s from the corrosion potential (Ecorr). Transpassivity was observed in all cases, and the scan was reversed when the current density reached 3 mA/cm². All the electrochemical measurements were repeated at least three times to ensure reproducibility. Potentiostatic polarization was performed in GL to evaluate the efficiency of the protective layer. Three potential values were chosen for the potentiostatic tests: −500 mV SCE, at the first passivation (around 10⁻⁶ A/cm²); −150 mV SCE, at the second passivation (around 10⁻⁵ A/cm²); and 0 mV SCE, in the transpassive region.
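As a quick consistency check of the synthetic white liquor recipe and the TAPPI effective-alkali convention quoted above, the following snippet converts the stated concentrations into molarities and into EA as Na2O. The molar masses are standard values; the Na2O-equivalent accounting (1/2 Na2O per NaOH, 1 Na2O per Na2S) is our reading of the EA definition, not a calculation given in the paper.

```python
# Molar masses (g/mol), standard values
M_NAOH = 40.00
M_NA2S_9H2O = 240.18   # Na2S.9H2O
M_NA2O = 61.98

c_naoh = 150.0 / M_NAOH          # 3.75 mol/L, as stated for the SWL
c_na2s = 153.8 / M_NA2S_9H2O     # ~0.64 mol/L Na2S, as stated

# Effective alkali EA = NaOH + 1/2 Na2S, expressed as Na2O:
# each NaOH supplies 1/2 Na2O equivalent, each Na2S supplies 1 Na2O equivalent.
ea_mol = 0.5 * c_naoh + 1.0 * (0.5 * c_na2s)
ea_gpl = ea_mol * M_NA2O
print(f"NaOH: {c_naoh:.2f} M, Na2S: {c_na2s:.2f} M, EA ~ {ea_gpl:.0f} g/L as Na2O")
# Under this accounting the SWL comes out near 136 g/L EA as Na2O,
# above the 88 g/L EA of the industrial WL.
```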
Steel Characterization The microstructures of the two steels, DSS and LDSS, are shown in Figure 2. Lighter austenite (γ) islands are embedded in the darker-etched ferrite (α) matrix, with no other secondary precipitates [32,33]. The cold rolled condition showed a higher yield strength, a higher tensile strength, and a lower elongation than the hot rolled condition, even though the steels were annealed at 1070 ± 25 °C after rolling. Table 3 shows the contents of austenite and ferrite obtained using the Image Pro software. Cyclic Polarization of DSS S31803 and S32304 Steels in Synthetic and in Industrial White Liquor Steel corrosion in SWL has been extensively reported in the literature [12,14,34,35], but reports on the corrosion resistance of duplex steels in industrial liquors of the pulp and paper industry are not available. The cyclic polarization curves of the S31803 duplex steel and the S32304 lean duplex steel were similar in industrial WL at room temperature, as shown in Figure 3. The polarization curves started at open circuit potentials in the range from −500 mV SCE to −300 mV SCE. The current density increased slightly with potential until about −100 mV SCE, was almost constant until about +100 mV SCE, and then increased slightly again at higher potentials. A slight but reproducible hysteresis was observed during the reverse scan for all samples. This electrochemical test was not able to distinguish differences in the corrosion behavior of DSS and LDSS, or any effect of the two processing conditions. Figure 4 shows the polarization curves of the S31803 steel in synthetic and industrial WL. The corrosion potential of the cold rolled S31803 DSS was −1.137 ± 0.005 V SCE in SWL and −0.459 ± 0.014 V SCE in the real industrial WL. Much higher current densities were observed in the synthetic liquor. The differences in behavior indicate that testing steels in synthetic liquor may predict a shorter life of the material than what would be achieved in service. The industrial WL is the final product of the causticizing system and contains more chemical compounds than the synthetic solution.
Industrial liquors are very different from synthetic liquors: in the industrial process, the causticizing reactions that form sodium hydroxide are reversible, which causes a variable amount of sodium carbonate to remain in the liquor. In addition, the pulp production process is cyclic, with accumulation of minor elements coming from the wood, the water, and compounds present in the lime. In the industrial process, it is not necessary to determine the exact chemical composition of the liquor, but rather to define an operating parameter, which is the total alkali [13,36]. The WL was regenerated in the process, as the green liquor was causticized to produce WL, which has a complex chemical composition [27]. These extra components seem to inhibit the corrosion of DSS. The S31803 DSS in SWL showed two regions of almost constant current density; in contrast, in industrial WL only one passivation region was obtained. A similar behavior has been reported by Bhattacharya and Singh [14] for S32205 DSS in SWL. They suggested that the current density increases in the beginning due to the dissolution of iron, and that the primary passivation then results from the formation of nickel sulfide (Ni2S), with Ni and Cr also contributing to the first passivation layer [14]. The current density above the primary passivation increases due to the oxidation of sulfur species, corresponding to the S²⁻/SO₄²⁻ oxidation reaction, and to the dissolution of Cr in the form of CrO₄²⁻ ions. A region of constant current density of 10⁻³ A/cm² was observed. Finally, the current density increases due to the oxygen evolution reaction [22]. It is reasonable to assume that this explanation is also valid for the results obtained in this study. The corrosion potential of DSS in industrial WL at 60 °C was less noble than the value at room temperature (RT), as shown in Figure 5, which presents the cyclic potentiodynamic polarization curves of cold and hot rolled S31803 DSS at the two temperatures. The passive current density was higher at 60 °C (10⁻⁵ A/cm²) than at RT (10⁻⁶ A/cm²), indicating a less protective passive film in the sulfide-containing caustic solution at the higher temperature. At about −150 mV SCE, the current density of each alloy at 60 °C increased above 10⁻⁵ A/cm². Temperature plays an important role in the corrosion of duplex steels in WL, causing an increase of the current densities and a decrease of the open circuit potential. Cyclic Polarization of S31803 and S32304 DSS in Green Liquor The polarization curves of the DSSs tested in GL, containing sodium carbonate and sodium sulfide at pH > 10, indicate a similar behavior for the lean and standard duplex stainless steels, as shown in Figure 6. Two passivation regions were identified, the first one exhibiting current densities between 10⁻⁶ and 10⁻⁵ A/cm². The duplex and lean duplex stainless steels showed a positive hysteresis in the reverse scans, indicating a possibility of localized corrosion. SEM and EDS analyses identified inclusions enriched in aluminum, magnesium, oxygen, manganese, and sulfur on the surface of the steels. It has been reported that MgO, Al2O3, and MnS inclusions on the surface of S32750 super duplex steel are preferred sites for pit initiation [31]. In this case, pits were not observed on the steel surfaces, but these inclusions are cathodic regions, and the steel near the inclusions can act as an anode (Figure 7).
Figure 6 shows repassivation occurring at a lower potential. The corrosion behavior of the standard duplex S31803 steel and of the lean duplex steel was similar in GL. The behavior of platinum was compared with that of the DSSs in these electrolytes, as shown in Figure 6. The current densities associated with platinum are higher than those exhibited by the DSSs, indicating that the platinum surface acts as a catalyst for the redox reactions. The results for the DSSs, on the other hand, were completely different from those for platinum, and a corrosive process occurred on the surface of the DSSs, as shown in Figure 6. Potentiostatic polarization was performed to evaluate the behavior of the current density with time, and thus the efficiency of the protective layer in GL, at the three potentials noted above: −500 mV SCE, at the first passivation (around 10⁻⁶ A/cm²); −150 mV SCE, at the second passivation (around 10⁻⁵ A/cm²); and 0 mV SCE, in the transpassive region. The potential of −0.5 V SCE was chosen to investigate the first passivation region shown in Figure 6. At the applied potential of −0.5 V SCE, the current density remained constant and low with time for the steels in the cold rolled condition and showed a slightly increasing trend for the steels in the hot rolled condition, as shown in Figure 8. The cold rolled steels showed the lowest current densities in green liquor, while the hot rolled S31803 steel showed the highest current density in the potentiostatic test. The cold and hot rolled steels were annealed at 1070 °C and 1075 °C, respectively; in the cold rolled steels, the microstructure is finer, as shown in Figure 9. The hot rolled S31803 steel showed a passivation current density one order of magnitude higher (10⁻⁵ A/cm²) than those of the cold rolled S31803 and of the cold and hot rolled S32304 steels (10⁻⁶ A/cm²); the hot rolled S31803 steel thus exhibited a less protective layer than the other DSSs in GL. In order to reveal the grain boundaries in the austenite and ferrite of the DSS, electrolytic etching was used with a 60% HNO3 solution at an etching potential of 2.2 V for 3 minutes. The cold rolled steels exhibited finer grains than the hot rolled steels (Figure 9), which explains their higher tensile strength, shown in Table 2: grain boundaries are barriers to the movement of dislocations, and steels with finer grains have a higher density of grain boundaries, which inhibits dislocation movement and contributes to enhancing the tensile strength of the steel. The finding that the hot rolled S31803 steel exhibited a less protective layer than the other DSSs in GL can also be explained by its lower density of grain boundaries compared with the cold rolled condition; finer grains contribute to improving the protective ability of the passive layer of stainless steels [37].
The cyclic polarization curves (Figure 6) showed a possible second passivation region at a current density of 10⁻⁴ A/cm². At the applied potential of −150 mV SCE, the current density remained almost constant for the hot rolled S31803 and the cold rolled S32304 steels (Figure 10). The second passivation layer of the hot rolled S32304 and of the cold rolled S31803 steels was less protective than that of the hot rolled S31803 and of the cold rolled S32304 steels. As is evident in Figure 11, the current density increased with time when the potential of 0 mV SCE was applied, for all the steels studied, which indicates that a corrosive process occurs at a current density on the order of 10⁻³ A/cm². Temperature plays an important role in steel corrosion in WL by decreasing the transpassive potential as the temperature increases (Figure 12). The WL was the least aggressive liquor for the duplex steels, which showed their highest values of transpassive potential in this medium. In addition, the cold and hot rolled duplex and lean duplex steels showed their lowest values of transpassive potential in green liquor. Considering the passivation current density as a parameter to evaluate the corrosion behavior of the steels, the GL was the most aggressive medium. It is well known that WL is the more aggressive liquor due to its alkalinity [14,38]; in this work, however, the sulfide precipitation (Figure 13) plays the main role in the corrosive process, increasing the corrosion and decreasing the transpassive potential. As shown in Figure 14, the precipitate was not a corrosion product of the steel, given the absence of Fe, Cr, and Ni in its chemical composition; sodium, sulfur, and oxygen were found in the chemical composition of the precipitate, which is probably a sulfide [11]. Conclusions The polarization curves of the S31803 steels in SWL were shifted to lower potentials and higher current densities relative to the steel in industrial WL, which proved to be less aggressive to the duplex steel. The average transpassive potentials showed that temperature plays an important role in DSS corrosion in WL, decreasing E_transpassive as the temperature increases. WL was the least aggressive liquor for the duplex steels, which showed their highest values of transpassive potential in this medium. The cold and hot rolled duplex and lean duplex steels showed their lowest values of transpassive potential in green liquor.
Figure 1. Schematic representation of the paper and pulp mill (Kraft process).
Figure 4. Cyclic potentiodynamic polarization curves for cold rolled S31803 steels in SWL and industrial WL at room temperature.
Figure 6. Cyclic potentiodynamic polarization curves of cold and hot rolled S31803 and S32304 steels, and of a Pt wire, in green liquor at room temperature.
Figure 8. (a) Potentiostatic curves at −500 mV SCE of cold and hot rolled S31803 and S32304 steels in green liquor at room temperature; (b) S31803 cold rolled surface after the potentiostatic test at −500 mV SCE.
Figure 10. Potentiostatic curves at −150 mV SCE of cold and hot rolled S31803 and S32304 steels in green liquor at room temperature.
Figure 11. (a) Potentiostatic curves at 0 mV SCE of cold and hot rolled S31803 and S32304 steels in GL at room temperature; (b) S32304 cold rolled surface after the potentiostatic test at 0 mV SCE.
Figure 13. SEM micrographs after the cyclic potentiodynamic polarization showing carbonate precipitates on the hot rolled S31803 duplex steel in green liquor.
Figure 14. SEM micrographs after the cyclic potentiodynamic polarization showing carbonate precipitates on the hot rolled S31803 duplex steel in green liquor.
Table 2. Mechanical properties of the duplex steels.
Figure 2. Microstructure of the lean duplex and duplex stainless steels.
Table 3. Austenite and ferrite contents obtained using the Image Pro software.
Figure 3. Cyclic polarization curves of S31803 and S32304 steels in industrial WL at room temperature.
4,947.6
2017-12-18T00:00:00.000
[ "Materials Science" ]
Effect of Dengeling on Bending Fatigue Behaviour of Al Alloy 7050 and Comparison with Milling and Shot Peening Dengeling is a new surface mechanical treatment developed as an alternative to the shot peening method used for enhancing the fatigue resistance of metallic materials. In this work, Dengeling is compared with milling and shot peening with regard to their effect on the bending fatigue behavior of the aluminium alloy AA 7050 T7651. In addition, the influence of certain Dengeling process parameters on the fatigue resistance is studied. Flat bar samples were milled and then subjected to the respective surface treatments. The induced surface integrity changes, namely residual stresses and surface deformation, were characterized by X-ray diffraction measurements. Four-point bending fatigue tests with a stress ratio of R = 0.1 were performed. The results show that all the surface treatments in general improve the fatigue performance of the milled samples, but the samples treated by the Dengeling process with an Almen intensity similar to that of the shot peening treatment perform best. Introduction Surface mechanical treatments, especially shot peening, have been developed and are widely used by various industries, including automotive and aerospace, to enhance the fatigue properties of metallic components [1,2]. The increased fatigue resistance is attributed to beneficial compressive residual stresses and strain hardening induced in a surface layer by the impact of hard shots. Dengeling is a new surface mechanical treatment developed as an alternative method to shot peening. The treatment is carried out by striking the metal surface with a hard indenter to induce surface plastic deformation and compressive residual stresses. Dengeling can be performed on the same machine that is used for machining the component, and the operator can control exactly the location and magnitude of the residual stress. In a previous study [3], it was shown that significant compressive residual stresses can be introduced to a large depth in aluminium alloy 7050 by Dengeling. In the current project, the potential of Dengeling as an effective way to improve the fatigue resistance of milled aluminium alloys was investigated. The fatigue behavior of Dengeling-treated AA7050 was studied by four-point bending fatigue testing, and the results were interpreted with respect to the induced changes in residual stresses. A comparison with shot peening was also made. Experimental Details Rectangular bars of 10 mm × 10 mm × 80 mm, with a 1 mm chamfer on both sides of the surface to be tested in bending fatigue, were machined from AA7050 T7651. In addition to the milled samples used as reference, four other groups were treated to different surface conditions, as listed in Table 1. For shot peening (T6), a process common for this alloy was used, to compare with a Dengeling treatment (T1) of similar Almen intensity. Two other groups (T2 and T3) were selected to investigate the effect of the Dengeling process parameters. The dimple overlap is calculated as the percentage of diameter overlap between two neighboring dimples.
Table 1. Dengeling and shot peening process parameters (indenter size, stroke distance, dimple size, dimple overlap, line feed direction).
T1: Dengeling — Φ3 mm, 0.2 mm, 0.215 mm, 25%, parallel
T2: Dengeling — Φ8 mm, 0.5 mm, 0.72 mm, 50%, parallel
T3: Dengeling — Φ8 mm, 0.5 mm, 0.72 mm, 0%, parallel
T6: Shot peening — shots: S230H, Φ0.59 mm; intensity: 0.2 mmA; coverage: 125%
M: Milling (reference)

The four-point bending fatigue testing was carried out with a stress ratio of 0.1 at a frequency of 15 Hz. Samples that survived more than 3×10⁶ cycles are considered runouts. Residual stresses were measured using the X-ray diffraction technique. Cr-Kα radiation was used to measure the elastic strains of the Al (311) planes. Under the assumption of a biaxial stress state, the in-plane residual stresses were calculated using the sin²ψ method with an elastic constant ½S₂ of 19.54×10⁻⁶ MPa⁻¹. In order to obtain the depth profile of the residual stress, stepwise layer removal by electrolytic polishing was employed. Fractographic analysis was performed in the SEM to identify the fatigue initiation points and possible origins. Results and Discussion Surface morphology. The surface morphology of the Dengeling (T1) and shot peening (T6) treatments of similar Almen intensity is compared in Fig. 1. A regular pattern of indents was observed for Dengeling, whereas a random pattern consisting of larger and smaller craters was found for shot peening. Some small untreated areas could also be seen on the shot peened surface, while full coverage was observed for the Dengeling group. The corresponding roughness is Ra = 2.3 ± 0.3 µm for T6 and Ra = 3.1 ± 0.8 µm for T1 measured along the line direction, and Ra = 3.1 ± 0.5 µm for the measurement along the line feed direction. For Dengeling using the larger indenter (T2 and T3), the surface impression was less obvious and marks from the milling operation were still visible. Residual stresses. The residual stress profiles for all sample conditions are presented in Fig. 2. Low residual stresses are observed in the milled sample, as shown in Fig. 2a. As expected, the shot peening process introduced a compression layer of about 0.3 mm, with a maximum compressive stress slightly over 300 MPa, and significant plastic deformation, as indicated by the diffraction peak broadening (Fig. 2b). A comparison between Figs. 2b and 2c reveals that the corresponding Dengeling process generated a thicker compression layer (about 0.4 mm), lower surface plastic deformation, and a similar maximum subsurface compressive residual stress in the axial direction, while the transverse compressive stresses were much larger. For the two Dengeling treatments using the large indenter (Figs. 2d and 2e), the compression layer thickness was about double that of shot peening; the maximum compressive stresses and the surface deformation are, however, smaller than for both shot peening and Dengeling with the small indenter. The lowest surface deformation and compressive stresses were found for the treatment with no dimple overlap (T3). Figure 1. Micrograph showing surface indents on the shot peened (left) and Dengeling (T1) (right) samples. LD: line direction; LFD: line feed direction.
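The sin²ψ evaluation referred to in the experimental details can be illustrated with a short sketch: under a biaxial stress state the lattice strain varies linearly with sin²ψ, and the in-plane stress follows from the slope of that line divided by the X-ray elastic constant ½S₂. Only the ½S₂ value is taken from the text; the strain data below are synthetic, chosen so that the result lands near the −300 MPa level reported for shot peening.

```python
import numpy as np

HALF_S2 = 19.54e-6   # X-ray elastic constant (1/2)S2, 1/MPa (from the text)

# Tilt angles psi and measured (311) lattice strains -- synthetic example data
psi_deg = np.array([0.0, 15.0, 25.0, 35.0, 45.0])
strain = np.array([1.0e-4, -2.9e-4, -9.5e-4, -1.8e-3, -2.8e-3])

# Linear fit of strain versus sin^2(psi); slope = (1/2)S2 * sigma_phi
x = np.sin(np.radians(psi_deg)) ** 2
slope, intercept = np.polyfit(x, strain, 1)
sigma_phi = slope / HALF_S2
print(f"in-plane residual stress ~ {sigma_phi:.0f} MPa")   # negative = compressive
```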
Fatigue. The fatigue testing results are presented as S-N plots in Fig. 3 and 4. On the assumption of elastic loading, the maximum stress in the surface was calculated from the applied load according to standard beam-bending theory; a sketch of this calculation is given below.
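The elastic surface-stress calculation referenced above is not spelled out in the text; a minimal sketch under the usual four-point-bend assumptions (constant bending moment between the inner rollers, rectangular cross section) would look as follows. The span lengths in the example are hypothetical, since the fixture geometry is not given in the paper.

```python
def four_point_bend_surface_stress(F: float, outer_span: float,
                                   inner_span: float, b: float, h: float) -> float:
    """Maximum elastic surface stress (MPa) in a rectangular bar under
    four-point bending.

    Between the inner rollers the bending moment is constant,
    M = F * (outer_span - inner_span) / 4 (F = total applied force),
    and the surface stress follows from sigma = M * (h/2) / I with
    I = b * h**3 / 12 for a rectangular cross section.
    """
    M = F * (outer_span - inner_span) / 4.0      # N*mm
    I = b * h ** 3 / 12.0                        # mm^4
    return M * (h / 2.0) / I                     # N/mm^2 = MPa

# Hypothetical example: 10 mm x 10 mm bar (as in the paper), assumed
# 60 mm / 20 mm spans; force needed for a 363 MPa surface stress:
target, b, h = 363.0, 10.0, 10.0
F = target * (b * h ** 2 / 6.0) / ((60.0 - 20.0) / 4.0)   # N
print(round(four_point_bend_surface_stress(F, 60.0, 20.0, b, h)))  # -> 363
```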
The Dengeling group (T1) is compared with the milling group (M) in Fig. 3a. As can be seen, application of the Dengeling treatment greatly improved the fatigue performance of the milled samples: an improvement of more than one order of magnitude in fatigue life was observed in the slope region. At the lower stress of 326 MPa, both runout and failure were observed. Nonetheless, the failed Dengeling sample had a much longer life than the milled sample. A comparison at the lowest stress level is difficult, as only one milled sample was tested.

An increase of fatigue life of up to 100% in the slope region was obtained by the shot peening. The improvement is, however, much smaller than that of the Dengeling treatment, as Fig. 3b illustrates. At the lowest stress level, the two treatments seem to show similar behavior: one Dengeling sample and two shot peened samples survived 3 million loading cycles, while the remaining Dengeling samples failed with fatigue lives close to 3 million cycles. In fact, the two "runout" samples of shot peening failed shortly after loading was continued beyond 3 million cycles.

The fatigue life of the T2 samples in Fig. 4a was two to three times that of the milled samples in the upper part of the slope region. However, in the lower part of the slope region the fatigue data scattered widely, from about 1.55x10^5 to over 3x10^6 cycles. The T3 group, presented in Fig. 4b, was treated with the same parameters as T2 but with no dimple overlap. For this group, the improvement in fatigue life was marginal in the slope region and varied in the low stress region.
Fractography. Fractographic analysis reveals that surface crack initiation, preferentially at precipitates, and subsequent propagation resulted in fatigue failure of the milled samples. For the Dengeling samples, the fatigue damage always started below the surface, and the initiation site moved closer to the surface with increasing applied stress. For loading at and below 363 MPa, the crack initiation site was located about 500 to 560 µm below the surface, i.e. outside the compression zone of about 400 µm. For larger applied stresses, the fatigue cracking started inside the compression zone. It can be concluded that the Dengeling treatment successfully suppressed surface crack initiation. Examples of fracture surfaces revealing the crack initiation sites (indicated by arrows) are given in Fig. 5a for milled and Fig. 5b for Dengeling-treated samples. In spite of the surface compressive residual stresses, fatigue cracks in the shot peened group tend to start from the surface or very near the surface, see Fig. 6a. In other words, the shot peening process employed is less effective in suppressing surface crack initiation than the Dengeling treatment.

For the T2 samples, fatigue crack origins were observed mostly on the chamfer surface, see Fig. 6b, although crack initiation near the edge of the top flat surface was also observed. The treatment of the chamfer surfaces was made separately from that of the top flat surface. The chamfer surfaces are more prone to fatigue, which means that the strengthening there was not as effective as on the top flat surface. This could be due to the indenter diameter being much larger than the width of the chamfer surface. Similar to the milled group, surface crack initiation often resulted in final failure of the T3 samples. The 0% overlap means that about 21.5% of the surface was not covered by the indents. Such bare areas could become preferred sites for crack initiation.

Concluding remarks

The effect of a surface mechanical treatment on fatigue originates from the induced changes in surface integrity, especially surface roughness, strain hardening and compressive residual stresses [1]. The results in the previous section reveal that, for the Dengeling and shot peening treatments using similar Almen intensity, the magnitude of the compressive residual stresses in the fatigue loading direction is similar, while better surface roughness and a higher degree of strain hardening were found in the shot peened samples. However, the Dengeling treatment is much more effective in reinforcing the fatigue resistance in the slope region of the S-N graph. This is likely because the Dengeling treatment successfully suppressed surface initiation of fatigue cracks where the shot peening failed to do so. For the shot peened samples subjected to higher loading stresses, microcracks might exist or quickly develop from other types of surface defects, and the positive effect of shot peening can then be explained by the retardation of crack growth by the compressive residual stresses. For the Dengeling samples in the same stress region, fatigue cracks initiated below the surface, which is a much slower process, and the more significant improvement may come from the delay of fatigue crack initiation. In the low stress region, surface crack initiation may become difficult also for the shot peened samples, and therefore the behavior of the two groups is similar. It should be pointed out that the fatigue crack initiation sites tend to be located close to the edge near the chamfer, especially for the shot peened samples.
Dengeling treatments using the large indenter generated a much deeper compression zone. However, as fatigue cracks often initiate from the weaker chamfer surface in T2 or from untreated surface areas in T3, both treatments are less effective, and the fatigue data are scattered in the low stress region.

Large intermetallic particles are common in the alloy. Those located at or near the surface may result in microcracks during shot peening or Dengeling. EDS analysis of crack initiation sites also reveals such precipitates at the crack origins in a number of failed samples in all conditions: milling, shot peening and Dengeling. These particles and their distribution could be responsible for the scattered data near the runout stress region.

Figure 3. Comparison of Dengeling (T1) with milling (M) (a) and with shot peening (SP) (b). The lines serve as a guide to the eye.

Figure 5. Crack initiation from (a) the surface of an M sample that failed at 363 MPa and (b) the subsurface (about 560 µm below the surface) of a T1 sample that failed at 326 MPa. The insets reveal the respective crack initiation sites.

Figure 6. Surface crack initiation in (a) a T6 sample that failed at 326 MPa and (b) a T2 sample that failed at 363 MPa. The insets show the respective fatigue initiation sites at the surface.
3,890.6
2018-09-11T00:00:00.000
[ "Materials Science", "Engineering" ]
Simulation Study of Hatch Spacing and Layer Thickness Effects on Microstructure in Laser Powder Bed Fusion Additive Manufacturing using a Texture-Aware Solidification Potts Model

Microstructure control in laser powder bed fusion additive manufacturing processes is a topic of major interest because of the submillimeter length scale at which the manufacturing process occurs. The ability to control the process at the melt pool scale allows for microstructure control that few other manufacturing techniques can match. The majority of work on microstructure control has focused on altering laser parameters to control solidification conditions (R.R. Dehoff, M.M. Kirka, W.J. Sames, H. Bilheux, A.S. Tremsin, L.E. Lowe, and S.S. Babu, Site Specific Control of Crystallographic Grain Orientation through Electron Beam Additive Manufacturing, Mater. Sci. Technol., 2014, 31(8), p 931–938; R. Shi, S.A. Khairallah, T.T. Roehling, T.W. Heo, J.T. McKeown, and M.J. Matthews, Microstructural Control in Metal Laser Powder Bed Fusion Additive Manufacturing Using Laser Beam Shaping Strategy, Acta Mater., 2020, 184, p 284–305, https://doi.org/10.1016/j.actamat.2019.11.053). Other machine parameters, besides the laser parameters, have also been shown to affect the microstructure of AM parts (N. Nadammal, S. Cabeza, T. Mishurova, T. Thiede, A. Kromm, C. Seyfert, L. Farahbod, C. Haberland, J.A. Schneider, P.D. Portella, and G. Bruno, Effect of Hatch Length on the Development of Microstructure, Texture and Residual Stresses in Selective Laser Melted Superalloy Inconel 718, Mater. Des., 2017, 134, p 139–150, https://doi.org/10.1016/j.matdes.2017.08.049; F. Geiger, K. Kunze, and T. Etter, Tailoring the Texture of IN738LC Processed by Selective Laser Melting (SLM) by Specific Scanning Strategies, Mater. Sci. Eng. A, 2016, 661, p 240–246, https://doi.org/10.1016/j.msea.2016.03.036). We propose an investigation of the effects of hatch spacing and layer thickness on microstructure development in laser powder bed fusion additive manufacturing. A Monte Carlo Potts model with textured solidification capabilities is used to study the effects of these parameters for a unidirectional scan strategy. The simulation results reveal substantial changes in grain morphology as well as texture. Additionally, EVP-FFT crystal plasticity simulations were performed to evaluate the effect of the microstructural shifts on mechanical response. The conclusions from this work elucidate the effects of these parameters on part microstructure as predicted by the texture-aware solidification Potts model and inform understanding of how bulk texture is predicted by the simulation approach.

Introduction

Laser powder bed fusion (LPBF) additive manufacturing (AM) is a manufacturing technique that offers a variety of advantages over other, more traditional, manufacturing techniques. The technique works in a layer-by-layer fashion to deposit material and build up a 3D metal part. First, a computer-aided design (CAD) 3D model of a part is sliced into a stack of many layers. These layers are used as the blueprint for material deposition. In LPBF AM, the material is deposited via the melting of a powder layer using a laser. For each layer of the sliced CAD file, a thin layer of fine metal powder is deposited. A slice from the CAD file is overlaid on top of the powder layer, and the laser is used to melt the powder in the regions defined by the CAD slice.
This technique allows complex parts to be produced that cannot easily be manufactured via subtractive machining or casting. Parts with lattice structures or complex internal geometries are classic examples. The technology also lends itself to low-volume part production and rapid prototyping. The technology has encountered a variety of challenges that have occupied a large portion of research efforts. Porosity stemming from keyholing (Ref 5, 6), lack of fusion between subsequent melt pool passes and layers (Ref 7), and residual porosity from the powder have been identified as serious issues with the manufacturing technique (Ref 8). Additionally, hot cracking during fabrication has been identified as a concern with regard to the successful fabrication of LPBF AM parts (Ref 9, 10). Significant progress has been made in addressing these defects.

One largely unrealized capability offered by LPBF and other AM techniques is that of microstructure control, which includes at least grain morphology, orientation, phase structure, and dislocation content. As microstructure directly impacts mechanical properties, microstructure control is of interest in virtually every manufacturing technique. For example, the columnar-to-equiaxed transition in casting has been a subject of study for many decades (Ref 11). LPBF AM is of interest in this respect because a small melt pool is employed to manufacture these parts. This means that the operators of an LPBF AM machine are making decisions about the part microstructure at a scale on the order of 100 µm. Accordingly, LPBF microstructures are complex and change dramatically depending on the build parameters employed to fabricate the parts (Ref 12). Many reports on LPBF AM microstructure and microstructure control are already available (Ref 3, 13-16). Two important components of microstructure are grain morphology and crystallographic texture, and these are often referenced in many of these works. This work focuses on these two microstructural features and discusses the results with reference to relevant work.

Both Narra and Gockel investigated controlling the prior-beta grain size in Ti64 parts made using electron beam powder bed fusion (Ref 17, 18). They illustrated the use of a constant melt pool cross section to drive a constant beta grain size. Scaling the melt pool size scaled the prior-beta grain size in both full parts and thin walls. The previously mentioned studies clearly show that altering laser parameters in specific regions can be used to vary the associated microstructures. Despite the promising preliminary work cited above, full control over microstructures remains challenging. Clearly, this is a consequence of the complex nature of microstructure development in AM and the large number of processing parameters in AM. Geiger et al. showed how the scan strategy can also drastically shift the texture in IN738C LPBF parts (Ref 4). Changes in the rotation between layers and the rotation of the scan strategy with respect to the part can dramatically change the texture of the part. Andreau et al. studied texture development in SS316L produced via LPBF AM (Ref 14). The authors were able to draw conclusions about the texture development by considering the overlap between subsequent melt pool passes, and they clearly observed the development of texture at the melt pool scale. This highlights the importance of considering melt pool interactions with previously deposited microstructure at the melt pool scale when considering AM texture and grain structure.
Attard et al. investigated a variety of build parameters and their effect on grain structure and texture for a few scan strategies, including the island scan strategy (Ref 12). They observed dramatic shifts in texture and grain morphology depending on the parameters selected. They then illustrated how this concept could be used in the fabrication of a turbine blade by imparting a desired microstructure onto specific regions of the part. The complex nature of the microstructures and the large number of parameters that influence the final microstructure (power, velocity, hatch spacing, layer thickness, scan strategy, material, etc.) mean that interpreting the microstructures from 2D cross sections after the build can be quite difficult. For example, there are frequent references to nucleation and equiaxed grain shape, but it is clear that, in the absence of serial sectioning, isolated 2D cross sections are not sufficient to make unambiguous claims about grain structure (Ref 19). Simulation work has shown that grains can appear equiaxed depending on the plane the cross section is made in, despite having a distinctly non-equiaxed shape (Ref 20). Simulation studies of AM microstructure offer a partial solution. A complete understanding of the assumptions used to simulate microstructure development and the ability to evaluate full 3D synthetic microstructures can resolve confusion about the development of grain shapes and texture in LPBF AM parts.

A variety of simulation methods have been employed to generate synthetic AM microstructures. Cellular automata have been frequently employed, as a natural follow-on from their application to modeling solidification prior to the advent of metals AM (Ref 21-23). We recently showed that adding a misorientation-dependent mobility function to a Monte Carlo Potts model enabled simulation of texture evolution in Inconel 718 parts fabricated via LPBF AM for a variety of laser parameters (Ref 20). This computationally efficient simulation approach offers the potential to further elucidate microstructure development in AM parts via computationally efficient production of bulk AM microstructures. However, as the simulation approach is still in its infancy, exploration of its predictive capabilities and limitations is still necessary. We explore the effects of changing hatch spacing and layer thickness within the texture-aware solidification Potts model to understand the effect of these variables on the synthetic microstructures generated by this technique. We quantify both the grain morphology and the crystallographic texture, and we characterize the mechanical response of the microstructures via EVP-FFT micromechanical simulation. This study clarifies what the modeling approach, in its current form, predicts for the effect of hatch spacing and layer thickness on microstructure development during LPBF AM builds. The conclusions reveal potentially necessary modifications to the textured solidification Potts model as well as potential strategies for microstructure control in LPBF AM parts.

Methods

The texture-aware solidification (TS) Potts model approach is derived from the classical grain growth Monte Carlo (MC) Potts model (Ref 24-26) and from the use of Potts models for the simulation of AM parts by Rodgers et al. (Ref 27, 28). The grain growth Potts model is an energy minimization approach that is able to accurately simulate curvature-driven grain growth (Ref 24).
To visualize this simulation approach, consider a square 2D lattice of sites. Each site is assigned a spin value (grain ID), and each region of contiguous sites with the same spin value is considered a grain. At each timestep in an MC Potts model grain growth simulation, n switch attempts are made, where n is the total number of sites in the simulation volume. A switch attempt is when one site attempts to flip from one spin value to another. An energy calculation is made before and after the switch is made for a particular site (Eq 1),

E = Σ_i Σ_j^NN J (1 − δ(S_i, S_j))   (Eq 1)

where E is the total system energy, NN is the number of neighbors of the candidate site, J is the Hamiltonian coefficient determining the energy between sites i (S_i) and j (S_j), and δ is the Kronecker delta. This energy calculation can be understood to be essentially the number of unlike neighbors in the system: the more sites with neighbors of different spin, the higher the total system energy. Comparing the two energy calculations determines whether the switch would increase or decrease the system energy (Eq 2). From there, the switch is either accepted or rejected via Eq 3,

W_M = M if ΔE ≤ 0;  W_M = M exp(−ΔE / (k_B T_sim)) if ΔE > 0   (Eq 3)

where W_M is the switching probability, M is the switching mobility, k_B is the Boltzmann constant, and T_sim is the simulation temperature. As more switches are accepted and the system evolves, the reduction in surface energy mimics grain growth. An important term in Eq 3 is M. This term essentially acts as an additional control over the rate of switching in the simulation: in Eq 3, M can be described as the probability that a switch is accepted if it decreases the system energy. This term can be tuned to control the rate of coarsening in grain growth simulations.

In order to simulate AM processes, Rodgers et al. added a moving region of random sites to the simulation volume to simulate a melt pool (Ref 27). Additionally, a decaying mobility was employed around this simulated melt pool to create a heat-affected zone (HAZ). In a similar fashion, a moving heat source is added to the standard grain growth Potts model in order to simulate a melt pool moving through the synthetic volume. The Rosenthal solution is employed for its ease of implementation and computational efficiency (Ref 29). In Eq 4 and 5, T is the physical temperature of the site, T_0 is the preheat temperature, Q_p is the laser power, κ is the absorptivity, k is the thermal conductivity, R is the radial position of the calculation site relative to the point heat source, v is the laser velocity, α is the thermal diffusivity, ξ is the position of the site in the direction of the point source movement, and y and z are the positions of the calculation site in the plane perpendicular to the point source movement direction. The inclusion of this heat source transforms Eq 3 into Eq 6, where M(T) is a temperature-dependent mobility function as defined by Eq 7. This allows the amount of grain growth to drop off as the temperature decreases, as naturally occurs in the HAZ. For the simulations performed in this study, p0 was set to 8.0x10⁻²⁵ and p1 to 0.0347. These values were selected in order to allow only small amounts of coarsening in the heat-affected zone. In order to motivate solidification in the simulated volume, all sites with temperature greater than T_m (the melting temperature) are assigned a spin value of 0 (Eq 8). A stored energy value is then associated with this spin value (Eq 9), such that the removal of these liquid sites is encouraged by minimization of the total system energy (Eq 10).
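To make the switching rule concrete, here is a minimal Python sketch of a single Metropolis-style switch attempt on a 2D lattice with a temperature-dependent mobility. Several details are our assumptions rather than the paper's: J = 1 and reduced Boltzmann units, the candidate spin drawn from a single neighbor, and the exponential form chosen for M(T), which merely reproduces the quoted p0 and p1 in giving near-zero mobility at ambient temperature and appreciable mobility near the melt.

```python
import math
import random

import numpy as np

KB = 1.0  # Boltzmann constant in reduced simulation units (assumption)

def site_energy(spins, i, j):
    """Number of unlike nearest neighbors of site (i, j): the local Potts
    contribution sum_NN J * (1 - delta(S_i, S_j)) with J = 1 and periodic BCs."""
    s = spins[i, j]
    nbrs = [spins[(i - 1) % spins.shape[0], j], spins[(i + 1) % spins.shape[0], j],
            spins[i, (j - 1) % spins.shape[1]], spins[i, (j + 1) % spins.shape[1]]]
    return sum(1 for n in nbrs if n != s)

def mobility(T, p0=8.0e-25, p1=0.0347):
    """Temperature-dependent switching mobility M(T). The exponential form is
    an assumption (the paper only quotes p0 and p1); with these constants it
    is ~0 at room temperature and O(0.01-1) near the melting point."""
    return min(1.0, p0 * math.exp(p1 * T))

def switch_attempt(spins, temperature, T_sim=0.0):
    """One Monte Carlo switch attempt: pick a site, try a neighbor's spin,
    and accept with probability M(T) (times a Boltzmann factor if dE > 0)."""
    i = random.randrange(spins.shape[0])
    j = random.randrange(spins.shape[1])
    e_before = site_energy(spins, i, j)
    old = spins[i, j]
    # candidate spin drawn from a neighboring site (a common choice)
    spins[i, j] = spins[(i + random.choice([-1, 1])) % spins.shape[0], j]
    dE = site_energy(spins, i, j) - e_before
    w = mobility(temperature[i, j])
    if dE > 0:
        w *= math.exp(-dE / (KB * T_sim)) if T_sim > 0 else 0.0
    if random.random() >= w:
        spins[i, j] = old  # reject the switch

# Tiny demo: 32x32 random spins at a uniform temperature near melting
spins = np.random.randint(0, 50, size=(32, 32))
temperature = np.full((32, 32), 1500.0)
for _ in range(32 * 32):
    switch_attempt(spins, temperature)
print("grains remaining:", len(np.unique(spins)))
```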
Simulation of solidification texture is accomplished by first assuming that the material system being simulated solidifies as a phase with a cubic crystal structure. This allows the assumption of a <001> preferred growth direction for the solidifying material. For sites switching from liquid (spin ID of 0) to solid, the switching mobility is assigned to be a function of the misorientation between the solidification direction and the nearest <001> direction of the candidate solidification switch, M(θ). This causes Eq 6 to become Eq 11. The solidification direction G is assumed to be the thermal gradient direction. Using the analytical Rosenthal solution, the thermal gradient direction can be quickly and easily calculated by taking the gradient of Eq 4. In order to find the misorientation between this gradient direction and the nearest <001> direction of a particular spin, all spins must first be assigned an orientation. This is done by assigning a set of three Euler angles to each spin value. From there, a transformation matrix (g) can be developed (Eq 12) that can be used to calculate the <001> directions of the particular orientation in the sample frame (Eq 13). The angle between the gradient direction and each of the six <001> directions can then be easily calculated (Eq 14), and the minimum is selected as the misorientation. Using this misorientation, the switching mobility is determined using the function presented in Eq 15. For the simulations performed in this study, c0 and c1 were set to 0.5 and c2 was set to 2.5. Previous work (Ref 20) had identified these values as giving the best fit between simulated and experimental microstructures. This function allows orientations with <001> directions that are well aligned with the solidification direction to grow more rapidly than other orientations, based on the local switching events as liquid sites transform to solid. This approach has been shown to reproduce solidification textures observed in casting and in LPBF AM applications (Ref 20); a sketch of the misorientation calculation is given below.

Each simulation is initialized on a 3D lattice of cuboidal sites where each site is initially assigned a random spin. The melt pool is then allowed to move through the simulation domain at a constant step size in order to simulate the AM process. For these simulations, the voxel size was set to 1 µm x 1 µm x 1 µm and each Monte Carlo step was set to 10 µs. The set scan speed for the laser parameters used in the simulation was 1.2 m/s. At this scan speed, the melt pool can be assumed to have traveled 12 µm, or 12 voxels, at each time step. At the end of each melt pool pass, once the heat source had reached a predefined position such that the melt pool completely passed through the simulated volume, the position of the melt pool in the layer was updated and the incremental movement of the melt pool was repeated. This updated position was defined by the preset hatch spacing for the simulation. The hatch spacing, as defined for this study, is the lateral offset of each melt pool pass within a single layer. Once the total number of melt pool passes in one layer reaches the desired number, the layer height is updated and the process repeats itself. This updated layer height is defined by the assigned layer thickness. For ease of interpretation of the simulation results, no rotation or shift factor was employed for the current study. For simulation of bulk AM microstructure, multiple passes and layers were simulated for each parameter set.
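The orientation bookkeeping described above condenses to a few lines of Python. The Bunge ZXZ Euler-angle convention and the exponential decay chosen for M(θ) are assumptions made for illustration; only the constants c0 = c1 = 0.5 and c2 = 2.5 come from the text.

```python
import numpy as np

C0, C1, C2 = 0.5, 0.5, 2.5  # constants quoted in the text

def bunge_matrix(phi1, Phi, phi2):
    """Transformation matrix g from Bunge (ZXZ) Euler angles in radians.
    The convention is an assumption; the paper only says 'three Euler angles'."""
    c1, s1 = np.cos(phi1), np.sin(phi1)
    c, s = np.cos(Phi), np.sin(Phi)
    c2, s2 = np.cos(phi2), np.sin(phi2)
    return np.array([
        [ c1 * c2 - s1 * s2 * c,  s1 * c2 + c1 * s2 * c, s2 * s],
        [-c1 * s2 - s1 * c2 * c, -s1 * s2 + c1 * c2 * c, c2 * s],
        [ s1 * s,                -c1 * s,                c     ]])

def min_misorientation_to_001(g, gradient):
    """Smallest angle (radians) between the thermal gradient direction and
    the six <001> directions of the orientation described by g."""
    ghat = gradient / np.linalg.norm(gradient)
    # rows of g are the crystal axes expressed in the sample frame
    cosines = np.abs(g @ ghat)  # |cos| folds +/- <001> together
    return float(np.arccos(np.clip(cosines.max(), -1.0, 1.0)))

def solidification_mobility(theta):
    """One plausible reading of Eq 15: well-aligned orientations (theta -> 0)
    switch with mobility c0 + c1 = 1, poorly aligned ones decay toward c0."""
    return C0 + C1 * np.exp(-C2 * theta)

# Example: a grain whose <001> lies 15 degrees off a vertical thermal gradient
g = bunge_matrix(0.0, np.radians(15.0), 0.0)
theta = min_misorientation_to_001(g, np.array([0.0, 0.0, 1.0]))
print(np.degrees(theta), solidification_mobility(theta))
```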
Five melt pool passes and eight layers were simulated, and the bulk microstructure was then extracted from each simulation. As mentioned previously, constant melt pool parameters were employed. These parameters determine the temperature distribution at each time step via the Rosenthal solution (Eq 4), and were selected to approximate the constant material parameters of a Nickel-based superalloy. The following values of the physical properties were assumed: thermal conductivity (k) 11.2 W·m⁻¹·K⁻¹, density (ρ) 8220 kg·m⁻³, specific heat (C_p) 650 J·kg⁻¹·K⁻¹, and melting temperature (T_m) 1573 K. The laser power (Q_p) was set to 285 W, the velocity (v) to 1.2 m/s, and the absorptivity (κ) to 0.155. To assess the effects of hatch spacing and layer thickness, 4 hatch spacings were investigated (80 µm, 90 µm, 100 µm, and 110 µm) and 3 layer thicknesses were investigated at each of the hatch spacings (30 µm, 40 µm, and 50 µm). The bulk microstructure of each simulation measures 150 µm in the scan direction, 400 µm in the build direction, and between 320 µm and 440 µm in the transverse direction, depending on the hatch spacing. The model also assumes that no nucleation occurs at the selected solidification parameters. Additionally, the model does not include undercooling in the calculation of the mobility, which can be considered analogous to the solidification front velocity. This is justified by assuming large undercoolings and correspondingly high solidification velocities, such that the velocities of the dendrite tips are essentially equal to the melt pool velocity selected for the simulation.

Synthetic Microstructure Results and Discussion

The bulk microstructures as predicted by the TS Potts model simulation approach for each hatch spacing (HS) and layer thickness (LT) combination are presented in Fig. 1. These bulk microstructure simulations each took 7200 Monte Carlo steps to complete. They were run in parallel on 840 cores, and each took between 140 and 150 CPU hours to complete, depending on the size of the simulation. A significant variation in microstructure morphology is evident. The low hatch spacing and low layer thickness microstructures exhibit large columnar grains, with some grains spanning the height of the simulation volume. These results are consistent with experimental microstructures reported in the literature. Andreau et al. reported microstructures in 316L parts fabricated using the same simplified scan strategy as was simulated in this work (Ref 14). They observed microstructures with large columnar grains that span hundreds of microns and possess the same <101>//BD texture observed in Fig. 1. Additionally, Rowenhorst et al. generated a 3D reconstruction of an AM microstructure via automated serial-section EBSD that showed grains growing epitaxially through many layers of an AM build (Ref 30). As the hatch spacing and layer thickness increase, the columnar grains break up, and the resulting smaller grains take on angular features caused by the partial remelting of grains by subsequent melt pool passes. Note that the HS100 LT50 and HS110 LT50 microstructures possess regions of lack-of-fusion (LOF) porosity. In this simulation approach, these regions are not voids but instead regions of randomized sites that remain unmelted and retain their original configuration from the initialization of the simulation domain. Qualitatively, a shift in grain size can be observed between the simulations.
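As an aside, the melt pool implied by the Rosenthal settings above can be evaluated directly. The sketch below uses the quoted power, speed, absorptivity, and material constants to estimate the extent of the molten zone; the preheat temperature and the symbol conventions (κ for absorptivity, ξ for the travel coordinate) are assumptions, and the point-source solution gives only indicative numbers.

```python
import numpy as np

# Parameters quoted in the text (Nickel-based superalloy approximation)
k = 11.2        # thermal conductivity, W/(m K)
rho = 8220.0    # density, kg/m^3
cp = 650.0      # specific heat, J/(kg K)
Tm = 1573.0     # melting temperature, K
Qp = 285.0      # laser power, W
kappa = 0.155   # absorptivity
v = 1.2         # scan speed, m/s
T0 = 293.0      # preheat temperature, K (assumed; not given in the text)
alpha = k / (rho * cp)  # thermal diffusivity, m^2/s

def rosenthal_T(xi, y, z):
    """Steady-state Rosenthal temperature for a moving point source:
    T = T0 + kappa*Qp / (2*pi*k*R) * exp(-v*(R + xi) / (2*alpha)),
    with xi the coordinate along the travel direction relative to the source."""
    R = np.sqrt(xi**2 + y**2 + z**2)
    return T0 + kappa * Qp / (2.0 * np.pi * k * R) * np.exp(-v * (R + xi) / (2.0 * alpha))

# Melt pool length: span of T >= Tm along the centerline (y = z ~ 0)
xi = np.linspace(-800e-6, 100e-6, 40001)
molten = xi[rosenthal_T(xi, 1e-9, 0.0) >= Tm]
print(f"melt pool length ~ {(molten.max() - molten.min()) * 1e6:.0f} um")

# Melt pool depth: deepest molten point directly under the source
z = np.linspace(1e-9, 200e-6, 40001)
print(f"melt pool depth  ~ {z[rosenthal_T(0.0, 0.0, z) >= Tm].max() * 1e6:.1f} um")
```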
In order to quantify this shift in grain size, the equivalent diameter of each grain was calculated for every simulation. These results are plotted on normal distribution plots in Fig. 2. From the results, it is obvious that none of the microstructures possess a normal distribution of grain size, as none of the curves lie on the straight line included in the plot. Additionally, the grain size decreases with increasing layer thickness within each hatch spacing. Less overlap in the simulations appears to result in less epitaxial, multilayer growth. Increased overlap between subsequent melt pool passes results in greater remelting, leaving less competition for the large columnar grains during solidification. For the simulations with less overlap, more of the fine grains that grow from the "powder layer" survive and prevent the epitaxial growth of the large columnar grains.

To assess texture evolution, the <001> pole figure from each simulation case is plotted in Fig. 3. The pole figures were calculated using Dream3D (Ref 31). The three columnar microstructures (HS80 LT30, HS80 LT40, and HS90 LT30) possess very similar textures. This rotated cube texture is frequently reported in the literature when LPBF AM builds have employed similar scan strategies (Ref 13, 14, 32, 33). As the grain structure shifts away from the columnar morphology, the texture also experiences a shift. Generally, as the amount of overlap between subsequent melt pool passes decreases, the texture strength decreases. Additionally, as the layer thickness decreases in the non-columnar microstructures, the simulations predict the development of a TD-direction fiber texture. While this texture has been previously reported for scan strategies similar to the one employed in the present study (Ref 20, 34), this trend could be a by-product of the current texture development simulation approach. As mentioned previously, the simulation volume is initialized on a lattice of random spin values. These regions of randomized spins contain the highest density of candidate orientations in the entire simulation volume. This means that the likelihood of a well-aligned spin value existing in these regions is much higher than in the regions of remelting, where the microstructure is much coarser and the number of different candidate spins is much lower. The simulation approach is likely artificially inflating the development of the TD fiber texture because, as the layer thickness decreases, the regions of the melt pool edges that are exposed to this high-density candidate spin region are the areas of the melt pool where the thermal gradients point horizontally, or nearly horizontally. This effect could potentially be mitigated by coarsening the initial simulation domain or by including a small amount of nucleation to prevent this effect from dominating the texture development. As with all simulations, care should be taken when interpreting the texture predictions, especially when experimental validation has not been performed.

Misorientation distributions were plotted for each simulation case to further quantify the crystallographic texture present in the synthetic microstructures. These distributions take into account the area of each grain boundary, as the calculations were made on a voxel-by-voxel basis. The misorientation distributions are plotted in Fig. 4. Again, a clear trend is visible. As the layer thickness increases, the distribution shifts toward the Mackenzie distribution (Ref 35).
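The misorientation underlying Fig. 4 is the smallest rotation angle relating two grain orientations once cubic crystal symmetry is accounted for; sampling it for random orientation pairs reproduces the Mackenzie distribution's familiar peak near 45 degrees. The quaternion-free construction below is a standard one, not code from the paper:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# The 24 proper rotations of the cubic point group
CUBIC = R.create_group("O").as_matrix()

def misorientation_deg(g1: np.ndarray, g2: np.ndarray) -> float:
    """Minimum rotation angle (deg) between two orientations under cubic
    symmetry. Since trace(AB) = trace(BA), left-multiplying by all 24 group
    elements covers every symmetry-equivalent variant for the angle."""
    dg = g1 @ g2.T
    best = -1.0
    for s in CUBIC:
        c = (np.trace(s @ dg) - 1.0) / 2.0  # cos of the rotation angle
        best = max(best, min(1.0, max(-1.0, c)))
    return float(np.degrees(np.arccos(best)))

# Random grain pairs: the angle distribution approaches Mackenzie's
rng = np.random.default_rng(1)
angles = [misorientation_deg(R.random(random_state=rng).as_matrix(),
                             R.random(random_state=rng).as_matrix())
          for _ in range(2000)]
print(f"mean = {np.mean(angles):.1f} deg, max = {np.max(angles):.1f} deg")  # max <= ~62.8
```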
Additionally, bimodal distributions can be observed in a few of the simulation cases. This is likely associated with the large columnar <101>//BD grains possessing low misorientation with respect to each other.

Crystal Plasticity Results and Discussion

To investigate the mechanical responses of the different microstructures, an elasto-viscoplastic (EVP) crystal plasticity model was employed to simulate the response of the synthetic microstructures under tensile loading. The fast Fourier transform (FFT) algorithm was implemented to solve for the stress state, with a 2D slice of each microstructure used as the input to the modeling approach (EVP-FFT). This image-based approach does not require meshing of the microstructure, and each voxel of the input microstructure is considered in the calculation, with the local properties of each voxel dependent, largely, on crystallographic orientation. Constant amounts of strain are applied at each time step, which stretches the voxels in the simulation grid. The modeling approach calculates the resulting stress and strain fields at each voxel as a result of this macroscopic strain. The calculation of the strain tensor at each voxel is presented in Eq 16. In this equation, ε(x) represents the strain tensor at the voxel of interest, C is the elastic stiffness tensor, σ is the stress, ε^(p,t) is the total plastic strain at the previous time step, ε̇^p is the plastic strain rate at the current time step, and Δt is the size of the time step. The formulation for calculation of the plastic strain rate is given in Eq 17. In Eq 17, m^s(x) represents the Schmid tensor for slip system s at x, γ̇^s(x) is the shear rate of slip system s, γ̇_0 is a normalization factor, τ^s(x) is the critically resolved shear stress (CRSS), and n is the stress exponent. For additional details and derivations of the solution and the FFT-based methods implemented to solve for the final stress and strain states, the reader is directed to the following manuscript (Ref 36). The Voce hardening law is incorporated into the model to simulate the changes in the critically resolved shear stress of the slip systems as shear strain accumulates (Eq 19) (Ref 37). In Eq 19, τ_0^s is the initial CRSS, θ_0^s is the initial hardening rate, θ_1^s is the asymptotic hardening rate, τ_0^s + τ_1^s is the back-extrapolated CRSS, and Γ is the accumulated shear.

The stress-strain curves for each of the different microstructures, for loading in both the build direction (BD) and the transverse direction (TD), are presented in Fig. 5. 2D slices taken from the center of the microstructures were used as inputs to the mechanical model. Inconel 625 material parameters were used, in keeping with the approach of approximating a Nickel-based superalloy material system (Ref 38). The elastic stiffness constants are presented in Table 1, and the Voce hardening law parameters that were used are shown in Table 2. For easier interpretation of the results, the elastic modulus, yield stress, and anisotropy (σ_y^BD/σ_y^TD) are plotted in Fig. 6 in a grid format based on the variation in hatch spacing and layer thickness. The elastic modulus was measured from the linear portion of each plot, and the yield stress was determined via the 0.2% offset approach. Additionally, the multiples-of-random values extracted from inverse pole figures plotted with respect to the BD and TD were used to compare the intensity of <001> crystallographic planes parallel to the BD and TD directions (Fig. 6f).
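For concreteness, the two constitutive ingredients described above, the power-law slip kinetics of Eq 17 and the Voce hardening of Eq 19, can be sketched in a few lines. The parameter values below are placeholders rather than the paper's Table 2 entries, and the Voce expression shown is the common VPSC statement, assumed here to match Eq 19:

```python
import numpy as np

def shear_rate(tau_resolved, crss, gamma0=1.0, n=10):
    """Power-law slip kinetics: gamma_dot = gamma0 * |tau/crss|^n * sign(tau)."""
    return gamma0 * np.abs(tau_resolved / crss) ** n * np.sign(tau_resolved)

def voce_crss(gamma_acc, tau0, tau1, theta0, theta1):
    """Voce hardening: the CRSS rises from tau0 toward the back-extrapolated
    value tau0 + tau1, with initial slope theta0 and asymptotic slope theta1:
    tau(G) = tau0 + (tau1 + theta1*G) * (1 - exp(-G*theta0/tau1))."""
    return tau0 + (tau1 + theta1 * gamma_acc) * (1.0 - np.exp(-gamma_acc * theta0 / tau1))

# Placeholder hardening constants (MPa) -- not the paper's Table 2 values.
tau0, tau1, theta0, theta1 = 250.0, 80.0, 900.0, 50.0
for G in (0.0, 0.01, 0.05, 0.2):
    print(f"accumulated shear {G:.2f}: CRSS = {voce_crss(G, tau0, tau1, theta0, theta1):.1f} MPa")
```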
Some trends are evident. The most obvious trend is that the BD-direction yield strength increases as the layer thickness and hatch spacing increase. Additionally, the anisotropy observed in the simulations appears to increase as the hatch spacing and layer thickness increase. These effects appear to be explained by the texture shifts illustrated in Fig. 6(f). The anisotropy measured in Fig. 6(e) is the ratio of the BD-direction yield stress to the TD-direction yield stress. The large hatch spacing and layer thickness cases possess high BD-direction yield stresses and low TD-direction yield stresses, resulting in anisotropy measures greater than 1. Each simulation case with a yield anisotropy greater than 1 shows a low <001> intensity ratio in Fig. 6(f), indicating that these microstructures possess more <001> crystal directions parallel to the TD than to the BD. As the <001> direction is the soft direction in cubic materials, the larger concentration of <001> crystal directions results in a lower yield stress in the TD than in the BD. The opposite is most clearly observed in the HS80 LT30 case and the HS90 LT30 case: these microstructures are the only ones that show a yield stress ratio of less than 1. This result couples nicely with the large intensity ratios shown for these microstructures in Fig. 6(f). The large number of <001> crystal directions parallel to the BD in these simulation cases was responsible for the measured drop in build-direction yield strength. These results indicate the large effect the texture shifts can have on the mechanical properties of the printed material.

Quantification of Grain Morphology

The grains in LPBF AM parts vary greatly in size and morphology. Measuring only the size of the grains neglects much of the information present in the microstructure. Qualitatively, from Fig. 1, a significant shift in the morphology of the grains can be observed, with the most columnar shapes occurring when large overlap between subsequent melt pool passes is employed. In order to further quantify the microstructures, an analysis of the shapes present in the microstructure was performed via the use of second-order moment invariants. Moment invariants have been used in pattern recognition since Hu first proposed the approach in 1962 (Ref 39). Additionally, the use of moment invariants as a measure of shape has already been successfully implemented in the study of microstructure. MacSleyne et al., for example, used 2-D moment invariants to quantify the evolution of γ′ precipitate morphology throughout a 2-D phase field simulation (Ref 40). In a similar fashion, 2-D moment invariants are used here to quantify the grain shapes present in 2-D slices of each of our simulated microstructures. For these voxelized microstructures, Eq 20 is employed to calculate the 2-D moment (of order p + q) for a particular grain ID,

μ_pq = Σ_x Σ_y x^p y^q f(x, y)   (Eq 20)

In this equation, x and y are the Cartesian coordinates of a particular voxel, and f(x,y) is an indicator function: f(x,y) is equal to one when the grain ID of the voxel is equal to the grain ID of interest and zero otherwise. In order to make these moments invariant to features of the image such as position, size, and rotation, they can be transformed into moment invariants (Eq 21 and 22). These moment invariants are better suited to function as pure shape descriptors of the grains in the microstructure.
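A compact implementation of these shape descriptors is given below. The normalizations for ω1 and ω2 (chosen so that a circle scores 1 on both axes, matching the top-right corner of the map discussed next) follow MacSleyne et al.'s conventions as we understand them, and should be treated as an assumption rather than the paper's exact Eq 21 and 22:

```python
import numpy as np

def second_order_invariants(mask: np.ndarray):
    """Normalized 2-D second-order moment invariants (omega1, omega2) of a
    grain given as a boolean pixel mask. Both equal 1 for a circle; omega1
    drops with aspect ratio, omega2 with deviation from an ellipse."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()                    # central moments: subtract centroid
    y = ys - ys.mean()
    A = float(len(xs))                    # area (mu_00)
    mu20 = float(np.sum(x * x))
    mu02 = float(np.sum(y * y))
    mu11 = float(np.sum(x * y))
    omega1 = A**2 / (2.0 * np.pi * (mu20 + mu02))
    omega2 = A**4 / (16.0 * np.pi**2 * (mu20 * mu02 - mu11**2))
    return omega1, omega2

# Checks: a disc scores ~(1, 1); a 4:1 ellipse keeps omega2 ~ 1 but low omega1.
yy, xx = np.mgrid[-60:61, -60:61]
disc = xx**2 + yy**2 <= 50**2
ellipse = (xx / 56.0)**2 + (yy / 14.0)**2 <= 1.0
print(second_order_invariants(disc))     # ~ (1.00, 1.00)
print(second_order_invariants(ellipse))  # omega1 ~ 2ab/(a^2+b^2) ~ 0.47
```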
Figure 7 illustrates the second-order moment invariant map (SOMIM) for the HS80 LT50 microstructure. A few selected grains have been highlighted on the map to illustrate where different shapes appear on the SOMIM. Nearly circular grains are located in the top right of the SOMIM, whereas elongated shapes are plotted in the top left. The ω1 axis is essentially a measure of aspect ratio, and the ω2 axis can be described as a measure of shape complexity. Figure 8 presents the SOMIM for each of the microstructures, which shows that each microstructure has a distinctive fingerprint. The SOMIMs for the columnar microstructures (HS80 LT30, HS80 LT40, and HS90 LT40) are concentrated near the top of the map, i.e., ω2 ≈ 1. As the hatch spacing and layer thickness increase, the density of near-circular grains increases, i.e., the points concentrate in the top right corner. Additionally, there are many grains close to the right bound of the SOMIM. These correspond to an approximately triangular shape with angular features that appears in many of the microstructures. The SOMIM maps thus quantify the shifts in grain morphology found in the dataset that are also evident by inspection.

In order to link these variations to properties, each point was colored with respect to the average stress measured across each grain at the final EVP-FFT simulation step. Figure 8 illustrates the average stress response for BD loading. Clear clustering of some colors can be observed, indicating that placement on the map can be correlated, to some extent, with mechanical response. It appears that grains with low ω1 values possess low average stress in the BD-direction loading case. These grains correspond to the thin, high-aspect-ratio grains that run along the centerline of the melt pools through multiple layers. These grains possess a <001> direction parallel to the build direction, resulting in the low average stress values observed. While the mechanical response of these grains is largely dependent on crystallographic orientation in the utilized EVP-FFT simulation approach, the information presented here is still useful. Clearly, the grain shapes developed in these microstructures are coupled to the crystallographic orientation and, as a result, to the mechanical response. This is because preferred solidification is present in the simulation approach, and the shape and position of the grains are linked to the complex thermal history and changing solidification directions inherent to the manufacturing technique. These results indicate promise in the use of SOMIM plots to evaluate microstructure by linking grain shapes and properties in AM parts.

Figure 6. Gridded plots of (a) modulus in the build direction, (b) modulus in the transverse direction, (c) yield stress in the build direction, (d) yield stress in the transverse direction, (e) anisotropy between the yield stress in the build direction and transverse direction (σ_y^BD/σ_y^TD), and (f) the ratio of the intensity of <001> crystal planes parallel to the BD and to the TD.

Simulation of a Functionally Graded Microstructure

As these microstructures clearly show strong effects of the changes in hatch spacing and layer thickness on texture, grain shape, and mechanical response, the ability to create functionally graded microstructures by altering these parameters becomes a possibility. In order to investigate the feasibility of this approach within the texture-aware Potts model, a simulation was run with the same parameters as previously outlined but with a shift in layer thickness mid-build. The majority of the simulation imposes a 40 µm layer thickness.
For 8 layers mid-build, the layer thickness was increased to 50 µm. The synthetic microstructure generated by this approach is illustrated in Fig. 9. A clear shift in microstructure can be observed where the layer thickness is changed mid-simulation. The graded microstructure presented in Fig. 9 was also subjected to the micromechanical EVP-FFT simulations. Again, loading in both the BD and TD was simulated. Overall, there was minimal anisotropy in the bulk mechanical response of the microstructure. However, when looking at the spatially resolved stress states for the different loading conditions, as presented in Fig. 9(b) and (c), the effect of the microstructure shift can be observed. In the TD, the 50 µm LT zone shows a significantly smaller stress response than the 40 µm zone. The SOMIM plots for the BD and TD loading are presented in Fig. 9(d) and (e), respectively. Again, the cluster of low-stress grains at low ω1 values is present. When plotted for the TD loading simulation, these grains possess a much higher stress response. This indicates the high level of anisotropy generated in these builds. Precise control of microstructure for the reduction of stress near potential stress concentrations could be accomplished by shifting the microstructure via hatch spacing and layer thickness modifications.

Conclusions

• Systematic variation of the hatch spacing and layer thickness in the TS Potts simulations of LPBF AM microstructures reveals significant variation in the synthetic microstructures produced.
• The texture predicted by the approach also shows significant shifts across HS-LT space.
• The texture shift into the TD-direction fiber texture is an artifact of the current simulation approach. The region of random spin values used to initialize the synthetic volume and simulate the powder layer artificially exaggerates the texture strength in regions of the melt pool that interact with these powder regions. Modifications to the modeling approach may be necessary to mitigate this effect and better reproduce the textures present in experimental LPBF AM builds.
• Scan strategies that extensively overlap successive melt pool passes (i.e., substantial re-melting) in the TS Potts approach result in synthetic microstructures with large columnar grains with strong <101>//BD texture when no melt pool shift or rotation between layers is present. As the overlap decreases, the grain size also decreases.
• A distinctly non-random misorientation distribution was observed for nearly all synthetic microstructures generated. This included a distinct bimodal distribution for the HS80 LT30 microstructure. Less overlap resulted in a shift toward the Mackenzie distribution.
• Classification of the grain morphology through the use of second-order moment invariants quantified the variations in grain shape of the microstructures. Distinct fingerprints were obtained for each microstructure that matched qualitative observations of the grain morphologies present across the synthetic microstructures.
• The variations in microstructure were shown to have strong effects on the mechanical response as predicted by an EVP-FFT micromechanical model. The yield strength increased as the material shifted away from the strong <101> textures of the columnar microstructures.
• Stress development in the individual grains could be correlated with grain shape when plotted on the second-order moment invariant map.
This highlights the potential for grain shape to be linked to properties when analyzing AM microstructures.
• A functionally graded synthetic microstructure was simulated by altering the layer thickness of the build mid-simulation. Significant differences in mechanical response (~150 MPa of stress) were evident between the two regions of microstructure.
8,647.8
2021-08-19T00:00:00.000
[ "Engineering", "Materials Science" ]
Laboratory Investigation of Rubberized Asphalt Using High-Content Rubber Powder

Rubberized asphalt (RA) has been successfully applied in road engineering due to its excellent performance; however, the most widely used rubber content is about 20%. To increase the content of waste rubber while ensuring performance, seven rubberized asphalts with different powder contents were prepared by high-speed shearing. Firstly, penetration, softening point, and ductility tests were carried out to investigate the conventional physical features of high-content rubberized asphalt (HCRA). Then, the dynamic shear rheometer (DSR) test was conducted to estimate the high-temperature rheological properties, and the bending beam rheometer (BBR) test was carried out to evaluate the low-temperature rheological performance. Finally, combined with the macroscopic performance tests, the modification mechanism was revealed by Fourier transform infrared reflection (FTIR) testing, and scanning electron microscope (SEM) analysis was used to observe the microscopic appearance before and after aging. The results show that rubberized asphalt has excellent properties under high- and low-temperature conditions, and its fatigue resistance is also outstanding compared with neat asphalt. As the crumb rubber content increases, it is evident that the 40% RA performs best. The low-temperature properties of HCRA are better than those of the traditional 20% rubberized asphalt. This study provides a full test foundation for the efficient utilization of HCRA in road engineering.

Introduction

Road rubberized asphalt (RA) is made by the high-temperature shear mixing of asphalt, waste tire rubber powder, and various admixtures [1]. This not only enables the recycling of waste tire rubber but also improves the properties of neat asphalt [2]. Rubberized asphalt has always been a research hotspot in the pavement industry [3]. The entire development process of rubberized asphalt can be divided into four stages according to the different application methods in different periods [4]. The first stage is mixing rubber powder and aggregate and then adding asphalt to produce a rubber-modified asphalt mixture [5,6]. The second stage is mixing asphalt + rubber powder + rubber oil, stirring in a tank at a temperature of 180 to 200 °C for about 40 to 60 min, and then mixing with aggregate to produce a rubber-modified asphalt mixture [7]. The third stage involves factory-based stabilized rubberized asphalt: asphalt + rubber powder, stirred at the specified temperature and time and then added to the product tank when the product standard is reached [2,8]. Thermal storage does not stratify the mixture, and it does not decay. It can be used in

Rubber Powder

The particle size of the waste rubber powder was less than 0.6 mm, and its conventional properties are shown in Table 2.

Preparation of Rubberized Asphalt

The stabilized rubberized asphalt was prepared according to a specific method [27], in which the blending content of waste tire rubber powder was 20%, 25%, 30%, 35%, 40%, 45%, and 50% (internal blending), giving a total of seven types of rubberized asphalt. Rubberized asphalt with more than 20% rubber powder was defined as high-content rubberized asphalt (HCRA). The processing equipment is shown in Figure 1, and the preparation steps are as follows: (1) Weigh the raw materials of the modified asphalt according to the mass ratio. (2) Heat the neat asphalt to 160 °C, and then transfer it to the pilot reactor.
(3) Add the viscosity reducing agent and reinforcing agent to the pilot reactor, and stir evenly. The viscosity reducing agent is a mix of activator 950, dioctenyl phthalate, and epoxy fatty acid methyl ester. The reinforcing agent is natural asphalt, petroleum resin, and phenolic resin.

Experiment Methods

In this work, all macroscopic performance tests were carried out in accordance with the specifications of the "Standard Test Methods of Bitumen and Bituminous Mixture for Highway Engineering" (JTG E20-2011) [26].

Penetration, Ductility, Softening Point, and Rotation Viscosity

The penetration, ductility, softening point, and rotation viscosity of the neat asphalt and rubberized asphalts are measured. The standard conditions for the penetration test are a temperature of 25 °C, a penetration time of 5 s, and a 100 g load; the same sample is tested in parallel three times. The ductility test temperature is 25 °C and the tensile speed is 5 ± 0.25 cm/min; the same sample is tested in parallel three times. The softening point of the neat asphalt and rubberized asphalts is measured by the ring and ball method, and the same sample is tested twice in parallel. Rotation viscosity was measured with a Brookfield viscometer at 180 °C, and the same sample was tested in parallel three times.

Dynamic Shear Rheology Test (DSR)

For the original asphalt test, the sample diameter is 25 mm and the thickness is 1 mm. Dynamic shear rheometer (DSR) equipment in strain control mode is used to test the specimen; the DSR equipment is shown in Figure 2. A sinusoidal oscillating load with a frequency of 0.1-100 rad/s is applied to test the rheological properties of the neat asphalt and rubberized asphalts and to determine their dynamic modulus and phase angle. The frequency sweep temperature is 60 °C. The specific test steps are as follows: (1) Prepare samples according to standard methods. (2) Select a test plate with a diameter of 25 ± 0.05 mm and clean its surface. Place it on the testing machine and move the top plate to make the plate gap 1 ± 0.05 mm. (3) Take out the test plate, pour the sample onto the test plate, and install the test plate on the testing machine after the sample hardens. Move the test plate to squeeze the sample, heat the specimen trimmer, and trim the overflowing material. Then, adjust the gap to 1 ± 0.05 mm. (4) After the temperature control box has remained stable at 60 °C for 2 min, start loading and perform the frequency scan.
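From the recorded stress and strain waveforms, the DSR reports the complex shear modulus G* and the phase angle δ; the Superpave rutting indicator G*/sin δ is the standard way to summarize the high-temperature result. The sketch below shows this reduction for a single hypothetical sweep point; the numbers are illustrative, not the paper's data.

```python
import math

def dsr_point(tau_max_pa: float, gamma_max: float, phase_deg: float):
    """Reduce one DSR oscillation point to (G*, delta, G*/sin delta).

    G* = peak shear stress / peak shear strain, delta is the measured lag
    between the stress and strain waveforms, and G*/sin(delta) is the
    Superpave high-temperature (rutting) parameter."""
    g_star = tau_max_pa / gamma_max                     # Pa
    delta = math.radians(phase_deg)
    return g_star, phase_deg, g_star / math.sin(delta)  # Pa, deg, Pa

# Hypothetical 60 C sweep point: 1.2 kPa peak stress at 12 % strain, 75 deg lag
g_star, delta, rutting = dsr_point(1200.0, 0.12, 75.0)
print(f"G* = {g_star/1000:.1f} kPa, delta = {delta:.0f} deg, "
      f"G*/sin(delta) = {rutting/1000:.1f} kPa")
```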
Bending Beam Rheometer Test (BBR) After short-term aging (rolling thin-film oven test (RTFOT), 85 min) and long-term aging (pressurized aging vessel (PAV), 20 h), the flexural creep stiffness and creep rate of neat asphalt and rubberized asphalt are measured by a bending beam rheometer at −12, −18, and −24 °C. Three parallel tests are conducted for each sample and temperature. The BBR equipment is shown in Figure 3. The specific test steps are as follows: (1) Put the test piece into the prepared thermostatic bath immediately after demolding, keep it there for 60 ± 5 min, then place it on the support, and keep the thermostatic bath within ±0.1 °C of the test temperature; (2) Input relevant information such as the test temperature, test load, and test piece data into the computer; (3) Manually apply a contact load of 35 ± 10 mN to the specimen, and ensure that the application time does not exceed 10 s. The specimen must remain in contact with the load head during the application process; (4) Activate the automatic loading system, apply an initial load of 980 ± 50 mN within 1 ± 0.1 s, reduce the load to 35 ± 10 mN, and maintain it for 20 ± 1 s. Apply a test load of 980 ± 50 mN for 240 s; the computer automatically records and calculates the load and deformation values from 0.5 s onward at intervals of 0.5 s. Remove the test load, return the system to a contact load of 35 ± 10 mN, remove the test piece, and proceed to the next test.
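From the recorded load and mid-span deflection history, the creep stiffness S(t) follows from elementary beam theory, and the creep rate m is the slope of log S versus log t at the 60 s loading time. The sketch below assumes the standard BBR beam geometry (102 mm span, 12.7 mm width, 6.35 mm thickness), which the text does not state explicitly; the deflection trace is hypothetical.

```python
import numpy as np

# Standard BBR beam geometry (assumed; not stated explicitly in the text).
L = 0.102    # support span, m
b = 0.0127   # beam width, m
h = 0.00635  # beam thickness, m
P = 0.980    # test load, N (980 mN as in step (4) above)

def creep_stiffness(deflection):
    """Flexural creep stiffness S(t) of a simply supported beam
    under a mid-span point load: S = P*L^3 / (4*b*h^3*delta)."""
    return P * L**3 / (4.0 * b * h**3 * deflection)

# Hypothetical deflection readings (m) at the standard report times (s).
t = np.array([8.0, 15.0, 30.0, 60.0, 120.0, 240.0])
delta = np.array([0.21, 0.25, 0.30, 0.36, 0.44, 0.54]) * 1e-3
S = creep_stiffness(delta)

# m-value: slope of log S vs log t, evaluated at t = 60 s from a quadratic fit.
c2, c1, c0 = np.polyfit(np.log10(t), np.log10(S), 2)
m_value = abs(2.0 * c2 * np.log10(60.0) + c1)
print(f"S(60 s) = {creep_stiffness(0.36e-3)/1e6:.0f} MPa, m-value = {m_value:.3f}")
```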
Aging Test A rolling thin-film oven test (RTFOT) and an accelerated aging test of the asphalt binder using a pressurized aging vessel (PAV) are used to simulate short-term aging and long-term aging, respectively. Rolling Thin-Film Oven Test (RTFOT) Short-term aging test procedure: weigh a 35 ± 0.5 g asphalt sample and place it in a short-term aging bottle; adjust the rotating oven so that it is level, and preheat it to 163 ± 0.5 °C for no less than 16 h so that the air in the chamber is heated evenly. Adjust the temperature controller and put all the sample bottles into the metal ring rack; the oven temperature should return to 163 ± 0.5 °C within 10 min. Adjust the distance between the air nozzle and the opening of the sample bottle to 6.35 mm, and adjust the air flow rate to 4000 ± 200 mL/min; the total duration of the test is 85 min. This equipment is produced by Beijing Zhongjian Road Industry Instrument and Equipment Co., Ltd. (Beijing, China), and is shown in Figure 4a. Accelerated Aging Test of Asphalt Binder Using a Pressurized Aging Vessel (PAV) This equipment is likewise produced by Beijing Zhongjian Road Industry Instrument and Equipment Co., Ltd., and is shown in Figure 4b. Long-term aging test procedure: (1) Pour the asphalt residue from the rolling thin-film oven test into the container. (2) Tare the standard film oven test sample tray of known mass, add 50 ± 0.5 g of asphalt to the tray, and make the asphalt film thickness about 3.2 mm. (3) Put the tray rack in the pressure vessel, select the temperature of the pressure aging vessel, and then turn on the heater to preheat the rack to the selected aging temperature of 100 °C.
(4) After the aging temperature is reached, quickly open the vessel, put in the prepared sample tray, and then close the pressure vessel. (5) When the internal temperature of the pressure vessel is within 2 °C of the aging temperature (this must be reached within 2 h), apply an air pressure of 2.1 ± 0.1 MPa and start timing. Fourier Transform Infrared Spectroscopy (FTIR) In order to reveal the modification mechanism of rubberized asphalt, infrared spectroscopy is used for functional group analysis. Fourier transform infrared spectroscopy (FTIR) is an analysis method that determines the molecular composition of a substance from its absorption of infrared light. Infrared measurements usually use the mid-infrared band, which ranges from 4000 to 500 cm−1; within it, 4000-1300 cm−1 is the functional-group region and 1300-600 cm−1 is the fingerprint region. The former is the most valuable region for spectral analysis, while in the latter even slight differences in molecular structure produce spectral changes [9,16,28]. The neat asphalt and rubberized asphalt are measured by a Nexus Fourier transform infrared spectrometer produced by Thermo Nicolet (Madison, WI, USA). The specific steps are as follows: (1) Use carbon tetrachloride (CCl4) reagent to fully dissolve the modified asphalt (0.1 g of asphalt requires 2 mL of CCl4 to dissolve). (2) After it has dissolved completely, place 2 drops on a potassium bromide (KBr) wafer and air dry. (3) When the sample has cooled, it can be put into the sample compartment for scanning. During the test, the acquisition range is set at 400-4000 cm−1, the number of scans is 32, and the resolution is 4 cm−1. Scanning Electron Microscope (SEM) The information contained in an electron scanning image can well reflect the surface morphology of a sample. To observe the microscopic morphology of the neat asphalt and rubberized asphalt, the asphalt samples are imaged by a scanning electron microscope at a magnification of 200 times. This research uses the S-3400N tungsten filament scanning electron microscope produced by Hitachi (Tokyo, Japan), which guarantees a resolution of 10 nm at a low acceleration voltage of 3 kV.
In order to obtain better scanning electron microscopy images, an ion-sputtering instrument is used to coat the samples with gold before scanning. General Physical Properties The test results of the three major indices of rubberized asphalt with different rubber powder contents are shown in Figure 5: Figure 5a shows the penetration, Figure 5b the ductility, and Figure 5c the softening point of the rubberized asphalt. Penetration, ductility, and softening point tests were carried out for each content of rubber-modified asphalt. It can be seen from Figure 5a that the penetration of rubberized asphalt with different rubber powder contents falls in different zones. The penetration of 20% rubberized asphalt is in the range of 40-60 (0.1 mm); the penetration of 25-40% rubberized asphalt is concentrated in the range of 60-80 (0.1 mm); and the penetration of 45-50% rubberized asphalt is distributed in the range of 80-90 (0.1 mm). Thus, as the rubber powder content increases, the penetration increases. Figure 5b demonstrates that the ductility of the rubberized asphalt meets the requirement of "Asphalt rubber for highway engineering" (JT/T 798-2019) for cold areas of greater than 100 mm [29]. As the content of rubber powder increases, the ductility first increases and then decreases. Among the samples, the rubberized asphalt with 35% rubber powder has the highest ductility, reaching 315.7 mm. It can be seen from Figure 5c that the softening point of rubberized asphalt with different contents meets the standard requirements and is mostly concentrated at 65-73 °C. These values are 31.7-45.7% higher than neat asphalt, indicating that rubber powder can effectively improve the high-temperature properties of neat asphalt [16]. However, there is no apparent difference in the softening point of rubberized asphalt across the various rubber powder contents, so the softening point cannot effectively distinguish the differences in high-temperature performance and viscoelastic properties of rubberized asphalt [20]. It can be seen from Figure 5d that the viscosity of all rubberized asphalts at 180 °C is less than 3 Pa·s, indicating excellent construction mixing performance.
Compared with 20% rubberized asphalt, the viscosity of HCRA is slightly lower at 25% content. When the content exceeds 25%, the viscosity is greater than that of 20% RA, and the viscosity of 50% RA is 127.17% higher than that of 20% RA. High-Temperature Rheological Properties Under the condition of 60 °C, frequency sweeps of the eight kinds of asphalt were performed with the DSR equipment. The complex shear modulus and phase angle were measured at different rates, the rutting index and fatigue index were calculated, and the high-temperature rutting index and the fatigue index of the various asphalts were compared. Figure 6 shows that the complex shear modulus and phase angle of the asphalt are affected by the loading frequency: as the loading frequency increases, the complex shear modulus increases and the phase angle decreases. It can be found from Figure 6a that the dynamic shear modulus of 40% RA is the largest and that of the neat asphalt is the smallest. The complex shear modulus is influenced not only by the rubber powder content but also by comprehensive factors such as the rubber powder content and additives [25,30]. Figure 6a shows that, at 10 Hz, the dynamic shear modulus of 25% RA increases by 11.6% and that of 40% RA increases by 145.3%; the dynamic shear moduli of the other rubberized asphalts lie between them, while the dynamic modulus of the neat asphalt is less than 14 kPa. Figure 6b shows that the phase angle of the neat asphalt is concentrated between 80° and 90°, and the phase angle of rubberized asphalt is concentrated between 50° and 65°; among these, the phase angles of rubberized asphalt with 20-30% rubber powder are at the same level, and 40% RA, 45% RA, and 50% RA are at the same level. Generally speaking, with the increase in rubber powder, the phase angle shows a decreasing trend [31,32]. Overall, Figure 6b shows that the phase angle of rubberized asphalt is much smaller than that of the neat asphalt. Consequently, rubber powder plays a crucial role in the modification: the swelling of the rubber powder and the formation of a three-dimensional network structure with the asphalt increase the flow resistance.
Viscoelastic Properties The storage modulus and loss modulus of the rubberized asphalt are further analyzed. As shown in Figure 7, Figure 7a shows the storage modulus and Figure 7b the loss modulus of the rubberized asphalt. It can be seen from Figure 7 that, for both the storage modulus and the loss modulus, 40% RA is the highest, neat asphalt is the lowest, and the other asphalts lie between them, which is consistent with the conclusions of Figure 5. Anti-Rutting Performance According to the complex shear modulus and phase angle in Figure 6, the rutting index of asphalt is calculated as G*/sin δ (here, G* is the complex shear modulus and δ is the phase angle), and three frequencies of 0.1, 1, and 10 Hz are selected [33]. The rutting index of asphalt is shown in Figure 8. The larger the rutting index, the better the high-temperature stability of the asphalt and the better its resistance to permanent deformation.
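Both indices used here derive directly from the sweep data: the rutting factor G*/sin δ above and the fatigue factor G*·sin δ discussed in the following section. A minimal sketch, with hypothetical sweep values:

```python
import numpy as np

def rutting_factor(g_star_pa, delta_deg):
    """Rutting index G*/sin(delta): higher means better high-temperature
    resistance to permanent deformation."""
    return g_star_pa / np.sin(np.radians(delta_deg))

def fatigue_factor(g_star_pa, delta_deg):
    """Fatigue index G*·sin(delta)."""
    return g_star_pa * np.sin(np.radians(delta_deg))

# Hypothetical 60 °C sweep values at 0.1, 1, and 10 Hz for one binder.
freqs = np.array([0.1, 1.0, 10.0])          # Hz
g_star = np.array([0.9e3, 5.5e3, 30.0e3])   # Pa
delta = np.array([82.0, 75.0, 63.0])        # degrees

for f, g, d in zip(freqs, g_star, delta):
    print(f"{f:5.1f} Hz: G*/sin(d) = {rutting_factor(g, d)/1e3:6.2f} kPa, "
          f"G*·sin(d) = {fatigue_factor(g, d)/1e3:6.2f} kPa")
```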
Figure 8 indicates that the rutting index of asphalt is comprehensively affected by the loading frequency and the rubber powder content. As the frequency increases, the rutting index of asphalt shows an increasing trend: at frequencies of 1 and 10 Hz, the rutting factor of neat asphalt is seven times and 42 times its value at 0.1 Hz, respectively. The addition of rubber powder can effectively improve the rutting resistance of neat asphalt and improve the anti-rutting performance at high temperature. The reason is that the interaction between the rubber powder and the neat asphalt is pronounced, forming a physical crosslinking effect [24]. This changes the proportions of the asphalt components, forming a stable gel structure and thereby improving the high-temperature properties of neat asphalt [33]. A rubber powder content greater than 20% is defined as high-content rubberized asphalt. When the rubber powder content is greater than 25%, the rutting index first increases and then decreases with increasing rubber powder content. When the rubber powder content is more than 35%, the high-temperature properties of high-content rubberized asphalt are better than those of 20% rubberized asphalt. Comparing Figures 5 and 8, there is no visible difference in the softening point of the various asphalts, but there are remarkable differences in the rutting index; dynamic shear rheology can analyze the high-temperature performance of asphalt in depth and determine the influence of frequency on it. Fatigue Resistance According to the complex shear modulus and phase angle in Figure 6, the fatigue index of asphalt is calculated as G*·sin δ; the asphalt fatigue indices at 0.1, 1, and 10 Hz are taken as examples, as shown in Figure 9 [15]. Comparing Figures 8 and 9 demonstrates that the variation pattern of the asphalt fatigue index is consistent with that of the rutting index, increasing with the loading frequency.
The fatigue index of rubberized asphalt is higher than that of the neat asphalt. At the three frequencies of 0.1, 1, and 10 Hz, the fatigue index of rubberized asphalt with 20% rubber powder is increased by 5.8, 2.9, and 1.9 times, respectively; the difference is within one order of magnitude. For high-content rubberized asphalt, the fatigue index first increases and then decreases as the rubber powder content rises from 25%. The fatigue index of rubberized asphalt with 40% rubber powder is the largest, and the overall fatigue index is relatively stable and remains at a comparatively low level. Low-Temperature Rheological Properties The BBR test results of the eight kinds of asphalt are shown in Figure 10, in which Figure 10a is the stiffness modulus (S) and Figure 10b is the creep rate (m) [11,23]. The samples have all been aged by RTFOT and PAV [24,34]. The test results in Figure 10a show that, for every binder, the stiffness modulus increases as the temperature decreases, which is in line with the actual pavement situation. Under the three test temperature conditions, the low-temperature stiffness modulus of rubberized asphalt is smaller than that of the neat asphalt. The stiffness modulus decreases as the rubber powder content increases, and the cracking resistance of rubberized asphalt in low-temperature environments improves. Compared with the rubberized asphalt with 20% rubber powder, the low-temperature properties of high-content rubberized asphalt do not show a downward trend; on the contrary, with the increase in rubber powder content, the low-temperature modification effect becomes more significant.
Figure 10b illustrates that the creep rate of asphalt decreases when the temperature is lowered; accordingly, cracking is more likely to occur [3,22]. Under the three temperature conditions, the creep rate of rubberized asphalt is much better than that of the neat asphalt, and compared with rubberized asphalt with 20% rubber powder, the creep rate of high-content rubberized asphalt is higher. Unlike the stiffness modulus, the creep rate does not always increase with the rubber powder content: at −24 °C, the creep rates of rubberized asphalt with 20% and 25% powder content are almost identical. Further analysis of the low-temperature sensitivity of asphalt shows that the stiffness index can represent the low-temperature sensitivity of asphalt, calculated as follows [3]: lg S = S_TS · T + C (1), where S_TS is the stiffness index, T is the test temperature, and C is the regression constant. According to Equation (1), the logarithm of the stiffness modulus is linearly fitted against temperature. The fitting results are shown in Figure 11, and the fitting parameters are shown in Table 3. Figure 11 and Table 3 demonstrate that, as the temperature rises, the stiffness modulus of rubberized asphalt descends linearly; the fitting parameter C represents the intercept with the ordinate. It can be seen that the intercept of neat asphalt is the largest, i.e., its line lies highest, and with the rise in rubber powder content the line position gradually decreases. S_TS is the slope of the fitted line, i.e., the sensitivity to low temperature. From the fitting results and the slopes of the curves, the sensitivity of neat asphalt to low temperature is small, and 40% RA is the most sensitive to low temperature. Furthermore, the low-temperature stability of high-content rubberized asphalt is similar to that of the rubberized asphalt with 20% rubber powder, which indicates that high-content rubberized asphalt has excellent anti-cracking performance. We can conclude that the low-temperature properties are the primary benefit of applying high contents of crumb rubber powder compared with existing works.
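Equation (1) is an ordinary least-squares line through the (T, lg S) pairs, so S_TS and C fall out of a one-degree polynomial fit. A minimal sketch with hypothetical BBR stiffness values:

```python
import numpy as np

# Hypothetical BBR stiffness moduli (MPa) at the three test temperatures (°C).
T = np.array([-24.0, -18.0, -12.0])
S = np.array([520.0, 260.0, 120.0])

# Equation (1): lg S = S_TS * T + C  ->  linear fit of lg S against T.
S_TS, C = np.polyfit(T, np.log10(S), 1)
print(f"S_TS (slope) = {S_TS:.4f} 1/°C, C (intercept) = {C:.3f}")
# A steeper (more negative) S_TS means higher low-temperature sensitivity.
```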
Fourier Transform Infrared Spectroscopy (FTIR) Fourier transform infrared spectroscopy is commonly used to analyze the chemical structure of petroleum asphalt and polymers [9,16]. The analysis relies on the absorption of infrared radiation by the polymer at different wavelengths: some polymer components absorb radiation at particular wavelengths and attenuate the infrared light, thus forming the infrared spectrum. The infrared spectrum of a substance reflects its molecular structure, and it plays an essential role in identifying particular functional groups in asphalt and polymers. The neat asphalt and the rubberized asphalt with 30% rubber powder were tested by Fourier transform infrared spectroscopy to reveal the modification mechanism. The test results are shown in Figure 12; the red and blue lines represent the transmittance of the rubberized asphalt (30% RA) and the neat asphalt at different wavenumbers, respectively. It can be seen from Figure 12 that the peak positions of the rubberized asphalt and the neat asphalt are the same, and there is no new absorption peak. The specific positions of the peaks are identified in the figure. The results are indicative of the chemical functional groups of saturated alkanes and are mainly divided into C-H vibrations and C-C skeleton vibrations [30]. The C-H vibrations include the C-H stretching vibration (absorption peak in the range of 3000-2850 cm−1) and the C-H bending vibration (near 1460 and 1370 cm−1), while the absorption peak of the C-C skeleton vibration occurs in the range of 1100-1020 cm−1. The stretching vibration absorption peak of the carbon-carbon double bond (C=C) mainly occurs in the range of 1700-1370 cm−1, while the absorption peak of the aromatic-ring C=C stretching vibration mainly occurs in the range of 1610-1370 cm−1 due to the sizeable π-conjugated system [22]. The olefin (trans) C-H out-of-plane bending vibration band is relatively stable, mainly at 965 cm−1, and the carbonyl absorption peak mainly occurs in the interval of 1800-1650 cm−1 [9,28]. Figure 12 shows visible absorption peaks at 1372, 1455, 2853, and 2923 cm−1 in the infrared spectra of the neat asphalt and rubberized asphalt samples [10]. Among them, 1372 and 1455 cm−1 belong to methyl (-CH3) bending vibrations, the absorption peak at 2853 cm−1 is caused by the C-H stretching vibration of alkanes, and the absorption peak at 2923 cm−1 arises from the stretching vibration of the methylene C-H bond [8]. As for the other absorption peaks, the peak at 751 cm−1 is attributed to the out-of-plane bending vibration of the olefin C-H [4]; at 807 cm−1, the absorption peak is likewise triggered by the out-of-plane bending vibration of the olefin C-H; and at 868 cm−1 the peak results from an out-of-plane bending vibration of the hydroxyl group (O-H) [14]. The absorption peak at 1606 cm−1 is attributed to the aromatic-ring C=C stretching vibration, and the absorption peak at 1739 cm−1 is caused by the aldehyde C=O stretching vibration [1]. Comprehensive analysis confirms that the rubber powder and asphalt in the rubberized asphalt are physically blended, and there is no chemical reaction between them; instead, the added stabilizer and viscosity reducer may have interacted.
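Peak positions like those quoted above are typically read off the transmittance curve as local minima. A minimal sketch of such peak picking, assuming the spectrum has been exported as wavenumber/transmittance arrays (names and data hypothetical):

```python
import numpy as np
from scipy.signal import find_peaks

# Hypothetical exported FTIR data: wavenumbers (cm^-1) and transmittance (%).
wavenumber = np.linspace(400.0, 4000.0, 3600)
transmittance = 95.0 - sum(
    depth * np.exp(-0.5 * ((wavenumber - center) / 12.0) ** 2)
    for center, depth in [(1372, 10), (1455, 14), (2853, 25), (2923, 30)]
)

# Absorption peaks are minima of transmittance, i.e. maxima of (100 - T).
idx, _ = find_peaks(100.0 - transmittance, prominence=3.0)
print("Detected absorption peaks (cm^-1):", np.round(wavenumber[idx], 0))
```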
Similarly to SBS-modified asphalt, when the proportion of each component in the asphalt is appropriate, SBS interacts with the asphalt immediately after being evenly dispersed into it, and the swelling phenomenon then occurs: absorbed asphalt molecules surround an isolated SBS particle, creating an SBS molecular chain in the particle after full swelling. The incorporation of rubber powder improves the high- and low-temperature stability of the neat asphalt: the rubber powder and the neat asphalt form a physical cross-linking effect, changing the proportions of the asphalt components to form a stable gel-type structure and thereby improving the high- and low-temperature performance and stability of the neat asphalt [14]. Scanning Electron Microscope (SEM) The neat asphalt, the 20% rubberized asphalt, and the 30% and 50% high-content rubberized asphalts were selected and imaged by scanning electron microscopy at a magnification of 200 times. The results are shown in Figure 13. It can be concluded from Figure 13 that, for the original asphalt, the surface of the neat asphalt is relatively smooth, but there are apparent wrinkles or silver streaks [10]. The surface of the rubberized asphalt with 20% rubber powder is smooth and flat because the viscosity reducer plays a prominent role in it; compared with the neat asphalt, there are fewer wrinkles. No rubber powder agglomeration is found on the asphalt surface, which indicates that the rubber powder has excellent compatibility with the neat asphalt [30]. These results mean that the rubberized asphalt has promising stability and that the effect of the stabilizer is remarkable. When the rubber powder content is 30%, the morphology is similar to that at 20%; there is no apparent change, which indicates that the increase in rubber powder content does not adversely affect the rubberized asphalt [11]. In contrast, when the rubber powder content is as high as 50%, the rubberized asphalt surface gradually becomes rough and shows an apparent striped texture [35]. This is because the rubber powder is evenly distributed in the neat asphalt, swells, and forms a stable three-dimensional network structure with the asphalt [15]. From this point of view, the advantages of the preparation method for high-content rubberized asphalt are demonstrated.
For the asphalt after RTFOT and PAV, compared with the original asphalt, the neat asphalt shows apparent cracks; its structure has been damaged, and its anti-aging performance is insufficient [35]. The most significant change after aging of the rubberized asphalt with 20% rubber powder is the surface folds; that is, the rubberized asphalt becomes more viscous. The 30% and 50% rubberized asphalts behave similarly; the difference is that the higher the content, the more pronounced the wrinkles, i.e., aging increases the flow resistance of the rubberized asphalt. The rubberized asphalt surface still shows no agglomeration and no structural damage [3], proving the superior aging resistance of high-content rubberized asphalt. Economic Analysis In order to further analyze the advantages of the HCRA prepared in this study, the preparation costs of HCRA are briefly analyzed, and the specific calculations are shown in Table 4.
It can be seen from Table 4 that the cost of rubberized asphalt and HCRA is about 3400 RMB/ton, while the SBS-modified asphalt on the market is usually higher than 5000 RMB/ton, leading to significant economic benefits and market competitiveness [36]. Simultaneously, the HCRA prepared in this study has been successfully applied in actual pavement engineering. The application results show that the HCRA has excellent road performance, consistent with the laboratory test results, and has broad application prospects.
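The comparison in Table 4 is simple mass-weighted arithmetic: the binder cost per ton is the sum of each component's mass fraction times its unit price. A minimal sketch; all unit prices below are hypothetical placeholders, not values from the study:

```python
# Hypothetical unit prices (RMB/ton); the real values are in Table 4.
prices = {"neat asphalt": 3800.0, "rubber powder": 1200.0, "additives": 9000.0}

def blend_cost(rubber_frac, additive_frac):
    """Mass-weighted binder cost for an internal-blending recipe."""
    asphalt_frac = 1.0 - rubber_frac - additive_frac
    return (asphalt_frac * prices["neat asphalt"]
            + rubber_frac * prices["rubber powder"]
            + additive_frac * prices["additives"])

for rubber in (0.20, 0.40, 0.50):
    print(f"{rubber:.0%} RA: ~{blend_cost(rubber, 0.03):.0f} RMB/ton")
```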
11,386.2
2020-10-01T00:00:00.000
[ "Materials Science", "Engineering" ]
Cytotoxicity of Amino-BODIPY Modulated via Conjugation with 2-Phenyl-3-Hydroxy-4(1H)-Quinolinones Abstract The combination of a cytotoxic amino-BODIPY dye and 2-phenyl-3-hydroxy-4(1H)-quinolinone (3-HQ) derivatives into one molecule gave rise to selective activity against lymphoblastic or myeloid leukemia and the simultaneous disappearance of the cytotoxicity against normal cells. The conjugation of both species can be realized via a disulfide linker cleavable in the presence of glutathione, which is characteristic of cancer cells. The cleavage liberating the free amino-BODIPY dye and the 3-HQ derivative can be monitored by ratiometric fluorescence or by the OFF-ON effect of the amino-BODIPY dye. A similar cytotoxic activity is observed when the amino-BODIPY dye and the 3-HQ derivative are connected through a non-cleavable maleimide linker. The work reports the synthesis of several conjugates, the study of their cleavage inside cells, and cytotoxic screening. Introduction Fluorescent dyes conjugated with other molecules have been essential bioimaging tools for several decades. [1][2][3][4][5][6][7][8][9][10] Their role in visualizing the appropriate process and detecting or determining the desired analyte is irreplaceable in contemporary chemical biology. As they have been extensively used in in vitro as well as in vivo assays, their toxicity should not affect the biological processes in the living system. One of the most used dyes in fluorescent labeling and monitoring is the boron-dipyrromethene dye, frequently called BODIPY. Its derivatives have been used several times for detection of pH, [11][12][13][14] bio-labeling/bio-imaging [15,16] and in various other applications. [17][18][19][20][21] It is also frequently used in conjugates with various drugs, [22][23][24][25] nanoparticles, [26,27] or proteins. [18,28] The application of BODIPY dyes in medical research and chemical biology studies was nicely reviewed by Marfin et al. in 2017. [29] Very recently, a model fluorescent system able to reflect the enhanced concentration of glutathione causing the drug release was described by our group. [30] This new drug-delivery system is based on the conjugation of 3-hydroxyquinolin-4(1H)-ones (3-HQ) as a model drug with a fluorescent amino-BODIPY dye, enabling the tracking of the whole system and the detection of drug release. Importantly, the drug and the dye are connected through a self-immolative disulfide linker allowing its selective cleavage inside a cancer cell. [31,32] This phenomenon is possible due to glutathione (GSH) acting as the linker cleavage agent. Its concentration in cancer cells, reaching up to 10 mM [33,34], is 2-3 orders of magnitude higher than in plasma and blood. [35,36] According to numerous previously published studies, the BODIPY dyes are used as fluorescent species with high intensity and low toxicity. [29] Although many of the recently developed BODIPY-drug conjugates [37][38][39][40][41][42] indicate BODIPY as a promising candidate for biological applications, to the best of our knowledge, none of the studies describe its potential cytotoxicity or even its direct application as a cytotoxic agent. Here we report the amino-BODIPY dye as an anticancer agent. Its cytotoxicity can be modulated via conjugation with 3-HQs to achieve selective cytotoxicity against leukemia cell lines. As reported previously, the GSH-mediated cleavage of the disulfide linker results in the release of the 3-HQ derivative together with the Amino-BODIPY.
[30] The different excitation and very similar emission profiles of the free Amino-BODIPY 16 and the dye bound in the conjugates, which enable the OFF-ON effect, are demonstrated for conjugate 11 in Figure 2, where the excitation and emission spectra of compounds 16 and 11 are presented. As the mechanism of GSH-mediated cleavage and the LC/MS analysis in Figures 2A and 2B depict, the nucleophilic thiol group of glutathione attacks the disulfide bond, resulting in the formation of GSH adducts 19 and 20. Intermediate 19 further reacts with an excess of GSH; the self-immolative linker cyclizes, and free Amino-BODIPY 16 is released. According to the LC/MS analysis, intermediate 20 is relatively stable, and further conversion to free 3-HQ 1 was not observed. To evaluate the cleavability of conjugates 6-15, their fluorescence spectra were measured in HEPES buffer with and without the presence of GSH (5 mM). The cleavable conjugates 11-15 have an emission maximum at around 530 nm after excitation at 510 nm. Their cleavage affords the amino-BODIPY 16 with a similar emission maximum (530 nm) achieved after excitation at a different wavelength (485 nm). Thus, by monitoring over time the ratio of the emission intensities at 530 nm obtained after excitation at 485 nm and 510 nm, the total conjugate cleavage could be detected by ratiometric fluorescence sensing (see Figure 3). As demonstrated in Figure 3A, conjugates 11-15 exhibit sufficient stability within the first three hours of the experiment when dissolved in HEPES buffer in the absence of GSH. When GSH is added as the cleavage agent, the release of Amino-BODIPY 16, accompanied by the detachment of the 3-HQs [30], is indicated by a substantial increase in the ratio of the 530 nm emission intensities obtained after 485 nm and 510 nm excitation (I485/I510) (Figure 3A). Study of Conjugate Cleavage Inside Cells Precise time monitoring of the drug release was performed in HeLa cells, where the conjugate was disrupted to a maximal level within the first several tens of minutes, as demonstrated in Figure 3B and Figures S1-S3 in the Supporting Information. When the HeLa cells were pretreated with glutathione to increase the internal thiol concentration, the cleavage was faster, and the value of I485/I510, which reflects the release of the amino-BODIPY and the drug, responded to the higher concentration of the liberated compounds. Additionally, HeLa cells were treated with these conjugates, and microscopy images of their cellular internalization before and after treatment with GSH (20 mM) were recorded. It is apparent that after GSH treatment the green fluorescence of the released Amino-BODIPY 16 appears; thus, an OFF-ON effect is observed, as exemplified in Figures 3C and 3D. Similarly, the fluorescence ratio I485/I510 of the non-cleavable conjugates 6-10 was monitored in DMSO/HEPES buffer (2:1) (Figure 4, Figure S4). In these cases, no significant changes were observed, confirming the conjugates' inertness towards GSH. The conjugates are also stable in HeLa cells, as demonstrated for the representative derivative 9 (Figure 4).
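The ratiometric readout described above reduces to a per-time-point division of two emission intensities. A minimal sketch of how such traces can be processed, with hypothetical intensity arrays standing in for the plate-reader export:

```python
import numpy as np

# Hypothetical time course (min) of 530 nm emission after the two excitations.
t = np.array([0, 10, 20, 30, 45, 60, 90, 120])
I485 = np.array([1.0, 1.4, 2.1, 3.0, 4.2, 5.1, 5.6, 5.7])  # rises on cleavage
I510 = np.array([5.0, 4.7, 4.1, 3.4, 2.7, 2.2, 2.0, 1.9])  # falls on cleavage

ratio = I485 / I510  # the reported cleavage indicator I485/I510

# Call cleavage "complete" once the ratio reaches 95% of its final value.
complete = t[np.argmax(ratio >= 0.95 * ratio[-1])]
print("I485/I510:", np.round(ratio, 2))
print(f"Cleavage essentially complete after ~{complete} min")
```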
Cytotoxic Activity Finally, the amino-BODIPY 16 and all prepared conjugates 1-15 were tested for cytotoxic activity against selected cancer cell lines (Table 1). The tests were performed on cancer cell lines derived from solid tumors as well as hematological malignancies: CCRF-CEM (acute lymphoblastic leukemia), K562 (chronic myeloid leukemia), A549 (lung adenocarcinoma), and the colorectal carcinoma cell lines HCT116 with and without functional p53 protein (HCT116 and HCT116p53, respectively). The panel also included the chemoresistant subclone CCRF-CEM-DNR (resistant to daunorubicin), overexpressing P-glycoprotein and/or lung resistance-related protein (LRP), which are pumps or detoxifying systems responsible for the most common forms of clinical resistance. To evaluate the toxicity to non-tumor cells, we used the human skin fibroblast cell line BJ and the lung fibroblast cell line MRC-5. From Table 1, we can see that the non-conjugated 3-HQs (1-5) do not exhibit any cytotoxic activity, while amino-BODIPY 16 is active against lymphoblastic as well as myeloid leukemia cell lines and also against colorectal carcinoma. This dye is also slightly toxic to the normal fibroblasts BJ and MRC with IC50 > 40 μM and to the low-density seeding variants BJ-LD and MRC-LD with higher proliferation and no contact inhibition. The connection of amino-BODIPY 16 with 3-HQs via the non-cleavable maleimide linker (conjugates 6-10) as well as via the cleavable linker (compounds 11-15) causes higher selectivity toward the CCRF-CEM lines. The exceptions are derivative 9, which is selective toward the K562 line, and derivative 15, which is entirely inactive, probably due to low According to these results, we can conclude that the conjugation of the cytotoxic Amino-BODIPY with the inactive 3-HQs alters its cytotoxicity profile and confers selectivity toward leukemia cell lines. This effect is surprisingly independent of the cleavability of the conjugates, which can be explained by the ability of the pharmacophore to interact with its target regardless of release from the conjugate. The selectivity of the conjugates toward leukemia cells could be caused by an interaction with a target specific to the CCRF-CEM or K562 cells, respectively, or by particular transport into these cell lines. The latter reason could explain the lower toxicity of the amino-BODIPY released from the conjugate compared with free amino-BODIPY 16 applied directly to the cells. Conclusion A series of target conjugates were synthesized by combining 2-phenyl-3-hydroxy-4(1H)-quinolinone (3-HQ) derivatives with the Amino-BODIPY dye. While some of them (6-10) were not cleavable in the presence of glutathione at increased concentration, the disruption of the cleavable conjugates (11-15) over time could be monitored using ratiometric fluorescence. The released Amino-BODIPY can also be detected by the OFF-ON effect. While the prepared 3-HQs appeared to be quite inactive against the selected cancer cell lines, the Amino-BODIPY was proved to possess cytotoxic activity against almost all of them as well as against proliferating non-tumor cells. When these cell lines were treated with Amino-BODIPY conjugated with 3-HQs, selectivity against lymphoblastic or myeloid leukemia appeared, and the cytotoxicity of the conjugates against normal cells disappeared regardless of the linker cleavability. The specific toxicity of the system to leukemia cells and the possibility of a synergistic effect of the Amino-BODIPY with other anticancer agents, together with the possibility of monitoring the cleavage, could make this system attractive for future studies of new theranostics.
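As an aside on the screening data in Table 1: IC50 values like those quoted above are typically obtained by fitting a four-parameter logistic (Hill) curve to viability-versus-concentration data. A minimal sketch, assuming viability fractions from a standard cytotoxicity assay (the data points are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ic50, n):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** n)

# Hypothetical viability data: concentrations (uM) and viability fractions.
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
viab = np.array([0.98, 0.95, 0.85, 0.62, 0.35, 0.15, 0.06])

popt, _ = curve_fit(hill, conc, viab, p0=[0.0, 1.0, 5.0, 1.0])
print(f"Fitted IC50 = {popt[2]:.1f} uM (Hill slope n = {popt[3]:.2f})")
```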
Materials and Methods All chemicals and solvents for the synthesis were obtained from Sigma-Aldrich. NMR spectra were measured in DMSO-d6 and CDCl3 using a Jeol ECX-500 (500 MHz) spectrometer. Chemical shifts (δ) are reported in parts per million (ppm), and coupling constants (J) are reported in Hertz (Hz). HRMS analysis was performed using an Exactive Plus Orbitrap high-resolution mass spectrometer (Thermo Fisher Scientific, MA, USA). The machine was operated in positive full-scan mode (resolution 120,000 FWHM). The chromatographic separation was performed using a Phenomenex Gemini column (C18, 50 × 2 mm, 3 μm particles) in isocratic mode with a mobile phase of 95% MeOH and 5% H2O containing 0.1% formic acid. Quantum Yield Determination Quantum yields (Φ) were calculated by the standard procedure using fluorescein in 0.1 M NaOH as a reference (Φ = 0.91) and according to equation (1): Φ = Φ_R · (I/I_R) · (A_R/A) · (η²/η_R²) (1), where Φ_R is the quantum yield of the reference fluorophore, I is the area under the emission peak, A is the absorbance at the excitation wavelength, η is the refractive index of the solvent, and the subscript R denotes the reference. Synthesis of BODIPY Conjugates The compounds 1-5 were prepared by a solid-phase chemistry approach according to the published procedure. [30] The characterization of compound 1 (1H and 13C NMR) was in accordance with the published data. [30] The characterization of compound 14 (1H and 13C NMR) was likewise in accordance with the published data. [30]
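Equation (1) above translates into a few lines of arithmetic. A minimal sketch of the relative quantum-yield calculation, using the fluorescein reference quoted above; all sample numbers are hypothetical:

```python
def quantum_yield(I, A, n, I_ref, A_ref, n_ref, phi_ref=0.91):
    """Relative fluorescence quantum yield, equation (1):
    Phi = Phi_R * (I/I_R) * (A_R/A) * (n^2/n_R^2)."""
    return phi_ref * (I / I_ref) * (A_ref / A) * (n / n_ref) ** 2

# Hypothetical integrated emission areas, absorbances, and refractive indices.
phi = quantum_yield(I=8.2e5, A=0.048, n=1.333,               # sample in water
                    I_ref=1.1e6, A_ref=0.050, n_ref=1.335)   # fluorescein in 0.1 M NaOH
print(f"Phi = {phi:.2f}")
```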
2,423.4
2021-08-23T00:00:00.000
[ "Chemistry", "Medicine" ]
Vertical plasmonic resonance coupler Efficient wavelength-selective coupling of light between subwavelength plasmonic waveguides and free space is theoretically investigated. The idea is based on a new type of vertical resonance coupling device built on plasmonic metal/insulator/metal (MIM) waveguides. The device structure consists of a vertical grating coupler in a resonance cavity formed by two distributed Bragg reflectors (DBRs). With the metal loss included, a maximum coupling efficiency around 50% can be obtained at the 1550 nm wavelength with a filtering 3 dB bandwidth around 20 nm (7 nm for the lossless case), demonstrating the feasibility of the idea for achieving high-efficiency wavelength-selective vertical coupling through optical resonance. By utilizing this coupler, a plasmonic add-drop device is proposed and theoretically demonstrated. Such compact wavelength-selective coupling devices have the potential to open up a new avenue for photonic circuitry at the nanoscale. ©2015 Optical Society of America OCIS codes: (240.6680) Surface plasmons; (130.3120) Integrated optics devices; (050.2770) Gratings; (050.6624) Subwavelength structures; (060.1810) Buffers, couplers, routers, switches, and multiplexers; (250.5403) Plasmonics. References and links 1. W. L. Barnes, A. Dereux, and T. W. Ebbesen, “Surface plasmon subwavelength optics,” Nature 424(6950), 824–830 (2003). 2. S. I. Bozhevolnyi, V. S. Volkov, E. Devaux, J. Y. Laluet, and T. W. Ebbesen, “Channel plasmon subwavelength waveguide components including interferometers and ring resonators,” Nature 440(7083), 508–511 (2006). 3. R. F. Oulton, V. J. Sorger, T. Zentgraf, R. M. Ma, C. Gladden, L. Dai, G. Bartal, and X. Zhang, “Plasmon lasers at deep subwavelength scale,” Nature 461(7264), 629–632 (2009). 4. Z. Fang, Q. Peng, W. Song, F. Hao, J. Wang, P. Nordlander, and X. Zhu, “Plasmonic focusing in symmetry broken nanocorrals,” Nano Lett. 11(2), 893–897 (2011). 5. C. M. Chang, M. L. Tseng, B. H. Cheng, C. H. Chu, Y. Z. Ho, H. W. Huang, Y. C. Lan, D. W. Huang, A. Q. Liu, and D. P. Tsai, “Three-dimensional plasmonic micro projector for light manipulation,” Adv. Mater. 25(8), 1118–1123 (2013). 6. S. M. Nie and S. R. Emory, “Probing single molecules and single nanoparticles by surface-enhanced Raman scattering,” Science 275(5303), 1102–1106 (1997). 7. T. W. Ebbesen, C. Genet, and S. I. Bozhevolnyi, “Surface plasmon circuitry,” Phys. Today 61(5), 44–50 (2008). 8. B. H. Cheng and Y. C. Lan, “Multi-layered dielectric cladding plasmonic microdisk resonator filter and coupler,” Phys. Plasmas 20(2), 020701 (2013). 9. S. A. Maier, Plasmonics: Fundamentals and Applications (Springer, 2007). 10. J. P. Tetienne, A. Bousseksou, D. Costantini, Y. De Wilde, and R. Colombelli, “Design of an integrated coupler for the electrical generation of surface plasmon polaritons,” Opt. Express 19(19), 18155–18163 (2011). 11. S. Ura, S. Murata, Y. Awatsuji, and K. Kintaka, “Design of resonance grating coupler,” Opt. Express 16(16), 12207–12213 (2008). 12. G. Roelkens, D. Van Thourhout, and R. Baets, “High efficiency grating coupler between silicon-on-insulator waveguides and perfectly vertical optical fibers,” Opt. Lett. 32(11), 1495–1497 (2007). 13. K. Kintaka, Y. Kita, K. Shimizu, H. Matsuoka, S. Ura, and J. Nishii, “Cavity-resonator-integrated grating input/output coupler for high-efficiency vertical coupling with a small aperture,” Opt. Lett. 35(12), 1989–1991 (2010). 14. Y. Zhou, M. Moewe, J. Kern, M. C. Y. Huang, and C. J.
Chang-Hasnain, “Surface-normal emission of a high-Q resonator using a subwavelength high-contrast grating,” Opt. Express 16(22), 17282–17287 (2008). 15. X. F. Li, S. F. Yu, and A. Kumar, “A surface-emitting distributed-feedback plasmonic laser,” Appl. Phys. Lett. 95(14), 141114 (2009). #223864 $15.00 USD Received 26 Sep 2014; revised 15 Dec 2014; accepted 17 Dec 2014; published 7 Jan 2015 © 2015 OSA 12 Jan 2015 | Vol. 23, No. 1 | DOI:10.1364/OE.23.000292 | OPTICS EXPRESS 292 16. S. Ura, H. Moriguchi, S. Kido, T. Suhara, and H. Nishihara, “Switching of output coupling in a grating coupler by diffraction transition to the distributed Bragg reflector regime,” Appl. Opt. 38(12), 2500–2503 (1999). 17. K. Kintaka, J. Nishii, Y. Imaoka, J. Ohmori, S. Ura, R. Satoh, and H. Nishihara, “A guided-mode-selective focusing grating coupler,” IEEE Photon. Technol. Lett. 16(2), 512–514 (2004). 18. H. Zhang and H. P. Ho, “Low-loss plasmonic waveguide based on gain-assisted periodic metal nanosphere chains,” Opt. Express 18(22), 23035–23040 (2010). 19. Y. Kou, F. Ye, and X. Chen, “Low-loss hybrid plasmonic waveguide for compact and high-efficient photonic integration,” Opt. Express 19(12), 11746–11752 (2011). 20. J. B. Khurgin and G. Sun, “Practicality of compensating the loss in the plasmonic waveguides using semiconductor gain medium,” Appl. Phys. Lett. 100(1), 011105 (2012). 21. Y. T. Wang, B. H. Cheng, Y. Z. Ho, Y. C. Lan, P. G. Luan, and D. P. Tsai, “Gain-assisted hybrid-superlens hyperlens for nano imaging,” Opt. Express 20(20), 22953–22960 (2012). 22. H. A. Haus, Waves and Fields in Optoelectronics (Prentice, 1984). 23. J. Ohmori, Y. Imaoka, S. Ura, K. Kintaka, R. Satoh, and H. Nishihara, “Integrated-optic add/drop multiplexing of free-space waves for intra-board chip-to-chip optical interconnects,” Jpn. J. Appl. Phys. 44(11), 7987–7992 (2005). 24. K. Kintaka, J. Nishii, K. Shinoda, and S. Ura, “WDM signal transmission in a thin-film waveguide for opitcal interconnection,” IEEE Photon. Technol. Lett. 18(21), 2299–2301 (2006). 25. H. A. Haus and Y. Lai, “Theory of cascaded quarter wave shifted distributed feedback resonators,” IEEE J. Quantum Electron. 28(1), 205–213 (1992). 26. W. Lin and G. P. Wang, “Metal heterowaveguide superlattices for a plasmonic analog to electronic Bloch oscillations,” Appl. Phys. Lett. 91(14), 143121 (2007). 
Introduction
In recent years, surface plasmon polariton (SPP) structures [1] have attracted intensive investigation, both theoretical and experimental, because of their unique properties. In these structures, light propagating along a metal-dielectric interface is strongly confined, with fields decaying exponentially away from the interface, which in principle allows the miniaturization of photonic devices and circuits down to the nanometer scale. By manipulating the geometric and material parameters of metal-dielectric surfaces, various applications in subwavelength waveguiding [2], light generation [3], focusing [4,5], and biomedical plasmonics [6] have been investigated. Thanks to advances in related technologies, modern nanofabrication and characterization techniques for structured metal surfaces have improved enormously. It is expected that SPP-based waveguides can offer the possibility of shrinking device structures down to the nanometer scale. SPP-based photonic circuitry may also offer an effective route to merging photonics and electronics when implementing future photonic systems based on optical fibers and photonic integrated circuits [7]. Stimulated by this plasmonic optoelectronic circuit concept, many passive and active plasmonic devices such as waveguides, couplers, filters, switches, and light sources have been demonstrated [8]. However, because of the nanoscale field confinement, coupling light between subwavelength plasmonic waveguides and free space remains challenging in many applications.

In the literature, many approaches have been reported for achieving efficient excitation of SPP waveguide modes. Adiabatic mode conversion based on tapered device structures, which mitigates the mode mismatch, is one of the fundamental approaches that has been widely used. However, many reported results still concern cases with relatively weak confinement [9]. For geometries below the diffraction limit, such as MIM waveguides with a deep-subwavelength dielectric core, the coupling efficiency is still low because of the small overlap between the excitation beam and the SPP waveguide mode [10]. On the other hand, the grating coupler (GC) is another widely used approach for coupling between a guided wave and a free-space propagating wave. Vertical coupling can be achieved through proper phase-matching design, and the mode-mismatch issue can be greatly reduced thanks to the enlarged grating coupling region. A vertical GC can thus provide functions such as far-field observation of a guided wave [11] and vertical light coupling between different optical devices [12,13]. The approach has also been utilized in implementing new light sources [14,15], optical switches [16], and guided-mode selectors [17].
The resonance grating coupler concept previously developed for dielectric waveguides [12,13] is an attractive one: high coupling efficiency and wavelength-selective coupling can be achieved simultaneously through optical resonance effects. In this work, we investigate a plasmonic resonance grating coupler structure designed specifically for subwavelength MIM SPP waveguides, with the aim of achieving efficient wavelength-selective optical coupling in a small footprint. Ideally, 100% coupling is possible if the device is lossless, as predicted by coupled-mode theory. We demonstrate a designed example that reaches 94% vertical coupling efficiency under the lossless condition. This corresponds to the optimal performance if the intrinsic metal loss of plasmonic waveguides could be overcome by some active means [18-21]. The vertical coupling efficiency can still reach around 50% when realistic metal loss is included, which corresponds to the practical coupling efficiency achievable for passive devices. After the introduction given in this section (Section 1), the physical principles and design considerations for the studied device structure are described in Section 2. In particular, coupled-mode theory in time [22] is used to explain why the ideal 100% coupling efficiency could be achieved if the metal loss can be overcome. In Section 3, the finite-difference time-domain (FDTD) numerical method is utilized to simulate the performance of the designed device. In Section 4, an ultra-compact plasmonic add-drop device composed of two vertical plasmonic resonance couplers is proposed; its footprint is thousands of times smaller than that of comparable dielectric devices [23,24], and it has the potential to perform coarse wavelength-division optical channel add/drop functionality [25] for SPP-based photonic integrated circuits. Finally, Section 5 gives a brief conclusion of the whole study.

Wavelength-selective vertical coupling by optical resonance
The schematic of the proposed plasmonic vertical coupler is depicted in Fig. 1. A GC, formed by refractive-index modulation on one cladding side of the metal-insulator-metal (MIM) waveguide, is inserted between front and rear MIM distributed Bragg reflectors (DBRs) with refractive-index modulation on both cladding sides. The optical wave is confined in the y-direction, propagates in the x-direction, and the whole structure is assumed to be uniform in the z-direction for modeling simplicity. The rear DBR is made longer so that it totally reflects the light at the Bragg wavelength, while the front DBR is made shorter so that it also functions as an input/output coupling port. The other input/output coupling port is through the GC.
Some important design considerations are explained below. The grating period of the center GC section is determined by the phase-matching condition stated in Eq. (1): the forward/backward propagating MIM waveguide modes, with propagation constant β, are phase-matched to the vertically propagating free-space wave through first-order diffraction,

β − 2π/Λ_GC = 0.  (1)

This is how vertical coupling can be achieved through a properly designed waveguide grating coupler. However, as stated in Eq. (2), the second-order diffraction of the GC also couples the forward- and backward-propagating MIM waveguide modes directly,

β − 4π/Λ_GC = −β,  (2)

which produces additional wave-reflection effects. On the other hand, the grating periods of the front and rear DBRs are determined so as to produce Bragg reflection at the same operation wavelength, according to the half-wavelength formula Λ_DBR = λ/(2 n_eff). Most importantly, when the GC is inserted between the two DBRs, the two phase-shift spacings (L_F, L_R) between the GC and the two DBRs need to be properly adjusted so as to avoid unnecessary resonances. This is because the second-order diffraction of the center GC section also produces wave reflection, as explained above, which may cause unwanted resonances after the reflected waves return from the DBRs. By inserting additional phase shifts to enforce destructive interference for these unwanted resonances, one can make sure that the resonance occurring at the operation wavelength is caused only by the cavity formed by the two DBRs. Compared to the previous dielectric cases [11,12], one important difference of the present plasmonic device structure is that we only need to put the index modulation on one cladding side of the GC to achieve high coupling efficiency. An additional reflection mirror is required in the dielectric cases to force the light output into one port [13]; such a disadvantage is completely avoided in the present plasmonic device.

If the device is designed by carefully following the above considerations, the main optical resonance in the device is defined by the cavity formed by the two DBRs, and the GC mainly functions as an output coupler. The whole device can then be considered as a single resonator directly coupled to two input/output ports, as illustrated in Fig. 1(a). Within the framework of coupled-mode theory in time, the optical field amplitude a in the resonator evolves according to [22]

da/dt = (jω_0 − 1/τ_0 − 1/τ_1 − 1/τ_2) a + √(2/τ_1) S_{+1},

where ω_0 is the resonance frequency, 1/τ_0 is the decay rate due to internal (metal) loss, 1/τ_1 and 1/τ_2 are the decay rates due to external coupling to ports 1 and 2, and S_{+1} is the incident wave at port 1. If there is only a single-frequency incident wave at port 1 with frequency ω, one can obtain the following relation for the output wave at port 2:

S_{−2} = √(2/τ_2) a = [√(4/(τ_1 τ_2)) / (j(ω − ω_0) + 1/τ_0 + 1/τ_1 + 1/τ_2)] S_{+1}.

Therefore, the power transmission from port 1 to port 2 is

T(ω) = |S_{−2}/S_{+1}|² = (4/(τ_1 τ_2)) / [(ω − ω_0)² + (1/τ_0 + 1/τ_1 + 1/τ_2)²],

from which one can see that the ideal 100% transmission occurs at resonance (ω = ω_0) for the lossless case (1/τ_0 = 0) with equal external coupling rates (τ_1 = τ_2). This simple derivation provides the theoretical basis for achieving high coupling efficiency with the studied device structure. It should be noted that the present analysis, based on the coupled-mode equations in the time domain, is approximate in the sense that all the spatial mode details are reduced to a few coefficients (i.e., resonance frequencies and decay rates due to internal loss and external coupling). We simply use this theory to establish the possibility of achieving high coupling efficiency (ideally 100%) for the proposed device structure, and do not attempt to evaluate these coefficients for performance analysis. The actual performance analysis resorts to the direct FDTD simulation presented in the next section, so that all wave-propagation effects are included in the calculation without approximation.
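To make the single-resonator picture concrete, the short Python sketch below evaluates the coupled-mode transmission formula reconstructed above. The decay times used are arbitrary illustrative values, not coefficients extracted from the device; the point is only that T = 1 at resonance when the internal loss vanishes and the two external coupling rates are equal, and that T drops below 1 once internal loss is added.

```python
import numpy as np

def cmt_transmission(omega, omega0, tau1, tau2, tau0=np.inf):
    """Port-1 -> port-2 power transmission of a single resonator coupled
    to two ports (coupled-mode theory in time):
      T(w) = (4/(tau1*tau2)) / ((w - w0)^2 + (1/tau0 + 1/tau1 + 1/tau2)^2)
    tau0: internal-loss decay time; tau1, tau2: external coupling times."""
    decay = 1.0 / tau0 + 1.0 / tau1 + 1.0 / tau2
    return (4.0 / (tau1 * tau2)) / ((omega - omega0) ** 2 + decay ** 2)

omega0 = 2 * np.pi * 2.998e8 / 1550e-9   # resonance at 1550 nm (rad/s)
tau_ext = 2.0e-12                        # illustrative external decay time (s)

# Lossless, symmetric coupling: ideal 100% transmission at resonance.
print(cmt_transmission(omega0, omega0, tau_ext, tau_ext))              # 1.0
# Internal (metal) loss comparable to the external coupling: T < 1.
print(cmt_transmission(omega0, omega0, tau_ext, tau_ext, tau0=2e-12))  # ~0.44
```

With the internal decay rate comparable to the external ones, the resonant transmission drops to about 4/9, qualitatively consistent with the roughly 50% efficiency reported for the lossy design.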
Numerical simulation results
By choosing appropriate parameters, we have designed an example device structure operating at the wavelength of 1550 nm. The transmission spectra obtained by 2D FDTD simulation are illustrated in Fig. 2(a). The dielectric property of the metal cladding is modeled by the Drude formula

ε_m(ω) = ε_∞ − ω_p²/(ω² + jγω),

where ω_p and γ are the plasma frequency and the decay rate, respectively. For forming grating structures by index modulation, we assume that two kinds of metallic materials can be used, i.e., two metals with low- and high-magnitude permittivities ε_mL and ε_mH, respectively (the ε_mL metal is specified by Drude parameters 20 and 0.01). Similarly, ε_d is the dielectric constant of the MIM waveguide in the core region. For the ideal lossless case, one can simply set γ = 0. Operating at 1550 nm, the period of the GC is chosen to be 0.983 μm; the width of the ε_mL material is 0.577 μm and its thickness is 0.0125 μm. The two DBRs have the same period of 0.46 μm, and the width of the ε_mH material is 0.27 μm. The reflection and transmission spectra of the rear Bragg reflector alone are shown in the inset of Fig. 2(a); N_RDBR = 14 periods are used to obtain total reflection. For minimizing the device size, a small aperture for the vertical coupling is adopted: the total length of the GC is 4Λ_GC (N_GC = 4), such that the output coupling is still strong enough thanks to the high contrast of the metal grating. When the GC is inserted between the rear and front DBRs, the two phase-shift lengths (L_F, L_R) are set to 0.02 μm and 0.37 μm, respectively. The reflection of the front DBR should also be fine-tuned to optimize the net transmission at the operation wavelength. For the ideal lossless case, the net vertical transmission can be as high as 94%, as shown in Fig. 2(a), which is already close to the 100% theoretical limit. To demonstrate the vertical light propagation out of the free-space port of the device, the E_x field distribution at 1550 nm is shown in Fig. 2(b), in which the 90-degree free-space out-coupling can be clearly seen. With the metal loss included, the design is slightly adjusted, and the corresponding net transmission spectrum is also plotted in Fig. 2(a) (the black line); the maximum transmission can still be close to 50%, as demonstrated by the considered design example.

In Fig. 2(b), it can be noted that when the light is coupled out of the free-space port, the diffraction effect of free-space propagation occurs and the wave fronts gradually become curved, since the beam size is only around 3 μm in our design. When the free-space coupling distance is long, this diffraction effect may produce additional optical mode-mismatch loss for the coupling. In principle, one can reduce the effect by increasing the beam size or by decreasing the coupling distance. In our design example, we aim to keep the device small and thus choose the 3 μm beam size for demonstration. If the device size is not a concern, the effect can be greatly reduced by adopting a larger beam size.
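As a quick numerical companion to the Drude description above, the following sketch evaluates ε_m(ω) = ε_∞ − ω_p²/(ω² + jγω) at 1550 nm. The plasma frequency and damping rate used here are placeholder values chosen only for illustration (they are not the parameters of the paper's ε_mL/ε_mH metals); setting γ = 0 reproduces the lossless case.

```python
import numpy as np

C0 = 2.99792458e8  # speed of light (m/s)

def drude_eps(wavelength, omega_p, gamma, eps_inf=1.0):
    """Drude permittivity: eps(w) = eps_inf - wp^2 / (w^2 + 1j*gamma*w)."""
    omega = 2.0 * np.pi * C0 / wavelength
    return eps_inf - omega_p**2 / (omega**2 + 1j * gamma * omega)

wl = 1550e-9        # operation wavelength (m)
omega_p = 6.0e15    # placeholder plasma frequency (rad/s)
gamma = 6.0e13      # placeholder damping rate (rad/s)

print("lossy metal:    eps =", drude_eps(wl, omega_p, gamma))
print("lossless limit: eps =", drude_eps(wl, omega_p, 0.0))  # gamma = 0
```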
From the numerical studies, we have found that the most sensitive physical quantity is the center wavelength of the device. Other physical quantities, such as the bandwidth, reflectivity, and coupling efficiency, are much less sensitive to the device parameters. This is because the center wavelength is determined by the resonance condition of the DBR cavity: changes of the width and/or thickness parameters slightly change the effective propagation constant of the waveguide, and thus the resonance wavelength shifts accordingly. We have performed sensitivity analyses for some of the parameters to estimate the implementation difficulties. The thickness of the GC layer is one of the most critical parameters. From our numerical simulation, the sensitivity ratio between the center-wavelength change (in nm) and this thickness change (in nm) is around 10 for our design example. With today's technologies, one can already grow 10 nm Ag layers with a thickness deviation of ±0.5 nm; by the above ratio, this translates into a center-wavelength deviation of about ±5 nm. It is therefore reasonable to expect that the center wavelength of the device can be controlled to the 10 nm order with today's technologies, which should be adequate for experimentally demonstrating the device. With further advances in fabrication, more optimization of the design, and development of possible post-fabrication fine-tuning processes, one might be able to push the control accuracy down to the nm order, which should be adequate for some applications. The sensitivity analyses for other critical parameters reach similar conclusions. We thus believe that the device structure is within reach of practical implementation.

Some more details about the numerical simulation are given below. The grid sizes in the x and y dimensions are both 3.5 nm for the presented results. The perfectly matched layer (PML) absorbing boundary condition and the dielectric volume-average method are used in the FDTD simulation. We have found that if the grid size is further reduced, the center wavelength of the device shifts somewhat, possibly due to a slight change in the calculated effective propagation constant of the optical waveguide; the other physical quantities (bandwidth, reflectivity, coupling efficiency, etc.) remain basically the same. By slightly changing the length parameters, it is easy to shift the center wavelength back to the design wavelength and obtain essentially the same spectra. With such confirmation, we believe the presented results are a trustworthy numerical demonstration of the achievable device performance.

Plasmonic channel add-drop multiplexer
The vertical grating resonance coupler described in Section 2 can already be utilized as a channel add/drop multiplexer between the MIM waveguide and free space. The signal channel at the operation wavelength can be filtered out of the MIM waveguide into the free-space port, or can be added into the reflection port of the MIM waveguide by injection from the free-space port. To perform channel add/drop multiplexing between two separate MIM waveguides, one can place two identical vertical plasmonic resonance couplers adjacent to each other to form a novel plasmonic add-drop device, as depicted in Fig. 3.
The footprint of this add-drop device is only 13.062 μm by 1.2 μm. The input light is injected at Port 1 of the multiplexer. It is coupled vertically to free space by the vertical resonance coupler if its wavelength is at the operation wavelength. Owing to the symmetric design of the device configuration, the light then enters the upper MIM waveguide and eventually exits at Port 3. The simulated reflection spectrum at Port 1, the transmission spectra at Port 2 and Port 4, as well as the output spectrum at Port 3 are shown in Fig. 4 for the lossless case. In the figure, the channel wavelengths of the Coarse Wavelength Division Multiplexing (CWDM) grid (1470, 1490, 1510, 1530, 1550, 1570, 1590, and 1610 nm) are also indicated by the yellow lines. One can see that the net transmission can be as high as 75% for the ideal lossless case, with a 3 dB bandwidth of 6.5 nm. When the metal loss is included, the net transmission is around 0.23, with a 3 dB bandwidth of 12 nm.

It is interesting to find that the free-space coupling length (L_coupling) shown in Fig. 3 affects the performance of the device. The simulated influence of L_coupling on the coupling efficiency and the center wavelength of the device is shown in Fig. 5. We have found that when L_coupling is shorter than two wavelengths, a large center-wavelength shift or significant transmission-peak splitting may occur. The optimal transmission profile demonstrated in Fig. 4 is simulated with L_coupling = 2.54 μm. The variation of the center wavelength and coupling efficiency should be caused by the extra resonance occurring between the two free-space ports. In principle, there are at least three coupled resonators (two resonance couplers plus one extra resonance between the free-space ports), which leads to the complicated oscillating behavior in Fig. 5 as well as the transmission-peak splitting exhibited in the calculated spectra. However, one can still observe that the oscillation is more pronounced when the coupling distance is small and becomes somewhat damped as the coupling distance increases, in particular for the center wavelength. This may be due to the diffraction effect of free-space propagation, which weakens the extra resonance between the two free-space ports through the larger optical mode-mismatch loss it introduces.
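As a rough check of channel isolation on the CWDM grid, one can treat the resonant response as an ideal Lorentzian with the 3 dB bandwidths quoted above. This is only an approximation (the simulated spectra in Fig. 4 show additional structure such as peak splitting); the sketch below simply evaluates the resulting leakage at the adjacent channel, 20 nm away.

```python
import numpy as np

def lorentzian_T(detuning_nm, fwhm_nm, peak=1.0):
    """Ideal Lorentzian transmission vs. detuning from the resonance."""
    return peak / (1.0 + (2.0 * detuning_nm / fwhm_nm) ** 2)

# CWDM channels are spaced 20 nm apart; the device is centered at 1550 nm.
adjacent_detuning = 20.0  # nm

for fwhm, label in [(6.5, "lossless (3 dB BW 6.5 nm)"),
                    (12.0, "with metal loss (3 dB BW 12 nm)")]:
    leak = lorentzian_T(adjacent_detuning, fwhm)
    print(f"{label}: adjacent-channel leakage ~ "
          f"{10 * np.log10(leak):.1f} dB relative to the peak")
```

Under this idealization the adjacent-channel leakage is roughly −16 dB for the lossless bandwidth and −11 dB for the lossy one, suggesting that the narrower lossless response offers noticeably better channel isolation.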
Conclusion
Coupling light from free space into the MIM plasmonic mode, and vice versa, is still a challenging issue because of the small overlap between the coupled modes. The studied plasmonic resonance coupler is free from this drawback thanks to its simple GC and DBR design. We have demonstrated a novel wavelength-selective vertical coupler capable of launching light into plasmonic waveguides or converting light in the plasmonic waveguide mode into free space. Under proper design, 94% of the light in the MIM mode can be vertically coupled to free space when the metal loss is ignored. At the telecom wavelength of 1550 nm, the efficiency of the device is around 50% when the metal loss is included in the numerical simulation. This study confirms the physical operating principle of the device. The device can be designed to match a desired operation wavelength by optimizing various parameters, including the material permittivity, the sub-component lengths, and so on. A novel structure for plasmonic channel add-drop functionality has also been proposed: two identical vertical grating resonance couplers can be combined to form a CWDM add-drop filter with a 75% transmission peak at the operating wavelength for the ideal lossless case, and close to −6 dB when the metal loss is included. This approach may provide a new coupling method for plasmonic MIM waveguides under subwavelength confinement and find useful applications in plasmonics research.

Fig. 1. (a) Schematic of the proposed vertical plasmonic resonance coupler. (b) Cross section of the device; the core of the MIM waveguide is 15 nm, and the period of the DBR (Λ_DBR) is 0.46 μm.
Fig. 2. (a) Vertical coupling efficiency and reflection spectra of the whole device under the lossless condition. Red: reflection; green: transmission; blue: coupling to free space for the lossless case; black: coupling to free space when loss is included. Inset: reflection spectrum of the rear DBR alone. (b) Simulated E_x field distribution at the wavelength of 1550 nm.
Fig. 3. Configuration of the plasmonic add-drop device, which is composed of two vertical plasmonic resonance couplers. The top coupler is the x-axis mirror image of the bottom coupler. The grating parameters are the same as those in Fig. 1.
Fig. 5. Coupling efficiency at Port 3 at the wavelength of 1550 nm for different values of L_coupling.
5,619.6
2015-01-12T00:00:00.000
[ "Engineering", "Physics" ]
Local pressure calibration method of inductively coupled plasma generator based on laser Thomson scattering measurement

Based on laser Thomson scattering (TS) measurements and finite element method (FEM) simulations of the electron density in an inductively coupled plasma (ICP), simulated local pressure calibration curves of an ICP generator are obtained by comparing the experimental and simulated electron density distributions and maxima. The equation coefficients of the theoretical model associated with the ICP generator experimental system can be obtained by fitting the simulation curve with the least-squares method, and theoretical pressure calibration curves for different absorbed powers can then be obtained. Combined with the vacuum gauge measurements, both the simulated and theoretical pressure calibration curves can give the true local pressure in the plasma. The results of the local pressure calibration at different absorbed powers show that the density gradient from the vacuum gauge sensor to the center of the coil in the ICP generator cavity becomes larger with increasing electron density, resulting in a larger gap between the measured value and the pressure calibration value. This calibration method helps to grasp the local pressure of the ICP as an external control factor and aids the study of the physicochemical mechanisms of ICP, in order to achieve higher performance in ICP etching, material modification, etc.

Measurement methods for ICP electron density include the Langmuir probe method 14, emission spectroscopy 15, microwave interferometry 16, laser interferometry 17 and laser Thomson scattering (TS) measurement 18. Among these techniques, TS can measure the electron density in a given region without disturbing the plasma, and it has very high spatial and temporal resolution 19; it is recognized as one of the most accurate electron density measurements available. We therefore use TS to measure the electron density distribution of the ICP in this paper. Accurate measurement and calibration of pressure, especially local pressure, have rarely been reported in previous research work. In 2007, Shimada 20 measured the spatial distributions of the neutral gas temperature and of the total, electron and neutral pressures by inserting a thin tube probe with a diameter of 3.15 mm into the plasma. However, such an interventional measurement itself disturbs the local environment of the measurement region, making accurate measurements difficult. In order to avoid the measurement uncertainty caused by interventional measurements, a new method of ICP local pressure calibration combining TS experiments and finite element method (FEM) simulations is proposed in this paper. First, the electron density distribution of the ICP is measured non-invasively by laser TS. The ICP pressure to be calibrated is measured by a vacuum gauge probe (TPG 201, Pfeiffer Vacuum) at the gas inlet of the ICP generator. The absorbed power of the ICP is the product of the input power of a 13.56 MHz RF power supply and the power transfer efficiency. As the power and pressure change, the plasma density changes, and the plasma load on the matching network varies accordingly.
To compare the experimental and simulated values of the electron density distribution at a specific absorbed power, the capacitance in the matching circuit is experimentally adjusted using a Smith chart to match the internal impedance of the RF power supply, so that a given power such as 400 W, 350 W or 300 W is transmitted into the plasma. The power transfer efficiency is 96.3%, 94.9% and 93.0% at absorbed powers of 400 W, 350 W and 300 W, respectively, as measured by a method similar to that of reference 21. Second, an FEM model with the same physical dimensions as the ICP generator experimental setup is established. At the same absorbed power as in the experiment, the pressure measured by the vacuum gauge is taken as the reference starting point, and the pressure is scanned towards lower values. The simulation yields a series of spatial distributions of the electron density as a function of pressure in the ICP generator cavity. Third, this series of simulated plasma electron density data is used as the ground truth and compared with the electron density data measured by TS. When both the experimental and simulated electron density distributions and maxima agree within the error tolerance, the pressure value in the simulation is taken as the corrected value for the pressure measurement. This completes the calibration of a single pressure data point of the ICP generator. This cycle can be repeated to build up a pressure calibration simulation curve for a specific ionized gas at a given absorbed power. Finally, combined with the theoretical model, the equation coefficients associated with the ICP generator experimental system are obtained by fitting the pressure calibration simulation curve using the least-squares method, so as to obtain the theoretical pressure calibration curves for the different absorbed powers. Both the simulated and theoretical calibration curves can be used for local pressure calibration in low-pressure plasma processes.

Laser TS experiment
The laser TS technique can measure the electron density and electron temperature of a plasma by measuring the secondary radiation emitted by the interaction between free electrons and the incident laser 22. When a laser with a wavelength of λ_0 enters the plasma, the free electrons in the plasma radiate electromagnetic waves under the action of the incident laser electric field. Since TS is elastic scattering, the wavelength of the scattered light is the same as that of the incident laser. However, the Doppler effect is significant because of the fast electron motion, so Doppler broadening of the TS spectrum appears. The electron density and electron temperature of the plasma can be obtained by measuring the TS spectrum and post-processing the data. As shown in Fig. 1, the ICP generator is placed in our TS experimental system, which uses the frequency-doubled 532 nm output of an Nd:YAG laser (repetition rate 30 Hz, maximum pulse energy 300 mJ, pulse width 10 ns) as the probe light, and a triple-grating spectrometer (TGS) as the detection system. The laser is delivered to the window of the plasma vacuum chamber through several high-reflection mirrors rated for high-intensity laser light. To avoid the influence of stray light on the TS signal, Brewster windows are adopted at the entrance and exit. The laser entering the ICP generator cavity is focused to the center of the plasma beam by a focusing lens. The laser passing through the plasma finally enters a beam dump, which terminates its propagation.
To retain spatial flexibility, we use an optical fiber to collect the TS signal and transmit it to the slit of the TGS system in this scheme. The collection direction of the optical fiber is at 90 degrees to both the laser propagation direction and the axial direction of the plasma beam. A set of plano-convex lenses in front of the fiber images the laser-plasma interaction region onto the fiber entrance. To avoid Rayleigh scattering from the air and reflections from other surfaces, the collection end of the optical fiber and the collection lens are placed in a dark chamber. The TS signal is transmitted through the optical fiber to the TGS system, passes through the gratings, lenses and other optical elements inside it, and is finally imaged or dispersed into a spectrum on the ICCD camera. The DG645 is a timing controller that synchronizes the laser pulse with the ICCD camera shutter; the detected signal is optimized by adjusting the ICCD camera exposure time. The TGS system effectively filters out Rayleigh scattering and stray light, which benefits the extraction of the laser TS signal. The electron density measured with TS in this experiment ranges from 10^18 m^-3 to 10^19 m^-3. The electron density distribution of the ICP measured with laser TS at 350 W absorbed power and 65 Pa pressure (measured by the vacuum gauge) is shown in Fig. 2. Experimental parameters such as the ICP pressure and absorbed power were recorded during the experiment. The pressure measured in the experiment was used as the reference starting value for the pressure-parameter scan in the plasma simulation. Parameters such as the absorbed power, coil size, and cavity dimensions were kept consistent with the experimental system when the ICP was simulated.

FEM simulation
A Frequency-Transient study is used to simulate the ICP with the COMSOL software in this paper. The diffusion model is mixture-averaged, including migration in the electric field. Reduced electron transport properties with the local energy approximation are used in the plasma property settings. The electron energy distribution function is set as Maxwellian, and the model temperature is specified as 300 K. The electron transport property is restricted to specifying the mobility only, and the isotropic reduced electron mobility is set to 4 × 10^24 (V m s)^-1. The finite element model of the ICP is developed according to the physical dimensions of the generator experimental system. Since an axisymmetric tubular structure is to be simulated, a 2D axisymmetric simulation model is established to save computational resources, as shown in Fig. 3a. The length of the tube is 15 cm, the inner diameter is 2.6 cm, and the wall thickness is 0.2 cm. The coil has 6 turns, the wire diameter of the coil is 0.4 cm, and the spacing between turns is 1 cm. The thickness of the air layer outside the tube is set to 2 cm. The tube is filled with the argon gas to be ionized. For the ionized argon gas, seven kinds of plasma chemical reactions and two kinds of surface reactions on the tube wall are considered; the reaction types and coefficients are shown in Table 1, and the surface reactions on the tube wall are shown in Table 2. Convective flux is not considered, and therefore no gas inflow or outflow is used. The air surrounding the coil is treated as vacuum. During the simulation, charge neutrality between electrons and argon ions and conservation of the neutral argon mass are imposed as constraints.
The wall surrounding the plasma is grounded, and the general wall reflection coefficient is set to 0.2. The initial electron density is set to 1 × 10^15 m^-3, and the initial mean electron energy is set to 5 V. The coil is set to a single-wire model in the magnetic field settings, specifying the absorbed power. The model is meshed with 17,356 triangular cells, and the minimum cell size is set to 45 µm. Boundary layers are created next to the walls in order to properly resolve the high gradients that form during the simulation; each boundary layer is divided into 5 sub-layers and smoothed into the interior mesh. The mesh has been refined several times. The external boundaries of the whole geometry are magnetically insulated, except for the symmetry axis. When the ICP device is first energized, all the power is dissipated in the coil. After about 1 µs, plasma ignition begins, and as the neutral gas atoms split into electrons and ions, the electrons absorb more and more power and further ionize the neutral gas. The local electron density at the center reaches an instantaneous maximum after a little more than 30 µs, and then settles to a steady state as the electrons diffuse. In order to ensure that the ICP reaches a steady state, the computation time should be set long enough. The evolution of the electron density distribution over time varies with pressure and absorbed power, but 0.1 s is sufficient to achieve a stable distribution in our study. In order to follow the evolution of the ionization over time and to confirm that the electron density distribution reaches a steady state, we have performed simulations with 20 time interpolation points between 10^-6 s and 10^-1 s.

Results and discussion
The analysis of the plasma electron density distribution was carried out after convergence. As the plasma electron density is axisymmetric, cut-line data are collected along the axis of the glass tube and, at the center of the coil, perpendicular to the axis, as shown in Fig. 3b. 2D plots of the axial and radial cut-line data at the center of the coil are given in Fig. 4, and they are compared with the experimental data obtained from the TS measurements at the corresponding positions shown in Fig. 2. The consistency between the simulated and experimental data is thus used to determine which pressure value in the simulation is the calibration value for the measured value. During the simulation, a series of electron density distributions at a given absorbed power can be obtained by scanning the pressure parameter. Capturing their maximum values and plotting them as a series of pressure-dependent points yields a local pressure correction curve for the ICP center, as shown in Fig. 5. Since these pressure calibration simulation curves vary with the absorbed power, a unified theoretical model is needed to facilitate the construction of pressure calibration curves in future experimental work. The dynamic behavior of the electron density is characterized by the ambipolar diffusion of electrons and ions. The plasma-electron rate equation based on ambipolar diffusion is obtained from the continuity equation 23

∂n_e/∂t − D_A ∇²n_e = Q(r, z, t),  (1)

where D_A is the ambipolar diffusion coefficient and n_e is the electron density. In cold plasmas like the ICP, D_A = kT_e/(Mν_c), where k is Boltzmann's constant, T_e is the electron temperature of the plasma, and M is the ion rest mass 24.
Here ν_c is the ion-neutral collision frequency, which is proportional to the neutral particle density n_g: ν_c = n_g σ_i (kT_i/M)^(1/2), where σ_i is the ion-neutral collision cross section and T_i is the ion temperature. The term Q(r, z, t) on the right-hand side of Eq. (1) is the source of plasma generation by the RF power, in which electrons ionize neutrals by collision, so the source term is expressed as Q(r, z, t) = n_e n_g K_i ε_i, where K_i is the average ionization rate coefficient (including ionization of the ground state and several excited states) and ε_i is the total ionization energy. In the steady state, characterized by ∂n_e/∂t = 0, the diffusion loss of the plasma balances the generation from the plasma source, and Eq. (1) simplifies to

D_A n_e / Λ² = n_e n_g K_i ε_i,  (2)

where the symbol Λ = 1/(∂/∂r) represents the inverse of the density gradient, which depends sensitively on the geometrical configuration and other physical conditions of the individual experiment. The electrons in the plasma move inside the discharge tube. The mean free path ℓ of an electron in the discharge tube is inversely proportional to the product of the electron scattering cross section σ_e and the neutral particle density n_g, i.e. ℓ = 1/(σ_e n_g). The electrons inside the discharge tube are accelerated by the induced and residual electric fields, E = E_in + E_re, gaining a kinetic energy of eEℓ before they collide with neutrals. The induced electric field E_in is produced by the time variation of the magnetic field caused by the coil current of the RF power supply; it is an important electron-heating source at high pressure. The residual electric field E_re is caused by ambipolar diffusion, eventually leading to the plasma potential, and may be an important electron-heating process at low pressure 25. Electrons scatter isotropically in collisions with neutral particles, thermalizing the energy they gain, and this process repeats until their temperature T_e is established. Thus, the electron temperature is proportional to the product of the mean free path and the total electric field inside the discharge tube 26:

T_e = ξ e E ℓ = ξ e E/(σ_e n_g),  (3)

where ξ is a proportionality constant. Substituting Eq. (3) into Eq. (2) yields a relation, Eq. (4), that fixes the total electric field E in terms of the neutral density n_g 24. We denote by P_in the power density inside the discharge tube provided by the induction coil of the RF power system. The power balance equation is given by 27

P_in = n_e n_g K_i ε_i + n_e n_g K_ex ε_ex + n_e n_g K_c (3m_e/M) T_e,  (5)

where K_ex and K_c are the average excitation and collision rate coefficients, respectively, ε_ex is the average excitation energy, and m_e is the electron mass. Substituting Eq. (3) and Eq. (4) into Eq. (5), we get

n_e = P_in / [n_g (K_i ε_i + K_ex ε_ex) + (3m_e/M) K_c ξ e E/σ_e].  (6)

For the present experimental conditions, the neutral density n_g is much higher than the electron density n_e, so we expect the neutral density to be linearly proportional to the pressure p, n_g = s p, where s is the proportionality coefficient. Then we get

n_e = P_in / (A p + C),  (7)

with A = s(K_i ε_i + K_ex ε_ex) and C = (3m_e/M) K_c ξ e E/σ_e. A and C are directly proportional to the total electric field E. As shown in Table 3, the same theoretical model has different parameters at the different absorbed powers, corresponding to the different relationship curves between maximum electron density and pressure shown in Fig. 5. Taking the three absorbed powers of 300 W, 350 W and 400 W as examples, the relationships between the maximum electron densities measured by laser TS and the pressures measured by the vacuum gauge have been compared with the simulated and theoretical results, as shown in Fig. 6.
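The least-squares fitting step just described can be sketched in a few lines of Python, assuming the relation n_e = P_in/(Ap + C) reconstructed in Eq. (7). The data points below are made-up placeholders standing in for one simulated calibration curve, and scipy.optimize.curve_fit is one possible routine; the inversion at the end returns the calibrated local pressure for a TS-measured maximum electron density.

```python
import numpy as np
from scipy.optimize import curve_fit

P_IN = 350.0  # absorbed power for this calibration curve (arbitrary units)

def ne_model(p, A, C):
    """Maximum electron density vs. local pressure, Eq. (7): n_e = P_in/(A*p + C)."""
    return P_IN / (A * p + C)

# Placeholder (pressure [Pa], simulated max n_e [10^18 m^-3]) pairs standing in
# for the FEM pressure scan at one absorbed power.
p_sim = np.array([20.0, 30.0, 40.0, 50.0, 65.0])
ne_sim = np.array([9.0, 7.2, 6.0, 5.2, 4.3])

# Least-squares fit of the coefficients A and C.
(A, C), _ = curve_fit(ne_model, p_sim, ne_sim, p0=(1.0, 1.0))

def calibrate_pressure(ne_measured):
    """Invert the fitted model: local pressure consistent with a measured n_e."""
    return (P_IN / ne_measured - C) / A

print("fitted A, C:", A, C)
print("calibrated local pressure for n_e = 5.0:", calibrate_pressure(5.0), "Pa")
```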
Returning to Fig. 6, it can be clearly seen that, at the same maximum electron density, the pressure measured by the vacuum gauge deviates from the simulated and theoretical matching pressure values, which is precisely why pressure calibration is needed. From the deviation comparison in Fig. 6, we see that the measurement deviation of the vacuum gauge increases with increasing maximum electron density, which means that the pressure gradient from the vacuum gauge sensor to the center of the coil in the ICP generator cavity becomes correspondingly larger. The pressure difference between the experimental and calibrated values shown in Fig. 6 is actually due to the pressure gradient between two different locations: the vacuum gauge sensor and the center of the coil in the ICP generator cavity. Since the pressure p and the density n are related by the thermodynamic equation p = nkT, their gradients are related by ∇p = ∇n kT, where T is the particle temperature 23,27. We can assume that the neutral density at the vacuum gauge sensor, n_gs, is equal to the sum of the neutral density n_gc and the ion density n_i at the center of the coil, n_gs = n_gc + n_i, and for the cold plasma of an ICP the ion temperature is approximately equal to the gas temperature, T_i ≈ T_g. Because the pressure at the center of the coil in the ICP generator cavity is p_c = n_gc kT_g + n_i kT_i + n_e kT_e and the pressure at the vacuum gauge sensor is p_s = n_gs kT_g, the pressure difference between these two positions is Δp = p_c − p_s ≈ n_e kT_e. (For instance, with n_e = 10^19 m^-3 and a typical T_e of 3 eV, Δp ≈ n_e kT_e ≈ 5 Pa.) It can be seen that with increasing electron density, the density gradient from the vacuum gauge sensor to the center of the coil in the ICP generator cavity also becomes larger, resulting in a larger gap between the measured value and the calibrated value of the pressure.

Conclusions
In this paper, the plasma electron density distributions of an ICP have been obtained by means of TS experiments and FEM simulations. The simulated calibration value for the pressure measured by the vacuum gauge has been determined by comparing simulated and experimental results for the electron density distribution and the maximum electron density under the same conditions, including the absorbed power. As a result, a calibration simulation curve has been obtained for a given absorbed power. A more accurate theoretical equation for the relationship between maximum electron density and pressure has been derived for the first time, and the simulation curves have been matched with the theoretically derived equation to obtain the exact equation parameters for given powers. As calibration examples, three theoretical fitting curves have been used to calibrate the pressures at 300 W, 350 W and 400 W for the same ICP generator. The pressure calibration results show that the pressure gradient from the vacuum gauge sensor to the center of the coil in the ICP generator cavity becomes larger with increasing electron density. Control of the local pressure at the ICP center point will help to improve the accuracy of low-pressure plasma processes such as thin-film deposition, etching, and material surface treatment in the future.
4,839.8
2022-03-18T00:00:00.000
[ "Physics" ]
Total Variation and Separation Cutoffs are not equivalent and neither one implies the other

The cutoff phenomenon describes the case when an abrupt transition occurs in the convergence of a Markov chain to its equilibrium measure. There are various metrics which can be used to measure the distance to equilibrium, each corresponding to a different notion of cutoff. The most commonly used are the total-variation and the separation distances. In this note we prove that the cutoffs for these two distances are not equivalent, by constructing several counter-examples which display cutoff in total-variation but not in separation, and with the opposite behavior, including lazy simple random walk on a sequence of uniformly bounded degree expander graphs. These examples give a negative answer to a question of Ding, Lubetzky and Peres.

Introduction
Consider an irreducible discrete-time Markov chain X = (X_t)_{t≥0}, defined on a finite state space Ω (we call a chain finite if Ω is finite). We let P denote its transition matrix. We further assume that X is reversible, that is, that there exists a probability measure π which satisfies the detailed balance equation

∀x, y ∈ Ω, π(x)P(x, y) = π(y)P(y, x).

This measure is unique because of irreducibility. Let us assume furthermore that our Markov chain is lazy, meaning that

∀x ∈ Ω, P(x, x) ≥ 1/2.  (1.1)

A particularly important case of such a Markov chain is the lazy simple random walk (SRW) on a simple graph G = (V, E), in which case Ω = V, P(x, y) = 1/(2 deg(x)) for {x, y} ∈ E, and π(x) = deg(x)/(2|E|), where deg(x) := |{y : {x, y} ∈ E}| and |·| denotes the cardinality of a set. It is a classic result of probability theory that, for any initial condition, the distribution of X_t converges to π as t tends to infinity. The object of the theory of mixing times of Markov chains is to study the characteristics of this convergence (see [16] for a self-contained introduction to the subject). We denote by P^t_x (resp. P_x) the distribution of X_t (resp. of (X_t)_{t≥0}), given that X_0 = x. For any two distributions μ, ν on Ω, their total-variation distance is defined to be

∥μ − ν∥_TV := max_{A⊆Ω} |μ(A) − ν(A)|,

and we set d(t) := max_{x∈Ω} ∥P^t_x − π∥_TV. The (total-variation) ε-mixing-time is defined as

t_mix(ε) := inf{t : d(t) ≤ ε}.

When ε = 1/4 we omit it from the above notation.

Theorem A (Chen and Saloff-Coste [7]). Let (Ω_n, P_n, π_n) be a sequence of reversible lazy Markov chains. Let λ^{(n)}_2 be the second largest eigenvalue of P_n. Then the following assertions are equivalent:
• The sequence exhibits ℓ^p-cutoff for some 1 < p ≤ ∞.
• The product condition holds for the ℓ^p-mixing times, i.e. (1 − λ^{(n)}_2) t^{(n)}_mix(ℓ^p) → ∞ as n → ∞.

Observe that under reversibility (for any fixed chain), (1.5) expresses an equivalence between the separation and the total-variation mixing times, parallel to the one, expressed in (1.6), holding between the different ℓ^p mixing times for p ∈ (1, ∞]. Hence a natural question (in light of Theorem A) is whether (under reversibility) there is cutoff in total-variation if and only if there is cutoff in separation. This is Question 5.1 in [10], where an affirmative answer was given for the class of birth and death chains (which are Markov chains for which the set of edges (x, y) with P(x, y) > 0 forms a segment). In fact, both cutoffs were shown to be equivalent to the product condition (3.2). In this note we give a negative answer to that question in general by constructing counter-examples.

Theorem 1.1. (i) Total-variation and separation cutoff are not equivalent for lazy reversible Markov chains, and neither one implies the other.
(ii) The above statement remains true within the class of lazy simple random walks on graphs of maximal degree at most 7.

Remark 1.2. We can also produce non-reversible or non-lazy counter-examples by performing artificial modifications of our chains, but this is not a very important point. Non-lazy or non-reversible chains can have very pathological behavior, and we want to underline that we are not using "unfair tricks" to produce our counter-examples.

Of course, a full proof of this statement only requires two counter-examples, as (ii) is a stronger statement than (i). However, we have chosen to also include examples that are not simple random walks, because they are much simpler. We present a total of five counter-examples. Apart from the first one, they are all lazy (weighted nearest-neighbor) random walks on bounded degree graphs, with transition rates which are bounded away from zero. The last two examples, which are a bit more technical to analyze, are lazy SRWs on a sequence of bounded degree graphs G_n := (V_n, E_n) (i.e. sup_n max_{v∈V_n} deg(v) < ∞). Note that for all our counter-examples the graph supporting the transitions contains some cycles. An interesting open problem is to determine whether Theorem B can be extended to the case of lazy weighted nearest-neighbor random walks on trees, for which it is already known (cf. [5]) that separation cutoff implies total-variation cutoff.

A sequence of Markov chains is said to display pre-cutoff (in total-variation, resp. separation) if

sup_{0<ε≤1/4} limsup_{n→∞} t^{(n)}_mix(ε) / t^{(n)}_mix(1 − ε) < ∞

(resp. with t^{(n)}_sep in place of t^{(n)}_mix). The total-variation and separation distances satisfy

d^{(n)}(t) ≤ d^{(n)}_sep(t),  (1.7)
d^{(n)}_sep(2t) ≤ 1 − (1 − d̄^{(n)}(t))² ≤ 2 d̄^{(n)}(t),  (1.8)

where d̄^{(n)}(t) := max_{x,y∈Ω_n} ∥P^t_n(x, ·) − P^t_n(y, ·)∥_TV ≤ 2 d^{(n)}(t). (The proof of (1.8) involves more computation than that of (1.7); we present a complete proof of it in Appendix A.2.) These two inequalities imply that the notion of pre-cutoff is equivalent for the two distances, and the pre-cutoff ratio of one is at most twice that of the other. In particular, cutoff in one distance implies pre-cutoff with ratio at most 2 in the other. With our examples, we shall show that this is in fact sharp in some cases: there exists a sequence of lazy reversible Markov chains for which we have cutoff in total-variation and only pre-cutoff with ratio 2 in separation, and vice versa.

Our last point of comparison between total-variation mixing and separation mixing is related to the width of the cutoff window. We say that a sequence of chains exhibits total-variation (resp. separation) cutoff with a cutoff window w_n if w_n = o(t^{(n)}_mix) and for all 0 < ε ≤ 1/4 there exists some constant C_ε > 0 (depending only on ε) such that

t^{(n)}_mix(ε) − t^{(n)}_mix(1 − ε) ≤ C_ε w_n for all n

(resp. with t^{(n)}_sep in place of t^{(n)}_mix). Note that the window defined in this manner is not unique, but informally "the" cutoff window is given by the "smallest such w_n". Our examples demonstrate that the cutoff windows for total-variation and separation do not have the same behavior. The following result is due to Chen and Saloff-Coste [6, Theorem 3.4]; we present a much simpler proof in the Appendix.

Theorem C. Let (Ω_n, P_n, π_n) be a sequence of lazy irreducible finite chains which exhibits total-variation cutoff with a cutoff window w_n. Then w_n = Ω(√(t^{(n)}_mix)).

The bound given by Theorem C is obviously sharp for the biased random walk on a segment. Conversely, some very standard Markov chains, like the lazy SRW on the n-dimensional hypercube, have a cutoff window w_n ≫ √(t^{(n)}_mix) (here w_n = n and t^{(n)}_mix = (1/2 ± o(1)) n log n). As indicated in Remark 1.6, the laziness assumption in Theorem C can be replaced by the assumption that inf_n min_{x∈Ω_n} P²_n(x, x) > 0 (as is the case for simple random walk on a sequence of bounded degree graphs).
In light of Theorem C, one might expect that whenever separation cutoff occurs for a sequence of discrete-time lazy chains, the width of the separation cutoff window is Ω(√(t^{(n)}_sep)). We are unaware of any previously analyzed example in which this fails. We find it remarkable that, as the following remark asserts, the width of the separation cutoff window for a sequence of discrete-time lazy SRWs on a sequence of bounded degree graphs can in fact be a constant! This, or more precisely the mechanism that allows such behavior (see § 2.4 for more on this point), demonstrates that the separation distance can exhibit profoundly different behavior than the total variation distance. Our counter-examples show that the cutoff window in one distance can be as small as allowed even if there is no cutoff for the other distance:

Remark 1.4. We will construct sequences of bounded degree graphs such that the corresponding sequences of lazy SRWs exhibit the following behaviors (resp.):
(i) There is no separation cutoff, but there is total-variation cutoff with window √(t^{(n)}_mix).
(ii) There is no total-variation cutoff, but there is separation cutoff with window 1.

In § 2.4 we refine the statement of (ii) and describe further surprising properties of the relevant example for (ii) above (listed in § 2.4 as properties (i)-(v)).

Remark 1.5. Let δ_n ∈ (0, 1). We call a sequence of discrete-time chains (Ω_n, P_n, π_n) δ_n-lazy if, for all n, P_n(x, x) ≥ δ_n for all x ∈ Ω_n. It is not hard to extend the proof of Theorem C and show that if a sequence of δ_n-lazy chains exhibits total-variation cutoff with a window w_n, then w_n = Ω(√(δ_n(1 − δ_n) t^{(n)}_mix)). Theorem C can also be extended to the continuous-time setup, with the additional assumption that the sum of the transition rates from any given state is bounded above by 1 (or by some absolute constant).

Remark 1.6. Let G_n = (V_n, E_n) be a sequence of connected non-bipartite simple graphs of maximal degree d_n. Consider the sequence of (non-lazy) SRWs on G_n. Then P²_n(v, v) ≥ 1/d_n for every v ∈ V_n. By considering P² rather than P, it follows from the previous remark that if the sequence exhibits total-variation cutoff with a window w_n, then w_n = Ω(√(t^{(n)}_mix / d_n)). This is in fact sharp, by considering a sequence of random d_n-regular graphs of size n for some d_n such that lim_{n→∞} d_n = ∞ and d_n = o(log n / log log n) [17, Theorem 3].

1.1. Organization of the note. In § 2 we describe the construction of our examples and our general strategy. We also describe relevant examples due to Aldous and Pak. In § 3 we introduce a general framework which, under a certain condition, allows one to reduce the study of the mixing-time to the study of the hitting time of a special point. In § 4 we describe two examples of sequences of Markov chains which exhibit total-variation cutoff but do not exhibit separation cutoff. The first example, Example 1, demonstrates that (1.7) may be sharp (even when the r.h.s. of (1.7) equals 1). The second example, Example 2, is a weighted nearest-neighbor random walk on a bounded degree graph with transition probabilities which are bounded away from 0 and 1. In § 5 we construct an example of a sequence of Markov chains that exhibits separation cutoff but no total-variation cutoff (Example 3). Finally, in § 6 we transform Examples 2 and 3 into examples of sequences of lazy SRWs on bounded degree expander graphs.
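Before turning to the constructions, the basic objects above can be illustrated numerically. The following Python sketch builds the lazy biased birth-and-death chain of Figure 1 in § 2.1 below (probability 1/3 of stepping towards the center z, 1/6 of stepping away, holding probability 1/2), computes d(t) and d_sep(t) from matrix powers, and reports the 1/4-mixing times; up to lower-order corrections these should come out near 6n and 12n respectively, and the pointwise bound d(t) ≤ d_sep(t) of (1.7) can be checked along the way. The segment length and thresholds are illustrative choices only.

```python
import numpy as np

def biased_segment_chain(n):
    """Lazy birth-and-death chain on {-n, ..., n}: holding probability 1/2,
    probability 1/3 of stepping towards the middle state z = 0 and 1/6 of
    stepping away from it (cf. Figure 1); endpoints reflect into holding."""
    m = 2 * n + 1
    P = np.zeros((m, m))
    for i in range(m):
        x = i - n
        P[i, i] += 0.5
        if x > 0:
            P[i, i - 1] += 1 / 3                  # towards z
            P[i, i + 1 if x < n else i] += 1 / 6  # away from z
        elif x < 0:
            P[i, i + 1] += 1 / 3
            P[i, i - 1 if x > -n else i] += 1 / 6
        else:                                     # at z: symmetric
            P[i, i - 1] += 0.25
            P[i, i + 1] += 0.25
    return P

def stationary(P):
    """Stationary law of a birth-and-death chain via detailed balance."""
    pi = np.ones(P.shape[0])
    for i in range(P.shape[0] - 1):
        pi[i + 1] = pi[i] * P[i, i + 1] / P[i + 1, i]
    return pi / pi.sum()

n = 30
P = biased_segment_chain(n)
pi = stationary(P)
Pt = np.eye(P.shape[0])
d, dsep = [], []
for _ in range(16 * n):
    Pt = Pt @ P
    d.append(0.5 * np.abs(Pt - pi).sum(axis=1).max())  # d(t)
    dsep.append((1.0 - Pt / pi).max())                 # d_sep(t)
d, dsep = np.array(d), np.array(dsep)

t_mix = 1 + int(np.argmax(d <= 0.25))
t_sep = 1 + int(np.argmax(dsep <= 0.25))
print(f"t_mix(1/4) ~ {t_mix}  (6n = {6 * n});  t_sep(1/4) ~ {t_sep}  (12n = {12 * n})")
print("d(t) <= d_sep(t) for all t:", bool((d <= dsep + 1e-12).all()))
```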
The reason we first describe Examples 2 and 3 is that the key ideas of our constructions are more transparent in these examples.

2. An overview of the main ideas of our constructions
2.1. A very basic chain with different cutoff times for separation and total variation. In this section we settle for a high-level description of some key ideas. Let us first present a very simple Markov chain which exhibits cutoff in both distances (see Figure 1), but for which the mixing-time in separation is twice as large as that in total variation. Consider a random walk on a segment [a, b] of length 2n which has a constant bias towards the middle point, which we call z (see Figure 1). Most of the equilibrium measure is concentrated on a small neighborhood of z, and for this reason (cf. Proposition 3.3) the total-variation mixing-time corresponds to the time needed to hit z (starting from either of the end-points). The system displays cutoff because this hitting time is concentrated around its mean.

Figure 1. A very simple chain for which the separation mixing-time is twice as large as the total-variation mixing-time (12n and 6n, respectively). The transition rates (apart from at the special states a, b and z) are 1/3 in the z direction and 1/6 in the opposite one (the holding probability is 1/2), making the chain travel at speed 1/6 towards z.

The separation mixing-time, on the other hand, is twice as large. Roughly speaking, this is because for P^t(a, b) to come close to its equilibrium value, "information" has to pass from one end to the other. The time required for this to occur corresponds more or less to the sum of the times needed to reach z from a and from b, respectively (see Proposition 3.8). This scheme, with two extremal opposite initial conditions, though not ubiquitous among Markov chains, appears in many natural examples for which cutoff has been proved: e.g. the lazy SRW on the hypercube (see [16, Theorem 18.3]), the Ising model at high temperature [19], and the adjacent-transposition shuffle on the segment [15].

2.2. An idea to avoid cutoff in separation while keeping that in total-variation. Our idea for producing counter-examples with total-variation cutoff but only pre-cutoff in separation is to modify the structure (state space and transition rates) of the simple chain above (Figure 1) on one side only (say, the side of b), in order to break the symmetry. To be precise, in Example 2 we first set the holding probabilities on both sides to be 3/4 (and consider the obtained chain as the "original chain", as opposed to Example 1, for which the chain in Figure 1 serves as the "original chain") before modifying the b-side. We want to perform our modifications in the following manner:
• We want to keep the property that every path from a to b goes through z, which should still bear a positive proportion of the equilibrium mass.
• We want a to remain the initial condition from which it takes the longest time to reach equilibrium (equivalently, to hit z). More precisely, we want that, also after the modification, the distribution of the hitting time of z, T_z := inf{t : X_t = z}, starting from a, still stochastically dominates the distribution of T_z starting from any other initial state. Moreover, we want the hitting time distribution of z, starting from any state between a and z (including a), to remain unchanged.
• We want the hitting time of z from initial state b to become non-concentrated, while remaining of the same order of magnitude as the mixing-time of the whole chain.
Moreover, we want this hitting time to remain (stochastically) larger than the hitting time of $z$ starting from any other state which lies between $b$ and $z$, and to become stochastically dominated by the hitting time distribution of $z$ (from $b$) in the original chain (which equals the hitting time distribution from $a$ in the modified chain).

In this manner, the hitting time distribution of $z$ under $P_a$ remains unchanged (and in particular, remains concentrated). Moreover, after the modification it is still the case that $d(t) \approx P_a[T_z > t]$, and thus by the aforementioned concentration there is still cutoff in total-variation (see Proposition 3.3). Using Proposition 3.8, we deduce that $d_{\mathrm{sep}}(t_{\mathrm{mix}} + t) \approx P_b[T_z > t]$, and so there is no cutoff in separation, as the hitting time distribution of $z$ under $P_b$ in the modified chain is no longer concentrated. To perform such a modification, we borrow ideas from previous constructions of Pak (for Example 1) and Aldous (for Example 2), which we present now.

2.3. Related Constructions. When the product condition (Definition 3.1) was shown to be a necessary condition for cutoff, it was conjectured that it should also be a sufficient one for "nice" chains. However, two counter-examples constructed, respectively, by Aldous and Pak (see [5, Example 8.1], [7] and [16, Chapter 18] for a more detailed description and analysis) show that in general the product condition does not imply cutoff. The mechanisms used to prevent cutoff in those two constructions are of different natures.
• Aldous' example (Figure 2) locally looks like a biased random walk on a segment, so that most of the equilibrium measure is concentrated on a small neighborhood of the end-point towards which the walk is biased (we call this end of the segment $z$ and the opposite one $b$). To avoid cutoff, the half of the segment closer to $z$ is split into two distinct parallel branches. The transition rates on these branches are tuned so that there is still a bias towards $z$, but such that one path is slower than the other. Starting furthest away from equilibrium (i.e. at state $b$) we have two possible scenarios to reach $z$, given by the two distinct branches, and the probability of each is bounded away from 0 and 1. As the speed along the two branches is different, the CDF of the hitting time distribution of $z$ starting from $b$ has two abrupt jumps. Consequently, $d^{(n)}(t)$ exhibits two distinct abrupt drops and there is no cutoff.
• Pak's idea is to start with a sequence of chains which exhibits cutoff and to modify it by adding transitions which are such that with a constant rate (which is chosen to be somewhere between the spectral gap and the inverse of the mixing-time of the original chain, say their geometric mean) the system is brought to equilibrium at once. For the modified Markov chain, the total-variation distance decays (up to a negligible error) exponentially with the rate of the newly added transitions, and hence cutoff does not occur, nor even pre-cutoff.

Figure 2. A version of Aldous' example. The walk is always biased towards $z$, but the speed of the walk depends on the branch. On the top branch, as well as on the rest of the segment, the transition rates are 1/6 in the $z$ direction and 1/12 in the opposite one (the holding probability is 3/4), whereas on the bottom branch the (exit) rates are twice as large (and the holding probability is 1/2), resulting in a larger speed.
As a result, two transitions occur for the total-variation distance, at times 9n and 12n respectively, where n denotes the total distance from $z$ to $b$ and the length of each of the two parallel branches is ⌈n/2⌉ (above, n = 14). The rates at $b$, $z$ and at the branching point are not very relevant, but we display them for the sake of concreteness.

In our Example 1 (see Figure 3), we adapt Pak's idea: on the $b$-side (of the chain from Figure 1) we add transitions from states on the $b$-side to the center of mass $z$, and we choose the inverse of the rate to be of the same order as the mixing-time (which is of the order of the length of the segment: n). This makes the hitting time of $z$ started from $b$ non-concentrated and (stochastically) smaller than started from $a$. Moreover, after this modification, all of the properties described in the beginning of § 2.2 are satisfied. In our Example 2 (see Figure 4), we simply replace the $b$-side by Aldous' construction, and set the holding probability on the $a$-side to be 3/4 (which is the holding probability of the slow branch of the $b$-side). After this modification, all of the properties described in the beginning of § 2.2 are satisfied.

2.4. An idea to keep cutoff in separation while avoiding that in total-variation. For this part we must rely on a different idea. What we want to alter in our chain is the way the separation distance shrinks to zero. Loosely speaking, in the original chain on the segment, the separation mixing-time is determined by the sum of the hitting times of $z$ from $a$ and $b$, since $z$ is the only channel of communication between the two extremities. Our construction (Example 3) relies on the following idea (see Figure 5). We take the length of the line segment to be $2(M+1)n$ for some large (fixed) integer M.
• We connect the two sides of the segment at a second point $z'$ which is far from the center of mass $z$. We do so by merging the two states which are at distance n from $z$ (one on the $a$-side and one on the $b$-side) into a single state $z'$. This connection maintains the cutoff in separation. However, it has the effect of shortening the separation cutoff time by some constant factor, while, as we now describe, drastically altering the nature of the abrupt transition of $d_{\mathrm{sep}}(t)$ around the (separation) cutoff time. It follows from our analysis of Example 3 and the refined analysis of Example 5 in § 6.5 that, provided that M is taken to be sufficiently large:
(i) Also after creating the connection at $z'$ we have that $\lim_{n\to\infty}\sup_t \big|d^{(n)}_{\mathrm{sep}}(t) - \max\big(0,\, 1 - P^t_n(a,b)/\pi_n(b)\big)\big| = 0$.
(ii) Due to the connection of A and B at the point $z'$, up to negligible terms, around the separation cutoff time $P^t_n(a,b)$ is supported by trajectories which never get much closer to $z$ than $z'$ is, and so are contained in a set whose stationary probability is exponentially small in n.
(iii) Let $T^{a,b}_{z'}$ (Definition 3.4) be a random variable distributed as a convolution of the hitting time distribution of $z'$ started from $a$ with that started from $b$ (in this case the two distributions are identical). Around the (separation) cutoff time, $P^t_n(a,b)/\pi_n(b)$ can be understood in terms of the behavior of $T^{a,b}_{z'}$ in the large deviation regime (namely, the cutoff occurs around the time t at which $P[T^{a,b}_{z'} \le t]$ becomes comparable to $\pi_n(z')$).
(iv) $P^t_n(a,b)/\pi_n(b)$ increases sharply around $t^{(n)}_{\mathrm{sep}}$ (and decays exponentially for $t < t^{(n)}_{\mathrm{sep}}$), and continues to do so for $\Theta(n)$ steps around $t^{(n)}_{\mathrm{sep}}$ (in particular, shortly after $t^{(n)}_{\mathrm{sep}}$, $(a,b)$ no longer minimizes $P^t_n(x,y)/\pi_n(y)$).
By (i), it follows that $w_n = 1$ is a (separation) cutoff window (and we can take $C_\varepsilon = C|\log\varepsilon|$, for some absolute constant C, for all $\varepsilon \in (0, 1/4]$).
(v) $\sup_t P^t_n(a,b)/\pi_n(b) = \Theta\big(\max_t P[T^{a,b}_{z'} = t]/\pi_n(z')\big) = \Theta(2^n/n) \to \infty$ as $n \to \infty$.
This behavior (namely, on the one hand having property (i) and on the other having properties (ii), (iv) and (v)) is atypical and quite surprising at first sight. We are not done yet, as after creating the connection at $z'$ there are two symmetric parallel distinct branches from $z'$ to the center of mass $z$, resulting in the hitting time of $z$ from either $a$ or $b$ being concentrated. Consequently, there is still cutoff in total-variation (as by Proposition 3.3).
• We break the symmetry (between the two branches, but not between $a$ and $b$) in order to "destroy" the cutoff in total-variation, by making the speed along the two paths which link $z'$ to $z$ different, as in Aldous' example (Figure 2). Observe that, as opposed to Examples 1-2, here $a$ and $b$ play symmetric roles (the chain looks the same starting from either one of them). As one should expect from property (ii) above (provided that M is sufficiently large), breaking the symmetry as described above does not influence the asymptotic pattern of convergence in separation, and (i)-(v) above remain valid. However, the quantitative analysis of this example turns out to be more intricate than that of the first two.

2.5. Constructing counter-examples which are lazy SRWs on bounded degree graphs. It was observed by Peres and Wilson that the sequence of chains in Aldous' example could be modified into a sequence of lazy SRWs on bounded degree expander graphs (see Definition 3.6). In [18] Lubetzky and Sly constructed explicit 3-regular expanders with total-variation cutoff. We use similar ideas to transform our Examples 2-3 into SRWs on bounded degree graphs (Examples 4-5). Our constructions include one new idea: by introducing a sufficient amount of symmetry, (roughly speaking) we are able to reduce the analysis of Examples 4-5 to that of Examples 2-3. Consequently, the analysis of the asymptotic convergence profile of $d_n(t)$ is simpler than in [18] (at the cost of having maximal degree ≤ 7 rather than 3).

3. Preliminaries

The aim of this section is to introduce some general theory which shall reduce the analysis of our Examples 1-3 to the analysis of hitting time distributions of a specific state. The results appearing in this section are later generalized in § 6.1 (these generalizations reduce the analysis of Examples 4-5 to the analysis of hitting time distributions of a specific set). All proofs are deferred to the appendix. As we shall only prove the more general versions, we now describe the correspondence between the results of this section and the ones from § 6.1: Proposition 6.4 corresponds to Proposition 3.3, Lemma 6.3 to Lemma 3.5 and Proposition 6.5 to Proposition 3.8.

Definition 3.1. We say that a family of reversible Markov chains satisfies the product condition if $t^{(n)}_{\mathrm{rel}} = o\big(t^{(n)}_{\mathrm{mix}}\big)$. Because of the following well-known fact (e.g. [16, Proposition 18.4]), all our counter-examples satisfy the product condition: a sequence of lazy reversible chains satisfies the product condition whenever it exhibits a pre-cutoff (either in total-variation or separation) and $\lim_{n\to\infty} t^{(n)}_{\mathrm{mix}} = \infty$.

Given $z \in \Omega$ we let $T_z := \inf\{t : X_t = z\}$ denote the hitting time of $z$. The following result allows us to characterize the mixing-time of the chain in terms of the hitting time of a given point which carries a positive proportion of the mass.
As hitting times are sometimes easier to control than mixing-times, it will assist us in determining the total-variation profile of convergence to equilibrium in Examples 1-3.

Proposition 3.3. Let $(\Omega_n, P_n, \pi_n)$ be a sequence of lazy reversible irreducible finite Markov chains which satisfies the product condition. Let us furthermore assume that there exists $z_n \in \Omega_n$ such that $\inf_n \pi_n(z_n) > 0$. Then, setting $\tau_n(\delta)$ to be the $\delta$-quantile of the hitting time of $z_n$ from the worst starting point, we have $\lim_{n\to\infty}\sup_{t\ge 0}\big|d^{(n)}(t) - \max_x P_x[T_{z_n} > t]\big| = 0$. Note that in particular the result shows that total-variation cutoff occurs if and only if $\tau_n(\cdot)$ displays the following abrupt transition: $\tau_n(\varepsilon)/\tau_n(1-\varepsilon) \to 1$ for every fixed $\varepsilon \in (0,1)$.

To characterize the separation time, we introduce a notion of "double-hitting time".

Definition 3.4. Given $x$, $y$ and $z$ in Ω, we let $T^{x,y}_z$ denote a random variable obtained by taking the sum of two independent realizations of $T_z$, once under $P_x$ and once under $P_y$. That is, $P[T^{x,y}_z = t] = \sum_{s=0}^{t} P_x[T_z = s]\, P_y[T_z = t-s]$ (3.8). In particular, if every path from $x$ to $y$ goes through $z$, then for all $t \ge 0$, (3.9) holds.

All our examples will be sequences of chains whose spectral gaps are uniformly bounded away from zero, that is, ones satisfying
$$(\star)\qquad \inf_n \mathrm{gap}(P_n) > 0.$$
Although this is not necessary, working with such chains substantially simplifies the analysis of our examples. To check this condition, we use the notion of the Cheeger constant and the well-known discrete analog of Cheeger's inequality (3.10) [3, 4, 21] (the proof can also be found at [16, Theorem 13.14]). We define the Cheeger constant of the chain to be
$$\Phi := \min_{A\,:\,\pi(A) \le 1/2} \frac{\sum_{x\in A,\, y\notin A} \pi(x) P(x,y)}{\pi(A)}.$$
We call a sequence of chains $(\Omega_n, P_n, \pi_n)$ an expander family if $\inf_n \Phi_n > 0$. The following result implies that a sequence of reversible chains satisfies (⋆) if and only if it is an expander family.

Theorem 3.7. Let $\lambda_2$ be the second largest eigenvalue of a reversible transition matrix on a finite state space. Let Φ be as in Definition 3.6. Then $\Phi^2/2 \le 1 - \lambda_2 \le 2\Phi$. It is rather straightforward to check in all of our examples that the Cheeger constant is bounded away from zero.

Proposition 3.8. Let $(\Omega_n, P_n, \pi_n)$ be a sequence of lazy reversible irreducible finite Markov chains which satisfies (⋆). Let us furthermore assume that there exist $z_n \in \Omega_n$, sets $A_n, B_n \subset \Omega_n$ with $A_n \cup B_n = \Omega_n \setminus \{z_n\}$, and $a_n \in A_n$, $b_n \in B_n$, such that (i) $\inf_n \pi_n(z_n) > 0$; (ii) for any $x \in A_n$ and $y \in B_n$, $P_x[T_{z_n} < T_y] = 1$; together with conditions (iii)-(v) referred to in the proof below. Then the separation profile is governed by the double-hitting time $T^{a_n,b_n}_{z_n}$.

Proof. We want to show that $P^t_n(x,y)/\pi_n(y)$ achieves its smallest value for $(x,y) = (a_n, b_n)$, up to a negligible correction. According to (iv) we do not need to worry about the case when both $x$ and $y$ lie in $A_n$. For the other cases, condition (iii) combined with Lemma 3.5 guarantees that (3.14) holds. Finally, applying Lemma 3.5 again yields (3.15). This allows us to conclude the proof by noticing that the right-hand side of (3.15) is o(1) (using (i) and (v)).

Remark 3.9. We note that for lazy chains condition (v) in Proposition 3.8 follows from the condition $\lim_{n\to\infty}\mathrm{dist}(a_n, z) = \infty$ (which is satisfied in Examples 1-3), where $\mathrm{dist}(a_n, z)$ is the minimal $k$ such that $P^k(a_n, z) > 0$. To see this, consider the non-lazy path the chain performed from $a_n$ to $z$ by time $T_z$, $\gamma = (\gamma_0 = a_n, \gamma_1, \dots, \gamma_\ell = z)$ (i.e. for all $i < \ell$, $\gamma_{i+1} \ne \gamma_i$, and possibly after spending some time at $\gamma_i$ the chain moved to $\gamma_{i+1}$). The conditional law of $T_z$, given γ, is that of a sum of ℓ independent geometric random variables with parameter 1/2, and so by the local CLT its mode is at most $C/\sqrt{\ell} \le C/\sqrt{\mathrm{dist}(a_n, z)}$. Finally, note that the mode of a mixture is at most the maximal mode of a distribution in the mixture.
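To make the mechanism behind Proposition 3.3 concrete, here is a small numerical sketch (our own illustration, not from the paper) of the chain of Figure 1; the boundary conventions at a, b and z, which the text leaves open, as well as all names and parameter values, are our own assumptions:

```python
import numpy as np

def figure1_chain(n):
    """Lazy biased walk on {-n,...,n} with z = 0: hold w.p. 1/2 and, away
    from the special states a = -n, b = n, z = 0, step toward z w.p. 1/3
    and away from z w.p. 1/6 (the conventions at a, b, z are assumptions)."""
    P = np.zeros((2 * n + 1, 2 * n + 1))
    for s in range(-n, n + 1):
        i = s + n
        if s == 0:                                # z: hold 1/2, +-1 w.p. 1/4
            P[i, i], P[i, i + 1], P[i, i - 1] = 0.5, 0.25, 0.25
        elif abs(s) == n:                         # endpoints: forced toward z
            P[i, i], P[i, i - np.sign(s)] = 0.5, 0.5
        else:
            P[i, i] = 0.5
            P[i, i - np.sign(s)] = 1 / 3          # toward z
            P[i, i + np.sign(s)] = 1 / 6          # away from z
    return P

n = 30
P = figure1_chain(n)
w, v = np.linalg.eig(P.T)                         # stationary distribution
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

Q = P.copy(); Q[n] = 0.0; Q[n, n] = 1.0           # absorb at z for the tail
mu = np.zeros(2 * n + 1); mu[0] = 1.0             # start at a
nu = mu.copy()
for t in range(1, 12 * n + 1):
    mu, nu = mu @ P, nu @ Q
    if t % (2 * n) == 0:
        tv = 0.5 * np.abs(mu - pi).sum()          # ||P^t(a,.) - pi||_TV
        print(f"t={t:4d}  TV={tv:.3f}  P_a[T_z > t]={1 - nu[n]:.3f}")
```

Both printed columns collapse together around t = 6n, illustrating that the total-variation distance from the worst state tracks the tail of $T_z$ when $\pi(z)$ is bounded away from zero.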
4. Total-variation cutoff without separation cutoff examples

In this section we describe two similar examples of sequences of reversible chains which exhibit total-variation cutoff but no separation cutoff. The analysis of both examples is extremely similar. We present both examples since, while the first demonstrates that (1.7) is indeed sharp, it is much harder to transform it into an example of lazy SRWs on bounded degree expander graphs.

Example 1. Given $n \ge 2$, set $\Omega_n := A \cup \{z\} \cup B$ where $A = A_n := \{a = a_n, a_{n-1}, \dots, a_1\}$ and $B = B_n := \{b_1, b_2, \dots, b_{n-1}, b_n = b\}$. For notational convenience we write $a_0 := z =: b_0$. The matrix $P_n$ has positive transition rates on the set of (un-oriented) edges formed by consecutive states on each segment, together with "green" edges joining the states of B to $z$ (cf. Figure 3). With a small abuse of notation we define $e^L_1$ and $e^B_1$ to be two distinct parallel edges. To each of these edges we associate conductances (or weights) as follows, with $w_n(e^L_n) = 2^{-n}(n-1)$. We let $P_n$ be the transition matrix of the (1/2)-lazy random walk on the graph $(\Omega_n, E)$ with conductances $w_n$, i.e. we set $P_n(x,y) := w_n(x,y)/(2 w_n(x))$ for $y \ne x$, where $w_n(x) := \sum_{y\in\Omega_n} w_n(x,y)$, with the convention that $w_n(z, b_1) = w_n(e^L_1) + w_n(e^B_1)$. This Markov chain is reversible with respect to $\pi_n(\cdot) \propto w_n(\cdot)$.

Figure 3. A schematic representation of the transition rates for Example 1. On the segments A and B the transition rates away from and towards the center of mass $z$ are equal respectively to 1/6 and 1/3 (on the A side) and $(n-1)/6n$ and $(n-1)/3n$ (on the B side). The rate for using a green edge to land on $z$ is equal to $1/2n$. The rates for using green edges in the other direction have a more complicated expression prescribed by reversibility. These rates are described below despite the fact that they play no role in our analysis.

A simple calculation shows that $\lim_{n\to\infty}\pi_n(z) = 1/4$. The transition matrix obtained from $w_n$ satisfies, in particular, $P_n(x,x) = 1/2$ for all $x \in \Omega_n$. Note that for this chain, condition (⋆) is easily verified using Theorem 3.7. Since under $P_{a_n}$, $T_z$ is concentrated around time 6n, to prove total-variation cutoff around time 6n for this sequence of chains (using Proposition 3.3), we only need to verify that $a_n$ is the initial state from which $T_z$ is (stochastically) the largest. A crucial fact which shall assist us in this task is an identity, (4.4), relating the law of $T_z$ under $P_{b_i}$ to its law under $P_{a_i}$, valid for all $i \in [n]$ and all t. The reason for this identity is the following: we couple $X^A$ and $X^B$, starting from $a_i$ and $b_i$ (resp.), in the following manner: with probability 1/2 both stay put; with probability $(\frac{1}{2} - \frac{1}{2n})$, $X^A$ and $X^B$ make "the same move", namely a $\pm 1$ step towards/away from $z$ with conditional probability 2/3 and 1/3, resp. (unless the current position of the chain is either $a_n$ or $b_n$, in which case the move has to be towards $z$); and with probability $1/(2n)$, $X^B$ is sent directly to $z$ while $X^A$ moves towards/away from $z$ with probability 2/3 and 1/3 (unless it is located at $a_n$). We do not need to specify how the coupling is defined after $X^B$ has hit $z$.
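The coupling just described is straightforward to simulate; the following sketch (our own, with a forced move toward z at the endpoint, a convention the text does not fix) checks the pathwise domination $T^B_z \le T^A_z$ and the concentration of $T^A_z$ around 6n:

```python
import random

def coupled_hitting_times(n, i0, rng):
    """One run of the coupling of Example 1: X_A and X_B start at index i0
    (= distance from z) on the a- and b-sides and move in lockstep, except
    that X_B is sent straight to z with probability 1/(2n) at each step."""
    i, t, TB = i0, 0, None       # i: common position while B is not yet at z
    while i > 0:
        t += 1
        u = rng.random()
        if u < 0.5:
            continue                              # both hold
        if u < 0.5 + 1 / (2 * n) and TB is None:
            TB = t                                # B jumps directly to z
        # A (and B, while still coupled) makes a +-1 step:
        if i == n or rng.random() < 2 / 3:
            i -= 1                                # toward z (forced at i = n)
        else:
            i += 1
    return t, (TB if TB is not None else t)       # (T_A, T_B)

rng, n, trials = random.Random(1), 40, 2000
runs = [coupled_hitting_times(n, n, rng) for _ in range(trials)]
assert all(tb <= ta for ta, tb in runs)           # pathwise T_B <= T_A
print("mean T_A =", sum(ta for ta, _ in runs) / trials, " (6n =", 6 * n, ")")
print("mean T_B =", sum(tb for _, tb in runs) / trials)
```

The run confirms that B, which either mirrors A or is killed early, always reaches z no later than A does.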
A way to describe $X_t$ starting from B before it hits $z$ is the following: at each step it is killed (hits $z$) with rate $1/(2n)$ and, conditionally on not being killed, it performs "the same" random walk as that on A (in terms of the index of its current position) but with holding probability $n/(2n-1)$. Moreover, it follows from the above discussion that $T_z$ under $P_{b_i}$ is stochastically dominated by $T_z$ under $P_{a_i}$. We now turn to the task of verifying that there is no cutoff in separation. Note that conditions (i)-(ii) of Proposition 3.8 hold by construction, condition (v) holds by Remark 3.9, while condition (iv) holds by (3.8). Lastly, condition (iii) of Proposition 3.8 follows from the analysis above.

We now describe a variant of the previous example which is a nearest neighbor lazy weighted random walk on a bounded degree graph with bounded transition probabilities.

Example 2. Set $\Omega_n := A \cup \{z\} \cup B \cup C$, where $A := \{a_1, \dots, a_{2n}\}$, $B := \{b_1, \dots, b_{2n}\}$ and $C := \{c_1, \dots, c_{n-1}\}$ (cf. Figure 4). For notational convenience we write $a_0 := z =: b_0 = c_0$ and $c_n = b_n$. Consider the following transition matrix:
• $P_n(x,x) = 3/4$ for all $x \in \Omega_n \setminus C$, and $P_n(c_i, c_i) = 1/2$ for all $i \in \{1, \dots, n-1\}$;
• $P_n(z, a_1) = P_n(z, b_1) = P_n(z, c_1) = 1/12$.
When at a state of degree two or three (other than $z$), conditioned on making a non-lazy step, the chain moves away from (resp. towards) $z$ with conditional probability 1/3 (resp. 2/3). For vertices of degree 2: along the green edges, the rates away from and towards the center of mass $z$ are equal respectively to 1/12 and 1/6, and along the red edges they are equal to 1/6 and 1/3, respectively. The transitions away from vertices of degree 1 and 3 are given on the figure. States $a_{2n}$, $b_{2n}$ and $z$ play here the same respective roles as $a_n$, $b_n$ and $z$ in the previous example.

A simple calculation (similar to (4.3)) yields that $\lim_{n\to\infty}\pi_n(z) = 2/7$. We argue that, for all $t \ge 0$ and $i \in [2n]$, the hitting time of $z$ started from $b_i$ stochastically dominates the one started from any other state at the same distance from $z$ (4.12). In particular, $b_{2n}$ is the state from which $T_z$ is stochastically the largest. Since the hitting time of $z$ under $P_{a_{2n}}$ is concentrated around time $t = 24n$, by Proposition 3.3 the sequence exhibits total-variation cutoff around time 24n. The last inequality in (4.12) is trivial. For the first one, we consider the case where $P_n$ is replaced by $P'_n$, which satisfies $2P'_n(c_i, c_{i+1}) = 1/3 = P'_n(c_i, c_{i-1})$ and $P'_n(c_i, c_i) = 1/2$ for $1 \le i \le n-1$, and $P'(x,y) = P(x,y)$ elsewhere. As adding extra laziness stochastically increases the hitting time $T_z$ (as in Remark 3.9, consider the law of γ, the non-lazy path performed by the chain by time $T_z$; clearly it is invariant under this transformation, while the conditional law of $T_z$, given γ, can only increase stochastically), the desired comparison holds (where $P'$ denotes the distribution of the modified chain with the increased holding probability on $C_n$), and the same holds when $b_i$ is replaced by $c_i$. To prove that $b_{2n}$ is the vertex from which the hitting time of $z$ is the largest, we need to prove the following two inequalities, valid for $i \in \{1, \dots, n\}$ and all $t \ge 0$:
$$P_{c_i}[T_z > t] \le P_{b_i}[T_z > t] \quad\text{and}\quad P_{b_i}[T_z > t] \le P_{b_{i+n}}[T_z > t]. \tag{4.15}$$
Both can be proved by coupling arguments. For the first one, we can couple the non-lazy paths of the chains starting from $b_i$ and $c_i$ until they reach either $b_n$ or $z$ (the second being at position $c_j$ when the first is at position $b_j$), and then, in the case they reach $b_n = c_n$, let them evolve together until they reach $z$. The larger laziness along the path through B until the merging time implies the stochastic domination. For the second inequality, the case $i = n$ follows from the fact that starting from $b_{2n}$ the chain has to go through $b_n$ before reaching $z$.
For $i < n$, we can couple the chains starting from $b_i$ and $b_{i+n}$ until the pair of chains reaches either $(b_n, b_{2n})$ or $(z, b_n)$ (the second chain being at position $b_{j+n}$ when the first is at position $b_j$), and conclude using the case $i = n$. As in the previous example, we can apply Proposition 3.8. The reason why separation cutoff does not occur is that, when starting from $b_{2n}$, the hitting time $T_z$ is not concentrated. Indeed, it is concentrated around 18n under the conditional probability measure given that the walk crosses from $b_n$ to $z$ via the fast branch C, and around 24n given that it crosses via the slow branch; this yields the two abrupt jumps in the CDF. While this result is rather elementary (we use some surgery to compare $T_z$ with a sum of independent variables, and then the law of large numbers for this sequence), the proof in full detail is long to expose (cf. [5, Example 8.1]) and we choose to leave it as an exercise. Applying Proposition 3.8 for an adequate choice of sets and states (here $(a_{2n}, b_{2n}, A_n, B_n \cup C_n)$ plays the role of $(a_n, b_n, A_n, B_n)$ from Proposition 3.8) yields (4.17). In particular, there is no cutoff in separation.

5. Separation cutoff without total-variation cutoff example

In the following example the analysis of the sharp transition of $d_{\mathrm{sep}}(t)$ is reduced to the analysis of the behavior of a sum of i.i.d. random variables in the large deviation regime. The analysis below is too coarse for the purpose of determining the width of the cutoff window. We later present a refined analysis for Example 5 (which is the bounded degree un-weighted version of Example 3) in § 6.5, which shows that in fact the (separation) cutoff window is of width $C|\log\varepsilon|$ for some absolute constant $C > 0$. The analysis in § 6.5 is built upon the analysis of Example 3 below, as it relies (in a non-quantitative manner) on the fact that certain large deviation estimates hold uniformly over compact sets (the identity of the large deviation rate function is not important for the analysis in § 6.5).

Example 3. Let $M \ge 10$ be a fixed integer whose exact value shall be determined later. Consider the state space depicted in Figure 5.

Figure 5. A schematic representation of the transition rates for Example 3. When at a state of degree two or four (other than $z$), conditioned on making a non-lazy step, the chain moves away from (resp. towards) $z$ with conditional probability 1/3 (resp. 2/3). The transition rates away from and towards the center of mass $z$, from degree two states, are equal respectively to 1/6 and 1/3, except on the segment C, due to increased holding probability. The transition rates away from the rest of the states are specified in the figure.

This chain is a modification of Aldous' example (which was discussed in § 2). The difference lies in the introduction of an additional branch B to the graph. This branch has no effect on the total-variation profile of the convergence to equilibrium, but crucially modifies the separation profile, as $P^t_n(a,b)/\pi_n(b)$ (recall $a := a_{nM}$ and $b := b_{nM}$) is the quantity that takes the longest time to reach equilibrium (i.e., up to negligible correction, $(x,y) = (a,b)$ maximizes $1 - P^t_n(x,y)/\pi_n(y)$ for all relevant t). A standard calculation yields that
$$\lim_{n\to\infty}\pi_n(z) = 2/11 \quad\text{and}\quad \lim_{n\to\infty} 2^n\, \pi_n(z') = 6/11. \tag{5.1}$$
By symmetry, the law of $T_z$ starting, resp., from $a_i$ and $b_i$ is identical for all i, and by the Markov property it is stochastically increasing in i (for $i > j$, to reach $z$ from $a_i$ (resp. $b_i$) the chain must first hit $a_j$ (resp. $b_j$)).
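The two-speed mechanism that destroys concentration can be seen in a simulation. Below is a deliberately simplified caricature (our own; in particular the walk commits to a branch once it is at distance n, and the branch is chosen with probability 1/2, whereas in the true chain the branch choice is only bounded away from 0 and 1):

```python
import random

def hit_time_from_b(n, rng):
    """Caricature of the b-side of Example 2: common segment from distance
    2n down to n at speed 1/12 (hold 3/4), then either the slow branch
    (speed 1/12, hold 3/4) or the fast branch C (speed 1/6, hold 1/2)."""
    def segment(length, p_toward, p_away):
        d, t = length, 0
        while d > 0:
            t += 1
            u = rng.random()
            if u < p_toward:
                d -= 1
            elif u < p_toward + p_away:
                d += 1
        return t
    t = segment(n, 1 / 6, 1 / 12)                 # b_{2n} -> b_n
    if rng.random() < 0.5:
        return t + segment(n, 1 / 6, 1 / 12)      # slow branch: total ~ 24n
    return t + segment(n, 1 / 3, 1 / 6)           # fast branch: total ~ 18n

rng, n = random.Random(0), 50
times = sorted(hit_time_from_b(n, rng) for _ in range(4000))
for q in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"quantile {q}: {times[int(q * len(times))]}  (18n={18*n}, 24n={24*n})")
```

The quantiles cluster near 18n and 24n with a gap of order n in between: the hitting time from b spreads over two well-separated plateaus, so no cutoff can occur.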
Only minor efforts are necessary to prove rigorously that $a$ and $b$ are the points in $A \cup B \cup D$ for which the hitting time $T_z$ is stochastically the largest (the coupling arguments are similar to the ones developed in the previous section), for any choice of $M > 1$. Due to the different holding probabilities along the two branches C, D, the distribution of $T_z$ under $P_a$ is not concentrated around its mean. Thus, by Proposition 3.3, there is no total-variation cutoff, and the total-variation asymptotic profile is given by (5.2). To show that there is separation cutoff, it suffices to prove that
$$\liminf_{n\to\infty}\ \inf_t\ \Big[\min_{x,y\in\Omega_n} P^t_n(x,y)/\pi_n(y) - \min\big(1,\, P^t_n(a,b)/\pi_n(b)\big)\Big] = 0, \tag{5.3}$$
and to show that $\min\big(1, P^t_n(a,b)/\pi_n(b)\big)$ displays an abrupt transition. Let us start with the second point. According to Lemma 3.5 (first inequality of (3.7)), we have (5.4). By definition, $T^{a,b}_{z'}$ is the sum of two independent hitting times of a biased random walk on a segment of length Mn (from one end-point to the one towards which there is a bias). We make some efforts to compute the large deviation behavior of this sum.

Lemma 5.1. Consider a lazy random walk $(Z_t)_{t\ge 0}$ on $\mathbb{Z}_+$ with rates $p(x, x+1) = 1/3$, $p(x+1, x) = 1/6$, $x \in \mathbb{Z}_+$. Let $T_N$ be the first hitting time of N. We have (5.5): $N^{-1}\log P[T_N = \lfloor sN\rfloor] \to -\Psi(s)$, where Ψ is the rate function defined in the proof below.

Proof. Let $X'$ be the random walk with the same rates on $\mathbb{Z}$, and let $T'_N$ be the first hitting time of N for this walk. By the Markov property, $T'_N$ is the sum of N i.i.d. copies of $T'_1$, and hence we can use Cramér's Theorem (see e.g. [8, Chapter 2]) to obtain the large deviations for $T'_N$ below its mean. If one decomposes according to the value of $X'_1$, we notice that the Laplace transform $f(\lambda) := E[e^{\lambda T'_1}]$ satisfies the quadratic relation $f(\lambda) = e^{\lambda}\big(\tfrac{1}{3} + \tfrac{1}{2} f(\lambda) + \tfrac{1}{6} f(\lambda)^2\big)$, and we deduce the right value for $f(\lambda)$ from this relation (the fact that $f(0) = 1$ and the continuity of f indicate which root to choose in (5.7)). Note that the derivative of $\log f(\lambda)$ at zero is equal to 6, which implies that $\Psi(6) = 0$ (alternatively, $E[T'_1] = 6$, hence by Cramér's Theorem it must be the case that $\Psi(6) = 0$). As Ψ is non-negative (since $\log f(0) = 0$), it must be the case that it attains a global minimum at 6, which implies that $\Psi'(6) = 0$ and $\Psi''(6) > 0$. Now, note that the $T_i - T_{i-1}$ are independent variables, which are dominated by $T'_1$ and which converge (when i tends to infinity) to $T'_1$ in law. In particular, by dominated convergence (and Cesàro's Theorem) we have convergence of the normalized log-Laplace transforms for any λ strictly below the critical value implicit in the quadratic relation above, and thus in that case the result follows from the Gärtner-Ellis Theorem [8]. Finally, the local large deviation estimate (the result on $P[T_N = \lfloor sN\rfloor]$) can be deduced from the large deviation principle using the fact that, due to laziness, $P[T_N = t+1] \ge \tfrac{1}{2} P[T_N = t]$ for all t. We leave it as an exercise.

Note moreover that the convergence (5.5) holds uniformly in $s \in K$ for any compact K (this can be deduced e.g. from (5.9)). A consequence of (5.4) and the previous lemma, in conjunction with (5.1) and Lemma 3.5, is that the separation cutoff time is asymptotically $s_M n$, where $s_M$ is given by $2M s^*$ and $s^*$ is the unique solution in $(0,6)$ of $\Psi(s^*) = \frac{\log 2}{2M}$. In what follows we let $s \in (s_M, 12M]$ be fixed. We first use Lemma 3.5 to reduce to the case $x = a_i$, $y = b_j$, $i, j \ge Mn/2$. Set $E := \{a_i : i \ge \frac{Mn}{2}\} \cup \{b_i : i \ge \frac{Mn}{2}\}$. By (3.7), for any $x \in \Omega_n$ and $y \in \Omega_n \setminus E$ we have the required lower bound on $P^{\lceil sn\rceil}_n(x,y)/\pi_n(y)$. Finally, to treat the case $x = a_i$, $y = b_j$ (the cases $(a_i, a_j)$ or $(b_i, b_j)$ are treated in the same manner), $i, j \ge Mn/2$, we use again Lemma 3.5, which reduces the problem to the law of $T^{a_i,b_j}_{z'}$; the latter is (cf.
the proof of Lemma 5.1) a sum of $i + j$ independent random variables (not identically distributed), and the corresponding errors vanish as $\lim_{n\to\infty}\sup_{i,j \ge Mn/2}$ of them is 0. One deduces from the Gärtner-Ellis Theorem [8] and the following consequence of laziness, (5.18), that the l.h.s. in the second line satisfies (5.19). As $2M\Psi\big(\frac{s}{2M}\big) < \log 2$ (since $s \in (s_M, 12M]$) and δ can be chosen arbitrarily small, (5.18) (second line) and (5.19) imply that for sufficiently large n, for any i, j satisfying $i + j \ge \frac{sn}{6} + \eta n$, we have the desired bound. Combining this with (5.18) (first line) and (5.14), we can conclude that (5.21) holds.

5.1. Concerning Remark 1.3. Note that by performing a minor modification in the above construction we can bring the pre-cutoff ratio for total-variation to the largest possible value: 2. A way to achieve this is to make one of the branches linking $z'$ to $z$ much faster than the other (instead of only twice as fast, as in Example 3, we want the ratio of speeds to tend to infinity). What we can do is to make these branches of length $\lceil\sqrt{n}\,\rceil$ while A and B are of length n. Furthermore, we choose the speed on one branch to be 1/6 while that on the other is $1/(6\sqrt{n})$, by increasing the holding probability on this branch (see Figure 6). Using reasoning similar to the analysis of Example 3, one can show that for this construction there is separation cutoff around time 12n (note that here $-\log\pi_n(z') = \Theta(\sqrt{n})$, which by (5.4) implies that for $t_n := \lceil(12-\varepsilon)n\rceil$, $P^{t_n}_n(a,b)/\pi_n(b) = o(1)$). We can also find a similar example with transition rates bounded uniformly from zero by considering two branches of different lengths, but in that case the analysis turns out to be more intricate.

6. Transforming the examples into lazy SRWs on bounded degree graphs

For a graph G one defines the lazy Cheeger constant $\mathrm{ch}_{\mathrm{Lazy}}(G)$ of the lazy SRW on G, which coincides with Definition 3.6 (see e.g. [16, Remark 7.2]). We say that G is a c-lazy expander if $\mathrm{ch}_{\mathrm{Lazy}}(G) > c$. We say that a sequence of finite graphs $(G_n)_{n\ge 1}$ is a family of c-lazy expanders if $\inf_n \mathrm{ch}_{\mathrm{Lazy}}(G_n) > c$. In our new context, the center of mass is a set which contains a positive fraction of the vertices. We shall relate the mixing-time of the chain to the hitting time of this set. Mutatis mutandis, the results of Section 3, and in particular Lemma 3.5, can be adapted to this new context, but only if the set and the starting point satisfy a special relation:

Definition 6.2 (Balanced sets). For any $Z \subset \Omega$ we denote the hitting time of Z by $T_Z := \inf\{t : X_t \in Z\}$.
• We say that Z is balanced seen from $x \in \Omega$ if, for all t such that $P_x[T_Z = t] > 0$, the hitting position satisfies $P_x[X_{T_Z} \in \cdot \mid T_Z = t] = \pi_Z(\cdot)$, where $\pi_Z(\cdot) = \frac{\mathbb{1}_{\{\cdot\in Z\}}\,\pi(\cdot)}{\pi(Z)}$ is π conditioned on the set Z.
• We say that Z is balanced seen from the set A if it is balanced seen from x for all $x \in A$.
• We define $T^{x,y}_Z$ to be a random variable distributed like the sum of two independent realizations of $T_Z$, once under $P_x$ and once under $P_y$. That is, for all $t \ge 0$, $P[T^{x,y}_Z = t] = \sum_{s=0}^{t} P_x[T_Z = s]\, P_y[T_Z = t-s]$.

Note that sets are not likely to be balanced by "pure luck", and we will be careful to introduce a sufficient amount of symmetry when constructing our graphs, so that our center of mass will be balanced seen from many starting points. However, this property cannot be satisfied for all starting points, and we will have to deal with the remaining initial vertices separately (and show that they are irrelevant for determining the worst-case total-variation and separation distances), by using a crude $\ell^2$ estimate (Lemma 6.8).

Lemma 6.3. Let $(\Omega, P, \pi)$ be a finite irreducible lazy reversible Markov chain and consider $x, y \in \Omega$ and $Z \subset \Omega$ which is balanced seen from both x and y. (i) For all $t \ge 0$ we have $P^t(x,y)/\pi(y) \ge P[T^{x,y}_Z \le t]$ (6.3). (ii) If, moreover, every path from x to y goes through the set Z, then for all $t \ge 0$ we have a matching upper bound, up to a correction factor involving $(1-\pi(Z))/\pi(Z)$:
(6.4). We use this result directly, but also to prove the following key propositions, whose aim is to replace Propositions 3.3 and 3.8.

Proposition 6.4. Let $(\Omega_n, P_n, \pi_n)$ be a sequence of lazy reversible irreducible finite chains which satisfies the product condition. Assume that for each n there exist sequences of sets and vertices $I_n, Z_n \subset \Omega_n$, $a = a(n) \in \Omega_n$ which satisfy: (i) $\inf_n \pi_n(Z_n) > 0$; (ii) $Z_n$ is balanced seen from $I_n$ for all n; together with conditions (iii)-(iv) referenced below. Then the total-variation profile is governed by the hitting time of $Z_n$.

Proposition 6.5 is the analogous statement for the separation distance; in particular, there is separation cutoff if and only if $T^{a_n,b_n}_{Z_n}$ is concentrated around its median.

Remark 6.6. Note that the results presented above are generalizations of those presented in Section 3. Hence we shall only prove the more general versions, in the Appendix.

6.2. Building blocks of our constructions. Let us now describe the building blocks of our constructions. We assume for simplicity that n is an even integer. To produce the analog of a biased nearest-neighbor random walk, our constructions must include structures which look like regular trees (for which the SRW has a bias towards the leaves). We must also take care to add some extra connections to avoid producing dead-ends at the leaves (which could lead to a small Cheeger constant). We must moreover introduce extra symmetries to ensure that the center of mass is balanced seen from all vertices which are sufficiently far from it. Finally, we "stretch" the edges which are far away from the center of mass (that is, replace each such edge by a path of length L, for some fixed large constant L), to ensure that the worst-case total-variation and separation distances are attained by vertices which are far away from the center of mass (which is balanced seen from those vertices).

Step 1: Let $T_a = (V_a, E_a)$ be a binary tree of depth n rooted at $a$ (in the rest of the construction we keep calling $a$ the root, even though the graph will no longer be a tree). Replace each edge between a pair of vertices belonging to the first n/2 generations of $T_a$ by a path of L edges, where L is an integer which does not depend on n. As L shall remain fixed, we omit the dependence on L from our notation. In the course of the proof we will have to require L to be sufficiently large for the purpose of applying a certain crude $\ell^2$ estimate. We call the obtained graph $H^1_n$. It is a tree rooted at $a$ and we denote its set of leaves by $L_n := (u_1, \dots, u_{2^n})$ ($L_n$ stands for the n-th generation of $T_a$), where the labels are chosen in an arbitrary fashion. On $H^1_n$ the walker starting from $a$ will have a bias towards the set of leaves, which can be considered as the center of mass of this graph, since it contains a positive proportion of the vertices. The parameter L is present only to make the walk slower: the expected number of steps to cross an L-path is $2L^2$, i.e. if $v \in H^1_n$ is either the root $a$ or a vertex of degree 3 adjacent to three degree-2 vertices, then $E_v[\inf\{t : D(X_t, v) = L\}] = 2L^2$, where D denotes the graph distance (see the numerical check below). This shall assist us in verifying that the worst-case total-variation and separation distances are attained by vertices which are far away from the center of mass. The problem with this construction is that, seen from a vertex which is not $a$, the set of leaves is not balanced. To cope with this defect, we add n extra "generations" of vertices, which make the center of mass balanced from "many" starting points.
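The value 2L² is a one-line birth-and-death computation; here is a minimal numerical check (our own sketch; by the symmetry of the three identical L-paths meeting at v, the distance from v evolves as the one-dimensional chain below):

```python
import numpy as np

def expected_crossing_time(L):
    """E_0[T_L] for the lazy walk on {0,...,L}: hold w.p. 1/2; conditioned
    on moving, step +-1 with probability 1/2 each (forced to +1 at 0).
    Solves the linear system E_i = 1 + sum_j P(i,j) E_j with E_L = 0."""
    A = np.zeros((L, L))
    b = np.ones(L)
    A[0, 0], A[0, 1] = 0.5, -0.5                  # E_0 = 1 + E_0/2 + E_1/2
    for i in range(1, L):
        A[i, i] = 0.5
        A[i, i - 1] = -0.25
        if i + 1 < L:
            A[i, i + 1] = -0.25                   # E_L = 0 drops out
    return np.linalg.solve(A, b)[0]

for L in (2, 5, 10, 20):
    print(L, expected_crossing_time(L), "vs 2L^2 =", 2 * L * L)
```

The agreement is exact: the lazy walk needs on average 2L² steps to first reach distance L, so an L-path slows the walk down by a factor of order L² without changing where it ends up.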
Step 2: For all $1 \le m \le n$ we label the vertices of the "(n+m)-th generation" (they are at distance $(L+1)n/2 + m$ from $a$) as $\{u^k_{i_1,\dots,i_m} : i_1, \dots, i_m \in [4],\ k \in [2^{n-m}]\}$, and we connect them to generation $n+m-1$ using the following scheme: for all $k \in [2^{n-m}]$, the vertices $u^k_{i_1,\dots,i_{m-1},1}, u^k_{i_1,\dots,i_{m-1},2}, u^k_{i_1,\dots,i_{m-1},3}, u^k_{i_1,\dots,i_{m-1},4}$ are connected to $u^{2k-1}_{i_1,\dots,i_{m-1}}$ and $u^{2k}_{i_1,\dots,i_{m-1}}$. We call the obtained graph $H^2_n$. The "center of mass" of $H^2_n$ is the set $L_{2n}$ (it bears roughly half of the total mass of $H^2_n$), which is balanced seen from any vertex in $H^1_n$.

Steps 3.1 and 3.2: We now want to plug (attach) to the leaf set of $H^2_n$ "two paths" with different speeds (to have something similar to the structures present in Examples 2 and 3). The construction is the following (see Figure 7): (i) We start with a rooted binary tree T of depth n (assume $n \ge 4$), and let us call 1 and 2 the two neighbors of the root and $T_1$ and $T_2$ the subtrees rooted at 1 and 2, respectively. (ii) In $T_1$ we add edges between any pair of vertices which have a common ancestor and are not leaves. (iii) Finally, we assign labels to the leaf sets of $T_1$ and $T_2$ in such a way that the two labeled trees (prior to step (ii), that is) are isomorphic (see e.g. Figure 7), and we merge each leaf of $T_1$ with the leaf of $T_2$ carrying the same label. We let $T_n$ denote the obtained graph. (iv) We let $T'_n$ denote the graph which is obtained by the same construction, in which we also add edges within $T_2$ in step (ii), using the same rule as for $T_1$ (see Figure 7). To each vertex $v \in L_{2n}$ we glue a copy of $T_n$ ($v$ is merged with the root of $T_n$), and we obtain $H^{3,1}_n$. If we glue a copy of $T'_n$ (to each $v \in L_{2n}$) instead of $T_n$, we obtain $H^{3,2}_n$. For both graphs we call $L_{3n}$ the set of vertices at distance $(L+5)n/2$ (i.e. maximal distance) from $a$.

Figure 7. Representations of $T_n$ (on the left) and $T'_n$ (on the right) for n = 4. The red edges are those added in step (ii). In step (iv) leaves with the same label are merged.

Finally, we want to link together all the vertices of $L_{3n}$ in order to avoid dead-ends in the graph. We choose to link them together using an explicit expander (see e.g. [1, 20] for examples of explicit constructions of expanders), so that (total-variation) mixing occurs rapidly once $L_{3n}$ is reached.

Step 4: We let $F_n = (V_n, E_n)$ be a family of explicit 3-regular c-lazy expanders with $V_n = [2^{3n-1}]$. We glue together $F_n$ and $H^{3,i}_n$ ($i = 1, 2$), without adding vertices, by identifying $V_n$ with $L_{3n}$. More precisely, we start with a copy of $H^{3,i}_n$ with root $a$. We label the vertices of $L_{3n}$ by $z_1, \dots, z_{2^{3n-1}}$ (the labeling is arbitrary). We then connect $z_i$ with $z_j$ if and only if $\{i,j\} \in E_n$. We call the final result of our construction $H^{4,i}_n$ ($i = 1, 2$), and we call $a$ its root. With some effort, and using the tools developed in the following sections, the reader can check that the lazy SRW on $H^{4,1}_n$ exhibits pre-cutoff but not cutoff in total-variation. This is a SRW version of Aldous' counter-example.

6.3. A sequence of lazy SRWs on bounded degree expanders with total-variation cutoff and no separation cutoff. The following is a modification of Example 2 into a sequence of lazy SRWs on a sequence of bounded degree graphs.

Example 4. Take a copy of $H^{3,1}_n$ with root $b$ and a copy of $H^{3,2}_n$ with root $a$. We glue the two together by merging the vertices of $L_{3n}$ (of both graphs): we give labels
$z_1, \dots, z_{2^{3n-1}}$ to the vertices lying in $L_{3n}$ of each of the two graphs, and then merge each pair of vertices which share the same label. Finally, we build extra connections between $z_1, \dots, z_{2^{3n-1}}$ using an expander graph $F_n$ with $2^{3n-1}$ vertices, as in Step 4. We let $G^1_n := (V^1_n, E^1_n)$ denote the obtained graph. In order to apply Propositions 6.4 and 6.5, we need to identify which vertices and sets will play which role.
• The center of mass $Z_n$ is given by the $2^{3n-1}$ vertices which are linked by the expander.
• $a$ is the vertex which maximizes (stochastically) the hitting time of $Z_n$.
• The pair of vertices $(x,y)$ which (up to negligible terms) attains the minimum of $P^t_n(x,y)/\pi_n(y)$ (for all $t \ge 0$) is given by $(a,b)$.
• The sets $A_n$ and $B_n$ are chosen to be the largest sets of points around $a$ and $b$ (resp.) such that $Z_n$ is balanced seen from $I_n := A_n \cup B_n$. Namely, these are the vertices within respective distance $(L+1)n/2$ from $a$ and $b$ (the vertices of the copy of $H^1_n$ in both $H^{3,1}_n$ and $H^{3,2}_n$).
Indeed, due to Step 2 of the construction, the set $L_{2n}$ of $H^{3,1}_n$, respectively $H^{3,2}_n$ (i.e. the collection of vertices whose distance from $a$ (resp. $b$) is $(L+3)n/2$), is balanced seen from $A_n$, resp. $B_n$. This implies that the distribution of $X_{T_{Z_n}}$ is uniform on $Z_n$. Step (iv) of the construction of $T_n$ is there to guarantee that $T_{Z_n}$ and $X_{T_{Z_n}}$ are independent (and hence that $Z_n$ is balanced seen from $A_n$ and $B_n$). It is then not difficult to check (cf. Figure 8) from the construction that assumptions (i)-(iii), resp. (i)-(v), of Propositions 6.4 and 6.5 are satisfied. Moreover, the hitting time of $Z_n$ from $a$ is concentrated around $(17 + 3L^2)n$, while from $b$ it is non-concentrated, in the manner quantified by (6.11). We want to prove that the system displays cutoff in total-variation around time $(17 + 3L^2)n$, and that the asymptotic behavior of the separation distance is given by (6.12). The only thing we have to do to prove these statements is to verify condition (iv) in Proposition 6.4 and condition (vi) of Proposition 6.5 (resp.). The only delicate point is to show that for starting points outside of $I_n$ the walk mixes rapidly, i.e. that there exists an absolute constant $C > 0$, which does not depend on L, such that (6.13) holds.

Before proving (6.13), let us explain how we use it to verify the remaining conditions. Note that if L is chosen to be sufficiently large (i.e. such that $17 + 3L^2 > C$), then (6.13) implies condition (iv) of Proposition 6.4. For condition (vi) of Proposition 6.5, in the case $x \in \Omega_n$, $y \notin I_n$, we use Lemma 6.8 and the total-variation cutoff result to show that for $t \ge (18 + 3L^2 + C)n$ the relevant ratio is uniformly close to one. This yields the right condition provided $32 + 6L^2 > 18 + 3L^2 + C$ (which can obviously be fulfilled by picking L to be sufficiently large). We now treat the case where both $x$ and $y$ lie in $A_n$ (whose analysis does not rely on (6.13)). We use Lemma 6.3 with $Z = Z'_n$ chosen to be the set of vertices within distance $(L+3)n/2$ from $a$ (corresponding to $L_{2n}$ in the copy of $H^{3,2}_n$). Recall that by construction this set is balanced seen from $A_n$. By (6.3) we have that $P^t_n(x,y)/\pi_n(y) \ge P\big[T^{x,y}_{Z'_n} \le t\big]$, and one can check that $\lim_{n\to\infty} P\big[T^{x,y}_{Z'_n} \le (6L^2 + 18 + \varepsilon)n\big] = 1$ (6.16), and this suffices to conclude that condition (vi) of Proposition 6.5 indeed holds.

Now let us prove (6.13). We want to use a simple $\ell^2$ bound based on the Poincaré inequality (see Lemma A.1). The issue is that the spectral gap of our graph is rather small (of order $L^{-2}$) due to the presence of stretched edges.
However, starting outside of $I_n$, the walk has a very small chance to visit the part of the graph where the edges are stretched before it is already extremely well mixed. Hence our idea is to apply the $\ell^2$ bound to the walk on a smaller graph which corresponds to the vertices which are likely to be visited. This graph has no stretched edges and a spectral gap which is bounded away from zero and does not depend on L. We let $\widetilde{G}^1_n = (\widetilde{V}_n, \widetilde{E}_n)$ denote the graph which is obtained from $G^1_n$ when all the vertices within distance $Ln/2 + 1$ from $a$ and $b$ have been deleted, together with all edges connected to them. First we observe that the Cheeger constant associated to $\widetilde{G}^1_n$ is large (i.e. it is bounded from below by some positive absolute constant, which is independent also of L); see e.g. Lemma 2.1 in [18] for a proof.

Proposition 6.9. Let $\kappa := (\min(c/3, 1/18))^2/2$. Then the spectral gap of the lazy SRW on $\widetilde{G}^1_n$ is at least κ. Consequently, the relaxation-time of the lazy SRW on $\widetilde{G}^1_n$, $\widetilde{t}^{\,(n)}_{\mathrm{rel}}$, satisfies $\widetilde{t}^{\,(n)}_{\mathrm{rel}} \le \kappa^{-1}$.

If we let $\widetilde{P}^t_x$ and $\widetilde{\pi}_n$ refer to the distribution at time t and at equilibrium for the walk on $\widetilde{G}^1_n$, this implies (by Lemma A.1) that for $x \in \widetilde{V}_n$ and all $t \ge n\kappa^{-1}\log 9$,
$$\|\widetilde{P}^t_x - \widetilde{\pi}_n\|_{TV} \le \frac{1}{\min_y \widetilde{\pi}_n(y)}\, e^{-\kappa t} \le \max_{v\in\widetilde{V}_n}\deg(v)\,|\widetilde{V}_n|\, 9^{-n} \le 6\,(8/9)^n. \tag{6.19}$$
What remains to be proven is that, if one considers $\widetilde{V}_n$ as a subset of $V^1_n$, then for any $x \in V^1_n \setminus I_n$ the distances $\|P^t_x - \pi_n\|_{TV}$ and $\|\widetilde{P}^t_x - \widetilde{\pi}_n\|_{TV}$ are very close. Note that
$$\|P^t_x - \pi_n\|_{TV} \le \|P^t_x - \widetilde{P}^t_x\|_{TV} + \|\widetilde{P}^t_x - \widetilde{\pi}_n\|_{TV} + \|\widetilde{\pi}_n - \pi_n\|_{TV}. \tag{6.20}$$
The term $\|\widetilde{\pi}_n - \pi_n\|_{TV}$ is exponentially small in n because only an exponentially small fraction of the vertices of $G^1_n$ lie outside of $\widetilde{G}^1_n$. Now if one lets $T_{\partial\widetilde{V}_n}$ denote the hitting time of the boundary $\partial\widetilde{V}_n$ (where $\widetilde{V}_n$ is the vertex set of $\widetilde{G}^1_n$), we have, by a standard coupling argument, that $\|P^t_x - \widetilde{P}^t_x\|_{TV} \le P_x[T_{\partial\widetilde{V}_n} \le t]$ (6.21). Now if $x \in V^1_n \setminus I_n$, it lies at distance at least n/2 from $\partial\widetilde{V}_n$ and has to overcome a drift to reach it. For this reason, reaching it should take a time which is exponentially large in n. More rigorously, we let $\Omega_x$ be the set of vertices $y \in V^1_n$ such that there exists a graph automorphism of $G^1_n$ preserving $a$ and $b$ which maps $x$ to $y$ (in most cases this is just a pedantic manner of describing the set of points at a fixed distance from $a$, but we have to introduce this definition due to the lack of symmetry of the $b$-side). Note that $|\Omega_x|/|\partial\widetilde{V}_n| \ge 2^{n/2}$ if $x \notin I_n$. Hence we have, for all $i > 0$ and $x \notin I_n$, a bound on $P_x[T_{\partial\widetilde{V}_n} = i]$ which is exponentially small in n, where in the first inequality we have used the stationarity of $\pi_n$: $\sum_{y\in V^1_n}\pi_n(y) P^i_y(\partial\widetilde{V}_n) = \pi_n(\partial\widetilde{V}_n)$. Plugging this into (6.21) we obtain (6.13), or more precisely a version of it with explicit constants.

6.4. A sequence of lazy SRWs on bounded degree expanders with separation cutoff and no total-variation cutoff.

Example 5. Take a copy of $H^{4,1}_n$ with root $a$ and a copy of $H^1_n$ with root $b$. We glue them together as follows: we give labels in $[2^{2n}]$ to the vertices in $L_{2n}$ in the two graphs and merge the vertices which share the same labels. We denote the set of merged vertices by $Z'_n$ (this is the set of vertices at distance $(L+3)n/2$ from $a$ and $b$). Let $G^2_n$ denote the obtained graph. However, as in Example 3, the separation mixing-time is determined by the behavior of $T^{a,b}_{Z'}$ in the large deviation regime. Note that $Z'$ is a set of small equilibrium measure (it has $4^n$ vertices whereas the full graph has order $8^n$ vertices). The reader can easily check that here $a$ and $b$ play symmetric roles. We let $A_n$ and $B_n$ denote the vertices within distance $(L+1)n/2$ from $a$ and $b$, respectively. Moreover:
• The center of mass $Z_n$ is given by the $2^{3n-1}$ vertices which are linked by the expander (which are the vertices belonging to $L_{3n}$ of $H^{4,1}_n$).
• $Z_n$ is balanced seen from $A_n \cup B_n$.
• $a$ and $b$ maximize (stochastically) the hitting time of $Z_n$.
It is then not difficult to check (see Fig. 9) from the construction that assumptions (i)-(iii) of Proposition 6.4 are satisfied. Assumption (iv) can be shown to be satisfied as in the previous example, by using an $\ell^2$ bound for the graph in which points within distance Ln/2 of $a$ and $b$ have been deleted. The asymptotic behavior of the hitting time of $Z_n$ from $a$ (or $b$) is once again given by (6.11), and hence the system does not display cutoff in total-variation. For cutoff in separation, we cannot use Proposition 6.5. We use instead Lemma 6.3, and the relevant set to hit is $Z'_n$. This set is balanced seen from $I_n := A_n \cup B_n$ and thus is the relevant one for the purpose of computing the separation mixing-time. An analog of the analysis performed for Example 3 does the job. To control the quantity $P^t_n(x,y)/\pi_n(y)$ when one of $x$ and $y$ (or both) does not belong to $A_n \cup B_n$, we use an $\ell^2$ estimate (in conjunction with Lemma 6.8) for the subgraph $\widetilde{G}^2_n$ obtained by deleting the stretched edges in $G^2_n$, similarly to what we have done in the analysis of Example 4.

6.5. Proof of Remark 1.4. Part (i) follows from the analysis of Example 4. We now prove that part (ii) is satisfied by Example 5. We denote by $\pi_{Z'}$ the distribution $\pi_n$ conditioned on $Z'$ (suppressing the dependence on n). By (6.4) we have that, for all t and every $x \in A_n$ and $y \in B_n$, $P^t_n(x,y)/\pi_n(y)$ is expressed through the law of $T^{x,y}_{Z'}$ (6.24). We know from the previous analysis of Example 5 that for the separation distance to equilibrium only $(x,y) \in A_n \times B_n$ matter, or more precisely
$$\lim_{n\to\infty}\sup_{t\ge 0}\Big|d^{(n)}_{\mathrm{sep}}(t) - \max\Big(0,\ 1 - \min_{(x,y)\in A_n\times B_n} P^t_n(x,y)/\pi_n(y)\Big)\Big| = 0. \tag{6.25}$$
Hence, setting $t^n_\eta(x,y) := \min\{t : P^t_n(x,y)/\pi_n(y) \ge 1 - \eta\}$, we prove that the cutoff window is constant by proving that, for all $\varepsilon > 0$, there exist some $n_\varepsilon \in \mathbb{N}$ and some absolute constant $C_2$ such that for all $n \ge n_\varepsilon$ and all $(x,y) \in A_n \times B_n$,
$$t^n_\varepsilon(x,y) - t^n_{1-\varepsilon}(x,y) \le C_2 |\log\varepsilon|, \tag{6.26}$$
$$\forall t \ge t_\varepsilon(x,y),\quad P^t_n(x,y)/\pi_n(y) \ge 1 - \varepsilon. \tag{6.27}$$
In what follows, for simplicity, we drop the dependence on n in the notation $t_\eta(x,y)$. Although this is not used in the analysis below (and hence not proven), we can identify $t_{1/4}(x,y)$ for all $(x,y) \in A_n \times B_n$: it agrees, up to an additive constant, with $t'(x,y) := \inf\{t : P[T^{x,y}_{Z'} \le t] \ge \pi_n(Z')\}$ and with $\hat{t}(x,y) := \inf\{t : P[T^{x,y}_{Z'} = t] \ge \pi_n(Z')\}$. This follows from the analysis below, together with (6.24) and the exponential decay of $P^t_{\pi_{Z'}}(Z') - \pi_n(Z')$ as a function of t.

Fact 6.11. A log-concave distribution over $\mathbb{Z}$ is unimodal.

Fact 6.12. The family of geometric distributions is log-concave.

Fact 6.13. The family of log-concave distributions over $\mathbb{Z}$ is closed under convolutions.

The following representation of hitting times in birth and death chains is due to Karlin and McGregor [13, Equation (45)]. It was later rediscovered by Keilson [14]. The discrete-time case of this result was given by Fill [11, Theorem 1.2].

Theorem 6.14. For an irreducible lazy birth and death chain on $\{0, 1, \dots, N\}$ started at 0, the hitting time of N is distributed as a sum of N independent geometric random variables.

We are now ready to prove (6.26) and (6.27). For clarity of exposition, we first present our analysis in the special case $x = a$, $y = b$. Consider the sequence of graphs $G^2_n$ from Example 5. Let $G^3_n$ be the subgraph of $G^2_n$ whose set of vertices is given by $V^3_n := \{v : \mathrm{dist}(v, \{a,b\}) \le (L+3)n/2\}$, and whose edges are those of $E^2_n$ for which both ends are in $G^3_n$ (note that this graph is connected and includes $Z'$ but no point further away from $\{a,b\}$). Let $(Y_t)_{t\in\mathbb{Z}_+}$ be the lazy SRW on $G^3_n$. Consider the projection $\bar{Y}_t := 1 + \mathrm{dist}(Y_t, \{a,b\})$.
Our construction implies that the projection is Markovian, and thus $(\bar{Y}_t)_{t\in\mathbb{Z}_+}$ is a lazy birth and death chain on $[1 + (L+3)n/2]$. For any $v \in V^3_n$, the distribution of $T_{Z'}$, given that $Y_0 = v$, is the same as that of $T_{1+(L+3)n/2}$ (for the chain $(\bar{Y}_t)$), given that $\bar{Y}_0 = 1 + \mathrm{dist}(v, \{a,b\})$. Consequently, by Theorem 6.14 and Facts 6.12-6.13, the law of $T^{a,b}_{Z'}$, which is a sum of independent hitting times and thus of independent geometric variables, is log-concave.

Let $z^*$ be the mode of $T^{a,b}_{Z'}$. A standard computation is sufficient to show that
$$|z^* - E[T^{a,b}_{Z'}]| \le C_4\sqrt{\operatorname{Var}(T^{a,b}_{Z'})} \le C_5\sqrt{n} \tag{6.28}$$
(in fact, the first inequality follows from unimodality). Fix some $\delta > 0$ sufficiently small such that $P[T^{a,b}_{Z'} \le z^* - \delta n] \gg 2^{-n}$ ($2^{-n}$ is the order of magnitude of $\pi_n(Z')$). By a large-deviation estimate and log-concavity, there is some $\alpha > 1$ such that for all sufficiently large n we have $P[T^{a,b}_{Z'} = z^*] \ge \alpha^n\, P[T^{a,b}_{Z'} = z^* - \lfloor\delta n\rfloor]$ (6.29); hence, again by log-concavity, the single-step ratios $P[T^{a,b}_{Z'} = t+1]/P[T^{a,b}_{Z'} = t]$ are bounded below by a constant larger than 1 in the relevant range of t. Consequently, by (6.24), $P^t_n(a,b)/\pi_n(b)$ grows geometrically in t around the cutoff time (6.30). As $T^{a,b}_{Z'}$ is log-concave, and hence by Fact 6.11 also unimodal, (6.24) also yields (6.31), and that there exist some absolute constants $c, C_6 > 0$, $\beta \in (1,2)$ such that (6.32) and (6.33) hold. This concludes the proof of the case $(x,y) = (a,b)$, as (6.30) implies (6.26) with $C_2 := (\log\alpha)^{-1}$, and (6.27) can be deduced from the four other equations.

For general $(x,y) \in A_n \times B_n$ we decompose $T^{x,y}_{Z'}$ into a convolution of a log-concave distribution and some other negligible term. Let $(X^x_t)_t$ and $(X^y_t)_t$ be independent realizations of the random walk, started from the respective initial vertices $x$ and $y$, defined on the same probability space. Let $T^x_{Z'} := \inf\{t : X^x_t \in Z'\}$ and $T^y_{Z'} := \inf\{t : X^y_t \in Z'\}$. We define $T'_x$ (and $T'_y$ in an analogous manner, using $(X^y_t)$ and $T^y_{Z'}$) as follows (with the convention $\sup\emptyset = 0$). Setting $T_1 := (T^x_{Z'} - T'_x) + (T^y_{Z'} - T'_y)$ and $T_2 := T'_x + T'_y$: by Theorem 6.14 and Facts 6.12-6.13, the laws of $T^x_{Z'} - T'_x$ and $T^y_{Z'} - T'_y$ are log-concave (by a similar argument to the one used before, using a projection to a birth and death chain), and so $T_1$ is also log-concave (by Fact 6.13). Observe that $T_1 + T_2$ has the same law as $T^{x,y}_{Z'}$. Denote the mode of $T_1$ by $z^* = z^*(x,y)$. Fix some $\delta > 0$ sufficiently small such that $\min_{(x,y)\in A_n\times B_n} P[T_1(x,y) \le z^*(x,y) - \delta n] \gg 2^{-n}$. Imitating the proof of (6.30), using a large-deviation estimate on $\frac{P[T_1(x,y) = z^*(x,y)]}{P[T_1(x,y) = z^*(x,y) - \lfloor\delta n\rfloor]}$ which is uniform in $(x,y)$ (the existence of such a uniform large-deviation estimate follows from the analysis of Example 3, or alternatively from [5, Lemma 6.2]), together with log-concavity, we get that if $\alpha > 1$ is chosen sufficiently small, then (6.29) remains valid simultaneously for all choices of $x$, $y$, if one replaces $T^{a,b}_{Z'}$ by $T_1(x,y)$ (and $z^*$ with $z^*(x,y)$).

We argue that (6.28)-(6.33) can be extended (excluding the middle terms) to all $(x,y) \in A_n \times B_n$ (in the role of $(a,b)$), with the same choice of constants for all $(x,y) \in A_n \times B_n$. To extend (6.30) and (6.31), note that after conditioning on $T_2$ we can imitate the above proofs, and so the extensions are obtained by averaging over $T_2$. For (6.32), note that by unimodality
$$P[T^{x,y}_{Z'} = z^*(x,y) + \lceil n^{2/3}\rceil]/\pi_n(Z') \ge c_1 2^n\, P[T_2(x,y) \le \lceil n^{2/3}\rceil]\, P[T_1(x,y) = z^*(x,y) + \lceil n^{2/3}\rceil].$$
It is not hard to show that there exist some $\gamma < 2$ and $c_2, C_6 > 0$ such that $P[T_1(x,y) = z^*(x,y) + \lceil n^{2/3}\rceil] \ge c_2\gamma^{-n}$ and $P[T_2(x,y) \le \lceil n^{2/3}\rceil] \ge 1 - C_6 n^{-2/3}$
for all $(x,y) \in A_n \times B_n$ (by Markov's inequality and the fact that $\max_{(x,y)\in A_n\times B_n} E[T_2(x,y)] = O(1)$). For (6.28), use unimodality (first inequality) to show that for all $(x,y) \in A_n \times B_n$,
$$|z^*(x,y) - E[T_1(x,y)]| \le C_4\sqrt{\operatorname{Var}(T_1(x,y))} \le C_4\sqrt{\operatorname{Var}(T^{a,b}_{Z'})} \le C_5\sqrt{n}.$$

A.3. Proof of Lemma 6.3. By decomposing over the possible values of $T_Z$, using the assumption that Z is balanced seen from x, and reversibility (which implies that $P^s_{\pi_Z}(y)/\pi(y) = P^s_y(Z)/\pi(Z)$ for all s), we get that
$$\frac{P^t(x,y)}{\pi(y)} = \sum_{k_1\le t} P_x[T_Z = k_1]\,\frac{P^{t-k_1}_{\pi_Z}(y)}{\pi(y)} + \frac{P_x[X_t = y \text{ and } T_Z > t]}{\pi(y)},$$
in which the terms $P^{t-k}_{\pi_Z}(Z)/\pi(Z)$ can then be estimated. The first inequality in (6.4) is obtained by plugging the last estimate into the second term of (6.4). The second inequality in (6.4) follows from a similar estimate.

A.4. Proof of Proposition 6.4. The result is mostly a consequence of the following lemma (Lemma A.3), which relates the mixing time starting from x to the hitting time of a set Z balanced seen from x. Let $s' := \max(t_{x,Z}(p) - s_\varepsilon, 0)$. Then we have a comparison between the mixing time started from x and $t_{x,Z}(p)$; moreover, if Z is balanced seen from x, then we also have the converse bound (A.15).

Proof. The first result is proved by coupling the chain with initial distribution $P^{k-s_\varepsilon}_x$ with the stationary chain ($k \ge s_\varepsilon$, to be determined soon). We have
$$P_x[T_Z \ge k] \le \|P^{k-s_\varepsilon}_x - \pi\|_{TV} + P_\pi[T_Z \ge s_\varepsilon] \le \|P^{k-s_\varepsilon}_x - \pi\|_{TV} + \varepsilon, \tag{A.16}$$
where the last inequality is a consequence of (A.2) and the choice of $s_\varepsilon$. Setting $k = t_{x,Z}(p)$ we obtain the result (and if $s' = 0$ there is nothing to prove). We now prove (A.15). By the assumption that Z is balanced seen from x, (A.17) holds for all $\ell \le t$. By the triangle inequality and the fact that the distance to π decreases in time, we obtain
$$\|P^t_x - \pi\|_{TV} \le P_x[T_Z > \ell] + \|P^{t-\ell}_{\pi_Z} - \pi\|_{TV}.$$
Using this inequality for $\ell := t_{x,Z}(p)$ (and so $t - \ell = r_\varepsilon$), we only have to show that $\|P^{t-\ell}_{\pi_Z} - \pi\|_{TV} = \|P^{r_\varepsilon}_{\pi_Z} - \pi\|_{TV} \le \varepsilon$. Combining (A.1) with the definition of $r_\varepsilon$, we have that
$$\|P^{r_\varepsilon}_{\pi_Z} - \pi\|_{TV} \le \lambda_2^{r_\varepsilon}\sqrt{\pi(Z^c)/\pi(Z)} \le \varepsilon. \tag{A.18}$$
We can now proceed to the proof of Proposition 6.4. With our assumptions on $t_{\mathrm{rel}}$ and $Z_n$, Lemma A.3 allows us to show that the mixing time starting from x and $t_{x,Z_n}(p)$ are equivalent when $Z_n$ is balanced seen from x (i.e. for $x \in I_n$). Assumption (iv) ensures that what occurs for other initial conditions does not matter, and Assumption (iii) establishes that a is the worst initial condition.

A.5. Proof of Proposition 6.5. From Lemma 6.3 and assumptions (i), (iii) and (v), we know that $P^t_n(x,y)/\pi_n(y)$ and $P[T^{x,y}_{Z_n} \le t]$ differ only by a negligible amount, provided that $x \in A_n$ and $y \in B_n$. Assumption (iv) then ensures that
$$\liminf_{n\to\infty}\ \inf_{t\ge 0}\ \Big[\min_{(x,y)\in A_n\times B_n}\frac{P^t_n(x,y)}{\pi_n(y)} - \frac{P^t_n(a_n, b_n)}{\pi_n(b_n)}\Big] = 0. \tag{A.19}$$
We are left with checking the other cases. Assumption (vi) takes care of most of them, and leaves the case where $(x,y) \in B_n \times B_n$, for which Lemma 6.3 implies that $P[T^{x,y}_{Z_n} \le t]$ is a lower bound for $P^t_n(x,y)/\pi_n(y)$. Hence the conclusion follows by assumption (v) again.

A.6. A short alternative proof of Theorem C. We are going to show that there exists an absolute constant $c > 0$ such that for any lazy chain the total-variation distance cannot drop abruptly on a time scale shorter than $\sqrt{t_{\mathrm{mix}}(1/4)}$. Indeed, set $t := t_{\mathrm{mix}}(1/4)$ and $s := \lfloor c\sqrt{t}\rfloor$. A sample of the distribution of the lazy chain at time t can be generated by running the non-lazy version of the chain for $\xi_t$ steps, where $\xi_t \sim \mathrm{Bin}(t, 1/2)$ is independent of the non-lazy version of the chain. By the triangle inequality (first inequality) and a standard coupling argument (second inequality), the distributions of the chain at times t and $t + s$ are close whenever the laws of $\xi_t$ and $\xi_{t+s}$ are. Moreover, if c is chosen well, we have for every $t \ge 0$ that $\|\xi_t - \xi_{t+\lfloor c\sqrt{t}\rfloor}\|_{TV} \le 1/2$.
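The final Binomial estimate is easy to verify numerically; in the sketch below (our own; c = 0.5 is an illustrative choice, not the constant from the proof) the total-variation distance between Bin(t, 1/2) and Bin(t + ⌊c√t⌋, 1/2) stays below 1/2 uniformly in t:

```python
from math import comb, floor, sqrt

def tv_binomial_shift(t, s):
    """||Bin(t,1/2) - Bin(t+s,1/2)||_TV, computed exactly."""
    p = [comb(t, k) / 2 ** t for k in range(t + 1)] + [0.0] * s
    q = [comb(t + s, k) / 2 ** (t + s) for k in range(t + s + 1)]
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

c = 0.5  # illustrative constant
for t in (100, 400, 1600, 6400):
    s = floor(c * sqrt(t))
    print(f"t={t:5d}  s={s:3d}  TV={tv_binomial_shift(t, s):.4f}")
```

The distance is roughly constant in t (about 0.2 for this choice of c), which is exactly the scaling behind the $\sqrt{t_{\mathrm{mix}}}$ lower bound on the window: a lazy chain cannot distinguish times t and t + O(√t).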
19,747
2015-08-17T00:00:00.000
[ "Mathematics" ]
Topological Inflation

We discuss a novel scenario for early cosmology, when the inflationary quasi-de Sitter phase dynamically originates from the initial quantum state represented by the microcanonical density matrix. This genuine quantum effect occurs as a result of the dynamics of the topologically nontrivial sectors in a (conjectured) strongly coupled QCD-like gauge theory in an expanding universe. The crucial element of our proposal is the presence in our framework of a nontrivial $\mathbb{S}^1$ which plays a dual role in the construction: it defines the periodic gravitational instanton (on the gravity side) and it also defines a nontrivial gauge holonomy (on the gauge side) generating the vacuum energy. The effect is global in nature and cannot be formulated in terms of a gradient expansion in an effective local field theory. We also discuss a graceful exit from holonomy inflation due to the helical instability. The number of e-folds in the holonomy inflation framework is determined by the gauge coupling constant at the moment of inflation, and estimated as $N_{\rm infl}\sim \alpha^{-2}(H_0)\sim 10^2$. We also comment on the relation of our framework with the no-boundary and tunneling cosmological proposals and their recent criticism.

I. INTRODUCTION

The inflationary scenario is widely recognized as one of the most successful candidates for the description of the early Universe leading to its observable large scale structure. The majority of effective and fundamental models of this scenario are based on the assumption that the matter energy density driving the quasi-exponential expansion of the Universe during the inflation stage is generated by local field-theoretical degrees of freedom, like a scalaron field in Starobinsky $R^2$-gravity [1] or a scalar field inflaton Φ(x) with its potential V(Φ) in chaotic and other inflationary models [2]; see the textbook [3] for a general overview. However, it is also very possible that the generation of this type of uniformly distributed energy might not be associated with any local propagating particles. Instead, it might be related to some global characteristics (such as holonomy) or topological degrees of freedom which cannot be expressed in terms of any local fields such as the inflaton Φ. Examples include, in particular, the global degree of freedom arising in the context of the recently suggested generalized unimodular gravity theory [4]. Another example is represented by a strongly coupled QCD-like gauge theory in which the vacuum energy is generated by some nontrivial topological features of the gauge system [5][6][7].

Here we want to apply the ideas of [5][6][7], in which the vacuum energy is induced by a topologically nontrivial holonomy, to the mechanism of inflation in the early quantum Universe driven by thermal states [8,9]. This model, which incorporates the idea of the microcanonical density matrix as the initial quantum state of the Universe [10], is conceptually very attractive because of the minimal set of assumptions underlying it and, moreover, because of a mechanism restricting the cosmological ensemble to the subplanckian energy domain and avoiding the infrared catastrophe inherent in the no-boundary wavefunction [11]. Furthermore, this thermally driven cosmology [8,9] can serve as initial conditions for the observationally consistent models of $R^2$ and Higgs inflation; see the original paper [14] and the recent developments [15,16] based on induced gravity aspects of the theory.
As we argue below, our construction, which can be viewed as a synthesis of two naively unrelated sets of ideas, [8-10] and [5-7] respectively, exhibits a number of very desirable and remarkable features. On the gravity side [8-10] the nontrivial element of the construction is represented by the Euclidean spacetime with time compactified to a circle S¹. On the gauge field theory side [5-7] the same S¹ plays a crucial role, as the gauge configurations may assume a nontrivial holonomy along S¹. Precisely the gauge configurations with nontrivial holonomy along S¹ may serve as a source of the vacuum energy density sustaining the inflationary scenario. Furthermore, as we argue below, this construction provides a system with a subplanckian energy scale, such that a number of well-known and undesirable properties which always accompany the conventional inflationary scenario, when the system is formulated in terms of a local field Φ(x), do not even occur in our framework.

Our presentation is organized as follows. We begin in Sect. II with a brief overview of the first crucial element of the proposal: the thermally driven cosmology in which the initial state is described by the microcanonical density matrix, as originally discussed in [8-10]. In Sect. III we overview the second crucial element of our proposal, related to a fundamentally new source of the vacuum energy as suggested in [5-7]. Then, in Sections IV and V, we construct two different inflationary models based on similar building principles but different field contexts. In both models the inflationary vacuum energy is generated by the holonomy of the gauge fields. In the first model, studied in Section IV, one can carry out all the computations in a theoretically controllable semiclassical approximation as a result of a special selection of the matter content. The second model, studied in Section V, is much more attractive phenomenologically, though the semiclassical approximation cannot be justified in this case.

We discuss how inflation ends in our scenario (the so-called reheating epoch) in Sect. VI. In particular, we demonstrate that the number of e-folds N_infl is always very large, N_infl ∼ α⁻²(H) ∼ 100, as a result of the small gauge coupling constant α(H) ∼ 0.1 at the Hubble scale H. We also compare our holonomy inflation with the conventional description in terms of a local inflaton Φ and potential V[Φ] in Section VI F.

We conclude in Section VII with a formulation of the basic results and profound consequences of our proposal. We also describe in subsection VII C how this new form of topological vacuum energy can be tested in tabletop experiments in a physical Maxwell system. We also comment in subsection VII D on the differences between our framework and the well-known no-boundary and tunnelling proposals. Finally, we summarize a number of technical aspects relevant to our topological inflation scenario in Appendix A. In particular, we overview the nature of the contact term in gauge theories in Section A 1, the generation of the "non-dispersive" vacuum energy due to the holonomy in Section A 2, and its role in the cosmological context in Section III C.

II. ORIGIN OF INFLATION IN THE THERMALLY DRIVEN COSMOLOGY

Our goal here is to overview the previous results [8-10] with emphasis on the periodic properties of S¹, where the gravitational instantons are defined and serve as initial conditions for the cosmological evolution of the scale factor a(t).
Analytic continuation to the physical Lorentzian spacetime demonstrates de Sitter-like behaviour with constant H. This is precisely the main goal of this section.

The model of quantum initial conditions in cosmology in the form of the microcanonical density matrix was suggested in [10], where its statistical sum was built as the Euclidean quantum gravity path integral over the metric g_µν and matter fields Φ which are periodic on the Euclidean spacetime with time compactified to a circle S¹. This statistical sum has good predictive power in the Einstein theory with a primordial cosmological constant and a matter sector which mainly consists of a large number of quantum fields conformally coupled to gravity [8, 9]. The dominant contribution of the numerous conformal modes allows one to overstep the limits of the usual semiclassical expansion, because the integration over these modes gives the quantum effective action of the conformal fields, Γ_CFT[g_µν], exactly calculable by the method of the conformal anomaly. On the Friedmann-Robertson-Walker (FRW) background, with a periodic scale factor a(τ), a function of the Euclidean time belonging to the circle S¹ [8], this action is calculable by using the local conformal transformation to the static Einstein universe and the well-known trace anomaly, which is a linear combination of the Gauss-Bonnet invariant E = R²_µναγ − 4R²_µν + R², the Weyl tensor squared C²_µναβ and the □R curvature invariants, with spin-dependent coefficients¹. The resulting Γ_CFT[g_µν] turns out to be the sum of the anomaly contribution and the contribution of the static Einstein universe: the Casimir and free energy of the conformal matter fields at the temperature determined by the circumference of the compactified time dimension S¹. This is the main calculational advantage provided by the local Weyl invariance of Φ conformally coupled to g_µν. Solutions of the equations of motion for the full effective action, the saddle points of the microcanonical statistical sum (1), are periodic cosmological instantons of S¹ × S³ topology (in what follows we assume a spatially closed cosmology, which explains the spherical topology of its spatial sections). These statistical-sum instantons follow by the usual tracing procedure from the two-boundary instantons of the relevant microcanonical density matrix, depicted in Fig. 1. In their turn, the density-matrix instantons serve as initial conditions for the cosmological evolution a_L(t) in the physical Lorentzian spacetime. The latter follows from a(τ) by the analytic continuation a_L(t) = a(τ* + it) at the point of the maximum value of the Euclidean scale factor, a₊ = a(τ*), as shown in Fig. 2.
This construction is described in [8-10], and we refer the reader to these original papers. The only comment we would like to make here is that the starting point of the analysis [8-10] is, of course, the density matrix ρ(φ, φ′) with two surfaces carrying its field arguments. Semiclassically, these surfaces are the boundaries of either the Euclidean or the Lorentzian spacetime, depending on the relevant size of the scale factor. The entire saddle-point solution for ρ(φ, φ′) consists respectively of the Euclidean spacetime interpolating between them, or of the Euclidean spacetime between Σ and Σ′ sandwiched between two layers of the Lorentzian spacetime. These two layers interpolate from Σ to the unprimed argument of the density matrix and from Σ′ to its primed argument, and correspond in the density matrix to the chronological and anti-chronological evolution factors of the well-known Schwinger-Keldysh technique [18] for expectation values in thermal field theory. When calculating the trace in the statistical sum, in view of unitarity these two factors cancel out, and the only contribution to the statistical sum remains from the Euclidean domain between the Euclidean-Lorentzian transition surfaces Σ and Σ′. These surfaces are uniquely determined from the condition of smooth periodicity in the Euclidean time on the compact S¹, or as the two turning points of the Euclidean trajectory for a(τ). The equations for these cosmological instantons have the form of the effective Friedmann equation in the Euclidean time τ (ȧ = da/dτ), where M_P = 1/√(8πG) is the reduced Planck mass, ρ is the overall energy density of the matter fields other than the conformal particles, β is the coefficient of the Gauss-Bonnet term in the total conformal anomaly of these particles, and R(η) is their radiation energy density.² The latter is given by a boson or fermion sum over field modes with energies ω on a unit 3-sphere at the comoving temperature 1/η, the inverse of the instanton circumference S¹ measured in units of the conformal time. Note that γ does not contribute to the above equations in view of the conformal flatness of the FRW metric, while the coefficient α can always be renormalized to zero by a local R² counterterm, changes in α thus being equivalent to the inclusion of the non-minimally coupled scalaron of the Starobinsky model; see the discussion in [12, 13].

The integro-differential equations (4)-(5) form a bootstrap: the radiation constant C is determined from (5) by the underlying scale-factor history a(τ), which, in its turn, is generated by the back reaction of this radiation on a(τ) via the effective Friedmann equation (4). Their solutions represent the set of periodic S³ × S¹ gravitational instantons³ with an oscillating scale factor, the garlands [8, 10], which can be regarded as the thermal version of the Hartle-Hawking instantons [11]. When the matter density is constant or nearly constant and forms a "Hubble factor", the scale factor oscillates m times (m = 1, 2, 3, ...) between its maximum and minimum values, a₋ ≤ a(τ) ≤ a₊, so that the full period of the conformal time (7) is the 2m-multiple of the integral between two neighboring turning points of a(τ), ȧ(τ±) = 0.
Similarly, the full period of the proper Euclidean time on these periodic m-fold garland instantons is given by the analogous integral (9).

² It should be emphasized that non-conformal matter was not completely excluded in the original setup, and the gravitational sector of the theory was not assumed to be Weyl invariant at all. In particular, the role of ρ could be played by a fundamental cosmological constant, its particular value being selected by the existence of the periodic Euclidean saddle-point solution, as it was in the simplest model of [9]. In more realistic models the role of ρ is played by the non-conformal inflaton field in the slow-roll regime or by the scalaron field of the Starobinsky R²-model [12, 13], see below. Moreover, ρ can contain ordinary particle matter of negligible amount in the early Universe, but quantum-created during inflation in view of its non-conformal nature and, therefore, starting to dominate at later stages of the evolution; see footnote 4.

³ We use the term "gravitational instanton" to avoid confusion with conventional instanton-type solutions which describe the interpolation between topologically distinct but physically identical winding sectors |k⟩ in gauge theories. The corresponding periodic instantons (the so-called calorons with nontrivial holonomy) are the subject of Appendix A, where we overview the results relevant for the present work.

These garland-type instantons exist only in a limited range of H² [8]. As shown in [8], periodic solutions should necessarily belong to the domain where they form a countable, m = 1, 2, ..., sequence of one-parameter families interpolating between the lower and upper boundaries of this domain in the two-dimensional plane of H² and C. This sequence with m → ∞ accumulates at the upper bound H²_max = 1/(2B) (and the minimal value C_min = B/2), which corresponds to a bound on the effective cosmological constant. The lower bound H²_min, the lowest point of the m = 1 family, can be obtained numerically for any field content of the model.

For solutions close to the upper boundary of the domain (10), C ≃ 1/(4H²), the scale factor oscillates with a very small amplitude, and one can write down the approximation (12), which is valid in this regime [12, 13]. The full period of the m-folded instanton is thus given by (14). Remarkably, the bootstrap equations (4)-(5) have an explicit solution for large m close to the upper boundary of the domain (10) [8]. In this limit the Hubble parameter is close to the upper bound of its range, corresponding to the maximal value of the effective cosmological constant (11). We would like to make a few comments on the physical meaning of the topological parameter m which enters eq.
(9). This parameter looks very similar to the integer instanton number in gauge theories, where the Euclidean path integral is defined as the sum over all topological sectors, so that it is tempting to consider a summation over m. However, m is not an independent parameter of the cosmological instantons of the above type. Each instanton is parametrized by two dimensional parameters: H² = Λ/3, the cosmological constant or the energy scale of the model, and M_P, the Planck mass or gravitational coupling constant. The folding number m is in one-to-one correspondence with the energy scale H, as in (15). Therefore, under the general assumption that at later times the cosmological models with different values of H decohere and become observable, one should not sum over different values of m in the contribution to the initial conditions for inflation with a given H. The concrete values of H and m should thus be selected by matching with observations.

The inflation stage in this model starts after the "nucleation" of the system from the gravitational instanton, when the evolution in Lorentzian time begins. The Lorentzian time history of the scale factor a_L(t) originates by the analytic continuation of the approximate solution (12) to τ = 2mπ/Ω + it. This leads to the replacement of the oscillatory behavior of cos(Ωτ) by the exponentially growing cosh(Ωt), so that at later times nonlinear effects start dominating. When solved with respect to ȧ², Eq. (4) takes in the Lorentzian spacetime, ȧ²(τ) = −ȧ²_L(t), the manifestly general relativistic form (cf. (26)-(27)), with the effective Planck mass M_eff(ε) depending on the full matter density ε, which together with ρ includes the primordial radiation of the conformal cosmology.

As shown in [12, 13], the above Euclidean-Lorentzian scenario remains valid also when the matter density ρ is represented by an appropriate potential of a slowly varying scalar field playing the role of the inflaton. The evolution consists in a fast quasi-exponential expansion, during which the primordial radiation gets diluted, while the inflaton field and its density ρ slowly decay by a conventional exit scenario and go over into the quanta of conformally non-invariant fields produced from the vacuum.⁴ These get thermalized and reheated to give a new post-inflationary radiation with a sub-Planckian energy density, ε → ε_rad ≪ M⁴_P/β. Therefore, M_eff tends to M_P, and one obtains a standard general relativistic inflationary scenario for which the initial conditions were prepared by a garland instanton of the above type.
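To make the nucleation step concrete, here is a minimal sketch of the continuation cos(Ωτ) → cosh(Ωt). It assumes the small-oscillation form a(τ) ≈ a₀(1 + δ cos Ωτ) near the upper boundary of the domain; the numbers a₀, δ, Ω below are illustrative, not taken from the paper.

```python
# Minimal sketch of the Euclidean -> Lorentzian continuation of the scale factor.
# Assumption: near the upper boundary of the domain the periodic solution has the
# small-oscillation form a(tau) ~ a0 * (1 + delta*cos(Omega*tau)); all numbers
# are illustrative.
import numpy as np

a0, delta, Omega, m = 1.0, 0.05, 2.0, 1

def a_euclidean(tau):
    return a0 * (1.0 + delta * np.cos(Omega * tau))

def a_lorentzian(t):
    # tau = 2*pi*m/Omega + i*t: cos(Omega*tau) -> cos(2*pi*m)*cosh(Omega*t) = cosh(Omega*t)
    return a0 * (1.0 + delta * np.cosh(Omega * t))

# The two branches match smoothly (equal value, vanishing derivative) at the
# turning point tau* = 2*pi*m/Omega, i.e. at t = 0:
print(a_euclidean(2 * np.pi * m / Omega), a_lorentzian(0.0))
print(a_lorentzian(5.0))  # exponential growth at later Lorentzian times
```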
Interestingly, this model can serve as a source of quantum initial conditions for Starobinsky R²-inflation [1] and Higgs inflation theory [15, 16], in which the effective H² is generated respectively by the scalaron and the Higgs field. In particular, the observable value of the CMB spectral tilt n_s ≃ 0.965 in these models can be related to an exponentially high instanton folding number [12, 13], whereas the needed inflation scale in these models, H ∼ 10⁻⁶ M_P, determines the overall parameter β ∼ 10¹³ generated by a hidden sector of conformal fields [13, 19]. If this sector is built of higher-spin conformal fields [19]⁵, then the gravitational cutoff [20, 21] of the model turns out to be several orders of magnitude higher than the inflation scale, which justifies the omission of the graviton loop contribution and the use of the above nonperturbative (trace-anomaly-based) method. This concludes our overview of the previous results [8-10, 12, 13], which play an important role in the constructions presented in the following sections.

III. THE TOPOLOGY AS THE SOURCE OF THE VACUUM ENERGY

The goal here is to overview the basic ideas advocated in [5-7]. We explain a number of technical elements related to these ideas in Appendix A, while here we present the corresponding arguments using simple plain language and analogies; see the next subsection III A. The basic prescription for the vacuum energy which enters the Friedmann equations will be explained in subsection III B. In subsection III C we list a number of key technical elements of the proposal relevant for cosmological applications.

A. Intuitive picture

The new paradigm advocated in [5] is based on a fundamentally novel view on the nature and origin of the inflaton field, which is drastically different from the conventional viewpoint that the inflaton is a dynamical local field Φ. In this new framework inflation is a genuine quantum effect in which the role of the inflaton is played by an auxiliary topological field. A similar field, for example, is known to emerge in the description of topologically ordered condensed matter (CM) systems realized in nature. This field does not propagate and does not have a canonical kinetic term, as the sole role of the auxiliary field is to effectively describe the dynamics of the topological sectors of a gauge theory which are present in the system. The corresponding physics is fundamentally indescribable in terms of any local propagating fields (such as Φ(x)). It might be instructive to get some intuitive picture of the vacuum energy in this framework, formulated in terms of a CM analogy. Such an intuitive picture is quite helpful in getting a rough idea about the nature of the inflaton in the framework advocated in this work.
Imagine that we study the Aharonov-Casher effect. We insert an external charge into a superconductor, where the electric field E is screened, i.e. E ∼ Q exp(−r/λ), with λ being the penetration depth. Nevertheless, a neutral magnetic fluxon will still be sensitive to the inserted external charge Q at arbitrarily large distances, in spite of the screening of the physical field. This genuine quantum effect is purely topological and non-local in nature, and can be explained in terms of the dynamics of the gauge sectors which are responsible for the long-range dynamics. Imagine now that we study the same effect but in a time-dependent background. The corresponding topological sectors which saturate the vacuum energy will be modified due to the external background. However, this modification cannot be described in terms of any local dynamical fields, as there are no propagating long-range fields in the system, since the physical electric field is screened. For this simplified example, the dynamics of the inflaton corresponds to the effective description of the modification of the topological sectors when the external background slowly changes. The effect is obviously non-local in nature, as the Aharonov-Casher effect itself is a non-local phenomenon, and cannot be expressed in terms of F_µν.

One should emphasize that many crucial elements of this proposal have in fact been tested using numerical lattice Monte Carlo simulations in strongly coupled QCD. Furthermore, this fundamentally new sort of energy can in principle be studied in tabletop experiments by measuring some specific corrections to the Casimir pressure in the Maxwell theory; see the remarks and references in the concluding section VII C. In the next subsection we list some important technical elements which will be used in the construction.

B. QCD holonomy mechanism of vacuum energy

Let us now turn to the discussion of the nature of the effective cosmological constant, or Hubble factor (8), in the modified Euclidean Friedmann equation (4). Our interpretation in the present work is based on the prescription that the relevant energy is in fact the difference ∆ρ ≡ ρ − ρ_flat between the energies of a system in a non-trivial background and in flat spacetime geometry, similar to the well-known Casimir effect, where the observed energy is the difference between the energy computed for a system with conducting boundaries and in infinite flat Minkowski space. In this framework it is quite natural to define the "renormalized vacuum energy" to be zero in the flat spacetime vacuum, wherein the Einstein equations are automatically satisfied as the Ricci tensor identically vanishes.

In the present context such a definition, ∆ρ ≡ ρ_FLRW − ρ_Mink, of the vacuum energy was first advocated in 1967 by Zeldovich [23], who argued that ρ_vac = ∆ρ ∼ G m⁶_p must be proportional to the gravitational constant, with m_p being the proton's mass. Later on, such a definition of the relevant energy ∆ρ ≡ ρ_FLRW − ρ_flat entering the Einstein equations has been advocated from different perspectives in a number of papers written by researchers from different fields, including particle physics, cosmology, and condensed matter physics; see e.g. the relatively recent works [24-28] and the review article [29] with a large number of the original references.
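Zeldovich's scaling is easy to put in numbers. The minimal sketch below (natural units ħ = c = 1, G = 1/M²_Pl, all O(1) factors dropped; the numerical output is purely illustrative) evaluates ρ_vac ∼ G m⁶_p.

```python
# Illustrative order-of-magnitude evaluation of Zeldovich's scaling
# rho_vac ~ G * m_p^6  (natural units; every O(1) factor is dropped).
m_p  = 0.938     # proton mass in GeV
M_Pl = 1.22e19   # Planck mass in GeV, so that G = 1/M_Pl^2

rho_vac = m_p**6 / M_Pl**2   # in GeV^4
print(f"rho_vac ~ {rho_vac:.1e} GeV^4")   # ~ 5e-39 GeV^4
```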
This subtraction prescription is consistent with the conventional subtraction procedure for the divergent, ultra-local bare cosmological constant, because in infinitely large flat spacetime the corresponding contribution is proportional to the δ⁴(x) function, see (A5). At the same time, the nontrivial corrections to ∆ρ are non-local functions of the geometry and cannot be renormalized by any UV counterterms. Precisely this feature of non-locality implies that the relevant energy ∆ρ which enters the Friedmann equation (see (19) below) cannot be expressed in terms of a gradient expansion in any effective field theory. Additional arguments supporting the same claim, on the impossibility of formulating the relevant physics in terms of any local effective field such as the inflaton Φ(x), will be presented in the following subsection III C.

This prescription is also consistent with the renormalization group approach advocated in [29-31]. In fact, it is a direct consequence of the renormalization group approach, whereby we fix a physical parameter at one normalization point to predict its value at a different normalization point. In the present work, with the geometry S³ × S¹ and the proper length of the S¹ period being T, it implies that the vacuum energy in the Friedmann equation (4) is ρ ≡ ρ(T⁻¹) − ρ(0), where ρ(T⁻¹) is the energy of the gauge field holonomy on a compactified spacetime coordinate of length T. It can be interpreted as the RG normalization point µ ∼ T⁻¹, where T is the size of the compactified Euclidean time dimension given by (9). As we already mentioned, this prescription is consistent with the Einstein equations when the vacuum energy approaches zero, ∆ρ → 0, for flat spacetime, which itself may be considered as the limiting case T → ∞.

Finally, with the expression for the energy of the gauge field holonomy winding across the compactified coordinate of length T, whose derivation we give in the next subsection III C, one has (19), where Λ_QCD is the scale of the underlying QCD-like gauge field theory and c̄_T is some dimensionless O(1) constant whose precise value is not important for our argumentation.

Our final comment in this subsection goes as follows. As we already mentioned, the energy ∆ρ can be interpreted as a running cosmological constant within the renormalization group approach advocated in [29-31], with the only difference that odd powers of H are also included in the series, as a result of the IR sensitivity and non-locality (in contrast with conventional UV renormalization), as discussed in Appendix A. The linear correction (a particular example of an odd power of H) to the vacuum energy can be interpreted, in the terminology of [29-31], as the possibility of a running cosmological constant at very low µ ∼ T⁻¹ ≪ M_P. This running originates from non-perturbative and non-local physics in QFT (through the nontrivial holonomy along S¹) and cannot be seen at any finite level in perturbation theory, as the entire "non-dispersive" vacuum energy cannot be generated in perturbation theory; see some technical comments on this matter in Appendix A 2. As we will see in the next subsection, the leading correction to the vacuum energy (19) is in fact proportional to H, and this linear-in-H correction in the effective Friedmann equation is saturated by the IR-sensitive topological configurations with nontrivial holonomy, which cannot be expressed in terms of any local propagating degrees of freedom.
We define the "non-dispersive" vacuum energy E_vac in gauge theory in the conventional way, in terms of the path integral, see Appendix A 1. Precisely this vacuum energy enters all the relevant correlation functions, including the topological susceptibility as defined by (A1).

1. From the arguments of Appendix A 1 one can infer that the θ-dependent portion of the vacuum energy, E_vac(θ), cannot be identified with any propagating degrees of freedom. Furthermore, all effects are obviously non-analytic in the coupling constant, ∼ exp(−1/g²), and cannot be seen in perturbation theory. These arguments strongly suggest that there is no local effective field Φ(x) (inflaton) which could describe these features of the vacuum energy in gauge theories. These arguments are obviously consistent with our discussion in the previous subsection III B.

2. One can view the relevant topological Euclidean configurations which satisfy the properties from item 1 above as 3d magnetic monopoles wrapping around the S¹ direction. These configurations are characterized by the non-vanishing holonomy (A6), which eventually generates the linear correction ∼ 1/T to the vacuum energy density represented by eqs. (20) and (24) below.

3. In the cosmological context such configurations are highly unusual objects: they obviously describe non-local physics, as the holonomy (A6) is a non-local object. Indeed, the holonomy defines the dynamics along the entire history of the evolution of the system in the given confined phase: from the very beginning to the very end. There is no contradiction with causality in the system, as there are no physical degrees of freedom to propagate along this path, see item 1 above. Furthermore, this entire gauge configuration is a mere saddle point in the Euclidean path integral computation, which describes an instantaneous tunnelling event rather than the propagation of a physical degree of freedom capable of carrying information or a signal.

4. The generation of the "non-dispersive" energy E_vac is a highly non-local effect. In particular, eqs. (20) and (24) below explicitly show that small variations of the background produce a large linear correction ∼ T⁻¹ at small T⁻¹ → 0, as a result of this non-locality. Precisely this feature of non-locality implies that the relevant energy ∆ρ which enters the Friedmann equation (19) cannot be expressed in terms of a gradient expansion in any effective local field theory, as emphasized in section III B.

5. Our subtraction prescription, as explained in section III B, is consistent with all fundamental principles of QFT. What is more important is that the correction to the energy ∆ρ which enters the Friedmann equation (19) cannot be renormalized by any UV counterterms, as it is generated by non-local configurations.

6. The basic assumption of this work is that the same pattern (as highlighted in items 1-5 above) holds for other manifolds. In other words, we assume that the vacuum energy density for the S³ × S¹ manifold receives a linear correction T⁻¹ in comparison with the flat R³ × S¹ geometry, similar to the computations in hyperbolic space S¹ × H³, where the computations can be performed explicitly, as reviewed in Appendix A 2; see (20), where c_T is a coefficient of order one, similar to the computations in Appendix A 2.
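The displayed equation (20) does not survive in this text. A form consistent with the surrounding statements (a linear T⁻¹ correction, an O(1) coefficient c_T, the gauge scale Λ_QCD), offered here only as a hedged reconstruction and not as the paper's exact expression, is

$$\rho_{S^3\times S^1}(T)\;-\;\rho_{\mathbb{R}^3\times S^1}(T)\;\simeq\;c_T\,\frac{\Lambda_{\rm QCD}^3}{T},\qquad c_T = O(1),$$

so that with T ∝ 1/H (as used in Model-2 below) the holonomy contribution to the vacuum energy is linear in the Hubble rate.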
Formula (20) plays a crucial role in our arguments in sections IV and V. One can use the conventional thermodynamical relations to convince oneself that the correction ∼ T⁻¹ does not modify the equation of state. In fact, it behaves exactly in the same way as the cosmological constant does, i.e. (21), (22), where we use formula (A8) for F with the correction factor (20). The correction ∼ T⁻¹ does not modify the equation of state w = −1, which is normally associated with the cosmological constant contribution, see (23). Finally, using (20), the vacuum energy for the S³ × S¹ manifold can be represented as (24), where we redefined c̄_T ≡ (32π²/g⁴) c_T; the parameter c_T ∼ 1 is expected to be of order one (based on previous experience) but is not yet known.

We conclude this section with a few important comments which are relevant for the physical interpretation of the obtained results.

• All computations presented above are, as usual, performed in Euclidean spacetime, where the relevant gauge configurations describing the tunnelling processes are defined. Using this technique we computed the energy density ρ and the pressure P in Euclidean space. As usual, we assume that there is an analytic continuation to Lorentzian spacetime where the physical energy density has the same form. This is, of course, the conventional procedure for QCD practitioners, who normally perform computations on the lattice using the Euclidean formulation, while the obtained results are expressed in physical terms in Minkowski spacetime. In our context it means that the parameters P, ρ and the equation of state (EoS) as given by (23) are interpreted as the corresponding parameters in physical Lorentzian spacetime.

• Therefore, the driving force for the de Sitter behaviour in the Lorentzian space is not a local dynamical inflaton field Φ(x), which never emerges in our framework. Rather, the driving force in our scenario should be thought of as a Casimir-type vacuum energy which is generated by numerous tunnelling transitions in a strongly coupled gauge theory determined by the dimensional parameter Λ_QCD. Precisely this parameter replaces the dimensional parameters from the inflaton potential V[Φ(x)] which cosmology practitioners normally use in their studies.

• The equation of state (23) in Lorentzian spacetime obviously implies de Sitter expansion. The corrections due to the radiation ρ_r and matter ρ_m can be easily incorporated into the Friedmann equation written in Lorentzian spacetime. The interaction of the system with Standard Model (SM) particles will modify the EoS (23). Precisely these modifications to the EoS (23) will be responsible for the end of inflation, as described in section VI.

IV. THE HOLONOMY INFLATION. MODEL-1.

The origin of inflation in the model reviewed in the previous section II is based on two important ingredients: the vacuum energy (8) of a certain local nature, and the hidden sector of conformal fields, critically important for the contribution of the conformal anomaly and the generation of the thermal radiation in the effective Friedmann equation. The key technical element for successful inflation is the presence of the S¹, which emerges in the system as a result of the thermal initial state formulated in terms of the density matrix. We keep this first ingredient of our construction from the previous studies, as will be explained below in subsection IV A.
The new idea advocated in the present work is that the second important ingredient of this framework, the vacuum energy, may originate from some nontrivial non-local gauge configurations. This structure in our proposal is fundamentally different from all conventional inflationary models, because this source of the vacuum energy cannot be expressed in terms of any local degrees of freedom such as a scalar inflaton Φ(x). In our construction this source of the vacuum energy is generated by gauge configurations with nontrivial holonomy in the QCD-like field theory, as explained in subsection III B. This construction uses exactly the nontrivial topology S¹ × S³ of the gravitational instanton considered above. In its turn, the origin of this topology, the compactification of the Euclidean time on a circle S¹, is entirely due to a subtle effect of the conformal radiation, whereas the inflation-compatible value of the vacuum energy is the effect of this holonomy in the QCD-like gauge theory with a subplanckian scale, as explained in subsection IV B.

We treat Model-1, considered in this section, as a toy model where, on one hand, one can demonstrate all the crucial elements of the construction, while on the other hand one can adjust the parameters in such a way that all computations are under complete theoretical control and the semiclassical approximation is justified. Unfortunately, this model is not very natural, as it requires a very large instanton folding number m and a very large β to be consistent with observations. In the next section V we consider Model-2, which is naturally consistent with all presently available observations without a special selection of the parameters β or m. However, we must relax some technical requirements for Model-2, in which case the semiclassical approach is not formally justified.

A. The effect of the radiation generating S¹

The effect of the radiation related to the difference C − B/2 in (5) is indeed quite subtle, because the radiation itself is strongly suppressed. For a high folding number m ≫ 1, according to equation (15), it is proportional to (25) and is very small for instanton solutions at the tip of the triangular domain (10), with H² ≃ 1/(2B) and C ≃ 1/(4H²). At the same time, the mere existence of the radiation forces us to consider the topology S³ × S¹. If one ignores the radiation, then the topology S³ × S¹ reduces to S⁴. This easily follows from the effective Friedmann equation (4) with ρ = 3M²_P H², when it is cast, by solving it with respect to ȧ², into the form (26). For R = 0 it gives as a solution the Euclidean sphere, a(τ) = sin(Hτ)/H, of radius 1/H, whereas any however small amount of radiation provides a bounce of a back from some nonzero minimal value, instead of the a = 0 occurring at the pole of S⁴.

But the contribution of such spherical (Hartle-Hawking) instantons to the path integral is completely suppressed, as argued in [8, 22]. Technically, this suppression occurs as a result of the conformal anomaly, which changes the sign of the negative classical action on S⁴ and, moreover, makes it divergent at the poles of the 4-sphere at a → 0. Thus, it is entirely due to the radiation of conformal particles that the scale factor never shrinks to zero, which allows one to compactify the time on a circle and obtain the S³ × S¹ topology, which can bear a nontrivial gauge field holonomy.
B. QCD holonomy and inflation scale

The prescription we are advocating in the present work essentially corresponds to the identification of the vacuum energy (19) with the energy density ρ in the Hubble factor H² (8) of the effective Friedmann equation (4), i.e. (28). With the instanton period of the m-folded garland (14), which is inversely proportional to H, this immediately gives (29). This equation is correct for any value of BH². However, the bootstrap self-consistency solution always has the property that 2BH² = O(1), as shown in detail in the previous papers on CFT-driven cosmology. The corresponding results will be discussed at the end of this subsection, while now we want to make a few comments related to the small value of the parameter BH² ≪ 1, which can be achieved in Model-2, to be discussed in the next section V.

If we formally take BH² ≪ 1 in expression (29), the term BH² can be numerically neglected, in which case λ ∼ H and equation (19) assumes the form (30), which explicitly shows the linear dependence of the vacuum energy on the Hubble constant, ρ(H) ∼ H, as previously claimed. One can see from (20), (24) that the source of this linear correction to the vacuum energy is the term proportional to T⁻¹, which represents the inverse size of the S¹ manifold for our geometry and is proportional to H in our framework. Needless to say, this linear (with respect to H) correction is saturated by the IR topological configurations with nontrivial holonomy, which cannot be expressed in terms of any local propagating fields, as explained in Appendix A. Therefore, this term cannot be written as a conventional gradient expansion in an effective field theory, as it represents a global, rather than local, characteristic of the system.

Our next step is to make these computations self-consistent by satisfying the semiclassicality condition. Formally, this condition is expressed as the bootstrap equation with solution (15). The physical meaning of the enforcement of the bootstrap equation, as explained in the previous section II and in the original papers [8-10], is that the temperature of the system (and therefore the size of S¹) cannot be an arbitrary parameter. Instead, it must be determined by the system itself. In other words, the size of the manifold changes as a result of accounting for the feedback, to adjust to the changes of the vacuum energy. This formal enforcement obviously implies that all dimensional parameters must be of order M_P, as the only scale of the problem. A deviation from the Planck scale may only occur if some very small or very large dimensionless parameters are present in the system. In our Model-1 there are two free parameters: β, which effectively counts the number of degrees of freedom, and the instanton number m, which, in principle, assumes any value.

In this section, in Model-1, we want to proceed with self-consistent computations. Therefore, we enforce the semiclassicality conditions. In this case, for large m and the value of H determined by the bootstrap solution (15), this equation gives the expression for the parameter λ, which is equivalent to Λ_QCD, i.e. (32).
As this model is considered to be a toy model, we can take β as a free parameter and consider β ≫ 1. The key observation we want to make here is that both parameters, H and Λ_QCD, belong to the subplanckian scale according to (33), which justifies the use of the semiclassical expansion, discarding the negligible contribution of graviton loops⁶. Furthermore, there is a hierarchy of scales (34) which parametrically holds for large β ≫ 1. This hierarchy of scales once again demonstrates the self-consistency of the computations (on the gauge side), because the "non-dispersive" vacuum energy (19) related to the holonomy is only generated in the confined phase of the gauge theory at temperatures below Λ_QCD, which is automatically satisfied as a result of the hierarchy (34).

The inflation scenario in the Lorentzian domain described in Sect. II (Eqs. (16)-(17) above) holds also in the Gauge Holonomy Inflation model advocated in the present work. However, the exit from inflation takes place via a decay of H due to the helical instability, to be discussed below. As mentioned above, if one attempts to match the parameters m and β with observational numbers, one should take extremely high values of these parameters. Indeed, the model becomes phenomenologically compatible with the CMB data within the Starobinsky R² or Higgs inflation theory when the scale is H ∼ 10⁻⁶ M_P. It can be generated by the SLIH scenario [8] with β ∼ 10¹³. In particular, it matches the observable value of the spectral tilt n_s ≃ 0.97 when the number of instanton folds equals (18), m ∼ 10⁸. If we assume that, instead of the R²-mechanism or the Higgs potential, the vacuum energy is entirely due to the QCD holonomy mechanism of the above type, then from (32) it follows that (35).

The necessity of a very high value of β, which can now only be generated by a large hidden sector of conformal higher-spin fields [19], makes this model rather speculative, even though it justifies the semiclassical expansion below the gravitational cutoff of [20, 21]. Therefore we consider a second, much more natural model without any hidden sectors filled by a large number of conformal fields.

V. THE HOLONOMY INFLATION. MODEL-2.

The starting point in this section is the same set of equations discussed in the previous sections. However, in (4) we now ignore the higher-derivative terms ∼ B ȧ² and ∼ B ȧ⁴. This corresponds to disregarding the higher-derivative terms in the effective action, as the typical scales of the problem will be much lower than the Planck scale M_P. The corresponding set of equations has been reviewed above, but now we consider the limit BH² ≪ 1 and, for the convenience of the reader, repeat some important formulae below.

⁶ The subplanckian scale of the model does not imply, however, that the B-terms in (4), quadratic in curvature and generated by the conformal anomaly, can be discarded. The effective action generating the conformal anomaly is nonlocal and strong in the infrared, which is an artifact of the conformal invariance of the matter fields. In contrast to the quantum loops of the conformally non-invariant graviton, the loop effects of conformal matter are not suppressed by inverse powers of the Planck mass, and their gravitational effect is treated beyond perturbation theory.
A. Overview of the gravitational instanton solution

The scale factor a(τ) oscillates between the maximum and minimum values a±, and the explicit solution for a(τ) is also known, see Eq. (36). Now we implement the ideas formulated in the previous subsection. To proceed with this task, we identify the energy (24) with the vacuum energy entering the Friedmann equations, as discussed in the previous section, i.e. (37). The prescription we are advocating in the present work essentially corresponds to the identification of the vacuum energy (37) with the cosmological constant Λ/3 entering equation (8), i.e. (38). Up to this point, equation (38) identically coincides with our analysis in Eqs. (19)-(28) from the previous section.

B. Relaxing the semiclassical approximation

The new element of Model-2 is as follows. We relax the bootstrap-like equation and its solution (15) for this model. Essentially, we unlink a few parameters which were previously tightly linked. In particular, the de Sitter temperature, expressed in terms of the size of S¹, is unambiguously fixed by the radiation parametrized by the parameter β. This relation essentially fixes the size of S¹, which is generated by the radiation and determined by the back reaction of S¹ to the corresponding radiation. The size of S³ is also not a free parameter in the semiclassical gravitational instanton solution. Essentially, by relaxing these links we assume that there could be other physics which determines the size of the gravitational instanton (or of a complicated network of strongly interacting gravitational instantons). A self-consistent semiclassical approximation obviously cannot be justified when some parameters enter from different physics. In Appendix A 3 we overview a well-known example in strongly coupled gauge theory where the holonomy (and the corresponding size of the manifold) is not fixed by hand, but rather is determined dynamically by strong quantum fluctuations. We suspect that similar physics may emerge here.

In any case, for Model-2 we unlink the size of S¹ from the radiation and treat it as a free parameter. To simplify our formulae, we also assume the lowest possible instanton number, m = 1, in all expressions in this section, which should be contrasted with our studies in the previous section analyzing Model-1, where a consistent description exists only for very large m ∼ 10⁸. This simplification does not modify our main results, as the instanton number is always accompanied by the dimensional parameter Λ_QCD and the dimensionless coefficient c̄_T, which are not yet known and can always be redefined⁷.

With these preliminary remarks, and after substituting T = π/H (which is a good approximation in the regime CH² ≪ 1 we are interested in, see below), equation (38) can be rewritten in the form (39). This equation is very important, as it relates the Hubble constant H for our Euclidean geometry S³ × S¹ to the vacuum energy generated by the gauge configurations with nontrivial holonomy, see (40).

A few comments are in order. First of all, the hierarchy of scales (33), (34) characterizing Model-1 from the previous section still holds in the present case. However, in Model-2 the hierarchy emerges not as a result of an extremely large parameter β ≫ 1 but rather as a result of a new scale of the problem, Λ_QCD, which is a free dimensional parameter of the system, generated by dimensional transmutation in a classically conformal field theory; it plays the same role in $\overline{\rm QCD}$ as Λ_QCD ≃ 170 MeV plays in QCD physics.
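Since the displayed equations (39), (40) do not survive in this text, here is a minimal numerical sketch under the stated assumptions: the holonomy energy is linear in H, Δρ ≈ c̄_T Λ³ H/π (the T⁻¹ correction of (20), (24) evaluated at T = π/H), and the Friedmann relation is 3M²_P H² = Δρ. The values of c̄_T and Λ below are illustrative only.

```python
# Minimal sketch (not the paper's exact eqs. (39)-(40)): solve
#   3 * M_P^2 * H^2 = cbar_T * Lambda^3 * H / pi
# for H, i.e. the self-consistent Hubble rate sustained by a holonomy
# vacuum energy that is linear in H.  Units: M_P = 1.
import math

cbar_T = 1.0    # O(1) coefficient, assumed
Lam    = 1e-2   # gauge scale in Planck units, illustrative

H = cbar_T * Lam**3 / (3 * math.pi)   # nontrivial root of the equation above
print(f"H ~ {H:.2e} M_P")             # ~ 1e-7 M_P for Lambda = 1e-2 M_P
print("hierarchy H << Lambda << M_P:", H < Lam < 1.0)
```

The quadratic-versus-linear structure makes the hierarchy automatic: H/Λ ∼ Λ²/M²_P is small whenever the gauge scale is subplanckian.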
The parameter Λ_QCD/M_P ≪ 1 plays the same role in Model-2 as the parameter β⁻¹/⁶ ≪ 1 plays in Model-1, as expressed by eq. (33). The crucial difference, however, is that we unlink the size of S¹ from the radiation by treating Λ_QCD as a free dimensional parameter which defines a new gauge theory, coined $\overline{\rm QCD}$. It is assumed⁸ at this point that the size of the S¹ where the holonomy is defined is determined by different physics, as discussed in Appendix A 3.

C. Subtle effects of the radiation

Due to the hierarchy of scales mentioned in the previous subsection, one can explicitly check that the relevant parameter ε ≡ 4CH² entering Eq. (36) is very small, see (43). Indeed, as follows from Eq. (36), ε ≤ 1, because for larger ε the turning points disappear and a monotonically changing a(τ) cannot form a periodic solution, i.e. a saddle point of the partition function path integral. Thus, in view of (5), the amount of radiation R(η) is always bounded from above: though the Universe is born not in the vacuum state, it is still essentially cold. The hottest possible Universe, corresponding to the maximal value ε = 1 and minimal η = π√2, has a moderate maximal value of R(η) = O(1). The actual smallness of ε assumed above follows from the subplanckian value of H ≪ M_P, because Eq. (5) then expresses ε through β and the small ratio H/M_P. Thus one can simplify the formula (36) and present the approximate solution for a(τ) in the form (44), which is valid everywhere except at points close to the zeroes of sin(Hτ). In the approximation (44) we neglected the terms ∼ ε, in accordance with (43). In particular, a(τ = 0) is in fact ∼ √ε rather than zero, and the exact solution (36) is required for the computation of η, see (46).

Now we consider only single-folded instantons and compute the full period of the conformal time η, which can be rewritten as (45) and reduced to an incomplete elliptic integral. Within logarithmic accuracy it reads (46). During this long evolution, represented by the conformal Euclidean time (46), the scale factor a(τ) makes drastic changes in size, as one can see from the estimate (47). One should observe here that there is a qualitative difference from the discussion of Model-1, where the ratio (47) was always parametrically of order one. In the present Model-2 this ratio (47) can be parametrically very large, which implies that the largest and smallest sizes in the garland construction can have parametrically different scales.

⁸ We would like to make a short comment here on why and how such an unlinking of these two parameters may occur. In the weakly coupled semiclassical approximation of Model-1, the two parameters (the intensity of the radiation, characterized by the size of S¹, which in its turn depends on the anomaly parameter β in view of the bootstrap equation) are tightly linked. In strongly coupled gauge theory, as reviewed in Appendix A 3, the holonomy and the size of the effective S¹ are determined dynamically. This is precisely the reason why these two parameters in the strongly coupled regime are not linked. As reviewed in Appendix A 3, it is believed that in strongly coupled QCD the holonomy is also determined by the dynamics, the so-called "confining holonomy", when the instanton dissociates into N constituents. Such a phenomenon may only occur for topological configurations with nontrivial holonomy (A7). The known dependence of the vacuum energy on θ as cos(θ/N) is an explicit manifestation of the same nontrivial holonomy.
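The statements around (36) and (43)-(47) can be reproduced with a short numerical sketch, assuming the standard closed-de-Sitter-plus-radiation form of the simplified (B → 0) Euclidean Friedmann equation, ȧ² = 1 − H²a² − C/a²; this assumed form yields turning points only for ε = 4CH² ≤ 1, a parametrically large ratio a₊/a₋ ≈ 2/√ε, and a logarithmically growing conformal period.

```python
# Sketch of the simplified (B -> 0) Euclidean dynamics, assumed here as
#   adot^2 = 1 - H^2 a^2 - C / a^2   (closed de Sitter + radiation).
# Turning points: a_{+-}^2 = (1 +- sqrt(1 - eps)) / (2 H^2), eps = 4*C*H^2.
import math
from scipy.integrate import quad

H, eps = 1.0, 1e-4                    # illustrative values, eps << 1
C = eps / (4 * H**2)

a_plus  = math.sqrt((1 + math.sqrt(1 - eps)) / (2 * H**2))
a_minus = math.sqrt((1 - math.sqrt(1 - eps)) / (2 * H**2))
print(a_plus / a_minus, 2 / math.sqrt(eps))   # parametrically large ratio

def integrand(a):                      # d(eta) = d(tau)/a = da / (a * adot)
    val = 1.0 - H**2 * a**2 - C / a**2
    return 1.0 / (a * math.sqrt(max(val, 1e-300)))

eta_half, _ = quad(integrand, a_minus, a_plus, limit=200)
# Full single-fold period vs. its log-accuracy estimate ln(16/eps):
print(2 * eta_half, math.log(16 / eps))  # agree up to O(1), i.e. within log accuracy
```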
We conclude with the following comment. The mere existence of the radiation forces us to consider the topology S³ × S¹. If one ignores the radiation and the presence of S¹, then the system defined on S³ × S¹ becomes defined on S⁴, in which case the corresponding contribution to the path integral is strongly suppressed, as argued in [22]. Technically, this suppression occurs as a result of the conformal anomaly, which changes the sign of the classical Euclidean action. In addition, the positive action which is generated due to the conformal anomaly is divergent at a → 0 for S⁴. This divergence leads to an infinitely strong suppression of these vacuum S⁴ configurations; see [22] for the comments and details. One should also add that in Model-2 the relevant S¹ structure might be generated not only by the radiation but also by the quantum interactions in strongly coupled gauge theories, as argued in Appendix A 3, such that the size of the S¹ is a free parameter of the model, determined by the dimensional parameter Λ_QCD of the strongly coupled $\overline{\rm QCD}$ gauge theory.

VI. HOW THE HOLONOMY INFLATION ENDS

The main goal of this section is to argue that the holonomy inflation paradigm advocated in this work is consistent with all presently available observations. One should emphasize that a theory describing the end of inflation (similar to the pre-reheating and reheating stages in the conventional inflationary scenario) in our framework is yet to be developed. The required techniques which would answer the relevant questions are formulated in subsection VI B, items 1-4. Therefore, this section should be treated as the description of a vision and foresight for future development, rather than as a final formulation of the theory describing the end of inflation.

We focus on three items to demonstrate the consistency of the framework. First of all, we want to argue that the equation of state (EoS) almost identically coincides with the EoS normally attributed to the cosmological constant. Secondly, we want to argue that the "non-dispersive" vacuum energy, which plays the key role in this framework, is capable of transferring its energy to the real propagating gauge fields of the Standard Model (SM). Therefore, the topological inflation can end with a successful "reheating epoch". Finally, we estimate the number of e-folds N_infl in this framework to show that it is perfectly consistent with the presently available observations.

A. Equation of State

We start with the following generic remark. Consider the holonomy which assumes a nontrivial value along the S¹ directed along the time direction, as discussed in the previous section. In this case the Hubble constant and the energy density remain constant even after the nucleation from the gravitational instanton, in spite of the fact that the topology of the manifold is not S³ × S¹ anymore. Furthermore, the system is not described by the Euclidean metric after the nucleation, but rather assumes the conventional Lorentzian signature.

The corresponding Hubble constant H is unambiguously determined by the dimensional parameter Λ_QCD of the strongly coupled gauge theory, as equation (40) states. This solution after the nucleation corresponds to an inflationary (almost) de Sitter behaviour, such that the EoS and the scale factor a(t) assume the form (48), in accordance with equations (22) and (23).
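Equation (48) encodes the statement that w = −1 forces quasi-de Sitter expansion; the standard one-line check, from the Lorentzian continuity and Friedmann equations, is

$$\dot\rho + 3H(\rho + P) = 0,\quad P = -\rho \;\Longrightarrow\; \dot\rho = 0,\qquad H^2 = \frac{\rho}{3M_P^2} = {\rm const} \;\Longrightarrow\; a(t) \propto e^{Ht}.$$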
The inflationary regime described by (48) would be the final destination of our Universe if the interaction of the $\overline{\rm QCD}$ fields with SM particles were always switched off. One should emphasize that the driving force for this inflationary de Sitter behaviour (48) in the Lorentzian space is not a local inflaton field Φ(x), which is not present in our system at all. Rather, the driving force should be thought of as a Casimir-type vacuum energy generated by numerous tunnelling transitions in a strongly coupled gauge theory determined by the dimensional parameter Λ_QCD. Precisely this parameter replaces the dimensional parameters from the inflaton potential V[Φ(x)] which cosmology practitioners normally use in their studies. When the coupling of the $\overline{\rm QCD}$ fields with SM particles is switched back on, the end of inflation is triggered precisely by this interaction, which itself is unambiguously fixed by the triangle anomaly, as we discuss below.

B. Anomalous coupling of the "non-dispersive" vacuum energy with gauge fields

Before we explain the structure of the relevant interaction, we want to make a few comments in order to explain the physical nature of such an unusual coupling between propagating and non-propagating degrees of freedom. First of all, we have to remind the reader that the physics responsible for generating the "non-dispersive" vacuum energy (dubbed the "strange energy" in [5-7]), which eventually leads to the de Sitter behaviour (48), cannot be formulated in terms of any physical propagating degrees of freedom, as discussed in great detail in section III C. Instead, the generation of this energy can be explained in terms of tunnelling transitions between topologically distinct but physically identical |k⟩ states.

The corresponding technique to describe these tunnelling transitions is normally formulated in terms of the Euclidean path integral and the corresponding field configurations interpolating between topologically distinct sectors. In conventional QFT computations the corresponding procedure selects a specific superposition of the |k⟩ states, which generates the |θ⟩ state with energy E_vac(θ). In the context of inflation, when the background assumes a nontrivial geometry (instead of R⁴ in the conventional case), the corresponding computations become profoundly more complicated, though the procedure is well defined:

1. One should describe the relevant Euclidean configurations satisfying the proper boundary conditions for a nontrivial geometry (similar to the calorons with nontrivial holonomy reviewed in Appendix A 2);

2. One should compute the corresponding path integral, which includes all possible positions and orientations of the relevant gauge configurations;

3. The corresponding computations for the vacuum energy ρ and pressure P must be done with all fields which couple to the $\overline{\rm QCD}$ gauge theory. Precisely this coupling is responsible for transferring the vacuum energy to SM particles;

4. As the last step, one should subtract the corresponding expression computed on R⁴, as explained in section III B. Precisely this remaining part of the vacuum energy is interpreted as the relevant energy which enters the Friedmann equation; it cannot be removed by any subtraction procedure and cannot be renormalized by any UV counterterms. The corresponding formulae for ρ, P will depend, in general, on the properties of the manifold and the relevant coupling constants.
While these steps are well defined in principle, it is not feasible to perform the corresponding computations, because even the first step in this direction, finding the relevant Euclidean configurations satisfying the proper boundary conditions for a nontrivial geometry, is as yet unsolved. Nevertheless, this procedure shows, in principle, that the de Sitter behaviour (48) in this framework emerges without any local inflaton field Φ(x), as explained in the previous section VI A, because the physical force driving the inflation has a completely different nature in this proposal. Fortunately, the key ingredients relevant for our future studies can be understood in an alternative way, in terms of an auxiliary topological non-propagating field b(x, H), which effectively describes the relevant infrared (IR) physics, representing the key elements of the steps 1-4 highlighted above.

The corresponding formal technique is widely used in the particle physics and condensed matter (CM) communities. For the convenience of the reader we provide (within our cosmological context) the main ideas and results of this approach in Appendix B. In particular, this approach is extremely useful in the description of topologically ordered phases, where the IR physics is formulated in terms of a topological Chern-Simons (CS)-like Lagrangian. One should emphasize that the corresponding physics, such as the calculation of the braiding phases between quasiparticles, the computation of the degeneracy, etc., can be computed (and in fact originally had been computed) without the Chern-Simons Lagrangian and without auxiliary fields. Nevertheless, the discussion of the IR physics in terms of a CS-like effective action has proven to be very useful, beautiful and beneficial. In our case, unfortunately, we cannot proceed with explicit computations along the lines 1-4 explained above. Therefore, the alternative technique in terms of the auxiliary topological non-propagating fields is the only remaining option in our case.

We refer to Appendix B, where we overview the corresponding technique in the context of inflationary cosmology. We also explain there the physical meaning of this auxiliary field b(x, H), which should be thought of as the source of the topological fluctuations, similar to the axion field; see below. Precisely this auxiliary non-propagating field eventually generates the "non-dispersive" energy (37) and consequently leads to the de Sitter behaviour (48). The auxiliary field b(x, H) effectively describes (through the correlation functions) the modification of the tunnelling rates between the topological |k⟩ sectors as a result of the external background field parametrized by H. In other words, the profoundly complicated procedure of summation over all topological configurations interpolating between the |k⟩ sectors in the background parametrized by H, as outlined above (steps 1-4), can be expressed in terms of the auxiliary field b(x, H), which, of course, remains a non-propagating auxiliary field in the background H.

The only information required for the future analysis is that the relevant auxiliary field b(x, H), saturating the "non-dispersive" vacuum energy (37), couples to the SM particles precisely in the same way as the θ parameter couples to the gauge fields. This claim is explained in Appendix B and is based on the analysis of the exact anomalous Ward identities. In many respects the coupling of the b(x, H) field to the gauge fields is unambiguously determined, similar to the unique coupling of the η′ field to the gluons, photons and gauge bosons.
As a consequence of this fundamental feature, the topological auxiliary field b(x, H) is in fact an angular topological variable with the same 2π periodicity as the original θ parameter. As is well known, the θ parameter can be promoted to a dynamical axion field θ(x) by adding the canonical kinetic term [∂_μθ(x)]² to the effective Lagrangian. The difference between the b(x, H) field and the dynamical axion field θ(x) is that the auxiliary topological field b(x, H) does not have a conventional axion kinetic term.

For simplicity we also assume that QCD has a single quark flavour, N_f = 1, which couples to the non-abelian QCD gluons as well as to the E&W gauge fields, similar to conventional QCD quarks. This is precisely the coupling which provides the interaction between the (conjectured) high-energy QCD and the low-energy E&W gauge fields. It is natural to assume that the mass of the corresponding η′ is of order m_η′ ∼ Λ_QCD, as in the QCD case. Therefore, this heavy degree of freedom can be safely ignored in what follows. In other words, the desired coupling of the auxiliary field b(x, H) to the E&M gauge field is [5], schematically (we display the structure of the interaction; the overall normalization is fixed by the triangle anomaly),

L_int ∼ (α(H) N Q² / 8π) [θ − b(x, H)] F_μν F̃^μν,   (49)

where α(H) is the fine-structure constant measured during the period of inflation, Q is the electric charge of the QCD quark, N is the number of colours of the strongly coupled QCD, F_μν is the usual electromagnetic field strength, and F̃^μν is its dual. As we already mentioned, the coupling (49) is unambiguously fixed because the auxiliary b(x) field always accompanies the θ parameter in the specific combination [θ − b(x, H)], as explained in Appendix B, and it describes the anomalous interaction of the topological auxiliary field b(x, H) with E&M photons. In formula (49) we also ignored the heavy η′ field, which couples in the same way as the auxiliary b(x, H) field, i.e. through [θ − η′ − b(x, H)]. However, the η′ field is very heavy, as explained above, in contrast with the auxiliary field, which generates a topologically protected pole as explained in Appendix B. The coupling of b(x, H) to the other E&W gauge bosons can be unambiguously reconstructed as explained in [5], but we keep a single E&M field F_μν to simplify the notation and to emphasize the crucial element of the dynamics: the helical instability which triggers the end of inflation, see the next subsection VI C. Based on the coupling (49), we present our numerical estimates for the number N_infl of e-foldings in section VI D. Finally, in subsection VI E we interpret the obtained results and give an intuitive explanation of why and how the non-dynamical auxiliary field b(x, H) can nevertheless produce real physical propagating degrees of freedom in a time-dependent background parametrized by H.

C. The helical instability and the end of inflation

It has been known for quite some time that the interaction (49), in many respects, has a unique and mathematically beautiful structure with a large number of very interesting features. The most profound property, crucial for our analysis of the inflationary Universe, is that the topological term (49), together with the conventional Maxwell term F²_μν, leads to an instability with respect to photon production whenever ḃ(x, H) does not vanish. This is the so-called helical instability; it has been studied in the condensed matter literature [35] as well as in the particle physics literature, including some cosmological applications [36].
In the context of our studies, the closest system where the helical instability develops is the heavy ion collision system [37], wherein ḃ(x, H) can be identified¹⁰ with the so-called axial chemical potential μ₅. One can explicitly demonstrate that the interaction (49) leads to exponential growth of the low-energy photon modes; schematically (we reproduce only the parametric structure of the growth),

A_k(t) ∝ exp(t/τ_inst),  τ_inst⁻¹ ∼ α² μ₅.   (50)

The growth (50) signals that an instability of the system with respect to the production of real photons develops [37]. It is also known that the fate of this instability is to reduce the axial chemical potential μ₅ which was its source. In the inflationary context the corresponding instability reduces H, which plays the role of μ₅, see the discussion below. One should also note that the parameter μ₅ in heavy ion collisions is likewise not a dynamical field, but rather an auxiliary fluctuating field which accounts for the dynamics of the topological sectors in QCD, similar to our case, where ḃ(x, H) describes the dynamics of the topological sectors in QCD.

This short detour into the nature of the helical instability resulting from the interaction (49) has direct relevance to our studies, because the auxiliary field b(x, H) entering eq. (49) exhibits all the features of the parameter μ₅ which was the crucial element in the analysis of the helical instability in heavy ion collisions. Indeed, both auxiliary fields originate from the same physics: they both describe the dynamics of the topological sectors in strongly coupled gauge theories. In terms of physics, these non-propagating fields effectively account for the long-range variation of the tunnelling processes as a result of an external background, expressed in terms of H for inflation and in terms of μ₅ for heavy ion collisions, respectively; see some additional comments on this analogy in Appendix B of ref. [5].

The net result of the interaction (49) and the instability (50) is that the holonomy inflation in this framework inevitably ends by transferring the "non-dispersive" vacuum energy, proportional to H as eq. (30) states, into real propagating gauge fields. One can interpret this energy transfer as a back-reaction on the auxiliary field b(x, H) as the system adjusts to the interaction (49). How can this back-reaction effect be computed, at least in principle? The corresponding first-principles computations, following items 1-4 of Section VI B, are not presently feasible, as we already mentioned. An effective description in terms of the dynamics of the auxiliary field b(x, H) can, in principle, be carried out along the lines mentioned at the very end of Appendix B.

One may also wonder whether the entire vacuum energy will be transferred to radiation in the form of SM gauge fields, which is the key element of a successful graceful exit from inflation. Our comment here is that the transfer of the vacuum energy in this framework is a continuous process rather than a one-time event. This is the same back-reaction effect mentioned in the previous paragraph: the radiation decreases the magnitude of the vacuum energy, and this process continues as long as the vacuum energy still remains a source of radiation, i.e. as long as ḃ(x, H) ≠ 0.
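The self-quenching character of the instability (radiation drains the source μ₅ and, by analogy, H) can be illustrated with a deliberately crude toy model; the two equations implemented below are our own illustrative parameterization and are not taken from refs. [35-37]:

```python
# Toy model of the self-quenching helical instability (our own crude
# parameterization for illustration; not an equation from refs. [35-37]).
# A mode amplitude A grows at the rate ~ alpha^2 * mu5, while the source
# mu5 is drained in proportion to the radiated power ~ A^2.
alpha = 1.0 / 137.0            # fine-structure constant (illustrative)
g     = alpha**2               # instability rate per unit mu5
mu5, A, c = 1.0, 1.0e-6, 1.0   # source, seed amplitude, O(1) coefficient
dt = 0.01 / (g * mu5)          # time step, small vs the growth time

for _ in range(500_000):
    A   += dt * g * mu5 * A    # exponential growth while the source lasts
    mu5 -= dt * c * g * A**2   # back-reaction: radiation drains the source
    mu5 = max(mu5, 0.0)

print(f"final amplitude A ~ {A:.2f}, residual source mu5 ~ {mu5:.2e}")
# Outcome: A grows quasi-exponentially until mu5 is exhausted, the
# analogue of the ~H vacuum energy being dumped into SM radiation,
# which is what terminates the de Sitter stage in the text.
```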
The physical picture of this energy transfer is as follows¹¹. A non-vanishing ḃ(x, H) ≠ 0 leads to particle production. This radiation of particles obviously decreases the value of ḃ(x, H) (and the corresponding vacuum energy), which is the source of this radiation. In terms of real physical processes, this energy transfer corresponds to the modification of the tunnelling transition rate, with emission of real particles, in a nontrivial background which itself varies. The radiation continues as long as the background deviates from flat Minkowski space-time.

The technical description of this energy transfer cannot be carried out in a conventional way, say in terms of a physical propagating degree of freedom. For example, we cannot model these radiation processes by adding a kinetic term to the b(x, H) field, because the corresponding anomalous Ward Identities cannot be satisfied with physical propagating degrees of freedom, as explained in Appendix B. We think that the holographic description mentioned in Appendix B offers a possible framework which could potentially accommodate the dynamics of the auxiliary b(x, H) field, the strange features of the non-dispersive vacuum energy, and the back-reaction effects due to the coupling (49) with the SM fields. At present we do not know how to formulate a proper computational framework to answer this question.

¹¹ The intuitive picture presented here is based on our understanding of the fate of the helical instability in heavy ion collisions, which reduces the axial chemical potential μ₅, itself the source of the instability.

To conclude this subsection, we note that the energy transfer between non-dynamical auxiliary fields and propagating dynamical fields can, in principle, be tested in a tabletop experiment based on the Maxwell system. We explain the relevant physics and offer a possible design for such a tabletop experiment in subsection VI E, where this unusual effect can, in principle, be tested in simplified settings.

D. Estimates for the e-folds

The number of e-folds in the holonomy inflation is determined by the time τ_inst at which the helical instability fully develops, which explains the subscript "inst". This is exactly the time scale at which a large portion of the energy density ρ from eq. (30), which eventually generates the Hubble constant H according to (40), is transferred to the light SM fields. The corresponding time scale for the heavy ion system is known [37] and is given by τ_inst⁻¹ ∼ μ₅α². For our system μ₅ should be interpreted as ḃ(x, H) ∼ H, the only relevant scale of the problem; see also a few additional arguments in Appendix B supporting this estimate. At the moment τ_inst the de Sitter growth (48) can no longer be maintained, as its source ∼ H is completely exhausted by the transfer of its energy to the gauge fields of the SM. Therefore we arrive at the following order-of-magnitude estimate for the number of e-folds N_infl in the QCD inflationary paradigm:

N_infl ∼ H τ_inst ∼ α⁻²(H),   (51)

where the number of e-folds N_infl is, by definition, the coefficient in front of H⁻¹ in the expression for the time scale τ_inst. At this moment the energy density ρ from eq. (30) ceases to be the dominant portion of the energy of the system.
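A quick numerical reading of (51), with the O(1) coefficient set to unity and a few illustrative values of α(H) (the actual value of the coupling at the inflationary scale is an assumption here), shows why N_infl comes out parametrically large:

```python
# Order-of-magnitude reading of (51): N_infl ~ 1/alpha(H)^2.
# The O(1) prefactor and the value of alpha(H) at the inflationary
# scale are assumptions for illustration only.
for inv_alpha in (30, 50, 100, 137):
    alpha = 1.0 / inv_alpha
    n_infl = 1.0 / alpha**2
    print(f"alpha(H) = 1/{inv_alpha:<3d} -> N_infl ~ {n_infl:,.0f}")
# Any alpha(H) in this range comfortably exceeds the N_infl ~ 60
# usually quoted as the minimum needed to solve the horizon problem.
```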
The key element of this holonomy inflationary scenario is that the number of e-folds N_infl at which the de Sitter behaviour (48) ends is determined in this framework by the gauge coupling constant α(H), rather than by the dynamics of an ad hoc inflaton field Φ governed by some ad hoc inflationary potential V(Φ).

In the next subsection VI E we explain the mechanism of the energy transfer at the end of inflation. It is very different from the conventional mechanism, in which a propagating inflaton Φ couples to physical particles and transfers the energy. In subsection VI F we compare our framework with the conventional inflationary scenario to exhibit some similarities and differences between the two approaches.

E. Interpretation

In this subsection we want to explain the fundamentally new type of particle production which is the key element of all our discussions in this section on how inflation ends in this framework, due to the coupling (49) of the auxiliary field to the real physical gauge fields of the SM.

The main point is that the driving force of inflation in this framework is the non-dispersive vacuum energy which generates the EoS (48). Without the anomalous coupling (49) it would be the final destination of the Universe. How does this coupling produce particles? The topological fluctuations with typical scale ∼ Λ_QCD which saturate the vacuum energy are slightly modified in the presence of a background with scale H ≪ Λ_QCD. This time-dependent background generates particle production at a rate ∼ H, which is precisely why inflation eventually ends in this framework on the time scale (51).

We want to test this mechanism of particle production from the "non-dispersive" vacuum energy using Maxwell theory as a playground. The corresponding Maxwell system can, in principle, be designed and fabricated with existing technology, see the relevant references in the concluding Section VII C. Therefore, in principle, this novel phenomenon can be tested in a tabletop experiment in a lab.

The basic idea is that there is a new contribution to the Casimir pressure which emerges as a result of tunnelling processes when the Maxwell system is formulated on a nontrivial manifold permitting E&M configurations with nontrivial topology, π₁[U(1)] = Z. Precisely these tunnelling transitions between physically identical but topologically distinct states play the same role in the Maxwell system as the topologically nontrivial configurations in QCD. The corresponding extra energy generated by these transitions is the direct analogue of the "non-dispersive" contribution to the energy which is the key player of the present work, explicitly entering (30), (28) and (38) in the previous sections. As in our studies of the non-abelian gauge theories reviewed in Appendix A, this extra energy in the Maxwell system also cannot be formulated in terms of conventional propagating photons with two transverse polarizations.
If the same system is considered in the background of a small external time-dependent field, then real physical particles will be emitted from the vacuum, similar to the dynamical Casimir effect (DCE), in which photons are radiated from the vacuum due to time-dependent boundary conditions. Essentially, the "reheating epoch" advocated in this section, during which the vacuum energy radiates real particles in a time-dependent background, is analogous to the DCE. The difference is that in the conventional DCE the virtual particles of the vacuum become real propagating particles in a time-dependent background and get emitted; in our case it is the E&M configurations describing the interpolations between different topological sectors that get excited in the time-dependent background and emit real particles, see the concluding section VII C for references and details.

We hope this intuitive explanation provides the basic conceptual picture of how particles can be produced from the vacuum, which represents the key element of the graceful exit from inflation.

F. Relation to the conventional inflationary scenario

The goal of this section is to collect a number of comments, made in different places in this work, on the (possible) connection between our framework and the conventional description in terms of a scalar inflaton Φ(x) governed by a potential V[Φ]. For obvious reasons there is no one-to-one correspondence between such drastically different descriptions. Nevertheless, these comments may hopefully generate some thoughts about the source of the vacuum energy in Nature and help to find a proper technical framework to describe it.

We start with a few generic remarks. The topological inflationary mechanism as formulated in this proposal is fundamentally non-local in nature and cannot be modelled by any local effective inflationary potential V(Φ). Furthermore, this mechanism is fundamentally "non-dispersive" in nature and cannot be described in terms of any propagating physical degree of freedom, such as an inflaton Φ(x) with canonical kinetic term (∂_μΦ(x))². Further to this point: we introduced the topological auxiliary fields a(x, H) and b(x, H) in Appendix B to describe the physics in terms of effective long-range fields which in principle capture the relevant IR physics. These fields are not propagating, in contrast with the inflaton field Φ(x). Their physical meaning, as explained in Appendix B, is the following: a(x, H) describes the distribution of the topological density in the system, while b(x, H) acts as an axion field (without a kinetic term, though), being the source of the topological density distribution.
These obvious differences between drastically different frameworks must lead to distinct observational consequences. In particular, the conventional computations of the cosmological perturbations are based on treating the inflaton Φ(x) as a conventional scalar field with canonical kinetic term (∂_μΦ(x))². The corresponding results can be expressed in terms of the vacuum energy ρ and pressure P, as formulated in [3]. However, the mere existence of a local inflaton field Φ(x) has been assumed in the computations of [3], even though the final results are presented in terms of the energy-momentum tensor. Computations in our framework require a different technique, which is not yet developed, as explained at the very beginning of this section VI. Therefore it is natural to expect that the outcome would be different even when the final results are expressed in terms of the energy-momentum tensor parameters ρ and P. However, as the corresponding technical tools are not yet developed, it is very hard to quantify these differences.

In what follows we want to make a few comments on some similarities between the two distinct approaches. In particular, we would like to identify (on an intuitive level) the topological auxiliary fields a(x, H) and b(x, H) with the inflaton field Φ(x), in the sense that both eventually generate the de Sitter behaviour, and both approaches lead to the inflationary EoS (48). The fundamental difference between the two is that the inflaton Φ(t) satisfies a classical equation of motion and depends on time t, while a(x, H) and b(x, H) are truly quantum objects, such that all observables must in principle be expressed exclusively in terms of correlation functions and expectation values, with the time dependence entering the physics exclusively through the Hubble parameter H. Still, there are some hints which apparently suggest that links between the two approaches may exist.

Indeed, let us introduce a few important parameters normally used in the conventional inflationary analysis and compare them with our description. For this purpose we introduce the conventional slow-roll parameters, see e.g. [75]:

ε ≡ (M_P²/2) (V′/V)²,  η ≡ M_P² V″/V.   (52)

For example, the computation of the number of e-foldings in the conventional slow-roll approximation and the estimate (51) in the holonomy inflationary scenario both produce numerically large magnitudes. In the conventional approach one can use the following relation [75]:

N_infl ≃ (1/M_P²) ∫_{Φ_end}^{Φ_in} dΦ (V/V′) ∼ ε⁻¹.   (53)

The large numerical value N_infl ≫ 1 in the conventional approach is due to the specific choice of the potential V entering (52), for which the integrand in (53) is parametrically large, proportional to ε⁻¹ (a worked numerical example is given below). It should be compared with the holonomy inflationary scenario, where N_infl ≫ 1 is parametrically large due to the enhancement factor α⁻², as the estimate (51) suggests.

We conclude this section with a few generic comments. First of all, while we identify (on the intuitive level) the auxiliary topological fields with the inflaton, the a(x, H) and b(x, H) fields remain quantum (not classical) fluctuating fields, saturating the relevant correlation functions. We observed above that there are several instances in which the holonomy inflationary scenario behaves in very much the same way as the conventional description represented by the formulae (52), (53) discussed above. Is this a coincidence, or is there a deeper reason for these relations?
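Before returning to this question, here is the worked example promised above: a minimal numerical sketch of (52) and (53) for the textbook quadratic potential V = m²Φ²/2 (our illustrative choice; it is not a potential advocated anywhere in this work), showing how N_infl ≫ 1 arises from ε⁻¹:

```python
# Worked slow-roll example (illustrative; V = m^2 Phi^2 / 2 is our choice,
# not a potential advocated in the text). Units: M_P = 1.
import numpy as np

def V(phi):  return 0.5 * phi**2          # m^2 factors out of (52), (53)
def dV(phi): return phi

def epsilon(phi):                          # first slow-roll parameter, eq. (52)
    return 0.5 * (dV(phi) / V(phi))**2

phi_in  = 15.0                             # assumed initial value, phi_in >> M_P
phi_end = np.sqrt(2.0)                     # epsilon(phi_end) = 1 ends inflation

# e-folds from eq. (53): N = int_{phi_end}^{phi_in} dphi V/V'
phi = np.linspace(phi_end, phi_in, 100001)
N_numeric = np.trapz(V(phi) / dV(phi), phi)
N_closed  = (phi_in**2 - phi_end**2) / 4.0

print(f"N_infl (numeric) = {N_numeric:.2f}, closed form = {N_closed:.2f}")
# N ~ 56 >> 1 here only because phi_in >> M_P: precisely the large
# initial value that point 2 of section VII B contrasts with the
# holonomy scenario, where N_infl ~ alpha^{-2} instead.
```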
We formulate the same question in a different way: is it possible to make any connection between the description in terms of the auxiliary a(x, H) and b(x, H) fields and a local inflaton field Φ(x) satisfying a classical equation of motion determined by the potential V[Φ]? We do not know how to do it. The main obstacle to such a connection is that the auxiliary topological fields, by construction (reviewed in Appendix B), saturate the topological susceptibility (and the corresponding vacuum energy) with the positive sign according to (A3) and (A5), generating the topologically protected pole (A4), while any conventional degree of freedom (including a dynamical propagating inflaton) can only produce the negative sign according to (A2).

One possible path to overcome this obstacle is to define the auxiliary fields¹² using the holographic description along the lines suggested in [76]. In this case the axion field, represented by our auxiliary field b(x, H), becomes a dynamical propagating field in the bulk of the multidimensional space but acts as a conventional (non-dynamical) term on the boundary (representing our space-time). This feature is precisely what is required for our auxiliary field b(x, H) defined on the physical space-time.

VII. CONCLUDING COMMENTS

We conclude this work by formulating our basic results in section VII A. We next formulate the profound consequences of our framework in section VII B. To convince the readers that we study a real physical effect, we suggest testing this new "non-dispersive" type of vacuum energy in a laboratory using the physical Maxwell system, as highlighted in section VII C. Finally, we make a few comments on the relation of our approach to the no-boundary and tunnelling proposals in subsection VII D.

A. Basic results

At the heart of the proposal suggested in the present work is a synthesis of two, naively unrelated, ideas.

The first idea is the self-consistent treatment of the problem formulated on the Euclidean S³×S¹ manifold through the bootstrap equation [8-10].

The second, novel idea [5-7] is the proposal to treat the vacuum energy entering the Friedmann equation as a "non-dispersive" vacuum energy, which is always generated in non-abelian gauge theories as a result of tunnelling transitions between topologically nontrivial sectors of the system. This type of energy is very unusual in many respects. First of all, it is non-analytic in the coupling constant, ∼ exp(−1/g²), and cannot be seen in perturbation theory, as reviewed in section III C. Secondly, this vacuum energy is non-local in nature, as it cannot be expressed in terms of any local operators in a gradient expansion in any effective field theory. Rather, it can be expressed in terms of the non-local holonomy, similar to the Aharonov-Casher effect, as mentioned in section III.

We coin the marriage of these two sets of ideas the holonomy inflation; it has a number of very attractive and desirable features. First, there is the hierarchy of scales for both models, given by eqs. (34) and (41) respectively, which indicates that distances smaller than the Planck scale M_P⁻¹ never appear in our framework. Secondly, the Equation of State (48) assumes its de Sitter form as a result of nucleation, as Fig. 2 shows. Thirdly, the number of e-folds N_infl is naturally determined by the gauge coupling constant α(H), as equation (51) suggests.

B. Implications and future development

There are a few important and generic consequences of this framework.
1. The conventional scenarios of eternally self-reproducing inflationary universes are always formulated in terms of a physical dynamical scalar inflaton field Φ(x). This problem of self-reproduction of the universe does not even arise in our framework, as there are no fundamental dynamical scalar fields responsible for inflation in the system. Instead, the de Sitter behaviour in our framework is a purely quantum phenomenon, a consequence of the dynamics of the long-range topological configurations with nontrivial holonomy, rather than the result of a physical fluctuating dynamical field. This type of energy manifests itself through the "wrong" sign in the correlation function, which cannot be formulated in terms of any local propagating degrees of freedom, as explained in Appendix A 1. Therefore the problem of eternal inflation does not even occur in our framework.

2. There are many other problems in the conventional formulation of inflation in terms of a scalar inflaton field Φ(x). For example, the initial value Φ_in ≫ M_PL of the inflaton is normally very large. This problem does not occur in our holonomy inflation scenario, as the hierarchy of scales (34) always holds in our framework.

3. We should also mention that an energy described by a formula similar to eqs. (19), (30), which eventually leads to the de Sitter behaviour (36), has been previously postulated [38-40] as the driving force for the dark energy (admittedly, without much deep theoretical understanding behind the formula at that time). The model has been (successfully) confronted with observations¹³, see the recent review papers [41, 42] and many original references therein, where it has been claimed that this proposal is consistent with all presently available data. Our comment here is that the history of the evolution of the universe may repeat itself by realizing the de Sitter behaviour twice: the QCD-like dynamics (the conjectured strongly coupled gauge theory discussed above) was responsible for the holonomy inflation considered in the present work, while the ordinary QCD dynamics is responsible for the dark energy in the present epoch. In this case the DE density is given by an expression similar to (30), i.e. ρ_DE ∼ HΛ³_QCD ∼ (10⁻³ eV)⁴, amazingly close to the observed value without any fine-tuning or adjustment of the parameters (see the arithmetic check below).

4. One should also mention that some recent lattice simulations [43] implicitly support our results. Indeed, the author of ref. [43] studied the rate of particle production in a de Sitter background. The rate turns out to be linearly proportional to the Hubble constant, ∼ H, rather than the naively expected H². This is fully consistent with our proposal¹⁴. We hope that further lattice computations in time-dependent backgrounds can further elucidate the role of the holonomy in generating the vacuum energy.

5. Finally, we want to make a comment about possible future developments. As we already mentioned at the beginning of Section VI, the relevant technique describing the end of inflation in our framework (including the computation of the cosmological perturbations) is yet to be developed. We have already mentioned in the text a number of challenging technical problems which need to be resolved and shall not repeat them here.

¹³ The energy which enters the Friedmann equations (19), (30) is determined by the size of S¹ and behaves in all respects as the cosmological constant. Therefore, it is obviously consistent with presently available data, as it does not modify the equation of state, as explained in section III C.

¹⁴ Indeed, the rate of particle production in quantum field theory is in general determined by the imaginary part of the stress tensor, Im[T^ν_μ], while the vacuum energy is related to the real part of the stress tensor, Re[T^ν_μ]. Analyticity suggests that both components must receive the same corrections in H at small H. Therefore, the lattice measurement [43] of the linear dependence on H of the particle production strongly suggests that the vacuum energy (determined by the real part of the same stress tensor) must also exhibit the same linear ∼ H correction. The corresponding lattice computations of the θ-dependent portion of the vacuum energy and of the topological susceptibility in a time-dependent background are possible in principle, but technically much more involved than the analysis performed in ref. [43].
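As promised in point 3, a one-line arithmetic check of the estimate ρ_DE ∼ H₀Λ³_QCD; the input numbers H₀ ≈ 1.4×10⁻³³ eV and Λ_QCD ≈ 100 MeV are our illustrative choices, and the O(1) prefactor in (30) is not fixed by the estimate:

```python
# Arithmetic check of rho_DE ~ H_0 * Lambda_QCD^3 (point 3 above).
# Input numbers are our illustrative choices; the O(1) prefactor in
# eq. (30) is not fixed by the estimate.
H0         = 1.4e-33   # Hubble constant today, in eV
Lambda_QCD = 1.0e8     # ~100 MeV, in eV

rho_DE = H0 * Lambda_QCD**3          # energy density, eV^4
scale  = rho_DE**0.25                # characteristic scale, eV

print(f"rho_DE ~ {rho_DE:.1e} eV^4  ->  rho_DE^(1/4) ~ {scale:.1e} eV")
# Output: rho_DE ~ 1.4e-09 eV^4, i.e. a scale ~ 6e-3 eV, to be compared
# with the observed (2.3e-3 eV)^4: the same ballpark with no tuning.
```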
C. Possible tests of the cosmological ideas in a lab?

Our comment here is that we cannot "experimentally" test the first element of the proposal, advocated in [8-10], in any simplified setting. However, we can test the second element of the proposal, advocated in [5-7], in tabletop experiments. This subsection should convince the readers that we are dealing with a new physical phenomenon which can be realized in cosmology (the subject of the present work) as well as in the Maxwell U(1) gauge theory.

The basic idea goes as follows. The fundamentally new type of energy advocated in the present work can, in principle, be studied in a tabletop experiment by measuring specific corrections to the Casimir vacuum energy in Maxwell theory, as suggested in [44-48]. This fundamentally new contribution to the Casimir pressure emerges as a result of tunnelling processes, rather than from the conventional fluctuations of propagating photons with two physical transverse polarizations. Therefore it was coined the Topological Casimir Effect (TCE). The extra energy computed in [44-48] is the direct analogue of the QCD non-dispersive vacuum energy (A10), (24), which is the key player of the present work, explicitly entering (37), (28), (38) in the main text. In fact, the extra contribution to the Casimir pressure emerges in this system as a result of a nontrivial holonomy, similar to (A6), for the Maxwell field. The nontrivial holonomy in the E&M system is enforced by the nontrivial boundary conditions imposed in refs. [44-48] and is related to the nontrivial mapping π₁[U(1)] = Z relevant for the abelian Maxwell gauge theory. Furthermore, the "reheating epoch", when physical particles can be emitted from the vacuum in a time-dependent background, similar to the dynamical Casimir effect, can also be tested in the Maxwell system, as argued in [47].

A similar new type of energy can, in principle, also be studied in the superfluid He-II system, which shows a number of striking similarities with non-abelian QCD, as argued in [49]. In the superfluid He-II system the crucial role is played by the vortices, which are classified by π₁[U(1)] = Z, similar to the abelian quantum fluxes studied in the Maxwell system in [44-48].
D. Cosmological density matrix vs no-boundary and tunnelling states

We conclude this section with a few comments on the status of the density-matrix initial conditions in cosmology (the key element of the present work) as compared to the well-known no-boundary [11] and tunnelling [50-52] proposals for the wavefunction of the Universe.

As is known, an observer-independent treatment of the no-boundary state leads to an insufficient amount of inflation. Phenomenologically, the volume weighting [53, 54] or the top-down approach [55] to the no-boundary state seem to resolve this issue, but they are left with the problem of the consistency of complex tunnelling geometries and of the normalizability of the quantum ensemble in cosmology.

On the other hand, the tunnelling state has a rather uncertain foundation, based on the hyperbolic rather than Schroedinger nature of the Wheeler-DeWitt equation. The no-boundary wavefunction within the Euclidean path-integral construction represents a special quasi-vacuum state. The tunnelling state within the approach of path integration over Lorentzian geometries leads to a non-normalizable wavefunction with unstable quantum matter and gravity perturbations. This fact has been known since [51], long before the recent works [56-58] which extended this criticism also to the no-boundary wavefunction.

The diversity of the definitions of the no-boundary and tunnelling states (defined either as propagators or as solutions of the homogeneous Wheeler-DeWitt equation, in either Euclidean or Lorentzian spacetime), as discussed in [56-58], actually indicates that neither of these states has a rigorous canonical quantization ground. However, the critical verdict of [56-58] invalidating both the no-boundary and tunnelling states, though it requires deeper consideration, does not actually achieve its goal. This is because what is actually required is not the construction of the wavefunction itself, but rather the scattering amplitudes, mean values and probabilities generated by it. The step from the wavefunction (or the density matrix) to these quantities is very nontrivial and requires an additional integration over the end points of the path-integral histories. This integration can also run along the complex contours of the steepest-descent approximation, it can bear UV divergences, and it might lead to effects invalidating the main conclusions of [56-58]¹⁵.

This is exactly what is done in the microcanonical density-matrix setup of [8-10]: we do not calculate the density matrix itself, but directly go over to its partition function, dominated by the real-valued periodic history in Euclidean spacetime. The starting point is the microcanonical density matrix of a spatially closed cosmology, which is defined as a projector on the space of solutions of the Wheeler-DeWitt equations, the quantum Dirac constraints of the canonical quantization of gravity in physical Lorentzian spacetime [10]. The periodicity of the relevant saddle-point histories directly follows from the tracing procedure for the normalization of the density matrix (see Fig. 1), and their Euclideanization is a corollary of the fact that periodic solutions exist only in imaginary (Euclidean) time, which is equivalent to the integration over the complex contour of the ADM lapse function [10].
Thus, our approach differs from the methods of [11, 50] and [56-58] in two major points: firstly, in the microcanonical density-matrix prescription for the initial state of the Universe rather than a pure-state wavefunction and, secondly, in the calculation of a physical quantity, the partition function, rather than the wavefunction or the density matrix. The conceptual rigidity of this construction avoids the ambiguities of the approaches of [11, 50, 56-58] and unambiguously leads to the S¹-compactification of the Euclidean time bearing the holonomy of the gauge field, the cornerstone of the strongly coupled nonperturbative dynamics studied in this work.

Appendix A

The main goal of this Appendix is to review a number of crucial elements relevant for our studies of the "non-dispersive" vacuum energy and its cosmological significance. First, we start in subsection A 1 with an explanation of the highly nontrivial nature of this type of vacuum energy in Euclidean space-time.

This type of vacuum energy is well known to QCD practitioners, while it is much less known in the GR and cosmology communities. We think this can be explained by the fact that this unusual type of vacuum energy cannot be formulated in terms of conventional local propagating degrees of freedom. Precisely such a "local" formulation is the conventional framework of the cosmology community, in which inflation or dark energy is described in terms of a scalar field, such as an inflaton Φ(x) with a specifically adjusted local potential V[Φ]. On the other hand, this unusual type of energy has been known to the QCD community for quite some time. Furthermore, this unusual "non-dispersive" nature of the vacuum energy has been supported by numerous lattice simulations, see A 1 for references and details.

We continue in section A 2 by clarifying the crucial role of the holonomy (A6) in generating this type of energy. We review a few known analytical calculations of this energy, emphasizing the role of the non-local holonomy (computed along S¹) which generates it. The S¹ in these computations represents an important portion of the larger Euclidean 4d manifold S³×S¹, which has been extensively employed in the main text of this work, see sections IV and V. In Section A 3 we make a few historical remarks on fractionally charged topological objects, as they are intimately related to the non-trivial holonomy defined on S¹.

1. The topological susceptibility and contact term in flat space-time

We start our short overview of the "non-dispersive" nature of the vacuum energy by reviewing a naively unrelated topic: the formulation and resolution of the so-called U(1)_A problem in strongly coupled QCD [59-61]. We introduce the topological susceptibility χ, which is ultimately related to the vacuum energy E_vac(θ = 0) as follows¹⁶:

χ = ∂²E_vac(θ)/∂θ² |_{θ=0} = lim_{k→0} ∫ d⁴x e^{ikx} ⟨T q(x) q(0)⟩,   (A1)

where the θ parameter enters the Lagrangian along with the topological density operator q(x) = (1/16π²) tr[F_μν F̃^μν], and E_vac(θ) is the vacuum energy density computed in the Euclidean, infinitely large, flat space-time. This θ-dependent portion of the vacuum energy (computed at θ = 0) has a number of unusual properties, as we review below. The corresponding properties are easier to explain in terms of the correlation function (A1) than in terms of the vacuum energy E_vac(θ = 0) itself; the relation between the two is given by eq. (A1).
A few comments are in order. First of all, the topological susceptibility χ does not vanish, despite the fact that q(x) = ∂_μ K^μ(x) is a total derivative. This feature is very different from any conventional correlation function, which would normally vanish in the limit k → 0 if the corresponding operator can be represented as a total divergence.

Secondly, any physical |n⟩ state gives a negative contribution to this diagonal correlation function; schematically,

χ_dispersive = − lim_{k→0} Σ_n ⟨0|q|n⟩⟨n|q|0⟩/(k² + m_n²) = − Σ_n |c_n|²/m_n² ≤ 0,   (A2)

where m_n is the mass of the physical |n⟩ state, k → 0 is its momentum, and ⟨0|q|n⟩ = c_n is its coupling to the topological density operator q(x). At the same time, the resolution of the U(1)_A problem requires a positive sign for the topological susceptibility (A1), see the original reference [61] for a thorough discussion; schematically,

χ = χ_contact + χ_dispersive,  χ_contact > 0.   (A3)

Therefore, there must be a contact contribution to χ which is not related to any propagating physical degrees of freedom, and it must have the "wrong" sign. The "wrong" sign in this paper means a sign opposite to any contribution related to physical propagating degrees of freedom (A2). The vacuum energy associated with the non-dispersive contribution to the topological susceptibility χ defined by (A1) can be coined the "non-dispersive" vacuum energy E_vac(θ = 0). It is quite obvious that the nature of this energy is drastically different from any conventional type of energy, because it cannot be formulated in terms of conventional propagating degrees of freedom according to (A2), (A3). In the cosmological context relevant for the present work, this type of energy was dubbed the "strange energy" in refs. [5-7], while a scientific name would be the "non-dispersive" vacuum energy E_vac(θ = 0) generated by the contact term in the correlation function (A3). It should be contrasted with the "dispersive" energy, which by definition is associated with some propagating degrees of freedom and can always be restored from the absorptive part of the correlation function through the dispersion relations, according to (A2).

In the framework of [59] the contact term with the "wrong" sign was simply postulated, while in refs. [60, 61] the Veneziano ghost (with a "wrong" kinetic term) was introduced into the theory to saturate the required property (A3).

Our next comment is the observation that the contact term (A3) has the structure χ ∼ ∫ d⁴x δ⁴(x). The significance of this structure is that the gauge-variant correlation function in momentum space develops a topologically protected "unphysical" pole which does not correspond to any propagating massless degree of freedom but nevertheless must be present in the system; schematically,

lim_{k→0} ∫ d⁴x e^{ikx} ⟨K^μ(x) K^ν(0)⟩ ∼ k^μ k^ν / k⁴.   (A4)

Furthermore, the residue of this pole has the "wrong sign", which saturates the non-dispersive term in the gauge-invariant correlation function (A3).

We conclude this review-type subsection with the following remark. The entire framework, including the singular behaviour of ⟨q(x)q(0)⟩ with the "wrong sign", has been well confirmed by numerous lattice simulations in the strong coupling regime, and it is accepted by the community as the standard resolution of the U(1)_A problem. Furthermore, it was argued long ago in ref. [62] that gauge theories may exhibit "secret long range forces" expressed in terms of the correlation function (A4) with a topologically protected pole at k = 0.
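A deliberately trivial numerical illustration of this sign argument (all couplings c_n and masses m_n below are arbitrary toy numbers): the dispersive sum (A2) is negative for any choice of physical states, so a positive value of χ can only be accounted for by a contact term:

```python
# Toy illustration of the sign argument in (A2)-(A3): the dispersive sum
# over physical states is negative-definite, so a positive chi (required
# to resolve the U(1)_A problem) can only come from a contact term.
# Couplings c_n and masses m_n below are arbitrary toy numbers.
import random

random.seed(1)
couplings = [random.uniform(-1.0, 1.0) for _ in range(10)]   # c_n
masses    = [random.uniform(0.5, 2.0)  for _ in range(10)]   # m_n > 0

chi_dispersive = -sum(c**2 / m**2 for c, m in zip(couplings, masses))
print(f"dispersive part = {chi_dispersive:.3f}  (negative for ANY c_n, m_n)")

chi_required = +0.04   # toy positive value standing in for chi
chi_contact  = chi_required - chi_dispersive
print(f"contact term needed = {chi_contact:.3f}  (> 0, the 'wrong sign')")
```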
Finally, in a weakly coupled gauge theory (the so-called "deformed QCD" model [32]), where all computations can be performed in a theoretically controllable way, one can explicitly test every single element of this framework, including the topologically protected pole (A4) and the contact term with the "wrong sign", see refs. [33, 34, 63] for the details. In particular, one can explicitly see that the Veneziano ghost is in fact an auxiliary topological field which saturates the vacuum energy and the topological susceptibility χ. It does not violate unitarity, causality or any other fundamental principle of quantum field theory. What is more important for the present studies is that one can explicitly see that the holonomy (A6) plays a crucial role in generating the "strange" vacuum energy defined in terms of the correlation function (A1).

While all these unusual features of the vacuum energy are well known and well supported by numerous lattice simulations in the strongly coupled regime (see e.g. [33] for a large number of references to the original lattice results), a precise quantitative understanding of these properties (at the level of an analytical computational scheme) is still lacking. In the next subsection we review some known results on this matter, specifically emphasizing the role of the holonomy (A6) in the analytical computations. Precisely the nontrivial holonomy (A6) plays a crucial role in generating the "strange" vacuum energy, as we shall argue in subsection A 2. This is the key technical element which pinpoints the source of this novel type of energy, not expressible in terms of any local operators, as the holonomy is obviously a non-local object.

2. The role of the holonomy in generating the vacuum energy

Our goal here is to argue that the holonomy plays a key role in the generation of the "non-dispersive" vacuum energy in the system. We also want to compare the vacuum energy computed on different manifolds, such as S¹×R³ versus S¹×H³ and S¹×S³. Such studies play a crucial role in our analysis in section II of the main text, devoted to the construction of the gravitational instanton formulated on S¹×S³.

We start our analysis with the S¹×R³ geometry. The key role in the discussion is played by the behaviour at spatial infinity of the holonomy, the Polyakov line,

U(x) ≡ P exp( i ∫₀^β dx₄ A₄(x₄, x) ),  L ≡ lim_{|x|→∞} U(x),   (A6)

where β is the circumference of S¹. The operator Tr L classifies the self-dual solutions which may contribute to the path integral at finite temperature T ≡ β⁻¹, including the low-temperature limit T → 0. There is a well-known generalization of the standard self-dual instantons to non-zero temperature, corresponding to the description on the R³×S¹ geometry: the so-called periodic instantons, or calorons [64], studied in detail in [65]. These calorons have trivial holonomy, which implies that Tr L assumes values belonging to the group centre Z_N for the SU(N) gauge group.
A more general class of self-dual solutions with nontrivial holonomy (A6), the so-called KvBLL calorons, was constructed much later in refs. [66, 67]. In this case the holonomy (A6) is, in general, not reduced to the group centre, Tr L ∉ Z_N. A fascinating feature of the KvBLL calorons is that they can be viewed as a set of N monopoles of N different types. Normally one expects that monopoles come in N − 1 different varieties, carrying a unit magnetic charge from each of the U(1) factors of the U(1)^{N−1} gauge group left unbroken by the vacuum expectation value due to the nontrivial holonomy (A6). There is one additional, so-called Kaluza-Klein (KK) monopole, which carries both magnetic and instanton charges. All the monopole charges are such that, when the complete set of different types of monopoles is present, the magnetic charges exactly cancel and the configuration of N different monopoles carries a unit instanton charge. In particular, for the SU(2) gauge group the holonomy is

½ Tr L = cos(πν),   (A7)

which belongs to the group centre, ½ Tr L = ±1, when ν assumes integer values (trivial holonomy). The so-called "confining" value of the holonomy corresponds to ν = 1/2, when Tr L = 0 vanishes.

It has been known since [65] that gauge configurations with non-trivial holonomy are strongly suppressed in the partition function. Therefore, naively, the KvBLL calorons cannot produce a finite contribution to the partition function. However, this naive argument is based on the consideration of an individual KvBLL caloron, or a finite number of them. If one considers a grand canonical ensemble of these objects, then their density is determined by the dynamics, and the old argument of ref. [65] no longer holds. The corresponding objects may in fact produce a finite contribution to the partition function. Self-consistent computations in the weak coupling regime supporting this picture have been carried out in the so-called "deformed QCD" model [32]. One can explicitly see how N different types of monopoles with nontrivial holonomy (A6), carrying fractional topological charge ±1/N, produce confinement and generate the "strange" vacuum energy (A1) and the associated topological susceptibility (A5) with the known but highly unusual properties reviewed in subsection A 1 above; see [33, 34, 63] for the technical details of these computations.

In the strong coupling regime we are interested in, the corresponding analytical computations have never been completed. There is a limited number of partial analytical and numerical results [68-70] on the computation of the moduli space and the one-loop determinant, which control the dynamics and interaction properties of the constituents in a large ensemble of KvBLL calorons.
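For orientation, a quick evaluation of the SU(2) holonomy (A7) (a trivial check of the formula quoted above) makes the trivial and confining values explicit:

```python
# Quick evaluation of the SU(2) holonomy (A7): (1/2) Tr L = cos(pi * nu).
import math

for nu in (0.0, 0.25, 0.5, 0.75, 1.0):
    half_trL = math.cos(math.pi * nu)
    label = "trivial (center element)" if abs(abs(half_trL) - 1) < 1e-12 else \
            "confining" if abs(half_trL) < 1e-12 else "generic non-trivial"
    print(f"nu = {nu:4.2f}:  (1/2)TrL = {half_trL:+.3f}   {label}")
# nu = 0, 1 give (1/2)TrL = +/-1 (trivial holonomy), while nu = 1/2
# gives TrL = 0, the 'confining' value quoted in the text.
```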
While a complete analytical solution in the strong coupling regime is still lacking, there are nevertheless a number of hints supporting the basic picture: the KvBLL configurations with nontrivial holonomy (A6), representing N different types of monopoles with fractional topological charges ±1/N, saturate the "strange" vacuum energy (A1) and the associated topological susceptibility (A5) in very much the same way as in the "deformed QCD" model, where all computations are performed in a theoretically controllable regime [32-34]. It is assumed in what follows that the topological susceptibility (A1), and the associated "non-dispersive" vacuum energy E_vac(θ), are indeed saturated by fractionally charged monopoles with Q = ±1/N, the constituents of the KvBLL caloron with nontrivial holonomy (A6), (A7).

The corresponding computations of the partition function and the free energy of the vacuum ground state for the S¹×R³ geometry lead to the explicit result (A8) of refs. [68-70], where V is the 3-volume of the system, g is the coupling constant of the non-abelian gauge field, and Λ_QCD is the single dimensional parameter of the system, generated as a result of dimensional transmutation in the classically conformal gauge theory, similar to the conventional Λ_QCD ≈ 170 MeV of QCD physics. The parameter f in (A8) can be interpreted as the monopole fugacity of the system, while the combination βF_vac ≡ E_vac V^{(4)} ≡ E_vac βV exhibits the extensive property: ln Z is proportional to the Euclidean 4-volume at large V^{(4)} → ∞. In this framework E_vac has dimension 4 and represents the vacuum energy density of the system entering the fundamental formula (A1) and defining the "non-dispersive" portion of the vacuum energy.

One can show that the free energy (A8), as well as the topological susceptibility χ, demonstrates all the features of the "strange energy" briefly described in subsection A 1 in a model-independent, generic way, including the "wrong" sign for χ, which cannot be associated with any physical propagating degrees of freedom. The specific mechanism based on the KvBLL configurations reviewed above, describing the tunnelling processes between the distinct topological sectors, generates precisely all these required properties. In what follows we assume that the very same mechanism generates the "non-dispersive" vacuum energy density E_vac for other geometries, including S³×S¹ and H³_κ×S¹_{κ⁻¹}, in exactly the same way as computed above for R³×S¹.

With this assumption in hand, the question we address below is the following: how does the "strange energy" density E_vac vary if the geometry is slightly modified at large distances? The main motivation for this question originates from our fundamental conjecture, formulated in section IV, that the energy density which enters the Friedmann equation represents in fact the difference ΔE between the energy density computed in a nontrivial background and the "trivial" portion computed in the flat background, similar to Casimir-type computations. Specifically, we want to know how the vacuum energy density depends on the geometry S³×S¹. Precisely this information is required for our computations in sections IV and V.
Unfortunately, there are a number of technical obstacles to carrying out computations similar to (A8) for the S³×S¹ manifold. In particular, even the monopole solution (the crucial ingredient in this type of semiclassical computation) satisfying the appropriate boundary conditions on S³×S¹ is not known exactly. As a result of this deficiency, a semiclassical computation which would account for the zero- and non-zero-mode contributions to the partition function (similar to formula (A8) derived for R³×S¹) is also not available.

Fortunately, exact semiclassical computations are available for the hyperbolic space H³_κ×S¹_{κ⁻¹}. While this manifold is not exactly what we need for our analysis in sections IV and V, the corresponding computations give us a hint about the possible corrections to the vacuum energy density E_vac due to the small dimensional parameter ∼ κ which emerges in H³_κ×S¹_{κ⁻¹} in comparison with the computation (A8) for the R³×S¹ geometry.

The main reason why the semiclassical computations can be carried out in the hyperbolic space H³_κ with constant negative curvature −κ² is as follows. There is a conformal equivalence between (R⁴ − R²) and H³_κ×S¹_{κ⁻¹}, where S¹_{κ⁻¹} denotes the circle of radius κ⁻¹. As a result of this exact equivalence, the monopole solutions can be constructed explicitly in this case. The holonomy (A6) is computed along the closed loop S¹_{κ⁻¹} and assumes a nontrivial value.

The key observation of this computation, see formula (A9) below, is that the topological configurations with non-trivial holonomy produce a finite contribution to the vacuum energy density with a small correction linearly proportional to κ → 0. This effect cannot be expressed in terms of any local operators such as the curvature, since |R| ∼ κ². Rather, the leading correction ∼ κ is generated by topological vacuum configurations with nontrivial holonomy, not expressible in terms of any local observables. This is precisely the reason why the generic arguments [29-31] based on locality simply do not apply here.

Now we are ready to formulate the main result of the computations [7] relating the vacuum energy density E_vac computed on the original R³×S¹ manifold and on the hyperbolic space H³_κ×S¹_{κ⁻¹}. In formula (A9) below we assume that the sizes of the S¹ in the two manifolds are identically the same, i.e. we identify β = κ⁻¹. After this identification the only difference between the two manifolds is the curvature of the hyperbolic space, R[H³_κ] ∼ κ² at κ → 0. Formula (A9) exhibits a linear dependence on κ at small κ, which we interpret as a strong argument supporting our conjecture of the linear dependence of the "non-dispersive" vacuum energy on an external parameter. Such linear scaling obviously implies that this background-dependent correction is not related to any local operator such as the curvature, but rather is generated by the nonlocal operator (A6), which is sensitive to the global characteristics of the background.

The relevant result can be represented schematically as follows [7] (we display only its parametric structure; the exact expression is given in [7]):

E_vac(H³_κ×S¹) ≃ E_vac(R³×S¹) [1 + c ν(1−ν) κ/Λ_QCD + O(κ²)],   (A9)

with c an O(1) coefficient. Using formula (A8), the same result can be written in the form (A10), as a shift of the "non-dispersive" vacuum energy linear in κ. The key observation here is that the small correction is linear, rather than the naively expected quadratic function, at small κ → 0.
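The distinction between linear and quadratic scaling matters quantitatively: at the small curvatures of interest, a holonomy-induced linear term dominates any local curvature term ∼ κ² by the huge factor Λ/κ. A short check with illustrative numbers (ours, not taken from [7]):

```python
# Why linear-in-kappa matters (illustrative numbers, not from ref. [7]):
# for kappa << Lambda a holonomy-induced linear correction ~ kappa/Lambda
# dwarfs any local-curvature correction ~ (kappa/Lambda)^2.
Lambda = 1.0          # strong-coupling scale, in units of Lambda_QCD
nu     = 0.5          # 'confining' holonomy; nu(1-nu) factor is maximal

for kappa in (1e-3, 1e-6, 1e-9):
    linear    = nu * (1 - nu) * kappa / Lambda     # schematic (A9) term
    quadratic = (kappa / Lambda)**2                # local curvature ~ R
    print(f"kappa/Lambda = {kappa:.0e}:  linear/quadratic = {linear/quadratic:.1e}")
# The ratio grows as Lambda/kappa, so only the non-local holonomy term
# survives as a meaningful correction in the kappa -> 0 limit.
```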
Furthermore, the correction ∼ κ vanishes for configurations with trivial holonomy, ν = 0, ν = 1. This observation unambiguously implies that the relevant Euclidean configurations capable of producing the linear correction (A10) must carry a nontrivial holonomy (A6), and therefore they are non-local in nature. The computations [63] in the weakly coupled "deformed QCD" model (where a configuration with nontrivial holonomy produces a linear correction) also support this claim.

3. Generation of the holonomy in a strongly coupled gauge theory

The question we want to address in this subsection can be formulated as follows. If one considers the thermodynamic limit in eq. (A8), one can explicitly see that the combination βF_vac ≡ E_vac V^{(4)} ≡ E_vac βV exhibits the extensive property: ln Z is proportional to the Euclidean 4-volume at large V^{(4)} → ∞. In this framework E_vac has dimension 4 and represents the vacuum energy density of the system entering the fundamental formula (A1). This formula defines the "non-dispersive" θ-dependent portion of the vacuum energy, which plays the crucial role in our analysis.

The key question we want to address now is this: suppose we start from a description of the system on R⁴ from the very beginning, such that the semiclassical solutions (calorons with nontrivial holonomy) cannot be constructed on R⁴. How do we know anything about the holonomy defined on S¹ (and its direct consequence in the form of objects with fractional topological charges) if it was not part of our construction to begin with? We should emphasize here that the presence of configurations with fractional topological charges is a very strong signal that there is a nontrivial holonomy in the system, as the only semiclassical solutions which can be defined on R⁴ are integer-valued instantons.

We obviously do not know the answer to this hard question in strongly coupled 4D QCD. However, there is a well-known example, the 2D CP^{N−1} model, which hints that this kind of holonomy (and its manifestation in the form of configurations with fractional topological charges) might be generated dynamically by strong quantum fluctuations, such that "effective calorons" with nontrivial holonomy do appear in the system, but as strongly coupled quantum objects rather than as semiclassical configurations defined on S¹.
Historically, configurations with fractional topological charges first emerged in the 2D CP^{N−1} model. These fractional objects have been coined instanton quarks, also known as "fractional instantons" or "instanton partons". Namely, using an exact accounting and re-summation of the n-instanton solutions in the 2d CP^{N−1} models, the original statistical problem of a grand canonical instanton ensemble (with exclusively integer topological charges defined on R²) was mapped onto a 2d Coulomb gas of pseudo-particles with fractional topological charges ∼ 1/N [71, 72]. This picture leads to an elegant explanation of the confinement phase and other important properties of the 2d CP^{N−1} models [71, 72]. The term "instanton quarks" was introduced to emphasize that there are precisely N constituents making up an integer instanton, similar to the N quarks making up a baryon. These objects do not appear individually in the path integral; instead, they appear as configurations consisting of N different objects with fractional charge 1/N, such that the total topological charge of each configuration is always integer. In this case the 2Nk zero modes of the k-instanton solution are interpreted as the 2 translational zero modes accompanying every single instanton quark. While the instanton quarks emerge in the path integral coherently, these objects are highly delocalized: they may emerge on opposite sides of the space-time or close to each other with comparable probabilities. A similar attempt in 4D QCD was unfortunately unsuccessful due to a number of technical problems, which remain to be solved [73].

There is a deep analogy with the "deformed QCD" model [32-34], where the size of S¹ is fixed for the semiclassical approximation to be justified. However, it is a common view in the QCD community that the physics of strongly coupled QCD is qualitatively the same as in the weakly coupled "deformed QCD" model with semi-classicality enforced by a specifically chosen S¹, in which case the configurations with nontrivial holonomy (and the fractionally charged monopoles) can be explicitly constructed at the semiclassical level. Furthermore, it is expected that even when the corresponding θ parameter in strongly coupled QCD does not vanish, the physics remains the same, and confinement in QCD occurs as a result of the condensation of the same fractionally charged monopoles, as argued in [74].
The main lesson to be learnt in the context of the present work is as follows. Configurations with fractional topological charges can serve as a trigger for a nontrivial holonomy, because conventional semiclassical solutions defined on R⁴ can carry only integer topological charges. The lesson from 2d CP^{N−1} is that fractional topological charges are not present in the system when it is defined on R²; nevertheless, such objects do appear dynamically as a result of strong quantum fluctuations. In terms of effective semiclassical configurations, these objects obviously require a nontrivial holonomy (and therefore a nontrivial S¹ on which the holonomy is defined). However, this effective S¹ is not the original circle, but an effective one which emerges as a result of strong quantum fluctuations. This is precisely the motivation for our Model-2 in section V, where we unlink the size of S¹ from the matter content of the theory by relaxing the bootstrap equation. Unfortunately, at present we can only speculate on this matter, without making any precise and solid claims.

Appendix B

The goal of this Appendix is to introduce the auxiliary field technique and to demonstrate that the corresponding alternative computations reproduce the crucial elements of the vacuum energy and its unusual features listed in section III C. Furthermore, this technique plays a crucial role in our studies of the anomalous coupling with the SM fields described in section VI B. Precisely this coupling is responsible for the successful reheating phase, as advocated in sections VI C and VI D. As we argue below, we can identify (on an intuitive level) the corresponding auxiliary non-dynamical, non-propagating field with the inflaton, which is an emergent field in our framework: it appears only in the confined QCD phase, while in the deconfined phase it does not exist in the system. This should be contrasted with the conventional description in terms of a local dynamical field Φ(x), which is always part of the system, long before and long after inflation.

We should emphasize that the reformulation of the same physics in terms of an auxiliary quantum field, rather than in terms of an explicit computation of the partition function by summing over all topological sectors, is not a mandatory procedure, but a matter of convenience. Similarly, the description of a topologically ordered phase in condensed matter physics in terms of a Chern-Simons effective Lagrangian is a matter of convenience rather than a necessity, as emphasized in Section VI B.

We shall demonstrate how this technique works in a simplified version of QCD, the so-called weakly coupled "deformed QCD" model [32], which preserves all the relevant features of strongly coupled QCD, such as confinement, the nontrivial θ dependence, the generation of the "non-dispersive" vacuum energy, etc. At the same time, all computations can be performed under complete theoretical control. The computation of the "non-dispersive" term by explicit summation over the positions and orientations of the monopoles-instantons describing the tunnelling transitions was performed in [34]. The corresponding results were reproduced in [33] using the technique of the auxiliary topological fields.
One should also mention that the computations are performed in an effectively 3d, weakly coupled gauge theory, rather than in strongly coupled 4d QCD. Nevertheless, the emergent auxiliary field to be introduced below and identified with the inflaton behaves, in all respects, as the 4d Veneziano ghost [60,61], which was postulated long ago precisely with the purpose of describing these unusual features of the vacuum energy, as reviewed in Appendix A 1. The basic idea of describing the relevant IR physics in terms of an auxiliary field is to insert the corresponding δ-function into the path integral with a Lagrange multiplier, and to integrate out the fast degrees of freedom while keeping the slow degrees of freedom, which are precisely the auxiliary fields. Here and in what follows we use the notations of [33], where this technique was originally implemented to demonstrate that the famous Veneziano ghost is nothing but an auxiliary topological field. The δ-function to be inserted into the path integral is defined in (B1), where q(x) ∼ tr[F_{µν} F̃^{µν}] is treated as the original expression for the topological density operator, including the fast non-abelian gluon degrees of freedom, while b(x), a(x) are treated as slowly varying external sources describing the large-distance physics for a given monopole configuration. One can now proceed with conventional semiclassical computations, summing over all monopoles, their positions and orientations, to arrive at the dual form (B2) of the effective action. The new additional topological term ∼ b(x)∇²a(x) can be immediately recovered from (B1), while the interaction of the b(x) field (playing the role of the Lagrange multiplier) with the topological density operator q(x) is easily recovered as well, since it has precisely the structure of the θ term. This observation unambiguously implies that the b(x) field enters the effective description only in the unique combination [θ − b(x)]. In (B2) the parameter ζ plays the role of the monopole density in the system, such that the vacuum energy is explicitly proportional to ζ, see (B3) below. The dynamical σ fields effectively describe the monopole ensemble. The most important elements for our studies are the Lagrange multiplier field b(x) and the topological field a(x), which will be interpreted as the inflaton in what follows. Neither field is a dynamical, propagating degree of freedom, by construction. We obviously do not introduce any new dynamical degrees of freedom by inserting the δ-function (B1) and introducing the auxiliary topological fields b(x), a(x). This is an important remark when one tries to identify b(x), a(x) with the inflaton. The next step is to compute the vacuum energy and topological susceptibility within this framework, to demonstrate that they satisfy all the features listed in section III C.
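For orientation, the δ-function insertion used here is a version of the standard Lagrange-multiplier identity; a generic sketch of the trick (only an illustration, not the precise form of (B1) in this paper) reads

1 = ∫ D[a] δ[a(x) − q(x)] = ∫ D[a] D[b] exp( i ∫ d^4x b(x) [a(x) − q(x)] ),

so that integrating out the fast gluon degrees of freedom at fixed a(x), b(x) leaves an effective action for the slow auxiliary fields, with b(x) coupled linearly to q(x), i.e., exactly with the structure of the θ term.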
The corresponding computations explicitly show that the physical meaning of the vacuum energy is the number of tunnelling events per unit volume per unit time. The corresponding formula can be represented in terms of a correlation function as follows:

E_vac = −N^2 lim_{k→0} ∫ d^4x e^{ikx} ⟨q(x) q(0)⟩,   (B3)

where q(x) is expressed in terms of the auxiliary fields. We obviously reproduce our previous result based on the explicit computations with the monopoles [34]. Now it is formulated in terms of the long-ranged auxiliary topological fields. The fluctuating b(x), a(x) fields simply reflect the long-distance dynamics of the degenerate topological sectors, which exist independently of our description in terms of the b(x), a(x) fields. However, in the previous computations [34] we had to sum over all monopoles, their positions, interactions and orientations. Now this problem is simplified, as it is reduced to the computation of a correlation function constructed from the auxiliary fields governed by the action (B2).

We identify (intuitively) the corresponding auxiliary [a(x), b(x)] fields which saturate this energy (B3) with the inflaton in this model, in the sense that both objects eventually lead to the de Sitter behaviour. We emphasize again that the corresponding dynamics cannot be formulated in terms of a canonical scalar field Φ with any local potential V(Φ), as it is known that the dynamics governed by a CS-like action is truly non-local. There is a large number of condensed matter (CM) systems (realized in nature) where the CS action plays a key role, with explicit manifestation of the non-locality in the system. It has also been argued that the deformed QCD model explored in this section belongs to a topologically ordered phase, with many of the features which normally accompany topological phases [33].

What is the physical meaning of these auxiliary [a(x), b(x)] fields which we identify with the inflaton? What is the best way to visualize them on an intuitive level? From our construction one can easily see that both fields [a(x), b(x)] do not carry a colour index. However, the a(x) field has nontrivial transformation properties under large gauge transformations. In fact, our field ∇_i a(x) transforms as K_i(x) in the Veneziano construction (A4). One can support this identification by computing the gauge-variant correlation function (B4). The massless pole in (B4) has precisely the same nature as the pole in the Veneziano construction (A4). What is the physical meaning of the b(x) field? This field can be thought of as an external axion field θ(x), without a kinetic term, though.

Our comment here is that, in spite of the gap ∼ ζ in the system, some correlation functions constructed from the topological auxiliary fields a(x), b(x) are still highly sensitive to the IR physics. Furthermore, while the behaviour (B4) at small k may be considered very dangerous, as it includes k^4 in the denominator (which is normally attributed to negative-norm states in QFT), the physics described here is perfectly unitary and causal, since a(x), b(x) are in fact auxiliary rather than propagating dynamical fields, and all questions can be formulated and answered even without mentioning the auxiliary topological fields.
One should comment here that the results presented above are based on computations in a weakly coupled, effectively 3d, gauge theory (where the system is under complete theoretical control), while we are interested in 4d strongly coupled QCD to study the inflationary phase. Nevertheless, the relation between the a(x) auxiliary field and the 4d K_µ field still holds, while the b(x) field always enters the effective Lagrangian precisely in the combination [θ − b(x)] with the θ term, according to (B2), as long as the b(x) field can be treated as a slow degree of freedom. In all respects this is similar to the construction of the effective Lagrangian for the η′ field, which enters the action in the unique combination [θ − η′]. The difference is, of course, that the η′ meson also has a kinetic term, in contrast with the b(x) field. This observation allows us to exactly reconstruct the interaction with the SM particles from the knowledge of their coupling to the θ parameter, as eq. (49) states. What are the typical fluctuation scales of the auxiliary quantum a(x) and b(x) fields? The answer is quite obvious: the typical fluctuations are of order Λ_QCD. Therefore, the final expression for the dual effective action which includes the new auxiliary b(x), a(x) fields assumes the form [33]:

Z[σ, b, a] ∼ ∫ D[b] D[σ] D[a] e^{−S_top[b,a] − S_dual[σ,b]},

S_top[b, a] = −(i/(4πN)) ∫_{R^3} d^3x b(x) ∇^2 a(x).

[FIG. 1. Transition from the density matrix instanton to the periodic statistical sum instanton.]
28,807.8
2017-09-27T00:00:00.000
[ "Physics" ]
uafR: An R package that automates mass spectrometry data processing

Chemical information has become increasingly ubiquitous and has outstripped the pace of analysis and interpretation. We have developed an R package, uafR, that automates a grueling retrieval process for gas chromatography coupled mass spectrometry (GC-MS) data and allows anyone interested in chemical comparisons to quickly perform advanced structural similarity matches. Our streamlined cheminformatics workflows allow anyone with basic experience in R to pull out component areas for tentative compound identifications using the best published understanding of molecules across samples (PubChem; pubchem.ncbi.nlm.nih.gov). Interpretations can now be done at a fraction of the time, cost, and effort it would typically take using a standard chemical ecology data analysis pipeline. The package was tested in two experimental contexts: (1) a dataset of purified internal standards, for which our algorithms correctly identified the known compounds, with R² values ranging from 0.827 to 0.999 across concentrations ranging from 1 × 10⁻⁵ to 1 × 10³ ng/µl; (2) a large, previously published dataset, for which the number and types of compounds identified were comparable (or identical) to those identified with the traditional manual peak annotation process, and NMDS analysis of the compounds produced the same pattern of significance as in the original study. Both the speed and accuracy of GC-MS data processing are drastically improved with uafR because it allows users to fluidly interact with their experiment following tentative library identifications [i.e., after the m/z spectra have been matched against an installed chemical fragmentation database (e.g., NIST)]. Use of uafR will allow larger datasets to be collected and systematically interpreted quickly. Furthermore, the functions of uafR could allow backlogs of previously collected and annotated data to be processed by new personnel or students as they are being trained. This is critical as we enter the era of exposomics, metabolomics, volatilomes, and landscape-level, high-throughput chemotyping. This package was developed to advance collective understanding of chemical data and is applicable to any research that benefits from GC-MS analysis. It can be downloaded for free along with sample datasets from GitHub at github.com/castratton/uafR or installed directly from R or RStudio using the developer tools: devtools::install_github("castratton/uafR").
Introduction

Chemistry has a profound influence on every physical system in the human environment [1-5], hence biochemical research is of the utmost importance. Gas chromatography coupled with mass spectrometry (GC-MS), used to identify the chemical composition of samples, is a commonly used technology across many disciplines of research [6-9]. While the accuracy and efficiency of instruments continue to improve [10], preparing the library-matched output [i.e., the top hit(s) for each set of molecular fragments streamed across the machine's m/z detector] for analysis and interpretation remains antiquated. The traditional methods involve manually selecting, integrating, and identifying peaks based on a reference library and comparison to commercial standards across every sample in an experiment [11,12]. Software packages that quickly and accurately identify top library matches for every tentative compound in an entire batch of experimental samples thankfully exist (e.g., Agilent's MassHunter, Thermo Fisher Scientific's Compound Discoverer, Shimadzu's GCMSsolution); however, the output remains uninterpretable without additional processing. In even simple experiments, the process of quantifying tentatively "identified" compounds across replicates can take weeks or months and is a significant impediment to collecting and analyzing many, large, and/or complex GC-MS datasets. Furthermore, focusing the interpretation on specific chemicals or chemistries that are meaningful would require looking up each molecule for published information and/or important associations. This additional bottleneck in chemical experimentation can lead to backlogs in collections, delays in chemical data being analyzed and published, and may even create a significant deterrent to collecting GC-MS data in studies (e.g., non-targeted and/or suspect screening analysis) where these data could be highly informative.

Another concern with manually selecting component areas for the same tentative molecule across different samples is the inherent subjectivity and inconsistency at many decision points. Every additional keystroke or choice about a threshold provides an opportunity for unintended error. Technology exists that could help automate this process, converting the identified compounds into a digitally comparable structure in an instant [4,8]; however, using such technology requires advanced computer programming experience. Any functional interpretation of a chemical benefits from structural comparisons with compounds of known function, yet the ability to do so has historically been reserved for private industry or hyper-specialized professionals. A package that automates the sorting and collection of component areas across samples in an experiment, while simultaneously storing critical information about every tentative molecule, could propel every field of science forward by not only removing the bottlenecks and subjectivity in chemical analysis but also removing the need for hours of paid or untrained manual labor before even simple chemical interpretations can occur.
To address these barriers in the use of GC-MS data, we developed an R package that takes the raw, aggregated chemical identifications generated by user-selected peak detection software. In this study, we used Agilent's Unknowns Analysis software to identify peaks with its deconvolution algorithm and match m/z spectra to a locally installed NIST library, but any mass spectrometry software that produces the same information is equally viable. The package picks up after the initial processing of samples and communicates with public chemistry utilities (including PubChem and the National Cancer Institute) to sort and process the aggregated set of all tentatively identified molecules using the underlying m/z (mass/charge of chemical fragments) ratio data, automatically interpreting close matches across samples. In addition to precisely (but flexibly) grabbing tentative compounds from the samples they could theoretically exist in and preparing the component areas for statistical summary and analysis [including principal component analyses, non-metric multidimensional scaling (NMDS), and/or machine learning algorithms], uafR also interacts with structural data [in SDF (Structure-Data Format)] for all published compounds in the dataset. These data allow detailed summaries of the chemical constituents of each sample to be generated based on the user's chemical(s) of interest. Thus, while a chemical ecologist may be more interested in the relative proportions of alkaloids to polyphenols in a sample [13,14], a biochemist may only be interested in steroids [6,15-17]. These groups (or others) can now be selectively pulled from one's dataset for follow-up analyses. In addition, researchers (e.g., those performing targeted analysis) who have advance knowledge of the molecule(s) or functional group(s) of interest can use our functions to isolate these chemistries from experimental data and focus their analysis/interpretation on specified chemicals, or on chemical groups more generally.
Users may also load personal chemical libraries, again as an easily formatted ".CSV" file in long or wide orientation, to compare any list of chemicals against the set(s) of classifier compounds in their .CSV input library. For the chemical structure processing, our package utilizes Tanimoto similarity, a commonly used and rigorously tested metric for physicochemical comparisons [5,18,19]. While there is a broad range of diversity in the chemistry of any system [20], there exist common structural subunits that can categorize molecules by their potential function(s), and the Tanimoto index provides efficient functional sorting of even diverse chemistries. As an example, these comparisons could be used in agricultural research to rapidly screen plant molecules for insecticidal or repellent properties [21-24]. More specifically, the pharmaceutical industry uses the Tanimoto similarity metric to discover compounds that will bind known ligands or share biological activity with known drugs [25-27]. This metric is underapplied, however, because to date it has required multiple complex steps to generate or acquire data in the appropriate format [28,29]. Our R package harnesses direct connections between PubChem and R to stream published information on every known (i.e., published and vetted by peer review for merit) chemical in the dataset. This bypasses the need for other computer programs or coding environments to perform physicochemical comparisons and allows our algorithm to outperform any comparable utility for this stage of mass spectrometry data processing. If the user can install a package and read a ".CSV" file into R, they will have access to the entirety of PubChem and more.

Data science and informatics can circumvent analytical bottlenecks [30]. Automating the tedious portions of GC-MS data processing can not only turn weeks or months of work into a few keyboard strokes within a day, but also take human error and subjectivity out of the equation. An efficient and user-friendly tool for interpreting these chemical data is long overdue.

Here, we present two examples to demonstrate the accuracy and efficiency of uafR. The first is the identification and analysis of a GC-MS dataset containing samples of a series of four known internal standards at different concentrations. The second is a re-identification of GC-MS samples from an already published dataset by Ponce et al. [31]. For this dataset, we compare the same statistical tests on the standardized areas for compounds identified by the uafR package and for those from the published, manual identifications. We also briefly describe how the package can improve chemical workflows in non-GC-MS datasets or meta-analyses.
Software description and workflow

The current build of uafR is optimized for raw output from Agilent's Unknowns Analysis software (Santa Clara, CA, USA, 95051); however, the only aspect of the workflow that is specific to their software is the set of column names for the input data frame. To briefly describe the output: after setting up the analysis environment [i.e., directing Unknowns Analysis to the sample directory, where a ".UAF" file (hence, uafR) is created], running the deconvolution algorithm to identify peaks, and searching the peaks against the installed library (blank subtraction and target matching are also options and will not affect the input for uafR), a single ".CSV" file containing basic GC-MS output [i.e., retention times, peak area, captured mass-to-charge ratios (m/z), compound name, match quality] and a sample origin identifier (i.e., sample name or file name) for tentative compounds across all samples can be exported and read into R using "read.csv()." After reading the data into R and loading the package, uafR can use published information to sort and precisely select the portions of the data that the user may be interested in.

A diagram of the workflow can be seen in Fig 1. The first function for GC-MS data is "spreadOut()." Running this function on properly formatted GC-MS input will automatically prepare the data for the next steps in the processing pipeline. Briefly, the function takes every recorded data point for every treatment and expands it into large database formats, with unique identifiers assigned to each data point. These unique identifiers (unique IDs) are automatically created from the input data and are used to extract specific area values from the raw data. In addition to setting up large databases containing component areas, tentative compound identities, match factors, captured m/z values, retention time indices, sample identities, and the unique IDs, the function also communicates with online databases to download relevant information about every tentative compound. To collect these data, the function converts the chemical names into PubChem compound identifiers (CIDs) using the "get_cid()" function from the R package webchem [32]. For published chemicals, this information includes the exact mass, m/z histograms, and every name the chemical has. Instances where the chemical cannot be identified by name on PubChem (i.e., compounds for which a CID is unavailable) are redirected to the CADD Group Chemoinformatics Tools and User Services (CACTUS, https://cactus.nci.nih.gov/), from which a canonical Simplified Molecular Input Line Entry System (SMILES) string can be generated using that server and algorithm. This SMILES notation is then used to simulate the mass and structure data for chemicals as yet unpublished on PubChem. All of this information, including the large databases, is stored as a list in a user-defined object. Subsequent functions are designed to seamlessly interact with this list and will automatically use the relevant information collected during "spreadOut()".
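As a minimal sketch of this first step (the file and object names here are hypothetical, and the exact argument list of "spreadOut()" is an assumption; consult the package vignette for the documented interface):

# Hypothetical usage sketch; argument names are assumptions.
library(uafR)

# Raw aggregated export from Unknowns Analysis (or comparable software):
# one row per tentative compound per sample.
gcms_raw <- read.csv("unknowns_analysis_export.csv")

# spreadOut() expands the raw output into large databases keyed by unique
# IDs and downloads published information (CIDs, exact masses, m/z
# histograms) for every tentative compound.
gcms_spread <- spreadOut(gcms_raw)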
The next step in the GC-MS workflow will depend on the type of analysis the user is performing. If the chemicals of interest are already known, they can be extracted by name with a single function, "mzExacto()." However, for complex datasets or analyses that involve more unknowns, the user may want to cast a broader, but still accurate, net. There are multiple steps that can be taken to home in on the most relevant chemicals in a dataset using the features of uafR. A simple and effective approach is to subset the search chemicals by setting a minimal match factor on the raw output of Unknowns Analysis (or other GC-MS software); a short sketch of this subsetting appears below. This can be done with R code described in the vignette published with the package (https://castratton.github.io/uafR/). Another approach could include subsetting with output from the function "categorate()." This function also uses PubChem to communicate with online databases and generates categorical, structural, and chemical identifying information for every published chemical in the dataset. The categorical data include whether the chemical is biologically derived [Natural Products Online database (LOTUS; https://lotus.naturalproducts.net/)], has flavor or smell [Flavor and Extract Manufacturers Association (FEMA; https://www.femaflavor.org/)], or has varied biological activities [Kyoto Encyclopedia of Genes and Genomes (KEGG; https://www.genome.jp/kegg/)], along with medical subject headings (MeSH; https://www.nlm.nih.gov/mesh/) and other information about reactivity [Food and Drug Administration-Structured Product Labeling (FDA/SPL; https://www.fda.gov/) and Reactive Groups from PubChem (https://pubchem.ncbi.nlm.nih.gov/)].

After the categorical information is collected, the function generates substructure data so that the chemicals can also be subsetted by common functional groups. This information is generated using the "read.SDFset()" function from another R package called ChemmineR [33]. This package is a dependency that is installed with uafR and is core to the cheminformatics methods deployed. The substructure information generated using ChemmineR includes the number of rings, all subgroups (e.g., R-COH, R-COOH, etc.) and their counts, all atoms (e.g., C, N, S, As, etc.) and their counts, and the number of charges for every chemical with published structural data (or a canonical SMILES from CACTUS) on PubChem. The final steps in "categorate()" not only assist in subsetting compounds of interest for extraction from GC-MS datasets, but could also be used to perform meta-analyses on published chemistries.
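As the minimal match-factor subset referred to above (column names as produced by Unknowns Analysis and listed later in this paper; the threshold of 75 mirrors the selection criterion used for Table 2, but is otherwise an illustrative choice):

# Keep only tentative identifications with a match factor above 75,
# then collect the unique compound names to search for.
gcms_confident <- subset(gcms_raw, Match.Factor > 75)
search_chems <- unique(gcms_confident$Compound.Name)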
In order to run "categorate()," users are required to include an input library that contains columns of labeled chemicals. The labels are customizable, but the most useful approach is to label a set of chemicals by a common feature or biological activity. For example, if a researcher has a set of plant chemicals of interest to test against active ingredients in pharmaceuticals, the input library could contain n columns whose headings are the biological activities (e.g., diuretic, blood pressure, etc.) and whose contents (rows under the heading) are the active chemicals used in products approved for those medical outcomes. The "categorate()" function will then take the input library (saved as a ".CSV") and compare every chemical of interest to the chemicals in each user-defined "chemical category," returning two additional data frames: (1) whether it has a strong (Tanimoto similarity greater than 0.95) or moderate (greater than 0.85) structural match with any of the chemicals in each group; and (2) for strong matches, the name of the chemical it was most similar to. It performs these comparisons using the "fmcsBatch()" function from the R package fmcsR [34].

The utility of this information and approach cannot be overstated. In chemistry, structure defines function, so identifying structural matches is effectively identifying chemicals with the same function. This not only provides a powerful tool for discovering novel chemical activities and/or natural backups to synthesized chemistries, but also allows researchers to subset GC-MS data by the general chemical structures or activities they are interested in. The possibilities are limited only by the maximum file size a user can create in the specified ".CSV" format and by whether structural data could be generated from PubChem for the chemical(s). Subsetting the information generated with "categorate()" is easily done using the function "exactoThese()." Users can specify which set of information they would like to subset and indicate the desired criteria the chemicals should meet.

Next in the GC-MS workflow is to put the published information to use and aggregate every occurrence of the user-specified chemicals across every GC-MS sample. "mzExacto()" takes the output from "spreadOut()" along with a list of chemical names, and returns a single data frame containing each chemical's optimal retention time, exact mass, best identified match factor, and aggregated component area across the samples in which it occurs (0 when absent). Additional technical details for this algorithm are available with the package (github.com/castratton/uafR). Briefly, after collecting mass and m/z information for the input chemicals of interest, the chemicals are ordered by exact mass so that likely retention time windows can be determined based on the general structure of the input data and the information stored from "spreadOut()." After identifying perfect matches (i.e., those with high match factors and the same chemical names), the algorithm looks again through each sample for instances where the top two published m/z values for the tentative identity are the same as those of the query chemicals of interest. These matches are based on standard manual approaches to resolving uncertainties in any complex GC-MS workflow. The m/z values within the retention time windows generated by the input data must be similar enough that the chemical fragments are practically and theoretically identical. A sub-argument, "decontaminate," is on by default and removes any chemicals that did not have a strong match across samples, were unable to be found in public databases (i.e., PubChem), and/or were unable to have a canonical SMILES generated on the NCI server. This sub-argument can be turned off by adding "decontaminate = F" to the end of the items in "mzExacto()."
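A hypothetical sketch of this chain of functions (only the function names come from the package; the argument names here are assumptions, not the documented signatures):

# Hypothetical sketch; argument names are assumptions.
# User-defined chemical categories: one column per category (e.g.,
# "diuretic"), rows filled with the active chemicals for that category.
my_library <- read.csv("chemical_categories.csv")

# Categorical, structural, and reactivity information for every
# published chemical in the spreadOut() output.
gcms_categories <- categorate(gcms_spread, library = my_library)

# Keep chemicals with a strong (Tanimoto > 0.95) structural match to
# the "diuretic" category.
diuretic_like <- exactoThese(gcms_categories, category = "diuretic",
                             match = "strong")

# Aggregate component areas for those chemicals across all samples.
gcms_areas <- mzExacto(gcms_spread, diuretic_like)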
At this point in the GC-MS workflow, the most common step is to standardize the component areas of tentatively identified chemicals by quantifying their values relative to known internal or external standard(s). "standardifyIt()" takes the output from "mzExacto()" and either a user-specified internal standard (e.g., tetradecane, or another user-defined internal standard) or calibration curves (raw values) from external standard(s), along with sub-arguments that allow the standardization to be tuned to the experimental methods. "standardifyIt()" returns a data frame that is standardized relative to the known chemical quantifications and formatted for subsequent statistical analyses. Common statistical protocols for GC-MS data include ordination analyses (e.g., PCA, NMDS, etc.), multivariate statistical tests (e.g., ANOSIM, MANOVA, PERMANOVA, etc.), and/or deep learning (neural networks or machine learning). Each of the formats required for running these statistics on GC-MS data is achievable with the final output of "mzExacto()" and "standardifyIt()."

Beyond automating a process that can require hours of work per sample, with potentially hundreds of samples per study, uafR makes cheminformatics a possibility for anyone working with GC-MS or chemical identity data. Furthermore, the public databases our package accesses will only improve in data quality and quantity with time and increased use. To showcase the utility and validity of our package for GC-MS workflows, we analyzed two datasets: one containing a set of known standards pipetted in known quantities across three samples (low, medium, and high concentrations), and the other consisting of a recently published set of 35 samples.

The second dataset consisted of GC-MS data, collected on the same instrument as the test chemicals above, that had been manually processed and published in August 2022. Briefly, the samples were collected from grain that was (a) UV sterilized (negative control), (b) clean grain from storage (positive control), (c) inoculated with asexual fungal spores, or (d) inoculated with sexual fungal spores (see Ponce et al. 2022 [31] for an extended description of the methods).

After the samples were analyzed on the GC-MS, the raw output was saved to a local directory and loaded into Unknowns Analysis following default protocols. For a detailed overview of running this software, Agilent provides a user manual. After loading the samples and applying the methods file to every sample, the deconvolution algorithm identified the most accurate peaks for every chromatogram. Each peak was then searched against the NIST 20 database. The aggregated data frame was exported as a ".CSV" file. This data frame included columns for the compound names ("Compound.Name"), the file name each tentative identity comes from ("File.Name"), the top m/z peaks captured by the GC-MS ("Base.Peak.MZ"), the match factors for tentative identities ("Match.Factor"), and the retention times ("Component.RT").
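Putting the pieces together for a dataset like this one, a condensed, hypothetical sketch of the remaining steps might read as follows (the "standardifyIt()" argument names are assumptions; "metaMDS()" is the standard NMDS function from the vegan package, and gcms_areas is the hypothetical "mzExacto()" output from the earlier sketches):

# Hypothetical sketch; standardifyIt() argument names are assumptions.
# Standardize component areas against a known internal standard.
gcms_std <- standardifyIt(gcms_areas, standard = "tetradecane")

# The standardized samples-by-compounds table can then feed common
# multivariate workflows, e.g., NMDS with the vegan package (assuming
# gcms_std holds only the numeric area columns at this point).
library(vegan)
nmds_fit <- metaMDS(gcms_std, distance = "bray", k = 2)
plot(nmds_fit)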
Results

Peak areas calculated by uafR for the set of standards correlated with the volume of the standards injected, with R² values ranging from 0.8273 to 0.9998 (Fig 2). Importantly, the single standard with a lower correlation coefficient (octanal) was likely misread by the MS or had volatilized prior to being run on the GC-MS. It is known that octanal volatilizes very easily and is used by plants as an anti-fungal compound to protect fruit [35].

After confirming that uafR can precisely identify chemicals that are known to be in a sample, the next step was to assess its accuracy in a more complex experiment with unknowns. Using raw GC-MS data from a recently published experiment allowed the workflow to be tested against a peer-reviewed study. We found that uafR was able to identify the manually selected compounds, with accurate matches to the manually identified retention times (Table 1), and yielded the same overall pattern of significance in the ANOSIM analysis (Table 2). The true benefit of using uafR is not merely its accuracy, but also its speed. For context, the original manual identifications required months of labor. Using uafR, we re-analyzed this entire experiment in 150 minutes of automated computation on a standard desktop computer with a 3.30 GHz processor and 16 GB of RAM. While the speed and accuracy for this experiment are apparent, additional trials on larger datasets are warranted.

The possible applications of a direct connection between R and PubChem are diverse. Beyond statistical tests and advanced computational pipelines, the graphical framework can provide publication-quality visuals with minimal code. This package harnesses the most advanced open-source chemical dataset and makes it accessible to anyone with basic experience working in R.

Conclusion

Our described workflow and package utilities bring GC-MS data processing up to par with the advanced technology that generates the data. Though technically uafR should apply in the same manner to other mass spectrometers (e.g., Q-TOF, Orbitrap, TimsTOF HT, and/or Astral) and even to liquid chromatography coupled MS data, it has not yet been tested in those contexts. In addition, since uafR depends on published data, our algorithms do not yet apply to MS2, MS3, or ion mobility + MS2 data. This functionality will be added to the software once these spectra are published for most chemicals in PubChem's database. The difficult portion of chemical identification should not occur on the computer. Anyone with the ability to install packages and load a ".CSV" file into R now has access to a suite of functions that streamline a complex workflow, so more effort can be spent interpreting rather than preparing data. It is important to mention that while uafR accurately processed the GC-MS data tested here, researchers should still validate that the compound areas identified by the algorithm make chemical and/or biological sense in their study system. Thankfully, the output from "categorate()" can help in these assessments by collecting relevant information for every molecule in an easily interpretable structure. Chemical knowledge has grown increasingly advanced and accessible in recent years. The precision of GC-MS instruments and, consequently, their output allows published information to be accessed with 100% accuracy. While previous algorithms have focused on using statistics to separate likely aggregates of compound areas, their accuracy fails in complex contexts because too many distinctly different chemicals "behave" the same (i.e., have the same mass and/or retention indices) and so cannot be teased apart statistically without additional knowledge.
Our approach is the first and, to date, only R package that uses published data to extract compound areas for the most likely compound identifications. By automating this component of the GC-MS workflow, we anticipate our package will greatly increase the speed at which chemistry datasets are published, the size of the chemical studies that can be conducted, and the accessibility of chemical analyses to scientists in related fields.

Fig 2. Correlations between the volume of standards tested via GC-MS and the peak area estimates generated by uafR, for (A) ethyl hexanoate, (B) methyl salicylate, (C) octanal, and (D) undecane. Points represent raw data, while the line represents a natural log fit (A) or linear fit (B-D) to the raw data. https://doi.org/10.1371/journal.pone.0306202.g002

Table 2. Chemicals identified in Ponce et al. 2022 using manual identification, versus compounds identified by the uafR package using the same selection criteria: >75% match of the chemical ID, and present in more than one sample. Compounds shared between identification techniques are in bold print.
5,524.4
2024-07-05T00:00:00.000
[ "Chemistry", "Computer Science" ]
Multi-Modal Image Fusion Based on Matrix Product State of Tensor

Multi-modal image fusion integrates different images of the same scene, collected by different sensors, into one image, making the fused image easy for computers to recognize and for human vision to perceive. Traditional tensor decomposition is an approximate decomposition method and has been applied to image fusion; as a result, image details may be lost in the process of reconstructing the fused image. To preserve the fine information of the images, an image fusion method based on tensor matrix product decomposition is proposed in this article to fuse multi-modal images. First, each source image is initialized into a separate third-order tensor. Then, the tensor is decomposed into a matrix product form by using singular value decomposition (SVD), and the sigmoid function is used to fuse the features extracted in the decomposition process. Finally, the fused image is reconstructed by multiplying all the fused tensor components. Since the algorithm is based on a series of singular value decompositions, a stable closed-form solution can be obtained and the calculation is also simple. The experimental results show that the quality of the fused images obtained by this algorithm is superior to that of other algorithms in both objective evaluation metrics and subjective evaluation.

INTRODUCTION

The purpose of image fusion is to synthesize multiple images of the same scene into a fused image containing part or all of the information of each source image (Zhang, 2004). The fused image contains more information than each source image and is thus more suitable for machine processing and human visual perception. Image fusion has a wide range of applications in many fields, such as computer vision, remote sensing, medical imaging, and video surveillance (Goshtasby and Nikolov, 2007). Sensors of the same type acquire information in a similar way, so single-modal image fusion cannot provide information about the same scene from different aspects. On the contrary, multi-modal image fusion (Ma et al., 2019) realizes the complementarity of different features of the same scene by fusing images collected by different types of sensors, and generates an informative image for subsequent processing. As typical multi-modal images, infrared and visible images, and CT and MRI images, provide distinctive features and complementary information: infrared images capture the thermal radiation signal and visible images capture the reflected light signal; CT is mainly used for signal acquisition of sclerous tissue (e.g., bones), and MRI is mainly used for signal acquisition of soft tissue. Therefore, multi-modal image fusion has a wide range of applications in engineering practice.

To realize image fusion, many scholars have proposed a large number of fusion algorithms in recent years. In general, the fusion methods can be divided into two categories: spatial-domain methods and transform-domain methods. Typical methods in the first category include the weighted average method, the principal component analysis (PCA) method (Yu et al., 2011), and so on. They fuse the gray values of image pixels directly. Although direct operation on the pixels has low complexity, the fusion process is less robust to noise, and the results cannot meet the needs of the application in most cases. To overcome this shortcoming, fusion methods based on transforms have been proposed (Burt and Adelson, 1983; Haribabu and Bindu, 2017; Li et al., 2019).
In general, the transform-based methods obtain the transformed coefficients of an image using a certain set of basis functions, then fuse these coefficients through certain fusion rules, and finally obtain the final fused image through the corresponding inverse transform. For example, Burt and Adelson (1983) formed a Laplacian pyramid (LP) by downsampling and filtering the source images, and then designed different fusion strategies at each layer; the fused image is finally obtained by applying the inverse transform to the fused coefficients. Haribabu and Bindu (2017) first decomposed the source images using the discrete wavelet transform (DWT) and fused the coefficients with predefined fusion rules, and then obtained the final image by applying the inverse discrete wavelet transform to the fused coefficients. Because the transform-based methods employ average-weighted fusion rules for the low-frequency components, which carry most of the energy of the image, the final fused image suffers from a loss of contrast.

In addition to the traditional spatial-domain and transform-domain methods, sparse representation (SR) has been used extensively in image fusion in recent years (Yang and Li, 2010; Jiang and Wang, 2014; Liu et al., 2016; Zhang and Levine, 2016). The SR method assumes that the signal to be processed, y ∈ R^n, can be written as y = Dx, where D ∈ R^{n×m} (n << m) is an overcomplete dictionary, n is the dimension of the signal, m is the number of atoms in the dictionary D, which is formed from a set of image sub-blocks, and x is the sparse coefficient vector. The fused image is reconstructed by fusing the sparse coefficients. Although the SR-based method has achieved many results in the field of image fusion, some detailed information is lost in the reconstructed image (e.g., the edges and textures tend to be smoothed), which limits the ability of SR to express images (Yang and Li, 2010). To solve this problem, some scholars have proposed improved algorithms (Jiang and Wang, 2014; Liu et al., 2016). For instance, Jiang and Wang (2014) used morphological component analysis (MCA) to represent the source images more effectively. The MCA method first applies SR to separate the source images into two parts, cartoon and texture; then different fusion rules are designed to fuse these two parts, respectively. Finally, a fused image with rich information is obtained, and more characteristic features of the source images are preserved.

As an extension of vectors and matrices, the tensor (Kolda and Bader, 2009) plays an important role in high-dimensional data processing. In the field of computer science and technology, a tensor is a multi-dimensional array. It generalizes common data types: a tensor of order 0 is a constant, a tensor of order 1 is a vector, a tensor of order 2 is a matrix, and a tensor of order N (N ≥ 3) is called a high-order tensor. In essence, tensor decomposition is a high-order generalization of matrix decomposition, and it is mainly applied to dimensionality reduction, sparse data completion, and implicit relationship mining. Information processing methods based on tensors are more suitable than those based on vectors and matrices for processing high-dimensional data and extracting feature information; therefore, relevant applications have emerged in recent years (Bengua et al., 2015, 2017a).
In view of the excellent performance of tensors in representing high-dimensional data and extracting features, a tensor-based high-order singular value decomposition (HOSVD) method (Liang et al., 2012) was applied to image fusion and achieved good results. In this method, the source image is initialized into a tensor, which is subsequently decomposed into several sub-tensors by using a sliding-window technique. Then, HOSVD is applied to each sub-tensor to extract the corresponding features, which are fused by employing certain fusion rules. Since HOSVD is an approximate decomposition method, it leads to a loss of information in the process of image fusion. At the same time, the computational load is large and a stable closed-form solution cannot be obtained. To avoid the loss of detailed information, a novel method based on the matrix product state (MPS) is proposed here to fuse multi-modal images. Compared with HOSVD, MPS acquires the image information accurately and thereby improves upon it. Moreover, unlike SR, which linearly represents images using atoms in an overcomplete dictionary, the proposed method decomposes the image tensor into an MPS. The main difference is that SR is an approximate decomposition, while MPS is an exact decomposition. Therefore, in terms of signal reconstruction, MPS has better performance in signal expression.

The main contributions of the article are outlined as follows: (i) Considering that image fusion depends more on the local information of the source images, and that dividing the image into blocks can capture more details around each pixel, the two source images are first divided into several sub-image blocks, and then the corresponding sub-image blocks are initialized into sub-tensors; (ii) We perform MPS decomposition on each sub-tensor separately to obtain the corresponding core matrices. The core matrices are fused using a fusion rule based on the sigmoid function, which incorporates the conventional choose-max strategy and the weighted-average strategy. This fusion strategy can preserve the features of the multi-modal source images and reduce the loss of contrast to the greatest extent; (iii) Due to the application of MPS, the computational complexity of tensor-based image fusion is reduced dramatically, since the MPS decomposition is realized by computing a series of sub-tensors of maximum order 3. Moreover, a stable closed-form solution is obtained in the proposed algorithm.

The rest of the article is organized as follows. Section 2 introduces the theory of matrix product decomposition. In section 3, the algorithm principle and the fusion steps are discussed in detail. Subsequently, the results of the experiments are presented in section 4. Finally, some conclusions are drawn in section 5.

Tensor
A tensor is a generalization of the vector; a vector is a tensor of order 1. For simplicity and accuracy of the following expressions, we first introduce some notation for tensors. A tensor of order 0 is a constant, represented by a lowercase letter x; a tensor of order 1 is a vector, represented by a bold lowercase letter x; a tensor of order 2 is a matrix, represented by a bold capital letter X; and a tensor of order 3 is represented by a bold italic capital letter X.
In this way, a tensor of order N whose dimensions are I_1 × I_2 × ··· × I_N can be expressed as X ∈ R^{I_1×I_2×···×I_N}, where I_i is the length of the i-th dimension. In general, we use x_{i_1 ··· i_N} to represent the (i_1, ···, i_N)-th element of X.

MPS for Tensor
The MPS decomposition (Perez-Garcia et al., 2006; Schollwock, 2011; Schuch et al., 2011; Sanz et al., 2016) aims to decompose an N-dimensional tensor X into left and right orthogonal factor matrices and a core matrix. First, the dimensions of the N-dimensional tensor X are rearranged so that the dimension K corresponds to the number of images to be fused; for example, if the number of source images is 2, then K = 2. Additionally, assuming the tensor X satisfies X ∈ R^{I_1×···×I_{n−1}×K×I_n×···×I_N} with I_1 ≥ ··· ≥ I_{n−1} and I_n ≤ ··· ≤ I_N, the elements of X can be expressed in MPS form (1); a schematic diagram of the MPS form of X is shown in Figure 1. The factor matrices appearing in (1), of size δ_{j−1} × δ_j with δ_0 = δ_{N+1} = 1, are called the left/right orthogonal factor matrices; they are all orthogonal, in the sense that each factor matrix multiplied by its transpose gives an identity matrix I, and C^n_k is called the core matrix. A tensor X can be decomposed into the form (1) through two series of SVD decompositions. The process consists of a left-to-right sweep and a right-to-left sweep; we summarize it in Algorithm 1.

IMAGE FUSION BASED ON MPS
In this section, the whole process of image fusion is described. The source images, which have been reconstructed into tensors, are decomposed into a series of sub-tensors by using the sliding-window technique. A graphical representation of the sliding-window technique is shown in Figure 2. Then the MPS decomposition is applied to the sub-tensors to obtain the core matrices, and the sigmoid function is used to fuse each pair of core matrices into a fused core matrix. The theoretical details of the decomposition and fusion are described in sections 3.1 and 3.2, respectively, and the overall image fusion process proposed in this article is described in section 3.3.

Tensor Decomposition by MPS
For two source images A and B of size M × N, we construct a tensor X of dimension M × N × 2. Taking into account the importance of the local information of the source images, a sliding-window technique is used to decompose it into several patch-sized sub-tensors F, where the sliding step p should satisfy p ≤ min{M, N}; the sub-tensors F are obtained by Algorithm 2, as follows. In Algorithm 2, fix((M − patch size)/step size) represents the integer part of (M − patch size)/step size, and fix((N − patch size)/step size) represents the integer part of (N − patch size)/step size. Then, the MPS decomposition is applied to each of the sub-tensors.
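To make the two-sweep SVD construction concrete, here is a small illustrative sketch, written in R rather than the authors' MATLAB environment, of an exact two-SVD factorization of a third-order array into a left factor, a core, and a right factor, in the spirit of Algorithm 1 (rank truncation and the general N-order sweep are omitted):

# Illustrative MPS-style factorization of a third-order array via two SVDs.
# This is a sketch, not the authors' implementation.
mps3 <- function(X) {
  d <- dim(X)                                  # c(I1, I2, I3)
  # Step 1: unfold X as an I1 x (I2*I3) matrix and take an SVD.
  M1 <- matrix(X, nrow = d[1])
  s1 <- svd(M1)
  A1 <- s1$u                                   # left factor, I1 x r1
  R1 <- diag(s1$d, nrow = length(s1$d)) %*% t(s1$v)   # r1 x (I2*I3)
  r1 <- nrow(R1)
  # Step 2: regroup the remainder as (r1*I2) x I3 and take a second SVD.
  M2 <- matrix(array(R1, dim = c(r1, d[2], d[3])), nrow = r1 * d[2])
  s2 <- svd(M2)
  C  <- s2$u                                   # core, (r1*I2) x r2
  A3 <- diag(s2$d, nrow = length(s2$d)) %*% t(s2$v)   # right factor, r2 x I3
  list(A1 = A1, core = C, A3 = A3, r1 = r1)
}

# Exact reconstruction: reverse the two matrix products and reshapes.
mps3_rebuild <- function(f, d) {
  M2 <- f$core %*% f$A3                        # (r1*I2) x I3
  R1 <- matrix(array(M2, dim = c(f$r1, d[2], d[3])), nrow = f$r1)
  array(f$A1 %*% R1, dim = d)
}

# Round-trip check on a random 8 x 8 x 2 "two-image" sub-tensor:
X <- array(rnorm(8 * 8 * 2), dim = c(8, 8, 2))
f <- mps3(X)
max(abs(mps3_rebuild(f, dim(X)) - X))          # ~ 1e-15, i.e., exact

Because every step is a plain SVD, the factorization is exact up to floating-point error, which is precisely the property that distinguishes this approach from approximate decompositions such as truncated HOSVD.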
Design of Fusion Rule
We introduce the sigmoid function as the fusion rule for the characteristic coefficients. The fusion coefficient e_i(l) of each core matrix is defined for each sub-image, where the subscript i indicates the number of the sub-image and l is the label of the corresponding source image. For the e_i(l) obtained in the previous section, the fusion rule is selected by comparing the values of e_i(1) and e_i(2). When e_i(1) is much less than or much greater than e_i(2), we use the choose-max rule; for other relations between e_i(1) and e_i(2), we use weighted fusion to fuse the corresponding coefficient matrices and obtain the final fused coefficient matrix. The function is as follows:

D_i = [ 1 / (1 + exp(−k ln(e_i(1)/e_i(2)))) ] × C_i(:, :, 1) + [ exp(−k ln(e_i(1)/e_i(2))) / (1 + exp(−k ln(e_i(1)/e_i(2)))) ] × C_i(:, :, 2),   (5)

where k is the shrinkage factor of the sigmoid function. After D_i is obtained, each fused sub-image block F_i can be reconstructed by the inverse operations of MPS. The sub-image blocks F_i are then used to obtain the final fused image G. To make the process of decomposition and fusion more concrete, the first group of experimental images is used as an example in the flowchart shown in Figure 3.

The Process of Image Fusion Based on MPS
The process of image fusion based on MPS can be divided into the following seven steps:
1. Input two source images;
2. Reconstruct the two source images into a third-order tensor, and extract the sub-tensors with the sliding-window technique;
3. Apply matrix product state decomposition to the sub-tensors to obtain the left and right factor matrices and the core matrices;
4. Compare the vectors representing source image 1 and source image 2 in the core matrices obtained in step 3, obtain the fused matrices by mapping their quantitative relations onto the corresponding cases of the sigmoid function, and then assemble them into sub-tensors;
5. Multiply the fused sub-tensors by the left and right factor tensors to obtain sub-images;
6. Add the sub-images together;
7. Output the fused image.
The specific flowchart is shown in Figure 4.
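As an illustrative sketch (in R, matching the earlier sketch rather than the authors' MATLAB code) of the fusion rule in Eq. (5), for one pair of core matrices C_i(:, :, 1) and C_i(:, :, 2) with fusion coefficients e_i(1) and e_i(2):

# Sigmoid fusion rule of Eq. (5); k is the shrinkage factor (200 in this paper).
fuse_cores <- function(C1, C2, e1, e2, k = 200) {
  w <- 1 / (1 + exp(-k * log(e1 / e2)))  # weight of the first core matrix
  # (1 - w) equals exp(-k*log(e1/e2)) / (1 + exp(-k*log(e1/e2))), the second
  # weight in Eq. (5), so the two weights sum to 1.
  w * C1 + (1 - w) * C2
}

For e_i(1) >> e_i(2) the weight w approaches 1 and the rule reduces to the choose-max strategy; for e_i(1) ≈ e_i(2) it approaches an even weighted average, which is exactly the behaviour described above. The following five objective metrics are then used to evaluate the fused images.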
1. Standard deviation (SD)
SD is defined as follows:

SD = sqrt( (1/(H W)) Σ_{i=1}^{H} Σ_{j=1}^{W} [F(i, j) − µ]^2 ),

where µ is the average value of the fused image F, and H and W are the length and width of the image, respectively. SD is mainly used to measure the contrast of the fused image.

2. Mutual information (MI)
Mutual information is defined as follows:

MI_{RF} = Σ_{i=1}^{L} Σ_{j=1}^{L} h_{R,F}(i, j) log_2 [ h_{R,F}(i, j) / (h_R(i) h_F(j)) ],

where h_{R,F}(i, j) is the normalized joint gray-level histogram of the source image R and the fused image F, h_R(i) and h_F(j) are the corresponding normalized marginal histograms, and L is the number of gray levels.

3. Structural similarity (SSIM)
Structural similarity is defined as follows:

SSIM(x, y) = [ (2µ_x µ_y + c_1)/(µ_x^2 + µ_y^2 + c_1) ]^α [ (2σ_x σ_y + c_2)/(σ_x^2 + σ_y^2 + c_2) ]^β [ (σ_{xy} + c_3)/(σ_x σ_y + c_3) ]^γ,

where µ_x and µ_y are the average values of x and y; the middle term represents the similarity of contrast, with σ_x and σ_y the SDs of x and y; the right term characterizes the structural similarity, with σ_{xy} the covariance of x and y; c_1, c_2, and c_3 are three constants; and the parameters α, β, and γ adjust the contributions of the three terms, respectively. SSIM measures the similarity between the fused image and a source image. Its value lies between 0 and 1; the closer it is to 1, the more similar the two images are. The average of the values between the fused image and the two source images A and B is taken as the final evaluation metric, namely SSIM = [SSIM(A, F) + SSIM(B, F)]/2.

4. Gradient-based fusion metric (Q_G)
Q_G is defined as follows:

Q_G = Σ_{x,y} [ Q^{AF}(x, y) w^A(x, y) + Q^{BF}(x, y) w^B(x, y) ] / Σ_{x,y} [ w^A(x, y) + w^B(x, y) ],

where Q^{AF}(x, y) = Q^{AF}_g(x, y) Q^{AF}_α(x, y); at each pixel (x, y), Q^{AF}_g(x, y) and Q^{AF}_α(x, y) denote the edge strength and orientation preservation values. Q^{BF}(x, y) is defined in the same way as Q^{AF}(x, y). The weighting factors w^A(x, y) and w^B(x, y) indicate the significance of Q^{AF}(x, y) and Q^{BF}(x, y). Q_G is an important fused-image quality evaluation metric, computing the amount of gradient information injected into the fused image from the source images.

5. Phase congruency based fusion metric (Q_P)
Q_P is defined in terms of the phase congruency p and the maximum and minimum moments M and m. The parameters α, β, and γ are set to 1 in this article; for more detailed information on the parameters, please refer to Hong (2000). Q_P measures the extent to which the salient features of the source images are preserved.

Study of Patch Size and Step Size
Since the sliding-window technique is used, we first study experimentally the respective influence of the size of the sub-image block and the step size of the sliding window on the performance of the fused image. In what follows we refer to these two factors briefly as patch size and step size. To obtain the optimal patch size and step size, we use a pair of infrared and visible images as source images, as shown in Figures 5A,B. In the patch-size experiment, the patch size is set to 2×2, 4×4, 6×6, 8×8, 10×10, 12×12, 14×14, 16×16, 18×18, and 20×20, with the step size fixed to 1 and the shrinkage factor fixed to 200. In the step-size experiment, the step size is set to 1, 2, 4, 6, 8, and 10, with the patch size fixed to 16×16 and the shrinkage factor fixed to 200. The experimental results based on the objective evaluation metrics are shown in Tables 1 and 2, and the output fused images are shown in Figures 5 and 6. It can be seen clearly from Table 1 that, in most cases, the best results are obtained when the size of the sub-image block is 16×16. By simple analysis, when the sub-image block is too small, the image characteristics cannot be effectively represented. Additionally, it can be seen from Table 2 that the best results are obtained when the step size is 1. By simple analysis, when the step size is too large, local information of the image may be lost or not displayed well. Therefore, in the following experiments, the patch size was set to 16×16 and the step size was set to 1.

Computation Complexity
The computation time of each group of experimental images was recorded for the different fusion algorithms. The experimental results show that the complexity of the proposed algorithm is lower than that of the other algorithms; the results are shown in Table 3. All the code was run under MATLAB R2014a on a computer with an Intel i7-7700K CPU (4.2 GHz) and 16 GB of RAM. As can be seen from the table, the proposed algorithm runs faster than SR and the dual-tree complex wavelet transform-sparse representation (DTCWT-SR) method. In general, the computational complexity of the proposed algorithm is reduced.

Experimental Results and Discussion
In this section, the effectiveness of the proposed method is further verified by comparing the experimental results of the proposed algorithm with those of other fusion methods. The comparison methods are DWT (Haribabu and Bindu, 2017), LP (Burt and Adelson, 1983), an SR-based method (Liu et al., 2016), VGG-Net (Hui et al., 2018), and DTCWT-SR (Singh et al., 2012). In addition to the infrared and visible images used in the previous section, CT and MRI medical images are also used in the comparison experiments. The performance of each algorithm is evaluated by computing the evaluation metrics on the fusion results.
In the experiments, all the source images are of size 256×256, the patch size is fixed to 16×16, the step size is 1, and the shrinkage factor k is 200. The proposed method and the comparison algorithms are applied to nine pairs of source images. The experimental results are shown in Figures 7-15, and the objective evaluation metric values for the nine pairs of images are shown in Tables 4 and 5. It can be seen from the tables that in most cases the algorithm proposed in this article achieves the best results; in particular, for CT and MRI images, the metrics of MPS are much higher than those of the other methods. For infrared and visible images, the proposed method also achieves the best results on more than half of the evaluation metrics. These results show that the proposed method is better than the other methods for multi-modal image fusion. This advantage mainly comes from two aspects: (i) the sliding-window approach divides the image into several sub-images, so the local information of the image is captured well; (ii) the MPS method is an exact decomposition and reconstruction method, so in the process of image fusion there is no information loss due to the solution.

Further analysis of the experimental results shows the following: (i) On the whole, VGG-Net has the worst performance in all cases; compared with the other comparison methods, there is a big gap in the various evaluation metrics. This is because insufficient information is captured in the layer-by-layer feature extraction of the source images, and when the details of the fused image are weighted by the final weight map, the contrast of the initial detail part of the fused image is reduced. (ii) Of the two multi-scale methods used, the DWT fusion method performs poorly. This is because the DWT method achieves fusion based on the Haar wavelet, which can only capture image features in the horizontal and vertical directions and cannot capture more basic features of the image; the LP method is better than the DWT method because the Laplacian pyramid generates only one band-pass component at each scale, which reduces the possibility of being affected by noise. (iii) The results obtained by the SR method are better than those of the other multi-scale methods in most cases, but not as good as those of the proposed method. This is because the signal representation ability of SR is better than that of multi-scale transforms, but errors occur in the process of signal reconstruction, which is unavoidable for the SR method. The method proposed in this article effectively avoids this problem through lossless tensor reconstruction. In addition, the "max-L1" rule of direct fusion in the spatial domain leads to spatial inconsistency, which affects the performance of the SR method. (iv) DTCWT-SR is a method that combines a multi-scale method with the SR method. Comparing the objective evaluation metrics, the fusion performance of this algorithm is better than SR in some respects, but still poor compared with MPS.

In addition to the objective evaluation, the performance of the proposed algorithm is also discussed through visual comparison of the fused images. In general, the proposed method achieves the best visual effect among all the fused images. The fusion results for the infrared-visible images are shown in Figures 7-11.
It can be seen from the figures that the method proposed in this article has good adaptability, and the fused images retain the information of both the infrared and visible images. In Figure 7, both the multi-scale fusion methods and SR show varying degrees of artificial traces at the junction between the trees and the sky in the upper left corner, while DTCWT-SR and VGG-Net result in severe contrast loss. In Figure 8, the white squares in the infrared image are dimmed to varying degrees by the DWT, LP, SR, DTCWT-SR, and VGG-Net methods, and the leaf luster in the visible image is not well displayed by the VGG-Net method. In Figure 9, DWT and SR show information loss. LP, DTCWT-SR, and VGG obtain relatively complete fused images, but their brightness is weaker than that of MPS; the billboard in the upper left corner of the fused image is clearer in the MPS result. In Figure 10, the fused images obtained by DWT and SR show some small black blocks, that is, information loss, while the brightness of the human shape on the right side of the images obtained by the LP, DTCWT-SR, and VGG methods is low. The reason for these shortcomings is that the fusion rules used in the fusion process all apply a certain degree of weighting to the source images. Our fusion rule based on the sigmoid function avoids these shortcomings: in an image whose colors are only black and white, the weight of the white part will be much larger than that of the black part, so the rule effectively degenerates into a choose-max rule. In Figure 11, compared with the other five comparison methods, the human figure on the right and the branch in the lower right corner of the fused image obtained by MPS have the highest resolution. Figures 12-15 show the fusion results for CT and MRI medical images. It can be seen from the experimental results that the DWT method cannot be applied to the fusion of medical images, while the other four methods obtain a complete image. In Figure 12, the LP, DTCWT-SR, and VGG-Net methods show no loss of detail, but the sharpness of the light-dark junction is insufficient, the edges are blurred, and contrast is lost; the bottom of the fused image obtained by the SR method is fractured, indicating information loss. In Figure 13, the spine in the lower right corner and the jaw in the lower left corner of the image obtained by MPS are clearer than in the other five methods, the brain veins are also clearer, and the contrast is higher. In Figure 14, the fused images obtained by the LP and SR methods are fractured at the lower right corner; although the DTCWT-SR and VGG methods obtain relatively complete fused images, there is a certain degree of contrast loss. In Figure 15, the LP, DTCWT-SR, and VGG-Net methods show some contrast loss, especially in the middle part; at the same time, the image obtained by the SR method presents spatial dislocation on both sides of the eyeball, and a certain degree of distortion appears at the white connection between the two images; a similar shortcoming of the SR method can be seen in the lower right corner of the image. CONCLUSION In this article, we propose a method based on MPS for multi-modal image fusion. First, the source images are initialized into a three-dimensional tensor, and then the tensor is decomposed into several sub-tensors by using a sliding window to obtain the corresponding features.
The core matrices are fused by the fusion rule based on the sigmoid function, and the fused image is obtained by multiplying with the left and right factor matrices. In this article, we use a sliding window to avoid blocking effects and fully consider the local information of the source images by dividing them into a set of sub-images. The experimental results show that the proposed algorithm is feasible and effective for image fusion. Unlike the average fusion rule of the multi-scale methods and the "max-L1" fusion rule of the SR method, the fusion rule based on the sigmoid function used in this article is more effective, although it also makes the fusion process of the proposed method more complicated. Future work will focus on exploring a more effective fusion rule to further improve the fusion results. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
6,780.2
2021-11-15T00:00:00.000
[ "Computer Science", "Engineering" ]
Global Existence of Weak Solutions to a Fractional Model in Magnetoelastic Interactions ≤ (a/2) ∫_Ω |Λ^α m_0|^2 dx + (ρ/2) ∫_Ω |ω_1|^2 dx + (1/4) ∫_Ω |ω_x(0)|^2 dx + C(Ω, λ), (14) where C(Ω, λ) is a positive constant which depends only on Ω and λ. The main result of this paper is the following. Theorem 2. Let α ∈ (1/2, 1), m_0 ∈ H^α(Ω) such that |m_0| = 1 a.e., ω_0 ∈ H^1_0(Ω), and ω_1 ∈ L^2(Ω). Then there exists at least one weak solution of problem (7)-(8)-(9) in the sense of Definition 1. The proof of Theorem 2 will be given in Section 4. 3. Some Technical Lemmas In this section we present some lemmas which will be used in the rest of the paper. We start by recalling the following lemma due to Simon (see [8]). Lemma 3. Assume A, B, and C are three Banach spaces satisfying A ⊂ B ⊂ C with compact embedding A ↪ B. Let Θ be bounded in L^∞(0, T; A) and Θ_t := {f_t ; f ∈ Θ} be bounded in L^p(0, T; C), p > 1. Then Θ is relatively compact in C([0, T]; B). There is another lemma, whose proof can be found in [[9], page 12]. Lemma 4. Let Θ be a bounded open set of R^d_x × R_t, and let h_n and h in L^q(Θ), 1 < q < ∞, be such that ‖h_n‖_{L^q(Θ)} ≤ C and h_n → h a.e. in Θ; then h_n ⇀ h weakly in L^q(Θ). The following lemma will ensure a compact embedding for the space W^{s,p}. Lemma 5. Let Θ be a bounded open set of R^d which is uniformly Lipschitz. Let s ∈ [0, 1), p > 1, d ≥ 1. If sp < d, then the injection of W^{s,p}(Θ) into L^k(Θ) is compact for any k < dp/(d − sp). The proof can be found in [[10], Theorem 4.54, p. 216]. We now give a lemma that will play a very important role in the convergence of the approximate solutions (see [11-13] for a proof). Lemma 6 (commutator estimates). Suppose that s > 0 and p ∈ (1, +∞). If f, g ∈ S (the Schwartz class), then ‖Λ^s(fg) − f Λ^s g‖_{L^p} ≤ C(‖∇f‖_{L^{p_1}} ‖g‖_{Ẇ^{s−1,p_2}} + ‖f‖_{Ẇ^{s,p_3}} ‖g‖_{L^{p_4}}), (15) and ‖Λ^s(fg)‖_{L^p} ≤ C(‖f‖_{L^{p_1}} ‖g‖_{Ẇ^{s,p_2}} + ‖f‖_{Ẇ^{s,p_3}} ‖g‖_{L^{p_4}}), (16) with p_2, p_3 ∈ (1, +∞) such that 1/p = 1/p_1 + 1/p_2 = 1/p_3 + 1/p_4. Here is another lemma, which can be viewed as a consequence of the Hardy-Littlewood-Sobolev theorem of fractional integration; see [7] for a detailed proof. Lemma 7. Suppose that p > q > 1 and 1/p + s = 1/q. Assume that f ∈ L^q; then Λ^{−s} f ∈ L^p and there is a constant C > 0 such that ‖f‖_{Ẇ^{−s,p}} := ‖Λ^{−s} f‖_{L^p} ≤ C ‖f‖_{L^q}. (17) We finish this section with the following result (the proof can be found in [2]). Lemma 8. If f and g belong to H^{2α}_per(Ω) := {f ∈ L^2(Ω) : Λ^{2α} f ∈ L^2(Ω)}, then ∫_Ω Λ^{2α} f · g dx = ∫_Ω Λ^α f · Λ^α g dx. (18) 4. Proof of Theorem 2 Our goal is to show global existence of weak solutions for the fractional problem (7)-(8)-(9). 4.1. The Penalty Problem. Let ε > 0 be a fixed parameter. We construct approximate solutions m^ε converging, as ε → 0, to a solution m of the problem. System (7) is reduced to the following problem: γ^{−1} m^ε_t × m^ε + m^ε_t + a Λ^{2α} m^ε + l(m^ε, ω^ε) + ((|m^ε|^2 − 1)/ε) m^ε = 0, ρ ω^ε_{tt} − ω^ε_{xx} − λ (m^ε_1 m^ε_3)_x = 0, (19) in Q = Ω × (0, T), where the vector l(m, ω) is given by l(m, ω) = (λ m_3 ω_x, 0, λ m_1 ω_x), λ_1 = λ_2 = 0, λ_3 = λ, and σ_{1313} = 1. System (19) is supplemented with the initial and boundary conditions ω^ε(·, 0) = ω_0, ω^ε_t(·, 0) = ω_1, m^ε(·, 0) = m_0, |m_0| = 1 a.e. in Ω, ω^ε = 0, m^ε(x, t) = m^ε(x + 2π, t) on Σ. (20) We apply the Faedo-Galerkin method: let {f_i}_{i∈N} be an orthonormal basis of L^2(Ω) consisting of eigenfunctions of the operator Λ^{2α} (the existence of such a basis can be proved as in [14], Ch. II), Λ^{2α} f_i = α_i f_i, i = 1, 2, ..., f_i(0) = f_i(2π), (21) and let {g_i}_{i∈N} be an orthonormal basis of L^2(Ω) consisting of eigenfunctions of the operator −Δ: −Δ g_i = β_i g_i, i = 1, 2, ..., g_i = 0 on ∂Ω. (22) We then consider the following problem in Q = Ω × (0, T): γ^{−1} m^{ε,N}_t × m^{ε,N} + m^{ε,N}_t + a Λ^{2α} m^{ε,N} + l(m^{ε,N}, ω^{ε,N}) + ((|m^{ε,N}|^2 − 1)/ε) m^{ε,N} = 0, ρ ω^{ε,N}_{tt} − ω^{ε,N}_{xx} − λ (m^{ε,N}_1 m^{ε,N}_3)_x = 0, (23) with initial and boundary conditions ω^{ε,N}(·, 0) = ω^N(·, 0), ω^{ε,N}_t(·, 0) = ω^N_t(·, 0), m^{ε,N}(·, 0) = m^N(·, 0) in Ω, ω^{ε,N} = 0, m^{ε,N}(x, t) = m^{ε,N}(x + 2π, t) on Σ = ∂Ω × (0, T), where ∫_Ω ω^N(x, 0) g_i(x) dx = ∫_Ω ω_0(x) g_i(x) dx, ∫_Ω ω^N_t(x, 0) g_i(x) dx = ∫_Ω ω_1(x) g_i(x) dx, ∫_Ω m^N(x, 0) f_i(x) dx = ∫_Ω m_0(x) f_i(x) dx. (24) We look for approximate solutions (m^{ε,N}, ω^{ε,N}) of (23) in the form m^{ε,N} = Σ_{i=1}^{N} a_i(t) f_i(x), ω^{ε,N} = Σ_{i=1}^{N} b_i(t) g_i(x). (25) If we multiply each scalar equation of the first equation of (23) by f_i and the second by g_i and integrate over Ω, we arrive at a system of ordinary differential equations in the unknowns (a_i(t), b_i(t)), i = 1, 2, ..., N. We observe that we can write the first equation in the form −a Λ^{2α} m^{ε,N} − l(m^{ε,N}, ω^{ε,N}) − ((|m^{ε,N}|^2 − 1)/ε) m^{ε,N} = A(m^{ε,N}) m^{ε,N}_t. (26) Introduction The nonlinear parabolic-hyperbolic coupled system describing magnetoelastic dynamics in Q = (0, T) × Ω (T > 0 and Ω a bounded open set of R^d, d ≥ 1) is given by equations (1)-(2) (see [1]). Equation (1), well known in the literature, is the Landau-Lifshitz-Gilbert (LLG) equation. The unknown m, the magnetization vector, is a map from Ω to S^2 (the unit sphere of R^3), and m_t is its derivative with respect to time. The symbol × denotes the vector cross product in R^3. Moreover, we denote by m_i, i = 1, 2, 3, the components of m. The constant γ represents the damping parameter. H_eff represents the effective field, in which a is a positive constant and the components of the vector ℓ(m, u) are expressed through ε_{ij}(u) = (1/2)(∂_i u_j + ∂_j u_i), the components of the linearized strain tensor, and through the coupling coefficients λ_{ijkl} = λ_1 δ_{ijkl} + λ_2 δ_{ij} δ_{kl} + λ_3 (δ_{ik} δ_{jl} + δ_{il} δ_{jk}), with δ_{ijkl} = 1 if i = j = k = l and δ_{ijkl} = 0 otherwise. Equation (2) describes the evolution of the displacement u; ρ is a positive constant, and the tensors S(u), L(m) are built from the linearized strain through S_{ij} = σ_{ijkl} ε_{kl}(u), where σ = (σ_{ijkl}) is the elasticity tensor satisfying the usual symmetry property. Many studies have been devoted to the fractional Landau-Lifshitz equation; we quote here, for example, [2], where the existence of weak solutions under a periodic boundary condition has been proven for the equation of a reduced model for thin-film micromagnetics. In [3], the main purpose is to consider the well-posedness of the fractional Landau-Lifshitz equation without Gilbert damping; the global existence of weak solutions is proved there by the vanishing viscosity method.
Note that the existence and asymptotic behaviors of global weak solutions to the one-dimensional periodic fractional Landau-Lifshitz equation modeling soft micromagnetic materials are studied in [4]. For the magnetoelastic coupling, in [1] the authors study the three-dimensional case and establish the existence of weak solutions taking into account three terms of the total free energy. Existence and uniqueness of solutions have been proven in [5] for a simplified model, and in [6] a one-dimensional penalty problem is discussed and the gradient flow of the associated Ginzburg-Landau type functional is studied. More precisely, the authors prove the existence and uniqueness of a classical solution which tends asymptotically, for subsequences, to a stationary point of the energy functional. Our aim here is to study the coupled system of magnetoelastic interactions with the fractional LLG equation. The rest of the paper is organized as follows. In the next section we present the model equation we are interested in. Section 3 recalls some useful lemmas. Finally, in Section 4 we prove a global existence result for weak solutions of the considered model. The Model and Main Result We assume that Ω is a subset of R and that the displacement occurs in only one direction. Specifically, we consider a single space variable and assume that Ω = (0, 2π). We take system (7), with associated initial and boundary conditions (8)-(9). The effective field involves the operator Λ = (−Δ)^{1/2}, the square root of the Laplacian, which can be defined via the Fourier transform [7]. In this paper we are interested in the case α ∈ (1/2, 1). For the vector u, we assume that u = (0, 0, ω), and we keep the three components of the vector m = (m_1, m_2, m_3). It is common practice (see [5]) to replace the first equation of system (7) by the quasilinear parabolic (Ginzburg-Landau type) equation (11), in which ε is a positive parameter and m^ε : Ω → R^3; the penalization term in (11) replaces the magnitude constraint |m| = 1. Throughout, we make use of the following notation. For Ω an open bounded domain of R^3, we denote by L^2(Ω) = (L^2(Ω))^3 and H^1(Ω) = (H^1(Ω))^3 the classical Hilbert spaces equipped with the usual norms ‖·‖_{L^2(Ω)} and ‖·‖_{H^1(Ω)} (in general, the product functional spaces (X)^3 are all simplified to X). For all s > 0, W^{s,p} denotes the usual Sobolev space consisting of all f such that ‖f‖_{W^{s,p}} := ‖F^{−1}[(1 + |ξ|^2)^{s/2} F f]‖_{L^p} is finite, where F denotes the Fourier transform and F^{−1} its inverse. Let Ẇ^{s,p} denote the corresponding homogeneous Sobolev space. When p = 2, W^{s,2} corresponds to the usual Sobolev space H^s. Now we give a definition of the solution in the weak sense of problem (7)-(8)-(9).
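Since Λ = (−Δ)^{1/2} is defined through the Fourier transform, on the periodic domain Ω = (0, 2π) the operator Λ^s simply multiplies the k-th Fourier mode by |k|^s. The following minimal numerical illustration (ours, not part of the paper) applies Λ^s by FFT and checks it on a single mode:

```python
import numpy as np

def fractional_laplacian(f, s):
    """Apply Lambda^s = (-Delta)^(s/2) to samples of a 2*pi-periodic
    function: the mode exp(i*k*x) is multiplied by |k|^s."""
    k = np.fft.fftfreq(f.size, d=1.0 / f.size)  # integer wavenumbers
    return np.real(np.fft.ifft(np.abs(k) ** s * np.fft.fft(f)))

# Check: for f(x) = cos(3x), Lambda^(2*alpha) f = 3^(2*alpha) * f.
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
alpha = 0.75  # a value in (1/2, 1), as assumed in the paper
f = np.cos(3.0 * x)
g = fractional_laplacian(f, 2.0 * alpha)
assert np.allclose(g, 3.0 ** (2.0 * alpha) * f)
```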
Returning to the proof of Theorem 2: it is clear that the matrix A(m^{ε,N}) is invertible, which implies that the system of first-order ordinary differential equations is locally Lipschitz; hence there exists a local solution to the problem, which we can extend to [0, T] using a priori estimates. For this, we multiply the first equation of (23) by the corresponding basis functions and integrate. Omitting superscripts, we obtain for all t > 0 an energy estimate, thanks to the strong convergence ω^N(·, 0) → ω_0 in H^1_0(Ω). For the other term (ω^N_t(0)), the estimate can be carried out in an analogous way using the strong convergence ω^N_t(·, 0) → ω_1 in L^2(Ω). Moreover, noting that the resulting bounds hold with a constant independent of ε and N, for fixed ε > 0 we obtain uniform estimates; note that (37) is due to the Poincaré lemma. Now, from classical compactness results, there exist two subsequences, which we still denote by (m^{ε,N}) and (ω^{ε,N}), such that for fixed ε > 0 and for any 1 < p < ∞, m^{ε,N} ⇀ m^ε weakly in L^p(0, T; H^α(Ω)). (38) Convergence (38) is due to Lemma 3, and thanks to Lemma 4 it can be shown that the corresponding weak limit equals |m^ε|^2 − 1. Moreover, from the Sobolev embedding (Lemma 5) H^α(Ω) ↪ L^4(Ω), the further compactness result follows. The above estimates allow us to pass to the limit as N goes to infinity and to get the desired result. Indeed, consider the variational formulation (40) of (23), for any φ ∈ L^2(0, T; H^α(Ω)) and ψ ∈ H^1_0(Ω). Taking N → ∞ in (40), we find (41) for any φ ∈ L^2(0, T; H^α(Ω)) and ψ ∈ H^1_0(Ω). We have thus proved the following result. As φ is in L^2(0, T; H^α(Ω)), the following holds: for this choice we have Λ^α(m^ε × φ) ∈ L^2(Q); indeed, applying the multiplicative estimates (16) yields the bound (46), and since 2α > 1 (1 here is the space dimension), H^α(Ω) ↪ L^∞(Ω), and consequently (m^ε) is bounded in L^∞(Q).
3,522.4
2016-10-24T00:00:00.000
[ "Mathematics" ]
Study of weak magnetism by precision spectrum shape measurements in nuclear beta decay Nuclear beta decays play an important role in uncovering the nature of the weak interaction. The weak magnetism (WM) form factor, b_WM, is generally a small correction to the beta decay rate that arises at first order as an interference term between the dominant Gamow-Teller and the magnetic dipole contributions to the weak current. This form factor is still poorly known for nuclei with higher atomic number. We performed a careful analysis of the measured beta spectrum shape for Gamow-Teller transitions in the 114In and 32P nuclei. The precision spectrum shape measurements were carried out using the miniBETA spectrometer, consisting of a low-mass, low-Z multi-wire gas tracker and a plastic scintillator energy detector. The preliminary results of the weak magnetism extraction for the 114In and 32P nuclei are presented. Introduction High-precision β-spectrum shape measurements in nuclear beta decay are very important, as they allow exploring still poorly known effects in the Standard Model (SM) and hypothetical effects not included in it (BSM). Accurate studies of beta decays have been exploited in various applications of fundamental physics. These studies are carried out in the low-energy regime, and by using effective field theory they can be compared to direct searches for exotic couplings performed at large hadron colliders. The precision experiments in low-energy research are significantly smaller in size and less expensive, making them a perfect complement to large-scale research. At the sensitivity level of new-generation experiments, reaching a precision of the order of 1% and below, it is expected that the so-called recoil-order effects in the hadronic weak current and radiative corrections will have a sizable contribution and cannot be neglected when interpreting results in terms of BSM physics. The recoil terms in nuclear beta decay originate from QCD effects in the weak interaction of a bound quark and fold with nuclear structure effects in heavier nuclei [1,2]. The most important of these induced currents, the weak magnetism term, is directly related to the difference of the magnetic moments of the proton and the neutron and can be determined in precision measurements of the beta spectrum shape in selected transitions. Most of the available data concern the allowed and first-forbidden transitions. Knowledge about induced terms in higher-forbidden transitions is very limited, though crucial for ongoing research, such as dark matter studies and investigations of the anomaly in the observed antineutrino event rate at nuclear reactors. An overview of the current experimental and theoretical knowledge of the most important recoil term, i.e. the weak magnetism, for both the T = 1/2 mirror beta transitions and a large set of beta decays in higher isospin multiplets can be found in Ref. [1]. The experimental information on weak magnetism is only available for beta transitions of nuclei with masses up to A = 75. Hence, an experimental result for this quantity is badly needed for isotopes with higher masses (e.g. 114In) as well as for transitions with higher Ft values (e.g. 32P).
Furthermore, the shape of the beta spectrum also reveals a high sensitivity to exotic scalar and tensor coupling contributions to the weak interaction, contained in the Fierz term b_F. In order to reliably assess both the weak magnetism form factor and the Fierz term, one needs to consider a number of spectrum shape corrections, such as atomic effects (screening and exchange processes), radiative corrections, the finite size of the nucleus, etc. The full analytical description of the allowed beta spectrum shape, including most of them with a relative precision of a few parts in 10^−4, is presented in Ref. [3]. For a Gamow-Teller transition, the leading-order expression for the beta energy spectrum has the form N(W) ∝ F(±Z, W) p W (W_0 − W)^2 C(W), where F(±Z, W), p, W and W_0 are the Fermi function, the β-particle momentum, its total energy and the total energy at the spectrum endpoint, respectively, and the shape factor C(W) contains a contribution linear in energy proportional to b_WM/c, with coefficients involving the electron and nucleon masses m_e and M_n and the nuclear mass number A. The upper (lower) sign is for electron (positron) emission, and b_WM/c is the ratio of the weak magnetism and Gamow-Teller form factors in the Holstein formalism [4]. b_WM thus appears in the spectrum as a term linear in energy, with a slope of, typically, ±0.5% MeV^−1 [2]. In experiments measuring b_WM, the dominant systematic uncertainties come from the incomplete deposit of the electron energy in the detectors due to backscattering, partial transmission and bremsstrahlung. Monte Carlo (MC) simulation of these effects is helpful; however, it introduces its own uncertainty, as the input parameters are known with limited accuracy. For the extraction of the WM form factor from the beta spectrum shape, we developed a position-sensitive spectrometer that allows for identification and three-dimensional (3D) tracking of electrons while maintaining minimal electron energy losses.
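To make the role of the linear shape-factor term concrete, the toy sketch below (ours; it approximates the Fermi function by 1 and uses an illustrative slope parameter, so it is not the full description of Ref. [3]) shows how a 0.5% MeV^−1 slope tilts an allowed spectrum:

```python
import numpy as np

ME = 0.511  # electron rest energy, MeV

def gt_spectrum(Ekin, Q, slope):
    """Toy Gamow-Teller spectrum: phase space p*W*(W0 - W)^2 times an
    illustrative linear shape factor (1 + slope*W); the Fermi function
    F(+-Z, W) is approximated by 1 for simplicity."""
    W = Ekin + ME                 # total electron energy
    W0 = Q + ME                   # endpoint total energy
    p = np.sqrt(np.maximum(W**2 - ME**2, 0.0))
    return p * W * (W0 - W) ** 2 * (1.0 + slope * W)

# Relative distortion induced by a typical 0.5% MeV^-1 weak-magnetism-like
# slope over the 32P spectrum (endpoint kinetic energy about 1.71 MeV):
E = np.linspace(0.05, 1.70, 200)
ratio = gt_spectrum(E, 1.71, 0.005) / gt_spectrum(E, 1.71, 0.0)
```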
Experimental setup The multi-wire gas electron tracker with an electron energy detector, named the miniBETA spectrometer, was built for studying experimental effects that must be controlled in β spectrum shape measurements. The current version of miniBETA is a combination of a plastic scintillator, serving as energy detector and trigger source, and a hexagonally structured multi-wire drift chamber (MWDC) filled with a light gas mixture of helium and isobutane at a pressure of 600 mbar. The gas electron tracker is responsible for the efficient identification of electrons emitted from β decay sources. Having precise information about the electron track, it is possible to identify electrons backscattered from the energy detector and to eliminate those not originating from the β source. Additionally, the coincidence condition between the signals from the gas tracker and the energy detector suppresses background from the gamma emission typically accompanying β decays. The low-mass construction of the MWDC and its optimized geometry help reduce background from secondary radiation created inside the chamber by collisions with wires and mechanical support structures. The hexagonal cell configuration was chosen to assure maximum transparency of the detector in order to minimize electron scattering on the wires. Inside the MWDC the electrons are traced in three dimensions: the measured drift time is used to determine the XY-coordinates of the closest approach of the electron track to the anode wires, while the Z-coordinates are determined by charge division on the signal wires. The energy detector is made of a plastic scintillator embedded in the gas detector and is connected via a light guide to four photomultiplier tubes (PMTs) installed outside the chamber. The digitized pulse height of the PMT signals carries the electron energy information. Additionally, the PMT signals provide the time reference for the drift time measurement. In Fig. 1 the experimental setup, the hit illumination of the chamber, the gain map of the scintillator and a sample of the measured 207Bi spectrum are presented. More information about the spectrometer can be found in Refs. [5-8]. The beta spectrum shape measurements were performed for the pure Gamow-Teller transitions 114In → 114Sn and 32P → 32S. The miniBETA spectrometer was fully modelled in MC simulations [7]. The total experimental and simulated β spectra of 207Bi used for online energy calibration, together with the corresponding measurements of 114In and 32P (assuming b_F and b_WM to vanish), are shown in Fig. 2. In the 207Bi spectrum, the peaks of the measured conversion electrons from the K, L and M shells are reproduced by the simulation at the 10^−2 level. The comparison of the measured and simulated 114In spectra exhibits a slope difference at the 10^−2 level in the energy region 730-1700 keV, which can be explained by a non-zero b_WM term. In the case of 32P, the energy region of 1150-1570 keV was explored.
Weak magnetism extraction from β spectrum shapes In both cases, the MC simulations and the calibration with the 207Bi conversion spectrum were used to obtain a complete detector response. Consequently, a simulated beta spectrum can be generated for a particular choice of b_WM by convolving the corresponding theoretical β spectrum [2] with the response. Hence, by means of a minimization algorithm, a central value for b_WM can be estimated for which a 'best fit' with the experimental spectrum is observed. The preliminary result of this procedure is b_WM/Ac = 9.2 ± 1.2 (stat) for 114In and b_WM/Ac = −2.5 ± 5.4 (stat) for 32P, as demonstrated in Fig. 3. The main contribution to the systematic errors comes from the gain map and energy resolution uncertainties. The maximum systematic error is currently estimated to be around 5. The detailed systematic error analysis is still ongoing. Summary and outlook Experimental studies of the beta spectrum shape with the lowest possible uncertainty are essential to constrain and validate the theoretical predictions. Weak magnetism is a part of the SM that is still poorly known, and new measurements of this quantity with an accuracy of about 10% or better are welcome. The use of a 3D gas electron tracker with a plastic scintillator for beta spectrum shape measurements yields promising results. The measurements for the 114In and 32P isotopes are completed. Providing sensitive electron tracking, extensive 2D energy calibration and MC simulation, miniBETA allowed, for the very first time, the extraction of the weak magnetism term from the 114In and 32P β spectrum shapes. The detailed systematic error analysis is in progress. Figure 1. The miniBETA spectrometer with beta sources located in the middle of the chamber: (a) photograph of the setup, (b) illumination of the chamber with the cell colors indicating the density of hits, (c) gain map of the plastic scintillator and (d) sample of a measured 207Bi spectrum used for calibration. Figure 2. A comparison of the recorded experimental and simulated spectra of 207Bi conversion electrons with 114In (left) and 32P (right). Figure 3. Results for b_WM/Ac extracted from the 114In spectrum [7] (left) and from the 32P spectrum (right). The systematic error is not included.
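The extraction procedure described in this section (folding a theoretical spectrum for a trial b_WM with the detector response and minimizing chi-square against the measured histogram) can be sketched as follows. This is a minimal illustration; the function names, the response matrix and the floating normalization are our assumptions, not the collaboration's analysis code:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_bwm(measured, theory_spectrum, response):
    """Estimate b_WM/Ac by chi-square minimization: for each trial b,
    fold the theoretical spectrum with the detector response matrix
    (observed bins x true bins) and compare with the measured counts."""
    sigma2 = np.maximum(measured, 1.0)  # Poisson variance estimate

    def chi2(b):
        predicted = response @ theory_spectrum(b)
        # Float the overall normalization analytically.
        scale = (measured * predicted / sigma2).sum() / (predicted**2 / sigma2).sum()
        return (((measured - scale * predicted) ** 2) / sigma2).sum()

    return minimize_scalar(chi2, bounds=(-20.0, 20.0), method="bounded").x
```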
2,304.4
2023-09-01T00:00:00.000
[ "Physics" ]
An Artificial Measurements-Based Adaptive Filter for Energy-Efficient Target Tracking via Underwater Wireless Sensor Networks We study the problem of energy-efficient target tracking in underwater wireless sensor networks (UWSNs). Since the sensors of UWSNs are battery-powered, it is impracticable to replace the batteries when they are exhausted. This means that the battery life affects the lifetime of the whole network. In order to extend the network lifetime, it is worth reducing the energy consumption on the premise of sufficient tracking accuracy. This paper proposes an energy-efficient filter that implements the tradeoff between communication cost and tracking accuracy. Under the distributed fusion framework, local sensors do not send their low-value information to the fusion center if their measurement residuals are smaller than a pre-given threshold. In order to guarantee the target tracking accuracy, artificial measurements are generated to compensate for the unsent real measurements. Then, an adaptive scheme is derived to take full advantage of the artificial measurements-based filter in terms of energy efficiency. Furthermore, a computationally efficient optimal sensor selection scheme is proposed to improve tracking accuracy on the premise of employing the same number of sensors. Simulation demonstrates that our scheme has superior advantages in the tradeoff between communication cost and tracking accuracy: it saves much energy while losing little tracking accuracy, or improves tracking performance with little additional energy cost. Introduction More than 70% of the earth's surface is covered by seas and oceans. Seas and oceans are mysterious and attractive to human beings because of their huge amount of unexploited resources. Underwater wireless sensor network (UWSN) technologies are developing gradually to enhance our abilities to discover resources in aquatic environments [1-4]. UWSNs are three-dimensional (3D) networks, and the communication between underwater sensors relies on acoustic waves. UWSNs have a broad range of applications such as environmental monitoring, undersea exploration, disaster prevention, and distributed tactical surveillance. We study the problem of accurately and energy-efficiently tracking a maneuvering target via UWSNs. UWSNs are an extension of wireless sensor networks (WSNs), which are applied in terrestrial environments [5-7]. One of the significant differences [8] between UWSNs and WSNs is the cost: since underwater sensors need to work in the extreme underwater environment, they are much more expensive than terrestrial sensors. Underwater sensors use acoustic waves, while terrestrial sensors use radio-frequency waves, and the energy consumption for communication between underwater sensors is higher than for terrestrial sensors. Moreover, the sensors of UWSNs are battery-powered, and it is impracticable to replace the batteries when exhausted. This means that the battery life affects the lifetime of the whole network. Compared with the energy cost of sensing and processing, communication cost dominates the whole energy cost according to the energy model shown in [9]. Thus, in this paper, we improve the energy efficiency of target tracking by cutting down less helpful communications between local sensors and the fusion center. This paper addresses the issue of implementing the tradeoff between the communication rate and target tracking accuracy: local sensors need to figure out whether to send their information to the fusion center or not.
This idea was inspired by research on remote state estimation under communication constraints [10-13]. When a local sensor obtains a measurement of the target, it needs to determine whether the measurement residual is large enough: a large measurement residual means the new measurement has enough value to be sent to the fusion center. If the measurement residual is larger than the threshold, the fusion center receives the information from the local sensor and works as usual. If the measurement residual is smaller than the threshold, the fusion center receives nothing from the local sensor and generates an artificial measurement to approximate the unsent one, which makes full use of the information in the unsent measurement. Then, we derive the corresponding artificial measurements-based recursive form of the filter. A preliminary version of the present paper appeared as a conference paper in [14]. The current version extends the conference version by providing an adaptive method for determining proper criteria that tell local sensors whether their measurements have enough value to be sent to the fusion center. Moreover, a computationally efficient optimal sensor selection scheme is proposed to improve tracking accuracy on the premise of employing the same number of sensors. The main contributions of this paper are threefold. First, we derive an artificial measurements-based filter, which has advantages in energy efficiency. Second, in order to exploit the advantages of our filter, we propose an adaptive method for determining proper criteria, which results in our artificial measurements-based adaptive filter. Finally, an optimal sensor selection scheme is proposed to further improve the energy efficiency. The rest of the paper is organized as follows. In Section 2, we discuss the related work in the area of target tracking in UWSNs. In Section 3, we formulate the problem and introduce some preliminaries. In Section 4, we introduce our artificial measurements-based adaptive filter. In Section 5, we present our simulation results to verify our adaptive filter and discuss its characteristics. Finally, in Section 6, we provide the conclusions. Related Work Target tracking is a focused application for underwater defense systems. The intended targets to be tracked are unmanned underwater vehicles (UUVs) and submarines. As an emerging research interest, only a few works about target tracking in UWSNs can be found in the literature. In early work, a simple target tracking method for 3D underwater environments utilizing only measurement information was presented by Isbitiren et al. [15]. Based on the time of arrival of the echoes from the target after transmitting acoustic pulses from the sensors, the ranges of the nodes to the target are determined, and trilateration is used to obtain the location of the target. This method tracks the target based only on current measurements, which is detrimental to achieving high target tracking accuracy; in sparse networks, it results in tracking failure if not enough sensors are involved. In order to get better target tracking performance, Wang et al. [16] proposed an algorithm that combines the interacting multiple model (IMM) with the particle filter (PF) to cope with uncertainties in target maneuvers. To realize energy-effective target tracking, Yu et al. [17] provided an algorithm named wake-up/sleep (WuS), which increases the energy efficiency of each sensor by using a distributed architecture.
At each time step, WuS wakes up the sensors that have an opportunity to detect the target and sends those that do not to sleep. However, it wastes energy by employing all candidate sensors without selecting the fittest among them. Later, Zhang et al. [18] proposed an adaptive sensor scheduling scheme which saves energy by changing the sampling interval; the sampling interval is varied according to whether the tracking accuracy is satisfactory at each time step. The main distinction between this paper and reference [18] is that they improve energy efficiency along different dimensions: this paper focuses on the tradeoff between communication rate and tracking accuracy at each time step, which is a spatial dimension, whereas Zhang et al. [18] focused on the tradeoff between sampling interval and tracking accuracy, which is a temporal dimension. Recently, Zhang et al. [19] studied the effect of sensor topology on target tracking in UWSNs with quantized measurements. They proposed a sensor selection method which selects the optimal topology by minimizing the posterior Cramer-Rao lower bound (PCRLB). This method improved the target tracking performance under the premise of employing the same number of sensors; however, the computation of the PCRLB is complicated. In our work, we use the trace of the predicted estimate covariance to select the optimal sensor group, which is more convenient than the PCRLB. Problem Formulation This section formulates the problem of single-target tracking via distributed UWSNs. The issues to be covered include the system model, distributed fusion architectures and measurement residual-based sensor scheduling. For ease of reference, we list the notations that will be used frequently in Table 1. System Model We consider the conventional target motion model, which is defined as X_k = F_k X_{k−1} + w_{k−1}, (1) where X_k denotes the target state (positions and velocities) at time k, F_k is the state transition matrix at time k, and w_{k−1} is the process noise with zero-mean white Gaussian distribution N(0, Q_{k−1}). The UWSN consists of N wireless acoustic sensors floating at different seawater layers. The positions of the sensors in Cartesian coordinates are denoted by s_i = (x_{s_i}, y_{s_i}, z_{s_i}), i = 1, ..., N. The sensors measure the distance to the target by transmitting acoustic pulses (pings) and calculating the time of arrival (ToA) of the pings and echoes. The measurement model of sensor s_i at time k is given by Z^i_k = h^i_k(X_k) + v^i_k, (2) where h^i_k(X_k) is the measurement function and v^i_k is the measurement noise with zero-mean white Gaussian distribution N(0, R^i_k). The measurement function is given by h^i_k(X_k) = sqrt((x_k − x_{s_i})^2 + (y_k − y_{s_i})^2 + (z_k − z_{s_i})^2), (3) where (x_k, y_k, z_k) is the location of the target at time k. The corresponding Jacobian matrix H^i_k of the measurement function h^i_k(·), a useful approximation technique from the well-known extended Kalman filter (EKF), is given by H^i_k = [(x_k − x_{s_i})/d, 0, (y_k − y_{s_i})/d, 0, (z_k − z_{s_i})/d, 0], (4) where d = sqrt((x_k − x_{s_i})^2 + (y_k − y_{s_i})^2 + (z_k − z_{s_i})^2) is the distance between the target and sensor i. Distributed Fusion Architectures At the same time instant, different local sensors have different measurements of the same target. The information coming from different sensors must be fused together to acquire more accurate estimates of the target states. There are two types of fusion architectures, distributed and centralized; distributed fusion architectures have advantages over centralized architectures in lower communication and processing costs.
Therefore, distributed fusion architectures are preferable for application in resource-limited UWSNs. Figure 1 shows the normal structure of the distributed fusion system. Local sensors sample the measurements (Z^1_k, Z^2_k, ..., Z^N_k) of the target periodically. Then, based on the new measurements and past information (X̂_{k−1}), the local sensors obtain local estimates (X̂^1_k, X̂^2_k, ..., X̂^N_k) and transmit them to the fusion center. Finally, the fusion center collects all local estimates and fuses them together to get the fusion estimate (X̂_k). The fusion estimate is sent back to the local sensors to predict future target states. Measurement Residual-Based Sensor Scheduling For the purpose of saving communication costs, we want local sensors to think carefully before sending their local estimates to the fusion center periodically as usual. If some local estimates have low value in updating the target state estimate, we should leave them at the local sensors to reduce energy costs. The value of a local estimate can be measured by the measurement residual before we calculate the local estimate: Z̃^i_k = Z^i_k − Ẑ^i_{k|k−1}, where Z̃^i_k is the measurement residual and Ẑ^i_{k|k−1} is the predicted measurement. This makes sense since the larger the measurement residual is, the larger the difference between the measurement-updated estimate X̂_k and the predicted estimate X̂_{k|k−1} = F_k X̂_{k−1} will be. A small measurement residual means the measurement can change the predicted estimate X̂_{k|k−1} only a little, so the fusion center can simply keep the predicted estimate X̂_{k|k−1} or do some approximation. The fusion center should formulate criteria to tell local sensors whether their estimates are needed or not. We adopt the standard proposed in [11] and make some changes in the formulation for later convenience. We define an indicator function as λ^i_k = 1 if |Z̃^i_k| > δ E^i_k, and λ^i_k = 0 otherwise, (7) where δ is the normalized threshold and the weight E^i_k is the innovation standard deviation, E^i_k = (H^i_k P_{k|k−1} (H^i_k)^T + R^i_k)^{1/2}, (8) where H^i_k is the Jacobian matrix of the measurement function evaluated at the predicted estimate X̂_{k|k−1} and P_{k|k−1} is the error covariance of X̂_{k|k−1}. Once a local sensor i obtains a fresh measurement at time k, it calculates the corresponding indicator value λ^i_k from Equation (7). If λ^i_k = 0, sensor i needs to do nothing but keep silent to save energy. If λ^i_k = 1, sensor i calculates the local estimate and sends it to the fusion center. Figure 2 illustrates how these indicator values work. It should be noticed that the measurement residual-based fusion framework includes a feedback path from the fusion center to the local sensors; however, this feedback path adds only negligible energy consumption at the local sensors, because the cost of receiving is much smaller than the cost of transmitting according to [9]. Artificial Measurement Model Even if the fusion center did not receive local estimates from some local sensors, it obtained the useful information that their measurement residuals are smaller than the threshold. For instance, if the fusion center did not get a packet from local sensor i at time k, then λ^i_k = 0, i.e., |Z^i_k − Ẑ^i_{k|k−1}| ≤ δ E^i_k. Based on the well-known Bayesian formula, the conditional density f(Z^i_k | λ^i_k = 0, X̂^i_{k|k−1}) is then simply a truncated normal distribution (15), supported on [Ẑ^i_{k|k−1} − δ E^i_k, Ẑ^i_{k|k−1} + δ E^i_k]. We do not want to drop this information about Z^i_k. Thus, we define an artificial measurement model as Z̄^i_k = h^i_k(X_k) + u^i_k, (16) where Z̄^i_k is the artificial measurement and u^i_k is a zero-mean measurement noise. This model can be regarded as a measurement model of the real measurement, and we want to use the artificial measurement to approximate the real measurement.
Equation (15) can be rewritten as Z^i_k = Ẑ^i_{k|k−1} + ξ^i_k, (17) and it is obvious that Equation (17) matches our artificial measurement model (16) well. If we let Ẑ^i_{k|k−1} stand as the realization of our artificial measurement, the measurement noise u^i_k has the same distribution as ξ^i_k. According to the characteristics of the truncated normal distribution, the variance of u^i_k can be calculated as R̄^i_k = (E^i_k)^2 (1 − 2δφ(δ)/(2Φ(δ) − 1)), (18) where φ(·) is the standard normal density and Φ(·) is the standard Φ-function, defined by Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−t^2/2} dt. (19) So our artificial measurement model, Equation (16), has the unique realization Ẑ^i_{k|k−1} and a truncated normal noise u^i_k. Artificial Measurement Based Filter Based on the previous discussion of our artificial measurement model, we can derive our artificial measurement-based filter as follows. (1) Predict: assume that we have already obtained the fusion estimate X̂_{k−1} and the corresponding error covariance P_{k−1}. The predicted state estimate and the corresponding error covariance at time k are X̂_{k|k−1} = F_k X̂_{k−1} and P_{k|k−1} = F_k P_{k−1} F_k^T + Q_{k−1}. (2) Update: if λ^i_k = 1, local sensor i updates the estimate and the corresponding error covariance as in the EKF. Measurement residual: Z̃^i_k = Z^i_k − h^i_k(X̂_{k|k−1}); covariance of the measurement residual: S^i_k = H^i_k P_{k|k−1} (H^i_k)^T + R^i_k, where H^i_k = H^i_k(X̂_{k|k−1}), referring to Equation (4); optimal Kalman gain: K^i_k = P_{k|k−1} (H^i_k)^T (S^i_k)^{−1}; updated state estimate and updated estimate covariance: X̂^i_k = X̂_{k|k−1} + K^i_k Z̃^i_k and P^i_k = (I − K^i_k H^i_k) P_{k|k−1}. (28) If λ^j_k = 0, the fusion center will not receive the local estimate of sensor j, and it updates the artificial local estimate and the corresponding covariance itself with the help of the artificial measurement model, as follows. Measurement residual: Z̄^j_k − h^j_k(X̂_{k|k−1}), which vanishes for the realization Z̄^j_k = Ẑ^j_{k|k−1}; covariance of the measurement residual: S̄^j_k = H^j_k P_{k|k−1} (H^j_k)^T + R̄^j_k, obtained from the variance in Equation (18); optimal Kalman gain: K̄^j_k = P_{k|k−1} (H^j_k)^T (S̄^j_k)^{−1}; updated state estimate and updated estimate covariance: X̂^j_k = X̂_{k|k−1} and P^j_k = (I − K̄^j_k H^j_k) P_{k|k−1}. (34) Finally, the fusion center fuses these local estimates and artificial local estimates together to get the fusion estimate (35), using the fusion algorithm proposed in [20]. Figure 3 shows the structure of our artificial measurements-based distributed fusion filter. The red part indicates that the fusion center makes full use of the information λ^1_k = 0 and generates the artificial local estimate to compensate for the unsent real local estimate. Adaptive δ Determination The normalized threshold δ plays a key role in our artificial measurements-based filter. The indicator function in Equation (7), which is a function of δ, gives local sensors a criterion to decide whether to send their local estimates to the fusion center. The larger δ is, the lower the probability that local sensors send local estimates. That means δ affects the communication frequency of the local sensors; in other words, δ determines the energy consumption of the local sensors. In addition, the estimate covariance in Equation (34) is a function of δ, which means δ affects not only the energy consumption but also the estimation accuracy of the target state. Both energy cost and target tracking accuracy are important in underwater target tracking, so in this section we propose an adaptive δ determination method to make a better tradeoff between the communication cost and the target tracking accuracy. From Equations (7) and (8), we can obtain the probability distribution of the indicator: p(λ^i_k = 0) = 2Φ(δ) − 1 and p(λ^i_k = 1) = 2(1 − Φ(δ)). Consequently, the expectation of the communication rate of local sensor i at time k is E(λ^i_k) = 2(1 − Φ(δ)). In this paper, we use the number of packets sent to the fusion center to measure the energy cost at the local sensors, which is defined as Energy(δ).
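Both closed-form quantities above, the artificial-noise variance of Equation (18) and the transmit probability 2(1 − Φ(δ)), are cheap to evaluate numerically. A minimal Python sketch (ours, assuming the truncated-normal expressions as reconstructed above):

```python
import numpy as np
from scipy.stats import norm

def artificial_noise_variance(E_ik, delta):
    """Variance of the artificial-measurement noise u (Eq. (18)):
    the variance of N(0, E_ik^2) truncated to [-delta*E_ik, delta*E_ik]."""
    shrink = 1.0 - 2.0 * delta * norm.pdf(delta) / (2.0 * norm.cdf(delta) - 1.0)
    return E_ik ** 2 * shrink

def expected_comm_rate(delta):
    """Expected fraction of time steps a sensor transmits: 2*(1 - Phi(delta))."""
    return 2.0 * (1.0 - norm.cdf(delta))

# Example: delta = 1 transmits ~31.7% of the time under this Gaussian model,
# and the artificial-noise variance shrinks to ~0.29 * E_ik^2.
```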
From Equations (28) and (34), we can calculate the expectation of the estimate covariance of local sensor i at time k as E(P^i_k) = p(λ^i_k = 1) P^i_{k,λ=1} + p(λ^i_k = 0) P^i_{k,λ=0}; collecting the δ-dependent part, we rewrite this expectation in terms of a quantity Error(δ), and it is clear that the estimate covariance increases as Error(δ) increases. Equations (39) and (41) formulate how δ affects the energy costs of the local sensors and the target tracking accuracy. For the purpose of selecting a proper δ during target tracking missions, we define an objective function as J(δ) = Energy(δ) + α_k Error(δ), (43) where α_k is a coefficient that adjusts the weight of the tracking error. The optimal δ is determined by δ* = arg min_δ J(δ). (44) Since we should guarantee target tracking performance first, the coefficient α_k should be large when the tracking performance is bad; conversely, α_k should be small if the tracking accuracy meets our demand. In this paper, we use the trace of the estimate covariance to measure the tracking performance, and α_k is determined from the ratio of Θ_k to Θ_r (45), where Θ_k is the trace of P_k in Equation (35) and Θ_r is a pre-given reference value. It is not easy to obtain an analytical solution of Equation (44); therefore, we provide an efficient numerical way to find the proper δ. Since δ plays the role of a number of standard deviations of the standard normal distribution, we select δ from 0 to 3 according to the well-known Pauta (3σ) criterion. We then take n uniformly spaced samples from [0, 3] and form the set δ ∈ {L_1, ..., L_{n−1}, L_n}. (46) It should be noticed that Energy(L_i) and Error(L_i) can be calculated off-line to improve the online computational efficiency. Optimal Sensor Group Selection Assume that the filtering results X̂_k and P_k are given at time k. Then the target position at time k + 1 can be predicted by Equation (1), and the distance d^i_{k+1} from sensor i to the predicted target position can be calculated. Sensor i will have a chance to track the target and become a candidate sensor at time k + 1 if d^i_{k+1} is smaller than its sensing range. However, in 3D networks we need four sensors to locate the position of a target [15], which means that adopting more than four sensors is not worthwhile if we consider their energy consumption. Thus, we select the best four sensors if there are more than four candidate sensors; certainly, we employ all the candidate sensors when their number is less than or equal to four. We previously proposed a posterior Cramer-Rao lower bound (PCRLB)-based sensor selection scheme for particle filters in [21], which calculated the PCRLBs of different sensor groups to evaluate how they contribute to the tracking performance. In this paper, however, the fusion estimate covariance is given in Equation (35), which means we can evaluate how sensor groups contribute to the tracking performance through P_{k+1}; this is more convenient than the PCRLB. Given a set of N_c candidate sensors C = {c_1, c_2, ..., c_{N_c}} at time k + 1, the sensor selection problem can be formulated as G* = arg min_G tr(P_{k+1}(G)), (48) where G = {c_{g_1}, c_{g_2}, c_{g_3}, c_{g_4}} stands for a sensor group selected from the set {c_1, c_2, ..., c_{N_c}}. Equation (48) says that the sensor group that minimizes the trace of the fusion estimate covariance is the best one. Since the ranking of the traces of the sensor groups in Equation (48) does not change with δ, P^i_{k+1} is given by Equation (28) for simplified calculation. The exhaustive search is the most direct way to find the optimal sensor group, and there are N_c!/(4!(N_c − 4)!) groups. However, when N_c is large, the number of groups increases rapidly and the exhaustive search carries a heavy computational burden.
Therefore, we use the generalized Breiman, Friedman, Olshen and Stone (GBFOS) algorithm proposed in [22] to find the optimal sensor group. Initially, N = N_c, and the GBFOS algorithm keeps finding the optimal (N − 1)-element subset of the last optimal N-element set until N = 4. Thus, GBFOS needs to try only N_c + (N_c − 1) + ... + 5 = N_c(N_c + 1)/2 − 10 sensor groups to find its answer. Table 2 lists some numerical examples comparing the exhaustive search with GBFOS; it is obvious that GBFOS reduces much more of the computational burden as N_c becomes larger and larger. We should also mention another search algorithm, called the greedy search, which is the reverse of GBFOS: the greedy algorithm keeps taking one optimal sensor out of the candidate sensors until four sensors have been taken out. Thus, the greedy search needs to try (4N_c − 6) sensor groups to find its answer, which seems better than GBFOS when N_c is larger than 8. However, it is infeasible for our sensor selection scheme, because there is no optimal single sensor if we choose only one: as noted above, at least four sensors are needed to locate a target in a 3D network. The flow chart of our artificial measurements-based adaptive filter is shown in Figure 4. It shows how information flows between the fusion center and the local sensors. The dashed line means that the information flow from the local sensors to the fusion center is absent when λ = 0, and artificial measurements are introduced to compensate for this missing information flow. Simulation Scenario We apply our artificial measurements-based filter to a target tracking mission for verification. In order to obtain more realistic performance measures, the target is assumed to move in a 3D underwater environment. The monitored field is 1000 m × 1000 m × 1000 m, and the sensors are deployed as a 5 × 5 × 5 uniform grid. All local sensors are identical; their detection radius and measurement covariance are 300 m and 10 m^2, respectively. The initial state of the target is assumed to be [300, 10, 300, 2, 10, 2]^T. From 1 s to 40 s, it moves at constant velocity (CV); from 41 s to 80 s, it makes a coordinated turn (CT) with turn rate 0.052 rad/s; from 81 s to 100 s, it again moves at CV. The CV and CT motions can be formulated as X_k = F_CV X_{k−1} + w_k and X_k = F_CT X_{k−1} + w_k, where F_CV and F_CT are the state transition matrices and w_k is the process noise with zero-mean white Gaussian distribution N(0, Q_k); q denotes the intensity of the process noise in Q_k. For an underwater target, we consider that it follows the CT model only in the xoy plane, while it follows a CV model in the Z-axis direction. Performance Verification Simulation results are averaged over 100 Monte-Carlo runs. We adopt the root mean square error (RMSE) to assess the accuracy of target tracking and the number of packets sent from the local sensors to indicate the energy consumption. Performance Comparison In our simulation, we compare the performances of the conventional target tracking scheme, corresponding to δ = 0, and our artificial measurements-based energy-efficient target tracking scheme with δ = 1, to see how our algorithm achieves the goal of energy efficiency. Figure 5 shows the real trajectory of the target and the performances of the different tracking schemes. "All real measurements" means that the normalized threshold is equal to 0 and the fusion center has all local estimates; "containing artificial measurements" means that the normalized threshold is not equal to 0 (in this case δ = 1) and the fusion center updates the estimate with the help of artificial measurements. Both target tracking schemes can successfully track the target with high accuracy.
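For reproducibility of the scenario above, the standard discrete-time CV and coordinated-turn transition matrices can be written down explicitly. This is a sketch under our assumptions: the state ordering [x, vx, y, vy, z, vz] and a 1 s sampling interval, consistent with the initial state given above.

```python
import numpy as np
from scipy.linalg import block_diag

def f_cv(T):
    """CV transition for one axis, sub-state [position, velocity]."""
    return np.array([[1.0, T], [0.0, 1.0]])

def f_ct(T, w):
    """Standard coordinated-turn transition in the x-y plane with turn
    rate w, for the sub-state [x, vx, y, vy]."""
    s, c = np.sin(w * T), np.cos(w * T)
    return np.array([[1.0, s / w,         0.0, -(1.0 - c) / w],
                     [0.0, c,             0.0, -s],
                     [0.0, (1.0 - c) / w, 1.0, s / w],
                     [0.0, s,             0.0, c]])

T, w = 1.0, 0.052                             # sampling interval, turn rate
F_CV = block_diag(f_cv(T), f_cv(T), f_cv(T))  # CV on all three axes
F_CT = block_diag(f_ct(T, w), f_cv(T))        # CT in the xoy plane, CV along z
```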
The detailed tracking error and communication costs of the two schemes are displayed in Figure 6a,b, respectively. They tell us that our target tracking scheme can save much energy (about 80%) while losing only a limited amount of tracking accuracy (about 40%). This is a worthwhile trade-off in the pursuit of saving energy. Impacts of δ We also want to know how the normalized threshold δ affects the target tracking performance and the communication costs of our algorithm, so we change δ from 0 to 2 with an increment of 0.1 at each step. Figure 7a illustrates the target tracking performance with different δ, and we mark some points for further discussion. It is clear that the target tracking error increases very slowly for small δ; however, it increases faster and faster as δ grows. Compared with δ = 0, it increases by only 5.6% when δ = 0.5 and by 41.2% when δ = 1, and the error becomes extremely high (more than 300%) when δ = 2. Correspondingly, the impact of different δ on the communication costs is displayed in Figure 7b. Inversely, the communication rate falls rapidly when δ is small and changes slowly when δ is larger than 1.5; the communication cost decreases by 46.5% when δ = 0.5 and by 79% when δ = 1. From Figure 7, we find that our algorithm has a nice property with respect to δ, which shows the potential to save much energy while losing only a little tracking accuracy. Take δ = 0.5 as a powerful example: we save 46.5% of the energy while losing only 5.6% of the tracking accuracy. Therefore, the artificial measurements-based adaptive filter is proposed to find the proper δ. Performance of Adaptive Filter Since the normalized threshold δ has opposite effects on target tracking accuracy and energy consumption, it is important to find the proper δ to exploit the advantages of our artificial measurements-based filter. The core idea is to reduce the energy consumption as much as possible under the premise of sufficient target tracking performance. In this paper, the tracking performance is represented by α_k in Equation (45): a large α_k means the tracking error is too large and we should put more effort into decreasing it; in contrast, a small α_k means the tracking performance meets our demand and we should pay attention to reducing the energy cost. Θ_r in Equation (45) is a pre-given reference value representing our demand for target tracking performance. In order to show the superiority of our artificial measurements-based adaptive filter, we change Θ_r from 10 to 100 with an increment of 10 at each step and set n in Equation (46) to 30. Figure 8a displays the target tracking performance with different Θ_r. It is clear that the target tracking error increases slowly with Θ_r, which means our adaptive filter can guarantee the target tracking performance even with a large Θ_r; in contrast, the target tracking error becomes extremely high with a large δ in Figure 7a. Correspondingly, the impact of different Θ_r on the communication costs is shown in Figure 8b. The communication cost decreases observably as Θ_r increases, even for small Θ_r, which means our adaptive filter can effectively reduce the energy consumption. The marked points in Figure 8 illustrate that our adaptive filter can exploit its advantages in energy saving while guaranteeing good target tracking performance. For example, compared with δ = 0, we save 29.1% of the energy while losing only 2.11% of the tracking accuracy when Θ_r = 10, and we save 74.35% of the energy while losing 32.9% of the tracking accuracy when Θ_r = 100.
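A minimal sketch of the adaptive threshold selection evaluated above (ours; the linear form α_k = Θ_k/Θ_r is an assumption: the text only specifies that α_k grows with the ratio of Θ_k to the reference Θ_r):

```python
import numpy as np

GRID = np.linspace(0.0, 3.0, 30)  # n = 30 samples over [0, 3] (Pauta criterion)

def choose_delta(energy_tab, error_tab, theta_k, theta_r):
    """Pick delta minimizing J(delta) = Energy(delta) + alpha_k * Error(delta).
    energy_tab and error_tab hold Energy(L_i) and Error(L_i) precomputed
    offline on GRID, as suggested in the text."""
    alpha_k = theta_k / theta_r  # assumed form of Eq. (45)
    J = np.asarray(energy_tab) + alpha_k * np.asarray(error_tab)
    return GRID[np.argmin(J)]
```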
By setting different Θ_r, the artificial measurements-based adaptive filter can achieve varying degrees of energy saving. Performance of Sensor Group Selection Sensor group selection is a feasible method to improve the energy efficiency of target tracking in UWSNs. We compare the performances of three sensor group selection schemes to support this opinion; here we set Θ_r = 40. The target tracking errors and energy consumptions of the three schemes are displayed in Figure 9a,b, respectively. Selecting the best four sensors means that the sensor group is optimized using Equation (48); in contrast, selecting the worst four sensors means that the sensor group is generated by the reverse criterion, G = arg max_G tr(P_{k+1}(G)); selecting a random four sensors means that the sensor group is generated at random. Obviously, the performance of selecting the worst four sensors is much worse than that of the other schemes; this must be avoided in terms of both energy efficiency and target tracking accuracy. The sensor group optimized by our sensor selection scheme has the best performance in both target tracking and energy saving. The average tracking accuracy and the total number of packets are listed in Table 3. Compared with the random sensor group, selecting the best four sensors improves the tracking accuracy by 19.61% and saves 8.65% of the energy. Moreover, in some cases we want to improve the tracking accuracy with less additional energy consumption. This goal can be realized by selecting more sensors and using our artificial measurements-based adaptive filter. In order to have more candidate sensors, the sensors are deployed as a 7 × 7 × 7 uniform grid, and the number of selected sensors N_s at each step is changed from 4 to 11. We compare the performances of the artificial measurements-based adaptive filter with Θ_r = 40 and the conventional all-real-measurements filter. The averaged target tracking errors and communication costs of the two schemes are shown in Figure 10a,b, respectively. It is clear that selecting more sensors effectively improves the target tracking accuracy for both schemes. However, the energy consumption of the artificial measurements-based adaptive filter increases much more slowly than that of the conventional all-real-measurements filter, which means our target tracking scheme can improve the target tracking performance with less additional energy consumption. From the marked points in Figure 10, the energy cost of our scheme with N_s = 10 is 4.25% less than that of the conventional scheme with N_s = 4, while the target tracking error of our scheme with N_s = 10 is 28.1% lower; that is, our scheme can consume energy similar to the conventional scheme while achieving much better tracking performance. In a dense sensor network, the exhaustive search needs to try too many cases to find the best sensor group, which deteriorates the real-time performance of our optimal sensor group selection scheme. Hence, we use the GBFOS algorithm to reduce the number of cases and improve the computational efficiency of the scheme. The numbers of cases that the exhaustive search and GBFOS need to try to find the best sensor group are plotted in Figure 11 (here we set N_s = 4). Compared with the exhaustive search, the GBFOS algorithm reduces the number of cases by about two orders of magnitude. Hence, the GBFOS algorithm can remarkably improve the real-time performance of our optimal sensor group selection scheme.
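For reference, the GBFOS-style backward elimination described above can be sketched as follows (ours; group_trace is a placeholder for the evaluation of tr(P_{k+1}) on a candidate group):

```python
def gbfos_select(candidates, group_trace, target_size=4):
    """GBFOS backward elimination: starting from all candidate sensors,
    repeatedly drop the one whose removal gives the smallest fused
    covariance trace, until target_size sensors remain."""
    group = list(candidates)
    while len(group) > target_size:
        # Try all subsets with one sensor removed; keep the best one.
        subsets = [group[:i] + group[i + 1:] for i in range(len(group))]
        group = min(subsets, key=lambda g: group_trace(tuple(g)))
    return group
```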
Overall, this work focuses on providing an energy-efficient target tracking algorithm for resource-limited UWSNs. Our artificial measurements-based adaptive filter is easy to implement because it builds on the widely applied Kalman filter structure and has a low online computation demand. Conclusions This paper proposes an artificial measurements-based energy-efficient target tracking scheme for UWSNs. The basic idea of our approach is that, under the distributed fusion framework, we abandon low-value local measurements to decrease the communication rate from local sensors to the fusion center and thereby save energy. We guarantee tracking accuracy by generating corresponding artificial measurements in the fusion center to compensate for the unsent measurements. Then, we derive an adaptive filter based on these artificial measurements. In addition, we propose an optimal sensor selection scheme to further improve energy efficiency. From the simulation results, we can draw the following conclusions. Firstly, the artificial measurements-based adaptive filter can save much energy while losing little tracking accuracy. Secondly, by setting different pre-given reference values Θ r , this adaptive filter can achieve varying degrees of energy saving. Thirdly, our computationally efficient optimal sensor selection algorithm can effectively improve target tracking performance while employing the same number of sensors. Finally, as the number of selected sensors increases, our artificial measurements-based adaptive filter better exploits its advantages in energy efficiency.
7,649.6
2017-04-27T00:00:00.000
[ "Computer Science", "Engineering", "Environmental Science" ]
Character randomized benchmarking for non-multiplicity-free groups with applications to subspace, leakage, and matchgate randomized benchmarking : Randomized benchmarking (RB) is a powerful method for determining the error rate of experimental quantum gates. Traditional RB, however, is restricted to gate sets, such as the Clifford group, that form a unitary 2-design. The recently introduced character RB can benchmark more general gates using techniques from representation theory; up to now, however, this method has only been applied to "multiplicity-free" groups, a mathematical restriction on these groups. In this paper, we extend the original character RB derivation to explicitly treat non-multiplicity-free groups, and derive several applications. First, we derive a rigorous version of the recently introduced subspace RB, which seeks to characterize a set of one- and two-qubit gates that are symmetric under SWAP. Second, we develop a new leakage RB protocol that applies to more general groups of gates. Finally, we derive a scalable RB protocol for the matchgate group, a group that, like the Clifford group, is non-universal but becomes universal with the addition of one additional gate; this is one of the few examples of a scalable non-Clifford RB protocol. In all three cases, compared to existing theories, our method requires similar resources, but either provides a more accurate estimate of gate fidelity, or applies to a more general group of gates. In conclusion, we discuss the potential, and challenges, of using non-multiplicity-free character RB to develop new classes of scalable RB protocols and methods of characterizing specific gates. I. INTRODUCTION Advances in accurate and scalable methods for characterizing the performance of quantum gates are critical for the realization of large-scale reliable quantum computers. Quantum process tomography can, in theory, completely characterize an unknown quantum channel [1][2][3][4], but requires resources that scale exponentially in the number of qubits [4]. In addition, any tomographic approach includes the effect of state preparation and measurement (SPAM) errors, which may be of the same order as the gate error that is being characterized. Randomized benchmarking (RB) [5][6][7][8] provides a method to scalably characterize gates that form a group G with the additional mathematical property of being a "unitary 2-design" [9], most frequently the Clifford group [10][11][12]. Rather than completely characterizing a noise channel, RB determines the average fidelity, a standard measure of gate quality that can be related to other common measures such as entanglement fidelity and process fidelity [13,14] and used to bound the gate error rate [15]. RB works by experimentally measuring the overall fidelity of a random circuit as a function of the number of applied gates U ∈ G and fitting this to an exponential decay. The parameters of the decay then determine the average fidelity of a single gate. Unlike tomographic methods, RB provides an estimate for the average fidelity that is independent of SPAM errors. Standard RB, however, is limited to groups that form a unitary 2-design and whose elements can be efficiently compiled (i.e. decomposed) into elementary gates. This limitation prevents standard RB from characterizing any set of quantum gates that is large enough to be universal for quantum computation [11,12], and also prevents standard RB from characterizing smaller subgroups of 2-designs.
There are ongoing efforts to extend RB to a larger class of gates. Interleaved RB was proposed to characterize individual Clifford group elements [16] as well as the T-gates needed for universal quantum computation [17], but these methods are specific to the gates considered and only produce bounds on the fidelity. Ref. [18] developed a method to extract the fidelity of the dihedral group on one qubit, which is not a unitary 2-design and includes the T gate, while [19] proposed a method of extending dihedral RB to an arbitrary number of qubits. Refs. [20,21] extended this work by deriving decay formulas for the fidelity of random circuits of arbitrary groups, but these formulas involved fitting sums of multiple exponentials, and the decay parameters could not be related to the average fidelity. Ref. [22] introduced character RB to address these limitations, providing a method that only requires fitting a single exponential decay and directly predicts the average fidelity. However, this was only explored for "multiplicity-free" groups, a mathematical limitation on the group's representations (see below). In this work, we provide a generalized derivation of character RB that applies to arbitrary groups. As in [22] but unlike [20,21], this method allows us to directly predict the average fidelity of the gates in G. For non-multiplicity-free groups, our method potentially requires fitting a sum of multiple exponentials rather than a single exponential; however, the number of exponentials is significantly reduced compared to [20,21]. Our primary motivation for this generalization is to improve the recently introduced subspace RB [23], designed to characterize gates that preserve a subspace of the full Hilbert space. Such gates can never form a 2-design, and are never multiplicity-free, necessitating a generalized RB procedure. The original work on subspace RB established decay formulas for the fidelity of certain random circuits but could only give loose bounds on the average fidelity of the gates; our method, in contrast, allows us to directly estimate the average fidelity using a similar number of experiments as the original subspace RB. As an additional application of our method, we present a new protocol for leakage RB [24][25][26], a benchmarking protocol designed to characterize qubits that can "leak" into a non-computational section of the Hilbert space, that reduces the assumptions on the benchmarking group compared to the original [26]. As a final application, we introduce a new scalable RB procedure for the matchgate group [27], a class of quantum circuits that, like Clifford circuits, are efficiently simulatable [27][28][29][30] but are very close to universal [29][30][31][32][33][34][35]. This procedure necessarily requires the full non-multiplicity-free character RB, and the matchgate group represents, along with the dihedral group [19,22], one of the few non-Clifford groups that can be scalably benchmarked. Non-multiplicity-free character RB is a general framework for benchmarking groups of quantum gates. It provides a method for characterizing individual gates, described in Section IV, when the gates are components of operations that form a group. This powerful framework expands the family of groups that can be scalably benchmarked. Scalable benchmarking protocols are necessary to measure gate quality in large quantum processors, particularly to understand the effects of non-local errors such as crosstalk.
While we provide one example of a scalable benchmarking protocol, for the matchgate group, we expect the framework of non-multiplicity-free character RB will lead researchers to develop further scalable examples. We discuss the potential of, and some challenges for, generating further examples. Benchmarking multiple overlapping groups (or subgroups of groups) may allow more accurate error characterization. Our paper is organized as follows. Section II provides mathematical background on the Liouville representation and the definition of average fidelity. Section III outlines the full non-multiplicity-free RB protocol, and proves that it correctly estimates the average fidelity of the gates. The next sections consist of applications. Section IV demonstrates how our method can be used to rigorously estimate the fidelity of gate sets that preserve subspaces, such as those studied in [23]. Section V applies our framework to formulate a leakage RB protocol with fewer assumptions than the current state-of-the-art [26]. Section VI reviews the matchgate group, and describes how our method can be used to derive a scalable RB protocol for this group. We conclude in Section VII with discussion of possible extensions of our work, including some of the challenges. We relegate technical details to appendices, including Appendix B, which provides a self-contained and straightforward proof that generalizations of the Clifford group to qudits for d prime form a unitary 2-design, which may be of general interest. II. MATHEMATICAL PRELIMINARIES In this paper, we use the Liouville representation of quantum channels. In the Liouville representation, given some fixed basis {|i⟩} of our Hilbert space H, a density matrix $\rho = \sum_{ij}\rho_{ij}|i\rangle\langle j|$ is represented by a column vector $|\rho\rangle\rangle = \sum_{ij}\rho_{ij}|i\rangle\otimes|j\rangle$, where we use a double bracket |·⟩⟩ to distinguish elements of H ⊗ H from elements of H. In the case of a pure state ρ = |ψ⟩⟨ψ| we will also sometimes write |ψ⟩⟩ in place of |ρ⟩⟩. A quantum channel Λ is represented by a matrix Λ̂ acting on H ⊗ H. In this representation, matrix multiplication corresponds to composition of channels, matrix-vector multiplication corresponds to applying a quantum channel, $\hat\Lambda|\rho\rangle\rangle = |\Lambda(\rho)\rangle\rangle$, and the inner product of two vectors corresponds to the Hilbert-Schmidt inner product of the corresponding density matrices, $\langle\langle\sigma|\rho\rangle\rangle = \operatorname{Tr}(\sigma^\dagger\rho)$. In particular, if M is a projector onto some measurement outcome, the overlap ⟨⟨M|ρ⟩⟩ gives the probability of measuring M from a state ρ. For a more detailed treatment of the Liouville representation, see [36]. Given a unitary group G acting on our Hilbert space H, the natural action of U ∈ G on density matrices is given by U(ρ) = UρU†. In the Liouville representation, such an operator is represented by Û = U ⊗ U*. The map φ : U → U ⊗ U* forms a representation [37] of the group G on H ⊗ H that we will refer to as the natural representation of G. We can also define the G-twirl of a quantum channel Λ as
$$\hat\Lambda_G = \frac{1}{|G|}\sum_{U\in G}\hat U^\dagger\,\hat\Lambda\,\hat U, \qquad (1)$$
where |G| is the order of the group. As we will see, Λ_G has properties similar to the original channel Λ, but it has a simpler structure that makes it more tractable to study. If a noisy implementation of a gate U results in applying the channel (Λ ∘ U), we would like to characterize how close the noise channel Λ is to the identity. We will focus on one common measure of noise, the average fidelity F_Λ, given by
$$F_\Lambda = \int d\psi\,\langle\langle\psi|\hat\Lambda|\psi\rangle\rangle. \qquad (2)$$
Here, dψ is the unitary-invariant Haar or Fubini-Study measure on H. The integrand ⟨⟨ψ|Λ̂|ψ⟩⟩ is the probability of preserving a state |ψ⟩ after the noise operator Λ has been applied.
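The Liouville-representation bookkeeping above is easy to check numerically. The sketch below is our own illustration (names like liouville and avg_fidelity_mc are ours): it vectorizes density matrices with the |i⟩ ⊗ |j⟩ convention, builds a channel's superoperator from Kraus operators, and Monte Carlo estimates Eq. 2.

```python
import numpy as np

def vec(rho):
    """Row-major vectorization matching |rho>> = sum_ij rho_ij |i>|j>."""
    return rho.reshape(-1)

def liouville(kraus_ops):
    """Superoperator of Lambda(rho) = sum_k K rho K^dag, i.e. sum_k K ⊗ K*."""
    return sum(np.kron(K, K.conj()) for K in kraus_ops)

def avg_fidelity_mc(L, d, samples=20000, seed=0):
    """Monte Carlo estimate of Eq. 2, F = ∫ dψ <<ψ|Λ̂|ψ>>."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(samples):
        v = rng.normal(size=d) + 1j * rng.normal(size=d)
        v /= np.linalg.norm(v)                       # Haar-random pure state
        psi = vec(np.outer(v, v.conj()))
        acc += np.real(psi.conj() @ (L @ psi))
    return acc / samples

# Example: single-qubit depolarizing channel rho -> p*rho + (1-p)*I/2.
p = 0.9
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
Ks = [np.sqrt(p) * I2] + [np.sqrt((1 - p) / 4) * P for P in (I2, X, Y, Z)]
print(avg_fidelity_mc(liouville(Ks), 2))   # exact value is p + (1-p)/2 = 0.95
```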
The average fidelity is then simply the average of this probability over all possible input states. III. THE GENERALIZED CHARACTER RANDOMIZED BENCHMARKING PROCEDURE Let G be the unitary group on H that we wish to benchmark. Let φ : G → L(H ⊗ H) be its natural representation, which decomposes into irreducible representations as $\varphi\cong a_1\varphi_1\oplus\cdots\oplus a_I\varphi_I$, where $a_i\in\mathbb{Z}^+$ is the multiplicity of the irrep φ_i. Let $H\otimes H\cong\bigoplus_i\mathbb{C}^{a_i}\otimes H_i$ be the corresponding decomposition of Hilbert space, such that each φ_i acts nontrivially only on a single copy of H_i. We will make the standard RB assumption that the gate error Λ associated with U ∈ G is independent of U, although this can be relaxed [22,38,39]. Let Ḡ ⊆ G be a subgroup of our unitary group with natural representation $\bar\varphi\cong\bar a_1\bar\varphi_1\oplus\cdots\oplus\bar a_{\bar I}\bar\varphi_{\bar I}$ and corresponding decomposition $H\otimes H\cong\bigoplus_{\bar i}\mathbb{C}^{\bar a_{\bar i}}\otimes\bar H_{\bar i}$; note that there is not in general any relation between Ī and I, or the multiplicities ā_ī and a_i. We choose Ḡ such that for every i ∈ {1, ..., I}, there exists a corresponding ī ∈ {1, ..., Ī} such that $\mathbb{C}^{\bar a_{\bar i}}\otimes\bar H_{\bar i}\subseteq\mathbb{C}^{a_i}\otimes H_i$. One may always choose Ḡ = G, but we will see below that for this procedure to scale with the number of qubits we must choose Ḡ ⊊ G. We denote the character of the irrep φ̄_ī by χ̄_ī. Our RB procedure consists of the following steps: 1. For each i ∈ {1, ..., I}, choose an initial state |ρ_i⟩⟩ and measurement projector |M_i⟩⟩ such that |⟨⟨M_i|P̂_ī|ρ_i⟩⟩| is as large as possible (see Section III C below), where P̂_ī is the projector onto $\mathbb{C}^{\bar a_{\bar i}}\otimes\bar H_{\bar i}$. 2. For a given N, choose unitaries U_0 ∈ Ḡ and U_1, ..., U_N ∈ G randomly and uniformly (note elements can be repeated). Apply the gates (U_1U_0), U_2, ..., U_{N+1} sequentially, where (U_1U_0) is compiled as a single element of G and $U_{N+1} = (U_N\cdots U_1)^{-1}$ is the inversion gate. 3. Measure the projector |M_i⟩⟩. 4. Weight the measurement outcome by the character χ̄_ī(U_0)*. 5. Repeat steps 2-4 many times, to estimate the character-weighted survival probability
$$S_i(N) = \mathbb{E}_{U_0,\ldots,U_N}\big[\bar\chi_{\bar i}(U_0)^*\,\mathrm{Pr}_{U_0,\ldots,U_{N+1}}\big] \qquad (3)$$
for each i, where Pr_{U_0,...,U_{N+1}} is the probability of measuring |M_i⟩⟩ after applying gates U_0, ..., U_{N+1} to |ρ_i⟩⟩, including the effect of gate and SPAM errors. 6. Repeat steps 2-5 for different values of N. 7. Fit each character-weighted survival probability to a function of the form
$$S_i(N) = \sum_{j=1}^{a_i}C_{i,j}\,\lambda_{i,j}^N, \qquad (4)$$
where the C_{i,j} and λ_{i,j} are fitting parameters independent of N. 8. Estimate the average fidelity of the gate error Λ as
$$F_\Lambda = \frac{\sum_{i=1}^{I}\dim(H_i)\sum_{j=1}^{a_i}\lambda_{i,j} + d}{d^2 + d}, \qquad (5)$$
where d := 2^n is the dimension of Hilbert space. A similar RB procedure was first proposed in [22] for groups with all a_i = 1, the so-called multiplicity-free groups. In this case, each character-weighted survival probability becomes a single exponential decay. Character RB had been previously proposed for the multiplicity-free dihedral group on one qubit [18], and a related approach has been used to simplify standard RB [40]. The idea of including an initial gate U_0 and weighting by characters to isolate exponential decays has also been independently proposed in [41]. We note that if we omit the initial gate U_0 and the character weighting χ̄_ī(U_0)*, we get the method of [19][20][21]; in this case, we get a single survival probability S(N) given by $S(N) = \sum_{i,j}C_{i,j}\lambda_{i,j}^N$. Determining the λ_{i,j} then requires fitting all the parameters C_{i,j} and λ_{i,j} simultaneously, and quickly becomes infeasible for a modestly large number of parameters. We see that while both our method and the method of [19][20][21] involve simultaneously fitting multiple exponential decays, our method significantly reduces the number of parameters in each fit.
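As a sanity check of the protocol, the following self-contained toy simulates character RB for the single-qubit Pauli group under depolarizing noise, a multiplicity-free special case chosen so everything fits in a few lines. All names are our own; with noise parameter p, the survival weighted by the character of the Z irrep should follow S_Z(N) ≈ (1/2) p^(N+1).

```python
import numpy as np

I2 = np.eye(2); X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]]); Zm = np.diag([1.0, -1.0])
G = [I2, X, Y, Zm]                         # benchmarking group (Paulis)

def liou(U): return np.kron(U, U.conj())   # natural representation U ⊗ U*

p = 0.95
vecI = I2.reshape(-1)
Lam = p * np.eye(4) + (1 - p) / 2 * np.outer(vecI, vecI)  # depolarizing superop

def chi_Z(Q):
    """Character of the 1D irrep spanned by vec(Z): +1 iff Q commutes with Z."""
    return 1.0 if np.allclose(Q @ Zm, Zm @ Q) else -1.0

rho = np.array([[1, 0], [0, 0]], dtype=complex).reshape(-1)   # |0><0|
M = rho.copy()
rng = np.random.default_rng(0)

def survival(N, shots=2000):
    tot = 0.0
    for _ in range(shots):
        U0 = G[rng.integers(4)]
        Us = [G[rng.integers(4)] for _ in range(N)]
        prod = np.eye(2)
        for U in Us:
            prod = U @ prod
        seq = [Us[0] @ U0] + Us[1:] + [prod.conj().T]  # (U1 U0), ..., inversion
        v = rho
        for U in seq:
            v = Lam @ (liou(U) @ v)                    # noisy gate = Lam ∘ U
        tot += chi_Z(U0) * np.real(M.conj() @ v)       # character weighting
    return tot / shots

for N in (1, 5, 10):
    print(N, survival(N), 0.5 * p ** (N + 1))   # estimate vs. prediction
```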
For example, if $\varphi\cong 2\varphi_1\oplus\varphi_2\oplus\varphi_3$, our method requires fitting three functions, corresponding to φ_1, φ_2, and φ_3, where the first function is a sum of two exponential decays and the latter two functions are single exponential decays. In contrast, [19][20][21] require fitting a single function that is the sum of four exponential decays, one for each copy of each irrep. In addition, the method of [19][20][21] cannot determine F_Λ; this is because it is not possible to match the observed parameters {λ_{i,j}} to their corresponding H_i in order to use Eq. 5. The remainder of this section is devoted to deriving this procedure, for groups that are not necessarily multiplicity-free. A. Deriving the decays To derive the form of the character-weighted survival probability, Eq. 4, we will need two facts from representation theory. Fact 1 (Schur's Lemma). Let φ : G → L(V) be a representation of a group G on a vector space V, which decomposes into irreducible representations as $\varphi\cong a_1\varphi_1\oplus\cdots\oplus a_I\varphi_I$, where $a_i\in\mathbb{Z}^+$ are positive integers. The corresponding decomposition of V is $V\cong\bigoplus_i\mathbb{C}^{a_i}\otimes V_i$. In terms of this decomposition, any linear map η̂ ∈ L(V) satisfying η̂φ(U) = φ(U)η̂ for all U ∈ G is of the form
$$\hat\eta = \bigoplus_i\hat Q_i\otimes 1_{V_i}, \qquad (6)$$
where Q̂_i is some a_i × a_i matrix for each i. Fact 2 (Projection formula). Let φ and V be as above. Given an irrep φ_i : G → L(V_i), define the character $\chi_i(U) := \operatorname{Tr}(\varphi_i(U))$. Then we can write the projector onto $\mathbb{C}^{a_i}\otimes V_i$ as
$$\hat P_i = \frac{\dim(V_i)}{|G|}\sum_{U\in G}\chi_i(U)^*\,\varphi(U). \qquad (7)$$
For proofs of both facts, see [37]. Given these results, we can prove the key property of G-twirls that allows us to compute the average fidelity. Theorem 1 (Form of G-twirls). If G is any unitary group acting on H, let $\varphi\cong a_1\varphi_1\oplus\cdots\oplus a_I\varphi_I$ be the decomposition of the natural representation into irreps, and let $H\otimes H\cong\bigoplus_i\mathbb{C}^{a_i}\otimes H_i$ be the corresponding decomposition of H ⊗ H. If Λ is any quantum channel, the G-twirl of Λ is of the form
$$\hat\Lambda_G = \bigoplus_i\hat Q_i\otimes 1_{H_i}, \qquad (8)$$
where Q̂_i is defined as in Fact 1. Proof. We apply Eq. 1 to observe that $\hat\Lambda_G\,\varphi(U) = \varphi(U)\,\hat\Lambda_G$ for any U ∈ G. We can then apply Fact 1. We are now ready to derive the formula for the character-weighted survival probability S_i(N). This proof follows the logic of [22], adapted for non-multiplicity-free groups. Writing out Eq. 3 explicitly, including the effect of preparation and measurement errors Λ_P and Λ_M, we have
$$S_i(N) = \frac{1}{|\bar G|\,|G|^N}\sum_{U_0,\ldots,U_N}\bar\chi_{\bar i}(U_0)^*\,\langle\langle M_i|\hat\Lambda_M\hat\Lambda\hat U_{N+1}\hat\Lambda\hat U_N\cdots\hat\Lambda\hat U_2\,\hat\Lambda\hat U_1\hat U_0\,\hat\Lambda_P|\rho_i\rangle\rangle.$$
The sum over U_0 gives the projection $|\bar G|\hat P_{\bar i}/\dim(\bar H_{\bar i})$ according to Eq. 7. To do the sum over U_1, ..., U_N, we can define new group elements D_1, ..., D_N by $D_i = U_i\cdots U_1$. In terms of the D_i, we then have $U_i = D_iD_{i-1}^\dagger$, with the convention that $D_{N+1} = 1$. Note that summing over U_1, ..., U_N is the same as summing over D_1, ..., D_N. We therefore may write
$$S_i(N) = \frac{1}{\dim(\bar H_{\bar i})\,|G|^N}\sum_{D_1,\ldots,D_N}\langle\langle M_i|\hat\Lambda_M\hat\Lambda\,(\hat D_N^\dagger\hat\Lambda\hat D_N)\cdots(\hat D_1^\dagger\hat\Lambda\hat D_1)\,\hat P_{\bar i}\,\hat\Lambda_P|\rho_i\rangle\rangle.$$
We can now easily perform the sum over the D_i, since each sum just gives a G-twirl according to Eq. 1. Performing this sum, and using Thm. 1, gives
$$S_i(N) = \frac{1}{\dim(\bar H_{\bar i})}\langle\langle M_i|\hat\Lambda_M\hat\Lambda\,\hat\Lambda_G^N\,\hat P_{\bar i}\,\hat\Lambda_P|\rho_i\rangle\rangle = \frac{1}{\dim(\bar H_{\bar i})}\langle\langle M_i|\hat\Lambda_M\hat\Lambda\,(\hat Q_i^N\otimes 1_{H_i})\,\hat P_{\bar i}\,\hat\Lambda_P|\rho_i\rangle\rangle,$$
where in the last line, we used the fact that the range of P̂_ī is included in $\mathbb{C}^{a_i}\otimes H_i$. We see that the effect of the character weighting is to produce a projector that restricts our attention to a single i. If we diagonalize Q̂_i as $\hat Q_i = \sum_{j=1}^{a_i}|e_{i,j}\rangle\lambda_{i,j}\langle e^{i,j}|$ with $\langle e^{i,j}|e_{i,j'}\rangle = \delta_{j,j'}$ (right and left eigenvectors), then $\hat Q_i^N = \sum_{j=1}^{a_i}|e_{i,j}\rangle\lambda_{i,j}^N\langle e^{i,j}|$, and we may write the final form of S_i(N) as
$$S_i(N) = \sum_{j=1}^{a_i}C_{i,j}\,\lambda_{i,j}^N,$$
which is precisely the form given in Eq. 4. Notice that the λ_{i,j} depend only on the gate error Λ, and not the SPAM errors Λ_P, Λ_M, which are absorbed into the constant prefactors. B. Computing the fidelity Finally, we prove the fidelity can be estimated according to Eq. 5.
This was first derived in [21], although we will adopt a simpler proof here using techniques introduced in [13,14]. The key realization is that both the fidelity and the trace of a channel are invariant under twirling by an arbitrary group: $F_\Lambda = F_{\Lambda_G}$ and $\operatorname{Tr}(\hat\Lambda) = \operatorname{Tr}(\hat\Lambda_G)$ (see Eq. 1). In particular, if we choose G to be the full unitary group, it is known that the full twirl of a channel is simply a depolarizing channel [13,14,42]:
$$\hat\Lambda_{U(d)} = p\,\hat 1 + (1-p)\,\big|\tfrac{1}{d}1\big\rangle\big\rangle\big\langle\big\langle 1\big|. \qquad (9)$$
In terms of the parameter p, we can directly compute $F_\Lambda = p + (1-p)/d$. Similarly, we can also directly compute $\operatorname{Tr}(\hat\Lambda_G) = pd^2 + (1-p)$. Combining these equations gives
$$F_\Lambda = \frac{\operatorname{Tr}(\hat\Lambda) + d}{d^2 + d}. \qquad (10)$$
To complete the proof, we note that Tr(Λ̂) can be written in terms of the matrices Q̂_i in Eq. 8 as $\operatorname{Tr}(\hat\Lambda) = \sum_i\dim(H_i)\operatorname{Tr}(\hat Q_i) = \sum_{i,j}\dim(H_i)\,\lambda_{i,j}$, which, combined with Eq. 10, gives Eq. 5 as desired. C. Scaling and Feasibility We note that experimentally determining S_i(N) requires Monte Carlo sampling of U_0, U_1, ..., U_N. Each term in this sample is bounded by $\max_{U_0\in\bar G}(|\bar\chi_{\bar i}(U_0)|) = \dim(\bar H_{\bar i})$. Therefore, the standard deviation of the samples is bounded by dim(H̄_ī), and the sample mean has uncertainty bounded by $\dim(\bar H_{\bar i})/\sqrt{\text{no. samples}}$. To determine the relative uncertainty, we consider $S_i(N)\approx\sum_{j=1}^{a_i}C_{i,j}$, which is given by
$$\sum_{j=1}^{a_i}C_{i,j}\approx\frac{\langle\langle M_i|\hat P_{\bar i}|\rho_i\rangle\rangle}{\dim(\bar H_{\bar i})},$$
where we've approximated Λ, Λ_M, Λ_P ≈ 1. The relative uncertainty in S_i(N) is therefore bounded by
$$\frac{\dim(\bar H_{\bar i})^2}{|\langle\langle M_i|\hat P_{\bar i}|\rho_i\rangle\rangle|\sqrt{\text{no. samples}}}.$$
We see that to efficiently benchmark a group G, we must have I, a_i, and dim(H̄_ī) all small: I must be small so that we only need to estimate a small number of character-weighted survival probabilities S_i(N), a_i must be small so that we may fit a function with a small number of parameters, and dim(H̄_ī) must be small for our Monte Carlo estimation of S_i(N) to converge quickly. Note that for any Ḡ the natural representation satisfies $\sum_{\bar i=1}^{\bar I}\bar a_{\bar i}\dim(\bar H_{\bar i}) = 4^n$, where n is the number of qubits, so that choosing Ḡ = G will not suffice if the number of qubits is large. In particular, to scalably benchmark a group, we must choose G so that the number of irreps I grows slowly with n, the multiplicity a_i of each irrep is bounded by a small constant, and Ḡ has corresponding irreps H̄_ī whose dimension grows slowly with n. These scaling considerations are similar to those discussed in [22] for multiplicity-free RB, except in our case we allow a_i to be bounded rather than strictly 1. Note that the optimal |ρ_i⟩⟩ with largest |⟨⟨M_i|P̂_ī|ρ_i⟩⟩| is necessarily a pure state, since any mixed state $|\rho_i\rangle\rangle = \sum_\gamma p_\gamma|\psi_\gamma\rangle\rangle$ has $|\langle\langle M_i|\hat P_{\bar i}|\rho_i\rangle\rangle|\le\max_\gamma|\langle\langle M_i|\hat P_{\bar i}|\psi_\gamma\rangle\rangle|$. Ref. [22] considered the case of mixed initial states, and included a protocol for sampling from a mixed state $|\rho_i\rangle\rangle = \sum_\gamma p_\gamma|\psi_\gamma\rangle\rangle$ provided one can efficiently prepare the states {|ψ_γ⟩}. However, we see that it suffices to take the initial state to be one of the efficiently preparable |ψ_γ⟩, which simplifies initial state preparation. Our scaling estimates are based on the typical case; however, there are a few worst-case failure modes. First, the noise may have some symmetry that restricts ⟨e^{i,j}|P̂_ī ≈ 0 for some (i, j). In this case, the corresponding λ_{i,j} will not be accurately estimated by the fitting function. To remedy this, one may choose a set of projectors P̂_{ī,1}, ..., P̂_{ī,k} such that each ⟨e^{i,j}| has overlap with at least one P̂_{ī,α}. This requires at most a_i projectors. We can then define $\hat P'_{\bar i} := \sum_\alpha\hat P_{\bar i,\alpha}$ and use it in place of P̂_ī. The modified character-weighted survival probability will require taking additional data to achieve the same relative uncertainty, since the corresponding $\sum_\alpha\dim(\bar H_{\bar i,\alpha})$ will be larger, but is otherwise identical.
The fitting procedure may also have difficulty fitting multiple exponential decays, especially if the decay rates are similar. In the case of similar decays, the fit might have numerous local minima; worse, the fitting function might simply set the coefficient of one of the decays to zero and the corresponding decay rate to some arbitrary value, and fit the curve using fewer exponential decays. This can be detected during the fitting procedure, and corrected by either taking more data to more closely constrain the fit or by simply fitting fewer exponential decays. IV. APPLICATION: SUBSPACE RANDOMIZED BENCHMARKING As an application of the general character RB method, we can improve on the recently introduced subspace randomized benchmarking method [23]. Subspace RB characterizes the error associated with a group of gates G that preserve a subspace of the Hilbert space. In [23], a benchmarking procedure is introduced that yields two decay parameters that are functions of the noise channel, but the procedure does not give an estimate for the average fidelity or other quantities with simple physical interpretations. The multiplicity-free character RB of [22] is not directly applicable to this situation, as we will see that any group that preserves subspaces necessarily decomposes into irreps with multiplicity. However, using our method we can easily characterize the average fidelity of such gates. To simplify our discussion, we will focus on the particular case discussed in [23]. The system considered in [23] can implement arbitrary symmetric single-qubit gates $U_1 := U\otimes U$ as well as the two-qubit entangling gate $U_{ZZ} := \exp\{-i\tfrac{\pi}{4}Z\otimes Z\}$. The symmetric single-qubit gates have negligible error compared to the entangling gate, so the goal of the experiment is to characterize the fidelity of U_ZZ. This is accomplished by combining the elementary gates into elements of a benchmarking group G, using a fixed number of the relevant gate U_ZZ, and then designing an RB procedure to benchmark elements of G. It is straightforward to see that any U ∈ G made up of products of U_1 and U_ZZ operators preserves the triplet and singlet subspaces
$$H_T = \operatorname{span}\Big\{|00\rangle,\ \tfrac{|01\rangle+|10\rangle}{\sqrt 2},\ |11\rangle\Big\},\qquad H_S = \operatorname{span}\Big\{\tfrac{|01\rangle-|10\rangle}{\sqrt 2}\Big\}.$$
This implies that every gate U ∈ G decomposes as $U = U_T\oplus U_S$, with U_T and U_S acting on the triplet and singlet spaces, respectively. Our method differs from the original in several ways. Most notably, we combine the elementary gates into elements U ∈ G such that G forms a group. This requires a moderate increase in complexity of the combined gates; [23] combined their gates into unitaries involving three U_ZZ gates, while our construction requires four. However, in return for this increased complexity, our method offers several advantages. Rather than estimate decay parameters with no clear physical interpretation, our method produces direct estimates of the average fidelity. In addition, the derivation of the form of the exponential decays in [23] required assumptions on the relative phases of U_T and U_S that could not actually be realized on their experimental platform. In contrast, our method yields rigorous decays thanks to the underlying group structure of G. The original subspace RB can be extended to sets of gates G that preserve some arbitrary splitting of H into subspaces $H = H_1\oplus H_2$, provided the set G can be written as $G = \{U_{1,a}\oplus U_{2,a}\}$ where $\{U_{1,a}\}$ and $\{U_{2,a}\}$ are both groups and unitary 2-designs [43] (see below for the definition of a 2-design).
However, it is difficult to construct such a set in a way that is experimentally relevant; indeed, [23] could not do this for the simple case of two qubits, and we avoid attempting such a construction here. A more useful approach, which mirrors our approach below, is to construct an arbitrary group out of the elementary gates and perform character RB on whatever irreps result. This method can likely be used to benchmark other two-qubit gates that are symmetric under SWAP besides U_ZZ, and may also prove useful for gates that preserve other subspaces. A. Constructing the benchmarking group Ref. [23] constructed their benchmarking set G using a generalization of the Clifford group [11,12] to a d-level system [44]. We will follow a similar procedure, modified to ensure G forms a group. For a d-level system, analogues of the X and Z qubit operators are defined as [45]
$$X|j\rangle = |j+1 \bmod d\rangle,\qquad Z|j\rangle = \omega^j|j\rangle,\qquad \omega := e^{2\pi i/d},$$
where addition is performed modulo d. These generalized X and Z operators are unitary, and the set $\{X^aZ^b : a, b\in\mathbb{Z}_d\}$ forms an orthogonal basis for the set of all d × d matrices. Note that for d = 2 we recover the usual Pauli matrices. Specializing to d = 3, define the generalized Pauli group $P := \{\omega^\eta X^aZ^b : \eta, a, b\in\mathbb{Z}_3\}$; that P is a group follows from the commutation relation ZX = ωXZ. The generalized Clifford group is defined to be the set of all unitaries that stabilize P [44], $G_T := \{U : UPU^\dagger = P\}$ (taken up to global phases). An element U ∈ G_T is defined (up to a global phase) by its action on X and Z. Defining $UXU^\dagger = \omega^{\eta_x}X^{a_x}Z^{b_x}$ and $UZU^\dagger = \omega^{\eta_z}X^{a_z}Z^{b_z}$, and noting that conjugation by U must preserve the commutation relation between X and Z, one can count the allowed combinations, leading to a total of 216 elements of G_T. We can find the action of U ∈ G_T on a general element $X^aZ^b$ by composing these relations, $UX^aZ^bU^\dagger = (UXU^\dagger)^a(UZU^\dagger)^b$. The action of U on a general density matrix then follows by linearity. Our benchmarking group G is constructed by combining the elementary symmetric gates to act as G_T on the triplet subspace, where the three levels |0⟩, |1⟩, |2⟩ correspond to the triplet basis $|00\rangle,\ \tfrac{|01\rangle+|10\rangle}{\sqrt 2},\ |11\rangle$. The most general composite gate is formed by alternately applying U_1 and U_ZZ gates to our qubits. A straightforward calculation shows that if such a circuit applies an operator U_T to the triplet subspace, its action on the singlet subspace is necessarily given by $(-1)^{n_z}\omega^\eta\det(U_T)^{1/3}$, where n_z is the number of entangling U_ZZ gates. By varying the single-qubit unitaries U_1, we find computationally that all elements of G_T and all relative phases ω^η can be generated by circuits of exactly four U_ZZ gates, as shown in Fig. 1 [46]. In total, then, the benchmarking group is given by
$$G = \{U_T\oplus\omega^\eta\det(U_T)^{1/3} : U_T\in G_T,\ \eta\in\mathbb{Z}_3\},$$
where the first summand acts on the triplet subspace and the second acts on the singlet subspace. Note that every group element contains exactly four entangling gates, so the average fidelity of G gives a useful measure of the fidelity of the entangling gate. B. Irreps of the benchmarking group The natural representation of G decomposes into the subrepresentations H_T0, H_T⊥, H_TS, H_ST, and H_S0, which are described in Table I (whose columns list each subrepresentation, its projector, and its character χ_i(U_T ⊕ U_S)). These are all clearly subrepresentations of the natural representation; for proof that they are in fact irreducible, we will use the concept of a unitary t-design [9]. Let S be a set of unitaries acting on a space H. A balanced polynomial of degree t is a polynomial in the matrix elements of U and U* where each term in the polynomial has degree at most t in the elements of U and the same degree in the elements of U*. S is a unitary t-design if, for every balanced polynomial p(U, U*) of degree t, averaging p(U, U*) over S is the same as averaging over all unitaries on H (weighted by the Haar measure):
$$\frac{1}{|S|}\sum_{U\in S}p(U,U^*) = \int dU\,p(U,U^*).$$
A classic example is the Clifford group, which forms a unitary 3-design [9,47,48].
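The qudit operators defined above are simple to construct and verify numerically; a small sketch of our own for d = 3:

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)        # shift: X|j> = |j+1 mod d>
Z = np.diag(omega ** np.arange(d))       # clock: Z|j> = w^j |j>

# The commutation relation used to prove P is a group:
assert np.allclose(Z @ X, omega * X @ Z)

# {X^a Z^b} is an orthogonal basis of the 3x3 matrices under Tr(A^dag B).
ops = {(a, b): np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
       for a in range(d) for b in range(d)}
for k1, A in ops.items():
    for k2, B in ops.items():
        assert np.isclose(np.trace(A.conj().T @ B), d if k1 == k2 else 0)
print("ZX = wXZ and Hilbert-Schmidt orthogonality verified for d = 3")
```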
The group G_T forms a unitary 2-design [49] (see Appendix B for a proof). This allows us to prove the representations in Table I are irreducible, using the following fact: Fact 3 (Schur normalization). Let χ be the character of a representation. The representation is irreducible iff
$$\frac{1}{|G|}\sum_{U\in G}|\chi(U)|^2 = 1.$$
For a proof, see [37]. The representations H_T0 and H_S0 are 1D, thus irreducible. For the representation H_T⊥, we can evaluate $\frac{1}{|G|}\sum_{U\in G}|\chi_{T\perp}(U)|^2 = 1$ in two steps: the unitary 2-design property lets us replace the average over G by a Haar average over the full unitary group on H_T, and the Haar average equals 1 because H_T⊥ is an irrep of the natural representation of the full unitary group on H_T. Finally, for H_TS and H_ST the same argument applies, now using the fact that the defining representation of the full unitary group on H_T is irreducible. Note that H_T0 and H_S0 are two irreducible copies of the trivial representation, so that G is necessarily non-multiplicity-free [50]. The remaining irreps are all unique, since they have different character functions. C. Benchmarking G The form of the decay curves corresponding to each irrep is given by Eq. 4. Note that from this general form we would expect that S_0(N) is the sum of two exponential terms, with each λ_{0,j} corresponding to an eigenvalue of Λ̂_G restricted to H_0. However, we know that for trace-preserving noise $\langle\langle 1|\hat\Lambda_G = \langle\langle 1|$, which implies that one of the eigenvalues is 1. We define two different subgroups G_1, G_2 ⊆ G for our benchmarking procedure. We will use G_1 to construct S_0(N) and S_T⊥(N), and G_2 to construct S_TS(N) and S_ST(N). For G_1, we can define character functions and their corresponding projectors; we also see that dim(H̄_T⊥) = 1, so that S_T⊥(N) will have the best possible relative error (see Section III C). For G_2, we can likewise define character functions and corresponding projectors; we again see that P̄_TS projects into H̄_TS ⊆ H_TS with dim(H̄_TS) = 1, so that S_TS(N) will also have the best possible relative error. As our initial states, we restrict ourselves to mixtures of Z-basis product states, for ease of preparation. As our measurement projectors, we restrict ourselves to projectors corresponding to Z measurements, for ease of measuring. With these choices, the S_i(N) are approximately (Eq. 11)
$$S_0(N) = B + C_0\lambda_0^N,\quad S_{T\perp}(N) = C_{T\perp}\lambda_{T\perp}^N,\quad S_{TS}(N) = C_{TS}\lambda_{TS}^N,\quad S_{ST}(N) = C_{ST}\lambda_{ST}^N.$$
Note that λ_ST = λ_TS*, so it is unnecessary to compute both S_TS(N) and S_ST(N). Note also that λ_0 and λ_T⊥ are both necessarily real, as are C_0 and B. The remaining parameters are complex. For convenience, we will rotate S_T⊥(N) by e^{iπ/3} so that S_T⊥(N) is approximately real. We demonstrate our method by generating random error channels and simulating our RB procedure. To generate a random error channel Λ on a d-dimensional Hilbert space, we generate a random unitary on a (d²+d)-dimensional Hilbert space and trace out d² auxiliary degrees of freedom; to adjust the fidelity, we take a convex combination of the resulting channel with the identity channel. FIG. 2. The predicted and measured character-weighted survival probability for a random error channel. The exact decay (green) is an exponential decay given by Eq. 11. We estimate S_i(N) by applying random gates and measuring the final state (blue points). The data is fit to an appropriate function (orange), from which we estimate the fidelity.
All channels generated by this method are guaranteed to be completely positive trace-preserving (CPTP), thus valid error channels, and every CPTP channel can be generated via this method [36]. For each error channel, we take data at 15 different values of N , and sample unitary operators at each value of N until we have applied a total of 150, 000 unitary operators in total. For each string of unitary operators, we perform full state-vector simulation to apply the RB sequence of operators, and then generate a measurement outcome of 0 or 1 using the appropriate probability, and compute the characterweighted average. In Fig. 2, we show the exact value of S i (N ), the data we take to estimate S i (N ), and the fit to S i (N ) according to Eq. 11 for a single random error channel Λ. From the fit data, we can estimate F Λ by applying Eq. 5: Note that the imaginary parts of λ T S and λ ST always cancel to give a real F Λ as expected. We use this formula FIG. 3. The exact and estimated fidelity for a selection of randomly generated error channels. Each estimate was based on data taken over 15 different lengths N . Each estimate was arrived at by applying a total of 150,000 benchmarking group elements. This is the same number of elements applied in the experiment described in [23]. The diagonal line denotes the points where the exact and estimated fidelities are equal. The data agree with the line with a reduced χ 2 value of .9, indicating good agreement. Note that the error bars are derived from statistical uncertainty in the data, and vanish in the limit of an infinite number of data points to estimate the fidelity of our randomly generated error channels, and compare our estimate to the true fidelity in Fig. 3. We see that the true fidelity and the estimated fidelity agree within the error bars set by the uncertainty of our fits. We can directly compare this with the original subspace RB method [23]. That method served to estimate only λ 0 and λ T ⊥ (t and r in their notation), and they could only form a measure of gate fidelity using these quantities. They defined a so-called "extended subfidelity"F Λ , which they obtained by replacing λ ST and λ T S with the weighted average of the other eigenvalues: 10 . It is obvious that if F Λ → 1,F Λ → 1 as well, but the reverse is not necessarily true. We can compare the approximate fidelity to the exact fidelity for the various noise sources explored [23]. We consider intensity errors, which correspond to an overrotation e −i ZZ ; optical pumping errors, which cause amplitude-damping on each qubit; inhomogenous fields, which cause phase-damping on each qubit; and SWAP errors, which interchange the qubits.. The results are shown in Fig. 4. We see that while for most error sources F Λ ≈F Λ , there exist worse-case errors, such as SWAP, that cannot be detected byF Λ . This was also noted in [23] as a limitation of their method. Our work also improves upon the original work in the mathematical assumptions needed to derive the benchmarking decays. Ref. [23] derived their decay formulas under the assumption that their benchmarking set was of the form {U T ⊕ σφ U T : U T ∈ G T , σ = ±}, where φ U T is some uncontrolled phase that occurs on the singlet space and σ is a controllable phase between the singlet and triplet spaces. However, in practice they could not control σ using a constant number of U ZZ gates. Instead, they implemented only {U T ⊕ φ U T : U T ∈ G T } and assumed the form of the decay would not change. 
In our work, by contrast, we have rigorously derived decay formulas for a group of gates that can be directly compiled into elementary symmetric gates using a constant number of U_ZZ. We note that our method does require one additional capability that was not required in the original work: in order to estimate S_TS(N), it is necessary to initialize and measure the |01⟩ state. This requires additional experimental overhead to individually address and measure each qubit at the beginning and end of the benchmarking procedure. However, such overhead only contributes to the SPAM errors Λ_P, Λ_M, and does not affect our estimates of the entangling error. In any case, our method to measure λ_0 and λ_T⊥ does not require individual addressing, and can be viewed as a mathematically rigorous method to extract these parameters with no additional experimental requirements. V. APPLICATION: LEAKAGE RANDOMIZED BENCHMARKING We may also use our generalized character RB to improve the leakage RB introduced in [26]. In leakage RB, like subspace RB, one is given a group G that preserves the splitting of the Hilbert space into subspaces $H = H_1\oplus H_2$. In leakage RB, however, H_1 ⊕ H_2 does not represent the computational Hilbert space, and the goal is not to compute the average fidelity of the group operations. Instead, H_1 represents the computational space of a quantum system (e.g. the two lowest-level states that encode a qubit), while H_2 represents the leakage space outside the computational space. Leakage RB determines the average probability of "leaking" from H_1 to H_2 or "seeping" from H_2 to H_1. Noting that the probability of a state |ρ⟩⟩ being in subspace α = 1, 2 is given by ⟨⟨1_α|ρ⟩⟩, define the leakage L and seepage S by
$$L = \int_{H_1}d\psi\,\langle\langle 1_2|\hat\Lambda|\psi\rangle\rangle, \qquad (13)$$
$$S = \int_{H_2}d\psi\,\langle\langle 1_1|\hat\Lambda|\psi\rangle\rangle. \qquad (14)$$
In addition, leakage RB determines the average fidelity restricted to the subspace H_1,
$$F_{\Lambda,1} = \int_{H_1}d\psi\,\langle\langle\psi|\hat\Lambda|\psi\rangle\rangle, \qquad (15)$$
which is the appropriate measure of gate quality, since all computations take place in H_1. Leakage RB is relevant for any system in which qubits are encoded in a subspace of a larger Hilbert space, which includes superconducting qubits [51,52], quantum dots [53][54][55][56][57], and trapped ions [58][59][60]. The original leakage RB could only be applied to a group of the form $G = \{U_{1,a_1}\oplus U_{2,a_2} : a_1\in A_1,\ a_2\in A_2\}$ such that $\{U_{1,a_1} : a_1\in A_1\}$ and $\{U_{2,a_2} : a_2\in A_2\}$ form 2-designs on their respective subspaces [61]. This is a very stringent condition, as it requires being able to independently control the computational and leakage subspaces. In many experimental implementations such control is not realistic; an experimental implementation of a gate U_{1,a} on the computational subspace will naturally implement some U_{2,a} on the leakage subspace. It is therefore desirable to develop a leakage RB that can be applied to more general groups. Using our method, we can derive a leakage RB procedure that is more general than the one described in [26]. Let G be a group of unitary gates that preserve the subspaces of H, and let Λ be their shared error channel. To estimate L and S, we will require that the only trivial representations of G are |1_1⟩⟩ and |1_2⟩⟩, while to estimate F_{Λ,1} we additionally require that the subrepresentation $H_{1\perp}\subseteq H_1\otimes H_1$ orthogonal to |1_1⟩⟩ is an irrep of multiplicity 1. If G is of the form $G = \{U_{1,a}\oplus\sigma U_{2,a} : a\in A,\ \sigma = \pm 1\}$, then the first condition is satisfied provided $\{U_{1,a} : a\in A\}$ and $\{U_{2,a} : a\in A\}$ are unitary 1-designs, while the second condition is satisfied provided these groups are unitary 2-designs with dimensions $d_1\neq d_2$ (see Appendix C for proofs).
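Eqs. 13 and 14 can be evaluated directly for any channel given its Liouville matrix. The Monte Carlo sketch below is our own illustration, with illustrative names, and assumes H_1 and H_2 occupy the first d_1 and last d_2 basis states.

```python
import numpy as np

def leakage_seepage(L_super, dims, samples=20000, seed=2):
    """Monte Carlo estimate of leakage L and seepage S (Eqs. 13-14):
    L = average probability that a Haar-random state in H1 ends in H2,
    S = the reverse. L_super is the channel's Liouville matrix."""
    rng = np.random.default_rng(seed)
    d1, d2 = dims; d = d1 + d2
    one1 = np.zeros((d, d)); one1[:d1, :d1] = np.eye(d1)   # projector onto H1
    one2 = np.eye(d) - one1
    def avg_escape(dsub, offset, target):
        acc = 0.0
        for _ in range(samples):
            v = np.zeros(d, dtype=complex)
            w = rng.normal(size=dsub) + 1j * rng.normal(size=dsub)
            v[offset:offset + dsub] = w / np.linalg.norm(w)  # state in subspace
            out = (L_super @ np.outer(v, v.conj()).reshape(-1)).reshape(d, d)
            acc += np.real(np.trace(target @ out))
        return acc / samples
    return avg_escape(d1, 0, one2), avg_escape(d2, d1, one1)
```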
Note that our requirements are significantly weaker than the original leakage RB, as we are only assuming the ability to implement an independent phase on the leakage space. We outline our procedure for determining L, S, and F_{Λ,1} for such groups G. Our procedure, like the original leakage RB, requires that SPAM errors do not mix the subspaces H_1 and H_2, or at least that such mixing is negligible compared to the gate errors. In our derivations we will assume Λ̂_M = Λ̂_P = 1̂, although the generalization to errors that act only within the subspaces is trivial. Our modified leakage RB procedure consists of the following steps: 1. Choose an initial state |ρ⟩⟩ ∈ H_1 and measurement projector |M⟩⟩ = |1_1⟩⟩. 2. For a given N, choose unitaries U_0 ∈ G and U_1, ..., U_N ∈ G randomly and uniformly. 3. Apply the gates (U_1U_0), U_2, ..., U_{N+1} sequentially, where $U_{N+1} = (U_N\cdots U_1)^{-1}$. 4. Perform a measurement of the observable M to determine if the state is still in H_1. 5. Repeat steps 2-4 many times, to estimate the zeroth character-weighted survival probability
$$S_0(N) = \mathbb{E}_{U_0,\ldots,U_N}\big[\mathrm{Pr}_{U_0,\ldots,U_{N+1}}\big],$$
where Pr_{U_0,...,U_{N+1}} is the probability of remaining in H_1 after applying gates U_0, ..., U_{N+1} to |ρ⟩⟩ (the character of the trivial irrep is simply 1). 6. Repeat steps 2-5 for different values of N. 7. Fit the survival probability to a function of the form
$$S_0(N) = A\lambda^N + B, \qquad (17)$$
where A, B, and λ are independent of N. 8. Estimate L and S as
$$L = (1-\lambda)(1-B), \qquad (18)$$
$$S = (1-\lambda)B. \qquad (19)$$
In the remainder of this section, we prove the correctness of this procedure and provide an example of such leakage RB.
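Steps 7-8 amount to a three-parameter fit followed by the algebra of Eqs. 18-19. A minimal sketch of our own using scipy.optimize.curve_fit on synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit S0(N) = A*lam**N + B (Eq. 17), then recover leakage/seepage via
# Eqs. 18-19: L = (1 - lam)(1 - B), S = (1 - lam)*B.
def model(N, A, B, lam):
    return A * lam**N + B

# Illustrative data: true L = 0.02, S = 0.01  =>  lam = 0.97, B = 1/3.
Ns = np.arange(0, 60, 4)
L_true, S_true = 0.02, 0.01
lam_t, B_t = 1 - L_true - S_true, S_true / (L_true + S_true)
data = (1 - B_t) * lam_t**Ns + B_t \
       + np.random.default_rng(3).normal(0, 0.003, Ns.size)

(A, B, lam), _ = curve_fit(model, Ns, data, p0=(0.5, 0.5, 0.9))
print("L ≈", (1 - lam) * (1 - B), " S ≈", (1 - lam) * B)
```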
We first note that theP 0 subspace is spanned by the orthonormal vectors Thus in terms of these basis vectors, we may writê Noting that M αβ = 1 α |Λ G |1 β = 1 α |Λ|1 β , we can use the definitions of L and S, (Eqs. 13 and 14) to determine the constants Q αβ : From the explicit form of Q αβ , we can determine the eigendecomposition ofQ 0 ⊗ 1 0 via straightforward algebra [23,26]: Putting this together, we can evaluate the zeroth character-weighted survival probability as We then have that B = S L+S , which can be combined with λ = 1 − L − S to immediately give Eqs. 18 and 19. B. Deriving FΛ,1 To establish Eq. 20, we first prove the following: whereP 11 is the projector onto H 1 ⊗H 1 . We use a similar method as in our proof of Eq. 10. We first note that the restricted average fidelities ofΛ andP 11ΛP11 :=Λ 11 are equal.Λ 11 is an error channel restricted to the H 1 subspace. We can twirlΛ 11 by the full unitary group on H 1 to get a depolarizing channel Note that we have p and q rather than p and (1 − p) as in Eq. 9; this is becauseΛ 11 is not necessarily tracepreserving. We can directly compute F (Λ11) G = p + q d1 . Similarly, we can also directly compute Tr (Λ 11 ) G = pd 2 1 + q. Finally, we can directly compute p + q = Combining these three equations gives Eq. 21. To estimate Tr(ΛP 11 ), we can divide this trace up into two pieces: whereP 1⊥ is the projector onto H 1⊥ . The latter trace is simply (d 2 1 − 1)λ 1⊥ . Plugging this in to Eq. 21 gives Eq. 20 as desired. C. Example: Two-qubit logical encodings As an example of our leakage, we consider an encoding of a single logical qubit into the S z = 0 subspace of two physical qubits. This encoding is frequently used in quantum dot qubits [54][55][56]. The computational space H 1 is spanned by and the leakage space H 2 is spanned by Let's assume we implement single-qubit rotations on our computational space by the operators where implementing an X or Z rotation on the computational space naturally induces a specific rotation on the leakage space. We will take our benchmarking group to be the group generated by these two rotations, G = R X , R Z . This group has a total of 16 elements. It cannot be written as a product of a group acting on H 1 and a group acting on H 2 , so the usual leakage RB does not apply. However, elementary calculation shows that the natural representation of this group contains exactly two trivial irreps, spanned by |1 1 and |1 2 , and we can therefore use our procedure to estimate L and S. We illustrate this method by generating random error channels and simulating the RB procedure. In Figs. 5, we show the exact value of S 0 (N ), the data we take to estimate S 0 (N ), and the fit to S 0 (N ) according to Eq. 17. In Fig. 6, we repeat the same fitting procedure for a set of randomly generated error channels, and estimate L and S using Eq. 18. We see that the true values of L and S and our estimate for L and S agree within the error bars set by the uncertainty in our fits. FIG. 5. The predicted and measured S0(N ) for a single randomly generated error channel. The actual decay (green) is an exponential decay given by Eq. 17. We estimate S0(N ) by applying random gates and measuring the final state (blue points). The data is fit to a function of the form of Eq. 17, from which we estimate L and S. We cannot apply our method to find F Λ,1 because in this example H 2⊥ and H 1⊥ share an irrep. This reflects the overall difficulty in applying leakage RB to physically realistic circumstances. 
While this work provides the most widely applicable method for leakage RB currently available, more work is needed to develop a truly general procedure. VI. APPLICATION: MATCHGATE RANDOMIZED BENCHMARKING We will derive a benchmarking procedure that determines the average fidelity of circuits composed of matchgates using a number of experiments that scales polynomially in the number of qubits. Our method is the matchgate equivalent of traditional Clifford RB, which characterizes the average fidelity of circuits composed of Hadamard, phase, and CNOT gates, and also requires a number of experiments that scales polynomially in the number of qubits. However, we will see that benchmarking matchgate circuits requires the full machinery of non-multiplicity-free character RB. A. The matchgate group Consider a line of n qubits with nearest-neighbor connectivity. Let G be the matchgate group on n qubits, the group of all unitaries generated from nearest-neighbor matchgates. Naively, G could contain arbitrarily long circuits of matchgates. However, one can prove that every element of G can be realized using circuits of at most 4n³ matchgates [29,30]. We will provide a simplified proof of this fact below. Following [29,30], our primary tool to understand G will be the Jordan-Wigner transformation [64]. Define 2n Majorana operators {c_m} as
$$c_{2k-1} = \Big(\prod_{j<k}Z_j\Big)X_k,\qquad c_{2k} = \Big(\prod_{j<k}Z_j\Big)Y_k,\qquad k = 1,\ldots,n.$$
Claim 1. Any matchgate circuit U ∈ G acts on the Majorana operators by conjugation as a proper rotation: $Uc_mU^\dagger = \sum_{m'}R_{m'm}c_{m'}$ for some R ∈ SO(2n). Claim 2. Any unitary operator U ∈ U(2^n) that acts on the Majorana operators as a proper rotation is in the matchgate group G. In particular, such a U can be decomposed into a product of at most 2n³ nearest-neighbor matchgates. These two claims together imply that the matchgate group is isomorphic to SO(2n), and that every element of the matchgate group can be efficiently implemented in a quantum circuit. Proof of claims Proof of Claim 1. We provide a simplification of the proof in [30]. We prove that a nearest-neighbor matchgate acting on qubits k and k+1 acts as a rotation mixing c_{2k−1}, c_{2k}, c_{2k+1}, and c_{2k+2}, and that all such rotations are realized by matchgates. It then follows that all products of matchgates also act as rotations on the Majorana operators. Without loss of generality, we can restrict ourselves to k = 1, so our Majorana operators are given by $c_1 = X_1$, $c_2 = Y_1$, $c_3 = Z_1X_2$, and $c_4 = Z_1Y_2$. We can write an infinitesimal matchgate as $U = 1 + \epsilon M + O(\epsilon^2)$, where M must be a real antisymmetric quadratic form in these four Majorana operators, with coefficients $\alpha_{ab}\in\mathbb{R}$. One can directly check that such a U rotates the four Majorana operators infinitesimally, with generator given by the antisymmetric matrix α. We therefore see that infinitesimal matchgates generate the whole Lie algebra so(4) of real antisymmetric matrices. By exponentiating the infinitesimal matchgates, we generate the full set of matchgates; in this process, we generate the full group SO(4) as well. We note that an arbitrary rotation between two Majorana operators can itself be compiled into a short circuit of nearest-neighbor matchgates. Thus, the above decomposition of R into fewer than 4n³ two-Majorana rotations gives an explicit formula for the matchgates needed to construct R. We provide Python code to realize the Hoffman decomposition of R into elementary rotations, as well as the reduction of R to a matchgate circuit, at [67]. B. Irreps of the matchgate group We want to understand how the natural representation of G decomposes into irreps. This is most convenient in the basis of polynomials of {c_m}. Note that $c_m^2 = 1$, so our polynomials are at most degree 1 in any given c_m, and there are 4^n such polynomials. Explicitly, an orthonormal basis of H ⊗ H is given by the vectorized, normalized products of distinct Majorana operators, $|m_1\cdots m_i\rangle\rangle\propto|c_{m_1}c_{m_2}\cdots c_{m_i}\rangle\rangle$ with $m_1 < \cdots < m_i$. Define $H_i := \operatorname{span}\{|m_1\cdots m_i\rangle\rangle\}$ to be the space spanned by degree-i basis elements, for each i = 0, ..., 2n. Then $H_i\cong\bigwedge^i\mathbb{C}^{2n}$, the i-fold wedge product of $\mathbb{C}^{2n}$. It is clear that Û preserves each H_i, so that each H_i is a subrepresentation.
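The Jordan-Wigner construction above is easy to realize and test numerically. The sketch below is our own: it builds the 2n Majorana operators in the standard convention written above (the paper's code at [67] may use a different ordering or phase convention) and verifies the Clifford-algebra relations {c_a, c_b} = 2δ_ab.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

def kron_all(ops):
    return reduce(np.kron, ops)

def majoranas(n):
    """Jordan-Wigner Majoranas on n qubits:
    c_{2k-1} = Z...Z X I...I,  c_{2k} = Z...Z Y I...I (X or Y on site k)."""
    cs = []
    for k in range(n):
        left, right = [Z] * k, [I2] * (n - k - 1)
        cs.append(kron_all(left + [X] + right))
        cs.append(kron_all(left + [Y] + right))
    return cs

cs = majoranas(3)
d = 2 ** 3
for a, ca in enumerate(cs):
    for b, cb in enumerate(cs):
        anti = ca @ cb + cb @ ca
        assert np.allclose(anti, (2 if a == b else 0) * np.eye(d))
print("2n = 6 Majorana operators satisfy {c_a, c_b} = 2*delta_ab")
```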
On H_1, Û acts as the rotation operator R associated to U: $\hat U|m\rangle\rangle = \sum_{m'}R_{m'm}|m'\rangle\rangle$. On general H_i, Û acts as the wedge product of the rotation operator: $\hat U|_{H_i} = \bigwedge^i R$. Claim 3. The natural representation of the matchgate group decomposes into the irreps $H_0\oplus H_1\oplus\cdots\oplus H_{n,1}\oplus H_{n,2}\oplus\cdots\oplus H_{2n-1}\oplus H_{2n}$. Proof. The wedge powers $\bigwedge^i\mathbb{C}^{2n}$ are irreducible representations of SO(2n) for i ≠ n, while at i = n the Hodge star operator $*:\bigwedge^n\mathbb{C}^{2n}\to\bigwedge^n\mathbb{C}^{2n}$ commutes with the group action and splits H_n into the two irreps H_{n,1} ⊕ H_{n,2}, its self-dual and anti-self-dual eigenspaces. Let Ḡ ⊂ G be the subgroup of the matchgate group generated by R ∈ SO(2n) with R diagonal. Such an R is always of the form R = diag{σ_1, ..., σ_2n} with σ_1σ_2⋯σ_2n = 1. The action on a state |m_1⋯m_i⟩⟩ ∈ H_i is given by $\hat U|m_1\cdots m_i\rangle\rangle = \sigma_{m_1}\cdots\sigma_{m_i}|m_1\cdots m_i\rangle\rangle$, and therefore each state |m_1⋯m_i⟩⟩ spans an irrep of the natural representation of Ḡ. Because of the constraint σ_1σ_2⋯σ_2n = 1, each irrep has multiplicity 2, with the irrep spanned by |m_1⋯m_i⟩⟩ isomorphic to the irrep spanned by |ℓ_1⋯ℓ_{2n−i}⟩⟩, with {ℓ_a} the complement of {m_a}. For each i = 0, ..., n, we can define a character function χ̄_i and a corresponding projector P̂_i. These projectors project into the multiplicity-two irreps H_i ⊕ H_{2n−i} for i = 0, ..., (n−1), and project into the two inequivalent irreps H_{n,1} ⊕ H_{n,2} for i = n. As our initial state, for each i = 0, ..., n we choose a product state in which the kth qubit is in the + state of the X operator when i = 2k − 1. Provided we can prepare both X-basis and Z-basis single-qubit states, we can prepare |ρ_i⟩⟩. As our measurement projector, for each i = 0, ..., n we choose the corresponding product-state projector: for i = 2k − 1, this corresponds to a measurement of the kth qubit in the X basis, while for i = 2k this corresponds to a measurement of the product of the last k qubits in the Z basis. With these choices, the S_i(N) have coefficients of order unity, and the relative uncertainty does not depend on the number of qubits. This is therefore a scalable method to benchmark the matchgate group. The form of the decay is given by
$$S_i(N) = C_{i,1}\lambda_{i,1}^N + C_{i,2}\lambda_{i,2}^N. \qquad (22)$$
FIG. 7. The predicted and measured character-weighted survival probability for a random error channel. The exact decay (green) is an exponential decay given by one of Eq. 22. We estimate S_i(N) by applying random gates and measuring the final state (blue points). The data is fit to an appropriate function (orange), from which we estimate the fidelity. For each i, either λ_{i,1}, λ_{i,2}, C_{i,1}, C_{i,2} ∈ ℝ, or λ_{i,1} = λ_{i,2}* and C_{i,1} = C_{i,2}*, since S_i(N) is always real. For the case of i = n, we know that the former case holds when n is even and the latter when n is odd, by Claim 3. For 1 ≤ i < n, one should assume whichever case gives the best fit. Note that in all cases, we fit at most 4 real parameters. As an example, we simulate a noisy implementation of the matchgate group on n = 3 qubits. In Fig. 7, we show the exact value of S_i(N), the data we take to estimate S_i(N), and the fit to S_i(N) according to Eq. 22 for a single random error channel Λ. In Fig. 8, we do the same fitting procedure for a set of randomly generated error channels, and estimate their fidelity. We see that the true fidelity and the estimated fidelity agree within the error bars set by the uncertainty of our fits. VII. CONCLUSION AND DISCUSSIONS In this work, we extended the recently introduced character RB of [22] to groups with multiplicity. Compared to earlier work on benchmarking arbitrary groups [20,21], our method allows us to accurately determine the fidelity and fit fewer exponentials to experimental data. The generalization to non-multiplicity-free groups was essential to deriving a rigorous version of subspace RB and a scalable RB protocol for the matchgate group.
This generalization also allowed us to develop an improved leakage RB protocol. While we derived the character RB procedure in more generality than [22], our generalization still requires groups of small multiplicity, since the multiplicity of the group determines the number of exponential decays in our fit function. Robustly fitting a sum of many exponential decays is challenging, especially when the decay rates are roughly equal. It is likely straightforward to benchmark groups in which the trivial irrep has multiplicity three, as the corresponding decay $S_0(N) = A + B\lambda_{0,1}^N + C\lambda_{0,2}^N$ has only five real parameters. An irrep of multiplicity 3 with a real character function χ has a decay with six parameters, which may be feasible with sufficient data. A general irrep of multiplicity 3, however, requires fitting 9 real parameters, which is likely unfeasible for realistic amounts of data. Higher-multiplicity irreps are correspondingly more difficult. All of the groups we considered in the examples in this paper decomposed into irreps with multiplicity at most 2. All our applications involved a group that preserved some subspace of the Hilbert space. In the case of subspace RB, the group preserved the triplet and singlet subspaces; in the case of leakage RB, the computational and leakage subspaces; and in the case of matchgate RB, the even and odd parity subspaces. Any group that preserves subspaces necessarily has multiplicity, since there is always a copy of the trivial irrep in each subspace. It is an open question whether non-multiplicity-free character RB has useful applications to groups that do not preserve subspaces but nonetheless have multiplicity. While our leakage RB necessitates the fewest assumptions to date, it is still too restrictive for many experimental implementations. Most notably, our RB requires the set of gates to be a group, which may be unrealistic; often, the gates will only form a group modulo rotations in the leakage space. In experimental implementations of leakage RB, this problem is usually simply ignored and an exponential decay is posited to exist with the usual relation to the leakage rate [52,57]. It is worth exploring whether the methods used here can be further extended to such sets of gates that are only groups in the computational subspace, modulo rotations in the leakage subspace, to provide a more rigorous foundation for leakage RB experiments. There are two obvious directions for further applications of character RB, with or without multiplicity. First, character RB has the potential to drastically expand the family of groups that can be scalably benchmarked. This requires both finding a group G that can be efficiently compiled into elementary gates and whose multiplicity is bounded as the number of qubits n increases, as well as finding a subgroup Ḡ ⊆ G whose irreps have slowly growing dimension. As a simple example, the subgroups of the Clifford group considered in [20] likely have a scalable protocol based on character RB, with Ḡ given by the Pauli group. Increasing the number of groups that can be scalably benchmarked gives new ways of characterizing compiled gates, especially non-Clifford gates. Second, character RB can be used to characterize specific elementary gates by combining these gates into a group, as we did in Section IV for subspace RB. This requires finding a group that can be implemented by combining a fixed number of the gate to be characterized with known high-fidelity gates.
Constructing these groups is a non-trivial task, as we have seen in the case of the U_ZZ operator above. We leave the exploration of such applications to future work.

Appendix A: Gate-dependent errors

In this appendix, we extend the work of [22, 38] on gate-dependent errors to the case of non-multiplicity-free character RB. Ref. [22] had previously generalized [38] to establish that multiplicity-free character RB is robust to gate-dependent errors. Here, we largely follow the same logic as [22, 38], with appropriate modifications for the case of non-multiplicity-free groups. Our ultimate goal is the following theorem:

Theorem 2. Let G be a benchmarking group, and let χ_i be a character function for an irrep of the natural representation with multiplicity a_i. Assume each gate U ∈ G is realized as a noisy operator Ũ, but do not assume we can write Ũ = Λ̂Û for some U-independent noise channel Λ. Then the character-weighted survival probability is given by the multi-exponential decay above, up to an error term ε_N satisfying |ε_N| < δ_1 δ_2^N, where δ_1 and δ_2 are both small for high-fidelity gates.

Since we know that λ_{i,j} ≈ 1 for high-fidelity gates, ε_N is negligible compared to S_i(N) for moderately large N. This theorem implies we may safely use the RB protocols even in the presence of gate-dependent errors, although we will see that the interpretation of the estimated fidelity is slightly modified. In what follows, we will use the notation E[·] for the average (1/|G|) Σ_{U∈G} (·) to make our equations cleaner. We will also denote the piece of Û acting on the (i, j)th subspace of H ⊗ H by φ_{i,j}(U). We first prove a technical lemma.

Lemma 1. There exist Hermiticity-preserving operators L̂ and R̂ satisfying Eqs. A1–A3, where D̂ = Σ_{i,j} λ_{i,j} P̂_{i,j} and λ_{i,j} is the largest-magnitude eigenvalue of the matrix operator E[Ũ ⊗ φ_{i,j}(U)*].

Proof. We can rewrite Eq. A1 by decomposing L̂ as L̂ = Σ_{i=1}^{I} Σ_{j=1}^{a_i} L̂_{i,j}, where L̂_{i,j} acts only on the (i, j)th subspace (but has arbitrary range). Our equation then becomes an eigenvector equation for L̂_{i,j}. We can rearrange the matrix elements of L̂_{i,j} into a column vector vec(L̂_{i,j}), which gives a more explicit form of the eigenvector equation. This equation has a solution, since we picked λ_{i,j} to be an eigenvalue of E[Ũ ⊗ φ_{i,j}(U)*]. Similarly, we can find a solution to Eq. A2 by expressing R̂ = Σ_{i=1}^{I} Σ_{j=1}^{a_i} R̂_{i,j}, where R̂_{i,j} has range restricted to the (i, j)th subspace (but has arbitrary domain).

Since we found the R̂_{i,j} and L̂_{i,j} by solving eigenvalue equations, they can be multiplied by arbitrary constants and still solve Eqs. A1 and A2. We use this freedom to satisfy Eq. A3. For each (i, j), we know the product R̂_{i,j} L̂_{i,j} acts only within the (i, j)th subspace. Conjugating by a unitary does not change this, since the unitaries do not mix irreps. By Theorem 1, the twirl of R̂_{i,j} L̂_{i,j} is thus proportional to P̂_{i,j}. By multiplying R̂_{i,j} by an appropriate constant, we may arrange E[Û R̂_{i,j} L̂_{i,j} Û†] = λ_{i,j} P̂_{i,j}, from which Eq. A3 follows.

Proof of Theorem 2. We begin with our formula for the character-weighted survival probability. Ref. [38] demonstrated that Δ_U → 0 as Ũ → Û, so that these error terms become negligible for high-fidelity gates. We still need to relate the measured decay parameters λ_{i,j} to a quality measure of the noisy gates {Ũ}. Without loss of generality, we may assume Ũ = L̂_U Û R̂, where R̂ is the operator in Lemma 1 and L̂_U is a gate-dependent operator. The operator R̂L̂_U is then the gate-dependent analogue of the operator Λ̂. Define a gate-dependent average fidelity F; F reduces to F_Λ for gate-independent noise.
Using Eq. 10 to compute the fidelity of a channel in terms of the trace, we can evaluate the resulting trace by assuming R̂ is invertible, using Eq. A2 in the second line. We therefore end up with the same formula for estimating F as Eq. 5 gives for F_Λ. If R̂ is not invertible, we can perturb R̂ by an arbitrarily small amount to make it invertible and the relationship still holds; thus, it holds for arbitrary R̂.

To prove that G forms a unitary 2-design, we need to show (see Section IV B of the main text) that (1/|G|) Σ_{U∈G} p(U, U*) = ∫ dU p(U, U*) for any balanced polynomial p(U, U*) of degree at most 2 in the elements of U and U*. Any such p(U, U*) can be written as a linear combination of terms of the form U A U† B U C U† and U D U†, where A, B, C, D are matrices. We are thus reduced to proving the corresponding identities, Eqs. B2 and B3, for arbitrary matrices A, B, C, D. In the following, we will make repeated use of an elementary identity of complex roots of unity (Fact 4): a sum of ω^{h^T v} over all h vanishes unless v = 0.

We evaluate the LHS by using Eq. B1 for the conjugation of a general Pauli element. We note that η = h^T v + (⋯), where (⋯) denotes terms that do not depend on h. We see by Fact 4 that, for fixed M, the sum over h gives zero unless v = 0, while when v = 0 it is clear that LHS = 1. This proves Eq. B3.

Degree 2 polynomials

We now turn to Eq. B2. We prove this using methods from [9], who proved the case d = 2. First, we evaluate the RHS of Eq. B2, which splits into four cases. We then need to evaluate the LHS of Eq. B2 for each of the four cases. In the first case, the required equality follows directly. In the second case, we use Eq. B1 to simplify each summand in the LHS; the average over the group G then gives ω^{−v_A^T Q v_A} · 1. In the third case, we again simplify each summand using Eq. B1, but with an additional B in between. The average over h does not affect this sum, so we only need to consider the average over M. We evaluate the average by realizing that, if d is prime, the Clifford group sends every non-identity Pauli string to every other non-identity Pauli string uniformly. Thus, letting M run over all symplectic matrices makes Mv_A run uniformly over all vectors in Z_d^{2n} \ {0}. The LHS then follows, where in the final step we used Fact 4. In the last case, each summand is of the form ω^{h^T v + (⋯)}, where (⋯) represents terms that are independent of h. We can again apply Fact 4 to find that the sum over h gives zero. We have thus proved LHS = RHS in each of the four cases, which establishes Eq. B2.

Appendix C: Leakage RB irreps

Let G be a unitary group indexed by a ∈ A: G = {U_{a,σ} : a ∈ A, σ = ±1} = {U_{1,a} ⊕ σU_{2,a} : a ∈ A, σ = ±1}, where {U_{1,a}} and {U_{2,a}} are each unitary 1-designs on their respective subspaces. We claim that |1_1⟩ and |1_2⟩ are the only trivial irreps of the natural representation of G. Next, we prove that if {U_{1,a}} and {U_{2,a}} are in addition unitary 2-designs and d_1 = d_2, then H_{1⊥} is irreducible and multiplicity-free. To prove these statements, we need a standard result from representation theory.

Fact 5 (Schur orthonormality). If χ is the character of an arbitrary representation φ, and χ_i is the character of an irrep φ_i, the multiplicity a_i of φ_i is a_i = (1/|G|) Σ_{U∈G} χ_i(U)* χ(U). For a proof, see [37].

We start with the trivial irreps. It is clear that both |1_1⟩ and |1_2⟩ are trivial irreps. In the case of the trivial irrep we have χ_0(U) = 1, so Fact 5 gives a chain of equalities in which the third equality follows from the unitary 2-design property, and the fourth follows from the fact that H_{1⊥} is
16,326.8
2020-10-30T00:00:00.000
[ "Physics", "Computer Science", "Mathematics" ]
Synthetic biology as a source of global health innovation Synthetic biology has the potential to contribute breakthrough innovations to the pursuit of new global health solutions. Wishing to harness the emerging tools of synthetic biology for the goals of global health, in 2011 the Bill & Melinda Gates Foundation put out a call for grant applications to "Apply Synthetic Biology to Global Health Challenges" under its "Grand Challenges Explorations" program. A highly diverse pool of over 700 applications was received. Proposed applications of synthetic biology to global health needs included interventions such as therapeutics, vaccines, and diagnostics, as well as strategies for biomanufacturing, and the design of tools and platforms that could further global health research. Innovative approaches are required to solve some of the most difficult global health challenges, where solutions must both comprise breakthrough science and be practical, affordable, and accessible to people in need. Synthetic biology (Andrianantoandro et al. 2006) is one such approach with the potential to deliver global health solutions, including vaccine and drug creation, diagnostics, and combinations of interventions within a single biological system. Characterized by a bottom-up approach to designing biological systems for a specific purpose, synthetic biology offers opportunities for achieving goals that observation and analysis do not. A synthetic goal forces science to broach and solve problems that are not readily encountered through analysis alone, driving the creation of new solutions. Like global health, synthetic biology is highly multidisciplinary, bringing together tools and perspectives from traditional health and life sciences disciplines such as cell biology, chemistry, genetics, pathology, and immunology, among others, as well as disciplines less tightly linked to the health sciences, such as engineering, materials science, and fabrication. And synthetic biology is already showing promise in fields such as industrial biology and biofuels for improving the economics and accessibility of products, for example by reducing manufacturing costs and improving production efficiency. For these reasons, synthetic biology has the potential to contribute novel and radical innovations to the pursuit of new global health solutions. Specific examples of how synthetic biology could be applied to global health needs might include:

• A novel intervention to prevent infectious disease using chemicals, materials or organisms engineered via synthetic biology approaches;
• Chemicals or materials biofabricated to improve the efficacy of disease treatment, or to increase the chemical diversity available for new drug discovery/development;
• A diagnostic biosensor for a global health indication using genetic circuitry or other approaches for stimulus and response induction;
• A synthetic instance of a biological system to accelerate development of global health interventions (e.g. a predictive model for preclinical drug or vaccine testing);
• A synthetic instance of a biological system to test and further our understanding of that system, addressing a critical knowledge gap in global health (e.g. a synthetic model of disease pathogenesis).

To be sure, synthetic biology faces significant hurdles on the road to delivering global health solutions. These include potential safety concerns; in particular, preventative interventions such as vaccines for global infectious diseases have a high safety bar.
Delivery challenges inherent to an innovative idea (e.g., ensuring that an intervention reaches and enters the relevant tissues or cells in a patient) can become outright barriers when considered in a developing-world context, where infrastructure, technologies, or expertise can easily be a limiting factor. While synthetic biology has the potential to decrease costs of some products, biological interventions in general are not inherently low-cost, and may not be cost-effective in global health settings. In addition, synthetic biology is subject to the same ethical, legal, and social (ELSI) concerns that pertain to genetic engineering, sometimes in magnified form. ELSI issues are outside the scope of this article; see Anderson et al. (2012) for a recent framing of the issues and a perspective on addressing them, as well as work appearing in this Special Issue (Douglas and Stemerding 2013;Hollis 2013;Van den Belt 2012). Results of an open grant call Wishing to harness the emerging tools of synthetic biology for the goals of global health, in 2011 the Bill & Melinda Gates Foundation put out a call for grant applications to ''Apply Synthetic Biology to Global Health Challenges'' under its ''Grand Challenges Explorations'' (GCE) program. 2 The Gates Foundation's Global Health division aims to harness advances in science and technology to save lives in developing countries. GCE grants afforded an ideal mechanism by which to direct the attention and talents of the nascent field of Synthetic Biology toward Global Health needs of which it might otherwise not be aware. In addition, it was intended to attract applicants from disciplines beyond a typical biomedical background, such as bioengineering and biophysics, thus increasing the crossdisciplinary participation and leading to more innovations in the synthetic biology approach to global health. Over 700 applications were received across two rounds of the grant call. A wide array of global health needs was represented, with the majority of proposed projects targeting infectious diseases such as HIV, malaria and tuberculosis. Proposed applications included creating synthetic biology variations on traditional health interventions such as therapeutics, vaccines, and diagnostics, as well as strategies for biomanufacturing, and the design of tools and platforms that could further global health research. The applicant pool was highly diverse on dimensions of geography and institution, indicative of the breadth of interest and activity in the field of synthetic biology. A panel of reviewers assessed applications on multiple dimensions including responsiveness to the grant call, fit with Gates Foundation priorities, innovativeness, and potential for ultimate impact, including practical implications such as cost and deliverability. A total of thirty $100,000 grants were awarded to fund projects seeking to apply synthetic biology to global health needs, summarized in Table 1. The resulting project portfolio reflects the global health priorities of the Bill & Melinda Gates Foundation, 3 and thus cannot be considered to be generally representative of global health research, synthetic biology research, or the intersection thereof. Nevertheless, the sample set is sufficiently large and diverse to allow interesting themes to emerge. These are discussed below, grouped by application area. 2 Grand Challenges Explorations (GCE) fosters innovation in global health research. 
The Bill & Melinda Gates Foundation has committed $100 million to encourage scientists worldwide to expand the pipeline of ideas to fight our greatest health challenges. Launched in 2008, Footnote 2 continued more than 850 Grand Challenges Explorations grants have been awarded to innovative, early-stage projects in 50 countries. The grant program is open to anyone from any discipline, from student to tenured professor, and from any organization-colleges and universities, government laboratories, research institutions, non-profit organizations and for-profit companies. The initiative uses an agile, accelerated grant-making process with short two-page applications and no preliminary data required. Initial grants of $100,000 are awarded two times a year. Successful projects have the opportunity to receive a follow-on grant of up to $1 million. For more information about GCE, see http://www.grandchallenges.org/Explorations/. 3 To be considered, proposals were required to closely align with one or more of the Bill & Melinda Gates Foundation's Global Health priority areas: malaria, HIV, tuberculosis, pneumonia, enteric disease & diarrhea, maternal neonatal & child health, nutrition, polio, and/or family planning. For information about the foundation's strategies in its priority Global Health areas, see http://www.gatesfoundation.org/ global-health/Pages/global-health-strategies.aspx. Diagnostics/biosensors A prominent theme was the use of synthetic biology approaches to create novel diagnostics and biosensors, with 13 of the 30 awarded grants falling into this category. A recurring strategy was to engineer whole biological systems-living cells, such as yeast or bacteria, or viruses/ phage-to build the molecular apparatus necessary to detect analytes of interest and generate a detectable signal, such as the production of a protein-based pigment. The resulting biosensor could be added to a sample (e.g. blood, urine, water) or, in some cases, could be deployed in vivo in the human body, in the manner of a probiotic. Such biological system-based biosensors have the potential to be relatively low cost to produce and deploy in developingworld environments, leveraging the benefits of biologically replicating systems and pathways. Challenges of the approach (e.g. variation, unpredictability) stem from the inherent complexities of living systems, and include likely difficulties in achieving robust, consistent manufacturing. Some proposals took the biosensor concept a step further, adding a molecular process intended to kill a specific pathogen downstream of its detection, resulting in a combination diagnostic-therapeutic. Another theme in the diagnostic category was the use of DNA as a nanomaterial, rather than its more typical role as a diagnostic analyte. The proposed DNA-based sensors would bind specifically to a particular pathogen to signal its presence in a clinical or environmental sample, and have the potential benefit of being highly specific, stable and amenable to field use. Therapeutics and vaccines Many projects proposed to combat a specific infectious pathogen important to global health, either as a therapeutic (i.e., treating an existing infection) or as a vaccine (i.e., preventing infection). Proposed solutions in the former category included novel classes of engineered large molecules, such as synthetic peptides and artificial transcription factors (which might also be useful as tools/reagents for target identification and validation). 
Other proposed ''therapeutics'' are actually whole biological systems (cells, viruses), engineered to specifically attack pathogenic organisms. Similarly, numerous projects explored the possibility of genetically engineered whole biological systems, such as commensal yeast, beneficial gut bacteria, or viruses, to produce and deliver antigens for global health pathogens, to be used as an oral/ingested vaccine. Enteric pathogens such as V. cholerae and rotavirus were the most popular targets, though some proposals dealt with HIV, TB, and malaria. Both therapeutics and vaccines based on whole biological systems carry with them the advantages and challenges described above for biosensors. However, the challenges are magnified due to the biosafety concerns and regulatory requirements applicable to an intervention that would be delivered in the human body. Some proposed strategies are so novel that defining and navigating the clinical development and regulatory path is likely to be as significant a potential barrier to eventual deployment in global health as will be the (considerable) technical hurdles. Biomanufacturing A few applications proposed to apply synthetic biology approaches to the problem of producing existing global health interventions, such as drugs against relevant pathogens, typically with the objective of reducing the costs and technical challenges of at-scale manufacturing of critical (and currently limiting) active ingredients (see Vohra and Blakely 2013). Another recurring theme was utilizing biosynthetic pathways to increase chemical diversity, particularly in natural products categories. While biomanufacturing as a general concept is not novel, projects tended to employ a level of systems thinking and tackle a degree of biological complexity that elevates them to the ''synthetic biology'' category. Tools and platforms There was a strong showing-both in numbers and in creativity-of applications proposing to create tools and platforms with the potential to further research and development in global health. These included designing drug and vaccine discovery systems, such as microbiological systems that expressed human genes for screening inhibitors, or aptamers to accelerate in vivo validation of antigens. Others proposed to build a cell-free synthetic system of a pathogen to screen compounds, using building blocks from both pathogenic and non-pathogenic bacteria. Potential advantages here include discovery tools with more informative and/or more rapid readouts. Another theme was developing and enhancing the synthetic biology toolkit itself, with a focus on enabling work in global health priority pathogens (e.g., Plasmodium genetics tools). Concluding thoughts In reviewing the research proposals that were submitted, the question repeatedly arose of what, exactly, constitutes synthetic biology, and whether specific projects and approaches qualified as such. For example, when does a research plan transcend ''conventional'' genetic engineering and become synthetic biology? Replacing an organism's entire genome with a synthetic genome seems to clearly fall into the category of synthetic biology, while inserting a single new synthetic gene might not; but what about projects that propose to undertake extensive engineering of a genome, with over a dozen gene insertions and deletions, and even construct novel gene pathways utterly foreign to the host organism? 
Other proposals seemed conventional at first read, but offered a synthetic biology ''twist''; for example, some projects proposed peptide mimetics or biomimicking molecules generated via fairly standard chemical synthesis methodologies, but cast them in roles that required these non-biological components to perform biological functions. See Andrianantoandro et al. (2006) for a review that provides a definition of synthetic biology and distinction versus genetic engineering, along with examples and applications. For our purposes, because the primary objective of the call was to support innovative solutions to global health needs rather than drive the field of synthetic biology per se, in the end we were fairly lenient with the definition of ''synthetic biology'' in making grant awards. A few proposals that fell outside of the themes described above gave a glimpse of application areas with large, relatively untapped potential in global health, worthy of consideration when setting future research priorities. For example, the generation of novel enzymes or enzymatic activities, either alone or in concert with more complex metabolic engineering efforts, can be used to design or select for enzymes that perform reactions not found in nature. By generating new enzymatic functions, biological and chemical products of importance to global health could be produced more efficiently and cheaply for use as therapeutics, adjuvants, and a host of other applications, and chemical diversity could be expanded. Another area with immense promise is the use of synthetic biology to enhance crop nutrition and agriculture. Combining the existing knowledge and tools of genetic engineering in agriculture with emerging synthetic biology approaches has the potential to transform the way agriculture is practiced, yet the majority of progress to date has been in staple crops of importance in developed countries, such as corn and soy. Synthetic biology approaches could play an important role in developing-world agriculture if applied to other crops, including the engineering of novel pathogen resistance, enhanced growth characteristics, or the introduction of exogenous biosynthetic pathways for the production of added nutrition into staple crops. While agricultural applications of synthetic biology were outside of the focus of the GCE grant call, several applications in this category were received, among which one was funded: a proposal to introduce a nitrogen-fixing gene cluster from naturally occurring bacteria into agricultural crops, enabling crops to capture and metabolize nitrogen from the atmosphere. Given the early stage of the research of the proposed projects and the highly innovative, technically risky strategies employed, as well as the challenges outlined above to delivering effective, affordable global health solutions, it will be years before the impact of the call to ''Apply Synthetic Biology to Global Health Challenges'' can be measured. However, the initiative can already be declared a success against the objective of drawing the focus of researchers in the space to this area of critical need. Applicants hailed from every inhabited continent, from both public and private institutions, from academia and industry. The high diversity and quality of global health issues, technologies, and strategies in the application pool is an exciting testament to the creativity and capabilities already resident in the synthetic biology research community. 
There is great opportunity to support continued innovation in this emerging field that can be brought to bear on the needs of the developing world.
3,540.6
2013-07-19T00:00:00.000
[ "Biology", "Engineering", "Environmental Science", "Medicine" ]
Deep learning in data science: Theoretical foundations, practical applications, and comparative analysis . Deep learning has emerged as a transformative technology in data science, revolutionizing various domains through its powerful capabilities. This paper explores the theoretical foundations, practical applications, and comparative analysis of deep learning models. The theoretical foundations section discusses key neural network architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformers, highlighting their unique capabilities in processing different types of data. Optimization algorithms crucial for effective training, including Stochastic Gradient Descent (SGD) and Adam, are examined. Regularization techniques for preventing overfitting and enhancing generalization are also addressed. Practical applications in healthcare, finance, and retail showcase the real-world impact of deep learning. A comparative analysis of performance metrics demonstrates the superiority of deep learning models over traditional methods. Despite their advantages, deep learning models face limitations and challenges, including data dependency and interpretability issues. The paper concludes by emphasizing the ongoing research efforts to mitigate these challenges and ensure the continued advancement of deep learning in data science. Introduction In the realm of data science, deep learning has emerged as a transformative force, reshaping how we analyze and derive insights from vast and complex datasets.Unlike traditional machine learning techniques, which often require handcrafted features and struggle with high-dimensional data, deep learning models autonomously learn hierarchical representations of data, leading to remarkable performance improvements in a wide array of tasks.At the core of deep learning lies the neural network architecture, a computational framework inspired by the biological neural networks of the human brain.These architectures, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformers, have proven to be exceptionally adept at processing different types of data, ranging from images and text to sequential and time-series data.Convolutional Neural Networks, for instance, have revolutionized computer vision tasks by preserving spatial hierarchies in visual data, allowing them to excel in tasks like image classification, object detection, and image segmentation.Recurrent Neural Networks, on the other hand, are tailored for sequential data processing, making them ideal for tasks such as natural language processing, speech recognition, and time-series forecasting. 
Meanwhile, Transformers have introduced a paradigm shift in sequence modeling by leveraging self-attention mechanisms to capture long-range dependencies in data, leading to breakthroughs in tasks like machine translation, text generation, and language understanding. However, the success of deep learning models is not solely attributed to their architectural design. Optimization algorithms play a critical role in training these models effectively, ensuring that they converge to meaningful solutions while avoiding issues like overfitting. Techniques like Stochastic Gradient Descent (SGD), Adam, and RMSprop are commonly used to minimize the loss function during training, enabling the models to learn from large-scale datasets efficiently [1]. Moreover, regularization techniques such as dropout and L1/L2 regularization are employed to prevent overfitting and improve the generalization of deep learning models. These techniques add constraints to the optimization process, helping the models generalize well to unseen data and improving their robustness in real-world scenarios. In this paper, we delve into the theoretical foundations of deep learning, exploring the nuances of neural network architectures, optimization algorithms, and regularization techniques. We also examine practical applications of deep learning across various domains, highlighting its transformative impact on industries such as healthcare, finance, and retail. Additionally, we conduct a comparative analysis to evaluate the performance of deep learning models against traditional machine learning methods, providing insights into their efficacy and potential limitations. Through this comprehensive exploration, we aim to elucidate the significance of deep learning in data science and pave the way for further advancements in this rapidly evolving field.
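As a concrete illustration of the self-attention mechanism mentioned above, the following minimal NumPy sketch computes scaled dot-product attention for one toy sequence. The dimensions and random projection matrices are illustrative assumptions, not taken from BERT, GPT, or any cited model.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilized softmax
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention for one sequence.
    X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_k) projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise token relevance
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # weighted mix of value vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                  # 5 tokens, d_model = 8
Wq, Wk, Wv = (rng.normal(size=(8, 4)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)   # (5, 4)
```

Because every token attends to every other token in a single matrix product, the computation parallelizes well, which is the training advantage noted above.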
Neural Network Architectures

Deep learning's success in data science largely depends on the architecture of the neural networks employed. In this section, we delve into various architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Transformer models, discussing their unique capabilities and limitations in processing different types of data. CNNs excel in analyzing visual imagery by preserving the spatial hierarchy, which makes them ideal for tasks like image classification and object detection. For instance, a CNN might utilize a series of convolutional layers to detect edges in early layers, shapes in middle layers, and specific objects in deeper layers, as demonstrated by their pivotal role in systems like autonomous vehicles and facial recognition technologies. Recurrent Neural Networks (RNNs) are favored for their ability to handle sequential data like time-series or natural language [2]. An RNN processes data sequentially, maintaining an internal state that captures information about previous elements in the sequence, which is crucial for applications such as speech recognition or language translation. For example, in language modeling, RNNs predict the probability of the next word in a sentence based on the previous words, which is fundamental for generating coherent text or performing effective machine translation. Transformers provide an advanced approach to managing sequence-based problems without the need for recurrent processing [3]. Unlike RNNs, Transformers use self-attention mechanisms to weigh the importance of different words in a sentence, regardless of their positional distance from each other. This architecture allows for more parallelization during training and leads to significant improvements in tasks such as natural language processing (NLP), where models like BERT and GPT have set new standards for understanding and generating human-like text.

Optimization Algorithms

Optimization algorithms are critical in training deep learning models effectively. This subsection focuses on algorithms like Stochastic Gradient Descent (SGD), Adam, and RMSprop, explaining their roles and mechanisms in minimizing the loss function during training. SGD, for example, updates the model parameters using a fixed-size step based on the gradient of the loss function, which helps in navigating the complex landscapes of high-dimensional parameter spaces typical of deep networks.
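To make the edges-to-shapes-to-objects hierarchy described above concrete, here is a minimal convolutional stack in PyTorch. The layer widths and the 10-class output are illustrative assumptions, not an architecture from the paper.

```python
import torch
import torch.nn as nn

# Early layers respond to low-level features (edges, textures); deeper
# layers compose them into shapes and object parts. Sizes are illustrative.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # edges
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # shapes
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # object parts
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 10),                                       # class logits
)

x = torch.randn(1, 3, 32, 32)  # one 32x32 RGB image
print(model(x).shape)          # torch.Size([1, 10])
```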
Stochastic Gradient Descent (SGD) is a foundational optimization technique in neural network training. Unlike traditional gradient descent, which computes the gradient of the cost function using the entire dataset to update model parameters, SGD updates parameters incrementally for each training example or small batch. This incremental approach helps in reducing the computational burden, making it feasible to train on large datasets. Mathematically, the parameter update rule in SGD is given by θ ← θ − η · ∇_θ J(θ; x^(i), y^(i)), where θ represents the parameters of the model, η is the learning rate, and ∇_θ J(θ; x^(i), y^(i)) is the gradient of the cost function with respect to the parameters for the i-th data point (x^(i), y^(i)). This method benefits from faster iterations and a natural regularization effect due to the noise introduced by the random selection of data points, which helps prevent overfitting [4]. Adam, which stands for Adaptive Moment Estimation, combines the benefits of two other extensions of SGD: Root Mean Square Propagation (RMSprop) and Momentum. Adam calculates an exponential moving average of the gradient and the squared gradient, and the parameters beta1 and beta2 control the decay rates of these moving averages. This adjustment helps in handling sparse gradients on noisy problems, which is particularly useful in applications such as training large neural networks for computer vision.

Regularization Techniques

To prevent overfitting and enhance the generalization of deep learning models, regularization techniques are employed. This part covers methods such as dropout, L1 and L2 regularization, and early stopping. Dropout, specifically, involves randomly setting a fraction of input units to zero at each update during training time, which helps in preventing neurons from co-adapting too much. L1 and L2 regularization add a penalty on the magnitude of coefficients. L1 regularization can yield sparse models where some coefficients become exactly zero [5]. This is useful in feature selection. L2 regularization, on the other hand, tends to spread error among all the terms and is known to be less sensitive to outliers, thereby promoting model reliability. Early stopping, a form of regularization used to avoid overfitting when training a learner with an iterative method such as gradient descent, involves ending model training as soon as the validation performance begins to deteriorate, despite continued improvement in training performance.
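A minimal NumPy sketch of the per-example update rule θ ← θ − η ∇_θ J(θ; x^(i), y^(i)) reconstructed above, applied to a least-squares loss on synthetic data (the learning rate, data, and epoch count are illustrative assumptions):

```python
import numpy as np

# SGD on J = 0.5 * (x . theta - y)^2, one example at a time.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
theta_true = np.array([1.0, -2.0, 0.5])
y = X @ theta_true + rng.normal(scale=0.1, size=200)

theta = np.zeros(3)
eta = 0.05                                   # learning rate
for epoch in range(20):
    for i in rng.permutation(len(X)):        # shuffling is the "stochastic" part
        grad = (X[i] @ theta - y[i]) * X[i]  # gradient for the i-th example
        theta -= eta * grad                  # theta <- theta - eta * grad
print(theta.round(2))                        # close to [ 1. -2. 0.5]
```

The shot-to-shot noise of the single-example gradient is exactly the regularizing randomness described above.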
Healthcare Deep learning has emerged as a transformative force in healthcare, offering innovative solutions to complex challenges in disease diagnosis and genetic research.One of the most notable applications is in diagnostic imaging, where convolutional neural networks (CNNs) have demonstrated remarkable performance in detecting and classifying various medical conditions, including cancerous lesions in mammography.These CNN models leverage their ability to extract meaningful features from images, enabling accurate and timely diagnoses that rival those made by trained radiologists.In addition to diagnostic imaging, deep learning plays a crucial role in genomics, where it aids in predicting gene activation patterns and understanding disease mechanisms [6].By analyzing vast genomic datasets, deep learning techniques can identify subtle genetic variations associated with diseases, paving the way for personalized medicine approaches tailored to individual patients.Moreover, the speed and accuracy of deep learning models in analyzing genomic data surpass traditional methods, enabling researchers to make significant strides in unraveling the complexities of genetic diseases.The integration of deep learning into healthcare practices not only improves diagnostic accuracy but also enhances patient outcomes by facilitating early disease detection and personalized treatment strategies.However, challenges such as data privacy concerns, model interpretability, and regulatory compliance remain areas of ongoing research and development.Addressing these challenges is crucial to ensuring the safe and effective deployment of deep learning technologies in healthcare settings, ultimately leading to improved patient care and outcome. Finance In the finance sector, deep learning has revolutionized various aspects of financial analytics, including risk assessment, algorithmic trading, and fraud detection.One of the primary applications is in credit risk prediction, where deep learning models analyze vast amounts of financial data to assess the creditworthiness of individuals and businesses.By considering numerous variables such as transaction history, user behavior, and macroeconomic factors, these models can provide more accurate risk assessments compared to traditional methods like logistic regression.Furthermore, deep learning plays a crucial role in algorithmic trading, where it enables high-frequency trading strategies that leverage historical data, sentiment analysis, and market data to predict stock movements.Deep neural networks excel in identifying complex patterns in financial data, allowing traders to make informed decisions and adapt to market dynamics in real-time [7].Additionally, deep learning models are instrumental in fraud detection, where they help identify fraudulent activities that evade traditional detection systems.By analyzing transaction patterns and user behavior, deep learning models can detect anomalies indicative of fraud and flag suspicious activities for further investigation.While the adoption of deep learning in finance offers significant benefits in terms of accuracy and efficiency, challenges such as model interpretability, regulatory compliance, and cybersecurity remain areas of concern.Addressing these challenges is essential to ensuring the robustness and reliability of deep learning applications in the finance sector, ultimately safeguarding the integrity of financial markets and protecting investors' interests. 
Retail

Deep learning technologies have transformed the retail industry, revolutionizing customer experience and operational efficiency. One of the key applications is in personalized recommendation systems, where deep learning models analyze vast amounts of customer data, including past purchases, browsing history, and search queries, to predict products that customers are likely to be interested in. By leveraging advanced machine learning algorithms, these recommendation systems can deliver personalized shopping experiences that enhance customer satisfaction and increase sales. Moreover, deep learning models play a critical role in inventory management, where they forecast product demand based on sales data, seasonal trends, and economic indicators. By accurately predicting demand, retailers can optimize their inventory levels to prevent overstock and understock situations, thereby reducing carrying costs and maximizing sales opportunities [8]. Additionally, deep learning assists in optimizing pricing strategies by analyzing competitors' pricing, market demand, and consumer behavior. By dynamically adjusting prices in real time, retailers can maximize profit margins while ensuring competitiveness in the market. While the adoption of deep learning in retail offers significant benefits in terms of customer engagement and operational efficiency, challenges such as data privacy concerns, ethical considerations, and regulatory compliance remain areas of concern. Addressing these challenges is essential to ensuring the responsible and ethical deployment of deep learning technologies in the retail sector, ultimately fostering trust and loyalty among customers while driving business growth [9].

Performance Metrics

In evaluating the effectiveness of deep learning versus traditional predictive analytics models, key performance metrics such as accuracy, precision, recall, and F1-score are employed. A detailed quantitative analysis of these metrics reveals that deep learning models often outperform their traditional counterparts, particularly in tasks involving large and complex datasets. For instance, a comparative study using CNNs for image recognition tasks reported an accuracy improvement from 80% with traditional machine learning models (such as SVM and random forests) to over 95% with CNNs, as shown in Table 1. Precision and recall metrics also show significant improvements, which is critical in applications like medical diagnostics where false negatives or false positives can have serious implications. The F1-score, which is the harmonic mean of precision and recall, is particularly useful for evaluating models on imbalanced datasets, common in real-world scenarios. Deep learning models tend to achieve higher F1-scores compared to traditional models, demonstrating their superior ability to balance recall and precision [10].
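For reference, the four metrics discussed above can be computed with scikit-learn; the labels below are toy values for a binary task, not data from the cited study.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # ground-truth labels (toy)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]  # model predictions (toy)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))  # harmonic mean of P and R
```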
Computational Efficiency While deep learning models offer significant advantages in terms of performance, they also come with higher computational demands.These models require substantial processing power and memory, particularly when training large networks on vast datasets.However, the advent of GPU computing has dramatically improved the computational efficiency of training deep learning models.GPUs offer parallel processing capabilities that are well-suited to the matrix and vector operations fundamental to neural network training.For example, training a deep neural network on a standard CPU might take weeks, but can be reduced to days or even hours with GPU acceleration.Furthermore, techniques such as distributed computing allow deep learning tasks to be scaled by training models across multiple GPUs simultaneously, effectively managing large computational loads [11].Despite these advances, the energy consumption and hardware costs associated with deep learning models are considerable, which might not be justifiable for all applications, particularly those with limited budget or computing resources.Figure 1 illustrates the training times for deep learning models using different technologies. Limitations and Challenges Despite their robust performance, deep learning models face several limitations and challenges that can affect their practical deployment.One of the primary concerns is the dependency on large amounts of training data.Deep learning models are inherently data-hungry; they require vast datasets to perform well, which can be a significant hurdle in fields where data is scarce or expensive to acquire.Moreover, the issue of interpretability remains a major challenge.Unlike traditional models where decision processes might be more transparent (e.g., decision trees), deep learning models often operate as "black boxes," where the decision-making process is not easily understood.This lack of transparency can be problematic in industries requiring rigorous validation and explanation of model decisions, such as healthcare and finance.Additionally, deep learning models are vulnerable to adversarial attacks-small, intentionally designed perturbations to input data can deceive models into making incorrect decisions. This vulnerability poses security risks, particularly in sensitive applications like autonomous driving and cybersecurity.These challenges necessitate ongoing research and development to find solutions that can mitigate these limitations, ensuring that deep learning models are both powerful and practical tools across various applications. 
Conclusion In conclusion, deep learning has cemented its position as a cornerstone of data science, offering unprecedented capabilities to tackle the complexities of modern datasets and address intricate problems across diverse domains.Throughout this paper, we have delved into the theoretical underpinnings of deep learning, exploring the intricacies of neural network architectures, optimization algorithms, and regularization techniques.The practical applications of deep learning in healthcare, finance, and retail underscore its transformative impact on industries worldwide.From enhancing diagnostic accuracy in medical imaging to revolutionizing risk assessment in financial markets, deep learning has reshaped traditional practices and paved the way for innovative solutions that improve outcomes and drive efficiency.However, deep learning is not without its challenges.Data dependency and interpretability issues remain significant hurdles that must be addressed to ensure the robustness and reliability of deep learning models.Efforts to overcome these challenges through ongoing research and development are essential to unlocking the full potential of deep learning across various domains.Looking ahead, the future of deep learning is promising, with continued advancements poised to revolutionize data science and propel innovation to new heights. Table 1 . Comparative Performance Metrics of Traditional ML vs. Deep Learning Models in Image Recognition
3,715.6
2024-06-21T00:00:00.000
[ "Computer Science", "Mathematics" ]
The Practicality of the Vasicek Model in China's Financial Market

In the changing financial market, the price of financial products fluctuates continuously over time. The study of the static term structure of the interest rate on the market can no longer satisfy the actual needs, and a dynamic model is imperative. Compared with the static term structure, the dynamic model introduces a stochastic differential term on the basis of the static term structure model of the interest rate. This paper presents the relevant models, including the Vasicek model, the single-factor dynamic model, the multi-factor dynamic model, and the Kalman filter method. To conclude, in the multi-factor dynamic interest rate term structure model, the Kalman filter has many benefits. However, it still has practical limitations. The multi-factor Vasicek model still needs further analysis to find the pricing error and correct it. In research on bond pricing, risk management, and other aspects, the multi-factor model should be the main direction.

Introduction

With the development of the financial market, the continuous change of the market has a huge effect on interest rates, exchange rates, and security prices. At the same time, the financial market faces significant risks. Research on interest rate models and the term structure of interest rates is driven by both theoretical foundations and practical applications [1]. The theory of the term structure of interest rates provides a crucial theoretical basis for the pricing of financial products. It is of great significance to the whole financial market. In practical terms, the study of the term structure of interest rates also contributes to the rationalization of asset prices, thus improving the effectiveness of the capital market. This paper first introduces the Vasicek model, the Ornstein-Uhlenbeck (OU) process, and the application of the Shanghai Interbank Offered Rate (SHIBOR). Then the single-factor model is analyzed. Some of its assumptions are not valid in China's financial market. For the multi-factor dynamic model, this paper points out the three-factor model and demonstrates the direction of correction. Finally, the Kalman filter is introduced. When applying the Kalman filter, it is essential to transform the Vasicek model into state space representation form. This paper shows that further study should center on the dynamic multi-factor term structure model.

Vasicek model

Interest rate models can be divided into two categories: Gaussian models and non-Gaussian models. In a Gaussian model, the instantaneous rates may be negative, but its derivative pricing has analytic properties. The Vasicek model, a one-factor short-term interest rate model that describes interest rate movements when there is only one source of market risk, is a conventional Gaussian model. The Vasicek model can be built from the OU process. The OU process is a stochastic process defined by the stochastic differential equation dx_t = −θ x_t dt + σ dW_t. In this equation, θ and σ are parameters, and W_t denotes the Wiener process. When an additional drift is added, dx_t = −θ(x_t − μ) dt + σ dW_t. Since θ and μ are greater than 0, the drift term makes the process mean-reverting: if x_t has moved away from μ, it will drift back toward μ over time. The OU process has good time-series correlation. The equation can be solved by the method for linear first-order differential equations [2].
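A minimal Euler-Maruyama simulation of the mean-reverting dynamics dx_t = −θ(x_t − μ)dt + σdW_t above; all parameter values are illustrative assumptions, not estimates from SHIBOR data.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, mu, sigma = 2.0, 0.03, 0.02  # reversion speed, long-run mean, volatility
T, n = 5.0, 5000
dt = T / n

x = np.empty(n + 1)
x[0] = 0.10                          # start away from the mean mu
for k in range(n):
    dW = rng.normal(scale=np.sqrt(dt))                    # Wiener increment
    x[k + 1] = x[k] - theta * (x[k] - mu) * dt + sigma * dW
print(round(x[-1], 4), round(x[-1000:].mean(), 4))        # settles near mu
```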
Empirical studies point out that the 3-month-maturity Shanghai Interbank Offered Rate (SHIBOR) has the characteristics of mean reversion and a thick tail. The Vasicek model has a good ability to depict and describe the dynamic characteristics of the SHIBOR market interest rate. It also has good applicability in studying the characteristics of interest rates in China [4]. SHIBOR is a daily reference rate based on the interest rates at which banks offer to lend unsecured funds to other banks in the Shanghai wholesale (or "interbank") money market. There are eight SHIBOR rates, with maturities ranging from overnight to a year. They are calculated from rates quoted by 18 banks, excluding the four highest and the four lowest quotes and averaging the remaining data. In simulations of the 1Y SHIBOR using the single-factor Vasicek model, the result cannot completely match the actual situation [3].

Single-factor dynamic model and the Vasicek model

In the changing financial market, the price of financial products fluctuates continuously over time. The study of the static term structure of the interest rate on the market can no longer satisfy the actual needs, and a dynamic model is imperative. Compared with the static term structure, the dynamic model introduces a stochastic differential term on the basis of the static term structure model of the interest rate [5], which is based on the liquidity preference principle. According to the number of factors in the model, dynamic interest rate models can be divided into single-factor and multi-factor models for exploring the dynamic term structure.

In the single-factor dynamic term structure model of interest rates, the instantaneous interest rate is expressed as dr_t = μ(r_t) dt + σ(r_t) dW_t, where σ(r_t) dW_t is the diffusion term of the single-factor dynamic interest rate model and μ(r_t) dt is the drift term. Changes in these two terms continuously affect the instantaneous interest rate. W_t is the stochastic interest rate disturbance term in the model. It can be seen that the formula is mainly influenced by the current data and has no relationship with the historical data. As a consequence, the instantaneous interest rate in the single-factor dynamic interest rate model is consistent with the Markov property. When a random process is given the present state and all the past states, and the conditional probability distribution of its future state depends only on the present state, the random process has the Markov property [6].

Vasicek added a linear function to the drift term of the single-factor dynamic interest rate model, which yields the Vasicek model. The instantaneous interest rate and drift rate in the Vasicek model revert to the historical average level. In the Vasicek model, the market price of risk is usually set to a constant [6]. Setting the drift term as a function of the interest rate gives the model the mean-reversion property. This makes the Vasicek model more consistent with the actual situation of the economic market. However, this model cannot exclude the possibility of a negative short-term interest rate, and a linear drift function does not always coincide with the actual situation.
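The Vasicek model admits a closed-form zero-coupon bond price P(t, T), which the next section refers to. The sketch below quotes the standard textbook formula for dr_t = a(b − r_t)dt + σdW_t; it is not derived in this paper, and the parameter values are illustrative.

```python
import numpy as np

def vasicek_zcb_price(r, tau, a, b, sigma):
    """Textbook Vasicek zero-coupon bond price with tau = T - t,
    for dr_t = a*(b - r_t)*dt + sigma*dW_t."""
    B = (1.0 - np.exp(-a * tau)) / a
    A = np.exp((b - sigma**2 / (2 * a**2)) * (B - tau)
               - sigma**2 * B**2 / (4 * a))
    return A * np.exp(-B * r)

# Illustrative parameters: reversion speed 0.10, long-run level 4%.
print(vasicek_zcb_price(r=0.03, tau=5.0, a=0.10, b=0.04, sigma=0.01))
```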
Multi-factor dynamic model and the Vasicek model

In a rapidly changing market economy, the factors affecting the dynamic process of the interest rate go beyond a single short-term interest rate. Considering the short-term interest rate alone is obviously unable to meet the requirements of the market. Scholars therefore introduced multi-factor dynamic interest rate models, which are now the most widely used; among them, the affine interest rate model is the most famous [7].

The basic assumptions of the Vasicek model are: 1) due to the Markovian nature of instantaneous interest rate changes, the determination of the interest rate level is only affected by the current moment and has nothing to do with historical interest rate data or future data; 2) the price function P(t, T) of a zero-coupon bond with maturity T is determined by the short rate R(t*) for t ≤ t*; since R(t) is assumed to be Markovian, R(t*) is related to R(t); 3) the market is efficient: full information is available in the market and there are no transaction costs; market participants hold the same expectations about the same financial asset; and market participants are rational [8].

The Vasicek model has several characteristics. Firstly, the instantaneous interest rate is an unobservable value, so after simulating the instantaneous interest rate, the interest rate curve needs to be reconstructed from the path of the rate. Secondly, the price of bonds at different time points can also be calculated by the P(t, T) formula of the Vasicek model [9]. However, a shortcoming of the model is that the fitted curve cannot be made consistent with the market curve. Therefore, the Vasicek model is not a no-arbitrage model but an equilibrium model [1]. From the perspective of risk management, the Vasicek model can effectively explain changes in the curve; as long as it is not used for product pricing, this model is still useful. Finally, this model follows the normal distribution of the Wiener process and may take negative values. For the economic system of China, the Vasicek model is suitable for China's national debt market [10]. Because the simulation results of the single-factor model are not ideal, the number of factors was increased and the three-factor Vasicek model was introduced.

Kalman filter method and the Vasicek model

The Kalman filter method was first proposed by Rudolf Emil Kálmán. The Kalman filter uses a state space function to describe a dynamic system, and the estimation of model parameters mainly uses prediction error decomposition to calculate the likelihood function value [1]. Compared with other classical time series models, the filtering model has the following advantages. Firstly, the Kalman filter can comprehensively describe the state of a system with small present and past information sets, so there is no need to gather a large amount of historical data; using the Kalman filter to calculate the parameter matrix finds the most preferred estimate. Secondly, the Kalman filter can not only describe the internal situation of the whole system but also express very comprehensive information, and it can describe the relationship between the input and output variables well [11]. Finally, in classical time series models the modeling process is often based on basic assumptions that are hard to satisfy all at the same time; the Kalman filter imposes far fewer restrictions.
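To illustrate the predict-update recursion described above, here is a scalar Kalman filter for a stylized discretization of a one-factor Vasicek model (state r_{k+1} = c + φ r_k + w_k, observation y_k = r_k + v_k). The state-space form and every numeric value are assumptions for illustration; the paper's three-factor version stacks three such states.

```python
import numpy as np

def kalman_filter(y, c, phi, Q, R, r0=0.0, P0=1.0):
    """Scalar Kalman filter: state r_{k+1} = c + phi*r_k + w_k (var Q),
    observation y_k = r_k + v_k (var R)."""
    r, P = r0, P0
    out = []
    for obs in y:
        r_pred = c + phi * r                 # predict step
        P_pred = phi * P * phi + Q
        K = P_pred / (P_pred + R)            # Kalman gain
        r = r_pred + K * (obs - r_pred)      # update with the observation
        P = (1 - K) * P_pred
        out.append(r)
    return np.array(out)

rng = np.random.default_rng(0)
c, phi, Q, R = 0.004, 0.9, 1e-5, 1e-4        # implies a long-run mean of 0.04
true = np.empty(200)
true[0] = 0.03
for k in range(199):
    true[k + 1] = c + phi * true[k] + rng.normal(scale=np.sqrt(Q))
y = true + rng.normal(scale=np.sqrt(R), size=200)
est = kalman_filter(y, c, phi, Q, R, r0=y[0], P0=R)
print(round(float(np.abs(est - true).mean()), 5))  # filtered path tracks state
```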
Because the Kalman filter method requires the model to be transformed into state space form, it is necessary to transform the Vasicek model into its state space representation when solving for the parameter values of the three-factor Vasicek model. This can be discussed in terms of the state equation and the observation equation.

Conclusion

The value of capital is reflected in many aspects; its time value has always been one of the core problems in financial research, and the important tool for describing it is the dynamic term structure of interest rates. Hence, the term structure of interest rates must be involved in the pricing of underlying bonds or financial derivatives. The Vasicek model built from the OU process can be used in the valuation of financial derivatives, but it may produce negative interest rates. The single-factor Vasicek model only includes the short-term interest rate, and some of its assumptions are not valid in the Chinese financial market [6]. The multi-factor dynamic interest rate term structure model plays a guiding role. Although the Kalman filter has many advantages, it also has certain limitations. For instance, the Kalman filter is generally based on the Markov property, which requires that the future state and the past state described by the system be independent of each other given the present state; if a system does not satisfy the Markov property, it is not suitable to model it with the Kalman filter. The three-factor Vasicek model still needs some empirical analysis to find the pricing error and make corrections; then the pricing accuracy can be significantly increased [1]. In conclusion, when interest rates are used in bond pricing, risk management, the determination of benchmark interest rates, and other aspects, the dynamic multi-factor term structure model of interest rates should be taken as a main research direction.

Fig. 1. The quantification of the OU process. The equation of the Vasicek model is dr_t = a(b − r_t)dt + σ dW_t, in which b is the long-term average, a is the regression speed, and σ is the standard deviation parameter, which affects the fluctuation of the interest rate. The fluctuation amplitude has the characteristics of instantaneous random flow. a and b can be seen as the θ and μ of the OU process. The parameters and the initial condition r_0 are completely dynamic and change instantaneously [3]. According to Ito's lemma, a method of differentiating functions of random processes, d(e^{θt} r_t) = θ e^{θt} r_t dt + e^{θt} dr_t = e^{θt}(θμ dt + σ dW_t), and integrating from t to s gives e^{θs} r_s = e^{θt} r_t + ∫_t^s e^{θu} θμ du + ∫_t^s e^{θu} σ dW_u.

It can describe the characteristics of the yield curve over time well. Term structure models of the interest rate are usually either no-arbitrage models or general equilibrium models. The former is based on the expectation theory, and the latter
2,938.6
2023-01-01T00:00:00.000
[ "Economics", "Business" ]
Comparative Analysis of Production Possibility Frontier in Measuring Social Efficiency with Data Envelopment Analysis : An Application to Airports Environmental sustainability is a globally important issue, particularly in the context of global warming. Many institutions and people are interested in greenhouse gas emissions and in policies that attempt to mitigate the problem, and the aviation industry is no exception. Against this background, there has been much research on airport efficiency analysis, on undesirable outputs, and on evaluating productivity with respect to environmental factors. In efficiency analysis models with undesirable outputs in airports, there are two main types of production possibility frontiers. The first type is the frontier based on the Shephard technology, which involves a weak-disposability concept using a single abatement factor. The second is the frontier based on the Lozano-Gutiérrez technology, which tries to take weak disposability into account by regarding the undesirable outputs as inputs. However, both have limitations. Additionally, no study has shown how to apply weak disposability correctly. To identify these limitations and provide a standard for applying weak disposability, we compare the models along two issues that must be scrutinized. In this paper, we show that these two concepts have limitations in constructing the production possibility area. To overcome these limitations, we propose an undesirable-output model using multiple abatement factors based on weak disposability with the slack-based measure (SBM) approach. We comparatively analyze the different social efficiency performances, according to the two issues, among the three approaches to estimating the production possibility frontier: the Shephard model, the Lozano-Gutiérrez model, and our proposed model. To provide a correct standard of measurement and apply the characteristics of undesirable outputs, we study the question not only theoretically but also empirically, with data from Korea's 13 domestic airports.

Introduction

Environmental issues are of crucial importance, and therefore many institutions and countries have implemented strict regulation and management criteria. Above all, air pollution and greenhouse gas (GHG) emissions are the most actively discussed issues. While GHGs primarily include carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), water vapor, etc., the reduction of CO2 emissions is the main objective of policies, because CO2 constitutes about 77% of total GHG emissions [1]. The transport sector accounts for 13% of total GHG emissions, and within it, aviation accounts for 13% of transport's CO2 emissions [2]. Taken alone, air transportation accounts for 2% of total CO2 emissions [3]. Moreover, because aircraft emissions are released directly into the atmosphere, there is a viewpoint that air transportation influences air pollution more critically than other fields. These facts underline the necessity of managing CO2 emissions in air transportation. In this situation, eco-efficiency, which can evaluate the efficiency of production while considering environmental factors, plays an important role. Previous studies have suggested various standards for estimating eco-efficiency based on models using undesirable outputs. Such models have been used to estimate efficiency by taking into account the harmful side effects of production; in aviation's case, these include complaints, flight delays, noise, pollution, etc.
In models using the data envelopment analysis (DEA) method, including undesirable-output models, a production possibility function (PPF) constitutes the constraint set in evaluating efficiency. To measure efficiency correctly, the correct PPF is needed, because the boundary estimated by a PPF determines the degree of efficiency. It can be said that the correct evaluation of efficiency depends on how accurately a PPF is estimated. Undesirable-output models depend on the perspectives adopted concerning undesirable outputs and on how PPFs are constructed from those perspectives. Previous studies measuring efficiency in the aviation industry have been limited in terms of estimating the production possibility area.

Previous studies in the aviation industry have estimated the production possibility area of undesirable-output models based mainly on two perspectives. The first is the Shephard technology, which considers weak disposability a characteristic of undesirable outputs. Weak disposability is a concept describing the relation between desirable and undesirable outputs, namely, that the two outputs increase or decrease together. However, the Shephard technology has limitations in terms of practicality and violates the convexity axiom because it uses a single abatement factor. Second, Lozano and Gutiérrez suggest a model (the Lozano-Gutiérrez model) based on the perspective of treating undesirable outputs as inputs, which attempts to take weak disposability into account [4]. However, this hybrid model fails to reflect weak disposability. In spite of these limitations, previous studies have accepted the perspectives without question.

In this context, this paper compares the perspectives and undesirable-output models mainly used in the aviation industry and suggests a model that mitigates the limitations of conventional models by employing weak disposability and multiple abatement factors (an abatement factor is a variable that scales outputs down to a decreased level of production activity; it forces desirable and undesirable outputs to contract together when the level of production activity decreases). The results based on Korean airports indicate that the proposed model with the slack-based measure (SBM) approach evaluates eco-efficiency better than conventional models. The SBM-based model has the advantage that the approach imposes no directionality in finding a benchmark point, which allows the model to include the characteristics of undesirable outputs.

The paper is organized as follows: Section 2 provides a review of previous research, not only on aviation but also on other related technologies. Section 3 employs PPFs and production possibility areas to compare undesirable-output models in the context of the aviation industry; there are two issues that should be considered when using the models, and the comparison is organized according to these issues. We consider the precondition for an undesirable-output model and propose a modified model as a standard of correct constraint. To present a practical analysis of the foregoing discussion, and with the same issues as Section 3, Section 4 presents the results for the case of Korean airports based on the proposed model and compares the results between the models. Section 5 concludes.

Literature Review

Many studies have considered undesirable-output models using the DEA method, including studies that consider the characteristics of undesirable outputs and production possibility areas. An integrated measurement of productivity with environmental factors is provided by Pittman (1983) [5]. Färe et al.
(1989) claim that Shephard (1970) constructs the Shephard technology using weakly disposable undesirable outputs in a PPF. Färe et al. (1993), Chung et al. (1997), and Picazo-Tadeo and Prior (2005) apply weak disposability to the directional distance function with undesirable outputs. Additionally, Zhou et al. (2006, 2007) employ weak disposability based on a non-radial approach [6][7][8][9][10][11][12]. Hailu and Veeman (2001) regard undesirable outputs as inputs, asserting that weak disposability makes the essence of undesirable outputs ambiguous [13]. By contrast, Färe and Grosskopf (2003) argue that undesirable outputs defined as inputs, as asserted in Hailu and Veeman, cannot reflect the nature of undesirable outputs [14]. While Seiford et al. (2002) provide another perspective using a linear monotone decreasing transformation on the basis of the BCC model to reflect undesirable outputs [15], it cannot reflect weak disposability between desirable and undesirable outputs. Kuosmanen (2005) points out some limitations of the Shephard technology arising from its single abatement factor and provides the Kuosmanen technology as an alternative with multiple abatement factors [16]. Färe and Grosskopf (2009) argue that the Kuosmanen technology is not a correct production possibility function because it overestimates the area [17]. Kuosmanen and Podinovski (2009) verify that the Shephard technology has an underestimation problem and that the Kuosmanen technology is the correct PPF that fully satisfies Shephard's definition and assumption of weak disposability [18].

In the aviation industry, there have been many attempts to consider undesirable outputs when efficiency based on the DEA method is evaluated. Yu (2004) evaluates the efficiency of Taiwanese airports by using the directional distance function (DDF) and window analysis with the DEA method, factoring in aircraft noise as an undesirable output [19]; Yu uses the Shephard technology as a constraint to reflect weak disposability. Pathomsiri et al. (2008) use the DDF approach to consider delays as undesirable outputs and the Shephard technology as a constraint to assess the productivity of U.S. airports [20]. Lozano and Gutiérrez (2011) produce an undesirable-output model to assess the efficiency of Spanish airports and include delays as an undesirable output based on the SBM approach [4]. Like Hailu and Veeman, they regard undesirable outputs as inputs, include the slack of undesirable outputs, and reflect weak disposability by setting a single abatement factor. To evaluate the efficiency of Korean airports while considering CO2 emissions as an undesirable output, Ha, H.K. (2011b) uses the perspective of Hailu and Veeman on the undesirable-output equation to set the slack of undesirable outputs [23]. Fan et al. (2014) measure the efficiency of Chinese airports by employing the Shephard technology as a constraint under the CRS assumption with flight delays [24] (related studies on airlines include, for example, Ha et al. (2011a) and Scotti and Volta (2015) [21,22]).
Comparison of Perspectives on Undesirable Outputs (the Weak Disposability Issue)

There are two issues in manifesting undesirable outputs in a PPF; we address them in Sections 3.1 and 3.2, respectively. The first concerns how to define the characteristics of undesirable outputs, that is, the weak disposability issue. It is not only a matter of using an equality or an inequality in the undesirable-output restriction, but also a matter of whether weak disposability is reflected and how it is reflected. The following perspectives fall under this issue; depending on the perspective selected, the PPF is constructed differently.

Undesirable Output as Input (Input Perspective)

Undesirable outputs are generated by production activity and are therefore associated with desirable outputs. Undesirable outputs have a property similar to that of inputs: the lower the level of undesirable outputs produced, the better the production. Hailu and Veeman (2001) put undesirable outputs into a PPF as inputs [13]. The related axiom is as follows: Y is a technology that produces a desirable output v and an undesirable output w from input x. In this technology, undesirable outputs and inputs have the same inequality, which means they have the same sign of slack. In this case, undesirable outputs influence the efficiency score, particularly under the SBM approach (the slacks-based measure (SBM) is a sub-method of DEA provided by Tone (2001) [25]), and can be improved. Lozano and Gutiérrez (2011) and Ha, H.K. (2011b) employ this idea as follows [4,23]:

∑_{k=1}^{K} z_k w_{kj} ≤ w_j,  j = 1, …, J.  (1)

Equation (1) is the equation of an undesirable output w. The undesirable-output inequality is the same as the input inequality. This constraint implies that both inputs and undesirable outputs are unfavorable. Hailu and Veeman do not accept weak disposability for three reasons. First, the use of an equality restriction can produce a reference set and can thus substantially inflate the efficiency score. Second, weak disposability gives an undesirable output an undetermined effect on efficiency. Third, undesirable outputs, such as pollution, have negative shadow prices [13]. Färe and Grosskopf (2003) refute the criticism of Hailu and Veeman, but there are additional issues to consider, which we address later on. Färe and Grosskopf (2003) point out that the perspective of Hailu and Veeman is not realistic: if the inequality is satisfied, then an infinite output, namely a bad output, can be generated from a finite input [14]. In that case, the undesirable output can no longer be considered a by-product of the desirable output. Owing to this clear limitation, the perspective is barely employed in previous research. Therefore, in the comparison, we employ only the critical view of this perspective. Instead of this perspective, a hybrid of the input perspective and the weak disposability perspective, namely the hybrid perspective, will be discussed.
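To make Equation (1) concrete, the following sketch encodes the input perspective inside an additive (slack-maximizing) envelopment program under VRS with scipy.optimize.linprog. The data and the function name are illustrative, not the paper's airport dataset, and the additive objective is used instead of the full SBM ratio for brevity.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: 4 DMUs, 1 input, 1 desirable and 1 undesirable output
X = np.array([[2.0], [3.0], [4.0], [5.0]])   # inputs, shape (K, N)
V = np.array([[1.0], [2.5], [3.0], [3.2]])   # desirable outputs, (K, M)
W = np.array([[0.5], [1.0], [2.0], [2.8]])   # undesirable outputs, (K, J)
K, N, M, J = 4, 1, 1, 1

def additive_slacks(k0):
    """Additive (slack-maximizing) model under VRS, treating the
    undesirable output like an input: sum_k z_k w_kj <= w_j (Eq. 1)."""
    n_var = K + N + M + J                    # variables: [z, s_x, s_v, s_w]
    c = np.zeros(n_var)
    c[K:] = -1.0                             # linprog minimizes, so maximize slacks
    A_eq = np.zeros((N + M + J + 1, n_var))
    b_eq = np.concatenate([X[k0], V[k0], W[k0], [1.0]])
    A_eq[:N, :K] = X.T;  A_eq[:N, K:K+N] = np.eye(N)               # X z + s_x = x0
    A_eq[N:N+M, :K] = V.T;  A_eq[N:N+M, K+N:K+N+M] = -np.eye(M)    # V z - s_v = v0
    A_eq[N+M:N+M+J, :K] = W.T;  A_eq[N+M:N+M+J, K+N+M:] = np.eye(J)  # W z + s_w = w0
    A_eq[-1, :K] = 1.0                                              # VRS: sum z = 1
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n_var)
    return res.x[K:]                         # slacks; all zero => efficient

print(additive_slacks(1))
```

A zero slack vector marks the DMU as efficient. Note that the slack s_w on the undesirable output can shrink w independently of v, which is precisely the behavior that the weak disposability perspective in the next subsection objects to.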
Undesirable Output as Fixed Value (Weak Disposability Perspective)

If an innovative technology that can reduce pollutants or other harmful products, such as purifying facilities or equipment, is created in an industry, then undesirable outputs can be reduced. This paper assumes that production activities are at the same level of pollutant-reducing technology. To reduce undesirable outputs, desirable outputs also have to be reduced. The Shephard technology for undesirable outputs includes the property that the amount of outputs is reduced through weak disposability. Shephard (1970) defines weak disposability of outputs as follows: if (v, w) ∈ P(x) and 0 ≤ θ ≤ 1, then (θv, θw) ∈ P(x). If Y is weakly disposable, then, given input x, production outputs (v, w) can be downsized together by a factor of θ.

The Shephard technology axioms are as follows. Axiom 1 states that an input x and a desirable output v are freely disposable. Axiom 2 states that outputs (v, w) are weakly disposable. If desirable and undesirable outputs have null-jointness, then Axiom 3 is satisfied: if the production activity generates both undesirable and desirable outputs, and if there are no undesirable outputs, then there are no desirable outputs, because the production activity has stopped. Axiom 4 is a basic axiom of PPFs in the DEA method stating that Y is convex [7]. The formula of the Shephard technology, employing the VRS assumption, is as follows:

P^S(x) = { (v, w) :
  θ ∑_{k=1}^{K} z_k v_{km} ≥ v_m, m = 1, …, M; (a)
  θ ∑_{k=1}^{K} z_k w_{kj} = w_j, j = 1, …, J; (b)
  ∑_{k=1}^{K} z_k x_{kn} ≤ x_n, n = 1, …, N; (c)
  ∑_{k=1}^{K} z_k = 1; z_k ≥ 0; 0 ≤ θ ≤ 1 },  (2)

where
k: number of airports
m: number of desirable outputs
j: number of undesirable outputs
n: number of inputs
v_km: observed amount of desirable output m of airport k
w_kj: observed amount of undesirable output j of airport k
x_kn: observed amount of input n of airport k
v_m: specific observation value of desirable output m
w_j: specific observation value of undesirable output j
x_n: specific observation value of input n
θ: single abatement factor

In Equation (b) in (2), if the equality stood by itself instead of an inequality, this would imply no disposability; an abatement factor together with an equality is necessary to impose weak disposability. If the level of production activity changes, then outputs will also change, which is referred to as an abatement effort. The abatement factor, appearing as θ in Equation (2), allows outputs to be reduced to a decreased level of production activity and forces desirable and undesirable outputs to contract together.
Figure 1 compares the production possibility areas. If there are three observations A, B, and C, then the area bounded by aA, AB, and the horizontal line originating from point B belongs to the Hailu and Veeman technology, while the area bounded by 0A, AB, BC, and Cc belongs to the Shephard technology (P^S(x)). As stated earlier, Hailu and Veeman point out the limitations of weak disposability [13]. Färe and Grosskopf (2003) refute Hailu and Veeman, but their argument is limited. In this regard, the following supplements and modifications are provided. First, "[a] weak disposability makes undesirable output[s] leave [an] undetermined effect on efficiency." Färe and Grosskopf state that this problem is related to how efficiency is measured and thus is not a matter of a reference technology but a problem of the entire model; a detailed explanation is provided later. Second, "undesirable output[s] like pollutant[s] have negative shadow price[s]." Färe and Grosskopf insist that because a direction can be imposed on variables in the DDF, the effects of negative shadow prices and undesirable outputs on efficiency can be taken into account [14]. In Figure 2, if there is an inefficient observation e, then the arrow d1, which is the direction of projection of e, entails a contradiction. The direction of arrow d1 is decided by the DDF model, which uses the Shephard technology. A DDF that increases desirable outputs while reducing undesirable outputs is problematic: undesirable outputs with weak disposability cannot be decreased by themselves. Undesirable outputs can only be decreased if the decrease involves a decrease in desirable outputs, but the DDF allows undesirable outputs to decrease alone.

Figure 2 shows an inefficient point e. The projected direction d1 employed by the DDF allows the model to evaluate the efficiency of e. Unfortunately, d1 implies that desirable outputs increase while undesirable outputs decrease, which contradicts the assumption of weak disposability implied by the Shephard technology. Undesirable outputs not only have negative shadow prices, as pointed out by Hailu and Veeman, but also a productive property. Undesirable outputs represent a variable whose nature differs from that of inputs or desirable outputs. It is not accurate to define undesirable outputs in the same way as inputs; therefore, undesirable outputs should be recognized as a variable with properties of both inputs and desirable outputs. Weak disposability is not a means to make undesirable outputs undetermined (as Hailu and Veeman claim [13]), but a property that represents undesirable outputs as a distinct kind of variable.
In the absence of a DDF, an increase in desirable outputs and a decrease in inputs must be selected while producing the same amount of undesirable outputs (e.g., a movement from point e to the benchmark point B). This implies that the equality affects the efficiency score in terms of the whole measurement.

Undesirable Output as Hybrid Variable (Hybrid Perspective)

Lozano and Gutiérrez (2011) reflect weak disposability by using an abatement factor as well as null-jointness (Lozano and Gutiérrez (2011) and Ha, H.K.
(2011b) represent similar PPFs [4,23]; this paper representatively considers the model in Lozano and Gutiérrez because it is the first to take these types of PPF and SBM approaches), and they introduce the weak disposability perspective into the Hailu and Veeman technology. That is, they attempt to reflect a mixture of the characteristics of inputs and outputs in undesirable outputs. Equation (3) is the PPF of the Lozano and Gutiérrez model, which includes the VRS assumption:

P^LG(x) = { (v, w) :
  θ ∑_{k=1}^{K} z_k v_{km} ≥ v_m, m = 1, …, M; (a)
  θ ∑_{k=1}^{K} z_k w_{kj} ≤ w_j, j = 1, …, J; (b)
  ∑_{k=1}^{K} z_k x_{kn} ≤ x_n, n = 1, …, N; (c)
  ∑_{k=1}^{K} z_k = 1; z_k ≥ 0; 0 ≤ θ ≤ 1 },  (3)

where k, m, j, n, v_km, w_kj, x_kn, v_m, w_j, and x_n are defined as in Equation (2) and θ is the single abatement factor.

The abatement factor θ causes desirable and undesirable outputs to move together, but θ cannot satisfy weak disposability by itself; an abatement factor can only make the outputs move proportionally. In the Shephard technology, undesirable outputs have no directional nature because of the equality in (b) of Equation (2). In this case, the abatement factor can let undesirable outputs follow desirable outputs if the level of production activity remains unchanged. When undesirable outputs have a directional nature, as inputs do, changes in desirable and undesirable outputs are proportional but in different directions; this implies that they never move together and thus do not have weak disposability. In this regard, Lozano and Gutiérrez combine the advantages of the Hailu and Veeman technology with the assumption of weak disposability, but as long as the inequality or the slack of undesirable outputs exists (an inequality can be converted into an equality by introducing a slack variable), which can move independently, weak disposability cannot be reflected in the measurement. In terms of the whole measurement based on the SBM approach, a projection may head toward a benchmark point that merely has fewer undesirable outputs. Lozano and Gutiérrez thus fail to consider weakly disposable undesirable outputs; to impose weak disposability on the technology, the equality and the abatement factor are both required. Figure 1 shows the production possibility area of the Lozano and Gutiérrez technology: the area bounded by 0A, AB, and the horizontal line from B. Based on a comparison of production possibility areas, the Lozano and Gutiérrez technology shows an area different from that of the Shephard technology.
Comparison of Perspectives on the Abatement Factor (the Abatement Factor Issue)

The second issue is the abatement factor issue. The abatement factor (θ) has a crucial influence on realizing weak disposability between desirable and undesirable outputs by making the two outputs share a common contraction. In this section, we discuss the problems arising from misused abatement factors in conventional models and how to reflect the factor correctly. Yu (2004), Pathomsiri et al. (2008), and Fan et al. (2014) employ the Shephard technology as constraints in their models [19,20,24]. The Shephard technology imposes a single abatement factor on all observations, which has two key limitations. In Equation (2), θ is a single abatement factor that forces all observations or production activities to make the same abatement effort. This means that if there are three observations, production activities, or decision-making units (DMUs), for example firms A, B, and C, then these firms apply the same proportion when they control their own levels of production activity, whereas three abatement factors θ_A, θ_B, and θ_C should exist. It is not practical to set the same abatement factor for all firms. In addition, the Shephard technology has a limitation regarding convexity: Kuosmanen and Podinovski (2009) and Podinovski and Kuosmanen (2011) verify the violation of the axiom by the Shephard technology [18,26].

Kuosmanen (2005) provides the Kuosmanen technology, which uses multiple abatement factors to address these limitations; with multiple abatement factors, each observation can have its own abatement effort [16]. As suggested in Färe and Grosskopf (2003) and Shephard (1974) [14,27], if a model has weak disposability by setting the same abatement factor, then desirable (good) and undesirable (bad) outputs can contract together in a PPF. This implies that the PPF has an equality sign in the formula of undesirable outputs and is under the variable returns to scale (VRS) assumption, because under the constant returns to scale (CRS) assumption the abatement factors θ and θ_k are equal to one. Under the VRS assumption, however, there is a distinct difference between θ and θ_k. The equation of the Kuosmanen technology is given as (4); the difference between P^K(x) and P^S(x) is that the abatement factor changes from θ to θ_k:

P^K(x) = { (v, w) :
  ∑_{k=1}^{K} θ_k z_k v_{km} ≥ v_m, m = 1, …, M; (a)
  ∑_{k=1}^{K} θ_k z_k w_{kj} = w_j, j = 1, …, J; (b)
  ∑_{k=1}^{K} z_k x_{kn} ≤ x_n, n = 1, …, N; (c)
  ∑_{k=1}^{K} z_k = 1; z_k ≥ 0; 0 ≤ θ_k ≤ 1 },  (4)

where k, m, j, n, v_km, w_kj, x_kn, v_m, w_j, and x_n are defined as in Equation (2) and θ_k are the abatement factors.

Here, P^K(x) satisfies the convexity axiom and is practical in terms of abatement efforts. Figures 3 and 4 present the difference in the estimated boundary and area caused by the different abatement factors, and the difference between the figures demonstrates the violation by the Shephard technology: the Shephard method fails to include an area in its PPF that must be included by Axiom 4, convexity. Figures 3 and 4 present the PPFs generated from DMUs B and C based on Shephard's and Kuosmanen's methods; the difference between them is whether they contain the segment CH. Shephard's PPF employs a single abatement factor (θ) and does not include CH and the area generated by it, which comes from CB in Kuosmanen's PPF with multiple abatement factors (θ_k). The triangle ∆BDH can be produced from B.
Then, if H is possible to produce, CH and its area should be included in the PPF. However, in Figure 3, CH and its area are not included. Therefore, Shephard's method violates Axiom 4. This difference comes from the abatement factors in each model: the multiple abatement factors in Kuosmanen's method cause the inclusion of CH and its area, while a single abatement factor does not. More explanation is provided in Kuosmanen and Podinovski (2009) [18].

The Kuosmanen technology is a PPF that corresponds to the definition of weak disposability and satisfies all axioms. The Kuosmanen technology is the correct minimum extrapolation technology (the minimum extrapolation technology is the smallest production possibility set that satisfies all given axioms; Banker, Charnes, and Cooper (1984) established this principle [28], and it is introduced in Kuosmanen and Podinovski (2009) [18]), although the technology has a larger area than the Shephard technology, because the Shephard technology involves a violation and is limited. To address the violation and limitations, the Kuosmanen technology employs an equality and multiple abatement factors. Conventional models have limitations caused by their PPFs, which serve as constraints in the respective models. To address these limitations, this paper proposes a model that employs an equality and multiple abatement factors.

The Proposed Model

The majority of measurements using undesirable-output models have employed the DDF and SBM approaches, particularly in the aviation field. However, as mentioned in the previous section, the use of the DDF contradicts the assumption of weak disposability. The proposed model takes the SBM approach because it is appropriate for reflecting the characteristics of undesirable outputs, in that it imposes no constrained direction in finding the benchmark point. In addition, the SBM approach makes the difference between the subject and the benchmark point observable by giving the specific amount of the gap as slacks. The PPF is defined based on Shephard (1970, 1974), and the axioms are the same as those of Shephard (Section 3). Following the Kuosmanen technology, the proposed model employs multiple abatement factors, and VRS is assumed in the measurement (the reason is the same as that of Lozano and Gutiérrez (2011), which mentions that "given the limited competition among the airports, it
cannot be expected that they operate at the most productive scale size"; this principle is introduced in Banker et al. (1984) [4,28]). The proposed model is shown as follows:

Minimize ρ = (1 − (1/N) ∑_{n=1}^{N} s_n^−/x_n) / (1 + (1/M) ∑_{m=1}^{M} s_m^+/v_m)
subject to
∑_{k=1}^{K} θ_k z_k v_{km} = v_m + s_m^+, m = 1, …, M;
∑_{k=1}^{K} θ_k z_k w_{kj} = w_j, j = 1, …, J;
∑_{k=1}^{K} z_k x_{kn} = x_n − s_n^−, n = 1, …, N;
∑_{k=1}^{K} z_k = 1; z_k ≥ 0; 0 ≤ θ_k ≤ 1; s_m^+, s_n^− ≥ 0,  (5)

where k, m, j, n, v_km, w_kj, x_kn, v_m, w_j, x_n, and θ_k have the same explanation as in Equation (4), s_m^+ is the slack of desirable output m, and s_n^− is the slack of input n.

Unlike conventional models using the SBM approach, Equation (5) has weak disposability between desirable and undesirable outputs because no slack appears in the equation of undesirable outputs and abatement factors are used. To correct the error arising from a single abatement factor, the abatement factor is represented by multiple abatement factors in Equation (5). The model then has the practical meaning that airports exert different abatement efforts under the VRS assumption when the level of production activity is reduced. In this model, when a DMU finds a benchmark point, the search is based on its undesirable outputs. The difference between the conventional approach and the proposed model regarding undesirable outputs is the directionality used to find a benchmark point. In terms of outputs, the conventional approach has independent directionality, which allows desirable outputs to increase and undesirable outputs to decrease separately. The proposed model finds frontier points that have the same level of undesirable outputs with more desirable outputs. The latter is the more practical perspective for finding a DMU's benchmark point, because it is almost impossible to increase production with less pollution without technological innovation, and in an evaluation analysis we usually assume that no such innovation occurs.

Data and Analyses

The proposed model is used to measure the efficiency of 13 Korean airports for the 2004-2013 period (for consistency of the dataset, Muan and Yangyang airports, which opened recently or temporarily stopped operation, are excluded; the efficiency of Korean airports, along with other Northeast Asian airports, has also been studied in, e.g., Ha et al. (2010) and Ha et al. (2013) [29,30], but these studies did not address the issue of undesirable outputs). The details of the airports are described in Figure 5 and Table 1. Seven variables are included in the evaluation. The length of the runway (in meters), the number of employees (in persons), and the terminal area (in square meters) are considered input factors; the number of passengers (in persons), the amount of cargo (in tons), and the number of flights are considered desirable outputs; and the level of CO2 emissions (in tons of CO2) is included as an undesirable output. To construct "smoothed-surface" frontiers in each analysis, a measurement requires 14 observations for the seven variables (Yu, 2004 [19]), but there are only 13 observations in each year; therefore, the validity of the efficiency scores cannot be guaranteed. To address this problem, the three-year-window DEA method (recommended by Nghiem and Coelli (2002) [31]) is adopted. This method provides efficiency trends and stability over time as well as supplementing the number of observations. In this analysis, the first window (or analysis) covers the 2004-2006 period and the last window covers the 2011-2013 period; therefore, a total of 39 observations are included in each analysis.
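As a computational aside before the empirical comparison: the constraints in (4) and (5) contain the bilinear products θ_k z_k, and Kuosmanen's substitution λ_k = θ_k z_k and μ_k = (1 − θ_k) z_k turns them into a linear program, since 0 ≤ θ_k ≤ 1 is equivalent to λ_k, μ_k ≥ 0. The sketch below is illustrative only: the data are toy numbers rather than the airport dataset, and a simple additive (slack-maximizing) objective stands in for the full SBM ratio in (5), which would additionally require a Charnes-Cooper transformation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data (not the paper's airport dataset)
X = np.array([[2.0], [3.0], [4.0], [5.0]])   # inputs, shape (K, N)
V = np.array([[1.0], [2.5], [3.0], [3.2]])   # desirable outputs, (K, M)
W = np.array([[0.5], [1.0], [2.0], [2.8]])   # undesirable outputs, (K, J)
K, N = X.shape
M, J = V.shape[1], W.shape[1]

def proposed_slacks(k0):
    """Slack-maximizing evaluation of DMU k0 under the weakly disposable
    VRS technology with multiple abatement factors, linearized via
    lam_k = theta_k * z_k and mu_k = (1 - theta_k) * z_k."""
    # Decision variables: [lam (K), mu (K), s_plus (M), s_minus (N)]
    n_var = 2 * K + M + N
    c = np.zeros(n_var)
    c[2 * K:] = -1.0                                 # maximize total slack
    A_eq = np.zeros((M + J + 1, n_var))
    b_eq = np.concatenate([V[k0], W[k0], [1.0]])
    A_eq[:M, :K] = V.T                               # sum lam*v - s_plus = v0
    A_eq[:M, 2 * K:2 * K + M] = -np.eye(M)
    A_eq[M:M + J, :K] = W.T                          # sum lam*w = w0 (no slack)
    A_eq[-1, :2 * K] = 1.0                           # VRS: sum(lam + mu) = 1
    A_ub = np.zeros((N, n_var))                      # sum (lam+mu)*x + s_minus <= x0
    A_ub[:, :K] = X.T
    A_ub[:, K:2 * K] = X.T
    A_ub[:, 2 * K + M:] = np.eye(N)
    res = linprog(c, A_ub=A_ub, b_ub=X[k0], A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n_var)
    return res.x[2 * K:]                             # all-zero slacks => efficient

print(proposed_slacks(1))
```

The key lines are the equality row for w, which has no slack column, and the λ/μ split, which lets each DMU carry its own implicit abatement factor θ_k = λ_k/(λ_k + μ_k).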
Comparisons between the Models

As we mentioned, there are two important issues that need to be considered in a specific model. If all perspectives or technologies were compared at once using only the results, the comparison would lead to a pointless conclusion. In order to draw a meaningful conclusion from the differences, perspectives that share the same conditions should be compared; in this way, the perspectives and comparisons are arranged in terms of each issue. An empirical comparison is presented according to the two issues discussed previously.

A Comparison between the Lozano-Gutiérrez Model and the Shephard Model

Similar to the theoretical comparison in Section 3.1, we implement an empirical comparison of the weak disposability issue using the case of Korean airports. Through this empirical comparison, the previous debate is also extended.
The model using the Lozano and Gutiérrez technology, that is, the Lozano and Gutiérrez model, considers undesirable outputs as inputs rather than as a variable subject to weak disposability. The difference between the equality and inequality conditions in the equation of undesirable outputs can be shown by comparing the results of the Lozano and Gutiérrez model and the Shephard model. The model using the Shephard technology, that is, the Shephard model, is shown as Equation (6); the Lozano and Gutiérrez model is shown as Equation (7) (unlike Equation (6), Lozano and Gutiérrez (2011) include no x (input) in the objective function [4]; however, to compare the technologies, this paper includes x in the objective function of Equation (7), as in Equation (6)). Both models have a single abatement factor; the difference between them is the slack of w, that is, the inequality in the equation of undesirable outputs:

Minimize ρ, where k, m, j, n, v_km, w_kj, x_kn, v_m, w_j, and x_n have the same explanation as in Equation (4); s_m^+ and s_n^− have the same explanation as in Equation (5); and s_j^− is the slack of the undesirable output.

These differences appear in the production possibility area. There is a proportional line created by the abatement factor θ that lies in the front areas of the Shephard technology and Lozano and Gutiérrez technology graphs. The front areas have the same shape because they share the same abatement factor. The rear parts have different areas because of the differences between the equations of w. Spatially, the Lozano and Gutiérrez model in Equation (7) has a larger area because of the inequality, which implies that production can generate undesirable outputs without bound, as with inputs. The argument of Hailu and Veeman that weak disposability and the equality greatly inflate the efficiency score is considered in the same vein. However, the gap in efficiency between the Lozano and Gutiérrez model and the Shephard model is generally insignificant. Figures 6 and 7 show the results of the efficiency measurement based on the two models. Figure 6 shows similar trends in efficiency, which implies that the two models show only minor differences in efficiency scores over time. Figure 7 shows a chart of average efficiency by airport. Airports show slight differences between the models, but there is no pattern to the differences. For example, Yeosu Airport (RSU) and Incheon International Airport (ICN) have higher efficiency scores under the Shephard model than under the Lozano and Gutiérrez model, but Gimpo International Airport (GMP) and Sacheon Airport (HIN) show the opposite result. Some airports, such as Ulsan Airport (USN), show the same efficiency score in both models.
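For a fixed abatement factor, the only operational difference between (6) and (7) is the restriction on w: an equality in the Shephard model versus an inequality (equivalently, a slack) in the Lozano and Gutiérrez model. The toy check below isolates exactly that difference; the data and the function name are hypothetical, and in the full models θ is itself a decision variable, so this is only a constraint-level illustration.

```python
import numpy as np

W = np.array([[0.5], [1.0], [2.0], [2.8]])   # toy undesirable outputs, (K, J)

def w_restriction_holds(model, z, theta, w0):
    """Check the undesirable-output restriction for a candidate (z, theta).

    'shephard' (Eq. 6): theta * sum_k z_k w_kj  = w_j0  (w cannot fall alone)
    'lozano'   (Eq. 7): theta * sum_k z_k w_kj <= w_j0  (w may fall independently)
    """
    lhs = theta * (z @ W)
    if model == 'shephard':
        return bool(np.allclose(lhs, w0))
    return bool(np.all(lhs <= w0 + 1e-9))

z = np.array([0.0, 1.0, 0.0, 0.0])        # reference set = DMU 2 alone (w = 1.0)
w0 = np.array([2.0])                       # evaluated DMU emits w0 = 2.0
print(w_restriction_holds('shephard', z, 1.0, w0))  # False: cleaner point excluded
print(w_restriction_holds('lozano', z, 1.0, w0))    # True: cleaner point admitted
```

This admissibility of strictly cleaner reference points under (7) is what lets the slack of w move independently, and it is why the equality is needed to encode weak disposability.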
The Lozano and Gutiérrez model has the slack of undesirable outputs (CO2 emissions), and this slack therefore influences the efficiency score. This is consistent with the perspective of Hailu and Veeman, who assert that weak disposability, which imposes an equality in the equation of w, produces an undetermined effect of undesirable outputs on efficiency. If this were the case, then there would have to be a significant difference between the technologies. However, a comparison of the results shows no significant difference between the Lozano and Gutiérrez model and the Shephard model. This confirms that undesirable outputs exert their considerable influence on efficiency by shaping the projection direction of an observation toward a benchmark point in the reference set under weak disposability, even though they are not included in the objective function.

Table 2 presents the difference between the models by comparing the number of times each airport serves as a benchmark in the two models. ICN, especially, presents a significant difference between the models. This difference is caused by the different PPFs and the different directions used to find the benchmark point. The perspective of an undesirable output as an input is erroneous and limited both theoretically and practically. Based on this perspective, Lozano and Gutiérrez attempt to reflect weak disposability in the model, but the Lozano and Gutiérrez model retains the same limitations because of the slack of undesirable outputs, which can decrease independently. In addition, the limitations of weak disposability introduced by Hailu and Veeman are not borne out by the empirical results. Therefore, it is desirable to take weak disposability into account in an undesirable-output model through the equality.
A Comparison of the Shephard Model and the Proposed Model

Based on the discussion of the abatement factor issue in Section 3.2, the Shephard model and the proposed model are compared empirically. Figure 8 compares the proposed model to the Shephard model, taking weak disposability into account in both, and shows the efficiency trends. The proposed model has lower scores than the Shephard model for consecutive years. This verifies that multiple abatement factors include the front part of the area of the PPF, which a single abatement factor does not include, and that this additional area influences the efficiency score. Figure 9 compares efficiency scores by airport. In general, the proposed model yields lower efficiency scores; in particular, Gunsan Airport (KUV) shows a large difference in the efficiency score between the models. Table 3 shows the differences between the models: if the difference is less than 0, then the proposed model has a lower efficiency score than the Shephard model. Gunsan Airport (KUV) and Sacheon Airport (HIN) change by approximately 105% and 36%, respectively. These changes are caused by the multiple abatement factors, which again verifies that multiple abatement factors allow a technology to include an area that a technology with a single abatement factor does not.
The SBM approach provides the amount of slack, which indicates the capacity to improve production, and therefore a comparison of the models' slacks is meaningful. Table 4 shows the average change in this slack. The change is calculated by dividing the difference in the slack score between the models (the gap) by the mean value of the observed data of each airport, where the gap is the absolute value of the difference in average slack between the Shephard model and the proposed model. The change in cargo slack is about 1.4 times the observed data for Gunsan Airport (KUV), and about three times the observed data for Sacheon Airport (HIN). Ulsan Airport (USN), Gunsan Airport (KUV), and Sacheon Airport (HIN) also show large changes. These are caused by changes in the benchmark point, because the frontier is changed by the multiple abatement factors. Based on a calculation of the slack values between the models, the cargo slack is approximately 3.6 times larger in the proposed model, the passenger slack approximately 4.2 times larger, and the air movement slack approximately 4.8 times larger for Gunsan Airport (KUV), while the passenger slack is approximately 2.1 times larger for Ulsan Airport (USN).

Conclusions

This paper has considered undesirable outputs in evaluating efficiency by comparing previous undesirable-output models, namely the Lozano and Gutiérrez model and the Shephard model, in the context of the aviation industry, and has proposed a new model. The Lozano and Gutiérrez model is based on the Hailu and Veeman model, which regards undesirable outputs as inputs, and considers weak disposability in the model; Ha, H.K. (2011b) takes the SBM approach to present a similar perspective. This method allows undesirable outputs to decrease independently through the slack of undesirable outputs, although the model has an abatement factor in the PPF. A single abatement factor makes variables contract in the same proportion but cannot make them move in the same direction so as to increase or decrease together. Therefore, the perspective of the Lozano and Gutiérrez model is limited in reflecting weak disposability. The Shephard technology assumes weak disposability as a property of undesirable outputs and provides a PPF using an equality in the equation of undesirable outputs (w) and a single abatement factor (θ). Yu (2004), Pathomsiri et al. (2008), and Fan et al.
(2014) follow this perspective. However, a single abatement factor causes practical and technical limitations. First, there is a practical problem: a single abatement factor implies that all airports make the same abatement effort whenever production activity is reduced. This is not practical, because each airport has its own manner of production, so abatement efforts differ across airports. Second, a single abatement factor causes a technical problem in that the convexity axiom is violated: the Shephard technology neglects an area, or subset, that should be included in the production possibility set by convexity [18]. To address these limitations and reflect weak disposability correctly, this paper has proposed a model using an equality in the equation of undesirable outputs (w) and multiple abatement factors, based on the SBM approach. The results provide no support for Hailu and Veeman's criticism and suggest that the equality reflects weak disposability, based on a comparison of results between the Lozano and Gutiérrez model and the Shephard model. The results based on a comparison of the airports show that multiple abatement factors change both the efficiency scores and the slacks considerably.

Figure 1. A comparison of the production possibility areas of the Hailu and Veeman and Shephard technologies.
Figure 2. The direction of the DDF projection from Färe and Grosskopf (2003).
Figure 3. The Shephard technology induced by activities B and C. Source: Kuosmanen and Podinovski (2009).
Figure 4. The Kuosmanen technology induced by activities B and C. Source: Kuosmanen and Podinovski (2009).
Figure 5. Map of Korean Airports.
Figure 6. A comparison of trends in model efficiency by year.
Figure 7. A comparison of model efficiency by airport.
Figure 8. A comparison of efficiency trends.
Figure 9. A comparison of efficiency scores by airport.
Table 2. The number of times each airport is referenced as a benchmark by the DMUs.
Table 3. Gaps in efficiency scores between models. Notes: Gap: average efficiency of the proposed model − average efficiency of the Shephard model (by airport); Change rate: gap / average efficiency of the proposed model (by airport).
Table 4. Average change ratios in slack between models (%). Notes: Change rate: gap / mean value of the observed data of an airport (by airport), where gap = |average slack of the proposed model − average slack of the Shephard model|.
11,247.6
2019-04-11T00:00:00.000
[ "Environmental Science", "Economics", "Engineering" ]